UNIT–I: Introduction to Operating Systems

1. Definition of Operating System


 Formal Definition:
An Operating System (OS) is system software that manages hardware,
controls resources, and provides services to application programs and users.
 Role:
o It is a bridge between the user and the computer hardware.
o Without an OS, using a computer would be difficult (every application
would need to know how to directly control hardware).
 Example: Windows, Linux, macOS, Android.

2. User View vs System View


User View

 Focus: Convenience & usability.


 Users expect the system to be fast, reliable, and easy to use.
 Different types of systems provide different user views:
o PCs/Laptops: Emphasis on user-friendly interfaces, multitasking, file
management.
o Smartphones: Touch-based, app-driven interface.
o Servers: Focus on performance and resource sharing rather than
convenience.
 OS hides hardware details (e.g., you save a file → OS manages disk blocks
internally).

System View

 Focus: Resource management & efficiency.


 OS is seen as a resource allocator:
o CPU time, memory space, file storage, I/O devices.
 OS ensures:
o Fair distribution of resources among users and processes.
o Proper scheduling to maximize CPU utilization.
 Example: In a multiuser system, the OS prevents one process from consuming
all CPU resources.
3. Operating System Structure
Different ways of organizing OS design:

1. Monolithic Systems

o Entire OS is a single large kernel that runs in supervisor mode.


o All functionalities (I/O, process scheduling, file system, memory
management) are part of one program.
o Advantages: Fast execution, less overhead.
o Disadvantages: Hard to modify/debug, less secure (a bug may crash the
whole system).
o Example: Early UNIX.

2. Layered Approach

o OS divided into layers (levels), each built on top of lower levels.


o Lowest layer = hardware; highest layer = user interface.
o Advantages: Simpler to design/debug (modular).
o Disadvantages: Overhead in communication between layers.
o Example: THE Operating System (classic layered design).

3. Microkernel

o Kernel contains only essential services:


 Communication between processes, basic scheduling, minimal
memory management.
o Other services (file systems, device drivers, networking) run in user
space.
o Advantages: More reliable, easier to extend, more secure.
o Disadvantages: More context switching, performance overhead.
o Example: QNX, Mach (basis for macOS).

4. Modules (Modern OS)

o Hybrid approach (combines monolithic + microkernel).


o Core kernel + dynamically loadable modules (like plug-ins).
o Provides flexibility and efficiency.
o Example: Linux, Solaris.

4. Operating System Services


The OS provides services to both users and programs:
1. Program Execution

o OS loads programs into memory, executes them, and provides mechanisms for
termination (normal or abnormal).

2. I/O Operations

o Since users cannot directly control I/O devices, OS provides a uniform
interface for I/O operations (e.g., read, write).

3. File System Manipulation

o Programs need to create, delete, read, write, and manage files.


o OS handles storage details (directories, access permissions).

4. Communication

o Processes may need to exchange information:


 Message passing (e.g., sending data through pipes or sockets; see the sketch after this list).
 Shared memory (a common memory space accessible by multiple processes).

5. Error Detection

o OS continuously monitors for errors (hardware failures, memory errors,
illegal instructions).
o Takes corrective action (e.g., terminate faulty process).

6. Resource Allocation

o In multi-user or multiprogramming environments, resources must be allocated
fairly and efficiently.

7. Accounting

o Keeps track of system usage (e.g., CPU time, memory used) for statistics,
billing, and optimization.

8. Protection & Security

o Ensures processes cannot interfere with each other.


o Provides authentication (passwords, encryption) and authorization (file
permissions).
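
The communication service (item 4 above) can be illustrated with a small message-passing
sketch. This is only a minimal example, assuming Python's multiprocessing module as the
IPC mechanism; the process names and message text are made up:

from multiprocessing import Process, Pipe

# Minimal message-passing sketch: a child process sends one message to its
# parent through a pipe, i.e. OS-mediated inter-process communication.
def child(conn):
    conn.send("hello from the child process")   # write into the pipe
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()               # two connected endpoints
    p = Process(target=child, args=(child_end,))
    p.start()
    print(parent_end.recv())                     # parent blocks until data arrives
    p.join()
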
Summary
 OS is both a user convenience tool and a resource manager.
 Structures vary from monolithic to modular, depending on system needs.
 Provides a set of core services essential for reliable and efficient computing.

Process

1. Process Concept
 Definition:
A process is a program in execution.

o A program is a passive entity (set of instructions stored on disk).


o A process is an active entity (program + execution state + resources).

 Key Components of a Process:

1. Code (Text Section): Program instructions.


2. Data Section: Global variables.
3. Heap: Dynamically allocated memory (at runtime).
4. Stack: Contains function calls, local variables, return addresses.
5. Program Counter (PC): Address of the next instruction to execute.
6. Registers: Store temporary values for execution.

 Types of Processes:

o System Processes: Part of OS, provide core functions (e.g., daemons in
UNIX).
o User Processes: Programs initiated by users (e.g., running MS Word).

2. Process Control Block (PCB)


 Definition:
A data structure maintained by the OS for each process, containing all information
about that process.
 Purpose:
o Acts as the “identity card” of a process.
o Helps OS to suspend and resume processes.

 Contents of PCB:

1. Process State: Current state (new, ready, running, waiting, terminated).


2. Program Counter: Next instruction address.
3. CPU Registers: General-purpose, accumulator, index, stack pointers, etc.
4. CPU Scheduling Information: Priority, scheduling queues, etc.
5. Memory Management Information: Base/limit registers, page tables,
segment tables.
6. Accounting Information: CPU time used, clock time, job number.
7. I/O Status Information: List of I/O devices allocated, open files, pending
I/O requests.

 Diagram of PCB (for visualization in notes):

+--------------------------+
| Process State |
+--------------------------+
| Program Counter |
+--------------------------+
| CPU Registers |
+--------------------------+
| CPU Scheduling Info |
+--------------------------+
| Memory Management Info |
+--------------------------+
| Accounting Info |
+--------------------------+
| I/O Status Info |
+--------------------------+
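
As a rough sketch (not any real kernel's layout), the PCB fields above could be modelled
like this in Python; the field names are illustrative assumptions:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PCB:                                     # illustrative only; real kernels use C structs
    pid: int                                   # process identifier
    state: str = "new"                         # new / ready / running / waiting / terminated
    program_counter: int = 0                   # address of the next instruction
    registers: Dict[str, int] = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                          # CPU scheduling information
    base_register: int = 0                     # memory-management information
    limit_register: int = 0
    cpu_time_used: float = 0.0                 # accounting information
    open_files: List[str] = field(default_factory=list)       # I/O status information

For instance, the OS would create a PCB when a process is admitted and update its state
field as the process moves between states.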

3. Context Switching
 Definition:
The mechanism of saving the state of a running process (into its PCB) and loading the
state of another process (from its PCB).

o Enables multitasking and CPU sharing.

 Steps in Context Switch:

1. Save the state (registers, program counter, etc.) of the current process into its
PCB.
2. Load the saved state of the next process from its PCB.
3. Transfer control to the new process.
 When does context switching happen?

o Process scheduling (e.g., the time slice expires in Round Robin).


o Higher-priority process arrives.
o I/O request or interrupt occurs.

 Overhead of Context Switching:

o No useful work is done during a context switch (only saving and restoring state).


o More frequent switching → higher overhead → performance drops.
o Efficient scheduling minimizes context switch overhead.
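
A toy illustration of the three steps above; the dictionary-based "CPU" and PCBs here are
stand-ins for real registers and kernel structures:

# Toy context switch: the "CPU" is just a dict of register values.
def context_switch(old_pcb, new_pcb, cpu):
    old_pcb["saved_cpu"] = dict(cpu)    # 1. save the running process's state into its PCB
    old_pcb["state"] = "ready"
    cpu.clear()
    cpu.update(new_pcb["saved_cpu"])    # 2. load the next process's saved state
    new_pcb["state"] = "running"        # 3. control passes to the new process

p1 = {"saved_cpu": {"pc": 100, "acc": 7}, "state": "running"}
p2 = {"saved_cpu": {"pc": 200, "acc": 3}, "state": "ready"}
cpu = dict(p1["saved_cpu"])             # P1 is currently on the CPU
context_switch(p1, p2, cpu)
print(cpu)                              # {'pc': 200, 'acc': 3} -> now running P2

Note that the switch itself performs no useful computation, which is exactly the overhead
described above.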

Summary
 A process = program in execution, with code, data, stack, heap.
 The PCB stores all process-related info (state, registers, memory, I/O).
 Context switching enables multitasking but adds overhead.

UNIT–II: CPU Scheduling & Device Management

1. CPU Scheduling
1.1 Scheduling Criteria

The performance of a CPU scheduling algorithm is judged by these criteria:

1. CPU Utilization – Keep CPU as busy as possible (ideal = 100%).


2. Throughput – Number of processes completed per unit time.
3. Turnaround Time – Time taken from submission to completion of a process.
o Formula: Turnaround Time = Completion Time – Arrival Time
4. Waiting Time – Time a process spends waiting in the ready queue.
o Formula: Waiting Time = Turnaround Time – Burst Time
5. Response Time – Time from request submission to first response.
o Important in interactive systems.
6. Fairness – No process should starve indefinitely.
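
For example (hypothetical numbers): a process that arrives at time 0 with a CPU burst of
5 units and completes at time 12 has Turnaround Time = 12 – 0 = 12 units and Waiting
Time = 12 – 5 = 7 units.
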
1.2 Scheduling Algorithms
a) First-Come, First-Served (FCFS)

 Method: Processes are scheduled in the order they arrive in the ready queue.
 Type: Non-preemptive.
 Characteristics:
o Simple to implement (queue).
o Problem: Long processes delay short ones (convoy effect).
 Example:
Processes with burst times: P1=24, P2=3, P3=3.
o Order: P1 → P2 → P3.
o Avg waiting time = (0 + 24 + 27)/3 = 17 units.
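
A small sketch reproducing the FCFS figures above, assuming all three processes arrive at
time 0:

# FCFS with all arrivals at time 0: each process waits for the total burst
# time of everything that arrived before it.
bursts = [24, 3, 3]            # P1, P2, P3 in arrival order
waiting, elapsed = [], 0
for b in bursts:
    waiting.append(elapsed)    # time spent in the ready queue
    elapsed += b
print(waiting)                          # [0, 24, 27]
print(sum(waiting) / len(waiting))      # 17.0 units, as above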

b) Shortest Job First (SJF)

 Method: Select process with smallest CPU burst time.


 Type: Can be preemptive (Shortest Remaining Time First – SRTF) or non-preemptive.
 Characteristics:
o Optimal (minimum average waiting time).
o Problem: Requires knowledge of future burst times → difficult to predict.
o Risk of starvation for long processes.
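
A minimal non-preemptive SJF sketch, under the usual textbook assumption that all
processes arrive at time 0 and their burst times are known in advance (the burst values
are hypothetical):

# Non-preemptive SJF: run jobs in order of increasing burst time, then
# compute waiting times exactly as in FCFS.
bursts = {"P1": 24, "P2": 3, "P3": 3}
order = sorted(bursts, key=bursts.get)     # shortest job first
waiting, elapsed = {}, 0
for name in order:
    waiting[name] = elapsed
    elapsed += bursts[name]
print(order)      # ['P2', 'P3', 'P1']
print(waiting)    # {'P2': 0, 'P3': 3, 'P1': 6} -> average 3 units (vs 17 under FCFS)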

c) Round Robin (RR)

 Method: Each process gets a small unit of CPU time (time quantum). After
quantum expires, the process goes to end of ready queue.
 Type: Preemptive.
 Characteristics:
o Good for time-sharing systems.
o If quantum is too large → behaves like FCFS.
o If quantum is too small → high overhead (too many context switches).
 Example:
Processes P1=24, P2=3, P3=3; quantum=4.
o Processes get CPU in cyclic order.
o Avg waiting time is reduced compared to FCFS.
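
A compact Round Robin sketch for the same workload, assuming quantum = 4 and all
arrivals at time 0:

from collections import deque

# Round Robin: each process runs for at most one quantum, then rejoins the
# tail of the ready queue if it still has work left.
quantum = 4
bursts = {"P1": 24, "P2": 3, "P3": 3}
remaining, completion, clock = dict(bursts), {}, 0
ready = deque(bursts)                   # ready queue: P1, P2, P3
while ready:
    name = ready.popleft()
    run = min(quantum, remaining[name])
    clock += run
    remaining[name] -= run
    if remaining[name] == 0:
        completion[name] = clock        # process finishes here
    else:
        ready.append(name)              # preempted: back to the end of the queue
waiting = {n: completion[n] - bursts[n] for n in bursts}
print(waiting)    # {'P1': 6, 'P2': 4, 'P3': 7} -> average about 5.7 units
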
2. Device Management: Disk Scheduling
2.1 Introduction

 Disk I/O time depends on seek time (head movement), rotational latency, and
transfer time.
 Goal: Minimize seek time by choosing an optimal order of servicing requests.

2.2 Disk Scheduling Algorithms


a) First-Come, First-Served (FCFS)

 Requests are processed in the order they arrive.


 Advantage: Fair, simple.
 Disadvantage: Poor average performance.
 Example: Requests = 98, 183, 37, 122, 14, 124, 65, 67; head at 53.
o Order: 53→98→183→37→122→14→124→65→67.
o Head movement = 640 cylinders.
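
A quick check of the 640-cylinder figure:

# FCFS disk scheduling: total head movement is the sum of the jumps between
# consecutive requests, starting from the initial head position.
head = 53
requests = [98, 183, 37, 122, 14, 124, 65, 67]
movement, position = 0, head
for r in requests:
    movement += abs(r - position)
    position = r
print(movement)   # 640 cylinders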

b) Shortest Seek Time First (SSTF)

 Choose request closest to current head position.


 Advantage: Better performance than FCFS.
 Disadvantage: May cause starvation for far requests.
 Example (same as above, head=53):
o Nearest request is 65, then 67, then 37…
o Head movement reduced to 236 cylinders.
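
A greedy SSTF sketch for the same request queue:

# SSTF: repeatedly service the pending request closest to the current head.
head = 53
pending = [98, 183, 37, 122, 14, 124, 65, 67]
movement, position = 0, head
while pending:
    nearest = min(pending, key=lambda r: abs(r - position))
    movement += abs(nearest - position)
    position = nearest
    pending.remove(nearest)
print(movement)   # 236 cylinders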

c) SCAN (Elevator Algorithm)

 Disk arm moves in one direction, servicing requests until it reaches the end, then
reverses.
 Analogy: Like an elevator – services requests in order going up, then going
down.
 Advantage: More uniform wait time, avoids starvation.
 Example: Same request queue, head=53 moving towards the lower end.
o Service order: 37, 14, then reverse and service 65, 67, 98, 122, 124, 183.
o Head movement = (53 – 14) + (183 – 14) = 208 cylinders (assuming the arm
reverses once the last request in its current direction has been served; if it
travels all the way to cylinder 0 before reversing, the movement is 53 + 183 =
236 cylinders).
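
A sketch of the order used above, assuming the arm reverses after the last request in its
current direction (the LOOK-style simplification):

# SCAN towards the lower end: service requests below the head in descending
# order, then reverse and service the remaining requests in ascending order.
head = 53
requests = [98, 183, 37, 122, 14, 124, 65, 67]
down = sorted((r for r in requests if r < head), reverse=True)   # 37, 14
up = sorted(r for r in requests if r >= head)                    # 65 ... 183
movement, position = 0, head
for r in down + up:
    movement += abs(r - position)
    position = r
print(movement)   # 208 cylinders
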
Summary
 CPU Scheduling: Decides which process runs on CPU.
o Algorithms: FCFS (simple but slow), SJF (optimal but hard to predict), RR
(fair, interactive).
 Disk Scheduling: Decides order of servicing disk I/O requests.
o Algorithms: FCFS (simple), SSTF (efficient but unfair), SCAN (balanced,
like elevator).
