
UNIT 1

OPERATING SYSTEM - OVERVIEW

An Operating System (OS) is an interface between a computer user and computer hardware. An operating
system is software that performs all the basic tasks like file management, memory management,
process management, handling input and output, and controlling peripheral devices such as disk drives and
printers.
Some popular Operating Systems include Linux Operating System, Windows Operating System, VMS,
OS/400, AIX, z/OS, etc.

Definition
An operating system is a program that acts as an interface between the user and the computer hardware
and controls the execution of all kinds of programs.

Computer System Components

1. Hardware – provides basic computing resources (CPU, memory, I/O devices).


2. Operating system – controls and coordinates the use of the hardware among the various application
programs for the various users.
3. Application programs – define the ways in which the system resources are used to solve the computing
problems of the users (compilers, database systems, video games, business programs).
4. Users (people, machines, other computers).
System View of an Operating System
The operating system is a resource allocator:
- Manages all resources.
- Decides between conflicting requests for efficient and fair resource use.
The operating system is a control program:
- Controls execution of programs to prevent errors and improper use of the computer.

Operating System Views


- Resource allocator: allocates resources (software and hardware) of the computer system and manages
them efficiently.
- Control program: controls execution of user programs and operation of I/O devices.
- Kernel: the one program that runs at all times (everything else is an application with respect to the kernel).

Main Functions of Operating System

Processor Management

In a multiprogramming environment, the OS decides which process gets the processor, when, and for how
much time. This function is called process scheduling. An Operating System does the following activities for
processor management −

 Keeps track of the processor and the status of processes. The program responsible for this task is known
as the traffic controller.
 Allocates the processor (CPU) to a process.
 De-allocates processor when a process is no longer required.
Memory Management
Memory management refers to management of Primary Memory or Main Memory. Main memory is a
large array of words or bytes where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a program to be
executed, it must be in main memory. An Operating System does the following activities for memory
management −
 Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts are not
in use.
 In multiprogramming, the OS decides which process will get memory when and how much.
 Allocates the memory when a process requests it to do so.
 De-allocates the memory when a process no longer needs it or has been terminated.
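The allocation and de-allocation activities above can be sketched as a toy first-fit allocator over a fixed pool of primary memory. This is only an illustration of the bookkeeping involved; the class and method names are invented for this sketch and do not correspond to any real OS interface.

```python
# Toy sketch of first-fit allocation over a fixed pool of primary memory.
# All names (MemoryManager, alloc, free_mem) are illustrative only.

class MemoryManager:
    def __init__(self, size):
        self.free = [(0, size)]   # free list of (start, length) holes
        self.used = {}            # pid -> (start, length)

    def alloc(self, pid, length):
        for i, (start, hole) in enumerate(self.free):
            if hole >= length:                       # first hole that fits
                self.used[pid] = (start, length)
                rest = (start + length, hole - length)
                self.free[i:i + 1] = [rest] if rest[1] else []
                return start
        return None                                  # no hole large enough

    def free_mem(self, pid):
        start, length = self.used.pop(pid)
        self.free.append((start, length))            # (no coalescing here)

mm = MemoryManager(100)
a = mm.alloc("P1", 30)    # placed at address 0
b = mm.alloc("P2", 50)    # placed at address 30
mm.free_mem("P1")         # hole (0, 30) returned to the free list
c = mm.alloc("P3", 20)
```

A real memory manager would also coalesce adjacent holes and track ownership per word or page, but the keep-track / allocate / de-allocate cycle is the same.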
Device Management (I/O device management)
An Operating System manages device communication via their respective drivers. It does the following
activities for device management −
 Keeps track of all devices. The program responsible for this task is known as the I/O controller.
 Decides which process gets the device when and for how much time.
 Allocates the device in an efficient way.
 De-allocates devices.

File Management
A file system is normally organized into directories for easy navigation and usage. These directories may
contain files and other directories.
An Operating System does the following activities for file management

 Keeps track of information, location, uses, status etc. The collective facilities are often
known as the file system.
 Decides who gets the resources.
 Allocates the resources.
 De-allocates the resources.

Other Important Activities


Following are some of the important activities that an Operating System performs −
 Security − By means of password and similar other techniques, it prevents unauthorized
access to programs and data.

 Control over system performance − Recording delays between request for a service and
response from the system.

 Job accounting − Keeping track of time and resources used by various jobs and users.

 Error detecting aids − Production of dumps, traces, error messages, and other debugging
and error detecting aids.

 Coordination between other software and users − Coordination and assignment of
compilers, interpreters, assemblers and other software to the various users of the
computer system.
Goals of Operating System
1. Convenience: An OS makes a computer more convenient to use.
2. Efficiency: An OS allows the computer system resources to be used in an efficient manner.
3. Ability to Evolve: An OS should be constructed in such a way as to permit the effective development,
testing and introduction of new system functions without at the same time interfering with service.

Operating System - Services


An Operating System provides services to both the users and to the programs.
 It provides programs an environment to execute.
 It provides users the services to execute the programs in a convenient manner.

Following are a few common services provided by an operating system −


 Program execution
 I/O operations
 File System manipulation
 Communication
 Error Detection
 Resource Allocation
 Protection

Program execution
Operating systems handle many kinds of activities from user programs to system programs like printer
spooler, name servers, file server, etc. Each of these activities is encapsulated as a process.
A process includes the complete execution context (code to execute, data to manipulate, registers, OS
resources in use). Following are the major activities of an operating system with respect to program
management –

 Loads a program into memory.
 Executes the program.
 Handles program's execution.
 Provides a mechanism for process synchronization.
 Provides a mechanism for process communication.
 Provides a mechanism for deadlock handling.

I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the
peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.
 I/O operation means read or write operation with any file or any specific I/O device.
 Operating system provides the access to the required I/O device when required.

File system manipulation


A file represents a collection of related information. Computers can store files on the disk (secondary
storage), for long-term storage purpose. Examples of storage media include magnetic tape, magnetic disk
and optical disk drives like CD, DVD. Each of these media has its own properties like speed, capacity, data
transfer rate and data access methods.
A file system is normally organized into directories for easy navigation and usage. These directories may
contain files and other directories. Following are the major activities of an operating system with respect to
file management −
 Program needs to read a file or write a file.
 The operating system gives the permission to the program for operation on file.
 Permission varies from read-only, read-write, denied and so on.
 Operating System provides an interface to the user to create/delete files.
 Operating System provides an interface to the user to create/delete directories.
 Operating System provides an interface to create the backup of file system.

Error handling
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or in the memory
hardware. Following are the major activities of an operating system with respect to error handling −
 The OS constantly checks for possible errors.
 The OS takes an appropriate action to ensure correct and consistent computing.

Resource Management
In a multi-user or multi-tasking environment, resources such as main memory, CPU cycles and file
storage are to be allocated to each user or job. Following are the major activities of an operating system
with respect to resource management −
 The OS manages all kinds of resources using schedulers.
 CPU scheduling algorithms are used for better utilization of CPU.

Protection
In a computer system with multiple users and concurrent execution of multiple processes, the
various processes must be protected from each other's activities.
Protection refers to a mechanism or a way to control the access of programs, processes, or users to the
resources defined by a computer system. Following are the major activities of an operating system with
respect to protection −
 The OS ensures that all access to system resources is controlled.
 The OS ensures that external I/O devices are protected from invalid access attempts.
 The OS provides authentication features for each user by means of passwords.

Operating System - Properties


Batch processing
Batch processing is a technique in which an Operating System collects the programs and data together in a
batch before processing starts. An operating system does the following activities related to batch
processing −
 The OS defines a job which has a predefined sequence of commands, programs and data as a
single unit.
 The OS keeps a number of jobs in memory and executes them without any manual
intervention.
 Jobs are processed in the order of submission, i.e., in first come first served fashion.
When a job completes its execution, its memory is released and the output for the job gets copied into an
output spool for later printing or processing.

Advantages
 Batch processing moves much of the work of the operator to the computer.
 Increased performance, as a new job gets started as soon as the previous job is finished,
without any manual intervention.
Disadvantages
 Difficult to debug program.
 A job could enter an infinite loop.
 Due to lack of protection scheme, one batch job can affect pending jobs.

Multiprogramming
Sharing the processor, when two or more programs reside in memory at the same time, is referred to as
multiprogramming. Multiprogramming assumes a single shared processor. Multiprogramming increases
CPU utilization by organizing jobs so that the CPU always has one to execute.
The following figure shows the memory layout for a multiprogramming system.
An OS does the following activities related to multiprogramming.
 The operating system keeps several jobs in memory at a time.
 This set of jobs is a subset of the jobs kept in the job pool.
 The operating system picks and begins to execute one of the jobs in the memory.
 Multiprogramming operating systems monitor the state of all active programs and system
resources using memory management programs to ensure that the CPU is never idle,
unless there are no jobs to process.

Advantages
 High and efficient CPU utilization.
 User feels that many programs are allotted CPU almost simultaneously.

Disadvantages
 CPU scheduling is required.
 To accommodate many jobs in memory, memory management is required.

Multitasking (Time Sharing System / Fair Sharing / Multiprogrammed Round Robin)


Multitasking is when multiple jobs are executed by the CPU by switching between them. Switches occur
so frequently that the users may interact with each program while it is running. An OS does
the following activities related to multitasking −
 The user gives instructions to the operating system or to a program directly, and receives an
immediate response.
 The OS handles multitasking in such a way that it can handle multiple operations / execute
multiple programs at a time.
 Multitasking Operating Systems are also known as Time-sharing systems.
 These Operating Systems were developed to provide interactive use of a computer system
at a reasonable cost.

 A time-shared operating system uses the concept of CPU scheduling and multiprogramming to
provide each user with a small portion of a time-shared CPU.
 Each user has at least one separate program in memory.
 A program that is loaded into memory and is executing is commonly referred to as a process.
 When a process executes, it typically executes for only a very short time before it either
finishes or needs to perform I/O.
 Since interactive I/O typically runs at slower speeds, it may take a long time to complete.
During this time, a CPU can be utilized by another process.
 The operating system allows the users to share the computer simultaneously. Since each action
or command in a time-shared system tends to be short, only a little CPU time is needed for each
user.
 As the system switches CPU rapidly from one user/program to the next, each user is given the
impression that he/she has his/her own CPU, whereas actually one CPU is being shared among
many users.
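The rapid switching described above can be sketched as round-robin time slicing. This is a minimal sketch, assuming all jobs are ready at the start; the function name and the particular burst values are illustrative only.

```python
from collections import deque

# Minimal sketch of round-robin time slicing, as used in time-sharing systems.
def round_robin(bursts, quantum):
    """bursts: dict of pid -> remaining CPU time; returns completion order."""
    ready = deque(bursts.items())
    finished = []
    while ready:
        pid, remaining = ready.popleft()
        if remaining > quantum:
            # Quantum expires: the process is preempted and rejoins the
            # back of the ready queue with its remaining burst time.
            ready.append((pid, remaining - quantum))
        else:
            finished.append(pid)      # completes within this time slice
    return finished

order = round_robin({"P1": 5, "P2": 2, "P3": 9}, quantum=3)
# order -> ["P2", "P1", "P3"]
```

Each process gets the CPU for at most one quantum before the next one runs, which is what gives each user the impression of a private CPU.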

Real time system


Real time systems are usually dedicated, embedded systems.
They typically read from and react to sensor data. The system must guarantee response to events within
fixed periods of time to ensure correct performance.
Operating system does the following activities related to real time system activity.
 In such systems, Operating Systems typically read from and react to sensor data.
 The Operating system must guarantee response to events within fixed periods of time to
ensure correct performance.

Process
A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.

A process is defined as an entity which represents the basic unit of work to be
implemented in the system.

To put it in simple terms, we write our computer programs in a text file and when we
execute this program, it becomes a process which performs all the tasks mentioned
in the program.
When a program is loaded into the memory and it becomes a process, it can be
divided into four sections ─ stack, heap, text and data. The following image shows a
simplified layout of a process inside main memory –

S.N. Component & Description

1 Stack
The process stack contains temporary data such as method/function parameters,
return addresses and local variables.

2 Heap
This is dynamically allocated memory given to a process during its run time.

3 Text
This includes the current activity represented by the value of the Program Counter
and the contents of the processor's registers.

4 Data
This section contains the global and static variables.

Process Life Cycle

When a process executes, it passes through different states. These states may
differ between operating systems, and the names of these states are also not
standardized.
In general, a process can have one of the following five states at a time.

S.N. State & Description

1 Start
This is the initial state when a process is first started/created.

2 Ready
The process is waiting to be assigned to a processor. Ready processes are waiting
to have the processor allocated to them by the operating system so that they can
run. A process may come into this state after the Start state, or while running, if it is
interrupted by the scheduler to assign the CPU to some other process.
3 Running
Once the process has been assigned to a processor by the OS scheduler, the
process state is set to running and the processor executes its instructions.

4 Waiting
Process moves into the waiting state if it needs to wait for a resource, such as
waiting for user input, or waiting for a file to become available.

5 Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it
is moved to the terminated state where it waits to be removed from main memory.
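The five states and the legal moves between them can be sketched as a small state machine. The transition table below follows the descriptions above; the names `State`, `TRANSITIONS` and `move` are invented for this sketch.

```python
from enum import Enum

class State(Enum):
    START = "start"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

# Legal transitions in the five-state model described above.
TRANSITIONS = {
    State.START: {State.READY},
    State.READY: {State.RUNNING},
    State.RUNNING: {State.READY, State.WAITING, State.TERMINATED},
    State.WAITING: {State.READY},
    State.TERMINATED: set(),           # final state, no way out
}

def move(current, target):
    """Apply one state transition, rejecting moves the model forbids."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

s = move(State.START, State.READY)     # process is created and admitted
s = move(s, State.RUNNING)             # dispatched by the scheduler
s = move(s, State.WAITING)             # blocks on I/O
```

Note that a waiting process cannot go straight to running: it must first return to the ready queue and be dispatched again.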

Process Control Block (PCB)

A Process Control Block is a data structure maintained by the Operating System
for every process. The PCB is identified by an integer process ID (PID). A PCB
keeps all the information needed to keep track of a process, as listed below in the
table −

S.N. Information & Description

1 Process State
The current state of the process, i.e., whether it is ready, running, waiting, etc.

2 Process privileges
Required to allow/disallow access to system resources.

3 Process ID
Unique identification for each process in the operating system.

4 Pointer
A pointer to the parent process.

5 Program Counter
A pointer to the address of the next instruction to be executed for this process.

6 CPU registers
The various CPU registers whose contents must be saved for the process when it
leaves the running state.

7 CPU Scheduling Information
Process priority and other scheduling information required to schedule the process.

8 Memory management information
Information such as the page table, memory limits and segment table, depending on
the memory model used by the operating system.

9 Accounting information
The amount of CPU time used for process execution, time limits, execution ID etc.

10 IO status information
A list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on the operating system and
may contain different information in different operating systems. Here is a
simplified diagram of a PCB –
The PCB is maintained for a process throughout its lifetime, and is deleted once the
process terminates.
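As a rough illustration, the table above can be collapsed into a plain record. Real PCBs are kernel data structures with OS-specific fields; the field names here simply mirror the table and are not any particular system's layout.

```python
from dataclasses import dataclass, field

# Sketch of a PCB as a plain record mirroring the table above.
# Real PCBs are kernel structures and the exact fields vary by OS.
@dataclass
class PCB:
    pid: int                                         # process ID
    state: str = "start"                             # process state
    program_counter: int = 0                         # next instruction address
    registers: dict = field(default_factory=dict)    # saved CPU registers
    priority: int = 0                                # CPU scheduling information
    memory_limits: tuple = (0, 0)                    # memory management information
    cpu_time_used: float = 0.0                       # accounting information
    open_devices: list = field(default_factory=list) # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"          # the OS updates the PCB as the process moves states
```

On a context switch, the kernel saves the running process's registers and program counter into its PCB and restores them from the PCB of the process being dispatched.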

Process Scheduling

Definition

Process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another process
on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems.
Such operating systems allow more than one process to be loaded into executable
memory at a time, and the loaded processes share the CPU using time multiplexing.

Process Scheduling Queues

The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a
separate queue for each of the process states, and PCBs of all processes in the
same execution state are placed in the same queue. When the state of a process is
changed, its PCB is unlinked from its current queue and moved to its new state
queue.
The Operating System maintains the following important process scheduling queues
 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in
main memory, ready and waiting to execute. A new process is always
put in this queue.
 Device queues − The processes which are blocked due to
unavailability of an I/O device constitute this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority,
etc.). The OS scheduler determines how to move processes between the ready and run
queues which can only have one entry per processor core on the system; in the above
diagram, it has been merged with the CPU.

Scheduling Queue
The processes that are entering the system are stored in the Job Queue. Processes that are in the
Ready state are generally placed in the Ready Queue.
The processes waiting for a device are placed in Device Queues. There is a unique device queue
for every I/O device.
A new process is first placed in the Ready queue, where it waits until it is selected for execution.
Once the process is assigned to the CPU and is executing, one of the following events can occur −
 The process issues an I/O request and is then placed in an I/O queue.
 The process may create a new subprocess and wait for its termination.
 The process may be removed forcibly from the CPU as a result of an interrupt, and be put back in the ready
queue.
In the first two cases, the process eventually switches from the waiting state to the ready state and is put
back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all
queues and has its PCB and resources deallocated.
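The movement of PCBs between these queues can be sketched with plain FIFO queues. A FIFO policy and a single device are assumed for brevity; the variable names are illustrative.

```python
from collections import deque

# Sketch of process movement between scheduling queues (FIFO policy assumed).
job_queue = deque(["P1", "P2", "P3"])    # all processes in the system
ready_queue = deque()
device_queue = deque()                    # blocked on one I/O device

# Admit every job to the ready queue (long-term scheduling, simplified).
while job_queue:
    ready_queue.append(job_queue.popleft())

running = ready_queue.popleft()           # dispatch P1
device_queue.append(running)              # P1 issues an I/O request and blocks
running = ready_queue.popleft()           # dispatch P2 in its place
ready_queue.append(device_queue.popleft())  # P1's I/O completes; back to ready
```

After this sequence P2 is running and the ready queue holds P3 followed by P1, matching the cycle described above: waiting processes re-enter the ready queue rather than the CPU directly.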

Schedulers

Schedulers are special system software which handle process scheduling in various
ways. Their main task is to select the jobs to be submitted into the system and to
decide which process to run. Schedulers are of three types −

 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler

It is also called a job scheduler. A long-term scheduler determines which programs


are admitted to the system for processing. It selects processes from the queue
and loads them into memory for execution. Process loads into the memory for CPU
scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs,
such as I/O bound and processor bound. It also controls the degree of
multiprogramming. If the degree of multiprogramming is stable, then the average rate
of process creation must be equal to the average departure rate of processes
leaving the system.
On some systems, the long-term scheduler may be absent or minimal. Time-
sharing operating systems have no long-term scheduler. The long-term scheduler
comes into play when a process changes state from new to ready.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system
performance in accordance with the chosen set of criteria. It carries out the change of a process from
the ready state to the running state. The CPU scheduler selects a process from among the
processes that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which
process to execute next. Short-term schedulers are faster than long-term
schedulers.

Medium Term Scheduler

Medium-term scheduling is a part of swapping. It removes processes from
memory and thereby reduces the degree of multiprogramming. The medium-term scheduler
is in charge of handling swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended
process cannot make any progress towards completion. In this condition, to
remove the process from memory and make space for other processes, the
suspended process is moved to the secondary storage. This process is called
swapping, and the process is said to be swapped out or rolled out. Swapping may
be necessary to improve the process mix.

Comparison among Scheduler


S.N. Long-Term Scheduler / Short-Term Scheduler / Medium-Term Scheduler

1 It is a job scheduler. / It is a CPU scheduler. / It is a process swapping scheduler.

2 Its speed is less than that of the short-term scheduler. / Its speed is the fastest among
the three. / Its speed is in between the short-term and long-term schedulers.

3 It controls the degree of multiprogramming. / It provides less control over the degree
of multiprogramming. / It reduces the degree of multiprogramming.

4 It is almost absent or minimal in time-sharing systems. / It is also minimal in
time-sharing systems. / It is a part of time-sharing systems.

5 It selects processes from the pool and loads them into memory for execution. / It selects
from among the processes that are ready to execute. / It can re-introduce a process into
memory so that its execution can be continued.

Cooperating Processes

Cooperating processes are those that can affect or are affected by other processes running on the system.
Cooperating processes may share data with each other.

Reasons for needing cooperating processes

There may be many reasons for the requirement of cooperating processes. Some of these are given as
follows −
 Modularity
Modularity involves dividing complicated tasks into smaller subtasks. These subtasks can be completed by
different cooperating processes, leading to faster and more efficient completion of the required tasks.
 Information Sharing
Sharing of information between multiple processes can be accomplished using cooperating processes. This
may include access to the same files. A mechanism is required so that the processes can access the files in
parallel to each other.
 Convenience
There are many tasks that a user needs to do such as compiling, printing, editing etc. It is convenient if
these tasks can be managed by cooperating processes.
 Computation Speedup
Subtasks of a single task can be performed parallely using cooperating processes. This increases the
computation speedup as the task can be executed faster. However, this is only possible if the system has
multiple processing elements.

Methods of Cooperation

Cooperating processes can coordinate with each other using shared data or messages. Details
about these are given as follows −
 Cooperation by Sharing
The cooperating processes can cooperate with each other using shared data such as memory, variables,
files, databases etc. A critical section is used to provide data integrity, and writing is made mutually
exclusive to prevent inconsistent data.
A diagram that demonstrates cooperation by sharing is given as follows −

In the above diagram, Process P1 and P2 can cooperate with each other using shared data such as
memory, variables, files, databases etc.
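Cooperation by sharing can be sketched with two workers updating shared data under a lock. Threads stand in for processes P1 and P2 here for brevity; the critical section is the locked update, which makes writing mutually exclusive as described above.

```python
import threading

# Sketch of cooperation by sharing: two threads stand in for processes P1 and
# P2, and a lock makes writes to the shared counter mutually exclusive.
shared = {"count": 0}
lock = threading.Lock()

def worker():
    for _ in range(10_000):
        with lock:                  # critical section: one writer at a time
            shared["count"] += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# shared["count"] is now exactly 20000; without the lock the two read-modify-
# write sequences could interleave and lose updates.
```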
 Cooperation by Communication
The cooperating processes can cooperate with each other using messages. This may lead to deadlock if
each process is waiting for a message from the other to perform an operation. Starvation is also possible if a
process never receives a message.
A diagram that demonstrates cooperation by communication is given as follows −
Interprocess Communication
Interprocess communication is the mechanism provided by the operating system that allows processes to
communicate with each other. This communication could involve a process letting another process know
that some event has occurred or the transferring of data from one process to another.

A diagram that illustrates inter process communication is as follows −

Synchronization in Interprocess Communication

Synchronization is a necessary part of interprocess communication. It is either provided by the interprocess
control mechanism or handled by the communicating processes. Some of the methods to provide
synchronization are as follows −

 Semaphore
A semaphore is a variable that controls the access to a common resource by multiple processes. The two
types of semaphores are binary semaphores and counting semaphores.
 Mutual Exclusion
Mutual exclusion requires that only one process or thread can enter the critical section at a time. This is
useful for synchronization and also prevents race conditions.
 Barrier
A barrier does not allow individual processes to proceed until all the processes reach it. Many parallel
languages and collective routines impose barriers.
 Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a loop while checking if the lock is
available or not. This is known as busy waiting because the process is not doing any useful operation even
though it is active.
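The semaphore mechanism from the list above can be sketched with a counting semaphore that limits how many workers hold a resource at once. Threads again stand in for processes, and the worker function is invented for this sketch.

```python
import threading

# Sketch: a counting semaphore initialized to 2 lets at most two of the five
# workers hold the "resource" at any moment (threads stand in for processes).
sem = threading.Semaphore(2)
active = 0
peak = 0
guard = threading.Lock()          # protects the two counters below

def use_resource():
    global active, peak
    with sem:                     # wait (P) on entry ... signal (V) on exit
        with guard:
            active += 1
            peak = max(peak, active)
        # ... the shared resource would be used here ...
        with guard:
            active -= 1

threads = [threading.Thread(target=use_resource) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# peak never exceeds the semaphore's initial count of 2
```

A binary semaphore (initial count 1) behaves like the mutual-exclusion lock described above; a counting semaphore generalizes it to N identical resources.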

Approaches to Interprocess Communication

The different approaches to implement interprocess communication are given as follows −

 Pipe
A pipe is a data channel that is unidirectional. Two pipes can be used to create a two-way data channel
between two processes. This uses standard input and output methods. Pipes are used in all POSIX systems
as well as Windows operating systems.
 Socket
The socket is the endpoint for sending or receiving data in a network. This is true for data sent between
processes on the same computer or data sent between different computers on the same network. Most of
the operating systems use sockets for interprocess communication.
 File
A file is a data record that may be stored on a disk or acquired on demand by a file server. Multiple
processes can access a file as required. All operating systems use files for data storage.
 Signal
Signals are useful in interprocess communication in a limited way. They are system messages that are sent
from one process to another. Normally, signals are not used to transfer data but are used for remote
commands between processes.
 Shared Memory
Shared memory is the memory that can be simultaneously accessed by multiple processes. This is done so
that the processes can communicate with each other. All POSIX systems, as well as Windows operating
systems use shared memory.
 Message Queue
Multiple processes can read and write data to the message queue without being connected to each other.
Messages are stored in the queue until their recipient retrieves them. Message queues are quite useful for
interprocess communication and are used by most operating systems.

A diagram that demonstrates message queue and shared memory methods of interprocess communication
is as follows −
Scheduling algorithms
SCHEDULING CRITERIA:

Many algorithms exist for CPU scheduling. Various criteria have been suggested for
comparing these CPU scheduling algorithms. Common criteria include:

 CPU utilization: This may range from 0% to 100% ideally. In real
systems it ranges from about 40% for lightly-loaded systems to 90% for
heavily-loaded systems.

 Throughput: The number of processes completed per time unit is the
throughput. For long processes this may be of the order of one process per hour,
whereas for short processes throughput may be 10 or 12
processes per second.

 Turnaround time: The interval between submission and
completion of a process is called turnaround time. It includes execution
time and waiting time.

 Waiting time: The sum of all the periods a process spends
waiting in the ready queue is called waiting time.

 Response time: In an interactive process, the user may use some output
while the process continues to generate new results. Instead of
turnaround time, which gives the difference between time of submission and
time of completion, response time is sometimes used. Response time is the
difference between the time of submission and the time the first response occurs.

Desirable features include maximum CPU utilization and throughput, and minimum
turnaround time, waiting time and response time.
A Process Scheduler schedules different processes to be assigned to the CPU
based on a particular scheduling algorithm. The following popular process
scheduling algorithms are discussed in this chapter −

 First-Come, First-Served (FCFS) Scheduling


 Shortest-Job-First (SJF) Scheduling
 Priority Scheduling
 Round Robin(RR) Scheduling
 Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. In non-preemptive
algorithms, once a process enters the running state it cannot be preempted
until it completes its allotted time, whereas in preemptive
scheduling, which is based on priority, the scheduler may preempt a low-priority
running process whenever a high-priority process enters the ready state.

First Come First Serve (FCFS)

 Jobs are executed on a first come, first served basis.
 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on a FIFO queue.
 Poor in performance, as the average waiting time is high.

Consider a set of three processes P1, P2 and P3 arriving at time instant 0 and
having CPU burst times as shown below:

Process Burst time (msecs)

P1 24

P2 3

P3 3
The Gantt chart below shows the result.

P1 P2 P3

0 24 27 30

Average waiting time and average turnaround time are calculated as follows:

The waiting time for process P1 = 0 msecs
P2 = 24 msecs
P3 = 27 msecs

Average waiting time = (0 + 24 + 27) / 3 = 51 / 3 = 17 msecs.

P1 completes at the end of 24 msecs, P2 at the end of 27 msecs and P3 at the end
of 30 msecs. Average turnaround time = (24 + 27 + 30) / 3 = 81 / 3 = 27 msecs.

If the processes arrive in the order P2, P3 and P1, then the result will be as follows:

P2 P3 P1

0 3 6 30

Average waiting time = (0 + 3 + 6) / 3 = 9 / 3 = 3 msecs.

Average turnaround time = (3 + 6 + 30) / 3 = 39 / 3 = 13 msecs.

Thus if processes with smaller CPU burst times arrive earlier, then average waiting
and average turnaround times are less.
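The FCFS calculations above can be sketched in a few lines of Python. This is an illustrative helper, not code from the text; process names and the function name are my own. All processes are assumed to arrive at time 0, as in the example.

```python
# Sketch of the FCFS computation (illustrative names; all arrivals at time 0).
# Processes are (name, burst_time) pairs in their arrival order.
def fcfs(processes):
    """Return per-process waiting and turnaround times under FCFS."""
    waiting, turnaround, clock = {}, {}, 0
    for name, burst in processes:
        waiting[name] = clock        # time already spent in the ready queue
        clock += burst               # non-preemptive: runs to completion
        turnaround[name] = clock     # completion time == turnaround (arrival 0)
    return waiting, turnaround

w, t = fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
avg_wait = sum(w.values()) / len(w)          # (0 + 24 + 27) / 3 = 17.0
avg_turnaround = sum(t.values()) / len(t)    # (24 + 27 + 30) / 3 = 27.0
```

Re-running the function with the order P2, P3, P1 reproduces the second result (average waiting time 3 msecs, average turnaround time 13 msecs), confirming that arrival order alone changes the averages.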

The algorithm also suffers from what is known as a convoy effect. Consider the
following scenario. Let there be a mix of one CPU-bound process and many I/O
bound processes in the ready queue.

The CPU-bound process gets the CPU and executes (long CPU burst).
In the meanwhile, I/O bound processes finish I/O and wait for CPU thus leaving
the I/O devices idle.

The CPU-bound process releases the CPU as it goes for an I/O.

I/O bound processes have short CPU bursts and they execute and go for I/O quickly.
The CPU is idle till the CPU-bound process finishes the I/O and gets hold of the
CPU.
The above cycle repeats. This is called the convoy effect. Here small processes wait
for one big process to release the CPU.

Since the algorithm is non-preemptive in nature, it is not suited for time-sharing
systems.

Shortest Job First (SJF)

 This is also known as shortest job next (SJN).
 It can be either a non-preemptive or a preemptive scheduling algorithm.
 Best approach to minimize waiting time.
 Easy to implement in batch systems where the required CPU time is
known in advance.
 Impossible to implement exactly in interactive systems where the
required CPU time is not known.
 The processor should know in advance how much time a process will take.

As an example, consider the following set of processes P1, P2, P3, P4 and their CPU
burst times:

Process Burst time (msecs)

P1 6

P2 8

P3 7

P4 3
Using SJF algorithm, the processes would be scheduled as shown below.

P4 P1 P3 P2

0 3 9 16 24

Average waiting time = (0 + 3 + 9 + 16) / 4 = 28 / 4 = 7 msecs.

Average turnaround time = (3 + 9 + 16 + 24) / 4 = 52 / 4 = 13 msecs.

If the above processes were scheduled using FCFS algorithm, then

Average waiting time = (0 + 6 + 14 + 21) / 4 = 41 / 4 = 10.25 msecs.


Average turnaround time = (6 + 14 + 21 + 24) / 4 = 65 / 4 = 16.25 msecs.

The SJF algorithm produces an optimal schedule: for a given set of processes,
it gives the minimum average waiting and turnaround times. This is because
shorter processes are scheduled earlier than longer ones, and moving a short
process ahead decreases its waiting time by more than it increases the
waiting time of the long process.
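The non-preemptive SJF example above amounts to sorting the processes by burst time and then running them FCFS-style. A minimal sketch (illustrative names, all arrivals at time 0):

```python
# Non-preemptive SJF for processes that all arrive at time 0.
# Processes are (name, burst_time) pairs.
def sjf(processes):
    """Schedule shortest burst first; return average waiting and turnaround times."""
    order = sorted(processes, key=lambda p: p[1])   # shortest CPU burst first
    clock, total_wait, total_turnaround = 0, 0, 0
    for _, burst in order:
        total_wait += clock          # each process waits for all shorter ones
        clock += burst
        total_turnaround += clock
    n = len(processes)
    return total_wait / n, total_turnaround / n

avg_wait, avg_tat = sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
# avg_wait == 7.0 and avg_tat == 13.0, matching the worked example
```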

The main disadvantage with the SJF algorithm lies in knowing the length of the
next CPU burst. In case of long-term or job scheduling in a batch system, the
time required to complete a job as given by the user can be used to schedule. SJF
algorithm is therefore applicable in long-term scheduling.

The algorithm cannot be implemented exactly for short-term CPU scheduling, as
there is no way to accurately know in advance the length of the next CPU
burst. Only an approximation of the length (typically an exponential average
of the lengths of previous bursts) can be used to implement the algorithm.

But the SJF scheduling algorithm is provably optimal and thus serves as a
benchmark to compare other CPU scheduling algorithms.

The SJF algorithm can be either preemptive or non-preemptive. In the
preemptive case, if a new process joins the ready queue with a next CPU burst
shorter than what remains of the currently executing process, then the CPU is
allocated to the new process. In the non-preemptive case, the currently
executing process is not preempted; the new process, being the one with the
shortest next CPU burst, simply gets the next chance.

Given below are the arrival and burst times of four processes P1, P2, P3 and P4.

Process Arrival time (msecs) Burst time (msecs)

P1 0 8 (1+7)

P2 1 4

P3 2 9

P4 3 5
If SJF preemptive scheduling is used, the following Gantt chart shows the result:

P1 P2 P4 P1 P3

0 1 5 10 17 26

Average waiting time = ((10 – 1) + 0 + (17 – 2) + (5 – 3)) / 4 = 26 / 4 = 6.5 msecs.

If non-preemptive SJF scheduling is used, the result is as follows:

P1 P2 P4 P3

0 8 12 17 26

Average waiting time = (0 + (8 – 1) + (12 – 3) + (17 – 2)) / 4 = 31 / 4 = 7.75 msecs.
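The preemptive (shortest-remaining-time-first) schedule above can be checked with a simple millisecond-by-millisecond simulation. This is a sketch with illustrative names; processes are given as (name, arrival, burst) triples, and waiting time is computed as finish − arrival − burst.

```python
# Preemptive SJF (shortest remaining time first), simulated one msec at a time.
def srtf(processes):
    """Return {name: waiting_time} for (name, arrival, burst) triples."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: at for name, at, _ in processes}
    bursts = {name: burst for name, _, burst in processes}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                 # no process has arrived yet
            clock += 1
            continue
        cur = min(ready, key=lambda n: remaining[n])  # shortest remaining burst
        remaining[cur] -= 1           # run the chosen process for 1 msec
        clock += 1
        if remaining[cur] == 0:
            finish[cur] = clock
            del remaining[cur]
    return {n: finish[n] - arrival[n] - bursts[n] for n in finish}

waits = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
avg_wait = sum(waits.values()) / 4   # (9 + 0 + 15 + 2) / 4 = 6.5 msecs
```

The per-process waits (P1 = 9, P2 = 0, P3 = 15, P4 = 2) match the terms in the average-waiting-time expression above.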

Priority Based Scheduling

 Priority scheduling is one of the most common scheduling algorithms in
batch systems; in its basic form it is non-preemptive.
 Each process is assigned a priority. The process with the highest
priority is executed first, and so on.
 Processes with the same priority are executed on a first come, first
served basis.
 Priority can be decided based on memory requirements, time
requirements or any other resource requirement.
In the following example, we will assume that lower numbers represent
higher priority.
Process Priority Burst time (msecs)
P1 3 10
P2 1 1
P3 3 2
P4 4 1
P5 2 5
Using priority scheduling, the processes are scheduled as shown below:

P2 P5 P1 P3 P4

0 1 6 16 18 19
Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 41 / 5 = 8.2 msecs.
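The non-preemptive priority example above reduces to sorting by priority number (a stable sort keeps equal-priority processes in arrival order, giving the FCFS tie-break). A sketch with illustrative names, all arrivals at time 0:

```python
# Non-preemptive priority scheduling (lower number = higher priority).
# Processes are (name, priority, burst_time) triples arriving at time 0.
def priority_schedule(processes):
    """Run highest-priority first; return {name: waiting_time}."""
    # Python's sort is stable, so equal priorities keep FCFS order.
    order = sorted(processes, key=lambda p: p[1])
    waiting, clock = {}, 0
    for name, _, burst in order:
        waiting[name] = clock
        clock += burst
    return waiting

w = priority_schedule([("P1", 3, 10), ("P2", 1, 1), ("P3", 3, 2),
                       ("P4", 4, 1), ("P5", 2, 5)])
avg_wait = sum(w.values()) / 5   # (6 + 0 + 16 + 18 + 1) / 5 = 8.2 msecs
```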
Priorities can be defined either internally or externally. Internal definition of
priority is based on some measurable factors like memory requirements, number
of open files, and so on. External priorities are defined by criteria such as
importance of the user depending on the user’s department and other influencing
factors.
Priority-based algorithms can be either preemptive or non-preemptive. In case of
preemptive scheduling, if a new process joins the ready queue with a priority
higher than the process that is executing, then the current process is preempted
and CPU allocated to the new process. But in case of non- preemptive algorithm,
the new process having highest priority from among the ready processes is
allocated the CPU only after the current process gives up the CPU.

Starvation or indefinite blocking is one of the major disadvantages of priority


scheduling. Every process is associated with a priority. In a heavily-loaded system,
low priority processes in the ready queue are starved or never get a chance to
execute. This is because there is always a higher priority process ahead of them in
the ready queue.
A solution to starvation is aging. Aging is a concept where the priority of a process
waiting in the ready queue is increased gradually. Eventually even the lowest
priority process ages to attain the highest priority at which time it gets a chance to
execute on the CPU.
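Aging can be sketched as a periodic pass over the ready queue that bumps each waiter's priority. The numbers below (a 0–127 priority scale, a boost of 1 per tick) are purely illustrative assumptions, not values from the text:

```python
# Minimal aging sketch: on every scheduling tick, each process still waiting
# in the ready queue moves one step closer to the highest priority (0).
def age(ready_queue, boost=1, highest=0):
    """Decrease each waiting process's priority number, clamped at `highest`."""
    for proc in ready_queue:
        proc["priority"] = max(highest, proc["priority"] - boost)

queue = [{"name": "P9", "priority": 127}]   # lowest-priority process
for _ in range(127):
    age(queue)
# After enough ticks, even the lowest-priority process reaches priority 0
# and is guaranteed a turn on the CPU, so it cannot starve indefinitely.
```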

Round Robin Scheduling

 Round Robin is a preemptive process scheduling algorithm.
 Each process is given a fixed time to execute, called a time quantum.
 Once a process has executed for the given time period, it is preempted
and another process executes for its time period.
 Context switching is used to save the states of preempted processes.

Consider the same example explained under FCFS algorithm.

Process Burst time (msecs)

P1 24

P2 3

P3 3

Let the duration of a time slice be 4 msecs, which is to say the CPU
switches between processes every 4 msecs in a round-robin fashion. The
Gantt chart below shows the scheduling of processes.
P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30

Average waiting time = (4 + 7 + (10 – 4)) / 3 = 17 / 3 = 5.66 msecs.
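The round-robin run above can be reproduced with a queue-based sketch (illustrative names, all arrivals at time 0, so waiting time is simply finish time minus burst time):

```python
from collections import deque

# Round-robin simulation: processes are (name, burst_time) pairs, all
# arriving at time 0; `quantum` is the time slice in msecs.
def round_robin(processes, quantum):
    """Return {name: waiting_time} under round-robin scheduling."""
    remaining = dict(processes)
    queue = deque(name for name, _ in processes)
    finish, clock = {}, 0
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])   # run one slice (or less, to finish)
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)                # preempted: back to the tail
        else:
            finish[name] = clock
    bursts = dict(processes)
    return {n: finish[n] - bursts[n] for n in finish}  # arrivals all at 0

w = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
avg_wait = sum(w.values()) / 3   # (6 + 4 + 7) / 3 ≈ 5.66 msecs
```

Trying a different quantum shows the usual trade-off: a very large quantum degenerates to FCFS, while a very small one maximizes fairness at the cost of more context switches.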
