PPT-Unit-4 CPU Scheduling and Algorithms
Chapter Outcomes:
• Justify the need and objectives of job scheduling with a relevant example.
• Explain with an example the procedure of allocating the CPU to a given process using the
specified OS.
• Calculate the turnaround time and average waiting time for the given scheduling algorithm.
• Explain the functioning of the given necessary condition leading to deadlock.
Learning Objectives:
• To understand Basic Concept of CPU Scheduling
• To study Types of Scheduling
• To learn Types of Scheduling Algorithms
• To study Basic Concepts of Deadlock
Scheduling Concept
• Scheduling is an important function of an operating system. The process scheduling is the
activity of the process manager that handles the removal of the running process from the
CPU and the selection of another process on the basis of a particular strategy.
• The scheduler is the kernel component (module/program) responsible for deciding which
program should be executed on the CPU. A scheduler selects a job/task which is to be
submitted next for execution.
• When it receives control, the scheduler removes the running program from the CPU and
gives the CPU to another program. The function of deciding which program should be given
the CPU, and for how long, is called scheduling.
Scheduling Model
• CPU scheduling is the process of selecting a process from the ready queue and assigning
the CPU to it. CPU scheduling algorithms decides which of the processes in the ready queue
is to be allocated to the CPU.
Every process in the OS that requests CPU service carries out the following sequence of
actions; a minimal sketch of this cycle appears after the list:
1. Join the ready queue and wait for CPU service.
2. Execute (receive CPU service) for the duration of the current CPU burst or for the
duration of the time slice (timeout).
3. Join the I/O queue to wait for I/O service, or return to the ready queue to wait for more
CPU service.
4. Terminate and exit if service is completed, i.e., if there are no more CPU or I/O bursts;
if more service is required, return to the ready queue to wait for more CPU service.
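A minimal Python sketch of this cycle (the process names and burst lengths are assumed for illustration, not taken from the slides):

from collections import deque

# Each entry holds a process name and its alternating CPU/I/O burst lengths.
ready = deque([("P1", deque([4, 2, 3])), ("P2", deque([5, 1, 2]))])

while ready:
    name, bursts = ready.popleft()                            # action 1: leave the ready queue
    print(f"{name}: CPU burst of {bursts.popleft()} units")   # action 2: receive CPU service
    if not bursts:
        print(f"{name}: terminates")                          # action 4: no more bursts
        continue
    print(f"{name}: I/O burst of {bursts.popleft()} units")   # action 3: wait for I/O
    if bursts:
        ready.append((name, bursts))                          # back to the ready queue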
Scheduling Model
The CPU scheduler is the part of the operating system that selects the next process to which
the CPU will be allocated, de-allocates the CPU from the currently executing process, and
allocates the CPU to the newly selected process.
CPU and I/O Burst Cycles
• CPU scheduling is greatly affected by how a process behaves during its execution. Almost all
processes continue to switch between the CPU (for processing) and I/O devices (for
performing I/O) during their execution.
• Processes alternate back and forth between these two states. Process execution begins with
a CPU burst, followed by an I/O burst, which is followed by another CPU burst, then another
I/O burst, and so on. Eventually, the last CPU burst ends with a system request to terminate
execution, rather than another I/O burst.
CPU and I/O Burst Cycles
CPU-Bound Process: A process that spends more time in computation (with the CPU) and
very rarely performs I/O is called a CPU-bound process.
I/O-Bound Process: A process that spends more time in I/O operations than in computation
(time spent with the CPU) is called an I/O-bound process.
Scheduling Criteria
CPU Utilization: CPU utilization is defined as the percentage of time the CPU is busy
executing processes. For higher utilization, the CPU must be kept as busy as possible; that is,
there must be some process running at all times. CPU utilization may range from 0 to 100
percent; in a real system it should range from 40 to 90 percent.
Throughput: If the CPU is busy executing processes, then work is being done. One measure of
work is the number of processes completed per time unit, called throughput. For long
processes, this rate may be one process per hour; for short transactions, throughput might be
10 processes per second. Throughput is defined as the total number of processes that a
system can execute per unit of time.
Scheduling Criteria
Turnaround Time (TAT): It is the difference between the time a process enters the system and
the time it exits the system. From the point of view of a particular process, the important
criterion is how long it takes to execute that process. The interval from the time of submission
of a process to the time of completion is the turnaround time. Turnaround time is the sum of
the periods spent waiting to get into memory, waiting in the ready queue, executing on the
CPU, and doing I/O. Turnaround time is defined as the amount of time that has elapsed from
the creation to the termination of a process.
Waiting Time: The CPU scheduling algorithm does not affect the amount of time during which
a process executes or does I/O; it affects only the amount of time that a process spends
waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready
queue. Waiting time is defined as the time spent by a process while waiting in the ready
queue.
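A quick worked illustration of these two definitions (the numbers are assumed, not from the slides):

# Turnaround Time = Completion Time - Arrival Time
# Waiting Time    = Turnaround Time - CPU Burst Time
arrival, burst, completion = 0, 5, 12   # assumed values for one process
turnaround = completion - arrival       # 12 units spent in the system
waiting = turnaround - burst            # 7 units spent in the ready queue
print(turnaround, waiting)              # -> 12 7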
Scheduling Criteria
Response Time: Response time is defined as the time elapsed between the moment a user
initiates a request and the instant the system starts responding to that request. In an
interactive system, turnaround time may not be the best criterion. Often a process can
produce some output early and continue computing new results while previous results are
being output to the user. Thus, another measure is the time from the submission of a request
until the first response is produced. This measure, called response time, is the time it takes to
start responding, not the time it takes to output the complete response. The turnaround time
is generally limited by the speed of the output device.
Balanced Utilization: Balanced utilization is defined as the percentage of time all the system
resources are busy. It considers not only the CPU utilization but also the utilization of I/O
devices, memory, and all other resources. To get more work done by the system, the CPU and
I/O devices must be kept running simultaneously. For this, it is desirable to load a mixture of
CPU-bound and I/O-bound processes into memory.
Preemptive Scheduling
• Pre-emptive scheduling allows a higher priority process to replace the currently running
process, even if its time slot is not completed or it has not requested any I/O.
• If a higher priority process enters the system, the currently running process is stopped
and the CPU transfers control to the higher priority process.
• The currently running process may be interrupted and moved to the ready state by the
operating system. Windows 95 introduced pre-emptive scheduling, and all subsequent
versions of Windows operating systems have used pre-emptive scheduling.
• The Mac OS X operating system for the Macintosh also uses pre-emptive scheduling.
• Pre-emptive scheduling algorithms are based on priority, where the scheduler may
preempt a low priority running process at any time when a high priority process enters the
ready state.
• The advantage of pre-emptive scheduling algorithms is that they allow real multiprogramming.
• The disadvantages of pre-emptive scheduling algorithms are that they are complex and
they can lead the system to race conditions (sketched below).
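The race condition mentioned above can be sketched with two threads updating a shared counter without a lock (a hypothetical illustration; the names are not from the slides):

import threading

counter = 0

def increment():
    global counter
    for _ in range(100_000):
        counter += 1   # read-modify-write: a preemption between the read and the
                       # write can cause the other thread's update to be lost

t1 = threading.Thread(target=increment)
t2 = threading.Thread(target=increment)
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)   # may print less than 200000 when updates are lost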
Non-Preemptive Scheduling
• In non-pre-emptive scheduling, once the CPU is assigned to a process, it is not released
until the completion of that process. A running process keeps control of the CPU and the
other allocated resources until it terminates normally.
• In non-pre-emptive scheduling, once the CPU has been allocated to a process, the process
keeps the CPU until it releases the CPU, either by terminating or by switching to the waiting
state. This scheduling method was used by Microsoft Windows 3.x.
• In the Microsoft Windows 3.x case, once a process is in the running state, it continues to
execute until it terminates or blocks itself to wait for I/O or for some operating system
service; i.e., if a higher priority process enters the system, the running process cannot be
forced to release control.
Non-Preemptive Scheduling
• Non-pre-emptive algorithms are designed so that once a process enters the running state, it
cannot be preempted until it completes its allotted time.
• The advantages of non-pre-emptive scheduling algorithms are that they are simple and they
cannot lead the system to race conditions.
• The disadvantage of non-pre-emptive scheduling algorithms is that they do not allow real
multiprogramming.
Difference between Pre-emptive and Non-Pre-emptive Algorithms
Preemptive Scheduling | Non-preemptive Scheduling
Even if the CPU is allocated to one process, it can be preempted in favour of another process that has higher priority or meets some other criterion. | Once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.
Throughput is less. | Throughput is high.
It is suitable for real-time systems. | It is not suitable for real-time systems.
Algorithm design is complex. | Algorithm design is simple.
Only the processes having higher priority are scheduled. | Processes having any priority can get scheduled.
It doesn't treat all processes as equal. | It treats all processes as equal.
It has high overhead. | It has low overhead.
Examples: Round Robin, Priority algorithms. | Example: FCFS algorithm.
A running process may be interrupted by another process that needs to execute. | The processor executes a process till termination without any interruption.
The system resources are used efficiently. | The system resources are not used efficiently.
Circumstances for pre-emptive scheduling: a process switches from the running to the ready state, or from the waiting to the ready state. | Circumstances for non-pre-emptive scheduling: a process switches from the running to the waiting state, or a process terminates.
Dispatcher
• The dispatcher is a component involved in CPU scheduling. It is the module that actually
gives control of the CPU to the process selected by the short-term scheduler.
• The module of the operating system that performs the function of setting up the execution
of the selected process on the CPU is known as the dispatcher.
• The CPU scheduler only selects a process to be executed next on the CPU; it cannot itself
assign the CPU to the selected process.
The following functions are performed by the dispatcher:
1. Loading the registers of the process.
2. Switching the operating system to user mode.
3. Restarting the program by jumping to the proper location in the user program.
The dispatcher needs to be as fast as possible, as it runs on every context switch. The time
taken by the dispatcher to stop one process and start another is called the dispatch latency.
First Come First Serve (FCFS) Scheduling Algorithm
• It is the simplest type of algorithm in which the process that requests the CPU first is
allocated the CPU first.
• In the FCFS scheduling algorithm, processes are scheduled in the order they are received.
The FCFS algorithm is easily implemented by using a queue data structure for the ready queue.
• Jobs are processed in the order of their arrival in the ready queue. It can be implemented
with a First In First Out (FIFO) queue.
• The FCFS scheduling algorithm is non-pre-emptive. Once the CPU has been allocated to a
process, that process keeps the CPU until it releases the CPU, either by terminating
or by requesting I/O.
• It is not useful in time-sharing systems, because a process will hold the CPU until it finishes
or changes state to the wait state.
• In FCFS, once a process is given the CPU, it keeps it until the completion of execution.
First Come First Serve (FCFS) Scheduling Algorithm
• The performance of FCFS is often very poor, because a process with a long CPU burst will
hold up other processes. Moreover, it can reduce overall throughput, since I/O for processes
in the waiting state may complete while the CPU-bound process is still running.
• The average waiting time for the FCFS algorithm is not minimal, and it also varies
substantially if the process CPU burst times vary greatly. A small sketch of the calculation
follows.
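Since the worked solutions on the following slides appeared as images in the original deck, here is a minimal sketch of the FCFS computation with assumed (name, arrival, burst) values:

# FCFS: run processes in arrival order; the job data below is assumed.
jobs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)]

time = total_wait = total_tat = 0
for name, arrival, burst in sorted(jobs, key=lambda j: j[1]):
    start = max(time, arrival)     # CPU may sit idle until the job arrives
    completion = start + burst
    tat = completion - arrival     # turnaround time
    wait = tat - burst             # time spent waiting in the ready queue
    total_tat += tat
    total_wait += wait
    time = completion
    print(f"{name}: waiting={wait}, turnaround={tat}")

n = len(jobs)
print(f"average waiting time = {total_wait / n}")      # -> 5.75
print(f"average turnaround time = {total_tat / n}")    # -> 11.25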
First Come First Serve (FCFS) Scheduling Algorithm
Example 1: Consider four jobs scheduled for execution (all jobs arrived at the same time).
Find the average waiting time and average turnaround time.
Gantt Chart: [the job table, Gantt chart, and worked solution appeared as images in the original slides]
First Come First Serve (FCFS) Scheduling Algorithm
Example 2: Consider four jobs scheduled for execution. Find the average waiting time and
average turnaround time.
Gantt Chart: [the job table, Gantt chart, and worked solution appeared as images in the original slides]
First Come First Serve (FCFS) Scheduling Algorithm
Advantages of FCFS Scheduling Algorithm:
• FCFS is easy to understand and implement, as processes are simply added at the end and
removed from the front of the queue. No process from the middle of the queue needs to be
accessed.
• FCFS is well suited for batch systems, where the longer time periods for each process are
often acceptable.
Deadlock
• Deadlock is defined as "a situation where a set of processes are blocked because each
process is holding a resource and waiting for another resource acquired by some other
process".
Necessary Conditions for Deadlock
Mutual Exclusion: At least one resource must be held in a non-sharable mode; only one
process can use the resource at a time.
Hold and Wait: A process must be holding at least one resource while waiting to acquire
additional resources held by other processes.
No Pre-emption: Resources cannot be pre-empted; i.e., a resource can only be released
voluntarily by the process holding it, after the process has completed its task. Resources
previously granted cannot be forcibly taken away from a process; they must be explicitly
released by the process holding them.
Circular Wait: There exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is
waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is
waiting for a resource held by Pn, and Pn is waiting for a resource held by P0. Thus there
must be a circular chain of two or more processes, each of which is waiting for a resource
held by the next member of the chain. A small sketch of detecting such a chain follows.
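A minimal sketch of checking for such a chain by following a wait-for mapping (the mapping values below are assumed for illustration):

# wait_for[p] = the process holding the resource that p is waiting for (assumed data).
wait_for = {"P0": "P1", "P1": "P2", "P2": "P0", "P3": None}

def in_circular_wait(start):
    # Follow the wait-for chain; arriving back at the start means circular wait.
    seen, p = set(), start
    while p is not None and p not in seen:
        seen.add(p)
        p = wait_for.get(p)
    return p == start

print([p for p in wait_for if in_circular_wait(p)])   # -> ['P0', 'P1', 'P2']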
Deadlock Handling
A deadlock in an operating system can be handled in the following four different ways:
deadlock prevention, deadlock avoidance, deadlock detection and recovery, and deadlock
ignorance.
Deadlock Prevention
• Eliminating Mutual Exclusion Condition
• Eliminating Hold and Wait Condition
• Eliminating No Preemption Condition
• Eliminating Circular Wait Condition
Deadlock Prevention
• A deadlock can be prevented by eliminating any one of the four necessary conditions of the
deadlock which results in the inefficient use of resources. Deadlock avoidance approaches
ensure that deadlocks cannot arise/occur in a system, by not allowing the conditions for
deadlocks to hold simultaneously.
• Deadlock avoidance requires that the operating system be given information in advance
regarding the resources a process will request and use. This information is used by the
operating system to schedule the allocation of resources so that no process waits for a
resource.
• Deadlock prevention prevent deadlocks by restraining how requests can be made. The
restraints ensure that at least one of the necessary conditions for deadlock cannot occur,
and hence, that deadlock cannot hold.
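A minimal sketch of breaking circular wait by lock ordering (the lock names and ranks are assumptions for illustration):

import threading

# Give every lock a fixed rank and always acquire in ascending rank order;
# then no process can hold a higher-ranked lock while waiting for a lower one,
# so a circular chain of waits cannot form.
lock_a = threading.Lock()   # rank 1
lock_b = threading.Lock()   # rank 2

def ordered_acquire(*ranked_locks):
    for _, lock in sorted(ranked_locks, key=lambda rl: rl[0]):
        lock.acquire()

ordered_acquire((2, lock_b), (1, lock_a))   # acquired as lock_a, then lock_b
lock_b.release()
lock_a.release()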
Safe State
• A state is safe if the system can allocate all resources requested by all processes without
entering a deadlock state.
• A state is safe if there exists a safe sequence of processes (P0, P1, P2, ..., Pn) such that
the resource requests of each Pi can be satisfied by the currently available resources plus
the resources held by all Pj, with j < i.
• If a safe sequence does not exist, then the system is in an unsafe state, which may lead to
deadlock. (All safe states are deadlock-free, but not all unsafe states lead to deadlocks.)
Banker's Algorithm
Dijkstra was the first to propose an algorithm for deadlock avoidance, in 1965. It is known as
the Banker's algorithm. The Banker's algorithm is a deadlock avoidance algorithm; it is so
named because this kind of algorithm is used in banking systems to determine whether a
loan can be granted or not.
The Banker's algorithm in the operating system can know in advance, before a resource is
allocated to a process, whether granting it can lead to deadlock ("unsafe state") or whether
the system can certainly avoid it ("safe state").
Safety Algorithm
This algorithm is used to find out whether a system is in safe state or not. This algorithm can
be described as follows:
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize Work= Available and
Finish[i] = false for i = 0, 1, ... , n – 1.
2. Find an index i such that both
(i) Finish[i] == false
(ii) Needi≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocation;
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
This algorithm may require an order of m x n2 operations to determine whether a state is safe.
51
Example
A system has five processes P0 through P4 and three resource types A, B, and C. Resource
type A has ten instances, resource type B has five instances, and resource type C has seven
instances. At time T0 the resources are allocated to the processes as given below. [The
Allocation, Max, and Available tables appeared as an image in the original slides; a runnable
sketch with assumed values follows.]
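A runnable sketch of the safety algorithm above. Since the slide's table was not extracted, the Allocation, Max, and Available values below are the classic textbook instance matching this scenario (ten A, five B, seven C), used here as an assumption:

# Matrix values are assumed (classic textbook instance); the slide's table was an image.
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
# Need = Max - Allocation
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]

def is_safe():
    work = available[:]                   # step 1: Work = Available
    finish = [False] * len(allocation)    #         Finish[i] = false for all i
    sequence = []
    while True:
        # step 2: find i with Finish[i] == false and Need_i <= Work
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]   # step 3
                finish[i] = True
                sequence.append(f"P{i}")
                break
        else:                             # step 4: no such i remains
            return all(finish), sequence

print(is_safe())   # -> (True, ['P1', 'P3', 'P0', 'P2', 'P4']), one safe sequence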
Deadlock Recovery
Process Termination:
• Abort all deadlocked processes.
• Abort one process at a time until the deadlock cycle is eliminated.
Resource Preemption:
• Select a process for pre-emption.
• Roll back the process.
• Prevent starvation.
Thank You
Vijay Patil
Department of Computer Engineering (NBA Accredited)
Vidyalankar Polytechnic
Vidyalankar College Marg, Wadala(E), Mumbai 400 037
E-mail: [email protected]