3. CPU-Scheduling (Single Processor Scheduling)
3.1 Basic Concepts of CPU-Scheduling
• Scheduling refers to a set of policies and mechanisms to control the order
of work to be performed by a computer system. Of all the resources of a
computer system that are scheduled before use, the CPU/processor is the
most important.
• Processor Scheduling is the means by which operating systems allocate
processor time for processes.
• The operating system makes three types of scheduling decisions regarding
process execution:
o Long-term Scheduling: Determines when new processes are admitted
to the system.
o Medium-term Scheduling: Determines when a program is brought
partially or fully into main memory for execution.
o Short-term Scheduling: Determines which ready process will get
processor time next and is performed when a ready process is
allocated the processor (dispatched).
• The part of the operating system that makes scheduling decisions is called
the scheduler, and the algorithm it uses is called the scheduling
algorithm.
• In short-term scheduling, the module that gives control of the CPU to the
selected process is called the dispatcher, which performs:
o Process switch (Context Switch)
o Mode Switch: Kernel → User
o Control branching to the proper location in the user program
3.2 Scheduling Criteria
• There are some requirements that should be met by short-term schedulers/scheduling algorithms.
• Some of these requirements are user-oriented, i.e. they relate to the behavior of the system as perceived by the individual user or process, while the rest are system-oriented, i.e. they focus on effective and efficient utilization of the processor.
• Response Time: Minimize the time from submission of a request until the
response begins.
• Turnaround Time: Minimize the interval between submission and
completion.
• Throughput: Maximize the number of processes completed per unit time.
• Fairness: Ensure each process gets a fair share of the CPU, avoiding
starvation.
• Predictability: Processes should execute in approximately the same time
regardless of system load.
• CPU Utilization: Maximize the percentage of time that the CPU is busy.
• Scheduling requirements are interdependent, and some of them are contradictory, so it is impossible to optimize all of them simultaneously. For instance, minimizing response time requires frequent process switching, which increases system overhead and so reduces throughput.
• The design of a scheduling policy therefore involves compromising among competing requirements.
• Short-term scheduling algorithms have two main characteristics:
• Selection function
– It determines which process, among ready processes, is selected
for execution
– It may be based on
• Priority
• Resource requirement
• Execution behavior: time spent in system so far (waiting and
executing), time spent in execution so far, total service time
required by the process
• Decision mode
– It specifies the instants in time at which the selection function is
exercised
– There are two types of decision modes: preemptive and non-preemptive
3.3 Types of Scheduling
• Preemptive Scheduling: Allows processes to be temporarily suspended
and moved to the ready state.
o Ensures acceptable response time and fairness.
o Causes context-switching overhead.
o Events triggering preemption: arrival of a new process, occurrence of
an interrupt, or a clock interrupt.
• Non-Preemptive Scheduling: A process runs until it terminates or blocks
itself.
o Simple and easy to implement.
o Used in early batch systems.
o Efficient (little scheduling overhead), but can result in long response times.
3.4 CPU-Scheduling Algorithms
3.4.1 First-Come-First-Served Scheduling (FCFS)
• Basic Concept
– The process that requested the CPU first is allocated the CPU
and keeps it until it releases it, either upon completion or
when it requests an I/O operation. The process that has been in the
ready queue the longest is selected for running.
– Its selection function is waiting time and it uses the
non-preemptive scheduling/decision mode
– Process execution begins with a CPU burst, followed by an I/O
burst, followed by another CPU burst, then another I/O burst,
and so on.
• Example: Consider the following processes, all arriving at time 0

  Process                         P1    P2    P3
  CPU Burst / Service Time (ms)   24     3     3

• Case i. If they arrive in the order P1, P2, P3

  Process                 P1    P2    P3
  Service Time (Ts)       24     3     3
  Turnaround Time (Tr)    24    27    30
  Response Time            0    24    27
  Tr/Ts                    1     9    10

  Average response time   = (0 + 24 + 27)/3 = 17
  Average turnaround time = (24 + 27 + 30)/3 = 27
  Throughput              = 3/30 = 1/10
• Case ii. If they arrive in the order P3, P2, P1

  Process                 P3    P2    P1
  Service Time (Ts)        3     3    24
  Turnaround Time (Tr)     3     6    30
  Response Time            0     3     6
  Tr/Ts                    1     2  1.25

  Average response time   = (0 + 3 + 6)/3 = 3
  Average turnaround time = (3 + 6 + 30)/3 = 13
  Throughput              = 3/30 = 1/10
• Consider the following processes arriving at times 0, 1, 2, and 3 respectively

  Process                 P1    P2    P3    P4
  Arrival Time (Ta)        0     1     2     3
  Service Time (Ts)        1   100     1   100
  Turnaround Time (Tr)     1   100   100   199
  Response Time            0     0    99    99
  Tr/Ts                    1     1   100  1.99

  Average response time   = (0 + 0 + 99 + 99)/4 = 49.5
  Average turnaround time = (1 + 100 + 100 + 199)/4 = 100
  Throughput              = 4/202
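As an aside, the short Python sketch below recomputes the metrics of this last example under FCFS; the function and variable names are illustrative only, not part of any standard API.

# FCFS sketch: recomputes response time, turnaround time and throughput
# for the example above (arrivals 0, 1, 2, 3 and service times 1, 100, 1, 100).
def fcfs_metrics(processes):
    """processes: list of (name, arrival, service), listed in arrival order."""
    clock = 0
    rows = []
    for name, arrival, service in processes:
        start = max(clock, arrival)          # CPU may sit idle until the process arrives
        finish = start + service
        rows.append({"process": name,
                     "response": start - arrival,      # submission until first run
                     "turnaround": finish - arrival})  # submission until completion
        clock = finish
    return rows, clock

rows, last_finish = fcfs_metrics([("P1", 0, 1), ("P2", 1, 100), ("P3", 2, 1), ("P4", 3, 100)])
n = len(rows)
print(sum(r["response"] for r in rows) / n)    # average response time   = 49.5
print(sum(r["turnaround"] for r in rows) / n)  # average turnaround time = 100.0
print(n, "/", last_finish)                     # throughput = 4 / 202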
• Advantages
– It is the simplest of all non-preemptive scheduling algorithms: process
selection & maintenance of the queue is simple
– There is minimal overhead and no starvation
– It is often combined with priority scheduling to provide efficiency
• Drawbacks
– Poor CPU and I/O utilization: I/O devices sit idle while a long CPU-bound
process holds the CPU, and the CPU may then be underused while that process waits for I/O
– Poor and unpredictable performance: it depends on the arrival of
processes
– Unfair CPU allocation: If a big process is executing, all other processes
will be forced to wait for a long time until the process releases the
CPU. This situation is known as the Convoy Effect
– It performs much better for long processes than short ones
3.4.2 Shortest Job First Scheduling (SJF)
• Basic Concept
– Process with the shortest expected processing time (CPU burst)
is selected next
– Its selection function is execution time and it uses the non-preemptive scheduling/decision mode
• Example: Consider the following processes, all arriving at time 0

  Process            P1    P2    P3
  CPU Burst (ms)     24     3     3

  Execution order: P2, P3, P1 (shortest burst first, FCFS tie-break)

  Process              P1    P2    P3
  Turnaround Time      30     3     6
  Response Time         6     0     3

  Average response time   = (0 + 3 + 6)/3 = 3
  Average turnaround time = (3 + 6 + 30)/3 = 13
  Throughput              = 3/30
• Consider the following processes arriving at times 0, 2, 4, 6, and 8 respectively

  Process                 P1    P2    P3    P4    P5
  Arrival Time (Ta)        0     2     4     6     8
  Service Time (Ts)        3     6     4     5     2
  Turnaround Time (Tr)     3     7    11    14     3
  Response Time            0     1     7     9     1
  Tr/Ts                    1  1.17  2.75   2.8   1.5

  Average response time   = (0 + 1 + 7 + 9 + 1)/5 = 3.6
  Average turnaround time = (3 + 7 + 11 + 14 + 3)/5 = 7.6
  Throughput              = 5/20
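The Python sketch below reproduces these non-preemptive SJF figures for the same workload; the helper names are illustrative.

# Non-preemptive SJF sketch for the workload above
# (arrivals 0, 2, 4, 6, 8 and service times 3, 6, 4, 5, 2).
def sjf_metrics(processes):
    """processes: list of (name, arrival, service)."""
    remaining = list(processes)
    clock, results = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:                          # CPU idle: jump to the next arrival
            clock = min(p[1] for p in remaining)
            continue
        # select the ready process with the shortest service time (FCFS tie-break)
        job = min(ready, key=lambda p: (p[2], p[1]))
        remaining.remove(job)
        name, arrival, service = job
        start = clock
        clock += service
        results[name] = {"response": start - arrival, "turnaround": clock - arrival}
    return results

print(sjf_metrics([("P1", 0, 3), ("P2", 2, 6), ("P3", 4, 4), ("P4", 6, 5), ("P5", 8, 2)]))
# expected turnaround times: P1=3, P2=7, P3=11, P4=14, P5=3 (as in the table)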
• Advantages
– It produces optimal average turnaround time and average response time
– There is minimal overhead
• Drawbacks
– Starvation: some processes may not get the CPU at all as long as there
is a steady supply of shorter processes. This can be mitigated with aging,
e.g. reducing the estimated processing time of a waiting process by a
constant at each allocation so that it eventually becomes short enough to be selected
– It is not desirable for a time-sharing or transaction-processing
environment because of its lack of preemption
– Variability of response time is increased, especially for longer
processes
• Difficulty with SJFS
– Figuring out the shortest process, i.e. the required processing
time of each process; one approach is to
• Use aging: a technique of estimating the next value in a series by
taking the weighted average of the current measured value and the
previous estimate, i.e. S(n+1) = αTn + (1 − α)Sn for some weight 0 < α ≤ 1
• Or estimate the next CPU burst as the simple average of the measured
lengths of the previous CPU bursts:
S(n+1) = (1/n)(T1 + T2 + T3 + … + Tn), where Ti is the processor execution
time for the i-th instance and Si is the predicted value for the i-th instance
– Arrival of processes may not be simultaneous
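As a rough Python illustration of the two estimators above (the weight α = 0.5, the initial guess, and the burst history are assumed values, not part of the original notes):

# Two ways of predicting the next CPU burst, as described above.
def simple_average(bursts):
    """S(n+1) = (1/n)(T1 + ... + Tn): plain average of all measured bursts."""
    return sum(bursts) / len(bursts)

def exponential_average(bursts, alpha=0.5, first_guess=10):
    """Aging: S(n+1) = alpha*Tn + (1 - alpha)*Sn, weighting recent bursts more heavily."""
    estimate = first_guess
    for t in bursts:
        estimate = alpha * t + (1 - alpha) * estimate
    return estimate

history = [6, 4, 6, 4, 13, 13, 13]       # assumed burst history in ms
print(simple_average(history))           # treats old and new bursts equally
print(exponential_average(history))      # adapts faster to the recent longer bursts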
3.4.3 Shortest Remaining Time Scheduling (SRTS)
• Basic Concept
– The process with the shortest expected remaining processing time is selected next
– If a new process arrives with a shorter next CPU burst than what
is left of the currently executing process, the new process gets
the CPU
– Its selection function is remaining execution time and uses
preemptive decision mode
• Example: Consider the following processes arriving at times 0, 2, 4, 6, and 8 respectively

  Process                 P1    P2    P3    P4    P5
  Arrival Time (Ta)        0     2     4     6     8
  Service Time (Ts)        3     6     4     5     2
  Turnaround Time (Tr)     3    13     4    14     2
  Response Time            0     1     0     9     0
  Tr/Ts                    1  2.17     1   2.8     1

  Average response time   = (0 + 1 + 0 + 9 + 0)/5 = 2
  Average turnaround time = (3 + 13 + 4 + 14 + 2)/5 = 7.2
  Throughput              = 5/20
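A small Python sketch, simulating one millisecond at a time, reproduces these SRT numbers; the names are illustrative only.

# Preemptive shortest-remaining-time sketch for the example above
# (arrivals 0, 2, 4, 6, 8; service times 3, 6, 4, 5, 2).
def srt_metrics(processes):
    """processes: list of (name, arrival, service); simulated one ms at a time."""
    remaining = {name: service for name, _, service in processes}
    arrival = {name: arr for name, arr, _ in processes}
    first_run, finish = {}, {}
    clock = 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                 # CPU idle until the next arrival
            clock += 1
            continue
        # run the ready process with the shortest remaining time for one ms
        name = min(ready, key=lambda n: (remaining[n], arrival[n]))
        first_run.setdefault(name, clock)
        remaining[name] -= 1
        clock += 1
        if remaining[name] == 0:
            finish[name] = clock
            del remaining[name]
    return {n: {"response": first_run[n] - arrival[n],
                "turnaround": finish[n] - arrival[n]} for n in finish}

print(srt_metrics([("P1", 0, 3), ("P2", 2, 6), ("P3", 4, 4), ("P4", 6, 5), ("P5", 8, 2)]))
# expected turnaround times: P1=3, P2=13, P3=4, P4=14, P5=2 (as in the table)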
• Advantages
– It gives superior turnaround-time performance compared to SJFS, because
a short job is given immediate preference over a longer process that is
already running
• Drawbacks
– There is a risk of starvation of longer processes
– High overhead due to frequent process switch
3.4.4 Priority Scheduling
• Basic Concept
– Each process is assigned a priority and the runnable process
with the highest priority is allowed to run, i.e. the ready process
with the highest priority is given the CPU.
– It is often convenient to group processes into priority classes
and use priority scheduling among the classes but round robin
scheduling within each class
– Can be preemptive or non-preemptive.
• Example: Consider the following processes, all arriving at time 0
(a larger priority number means a higher priority)

  Process                 P1    P2    P3    P4    P5
  Priority                 2     4     5     3     1
  Service Time (Ts)        3     6     4     5     2
  Turnaround Time (Tr)    18    10     4    15    20
  Response Time           15     4     0    10    18
  Tr/Ts                    6  1.67     1     3    10

  Average response time   = (0 + 15 + 4 + 10 + 18)/5 = 9.4
  Average turnaround time = (18 + 10 + 4 + 15 + 20)/5 = 13.4
  Throughput              = 5/20 = 0.25
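The Python sketch below reproduces this table; names are illustrative, and since every process arrives at time 0, a simple non-preemptive pass in priority order is enough.

# Non-preemptive priority scheduling for the example above;
# a larger priority number means higher priority, all arrivals at time 0.
def priority_metrics(processes):
    """processes: list of (name, priority, service)."""
    clock, results = 0, {}
    for name, prio, service in sorted(processes, key=lambda p: -p[1]):
        results[name] = {"response": clock, "turnaround": clock + service}
        clock += service
    return results

procs = [("P1", 2, 3), ("P2", 4, 6), ("P3", 5, 4), ("P4", 3, 5), ("P5", 1, 2)]
print(priority_metrics(procs))
# execution order P3, P2, P4, P1, P5; e.g. P3 has response 0 and P5 turnaround 20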
• Consider the following priority classes
Processes type: Deans Heads Instructors Secretaries Students
Priority: 5 4 3 2 1
• As long as there are runnable processes in priority level 5, just run
each one for one quantum, round robin fashion, and never bother
with lower priority classes
• If priorities are not adjusted occasionally, lower priority classes may
all starve to death
• Advantages
– It considers the fact that some processes are more important than others, i.e.
it takes external factors into account
• Drawbacks
– A high priority process may run indefinitely and it can prevent all other
processes from running. This creates starvation on other processes.
There are two possible solutions for this problem:
• Assigning a maximum quantum to each process
• Assigning priorities dynamically, i.e. avoid using static priorities
– Assigning a process a priority of 1/q, where q is the fraction of its last
quantum that it actually used (see the short sketch after this drawbacks list)
• A process that used only 2 ms of its 100 ms quantum gets a priority of
1/(2/100) = 50
• A process that used 50 ms of its 100 ms quantum gets a priority of
1/(50/100) = 2
– Decreasing the priority of the currently running process at each clock
tick
• Priority inversion: a lower-priority process holding a needed resource can block a higher-priority one (solved using priority boosting).
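A minimal sketch of the 1/q dynamic-priority rule mentioned above; the function name is illustrative.

# Dynamic priority = 1/q, where q is the fraction of the last quantum used.
def dynamic_priority(used_ms, quantum_ms):
    return 1 / (used_ms / quantum_ms)

print(dynamic_priority(2, 100))   # 50.0: mostly I/O-bound, gets a high priority
print(dynamic_priority(50, 100))  # 2.0: CPU-heavy, gets a low priority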
3.4.5 Round Robin (RR)
• Basic Concept
– A small amount of time called a quantum or time slice is
defined. According to the quantum, a clock interrupt is
generated at periodic intervals. When the interrupt occurs, the
currently running process is placed in the ready queue, and the
next ready process is selected on a FCFS basis.
– The CPU is allocated to each process for a time interval of up to
one quantum. When a process exhausts its quantum it is added to the
tail of the ready queue; when it requests I/O it is moved to the
waiting queue
– The ready queue is treated as a circular queue
– Its selection function is based on quantum and it uses
preemptive decision mode
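For concreteness, here is a small Python sketch of round robin on the workload used earlier (arrivals 0, 2, 4, 6, 8; service times 3, 6, 4, 5, 2); the quantum of 4 ms and all names are assumptions made for illustration.

# Round-robin sketch with a circular ready queue and a fixed quantum (ms).
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, arrival, service), sorted by arrival time."""
    pending = deque(processes)               # processes that have not arrived yet
    ready = deque()                          # circular ready queue
    remaining, arrival, first_run, finish = {}, {}, {}, {}
    for name, arr, service in processes:
        remaining[name], arrival[name] = service, arr
    clock = 0
    while pending or ready:
        while pending and pending[0][1] <= clock:
            ready.append(pending.popleft()[0])
        if not ready:                        # CPU idle until the next arrival
            clock = pending[0][1]
            continue
        name = ready.popleft()
        first_run.setdefault(name, clock)
        run = min(quantum, remaining[name])  # run for up to one quantum
        clock += run
        remaining[name] -= run
        while pending and pending[0][1] <= clock:   # admit arrivals before requeueing
            ready.append(pending.popleft()[0])
        if remaining[name] > 0:
            ready.append(name)               # preempted: back to the tail of the queue
        else:
            finish[name] = clock
    return {n: {"response": first_run[n] - arrival[n],
                "turnaround": finish[n] - arrival[n]} for n in finish}

print(round_robin([("P1", 0, 3), ("P2", 2, 6), ("P3", 4, 4), ("P4", 6, 5), ("P5", 8, 2)],
                  quantum=4))   # per-process response and turnaround times for a 4 ms quantum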
• Advantages
– One of the oldest, simplest, fairest and most widely used preemptive
scheduling algorithms
– Reduces the penalty that short processes suffer with FCFS
• Drawbacks
– CPU-bound processes tend to receive an unfair portion of CPU
time, which results in poor performance for I/O-bound
processes
– It makes the implicit assumption that all processes are equally
important; it does not take external factors into account
– High overhead due to frequent process switches
• Difficulty with RRS
o The length of the quantum should be decided carefully
• E.g. 1: quantum = 20 ms, context switch = 5 ms, context-switch overhead =
5/25 × 100 = 20%
– Poor CPU utilization
– Good interactivity
• E.g. 2: quantum = 500 ms, context switch = 5 ms, context-switch overhead =
5/505 × 100 < 1%
– Improved CPU utilization
– Poor interactivity
• Setting the quantum too short causes
– Poor CPU utilization
– Good interactivity
• Setting the quantum too long causes
– Improved CPU utilization
– Poor interactivity
• A quantum of around 100 ms is often a reasonable compromise
o To guarantee interactivity, a maximum response time (MR) can be specified and the
quantum (m) computed dynamically as follows
• m = MR/n, where n is the number of processes
– A new m is calculated whenever the number of processes changes
• To keep m from becoming too small
– Fix a minimum quantum (min_time_slice) and choose the maximum
of MR/n and min_time_slice: m = max(MR/n, min_time_slice)
– Use biased round robin that gives extra service to high-priority
processes, e.g. process k is given a time slice mk
• mk = max(pk·MR/Σi pi, min_time_slice), where pi is the priority of process i and
a bigger pi means higher priority
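A tiny Python sketch of these two rules follows; the MR and min_time_slice values are assumed, and the biased formula is read as dividing by the sum of all priorities (one plausible interpretation of the notes above).

# Dynamic quantum rules sketched above; MR and MIN_TIME_SLICE are assumed values.
MR = 100                # maximum acceptable response time (ms)
MIN_TIME_SLICE = 10     # lower bound on the quantum (ms)

def quantum(n_processes):
    """m = max(MR/n, min_time_slice), recomputed whenever n changes."""
    return max(MR / n_processes, MIN_TIME_SLICE)

def biased_quantum(priorities, k):
    """Biased RR: process k gets a share of MR proportional to its priority."""
    return max(priorities[k] * MR / sum(priorities.values()), MIN_TIME_SLICE)

print(quantum(4))                                     # 25.0 ms
print(biased_quantum({"A": 3, "B": 1, "C": 1}, "A"))  # 60.0 ms for the high-priority process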
3.5 Multi-level Queue and Multi-Level Feedback Queue Scheduling
3.5.1 Multi-level Queue Scheduling
– A scheduling method that uses multiple queues with different priority levels.
– Ready queue is partitioned into separate queues:
• foreground (interactive)
• background (batch)
– Each queue has its own scheduling algorithm:
• foreground – RR
• background – FCFS
– Scheduling must also be done between the queues.
• Fixed-priority scheduling (i.e., serve everything from the foreground queue, then
from the background queue); possibility of starvation.
• Time slicing: each queue gets a certain amount of CPU time which it can
schedule amongst its own processes, e.g. 80% to the foreground queue (RR)
and 20% to the background queue (FCFS).
• Processes are statically assigned to queues based on specific criteria (e.g.,
priority, process type).
• Does not inherently balance response time and turnaround time; the
balance depends on queue configurations.
• Useful for systems with fixed workloads and distinct categories (e.g., batch
jobs vs. interactive processes).
• Can use fixed-priority scheduling, but this may lead to starvation unless
mitigated by techniques like time slicing.
3.5.2 Multilevel Feedback Queue
• Processes are dynamically moved between queues based on execution
history.
• Provides a balance between response time and turnaround time.
• Useful for general-purpose systems with varied workloads.
• A process can move between the various queues; aging can be implemented
this way.
• A multilevel-feedback-queue scheduler is defined by the following parameters:
– number of queues
– scheduling algorithm for each queue
– method used to determine when to upgrade a process
– method used to determine when to demote a process
– method used to determine which queue a process will enter when that
process needs service
• Example of Multilevel Feedback Queue
• Three queues:
– Q0 – time quantum 8 milliseconds
– Q1 – time quantum 16 milliseconds
– Q2 – FCFS
• Scheduling
– A new job enters queue Q0, which is served FCFS. When it
gains the CPU, the job receives 8 milliseconds. If it does not
finish within 8 milliseconds, it is moved to queue Q1.
– At Q1 the job is again served FCFS and receives 16 additional
milliseconds. If it still does not complete, it is preempted
and moved to queue Q2.
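A compact Python sketch of this three-queue scheme follows; the workload and helper names are illustrative, and every job is assumed to arrive at time 0 and to need only CPU time (no I/O).

# MLFQ sketch: Q0 with an 8 ms quantum, Q1 with a 16 ms quantum, Q2 FCFS.
from collections import deque

def mlfq(jobs, quanta=(8, 16)):
    """jobs: list of (name, total_cpu_ms). Returns {name: completion_time}."""
    remaining = dict(jobs)
    queues = [deque(name for name, _ in jobs), deque(), deque()]  # Q0, Q1, Q2
    finish, clock = {}, 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # serve the highest non-empty queue
        name = queues[level].popleft()
        # Q0 and Q1 grant one quantum; Q2 runs the job to completion (FCFS)
        slice_ms = quanta[level] if level < 2 else remaining[name]
        run = min(slice_ms, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = clock
        else:
            queues[level + 1].append(name)    # did not finish: demote to the next queue
    return finish

print(mlfq([("A", 5), ("B", 30), ("C", 20)]))
# A finishes inside its first 8 ms slice; B and C are demoted to Q1, and B ends up in Q2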
3.6 Comparison of Scheduling Algorithms
  Scheduling Algorithm         Preemptive   Starvation   Complexity   Fairness
  FCFS                         No           No           Low          Poor (long processes delay short ones)
  SJF                          No           Yes          Medium       Unfair to long processes
  SRTS                         Yes          Yes          High         Unfair to long processes
  Priority                     Yes/No       Yes          Medium       Unfair (priority inversion problem)
  Round Robin                  Yes          No           High         Good (time quantum dependent)
  Multi-Level Feedback Queue   Yes          No           High         Good (adaptable)