Lecture 09 - Process Scheduling
[Figure: five-state process model. States: new, ready, running, blocked, terminated. Transitions: admitted (new → ready), scheduled (ready → running), interrupt/yield (running → ready), wait for event (running → blocked), event occurrence (blocked → ready), exit/kill (running → terminated).]
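As an illustrative aside (not part of the slides), the diagram's states and transitions can be encoded as a small lookup function in C; the state and event names mirror the figure labels.

/* Sketch: legal transitions of the five-state process model above. */
#include <string.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } pstate_t;

/* Return the next state for a given (state, event) pair, or -1 if the
 * transition is not allowed in the five-state model. */
static int next_state(pstate_t s, const char *event) {
    if (s == NEW     && !strcmp(event, "admitted"))         return READY;
    if (s == READY   && !strcmp(event, "scheduled"))        return RUNNING;
    if (s == RUNNING && !strcmp(event, "interrupt/yield"))  return READY;
    if (s == RUNNING && !strcmp(event, "wait for event"))   return BLOCKED;
    if (s == BLOCKED && !strcmp(event, "event occurrence")) return READY;
    if (s == RUNNING && !strcmp(event, "exit/kill"))        return TERMINATED;
    return -1;
}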
Processes cycle between periods of CPU computation (CPU bursts) and waiting for I/O.
[Figure: histogram of CPU burst durations; frequency vs. burst duration in milliseconds]
A process is CPU-bound or I/O-bound depending on which kind of burst dominates its execution.
Response time of less than 1 second: needed for keeping the user's attention during thought-intensive activities (example: working with graphics)
Process Switch
[Figure: process A switching to the kernel]
• Overhead
• Direct Cost: time to actually switch
• Indirect Cost: performance hit from the memory system (cache invalidation and reload, swapped-out pages, etc.)
→ doing too many process switches can chew up CPU time
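To make the direct cost concrete, the sketch below (illustrative, not from the slides) bounces one byte between a parent and a child process over two pipes; each round trip forces at least two process switches, so the reported per-round-trip time includes the direct switch cost plus pipe/syscall overhead. The indirect cache/TLB cost is not captured.

/* Sketch: rough estimate of direct process-switch cost via a pipe ping-pong. */
#include <stdio.h>
#include <unistd.h>
#include <time.h>

int main(void) {
    int p2c[2], c2p[2];              /* parent->child and child->parent pipes */
    char b = 0;
    long rounds = 100000;
    pipe(p2c); pipe(c2p);
    if (fork() == 0) {               /* child: echo every byte back */
        for (long i = 0; i < rounds; i++) {
            read(p2c[0], &b, 1);
            write(c2p[1], &b, 1);
        }
        _exit(0);
    }
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < rounds; i++) {   /* parent: ping, wait for pong */
        write(p2c[1], &b, 1);
        read(c2p[0], &b, 1);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("~%.0f ns per round trip (at least 2 switches + pipe overhead)\n",
           ns / rounds);
    return 0;
}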
Scheduling Algorithms
FCFS Gantt chart, arrival order P1, P2, P3 (CPU bursts 24, 3, 3 ms): P1 runs 0–24, P2 runs 24–27, P3 runs 27–30
FCFS Gantt chart, arrival order P2, P3, P1: P2 runs 0–3, P3 runs 3–6, P1 runs 6–30
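A few lines of C reproduce the waiting times behind these two charts (assuming the burst times 24, 3, and 3 ms read off the timelines); the contrast is the usual convoy-effect argument against letting a long job go first.

/* Sketch: average waiting time under FCFS for the two arrival orders above. */
#include <stdio.h>

static double fcfs_avg_wait(const int *burst, int n) {
    int start = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {   /* each job waits for all earlier jobs */
        total_wait += start;
        start += burst[i];
    }
    return (double)total_wait / n;
}

int main(void) {
    int order1[] = {24, 3, 3};      /* P1, P2, P3 -> waits 0, 24, 27 ms */
    int order2[] = {3, 3, 24};      /* P2, P3, P1 -> waits 0, 3, 6 ms   */
    printf("avg wait, order P1 P2 P3: %.1f ms\n", fcfs_avg_wait(order1, 3)); /* 17.0 */
    printf("avg wait, order P2 P3 P1: %.1f ms\n", fcfs_avg_wait(order2, 3)); /*  3.0 */
    return 0;
}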
• Example set of processes; consider each one a batch job
Ti = actual CPU execution time for the ith instance of this process
Si = predicted value for the ith instance
S1 = predicted value for first instance; not calculated
But this formula gives equal weight to each observation, while it is
desirable to give more weight to more recent observations
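Written out with the notation above (a restatement of the standard formulas, not copied from the slides): the simple average being criticized, the exponential average that replaces it, and its expansion showing the geometrically decreasing weights on older observations.

S_{n+1} = \frac{1}{n} \sum_{i=1}^{n} T_i            (simple average: equal weight to every T_i)

S_{n+1} = \alpha T_n + (1 - \alpha) S_n, \quad 0 < \alpha < 1            (exponential averaging)

S_{n+1} = \alpha T_n + \alpha(1-\alpha) T_{n-1} + \cdots + \alpha(1-\alpha)^{n-1} T_1 + (1-\alpha)^n S_1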
• The larger α, the greater the weight given to the more recent observations
• For α = 0.8, most of the weight is given to the four most recent observations
• For α = 0.2, the averaging is spread out over the eight or so most recent observations
• The advantage of using a value of α close to 1 is that the average will quickly
reflect a rapid change in the observed quantity
• The disadvantage is that if there is a brief surge in the value of the observed
quantity and it then settles back to some average value, the use of a large
value of α will result in jerky changes in the average
[Figure: use of exponential averaging; predicted values S_i plotted against observed burst times T_i for different values of α]
Example: multilevel feedback queue with three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR with time quantum 16 milliseconds
Q2 – FCFS
Scheduling
A new job enters queue Q0, which is served FCFS. When it gains the
CPU, the job receives 8 milliseconds. If it does not finish in 8
milliseconds, the job is moved to queue Q1.
At Q1 the job is again served FCFS and receives 16 additional
milliseconds. If it still does not complete, it is preempted and
moved to queue Q2.
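The demotion rule just described can be sketched in a few lines of C (illustrative only; the queue data structures and competing jobs are omitted, and the 30 ms burst in main is hypothetical):

/* Sketch: demotion rule of the three-queue feedback scheduler above. */
#include <stdio.h>

enum { Q0 = 0, Q1 = 1, Q2 = 2 };
static const int quantum[] = { 8, 16, 0 };   /* 0 => run to completion (FCFS) */

/* Run one dispatch of a job with 'remaining' ms left at queue level 'q'.
 * Returns the new queue level, or -1 if the job finished. */
static int dispatch(int *remaining, int q) {
    int slice = quantum[q];
    if (slice == 0 || *remaining <= slice) {  /* finishes within its slice */
        *remaining = 0;
        return -1;
    }
    *remaining -= slice;                      /* used the full quantum: demote */
    return (q < Q2) ? q + 1 : Q2;
}

int main(void) {
    int remaining = 30, q = Q0;               /* hypothetical 30 ms CPU burst */
    while (q != -1) {
        printf("runs in Q%d with %d ms of work left\n", q, remaining);
        q = dispatch(&remaining, q);
    }
    return 0;
}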
[Gantt chart comparison: SPN runs P3, P4, P1, P5, P2; RR with quantum 10 dispatches in the order P1, P2, P3, P4, P5, P2, P5, P2]
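The SPN row comes from a simple rule: among the ready processes, pick the one with the smallest predicted next CPU burst. A hypothetical helper in C (the 'predicted' array would be maintained with the exponential average above):

/* Sketch: Shortest Process Next selection over predicted burst lengths. */
static int spn_pick(const double *predicted, const int *ready, int n) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!ready[i]) continue;                   /* skip blocked/finished jobs */
        if (best < 0 || predicted[i] < predicted[best])
            best = i;                              /* smallest predicted burst so far */
    }
    return best;    /* index of the chosen process, or -1 if none is ready */
}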
User-Level Threads
The kernel schedules at the process level; it is unaware of threads
The kernel picks process A, giving it control for its quantum
A thread scheduler inside process A
decides which thread to run, say A1
There are no clock interrupts to multiprogram threads, so the thread
scheduler cannot preempt a running thread
If A1 uses up the quantum, the kernel selects another process to run
If A1 blocks, the thread scheduler chooses another thread
A KLT switch requires a full context switch, while a ULT thread switch
takes only a handful of machine instructions (see the sketch below)
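As a rough illustration (not from the slides), the core of such a user-level thread scheduler could be a cooperative round-robin picker like the one below; nothing here is driven by a timer interrupt, which is exactly why a ULT cannot be preempted at this level.

/* Sketch: round-robin picker of a cooperative user-level thread scheduler.
 * It only runs when the current thread yields or blocks. */
enum tstate { T_READY, T_BLOCKED, T_DONE };

struct ult {
    enum tstate state;
    /* saved registers / stack pointer would live here (e.g. a ucontext_t) */
};

/* Return the index of the next READY thread after 'current',
 * or -1 if every thread is blocked or finished. */
static int ult_pick_next(struct ult *t, int nthreads, int current) {
    for (int i = 1; i <= nthreads; i++) {
        int cand = (current + i) % nthreads;
        if (t[cand].state == T_READY)
            return cand;
    }
    return -1;   /* nothing runnable: the whole process should block */
}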
KLT scheduling can be made more sophisticated:
Provide the kernel with the identity of the threads within each process and
make scheduling decisions accordingly
Given two threads with the same priority, give higher priority to the thread
that avoids a full context switch
KLT can choose among the threads of all processes, making it better at
balancing resources across the system
ULT can employ an application-specific thread scheduler, tuning the
application better than the KLT scheduler can.
Example: a web server with a blocked worker thread must choose between the
dispatcher thread and two worker threads: which one to choose? The ULT
scheduler can choose the dispatcher so that it can then start another
worker; the KLT scheduler does not know the threads' roles
• Scheduling is done on kernel threads
• Six classes of scheduling
• Default: time-sharing, based on a multi-level feedback queue