
Lecture 09 - Process Scheduling

The document discusses processor scheduling in operating systems, detailing the roles of long-term, medium-term, and short-term schedulers in managing process execution and resource allocation. It also covers various scheduling algorithms, including First Come First Served (FCFS), Round Robin (RR), and Shortest Process Next (SPN), highlighting their advantages and disadvantages. Additionally, it emphasizes the importance of response time, turnaround time, and system efficiency in achieving effective scheduling outcomes.

Processor Scheduling

Operating Systems – CS x61 1


Dr. Noha Adly
Scheduling
 An OS must allocate resources amongst competing
processes.
 Scheduler: A module in OS to execute scheduling
decisions (for I/O devices, CPU, etc).
 The resource provided by a CPU is execution time
 Whenever the CPU is idle
 scheduler must pick another job that is ready
 jobs that are ready are in some sort of queue
 Scheduling takes place because:
 I/O request - I/O completion
 Process interrupted - Process termination



Types of Processor Scheduling
 Long-term scheduling: whether to add a new process to the set of
processes that are currently active.
 Controls the degree of multiprogramming
 More processes → smaller percentage of time each process is executed
 Fewer processes → the CPU may sit idle
 Which process to add?
 FCFS, priority, mix I/O-bound/CPU-bound process, balance I/O usage
 Medium-term scheduling: whether to add a process to those that are at
least partially in main memory and therefore available for execution.
 Part of the memory management function – swapping
 Swapping-in decisions are based on the need to manage the
performance related to degree of multiprogramming
 Short-term scheduling: which ready process to execute next
 Executes most frequently
 Also known as the dispatcher



Two Suspend States
Remember this diagram

Where do the long/medium/short-term schedulers intervene?


Scheduling and Process State Transitions



Levels of Scheduling



Queuing Diagram for Scheduling



When is short-term scheduling invoked?
 Scheduling absolutely required when:
 A process exits
 A process becomes blocked (on I/O, etc)
 Scheduling may be required when:
 New process created
 I/O interrupt
 Clock interrupt

[State diagram: new →(admitted)→ ready →(scheduled)→ running; running →(interrupt/yield)→ ready; running →(wait for event)→ blocked →(event occurrence)→ ready; running →(exit, kill)→ terminated]


Preemptive vs. Non-Preemptive Algorithms
 Non-preemptive: A process runs until it voluntarily gives up the CPU
a) Terminates
b) Blocks itself waiting for I/O or service
 used when suspending a process is impossible or very expensive
 Preemptive: A Running Process can be interrupted and moved to the
Ready state, then resumed later
 Preemption may occur when
– new process arrives
– on an interrupt that placed a blocked process in Ready
– periodically, based on clock interrupt
 Most OSs use preemptive CPU scheduling, implemented via a timer
interrupt
 -- incurs greater overhead (context switch)
 ++ Allows for better service since it prevents monopolization of CPU
Different Needs

 Batch: no terminal users


 payroll, inventory, interest calculation, etc.
 preemption or non-preemption is acceptable
 Interactive: lots-o-I/O from users
Preemption is needed to prevent one process from monopolizing the
CPU and denying service to others
 Real-time: process must run and complete on time
 Typically real-time only runs processes specific to the application
at hand



Observations

 Processes are dependent on I/O
 dependency level varies
 Processes cycle between compute and wait for I/O
 Process execution is characterized by
 length of CPU burst
 number of bursts

[Figure: histogram of CPU burst duration (frequency vs. burst duration in milliseconds, roughly 0–16 ms) showing that most processes have a large number of short CPU bursts]


CPU and I/O Bursts

CPU-bound: long CPU bursts, occasional waits for I/O
I/O-bound: short CPU bursts, frequent waits for I/O

While waiting for I/O, the CPU is not needed and the process goes to the blocked/waiting state; an interrupt signals return from the I/O operation, and the process is ready to use the CPU again.

As CPUs get faster, processes tend to get more I/O-bound


Response Time
 User productivity tends to improve with a more rapid
response time
 Especially true for expert users
 Becomes very noticeable as response time drops below 1 second
 User time or “think time” – Time user spends thinking
about the response.
 System time – Time system takes to generate its
response.
 Short response times are important
 User think time tends to decrease as system response time
decreases
 If the system seems too slow, the user may slow down or abort
the operation
 Turnaround time (TAT) – total time that an item spends in
the system (waiting time + service time)
Response Times (continued…)

15 seconds or greater: Rules out conversational interaction. Most users will not wait this long; if it cannot be avoided, allow the user to continue on to something else (like foreground/background threads).

4 to 15 seconds: The user generally loses track of the current operation in short-term memory. OK after the end of a major closure.

2 to 4 seconds: Inhibits user concentration and bothers a user who is focused on the job. OK after the end of a minor closure.

1 to 2 seconds: Important when information has to be remembered over several responses; an important limit for terminal activities.

Less than 1 second: Needed to keep the user's attention during thought-intensive activities (example: graphics).

Less than 1/10 second: Needed for responses to key presses or mouse clicks.


Aim of Scheduling

 Assign processes to be executed by the processor(s)


needs to meet system objectives, such as:
 minimize response time
 maximize throughput (jobs/minute)
 maximize processor efficiency (utilization)
 support multiprogramming
 The scheduling function should
 Share time fairly among processes
 Prevent starvation of a process
 Have low overhead
 Prioritise processes when necessary (e.g. real time deadlines)



Evaluation Criteria for Short-Term
Scheduling Policies
 User-oriented -- performance related
 Turnaround time: time from submission to completion (batch jobs)
 Response Time: time between submission of request and the first output.
(interactive users)
 User-oriented -- other
 Predictability: runs the same way (Time & cost) regardless of the system
load
 System-oriented -- Performance related
 Maximize Throughput: rate at which processes are completed (jobs
processed / unit time)
 Processor utilization: percentage of time that the processor is busy
 System-oriented -- other
 Fairness: share time fairly among processes
 No starvation: no process should suffer starvation
 Enforcing priorities: favor higher-priority processes
 Balancing resources: keep system resources busy (mix of I/O-bound and CPU-bound)
 Low overhead



Switching is Expensive

[Figure: switching from process A to process B; switch to kernel, store process A's state, pick the next process, reset the MMU, load process B's state, return to user mode]

• Overhead
• Direct cost: time to actually perform the switch
• Indirect cost: performance hit from memory (invalidated and reloaded caches, swapped-out pages, etc.)
→ doing too many process switches can chew up CPU time
Scheduling Algorithms

1. First Come First Served (FCFS)


2. Round Robin (RR) – time slicing.
3. Shortest Process Next (SPN)
4. Shortest Remaining Time (SRT)
5. Highest Response Ratio Next (HRRN)
6. Feedback (FB)



FCFS Scheduling

• First Come, First Served.


• Assume non-preemptive version
• Example: Jobs are P1 (duration: 24 units); P2 (3); and
P3 (3), arriving in this order at time 0

P1 P2 P3

0 24 27 30

• Average waiting time: (0 + 24 + 27)/3 = 17



FCFS: Continued

What if execution order is P2 , P3 , P1 ?

P2 P3 P1

0 3 6 30

 Average waiting time: (6 + 0 + 3)/3 = 3


 Much better!
 Convoy effect: many short jobs stuck behind one long job
 ++ Easy to implement: a queue as a singly linked list; pick the
process at the front of the queue to service, and place arriving
processes at the end
 ++ Fair
 -- penalize short jobs in the presence of long jobs
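The two orderings above can be checked with a short sketch (Python here purely for illustration; `fcfs_waits` is a made-up helper name, not part of any OS API):

```python
def fcfs_waits(bursts):
    """Waiting time of each job under FCFS when all jobs arrive at time 0,
    queued in list order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a job waits for everything queued before it
        clock += burst
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3): average wait = (0 + 24 + 27) / 3 = 17
print(sum(fcfs_waits([24, 3, 3])) / 3)   # 17.0
# Order P2, P3, P1: average wait = (0 + 3 + 6) / 3 = 3
print(sum(fcfs_waits([3, 3, 24])) / 3)   # 3.0
```

Reordering the same three jobs changes only who absorbs the waiting, which is exactly the convoy effect.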
 turnaround time = total time that the job spends in the system
 normalized turnaround time = ratio of turnaround time to
service time; indicates the relative delay experienced
 The normalized turnaround time for process Y is the worst: its
total time in the system is 100 times the required processing time.
 Long processes do not suffer as badly: process Z has a
turnaround time almost double that of Y, but its normalized
turnaround time is under 2.0
Process Scheduling Example

• Example set of
processes, consider
each a batch job

– Service time represents total execution time



First-Come-First-Served FCFS

 Each process joins the Ready queue


 Non-preemptive: a process relinquishes the CPU only when it blocks for I/O or
exits
 The oldest process in the Ready queue is selected to run next
 Disadvantages:
 A short process may have to wait a long time before it can execute
 Favors CPU-bound processes: I/O-bound processes have to wait until the CPU-
bound process completes
 Often combined with a priority scheme to provide an effective
scheduler



Round Robin

 Each process gets a small unit of CPU time (time


quantum/ time slice), usually 20-50 milliseconds.
 Preemption based on a clock (time slicing)
 Clock interrupt is generated at periodic intervals
 When an interrupt occurs, the currently running process
is placed in the Ready queue and next ready job is
selected based on FCFS to run



Effect of Length Of Quantum
 If too short
++ short processes will move quickly
-- too much overhead (context switching), reducing
CPU efficiency
 If quantum is 4ms, Context Switch cost is
1ms
 For every 4ms of work, kernel does 1ms
→ 20% overhead
→ avoid short quantum
 If too long
 Suppose time quantum is 100ms
→ Now overhead is 1%
-- Poor performance to short interactive requests
 Guide: quantum should be slightly greater than the
time required for a typical interaction
 So apps block before quantum expires, context
switching is done only once
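The quantum/overhead arithmetic on this slide reduces to one formula, under the worst-case assumption that every process runs its full quantum (function name is illustrative):

```python
def switch_overhead(quantum_ms, switch_ms):
    """Fraction of CPU time lost to context switching when every process
    uses its full quantum before being preempted."""
    return switch_ms / (quantum_ms + switch_ms)

print(switch_overhead(4, 1))     # 0.2 -> 20% overhead with a 4 ms quantum
print(switch_overhead(100, 1))   # about 1% overhead with a 100 ms quantum
```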
‘Virtual Round Robin’
• RR unfair to I/O-bound processes
• I/O bound give up their time slice while
CPU bound use their complete time
quantum.
• I/O bound have to wait for CPU-bound
to relinquish CPU
• Poor performance for I/O-bound
processes
• inefficient use of I/O devices
• increase in the variance of response
time
• Could be improved using a Virtual
Round Robin (VRR) scheduling scheme
• Add an auxiliary queue where processes
are placed after being released from an I/O block
• At dispatch, processes in auxiliary
queue get preference over the main
queue.



Shortest Process Next (SPN)
 Also known as Shortest Job First
 Non-preemptive
 Assume each job has a known execution time: provided or estimated
 Process with shortest expected processing time is selected
next
 Short process jumps ahead of longer processes
 Performance
 Improvement in response time
 Variability of response time increases, especially for longer
processes; thus predictability of longer processes is
reduced
 starvation for longer processes, if there is a steady supply
of short processes
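A minimal non-preemptive SPN sketch (assuming known service times, as the slide does); run on the five-process example used later in this deck, it reproduces the finish times 3, 9, 15, 20, 11:

```python
def spn_schedule(jobs):
    """Non-preemptive SPN. jobs maps name -> (arrival, service time);
    returns finish time per job."""
    pending = dict(jobs)
    finish, clock = {}, 0
    while pending:
        ready = [n for n in pending if pending[n][0] <= clock]
        if not ready:                        # CPU idle until the next arrival
            clock = min(a for a, s in pending.values())
            continue
        name = min(ready, key=lambda n: pending[n][1])   # shortest service time
        clock += pending.pop(name)[1]
        finish[name] = clock
    return finish

# The five-process example used later in this deck (A..E):
jobs = {"A": (0, 3), "B": (2, 6), "C": (4, 4), "D": (6, 5), "E": (8, 2)}
print(spn_schedule(jobs))   # finish times: A=3, B=9, E=11, C=15, D=20
```

Note how E (2 units) jumps ahead of C and D the moment the CPU is free at time 9.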



Estimating Execution Time
• SPN needs to estimate the required processing time of a process
• For batch jobs, require a programmer’s estimate.
•If estimate is substantially off, system may abort job.
•In an interactive environment, OS may keep a running average of
each “burst” for each process.
• Estimate based on past behavior

 Ti = actual CPU execution time for the ith instance of this process
 Si = predicted value for the ith instance
 S1 = predicted value for the first instance; not calculated
 Simple averaging: S(n+1) = (1/n)(T1 + T2 + … + Tn)
 But this formula gives equal weight to each observation, while it is
desirable to give more weight to more recent observations



Estimating Execution Time

 A common technique for predicting a future value on the basis of a
time series of past values is exponential averaging:

S(n+1) = α·Tn + (1 − α)·Sn

 Where α is a constant weighting factor, 0 < α < 1, determining the
relative weight given to recent observations relative to older ones
 Expanding the above equation:

S(n+1) = α·Tn + α(1 − α)·T(n−1) + … + α(1 − α)^i·T(n−i) + … + (1 − α)^n·S1

 For α = 0.8:

S(n+1) = 0.8·Tn + 0.16·T(n−1) + 0.032·T(n−2) + 0.0064·T(n−3) + …

 The older the observation, the less it counts toward the average


 α decides whether the estimation process forgets old runs quickly or
remembers them for a long time
 This technique is sometimes called aging.
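The recurrence can be written directly in a few lines (a sketch; `exp_average` is an invented name):

```python
def exp_average(bursts, alpha, s1):
    """Prediction after each observed burst: S(n+1) = alpha*T(n) + (1-alpha)*S(n),
    starting from the initial guess s1."""
    s, predictions = s1, []
    for t in bursts:
        s = alpha * t + (1 - alpha) * s
        predictions.append(s)
    return predictions

# With alpha = 0.5, an initial guess of 10 and observed bursts of 6 and 4:
print(exp_average([6, 4], 0.5, 10))   # [8.0, 6.0]
```

Each new prediction moves a fraction α of the way toward the burst just observed.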



Exponential Smoothing Coefficients

• The larger α, the greater the weight given to the more recent observations
• α = 0.8, most of the weight is given to the four most recent observations
• α = 0.2, the averaging is spread out over the eight or so most recent
observations
• The advantage of using a value of α close to 1 is that the average will quickly
reflect a rapid change in the observed quantity
• The disadvantage is that if there is a brief surge in the value of the observed
quantity and it then settles back to some average value, the use of a large
value of α will result in jerky changes in the average
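The spread described above can be seen by printing the weight α(1 − α)^i given to the observation i steps in the past (illustrative sketch):

```python
def weights(alpha, n):
    """Weight that exponential averaging gives to the observation
    i steps in the past: alpha * (1 - alpha)**i, for i = 0..n-1."""
    return [alpha * (1 - alpha) ** i for i in range(n)]

print([round(w, 4) for w in weights(0.8, 4)])  # [0.8, 0.16, 0.032, 0.0064]
print([round(w, 4) for w in weights(0.2, 4)])  # [0.2, 0.16, 0.128, 0.1024]
```

With α = 0.8 the four most recent observations carry almost all the weight; with α = 0.2 the weight decays far more slowly.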
Use Of Exponential Averaging

• The exponential averaging tracks changes in process behaviour faster


than does simple averaging
• the larger value of α results in a more rapid reaction to the change in
the observed value



Shortest Remaining Time (SRT)

 Preemptive version of Shortest Process Next (SPN)


 If a new process arrives with an estimated processing time less than the
remaining time of the currently executing process → preempt the running process
 Must estimate processing time and choose the shortest
 Achieves better turnaround time than SPN, because a short job is given
immediate preference over a running longer job
 SRT does not bias in favor of long processes (as does FCFS)
 Unlike RR, no additional interrupts are generated reducing overhead.
 But elapsed service times must be recorded, contributing to overhead
 Risk of starvation for long jobs
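A unit-time SRT simulation (a sketch, assuming known service times); on the five-process example used later in this deck it yields the finish times 3, 15, 8, 20, 10:

```python
def srt_schedule(jobs):
    """Preemptive shortest-remaining-time, simulated one time unit at a time.
    jobs maps name -> (arrival, service time); returns finish time per job."""
    remaining = {n: s for n, (a, s) in jobs.items()}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if jobs[n][0] <= clock]
        if not ready:                 # CPU idle until the next arrival
            clock += 1
            continue
        n = min(ready, key=lambda x: remaining[x])   # shortest remaining time
        remaining[n] -= 1
        clock += 1
        if remaining[n] == 0:
            del remaining[n]
            finish[n] = clock
    return finish

jobs = {"A": (0, 3), "B": (2, 6), "C": (4, 4), "D": (6, 5), "E": (8, 2)}
print(srt_schedule(jobs))   # finish times: A=3, C=8, E=10, B=15, D=20
```

B is preempted at time 4 when C arrives with less remaining work, which is exactly where SRT beats SPN on turnaround.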



Highest Response Ratio Next (HRRN)
 An important performance criteria to minimize is:
Normalized TurnAround Time = TAT / ServiceTime
 We can approximate it based on past history. So consider

R = (w + s) / s

where w = time spent waiting for the processor and s = expected service time
 Choose the process with the highest response ratio R to run next
 Attractive because it accounts for the age of a process.
 While shorter jobs are favored (a smaller denominator yields a larger
ratio), aging without service increases the ratio so that a longer
process will eventually get past competing shorter jobs.
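The selection rule is a one-liner (sketch; `hrrn_pick` is an invented name):

```python
def hrrn_pick(now, jobs):
    """Pick the job with the highest response ratio R = (w + s) / s,
    where w = now - arrival (waiting time) and s = expected service time.
    jobs maps name -> (arrival, expected service)."""
    def ratio(name):
        arrival, service = jobs[name]
        return ((now - arrival) + service) / service
    return max(jobs, key=ratio)

# A short job wins early, but a long job's ratio grows while it waits:
print(hrrn_pick(2, {"long": (0, 10), "short": (0, 2)}))    # short (R 2.0 vs 1.2)
print(hrrn_pick(10, {"long": (0, 10), "short": (9, 2)}))   # long  (R 2.0 vs 1.5)
```

The second call shows the aging effect: after waiting long enough, the long job overtakes a fresh short one.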



Priorities
 Processes are assigned priorities
 Scheduler will choose a Ready
process of higher priority
 Have multiple ready queues to
represent each level of priority
 Scheduler starts with RQ0
 If RQ0 is empty, then RQ1 is
examined
 Lower-priority processes may suffer starvation
 Allow a process to change its
priority based on its age or
execution history
 Note: All scheduling policies are
priority scheduling!
 Question: How to assign priorities?



Example for Priorities

Static priorities can lead to starvation!


Feedback Scheduling
 If it is hard to predict the remaining time of a process,
then SPN, SRT, and HRRN cannot be used.
 Instead, give preference to shorter jobs by penalizing jobs
that have been running longer.
 i.e. if we cannot focus on time remaining, focus on time spent in execution so far.

 Feedback scheduling is done on a preemptive basis with a dynamic priority


mechanism.
 A process is placed in RQ0, after 1st preemption, it is placed in RQ1
 A process is demoted to the next lower-priority queue each time it is
preempted and returns to the ready queue
 Within each queue, FCFS is used, except that once in the lowest-priority queue, a
process cannot go lower and is treated in a RR fashion.
 With this strategy
 A short process will complete quickly
 a longer process will gradually drift downward
 Newer, shorter processes are favored over older and longer
 Turnaround time for longer processes can be intolerable.
 Starvation may occur if new jobs enter the system frequently
Feedback Performance

 Variation 1: perform preemption in the same fashion as RR: at periodic


intervals. Example q=1, behaviour similar to RR
 Variation 2: vary the preemption times according to the queue:
 A process scheduled from RQ0 is allowed to execute for one time unit, then is
preempted;
 a process scheduled from RQ1 is allowed to execute two time units, and so on.
 A longer process may still suffer
 Variation 3: promote a process to a higher-priority queue after it
spends a certain amount of time waiting for service in its current queue
Multilevel Feedback Queue

 A process can move between the various


queues; aging can be implemented this way
 Multilevel-feedback-queue scheduler defined by
the following parameters:
 number of queues
 scheduling algorithms for each queue
 method used to determine when to upgrade a process
 method used to determine when to demote a process
 method used to determine which queue a process will
enter when that process needs service



Example of MFQ

 Three queues:
 Q0 – RR with time quantum 8 milliseconds
 Q1 – RR time quantum 16 milliseconds
 Q2 – FCFS
 Scheduling
 A new job enters queue Q0 which is served FCFS. When it gains
CPU, job receives 8 milliseconds. If it does not finish in 8
milliseconds, job is moved to queue Q1.
 At Q1 job is again served FCFS and receives 16 additional
milliseconds. If it still does not complete, it is preempted and
moved to queue Q2.
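The three-queue example can be sketched as follows (a simplified model: all jobs are assumed to arrive at time 0, so preemption of a low-queue job by a new arrival in a higher queue is not modelled):

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    """Sketch of the three-queue example: RR with q=8 in Q0, RR with q=16
    in Q1, then FCFS (run to completion) in Q2."""
    queues = [deque(range(len(bursts))), deque(), deque()]
    remaining = list(bursts)
    finish = [0] * len(bursts)
    clock = 0
    while any(queues):
        level = next(q for q in range(3) if queues[q])   # highest non-empty queue
        i = queues[level].popleft()
        run = remaining[i] if level == 2 else min(quanta[level], remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queues[level + 1].append(i)   # used its full quantum: demote
        else:
            finish[i] = clock
    return finish

# A 5 ms job finishes in Q0; a 30 ms job drifts down to the FCFS queue:
print(mlfq([5, 30]))   # [5, 35]
```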



Comparison of the policies on the five-process example:

Process              A      B      C      D      E    Mean
Arrival time         0      2      4      6      8
Service time (Ts)    3      6      4      5      2

FCFS
  Finish time        3      9     13     18     20
  Turnaround (Tr)    3      7      9     12     12    8.60
  Tr / Ts         1.00   1.17   2.25   2.40   6.00    2.56
RR (q = 1)
  Finish time        4     18     17     20     15
  Turnaround (Tr)    4     16     13     14      7   10.80
  Tr / Ts         1.33   2.67   3.25   2.80   3.50    2.71
RR (q = 4)
  Finish time        3     17     11     20     19
  Turnaround (Tr)    3     15      7     14     11   10.00
  Tr / Ts         1.00   2.50   1.75   2.80   5.50    2.71
SPN
  Finish time        3      9     15     20     11
  Turnaround (Tr)    3      7     11     14      3    7.60
  Tr / Ts         1.00   1.17   2.75   2.80   1.50    1.84
SRT
  Finish time        3     15      8     20     10
  Turnaround (Tr)    3     13      4     14      2    7.20
  Tr / Ts         1.00   2.17   1.00   2.80   1.00    1.59
HRRN
  Finish time        3      9     13     20     15
  Turnaround (Tr)    3      7      9     14      7    8.00
  Tr / Ts         1.00   1.17   2.25   2.80   3.50    2.14
Feedback (q = 1)
  Finish time        4     20     16     19     11
  Turnaround (Tr)    4     18     12     13      3   10.00
  Tr / Ts         1.33   3.00   3.00   2.60   1.50    2.29
Feedback (q = 2^(i−1))
  Finish time        4     17     18     20     14
  Turnaround (Tr)    4     15     14     14      6   10.60
  Tr / Ts         1.33   2.50   3.50   2.80   3.00    2.63


Algorithms

 First Come First Served


 Processes queued in order of arrival
 Runs until finished or blocks on I/O
 Tends to penalize short processes (have to wait for long
processes)
 Favors CPU-bound processes (I/O-bound processes quickly block)
 Round Robin
 FCFS with preemption
 Size of the time slice affects performance
 Favors processor-bound processes
 Virtual Round Robin
 Second queue for formerly blocked processes – given priority
 At end of time slice, add to end of standard queue
Algorithms (continued…)

 Shortest Process Next


 Select process with shortest expected running time (non-
preemptive)
 Difficult to estimate required time (keep history)
 Tends to be less predictable
 Can starve long processes
 Short processes may still wait if a long process has just started
 Shortest Remaining Time
 Preemptive version of Shortest Process Next
 May switch processes when a new process arrives
 Still may starve long processes



Algorithms (continued…)

 Highest Response Ratio Next


 Non-preemptive, tries to get best average normalized turnaround
time
 Depends on the response ratio R = (W + S) / S
 W = time spent waiting
 S = expected service time
 Select the process with the highest R
 Feedback
 Starts in high-priority queue, moves down in priority as it executes
 Lower-priority queues often given longer time slices
 Can starve long processes
 Move up processes if they wait too long



Fair-Share Scheduling (FSS)

 User’s application runs as a collection of processes (threads)


 User is concerned about the performance of the application
 Need to make scheduling decisions based on groups of processes
belonging to an application
 Each user is assigned a share of the processor
 Concept is extended to groups of users
 Objective is to monitor usage to give fewer resources to users who
have had more than their fair share and more to those who have had
less than their fair share
 FSS is based on
 The execution history of each process
 The execution history of a related group of processes



Fair-Share Scheduling
 FSS is based on dynamic priority, considering
 Base priority of the process
 Recent processor usage of the process
 Recent processor usage of the group the process belongs to

 The higher the numerical value, the lower the priority


 The priority of a process drops:
 As the process uses the CPU, and
 As the group the process belongs to uses the CPU
Fair-Share Scheduler
• A is in one group, B and C in another group,
each group has a weighting of 0.5
• A,B,C are CPU-bound →usually ready to run
• All processes have a base priority of 60
• Processor utilization is measured as follows:
• CPU is interrupted 60 times per second;
• during each interrupt, the processor usage
field of the currently running process is
incremented, as is the corresponding group
processor field.
• Once per second, priorities are recalculated
• In the figure, process A is scheduled first.
• At the end of one second, it is preempted.
• Processes B and C now have the higher
priority, and process B is scheduled.
• At the end of the second time unit, process A has
the highest priority.
• The pattern repeats: A, B, A, C, A, B, and so on.
Scheduling Algorithm Evaluation

 How do we pick an algorithm?


 Many algorithms
 Many parameters
 Maximize or Minimize some criteria.
 Utilization, throughput, etc.
 What do we base our choice on?
 A single example?



Deterministic Modeling

 Given a predetermined workload, simulate


 run the workload through different schedulers
 calculate statistics

Process Burst Time


P1 10
P2 29
P3 3
P4 7
P5 12



Algorithm Evaluation: Deterministic
Modeling
FCFS P1 P2 P3 P4 P5

Ave Wait = (0 + 10 + 39 + 42 + 49)/5 = 28

SPN P3 P4 P1 P5 P2

Ave Wait = (10 + 32 + 0 + 3 + 20)/5 = 13

RR/10 P1 P2 P3 P4 P5 P2 P5 P2

Ave Wait =(0 + 32 + 20 + 23 + 40)/5 = 23
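The FCFS and SPN averages above can be reproduced in a few lines (RR would need a fuller simulation; `avg_wait` is an invented helper name):

```python
bursts = {"P1": 10, "P2": 29, "P3": 3, "P4": 7, "P5": 12}

def avg_wait(order):
    """Average waiting time when jobs run to completion in the given order
    (all arrive at time 0, non-preemptive)."""
    clock = total = 0
    for name in order:
        total += clock          # this job waited for everything before it
        clock += bursts[name]
    return total / len(order)

print(avg_wait(["P1", "P2", "P3", "P4", "P5"]))   # FCFS order: 28.0
print(avg_wait(sorted(bursts, key=bursts.get)))   # SPN order:  13.0
```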

What about context switch overhead?


Advantages: simple, fast, exact results
Disadvantages: too specific; too much exact knowledge required; results tied to the example data
Queuing Theory

 Using statistics, we can determine the distribution of CPU


and I/O bursts.
 Probability distribution function
 The result is a mathematical formula which describes the
probability of a particular burst
 Mathematics can then tell us performance
 A computer system can be described as a network of
servers
 each server has a queue of waiting processes
 Simply add an imaginary server to each queue.
 Now, we can compute statistics
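As a toy illustration of this approach, the textbook M/M/1 single-server model (an assumed example; the slide only mentions queuing networks in general) gives performance directly from the arrival rate λ and service rate μ:

```python
def mm1_stats(lam, mu):
    """Classic M/M/1 results: utilization, mean number in the system (L),
    and mean time in the system (W = L / lam, by Little's law)."""
    rho = lam / mu            # server utilization; must be < 1 for stability
    L = rho / (1 - rho)       # mean number of processes in the system
    W = L / lam               # mean time each process spends in the system
    return rho, L, W

# 8 jobs/sec arriving, server handles 10 jobs/sec:
rho, L, W = mm1_stats(8, 10)
print(rho, L, W)   # roughly 0.8 utilization, 4 jobs in system, 0.5 sec each
```

Note how sharply L grows as ρ approaches 1, which is why high utilization and short response times pull in opposite directions.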



Thread Scheduling

 When processes have multiple threads, there are two


levels of parallelism
 Processes
 Threads
 Scheduling depends on how threads are supported
 User-level Threads (ULT)
 Kernel-Level Threads (KLT)



Thread Scheduling

 User-Level Threads
 Kernel schedules on process level, unaware of threads
 Kernel picks process A, giving it control to its quantum
 Thread scheduler inside process A
 Decides which thread to run, say A1
 No clock interrupts to multiprogram threads, so the thread
scheduler cannot interrupt a thread
 If A1 uses up its quantum, the kernel selects another process to run
 If A1 blocks, the thread scheduler chooses another thread

 Kernel Level Threads


 Kernel picks a thread to run, whichever process it belongs to
 Thread is given a quantum and preempted if it uses it up, or if it blocks
 Another thread is picked by kernel
(a) Possible scheduling of user-level threads with a 50-msec
process quantum and threads that run 5 msec per CPU burst.
(b) Possible scheduling of kernel-level threads with the same
characteristics as (a).
ULT vs KLT

 KLT requires a full context switch, while ULT performs a thread switch
in a handful of machine instructions
 KLT can be made more complex
 Provide Kernel with identity of threads within processes and make
decision accordingly
 Given 2 threads with same priority, give higher priority to the thread that
avoids context switch
 KLT has the choice to schedule from threads of all processes,
maximizing balancing resources
 ULT can employ application-specific thread schedule, tuning an
application better than KLT.
 Example: a Web server with a blocked worker thread, choosing between the
dispatcher thread and two worker threads: which one to choose? A ULT scheduler
can choose the dispatcher so it can then start another worker; the KLT scheduler
does not know this



Traditional UNIX Scheduling
 Designed to provide good response time for interactive users while
ensuring that low-priority background jobs do not starve
 Multilevel Feedback (Fair-share scheduling) using round robin within
each of the priority queues.
 1-second preemption
 Priority is based on process type and execution history
 Priority divides all processes into fixed bands of priority levels,
optimizing access to block devices (disks) while responding quickly to
systems calls
 Priorities are recomputed once per second.
• In decreasing order of priority, the bands are as follows:
• Swapper
• Block I/O device control
• File manipulation
• Character I/O device control
• User processes
Scheduling Formula

• Base: divides processes into fixed bands of priority levels


• nice: allows a user to voluntarily reduce the priority of his process, in order to
be nice to the other users. Nobody ever uses it
• Restrictions placed on CPU and nice to prevent a process from getting too far
away from its base priority band
• within the priority band, the use of execution history penalizes CPU-bound
processes to the benefit of I/O-bound processes
Example of Traditional UNIX Process Scheduling

• Processes A, B, and C are


created at the same time
with base priorities of 60
• The clock interrupts the
system 60 times per
second and increments a
counter for the running
process.
• The example assumes that
none of the processes
block themselves and that
no other processes are
ready to run.



Solaris Scheduling

• kernel threads
• Six classes of scheduling
• Default: time-sharing based
on a multi-level feedback
queue

