Process scheduling is essential for multiprogramming operating systems to allow multiple processes to share CPU time using time multiplexing. A typical process alternates between CPU and I/O bursts. In a multiprogramming system, one process can use the CPU while another is waiting for I/O, improving efficiency over uniprogramming systems. The short-term scheduler selects the next ready process to run based on the scheduling algorithm, such as first-come first-served, shortest job first, or priority scheduling.


 Process scheduling is an essential part of multiprogramming operating
systems. Such operating systems allow more than one process to be loaded into
executable memory at a time, and the loaded processes share the CPU using
time multiplexing.
 A typical process involves both I/O time and CPU time.
 In a uniprogramming system like MS-DOS, time spent waiting for I/O is wasted,
and the CPU sits idle during this time.
 In multiprogramming systems, one process can use CPU while another is waiting
for I/O. This is possible only with process scheduling.
 Process execution begins with a CPU burst. That is followed by
an I/O burst, which is followed by another CPU burst, then
another I/O burst, and so on. Eventually, the final CPU burst
ends with a system request to terminate execution.

 An I/O-bound program typically has many short CPU bursts. A CPU-bound
program might have a few long CPU bursts.
 The short-term scheduler, or CPU scheduler, selects a process from the
processes in memory that are ready to execute and allocates the CPU to that
process.
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, as the
result of an I/O request or an invocation of wait() for the termination of a child process).
2. When a process switches from the running state to the ready state (for example, when an
interrupt occurs)
3. When a process switches from the waiting state to the ready state (for example, at
completion of I/O)
4. When a process terminates
 For conditions 1 and 4 there is no choice: a new process must be selected.
 For conditions 2 and 3 there is a choice: either continue running the current process,
or select a different one.
 If scheduling takes place only under conditions 1 and 4, the system is said to be
nonpreemptive, or cooperative. Under these conditions, once a process starts running it
keeps running until it either voluntarily blocks or finishes. Otherwise the system
is said to be preemptive.
The dispatcher is the module that gives control of the CPU to the process selected by
the scheduler. This function involves:
• Switching context.
• Switching to user mode.
• Jumping to the proper location in the newly loaded program.
The dispatcher needs to be as fast as possible, as it is run on every context switch.
Dispatch latency is the amount of time required for the dispatcher to stop one process and start another.
 Different CPU-scheduling algorithms have different properties, and the choice
of a particular algorithm may favor one class of processes over another.
 Which characteristics are used for comparison can make a substantial difference
in which algorithm is judged to be best.
There are several different criteria to consider when trying to select the "best" scheduling algorithm for
a particular situation and environment, including:
CPU utilization - Ideally the CPU would be busy 100% of the time, so as to waste 0 CPU cycles.
On a real system CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
Throughput - Number of processes completed per unit time. May range from 10/second to 1/hour
depending on the specific processes.
Turnaround time - Time required for a particular process to complete, from submission time to
completion. (Wall clock time.)
Waiting time - The sum of the times processes spend in the ready queue waiting their turn to get
on the CPU.
Response time - Amount of time it takes from when a request is submitted until the first
response is produced. Remember, it is the time till the first response and not the completion of
process execution (final response).
• In general, one wants to optimize the average value of a
criterion (maximize CPU utilization and throughput, and
minimize all the others). However, sometimes one wants
to do something different, such as to minimize the
maximum response time.

• Sometimes it is more desirable to minimize the variance of
a criterion than to optimize its average value; i.e., users are
more accepting of a consistent, predictable system than an
inconsistent one, even if it is a little bit slower.
Scheduling Algorithms
First-Come, First-Served Scheduling
➢ First-come, first-served (FCFS) is the simplest scheduling algorithm.
➢ The process that requests the CPU first is allocated the CPU first. The
implementation of the FCFS policy is easily managed with a FIFO queue.
➢ When a process enters the ready queue, its PCB is linked onto the tail of
the queue. When the CPU is free, it is allocated to the process at the head
of the queue.
➢ The running process is then removed from the queue.
➢ On the negative side, the average waiting time under the FCFS policy is
often quite long.
Consider the following three processes, all arriving at time 0:

Process   Burst Time (ms)
P1        24
P2        3
P3        3

In the first Gantt chart below, process P1 arrives first. The average waiting time for the three processes
is (0 + 24 + 27) / 3 = 17.0 ms.

In the second Gantt chart below, the same three processes have an average wait time of
(0 + 3 + 6) / 3 = 3.0 ms. This reduction is substantial.

Thus, the average waiting time under an FCFS policy is generally not minimal and may vary
substantially if the processes’ CPU burst times vary greatly.
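The effect of arrival order on FCFS waiting time can be checked with a short sketch (Python; the helper name `fcfs_waits` is our own, and the burst times are the ones from the example above):

```python
# Minimal sketch of the FCFS waiting-time calculation.
# Burst times follow the example: P1 = 24, P2 = 3, P3 = 3 (ms).

def fcfs_waits(bursts):
    """Return per-process waiting times for FCFS, given bursts in arrival order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # each process waits for all earlier bursts
        clock += burst
    return waits

order1 = fcfs_waits([24, 3, 3])   # P1 first -> waits [0, 24, 27], average 17.0
order2 = fcfs_waits([3, 3, 24])   # P1 last  -> waits [0, 3, 6],   average 3.0
```

Running the long process last rather than first cuts the average wait from 17.0 ms to 3.0 ms, which is exactly the reduction the two Gantt charts illustrate.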
Consider the snapshot of a system given here.

A Gantt chart is a horizontal bar chart developed as a production control tool in 1917 by Henry L. Gantt, an American
engineer and social scientist.
Consider the set of 5 processes whose arrival time and burst time are given below. If the CPU
scheduling policy is FCFS, calculate the average waiting time and average turn around time.

Process   Arrival Time   Burst Time
P1        3              4
P2        5              3
P3        0              2
P4        5              1
P5        4              3

➢ Turn Around time = Exit time – Arrival time
➢ Waiting time = Turn Around time – Burst time

Average Turn Around time = (4 + 8 + 2 + 9 + 6) / 5 = 29 / 5 = 5.8
Average waiting time = (0 + 5 + 0 + 8 + 3) / 5 = 16 / 5 = 3.2
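The same answers can be reproduced with a sketch that applies the two formulas above (Python; the helper name `fcfs_schedule` is our own):

```python
# FCFS with arrival times: Turn Around = Exit - Arrival, Waiting = TAT - Burst.

def fcfs_schedule(procs):
    """procs: list of (name, arrival, burst). Returns {name: (turnaround, waiting)}."""
    clock, result = 0, {}
    for name, arrival, burst in sorted(procs, key=lambda p: (p[1], p[0])):
        start = max(clock, arrival)       # CPU may sit idle until the next arrival
        clock = start + burst             # exit time of this process
        turnaround = clock - arrival      # Turn Around time = Exit - Arrival
        result[name] = (turnaround, turnaround - burst)  # Waiting = TAT - Burst
    return result

procs = [("P1", 3, 4), ("P2", 5, 3), ("P3", 0, 2), ("P4", 5, 1), ("P5", 4, 3)]
times = fcfs_schedule(procs)
avg_tat = sum(t for t, _ in times.values()) / 5   # 5.8
avg_wt = sum(w for _, w in times.values()) / 5    # 3.2
```

Note that the CPU is idle from time 2 to time 3 (P3 finishes before P1 arrives), which the `max(clock, arrival)` step accounts for.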
➢ In a busy dynamic system, FCFS can also hurt performance in another way, known as the
convoy effect.
§ When one CPU intensive process blocks the CPU, a number of I/O intensive processes can
get backed up behind it, leaving the I/O devices idle.

§ When the CPU hog finally relinquishes the CPU, then the I/O processes pass through the
CPU quickly, leaving the CPU idle while everyone queues up for I/O, and then the cycle
repeats itself when the CPU intensive process gets back to the ready queue.

➢ The FCFS scheduling algorithm is nonpreemptive.


§ Once the CPU has been allocated to a process, that process keeps the CPU until it releases
the CPU, either by terminating or by requesting I/O.

§ The FCFS algorithm is thus particularly troublesome for time-sharing systems, where it is
important that each user get a share of the CPU at regular intervals.
Shortest-Job-First Scheduling
➢ The shortest-job-first (SJF) scheduling algorithm associates with
each process the length of the process’s next CPU burst.

➢ When the CPU is available, it is assigned to the process that
has the smallest next CPU burst. If the next CPU bursts of
two processes are the same, FCFS scheduling is used to
break the tie.

➢ SJF is easy to implement in batch systems, where the required
CPU time is known in advance.

➢ It is impossible to implement exactly in interactive systems,
where the required CPU time is not known.
Consider the following processes, all arriving at time 0:

Process   Burst Time (ms)
P1        6
P2        8
P3        7
P4        3

Gantt Chart representation is:

The average waiting time is (3 + 16 + 9 + 0) / 4 = 7 milliseconds
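A sketch of the non-preemptive SJF calculation (Python; the helper name `sjf_waits` is our own, with burst times from the example):

```python
# Non-preemptive SJF with all processes ready at time 0:
# simply run them shortest-burst-first, FCFS (here: name order) breaking ties.

def sjf_waits(bursts):
    """bursts: {name: burst}. Returns {name: waiting time} under SJF."""
    waits, clock = {}, 0
    for name, burst in sorted(bursts.items(), key=lambda kv: (kv[1], kv[0])):
        waits[name] = clock
        clock += burst
    return waits

waits = sjf_waits({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
avg = sum(waits.values()) / 4   # (3 + 16 + 9 + 0) / 4 = 7.0
```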


The SJF algorithm can be either preemptive or nonpreemptive.
The choice arises when a new process arrives at the ready queue
while a previous process is still executing.

Preemptive SJF scheduling is sometimes called
shortest-remaining-time-first (SRTF) scheduling.
Consider the following five processes, each
having its own unique burst time and
arrival time. Compare the average waiting
time for non-preemptive and preemptive
SJF.
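A preemptive-SJF (SRTF) run can be simulated tick by tick. Below is a minimal sketch (Python); the four processes used here are our own illustrative set, not the five from the exercise:

```python
# SRTF: at every millisecond, run the ready process with the least remaining time.

def srtf_waits(procs):
    """procs: {name: (arrival, burst)}. Simulate 1-ms ticks; return waiting times."""
    remaining = {n: b for n, (a, b) in procs.items()}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= clock]
        if not ready:                    # CPU idle until the next arrival
            clock += 1
            continue
        # Shortest remaining time first; ties broken by earlier arrival.
        run = min(ready, key=lambda n: (remaining[n], procs[n][0]))
        remaining[run] -= 1
        clock += 1
        if remaining[run] == 0:
            finish[run] = clock
            del remaining[run]
    # waiting time = finish - arrival - burst
    return {n: finish[n] - a - b for n, (a, b) in procs.items()}

waits = srtf_waits({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
avg = sum(waits.values()) / 4   # 6.5 with this data
```

With this data P1 runs for 1 ms, is preempted by the shorter P2, and only resumes after P2 and P4 complete; non-preemptive SJF on the same data would give a longer average wait.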
• SJF can be proven to be optimal, giving the minimum average waiting time for a given set of
processes, but it suffers from one important problem: How do you know how long the next CPU burst is going to be?
• For long-term batch jobs this can be done based upon the limits that users set for their jobs
when they submit them, which encourages them to set low limits, but risks their having to
re-submit the job if they set the limit too low. However that does not work for short-term
CPU scheduling on an interactive system.
• Another option would be to statistically measure the run time characteristics of jobs,
particularly if the same tasks are run repeatedly and predictably. But once again that really
isn't a viable option for short term CPU scheduling in the real world.
• A more practical approach is to predict the length of the next burst, based on some
historical measurement of recent burst times for this process. One simple, fast, and
relatively accurate method is the exponential average of the measured lengths of previous
CPU bursts.
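The exponential average folds each measured burst into a running prediction: tau_new = alpha * t + (1 - alpha) * tau_old. A sketch (Python; the initial guess `tau0 = 10` and `alpha = 0.5` are conventional illustrative choices, not values from the slide):

```python
# Exponential averaging: recent bursts weigh more, old history decays geometrically.

def predict_next(bursts, tau0=10.0, alpha=0.5):
    """Fold measured burst lengths (ms) into a prediction of the next burst."""
    tau = tau0
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau   # tau_{n+1} = a*t_n + (1-a)*tau_n
    return tau
```

With alpha = 0.5 the measured history and the old estimate count equally; alpha = 0 ignores measurements entirely, while alpha = 1 uses only the most recent burst.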
Priority Scheduling

➢ The SJF algorithm is a special case of the general priority-scheduling algorithm.

➢ A priority is associated with each process, and the CPU is allocated to the process
with the highest priority. Equal-priority processes are scheduled in FCFS order.

➢ An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse
of the (predicted) next CPU burst. The larger the CPU burst, the lower the priority,
and vice versa.

➢ In practice, priorities are implemented using integers within a fixed range, but there
is no agreed-upon convention as to whether "high" priorities use large numbers or
small numbers.
Consider the following set of processes, assumed to have
arrived at time 0 in the order P1, P2, · · ·, P5, with the length
of the CPU burst given in milliseconds (a lower priority
number means a higher priority):

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

Gantt Chart representation is:

The average waiting time is 8.2 milliseconds
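A sketch of the non-preemptive priority calculation (Python; the helper name `priority_waits` is our own, and a lower number means a higher priority, as in the example):

```python
# Non-preemptive priority scheduling, all processes arriving at time 0:
# run in ascending priority-number order, ties broken FCFS (here: name order).

def priority_waits(procs):
    """procs: {name: (burst, priority)}. Returns {name: waiting time}."""
    waits, clock = {}, 0
    for name, (burst, _) in sorted(procs.items(), key=lambda kv: (kv[1][1], kv[0])):
        waits[name] = clock
        clock += burst
    return waits

procs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}
waits = priority_waits(procs)
avg = sum(waits.values()) / 5   # (6 + 0 + 16 + 18 + 1) / 5 = 8.2
```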


Try this!!!!

The average waiting time is 9.6 milliseconds


• Priorities can be assigned either internally or externally.

➢ Internal priorities are assigned by the OS using criteria such as average burst time, ratio of
CPU to I/O activity, system resource use, and other factors available to the kernel.
➢ External priorities are assigned by users, based on the importance of the job, fees paid,
politics, etc.
• Priority scheduling can be either preemptive or non-preemptive.

➢ When a process arrives at the ready queue, its priority is compared with the priority of the
currently running process.
➢ A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly
arrived process is higher than the priority of the currently running process.
➢ A nonpreemptive priority scheduling algorithm will simply put the new process at the head
of the ready queue.
Priority scheduling can suffer from a major problem known as indefinite
blocking, or starvation, in which a low-priority task can wait forever
because there are always some other jobs around that have higher priority.
➢ If this problem is allowed to occur, then processes will either run eventually
when the system load lightens, or will eventually get lost when the system is shut
down or crashes. (There are rumors of jobs that have been stuck for years.)

➢ One common solution to this problem is aging, in which the priority of a job
increases the longer it waits.

➢ Under this scheme a low-priority job will eventually get its priority raised high
enough that it gets run.
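Aging can be sketched in a few lines (Python; the helper name `age_and_pick` and the rate of one priority step per scheduling pass are our own illustrative choices):

```python
# Aging: every scheduling pass, each waiting process's priority number is
# decreased by 1 (i.e., its priority is raised), so no process starves forever.

def age_and_pick(ready):
    """ready: {name: priority number (lower = higher)}. Age all, pick the best."""
    for name in ready:
        ready[name] -= 1                 # waiting raises (numerically lowers) priority
    return min(ready, key=lambda n: (ready[n], n))
```

Each call ages every queued process before dispatching, so a job that starts far down the priority range climbs one step per pass and must eventually reach the top.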
Round-Robin Scheduling

➢ The round-robin (RR) scheduling algorithm is designed especially for
time-sharing systems.
➢ Round-robin scheduling is similar to FCFS scheduling, except that each CPU
burst is assigned a limit called the time quantum.
➢ When a process is given the CPU, a timer is set for whatever value has been
set for a time quantum.
➢ If the process finishes its burst before the time-quantum timer expires, then it
releases the CPU voluntarily, just as in the normal FCFS algorithm.
➢ If the timer goes off first, then the process is swapped out of the CPU and
moved to the back end of the ready queue.
• The ready queue is maintained as a circular queue, so when all processes
have had a turn, then the scheduler gives the first process another turn, and
so on.
• RR scheduling can give the effect of all processes sharing the CPU
equally, although the average wait time can be longer than with other
scheduling algorithms.

The average waiting time is calculated for this schedule (burst times P1 = 24 ms,
P2 = 3 ms, P3 = 3 ms; time quantum 4 ms). P1 waits for 6 milliseconds
(10 - 4), P2 waits for 4 milliseconds, and P3 waits for 7 milliseconds. Thus, the
average waiting time is 17/3 = 5.66 milliseconds.
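That schedule can be reproduced with a short circular-queue sketch (Python; the helper name `rr_waits` is our own, with bursts P1 = 24, P2 = 3, P3 = 3 ms and a 4 ms quantum as above):

```python
# Round robin: run each process for at most one quantum, preempting it to the
# back of the (circular) ready queue if its burst is not finished.

from collections import deque

def rr_waits(bursts, quantum):
    """bursts: list of (name, burst), all ready at time 0. Returns waiting times."""
    queue = deque(bursts)
    clock, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)               # run one quantum (or less, if done)
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))   # preempted: back of the queue
        else:
            finish[name] = clock
    total = dict(bursts)
    # with all arrivals at 0: waiting = completion time - burst time
    return {name: finish[name] - total[name] for name in finish}

waits = rr_waits([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
avg = sum(waits.values()) / 3    # 17/3, about 5.66 ms
```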
• In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time
quantum in a row (unless it is the only runnable process).
• If a process’s CPU burst exceeds 1 time quantum, that process is preempted and is put
back in the ready queue. The RR scheduling algorithm is thus preemptive.
• The performance of RR is sensitive to the time quantum selected. If the quantum is
large enough, then RR reduces to the FCFS algorithm; if it is very small, then each
process appears to get 1/n of the processor time and the processes share the CPU equally.
• BUT, a real system invokes overhead for every context switch, and the smaller the
time quantum the more context switches there are.
• Turnaround time also depends on the size of the time quantum. In general, turnaround
time is minimized if most processes finish their next cpu burst within one time
quantum.
• A smaller time quantum increases the
number of context switches.
• A rule of thumb is that 80 percent
of the CPU bursts should be
shorter than the time quantum.
