Process scheduling is essential in multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
A typical process involves both I/O time and CPU time.
In a uniprogramming system such as MS-DOS, time spent waiting for I/O is wasted: the CPU sits idle during this time.
In multiprogramming systems, one process can use the CPU while another is waiting for I/O. This is possible only with process scheduling.
Process execution begins with a CPU burst. That is followed by
an I/O burst, which is followed by another CPU burst, then
another I/O burst, and so on. Eventually, the final CPU burst
ends with a system request to terminate execution.
Example: In the first Gantt chart below, process P1 arrives first. The average waiting time for the three processes is (0 + 24 + 27) / 3 = 17.0 ms.
In the second Gantt chart below, the same three processes have an average waiting time of (0 + 3 + 6) / 3 = 3.0 ms. This reduction is substantial.
Thus, the average waiting time under an FCFS policy is generally not minimal and may vary
substantially if the processes’ CPU burst times vary greatly.
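As a rough sketch of the calculation above, the snippet below replays FCFS for the two orderings; the burst times of 24, 3 and 3 ms are assumed from the arithmetic shown:

    def fcfs_waiting_times(burst_times):
        # Waiting time of each process when served strictly in the given order:
        # a process waits for everything scheduled before it.
        waits, elapsed = [], 0
        for burst in burst_times:
            waits.append(elapsed)
            elapsed += burst
        return waits

    # Burst times assumed from the arithmetic above: P1 = 24 ms, P2 = 3 ms, P3 = 3 ms.
    for order in ([24, 3, 3], [3, 3, 24]):        # P1 first, then P1 last
        waits = fcfs_waiting_times(order)
        print(waits, sum(waits) / len(waits))     # -> averages 17.0 ms and 3.0 ms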
Consider the snapshot of a system given here.
A Gantt chart is a horizontal bar chart developed as a production control tool in 1917 by Henry L. Gantt, an American engineer and social scientist.
Consider the set of five processes whose arrival time and burst time are given below. If the CPU scheduling policy is FCFS, calculate the average waiting time and average turnaround time.
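The exercise's arrival/burst table is not reproduced above, so the sketch below uses hypothetical values; the point is the bookkeeping: turnaround = completion - arrival and waiting = turnaround - burst.

    def fcfs_metrics(processes):
        # processes: list of (name, arrival_time, burst_time); FCFS serves them in arrival order.
        clock, rows = 0, []
        for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
            start = max(clock, arrival)           # the CPU may sit idle until the process arrives
            completion = start + burst
            turnaround = completion - arrival     # total time spent in the system
            waiting = turnaround - burst          # time spent waiting in the ready queue
            rows.append((name, waiting, turnaround))
            clock = completion
        return rows

    # Hypothetical arrival and burst times in ms (the exercise's own table is not shown here).
    rows = fcfs_metrics([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5), ("P5", 4, 2)])
    print(sum(w for _, w, _ in rows) / len(rows),   # average waiting time
          sum(t for _, _, t in rows) / len(rows))   # average turnaround time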
§ When the CPU hog finally relinquishes the CPU, the I/O-bound processes pass through the CPU quickly, leaving the CPU idle while everyone queues up for I/O; the cycle then repeats itself when the CPU-intensive process gets back to the ready queue.
§ The FCFS algorithm is thus particularly troublesome for time-sharing systems, where it is
important that each user get a share of the CPU at regular intervals.
Shortest-Job-First Scheduling
➢ The shortest-job-first (SJF) scheduling algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst; FCFS scheduling breaks ties.
Example: Gantt chart representation of the schedule.
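Since the chart itself is not reproduced here, the following is a minimal sketch of non-preemptive SJF with all processes available at time 0; the burst times are illustrative, not taken from the original example:

    def sjf_waiting_times(bursts):
        # Non-preemptive SJF: always pick the smallest predicted next CPU burst.
        # bursts maps process name -> predicted next burst (ms); ties fall back to name order.
        waits, elapsed = {}, 0
        for name, burst in sorted(bursts.items(), key=lambda kv: (kv[1], kv[0])):
            waits[name] = elapsed
            elapsed += burst
        return waits

    # Illustrative burst times only.
    waits = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
    print(waits, sum(waits.values()) / len(waits))   # P4 runs first, P2 last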
Priority Scheduling
➢ A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order.
➢ An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse
of the (predicted) next CPU burst. The larger the CPU burst, the lower the priority,
and vice versa.
➢ In practice, priorities are implemented using integers within a fixed range, but there
is no agreed-upon convention as to whether "high" priorities use large numbers or
small numbers.
Consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, ..., P5, with the length of the CPU burst given in milliseconds:
Now try this!
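The process table for this exercise is not reproduced here, so the sketch below uses hypothetical bursts and priorities; it also has to pick a convention, and assumes that a smaller number means a higher priority:

    def priority_schedule(processes):
        # Non-preemptive priority scheduling with all processes available at time 0.
        # processes: list of (name, burst_ms, priority); smaller priority number = higher priority.
        # Python's sort is stable, so equal-priority processes keep FCFS (input) order.
        order, waits, elapsed = [], {}, 0
        for name, burst, _prio in sorted(processes, key=lambda p: p[2]):
            order.append(name)
            waits[name] = elapsed
            elapsed += burst
        return order, waits

    # Hypothetical values (the exercise's own table is not shown here).
    procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
    print(priority_schedule(procs))   # P2 runs first, P4 last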
➢ Internal priorities are assigned by the OS using criteria such as average burst time, ratio of
CPU to I/O activity, system resource use, and other factors available to the kernel.
➢ External priorities are assigned by users, based on the importance of the job, fees paid,
politics, etc.
• Priority scheduling can be either preemptive or non-preemptive.
➢ When a process arrives at the ready queue, its priority is compared with the priority of the
currently running process.
➢ A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly
arrived process is higher than the priority of the currently running process.
➢ A nonpreemptive priority scheduling algorithm will simply put the new process at the head
of the ready queue.
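A minimal sketch of the arrival-time comparison just described (assuming the convention that a smaller priority number means a higher priority); the queue handling mirrors the two bullets above:

    from collections import deque, namedtuple

    Proc = namedtuple("Proc", "name priority")   # smaller number = higher priority (assumed convention)

    def on_arrival(new, running, ready_queue, preemptive):
        # Returns the process that holds the CPU after `new` arrives while `running` is executing.
        if new.priority < running.priority:       # the newcomer outranks the running process
            if preemptive:
                ready_queue.append(running)       # preempt: the running process rejoins the ready queue
                return new                        # the newcomer takes the CPU immediately
            ready_queue.appendleft(new)           # non-preemptive: the newcomer waits at the head of the queue
            return running
        ready_queue.append(new)                   # lower (or equal) priority: join the queue normally
        return running

    queue = deque()
    print(on_arrival(Proc("P9", 1), Proc("P1", 5), queue, preemptive=True).name)   # -> P9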
Priority scheduling can suffer from a major problem known as indefinite
blocking, or starvation, in which a low-priority task can wait forever
because there are always some other jobs around that have higher priority.
➢ If this problem is allowed to occur, then processes will either run eventually
when the system load lightens, or will eventually get lost when the system is shut
down or crashes. (There are rumors of jobs that have been stuck for years.)
➢ A common solution is aging: gradually increasing the priority of a process the longer it waits. Under this scheme a low-priority job will eventually get its priority raised high enough that it gets run.
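A minimal sketch of such an aging pass (illustrative only; the boost amount and how often it runs are implementation choices, and a smaller number is again taken to mean a higher priority):

    def age_ready_queue(ready_queue, boost=1):
        # Aging: every pass moves each waiting process a step closer to the highest priority.
        # ready_queue: list of dicts with a 'priority' key; smaller number = higher priority.
        for proc in ready_queue:
            proc["priority"] = max(0, proc["priority"] - boost)

    # A job stuck at priority 127 eventually reaches priority 0 if the queue is aged repeatedly.
    queue = [{"name": "P7", "priority": 127}]
    for _ in range(127):
        age_ready_queue(queue)
    print(queue[0]["priority"])   # -> 0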
Round-Robin Scheduling
The average waiting time is calculated for this schedule as follows: P1 waits for 6 milliseconds (10 - 4), P2 waits for 4 milliseconds, and P3 waits for 7 milliseconds. Thus, the average waiting time is 17/3 ≈ 5.66 milliseconds.
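The schedule itself is not shown here; the quoted waiting times are consistent with three processes whose bursts are 24, 3 and 3 ms under a 4 ms quantum, which the simulation sketch below assumes:

    from collections import deque

    def round_robin_waits(bursts, quantum):
        # Simulate RR and return each process's total waiting time (ms).
        remaining = dict(bursts)
        waits = {name: 0 for name in bursts}
        last_ready = {name: 0 for name in bursts}     # when each process last entered the ready queue
        queue, clock = deque(bursts), 0
        while queue:
            name = queue.popleft()
            waits[name] += clock - last_ready[name]   # time spent waiting since it last queued up
            run = min(quantum, remaining[name])
            clock += run
            remaining[name] -= run
            if remaining[name] > 0:                   # burst not finished: back to the tail of the queue
                last_ready[name] = clock
                queue.append(name)
        return waits

    # Assumed workload consistent with the figures above.
    waits = round_robin_waits({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
    print(waits, sum(waits.values()) / len(waits))    # -> P1: 6, P2: 4, P3: 7; average 17/3 ms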
• In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time
quantum in a row (unless it is the only runnable process).
• If a process’s CPU burst exceeds 1 time quantum, that process is preempted and is put
back in the ready queue. The RR scheduling algorithm is thus preemptive.
• The performance of RR is sensitive to the time quantum selected. If the quantum is large enough, then RR reduces to the FCFS algorithm; if it is very small, then each of the n processes gets 1/n of the processor time and they share the CPU equally.
• BUT, a real system incurs overhead for every context switch, and the smaller the time quantum, the more context switches there are.
• Turnaround time also depends on the size of the time quantum. In general, turnaround time is minimized if most processes finish their next CPU burst within one time quantum.
• A smaller time quantum increases the number of context switches (see the sketch after this list).
• A rule of thumb is that 80 percent
of the CPU bursts should be
shorter than the time quantum.
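As a small, self-contained illustration of the context-switch point above (all numbers here are made up for the demonstration), the sketch below counts how many times a single CPU burst gets interrupted under a given quantum:

    import math

    def context_switches(burst, quantum):
        # Number of times a single CPU burst is preempted under a given time quantum.
        return max(0, math.ceil(burst / quantum) - 1)

    # A 10 ms burst under progressively smaller quanta: the switch count grows as the quantum shrinks.
    for q in (12, 6, 1):
        print(q, context_switches(10, q))   # -> 0, 1 and 9 switches respectively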