Operating System Summary of Chapter 5: Process Scheduling
Basic Concepts
Process scheduling is the activity of removing the running process from the CPU
and selecting another process from the ready queue to allocate to the CPU, based on
particular criteria.
The objective of multiprogramming is to have some process running at all times, in order to
maximize CPU utilization. By switching the CPU among processes, the operating system can
make the computer more productive.
A process is executed until it must wait, typically for the completion of some I/O request.
CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait.
Process execution begins with a CPU burst, which is followed by an I/O burst, which is
followed by another CPU burst, then another I/O burst, and so on.
Eventually, the final CPU burst ends with a system request to terminate execution.
CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready
queue to be executed. The selection process is carried out by the CPU scheduler.
CPU scheduler selects from among the processes in memory that are ready to execute and allocates
the CPU to one of them.
Conceptually all the processes in the ready queue are lined up waiting for a chance to run on the
CPU.
Preemptive Scheduling
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state (e.g. as the result of an I/O request).
2. Switches from running to ready state (e.g. when an interrupt occurs).
3. Switches from waiting to ready (e.g. at completion of I/O).
4. Terminates (e.g. when process finishes execution).
Under 1 and 4, there is no choice in terms of scheduling (nonpreemptive). A new process
must be selected for execution.
Under 2 and 3, there is a choice (preemptive).
Nonpreemptive: once the CPU has been allocated to a process, the process keeps the CPU
until it releases the CPU either by terminating or by switching to the waiting state.
Preemptive: assign the CPU to the newly arrived process if its priority is higher than the
priority of the currently running process.
Unfortunately, preemptive scheduling incurs a cost associated with access to shared data.
Consider the case of two processes that share data. While one is updating the data, it is
preempted so that the second process can run. The second process then tries to read the
data, but they are in an inconsistent state. In such situations, we need new mechanisms to
coordinate access to shared data; we discuss this topic in Chapter 6, Process Synchronization.
Scheduling Criteria
Many criteria have been suggested for comparing CPU-scheduling algorithms.
Which characteristics are used for comparison can make a substantial difference in which
algorithm is judged to be best.
The criteria include the following:
1. CPU utilization – keep the CPU as busy as possible
2. Throughput – number of processes that complete their execution per time unit
3. Turnaround time – The interval from the time of submission of a process to the time of
completion. Turnaround time is the sum of the periods spent waiting to get into
memory, waiting in the ready queue, executing on the CPU, and doing I/O.
4. Waiting time – amount of time a process has been waiting in the ready queue. Waiting
time is the sum of the periods spent waiting in the ready queue.
5. Response time – the time from the submission of a request until the first response is
produced.
Scheduling Optimization Criteria
It is desirable to maximize CPU utilization and throughput and to minimize turnaround time,
waiting time, and response time.
Maximize CPU utilization
Maximize throughput
Minimize turnaround time
Minimize waiting time
Minimize response time
First-Come, First-Served (FCFS) Scheduling
Disadvantages of FCFS:
o The scheduling method is non-preemptive; once started, a process runs to completion.
o Short processes may wait a long time behind a long one that arrived first (the
convoy effect).
o Although it is easy to implement, it performs poorly: the average waiting
time is higher than with other scheduling algorithms.
Input the processes along with their burst time (bt).
Find waiting time (wt) for all processes.
The first process need not wait, so the waiting time for process 1 is 0.
The waiting time for every other process i is:
wt[i] = bt[i-1] + wt[i-1]
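The recurrence above can be turned into a short Python sketch (the burst times below are made-up sample values, not from the text):

```python
# FCFS waiting times from burst times, following the recurrence
# wt[0] = 0; wt[i] = bt[i-1] + wt[i-1].
def fcfs_waiting_times(bt):
    wt = [0] * len(bt)
    for i in range(1, len(bt)):
        wt[i] = bt[i - 1] + wt[i - 1]
    return wt

bt = [24, 3, 3]                         # sample burst times
wt = fcfs_waiting_times(bt)
tat = [w + b for w, b in zip(wt, bt)]   # turnaround = waiting + burst
print(wt)                 # → [0, 24, 27]
print(sum(wt) / len(wt))  # average waiting time → 17.0
```

Note how one long burst at the front (24) inflates the waits of every process behind it, which is exactly the convoy effect mentioned above.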
Shortest-Job-First (SJF) Scheduling
Advantages
Maximum throughput.
Minimum average waiting and turnaround time.
Disadvantages
May suffer from starvation: a steady stream of short jobs can postpone a long job
indefinitely.
It cannot be implemented exactly, because the burst time of a process cannot be
known in advance.
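To see why SJF minimizes average waiting time, here is a minimal Python sketch of non-preemptive SJF, under the simplifying assumption that all processes arrive at time 0 (the burst times are made-up sample data):

```python
# Non-preemptive SJF with all processes arriving at time 0 (a
# simplifying assumption): run jobs in order of increasing burst time.
def sjf_average_wait(bt):
    elapsed, total = 0, 0
    for b in sorted(bt):
        total += elapsed   # this job waited 'elapsed' time units
        elapsed += b
    return total / len(bt)

print(sjf_average_wait([6, 8, 7, 3]))  # order 3,6,7,8 → waits 0,3,9,16 → 7.0
```

Running the shortest burst first means the most processes wait behind the least work, which is why no other ordering of the same bursts gives a smaller average wait.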
Priority Scheduling
A priority number (integer) is associated with each process.
The CPU is allocated to the process with the highest priority (a smaller integer means a
higher priority). Processes with the same priority are executed on a first-come,
first-served basis.
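The selection rule can be sketched in a few lines of Python; the process names, priority numbers, and arrival order below are hypothetical:

```python
# Pick the next process: smallest priority number wins; ties are
# broken FCFS using the arrival order. Sample data is hypothetical.
ready = [("P1", 3, 0), ("P2", 1, 1), ("P3", 1, 2)]  # (name, priority, arrival order)
next_proc = min(ready, key=lambda p: (p[1], p[2]))
print(next_proc[0])  # → P2 (priority 1, arrived before P3)
```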
Round-Robin (RR) Scheduling
If there are n processes in the ready queue and the time quantum is q, then each process
gets 1/n of the CPU time in chunks of at most q time units. Each process must wait no
longer than (n − 1) × q time units until its next time quantum.
The performance of the RR algorithm depends heavily on the size of the time quantum.
At one extreme, if the time quantum is extremely large, RR behaves the same as FCFS. In
contrast, if the time quantum is extremely small (say, 1 millisecond), the RR approach is
called processor sharing and creates the appearance that each of the n processes has its
own processor running at 1/n the speed of the real processor.
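A minimal Python sketch of RR, assuming all processes arrive at time 0 (the burst times and quantum are made-up sample values), shows how the quantum determines when each process completes:

```python
from collections import deque

# Round-robin sketch: each process runs for at most q units, then goes
# to the back of the queue if it still has work. All arrive at time 0.
def round_robin_completion(bt, q):
    queue = deque((i, b) for i, b in enumerate(bt))
    t, done = 0, {}
    while queue:
        i, remaining = queue.popleft()
        run = min(q, remaining)
        t += run
        if remaining > run:
            queue.append((i, remaining - run))   # back of the queue
        else:
            done[i] = t                          # completion time of process i
    return [done[i] for i in range(len(bt))]

print(round_robin_completion([24, 3, 3], q=4))  # → [30, 7, 10]
```

With q = 4 the two short processes finish early (at 7 and 10) instead of waiting behind the long one, which is the responsiveness RR trades the extra context switches for.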
Multilevel Queue Scheduling
It may happen that processes in the ready queue can be divided into different classes where
each class has its own scheduling needs.
Ready queue is partitioned into separate queues:
foreground (interactive)
background (batch)
The processes are permanently assigned to one queue, generally based on some property
of the process, such as memory size, process priority, or process type.
Each queue has its own scheduling algorithm
foreground – RR
background – FCFS
Scheduling must be done between the queues
Fixed priority scheduling: each queue has absolute priority over lower priority queue.
(i.e., serve all from foreground then from background). Possibility of starvation.
Time slice – each queue gets a certain amount of CPU time which it can schedule
amongst its processes; i.e., 80% to foreground in RR and 20% to background in FCFS.
Multilevel Feedback Queue Scheduling
A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8
milliseconds. If it does not finish within 8 milliseconds, it is moved to queue Q1.
At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not
complete, it is preempted and moved to queue Q2, where it is served FCFS.
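The three-queue behavior above can be sketched in Python; this is a simplification that assumes all jobs arrive at time 0 and ignores preemption by new arrivals (the queues and quanta follow the example: Q0 with an 8 ms quantum, Q1 with 16 ms, Q2 FCFS; the job names and bursts are made up):

```python
from collections import deque

# MLFQ sketch: a job that exhausts its quantum is demoted to the next
# queue; a lower queue runs only when the queues above it are empty.
def mlfq_trace(jobs):
    q0 = deque(jobs)                 # (name, burst) pairs
    q1, q2, trace = deque(), deque(), []
    for queue, quantum, lower in ((q0, 8, q1), (q1, 16, q2), (q2, None, None)):
        while queue:
            name, remaining = queue.popleft()
            run = remaining if quantum is None else min(quantum, remaining)
            trace.append((name, run))            # (job, time it ran this turn)
            if remaining > run:
                lower.append((name, remaining - run))  # demote
    return trace

print(mlfq_trace([("A", 5), ("B", 30)]))
# → [('A', 5), ('B', 8), ('B', 16), ('B', 6)]
```

A short job (A, 5 ms) finishes entirely in Q0, while a long one (B, 30 ms) sinks through Q1 into Q2, so CPU-bound work automatically ends up at low priority.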
Advantages & Disadvantages of MLFQS
Advantages:
It is more flexible, since it allows processes to move between queues.
It prevents starvation by moving a process that has waited too long in a lower-priority
queue to a higher-priority queue (aging).
Disadvantages:
Selecting the best scheduler requires some means of choosing values for its
parameters: the number of queues, the quantum for each queue, and the rules for
moving processes between queues.
It incurs more CPU overhead.
It is the most complex scheduling algorithm to design.
Multiple-Processor Scheduling
CPU scheduling is more complex when multiple CPUs are available.
We focus on homogeneous processors within a multiprocessor, where processors are
identical.
Asymmetric multiprocessing (AMP) – only one processor accesses the system data
structures, alleviating the need for data sharing.
Symmetric multiprocessing (SMP) – each processor is self-scheduling, all processes in
common ready queue, or each has its own private queue of processes.
Processor affinity – process has affinity for processor on which it is currently running
soft affinity: the OS attempts to keep a process on the same processor, but migration
between processors is still possible.
hard affinity: a process can specify that it is not to migrate to other processors.
Multicore Processors
A recent trend is to place multiple processor cores on the same physical chip.
Such chips are faster and consume less power than multiple single-core chips.
Multiple threads per core also growing.
Algorithm Evaluation
How do we select a CPU-scheduling algorithm for a particular system?
The first problem is defining the criteria to be used in selecting an algorithm.
To select an algorithm, we must first define the relative importance of these elements. Our
criteria may include several measures, such as:
Maximizing CPU utilization under the constraint that the maximum response time is 1
second.
Maximizing throughput such that turnaround time is linearly proportional to total
execution time.
Deterministic modeling (Analytic evaluation): takes a particular predetermined workload
and defines the performance of each algorithm for that workload.
Deterministic modeling is simple and fast. It gives us exact numbers, allowing us to compare
the algorithms. However, it requires exact numbers for input, and its answers apply only to
those cases.
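A deterministic-modeling sketch in Python: for one fixed, made-up workload, compute the exact average waiting time under FCFS order and under SJF order (all processes assumed to arrive at time 0):

```python
# Deterministic modeling: take one predetermined workload and compute
# each algorithm's average waiting time exactly for that workload.
def avg_wait(bursts):
    elapsed, total = 0, 0
    for b in bursts:
        total += elapsed    # this job waited for everything run so far
        elapsed += b
    return total / len(bursts)

workload = [10, 29, 3, 7, 12]          # sample burst times
print(avg_wait(workload))              # FCFS (submission order) → 28.0
print(avg_wait(sorted(workload)))      # SJF (shortest first)    → 13.0
```

The comparison is exact, but, as noted above, it holds only for this one workload; a different set of bursts could change the numbers, though not SJF's optimality.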
Queueing models: knowing arrival rates and service rates, we can compute utilization,
average queue length, average wait time, and so on.
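Queueing analysis typically relies on Little's formula, which holds for any stable queueing system (here n is the average queue length, λ the average arrival rate, and W the average time spent waiting in the queue):

```latex
n = \lambda \times W
```

For example, if on average 7 processes arrive per second and each waits 2 seconds, the queue holds about 7 × 2 = 14 processes.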