Operating System Summary of Chapter 5

The document summarizes key concepts in process scheduling, including: 1) Process scheduling involves selecting processes from the ready queue to run on the CPU based on criteria like priority. This aims to maximize CPU utilization. 2) Processes alternate between CPU and I/O bursts in a cycle. Scheduling decisions can occur when a process switches states. 3) Scheduling algorithms like FCFS, SJF, priority, and round robin (RR) select the next process to run, balancing factors like waiting time and throughput.
Summary of Chapter 5 Process Scheduling

Basic Concepts
Process scheduling is the activity of removing the running process from the CPU and selecting
another process from the ready queue to allocate to the CPU, based on particular criteria.
The objective of multiprogramming is to have some process running at all times, in order to
maximize CPU utilization. By switching the CPU among processes, the operating system can
make the computer more productive.
 A process is executed until it must wait, typically for the completion of some I/O request.
 CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait.
 Process execution begins with a CPU burst, which is followed by an I/O burst, which is
followed by another CPU burst, then another I/O burst, and so on.
 Eventually, the final CPU burst ends with a system request to terminate execution.
CPU Scheduler
 Whenever the CPU becomes idle, the operating system must select one of the processes in the ready
queue to be executed. The selection process is carried out by the CPU scheduler.
 CPU scheduler selects from among the processes in memory that are ready to execute and allocates
the CPU to one of them.
 Conceptually all the processes in the ready queue are lined up waiting for a chance to run on the
CPU.
Preemptive Scheduling
 CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state (e.g. as the result of an I/O request).
2. Switches from running to ready state (e.g. when an interrupt occurs).
3. Switches from waiting to ready (e.g. at completion of I/O).
4. Terminates (e.g. when process finishes execution).
 Under 1 and 4, there is no choice in terms of scheduling (nonpreemptive). A new process
must be selected for execution.
 Under 2 and 3, there is a choice (preemptive).
 Nonpreemptive: once the CPU has been allocated to a process, the process keeps the CPU
until it releases the CPU either by terminating or by switching to the waiting state.
 Preemptive: assign the CPU to the newly arrived process if its priority is higher than the
priority of the currently running process.
 Unfortunately, preemptive scheduling incurs a cost associated with access to shared data.
Consider the case of two processes that share data. While one is updating the data, it is
preempted so that the second process can run. The second process then tries to read the
data, but they are in an inconsistent state. In such situations, we need new mechanisms to
coordinate access to shared data; we discuss this topic in Chapter 6, Process Synchronization.
Scheduling Criteria
 Many criteria have been suggested for comparing CPU-scheduling algorithms.
 Which characteristics are used for comparison can make a substantial difference in which
algorithm is judged to be best.
 The criteria include the following:
1. CPU utilization – keep the CPU as busy as possible
2. Throughput – number of processes that complete their execution per time unit
3. Turnaround time – The interval from the time of submission of a process to the time of
completion. Turnaround time is the sum of the periods spent waiting to get into
memory, waiting in the ready queue, executing on the CPU, and doing I/O.
4. Waiting time – amount of time a process has been waiting in the ready queue. Waiting
time is the sum of the periods spent waiting in the ready queue.
5. Response time – is the time from the submission of a request until the first response is
produced.
Scheduling Optimization Criteria
It is desirable to maximize CPU utilization and throughput and to minimize turnaround time,
waiting time, and response time.
 Maximize CPU utilization
 Maximize throughput
 Minimize turnaround time
 Minimize waiting time
 Minimize response time
Scheduling Algorithms
 CPU scheduling deals with the problem of deciding which of the processes in the ready
queue is to be allocated the CPU. There are many different CPU-scheduling algorithms.
 First-Come, First-Served (FCFS)
 Shortest-Job-First (SJF)
 Priority
 Round Robin (RR)
FCFS Scheduling
 First come, first served (FCFS), is the simplest scheduling algorithm. FCFS simply queues
processes in the order that they arrive in the ready queue.
 In this algorithm, the process that arrives first is executed first, and the next process starts
only after the previous one has run to completion.
 Advantages of FCFS
o Simple and easy to implement

 Disadvantages of FCFS:
o The scheduling method is nonpreemptive; a process runs to completion once started.
o Due to the nonpreemptive nature of the algorithm, the problem of starvation may
occur.
o Although it is easy to implement, it performs poorly, since the average waiting
time is higher than under other scheduling algorithms.
 Input the processes along with their burst times (bt).
 Find the waiting time (wt) for all processes.
 The first process need not wait, so the waiting time for process 1 will be 0.
 For every other process i (assuming all processes arrive at time 0):
wt[i] = bt[i-1] + wt[i-1]
 Find the turnaround time for all processes: Turnaround time = wt + bt
 Find the average waiting time = total_waiting_time / no_of_processes
 Similarly, find the average turnaround time = total_turn_around_time / no_of_processes
 Thus, the average waiting time and turnaround time under the FCFS algorithm are generally not
minimal and may vary substantially if the processes' CPU burst times vary greatly.
 Note that the FCFS scheduling algorithm is nonpreemptive.
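The steps above can be sketched in Python. This is a minimal sketch, assuming (as the formulas above do) that all processes arrive at time 0 and are listed in arrival order:

```python
def fcfs(burst_times):
    """Compute per-process waiting and turnaround times under FCFS.

    Assumes all processes arrive at time 0, listed in arrival order.
    """
    n = len(burst_times)
    wt = [0] * n                                  # first process waits 0
    for i in range(1, n):
        wt[i] = wt[i - 1] + burst_times[i - 1]    # wt[i] = bt[i-1] + wt[i-1]
    tat = [wt[i] + burst_times[i] for i in range(n)]  # turnaround = wt + bt
    return wt, tat, sum(wt) / n, sum(tat) / n

# Example: burst times 24, 3, 3 (a long job arriving first causes the
# convoy effect): average waiting time 17, average turnaround time 27.
wt, tat, avg_wt, avg_tat = fcfs([24, 3, 3])
```

Running the short jobs first instead would cut the average waiting time sharply, which is exactly the observation that motivates SJF below.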
Shortest-Job-First (SJF) Scheduling
 Shortest-job-first (SJF), or shortest job next, is a scheduling algorithm that selects the waiting
process with the smallest execution time to execute next. SJF is a nonpreemptive
algorithm.
 SJF is optimal – gives minimum average waiting time for a given set of processes.
o The difficulty is knowing the length of the next CPU request.

Advantages
 Maximum throughput.
 Minimum average waiting and turnaround time.
Disadvantages
 May suffer from the problem of starvation.
 It is not directly implementable, because the exact burst time of a process can't be known
in advance.
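Nonpreemptive SJF can be sketched as follows, again assuming all processes arrive at time 0 and that burst times are known (which, as noted above, is the algorithm's practical limitation):

```python
def sjf(burst_times):
    """Nonpreemptive SJF for processes that all arrive at time 0.

    Runs jobs in order of increasing burst time; returns each process's
    waiting time (in input order) and the average waiting time.
    """
    order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
    wt = {}
    clock = 0
    for i in order:
        wt[i] = clock              # waiting time = time before the job starts
        clock += burst_times[i]
    n = len(burst_times)
    return [wt[i] for i in range(n)], sum(wt.values()) / n

# Example: burst times 6, 8, 7, 3 run in order 3, 6, 7, 8,
# giving an average waiting time of 7.
wt, avg_wt = sjf([6, 8, 7, 3])
```

Any other ordering of the same four jobs gives an average waiting time of at least 7, which is the sense in which SJF is optimal.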
Priority Scheduling
 A priority number (integer) is associated with each process.
 The CPU is allocated to the process with the highest priority (smallest integer ≡ highest
priority), and so on. Processes with the same priority are executed on a first-come,
first-served basis.
 Priority can be decided based on memory requirements, time requirements, or any other
resource requirement.
o Preemptive: preempt the CPU if the priority of the newly arrived process is
higher than the priority of the currently running process.
o Nonpreemptive: simply put the new process at the head of the ready
queue.
 SJF is a priority scheduling where priority is the predicted next CPU burst time.
Advantages:
 Higher priority processes execute first.
Disadvantages:
 There is a chance of starvation if only higher priority processes keep coming in the ready
queue.
 If two processes have the same priorities, then some other scheduling algorithm needs
to be used.
Priority Scheduling Problem
 Problem  Starvation – A low priority processes that is ready to run but waiting for the
CPU can be considered blocked and may never execute.
 Solution  Aging – is a technique of gradually increasing the priority of processes that
wait in the system for a long time.
 For example, if priorities range from 127 (low) to 0 (high), we could increase the priority
of a waiting process by 1 every 15 minutes. Eventually, even a process with an initial
priority of 127 would have the highest priority in the system and would be executed.
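The aging rule from the example can be sketched as a toy function. The interval (15 minutes) and step (1) come from the example above; the function name and signature are illustrative, not from the text:

```python
def age(priorities, waiting_minutes, interval=15, step=1, highest=0):
    """Apply the aging rule sketched above: every `interval` minutes of
    waiting raises a process's priority by `step` (numerically lowers it,
    since 0 is the highest priority), never past `highest`.
    """
    aged = []
    for prio, waited in zip(priorities, waiting_minutes):
        boost = (waited // interval) * step
        aged.append(max(highest, prio - boost))
    return aged

# A priority-127 process that has waited 60 minutes has been boosted
# 4 times, to priority 123; a process that waited under 15 minutes
# keeps its priority.
```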
Round Robin (RR)
 The RR scheduling algorithm is designed especially for timesharing systems. It is similar
to FCFS scheduling, but preemption is added to enable the system to switch between
processes.
 Each process gets a small unit of CPU time (time quantum or time slice), usually 10-100
milliseconds. The ready queue is treated as a circular queue. After this time has elapsed,
the process is preempted and added to the end of the ready queue.
 It is simple, easy to implement, and starvation-free as all processes get fair share of CPU.
 To implement RR scheduling, we keep the ready queue as a FIFO queue of processes.
New processes are added to the tail of the ready queue. The CPU scheduler picks the
first process from the ready queue, sets a timer to interrupt after 1 time quantum, and
dispatches the process.
Advantages:
 There is fairness, since every process gets an equal share of the CPU.
 No starvation, as every process gets a chance to execute.
 A newly created process is added to the end of the ready queue.
Disadvantages:
 CPU time is wasted on frequent context switching.
 Larger waiting time, response time, and turnaround time.
 Low throughput.
 More context-switching overhead.
 Scheduling is time-consuming when the quantum is small.
 The average waiting time and turnaround time under the RR policy is often long.
 In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time
quantum in a row (unless it is the only runnable process).
 If a process’s CPU burst exceeds 1 time quantum, that process is preempted and is put back
in the ready queue.
 The RR scheduling algorithm is thus preemptive.
 If there are n processes in the ready queue and the time quantum is q, then each process
gets 1/n of the CPU time in chunks of at most q time units. Each process must wait no
longer than (n − 1) × q time units until its next time quantum.
 The performance of the RR algorithm depends heavily on the size of the time quantum.
 At one extreme, if the time quantum is extremely large, the RR policy is the same as FCFS. In
contrast, if the time quantum is extremely small (say, 1 millisecond), the RR approach is called
processor sharing and creates the appearance that each of n processes has its own
processor running at 1/n the speed of the real processor.
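The circular-queue behavior described above can be sketched with a simple simulation. This sketch again assumes all processes arrive at time 0:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate RR for processes that all arrive at time 0.

    Returns per-process waiting times (total time spent in the ready
    queue) and the average waiting time.
    """
    n = len(burst_times)
    remaining = list(burst_times)
    queue = deque(range(n))
    clock = 0
    completion = [0] * n
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)          # preempted: back to the tail
        else:
            completion[i] = clock    # finished
    wt = [completion[i] - burst_times[i] for i in range(n)]
    return wt, sum(wt) / n

# Example: burst times 24, 3, 3 with quantum 4 give waiting times
# 6, 4, 7 and an average waiting time of 17/3 ≈ 5.66.
wt, avg_wt = round_robin([24, 3, 3], 4)
```

Compare this with FCFS on the same workload (average waiting time 17): adding preemption shortens the wait for the short jobs at the cost of extra context switches for the long one.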
Multilevel Queue Scheduling
 It may happen that processes in the ready queue can be divided into different classes where
each class has its own scheduling needs.
 Ready queue is partitioned into separate queues:
 foreground (interactive)
 background (batch)
 The processes are permanently assigned to one queue, generally based on some property
of the process, such as memory size, process priority, or process type.
 Each queue has its own scheduling algorithm
 foreground – RR
 background – FCFS
 Scheduling must be done between the queues
 Fixed-priority scheduling: each queue has absolute priority over lower-priority queues
(i.e., serve all from foreground, then from background). Possibility of starvation.
 Time slice – each queue gets a certain amount of CPU time which it can schedule
amongst its processes; i.e., 80% to foreground in RR and 20% to background in FCFS.
Advantages & Disadvantages of MLQS
 Advantages:
 Low scheduling overhead
 Disadvantages:
 Some processes may starve for the CPU if higher-priority queues never
become empty.
 It is inflexible, since processes do not change their foreground or background
nature.
Multilevel Feedback Queue
 It allows a process to move between queues according to the characteristics of its CPU
bursts.
 Multilevel-feedback-queue scheduler defined by the following parameters:
 number of queues
 scheduling algorithms for each queue
 method used to determine when to upgrade a process
 method used to determine when to demote a process
 method used to determine which queue a process will enter when that process needs
service
 A process that waits too long in a lower-priority queue may be moved to a higher-priority
queue. This form of aging prevents starvation.
Example of Multilevel Feedback Queue
 Three queues:
 Q0 – RR with time quantum 8 milliseconds
 Q1 – RR time quantum 16 milliseconds
 Q2 – FCFS
 Scheduling between the three queues is FCFS.
 Scheduling
 A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8
milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
 At Q1, the job is again served FCFS and receives 16 additional milliseconds. If it still does not
complete, it is preempted and moved to queue Q2, where it is served FCFS.
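The three-queue example can be sketched as a toy simulation. This sketch assumes all jobs arrive at time 0 (so running jobs are never preempted by new arrivals) and uses strict priority Q0 > Q1 > Q2 between the queues:

```python
from collections import deque

def mlfq(burst_times):
    """Toy simulation of the three-queue example: Q0 (quantum 8),
    Q1 (quantum 16), Q2 (FCFS, run to completion).

    All jobs arrive at time 0; returns each job's completion time.
    """
    quanta = [8, 16, None]                  # None = run to completion
    queues = [deque(), deque(), deque()]
    remaining = list(burst_times)
    for i in range(len(burst_times)):
        queues[0].append(i)                 # every new job enters Q0
    clock = 0
    completion = [0] * len(burst_times)
    while any(queues):
        level = next(l for l in range(3) if queues[l])  # highest nonempty
        i = queues[level].popleft()
        q = quanta[level]
        run = remaining[i] if q is None else min(q, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queues[level + 1].append(i)     # used its full quantum: demote
        else:
            completion[i] = clock
    return completion

# A 20 ms job runs 8 ms in Q0 and 12 ms in Q1, completing at time 20.
# With a second 5 ms job, the short job finishes entirely in Q0 while
# the long job waits in Q1.
```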
Advantages & Disadvantages of MLFQS
 Advantages:
 It is more flexible since it allows different processes to move between different queues.
 It prevents starvation by moving a process that waits too long in a lower-priority queue
to a higher-priority queue.
 Disadvantages:
 Selecting the best scheduler requires some means of choosing values for all of its
parameters.
 It incurs more CPU overhead.
 It is the most complex algorithm to design.
Multiple-Processor Scheduling
 CPU scheduling is more complex when multiple CPUs are available.
 We focus on homogeneous processors within a multiprocessor, where processors are
identical.
 Asymmetric multiprocessing (AMP) – only one processor accesses the system data
structures, alleviating the need for data sharing.
 Symmetric multiprocessing (SMP) – each processor is self-scheduling, all processes in
common ready queue, or each has its own private queue of processes.
 Processor affinity – process has affinity for processor on which it is currently running
 soft affinity: a process may migrate between processors.
 hard affinity: a process can specify that it is not to migrate to other processors.
Multicore Processors
 Recent trend: placing multiple processor cores on the same physical chip.
 Multicore chips are faster and consume less power.
 Multiple threads per core are also a growing trend.
Algorithm Evaluation
 How do we select a CPU-scheduling algorithm for a particular system?
 The first problem is defining the criteria to be used in selecting an algorithm.
 To select an algorithm, we must first define the relative importance of these elements. Our
criteria may include several measures, such as:
 Maximizing CPU utilization under the constraint that the maximum response time is 1
second.
 Maximizing throughput such that turnaround time is linearly proportional to total
execution time.
 Deterministic modeling (Analytic evaluation): takes a particular predetermined workload
and defines the performance of each algorithm for that workload.
 Deterministic modeling is simple and fast. It gives us exact numbers, allowing us to compare
the algorithms. However, it requires exact numbers for input, and its answers apply only to
those cases.
 Queueing models: knowing arrival rates and service rates, we can compute utilization,
average queue length, average wait time, and so on.
