CPU Scheduling Algorithms Report
Submitted To:
Submitted By:
1 | Abstract
This report, entitled “CPU Scheduling Algorithms and their Implementation in Visual C++
Environment”, contains a brief introduction to the basic algorithm techniques used for CPU
scheduling and their implementation in the Visual C++ environment. The report was put together
by a group of two students in fulfillment of the semester project assigned by Mr. Kashif Alam,
lecturer of “Operating Systems” at HIIT (Hamdard University, Main Campus, Karachi).
2 | Acknowledgment
We, the students who were assigned this report, hereby acknowledge that the material
used/written in this report is not, wholly or partially, plagiarized; proper references have been
mentioned where we used external printed/written/published material, and the instructor has the
right to reject a portion of the report, or the report altogether, if he finds otherwise.
We have tried our best to keep the references correct and the facts & figures up-to-date; being
human, however, some errors or lapses may remain, and we request that they be tolerated.
Amar Jeet
[BECS/H/F10/0117]
Uzair Ahmed
[BECS/H/F10/0118]
Table of Contents
1 Abstract
2 Acknowledgment
3 “CPU SCHEDULING” – AN INTRODUCTION
3.1 What is CPU Scheduling?
3.2 Why do we need Scheduling?
3.3 Goals of CPU Scheduling
3.4 When to Schedule?
4 TYPES OF SCHEDULING
4.1 Long-term Scheduling
4.2 Medium-term Scheduling
4.3 Short-term Scheduling
5 SCHEDULING ALGORITHMS
5.1 Preemptive vs. Non-preemptive Algorithms
5.2 FCFS Algorithm
5.2a Advantages of FCFS
5.2b Disadvantages of FCFS
5.3 SJF/SRT Algorithms
5.3a Advantages of SJF/SRT
5.3b Disadvantages of SJF/SRT
5.4 Round-Robin Algorithm
5.4a The “Quantum Issue”
5.4b Advantages of RR
5.4c Disadvantages of RR
5.5 Priority-based Scheduling
5.5a Advantages of PBS
5.5b Disadvantages of PBS
5.6 MFQ Algorithm
5.6a Working of MFQ Algorithm
5.6b Advantages of MFQ
5.6c Disadvantages of MFQ
6 OVERVIEW & APPLICATIONS
6.1 Comparison Table
6.2 Choosing the Right Algorithm
6.3 Implementation of Different Algorithm Techniques
6.4 Algorithms used in Common Operating Systems
7 VISUAL REPRESENTATION OF ALGORITHMS IN VC++
7.1 Software and Language Details
7.2 Working of the Program
7.3 Algorithms of Programs
7.3a Algorithm of SJF (Non-Preemptive)
7.3b Algorithm of SJF (Preemptive)
7.3c Algorithm of FCFS
7.3d Algorithm of Round-Robin
• Utilization/Efficiency: Keep the CPU busy 100% of the time with useful work.
• Throughput: Maximize the number of jobs processed per hour.
• Turnaround time: From the time of submission to the time of completion, minimize the
time batch users must wait for output.
• Waiting time: The total time each process spends in the ready queue; minimize this.
• Response Time: Time from submission till the first response is produced, minimize
response time for interactive users.
• Fairness: Make sure each process gets a fair share of the CPU.
Third, when a process blocks on I/O, on a semaphore, or for some other reason, another process
has to be selected to run. Sometimes the reason for blocking may play a role in the choice. For
example, if A is an important process and it is waiting for B to exit its critical region, letting B
run next will allow it to exit its critical region and thus let A continue. The trouble, however, is
that the scheduler generally does not have the necessary information to take this dependency into
account.
Fourth, when an I/O interrupt occurs, a scheduling decision may be made. If the interrupt came
from an I/O device that has now completed its work, some process that was blocked waiting for
the I/O may now be ready to run. It is up to the scheduler to decide if the newly ready process
should be run, if the process that was running at the time of the interrupt should continue
running, or if some third process should run.
A scheduling decision may similarly be made on:
• OS calls,
• Signals (semaphores, etc.)
5 | SCHEDULING ALGORITHMS
Scheduling algorithms are the techniques used for distributing resources among parties which
simultaneously and asynchronously request them. Scheduling disciplines are used in routers (to
handle packet traffic) as well as in operating systems (to share CPU time among
both threads and processes), disk drives (I/O scheduling), printers (print spooler), most
embedded systems, etc.
The main purposes of scheduling algorithms are to minimize resource starvation and to ensure
fairness amongst the parties utilizing the resources. Scheduling deals with the problem of
deciding which of the outstanding requests is to be allocated resources. There are many different
scheduling algorithms. In this section, we introduce several of them.
There are several types of CPU scheduling algorithms, each having its own characteristics,
advantages and disadvantages. We will discuss the most important and significant ones in this
report, namely:
• First-Come First-Served (FCFS)
• Shortest Job First (SJF) / Shortest Remaining Time (SRT)
• Round-Robin
• Priority-based
• Multi-level Feedback Queue
Before we move on to the algorithm techniques, it is necessary that we define the concepts of
preemption and non-preemption.
In Preemptive scheduling, the operating system may suspend the currently running process and
return it to the ready queue, for example when its time quantum expires or when a more important
process becomes ready. Non-Preemptive scheduling, on the other hand, is designed so that once a
process enters the running state (is allocated the resources to run), it cannot be removed from the
processor until it has completed its service time (or it explicitly yields the processor).
With the concept of preemption described, we now move on to the algorithm techniques.
5.2-a | Advantages of FCFS: The greatest strength of this algorithm is that it is easy to
understand and equally easy to program. It is also fair in the same sense that allocating scarce
sports or concert tickets to people who are willing to stand on line starting at 2 A.M. is fair. With
this algorithm, a single linked list keeps track of all ready processes. Picking a process to run just
requires removing one from the front of the queue. Adding a new job or unblocked process just
requires attaching it to the end of the queue. What could be simpler?
5.2-b | Disadvantages of FCFS: Unfortunately, FCFS also has a huge disadvantage: it pays no
attention to processing time or process priority. If a process with a processing time of 100 ms
arrives in the ready queue and is followed by several other processes with processing times of
1 ms each, FCFS will start the 100 ms process and the other, much shorter processes will have to
wait roughly 100 ms each. In this way, the average waiting and turnaround times under FCFS can
become significantly high as long processes hog the resources.
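To make this concrete, the short C++ sketch below (a simplified illustration, not the code of our VC++ program; the process names and burst times are made up) serves a 100 ms job followed by four 1 ms jobs in FCFS order using a simple queue and prints the average waiting time.

#include <iostream>
#include <queue>
#include <string>

// A minimal process description: name and CPU burst time in milliseconds.
struct Process {
    std::string name;
    int burstMs;
};

int main() {
    // One long job arrives first, followed by four short ones (illustrative values).
    std::queue<Process> readyQueue;
    readyQueue.push({"P1", 100});
    for (int i = 2; i <= 5; ++i)
        readyQueue.push({"P" + std::to_string(i), 1});

    int clock = 0;          // current time in ms
    int totalWaiting = 0;   // sum of waiting times
    int count = 0;

    // FCFS: always run the process at the front of the ready queue to completion.
    while (!readyQueue.empty()) {
        Process p = readyQueue.front();
        readyQueue.pop();
        totalWaiting += clock;   // all jobs arrived at time 0, so waiting time = start time
        std::cout << p.name << " waits " << clock << " ms\n";
        clock += p.burstMs;      // run the job to completion (non-preemptive)
        ++count;
    }
    std::cout << "Average waiting time: "
              << static_cast<double>(totalWaiting) / count << " ms\n";
}

With these values each short job waits just over 100 ms and the average waiting time comes out to 81.2 ms, whereas serving the 1 ms jobs first would bring the average down to about 2 ms.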
The SRT algorithm, on the other hand, introduces the concept of preemption into the same
technique described for SJF. In this case, the scheduler always chooses the process that has the
shortest expected remaining processing time. When a new process joins the ready queue, it may
in fact have a shorter remaining time than the currently running process. Accordingly, the
scheduler may preempt the current process when a new process becomes ready. As with SJF, the
scheduler must have an estimate of processing time to perform the selection function.
Fig. 2 – The working of SJF technique (a) Running four processes in original order
(b) Running same processes in SJF order
Take a look at the example of Fig. 2. In the original order, A runs first and hogs the system while
the shorter processes wait. If run in SJF order, the shorter jobs run first and A gets its turn last.
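As a rough illustration of the idea behind Fig. 2, the sketch below (burst times of 8, 4, 4 and 4 units are assumed for the four jobs; this is not our project code) compares the average turnaround time of the original order with the SJF order, assuming all jobs arrive at time 0.

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Job {
    std::string name;
    int burst;  // expected processing time
};

// Average turnaround time when jobs run back to back in the given order
// and all of them arrived at time 0 (turnaround = completion time).
double averageTurnaround(const std::vector<Job>& order) {
    int clock = 0, total = 0;
    for (const Job& j : order) {
        clock += j.burst;   // the job completes here
        total += clock;
    }
    return static_cast<double>(total) / order.size();
}

int main() {
    // Original submission order: the long job A is first (illustrative values).
    std::vector<Job> jobs = {{"A", 8}, {"B", 4}, {"C", 4}, {"D", 4}};
    std::cout << "Original order: " << averageTurnaround(jobs) << "\n";

    // SJF: sort by expected burst time, shortest first.
    std::vector<Job> sjf = jobs;
    std::sort(sjf.begin(), sjf.end(),
              [](const Job& a, const Job& b) { return a.burst < b.burst; });
    std::cout << "SJF order:      " << averageTurnaround(sjf) << "\n";
}

With these assumed values the average turnaround drops from 14 to 11 time units when the jobs are run shortest-first.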
5.3-a | Advantages of SJF/SRT: The SJF/SRT algorithms do not favor long processes the way
FCFS does, so short jobs are not kept waiting behind long ones. They also provide the maximum
throughput in most cases.
5.3-b | Disadvantages of SJF/SRT: The main disadvantages of SJF/SRT are that these algorithms
need advance knowledge (an estimate) of each process’s processing time, and SRT additionally
requires preemption. These requirements make the algorithms hard to implement. Moreover, if
short processes keep arriving in an SRT-based system, the larger processes will suffer from
starvation.
5.4-a | The “Quantum Issue”: The only interesting issue with round robin is the length of the
quantum. Switching from one process to another requires a certain amount of time for doing the
administration—saving and loading registers and memory maps, updating various tables and
lists, flushing and reloading the memory cache, etc. Suppose that this process switch or context
switch, as it is sometimes called, takes 1 msec, including switching memory maps, flushing and
reloading the cache, etc. Also suppose that the quantum is set at 4 msec. With these parameters,
after doing 4 msec of useful work, the CPU will have to spend 1 msec on process switching.
Twenty percent of the CPU time will be wasted on administrative overhead. Clearly this is too
much.
To improve the CPU efficiency, we could set the quantum to, say, 100 msec. Now the wasted
time is only 1 percent. But consider what happens on a timesharing system if ten interactive
users hit the carriage return key at roughly the same time. Ten processes will be put on the list of
runnable processes. If the CPU is idle, the first one will start immediately, the second one may
not start until 100 msec later, and so on. The unlucky last one may have to wait 1 sec before
getting a chance, assuming all the others use their full quanta. Most users will perceive a 1-sec
response to a short command as sluggish.
Another factor is that if the quantum is set longer than the mean CPU burst, preemption will
rarely happen. Instead, most processes will perform a blocking operation before the quantum
runs out, causing a process switch. Eliminating preemption improves performance because
process switches then only happen when they are logically necessary, that is, when a process
blocks and cannot continue.
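The trade-off just described is easy to quantify. The sketch below only reproduces this section’s arithmetic (a 1 msec context switch and ten simultaneously ready interactive users are assumed) and prints, for a few quantum lengths, the fraction of CPU time lost to switching and the worst-case wait for the last of the ten processes.

#include <cstdio>

int main() {
    const double switchMs = 1.0;       // assumed context-switch cost (msec)
    const int interactiveUsers = 10;   // users hitting return at the same moment
    const double quanta[] = {4.0, 20.0, 50.0, 100.0};

    for (double q : quanta) {
        // Fraction of every (quantum + switch) cycle spent on administration.
        double overhead = switchMs / (q + switchMs);
        // The last of the ten ready processes waits for the nine others' full quanta.
        double worstWaitMs = (interactiveUsers - 1) * (q + switchMs);
        std::printf("quantum %6.1f ms: overhead %4.1f %%, worst-case wait %7.1f ms\n",
                    q, overhead * 100.0, worstWaitMs);
    }
}

With a 4 msec quantum the overhead is 20 percent; with a 100 msec quantum it falls to about 1 percent, but the unlucky tenth user may wait close to a second, exactly as described above.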
5.4-b | Advantages of RR: The biggest advantage that the RR algorithm has over the other
techniques is that it is fair to every process, giving each of them exactly the same share of CPU
time, so starvation is virtually impossible.
5.4-c | Disadvantages of RR: The main problem with Round-Robin is that it assumes that all
processes are equally important, so each receives an equal portion of the CPU. This sometimes
produces bad results. Consider three processes that start at the same time, each requiring three
time slices to finish. A comparison of FIFO and RR in this case is shown in the following figure:
Fig. 4 – RR and FIFO (L.H.S.) Executing three processes with the FIFO algorithm
(R.H.S.) Executing same processes with the RR algorithm
The figure above illustrates the problem with the RR algorithm. With FIFO, process A finishes
after 3 slices; B after 6; and C after 9; the average is (3+6+9)/3 = 6 slices. With RR, process A
finishes after 7 slices; B after 8; and C after 9; so the average is (7+8+9)/3 = 8 slices. This
implies that Round-Robin is fair, but uniformly inefficient in some cases.
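The numbers quoted above can be reproduced with the small simulation below (a sketch with a quantum of one slice and three made-up tasks, all ready at time 0; it is not taken from our VC++ program).

#include <deque>
#include <iostream>
#include <string>
#include <vector>

struct Task {
    std::string name;
    int remaining;  // slices still needed
};

// Run the tasks round-robin with a quantum of one slice and report when each finishes.
void roundRobin(std::vector<Task> tasks) {
    std::deque<Task*> queue;
    for (Task& t : tasks) queue.push_back(&t);

    int slice = 0, totalCompletion = 0;
    while (!queue.empty()) {
        Task* t = queue.front();
        queue.pop_front();
        --t->remaining;   // run the task for one slice
        ++slice;
        if (t->remaining > 0) {
            queue.push_back(t);   // not finished: back to the end of the queue
        } else {
            std::cout << t->name << " finishes after slice " << slice << "\n";
            totalCompletion += slice;
        }
    }
    std::cout << "RR average: " << totalCompletion / double(tasks.size()) << " slices\n";
}

int main() {
    // Three tasks, three slices each, all ready at time 0.
    roundRobin({{"A", 3}, {"B", 3}, {"C", 3}});
    // FIFO for comparison: completions at 3, 6 and 9 slices, average 6.
}

The run prints A finishing after slice 7, B after 8 and C after 9, giving the average of 8 slices mentioned above, while FIFO would finish them at 3, 6 and 9 for an average of 6.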
5.5-a | Advantages of PBS: The main advantage of priority-based scheduling is that, by using the
round-robin technique within each priority class, it makes sure that a single job does not hog the
processor.
5.5-b | Disadvantages of PBS: The biggest disadvantage of priority-based scheduling is the
concept of priority itself. If we add many processes with varying priorities to the queue, chances
are that the processes with the lowest priority will never run if higher-priority processes keep on
arriving. In that case, starvation follows.
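To see how that starvation can arise, consider the minimal selection rule sketched below (an illustration only, not our project code; the priority convention and the values are assumptions): the scheduler always dispatches the ready process with the highest priority, so a low-priority process runs only when nothing more important is waiting.

#include <iostream>
#include <queue>
#include <string>
#include <vector>

struct Proc {
    std::string name;
    int priority;   // larger value = more important (an assumed convention)
};

// Comparator so that the highest-priority process is always at the top of the heap.
struct ByPriority {
    bool operator()(const Proc& a, const Proc& b) const {
        return a.priority < b.priority;
    }
};

int main() {
    std::priority_queue<Proc, std::vector<Proc>, ByPriority> ready;
    ready.push({"LowPrio", 1});
    ready.push({"High1", 10});
    ready.push({"High2", 10});

    // The low-priority process reaches the CPU only after every higher-priority
    // process has finished; if such processes kept arriving, it would never run.
    while (!ready.empty()) {
        std::cout << "Running " << ready.top().name
                  << " (priority " << ready.top().priority << ")\n";
        ready.pop();
    }
}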
5.6-b | Advantages of MFQ: The main advantage of the MFQ algorithm is that it is highly
efficient and, because it classifies processes by their behavior, it often produces better results
than the other algorithms discussed. This is the reason variants of it are used in most operating
systems today, such as Windows, Solaris, Mac OS X, Linux, NetBSD and FreeBSD.
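A very rough sketch of the feedback idea follows (the number of queues, the quanta and the demotion rule are all assumptions chosen for illustration, not a description of any particular operating system): a job starts in the highest-priority queue with a short quantum and is demoted to a lower queue each time it uses its whole quantum.

#include <algorithm>
#include <deque>
#include <iostream>
#include <string>

struct Job {
    std::string name;
    int remaining;  // remaining CPU time in ms
};

int main() {
    // Three priority levels; lower levels get longer quanta (assumed values).
    const int levels = 3;
    const int quantum[levels] = {8, 16, 32};
    std::deque<Job> queues[levels];

    queues[0].push_back({"Interactive", 6});   // finishes within its first quantum
    queues[0].push_back({"Batch", 60});        // keeps being demoted

    for (int lvl = 0; lvl < levels;) {
        if (queues[lvl].empty()) { ++lvl; continue; }   // nothing here: look lower
        Job job = queues[lvl].front();
        queues[lvl].pop_front();

        int run = std::min(quantum[lvl], job.remaining);
        job.remaining -= run;
        std::cout << job.name << " runs " << run << " ms at level " << lvl << "\n";

        if (job.remaining > 0) {
            // Used its whole quantum: demote it one level (or stay in the lowest queue).
            int next = std::min(lvl + 1, levels - 1);
            queues[next].push_back(job);
        }
        lvl = 0;  // always go back to the highest non-empty queue first
    }
}

In this sketch the interactive job is served immediately with the short quantum, while the long batch job gradually sinks to the lowest queue, which is the behavior the categorization is meant to produce.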
With the basic scheduling techniques discussed, we present an overview and comparison of
different scheduling algorithms in the table that follows. The terms used in this table are defined
in Appendix A.
Algorithm         CPU Overhead   Throughput   Turnaround Time   Response Time
FCFS/FIFO         Low            Low          High              Low
SJF/SRT           Medium         High         Medium            Medium
Round-Robin       High           Medium       Medium            High
Priority-based    Medium         Low          High              High
MFQ               High           High         Medium            Medium
The basic purpose of the Operating Systems course is to develop an understanding of how the
basic components of an OS work. The process is one of the most fundamental and important
parts of an OS, and it is very important for a computer scientist to understand how it works. Our
semester project illustrates one dimension of this working by showing how processes are
scheduled for processing.
The program’s simulation works for a maximum of three tasks/jobs/processes entered by the
user; it shows how a job enters the queue, how scheduling takes place and how each algorithm
works, and it calculates the average waiting time (A.W.T.) for all of the entered tasks.
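As an indication of the kind of calculation involved, the standalone sketch below (not the project source itself; it assumes all jobs arrive at time 0 and are served in FCFS order) reads up to three burst times from the user and prints the A.W.T.

#include <iostream>

int main() {
    const int maxJobs = 3;
    int burst[maxJobs];
    int n = 0;

    std::cout << "Number of jobs (1-3): ";
    std::cin >> n;
    if (n < 1 || n > maxJobs) return 1;

    for (int i = 0; i < n; ++i) {
        std::cout << "Burst time of job " << i + 1 << ": ";
        std::cin >> burst[i];
    }

    // FCFS with all jobs arriving at time 0: each job waits for the ones before it.
    int waiting = 0, totalWaiting = 0;
    for (int i = 0; i < n; ++i) {
        totalWaiting += waiting;
        waiting += burst[i];
    }
    std::cout << "Average waiting time (A.W.T.): "
              << static_cast<double>(totalWaiting) / n << "\n";
}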
Appendix A – Glossary of Terms
Arrival Time: The time at which a job arrives at the ready queue.
Context Switching: The process of switching from one process to another. It includes saving
and loading registers and memory maps, updating various tables and lists, flushing and reloading
the memory cache, etc.
CPU Overhead: The extra CPU time consumed by the scheduling mechanism itself (selecting the
next process, context switching, etc.) rather than by useful work.
Degree of Multiprogramming: The number of processes kept in main memory at the same time.
The greater the degree, the higher the achievable CPU utilization.
Multithreading: Running two or more threads within a single process so that related (or
identical) pieces of code execute concurrently.
PCB (Process Control Block): A table that is maintained by OS for every process. It contains
information about the process’ state, its program counter, stack pointer, memory allocation, the
status of its open files, its accounting and scheduling information, and everything else about the
process that must be saved when the process is switched from running to ready or blocked.
Response time: Amount of time it takes from when a request was submitted until the first
response is produced.
Thread: The smallest unit of execution that can be scheduled by the OS; a process may contain
several threads, all sharing its address space and resources.
Throughput: Number of processes that complete their execution per time unit.
Turnaround Time: Total time between submission of a process and its completion.
Waiting Time (W.T.): The total time a job spends waiting in the ready queue before it gets the
processor.
Appendix B – Bibliography
This appendix contains references to the published/written/printed material that was referred to
during the compilation of this report.
Books
• “Operating Systems: Internals and Design Principles” by W. Stallings
• “Operating System Concepts” by A. Silberschatz and P. B. Galvin
Other resources
• MS Encarta – Microsoft’s Encarta encyclopedia
• “A Methodology for Comparing Service Policies Using a Trust Model” – a research
paper by Henry Hexmoor, Southern Illinois University
• “OS Concepts” – Compiled by Sami-ul-Haq