
Hamdard Institute of Information Technology (HIIT)

Hamdard University, Main Campus, Karachi

“CPU Scheduling Algorithms


and their Implementation”
[Report – Operating Systems]

Submitted To:

Mr. Kashif Alam


[Lecturer, Lab Instructor – Operating Systems]

Submitted By:

Amar Jeet [BECS/H/F10/0117]


Uzair Ahmed [BECS/H/F10/0118]

Department of Computer-Systems Engineering, FEST, Hamdard University


Project Report “CPU Scheduling Algorithms” Operating Systems

1 | Abstract
This report, entitled “CPU Scheduling Algorithms and their Implementation in Visual C++
Environment”, contains a brief introduction to the basic algorithm techniques used for CPU
scheduling and to their implementation in the Visual C++ environment. It was put together by
a group of two students in fulfillment of the semester project assigned by Mr. Kashif Alam,
lecturer of “Operating Systems” at HIIT (Hamdard University, Main Campus, Karachi).

2 | Acknowledgment
We, the students who were assigned this report, hereby acknowledge that the material used in
this report is not plagiarized, wholly or partially; proper references have been given wherever
external printed, written or published material was used; and the instructor has the right to
reject a portion of the report, or the report altogether, if he finds otherwise.

We have tried our best to keep the references correct and the facts and figures up to date but,
being human, we may have made errors or lapses, for which we request the reader’s indulgence.

Amar Jeet
[BECS/H/F10/0117]

Uzair Ahmed
[BECS/H/F10/0118]

Submitted to: Mr. Kashif Alam 1



Table of Contents

TOPIC PAGE
1 Abstract 1
2 Acknowledgment 1
3 “CPU SCHEDULING” – AN INTRODUCTION 4
3.1 What is CPU Scheduling? 4
3.2 Why do we need Scheduling? 4
3.3 Goals of CPU Scheduling 4
3.4 When to Schedule? 4
4 TYPES OF SCHEDULING 5
4.1 Long-term Scheduling 5
4.2 Medium-term Scheduling 5
4.3 Short-term Scheduling 5
5 SCHEDULING ALGORITHMS 6
5.1 Preemptive vs. Non-preemptive Algorithms 7
5.2 FCFS Algorithm 7
5.2a Advantages of FCFS 7
5.2b Disadvantages of FCFS 8
5.3 SJF/SRT Algorithms 8
5.3a Advantages of SJF/SRT 8
5.3b Disadvantages of SJF/SRT 9
5.4 Round-Robin Algorithm 9
5.4a The “Quantum Issue” 9
5.4b Advantages of RR 10
5.4c Disadvantages of RR 10
5.5 Priority-based Scheduling 10
5.5a Advantages of PBS 10
5.5b Disadvantages of PBS 10
5.6 MFQ Algorithm 11
5.6a Working of MFQ Algorithm 11
5.6b Advantages of MFQ 11
5.6c Disadvantages of MFQ 11
6 OVERVIEW & APPLICATIONS 11
6.1 Comparison Table 12
6.2 Choosing the Right Algorithm 12
6.3 Implementation of Different Algorithm Techniques 12
6.4 Algorithms used in Common Operating Systems 13
7 VISUAL REPRESENTATION OF ALGORITHMS IN VC++ 13
7.1 Software and Language Details 13
7.2 Working of the Program 14
7.3 Algorithms of Programs 14
7.3a Algorithm of SJF (Non-Preemptive) 14
7.3b Algorithm of SJF (Preemptive) 14
7.3c Algorithm of FCFS 14
7.3d Algorithm of Round-Robin 14

Appendix A – Definitions & Concepts 15


Appendix B – Bibliography 16


“CPU Scheduling Algorithms and their


Implementation in Visual C++ Environment”
3 | “CPU SCHEDULING” – AN INTRODUCTION

3.1 | What exactly is Scheduling?


In computer science, scheduling is the method by which threads, processes or data flows are
given access to system resources (e.g. processor time, communications bandwidth). This is
usually done to load balance a system effectively or achieve a target quality of service. [ref 1.1]

3.2 | Why do we need Scheduling?


The basic reason for implementing scheduling algorithms is that modern computer systems rely
on multi-tasking and multiplexing. Multi-tasking is the execution of more than one process or
thread at a time, and multiplexing is the transmission of multiple data flows simultaneously.
Another reason is that modern systems rely heavily on I/O operations and must respond quickly
whenever an I/O request is invoked. [ref 1.2]

3.3 | Goals of CPU Scheduling


A good scheduling algorithm is designed to have the following features:

• Utilization/Efficiency: Keep the CPU busy 100% of the time with useful work.
• Throughput: Maximize the number of jobs processed per hour.
• Turnaround time: Minimize the time from submission to completion that batch users must
wait for output.
• Waiting time: Minimize the total time each process spends in the ready queue.
• Response time: Minimize the time from submission until the first response is produced,
which matters most to interactive users.
• Fairness: Make sure each process gets a fair share of the CPU.

3.4 | When to Schedule?


A key issue related to scheduling is when to make scheduling decisions. It turns out that there are
a variety of situations in which scheduling is needed.
First, when a new process is created, a decision needs to be made whether to run the parent
process or the child process. Since both processes are in ready state, it is a normal scheduling
decision and it can go either way, that is, the scheduler can legitimately choose to run either the
parent or the child next.
Second, a scheduling decision must be made when a process exits. That process can no longer
run (since it no longer exists), so some other process must be chosen from the set of ready
processes. If no process is ready, a system-supplied idle process is normally run.


Third, when a process blocks on I/O, on a semaphore, or for some other reason, another process
has to be selected to run. Sometimes the reason for blocking may play a role in the choice. For
example, if A is an important process and it is waiting for B to exit its critical region, letting B
run next will allow it to exit its critical region and thus let A continue. The trouble, however, is
that the scheduler generally does not have the necessary information to take this dependency into
account.
Fourth, when an I/O interrupt occurs, a scheduling decision may be made. If the interrupt came
from an I/O device that has now completed its work, some process that was blocked waiting for
the I/O may now be ready to run. It is up to the scheduler to decide if the newly ready process
should be run, if the process that was running at the time of the interrupt should continue
running, or if some third process should run.

4 | TYPES OF CPU SCHEDULING

The aim of processor scheduling is to assign processes to be executed by the processor or


processors over time, in a way that meets system objectives, such as response time, throughput,
and processor efficiency. In many systems, this scheduling activity is broken down into three
separate functions: long-, medium-, and short-term scheduling. The names suggest the relative
time scales with which these functions are performed.

4.1 | Long-term Scheduling


The long-term scheduler determines which programs are admitted to the system for processing.
Thus, it controls the degree of multiprogramming. Once admitted, a job or user program
becomes a process and is added to the queue for the short-term scheduler. In some systems, a
newly created process begins in a swapped-out condition, in which case it is added to a queue for
the medium-term scheduler. The decision as to which job to admit next can be made on a simple
first-come-first-served basis, or it can be a tool to manage system performance; the criteria
used may include priority, expected execution time, and I/O requirements.

4.2 | Medium-term Scheduling


Medium-term scheduling is part of the swapping function. Typically, the swapping-in decision is
based on the need to manage the degree of multiprogramming. On a system that does not use
virtual memory, memory management is also an issue. Thus, the swapping-in decision will
consider the memory requirements of the swapped-out processes.

4.3 | Short-term Scheduling


The short-term scheduler is invoked whenever an event occurs that may lead to the blocking of
the current process or that may provide an opportunity to preempt a currently running process in
favor of another. Examples of such events include:
• Clock interrupts,
• I/O interrupts,

• OS calls,
• Signals (semaphores, etc.)

Fig. 1 – “Levels of Scheduling”

5 | SCHEDULING ALGORITHMS
Scheduling algorithms are the techniques used for distributing resources among parties which
simultaneously and asynchronously request them. Scheduling disciplines are used in routers (to
handle packet traffic) as well as in operating systems (to share CPU time among
both threads and processes), disk drives (I/O scheduling), printers (print spooler), most
embedded systems, etc.
The main purposes of scheduling algorithms are to minimize resource starvation and to ensure
fairness amongst the parties utilizing the resources. Scheduling deals with the problem of

deciding which of the outstanding requests is to be allocated resources. There are many different
scheduling algorithms. In this section, we introduce several of them.

There are several types of CPU scheduling algorithms, each with its own characteristics,
advantages and disadvantages. We will discuss the most important and significant of these in
this report, namely:

• First-Come First-Served
• Shortest Job First (SJF) / Shortest Remaining Time (SRT)
• Round-Robin
• Priority-based
• Multi-level Feedback Queue

Before we move on to the algorithm techniques, it is necessary to define the concepts of
preemption and non-preemption.

5.1 | Preemptive vs. Non-preemptive Algorithms


Preemptive scheduling is driven by the notion of prioritized computation. It requires that the
process with the highest priority should always be the one currently using the processor. If a
process is currently using the processor and a new process with a higher priority enters the ready
list, the process on the processor should be removed and returned to the ready list until it is once
again the highest-priority process in the system.

Non-preemptive scheduling, on the other hand, is designed so that once a process enters the
running state (i.e. is allocated the processor), it cannot be removed from the processor until it
has completed its service time (or it explicitly yields the processor).

With the concept of preemption described, we now move on to the algorithm techniques.

5.2 | FCFS Algorithm


FCFS (First-Come, First-Served) or FIFO (First-In, First-Out) is one of the simplest and most
easily implemented techniques. The FCFS algorithm simply runs processes in the same order as
they arrive in the ready queue. When a process comes in, it is added to the tail of the ready
queue; when the running process terminates, it is de-queued and the process at the head of the
ready queue is set to run. FIFO is a non-preemptive algorithm, i.e. no process can be forced to
give up its resources when a higher-priority process arrives in the queue.

5.2-a | Advantages of FCFS: The greatest strength of this algorithm is that it is easy to
understand and equally easy to program. It is also fair in the same sense that allocating scarce
sports or concert tickets to people who are willing to stand on line starting at 2 A.M. is fair. With
this algorithm, a single linked list keeps track of all ready processes. Picking a process to run just

requires removing one from the front of the queue. Adding a new job or unblocked process just
requires attaching it to the end of the queue. What could be simpler?

5.2-b | Disadvantages of FCFS: Unfortunately, FCFS also has a serious disadvantage: it pays no
attention to processing time and does not prioritize processes. If a process with a processing
time of 100ms arrives in the ready queue followed by several processes of 1ms each, FCFS starts
the 100ms process first and the much shorter processes must all wait behind it for 100ms. In
this way, the average turnaround time under FCFS can become significantly high as longer
processes hog the CPU (the so-called “convoy effect”).

5.3 | SJF/SRT Algorithms


SJF (Shortest Job First), also called SPN (Shortest Process Next), is quite similar to the SRT
(Shortest Remaining Time) algorithm, with only the aspect of preemption which differentiates
them. In the non-preemptive SJF algorithm, the process with the shortest expected processing
time is selected next. Thus a short process will jump to the head of the queue past longer jobs.
This requires advanced knowledge or estimations about the time required for a process to
complete.

The SRT algorithm, on the other hand, introduces the concept of preemption with same
techniques described with SJF. In this case, the scheduler always chooses the process that has the
shortest expected remaining processing time. When a new process joins the ready queue, it may
in fact have a shorter remaining time than the currently running process. Accordingly, the
scheduler may preempt the current process when a new process becomes ready. As with SJF, the
scheduler must have an estimate of processing time to perform the selection function.

Fig. 2 – The working of SJF technique (a) Running four processes in original order
(b) Running same processes in SJF order

Take a look at the example of Fig. 2. In the original order, A runs first and hogs the system
while the shorter processes wait. If run in SJF order, the shorter jobs run first and A gets its
turn last.

5.3-a | Advantages of SJF/SRT: Unlike FCFS, the SJF/SRT algorithms are not biased in favor of
long processes. They also provide the maximum throughput in most cases, since short jobs
complete and leave the system quickly.


5.3-b | Disadvantages of SJF/SRT: The main disadvantage of SJF/SRT is that both need advance
knowledge of, or estimates for, each process’s execution time, which makes them hard to
implement. Moreover, if short processes keep arriving in an SRT-based system, the longer
processes will suffer starvation.

5.4 | Round-Robin Algorithm


Round-Robin is one of the most important and significant algorithms; almost all modern
techniques work on the same grounds. It uses cyclic execution and allocates every process the
same slice of execution time, known as the “Quantum” or “Time-slice”. It keeps a queue of all
waiting processes, picks the first job, gives it one time-slice to execute, and when that time
is over moves the job to the tail of the queue; the cycle then begins again with the next job in
the queue.

Fig. 3 – Round-Robin Technique (a) The list of runnable processes


(b) List of runnable processes after job B has used its Quantum (Time-slice)

5.4-a | The “Quantum Issue”: The only interesting issue with round robin is the length of the
quantum. Switching from one process to another requires a certain amount of time for doing the
administration—saving and loading registers and memory maps, updating various tables and
lists, flushing and reloading the memory cache, etc. Suppose that this process switch or context
switch, as it is sometimes called, takes 1 msec, including switching memory maps, flushing and
reloading the cache, etc. Also suppose that the quantum is set at 4 msec. With these parameters,
after doing 4 msec of useful work, the CPU will have to spend 1 msec on process switching.
Twenty percent of the CPU time will be wasted on administrative overhead. Clearly this is too
much.

To improve the CPU efficiency, we could set the quantum to, say, 100 msec. Now the wasted
time is only 1 percent. But consider what happens on a timesharing system if ten interactive
users hit the carriage return key at roughly the same time. Ten processes will be put on the list of
runnable processes. If the CPU is idle, the first one will start immediately, the second one may
not start until 100 msec later, and so on. The unlucky last one may have to wait 1 sec before
getting a chance, assuming all the others use their full quanta. Most users will perceive a 1-sec
response to a short command as sluggish.


Another factor is that if the quantum is set longer than the mean CPU burst, preemption will
rarely happen. Instead, most processes will perform a blocking operation before the quantum
runs out, causing a process switch. Eliminating preemption improves performance because
process switches then only happen when they are logically necessary, that is, when a process
blocks and cannot continue.
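The overhead arithmetic from the quantum discussion above fits in a one-line helper (a sketch in standard C++; the function name is our own, not from the report’s program):

```cpp
// Fraction of CPU time lost to administration: each quantum of useful
// work (quantumMs) is followed by one context switch (switchMs).
double overheadFraction(double quantumMs, double switchMs) {
    return switchMs / (quantumMs + switchMs);
}
```

With the figures used in the text, `overheadFraction(4, 1)` gives 0.20 (20% of CPU time wasted) and `overheadFraction(100, 1)` gives roughly 0.0099 (about 1%), matching the two cases discussed.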

5.4-b | Advantages of RR: The biggest advantage that RR-algorithm has over other algorithm
techniques is that it is fair to every process, giving all of them exactly the same time, and thus
starvation is virtually impossible to happen.

5.4-c | Disadvantages of RR: The main problem with Round-Robin is that it assumes that all
processes are equally important, thus each receives an equal portion of the CPU. This sometimes
produces bad results. Consider three processes that start at the same time and each require three
time slices to finish. A comparison of FIFO and RR in this case is explained by this figure:

Fig. 4 – RR and FIFO (L.H.S.) Executing three processes with the FIFO algorithm
(R.H.S.) Executing same processes with the RR algorithm

The picture above describes the problem with RR algorithm. In the FIFO technique, Process A
finishes after 3 slices; B, 6; and C, 9. The average is (3+6+9)/3 = 6 slices. But in the case of RR,
Process A finishes after 7 slices; B, 8; and C, 9; so the average is (7+8+9)/3 = 8 slices. This
implies that Round-Robin is fair, but uniformly inefficient in some of the cases.
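The figures in this comparison can be reproduced with a small simulation (a sketch in standard C++; the function name is our own). Under FIFO the three processes simply finish at slices 3, 6 and 9; the round-robin case is simulated below:

```cpp
#include <deque>
#include <vector>

// Completion time (in slices) of each process under round-robin with a
// one-slice quantum; all processes arrive at slice 0.
std::vector<int> rrCompletion(std::vector<int> slicesLeft) {
    std::vector<int> done(slicesLeft.size(), 0);
    std::deque<int> queue;
    for (int i = 0; i < static_cast<int>(slicesLeft.size()); ++i)
        queue.push_back(i);
    int clock = 0;
    while (!queue.empty()) {
        int p = queue.front();
        queue.pop_front();
        ++clock;                           // run one slice
        if (--slicesLeft[p] == 0) done[p] = clock;
        else queue.push_back(p);           // rejoin at the tail of the queue
    }
    return done;
}
```

For three processes of three slices each, `rrCompletion({3, 3, 3})` yields completions at slices 7, 8 and 9, giving the average of 8 quoted above.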

5.5 | Priority-based Scheduling


Priority-based scheduling (PBS) combines preemption with the Round-Robin algorithm. It runs the
highest-priority processes first, using the round-robin technique among processes of equal
priority. When a new job arrives, the scheduler inserts it into the run queue behind all
processes of greater or equal priority.

5.5-a | Advantages of PBS: The main advantage of Priority-based scheduling is that it uses
round-robin technique to make sure that a single job does not hog the resources of the processor.

5.5-b | Disadvantages of PBS: The biggest disadvantage of the priority-based scheduling is the
concept of priority itself! If we add a lot of processes to a queue with varying priorities assigned
to them, chances are that the processes with the least priority will never run if more processes
keep on coming. In this case, the starvation follows.
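The insertion rule above (a new job goes behind all jobs of greater or equal priority) can be sketched as a stable ordered insert; the names and the list representation here are our own, not the report’s program:

```cpp
#include <list>

struct Job { int pid; int priority; };   // larger number = higher priority

// Insert behind every queued job of greater or equal priority, so jobs of
// equal priority keep their round-robin (FIFO) order among themselves.
void insertJob(std::list<Job>& runQueue, const Job& j) {
    auto it = runQueue.begin();
    while (it != runQueue.end() && it->priority >= j.priority) ++it;
    runQueue.insert(it, j);
}
```

Inserting pids 1 (priority 5), 2 (priority 3) and 3 (priority 5) in that order yields the queue order 1, 3, 2: the new equal-priority job lands behind the existing one, which is exactly what keeps the round-robin rotation fair, and also why the lowest-priority job can be pushed back indefinitely.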


5.6 | MFQ Algorithm


MFQ (Multi-level Feedback Queue) or MQS (Multi-level Queue Scheduling) is one of the most
complex, diverse and widely used algorithms in today’s operating systems. It involves multiple
queues of tasks, grouped by their external priorities, and each queue can use a different
algorithm. This means we can categorize jobs by their nature and execute each category with the
best-available algorithm; for example, we can use the RR algorithm for longer jobs and SJF for
I/O-bound requests. MFQ also works well on multi-core systems, where it is highly efficient.

5.6-a | Working of MFQ Algorithm


The MFQ algorithm works in the following fashion:

1. A new process is positioned at the end of the top-level FIFO queue.


2. At some stage the process reaches the head of the queue and is assigned to the CPU.
3. If the process is completed, it leaves the system.
4. If the process voluntarily relinquishes control, it leaves the queuing network, and when the
process becomes ready again it enters the system on the same queue level.
5. If the process uses all the quantum time, it is preempted and positioned at the end of the next,
lower-level queue.
6. This will continue until the process completes or it reaches the base-level queue.
• At the base-level queue the processes circulate in round-robin fashion until they complete
and leave the system.
• Optionally, if a process becomes blocked for I/O, it is ‘promoted’ one level and placed at
the end of the next-higher queue. This allows I/O-bound processes to be favored by the
scheduler and lets processes ‘escape’ the base-level queue.
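The demotion mechanics of these steps can be sketched with nothing more than a vector of queues (a simplified model with our own names; it tracks queue membership only, without timing or the optional I/O promotion):

```cpp
#include <cstddef>
#include <deque>
#include <vector>

struct MFQ {
    std::vector<std::deque<int>> level;               // level[0] is the top queue
    explicit MFQ(std::size_t levels) : level(levels) {}

    void admit(int pid) { level[0].push_back(pid); }  // step 1: new process on top

    // Dispatch the frontmost ready process for one quantum (steps 2-6).
    // 'finished' says whether it completed within the quantum; if not, it is
    // demoted one level, and the base level recirculates round-robin.
    int dispatch(bool finished) {
        for (std::size_t l = 0; l < level.size(); ++l) {
            if (level[l].empty()) continue;
            int pid = level[l].front();
            level[l].pop_front();
            if (!finished) {
                std::size_t next = (l + 1 < level.size()) ? l + 1 : l;
                level[next].push_back(pid);
            }
            return pid;
        }
        return -1;                                    // no ready process
    }
};
```

A process admitted to a three-level MFQ and repeatedly exhausting its quantum drops from level 0 to 1 to 2, then circulates at the base level until it finishes.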

5.6-b | Advantages of MFQ: The main advantage of MFQ algorithm is that it is highly efficient
and it produces results better than any other algorithm because of its categorization feature. This
is the reason it is used in most of the operating systems today like Windows, Solaris, MAC OS
X, Linux, NetBSD, FreeBSD, etc.

5.6-c | Disadvantages of MFQ: The MFQ algorithm is potentially less efficient on single-core
systems, because a single core cannot work on its categorized queues simultaneously. Moreover,
a process is given just one chance to complete at a given queue level before it is forced down
to a lower-level queue, which can leave long processes starved of resources.

6 | OVERVIEW & APPLICATIONS

With the basic scheduling techniques discussed, we present an overview and comparison of
different scheduling algorithms in the table that follows. The terms used in this table are defined
in Appendix A.


6.1 | Comparison Table

Algorithm        CPU Overhead   Throughput   Turnaround Time   Response Time
FCFS/FIFO        Low            Low          High              Low
SJF/SRT          Medium         High         Medium            Medium
Round-Robin      High           Medium       Medium            High
Priority-based   Medium         Low          High              High
MQS              High           High         Medium            Medium

Table 1 – Comparison of different scheduling algorithms

6.2 | Choosing the Right Algorithm


When designing an operating system, a programmer must consider which scheduling algorithm will
perform best for the use the system is going to see. There is no universal “best” scheduling
algorithm, and many operating systems use extensions or combinations of the algorithms above.
For example, Windows NT/XP/Vista uses a multi-level feedback queue, a combination of
Priority-based Scheduling, Round-Robin, and First-In, First-Out. In this system, a process can
dynamically increase or decrease in priority depending on whether it has been serviced recently
or has been waiting extensively. Every priority level is represented by its own queue, with
round-robin scheduling among the high-priority processes and FIFO among the lower ones. In this
sense, response time is short for most processes, and short but critical system processes get
completed very quickly. Since processes can only use one time unit of the round-robin in the
highest-priority queue, starvation can be a problem for longer high-priority processes. Thus a
programmer must first visualize all the problems his OS is expected to face, then thoroughly
test different sets of these problems against different techniques; whichever technique
produces comparatively better results is the better choice.

6.3 | Implementation of Different Algorithms


The algorithm used in an Operating System might be as simple as FCFS or RR, and on the other
extreme, it could be a combination of different algorithms like MQS. More advanced algorithms
take into account process priority, or the importance of the process. This allows some processes
to use more time than other processes. The kernel always uses whatever resources it needs to
ensure proper functioning of the system, and so can be said to have infinite priority. In SMP
(Symmetric Multi-Processing) systems, processor affinity is considered to increase overall
system performance, even if it may cause a process itself to run more slowly. This generally
improves performance by reducing cache thrashing. A table consisting of information on old and
modern CPU scheduling techniques with respect to Operating Systems is given as follows.


6.4 | Algorithms used in Common Operating Systems

Operating System                                       Preemption   Algorithm
Windows 3.1x                                           None         Cooperative scheduler
Windows 95, 98, Me                                     Half         Preemptive for 32-bit processes, cooperative for 16-bit processes
Windows NT (including 2000, XP, Vista, 7 and Server)   Yes          Multilevel feedback queue
Mac OS pre-9                                           None         Cooperative scheduler
Mac OS 9                                               Some         Preemptive for MP tasks, cooperative scheduler for processes and threads
Mac OS X                                               Yes          Multilevel feedback queue
Linux pre-2.6                                          Yes          Multilevel feedback queue
Linux 2.6–2.6.23                                       Yes          O(1) scheduler
Linux post-2.6.23                                      Yes          Completely Fair Scheduler
Solaris                                                Yes          Multilevel feedback queue
NetBSD                                                 Yes          Multilevel feedback queue
FreeBSD                                                Yes          Multilevel feedback queue

Table 2 – Algorithms used in Common Operating Systems

7 | VISUAL REPRESENTATION OF ALGORITHMS IN VC++

The basic purpose of the Operating Systems course is to develop an understanding of how the
basic components of an OS work. The process is one of the most fundamental and important parts
of an OS, and it is very important for a computer scientist to understand how it works. Our
semester project explains one dimension of this: how processes are scheduled for processing.

7.1 | Software and Language Details


We, the students who compiled this report, worked with Visual Studio (the VC++ environment) in
the 3rd semester of our studies, so we selected the same software as the platform to develop a
program that explains, by simulation, how different scheduling algorithms work. The software
used for basic code testing and debugging was C-Free 5.0; at the later stage, VC++ 2010
Ultimate was used for final simulation and execution. A Java version of the same program is
also being developed in the NetBeans environment.


7.2 | Working of the Program


The interface of our program is very simple, and it gets the necessary prerequisites from the
user to run the simulation. On the home screen, users select the algorithm of their choice from
several buttons, then enter the number of tasks they want to calculate the average waiting time
for. If the user selects the Round-Robin algorithm, they are also prompted to enter the
quantum. Once this preliminary input is done, the simulation begins. Since the program is at an
early stage, it assumes that all tasks arrive at 0ms (arrival time).

The simulation works for a maximum of three tasks/jobs/processes. It effectively shows how a
job enters the queue, how scheduling takes place and how the algorithms work, and it calculates
the A.W.T. for all of the entered tasks.

7.3 | Algorithms of the Programs


The algorithms for the programs used in the implementation are given below:

7.3-a | The algorithm for SJF (Non-preemptive):


1. READ Burst Times for ready Processes.
2. Sort the processes in ascending order.
3. PROCESS the list of processes in ascending order.
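These steps can be sketched in standard C++ (not the report’s actual VC++ program; the function name is ours), returning the average waiting time under the program’s all-arrive-at-0 assumption:

```cpp
#include <algorithm>
#include <vector>

// Steps 1-3 above: sort burst times ascending (step 2), then run the
// processes in that order (step 3), accumulating waiting times.
double sjfAverageWait(std::vector<int> burst) {
    std::sort(burst.begin(), burst.end());
    int clock = 0, totalWait = 0;
    for (int b : burst) {
        totalWait += clock;   // each process waits for all shorter ones
        clock += b;
    }
    return static_cast<double>(totalWait) / burst.size();
}
```

Bursts of 6, 8, 7 and 3 ms run as 3, 6, 7, 8, giving waits of 0, 3, 9 and 16 ms and an average waiting time of 7 ms.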

7.3-b | The algorithm for SJF (Preemptive):


1. READ Burst Times for ready Processes.
2. Sort the processes in ascending order.
3. PROCESS the list of processes in ascending order:
a. If a new process arrives during processing, halt the ongoing process, save its
parameters (remaining time, etc.) in the PCB, and add the new process to the ready
queue according to its burst time and arrival time.
b. Start the queue again for processing.
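A minimal simulation of this preemptive variant follows (a sketch at 1 ms resolution with our own names; the report’s program maintains a PCB instead of re-scanning the task list each tick):

```cpp
#include <vector>

struct Task { int arrival; int burst; };   // both in ms

// Shortest Remaining Time: every millisecond, run the arrived task with
// the least remaining time; returns the average waiting time, where
// waiting = completion - arrival - burst.
double srtAverageWait(const std::vector<Task>& tasks) {
    std::vector<int> remaining;
    for (const Task& t : tasks) remaining.push_back(t.burst);
    int done = 0, clock = 0, totalWait = 0;
    while (done < static_cast<int>(tasks.size())) {
        int pick = -1;
        for (int i = 0; i < static_cast<int>(tasks.size()); ++i)
            if (tasks[i].arrival <= clock && remaining[i] > 0 &&
                (pick < 0 || remaining[i] < remaining[pick]))
                pick = i;                   // preempt in favor of the shortest
        if (pick >= 0 && --remaining[pick] == 0) {
            ++done;
            totalWait += (clock + 1) - tasks[pick].arrival - tasks[pick].burst;
        }
        ++clock;                            // idle tick if nothing is ready
    }
    return static_cast<double>(totalWait) / tasks.size();
}
```

For arrival/burst pairs (0, 8), (1, 4), (2, 9) and (3, 5), the 4 ms task preempts the first one at t = 1 and the schedule completes with waits of 9, 0, 15 and 2 ms, an average of 6.5 ms.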

7.3-c | The algorithm for FCFS:


1. READ arrival times for processes.
2. Sort the processes in ascending order of arrival time.
3. Start the ready queue and process the processes according to the sorted list.

7.3-d | The algorithm for Round-Robin:


1. READ arrival times and burst times for processes.
2. READ quantum from user.
3. Start LOOP with time = quantum value.
a. For every arriving process, insert it into ready queue accordingly and give
running process a time equal to quantum.
b. PROCESS each process till queue becomes empty.
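The loop can be sketched as follows (standard C++; names are ours, and tasks are assumed to arrive at 0 ms as in §7.2, so the arrival-time bookkeeping of step 3a is omitted):

```cpp
#include <algorithm>
#include <deque>
#include <vector>

// Round-robin with a fixed quantum; all tasks ready at 0 ms. Returns the
// average waiting time (completion time minus burst time).
double rrAverageWait(const std::vector<int>& burst, int quantum) {
    std::vector<int> remaining = burst;
    std::deque<int> queue;
    for (int i = 0; i < static_cast<int>(burst.size()); ++i)
        queue.push_back(i);
    int clock = 0, totalWait = 0;
    while (!queue.empty()) {
        int p = queue.front();
        queue.pop_front();
        int run = std::min(quantum, remaining[p]);  // one quantum, or less
        clock += run;
        remaining[p] -= run;
        if (remaining[p] > 0) queue.push_back(p);   // back to the tail
        else totalWait += clock - burst[p];
    }
    return static_cast<double>(totalWait) / burst.size();
}
```

Bursts of 24, 3 and 3 ms with a 4 ms quantum give waits of 6, 4 and 7 ms, an average of about 5.67 ms.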


Appendix A – Definitions & Concepts


This appendix contains definitions of the computer-science terms used in the report.

Allocation: The assignment of particular areas of a magnetic disk/primary or secondary memory


to particular data or instructions.

Arrival Time: The time at which a job arrives at the ready queue.

Context Switching: The process of switching from one process to another. It includes saving
and loading registers and memory maps, updating various tables and lists, flushing and reloading
the memory cache, etc.

CPU Overhead: The processing time required by a device prior to the execution of a command.

Degree of Multiprogramming: The number of processes held in memory at once. The greater the
degree, the higher the potential CPU utilization.

Multiplexing: Transmit multiple data-flows simultaneously.

Multitasking: Execute multiple processes at the same time.

Multithreading: Using multiple tasks (threads) to execute two or more related (or the same)
pieces of code at the same time.

PCB (Process Control Block): A table that is maintained by OS for every process. It contains
information about the process’ state, its program counter, stack pointer, memory allocation, the
status of its open files, its accounting and scheduling information, and everything else about the
process that must be saved when the process is switched from running to ready or blocked.

Response time: Amount of time it takes from when a request was submitted until the first
response is produced.

Thread: The smallest unit of execution within a process. Threads belonging to the same process
share its address space and resources.

Throughput: Number of processes that complete their execution per time unit.

Turnaround Time: Total time between submission of a process and its completion.

Waiting Time: Time for which a job has to wait before it starts processing is W.T.


Appendix B – Bibliography

This appendix contains the references to the published/written/printed material that was referred
during the compilation of this report.

Books
• “Operating Systems: Internals and Design Principles” by W. Stallings
• “Operating System Concepts” by A. Silberschatz and P. B. Galvin

Internet / Online Resources


• Wikipedia.com – Online encyclopedia
• Wikiversity.org – Wikipedia’s online learning portal
• cs.gmu.edu - George Mason University’s online notes
• bridgeport.edu/sed/ - Bridgeport University’s online notes
• csail.mit.edu/rinard/osnotes/ - MIT’s online notes

Other resources
• MS Encarta – Microsoft’s Encarta encyclopedia
• “A Methodology for Comparing Service Policies Using a Trust Model” – a research
paper by Henry Hexmoor, Southern Illinois University
• “OS Concepts” – Compiled by Sami-ul-Haq
