
Operating System:15CS43T 2020-21

UNIT- 2
PROCESS MANAGEMENT
2.1 Process Concepts
A process is a program in execution. A process is more than the program code: the program itself is a "passive entity", a file containing a list of instructions stored on disk.

A process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources.

2.1.1 Process State


As a process executes, it changes state. Figure 2.1 illustrates the process states.

Fig 2.1 Process State

New:-The process is being created.

Ready:-The process is waiting to be assigned to a processor.

Running:-Instructions are being executed.

Waiting:-The process is waiting for some event to occur.

Terminated:- The process has finished execution.

On a single processor, only one process can be in the running state at any instant, but many processes may be in the ready or waiting states.

Computer Science and Engineering 14



2.1.2 Process Control Block (PCB)

It is also known as a "task control block". Each process in the operating system is represented by a process control block, which contains many pieces of information associated with that specific process. Figure 2.2 shows the information held in a PCB.

Process State

Process Number

Program
Counter

Registers

Memory Limits

List of open files

.............. so on

Fig 2.2 Process control block (PCB)

1. Process State: the state may be new, ready, running, waiting, or terminated.

2. Process Number: the process number or ID used to identify the process.

3. Program Counter: the address of the next instruction to be executed.

4. Registers: the accumulator, general-purpose registers, index registers, etc. The contents of these registers must be saved when an interrupt occurs.

5. Memory Limits OR Memory-Management Information: the values of the base and limit registers, and the page tables or segment tables, depending on the memory system used by the operating system.

6. Accounting Information: CPU time used, elapsed time, account numbers, etc.

7. I/O Status Information: the list of I/O devices allocated to the process, a list of open files, etc.

8. CPU-Scheduling Information: scheduling parameters such as the process priority and pointers to scheduling queues.


2.2 Process Scheduling


In multiprogramming, the processor is shared among several concurrent programs. The process scheduler selects a process from a queue and allocates the CPU to it for execution.

2.2.1 Scheduling Queues


Processes entering the system are placed in the "job pool" or "job queue". The processes that are residing in main memory, ready and waiting to execute, are kept on a list called the "ready queue".

"Device queues" also exist in the system: the list of processes waiting for a particular I/O device is called that device's queue, and each device has its own queue, as illustrated in figure 2.3.

Fig2.3 Ready Queue and Various Device Queue

Each queue has a queue header containing two pointers, "head" and "tail". The head pointer points to the first process control block (PCB) in the queue, and the tail pointer points to the last.

In addition, each PCB contains a pointer to the next PCB, so each queue is stored as a "linked list".


2.2.2 Queuing Diagram

Process scheduling is done as shown in the queuing diagram (figure 2.4)

Fig 2.4 Queuing-diagram representation of process scheduling

Each rectangular box represents a “Queue”. Two types of queue are available.
1. Ready queue
2. Device queue
A new process is initially put in the "ready queue", where it waits for the CPU. Once the CPU has been allocated to it, one of the following events may occur:

 The process could issue an I/O request and then be placed in an I/O queue.
 The process could create a new subprocess and wait for its termination.
 The process's time slot could expire, returning it to the ready queue.
 The process could be removed forcibly from the CPU as a result of an interrupt.

2.2.3 Scheduler

The scheduler is the part of the operating system that selects processes from the various queues. The selection is carried out by the appropriate scheduler. The common schedulers are:

 Long-term scheduler (job scheduler)
 Short-term scheduler (CPU scheduler)
 Medium-term scheduler

Long-term scheduler or job scheduler: selects which processes should be brought into the ready queue.
 The long-term scheduler selects processes from the job pool (queue) and loads them into memory for execution.


 The long-term scheduler executes much less frequently.


 It controls the degree of multiprogramming (The number of processes in main
memory).

If the degree of multiprogramming is stable, then the average rate of process creation
must be equal to the average departure rate of processes leaving the system.
 The long-term scheduler may need to be invoked only when a process leaves the
system.
 Processes can be described as either:
o I/O-bound process – spends more time doing I/O than computations, many
short CPU bursts.
o CPU-bound process – spends more time doing computations; few very
long CPU bursts.
Short-term scheduler or CPU Scheduler: selects which process should be executed next
and allocates CPU

 The short-term scheduler decides which of the processes ready in main memory should be allocated the CPU for execution.
 The short-term scheduler executes much more frequently than the long-term scheduler.
 A process may execute for only a few milliseconds before waiting for an I/O request.
 Because it runs so often, the short-term scheduler must be very fast.
Medium-term scheduler
 The idea behind the medium-term scheduler is to remove processes from memory and thereby reduce the degree of multiprogramming.

 The process can be reintroduced into memory and its execution can be continued
where it left off. This scheme is called “Swapping”. The figure 2.5 illustrates the
medium-term scheduling.

Fig 2.5 Medium-term scheduling to the queuing diagram



2.2.4 Context switch


Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This task is known as a "context switch". The system does no useful work while switching, so context-switch time is pure overhead. Its speed varies from machine to machine and is highly dependent on hardware support.

2.3 Operations on Processes


There are two operations on processes:
 Process creation
 Process termination
2.3.1 Process Creation
A process may create several new processes during the course of execution. The creating
process is called a “Parent Process” and the new processes are called “Child Process”.

In general, a process needs certain resources (CPU time, memory, files, I/O devices) to accomplish its task. When a process creates a subprocess, the subprocess may obtain its resources directly from the operating system, or it may be constrained to a subset of the resources of its parent process.

When a parent process creates a child process, there are two possibilities for execution:

1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.

There are also two possibilities for the address space of the new process:
1. The child process is a duplicate of the parent process (it has the same program and data as the parent).
2. The child process has a new program loaded into it.
Figure 2.6 illustrates the creation of a process using system calls.

parent wait Resume

fork()

exec() exit()
Child

Fig 2.6 Process Creation
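The fork()/exec()/wait() sequence of Fig 2.6 can be sketched in Python on a POSIX system. This example is illustrative, not part of the original notes (os.fork is unavailable on Windows):

```python
import os
import sys

# Sketch of Fig 2.6: the parent fork()s a child, the child replaces its
# image with exec(), and the parent wait()s until the child exit()s.
def run_child(argv):
    pid = os.fork()
    if pid == 0:                        # child process
        try:
            os.execvp(argv[0], argv)    # load a new program; never returns
        finally:
            os._exit(127)               # only reached if exec fails
    _, status = os.waitpid(pid, 0)      # parent blocks until child terminates
    return os.waitstatus_to_exitcode(status)

if __name__ == "__main__":
    # Run the interpreter itself as the child; it exits with status 0.
    print(run_child([sys.executable, "-c", "pass"]))
```

Here the parent chooses to wait; a parent that skips the waitpid() call would instead continue executing concurrently with its child.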



2.3.2 Process Termination


A process terminates when it finishes executing its last statement and asks the operating system to delete it by using the "exit()" system call. All the resources of the process are then deallocated by the operating system.

A parent process may terminate the execution of a child process for the following reasons:
 The child has exceeded its usage of some of the resources it has been allocated.
 The task assigned to the child is no longer required.
 The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.

On some systems, if a parent process terminates, the operating system does not permit any of its children to continue, and they are all terminated as well. This is called "cascading termination", and it is initiated by the operating system.

2.4 Inter-process Communication (IPC)


Inter-process communication (IPC) is a set of methods for the exchange of data between two or more processes.

Processes executing concurrently in the operating system may be either:


1. Independent process
2. Cooperating process
1. Independent process

 A process is independent if it cannot affect or be affected by the other processes executing in the system.
 Any process that does not share data with any other process is independent.

2. Cooperating process

 A process is cooperating if it can affect or be affected by the other processes executing in the system.
 Any process that shares data with another process is a cooperating process.

Benefits of cooperating process or Reasons for providing cooperating process

1. Information sharing: - several users may need to access the same information simultaneously.

2. Computation speedup: - a task can be divided into subtasks, each of which can execute in parallel with the others (on a system with multiple processing elements).


3. Modularity: - the system can be constructed by dividing its functions into separate processes or threads.

4. Convenience: - an individual user can work on many tasks at the same time.

2.4.1 Models of inter-process communication

There are two fundamental models of interprocess communication

 Shared Memory System

 Message Passing System

2.4.1.1 Shared Memory System

Cooperating processes can exchange information through shared memory, a region of memory provided by the OS.

Interprocess communication using shared memory requires the communicating processes to establish a region of shared memory. Typically, a shared-memory region resides in the address space of the process creating the shared-memory segment; other processes that wish to communicate using this segment must attach it to their own address space. Normally, the operating system prevents one process from accessing another process's memory. Shared memory requires that two or more processes agree to remove this restriction; they can then exchange information by reading and writing data in the shared area.

Fig 2.7 Shared Memory
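The arrangement in Fig 2.7 can be sketched with Python's multiprocessing.shared_memory module (an illustrative example, not part of the original notes): process A creates a segment, process B attaches to it by name and writes into it, and A then reads the result back.

```python
from multiprocessing import Process, shared_memory

# Process B: attach to an existing segment by name and write into it.
def writer(name: str) -> None:
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:5] = b"hello"          # write into the shared region
    shm.close()

# Process A: create the segment, let B write, then read what B wrote.
def demo() -> bytes:
    shm = shared_memory.SharedMemory(create=True, size=16)
    try:
        p = Process(target=writer, args=(shm.name,))
        p.start()
        p.join()
        return bytes(shm.buf[:5])   # read B's data from the shared area
    finally:
        shm.close()
        shm.unlink()                # release the segment

if __name__ == "__main__":
    print(demo())
```

Note that both processes must explicitly agree to share the segment (B attaches by name), matching the point above that the OS otherwise keeps address spaces isolated.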

2.4.1.2 Message-Passing System


In a message-passing system, cooperating processes communicate with each other by exchanging messages, as illustrated in figure 2.8.

Message passing provides a mechanism that allows processes to communicate and to synchronize their actions without sharing the same address space, and it is particularly useful


in a distributed environment, where the communicating processes may reside on different computers connected by a network.

Message passing is useful for exchanging smaller amounts of data, because no conflicts need to be avoided.

A message-passing system provides two operations:
1. send(message)
2. receive(message)

Messages sent by a process can be of either fixed or variable size. If two processes P and Q want to communicate, a communication link must exist between them. This link can be either direct or indirect.

Fig 2.8 Message passing
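Fig 2.8 can be sketched with a kernel-backed queue between two processes that share no memory (an illustrative example using Python's multiprocessing.Queue; not part of the original notes). Queue.get() behaves as a blocking receive, while Queue.put() is a buffered send:

```python
from multiprocessing import Process, Queue

# The sending process: send(message).
def producer(q: Queue) -> None:
    q.put("M")

# The receiving process: receive(message) blocks until a message arrives.
def demo() -> str:
    q = Queue()                     # the communication link (with buffering)
    p = Process(target=producer, args=(q,))
    p.start()
    msg = q.get()                   # blocking receive
    p.join()
    return msg

if __name__ == "__main__":
    print(demo())
```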

The following methods are used for implementing the communication link.

1. Direct or Indirect communication.

2. Synchronous or asynchronous communication.

3. Automatic or explicit buffering.

1. Direct or Indirect communication

In direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication. Here, the send() and receive() primitives are defined as follows:

1. send(P, message) – Send a message to process P.

2. receive(Q, message) – Receive a message from process Q.

Properties of Direct communication

 A communication link is established automatically between every pair of processes that want to communicate.


 A link is associated with exactly two processes.

 Exactly one link exists between each pair of process.

In indirect communication, messages are sent to and received from "mailboxes". Each mailbox has a unique identification. Here, send() and receive() are defined as follows:

1.send(A, message) - Send a message to mailbox A

2. receive(A, message) – Receive a message from mailbox A

Properties of Indirect communication

 Two processes can communicate only if they have a shared mailbox.

 A link may be associated with more than two processes.

 A number of links may exist between each pair of processes; each link corresponds to one mailbox.

2. Synchronous or asynchronous
Message passing may be either "blocking" or "non-blocking".
 Blocking send: the sending process is blocked until the message is received by the receiving process.

 Non-blocking send: the sending process sends the message and resumes operation.

 Blocking receive: the receiver blocks until a message is available.

 Non-blocking receive: the receiver retrieves either a valid message or a null.


3. Automatic or Explicit Buffering
A link has some capacity to store messages temporarily in a queue. This queue can be implemented in three ways:

1. Zero capacity: the queue has maximum length zero, so the link cannot hold any waiting messages. The sender must block until the recipient receives the message.

2. Bounded capacity: the queue has finite length. If the queue is not full, a new message can be placed in it and the sender can continue execution without waiting; if the queue is full, the sender must block.

3. Unbounded capacity: the queue length is potentially infinite, so any number of messages can wait in it. The sender never blocks.


The zero capacity case is sometimes referred to as a message system with no buffering;
the other cases are referred to as systems with automatic buffering

Differences between shared memory and message passing

Shared memory                                 Message passing
A region of memory shared by the              Communication takes place by means of
communicating processes                       messages exchanged between the
                                              cooperating processes
Useful for exchanging large amounts of data   Useful for exchanging small amounts of data
Difficult to implement                        Easier to implement
Faster                                        Slower

2.5 Process scheduling: Basic concepts


Scheduling is a fundamental function of the operating system: computer resources are scheduled by the operating system before use.

 In a single-processor system, only one process can run at a time.

 In a multiprogramming system, several processes are kept in memory at once so that some process is always running, increasing CPU utilization.

2.5.1 CPU—I/O Burst Cycle


Process execution alternates between CPU execution cycles and I/O wait cycles. Execution begins with a CPU burst, followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on. The final CPU burst ends with a system request to terminate execution, as illustrated in figure 2.9.

Fig 2.9 Alternating sequence of CPU and I/O bursts


2.5.2 CPU Scheduler


CPU scheduling is a function of operating system where scheduler selects process from
the memory that is ready to execute and allocates the CPU to that process.

There are two types of scheduling schemes:

 Preemptive scheduling
 Non-preemptive scheduling
1. Preemptive scheduling
If CPU scheduling takes place
 when a process switches from the running state to the ready state, or
 when a process switches from the waiting state to the ready state,
then the scheduling is said to be preemptive.

2. Non-preemptive scheduling
If CPU scheduling takes place only
 when a process switches from the running state to the waiting state, or
 when a process terminates,
then the scheduling is said to be non-preemptive.

2.5.3 Dispatcher
The dispatcher is the module that gives control of the CPU to the process selected by the
CPU scheduler. The function of dispatcher as follows

1. Switching context

2. Switching to user mode

3. Jumping to the proper location in the user program to restart that program.

The time it takes for the dispatcher to stop one process and start another running is known as the "dispatch latency".

2.6 Scheduling Criteria


The following criteria are considered when comparing different CPU-scheduling algorithms:

1. CPU utilization: - the primary objective is to keep the CPU as busy as possible. CPU utilization can range from 0 to 100%.

2. Throughput: - one measure of work is the number of processes completed per time unit, called the "throughput".

3. Turnaround time: - The interval from the time of submission of a process to the
time of completion is the “Turnaround time”.

4. Waiting time: - the sum of the periods a process spends waiting in the ready queue.

5. Response time: - Response time is the time from the submission of a request until
the first response is produced.

Optimization Criteria:
Max CPU utilization

Max throughput

Min turnaround time

Min waiting time

Min response time

Other times related to a process:


1. Arrival Time(AT):The time at which a process arrives in a ready queue
2. Completion Time(CT):The time at which a process terminates
3. Burst Time(BT):The amount of CPU time needed to execute a process

Turnaround Time(TAT) = CT-AT


TAT = BT+WT
WT = TAT-BT

2.7 Scheduling Algorithms


CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. There are many CPU scheduling algorithms:

1. First-Come, First-Served (FCFS) scheduling algorithm

2. Shortest-Job-First (SJF) scheduling

3. Priority Scheduling (PS)

4. Round-Robin scheduling (RR)

5. Multilevel Queue scheduling (MLQ)

6. Multilevel feedback queue scheduling (MLFQ)


2.7.1 First-Come, First-Served (FCFS) scheduling algorithm


FCFS scheduling is the simplest scheduling algorithm: the process that requests the CPU first is allocated the CPU first.

The FCFS scheduling algorithm is "non-preemptive". The implementation of the FCFS policy is easily managed with a FIFO queue: when a process enters the ready queue, its PCB is linked onto the tail of the queue, and when the CPU is free, it is allocated to the process at the head of the queue.

Example 1: Three processes P1, P2, P3 arrive at time 0 with the CPU burst times given in milliseconds.
Process Burst time

P1 24
P2 03
P3 03

Solution:
Gantt chart
P1 P2 P3
0 24 27 30
The right-side value of a process in the Gantt chart indicates its completion time.
Unit of Time: milliseconds

P.No AT BT CT TAT(=CT-AT) WT(=TAT-BT)
P1 0 24 24 24 0
P2 0 3 27 27 24
P3 0 3 30 30 27
Total Time 30 81 81 51

Average Turnaround Time = (Total Turnaround Time)/(No.of Process)


= 81/3
= 27 ms
Average Waiting Time = (Total Wating Time)/(No.of Process)
= 51/3
= 17 ms
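The table above can be checked with a small FCFS simulator (an illustrative sketch, not part of the original notes; ties in arrival time are served in list order, as in the examples):

```python
# Non-preemptive FCFS simulator.
# procs: list of (name, arrival_time, burst_time) tuples.
def fcfs(procs):
    t = 0                        # current time
    total_tat = total_wt = 0
    for name, at, bt in sorted(procs, key=lambda p: p[1]):
        t = max(t, at) + bt      # completion time of this process
        total_tat += t - at      # TAT = CT - AT
        total_wt += t - at - bt  # WT = TAT - BT
    n = len(procs)
    return total_tat / n, total_wt / n

# Example 1: average TAT = 27 ms, average WT = 17 ms.
print(fcfs([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)]))
```

Feeding the processes in the order P2, P3, P1 reproduces Example 2's averages of 13 ms and 3 ms.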


Example-2: For the same processes as above, suppose they arrive in the order P2, P3, P1 at time 0.
Gantt chart
P2 P3 P1
0 3 6 30
Unit of Time:milliseconds
P.No AT BT CT TAT WT
P1 0 24 30 30 6
P2 0 3 3 3 0
P3 0 3 6 6 3
Total Time 30 39 39 9
Average TAT = (Total TAT)/(No.of Process)
= 39/3
= 13 ms
Average WT = (Total WT)/(No.of Process)
= 9/3
= 3 ms
From the two examples it can be seen that a change in the order of process arrival yields different average turnaround and waiting times. When short processes wait behind one long process, as in Example 1, the resulting poor utilization is known as the "convoy effect".

Example 3: Three processes P1, P2, P3 arrive at time 0 with the CPU burst times given in milliseconds.
Process Burst time
P1 22
P2 04
P3 02

Solution:
Gantt chart
P1 P2 P3
0 22 26 28
Unit of Time: milliseconds
P.No AT BT CT TAT WT
P1 0 22 22 22 0
P2 0 4 26 26 22
P3 0 2 28 28 26
Total Time 28 76 76 48

Average TAT = (Total TAT)/(No.of Process)


= 76/3
= 25.33 ms
Average WT = (Total WT)/(No.of Process)
= 48/3
= 16 ms
Example-4: Calculate average turnaround time and waiting time for the following
processes.
P.No AT BT
P1 0 4
P2 1 3
P3 2 1
P4 3 2
P5 4 5

Solution:

Gantt chart
P1 P2 P3 P4 P5
0 4 7 8 10 15

Unit of Time: milliseconds


P.No AT BT CT TAT WT
P1 0 4 4 4 0
P2 1 3 7 6 3
P3 2 1 8 6 5
P4 3 2 10 7 5
P5 4 5 15 11 6
Total Time 15 44 34 19

Average TAT = (Total TAT)/(No.of Process)


= 34/5
= 6.8 ms
Average WT = (Total WT)/(No.of Process)
= 19/5
= 3.8 ms

Properties of FCFS
 The average waiting time is generally not minimal and may vary substantially.
 A mix of CPU-bound and I/O-bound processes may give rise to the convoy effect: poor utilization of the CPU and I/O devices.
 FCFS is non-preemptive: a process can hog the CPU.


Advantage: simple, low overhead.


Disadvantage: not good for time-sharing systems, since users in such systems need the CPU at regular intervals.

2.7.2 Shortest Job First Scheduling (SJF)


 This algorithm associates with each process the length of the process’s next CPU
burst. When the CPU is available, it is assigned to the process that has the smallest
next CPU burst.

 If the next CPU bursts of two processes are the same, FCFS is used to break the tie.

 SJF is provably optimal in that it gives the minimum average waiting time.

 SJF may be either pre-emptive or non-preemptive.

 The real difficulty with SJF is knowing the length of the next CPU request, so it is better suited to long-term scheduling.

 Preemptive SJF scheduling is sometimes called shortest-remaining-time-first


scheduling

 SJF scheduling is an “optimal scheduling algorithm”- gives minimum average


waiting time for a given set of processes.

Non Preemptive SJF:

A non-preemptive SJF algorithm allows the currently executing process to finish its CPU burst.

Example-1: Arrival time for all processes is 0 ms


Process Burst time
P1 6
P2 10
P3 08
P4 02

Solution:
Gantt chart
P4 P1 P3 P2
0 2 8 16 26
Unit of Time: milliseconds
P.No AT BT CT TAT WT
P1 0 6 8 8 2
P2 0 10 26 26 16

P3 0 8 16 16 8
P4 0 2 2 2 0
Total Time 26 52 52 26

Average TAT = (Total TAT)/(No.of Process)


= 52/4
= 13 ms
Average WT = (Total WT)/(No.of Process)
= 26/4
= 6.5 ms

Example-2:
Process Arrival Time Burst time
P1 0 7
P2 1 5
P3 2 1
P4 3 2
P5 4 8
Solution:
Gantt chart
P1 P3 P4 P2 P5
0 7 8 10 15 23
Unit of Time: milliseconds
P.No AT BT CT TAT WT
P1 0 7 7 7 0
P2 1 5 15 14 9
P3 2 1 8 6 5
P4 3 2 10 7 5
P5 4 8 23 19 11
Total Time 23 63 53 30

Average TAT = (Total TAT)/(No.of Process)

= 53/5

= 10.6 ms

Average WT = (Total WT)/(No.of Process)

= 30/5

= 6 ms
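The two non-preemptive SJF examples can be reproduced with a small simulator (an illustrative sketch, not part of the original notes): whenever the CPU becomes free, the ready process with the smallest burst time is run to completion.

```python
# Non-preemptive SJF simulator.
# procs: list of (name, arrival_time, burst_time) tuples.
def sjf(procs):
    pending = list(procs)
    t = 0
    total_tat = total_wt = 0
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:                        # CPU idle until next arrival
            t = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])  # shortest next CPU burst
        pending.remove(job)
        name, at, bt = job
        t += bt                              # run to completion
        total_tat += t - at
        total_wt += t - at - bt
    n = len(procs)
    return total_tat / n, total_wt / n

# Example-2: average TAT = 10.6 ms, average WT = 6 ms.
print(sjf([("P1", 0, 7), ("P2", 1, 5), ("P3", 2, 1), ("P4", 3, 2), ("P5", 4, 8)]))
```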


Preemptive SJF (shortest-remaining-time-first) scheduling:
A preemptive SJF algorithm preempts the currently executing process whenever a newly arrived process has a next CPU burst shorter than what is left of the current one. Preemptive SJF scheduling is also called shortest-remaining-time-first (SRTF) scheduling.

In preemptive SJF, the completion time of a process is the right-side value of its last occurrence in the Gantt chart.

Example-1:
Process Arrival time Burst time
P1 0 8
P2 1 4
P3 2 9
P4 3 5

Solution:
Gantt chart
P1 P2 P2 P2 P2 P4 P1 P3
0 1 2 3 4 5 10 17 26
Unit of Time: milliseconds
P.No AT BT CT TAT WT
P1 0 8 17 17 9
P2 1 4 5 4 0
P3 2 9 26 24 15
P4 3 5 10 7 2
Total Time 26 58 52 26

Average TAT = (Total TAT)/(No.of Process)

= 52/4

= 13 ms

Average WT = (Total WT)/(No.of Process)

= 26/4

= 6.5 ms


Example-2:
Process Arrival time Burst time
P0 2 3
P1 4 6
P2 6 4
P3 8 5
P4 0 2

Solution:
Gantt chart
P4 P0 P0 P1 P2 P2 P1 P3
0 2 4 5 6 8 10 15 20
Unit of Time: milliseconds
P.No AT BT CT TAT WT
P0 2 3 5 3 0
P1 4 6 15 11 5
P2 6 4 10 4 0
P3 8 5 20 12 7
P4 0 2 2 2 0
Total Time 20 52 32 12
Average TAT = (Total TAT)/(No.of Process)

= 32/5

= 6.4 ms

Average WT = (Total WT)/(No.of Process)

= 12/5

= 2.4 ms
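SRTF can be simulated in 1 ms ticks, re-picking the ready process with the least remaining time at every tick (an illustrative sketch, not part of the original notes; ties go to the earlier-listed process, matching the examples above):

```python
# Preemptive SJF (SRTF) simulator, 1 ms ticks.
# procs: list of (name, arrival_time, burst_time) tuples.
def srtf(procs):
    at = {n: a for n, a, b in procs}
    bt = {n: b for n, a, b in procs}
    rem = dict(bt)                  # remaining burst time per process
    t = 0
    total_tat = total_wt = 0
    while rem:
        ready = [p for p in rem if at[p] <= t]
        if not ready:
            t += 1                  # CPU idle
            continue
        p = min(ready, key=lambda x: rem[x])  # least remaining time
        rem[p] -= 1
        t += 1
        if rem[p] == 0:             # process completes at time t
            del rem[p]
            total_tat += t - at[p]
            total_wt += t - at[p] - bt[p]
    n = len(procs)
    return total_tat / n, total_wt / n

# Example-2: average TAT = 6.4 ms, average WT = 2.4 ms.
print(srtf([("P0", 2, 3), ("P1", 4, 6), ("P2", 6, 4), ("P3", 8, 5), ("P4", 0, 2)]))
```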

Advantages of SJF
 Suitable for the long-term (job) scheduler.
 Can be implemented as either preemptive or non-preemptive.

Disadvantages of SJF
 It is difficult to know the length of the next CPU burst, so SJF is not well suited to short-term (CPU) scheduling.

2.7.3 Priority algorithm


 A priority is assigned to each process, and the CPU is allocated to the process with
the highest priority.


 Equal-priority processes are scheduled in FCFS order

 Priorities may be set either internally or externally:

 internally, by the OS;
 externally, by the user or system administrator.

 Priority scheduling can be either preemptive or non-preemptive.

Priorities are generally indicated by some fixed range of numbers, such as 0 to 7, with 0 as the highest priority and 7 as the lowest.

A major problem with priority scheduling is indefinite blocking, or "starvation": on a loaded system, a steady stream of high-priority processes can prevent a low-priority process from ever getting the CPU.

One solution to starvation is "aging": a technique in which the priority of a process that waits in the system for a long time is gradually increased.

Non-preemptive priority scheduling: a process, once allocated the CPU, is allowed to run to completion.

Example 1:- Consider the following processes with burst time and priority. Arrival time is
0 ms

Process Burst time Priority


P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

Solution:
Gantt chart
P2 P5 P1 P3 P4
0 1 6 16 18 19
Unit of Time: milliseconds
P.No AT Priority BT CT TAT WT
P1 0 3 10 16 16 6
P2 0 1 1 1 1 0
P3 0 4 2 18 18 16
P4 0 5 1 19 19 18
P5 0 2 5 6 6 1
Total Time 19 60 60 41

Average TAT = (Total TAT)/(No.of Process)


= 60/5
= 12 ms
Average WT = (Total WT)/(No.of Process)
= 41/5
= 8.2 ms

Example-2: Consider the following processes with burst time and priority. Arrival time is
0 ms
Process Burst time Priority
P1 5 3
P2 4 2
P3 2 1
P4 3 5
P5 8 4

Solution:
Gantt chart
P3 P2 P1 P5 P4
0 2 6 11 19 22

Unit of Time: milliseconds


P.No AT Priority BT CT TAT WT
P1 0 3 5 11 11 6
P2 0 2 4 6 6 2
P3 0 1 2 2 2 0
P4 0 5 3 22 22 19
P5 0 4 8 19 19 11
Total Time 22 60 60 38

Average TAT = (Total TAT)/(No.of Process)


= 60/5
= 12 ms
Average WT = (Total WT)/(No.of Process)
= 38/5
= 7.6 ms
Preemptive priority scheduling: a process is allowed to run until it completes or until a process with a higher priority arrives.


Example-1:
Process Arrival Time Burst Time Priority
P1 0 5 3
P2 1 4 2
P3 2 2 1
P4 3 3 5
P5 4 8 4

Solution:
Gantt chart
P1 P2 P3 P3 P2 P1 P5 P4
0 1 2 3 4 7 11 19 21
Unit of Time: milliseconds
P.No AT Priority BT CT TAT WT
P1 0 3 5 11 11 6
P2 1 2 4 7 6 2
P3 2 1 2 4 2 0
P4 3 5 3 21 18 15
P5 4 4 8 19 15 7
Total Time 22 62 52 30
Average TAT = (Total TAT)/(No.of Process)
= 52/5
= 10.4 ms
Average WT = (Total WT)/(No.of Process)
= 30/5
= 6 ms
2.7.4 Round Robin (RR) scheduling
 RR Scheduling is designed especially for time-sharing systems.

 Similar to FCFS, but employs preemption

 The time interval is called time quantum or time slice


-- A time slice is usually between 10 and 100 ms

 The ready queue is treated as a circular queue.

-- The CPU is handed to each process in the ready queue for a time interval of up to one time slice.

 A process may use less time than a full slice.


 If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time, in chunks of at most q time units at a time.


 No process waits more than (n-1)* q time units


 Typically RR gives longer turnaround time than SJF. But shorter response time.
The performance of the RR algorithm depends heavily on the size of the time
quantum.

 If the time quantum is very large, the RR policy degenerates to the FCFS policy.

 If the time quantum is extremely small (say, 1 millisecond), the RR approach is called processor sharing. The time quantum must be large with respect to the context-switch time, or too much time is spent switching.
In general, average turnaround time improves if most processes finish their next CPU
burst within one time quantum. A rule of thumb is that 80 percent of the CPU bursts
should be shorter than the time quantum.

Advantage: It is suitable for time sharing system.

Disadvantage: turnaround time depends heavily on the quantum: if the quantum is too small, too much time is wasted in context switches; if it is too large (longer than the mean CPU burst), RR approaches FCFS.

Example-1:The time quantum=4 ms, Arrival time is 0 ms.

Process Burst time


P1 24
P2 3
P3 3

Solution:
Circular Queue
P1 P2 P3 P1 P1 P1 P1 P1

Gantt chart
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

Time Quantum = 4 ms Unit of Time: milliseconds


P.No AT BT CT TAT WT
P1 0 24 30 30 6
P2 0 3 7 7 4
P3 0 3 10 10 7
Total Time 30 47 47 17

Average TAT = (Total TAT)/(No.of Process)


= 47/3
= 15.66 ms
Average WT = (Total WT)/(No.of Process)
= 17/3
= 5.66 ms

Example 2: Time Quantum = 2 ms


Process Arrival Time Burst time
P1 0 4
P2 1 5
P3 2 2
P4 3 1
P5 4 6

Solution:
Circular Queue
P1 P2 P3 P1 P4 P5 P2 P5 P2 P5

Gantt chart
P1 P2 P3 P1 P4 P5 P2 P5 P2 P5
0 2 4 6 8 9 11 13 15 16 18
Time Quantum = 2 ms Unit of Time: milliseconds
P.No AT BT CT TAT WT
P1 0 4 8 8 4
P2 1 5 16 15 10
P3 2 2 6 4 2
P4 3 1 9 6 5
P5 4 6 18 14 8
Total Time 18 57 47 29

Average TAT = (Total TAT)/(No.of Process)

= 47/5

= 9.4 ms

Average WT = (Total WT)/(No.of Process)

= 29/5

= 5.8 ms
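Round robin can be simulated with a circular (FIFO) queue (an illustrative sketch, not part of the original notes): processes arriving during a slice join the queue before the preempted process rejoins it, as in the circular queues shown above.

```python
from collections import deque

# Round-robin simulator with the given time quantum.
# procs: list of (name, arrival_time, burst_time) tuples.
def round_robin(procs, quantum):
    at = {n: a for n, a, b in procs}
    bt = {n: b for n, a, b in procs}
    rem = dict(bt)
    arrivals = sorted(procs, key=lambda p: p[1])
    q = deque()
    t = i = 0
    total_tat = total_wt = 0
    while i < len(arrivals) or q:
        if not q:                    # CPU idle: jump to the next arrival
            t = max(t, arrivals[i][1])
        while i < len(arrivals) and arrivals[i][1] <= t:
            q.append(arrivals[i][0]); i += 1
        name = q.popleft()
        run = min(quantum, rem[name])
        t += run
        rem[name] -= run
        # processes that arrived during this slice join the queue first
        while i < len(arrivals) and arrivals[i][1] <= t:
            q.append(arrivals[i][0]); i += 1
        if rem[name]:
            q.append(name)           # quantum expired: back of the queue
        else:
            total_tat += t - at[name]
            total_wt += t - at[name] - bt[name]
    n = len(procs)
    return total_tat / n, total_wt / n

# Example-1: average TAT = 47/3 ms, average WT = 17/3 ms.
print(round_robin([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)], 4))
```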


2.7.5 Multilevel Queue Scheduling (MQL)


The multilevel queue scheduling algorithm partitions the ready queue into several separate queues. Processes are permanently assigned to one queue based on some property of the process, such as its type. Each queue has its own scheduling algorithm, and there must also be scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling. Figure 2.10 illustrates multilevel queue scheduling.

For example: foreground (interactive) processes – RR; background (batch) processes – FCFS.
Multilevel queue scheduling algorithm with five queues, listed below in order of priority.
1. System processes
2. Interactive Processes
3. Interactive editing processes
4. Batch Processes
5. Student Processes.

Fig 2.10 Multilevel queue scheduling

2.7.6 Multilevel Feedback Queue Scheduling


Multilevel feedback queue scheduling allows a process to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts.

If a process uses too much CPU time, it will be moved to a lower-priority queue. This
scheme leaves I/O bound & interactive process in the higher priority queue. In addition, a
process that waits too long in a lower-priority queue may be moved to a higher priority
queue. This form of aging prevents starvation.

In general, a multilevel feedback queue scheduler is defined by the following parameters:


 The number of queues.
 The scheduling algorithm for each queue.


 The method used to determine when to upgrade a process to a higher-priority queue.
 The method used to determine when to demote a process to a lower-priority queue.
 The method used to determine which queue a process will enter when it needs service.
Example: consider a multilevel feedback-queue scheduler with three queues. The scheduler first executes all processes in queue 0; only when queue 0 is empty will it execute processes in queue 1. A process entering the ready queue is put in queue 0, where it is given a time quantum of 8 ms. If it does not finish within this time, it is moved to the tail of queue 1. When queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 ms. If it still does not complete, it is preempted and put into queue 2. Processes in queue 2 are run on an FCFS basis, but only when queues 0 and 1 are empty.
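The three-queue example can be sketched as follows (an illustrative simplification, not part of the original notes: all processes are assumed to arrive at time 0, so the demotion path is the only queue movement shown; upgrades by aging are omitted):

```python
from collections import deque

# Three-queue MLFQ sketch: queue 0 is RR with an 8 ms quantum, queue 1
# is RR with a 16 ms quantum, and queue 2 is FCFS. A process that does
# not finish its quantum is demoted to the next lower queue.
# procs: list of (name, burst_time), all arriving at time 0.
# Returns {name: completion_time}.
def mlfq(procs, quanta=(8, 16)):
    queues = [deque((n, b) for n, b in procs), deque(), deque()]
    t = 0
    done = {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, rem = queues[level].popleft()
        run = rem if level == 2 else min(quanta[level], rem)  # FCFS at bottom
        t += run
        rem -= run
        if rem:
            queues[level + 1].append((name, rem))   # quantum expired: demote
        else:
            done[name] = t
    return done

# A 5 ms job finishes in queue 0; a 30 ms job is demoted twice.
print(mlfq([("A", 5), ("B", 30)]))
```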

Fig 2.11 Multilevel feedback queue scheduling

