Process Management Module

Processes migrate between the various scheduling queues and states during their lifetime (e.g., while waiting for an I/O device).

[Figure: process state transition diagram - states New, Ready, Running, Blocked, and the suspended variants Suspend Ready and Suspend Blocked; transitions include Dispatch, Timeout, Event Wait, Event Occurs, Suspend, Activate, and Release (Exit)]
Context Switch
When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process (this switch is performed on behalf of the scheduler).
The scheduler selects from among the processes that are ready to execute.
When to select?
Non-preemptive
A process runs until it voluntarily relinquishes the CPU:
- the process blocks on an event (e.g., I/O)
- the process terminates
Preemptive
The scheduler actively interrupts and deschedules an executing process and schedules another process to run on the CPU.
Process/CPU scheduling algorithms decide which process to execute from the list of processes in the ready queue.
There are a number of algorithms for CPU scheduling, and each has certain properties.
To compare different algorithms, the following criteria can be used to find the best algorithm for a given workload.
1. CPU utilization: The CPU should be kept as busy as possible. CPU utilization ranges from 0% to 100%. Typically it ranges from 40% (for a lightly loaded system) to 90% (for a heavily loaded system).
2. Throughput: If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes completed per time unit, called throughput. It varies from 1 process per hour (for long processes) to 10 processes per second (for short ones).
3. Turnaround time: How long it takes to execute a process. Measured as the interval from the time of submission of a process to the time of completion.
4. Waiting time: The amount of time that a process spends in the ready queue, i.e. the sum of the periods spent waiting in the ready queue.
5. Response time: Time from the submission of a request until the first response is produced (NOT the time it takes to output the response).
Ideally the Scheduling algorithm should :
Maximize : CPU utilization , Throughput.
Minimize : Turnaround time, Waiting time, Response time
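These criteria can be computed directly from a finished schedule. A minimal Python sketch (the helper name `metrics` and the tuple layout are my own, not from the slides):

```python
# Compute average turnaround and waiting time from a finished schedule.
# Each entry is (pid, arrival, burst, completion).
def metrics(schedule):
    n = len(schedule)
    turnaround = {p: c - a for p, a, b, c in schedule}      # completion - arrival
    waiting = {p: c - a - b for p, a, b, c in schedule}     # turnaround - burst
    return (sum(turnaround.values()) / n, sum(waiting.values()) / n)

# The FCFS example below: P1, P2, P3 run back to back from time 0.
avg_tat, avg_wt = metrics([("P1", 0, 24, 24), ("P2", 0, 3, 27), ("P3", 0, 3, 30)])
print(avg_tat, avg_wt)   # 27.0 17.0
```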
First-Come, First-Served (FCFS) Scheduling

Process   Burst Time
P1        24 ms
P2        3 ms
P3        3 ms
Suppose that the processes arrive in the order: P1 , P2 , P3 , all at
the same time. The Gantt Chart for the schedule is:
| P1 | P2 | P3 |
0    24   27   30
Waiting time for P1 = 0; P2 = 24; P3 = 27 ms
Average waiting time: (0 + 24 + 27)/3 = 17 ms
Turnaround times:
T(P1) = 24 ms
T(P2) = 3 ms + 24 ms = 27 ms
T(P3) = 3 ms + 27 ms = 30 ms
Suppose instead that the processes arrive in the order: P2, P3, P1.
| P2 | P3 | P1 |
0    3    6    30
Waiting time for P1 = 6; P2 = 0; P3 = 3 ms
Average waiting time: (6 + 0 + 3)/3 = 3 ms
Much better than the previous case.
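Both orderings can be replayed with a small Python simulation of FCFS for processes that all arrive at time 0 (the function name is my own):

```python
def fcfs(processes):
    """processes: list of (pid, burst), all arriving at time 0, in queue order.
    Returns the waiting time of each process."""
    t = 0
    waiting = {}
    for pid, burst in processes:
        waiting[pid] = t        # time spent in the ready queue before starting
        t += burst
    return waiting

w1 = fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
print(sum(w1.values()) / 3)     # 17.0 -- long process first
w2 = fcfs([("P2", 3), ("P3", 3), ("P1", 24)])
print(sum(w2.values()) / 3)     # 3.0  -- short processes first
```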
Convoy effect: all short processes have to wait for one long process to get off the CPU.
Alternative: allow shorter processes to go first (Shortest-Job-First scheduling).
• Shortest-Remaining-Time-First Scheduling (SRTF)
Process   Arrival Time   Burst Time
P1        0              8 ms
P2        1              4 ms
P3        2              9 ms
P4        3              5 ms
| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26
Average WT = ((10-1) + (1-1) + (17-2) + (5-3))/4 = 6.5 ms
Average TT = ((17-0) + (5-1) + (26-2) + (10-3))/4 = 13 ms
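The SRTF schedule above can be reproduced with a unit-step Python simulation (a sketch; the function name and tie-breaking by arrival time are my own assumptions):

```python
def srtf(procs):
    """procs: {pid: (arrival, burst)}. Simulate SRTF one time unit at a time;
    at each tick run the ready process with the least remaining burst."""
    remaining = {p: b for p, (a, b) in procs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= t]
        if not ready:
            t += 1
            continue
        p = min(ready, key=lambda q: (remaining[q], procs[q][0]))
        remaining[p] -= 1
        t += 1
        if remaining[p] == 0:
            del remaining[p]
            finish[p] = t
    return finish

procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}
fin = srtf(procs)
tat = {p: fin[p] - procs[p][0] for p in procs}
wt = {p: tat[p] - procs[p][1] for p in procs}
print(sum(wt.values()) / 4, sum(tat.values()) / 4)   # 6.5 13.0
```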
Priority Scheduling
A priority number (an integer) is associated with each process.
The CPU is allocated to the process with the highest priority (smallest integer = highest priority).
SJF is a priority algorithm in which a shorter CPU burst means a higher priority.
Drawback: priority scheduling can leave some low-priority processes waiting indefinitely for the CPU. This is called starvation.
The solution to the problem of indefinite blockage of low-priority jobs is aging.
Aging is a technique of gradually increasing
the priority of processes that wait in the
system for a long time.
For example, if priorities range from 127
(low) to 0 (high), we could increase the
priority of a waiting process by 1 every 15
minutes.
Eventually, even a process with an initial
priority of 127 would have the highest
priority in the system and would be
executed.
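The arithmetic of this aging scheme is easy to check (the function name is my own; the 15-minute step is the slide's example):

```python
# Aging sketch: priority numbers run from 127 (lowest) to 0 (highest);
# every step_minutes, a waiting process's priority number drops by 1.
def minutes_until_top(initial_priority, step_minutes=15):
    return initial_priority * step_minutes

print(minutes_until_top(127))        # 1905 minutes
print(minutes_until_top(127) / 60)   # 31.75 hours until priority 0
```

So even a process that starts at the lowest priority reaches the top after roughly 32 hours of waiting.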
Priorities can be defined either internally or
externally.
Internally defined priorities use some measurable
quantity or quantities to compute the priority of a
process. For example, memory requirements of
each process.
External priorities are set by criteria outside the
OS, such as the importance of the process, the
type of process (system/user) etc.
Priority scheduling can be either preemptive or non-preemptive.
When a process arrives at the ready queue, its priority is compared with the priority of the currently running process.
A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
A non-preemptive priority scheduling algorithm will simply continue with the current process, placing the new process at the head of the ready queue.
Process   Duration   Priority   Arrival Time
P1        6          4          0
P2        8          1          0
P3        7          3          0
P4        3          2          0

| P2 (8) | P4 (3) | P3 (7) | P1 (6) |
0        8        11       18       24

P2 waiting time: 0
P4 waiting time: 8
P3 waiting time: 11
P1 waiting time: 18
The average waiting time (AWT): (0+8+11+18)/4 = 9.25 ms
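A minimal Python sketch of non-preemptive priority scheduling for processes that all arrive at time 0 (function name and stable tie-breaking are my own assumptions):

```python
def priority_np(procs):
    """procs: list of (pid, burst, priority); all arrive at time 0.
    Smaller priority number = higher priority; ties keep list order."""
    t = 0
    waiting = {}
    for pid, burst, _ in sorted(procs, key=lambda x: x[2]):
        waiting[pid] = t       # waits until all higher-priority bursts finish
        t += burst
    return waiting

w = priority_np([("P1", 6, 4), ("P2", 8, 1), ("P3", 7, 3), ("P4", 3, 2)])
print(w, sum(w.values()) / 4)   # AWT = (0+8+11+18)/4 = 9.25
```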
Priority algorithm

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            3
P4        1            4
P5        5            2

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19
Preemptive priority example:

Process   Arrival Time   Burst Time   Priority
P1        0              5            3
P2        2              6            1
P3        3              3            2

| P1 | P2 | P3 | P1 |
0    2    8    11   14

Average waiting time = (9 + 0 + 5)/3 = 4.66 ms
Average turnaround time = (14 + 6 + 8)/3 = 9.33 ms
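The same schedule falls out of a unit-step simulation of preemptive priority scheduling (a sketch; names and arrival-time tie-breaking are my own):

```python
def priority_preemptive(procs):
    """procs: {pid: (arrival, burst, priority)}; smaller number = higher
    priority. At each time unit, run the highest-priority ready process."""
    remaining = {p: b for p, (a, b, pr) in procs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= t]
        if not ready:
            t += 1
            continue
        p = min(ready, key=lambda q: (procs[q][2], procs[q][0]))
        remaining[p] -= 1
        t += 1
        if not remaining[p]:
            del remaining[p]
            finish[p] = t
    return finish

procs = {"P1": (0, 5, 3), "P2": (2, 6, 1), "P3": (3, 3, 2)}
fin = priority_preemptive(procs)
tat = {p: fin[p] - procs[p][0] for p in procs}
wt = {p: tat[p] - procs[p][1] for p in procs}
print(round(sum(wt.values()) / 3, 2), round(sum(tat.values()) / 3, 2))  # 4.67 9.33
```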
The round-robin (RR) scheduling algorithm is
designed especially for time-sharing
systems.
It is similar to FCFS scheduling, but preemption is added to switch between processes.
A small unit of time, called a time quantum
or time slice, is defined.
A time quantum is generally from 10 to 100
milliseconds.
The ready queue is treated as a circular
queue.
To implement RR scheduling
We keep the ready queue as a FIFO queue of processes.
New processes are added to the tail of the ready queue.
The CPU scheduler picks the first process from the ready queue, sets a
timer to interrupt after 1 time quantum, and dispatches the process.
The process may have a CPU burst of less than 1 time quantum.
◦ In this case, the process itself will release the CPU voluntarily.
◦ The scheduler will then proceed to the next process in the ready
queue.
Otherwise, if the CPU burst of the currently running process is longer
than 1 time quantum,
◦ the timer will go off and will cause an interrupt to the OS.
◦ A context switch will be executed, and the process will be put at the
tail of the ready queue.
◦ The CPU scheduler will then select the next process in the ready
queue.
The performance of the RR algorithm depends
heavily on the size of the time quantum. If the
time quantum is extremely large, the RR policy is
the same as the FCFS policy.
If the time quantum is extremely small, the RR approach is called processor sharing and (in theory) creates the appearance that each of the n users has its own processor running at 1/n th the speed of the real processor.
Advantages of RR:
◦ Shorter response time
◦ Fair sharing of the CPU
Example 1 (all processes arrive at time 0; time quantum = 20 ms):

Process   Burst Time
P1        53
P2        17
P3        68
P4        24

The Gantt chart is:
| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
0    20   37   57   77   97   117  121  134  154  162

Example 2 (time quantum = 4 ms, using the processes of the SRTF example: P1 arrives at 0 with burst 8 ms, P2 at 1 with burst 4 ms, P3 at 2 with burst 9 ms, P4 at 3 with burst 5 ms):

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P3 |
0    4    8    12   16   20   24   25   26

Average TAT = ((20-0) + (8-1) + (26-2) + (25-3))/4 = 73/4 = 18.25 ms
Average WT = ((16-4) + (4-1) + (8+(20-12)+(25-24)-2) + (12+(24-16)-3))/4 = 47/4 = 11.75 ms
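Both RR examples can be replayed with a small queue-based simulator (a sketch; I assume quantum 20 for the first example and 4 for the second, and that a preempted process re-enters the queue behind processes that arrived during its slice):

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (pid, arrival, burst). Returns completion times."""
    pending = sorted(procs, key=lambda x: x[1])   # not yet arrived, by arrival
    remaining = {p: b for p, a, b in procs}
    finish, q, t, i = {}, deque(), 0, 0
    while q or i < len(pending):
        if not q:                                 # CPU idle: jump to next arrival
            t = max(t, pending[i][1])
        while i < len(pending) and pending[i][1] <= t:
            q.append(pending[i][0]); i += 1
        p = q.popleft()
        run = min(quantum, remaining[p])
        t += run
        remaining[p] -= run
        while i < len(pending) and pending[i][1] <= t:
            q.append(pending[i][0]); i += 1       # arrivals during this slice
        if remaining[p]:
            q.append(p)                           # preempted: back of the queue
        else:
            finish[p] = t
    return finish

print(round_robin([("P1", 0, 53), ("P2", 0, 17), ("P3", 0, 68), ("P4", 0, 24)], 20))

ex2 = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
fin = round_robin(ex2, 4)
tat = [fin[p] - a for p, a, b in ex2]
wt = [fin[p] - a - b for p, a, b in ex2]
print(sum(tat) / 4, sum(wt) / 4)   # 18.25 11.75
```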
1. FCFS

Process   Arrival Time   Exec. Time
P1        0              5
P2        2              4
P3        3              7
P4        5              6

| P1 | P2 | P3 | P4 |
0    5    9    16   22

Avg WT = (0 + (5-2) + (9-3) + (16-5))/4 = 5 time units
Avg TAT = ((5-0) + (9-2) + (16-3) + (22-5))/4 = 10.5 time units
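FCFS with staggered arrivals is a one-pass computation; a minimal Python sketch (function name is my own):

```python
def fcfs(procs):
    """procs: list of (pid, arrival, burst), in arrival order.
    Returns completion times."""
    t = 0
    finish = {}
    for pid, arrival, burst in procs:
        t = max(t, arrival) + burst   # wait for arrival if CPU is idle
        finish[pid] = t
    return finish

procs = [("P1", 0, 5), ("P2", 2, 4), ("P3", 3, 7), ("P4", 5, 6)]
fin = fcfs(procs)
wt = [fin[p] - a - b for p, a, b in procs]
tat = [fin[p] - a for p, a, b in procs]
print(sum(wt) / 4, sum(tat) / 4)   # 5.0 10.5
```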
2. Priority preemptive

Process   Arrival Time   Exec. Time   Priority
P1        0              5            2
P2        2              4            1
P3        3              7            3
P4        5              6            4

| P1 | P2 | P1 | P3 | P4 |
0    2    6    9    16   22

Avg WT = (4 + 0 + 6 + 11)/4 = 5.25 time units
Avg TAT = (9 + 4 + 13 + 17)/4 = 10.75 time units
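The averages for this example can be recomputed with a unit-step simulation of preemptive priority scheduling (a sketch; names and tie-breaking are my own assumptions):

```python
def priority_preemptive(procs):
    """procs: {pid: (arrival, burst, priority)}; smaller number = higher
    priority; simulated one time unit at a time."""
    remaining = {p: b for p, (a, b, pr) in procs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= t]
        if not ready:
            t += 1
            continue
        p = min(ready, key=lambda r: (procs[r][2], procs[r][0]))
        remaining[p] -= 1
        t += 1
        if not remaining[p]:
            del remaining[p]
            finish[p] = t
    return finish

procs = {"P1": (0, 5, 2), "P2": (2, 4, 1), "P3": (3, 7, 3), "P4": (5, 6, 4)}
fin = priority_preemptive(procs)
wt = {p: fin[p] - a - b for p, (a, b, pr) in procs.items()}
tat = {p: fin[p] - a for p, (a, b, pr) in procs.items()}
print(fin)
print(sum(wt.values()) / 4, sum(tat.values()) / 4)
```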
General class of algorithms involving multiple ready queues
Appropriate for situations where processes are easily
classified into different groups (e.g., foreground and
background)
Processes are permanently assigned to one ready queue depending on some property of the process.
Each queue has its own scheduling algorithm
foreground – RR
background – FCFS
There must also be scheduling between the queues, often priority preemptive scheduling. For example, the foreground queue could have absolute priority over the background queue (new foreground jobs displace running background jobs; serve all from foreground, then from background).
Multilevel feedback queue example with three queues:
Q0 - RR with time quantum 8 milliseconds
Q1 - RR with time quantum 16 milliseconds
Q2 - FCFS
Scheduling:
A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1. At Q1 the job again waits its turn and receives an additional 16 milliseconds; if it still does not complete, it is preempted and moved to queue Q2.
The echo procedure:
Input is obtained from a keyboard one keystroke at a time.
Each input character is stored in variable chin. It is then transferred to variable chout and sent to the display.
Instead of each application having its own copy of this procedure, the procedure may be shared: only one copy is loaded into memory, global to all applications, thus saving space.
Any program can call this procedure repeatedly to accept user input and display it on the user's screen.
Consider a single processor
multiprogramming system supporting a
single user.
The user can be working on a number of applications simultaneously.
Assume each of the applications needs to use the procedure echo, which is shared by all the applications.
However, this sharing can lead to problems.
The sharing of main memory among processes is useful but can lead to problems.
Consider the following sequence:
1. P1 invokes echo and is interrupted immediately after executing chin = getchar(); assume the character x was entered and stored in chin.
2. P2 is activated and invokes echo, which runs to completion: it inputs a character y into chin, copies it to chout, and displays it.
3. P1 resumes. By now the value x in chin has been overwritten and lost; chout contains y, which P1 copies and displays.
Thus the character input to P1 is lost before being displayed, and the character input to P2 is displayed by both P1 and P2.
Problem: the shared global variables chin and chout are accessed by both processes without any synchronization.
Previous example assumption: single processor, multiprogramming operating system.
The same problem can occur on a multiprocessor system.
Processes P1 and P2 are both executing, each on a separate processor. Both processes invoke the echo procedure:

Process P1            Process P2
chin = getchar();     .
.                     chin = getchar();
chout = chin;         chout = chin;
putchar(chout);       .
.                     putchar(chout);

The result is that the character input to P1 is lost before being displayed, and the character input to P2 is displayed by both P1 and P2.
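The lost-character interleaving can be replayed deterministically. A Python sketch: each echo invocation is a generator that pauses after every statement of the slides' echo procedure (chin = getchar(); chout = chin; putchar(chout);) so a hypothetical "scheduler" can interleave two invocations at exactly the point the text describes:

```python
# Shared globals, as in the slides' echo procedure.
chin = None
chout = None

def echo_steps(inp, out):
    """One invocation of echo(); yields after each statement so the
    caller can interleave two invocations at any point."""
    global chin, chout
    chin = inp            # chin = getchar();
    yield
    chout = chin          # chout = chin;
    yield
    out.append(chout)     # putchar(chout);

out1, out2 = [], []
p1 = echo_steps("x", out1)    # P1's user typed 'x'
p2 = echo_steps("y", out2)    # P2's user typed 'y'

next(p1)                      # P1 runs chin = 'x', then is interrupted
for _ in p2: pass             # P2 runs to completion: chin = 'y', displays 'y'
for _ in p1: pass             # P1 resumes: chout = chin ('y'!), displays 'y'

print(out1, out2)             # ['y'] ['y'] -- P1's 'x' was lost
```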
Suppose that we permit only one process at a time to access the shared variables; this is the idea of mutual exclusion.
How do we fix this? Consider the "buying milk" analogy: you could leave a note on the fridge before going out to buy milk, in which case both threads (you and your roommate) would execute this code:
if (noMilk) {
    if (noNote) {
        leaveNote();
        buyMilk();
        removeNote();
    }
}
But this has a problem.
Suppose thread A is executing: it checks that there is no milk, but before it can execute the code inside the inner if, the scheduler switches to thread B. Thread B also finds no milk and no note, so it leaves a note, buys milk, and removes the note. When A runs again, it checks that there is no note, executes the inner block, and milk is bought twice.
Each process therefore needs the general structure:
entry section
critical section
exit section
remainder section
The solutions for the CS problem should satisfy the following requirements:
1. Mutual Exclusion:
At any time, at most one process can be in its critical section (CS).
2. Progress:
If some processes are waiting to enter their CS and no process is currently inside the CS, then only those waiting processes may take part in deciding which enters next, and the decision cannot be postponed indefinitely. In particular, the CS is never reserved for a process that is executing in its remainder (non-critical) section.
3. Bounded wait:
There is a bound on how long a process must wait to enter its CS after requesting entry. Since a process remains in its CS only for a finite time, the waiting processes in the queue need not wait indefinitely.
E.g.: a process using a CS:
do {
    Entry section
    CRITICAL SECTION
    Exit section
    Remainder section
} while (true);
In this example the CS is situated in a loop.
In each iteration of the loop, the process uses the CS and also performs other computations, called the remainder section.
Approaches to achieving Mutual Exclusion
Two-Process Solution
Attempt 1
• Processes enter their CSs according to the value of turn (a shared variable) and in strict alternation.

Process 1:                  Process 2:
do {                        do {
    while (turn == 2) ;         while (turn == 1) ;
    /* critical section */      /* critical section */
    turn = 2;                   turn = 1;
} while (true);             } while (true);

turn is a shared variable, initialized to 1 before processes P1 and P2 are created.
Each process busy-waits until turn holds its own number, executes its CS, and then passes the turn to the other process.
Drawback
Suffers from busy waiting (a technique in which a process repeatedly tests a condition in a loop, consuming CPU cycles while it waits) and forces strict alternation: the processes must enter their CRs in turn, even if one of them needs the CR more often than the other.

Attempt 2
• Each process sets its own state flag (state_flag_P1 or state_flag_P2) before entering its CR and busy-waits while the other process's flag is set. Initially both flags are 0.
• Scenario: P1 executes state_flag_P1 = 1; and is immediately interrupted. P2 is scheduled, executes state_flag_P2 = 1;, then finds state_flag_P1 = 1 and waits. When P1 resumes, it finds state_flag_P2 = 1 and also waits. Both processes wait for each other forever: deadlock, and neither enters its CR.

Attempt 3 (Peterson's solution)
• Uses both a flag array (process_flag[0] and process_flag[1], initially both false) and a turn variable (process_turn).
• Assume P1 is about to enter the CR, hence it executes process_flag[0] = true; and process_turn = 1; It then executes the while statement while (process_flag[1] && process_turn == 1); and since process_flag[1] is false, it is free to enter the CR.
• If at this moment P2 interrupts and gets execution, it executes process_flag[1] = true; and continues with process_turn = 0; Now P2's own while test (process_flag[0] && process_turn == 0) is true, so P2 waits. When P1 resumes, its test is false (process_turn is now 0), so only P1 enters the CR; P2 can enter only after P1 exits and resets process_flag[0]. Mutual exclusion holds.
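Peterson's solution can be exercised with two Python threads sharing a counter (a sketch; the instrumentation names in_cs/max_in_cs are my own, and I rely on CPython's interpreter providing sequentially consistent reads and writes of the shared variables):

```python
import threading

flag = [False, False]    # process_flag in the text
turn = 0                 # process_turn in the text
counter = 0              # the shared update being protected
in_cs = 0
max_in_cs = 0
probe = threading.Lock() # instrumentation only, not part of Peterson's

def worker(i, iters=1000):
    global turn, counter, in_cs, max_in_cs
    other = 1 - i
    for _ in range(iters):
        flag[i] = True
        turn = other
        while flag[other] and turn == other:
            pass                         # busy wait
        # ---- critical section ----
        with probe:
            in_cs += 1
            max_in_cs = max(max_in_cs, in_cs)
        counter += 1                     # safe only if mutual exclusion holds
        with probe:
            in_cs -= 1
        # ---- exit section ----
        flag[i] = False

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(max_in_cs, counter)   # never more than 1 thread in the CS at once
```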
Hardware approaches
1. TSL instruction
Many computers have a special instruction called TSL (Test and Set Lock). When TSL IND is executed:
1. The contents of memory location IND are copied to the accumulator ACC.
2. The contents of IND are set to "N".
This instruction is indivisible, which means that it cannot be interrupted during the execution of these 2 steps.
Hence a process switch cannot take place during the execution of the instruction at all.
How can we use this TSL instruction to implement mutual exclusion?
IND can take on two values: "N" (the CR is Not free) and "F" (the CR is Free).
Structure of each process:
Begin
    Initial Section
    Call ENTER-CRITICALREGION
    CR
    Call EXIT-CRITICALREGION
    Remainder Section
End
Let IND=“F”
PA is scheduled. PA executes ENTER-CR routine.
ACC becomes “F” and IND becomes “N”.
Contents of ACC are compared with “F”.
Since it is equal process PA enters its CR.
Assume process PA loses control of the CPU due to a context switch to PB while in its CR.
PB executes the ENTER-CR routine, starting at label EN.0.
IND, which is "N", is copied to ACC, and IND is set to "N" again, so ACC = "N".
The comparison with "F" fails, so PB loops back to EN.0 and therefore does not execute its CR.
Assume that PA is rescheduled; it completes the execution of its CR and then executes the EXIT-CR routine, where IND is set back to "F".
Now, since IND is "F", the next process that executes the ENTER-CR routine will find the CR free and can enter its CR.
Even though a process switch can happen after any time slice, the indivisibility of TSL guarantees that two processes can never both read IND as "F" and enter the CR simultaneously.
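The ENTER-CR/EXIT-CR walkthrough above can be mirrored in a small Python simulation, where the tsl function plays the role of the indivisible instruction (all names here are my own; a real TSL is atomic in hardware, which this single-threaded sketch gets for free):

```python
# Simulated indivisible TSL: return the old value of the lock word IND
# and set it to "N" in one step.
def tsl(mem, addr):
    old = mem[addr]       # 1. copy IND to ACC
    mem[addr] = "N"       # 2. set IND to "N"
    return old

def enter_cr(mem):
    """EN.0: loop until TSL returns "F". This sketch gives up after a few
    attempts instead of spinning forever, so the demo terminates."""
    for _ in range(4):
        if tsl(mem, "IND") == "F":
            return True   # CR was free; we now hold it
    return False          # CR busy; a real process would keep looping

def exit_cr(mem):
    mem["IND"] = "F"      # release the CR

mem = {"IND": "F"}
print(enter_cr(mem))   # True  -- PA finds IND == "F" and enters its CR
print(enter_cr(mem))   # False -- PB finds IND == "N" and keeps looping
exit_cr(mem)
print(enter_cr(mem))   # True  -- after PA's exit, the CR is free again
```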
2. Other instructions: disabling interrupts
DI
CR
EI
Remaining Sections
Drawbacks:
1. It is a dangerous approach, since it gives user processes the power to turn off interrupts. If a process in the CR executes an infinite loop, interrupts will never be re-enabled and no other process can ever proceed.
2. The approach fails for a multiprocessor system: if there are 2 or more CPUs, disabling interrupts affects only the CPU that executed the disable instruction.
Semaphores
Used for synchronization.
Used to protect any resource, such as shared global memory, that needs to be accessed and updated by many processes.
The semaphore acts as a guard or lock on that resource.
Whenever a process needs to access the resource, it
first needs to take permission from the semaphore.
If the resource is free, i.e. no process is accessing or updating it, the process will be allowed; otherwise permission is denied.
In case of denial, the requesting process needs to wait until the semaphore permits it.
Semaphore can be a “Counting Semaphore” where it
can take any integer value or a “Binary semaphore”
where it can take on values 0 or 1
The semaphore is accessed by only 2 indivisible operations, known as the wait and signal operations, denoted P and V respectively:
P - proberen (Dutch, "to test") / wait / down
V - verhogen (Dutch, "to increment") / signal / up
P and V form the mutual exclusion primitives for any process.
Hence if a process has a CS, it has to be
encapsulated between these 2 operations
The general structure of such a process becomes:
Initial routine
P(S)
CS
V(S)
Remainder section
The P and V primitives ensure that only one process at a time is in its CS. A busy-waiting implementation:
P(S)
{
    while (S <= 0)
        ;      /* busy wait until S becomes positive */
    S--;
}
V(S)
{
    S++;
}
The P and V routines themselves must be executed indivisibly.
P and V routine must be executed indivisibly.
Drawback
Requires busy waiting: if a process PA is in its CS and another process PB executes P(S), PB loops continuously in the while statement, wasting CPU cycles.
To avoid busy waiting, P and V can instead block and wake up processes:
P(S)
Begin
    LOCK
    IF S > 0
        S = S - 1
    ELSE
        move the current PCB from the Running state to the semaphore queue
    UNLOCK
End
V(S)
Begin
    LOCK
    S = S + 1
    if the semaphore queue is not empty, move one PCB from it to the ready queue
    UNLOCK
End
Example trace: three processes P1, P2, P3 share a CS guarded by semaphore S, initially S = 1. (In this variant S may go negative, with |S| counting the blocked processes.)

Step   S before   Event                                    S after   Semaphore Queue (SQ)
1      1          P1 is scheduled, enters CS               0         empty
2      0          P2 is scheduled, blocks                  -1        P2 in SQ
3      -1         P3 is scheduled, blocks                  -2        P2 and P3 in SQ
4      -2         P1 exits CS                              -1        P2 is moved from SQ to RQ
5      -1         P2 is scheduled, enters CS, exits CS     0         P3 is moved from SQ to RQ
6      0          P3 is scheduled, enters CS, exits CS     1         No process in SQ
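The trace above can be reproduced with a small Python model of a blocking semaphore (class and method names are my own; this is the variant in which a negative S counts the blocked processes):

```python
from collections import deque

class Semaphore:
    """Blocking semaphore sketch; a negative s counts blocked processes."""
    def __init__(self, s=1):
        self.s = s
        self.queue = deque()   # the semaphore queue (blocked PCBs)

    def P(self, pid):
        self.s -= 1
        if self.s < 0:
            self.queue.append(pid)       # move PCB from Running to the SQ
            return "blocked"
        return "running"

    def V(self):
        self.s += 1
        if self.queue:
            return self.queue.popleft()  # move one PCB back to the ready queue
        return None

S = Semaphore(1)
print(S.P("P1"), S.s)   # running 0   -- P1 enters its CS
print(S.P("P2"), S.s)   # blocked -1  -- P2 joins the semaphore queue
print(S.P("P3"), S.s)   # blocked -2  -- P2 and P3 in the semaphore queue
print(S.V(), S.s)       # P2 -1       -- P1 exits CS; P2 moved to ready queue
print(S.V(), S.s)       # P3 0        -- P2 exits CS; P3 moved to ready queue
print(S.V(), S.s)       # None 1      -- P3 exits CS; no process waiting
```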
Producer-Consumer problem (Bounded buffer problem)
E.g. 1: A compiler can be considered a producer process and an assembler a consumer process: the compiler produces object code and the assembler consumes it.
E.g. 2: A process that gives a command for printing a file is a producer process, and the process that prints the file on the printer is a consumer process.
There is a buffer in an application maintained by 2 processes.
One process is called a producer that produces some data and
fills the buffer.
Another process is called a consumer that needs data produced
in the buffer and consumes it.
Producer Consumer problem solution using SEMAPHORE
To solve the producer consumer problem using semaphores, the following requirements should be met:
1. The producer process should not produce an item when the buffer is
full.
2. The consumer process should not consume an item when the buffer
is empty.
3. The producer and consumer process should not try to access and
update the buffer at the same time.
4. When a producer process is ready to produce an item and the buffer
is full, the item should not be lost i.e the producer must be blocked
and wait for the consumer to consume an item.
5. When a consumer process is ready to consume an item and the
buffer is empty, consumer must be blocked and wait for the
producer to produce item.
6. When a consumer process consumes an item, i.e. a slot in the buffer is freed, the blocked producer process must be signaled about it.
7. When a producer process produces an item in the empty buffer, the blocked consumer process must be signaled about it.
In the Producer-Consumer problem, semaphores are used for two purposes:
◦ mutual exclusion and
◦ synchronization.
Three semaphores are used: empty (counts the empty buffer slots), full (counts the filled slots) and Buffer_access (a binary semaphore guarding the buffer).
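A runnable sketch of this scheme, using Python's threading.Semaphore in place of the slides' wait/signal primitives (buffer size 3 matches the walkthrough; the variable names mirror empty, full and Buffer_access):

```python
import threading
from collections import deque

BUFFER_SIZE = 3
buffer = deque()
empty = threading.Semaphore(BUFFER_SIZE)   # counts free slots
full = threading.Semaphore(0)              # counts filled slots
buffer_access = threading.Semaphore(1)     # binary: mutual exclusion
consumed = []

def producer(items):
    for item in items:
        empty.acquire()            # wait(empty): blocks when buffer is full
        buffer_access.acquire()    # wait(Buffer_access)
        buffer.append(item)
        buffer_access.release()    # signal(Buffer_access)
        full.release()             # signal(full): wakes a blocked consumer

def consumer(n):
    for _ in range(n):
        full.acquire()             # wait(full): blocks when buffer is empty
        buffer_access.acquire()
        consumed.append(buffer.popleft())
        buffer_access.release()
        empty.release()            # signal(empty): wakes a blocked producer

items = list(range(10))
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items),))
p.start(); c.start(); p.join(); c.join()
print(consumed)   # all 10 items, in production order
```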
E.g. 1: Assume a buffer of size 3, with empty = 3, full = 0, Buffer_access = 1, and the producer process is scheduled:
Producer produces an item.
Executes wait(empty); empty becomes 2.
Executes wait(Buffer_access); Buffer_access becomes 0.
Adds the item to the buffer.
Executes signal(Buffer_access); Buffer_access becomes 1.
Executes signal(full); full becomes 1.
Assuming the time slice of the producer process is not yet over:
Producer produces the next item.
wait(empty): empty = 1
wait(Buffer_access): Buffer_access = 0
Producer adds the item to the buffer.
signal(Buffer_access): Buffer_access = 1
signal(full): full = 2
Continuing, the producer produces the next item:
wait(empty): empty = 0
wait(Buffer_access): Buffer_access = 0
Producer adds the item to the buffer.
signal(Buffer_access): Buffer_access = 1
signal(full): full = 3
Continuing, the producer produces one more item:
wait(empty): empty = -1
THE PRODUCER PROCESS GETS BLOCKED (the buffer is full).
Assume the consumer is now scheduled to run:
Consumer executes wait(full); full becomes 2.
Executes wait(Buffer_access); Buffer_access becomes 0.
Consumes an item from the buffer.
Executes signal(Buffer_access); Buffer_access = 1.
Executes signal(empty); empty becomes 0, and this wakes up the producer process (removes it from the blocked state).
After this, if the producer process is scheduled:
It will resume from wait(Buffer_access); Buffer_access = 0.
It adds its item to the buffer.
Buffer_access = 1.
full becomes 3.
E.g. 2:
Assume empty = 3, full = 0, Buffer_access = 1, and the consumer process is scheduled first.
The consumer executes wait(full); full becomes -1.
THE CONSUMER PROCESS GETS BLOCKED: there is no item to consume until the producer signals full.
Classic synchronization problem: Reader-Writer problem
Writers must have exclusive access: if one writer is writing, other writers must wait.
Also, when a writer is writing, a reader is not allowed to access the data item; multiple readers, however, may read at the same time.
This synchronization problem cannot be solved by simply guarding the data item with a single mutual-exclusion lock, because that would needlessly serialize the readers as well.
Shared data:
semaphore mutex, wrt;
int readcount;
Initially:
mutex = 1, wrt = 1, readcount = 0
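A runnable Python sketch of the first readers-writers solution using the mutex, wrt and readcount declared above (the state/violations bookkeeping is my own instrumentation, added only to check that no writer ever overlaps a reader or another writer):

```python
import threading, time, random

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # held by writers, and by the reader group
readcount = 0

# Instrumentation (not part of the classic solution):
state = {"readers": 0, "writers": 0}
state_lock = threading.Lock()
violations = []

def reader(i):
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    with state_lock:
        state["readers"] += 1
        if state["writers"]:
            violations.append(("reader during write", i))
    time.sleep(0.001)            # reading
    with state_lock:
        state["readers"] -= 1
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader lets writers in
    mutex.release()

def writer(i):
    wrt.acquire()                # exclusive access
    with state_lock:
        state["writers"] += 1
        if state["writers"] > 1 or state["readers"]:
            violations.append(("overlap", i))
    time.sleep(0.001)            # writing
    with state_lock:
        state["writers"] -= 1
    wrt.release()

threads = [threading.Thread(target=reader, args=(i,)) for i in range(5)]
threads += [threading.Thread(target=writer, args=(i,)) for i in range(3)]
random.shuffle(threads)
for t in threads: t.start()
for t in threads: t.join()
print(violations)   # readers may overlap readers, but never a writer
```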
Classic synchronization problem: Dining Philosophers
Philosopher i:
do {
P(fork[i]);
P(fork[i+1]);
/* eat */