
Module-2

Process Synchronization
 To introduce the critical-section problem, whose solutions can be used to
ensure the consistency of shared data
 To present both software and hardware solutions of the critical-section
problem
 To introduce the concept of an atomic transaction and describe
mechanisms to ensure atomicity
 Concurrent access to shared data may result in data inconsistency
 Maintaining data consistency requires mechanisms to ensure the orderly
execution of cooperating processes
 Suppose that we wanted to provide a solution to the consumer-producer problem
that fills all the buffers. We can do so by having an integer count that keeps track
of the number of full buffers. Initially, count is set to 0. It is incremented by the
producer after it produces a new buffer and is decremented by the consumer
after it consumes a buffer
Producer
while (true) {
    /* produce an item and put in nextProduced */
    while (count == BUFFER_SIZE)
        ; // do nothing
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}
Consumer
while (true) {
    while (count == 0)
        ; // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}
Race Condition
count++ could be implemented as

    register1 = count
    register1 = register1 + 1
    count = register1

count-- could be implemented as

    register2 = count
    register2 = register2 - 1
    count = register2

Consider this execution interleaving with “count = 5” initially:

    S0: producer execute register1 = count          {register1 = 5}
    S1: producer execute register1 = register1 + 1  {register1 = 6}
    S2: consumer execute register2 = count          {register2 = 5}
    S3: consumer execute register2 = register2 - 1  {register2 = 4}
    S4: producer execute count = register1          {count = 6}
    S5: consumer execute count = register2          {count = 4}

The final value of count is 4 here (and would be 6 if S4 and S5 were swapped) instead of the
correct value 5; because the outcome depends on the interleaving, this is a race condition.
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there exist some
processes that wish to enter their critical section, then the selection of the processes
that will enter the critical section next cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a request
to enter its critical section and before that request is granted
⚫ Assume that each process executes at a nonzero speed
⚫ No assumption concerning relative speed of the N processes
Peterson’s Solution
 Two process solution
 Assume that the LOAD and STORE instructions are atomic; that is,
cannot be interrupted.
 The two processes share two variables:
 int turn;
 Boolean flag[2]
 The variable turn indicates whose turn it is to enter the critical section.
 The flag array is used to indicate if a process is ready to enter the critical
section. flag[i] = true implies that process Pi is ready!
Algorithm for Process Pi
do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ; // busy wait
    // critical section
    flag[i] = FALSE;
    // remainder section
} while (TRUE);
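The algorithm above is pseudocode; below is a minimal runnable sketch for two POSIX threads, using C11 atomics with the default sequentially consistent ordering so that the LOAD/STORE atomicity assumption actually holds on modern hardware. The worker() function, thread indices, and the shared counter are illustrative additions, not part of the original algorithm.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_bool flag[2];   // flag[i] == true: Pi wants to enter
    static atomic_int  turn;      // whose turn it is to defer
    static long counter = 0;      // shared data protected by Peterson's algorithm

    static void *worker(void *arg) {
        int i = (int)(long)arg, j = 1 - i;
        for (int k = 0; k < 100000; k++) {
            atomic_store(&flag[i], true);   // entry section
            atomic_store(&turn, j);
            while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
                ;                           // busy wait
            counter++;                      // critical section
            atomic_store(&flag[i], false);  // exit section
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        pthread_create(&t0, NULL, worker, (void *)0L);
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("counter = %ld (expected 200000)\n", counter);
        return 0;
    }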
Synchronization Hardware
 Many systems provide hardware support for critical section code
 Uniprocessors – could disable interrupts
 Currently running code would execute without preemption
 Generally too inefficient on multiprocessor systems
Operating systems using this approach are not broadly scalable
 Modern machines provide special atomic hardware instructions
Atomic = non-interruptible
 Either test a memory word and set its value, or swap the contents of two memory words
Solution to Critical-section Problem Using Locks
do {
    acquire lock
        critical section
    release lock
        remainder section
} while (TRUE);
TestAndSet Instruction
Definition:
boolean TestAndSet (boolean *target)
{
    boolean rv = *target;   // read the old value
    *target = TRUE;         // set the word
    return rv;              // return the old value
}
Solution using TestAndSet
Shared boolean variable lock, initialized to FALSE. Solution:
do {
    while (TestAndSet(&lock))
        ; // do nothing
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
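C11 exposes this kind of instruction portably as atomic_flag_test_and_set(). A minimal spinlock sketch built on it, assuming a C11 compiler (the names spin_acquire and spin_release are illustrative):

    #include <stdatomic.h>

    // A spinlock built on C11's atomic_flag, whose atomic_flag_test_and_set()
    // behaves like the TestAndSet instruction described above.
    static atomic_flag lock = ATOMIC_FLAG_INIT;

    static void spin_acquire(void) {
        while (atomic_flag_test_and_set(&lock))
            ;                       // busy wait until the old value was clear
    }

    static void spin_release(void) {
        atomic_flag_clear(&lock);   // lock = FALSE
    }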

Swap Instruction
Definition:
void Swap (boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
Solution using Swap
Shared Boolean variable lock initialized to FALSE; each process has a local Boolean
variable key.
Solution:
do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);

Bounded-waiting Mutual Exclusion with TestandSet()


do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = FALSE;
    // critical section
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;
    else
        waiting[j] = FALSE;
    // remainder section
} while (TRUE);
Semaphore
 Synchronization tool that does not require busy waiting
 Semaphore S – integer variable
 Two standard operations modify S: wait() and signal()
 Originally called P() and V()
 Less complicated
 Can only be accessed via two indivisible (atomic) operations
wait (S) {
    while (S <= 0)
        ; // no-op
    S--;
}
signal (S) {
    S++;
}
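In practice, counting semaphores are available to user programs through APIs such as POSIX semaphores, where wait() and signal() correspond to sem_wait() and sem_post(). A small usage sketch (error checking omitted):

    #include <semaphore.h>
    #include <stdio.h>

    int main(void) {
        sem_t s;
        sem_init(&s, 0, 1);   // unnamed semaphore, not shared across processes, value 1

        sem_wait(&s);         // wait(S): blocks while the value is 0, then decrements
        printf("in critical section\n");
        sem_post(&s);         // signal(S): increments and wakes a waiter if any

        sem_destroy(&s);
        return 0;
    }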

Semaphore as General Synchronization Tool


 Counting semaphore – integer value can range over an unrestricted
domain
 Binary semaphore – integer value can range only between 0 and 1; can be
simpler to implement
 Also known as mutex locks
 Can implement a counting semaphore S as a binary semaphore
 Provides mutual exclusion:
Semaphore mutex; // initialized to 1
do {
    wait (mutex);
    // Critical Section
    signal (mutex);
    // remainder section
} while (TRUE);

Semaphore Implementation
 Must guarantee that no two processes can execute wait () and signal () on the
same semaphore at the same time
 Thus, implementation becomes the critical-section problem, where the wait and
signal code are placed in the critical section.
 Could now have busy waiting in critical section implementation
But implementation code is short
Little busy waiting if critical section rarely occupied
 Note that applications may spend lots of time in critical sections and
therefore this is not a good solution.

Semaphore Implementation with no Busy waiting


 With each semaphore there is an associated waiting queue. Each entry in a
waiting queue has two data items:
 value (of type integer)
 pointer to next record in the list
 Two operations:
 block – place the process invoking the operation on the appropriate waiting
queue.
 wakeup – remove one of the processes in the waiting queue and place it in the
ready queue.

Implementation of wait:
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}
Implementation of signal:
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
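A user-level sketch of this blocking behaviour can be built from a pthread mutex and condition variable. Note that, unlike the pseudocode above, this version tests the value before decrementing instead of letting it go negative, which is a common equivalent formulation; it is an illustration only, not how any particular kernel implements semaphores.

    #include <pthread.h>

    typedef struct {
        int             value;
        pthread_mutex_t lock;   // protects value
        pthread_cond_t  cond;   // where waiting threads block
    } semaphore;

    void sem_init_sketch(semaphore *s, int v) {
        s->value = v;
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->cond, NULL);
    }

    void sem_wait_sketch(semaphore *s) {
        pthread_mutex_lock(&s->lock);
        while (s->value <= 0)                          // nothing available: block()
            pthread_cond_wait(&s->cond, &s->lock);     // releases lock and sleeps
        s->value--;
        pthread_mutex_unlock(&s->lock);
    }

    void sem_signal_sketch(semaphore *s) {
        pthread_mutex_lock(&s->lock);
        s->value++;
        pthread_cond_signal(&s->cond);                 // wakeup(P): wake one waiter
        pthread_mutex_unlock(&s->lock);
    }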
Deadlock and Starvation
 Deadlock – two or more processes are waiting indefinitely for an event that can
be caused by only one of the waiting processes
 Let S and Q be two semaphores initialized to 1
        P0                  P1
        wait(S);            wait(Q);
        wait(Q);            wait(S);
        ...                 ...
        signal(S);          signal(Q);
        signal(Q);          signal(S);

 Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended
 Priority Inversion - Scheduling problem when lower-priority process holds a lock
needed by higher-priority process
Classical Problems of Synchronization
 Bounded-Buffer Problem
 Readers and Writers Problem
 Dining-Philosophers Problem

Bounded-Buffer Problem
 N buffers, each can hold one item
 Semaphore mutex initialized to the value 1
 Semaphore full initialized to the value 0
 Semaphore empty initialized to the value N.
 The structure of the producer process:
do {
    // produce an item in nextp
    wait (empty);
    wait (mutex);
    // add the item to the buffer
    signal (mutex);
    signal (full);
} while (TRUE);
The structure of the consumer process:
do {
    wait (full);
    wait (mutex);
    // remove an item from buffer to nextc
    signal (mutex);
    signal (empty);
    // consume the item in nextc
} while (TRUE);
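A runnable version of this solution, assuming POSIX semaphores and pthreads; the buffer size, item count, and function names are illustrative:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define BUFFER_SIZE 5
    #define ITEMS 20

    static int buffer[BUFFER_SIZE];
    static int in = 0, out = 0;

    static sem_t empty_slots;   // counts empty slots, starts at BUFFER_SIZE
    static sem_t full_slots;    // counts full slots, starts at 0
    static sem_t mutex;         // binary semaphore protecting buffer/in/out

    static void *producer(void *arg) {
        for (int i = 0; i < ITEMS; i++) {
            sem_wait(&empty_slots);           // wait(empty)
            sem_wait(&mutex);                 // wait(mutex)
            buffer[in] = i;                   // add the item to the buffer
            in = (in + 1) % BUFFER_SIZE;
            sem_post(&mutex);                 // signal(mutex)
            sem_post(&full_slots);            // signal(full)
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        for (int i = 0; i < ITEMS; i++) {
            sem_wait(&full_slots);            // wait(full)
            sem_wait(&mutex);                 // wait(mutex)
            int item = buffer[out];           // remove an item from the buffer
            out = (out + 1) % BUFFER_SIZE;
            sem_post(&mutex);                 // signal(mutex)
            sem_post(&empty_slots);           // signal(empty)
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        sem_init(&empty_slots, 0, BUFFER_SIZE);
        sem_init(&full_slots, 0, 0);
        sem_init(&mutex, 0, 1);

        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }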
Readers-Writers Problem
A data set is shared among a number of concurrent processes
 Readers – only read the data set; they do not perform any updates
 Writers – can both read and write
 Problem – allow multiple readers to read at the same time. Only one single writer can access the shared data at the same time
 Shared Data
 Data set
 Semaphore mutex initialized to 1
 Semaphore wrt initialized to 1
 Integer readcount initialized to 0
The structure of a writer process:
do {
    wait (wrt);
    // writing is performed
    signal (wrt);
} while (TRUE);
The structure of a reader process:
do {
    wait (mutex);
    readcount++;
    if (readcount == 1)
        wait (wrt);
    signal (mutex);
    // reading is performed
    wait (mutex);
    readcount--;
    if (readcount == 0)
        signal (wrt);
    signal (mutex);
} while (TRUE);
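Pthreads exposes this pattern directly as reader-writer locks. A brief usage sketch; the shared_value variable and function names are illustrative:

    #include <pthread.h>

    static pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
    static int shared_value = 0;          // the "data set"

    int reader(void) {
        pthread_rwlock_rdlock(&rwlock);   // many readers may hold this at once
        int v = shared_value;             // reading is performed
        pthread_rwlock_unlock(&rwlock);
        return v;
    }

    void writer(int v) {
        pthread_rwlock_wrlock(&rwlock);   // exclusive: no readers or writers
        shared_value = v;                 // writing is performed
        pthread_rwlock_unlock(&rwlock);
    }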

Dining-Philosophers Problem

 Shared data
 Bowl of rice (data set)

 Semaphore chopstick [5] initialized to 1


 The structure of Philosopher i:
do {
    wait ( chopstick[i] );
    wait ( chopstick[(i + 1) % 5] );
    // eat
    signal ( chopstick[i] );
    signal ( chopstick[(i + 1) % 5] );
    // think
} while (TRUE);
Problems with Semaphores
Incorrect use of semaphore operations:
 signal (mutex) … wait (mutex)  (order reversed)
 wait (mutex) … wait (mutex)  (signal omitted, so the second wait blocks forever)
 Omitting wait (mutex) or signal (mutex) (or both)

Monitors
A high-level abstraction that provides a convenient and effective mechanism for process
synchronization
Only one process may be active within the monitor at a time.

monitor monitor-name
{
    // shared variable declarations
    procedure P1 (…) { …. }
    …
    procedure Pn (…) { …… }
    initialization code (…) { … }
}
Schematic view of a Monitor

Condition Variables

condition x, y;
Two operations on a condition variable:
x.wait () – a process that invokes the operation is suspended.
x.signal () – resumes one of the processes (if any) that invoked x.wait ()
Monitor with Condition Variables

Solution to Dining Philosophers

monitor DP
{
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup (int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();
    }

    void putdown (int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test (int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}

Each philosopher i invokes the operations pickup() and putdown() in the following sequence:

    DiningPhilosophers.pickup(i);
    EAT
    DiningPhilosophers.putdown(i);
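C has no monitor construct, but the DP monitor can be approximated with one pthread mutex acting as the monitor lock and one condition variable per philosopher. A sketch under that assumption (the while loop around pthread_cond_wait guards against spurious wakeups, which the monitor's self[i].wait did not need to consider):

    #include <pthread.h>

    enum phil_state { THINKING, HUNGRY, EATING };

    static enum phil_state pstate[5];                  // all THINKING (0) initially
    static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  self[5] = {
        PTHREAD_COND_INITIALIZER, PTHREAD_COND_INITIALIZER, PTHREAD_COND_INITIALIZER,
        PTHREAD_COND_INITIALIZER, PTHREAD_COND_INITIALIZER
    };

    static void test(int i) {
        if (pstate[(i + 4) % 5] != EATING &&
            pstate[i] == HUNGRY &&
            pstate[(i + 1) % 5] != EATING) {
            pstate[i] = EATING;
            pthread_cond_signal(&self[i]);             // self[i].signal()
        }
    }

    void pickup(int i) {
        pthread_mutex_lock(&monitor_lock);             // enter the monitor
        pstate[i] = HUNGRY;
        test(i);
        while (pstate[i] != EATING)                    // self[i].wait()
            pthread_cond_wait(&self[i], &monitor_lock);
        pthread_mutex_unlock(&monitor_lock);           // leave the monitor
    }

    void putdown(int i) {
        pthread_mutex_lock(&monitor_lock);
        pstate[i] = THINKING;
        test((i + 4) % 5);                             // test left and right neighbors
        test((i + 1) % 5);
        pthread_mutex_unlock(&monitor_lock);
    }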

Monitor Implementation Using Semaphores

Variables:
    semaphore mutex;     // (initially = 1)
    semaphore next;      // (initially = 0)
    int next_count = 0;

Each procedure F will be replaced by:

    wait(mutex);
        …
        body of F;
        …
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);

Mutual exclusion within a monitor is ensured.
Monitor Implementation
For each condition variable x, we have:
    semaphore x_sem;   // (initially = 0)
    int x_count = 0;

The operation x.wait can be implemented as:

    x_count++;
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);
    wait(x_sem);
    x_count--;

The operation x.signal can be implemented as:

    if (x_count > 0) {
        next_count++;
        signal(x_sem);
        wait(next);
        next_count--;
    }
A Monitor to Allocate Single Resource
monitor ResourceAllocator
{
    boolean busy;
    condition x;

    void acquire(int time) {
        if (busy)
            x.wait(time);
        busy = TRUE;
    }

    void release() {
        busy = FALSE;
        x.signal();
    }

    initialization_code() {
        busy = FALSE;
    }
}
Synchronization Examples
 Solaris
 Windows XP
 Linux

 Pthreads
Solaris Synchronization
 Implements a variety of locks to support multitasking, multithreading
(including real-time threads), and multiprocessing
 Uses adaptive mutexes for efficiency when protecting data from short code
segments
 Uses condition variables and readers-writers locks when longer sections of
code need access to data
 Uses turnstiles to order the list of threads waiting to acquire either an adaptive
mutex or reader-writer lock

Windows XP Synchronization
 Uses interrupt masks to protect access to global resources on uniprocessor
systems
 Uses spinlocks on multiprocessor systems
 Also provides dispatcher objects, which may act as either mutexes or semaphores
 Dispatcher objects may also provide events

 An event acts much like a condition variable


Linux Synchronization
 Prior to kernel version 2.6, Linux disabled interrupts to implement short critical sections
 Version 2.6 and later is fully preemptive
 Linux provides:
 semaphores
 spin locks
Pthreads Synchronization
 Pthreads API is OS-independent
 It provides:
 mutex locks
 condition variables
 Non-portable extensions include:
 read-write locks
 spin locks
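A minimal sketch of the Pthreads mutex/condition-variable pairing listed above; the ready flag and function names are illustrative:

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int ready = 0;                    // shared state protected by lock

    // One thread waits until another thread sets the condition.
    void wait_until_ready(void) {
        pthread_mutex_lock(&lock);
        while (!ready)                       // re-check on every wakeup
            pthread_cond_wait(&cond, &lock); // atomically releases lock and sleeps
        pthread_mutex_unlock(&lock);
    }

    void set_ready(void) {
        pthread_mutex_lock(&lock);
        ready = 1;
        pthread_cond_signal(&cond);          // wake one waiter
        pthread_mutex_unlock(&lock);
    }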

Atomic Transactions
 System Model
 Log-based Recovery
 Checkpoints
 Concurrent Atomic Transactions

System Model
 Assures that a set of operations happens as a single logical unit of work: in its entirety, or not at all
 Related to field of database systems
 Challenge is assuring atomicity despite computer system failures
 Transaction - collection of instructions or operations that performs single
logical function
 Here we are concerned with changes to stable storage – disk
 Transaction is series of read and write operations
 Terminated by a commit (transaction successful) or abort (transaction failed) operation
 An aborted transaction must be rolled back to undo any changes it performed
Types of Storage Media
 Volatile storage – information stored here does not survive system crashes
 Example: main memory, cache
 Nonvolatile storage – Information usually survives crashes
 Example: disk and tape
 Stable storage – Information never lost
 Not actually possible, so approximated via replication or RAID to devices
with independent failure modes
 Goal is to assure transaction atomicity where failures cause loss of
information on volatile storage

Log-Based Recovery
 Record to stable storage information about all modifications by a transaction
 Most common is write-ahead logging
 Log on stable storage, each log record describes single transaction write
operation, including
Transaction name
Data item name
Old value
New value
 <Ti starts> written to log when transaction Ti starts
 <Ti commits> written when Ti commits
 Log entry must reach stable storage before operation on data occurs

Log-Based Recovery Algorithm


Using the log, system can handle any volatile memory errors
 Undo(Ti) restores value of all data updated by Ti
 Redo(Ti) sets values of all data in transaction Ti to new values
 Undo(Ti) and redo(Ti) must be idempotent
 Multiple executions must have the same result as one execution
 If system fails, restore state of all updated data via log
 If log contains <Ti starts> without <Ti commits>, undo(Ti)
 If log contains <Ti starts> and <Ti commits>, redo(Ti)
Checkpoints
Log could become long, and recovery could take a long time; checkpoints shorten the log and recovery time.
Checkpoint scheme:
1. Output all log records currently in volatile storage to stable storage
2. Output all modified data from volatile storage to stable storage
3. Output a log record <checkpoint> to the log on stable storage
Now recovery only includes Ti such that Ti started executing before the most recent checkpoint, and all transactions after Ti. All other transactions are already on stable storage.

Concurrent Transactions
 Must be equivalent to serial execution – serializability
 Could perform all transactions in critical section
 Inefficient, too restrictive
 Concurrency-control algorithms provide serializability

Serializability

 Consider two data items A and B


 Consider Transactions T0 and T1
 Execute T0, T1 atomically
 Execution sequence called schedule
 Atomically executed transaction order called serial schedule

 For N transactions, there are N! valid serial schedules


Schedule 1: T0 then T1
Nonserial Schedule
 Nonserial schedule allows overlapped execution
 Resulting execution not necessarily incorrect
 Consider schedule S, operations Oi, Oj
 Conflict if access same data item, with at least one write
 If Oi, Oj consecutive and operations of different transactions &
Oi and Oj don’t conflict
 Then S’ with swapped order Oj Oi equivalent to S
 If S can become S’ via swapping nonconflicting operations
 S is conflict serializable

Schedule 2: Concurrent Serializable Schedule

Locking Protocol

 Ensure serializability by associating lock with each data item


 Follow locking protocol for access control
 Locks
 Shared – Ti has shared-mode lock (S) on item Q, Ti can read Q but not write
Q
 Exclusive – Ti has exclusive-mode lock (X) on Q, Ti can read and write Q
 Require every transaction on item Q acquire appropriate lock
 If lock already held, new request may have to wait
 Similar to readers-writers algorithm

Two-phase Locking Protocol


 Generally ensures conflict serializability
 Each transaction issues lock and unlock requests in two phases
 Growing – obtaining locks
 Shrinking – releasing locks

 Does not prevent deadlock


Timestamp-based Protocols
 Select order among transactions in advance – timestamp- ordering
 Transaction Ti associated with timestamp TS(Ti) before Ti starts
 TS(Ti) < TS(Tj) if Ti entered system before Tj
 TS can be generated from system clock or as logical counter
incremented at each entry of transaction
 Timestamps determine serializability order
 If TS(Ti) < TS(Tj), system must ensure produced schedule equivalent to serial
schedule where Ti appears before Tj
Timestamp-based Protocol Implementation

 Data item Q gets two timestamps


 W-timestamp(Q) – largest timestamp of any transaction that executed
write(Q) successfully
 R-timestamp(Q) – largest timestamp of successful read(Q)
 Updated whenever read(Q) or write(Q) executed
 Timestamp-ordering protocol assures any conflicting read and write executed
in timestamp order
Suppose Ti executes read(Q)
 If TS(Ti) < W-timestamp(Q), Ti needs to read value of Q that was already
overwritten
read operation rejected and Ti rolled back
 If TS(Ti) ≥ W-timestamp(Q)
read executed, R-timestamp(Q) set to max(R-timestamp(Q), TS(Ti))
Timestamp-ordering Protocol

Suppose Ti executes write(Q)
If TS(Ti) < R-timestamp(Q), the value of Q produced by Ti was needed previously and Ti
assumed it would never be produced
Write operation rejected, Ti rolled back
If TS(Ti) < W-timestamp(Q), Ti is attempting to write an obsolete value of Q
Write operation rejected and Ti rolled back
Otherwise, write executed
Any rolled-back transaction Ti is assigned a new timestamp and restarted
The algorithm ensures conflict serializability and freedom from deadlock
Schedule Possible Under Timestamp Protocol
To develop a description of deadlocks, which prevent sets of concurrent processes from
completing their tasks
To present a number of different methods for preventing or avoiding deadlocks in a
computer system
The Deadlock Problem
A set of blocked processes each holding a resource and waiting to acquire a resource held by
another process in the set
Example
System has 2 disk drives
P1 and P2 each hold one disk drive and each needs another one
Example with semaphores A and B, initialized to 1:

        P0              P1
        wait(A);        wait(B);
        wait(B);        wait(A);
Bridge Crossing Example

Traffic only in one direction


Each section of a bridge can be viewed as a resource
If a deadlock occurs, it can be resolved if one car backs up (preempt resources and
rollback)
Several cars may have to be backed up if a deadlock occurs
Starvation is possible
Note – Most OSes do not prevent or deal with deadlocks

System Model
Resource types R1, R2, …, Rm (CPU cycles, memory space, I/O devices)
Each resource type Ri has Wi instances.
Each process utilizes a resource as follows:
    request
    use
    release

Deadlock Characterization
Deadlock can arise if four conditions hold simultaneously
Mutual exclusion: only one process at a time can use a resource
Hold and wait: a process holding at least one resource is waiting to acquire additional
resources held by other processes
No preemption: a resource can be released only voluntarily by the process
holding it, after that process has completed its task
Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is
waiting for a resource that is held by P1, P1 is waiting for a resource that is held by
P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a
resource that is held by P0.
Resource-Allocation Graph
A set of vertices V and a set of edges E
V is partitioned into two types:
P = {P1, P2, …, Pn}, the set consisting of all the processes in the system
R = {R1, R2, …, Rm}, the set consisting of all resource types in the system
request edge – directed edge Pi → Rj
assignment edge – directed edge Rj → Pi

(figures: graph notation – a circle denotes a process Pi; a rectangle with one dot per instance
denotes a resource type, e.g. with 4 instances; an edge Pi → Rj means Pi requests an instance
of Rj; an edge Rj → Pi means Pi is holding an instance of Rj)
Example of a Resource Allocation Graph

Resource Allocation Graph With A Deadlock

Graph With A Cycle But No Deadlock


Basic Facts
If graph contains no cycles ⇒ no deadlock
If graph contains a cycle ⇒
    if only one instance per resource type, then deadlock
    if several instances per resource type, possibility of deadlock
Methods for Handling Deadlocks
Ensure that the system will never enter a deadlock state
Allow the system to enter a deadlock state and then recover
Ignore the problem and pretend that deadlocks never occur in the system; used by most
operating systems, including UNIX

Deadlock Prevention
Restrain the ways request can be made
Mutual Exclusion – not required for sharable resources; must hold for nonsharable
resources
Hold and Wait – must guarantee that whenever a process requests a resource, it does
not hold any other resources
Require process to request and be allocated all its resources before it begins
execution, or allow process to request resources only when the process has none
Low resource utilization; starvation possible

No Preemption –
If a process that is holding some resources requests another resource that cannot be
immediately allocated to it, then all resources currently being held are released
Preempted resources are added to the list of resources for which the process is waiting
Process will be restarted only when it can regain its old resources, as well as the new ones
that it is requesting
Circular Wait – impose a total ordering of all resource types, and require that each
process requests resources in an increasing order of enumeration

Deadlock Avoidance
Requires that the system has some additional a priori information
available
Simplest and most useful model requires that each process declare the
maximum number of resources of each type that it may need
The deadlock-avoidance algorithm dynamically examines the resource-allocation state to
ensure that there can never be a circular-wait condition
Resource-allocation state is defined by the number of available and allocated resources, and
the maximum demands of the processes
Safe State
When a process requests an available resource, system must decide if immediate
allocation leaves the system in a safe state
System is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the processes in
the system such that for each Pi, the resources that Pi can still request can be satisfied
by the currently available resources plus the resources held by all the Pj, with j < i. That is:
If Pi resource needs are not immediately available, then Pi can wait until all Pj
have finished
When Pj is finished, Pi can obtain needed resources, execute, return allocated
resources, and terminate
When Pi terminates, Pi +1 can obtain its needed resources, and so on
Basic Facts
If a system is in a safe state ⇒ no deadlocks
If a system is in an unsafe state ⇒ possibility of deadlock
Avoidance ⇒ ensure that the system will never enter an unsafe state

Safe, Unsafe, Deadlock State

Avoidance algorithms
Single instance of a resource type:
    Use a resource-allocation graph
Multiple instances of a resource type:
    Use the banker's algorithm
Resource-Allocation Graph Scheme
Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by a dashed line
Claim edge converts to a request edge when a process requests a resource
Request edge converts to an assignment edge when the resource is allocated to the process
When a resource is released by a process, the assignment edge reconverts to a claim edge
Resources must be claimed a priori in the system

Resource-Allocation Graph
Unsafe State In Resource-Allocation Graph
Resource-Allocation Graph Algorithm
Suppose that process Pi requests a resource Rj
The request can be granted only if converting the request edge to an assignment edge
does not result in the formation of a cycle in the resource-allocation graph

Banker’s Algorithm
Multiple instances
Each process must a priori claim its maximum use
When a process requests a resource, it may have to wait
When a process gets all its resources, it must return them in a finite amount of time

Data Structures for the Banker’s Algorithm


Let n = number of processes, and m = number of resource types.
Available: vector of length m. If Available[j] = k, there are k instances of resource type Rj available
Max: n x m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj
Allocation: n x m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj
Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task

Need[i,j] = Max[i,j] – Allocation[i,j]


Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
       Work = Available
       Finish[i] = false for i = 0, 1, …, n-1
2. Find an i such that both:
       (a) Finish[i] = false
       (b) Needi ≤ Work
   If no such i exists, go to step 4
3. Work = Work + Allocationi
   Finish[i] = true
   go to step 2
4. If Finish[i] == true for all i, then the system is in a safe state
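A direct C sketch of the safety algorithm; NPROC, NRES, and the global matrices are illustrative fixed sizes and names, not part of the algorithm itself:

    #include <stdbool.h>
    #include <string.h>

    #define NPROC 5
    #define NRES  3

    int Available[NRES];
    int Allocation[NPROC][NRES];
    int Need[NPROC][NRES];      // Need[i][j] = Max[i][j] - Allocation[i][j]

    // Returns true if the current state is safe (steps 1-4 above).
    bool is_safe(void) {
        int  work[NRES];
        bool finish[NPROC] = { false };
        memcpy(work, Available, sizeof work);          // step 1: Work = Available

        bool progress = true;
        while (progress) {                             // repeat steps 2-3
            progress = false;
            for (int i = 0; i < NPROC; i++) {
                if (finish[i]) continue;
                bool can_run = true;                   // step 2: Need_i <= Work ?
                for (int j = 0; j < NRES; j++)
                    if (Need[i][j] > work[j]) { can_run = false; break; }
                if (can_run) {
                    for (int j = 0; j < NRES; j++)     // step 3: Work += Allocation_i
                        work[j] += Allocation[i][j];
                    finish[i] = true;
                    progress  = true;
                }
            }
        }
        for (int i = 0; i < NPROC; i++)                // step 4
            if (!finish[i]) return false;
        return true;
    }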

Resource-Request Algorithm for Process Pi

Requesti = request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj.
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process
   has exceeded its maximum claim
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the resources are not
   available
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
       Available = Available – Requesti;
       Allocationi = Allocationi + Requesti;
       Needi = Needi – Requesti;
   If safe ⇒ the resources are allocated to Pi
   If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored
Example of Banker’s Algorithm

5 processes P0 through P4; 3 resource types:
A (10 instances), B (5 instances), and C (7 instances)
Snapshot at time T0:

          Allocation    Max      Available
          A B C         A B C    A B C
    P0    0 1 0         7 5 3    3 3 2
    P1    2 0 0         3 2 2
    P2    3 0 2         9 0 2
    P3    2 1 1         2 2 2
    P4    0 0 2         4 3 3

The content of the matrix Need is defined to be Max – Allocation:

          Need
          A B C
    P0    7 4 3
    P1    1 2 2
    P2    6 0 0
    P3    0 1 1
    P4    4 3 1

The system is in a safe state since the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria.

Example: P1 Requests (1,0,2)

Check that Request ≤ Available: (1,0,2) ≤ (3,3,2) ⇒ true

          Allocation    Need     Available
          A B C         A B C    A B C
    P0    0 1 0         7 4 3    2 3 0
    P1    3 0 2         0 2 0
    P2    3 0 2         6 0 0
    P3    2 1 1         0 1 1
    P4    0 0 2         4 3 1

Executing the safety algorithm shows that the sequence <P1, P3, P4, P0, P2> satisfies the
safety requirement.
Can a request for (3,3,0) by P4 be granted?
Can a request for (0,2,0) by P0 be granted?
Deadlock Detection
Allow system to enter a deadlock state
Detection algorithm
Recovery scheme

Single Instance of Each Resource Type

Maintain a wait-for graph
    Nodes are processes
    Pi → Pj if Pi is waiting for Pj
Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle,
there exists a deadlock
An algorithm to detect a cycle in a graph requires an order of n^2 operations, where n is
the number of vertices in the graph
Resource-Allocation Graph and Wait-for Graph

(figures: a resource-allocation graph and its corresponding wait-for graph)
Several Instances of a Resource Type
Available: a vector of length m indicates the number of available resources of each type
Allocation: an n x m matrix defines the number of resources of each type currently allocated
to each process
Request: an n x m matrix indicates the current request of each process. If Request[i,j] = k,
then process Pi is requesting k more instances of resource type Rj

Detection Algorithm

1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
       (a) Work = Available
       (b) For i = 1, 2, …, n, if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true
2. Find an index i such that both:
       (a) Finish[i] == false
       (b) Requesti ≤ Work
   If no such i exists, go to step 4
3. Work = Work + Allocationi
   Finish[i] = true
   go to step 2
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state.
   Moreover, if Finish[i] == false, then Pi is deadlocked

The algorithm requires an order of O(m x n^2) operations to detect whether the system is in a
deadlocked state.
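The detection algorithm differs from the safety algorithm only in its initialization of Finish and in using Request rather than Need. A C sketch under the same illustrative fixed sizes, with its own (hypothetical) global names Avail, Alloc, and Req:

    #include <stdbool.h>
    #include <string.h>

    #define NPROC 5
    #define NRES  3

    int Avail[NRES];
    int Alloc[NPROC][NRES];
    int Req[NPROC][NRES];

    // Marks deadlocked[i] = true for every deadlocked process;
    // returns true if any deadlock exists.
    bool detect_deadlock(bool deadlocked[NPROC]) {
        int  work[NRES];
        bool finish[NPROC];
        memcpy(work, Avail, sizeof work);                     // step 1(a)
        for (int i = 0; i < NPROC; i++) {                     // step 1(b)
            bool holds_nothing = true;
            for (int j = 0; j < NRES; j++)
                if (Alloc[i][j] != 0) { holds_nothing = false; break; }
            finish[i] = holds_nothing;
        }

        bool progress = true;
        while (progress) {                                    // steps 2-3
            progress = false;
            for (int i = 0; i < NPROC; i++) {
                if (finish[i]) continue;
                bool grantable = true;                        // Request_i <= Work ?
                for (int j = 0; j < NRES; j++)
                    if (Req[i][j] > work[j]) { grantable = false; break; }
                if (grantable) {
                    for (int j = 0; j < NRES; j++)
                        work[j] += Alloc[i][j];
                    finish[i] = true;
                    progress  = true;
                }
            }
        }

        bool any = false;                                     // step 4
        for (int i = 0; i < NPROC; i++) {
            deadlocked[i] = !finish[i];
            if (deadlocked[i]) any = true;
        }
        return any;
    }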
Example of Detection Algorithm
Five processes P0 through P4; three resource types A (7 instances), B (2 instances), and
C (6 instances)
Snapshot at time T0:

          Allocation    Request    Available
          A B C         A B C      A B C
    P0    0 1 0         0 0 0      0 0 0
    P1    2 0 0         2 0 2
    P2    3 0 3         0 0 0
    P3    2 1 1         1 0 0
    P4    0 0 2         0 0 2

Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i
