
Operating Systems

Language Support for Concurrent Programming, Deadlocks

Note: Some slides and/or pictures in the following are adapted from slides ©2005 Silberschatz, Galvin, and Gagne. Slides courtesy of Anthony D. Joseph, John Kubiatowicz, AJ Shankar, George Necula, Alex Aiken, Eric Brewer, Ras Bodik, Ion Stoica, Doug Tygar, and David Wagner.
Goals for Today
• Language Support for Synchronization
• Discussion of Resource Contention and Deadlocks
– Conditions for their occurrence
– Solutions for breaking and avoiding deadlock
Recap: Readers/Writers Problem
[Figure: a shared database accessed by one writer (W) and several readers (R)]
• Motivation: Consider a shared database


– Two classes of users:
» Readers – never modify database
» Writers – read and modify database
– Is using a single lock on the whole database sufficient?
» Like to have many readers at the same time
» Only one writer at a time
Recap: Readers/Writers Solution
• Correctness Constraints:
– Readers can access database when no writers
– Writers can access database when no readers or writers
– Only one thread manipulates state variables at a time
• Basic structure of a solution:
– Reader()
Wait until no writers
Access database
Check out – wake up a waiting writer
– Writer()
Wait until no active readers or writers
Access database
Check out – wake up waiting readers or writer
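The structure above maps naturally onto Java's implicit per-object lock (covered later in these slides). Below is a minimal sketch under that assumption; the class and method names (ReadersWriters, startRead, etc.) are illustrative, and this simple version lets a steady stream of readers starve writers.

// Minimal sketch of the readers/writers structure above, using Java's
// implicit per-object lock and condition variable (wait/notifyAll).
public class ReadersWriters {
  private int activeReaders = 0;   // readers currently accessing the database
  private int activeWriters = 0;   // 0 or 1 writer currently accessing

  public synchronized void startRead() throws InterruptedException {
    while (activeWriters > 0)      // wait until no writers
      wait();
    activeReaders++;
  }
  public synchronized void doneRead() {
    activeReaders--;
    if (activeReaders == 0)
      notifyAll();                 // check out – wake up a waiting writer
  }

  public synchronized void startWrite() throws InterruptedException {
    while (activeReaders > 0 || activeWriters > 0)  // wait until no readers or writers
      wait();
    activeWriters = 1;
  }
  public synchronized void doneWrite() {
    activeWriters = 0;
    notifyAll();                   // check out – wake up waiting readers or a writer
  }
}

Callers access the database between startRead()/doneRead() (or startWrite()/doneWrite()), outside the monitor, so the state variables are only manipulated while holding the lock.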
C-Language Support for Synchronization
• C language: All locking/unlocking is explicit: you need to
check every possible exit path from a critical section.

int Rtn() {
  lock.acquire();
  …
  if (error) {
    lock.release();      // must release on this early-exit path too
    return errReturnCode;
  }
  …
  lock.release();
  return OK;
}
C++ Language Support for Synchronization
• Languages with exceptions like C++
– Languages that support exceptions are more challenging:
exceptions create many new exit paths from the critical section.
– Consider:
void Rtn() {
  lock.acquire();
  …
  DoFoo();
  …
  lock.release();
}

void DoFoo() {
  …
  if (exception) throw errException;
  …
}
– Notice that an exception in DoFoo() will exit without releasing
the lock
C++ Language Support for Synchronization
(cont’d)
• Must catch all exceptions in critical sections
– Catch exceptions, release lock, and re-throw exception:
void Rtn() {
  lock.acquire();
  try {
    …
    DoFoo();
    …
  } catch (...) {      // really three dots! catches all exceptions
    lock.release();    // release lock
    throw;             // re-throw unknown exception
  }
  lock.release();
}

void DoFoo() {
  …
  if (exception) throw errException;
  …
}
C++ Language Support for Synchronization
(cont’d)
• Alternative (Recommended by Stroustrup): Use the lock class
destructor to release the lock.
• Construct it on entry to a critical section contained in a { } block; it gets
automatically destroyed (and the lock released) on block exit.
• Exceptions will unwind the stack, call destructor, free the lock

class lock {
  mutex &m_;
public:
  lock(mutex &m) : m_(m) {
    m.acquire();
  }
  ~lock() {
    m_.release();
  }
};

// Usage – the { } block is the critical section:
mutex m;
…
{
  lock mylock(m);
  …
  …
}                // no explicit unlock
Java Language Support for Synchronization
Java supports both low-level and high-level synchronization:
•Low-level:
  – Lock (e.g. ReentrantLock), with methods:
    » lock.lock()
    » lock.unlock()
  – Condition: a condition variable associated with a Lock, with methods:
    » condvar.await()
    » condvar.signal()

•High-level: every object has an implicit lock and condition var


– synchronized keyword, applies to methods or blocks
– Implicit condition variable methods:
» wait()
» notify() and notifyAll()
Java Language Low-level Synchronization
import java.util.LinkedList;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class SynchronizedQueue {
  private Lock lock = new ReentrantLock();
  private Condition cv = lock.newCondition();
  private LinkedList<Integer> q = new LinkedList<Integer>();

  public void enqueue(int item) {
    lock.lock();                 // acquire before the try block
    try {
      q.add(item);
      cv.signal();               // wake one waiting dequeuer
    } finally {
      lock.unlock();             // released on every exit path
    }
  }
Java Language Low-level Synchronization
...
  // Note: not "synchronized" – this class uses the explicit Lock, not the
  // implicit per-object lock. await() can throw InterruptedException.
  public int dequeue() throws InterruptedException {
    int retval = 0;
    lock.lock();
    try {
      while (q.size() == 0) {
        cv.await();              // releases the lock while waiting
      }
      retval = q.removeFirst();
    } finally {
      lock.unlock();
    }
    return retval;
  }
}
Java Language High-level Synchronization
• Every object in Java has an implicit lock associated with it.
• The synchronized keyword wraps this lock around a method
or a block:

public class TheBank {
  public synchronized void withdraw(..) {
    ... // the implicit lock (on "this") is held in here
  }
}

OR

synchronized (that) {   // Specify which object to lock
  ... // the implicit lock on "that" is held in here
}
The JVM takes care of releasing the lock on normal and
abnormal exits from the method or block.
Java Language High-level Synchronization
• In addition to an implicit lock, every object has a single
implicit condition variable associated with it
– How to wait inside a synchronized method or block:
  » void wait();
  » void wait(long timeout);                  // Wait for timeout (msecs)
  » void wait(long timeout, int nanoseconds); // variant
– How to signal in a synchronized method or block:
  » void notify();    // wakes up one waiter (the choice is up to the JVM)
  » void notifyAll(); // like broadcast, wakes everyone
– Condition variables can wait for a bounded length of time. This is
  useful for handling exception cases (see the sketch after this slide):

    t1 = time.now();
    while (!ATMRequest()) {
      wait(CHECKPERIOD);
      t2 = time.now();
      if (t2 – t1 > LONG_TIME) checkMachine();
    }

– Not all Java VMs are equivalent!
  » Different scheduling policies, not necessarily preemptive!
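As a concrete, hypothetical illustration of the bounded-wait pattern above, here is a small compilable version; ATMRequest/checkMachine/CHECKPERIOD/LONG_TIME follow the slide's names, everything else is illustrative.

// Sketch of the bounded-wait pattern: poll a condition with wait(timeout)
// and escalate if it has been unmet for too long.
public class ATMMonitor {
  private static final long CHECKPERIOD_MS = 1000;
  private static final long LONG_TIME_MS = 60_000;
  private boolean requestPending = false;

  public synchronized void awaitRequest() throws InterruptedException {
    long t1 = System.currentTimeMillis();
    while (!requestPending) {
      wait(CHECKPERIOD_MS);                        // bounded wait
      long t2 = System.currentTimeMillis();
      if (t2 - t1 > LONG_TIME_MS) checkMachine();  // something may be wrong
    }
  }

  public synchronized void submitRequest() {
    requestPending = true;
    notify();
  }

  private void checkMachine() { /* run diagnostics, alert an operator, ... */ }
}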
Concurrency Bugs (Lu et al. 2008)
Most concurrency bugs (98%) are one of:
1. Atomicity violations (not protecting shared resources)
2. Order violations
3. Deadlocks
Type 1 problems are caused by under-protecting shared resources; type 3
problems are often caused by over-protection.
Fixes to type 3 bugs often create type 1 bugs.
Good news:
1. Most non-deadlock bugs involve only one variable.
2. Most (97%) deadlocks involve two threads which access at most two
   resources.
Not-so-good news: concurrency bugs seem to be a small
fraction of all reported bugs, but consume a large fraction of
debugging time (days per bug instead of hours).
Java Language High-level Synchronization
import java.util.LinkedList;

public class SynchronizedQueue {
  private LinkedList<Integer> q = new LinkedList<Integer>();

  public synchronized void enqueue(int item) {
    q.add(item);
    notify();                       // wake one waiting dequeuer
  }

  public synchronized int dequeue() {
    try {
      while (q.size() == 0) {
        wait();                     // releases the implicit lock while waiting
      }
      return q.removeFirst();
    } catch (InterruptedException e) {
      return 0;
    }
  }
}
Scala Language: Actors
• Scala is a state-of-the-art language which runs Scala or Java
code on a Java Virtual Machine.
• Scala supports Actors, a higher-level abstraction for
concurrent programming.

• So far: threads execute methods that modify the state of shared objects.

[Figure: several threads executing methods m1()…m9() against shared stateful objects]
Scala Language: Actors
• Actors combine state, methods, and a single thread.

[Figure: the same methods m1()…m9(), now each grouped with its own state and a single thread inside an actor]

• Actor state is not shared; actors interact by sending and receiving messages.
• Each actor has a single, synchronized message queue,
which is part of its implementation.
• Actor code typically comprises a while loop which waits for
inbound messages, and dispatches to a message handler.
Scala Actor Bank Account Example
// uses the (now-deprecated) scala.actors library: import scala.actors.Actor._
val b = actor { // b is an actor representing a bank account
var balance = 0.0
loop {
react { // dispatch on the message type
case ("deposit", amount:Double) => balance += amount
case ("withdraw", amount:Double) => balance -= amount
case ("interest", rate:Double) => balance += balance*rate
case "balance" => println("balance="+balance)
}
}
}
var grow = true

val g = actor { // g is an actor that periodically adds interest
  while (grow) {
    b ! ("interest", 0.05) // send an interest update message to b
    Thread.sleep(3000)
  }
}
Resources
• Resources – passive entities needed by threads to do their
work
– CPU time, disk space, memory
• Two types of resources:
– Preemptable – can take it away
» CPU, Embedded security chip
– Non-preemptable – must leave it with the thread
» Disk space, printer, chunk of virtual address space
» Critical section
• Resources may require exclusive access or may be sharable
– Read-only files are typically sharable
– Printers are not sharable during time of printing
• One of the major tasks of an operating system is to manage
resources
Starvation vs Deadlock
• Starvation vs. Deadlock
– Starvation: thread waits indefinitely
» Example, low-priority thread waiting for resources constantly in
use by high-priority threads
– Deadlock: circular waiting for resources
» Thread A owns Res 1 and is waiting for Res 2
Thread B owns Res 2 and is waiting for Res 1

[Figure: Thread A owns Res 1 and waits for Res 2; Thread B owns Res 2 and waits for Res 1 – a cycle in the wait-for graph]

– Deadlock ⇒ Starvation, but not vice versa
  » Starvation can end (but doesn't have to)
  » Deadlock can't end without external intervention
Conditions for Deadlock
• Deadlock not always deterministic – Example: two mutexes x and y
  (x=1, y=1); a Java sketch of this pattern follows this slide:

    Thread A    Thread B    Deadlock interleaving
    x.P();      y.P();      A: x.P();
    y.P();      x.P();      B: y.P();
    …           …           A: y.P();   (blocks)
    y.V();      x.V();      B: x.P();   (blocks)
    x.V();      y.V();      …

– Deadlock won't always happen with this code
  » Have to have exactly the right timing ("wrong" timing?)
• Deadlocks occur with multiple resources
– Means you can’t decompose the problem
– Can’t solve deadlock for each resource independently
• Example: System with 2 disk drives and two threads
– Each thread needs 2 disk drives to function
– Each thread gets one disk and waits for another one
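Here is a minimal Java sketch of the two-mutex example from this slide, using synchronized blocks in place of x.P()/x.V(); all names are illustrative. Most runs finish, but with the interleaving shown in the third column above both threads block forever.

// Thread A locks x then y, thread B locks y then x. With unlucky timing
// each grabs its first lock and then waits forever for the other's.
public class LockOrderDeadlock {
  static final Object x = new Object();
  static final Object y = new Object();

  public static void main(String[] args) {
    Thread a = new Thread(() -> {
      synchronized (x) {                 // A: x.P()
        pause(10);                       // widen the race window
        synchronized (y) { /* ... */ }   // A: y.P() – may block on B
      }
    });
    Thread b = new Thread(() -> {
      synchronized (y) {                 // B: y.P()
        pause(10);
        synchronized (x) { /* ... */ }   // B: x.P() – may block on A
      }
    });
    a.start(); b.start();
  }

  private static void pause(long ms) {
    try { Thread.sleep(ms); } catch (InterruptedException e) { }
  }
}

Whether this actually deadlocks depends on the scheduler, which is exactly the point of the slide.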
Bridge Crossing Example

• Each segment of road can be viewed as a resource


– Car must own the segment under them
– Must acquire segment that they are moving into
• For bridge: must acquire both halves
– Traffic only in one direction at a time
– Problem occurs when two cars in opposite directions on bridge:
each acquires one segment and needs next
• If a deadlock occurs, it can be resolved if one car backs up
(preempt resources and rollback)
– Several cars may have to be backed up
• Starvation is possible
– East-going traffic really fast  no one goes west
Dining Philosopher Problem

• Five chopsticks/five philosophers (really cheap restaurant)


– Free for all: Philosopher will grab any one they can
– Need two chopsticks to eat
• What if all grab at same time?
– Deadlock!
• How to fix deadlock?
– Make one of them give up a chopstick (Hah!)
– Eventually everyone will get chance to eat
• How to prevent deadlock?
– (Answer later)
Four requirements for Deadlock
• Mutual exclusion
– Only one thread at a time can use a resource
• Hold and wait
– Thread holding at least one resource is waiting to acquire
additional resources held by other threads
• No preemption
– Resources are released only voluntarily by the thread holding
the resource, after thread is finished with it
• Circular wait
– There exists a set {T1, …, Tn} of waiting threads
» T1 is waiting for a resource that is held by T2
» T2 is waiting for a resource that is held by T3
» …
» Tn is waiting for a resource that is held by T1
Resource-Allocation Graph
• System Model
  – A set of threads T1, T2, …, Tn
  – Resource types R1, R2, …, Rm
    (CPU cycles, memory space, I/O devices)
  – Each resource type Ri has Wi instances
  – Each thread utilizes a resource as follows:
    » Request() / Use() / Release()

[Figure: symbols for threads (T1, T2) and resource types (R1, R2)]

• Resource-Allocation Graph:
  – V is partitioned into two types:
    » T = {T1, T2, …, Tn}, the set of threads in the system
    » R = {R1, R2, …, Rm}, the set of resource types in the system
  – request edge – directed edge Ti → Rj
  – assignment edge – directed edge Rj → Ti
Resource Allocation Graph Examples
• Recall:
  – request edge – directed edge Ti → Rj
  – assignment edge – directed edge Rj → Ti

[Figures: three example graphs over threads T1–T4 and resource types R1–R4:
 (1) Simple Resource Allocation Graph,
 (2) Allocation Graph With Deadlock,
 (3) Allocation Graph With Cycle, but No Deadlock]
Methods for Handling Deadlocks
• Allow system to enter deadlock and then recover
  – Requires a deadlock detection algorithm (Java JMX
    findDeadlockedThreads(), try also jvisualvm)
  – Some technique for forcibly preempting resources and/or
    terminating tasks

• Deadlock prevention: ensure that the system will never enter a deadlock
  – Need to monitor all lock acquisitions
  – Selectively deny those that might lead to deadlock

• Ignore the problem and pretend that deadlocks never occur in the system
  – Used by most operating systems, including UNIX
Deadlock Detection Algorithm
• Only one of each type of resource  look for loops
• More General Deadlock Detection Algorithm
– Let [X] represent an m-ary vector of non-negative
integers (quantities of resources of each type):
[FreeResources]: Current free resources each type
[RequestX]: Current requests from thread X
[AllocX]: Current resources held by thread X
– See if tasks can eventually terminate on their own
[Avail] = [FreeResources] R1
Add all nodes to UNFINISHED T2
do {
done = true
Foreach node in UNFINISHED {
if ([Requestnode] <= [Avail]) { T1 T3
remove node from UNFINISHED
[Avail] = [Avail] + [Allocnode]
done = false
} T4
R2
}
} until(done)
– Nodes left in UNFINISHED  deadlocked
Deadlock Detection Algorithm Example
(the original slides animate this, one check per slide; the full trace is shown here)

[RequestT1] = [1,0]; [AllocT1] = [0,1]
[RequestT2] = [0,0]; [AllocT2] = [1,0]
[RequestT3] = [0,1]; [AllocT3] = [1,0]
[RequestT4] = [0,0]; [AllocT4] = [0,1]
[Avail] = [0,0]
UNFINISHED = {T1,T2,T3,T4}

[Figure: T2 and T3 hold units of R1, T1 and T4 hold units of R2; T1 requests R1, T3 requests R2]

Pass 1:
  T1: [RequestT1] = [1,0] <= [Avail] = [0,0]?  False – T1 stays in UNFINISHED
  T2: [RequestT2] = [0,0] <= [Avail] = [0,0]?  True  – remove T2; [Avail] = [1,0]; UNFINISHED = {T1,T3,T4}
  T3: [RequestT3] = [0,1] <= [Avail] = [1,0]?  False – T3 stays in UNFINISHED
  T4: [RequestT4] = [0,0] <= [Avail] = [1,0]?  True  – remove T4; [Avail] = [1,1]; UNFINISHED = {T1,T3}

Pass 2:
  T1: [RequestT1] = [1,0] <= [Avail] = [1,1]?  True  – remove T1; [Avail] = [1,2]; UNFINISHED = {T3}
  T3: [RequestT3] = [0,1] <= [Avail] = [1,2]?  True  – remove T3; [Avail] = [2,2]; UNFINISHED = {}

UNFINISHED is empty, so no thread is deadlocked.
DONE!
Techniques for Preventing Deadlock
• Infinite resources
– Include enough resources so that no one ever runs out of
resources. Doesn’t have to be infinite, just large
– Give illusion of infinite resources (e.g. virtual memory)
– Examples:
» Bay bridge with 12,000 lanes. Never wait!
» Infinite disk space (not realistic yet?)

• No Sharing of resources (totally independent threads)


– Not very realistic

• Don’t allow waiting


– How the phone company avoids deadlock
» Call to your Mom in Toledo, works its way through the phone lines,
but if blocked get busy signal
– Technique used in Ethernet/some multiprocessor nets
» Everyone speaks at once. On collision, back off and retry
Techniques for Preventing Deadlock
(cont’d)
• Make all threads request everything they’ll need at the
beginning
– Problem: Predicting future is hard, tend to over-estimate
resources
– Example:
» Don’t leave home until we know no one is using any
intersection between here and where you want to go!

• Force all threads to request resources in a particular order, preventing
  any cyclic use of resources
  – Thus, preventing deadlock (see the lock-ordering sketch after this slide)
  – Example (x.P, y.P, z.P, …)
    » Make tasks request disk, then memory, then …
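One common way to impose such an order in practice is to rank locks by a fixed key and always acquire the lower-ranked lock first. A minimal Java sketch follows (illustrative names; it assumes account ids are distinct).

// Lock ordering: every transfer acquires the two account locks in a fixed
// global order (by account id), so no cycle of waiting threads can form.
public class Account {
  private final long id;          // unique id defines the global lock order
  private double balance;

  public Account(long id, double balance) { this.id = id; this.balance = balance; }

  public static void transfer(Account from, Account to, double amount) {
    Account first  = from.id < to.id ? from : to;   // lower id is locked first
    Account second = from.id < to.id ? to   : from;
    synchronized (first) {
      synchronized (second) {
        from.balance -= amount;
        to.balance   += amount;
      }
    }
  }
}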
Banker’s Algorithm for Preventing
Deadlock
• Toward right idea:
– State maximum resource needs in advance
– Allow particular thread to proceed if:
(available resources - #requested)  max
remaining that might be needed by any thread

• Banker’s algorithm (less conservative):


– Allocate resources dynamically
» Evaluate each request and grant if some
ordering of threads is still deadlock free afterward
» Keeps system in a “SAFE” state, i.e. there exists a sequence {T1,
T2, … Tn} with T1 requesting all remaining resources, finishing, then
T2 requesting all remaining resources, etc..
– Algorithm allows the sum of maximum resource needs of all
current threads to be greater than total resources
Banker’s Algorithm
• Technique: pretend each request is granted, then run the deadlock
  detection algorithm, substituting
  ([Requestnode] ≤ [Avail]) → ([Maxnode] – [Allocnode] ≤ [Avail])

  [FreeResources]: Current free resources of each type
  [AllocX]:        Current resources held by thread X
  [MaxX]:          Max resources requested by thread X

  [Avail] = [FreeResources]
  Add all nodes to UNFINISHED
  do {
    done = true
    Foreach node in UNFINISHED {
      if ([Maxnode] – [Allocnode] <= [Avail]) {
        remove node from UNFINISHED
        [Avail] = [Avail] + [Allocnode]
        done = false
      }
    }
  } until(done)
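A minimal sketch of the resulting safety check follows (not from the original slides; vectors are int arrays and names are illustrative). A request is granted only if pretending to grant it still leaves the system in a SAFE state.

// Banker's safety check: the state is SAFE if, using [Max] - [Alloc] in
// place of [Request], every thread can eventually run to completion.
public class Banker {
  static boolean lessOrEqual(int[] a, int[] b) {
    for (int i = 0; i < a.length; i++) if (a[i] > b[i]) return false;
    return true;
  }

  static boolean isSafe(int[] free, int[][] max, int[][] alloc) {
    int n = max.length, m = free.length;
    int[] avail = free.clone();
    boolean[] finished = new boolean[n];
    boolean done;
    do {
      done = true;
      for (int t = 0; t < n; t++) {
        if (finished[t]) continue;
        int[] need = new int[m];                       // [Max_t] - [Alloc_t]
        for (int i = 0; i < m; i++) need[i] = max[t][i] - alloc[t][i];
        if (lessOrEqual(need, avail)) {
          finished[t] = true;                          // t could run to completion
          for (int i = 0; i < m; i++) avail[i] += alloc[t][i];
          done = false;
        }
      }
    } while (!done);
    for (boolean f : finished) if (!f) return false;   // someone can never finish
    return true;                                       // a safe ordering exists
  }
}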
Banker’s Algorithm Example

• Banker’s algorithm with dining philosophers


– “Safe” (won’t cause deadlock) if when try to grab chopstick
either:
» Not last chopstick
» Is last chopstick but someone will have
two afterwards
– What if k-handed philosophers? Don’t allow if:
» It’s the last one, no one would have k
» It’s 2nd to last, and no one would have k-1
» It’s 3rd to last, and no one would have k-2
» …
Summary: Deadlock
• Starvation vs. Deadlock
– Starvation: thread waits indefinitely
– Deadlock: circular waiting for resources

• Four conditions for deadlocks


– Mutual exclusion
» Only one thread at a time can use a resource
– Hold and wait
» Thread holding at least one resource is waiting to acquire
additional resources held by other threads
– No preemption
» Resources are released only voluntarily by the threads
– Circular wait
» ∃ a set {T1, …, Tn} of threads with a cyclic waiting pattern

• Deadlock preemption
• Deadlock prevention (Banker’s algorithm)
