Module2 OS
Process Synchronization
To introduce the critical-section problem, whose solutions can be used to
ensure the consistency of shared data
To present both software and hardware solutions of the critical-section
problem
To introduce the concept of an atomic transaction and describe
mechanisms to ensure atomicity
Concurrent access to shared data may result in data inconsistency
Maintaining data consistency requires mechanisms to ensure the orderly
execution of cooperating processes
Suppose that we wanted to provide a solution to the consumer-producer problem
that fills all the buffers. We can do so by having an integer count that keeps track
of the number of full buffers. Initially, count is set to 0. It is incremented by the
producer after it produces a new buffer and is decremented by the consumer
after it consumes a buffer
Producer
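A minimal sketch of the producer and consumer loops built on the shared count, written in the style of the rest of these notes (BUFFER_SIZE, buffer, in, out, nextProduced, and nextConsumed are the usual shared declarations assumed here):

#define BUFFER_SIZE 10

int buffer[BUFFER_SIZE];
int in = 0, out = 0;
int count = 0;                    /* number of full buffers, initially 0 */
int nextProduced, nextConsumed;

/* Producer */
while (true) {
    /* produce an item in nextProduced */
    while (count == BUFFER_SIZE)
        ;                         /* buffer full: do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}

/* Consumer */
while (true) {
    while (count == 0)
        ;                         /* buffer empty: do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}

Because count++ and count-- are not atomic, interleaving the two loops can leave count with a wrong value; this race condition is what the critical-section mechanisms below address.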
Swap Instruction
Definition:
void Swap (boolean *a, boolean *b)
{
boolean temp = *a;
*a = *b;
*b = temp;
}
Solution using Swap
Shared Boolean variable lock initialized to FALSE; Each process has a local Boolean
variable key
Solution:
do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
Semaphore Implementation
Must guarantee that no two processes can execute wait () and signal () on the
same semaphore at the same time
Thus, implementation becomes the critical-section problem, where the wait and
signal code are placed in the critical section.
Could now have busy waiting in critical section implementation
But implementation code is short
Little busy waiting if critical section rarely occupied
Note that applications may spend lots of time in critical sections and
therefore this is not a good solution.
Implementation of wait:

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

Implementation of signal:

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
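The code above assumes a semaphore record holding the integer value and the list of waiting processes; a minimal sketch of that definition (the process-list type here is illustrative):

typedef struct {
    int value;                 /* when negative, |value| = number of waiting processes */
    struct process *list;      /* queue of processes blocked on this semaphore */
} semaphore;

block() suspends the process that invokes it, and wakeup(P) resumes a blocked process P; both are provided by the operating system as basic system calls.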
Deadlock and Starvation
Deadlock – two or more processes are waiting indefinitely for an event that can
be caused by only one of the waiting processes
Let S and Q be two semaphores initialized to 1
        P0                         P1
    wait (S);                  wait (Q);
    wait (Q);                  wait (S);
      .                          .
      .                          .
      .                          .
    signal (S);                signal (Q);
    signal (Q);                signal (S);

Starvation – indefinite blocking: a process may never be removed from the
semaphore queue in which it is suspended
Bounded-Buffer Problem
N buffers, each can hold one item
Semaphore mutex initialized to the value 1
Semaphore full initialized to the value 0
Semaphore empty initialized to the value N.
The structure of the producer process

do {
    // produce an item in nextp
    wait (empty);
    wait (mutex);
    // add the item to the buffer
    signal (mutex);
    signal (full);
} while (TRUE);
The structure of the consumer process

do {
    wait (full);
    wait (mutex);
    // remove an item from buffer to nextc
    signal (mutex);
    signal (empty);
    // consume the item in nextc
} while (TRUE);
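As context, a compilable sketch of the same scheme using POSIX semaphores and a Pthreads mutex; the buffer, item type, and size N here are illustrative, not from the notes:

#include <pthread.h>
#include <semaphore.h>

#define N 10                                    /* illustrative buffer size */

static int buffer[N];
static int in = 0, out = 0;

static sem_t empty_slots;                       /* counts empty slots, initialized to N */
static sem_t full_slots;                        /* counts full slots, initialized to 0  */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int item = 0; ; item++) {
        sem_wait(&empty_slots);                 /* wait (empty) */
        pthread_mutex_lock(&mutex);             /* wait (mutex) */
        buffer[in] = item;                      /* add the item to the buffer */
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);           /* signal (mutex) */
        sem_post(&full_slots);                  /* signal (full) */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (;;) {
        sem_wait(&full_slots);                  /* wait (full) */
        pthread_mutex_lock(&mutex);             /* wait (mutex) */
        int item = buffer[out];                 /* remove an item from the buffer */
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);           /* signal (mutex) */
        sem_post(&empty_slots);                 /* signal (empty) */
        (void)item;                             /* consume the item */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);               /* empty = N */
    sem_init(&full_slots, 0, 0);                /* full = 0  */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}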
Dining-Philosophers Problem
Shared data
    Bowl of rice (data set)
    Semaphore chopstick [5] initialized to 1
The structure of Philosopher i:

do {
    wait ( chopstick[i] );
    wait ( chopstick[ (i + 1) % 5] );
    // eat
    signal ( chopstick[i] );
    signal ( chopstick[ (i + 1) % 5] );
    // think
} while (TRUE);
Problems with Semaphores
Incorrect use of semaphore operations, e.g. reversing the order:

    signal (mutex)
    ….
    wait (mutex)
Monitors
A high-level abstraction that provides a convenient and effective mechanism for process
synchronization
Only one process may be active within the monitor at a time

monitor monitor-name
{
    // shared variable declarations

    procedure P1 (…) { …. }
    …
    procedure Pn (…) { …… }

    Initialization code ( …. ) { … }
}
Schematic view of a Monitor
Condition Variables
condition x, y;
Two operations on a condition variable:
x.wait () – a process that invokes the operation is suspended until x.signal ()
x.signal () – resumes one of the processes (if any) that invoked x.wait ()
Monitor with Condition Variables
monitor DP
{
    enum { THINKING, HUNGRY, EATING } state [5];
    condition self [5];

    void pickup (int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING) self[i].wait();
    }
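The remaining operations of this monitor, sketched in the same style (the standard textbook completion: putdown, test, and the initialization code):

    void putdown (int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test (int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}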
Monitor Implementation Using Semaphores
Variables

    semaphore mutex;      // (initially = 1)
    semaphore next;       // (initially = 0)
    int next_count = 0;

Each procedure F will be replaced by

    wait(mutex);
        …
        body of F;
        …
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);

Mutual exclusion within a monitor is ensured.
Monitor Implementation
For each condition variable x, we have:

    semaphore x_sem;      // (initially = 0)
    int x_count = 0;

The operation x.wait can be implemented as:

    x_count++;
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);
    wait(x_sem);
    x_count--;
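The matching x.signal operation can be implemented in the same scheme:

    if (x_count > 0) {
        next_count++;
        signal(x_sem);
        wait(next);
        next_count--;
    }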
Pthreads
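The notes list Pthreads only as a heading; as context, a minimal, hypothetical example of its two core synchronization primitives, a mutex lock and a condition variable (the variable names here are illustrative):

#include <pthread.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int data_available = 0;

void produce(void) {
    pthread_mutex_lock(&lock);
    data_available = 1;
    pthread_cond_signal(&ready);            /* wake one waiting thread */
    pthread_mutex_unlock(&lock);
}

void consume(void) {
    pthread_mutex_lock(&lock);
    while (!data_available)                 /* re-check the condition after wakeup */
        pthread_cond_wait(&ready, &lock);   /* atomically releases lock while waiting */
    data_available = 0;
    pthread_mutex_unlock(&lock);
}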
Solaris Synchronization
Implements a variety of locks to support multitasking, multithreading
(including real-time threads), and multiprocessing
Uses adaptive mutexes for efficiency when protecting data from short code
segments
Uses condition variables and readers-writers locks when longer sections of
code need access to data
Uses turnstiles to order the list of threads waiting to acquire either an adaptive
mutex or reader-writer lock
Windows XP Synchronization
Uses interrupt masks to protect access to global resources on uniprocessor
systems
Uses spinlocks on multiprocessor systems
Also provides dispatcher objects which may act as either mutexes or
semaphores
Dispatcher objects may also provide events
Atomic Transactions
System Model
Log-based Recovery
Checkpoints
Concurrent Atomic Transactions
System Model
Assures that operations happen as a single logical unit of work, in its entirety,
or not at all
Related to field of database systems
Challenge is assuring atomicity despite computer system failures
Transaction - collection of instructions or operations that performs single
logical function
Here we are concerned with changes to stable storage – disk
Transaction is series of read and write operations
Terminated by commit (transaction successful) or abort (transaction failed) operation
An aborted transaction must be rolled back to undo any changes it performed
Types of Storage Media
Volatile storage – information stored here does not survive system crashes
Example: main memory, cache
Nonvolatile storage – Information usually survives crashes
Example: disk and tape
Stable storage – Information never lost
Not actually possible, so approximated via replication or RAID to devices
with independent failure modes
Goal is to assure transaction atomicity where failures cause loss of
information on volatile storage
Log-Based Recovery
Record to stable storage information about all modifications by a transaction
Most common is write-ahead logging
Log on stable storage, each log record describes single transaction write
operation, including
Transaction name
Data item name
Old value
New value
<Ti starts> written to log when transaction Ti starts
<Ti commits> written when Ti commits
Log entry must reach stable storage before operation on data occurs
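A hypothetical sketch of such a log record, assuming simple integer data items (the field and type names are illustrative, not from the text):

/* One write-ahead log record: written to stable storage before the
   data item itself is modified. */
struct log_record {
    char transaction_name[16];     /* e.g. "Ti" */
    char data_item_name[16];       /* name of the item being written */
    int  old_value;                /* value before the write (used to undo) */
    int  new_value;                /* value after the write (used to redo) */
};

Recovery then uses the log: undo(Ti) restores old values for any transaction with <Ti starts> but no <Ti commits> in the log, and redo(Ti) re-applies new values for committed transactions.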
Concurrent Transactions
Must be equivalent to serial execution – serializability
Could perform all transactions in critical section
Inefficient, too restrictive
Concurrency-control algorithms provide serializability
Serializability
Locking Protocol
System Model
Resource types R1, R2, . . ., Rm
    CPU cycles, memory space, I/O devices
Each resource type Ri has Wi instances.
Each process utilizes a resource as follows:
request
use
release
Deadlock Characterization
Deadlock can arise if four conditions hold simultaneously
Mutual exclusion: only one process at a time can use a resource
Hold and wait: a process holding at least one resource is waiting to acquire additional
resources held by other processes
No preemption: a resource can be released only voluntarily by the process
holding it, after that process has completed its task
Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is
waiting for a resource that is held by P1, P1 is waiting for a resource that is held by
P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a
resource that is held by P0.
Resource-Allocation Graph
A set of vertices V and a set of edges E
V is partitioned into two types:
P = {P1, P2, …, Pn}, the set consisting of all the processes in the system
R = {R1, R2, …, Rm}, the set consisting of all resource types in the system
request edge – directed edge Pi → Rj
assignment edge – directed edge Rj → Pi
(Figure: node and edge notation – a process Pi and a resource type Rj; an assignment edge Rj → Pi shows that Pi is holding an instance of Rj.)
Example of a Resource Allocation Graph
Deadlock Prevention
Restrain the ways request can be made
Mutual Exclusion – not required for sharable resources; must hold for nonsharable
resources
Hold and Wait – must guarantee that whenever a process requests a resource, it does
not hold any other resources
Require process to request and be allocated all its resources before it begins
execution, or allow process to request resources only when the process has none
Low resource utilization; starvation possible
No Preemption –
If a process that is holding some resources requests another resource that cannot be
immediately allocated to it, then all resources currently being held are released
Preempted resources are added to the list of resources for which the process is waiting
Process will be restarted only when it can regain its old resources, as well as the new ones
that it is requesting
Circular Wait – impose a total ordering of all resource types, and require that each
process requests resources in an increasing order of enumeration
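A small illustration of this ordering rule using two Pthreads mutexes (hypothetical names; the point is only that every thread requests the lower-numbered resource first):

#include <pthread.h>

static pthread_mutex_t first  = PTHREAD_MUTEX_INITIALIZER;   /* order number 1 */
static pthread_mutex_t second = PTHREAD_MUTEX_INITIALIZER;   /* order number 2 */

void *worker(void *arg) {
    /* Every thread acquires in increasing order: first, then second.
       No circular wait can form between threads that follow this rule. */
    pthread_mutex_lock(&first);
    pthread_mutex_lock(&second);
    /* ... use both resources ... */
    pthread_mutex_unlock(&second);
    pthread_mutex_unlock(&first);
    return NULL;
}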
Deadlock Avoidance
Requires that the system has some additional a priori information
available
Simplest and most useful model requires that each process declare the
maximum number of resources of each type that it may need
The deadlock-avoidance algorithm dynamically examines the resource-allocation state to
ensure that there can never be a circular-wait condition
Resource-allocation state is defined by the number of available and allocated resources, and
the maximum demands of the processes
Safe State
When a process requests an available resource, system must decide if immediate
allocation leaves the system in a safe state
System is in safe state if there exists a sequence <P1, P2, …, Pn> of ALL the processes in
the system such that for each Pi, the resources that Pi can still request can be satisfied
by currently available resources + resources held by all the Pj, with j < i.
That is:
If Pi resource needs are not immediately available, then Pi can wait until all Pj
have finished
When Pj is finished, Pi can obtain needed resources, execute, return allocated
resources, and terminate
When Pi terminates, Pi +1 can obtain its needed resources, and so on
Basic Facts
If a system is in safe state ⇒ no deadlocks
If a system is in unsafe state ⇒ possibility of deadlock
Avoidance ⇒ ensure that a system will never enter an unsafe state
Safe, Unsafe , Deadlock State
Avoidance algorithms
Single instance of a resource type
    Use a resource-allocation graph
Multiple instances of a resource type
    Use the banker’s algorithm
Resource-Allocation Graph Scheme
Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by a dashed line
Claim edge converts to request edge when a process requests a resource
Request edge converted to an assignment edge when the resource is allocated to the process
When a resource is released by a process, assignment edge reconverts to a claim edge
Resources must be claimed a priori in the system
Resource-Allocation Graph
Unsafe State In Resource-Allocation Graph
Resource-Allocation Graph Algorithm
Suppose that process Pi requests a resource Rj
The request can be granted only if converting the request edge to an assignment edge
does not result in the formation of a cycle in the resource-allocation graph
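A sketch of the cycle check this rule relies on, using depth-first search over an adjacency-matrix view of the graph (a hypothetical representation; process and resource nodes are simply numbered 0..n-1 here):

#include <stdbool.h>

#define MAXN 16

/* edge[u][v] is true if there is a directed edge u -> v (request or assignment edge). */
static bool edge[MAXN][MAXN];

static bool dfs(int u, int n, bool visited[], bool on_stack[]) {
    visited[u] = true;
    on_stack[u] = true;
    for (int v = 0; v < n; v++) {
        if (!edge[u][v])
            continue;
        if (on_stack[v])
            return true;                      /* back edge: cycle found */
        if (!visited[v] && dfs(v, n, visited, on_stack))
            return true;
    }
    on_stack[u] = false;
    return false;
}

/* Returns true if the graph with n nodes contains a cycle. */
bool has_cycle(int n) {
    bool visited[MAXN] = { false }, on_stack[MAXN] = { false };
    for (int u = 0; u < n; u++)
        if (!visited[u] && dfs(u, n, visited, on_stack))
            return true;
    return false;
}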
Banker’s Algorithm
Multiple instances
Each process must a priori claim maximum use
When a process requests a resource it may have to wait
When a process gets all its resources it must return them in a finite amount of time
Resource-Request Algorithm for Process Pi
Requesti = request vector for process Pi. If Requesti [j] = k, then process Pi
wants k instances of resource type Rj
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise error condition, since the
   process has exceeded its maximum claim
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since
   resources are not available
3. Pretend to allocate requested resources to Pi by modifying the state as
   follows:
       Available = Available – Requesti;
       Allocationi = Allocationi + Requesti;
       Needi = Needi – Requesti;
   If safe ⇒ the resources are allocated to Pi
   If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored
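A sketch of the safety check invoked after step 3, assuming the usual Banker's data structures Available, Allocation, and Need for n processes and m resource types (NPROC, NRES, and the array layout here are illustrative):

#include <stdbool.h>
#include <string.h>

#define NPROC 5
#define NRES  3

/* Returns true if the state described by available[], allocation[][],
   and need[][] is safe, i.e. a safe sequence of all processes exists. */
bool is_safe(int available[NRES],
             int allocation[NPROC][NRES],
             int need[NPROC][NRES])
{
    int  work[NRES];
    bool finish[NPROC] = { false };
    memcpy(work, available, sizeof(work));

    for (int done = 0; done < NPROC; ) {
        bool progressed = false;
        for (int i = 0; i < NPROC; i++) {
            if (finish[i])
                continue;
            bool can_run = true;
            for (int j = 0; j < NRES; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                for (int j = 0; j < NRES; j++)
                    work[j] += allocation[i][j];   /* Pi finishes and releases its resources */
                finish[i] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed)
            return false;     /* no process can proceed: state is unsafe */
    }
    return true;              /* all processes can finish: state is safe */
}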
Example of Banker’s Algorithm

            Allocation    Need
             A B C        A B C
    P1       3 0 2        0 2 0
    P2       3 0 1        6 0 0
    P3       2 1 1        0 1 1
    P4       0 0 2        4 3 1
    Available: 2 3 0

Executing the safety algorithm shows that the sequence <P1, P3, P4, P0, P2> satisfies the safety requirement
Can request for (3,3,0) by P4 be granted?
Can request for (0,2,0) by P0 be granted?
Deadlock Detection
Allow system to enter deadlock state
Detection algorithm
Recovery scheme
Detection Algorithm