Unit 3 Inter Process Communication
Inter-process
Communication
• Let's return to our consideration of the bounded buffer.
• As we pointed out, our original solution allowed at most BUFFER_SIZE - 1 items in
the buffer at the same time. Suppose we want to modify the algorithm to remedy
this deficiency. One possibility is to add an integer variable counter, initialized to
0.
• counter is incremented every time we add a new item to the buffer and is
decremented every time we remove one item from the buffer.
• The code for the producer process can be modified as follows:
PRODUCER

    while (true) {
        /* produce an item and put in nextProduced */
        while (counter == BUFFER_SIZE)
            ; /* do nothing ............ WAIT */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

CONSUMER

    while (true) {
        while (counter == 0)
            ; /* do nothing ............ WAIT */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
        /* consume the item in nextConsumed */
    }

Assumptions (bounded buffer):
• counter : shared variable giving the number of items in the buffer, initially 0
• counter++ : increase (producer adds an item)
• counter-- : decrease (consumer removes an item)
• The producer-consumer problem is an example of a multi-process synchronization
problem.
• The problem describes two processes, the producer and the consumer, which share a common fixed-size buffer used as a queue.
• The producer’s job is to generate data, put it into the buffer, and start again.
• At the same time, the consumer is consuming the data (i.e., removing it from
the buffer), one piece at a time.
• Problem: Given the common fixed-size buffer, the task is to make sure that the
producer can’t add data into the buffer when it is full and the consumer can’t
remove data from an empty buffer.
• Solution: The producer is to either go to sleep or discard data if the buffer is full.
The next time the consumer removes an item from the buffer, it notifies the
producer, who starts to fill the buffer again. In the same manner, the consumer can
go to sleep if it finds the buffer to be empty. The next time the producer puts data
into the buffer, it wakes up the sleeping consumer.
Synchronization
• Although both the producer and consumer routines shown above are correct separately, they may not function
correctly when executed concurrently.
• As an illustration, suppose that the value of the variable counter is currently 5 and that the producer and
consumer processes execute the statements "counter++" and "counter--" concurrently.
• Following the execution of these two statements, the value of the variable counter may be 4, 5, or 6!
• The only correct result, though, is counter == 5, which is generated correctly if the producer and consumer
execute separately.
• We can show that the value of counter may be incorrect as follows.
• The statement "counter++" may be implemented in machine language (on a typical machine) as

    register1 = counter
    register1 = register1 + 1
    counter = register1

• where register1 is one of the local CPU registers.
• Similarly, the statement "counter--" is implemented as follows:

    register2 = counter
    register2 = register2 - 1
    counter = register2

• where again register2 is one of the local CPU registers. Even though register1 and register2 may be the same physical register (an accumulator, say), remember that the contents of this register will be saved and restored by the interrupt handler.
Synchronization
• The concurrent execution of "counter++" and "counter--" is equivalent to a sequential execution in which the lower-level statements presented previously are interleaved in some arbitrary order (but the order within each high-level statement is preserved). One such interleaving is

    T0: producer executes register1 = counter       {register1 = 5}
    T1: producer executes register1 = register1 + 1 {register1 = 6}
    T2: consumer executes register2 = counter       {register2 = 5}
    T3: consumer executes register2 = register2 - 1 {register2 = 4}
    T4: producer executes counter = register1       {counter = 6}
    T5: consumer executes counter = register2       {counter = 4}

• Notice that we have arrived at the incorrect state counter == 4. If we reversed the order of the statements at T4 and T5, we would arrive at counter == 6.
• Hence the need of co-operation/synchronization.
CRITICAL SECTION
• Process Synchronization controls the execution of processes running concurrently
so as to produce the consistent results.
• Critical section is a part of the program where shared resources are accessed by
the process.
• Critical Section Problem-
• If multiple processes access the critical section concurrently, the results produced may be inconsistent.
• This is called the critical-section problem.
• Synchronization mechanisms allow processes to access the critical section in a synchronized manner so as to avoid inconsistent results.
11
CRITICAL SECTION
• Consider a system consisting of n processes {P0, P1, ..., Pn-1}.
• Each process has a segment of code, called a critical section, in which the process
may be changing common variables, updating a table, writing a file, and so on.
• The important feature of the system is that, when one process is executing in its
critical section, no other process is to be allowed to execute in its critical section.
That is, no two processes are executing in their critical sections at the same time.
• The critical-section problem is to design a protocol that the processes
can use to cooperate.
• Each process must request permission to enter its critical section. The section of
code implementing this request is the entry section.
It acts as a gateway for a process to enter the critical section.
It ensures that only one process is inside the critical section at any time.
It does not allow any other process to enter the critical section while one process is already inside it.
CRITICAL SECTION ENVIRONMENT
CRITICAL SECTION
• The critical section may be followed by an exit section.
It acts as an exit gate for a process to leave the critical section.
When a process exits the critical section, changes are made so that other processes can then enter the critical section.
• The remaining code is the remainder section. The general structure of a typical process Pi is shown in
Figure 6.1. The entry section and exit section are enclosed in boxes to highlight these important segments
of code.
• A solution to the critical-section problem must satisfy the following three requirements:
• Mutual Exclusion - If process Pi is executing in its critical section, then no other process can be executing in its critical section (achieved, for example, by disabling interrupts, lock variables, or software algorithms).
• Progress - If no process is executing in its critical section and there exist some processes that wish to
enter their critical sections, then only those processes that are not executing in their remainder sections
can participate in deciding which will enter its critical section, and this selection cannot be postponed
indefinitely.
• Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter
their critical sections after a process has made a request to enter its critical section and before that
request is granted.
Mutual Exclusion using Critical Regions
[Figure: mutual exclusion using critical regions. Process A enters its critical region at T1 and leaves at T3. Process B attempts to enter at T2 while A is inside and is blocked; B enters the critical region at T3, after A leaves, and leaves at T4.]
Critical Section Solutions
• Race conditions occur in the critical section (CS), so the CS must be protected.
• s/w solution
• Peterson’s solution (two process solution)
• Semaphore (n process solution)
• h/w solution
• TestAndSet
• Swap
Peterson Solution
• Peterson's solution is a classic software-based solution to the critical-section problem.
• Because of the way modern computer architectures perform basic machine-language instructions, such as
load and store, there are no guarantees that Peterson's solution will work correctly on such architectures.
However we present the solution because it provides a good algorithmic description of solving the critical-
section problem and illustrates some of the complexities involved in designing software that addresses the
requirements of mutual exclusion, progress, and bounded waiting.
• Peterson's solution is restricted to two processes that alternate execution between their critical sections and remainder sections. The processes are numbered P0 and P1. For convenience, when presenting Pi, we use Pj to denote the other process; that is, j equals 1 - i.
• Peterson's solution requires the two processes to share two data items: int turn and boolean flag[2]. The variable turn indicates whose turn it is to enter the critical section; flag[i] = true indicates that process Pi is ready to enter.
• The eventual value of turn determines which of the two processes is allowed to enter its critical section
first.
• We now prove that this solution is correct. We need to show that:
1. Mutual exclusion is preserved.
2. The progress requirement is satisfied.
3. The bounded-waiting requirement is met.
TWO-PROCESS SOLUTION
• Meets all three requirements; solves the critical-section problem for two processes.
• Mutual exclusion.
• Progress:
  • No strict alternation: Pi can enter the CS as long as Pj is not interested.
  • The turn variable breaks any deadlock possibility.
• Bounded waiting:
  • Initially P0 is in the CS.
  • If P1 then tries, P1 has to wait.
  • If P0 tries again after exiting, it also has to wait.
  • If P1 then tries, it can enter.
Synchronization Hardware
• Software-based solutions such as Peterson's are not guaranteed to work on modern computer
architectures.
• Instead, we can generally state that any solution to the critical-section problem requires a simple tool: a lock.
• Race conditions are prevented by requiring that critical regions be protected by locks. That is, a process
must acquire a lock before entering a critical section; it releases the lock when it exits the critical
section.
• This is illustrated in Figure 6.3.
Synchronization Hardware
• Hardware features can make any programming task easier and improve system efficiency.
• The critical-section problem could be solved simply in a uniprocessor environment if we could prevent
interrupts from occurring while a shared variable was being modified.
• In this manner, we could be sure that the current sequence of instructions would be allowed to execute in
order without preemption.
• No other instructions would be run, so no unexpected modifications could be made to the shared
variable. This is often the approach taken by nonpreemptive kernels.
• Unfortunately, this solution is not as feasible in a multiprocessor environment.
• Disabling interrupts on a multiprocessor can be time consuming, as the message is passed to all the
processors. This message passing delays entry into each critical section, and system efficiency decreases.
Also consider the effect on a system's clock if the clock is kept updated by interrupts.
• Many modern computer systems therefore provide special hardware instructions that allow us either to test and modify the content of a word or to swap the contents of two words atomically, that is, as one uninterruptible unit.
• We can use these special instructions to solve the critical-section problem in a relatively simple manner.
• Rather than discussing one specific instruction for one specific machine, we abstract the main concepts
behind these types of instructions by describing the TestAndSet () and Swap() instructions.
Synchronization Hardware
TestAndSet() instruction
• The TestAndSet() instruction can be defined as shown in Figure 6.4.

    boolean TestAndSet(boolean *target) {
        boolean rv = *target;
        *target = TRUE;
        return rv;
    }

Figure 6.4 The definition of the TestAndSet() instruction.

• This instruction is executed atomically. Thus, if two TestAndSet() instructions are executed simultaneously (each on a different CPU), they will be executed sequentially in some arbitrary order.
• If the machine supports the TestAndSet() instruction, then we can implement mutual exclusion by declaring a Boolean variable lock, initialized to false.
• The structure of process Pi is shown in Figure 6.5.

    do {
        while (TestAndSet(&lock))
            ; // do nothing
        // critical section
        lock = FALSE;
        // remainder section
    } while (TRUE);

Figure 6.5 Mutual-exclusion implementation with TestAndSet().

• Process P0 arrives. It executes the TestAndSet(&lock) instruction.
• Since the lock value is 0 (false), TestAndSet() returns 0 to the while loop and sets the lock value to 1 (true).
• The returned value 0 breaks the while-loop condition.
• Process P0 enters the critical section and executes.
• Now, even if process P0 gets preempted in the middle, no other process can enter the critical section.
• Any other process can enter only after process P0 completes and sets the lock value back to 0.
Synchronization Hardware
TestAndSet() instruction (continued)
• Here, the shared variable is lock, which is initialized to false.
• TestAndSet(&lock) works in this way: it always returns the old value of lock and sets lock to true.
• The first process will enter the critical section at once, as TestAndSet(&lock) returns false and the process breaks out of the while loop.
• The other processes cannot enter now, as lock is set to true, so their while loops continue to spin. Mutual exclusion is ensured.
• Once the first process gets out of the critical section, lock is changed to false. So now the other processes can enter one by one.
• Progress is also ensured.
• However, after the first process, any process can go in. There is no queue maintained, so any new process that finds the lock to be false can enter.
• So bounded waiting is not ensured.
Synchronization Hardware
Swap() instruction
• The Swap() instruction, in contrast to the TestAndSet() instruction, operates on the contents of two words; it is defined as shown in Figure 6.6.

    void Swap(boolean *a, boolean *b) {
        boolean temp = *a;
        *a = *b;
        *b = temp;
    }

Figure 6.6 The definition of the Swap() instruction.

• Like the TestAndSet() instruction, it is executed atomically.
• If the machine supports the Swap() instruction, then mutual exclusion can be provided as follows.
• A global Boolean variable lock is declared and is initialized to false.
• In addition, each process has a local Boolean variable key. The structure of process Pi is shown in Figure 6.7.

    do {
        key = TRUE;
        while (key == TRUE)
            Swap(&lock, &key);
        // critical section
        lock = FALSE;
        // remainder section
    } while (TRUE);

Figure 6.7 Mutual-exclusion implementation with the Swap() instruction.

• Although these algorithms satisfy the mutual-exclusion requirement, they do not satisfy the bounded-waiting requirement.
Synchronization Hardware
Bounded-waiting mutual exclusion with TestAndSet()
• In Figure 6.8, we present another algorithm using the TestAndSet() instruction that satisfies all the critical-section requirements. The common data structures are

    boolean waiting[n];
    boolean lock;

• These data structures are initialized to false.

    do {
        waiting[i] = TRUE;
        key = TRUE;
        while (waiting[i] && key)
            key = TestAndSet(&lock);
        waiting[i] = FALSE;
        // critical section
        j = (i + 1) % n;
        while ((j != i) && !waiting[j])
            j = (j + 1) % n;
        if (j == i)
            lock = FALSE;
        else
            waiting[j] = FALSE;
        // remainder section
    } while (TRUE);

Figure 6.8 Bounded-waiting mutual exclusion with TestAndSet().

• To prove that the mutual-exclusion requirement is met, we note that process Pi can enter its critical section only if either waiting[i] == false or key == false.
• The value of key can become false only if TestAndSet() is executed.
• The first process to execute TestAndSet() will find key == false; all others must wait.
• The variable waiting[i] can become false only if another process leaves its critical section; only one waiting[i] is set to false, maintaining the mutual-exclusion requirement.
Worked example (n = number of processes; initially lock = FALSE and waiting[i] = FALSE and key = FALSE for every process P0 ... Pn-1):
• Process P2 wants to enter the CS. According to the algorithm, waiting[2] = TRUE and key = TRUE.
• Since lock is FALSE, TestAndSet(&lock) returns FALSE, key becomes FALSE, and P2 proceeds into the critical section.
• On exit (with n = 4), P2 scans for a waiting process in the cyclic order j = 3, 0, 1, 2; if no process is waiting (j comes back to i = 2), it sets lock = FALSE; otherwise it hands the CS to the first waiting Pj by setting waiting[j] = FALSE.
Bounded-waiting mutual exclusion
with TestAndSet ()
• To prove that the progress requirement is met, we note that the arguments
presented for mutual exclusion also apply here, since a process exiting the
critical section either sets lock to false or sets waiting[j] to false.
• Both allow a process that is waiting to enter its critical section to proceed.
• To prove that the bounded-waiting requirement is met, we note that, when a process leaves its critical
section, it scans the array waiting in the cyclic ordering (i + 1, i + 2, ..., n - 1, 0, ..., i - 1).
• It designates the first process in this ordering that is in the entry section
(waiting[j] ==true) as the next one to enter the critical section.
• Any process waiting to enter its critical section will thus do so within n - 1
turns.
Bounded-waiting mutual exclusion with
TestAndSet ()
• Unlock and Lock Algorithm uses TestAndSet to regulate the value of lock but it adds another value, waiting[i], for
each process which checks whether or not a process has been waiting.
• A ready queue is maintained with respect to the process in the critical section.
• All the processes coming in next are added to the ready queue with respect to their process number, not
necessarily sequentially.
• Once the ith process gets out of the critical section, it does not turn lock to false so that any process can avail the
critical section now, which was the problem with the previous algorithms.
• Instead, it checks if there is any process waiting in the queue.
• The queue is taken to be a circular queue.
• j is considered to be the next process in line and the while loop checks from jth process to the last process and
again from 0 to (i-1)th process if there is any process waiting to access the critical section.
• If there is no process waiting, then the lock value is changed to false, and any process that comes next can enter the critical section. If there is, then that process's waiting value is turned to false, so that its first while loop becomes false and it can enter the critical section.
• This ensures bounded waiting, so the problem of process synchronization can be solved through this algorithm.
SEMAPHORE: A synchronization tool
Semaphores
• Semaphores are integer variables that are used to solve the critical-section problem by means of two atomic operations, wait() and signal(), which are used for process synchronization.
• The definitions of wait and signal are as follows:

    wait(S) {
        while (S <= 0)
            ; // no-op (busy wait)
        S--;
    }

    signal(S) {
        S++;
    }

• The wait operation decrements the value of its argument S once it is positive; while S is zero or negative, the process performs no operation and keeps waiting.
• The signal operation increments the value of its argument S.
• All modifications to the integer value of the semaphore in the wait() and signal() operations must be executed indivisibly.
• That is, when one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value. In addition, in the case of wait(S), the testing of the integer value of S (S <= 0), as well as its possible modification (S--), must be executed without interruption.
Characteristic of Semaphore
Usage: Types of Semaphores
• There are two main types of semaphores: counting semaphores and binary semaphores. Details about these are given as follows:
• 1. Counting Semaphores
• The value of a counting semaphore can range
over an unrestricted domain.
• These are integer-valued semaphores. They are used to coordinate resource access, where the semaphore count is the number of available resources. If resources are added, the semaphore count is automatically incremented; if resources are removed, the count is decremented.
Usage: Types of Semaphores
2. Binary Semaphores
The binary semaphores are like counting semaphores, but the value of a binary semaphore can range only between 0 and 1. The wait operation succeeds only when the semaphore is 1 (setting it to 0), and the signal operation sets a semaphore that is 0 back to 1. It is sometimes easier to implement binary semaphores than counting semaphores.
Usage:
• On some systems, binary semaphores are known as mutex locks, as they are locks that provide mutual exclusion.
• We can use binary semaphores to deal with the critical-section problem for multiple processes. The processes share a semaphore, mutex, initialized to 1.
• Each process Pi is organized as shown in Figure 6.9.

    do {
        wait(mutex);
        // critical section
        signal(mutex);
        // remainder section
    } while (TRUE);

Figure 6.9 Mutual-exclusion implementation with semaphores.

• Counting semaphores can be used to control access to a given resource consisting of a finite number of instances.
• The semaphore is initialized to the number of resources available.
• Each process that wishes to use a resource performs a wait() operation on the semaphore (thereby decrementing the count).
• When a process releases a resource, it performs a signal() operation (incrementing the count).
• When the count for the semaphore goes to 0, all resources are being used.
• After that, processes that wish to use a resource will block until the count becomes greater than 0.
We can also use semaphores to solve various synchronization problems.
For example, consider two concurrently running processes: P1 with a statement S1 and
P2 with a statement S2.
Suppose we require that S2 be executed only after S1 has completed.
We can implement this scheme readily by letting P1 and P2 share a common
semaphore synch, initialized to 0, and by inserting the statements
    S1;
    signal(synch);

in process P1 and the statements

    wait(synch);
    S2;
in process P2. Because synch is initialized to 0, P2 will execute S2 only after P1 has
invoked signal (synch), which is after statement S1 has been executed.
Implementation
• The main disadvantage of the semaphore definition given here is that it requires busy waiting .
• While a process is in its critical section, any other process that tries to enter its critical section must loop
continuously in the entry code.
• This continual looping is clearly a problem in a real multiprogramming system, where a single CPU is
shared among many processes. Busy waiting wastes CPU cycles that some other process might be able to
use productively. This type of semaphore is also called a spinlock because the process "spins" while
waiting for the lock. (Spinlocks do have an advantage in that no context switch is required when a process
must wait on a lock, and a context switch may take considerable time. Thus, when locks are expected to
be held for short times, spinlocks are useful; they are often employed on multiprocessor systems where
one thread can "spin" on one processor while another thread performs its critical section on another
processor.)
• To overcome the need for busy waiting, we can modify the definition of the wait() and signal() semaphore
operations.
• When a process executes the wait () operation and finds that the semaphore value is not positive, it must
wait. However, rather than engaging in busy waiting, the process can block itself. The block operation
places a process into a waiting queue associated with the semaphore, and the state of the process is
switched to the waiting state. Then control is transferred to the CPU scheduler, which selects another
process to execute.
Implementation
• A process that is blocked, waiting on a semaphore S, should be restarted when some other process
executes a signal() operation.
• The process is restarted by a wakeup () operation, which changes the process from the waiting state to
the ready state.
• The process is then placed in the ready queue. (The CPU may or may not be switched from the running
process to the newly ready process, depending on the CPU-scheduling algorithm.)
• To implement semaphores under this definition, we define a semaphore as a C struct:

    typedef struct {
        int value;
        struct process *list;
    } semaphore;
Difference between Semaphore vs. Mutex
    Parameter   Semaphore                                   Mutex
    Data Type   It is an integer variable.                  It is an object.
    Mechanism   Semaphore is a signaling mechanism.         Mutex is a type of locking mechanism.
    Types       There are two types of semaphore:           There are no types of mutex.
                binary semaphore and counting semaphore.
• The implementation of a semaphore with a waiting queue may result in a situation where two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes. The event in question is the execution of a signal() operation. When such a state is reached, these processes are said to be deadlocked.
• To illustrate this, we consider a system consisting of two processes, P0 and P1, each accessing two semaphores, S and Q, set to the value 1:

    P0              P1
    wait(S);        wait(Q);
    wait(Q);        wait(S);
    ...             ...
    signal(S);      signal(Q);
    signal(Q);      signal(S);

• Suppose that P0 executes wait(S) and then P1 executes wait(Q); the semaphore values go from S = 1, Q = 1 to S = 0, Q = 0.
• When P0 executes wait(Q), it must wait until P1 executes signal(Q). Similarly, when P1 executes wait(S), it must wait until P0 executes signal(S). Both semaphores are driven to -1 and both processes block.
• Since these signal() operations cannot be executed, P0 and P1 are deadlocked.
• We say that a set of processes is in a deadlock state when every process in the set is waiting for an event that can be caused only by another process in the set.
• The events with which we are mainly concerned here are resource acquisition and release.
• Another problem related to deadlocks is indefinite blocking or starvation, a situation in which processes wait indefinitely within the semaphore.
• Indefinite blocking may occur if we remove processes from the list associated with a semaphore in LIFO (last-in, first-out) order.
Priority Inversion
• A scheduling challenge arises when a higher-priority process needs to read or modify kernel data that are currently being accessed by a lower-priority process, or by a chain of lower-priority processes.
• Since kernel data are typically protected with a lock, the higher-priority process will have to wait for a lower-priority one
to finish with the resource.
• The situation becomes more complicated if the lower-priority process is preempted in favor of another process with a
higher priority.
• As an example, assume we have three processes, L, M, and H, whose priorities follow the order L < M < H. Assume that
process H requires resource R, which is currently being accessed by process L.
• Ordinarily, process H would wait for L to finish using resource R. However, now suppose that process M becomes runnable, thereby preempting process L. Indirectly, a process with a lower priority (process M) has affected how long process H must wait for L to relinquish resource R.
• This problem is known as Priority Inversion.
• It occurs only in systems with more than two priorities, so one solution is to have only two priorities.
• That is insufficient for most general-purpose operating systems, however. Typically these systems solve the problem by
implementing a priority inheritance protocol. According to this protocol, all processes that are accessing resources
needed by a higher-priority process inherit the higher priority until they are finished with the resources in question.
• When they are finished, their priorities revert to their original values. In the example above, a priority-inheritance protocol would allow process L to temporarily inherit the priority of process H, thereby preventing process M from preempting its execution. When process L had finished using resource R, it would relinquish its inherited priority from H and assume its original priority. Because resource R would now be available, process H, not M, would run next.
Classic Problems of Synchronization
• The Bounded-Buffer Problem
• The Readers-Writers Problem
• The Dining-Philosophers Problem
Classic Problem of Synchronization
The Bounded-Buffer Problem
• Shared data: a pool of n buffers, each capable of holding one item, with indices in and out; semaphore mutex = 1 (mutual exclusion for buffer access), semaphore empty = n (counts empty slots), semaphore full = 0 (counts filled slots).

Producer:

    do {
        // produce an item in nextp
        wait(empty);
        wait(mutex);
        // add nextp to buffer
        signal(mutex);
        signal(full);
    } while (TRUE);

Consumer:

    do {
        wait(full);
        wait(mutex);
        // remove an item from buffer to nextc
        signal(mutex);
        signal(empty);
        // consume the item in nextc
    } while (TRUE);

• Example trace with n = 5: a producer pass takes empty 5 to 4, mutex 1 to 0 and back to 1, and full 0 to 1; a consumer pass then takes full 1 to 0, mutex 1 to 0 and back to 1, and empty 4 back to 5.
The Readers-Writers Problem
• A reader performs only read operations.
• A writer can perform read, write, or read+write operations.
• Writers must have exclusive access to the database.
• [Figure: several readers may access the database at the same time, but a writer excludes all other readers and writers.]
• For both a reader and a writer, access to the database is a critical section.
The Readers-Writers Problem
• Suppose that a database is to be shared among several concurrent processes.
• Some of these processes may want only to read the database, whereas others may
want to update (that is, to read and write) the database.
• We distinguish between these two types of processes by referring to the former as
readers and to the latter as writers. Obviously, if two readers access the shared data
simultaneously, no adverse effects will result. However, if a writer and some other
process (either a reader or a writer) access the database simultaneously, chaos may
occur.
• To ensure that these difficulties do not arise, we require that the writers have exclusive
access to the shared database while writing to the database.
• This synchronization problem is referred to as the readers-writers problem.
The Readers-Writers Problem
• The first readers-writers problem requires that no reader be kept waiting unless a writer has already obtained permission to use the shared object.
• In other words, no reader should wait for other readers to finish simply because a writer is waiting.
• The second readers-writers problem requires that, once a writer is ready, that writer performs its write as soon as possible. In other words, if a writer is waiting to access the object, no new readers may start reading.
• A solution to either problem may result in starvation.
• In the first case, writers may starve; in the second case, readers may starve.
• For this reason, other variants of the problem have been proposed.
• Next, we present a solution to the first readers-writers problem.
• In the solution to the first readers-writers problem, the reader processes share the following data
structures:
• semaphore mutex, wrt;
• int readcount;
The Readers-Writers Problem
• The semaphores mutex and wrt are initialized to 1; readcount is initialized to 0.
• The semaphore wrt is common to both reader and writer processes.
• The mutex semaphore is used to ensure mutual exclusion when the variable readcount is
updated.
• The readcount variable keeps track of how many processes are currently reading the
object.
• The semaphore wrt functions as a mutual-exclusion semaphore for the writers. It is also
used by the first or last reader that enters or exits the critical section. It is not used by
readers who enter or exit while other readers are in their critical sections.
• Note that, if a writer is in the critical section and n readers are waiting, then one reader is queued on wrt, and n - 1 readers are queued on mutex.
• Also observe that, when a writer executes signal(wrt), we may resume the execution of either the waiting readers or a single waiting writer. The selection is made by the scheduler.
either the waiting readers or a single waiting writer. The selection is made by the
scheduler.
The Readers-Writers Problem
• The readers-writers problem and its solutions have been generalized to provide reader-writer locks on some systems.
• Acquiring a reader-writer lock requires specifying the mode of the lock either read or
write access.
• When a process wishes only to read shared data, it requests the reader-writer lock in
read mode; a process wishing to modify the shared data must request the lock in write
mode. Multiple processes are permitted to concurrently acquire a reader-writer lock in
read mode, but only one process may acquire the lock for writing, as exclusive access is
required for writers.
• Reader-writer locks are most useful in the following situations:
• In applications where it is easy to identify which processes only read shared data and
which processes only write shared data.
• In applications that have more readers than writers. This is because reader-writer locks generally require more overhead to establish than semaphores or mutual-exclusion locks.
• The increased concurrency of allowing multiple readers compensates for the overhead involved in setting up the reader-writer lock.
Structure of a reader process & writer process
• Shared data: semaphore wrt = 1, semaphore mutex = 1, int readcount = 0.

READER:

    do {
        wait(mutex);           // mutual exclusion for updating readcount
        readcount++;           // one more reader
        if (readcount == 1)    // first reader
            wait(wrt);         // do not allow writers
        signal(mutex);         // other readers can enter while this reader is in its CS
        // reading is performed
        wait(mutex);
        readcount--;           // reader wants to leave
        if (readcount == 0)    // no reader left in the CS
            signal(wrt);       // writers can enter
        signal(mutex);
    } while (TRUE);

WRITER:

    do {
        wait(wrt);
        // writing is performed
        signal(wrt);
    } while (TRUE);

• Trace for the first reader entering: wait(mutex) takes mutex 1 to 0; readcount goes 0 to 1; wait(wrt) takes wrt 1 to 0; signal(mutex) restores mutex to 1. When the last reader leaves, readcount drops to 0 and signal(wrt) restores wrt to 1. A writer simply takes wrt 1 to 0 and back.
Dining philosopher problem
• Consider five philosophers who spend their lives thinking and eating.
• The philosophers share a circular table surrounded by five chairs, each belonging to one
philosopher.
• In the center of the table is a bowl of rice, and the table is laid with five single chopsticks (Figure
6.14).
• When a philosopher thinks, she does not interact with her colleagues.
• From time to time, a philosopher gets hungry and tries to pick up the two chopsticks that are
closest to her (the chopsticks that are between her and her left and right neighbors).
• A philosopher may pick up only one chopstick at a time.
• Obviously, she cannot pick up a chopstick that is already in the hand of a neighbor.
• When a hungry philosopher has both her chopsticks at the same time, she eats without releasing
her chopsticks. When she is finished eating, she puts down both of her chopsticks and starts
thinking again.
• The dining-philosophers problem is considered a classic synchronization problem neither because
of its practical importance nor because computer scientists dislike philosophers but because it is
an example of a large class of concurrency-control problems.
Dining philosopher problem
Problems:
• Deadlock
• Starvation
Structure of Philosopher i
• One simple solution is to represent each chopstick with a semaphore, chopstick[5], all elements initialized to 1:

    do {
        wait(chopstick[i]);
        wait(chopstick[(i + 1) % 5]);
        // eat
        signal(chopstick[i]);
        signal(chopstick[(i + 1) % 5]);
        // think
    } while (TRUE);

Problem 1
• If all philosophers get hungry and pick up their left chopsticks at the same time, all chopstick semaphore values become zero.
• Each philosopher then waits forever for her right chopstick: a deadlock state.
Problem 2
• If two philosophers are faster than the others (they think fast and eat fast), or are greedy, the faster processes occupy the chopsticks first.
• A slower neighbour may then never get hold of both chopsticks: starvation.
Solutions to DP problem
• Asymmetric solution
• An odd-numbered philosopher picks up first her left chopstick and then her right chopstick, while an even-numbered philosopher picks up her right chopstick and then her left chopstick.