OS Unit 3
Communication
A critical section is a part of the code where shared resources are accessed. To prevent conflicts
and ensure data consistency, only one process should be allowed in the critical section at a time.
This is achieved with special techniques that control access to shared resources, making sure
that only one process can use a resource at a time. The goal is to prevent race conditions and
ensure that processes can work together without interfering with each other.
• A Cooperating process is one that can affect or be affected by
other processes executing in the system.
• Cooperating processes can either directly share a logical
address space or be allowed to share data only through files or
messages.
• The former is achieved through the use of lightweight
processes or threads.
• Concurrent access to shared data may result in data
inconsistency.
• Here, we discuss various mechanisms to ensure the orderly
execution of cooperating processes that share a logical address
space, so that data consistency is maintained.
Background
• Concurrent access to shared data may result in data
inconsistency
• Maintaining data consistency requires mechanisms to ensure
the orderly execution of cooperating processes
• Suppose that we want to provide a solution to the
producer-consumer problem that fills all the buffers. We can
do so by having an integer count that keeps track of the
number of full buffers. Initially, count is set to 0. It is
incremented by the producer after it produces a new buffer
and decremented by the consumer after it consumes a
buffer.
Inter Process Communication
There are two types of processes:
– Independent processes – processes which
do not share data with other processes.
– Cooperating processes – processes that
share data with other processes.
• Cooperating processes require an interprocess
communication (IPC) mechanism.
• Interprocess communication is the
mechanism by which cooperating processes
share data and information.
There are two ways by which interprocess
communication is achieved:
– Shared memory
– Message passing

1. Shared Memory
Consumer
while (true) {
while (count == 0) ; // do nothing
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
count--;
/* consume the item in nextConsumed*/
}
Race Condition
• count++ could be implemented as
register1 = count
register1 = register1 + 1
count = register1
Algorithm for Process Pi
do {
flag[i] = true;
turn = j;
while (flag[j] && turn == j);
critical section
flag[i] = false;
remainder section
} while (true);
Peterson’s Solution (Cont.)
• Provable that the three CS requirements are met:
1. Mutual exclusion is preserved
Pi enters CS only if:
either flag[j] = false or turn = i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
Synchronization Hardware
• Many systems provide hardware support for implementing the critical section code.
Solution to Critical-section Problem Using Locks
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
test_and_set Instruction
Definition:
boolean test_and_set (boolean *target)
{
boolean rv = *target;
*target = TRUE;
return rv;
}
1. Executed atomically.
2. Returns the original value of the passed parameter.
3. Sets the new value of the passed parameter to TRUE.
Solution using test_and_set()
Shared Boolean variable lock, initialized to FALSE
Solution:
do {
while (test_and_set(&lock));
/* do nothing */
/* critical section */
lock = false;
/* remainder section */
} while (true);
Bounded-waiting Mutual Exclusion with test_and_set()
Common data structures (both initialized to FALSE):
boolean waiting[n];
boolean lock;

do {
   waiting[i] = TRUE;
   key = TRUE;
   while (waiting[i] && key)
      key = test_and_set(&lock);
   waiting[i] = FALSE;
   // critical section
   j = (i + 1) % n;
   while ((j != i) && !waiting[j])
      j = (j + 1) % n;
   if (j == i)
      lock = FALSE;
   else
      waiting[j] = FALSE;
} while (TRUE);
Turn Variable or Strict Alternation Approach
• The actual problem of the lock variable approach was that more than one process
could read the lock variable at the same time and each conclude it was free, hence
mutual exclusion was not guaranteed.
• This problem is addressed in the turn variable approach. Now, a process can
enter the critical section only when the value of the turn variable is equal to the
PID of the process.
• There are only two possible values for the turn variable, i or j; if its value is not i
then it is definitely j, and vice versa.
• In the entry section, the process Pi will not enter the critical section while the
value of turn is j, and the process Pj will not enter while the value of turn is i.
• Initially, two processes Pi and Pj are available and both want to execute in the
critical section.
The turn variable is equal to i, hence Pi will get the chance to enter the critical
section. The value of turn remains i until Pi finishes its critical section.
Pi finishes its critical section and assigns j to the turn variable. Pj will then get the
chance to enter the critical section. The value of turn remains j until Pj finishes its
critical section.
Mutual Exclusion:
The strict alternation approach provides mutual exclusion in every case. The
procedure works only for two processes, and the pseudocode is different for each
of them. A process enters only when it sees that the turn variable is equal to its
process ID, otherwise it waits. Hence no process can enter the critical section out
of turn.
Progress:
Progress is not guaranteed in this mechanism. If Pi does not want to enter the critical
section on its turn, then Pj is blocked indefinitely: Pj must wait for its turn, since the
turn variable will remain i until Pi assigns it to j.
Portability:
The solution is portable. It is a pure software mechanism implemented in user
mode and does not need any special instruction from the operating system.
Semaphores
• Synchronization tool
• Does not require busy waiting
• Semaphore S – integer variable
• Two standard operations modify S: wait() and signal()
  – Originally called P() and V()
    • P (from the Dutch proberen, “to test”)
    • V (verhogen, “to increment”)
• Less complicated
• Can only be accessed via two indivisible (atomic) operations
wait (S) {
while (S <= 0)
; // no-op
S--;
}
signal (S) {
S++;
}
Semaphore Types
1. Counting Semaphore
• Integer value can range over an unrestricted domain. It is used to control
access to a resource that has multiple instances.
2. Binary Semaphore
– Integer value can range only between 0 and 1;
– Can be simpler to implement
– Also known as mutex locks, as they are locks that provide mutual exclusion.
– If S = 1, it means no process is in the critical section.
Advantages :
1. Provides mutual exclusion
2. Can solve various synchronization problems
Semaphore Implementation
• Must guarantee that no two processes can execute wait () and
signal () on the same semaphore at the same time
• Thus, implementation becomes the critical section problem
where the wait and signal code are placed in the critical section
– Could now have busy waiting in critical section
implementation
• Note that applications may spend lots of time in critical sections
and therefore this is not a good solution.
• Busy waiting wastes CPU cycles that some other process might
be able to use productively. This type of Semaphore is also called
a Spinlock because the process “spins” while waiting for the
lock.
Semaphore Implementation
with no Busy waiting
• Two operations:
– block – place the process invoking the operation on the
appropriate waiting queue
– wakeup – remove one of the processes in the waiting queue
and place it in the ready queue
Disadvantages of Semaphores
• The main disadvantage is that it requires busy waiting
• While a process is in its critical section, any other
process that tries to enter its critical section must loop
continuously in the entry code.
• Busy waiting wastes CPU cycles that some other
process might be able to use productively.
• This type of semaphore is also called a spinlock
because the process “spins” while waiting for the lock.
To overcome the need for busy waiting, we can modify the definition of wait()
and signal() semaphore operations
• When a process executes the wait() operation and finds that
the semaphore value is not positive, it must wait
• However, rather than engaging in busy waiting, the process
can block itself.
• The block operation places a process into a waiting queue
associated with the semaphore, and the state of the process is
switched to the waiting state.
• Then control is transferred to the CPU scheduler, which
selects another process to execute.
Semaphore Implementation with no Busy waiting (Cont.)
typedef struct {
    int value;
    struct process *list;
} semaphore;

• Implementation of wait:
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

• Implementation of signal:
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
Deadlocks and Starvation
• The implementation of a semaphore with a waiting queue may result in a
situation where two or more processes are waiting indefinitely for an event
that can be caused only by one of the waiting processes. The event is the
execution of signal() operation. When such a state is reached, these
processes are said to be deadlocked.
• To illustrate this, we consider a system consisting of two processes, P0 and
P1, each accessing two semaphores, S and Q, set to the value 1:
• Suppose that P0 executes wait(S) followed by wait(Q), while P1 executes
wait(Q) followed by wait(S). When P0 executes wait(Q), it must wait until P1
executes signal(Q). Similarly, when P1 executes wait(S), it must wait until P0
executes signal(S).
• Since these signal() operations cannot be executed, P0 and P1 are
deadlocked.
• We say that a set of processes is in a deadlocked state when every process
in the set is waiting for an event that can be caused only by another process
in the set. The events with which we are mainly concerned here are resource
acquisition and release.
• Another problem related to deadlock is indefinite blocking, or starvation, a
situation in which processes wait indefinitely within the semaphore.
• Indefinite blocking may occur if we add and remove processes from the
list associated with a semaphore in LIFO (last-in, first-out) order.
Questions on Semaphore
• A counting semaphore was initialized to 10. Then 6 P
(wait) operations and 4 V (signal) operations were
completed on this semaphore. The resulting value of
the semaphore is
(a) 0 (b) 8 (c) 10 (d) 12
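A worked check: each P (wait) decrements the value by 1 and each V (signal) increments it by 1, so

```latex
S = 10 - 6 \cdot 1 + 4 \cdot 1 = 8 \quad \Rightarrow \quad \text{option (b)}
```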
• The following program consists of 3 concurrent processes and 3 binary
semaphores. The semaphores are initialized as S0=1, S1=0, S2=0.
Process P0 Process P1 Process P2
while (true) { wait (S1); wait (S2);
wait (S0); Release (S0); release (S0);
print (0);
release (S1);
release (S2);
}
• A shared variable x, initialized to zero, is operated on by four concurrent processes
W, X, Y, Z as follows. Each of the processes W and X reads x from memory,
increments by one, stores it to memory, and then terminates. Each of the processes
Y and Z reads x from memory, decrements by two, stores it to memory, and then
terminates. Each process before reading x invokes the P operation (i.e., wait) on a
counting semaphore S and invokes the V operation (i.e., signal) on the semaphore S
after storing x to memory. Semaphore S is initialized to two. What is the maximum
possible value of x after all processes complete execution?(Gate CSE 2013)
A) -2
B) -1
C) 1
D) 2
MUTEX (MUTual EXclusion)
• Used to synchronize threads: only one thread at a time can hold the mutex.
Difference between Lock, Mutex
and Semaphore
• A lock allows only one thread to enter the part
that's locked and the lock is not shared with any
other processes.
Bounded Buffer Problem (Cont.)
• The structure of the producer process
do {
    // produce an item in nextp
    wait (empty);
    wait (mutex);
    // add the item to the buffer
    signal (mutex);
    signal (full);
} while (TRUE);
Bounded Buffer Problem (Cont.)
• The structure of the consumer process
do {
    wait (full);
    wait (mutex);
    // remove an item from buffer to nextc
    signal (mutex);
    signal (empty);
    // consume the item in nextc
} while (TRUE);
Readers-Writers Problem
• A data set is shared among a number of concurrent processes
– Readers – only read the data set; they do not perform any
updates
– Writers – can both read and write
do {
wait (wrt) ;
// writing is performed
signal (wrt) ;
} while (TRUE);
Readers-Writers Problem (Cont.)
The structure of a reader process
do {
    wait (mutex);        // ensure no other reader can execute the <Entry> section while you are in it
    readcount++;         // indicate that you are a reader trying to enter the critical section
    if (readcount == 1)  // if you are the first reader...
        wait (wrt);      // ...lock the resource from writers
    signal (mutex);      // let other readers enter the <Entry> section

    // reading is performed

    wait (mutex);        // ensure no other reader can execute the <Exit> section while you are in it
    readcount--;         // indicate that you no longer need the shared resource; one less reader
    if (readcount == 0)  // check if you are the last (only) reader reading the shared file
        signal (wrt);    // if you are the last reader, unlock the resource, making it available to writers
    signal (mutex);      // let other readers enter the <Exit> section, now that you are done with it
} while (TRUE);
Readers/Writers problem’s
solution
1. The first reader must lock the resource (shared file) if it is available. Once the file is locked
from writers, it may be used by many subsequent readers without them having to re-lock it.
2. Before entering the CS, every new reader must go through the entry section. However, there
may only be a single reader in the entry section at a time. This is done to avoid race conditions
on the readers (e.g. two readers increment the readcount at the same time, so no one feels
entitled to lock the resource from writers). To accomplish this, every reader which enters the
<ENTRY Section> will lock the <ENTRY Section> for themselves until they are done with it. Note:
readers are not locking the resource. They are only locking the entry section so no other reader
can enter it while they are in it. Once the reader is done executing the entry section, it will
unlock it by signalling the mutex semaphore. Same is valid for the <EXIT Section>. There can be
no more than a single reader in the exit section at a time, therefore, every reader must claim
and lock the Exit section for themselves before using it.
3. Once the first reader is in the entry section, it will lock the resource. Doing this will prevent any
writers from accessing it. Subsequent readers can just utilize the locked (from writers)
resource. The very last reader (indicated by the readcount variable) must unlock the resource,
thus making it available to writers.
4. In this solution, every writer must claim the resource individually. This means that a stream of
readers can subsequently lock all potential writers out and starve them. This is so, because
after the first reader locks the resource, no writer can lock it, before it gets released. And it will
only be released by the very last reader. Hence, this solution does not satisfy fairness.
Readers-Writers Problem Variations
• First variation – no reader kept waiting unless writer
has permission to use shared object
Dining-Philosophers Problem

Dining-Philosophers Problem Algorithm
• The structure of Philosopher i:
do {
wait ( chopstick[i] );
wait ( chopstick[ (i + 1) % 5] );
// eat
signal ( chopstick[i] );
signal (chopstick[ (i + 1) % 5] );
// think
} while (TRUE);
Problems with Semaphores
• Incorrect use of semaphore operations:
  – signal(mutex) …. wait(mutex)
  – wait(mutex) … wait(mutex)
  – Omitting wait(mutex) or signal(mutex) (or both)

Monitors
• A high-level abstraction that provides a convenient and effective mechanism for process synchronization:
monitor monitor-name
{
    // shared variable declarations
    procedure P1 (…) { …. }
    …
    initialization code (…) { … }
}
Schematic view of a Monitor
Condition Variables
• But monitors are not powerful enough to model some
synchronization schemes.
• The solution is provided by condition variables.
Declaration:
condition x, y;
• Two operations are allowed on a condition variable:
  – x.wait() – the process invoking the operation is suspended
  – x.signal() – resumes one of the processes (if any) that invoked x.wait()
Synchronization Examples
• Windows XP
• Linux
Windows XP Synchronization
• Uses interrupt masks to protect access to global resources on
uniprocessor systems
• Events
  – An event acts much like a condition variable
• Timers notify one or more threads when the time has expired
• Dispatcher objects are either in a signaled state (object available)
or a non-signaled state (thread will block)
Linux Synchronization
• Linux:
– Prior to kernel Version 2.6, disables interrupts to
implement short critical sections
– Version 2.6 and later, fully preemptive
• Linux provides:
– semaphores
– spinlocks
– reader-writer versions of both
• In a slightly modified implementation, it
would be possible for a semaphore's value to
be less than zero. When a process executes
wait(), the semaphore count is automatically
decremented. The magnitude of the negative
value would determine how many processes
were waiting on the semaphore.
Exam Questions
1. Explain briefly all classic problems of synchronization.
2. What is the Critical Section?
3. What is Critical–Section problem? Explain in detail.
4. Explain the following:
a. Critical Section
b. Starvation
c. Critical resource
5. What is the significance of Checkpoints?
6. Analyse the various mechanisms of interprocess
communication in Unix.
7. What is meant by Semaphore? Explain with an example.
8. What is Semaphore? What are the types?
9. What is a Semaphore? Explain the two operations.