Operating System Chapter 8
CS-316
Ms Iqra Mehmood
UNIVERSITY OF JHANG
Process Synchronization
Process Synchronization is the coordination of execution of multiple processes in a multi-
process system to ensure that they access shared resources in a controlled and predictable
manner. It aims to resolve the problem of race conditions and other synchronization issues in
a concurrent system.
The main objective of process synchronization is to ensure that multiple processes access
shared resources without interfering with each other and to prevent the possibility of
inconsistent data due to concurrent access. To achieve this, various synchronization
techniques such as semaphores, monitors, and critical sections are used.
Process P1:
    X = shared
    X++
    sleep(1)
    shared = X

Process P2:
    Y = shared
    Y--
    sleep(1)
    shared = Y
Note: We are assuming the shared variable initially holds the value 10, so the expected final value of the common variable (shared) after Process P1 and Process P2 finish is 10 (Process P1 increments shared from 10 to 11, and Process P2 then decrements it from 11 back to 10). But we get an undesired value due to a lack of proper synchronization.
Actual meaning of race condition
Because of the sleep(1), both processes read shared (= 10) into their local copies before either one writes back, so the final result depends only on which write happens last.
If the order of execution is first P1 -> then P2, the final value of the common variable (shared) is 9.
If the order of execution is first P2 -> then P1, the final value of the common variable (shared) is 11.
Here the two outcomes (value1 = 9 and value2 = 11) are racing: if we execute these two processes on our computer system, sometimes we will get 9 and sometimes we will get 11 as the final value of the common variable (shared). This phenomenon is called a race condition.
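The same race can be reproduced with two threads standing in for Process P1 and Process P2. The following is a minimal C/pthreads sketch (the function names p1 and p2 and the use of threads instead of separate processes are illustrative assumptions, not part of the notes above): because both threads copy shared before either writes it back, the printed result is 9 or 11 instead of the expected 10.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

int shared = 10;              /* common variable, assumed to start at 10 */

void *p1(void *arg) {
    (void)arg;
    int x = shared;           /* P1 reads shared into a local copy */
    x++;                      /* increment the local copy */
    sleep(1);                 /* both threads now hold stale copies */
    shared = x;               /* write back: may overwrite P2's result */
    return NULL;
}

void *p2(void *arg) {
    (void)arg;
    int y = shared;           /* P2 reads shared into a local copy */
    y--;                      /* decrement the local copy */
    sleep(1);
    shared = y;               /* write back: may overwrite P1's result */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final value of shared = %d\n", shared);  /* prints 9 or 11, not 10 */
    return 0;
}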
Critical Section Problem
A critical section is a code segment that can be accessed by only one process at a time. The
critical section contains shared variables that need to be synchronized to maintain the
consistency of data variables. So the critical section problem means designing a way for
cooperative processes to access shared resources without creating data inconsistencies.
In the entry section, the process requests permission to enter the critical section; after leaving the critical section it releases it in the exit section and then continues with its remainder section.
Any solution to the critical section problem must satisfy three requirements:
Mutual Exclusion: If a process is executing in its critical section, then no other
process is allowed to execute in the critical section.
Progress: If no process is executing in the critical section and other processes are
waiting outside the critical section, then only those processes that are not executing in
their remainder section can participate in deciding which will enter the critical section
next, and this selection cannot be postponed indefinitely.
Bounded Waiting: A bound must exist on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted.
Peterson’s Solution
Peterson’s Solution is a classical software-based solution to the critical section problem for two processes. In Peterson’s solution, we have two shared variables:
boolean flag[i]: initialized to FALSE; initially no process is interested in entering the critical section.
int turn: indicates whose turn it is to enter the critical section.
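A minimal sketch of Peterson's algorithm in C is shown below (the worker function, the loop count, and the shared counter are illustrative assumptions; on real hardware the plain volatile flag and turn variables would additionally need atomic operations or memory barriers, so treat this as a teaching sketch rather than a production implementation).

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* shared variables of Peterson's solution for two processes (0 and 1) */
volatile bool flag[2] = { false, false };  /* flag[i]: process i wants to enter */
volatile int turn = 0;                     /* whose turn it is to enter */

int counter = 0;                           /* shared data protected by the critical section */

void *worker(void *arg) {
    int i = *(int *)arg;                   /* this process's index: 0 or 1 */
    int j = 1 - i;                         /* the other process's index */
    for (int k = 0; k < 100000; k++) {
        /* entry section */
        flag[i] = true;                    /* announce interest */
        turn = j;                          /* give priority to the other process */
        while (flag[j] && turn == j)
            ;                              /* busy-wait while the other is interested and it is its turn */
        /* critical section */
        counter++;
        /* exit section */
        flag[i] = false;
        /* remainder section */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d\n", counter);     /* expected 200000 if mutual exclusion holds */
    return 0;
}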
Test and Set
while(1){
    while(TestAndSet(lock));
    CRITICAL SECTION CODE
    lock = false;
    REMAINDER SECTION CODE
}
In the above algorithm the TestAndSet() function takes the boolean lock variable, returns its old value, and sets the lock variable to true.
When the lock variable is initially false, the condition TestAndSet(lock) evaluates TestAndSet(false). As the TestAndSet function returns the value the lock held before being set, TestAndSet(false) returns false. The while loop while(TestAndSet(lock)) therefore breaks and the process enters the critical section.
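The logical effect of TestAndSet() can be sketched as a small C function. In the pseudocode above the lock is passed directly, while this sketch takes a pointer; on real hardware the whole operation is a single atomic instruction, which a plain C function cannot guarantee.

#include <stdbool.h>

/* Illustration of TestAndSet: return the old value of the lock and set it to true.
 * Real hardware performs this as one indivisible instruction. */
bool TestAndSet(bool *lock) {
    bool old = *lock;   /* remember what the lock was */
    *lock = true;       /* set the lock */
    return old;         /* false means the lock was free and is now ours */
}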
While one process is inside the critical section, the lock value is 'true'. If any other process now tries to enter the critical section, it evaluates while(TestAndSet(true)), which returns true, so the other process keeps executing the while loop.
As no queue is maintained for the processes stuck in the while loop, bounded waiting is not ensured. Bounded waiting requires a bound on the number of times other processes may enter their critical sections after a process has requested entry and before that request is granted.
In the test and set algorithm an incoming process trying to enter the critical section does not wait in a queue, so any process may get the chance to enter the critical section as soon as it finds the lock variable to be false. It is possible that a particular process never gets the chance to enter the critical section and waits indefinitely.
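C11 exposes a real atomic test-and-set through atomic_flag, so the spin lock described above can be tried out with the sketch below (the worker function and loop count are illustrative assumptions). Two threads increment a shared counter under the lock; as discussed, mutual exclusion holds but bounded waiting is still not guaranteed.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* shared lock variable, initially clear (false) */
int counter = 0;                       /* shared data protected by the lock */

void *worker(void *arg) {
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        while (atomic_flag_test_and_set(&lock))
            ;                          /* entry section: spin until the old value was false */
        counter++;                     /* critical section */
        atomic_flag_clear(&lock);      /* exit section: set the lock back to false */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter); /* 200000: mutual exclusion held */
    return 0;
}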
Swap
The Swap function uses two boolean variables, lock and key. Both lock and key are initialized to false. The Swap algorithm is similar to the test and set algorithm: it uses a temporary variable to set the lock to true when a process enters the critical section of the program.
while(1){
    key = true;
    while(key){
        swap(lock, key);
    }
    CRITICAL SECTION CODE
    lock = false;
    REMAINDER SECTION CODE
}
In the code above, when a process P1 tries to enter the critical section of the program, it first executes the while loop
while(key){
swap(lock,key);
}
As the key value is set to true just before the while loop, swap(lock, key) swaps the values of lock and key: lock becomes true and key becomes false. In the next iteration the while loop breaks and the process P1 enters the critical section.
The value of lock and key when P1 enters the critical section is lock = true and key = false.
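The swap(lock, key) operation used above exchanges the two boolean values in one step using the temporary variable mentioned earlier. A plain C sketch of its logical effect is given below; as with TestAndSet, real hardware performs the exchange as a single atomic instruction, which this illustration does not guarantee.

#include <stdbool.h>

/* Illustration of swap: exchange the contents of lock and key. */
void swap(bool *lock, bool *key) {
    bool temp = *lock;   /* temporary variable used for the exchange */
    *lock = *key;
    *key  = temp;
}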
Let's say another process, P2, tries to enter the critical section while P1 is inside it. P2 sets key to true in the body of the outer while(1) loop and then checks the inner loop while(key). As key is true, P2 enters the inner loop and swap(lock, key) is executed again; since both key and lock are true, they remain true after swapping. So P2 keeps spinning in the while loop until process P1 comes out of the critical section and sets lock back to false.
When Process P1 comes out of the critical section the value of lock is again set to false so that
other processes can now enter the critical section.
When a process is inside the critical section, the other incoming processes trying to enter are not maintained in any order or queue. Any one of the waiting processes may get the chance to enter the critical section once the lock becomes false, so a particular process may wait indefinitely. Hence, bounded waiting is not ensured in the Swap algorithm either.
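For completeness, C11's atomic_exchange atomically stores a new value and returns the old one, so the Swap-style entry section can be sketched as follows (the worker function, loop count, and counter are illustrative assumptions; this mirrors the atomic_flag sketch shown earlier).

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

atomic_bool lock = false;    /* shared lock variable, initially false */
int counter = 0;             /* shared data protected by the lock */

void *worker(void *arg) {
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        bool key = true;
        while (key)
            key = atomic_exchange(&lock, true);  /* swap: lock becomes true, key gets lock's old value */
        counter++;                               /* critical section */
        atomic_store(&lock, false);              /* exit section: release the lock */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);
    return 0;
}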
Unlock and Lock
All the processes are maintained in a ready queue before entering the critical section. The processes are added to the queue according to their process number, and the queue is a circular queue.
while(1){
    waiting[i] = true;
    key = true;
    while(waiting[i] && key){
        key = TestAndSet(lock);
    }
    waiting[i] = false;
    CRITICAL SECTION CODE
    j = (i+1) % n;
    while(j != i && !waiting[j])
        j = (j+1) % n;
    if(j == i)
        lock = false;
    else
        waiting[j] = false;
    REMAINDER SECTION CODE
}
In the Unlock and Lock algorithm the lock is not always set to false when a process comes out of the critical section. In the other algorithms, Swap and Test and Set, the lock was set to false as soon as a process came out of the critical section so that any other process could enter. But in Unlock and Lock, once the ith process comes out of the critical section, the algorithm checks the waiting queue for the next process waiting to enter the critical section, i.e. the jth process.
If there is a jth process waiting in the ready queue to enter the critical section, the waiting[j] of that process is set to false so that its while loop while(waiting[i] && key) becomes false and the jth process enters the critical section.
If no process is waiting in the ready queue to enter the critical section, the algorithm sets the lock to false so that any other process can come along and enter the critical section.
Since a ready queue is always maintained for the waiting processes, the Unlock and lock
algorithm ensures bounded waiting.
Semaphores
A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be signaled by another thread. This is different from a mutex, because a mutex can be released (signaled) only by the thread that called the wait (lock) function.
A semaphore uses two atomic operations, wait and signal for process synchronization. A
Semaphore is an integer variable, which can be accessed only through two operations wait()
and signal().
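The classical textbook definitions of wait() and signal() can be sketched as follows. The busy-waiting form shown here is the simple version; both operations must be executed atomically, and practical implementations block the waiting process instead of spinning.

typedef struct {
    int value;            /* the semaphore's integer value */
} semaphore;

/* wait (also called P or down): must be executed atomically.
 * Named after the textbook operation; a real program would avoid
 * clashing with the C library's wait(). */
void wait(semaphore *S) {
    while (S->value <= 0)
        ;                 /* busy-wait until the semaphore becomes positive */
    S->value--;           /* take one unit of the resource */
}

/* signal (also called V or up): must be executed atomically. */
void signal(semaphore *S) {
    S->value++;           /* release one unit of the resource */
}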
There are two types of semaphores: Binary Semaphores and Counting Semaphores.
Binary Semaphores: They can only take the values 0 or 1. They are also known as mutex
locks, as they can provide mutual exclusion. All the processes share the same mutex
semaphore, which is initialized to 1. A process has to wait until the semaphore's value is
1; the wait() operation then sets it to 0 and the process starts its critical section. When
it completes its critical section, it resets the value of the mutex semaphore to 1 with
signal() so that some other process can enter its critical section.
Counting Semaphores: They can have any value and are not restricted to a certain
domain. They can be used to control access to a resource that has a limitation on the
number of simultaneous accesses. The semaphore can be initialized to the number of
instances of the resource. Whenever a process wants to use that resource, it checks if the
number of remaining instances is more than zero, i.e., the process has an instance
available. Then, the process can enter its critical section thereby decreasing the value of
the counting semaphore by 1. After the process is over with the use of the instance of the
resource, it can leave the critical section thereby adding 1 to the number of available
instances of the resource.
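As a concrete illustration of a counting semaphore, the POSIX semaphore API (sem_init, sem_wait, sem_post) can limit how many threads use a resource at the same time. In the sketch below the number of resource instances (3) and competing threads (5) are arbitrary assumptions.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define INSTANCES 3     /* number of resource instances (assumed) */
#define THREADS   5     /* number of competing threads (assumed) */

sem_t resource;         /* counting semaphore initialized to INSTANCES */

void *use_resource(void *arg) {
    long id = (long)arg;
    sem_wait(&resource);                 /* wait(): take one instance, or block if none left */
    printf("thread %ld is using an instance\n", id);
    sleep(1);                            /* pretend to use the resource */
    printf("thread %ld is done\n", id);
    sem_post(&resource);                 /* signal(): return the instance */
    return NULL;
}

int main(void) {
    pthread_t t[THREADS];
    sem_init(&resource, 0, INSTANCES);   /* at most INSTANCES threads inside at once */
    for (long i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, use_resource, (void *)i);
    for (int i = 0; i < THREADS; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&resource);
    return 0;
}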