UNIT – 02
PROCESS SYNCHRONIZATION
Process Synchronization is the coordination of execution of multiple processes in a multi-process
system to ensure that they access shared resources in a controlled and predictable manner. It aims to
resolve the problem of race conditions and other synchronization issues in a concurrent system.
Critical Section Problem
A critical section is a code segment that can be accessed by only one process at a time. The critical
section contains shared variables that need to be synchronized to maintain the consistency of data
variables. So the critical section problem means designing a way for cooperating processes to access
shared resources without creating data inconsistencies.
Consider, for example, a bank account in which deposit and withdraw operations both update a shared
balance variable. The operations that involve the balance variable should be put in the critical sections
of both deposit and withdraw.
In the entry section, a process requests permission to enter its critical section.
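To make this concrete, below is a minimal C sketch of this bank-account example using POSIX threads; the amounts, variable names and the use of a pthread mutex are illustrative assumptions rather than part of these notes. The lock and unlock calls play the role of the entry and exit sections around the statements that touch balance.

#include <pthread.h>
#include <stdio.h>

int balance = 100;                        /* shared variable */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *deposit(void *arg)
{
    pthread_mutex_lock(&lock);            /* entry section */
    balance = balance + 50;               /* critical section */
    pthread_mutex_unlock(&lock);          /* exit section */
    return NULL;
}

void *withdraw(void *arg)
{
    pthread_mutex_lock(&lock);            /* entry section */
    balance = balance - 50;               /* critical section */
    pthread_mutex_unlock(&lock);          /* exit section */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, withdraw, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final balance = %d\n", balance);
    return 0;
}

Without the mutex, the two updates to balance could interleave and leave the balance in an inconsistent state; with it, only one thread executes the critical section at a time.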
Any solution to the critical section problem must satisfy three requirements:
• Mutual Exclusion: If a process is executing in its critical section, then no other process is
allowed to execute in the critical section.
• Progress: If no process is executing in the critical section and other processes are waiting
outside the critical section, then only those processes that are not executing in their remainder
section can participate in deciding which will enter the critical section next, and the selection
cannot be postponed indefinitely.
• Bounded Waiting: A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.
Bakery Algorithm
The Bakery Algorithm is a simple process synchronization algorithm used to prevent race conditions
in the critical sections of a program or an operating system.
The bakery algorithm is a mutual exclusion algorithm: it lets multiple processes share the critical
section of a program in an orderly manner. It works by assigning a unique number (a "ticket") to each
process that requests access to the critical section.
The bakery algorithm follows the first come, first served property, so the process holding the smallest
number is given priority to enter the critical section first. If two or more processes are assigned the
same number, the process with the lowest process ID is given priority.
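For illustration, a C sketch of the bakery algorithm is given below; the number of processes N and the names choosing, number, lock and unlock are assumptions made for this sketch. choosing[i] is set while process i is taking its ticket, and number[i] holds the ticket (0 means the process is not interested).

#define N 5                      /* number of processes (assumed) */

volatile int choosing[N];        /* 1 while process i is taking a number */
volatile int number[N];          /* ticket of process i, 0 = not waiting */

static int max_number(void)
{
    int max = 0;
    for (int i = 0; i < N; i++)
        if (number[i] > max)
            max = number[i];
    return max;
}

void lock(int i)
{
    choosing[i] = 1;
    number[i] = 1 + max_number();          /* take the next ticket */
    choosing[i] = 0;
    for (int j = 0; j < N; j++) {
        while (choosing[j])                /* wait while j is choosing */
            ;
        /* wait while j holds a smaller ticket, or the same ticket
           with a smaller process ID (first come, first served) */
        while (number[j] != 0 &&
               (number[j] < number[i] ||
                (number[j] == number[i] && j < i)))
            ;
    }
}

void unlock(int i)
{
    number[i] = 0;                         /* give up the ticket */
}

Process i calls lock(i), executes its critical section, and then calls unlock(i). On real hardware the shared arrays would also need atomic operations or memory barriers; they are omitted here to keep the sketch close to the textbook algorithm.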
Types of Semaphores
A semaphore is an integer variable that is accessed only through two atomic operations, wait() and
signal(), and is used to control access to shared resources. There are two main types of semaphores:
counting semaphores and binary semaphores.
Counting Semaphore
• Description: Can take non-negative integer values (not limited to 0 or 1).
• Use Case: Useful for managing a resource pool with multiple identical resources (e.g.,
multiple printers).
• Behaviour: The count represents the number of available resources; wait() decrements it and
signal() increments it.
Binary Semaphore (Mutex Semaphore)
• Description: Can only take the value 0 or 1.
• Use Case: Used for mutual exclusion, ensuring only one thread accesses a critical section at a
time.
• Behaviour: Works like a lock; 1 means available, 0 means locked.
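As a concrete illustration, the sketch below uses POSIX semaphores (sem_t): printers behaves as a counting semaphore initialised to 3 for a pool of three identical printers, while mutex behaves as a binary semaphore initialised to 1 for mutual exclusion. The names and the pool size are assumptions made for this example.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t printers;                  /* counting semaphore, initial value 3 */
sem_t mutex;                     /* binary semaphore, initial value 1 */

void *job(void *arg)
{
    sem_wait(&printers);         /* acquire one printer (count--) */
    printf("printing...\n");
    sem_post(&printers);         /* release the printer (count++) */

    sem_wait(&mutex);            /* lock: 1 -> 0 */
    /* critical section: only one thread at a time here */
    sem_post(&mutex);            /* unlock: 0 -> 1 */
    return NULL;
}

int main(void)
{
    pthread_t t;
    sem_init(&printers, 0, 3);   /* three identical resources */
    sem_init(&mutex, 0, 1);      /* 1 = available, 0 = locked */
    pthread_create(&t, NULL, job, NULL);
    pthread_join(t, NULL);
    sem_destroy(&printers);
    sem_destroy(&mutex);
    return 0;
}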
Advantages of Semaphores
• Semaphores allow only one process into the critical section. They follow the mutual exclusion
principle strictly and are much more efficient than some other methods of synchronization.
• There is no resource wastage due to busy waiting, because processor time is not wasted
repeatedly checking whether a condition is fulfilled before a process is allowed to access the
critical section.
• Semaphores are implemented in the machine independent code of the microkernel. So they
are machine independent.
Disadvantages of Semaphores
• Semaphores are error-prone: the wait and signal operations must be performed in the correct
order, otherwise deadlocks can occur.
• Semaphores are impractical for large-scale use, as their use leads to a loss of modularity. This
happens because the wait and signal operations prevent the creation of a structured layout for
the system.
• Semaphores may lead to a priority inversion where low priority processes may access the
critical section first and high priority processes later.
Synchronization problems: Below are some of the classical problems that illustrate the pitfalls of
process synchronization in systems where cooperating processes are present.
❖ Bounded Buffer Problem: The bounded buffer problem, also called the producer-consumer
problem, is one of the classic problems of synchronization.
There is a buffer of n slots and each slot is capable of storing one unit of data. There
are two processes running, namely, producer and consumer, which are operating on the buffer.
• A producer tries to insert data into an empty slot of the buffer. A consumer tries to remove
data from a filled slot in the buffer. As you might have guessed by now, those two processes
won't produce the expected output if they are being executed concurrently.
• There needs to be a way to make the producer and consumer work in an independent manner.
One solution of this problem is to use semaphores. The semaphores which will be used here are:
• mutex, a binary semaphore which is used to acquire and release the lock on the buffer.
• empty, a counting semaphore whose initial value is the number of slots in the buffer, since,
initially all slots are empty.
• full, a counting semaphore whose initial value is 0.
At any instant, the current value of empty represents the number of empty slots in the buffer and full
represents the number of occupied slots in the buffer.
The Producer Operation
The pseudocode of the producer function looks like this:
do
{
// wait until empty > 0 and then decrement 'empty'
wait(empty);
// acquire lock
wait(mutex);
/* perform the insert operation in a slot */
// release lock
signal(mutex);
// increment 'full'
signal(full);
}
while(TRUE);
• Looking at the above code for a producer, we can see that a producer first waits until there is
at least one empty slot.
• Then it decrements the empty semaphore, because there will now be one less empty slot, since
the producer is going to insert data in one of those slots.
• Then, it acquires lock on the buffer, so that the consumer cannot access the buffer until
producer completes its operation.
• After performing the insert operation, the lock is released and the value of full is incremented
because the producer has just filled a slot in the buffer.
The Consumer Operation
The pseudocode of the consumer function looks like this:
do
{
// wait until full > 0 and then decrement 'full'
wait(full);
// acquire the lock
wait(mutex);
/* perform the remove operation in a slot */
// release the lock
signal(mutex);
// increment 'empty'
signal(empty);
}
while(TRUE);
• The consumer waits until there is at least one full slot in the buffer.
• Then it decrements the full semaphore, because the number of occupied slots will decrease by
one after the consumer completes its operation.
• After that, the consumer acquires lock on the buffer.
• Following that, the consumer completes the removal operation so that the data from one of
the full slots is removed.
• Then, the consumer releases the lock.
• Finally, the empty semaphore is incremented by 1, because the consumer has just removed
data from an occupied slot, thus making it empty.
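To show how the producer and consumer loops fit together in practice, here is a compact runnable sketch using POSIX threads and semaphores; wait and signal in the pseudocode correspond to sem_wait and sem_post, and the buffer size, circular-buffer indices and item count are assumptions made for illustration.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 10                         /* number of buffer slots (assumed) */
#define ITEMS 20                     /* items to produce/consume (assumed) */

int buffer[N];                       /* the bounded buffer */
int in = 0, out = 0;                 /* next slot to fill / to empty */
sem_t mutex;                         /* binary semaphore guarding the buffer */
sem_t empty;                         /* counts empty slots, starts at N */
sem_t full;                          /* counts occupied slots, starts at 0 */

void *producer(void *arg)
{
    for (int item = 0; item < ITEMS; item++) {
        sem_wait(&empty);            /* wait until empty > 0, then decrement */
        sem_wait(&mutex);            /* acquire the lock */
        buffer[in] = item;           /* insert into a slot */
        in = (in + 1) % N;
        sem_post(&mutex);            /* release the lock */
        sem_post(&full);             /* one more occupied slot */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);             /* wait until full > 0, then decrement */
        sem_wait(&mutex);            /* acquire the lock */
        int item = buffer[out];      /* remove from a slot */
        out = (out + 1) % N;
        sem_post(&mutex);            /* release the lock */
        sem_post(&empty);            /* one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&mutex, 0, 1);
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}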
❖ Readers-Writers Problem: In the readers-writers problem, a shared resource (such as a file or a
database) may be read by many readers at the same time, but a writer needs exclusive access to it.
One classical solution uses a binary semaphore wrt that gives a writer (or the group of readers)
exclusive access to the resource, a binary semaphore mutex that protects the shared counter, and an
integer read_count that records how many readers are currently reading.
The pseudocode of the reader looks like this:
while(TRUE)
{
//acquire lock
wait(mutex);
read_count++;
if(read_count == 1)
wait(wrt);
//release lock
signal(mutex);
/* perform the reading operation */
// acquire lock
wait(mutex);
read_count--;
if(read_count == 0)
signal(wrt);
// release lock
signal(mutex);
}
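The writer code referred to in the points below is not reproduced in these notes; a minimal sketch in the same pseudocode style, assuming the same wrt semaphore, would be:
while(TRUE)
{
// wait until no reader or writer is using the resource
wait(wrt);
/* perform the write operation */
// release the resource for the next reader or writer
signal(wrt);
}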
• As seen above in the code for the writer, the writer simply waits on the wrt semaphore until it
gets a chance to write to the resource.
• After performing the write operation, it signals wrt so that the next reader or writer can access
the resource.
• On the other hand, in the code for the reader, the mutex lock is acquired whenever read_count
is updated by a process.
• When a reader wants to access the resource, it first increments the read_count value, then
accesses the resource, and then decrements the read_count value.
• The semaphore wrt is used by the first reader that enters the critical section and the last reader
that exits the critical section.
• The reason for this is that when the first reader enters the critical section, the writer is blocked
from the resource; only other readers can access the resource now.
• Similarly, when the last reader exits the critical section, it signals the writer using the wrt
semaphore, because there are zero readers left and a writer can have the chance to access the
resource.
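For completeness, a runnable C sketch of this readers-writers solution using POSIX threads and semaphores is given below; the shared_data variable, the number of threads and the thread bodies are assumptions added for illustration, while the semaphore logic mirrors the pseudocode above.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;                     /* protects read_count */
sem_t wrt;                       /* exclusive access for the writer / reader group */
int read_count = 0;              /* number of readers currently reading */
int shared_data = 0;             /* the resource being read and written */

void *reader(void *arg)
{
    sem_wait(&mutex);
    read_count++;
    if (read_count == 1)         /* first reader locks out writers */
        sem_wait(&wrt);
    sem_post(&mutex);

    printf("read %d\n", shared_data);   /* perform the reading operation */

    sem_wait(&mutex);
    read_count--;
    if (read_count == 0)         /* last reader lets writers in again */
        sem_post(&wrt);
    sem_post(&mutex);
    return NULL;
}

void *writer(void *arg)
{
    sem_wait(&wrt);              /* exclusive access */
    shared_data++;               /* perform the write operation */
    sem_post(&wrt);
    return NULL;
}

int main(void)
{
    pthread_t r1, r2, w;
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
    pthread_create(&r1, NULL, reader, NULL);
    pthread_create(&w,  NULL, writer, NULL);
    pthread_create(&r2, NULL, reader, NULL);
    pthread_join(r1, NULL);
    pthread_join(w,  NULL);
    pthread_join(r2, NULL);
    return 0;
}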