
OPERATING SYSTEM -SEP

UNIT – 02
PROCESS SYNCHRONIZATION
Process Synchronization is the coordination of execution of multiple processes in a multi-process
system to ensure that they access shared resources in a controlled and predictable manner. It aims to
resolve the problem of race conditions and other synchronization issues in a concurrent system.
Critical Section Problem
A critical section is a code segment that can be accessed by only one process at a time. The critical
section contains shared variables that need to be synchronized to maintain the consistency of data
variables. So the critical section problem means designing a protocol that cooperating processes can use to access shared resources without creating data inconsistencies.
For example, in a bank account program, every operation that reads or updates the balance variable should be placed inside the critical sections of both the deposit and withdraw routines.
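As a concrete sketch of this idea, a mutex can turn the balance update into a critical section. The account, thread counts, and function names below are illustrative, not taken from the text:

```c
#include <pthread.h>

/* Hypothetical shared bank account: 'balance' is the shared variable,
 * and the mutex makes the update in deposit/withdraw a critical section. */
static long balance = 0;
static pthread_mutex_t balance_lock = PTHREAD_MUTEX_INITIALIZER;

static void deposit(long amount) {
    pthread_mutex_lock(&balance_lock);   /* entry section */
    balance += amount;                   /* critical section */
    pthread_mutex_unlock(&balance_lock); /* exit section */
}

static void withdraw(long amount) {
    pthread_mutex_lock(&balance_lock);
    balance -= amount;
    pthread_mutex_unlock(&balance_lock);
}

static void *depositor(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) deposit(1);
    return NULL;
}

static void *withdrawer(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) withdraw(1);
    return NULL;
}

/* Runs one depositor and one withdrawer concurrently and returns the
 * final balance; with the lock in place, no updates are lost. */
long run_account_demo(void) {
    pthread_t t1, t2;
    balance = 0;
    pthread_create(&t1, NULL, depositor, NULL);
    pthread_create(&t2, NULL, withdrawer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return balance;
}
```

Without the mutex, the interleaved reads and writes of balance would form a race condition and the final value would be unpredictable.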

Each process is typically structured as an entry section, the critical section itself, an exit section, and a remainder section. In the entry section, a process requests permission to enter its critical section.
Any solution to the critical section problem must satisfy three requirements:
• Mutual Exclusion: If a process is executing in its critical section, then no other process is
allowed to execute in the critical section.
• Progress: If no process is executing in the critical section and other processes are waiting
outside the critical section, then only those processes that are not executing in their remainder
section can participate in deciding which will enter the critical section next, and the selection
cannot be postponed indefinitely.
• Bounded Waiting: A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.

Bakery Algorithm

The Bakery Algorithm is a simple process synchronization algorithm used to prevent race conditions in the critical sections of a program or an operating system.
The bakery algorithm is a mutual exclusion algorithm: it allows multiple processes to access the critical section of the program in an orderly manner. The algorithm operates by giving a unique number to each process that requests access to the critical section.
The bakery algorithm is based on the first come, first served property. Therefore, the process with the smallest number is given priority to access the critical section first. If two or more processes are assigned the same number, the process with the lowest process ID is given priority.

The algorithm ensures:


• Mutual exclusion (no two processes in the critical section at once)
• Progress (if no process is in the critical section, one will get in)
• Bounded waiting (no starvation; every process gets a turn)
How the Bakery Algorithm Works
The Bakery Algorithm is based on the idea of a "take-a-number" system, similar to how customers in
a bakery get numbered tickets and are served in order.
1. Each process gets a ticket
• When a process wants to enter the critical section, it picks a number that is one greater
than the highest current ticket among all processes.
• If no other process has a ticket, it picks 1.
2. Ordering of processes
• The process with the smallest ticket number gets access to the critical section first.
• If two processes have the same number, the one with the smaller process ID goes first.
3. Entering the Critical Section
• A process waits until it has the smallest ticket number before entering the critical
section.
• Once finished, it resets its ticket to 0.
4. Fairness & Mutual Exclusion
• Every process gets a unique number, ensuring fair access.
• The algorithm guarantees that only one process enters the critical section at a time.
Algorithm
#define N 8 // Number of processes
int choosing[N] = {0}; // Flag to indicate if process is choosing a number
int number[N] = {0}; // Ticket numbers for processes
void lock(int i) {
    choosing[i] = 1;
    int max = 0; // Take a ticket one greater than the current maximum
    for (int j = 0; j < N; j++)
        if (number[j] > max) max = number[j];
    number[i] = 1 + max;
    choosing[i] = 0;
    for (int j = 0; j < N; j++) {
        while (choosing[j]); // Wait if another process is choosing a number
        // Wait while process j holds a smaller ticket
        // (or an equal ticket with a lower process ID)
        while ((number[j] != 0) &&
               (number[j] < number[i] || (number[j] == number[i] && j < i)));
    }
}
void unlock(int i) {
    number[i] = 0; // Reset the ticket number after exiting the critical section
}
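The pseudocode above can also be sketched as a compilable program. This version is illustrative: the process count, helper names, and use of C11 atomics (so the busy-wait loops behave correctly under an optimizing compiler) are assumptions, not part of the original algorithm's presentation:

```c
#include <pthread.h>
#include <stdatomic.h>

#define NPROC 2                       /* illustrative process count */

static atomic_int choosing[NPROC];    /* 1 while process i picks its ticket */
static atomic_int number[NPROC];      /* ticket numbers; 0 means not waiting */
static long counter = 0;              /* shared variable the lock protects */

static void bakery_lock(int i) {
    atomic_store(&choosing[i], 1);
    int max = 0;                      /* take a ticket above the current max */
    for (int j = 0; j < NPROC; j++) {
        int n = atomic_load(&number[j]);
        if (n > max) max = n;
    }
    atomic_store(&number[i], 1 + max);
    atomic_store(&choosing[i], 0);
    for (int j = 0; j < NPROC; j++) {
        while (atomic_load(&choosing[j]));   /* wait while j is choosing */
        while (atomic_load(&number[j]) != 0 &&
               (atomic_load(&number[j]) < atomic_load(&number[i]) ||
                (atomic_load(&number[j]) == atomic_load(&number[i]) && j < i)));
    }
}

static void bakery_unlock(int i) {
    atomic_store(&number[i], 0);      /* give up the ticket */
}

static void *bakery_worker(void *arg) {
    int id = *(int *)arg;
    for (int k = 0; k < 20000; k++) {
        bakery_lock(id);
        counter++;                    /* critical section */
        bakery_unlock(id);
    }
    return NULL;
}

/* Two workers each increment the counter 20000 times under the bakery
 * lock; mutual exclusion means no increment is lost. */
long run_bakery_demo(void) {
    pthread_t t[NPROC];
    int ids[NPROC];
    counter = 0;
    for (int i = 0; i < NPROC; i++) {
        ids[i] = i;
        atomic_init(&choosing[i], 0);
        atomic_init(&number[i], 0);
    }
    for (int i = 0; i < NPROC; i++)
        pthread_create(&t[i], NULL, bakery_worker, &ids[i]);
    for (int i = 0; i < NPROC; i++)
        pthread_join(t[i], NULL);
    return counter;
}
```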
Advantages of the Bakery Algorithm
• Ensures mutual exclusion.
• Provides fairness (processes are served in order).
• Works for an arbitrary number of processes.
• Does not require atomic operations.
Disadvantages of the Bakery Algorithm
• Not scalable for large systems (each process must check all other processes).
• High memory usage (requires storing ticket numbers for each process).
• Busy-waiting (processes continuously check conditions, leading to inefficiency).
Semaphores
Semaphores are integer variables that are used to solve the critical section problem by means of two atomic operations, wait and signal, which are used for process synchronization.
• Wait: The wait operation decrements the value of its argument S if it is positive. While S is zero or negative, the process keeps waiting and no decrement is performed.
wait(S) {
    while (S <= 0); // busy-wait until S becomes positive
    S--;
}
• Signal: The signal operation increments the value of its argument S.
signal(S) {
    S++;
}
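On POSIX systems, wait and signal correspond to sem_wait and sem_post, which block the caller rather than spinning. A minimal sketch (the counter, iteration counts, and function names are illustrative assumptions):

```c
#include <pthread.h>
#include <semaphore.h>

static sem_t s;                 /* semaphore used here as a lock (value 1) */
static long shared_counter = 0; /* variable protected by the semaphore */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 50000; i++) {
        sem_wait(&s);           /* wait(S): blocks until S > 0, then decrements */
        shared_counter++;       /* critical section */
        sem_post(&s);           /* signal(S): increments S, waking a waiter */
    }
    return NULL;
}

/* Two workers each increment the counter 50000 times; because every
 * update happens under the semaphore, none are lost. */
long run_semaphore_demo(void) {
    pthread_t t1, t2;
    shared_counter = 0;
    sem_init(&s, 0, 1);         /* binary use: initial value 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&s);
    return shared_counter;
}
```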

Types of Semaphores
There are two main types of semaphores: counting semaphores and binary semaphores.
Counting Semaphore
• Description: Can take non-negative integer values (not limited to 0 or 1).
• Use Case: Useful for managing a resource pool with multiple identical resources (e.g.,
multiple printers).
• Behaviour: The count represents the number of available resources. It can be incremented or
decremented.
Binary Semaphore (Mutex Semaphore)
• Description: Can only take the value 0 or 1.
• Use Case: Used for mutual exclusion, ensuring only one thread accesses a critical section at a
time.
• Behaviour: Works like a lock; 1 means available, 0 means locked.
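A counting semaphore initialised to the pool size caps how many threads can hold a resource at once. The sketch below is illustrative (the pool size, worker count, and function names are assumptions):

```c
#include <pthread.h>
#include <semaphore.h>

#define POOL_SIZE 3   /* e.g. three identical printers (illustrative) */
#define WORKERS   8

static sem_t pool;    /* counting semaphore, initial value POOL_SIZE */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static int in_use = 0, max_in_use = 0;

static void *use_resource(void *arg) {
    (void)arg;
    sem_wait(&pool);                  /* acquire one resource from the pool */
    pthread_mutex_lock(&m);
    if (++in_use > max_in_use) max_in_use = in_use;
    pthread_mutex_unlock(&m);
    /* ... use the resource ... */
    pthread_mutex_lock(&m);
    in_use--;
    pthread_mutex_unlock(&m);
    sem_post(&pool);                  /* return the resource to the pool */
    return NULL;
}

/* Returns the highest number of simultaneous users observed; the
 * counting semaphore guarantees it never exceeds POOL_SIZE. */
int run_pool_demo(void) {
    pthread_t t[WORKERS];
    in_use = max_in_use = 0;
    sem_init(&pool, 0, POOL_SIZE);
    for (int i = 0; i < WORKERS; i++)
        pthread_create(&t[i], NULL, use_resource, NULL);
    for (int i = 0; i < WORKERS; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return max_in_use;
}
```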
Advantages of Semaphores
• Semaphores allow only one process into the critical section. They follow the mutual exclusion
principle strictly and are much more efficient than some other methods of synchronization.
• In blocking (queue-based) implementations of semaphores there is no resource wastage due to busy waiting, because processor time is not wasted on repeatedly checking whether a condition allows a process to access the critical section.
• Semaphores are implemented in the machine independent code of the microkernel. So they
are machine independent.
Disadvantages of Semaphores
• Semaphores are complicated, so the wait and signal operations must be implemented in the correct order to prevent deadlocks.
• Semaphores are impractical for large scale use, as their use leads to a loss of modularity. This happens because the wait and signal operations prevent the creation of a structured layout for the system.
• Semaphores may lead to priority inversion, where low priority processes access the critical section first and high priority processes only later.
Synchronization problems: Below are some of the classical problems depicting the flaws of process synchronization in systems where cooperating processes are present.
❖ Bounded Buffer Problem: The bounded buffer problem, which is also called the producer consumer problem, is one of the classic problems of synchronization.
There is a buffer of n slots, and each slot is capable of storing one unit of data. There are two processes running, namely the producer and the consumer, which operate on the buffer.

• A producer tries to insert data into an empty slot of the buffer. A consumer tries to remove
data from a filled slot in the buffer. As you might have guessed by now, those two processes
won't produce the expected output if they are being executed concurrently.
• There needs to be a way to coordinate the producer and the consumer so that each works correctly without interfering with the other.
One solution to this problem is to use semaphores. The semaphores used here are:
• mutex, a binary semaphore which is used to acquire and release the lock on the buffer.
• empty, a counting semaphore whose initial value is the number of slots in the buffer, since initially all slots are empty.
• full, a counting semaphore whose initial value is 0.
At any instant, the current value of empty represents the number of empty slots in the buffer and full
represents the number of occupied slots in the buffer.
The Producer Operation
The pseudocode of the producer function looks like this:
do
{
    // wait until empty > 0 and then decrement 'empty'
    wait(empty);
    // acquire lock
    wait(mutex);
    /* perform the insert operation in a slot */
    // release lock
    signal(mutex);
    // increment 'full'
    signal(full);
} while(TRUE);

• Looking at the above code for a producer, we can see that the producer first waits until there is at least one empty slot.
• Then it decrements the empty semaphore, because there will now be one less empty slot, since the producer is going to insert data into one of those slots.
• Then, it acquires lock on the buffer, so that the consumer cannot access the buffer until
producer completes its operation.
• After performing the insert operation, the lock is released and the value of full is incremented
because the producer has just filled a slot in the buffer.
The Consumer Operation
do
{
    // wait until full > 0 and then decrement 'full'
    wait(full);
    // acquire the lock
    wait(mutex);
    /* perform the remove operation in a slot */
    // release the lock
    signal(mutex);
    // increment 'empty'
    signal(empty);
} while(TRUE);

• The consumer waits until there is at least one full slot in the buffer.
• Then it decrements the full semaphore because the number of occupied slots will be
decreased by one, after the consumer completes its operation.
• After that, the consumer acquires lock on the buffer.
• Following that, the consumer completes the removal operation so that the data from one of
the full slots is removed.
• Then, the consumer releases the lock.
• Finally, the empty semaphore is incremented by 1, because the consumer has just removed
data from an occupied slot, thus making it empty.
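The producer and consumer pseudocode above can be sketched as a runnable program using POSIX semaphores. The buffer size, item count, and helper names below are illustrative assumptions:

```c
#include <pthread.h>
#include <semaphore.h>

#define BUF_SLOTS 5      /* n slots (illustrative) */
#define ITEMS     1000   /* items to transfer (illustrative) */

static int buffer[BUF_SLOTS];
static int in_pos = 0, out_pos = 0;
static sem_t empty_slots, full_slots;   /* 'empty' and 'full' from the text */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static long consumed_sum = 0;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 1; i <= ITEMS; i++) {
        sem_wait(&empty_slots);           /* wait(empty) */
        pthread_mutex_lock(&mutex);       /* wait(mutex) */
        buffer[in_pos] = i;               /* insert item into a slot */
        in_pos = (in_pos + 1) % BUF_SLOTS;
        pthread_mutex_unlock(&mutex);     /* signal(mutex) */
        sem_post(&full_slots);            /* signal(full) */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);            /* wait(full) */
        pthread_mutex_lock(&mutex);       /* wait(mutex) */
        consumed_sum += buffer[out_pos];  /* remove item from a slot */
        out_pos = (out_pos + 1) % BUF_SLOTS;
        pthread_mutex_unlock(&mutex);     /* signal(mutex) */
        sem_post(&empty_slots);           /* signal(empty) */
    }
    return NULL;
}

/* Transfers ITEMS integers through the bounded buffer and returns their
 * sum; every produced item is consumed exactly once. */
long run_bounded_buffer_demo(void) {
    pthread_t p, c;
    in_pos = out_pos = 0;
    consumed_sum = 0;
    sem_init(&empty_slots, 0, BUF_SLOTS); /* all slots empty initially */
    sem_init(&full_slots, 0, 0);          /* no slots full initially */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    sem_destroy(&empty_slots);
    sem_destroy(&full_slots);
    return consumed_sum;
}
```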

❖ Readers Writer Problem


There is a shared resource which should be accessed by multiple processes. There are two
types of processes in this context. They are reader and writer. Any number of readers can read
from the shared resource simultaneously, but only one writer can write to the shared resource.
When a writer is writing data to the resource, no other process can access the resource. A writer cannot write to the resource while a nonzero number of readers is accessing it.
The code for the writer process looks like this:
while(TRUE)
{
    wait(w);
    /* perform the write operation */
    signal(w);
}
The code for the reader process looks like this:
while(TRUE)
{
    // acquire lock on read_count
    wait(mutex);
    read_count++;
    if(read_count == 1)
        wait(w); // the first reader locks out writers
    // release lock
    signal(mutex);
    /* perform the reading operation */
    // acquire lock
    wait(mutex);
    read_count--;
    if(read_count == 0)
        signal(w); // the last reader lets writers back in
    // release lock
    signal(mutex);
}
• As seen above in the code for the writer, the writer just waits on the w semaphore until it gets
a chance to write to the resource.
• After performing the write operation, it signals w so that the next waiting writer or reader can access the resource.
• On the other hand, in the code for the reader, the lock is acquired whenever the read_count is
updated by a process.
• When a reader wants to access the resource, first it increments the read_count value, then
accesses the resource and then decrements the read_count value.
• The semaphore w is used by the first reader that enters the critical section and the last reader that exits it.
• The reason for this is that when the first reader enters the critical section, the writer is blocked from the resource. Only other readers can access the resource now.
• Similarly, when the last reader exits the critical section, it signals the writer using the w
semaphore because there are zero readers now and a writer can have the chance to access the
resource.
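The reader and writer pseudocode can be sketched as a runnable program with POSIX semaphores. The thread counts, iteration counts, and helper names are illustrative assumptions; the semaphore w plays the role described in the text:

```c
#include <pthread.h>
#include <semaphore.h>

static sem_t w;                    /* writer / first-reader semaphore */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static int read_count = 0;
static long shared_data = 0;       /* the shared resource */

static void *writer(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        sem_wait(&w);              /* exclusive access */
        shared_data++;             /* write operation */
        sem_post(&w);
    }
    return NULL;
}

static void *reader(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&mutex);
        if (++read_count == 1)
            sem_wait(&w);          /* first reader locks out writers */
        pthread_mutex_unlock(&mutex);
        long v = shared_data;      /* read operation */
        (void)v;
        pthread_mutex_lock(&mutex);
        if (--read_count == 0)
            sem_post(&w);          /* last reader readmits writers */
        pthread_mutex_unlock(&mutex);
    }
    return NULL;
}

/* One writer and two readers run concurrently; the writer's 1000
 * exclusive increments all take effect. */
long run_rw_demo(void) {
    pthread_t wt, r1, r2;
    shared_data = 0;
    read_count = 0;
    sem_init(&w, 0, 1);
    pthread_create(&wt, NULL, writer, NULL);
    pthread_create(&r1, NULL, reader, NULL);
    pthread_create(&r2, NULL, reader, NULL);
    pthread_join(wt, NULL);
    pthread_join(r1, NULL);
    pthread_join(r2, NULL);
    sem_destroy(&w);
    return shared_data;
}
```

Note that this is the readers-preference variant described in the text: while readers keep arriving, a waiting writer can starve.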
