Process Synchronization and Its Application
Submitted by
Name:
Department:
Semester:
Roll Number:
Introduction:
Process Synchronization is the coordination of execution of multiple processes in a multi-
process system to ensure that they access shared resources in a controlled and predictable
manner. It aims to resolve the problem of race conditions and other synchronization issues in
a concurrent system.
The main objective of process synchronization is to ensure that multiple processes access
shared resources without interfering with each other, and to prevent the possibility of
inconsistent data due to concurrent access. To achieve this, various synchronization
techniques such as semaphores, monitors, and critical sections are used.
In a multi-process system, synchronization is necessary to ensure data consistency and
integrity, and to avoid the risk of deadlocks and other synchronization problems. Process
synchronization is an important aspect of modern operating systems, and it plays a crucial
role in ensuring the correct and efficient functioning of multi-process systems.
Two concepts are central to the discussion of synchronization:
Race Condition
A race condition typically occurs when two or more processes (or threads) read, write, and possibly make decisions based on shared memory that they are accessing concurrently, so the final outcome depends on the order in which the accesses happen.
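This lost-update failure can be made concrete with a small, deterministic simulation: each process's non-atomic increment is split into load/add/store steps, and a scheduler replays them in a chosen order. This is a sketch; the names (increment_steps, run, the schedules) are invented for the illustration:

```python
# Deterministic simulation of a lost-update race condition.
# A non-atomic increment of a shared counter is split into three steps
# (load, add, store); interleaving the steps of two processes can lose
# one of the updates.

def increment_steps(shared):
    """Yield the three steps of `shared[0] += 1` as callables."""
    local = {}
    yield lambda: local.__setitem__("r", shared[0])       # load
    yield lambda: local.__setitem__("r", local["r"] + 1)  # add 1
    yield lambda: shared.__setitem__(0, local["r"])       # store

def run(schedule):
    """Run two incrementing processes under a given interleaving."""
    shared = [0]
    procs = {"P1": increment_steps(shared), "P2": increment_steps(shared)}
    for name in schedule:              # the schedule decides who runs next
        next(procs[name])()
    return shared[0]

serial = run(["P1", "P1", "P1", "P2", "P2", "P2"])  # one after the other
racy = run(["P1", "P2", "P1", "P2", "P1", "P2"])    # fully interleaved
print(serial, racy)  # 2 1 -- the interleaved run loses an update
```

Under the serial schedule both increments survive (final count 2); under the interleaved one both processes load the same initial value, so one update overwrites the other (final count 1).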
Critical Section
A critical section is a code segment that can be executed by only one process at a time; the regions of a program that access shared resources and may therefore cause race conditions are called critical sections. The critical section contains the shared variables that must be synchronized to keep the data consistent, so to avoid race conditions we must ensure that only one process at a time executes within it. The critical section problem is thus the problem of designing a protocol that lets cooperating processes access shared resources without creating data inconsistencies. A typical solution surrounds the critical section with an entry section, in which a process requests permission to enter, and an exit section, in which it gives that permission up; any remaining code forms the remainder section.
Any solution to the critical section problem must satisfy three requirements:
• Mutual Exclusion: If a process is executing in its critical section, then no
other process is allowed to execute in the critical section.
• Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
• Bounded Waiting: A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made
a request to enter its critical section and before that request is granted.
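The entry/critical/exit structure can be sketched in Python, with threading.Lock standing in for a correct entry/exit protocol (the sections that follow build such protocols by hand); the worker name and iteration counts are illustrative:

```python
import threading

# The four-part structure of a cooperating process: entry section,
# critical section, exit section, remainder section. threading.Lock
# plays the role of a correct entry/exit protocol here.

lock = threading.Lock()
counter = 0                       # shared variable touched in the CS

def worker(n):
    global counter
    for _ in range(n):
        lock.acquire()            # entry section: at most one thread passes
        counter += 1              # critical section: update shared state
        lock.release()            # exit section: let the next thread in
        # remainder section: private, non-shared work would go here

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 200000 -- no update is lost under mutual exclusion
```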
Lock Variable
1. Entry Section →
2. while (lock != 0);
3. lock = 1;
4. // Critical Section
5. Exit Section →
6. lock = 0;
Looking at the pseudocode, there are three sections: the entry section, the critical section, and the exit section. Initially the value of the lock variable is 0. A process that needs to get into the critical section enters the entry section and checks the condition in the while loop; it busy-waits as long as the value of lock is 1. Since the critical section is initially vacant (lock is 0), the first process enters the critical section after setting the lock variable to 1. When the process exits the critical section, it reassigns the value of lock to 0 in the exit section. Every synchronization mechanism is judged on the basis of four conditions.
1. Mutual Exclusion
2. Progress
3. Bounded Waiting
4. Portability
Out of the four conditions, Mutual Exclusion and Progress must be provided by any solution. Let's analyze this mechanism on the basis of the conditions mentioned above.
Mutual Exclusion
The lock variable mechanism doesn't provide mutual exclusion in some cases. This is best seen by looking at the pseudocode from the operating system's point of view, i.e., the assembly code of the program. Let's convert the code into assembly language.
1. Load Lock, R0    ; load the value of lock into register R0
2. CMP R0, #0       ; compare it with 0
3. JNZ Step 1       ; if the lock was taken, go back and check again
4. Store #1, Lock   ; set lock = 1 and enter the critical section
5. Store #0, Lock   ; exit section: reset lock = 0
Let us consider two processes, P1 and P2, both of which want to execute their critical sections. P2 enters the entry section first and loads the value of lock (which is 0) into its register R0, but it is preempted before it can store 1 into lock. P1 is then scheduled; since the value of lock is still 0, P1 changes it from 0 to 1 and enters the critical section.
Now the CPU changes P2's state from waiting to running while P1 is yet to finish its critical section. P2 has already checked the value of the lock variable and remembers that it was 0 when it last checked, so it also enters the critical section without re-checking the updated value.
We now have two processes in the critical section at the same time. According to the mutual exclusion condition, more than one process must not be present in the critical section simultaneously. Hence, the lock variable mechanism doesn't guarantee mutual exclusion.
The problem with the lock variable mechanism is that more than one process can see the lock as vacant at the same time, and hence more than one process can enter the critical section. Since the lock variable does not provide mutual exclusion, it cannot be used in general. Because the method fails this most basic requirement, there is no need to analyze it against the remaining conditions.
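The failing interleaving can be replayed deterministically by modelling the two processes as Python generators whose yield points mark where a preemption may occur. A sketch; all names (lock_steps, in_cs) are invented for the illustration:

```python
# Step-by-step simulation of the naive lock variable protocol.
# Each process runs: [check lock == 0] -> [set lock = 1, enter CS].
# Because the check and the set are separate steps, a preemption
# between them lets both processes into the critical section.

def lock_steps(state, me):
    while state["lock"] != 0:        # entry: spin while the lock is taken
        yield "spin"
    yield "checked"                  # preemption point: we saw lock == 0
    state["lock"] = 1                # set the lock...
    state["in_cs"].add(me)           # ...and enter the critical section
    yield "in_cs"

state = {"lock": 0, "in_cs": set()}
p1, p2 = lock_steps(state, "P1"), lock_steps(state, "P2")

next(p1)   # P1 sees lock == 0, then is preempted before setting it
next(p2)   # P2 also sees lock == 0
next(p2)   # P2 sets lock = 1 and enters the critical section
next(p1)   # P1, remembering lock was 0, enters as well
print(sorted(state["in_cs"]))  # ['P1', 'P2'] -- mutual exclusion violated
```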
Towards Test and Set Lock: Modification in the Assembly Code
In the lock variable mechanism, a process can read a stale value of the lock variable and enter the critical section, so more than one process may end up inside it. However, the assembly code shown above can be rearranged as follows without changing the algorithm:
1. Load Lock, R0
2. Store #1, Lock
3. CMP R0, #0
4. JNZ Step 1
By doing this we can provide mutual exclusion to some extent, though not completely. In the updated version, the value of lock is loaded into the local register R0 and then the lock is immediately set to 1. In step 3, the previous value of lock (now stored in R0) is compared with 0; if it is 0, the process simply enters the critical section, otherwise it waits by continuously executing the loop. The benefit of having the process set the lock to 1 immediately is that the process entering the critical section carries the updated value of the lock variable, which is 1. Even if it is preempted and scheduled again, its decision is based on the value it previously loaded into R0 rather than on a fresh read of the lock variable, since it already knows what the updated value is.
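The same generator-based replay shows that this load-then-set variant still fails if a process is preempted between the load and the store, since both processes can load 0 before either stores 1. A sketch with invented names (load_set_steps, in_cs):

```python
# Simulation of the load-then-set variant. Each process runs:
#   r = lock        (load the old value into a local "register")
#   lock = 1        (set the lock unconditionally)
#   if r == 0: enter the critical section, else retry
# A preemption between the load and the store still admits two processes.

def load_set_steps(state, me):
    while True:
        r = state["lock"]          # step 1: load the old value
        yield "loaded"             # preemption point
        state["lock"] = 1          # step 2: set the lock unconditionally
        yield "stored"
        if r == 0:                 # step 3: the old value decides entry
            state["in_cs"].add(me)
            yield "in_cs"
            return

state = {"lock": 0, "in_cs": set()}
p1, p2 = load_set_steps(state, "P1"), load_set_steps(state, "P2")

next(p1)   # P1 loads lock == 0, then is preempted
next(p2)   # P2 also loads lock == 0
next(p2)   # P2 stores 1
next(p2)   # P2 enters: the value it loaded was 0
next(p1)   # P1 stores 1
next(p1)   # P1 also enters: its register still holds the old value 0
print(sorted(state["in_cs"]))  # ['P1', 'P2'] -- still broken
```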
TSL Instruction
The solution provided in the above segment provides mutual exclusion only to some extent; it does not ensure that mutual exclusion will always hold, and there is still a possibility of having more than one process in the critical section. What if a process gets preempted just after executing the first instruction of the modified assembly code? In that case, it carries the old value of the lock variable with it and will enter the critical section regardless of the current value of lock, which may again put two processes in the critical section at the same time.
To get rid of this problem, we have to make sure that preemption cannot take place between loading the previous value of the lock variable and setting it to 1; in other words, the first two instructions must be merged into one. To address this, the hardware provides a special instruction called Test and Set Lock (TSL), which loads the value of the lock variable into the local register R0 and sets the lock to 1 in a single atomic step. The process which executes TSL first enters the critical section, and no other process can enter until the first process comes out, even if the first process is preempted inside the critical section. The assembly code of the solution looks like the following.
1. TSL Lock, R0
2. CMP R0, #0
3. JNZ step 1
Let us analyze TSL against the four conditions:
o Mutual Exclusion
Guaranteed, since the load and the set happen in one atomic step; only the first process to execute TSL can enter the critical section.
o Progress
Guaranteed, since a process outside the critical section cannot block an interested process from entering.
o Bounded Waiting
Bounded waiting is not guaranteed in TSL. Some process might not get a chance for a long time; we cannot guarantee that a waiting process will enter the critical section within a bounded number of turns.
o Architectural Neutrality
Not provided, since TSL depends on the hardware offering such an atomic instruction, which not every architecture does.
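One way to see that an atomic TSL step guarantees mutual exclusion is to enumerate every reachable interleaving of two processes and check that no state has both inside the critical section. A small sketch; the state encoding and step function are invented for this illustration:

```python
# Exhaustive check that an atomic TSL step gives mutual exclusion.
# Each process runs: TSL (load lock into its register AND set lock = 1
# in one indivisible step) -> test the register -> critical section ->
# release. We explore every reachable state of the two processes and
# confirm that none has both of them inside the critical section.

def step(state, i):
    """Advance process i by one atomic step; return the successor state."""
    lock, pcs, regs = state
    pcs, regs = list(pcs), list(regs)
    if pcs[i] == 0:                      # TSL: load and set, atomically
        regs[i], lock = lock, 1
        pcs[i] = 1
    elif pcs[i] == 1:                    # test the value TSL loaded
        pcs[i] = 2 if regs[i] == 0 else 0   # enter the CS, or retry
    elif pcs[i] == 2:                    # leave the CS, release the lock
        lock, pcs[i] = 0, 3
    return (lock, tuple(pcs), tuple(regs))

start = (0, (0, 0), (0, 0))              # lock free, both at the entry
seen, frontier = {start}, [start]
while frontier:                          # depth-first over all schedules
    state = frontier.pop()
    assert state[1] != (2, 2), "both processes in the critical section!"
    for i in (0, 1):
        if state[1][i] != 3:             # process i has not finished yet
            nxt = step(state, i)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
print("mutual exclusion holds in all", len(seen), "reachable states")
```

Because TSL's load and store form one step of the model, no schedule can let the second process observe the lock as free once the first has taken it.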
Peterson Solution
Till now, each of our solutions has been affected by one problem or another. The Peterson solution, however, satisfies all the necessary requirements: Mutual Exclusion, Progress, Bounded Waiting, and Portability.
This is a two-process solution. Let us consider two cooperative processes, P1 and P2. The entry section and exit section are shown below; initially, the interested variables and the turn variable are 0.
Entry section (for a process with index process; the other process has index other):
1. int other;
2. other = 1 - process;
3. interested[process] = TRUE;
4. turn = process;
5. while (interested[other] == TRUE && turn == process);
Exit section:
6. interested[process] = FALSE;
Suppose process P1 arrives first and wants to enter the critical section. It sets its interested variable to TRUE (instruction line 3) and also sets turn to its own index, 1 (line number 4). The condition in line number 5 immediately fails for P1, since the other process is not yet interested, so P1 enters the critical section.
1. P1 → 1 2 3 4 5 CS
Meanwhile, process P1 gets preempted and process P2 is scheduled. P2 also wants to enter the critical section and executes instructions 1, 2, 3 and 4 of the entry section. At instruction 5 it gets stuck, since the condition is not in its favor (the other process's interested variable is still TRUE), and so it goes into busy waiting.
1. P2 → 1 2 3 4 5
P1 is scheduled again and finishes its critical section by executing instruction no. 6 (setting its interested variable to FALSE). Now when P2 checks the condition, it is satisfied, since the other process's interested variable has become FALSE, and P2 also enters the critical section.
1. P1 → 6
2. P2 → 5 CS
Either process may enter the critical section any number of times; the procedure repeats in this cyclic order.
Mutual Exclusion
The method provides mutual exclusion for sure. In the entry section, the while condition tests both variables, so a process is kept out of the critical section as long as the other process is interested and it was itself the last one to update the turn variable. Since the turn variable can favor only one process at a time, both processes can never be inside the critical section simultaneously.
Progress
An uninterested process will never stop an interested process from entering the critical section. Only if the other process is also interested does a process have to wait for its turn.
Bounded waiting
Unlike the earlier mechanisms, the Peterson solution provides bounded waiting, and a deadlock can never happen: the process which first set the turn variable is the one that enters the critical section, since the later write overwrites the earlier one. Therefore, if a process is preempted after executing line number 4 of the entry section, it will definitely get into the critical section on its next chance, and no process can be overtaken indefinitely.
Portability
This is the complete software solution and therefore it is portable on every hardware.
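Peterson's properties can be checked the same way as TSL above, by enumerating every reachable state; each numbered line of the entry/exit steps described in the text is treated as one atomic step. A sketch with an invented state encoding:

```python
# Exhaustive check of Peterson's solution for two processes.
# Steps per process: set interested -> set turn -> while test ->
# critical section -> clear interested. We enumerate every reachable
# state and verify that both are never in the critical section at once.

def step(state, i):
    """Advance process i by one atomic step; return the successor."""
    interested, turn, pcs = list(state[0]), state[1], list(state[2])
    other = 1 - i
    if pcs[i] == 0:                    # declare interest
        interested[i], pcs[i] = 1, 1
    elif pcs[i] == 1:                  # write the turn variable
        turn, pcs[i] = i, 2
    elif pcs[i] == 2:                  # the while test
        if not (interested[other] and turn == i):
            pcs[i] = 3                 # condition fails: enter the CS
    elif pcs[i] == 3:                  # exit: clear interest
        interested[i], pcs[i] = 0, 4
    return (tuple(interested), turn, tuple(pcs))

start = ((0, 0), 0, (0, 0))
seen, frontier = {start}, [start]
while frontier:                        # depth-first over all schedules
    state = frontier.pop()
    assert state[2] != (3, 3), "mutual exclusion violated!"
    for i in (0, 1):
        if state[2][i] != 4:           # process i has not finished yet
            nxt = step(state, i)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
print("Peterson preserves mutual exclusion in all reachable states")
```

A spinning process re-executes the while test without changing the state, so the search space stays finite and the check terminates.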
All the solutions we have seen so far provide mutual exclusion with busy waiting. Busy waiting is not an optimal use of resources, because the CPU stays busy continuously re-checking the while-loop condition even though the process is only waiting for the critical section to become available. Synchronization mechanisms based on busy waiting also suffer from the priority inversion problem: a higher-priority process may spin outside the critical section indefinitely while the lower-priority process that currently holds the critical section never gets scheduled to finish it. These problems call for a solution without busy waiting.
Let's examine the basic model of sleep and wake. Assume that we have two system calls, sleep and wake. A process that calls sleep gets blocked, while a process on which wake is called gets woken up. The producer-consumer problem is the most popular problem for simulating the sleep and wake mechanism. The concept is very simple: if a process cannot proceed, it goes to sleep, and it is woken up later by the other process. In the producer-consumer problem, one process writes something into a shared buffer while the other process reads from it: the process which writes is called the producer, and the process which reads is called the consumer. In order to read and write, both of them use a buffer. How the sleep and wake mechanism provides a solution to the producer-consumer problem is described below.
The producer produces an item and inserts it into the buffer, increasing the global variable count at each insertion. If the buffer is completely full and no slot is available, the producer goes to sleep. On the consumer's end, count is decreased by 1 at each consumption. If the buffer is empty at any point of time, the consumer sleeps; otherwise it keeps consuming. The consumer is woken up by the producer when there is at least 1 item available in the buffer to be consumed; the producer is woken up by the consumer when there is at least one free slot available in the buffer to write into.
The problem arises when the consumer is preempted just before it was about to sleep. Now the consumer is neither sleeping nor consuming. Since the producer is not aware that the consumer is not actually sleeping, it keeps sending wake-up calls, which are simply lost. This wastes system calls. Eventually the buffer fills up and the producer itself goes to sleep, keeping in mind that the consumer will wake it up when a slot becomes free. When the consumer is scheduled again, it executes its pending sleep.
The consumer is now also sleeping, unaware that the producer ever tried to wake it up. This is a kind of deadlock where neither producer nor consumer is active and each is waiting for the other to wake it up. This is a serious problem which needs to be addressed.
A flag bit can be used to get rid of this problem. The producer sets the bit the first time its wake-up call goes unanswered. When the consumer is scheduled again, it checks the bit, learns that the producer tried to wake it, and therefore does not sleep but goes into the ready state to consume whatever the producer has produced. This solution works for only one pair of producer and consumer; if there are n producers and n consumers, we need an integer that records how many wake-up calls have been made and how many processes need not sleep. This integer variable is called a semaphore. We will discuss semaphores in more detail below.
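A sketch of the producer-consumer solution using Python's threading.Semaphore: `empty` counts free slots, `full` counts filled slots, and a mutex protects the buffer itself. The buffer size and item count are arbitrary choices for the illustration:

```python
import threading
from collections import deque

# Producer-consumer with counting semaphores. Waiting on an exhausted
# semaphore plays the role of sleep; releasing it plays the role of
# wake, and no wake-up call can ever be lost.

BUFFER_SIZE = 4
N_ITEMS = 50

buffer = deque()
mutex = threading.Lock()                   # guards the buffer itself
empty = threading.Semaphore(BUFFER_SIZE)   # counts free slots
full = threading.Semaphore(0)              # counts filled slots
consumed = []

def producer():
    for item in range(N_ITEMS):
        empty.acquire()            # wait for a free slot (sleep if full)
        with mutex:
            buffer.append(item)
        full.release()             # signal: one more item available

def consumer():
    for _ in range(N_ITEMS):
        full.acquire()             # wait for an item (sleep if empty)
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()            # signal: one more free slot

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
print(consumed == list(range(N_ITEMS)))  # True: nothing lost or duplicated
```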
Semaphores
A semaphore is a signaling mechanism: a process or thread waiting on a semaphore can be signaled by another thread. This is different from a mutex, as a mutex can be released only by the thread that acquired it. A semaphore is an integer variable which can be accessed only through two atomic operations, wait() and signal(), used for process synchronization.
There are two types of semaphores: Binary Semaphores and Counting Semaphores.
• Binary Semaphores: They can take only the values 0 and 1. They are also known as mutex locks, as they can provide mutual exclusion. All the processes share the same binary semaphore, initialized to 1. To enter its critical section, a process waits until the semaphore's value is 1, then sets it to 0 and starts its critical section. When it completes its critical section, it resets the value to 1 so that some other process can enter its critical section.
• Counting Semaphores: They can take any non-negative value and are not restricted to 0 and 1. They can be used to control access to a resource that allows only a limited number of simultaneous accesses. The semaphore is initialized to the number of instances of the resource. Whenever a process wants to use the resource, it checks (via wait) that the number of remaining instances is greater than zero, i.e., that an instance is available; it can then enter its critical section, decreasing the value of the counting semaphore by 1. When the process is done with its instance of the resource, it leaves the critical section, adding 1 (via signal) to the number of available instances.
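A counting semaphore in this role can be sketched with Python's threading.Semaphore; MAX_USERS and the bookkeeping names (stats, use_resource) are illustrative:

```python
import threading

# A counting semaphore as a resource pool: at most MAX_USERS threads
# may hold an "instance" at a time. We track the peak number of
# simultaneous holders to confirm the limit is never exceeded.

MAX_USERS = 3
pool = threading.Semaphore(MAX_USERS)      # one permit per instance
stats = {"in_use": 0, "peak": 0}
stats_lock = threading.Lock()              # guards the bookkeeping

def use_resource():
    pool.acquire()                 # wait(): take one instance
    with stats_lock:
        stats["in_use"] += 1
        stats["peak"] = max(stats["peak"], stats["in_use"])
    # ... work with the resource instance would happen here ...
    with stats_lock:
        stats["in_use"] -= 1
    pool.release()                 # signal(): return the instance

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(stats["peak"] <= MAX_USERS)  # True: never more than 3 at once
```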