
Technical Report Writing

On
Process Synchronization and its Application

Submitted by
Name:
Department:
Semester:
Roll Number:

Department of Computer Science and Engineering
Academy of Technology
Aedconagar, Hooghly-712121
West Bengal, India
 Abstract:
Process synchronization means coordinating the execution of processes so
that no two processes access the same shared resources and data at the
same time. A program has four sections: the entry section, the critical
section, the exit section, and the remainder section. The segment of code
that only a single process can access at a particular point in time is
known as the critical section.

 Introduction:
Process Synchronization is the coordination of execution of multiple processes in a multi-
process system to ensure that they access shared resources in a controlled and predictable
manner. It aims to resolve the problem of race conditions and other synchronization issues in
a concurrent system.
The main objective of process synchronization is to ensure that multiple processes access
shared resources without interfering with each other, and to prevent the possibility of
inconsistent data due to concurrent access. To achieve this, various synchronization
techniques such as semaphores, monitors, and critical sections are used.
In a multi-process system, synchronization is necessary to ensure data consistency and
integrity, and to avoid the risk of deadlocks and other synchronization problems. Process
synchronization is an important aspect of modern operating systems, and it plays a crucial
role in ensuring the correct and efficient functioning of multi-process systems.
On the basis of synchronization, processes are categorized as one of the following two types:

• Independent Process: The execution of one process does not affect the
execution of other processes.
• Cooperative Process: A process that can affect or be affected by other
processes executing in the system.

The process synchronization problem arises in the case of cooperative
processes, because resources are shared among cooperative processes.

 Procedure & Discussion

When two or more processes cooperate with each other, their order of
execution must be preserved; otherwise conflicts can arise in their
execution and incorrect outputs can be produced. A cooperative process
is one that can affect, or be affected by, the execution of another
process. Such processes need to be synchronized so that their order of
execution can be guaranteed. The procedure involved in preserving the
appropriate order of execution of cooperative processes is known as
process synchronization. Various synchronization mechanisms are used to
synchronize processes.

Race Condition

A race condition typically occurs when two or more threads read, write, and
possibly make decisions based on memory that they are accessing concurrently,
so the final result depends on the order in which they happen to execute.

Critical Section

The regions of a program that try to access shared resources and may cause
race conditions are called critical sections. To avoid race conditions among
processes, we need to ensure that only one process at a time can execute
within the critical section.

A critical section is a code segment that can be accessed by only one process
at a time. The critical section contains shared variables that need to be
synchronized to maintain the consistency of data. The critical section
problem, then, is to design a way for cooperative processes to access shared
resources without creating data inconsistencies.

In the entry section, the process requests permission to enter the critical section.
Any solution to the critical section problem must satisfy three requirements:
• Mutual Exclusion: If a process is executing in its critical section, then no
other process is allowed to execute in the critical section.
• Progress: If no process is executing in the critical section and other
processes are waiting outside it, then only those processes that are not
executing in their remainder section can participate in deciding which process
will enter the critical section next, and this selection cannot be postponed
indefinitely.
• Bounded Waiting: A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made
a request to enter its critical section and before that request is granted.
Lock Variable

This is the simplest synchronization mechanism: a software mechanism
implemented in user mode. It is a busy-waiting solution that can be used for
more than two processes. In this mechanism, a lock variable is used, which can
take one of two values, 0 or 1. Lock value 0 means that the critical section
is vacant, while lock value 1 means that it is occupied. A process that wants
to get into the critical section first checks the value of the lock variable.
If it is 0, the process sets the lock to 1 and enters the critical section;
otherwise it waits. The pseudocode of the mechanism looks like the following.

Entry Section →
    while (lock != 0);   // busy wait until the critical section is vacant
    lock = 1;            // occupy the critical section
// Critical Section
Exit Section →
    lock = 0;            // mark the critical section vacant again

If we look at the pseudocode, we find that there are three sections: the entry
section, the critical section, and the exit section. Initially, the value of
the lock variable is 0. A process that needs to get into the critical section
enters the entry section and checks the condition in the while loop. The
process keeps waiting as long as the value of lock is 1 (that is what the
while loop implies). Since the critical section is vacant at the very first
attempt, the process enters it after setting the lock variable to 1. When the
process exits the critical section, the exit section reassigns the value of
lock to 0. Every synchronization mechanism is judged on the basis of four
conditions:

1. Mutual Exclusion
2. Progress
3. Bounded Waiting
4. Portability

Out of these four conditions, mutual exclusion and progress must be provided
by any solution. Let's analyze this mechanism on the basis of the conditions
mentioned above.

Mutual Exclusion

The lock variable mechanism doesn't provide mutual exclusion in some cases.
This can be better seen from the operating system's point of view, i.e., the
assembly code of the program. Let's convert the pseudocode into assembly
language.

1. Load Lock, R0     ; read the lock variable into register R0
2. CMP R0, #0        ; compare it with 0
3. JNZ Step 1        ; if not 0, the section is occupied: check again
4. Store #1, Lock    ; set the lock (entry section complete)
5. Store #0, Lock    ; exit section: release the lock

Let us consider two processes, P1 and P2. P1 wants to execute its critical
section and gets into the entry section. Since the value of lock is 0, P1
passes the while-loop check, but it is preempted just before it can set lock
to 1.

Meanwhile, P2 gets scheduled. No process is in the critical section and the
value of the lock variable is still 0. P2 also wants to execute its critical
section, so it sets the lock variable to 1 and enters the critical section.

Now the CPU changes P1's state from waiting to running. P1 has already checked
the value of the lock variable and remembers that it was 0 when it previously
checked. Hence, it also sets lock and enters the critical section without
checking the updated value of the lock variable.

Now we have two processes in the critical section. According to the mutual
exclusion condition, more than one process must not be present in the critical
section at the same time. Hence, the lock variable mechanism does not
guarantee mutual exclusion.

The problem with the lock variable mechanism is that more than one process can
see the "vacant" value at the same time, so more than one process can enter
the critical section. Hence, the lock variable does not provide mutual
exclusion, which is why it cannot be used in general.

Since this method fails at the most basic requirement, there is no need to
examine the other conditions.
Test Set Lock Mechanism: Modification in the Assembly Code

In the lock variable mechanism, a process sometimes reads the old value of the
lock variable and enters the critical section; for this reason, more than one
process might get into the critical section. However, the code shown in
Section 1 below can be replaced with the code shown in Section 2. This doesn't
change the algorithm, but by doing so we can provide mutual exclusion to some
extent, though not completely. In the updated version of the code, the value
of lock is loaded into the local register R0 and then the value of lock is
immediately set to 1. In step 3, the previous value of lock (now stored in R0)
is compared with 0. If it is 0, the process simply enters the critical
section; otherwise it waits by executing the loop continuously. The benefit of
the process setting the lock to 1 immediately is that the process which enters
the critical section carries the updated value of the lock variable, which is
1. Even if it gets preempted and scheduled again, it will not enter the
critical section on the strength of a stale value, because it already holds
the value it loaded before setting the lock.

Section 1 (original)          Section 2 (modified)
1. Load Lock, R0              1. Load Lock, R0
2. CMP R0, #0                 2. Store #1, Lock
3. JNZ Step 1                 3. CMP R0, #0
4. Store #1, Lock             4. JNZ Step 1

TSL Instruction

The solution provided in the above segment provides mutual exclusion to some
extent, but it does not ensure that mutual exclusion will always hold. There
is still a possibility of having more than one process in the critical
section. What if a process gets preempted just after executing the first
instruction of the assembly code written in Section 2? In that case, it will
carry the old value of the lock variable with it and will enter the critical
section regardless of the current value of the lock variable. This can put two
processes in the critical section at the same time. To get rid of this
problem, we have to make sure that preemption cannot take place between
loading the previous value of the lock variable and setting it to 1. The
problem can be solved if we can merge the first two instructions.

To address this, the hardware provides a special instruction called Test and
Set Lock (TSL), which loads the value of the lock variable into the local
register R0 and sets it to 1 in a single atomic step. The process which
executes TSL first will enter the critical section, and no other process can
enter until the first process comes out. No process can execute the critical
section even if the first process is preempted. The assembly code of the
solution looks like the following.

1. TSL Lock, R0
2. CMP R0, #0
3. JNZ step 1

Let's examine TSL on the basis of the four conditions.

o Mutual Exclusion

Mutual exclusion is guaranteed in the TSL mechanism, since a process can
never be preempted between reading the lock variable and setting it. Only
one process can see the lock variable as 0 at a particular time, and that is
why mutual exclusion is guaranteed.

o Progress

According to the definition of progress, a process which doesn't want to
enter the critical section should not stop other processes from getting into
it. In the TSL mechanism, a process executes the TSL instruction only when it
wants to get into the critical section. The value of the lock will always be
0 if no process wants to enter the critical section; hence progress is always
guaranteed in TSL.

o Bounded Waiting

Bounded waiting is not guaranteed in TSL. Some process might not get a chance
for a long time: we cannot guarantee that a process will get to enter the
critical section within any bounded number of turns.

o Architectural Neutrality

TSL doesn't provide architectural neutrality: it depends on the hardware
platform, since the TSL instruction must be provided by the hardware, and
some platforms might not provide it. Hence it is not architecturally neutral.

Peterson Solution

This is a software mechanism implemented in user mode. It is a busy-waiting
solution that can be implemented for only two processes. It uses two
variables: the turn variable and the interested variable. The code of the
solution is given below.

1.  #define N 2
2.  #define TRUE 1
3.  #define FALSE 0
4.  int interested[N] = {FALSE, FALSE};
5.  int turn;
6.  void Entry_Section(int process)
7.  {
8.      int other;
9.      other = 1 - process;
10.     interested[process] = TRUE;
11.     turn = process;
12.     while (interested[other] == TRUE && turn == process);
13. }
14. void Exit_Section(int process)
15. {
16.     interested[process] = FALSE;
17. }

Till now, each of our solutions has been affected by one problem or another.
The Peterson solution, however, provides all the necessary requirements:
mutual exclusion, progress, bounded waiting, and portability.

Analysis of Peterson Solution

void Entry_Section(int process)
{
1.  int other;
2.  other = 1 - process;
3.  interested[process] = TRUE;
4.  turn = process;
5.  while (interested[other] == TRUE && turn == process);
}

// Critical Section

void Exit_Section(int process)
{
6.  interested[process] = FALSE;
}

This is a two-process solution. Let us consider two cooperative processes, P1
and P2, whose entry and exit sections are shown above. Initially, the values
of the interested variables and the turn variable are 0. Process P1 arrives
and wants to enter the critical section. It sets its interested variable to
TRUE (instruction 3) and sets turn to its own id (instruction 4). Since P2 is
not yet interested, the while condition on instruction 5 fails immediately,
so P1 enters the critical section.

P1 → 1 2 3 4 5 CS

Meanwhile, process P1 gets preempted and process P2 gets scheduled. P2 also
wants to enter the critical section and executes instructions 1, 2, 3, and 4
of the entry section. On instruction 5 it gets stuck, since the condition
holds: the other process's interested variable is still TRUE and P2 was the
last to set turn. Therefore it goes into busy waiting.

P2 → 1 2 3 4 5

P1 gets scheduled again and finishes its critical section, then executes
instruction 6 (setting its interested variable to FALSE). Now when P2 checks
the condition again, it is no longer satisfied, since the other process's
interested variable has become FALSE. P2 therefore enters the critical
section as well.

P1 → 6
P2 → 5 CS

Either process may enter the critical section multiple times; the procedure
simply repeats in cyclic order.

Mutual Exclusion

The method certainly provides mutual exclusion. In the entry section, the
while condition involves both variables; a process therefore cannot enter the
critical section while the other process is interested and it was itself the
last one to update the turn variable.

Progress

An uninterested process will never stop another interested process from
entering the critical section. If the other process is also interested, then
the process will wait.

Bounded Waiting

An interested-variable mechanism on its own fails because it does not provide
bounded waiting. In the Peterson solution, however, a deadlock can never
happen, because the process which first sets the turn variable is guaranteed
to enter the critical section. Therefore, if a process is preempted after
executing line 4 of the entry section, it will definitely get into the
critical section on its next chance.

Portability

This is a purely software solution, and therefore it is portable to any
hardware.

Synchronization Mechanisms without Busy Waiting

All the solutions we have seen so far provide mutual exclusion with busy
waiting. However, busy waiting is not an optimal use of resources, because it
keeps the CPU busy checking the while-loop condition continuously even though
the process is only waiting for the critical section to become available.
Synchronization mechanisms with busy waiting also suffer from the priority
inversion problem: there is always a possibility of a spin lock in which a
higher-priority process has to wait outside the critical section while a
lower-priority process executes inside it. These problems call for a proper
solution without busy waiting and priority inversion.

Sleep and Wake (Producer-Consumer Problem)

Let's examine the basic sleep and wake model. Assume that we have two system
calls, sleep and wake. A process which calls sleep gets blocked, while a
process on which wake is called gets woken up. The producer-consumer problem
is the most popular example that simulates the sleep and wake mechanism. The
concept is simple: if a process cannot proceed, it goes to sleep, and it is
woken up by the other process so that it can then get into the critical
section.


In the producer-consumer problem, there are two processes: one writes
something while the other reads it. The process which writes is called the
producer, while the process which reads is called the consumer. Both of them
use a shared buffer. The code that simulates the sleep and wake mechanism as
a solution to the producer-consumer problem is shown below.

#define N 100              // maximum slots in the buffer
int count = 0;             // items currently in the buffer

void producer(void)
{
    int item;
    while (TRUE)
    {
        item = produce_item();     // producer produces an item
        if (count == N)            // if the buffer is full,
            sleep();               // the producer sleeps
        insert_item(item);         // the item is inserted into the buffer
        count = count + 1;
        if (count == 1)            // the producer wakes up the consumer
            wakeup(consumer);      // if there is at least 1 item in the buffer
    }
}

void consumer(void)
{
    int item;
    while (TRUE)
    {
        if (count == 0)            // the consumer sleeps if the buffer is empty
            sleep();
        item = remove_item();
        count = count - 1;
        if (count == N - 1)        // if at least one slot is now free,
            wakeup(producer);      // the consumer wakes up the producer
        consume_item(item);        // the item is read by the consumer
    }
}

The producer produces an item and inserts it into the buffer. The value of the
global variable count is increased at each insertion. If the buffer is
completely full and no slot is available, the producer sleeps; otherwise it
keeps inserting.

On the consumer's end, the value of count is decreased by 1 at each
consumption. If the buffer is empty at any point, the consumer sleeps;
otherwise it keeps consuming items and decreasing count by 1.

The consumer is woken up by the producer when there is at least one item in
the buffer to be consumed. The producer is woken up by the consumer when
there is at least one free slot in the buffer to write into.

The problem arises when the consumer is preempted just before it goes to
sleep. Now the consumer is neither sleeping nor consuming. Since the producer
is not aware that the consumer is not actually sleeping, it calls wake-up on
the consumer, but the consumer does not respond, because it is not sleeping.
The wake-up call is wasted. When the consumer is scheduled again, it goes to
sleep, because it was about to sleep when it was preempted.


The producer keeps writing into the buffer, which fills up after some time.
The producer then also goes to sleep, expecting that the consumer will wake
it up when a slot becomes available in the buffer.

The consumer is also sleeping, unaware that the producer will never wake it
up. This is a kind of deadlock: neither the producer nor the consumer is
active, and each is waiting for the other to wake it up. This is a serious
problem which needs to be addressed.

Using a flag bit to get rid of this problem

A flag bit can be used to get rid of this problem. The producer sets the bit
when it first calls wake-up. When the consumer is scheduled, it checks the
bit, learns that the producer tried to wake it, and therefore does not sleep
but moves into the ready state to consume whatever the producer has produced.
This solution works for only one pair of producer and consumer; what if there
are n producers and n consumers? In that case, we need to maintain an integer
which records how many wake-up calls have been made and how many consumers
need not sleep. This integer variable is called a semaphore. We will discuss
semaphores in more detail below.

Semaphores: A semaphore is a signaling mechanism; a thread that is waiting on
a semaphore can be signaled by another thread. This is different from a mutex,
which can be signaled only by the thread that called the wait function.

A semaphore uses two atomic operations, wait and signal, for process
synchronization. A semaphore is an integer variable which can be accessed
only through the two operations wait() and signal().

There are two types of semaphores: binary semaphores and counting semaphores.

• Binary Semaphores: They can take only the values 0 or 1. They are also known
as mutex locks, as they can provide mutual exclusion. All the processes share
the same mutex semaphore, which is initialized to 1. A process that wants to
enter its critical section performs wait(), which decrements the semaphore to
0, and then starts its critical section; any other process that performs
wait() meanwhile is blocked. When the first process completes its critical
section, it performs signal(), which resets the semaphore to 1 so that some
other process can enter its critical section.
• Counting Semaphores: They can take any non-negative value and are not
restricted to a certain domain. They can be used to control access to a
resource that has a limit on the number of simultaneous accesses. The
semaphore is initialized to the number of instances of the resource. Whenever
a process wants to use the resource, it checks whether the number of
remaining instances is more than zero, i.e., whether an instance is
available. If so, the process enters its critical section, decreasing the
value of the counting semaphore by 1. When the process is done using the
instance of the resource, it leaves the critical section, adding 1 back to
the number of available instances.

Advantages and Disadvantages:

Advantages of Process Synchronization:
• Ensures data consistency and integrity
• Avoids race conditions
• Prevents inconsistent data due to concurrent access
• Supports efficient and effective use of shared resources

Disadvantages of Process Synchronization:
• Adds overhead to the system
• Can lead to performance degradation
• Increases the complexity of the system
• Can cause deadlocks if not implemented properly
 Conclusion
Process synchronization is the task of ensuring that multiple processes can
safely share resources without interfering with each other. It is a critical
part of operating system design, as it ensures that data integrity and
resource efficiency are maintained. There are a number of different
synchronization mechanisms available, each with its own advantages and
disadvantages. The choice of synchronization mechanism depends on the
specific needs of the application. By understanding the benefits and
challenges of process synchronization, developers can choose the right
synchronization mechanisms for their specific needs.

