Term Paper Nay Ma
Section: B
Batch: BICE-21
Semester: 04
Date of Submission: 20/03/2023
1. Abstract:
To conduct this research, we carried out a thorough literature review of published studies and analyzed case studies of real-world implementations. Our findings indicate that while all synchronization techniques have their strengths and weaknesses, each can be effective in specific scenarios.
Based on our research, we conclude that no single process synchronization technique can be
universally applied to all concurrent programming scenarios. Instead, developers should
carefully consider the requirements and constraints of their specific use cases and select the most
appropriate synchronization technique accordingly. Our research contributes to a better
understanding of process synchronization and provides valuable insights for software developers
and researchers in the field of concurrent programming and operating systems.
Table of Contents
1. Abstract
3. Types of Process
 I. Independent Process
 II. Cooperative Process
4. Elements of a Process
6. Race Condition
 Types of Race Condition
 How to Detect Race Condition
 Prevention of Race Condition
 What is the Critical Section Problem
 Example of the Critical Section Problem
 Solving the Critical Section Problem
8. Deadlock
 What Is Deadlock
 Example of Deadlock
 Conditions for Deadlock
13. Conclusion
References
2. Introduction to Process Synchronization:
Process synchronization is the task of coordinating concurrent processes so that they access shared data and resources in a controlled, predictable order. For example, consider a bank that stores the account balance of each customer in the same database. Suppose you initially have x rupees in your account. You withdraw some amount of money, and at the same time someone else reads the balance stored in your account. Since the withdrawal takes time to complete, the other party still reads x as your balance even though the post-transaction balance is lower, which leaves the data inconsistent. If we could make sure that only one process accesses the account at a time, we could guarantee consistent data.
Fig. 1: An illustration on process synchronization
In the image above, if Process1 and Process2 run at the same time, user 2 will read the wrong account balance Y, because Process1 is still being transacted while the balance is X [2].
3. Types of Process:
I. Independent Process:
An independent process is a type of process that operates on its own without requiring
synchronization or coordination with other processes. These processes can carry out their
tasks without any interference or influence from other processes. Since independent
processes do not share resources with other processes, they do not need to use
synchronization mechanisms such as locks, semaphores, or monitors. Simple tasks such as
basic input/output operations or printing output to the screen are examples of independent
processes.
II. Cooperative Process:
A Cooperative process is a type of process that requires synchronization and coordination
with other processes to achieve a common objective. These processes share resources like
memory, files, or input/output devices with other processes, which can result in conflicts and
inconsistencies if not synchronized properly. To prevent such issues, cooperative processes
use synchronization mechanisms such as locks, semaphores, monitors, or message passing to
communicate and coordinate their activities with other processes. Examples of cooperative
processes include multiple threads operating within a single program or multiple programs
working together to complete a task that cannot be done by a single process.
4. Elements of a Process:
Program Code: The program code is the set of instructions that define the behavior of the
process. It describes the steps the process needs to execute to accomplish a specific task.
Program Counter: The program counter is a register that holds the memory address of the next
instruction to be executed by the process.
Data: The data used by the process, such as variables and constants, are stored in memory
locations allocated to the process.
Stack: The stack is a region of memory used by the process to store temporary data, such as
function call frames and local variables.
Heap: The heap is a region of memory used by the process to dynamically allocate memory for
data structures.
Process ID: The process ID is a unique identifier assigned to the process by the operating
system.
Registers: Registers are small, fast memory locations within the CPU used to store frequently
accessed data. They are used to store the state of the process, including program counter, stack
pointer, and other important information.
These elements work together to define the behavior and state of a process in a computer system.
Process synchronization is necessary in a computer system to ensure that multiple processes can
share resources, such as memory, input/output devices, and files, in an orderly and predictable
manner. When multiple processes try to access the same resource simultaneously, conflicts and
inconsistencies can arise, leading to issues such as data corruption, race conditions, and
deadlocks [4].
To prevent conflicts and inconsistencies when multiple processes require access to shared
resources in a computer system, synchronization methods such as locks, semaphores, monitors,
and message passing are implemented. These methods enable processes to communicate and
collaborate, ensuring that they do not interfere with each other's operations and that they access
shared resources in a mutually exclusive way.
Process synchronization is necessary to ensure that the computer system operates correctly, reliably, and efficiently. It avoids issues that can cause system crashes, data loss, or poor performance, and ensures that processes are executed in a coordinated manner, leading to better overall system performance.
6. Race Condition:
A race condition happens when a system or device tries to carry out multiple operations
simultaneously, but the order in which these operations need to be executed is important for them
to work correctly. This issue is often seen in programming and computer science when two
threads or processes try to access the same resource simultaneously, leading to complications in
the system. It's a prevalent problem in applications that involve multiple threads.
Types of Race Condition:
1. Critical Race Condition: A critical race condition occurs when the order of operations changes the final state of the system. Consider multiple light switches connected to a common ceiling light, wired so that either switch can turn the light on or off. If two people flip different switches at the same moment, one instruction might cancel the other, or the circuit breaker could trip due to the conflicting signals.
2. Noncritical Race Condition: If both switches are flipped simultaneously and the light simply turns on, it is a noncritical race condition. The end state of the light is the same as flipping just one switch, and the conflict has no impact on the overall outcome.
3. Read-modify-write Race condition: It occurs when two processes read and write a
value in a program simultaneously, which can lead to software bugs. For example, if two
checks are processed at the same time and the system reads the same account balance for
both, it may give an incorrect balance, resulting in an overdrawn account. The
expectation is that the two processes should occur sequentially, but when they happen
simultaneously, it can cause unexpected results.
4. Check-then-act Race condition: It occurs when two workflows check a value and take different actions based on it, but only one can actually proceed with it. The workflow that loses the race finds the value missing (null) and continues with outdated information, which can lead to an incorrect outcome that the program then uses going forward.
An example on Race condition:
For example, suppose there are two processes, T1 and T2, both of which call the bankAc function at the same time. T1 passes the value 300 as a parameter, while T2 passes the value 100. The shared variable holds a previous value of 1000.
Assuming T1 and T2 execute on different processors, T1 loads 1000 into its register and adds 300 to it, producing 1300. Meanwhile, T2 also loads 1000 into its register and adds 100 to it, producing 1100.
T1 then stores 1300 in the shared variable, after which T2 overwrites it with 1100. One update is lost: the final balance should have been 1400.
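This lost-update interleaving can be replayed deterministically in a short single-threaded sketch, where local variables stand in for each process's registers (the amounts are the ones from the example above):

```python
# Replay the interleaving step by step, treating local variables as the
# CPU registers of T1 and T2.
balance = 1000   # shared variable

t1_reg = balance     # T1 loads 1000 into its register
t2_reg = balance     # T2 loads 1000 into its register
t1_reg += 300        # T1 computes 1300
t2_reg += 100        # T2 computes 1100

balance = t1_reg     # T1 stores 1300
balance = t2_reg     # T2 stores 1100, overwriting T1's deposit

print(balance)  # 1100 instead of the expected 1400: a lost update
```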
How to Detect Race Condition:
Detecting and identifying race conditions can be challenging, since they are a semantic problem that can arise from multiple issues within the code. It is better to prevent these problems by designing code carefully. Programmers use dynamic and static analysis tools to identify race conditions, but both have limitations. Static analysis tools scan a program without running it, but they can produce false reports. Dynamic analysis tools produce fewer false reports but may miss race conditions in code paths that are never executed.
Data races are often the cause of race conditions, which occur when two threads target the same
memory location concurrently, and at least one is a write operation. Data races are easier to
detect because they require specific conditions to occur, and tools like the Go Project's Data Race
Detector can monitor for them. Race conditions are more closely related to application semantics
and can lead to broader problems.
Prevention of Race Condition:
Networking: In a network, a race condition often occurs when two or more users try to use the same channel simultaneously and the computer cannot decide which user to give priority to. This is especially common in networks with long delays. To avoid the problem, a priority scheme can be put in place so that one user is given priority over the others, ensuring that only one user can access the network channel at a time.
7. Critical Section Problem:
The critical section is the part of the code where processes access shared resources, such as common variables and files, and perform write operations on them. Since multiple processes execute concurrently and can be interrupted at any point, unsynchronized access leads to inconsistencies in the shared data. The critical section problem is to design a protocol that processes must follow so that only one process can execute its critical section at any given time, while all other processes wait for the executing process to complete.
Fig. 2: Critical section in OS [7]
Let us consider two processes named T1 and T2, and let Add = 6. Both processes perform operations on the shared variable, but since they execute concurrently, their operations may interfere with each other.
The code is given below:
Add = Add + 3 // Process T1: Add becomes 9
Add = Add - 3 // Process T2 reads the old value 6: Add becomes 3
In the example above, T2 changed the value of Add to 3, which was not expected: it read the stale value 6 instead of T1's result 9. To avoid such issues, we solve the critical section problem by allowing only one process to access the shared resource at a time, while the other processes wait until the current process is done. This way, the shared resource remains consistent and synchronized.
There are several software-based solutions to the critical section problem, such as Peterson's solution, semaphores, and mutexes, which are covered later.
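As a minimal sketch (assuming Python's threading module; the variable names mirror the Add example above), a lock serializes the two read-modify-write operations so the result is always consistent:

```python
import threading

add = 6
add_lock = threading.Lock()

def t1():
    global add
    with add_lock:        # enter critical section
        add = add + 3     # read-modify-write runs without interference

def t2():
    global add
    with add_lock:        # waits if t1 is inside its critical section
        add = add - 3

threads = [threading.Thread(target=t1), threading.Thread(target=t2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(add)  # always 6: both +3 and -3 are applied, in some serial order
```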
8. Deadlock:
What Is Deadlock:
A deadlock is a critical situation in an operating system. It occurs when multiple processes are blocked because each process holds a resource while waiting for a different resource that is held by another process. None of the processes can proceed, so all of them remain stuck and unable to execute [8].
Example of Deadlock:
Suppose there are two processes, P1 and P2, running simultaneously in the OS. Process P1 holds a lock on some rows in the Resource1 table and needs to update some rows in the Resource2 table. Simultaneously, Process P2 holds locks on those very rows (which P1 needs to update) in the Resource2 table but needs to update the rows in the Resource1 table held by Process P1.
Fig. 3: Deadlock Situation
Conditions for Deadlock:
Mutual Exclusion: Non-sharable resources can be used by only one process at a time. Only a single process can access the resource at any given moment, and other processes must wait for the resource to become available before they can use it.
Hold and Wait: A process holds at least one resource while waiting to acquire additional resources that are held by other processes. The process cannot proceed until it acquires those resources, which are currently in the hands of other processes.
No Preemption: In a system where a resource can only be released voluntarily, the
process that is holding the resource can only release it after it has completed its task. This
means that the process cannot release the resource until it has finished using it, and it
cannot be taken away by another process until it is voluntarily released by the process
that is holding it.
Circular Wait: When a process is waiting for a resource that is being held by another
process, and that process is also waiting for a different resource that is being held by yet
another process, this can create a circular chain. The cycle continues until the final
process is waiting for a resource that is being held by the first process, thus forming a
closed loop of processes waiting for resources that are held by other processes.
Example: Suppose there are three USB drives and three processes in a computer system.
Each process is capable of holding one of the USB drives. If each process requests an
additional USB drive that is being held by another process, then a deadlock situation can
arise. In this case, each process will be waiting for the USB drive to be released, which is
currently in use by another process. As a result, a circular chain is created where each
process is waiting for a resource that is being held by another process, ultimately
resulting in a deadlock situation.
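As an illustrative sketch (Python's threading module; the lock names and the 0.5-second timeout are arbitrary choices), the P1/P2 scenario above can be reproduced with two locks acquired in opposite orders. The timeout exists only so the demonstration terminates instead of hanging forever:

```python
import threading

resource1 = threading.Lock()
resource2 = threading.Lock()
barrier = threading.Barrier(2)   # lets both processes grab their first lock first
results = {}

def process(name, first, second):
    with first:                  # hold one resource (mutual exclusion)
        barrier.wait()           # both now hold a lock: hold and wait
        # Each process's "second" lock is held by the other: circular wait.
        got_it = second.acquire(timeout=0.5)
        results[name] = got_it
        if got_it:
            second.release()
        barrier.wait()           # keep holding until both attempts have finished
    # Without the timeout, neither acquire() would ever return: a deadlock.

p1 = threading.Thread(target=process, args=("P1", resource1, resource2))
p2 = threading.Thread(target=process, args=("P2", resource2, resource1))
p1.start()
p2.start()
p1.join()
p2.join()

print(results)  # {'P1': False, 'P2': False}: neither acquisition succeeded
```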
9. Requirements of Process Synchronization:
For successful synchronization, we need to solve the critical section problem. A solution has to fulfill three requirements to ensure proper synchronization and avoid problems like race conditions and deadlocks.
Mutual Exclusion: The solution can only allow one process into the critical section at a
time. Multiple processes can’t run in the critical section concurrently. Variations of
binary semaphores are used to control the entry of processes into the critical section.
Progress: When no process is executing in the critical section and there exist processes waiting to execute their critical sections, one of them must be granted access. The decision of which process enters is made in a finite amount of time by the processes that are not, at that moment, executing their remainder sections.
No Starvation / Bounded Waiting: Starvation is the phenomenon where a process is perpetually denied access to a resource it needs, in this case the critical section. Efficient synchronization must avoid it, as it delays process execution. Once a process has requested entry to its critical section, there must be a bound on how many other processes can execute their critical sections before it is granted entry. This gives every process a fair chance to execute its critical section; the bound can also take the form of a timestamp, ensuring no process waits indefinitely.
Among these, mutual exclusion and progress can be considered primary requirements and bounded waiting a secondary requirement.
10. Solutions To Critical Section Problem:
As the critical section plays the most important role in process synchronization, it is of utmost
importance that the critical section problem is solved. There exist different specific approaches
for this. Based on the context of the problem and resources available, different solutions can be
implemented to solve the critical section problem, ensuring the synchronization of multiple
processes. Some of the widely used solutions are given below.
Peterson’s solution:
One of the most widely utilized software-based solutions to the critical section problem is Peterson's solution. It was formulated in 1981 by Gary L. Peterson, a researcher and professor of computer science at the University of Rochester. Originally, the algorithm supported only two processes. It can be considered a derivation of Dekker's solution to the concurrency problem.
In this algorithm, two variables are shared between the processes. They are:
1. int turn: An integer variable indicating which process is allowed to enter the critical section.
2. boolean flag[2]: A Boolean array that indicates whether each process is ready to execute its critical section. Both entries are initialized to false; when a process wants to enter the critical section, it sets its entry to true.
To enter the critical section, a process sets its own flag entry to TRUE and sets the turn variable to the other process. For example, if the processes are pi and pj, and pi wants to execute its critical section, flag[i] is set to TRUE and turn is set to j. This gives the other process the first chance to enter the critical section. If both processes want to enter at the same time, turn is written twice, but only one value lasts, as the other is overwritten almost immediately.
The basic algorithm for synchronization between two processes is-
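As a sketch, the two-process protocol can be written in Python (an illustrative demo: CPython's global interpreter lock supplies the sequentially consistent memory accesses the algorithm assumes, whereas on real hardware memory fences would also be needed; the iteration count is arbitrary):

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # hand the GIL over quickly during busy-waiting

flag = [False, False]  # flag[i] is True while process i wants the critical section
turn = 0               # which process must yield when both want in
counter = 0            # shared data protected by the protocol

def peterson(i):
    global turn, counter
    j = 1 - i
    for _ in range(1000):
        flag[i] = True              # entry section: announce intent
        turn = j                    # politely give the other process priority
        while flag[j] and turn == j:
            pass                    # busy-wait while pj may be inside
        counter += 1                # critical section
        flag[i] = False             # exit section: no longer interested

threads = [threading.Thread(target=peterson, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 2000: no increment was lost
```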
In the algorithm, if process pi wants to enter the critical section, flag[i] is set to TRUE and the value of turn is set to j. Then comes a while loop whose condition is flag[j] being TRUE and turn being j. This loop keeps executing until either flag[j] is no longer TRUE or turn is no longer j. It works under the assumption that while both conditions hold, process pj is in the critical section. As soon as that is no longer the case, the algorithm lets process pi enter the critical section. After executing its critical section, pi sets its flag entry to FALSE, indicating it no longer wants access.
Peterson's solution fulfills all of the requirements of synchronization. The while loop stops one process from entering the critical section while the other process may be executing its own, preserving mutual exclusion. When one process is ready to enter and no other process is executing its critical section, access is granted, which establishes progress. And since there are only two processes, at most one other process can gain access first; this bound means there is no starvation.
It needs to be mentioned that this solution does have a few disadvantages. Because modern computer architectures reorder memory operations, it may not work correctly without additional memory barriers. The while loop in the algorithm creates busy waiting, which wastes CPU cycles. Also, as it is restricted to two processes, it is not directly applicable to most modern systems.
But despite all of its insufficiencies, Peterson’s solution works as a good reference for modern
solutions to the problem. It also illustrates the requirements of synchronization clearly. This
solution can be modified to handle more than two processes. The two-process solution is used
repeatedly in n-1 levels to eliminate at least one process per level until only one remains [10].
Synchronization hardware:
The critical section problem can be solved with the help of hardware in some cases. Some
systems employ hardware-based solutions as they can increase system efficiency.
If a mechanism could prevent interruptions from occurring while a process is executing in the
critical section, then that would satisfy the requirements of process synchronization and solve the
critical section problem.
Synchronization hardware works by means of a lock that a process must acquire in the entry section of its code. The lock behaves like a Boolean flag in that it can take only two values: 0 and 1. Acquiring the lock grants the process entry to the critical section. While that process holds the lock inside the critical section, no other process can enter, so interruptions are prevented. When the process exits the critical section, it releases the lock, which then becomes available for other processes to obtain. This is how mutual exclusion is preserved in this method.
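Real hardware provides an atomic test-and-set instruction for this purpose. Python has no such primitive, so this sketch simulates the atomic step with a small guard lock; the acquire/release structure is the point, not the simulation:

```python
import threading

class SpinLock:
    """A spinlock built on a (simulated) atomic test-and-set instruction."""
    def __init__(self):
        self._locked = False
        self._guard = threading.Lock()  # stands in for hardware atomicity

    def _test_and_set(self):
        # Atomically return the old value and set the flag to True,
        # which is exactly what a hardware test-and-set instruction does.
        with self._guard:
            old = self._locked
            self._locked = True
            return old

    def acquire(self):
        while self._test_and_set():  # spin (busy-wait) until the old value was False
            pass

    def release(self):
        self._locked = False

counter = 0
lock = SpinLock()

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1   # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: every increment ran under mutual exclusion
```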
In a uniprocessor environment, this works as a perfect solution to the problem, but in a multiprocessor environment it decreases efficiency. Preventing interruptions across multiple processors is time consuming, so it can delay the entry of processes into the critical section. Moreover, this method imposes no order of entry to the critical section, so it does not preserve the bounded waiting requirement; as the number of processes increases, so does the chance of starvation. Finally, since hardware-based solutions tend to be platform dependent, the method behaves differently on different systems.
Mutex:
As it can be seen from the previous discussion, even though hardware synchronization can fulfill
some requirements of synchronization, all things considered, it is not a perfect solution to the
critical section problem. There exists a similar but software-based solution called Mutex locks.
The word mutex is an amalgamation of “mutual” & “exclusion”. This is an easy and efficient
mechanism when it comes to managing mutual exclusion among multiple processes.
A mutex works in a similar manner to hardware synchronization in that it uses a shared variable that can be either locked or unlocked, with two procedures operating on it. When a process needs access to a critical region, it calls mutex_lock. If the mutex is currently unlocked (meaning the critical region is available), the call succeeds and the calling thread is free to enter the critical region. If the mutex is already locked, the calling thread blocks until the thread in the critical region finishes and calls mutex_unlock. If multiple threads are blocked on the mutex, one of them is chosen at random and allowed to acquire the lock [11].
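A minimal sketch of the mutex_lock/mutex_unlock pattern, here using Python's threading.Lock in place of a raw mutex (the thread and iteration counts are arbitrary):

```python
import threading

mutex = threading.Lock()
shared_total = 0

def worker(amount):
    global shared_total
    for _ in range(1000):
        mutex.acquire()          # mutex_lock: blocks if another thread holds it
        shared_total += amount   # critical region: one thread at a time
        mutex.release()          # mutex_unlock: wakes one blocked thread, if any

threads = [threading.Thread(target=worker, args=(1,)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared_total)  # 3000: no update was lost
```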
Semaphore:
A semaphore is an integer variable shared between processes. After initialization, it can be accessed only through two atomic operations that take the semaphore as a parameter: wait() and signal(). The wait operation is denoted by P and signal by V. The operations P and V are executed by the operating system in response to calls issued by any process naming a semaphore as parameter (this relieves the process of having control) [13]. Simultaneous modification of the semaphore is not allowed. The algorithms of the two operations can be written as follows.
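In their classic busy-waiting form, the two operations can be sketched as follows (in real systems the decrement and increment must themselves be atomic, which the kernel or hardware guarantees; the class below is a single-threaded illustration, not a thread-safe implementation):

```python
class Semaphore:
    """Classic busy-waiting semaphore (illustrative sketch only)."""
    def __init__(self, value):
        self.value = value   # number of processes still allowed in

    def P(self):  # wait()
        while self.value <= 0:
            pass             # busy-wait until a slot is free
        self.value -= 1      # take a slot

    def V(self):  # signal()
        self.value += 1      # free a slot, letting a waiting process proceed

s = Semaphore(1)  # a binary semaphore guarding one critical section
s.P()             # enter the critical section
# ... critical section ...
s.V()             # exit the critical section
print(s.value)    # 1: the slot is available again
```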
The operation P checks whether the critical section is free. When the critical section is occupied to capacity by executing processes, any new process trying to enter waits in the loop. When a process exits the critical section, it calls the operation V and resets the value of the semaphore; only then can a new process enter the critical section.
1. Binary Semaphore: The semaphore can only have two values: 0 and 1. Binary
semaphores are basically mutex locks. When the value of the variable is set to 1,
processes can enter the critical section. If a process enters the critical section, the variable
is decremented by 1. After the process is done executing its critical section, it calls the V
operation and sets the semaphore value back to 1.
2. Counting Semaphore: This type of semaphore can take any non-negative value. Counting semaphores are used to control access when there are multiple instances of a resource: the semaphore is initialized to the number of instances. When a process acquires an instance, the value is decremented by one. When the value reaches 0, further processes trying to enter wait in the loop until some executing process exits and calls operation V.
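A sketch using Python's threading.Semaphore to model three instances of a resource shared by ten processes (the counts are arbitrary; the extra lock only protects the bookkeeping that measures how many holders exist at once):

```python
import threading

pool = threading.Semaphore(3)   # three instances of the resource
in_use = 0
peak = 0
stats_lock = threading.Lock()   # protects the bookkeeping counters

def use_resource():
    global in_use, peak
    with pool:                   # wait (P): blocks when all 3 instances are taken
        with stats_lock:
            in_use += 1
            peak = max(peak, in_use)
        # ... use the resource ...
        with stats_lock:
            in_use -= 1
    # leaving the with-block signals (V), releasing one instance

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak <= 3)  # True: never more than 3 holders at once
```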
Semaphores can solve most of the classic problems of synchronization, but they are still flawed in some ways. The busy-wait (spinlock) form wastes CPU cycles that could otherwise be used productively, so implementations are often modified, based on the context of the problem, to be more efficient.
3. The Dining-Philosophers Problem
4. The Sleeping-Barber Problem
Any proposed solution to synchronization must be able to solve these problems to be accepted as a proper solution.
Windows NT resembles UNIX closely in the design of its synchronization mechanisms. It uses functions to context switch between different processes. For low-priority processes it utilizes spinlocks, whereas higher-priority tasks are handled with dispatcher objects. A dispatcher object synchronizes through different mechanisms, such as semaphores, mutexes, timers, and events. Depending on the particular processor the system is being run on, it provides hardware synchronization too.
Apple Macintosh uses the process as the basic unit of scheduling, whereas the other two use threads. The operating system provides functions that an executing process can call to block itself and allow other processes to run in a time-slicing schedule. For high-level synchronization, mechanisms like semaphores and queues are used.
According to a comparative study on this topic, The UNIX operating system is the most reliable
of the three operating systems taken into consideration, Windows NT comes next and then the
Apple Macintosh operating system [14].
13. Conclusion:
It can be noted that process synchronization is a fundamental concept in operating systems. As information and data are shared between processes, a lack of proper synchronization can lead to multiple processes changing a resource at the same time and corrupting it. Thus, in a multiprocessing environment, achieving efficient synchronization between processes is of utmost importance.
From our discussion of the different mechanisms of process synchronization, we can say that each solution has its own unique characteristics. Therefore, no solution can be objectively termed the "best"; depending on the requirements of users and developers, a specific solution is chosen in each case.
The field of operating systems and concurrent programming is expanding at a swift pace. Every advancement in related fields will bring new complexities for these solutions to resolve, and as the problems grow more complex, so will the solutions. But their basis will be built upon the solutions available to us now, shaped through years of experiments and the immense research that went into them.
References: