Unit 3 Process Synchronization
Process Synchronization
Independent Process: The execution of one process does not affect the
execution of other processes.
Let us take a look at why exactly we need process synchronization. For example,
if Process 1 is trying to read the data present in a memory location while
Process 2 is trying to change the data at the same location, there is a
high chance that the data read by Process 1 will be incorrect.
Race Condition:
When more than one process runs the same code or modifies the same memory or
any shared data concurrently, there is a risk that the result or value of the
shared data will be incorrect, because all the processes try to access and modify
the shared resource at the same time. The processes effectively race with one
another, and the outcome depends on which one wins; this is called a race
condition. Since many processes use the same data, the results of the processes
may depend on the order of their execution.
This is mostly a situation that can arise within the critical section. In the critical
section, a race condition occurs when the end result of multiple thread
executions varies depending on the sequence in which the threads execute.
A race condition typically occurs when two or more threads read, write, and
make decisions based on memory that they are accessing concurrently.
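As a rough illustration (a sketch using Python's threading module; the function names are invented for this example), two threads incrementing a shared counter without synchronization can lose updates, while guarding the increment with a lock keeps the result correct:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    """The read-modify-write in `counter += 1` is not atomic,
    so concurrent updates can be lost (a race condition)."""
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    """Acquiring the lock is the entry section, the increment is
    the critical section, and releasing the lock is the exit section."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, n_threads=4, n_iters=100_000):
    """Run `worker` on several threads and return the final counter."""
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n_iters,))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

With the lock, `run(safe_increment)` always returns exactly `n_threads * n_iters`; without it, the total can fall short on some runs because increments are lost.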
Sections of a Program
Entry Section: In this part, a process requests permission to enter its
critical section.
Critical Section: This part allows one process at a time to enter and modify
the shared variable.
Exit Section: The exit section allows the other processes that are waiting in
the entry section to enter the critical section. It also ensures that a process
that has finished its execution leaves through this section.
Remainder Section: All other parts of the code, which are not in the critical,
entry, or exit sections, are known as the remainder section.
Critical Section Problem:
A critical section is a code segment that can be accessed by only one process at a
time. The critical section contains shared variables that need to be synchronized to
maintain the consistency of data variables. So the critical section problem means
designing a way for cooperative processes to access shared resources without
creating data inconsistencies.
The entry to the critical section is handled by the wait() operation,
represented as P(); the exit is handled by the signal() operation,
represented as V().
Only a single process can execute in the critical section at a time. Other
processes, waiting to execute their critical sections, must wait until the
current process completes its execution.
Rules for Critical Section
Mutual Exclusion: Only one process can be inside the critical section at any
given time.
Progress: If no process is in the critical section and some process wants to
enter, then only the processes not in their remainder sections take part in
deciding which one goes in, and the decision must be made in finite time.
Bounded Waiting: When a process requests entry into the critical section, there
is a bound on the number of times other processes are allowed to enter before
that request is granted, so no process waits forever.
The critical section plays the main role in process synchronization, so this
problem must be solved.
Here are some widely used methods to solve the critical section problem.
Peterson Solution
In this solution, designed for two processes, when one process is executing in
its critical section, the other process executes only the rest of its code, and
vice versa. This method ensures that only a single process runs in the critical
section at a given time.
Example
// Process Pi (the other process is Pj, where j = 1 - i)
flag[i] = true;              // Pi announces it wants to enter
turn = j;                    // give priority to the other process
while (flag[j] && turn == j)
    ;                        // busy-wait while Pj also wants in and has the turn
// --- critical section ---
flag[i] = false;             // exit section: Pi is done
Assume two processes Pi and Pj, each of which at some point needs to enter
the critical section.
Before entering, a process raises its own flag and hands the turn to the other
process; it then waits only while the other process also wants to enter and
holds the turn. On exit, the process lowers its flag, which lets the waiting
process break out of its wait loop.
Example: if Pi sets turn = j but Pj has not raised its flag, Pi enters
immediately; when Pi exits and clears flag[i], a waiting Pj breaks out of its
wait loop and enters.
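The algorithm above can be sketched in Python (an illustrative translation: it behaves correctly under CPython's global interpreter lock, which makes memory operations appear sequentially consistent, but a real C implementation would need atomic operations or memory barriers):

```python
import sys
import threading

sys.setswitchinterval(0.0005)   # switch threads often so the busy-wait hands over quickly

flag = [False, False]   # flag[i]: process Pi wants to enter
turn = 0                # which process must yield when both want in
count = 0               # shared data protected by the critical section

def peterson_worker(i, iters):
    global turn, count
    j = 1 - i
    for _ in range(iters):
        flag[i] = True                 # entry section: announce intent
        turn = j                       # give priority to the other process
        while flag[j] and turn == j:
            pass                       # busy-wait
        count += 1                     # critical section
        flag[i] = False                # exit section

def run_peterson(iters=2_000):
    """Run both processes and return the final shared count."""
    global count
    count = 0
    ts = [threading.Thread(target=peterson_worker, args=(i, iters))
          for i in (0, 1)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return count
```

Because at most one thread is ever inside the critical section, the unprotected `count += 1` never loses an update.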
Synchronization Hardware
Sometimes the critical-section problem is also resolved with hardware support.
Some operating systems offer a lock facility: a process acquires a lock when
entering the critical section and releases the lock after leaving it.
When another process tries to enter the critical section while it is locked, it
cannot do so; it can enter only once the lock is free, by acquiring the lock
itself.
Mutex Locks
In this approach, in the entry section of code, a LOCK is obtained over the critical
resources used inside the critical section. In the exit section that lock is released.
Semaphore Solution
It uses two atomic operations, 1) wait and 2) signal, for process
synchronization.
Example
WAIT(S):
    while (S <= 0)
        ;            // busy-wait
    S = S - 1;
SIGNAL(S):
    S = S + 1;
Semaphores in Operating System
Semaphores are integer variables that are used to solve the critical section problem
by using two atomic operations, wait and signal that are used for process
synchronization.
Wait
wait(S)
{
while (S<=0);
S--;
}
Signal
signal(S)
{
S++;
}
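The wait/signal pseudocode can be turned into a working counting semaphore. This sketch (in Python, using a condition variable so that wait blocks instead of busy-waiting as the pseudocode does) is illustrative, not a production implementation:

```python
import threading

class Semaphore:
    """Counting semaphore matching the wait/signal pseudocode,
    but blocking on a condition variable instead of spinning."""

    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):
        """P(): block while S <= 0, then decrement S."""
        with self._cond:
            while self._value <= 0:
                self._cond.wait()
            self._value -= 1

    def signal(self):
        """V(): increment S and wake one waiting thread."""
        with self._cond:
            self._value += 1
            self._cond.notify()

def demo(n_threads=2, n_iters=10_000):
    """Use the semaphore (initialized to 1, i.e. as a mutex)
    to protect a shared counter; returns the final total."""
    sem = Semaphore(1)
    state = {"total": 0}

    def worker():
        for _ in range(n_iters):
            sem.wait()
            state["total"] += 1   # critical section
            sem.signal()

    ts = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return state["total"]
```

Initialized to 1 the semaphore acts as a binary semaphore (mutex); initialized to N it admits up to N processes at once.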
Types of Semaphores
There are two main types of semaphores: counting semaphores and binary
semaphores. Details about these are given as follows:
Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain. These
semaphores are used to coordinate resource access, where the semaphore count
is the number of available resources. If resources are added, the semaphore
count is automatically incremented, and if resources are removed, the count is
decremented.
Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted
to 0 and 1. The wait operation only proceeds when the semaphore is 1 (setting
it to 0), and the signal operation succeeds only when the semaphore is 0
(setting it back to 1). Binary semaphores are sometimes easier to implement
than counting semaphores.
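For instance (a sketch using Python's built-in `threading.Semaphore`; the resource pool and counters are invented for illustration), a counting semaphore initialized to the number of available resources caps how many threads can hold a resource at once:

```python
import threading
import time

def run_pool(n_threads=6, slots=2):
    """Let n_threads compete for `slots` identical resources and
    return the peak number of simultaneous holders observed."""
    pool = threading.Semaphore(slots)   # counting semaphore: count = free resources
    state_lock = threading.Lock()
    state = {"active": 0, "peak": 0}

    def use_resource():
        pool.acquire()                  # wait(): blocks while the count is 0
        with state_lock:
            state["active"] += 1
            state["peak"] = max(state["peak"], state["active"])
        time.sleep(0.01)                # hold the resource briefly
        with state_lock:
            state["active"] -= 1
        pool.release()                  # signal(): count the resource back in

    ts = [threading.Thread(target=use_resource) for _ in range(n_threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return state["peak"]
```

The peak can never exceed the semaphore's initial count, no matter how many threads compete.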
Advantages of Semaphores
Semaphores allow only one process into the critical section at a time. They
follow the mutual exclusion principle strictly and are more efficient than some
other methods of synchronization, since a blocked process does not busy-wait.
Disadvantages of Semaphores
Semaphores are impractical for large-scale use, as their use leads to loss of
modularity. This happens because the wait and signal operations prevent the
creation of a structured layout for the system.
CLASSICAL PROBLEMS OF SYNCHRONIZATION
The classical problems include the Bounded-Buffer (Producer-Consumer) problem,
the Readers-Writers problem, and the Dining-Philosophers problem. These problems
illustrate the challenges faced in multi-process or multi-threaded environments
and the need for synchronization mechanisms and techniques to address them.
Various synchronization primitives such as semaphores, mutexes, condition
variables, and atomic operations are used to solve these problems in modern
operating systems and concurrent programming environments.
MONITORS
A monitor is a high-level synchronization construct used in operating systems and
concurrent programming to simplify the management of shared resources and enable
safe, synchronized access to them. It was introduced by Per Brinch Hansen in 1973 and
is often associated with programming languages like Java and Python, where monitor-
based constructs are used.
A monitor encapsulates both the data structure (shared resource) and the set of
procedures (methods) that operate on that data structure. It provides a way to ensure
that only one process or thread can access the shared resource at a time, preventing
race conditions and providing a higher level of abstraction for synchronization.
Here are some key characteristics and concepts related to monitors in operating
systems:
1. Mutual Exclusion: Monitors enforce mutual exclusion, which means that only
one process or thread can be active within the monitor at any given time. This
prevents concurrent access to the shared resource and eliminates data
corruption.
2. Condition Variables: Monitors often include condition variables, which are used
to allow processes or threads to wait for certain conditions to be met before they
can proceed. Condition variables are commonly used for tasks like signaling other
threads when data becomes available or waiting for a resource to be released.
3. Synchronization: Monitors are used to synchronize the access to shared
resources. They ensure that only one thread can enter the monitor at a time and
that others must wait until the monitor is available.
4. Abstraction: Monitors provide an abstraction that simplifies the management of
shared resources and makes it easier to reason about concurrency and
synchronization. Programmers can encapsulate complex synchronization logic
within a monitor and expose a clean interface for accessing the resource.
5. Wait and Signal Operations: In many monitor implementations, threads can
perform "wait" and "signal" operations on condition variables. The "wait"
operation causes a thread to release the monitor and enter a waiting state, and
the "signal" operation can be used to wake up one or more waiting threads when
a specific condition is met.
6. Priority Inversion: Monitors can suffer from priority inversion, where a higher-
priority thread is delayed by lower-priority threads holding the monitor. To
mitigate this, priority inheritance or priority ceiling protocols are sometimes used.
7. Thread Safety: Monitors help ensure thread safety, as they encapsulate shared
resources and their access procedures. This simplifies concurrent programming
and reduces the likelihood of synchronization bugs.
8. Examples: High-level programming languages like Java and Python provide
monitor-like constructs. In Java, for example, the synchronized keyword is used to
create synchronized methods and blocks that function as monitors.
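As an illustration of the monitor idea (a Python sketch: one lock and its condition variables are encapsulated in a class, approximating a monitor; the class and method names are invented here), consider a bounded buffer whose put/get operations wait on "not full" / "not empty" conditions:

```python
import threading
from collections import deque

class BoundedBuffer:
    """Monitor-style bounded buffer: a single lock guards the shared
    state, and two condition variables let threads wait inside the
    monitor until the condition they need holds."""

    def __init__(self, capacity):
        self._items = deque()
        self._capacity = capacity
        lock = threading.Lock()                      # the monitor lock
        self._not_full = threading.Condition(lock)   # wait here when full
        self._not_empty = threading.Condition(lock)  # wait here when empty

    def put(self, item):
        with self._not_full:                 # enter the monitor
            while len(self._items) == self._capacity:
                self._not_full.wait()        # release the lock and wait
            self._items.append(item)
            self._not_empty.notify()         # signal a waiting consumer

    def get(self):
        with self._not_empty:                # enter the monitor
            while not self._items:
                self._not_empty.wait()       # release the lock and wait
            item = self._items.popleft()
            self._not_full.notify()          # signal a waiting producer
            return item
```

A producer calling `put` and a consumer calling `get` never corrupt the queue, and each blocks automatically when the buffer is full or empty; all the synchronization logic stays hidden behind the two methods.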
DEADLOCKS
All the processes in a system require resources such as the central processing
unit (CPU), file storage, input/output devices, etc., to execute. Once the
execution is finished, a process releases the resources it was holding. However,
when many processes run on a system, they also compete for the resources they
require for execution. This may give rise to a deadlock situation.
System Model :
Memory, printers, CPUs, open files, tape drives, CD-ROMs, and other
resources are examples of resource categories.
By definition, all resources within a category are equivalent, and any of the
resources within that category can equally satisfy a request from that
category. If this is not the case (i.e. if there is some difference between the
resources within a category), then that category must be subdivided further.
For example, the term “printers” may need to be subdivided into “laser
printers” and “color inkjet printers.”
Operations :
In normal operation, a process must request a resource before using it and release
it when finished, as shown below.
1. Request –
If the request cannot be granted immediately, the process must wait until the
required resource(s) become available. The system provides, for example, the
functions open(), malloc(), new(), and request().
2. Use –
The process makes use of the resource, such as printing to a printer or
reading from a file.
3. Release –
The process relinquishes the resource, allowing it to be used by other processes.
Necessary Conditions for Deadlock :
A deadlock can occur only if all of the following four conditions hold
simultaneously.
Mutual Exclusion: Only one process can use a resource at any given time
i.e. the resources are non-sharable.
Hold and wait: A process is holding at least one resource at a time and is
waiting to acquire other resources held by some other process.
No Preemption: A resource cannot be forcibly taken away from a process; it
can be released only voluntarily by the process holding it, once that process
is finished with it.
Circular Wait: A set of processes are waiting for each other in a circular
fashion. For example, let’s say there are a set of processes {P0, ,P1,P2,P3}
such that P0 depends on P1, P1 depends on P2, P2 depends
on P3 and P3 depends on P0. This creates a circular relation between all
these processes and they have to wait forever to be executed.
Example
In the above figure, there are two processes and two resources. Process 1 holds
"Resource 1" and needs "Resource 2", while Process 2 holds "Resource 2" and
requires "Resource 1". This creates a deadlock, because neither of the two
processes can proceed. Since the resources are non-shareable, they can only be
used by one process at a time (mutual exclusion). Each process is holding a
resource while waiting for the other process to release the resource it
requires, and neither releases its resource before completing its execution,
which creates a circular wait. Therefore, all four conditions are satisfied.
Deadlocks can be handled by prevention, avoidance, detection and recovery, or
ignorance; the first two methods are used to ensure the system never enters a
deadlock.
Deadlock Prevention
This is done by restraining the ways a request can be made. Since deadlock occurs
when all the above four conditions are met, we try to prevent any one of them,
thus preventing a deadlock.
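One common way to prevent the circular-wait condition is to impose a single global ordering on locks and always acquire them in that order. A minimal sketch in Python (ordering by `id()` here, purely for illustration):

```python
import threading

def run_transfers(n_iters=2_000):
    """Two threads repeatedly take the same pair of locks, requesting
    them in opposite orders; returns how many threads finished."""
    lock_a, lock_b = threading.Lock(), threading.Lock()
    finished = []

    def worker(x, y):
        # Always acquire the two locks in one global order (by id()),
        # so no two threads can each hold one lock while waiting for
        # the other: the circular-wait condition can never arise.
        first, second = sorted((x, y), key=id)
        for _ in range(n_iters):
            with first:
                with second:
                    pass          # critical section using both resources
        finished.append(True)

    # The threads *request* the locks in opposite orders...
    t1 = threading.Thread(target=worker, args=(lock_a, lock_b))
    t2 = threading.Thread(target=worker, args=(lock_b, lock_a))
    t1.start(); t2.start()
    t1.join(); t2.join()
    # ...but the ordering discipline means both always complete.
    return len(finished)
```

Without the `sorted` step, acquiring the locks in the order given would eventually leave each thread holding one lock and waiting for the other, i.e. a deadlock.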
Deadlock Avoidance
Each resource request is examined in advance (for example, with the Banker's
algorithm) and granted only if the resulting state is safe, so the system
never enters a deadlock.
Deadlock Detection and Recovery
We let the system fall into a deadlock and, if it happens, we detect it using a
detection algorithm and try to recover, by either:
Process termination: aborting one process at a time until the system recovers
from the deadlock.
Resource preemption: taking resources one by one from a process and assigning
them to higher-priority processes until the deadlock is resolved.
Deadlock Ignorance
In this method, the system assumes that deadlock never occurs. Since deadlocks
are not frequent, some systems simply ignore them. Operating systems such as
UNIX and Windows follow this approach. If a deadlock does occur, we can reboot
the system and the deadlock is resolved.
Deadlock vs. Starvation
Deadlock is also called circular wait; starvation is sometimes called livelock.
Notes on the handling methods above:
Resource preemption is a good method if the state of the resource can be
saved and restored easily.
Deadlock prevention needs no run-time computation, because the problem is
solved at system-design time.
Deadlock avoidance requires each process to declare in advance the maximum
number of resources of each type it will need.