Process Synchronization
Process Synchronization means coordinating the sharing of system resources by processes in such a way that
concurrent access to shared data is handled safely, minimizing the chance of inconsistent data.
Maintaining data consistency demands mechanisms to ensure the synchronized execution of
cooperating processes.
Process Synchronization was introduced to handle problems that arise when multiple processes
execute concurrently. Some of these problems are discussed below.
A Critical Section is a code segment that accesses shared variables and has to be executed as an
atomic action. It means that in a group of cooperating processes, at a given point of time, only
one process must be executing its critical section. If any other process also wants to execute its
critical section, it must wait until the first one finishes.
A solution to the critical section problem must satisfy the following three conditions:
Mutual Exclusion
While one process is executing in its critical section (accessing the shared variable), all other
processes desiring to do so at the same moment must be kept waiting; when that process has finished,
one of the waiting processes should be allowed to proceed.
Means: out of a group of cooperating processes, only one process can be in its critical section at a given
point of time.
Progress
If no process is in its critical section, and one or more processes want to enter their critical
sections, then one of those processes must be allowed to do so.
Bounded Waiting
After a process makes a request to enter its critical section, there is a bound on how many
other processes may enter their critical sections before that request is granted. Once the bound
is reached, the system must grant the process permission to enter its critical section.
Many systems provide hardware support for critical section code. The critical section problem
could be solved easily in a single-processor environment if we could disallow interrupts to occur
while a shared variable or resource is being modified.
In this manner, we could be sure that the current sequence of instructions would be allowed to
execute in order without pre-emption. Unfortunately, this solution is not feasible in a
multiprocessor environment: disabling interrupts there requires passing a message to every
processor, and this message transmission lag delays entry of threads into their critical sections
and decreases system efficiency.
As the synchronization hardware solution is not easy to implement for everyone, a strict software
approach called Mutex Locks was introduced. In this approach, a LOCK is acquired in the entry
section of the code over the critical resources modified and used inside the critical section, and
that LOCK is released in the exit section. When a process wants to enter its critical section, it first
tests the lock. If the lock is 0, the process sets it to 1 and then enters the critical section. If the
lock is already 1, the process just waits until the lock variable becomes 0. Thus, 0 means that no
process is in its critical section, and 1 means hold your horses - some process is in its critical
section.
do {
    acquire lock
        critical section
    release lock
        remainder section
} while (TRUE);
The Flaw in This Approach
The flaw in this proposal is best explained by example. Suppose process A sees that the lock
is 0. Before it can set the lock to 1, another process B is scheduled, runs, and sets the lock to 1.
When process A runs again, it will also set the lock to 1, and two processes will be in their
critical sections simultaneously.
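This race disappears if testing and setting the lock happen as one indivisible step. A minimal sketch using C11 atomics (an illustration, not part of the original text): atomic_exchange writes 1 and returns the previous value atomically, so only one process can observe 0.

```c
#include <stdatomic.h>

static atomic_int lock = 0;

/* atomic_exchange stores 1 and returns the old value in one
 * indivisible step, so two processes can no longer both see 0
 * and then both set the lock to 1. */
void acquire(void) {
    while (atomic_exchange(&lock, 1) == 1)
        ;  /* lock was already held: busy-wait */
}

void release(void) {
    atomic_store(&lock, 0);
}
```

The busy-wait makes this a spinlock; real mutex implementations block the waiting process instead of spinning.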
Semaphores
In 1965, Dijkstra proposed a new and very significant technique for managing concurrent
processes by using the value of a simple integer variable to synchronize the progress of
interacting processes. This integer variable is called a semaphore. It is basically a
synchronizing tool and is accessed only through two standard atomic operations, wait and
signal, designated by P() and V() respectively.
• Wait: decrement the value of its argument S as soon as the result would remain non-negative; otherwise the process waits.
• Signal: increment the value of its argument S as an indivisible operation.
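As a concrete illustration (assuming POSIX semaphores, whose sem_wait and sem_post correspond to Dijkstra's P() and V()):

```c
#include <semaphore.h>

/* POSIX semaphores map directly onto Dijkstra's operations:
 * sem_wait is P() -- decrement, blocking while the value is 0;
 * sem_post is V() -- increment, waking one waiter if any. */
int demo(void) {
    sem_t s;
    sem_init(&s, 0, 1);   /* binary semaphore, initial value 1 */

    sem_wait(&s);         /* P(): enter the critical section */
    /* ... critical section ... */
    sem_post(&s);         /* V(): leave the critical section */

    int value;
    sem_getvalue(&s, &value);
    sem_destroy(&s);
    return value;         /* back to 1 after a matched P()/V() pair */
}
```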
Properties of Semaphores
1. Simple
2. Works with many processes
3. Can have many different critical sections with different semaphores
4. Each critical section has unique access semaphores
5. Can permit multiple processes into the critical section at once, if desirable.
Types of Semaphores
1. Binary Semaphore: its value can be only 0 or 1, and it is used to provide mutual exclusion.
2. Counting Semaphore: its integer value can range over an unrestricted domain; counting
semaphores are used to implement bounded concurrency.
Classical Problems of Synchronization
Following are some of the classical problems faced during process synchronization in systems
where cooperating processes are present.
Bounded Buffer (Producer-Consumer) Problem
This problem is commonly used to illustrate the power of synchronization primitives. In this
scheme we assume that the pool consists of N buffers, each capable of holding one item. The
mutex semaphore provides mutual exclusion for access to the buffer pool and is initialized to
the value one. The empty and full semaphores count the number of empty and full buffers
respectively; empty is initialized to N and full is initialized to zero. This problem is known as
the producer and consumer problem: the producer produces full buffers for the consumer, and
the consumer produces empty buffers for the producer. The structure of the producer process is
as follows:
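The structure just described can be sketched in C with POSIX semaphores (the buffer size N and the circular-buffer indexing are illustrative assumptions):

```c
#include <semaphore.h>

#define N 8                    /* number of buffer slots (illustrative) */

int buffer[N];
int in = 0, out = 0;

sem_t mutex, empty, full;

void init_pool(void) {
    sem_init(&mutex, 0, 1);    /* mutual exclusion on the pool */
    sem_init(&empty, 0, N);    /* N empty buffers initially    */
    sem_init(&full,  0, 0);    /* no full buffers initially    */
}

/* Producer: wait for an empty slot, deposit the item, signal full. */
void producer(int item) {
    sem_wait(&empty);
    sem_wait(&mutex);
    buffer[in] = item;
    in = (in + 1) % N;
    sem_post(&mutex);
    sem_post(&full);
}

/* Consumer: wait for a full slot, remove the item, signal empty. */
int consumer(void) {
    sem_wait(&full);
    sem_wait(&mutex);
    int item = buffer[out];
    out = (out + 1) % N;
    sem_post(&mutex);
    sem_post(&empty);
    return item;
}
```

Note the ordering: each process waits on the counting semaphore before acquiring mutex; reversing the two waits can deadlock.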
Dining Philosophers Problem
• The dining philosophers problem involves the allocation of limited resources among a
group of processes in a deadlock-free and starvation-free manner.
• Consider 5 philosophers who spend their lives thinking and eating. The philosophers share
a common circular table surrounded by 5 chairs, each occupied by one philosopher. In the
center of the table there is a bowl of rice, and the table is laid with 5 chopsticks. When a
philosopher thinks, she does not interact with her colleagues. From time to time a
philosopher gets hungry and tries to pick up the two chopsticks that are closest to her. A
philosopher may pick up only one chopstick at a time, and she cannot pick up
a chopstick that is already in the hand of a neighbor. When a hungry philosopher has both
her chopsticks at the same time, she eats without releasing them. When she has
finished eating, she puts down both of her chopsticks and starts thinking again. This
problem is considered a classic synchronization problem. In this solution,
each chopstick is represented by a semaphore. A philosopher grabs a chopstick by
executing the wait operation on that semaphore and releases it by executing
the signal operation on the appropriate semaphore. The structure of philosopher i is
as follows:
do
{
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
} while (TRUE);
MONITOR
• Processes can call the monitor procedures but cannot access its internal data structures.
• Only one process at a time may be active in a monitor. Active in a monitor means in the
ready queue or on the CPU, with the program counter somewhere in a monitor method.
• A monitor is a language construct. Compare this with semaphores, which are usually an
OS construct.
• The compiler usually enforces mutual exclusion.
monitor monitor-name
{
    // shared variable declarations
    procedure P1 (…) { …. }
    procedure Pn (…) { …… }
    initialization code (…) { … }
}
• Condition variables allow for blocking and unblocking.
• You should not think of a condition variable as a variable in the traditional sense: it does not
have a value. Think of it as an object in the OOP sense, with two methods, wait and signal,
that manipulate the calling process.
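C has no monitor construct, but a pthread mutex plus a condition variable gives the same wait/signal behaviour; a minimal sketch (the counter example is an illustration, not from the original):

```c
#include <pthread.h>

/* A monitor-style counter: the mutex provides the monitor's mutual
 * exclusion, and the condition variable provides wait/signal. */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonzero = PTHREAD_COND_INITIALIZER;
static int count = 0;

void put(void) {
    pthread_mutex_lock(&m);         /* enter the "monitor"        */
    count++;
    pthread_cond_signal(&nonzero);  /* wake one waiter, if any    */
    pthread_mutex_unlock(&m);
}

void get(void) {
    pthread_mutex_lock(&m);
    while (count == 0)              /* wait releases m and        */
        pthread_cond_wait(&nonzero, &m);  /* re-acquires it on wakeup */
    count--;
    pthread_mutex_unlock(&m);
}
```

The while loop (rather than if) around pthread_cond_wait matters: the condition must be re-checked after every wakeup.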
This section is related to the field of database systems. Atomic transactions describe various
database techniques and how they can be used by the operating system. They ensure that
critical sections are executed atomically. To determine how the system should ensure
atomicity, we first need to identify the properties of the devices used for storing the data
accessed by the transactions. The various types of storage devices are as follows:
Volatile Storage: Information residing in volatile storage does not survive a system crash.
Examples of volatile storage are main memory and cache memory.
Non-volatile Storage: Information residing in this type of storage usually survives a system
crash. Examples are magnetic disk, magnetic tape and hard disk.
Stable Storage: Information residing in stable storage is never lost. An example is non-volatile cache
memory.
The various techniques used for ensuring the atomicity are as follows:
1. Log based Recovery: This technique is used for achieving the atomicity by using data structure called
log. A log has the following fields:
a. Transaction Name: This is the unique name of the transaction that performed the write
operation.
b. Data Item Name: This is the unique name given to the data.
c. Old Value: This is the value of the data before the write operation.
d. New Value: This is the value of the data after the write operation.
This recovery technique uses two operations, undo and redo. Undo restores the data
updated by a transaction to its old values; redo sets the data updated by a transaction to its
new values.
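A minimal sketch of such a log record and its undo/redo operations in C (the field sizes and names are illustrative assumptions):

```c
/* Hypothetical log record with the four fields listed above. */
struct log_record {
    char txn[16];     /* transaction name          */
    char item[16];    /* data item name            */
    int  old_value;   /* value before the write    */
    int  new_value;   /* value after the write     */
};

/* undo restores the old value; redo re-applies the new one. */
void undo(const struct log_record *r, int *data) { *data = r->old_value; }
void redo(const struct log_record *r, int *data) { *data = r->new_value; }
```

During recovery, transactions without a commit record are undone (scanning the log backwards) and committed ones are redone (scanning forwards).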
2. Checkpoint: In this technique the system maintains the log as above. Taking a checkpoint
requires the following sequence of actions:
a. Output all log records from volatile storage onto stable storage.
b. Output all modified data residing in volatile storage onto stable storage.
c. Output a checkpoint record onto stable storage.
3. Serializability: If transactions overlap in execution, the resulting schedule is known as a
non-serial or concurrent schedule.
4. Locking: This technique governs how locks are acquired and released. There are two types of lock,
shared and exclusive. If a transaction T has obtained a shared lock (S) on data item Q,
then T can read the item but cannot write it. If a transaction T has obtained an exclusive lock (X) on data
item Q, then T can both read and write data item Q.
5. Timestamp: In this technique each transaction in the system is associated with a unique, fixed
timestamp, denoted TS. This timestamp is assigned by the system before the transaction starts. If a
transaction Ti has been assigned timestamp TS(Ti) and a new transaction Tj later enters the system,
then TS(Ti) < TS(Tj). Each data item carries two timestamps, a W-timestamp and an R-timestamp. The
W-timestamp denotes the largest timestamp of any transaction that successfully performed a write on
the item; the R-timestamp denotes the largest timestamp of any transaction that successfully executed
a read.
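The read and write tests built on these two timestamps can be sketched as follows (the rules are those of the classical timestamp-ordering protocol, an assumption since they are not spelled out above):

```c
#include <stdbool.h>

struct item { int w_ts; int r_ts; };   /* per-item W- and R-timestamps */

/* Read rule: Ti may read Q only if no younger transaction has already
 * written Q, i.e. TS(Ti) >= W-timestamp(Q); a successful read advances
 * the R-timestamp. */
bool ts_read(struct item *q, int ts_ti) {
    if (ts_ti < q->w_ts) return false;   /* read rejected: roll Ti back */
    if (ts_ti > q->r_ts) q->r_ts = ts_ti;
    return true;
}

/* Write rule: Ti may write Q only if no younger transaction has read
 * or written Q, i.e. TS(Ti) >= R-timestamp(Q) and TS(Ti) >= W-timestamp(Q). */
bool ts_write(struct item *q, int ts_ti) {
    if (ts_ti < q->r_ts || ts_ti < q->w_ts) return false;
    q->w_ts = ts_ti;
    return true;
}
```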