
Process Synchronization

Process Synchronization means sharing system resources among processes in such a way that
concurrent access to shared data is handled safely, minimizing the chance of inconsistent data.
Maintaining data consistency demands mechanisms that ensure the synchronized execution of
cooperating processes.

Process synchronization was introduced to handle problems that arise when multiple processes
execute concurrently. Some of these problems are discussed below.

Critical Section Problem

A Critical Section is a code segment that accesses shared variables and has to be executed as an
atomic action. It means that in a group of cooperating processes, at a given point of time, only
one process must be executing its critical section. If any other process also wants to execute its
critical section, it must wait until the first one finishes.

Solution to Critical Section Problem

A solution to the critical section problem must satisfy the following three conditions:

Mutual Exclusion

While one process is accessing the shared variable, all other processes desiring to do so at the
same moment should be kept waiting; when that process has finished with the shared variable, one of
the waiting processes should be allowed to proceed.
In other words: out of a group of cooperating processes, only one process can be in its critical
section at a given point of time.

Progress

If no process is in its critical section, and one or more processes want to execute their critical
sections, then one of them must be allowed to enter its critical section.

Bounded Waiting

After a process makes a request to enter its critical section, there is a limit on how many
other processes can enter their critical sections before this process's request is granted. Once
that limit is reached, the system must grant the process permission to enter its critical section.

Proposals for Achieving Mutual Exclusion

Synchronization Hardware (Hardware Solution)

Many systems provide hardware support for critical section code. The critical section problem
could be solved easily in a single-processor environment if we could prevent interrupts from
occurring while a shared variable or resource is being modified.

In this manner, we could be sure that the current sequence of instructions would be allowed to
execute in order without pre-emption. Unfortunately, this solution is not feasible in a
multiprocessor environment.

Disabling interrupts in a multiprocessor environment can be time consuming, as the message must be
passed to all the processors. This message transmission lag delays the entry of threads into their
critical sections, and system efficiency decreases.

Modern machines provide special atomic hardware instructions:

• Atomic = non-interruptible
• Either test a memory word and set its value, or swap the contents of two memory words

Mutex Locks (Software Solution)

As the synchronization hardware solution is not easy for everyone to implement, a strict software
approach called mutex locks was introduced. In this approach, a LOCK is acquired in the entry
section of the code over the critical resources modified and used inside the critical section, and
that LOCK is released in the exit section. When a process wants to enter its critical section, it
first tests the lock. If the lock is 0, the process sets it to 1 and then enters the critical
section. If the lock is already 1, the process waits until the lock variable becomes 0. Thus, 0
means that no process is in its critical section, and 1 means hold your horses: some process is in
its critical section.

do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);

Conclusion

The flaw in this proposal is best explained by example. Suppose process A sees that the lock
is 0. Before it can set the lock to 1, another process B is scheduled, runs, and sets the lock to 1.
When process A runs again, it will also set the lock to 1, and two processes will be in their
critical sections simultaneously.
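With a correctly implemented atomic lock primitive, the test and the set happen as one indivisible step and the race above cannot occur. A minimal sketch using Python's threading.Lock (the counter and the thread/iteration counts are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:          # entry section: acquire the lock atomically
            counter += 1    # critical section
                            # exit section: lock released by `with`

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 40000: only one thread at a time is in the critical section
```

Without the lock, two threads could read the same value of `counter` and both write back the same incremented value, exactly the lost-update race described above.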

Semaphores

In 1965, Dijkstra proposed a new and very significant technique for managing concurrent
processes by using the value of a simple integer variable to synchronize the progress of
interacting processes. This integer variable is called a semaphore. It is basically a
synchronizing tool and is accessed only through two standard atomic operations, wait and
signal, designated by P() and V() respectively.

The classical definitions of wait and signal are:

• Wait: decrement the value of its argument S as soon as the result would be non-negative
  (i.e., wait until S > 0, then decrement).
• Signal: increment the value of its argument S as a single atomic operation.

wait(S):   while S <= 0 do skip;
           S := S – 1;
signal(S): S := S + 1;
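Python's threading.Semaphore exposes these two operations as acquire() (wait/P) and release() (signal/V); a minimal sketch:

```python
import threading

S = threading.Semaphore(1)   # initialized to 1, so it behaves as a mutex

S.acquire()                  # wait(S): the count drops to 0
# ... critical section ...
S.release()                  # signal(S): the count returns to 1

# A non-blocking probe shows the effect of wait on the count:
S.acquire()                              # count is now 0
got_second = S.acquire(blocking=False)   # fails: count is 0, wait would block
S.release()                              # count back to 1
print(got_second)  # False
```

The blocking form of acquire() is exactly the "while S <= 0 do skip" loop above, except that the thread sleeps instead of busy-waiting.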

Properties of Semaphores

1. Simple
2. Works with many processes
3. Can have many different critical sections with different semaphores
4. Each critical section has unique access semaphores
5. Can permit multiple processes into the critical section at once, if desirable.
Types of Semaphores

Semaphores are mainly of two types:

1. Binary Semaphore

It is a special form of semaphore used for implementing mutual exclusion, hence it is
often called a mutex. A binary semaphore is initialized to 1 and only takes the values 0 and
1 during the execution of a program.

2. Counting Semaphores

These are used to implement bounded concurrency. The integer value can range over an
unrestricted domain.
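A counting semaphore initialized to N admits at most N processes into a section at once; a sketch with illustrative worker threads (the worker function and counts are made up for the example):

```python
import threading
import time

MAX_CONCURRENT = 3
sem = threading.Semaphore(MAX_CONCURRENT)   # counting semaphore, initialized to 3
active = 0
peak = 0
state = threading.Lock()                    # protects the active/peak counters

def worker():
    global active, peak
    with sem:                   # wait(): at most 3 workers proceed at once
        with state:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)        # simulate work inside the bounded section
        with state:
            active -= 1
                                # leaving `with sem` is the signal()

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds MAX_CONCURRENT
```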

Limitations of Semaphores

1. Priority inversion is a big limitation of semaphores.
2. Their use is not enforced, but is by convention only.
3. With improper use, a process may block indefinitely. Such a situation is called a deadlock.

Classical Problem of Synchronization

Following are some of the classical problems faced during process synchronization in systems
where cooperating processes are present.

Bounded Buffer Problem:

This problem is commonly used to illustrate the power of synchronization primitives. In this
scheme we assume that the pool consists of N buffers, each capable of holding one item.
The mutex semaphore provides mutual exclusion for access to the buffer pool and is initialized
to one. The empty and full semaphores count the number of empty and full buffers
respectively; empty is initialized to N and full is initialized to zero. This problem is also
known as the producer-consumer problem: the producer produces full buffers and the consumer
produces empty buffers.
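A sketch of the producer and consumer using the mutex, empty, and full semaphores described above (Python's threading.Semaphore stands in for wait/signal via acquire/release; the item values are illustrative):

```python
import threading
from collections import deque

N = 5
buffer = deque()
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer pool
empty = threading.Semaphore(N)   # counts empty slots, initialized to N
full = threading.Semaphore(0)    # counts full slots, initialized to 0

def producer(items):
    for item in items:
        empty.acquire()          # wait(empty): block if no free slot
        with mutex:
            buffer.append(item)
        full.release()           # signal(full): one more item available

consumed = []

def consumer(count):
    for _ in range(count):
        full.acquire()           # wait(full): block if the buffer is empty
        with mutex:
            item = buffer.popleft()
        empty.release()          # signal(empty): one more free slot
        consumed.append(item)

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)))  # True: every item arrives, in order
```

Note the order of operations: each side waits on its counting semaphore *before* taking the mutex, so a blocked consumer never holds the mutex while the producer needs it.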
Readers-Writers Problem
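In this problem, any number of readers may access the shared data concurrently, but a writer needs exclusive access. A minimal sketch of the classical first readers-writers solution (the names start_read, end_read, and write are illustrative):

```python
import threading

read_count = 0
read_count_lock = threading.Lock()   # protects read_count
resource = threading.Lock()          # held while anyone is writing, or any readers reading

def start_read():
    global read_count
    with read_count_lock:
        read_count += 1
        if read_count == 1:          # first reader locks writers out
            resource.acquire()

def end_read():
    global read_count
    with read_count_lock:
        read_count -= 1
        if read_count == 0:          # last reader lets writers back in
            resource.release()

def write(shared, value):
    with resource:                   # writers get exclusive access
        shared.append(value)

shared = []
start_read()
start_read()     # a second reader enters freely: readers share access
end_read()
end_read()
write(shared, 42)
print(shared)    # [42]
```

This variant favors readers: a steady stream of readers can starve a writer, which is why writer-preference and fair variants also exist.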

Dining Philosophers Problem

• The dining philosophers problem involves the allocation of limited resources among a
group of processes in a deadlock-free and starvation-free manner.

• Consider five philosophers who spend their lives thinking and eating. The philosophers
share a common circular table surrounded by five chairs, each occupied by one philosopher. In
the center of the table there is a bowl of rice, and the table is laid with five chopsticks. When a
philosopher thinks, she does not interact with her colleagues. From time to time a
philosopher gets hungry and tries to pick up the two chopsticks that are closest to her. A
philosopher may pick up one chopstick or two chopsticks at a time, but she cannot pick up
a chopstick that is already in the hand of a neighbor. When a hungry philosopher has both
her chopsticks at the same time, she eats without releasing them. When she has
finished eating, she puts down both of her chopsticks and starts thinking again. This
is considered a classic synchronization problem. In this formulation
each chopstick is represented by a semaphore. A philosopher grabs a chopstick by
executing the wait operation on that semaphore, and she releases a chopstick by executing
the signal operation on the appropriate semaphore. The structure of philosopher i is
as follows:
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
} while (TRUE);

Note that this simple structure can deadlock if all five philosophers pick up their left
chopsticks simultaneously.
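The same structure can be sketched in Python. To keep the sketch deadlock-free, it adds one standard trick on top of the classical structure: every philosopher picks up the lower-numbered chopstick first, so a circular wait cannot form.

```python
import threading

N = 5
chopsticks = [threading.Semaphore(1) for _ in range(N)]  # one semaphore per chopstick
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    # deadlock avoidance: acquire chopsticks in a fixed global order
    first, second = (left, right) if left < right else (right, left)
    for _ in range(rounds):
        chopsticks[first].acquire()    # wait(chopstick)
        chopsticks[second].acquire()
        meals[i] += 1                  # eat
        chopsticks[second].release()   # signal(chopstick)
        chopsticks[first].release()
        # think

threads = [threading.Thread(target=philosopher, args=(i, 10)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # [10, 10, 10, 10, 10]: every philosopher ate, no deadlock
```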
MONITOR

• A monitor is a collection of procedures, variables, and data structures grouped together in
a package.
• Processes can call the monitor procedures but cannot access the internal data structures.
• Only one process at a time may be active in a monitor. Active in a monitor means in the
ready queue or on the CPU with the program counter somewhere in a monitor method.
• A monitor is a language construct.
Compare this with semaphores, which are usually an OS construct.
• The compiler usually enforces mutual exclusion.

monitor monitor-name
{
// shared variable declarations
procedure P1 (…) { … }
procedure Pn (…) { … }
initialization code (…) { … }
}
• Condition variables allow for blocking and unblocking.
• cv.wait() blocks a process.
The process is said to be waiting for (or waiting on) the condition variable cv.
• cv.signal() (also called cv.notify) unblocks a process waiting for the condition
variable cv.
When this occurs, we still need to require that only one process is active in the monitor.
This can be done in several ways:
o on some systems the old process (the one executing the signal) leaves the monitor
and the new one enters
o on some systems the signal must be the last statement executed inside the
monitor
o on some systems the old process will block until the monitor is available again
o on some systems the new process (the one unblocked by the signal) will remain
blocked until the monitor is available again

• If a condition variable is signaled with nobody waiting, the signal is lost.
Compare this with semaphores, in which a signal will allow a process that executes a wait in the
future to not block.

• You should not think of a condition variable as a variable in the traditional sense. It does not
have a value. Think of it as an object in the OOP sense. It has two methods, wait and signal
that manipulate the calling process.
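A monitor-style object can be sketched in Python with one lock guarding all state and a Condition object playing the role of cv (the BoundedCounter class and its methods are illustrative):

```python
import threading

class BoundedCounter:
    """Monitor-style object: one lock guards all internal state,
    and a condition variable lets callers wait for counter > 0."""

    def __init__(self):
        self._lock = threading.Lock()
        # the condition variable is tied to the monitor's lock
        self._nonzero = threading.Condition(self._lock)
        self._counter = 0

    def increment(self):
        with self._lock:               # only one process active in the monitor
            self._counter += 1
            self._nonzero.notify()     # cv.signal(): wake one waiter, if any

    def decrement(self):
        with self._lock:
            while self._counter == 0:  # re-check on wakeup (Mesa-style semantics)
                self._nonzero.wait()   # cv.wait(): release the lock and block
            self._counter -= 1

m = BoundedCounter()
m.increment()
m.decrement()
```

The `while` loop around wait() reflects the last option above (the awakened process re-competes for the monitor), which is the semantics Python's Condition provides; with Hoare-style monitors an `if` would suffice.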

Monitors are implemented in other languages including Mesa, C#, and Java.


Atomic Transaction:

This section is related to the field of database system. Atomic transaction describes the various
techniques of database and how they are can be used by the operating system. It ensures that the
critical sections are executed automatically. To determine how the system should ensure
atomicity we need first to identify the properties of the devices used to for storing the data
accessed by the transactions. The various types storing devices are as follows:

Volatile Storage: Information residing in volatile storage does not survive a system crash.
Examples of volatile storage are main memory and cache memory.
Non-volatile Storage: Information residing in this type of storage usually survives a system
crash. Examples are magnetic disk (hard disk) and magnetic tape.
Stable Storage: Information residing in stable storage is never lost. An example is non-volatile
cache memory.

The various techniques used for ensuring the atomicity are as follows:

1. Log-based Recovery: This technique achieves atomicity by using a data structure called a
log. A log record has the following fields:
a. Transaction Name: the unique name of the transaction that performed the write
operation.
b. Data Item Name: the unique name given to the data item.
c. Old Value: the value of the data item before the write operation.
d. New Value: the value of the data item after the write operation.

This recovery technique uses two operations, undo and redo. Undo restores the data updated by a
transaction to its old values; redo sets the data updated by a transaction to its
new values.
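A toy sketch of undo and redo over such log records (the record layout, transaction names, and values are illustrative):

```python
# Hypothetical log records: (transaction, data_item, old_value, new_value)
log = [
    ("T1", "A", 100, 150),
    ("T1", "B", 200, 250),
]

def undo(db, log, txn):
    """Restore old values, scanning the log backwards."""
    for t, item, old, new in reversed(log):
        if t == txn:
            db[item] = old

def redo(db, log, txn):
    """Re-apply new values, scanning the log forwards."""
    for t, item, old, new in log:
        if t == txn:
            db[item] = new

db = {"A": 150, "B": 250}   # state after T1's writes
undo(db, log, "T1")
print(db)   # {'A': 100, 'B': 200}  — T1's effects rolled back
redo(db, log, "T1")
print(db)   # {'A': 150, 'B': 250}  — T1's effects re-applied
```

Undo scans backwards and redo forwards so that, when a transaction writes the same item more than once, the oldest and newest values respectively win.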

2. Checkpoint: In this technique the system also maintains the log. A checkpoint requires the
following sequence of actions:
a. Output all log records currently in volatile storage to stable storage.
b. Output all modified data residing in volatile storage to stable storage.
c. Output a checkpoint record to stable storage.
3. Serializability: If the executions of transactions overlap, the resulting schedule is known as a
non-serial or concurrent schedule.

4. Locking: This technique governs how locks are acquired and released. There are two types of
locks: shared and exclusive. If a transaction T has obtained a shared lock (S) on data item Q,
then T can read the item but cannot write it. If a transaction T has obtained an exclusive lock (X)
on data item Q, then T can both read and write the data item Q.
5. Timestamp: In this technique each transaction in the system is associated with a unique, fixed
timestamp denoted TS, assigned by the system before the transaction starts. If a
transaction Ti has been assigned timestamp TS(Ti) and a new transaction Tj later enters the system,
then TS(Ti) < TS(Tj). There are two types of timestamps: the W-timestamp denotes the largest
timestamp of any transaction that successfully performed a write operation, and the R-timestamp
denotes the largest timestamp of any transaction that successfully executed a read operation.
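The timestamp-assignment property TS(Ti) < TS(Tj) can be sketched with a simple monotonic counter (the helper new_transaction is illustrative; real systems may instead use the system clock):

```python
import itertools

_ts = itertools.count(1)   # monotonically increasing source of timestamps

def new_transaction():
    """Assign a unique, fixed timestamp to a transaction at start time."""
    return next(_ts)

ti = new_transaction()
tj = new_transaction()     # Tj enters the system after Ti
print(ti < tj)             # True: TS(Ti) < TS(Tj) always holds
```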
