
Chapter 8 Operating System

CS-316

Ms Iqra Mehmood
UNIVERSITY OF JHANG
Process Synchronization
Process Synchronization is the coordination of execution of multiple processes in a multi-
process system to ensure that they access shared resources in a controlled and predictable
manner. It aims to resolve the problem of race conditions and other synchronization issues in
a concurrent system.
The main objective of process synchronization is to ensure that multiple processes access
shared resources without interfering with each other and to prevent the possibility of
inconsistent data due to concurrent access. To achieve this, various synchronization
techniques such as semaphores, monitors, and critical sections are used.

In a multi-process system, synchronization is necessary to ensure data consistency and
integrity, and to avoid the risk of deadlocks and other synchronization problems. Process
synchronization is an important aspect of modern operating systems, and it plays a crucial
role in ensuring the correct and efficient functioning of multi-process systems.
On the basis of synchronization, processes are categorized as one of the following two types:
 Independent Process: The execution of one process does not affect the execution of
other processes.
 Cooperative Process: A process that can affect or be affected by other processes
executing in the system.
The process synchronization problem arises with cooperative processes, because resources
are shared among cooperative processes.
Race Condition
When more than one process executes the same code or accesses the same memory or a
shared variable, the output or the value of the shared variable may be wrong; the processes
are, in effect, racing, each claiming that its output is the correct one. This condition is known
as a race condition: several processes access and manipulate the same data concurrently, and
the outcome depends on the particular order in which the accesses take place. A race
condition is a situation that may occur inside a critical section, when the result of executing
multiple threads in the critical section differs according to the order in which the threads run.
Race conditions in critical sections can be avoided if the critical section is treated as an
atomic instruction. Proper thread synchronization using locks or atomic variables also
prevents race conditions.
Example:
Let’s walk through an example to understand race conditions better.
Say two processes P1 and P2 share a common variable (shared = 10), and both are waiting in
the ready queue for their turn to execute. Suppose P1 runs first: the CPU copies the common
variable into P1’s local variable (X = 10) and increments it by 1 (X = 11). When the CPU
reads the line sleep(1), it switches from the current process P1 to process P2 in the ready
queue, and P1 goes into the waiting state for one second.
The CPU now executes P2 line by line: it copies the common variable (shared = 10) into P2’s
local variable (Y = 10) and decrements Y by 1 (Y = 9). When the CPU reads sleep(1), P2
also goes into the waiting state, and the CPU remains idle for some time since the ready
queue is empty. After P1’s one second elapses, P1 returns to the ready queue, the CPU
resumes it, and P1 executes its remaining line of code (store the local variable X = 11 into the
common variable, so shared = 11). The CPU idles again until P2’s one second elapses; when
P2 returns to the ready queue, the CPU executes P2’s remaining line (store the local variable
Y = 9 into the common variable, so shared = 9).
Initially shared = 10

Process 1            Process 2

int X = shared       int Y = shared

X++                  Y--

sleep(1)             sleep(1)

shared = X           shared = Y

Note: We expect the final value of the common variable (shared) after Process P1 and
Process P2 finish to be 10 (Process P1 increments shared from 10 to 11, and Process P2 then
decrements it from 11 back to 10). But we get an undesired value due to a lack of proper
synchronization.
Actual meaning of race-condition
 If the final writes execute in the order (first P1 -> then P2), the final value of the
common variable (shared) is 9.
 If the final writes execute in the order (first P2 -> then P1), the final value of the
common variable (shared) is 11.
 These two outcomes (shared = 9 and shared = 11) are racing: if we execute these two
processes on our computer system, sometimes we will get 9 and sometimes 11 as the
final value of the common variable (shared). This phenomenon is called a race condition.
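The interleaving above can be reproduced with a short sketch: Python threads stand in for the two processes, a small sleep plays the role of sleep(1) forcing the context switch, and the variable names mirror the table. The sleep duration and function names are illustrative.

```python
import threading
import time

shared = 10  # the common variable from the example

def process_p1():
    global shared
    x = shared       # int X = shared  -> X = 10
    x += 1           # X++             -> X = 11
    time.sleep(0.1)  # sleep(1): the other process runs meanwhile
    shared = x       # shared = X

def process_p2():
    global shared
    y = shared       # int Y = shared  -> Y = 10
    y -= 1           # Y--             -> Y = 9
    time.sleep(0.1)  # sleep(1)
    shared = y       # shared = Y

t1 = threading.Thread(target=process_p1)
t2 = threading.Thread(target=process_p2)
t1.start(); t2.start()
t1.join(); t2.join()
print(shared)  # 9 or 11 depending on which write lands last, never 10
```

Because both processes read shared = 10 before either writes back, one update is always lost: the result is 9 or 11, never the expected 10.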
Critical Section Problem
A critical section is a code segment that can be accessed by only one process at a time. The
critical section contains shared variables that need to be synchronized to maintain the
consistency of data variables. So the critical section problem means designing a way for
cooperative processes to access shared resources without creating data inconsistencies.

A process requests permission to enter the critical section in its entry section; the code run
after the critical section forms the exit section, and the rest of the program is the remainder
section.
Any solution to the critical section problem must satisfy three requirements:
 Mutual Exclusion: If a process is executing in its critical section, then no other
process is allowed to execute in the critical section.
 Progress: If no process is executing in the critical section and other processes are
waiting outside the critical section, then only those processes that are not executing in
their remainder section can participate in deciding which will enter the critical section
next, and the selection can not be postponed indefinitely.
 Bounded Waiting: A bound must exist on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted.
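These sections can be sketched with a Python threading.Lock standing in for the synchronization mechanism (the function name worker and the iteration counts are illustrative):

```python
import threading

lock = threading.Lock()
counter = 0  # shared data, touched only inside the critical section

def worker():
    global counter
    for _ in range(10000):
        lock.acquire()    # entry section: request entry, block if occupied
        counter += 1      # critical section: at most one thread here at a time
        lock.release()    # exit section: let the next process in
        # remainder section: non-shared work would go here

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no increments were lost
```

Mutual exclusion in the critical section is what guarantees the final count of 40000; without the lock, lost updates like the race-condition example above would appear.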
Peterson’s Solution
Peterson’s Solution is a classical software-based solution to the critical section problem. In
Peterson’s solution, we have two shared variables:
 boolean flag[i]: Initialized to FALSE, initially no one is interested in entering the critical
section
 int turn: The process whose turn is to enter the critical section.

Peterson’s Solution preserves all three conditions:


 Mutual Exclusion is assured as only one process can access the critical section at any
time.
 Progress is also assured, as a process outside the critical section does not block other
processes from entering the critical section.
 Bounded Waiting is preserved as every process gets a fair chance.
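A sketch of Peterson's algorithm for two threads, using the flag array and turn variable described above. This relies on CPython's global interpreter lock giving sequentially consistent memory behavior; as the disadvantages below note, on real weak-memory hardware the same idea needs memory fences.

```python
import threading

flag = [False, False]  # flag[i]: process i wants to enter
turn = 0               # whose turn it is to yield
counter = 0            # shared data protected by the algorithm

def peterson(i):
    global turn, counter
    j = 1 - i
    for _ in range(2000):
        flag[i] = True                # entry section: declare interest
        turn = j                      # give the other process priority
        while flag[j] and turn == j:
            pass                      # busy wait until it is safe to enter
        counter += 1                  # critical section
        flag[i] = False               # exit section: withdraw interest

threads = [threading.Thread(target=peterson, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: mutual exclusion held
```

The spin loop `while flag[j] and turn == j: pass` is exactly the busy waiting criticized below.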
Disadvantages of Peterson’s Solution
 It involves busy waiting. (In the Peterson’s solution, the code statement- “while(flag[j]
&& turn == j);” is responsible for this. Busy waiting is not favored because it wastes CPU
cycles that could be used to perform other tasks.)
 It is limited to 2 processes.
 Peterson’s solution is not dependable on modern CPU architectures, because compilers
and out-of-order processors may reorder its reads and writes of flag and turn; memory
fences are needed to make it correct there.
Synchronization Problems
These problems are used for testing nearly every newly proposed synchronization scheme.
The following problems of synchronization are considered as classical problems:
1. Bounded-buffer (or Producer-Consumer) Problem,
2. Dining-Philosophers Problem,
3. Readers and Writers Problem,
4. Sleeping Barber Problem
Bounded-Buffer (or Producer-Consumer) Problem
The bounded-buffer problem is also called the producer-consumer problem; it is the
generalized form of the producer-consumer problem. The producer produces items and the
consumer consumes them, but both share the same fixed-size buffer. The solution is to create
two counting semaphores, “full” and “empty”, to keep track of the current number of full and
empty buffer slots respectively.
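A sketch of this scheme with Python's counting semaphores. The buffer capacity of 5, the item count, and the extra mutex protecting the deque itself are assumptions for illustration, not part of the problem statement.

```python
import threading
from collections import deque

CAPACITY = 5                           # assumed buffer size
buffer = deque()
empty = threading.Semaphore(CAPACITY)  # counts empty slots
full = threading.Semaphore(0)          # counts filled slots
mutex = threading.Lock()               # protects the buffer itself

consumed = []

def producer():
    for item in range(10):
        empty.acquire()                # wait(empty): block if buffer is full
        with mutex:
            buffer.append(item)        # put the product into the buffer
        full.release()                 # signal(full)

def consumer():
    for _ in range(10):
        full.acquire()                 # wait(full): block if buffer is empty
        with mutex:
            item = buffer.popleft()    # take the product out
        empty.release()                # signal(empty)
        consumed.append(item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # items arrive in production order
```

The two counting semaphores block the producer when the buffer is full and the consumer when it is empty, which is the essence of the bounded-buffer solution.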
Dining-Philosophers Problem
The dining philosophers problem states that K philosophers are seated around a circular
table with one chopstick between each pair of philosophers. A philosopher may eat only if he
can pick up the two chopsticks adjacent to him. Each chopstick may be picked up by either of
its two adjacent philosophers, but not by both at once. This problem involves the allocation
of limited resources to a group of processes in a deadlock-free and starvation-free manner.
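One deadlock-free allocation strategy is resource ordering: every philosopher picks up the lower-numbered chopstick first, which breaks the circular wait. A minimal sketch (philosopher count and meal counts are illustrative):

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]  # one between each pair
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # resource ordering: always grab the lower-numbered chopstick first,
    # which breaks the circular wait and so prevents deadlock
    first, second = (left, right) if left < right else (right, left)
    for _ in range(10):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1          # eating with both chopsticks held

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # every philosopher ate all 10 meals: no deadlock
```

If every philosopher instead grabbed the left chopstick first, all five could hold one chopstick and wait forever for the second; the ordering rule makes that cycle impossible.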

Readers and Writers Problem


Suppose that a database is to be shared among several concurrent processes. Some of these
processes may want only to read the database, whereas others may want to update (that is, to
read and write) it. We distinguish between these two types of processes by referring to the
former as readers and to the latter as writers. In OS terminology this situation is called the
readers-writers problem. Problem parameters:
 One set of data is shared among a number of processes.
 Once a writer is ready, it performs its write. Only one writer may write at a time.
 If a process is writing, no other process can read the data.
 If at least one reader is reading, no other process can write.
 Readers only read; they never write.
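These rules are what the classic first readers-writers solution enforces: the first reader locks writers out, the last reader lets them back in. A sketch (thread layout and iteration counts are illustrative):

```python
import threading

read_count = 0                      # number of readers currently reading
read_count_lock = threading.Lock()  # protects read_count
resource = threading.Lock()         # held by one writer, or by the reader group

value = 0       # the shared "database"
snapshots = []  # values observed by readers

def writer():
    global value
    for _ in range(100):
        with resource:               # only one writer, and no readers
            value += 1

def reader():
    global read_count
    for _ in range(100):
        with read_count_lock:
            read_count += 1
            if read_count == 1:
                resource.acquire()   # first reader locks writers out
        snapshots.append(value)      # many readers may read concurrently
        with read_count_lock:
            read_count -= 1
            if read_count == 0:
                resource.release()   # last reader lets writers back in

threads = [threading.Thread(target=writer)] + \
          [threading.Thread(target=reader) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(value)  # 100: no writer update was lost
```

Note that this "first" variant can starve the writer if readers keep arriving, which is why writer-preference variants also exist.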
Sleeping Barber Problem
 Consider a barber shop with one barber, one barber chair, and N chairs to wait in. When
there are no customers, the barber goes to sleep in the barber chair and must be woken
when a customer comes in. While the barber is cutting hair, new customers take the empty
seats to wait, or leave if there is no vacancy. This is the Sleeping Barber Problem.
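A minimal sketch of the semaphore solution, assuming three waiting chairs and eight customers arriving at once (all names and counts are illustrative): the barber sleeps on one semaphore, each customer waits for the haircut on another, and a mutex guards the waiting-room count.

```python
import threading

N_CHAIRS = 3
TOTAL = 8                              # customers who will show up

customers = threading.Semaphore(0)     # barber sleeps waiting on this
barber_done = threading.Semaphore(0)   # customer waits here for the haircut
mutex = threading.Lock()               # protects the waiting-room count

waiting = 0
served = 0
turned_away = 0

def barber():
    global waiting, served
    while True:
        customers.acquire()            # sleep until a customer arrives
        with mutex:
            waiting -= 1
        served += 1                    # cut hair
        barber_done.release()

def customer():
    global waiting, turned_away
    entered = False
    with mutex:
        if waiting < N_CHAIRS:         # a chair is free: sit down
            waiting += 1
            entered = True
        else:                          # shop is full: leave
            turned_away += 1
    if entered:
        customers.release()            # wake the barber if he is asleep
        barber_done.acquire()          # wait for the haircut to finish

threading.Thread(target=barber, daemon=True).start()
threads = [threading.Thread(target=customer) for _ in range(TOTAL)]
for t in threads:
    t.start()
for t in threads:
    t.join()                           # every customer was served or left
print(served + turned_away)            # 8: no customer is lost or stuck
```

Every customer either gets a haircut or leaves; none blocks forever, and the barber never "cuts hair" with an empty shop.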

Advantages of Process Synchronization


 Ensures data consistency and integrity
 Avoids race conditions
 Prevents inconsistent data due to concurrent access
 Supports efficient and effective use of shared resources
Disadvantages of Process Synchronization
 Adds overhead to the system
 This can lead to performance degradation
 Increases the complexity of the system
 Can cause deadlocks if not implemented properly.

Hardware Synchronization Algorithms


The process synchronization problem can be solved by software as well as hardware
solutions. Peterson’s solution is one of the software solutions: Peterson’s algorithm allows
two processes to share a single-use resource without any conflict. Here we discuss the
hardware solutions to the problem:

1. Test and Set
2. Swap
3. Unlock and Lock
Test and Set
The test and set algorithm uses a boolean variable 'lock', initialized to false. This lock
variable determines whether a process may enter the critical section of the code. Let's first
see the algorithm and then try to understand what it is doing.

boolean lock = false;

boolean TestAndSet(boolean &target){


boolean returnValue = target;
target = true;
return returnValue;
}

while(1){
    while(TestAndSet(lock));

    CRITICAL SECTION CODE;

    lock = false;

    REMAINDER SECTION CODE;
}

In the above algorithm the TestAndSet() function takes a boolean value and returns the same
value. TestAndSet() function sets the lock variable to true.

When the lock variable is initially false, the TestAndSet(lock) condition checks for
TestAndSet(false). As TestAndSet function returns the same value as its
argument, TestAndSet(false) returns false. Now, while loop while(TestAndSet(lock)) breaks
and the process enters the critical section.

As one process is inside the critical section and lock value is now 'true', if any other process
tries to enter the critical section then the new process checks for while(TestAndSet(true)) which
will return true inside while loop and as a result the other process keeps executing the while
loop.

while(true); // this keeps executing until lock becomes false.

As no queue is maintained for the processes stuck in the while loop, bounded waiting is not
ensured. Bounded waiting means there is a bound on how long a process can be kept waiting
before it enters the critical section.

In the test and set algorithm, an incoming process trying to enter the critical section does not
wait in a queue; any process may get the chance to enter the critical section as soon as it finds
the lock variable to be false. A particular process may therefore never get the chance to enter
the critical section and may wait indefinitely.

Swap
The Swap algorithm uses two boolean variables, lock and key, both initialized to false. It is
similar to the test and set algorithm, but it uses a temporary variable to set the lock to true
when a process enters the critical section of the program.

Let's see the swap algorithm pseudo-code:

boolean lock = false;


boolean key = false;

void swap(boolean &a, boolean &b){


boolean temp = a;
a = b;
b = temp;
}

while(1){
key=true;
while(key){
swap(lock,key);
}

CRITICAL SECTION CODE


lock = false;
REMAINDER SECTION CODE
}

In the code above when a process P1 enters the critical section of the program it first executes
the while loop

while(key){
swap(lock,key);
}

As key is set to true just before the inner while loop, swap(lock, key) swaps the values
of lock and key: lock becomes true and key becomes false. On the next check the while loop
condition is false, so the loop breaks and process P1 enters the critical section.

The value of lock and key when P1 enters the critical section is lock = true and key = false.

Let's say another process, P2, tries to enter the critical section while P1 is in the critical section.
Let's take a look at what happens if P2 tries to enter the critical section.

At the top of the outer while(1) loop, P2 sets key to true again. The inner loop while(key) is
then checked; as key is true, P2 enters it and executes swap(lock, key). Since both key
and lock are true, both remain true after the swap, so P2 keeps spinning in the inner while
loop until process P1 comes out of the critical section and sets lock to false.

When Process P1 comes out of the critical section the value of lock is again set to false so that
other processes can now enter the critical section.

While a process is inside the critical section, the other processes trying to enter are not
maintained in any order or queue, so any one of the waiting processes may get the chance to
enter as soon as lock becomes false. A process may therefore wait indefinitely, so bounded
waiting is not ensured in the Swap algorithm either.

Unlock and lock


Unlock and lock algorithm uses the TestAndSet method to control the value of lock. Unlock
and lock algorithm uses a variable waiting[i] for each process i. Here i is a positive integer i.e
1,2,3,... which corresponds to processes P1, P2, P3... and so on. waiting[i] checks if the
process i is waiting or not to enter into the critical section.

All the processes are maintained in a ready queue before entering the critical section. The
processes are added to the queue in order of their process number. The queue is circular.

Let's see the Unlock and lock algorithm pseudo-code first:

boolean lock = false;


boolean key = false;
boolean waiting[n];   // one flag per process, all initially false

boolean TestAndSet(boolean &target){


boolean returnValue = target;
target = true;
return returnValue;
}

while(1){
waiting[i] = true;
key = true;
while(waiting[i] && key){
key = TestAndSet(lock);
}
CRITICAL SECTION CODE
j = (i+1) % n;
while(j != i && !waiting[j])
j = (j+1) % n;
if(j == i)
lock = false;
else
waiting[j] = false;
REMAINDER SECTION CODE
}

In the Unlock and lock algorithm, the lock is not simply set to false when a process comes
out of the critical section. In the Swap and Test and Set algorithms, the lock was set to false
as soon as a process left the critical section, so that any other process could enter.

But in Unlock and lock, once the ith process comes out of the critical section the algorithm
checks the waiting queue for the next process waiting to enter the critical section i.e jth process.
If there is a jth process waiting in the ready queue to enter the critical section, the waiting[j] of
the jth process is set to false so that the while loop while(waiting[i] && key) becomes false and
the jth process enters the critical section.
If no process is waiting in the ready queue to enter the critical section the algorithm then sets
the lock to false so that any other process comes and enters the critical section easily.

Since a ready queue is always maintained for the waiting processes, the Unlock and lock
algorithm ensures bounded waiting.

Semaphores
A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be
signaled by another thread. This is different from a mutex, which can be signaled (released)
only by the thread that called the wait function.
A semaphore uses two atomic operations, wait and signal for process synchronization. A
Semaphore is an integer variable, which can be accessed only through two operations wait()
and signal().
There are two types of semaphores: Binary Semaphores and Counting Semaphores.
 Binary Semaphores: They can take only the values 0 or 1. They are also known as mutex
locks, as they can provide mutual exclusion. All the processes share the same mutex
semaphore, initialized to 1. A process must wait until the semaphore value is 1; it then
sets the value to 0 and enters its critical section. When it completes its critical section, it
resets the value to 1 so that some other process can enter its critical section.
 Counting Semaphores: They can have any value and are not restricted to a certain
domain. They can be used to control access to a resource that has a limitation on the
number of simultaneous accesses. The semaphore can be initialized to the number of
instances of the resource. Whenever a process wants to use that resource, it checks if the
number of remaining instances is more than zero, i.e., the process has an instance
available. Then, the process can enter its critical section thereby decreasing the value of
the counting semaphore by 1. After the process is over with the use of the instance of the
resource, it can leave the critical section thereby adding 1 to the number of available
instances of the resource.
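A sketch of a counting semaphore guarding a resource with several instances (the instance count of 3 and the number of worker threads are illustrative assumptions):

```python
import threading

INSTANCES = 3                          # assumed number of resource instances
sem = threading.Semaphore(INSTANCES)   # initialized to the instance count
state = threading.Lock()               # protects the bookkeeping below
in_use = 0
peak = 0                               # most instances ever in use at once

def worker():
    global in_use, peak
    sem.acquire()          # wait(): decrements the count, blocks at zero
    with state:
        in_use += 1
        peak = max(peak, in_use)
    with state:            # ...the resource instance would be used here...
        in_use -= 1
    sem.release()          # signal(): increments the count, wakes a waiter

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak <= INSTANCES)  # True: never more than 3 simultaneous users
```

Ten workers contend for three instances, yet the semaphore guarantees the peak number of simultaneous users never exceeds the number of instances.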
