Inter Process Communication (IPC)

Inter-Process Communication (IPC) allows processes to communicate and synchronize actions, utilizing methods like shared memory and message passing. It enhances efficiency and coordination among processes but increases system complexity and requires careful resource management. Key concepts include independent and co-operating processes, race conditions, critical sections, and synchronization techniques such as semaphores.
● A process can be of two types:

● Independent process.
● Co-operating process.
● An independent process is not affected by the execution of other processes, while a co-operating process can be affected by other executing processes.
● Although independently running processes execute efficiently on their own, there are many situations where the co-operative nature of processes can be utilized to increase computational speed, convenience, and modularity.

● Inter-process communication (IPC) is a mechanism that


allows processes to communicate with each other and
synchronize their actions.
● The communication between these processes can be seen as a
method of co-operation between them.
● Processes can communicate with each other through both:
● Shared Memory
● Message passing
● Figure 1 shows the basic structure of communication between
processes via the shared memory method and via the message
passing method.
● An operating system can implement both methods of
communication.
● We first discuss the shared memory method of communication and
then message passing.
● Shared memory: Communication between processes using shared
memory requires the processes to share some variable, and it
depends entirely on how the programmer implements it.
● Suppose process1 and process2 are executing simultaneously, and
they share some resources or use some information from another
process.
● Process1 generates information about certain computations or
resources being used and keeps it as a record in shared memory.
● When process2 needs to use the shared information, it will check
in the record stored in shared memory and take note of the
information generated by process1 and act accordingly.
● Processes can use shared memory for extracting information as a
record from another process as well as for delivering any specific
information to other processes.
An example of communication between processes using the
shared memory method.

● i) Shared Memory Method

Ex: Producer-Consumer problem

● There are two processes: Producer and Consumer.


● The Producer produces items and the Consumer consumes
those items.
● The two processes share a common space or memory location
known as a buffer where the item produced by the Producer is
stored and from which the Consumer consumes the item if needed.
● There are two versions of this problem:
1. Unbounded buffer problem, in which the Producer can keep on
producing items and there is no limit on the size of the buffer.
2. Bounded buffer problem, in which the Producer can produce up
to a certain number of items before it starts waiting for the
Consumer to consume them.
● We will discuss the bounded buffer problem. First, the
Producer and the Consumer will share some common memory,
then the producer will start producing items. If the total produced
item is equal to the size of the buffer, the producer will wait to get
it consumed by the Consumer. Similarly, the consumer will first
check for the availability of the item. If no item is available, the
Consumer will wait for the Producer to produce it. If there are
items available, Consumer will consume them.
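The bounded buffer logic described above can be sketched with counting semaphores; the following is an illustrative Python sketch in which threads stand in for the Producer and Consumer processes (BUF_SIZE and the item count are arbitrary choices, not part of the original problem statement):

```python
import threading
from collections import deque

BUF_SIZE = 3                            # bounded buffer capacity (arbitrary)
buffer = deque()                        # the shared buffer
empty = threading.Semaphore(BUF_SIZE)   # counts free slots
full = threading.Semaphore(0)           # counts filled slots
mutex = threading.Lock()                # protects the buffer itself
consumed = []

def producer(n):
    for item in range(n):
        empty.acquire()                 # wait until a slot is free
        with mutex:
            buffer.append(item)
        full.release()                  # signal: one more filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()                  # wait until an item is available
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                 # signal: one more free slot

p = threading.Thread(target=producer, args=(10,))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The producer blocks once the buffer holds BUF_SIZE items, and the consumer blocks when the buffer is empty, exactly as the bounded buffer problem requires.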
ii) Message Passing Method

● Communication between processes via message passing.


● In this method, processes communicate with each other without
using any kind of shared memory.
● If two processes p1 and p2 want to communicate with each other,
they proceed as follows:

● Establish a communication link (if a link already exists, no need


to establish it again.)
● Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)


● The message size can be of fixed size or of variable size.
● If it is of fixed size, it is easy for an OS designer but complicated
for a programmer and if it is of variable size then it is easy for a
programmer but complicated for the OS designer.
● A standard message has two parts: header and body.

Header part: used for storing the message type, destination id, source
id, message length, and control information.
● Control information: contains information such as what to do if the
sender runs out of buffer space, sequence numbers, and priority.
● Generally, messages are sent in FIFO order.
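The send/receive primitives above can be mimicked with message queues; here is a minimal Python sketch in which threads stand in for processes p1 and p2, and each process has a mailbox queue (the mailbox names and message text are hypothetical):

```python
import queue
import threading

# each process gets a mailbox: a FIFO message queue
mailbox_p1 = queue.Queue()
mailbox_p2 = queue.Queue()

def send(message, destination):
    destination.put(message)      # send(message, destination)

def receive(mailbox):
    return mailbox.get()          # receive(message): blocks until a message arrives

def p2():
    msg = receive(mailbox_p2)     # wait for p1's message
    send(msg.upper(), mailbox_p1) # reply to p1

t = threading.Thread(target=p2)
t.start()
send("hello from p1", mailbox_p2)
reply = receive(mailbox_p1)
t.join()
print(reply)  # HELLO FROM P1
```

No memory is shared directly; the two sides cooperate purely by exchanging messages through the link (the queues), as the message passing model requires.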

● Advantages of IPC:
● Enables processes to communicate with each other and share
resources, leading to increased efficiency and flexibility.
● Facilitates coordination between multiple processes, leading
to better overall system performance.
● Used to implement various synchronization mechanisms.
● Disadvantages of IPC:
● Increases system complexity, making it harder to design,
implement, and debug.
● Requires careful management of system resources.
● Methods in Interprocess Communication
● Inter-process communication (IPC) is a set of interfaces, usually
programmed so that a series of processes can communicate with
one another.
● This allows programs to run concurrently in an operating
system. These are the methods in IPC:
● Pipes (Same Process) – This allows flow of data in one
direction only.
● Data written at one end is usually buffered until the reading
process receives it; the two processes must have a common origin.
● Named Pipes (Different Processes) – This is a pipe with a
specific name; it can be used by processes that do not have a
shared common process origin.
● E.g., a FIFO, where the pipe is named first and data written to
it is read out first-in, first-out.
● Message Queuing – This allows messages to be passed
between processes using either a single queue or several
message queues.
● Semaphores – These are used to solve problems associated with
synchronization and to avoid race conditions.
● They are integer values that are greater than or equal to 0.
● Shared memory – This allows the interchange of data through
a defined area of memory.
● Semaphore values have to be obtained before data in shared
memory can be accessed.
● Sockets – This method is mostly used to communicate over a
network between a client and a server.
● It allows for a standard connection that is computer- and OS-
independent.
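As an illustration of the one-directional pipe described above, here is a small Python sketch using os.pipe; a thread stands in for the second process, and the message text is arbitrary:

```python
import os
import threading

# a pipe is one-directional: bytes written to w come out of r, in order
r, w = os.pipe()

def writer():
    os.write(w, b"data through the pipe")  # small writes are delivered whole
    os.close(w)                            # close the write end when done

t = threading.Thread(target=writer)
t.start()
data = os.read(r, 1024)   # blocks until the writer's data is buffered
t.join()
os.close(r)
print(data.decode())
```

Data flows only from the write end to the read end; for two-way communication, two pipes would be needed.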
● Introduction of Process Synchronization:
● Process Synchronization is the coordination of execution of
multiple processes in a multi-process system to ensure that
they access shared resources in a controlled and predictable
manner.
● It aims to resolve the problem of race conditions and other
synchronization issues in a concurrent system.
● Objective of process synchronization is to ensure that multiple
processes access shared resources without interfering with
each other and to prevent the possibility of inconsistent data
due to concurrent access.
● To achieve this, various synchronization techniques such as
semaphores, monitors, and critical sections are used.
● In a multi-process system, synchronization is necessary to ensure
data consistency and integrity, and to avoid the risk of deadlocks.
● Process synchronization plays a crucial role in ensuring the correct
and efficient functioning of multi-process systems.
● Processes are categorized as one of the following two types (on
the basis of synchronization):
● Independent Process: the execution of one process does not
affect the execution of other processes.
● Cooperative Process: a process that can affect or be affected
by other processes executing in the system (through shared
variables, memory, code, or resources such as the CPU or a printer).
● Consider a uniprocessor (single CPU) system with a shared
variable, initially shared = 5:

P1

1. int x = shared;    // x = 5
2. x++;               // x becomes 6
3. sleep(1);          // P1 pauses and the CPU switches to P2
4. shared = x;        // shared is updated to 6
terminated

P2

1. int y = shared;    // y = 5
2. y--;               // y becomes 4
3. sleep(1);
4. shared = y;        // shared is updated again, to 4

The problem occurs because the processes run in parallel; in the
serial case it does not occur. Because these processes are parallel,
co-operating, and not synchronized, the final value of shared depends
on the interleaving. This is why the processes should be synchronized.
● Race Condition
● When more than one process is executing the same code or
accessing the same memory or shared variable, there is a
possibility that the output or the value of the shared variable is
wrong; since all the processes are racing to claim that their output
is correct, this situation is known as a race condition.
● Several processes access and manipulate the same data
concurrently, and the outcome depends on the particular order in
which the accesses take place.
● A race condition is a situation that may occur inside a critical
section.
● This happens when the result of multiple thread execution in the
critical section differs according to the order in which the threads
execute.
● Example:
● Let's look at an example to understand the race condition better:
● Say there are two processes P1 and P2 which share a common
variable (shared = 10); both processes are present in the ready
queue, waiting for their turn to be executed.
● Suppose process P1 comes under execution first: the CPU stores
the common variable (shared = 10) in a local variable (X = 10) and
increments it by 1 (X = 11). When the CPU reads the line sleep(1),
it switches from the current process P1 to process P2 in the ready
queue.
● Process P1 goes into a waiting state for 1 second.
● The CPU now executes process P2 line by line: it stores the common
variable (shared = 10) in a local variable (Y = 10) and decrements Y
by 1 (Y = 9). When the CPU reads sleep(1), P2 goes into a waiting
state and the CPU remains idle for some time, as there is no process
in the ready queue.
● After 1 second, process P1 returns to the ready queue; the CPU
takes it under execution and executes its remaining code (storing
the local variable (X = 11) in the common variable (shared = 11)).
● The CPU then remains idle, waiting for a process in the ready
queue. After P2's second elapses and it returns to the ready queue,
● the CPU executes the remaining code of process P2 (storing the
local variable (Y = 9) in the common variable (shared = 9)).
● Initially shared = 10

Process 1           Process 2
int X = shared      int Y = shared
X++                 Y--
sleep(1)            sleep(1)
shared = X          shared = Y
● Note: the expected final value of the common variable (shared)
after Process P1 and Process P2 execute is 10 (Process P1
increments the variable (shared = 10) by 1 and Process P2
decrements the result by 1, so finally it should become
shared = 10).

● But we get an undesired value due to a lack of proper
synchronization.

● Actual meaning of race-condition


● If the order of execution is first P1, then P2, we get the value of
the common variable (shared) = 9.
● If the order of execution is first P2, then P1, we get the final
value of the common variable (shared) = 11.
● Here the two outcomes (value1 = 9 and value2 = 11) are racing:

● if we execute these two processes on our computer system,
sometimes we will get 9 and sometimes we will get 11 as the final
value of the common variable (shared).

● This phenomenon is called a race condition.
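The scenario above can be reproduced directly; here is a Python sketch using threads, with the 1-second sleeps scaled down (the 0.1-second delay is an arbitrary choice):

```python
import threading
import time

shared = 10   # the common variable

def p1():
    global shared
    x = shared        # int X = shared
    x += 1            # X++  (x becomes 11)
    time.sleep(0.1)   # sleep(1), scaled down; the CPU runs p2 meanwhile
    shared = x        # shared = X

def p2():
    global shared
    y = shared        # int Y = shared
    y -= 1            # Y--  (y becomes 9)
    time.sleep(0.1)
    shared = y        # shared = Y

t1 = threading.Thread(target=p1)
t2 = threading.Thread(target=p2)
t1.start(); t2.start()
t1.join(); t2.join()
print(shared)  # 9 or 11 depending on which write lands last; never the expected 10
```

Both threads read shared = 10 before either writes it back, so one update is always lost: the classic race condition.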


● Critical Section Problem
● A critical section is a part of program where shared resources
are accessed by various processes.
● The critical section contains shared variables that need to be
synchronized to maintain the consistency of data variables.
● So the critical section problem means designing a way for
cooperative processes to access shared resources without creating
data inconsistencies.

Program 1                     Program 2

Header files                  Header files
main()                        main()
.                             .
.                             .

The entire code of each process is divided into two parts:

1. Critical section: the place where shared variables or resources are
placed.
2. Non-critical section: the rest of the code.
3. The portion common to the processes is kept in the critical section, and
4. the non-common portion in the non-critical section.

● In the entry section, the process requests entry into the critical
section.
● Any solution to the critical section problem must satisfy three
requirements:
● Mutual Exclusion: If a process is executing in its critical
section, then no other process is allowed to execute in the
critical section.
● Progress: If no process is executing in the critical section and
other processes are waiting outside the critical section, then
only those processes that are not executing in their
remainder section can participate in deciding which will
enter the critical section next, and the selection can not be
postponed indefinitely.
● Bounded Waiting: (TIME) A bound must exist on the
number of times that other processes are allowed to enter
their critical sections after a process has made a request to
enter its critical section and before that request is granted.

● Peterson’s Solution
● Peterson’s Solution is a classical software-based solution to the
critical section problem.
● In Peterson’s solution, we have two shared variables:
● boolean flag[i]: Initialized to FALSE,
● initially no one is interested in entering the critical section
● int turn: The process whose turn is to enter the critical section.

● Peterson’s Solution preserves all three conditions:


● Mutual Exclusion is assured as only one process can access the
critical section at any time.
● Progress is also assured, as a process outside the critical section
does not block other processes from entering the critical section.
● Bounded Waiting is preserved as every process gets a fair
chance.
● Disadvantages of Peterson's Solution
● It involves busy waiting.
● (In Peterson's solution, the code statement "while (flag[j]
&& turn == j);" is responsible for this.
● Busy waiting is not favored because it wastes CPU cycles that
could be used to perform other tasks.)
● It is limited to 2 processes.
● Peterson's solution cannot be relied upon on modern CPU
architectures, which may reorder memory operations.
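For illustration, the entry and exit sections of Peterson's solution can be sketched with Python threads. This is a teaching sketch, not production code: it relies on CPython's interpreter lock for memory visibility, and N (the iteration count) and the switch interval are arbitrary choices:

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so busy waits stay short

N = 500                       # iterations per process (arbitrary)
flag = [False, False]         # flag[i]: process i wants to enter
turn = 0                      # whose turn it is to yield
counter = 0                   # the shared resource

def process(i):
    global turn, counter
    j = 1 - i                 # index of the other process
    for _ in range(N):
        # entry section
        flag[i] = True        # declare interest
        turn = j              # give the other process priority
        while flag[j] and turn == j:
            pass              # busy wait
        # critical section
        counter += 1
        # exit section
        flag[i] = False

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 1000 = 2 * N: mutual exclusion preserved
```

The busy-wait loop is exactly the statement the text blames for wasted CPU cycles; on hardware with relaxed memory ordering, the flag and turn writes would additionally need memory barriers.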
● Semaphores
● Semaphores are normal variables used to coordinate the
activities of multiple processes in a computer system.
● They are used to enforce mutual exclusion, avoid race conditions,
and implement synchronization between processes.
● The process of using Semaphores provides two operations:
● wait (P) and signal (V).
● The wait operation decrements the value of the semaphore,
and the signal operation increments the value of the
semaphore.
● When the value of the semaphore is zero, any process that
performs a wait operation will be blocked until another
process performs a signal operation.
● Semaphores are used to implement critical sections, which are
regions of code that must be executed by only one process at a
time.
● By using semaphores, processes can coordinate access to shared
resources, such as shared memory or I/O devices.
● A semaphore is a special kind of synchronization data that can
be used only through specific synchronization primitives.
● When a process performs a wait operation on a semaphore, the
operation checks whether the value of the semaphore is >0.
● If so, it decrements the value of the semaphore and lets the process
continue its execution; otherwise, it blocks the process on the
semaphore.
● A signal operation on a semaphore activates a process blocked on
the semaphore if any, or increments the value of the semaphore by
1.
● Due to these semantics, semaphores are also called counting
semaphores.
● The initial value of a semaphore determines how many processes
can get past the wait operation.
● Wait: the wait operation decrements the value of its argument S
if it is positive; if S is zero or negative, no decrement is
performed and the process keeps waiting.

wait(S) {
    while (S <= 0)
        ;        // busy wait
    S--;
}

● Signal: the signal operation increments the value of its argument
S.

signal(S) {
    S++;
}
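The busy-wait pseudocode above wastes CPU cycles; a blocking alternative with the same wait/signal semantics can be sketched with a condition variable (the class and variable names here are our own, not from the text):

```python
import threading

class CountingSemaphore:
    """wait/signal semaphore built on a condition variable (no busy waiting)."""
    def __init__(self, value=1):
        self.value = value
        self.cond = threading.Condition()

    def wait(self):                  # P / down operation
        with self.cond:
            while self.value <= 0:
                self.cond.wait()     # block instead of spinning
            self.value -= 1

    def signal(self):                # V / up operation
        with self.cond:
            self.value += 1
            self.cond.notify()       # wake one blocked process, if any

s = CountingSemaphore(1)             # initialized to 1: acts as a mutex
shared = 0

def worker():
    global shared
    for _ in range(1000):
        s.wait()
        shared += 1                  # critical section
        s.signal()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared)  # 4000: four workers of 1000 increments each, no lost updates
```

Because a blocked process sleeps on the condition variable rather than spinning, the CPU is free for other work until a signal arrives.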

Semaphores are of two types:

1. Binary Semaphore –
● This is also known as a mutex lock.
● A mutex (mutual exclusion) lock is a synchronization tool that ensures
only one thread or process can access a shared resource (like data or a
code section) at a time, preventing data corruption and race conditions
in concurrent programming.
● It acts like a key: a thread "locks" the mutex before entering the critical
section, and other threads must wait until it is "unlocked" after the task
is finished, guaranteeing exclusive access and orderly execution.
● It can have only two values – 0 and 1. Its value is initialized to 1.
● It is used to implement the solution of critical section problems
with multiple processes.

2. Counting Semaphore –
● Its value can range over an unrestricted domain (it takes values
greater than one).
● It is used to control access to a resource that has multiple
instances.

Now let us see how it does so.

First, look at two operations that can be used to access and change the
value of the semaphore variable.
Some points regarding the P and V operations:

1. The P operation is also called the wait, sleep, or down operation.
2. The V operation is also called the signal, wake-up, or up operation.
3. Both operations are atomic; the semaphore s used here is initialized
to one.
4. Here, atomic means that the read, modify, and update of the variable
happen together at the same moment with no pre-emption, i.e., no
other operation that may change the variable is performed in between
the read, modify, and update.
5. A critical section is surrounded by both operations to implement
process synchronization.
6. The critical section of process P lies between the P and V operations.
● let us see how it implements mutual exclusion.


● Let there be two processes P1 and P2 and a semaphore s is
initialized as 1.
● Now if suppose P1 enters in its critical section then the value of
semaphore s becomes 0.
● Now if P2 wants to enter its critical section then it will wait until s
> 0, this can only happen when P1 finishes its critical section and
calls V operation on semaphore s.
● The description above is for binary semaphore which can take only
two values 0 and 1 and ensure mutual exclusion.
● There is one other type of semaphore called counting semaphore
which can take values greater than one.
● Now suppose there is a resource with 4 instances. We initialize
S = 4, and the rest is the same as for the binary semaphore.
Whenever a process wants the resource it calls P (wait), and when
it is done it calls V (signal). If the value of S becomes zero, a
process has to wait until S becomes positive again.
● For example, suppose four processes P1, P2, P3, and P4 all call
the wait operation on S (initialized to 4). If another process P5
then wants the resource, it must wait until one of the four calls
the signal function and the value of the semaphore becomes
positive.
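The S = 4 scenario above can be sketched with a counting semaphore; here five threads stand in for processes P1–P5, and a peak counter records how many ever hold the resource at once (the sleep duration is an arbitrary choice):

```python
import threading
import time

S = threading.Semaphore(4)   # a resource with 4 instances
lock = threading.Lock()      # protects the counters below
in_use = 0                   # instances currently held
peak = 0                     # maximum instances held at any moment

def process(name):
    global in_use, peak
    S.acquire()              # wait / P: blocks once all 4 instances are taken
    with lock:
        in_use += 1
        peak = max(peak, in_use)
    time.sleep(0.05)         # use the resource (arbitrary duration)
    with lock:
        in_use -= 1
    S.release()              # signal / V: frees one instance

threads = [threading.Thread(target=process, args=(f"P{i+1}",)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)  # never exceeds 4: the fifth process had to wait for a signal
```

The fifth process only gets past its wait operation after one of the first four signals, which is exactly the P1–P5 example in the text.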
Limitations:

1. One limitation of semaphores is priority inversion.

2. The operating system has to keep track of all calls to wait and
signal on the semaphore.
● There are two types of semaphores:
● Binary Semaphores
● Counting Semaphores.
● Binary Semaphores: They can only be either 0 or 1.
● They are also known as mutex locks, as the locks can provide
mutual exclusion.
● All the processes can share the same mutex semaphore, which is
initialized to 1.
● A process must wait until the semaphore's value is 1; it then sets
it to 0 (by the wait operation) and enters its critical section.
When it completes its critical section, it resets the value of the
mutex semaphore to 1, and some other process can enter its
critical section.
● Counting Semaphores: They can have any value and are not
restricted to a certain domain.

● They can be used to control access to a resource that has a


limitation on the number of simultaneous accesses.
● The semaphore can be initialized to the number of instances of
the resource.

● Whenever a process wants to use that resource, it checks if the


number of remaining instances is more than zero, i.e., the
process has an instance available. Then, the process can enter its
critical section thereby decreasing the value of the counting
semaphore by 1. After the process is over with the use of the
instance of the resource, it can leave the critical section thereby
adding 1 to the number of available instances of the resource.

Advantages of Semaphores:

● A simple and effective mechanism for process synchronization


● Supports coordination between multiple processes
● Provides a flexible and robust way to manage shared resources.
● It can be used to implement critical sections in a program.
● It can be used to avoid race conditions.

Disadvantages of Semaphores:

● Can lead to performance degradation due to the overhead
associated with wait and signal operations.
● Can result in deadlock if used incorrectly.
● Can cause performance issues in a program if not used properly.
● Can be difficult to debug and maintain.

Difference between Counting and Binary Semaphores:

Definition:
● Binary Semaphore: a semaphore whose integer value ranges over 0 and 1.
● Counting Semaphore: a semaphore whose counter can take multiple
values; the value can range over an unrestricted domain.

Structure / Implementation:
● Binary Semaphore:
typedef struct {
    int semaphore_variable;
} binary_semaphore;
● Counting Semaphore:
typedef struct {
    int semaphore_variable;
    Queue list;   /* a queue to store the list of tasks */
} counting_semaphore;

Representation:
● Binary Semaphore: 0 means a process or thread is accessing the
critical section and other processes should wait for it to exit;
1 means the critical section is free.
● Counting Semaphore: the value can range from 0 to N, where N is
the number of processes or threads that have to enter the critical
section.

Mutual Exclusion:
● Binary Semaphore: yes, it guarantees mutual exclusion, since just
one process or thread can enter the critical section at a time.
● Counting Semaphore: no, it does not guarantee mutual exclusion,
since more than one process or thread can enter the critical
section at a time.

Bounded Wait:
● Binary Semaphore: no, it does not guarantee bounded wait: only
one process can enter the critical section, and there is no limit on
how long it can stay there, so another process may starve.
● Counting Semaphore: yes, it guarantees bounded wait, since it
maintains a queue of all waiting processes or threads, and each
gets a chance to enter the critical section once; so there is no
question of starvation.

Starvation:
● Binary Semaphore: no waiting queue is present, so FCFS (first
come, first served) is not followed; starvation is possible and
busy waiting is present.
● Counting Semaphore: a waiting queue is present and FCFS is
followed, so there is no starvation and hence no busy waiting.

Number of Instances:
● Binary Semaphore: used only for a single instance of a resource
type; it can be used for only 2 processes.
● Counting Semaphore: used for any number of instances of a
resource type; it can be used for any number of processes.

● Dining Philosopher Problem Using Semaphores


● The Dining Philosopher Problem states that K philosophers are
seated around a circular table with one chopstick between each pair
of philosophers.
● A philosopher may eat if he can pick up the two chopsticks
adjacent to him.
● A chopstick may be picked up by either of its adjacent
philosophers, but not by both.

● Dining Philosopher

● There are three states of the philosopher:
● THINKING, HUNGRY, and EATING.
● Eating needs two forks.
● A philosopher picks up one fork at a time.
● How do we prevent deadlock?

A first idea:

Think until the left fork is available; when it is, pick it up

Think until the right fork is available; when it is, pick it up

Eat

Put the left fork down

Put the right fork down

Repeat from the start


● Here there are two semaphores: a mutex and a semaphore array for the
philosophers.

● The mutex is used so that no two philosophers may pick up or put

down forks at the same time.

● The array is used to control the behavior of each philosopher. But

semaphores can still result in deadlock due to programming errors.

Outline of a philosopher process:

The steps for the Dining Philosopher Problem solution


using semaphores are as follows

1. Initialize the semaphores for each fork to 1 (indicating that they are
available).

2. Initialize a binary semaphore (mutex) to 1 to ensure that only one


philosopher can attempt to pick up a fork at a time.

3. For each philosopher process, create a separate thread that executes


the following code:

● While true:
● Think for a random amount of time.
● Acquire the mutex semaphore to ensure that only one
philosopher can attempt to pick up a fork at a time.
● Attempt to acquire the semaphore for the fork to the
left.
● If successful, attempt to acquire the semaphore for the fork to
the right.
● If both forks are acquired successfully, eat for a random amount
of time and then release both semaphores.
● If not successful in acquiring both forks, release the semaphore
for the fork to the left (if acquired) and then release the mutex
semaphore and go back to thinking.

4. Run the philosopher threads concurrently.
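The steps above can be sketched in Python. This is a simplified variant in which a philosopher holds the mutex while picking up both forks (rather than releasing and retrying), which also prevents the circular wait; N, the round count, and the sleep times are arbitrary choices:

```python
import random
import threading
import time

N = 5                                                # philosophers and forks (arbitrary)
forks = [threading.Semaphore(1) for _ in range(N)]   # step 1: each fork available
mutex = threading.Semaphore(1)                       # step 2: one pickup at a time
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    for _ in range(3):                               # three think/eat rounds (arbitrary)
        time.sleep(random.uniform(0, 0.01))          # think
        mutex.acquire()                              # serialize fork pickup: no circular wait
        forks[left].acquire()
        forks[right].acquire()
        mutex.release()
        meals[i] += 1                                # eat
        time.sleep(random.uniform(0, 0.01))
        forks[left].release()                        # put down both forks
        forks[right].release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # every philosopher eats 3 times; no deadlock
```

Holding the mutex across both pickups reduces concurrency somewhat, but it guarantees that no cycle of philosophers each holding one fork can ever form.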

By using semaphores to control access to the forks, the Dining


Philosopher Problem can be solved in a way that avoids deadlock and
starvation.

The use of the mutex semaphore ensures that only one philosopher can
attempt to pick up a fork at a time, while the use of the fork semaphores
ensures that a philosopher can only eat if both forks are available.

Overall, the Dining Philosopher Problem solution using semaphores is a


classic example of how synchronization mechanisms can be used to
solve complex synchronization problems in concurrent programming.
Sleeping Barber problem in Process
Synchronization
● The Sleeping Barber problem is a classic problem in process
synchronization that is used to illustrate synchronization issues that
can arise in a concurrent system. The problem is as follows:
● There is a barber shop with one barber and a number of chairs
for waiting customers.
● Customers arrive at random times and if there is an available
chair, they take a seat and wait for the barber to become
available.
● If there are no chairs available, the customer leaves. When the
barber finishes with a customer, he checks if there are any waiting
customers.
● If there are, he begins cutting the hair of the next customer in the
queue.
● If there are no customers waiting, he goes to sleep.
● The problem is to write a program that coordinates the actions of
the customers and the barber in a way that avoids
synchronization problems, such as deadlock or starvation.
● One solution to the Sleeping Barber problem is to use semaphores
to coordinate access to the waiting chairs and the barber chair. The
solution involves the following steps:
● Initialize two semaphores: one for the number of waiting chairs
and one for the barber chair.
● The waiting chairs semaphore is initialized to the number of chairs,
and the barber chair semaphore is initialized to zero.
● Customers should acquire the waiting chairs semaphore before
taking a seat in the waiting room.
● If there are no available chairs, they should leave.
● When the barber finishes cutting a customer’s hair, he releases the
barber chair semaphore and checks if there are any waiting
customers.
● If there are, he acquires the barber chair semaphore and begins
cutting the hair of the next customer in the queue.
● The barber should wait on the barber chair semaphore if there are
no customers waiting.
● The solution ensures that the barber never cuts the hair of more
than one customer at a time, and that customers wait if the barber
is busy.
● It also ensures that the barber goes to sleep if there are no
customers waiting.
● However, there are variations of the problem that can require more
complex synchronization mechanisms to avoid synchronization
issues.
● For example, if multiple barbers are employed, a more complex
mechanism may be needed to ensure that they do not interfere with
each other.

Problem: The analogy is based upon a hypothetical barber shop with
one barber. The shop has one barber, one barber chair, and n chairs in
the waiting room for customers, if there are any, to sit on.

● If there is no customer, then the barber sleeps in his own chair.


● When a customer arrives, he has to wake up the barber.
● If there are many customers and the barber is cutting a
customer’s hair, then the remaining customers either wait if
there are empty chairs in the waiting room or they leave if no
chairs are empty.
Solution:
The solution to this problem uses three semaphores. The first is for the
customers; it counts the number of customers present in the waiting
room (the customer in the barber chair is not included, because he is not
waiting).

Second, the barber semaphore (0 or 1) is used to tell whether the barber is

idle or working. The third, a mutex, is used to provide the mutual exclusion
required for the processes to execute.

In the solution, a count is kept of the number of customers waiting in the

waiting room; if the number of customers equals the number of chairs in

the waiting room, the upcoming customer leaves the barbershop. When

the barber shows up in the morning, he executes the barber procedure, causing

him to block on the semaphore customers because it is initially 0. The

barber then goes to sleep until the first customer comes up.

When a customer arrives, he executes the customer procedure. The customer

acquires the mutex to enter the critical region; if another customer enters
thereafter, the second one will not be able to do anything until the first one

has released the mutex.

The customer then checks the chairs in the waiting room: if the number of

waiting customers is less than the number of chairs, he sits; otherwise, he

leaves and releases the mutex. (RESOURCE)

If a chair is available, the customer sits in the waiting room, increments

the waiting variable, and also increments the customers semaphore

(COUNT); this wakes up the barber if he is sleeping.

At this point, the customer and barber are both awake and the barber is

ready to give that person a haircut.

When the haircut is over, the customer exits the procedure, and if there

are no customers in the waiting room, the barber sleeps.
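The semaphore solution described above can be sketched with Python threads; the chair count and the number of arriving customers are arbitrary choices, and a daemon thread plays the barber:

```python
import threading

CHAIRS = 3                            # waiting-room chairs (arbitrary)
customers = threading.Semaphore(0)    # counts waiting customers
barber_ready = threading.Semaphore(0) # barber signals when a haircut is done
mutex = threading.Lock()              # protects the counters below
waiting = 0
haircuts = 0
turned_away = 0

def barber():
    global waiting, haircuts
    while True:
        customers.acquire()           # sleep until a customer arrives
        with mutex:
            waiting -= 1              # take the next customer from a chair
        haircuts += 1                 # cut hair (only the barber touches this)
        barber_ready.release()        # the customer may leave the chair

def customer():
    global waiting, turned_away
    with mutex:
        if waiting == CHAIRS:         # no free chair: leave the shop
            turned_away += 1
            return
        waiting += 1                  # take a seat
    customers.release()               # wake the barber if he is sleeping
    barber_ready.acquire()            # wait until the haircut is finished

threading.Thread(target=barber, daemon=True).start()
arrivals = [threading.Thread(target=customer) for _ in range(8)]
for t in arrivals: t.start()
for t in arrivals: t.join()
print(haircuts + turned_away)  # all 8 customers accounted for
```

The customers semaphore makes the barber sleep when the shop is empty, the mutex protects the waiting count, and the chair check turns customers away exactly when the waiting room is full.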


Introduction of Deadlock in Operating System
A process in an operating system uses resources in the following way.

1. Requests a resource
2. Use the resource
3. Releases the resource

A deadlock is a situation where a set of processes are blocked because


each process is holding a resource and waiting for another resource
acquired by some other process.

Example: when two trains are coming toward each other on the same
track and there is only one track, none of the trains can move once they
are in front of each other.

A similar situation occurs in operating systems when there are two or


more processes that hold some resources and wait for resources held by
other(s).
For example, in the diagram below, Process 1 is holding Resource 1 and
waiting for Resource 2, which is acquired by Process 2, while Process 2
is waiting for Resource 1.

Examples Of Deadlock

1. The system has 2 tape drives. P1 and P2 each hold one tape
drive and each needs another one.
2. Semaphores A and B, each initialized to 1; P0 and P1 reach
deadlock as follows:
● P0 executes wait(A) and is then preempted.
● P1 executes wait(B).
● Now P0 waits for B and P1 waits for A: both are deadlocked.

P0          P1

wait(A);    wait(B);

wait(B);    wait(A);

3. Assume the space is available for allocation of 200K bytes, and


the following sequence of events occurs.

P0 P1

Request 80KB; Request 70KB;


Request 60KB; Request 80KB;

Deadlock occurs if both processes progress to their second request: 150 KB
are then allocated, and the remaining 50 KB can satisfy neither the pending
60 KB request nor the pending 80 KB request.

Necessary Conditions for Deadlock:

1. Mutual Exclusion: At least one resource involved must be used in a
mutually exclusive way.

● (one by one; interleaved use is not allowed)
● Two or more resources are non-shareable
● (only one process can use such a resource at a time)

2. No Preemption: A resource cannot be taken from a process unless the
process releases it voluntarily (the task holding it runs to completion or
releases the resource).

3. Hold and Wait: A process is holding at least one resource and waiting
for additional resources.

4. Circular Wait: A set of processes are waiting for each other in circular
fashion.

Methods for handling deadlock


There are three ways to handle deadlock

1) Deadlock prevention or avoidance:

Prevention:

● The idea is to never let the system enter a deadlock state.
● The system makes sure that at least one of the four necessary
conditions mentioned above cannot arise.
● These techniques are costly, so we use them in cases where our
priority is keeping the system deadlock-free.

One can zoom into each category individually. Prevention is done
by negating one of the above-mentioned necessary conditions for
deadlock.
● Prevention can be done in four different ways:

1. Eliminate mutual exclusion
2. Solve hold and wait
3. Allow preemption
4. Break circular wait
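The fourth way, breaking circular wait, is commonly done by imposing a fixed global ordering on resources and requiring every process to acquire them in ascending order. The sketch below uses hypothetical resource names R1 and R2 to illustrate the idea.

```python
import threading

# Each resource gets a fixed global rank; every process must acquire
# resources in ascending rank order, so a cycle of waits cannot form.
RANK = {"R1": 1, "R2": 2}
LOCK = {name: threading.Lock() for name in RANK}
order_log = []

def use(needed):
    # sort the request by rank: this is the negation of circular wait
    ordered = sorted(needed, key=RANK.get)
    for name in ordered:
        LOCK[name].acquire()
        order_log.append(name)
    for name in reversed(ordered):
        LOCK[name].release()

# Even though the two threads *name* the resources in opposite order,
# both acquire R1 before R2, so neither can hold R2 while waiting on R1.
t1 = threading.Thread(target=use, args=(["R1", "R2"],))
t2 = threading.Thread(target=use, args=(["R2", "R1"],))
t1.start(); t2.start()
t1.join(); t2.join()
print(order_log)   # ['R1', 'R2', 'R1', 'R2']
```

Whichever thread wins R1 first completes both acquisitions before the other can start, so the log always shows R1 before R2 for each thread.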

● Avoidance:
Avoidance is forward-looking. To use the strategy of
“Avoidance”, we have to make an assumption:
● all information about the resources a process will need must be
known to us before the execution of the process.
● We use the Banker’s algorithm (a gift from Dijkstra) to avoid
deadlock.
In prevention and avoidance, we preserve the correctness of data,
but performance decreases.

2) Deadlock detection and recovery: If deadlock prevention or
avoidance is not applied, the system can handle deadlock by
detection and recovery, which consists of two phases:

1. In the first phase, we examine the state of the processes and
check whether there is a deadlock in the system.
2. If a deadlock is found in the first phase, we apply an algorithm
to recover from it.

In Deadlock detection and recovery, we get the correctness of data but


performance decreases.

Recovery from Deadlock

1. Manual Intervention:

When a deadlock is detected, one option is to inform the operator and let
them handle the situation manually. While this approach allows for
human judgment and decision-making, it can be time-consuming and
may not be feasible in large-scale systems.

2. Automatic Recovery:
An alternative approach is to enable the system to recover from deadlock
automatically.

This method involves breaking the deadlock cycle by either aborting


processes or preempting resources. Let’s delve into these strategies in
more detail.

Recovery from Deadlock: Process Termination:

1. Abort all deadlocked processes:

This approach breaks the deadlock cycle, but it comes at a significant


cost.

The processes that were aborted may have executed for a considerable
amount of time, resulting in the loss of partial computations.

These computations may need to be recomputed later.

2. Abort one process at a time:

Instead of aborting all deadlocked processes simultaneously, this


strategy involves selectively aborting one process at a time until the
deadlock cycle is eliminated. However, this incurs overhead as a
deadlock-detection algorithm must be invoked after each process
termination to determine if any processes are still deadlocked.

Factors for choosing the termination order:


– Completion time and the progress made so far

– Resources consumed by the process

– Resources required to complete the process

– Number of processes to be terminated

– Process type (interactive or batch)

Recovery from Deadlock: Resource Preemption:

1. Selecting a victim:

Resource preemption involves choosing which resources and processes


should be preempted to break the deadlock.

The selection order aims to minimize the overall cost of recovery.


Factors considered for victim selection may include the number of
resources held by a deadlocked process and the amount of time the
process has consumed.

2. Rollback:

If a resource is preempted from a process, the process cannot continue


its normal execution as it lacks the required resource. Rolling back the
process to a safe state and restarting it is a common approach.
Determining a safe state can be challenging, leading to the use of total
rollback, where the process is aborted and restarted from scratch.

3. Starvation prevention:

To prevent resource starvation, it is essential to ensure that the same


process is not always chosen as a victim. If victim selection is solely
based on cost factors, one process might repeatedly lose its resources
and never complete its designated task. To address this, it is advisable to
limit the number of times a process can be chosen as a victim, including
the number of rollbacks in the cost factor.

3) Deadlock ignorance: If deadlocks are very rare, let them happen and
reboot the system. This is the approach that both Windows and UNIX
take; it is known as the ostrich algorithm.

With deadlock ignorance, performance is better than with the two
methods above, but the correctness of data is not guaranteed.

Safe State:

A safe state is a state in which there is no deadlock and the system can
allocate resources to every process, in some order, so that each process
can run to completion. It is achievable if:

● Whenever a process needs an unavailable resource, it can wait until
the resource has been released by the process to which it is
currently allocated. If no such completion sequence exists, the
state is unsafe.
● Every process can eventually be allocated all the resources it
requests.

Deadlock Characterization

A deadlock happens in an operating system when two or more processes need,
in order to complete their execution, a resource that is held by another
process.

A deadlock occurs if the four Coffman conditions hold true simultaneously.
These conditions are not mutually exclusive. They are given as follows −

Mutual Exclusion
There should be a resource that can only be held by one process at a time. In
the diagram below, there is a single instance of Resource 1 and it is held by
Process 1 only.

Hold and Wait


A process can hold multiple resources and still request more resources from
other processes which are holding them. In the diagram given below, Process 2
holds Resource 2 and Resource 3 and is requesting the Resource 1 which is held
by Process 1.

No Preemption
A resource cannot be preempted from a process by force. A process can only
release a resource voluntarily. In the diagram below, Process 2 cannot preempt
Resource 1 from Process 1. It will only be released when Process 1 relinquishes
it voluntarily after its execution is complete.

Circular Wait
A process is waiting for the resource held by the second process, which is
waiting for the resource held by the third process and so on, till the last process
is waiting for a resource held by the first process. This forms a circular chain.
For example: Process 1 is allocated Resource2 and it is requesting Resource 1.
Similarly, Process 2 is allocated Resource 1 and it is requesting Resource 2. This
forms a circular wait loop.

Resource Allocation Graph (RAG) in


Operating System

Resource Allocation Graph (RAG)

A resource allocation graph shows which resource is held by which
process and which process is waiting for a resource of a specific
kind.
● It is a simple and straightforward tool to outline how interacting
processes can deadlock.
● The resource allocation graph describes the condition of the
system in terms of processes and resources: how many resources
are allocated and what each process has requested.
● Everything can be represented in terms of a graph. One benefit of
a graph is that a deadlock can sometimes be spotted directly from
the RAG, whereas it might not be obvious from a table.
● Tables are better if the system contains many processes and
resources; a graph is better if the system contains only a few.


Types of Vertices in RAG

RAG also contains vertices and edges. In RAG vertices are two
types

1. Process Vertex: Every process is represented as a process
vertex, generally drawn as a circle.

2. Resource Vertex: Every resource is represented as a resource
vertex, drawn as a box. It comes in two types:

● Single-instance resource type: a box containing a single dot.
● Multi-instance resource type: a box containing several dots;
the number of dots indicates how many instances of that
resource type are present.
How many Types of Edges are there in
RAG?

Now coming to the edges of the RAG, there are two types of edges in a RAG −

● Assign Edge: If a resource is already assigned to a process, the edge
from the resource to the process is called an assign edge.
● Request Edge: If a process requires some resource to complete its
execution, the edge from the process to the resource is called a
request edge.

So, if a process is using a resource, an arrow is drawn from the resource node
to the process node. If a process is requesting a resource, an arrow is drawn
from the process node to the resource node.

Example 1 (Single instances RAG)


If there is a cycle in the Resource Allocation Graph and each resource in the
cycle provides only one instance, then the processes will be in deadlock. For
example, if process P1 holds resource R1, process P2 holds resource R2 and
process P1 is waiting for R2 and process P2 is waiting for R1, then process P1
and process P2 will be in deadlock.

Here’s another example, which shows processes P1 and P2 acquiring resources
R1 and R2 while process P3 is waiting to acquire both resources. In this
example, there is no deadlock because there is no circular dependency. So a
cycle in a single-instance resource type graph is a sufficient condition for
deadlock.
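The cycle test for a single-instance RAG can be sketched as a depth-first search over a directed graph. The graph below encodes the first example above: assign edges go from resource to process, request edges from process to resource.

```python
# Single-instance RAG from the example: P1 holds R1 and requests R2,
# P2 holds R2 and requests R1.
graph = {
    "P1": ["R2"],   # request edge: P1 -> R2
    "P2": ["R1"],   # request edge: P2 -> R1
    "R1": ["P1"],   # assign edge: R1 -> P1
    "R2": ["P2"],   # assign edge: R2 -> P2
}

def has_cycle(g):
    """DFS with a recursion stack; reaching a node already on the stack
    means a back edge, i.e. a cycle."""
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in g.get(node, []):
            if nxt in on_stack:
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in g if n not in visited)

print(has_cycle(graph))   # True: with single instances, P1 and P2 are deadlocked
```

Because every resource here has a single instance, a True result is sufficient to declare deadlock, matching the statement above.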

Example 2 (Multi-instances RAG)


From the above example, it is not possible to say whether the RAG is in a safe
or an unsafe state. To see the state of this RAG, let’s construct the
allocation matrix and the request matrix.

● The total number of processes are three; P1, P2 & P3 and the total
number of resources are two; R1 & R2.
● Allocation matrix –
● For constructing the allocation matrix, just go to the resources and see to
which process it is allocated.
● R1 is allocated to P1, therefore write 1 in allocation matrix and similarly, R2
is allocated to P2 as well as P3 and for the remaining element just write 0.
● Request matrix –
● In order to find out the request matrix, you have to go to the process and
see the outgoing edges.
● P1 is requesting resource R2, so write 1 in the matrix and similarly, P2
requesting R1 and for the remaining element write 0.
● So now available resource is = (0, 0).
● Checking deadlock (safe or not) –

● So, there is no deadlock in this RAG. Even though there is a cycle,
there is no deadlock. So in a multi-instance resource graph, a cycle
is not a sufficient condition for deadlock.

● The example above is the same as the previous one, except that
process P3 is now also requesting resource R1. So the table becomes as
shown below.

● Now the available resource vector is (0, 0), but the requirements are
(0, 1), (1, 0) and (1, 0), so no request can be fulfilled: the system
is in deadlock. Therefore, not every cycle in a multi-instance resource
type graph is a deadlock, but if there is a deadlock, there has to be a
cycle. So, in the case of a RAG with multi-instance resource types, a
cycle is a necessary condition for deadlock, but not a sufficient one.
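The matrix reduction used above can be sketched as code. The matrices mirror the worked example (R1 held by P1; R2 held by P2 and P3; Available = (0, 0)), and the two request matrices correspond to the safe variant and the deadlocked variant.

```python
def detect_deadlock(allocation, request, available):
    """Matrix-reduction detection: repeatedly 'finish' any process whose
    pending request fits in the work vector and reclaim its allocation;
    whatever cannot be finished is deadlocked."""
    n = len(allocation)
    work = list(available)
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(r <= w for r, w in zip(request[i], work)):
                # Pi can complete: release its resources back into work
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return [f"P{i + 1}" for i in range(n) if not finished[i]]

# Allocation for P1..P3 over (R1, R2): R1 -> P1, R2 -> P2 and P3.
allocation = [[1, 0], [0, 1], [0, 1]]

# First request matrix (P3 requests nothing): the cycle reduces away.
print(detect_deadlock(allocation, [[0, 1], [1, 0], [0, 0]], [0, 0]))  # []

# Second request matrix (P3 also requests R1): nothing can be fulfilled.
print(detect_deadlock(allocation, [[0, 1], [1, 0], [1, 0]], [0, 0]))  # ['P1', 'P2', 'P3']
```

The empty list for the first case shows why the cycle alone was not enough: P3 could finish and release R2, unblocking the others.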

Banker’s Algorithm in Operating System


(Deadlock Avoidance Algorithm ) also
used for deadlock Detection
The banker’s algorithm is a resource allocation and deadlock avoidance
algorithm that tests for safety by simulating the allocation for the predetermined
maximum possible amounts of all resources, then makes an “s-state” check to
test for possible activities, before deciding whether allocation should be allowed
to continue.

Why Banker’s Algorithm is Named So?

● The banker’s algorithm is named so because it is used in the banking


system to check whether a loan can be sanctioned to a person or not.
● Suppose there are n account holders in a bank and the total sum of
their money is S.
● When a person applies for a loan, the bank first checks whether, after
lending the amount, it can still satisfy the needs of all its account
holders from the money it has left; only then is the loan sanctioned.
● This is done so that if all the account holders come to withdraw their
money, the bank can still pay them.
● It also helps the OS to successfully share the resources between all the
processes.
● It is called the banker’s algorithm because bankers need a similar
algorithm- they admit loans that collectively exceed the bank’s funds and
then release each borrower’s loan in installments. The banker’s algorithm
uses the notation of a safe allocation state to ensure that granting a
resource request cannot lead to a deadlock either immediately or in the
future.
In other words, the bank would never allocate its money in such a way that
it can no longer satisfy the needs of all its customers. The bank would try
to be in a safe state always.

The following Data structures are used to implement the Banker’s Algorithm:
Let ‘n’ be the number of processes in the system and ‘m’ be the number of
resource types.

Available

● It is a 1-d array of size ‘m’ indicating the number of available resources of


each type.
● Available[ j ] = k means there are ‘k’ instances of resource type Rj

Max

● It is a 2-d array of size ‘n*m’ that defines the maximum demand of each
process in a system.
● Max[ i, j ] = k means process Pi may request at most ‘k’ instances of
resource type Rj.

Allocation

● It is a 2-d array of size ‘n*m’ that defines the number of resources of each
type currently allocated to each process.
● Allocation[ i, j ] = k means process Pi is currently allocated ‘k’ instances of
resource type Rj

Need
● It is a 2-d array of size ‘n*m’ that indicates the remaining resource need of
each process.
● Need [ i, j ] = k means process Pi currently needs ‘k’ instances of
resource type Rj
● Need [ i, j ] = Max [ i, j ] – Allocation [ i, j ]
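The Need relation above is a simple elementwise subtraction. The Max and Allocation values below are made-up illustrative numbers for two processes and three resource types.

```python
# Hypothetical Max and Allocation matrices (2 processes, 3 resource types).
max_demand = [[7, 5, 3], [3, 2, 2]]
allocation = [[0, 1, 0], [2, 0, 0]]

# Need[i][j] = Max[i][j] - Allocation[i][j]
need = [[m - a for m, a in zip(max_row, alloc_row)]
        for max_row, alloc_row in zip(max_demand, allocation)]
print(need)   # [[7, 4, 3], [1, 2, 2]]
```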

Allocationi specifies the resources currently allocated to process Pi, and
Needi specifies the additional resources that process Pi may still request to
complete its task.

Banker’s algorithm consists of a Safety algorithm and a Resource-request
algorithm.
Banker’s algorithm consists of a Safety algorithm and a Resource request
algorithm.

Banker’s Algorithm

1. Active := Running ∪ Blocked;
   for k = 1 .. r
       New_request[k] := Requested_resources[requesting_process, k];

2. Simulated_allocation := Allocated_resources;
   for k = 1 .. r   // Compute the projected allocation state
       Simulated_allocation[requesting_process, k] :=
           Simulated_allocation[requesting_process, k] + New_request[k];

3. feasible := true;
   for k = 1 .. r   // Check whether the projected allocation state is feasible
       if Total_resources[k] < Simulated_total_alloc[k] then feasible := false;

4. if feasible = true
   then // Check whether the projected allocation state is a safe allocation state
       while set Active contains a process Pl such that
             for all k, Total_resources[k] − Simulated_total_alloc[k] >=
                        Max_need[l, k] − Simulated_allocation[l, k]
           Delete Pl from Active;
           for k = 1 .. r
               Simulated_total_alloc[k] :=
                   Simulated_total_alloc[k] − Simulated_allocation[l, k];

5. if set Active is empty
   then // Projected allocation state is a safe allocation state
       for k = 1 .. r   // Delete the request from pending requests
           Requested_resources[requesting_process, k] := 0;
       for k = 1 .. r   // Grant the request
           Allocated_resources[requesting_process, k] :=
               Allocated_resources[requesting_process, k] + New_request[k];
           Total_alloc[k] := Total_alloc[k] + New_request[k];


Safety Algorithm

The algorithm for finding out whether or not a system is in a safe state can be
described as follows:

1) Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively.
Initialize: Work = Available
Finish[i] = false; for i=1, 2, 3, 4….n
2) Find an i such that both
a) Finish[i] = false
b) Needi <= Work
if no such i exists goto step (4)
3) Work = Work + Allocation[i]
Finish[i] = true
goto step (2)
4) if Finish [i] = true for all i
then the system is in a safe state

Resource-Request Algorithm
Let Requesti be the request array for process Pi. Requesti [j] = k means process
Pi wants k instances of resource type Rj. When a request for resources is made
by process Pi, the following actions are taken:

1) If Requesti <= Needi


Goto step (2) ; otherwise, raise an error condition, since the process
has exceeded its maximum claim.
2) If Requesti <= Available
Goto step (3); otherwise, Pi must wait, since the resources are not
available.
3) Have the system pretend to have allocated the requested resources
to process Pi by modifying the state as
follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti

Example:

Consider a system with five processes P0 through P4 and three resource
types A, B and C. Resource type A has 10 instances, B has 5 instances and
C has 7 instances. Suppose at time t0 the following snapshot of the system
has been taken:

Q.1: What will be the content of the Need matrix?


Need [i, j] = Max [i, j] – Allocation [i, j]
So, the content of Need Matrix is:

Q.2: Is the system in a safe state? If Yes, then what is the safe sequence?

Applying the Safety algorithm on the given system,


Q.3: What will happen if process P1 requests one additional instance of
resource type A and two instances of resource type C?
We must determine whether this new system state is safe. To do so, we again
execute Safety algorithm on the above data structures.

Hence the new system state is safe, so we can immediately grant the request for
process P1 .
Code for Banker’s Algorithm
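A hedged Python sketch of the Safety and Resource-request algorithms follows. The snapshot values are illustrative, chosen only to be consistent with the totals stated in the example (10 instances of A, 5 of B, 7 of C); the original snapshot table is not reproduced in this document.

```python
def is_safe(available, max_demand, allocation):
    """Safety algorithm: return (safe?, safe sequence of process indices)."""
    n = len(allocation)
    need = [[m - a for m, a in zip(max_demand[i], allocation[i])]
            for i in range(n)]
    work = list(available)                       # Work = Available
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            # find i with Finish[i] = false and Need_i <= Work
            if not finish[i] and all(x <= w for x, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                sequence.append(i)
                break
        else:
            return False, []                     # no candidate: unsafe
    return True, sequence

def request_resources(pid, request, available, max_demand, allocation):
    """Resource-request algorithm: grant only if the resulting state is safe."""
    need = [m - a for m, a in zip(max_demand[pid], allocation[pid])]
    if any(r > x for r, x in zip(request, need)):
        raise ValueError("process exceeded its maximum claim")
    if any(r > a for r, a in zip(request, available)):
        return False                             # must wait: not available
    # Pretend to allocate, then run the safety check on the new state.
    new_available = [a - r for a, r in zip(available, request)]
    new_allocation = [row[:] for row in allocation]
    new_allocation[pid] = [a + r for a, r in zip(allocation[pid], request)]
    safe, _ = is_safe(new_available, max_demand, new_allocation)
    return safe

# Illustrative snapshot: 5 processes P0..P4, resource types A, B, C.
available = [3, 3, 2]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]

print(is_safe(available, max_demand, allocation))   # (True, [1, 3, 0, 2, 4])
print(request_resources(1, [1, 0, 2],               # P1 asks for 1 A and 2 C
                        available, max_demand, allocation))   # True
```

The second call mirrors Q.3 above: P1's request for one instance of A and two of C is granted because the pretended state is still safe.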
UNIVERSITY QUESTIONS

● What is Interprocess communication . Explain any one method in detail


● What is Deadlock..
● Explain four Necessary conditions for Deadlock
● Explain Critical section problem
● List down the different types of Semaphores
