Distributed Computing Module 3 Important Topics PYQs
For more notes visit
https://rtpnotes.vercel.app
1. Explain the issues in Deadlock Detection.
1. Detection of Deadlocks
Correctness Conditions for Deadlock Detection
2. Resolution of a Detected Deadlock
2. List the requirements of Mutual Exclusion Algorithm.
1. Safety Property
2. Liveness Property
3. Fairness
3. Calculate the rate at which a system can execute the critical section requests if the
synchronization delay and average critical section execution times are 3 and 1 second
respectively.
Given:
4. List any three performance metrics of mutual exclusion algorithms.
5. Compare Token based approach and Non-token based approach.
Token-based Approach
Non-token-based Approach
6. Explain how wait for graph can be used in Deadlock Detection.
What is a Wait-For Graph (WFG)?
How is WFG Used for Deadlock Detection?
Example: Detecting Deadlock Using WFG
Scenario:
WFG Representation:
Maintain a Wait-For Graph (WFG): This graph shows which process is waiting for which
other process.
Search for Cycles in the WFG: A cycle means a deadlock. But in a distributed system
(with multiple machines), the WFG is spread across many sites, so detecting cycles
becomes tricky.
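The cycle search described above can be written as a small depth-first traversal of the WFG. This is a minimal single-site sketch (process names and the function name are illustrative):

```python
# Deadlock detection on a Wait-For Graph (WFG): the graph maps each
# process to the processes it is waiting for; a cycle means deadlock
# under the single-resource/AND models.

def has_cycle(wfg):
    """Return True if the WFG contains a cycle."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wfg}

    def dfs(p):
        color[p] = GREY
        for q in wfg.get(p, []):
            if color.get(q, WHITE) == GREY:      # back edge -> cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wfg)

# P1 waits for P2, P2 waits for P3, P3 waits for P1 -> deadlock
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```

In a distributed system this traversal cannot run over one local graph; the WFG fragments at each site must be combined (or probed) by a distributed algorithm, which is exactly where the difficulties below come from.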
1. Progress:
It must detect real deadlocks in finite time (not keep waiting).
Once a deadlock happens, detection should keep moving and not pause.
2. Safety:
It must not report false deadlocks (also called phantom deadlocks).
In distributed systems, machines don’t share a global memory or clock, so sometimes
they see outdated or incomplete info. This can make them think there’s a deadlock
when there isn’t one.
Also, remove old wait-for information from the system immediately after breaking the
deadlock. If we don’t, the system might wrongly detect deadlocks that no longer exist (again,
false deadlocks).
1. Safety Property
2. Liveness Property
💡 Imagine you're waiting in line. Liveness ensures the line keeps moving and no one
gets stuck or skipped forever.
3. Fairness
💡 Like a ticket system at a bakery — whoever takes a ticket first gets served first.
Synchronization delay: Time needed to coordinate entry and exit into the critical section
(e.g., acquiring and releasing locks, waiting for other processes, etc.)
Critical Section execution time: Actual time a process spends inside the CS.
Given:
Synchronization delay = 3 seconds, critical section execution time = 1 second.
After each CS execution, the system must wait one synchronization delay before the next process can enter, so one request is served every 3 + 1 = 4 seconds.
Rate = 1 / (3 + 1) = 0.25 requests per second.
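The same arithmetic as a tiny sketch (the function name is mine, for illustration):

```python
# One request completes every (synchronization delay + CS execution time)
# seconds, since the system must re-synchronize between successive entries.

def cs_throughput(sync_delay, exec_time):
    return 1 / (sync_delay + exec_time)

print(cs_throughput(sync_delay=3, exec_time=1))  # 0.25 requests per second
```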
Non-token-based Approach
No token is used.
A process asks permission from all other processes before entering CS.
Uses more messages, but there’s no risk of token loss.
A Wait-For Graph (WFG) is a visual way to check for deadlocks in a system. It is a directed
graph where:
Each node represents a process.
A directed edge from Pi to Pj means Pi is blocked, waiting for a resource currently held by Pj.
Scenario:
P1 holds R1 and waits for R2, P2 holds R2 and waits for R3, P3 holds R3 and waits for R1.
WFG Representation:
P1 → P2 → P3 → P1
The cycle shows each process waiting on the next forever, so a deadlock exists.
The Ricart–Agrawala Algorithm helps computers take turns fairly and efficiently. Here's how
it works:
1. Single-Resource Model
A process can have at most one outstanding request, for a single unit of a resource.
Deadlock Detection:
In a Wait-For Graph (WFG), each node can have at most one outgoing edge.
If a cycle is present, a deadlock has occurred — here a cycle is both necessary and sufficient.
Example:
P1 waits for P2 and P2 waits for P1: the cycle P1 → P2 → P1 is a deadlock.
2. AND Model
A process requests several resources and needs all of them before it can proceed.
Deadlock Detection:
Since every requested resource is required, a process in a cycle is permanently blocked — a cycle in the WFG means deadlock.
Example:
P1 holds R1 and requests R2 and R3; P2 holds R2 and requests R1. P1 and P2 form a cycle, so both are deadlocked.
3. OR Model
A process requests multiple resources, but it only needs any one of them to proceed.
If at least one resource is granted, the process continues execution.
Deadlock Detection:
A cycle in the WFG does not always mean a deadlock because a process may still
proceed if one of the requested resources is available.
Example:
P1 requests R1 or R2. Even if R1's holder forms a cycle with P1, P1 can still proceed once R2 is granted — so the cycle alone does not prove deadlock. (In the OR model, deadlock corresponds to a knot in the WFG, not just a cycle.)
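A minimal sketch of this distinction: in the OR model a process is truly stuck only in a knot, i.e. when every process it can reach by wait-edges can reach it back, so no wait can ever be satisfied from outside (process names are illustrative):

```python
# A cycle alone does not prove OR-model deadlock: a process in the cycle
# may still get one of its resources from outside it. A knot does.

def reachable(wfg, start):
    seen, stack = set(), [start]
    while stack:
        for q in wfg.get(stack.pop(), []):
            if q not in seen:
                seen.add(q)
                stack.append(q)
    return seen

def in_knot(wfg, p):
    succ = reachable(wfg, p)
    return bool(succ) and all(p in reachable(wfg, q) for q in succ)

# Cycle P1 -> P2 -> P3 -> P1, but P2 also waits on an unblocked P4:
wfg = {"P1": ["P2"], "P2": ["P3", "P4"], "P3": ["P1"], "P4": []}
print(in_knot(wfg, "P1"))   # False: P4 can still satisfy P2's OR-request

wfg2 = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}
print(in_knot(wfg2, "P1"))  # True: every wait leads back into the cycle
```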
4. AND-OR Model
A request may combine AND and OR conditions — e.g., a process asks for (R1 AND R2) OR R3.
Deadlock Detection:
There is no simple WFG test; deadlock can be detected by repeatedly applying the OR-model test.
Example:
A cloud server needs any 2 out of 5 available CPU cores to process a request.
If at least 2 cores are available, execution proceeds.
If fewer than 2 cores are available, the process waits.
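The "any 2 of 5 cores" condition above is just a threshold test; a tiny illustrative sketch (names are mine):

```python
# p-out-of-q style request: the process may run as soon as at least
# p of the q resources are free; otherwise it waits.

def can_proceed(free_cores, needed=2):
    return free_cores >= needed

print(can_proceed(3))  # True: enough cores are free
print(can_proceed(1))  # False: the process must wait
```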
Entering the CS
Once the requesting process gets the token, it enters the CS.
After CS Execution
The process checks which other processes have requested the token and sends it
to the next one in line.
1. Outdated Requests
Each REQUEST has a sequence number to tell if it's new or old.
This avoids sending the token to a process that no longer needs it.
2. Tracking Requests
A request queue or table is used to track which processes have asked for the token
and whether their request is still pending.
Correctness
Mutual Exclusion is guaranteed because:
Only one token exists.
Only the token holder can enter the CS.
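The mechanics above (sequence numbers plus a request queue carried on the token) can be sketched in miniature. This is a single-machine simplification in the spirit of Suzuki-Kasami, not the full algorithm; all names are mine:

```python
# rn[i] = highest request sequence number a site has seen from site i.
# The token carries ln[i] = sequence number of site i's last served request,
# so an outstanding request is one with rn[i] == ln[i] + 1.

class Site:
    def __init__(self, pid, n):
        self.pid = pid
        self.rn = [0] * n

class Token:
    def __init__(self, n):
        self.ln = [0] * n
        self.queue = []          # pids with pending, unserved requests

def request(site, all_sites):
    site.rn[site.pid] += 1                       # new sequence number
    for s in all_sites:                          # broadcast REQUEST(pid, sn)
        s.rn[site.pid] = max(s.rn[site.pid], site.rn[site.pid])

def release(holder, token):
    token.ln[holder.pid] = holder.rn[holder.pid]     # own request served
    for j in range(len(token.ln)):                   # queue outstanding requests
        if holder.rn[j] == token.ln[j] + 1 and j not in token.queue:
            token.queue.append(j)
    return token.queue.pop(0) if token.queue else holder.pid

sites = [Site(i, 3) for i in range(3)]
token = Token(3)
request(sites[1], sites)           # site 1 asks for the CS
holder = release(sites[0], token)  # site 0 leaves the CS, passes the token
print(holder)  # 1: the token goes to the only requester
```

Because the outdated-request check compares rn against ln, a site whose request was already served is simply skipped instead of being handed the token again.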
Example
Instead of asking all sites for permission (like in Lamport or Ricart-Agrawala algorithms), a
process only asks a subset of sites called a quorum.
Quorums are designed to overlap, so at least one site knows about both requests and
ensures only one process enters the CS at a time.
A site locks its quorum members before executing the CS.
It must receive a RELEASE message before granting permission to another process.
🔹 Example:
Imagine you need approval from a group of teachers to submit an assignment. Instead of
asking all teachers, you ask only a small group, ensuring at least one teacher is in multiple
groups to avoid conflicts.
✅ Reduces Message Complexity – Instead of contacting all sites, only a subset is involved.
✅ Faster Execution – Since fewer messages are exchanged, CS is accessed quicker.
✅ Scalable – Works better in large distributed systems compared to Lamport’s or Ricart-
Agrawala.
Key Idea
Each site (process) needs to get permission from only a subset of sites before entering
the Critical Section (CS).
These subsets (called request sets, Ri) are designed so that any two request sets share
at least one common site.
This shared site ensures no two processes can enter CS at the same time.
1. Request Phase
A process sends a REQUEST message to all sites in its request set (Ri).
2. Grant Phase
Each site can grant REPLY to only one request at a time.
If it has already granted a REPLY, it queues the new request.
3. Execution Phase
When a process gets all REPLYs from its request set, it enters the CS.
4. Release Phase
After leaving the CS, the process sends RELEASE messages to all sites in Ri.
These sites then grant REPLY to any pending requests in their queues.
Example
R1 = {P1, P2}
R2 = {P2, P3}
R3 = {P3, P4}
R4 = {P4, P1}
Now, if P1 wants to enter CS:
It sends REQUEST to P1 and P2.
If both grant REPLY → P1 enters CS.
Meanwhile, if P2 tries to enter CS:
It sends REQUEST to its own set R2 = {P2, P3}.
Site P2 is the common member of R1 and R2. Since it has already sent its REPLY to P1 and is still waiting for P1's RELEASE, it queues the new request — so P1 and P2 can never be in the CS together. This overlap is what ensures mutual exclusion: every pair of conflicting request sets must share at least one site.
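A quick way to see why the overlap matters is to compute the pairwise intersections of the sample request sets (an illustrative sketch):

```python
from itertools import combinations

# Maekawa's correctness rests on the intersection property: any two request
# sets must share at least one site, which then serializes their requests.

quorums = {
    "R1": {"P1", "P2"},
    "R2": {"P2", "P3"},
    "R3": {"P3", "P4"},
    "R4": {"P4", "P1"},
}

for (a, sa), (b, sb) in combinations(quorums.items(), 2):
    print(a, b, "overlap:", sorted(sa & sb))

# R1/R3 and R2/R4 share no site, so this ring of sets only serializes
# adjacent pairs; real Maekawa quorums (e.g., rows and columns of a grid)
# are constructed so that EVERY pair intersects.
```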
1. Requesting CS
A process sends a REQUEST(timestamp, processID) to all other processes.
It also adds this request to its own local queue.
2. Receiving a REQUEST
Each process adds the request to its queue.
Then it sends back a REPLY to the sender.
3. Entering CS
A process enters the CS only when:
Its request is at the top of its local queue, AND
It has received REPLY messages from all other processes.
4. Releasing CS
After CS execution, the process:
Removes its request from the queue.
Sends a RELEASE message to all.
5. Receiving RELEASE
Each process removes the released request from its local queue.
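The five steps above can be sketched as a single-machine simulation in which message delivery is instantaneous (an assumption purely for illustration; all names are mine):

```python
import heapq

# Every process keeps the same priority queue of (timestamp, pid) requests;
# a process enters the CS only when its request is at the head AND all
# REPLYs have arrived. One shared queue stands in for the identical copies.

queue = []        # stand-in for each process's local request queue
cs_order = []     # order in which processes entered the CS

def request_cs(timestamp, pid):
    heapq.heappush(queue, (timestamp, pid))      # REQUEST broadcast to all

def can_enter(pid, all_replies_received):
    return bool(queue) and queue[0][1] == pid and all_replies_received

def release_cs():
    _, pid = heapq.heappop(queue)                # RELEASE removes the request
    cs_order.append(pid)

request_cs(5, "P2")
request_cs(3, "P1")            # lower timestamp -> served first
print(can_enter("P1", True))   # True: P1's request is at the head
print(can_enter("P2", True))   # False: P2 must wait for P1's RELEASE
release_cs()
release_cs()
print(cs_order)                # ['P1', 'P2']
```

Ordering by (timestamp, pid) tuples is what makes the algorithm fair: requests are served in logical-clock order, with process IDs breaking ties.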
1. Deadlock Prevention
Idea: Make deadlock structurally impossible — e.g., a process must acquire all the resources it needs before it starts, or all resources are globally ordered and must be requested in that order.
Problem: Resources are held long before they are used, future needs are hard to predict, and concurrency drops sharply.
So this is inefficient and rarely practical in distributed systems.
2. Deadlock Avoidance
Idea: Allow resource requests only if it’s safe (i.e., won’t lead to a deadlock).
How?
Before granting a resource, the system checks if the global state is safe.
If granting the request might cause deadlock → don't give the resource.
Problem:
Needs complete knowledge of the system’s current state (which is hard in distributed
systems).
Messages may arrive late, and the system state can change quickly.
So this is also impractical in distributed systems.
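The safety check described above is classically done banker's-algorithm style. A minimal sketch with a single resource type (all names and numbers are illustrative):

```python
# Before granting a request, check whether some ordering still lets every
# process finish: a process can finish if its remaining need fits in the
# currently available units, and finishing frees its allocation.

def is_safe(available, allocation, maximum):
    need = {p: maximum[p] - allocation[p] for p in allocation}
    finished = set()
    while len(finished) < len(allocation):
        progressed = False
        for p in allocation:
            if p not in finished and need[p] <= available:
                available += allocation[p]   # p runs to completion, frees units
                finished.add(p)
                progressed = True
        if not progressed:
            return False                     # nobody can finish -> unsafe
    return True

alloc = {"P1": 3, "P2": 2}
maxim = {"P1": 6, "P2": 4}
print(is_safe(3, alloc, maxim))  # True: a safe ordering exists
print(is_safe(1, alloc, maxim))  # False: granting could lead to deadlock
```

Note how this confirms the problem stated above: the check needs accurate global knowledge of every allocation and maximum claim, which distributed systems cannot cheaply provide.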
3. Deadlock Detection