Notes

Gate and University notes for CSE

Uploaded by Yash Tulsyan

Distributed Mutual Exclusion

UNIT-2
MUTUAL EXCLUSION IN DISTRIBUTED SYSTEM
• Mutual exclusion is a concurrency control property introduced to prevent race conditions. It requires that a process cannot enter its critical section while another concurrent process is executing in its critical section, i.e., only one process is allowed to execute the critical section at any given instant.
• Mutual exclusion in a single computer system vs. a distributed system: In a single computer system, memory and other resources are shared between different processes. The status of shared resources and of users is readily available in shared memory, so the mutual exclusion problem can easily be solved with the help of shared variables (for example, semaphores).
• In distributed systems, we have neither shared memory nor a common physical clock, and therefore we cannot solve the mutual exclusion problem using shared variables. To solve the mutual exclusion problem in a distributed system, an approach based on message passing is used.
• A site in a distributed system does not have complete information about the state of the system, due to the lack of shared memory and a common physical clock.
Requirements of Mutual exclusion Algorithm
• Freedom from Deadlock: Two or more sites should not endlessly wait for messages that will never arrive.
• Freedom from Starvation: Every site that wants to execute the critical section should get an opportunity to execute it in finite time. No site should wait indefinitely to execute the critical section while other sites repeatedly execute it.
• Fairness: Each site should get a fair chance to execute the critical section. Requests to execute the critical section must be executed in the order they are made, i.e., in the order of their arrival in the system.
• Fault Tolerance: In case of a failure, the algorithm should be able to recognize it by itself and continue functioning without disruption.
Solution to distributed mutual exclusion:
• As noted above, shared variables or a local kernel cannot be used to implement mutual exclusion in distributed systems; message passing is used instead. Below are the three message-passing approaches to implementing mutual exclusion in distributed systems:
1. Token Based Algorithm
2. Non-token based approach
3. Quorum based approach
Token Based Algorithm
• A unique token is shared among all the sites.
• A site is allowed to enter its critical section only if it possesses the token.
• This approach uses sequence numbers to order requests for the critical section.
• Each request for the critical section contains a sequence number, used to distinguish old requests from current ones.
• This approach ensures mutual exclusion because the token is unique.
• Example: Suzuki-Kasami’s Broadcast Algorithm
Non-token based approach
• A site communicates with other sites to determine which site should execute the critical section next. This requires the exchange of two or more successive rounds of messages among sites.
• This approach uses timestamps, instead of sequence numbers, to order requests for the critical section.
• Whenever a site makes a request for the critical section, it gets a timestamp. The timestamp is also used to resolve conflicts between critical section requests.
• All algorithms that follow the non-token-based approach maintain a logical clock, updated according to Lamport’s scheme.
• Example: Lamport’s algorithm, Ricart–Agrawala algorithm
Quorum based approach
• Instead of requesting permission to execute the critical section from all other sites, each site requests permission from only a subset of sites, called a quorum.
• Any two quorums contain at least one common site.
• This common site is responsible for ensuring mutual exclusion.
• Example: Maekawa’s Algorithm
LAMPORT’S ALGORITHM
• Lamport’s Distributed Mutual Exclusion Algorithm is a permission-based algorithm proposed by Lamport as an illustration of his clock synchronization scheme for distributed systems. In permission-based algorithms, a timestamp is used to order critical section requests and to resolve conflicts between them.
• In Lamport’s algorithm, critical section requests are executed in increasing order of timestamps, i.e., a request with a smaller timestamp is given permission to execute the critical section before a request with a larger timestamp.
In the algorithm
• Three types of messages (REQUEST, REPLY and RELEASE) are used, and communication channels are assumed to follow FIFO order.
• A site sends a REQUEST message to all other sites to get their permission to enter the critical section.
• A site sends a REPLY message to a requesting site to give its permission to enter the critical section.
• A site sends a RELEASE message to all other sites upon exiting the critical section.
• Every site Si keeps a queue to store critical section requests ordered by their timestamps; request_queuei denotes the queue of site Si.
• A timestamp is given to each critical section request using Lamport’s logical clock.
• The timestamp determines the priority of critical section requests: a smaller timestamp gets higher priority than a larger one, and requests are always executed in the order of their timestamps.
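The Lamport logical clock used for these timestamps can be sketched as follows (a minimal illustration; the class and method names are ours, not from the notes):

```python
class LamportClock:
    """Minimal Lamport logical clock (illustrative sketch)."""

    def __init__(self):
        self.time = 0

    def tick(self):
        # Local event or message send: advance the clock by one.
        self.time += 1
        return self.time

    def receive(self, msg_time):
        # Message receive: jump past the sender's timestamp, then advance.
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
ts = a.tick()        # site A stamps a REQUEST with timestamp 1
b.receive(ts)        # site B's clock becomes max(0, 1) + 1 = 2
```

Ties between equal timestamps are broken by site id, so every request carries a unique (timestamp, id) priority.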
• Request
• Si sends REQUEST(tsi, i) to all sites in its request set Ri and puts the request on request_queuei.
• When Sj receives REQUEST(tsi, i) from Si, it returns a timestamped REPLY to Si and places Si's request on request_queuej.
• Si waits to start the CS until both
• [L1:] Si has received a message with timestamp > (tsi, i) from all other sites
• [L2:] Si's request is at the top of request_queuei
• Release
• Si removes its request from the top of request_queuei and sends a timestamped RELEASE message to all the sites in its request set.
• When Sj receives a RELEASE message from Si, it removes Si's request from request_queuej.
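The Request/Release steps above can be sketched as a single-process simulation. This is a hedged illustration only: direct method calls stand in for FIFO message channels, and all class, method and variable names are our invention.

```python
import heapq

class Site:
    """One site in a toy simulation of Lamport's algorithm."""

    def __init__(self, sid, n):
        self.sid, self.n = sid, n
        self.clock = 0
        self.queue = []            # request_queue_i: heap of (timestamp, site id)
        self.last = [0] * n        # latest timestamp seen from each other site

    def request(self, sites):
        self.clock += 1
        ts = self.clock
        heapq.heappush(self.queue, (ts, self.sid))
        for s in sites:            # REQUEST(ts_i, i) to all other sites
            if s is not self:
                s.on_request(ts, self)

    def on_request(self, ts, sender):
        self.clock = max(self.clock, ts) + 1
        heapq.heappush(self.queue, (ts, sender.sid))
        sender.on_reply(self.clock, self.sid)      # timestamped REPLY back

    def on_reply(self, ts, from_id):
        self.clock = max(self.clock, ts) + 1
        self.last[from_id] = max(self.last[from_id], ts)

    def can_enter(self):
        # L1: every other site has sent a message stamped after our request.
        # L2: our request is at the top of the local queue.
        if not self.queue:
            return False
        ts, owner = self.queue[0]
        return owner == self.sid and all(
            self.last[j] > ts for j in range(self.n) if j != self.sid)

    def release(self, sites):
        heapq.heappop(self.queue)  # remove our own request from the top
        self.clock += 1
        for s in sites:            # RELEASE to all other sites
            if s is not self:
                s.on_release(self.clock, self.sid)

    def on_release(self, ts, from_id):
        self.clock = max(self.clock, ts) + 1
        self.last[from_id] = max(self.last[from_id], ts)
        self.queue = [e for e in self.queue if e[1] != from_id]
        heapq.heapify(self.queue)

sites = [Site(i, 3) for i in range(3)]
sites[0].request(sites)            # S0 requests the CS; REPLYs arrive at once
```

After the broadcast, only S0 satisfies both L1 and L2, so only S0 may enter the critical section.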
Message Complexity:
Lamport’s Algorithm requires 3(N – 1) messages per critical section execution. These 3(N – 1) messages involve:
• (N – 1) REQUEST messages
• (N – 1) REPLY messages
• (N – 1) RELEASE messages

Drawbacks of Lamport’s Algorithm:
• Unreliable approach: failure of any one process halts the progress of the entire system.
• High message complexity: the algorithm requires 3(N – 1) messages per critical section invocation.
Performance:
• Synchronization delay is equal to the maximum message transmission time.
• It requires 3(N – 1) messages per CS execution.
• The algorithm can be optimized to 2(N – 1) messages by omitting the REPLY message in some situations.
Ricart–Agrawala Algorithm in Mutual Exclusion
The Ricart–Agrawala algorithm is an algorithm for mutual exclusion in a distributed system, proposed by Glenn Ricart and Ashok Agrawala. It is an extension and optimization of Lamport’s Distributed Mutual Exclusion Algorithm. Like Lamport’s algorithm, it follows a permission-based approach to ensure mutual exclusion.
In this algorithm:
• Two types of messages (REQUEST and REPLY) are used, and communication channels are assumed to follow FIFO order.
• A site sends a REQUEST message to all other sites to get their permission to enter the critical section.
• A site sends a REPLY message to another site to give its permission to enter the critical section.
• A timestamp is given to each critical section request using Lamport’s logical clock.
• The timestamp determines the priority of critical section requests: a smaller timestamp gets higher priority than a larger one, and requests are always executed in the order of their timestamps.
Algorithm:
• To enter the critical section:
• When a site Si wants to enter the critical section, it sends a timestamped REQUEST message to all other sites.
• When a site Sj receives a REQUEST message from site Si, it sends a REPLY message to site Si if and only if
• site Sj is neither requesting nor currently executing the critical section, or
• site Sj is requesting, but the timestamp of site Si’s request is smaller than that of its own request.
• Otherwise, the request is deferred by site Sj.
• To execute the critical section:
• Site Si enters the critical section once it has received a REPLY message from all other sites.
• To release the critical section:
• Upon exiting, site Si sends a REPLY message to all the deferred requests.
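The reply-or-defer rule above can be written as a small predicate. This is a hypothetical helper: `should_reply` and its argument names are ours, not part of the original algorithm description.

```python
def should_reply(state, own_request, incoming):
    """Decide whether Sj REPLYs immediately to an incoming REQUEST.

    state: 'idle', 'requesting', or 'executing' (Sj's current status).
    own_request / incoming: (timestamp, site_id) pairs; ties break on site id.
    """
    if state == 'idle':
        return True              # neither requesting nor in the CS
    if state == 'executing':
        return False             # defer until Sj exits the CS
    # Sj is requesting: the lexicographically smaller (ts, id) pair wins.
    return incoming < own_request

should_reply('requesting', (7, 2), (5, 1))   # older request -> immediate REPLY
```

Deferred requests are answered with a REPLY only when Sj leaves the critical section, which is what saves the separate RELEASE message of Lamport's algorithm.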
Message Complexity:
The Ricart–Agrawala algorithm requires 2(N – 1) messages per critical section execution. These 2(N – 1) messages involve:
• (N – 1) REQUEST messages
• (N – 1) REPLY messages
Drawbacks of the Ricart–Agrawala algorithm:
• Unreliable approach: failure of any one node in the system can halt the progress of the system; in this situation, the requesting process will starve forever. The problem of node failure can be addressed by detecting the failure after some timeout.
Performance:
• Synchronization delay is equal to the maximum message transmission time.
• It requires 2(N – 1) messages per critical section execution.
Maekawa’s Algorithm for Mutual Exclusion
Maekawa’s Algorithm is a quorum-based approach to ensuring mutual exclusion in distributed systems. In permission-based algorithms such as Lamport’s Algorithm and the Ricart–Agrawala Algorithm, a site requests permission from every other site; in the quorum-based approach, a site requests permission only from a subset of sites, called a quorum.
In this algorithm:
• Three types of messages (REQUEST, REPLY and RELEASE) are used.
• A site sends a REQUEST message to all sites in its request set (quorum) to get their permission to enter the critical section.
• A site sends a REPLY message to a requesting site to give its permission to enter the critical section.
• A site sends a RELEASE message to all sites in its request set (quorum) upon exiting the critical section.
Algorithm:
• To enter the critical section:
• When a site Si wants to enter the critical section, it sends a request message REQUEST(i) to all sites in its request set Ri.
• When a site Sj receives the request message REQUEST(i) from site Si, it returns a REPLY message to site Si if it has not sent a REPLY message to any site since it received the last RELEASE message. Otherwise, it queues up the request.
• To execute the critical section:
• A site Si can enter the critical section once it has received a REPLY message from every site in its request set Ri.
• To release the critical section:
• When a site Si exits the critical section, it sends a RELEASE(i) message to all sites in its request set Ri.
• When a site Sj receives the RELEASE(i) message from site Si, it sends a REPLY message to the next site waiting in its queue and deletes that entry from the queue.
• If the queue is empty, site Sj updates its status to record that it has not sent a REPLY message since the last RELEASE.
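One quorum member's bookkeeping (the "locked" flag and the queue of deferred requests) can be sketched as follows. This is an illustrative sketch only: the class and the message tuples are our invention, not Maekawa's notation.

```python
from collections import deque

class Arbiter:
    """One site Sj acting as arbiter for its quorum (toy sketch)."""

    def __init__(self):
        self.locked_by = None    # site currently holding Sj's permission, if any
        self.waiting = deque()   # deferred REQUESTs, in arrival order

    def on_request(self, i):
        # REPLY only if no REPLY is outstanding since the last RELEASE.
        if self.locked_by is None:
            self.locked_by = i
            return ('REPLY', i)
        self.waiting.append(i)   # otherwise queue the request
        return None

    def on_release(self, i):
        self.locked_by = None
        if self.waiting:         # pass the permission to the next waiter
            self.locked_by = self.waiting.popleft()
            return ('REPLY', self.locked_by)
        return None              # no REPLY outstanding any more

arb = Arbiter()
arb.on_request(1)                # granted: ('REPLY', 1)
arb.on_request(2)                # queued behind site 1
```

Because each arbiter grants its permission in arrival order rather than timestamp order, cycles of partially locked arbiters can form across quorums, which is exactly the deadlock risk discussed under the algorithm's drawbacks.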
Message Complexity:
Maekawa’s Algorithm requires 3√N messages per critical section execution, as the size of a request set is √N. These 3√N messages involve:
• √N REQUEST messages
• √N REPLY messages
• √N RELEASE messages
Drawbacks of Maekawa’s Algorithm:
• The algorithm is deadlock-prone, because a site can be exclusively locked by other sites and requests are not prioritized by their timestamps.
Performance:
• Synchronization delay is equal to twice the message propagation delay.
• It requires 3√N messages per critical section execution.
Request Subsets
• Example: k = 2, N = 3.
• R1 = {1, 2}; R2 = {2, 3}; R3 = {1, 3}
• Example: k = 3, N = 7.
• R1 = {1, 2, 3}; R4 = {1, 4, 5}; R6 = {1, 6, 7};
• R2 = {2, 4, 6}; R5 = {2, 5, 7}; R7 = {3, 4, 7};
• R3 = {3, 5, 6}
• Algorithm in Maekawa’s paper (uploaded in
Lecture Notes web page).
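The defining property of these request sets — any two quorums intersect, and each site belongs to its own quorum — can be checked mechanically for the N = 7 example above:

```python
# Maekawa request sets for N = 7, copied from the example above.
R = {
    1: {1, 2, 3}, 2: {2, 4, 6}, 3: {3, 5, 6}, 4: {1, 4, 5},
    5: {2, 5, 7}, 6: {1, 6, 7}, 7: {3, 4, 7},
}

# Every pair of quorums shares at least one site (the arbiter for that pair),
# and every site is a member of its own quorum.
pairwise = all(R[a] & R[b] for a in R for b in R if a < b)
member = all(i in R[i] for i in R)
```

Each set has size 3 ≈ √7, which is where the 3√N message count comes from.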

Token-based Algorithms
• A unique token circulates among the participating sites.
• A site can enter the CS only if it holds the token.
• Token-based approaches use sequence numbers instead of timestamps.
• A request for the token contains a sequence number.
• The sequence numbers of sites advance independently.
• Correctness is trivial: only one token is present
-> only one site can enter the CS.
• Deadlock and starvation issues remain to be addressed.

B. Prabhakaran 33
Suzuki-Kasami Algorithm
• If a site that does not hold the token needs to enter a CS, it broadcasts a REQUEST-for-token message to all other sites.
• Token: (a) a queue of requesting sites; (b) an array LN[1..N], where LN[j] is the sequence number of the most recent CS execution by site j.
• The token holder sends the token to a requestor if it is not inside the CS; otherwise, it sends the token after exiting the CS.
• The token holder can make multiple CS accesses.
• Design issues:
• Distinguishing outdated REQUEST messages.
• Format: REQUEST(j, n) -> site j making its nth request.
• Each site i keeps RNi[1..N], where RNi[j] is the largest sequence number of a request received from j.
• Determining which sites have outstanding token requests.
• If LN[j] = RNi[j] - 1, then Sj has an outstanding request.

Suzuki-Kasami Algorithm ...
• Passing the token
• After finishing the CS (assuming Si has the token): LN[i] := RNi[i].
• The token consists of Q and LN, where Q is a queue of requesting sites.
• The token holder checks, for each j, whether RNi[j] = LN[j] + 1; if so, it places j in Q.
• It then sends the token to the site at the head of Q.
• Performance
• 0 to N messages per CS invocation.
• Synchronization delay is 0 (if the token holder re-enters the CS) or T (one message transfer time).
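The RN/LN bookkeeping above can be sketched in a few lines. This is a single-process illustration; the variable and function names are ours, not from the original paper.

```python
N = 3
RN = [[0] * N for _ in range(N)]   # RN[i][j]: largest request number site i has seen from j
LN = [0] * N                       # token's LN[j]: number of j's last completed CS
token_q = []                       # token's queue Q of requesting sites

def on_request(i, j, n):
    # Site i receives REQUEST(j, n); an outdated n (<= RN[i][j]) changes nothing.
    RN[i][j] = max(RN[i][j], n)

def finish_cs(holder):
    # Token holder updates LN for its own completed CS, then queues every
    # site with an outstanding request, i.e. RN[holder][j] = LN[j] + 1.
    LN[holder] = RN[holder][holder]
    for j in range(N):
        if j != holder and RN[holder][j] == LN[j] + 1 and j not in token_q:
            token_q.append(j)

for i in range(N):                 # site index 2 broadcasts its first REQUEST
    on_request(i, 2, 1)
on_request(0, 2, 1)                # a duplicate of the same REQUEST is ignored
finish_cs(0)                       # holder 0 exits the CS; site 2 joins the token queue
```

The `RN[i][j] <= n` check is what filters outdated REQUESTs, and the `RN = LN + 1` test is the outstanding-request condition stated above.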

Suzuki-Kasami: Example
Step 1: S1 has the token, S3 is in the queue

Site   Seq. Vector RN   Token Vect. LN   Token Queue
S1     10, 15, 9        10, 15, 8        3
S2     10, 16, 9
S3     10, 15, 9

Step 2: S3 gets the token, S2 is in the queue

Site   Seq. Vector RN   Token Vect. LN   Token Queue
S1     10, 16, 9
S2     10, 16, 9
S3     10, 16, 9        10, 15, 9        2

Step 3: S2 gets the token, queue is empty

Site   Seq. Vector RN   Token Vect. LN   Token Queue
S1     10, 16, 9
S2     10, 16, 9        10, 16, 9        <empty>
S3     10, 16, 9
Raymond’s Algorithm
• Sites are arranged in a logical directed tree. Root: the token holder. Edges: directed towards the root.
• Every site has a variable holder that points to an immediate neighbor on the directed path towards the root (the root’s holder points to itself).
• Requesting the CS
• If Si does not hold the token and requests the CS, it sends a REQUEST upwards, provided its request_q is empty; it then adds its request to request_q.
• Non-empty request_q -> send a REQUEST message for the top entry in the queue (if not done before).
• A site on the path to the root receiving a REQUEST -> propagate it up if its own request_q is empty; add the request to request_q.
• The root, on receiving a REQUEST -> send the token to the site that forwarded the message, and set holder to that forwarding site.
• Any Si receiving the token -> delete the top entry from request_q, send the token to that site, and set holder to point to it. If request_q is still non-empty, send a REQUEST message to the holder site.
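The holder pointers can be sketched as a dictionary; a REQUEST simply climbs the chain of holders until it reaches the root. The tree shape and the helper name below are our illustration, not from the slides.

```python
# holder[s] is the neighbor of s on the directed path towards the token
# holder; the root points to itself. Tree: S1 is root with children S2, S3;
# S4, S5 under S2; S6, S7 under S3 (an assumed example layout).
holder = {1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3, 7: 3}

def path_to_root(site):
    """Sites a REQUEST visits while being propagated towards the token."""
    path = [site]
    while holder[site] != site:
        site = holder[site]
        path.append(site)
    return path

path_to_root(4)    # a REQUEST from S4 travels S4 -> S2 -> S1
```

When the token later travels down this path, each site on it reverses its holder pointer, so the tree is always rooted at the current token holder.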

Raymond’s Algorithm …
• Executing the CS: a site enters the CS when it receives the token and its own entry is at the top of request_q; it deletes the top of request_q and enters the CS.
• Releasing the CS
• If request_q is non-empty, delete the top entry from the queue, send the token to that site, and set holder to that site.
• If request_q is still non-empty, send a REQUEST message to the holder site.
• Performance
• Average messages: O(log N), as the average distance between two nodes in the tree is O(log N).
• Synchronization delay: (T log N) / 2, as the average distance between two sites successively executing the CS is (log N) / 2.
• Greedy approach: an intermediate site receiving the token may enter the CS instead of forwarding it down. This affects fairness and may cause starvation.

Raymond’s Algorithm: Example
[Figure: a directed tree of seven sites inferred from the slide layout — S1 at the root with children S2 and S3, and S4, S5, S6, S7 at the leaves. Step 1: S1 holds the token; S2 sends a token request. Step 2: the token has moved to S2.]
