UNIT-2
MUTUAL EXCLUSION IN DISTRIBUTED SYSTEM
• Mutual exclusion is a concurrency control property introduced to prevent race conditions. It is the requirement that a process cannot enter its critical section while another concurrent process is executing in its critical section, i.e., only one process is allowed to execute the critical section at any given instant of time.
• Mutual exclusion in a single computer system vs. a distributed system: In a single computer system, memory and other resources are shared between different processes. The status of shared resources and of users is readily available in shared memory, so the mutual exclusion problem can easily be solved with the help of shared variables (for example, semaphores).
• In distributed systems, we have neither shared memory nor a common physical clock, and therefore we cannot solve the mutual exclusion problem using shared variables. To solve the mutual exclusion problem in distributed systems, an approach based on message passing is used.
• A site in a distributed system does not have complete information about the state of the system, owing to the lack of shared memory and a common physical clock.
Requirements of a Mutual Exclusion Algorithm
• Freedom from Deadlock: Two or more sites should not endlessly wait for messages that will never arrive.
• Freedom from Starvation: Every site that wants to execute the critical section should get an opportunity to execute it in finite time. No site should wait indefinitely to execute the critical section while other sites repeatedly execute it.
• Fairness: Each site should get a fair chance to execute the critical section. Requests must be served in the order they are made, i.e., critical section execution requests should be executed in the order of their arrival in the system.
• Fault Tolerance: In case of a failure, the algorithm should be able to recognize the failure by itself and continue functioning without any disruption.
Solution to distributed mutual exclusion:
• As noted above, shared variables or a local kernel cannot be used to implement mutual exclusion in distributed systems. Message passing is the way to implement mutual exclusion in this setting. Below are the three approaches based on message passing to implement mutual exclusion in distributed systems:
1. Token Based Algorithm
2. Non-token based approach
3. Quorum based approach
Token Based Algorithm
• A unique token is shared among all the sites.
• If a site possesses the unique token, it is allowed to enter its critical section.
• This approach uses sequence numbers to order requests for the critical section.
• Each request for the critical section contains a sequence number, which is used to distinguish old requests from current ones.
• This approach ensures mutual exclusion, as the token is unique.
• Example: Suzuki-Kasami’s Broadcast Algorithm
Non-token based approach
• A site communicates with other sites in order to determine which site should execute the critical section next. This requires the exchange of two or more successive rounds of messages among the sites.
• This approach uses timestamps instead of sequence numbers to order requests for the critical section.
• Whenever a site makes a request for the critical section, it gets a timestamp. The timestamp is also used to resolve any conflict between critical section requests.
• All algorithms that follow the non-token based approach maintain a logical clock. Logical clocks are updated according to Lamport's scheme, as sketched below.
• Example: Lamport's algorithm, Ricart–Agrawala algorithm
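As a concrete illustration of the logical-clock bookkeeping mentioned above, here is a minimal Python sketch of a Lamport clock and of the (timestamp, site id) ordering used to resolve conflicting requests. The class and function names are illustrative, not taken from any particular implementation.

```python
# Minimal sketch of a Lamport logical clock used to timestamp CS requests.
# Class/function names are illustrative; real algorithms also exchange messages.

class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        """Local event (e.g., issuing a CS request): advance the clock."""
        self.time += 1
        return self.time

    def on_receive(self, msg_time):
        """On receiving a message, jump ahead of the sender's timestamp."""
        self.time = max(self.time, msg_time) + 1
        return self.time


def request_precedes(req_a, req_b):
    """Requests are (timestamp, site_id); ties are broken by site id."""
    return req_a < req_b  # tuple comparison: timestamp first, then site id


if __name__ == "__main__":
    clock = LamportClock()
    ts = clock.tick()                          # timestamp my CS request
    clock.on_receive(ts + 5)                   # a message with a larger timestamp arrives
    print(clock.time)                          # clock has jumped past the received timestamp
    print(request_precedes((ts, 1), (ts, 2)))  # equal timestamps -> lower site id goes first
```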
Quorum based approach
• Instead of requesting permission to execute the critical section from all other sites, each site requests permission only from a subset of sites, called a quorum.
• Any two quorums contain at least one common site, and this common site ensures that only one request executes the critical section at any time; the intersection property is what guarantees mutual exclusion (see the sketch below).
• Example: Maekawa's algorithm
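To make the intersection property concrete, the following is a small sketch assuming a simple k x k grid construction (Maekawa-style quorums). The grid layout and function name are illustrative assumptions, not part of the notes above.

```python
# Sketch of grid-style quorums: arrange N = k*k sites in a k x k grid; a site's
# quorum is its whole row plus its whole column. Any two such quorums intersect,
# which is what guarantees mutual exclusion. The construction here is illustrative.

def grid_quorum(site, k):
    """Return the quorum (set of site ids) for `site` in a k x k grid of sites 0..k*k-1."""
    row, col = divmod(site, k)
    row_members = {row * k + c for c in range(k)}
    col_members = {r * k + col for r in range(k)}
    return row_members | col_members


if __name__ == "__main__":
    k = 3                      # 9 sites: 0..8
    q4 = grid_quorum(4, k)     # quorum of site 4 (middle of the grid)
    q0 = grid_quorum(0, k)     # quorum of site 0 (corner)
    print(q4 & q0)             # non-empty intersection -> overlapping permission sets
```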
Token-based Algorithms
• Unique token circulates among the participating sites.
• A site can enter CS if it has the token.
• Token-based approaches use sequence numbers instead of timestamps.
• A request for the token contains a sequence number.
• Sequence numbers of sites advance independently.
• Correctness issue is trivial since only one token is present
-> only one site can enter CS.
• Deadlock and starvation issues to be addressed.
Suzuki-Kasami Algorithm
• If a site that does not hold the token needs to enter a CS, it broadcasts a REQUEST-for-token message to all other sites.
• Token: (a) a queue Q of requesting sites, (b) an array LN[1..N], where LN[j] is the sequence number of the most recent request of site j that has been executed.
• The token holder sends the token to a requestor if it is not inside the CS; otherwise, it sends the token after exiting the CS.
• Token holder can make multiple CS accesses.
• Design issues:
• Distinguishing outdated REQUEST messages.
• Format: REQUEST(j,n) -> jth site making nth request.
• Each site i keeps RNi[1..N], where RNi[j] is the largest sequence number of a request received from site j.
• Determining which site has an outstanding token request.
• If LN[j] = RNi[j] - 1, then Sj has an outstanding request.
Suzuki-Kasami Algorithm ...
• Passing the token:
• After finishing the CS (assuming Si holds the token), set LN[i] := RNi[i].
• The token consists of Q and LN; Q is a queue of requesting sites.
• The token holder checks, for every site j, whether RNi[j] = LN[j] + 1. If so, it places j in Q.
• It then sends the token to the site at the head of Q (see the sketch after the performance notes).
• Performance
• 0 to N messages per CS invocation.
• Synchronization delay is 0 (if the token holder repeats CS) or T.
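Below is a minimal sketch, in Python, of the per-site bookkeeping described above (RNi, the token's LN and Q, and the RN[j] = LN[j] + 1 test). Message transport is abstracted behind broadcast/send_token callbacks, and all class and method names are illustrative.

```python
# Sketch of Suzuki-Kasami bookkeeping at site i (sites numbered 0..N-1 here).
# broadcast/send_token are stand-in callbacks; real message passing is omitted.

class SuzukiKasamiSite:
    def __init__(self, site_id, n_sites, has_token=False):
        self.i = site_id
        self.in_cs = False
        self.RN = [0] * n_sites                 # RN[j]: largest request number seen from site j
        self.has_token = has_token
        self.token_LN = [0] * n_sites if has_token else None  # LN[j]: j's last executed request
        self.token_Q = [] if has_token else None               # queue of requesting sites

    def request_cs(self, broadcast):
        """Issue a new CS request unless this site already holds the token."""
        if not self.has_token:
            self.RN[self.i] += 1
            broadcast(("REQUEST", self.i, self.RN[self.i]))     # REQUEST(j, n)

    def on_request(self, j, n, send_token):
        """Handle REQUEST(j, n); outdated requests (n <= RN[j]) only refresh RN."""
        self.RN[j] = max(self.RN[j], n)
        if self.has_token and not self.in_cs and self.RN[j] == self.token_LN[j] + 1:
            self._pass_token_to(j, send_token)   # idle token holder hands over the token

    def release_cs(self, send_token):
        """After finishing the CS: LN[i] := RN[i], enqueue outstanding requests, pass token."""
        self.in_cs = False
        self.token_LN[self.i] = self.RN[self.i]
        for j in range(len(self.RN)):
            if j not in self.token_Q and self.RN[j] == self.token_LN[j] + 1:
                self.token_Q.append(j)           # site j has an outstanding request
        if self.token_Q:
            self._pass_token_to(self.token_Q.pop(0), send_token)

    def _pass_token_to(self, j, send_token):
        send_token(j, (self.token_Q, self.token_LN))
        self.has_token, self.token_Q, self.token_LN = False, None, None
```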
Suzuki-Kasami: Example
Step 1: S1 holds the token; S3 is in the token queue.

Site   Seq. Vector RN   Token Vector LN   Token Queue
S1     10, 15, 9        10, 15, 8         3
S2     10, 16, 9
S3     10, 15, 9
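As a quick check of the Step 1 values (copied from the table above; the snippet itself is only illustrative), applying the RN[j] = LN[j] + 1 test at S1 identifies S3 as the site with an outstanding request, which is exactly why S3 sits in the token queue:

```python
# Step 1 of the example: S1 holds the token. Values copied from the table above.
RN_at_S1 = [10, 15, 9]   # S1's request-number vector RN1 (entries for S1, S2, S3)
token_LN = [10, 15, 8]   # the token's LN vector

outstanding = [j + 1 for j in range(3) if RN_at_S1[j] == token_LN[j] + 1]
print(outstanding)        # [3] -> S3 has an outstanding request, hence S3 is in the token queue
```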
Raymond’s Algorithm
• Sites are arranged in a logical directed tree. Root: token holder. Edges: directed towards the root.
• Every site has a variable holder that points to an immediate neighbor on the directed path towards the root (the root's holder points to itself).
• Requesting CS
• If Si does not hold the token and requests the CS, it sends a REQUEST upwards provided its request_q is empty; it then adds its request to request_q.
• A non-empty request_q -> send a REQUEST message for the top entry in the queue (if not already done).
• A site on the path to the root receiving a REQUEST -> propagate it upwards if its request_q is empty, and add the request to its request_q.
• The root, on receiving a REQUEST -> send the token to the site that forwarded the message, and set holder to that forwarding site.
• Any Si receiving the token -> delete the top entry from request_q, send the token to that site, and set holder to point to it. If request_q is non-empty now, send a REQUEST message to the (new) holder site.
Raymond’s Algorithm …
• Executing the CS: a site enters the CS when it receives the token and its own request is at the top of its request_q; it deletes the top of request_q and enters the CS.
• Releasing the CS
• If request_q is non-empty, delete the top entry from the queue, send the token to that site, and set holder to that site.
• If request_q is still non-empty, send a REQUEST message to the holder site (a sketch of this per-site behavior appears after the performance notes).
• Performance
• Average messages: O(log N) as average distance between 2 nodes in
the tree is O(log N).
• Synchronization delay: (T log N) / 2, as average distance between 2
sites to successively execute CS is (log N) / 2.
• Greedy approach: an intermediate site receiving the token may enter the CS instead of forwarding it down. This affects fairness and may cause starvation.
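The following is a compact Python sketch of the per-site behavior described above (holder, request_q, REQUEST propagation, and token passing). It assumes the token holder is not inside the CS when a request arrives, and the send_request/send_token callbacks are stand-ins; all names are illustrative.

```python
# Sketch of a site in Raymond's tree algorithm. `holder` and `request_q` follow
# the description above; send_request(to, frm) and send_token(to) are stand-ins.
from collections import deque

class RaymondSite:
    def __init__(self, site_id, holder, send_request, send_token):
        self.i = site_id
        self.holder = holder               # neighbor towards the token (self if we hold it)
        self.request_q = deque()           # pending requests: own id and/or neighbors' ids
        self.send_request = send_request
        self.send_token = send_token

    def _forward_request_if_new(self):
        # Send a REQUEST towards the holder only when the queue has just become non-empty.
        if self.holder != self.i and len(self.request_q) == 1:
            self.send_request(self.holder, self.i)

    def request_cs(self):
        """This site wants the CS: queue its own id and (maybe) ask the holder."""
        self.request_q.append(self.i)
        self._forward_request_if_new()

    def on_request(self, from_site):
        """REQUEST received from a neighbor further from the token."""
        self.request_q.append(from_site)
        if self.holder == self.i:
            self._pass_token_down()        # idle token holder: serve the head of the queue
        else:
            self._forward_request_if_new()

    def on_token(self):
        """Token received from the previous holder."""
        self.holder = self.i
        self._pass_token_down()

    def exit_cs(self):
        """Releasing the CS: pass the token on if anyone is waiting."""
        if self.request_q:
            self._pass_token_down()

    def _pass_token_down(self):
        top = self.request_q.popleft()
        if top == self.i:
            self.enter_cs()                # our own request is next: execute the CS
        else:
            self.holder = top
            self.send_token(top)
            if self.request_q:             # still have queued requests: ask for the token back
                self.send_request(self.holder, self.i)

    def enter_cs(self):
        pass                               # critical section body; call exit_cs() when done
```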
Raymond’s Algorithm: Example
(Figure: a tree of seven sites; S1 is the root and holds the token, with children S2 and S3; S4 and S5 are children of S2, and S6 and S7 are children of S3.)
Step 1: S2 sends a token REQUEST up the tree towards the root S1.
Step 2: The token is passed from S1 down to S2.