Non-Token-Based Mutual Exclusion

Mutual exclusion ensures that shared resources are accessed exclusively so that processes do not interfere with each other. In distributed systems, message passing is used to achieve mutual exclusion without centralized control. Lamport's algorithm uses logical clocks and message ordering to implement mutual exclusion with REQUEST, REPLY, and RELEASE messages. The Ricart-Agrawala algorithm optimizes this by removing RELEASE messages and deferring REPLY messages under certain conditions to reduce message complexity. Both algorithms guarantee mutual exclusion, progress, and bounded waiting.


Edited slides of Prof Pallabh Dasgupta


Mutual Exclusion

Refer back to the same topic from operating
systems.

Broadly, mutual exclusion mandates that for a
shared resource that can be used by at most one
process at any time, the OS has to ensure such
exclusive access.

Examples: Printer, network, a database

Otherwise, incorrect results may ensue.
Mutual Exclusion

If no guarantees of mutual exclusion exist,
consider the following scenario.

A bank account is read by two different
processes. The balance is Rs. 500.

Each process wants to add Rs. 1000 to the
balance.

Both calculate the new balance to be Rs. 1500.

If these two updates do not happen in an exclusive
manner, the new balance may be incorrect.

For instance, the new balance can be Rs. 1500,
whereas it should be Rs. 2500.
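The lost-update scenario above can be sketched as one fixed interleaving of the two processes (an illustrative sketch, not code from the slides):

```python
# Fixed interleaving that loses one deposit (illustrative scenario).

balance = 500

# Both processes read the balance before either writes.
read_by_p1 = balance
read_by_p2 = balance

# Each computes its new balance independently.
new_p1 = read_by_p1 + 1000   # 1500
new_p2 = read_by_p2 + 1000   # 1500

# Both write back; the second write overwrites the first deposit.
balance = new_p1
balance = new_p2

print(balance)   # 1500, although two deposits of Rs. 1000 should give 2500
```

With exclusive access, the second process would only read the balance after the first write completed, and would compute 2500.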
Mutual Exclusion

Mutual exclusion in operating systems is solved by
algorithms such as

The bakery algorithm

Peterson’s algorithm

These algorithms require assumptions on the
system such as

Centralized control

Atomic operations

Review this material to more easily follow what
comes next.
Mutual Exclusion

Mutual exclusion is important in the context of
distributed computations too.

However, achieving mutual exclusion may not be
any easier than in centralized settings.

No centralized control!!

Surprisingly, several algorithms exist in this
setting.

We will study some of them in this lecture today.
Mutual Exclusion

Mutual exclusion: Concurrent access of processes to a
shared resource or data is executed in mutually exclusive
manner.

Only one process is allowed to execute the critical section
(CS) at any given time.

In a distributed system, shared variables (semaphores) or
a local kernel cannot be used to implement mutual
exclusion.

Message passing is the sole means for implementing
distributed mutual exclusion.

Distributed mutual exclusion algorithms must deal with
unpredictable message delays and incomplete knowledge
of the system state.
Data Structures Week 1

Assumptions
• Messages don't get dropped, channels don't fail
• One process on each site
• While waiting for the CS, a process cannot make
further requests for the CS
• A site is either requesting the CS, executing the CS, or idle
(neither requesting nor executing). In the idle state, the site
is executing outside the CS.
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then
no other processes can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there
exist some processes that wish to enter their critical sections, then the
selection of the process that will enter the critical section next cannot
be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that
other processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and before that
request is granted
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the N processes
Mutual Exclusion

Requirements of Mutual Exclusion Algorithms
1. Safety Property: At any instant, only one process can
execute the critical section.
2. Liveness Property: Two or more sites should not
endlessly wait for messages which will never arrive.
A requesting site should get a chance to execute CS
in finite time.
3. Fairness: Each process gets a fair chance to execute
the CS. Fairness property generally means the CS
execution requests are executed in the order of their
arrival (time is determined by a logical clock) in the
system.

Performance Metrics
• Message complexity - the number of messages required
per CS execution by a site

• Synchronization delay - after a site leaves the CS, the time
required before the next site enters the CS

• Response time - the time interval a request waits for its
CS execution to be over after its request messages have been
sent out

• System throughput - the rate at which the system executes
requests for the CS.
With SD the synchronization delay and E the average CS execution time:
System throughput = 1 / (SD + E)
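The throughput formula can be evaluated with made-up numbers (the SD and E values below are assumptions for illustration only):

```python
# Illustration of the throughput formula with made-up numbers.
SD = 2.0  # synchronization delay, in arbitrary time units (assumed value)
E = 8.0   # average CS execution time, same units (assumed value)

throughput = 1 / (SD + E)  # CS requests completed per time unit
print(throughput)  # 0.1
```

Note that a smaller synchronization delay directly raises throughput, which is why SD is a key metric when comparing the algorithms below.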
Lamport’s Algorithm for Mutual Exclusion

Lamport used his logical scalar clocks and FIFO
assumption on message delivery to design an
algorithm for mutual exclusion.

Also assume a bidirectional channel between
each pair of processors.
 Every site Si keeps a queue, request_queuei,
which contains mutual exclusion requests ordered
by their timestamps (a priority queue of waiting
requests).
Lamport’s Algorithm for Mutual Exclusion

Requesting the critical section:
 When a site Si wants to enter the CS, it broadcasts a
REQUEST(tsi , i) message to all other sites and places
the request on request_queuei. ((tsi , i) denotes the
timestamp of the request.)
 When a site Sk receives the REQUEST(tsi , i) message
from site Si, it places Si's request on request_queuek
and returns a timestamped REPLY message to Si.

Executing the critical section:
 Site Si enters the CS when the following two conditions
hold:
 L1: Si has received a message with timestamp larger than
(tsi, i) from all other sites (note: it need not be a REPLY message).
 L2: Si ’s request is at the top of request_queuei.
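A minimal sketch of one site's local state and the L1/L2 entry check (field names such as `last_seen` are illustrative assumptions, not from the slides):

```python
import heapq

class LamportSite:
    """One site's local state in Lamport's algorithm (sketch)."""

    def __init__(self, site_id, all_sites):
        self.id = site_id
        self.clock = 0                                  # Lamport scalar clock
        self.request_queue = []                         # heap of (ts, site_id)
        self.others = [s for s in all_sites if s != site_id]
        self.last_seen = {s: -1 for s in self.others}   # latest ts from each site
        self.my_request = None

    def request_cs(self):
        # Timestamp the request, queue it locally, and (conceptually)
        # broadcast REQUEST(ts, id) to all other sites.
        self.clock += 1
        self.my_request = (self.clock, self.id)
        heapq.heappush(self.request_queue, self.my_request)
        return self.my_request

    def on_message(self, sender, ts):
        # Any message (REQUEST, REPLY, RELEASE) advances the clock and
        # records the sender's progress; this is what condition L1 checks.
        self.clock = max(self.clock, ts) + 1
        self.last_seen[sender] = max(self.last_seen[sender], ts)

    def on_request(self, sender, ts):
        heapq.heappush(self.request_queue, (ts, sender))
        self.on_message(sender, ts)

    def can_enter(self):
        # L1: a message with a larger timestamp has arrived from every other site.
        l1 = all(self.last_seen[s] > self.my_request[0] for s in self.others)
        # L2: our own request is at the top of the request queue.
        l2 = self.request_queue[0] == self.my_request
        return l1 and l2
```

For example, with two sites, site 0 cannot enter after requesting until it hears a later-timestamped message from site 1, which then satisfies both L1 and L2.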
Lamport’s Algorithm

Releasing the critical section:
– Site Si, upon exiting the CS, removes its request
from the top of its request queue and broadcasts a
timestamped RELEASE message to all other sites.
– When a site Sj receives a RELEASE message from
site Si, it removes Si’s request from its request
queue.
– When a site removes a request from its request
queue, its own request may come at the top of the
queue, enabling it to enter the CS.
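The release step above can be sketched as follows (assuming, as a minimal representation, that each site's request queue is a heap of (timestamp, site_id) pairs; function names are illustrative):

```python
import heapq

def release_cs(request_queue, my_request):
    """Exiting site: remove its own request from the top of its queue
    and return the RELEASE message it would broadcast."""
    assert request_queue[0] == my_request
    heapq.heappop(request_queue)
    return ("RELEASE", my_request)

def on_release(request_queue, released_request):
    """Receiving site: remove the sender's request from its local queue;
    the receiver's own request may now reach the top, enabling entry."""
    request_queue.remove(released_request)
    heapq.heapify(request_queue)
```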
Lamport’s Algorithm

Question: when a site receives a RELEASE message, would it
already have received a REPLY from the site that sent the
RELEASE message?
Some notable points
 The purpose of the REPLY message from node i to j is to ensure that j
knows of all requests of i made prior to i sending the REPLY (and
therefore of any request of i with timestamp lower than j’s request)

 Requires FIFO channels.

 3(n – 1) messages per critical section invocation

 Synchronization delay = maximum transmission time of a RELEASE message

 Requests are granted in order of increasing timestamps

• Can we do away with a few of the REPLY messages?

• Can we do away with the RELEASE messages and
manage only with REQUEST messages and REPLY
messages?
Optimisation - The Ricart-Agrawala Algorithm

Removes the RELEASE message.

If process Pj has a pending request:
- Pj sends a REPLY message only if its own request has a
higher timestamp than the received request.
- If its own request has a lower timestamp, it sends the
REPLY only after it is done with the CS.
If process Pj has no pending request, it sends the REPLY immediately.
The REPLY message is a permission-granting message.

A process enters the CS only after receiving a REPLY from all
other processes.
The Ricart-Agrawala Algorithm

Uses two types of messages: REQUEST and REPLY.

A process sends a REQUEST message to all other processes
to request their permission to enter the critical section.

A process sends a REPLY message to a process to give its
permission to that process.

Processes use Lamport-style logical clocks to assign a
timestamp to critical section requests and timestamps are used
to decide the priority of requests.
 Each process pi maintains a Boolean Request-Deferred array,
RDi, whose size equals the number of processes in the
system.
 Initially, ∀i ∀j: RDi[j]=0. Whenever pi defers the request sent by
pj, it sets RDi[j]=1, and after it has sent a REPLY message to pj, it
sets RDi[j]=0.
The Ricart-Agrawala Algorithm

Requesting the critical section:
1. When a site Si wants to enter the CS, it broadcasts a
timestamped REQUEST message to all other sites.
2. When site Sk receives a REQUEST message from site
Si, it sends a REPLY message to site Si if site Sk is
neither requesting nor executing the CS, or if site Sk
is requesting and Si’s request’s timestamp is smaller than
site Sk’s own request’s timestamp. Otherwise, the reply is
deferred and Sk sets RDk[i]=1.

Executing the critical section:
 Site Si enters the CS after it has received a REPLY
message from every site it sent a REQUEST message to.
The Ricart-Agrawala Algorithm

Releasing the critical section:
 When site Si exits the CS, it sends all the deferred
REPLY messages: ∀ j if RDi[j]=1, then send a REPLY
message to Sj and set RDi[j]=0.
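The reply/defer and release rules above can be sketched as follows (the RD array follows the slides; other names such as `my_request` are illustrative assumptions):

```python
class RASite:
    """One site's reply/defer decision in the Ricart-Agrawala
    algorithm (sketch)."""

    def __init__(self, site_id, n):
        self.id = site_id
        self.RD = [0] * n        # request-deferred flags, initially all 0
        self.my_request = None   # (timestamp, site_id) while requesting
        self.in_cs = False

    def on_request(self, sender, ts):
        """Return True if a REPLY is sent immediately, False if deferred."""
        their_request = (ts, sender)
        if not self.in_cs and (self.my_request is None
                               or their_request < self.my_request):
            return True          # not requesting/executing, or their
                                 # (timestamp, id) pair is smaller
        self.RD[sender] = 1      # defer the REPLY until we leave the CS
        return False

    def exit_cs(self):
        """Leaving the CS: send every deferred REPLY and clear the flags."""
        self.in_cs = False
        self.my_request = None
        deferred = [j for j, d in enumerate(self.RD) if d == 1]
        for j in deferred:
            self.RD[j] = 0       # REPLY sent to site j here
        return deferred
```

Comparing `(timestamp, site_id)` tuples gives the usual tie-break by site id when two requests carry the same timestamp.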
The Ricart-Agrawala Algorithm

Theorem: Ricart-Agrawala algorithm achieves mutual
exclusion.
 Proof is by contradiction. Suppose two sites Si and Sj are
executing the CS concurrently and Si’s request has higher
priority than the request of Sj.
 Clearly, Si received Sj’s request after it had made its own
request (otherwise Si’s clock, and hence its request timestamp,
would have exceeded Sj’s, contradicting Si’s higher priority).
 Thus, Sj can concurrently execute the CS with Si only if Si
returns a REPLY to Sj (in response to Sj’s request) before Si
exits the CS.
 However, this is impossible because Sj’s request has lower
priority.

Therefore, Ricart-Agrawala algorithm achieves mutual
exclusion.
The Ricart-Agrawala Algorithm

For each CS execution, Ricart-Agrawala algorithm
requires (N − 1) REQUEST messages and (N − 1)
REPLY messages.

Thus, it requires 2(N − 1) messages per CS
execution.
The Ricart-Agrawala Algorithm
 Improvement over Lamport’s

 Main Idea:
– node j need not send a REPLY to node i if j has a request with
timestamp lower than the request of i (since i cannot enter before j
anyway in this case)

 Does not require FIFO

 2(n – 1) messages per critical section invocation

 Synchronization delay = max. message transmission time

 Requests granted in order of increasing timestamps

Roucairol-Carvalho Algorithm
 Improvement over Ricart-Agrawala

 Main idea
– If process i is done with the CS and has not received any requests from
any process in the meantime, it can go ahead and enter the CS again
without sending out any requests. Hence it re-enters with a
synchronization cost of 0.
– One case: suppose that after process i finishes the CS, it finds no pending
requests, so it enters the CS again. However, process j had actually sent a
request while the first CS was executing, which happened to reach process i
only after i finished the second CS too. Process i thus executed a CS with a
timestamp larger than the still-unfulfilled pending request of j. This might
seem unfair but is not incorrect: by causal order, the second CS is
concurrent with the request of Pj, so either can be ordered before the
other. No two processes are in the CS at the same time, and there is no
starvation.
Roucairol-Carvalho Algorithm

 Improvement over Ricart-Agrawala

 Main idea
– Once i has received a REPLY from j, it does not need to send a
REQUEST to j again to re-enter the CS unless it has already sent a
REPLY to j after the first CS (in response to a REQUEST from j)

– Message complexity varies between 0 and 2(n – 1) depending on
the request pattern

– Worst-case message complexity is still the same

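The re-entry rule can be sketched with a per-site permission flag (a common way to state the rule; the name `have_permission` is an assumption, not from the slides):

```python
class RCSite:
    """Sketch of the Roucairol-Carvalho re-entry rule:
    have_permission[j] is True once a REPLY from j has arrived and no
    REPLY has been sent back to j since."""

    def __init__(self, site_id, n):
        self.have_permission = [False] * n
        self.have_permission[site_id] = True   # a site trivially permits itself

    def sites_to_ask(self):
        """Sites that must receive a REQUEST before (re-)entering the CS."""
        return [j for j, ok in enumerate(self.have_permission) if not ok]

    def on_reply(self, sender):
        self.have_permission[sender] = True

    def send_reply(self, dest):
        # Granting permission to dest surrenders it: dest must be asked again.
        self.have_permission[dest] = False
```

When every flag is already True, `sites_to_ask()` is empty and the site re-enters with zero messages, which is where the 0-to-2(n – 1) message complexity range comes from.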
