Distributed Systems (3rd Edition)
Chapter 08: Fault Tolerance
Dependability
Basics
A component provides services to clients. To provide services, the
component may require services from other components ⇒ a
component may depend on some other component.
Specifically
A component C depends on C∗ if the correctness of C’s behavior
depends on the correctness of C∗’s behavior. (Components are
processes or channels.)
Requirements related to dependability
Requirement      Description
Availability     Readiness for usage
Reliability      Continuity of service delivery
Safety           Very low probability of catastrophes
Maintainability  How easily a failed system can be repaired
Traditional metrics
► Mean Time To Failure (MTTF): The average time until a
component fails.
► Mean Time To Repair (MTTR): The average time needed to
repair a component.
► Mean Time Between Failures (MTBF): Simply MTTF +
MTTR.
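From these metrics one can derive the long-run availability of a component as MTTF / (MTTF + MTTR) = MTTF / MTBF. A minimal sketch in Python, with made-up numbers purely for illustration:

# Steady-state availability from MTTF and MTTR (illustrative figures only).
mttf = 1000.0   # mean time to failure, in hours
mttr = 2.0      # mean time to repair, in hours
mtbf = mttf + mttr              # mean time between failures
availability = mttf / mtbf      # fraction of time the component is ready for usage
print(f"MTBF = {mtbf} h, availability = {availability:.4f}")   # ~0.9980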
Observation
Reliability and availability make sense only if we have an accurate
notion of what a failure actually is.
Terminology
Handling faults
► Fault prevention: prevent the occurrence of a fault. Example: don't hire sloppy programmers.
► Fault tolerance: build a component such that it can mask the occurrence of a fault. Example: have each component built by two independent programmers.
► Fault removal: reduce the presence, number, or seriousness of a fault. Example: get rid of sloppy programmers.
► Fault forecasting: estimate the current presence, future incidence, and consequences of faults. Example: estimate how a recruiter is doing when it comes to hiring sloppy programmers.
Failure models
Types of failures
Observation
Note that deliberate failures, be they omission or commission failures,
are typically security problems. Distinguishing between deliberate and
unintentional failures is, in general, impossible.
Halting failures
Scenario
C no longer perceives any activity from C∗ — a halting failure?
Distinguishing between a crash and an omission/timing failure may be
impossible.
Types of redundancy
► Information redundancy: Add extra bits to data units so that errors
can be recovered when bits are garbled.
► Time redundancy: Design a system such that an action can be
performed again if anything went wrong. Typically used when
faults are transient or intermittent (a small retry sketch follows after this list).
► Physical redundancy: Add extra equipment or processes so that the
loss or malfunctioning of one or more components can be tolerated.
This type of redundancy is extensively used in distributed systems.
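The sketch referred to above illustrates time redundancy: simply perform the action again when something appears to have gone wrong. The operation, retry count, and delay are assumptions for the example, not prescribed by the slides.

# Time redundancy: retry an operation that may suffer from transient faults.
import time

def with_retries(operation, attempts=3, delay=0.1):
    for i in range(attempts):
        try:
            return operation()          # the action we want to perform
        except Exception:
            if i == attempts - 1:
                raise                   # fault persists: give up after the last attempt
            time.sleep(delay)           # transient fault: wait briefly and try again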
Process resilience
Basic idea
Protect against malfunctioning processes through process replication,
organizing multiple processes into a process group. Distinguish
between flat groups and hierarchical groups.
[Figure: group organization — (a) a flat group, (b) a hierarchical group with a coordinator and workers]
Important assumptions
► All members are identical
► All members process commands in the same order
Result: We can now be sure that all processes do exactly the same
thing.
13 / 68
Consensus in faulty systems with crash failures
Consensus
Prerequisite
In a fault-tolerant process group, each nonfaulty process executes the
same commands, and in the same order, as every other nonfaulty
process.
Reformulation
Nonfaulty group members need to reach consensus on which
command to execute next.
Flooding-based consensus
System model
► A process group P = {P1 , . . . , Pn }
► Fail-stop failure semantics, i.e., with reliable failure detection
► A client contacts a Pi requesting it to execute a command
► Every Pi maintains a list of proposed commands
[Figure: flooding-based consensus — P1 crashes while flooding its proposal; decision points for P2, P3, and P4 are marked]
Observations
► P2 received all proposed commands from all other processes ⇒
makes decision.
► P3 may have detected that P1 crashed, but does not know if P2
received anything, i.e., P3 cannot know if it has the same
information as P2 ⇒ cannot make decision (same for P4 ).
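A minimal sketch of one round of flooding-based consensus under the system model above (fail-stop processes, reliable failure detection). The broadcast/receive helpers and the rule of picking the smallest command are illustrative assumptions.

# One round of flooding-based consensus (sketch). 'broadcast' and 'receive_from'
# are assumed channel helpers; commands are assumed to be comparable (e.g., strings).
def flooding_round(my_id, my_commands, alive_processes, broadcast, receive_from):
    broadcast(my_id, my_commands)               # flood my list of proposed commands
    proposals = {my_id: set(my_commands)}
    for p in alive_processes:                   # wait for every process believed to be nonfaulty
        if p != my_id:
            proposals[p] = set(receive_from(p))
    merged = set().union(*proposals.values())   # everyone merges the same proposals...
    return min(merged)                          # ...and applies the same deterministic choice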
Understanding Paxos
We will build up Paxos from scratch to understand where many
consensus algorithms actually come from.
Paxos essentials
Starting point
► We assume a client-server configuration, with initially one primary
server.
► To make the server more robust, we start by adding a backup
server.
► To ensure that all commands are executed in the same order at
both servers, the primary assigns unique sequence numbers to
all commands. In Paxos, the primary is called the leader.
► Assume that actual commands can always be restored (either
from clients or servers) ⇒ we consider only control messages.
Two-server situation
[Figure: the leader S1 assigns sequence numbers to incoming operations — (Seq, o2, 1) and (Seq, o1, 2) — and forwards them to backup S2, so that both servers execute first o2 and then o1]
[Figure: client C1 sends o1; leader S1 sends ACCEPT(o1, 1) to S2 and executes o1, then crashes; S2 becomes leader and executes o2 without ever having received the accept message]
Problem
Primary crashes after executing an operation, but the backup never
received the accept message.
[Figure: S2, now leader, sends LEARN(o1) and ACCEPT(o2, 2); an operation is executed only after it has been learned]
Solution
Never execute an operation before it is clear that it has been learned.
[Figure: three servers — S1 sends ACCEPT(o1, 1); S2 sends LEARN(o1) and executes o1; S3 acts as the new leader, sends ACCEPT(o2, 1), and executes o2]
Scenario
What happens when LEARN(o1) as sent by S2 to S1 is lost?
Solution
S2 will also have to wait until it knows that S3 has learned o1 .
General rule
In Paxos, a server S cannot execute an operation o until it has
received LEARN(o) from all other nonfaulty servers.
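A hedged sketch of this rule: buffer an accepted operation and execute it only once LEARN messages have arrived from every other server still considered nonfaulty. Class and method names are illustrative, not the slides' code.

# Execute an operation only after LEARN(o) from all other nonfaulty servers (sketch).
class PaxosServer:
    def __init__(self, my_id, server_ids):
        self.my_id = my_id
        self.others = set(server_ids) - {my_id}    # servers still considered nonfaulty
        self.learned = {}                          # operation -> servers that sent LEARN

    def on_learn(self, op, sender):
        self.learned.setdefault(op, set()).add(sender)
        if self.others <= self.learned[op]:        # LEARN(op) from every other nonfaulty server
            self.execute(op)

    def on_crash_detected(self, server):
        self.others.discard(server)                # a crashed server need not be waited for
                                                   # (a full version would re-check pending operations)

    def execute(self, op):
        print(f"server {self.my_id} executes {op}")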
Failure detection
Practice
Reliable failure detection is practically impossible. A solution is to set
timeouts, but take into account that a detected failure may be false.
[Figure: false crash detection — leader S1 sends ACCEPT(o1, 1) for client C1's operation and executes o1, but S2 wrongly concludes that S1 has failed, promotes itself to leader, and sends ACCEPT(o2, 1) before executing o2]
Observation
Paxos needs at least three servers
Observation
If either one of the backups (S2 or S3 ) crashes, Paxos will behave
correctly: operations at nonfaulty servers are executed in the same
order.
S2 missed ACCEPT(o1, 1)
► S2 did detect the crash and became the new leader.
► If S2 sends ACCEPT(o1, 1) ⇒ S3 retransmits LEARN(o1).
► If S2 sends ACCEPT(o2, 1) ⇒ S3 tells S2 that it apparently missed
ACCEPT(o1, 1) from S1, so that S2 can catch up.
Observation
Paxos (with three servers) behaves correctly when a single server
crashes, regardless of when that crash took place.
[Figure: S1 sends ACCEPT(o1, 1) but then drops leadership; S2 becomes leader and sends ACCEPT(o2, 1); S3 receives two accepts with the same sequence number, leading to confusion about which operation to execute]
[Figure: accept messages now carry the identity of the current leader — (ACCEPT, S1, o1, 1) from S1 and (ACCEPT, S2, o2, 1) from S2 — allowing S3 to learn and execute o1 and then o2]
Essence of solution
When S2 takes over, it needs to make sure that any outstanding
operations initiated by S1 have been properly flushed, i.e., executed by
enough servers. This requires an explicit leadership takeover by which
other servers are informed before sending out new accept messages.
Consensus in faulty systems with arbitrary failures
Essence
We consider process groups in which communication between processes
is inconsistent: (a) improper forwarding of messages, or (b) telling
different things to different processes.
[Figure: (a) P1 improperly forwards a received message; (b) P1 tells different things (a vs. b) to P2 and P3]
Observation
(Recall: BA1 requires that every nonfaulty backup stores the same value; BA2 requires that, if the primary is nonfaulty, every nonfaulty backup stores exactly what the primary sent.)
► Primary faulty ⇒ BA1 says that the backups may store the same, but
different (and thus wrong) value than the one originally sent by the client.
► Primary not faulty ⇒ satisfying BA2 implies that BA1 is satisfied.
[Figure: reaching Byzantine agreement — in a first message round the primary P sends a value (T or F) to the backups B1, B2, B3; in a second round the backups forward what they received to each other, so that each backup holds a collection such as {T, {T, F}}. Two scenarios with a faulty process are shown.]
Observation
Because the members of a fault-tolerant process group are so tightly
coupled, we may run into considerable performance problems, and
perhaps even into situations in which realizing fault tolerance is
impossible.
Question
Are there limitations to what can be readily achieved?
► What is needed to enable reaching consensus?
► What happens when groups are partitioned?
Some limitations on realizing fault tolerance
On reaching consensus
[Figure: circumstances under which distributed consensus can be reached, as a function of process behavior (synchronous or asynchronous), communication delays (bounded or unbounded), and message transmission (unicast or multicast); only with synchronous processes and bounded communication delays is consensus reachable under all transmission assumptions]
CAP theorem
Any networked system providing shared data can provide only two of
the following three properties:
C: consistency, by which a shared and replicated data item appears
as a single, up-to-date copy
A: availability, by which updates will always be eventually executed
P: partition tolerance, by which the system continues to operate even when the process group is partitioned
Conclusion
In a network subject to communication failures, it is impossible to
realize an atomic read/write shared memory that guarantees a
response to every request.
Fundamental question
What are the practical ramifications of the CAP theorem?
Failure detection
Issue
How can we reliably detect that a process has actually crashed?
General model
► Each process is equipped with a failure detection module
► A process P probes another process Q for a reaction
► If Q reacts: Q is considered to be alive (by P)
► If Q does not react within t time units: Q is suspected to have
crashed
Implementation
► If P did not receive heartbeat from Q within time t : P suspects Q.
► If Q later sends a message (which is received by P):
► P stops suspecting Q
► P increases the timeout value t
► Note: if Q did crash, P will keep suspecting Q.
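A minimal sketch of such a detector; the transport, the timeout value, and the doubling policy are assumptions for illustration.

# Heartbeat-style failure detection with an adaptive timeout (sketch).
import time

class FailureDetector:
    def __init__(self, timeout=2.0):
        self.timeout = timeout
        self.last_heard = {}          # process -> time of last heartbeat/message
        self.suspected = set()

    def on_message(self, q):
        self.last_heard[q] = time.monotonic()
        if q in self.suspected:       # the suspicion was false: Q reacted after all
            self.suspected.discard(q)
            self.timeout *= 2         # increase the timeout to avoid repeating the mistake

    def check(self, q):
        last = self.last_heard.get(q, 0.0)
        if time.monotonic() - last > self.timeout:
            self.suspected.add(q)     # no reaction within the timeout: suspect a crash
        return q in self.suspected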
Reliable client-server communication
RPC semantics in the presence of failures
Server crashes
[Figure: a server in client-server communication — (a) the normal case, (b) a crash after execution, (c) a crash before execution]
Problem
Where (a) is the normal case, situations (b) and (c) require different
solutions. However, we don’t know what happened. Two approaches:
► At-least-once semantics: The server guarantees it will carry out
an operation at least once, no matter what.
► At-most-once semantics: The server guarantees it will carry out
an operation at most once.
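A hedged sketch of at-most-once semantics on the server side: remember the reply for each request identifier and return the stored reply on a retransmission instead of executing the operation again. The request-id scheme is an assumption for illustration.

# At-most-once semantics (sketch): never carry out the same request twice.
class AtMostOnceServer:
    def __init__(self, handler):
        self.handler = handler
        self.completed = {}                        # request_id -> stored reply

    def handle(self, request_id, payload):
        if request_id in self.completed:
            return self.completed[request_id]      # duplicate request: reply, do not re-execute
        reply = self.handler(payload)              # execute the operation (at most once)
        self.completed[request_id] = reply
        return reply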
A client's reissue strategy is combined with the server's strategy of sending the completion message (M) before or after updating the document (P); C marks the moment the server crashes.

                        Strategy M → P          Strategy P → M
Reissue strategy        MPC    MC(P)  C(MP)     PMC    PC(M)  C(PM)
Always                  DUP    OK     OK        DUP    DUP    OK
Never                   OK     ZERO   ZERO      OK     OK     ZERO
Only when ACKed         DUP    OK     ZERO      DUP    OK     ZERO
Only when not ACKed     OK     ZERO   OK        OK     DUP    OK

OK   = Document updated once
DUP  = Document updated twice
ZERO = Document not updated at all
Partial solution
Design the server such that its operations are idempotent: repeating
the same operation has the same effect as carrying it out exactly once:
► pure read operations
► strict overwrite operations
Many operations are inherently nonidempotent, such as many banking
transactions.
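A small (assumed) illustration of the difference: a strict overwrite is idempotent, a deposit is not.

# Repeating an idempotent operation leaves the same state; repeating a
# nonidempotent one does not.
doc = {}

def overwrite(key, value):        # idempotent: strict overwrite
    doc[key] = value

def deposit(account, amount):     # nonidempotent: each repetition changes the outcome
    account["balance"] += amount

overwrite("title", "report")      # executing this twice has the effect of executing it once
overwrite("title", "report")

acct = {"balance": 100}
deposit(acct, 50)                 # executing this twice deposits 100 instead of 50
deposit(acct, 50)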
Client crashes
Problem
The server is doing work and holding resources for nothing (called
doing an orphan computation).
Solution
► Orphan is killed (or rolled back) by the client when it recovers
► Client broadcasts new epoch number when recovering ⇒ server
kills client’s orphans
► Require computations to complete within T time units. Old ones are
simply removed.
Reliable group communication
[Figure: reliable group communication — the sender and each recipient run group-membership functionality on top of a message-handling component; messages are received from the network by the message-handling component, and delivery to the application goes through the group-membership functionality]
Tricky part
Agreement is needed on what the group actually looks like before a
received message can be delivered.
[Figure: simple reliable multicasting — the sender keeps message M25 in a history buffer while multicasting it; each receiver tracks the last message it delivered (Last = 24 for most, Last = 23 for one), so the receiver that is still missing M24 can detect the gap]
Distributed commit
Problem
Have an operation performed by each member of a process group, or
by none at all.
► Reliable multicasting: a message is to be delivered to all
recipients.
► Distributed transaction: each local transaction must
succeed.
Essence
The client who initiated the computation acts as coordinator; processes
required to commit are the participants.
► Phase 1a: The coordinator sends VOTE-REQUEST to the participants
(also called a pre-write).
► Phase 1b: When a participant receives VOTE-REQUEST it returns
either VOTE-COMMIT or VOTE-ABORT to the coordinator. If it sends
VOTE-ABORT, it aborts its local computation.
► Phase 2a: The coordinator collects all votes; if all are VOTE-COMMIT,
it sends GLOBAL-COMMIT to all participants, otherwise it sends
GLOBAL-ABORT.
[Figure: finite state machines for 2PC — coordinator: INIT → WAIT after sending vote-request; from WAIT it moves to ABORT (sending global-abort) on a vote-abort, or to COMMIT (sending global-commit) once all vote-commits are in. Participant: INIT → READY after receiving vote-request and replying vote-commit (or directly to ABORT when replying vote-abort); from READY it moves to COMMIT or ABORT upon global-commit or global-abort, acknowledging with ACK.]
Observation
When distributed commit is required, having participants use
temporary workspaces to keep their results allows for simple recovery
in the presence of failures.
Result
If all participants are in the READY state, the protocol blocks.
Apparently, the coordinator is failing. Note: The protocol prescribes
that we need the decision from the coordinator.
Observation
The real problem lies in the fact that the coordinator’s final decision
may not be available for some time (or actually lost).
Alternative
Let a participant P in the READY state timeout when it hasn’t received
the coordinator’s decision; P tries to find out what other participants
know (as discussed).
Observation
The essence of the problem is that a recovering participant cannot make a
local decision: it depends on other (possibly failed) processes.
Coordinator in Python

class Coordinator:

    def run(self):
        yetToReceive = list(participants)
        self.log.info('WAIT')
        # Phase 1a: ask every participant for its vote
        self.chan.sendTo(participants, VOTE_REQUEST)
        while len(yetToReceive) > 0:
            msg = self.chan.recvFrom(participants, TIMEOUT)
            if (not msg) or (msg[1] == VOTE_ABORT):
                # A participant voted abort (or timed out): globally abort
                self.log.info('ABORT')
                self.chan.sendTo(participants, GLOBAL_ABORT)
                return
            else:  # msg[1] == VOTE_COMMIT
                yetToReceive.remove(msg[0])
        # Phase 2a: all votes were commit
        self.log.info('COMMIT')
        self.chan.sendTo(participants, GLOBAL_COMMIT)
Participant in Python

class Participant:
    def run(self):
        msg = self.chan.recvFrom(coordinator, TIMEOUT)
        if not msg:  # Crashed coordinator - give up entirely
            decision = LOCAL_ABORT
        else:  # Coordinator will have sent VOTE_REQUEST
            decision = self.do_work()
            if decision == LOCAL_ABORT:
                self.chan.sendTo(coordinator, VOTE_ABORT)
            else:  # Ready to commit, enter READY state
                self.chan.sendTo(coordinator, VOTE_COMMIT)
                msg = self.chan.recvFrom(coordinator, TIMEOUT)
                if not msg:  # Crashed coordinator - check the others
                    self.chan.sendTo(all_participants, NEED_DECISION)
                    while True:
                        msg = self.chan.recvFromAny()
                        if msg[1] in [GLOBAL_COMMIT, GLOBAL_ABORT, LOCAL_ABORT]:
                            decision = msg[1]
                            break
                else:  # Coordinator came to a decision
                    decision = msg[1]

        while True:  # Help any other participant when coordinator crashed
            msg = self.chan.recvFrom(all_participants)
            if msg[1] == NEED_DECISION:
                self.chan.sendTo([msg[0]], decision)
Recovery: Background
Essence
When a failure occurs, we need to bring the system into an error-free
state:
► Forward error recovery: Find a new state from which the system
can continue operation
► Backward error recovery: Bring the system back into a previous
error-free state
Practice
Use backward error recovery, requiring that we establish recovery
points
Observation
Recovery in distributed systems is complicated by the fact that
processes need to cooperate in identifying a consistent state from
where to recover
Recovery line
Assuming processes regularly checkpoint their state, the recovery line is
the most recent consistent global checkpoint.
[Figure: recovery line — P1 and P2 checkpoint over time; after P1 fails, the most recent pair of checkpoints forms an inconsistent collection (a message sent from P2 to P1 would have been received without having been sent), so the system must roll back to the most recent consistent collection of checkpoints]
Coordinated checkpointing
Essence
Each process takes a checkpoint after a globally coordinated action.
Simple solution
Use a two-phase blocking protocol:
► A coordinator multicasts a checkpoint request message
► When a participant receives such a message, it takes a
checkpoint, stops sending (application) messages, and reports
back that it has taken a checkpoint
► When all checkpoints have been confirmed at the coordinator, the
latter broadcasts a checkpoint done message to allow all
processes to continue
Observation
It is possible to consider only those processes that depend on the
recovery of the coordinator, and ignore the rest
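A minimal sketch of the coordinator's side of the two-phase blocking protocol above; the channel abstraction and message names are assumptions.

# Coordinator side of coordinated checkpointing (sketch).
def coordinated_checkpoint(chan, participants):
    chan.multicast(participants, "CHECKPOINT_REQUEST")
    confirmed = set()
    while confirmed != set(participants):
        sender, msg = chan.receive()
        if msg == "CHECKPOINT_TAKEN":                      # participant checkpointed and stopped sending
            confirmed.add(sender)
    chan.multicast(participants, "CHECKPOINT_DONE")        # all confirmed: everyone may continue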
Cascaded rollback
Observation
If checkpointing is done at the “wrong” instants, the recovery line may
lie at system startup time. We have a so-called cascaded rollback.
[Figure: cascaded rollback — starting from the initial state, P1 and P2 take checkpoints while exchanging messages m and m*; after the failure, every recent pair of checkpoints is inconsistent, so recovery cascades back to the initial state]
Independent checkpointing
Essence
Each process independently takes checkpoints, with the risk of a
cascaded rollback to system startup.
► Let CPi(m) denote the mth checkpoint of process Pi and INTi(m) the
interval between CPi(m − 1) and CPi(m).
► When process Pi sends a message in interval INTi(m), it
piggybacks (i, m).
► When process Pj receives a message in interval INTj(n), it
records the dependency INTi(m) → INTj(n).
► The dependency INTi(m) → INTj(n) is saved to storage when
taking checkpoint CPj(n).
Observation
If process Pi rolls back to CPi(m − 1), Pj must also roll back to CPj(n − 1).
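A hedged sketch of the piggybacking and dependency recording described above; the data structures are illustrative.

# Independent checkpointing: piggyback (i, m) on outgoing messages and record
# dependencies INT_i(m) -> INT_j(n) at the receiver (sketch).
class CheckpointingProcess:
    def __init__(self, pid):
        self.pid = pid
        self.interval = 1              # current interval INT_pid(interval)
        self.pending_deps = []         # dependencies recorded since the last checkpoint

    def send(self, payload):
        return (self.pid, self.interval, payload)    # piggyback (i, m)

    def receive(self, msg):
        i, m, payload = msg
        self.pending_deps.append(((i, m), (self.pid, self.interval)))   # INT_i(m) -> INT_j(n)
        return payload

    def checkpoint(self, stable_storage):
        stable_storage.append((self.pid, self.interval, list(self.pending_deps)))
        self.pending_deps = []
        self.interval += 1             # a new interval starts after the checkpoint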
Message logging
Alternative
Instead of taking an (expensive) checkpoint, try to replay your
(communication) behavior from the most recent checkpoint ⇒ store
messages in a log.
Assumption
We assume a piecewise deterministic execution model:
► The execution of each process can be considered as a sequence
of state intervals
► Each state interval starts with a nondeterministic event (e.g.,
message receipt)
► Execution in a state interval is deterministic
Conclusion
If we record nondeterministic events (to replay them later), we obtain a
deterministic execution model that will allow us to do a complete replay.
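A minimal sketch of what this buys us: log each nondeterministic event (here, a message receipt) so the process can be replayed deterministically later. Names are illustrative assumptions.

# Piecewise deterministic execution: record nondeterministic events for replay (sketch).
class LoggingProcess:
    def __init__(self, handler):
        self.handler = handler         # deterministic processing within a state interval
        self.log = []                  # received messages = recorded nondeterministic events

    def deliver(self, msg):
        self.log.append(msg)           # record the event before acting on it
        self.handler(msg)

    def replay(self, from_index=0):
        for msg in self.log[from_index:]:
            self.handler(msg)          # same inputs in the same order => same state intervals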
Message-logging schemes
Notations
► DEP(m): processes to which m has been delivered. If message
m∗ is causally dependent on the delivery of m, and m∗ has been
delivered to Q, then Q ∈ DEP(m).
► COPY(m): processes that have a copy of m, but have not (yet)
reliably stored it.
► FAIL: the collection of crashed processes.
Characterization
Q is orphaned ⇔ ∃m : Q ∈ DEP(m) and COPY(m) ⊆ FAIL
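This characterization translates directly into a small check; the example sets are made up for illustration.

# Q is orphaned iff there is a message m with Q in DEP(m) and COPY(m) a subset of FAIL.
def is_orphan(q, messages, DEP, COPY, FAIL):
    return any(q in DEP[m] and COPY[m] <= FAIL for m in messages)

# m was delivered to Q, but every process holding a not-yet-stable copy of m crashed,
# so m can never be replayed and Q becomes an orphan.
messages = ["m"]
DEP  = {"m": {"Q"}}
COPY = {"m": {"P"}}
FAIL = {"P"}
print(is_orphan("Q", messages, DEP, COPY, FAIL))   # True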
Message-logging schemes
Pessimistic protocol
For each nonstable message m, there is at most one process
dependent on m, that is |DEP(m)| ≤ 1.
Consequence
A nonstable message in a pessimistic protocol must be made stable
before the next message is sent.
Message-logging schemes
Optimistic protocol
For each unstable message m, we ensure that if COPY(m) ⊆ FAIL,
then eventually also DEP(m) ⊆ FAIL.
Consequence
To guarantee that DEP(m) ⊆ FAIL, we generally roll back each orphaned
process Q until Q ∉ DEP(m).