Slides 08: Fault Tolerance (3rd Edition)
Dependability
Basics
A component provides services to clients. To provide services, the component
may require the services from other components ⇒ a component may depend
on some other component.
Specifically
A component C depends on C* if the correctness of C's behavior depends on
the correctness of C*'s behavior. (Components are processes or channels.)
Traditional metrics
Mean Time To Failure (MTTF): The average time until a component fails.
Mean Time To Repair (MTTR): The average time needed to repair a
component.
Mean Time Between Failures (MTBF): Simply MTTF + MTTR.
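As a quick illustration, these metrics determine the long-run availability of a component as MTTF / (MTTF + MTTR) = MTTF / MTBF; a minimal sketch in Python (the numbers are only an example):

def availability(mttf: float, mttr: float) -> float:
    # Fraction of time the component is up: MTTF / (MTTF + MTTR) = MTTF / MTBF.
    return mttf / (mttf + mttr)

# A component that fails on average every 1000 hours and takes 2 hours
# to repair is up about 99.8% of the time.
print(availability(1000, 2))   # 0.998003992...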
Observation
Reliability and availability make sense only if we have an accurate notion of
what a failure actually is.
Terminology
Handling faults
Fault prevention: prevent the occurrence of a fault (example: don't hire sloppy programmers)
Fault tolerance: build a component such that it can mask the occurrence of a fault (example: build each component by two independent programmers)
Fault removal: reduce the presence, number, or seriousness of a fault (example: get rid of sloppy programmers)
Fault forecasting: estimate the current presence, future incidence, and consequences of faults (example: estimate how a recruiter is doing when it comes to hiring sloppy programmers)
Failure models
Types of failures
Type                        Description of server's behavior
Crash failure               Halts, but is working correctly until it halts
Omission failure            Fails to respond to incoming requests
  Receive omission          Fails to receive incoming messages
  Send omission             Fails to send messages
Timing failure              Response lies outside a specified time interval
Response failure            Response is incorrect
  Value failure             The value of the response is wrong
  State-transition failure  Deviates from the correct flow of control
Arbitrary failure           May produce arbitrary responses at arbitrary times
Observation
Note that deliberate failures, be they omission or commission failures, are
typically security problems. Distinguishing between deliberate failures and
unintentional ones is, in general, impossible.
Halting failures
Scenario
C no longer perceives any activity from C* — a halting failure? Distinguishing
between a crash and an omission/timing failure may be impossible.
Types of redundancy
Information redundancy: Add extra bits to data units so that errors can be
recovered when bits are garbled.
Time redundancy: Design a system such that an action can be performed
again if anything went wrong. Typically used when faults are transient or
intermittent.
Physical redundancy: Add equipment or processes in order to allow one
or more components to fail. This type is extensively used in distributed
systems.
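A minimal sketch of physical redundancy in code, assuming triple modular redundancy with majority voting; the replica functions below are purely illustrative:

from collections import Counter

def vote(results):
    # Majority voting over replica results; a single wrong value is outvoted.
    value, count = Counter(results).most_common(1)[0]
    if count > len(results) // 2:
        return value
    raise RuntimeError("no majority: too many replicas failed")

replicas = [lambda x: x * x, lambda x: x * x, lambda x: x * x + 1]  # third replica is faulty
print(vote([f(4) for f in replicas]))   # 16 -- the faulty result 17 is masked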
Process resilience
Basic idea
Protect against malfunctioning processes through process replication,
organizing multiple processes into a process group. Distinguish between flat
groups and hierarchical groups.
[Figure: group organization — a flat group (all members equal) versus a hierarchical group with a coordinator and workers]
Important assumptions
All members are identical
All members process commands in the same order
Result: We can now be sure that all processes do exactly the same thing.
Consensus
Prerequisite
In a fault-tolerant process group, each nonfaulty process executes the same
commands, and in the same order, as every other nonfaulty process.
Reformulation
Nonfaulty group members need to reach consensus on which command to
execute next.
Flooding-based consensus
System model
A process group P = {P1, …, Pn}
Fail-stop failure semantics, i.e., with reliable failure detection
A client contacts a Pi requesting it to execute a command
Every Pi maintains a list of proposed commands
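A minimal sketch of a flooding round under these assumptions (fail-stop with reliable failure detection); the class and method names are illustrative, not part of the slides:

class FloodingProcess:
    def __init__(self, pid):
        self.pid = pid
        self.known = set()        # all proposed commands seen so far
        self.crashed = False

    def propose(self, command, group):
        self.known.add(command)
        # Flood everything we know to every member that has not crashed.
        for p in group:
            if not p.crashed:
                p.known |= self.known

    def decide(self):
        # Deterministic choice: processes with the same proposal set decide identically.
        return min(self.known)

group = [FloodingProcess(i) for i in range(4)]
group[0].propose("cmd-a", group)
group[1].propose("cmd-b", group)
print({p.pid: p.decide() for p in group})    # every process decides on 'cmd-a'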
[Figure: flooding-based consensus — P1 crashes after its proposal reaches only P2; P2, having received proposals from all processes, can decide, while P3 and P4 cannot]
Observations
P2 received all proposed commands from all other processes ⇒ it can make a
decision.
P3 may have detected that P1 crashed, but does not know whether P2 received
anything, i.e., P3 cannot know if it has the same information as P2 ⇒ it
cannot make a decision (same for P4).
Understanding Paxos
We will build up Paxos from scratch to understand where many consensus
algorithms actually come from.
Paxos essentials
Starting point
We assume a client-server configuration, with initially one primary server.
To make the server more robust, we start by adding a backup server.
To ensure that all commands are executed in the same order at both
servers, the primary assigns unique sequence numbers to all commands.
In Paxos, the primary is called the leader.
Assume that actual commands can always be restored (either from clients
or servers) ⇒ we consider only control messages.
Two-server situation
[Figure: two-server situation — leader S1 assigns sequence numbers, sending ⟨Seq, o2, 1⟩ and ⟨Seq, o1, 2⟩ to S2, so that both servers execute o2 before o1 and client C2 sees consistent results]
[Figure: leader S1 sends ⟨Accept, o1, 1⟩ and executes o1; after S1 appears to have crashed, S2 takes over as leader and sends ⟨Accept, o2, 1⟩, executing o2]
Problem
Primary crashes after executing an operation, but the backup never received
the accept message.
[Figure: S2 acknowledges ⟨Accept, o1, 1⟩ with ⟨Learn, o1⟩ and executes o1; after taking over leadership it sends ⟨Accept, o2, 2⟩, so o2 is executed as the second operation]
Solution
Never execute an operation before it is clear that it has been learned.
[Figure: three servers S1, S2, S3 — the leader sends ⟨Accept, o1, 1⟩, S2 replies with ⟨Learn, o1⟩ and executes o1; after a leadership change, ⟨Accept, o2, 1⟩ is sent and o2 executed]
Scenario
What happens when LEARN(o1), as sent by S2 to S1, is lost?
Solution
S2 will also have to wait until it knows that S3 has learned o1 .
General rule
In Paxos, a server S cannot execute an operation o until it has received a
LEARN(o) from all other nonfaulty servers.
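A sketch of this rule with illustrative names and message handling (not the full Paxos protocol): a server buffers an accepted operation and executes it only once a LEARN for it has arrived from every other server it does not suspect of having crashed.

class PaxosServer:
    def __init__(self, sid, peer_ids):
        self.sid = sid
        self.peers = set(peer_ids)        # the other servers
        self.suspected = set()            # peers currently suspected to have crashed
        self.learns = {}                  # operation -> peers that sent LEARN(o)
        self.executed = []

    def on_accept(self, op):
        self.learns.setdefault(op, set())
        self.maybe_execute(op)

    def on_learn(self, op, sender):
        self.learns.setdefault(op, set()).add(sender)
        self.maybe_execute(op)

    def maybe_execute(self, op):
        needed = self.peers - self.suspected
        if needed <= self.learns[op] and op not in self.executed:
            self.executed.append(op)      # safe: all nonfaulty servers have learned op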
Failure detection
Practice
Reliable failure detection is practically impossible. A solution is to set timeouts,
but take into account that a detected failure may be false.
[Figure: with two servers, S1 (leader) sends ⟨Accept, o1, 1⟩ and executes o1; S2, falsely suspecting that S1 has crashed, also acts as leader and sends ⟨Accept, o2, 1⟩]
Observation
Paxos needs at least three servers
Observation
If either one of the backups (S2 or S3) crashes, Paxos will behave correctly:
operations at nonfaulty servers are executed in the same order.
S2 missed ACCEPT(o1, 1)
S2 did detect the crash and became the new leader
S2 sends ACCEPT(o1, 1) ⇒ S3 retransmits LEARN(o1)
S2 sends ACCEPT(o2, 1) ⇒ S3 tells S2 that it apparently missed
ACCEPT(o1, 1) from S1, so that S2 can catch up
Observation
Paxos (with three servers) behaves correctly when a single server crashes,
regardless of when that crash took place.
[Figure: S1 drops leadership after sending ⟨Accept, o1, 1⟩; S2 takes over and sends ⟨Accept, o2, 1⟩; S3 learns and executes o2, leading to confusion about the order of o1 and o2]
[Figure: accept messages now carry the leader's identity — ⟨Accept, S1, o1, 1⟩ and ⟨Accept, S2, o2, 1⟩ — so S3 learns and executes o1 and o2 in a single, agreed order]
Essence of solution
When S2 takes over, it needs to make sure that any outstanding operations
initiated by S1 have been properly flushed, i.e., executed by enough servers.
This requires an explicit leadership takeover by which other servers are
informed before sending out new accept messages.
Essence
We consider process groups in which communication between processes is
inconsistent: (a) improper forwarding of messages, or (b) telling different things
to different processes.
[Figure: (a) P1 improperly forwards the value it received; (b) P1 tells different things (a and b) to different processes P2 and P3]
Observation
(Recall the requirements for Byzantine agreement: BA1 — every nonfaulty backup stores the same value; BA2 — if the primary is nonfaulty, every nonfaulty backup stores exactly what the primary sent.)
Primary faulty ⇒ BA1 says that the backups may store the same, but different
(and thus wrong) value than the one originally sent by the client.
Primary not faulty ⇒ satisfying BA2 implies that BA1 is satisfied.
[Figure: reaching agreement with a faulty process — primary P and backups B1, B2, B3 exchange values (T/F) in a first and a second message round; the collected sets such as {T,{T,F}} show what each backup ends up with when a backup is faulty and when the primary P is faulty]
Observation
Considering that the members in a fault-tolerant process group are so tightly
coupled, we may run into considerable performance problems, and perhaps
even into situations in which realizing fault tolerance is impossible.
Question
Are there limitations to what can be readily achieved?
What is needed to enable reaching consensus?
What happens when groups are partitioned?
[Figure: circumstances under which distributed consensus can be reached, as a function of process behavior (synchronous or asynchronous), communication delay (bounded or unbounded), message ordering (unordered or ordered), and message transmission (unicast or multicast); with synchronous processes and bounded delays, consensus is reachable in every combination]
CAP theorem
Any networked system providing shared data can provide only two of the
following three properties:
C: consistency, by which a shared and replicated data item appears as a
single, up-to-date copy
A: availability, by which updates will always be eventually executed
P: partition tolerance, by which the system tolerates the partitioning of the process group.
Conclusion
In a network subject to communication failures, it is impossible to realize an
atomic read/write shared memory that guarantees a response to every request.
Failure detection
Issue
How can we reliably detect that a process has actually crashed?
General model
Each process is equipped with a failure detection module
A process P probes another process Q for a reaction
If Q reacts: Q is considered to be alive (by P)
If Q does not react within t time units: Q is suspected to have crashed
Implementation
If P did not receive heartbeat from Q within time t: P suspects Q.
If Q later sends a message (which is received by P):
P stops suspecting Q
P increases the timeout value t
Note: if Q did crash, P will keep suspecting Q.
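A sketch of such a failure detector (the class and its timeout-doubling policy are illustrative): any message from Q counts as a sign of life, and a suspicion that turns out to be false makes P more patient next time.

import time

class FailureDetector:
    def __init__(self, timeout=1.0):
        self.timeout = timeout
        self.last_seen = {}          # process -> time of last message received from it
        self.suspected = set()

    def on_message(self, q):
        if q in self.suspected:      # false suspicion: stop suspecting, increase timeout
            self.suspected.discard(q)
            self.timeout *= 2
        self.last_seen[q] = time.monotonic()

    def suspects(self, q):
        silence = time.monotonic() - self.last_seen.get(q, 0.0)
        if silence > self.timeout:
            self.suspected.add(q)    # if Q really crashed, it stays suspected
        return q in self.suspected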
Problem
Where (a) is the normal case, a server crash after executing the operation (b)
and a crash before executing it (c) require different solutions. However, the
client cannot tell which of these happened. Two approaches:
At-least-once semantics: The server guarantees it will carry out an
operation at least once, no matter what.
At-most-once semantics: The server guarantees it will carry out an
operation at most once.
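A sketch of how a server can approximate at-most-once semantics (illustrative, not from the slides): cache the reply per request id, so a retransmitted request is answered from the cache instead of being executed again. This only helps as long as the server keeps its state; losing the cache in a crash is exactly what makes the cases above hard.

class AtMostOnceServer:
    def __init__(self):
        self.completed = {}                     # request id -> cached reply

    def handle(self, request_id, operation, *args):
        if request_id in self.completed:
            return self.completed[request_id]   # duplicate request: do NOT re-execute
        reply = operation(*args)
        self.completed[request_id] = reply
        return reply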
Here M = send the completion message to the client, P = process the request (update the document), and C = crash. The column headings give the order of events at the server; an event in parentheses never took place because of the crash.

Reissue strategy        Strategy M → P              Strategy P → M
                        MPC    MC(P)   C(MP)        PMC    PC(M)   C(PM)
Always                  DUP    OK      OK           DUP    DUP     OK
Never                   OK     ZERO    ZERO         OK     OK      ZERO
Only when ACKed         DUP    OK      ZERO         DUP    OK      ZERO
Only when not ACKed     OK     ZERO    OK           OK     DUP     OK

OK   = Document updated once
DUP  = Document updated twice
ZERO = Document not updated at all
Partial solution
Design the server such that its operations are idempotent: repeating the same
operation is the same as carrying it out exactly once:
pure read operations
strict overwrite operations
Many operations are inherently nonidempotent, such as many banking
transactions.
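A small illustration of the difference (the account dictionary is just an example): a strict overwrite can be retried safely, a relative update cannot.

account = {"balance": 100}

def set_balance(value):       # strict overwrite: idempotent, safe to repeat
    account["balance"] = value

def deposit(amount):          # relative update: nonidempotent
    account["balance"] += amount

set_balance(150); set_balance(150)   # balance is 150, no matter how often retried
deposit(50); deposit(50)             # a retried deposit leaves 250 instead of 200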
Problem
The server is doing work and holding resources for nothing (called doing an
orphan computation).
Solution
Orphan is killed (or rolled back) by the client when it recovers
Client broadcasts new epoch number when recovering ⇒ server kills
client’s orphans
Require computations to complete within T time units. Old ones are simply
removed.
Intuition
A message sent to a process group G should be delivered to each member of
G. Important: make a distinction between receiving and delivering messages.
[Figure: message delivery versus message reception — each group member's message-handling component receives messages from the network and only then delivers them to the application]
Tricky part
Agreement is needed on what the group actually looks like before a received
message can be delivered.
Problem
Have an operation performed by each member of a process group, or by
none at all.
Reliable multicasting: a message is to be delivered to all recipients.
Distributed transaction: each local transaction must succeed.
Essence
The client who initiated the computation acts as coordinator; processes
required to commit are the participants.
Phase 1a: Coordinator sends VOTE-REQUEST to participants (also called
a pre-write)
Phase 1b: When a participant receives VOTE-REQUEST it returns either
VOTE-COMMIT or VOTE-ABORT to the coordinator. If it sends VOTE-ABORT, it
aborts its local computation
Phase 2a: Coordinator collects all votes; if all are VOTE-COMMIT, it sends
GLOBAL-COMMIT to all participants, otherwise it sends GLOBAL-ABORT
Phase 2b: Each participant waits for GLOBAL-COMMIT or GLOBAL-ABORT
and handles accordingly.
[Figure: finite state machines for the coordinator (left) and a participant (right) in 2PC — states INIT, WAIT (coordinator) or READY (participant), ABORT, and COMMIT, with transitions labeled by the Vote-request, Vote-commit, Vote-abort, Global-commit, Global-abort, and ACK messages]
ABORT: merely make the entry into the abort state idempotent, e.g., removing
the workspace of results
COMMIT: also make the entry into the commit state idempotent, e.g., copying
the workspace to storage
Observation
When distributed commit is required, having participants use temporary
workspaces to keep their results allows for simple recovery in the presence of
failures.
Result
If all participants are in the READY state, the protocol blocks. Apparently, the
coordinator has failed. Note: The protocol prescribes that we need the decision
from the coordinator.
Observation
The real problem lies in the fact that the coordinator’s final decision may not be
available for some time (or actually lost).
Alternative
Let a participant P in the READY state timeout when it hasn’t received the
coordinator’s decision; P tries to find out what other participants know (as
discussed).
Observation
Essence of the problem is that a recovering participant cannot make a local
decision: it is dependent on other (possibly failed) processes
Coordinator in Python
class Coordinator:

    def run(self):
        yetToReceive = list(participants)
        self.log.info('WAIT')
        self.chan.sendTo(participants, VOTE_REQUEST)
        while len(yetToReceive) > 0:
            msg = self.chan.recvFrom(participants, TIMEOUT)
            if (not msg) or (msg[1] == VOTE_ABORT):
                # A timeout or an abort vote: tell everyone to abort globally.
                self.log.info('ABORT')
                self.chan.sendTo(participants, GLOBAL_ABORT)
                return
            else:  # msg[1] == VOTE_COMMIT
                yetToReceive.remove(msg[0])
        # All participants voted commit.
        self.log.info('COMMIT')
        self.chan.sendTo(participants, GLOBAL_COMMIT)
Participant in Python
class Participant:
    def run(self):
        msg = self.chan.recvFrom(coordinator, TIMEOUT)
        if (not msg):  # Crashed coordinator - give up entirely
            decision = LOCAL_ABORT
        else:  # Coordinator will have sent VOTE_REQUEST
            decision = self.do_work()
            if decision == LOCAL_ABORT:
                self.chan.sendTo(coordinator, VOTE_ABORT)
            else:  # Ready to commit, enter READY state
                self.chan.sendTo(coordinator, VOTE_COMMIT)
                msg = self.chan.recvFrom(coordinator, TIMEOUT)
                if (not msg):  # Crashed coordinator - check the others
                    self.chan.sendTo(all_participants, NEED_DECISION)
                    while True:
                        msg = self.chan.recvFromAny()
                        if msg[1] in [GLOBAL_COMMIT, GLOBAL_ABORT, LOCAL_ABORT]:
                            decision = msg[1]
                            break
                else:  # Coordinator came to a decision
                    decision = msg[1]

        while True:  # Help any other participant when coordinator crashed
            msg = self.chan.recvFrom(all_participants)
            if msg[1] == NEED_DECISION:
                self.chan.sendTo([msg[0]], decision)
Recovery: Background
Essence
When a failure occurs, we need to bring the system into an error-free state:
Forward error recovery: Find a new state from which the system can
continue operation
Backward error recovery: Bring the system back into a previous error-free
state
Practice
Use backward error recovery, requiring that we establish recovery points
Observation
Recovery in distributed systems is complicated by the fact that processes need
to cooperate in identifying a consistent state from where to recover
Recovery line
Assuming processes regularly checkpoint their state, the recovery line is the
most recent consistent global checkpoint.
[Figure: checkpoints of P1 and P2 over time; after the failure, the most recent pair of checkpoints is an inconsistent collection because of a message sent from P2 to P1, so the recovery line lies one checkpoint further back]
Coordinated checkpointing
Essence
Each process takes a checkpoint after a globally coordinated action.
Simple solution
Use a two-phase blocking protocol:
A coordinator multicasts a checkpoint request message
When a participant receives such a message, it takes a checkpoint, stops
sending (application) messages, and reports back that it has taken a
checkpoint
When all checkpoints have been confirmed at the coordinator, the latter
broadcasts a checkpoint done message to allow all processes to continue
Observation
It is possible to consider only those processes that depend on the recovery of
the coordinator, and ignore the rest
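A sketch of the two-phase blocking protocol, written in the same style as the 2PC code earlier (self.chan and participants are assumed to exist as before; the message constants are illustrative):

CKPT_REQUEST, CKPT_TAKEN, CKPT_DONE = 'CKPT_REQUEST', 'CKPT_TAKEN', 'CKPT_DONE'

class CheckpointCoordinator:
    def run(self):
        yetToConfirm = list(participants)
        self.chan.sendTo(participants, CKPT_REQUEST)
        while len(yetToConfirm) > 0:
            msg = self.chan.recvFrom(participants)   # (sender, payload)
            if msg[1] == CKPT_TAKEN:
                yetToConfirm.remove(msg[0])
        # Every participant has checkpointed and is holding back application
        # messages, so the collection of checkpoints is consistent.
        self.chan.sendTo(participants, CKPT_DONE)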
Cascaded rollback
Observation
If checkpointing is done at the “wrong” instants, the recovery line may lie at
system startup time. We have a so-called cascaded rollback.
[Figure: cascaded rollback — after the failure, the messages m and m* force P1 and P2 to roll back checkpoint by checkpoint toward system startup]
Independent checkpointing
Essence
Each process independently takes checkpoints, with the risk of a cascaded
rollback to system startup.
Let CPi(m) denote the mth checkpoint of process Pi and INTi(m) the interval
between CPi(m−1) and CPi(m).
When process Pi sends a message in interval INTi(m), it piggybacks (i, m)
When process Pj receives a message in interval INTj(n), it records the
dependency INTi(m) → INTj(n)
The dependency INTi(m) → INTj(n) is saved to storage when taking
checkpoint CPj(n)
Observation
If process Pi rolls back to CPi(m−1), Pj must roll back to CPj(n−1).
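A sketch of this bookkeeping (illustrative class; message transport is left out): each outgoing message is tagged with (i, m), and the receiver records the resulting interval dependency so it can be saved with the next checkpoint.

class CheckpointingProcess:
    def __init__(self, pid):
        self.pid = pid
        self.interval = 1            # we are in INT_pid(interval), i.e., after CP_pid(interval - 1)
        self.dependencies = []       # recorded pairs: INT_i(m) -> INT_j(n)

    def send(self, payload):
        return (self.pid, self.interval, payload)       # piggyback (i, m)

    def receive(self, tagged_message):
        sender, sender_interval, payload = tagged_message
        self.dependencies.append(((sender, sender_interval), (self.pid, self.interval)))
        return payload

    def checkpoint(self, stable_storage):
        # CP_pid(interval) is saved together with the dependencies recorded so far.
        stable_storage.append((self.pid, self.interval, list(self.dependencies)))
        self.interval += 1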
Message logging
Alternative
Instead of taking an (expensive) checkpoint, try to replay your (communication)
behavior from the most recent checkpoint ⇒ store messages in a log.
Assumption
We assume a piecewise deterministic execution model:
The execution of each process can be considered as a sequence of state
intervals
Each state interval starts with a nondeterministic event (e.g., message
receipt)
Execution in a state interval is deterministic
Conclusion
If we record nondeterministic events (to replay them later), we obtain a
deterministic execution model that will allow us to do a complete replay.
Message-logging schemes
Notations
DEP(m): the processes to which m has been delivered. If message m* is
causally dependent on the delivery of m, and m* has been delivered to Q,
then Q ∈ DEP(m).
COPY(m): processes that have a copy of m, but have not (yet) reliably
stored it.
FAIL: the collection of crashed processes.
Characterization: process Q is an orphan if it depends on a message m (Q ∈ DEP(m)) while every process holding a copy of m has crashed (COPY(m) ⊆ FAIL).
Message-logging schemes
Pessimistic protocol
For each nonstable message m, there is at most one process dependent on m,
that is |DEP(m)| ≤ 1.
Consequence
An unstable message in a pessimistic protocol must be made stable before
sending a next message.
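A sketch of that rule (illustrative; the stable log is assumed to survive crashes): every nonstable message this process has received is forced to the log before it sends anything, so no other process can become dependent on an unlogged message.

class PessimisticLogger:
    def __init__(self, stable_log):
        self.stable_log = stable_log     # assumed to survive crashes
        self.unlogged = []               # nonstable messages delivered to this process

    def deliver(self, m):
        self.unlogged.append(m)          # only this process depends on m: |DEP(m)| <= 1

    def send(self, destination, m):
        for pending in self.unlogged:    # make all received messages stable first
            self.stable_log.append(pending)
        self.unlogged.clear()
        destination.deliver(m)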
Message-logging schemes
Optimistic protocol
For each unstable message m, we ensure that if COPY(m) ⊆ FAIL, then
eventually also DEP(m) ⊆ FAIL.
Consequence
To guarantee that DEP(m) ⊆ FAIL, we generally roll back each orphan process
Q until Q ∉ DEP(m).