Summary: This document covers distributed algorithms on rings: ring orientation; leader election (exactly one processor is elected); the impossibility of leader election in anonymous rings, even when the ring size is known; a simple O(n^2)-message algorithm that elects the processor with the largest identifier; an improved O(n log n)-message algorithm using neighborhoods that double in size each phase; an Ω(n log n)-message lower bound for uniform asynchronous leader election; synchronous algorithms and lower bounds; and randomized leader election.


CPSC 668: Distributed Algorithms & Systems

J. Welch, Texas A&M University [ 26 ]

Rings

In an oriented ring, processors have a consistent notion of left and right:

[Figure: a 5-processor ring p0, ..., p4; at each processor, incident channel 1 = left = clockwise and incident channel 2 = right = counter-clockwise.]

For example, if messages are always forwarded on incident channel 1, they will cycle clockwise around the ring.

Why study rings?
- simple starting point, easy to analyze
- abstraction of a token ring
- lower bounds for the ring topology also apply to arbitrary topologies


Leader Election

Definition: each processor has a set of elected states and a set of not-elected states. Once an elected state is entered, the processor is always in an elected state; similarly for not-elected. I.e., the decision is irreversible. In every admissible execution:
- every processor eventually enters either an elected or a not-elected state (liveness)
- exactly one processor (the leader) enters an elected state (safety)

A leader can be used to coordinate future activities of the system. For instance:
- find a spanning tree using the leader as the root
- reconstruct a lost token for a token ring

We will study leader election in rings.


Anonymous Rings

The intuition is that processors do not have unique identifiers. A related issue is whether an algorithm relies on processors knowing the ring size:
- A uniform algorithm does not use the ring size (same algorithm for each ring size). Formally, every processor in every size ring is modeled with the same state machine A.
- A non-uniform algorithm does use the ring size (a different algorithm for each ring size, though the differences may be only trivial). Formally, for every value of n there is a state machine A_n, and every processor in a ring of size n is modeled with A_n. Thus the algorithm A is the collection of all the A_n's.


Leader Election in Anonymous Rings

Theorem 3.2: There is no leader election algorithm for anonymous rings, even if the algorithm knows the ring size (i.e., is non-uniform) and the ring is synchronous.

Proof Sketch: Every processor begins in the same state with the same messages initially in transit. Thus every processor receives the same messages, makes the same state transition, and sends the same messages in round 1; the same holds in round 2, and so on. Eventually some processor is supposed to enter an elected state. But then they all would, a contradiction.

Consequently, there is no uniform or asynchronous leader election algorithm for anonymous rings either.


Rings with Identifiers

Assume each processor has a unique identifier. Distinguish between indices and identifiers:
- indices are 0 through n - 1; they are unavailable to the processors and used only for analysis
- identifiers are arbitrary nonnegative integers; they are available to the processors via a special state component called id

Specify a ring by starting with the smallest id and listing ids in clockwise order. E.g., 3, 37, 19, 4, 25.

[Figure: a 5-processor ring with p0 = 3, p1 = 37, p2 = 19, p3 = 4, p4 = 25.]

Uniform algorithm: there is one state machine for every id, no matter what the ring size. Non-uniform algorithm: there is one state machine for every id and every ring size.


Overview of Leader Election in Rings with Ids

In this case, there are algorithms. We will evaluate them according to their message complexity.

Overview of upcoming results:
- asynchronous ring: Θ(n log n) messages
- synchronous ring: Θ(n) messages under certain conditions, otherwise Θ(n log n) messages

All bounds are asymptotically tight.


O(n^2) Messages Leader Election Algorithm

Each processor follows these rules:
- Initially, send your id to the left.
- When you receive an id (from the right):
  - if it is greater than your id, forward it to the left (you will never be the leader)
  - if it is equal to your id, elect yourself leader
  - if it is smaller than your id, do nothing

Correctness: elects the processor with the largest id; the message containing that id passes through every processor.

Message complexity: depends on how the ids are arranged.
- The largest id travels all the way around the ring: n messages.
- The second largest id travels until reaching the largest.
- The third largest id travels until reaching the largest or second largest.
- Etc.
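These rules can be sketched as a small round-based Python simulation (the synchronous scheduling and the convention that "left" means increasing index are assumptions of the sketch; the algorithm itself is message-driven and works asynchronously):

```python
def elect_leader_quadratic(ids):
    """Simulate the O(n^2) algorithm: each processor sends its id to the
    left; a larger incoming id is forwarded, a smaller one is swallowed,
    and a processor that receives its own id back elects itself leader.
    Returns (leader_index, total_messages)."""
    n = len(ids)
    # pending[i]: ids that have just arrived at processor i (from its right)
    pending = [[ids[(i - 1) % n]] for i in range(n)]  # the n initial sends
    messages = n
    while True:
        nxt = [[] for _ in range(n)]
        for i in range(n):
            for m in pending[i]:
                if m == ids[i]:
                    return i, messages            # own id came back: leader
                if m > ids[i]:
                    nxt[(i + 1) % n].append(m)    # forward to the left
                    messages += 1
                # m < ids[i]: swallow, send nothing
        pending = nxt
```

On the example ring 3, 37, 19, 4, 25 this elects the processor holding 37; on a ring with ids in decreasing order it uses the full n(n+1)/2 messages.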


O(n^2) Messages Algorithm (cont'd)

The worst way to arrange the ids is in decreasing order (clockwise):

[Figure: a 5-processor ring with ids 4, 3, 2, 1, 0 in clockwise order; id 4's message travels the whole ring, and each processor sees the messages 4, 3, ..., down past its own id.]

- The second largest id contributes n - 1 messages.
- The third largest id contributes n - 2 messages.
- Etc.

Total number of messages is sum_{i=1}^{n} i = Θ(n^2).


O(n log n) Messages Leader Election Algorithm

Each processor tries to probe successively larger neighborhoods; the size of the neighborhood doubles in each phase.
- A probe is initiated by sending a probe message containing the initiator's id, in both directions.
- If a processor receives a probe message whose id is larger than its own, it either forwards the probe or sends back a reply, as appropriate (forward if the probe has not yet traveled its full distance, reply otherwise).
- If a processor receives a probe message whose id is smaller than its own, it does nothing (the probe is swallowed).
- If a processor receives a probe message with its own id, it becomes the leader.
- If a processor receives a reply message not destined for itself, it forwards it.
- If a processor receives both reply messages destined for itself, it proceeds to the next phase.
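The rules above (essentially the Hirschberg–Sinclair scheme) can be sketched as a synchronous Python simulation; the message tuple format and the round-by-round scheduling are assumptions of the sketch, not part of the algorithm:

```python
def elect_leader_hs(ids):
    """Simulate the doubling-neighborhood algorithm: in phase k a processor
    still in the running probes distance 2^k in both directions; probes are
    swallowed by larger ids, turned around as replies at full distance, and
    a probe that returns to its sender makes it the leader.
    Returns (leader_index, total_messages)."""
    n = len(ids)
    messages = 0
    inbox = [[] for _ in range(n)]   # messages arriving this round
    replies = [0] * n                # replies received in the current phase

    def send(frm, direction, msg):
        nonlocal messages
        inbox[(frm + direction) % n].append(msg)
        messages += 1

    for i in range(n):               # phase-0 probes, both directions
        for d in (+1, -1):
            send(i, d, ("probe", ids[i], 0, 1, d))

    while True:
        current, inbox = inbox, [[] for _ in range(n)]
        for i in range(n):
            for kind, j, k, hops, d in current[i]:
                if kind == "probe":
                    if j == ids[i]:
                        return i, messages          # own probe circled the ring
                    if j < ids[i]:
                        continue                    # swallow: j loses
                    if hops < 2 ** k:
                        send(i, d, ("probe", j, k, hops + 1, d))
                    else:
                        send(i, -d, ("reply", j, k, 0, -d))
                else:                               # reply
                    if j != ids[i]:
                        send(i, d, ("reply", j, k, 0, d))
                    else:
                        replies[i] += 1
                        if replies[i] == 2:         # won phase k: next phase
                            replies[i] = 0
                            for nd in (+1, -1):
                                send(i, nd, ("probe", ids[i], k + 1, 1, nd))
```

As in the O(n^2) algorithm, the processor with the maximum id wins, but losers stop initiating probes after their first lost phase, which is what brings the message count down to O(n log n).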


O(n log n) Messages Algorithm (cont'd)

[Figure: processor p_i sends probes (P) in both directions; each probe is turned back as a reply (R). P = probe, R = reply.]

Correctness: similar to the previous algorithm.

Message complexity: each message belongs to a phase and is initiated by a particular processor. The probe distance in phase i is 2^i, so the number of messages initiated by a particular processor in phase i is at most 4 * 2^i (probes and replies in both directions).


O(n log n) Messages Algorithm (cont'd)

How many processors initiate probes in phase i?
- For i = 0, all n of them do.
- For i > 0, every processor that is a winner in phase i - 1 does (i.e., has the largest id in its 2^(i-1)-neighborhood).

The maximum number of phase i - 1 winners occurs when they are packed as densely as possible, with 2^(i-1) processors between consecutive winners:

[Figure: phase i - 1 winners spaced around the ring, 2^(i-1) processors between consecutive winners.]

So the total number of phase i - 1 winners is at most n / (2^(i-1) + 1).

How many phases are there? Phases continue until there is only one winner, so log n phases suffice.


O(n log n) Messages Algorithm (cont'd)

Total number of messages is at most

  n + 4n + sum_{i=1}^{log n} 4 * 2^i * (n / (2^(i-1) + 1))
    <= 5n + 4n * sum_{i=1}^{log n} 2^i / (2^(i-1) + 1)
    <= 5n + 8n log n
    = O(n log n)

(4n counts the phase-0 probes and replies, the leading n counts the winner's probe circling the ring in the final phase, and each term of the sum is at most 8n).
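The message bound can be sanity-checked numerically. The sketch below (an illustration, not a proof; the function name is ours) evaluates n + 4n + sum over phases of 4 * 2^i * ceil(n / (2^(i-1) + 1)) for several power-of-2 ring sizes and confirms it never exceeds 5n + 8n log n:

```python
import math

def hs_message_upper(n):
    """Evaluate n + 4n + sum_{i=1}^{log n} 4 * 2^i * ceil(n / (2^(i-1) + 1))
    for n a power of 2: the per-phase message count derived above."""
    e = n.bit_length() - 1          # log n
    return n + 4 * n + sum(
        4 * 2 ** i * math.ceil(n / (2 ** (i - 1) + 1)) for i in range(1, e + 1)
    )

# the closed form 5n + 8n log n dominates the exact sum for n = 8 .. 1024
for k in range(3, 11):
    n = 2 ** k
    assert hs_message_upper(n) <= 5 * n + 8 * n * k
```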


Asynchronous Lower Bound on Messages

Consider any leader election algorithm A that
1. works in an asynchronous ring
2. is uniform
3. elects the maximum id
4. guarantees everyone learns the id of the winner

We will show the message complexity is Ω(n log n).

Condition 1 is necessary for the lower bound to hold. Condition 2 is necessary for this particular proof to work. Conditions 3 and 4 are made without loss of generality: any algorithm that does not satisfy these two conditions can be converted into one that does with O(n) additional messages.


Asynchronous Lower Bound on Messages (cont'd)

Theorem 3.5: For every n that is a power of 2 and every set S of n ids, there is a ring using those ids on which algorithm A has an open schedule in which at least M(n) messages are sent, where

  M(2) = 1 and
  M(n) = 2 M(n/2) + (1/2)(n/2 - 1), for n > 2.

A schedule is open if there is an edge over which no message is delivered. (An open schedule is not admissible, but it is a prefix of an admissible schedule.)

Proof: By induction on n.

Basis: n = 2. [Figure: a two-processor ring, p0 with id x and p1 with id y.] Suppose x > y. At some point p0 must send a message to p1 so that p1 can learn x. Truncate immediately after the sending of that first message to get the desired schedule.


Asynchronous Lower Bound on Messages (cont'd)

Induction: n >= 4. Split S into two halves, S1 and S2. By the inductive hypothesis, there are rings:
- R1, using S1, with an open schedule σ1 in which at least M(n/2) messages are sent and e1 = (p1, q1) is the open edge
- R2, using S2, with an open schedule σ2 in which at least M(n/2) messages are sent and e2 = (p2, q2) is the open edge

Paste R1 and R2 together to make a big ring R, replacing e1 and e2 with connecting edges e_p (between p1 and p2) and e_q (between q1 and q2):

[Figure: R1 and R2 joined into one ring R via the edges e_p and e_q.]


Asynchronous Lower Bound on Messages (cont'd)

To build an execution of R with M(n) messages, first execute σ1: processors on the left cannot tell the difference between being in R1 and being in R, so they behave the same and send M(n/2) messages in R. Then execute σ2: similarly, processors on the right send M(n/2) messages in R. Note the crucial dependence on the uniformity assumption!

Case 1: Without unblocking e_p or e_q, there is an extension of σ1 σ2 on R in which an additional (1/2)(n/2 - 1) messages are sent. This is the desired schedule.

Case 2: Without unblocking e_p or e_q, every extension of σ1 σ2 on R leads to quiescence: no processor will send another message unless it receives one, and no messages are in transit except on e_p and e_q. Let σ3 be the schedule extending σ1 σ2 that causes the system to become quiescent.


Asynchronous Lower Bound on Messages (cont'd)

Let σ4'' be a schedule extending σ1 σ2 σ3 after which the algorithm has terminated.

Claim: At least n/2 messages are sent in σ4''.

Reason: Each of the n/2 processors in the half of R that does not contain the leader must receive a message to learn the leader's id, and until σ4'' there has been no communication between the two halves of R.

Note the heavy reliance here on the assumptions that the maximum id is elected and that all processors learn the leader's id!

However, σ1 σ2 σ3 σ4'' is not our desired schedule, since it might not be open.


Asynchronous Lower Bound on Messages (cont'd)

As the messages delayed on e_p and e_q are delivered in σ4'', processors wake up from the quiescent state and send more messages. The sets of awakened processors expand outward around e_p and e_q; call these sets P (around e_p) and Q (around e_q).

[Figure: ring R with the awakened set P growing around e_p and the awakened set Q growing around e_q.]

Let σ4' be the prefix of σ4'' in which n/2 - 1 messages have been sent.

P and Q cannot meet in σ4', since fewer than n/2 messages are sent in σ4'.

WLOG, suppose the majority of the messages sent in σ4' are sent by processors in P.


Asynchronous Lower Bound on Messages (cont'd)

Let σ4 be the sequence of events obtained from σ4' by keeping only those events involving processors in P. This schedule never delivers any messages over e_q!

Claim: In σ1 σ2 σ3 σ4, processors in P behave the same as they do in σ1 σ2 σ3 σ4'.

Reason: Since P and Q are disjoint, there is no communication between them, and thus processors in P cannot tell whether or not processors in Q are active. Note the heavy reliance on the asynchrony assumption here!

Since the processors in P do the same thing, they still send at least (1/2)(n/2 - 1) messages in σ4.

Thus σ1 σ2 σ3 σ4 is the desired schedule: at least 2 M(n/2) + (1/2)(n/2 - 1) messages are sent, and e_q is open.


Leader Election in Synchronous Rings

First, a simple algorithm for the synchronous model. Group the rounds into phases, so that each phase contains n rounds. In phase i, the processor with id i, if there is one, sends a message around the ring and is elected.

Example: n = 4 and 7 is the smallest id. In phases 0 through 6 (rounds 1 through 28), no message is ever sent. At the beginning of phase 7 (round 29), the processor with id 7 sends a message, which is forwarded around the ring.

Note the reliance on synchrony and knowledge of n!

Correctness: convince yourself.
Message complexity: O(n). Note that this is optimal.
Time complexity: O(n * m), where m is the smallest id in the ring. Not bounded by any function of n.


Another Synchronous LE Algorithm

This algorithm works in a slightly weaker model: processors might not all start in the same round (a processor wakes up either spontaneously or when it first gets a message), and the algorithm is uniform (does not rely on knowing n).

Idea:
- A processor that wakes up spontaneously is active; it sends its id in a fast message, which travels 1 edge per round.
- A processor that wakes up upon receiving a message is a relay; it is never in the competition.
- A fast message becomes slow when it reaches an active processor; a slow message travels 1 edge per 2^m rounds, where m is the id the message carries.
- Processors (active or relay) only forward a message whose id is smaller than any id they have seen so far (ignoring the ids of relay processors).
- If a processor gets its own id back, it is the leader.


Analysis of Synchronous LE Algorithm

Correctness: convince yourself that the active processor with the smallest id is elected.

Message complexity: the winner's message is the fastest. While it traverses the ring, other messages are slower, so they are overtaken and stopped before too many messages are sent. More carefully, divide messages into three kinds:
1. fast messages
2. slow messages sent while the leader's message is fast
3. slow messages sent while the leader's message is slow


Analysis of Synchronous LE Algorithm (cont'd)

Number of type 1 messages (fast): show that no processor forwards more than one fast message.

[Figure: processors p_k, p_j, p_i in order around the ring.]

Suppose p_i forwards both p_j's fast message and p_k's fast message. Consider the moment p_k's fast message arrives at p_j:
1. either p_j has already sent its own fast message, in which case p_k's message becomes slow at p_j (and is not fast at p_i), or
2. p_j has not yet sent its fast message, in which case it never will (it wakes up as a relay), so p_i never forwards a fast message from p_j.

Either way we have a contradiction. So the number of type 1 messages is at most n.


Analysis of Synchronous LE Algorithm (cont'd)

Number of type 2 messages (slow while the leader's message is fast): the leader's message is fast for at most n rounds. A slow message carrying id i is forwarded at most n / 2^i times in n rounds. The worst case (largest number of messages) is when the ids are as small as possible, 0 to n - 1. So the number of type 2 messages is at most

  sum_{i=1}^{n-1} n / 2^i <= n.

Number of type 3 messages (slow while the leader's message is slow): once the leader's message x becomes slow, it takes at most n * 2^x rounds to return to the leader. No messages are sent once the leader's message has returned to the leader. A slow message carrying id i is forwarded at most n * 2^x / 2^i times in n * 2^x rounds. The worst case is when the ids are 0 to n - 1 and x = 0. So the number of type 3 messages is at most

  sum_{i=0}^{n-1} n * 2^0 / 2^i <= 2n.


Time Complexity of Synchronous LE Algorithms

Time complexity: O(n * 2^x), where x is the minimum id. Even worse than the previous algorithm.

Both of these algorithms have two potentially undesirable properties:
- they rely on the numeric values of the ids to count rounds
- their running time bears no relationship to n, but depends on the minimum id

The next result shows that to obtain linear message complexity, an algorithm must rely on the numeric values of the ids. (The book also shows that to obtain linear message complexity, an algorithm must have time complexity that depends on the values of the ids.)


Comparison-Based LE Algorithms

Definition: An LE algorithm is comparison-based if, in any two order-equivalent rings R1 and R2, matching processors p_i in R1 and p_j in R2 have similar behaviors in exec(R1) and exec(R2).

Rings R1 = (x1, x2, ..., xn) and R2 = (y1, y2, ..., yn) are order-equivalent if x_i < x_j iff y_i < y_j.

[Figure: two order-equivalent 5-processor rings R1 and R2; the ids differ but appear in the same relative order around the ring.]

Processors p_i in R1 and p_j in R2 are matching if they are the same distance from the processor with the minimum id. Example: p0 in R1 and p1 in R2.

Behaviors are similar if, in every round:
- one processor sends a message to the left (resp. right) iff the other one does
- one processor is elected iff the other one is
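The order-equivalence relation is easy to state in code; this small Python predicate (the function name is ours) checks it directly from the definition:

```python
def order_equivalent(r1, r2):
    """Rings (x1,...,xn) and (y1,...,yn) are order-equivalent
    iff x_i < x_j exactly when y_i < y_j, for all positions i, j."""
    n = len(r1)
    return n == len(r2) and all(
        (r1[i] < r1[j]) == (r2[i] < r2[j]) for i in range(n) for j in range(n)
    )
```

For example, the rings 3, 37, 19, 4, 25 and 1, 50, 20, 2, 30 are order-equivalent, since the ids occupy the same relative ranks position by position.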


Synchronous Message Lower Bound

Theorem 3.18: For every n >= 8 that is a power of 2, there is a ring of size n on which any synchronous comparison-based algorithm sends Ω(n log n) messages.

Proof: Construct, for each n, a highly symmetric ring S_n on which every comparison-based algorithm is expensive in terms of messages. Symmetry means many processors have order-equivalent neighborhoods and thus do the same thing (in particular, send messages).

Technical complication: we have to show that two processors with order-equivalent neighborhoods on the same ring behave similarly. This does not immediately follow from the definition of comparison-based, which refers to matching processors on different rings.


Order-Equivalent Means Similar Behavior

Lemma 3.17: If R is spaced (at least n unused ids between any two used ids) and p_i and p_j have order-equivalent k-neighborhoods, then p_i and p_j have similar behaviors through the k-th active round.

A round is active if at least one processor sends a message in it. A processor in a synchronous algorithm can potentially learn something even in an inactive round, from the passage of time; but in a comparison-based algorithm, it cannot learn anything about the order pattern of its ring in an inactive round. Note that the k-th active round might be much, much later than the k-th round.


Order-Equivalent Means Similar Behavior (cont'd)

Proof Sketch for Lemma 3.17:

[Figure: ring R containing p_i and p_j, and a ring R' in which p_j's neighborhood is replaced by a copy of p_i's neighborhood N_i.]

Construct R' such that:
- p_j's k-neighborhood in R' equals p_i's k-neighborhood in R
- the ids in R' are unique
- R' and R are order-equivalent
- p_j in R' is matching to p_j in R

(This construction is possible thanks to the spaced assumption; see the text.)

Then:
- p_i in exec(R) behaves the same as p_j in exec(R') through the k-th active round, since they have identical k-neighborhoods;
- p_j in exec(R') behaves similarly to p_j in exec(R) through the k-th active round, since they are matching processors in order-equivalent rings.


Highly Symmetric Ring S_n

p_i's id is (n + 1) * rev(i), where rev(i) is the integer whose binary representation, using log n bits, is the reverse of i's binary representation.

Example (S_4):
- p0: 00 -> 00 -> 0 * 5 = 0
- p1: 01 -> 10 -> 2 * 5 = 10
- p2: 10 -> 01 -> 1 * 5 = 5
- p3: 11 -> 11 -> 3 * 5 = 15

Lemma 3.19: For all k < n/8, for every k-neighborhood N of S_n, there are at least n / (2(2k + 1)) k-neighborhoods of S_n order-equivalent to N (including N itself).

Proof Sketch for Lemma 3.19: N is a sequence of 2k + 1 ids. Let j be the smallest power of 2 that is larger than 2k + 1. Break S_n into n/j segments of length j, with one segment encompassing N.

[Figure: S_n divided into segments of length j; N, of length 2k + 1, lies within one segment.]

Claim: Each segment is order-equivalent to N. Since j < 2(2k + 1), the number of segments is more than n / (2(2k + 1)).
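The construction is easy to reproduce; a short Python sketch (the function name is ours) computes the ids of S_n and reproduces the S_4 example above:

```python
def sym_ring_ids(n):
    """ids of the highly symmetric ring S_n (n a power of 2):
    p_i gets (n + 1) * rev(i), where rev reverses i's (log n)-bit string."""
    bits = n.bit_length() - 1          # log n

    def rev(i):
        # zero-pad i to `bits` binary digits, reverse, and reinterpret
        return int(format(i, "0{}b".format(bits))[::-1], 2)

    return [(n + 1) * rev(i) for i in range(n)]
```

For n = 4 this yields [0, 10, 5, 15], matching the figure. Since rev is a bijection, the ids sorted are exactly the multiples 0, n + 1, 2(n + 1), ..., so consecutive used ids leave n unused ids between them and S_n satisfies the spacing hypothesis of Lemma 3.17.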


Synchronous Message Lower Bound (cont'd)

Lemma 3.20: The number of active rounds in exec(S_n) is at least n/8.

Proof: Suppose T, the number of active rounds, is less than n/8. Let p_i be the elected leader.

By Lemma 3.19, there are at least n / (2(2T + 1)) T-neighborhoods order-equivalent to p_i's.

Since n >= 8 and T < n/8, algebra shows that n / (2(2T + 1)) > 1. Thus there exists p_j != p_i whose T-neighborhood is order-equivalent to p_i's.

By Lemma 3.17, p_j is also elected, a contradiction.


Synchronous Message Lower Bound (cont'd)

Lemma 3.21: At least n / (2(2k + 1)) messages are sent in the k-th active round of exec(S_n), for 1 <= k <= n/8.

Proof: Since the round is active, at least one processor, say p_i, sends a message.

By Lemma 3.19, there are at least n / (2(2k + 1)) processors whose k-neighborhood is order-equivalent to p_i's.

By Lemma 3.17, they each send a message in the k-th active round.

To finish the proof of Theorem 3.18: the number of messages sent in exec(S_n) is at least

  sum_{k=1}^{n/8} n / (2(2k + 1)) >= (n/6) * sum_{k=1}^{n/8} 1/k = Ω(n log n).


Randomized Leader Election

Leader election is impossible in anonymous rings, since there is no way to break symmetry: no deterministic algorithm works in every admissible execution. Unique ids are one way to break symmetry that works in every admissible execution. Another way to break symmetry, in most but not all situations, is randomization. A randomized algorithm provides each processor with an additional input to its state transition function: a random number.

Weakened definition of the leader election problem:
- At most one leader is elected in every state of every admissible execution. (No change.)
- At least one leader is elected with high probability. (Weaker than before.)

What does "with high probability" mean?


Randomized Leader Election Algorithm

Assume a synchronous system.

Initially: set id to 1 with probability 1 - 1/n and to 2 with probability 1/n. Send id to the left.

When message M is received:
  if M contains n ids then
    if your id is the unique maximum in M then elected
    else not-elected
  else append your id to M and send M to the left

Observations about this algorithm:
- It uses O(n^2) messages.
- There is never more than one leader.
- Sometimes there is no leader.

How often is there no leader?
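A quick Monte-Carlo sketch of one phase (the function and parameter names are ours) shows that a leader emerges exactly when the maximum drawn value is unique:

```python
import random

def one_phase(n, rng):
    """One phase of the randomized algorithm: each processor draws 2 with
    probability 1/n (else 1); a processor is elected iff its value is the
    unique maximum among all n values. Returns the leader's index or None."""
    vals = [2 if rng.random() < 1 / n else 1 for _ in range(n)]
    m = max(vals)
    return vals.index(m) if vals.count(m) == 1 else None
```

Over many trials, the fraction of phases that produce a leader approaches (1 - 1/n)^(n-1), the probability computed on the next slide.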


Random Choices and Probabilities

Since the system is synchronous, to uniquely determine an admissible execution of the algorithm, all that is needed is to fix the random choices obtained initially.

Call the collection of random choices R = <r_0, r_1, ..., r_{n-1}>, where each r_i = 1 or 2, and denote the resulting execution by exec(R).

Definition: For any predicate P on executions, Pr[P] is the probability of the set {R : exec(R) satisfies P}. In words, Pr[P] is the probability that the random choices result in an execution that satisfies P.


Probability of Electing a Leader

For the algorithm given above, let P be the predicate "there is at least one leader". Then

  Pr[P] = Pr[R contains exactly one 2]
        = n * (1/n) * (1 - 1/n)^(n-1)
        = (1 - 1/n)^(n-1)
        >= 1/e ≈ 0.37

Thus, for every n, the probability of electing a leader is at least 1/e ≈ 0.37.


Improving the Probability of Electing a Leader

If the processors notice there is no leader, they can try again: each iteration of the basic algorithm is a phase, and they keep trying until success. The random choices that define an execution now consist of, for each processor, an infinite sequence of 1s and 2s.

In some situations the algorithm will not terminate, for instance if every random number obtained is 1. How likely is that?

- Probability of terminating in a particular phase: (1 - 1/n)^(n-1).
- Probability of not terminating in a particular phase: 1 - (1 - 1/n)^(n-1).
- Probability of not terminating within k phases: (1 - (1 - 1/n)^(n-1))^k, since phases are independent.

The last expression goes to 0 as k increases.


Expected Number of Phases

Definition: The expected value of a random variable T, denoted E[T], is sum_k k * Pr[T = k].

Let T be the number of phases until termination. Then

  Pr[T = k] = Pr[(first k - 1 phases fail) and (k-th phase succeeds)]
            = (1 - (1 - 1/n)^(n-1))^(k-1) * (1 - 1/n)^(n-1)
            = (1 - p)^(k-1) * p,  where p = (1 - 1/n)^(n-1).

This is a geometric random variable with expectation 1/p <= e. Thus the expected number of phases until termination is less than 3.