Modeling and Analysis of Stochastic Systems 3rd Kulkarni Solution Manual
CHAPTER 2
DISCRETE-TIME MARKOV CHAINS: TRANSIENT BEHAVIOR

Modeling Exercises
2.1. The state space of {Xn , n ≥ 0} is S = {0, 1, 2, 3, ...}. Suppose Xn = i.
Then the age of the lightbulb in place at time n is i. If this light bulb does not fail at
time n + 1, then Xn+1 = i + 1. If it fails at time n + 1, then a new lightbulb is put
in at time n + 1 with age 0, making Xn+1 = 0. Let Z be the lifetime of a lightbulb.
We have
P(Xn+1 = 0|Xn = i, Xn−1 , ..., X0 ) = P(lightbulb of age i fails at age i + 1)
= P(Z = i + 1|Z > i)
= pi+1 / Σ_{j=i+1}^∞ pj .

Similarly,

P(Xn+1 = i + 1|Xn = i, Xn−1 , ..., X0 ) = P(Z > i + 1|Z > i) = 1 − qi ,

and

qi = pi+1 / Σ_{j=i+1}^∞ pj ,

for i ∈ S.
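As a quick numerical sanity check (a sketch added here, not part of the original solution), the snippet below computes qi = pi+1 / Σ_{j≥i+1} pj for an assumed geometric lifetime pmf pj = (1 − θ)^{j−1} θ; by memorylessness every qi should equal θ.

```python
# Check q_i = p_{i+1} / sum_{j >= i+1} p_j for an assumed geometric lifetime
# pmf p_j = (1 - theta)**(j - 1) * theta, j >= 1 (illustrative values only).
theta = 0.3

def p(j):
    return (1 - theta) ** (j - 1) * theta

def q(i, tail_len=2000):
    # Tail probability P(Z > i), truncated far enough to be numerically exact.
    tail = sum(p(j) for j in range(i + 1, i + 1 + tail_len))
    return p(i + 1) / tail

qs = [q(i) for i in range(10)]   # memorylessness: all equal to theta
```

For a general lifetime pmf the qi vary with i; the geometric distribution is the one case with a constant failure rate.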
2.2. The state space of {Yn , n ≥ 0} is S = {1, 2, 3, ...}. Suppose Yn = i > 1. Then
the remaining life decreases by one at time n + 1. Thus Yn+1 = i − 1. If Yn = 1, a
new light bulb is put in place at time n + 1, thus Yn+1 is the lifetime of the new light
bulb. Let Z be the lifetime of a light bulb. We have
P(Yn+1 = i − 1|Yn = i, Yn−1 , ..., Y0 ) = 1, i ≥ 2,
and
P(Yn+1 = k|Yn = 1, Yn−1 , ..., Y0 ) = P(Z = k) = pk , k ≥ 1.
2.3. Initially the urn has w + b balls. At each stage the number of balls in the urn
increases by k − 1. Hence after n stages, the urn has w + b + n(k − 1) balls. Xn of
them are black, and the remaining are white. Hence the probability of getting a black
ball on the (n + 1)st draw is

Xn / (w + b + n(k − 1)).
If the n + 1st draw is black, Xn+1 = Xn + k − 1, and if it is white, Xn+1 = Xn .
Hence
P(Xn+1 = i|Xn = i) = 1 − i / (w + b + n(k − 1)),

and

P(Xn+1 = i + k − 1|Xn = i) = i / (w + b + n(k − 1)).
Thus {Xn , n ≥ 0} is a DTMC, but it is not time homogeneous.
2.4. {Xn , n ≥ 0} is a DTMC with state space {0 = dead, 1 = alive} because the
movements of the cat and the mouse are independent of the past while the mouse is
alive. Once the mouse is dead, it stays dead. If the mouse is still alive at time n, he
dies at time n + 1 if both the cat and mouse choose the same node to visit at time
n + 1. There are N − 2 ways for this to happen. In total there are (N − 1)2 possible
ways for the cat and the mouse to choose the new nodes. Hence
P(Xn+1 = 0|Xn = 1) = (N − 2)/(N − 1)².

Hence the transition probability matrix (rows and columns in state order 0, 1) is given by

      [ 1                  0
P =     (N − 2)/(N − 1)²   1 − (N − 2)/(N − 1)² ].
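The count of N − 2 capture moves can be verified by brute force (an added check; N = 5 and the node labels are assumed example values):

```python
from itertools import product

# Enumerate all (cat, mouse) moves on the complete graph K_N: each animal
# moves to one of the N - 1 nodes other than its own current node.
N = 5
cat, mouse = 0, 1                                  # distinct current nodes
cat_moves = [v for v in range(N) if v != cat]
mouse_moves = [v for v in range(N) if v != mouse]

# The mouse dies when both choose the same node; that node can be neither
# the cat's nor the mouse's current node, leaving N - 2 possibilities.
same = sum(1 for c, m in product(cat_moves, mouse_moves) if c == m)
p_capture = same / (N - 1) ** 2
```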
Thus, when a functioning system fails, i components fail simultaneously with probability αi , i ≥ 1. Then {Xn , n ≥ 0} is a DTMC with transition probabilities:
p0,i = αi , 0 ≤ i ≤ K,
pi,i−1 = 1, 1 ≤ i ≤ K.
2.7. Suppose Xn = i. Then, Xn+1 = i + 1 if the first coin shows heads, while
the second shows tails, which will happen with probability p1 (1 − p2 ), independent
of the past. Similarly, Xn+1 = i − 1 if the first coin shows tails and the second coin
shows heads, which will happen with probability p2 (1 −p1 ), independent of the past.
If both coins show heads, or both show tails, Xn+1 = i. Hence, {Xn , n ≥ 0} is a
space homogeneous random walk on S = {..., −2, −1, 0, 1, 2, ...} (see Example 2.5)
with
pi = p1 (1 − p2 ), qi = p2 (1 − p1 ), ri = 1 − pi − qi .
2.8. We define Xn , the state of the weather system on the nth day, as the length
of the current sunny or rainy spell. The state is k, (k = 1, 2, 3, ...), if the weather
is sunny and this is the kth day of the current sunny spell. The state is −k, (k =
1, 2, 3, ...), if the weather is rainy and this is the kth day of the current rainy spell.
Thus the state space is S = {±1, ±2, ±3, ...}.
Now suppose Xn = k, (k = 1, 2, 3, ...). If the sunny spell continues for one
more day, then Xn+1 = k + 1, or else a rainy spell starts, and Xn+1 = −1.
Similarly, suppose Xn = −k. If the rainy spell continues for one more day, then
Xn+1 = −(k + 1), or else a sunny spell starts, and Xn+1 = 1. The Markov property
follows from the fact that the lengths of the sunny and rainy spells are independent.
Hence, for k = 1, 2, 3, ...,
P(Xn+1 = k + 1|Xn = k) = pk ,
P(Xn+1 = −1|Xn = k) = 1 − pk ,
P(Xn+1 = −(k + 1)|Xn = −k) = qk ,
P(Xn+1 = 1|Xn = −k) = 1 − qk .
2.9. Yn is the outcome of the nth toss of a six-sided fair die. Sn = Y1 + · · · + Yn .
Xn = Sn (mod 7). Hence we see that
Xn+1 = Xn + Yn+1 (mod 7).
Since Yn s are iid, the above equation implies that {Xn , n ≥ 0} is a DTMC with
state space S = {0, 1, 2, 3, 4, 5, 6}. Now, for i, j ∈ S, we have
P(Xn+1 = j|Xn = i) = P(Xn + Yn+1 (mod 7) = j|Xn = i)
= P(i + Yn+1 (mod 7) = j)
= { 0    if j = i,
    1/6  if j ≠ i,

since Yn+1 is uniform over {1, 2, ..., 6}, so that i + Yn+1 (mod 7) is uniform over the six values in S \ {i}.
Thus the transition probability matrix is given by
      [  0   1/6  1/6  1/6  1/6  1/6  1/6
        1/6   0   1/6  1/6  1/6  1/6  1/6
        1/6  1/6   0   1/6  1/6  1/6  1/6
P =     1/6  1/6  1/6   0   1/6  1/6  1/6
        1/6  1/6  1/6  1/6   0   1/6  1/6
        1/6  1/6  1/6  1/6  1/6   0   1/6
        1/6  1/6  1/6  1/6  1/6  1/6   0  ].
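A short added check of this matrix: every row sums to one, and squaring it gives 1/6 on the diagonal and 5/36 off it.

```python
import numpy as np

# Transition matrix of X_n = S_n mod 7: from state i, the chain jumps to
# each of the six states j != i with probability 1/6 and never stays put.
P = np.full((7, 7), 1 / 6)
np.fill_diagonal(P, 0.0)

row_sums = P.sum(axis=1)
P2 = P @ P          # two-step transition matrix
```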
The story ends in bar k if the bivariate DTMC gets absorbed in state (k, k), for
k = 1, 2.
P(Xn+1 = i|Xn = i) = pi , 1 ≤ i ≤ k,
P(Xn+1 = i + 1|Xn = i) = 1 − pi , 1 ≤ i ≤ k − 1,
P(Xn+1 = 1|Xn = k) = 1 − pk .
2.14. Let the state space be {0, 1, 2, 12}, where the state is 0 if both components
are working, 1 if component 1 alone is down, 2 if component 2 alone is down, and
12 if components 1 and 2 are down. Let Xn be the state on day n. {Xn , n ≥ 0} is a
DTMC on {0, 1, 2, 12} with transition probability matrix

      [ α0   α1       α2       α12
        r1   1 − r1   0        0
P =     r2   0        1 − r2   0
        0    0        r1       1 − r1 ].
Here we have assumed that if both components fail, we repair component 1 first, and
then component 2.
2.15. Let Xn be the pair that played the nth game. Then X0 = (1, 2). Suppose
Xn = (1, 2). Then the nth game is played between player 1 and 2. With probability
b12 player 1 wins the game, and the next game is played between players 1 and
3, thus making Xn+1 = (1, 3). On the other hand, player 2 wins the game with
probability b21 , and the next game is played between players 2 and 3, thus making
Xn+1 = (2, 3). Since the probabilities of winning are independent of the past, it is
clear that {Xn , n ≥ 0} is a DTMC on state space {(1, 2), (2, 3), (1, 3)}. Using the
arguments as above, we see that the transition probabilities are given by
0 b21 b12
P = b23 0 b32 .
b13 b31 0
2.16. Let Xn be the number of beers at home when Mr. Al Anon goes to the store.
Then {(Xn , Yn ), n ≥ 0} is a DTMC on state space
S = {(0, L), (1, L), (2, L), (3, L), (4, L), (0, H), (1, H), (2, H), (3, H), (4, H)}
with the following transition probability matrix:
0 0 0 0 α 0 0 0 0 1−α
0 0 0 0 α 0 0 0 0 1−α
0 0 0 0 α 0 0 0 0 1−α
0 0 0 0 α 0 0 0 0 1−α
0 0 0 0 α 0 0 0 0 1−α
.
1− β 0 0 0 0 β 0 0 0 0
1− β 0 0 0 0 β 0 0 0 0
0 1− β 0 0 0 0 β 0 0 0
0 0 1− β 0 0 0 0 β 0 0
0 0 0 1− β 0 0 0 0 β 0
2.17. We see that
Xn+1 = max{Xn , Yn+1 }.
Since the Yn ’s are iid, {Xn , n ≥ 0} is a DTMC. The state space is S = {0, 1, · · · , M }.
Now, for 0 ≤ i < j ≤ M ,
pi,j = P(max{Xn , Yn+1 } = j|Xn = i) = P(Yn+1 = j) = αj .

Also,

pi,i = P(max{Xn , Yn+1 } = i|Xn = i) = P(Yn+1 ≤ i) = Σ_{k=0}^{i} αk .
2.19. Let Xn be the number of messages in the inbox at 8:00am on day n. Ms.
Friendly answers Zn = Bin(Xn , p) emails on day n. Hence Xn − Zn = Bin(Xn , 1 −
p) emails are left for the next day. Yn is the number of messages that arrive during
24 hours on day n. Hence at the beginning of the next day there are Xn+1 = Yn +
Bin(Xn , 1 − p) messages in her mailbox. Since {Yn , n ≥ 0} is iid, {Xn , n ≥ 0} is a DTMC.
2.20. Let Xn be the number of bytes in this buffer in slot n, after the input during
the slot and the removal (playing) of any bytes. We assume that the input during the
slot occurs before the removal. Thus
Xn+1 = max{min{Xn + An+1 , B} − 1, 0}.
Thus if Xn = 0 and there is no input, Xn+1 = 0. Similarly, if Xn = B, Xn+1 =
B − 1. The process {Xn , n ≥ 0} is a random walk on {0, ..., B − 1} with the
following transition probabilities:
p0,0 = α0 + α1 , p0,1 = α2 ,
pi,i−1 = α0 , pi,i = α1 , pi,i+1 = α2 , 0 < i < B − 1,
pB−1,B−1 = α1 + α2 , pB−1,B−2 = α0 .
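The boundary cases above are easy to get wrong, so here is an added sketch (with an assumed pmf (α0, α1, α2) and an assumed buffer size B, not values from the exercise) that builds the matrix and confirms every row sums to one:

```python
import numpy as np

# Playout-buffer chain on {0, 1, ..., B-1}; a0, a1, a2 are the assumed
# probabilities that A_{n+1} = 0, 1, 2 bytes arrive in a slot.
B = 5
a0, a1, a2 = 0.2, 0.5, 0.3

P = np.zeros((B, B))
P[0, 0], P[0, 1] = a0 + a1, a2                  # empty buffer: nothing to play
for i in range(1, B - 1):
    P[i, i - 1], P[i, i], P[i, i + 1] = a0, a1, a2
P[B - 1, B - 2], P[B - 1, B - 1] = a0, a1 + a2  # full buffer: excess input lost
```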
2.21. Let Xn be the number of passengers on the bus when it leaves the nth stop.
Let Dn+1 be the number of passengers that alight at the (n + 1)st stop. Since each
person on board the bus gets off with probability p in an independent fashion, Dn+1
is Bin(Xn , p) random variable. Also, Xn − Dn+1 is a Bin(Xn , 1 − p) random
variable. Yn+1 is the number of people that get on the bus at the (n + 1)st bus stop.
Hence
Xn+1 = min{Xn − Dn+1 + Yn+1 , B}.
Since {Yn , n ≥ 0} is a sequence of iid random variables, it follows from the above
recursive relationship, that {Xn , n ≥ 0} is a DTMC. The state space is {0, 1, ..., B}.
For 0 ≤ i ≤ B, and 0 ≤ j < B, we have
pi,j = P(Xn+1 = j|Xn = i)
= P(Xn − Dn+1 + Yn+1 = j|Xn = i)
= P(Yn+1 − Bin(i, p) = j − i)
= Σ_{k=0}^{i} P(Yn+1 = k + j − i|Bin(i, p) = k) P(Bin(i, p) = k)
= Σ_{k=0}^{i} C(i, k) p^k (1 − p)^{i−k} P(Yn+1 = k + j − i)
= Σ_{k=0}^{i} C(i, k) p^k (1 − p)^{i−k} α_{k+j−i} ,

where C(i, k) denotes the binomial coefficient, αm = P(Yn+1 = m), and αm = 0 for m < 0.
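The final expression can be cross-checked by enumerating the joint outcomes of Dn+1 and Yn+1 directly (an added sketch; p, the pmf of Y, and the states i, j are assumed example values, with j < B so the capacity bound does not bind):

```python
from math import comb

p = 0.4                            # assumed alighting probability
alpha = {0: 0.5, 1: 0.3, 2: 0.2}   # assumed pmf of Y_{n+1}
i, j, B = 2, 1, 10                 # example transition with j < B

# The closed form derived above, with alpha_m = 0 for m < 0.
formula = sum(
    comb(i, k) * p ** k * (1 - p) ** (i - k) * alpha.get(k + j - i, 0.0)
    for k in range(i + 1)
)

# Direct enumeration over D_{n+1} = k alighting passengers and Y_{n+1} = y.
direct = sum(
    comb(i, k) * p ** k * (1 - p) ** (i - k) * ay
    for k in range(i + 1)
    for y, ay in alpha.items()
    if i - k + y == j              # X_{n+1} = X_n - D_{n+1} + Y_{n+1}
)
```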
2.22. The state space is {−1, 0, 1, 2, ..., k − 1}. The system is in state −1 at time n if
it is in economy mode after the nth item is produced (and possibly inspected). It is in
state i (0 ≤ i ≤ k − 1) if it is in 100% inspection mode and i consecutive non-defective
items have been found so far. The transition probabilities are
p−1,0 = p/r, p−1,−1 = 1 − p/r,
pi,i+1 = 1 − p, pi,0 = p, 0 ≤ i ≤ k − 2,
pk−1,−1 = 1 − p, pk−1,0 = p.
2.23. Xn is the amount on hand at the beginning of the nth day, and Dn is the
demand during the nth day. Hence the amount on hand at the end of the nth day is
Xn − Dn . If this is s or more, no order is placed, and hence the amount on hand at
the beginning of the (n + 1)st day is Xn − Dn . On the other hand, if Xn − Dn < s,
then the inventory is brought up to S at the beginning of the next day, thus making
Xn+1 = S. Thus

Xn+1 = { Xn − Dn   if Xn − Dn ≥ s,
         S         if Xn − Dn < s.
Since {Dn , n ≥ 0} are iid, {Xn , n ≥ 0} is a DTMC on state space {s, s + 1, ..., S −
1, S}. We compute the transition probabilities next. For s ≤ j ≤ i ≤ S, j ≠ S, we
have
P(Xn+1 = j|Xn = i) = P(Xn − Dn = j|Xn = i)
= P(Dn = i − j) = αi−j .
and for s ≤ i < S, j = S we have

P(Xn+1 = S|Xn = i) = P(Xn − Dn < s|Xn = i) = P(Dn > i − s) = Σ_{k=i−s+1}^∞ αk .
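A sketch of the resulting matrix for assumed values s = 2, S = 5 and demand pmf α = (0.3, 0.4, 0.2, 0.1) (none of these numbers come from the exercise); each row should sum to one:

```python
import numpy as np

s, S = 2, 5
alpha = [0.3, 0.4, 0.2, 0.1]        # assumed pmf of D_n on {0, 1, 2, 3}
states = list(range(s, S + 1))      # state space {s, ..., S}

def a(k):
    # P(D_n = k), zero outside the support (in particular for k < 0)
    return alpha[k] if 0 <= k < len(alpha) else 0.0

P = np.zeros((len(states), len(states)))
for xi, i in enumerate(states):
    for xj, j in enumerate(states):
        if j < S:
            P[xi, xj] = a(i - j)    # no reorder: X_{n+1} = i - D_n = j >= s
    # hit S either by reordering (D_n > i - s) or, when i = S, by zero demand
    P[xi, states.index(S)] = (
        sum(a(k) for k in range(i - s + 1, len(alpha))) + a(i - S)
    )
```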
Finally
The transition probabilities are given by (see solution to Modeling Exercise 2.1)
p(i,1),(i+1,1) = β^1_{i+2} / β^1_{i+1} , i ≥ 0,
p(i,2),(i+1,2) = β^2_{i+2} / β^2_{i+1} , i ≥ 0,
p(i,1),(0,j) = vj α^1_{i+1} / β^1_{i+1} , i ≥ 0,
p(i,2),(0,j) = vj α^2_{i+1} / β^2_{i+1} , i ≥ 0.
2.25. Xn is the number of bugs in the program just before running it for the nth
time. Suppose Xn = k. Then no bug is discovered on the nth run with probability 1 − βk ,
and hence Xn+1 = k. A bug will be discovered on the nth run with probability βk , in
which case Yn additional bugs are introduced (with P(Yn = i) = αi , i = 0, 1, 2)
and Xn+1 = k − 1 + Yn . Hence, given Xn = k,

Xn+1 = { k − 1   with probability βk α0 = qk ,
         k       with probability βk α1 + 1 − βk = rk ,
         k + 1   with probability βk α2 = pk .
Thus {Xn , n ≥ 0} is a DTMC with state space {0, 1, 2, ...} with transition proba-
bility matrix
      [ 1    0    0    0    0   ...
        q1   r1   p1   0    0   ...
P =     0    q2   r2   p2   0   ...
        0    0    q3   r3   p3  ...
        ...                         ].
2.27. {Xn , n ≥ 0} is a DTMC with state space S = {rr, dr, dd}, since gene type
of the n+1st generation only depends on that of the parents in the nth generation. We
are given that X0 = rr. Hence, the parents of the first generation are rr, dd. Hence
X1 is dr with probability 1. If Xn is dr, then the parents of the (n + 1)st generation
are dr and dd. Hence the (n + 1)th generation is dr or dd with probability .5 each.
Once the nth generation is dd it stays dd from then on. Hence the transition probability
matrix is given by
0 1 0
P = 0 .5 .5 .
0 0 1
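As an added numeric illustration, powers of this P show absorption into dd: starting from rr the chain reaches dr in one step and then leaves dr with probability .5 per step, so P(Xn = dd|X0 = rr) = 1 − (.5)^{n−1} for n ≥ 1.

```python
import numpy as np

# Genetics chain on {rr, dr, dd} (rows and columns in that order).
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])

n = 20
Pn = np.linalg.matrix_power(P, n)
p_absorbed = Pn[0, 2]        # P(X_20 = dd | X_0 = rr)
```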
2.28. Using the analysis in 2.27, we see that {Xn , n ≥ 0} is a DTMC with state
space S = {rr, dr, dd} with the following transition probability matrix:
.5 .5 0
P = .25 .5 .25 .
0 .5 .5
2.29. Let Xn be the number of recipients in the nth generation. There are 20
recipients to begin with. Hence X0 = 20. Let Yi,n be the number of letters sent out
by the ith recipient in the nth generation. The {Yi,n : n ≥ 0, i = 1, 2, ..., Xn } are
iid random variables with common pmf given below:
P(Yi,n = 0) = 1 − α; P(Yi,n = 20) = α.
The number of recipients in the (n + 1)st generation is given by

Xn+1 = Σ_{i=1}^{Xn} Yi,n .
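This is a branching process with offspring pgf φ(z) = 1 − α + α z^20, so the probability that the chain letter eventually dies out (starting from one recipient) is the smallest root of q = φ(q). The added sketch below finds it by fixed-point iteration from 0; α = 0.1 is an assumed example value, giving mean offspring 20α = 2 > 1 and hence q < 1.

```python
# Extinction probability of the chain-letter branching process: each
# recipient independently sends 20 letters with probability alpha, else none.
alpha = 0.1        # assumed example value

q = 0.0
for _ in range(10_000):
    q = (1 - alpha) + alpha * q ** 20   # iterate q <- phi(q)

residual = abs(q - ((1 - alpha) + alpha * q ** 20))
```

With X0 = 20 independent lines, the whole process dies out with probability q^20.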
2.30. Let Xn be the number of backlogged packets at the beginning of the nth
slot. Furthermore, let In be the collision indicator defined as follows: In = id if
there are no transmissions in the (n − 1)st slot (idle slot), In = s if there is exactly 1
transmission in the (n − 1)st slot (successful slot), and In = e if there are 2 or more
transmissions in the (n − 1)st slot (error or collision in the slot). We shall model
the state of the system at the beginning of the nth slot by (Xn , In ). Now suppose
Xn = i, In = s. Then, the backlogged packets retry with probability r. Hence, we
get
P(Xn+1 = i − 1, In+1 = s|Xn = i, In = s) = (1 − p)^{N−i} i r (1 − r)^{i−1} ,
P(Xn+1 = i, In+1 = s|Xn = i, In = s) = (N − i) p (1 − p)^{N−i−1} (1 − r)^i ,
P(Xn+1 = i, In+1 = id|Xn = i, In = s) = (1 − p)^{N−i} (1 − r)^i ,
P(Xn+1 = i, In+1 = e|Xn = i, In = s) = (1 − p)^{N−i} (1 − (1 − r)^i − i r (1 − r)^{i−1} ),
P(Xn+1 = i + 1, In+1 = e|Xn = i, In = s) = (N − i) p (1 − p)^{N−i−1} (1 − (1 − r)^i ),
P(Xn+1 = i + j, In+1 = e|Xn = i, In = s) = C(N − i, j) p^j (1 − p)^{N−i−j} , 2 ≤ j ≤ N − i.
Next suppose Xn = i, In = id. Then, the backlogged packets retry with probability
r′ > r. The above equations become:
P(Xn+1 = i − 1, In+1 = s|Xn = i, In = id) = (1 − p)^{N−i} i r′ (1 − r′)^{i−1} ,
P(Xn+1 = i, In+1 = s|Xn = i, In = id) = (N − i) p (1 − p)^{N−i−1} (1 − r′)^i ,
P(Xn+1 = i, In+1 = id|Xn = i, In = id) = (1 − p)^{N−i} (1 − r′)^i ,
P(Xn+1 = i, In+1 = e|Xn = i, In = id) = (1 − p)^{N−i} (1 − (1 − r′)^i − i r′ (1 − r′)^{i−1} ),
P(Xn+1 = i + 1, In+1 = e|Xn = i, In = id) = (N − i) p (1 − p)^{N−i−1} (1 − (1 − r′)^i ),
P(Xn+1 = i + j, In+1 = e|Xn = i, In = id) = C(N − i, j) p^j (1 − p)^{N−i−j} , 2 ≤ j ≤ N − i.
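The six cases out of state (i, s) partition everything that can happen in a slot, so their probabilities must sum to one. A quick added check with assumed values N = 10, p = 0.1, r = 0.3, i = 4:

```python
from math import comb

N, p, r, i = 10, 0.1, 0.3, 4    # assumed example values

probs = [
    (1 - p) ** (N - i) * i * r * (1 - r) ** (i - 1),               # -> (i-1, s)
    (N - i) * p * (1 - p) ** (N - i - 1) * (1 - r) ** i,           # -> (i, s)
    (1 - p) ** (N - i) * (1 - r) ** i,                             # -> (i, id)
    (1 - p) ** (N - i)
        * (1 - (1 - r) ** i - i * r * (1 - r) ** (i - 1)),         # -> (i, e)
    (N - i) * p * (1 - p) ** (N - i - 1) * (1 - (1 - r) ** i),     # -> (i+1, e)
] + [
    comb(N - i, j) * p ** j * (1 - p) ** (N - i - j)               # -> (i+j, e)
    for j in range(2, N - i + 1)
]
total = sum(probs)
```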
2.31. Let Xn be the number of packets ready for transmission at time n. Let Yn be
the number of packets that arrive during time (n, n + 1]. If Xn = 0, no packets are
transmitted during the nth slot and we have
Xn+1 = Yn .
If Xn > 0, exactly one packet is transmitted during the nth time slot. Hence,
Xn+1 = Xn − 1 + Yn .
Since {Yn , n ≥ 0} are iid, we see that {Xn , n ≥ 0} is identical to the DTMC given
in Example 2.16.
pn,n = rn = { 1 − α1                        if n = 0,
              α1 α2 + (1 − α1 )(1 − α2 )   if 0 < n < B1 + B2 ,
              1 − α2                        if n = B1 + B2 .
2.33. Let Xn be the age of the light bulb in place at time n. Using the solution
to Modeling Exercise 2.1, we see that {Xn , n ≥ 0} is a success-runs DTMC on
{0, 1, ..., K − 1} with
qi = pi+1 /bi+1 , pi = 1 − qi , 0 ≤ i ≤ K − 2, qK−1 = 1,

where bi = P(Zn ≥ i) = Σ_{j=i}^∞ pj .
2.34. The same three models of reader behavior in Section 2.3.7 work if we consider
a citation from paper i to paper j as a link from webpage i to webpage j, and the
action of visiting a page is taken to be the same as actually looking up a paper.
Computational Exercises
2.1. Let Xn be the number of white balls in urn A after n experiments. {Xn , n ≥
0} is a DTMC on {0, 1, ..., 10} with the following transition probability matrix:
0 1.00 0 0 0 0 0 0 0 0 0
0.01 0.18 0.81 0 0 0 0 0 0 0 0
0 0.04 0.32 0.64 0 0 0 0 0 0 0
0 0 0.09 0.42 0.49 0 0 0 0 0 0
0 0 0 0.16 0.48 0.36 0 0 0 0 0
P = 0 0 0 0 0.25 0.50 0.25 0 0 0 0 .
0 0 0 0 0 0.36 0.48 0.16 0 0 0
0 0 0 0 0 0 0.49 0.42 0.09 0 0
0 0 0 0 0 0 0 0.64 0.32 0.04 0
0 0 0 0 0 0 0 0 0.81 0.18 0.01
0 0 0 0 0 0 0 0 0 1.00 0
Using the equation given in Example 2.21 we get the following table:
n X0 = 8 X0 = 5 X0 = 3
0 8.0000 5.0000 3.0000
1 7.4000 5.0000 3.4000
2 6.9200 5.0000 3.7200
3 6.5360 5.0000 3.9760
4 6.2288 5.0000 4.1808
5 5.9830 5.0000 4.3446
6 5.7864 5.0000 4.4757
7 5.6291 5.0000 4.5806
8 5.5033 5.0000 4.6645
9 5.4027 5.0000 4.7316
10 5.3221 5.0000 4.7853
11 5.2577 5.0000 4.8282
12 5.2062 5.0000 4.8626
13 5.1649 5.0000 4.8900
14 5.1319 5.0000 4.9120
15 5.1056 5.0000 4.9296
16 5.0844 5.0000 4.9437
17 5.0676 5.0000 4.9550
18 5.0540 5.0000 4.9640
19 5.0432 5.0000 4.9712
20 5.0346 5.0000 4.9769
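The printed matrix is consistent with the swap model p_{i,i−1} = (i/10)², p_{i,i+1} = ((10 − i)/10)², p_{i,i} = 2(i/10)((10 − i)/10) (an inference from the entries shown). The added sketch below rebuilds P from that formula and reproduces the E[Xn] columns of the table:

```python
import numpy as np

M = 10
P = np.zeros((M + 1, M + 1))
for i in range(M + 1):
    if i > 0:
        P[i, i - 1] = (i / M) ** 2            # white leaves A, black enters
    if i < M:
        P[i, i + 1] = ((M - i) / M) ** 2      # black leaves A, white enters
    P[i, i] = 2 * (i / M) * ((M - i) / M)     # like-for-like swap

def mean_after(x0, n):
    # E[X_n] starting from X_0 = x0, via the distribution a_n = a_0 P^n.
    a = np.zeros(M + 1)
    a[x0] = 1.0
    for _ in range(n):
        a = a @ P
    return float(a @ np.arange(M + 1))
```

For instance, mean_after(8, 1) gives 7.4 and mean_after(3, 2) gives 3.72, matching the table; 5 is a fixed point because E[Xn+1] = 1 + 0.8 E[Xn] for this chain.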
2.2. Let P be the transition probability matrix and a the initial distribution given
in the problem.
1. Let a(2) be the pmf of X2 . It is given by Equation 2.31. Substituting for a and P
we get
a(2) = [0.2050 0.0800 0.1300 0.3250 0.2600].
2.
P(X2 = 2, X4 = 5) = P(X4 = 5|X2 = 2)P(X2 = 2)
= P(X2 = 5|X0 = 2) ∗ (.0800)
= [P 2 ]2,5 ∗ (.0800)
= (.0400) ∗ (.0800) = .0032.
3.
P(X7 = 3|X3 = 4) = P(X4 = 3|X0 = 4)
= [P 4 ]4,3
= .0318.
4.

P(X1 ∈ {1, 2, 3}, X2 ∈ {4, 5}) = Σ_{i=1}^{5} P(X1 ∈ {1, 2, 3}, X2 ∈ {4, 5}|X0 = i) P(X0 = i)
= Σ_{i=1}^{5} Σ_{j=1}^{3} Σ_{k=4}^{5} ai P(X1 = j, X2 = k|X0 = i)
= Σ_{i=1}^{5} Σ_{j=1}^{3} Σ_{k=4}^{5} ai pi,j pj,k
= .4450.
2.3. The easiest way is to prove this by induction. Assume a + b ≠ 2. Writing 2 × 2
matrices row by row, with rows separated by semicolons, and using the formula
given in Computational Exercise 3, we see that

P^0 = (1/(2 − a − b)) [1 − b, 1 − a; 1 − b, 1 − a] + ((a + b − 1)^0/(2 − a − b)) [1 − a, a − 1; b − 1, 1 − b] = [1, 0; 0, 1],

and a direct computation gives P^1 = P. Thus the formula is valid for n = 0 and n = 1.
Now suppose it is valid for n = k ≥ 1. Then

P^{k+1} = P^k ∗ P
= { (1/(2 − a − b)) [1 − b, 1 − a; 1 − b, 1 − a] + ((a + b − 1)^k/(2 − a − b)) [1 − a, a − 1; b − 1, 1 − b] } ∗ [a, 1 − a; 1 − b, b].

Since [1 − b, 1 − a; 1 − b, 1 − a] ∗ P = [1 − b, 1 − a; 1 − b, 1 − a] and
[1 − a, a − 1; b − 1, 1 − b] ∗ P = (a + b − 1) [1 − a, a − 1; b − 1, 1 − b],
this reduces to the claimed formula with n = k + 1.
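The induction can also be checked numerically (an added sketch; a = 0.7, b = 0.5 are assumed example values with a + b ≠ 2):

```python
import numpy as np

a, b = 0.7, 0.5
P = np.array([[a, 1 - a],
              [1 - b, b]])

def closed_form(n):
    # P^n = (1/(2-a-b)) * M1 + ((a+b-1)**n / (2-a-b)) * M2
    c = 2 - a - b
    M1 = np.array([[1 - b, 1 - a], [1 - b, 1 - a]]) / c
    M2 = np.array([[1 - a, a - 1], [b - 1, 1 - b]]) * (a + b - 1) ** n / c
    return M1 + M2

ok = all(
    np.allclose(closed_form(n), np.linalg.matrix_power(P, n))
    for n in range(8)
)
```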