Assignment 1 Solution
Stochastic Simulation
Prof. Peter W. Glynn
March 8, 2009
Problem 1 Consider the M/M/1 number-in-system process X = (X(t) : t ≥ 0). Suppose we wish to compute

    u(t) = P(X(t) = j),

given that the system is started at t = 0 empty.
a.) Write down the specific backwards equations for computing this probability.
b.) Suppose that we now decide to change the service time distribution so that it is uniformly distributed with the same mean. Write down the corresponding integro-differential equations for computing u(t).
Solution:
a.) Let Q be the transition rate matrix for X(t), the M/M/1 queue number-in-system process. Suppose λ and μ are the arrival and service rates, respectively. Then,

    Q(0, 0) = −λ,    Q(0, 1) = λ,
    Q(i, i − 1) = μ,    Q(i, i + 1) = λ,    Q(i, i) = −(λ + μ),    i ≥ 1.

Write u(t; n) = P_n(X(t) = j), so that u(t) = u(t; 0). Conditioning on the first transition, for δ > 0,

    P_n(X(t + δ) = j) − P_n(X(t) = j) = (P_n(X(δ) = n) − 1) P_n(X(t) = j) + Σ_{k≥0, k≠n} P_n(X(δ) = k) P_k(X(t) = j),

i.e.

    u(t + δ; n) − u(t; n) = (P_n(X(δ) = n) − 1) u(t; n) + Σ_{k≥0, k≠n} P_n(X(δ) = k) u(t; k).    (1)

Dividing (1) by δ and letting δ ↓ 0 yields the backward equations

    u′(t; 0) = −λ u(t; 0) + λ u(t; 1),
    u′(t; n) = μ u(t; n − 1) − (λ + μ) u(t; n) + λ u(t; n + 1),    n ≥ 1,

with initial condition u(0; n) = I(n = j).
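As an illustration of part a.), the backward equations can be integrated numerically once the state space is truncated. In the Python sketch below, the rates λ = 1, μ = 2, the target state j, the truncation level N, and the forward-Euler stepping are all illustrative choices, not part of the assignment; the output approximates u(T; 0) = P(X(T) = j | X(0) = 0).

```python
import numpy as np

# Integrate the M/M/1 backward equations by forward Euler on the
# truncated state space {0, 1, ..., N}.
lam, mu = 1.0, 2.0    # illustrative arrival and service rates
j, N = 3, 200         # target state and truncation level
T, dt = 5.0, 1e-3     # time horizon and Euler step size

u = (np.arange(N + 1) == j).astype(float)  # u(0; n) = I(n = j)
for _ in range(int(T / dt)):
    du = np.empty_like(u)
    du[0] = -lam * u[0] + lam * u[1]
    du[1:N] = mu * u[0:N-1] - (lam + mu) * u[1:N] + lam * u[2:N+1]
    du[N] = mu * u[N-1] - (lam + mu) * u[N]  # crude cutoff at the truncation level
    u += dt * du

print("P(X(T) = j | X(0) = 0) ~", u[0])     # u(T; 0)
```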
b.) Note that X(t) itself does not have the Markov property, because the service time is no longer exponentially distributed. Let S(t) be the elapsed service time of the customer in service at time t. Then (X(t), S(t)) is indeed Markovian. Put

    f(t, x, n) = ∂/∂x P(X(t) = n, S(t) ≤ x),

i.e. f(t, x, n) is the joint probability density of (X(t), S(t)). Let g(x) be such that

    P(a service is completed in (x, x + δ] | the elapsed service time is x) = g(x)δ + o(δ).

We may interpret g(x) as the service (hazard) rate. Note that if the service time is exponential, then the memoryless property implies g(x) is constant in x. But since the service time is actually uniformly distributed, g depends on x. For small δ > 0,
    f(t + δ, x + δ, n) = Σ_{k=0}^{n−1} f(t, x, n − k) · P(k arrivals in (t, t + δ)) · (1 − g(x)δ) + o(δ),

and only the k = 0 and k = 1 terms contribute at order δ. Writing f(t, 0) := P(X(t) = 0), we similarly have

    f(t + δ, 0) = f(t, 0)(1 − λδ) + δ ∫ f(t, x, 1) g(x) dx + o(δ).

Subtracting, dividing by δ, and letting δ ↓ 0 gives the integro-differential equations

    (∂/∂t + ∂/∂x) f(t, x, 1) = −(λ + g(x)) f(t, x, 1),
    (∂/∂t + ∂/∂x) f(t, x, n) = −(λ + g(x)) f(t, x, n) + λ f(t, x, n − 1),    n ≥ 2,
    (d/dt) f(t, 0) = ∫ f(t, x, 1) g(x) dx − λ f(t, 0).
The system is initially empty, so the boundary conditions are

    f(0, x, n) = 0,    n ≥ 1,
    f(0, 0) = 1,
    f(t, 0, 1) = λ f(t, 0) + ∫ f(t, x, 2) g(x) dx,
    f(t, 0, n) = ∫ f(t, x, n + 1) g(x) dx,    n ≥ 2.

What is left is to find g(x). Suppose the service time is uniformly distributed on [a, b], where a and b are determined such that the mean is 1/μ. Then,

    P(a service is completed in (x, x + δ] | the elapsed service time is x)
        = P(service time is in (x, x + δ]) / P(service time is greater than x)
        = (δ/(b − a)) / ((b − x)/(b − a))
        = δ/(b − x).

Hence, g(x) = (b − x)^{−1} for x ∈ [a, b), and g(x) = 0 for x < a. The integrals appearing in the integro-differential equations and boundary conditions are over [a, t ∧ b).
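The hazard-rate identification above is easy to sanity-check by simulation. In the Python sketch below, the values of a, b, x, and δ are illustrative; the empirical conditional completion rate should be close to g(x) = 1/(b − x).

```python
import numpy as np

# Check that P(service completes in (x, x + delta] | elapsed time x)
# is approximately delta / (b - x) for Uniform[a, b] service times.
rng = np.random.default_rng(0)
a, b = 0.5, 1.5          # Uniform[a, b] service times (mean 1.0)
x, delta = 1.0, 1e-3     # elapsed service time and small increment

s = rng.uniform(a, b, size=2_000_000)
running = s > x                            # services still in progress at x
frac = np.mean(s[running] <= x + delta)    # completed in (x, x + delta]
print("empirical rate:", frac / delta, "  g(x):", 1.0 / (b - x))
```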
Problem 2 Let (Z, C) be a jointly distributed random vector in which Z is scalar and C is a (random) column vector. Assume that EZ² < ∞ and E‖C‖² < ∞. We further presume that

    n^{−1} Σ_{i=1}^{n} (Z_i − λᵀ C_i) → EZ a.s.

as n → ∞, where (Z_1, C_1), . . . , (Z_n, C_n) are n iid copies of (Z, C).
a.) What is the minimal variance choice of λ, assuming that the goal is to compute EZ?
b.) In practice, the variance-minimizing choice λ* must be estimated from the sample data (Z_1, C_1), . . . , (Z_n, C_n). Let λ_n be such an estimator. Prove that

    n^{−1/2} Σ_{i=1}^{n} (Z_i − λ_nᵀ C_i − EZ) ⇒ σ(λ*) N(0, 1)

as n → ∞, where σ²(λ*) is the minimal variance. (In other words, at the level of the CLT approximation, there is no asymptotic loss associated with having to estimate λ*.)
Solution:
a.) Observe that

    Var(n^{−1} Σ_{i=1}^{n} (Z_i − λᵀ C_i)) = n^{−1} Var(Z − λᵀ C),

and

    Var(Z − λᵀ C) = Var(Z) − 2λᵀ E(C Z) + λᵀ E(C Cᵀ) λ.    (1)

Setting the gradient of (1) with respect to λ equal to zero gives the minimal variance choice

    λ* = [E(C Cᵀ)]^{−1} E(C Z).
b.) Let C_i = (C_i^{(1)}, . . . , C_i^{(m)}) for 1 ≤ i ≤ n and

    C̄^{(j)} = n^{−1} Σ_{i=1}^{n} C_i^{(j)},    1 ≤ j ≤ m.

We can use the sample variances and covariances to estimate λ*. In particular, the estimator is λ_n = S_n^{−1} b_n, where S_n is the m × m matrix with (j, k)-th entry

    (n − 1)^{−1} (Σ_{i=1}^{n} C_i^{(j)} C_i^{(k)} − n C̄^{(j)} C̄^{(k)})

and b_n is the m-vector with j-th entry (n − 1)^{−1} (Σ_{i=1}^{n} C_i^{(j)} Z_i − n C̄^{(j)} Z̄_n), where Z̄_n = n^{−1} Σ_{i=1}^{n} Z_i. By the strong law of large numbers, λ_n → λ* a.s. Now write

    n^{−1/2} Σ_{i=1}^{n} (Z_i − λ_nᵀ C_i − EZ) = n^{−1/2} Σ_{i=1}^{n} (Z_i − λ*ᵀ C_i − EZ) − (λ_n − λ*)ᵀ n^{−1/2} Σ_{i=1}^{n} C_i.

The first term converges in distribution to σ(λ*) N(0, 1) by the ordinary CLT. For the second term, n^{−1/2} Σ_{i=1}^{n} C_i ⇒ N(0, Σ) for some covariance matrix Σ, while λ_n − λ* → 0 a.s., so

    (λ_n − λ*)ᵀ n^{−1/2} Σ_{i=1}^{n} C_i ⇒ 0 · N(0, Σ) = 0.

Hence, by the converging-together (Slutsky) lemma,

    n^{−1/2} Σ_{i=1}^{n} (Z_i − λ_nᵀ C_i − EZ) ⇒ σ(λ*) N(0, 1)

as n → ∞.
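To illustrate part b.), the Python sketch below implements λ_n = S_n^{−1} b_n on synthetic data and compares the resulting control-variate estimator with the plain sample mean; the joint model for (Z, C) (with E C = 0) is an arbitrary choice for illustration.

```python
import numpy as np

# Control-variate estimation of EZ with the coefficient estimated
# from the same sample via lambda_n = S_n^{-1} b_n.
rng = np.random.default_rng(1)
n, m = 100_000, 2
C = rng.normal(size=(n, m))                                # controls, E C = 0
Z = 1.0 + C @ np.array([0.8, -0.5]) + rng.normal(size=n)   # EZ = 1

Cbar = C.mean(axis=0)
S_n = (C.T @ C - n * np.outer(Cbar, Cbar)) / (n - 1)   # sample covariance of C
b_n = (C.T @ Z - n * Cbar * Z.mean()) / (n - 1)        # sample covariance of C and Z
lam_n = np.linalg.solve(S_n, b_n)                      # lambda_n = S_n^{-1} b_n

print("plain mean:      ", Z.mean())
print("control variates:", np.mean(Z - C @ lam_n))
```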
Problem 3 Consider the Jackson network example in which the d stations are in tandem (i.e. in series). Suppose that each station is a single-server station with service rate μ_i, 1 ≤ i ≤ d. Provide an algorithm for generating paths of X that has an expected complexity (in terms of flops) that scales linearly in d.
Solution:
a.) Let Q_i(t) be the number of customers at station i at time t. Then Q(t) = (Q_1(t), . . . , Q_d(t)) forms a continuous-time Markov chain on the state space {(n_1, . . . , n_d) : n_i ∈ Z_+, 1 ≤ i ≤ d}. Let λ denote the external arrival rate to station 1. Suppose the current state is (n_1, . . . , n_d); then after an exponentially distributed holding time the system moves to one of the following at most d + 1 possible states:

    (n_1 + 1, n_2, . . . , n_d),
    (n_1 − 1, n_2 + 1, . . . , n_d)    if n_1 > 0,
    . . .
    (n_1, . . . , n_{d−1} − 1, n_d + 1)    if n_{d−1} > 0,
    (n_1, . . . , n_{d−1}, n_d − 1)    if n_d > 0.

The transition rates from (n_1, . . . , n_d) to the above listed (possible) states are λ, μ_1, . . . , μ_d, respectively. Therefore, to simulate Q(T), we can proceed as follows (a code sketch is given after the list):
    i. Initialize t ← 0, e ← (0, . . . , 0).
    ii. Suppose the current state is e = (n_1, . . . , n_d). Generate s, an exponential r.v. with rate λ + Σ_{i=1}^{d} μ_i I(n_i > 0).
    iii. Generate a discrete r.v. K with p.m.f.

        P(K = 0) = λ / (λ + Σ_{i=1}^{d} μ_i I(n_i > 0)),
        P(K = i) = μ_i I(n_i > 0) / (λ + Σ_{i=1}^{d} μ_i I(n_i > 0)),    1 ≤ i ≤ d.

    iv. If t + s > T, stop and report e; otherwise set t ← t + s, update the state (K = 0: n_1 ← n_1 + 1; K = i with 1 ≤ i < d: n_i ← n_i − 1 and n_{i+1} ← n_{i+1} + 1; K = d: n_d ← n_d − 1), and return to step ii.
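A minimal Python sketch of steps i.–iv. follows; the rates λ and μ_i below are illustrative choices.

```python
import numpy as np

# Simulate the tandem Jackson network state Q(T) starting from empty.
def simulate_tandem(lam, mu, T, rng):
    d = len(mu)
    n = np.zeros(d, dtype=int)   # current state (n_1, ..., n_d)
    t = 0.0
    while True:
        active = mu * (n > 0)                # mu_i * I(n_i > 0)
        total = lam + active.sum()           # total transition rate
        s = rng.exponential(1.0 / total)     # holding time
        if t + s > T:
            return n
        t += s
        # K = 0: external arrival; K = i: service completion at station i
        K = rng.choice(d + 1, p=np.concatenate(([lam], active)) / total)
        if K == 0:
            n[0] += 1
        else:
            n[K - 1] -= 1
            if K < d:
                n[K] += 1

rng = np.random.default_rng(2)
print(simulate_tandem(lam=1.0, mu=np.array([2.0, 2.0, 2.0]), T=100.0, rng=rng))
```

As written, step iii. rebuilds the d + 1 transition probabilities at every event; since each transition changes the busy/idle status of at most two stations, the total rate and the discrete distribution for K can instead be maintained incrementally, reducing the work per event.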
Problem 4

Solution: For t_1, . . . , t_n ≥ 0,

    P(R_1² > t_1, R_2² − R_1² > t_2, . . . , R_n² − R_{n−1}² > t_n) = Π_{i=1}^{n} e^{−λπ t_i},

proving that R_1², R_2² − R_1², R_3² − R_2², . . . are iid exponential r.v.s with rate λπ.
To simulate the homogeneous Poisson process on the disc of radius r, we can first simulate the radii R_i on (0, r) and then simulate the arguments θ_i, which are uniformly distributed on [0, 2π); then let (x_{i1}, x_{i2}) = (R_i cos θ_i, R_i sin θ_i). To avoid the calculation of trigonometric functions, we can use the acceptance-rejection method to generate (x_{i1}, x_{i2}) after simulating R_i.
To generate the R_i we can either apply part (a), i.e. simulate an inhomogeneous Poisson process on (0, r) with intensity function λ(t), or apply part (b), i.e. simulate a sequence of exponential r.v.s Z_1, . . . , Z_n with mean (λπ)^{−1} so long as Z_1 + · · · + Z_n < r², and then set R_1² = Z_1, R_2² − R_1² = Z_2, . . . .
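The following Python sketch implements the part (b) recipe; the rate λ and disc radius r are illustrative choices, and the uniform angle is produced by acceptance-rejection on the unit square, so no trigonometric functions are evaluated.

```python
import numpy as np

# Homogeneous Poisson process of rate lam on a disc of radius r:
# squared radii have iid Exp(lam * pi) increments; directions are uniform.
def poisson_disc(lam, r, rng):
    points = []
    r2 = 0.0
    while True:
        r2 += rng.exponential(1.0 / (lam * np.pi))  # R_i^2 - R_{i-1}^2
        if r2 >= r * r:
            return np.array(points)
        R = np.sqrt(r2)
        while True:  # acceptance-rejection for a uniform direction, no trig
            u, v = rng.uniform(-1.0, 1.0, size=2)
            w = u * u + v * v
            if 0.0 < w <= 1.0:
                break
        s = np.sqrt(w)
        points.append((R * u / s, R * v / s))  # (R cos(theta), R sin(theta))

rng = np.random.default_rng(3)
pts = poisson_disc(lam=2.0, r=3.0, rng=rng)
print(len(pts), "points; mean count should be lam*pi*r^2 =", 2.0 * np.pi * 9.0)
```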
Problem 5 Suppose that we wish to compute q_p, where q_p is the root of P(X ≤ q_p) = p, so that q_p is the p-th quantile of X. We assume that X is a continuous r.v. with a strictly positive and continuous density f. We estimate q_p via Q_{⌊pn⌋}, where Q_{⌊pn⌋} is the ⌊pn⌋-th order statistic of an iid sample X_1, . . . , X_n from the distribution of X. Prove rigorously that

    n^{1/2} (Q_{⌊pn⌋} − q_p) ⇒ (√(p(1 − p)) / f(q_p)) N(0, 1)

as n → ∞. (Hint: Reduce the problem to one in which the X_i are sampled from a uniform (0,1) population.)
Solution: Let F(x) = P(X ≤ x) be the CDF of X. Since X has a strictly positive and continuous density f(x), we know F^{−1}(·) exists, is strictly increasing, and is differentiable with (F^{−1})′(u) = 1/f(F^{−1}(u)).
Suppose U_1, . . . , U_n are iid uniform r.v.s on (0, 1) and U_(1), . . . , U_(n) are the corresponding order statistics. Note that F^{−1}(U) =_D X, so F^{−1}(U_(k)) =_D X_(k), where X_(k) is the k-th order statistic of X_1, . . . , X_n. By the mean value theorem there is a (random) ξ_n between U_(⌊pn⌋) and p such that

    n^{1/2}(Q_{⌊pn⌋} − q_p) =_D n^{1/2}(F^{−1}(U_(⌊pn⌋)) − F^{−1}(p)) = n^{1/2}(U_(⌊pn⌋) − p) / f(F^{−1}(ξ_n)).    (2)

Recall the representation U_(k) =_D Σ_{i=1}^{k} Z_i / Σ_{i=1}^{n+1} Z_i, where Z_1, Z_2, . . . are iid standard exponential r.v.s. Then

    U_(⌊pn⌋) − p = [(1 − p) Σ_{i=1}^{⌊pn⌋}(Z_i − 1) − p Σ_{i=⌊pn⌋+1}^{n+1}(Z_i − 1) + (⌊pn⌋ − p(n + 1))] / Σ_{i=1}^{n+1} Z_i,

so that

    n^{1/2}(U_(⌊pn⌋) − p) = { (1 − p) √(⌊pn⌋/n) [⌊pn⌋^{−1/2} Σ_{i=1}^{⌊pn⌋}(Z_i − 1)]
        − p √((n + 1 − ⌊pn⌋)/n) [(n + 1 − ⌊pn⌋)^{−1/2} Σ_{i=⌊pn⌋+1}^{n+1}(Z_i − 1)]
        + n^{−1/2}(⌊pn⌋ − p(n + 1)) } / [n^{−1} Σ_{i=1}^{n+1} Z_i].    (3)
Here, by the CLT,

    ⌊pn⌋^{−1/2} Σ_{i=1}^{⌊pn⌋}(Z_i − 1) ⇒ N_1,    (4)

where N_1 is a standard normal r.v.; also, by the strong law of large numbers,

    n^{−1} Σ_{i=1}^{n+1} Z_i → 1 a.s.,    (5)

while √(⌊pn⌋/n) → √p, √((n + 1 − ⌊pn⌋)/n) → √(1 − p), and n^{−1/2}(⌊pn⌋ − p(n + 1)) → 0 as n → ∞.
For the middle block of (3), note that

    (Z_{⌊pn⌋+1}, . . . , Z_{n+1}) =_D (Z′_1, . . . , Z′_m),    m = n + 1 − ⌊pn⌋,

for all n ≥ 1, where Z′_1, Z′_2, . . . is another sequence of iid standard exponential r.v.s. Hence, applying the CLT to the Z′_i,

    m^{−1/2} Σ_{i=⌊pn⌋+1}^{n+1}(Z_i − 1) =_D m^{−1/2} Σ_{i=1}^{m}(Z′_i − 1) ⇒ N_2,    (6)

where N_2 is a standard normal r.v. Observe that N_1 and N_2 can be taken independent, for {Z_1, . . . , Z_{⌊pn⌋}} and {Z_{⌊pn⌋+1}, . . . , Z_{n+1}} are independent for each n ≥ 1. Combining (4), (5), (6) and (3), we conclude

    n^{1/2}(U_(⌊pn⌋) − p) ⇒ (1 − p)√p N_1 − p√(1 − p) N_2 =_D √(p(1 − p)) N(0, 1),    (7)
applying the Continuous Mapping Theorem and the following lemma:

Lemma 1 Suppose {X_n} and {Y_n} are two independent sequences of r.v.s. If X_n ⇒ X and Y_n ⇒ Y, then X_n + Y_n ⇒ X + Y.

This lemma is easily proved by noting that, for every θ ∈ R,

    lim_n E e^{iθ(X_n + Y_n)} = lim_n E e^{iθ X_n} E e^{iθ Y_n} = E e^{iθ X} E e^{iθ Y} = E e^{iθ(X + Y)}.

(7) also implies U_(⌊pn⌋) − p ⇒ 0, and thus U_(⌊pn⌋) → p in probability. Recall that ξ_n is between U_(⌊pn⌋) and p, so for any ε > 0,

    P(|ξ_n − p| > ε) ≤ P(|U_(⌊pn⌋) − p| > ε) → 0.

Hence ξ_n → p in probability. By the continuity of f(·) and F^{−1}(·),

    f(F^{−1}(ξ_n)) → f(F^{−1}(p)) = f(q_p) in probability.

Combining this with (2) and (7) and applying the converging-together (Slutsky) lemma,

    n^{1/2}(Q_{⌊pn⌋} − q_p) ⇒ (√(p(1 − p)) / f(q_p)) N(0, 1).
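The limit theorem is easy to check by simulation. In the Python sketch below, the Exponential(1) population (for which q_p = −log(1 − p) and f(q_p) = 1 − p) and the values of p, n, and the replication count are all illustrative choices.

```python
import numpy as np

# Compare the empirical sd of sqrt(n) * (Q_{floor(pn)} - q_p) with
# sqrt(p(1-p)) / f(q_p) for an Exponential(1) population.
rng = np.random.default_rng(4)
p, n, reps = 0.9, 4000, 2000
qp = -np.log(1.0 - p)                 # p-th quantile of Exp(1)

scaled = np.empty(reps)
k = int(np.floor(p * n))              # order-statistic index
for i in range(reps):
    x = np.sort(rng.exponential(size=n))
    scaled[i] = np.sqrt(n) * (x[k - 1] - qp)   # k-th order statistic (0-based)

print("empirical sd:", scaled.std())
print("theoretical :", np.sqrt(p * (1.0 - p)) / (1.0 - p))
```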
Problem 6 Suppose that (Y_1, τ_1), . . . , (Y_n, τ_n) is an iid sequence sampled from the population of (Y, τ), where τ ≥ 0 a.s. Assume that there exists c < ∞ such that |Y_i| ≤ c τ_i a.s. If Eτ > 0 and Eτ^p < ∞ for some p ≥ 4, prove that

    E(Ȳ_n / τ̄_n) = EY/Eτ + Σ_{j=1}^{⌊p/2⌋} a_j n^{−j} + o(n^{−⌊p/2⌋})

as n → ∞, where Ȳ_n = n^{−1} Σ_{i=1}^{n} Y_i and τ̄_n = n^{−1} Σ_{i=1}^{n} τ_i.
Solution:
Let g(y, t) = y/t. All the partial derivatives of all orders are zero except

    ∂^k g/∂t^k  and  ∂^k g/∂y ∂t^{k−1}

for k ≥ 1. Consider the Taylor expansion of g(Ȳ_n, τ̄_n) around (EY, Eτ) =: (a, b). The k-th order Taylor term is

    (1/k!) (∂^k g/∂t^k)(a, b) (τ̄_n − b)^k + (1/(k − 1)!) (∂^k g/∂y ∂t^{k−1})(a, b) (Ȳ_n − a)(τ̄_n − b)^{k−1}.

Note that all the partial derivatives appearing in the expansion terms are finite and deterministic, so they do not influence the magnitude of the expansion terms, and we only need to consider the magnitudes of f(k) := E(τ̄_n − b)^k and h(k) := E(Ȳ_n − a)(τ̄_n − b)^{k−1}. Obviously, f(1) = h(1) = 0.
For k = 2,

    f(2) = n^{−2} E(Σ_{i=1}^{n}(τ_i − b))² = n^{−2} E Σ_{i=1}^{n}(τ_i − b)² = n^{−1} E(τ − b)²;
    h(2) = n^{−2} E(Σ_{i=1}^{n}(Y_i − a))(Σ_{i=1}^{n}(τ_i − b)) = n^{−2} E Σ_{i=1}^{n}(Y_i − a)(τ_i − b) = n^{−1} E(Y − a)(τ − b);

both are of order n^{−1}.
For k = 3,

    f(3) = n^{−3} E(Σ_{i=1}^{n}(τ_i − b))³ = n^{−3} E Σ_{i=1}^{n}(τ_i − b)³ = n^{−2} E(τ − b)³;
    h(3) = n^{−3} E(Σ_{i=1}^{n}(Y_i − a))(Σ_{i=1}^{n}(τ_i − b))² = n^{−3} E Σ_{i=1}^{n}(Y_i − a)(τ_i − b)² = n^{−2} E(Y − a)(τ − b)²;

both are of order n^{−2}. (Cross terms in which some index appears exactly once vanish, since the summands are independent with mean zero.)
For k = 4,

    f(4) = n^{−4} E(Σ_{i=1}^{n}(τ_i − b))⁴
         = n^{−4} { E Σ_{i=1}^{n}(τ_i − b)⁴ + 3 E Σ_{1≤i≠j≤n}(τ_i − b)²(τ_j − b)² }
         = n^{−4} { n E(τ − b)⁴ + 3(n² − n)[E(τ − b)²]² }
         = b₁ n^{−2} + b₂ n^{−3}

for constants b₁ and b₂;

    h(4) = n^{−4} E(Σ_{i=1}^{n}(Y_i − a))(Σ_{i=1}^{n}(τ_i − b))³
         = n^{−4} { n E(Y − a)(τ − b)³ + 3(n² − n) E(Y − a)(τ − b) E(τ − b)² }
         = O(n^{−2}).
In general, for k ≤ p, expand E(Σ_{i=1}^{n}(τ_i − b))^k into expectations of products. A product in which some index appears exactly once has zero expectation, so only products whose distinct indices each appear at least twice contribute; a contributing product with m distinct indices requires m ≤ ⌊k/2⌋, and there are O(n^m) such products. The dominant contribution is thus

    n^{−k} Σ_{1≤i₁,...,i_m≤n, distinct} E(τ_{i₁} − b)² · · · (τ_{i_m} − b)² = O(n^{m−k}) = O(n^{−k/2}),    m = k/2,

if k is even, and

    n^{−k} Σ_{1≤i₁,...,i_m≤n, distinct} E(τ_{i₁} − b)² · · · (τ_{i_{m−1}} − b)²(τ_{i_m} − b)³ = O(n^{−(k+1)/2}),    m = (k − 1)/2,

if k is odd, so f(k) = O(n^{−⌈k/2⌉}). A similar analysis applies to h(k). Hence the k-th Taylor term contributes only to the coefficients of n^{−1}, . . . , n^{−⌊p/2⌋}, while the moment bound Eτ^p < ∞ (together with |Y| ≤ cτ, which keeps Ȳ_n/τ̄_n bounded by c) controls the remainder beyond order p. Collecting, for each 1 ≤ j ≤ ⌊p/2⌋, the contributions to the coefficient of n^{−j} yields constants a_j, and we obtain the expansion

    E(Ȳ_n / τ̄_n) = EY/Eτ + Σ_{j=1}^{⌊p/2⌋} a_j n^{−j} + o(n^{−⌊p/2⌋})

as n → ∞.
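The leading n^{−1} behavior of the bias can be checked numerically. In the Python sketch below, the joint law of (Y, τ) (τ uniform on [0.5, 1.5] and Y = τ², so that |Y| ≤ 1.5τ and EY/Eτ = Eτ² = 13/12) is an illustrative choice; n·bias(n) should be roughly constant across n, consistent with an a_1/n leading term.

```python
import numpy as np

# Monte Carlo estimate of the ratio-estimator bias E(Ybar/taubar) - EY/Etau.
rng = np.random.default_rng(5)

def bias(n, reps=100_000):
    tau = rng.uniform(0.5, 1.5, size=(reps, n))   # tau >= 0 a.s., Etau = 1
    Y = tau ** 2                                   # |Y| <= 1.5 * tau a.s.
    ratio = Y.mean(axis=1) / tau.mean(axis=1)
    return ratio.mean() - 13.0 / 12.0              # EY/Etau = Etau^2 = 13/12

for n in (10, 20, 40):
    print(n, n * bias(n))  # roughly constant: the leading a_1/n term
```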