
MS&E 323

Stochastic Simulation
Prof. Peter W. Glynn

Assignment 1 Solution
March 8, 2009

Assignment 1 Solution
Problem 1 Consider the M/M/1 number-in-system process $X = (X(t) : t \ge 0)$. Suppose we wish to compute
\[
\alpha = P(X(t) = j),
\]
given that the system is started at $t = 0$ empty.

a.) Write down the specific backwards equations for computing this probability.

b.) Suppose that we now decide to change the service time distribution so that it is uniformly distributed with the same mean. Write down the corresponding integro-differential equations for computing $\alpha$.
Solution:
a.) Let $Q$ be the transition rate matrix for $X(t)$, the M/M/1 queue number-in-system process. Suppose $\lambda$ and $\mu$ are the arrival and service rates, respectively. Then
\[
Q(0,0) = -\lambda, \quad Q(0,1) = \lambda, \quad Q(i,i-1) = \mu, \quad Q(i,i+1) = \lambda, \quad Q(i,i) = -(\lambda+\mu), \quad i \ge 1,
\]
and $Q(i,j) = 0$ for all the other pairs $(i,j)$.

Let $u(t;n) = P_n(X(t) = j) = P(X(t) = j \mid X(0) = n)$; then $\alpha = u(t;0)$. The Chapman-Kolmogorov equation gives
\[
P_n(X(t+\delta) = j) = \sum_{k \ge 0} P_n(X(\delta) = k)\, P_k(X(t) = j),
\]
so
\[
P_n(X(t+\delta) = j) - P_n(X(t) = j) = \big(P_n(X(\delta) = n) - 1\big) P_n(X(t) = j) + \sum_{k \ne n} P_n(X(\delta) = k)\, P_k(X(t) = j),
\]
i.e.
\[
u(t+\delta;n) - u(t;n) = \big(P_n(X(\delta) = n) - 1\big) u(t;n) + \sum_{k \ne n} P_n(X(\delta) = k)\, u(t;k). \tag{1}
\]
Dividing both sides of (1) by $\delta$ and sending $\delta$ to 0 yields
\[
\frac{d}{dt} u(t;n) = Q(n,n)\, u(t;n) + \sum_{k \ne n} Q(n,k)\, u(t;k) = \sum_{k \ge 0} Q(n,k)\, u(t;k).
\]
Therefore, we have the following backwards equations:
\[
\frac{d}{dt} u(t;n) = \mu\, u(t;n-1) - (\lambda+\mu)\, u(t;n) + \lambda\, u(t;n+1), \quad n \ge 1,
\]
\[
\frac{d}{dt} u(t;0) = \lambda\, u(t;1) - \lambda\, u(t;0),
\]
with boundary conditions $u(0;n) = \delta_{nj}$.
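For concreteness, here is a minimal numerical sketch (not part of the original solution) that integrates the backwards equations on a truncated state space. The rates lam and mu, the target state j, the horizon, and the truncation level N_MAX are all illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions, not from the assignment).
lam, mu = 1.0, 2.0   # arrival and service rates
j = 3                # target state: alpha = P(X(t) = j | X(0) = 0)
N_MAX = 200          # truncation level for the infinite ODE system

def backwards_rhs(t, u):
    """d/dt u(t;n) = mu*u(n-1) - (lam+mu)*u(n) + lam*u(n+1), n >= 1."""
    du = np.empty_like(u)
    du[0] = lam * (u[1] - u[0])
    du[1:-1] = mu * u[:-2] - (lam + mu) * u[1:-1] + lam * u[2:]
    du[-1] = mu * u[-2] - (lam + mu) * u[-1]  # crude truncation at N_MAX
    return du

u0 = np.zeros(N_MAX + 1)
u0[j] = 1.0  # boundary condition u(0;n) = delta_{nj}

sol = solve_ivp(backwards_rhs, (0.0, 5.0), u0, rtol=1e-8, atol=1e-10)
print("alpha = P(X(5) = %d | X(0) = 0) ~ %.6f" % (j, sol.y[0, -1]))
```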


b.) Note that $X(t)$ by itself does not have the Markov property, because the service time is no longer exponentially distributed. Let $S(t)$ be the elapsed service time of the customer already in service at time $t$. Then $(X(t), S(t))$ is indeed Markovian. Put
\[
f(t,x,n) = \frac{\partial}{\partial x} P(X(t) = n,\, S(t) \le x),
\]
i.e. $f(t,x,n)$ is the joint probability density of $(X(t), S(t))$. Let $g(x)$ be such that
\[
P(\text{a service is completed in } (x, x+\delta) \mid \text{the elapsed service time is } x) = g(x)\delta + o(\delta).
\]
We may interpret $g(x)$ as the service (hazard) rate. Note that if the service time is exponential, then the memoryless property implies $g(x)$ is constant in $x$. But since the service time is actually uniformly distributed, it depends on $x$. For small $\delta > 0$,
\[
P(X(t+\delta) = n,\, S(t+\delta) = x+\delta) = \sum_{k=0}^{n-1} P(X(t) = n-k,\, S(t) = x)\, P(k \text{ persons arrive in } (t,t+\delta))
\]
\[
\qquad\qquad \times\, P(\text{a service is not completed in } (t,t+\delta) \mid \text{it started at time } t - x),
\]
since a service cannot be completed in $(t,t+\delta)$ given $S(t+\delta) = x+\delta$ (otherwise we would have $S(t+\delta) \le \delta$, which is impossible by the definition of $S(\cdot)$).
Since the arrival process is Poisson, we have
\[
f(t+\delta, x+\delta, n) = \big[f(t,x,n)(1-\lambda\delta) + f(t,x,n-1)\lambda\delta\big](1 - g(x)\delta) + o(\delta), \quad n \ge 2,
\]
\[
f(t+\delta, x+\delta, 1) = f(t,x,1)(1-\lambda\delta)(1 - g(x)\delta) + o(\delta).
\]
Moreover, let $f(t,0) = P(X(t) = 0)$; then
\[
P(X(t+\delta) = 0) = P(X(t) = 0)\, P(\text{no customers arrive in } (t,t+\delta))
\]
\[
\qquad + \int P(X(t) = 1,\, S(t) \in dx)\, P(\text{a service is completed in } (t,t+\delta) \mid \text{it started at time } t - x)\, P(\text{no customers arrive in } (t,t+\delta)) + o(\delta),
\]
so
\[
f(t+\delta, 0) = f(t,0)(1-\lambda\delta) + \int f(t,x,1)\, g(x)\delta\, (1-\lambda\delta)\, dx + o(\delta).
\]
Therefore we have the following integro-differential equations:
\[
\frac{\partial}{\partial t} f(t,x,n) + \frac{\partial}{\partial x} f(t,x,n) = \lambda f(t,x,n-1) - (\lambda + g(x)) f(t,x,n), \quad n \ge 2,
\]
\[
\frac{\partial}{\partial t} f(t,x,1) + \frac{\partial}{\partial x} f(t,x,1) = -(\lambda + g(x)) f(t,x,1),
\]
\[
\frac{d}{dt} f(t,0) = \int f(t,x,1)\, g(x)\, dx - \lambda f(t,0).
\]
The system is initially empty, so the boundary conditions are
\[
f(0,x,n) = 0, \quad n \ge 1, \qquad f(0,0) = 1,
\]
\[
f(t,0,n) = \int f(t,x,n+1)\, g(x)\, dx, \quad n \ge 2, \qquad f(t,0,1) = \int f(t,x,2)\, g(x)\, dx + \lambda f(t,0).
\]
It remains to find $g(x)$. Suppose the service time is uniformly distributed on $[a,b]$, where $a$ and $b$ are chosen so that the mean is $1/\mu$. Then
\[
P(\text{a service is completed in } (x,x+\delta) \mid \text{the elapsed service time is } x)
= \frac{P(\text{service time is in } (x, x+\delta))}{P(\text{service time is greater than } x)}
= \frac{\delta/(b-a)}{(b-x)/(b-a)} = \frac{\delta}{b-x}.
\]
Hence, $g(x) = (b-x)^{-1}$, $x \in [a,b)$. The integrals appearing in the integro-differential equations and boundary conditions are over $[a, t \wedge b)$.
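As a quick sanity check on the hazard-rate formula, the following Monte Carlo sketch (our own illustration, with hypothetical endpoints a and b) estimates the empirical hazard of a uniform service time and compares it to $1/(b-x)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical uniform service-time window [a, b] with mean (a + b)/2.
a, b = 0.5, 1.5
s = rng.uniform(a, b, size=1_000_000)

# Empirical hazard at x: P(service ends in (x, x+dx)) / (dx * P(service > x)).
dx = 0.01
for x in [0.6, 0.9, 1.2]:
    alive = (s > x).mean()
    ends = ((s > x) & (s <= x + dx)).mean()
    print(f"x={x:.1f}: empirical g(x) ~ {ends / (dx * alive):.3f}, "
          f"theory 1/(b-x) = {1 / (b - x):.3f}")
```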

Problem 2 Let $(Z, C)$ be a jointly distributed random vector in which $Z$ is scalar and $C$ is a (random) column vector. Assume that $EZ^2 < \infty$ and $E\|C\|^2 < \infty$. We further presume that $EC = 0$ and that the covariance matrix $\Lambda = E[C C^T]$ is non-singular.

a.) Let $\lambda$ be a column vector and consider the control variate estimator
\[
\frac{1}{n} \sum_{i=1}^{n} (Z_i - \lambda^T C_i),
\]
where $(Z_1, C_1), \ldots, (Z_n, C_n)$ are $n$ iid copies of $(Z, C)$. What is the minimal variance choice of $\lambda$, assuming that the goal is to compute $EZ$?

b.) In practice, the variance-minimizing choice $\lambda^*$ must be estimated from the sample data $(Z_1, C_1), \ldots, (Z_n, C_n)$. Propose an estimator $\lambda_n$ for $\lambda^*$ and carefully prove that
\[
n^{-1/2} \sum_{i=1}^{n} (Z_i - \lambda_n^T C_i - EZ) \Rightarrow \sigma(\lambda^*)\, N(0,1)
\]
as $n \to \infty$, where $\sigma^2(\lambda^*)$ is the minimal variance. (In other words, at the level of the CLT approximation, there is no asymptotic loss associated with having to estimate $\lambda^*$.)
Solution:

a.) Since $(Z_1, C_1), \ldots, (Z_n, C_n)$ are iid copies, we have
\[
\mathrm{Var}\Big(\frac{1}{n} \sum_{i=1}^{n} (Z_i - \lambda^T C_i)\Big) = \frac{1}{n} \mathrm{Var}(Z - \lambda^T C).
\]
We need to find $\lambda$ that minimizes $\mathrm{Var}(Z - \lambda^T C) \triangleq g(\lambda)$. Note that
\[
g(\lambda) = E[(Z - \lambda^T C)^2] - [E(Z - \lambda^T C)]^2 = \mathrm{Var}(Z) - 2\lambda^T E(CZ) + \lambda^T \Lambda \lambda,
\]
using $EC = 0$. Hence $\nabla g(\lambda) = -2 E(CZ) + 2\Lambda\lambda$. It follows that $\lambda^* = \Lambda^{-1} E(CZ)$, since $\Lambda$ is assumed to be non-singular.


b.) Write $C_i = (C_i^{(1)}, \ldots, C_i^{(m)})$ for $1 \le i \le n$ and let
\[
\bar{C}^{(j)} = \frac{1}{n} \sum_{i=1}^{n} C_i^{(j)}, \quad 1 \le j \le m.
\]
We can use the sample variances and covariances to estimate $\lambda^*$. In particular, the estimator is $\lambda_n = S_n^{-1} b_n$, where $S_n$ is the $m \times m$ matrix with $(j,k)$-th entry
\[
\frac{1}{n-1} \Big( \sum_{i=1}^{n} C_i^{(j)} C_i^{(k)} - n \bar{C}^{(j)} \bar{C}^{(k)} \Big)
\]
and $b_n$ is the $m$-vector (column) with $j$-th entry
\[
\frac{1}{n-1} \Big( \sum_{i=1}^{n} C_i^{(j)} Z_i - n \bar{C}^{(j)} \bar{Z} \Big).
\]
Then $\lambda_n \to \lambda^*$ a.s. as $n \to \infty$. Hence, the Continuous Mapping Theorem (CMT) guarantees that
\[
n^{-1/2} \sum_{i=1}^{n} (\lambda_n - \lambda^*)^T C_i \Rightarrow 0 \cdot N(0, \Lambda) = 0.
\]
Again, applying the CMT (together with the ordinary CLT for the iid summands $Z_i - \lambda^{*T} C_i$) gives
\[
n^{-1/2} \sum_{i=1}^{n} (Z_i - \lambda_n^T C_i - EZ) = n^{-1/2} \sum_{i=1}^{n} (Z_i - \lambda^{*T} C_i - EZ) + n^{-1/2} \sum_{i=1}^{n} (\lambda^* - \lambda_n)^T C_i \Rightarrow \sigma(\lambda^*)\, N(0,1)
\]
as $n \to \infty$.
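A minimal sketch of the estimator $\lambda_n = S_n^{-1} b_n$ in action, on a toy problem of our own choosing (Z = exp(U) with the single mean-zero control C = U - 1/2; none of these choices come from the assignment):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy example: estimate EZ where Z = exp(U), U ~ Uniform(0,1), so EZ = e - 1.
n = 100_000
U = rng.uniform(size=n)
Z = np.exp(U)
C = (U - 0.5).reshape(n, 1)      # m = 1 control with EC = 0

Cbar, Zbar = C.mean(axis=0), Z.mean()
S_n = (C.T @ C - n * np.outer(Cbar, Cbar)) / (n - 1)   # sample covariance of C
b_n = (C.T @ Z - n * Cbar * Zbar) / (n - 1)            # sample Cov(C, Z)
lam_n = np.linalg.solve(S_n, b_n)                      # lambda_n = S_n^{-1} b_n

cv_estimate = np.mean(Z - C @ lam_n)
print("plain mean:", Zbar, " control variate:", cv_estimate, " truth:", np.e - 1)
```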
Problem 3 Consider the Jackson network example in which the $d$ stations are in tandem (i.e. in series). Suppose that each station is a single-server station with service rate $\mu_i$, $i \ge 1$, and let $\lambda$ denote the external arrival rate. Provide an algorithm for generating paths of $X$ that has an expected complexity (in terms of flops) that scales linearly in $d$.
Solution: Let $Q_i(t)$ be the number of customers at station $i$ at time $t$. Then $Q(t) = (Q_1(t), \ldots, Q_d(t))$ forms a continuous-time Markov chain on the state space $\{(n_1, \ldots, n_d) : n_i \in \mathbb{Z}_+,\ 1 \le i \le d\}$. Suppose the current state is $(n_1, \ldots, n_d)$; then after an exponentially distributed holding time the system will move to one of the following at most $d+1$ possible states:
\[
\begin{array}{ll}
(n_1 + 1, n_2, \ldots, n_d) & \text{one customer arrives,} \\
(n_1 - 1, n_2 + 1, \ldots, n_d) & \text{one customer enters station 2 from station 1, if } n_1 > 0, \\
\quad\vdots & \\
(n_1, \ldots, n_i - 1, n_{i+1} + 1, \ldots, n_d) & \text{one customer enters station } i+1 \text{ from station } i, \text{ if } n_i > 0, \\
\quad\vdots & \\
(n_1, \ldots, n_{d-1}, n_d - 1) & \text{one customer leaves the system, if } n_d > 0.
\end{array}
\]
The transition rates from $(n_1, \ldots, n_d)$ to the above listed (possible) states are $\lambda, \mu_1, \ldots, \mu_d$, respectively. Therefore, to simulate $Q(T)$, we can


i. Initialize $t \leftarrow 0$, $e \leftarrow (0, \ldots, 0)$.
ii. Suppose the current state is $e = (n_1, \ldots, n_d)$. Generate $s$, an exponential r.v. with rate $\lambda + \sum_{i=1}^{d} \mu_i I(n_i > 0)$.
iii. Generate a discrete r.v. $K$ with p.m.f.
\[
P(K = 0) = \frac{\lambda}{\lambda + \sum_{i=1}^{d} \mu_i I(n_i > 0)}, \qquad
P(K = i) = \frac{\mu_i I(n_i > 0)}{\lambda + \sum_{i=1}^{d} \mu_i I(n_i > 0)}, \quad 1 \le i \le d.
\]
iv. While $t < T$, update $t \leftarrow t + s$; $e \leftarrow e + e_1$ if $K = 0$; $e \leftarrow e - e_i + e_{i+1}$ if $K = i$, $i = 1, \ldots, d-1$; and $e \leftarrow e - e_d$ if $K = d$, where $e_i$ is the $d$-vector with all entries zero except the $i$-th, which is 1.

Since the exponential r.v.s we generate in step (ii) have rates bounded by $M \triangleq \lambda + \sum_{i=1}^{d} \mu_i$, the number of r.v.s we generate is bounded by $N(T)$, where $N(t)$ is a Poisson process with rate $M$. So the expected number of r.v.s we generate is bounded by $EN(T) = MT$. Note that the discrete r.v. in step (iii) has a finite number of support points, bounded by $d + 1$. There are various ways to generate a discrete r.v. with finite support; we may choose one whose complexity is linear in $d$, as in the sketch below.
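A minimal Python sketch of steps (i)-(iv), with illustrative parameters; the categorical draw in step (iii) is done directly, which costs O(d) per event.

```python
import numpy as np

def simulate_tandem(lam, mu, T, rng):
    """Simulate the tandem Jackson network state Q(t) on [0, T].

    lam: external arrival rate; mu: array of d service rates.
    Returns the state vector at time T (steps i-iv of the solution).
    """
    d = len(mu)
    e = np.zeros(d, dtype=int)       # current state (n_1, ..., n_d)
    t = 0.0
    while True:
        active = e > 0               # stations with a customer in service
        rates = np.concatenate(([lam], np.where(active, mu, 0.0)))
        total = rates.sum()
        s = rng.exponential(1.0 / total)        # step ii: holding time
        if t + s > T:
            return e
        t += s
        K = rng.choice(d + 1, p=rates / total)  # step iii: which event
        if K == 0:
            e[0] += 1                # arrival to station 1
        elif K < d:
            e[K - 1] -= 1            # move from station K to K + 1
            e[K] += 1
        else:
            e[d - 1] -= 1            # departure from station d

rng = np.random.default_rng(2)
print(simulate_tandem(lam=1.0, mu=np.full(5, 2.0), T=100.0, rng=rng))
```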
Problem 4 Let $0 < R_1 < R_2 < \cdots$ be the ordered radii of the points of a Poisson process on the disk $D = \{(x_1, x_2) : x_1^2 + x_2^2 < r^2\}$. Give algorithms for simulation of the homogeneous Poisson process on $D$ by verifying and using that (a.) the $R_i$ are the points of an inhomogeneous Poisson process on $(0, r)$; (b.) $R_1^2, R_2^2 - R_1^2, \ldots$ are iid exponentials.
Solution:
a.) Let $N$ be the homogeneous spatial Poisson process on $D$ with intensity $\lambda$, and let $\tilde{N}(a,b)$ denote the number of $R_i$'s that lie in $(a,b)$. Then,
\[
P(\tilde{N}(t, t+dt) = k) = P\big(N(\text{the annulus with inner radius } t \text{ and outer radius } t+dt) = k\big) = \mathrm{Poisson}(k;\ \lambda \cdot \text{area of the annulus}) = \mathrm{Poisson}(k;\ 2\pi\lambda t\, dt).
\]
Moreover, the numbers of $R_i$'s that fall into disjoint intervals of $(0,r)$ are independent, since the numbers of points of $N$ that fall into the corresponding annuli are independent. Therefore, the $R_i$'s are the points of an inhomogeneous Poisson process on $(0, r)$ with rate function $\lambda(t) = 2\pi\lambda t$.
b.) Using part (a),
\[
P(R_{i+1}^2 - R_i^2 > t \mid R_i) = P\Big(R_{i+1} > \sqrt{R_i^2 + t}\ \Big|\ R_i\Big)
= P\Big(\tilde{N}\big(R_i, \sqrt{R_i^2 + t}\big) = 0\Big)
= \exp\Big( -\int_{R_i}^{\sqrt{R_i^2 + t}} 2\pi\lambda s\, ds \Big) = e^{-\lambda\pi t}.
\]
So the marginal distribution is $P(R_{i+1}^2 - R_i^2 > t) = e^{-\lambda\pi t}$ for all $i$, and thus
\[
P(R_{i+1}^2 - R_i^2 > t_{i+1},\ i = 0, 1, \ldots) = P(R_1^2 > t_1)\, P(R_2^2 - R_1^2 > t_2 \mid R_1)\, P(R_3^2 - R_2^2 > t_3 \mid R_2) \cdots = \prod_{i \ge 1} e^{-\lambda\pi t_i} = \prod_{i \ge 1} P(R_i^2 - R_{i-1}^2 > t_i),
\]
proving $R_1^2, R_2^2 - R_1^2, \ldots$ are iid exponential r.v.s with mean $(\lambda\pi)^{-1}$.

To simulate the homogeneous Poisson process on $D$, we can first simulate the radii $R_i$ on $(0, r)$ and then simulate the arguments $\Theta_i$, which are uniformly distributed on $[0, 2\pi)$. Then let $(x_1^i, x_2^i) = (R_i \cos\Theta_i, R_i \sin\Theta_i)$. To avoid the calculation of trigonometric functions, we can use the acceptance-rejection method to generate $(x_1^i, x_2^i)$ after simulating $R_i$.
To generate the $R_i$ we can either apply part (a), i.e. simulate an inhomogeneous Poisson process on $(0, r)$ with intensity function $\lambda(t) = 2\pi\lambda t$, or apply part (b), i.e. simulate a sequence of iid exponential r.v.s $Z_1, Z_2, \ldots$ with mean $(\lambda\pi)^{-1}$ so long as $Z_1 + \cdots + Z_n < r^2$, and then set $R_1^2 = Z_1$, $R_2^2 - R_1^2 = Z_2, \ldots$.
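A minimal sketch combining part (b) with a rejection step for the direction; the specific rejection scheme below is our own filling-in of the acceptance-rejection suggestion above, and the parameters lam and r are illustrative.

```python
import numpy as np

def poisson_disk(lam, r, rng):
    """Homogeneous Poisson(lam) process on the disk of radius r, via part (b):
    the squared radii have iid Exp(lam * pi) increments."""
    pts = []
    s = 0.0                           # current squared radius R_i^2
    while True:
        s += rng.exponential(1.0 / (lam * np.pi))
        if s >= r * r:
            return np.array(pts).reshape(-1, 2)
        R = np.sqrt(s)
        # Rejection step for a uniform direction, avoiding cos/sin:
        # draw (u, v) uniform on the square until it lands in the unit disk,
        # then scale (u, v) to length R.
        while True:
            u, v = rng.uniform(-1.0, 1.0, size=2)
            w = u * u + v * v
            if 0.0 < w <= 1.0:
                break
        c = R / np.sqrt(w)
        pts.append((c * u, c * v))

rng = np.random.default_rng(3)
pts = poisson_disk(lam=2.0, r=3.0, rng=rng)
print(len(pts), "points; expected count =", 2.0 * np.pi * 3.0**2)
```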
Problem 5 Suppose that we wish to compute $q_p$, where $q_p$ is the root of $P(X \le q_p) = p$, so that $q_p$ is the $p$-th quantile of $X$. We assume that $X$ is a continuous r.v. with a strictly positive and continuous density $f$. We estimate $q_p$ via $Q_{\lfloor pn \rfloor}$, where $Q_{\lfloor pn \rfloor}$ is the $\lfloor pn \rfloor$-th order statistic of an iid sample $X_1, \ldots, X_n$ from the distribution of $X$. Prove rigorously that
\[
n^{1/2} \big( Q_{\lfloor pn \rfloor} - q_p \big) \Rightarrow \frac{\sqrt{p(1-p)}}{f(q_p)}\, N(0,1)
\]
as $n \to \infty$. (Hint: Reduce the problem to one in which the $X_i$'s are sampled from a uniform (0,1) population.)
Solution: Let $F(x) = P(X \le x)$ be the CDF of $X$. Since $X$ has a strictly positive and continuous density $f(x)$, we know $F^{-1}(\cdot)$ exists and is strictly increasing.
Suppose $U_1, \ldots, U_n$ are iid uniform r.v.s on $(0,1)$ and $U_{(1)}, \ldots, U_{(n)}$ are the corresponding order statistics. Note that $F^{-1}(U) \stackrel{D}{=} X$, so $F^{-1}(U_{(k)}) \stackrel{D}{=} X_{(k)}$, where $X_{(k)}$ is the $k$-th order statistic of $X_1, \ldots, X_n$. Hence, $Q_{\lfloor pn \rfloor} \stackrel{D}{=} F^{-1}(U_{(\lfloor pn \rfloor)})$. It follows by the mean value theorem that
\[
Q_{\lfloor pn \rfloor} - q_p \stackrel{D}{=} F^{-1}(U_{(\lfloor pn \rfloor)}) - F^{-1}(p) = \frac{1}{f(F^{-1}(\xi))} \big( U_{(\lfloor pn \rfloor)} - p \big), \tag{2}
\]
where $\xi$ lies between $U_{(\lfloor pn \rfloor)}$ and $p$.


Let $Z_1, Z_2, \ldots$ be iid standard exponential r.v.s. Then,
\[
U_{(\lfloor pn \rfloor)} \stackrel{D}{=} \frac{\sum_{i=1}^{\lfloor pn \rfloor} Z_i}{\sum_{i=1}^{n+1} Z_i}.
\]
Hence,
\[
U_{(\lfloor pn \rfloor)} - p \stackrel{D}{=} \frac{(1-p)\big( \sum_{i=1}^{\lfloor pn \rfloor} Z_i - \lfloor pn \rfloor \big) - p \big( \sum_{i=\lfloor pn \rfloor + 1}^{n+1} Z_i - (n + 1 - \lfloor pn \rfloor) \big) + \lfloor pn \rfloor - p(n+1)}{\sum_{i=1}^{n+1} Z_i}. \tag{3}
\]


Applying the Continuous Mapping Theorem,
\[
\frac{\sqrt{n} \big( \sum_{i=1}^{\lfloor pn \rfloor} Z_i - \lfloor pn \rfloor \big)}{\sum_{i=1}^{n+1} Z_i}
= \frac{\sqrt{n \lfloor pn \rfloor}}{n+1} \cdot \frac{\frac{1}{\sqrt{\lfloor pn \rfloor}} \sum_{i=1}^{\lfloor pn \rfloor} (Z_i - 1)}{\frac{1}{n+1} \sum_{i=1}^{n+1} Z_i} \Rightarrow \sqrt{p}\, \eta_1, \tag{4}
\]
where $\eta_1$ is a standard normal r.v., since
\[
\frac{1}{\sqrt{\lfloor pn \rfloor}} \sum_{i=1}^{\lfloor pn \rfloor} (Z_i - 1) \Rightarrow \eta_1, \qquad
\frac{1}{n+1} \sum_{i=1}^{n+1} Z_i \stackrel{a.s.}{\longrightarrow} 1, \qquad
\frac{\sqrt{n \lfloor pn \rfloor}}{n+1} \to \sqrt{p}.
\]
Also,
\[
\frac{\sqrt{n} \big( \lfloor pn \rfloor - p(n+1) \big)}{\sum_{i=1}^{n+1} Z_i}
= \frac{\sqrt{n} \big( \lfloor pn \rfloor - p(n+1) \big)}{n+1} \cdot \frac{1}{\frac{1}{n+1} \sum_{i=1}^{n+1} Z_i} \stackrel{a.s.}{\longrightarrow} 0, \tag{5}
\]
noting $pn - 1 < \lfloor pn \rfloor \le pn$ and thus $\lfloor pn \rfloor - p(n+1) \in (-1-p, -p]$.


Moreover, note that
\[
\sum_{i=\lfloor pn \rfloor + 1}^{n+1} Z_i \stackrel{D}{=} \sum_{i=1}^{n+1-\lfloor pn \rfloor} Z_i'
\]
for all $n \ge 1$, where $Z_1', Z_2', \ldots$ is another sequence of iid standard exponential r.v.s. Hence, applying the Continuous Mapping Theorem again,
\[
\frac{\sqrt{n} \big( \sum_{i=\lfloor pn \rfloor+1}^{n+1} Z_i - (n+1-\lfloor pn \rfloor) \big)}{\sum_{i=1}^{n+1} Z_i}
\stackrel{D}{=} \frac{\sqrt{nm}}{n+1} \cdot \frac{\frac{1}{\sqrt{m}} \big( \sum_{i=1}^{m} Z_i' - m \big)}{\frac{1}{n+1} \sum_{i=1}^{n+1} Z_i} \Rightarrow \sqrt{1-p}\, \eta_2, \tag{6}
\]
where $m = n + 1 - \lfloor pn \rfloor$ and $\eta_2$ is a standard normal r.v. Observe that $\eta_1$ and $\eta_2$ are independent, for $\{Z_1, \ldots, Z_{\lfloor pn \rfloor}\}$ and $\{Z_{\lfloor pn \rfloor+1}, \ldots, Z_{n+1}\}$ are independent for each $n \ge 1$. Combining (4), (5), (6) and (3), we conclude
\[
\sqrt{n}\, \big( U_{(\lfloor pn \rfloor)} - p \big) \Rightarrow (1-p)\sqrt{p}\, \eta_1 - p\sqrt{1-p}\, \eta_2 \stackrel{D}{=} \sqrt{p(1-p)}\, N(0,1), \tag{7}
\]
applying the Continuous Mapping Theorem and the following lemma:

Lemma 1 Suppose $\{X_n\}$ and $\{Y_n\}$ are two independent sequences of r.v.s. If $X_n \Rightarrow X$ and $Y_n \Rightarrow Y$, then $X_n + Y_n \Rightarrow X + Y$, where $X$ and $Y$ are independent.

This lemma can be easily proved by noting
\[
\lim_n E e^{i\theta(X_n + Y_n)} = \lim_n E e^{i\theta X_n}\, E e^{i\theta Y_n} = E e^{i\theta X}\, E e^{i\theta Y} = E e^{i\theta(X+Y)}.
\]
(7) also implies $U_{(\lfloor pn \rfloor)} - p \Rightarrow 0$ and thus $U_{(\lfloor pn \rfloor)} \to p$ in probability. Recall that $\xi$ is between $U_{(\lfloor pn \rfloor)}$ and $p$, so for any $\epsilon > 0$,
\[
P(|\xi - p| > \epsilon) \le P(|U_{(\lfloor pn \rfloor)} - p| > \epsilon) \to 0.
\]
Hence, $\xi \to p$ in probability. By the continuity of $f(\cdot)$ and $F^{-1}(\cdot)$, we have
\[
f(F^{-1}(\xi)) \to f(F^{-1}(p)) = f(q_p) \quad \text{in probability.}
\]


It follows from (2) and (7) that
\[
n^{1/2} \big( Q_{\lfloor pn \rfloor} - q_p \big) \Rightarrow \frac{\sqrt{p(1-p)}}{f(q_p)}\, N(0,1).
\]
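A quick Monte Carlo sketch of this CLT (an illustration of our own, taking X standard normal): the empirical standard deviation of $n^{1/2}(Q_{\lfloor pn \rfloor} - q_p)$ should match $\sqrt{p(1-p)}/f(q_p)$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Illustrative check with X ~ N(0,1): the scaled quantile-estimation error
# should have standard deviation close to sqrt(p(1-p)) / f(q_p).
p, n, reps = 0.9, 5_000, 1_000
q_p = norm.ppf(p)
k = int(np.floor(p * n))                 # index of the floor(pn)-th order statistic

samples = rng.standard_normal((reps, n))
Q = np.sort(samples, axis=1)[:, k - 1]   # k-th smallest in each replication
scaled = np.sqrt(n) * (Q - q_p)

print("empirical sd :", scaled.std())
print("theoretical  :", np.sqrt(p * (1 - p)) / norm.pdf(q_p))
```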

Problem 6 Suppose that $(Y_1, \tau_1), \ldots, (Y_n, \tau_n)$ is an iid sequence sampled from the population of $(Y, \tau)$, where $\tau \ge 0$ a.s. Assume that there exists $c < \infty$ such that $|Y_i| \le c\tau_i$ a.s. If $E\tau > 0$ and $E\tau^p < \infty$ for $p \ge 4$, prove that
\[
E\big( \bar{Y}_n / \bar{\tau}_n \big) = EY/E\tau + \sum_{j=1}^{\lfloor p/2 \rfloor} a_j n^{-j} + o\big( n^{-\lfloor p/2 \rfloor} \big)
\]
as $n \to \infty$.
Solution: Let $g(y,t) = y/t$. All partial derivatives of $g$ of all orders are zero except $\frac{\partial^k g}{\partial t^k}$ and $\frac{\partial^k g}{\partial y\, \partial t^{k-1}}$ for $k \ge 1$. Consider the Taylor expansion of $g(\bar{Y}_n, \bar{\tau}_n)$ around $(EY, E\tau) \triangleq (a, b)$. The $k$-th order Taylor term is
\[
\frac{1}{k!} \frac{\partial^k g}{\partial t^k}(a,b)\, (\bar{\tau}_n - b)^k + \frac{1}{(k-1)!} \frac{\partial^k g}{\partial y\, \partial t^{k-1}}(a,b)\, (\bar{Y}_n - a)(\bar{\tau}_n - b)^{k-1}.
\]
Note that all the partial derivatives appearing in the expansion terms are finite and deterministic. So they do not influence the magnitude of the expansion terms, and we only need to consider the magnitudes of $E(\bar{\tau}_n - b)^k \triangleq f(k)$ and $E(\bar{Y}_n - a)(\bar{\tau}_n - b)^{k-1} \triangleq h(k)$. Obviously, $f(1) = h(1) = 0$.
For $k = 2$,
\[
f(2) = \frac{1}{n^2} E\Big( \sum_{i=1}^{n} (\tau_i - b) \Big)^2 = \frac{1}{n^2} E\Big( \sum_{i=1}^{n} (\tau_i - b)^2 \Big) = \frac{1}{n} E(\tau - b)^2;
\]
\[
h(2) = \frac{1}{n^2} E\Big( \sum_{i=1}^{n} (Y_i - a) \Big)\Big( \sum_{i=1}^{n} (\tau_i - b) \Big) = \frac{1}{n^2} E\Big( \sum_{i=1}^{n} (Y_i - a)(\tau_i - b) \Big) = \frac{1}{n} E(Y - a)(\tau - b),
\]
so the second order Taylor term is of order $n^{-1}$.

For $k = 3$,
\[
f(3) = \frac{1}{n^3} E\Big( \sum_{i=1}^{n} (\tau_i - b) \Big)^3 = \frac{1}{n^3} E\Big( \sum_{i=1}^{n} (\tau_i - b)^3 \Big) = \frac{1}{n^2} E(\tau - b)^3;
\]
\[
h(3) = \frac{1}{n^3} E\Big( \sum_{i=1}^{n} (Y_i - a) \Big)\Big( \sum_{i=1}^{n} (\tau_i - b) \Big)^2 = \frac{1}{n^3} E\Big( \sum_{i=1}^{n} (Y_i - a)(\tau_i - b)^2 \Big) = \frac{1}{n^2} E(Y - a)(\tau - b)^2
\]
(the cross terms vanish because they contain an independent mean-zero factor), so the third Taylor term has magnitude $n^{-2}$. For $k = 4$,
\[
f(4) = \frac{1}{n^4} E\Big( \sum_{i=1}^{n} (\tau_i - b) \Big)^4 = \frac{1}{n^4} \Big( E \sum_{i=1}^{n} (\tau_i - b)^4 + 3 \sum_{1 \le i \ne j \le n} E(\tau_i - b)^2 (\tau_j - b)^2 \Big)
\]
\[
\qquad = \frac{1}{n^4} \big\{ n E(\tau - b)^4 + 3(n^2 - n)[E(\tau - b)^2]^2 \big\} \triangleq \frac{b_1}{n^2} + \frac{b_2}{n^3};
\]
\[
h(4) = \frac{1}{n^4} E\Big( \sum_{i=1}^{n} (Y_i - a) \Big)\Big( \sum_{i=1}^{n} (\tau_i - b) \Big)^3 = \frac{1}{n^4} \big\{ n E(Y - a)(\tau - b)^3 + 3(n^2 - n) E[(Y - a)(\tau - b)]\, E(\tau - b)^2 \big\},
\]
so the fourth Taylor term has magnitude $n^{-2}$.


In general, if $E\tau^p < \infty$ for some $p \ge 4$, then we can expand $g(\bar{Y}_n, \bar{\tau}_n)$ in the Taylor sense up to the $(p-1)$-th order term. In the expansion of $f(p-1)$, the dominating term (in the sense of magnitude in $n$) is
\[
\frac{1}{n^{p-1}} E\Big( \sum_{1 \le i_1, \ldots, i_m \le n} (\tau_{i_1} - b)^2 \cdots (\tau_{i_m} - b)^2 \Big) = O\Big( \frac{1}{n^{\lfloor p/2 \rfloor}} \Big)
\]
if $p$ is odd, where $m = (p-1)/2 = \lfloor p/2 \rfloor$ and $i_1, \ldots, i_m$ are mutually different; or
\[
\frac{1}{n^{p-1}} E\Big( \sum_{1 \le i_1, \ldots, i_m \le n} (\tau_{i_1} - b)^2 \cdots (\tau_{i_{m-1}} - b)^2 (\tau_{i_m} - b)^3 \Big) = O\Big( \frac{1}{n^{p/2}} \Big)
\]
if $p$ is even, where $m = p/2 - 1$ and $i_1, \ldots, i_m$ are mutually different. A similar analysis can also be applied to $h(p-1)$. Therefore, we have the following expansion:
\[
E\big( \bar{Y}_n / \bar{\tau}_n \big) = EY/E\tau + \sum_{j=1}^{\lfloor p/2 \rfloor} a_j n^{-j} + o\big( n^{-\lfloor p/2 \rfloor} \big)
\]
as $n \to \infty$.
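A rough Monte Carlo sketch (a toy example of our own choosing) illustrating the leading $a_1/n$ bias term: with $\tau \sim$ Uniform(1,2) and $Y = \tau^2$ we have $|Y| \le 2\tau$, all moments of $\tau$ exist, and $EY/E\tau = (7/3)/(3/2)$, so $n$ times the empirical bias should be roughly constant.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy check that the bias of the ratio estimator decays like a_1 / n:
# tau ~ Uniform(1, 2), Y = tau^2, true ratio EY/Etau = (7/3)/(3/2).
reps = 100_000
for n in [10, 20, 40]:
    tau = rng.uniform(1.0, 2.0, size=(reps, n))
    Y = tau ** 2
    bias = (Y.mean(axis=1) / tau.mean(axis=1)).mean() - (7 / 3) / (3 / 2)
    print(f"n={n:2d}: bias ~ {bias:+.5f}, n * bias ~ {n * bias:+.4f}")
# n * bias should stabilize near the a_1 coefficient, up to Monte Carlo noise.
```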
