
Problem Set #2 9/20/2015

Problem #1
(a) If $W^2 \sim \mathrm{Gamma}(1, 1/2)$, find the density $f_{W^2}$, the distribution function $F_{W^2}$, and the inverse distribution function $F_{W^2}^{-1}$ explicitly.

Solution:
If $W^2 \sim \mathrm{Gamma}(1, 1/2)$, then

$$ f_{W^2}(w) = \frac{1}{2} \exp\left(-\frac{w}{2}\right) \quad \text{for } w \ge 0. $$

We can see at once that $W^2 \sim \mathrm{Exponential}(1/2)$. So we can compute the distribution function as

$$ F_{W^2}(w) = 1 - \exp\left(-\frac{w}{2}\right) \quad \text{for } w \ge 0. $$

Finally, for the inverse distribution function let $y = F_{W^2}^{-1}(w)$, so that $\exp(-y/2) = 1 - w$. Hence

$$ F_{W^2}^{-1}(w) = -2 \log(1 - w) \quad \text{for } 0 \le w < 1. $$
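As a quick numerical check of this inverse-transform relation, here is a minimal Python sketch (assuming NumPy is available; the sample size and seed are arbitrary choices): applying $F_{W^2}^{-1}$ to Uniform(0, 1) draws should reproduce the Exponential(1/2), i.e. $\chi^2_2$, distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=200_000)          # U ~ Uniform(0, 1)
w = -2.0 * np.log(1.0 - u)             # W^2 = F^{-1}(U) = -2 log(1 - U)

# Exponential(1/2) has mean 2 and variance 4.
print(w.mean(), w.var())

# Empirical survival probability vs. 1 - F(t) = exp(-t/2) at a few points.
for t in (0.5, 1.0, 2.0, 5.0):
    print(t, (w > t).mean(), np.exp(-t / 2.0))
```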

(b) Suppose that $(X, Y) \sim N(0, I)$. Show that $R$ and $\Theta$ defined by $R^2 = X^2 + Y^2$ and $\Theta = \tan^{-1}(Y/X)$ are independent random variables with $R^2 \sim \chi^2_2$ and $\Theta \sim \mathrm{Uniform}(0, 2\pi)$.
Solution:
Since $(X, Y) \sim N(0, I)$, the joint density is

$$ f_{X,Y}(x, y) = \frac{1}{2\pi} \exp\left(-\frac{x^2 + y^2}{2}\right) \quad \text{for } (x, y) \in \mathbb{R}^2. $$

Now change variables to $x = r\cos(\theta)$ and $y = r\sin(\theta)$ with $r \in \mathbb{R}^+$ and $\theta \in (0, 2\pi)$; the inverse map is $(x, y) \mapsto \left(\sqrt{x^2 + y^2},\ \tan^{-1}(y/x)\right) = (r, \theta)$, and the Jacobian of $(r, \theta) \mapsto (r\cos\theta,\ r\sin\theta)$ is $r$. Then

$$ f_{R,\Theta}(r, \theta) = f_{X,Y}(r\cos\theta,\ r\sin\theta)\, r = \frac{1}{2\pi}\, r\exp\left(-\frac{r^2}{2}\right) = \left[r\exp\left(-\frac{r^2}{2}\right)\right] \cdot \left[\frac{1}{2\pi}\right] \quad \text{on } (0, \infty) \times (0, 2\pi). $$

The joint density factors into a function of $r$ alone times a function of $\theta$ alone, so $R$ and $\Theta$ are independent, with $\Theta \sim \mathrm{Uniform}(0, 2\pi)$. Moreover, changing variables from $r$ to $s = r^2$ turns the marginal density $r\exp(-r^2/2)$ into $\frac{1}{2}\exp(-s/2)$, so $R^2 \sim \chi^2_2$, the distribution found in part (a).
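A small simulation sketch of this factorization (assuming NumPy; the angle is computed with `arctan2` and wrapped into $(0, 2\pi)$, which agrees with $\tan^{-1}(Y/X)$ up to the branch choice): $R^2$ should behave like $\chi^2_2$, $\Theta$ like Uniform$(0, 2\pi)$, and the two should be uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)
y = rng.standard_normal(200_000)

r2 = x**2 + y**2                                # R^2 = X^2 + Y^2
theta = np.mod(np.arctan2(y, x), 2 * np.pi)     # angle wrapped into (0, 2*pi)

print(r2.mean(), r2.var())                      # chi^2_2: mean 2, variance 4
print(theta.mean(), theta.var())                # Uniform(0, 2*pi): mean pi, variance (2*pi)^2/12
print(np.corrcoef(r2, theta)[0, 1])             # approx 0, consistent with independence
```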


(c) Use (a) and (b) to show how to use two independent Uniform(0, 1) random variables $U$ and $V$ to generate two independent standard normal random variables.

Solution:
If $U$ and $V$ are independent Uniform(0, 1) random variables, we can use the inverse transformation from (a) to get

$$ R^2 = F_{W^2}^{-1}(U) = -2\log(1 - U) \sim \chi^2_2 \quad \text{and} \quad \Theta = 2\pi V \sim \mathrm{Uniform}(0, 2\pi), $$

with $R^2$ and $\Theta$ independent. Thus, by (b), $(X, Y) = (R\cos(\Theta), R\sin(\Theta)) \sim N_2(0, I)$.
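This construction is the Box-Muller transform. A minimal Python sketch of it follows (assuming NumPy; the function name, seed, and sample size are illustrative choices, not part of the problem):

```python
import numpy as np

def box_muller(n_pairs, seed=0):
    """Generate two arrays of independent N(0,1) variables from Uniform(0,1) draws,
    following (a)-(c): R^2 = -2 log(1 - U) ~ chi^2_2 and Theta = 2 pi V ~ Uniform(0, 2 pi)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n_pairs)              # U ~ Uniform(0, 1)
    v = rng.uniform(size=n_pairs)              # V ~ Uniform(0, 1), independent of U
    r = np.sqrt(-2.0 * np.log(1.0 - u))        # R, so that R^2 ~ chi^2_2
    theta = 2.0 * np.pi * v                    # Theta ~ Uniform(0, 2 pi)
    return r * np.cos(theta), r * np.sin(theta)

x, y = box_muller(200_000)
print(x.mean(), x.std(), y.mean(), y.std())    # approx 0, 1, 0, 1
print(np.corrcoef(x, y)[0, 1])                 # approx 0
```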

Problem #2
Suppose that $X_1, X_2, \dots$ are iid Exponential($\lambda$). Let $M_n \equiv \min_{1 \le i \le n} X_i$ and $T_n \equiv \max_{1 \le i \le n} X_i$.

(a) Show that $nM_n \overset{d}{=} \mathrm{Exponential}(\lambda)$.

Solution:
Consider $X_1, X_2, \dots$ iid Exponential($\lambda$). Then, for $x \ge 0$,

$$ P(nM_n > x) = P\left(\min_{1 \le j \le n} X_j > \frac{x}{n}\right) = P\left(X_1 > \frac{x}{n},\ X_2 > \frac{x}{n},\ \dots,\ X_n > \frac{x}{n}\right) $$
$$ = P\left(X_1 > \frac{x}{n}\right) P\left(X_2 > \frac{x}{n}\right) \cdots P\left(X_n > \frac{x}{n}\right) = \left(\exp\left(-\frac{\lambda x}{n}\right)\right)^n = \exp(-\lambda x). $$

Therefore $nM_n \overset{d}{=} \mathrm{Exponential}(\lambda)$, as required.
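A simulation sketch of (a) (assuming NumPy; $\lambda$, $n$, and the number of replications are arbitrary choices): for each replication we take the minimum of $n$ Exponential($\lambda$) draws and scale by $n$, then compare with the Exponential($\lambda$) distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n, reps = 2.0, 50, 100_000

# NumPy parametrizes the exponential by its scale 1/lambda.
x = rng.exponential(scale=1.0 / lam, size=(reps, n))
n_min = n * x.min(axis=1)                        # n * M_n

print(n_min.mean(), n_min.var())                 # approx 1/lam and 1/lam^2
for t in (0.1, 0.5, 1.0):
    print(t, (n_min > t).mean(), np.exp(-lam * t))   # P(n M_n > t) vs exp(-lam t)
```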

(b) Show that $T_n - \frac{1}{\lambda}\log n \overset{d}{\to} T$, where $T$ has the double exponential extreme value distribution with distribution function $P(T \le x) = \exp(-\exp(-x))$.


Solution:

The maximum $T_n$ sits near $(1/\lambda)\log n$; centering there, for any $x$,

$$ P\left(T_n - \frac{1}{\lambda}\log n \le x\right) = P\left(\max_{1 \le j \le n} X_j \le x + \frac{1}{\lambda}\log n\right) $$
$$ = P\left(X_1 \le x + \frac{1}{\lambda}\log n,\ X_2 \le x + \frac{1}{\lambda}\log n,\ \dots,\ X_n \le x + \frac{1}{\lambda}\log n\right) $$
$$ = P\left(X_1 \le x + \frac{1}{\lambda}\log n\right)^n = \left(1 - \exp(-\lambda x - \log n)\right)^n = \left(1 - \frac{\exp(-\lambda x)}{n}\right)^n \to \exp\left(-\exp(-\lambda x)\right) $$

as $n \to \infty$. Hence, with $\lambda = 1$,

$$ P\left(T_n - \frac{1}{\lambda}\log n \le x\right) \to P(T \le x) = \exp(-\exp(-x)) \quad \text{as } n \to \infty, $$

as required.
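A simulation sketch of (b) (assuming NumPy; $\lambda = 1$ as in the last step, with $n$ and the number of replications arbitrary): we compare the empirical distribution of $T_n - \log n$ with the Gumbel limit $\exp(-\exp(-x))$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 1_000, 100_000

x = rng.exponential(scale=1.0, size=(reps, n))     # lambda = 1
centered_max = x.max(axis=1) - np.log(n)           # T_n - (1/lambda) log n

for t in (-1.0, 0.0, 1.0, 2.0):
    print(t, (centered_max <= t).mean(), np.exp(-np.exp(-t)))
```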

(c) Now suppose that $X_1, X_2, \dots, X_n$ are iid with distribution function $F$ satisfying $F(0) = 0$ and $0 < F'(0) < \infty$; here $F'(0)$ is the right-derivative of $F$ at $0$:

$$ \lim_{x \downarrow 0} \frac{F(x) - F(0)}{x} = F'(0). $$

Show that $nM_n \overset{d}{\to} \mathrm{Exponential}(F'(0))$.

Solution:

Consider, for $x \ge 0$,

$$ P(nM_n > x) = P\left(\min_{1 \le j \le n} X_j > \frac{x}{n}\right) = P\left(X_1 > \frac{x}{n}\right) P\left(X_2 > \frac{x}{n}\right) \cdots P\left(X_n > \frac{x}{n}\right) = \left(1 - F\left(\frac{x}{n}\right)\right)^n. $$

Because $F$ has right-derivative $F'(0)$ at $0$ and $F(0) = 0$, we can write

$$ nF\left(\frac{x}{n}\right) = \frac{F(x/n) - F(0)}{x/n}\, x \to F'(0)\, x \quad \text{as } n \to \infty. $$

Hence

$$ \left(1 - F\left(\frac{x}{n}\right)\right)^n = \left(1 - \frac{nF(x/n)}{n}\right)^n \to \exp\left(-F'(0)\, x\right), $$

so $nM_n \overset{d}{\to} \mathrm{Exponential}(F'(0))$, as required.
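A simulation sketch of (c) with a non-exponential example (assuming NumPy): for $X_i \sim$ Uniform(0, 1) we have $F(x) = x$ near $0$, so $F'(0) = 1$ and $nM_n$ should be approximately Exponential(1) for large $n$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 1_000, 100_000

x = rng.uniform(size=(reps, n))          # F(x) = x near 0, so F'(0) = 1
n_min = n * x.min(axis=1)                # n * M_n

print(n_min.mean(), n_min.var())         # approx 1, 1 (Exponential(1))
for t in (0.5, 1.0, 2.0):
    print(t, (n_min > t).mean(), np.exp(-t))
```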

Problem #3
Suppose that $Y$ is a random variable with $E(Y^2) < \infty$.

(a) Show that $\mathrm{Var}(Y) = E\{\mathrm{Var}(Y \mid X)\} + \mathrm{Var}\{E(Y \mid X)\}$.

Solution:

We want to show that

$$ E\left[(Y - E(Y))^2\right] = E\left\{E\left[(Y - E(Y \mid X))^2 \mid X\right]\right\} + E\left\{\left[E(Y \mid X) - E(Y)\right]^2\right\}. $$

By direct computation:

$$ \mathrm{Var}(Y) = E\left[Y - E(Y)\right]^2 = E\left[Y - E(Y \mid X) + E(Y \mid X) - E(Y)\right]^2 $$
$$ = E\left[Y - E(Y \mid X)\right]^2 + 2\,E\left\{\left[Y - E(Y \mid X)\right]\left[E(Y \mid X) - E(Y)\right]\right\} + E\left[E(Y \mid X) - E(Y)\right]^2. $$

Now, because of the conditioning, we should expect the middle (cross) term to be $0$. That is,

$$ E\left\{\left[Y - E(Y \mid X)\right]\left[E(Y \mid X) - E(Y)\right]\right\} = E\left(E\left\{\left[Y - E(Y \mid X)\right]\left[E(Y \mid X) - E(Y)\right] \,\middle|\, X\right\}\right) $$
$$ = E\left(\left[E(Y \mid X) - E(Y)\right]\, E\left\{Y - E(Y \mid X) \,\middle|\, X\right\}\right) = E\left(\left[E(Y \mid X) - E(Y)\right]\left[E(Y \mid X) - E(Y \mid X)\right]\right) = E\left(\left[E(Y \mid X) - E(Y)\right] \cdot 0\right) = 0. $$

Hence

$$ \mathrm{Var}(Y) = E\left(E\left\{\left[Y - E(Y \mid X)\right]^2 \,\middle|\, X\right\}\right) + \mathrm{Var}\left(E(Y \mid X)\right) = E\left(\mathrm{Var}(Y \mid X)\right) + \mathrm{Var}\left(E(Y \mid X)\right). $$
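A Monte Carlo sketch of the identity in (a) (assuming NumPy; the model $X \sim N(0,1)$, $Y \mid X \sim N(X, 1 + X^2)$ is just an illustrative choice, for which $E(Y \mid X) = X$ and $\mathrm{Var}(Y \mid X) = 1 + X^2$, so both sides equal 3):

```python
import numpy as np

rng = np.random.default_rng(5)
m = 1_000_000

x = rng.standard_normal(m)
# Illustrative model: Y | X ~ N(X, 1 + X^2), so E(Y|X) = X and Var(Y|X) = 1 + X^2.
y = x + np.sqrt(1.0 + x**2) * rng.standard_normal(m)

lhs = y.var()                                # Var(Y)
rhs = np.mean(1.0 + x**2) + x.var()          # E[Var(Y|X)] + Var[E(Y|X)]
print(lhs, rhs)                              # both approx 3
```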
(b) Interpret (a) geometrically.
Solution:
From the textbook definition we can say that $Y - E(Y \mid X)$ is orthogonal to $E(Y \mid X) - E(Y)$ in the space $L_2(P)$ of square-integrable random variables. The identity in part (a) is then a Pythagorean theorem: the squared length of $Y - E(Y)$ is the sum of the squared lengths of its two orthogonal components.
(c) Suppose that $Y \sim \chi^2_n(\delta)$, the noncentral chi-square distribution. Compute $E(Y)$ and $\mathrm{Var}(Y)$, using $E(Y) = E\left(E(Y \mid X)\right)$ and (a).
Solution:
The noncentral chi-square is a Poisson mixture of central chi-squares: $(Y \mid K) \sim \chi^2_{2K + n}$ where $K \sim \mathrm{Poisson}(\delta/2)$. Then

$$ E(Y) = E\left(E(Y \mid K)\right) = E(2K + n) = n + 2\left(\frac{\delta}{2}\right) = n + \delta. $$

Then, using part (a),

$$ \mathrm{Var}(Y) = E\left(\mathrm{Var}(Y \mid K)\right) + \mathrm{Var}\left(E(Y \mid K)\right) = E\left(2(2K + n)\right) + \mathrm{Var}(2K + n) $$
$$ = 2\left(2 \cdot \frac{\delta}{2} + n\right) + 4\left(\frac{\delta}{2}\right) = 2n + 4\delta. $$
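A simulation sketch of these moment formulas via the Poisson-mixture representation used above (assuming NumPy; $n$, $\delta$, and the number of replications are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
n, delta, reps = 5, 3.0, 1_000_000

k = rng.poisson(lam=delta / 2.0, size=reps)    # K ~ Poisson(delta/2)
y = rng.chisquare(df=2 * k + n)                # Y | K ~ chi^2_{2K + n}

print(y.mean(), n + delta)                     # approx n + delta
print(y.var(), 2 * n + 4 * delta)              # approx 2n + 4 delta
```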

(d) Show that

$$ \frac{Y - (n + \delta)}{\sqrt{2n + 4\delta}} \overset{d}{\to} N(0, 1) \quad \text{as either } n \to \infty \text{ or } \delta \to \infty. $$

Solution:
Write $Y \overset{d}{=} (Z_1 + \sqrt{\delta})^2 + Z_2^2 + \dots + Z_n^2$ with $Z_1, \dots, Z_n \sim N(0, 1)$ independent. Then

$$ \frac{Y - (n + \delta)}{\sqrt{2n + 4\delta}} \overset{d}{=} \frac{\left[(Z_1 + \sqrt{\delta})^2 - (1 + \delta)\right] + (Z_2^2 - 1) + \dots + (Z_n^2 - 1)}{\sqrt{2n + 4\delta}} $$
$$ = \frac{(Z_1^2 - 1) + \dots + (Z_n^2 - 1)}{\sqrt{2n + 4\delta}} + \frac{2\sqrt{\delta}\, Z_1}{\sqrt{2n + 4\delta}} = \frac{T_1 + \dots + T_n}{\sqrt{2n + 4\delta}} + \frac{2\sqrt{\delta}\, Z_1}{\sqrt{2n + 4\delta}}, $$

where $T_i \equiv Z_i^2 - 1$ are iid with mean $0$ and variance $2$. For fixed $\delta$,

$$ \frac{2\sqrt{\delta}\, Z_1}{\sqrt{2n + 4\delta}} \overset{p}{\to} 0 \quad \text{and} \quad \frac{T_1 + \dots + T_n}{\sqrt{2n}} \overset{d}{\to} N(0, 1) \quad \text{as } n \to \infty, $$

and since $\sqrt{2n}/\sqrt{2n + 4\delta} \to 1$ this gives

$$ \frac{Y - (n + \delta)}{\sqrt{2n + 4\delta}} \overset{d}{\to} N(0, 1) \quad \text{as } n \to \infty. $$

For the other limit, fix $n$ and let $\delta \to \infty$: then $(T_1 + \dots + T_n)/\sqrt{2n + 4\delta} \overset{p}{\to} 0$ while $2\sqrt{\delta}\, Z_1/\sqrt{2n + 4\delta} \overset{d}{\to} Z_1 \sim N(0, 1)$, and the same conclusion follows.
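A simulation sketch of this normal approximation (assuming NumPy, whose generator provides a noncentral chi-square sampler; $n$, $\delta$, and the number of replications are arbitrary): standardize $Y$ by the mean and variance from (c) and compare with the standard normal CDF.

```python
import math
import numpy as np

rng = np.random.default_rng(7)
n, delta, reps = 500, 10.0, 200_000

y = rng.noncentral_chisquare(df=n, nonc=delta, size=reps)
z = (y - (n + delta)) / np.sqrt(2 * n + 4 * delta)    # standardized as in (d)

print(z.mean(), z.std())                              # approx 0 and 1
for t in (-1.0, 0.0, 1.0):
    phi = 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # standard normal CDF at t
    print(t, (z <= t).mean(), phi)
```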

Problem #4
Give an example of random variables $X_n$ such that $E|X_n| \to 0$ but $E|X_n|^2 \to 1$.

Solution:
Consider the random variable $X_n$ with

$$ X_n = \begin{cases} n & \text{with probability } p_n = 1/n^2, \\ 0 & \text{otherwise.} \end{cases} $$

Then

$$ E|X_n| = n\, p_n = n \cdot \frac{1}{n^2} = \frac{1}{n} \quad \text{and} \quad E|X_n|^2 = n^2 p_n = n^2 \cdot \frac{1}{n^2} = 1 \quad \text{for all } n. $$

Clearly, $E|X_n| \to 0$ as $n$ gets larger, while $E|X_n|^2 = 1$ for all $n$.
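A quick numerical sketch of this example (assuming NumPy; the number of replications is arbitrary, and the comparison is only approximate because the event $\{X_n = n\}$ is rare for large $n$):

```python
import numpy as np

rng = np.random.default_rng(8)
reps = 2_000_000

for n in (2, 10, 50):
    # X_n = n with probability 1/n^2, and 0 otherwise.
    x = n * (rng.uniform(size=reps) < 1.0 / n**2)
    print(n, x.mean(), 1.0 / n, (x**2).mean())   # E|X_n| approx 1/n, E|X_n|^2 approx 1
```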
