
7. Two Random Variables

In many experiments, the observations are expressible not as a single quantity, but as a family of quantities. For example, to record the height and weight of each person in a community, or the number of people and the total income in a family, we need two numbers.
Let X and Y denote two random variables (r.v) based on a
probability model (Ω, F, P). Then
$$P(x_1 < X(\xi) \le x_2) = F_X(x_2) - F_X(x_1) = \int_{x_1}^{x_2} f_X(x)\, dx,$$

and

$$P(y_1 < Y(\xi) \le y_2) = F_Y(y_2) - F_Y(y_1) = \int_{y_1}^{y_2} f_Y(y)\, dy.$$
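As a quick numerical illustration of this single-variable relation, the sketch below (a hypothetical check of my own, assuming SciPy is available; the exponential r.v. and the interval are arbitrary choices) verifies that integrating a density between two points reproduces the difference of the corresponding distribution-function values.

```python
# Hypothetical check of P(x1 < X <= x2) = F_X(x2) - F_X(x1) = integral of f_X
from scipy import integrate, stats

x1, x2 = 0.5, 2.0
rv = stats.expon()                        # assumed example: exponential r.v. with rate 1

lhs = rv.cdf(x2) - rv.cdf(x1)             # F_X(x2) - F_X(x1)
rhs, _ = integrate.quad(rv.pdf, x1, x2)   # integral of f_X over (x1, x2]

print(lhs, rhs)                           # both ~0.4712
```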
What about the probability that the pair of r.vs (X, Y) belongs to an arbitrary region D? In other words, how does one estimate, for example, $P[(x_1 < X(\xi) \le x_2) \cap (y_1 < Y(\xi) \le y_2)]$? Towards this, we define the joint probability distribution function of X and Y to be

$$F_{XY}(x, y) = P[(X(\xi) \le x) \cap (Y(\xi) \le y)] = P(X \le x, Y \le y) \ge 0,$$   (7-1)

where x and y are arbitrary real numbers.


Properties

(i) $F_{XY}(-\infty, y) = F_{XY}(x, -\infty) = 0, \quad F_{XY}(+\infty, +\infty) = 1.$   (7-2)

Since $(X(\xi) \le -\infty, \; Y(\xi) \le y) \subset (X(\xi) \le -\infty)$, we get $F_{XY}(-\infty, y) \le P(X(\xi) \le -\infty) = 0$. Similarly, since $(X(\xi) \le +\infty, \; Y(\xi) \le +\infty) = \Omega$, we get $F_{XY}(\infty, \infty) = P(\Omega) = 1$.
(ii) $P(x_1 < X(\xi) \le x_2, \; Y(\xi) \le y) = F_{XY}(x_2, y) - F_{XY}(x_1, y).$   (7-3)

$P(X(\xi) \le x, \; y_1 < Y(\xi) \le y_2) = F_{XY}(x, y_2) - F_{XY}(x, y_1).$   (7-4)

To prove (7-3), we note that for $x_2 > x_1$,

$$(X(\xi) \le x_2, \; Y(\xi) \le y) = (X(\xi) \le x_1, \; Y(\xi) \le y) \cup (x_1 < X(\xi) \le x_2, \; Y(\xi) \le y),$$

and the mutually exclusive property of the events on the right side gives

$$P(X(\xi) \le x_2, \; Y(\xi) \le y) = P(X(\xi) \le x_1, \; Y(\xi) \le y) + P(x_1 < X(\xi) \le x_2, \; Y(\xi) \le y),$$

which proves (7-3). Similarly (7-4) follows.
(iii) $P(x_1 < X(\xi) \le x_2, \; y_1 < Y(\xi) \le y_2) = F_{XY}(x_2, y_2) - F_{XY}(x_2, y_1) - F_{XY}(x_1, y_2) + F_{XY}(x_1, y_1).$   (7-5)

This is the probability that (X, Y) belongs to the rectangle $R_0$ in Fig. 7.1. To prove (7-5), we can make use of the following identity involving mutually exclusive events on the right side:

$$(x_1 < X(\xi) \le x_2, \; Y(\xi) \le y_2) = (x_1 < X(\xi) \le x_2, \; Y(\xi) \le y_1) \cup (x_1 < X(\xi) \le x_2, \; y_1 < Y(\xi) \le y_2).$$

[Fig. 7.1: the rectangle $R_0$ in the (X, Y) plane with corners $(x_1, y_1)$ and $(x_2, y_2)$.]
This gives

$$P(x_1 < X(\xi) \le x_2, \; Y(\xi) \le y_2) = P(x_1 < X(\xi) \le x_2, \; Y(\xi) \le y_1) + P(x_1 < X(\xi) \le x_2, \; y_1 < Y(\xi) \le y_2),$$

and the desired result in (7-5) follows by making use of (7-3) with $y = y_2$ and $y = y_1$ respectively.
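As a small illustration of (7-5), the sketch below (an assumed example, not from the original slides) takes the joint distribution function of two independent uniform r.vs on (0, 1), for which $F_{XY}(x, y) = xy$ on the unit square, and evaluates a rectangle probability directly from the four corner values.

```python
# Hypothetical example: rectangle probability via (7-5) for F_XY(x, y) = x*y
# (two independent uniform(0,1) r.vs; clipping handles arguments outside [0,1])
def F_XY(x, y):
    clip = lambda t: max(0.0, min(1.0, t))
    return clip(x) * clip(y)

x1, x2, y1, y2 = 0.2, 0.7, 0.1, 0.6

p_rect = F_XY(x2, y2) - F_XY(x2, y1) - F_XY(x1, y2) + F_XY(x1, y1)
print(p_rect)   # (0.7-0.2)*(0.6-0.1) = 0.25, as expected for independent uniforms
```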
Joint probability density function (Joint p.d.f)

By definition, the joint p.d.f of X and Y is given by

$$f_{XY}(x, y) = \frac{\partial^2 F_{XY}(x, y)}{\partial x \, \partial y},$$   (7-6)

and hence we obtain the useful formula

$$F_{XY}(x, y) = \int_{-\infty}^{x} \int_{-\infty}^{y} f_{XY}(u, v)\, du\, dv.$$   (7-7)

Using (7-2), we also get

$$\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f_{XY}(x, y)\, dx\, dy = 1.$$   (7-8)
To find the probability that (X, Y) belongs to an arbitrary region D, we can make use of (7-5) and (7-7). From (7-5) and (7-7),

$$P(x < X(\xi) \le x + \Delta x, \; y < Y(\xi) \le y + \Delta y) = F_{XY}(x + \Delta x, y + \Delta y) - F_{XY}(x, y + \Delta y) - F_{XY}(x + \Delta x, y) + F_{XY}(x, y) = \int_{x}^{x + \Delta x} \int_{y}^{y + \Delta y} f_{XY}(u, v)\, du\, dv = f_{XY}(x, y)\, \Delta x\, \Delta y.$$   (7-9)

Thus the probability that (X, Y) belongs to a differential rectangle $\Delta x\, \Delta y$ equals $f_{XY}(x, y) \cdot \Delta x\, \Delta y$, and repeating this procedure over the union of non-overlapping differential rectangles in D, we get the useful result

[Fig. 7.2: the region D in the (X, Y) plane covered by differential rectangles of size $\Delta x \times \Delta y$.]

$$P((X, Y) \in D) = \iint_{(x, y) \in D} f_{XY}(x, y)\, dx\, dy.$$   (7-10)
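The sketch below (a hypothetical check, assuming SciPy; the density and the region are my own choices, not from the slides) evaluates (7-10) numerically for the circularly symmetric joint p.d.f $f_{XY}(x, y) = e^{-(x^2 + y^2)/2}/2\pi$ with D taken as the unit disk; the result can be compared with the known closed form $1 - e^{-1/2} \approx 0.3935$.

```python
# Hypothetical evaluation of P((X,Y) in D) as a double integral of f_XY over D  (7-10)
import numpy as np
from scipy import integrate

def f_XY(y, x):
    # assumed joint p.d.f: two independent standard normal r.vs
    return np.exp(-(x**2 + y**2) / 2) / (2 * np.pi)

# D = unit disk: for each x in [-1, 1], y ranges over [-sqrt(1-x^2), +sqrt(1-x^2)]
p, _ = integrate.dblquad(f_XY, -1.0, 1.0,
                         lambda x: -np.sqrt(1 - x**2),
                         lambda x:  np.sqrt(1 - x**2))
print(p, 1 - np.exp(-0.5))   # both ~0.3935
```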

(iv) Marginal Statistics

In the context of several r.vs, the statistics of each individual one are called marginal statistics. Thus $F_X(x)$ is the marginal probability distribution function of X, and $f_X(x)$ is the marginal p.d.f of X. It is interesting to note that all marginals can be obtained from the joint p.d.f. In fact,

$$F_X(x) = F_{XY}(x, +\infty), \qquad F_Y(y) = F_{XY}(+\infty, y).$$   (7-11)

Also,

$$f_X(x) = \int_{-\infty}^{+\infty} f_{XY}(x, y)\, dy, \qquad f_Y(y) = \int_{-\infty}^{+\infty} f_{XY}(x, y)\, dx.$$   (7-12)

To prove (7-11), we can make use of the identity

$$(X \le x) = (X \le x) \cap (Y \le +\infty),$$
so that

$$F_X(x) = P(X \le x) = P(X \le x, \; Y \le \infty) = F_{XY}(x, +\infty).$$

To prove (7-12), we can make use of (7-7) and (7-11), which gives

$$F_X(x) = F_{XY}(x, +\infty) = \int_{-\infty}^{x} \int_{-\infty}^{+\infty} f_{XY}(u, y)\, du\, dy,$$   (7-13)

and taking derivative with respect to x in (7-13), we get

$$f_X(x) = \int_{-\infty}^{+\infty} f_{XY}(x, y)\, dy.$$   (7-14)

At this point, it is useful to know the formula for differentiation under integrals. Let

$$H(x) = \int_{a(x)}^{b(x)} h(x, y)\, dy.$$   (7-15)

Then its derivative with respect to x is given by

$$\frac{dH(x)}{dx} = \frac{db(x)}{dx}\, h(x, b) - \frac{da(x)}{dx}\, h(x, a) + \int_{a(x)}^{b(x)} \frac{\partial h(x, y)}{\partial x}\, dy.$$   (7-16)

Obvious use of (7-16) in (7-13) gives (7-14).
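As a sanity check on (7-16), the short sketch below (an illustrative example of my own choosing, assuming SymPy) compares direct differentiation of $H(x) = \int_0^{x^2} x\, y^2\, dy$ with the right-hand side of the differentiation-under-the-integral formula.

```python
# Hypothetical check of differentiation under the integral sign (7-16)
import sympy as sp

x, y = sp.symbols('x y', positive=True)
h = x * y**2                   # assumed integrand h(x, y)
a, b = sp.Integer(0), x**2     # assumed limits a(x) = 0, b(x) = x^2

H = sp.integrate(h, (y, a, b))           # H(x) = x^7 / 3
direct = sp.diff(H, x)                   # direct derivative: 7*x^6/3

leibniz = (sp.diff(b, x) * h.subs(y, b)
           - sp.diff(a, x) * h.subs(y, a)
           + sp.integrate(sp.diff(h, x), (y, a, b)))

print(sp.simplify(direct - leibniz))     # 0, so both sides of (7-16) agree
```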

If X and Y are discrete r.vs, then $p_{ij} = P(X = x_i, Y = y_j)$ represents their joint p.d.f, and their respective marginal p.d.fs are given by

$$P(X = x_i) = \sum_j P(X = x_i, Y = y_j) = \sum_j p_{ij}$$   (7-17)

and

$$P(Y = y_j) = \sum_i P(X = x_i, Y = y_j) = \sum_i p_{ij}.$$   (7-18)

Assuming that $P(X = x_i, Y = y_j)$ is written out in the form of a rectangular array, to obtain $P(X = x_i)$ from (7-17), one needs to add up all entries in the i-th row.

It used to be a practice for insurance companies routinely to scribble out these sum values in the left and top margins, thus suggesting the name marginal densities! (Fig. 7.3)

[Fig. 7.3: the rectangular array of entries $p_{ij}$, $i = 1, \ldots, m$, $j = 1, \ldots, n$, with the row sums $\sum_j p_{ij}$ recorded in the left margin and the column sums $\sum_i p_{ij}$ recorded in the top margin.]
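A minimal numerical sketch of (7-17)-(7-18) (the array values are an assumed example, not from the slides): given the joint probabilities written as a 2-D array, the marginals are just its row and column sums.

```python
# Hypothetical discrete joint p.m.f written as a rectangular array p[i, j] = P(X=x_i, Y=y_j)
import numpy as np

p = np.array([[0.10, 0.20, 0.10],
              [0.05, 0.25, 0.30]])

p_X = p.sum(axis=1)   # row sums: P(X = x_i) as in (7-17) -> [0.40, 0.60]
p_Y = p.sum(axis=0)   # column sums: P(Y = y_j) as in (7-18) -> [0.15, 0.45, 0.40]

print(p_X, p_Y, p.sum())   # the full array sums to 1
```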
From (7-11) and (7-12), the joint P.D.F and/or the joint p.d.f represent complete information about the r.vs, and their marginal p.d.fs can be evaluated from the joint p.d.f. However, given marginals, (most often) it will not be possible to compute the joint p.d.f. Consider the following example:

Example 7.1: Given

$$f_{XY}(x, y) = \begin{cases} \text{constant}, & 0 < x < y < 1, \\ 0, & \text{otherwise}. \end{cases}$$   (7-19)

Obtain the marginal p.d.fs $f_X(x)$ and $f_Y(y)$.

[Fig. 7.4: the shaded triangular region $0 < x < y < 1$ of the unit square on which $f_{XY}(x, y)$ is constant.]

Solution: It is given that the joint p.d.f $f_{XY}(x, y)$ is a constant in the shaded region in Fig. 7.4. We can use (7-8) to determine that constant c. From (7-8),

$$\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f_{XY}(x, y)\, dx\, dy = \int_{y=0}^{1}\left(\int_{x=0}^{y} c\, dx\right) dy = \int_{y=0}^{1} c\, y\, dy = \left.\frac{c\, y^2}{2}\right|_{0}^{1} = \frac{c}{2} = 1.$$   (7-20)
Thus c = 2. Moreover from (7-14),

$$f_X(x) = \int_{-\infty}^{+\infty} f_{XY}(x, y)\, dy = \int_{y=x}^{1} 2\, dy = 2(1 - x), \quad 0 < x < 1,$$   (7-21)

and similarly

$$f_Y(y) = \int_{-\infty}^{+\infty} f_{XY}(x, y)\, dx = \int_{x=0}^{y} 2\, dx = 2y, \quad 0 < y < 1.$$   (7-22)

Clearly, in this case, given $f_X(x)$ and $f_Y(y)$ as in (7-21)-(7-22), it will not be possible to obtain the original joint p.d.f in (7-19).
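The normalization constant and the marginals of Example 7.1 can be reproduced symbolically; the sketch below (assuming SymPy is available) solves (7-20) for c and then evaluates (7-21)-(7-22).

```python
# Symbolic check of Example 7.1: find c, then the marginal p.d.fs
import sympy as sp

x, y, c = sp.symbols('x y c', positive=True)

# total mass over the triangle 0 < x < y < 1 must equal 1   (7-20)
total = sp.integrate(sp.integrate(c, (x, 0, y)), (y, 0, 1))
c_val = sp.solve(sp.Eq(total, 1), c)[0]        # c = 2

f_X = sp.integrate(c_val, (y, x, 1))           # 2*(1 - x) for 0 < x < 1   (7-21)
f_Y = sp.integrate(c_val, (x, 0, y))           # 2*y       for 0 < y < 1   (7-22)
print(c_val, f_X, f_Y)
```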
Example 7.2: X and Y are said to be jointly normal (Gaussian) distributed, if their joint p.d.f has the following form:

$$f_{XY}(x, y) = \frac{1}{2\pi \sigma_X \sigma_Y \sqrt{1 - \rho^2}} \exp\!\left\{\frac{-1}{2(1 - \rho^2)}\left[\frac{(x - \mu_X)^2}{\sigma_X^2} - \frac{2\rho (x - \mu_X)(y - \mu_Y)}{\sigma_X \sigma_Y} + \frac{(y - \mu_Y)^2}{\sigma_Y^2}\right]\right\},$$   (7-23)

$$-\infty < x < +\infty, \quad -\infty < y < +\infty, \quad |\rho| < 1.$$
By direct integration, using (7-14) and completing the square in (7-23), it can be shown that

$$f_X(x) = \int_{-\infty}^{+\infty} f_{XY}(x, y)\, dy = \frac{1}{\sqrt{2\pi \sigma_X^2}}\, e^{-(x - \mu_X)^2 / 2\sigma_X^2} \sim N(\mu_X, \sigma_X^2),$$   (7-24)

and similarly

$$f_Y(y) = \int_{-\infty}^{+\infty} f_{XY}(x, y)\, dx = \frac{1}{\sqrt{2\pi \sigma_Y^2}}\, e^{-(y - \mu_Y)^2 / 2\sigma_Y^2} \sim N(\mu_Y, \sigma_Y^2).$$   (7-25)

Following the above notation, we will denote (7-23) as $N(\mu_X, \mu_Y, \sigma_X^2, \sigma_Y^2, \rho)$. Once again, knowing the marginals in (7-24) and (7-25) alone doesn't tell us everything about the joint p.d.f in (7-23).
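The marginal result (7-24) can be checked numerically; the sketch below (a hypothetical check, assuming SciPy, with arbitrarily chosen parameter values) integrates the jointly normal p.d.f (7-23) over y and compares the result with the $N(\mu_X, \sigma_X^2)$ density at a test point.

```python
# Hypothetical numerical check of (7-24): integrating out y from the jointly normal p.d.f
import numpy as np
from scipy import integrate, stats

mu_X, mu_Y, sig_X, sig_Y, rho = 1.0, -2.0, 1.5, 0.8, 0.6   # assumed parameters

def f_XY(x, y):
    # jointly normal p.d.f of (7-23)
    z = ((x - mu_X)**2 / sig_X**2
         - 2 * rho * (x - mu_X) * (y - mu_Y) / (sig_X * sig_Y)
         + (y - mu_Y)**2 / sig_Y**2)
    return np.exp(-z / (2 * (1 - rho**2))) / (2 * np.pi * sig_X * sig_Y * np.sqrt(1 - rho**2))

x0 = 0.5
marginal, _ = integrate.quad(lambda y: f_XY(x0, y), -np.inf, np.inf)
print(marginal, stats.norm(mu_X, sig_X).pdf(x0))   # the two values agree
```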
As we show below, the only situation where the marginal p.d.fs can be used to recover the joint p.d.f is when the random variables are statistically independent.
Independence of r.vs

Definition: The random variables X and Y are said to be statistically independent if the events $\{X(\xi) \in A\}$ and $\{Y(\xi) \in B\}$ are independent events for any two Borel sets A and B on the x and y axes respectively. Applying the above definition to the events $\{X(\xi) \le x\}$ and $\{Y(\xi) \le y\}$, we conclude that, if the r.vs X and Y are independent, then

$$P((X(\xi) \le x) \cap (Y(\xi) \le y)) = P(X(\xi) \le x)\, P(Y(\xi) \le y),$$   (7-26)

i.e.,

$$F_{XY}(x, y) = F_X(x)\, F_Y(y),$$   (7-27)

or equivalently, if X and Y are independent, then we must have

$$f_{XY}(x, y) = f_X(x)\, f_Y(y).$$   (7-28)
If X and Y are discrete-type r.vs, then their independence implies

$$P(X = x_i, Y = y_j) = P(X = x_i)\, P(Y = y_j) \quad \text{for all } i, j.$$   (7-29)

Equations (7-26)-(7-29) give us the procedure to test for independence. Given $f_{XY}(x, y)$, obtain the marginal p.d.fs $f_X(x)$ and $f_Y(y)$ and examine whether (7-28) or (7-29) is valid. If so, the r.vs are independent; otherwise they are dependent.

Returning to Example 7.1, from (7-19)-(7-22) we observe by direct verification that $f_{XY}(x, y) \ne f_X(x) f_Y(y)$. Hence X and Y are dependent r.vs in that case. It is easy to see that the same holds for Example 7.2 as well, unless $\rho = 0$. In other words, two jointly Gaussian r.vs as in (7-23) are independent if and only if the fifth parameter $\rho = 0$.
Example 7.3: Given

$$f_{XY}(x, y) = \begin{cases} x\, y^2 e^{-y}, & 0 < y < \infty, \; 0 < x < 1, \\ 0, & \text{otherwise}. \end{cases}$$   (7-30)

Determine whether X and Y are independent.

Solution:

$$f_X(x) = \int_{0}^{+\infty} f_{XY}(x, y)\, dy = x \int_{0}^{\infty} y^2 e^{-y}\, dy = x\left[\left.-y^2 e^{-y}\right|_{0}^{\infty} + 2\int_{0}^{\infty} y e^{-y}\, dy\right] = 2x, \quad 0 < x < 1.$$   (7-31)

Similarly,

$$f_Y(y) = \int_{0}^{1} f_{XY}(x, y)\, dx = \frac{y^2}{2}\, e^{-y}, \quad 0 < y < \infty.$$   (7-32)

In this case

$$f_{XY}(x, y) = f_X(x)\, f_Y(y),$$

and hence X and Y are independent random variables.
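Example 7.3 can also be verified symbolically; the sketch below (assuming SymPy) computes the two marginals of (7-30) and confirms that their product recovers the joint p.d.f, i.e. that (7-28) holds.

```python
# Symbolic independence check for Example 7.3
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f_XY = x * y**2 * sp.exp(-y)                 # joint p.d.f on 0 < x < 1, 0 < y < infinity  (7-30)

f_X = sp.integrate(f_XY, (y, 0, sp.oo))      # 2*x                (7-31)
f_Y = sp.integrate(f_XY, (x, 0, 1))          # y**2*exp(-y)/2     (7-32)

print(f_X, f_Y)
print(sp.simplify(f_X * f_Y - f_XY) == 0)    # True: f_XY factors, so X and Y are independent
```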
