Notes: Joint Probability and Independence for Continuous RVs

CS 3130 / ECE 3530: Probability and Statistics for Engineers


October 22, 2015
Joint Probability Density Functions. Remember that we can use a continuous random variable X to define events such as {a ≤ X ≤ b}, which is the event that X landed somewhere between a and b. Also, remember that the probability of such an event is computed by integrating the pdf for X, f(x):

    P(a ≤ X ≤ b) = ∫_a^b f(x) dx.

Just as in the discrete case, we can extend this concept to the case where we consider the joint probability of two continuous random variables. Let X and Y be two continuous random variables. Now an event for both random variables might be something of the form {a ≤ X ≤ b} ∩ {c ≤ Y ≤ d}, meaning the pair (X, Y) fell inside the box [a, b] × [c, d]. The joint pdf for X and Y is a function f(x, y) satisfying

1. f(x, y) ≥ 0, for all x, y
2. ∫∫ f(x, y) dx dy = 1, integrating over all of ℝ²
3. P(a ≤ X ≤ b, c ≤ Y ≤ d) = ∫_c^d ∫_a^b f(x, y) dx dy
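
As a quick concrete check of property 2, here is a minimal sympy sketch (using the example density that appears later in these notes, f(x, y) = 2y sin(x) on [0, π/2] × [0, 1]):

    import sympy as sp

    x, y = sp.symbols('x y')
    f = 2*y*sp.sin(x)  # the example joint pdf used later in these notes

    # Property 2: the joint pdf must integrate to 1 over its support.
    total = sp.integrate(f, (x, 0, sp.pi/2), (y, 0, 1))
    print(total)  # prints 1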

Double Integrals. Computing probabilities of events for joint random variables requires double integrals like the one in rule #3 above. Double integrals are not that scary (if you can integrate once, you can integrate twice!). Here is the procedure for evaluating the integral

    P(a ≤ X ≤ b, c ≤ Y ≤ d) = ∫_c^d ∫_a^b f(x, y) dx dy.

1. First evaluate the inside integral: ∫_a^b f(x, y) dx. Treat the variable y as if it were a constant (like you would the number 2). Evaluating the integral over the interval [a, b] will result in a function F(y), where the variable x does not appear.
2. Next evaluate the outside integral: ∫_c^d F(y) dy. This will result in the answer you are looking for.
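
Here is a minimal sympy sketch of the same two-step procedure, using a made-up joint pdf f(x, y) = x + y on [0, 1] × [0, 1] (a hypothetical density, not one from these notes) just to show the mechanics:

    import sympy as sp

    x, y = sp.symbols('x y')
    f = x + y  # hypothetical joint pdf on the unit square; it integrates to 1

    # Step 1: inside integral over x, treating y as a constant.
    F = sp.integrate(f, (x, 0, sp.Rational(1, 2)))  # a function of y only
    # Step 2: outside integral over y.
    p = sp.integrate(F, (y, 0, sp.Rational(1, 2)))
    print(p)  # 1/8, i.e. P(0 <= X <= 1/2, 0 <= Y <= 1/2)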
Example: Consider random variables X, Y with joint probability density function:

    f(x, y) = 2y sin(x)   for 0 ≤ x ≤ π/2, 0 ≤ y ≤ 1
    f(x, y) = 0           otherwise

What is P(0 ≤ X ≤ π/4, 0.5 ≤ Y ≤ 1)?

Let's go through the two steps above to evaluate this probability as a double integral:

    P(0 ≤ X ≤ π/4, 0.5 ≤ Y ≤ 1)
        = ∫_{0.5}^{1} ∫_{0}^{π/4} 2y sin(x) dx dy      (notice how the integration bounds are set up)
        = ∫_{0.5}^{1} [-2y cos(x)]_{x=0}^{x=π/4} dy    (integrating in x, and considering y constant)
        = ∫_{0.5}^{1} 2y (1 - √2/2) dy                 (evaluating cos(x) from 0 to π/4)
        = [y² (1 - √2/2)]_{y=0.5}^{y=1}                (integrating in y)
        = (1 - 1/4)(1 - √2/2)                          (evaluating y² from 0.5 to 1)
        = (6 - 3√2)/8 ≈ 0.2197                         (simplifying final answer)
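
If you want to sanity-check a hand computation like this numerically, scipy's dblquad evaluates the double integral directly (a quick cross-check, not part of the original derivation):

    import numpy as np
    from scipy.integrate import dblquad

    # dblquad(func, a, b, gfun, hfun) integrates func(y, x) with x in [a, b]
    # on the outside and y in [gfun(x), hfun(x)] on the inside.
    p, err = dblquad(lambda y, x: 2*y*np.sin(x),
                     0, np.pi/4,                    # outer: x from 0 to pi/4
                     lambda x: 0.5, lambda x: 1.0)  # inner: y from 0.5 to 1
    print(p)  # about 0.2197, matching (6 - 3*sqrt(2))/8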

Marginal Probabilities. Remember that for joint discrete random variables, the process of marginalizing
one of the variables just means to sum over it. For continuous random variables, we have the same process,
just replace a sum with an integral. So, to get the pdf for X or the pdf for Y from the joint pdf f (x, y), we
just integrate out the other variable:
    fX(x) = ∫ f(x, y) dy,   and   fY(y) = ∫ f(x, y) dx.

Example: The marginal pdfs for the above example are:

    fX(x) = ∫_{0}^{1} f(x, y) dy = ∫_{0}^{1} 2y sin(x) dy = [y² sin(x)]_{y=0}^{y=1} = sin(x)

    fY(y) = ∫_{0}^{π/2} f(x, y) dx = ∫_{0}^{π/2} 2y sin(x) dx = [-2y cos(x)]_{x=0}^{x=π/2} = 2y
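
The same marginalization in sympy (assuming the example density above) is just two one-variable integrals:

    import sympy as sp

    x, y = sp.symbols('x y')
    f = 2*y*sp.sin(x)  # the example joint pdf from these notes

    # Integrate out the other variable over its support.
    f_X = sp.integrate(f, (y, 0, 1))        # sin(x)
    f_Y = sp.integrate(f, (x, 0, sp.pi/2))  # 2*y
    print(f_X, f_Y)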

Conditional Probability. Conditional probability works much like the discrete case. For random variables X, Y with joint pdf f(x, y) and marginal pdfs fX(x) and fY(y), we define the conditional density function:

    f(x | Y = y) = f(x, y) / fY(y)   for all values of y where fY(y) ≠ 0
    f(x | Y = y) = 0                 otherwise

Now, conditional probabilities are found by integrating f(x | Y = y):

    P(a ≤ X ≤ b | Y = y) = ∫_a^b f(x | Y = y) dx.
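
As a sketch, here is the same formula in sympy, applied to the hypothetical f(x, y) = x + y density from the earlier sketch (the 1/2 upper bound is just an illustrative choice):

    import sympy as sp

    x, y = sp.symbols('x y')
    f = x + y  # the hypothetical unit-square density from the earlier sketch
    f_Y = sp.integrate(f, (x, 0, 1))  # y + 1/2

    f_cond = f / f_Y                  # f(x | Y = y)
    p = sp.integrate(f_cond, (x, 0, sp.Rational(1, 2)))
    print(sp.simplify(p))             # P(0 <= X <= 1/2 | Y = y) = (4*y + 1)/(8*y + 4)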

Example: Again, using the joint pdf in the example above, what is the conditional density f(x | Y = y)? We just need to use the formula we found above for fY(y):

    f(x | Y = y) = f(x, y) / fY(y) = 2y sin(x) / (2y) = sin(x)   (for 0 ≤ x ≤ π/2, 0 otherwise)

Notice that this turned out to be just fX(x). Let's try a more interesting example.
In-class Exercise: Given the joint density of X, Y:

    f(x, y) = x² + (4/3)xy + y²   for (x, y) ∈ [0, 1] × [0, 1]
    f(x, y) = 0                   otherwise

What are the marginal densities fX(x), fY(y), and what are the conditional densities f(x | Y = y) and f(y | X = x)?
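
After working the exercise by hand, a sympy sketch like this one (following the same pattern as above) can be used to check your answers:

    import sympy as sp

    x, y = sp.symbols('x y')
    f = x**2 + sp.Rational(4, 3)*x*y + y**2  # exercise density on the unit square

    f_X = sp.integrate(f, (y, 0, 1))  # marginal density of X
    f_Y = sp.integrate(f, (x, 0, 1))  # marginal density of Y
    print(f_X, f_Y)
    print(sp.simplify(f / f_Y))       # f(x | Y = y)
    print(sp.simplify(f / f_X))       # f(y | X = x)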
Independence. Again, independence is just the same as in the discrete case; we just have pdfs instead of pmfs. The three equivalent definitions for independence of X and Y are, for all possible x ∈ ℝ and y ∈ ℝ:

    f(x, y) = fX(x) fY(y)
    f(x | Y = y) = fX(x)
    f(y | X = x) = fY(y)

In-class Exercise: For the two joint densities in the examples above, determine if X and Y are independent.
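
For the first density, the factorization condition can be checked symbolically in a few lines (the second density is left for the exercise):

    import sympy as sp

    x, y = sp.symbols('x y')
    f = 2*y*sp.sin(x)
    f_X = sp.integrate(f, (y, 0, 1))
    f_Y = sp.integrate(f, (x, 0, sp.pi/2))

    # Independent iff the joint pdf factors into the product of the marginals.
    print(sp.simplify(f - f_X*f_Y) == 0)  # True here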
