Lecture 06 - Functions of Random Variables

This document provides an overview of functions of random variables. It discusses how, if X is a random variable and Y is defined as some function g of X (Y = g(X)), then Y is also a random variable. It shows how to derive the cumulative distribution function and probability density function of Y in terms of those of X. Specifically, it demonstrates that the pdf of Y, f_Y(y), can be written as the sum of the pdf of X, f_X(x), evaluated at the solutions of y = g(x), each divided by the absolute value of the derivative of g(x) with respect to x. Examples illustrate this for several functions g(x).

Uploaded by

aryhuh
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
47 views

Lecture 06 - Functions of Random Variables

This document provides an overview of functions of random variables. It discusses how if X is a random variable, and Y is defined as some function g of X (Y=g(X)), then Y is also a random variable. It shows how to derive the cumulative distribution function and probability density function of Y in terms of those of X. Specifically, it demonstrates that the pdf of Y, fY(y), can be written as the sum of the pdfs of X, fX(x), evaluated at the solutions to y=g(x), divided by the derivative of g(x) with respect to x. Examples are provided to illustrate this concept for different functions g(x).

Uploaded by

aryhuh
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 50

EEF 271E

PROBABILITY AND STATISTICS


FALL 2022

Lecture 06
Functions of Random Variables

Dr. Ramazan Çağlar


Lesson Overview

Functions of Random Variables

• Functions of a Random Variable

• Expectation

• General and Central Moments


FUNCTIONS of RANDOM VARIABLES

* X is a random variable.
* We know its cdf and pdf.

Let g : ℝ → ℝ be some function, and define

Y = g(X)

Then Y is also a random variable.


FUNCTIONS of RANDOM VARIABLES

A picture

[Figure: each outcome ωᵢ in the sample space Ω is mapped by X to a point xᵢ = X(ωᵢ) on the real line, and then by g(·) to yᵢ = g(xᵢ) on the real line, so that Y(ωᵢ) = g(X(ωᵢ)).]


What about the cdf and pdf of Y?
We can get these in terms of the cdf and pdf of X.


Example
Suppose

Y = X + a   (a is some constant)

Then

$$F_Y(y) = P(Y \le y) = P(X + a \le y) = P(X \le y - a) = F_X(y - a)$$

Differentiating the cdf w.r.t. y (here dy = dx):

$$f_Y(y) = \frac{d}{dy} F_X(y - a) = f_X(y - a)$$


Example

For instance, if X is uniformly distributed over [0, 1] and a = 1, then Y = X + 1 takes values in [1, 2].

[Figure: the cdf F_Y(y) is F_X(x) shifted right by 1, rising from 0 at y = 1 to 1 at y = 2; the pdf f_Y(y) is the unit-height rectangle of f_X(x) shifted from [0, 1] to [1, 2].]


Example
Suppose

Y = bX   (b > 0 is some constant)

Then

$$F_Y(y) = P(Y \le y) = P(bX \le y) = P\!\left(X \le \frac{y}{b}\right) = F_X\!\left(\frac{y}{b}\right)$$

Differentiating the cdf w.r.t. y (here dy = b·dx):

$$f_Y(y) = \frac{d}{dy} F_X\!\left(\frac{y}{b}\right) = \frac{1}{b}\, f_X\!\left(\frac{y}{b}\right)$$
For instance, if X is uniformly distributed over [0, 1] and b = 2, then Y = 2X takes values in [0, 2], and

$$f_Y(y) = \frac{1}{2}\, f_X\!\left(\frac{y}{2}\right)$$

[Figure: F_Y(y) rises from 0 at y = 0 to 1 at y = 2; f_Y(y) is a rectangle of height 0.5 on [0, 2].]

For instance, if X is uniformly distributed over [0, 1] and b = 0.5, then Y = 0.5·X takes values in [0, 0.5], and

$$f_Y(y) = \frac{1}{0.5}\, f_X\!\left(\frac{y}{0.5}\right) = 2\, f_X(2y)$$

[Figure: F_Y(y) rises from 0 at y = 0 to 1 at y = 0.5; f_Y(y) is a rectangle of height 2 on [0, 0.5].]
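As a quick numerical check of these shift-and-scale results, here is a minimal sketch (my addition, not part of the slides; it assumes Python with NumPy). It samples X ~ Uniform[0, 1], transforms it, and compares a normalized histogram of Y with the predicted densities f_X(y − a) and (1/b)·f_X(y/b):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1_000_000)   # X ~ Uniform[0, 1]

def f_X(t):
    """pdf of Uniform[0, 1]."""
    return np.where((t >= 0.0) & (t <= 1.0), 1.0, 0.0)

# Shift: Y = X + a  =>  f_Y(y) = f_X(y - a)
a = 1.0
counts, edges = np.histogram(x + a, bins=50, range=(1.0, 2.0))
mids = 0.5 * (edges[:-1] + edges[1:])
emp = counts / (x.size * np.diff(edges))     # empirical density of Y
print(np.abs(emp - f_X(mids - a)).max())     # small (Monte Carlo noise)

# Scale: Y = bX  =>  f_Y(y) = (1/b) f_X(y/b)
for b in (2.0, 0.5):
    counts, edges = np.histogram(b * x, bins=50, range=(0.0, b))
    mids = 0.5 * (edges[:-1] + edges[1:])
    emp = counts / (x.size * np.diff(edges))
    print(np.abs(emp - f_X(mids / b) / b).max())  # small for both b values
```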
The key step in the previous two examples is to go from

P(Y ≤ y)  to  P(X ≤ g⁻¹(y))

This may not always be possible:

• What if Y = −X ?
• What if Y = X² ?

So we need a more general approach. It turns out that we can derive the pdf of Y directly. …


Suppose Y = g(X), i.e., y = g(x).

[Figure: a curve y = g(x) crossing the horizontal level y at three points x₁, x₂, x₃; the strip (y, y + dy] pulls back to the three intervals (x₁, x₁ + dx₁], (x₂, x₂ + dx₂], (x₃, x₃ + dx₃].]

Let us say we want the pdf f_Y(y).

Known:

$$P(y < Y \le y + dy) \approx f_Y(y)\, dy$$


From the figure, the event

{y < Y ≤ y + dy}

is equivalent to the disjoint union

{x₁ < X ≤ x₁ + dx₁} ∪ {x₂ < X ≤ x₂ + dx₂} ∪ {x₃ < X ≤ x₃ + dx₃}

Thus

$$f_Y(y)\, |dy| \approx f_X(x_1)\, |dx_1| + f_X(x_2)\, |dx_2| + f_X(x_3)\, |dx_3|$$

(Absolute values are necessary because the incremental changes dxᵢ and dy can have different signs: where the slope of g is negative, as with dx₂ here, dxᵢ and dy have opposite signs.)
So

$$f_Y(y) = f_X(x_1)\left|\frac{dx_1}{dy}\right| + f_X(x_2)\left|\frac{dx_2}{dy}\right| + f_X(x_3)\left|\frac{dx_3}{dy}\right|$$

$$f_Y(y) = \frac{f_X(x_1)}{\left|\dfrac{dy}{dx}\right|_{x=x_1}} + \frac{f_X(x_2)}{\left|\dfrac{dy}{dx}\right|_{x=x_2}} + \frac{f_X(x_3)}{\left|\dfrac{dy}{dx}\right|_{x=x_3}}$$

(You can check this is exactly what we got with the "direct" approach earlier.)


Thus the conclusion is:
Suppose Y = g(X).

1. Given any y, solve for all solutions of y = g(x). Let them be {x₁, x₂, …, xₙ}.

2. Then

$$f_Y(y) = \sum_{k=1}^{n} \frac{f_X(x_k)}{\left|\dfrac{dy}{dx}\right|_{x=x_k}}$$
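To make the two-step recipe concrete, here is a small numerical sketch (my addition, assuming Python with NumPy; this particular g is illustrative, not from the slides). For Y = |X| with X ~ Uniform[−1, 1], the level y has two solutions x = ±y, both with |dy/dx| = 1, so the formula gives f_Y(y) = 1/2 + 1/2 = 1 on (0, 1):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=1_000_000)  # X ~ Uniform[-1, 1], so f_X = 1/2 on [-1, 1]
y = np.abs(x)                               # Y = g(X) = |X|

# Step 1: for y in (0, 1), y = |x| has two solutions: x1 = +y, x2 = -y.
# Step 2: |dy/dx| = 1 at both solutions, so
#   f_Y(y) = f_X(+y)/1 + f_X(-y)/1 = 1/2 + 1/2 = 1 on (0, 1).
counts, edges = np.histogram(y, bins=50, range=(0.0, 1.0))
emp = counts / (x.size * np.diff(edges))    # empirical density of Y
print(emp.round(2))                         # all entries close to 1.0
```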


Example

X is uniformly distributed over [0, 1], and Y = g(X), where g is the staircase function shown below.

[Figure: g(x) is a staircase on [0, 1] taking the value 0 for 0 ≤ x < 0.25, 0.25 for 0.25 ≤ x < 0.50, 0.50 for 0.50 ≤ x < 0.75, and 0.75 for 0.75 ≤ x ≤ 1.]


Example
This is an example where a continuous random variable is mapped to a discrete random variable.
Thus, Y can assume values only from {0, 0.25, 0.50, 0.75}:

P(Y = 0)    = P(0 ≤ X < 0.25)    = 0.25
P(Y = 0.25) = P(0.25 ≤ X < 0.50) = 0.25
P(Y = 0.50) = P(0.50 ≤ X < 0.75) = 0.25
P(Y = 0.75) = P(0.75 ≤ X ≤ 1.00) = 0.25

(Since X is continuous, including or excluding the interval endpoints does not change these probabilities.)

Thus, the probability mass function (pmf) of Y is

p_Y(0.00) = 0.25,  p_Y(0.25) = 0.25,  p_Y(0.50) = 0.25,  p_Y(0.75) = 0.25


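A short simulation confirms this pmf (my sketch, assuming Python with NumPy, and taking the staircase to be g(x) = ⌊4x⌋/4 clipped to 0.75, which matches the figure): quantizing uniform samples and tabulating the outcomes gives probability ≈ 0.25 for each of the four levels.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1_000_000)      # X ~ Uniform[0, 1)
y = np.minimum(np.floor(4.0 * x) / 4.0, 0.75)  # staircase g: quantize down to {0, 0.25, 0.5, 0.75}

values, counts = np.unique(y, return_counts=True)
for v, c in zip(values, counts):
    print(f"P(Y = {v:.2f}) ~= {c / x.size:.4f}")  # each ~0.25
```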
Example
X is uniform over [−1/2, 1/2]:

Y = g(X) = X²

[Figure: f_X(x) = 1 on [−1/2, 1/2].]

Here a continuous random variable (cts rv) is mapped to a cts rv. Note that Y takes on values in [0, 1/4].

Solutions for x: x₁ = +√y and x₂ = −√y.

Moreover, dy = 2x dx → dy/dx = 2x, and

f_X(x₁) = f_X(x₂) = 1.0   (from the graph)
Therefore,

$$f_Y(y) = \sum_{k=1}^{n} \frac{f_X(x_k)}{\left|\dfrac{dy}{dx}\right|_{x=x_k}}$$

$$f_Y(y) = \frac{f_X(x_1)}{\left|\dfrac{dy}{dx}\right|_{x=x_1}} + \frac{f_X(x_2)}{\left|\dfrac{dy}{dx}\right|_{x=x_2}} = \frac{1}{|2x_1|} + \frac{1}{|2x_2|} = \frac{1}{2\sqrt{y}} + \frac{1}{2\sqrt{y}} = \frac{1}{\sqrt{y}}$$


Thus,

$$f_Y(y) = \begin{cases} \dfrac{1}{\sqrt{y}}, & y \in \left(0, \tfrac{1}{4}\right] \\[1ex] 0, & \text{otherwise} \end{cases}$$

[Figure: f_Y(y) decreases from a vertical asymptote at y = 0 to the value 2 at y = 0.25.]

Note that f_Y(y) → ∞ as y → 0, but the area under f_Y(y) is still 1!
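Both claims are easy to check numerically (my sketch, assuming Python with NumPy): a normalized histogram of Y = X² tracks 1/√y away from 0, and the total probability mass is 1 even though the density is unbounded at the origin.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-0.5, 0.5, size=1_000_000)  # X ~ Uniform[-1/2, 1/2]
y = x**2                                    # Y takes values in [0, 1/4]

counts, edges = np.histogram(y, bins=50, range=(0.0, 0.25))
mids = 0.5 * (edges[:-1] + edges[1:])
emp = counts / (x.size * np.diff(edges))    # empirical density of Y

# Skip the first bin, where f_Y blows up and a bin average cannot match a point value:
print(np.abs(emp[1:] - 1.0 / np.sqrt(mids[1:])).max())  # small

# Total area is still 1: exactly, int_0^{1/4} y^(-1/2) dy = [2*sqrt(y)]_0^{1/4} = 1;
# empirically, the histogram bins capture all of the probability mass:
print((emp * np.diff(edges)).sum())         # ~1.0
```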


Example
X is uniform over [0, 2π]:

Y = g(X) = cos X

[Figure: y = cos(x) on [0, 2π]; a horizontal level y crosses the curve at x₀ and 2π − x₀.]

Given y ∈ (−1, 1), let x₀ ∈ (0, π) be such that cos(x₀) = y. Then {x₀, 2π − x₀} lists all x ∈ [0, 2π] such that cos(x) = y.


Then,

$$f_Y(y) = \frac{f_X(x_0)}{\left|\dfrac{dy}{dx}\right|_{x=x_0}} + \frac{f_X(2\pi - x_0)}{\left|\dfrac{dy}{dx}\right|_{x=2\pi - x_0}}$$

Since X is uniform over [0, 2π],

$$f_X(x) = \begin{cases} \dfrac{1}{2\pi}, & x \in [0, 2\pi] \\[1ex] 0, & x \notin [0, 2\pi] \end{cases}$$

So

$$f_Y(y) = \frac{1/2\pi}{|-\sin(x_0)|} + \frac{1/2\pi}{|-\sin(2\pi - x_0)|} = \frac{1}{\pi\,|\sin(x_0)|} = \frac{1}{\pi\sqrt{1 - \cos^2(x_0)}} = \frac{1}{\pi\sqrt{1 - y^2}}$$
Then,

$$f_Y(y) = \begin{cases} \dfrac{1}{\pi\sqrt{1 - y^2}}, & y \in (-1, 1) \\[1ex] 0, & \text{otherwise} \end{cases}$$

[Figure: f_Y(y) is U-shaped on (−1, 1), with minimum 1/π at y = 0 and vertical asymptotes at y = ±1.]


$$F_Y(y) = \int_{-\infty}^{y} f_Y(\tau)\, d\tau = \int_{-1}^{y} \frac{1}{\pi\sqrt{1 - \tau^2}}\, d\tau = \frac{1}{\pi}\sin^{-1}(y) + \frac{1}{2} \quad \text{for } y \in [-1, 1]$$

(using the principal branch of sin⁻¹, whose range is [−π/2, π/2], so that F_Y(−1) = 0 and F_Y(1) = 1)

So

$$F_Y(y) = \begin{cases} 0, & y < -1 \\[0.5ex] \dfrac{1}{\pi}\sin^{-1}(y) + \dfrac{1}{2}, & y \in [-1, 1] \\[1ex] 1, & y > 1 \end{cases}$$
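This closed form is easy to validate by simulation (my sketch, assuming Python with NumPy): the empirical cdf of Y = cos(X) should match (1/π)·sin⁻¹(y) + 1/2.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0 * np.pi, size=1_000_000)  # X ~ Uniform[0, 2*pi]
y = np.cos(x)

grid = np.linspace(-0.99, 0.99, 9)
F_emp = np.array([(y <= t).mean() for t in grid])  # empirical cdf of Y
F_th = np.arcsin(grid) / np.pi + 0.5               # (1/pi) sin^{-1}(y) + 1/2
print(np.abs(F_emp - F_th).max())                  # ~1e-3 (Monte Carlo error)
```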


EXPECTATION
General and Central Moments
Mean and Variance

Mean and Variance of a discrete r.v.
Definition:
Let X be a discrete random variable. The mean or expected value of X is denoted by μ_X or E(X), and is defined as

$$\mu_X = E(X) = \sum_x x\, f_X(x) = \sum_i x_i\, P(X = x_i)$$

The variance of X, denoted by σ_X² or V(X) or Var(X), is defined as

$$\sigma_X^2 = E\!\left[(X - \mu_X)^2\right] = \sum_x (x - \mu_X)^2 f_X(x) = \sum_i (x_i - \mu_X)^2 P(X = x_i) = \sum_i (x_i - \mu_X)^2\, p(x_i)$$
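These definitions translate directly into code. Here is a minimal sketch (my addition, assuming Python with NumPy) for a fair six-sided die, where μ_X = 3.5 and σ_X² = 35/12 ≈ 2.9167:

```python
import numpy as np

x = np.arange(1, 7)         # possible values of a fair die
p = np.full(6, 1.0 / 6.0)   # pmf: P(X = x_i) = 1/6

mu = np.sum(x * p)                 # mean: sum of x_i * P(X = x_i)
var = np.sum((x - mu) ** 2 * p)    # variance: sum of (x_i - mu)^2 * P(X = x_i)
print(mu, var)                     # 3.5  2.9166...  (= 35/12)
```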
Mean and Variance of a discrete r.v.

Intuitive explanation of the mean:

Let X be a discrete random variable taking n values:

X = {x₁, x₂, …, xₙ}

The arithmetic mean of these values is

$$\mu_X = \frac{x_1 + x_2 + \cdots + x_n}{n} = \frac{1}{n}\sum_{i=1}^{n} x_i$$
Mean and Variance of a discrete r.v.
Intuitive explanation of the mean:
Now let us consider the case where some of the elements are repeated (observed more than once).

Let fᵢ denote the frequency of the i-th element, that is, the number of repetitions of xᵢ:

X = { x₁, …, x₁ (f₁ times); x₂, …, x₂ (f₂ times); …; x_m, …, x_m (f_m times) }

where

$$m < n, \qquad n = \sum_{i=1}^{m} f_i$$
Mean and Variance of a discrete r.v.
Therefore the arithmetic mean is

$$\mu_X = \frac{f_1 x_1 + f_2 x_2 + \cdots + f_m x_m}{n} = \frac{1}{n}\sum_{i=1}^{m} f_i x_i = \sum_{i=1}^{m} \frac{f_i}{n}\, x_i$$

Now use the frequency interpretation of probability:

$$P(X = x_i) = p(x_i) = \lim_{n \to \infty} \frac{f_i}{n}$$

Therefore we have

$$\mu_X = E(X) = \sum_{i=1}^{m} x_i \cdot p_i$$

Done!
Mean and Variance of a continuous r.v.
Definition:
Let X be a continuous random variable. The mean or expected value of X is denoted by μ_X or E[X], and is defined as

$$E[X] = \int_{-\infty}^{\infty} x\, f_X(x)\, dx$$

The variance of X, denoted by σ_X² or Var(X), is defined as

$$\sigma_X^2 = E\!\left[(X - \mu_X)^2\right] = \int_{-\infty}^{\infty} (x - \mu_X)^2 f_X(x)\, dx$$
The idea of the expected value of a random variable can be generalized to the expected value of a function of a random variable. This is done through the definition of the "expectation operator":

$$E[g(X)] = \int_{-\infty}^{\infty} g(t)\, f_X(t)\, dt$$

Thus, the expectation operator E(·) takes as argument a function (g here) and returns a real number. Once again, for a discrete random variable,

$$E[g(X)] = \sum_k g(x_k) \cdot p_X(x_k)$$
$$E[g(X)] = \begin{cases} \displaystyle\sum_k g(x_k)\, p_X(x_k), & \text{if } X \text{ is discrete} \\[2ex] \displaystyle\int_{-\infty}^{\infty} g(x)\, f_X(x)\, dx, & \text{if } X \text{ is continuous} \end{cases}$$

E[g(X)] is finite if E[|g(X)|] is finite.
Laws of Expected Value
Let X be a r.v., g(X) a function of X, and a and b real constants, i.e., a, b ∈ ℝ. The expectation operator is linear:

• E[a] = a
• E[aX + b] = a·E[X] + b
• E[a·g(X)] = a·E[g(X)]
• E[a·g₁(X) ± b·g₂(X)] = a·E[g₁(X)] ± b·E[g₂(X)]
Expectation:
Suppose Y = g(X). Then we can compute E[Y] in two different ways.

1) Use the expectation operator directly:

$$E[Y] = E[g(X)] = \int_{-\infty}^{\infty} g(t)\, f_X(t)\, dt$$

2) First find f_Y(y); then

$$E[Y] = \int_{-\infty}^{\infty} t\, f_Y(t)\, dt$$
Which one of these two methods is correct?

Fortunately, both are correct; in other words, they both give the same answer.
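The two routes are easy to compare numerically (my sketch, assuming Python with NumPy), e.g. for X ~ Uniform[0, 1] and Y = X², where f_Y(y) = 1/(2√y) on (0, 1]; both integrals equal E[X²] = 1/3.

```python
import numpy as np

# X ~ Uniform[0, 1] and Y = g(X) = X^2, for which f_Y(y) = 1/(2*sqrt(y)) on (0, 1].
n = 1_000_000
dt = 1.0 / n
t = (np.arange(n) + 0.5) * dt   # midpoint grid on (0, 1)

e1 = np.sum(t**2) * dt                    # method 1: int g(t) f_X(t) dt, with f_X = 1
e2 = np.sum(t / (2.0 * np.sqrt(t))) * dt  # method 2: int t f_Y(t) dt
print(e1, e2)                             # both ~ 1/3
```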
Why?
The answer goes back to how we derived f_Y(y) from f_X(x). Consider, for simplicity, the special case where

• y = g(x) has only one solution x₀ for any given y,
• dy/dx > 0.

Then

$$f_Y(y)\, dy = f_X(x)\, dx$$

or

$$f_Y(g(x))\, g'(x)\, dx = f_X(x)\, dx$$
Now take the second expression for E[Y], i.e.,

$$E[Y] = \int_{-\infty}^{\infty} t\, f_Y(t)\, dt$$

Let t = g(ω) (change of variables), so dt = g′(ω) dω. Then the right-hand side (RHS) becomes

$$\int_{-\infty}^{\infty} g(\omega)\, f_Y(g(\omega))\, g'(\omega)\, d\omega = \int_{-\infty}^{\infty} g(\omega)\, f_X(\omega)\, d\omega$$

Done!
General and Central Moments
The n-th general moment of X is defined as E[Xⁿ]:

$$E[X^n] = \begin{cases} \displaystyle\sum_k x_k^n\, p_X(x_k), & \text{if } X \text{ is discrete} \\[2ex] \displaystyle\int_{-\infty}^{\infty} x^n f_X(x)\, dx, & \text{if } X \text{ is continuous} \end{cases}$$
Moments
General and Central Moments
The expectations of certain special functions of X have special names.

• The moment definition is essentially based on a reference point.
• For the general moments the reference point is 0 (zero): E[(X − 0)ⁿ] = E[Xⁿ].
• For the central moments the reference point is the mean value.
Moments
General and Central Moments
General Moments:
The first general moment, E(X), is denoted μ_X or X̄; it is simply the expected value of X, and is also called the "mean" or "average value" of X.
The second general moment, E[X²], is called the "mean square value".
The 3rd general moment of X is E[X³], and so on; the n-th general moment is E[Xⁿ].
General and Central Moments

Central Moments:
The n-th central moment of X is defined as

$$E\!\left[(X - \mu_X)^n\right]$$
General and Central Moments
Second Central Moment (Variance):
The second central moment,

$$E\!\left[(X - \mu_X)^2\right]$$

is called the variance. It is the average squared deviation from the mean; in other words, the variance is the mean square value of the deviations of the random variable from its mean. It is denoted by Var(X):

$$\mathrm{Var}(X) = E\!\left[(X - \mu_X)^2\right]$$
General and Central Moments
Variance and standard deviation:
The square root of the variance is called the "standard deviation". It is denoted by σ_X (for a large group or population) or s_X (for a sample drawn from a population). It measures the "spread" of the values of X.

$$\sigma_X = \sqrt{\mathrm{Var}(X)} = \sqrt{E\!\left[(X - \mu_X)^2\right]}, \qquad s_X = \sqrt{E\!\left[(X - \bar{x})^2\right]}$$

By the way, in statistics the variance of a sample is calculated differently from that of a population, because of the degrees of freedom assigned to a sample. We will see this in the statistics part of the course.
General and Central Moments
Variance (continued):
Expanding the second central moment:

$$\begin{aligned} E\!\left[(X - \mu_X)^2\right] &= E\!\left[X^2 - 2\mu_X X + \mu_X^2\right] \\ &= E[X^2] - E[2\mu_X X] + E[\mu_X^2] \\ &= E[X^2] - 2\mu_X E[X] + \mu_X^2 \\ &= E[X^2] - 2\mu_X^2 + \mu_X^2 \\ &= E[X^2] - \mu_X^2 \end{aligned}$$

$$E\!\left[(X - \mu_X)^2\right] = E[X^2] - \mu_X^2 \qquad \text{(a useful identity)}$$
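The identity also holds for sample averages, so it can be checked in two lines on random data (my sketch, assuming Python with NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=0.5, size=1_000_000)   # any sample works here
lhs = np.mean((x - x.mean()) ** 2)               # E[(X - mu_X)^2]
rhs = np.mean(x**2) - x.mean() ** 2              # E[X^2] - mu_X^2
print(lhs, rhs)                                  # equal up to floating-point rounding
```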
Mean and Variance
The mean is a weighted average of the possible values of X, where the weights are the corresponding probabilities. The mean therefore describes the "center" of the distribution; in other words, X takes values "around" its mean.

The variance measures the dispersion of X around the mean. If this value is large, X varies a lot.

Important: The variance is ALWAYS non-negative, that is, Var(X) ≥ 0.
Properties of Variance
Let X be a r.v., and a, b and c real constants, i.e., a, b, c ∈ ℝ.

• Var(X) ≥ 0
• Var(c) = 0
• Var(X + c) = Var(X)
• Var(aX) = a²·Var(X)
• Var(aX + b) = a²·Var(X)
• σ_{aX+b} = √Var(aX + b) = |a|·σ_X
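These properties can be spot-checked on samples (my sketch, assuming Python with NumPy), here with the illustrative choices a = −3 and b = 7:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)
a, b = -3.0, 7.0
print(np.var(a * x + b), a**2 * np.var(x))    # Var(aX + b) = a^2 Var(X)
print(np.std(a * x + b), abs(a) * np.std(x))  # sigma_{aX+b} = |a| sigma_X
```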
General and Central Moments
Example: Let

$$f_X(x) = \begin{cases} 0, & \text{if } x < 0 \\ 2e^{-2x}, & \text{if } x \ge 0 \end{cases} \qquad \text{(in general: } f_X(x) = \lambda e^{-\lambda x},\ x \ge 0\text{)}$$

a) Find E[X].

Using integration by parts, ∫ u dv = uv − ∫ v du, with u = t (so du = dt) and dv = 2e^{−2t} dt (so v = −e^{−2t}):

$$E[X] = \int_{-\infty}^{\infty} t\, f_X(t)\, dt = \int_0^{\infty} t \cdot 2e^{-2t}\, dt = \left[-t\, e^{-2t}\right]_0^{\infty} + \int_0^{\infty} e^{-2t}\, dt = \frac{1}{2}$$

(In general, E[X] = 1/λ.)
General and Central Moments
Example: Let

$$f_X(x) = \begin{cases} 0, & \text{if } x < 0 \\ 2e^{-2x}, & \text{if } x \ge 0 \end{cases}$$

b) Find the mean square value E[X²].

Integration by parts with u = t² (so du = 2t dt) and dv = 2e^{−2t} dt (so v = −e^{−2t}):

$$E[X^2] = \int_{-\infty}^{\infty} t^2 f_X(t)\, dt = \int_0^{\infty} t^2 \cdot 2e^{-2t}\, dt = \left[-t^2 e^{-2t}\right]_0^{\infty} + \int_0^{\infty} 2t\, e^{-2t}\, dt = \frac{1}{2}$$
Mean and Variance of a Random Variable
General and Central Moments
Example: Let

$$f_X(x) = \begin{cases} 0, & \text{if } x < 0 \\ 2e^{-2x}, & \text{if } x \ge 0 \end{cases}$$

c) Find the variance, E[(X − μ_X)²].

From parts (a) and (b): μ_X = 1/2 and E[X²] = 1/2. Using the identity,

$$\mathrm{Var}(X) = E\!\left[(X - \mu_X)^2\right] = E[X^2] - \mu_X^2 = \frac{1}{2} - \left(\frac{1}{2}\right)^2 = \frac{1}{4}$$
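The whole worked example can also be reproduced by simulation (my sketch, assuming Python with NumPy): for the exponential pdf with λ = 2, sample averages match E[X] = 1/2, E[X²] = 1/2, and Var(X) = 1/4.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
x = rng.exponential(scale=1.0 / lam, size=2_000_000)  # f_X(x) = lam * exp(-lam * x), x >= 0
print(x.mean())       # ~0.5  = 1/lam     (part a)
print((x**2).mean())  # ~0.5  = 2/lam^2   (part b)
print(x.var())        # ~0.25 = 1/lam^2   (part c)
```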
