
EG-EE 323

Dr. S.G. Shahi


Chapter 3 – Expected Value:
Definition:
The process of averaging when a random variable is involved is called expectation.
Consider a random variable $X$ and a function $g(X)$; the expected value of $g(X)$ is
defined as:

$\overline{g(X)} = E_X[g(X)] = \int_{-\infty}^{\infty} g(x) f(x)\,dx$ : continuous random variable

$\overline{g(X)} = E_X[g(X)] = \sum_{i=1}^{n} g(x_i)\,P_X(x_i)$ : discrete random variable

Mean: $g(X) = X$
• The mean is the weighted average of all $x_i$ values for a discrete random
variable.
• The mean is the center of gravity of the probability density function.

$E[X] = \mu_X = \bar{X} = \sum_{i=1}^{n} x_i\,P_X(x_i)$ : discrete random variable

$E[X] = \bar{X} = \int_{-\infty}^{\infty} x\,f_X(x)\,dx$ : continuous random variable

Example:
You are playing a game with a friend, throwing a pair of dice. If the sum of the
up faces is a prime number, you win that number of dollars; if it is not a prime
number, you lose that number of dollars. How much do you expect to win or lose?

$X \in \{2, 3, 4, \ldots, 12\}$; the prime sums are 2, 3, 5, 7, 11.

$E[X] = \sum_{\text{all } i} x_i P(x_i) = \frac{1}{36}(2) + \frac{2}{36}(3) - \frac{3}{36}(4) + \frac{4}{36}(5) - \frac{5}{36}(6) + \frac{6}{36}(7) - \frac{5}{36}(8) - \frac{4}{36}(9) - \frac{3}{36}(10) + \frac{2}{36}(11) - \frac{1}{36}(12) = -1.89 \quad \text{(LOSE)}$
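The dice-game expectation above can be checked exactly by enumerating all 36 equally likely outcomes. This is a minimal sketch, not part of the notes:

```python
from fractions import Fraction
from itertools import product

# Exact expectation of the dice game: win the sum if it is prime, lose it otherwise.
primes = {2, 3, 5, 7, 11}
total = Fraction(0)
for d1, d2 in product(range(1, 7), repeat=2):   # 36 equally likely outcomes
    s = d1 + d2
    total += Fraction(s if s in primes else -s, 36)
print(float(total))  # -68/36 ≈ -1.89, a loss
```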

Variance: $g(X) = (X - \bar{X})^2$

$VAR[X] = \sigma_X^2 = E_X[(X - \bar{X})^2] = \sum_{i=1}^{n} (x_i - \bar{X})^2\,P(x_i)$
or:
$\sigma_X^2 = \int_{-\infty}^{\infty} (x - \bar{X})^2 f(x)\,dx$

1) Variance is a measure of the spread of the random variable $X$ about its mean
value.
2) Variance is a positive value.
3) Variance is the moment of inertia of the probability density function about
the mean.
4) If $\sigma_X^2 = 0$, then $P(X = \bar{X}) = 1 \Rightarrow X$ is a constant.

Note:
If the probability density function is even symmetric about $x = a$, then
$\mu_X = a$, since $f(a - x) = f(a + x)$ implies:

$\int_{-\infty}^{\infty} (x - a) f(x)\,dx = 0 \;\Rightarrow\; \underbrace{\int_{-\infty}^{\infty} x f(x)\,dx}_{\mu_X} - a \underbrace{\int_{-\infty}^{\infty} f(x)\,dx}_{1} = 0 \;\rightarrow\; \mu_X = a$

Example:
Let the random variable $X$ be uniformly distributed on the interval $[a, b]$.
Find the mean and variance of $X$.

Mean $= \mu_X = E[X] = \int_a^b x \left(\frac{1}{b-a}\right) dx = \frac{a+b}{2}$ (the midpoint)

$Var[X] = \sigma_X^2 = E[(X - \bar{X})^2] = \int_a^b (x - \bar{X})^2 \frac{1}{b-a}\,dx = \int_a^b \left(x - \frac{a+b}{2}\right)^2 \frac{1}{b-a}\,dx$

Let $y \triangleq x - \frac{a+b}{2}$; then $dx = dy$ and

$\sigma_X^2 = \int_{-(b-a)/2}^{(b-a)/2} \frac{y^2}{b-a}\,dy = \frac{(b-a)^2}{12}$
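A Monte Carlo sketch (not part of the notes; the endpoints 2 and 10 are arbitrary) confirms the uniform mean $(a+b)/2$ and variance $(b-a)^2/12$:

```python
import random

# Sample X ~ uniform[a, b] and compare the empirical mean and variance
# against (a+b)/2 and (b-a)^2/12 derived above.
def uniform_moments(a, b, n=200_000, seed=1):
    rng = random.Random(seed)
    xs = [rng.uniform(a, b) for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean, var

mean, var = uniform_moments(2.0, 10.0)
print(mean, var)  # close to 6.0 and 64/12 ≈ 5.333
```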

Mode:
The mode is the value of $X$ at which the probability density function $f(x)$ is
maximum. For the normal distribution, the mode and the mean are the same
quantity.

Median:
The median is the value $x_m$ of $X$ where:

$F_X(x_m) = P(X \le x_m) = \int_{-\infty}^{x_m} f(x)\,dx = \frac{1}{2}$

Standard Deviation:
The positive square root of the variance of a random variable $X$ is called its
standard deviation:

$SDV[X] = \sigma_X = +\sqrt{\sigma_X^2}$

Moments:
The $k$-th moment about the origin of a random variable $X$ is:

$m_k = E[X^k] = \int_{-\infty}^{\infty} x^k f(x)\,dx$ : continuous random variable

$m_k = E[X^k] = \sum_{i=1}^{n} x_i^k\,P(x_i)$ : discrete random variable

Central Moments:

$\lambda_k = E[(X - \mu_X)^k] = \int_{-\infty}^{\infty} (x - \mu_X)^k f(x)\,dx$

or $\lambda_k = \sum_{i=1}^{n} (x_i - \bar{X})^k\,P(x_i)$

Example:
$m_1$ = mean
$m_2 = E[X^2]$ = mean-square value
$\lambda_0 = 1$
$\lambda_1 = 0$
$\lambda_2 = \sigma_X^2$ = variance
$\lambda_3 = E[(X - \bar{X})^3]$

$\lambda_3$ is a measure of the asymmetry of $f(x)$ about the mean. The normalized
third central moment (i.e., $\lambda_3 / \sigma_X^3$) is called the skewness or
coefficient of skewness. If the probability density function is symmetric about
$x = \bar{X}$, then $\lambda_3 = 0$.

Properties of Expected Values:

1) $E_X[C] = C$, where $C$ is a constant, since:

$E_X[C] = \int_{-\infty}^{\infty} C f(x)\,dx = C \underbrace{\int_{-\infty}^{\infty} f(x)\,dx}_{1} = C$

2) $E[X_1 + X_2] = E[X_1] + E[X_2]$, since:

$E[X_1 + X_2] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x_1 + x_2) f(x_1, x_2)\,dx_1\,dx_2$
$= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_1 f(x_1, x_2)\,dx_1\,dx_2 + \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_2 f(x_1, x_2)\,dx_1\,dx_2$
$= \int_{-\infty}^{\infty} x_1 f(x_1)\,dx_1 + \int_{-\infty}^{\infty} x_2 f(x_2)\,dx_2$
$= E[X_1] + E[X_2]$

In general:

$E\left[\sum_{i=1}^{n} X_i\right] = \sum_{i=1}^{n} E[X_i]$

3) Let the coefficients $a_i$ be constants for all $i$; then:

$\sum_{i=1}^{n} E[a_i X_i] = \sum_{i=1}^{n} a_i E[X_i]$

4) For two independent random variables $X_1$ and $X_2$:

$E[X_1 X_2] = E[X_1] E[X_2]$, since:

$E[X_1 X_2] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_1 x_2 f(x_1, x_2)\,dx_1\,dx_2 = \int_{-\infty}^{\infty} x_1 f(x_1)\,dx_1 \int_{-\infty}^{\infty} x_2 f(x_2)\,dx_2 = E[X_1] E[X_2]$

In general, for $n$ independent random variables $X_1, \ldots, X_n$:

$E[X_1 \cdots X_n] = E\left[\prod_{i=1}^{n} X_i\right] = \prod_{i=1}^{n} E[X_i]$

5) Let $X_1$ and $X_2$ be zero-mean independent random variables; then:

$E[(X_1 + X_2)^2] = E[X_1^2] + E[X_2^2]$

In general, if $X_i$ ($i = 1, 2, 3, \ldots, n$) are $n$ independent zero-mean
random variables:

$E\left[\left(\sum_{i=1}^{n} X_i\right)^2\right] = \sum_{i=1}^{n} E[X_i^2]$

The expected value of the square of the sum of $n$ independent zero-mean random
variables equals the sum of their mean-square values.
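Property 5 can be checked empirically. A Monte Carlo sketch (not part of the notes; the two Gaussian variances 1 and 4 are arbitrary choices):

```python
import random

# For independent zero-mean X1, X2: E[(X1 + X2)^2] should equal E[X1^2] + E[X2^2].
rng = random.Random(0)
n = 100_000
x1 = [rng.gauss(0, 1) for _ in range(n)]   # zero mean, variance 1
x2 = [rng.gauss(0, 2) for _ in range(n)]   # zero mean, variance 4, independent of x1

mean = lambda xs: sum(xs) / len(xs)
lhs = mean([(a + b) ** 2 for a, b in zip(x1, x2)])
rhs = mean([a * a for a in x1]) + mean([b * b for b in x2])
print(lhs, rhs)  # both close to 1 + 4 = 5
```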

6) Another form for calculating the variance:

$\sigma_X^2 = E[(X - \mu_X)^2] = E[X^2] - \mu_X^2$, since:

$\sigma_X^2 = E[X^2 + \mu_X^2 - 2X\mu_X] = E[X^2] + \mu_X^2 - 2\mu_X \underbrace{E[X]}_{\mu_X} = E[X^2] - \mu_X^2 = m_2 - m_1^2$
Example:
Consider two random variables $X$ and $Y$ related by the linear transformation
$Y = aX + b$. Find the mean and variance of $Y$ in terms of the mean and
variance of $X$.

$\mu_Y = E[Y] = E[aX + b] = a\mu_X + b$
$\sigma_Y^2 = E[Y^2] - \mu_Y^2$
$E[Y^2] = E[(aX + b)^2] = a^2 E[X^2] + b^2 + 2ab\,E[X]$
$\sigma_Y^2 = a^2 E[X^2] + b^2 + 2ab\mu_X - [a^2\mu_X^2 + b^2 + 2ab\mu_X] = a^2[E[X^2] - \mu_X^2] = a^2\sigma_X^2$

Example:
Consider a Bernoulli random variable $X$ which takes the two values 0 and 1 with
$P(X = 1) = p$ and $P(X = 0) = q$. Find the mean and variance of $X$.

$\mu_X = E[X] = \sum_{\text{all } i} x_i P(x_i) = (1)(p) + (0)(q) = p$
$\sigma_X^2 = E[X^2] - \mu_X^2$
$m_2 = E[X^2] = \sum_{\text{all } i} x_i^2 P(x_i) = (1)^2 p + (0)^2 q = p$
$\sigma_X^2 = p - p^2 = p(1 - p) = pq$
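The Bernoulli moment sums are small enough to evaluate exactly. A sketch (not part of the notes; $p = 3/10$ is an arbitrary choice):

```python
from fractions import Fraction

# Mean and variance from the discrete expectation sums, in exact fractions.
def discrete_moments(pmf):
    """pmf: list of (value, probability) pairs."""
    mean = sum(x * p for x, p in pmf)
    m2 = sum(x * x * p for x, p in pmf)
    return mean, m2 - mean ** 2

p = Fraction(3, 10)
mean, var = discrete_moments([(1, p), (0, 1 - p)])
print(mean, var)  # mean = p = 3/10, variance = p*q = 21/100
```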

Example: For a binomial random variable $X$, calculate the mean and variance.
Recall that $P(X = k) = \binom{n}{k} p^k q^{n-k}$, $k = 0, 1, \ldots, n$.

Mean:

$\mu_X = E[X] = \sum_{\text{all } i} x_i P(x_i) = \sum_{k=0}^{n} k\,P_X(k) = \sum_{k=0}^{n} k \binom{n}{k} p^k q^{n-k}$

$= \sum_{k=0}^{n} k \frac{n!}{(n-k)!\,k!}\,p^k q^{n-k} = np \sum_{k=1}^{n} \frac{(n-1)!}{(n-k)!\,(k-1)!}\,p^{k-1} q^{n-k}$

$= np \sum_{k=1}^{n} \binom{n-1}{k-1} p^{k-1} q^{n-k} = np\,(p + q)^{n-1} = np$

Second moment:

$m_2 = E[X^2] = \sum_{k=0}^{n} k^2 \binom{n}{k} p^k q^{n-k} = \sum_{k=0}^{n} k(k-1)\binom{n}{k} p^k q^{n-k} + \underbrace{\sum_{k=0}^{n} k \binom{n}{k} p^k q^{n-k}}_{np}$

$= n(n-1)p^2 \sum_{k=2}^{n} \frac{(n-2)!}{(k-2)!\,(n-k)!}\,p^{k-2} q^{n-k} + np$

$= n(n-1)p^2 (p + q)^{n-2} + np = n(n-1)p^2 + np$

$\sigma_X^2 = m_2 - m_1^2 = n^2 p^2 + np - np^2 - n^2 p^2 = np(1 - p) = npq$

Therefore the mean and variance of a binomial random variable $X$ are:
$\mu_X = np$
$\sigma_X^2 = npq$
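The closed forms $np$ and $npq$ can be checked against the raw moment sums. A sketch (not part of the notes; $n = 10$, $p = 1/4$ are arbitrary):

```python
from fractions import Fraction
from math import comb

# Mean and variance of the binomial pmf, computed directly from the moment sums.
def binomial_moments(n, p):
    q = 1 - p
    pmf = [(k, comb(n, k) * p**k * q**(n - k)) for k in range(n + 1)]
    mean = sum(k * pk for k, pk in pmf)
    m2 = sum(k * k * pk for k, pk in pmf)
    return mean, m2 - mean**2

n, p = 10, Fraction(1, 4)
mean, var = binomial_moments(n, p)
print(mean, var)  # n*p = 5/2 and n*p*q = 15/8
```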

Moments of the normal distribution:
For a zero-mean Gaussian random variable $X$, the higher moments can be
calculated from the standard deviation $\sigma_X$ alone:

$m_n = \begin{cases} (1)(3)\cdots(n-1)\,\sigma_X^n, & n \text{ even} \\ 0, & n \text{ odd} \end{cases}$

For a Gaussian random variable $X$ with mean $\mu_X$ and variance $\sigma_X^2$,
the higher-order central moments can likewise be calculated from $\sigma_X$
alone:

$\lambda_n = E[(X - \mu_X)^n] = \begin{cases} (1)(3)\cdots(n-1)\,\sigma_X^n, & n \text{ even} \\ 0, & n \text{ odd} \end{cases}$

Example:
Consider a Gaussian random variable $X$ with:

$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma_X} \exp\left\{-\frac{(x - \mu_X)^2}{2\sigma_X^2}\right\}$

Show that $\mu_X$ and $\sigma_X^2$ are the mean and variance of the random
variable $X$, respectively.

$E[X] = \int_{-\infty}^{\infty} \frac{x}{\sqrt{2\pi}\,\sigma_X} \exp\left\{-\frac{(x - \mu_X)^2}{2\sigma_X^2}\right\} dx$

Change variable: $\zeta = \frac{x - \mu_X}{\sigma_X}$

$E[X] = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} (\sigma_X \zeta + \mu_X)\,\exp\left(-\frac{\zeta^2}{2}\right) d\zeta$

$= \underbrace{\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \sigma_X\,\zeta\,e^{-\zeta^2/2}\,d\zeta}_{0 \text{ (odd integrand)}} + \mu_X \underbrace{\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\,e^{-\zeta^2/2}\,d\zeta}_{1} = 0 + \mu_X$

So: $E[X] = \mu_X$.

$VAR[X] = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma_X} (x - \mu_X)^2\,e^{-(x - \mu_X)^2 / (2\sigma_X^2)}\,dx = \frac{\sigma_X^2}{\sqrt{2\pi}} \underbrace{\int_{-\infty}^{\infty} \zeta^2 e^{-\zeta^2/2}\,d\zeta}_{\sqrt{2\pi}} = \sigma_X^2$

Chebyshev's inequality:
For any arbitrary density function $f(x)$:

$P(|X - \mu_X| \ge k) \le \frac{\sigma_X^2}{k^2}$

which gives an upper bound on the probability that $X$ deviates from its mean by
at least $k$. Proof:

$\sigma_X^2 = \int_{-\infty}^{\infty} (x - \mu_X)^2 f(x)\,dx \ge \int_{-\infty}^{\mu - k} (x - \mu_X)^2 f(x)\,dx + \int_{\mu + k}^{\infty} (x - \mu_X)^2 f(x)\,dx$

Since $(x - \mu_X)^2 \ge k^2$ over the two tail regions:

$\ge k^2 \underbrace{\left[\int_{-\infty}^{\mu - k} f(x)\,dx + \int_{\mu + k}^{\infty} f(x)\,dx\right]}_{P(|X - \mu_X| \ge k)}$

$\Rightarrow P(|X - \mu_X| \ge k) \le \frac{\sigma_X^2}{k^2}$

Equivalently:

$P(|X - \mu_X| < k) = 1 - P(|X - \mu_X| \ge k) \ge 1 - \frac{\sigma_X^2}{k^2}$

which gives a lower bound on the probability that $X$ lies within $k$ of its
mean.

Example:
Consider a random variable $X$ with $\sigma_X^2 = 3$.
1) Find an upper bound on the probability that $X$ deviates from its mean by at
least 2.
2) Find a lower bound on the probability that $X$ lies within 2 of its mean.

1) $P(|X - \mu_X| \ge 2) \le \frac{\sigma_X^2}{4} = \frac{3}{4}$
2) $P(|X - \mu_X| < 2) = 1 - P(|X - \mu_X| \ge 2) \ge 1 - \frac{3}{4} = \frac{1}{4}$
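The bound holds for any distribution with $\sigma_X^2 = 3$. A Monte Carlo sketch (not part of the notes; a zero-mean Gaussian stands in for the "any density" claim):

```python
import random

# Empirical tail probability P(|X - mu| >= k) versus the Chebyshev bound
# sigma^2 / k^2 for sigma^2 = 3, k = 2 (bound = 3/4).
rng = random.Random(0)
sigma2, k, n = 3.0, 2.0, 200_000
xs = [rng.gauss(0.0, sigma2 ** 0.5) for _ in range(n)]
tail = sum(1 for x in xs if abs(x) >= k) / n
bound = sigma2 / k ** 2
print(tail, bound)  # the Gaussian tail is well below the 0.75 bound
```

Chebyshev's bound is deliberately loose: it uses only the variance, so for any specific distribution the true tail probability is usually much smaller.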


Characteristic Function (C.F.):

$\phi_X(\omega) \triangleq E[e^{j\omega X}] = \begin{cases} \int_{-\infty}^{\infty} e^{j\omega x} f(x)\,dx, & \text{continuous random variable} \\ \sum_{\text{all } x_i} e^{j\omega x_i}\,P_X(x_i), & \text{discrete random variable} \end{cases}$

$\phi_X(\omega)$ is the Fourier transform of the probability density function
with a sign change on $\omega$:

$\phi_X(-\omega) = \mathcal{F}\{f(x)\}$

Inversion Formula:

$f_X(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \phi_X(\omega)\,e^{-j\omega x}\,d\omega$

Applications of the C.F.:
1) Determination of higher-order moments.
2) Determination of the densities of sums of independent random variables.

Properties of the C.F.:

1) $\phi_X(0) = \int_{-\infty}^{\infty} e^{j(0)x} f(x)\,dx = \int_{-\infty}^{\infty} f(x)\,dx = 1$ for continuous $X$

$\phi_X(0) = \sum_{\text{all } i} e^{j(0)x_i}\,P_X(x_i) = \sum_{\text{all } i} P_X(x_i) = 1$ for discrete $X$

2) $|\phi_X(\omega)| \le 1$, since:

$|\phi_X(\omega)| = \left|\int_{-\infty}^{\infty} f(x)\,e^{j\omega x}\,dx\right|$

$\le \int_{-\infty}^{\infty} |f(x)\,e^{j\omega x}|\,dx = \int_{-\infty}^{\infty} |f(x)|\,|e^{j\omega x}|\,dx = \int_{-\infty}^{\infty} f(x)\,dx = 1$

Reminder (Euler's formula):

$|e^{j\omega x}| = |\cos \omega x + j \sin \omega x| = \sqrt{\cos^2 \omega x + \sin^2 \omega x} = 1$

3) Consider the sum of two independent random variables $X$ and $Y$,
$Z = X + Y$. Then:

$\phi_Z(\omega) = \phi_X(\omega)\,\phi_Y(\omega)$, since:

$\phi_Z(\omega) = E[e^{j\omega Z}] = E[e^{j\omega X} e^{j\omega Y}] = \iint_{-\infty}^{\infty} e^{j\omega x} e^{j\omega y}\,\underbrace{f(x, y)}_{f(x)f(y)}\,dx\,dy$

$= \int_{-\infty}^{\infty} e^{j\omega x} f(x)\,dx \int_{-\infty}^{\infty} e^{j\omega y} f(y)\,dy = E[e^{j\omega X}]\,E[e^{j\omega Y}] = \phi_X(\omega)\,\phi_Y(\omega)$

4) If $X_i$ ($i = 1, 2, \ldots, n$) are $n$ independent random variables and
$Z = X_1 + X_2 + \cdots + X_n$, then:

$\phi_Z(\omega) = \prod_{i=1}^{n} \phi_{X_i}(\omega)$

5) Let $Y = aX + b$:

$\phi_Y(\omega) = E[e^{j\omega Y}] = E[e^{j\omega(aX + b)}] = e^{j\omega b}\,E[e^{j\omega a X}] = e^{j\omega b}\,\phi_X(a\omega)$

Determination of moments:
The moments can be calculated from the derivatives of the C.F. as:

$m_n = E[X^n] = \frac{1}{j^n}\,\frac{d^n \phi_X(\omega)}{d\omega^n}\bigg|_{\omega=0}$

$\phi_X(\omega) = \int_{-\infty}^{\infty} e^{j\omega x} f(x)\,dx$


Expand $e^{j\omega x}$ using the Maclaurin series:

$\phi_X(\omega) = \int_{-\infty}^{\infty} \left[1 + j\omega x + \frac{(j\omega x)^2}{2!} + \cdots\right] f(x)\,dx$

Separating the integrals and simplifying:

$= 1 + j\omega E[X] + \frac{(j\omega)^2}{2!} E[X^2] + \cdots + \frac{(j\omega)^n}{n!} E[X^n] + \cdots = \sum_{n=0}^{\infty} \frac{(j\omega)^n}{n!} E[X^n] \quad (1)$

Now write the Maclaurin series expansion of $\phi_X(\omega)$ itself:

$\phi_X(\omega) = \phi_X(0) + \frac{d\phi_X(\omega)}{d\omega}\bigg|_{\omega=0}\omega + \frac{d^2\phi_X(\omega)}{d\omega^2}\bigg|_{\omega=0}\frac{\omega^2}{2!} + \cdots = \sum_{n=0}^{\infty} \frac{\omega^n}{n!}\,\frac{d^n\phi_X(\omega)}{d\omega^n}\bigg|_{\omega=0} \quad (2)$

Equating the same powers of $\omega$ in equations (1) and (2) yields:

$j^n\,E[X^n] = \frac{d^n\phi_X(\omega)}{d\omega^n}\bigg|_{\omega=0}$
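The derivative relation can be illustrated numerically with finite differences. A sketch (not part of the notes; the Bernoulli C.F. $\phi(\omega) = q + p\,e^{j\omega}$ and $p = 0.3$ are assumptions for the demo):

```python
import cmath

# m_n = (1/j^n) d^n(phi)/dw^n at w = 0, approximated by central differences
# on the Bernoulli(p) characteristic function phi(w) = q + p*exp(jw).
p, q = 0.3, 0.7
phi = lambda w: q + p * cmath.exp(1j * w)

h = 1e-4
# First derivative at 0 gives j * E[X].
d1 = (phi(h) - phi(-h)) / (2 * h)
m1 = (d1 / 1j).real
# Second derivative at 0 gives j^2 * E[X^2].
d2 = (phi(h) - 2 * phi(0) + phi(-h)) / h**2
m2 = (d2 / (1j ** 2)).real
print(m1, m2)  # both close to p = 0.3 (X^2 = X for a 0/1 variable)
```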

Example:
a) Calculate the C.F. for a Poisson random variable $X$.
b) Find the mean and variance of $X$ using the C.F.

Recall the Poisson probability: $P(X = k) = \frac{e^{-b}\,b^k}{k!}$; $k = 0, 1, \ldots$

$\phi_X(\omega) = E[e^{j\omega X}] = \sum_{k=0}^{\infty} e^{j\omega k}\,P(X = k) = \sum_{k=0}^{\infty} e^{j\omega k}\,\frac{e^{-b}\,b^k}{k!} = e^{-b} \sum_{k=0}^{\infty} \frac{(b\,e^{j\omega})^k}{k!} = e^{-b}\,e^{b e^{j\omega}} = e^{b(e^{j\omega} - 1)}$

$\phi_X(\omega) = \exp\{b(e^{j\omega} - 1)\}$


$E[X] = \frac{1}{j}\,\frac{d\phi_X(\omega)}{d\omega}\bigg|_{\omega=0} = \frac{1}{j}\,j b\,e^{j\omega}\,\phi_X(\omega)\bigg|_{\omega=0} = b$

Therefore:
Mean $= E[X] = b$

$\frac{d^2\phi_X(\omega)}{d\omega^2}\bigg|_{\omega=0} = \left[j^2 b\,e^{j\omega}\,\phi_X(\omega) + (j b\,e^{j\omega})^2\,\phi_X(\omega)\right]\bigg|_{\omega=0} = j^2(b + b^2)$

$E[X^2] = b + b^2$
Variance: $\sigma_X^2 = b + b^2 - b^2 = b$
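The result that the Poisson mean and variance both equal $b$ can be checked by summing the moment series directly. A sketch (not part of the notes; $b = 4$ is arbitrary):

```python
import math

# Mean and variance of Poisson(b) from the pmf sums, with P(X=k) built
# iteratively: P(k) = P(k-1) * b / k.
def poisson_moments(b, kmax=100):
    pk = math.exp(-b)          # P(X = 0)
    mean = m2 = 0.0
    for k in range(1, kmax):
        pk *= b / k            # P(X = k)
        mean += k * pk
        m2 += k * k * pk
    return mean, m2 - mean ** 2

mean, var = poisson_moments(4.0)
print(mean, var)  # both essentially b = 4
```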

Example:
Find the C.F. for the standard normal random variable (i.e., $N(0, 1)$):

$\phi_X(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{j\omega x}\,e^{-x^2/2}\,dx$

Complete the square by multiplying and dividing by $e^{(j\omega)^2/2}$:

$= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{j\omega x}\,e^{-x^2/2}\,e^{(j\omega)^2/2}\,e^{-(j\omega)^2/2}\,dx$

$= e^{(j\omega)^2/2}\,\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{1}{2}(x - j\omega)^2}\,dx$

$= e^{(j\omega)^2/2} \underbrace{\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\,e^{-\frac{1}{2}(x - j\omega)^2}\,dx}_{1 \;(N(j\omega,\,1))}$

$= e^{(j\omega)^2/2} = e^{-\omega^2/2}$

Example:
Find the C.F. for a normal random variable $Y$ with mean $\mu$ and variance
$\sigma^2$, using the transformation $Y = \sigma X + \mu$:

$\phi_Y(\omega) = E[e^{j\omega Y}] = E[e^{j\omega(\sigma X + \mu)}] = e^{j\omega\mu}\,E[e^{j\omega\sigma X}] = e^{j\omega\mu}\,\phi_X(\sigma\omega) = e^{j\omega\mu}\,e^{-\omega^2\sigma^2/2} = e^{j\omega\mu - \omega^2\sigma^2/2}$


Example:
Find the mean and variance of the normal random variable $Y$ using its
characteristic function $\phi_Y(\omega)$ calculated above.

$E[Y] = \frac{1}{j}\,\frac{d\phi_Y(\omega)}{d\omega}\bigg|_{\omega=0} = \frac{1}{j}\,(j\mu - \sigma^2\omega)\,\phi_Y(\omega)\bigg|_{\omega=0} = \frac{j\mu}{j} = \mu$

$E[Y] = \mu$

$E[Y^2] = \frac{1}{j^2}\,\frac{d^2\phi_Y(\omega)}{d\omega^2}\bigg|_{\omega=0} = \sigma^2 + \mu^2$

$\sigma_Y^2 = E[Y^2] - \{E[Y]\}^2 = \sigma^2 + \mu^2 - \mu^2 = \sigma^2$

Method of Transformation (Section 3-4):

Case one:
$Y = g(X)$ is a monotonic increasing function of $X$, so $x = g^{-1}(y)$.

$F_Y(y) = P(Y \le y) = P(g(X) \le y) = P(X \le g^{-1}(y)) = F_X(g^{-1}(y))$

$f_Y(y) = \frac{d}{dy}\,F_X(g^{-1}(y)) = f_X(g^{-1}(y))\,\frac{dg^{-1}(y)}{dy}$

But $\frac{dg^{-1}(y)}{dy} = \frac{dx}{dy}\Big|_{x = g^{-1}(y)}$, so:

$f_Y(y) = f_X(x)\,\frac{dx}{dy}\bigg|_{x = g^{-1}(y)}$


Case two:
$Y = g(X)$ is a monotonic decreasing function of $X$.

$F_Y(y) = P(Y \le y) = P(X > x) = 1 - P(X \le x)$
$F_Y(y) = 1 - F_X(x) = 1 - F_X(g^{-1}(y))$

$f_Y(y) = \frac{d}{dy}\,F_Y(y) = -f_X(g^{-1}(y))\,\frac{d}{dy}\,g^{-1}(y) = f_X(g^{-1}(y))\,\left|\frac{dg^{-1}(y)}{dy}\right|$

Or:

$f_Y(y) = f_X(x)\,\left|\frac{dx}{dy}\right|_{x = g^{-1}(y)}$

Or, since $\frac{dx}{dy} = \frac{1}{dy/dx}$:

$f_Y(y) = f_X(x)\,\frac{1}{\left|\frac{dy}{dx}\right|}\Bigg|_{x = g^{-1}(y)}$

Note: $f_Y(y)$ must be expressed in terms of $y$.

Example:
Consider $Y = g(X) = \frac{1}{X}$ with $f(x) = \frac{1}{b-a}$, $a \le x \le b$
(with $0 < a < b$). Find the density function of $Y$.

$x = g^{-1}(y) = \frac{1}{y}$ and $\frac{dx}{dy} = \frac{dg^{-1}(y)}{dy} = -\frac{1}{y^2}$

$f_Y(y) = f_X(g^{-1}(y))\,\left|\frac{dg^{-1}(y)}{dy}\right| = \frac{1}{b-a}\cdot\frac{1}{y^2} = \frac{1}{(b-a)\,y^2}, \qquad \frac{1}{b} \le y \le \frac{1}{a}$
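The transformed density can be checked by sampling. A Monte Carlo sketch (not part of the notes; $a = 1$, $b = 3$, and the test point $t = 0.5$ are arbitrary choices):

```python
import random

# For X ~ uniform[a, b] and Y = 1/X, compare the empirical CDF of Y at a point t
# against the integral of f_Y(y) = 1/((b-a) y^2) from 1/b to t.
a, b = 1.0, 3.0
rng = random.Random(0)
n = 200_000
ys = [1.0 / rng.uniform(a, b) for _ in range(n)]

t = 0.5
empirical = sum(1 for y in ys if y <= t) / n
exact = (1.0 / (b - a)) * (b - 1.0 / t)   # antiderivative of f_Y is -1/((b-a) y)
print(empirical, exact)  # both close to 0.5
```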

Case three:
$y = g(x)$ is not monotonic (mixed mode).
Solve for $x$ in terms of $y$; for a given $y$ there may be several roots, e.g.:
$x_1 = g_1^{-1}(y)$, $x_2 = g_2^{-1}(y)$, $x_3 = g_3^{-1}(y)$

$F_Y(y) = P(x \text{ in the shaded region}) = F_X(\underbrace{g_1^{-1}(y)}_{x_1}) + F_X(\underbrace{g_3^{-1}(y)}_{x_3}) - F_X(\underbrace{g_2^{-1}(y)}_{x_2})$

$f_Y(y) = f_X(g_1^{-1}(y))\,\frac{dg_1^{-1}(y)}{dy} + f_X(g_3^{-1}(y))\,\frac{dg_3^{-1}(y)}{dy} - f_X(g_2^{-1}(y))\,\frac{dg_2^{-1}(y)}{dy}$

$= \sum_{i=1}^{3} f_X(g_i^{-1}(y))\,\left|\frac{dg_i^{-1}(y)}{dy}\right|$

In general:

$f_Y(y) = \sum_{\text{all roots}} f_X(x_i)\,\left|\frac{dx_i}{dy}\right|_{x_i = g_i^{-1}(y)}$

Example:
$y = ax^2$, $a > 0$, with $f(x) = \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}$

The roots are $x_{1,2} = \pm\sqrt{\frac{y}{a}}$, and

$\left|\frac{dx_{1,2}}{dy}\right| = \frac{1}{2\sqrt{ya}}$

$f_Y(y) = \left[f_X\left(\sqrt{\tfrac{y}{a}}\right) + f_X\left(-\sqrt{\tfrac{y}{a}}\right)\right]\frac{1}{2\sqrt{ya}}, \qquad y \ge 0$

$f_Y(y) = \frac{1}{\sqrt{2\pi}\sqrt{ya}}\,e^{-y/(2a)}, \qquad y \ge 0$
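A quick sanity check on the derived $f_Y(y)$ is that it integrates to 1 over $y \ge 0$. A numeric sketch (not part of the notes; $a = 2$ and the grid parameters are arbitrary):

```python
import math

# f_Y(y) = exp(-y/(2a)) / (sqrt(2*pi) * sqrt(y*a)) for y >= 0.
def f_Y(y, a):
    return math.exp(-y / (2 * a)) / (math.sqrt(2 * math.pi) * math.sqrt(y * a))

a = 2.0
# Midpoint rule sidesteps the integrable 1/sqrt(y) singularity at y = 0.
N, ymax = 200_000, 100.0
h = ymax / N
total = sum(f_Y((i + 0.5) * h, a) for i in range(N)) * h
print(total)  # close to 1
```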

Special cases:

1. $g(X)$ has a jump at $x_0$, and $X$ is a continuous random variable. Let
$y_1$ and $y_2$ be the values of $g$ just below and above the jump; then:

$P\{y_1 \le Y \le y_2\} = F(y_2) - F(y_1) = P(X = x_0) = 0$
$F_Y(y) = \text{constant}, \quad y_1 \le y \le y_2$
$f_Y(y) = 0, \quad y_1 \le y \le y_2$

2. $g(X)$ has a flat region at level $y_0$ over $x_1 \le X \le x_2$:

$P(Y = y_0) = P\{x_1 \le X \le x_2\} = F_X(x_2) - F_X(x_1) = A$

So $F_Y(y)$ has a discontinuity at $y = y_0$, and $f_Y(y)$ has an impulse of
strength $A$ at $y = y_0$.

Example:
A random voltage $V$ has a uniform distribution between 90 and 100 volts, so
$f(v) = \frac{1}{10}$, $90 \le v \le 100$. Find the distribution of the output
voltage $W$ when this voltage is applied to a nonlinear device with the
characteristic:

$W = g(V) = \begin{cases} 0, & V \le 94 \\ \frac{1}{2}(V - 94), & 94 < V < 96 \\ 1, & V \ge 96 \end{cases}$

The flat regions produce impulses:

$P(W = 0) = P(V \le 94) = 4 \times \frac{1}{10} = \frac{4}{10}$
$P(W = 1) = P(V \ge 96) = 1 - P(V \le 96) = 1 - \frac{6}{10} = \frac{4}{10}$

On the linear segment, $W = \frac{1}{2}(V - 94)$ for $94 \le V \le 96$, so
$V = 2(W + 47) = 2W + 94$ for $0 \le W \le 1$ and $\frac{dV}{dW} = 2$.

$f_W(w) = f_V(v)\,\left|\frac{dv}{dw}\right| = \frac{1}{10}(2) = \frac{1}{5}, \qquad 0 \le w \le 1$
The cumulative distribution therefore rises from $F_W(0) = \frac{4}{10}$ as
$F_W(w) = \frac{4}{10} + \frac{1}{5}w$ for $0 \le w < 1$, then jumps to 1 at
$w = 1$.
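The mixed distribution (two impulses of mass 4/10 plus a flat density of 1/5) can be confirmed by simulating the limiter. A Monte Carlo sketch, not part of the notes:

```python
import random

# W = 0 for V <= 94, W = (V - 94)/2 for 94 < V < 96, W = 1 for V >= 96,
# with V uniform on [90, 100].
def g(v):
    if v <= 94:
        return 0.0
    if v >= 96:
        return 1.0
    return (v - 94) / 2

rng = random.Random(0)
n = 200_000
ws = [g(rng.uniform(90, 100)) for _ in range(n)]
p0 = sum(1 for w in ws if w == 0.0) / n
p1 = sum(1 for w in ws if w == 1.0) / n
mid = sum(1 for w in ws if 0.0 < w <= 0.5) / n   # integral of 1/5 over (0, 0.5]
print(p0, p1, mid)  # close to 0.4, 0.4, 0.1
```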
