Properties of Random Variables and Their Probability Distributions

Properties of Random Variables

 Let f(x) be the probability mass function (pmf) of a discrete random variable X, or the probability density function (pdf) of a continuous random variable X. The cumulative distribution function (cdf) F(x) is then defined as

➢ F(x) = P(X ≤ x) = ∑_{t ≤ x} f(t) (for discrete)
➢ F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(t) dt (for continuous)

 Let u = u(X) be a function of a discrete random variable X with pmf f(x); the mathematical expectation is then defined as E[u(X)] = ∑_{x∈S} u(x) f(x).

 Let u = u(X) be a function of a continuous random variable X with pdf f(x); the mathematical expectation is then defined as E[u(X)] = ∫_{−∞}^{∞} u(x) f(x) dx.

 For Y = X^r we have

➢ E[Y] = E[X^r] = ∑_{x∈S} x^r f(x) (for discrete)
➢ E[Y] = E[X^r] = ∫_{−∞}^{∞} x^r f(x) dx (for continuous)
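As a concrete illustration of these definitions, here is a minimal Python sketch that computes E[u(X)] by direct summation over a discrete pmf; the fair four-sided-die pmf and the choices of u are illustrative examples, not taken from the text.

    # Expectation of u(X) for a discrete pmf, by direct summation.
    # Illustrative pmf: a fair four-sided die (an assumption for the example).
    pmf = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}

    def expectation(u, pmf):
        """E[u(X)] = sum of u(x) * f(x) over the support S."""
        return sum(u(x) * p for x, p in pmf.items())

    print(expectation(lambda x: x, pmf))       # E[X]   = 2.5
    print(expectation(lambda x: x**2, pmf))    # E[X^2] = 7.5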

❖ If the function is renamed or a new variable is declared with the same pmf, the mathematical expectation does not change.
❖ If the variable and/or its pmf is redefined, the mathematical expectation changes.
❖ For a discrete random variable, in general P(X ≥ a) ≠ P(X > a) and P(X ≤ a) ≠ P(X < a).
❖ For a continuous random variable, P(X ≥ a) = P(X > a) and P(X ≤ a) = P(X < a).

✓ E[c] = c, where c is a constant
✓ E[c u(X)] = c E[u(X)]
✓ E[u₁(X) ± u₂(X)] = E[u₁(X)] ± E[u₂(X)]
✓ E[c₁u₁(X) ± c₂u₂(X) ± c₃] = c₁E[u₁(X)] ± c₂E[u₂(X)] ± c₃

 Mean of a distribution μ = E[X] = ∑_{x∈S} x f(x) (for discrete)
 Mean of a distribution μ = E[X] = ∫_{−∞}^{∞} x f(x) dx (for continuous)
➢ E[X − μ] = E[X] − E[μ] = μ − μ = 0

 Variance of a distribution σ² = V(X) = E[(X − μ)²] = ∑_{x∈S} (x − μ)² f(x) (for discrete)
 Variance of a distribution σ² = V(X) = E[(X − μ)²] = ∫_{−∞}^{∞} (x − μ)² f(x) dx (for continuous)
➢ Shortcut formula σ² = E[X²] − μ² = E[X²] − {E[X]}²
➢ If E[X] = μ = 0, then σ² = E[X²]

 Standard deviation SD = σ = +√V(X) = √(E[X²] − {E[X]}²)


✓ V(u(X)) ≥ 0
✓ V(c) = 0, where c is a constant
✓ V[c u(X)] = c² V[u(X)]
✓ V[u₁(X) ± u₂(X)] = V[u₁(X)] + V[u₂(X)] [assuming u₁(X) and u₂(X) are uncorrelated]
✓ V[c₁u₁(X) ± c₂u₂(X) ± c₃] = c₁² V[u₁(X)] + c₂² V[u₂(X)] [under the same assumption]
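Reusing the direct-summation idea from above, the following sketch checks the shortcut formula and the scaling property V[cX] = c² V(X) numerically; the die pmf and c = 3 are again arbitrary example choices.

    pmf = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}
    E = lambda u: sum(u(x) * p for x, p in pmf.items())

    mu = E(lambda x: x)
    var_def = E(lambda x: (x - mu) ** 2)           # definition E[(X - mu)^2]
    var_short = E(lambda x: x ** 2) - mu ** 2      # shortcut E[X^2] - {E[X]}^2
    assert abs(var_def - var_short) < 1e-12

    c = 3.0                                        # arbitrary constant
    mu_c = E(lambda x: c * x)
    var_c = E(lambda x: (c * x - mu_c) ** 2)
    assert abs(var_c - c ** 2 * var_def) < 1e-12   # V[cX] = c^2 V(X)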

 The moment generating function (mgf) is defined as M(t) = E[e^{tX}] = ∑_{x∈S} e^{tx} f(x) (for discrete)
➢ M(0) = E[1] = ∑_{x∈S} f(x) = 1
➢ M^(r)(t) = ∑_{x∈S} x^r e^{tx} f(x), where M^(r) denotes the r-th derivative
➢ M^(r)(0) = ∑_{x∈S} x^r f(x) = E[X^r]


 The moment generating function (mgf) is defined as M(t) = E[e^{tX}] = ∫_{−∞}^{∞} e^{tx} f(x) dx (for continuous)
➢ M(0) = E[1] = ∫_{−∞}^{∞} f(x) dx = 1
➢ M^(r)(t) = ∫_{−∞}^{∞} x^r e^{tx} f(x) dx
➢ M^(r)(0) = ∫_{−∞}^{∞} x^r f(x) dx = E[X^r]

✓ Mean μ = E[X] = M′(0)
✓ Variance σ² = E[X²] − {E[X]}² = M″(0) − {M′(0)}²
✓ Standard deviation SD = √(M″(0) − {M′(0)}²)
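The identities M′(0) = E[X] and M″(0) = E[X²] can be checked numerically with finite differences. A minimal sketch for the same example pmf (the step size h is a tuning choice):

    import math

    pmf = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}

    def M(t):
        # mgf: sum of e^{tx} f(x) over the support
        return sum(math.exp(t * x) * p for x, p in pmf.items())

    h = 1e-5
    M1 = (M(h) - M(-h)) / (2 * h)             # central difference ~ M'(0) = E[X]
    M2 = (M(h) - 2 * M(0.0) + M(-h)) / h**2   # second difference ~ M''(0) = E[X^2]
    print(M1, M2, M2 - M1**2)                 # ~2.5, ~7.5, variance ~1.25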
Uniform (Discrete) distribution

Consider an experiment with m discrete outcomes, each occurring with equal probability. The probability distribution over these outcomes is called the Uniform distribution of discrete type. The pmf of the Uniform distribution is defined as

f(x) = 1/m ;  x = 1, 2, 3, …, m

The cdf of the Uniform distribution is defined as F(x) = P(X ≤ x) = { 0 for x < 1 ;  k/m for 1 ≤ x < m, where k = ⌊x⌋ ;  1 for x ≥ m }

Here, ∑_{x∈S} f(x) = ∑_{x=1}^{m} 1/m = (1/m)·m = 1

✓ E[X] = ∑_{x∈S} x f(x) = (1/m) ∑_{x=1}^{m} x = (1/m)·m(m+1)/2 = (m+1)/2
✓ E[X²] = ∑_{x∈S} x² f(x) = (1/m) ∑_{x=1}^{m} x² = (1/m)·m(m+1)(2m+1)/6 = (m+1)(2m+1)/6

 Mean μ = E[X] = (m+1)/2
 Variance σ² = E[X²] − {E[X]}² = (m+1)(2m+1)/6 − {(m+1)/2}² = (m² − 1)/12
 Standard deviation SD = √((m² − 1)/12)

 Moment generating function M(t) = E[e^{tX}] = ∑_{x∈S} e^{tx} f(x) = ∑_{x=1}^{m} e^{tx}·(1/m) = (1/m) ∑_{x=1}^{m} e^{tx}
➢ M(0) = ∑_{x=1}^{m} 1/m = 1
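These closed forms are easy to verify numerically. A minimal sketch, assuming SciPy is available (scipy.stats.randint(1, m + 1) places equal mass on 1, …, m; m = 10 is an arbitrary example):

    from scipy.stats import randint

    m = 10
    X = randint(1, m + 1)               # discrete uniform on {1, ..., m}
    print(X.mean(), (m + 1) / 2)        # 5.5   5.5
    print(X.var(), (m**2 - 1) / 12)     # 8.25  8.25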
Hypergeometric distribution

Consider a population of N discrete outcomes divided into two well-defined categories containing N₁ and N₂ outcomes, respectively, with N = N₁ + N₂. When n outcomes are selected without replacement, the probability distribution of getting x ≤ N₁ outcomes from the first category and the remaining n − x ≤ N₂ outcomes from the second category is called the Hypergeometric distribution. The pmf of the Hypergeometric distribution is defined as

f(x) = C(N₁, x) C(N₂, n − x) / C(N, n) ;  x = 0, 1, …, n (with x ≤ N₁ and n − x ≤ N₂)

where C(·, ·) denotes the binomial coefficient ("choose").

The cdf of the Hypergeometric distribution is defined as F(x) = P(X ≤ x) = ∑_{t=0}^{x} C(N₁, t) C(N₂, n − t) / C(N, n)

Here, ∑_{x∈S} f(x) = ∑_{x∈S} C(N₁, x) C(N₂, n − x) / C(N, n) = 1

✓ E[X] = ∑_{x∈S} x f(x) = ∑_{x∈S} x C(N₁, x) C(N₂, n − x) / C(N, n) = n(N₁/N)
✓ E[X²] = ∑_{x∈S} x² f(x) = ∑_{x∈S} x² C(N₁, x) C(N₂, n − x) / C(N, n) = n(n−1)N₁(N₁−1)/(N(N−1)) + n(N₁/N)

 Mean μ = E[X] = n(N₁/N)
 Variance σ² = E[X²] − {E[X]}² = n(n−1)N₁(N₁−1)/(N(N−1)) + n(N₁/N) − {n(N₁/N)}² = n(N₁/N)(N₂/N)((N−n)/(N−1))
 Standard deviation SD = √(n(N₁/N)(N₂/N)((N−n)/(N−1)))

 Moment generating function M(t) = E[e^{tX}] = ∑_{x∈S} e^{tx} f(x) = ∑_{x∈S} e^{tx} C(N₁, x) C(N₂, n − x) / C(N, n)
➢ M(0) = ∑_{x∈S} C(N₁, x) C(N₂, n − x) / C(N, n) = 1
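A numerical check of the mean and variance formulas, assuming SciPy is available; note that scipy.stats.hypergeom takes its parameters in the order (population size, category-1 count, number of draws), and the values below are arbitrary examples:

    from scipy.stats import hypergeom

    N, N1, n = 50, 20, 10               # population, first category, draws
    N2 = N - N1
    X = hypergeom(N, N1, n)             # SciPy order: (M, n, N) = (N, N1, n)
    print(X.mean(), n * N1 / N)                              # 4.0  4.0
    print(X.var(), n * (N1/N) * (N2/N) * (N - n) / (N - 1))  # ~1.959 both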
Geometric distribution

Let p and q be the probabilities of success and failure on any trial, with p + q = 1. Trials are repeated until the first success; the probability distribution of the first success occurring on the x-th trial is called the Geometric distribution. The pmf of the Geometric distribution is defined as

f(x) = q^{x−1} p ;  x = 1, 2, 3, …

The cdf of the Geometric distribution is defined as F(x) = P(X ≤ x) = ∑_{t=1}^{x} q^{t−1} p

Here, ∑_{x∈S} f(x) = ∑_{x=1}^{∞} q^{x−1} p = p + qp + q²p + q³p + ⋯ = p/(1 − q) = 1

 Mean μ = E[X] = ∑_{x∈S} x f(x) = ∑_{x=1}^{∞} x q^{x−1} p = 1/p [using p = 1 − q]

 Moment generating function M(t) = E[e^{tX}] = ∑_{x∈S} e^{tx} f(x) = ∑_{x=1}^{∞} e^{tx} q^{x−1} p = p e^t/(1 − q e^t), for q e^t < 1
➢ M(0) = p/(1 − q) = 1
➢ M′(t) = p e^t/(1 − q e^t)²
➢ M″(t) = p e^t(1 + q e^t)/(1 − q e^t)³

 Mean μ = M′(0) = 1/p
 Variance σ² = M″(0) − {M′(0)}² = (1 + q)/p² − {1/p}² = q/p²
 Standard deviation SD = √(q/p²)
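A quick check against SciPy's implementation (scipy.stats.geom uses the same support {1, 2, …} and pmf q^{x−1} p; p = 0.3 is an arbitrary example):

    from scipy.stats import geom

    p = 0.3
    q = 1 - p
    X = geom(p)
    print(X.pmf(3), q**2 * p)      # 0.147  0.147
    print(X.mean(), 1 / p)         # ~3.333 both
    print(X.var(), q / p**2)       # ~7.778 both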
Binomial distribution

Let p and q be the probabilities of success and failure on each of n independent trials, with p + q = 1. The values of p and q remain the same on every trial; the probability distribution of the number of successes x in the n trials is called the Binomial distribution. The pmf of the Binomial distribution is defined as

f(x) = C(n, x) p^x q^{n−x} ;  x = 0, 1, 2, 3, …, n

The cdf of the Binomial distribution is defined as F(x) = P(X ≤ x) = ∑_{t=0}^{x} C(n, t) p^t q^{n−t}

Here, ∑_{x∈S} f(x) = ∑_{x=0}^{n} C(n, x) p^x q^{n−x} = (q + p)^n = 1

The Binomial distribution can be represented as b(n, p).


 Mean μ = E[X] = ∑_{x∈S} x f(x) = ∑_{x=0}^{n} x C(n, x) p^x q^{n−x} = np [using p = 1 − q]

 Moment generating function M(t) = E[e^{tX}] = ∑_{x∈S} e^{tx} f(x) = ∑_{x=0}^{n} e^{tx} C(n, x) p^x q^{n−x} = (q + p e^t)^n
➢ M(0) = (q + p)^n = 1
➢ M′(t) = n(q + p e^t)^{n−1} p e^t
➢ M″(t) = n(n − 1)(q + p e^t)^{n−2} (p e^t)² + n(q + p e^t)^{n−1} p e^t

 Mean μ = M′(0) = np
 Variance σ² = M″(0) − {M′(0)}² = n(n − 1)p² + np − {np}² = np(1 − p) = npq
 Standard deviation SD = √(npq)
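The same quantities from SciPy, for comparison with the closed forms (n = 12 and p = 0.4 are arbitrary example values):

    from scipy.stats import binom

    n, p = 12, 0.4
    q = 1 - p
    X = binom(n, p)
    print(X.mean(), n * p)       # 4.8   4.8
    print(X.var(), n * p * q)    # 2.88  2.88
    print(X.pmf(5))              # C(12, 5) p^5 q^7 ~ 0.2270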
Poisson distribution

Consider n independent trials with success probability p, in the limit n → ∞ with p → 0 such that λ = np stays a finite constant; λ > 0 is known as the Poisson parameter. The probability distribution of the number of successes x in this limit is called the Poisson distribution. The pmf of the Poisson distribution is defined as

f(x) = λ^x e^{−λ}/x! ;  x = 0, 1, 2, 3, …

The cdf of the Poisson distribution is defined as F(x) = P(X ≤ x) = ∑_{t=0}^{x} λ^t e^{−λ}/t!

Here, ∑_{x∈S} f(x) = ∑_{x=0}^{∞} λ^x e^{−λ}/x! = e^{−λ} ∑_{x=0}^{∞} λ^x/x! = e^{−λ} e^{λ} = 1

 Mean μ = E[X] = ∑_{x∈S} x f(x) = ∑_{x=0}^{∞} x λ^x e^{−λ}/x! = e^{−λ} ∑_{x=0}^{∞} x λ^x/x! = λ e^{−λ} e^{λ} = λ

 Moment generating function M(t) = E[e^{tX}] = ∑_{x∈S} e^{tx} f(x) = ∑_{x=0}^{∞} e^{tx} λ^x e^{−λ}/x! = e^{λ(e^t − 1)}
➢ M(0) = e^{λ(1−1)} = 1
➢ M′(t) = λ e^t e^{λ(e^t − 1)}
➢ M″(t) = (λ e^t)² e^{λ(e^t − 1)} + λ e^t e^{λ(e^t − 1)}

 Mean μ = M′(0) = λ
 Variance σ² = M″(0) − {M′(0)}² = λ² + λ − λ² = λ
 Standard deviation SD = √λ
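The limiting relationship λ = np can be seen numerically: holding λ fixed while n grows and p = λ/n shrinks, the binomial pmf approaches the Poisson pmf. A sketch assuming SciPy (λ = 2 and x = 3 are arbitrary example values):

    from scipy.stats import binom, poisson

    lam, x = 2.0, 3
    for n in (10, 100, 10000):
        p = lam / n                     # keep lambda = n p fixed
        print(n, binom(n, p).pmf(x), poisson(lam).pmf(x))
    # the binomial column converges to the Poisson value ~0.1804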
Uniform (Continuous) distribution

Consider a distribution with constant probability density on the interval [a, b]; this is called the Uniform distribution of continuous type. The pdf of the Uniform distribution is defined as

f(x) = 1/(b − a) ;  a ≤ x ≤ b

The cdf of the Uniform distribution is defined as F(x) = P(X ≤ x) = { 0 for x < a ;  ∫_a^x 1/(b − a) dw = (x − a)/(b − a) for a ≤ x < b ;  1 for x ≥ b }

Here, ∫_{−∞}^{∞} f(x) dx = ∫_a^b 1/(b − a) dx = (1/(b − a))(b − a) = 1
𝑏−𝑎 𝑏−𝑎

The Uniform distribution can be represented as U(a, b).

∞ 𝑏 𝑥 𝑎+𝑏
✓ 𝐸[𝑋] = ∫−∞ 𝑥𝑓(𝑥)𝑑𝑥 = ∫𝑎 𝑑𝑥 =
𝑏−𝑎 2
∞ 𝑏 𝑥2 𝑎 2 +𝑎𝑏+𝑏2
✓ 𝐸[𝑋 2 ] = ∫−∞ 𝑥 2 𝑓(𝑥)𝑑𝑥 = ∫𝑎 𝑏−𝑎 𝑑𝑥 = 3

 Mean μ = E[X] = (a + b)/2
 Variance σ² = E[X²] − {E[X]}² = (a² + ab + b²)/3 − {(a + b)/2}² = (b − a)²/12
 Standard deviation SD = √((b − a)²/12)

 Moment generating function M(t) = E[e^{tX}] = ∫_{−∞}^{∞} e^{tx} f(x) dx = ∫_a^b e^{tx}/(b − a) dx = { (e^{bt} − e^{at})/(t(b − a)) for t ≠ 0 ;  1 for t = 0 }
➢ M(0) = 1
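The moments can be verified by direct numerical integration of the pdf; a sketch assuming SciPy (the endpoints a = 2 and b = 8 are arbitrary example values):

    from scipy.integrate import quad

    a, b = 2.0, 8.0
    f = lambda x: 1.0 / (b - a)                 # uniform pdf on [a, b]

    EX, _ = quad(lambda x: x * f(x), a, b)      # E[X]
    EX2, _ = quad(lambda x: x**2 * f(x), a, b)  # E[X^2]
    print(EX, (a + b) / 2)                      # 5.0  5.0
    print(EX2 - EX**2, (b - a)**2 / 12)         # 3.0  3.0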
Exponential distribution
Consider events that occur at rate λ, so that the average waiting time for the first occurrence is θ = 1/λ > 0. The distribution of this waiting time on [0, ∞) is called the Exponential distribution. The pdf of the Exponential distribution is defined as

f(x) = (1/θ) e^{−x/θ} ;  0 ≤ x < ∞

The cdf of the Exponential distribution is defined as

➢ F(x) = P(X ≤ x) = ∫_0^x (1/θ) e^{−w/θ} dw = 1 − e^{−x/θ} ;  0 ≤ x < ∞
➢ P(X > x) = 1 − F(x) = e^{−x/θ} ;  0 ≤ x < ∞

Here, ∫_{−∞}^{∞} f(x) dx = ∫_0^∞ (1/θ) e^{−x/θ} dx = [−e^{−x/θ}]_0^∞ = 1

 Median m = θ ln 2, obtained by solving F(m) = 1 − e^{−m/θ} = 1/2
 Moment generating function M(t) = E[e^{tX}] = ∫_{−∞}^{∞} e^{tx} f(x) dx = ∫_0^∞ e^{tx} (1/θ) e^{−x/θ} dx = 1/(1 − θt), for t < 1/θ
➢ M(0) = 1/(1 − 0) = 1
➢ M′(t) = θ/(1 − θt)²
➢ M″(t) = 2θ²/(1 − θt)³

 Mean μ = M′(0) = θ
 Variance σ² = M″(0) − {M′(0)}² = 2θ² − θ² = θ²
 Standard deviation SD = θ
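A check of the mean, standard deviation, median, and tail formula against SciPy (whose scale parameter is exactly θ; θ = 2.5 is an arbitrary example):

    import math
    from scipy.stats import expon

    theta = 2.5
    X = expon(scale=theta)                    # SciPy scale = theta = 1/lambda
    print(X.mean(), X.std(), theta)           # mean = SD = theta
    print(X.median(), theta * math.log(2))    # ~1.733 both
    print(X.sf(3.0), math.exp(-3.0 / theta))  # P(X > 3) = e^{-3/theta}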
Gamma distribution
Consider events that occur at rate λ, with average waiting time θ = 1/λ > 0 between occurrences. The distribution on [0, ∞) of the waiting time until the α-th occurrence is called the Gamma distribution. The pdf of the Gamma distribution is defined as

f(x) = (1/(Γ(α)θ^α)) x^{α−1} e^{−x/θ} ;  0 ≤ x < ∞

The cdf of the Gamma distribution is defined as F(x) = (1/(Γ(α)θ^α)) ∫_0^x w^{α−1} e^{−w/θ} dw ;  0 ≤ x < ∞

Here, ∫_{−∞}^{∞} f(x) dx = (1/Γ(α)) ∫_0^∞ (1/θ^α) x^{α−1} e^{−x/θ} dx = (1/Γ(α)) Γ(α) = 1


 Moment generating function M(t) = E[e^{tX}] = ∫_{−∞}^{∞} e^{tx} f(x) dx = ∫_0^∞ e^{tx} (1/(Γ(α)θ^α)) x^{α−1} e^{−x/θ} dx = 1/(1 − θt)^α, for t < 1/θ
➢ M(0) = 1/(1 − 0)^α = 1
➢ M′(t) = αθ/(1 − θt)^{α+1}
➢ M″(t) = α(α + 1)θ²/(1 − θt)^{α+2}

 Mean μ = M′(0) = αθ
 Variance σ² = M″(0) − {M′(0)}² = α(α + 1)θ² − α²θ² = αθ²
 Standard deviation SD = √(αθ²) = θ√α
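Checking these against SciPy's gamma distribution (shape α, scale θ; α = 3 and θ = 2 are arbitrary example values):

    from scipy.stats import gamma

    alpha, theta = 3.0, 2.0
    X = gamma(alpha, scale=theta)
    print(X.mean(), alpha * theta)      # 6.0   6.0
    print(X.var(), alpha * theta**2)    # 12.0  12.0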
Normal distribution

A symmetric distribution (mean, median, and mode coincide) with mean −∞ < μ < ∞ and variance σ², where 0 < σ < ∞, defined on (−∞, ∞), is called the Normal distribution. The pdf of the Normal distribution is defined as

f(x) = (1/(σ√(2π))) e^{−(x−μ)²/(2σ²)} ;  −∞ < x < ∞

For the standardized variable z = (x − μ)/σ, the cdf of the Normal distribution is defined as

ϕ(z) = P(Z ≤ z) = (1/√(2π)) ∫_{−∞}^{z} e^{−w²/2} dw ;  −∞ < z < ∞

Here, ∫_{−∞}^{∞} f(x) dx = (1/(σ√(2π))) ∫_{−∞}^{∞} e^{−(x−μ)²/(2σ²)} dx = 1

The Normal distribution can be represented as N(μ, σ²).



 Moment generating function M(t) = E[e^{tX}] = ∫_{−∞}^{∞} e^{tx} f(x) dx = ∫_{−∞}^{∞} e^{tx} (1/(σ√(2π))) e^{−(x−μ)²/(2σ²)} dx = e^{μt + σ²t²/2}
➢ M(0) = e^0 = 1
➢ M′(t) = (μ + σ²t) e^{μt + σ²t²/2}
➢ M″(t) = [(μ + σ²t)² + σ²] e^{μt + σ²t²/2}

 Mean = M′(0) = μ
 Variance = M″(0) − {M′(0)}² = μ² + σ² − μ² = σ²
 Standard deviation SD = √(σ²) = σ

❖ P(Z ≤ z) = ϕ(z)
❖ P(z₁ ≤ Z ≤ z₂) = ϕ(z₂) − ϕ(z₁)
❖ P(−z₂ ≤ Z ≤ −z₁) = P(z₁ ≤ Z ≤ z₂) = ϕ(z₂) − ϕ(z₁)
❖ P(Z ≥ z) = 1 − P(Z ≤ z) = 1 − ϕ(z)
❖ P(Z ≥ −z) = P(Z ≤ z) = ϕ(z)
❖ P(Z ≤ −z) = P(Z ≥ z) = 1 − ϕ(z)
❖ P(|Z| ≤ z) = P(−z ≤ Z ≤ z) = ϕ(z) − ϕ(−z) = ϕ(z) − [1 − ϕ(z)] = 2ϕ(z) − 1
❖ P(|Z| ≥ z) = P(Z ≤ −z) + P(Z ≥ z) = [1 − ϕ(z)] + [1 − ϕ(z)] = 2 − 2ϕ(z)
❖ P(Z ≤ 0) = ϕ(0) = 0.5
❖ P(Z ≥ 0) = P(Z ≤ 0) = ϕ(0) = 0.5
❖ P(Z = z) = 0, since Z is a continuous random variable
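These identities can be spot-checked with SciPy's standard normal (z = 1.96 is an arbitrary example value):

    from scipy.stats import norm

    Z = norm()                                       # standard normal N(0, 1)
    z = 1.96
    print(Z.cdf(z))                                  # phi(z) ~ 0.975
    print(Z.cdf(z) - Z.cdf(-z), 2 * Z.cdf(z) - 1)    # P(|Z| <= z), two ways
    print(Z.sf(z), 1 - Z.cdf(z))                     # P(Z >= z), two ways
    print(Z.cdf(0.0))                                # 0.5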
