Probability & Random Process Formulas All Units
2) F(x) = P[X ≤ x];  for a continuous random variable, F(x) = P[X ≤ x] = ∫_{−∞}^{x} f(x) dx
3) Mean = E[X] = ∑_i x_i p(x_i) (discrete);  Mean = E[X] = ∫_{−∞}^{∞} x f(x) dx (continuous)
4) E[X²] = ∑_i x_i² p(x_i) (discrete);  E[X²] = ∫_{−∞}^{∞} x² f(x) dx (continuous)
5) Var(X) = E(X²) − [E(X)]²
6) r-th moment = E[X^r] = ∑_i x_i^r p_i (discrete);  E[X^r] = ∫_{−∞}^{∞} x^r f(x) dx (continuous)
7) M.G.F: M_X(t) = E[e^{tX}] = ∑_x e^{tx} p(x) (discrete);  M_X(t) = E[e^{tX}] = ∫_{−∞}^{∞} e^{tx} f(x) dx (continuous)
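As a numerical illustration of definitions 2)–7), a short Python sketch (the pmf values and the helper name mgf are assumptions made only for this example):

import numpy as np

# illustrative pmf of a discrete random variable X (values assumed for the example)
x = np.array([0, 1, 2, 3])
p = np.array([0.1, 0.3, 0.4, 0.2])          # probabilities, must sum to 1

mean = np.sum(x * p)                         # E[X]  = Σ x_i p(x_i)
ex2 = np.sum(x**2 * p)                       # E[X²] = Σ x_i² p(x_i)
var = ex2 - mean**2                          # Var(X) = E[X²] − (E[X])²

def mgf(t):
    # M_X(t) = E[e^{tX}] = Σ e^{t x_i} p(x_i)
    return np.sum(np.exp(t * x) * p)

# numerical check: the derivative of M_X at t = 0 recovers the mean
h = 1e-5
print(mean, var, (mgf(h) - mgf(-h)) / (2 * h))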
4) E ( aX + b ) = aE ( X ) + b
5) Var ( aX + b ) = a 2 Var ( X )
6) Var(aX ± bY) = a² Var(X) + b² Var(Y)  (X, Y independent)
7) Standard Deviation = √Var(X)
8) f ( x ) = F ′( x )
9) p( X > a ) = 1 − p( X ≤ a )
10) p(A/B) = p(A ∩ B) / p(B),  p(B) ≠ 0
11) If A and B are independent, then p ( A ∩ B ) = p ( A ) ⋅ p ( B ) .
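A quick simulation check of properties 4) and 5) above (the exponential sample and the constants a, b are arbitrary assumptions):

import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=2.0, size=200_000)   # any distribution will do
a, b = 3.0, 5.0

# E(aX + b) = aE(X) + b   and   Var(aX + b) = a² Var(X)
print(np.mean(a * X + b), a * np.mean(X) + b)
print(np.var(a * X + b), a**2 * np.var(X))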
Limitations of M.G.F:
i) A random variable X may have no moments although its m.g.f exists.
ii) A random variable X can have its m.g.f and some or all moments, yet the
m.g.f does not generate the moments.
iii) A random variable X can have all or some moments, but the m.g.f does not
exist except perhaps at one point.
14) Properties of M.G.F:
i) If Y = aX + b, then MY ( t ) = e bt M X ( at ) .
ii) M cX ( t ) = M X ( ct ) , where c is constant.
iii) If X and Y are two independent random variables then
M X +Y ( t ) = M X ( t ) ⋅ M Y ( t ) .
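A simulation sketch of property iii) (the binomial and Poisson choices for X and Y are assumptions for the example):

import numpy as np

rng = np.random.default_rng(1)
n = 500_000
X = rng.binomial(n=10, p=0.3, size=n)          # X and Y drawn independently
Y = rng.poisson(lam=2.0, size=n)
t = 0.4

# empirical check of M_{X+Y}(t) = M_X(t) · M_Y(t) for independent X, Y
lhs = np.mean(np.exp(t * (X + Y)))
rhs = np.mean(np.exp(t * X)) * np.mean(np.exp(t * Y))
print(lhs, rhs)                                 # the two estimates should be close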
15) P.D.F, M.G.F, Mean and Variance of all the distributions:

1. Binomial:     P(X = x) = nCx p^x q^(n−x);                         M.G.F = (q + p e^t)^n;                   Mean = np;         Variance = npq
2. Poisson:      P(X = x) = e^(−λ) λ^x / x!;                         M.G.F = e^{λ(e^t − 1)};                  Mean = λ;          Variance = λ
3. Geometric:    P(X = x) = q^(x−1) p (or q^x p);                    M.G.F = p e^t / (1 − q e^t);             Mean = 1/p;        Variance = q/p²
4. Uniform:      f(x) = 1/(b − a), a < x < b; 0 otherwise;           M.G.F = (e^{bt} − e^{at}) / ((b − a)t);  Mean = (a + b)/2;  Variance = (b − a)²/12
5. Exponential:  f(x) = λ e^(−λx), x > 0, λ > 0; 0 otherwise;        M.G.F = λ/(λ − t);                       Mean = 1/λ;        Variance = 1/λ²
6. Gamma:        f(x) = e^(−x) x^(λ−1)/Γ(λ), 0 < x < ∞, λ > 0;       M.G.F = 1/(1 − t)^λ;                     Mean = λ;          Variance = λ
7. Normal:       f(x) = (1/(σ√(2π))) e^{−(1/2)((x − µ)/σ)²};         M.G.F = e^{µt + t²σ²/2};                 Mean = µ;          Variance = σ²
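A small cross-check of a few rows of the table using scipy.stats (the parameter values are arbitrary assumptions):

from scipy import stats

n, p = 10, 0.3        # Binomial(n, p): mean np, variance npq
lam = 2.5             # Poisson(λ):     mean λ,  variance λ
a, b = 1.0, 4.0       # Uniform(a, b):  mean (a+b)/2, variance (b−a)²/12

print(stats.binom.stats(n, p, moments="mv"), (n * p, n * p * (1 - p)))
print(stats.poisson.stats(lam, moments="mv"), (lam, lam))
print(stats.uniform.stats(loc=a, scale=b - a, moments="mv"),
      ((a + b) / 2, (b - a) ** 2 / 12))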
16) Memoryless property of the exponential distribution: P(X > S + t / X > S) = P(X > t).
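A quick simulation of the memoryless property (the scale 1/λ = 1.5 and the values of S and t are assumptions):

import numpy as np

rng = np.random.default_rng(2)
X = rng.exponential(scale=1.5, size=1_000_000)   # scale = 1/λ
s, t = 2.0, 1.0

# memoryless property: P(X > s + t | X > s) = P(X > t)
lhs = np.mean(X[X > s] > s + t)
rhs = np.mean(X > t)
print(lhs, rhs)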
17) Function of a random variable: f_Y(y) = f_X(x) |dx/dy|
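A sketch of formula 17) for the assumed monotone transformation Y = e^X with X standard normal, comparing the transformed density with a histogram:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
X = rng.standard_normal(500_000)
Y = np.exp(X)                                    # monotone function of X, so x = ln y

hist, edges = np.histogram(Y, bins=100, range=(0.05, 5.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
fY = stats.norm.pdf(np.log(centers)) / centers   # f_Y(y) = f_X(ln y) · |dx/dy|, dx/dy = 1/y
print(np.max(np.abs(hist - fY)))                 # small (sampling noise only)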
1) ∑_i ∑_j p_ij = 1 (discrete random variables);  ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1 (continuous random variables)
2) Conditional probability function of X given Y: P{X = x_i / Y = y_i} = P(x, y) / P(y).
   Conditional probability function of Y given X: P{Y = y_i / X = x_i} = P(x, y) / P(x).
3) Conditional density function of X given Y: f(x/y) = f(x, y) / f(y).
   Conditional density function of Y given X: f(y/x) = f(x, y) / f(x).
4) If X and Y are independent random variables, then
   f(x, y) = f(x) · f(y) (continuous random variables)
   P(X = x, Y = y) = P(X = x) · P(Y = y) (discrete random variables)
5) Joint probability density function: P(a ≤ X ≤ b, c ≤ Y ≤ d) = ∫_c^d ∫_a^b f(x, y) dx dy.
6) Marginal density function of X: f(x) = f_X(x) = ∫_{−∞}^{∞} f(x, y) dy
   Marginal density function of Y: f(y) = f_Y(y) = ∫_{−∞}^{∞} f(x, y) dx
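A short sketch of formulas 1), 5) and 6) using the assumed joint density f(x, y) = x + y on the unit square:

from scipy import integrate

# assumed joint density f(x, y) = x + y on 0 < x < 1, 0 < y < 1 (it integrates to 1)
f = lambda y, x: x + y                 # dblquad integrates its first argument innermost

total, _ = integrate.dblquad(f, 0, 1, 0, 1)          # normalisation check
prob, _ = integrate.dblquad(f, 0, 0.5, 0, 0.5)       # P(0 ≤ X ≤ 0.5, 0 ≤ Y ≤ 0.5)

# marginal of X:  f_X(x) = ∫ f(x, y) dy  (here f_X(x) = x + 1/2)
fX = lambda x: integrate.quad(lambda y: x + y, 0, 1)[0]
print(total, prob, fX(0.3))            # 1.0, 0.125, 0.8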
7) P ( X + Y ≥ 1) = 1 − P ( X + Y < 1)
8) Correlation co-efficient (Discrete): ρ(x, y) = Cov(X, Y) / (σ_X σ_Y), where
   Cov(X, Y) = (1/n) ∑ XY − X̄ Ȳ,   σ_X = √((1/n) ∑ X² − X̄²),   σ_Y = √((1/n) ∑ Y² − Ȳ²)
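A sketch of formula 8) on a small assumed data set, compared with numpy's built-in correlation:

import numpy as np

# illustrative paired observations (data values are assumed)
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
n = len(X)

cov = np.sum(X * Y) / n - X.mean() * Y.mean()      # (1/n)ΣXY − X̄Ȳ
sx = np.sqrt(np.sum(X**2) / n - X.mean()**2)       # population σ_X
sy = np.sqrt(np.sum(Y**2) / n - Y.mean()**2)       # population σ_Y
rho = cov / (sx * sy)
print(rho, np.corrcoef(X, Y)[0, 1])                # both give the same ρ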
9) Correlation co-efficient (Continuous): ρ(x, y) = Cov(X, Y) / (σ_X σ_Y)
Regression line Y on X: y − ȳ = b_yx (x − x̄), where
   b_yx = ∑(x − x̄)(y − ȳ) / ∑(x − x̄)²   (equivalently, b_yx = r σ_y / σ_x)
Regression curve X on Y: x = E(x/y) = ∫_{−∞}^{∞} x f(x/y) dx
Regression curve Y on X: y = E(y/x) = ∫_{−∞}^{∞} y f(y/x) dy
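A minimal sketch of fitting the regression line Y on X from data (the data values are assumed):

import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.0, 4.1, 5.9, 8.2, 9.9])

b_yx = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean())**2)
predict = lambda x: Y.mean() + b_yx * (x - X.mean())    # y − ȳ = b_yx (x − x̄)
print(b_yx, predict(3.5))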
f_Y(y) = f_X(x) |dx/dy|  (one-dimensional random variable)
f_UV(u, v) = f_XY(x, y) |J|,  where J = ∂(x, y)/∂(u, v)  (two-dimensional random variable)
15) Central limit theorem (Liapounoff’s form)
If X1, X2, … Xn be a sequence of independent R.Vs with E[Xi] = µi and Var(Xi) = σi², i = 1, 2, …, n, and if Sn = X1 + X2 + … + Xn, then under certain general conditions Sn follows a normal distribution with mean µ = ∑_{i=1}^{n} µi and variance σ² = ∑_{i=1}^{n} σi² as n → ∞.
16) Central limit theorem (Lindeberg – Levy's form)
If X1, X2, … Xn be a sequence of independent, identically distributed R.Vs with E[Xi] = µ and Var(Xi) = σ², i = 1, 2, …, n, and if Sn = X1 + X2 + … + Xn, then under certain general conditions Sn follows a normal distribution with mean nµ and variance nσ² as n → ∞.
Note: z = (Sn − nµ) / (σ√n)  (for n variables),   z = (X̄ − µ) / (σ/√n)  (for single variables)
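A short simulation of the CLT statement in 16) above (exponential(1) samples are an assumed choice; there µ = 1 and σ = 1):

import numpy as np

rng = np.random.default_rng(4)
n, reps = 50, 100_000
samples = rng.exponential(scale=1.0, size=(reps, n))   # µ = 1, σ = 1 for exponential(1)

Sn = samples.sum(axis=1)
z = (Sn - n * 1.0) / (1.0 * np.sqrt(n))                # z = (Sn − nµ)/(σ√n)
print(z.mean(), z.std())                               # ≈ 0 and ≈ 1, as the CLT predicts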
1) Random Process:
A random process is a collection of random variables {X(s, t)} that are
functions of a real variable, namely time t, where s ∈ S and t ∈ T.
2) Classification of Random Processes:
We can classify the random process according to the characteristics of time t
and the random variable X. We shall consider only four cases based on t and X
having values in the ranges -∞< t <∞ and -∞ < x < ∞.
Continuous random process
Continuous random sequence
Discrete random process
Discrete random sequence
Continuous Random Process:
If both X and t are continuous, the random process is called a Continuous Random Process.
Example: If X(t) represents the maximum temperature at a place in the
interval (0,t), {X(t)} is a Continuous Random Process.
Continuous Random Sequence:
A random process for which X is continuous but time takes only discrete values is
called a Continuous Random Sequence.
Example: If Xn represents the temperature at the end of the nth hour of a day, then
{Xn, 1≤n≤24} is a Continuous Random Sequence.
Discrete Random Process:
If X assumes only discrete values and t is continuous, then we call such random
process {X(t)} as Discrete Random Process.
Example: If X(t) represents the number of telephone calls received in the interval
(0,t), then {X(t)} is a discrete random process, since S = {0,1,2,3, . . . }.
Discrete Random Sequence:
A random process in which both the random variable and time are discrete is called
Discrete Random Sequence.
Example: If Xn represents the outcome of the nth toss of a fair die, then {Xn : n≥1} is a
discrete random sequence, since T = {1,2,3, . . . } and S = {1,2,3,4,5,6}.
16) Poisson process:
If X ( t ) represents the number of occurrences of a certain event in (0, t ) ,then
the discrete random process { X ( t )} is called the Poisson process, provided the
following postulates are satisfied.
(i) P [1 occurrence in ( t , t + ∆t )] = λ∆t + O ( ∆t )
(ii) P [ 0 occurrence in ( t , t + ∆t )] = 1 − λ ∆t + O ( ∆t )
(iii) P [ 2 or more occurrences in ( t , t + ∆t )] = O ( ∆t )
(iv) X ( t ) is independent of the number of occurrences of the event in any
interval.
Probability law of the Poisson process: P[X(t) = x] = e^{−λt} (λt)^x / x!,  x = 0, 1, 2, …
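A simulation sketch of the Poisson process built from exponential inter-arrival times (λ, t and the truncation at 40 arrivals are assumptions for the example), compared with the probability law above:

import numpy as np
from math import exp, factorial

rng = np.random.default_rng(5)
lam, t, reps = 2.0, 3.0, 200_000

# build the process from exponential(λ) inter-arrival times and count events in (0, t]
inter = rng.exponential(scale=1 / lam, size=(reps, 40))   # 40 arrivals is ample for λt = 6
arrivals = np.cumsum(inter, axis=1)
counts = np.sum(arrivals <= t, axis=1)                    # X(t)

x = 5
print(np.mean(counts == x), exp(-lam * t) * (lam * t) ** x / factorial(x))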
RXX (τ ) - Auto correlation function
S XX ( ω ) - Power spectral density (or) Spectral density
RXY (τ ) - Cross correlation function
S XY ( ω ) - Cross power spectral density
S_XX(ω) = ∫_{−∞}^{∞} R_XX(τ) e^{−iωτ} dτ
C_XX(τ) = R_XX(τ) − E[X(t)] E[X(t + τ)]
R_XX(τ) = (1/2π) ∫_{−∞}^{∞} S_XX(ω) e^{iωτ} dω
S_XY(ω) = ∫_{−∞}^{∞} R_XY(τ) e^{−iωτ} dτ
5) General formulas:
i)   ∫ e^{ax} cos bx dx = e^{ax} (a cos bx + b sin bx) / (a² + b²)
ii)  ∫ e^{ax} sin bx dx = e^{ax} (a sin bx − b cos bx) / (a² + b²)
iii) x² + ax = (x + a/2)² − a²/4
iv)  sin θ = (e^{iθ} − e^{−iθ}) / (2i)
v)   cos θ = (e^{iθ} + e^{−iθ}) / 2
1) Linear system: f is called a linear system if it satisfies
   f [a1 X1(t) ± a2 X2(t)] = a1 f [X1(t)] ± a2 f [X2(t)]
2) Time-invariant system: If Y(t + h) = f [X(t + h)] for every h, then f is called a time-invariant system.
3) Relation between input X(t) and output Y(t):
   Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du,
   where h(u) is the system weighting function.
4) Relation between the power spectrum of X(t) and that of the output Y(t):
   S_YY(ω) = |H(ω)|² S_XX(ω)
If H(ω) is not given, use the formula H(ω) = ∫_{−∞}^{∞} e^{−jωt} h(t) dt.
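A numerical sketch of this formula for an assumed weighting function h(t) = e^{−t}, t > 0 (not from the sheet), whose transform is 1/(1 + jω):

import numpy as np
from scipy import integrate

h = lambda t: np.exp(-t)              # assumed impulse response, zero for t < 0
omega = 2.0

re, _ = integrate.quad(lambda t: h(t) * np.cos(omega * t), 0, np.inf)
im, _ = integrate.quad(lambda t: -h(t) * np.sin(omega * t), 0, np.inf)
H = re + 1j * im                      # H(ω) = ∫ e^{-jωt} h(t) dt
print(H, 1 / (1 + 1j * omega))        # analytic transform of this particular h(t)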
5) Contour integral:
   ∫_{−∞}^{∞} e^{imx} / (a² + x²) dx = (π/a) e^{−ma},  a > 0, m > 0   (one of the results)
6) F⁻¹[ 1/(a² + ω²) ] = e^{−a|τ|} / (2a)   (from the Fourier transform)
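A numerical check of the contour-integral result in 5) for assumed constants a and m:

import numpy as np
from scipy import integrate

a, m = 1.5, 2.0                        # arbitrary positive constants (assumed)

# ∫ e^{imx}/(a² + x²) dx over the real line: only the cosine part survives,
# and the integrand is even, so integrate over (0, ∞) with a cosine weight.
half, _ = integrate.quad(lambda x: 1 / (a**2 + x**2), 0, np.inf,
                         weight='cos', wvar=m)
print(2 * half, np.pi / a * np.exp(-m * a))   # both ≈ 0.104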