Probability & Random Process Formulas All Units

This document provides formulas and definitions for probability and random processes. It covers discrete and continuous random variables, their probability mass functions and probability density functions. It also defines expectations, variances, and moments. Formulas are given for common distributions like binomial, Poisson, geometric, uniform, exponential, gamma and normal. Properties of moment generating functions are outlined. The memoryless property of the exponential distribution and functions of random variables are also discussed.

UNIT-I (RANDOM VARIABLES)

1) Discrete random variable:
A random variable whose set of possible values is either finite or countably infinite is called a discrete random variable.
Eg: (i) Let X represent the sum of the numbers on two dice when they are thrown. In this case the random variable X takes the values 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12, so X is a discrete random variable.
(ii) The number of transmitted bits received in error.
2) Continuous random variable:
A random variable X is said to be continuous if it takes all possible values between certain limits.
Eg: The length of time during which a vacuum tube installed in a circuit functions is a continuous random variable.
3) Comparison of discrete and continuous random variables:

Sl.No. | Discrete random variable | Continuous random variable
1 | $\sum_{i=-\infty}^{\infty} p(x_i) = 1$ | $\int_{-\infty}^{\infty} f(x)\,dx = 1$
2 | $F(x) = P[X \le x]$ | $F(x) = P[X \le x] = \int_{-\infty}^{x} f(x)\,dx$
3 | Mean $= E[X] = \sum_i x_i\,p(x_i)$ | Mean $= E[X] = \int_{-\infty}^{\infty} x\,f(x)\,dx$
4 | $E[X^2] = \sum_i x_i^2\,p(x_i)$ | $E[X^2] = \int_{-\infty}^{\infty} x^2 f(x)\,dx$
5 | $\mathrm{Var}(X) = E(X^2) - [E(X)]^2$ | $\mathrm{Var}(X) = E(X^2) - [E(X)]^2$
6 | Moment $= E[X^r] = \sum_i x_i^r\,p_i$ | Moment $= E[X^r] = \int_{-\infty}^{\infty} x^r f(x)\,dx$
7 | M.G.F: $M_X(t) = E[e^{tX}] = \sum_x e^{tx}\,p(x)$ | M.G.F: $M_X(t) = E[e^{tX}] = \int_{-\infty}^{\infty} e^{tx} f(x)\,dx$
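As a quick numerical illustration of the discrete column above, here is a sketch in Python using the two-dice example from item 1 (all values are easy to verify by hand):

```python
import numpy as np

# X = sum of two fair dice (the discrete example in item 1).
values = np.arange(2, 13)                        # possible sums 2..12
pmf = np.array([min(v - 1, 13 - v) for v in values]) / 36.0

assert np.isclose(pmf.sum(), 1.0)                # sum_i p(x_i) = 1
mean = np.sum(values * pmf)                      # E[X] = sum x_i p(x_i) -> 7.0
ex2 = np.sum(values**2 * pmf)                    # E[X^2] = sum x_i^2 p(x_i)
var = ex2 - mean**2                              # Var(X) = E(X^2) - [E(X)]^2 -> 35/6

def mgf(t):
    """M_X(t) = E[e^{tX}] = sum_x e^{tx} p(x)."""
    return np.sum(np.exp(t * values) * pmf)

print(mean, var, mgf(0.0))                       # M_X(0) = 1 always
```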

4) $E(aX + b) = a\,E(X) + b$
5) $\mathrm{Var}(aX + b) = a^2\,\mathrm{Var}(X)$
6) $\mathrm{Var}(aX \pm bY) = a^2\,\mathrm{Var}(X) + b^2\,\mathrm{Var}(Y)$, when X and Y are independent.
7) Standard deviation $= \sqrt{\mathrm{Var}(X)}$
8) $f(x) = F'(x)$
9) $P(X > a) = 1 - P(X \le a)$
10) $P(A/B) = \dfrac{P(A \cap B)}{P(B)}$, $P(B) \ne 0$
11) If A and B are independent, then $P(A \cap B) = P(A) \cdot P(B)$.

12) 1st moment about the origin $= E[X] = M_X'(t)\big|_{t=0}$ (the mean)
2nd moment about the origin $= E[X^2] = M_X''(t)\big|_{t=0}$
The coefficient of $\dfrac{t^r}{r!}$ in $M_X(t)$ gives $E[X^r]$ (the rth moment about the origin).
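As a worked check of recovering moments by differentiating the M.G.F., here is a sketch using sympy with the Poisson M.G.F. $e^{\lambda(e^t - 1)}$ taken from the table in item 15 below:

```python
import sympy as sp

t, lam = sp.symbols('t lam', positive=True)
M = sp.exp(lam * (sp.exp(t) - 1))        # Poisson M.G.F. from item 15

mean = sp.diff(M, t).subs(t, 0)          # M_X'(0)  -> lam
ex2 = sp.diff(M, t, 2).subs(t, 0)        # M_X''(0) -> lam**2 + lam
var = sp.simplify(ex2 - mean**2)         # -> lam, matching the table
print(mean, ex2, var)
```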
13) Limitations of M.G.F:
i) A random variable X may have no moments although its m.g.f. exists.
ii) A random variable X can have its m.g.f. and some or all moments, yet the m.g.f. does not generate the moments.
iii) A random variable X can have all or some moments, but the m.g.f. does not exist except perhaps at one point.
14) Properties of M.G.F:
i) If $Y = aX + b$, then $M_Y(t) = e^{bt} M_X(at)$.
ii) $M_{cX}(t) = M_X(ct)$, where $c$ is a constant.
iii) If X and Y are two independent random variables, then $M_{X+Y}(t) = M_X(t) \cdot M_Y(t)$.
15) P.D.F, M.G.F, Mean and Variance of the standard distributions:

Sl.No. | Distribution | P.D.F ($P(X = x)$ or $f(x)$) | M.G.F | Mean | Variance
1 | Binomial | $nC_x\, p^x q^{n-x}$ | $(q + pe^t)^n$ | $np$ | $npq$
2 | Poisson | $\dfrac{e^{-\lambda}\lambda^x}{x!}$ | $e^{\lambda(e^t - 1)}$ | $\lambda$ | $\lambda$
3 | Geometric | $q^{x-1}p$ (or) $q^x p$ | $\dfrac{pe^t}{1 - qe^t}$ | $\dfrac{1}{p}$ | $\dfrac{q}{p^2}$
4 | Uniform | $f(x) = \dfrac{1}{b-a},\ a < x < b$; $0$ otherwise | $\dfrac{e^{bt} - e^{at}}{(b-a)t}$ | $\dfrac{a+b}{2}$ | $\dfrac{(b-a)^2}{12}$
5 | Exponential | $f(x) = \lambda e^{-\lambda x},\ x > 0,\ \lambda > 0$; $0$ otherwise | $\dfrac{\lambda}{\lambda - t}$ | $\dfrac{1}{\lambda}$ | $\dfrac{1}{\lambda^2}$
6 | Gamma | $f(x) = \dfrac{e^{-x} x^{\lambda-1}}{\Gamma(\lambda)},\ 0 < x < \infty,\ \lambda > 0$ | $\dfrac{1}{(1-t)^\lambda}$ | $\lambda$ | $\lambda$
7 | Normal | $f(x) = \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$ | $e^{\mu t + \frac{t^2\sigma^2}{2}}$ | $\mu$ | $\sigma^2$
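The mean and variance columns can be cross-checked with scipy.stats (a sketch; the parameter values n = 10, p = 0.3, λ = 2 are arbitrary illustration choices, and note that scipy parameterises the exponential by scale = 1/λ):

```python
from scipy import stats

n, p, lam = 10, 0.3, 2.0

print(stats.binom.stats(n, p))           # (np, npq) = (3.0, 2.1)
print(stats.poisson.stats(lam))          # (lam, lam) = (2.0, 2.0)
print(stats.geom.stats(p))               # (1/p, q/p^2) for the q^(x-1) p form
print(stats.expon.stats(scale=1/lam))    # (1/lam, 1/lam^2)
print(stats.gamma.stats(lam))            # (lam, lam) for the gamma above
```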
 
16) Memoryless property of the exponential distribution:
$P(X > s + t \mid X > s) = P(X > t)$
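A small Monte Carlo check of the memoryless property (a sketch; the values of λ, s and t are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, s, t = 1.5, 0.4, 0.8
x = rng.exponential(scale=1/lam, size=1_000_000)

lhs = np.mean(x[x > s] > s + t)    # estimates P(X > s+t | X > s)
rhs = np.mean(x > t)               # estimates P(X > t)
print(lhs, rhs)                    # both close to exp(-lam*t) ~ 0.301
```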
17) Function of a random variable: $f_Y(y) = f_X(x)\left|\dfrac{dx}{dy}\right|$

UNIT-II (TWO-DIMENSIONAL RANDOM VARIABLES)

1) $\sum_i \sum_j p_{ij} = 1$ (discrete random variables)
$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x, y)\,dx\,dy = 1$ (continuous random variables)

2) Conditional probability function of X given Y: $P\{X = x_i \mid Y = y_j\} = \dfrac{P(x_i, y_j)}{P(y_j)}$
Conditional probability function of Y given X: $P\{Y = y_j \mid X = x_i\} = \dfrac{P(x_i, y_j)}{P(x_i)}$
$P\{X < a \mid Y < b\} = \dfrac{P(X < a,\ Y < b)}{P(Y < b)}$

3) Conditional density function of X given Y: $f(x \mid y) = \dfrac{f(x, y)}{f(y)}$
Conditional density function of Y given X: $f(y \mid x) = \dfrac{f(x, y)}{f(x)}$
4) If X and Y are independent random variables, then
$f(x, y) = f(x) \cdot f(y)$ (continuous random variables)
$P(X = x,\ Y = y) = P(X = x) \cdot P(Y = y)$ (discrete random variables)
5) Joint probability density function:
$P(a \le X \le b,\ c \le Y \le d) = \int_c^d \int_a^b f(x, y)\,dx\,dy$
$P(X < a,\ Y < b) = \int_0^b \int_0^a f(x, y)\,dx\,dy$ (for a joint p.d.f. supported on $x \ge 0,\ y \ge 0$)

6) Marginal density function of X: $f(x) = f_X(x) = \int_{-\infty}^{\infty} f(x, y)\,dy$
Marginal density function of Y: $f(y) = f_Y(y) = \int_{-\infty}^{\infty} f(x, y)\,dx$
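These definitions are easy to verify numerically. A sketch with scipy, assuming the illustrative joint p.d.f. f(x, y) = x + y on the unit square (an example chosen here, not one from this document):

```python
import numpy as np
from scipy import integrate

f = lambda y, x: x + y                       # dblquad integrates over its first argument

total, _ = integrate.dblquad(f, 0, 1, 0, 1)  # double integral = 1, so f is a valid pdf
print(total)

def marginal_x(x):
    """f_X(x) = integral of f(x, y) dy; analytically x + 1/2 here."""
    val, _ = integrate.quad(lambda y: f(y, x), 0, 1)
    return val

print(marginal_x(0.3))                       # -> 0.8
```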

7) $P(X + Y \ge 1) = 1 - P(X + Y < 1)$

8) Correlation coefficient (discrete): $\rho(x, y) = \dfrac{\mathrm{Cov}(X, Y)}{\sigma_X \sigma_Y}$
$\mathrm{Cov}(X, Y) = \dfrac{1}{n}\sum XY - \bar{X}\bar{Y}$, $\sigma_X = \sqrt{\dfrac{1}{n}\sum X^2 - \bar{X}^2}$, $\sigma_Y = \sqrt{\dfrac{1}{n}\sum Y^2 - \bar{Y}^2}$

9) Correlation coefficient (continuous): $\rho(x, y) = \dfrac{\mathrm{Cov}(X, Y)}{\sigma_X \sigma_Y}$
$\mathrm{Cov}(X, Y) = E(XY) - E(X)\,E(Y)$, $\sigma_X = \sqrt{\mathrm{Var}(X)}$, $\sigma_Y = \sqrt{\mathrm{Var}(Y)}$

10) If X and Y are uncorrelated random variables, then $\mathrm{Cov}(X, Y) = 0$.

11) $E(X) = \int_{-\infty}^{\infty} x f(x)\,dx$, $E(Y) = \int_{-\infty}^{\infty} y f(y)\,dy$, $E(XY) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\, f(x, y)\,dx\,dy$

12) Regression for Discrete random variables:
Regression line of X on Y: $x - \bar{x} = b_{xy}(y - \bar{y})$, where $b_{xy} = \dfrac{\sum (x - \bar{x})(y - \bar{y})}{\sum (y - \bar{y})^2}$
Regression line of Y on X: $y - \bar{y} = b_{yx}(x - \bar{x})$, where $b_{yx} = \dfrac{\sum (x - \bar{x})(y - \bar{y})}{\sum (x - \bar{x})^2}$
Correlation through the regression coefficients: $\rho = \pm\sqrt{b_{xy}\, b_{yx}}$. Note: $\rho(x, y) = r(x, y)$.

13) Regression for Continuous random variables:
Regression line of X on Y: $x - E(x) = b_{xy}\,(y - E(y))$, where $b_{xy} = r\,\dfrac{\sigma_x}{\sigma_y}$
Regression line of Y on X: $y - E(y) = b_{yx}\,(x - E(x))$, where $b_{yx} = r\,\dfrac{\sigma_y}{\sigma_x}$
Regression curve of X on Y: $x = E(x \mid y) = \int_{-\infty}^{\infty} x\, f(x \mid y)\,dx$
Regression curve of Y on X: $y = E(y \mid x) = \int_{-\infty}^{\infty} y\, f(y \mid x)\,dy$
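The two regression lines and the relation $\rho = \pm\sqrt{b_{xy} b_{yx}}$ can be checked on data (a sketch; the sample values are arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

cov = np.mean(x * y) - x.mean() * y.mean()     # Cov(X, Y) with the 1/n convention
byx = cov / x.var()                            # slope of Y on X
bxy = cov / y.var()                            # slope of X on Y

rho = np.sign(cov) * np.sqrt(byx * bxy)        # rho = +/- sqrt(b_xy * b_yx)
print(byx, bxy, rho, np.corrcoef(x, y)[0, 1])  # rho matches np.corrcoef
```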

14) Transformation of random variables:
$f_Y(y) = f_X(x)\left|\dfrac{dx}{dy}\right|$ (one-dimensional random variable)
$f_{UV}(u, v) = f_{XY}(x, y)\,|J|$, where $J = \dfrac{\partial(x, y)}{\partial(u, v)} = \begin{vmatrix} \partial x/\partial u & \partial x/\partial v \\ \partial y/\partial u & \partial y/\partial v \end{vmatrix}$ (two-dimensional random variable)
15) Central limit theorem (Liapounoff's form)
If $X_1, X_2, \dots, X_n$ is a sequence of independent random variables with $E[X_i] = \mu_i$ and $\mathrm{Var}(X_i) = \sigma_i^2$, $i = 1, 2, \dots, n$, and if $S_n = X_1 + X_2 + \dots + X_n$, then under certain general conditions $S_n$ follows a normal distribution with mean $\mu = \sum_{i=1}^{n} \mu_i$ and variance $\sigma^2 = \sum_{i=1}^{n} \sigma_i^2$ as $n \to \infty$.

16) Central limit theorem (Lindeberg–Levy's form)
If $X_1, X_2, \dots, X_n$ is a sequence of independent, identically distributed random variables with $E[X_i] = \mu$ and $\mathrm{Var}(X_i) = \sigma^2$, $i = 1, 2, \dots, n$, and if $S_n = X_1 + X_2 + \dots + X_n$, then under certain general conditions $S_n$ follows a normal distribution with mean $n\mu$ and variance $n\sigma^2$ as $n \to \infty$.

Note: $z = \dfrac{S_n - n\mu}{\sigma\sqrt{n}}$ (for the sum of $n$ variables), $z = \dfrac{\bar{X} - \mu}{\sigma/\sqrt{n}}$ (for the sample mean).
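A simulation of the Lindeberg–Levy form (a sketch; Exponential(1) summands are an arbitrary illustrative choice, with µ = σ = 1):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 400, 10_000
mu, sigma = 1.0, 1.0                      # mean and std of Exponential(1)

s = rng.exponential(1.0, size=(trials, n)).sum(axis=1)
z = (s - n * mu) / (sigma * np.sqrt(n))   # standardised sums

print(z.mean(), z.std())                  # ~0 and ~1, as N(0,1) predicts
```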

UNIT-III (MARKOV PROCESSES AND MARKOV CHAINS)

1) Random Process:
A random process is a collection of random variables {X(s,t)} that are functions of a real variable, namely time t, where s ∈ S and t ∈ T.

2) Classification of Random Processes:
We can classify a random process according to the characteristics of time t and the random variable X. We shall consider only four cases, based on t and X having values in the ranges −∞ < t < ∞ and −∞ < x < ∞:
 
Continuous random process 

Continuous random sequence 

Discrete random process 

Discrete random sequence 

Continuous random process:
If X and t are both continuous, then X(t) is called a Continuous Random Process.
Example: If X(t) represents the maximum temperature at a place in the interval (0,t), then {X(t)} is a Continuous Random Process.

Continuous Random Sequence:
A random process for which X is continuous but time takes only discrete values is called a Continuous Random Sequence.
Example: If Xn represents the temperature at the end of the nth hour of a day, then {Xn, 1 ≤ n ≤ 24} is a Continuous Random Sequence.

Discrete Random Process:
If X assumes only discrete values and t is continuous, then we call such a random process {X(t)} a Discrete Random Process.
Example: If X(t) represents the number of telephone calls received in the interval (0,t), then {X(t)} is a Discrete Random Process, since S = {0, 1, 2, 3, ...}.

Discrete Random Sequence:
A random process in which both the random variable and time are discrete is called a Discrete Random Sequence.
Example: If Xn represents the outcome of the nth toss of a fair die, then {Xn : n ≥ 1} is a Discrete Random Sequence, since T = {1, 2, 3, ...} and S = {1, 2, 3, 4, 5, 6}.

3) Condition for a stationary process: $E[X(t)] = \text{constant}$, $\mathrm{Var}[X(t)] = \text{constant}$.
If the process is not stationary, then it is called evolutionary.
 
4) Wide Sense Stationary (or) Weak Sense Stationary (or) Covariance Stationary:
A random process is said to be WSS or covariance stationary if it satisfies the following conditions:
i) The mean of the process is constant, i.e. $E(X(t)) = \text{constant}$.
ii) The auto correlation function depends only on $\tau$, i.e. $R_{XX}(\tau) = E[X(t)\,X(t + \tau)]$.
5) Time average:
The time average of a random process $\{X(t)\}$ is defined as $\bar{X}_T = \dfrac{1}{2T}\int_{-T}^{T} X(t)\,dt$.
If the interval is $(0, T)$, then the time average is $\bar{X}_T = \dfrac{1}{T}\int_{0}^{T} X(t)\,dt$.
6) Ergodic Process:
A random process $\{X(t)\}$ is called ergodic if all its ensemble averages are interchangeable with the corresponding time average $\bar{X}_T$.

7) Mean ergodic:
Let $\{X(t)\}$ be a random process with mean $E[X(t)] = \mu$ and time average $\bar{X}_T$; then $\{X(t)\}$ is said to be mean ergodic if $\bar{X}_T \to \mu$ as $T \to \infty$, i.e. $E[X(t)] = \lim_{T \to \infty} \bar{X}_T$.
Note: $\lim_{T \to \infty} \mathrm{Var}(\bar{X}_T) = 0$ (by the mean ergodic theorem).

8) Correlation ergodic process:
The stationary process $\{X(t)\}$ is said to be correlation ergodic if the process $\{Y(t)\}$ is mean ergodic, where $Y(t) = X(t)\,X(t + \tau)$, i.e. $E(Y(t)) = \lim_{T \to \infty} \bar{Y}_T$, where $\bar{Y}_T$ is the time average of $Y(t)$.
9) Auto covariance function:
$C_{XX}(\tau) = R_{XX}(\tau) - E(X(t))\,E(X(t + \tau))$
10) Mean and variance of time average:
Mean: $E[\bar{X}_T] = \dfrac{1}{T}\int_0^T E[X(t)]\,dt$
Variance: $\mathrm{Var}(\bar{X}_T) = \dfrac{1}{2T}\int_{-2T}^{2T}\left(1 - \dfrac{|\tau|}{2T}\right) C_{XX}(\tau)\,d\tau$

11) Markov process:
A random process in which the future value depends only on the present value and not on the past values is called a Markov process. It is symbolically represented by
$P[X(t_{n+1}) \le x_{n+1} \mid X(t_n) = x_n, X(t_{n-1}) = x_{n-1}, \dots, X(t_0) = x_0] = P[X(t_{n+1}) \le x_{n+1} \mid X(t_n) = x_n]$
where $t_0 \le t_1 \le t_2 \le \dots \le t_n \le t_{n+1}$.
12) Markov Chain:
If, for all $n$,
$P[X_n = a_n \mid X_{n-1} = a_{n-1}, X_{n-2} = a_{n-2}, \dots, X_0 = a_0] = P[X_n = a_n \mid X_{n-1} = a_{n-1}]$
then the process $\{X_n\}$, $n = 0, 1, 2, \dots$ is called a Markov chain, where $a_0, a_1, a_2, \dots, a_n, \dots$ are called the states of the Markov chain.
13) Transition Probability Matrix (tpm):
When the Markov chain is homogeneous, the one-step transition probability is denoted by $P_{ij}$. The matrix $P = \{P_{ij}\}$ is called the transition probability matrix.

14) Chapman–Kolmogorov theorem:
If $P$ is the tpm of a homogeneous Markov chain, then the $n$-step tpm $P^{(n)}$ is equal to $P^n$, i.e. $P_{ij}^{(n)} = \left(P^n\right)_{ij}$.

15) Markov Chain property: If $\Pi = (\Pi_1, \Pi_2, \Pi_3)$ is the stationary distribution, then $\Pi P = \Pi$ and $\Pi_1 + \Pi_2 + \Pi_3 = 1$.
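A sketch of both results on an arbitrary 3-state homogeneous chain: the stationary distribution Π is computed as a left eigenvector of P for eigenvalue 1, and the 2-step tpm is checked against P²:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Left eigenvector of P for eigenvalue 1, normalised so the entries sum to 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.isclose(w, 1.0)][:, 0])
pi = pi / pi.sum()

print(pi, np.allclose(pi @ P, pi))       # Pi P = Pi, sum(Pi) = 1
print(np.linalg.matrix_power(P, 2))      # n-step tpm P(n) = P^n, here n = 2
```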

16) Poisson process:
If $X(t)$ represents the number of occurrences of a certain event in $(0, t)$, then the discrete random process $\{X(t)\}$ is called the Poisson process, provided the following postulates are satisfied:
(i) $P[\text{1 occurrence in } (t, t + \Delta t)] = \lambda \Delta t + O(\Delta t)$
(ii) $P[\text{0 occurrences in } (t, t + \Delta t)] = 1 - \lambda \Delta t + O(\Delta t)$
(iii) $P[\text{2 or more occurrences in } (t, t + \Delta t)] = O(\Delta t)$
(iv) $X(t)$ is independent of the number of occurrences of the event in any other interval.

17) Probability law of the Poisson process: $P\{X(t) = x\} = \dfrac{e^{-\lambda t} (\lambda t)^x}{x!}$, $x = 0, 1, 2, \dots$
Mean $E[X(t)] = \lambda t$, $E[X^2(t)] = \lambda^2 t^2 + \lambda t$, $\mathrm{Var}[X(t)] = \lambda t$.
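A simulation consistent with this law (a sketch; events are generated from Exp(λ) inter-arrival times, a standard construction of the Poisson process, with arbitrary λ and t):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, t, trials = 3.0, 2.0, 20_000

def count_events(lam, t):
    """Number of events in (0, t) with Exp(lam) inter-arrival times."""
    total, n = 0.0, 0
    while True:
        total += rng.exponential(1 / lam)
        if total > t:
            return n
        n += 1

counts = np.array([count_events(lam, t) for _ in range(trials)])
print(counts.mean(), counts.var())   # both ~ lam * t = 6.0
```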

UNIT-IV (CORRELATION AND SPECTRAL DENSITY)

$R_{XX}(\tau)$ - Auto correlation function
$S_{XX}(\omega)$ - Power spectral density (or) Spectral density
$R_{XY}(\tau)$ - Cross correlation function
$S_{XY}(\omega)$ - Cross power spectral density

1) Auto correlation to Power spectral density (spectral density):
$S_{XX}(\omega) = \int_{-\infty}^{\infty} R_{XX}(\tau)\, e^{-i\omega\tau}\, d\tau$

2) Power spectral density to Auto correlation:
$R_{XX}(\tau) = \dfrac{1}{2\pi}\int_{-\infty}^{\infty} S_{XX}(\omega)\, e^{i\omega\tau}\, d\omega$
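As a numeric check of this transform pair (a sketch; the autocorrelation $R_{XX}(\tau) = e^{-a|\tau|}$, whose spectral density is $2a/(a^2 + \omega^2)$, is a standard example, and the values of a and ω are arbitrary):

```python
import numpy as np
from scipy import integrate

a, w = 1.5, 0.7

# S_XX(w) = integral of R_XX(tau) e^{-i w tau} dtau; the odd (sine) part
# vanishes by symmetry, so only the cosine part needs integrating.
S, _ = integrate.quad(lambda tau: np.exp(-a * abs(tau)) * np.cos(w * tau),
                      -np.inf, np.inf)
print(S, 2 * a / (a**2 + w**2))   # both ~ 1.095
```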

3) Condition for $X(t)$ and $X(t + \tau)$ to be uncorrelated random processes:
$C_{XX}(\tau) = R_{XX}(\tau) - E[X(t)]\,E[X(t + \tau)] = 0$

4) Cross power spectrum to Cross correlation:
$R_{XY}(\tau) = \dfrac{1}{2\pi}\int_{-\infty}^{\infty} S_{XY}(\omega)\, e^{i\omega\tau}\, d\omega$
5) General formulas:
i) $\int e^{ax} \cos bx\, dx = \dfrac{e^{ax}}{a^2 + b^2}\,(a\cos bx + b\sin bx)$
ii) $\int e^{ax} \sin bx\, dx = \dfrac{e^{ax}}{a^2 + b^2}\,(a\sin bx - b\cos bx)$
iii) $x^2 + ax = \left(x + \dfrac{a}{2}\right)^2 - \dfrac{a^2}{4}$
iv) $\sin\theta = \dfrac{e^{i\theta} - e^{-i\theta}}{2i}$
v) $\cos\theta = \dfrac{e^{i\theta} + e^{-i\theta}}{2}$

UNIT-V (LINEAR SYSTEMS WITH RANDOM INPUTS)

1) Linear system:
$f$ is called a linear system if it satisfies
$f[a_1 X_1(t) \pm a_2 X_2(t)] = a_1 f[X_1(t)] \pm a_2 f[X_2(t)]$

2) Time-invariant system:
Let $Y(t) = f(X(t))$. If $Y(t + h) = f(X(t + h))$, then $f$ is called a time-invariant system.
3) Relation between input $X(t)$ and output $Y(t)$:
$Y(t) = \int_{-\infty}^{\infty} h(u)\, X(t - u)\, du$
where $h(u)$ is the system weighting function (impulse response).
4) Relation between the power spectra of input $X(t)$ and output $Y(t)$:
$S_{YY}(\omega) = S_{XX}(\omega)\, \left| H(\omega) \right|^2$
If $H(\omega)$ is not given, use $H(\omega) = \int_{-\infty}^{\infty} e^{-j\omega t}\, h(t)\, dt$.
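A numeric check of the $|H(\omega)|^2$ ingredient of this relation (a sketch; the weighting function $h(t) = e^{-t}$ for $t > 0$ is an arbitrary illustrative choice, with closed form $|H(\omega)|^2 = 1/(1 + \omega^2)$):

```python
import numpy as np
from scipy import integrate

w = 2.0  # arbitrary frequency

# H(w) = integral of e^{-jwt} h(t) dt, split into real and imaginary parts.
Hr, _ = integrate.quad(lambda t: np.exp(-t) * np.cos(w * t), 0, np.inf)
Hi, _ = integrate.quad(lambda t: -np.exp(-t) * np.sin(w * t), 0, np.inf)

print(Hr**2 + Hi**2, 1 / (1 + w**2))   # |H(w)|^2, both 0.2
```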

5) Contour integral (a useful result):
$\int_{-\infty}^{\infty} \dfrac{e^{imx}}{a^2 + x^2}\, dx = \dfrac{\pi}{a}\, e^{-ma}$ ($m > 0,\ a > 0$)

6) $F^{-1}\left[\dfrac{1}{a^2 + \omega^2}\right] = \dfrac{e^{-a|\tau|}}{2a}$ (from the Fourier transform)
