
Probability, Random Variables, and Stochastic Processes

Mohammad Hadi

[email protected]
@MohammadHadiDastgerdi

Fall 2020



Overview

1 Probability

2 Random Variables

3 Random Processes

4 Gaussian, White, and Bandpass Processes

5 Thermal Noise



Probability



Sample Space, Events, and Probability

A random experiment is any experiment whose outcome cannot be predicted with certainty.
A random experiment has certain outcomes ω ∈ Ω.
The set of all possible outcomes is called the sample space Ω.
A sample space is discrete if the number of its elements is finite or countably infinite; otherwise, it is a nondiscrete sample space.
Events are subsets of the sample space, i.e., E ⊂ Ω.
Events are disjoint if their intersection is empty, i.e., Ei ∩ Ej = ∅.



Sample Space, Events, and Probability

Definition (Probability Axioms)


A probability P is defined as a set function assigning nonnegative values to
all events E such that
1 0 ≤ P(E ) ≤ 1 for all events.
2 P(Ω) = 1.
3 For disjoint events E1, E2, · · ·, P(∪_{i=1}^{∞} Ei) = Σ_{i=1}^{∞} P(Ei).

1 P(E^c) = 1 − P(E), where E^c = Ω \ E.
2 P(∅) = 0.
3 P(E1 ∪ E2 ) = P(E1 ) + P(E2 ) − P(E1 ∩ E2 ).
4 E1 ⊆ E2 ⇒ P(E1 ) ≤ P(E2 ).



Conditional Probability

Definition (Conditional Probability)


The conditional probability of the event E1 given the event E2 is defined by

P(E1|E2) = P(E1 ∩ E2)/P(E2) if P(E2) ≠ 0, and P(E1|E2) = 0 if P(E2) = 0.



Conditional Probability

1 The events E1 and E2 are said to be independent if P(E1 |E2 ) = P(E1 ).


2 For independent events, P(E1 ∩ E2 ) = P(E1 )P(E2 ).
3 If the events {Ei}_{i=1}^{n} are disjoint and their union is the entire sample space, then they make a partition of the sample space Ω.
4 The total probability theorem states that for an event A, P(A) = Σ_{i=1}^{n} P(Ei)P(A|Ei).
5 Bayes' rule gives the conditional probabilities P(Ei|A) by

P(Ei|A) = P(Ei)P(A|Ei)/P(A) = P(Ei)P(A|Ei) / [Σ_{j=1}^{n} P(Ej)P(A|Ej)]
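A quick Monte Carlo sketch in Python can confirm both results numerically; the two-coin experiment, its probabilities, and all variable names below are illustrative assumptions, not part of the lecture.

import random

# Hypothetical experiment: pick a fair coin (E1) or a biased coin (E2,
# P(heads) = 0.8) with equal probability, then flip it. A = heads observed.
random.seed(0)
N = 100_000
count_A = 0
count_E1_and_A = 0

for _ in range(N):
    fair = random.random() < 0.5            # P(E1) = P(E2) = 0.5
    p_heads = 0.5 if fair else 0.8
    if random.random() < p_heads:           # event A occurred
        count_A += 1
        if fair:
            count_E1_and_A += 1

# Total probability: P(A) = 0.5*0.5 + 0.5*0.8 = 0.65
print(count_A / N)
# Bayes' rule: P(E1|A) = 0.5*0.5/0.65 ≈ 0.3846
print(count_E1_and_A / count_A)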



Random Variables



Random Variables

Definition (Random Variable)


A random variable is a mapping from the sample space Ω to the set of real
numbers.

Figure: A random variable as a mapping from Ω to R.



Random Variables

Definition (Cumulative Distribution Function (CDF))


The cumulative distribution function or CDF of a random variable X is
defined as
FX(x) = P{ω ∈ Ω : X(ω) ≤ x} = P{X ≤ x}

1 0 ≤ FX (x) ≤ 1.
2 FX (−∞) = 0, FX (∞) = 1.
3 P(a < X ≤ b) = FX (b) − FX (a).



Random Variables

Figure: CDF for a (a) continuous (b) discrete (c) mixed random variable.



Random Variables

Definition (Probability Density Function (PDF))


The probability density function or PDF of a random variable X is defined
as
fX(x) = dFX(x)/dx

1 fX(x) ≥ 0.
2 ∫_{−∞}^{∞} fX(x)dx = 1.
3 P(a < X ≤ b) = ∫_{a}^{b} fX(x)dx.
4 FX(x) = ∫_{−∞}^{x⁺} fX(u)du.



Random Variables

Definition (Probability Mass Function (PMF))


The probability mass function or PMF of a discrete random variable X is
defined as
pi = P{X = xi}

1 pi ≥ 0.
2 Σ_{i} pi = 1.



Important Random Variables

Statement (Bernoulli Random Variable)


The Bernoulli random variable is a discrete random variable taking two
values 1 and 0, with probabilities p and 1 − p.

Figure: The PMF for the Bernoulli random variable.



Important Random Variables

Statement (Binomial Random Variable)


The binomial random variable is a discrete random variable giving the number of 1's in n independent Bernoulli trials. The PMF is given by

P{X = k} = (n choose k) p^k (1 − p)^{n−k} for 0 ≤ k ≤ n, and P{X = k} = 0 otherwise.
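A small Python sanity check of this PMF against simulation; the values n = 10 and p = 0.3 are illustrative choices, not from the lecture.

import math, random

n, p = 10, 0.3
# Theoretical binomial PMF: (n choose k) p^k (1-p)^(n-k)
pmf = [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

# Empirical PMF from repeated runs of n Bernoulli trials
random.seed(1)
N = 100_000
counts = [0] * (n + 1)
for _ in range(N):
    k = sum(random.random() < p for _ in range(n))   # number of 1's in n trials
    counts[k] += 1

for k in range(n + 1):
    print(k, round(pmf[k], 5), counts[k] / N)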

Figure: The PMF for the binomial random variable.



Important Random Variables

Statement (Uniform Random Variable)


The uniform random variable is a continuous random variable taking values between a and b with equal probabilities for intervals of equal length. The density function is given by

fX(x) = 1/(b − a) for a ≤ x ≤ b, and fX(x) = 0 otherwise.

Figure: The PDF for the uniform random variable.



Important Random Variables
Statement (Gaussian Random Variable)
The Gaussian, or normal, random variable N(m, σ²) is a continuous random variable described by the density function

fX(x) = (1/(√(2π) σ)) e^{−(x−m)²/(2σ²)}

where m, σ, and σ² are named the mean, standard deviation, and variance.

Figure: The PDF for the Gaussian random variable.



Important Random Variables

Statement (Q Function)
Assuming that X is a standard normal random variable N(0, 1), the function Q(x) is defined as

Q(x) = P{X > x} = ∫_{x}^{∞} (1/√(2π)) e^{−t²/2} dt

Figure: The Q-function as the area under the tail of a standard normal random variable.



Important Random Variables

The Q function has the following properties:

1 Q(−∞) = 1, Q(0) = 0.5, Q(+∞) = 0.
2 Q(−x) = 1 − Q(x).

The important bounds on the Q function are

1 Q(x) ≤ (1/2) e^{−x²/2}, x ≥ 0.
2 Q(x) < (1/(√(2π) x)) e^{−x²/2}, x ≥ 0.
3 Q(x) > (1/(√(2π) x))(1 − 1/x²) e^{−x²/2}, x > 1.

For an N(m, σ²) random variable,

FX(x) = P{X ≤ x} = 1 − Q((x − m)/σ).



Important Random Variables
x    Q(x)                x    Q(x)                x    Q(x)
0.0  5.000000 × 10^−01   2.4  8.197534 × 10^−03   4.8  7.933274 × 10^−07
0.1  4.601722 × 10^−01   2.5  6.209665 × 10^−03   4.9  4.791830 × 10^−07
0.2  4.207403 × 10^−01   2.6  4.661189 × 10^−03   5.0  2.866516 × 10^−07
0.3  3.820886 × 10^−01   2.7  3.466973 × 10^−03   5.1  1.698268 × 10^−07
0.4  3.445783 × 10^−01   2.8  2.555131 × 10^−03   5.2  9.964437 × 10^−08
0.5  3.085375 × 10^−01   2.9  1.865812 × 10^−03   5.3  5.790128 × 10^−08
0.6  2.742531 × 10^−01   3.0  1.349898 × 10^−03   5.4  3.332043 × 10^−08
0.7  2.419637 × 10^−01   3.1  9.676035 × 10^−04   5.5  1.898956 × 10^−08
0.8  2.118554 × 10^−01   3.2  6.871378 × 10^−04   5.6  1.071760 × 10^−08
0.9  1.840601 × 10^−01   3.3  4.834242 × 10^−04   5.7  5.990378 × 10^−09
1.0  1.586553 × 10^−01   3.4  3.369291 × 10^−04   5.8  3.315742 × 10^−09
1.1  1.356661 × 10^−01   3.5  2.326291 × 10^−04   5.9  1.817507 × 10^−09
1.2  1.150697 × 10^−01   3.6  1.591086 × 10^−04   6.0  9.865876 × 10^−10
1.3  9.680049 × 10^−02   3.7  1.077997 × 10^−04   6.1  5.303426 × 10^−10
1.4  8.075666 × 10^−02   3.8  7.234806 × 10^−05   6.2  2.823161 × 10^−10
1.5  6.680720 × 10^−02   3.9  4.809633 × 10^−05   6.3  1.488226 × 10^−10
1.6  5.479929 × 10^−02   4.0  3.167124 × 10^−05   6.4  7.768843 × 10^−11
1.7  4.456546 × 10^−02   4.1  2.065752 × 10^−05   6.5  4.016001 × 10^−11
1.8  3.593032 × 10^−02   4.2  1.334576 × 10^−05   6.6  2.055790 × 10^−11
1.9  2.871656 × 10^−02   4.3  8.539898 × 10^−06   6.7  1.042099 × 10^−11
2.0  2.275013 × 10^−02   4.4  5.412542 × 10^−06   6.8  5.230951 × 10^−12
2.1  1.786442 × 10^−02   4.5  3.397673 × 10^−06   6.9  2.600125 × 10^−12
2.2  1.390345 × 10^−02   4.6  2.112456 × 10^−06   7.0  1.279813 × 10^−12
2.3  1.072411 × 10^−02   4.7  1.300809 × 10^−06

Table: Table of the Q function.


Important Random Variables

Example (Q Function)
X is a Gaussian random variable with mean 1 and variance 4. Therefore,

P(5 < X < 7) = FX(7) − FX(5)
             = [1 − Q((7 − 1)/2)] − [1 − Q((5 − 1)/2)]
             = Q(2) − Q(3) ≈ 0.0214
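The same number is easy to reproduce in Python using the standard identity Q(x) = erfc(x/√2)/2:

import math

def Q(x):
    # Q(x) = P{X > x} for a standard normal random variable
    return 0.5 * math.erfc(x / math.sqrt(2))

m, sigma = 1, 2          # X ~ N(1, 4)
print(Q((5 - m) / sigma) - Q((7 - m) / sigma))   # Q(2) - Q(3) ≈ 0.0214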



Functions of a Random Variable

Statement (Functions of a Random Variable)


The CDF of the random variable Y = g(X) is

FY(y) = P{ω ∈ Ω : g(X(ω)) ≤ y}.

In the special case that, for all y, the equation g(x) = y has a countable number of solutions {xi}, and for all these solutions, g′(xi) exists and is nonzero,

fY(y) = Σ_{i} fX(xi)/|g′(xi)|.



Functions of a Random Variable

Example (Linear function of a normal variable)


If X is N(m, σ²), then Y = aX + b is also a Gaussian random variable of the form N(am + b, a²σ²).

If y = ax + b = g(x), then x = (y − b)/a and g′(x) = a. So,

fY(y) = fX(x)/|g′(x)| at x = (y − b)/a
      = (1/(|a|√(2π) σ)) e^{−(x−m)²/(2σ²)} at x = (y − b)/a
      = (1/(√(2π) |a|σ)) e^{−((y−b)/a − m)²/(2σ²)}
      = (1/(√(2π) |a|σ)) e^{−(y−b−am)²/(2a²σ²)}
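A short simulation confirms the resulting mean am + b and variance a²σ²; the values m = 1, σ = 2, a = 3, b = −1 are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
m, sigma, a, b = 1.0, 2.0, 3.0, -1.0
y = a * rng.normal(m, sigma, 1_000_000) + b   # Y = aX + b
print(y.mean(), a * m + b)                    # ≈ 2
print(y.var(), a**2 * sigma**2)               # ≈ 36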



Statistical Averages

Definition (Mean of Function)


The mean, expected value, or expectation of the random variable Y = g(X) is defined as

E{g(X)} = ∫_{−∞}^{∞} g(x)fX(x)dx

Definition (Mean of Function)

The mean, expected value, or expectation of the discrete random variable Y = g(X) is defined as

E{g(X)} = Σ_{i} g(xi)P{X = xi}



Statistical Averages

Definition (Mean)
The mean, expected value, or expectation of the random variable X is defined as

E{X} = mX = ∫_{−∞}^{∞} x fX(x)dx

Definition (Mean)

The mean, expected value, or expectation of the discrete random variable X is defined as

E{X} = mX = Σ_{i} xi P{X = xi}

1 E (cX ) = cE (X ).
2 E (X + c) = c + E (X ).
3 E (c) = c.
Statistical Averages

Definition (Variance)
The variance of the random variable X is defined as

σ²X = V(X) = E{(X − E{X})²} = E{X²} − (E{X})²

1 V(cX) = c²V(X).
2 V(X + c) = V(X).
3 V(c) = 0.



Important Random Variables

Example (Bernoulli random variable)


If X is a Bernoulli random variable, E (X ) = p and V (X ) = p(1 − p).

Example (Binomial random variable)


If X is a Binomial random variable, E (X ) = np and V (X ) = np(1 − p).

Example (Uniform random variable)


If X is a uniform random variable, E(X) = (a + b)/2 and V(X) = (b − a)²/12.

Example (Gaussian random variable)


If X is a Gaussian random variable, E (X ) = m and V (X ) = σ 2 .



Bi-variate Random Variables

Definition (Joint CDF)


Let X and Y represent two random variables. For these two random variables, the joint CDF is defined as

FX ,Y (x, y ) = P(X ≤ x, Y ≤ y )

1 FX(x) = FX,Y(x, ∞).
2 FY(y) = FX,Y(∞, y).
3 If X and Y are statistically independent, FX,Y(x, y) = FX(x)FY(y).



Bi-variate Random Variables

Definition (Joint PDF)


Let X and Y represent two random variables. For these two random variables, the joint PDF is defined as

fX,Y(x, y) = ∂²FX,Y(x, y)/∂x∂y

1 fX(x) = ∫_{−∞}^{∞} fX,Y(x, y)dy.
2 fY(y) = ∫_{−∞}^{∞} fX,Y(x, y)dx.
3 ∫_{−∞}^{∞} ∫_{−∞}^{∞} fX,Y(x, y)dxdy = 1.
4 P{(x, y) ∈ A} = ∫∫_{(x,y)∈A} fX,Y(x, y)dxdy.
5 FX,Y(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} fX,Y(u, v)dudv.
6 If X and Y are statistically independent, fX,Y(x, y) = fX(x)fY(y).



Bi-variate Random Variables

Definition (Conditional PDF)


The conditional PDF of the random variable Y, given that the value of the random variable X is equal to x, is defined as

fY|X(y|x) = fX,Y(x, y)/fX(x) if fX(x) ≠ 0, and fY|X(y|x) = 0 if fX(x) = 0.



Bi-variate Random Variables

Definition (Mean)
The expected value of g(X, Y) is defined as

E{g(X, Y)} = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y)fX,Y(x, y)dxdy

Definition (Correlation)

R(X, Y) = E(XY) is called the correlation of X and Y.

Definition (Covariance)

The covariance of X and Y is defined as C(X, Y) = E(XY) − E(X)E(Y).

Definition (Correlation Coefficient)

The correlation coefficient of X and Y is defined as ρX,Y = C(X, Y)/(σX σY).



Bi-variate Random Variables

1 If ρX,Y = 0, i.e., C(X, Y) = 0 or equivalently E(XY) = E(X)E(Y), then X and Y are called uncorrelated.
2 If X and Y are independent, E(XY) = E(X)E(Y), i.e., X and Y are uncorrelated.
3 |ρX,Y| ≤ 1.
4 If ρX,Y = 1, then Y = aX + b, where a is positive.
5 If ρX,Y = −1, then Y = aX + b, where a is negative.



Bi-variate Random Variables

Example (Moment calculation)


Assume that X ∼ N (3, 4) and Y ∼ N (−1, 2) are independent. If Z =
X − Y and W = 2X + 3Y , then

E (Z ) = E (X ) − E (Y ) = 3 + 1 = 4
E (W ) = 2E (X ) + 3E (Y ) = 6 − 3 = 3
E (X 2 ) = V (X ) + (E (X ))2 = 4 + 9 = 13
E (Y 2 ) = V (Y ) + (E (Y ))2 = 2 + 1 = 3
E (XY ) = E (X )E (Y ) = −3
C (W , Z ) = E (WZ ) − E (W )E (Z ) = E (2X 2 − 3Y 2 + XY ) − 12 = 2



Bi-variate Random Variables

Statement (Multiple Functions of Multiple Random Variables)


If Z = g(X, Y) and W = h(X, Y) and the set of equations

g(x, y) = z
h(x, y) = w

has a countable number of solutions {(xi, yi)}, and if at these points the determinant of the Jacobian matrix

J(x, y) = [∂z/∂x  ∂z/∂y; ∂w/∂x  ∂w/∂y]

is nonzero, then

fZ,W(z, w) = Σ_{i} fX,Y(xi, yi)/|det J(xi, yi)|.
Bi-variate Random Variables

Example (Magnitude and phase of two i.i.d Gaussian variables)


If X and Y are independent and identically distributed zero-mean Gaussian random variables with the variance σ², i.e., X ∼ N(0, σ²) ⊥⊥ Y ∼ N(0, σ²), then the random variables V = √(X² + Y²) and Θ = arctan(Y/X) are independent and have Rayleigh and uniform distributions, respectively, i.e., V = √(X² + Y²) ∼ R(σ) ⊥⊥ Θ = arctan(Y/X) ∼ U[0, 2π].

With V = √(X² + Y²) and Θ = arctan(Y/X),

fX,Y(x, y) = fX(x)fY(y) = (1/(2πσ²)) e^{−(x²+y²)/(2σ²)}



Bi-variate Random Variables

Example (Magnitude and phase of two i.i.d. Gaussian variables)

If X ∼ N(0, σ²) ⊥⊥ Y ∼ N(0, σ²), then V = √(X² + Y²) ∼ R(σ) ⊥⊥ Θ = arctan(Y/X) ∼ U[0, 2π].

J(x, y) = [x/√(x² + y²)  y/√(x² + y²); −y/(x² + y²)  x/(x² + y²)] ⇒ |det J(x, y)| = 1/√(x² + y²) = 1/v

√(x² + y²) = v, arctan(y/x) = θ ⇒ x = v cos θ, y = v sin θ

fV,Θ(v, θ) = v fX,Y(v cos θ, v sin θ) = (v/(2πσ²)) e^{−v²/(2σ²)}


Bi-variate Random Variables

Example (Magnitude and phase of two i.i.d. Gaussian variables)

If X ∼ N(0, σ²) ⊥⊥ Y ∼ N(0, σ²), then V = √(X² + Y²) ∼ R(σ) ⊥⊥ Θ = arctan(Y/X) ∼ U[0, 2π].

fΘ(θ) = ∫_{−∞}^{∞} fV,Θ(v, θ)dv = 1/(2π), 0 ≤ θ ≤ 2π
fV(v) = ∫_{−∞}^{∞} fV,Θ(v, θ)dθ = (v/σ²) e^{−v²/(2σ²)}, v ≥ 0

The magnitude and the phase are independent random variables since

fV,Θ(v, θ) = fΘ(θ)fV(v)
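This factorization is easy to check by simulation. The sketch below (assuming σ = 1) verifies the Rayleigh mean σ√(π/2), the uniform phase mean π, and the near-zero correlation between magnitude and phase:

import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0
x = rng.normal(0, sigma, 1_000_000)
y = rng.normal(0, sigma, 1_000_000)
v = np.hypot(x, y)                            # V = sqrt(X^2 + Y^2)
theta = np.mod(np.arctan2(y, x), 2 * np.pi)   # Theta in [0, 2*pi)

print(v.mean(), sigma * np.sqrt(np.pi / 2))   # Rayleigh mean
print(theta.mean(), np.pi)                    # uniform phase mean
print(np.corrcoef(v, theta)[0, 1])            # ≈ 0 (uncorrelated)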



Bi-variate Random Variables

Statement (Jointly Gaussian Random Variables)


Jointly Gaussian random variables X and Y are distributed according to a joint PDF of the form

fX,Y(x, y) = (1/(2πσ1σ2√(1 − ρ²)))
  × exp{−(1/(2(1 − ρ²)))[(x − m1)²/σ1² + (y − m2)²/σ2² − 2ρ(x − m1)(y − m2)/(σ1σ2)]}

Two uncorrelated jointly Gaussian random variables are independent. Therefore, for jointly Gaussian random variables, independence and uncorrelatedness are equivalent.



Multi-variate Random Variables

Definition (Multi-variate CDF)


Let X = (X1 , · · · , Xn )T represent n random variables. For these random
vector , the CDF is defined as

FX (xx ) = FX1 ,··· ,Xn (x1 , · · · , xn ) = P(X1 ≤ x1 , · · · , Xn ≤ xn )

Definition (Multi-variate PDF)


Let X = (X1 , · · · , Xn )T represent n random variables. For these random
vector , the PDF is defined as
∂ n FX1 ,··· ,Xn (x1 , · · · , xn )
fX (xx ) = fX1 ,··· ,Xn (x1 , · · · , xn ) =
∂x1 · · · ∂xn



Multi-variate Random Variables

Definition (Joint Multi-variate CDF)


Let X = (X1 , · · · , Xn )T and Y = (Y1 , · · · , Ym )T represent two random
vectors. For these random vector , the joint CDF is defined as

FX ,Y x , y ) = P(X1 ≤ x1 , · · · , Xn ≤ xn , Y1 ≤ y1 , · · · , Ym ≤ ym )
Y (x

Definition (Joint Multi-variate PDF)


Let X = (X1 , · · · , Xn )T and Y = (Y1 , · · · , Ym )T represent two random
vectors. For these random vector , the joint PDF is defined as

∂ n+m FX ,Y x,y )
Y (x
fX ,Y x,y ) =
Y (x
∂x1 · · · ∂xn ∂y1 · · · ∂ym



Multi-variate Random Variables

Definition (Mean)

The expected value of X is defined as E(X) = (E{X1}, · · · , E{Xn}).

Definition (Correlation)

R(X, Y) = E(X Y^T) is called the correlation matrix of X and Y.

Definition (Covariance)

The covariance of X and Y is defined as C(X, Y) = E[(X − E(X))(Y − E(Y))^T] = E(X Y^T) − E(X)E(Y)^T.



Multi-variate Random Variables

1 If fX(x) = fX1(x1) · · · fXn(xn), then X is called mutually independent.
2 If C(X, X) is a diagonal matrix, then X is called mutually uncorrelated.
3 If X is independent, then X is uncorrelated.
4 If fX,Y(x, y) = fX(x)fY(y), then X and Y are called independent.
5 If C(X, Y) = 0, then X and Y are called uncorrelated.
6 If X and Y are independent, X and Y are uncorrelated.



Multi-variate Random Variables

Statement (Jointly Gaussian Random Variables)


Jointly Gaussian random variables X = (X1, · · · , Xn)^T are distributed according to a joint PDF of the form

fX(x) = (2π)^{−n/2} |Σ|^{−1/2} exp{−(1/2)(x − m)^T Σ^{−1} (x − m)}

where m = E(X) and Σ = C(X, X) are the mean vector and covariance matrix and |Σ| is the determinant of Σ.

Uncorrelated jointly Gaussian random variables are independent. Therefore, for jointly Gaussian random variables, independence and uncorrelatedness are equivalent.



Multi-variate Random Variables

Theorem (Central Limit Theorem)


If {Xi }ni=1 are n i.i.d. (independent and identically distributed) random
P vari-
ables, which each have the mean m and variance σ 2 , then Y = n1 ni=1 Xi
2
converges to N (m, σn ).

3 The central limit theorem states that the sum of many i.i.d. random
variables converges to a Gaussian random variable.
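A minimal simulation of the theorem, assuming U[0, 1] summands (m = 1/2, σ² = 1/12) and n = 30:

import numpy as np

rng = np.random.default_rng(0)
n, trials = 30, 200_000
# each row is one realization of (X_1, ..., X_n); Y is the sample mean
y = rng.uniform(0, 1, (trials, n)).mean(axis=1)
print(y.mean(), 0.5)              # ≈ m
print(y.var(), (1 / 12) / n)      # ≈ sigma^2 / n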



Random Processes



Random Processes

A random process is a set of possible realizations of signal waveforms.



Random Processes

Example (Sample random process)


X (t) = A cos(2πf0 t + Θ), Θ ∼ U[0, 2π].

Figure: Sample functions of the example random process.



Random Processes
Example (Sample random process)
X (t) = X , X ∼ U[−1, 1].

Figure: Sample functions of the example random process.



Random Processes

A random process is denoted by x(t; ω), where ω ∈ Ω is a random outcome.
For each ωi, there exists a deterministic time function x(t; ωi), which is called a sample function or a realization.
For the different outcomes at a fixed time t0, the numbers x(t0; ω) constitute a random variable denoted by X(t0).
At each time instant t0 and for each ωi ∈ Ω, we have the number x(t0; ωi).



Random Processes

Example (Sample random process)


Let Ω = {1, 2, 3, 4, 5, 6} denote the sample space corresponding to the random experiment of throwing a die. For all ω ∈ Ω, let x(t; ω) = ωe^{−t}u(t) denote a random process. Then X(1) is a random variable taking the values {e^{−1}, 2e^{−1}, 3e^{−1}, 4e^{−1}, 5e^{−1}, 6e^{−1}}, each with probability 1/6.

Figure: Sample functions of a random process.



Statistical Averages
Definition (Mean Function)
The mean, or expectation, of the random process X(t) is a deterministic function of time denoted by mX(t) that at each time instant t0 equals the mean of the random variable X(t0). That is, mX(t) = E[X(t)] = ∫_{−∞}^{∞} x fX(t)(x)dx for all t.

Figure: The mean of a random process.



Statistical Averages

Definition (Autocorrelation Function)


The autocorrelation function of the random process X(t) is defined as

RX(t1, t2) = E[X(t1)X(t2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 fX(t1),X(t2)(x1, x2)dx1dx2.



Statistical Averages

Example (Statistical averages)


If X (t) = A cos(2πf0 t + Θ), Θ ∼ U[0, 2π], then mX (t) = 0 and
2
RX (t1 , t2 ) = A2 cos(2πf0 (t1 − t2 )).

Z 2π
1
mx (t) = E [X (t)] = E [A cos(2πf0 t + Θ)] = A cos(2πf0 t + θ) dθ = 0
0 2π

RX (t1 , t2 ) = E [X (t1 )X (t2 )]


= E [A cos(2πf0 t1 + Θ)A cos(2πf0 t2 + Θ)]
A2 A2
= E[ cos(2πf0 (t1 − t2 )) + cos(2πf0 (t1 + t2 ) + 2Θ)]
2 2
A2
= cos(2πf0 (t1 − t2 ))
2
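Ensemble averaging over many realizations of Θ reproduces both moments; the values A = 2, f0 = 1, t1 = 0.3, t2 = 0.1 below are illustrative.

import numpy as np

rng = np.random.default_rng(0)
A, f0, N = 2.0, 1.0, 100_000
theta = rng.uniform(0, 2 * np.pi, N)          # one Theta per realization
t1, t2 = 0.3, 0.1
x1 = A * np.cos(2 * np.pi * f0 * t1 + theta)  # X(t1) across the ensemble
x2 = A * np.cos(2 * np.pi * f0 * t2 + theta)  # X(t2) across the ensemble

print(x1.mean())                              # ≈ m_X(t1) = 0
print((x1 * x2).mean(),                       # ≈ R_X(t1, t2)
      A**2 / 2 * np.cos(2 * np.pi * f0 * (t1 - t2)))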
Statistical Averages

Example (Statistical averages)


If X (t) = X , X ∼ U[−1, 1], then mX (t) = 0 and RX (t1 , t2 ) = 13 .

−1 + 1
mx (t) = E [X (t)] = E [X ] = =0
2
(1 − (−1))2 1
RX (t1 , t2 ) = E [X 2 ] = =
12 3



Wide-Sense Stationary Processes

Definition (Wide-Sense Stationary (WSS))


A process X(t) is WSS if the following conditions are satisfied:
1 mX(t) = E[X(t)] is independent of t.
2 RX(t1, t2) depends only on the time difference τ = t1 − t2 and not on t1 and t2 individually.

1 RX(t1, t2) = RX(t2, t1).
2 If X(t) is WSS, RX(τ) = RX(−τ).



Wide-Sense Stationary Processes

Example (WSS)
If X(t) = A cos(2πf0t + Θ), Θ ∼ U[0, 2π], then mX(t) = 0 and RX(t1, t2) = (A²/2) cos(2πf0(t1 − t2)); therefore, X(t) is WSS.

mX(t) = E[A cos(2πf0t + Θ)] = ∫_{0}^{2π} A cos(2πf0t + θ) (1/(2π)) dθ = 0

RX(t1, t2) = E[A cos(2πf0t1 + Θ) A cos(2πf0t2 + Θ)]
           = E[(A²/2) cos(2πf0(t1 − t2)) + (A²/2) cos(2πf0(t1 + t2) + 2Θ)]
           = (A²/2) cos(2πf0(t1 − t2))



Wide-Sense Stationary Processes

Example (WSS)
If X(t) = A cos(2πf0t + Θ), Θ ∼ U[0, π], then mX(t) = −(2A/π) sin(2πf0t) and RX(t1, t2) = (A²/2) cos(2πf0(t1 − t2)); therefore, X(t) is not WSS.

mX(t) = E[A cos(2πf0t + Θ)] = ∫_{0}^{π} A cos(2πf0t + θ) (1/π) dθ = −(2A/π) sin(2πf0t)

RX(t1, t2) = E[A cos(2πf0t1 + Θ) A cos(2πf0t2 + Θ)]
           = E[(A²/2) cos(2πf0(t1 − t2)) + (A²/2) cos(2πf0(t1 + t2) + 2Θ)]
           = (A²/2) cos(2πf0(t1 − t2))



Multiple Random Processes

Definition (Independent Processes)


Two random processes X (t) and Y (t) are independent if for all positive
integers m, n, and for all t1 , t2, · · · , tn and τ1 , τ2 , · · · , τm the random
 vec-
tors X (t1 ), X (t2 ), · · · , X (tn ) and Y (τ1 ), Y (τ2 ), · · · , Y (τm ) are inde-
pendent.

Definition (Uncorrelated Processes)


Two random processes X (t) and Y (t) are uncorrelated if for all positive
integers m, n, and for all t1 , t2 , · · · , tn and τ1 , τ2 , · · · , τm the random
 vec-
tors X (t1 ), X (t2 ), · · · , X (tn ) and Y (τ1 ), Y (τ2 ), · · · , Y (τm ) are uncor-
related.



Multiple Random Processes

1 The independence of random processes implies that they are uncorrelated.
2 Uncorrelatedness generally does not imply independence.
3 For the important class of Gaussian processes, independence and uncorrelatedness are equivalent.



Multiple Random Processes

Definition (Cross Correlation)


The cross correlation between two random processes X (t) and Y (t) is de-
fined as RXY (t1 , t2 ) = E [X (t1 )Y (t2 )].

Definition (Jointly WSS)


Two random processes X (t) and Y (t) are jointly wide-sense stationary, or
simply jointly stationary, if both X (t) and Y (t) are individually stationary
and the cross-correlation RXY (t1 , t2 ) depends only on τ = t1 − t2 .

1 RXY (t1 , t2 ) = RYX (t2 , t1 ).


2 For jointly WSS random processes X (t) and Y (t), RXY (τ ) = RYX (−τ ).



Multiple Random Processes

Example (Jointly WSS)


Assuming that the two random processes X (t) and Y (t) are jointly station-
ary, determine the autocorrelation of the process Z (t) = X (t) + Y (t).

RZ (t + τ, t) = E [Z (t + τ )Z (t)]
= E [(X (t + τ ) + Y (t + τ ))(X (t) + Y (t))]
= RX (τ ) + RY (τ ) + RXY (τ ) + RXY (−τ )



Random Processes and Linear Systems

Statement (LTI System with Random Input)


If a stationary process X (t) with mean mx and autocorrelation function
RX (τ ) is passed through an LTI system with impulse response h(t), the
input and output processes X (t) and Y (t) will be jointly stationary with
Z ∞
mY = mx h(t)dt
−∞

RXY (τ ) = RX (τ ) ∗ h(−τ )
RY (τ ) = RXY (τ ) ∗ h(τ ) = RX (τ ) ∗ h(τ ) ∗ h(−τ )

Figure: A random process passing through an LTI system.



Random Processes and Linear Systems
Statement (LTI System with Random Input)
If a stationary process X(t) with mean mX and autocorrelation function RX(τ) is passed through an LTI system with impulse response h(t), the input and output processes X(t) and Y(t) will be jointly stationary.

E[Y(t)] = E[∫_{−∞}^{∞} X(τ)h(t − τ)dτ]
        = ∫_{−∞}^{∞} E[X(τ)]h(t − τ)dτ
        = mX ∫_{−∞}^{∞} h(t − τ)dτ
        = mX ∫_{−∞}^{∞} h(u)du = mY



Random Processes and Linear Systems
Statement (LTI System with Random Input)
If a stationary process X(t) with mean mX and autocorrelation function RX(τ) is passed through an LTI system with impulse response h(t), the input and output processes X(t) and Y(t) will be jointly stationary.

E[X(t1)Y(t2)] = E[X(t1) ∫_{−∞}^{∞} X(s)h(t2 − s)ds]
              = ∫_{−∞}^{∞} E[X(t1)X(s)]h(t2 − s)ds
              = ∫_{−∞}^{∞} RX(t1 − s)h(t2 − s)ds
              = ∫_{−∞}^{∞} RX(t1 − t2 − u)h(−u)du = RX(τ) ∗ h(−τ) = RXY(τ)



Random Processes and Linear Systems
Statement (LTI System with Random Input)
If a stationary process X(t) with mean mX and autocorrelation function RX(τ) is passed through an LTI system with impulse response h(t), the input and output processes X(t) and Y(t) will be jointly stationary.

E[Y(t1)Y(t2)] = E[Y(t2) ∫_{−∞}^{∞} X(s)h(t1 − s)ds]
              = ∫_{−∞}^{∞} E[X(s)Y(t2)]h(t1 − s)ds
              = ∫_{−∞}^{∞} RXY(s − t2)h(t1 − s)ds
              = ∫_{−∞}^{∞} RXY(u)h(t1 − t2 − u)du = RXY(τ) ∗ h(τ) = RY(τ)



Random Processes and Linear Systems

Example (Differentiator)

Assume a stationary process passes through a differentiator. What are the mean and autocorrelation functions of the output? What is the cross correlation between the input and output?

Since h(t) = δ′(t),

mY = mX ∫_{−∞}^{∞} h(t)dt = mX ∫_{−∞}^{∞} δ′(t)dt = 0

RXY(τ) = RX(τ) ∗ h(−τ) = RX(τ) ∗ δ′(−τ) = −RX(τ) ∗ δ′(τ) = −dRX(τ)/dτ

RY(τ) = RXY(τ) ∗ h(τ) = −(dRX(τ)/dτ) ∗ δ′(τ) = −d²RX(τ)/dτ²



Random Processes and Linear Systems

Example (Hilbert Transform)

Assume a stationary process passes through a Hilbert filter. What are the mean and autocorrelation functions of the output? What is the cross correlation between the input and output?

Assume that RX(τ) has no DC component. Since h(t) = 1/(πt),

mY = mX ∫_{−∞}^{∞} h(t)dt = mX ∫_{−∞}^{∞} (1/(πt))dt = 0

RXY(τ) = RX(τ) ∗ h(−τ) = RX(τ) ∗ (−1/(πτ)) = −R̂X(τ)

RY(τ) = RXY(τ) ∗ h(τ) = −R̂X(τ) ∗ (1/(πτ)); since applying the Hilbert transform twice negates the (DC-free) signal, RY(τ) = RX(τ)



Power Spectral Density of Stationary Processes

Definition (Truncated Fourier Transform)


The truncated Fourier transform of a realization of the random process
X (t; ωi ) over an interval [−T /2, T /2] is defined by
Z T /2
XT (f ; ωi ) = x(t; ωi )e −j2πft dt
−T /2

Definition (Power Spectral Density)


The power spectral density of the random process X (t) is defined by

1
SX (f ) = lim E [|XT (f ; ω)|2 ]
T →∞ T



Power Spectral Density of Stationary Processes

Theorem (Wiener-Khinchin)
For a stationary random process X(t), the power spectral density is the Fourier transform of the autocorrelation function, i.e.,

SX(f) = F[RX(τ)] = ∫_{−∞}^{∞} RX(τ) e^{−j2πfτ} dτ.



Power Spectral Density of Stationary Processes

Definition (Power)
The power in the random process X(t) is obtained by

PX = ∫_{−∞}^{∞} SX(f)df = F^{−1}[SX(f)]|_{τ=0} = RX(0).

Definition (Cross Power Spectral Density)

For the jointly stationary random processes X(t) and Y(t), the cross power spectral density is the Fourier transform of the cross correlation function, i.e.,

SXY(f) = F[RXY(τ)] = ∫_{−∞}^{∞} RXY(τ) e^{−j2πfτ} dτ.



Power Spectral Density of Stationary Processes

Example (Wiener-Khinchin)
If X(t) = A cos(2πf0t + Θ), Θ ∼ U[0, 2π], then RX(τ) = (A²/2) cos(2πf0τ) and therefore SX(f) = (A²/4)[δ(f − f0) + δ(f + f0)] and PX = A²/2.

Figure: Power spectral density of the example random process.



Power Spectral Density of Stationary Processes

Example (Wiener-Khinchin)
If X(t) = X, X ∼ U[−1, 1], then RX(τ) = 1/3 and therefore SX(f) = (1/3)δ(f) and PX = 1/3.



Power Spectral Density of Stationary Processes

Statement (LTI System with Random Input)


If a stationary process X (t) with mean mx and autocorrelation function
RX (τ ) is passed through an LTI system with impulse response h(t) and
frequency response H(f ), the input and output processes X (t) and Y (t)
will be jointly stationary with
Z ∞
mY = mx h(t)dt ↔ my = mx H(0)
−∞

RXY (τ ) = RX (τ ) ∗ h(−τ ) ↔ SXY (f ) = H ∗ (f )SX (f )



RYX (τ ) = RXY (−τ ) ↔ SYX (f ) = SXY (f ) = H(f )SX (f )
RY (τ ) = RXY (τ ) ∗ h(τ ) = RX (τ ) ∗ h(τ ) ∗ h(−τ ) ↔ SY (f ) = |H(f )|2 SX (f )
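These relations can be verified numerically. The sketch below passes white noise through an illustrative 5-tap moving-average filter, applied as a circular convolution so that the discrete-frequency relation SY(f) = |H(f)|²SX(f) holds bin by bin:

import numpy as np

rng = np.random.default_rng(0)
nfft, nblocks = 256, 500
h = np.ones(5) / 5                    # illustrative FIR impulse response
H = np.fft.fft(h, nfft)               # its frequency response

sx = np.zeros(nfft)
sy = np.zeros(nfft)
for _ in range(nblocks):
    x = rng.normal(0, 1, nfft)                 # white input, S_X ≈ 1
    y = np.fft.ifft(np.fft.fft(x) * H).real    # circular convolution with h
    sx += np.abs(np.fft.fft(x))**2 / nfft      # periodogram of x
    sy += np.abs(np.fft.fft(y))**2 / nfft      # periodogram of y

print(np.round((sy / sx)[:6], 3))              # ≈ |H(f)|^2, bin by bin
print(np.round(np.abs(H[:6])**2, 3))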



Power Spectral Density of Stationary Processes
Statement (LTI System with Random Input)
If a stationary process X (t) with mean mx and autocorrelation function
RX (τ ) is passed through an LTI system with impulse response h(t) and
frequency response H(f ), the input and output processes X (t) and Y (t)
will be jointly stationary.

Figure: Input-output relations for the power spectral density and the cross-spectral density.



Power Spectral Density of Stationary Processes

Example (Power spectral densities for a differentiator)


If X (t) = A cos(2πf0 t + Θ), Θ ∼ U[0, 2π] passes through a differentiator,
2
we have SY (f ) = π 2 f02 A2 [δ(f − f0 ) + δ(f + f0 )] and SXY (f ) = jπA2 f0 [δ(f +
f0 ) − δ(f − f0 )].

A2
SY (f ) = 4π 2 f 2 [δ(f − f0 ) + δ(f + f0 )] = π 2 f02 A2 [δ(f − f0 ) + δ(f + f0 )]
4
A2 jπA2 f0
SXY (f ) = −j2πf [δ(f − f0 ) + δ(f + f0 )] = [δ(f + f0 ) − δ(f − f0 )]
4 2



Power Spectral Density of Stationary Processes

Example (Power spectral densities for a differentiator)


If X (t) = X , X ∼ U[−1, 1] passes through a differentiator, we have
SY (f ) = SXY (f ) = 0.

1
SY (f ) = 4π 2 f 2 δ(f ) = 0
3
1
SXY (f ) = −j2πf δ(f ) = 0
3



Power Spectral Density of Stationary Processes

Example (Power Spectral Density of a Sum Process)


Let Z (t) = X (t)+Y (t), where X (t) and Y (t) are jointly stationary random
processes. Also assume that X (t) and Y (t) are uncorrelated and at least
one of them has zero mean. Then, SZ (f ) = SX (f ) + SY (f ).

Since RXY (τ ) = mX mY = 0,
RZ (τ ) = RX (τ ) + RY (τ ) + RXY (τ ) + RXY (−τ ) = RX (τ ) + RY (τ ). So,

SZ (f ) = F{RZ (τ )} = SX (f ) + SY (f )



Gaussian, White, and Bandpass Processes



Gaussian Processes

Definition (Gaussian Random Process)


A random process X(t) is a Gaussian process if for all n and all (t1, t2, · · · , tn), the random variables {X(ti)}_{i=1}^{n} have a jointly Gaussian density function.

For a Gaussian random process,

1 At any time instant t0, the random variable X(t0) is Gaussian.
2 At any two points t1, t2, the random variables (X(t1), X(t2)) are distributed according to a two-dimensional jointly Gaussian distribution.



Gaussian Processes

Example (Gaussian Random Process)


Let X (t) be a zero-mean stationary Gaussian random process with the power
spectral density SX (f ) = 5 u (f /1000). Then, X (3) ∼ N (0, 5000).

m = mX (3) = mX = 0
σ 2 = V [X (3)] = E [X 2 (3)] − (E [X (3)])2 = E [X (3)X (3)] = RX (0) = PX
Z ∞
2
σ = PX = SX (f )df = 5000
−∞



Gaussian Processes

Definition (Jointly Gaussian Random Processes)


The random processes X (t) and Y (t) are jointly Gaussian if for all
n, m and all (t1 , t2 , · · · , tn ) and (τ1 , τ2 , · · · , τm ), the random vector
(X (t1 ), X (t2 ), · · · , X (tn ), Y (τ1 ), Y (τ2 ), · · · , Y (τm )) is distributed accord-
ing to an n + m dimensional jointly Gaussian distribution.

For jointly Gassian random processes,


1 If the Gaussian process X (t) is passed through an LTI system, then the
output process Y (t) will also be a Gaussian process. Moreover, X (t)
and Y (t) will be jointly Gaussian processes.
2 For jointly Gaussian processes, uncorrelatednesss and independence are
equivalent.



Gaussian and White Processes

Example (Jointly Gaussian Random Processes)


Let X (t) be a zero-mean stationary Gaussian random process with the power
spectral density SX (f ) = 5 u (f /1000). If X (t) passes a differentiator, the
output random process Y (3) ∼ N (0, 1.6 × 1010 ).

Since H(f ) = 2πf ,


m = mY (3) = mX H(0) = 0
σ 2 = V [Y (3)] = E [Y 2 (3)] − (E [Y (3)])2 = E [Y (3)Y (3)] = RY (0) = PY
Z ∞
2
σ = PY = |H(f )|2 SX (f )df = 1.6 × 1010
−∞



White Processes

Definition (White Random Process)


A random process X(t) is called a white process if it has a flat power spectral density, i.e., if SX(f) equals the constant N0/2 for all f.

Figure: Power spectrum of a white process.



White Processes

1 The power content of a white process is

PX = ∫_{−∞}^{∞} SX(f)df = ∫_{−∞}^{∞} (N0/2)df = ∞.

2 A white process is not a meaningful physical process.
3 The autocorrelation function of a white process is

RX(τ) = F^{−1}[SX(f)] = (N0/2)δ(τ).
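In discrete time this is easy to visualize: the sample autocorrelation of a sequence of white noise samples is (approximately) zero at every nonzero lag, mimicking (N0/2)δ(τ):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 100_000)                        # unit-variance white samples
r = [np.mean(x[:len(x) - k] * x[k:]) for k in range(5)]
print([round(v, 3) for v in r])                      # ≈ [1, 0, 0, 0, 0]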



White Processes

1 If we sample a zero-mean white process at two points t1 and t2 (t1 ≠ t2), the resulting random variables will be uncorrelated.
2 If the zero-mean random process is white and also Gaussian, any pair of random variables X(t1), X(t2), where t1 ≠ t2, will also be independent.



Bandpass Processes

Definition (Lowpass Random Process)


A WSS random process X (t) is called lowpass if its autocorrelation RX (τ )
is a lowpass signal.

Definition (Bandpass Random Process)


A zero-mean real WSS random process X (t) is called bandpass if its auto-
correlation RX (τ ) is a bandpass signal.

For a bandpass process, the power spectral density is located around frequencies ±fc, and for lowpass processes, the power spectral density is located around zero frequency.



Bandpass Processes

Definition (In-phase/Quadrature Random Process)


The in-phase and quadrature components of a bandpass random process
X (t) are defined as

Xc (t) = X (t) cos(2πfc t) + X̂ (t) sin(2πfc t)


Xs (t) = X̂ (t) cos(2πfc t) − X (t) sin(2πfc t)

Definition (Lowpass Equivalent Random Process)


The lowpass equivalent random process of a bandpass random process X (t)
is defined as

Xl (t) = Xc (t) + jXs (t)
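For sampled waveforms, X̂(t) can be obtained from the analytic signal; the sketch below uses scipy.signal.hilbert, with an illustrative carrier fc = 100 Hz, sampling rate fs = 1000 Hz, and a fixed-phase test tone.

import numpy as np
from scipy.signal import hilbert

fs, fc = 1000.0, 100.0
t = np.arange(0, 1, 1 / fs)
phi = 0.7                                  # arbitrary fixed phase
x = np.cos(2 * np.pi * fc * t + phi)       # bandpass test signal at fc

x_hat = np.imag(hilbert(x))                # Hilbert transform of x
c = np.cos(2 * np.pi * fc * t)
s = np.sin(2 * np.pi * fc * t)
xc = x * c + x_hat * s                     # in-phase component
xs = x_hat * c - x * s                     # quadrature component

# For this pure tone, xc ≈ cos(phi) and xs ≈ sin(phi), constant in t.
print(xc[100], np.cos(phi))
print(xs[100], np.sin(phi))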



Bandpass Processes

Theorem (In-phase/Quadrature Random Process)


For the in-phase and quadrature components of a bandpass random process
X (t),
1 Xc (t) and Xs (t) are jointly WSS zero-mean random processes.
2 Xc (t) and Xs (t) are both lowpass processes.
3 Xc(t) and Xs(t) have the same power spectral density:

SXc(f) = SXs(f) = [SX(f + fc) + SX(f − fc)] u(f/(2fc))

4 The cross-spectral densities of the components are

SXcXs(f) = −SXsXc(f) = j[SX(f + fc) − SX(f − fc)] u(f/(2fc))



Bandpass Processes

Theorem (Lowpass Equivalent Random Process)


For the lowpass equivalent of a bandpass random process X (t),
1

SXl (f ) = 4SX (f + fc )u(f + fc )


2
1 
SX (f ) = SXl (f − fc ) + SXl (−f − fc )
4
3
−j2πfc τ
RXl (τ ) = 2(RX (τ ) + j Rc
X (τ ))e



Bandpass Processes

Example (In-phase autocorrelation)


The autocorrelation of the in-phase component of a bandpass random pro-
cess X (t) is RXc (τ ) = RX (τ ) cos(2πfc τ ) + Rc
X (τ ) sin(2πfc τ ).


RXc (t + τ, t) = E Xc (t + τ )Xc (t)

= E [X (t + τ ) cos(2πfc (t + τ )) + X̂ (t + τ ) sin(2πfc (t + τ ))]

× [X (t) cos(2πfc t) + X̂ (t) sin(2πfc t)]
= RX (τ ) cos(2πfc (t + τ )) cos(2πfc t)
+ RX X̂ (t + τ, t) cos(2πfc (t + τ )) sin(2πfc t)
+ RX̂ X (t + τ, t) sin(2πfc (t + τ )) cos(2πfc t)
+ RX̂ X̂ (t + τ, t) sin(2πfc (t + τ )) sin(2πfc t)
= RX (τ ) cos(2πfc τ ) + Rc X (τ ) sin(2πfc τ )



Thermal and Filtered Noise



Thermal Noise

Thermal noise, which is produced by the random movement of electrons due to thermal agitation, is usually modeled by a white Gaussian process.



Thermal Noise
Statement (Thermal Noise)

Quantum mechanical analysis of thermal noise shows that it has a power spectral density given by Sn(f) = (hf/2)/(e^{hf/(KT)} − 1), which can be approximated by KT/2 = N0/2 for f < 2 THz, where h = 6.6 × 10⁻³⁴ J·s denotes Planck's constant, K = 1.38 × 10⁻²³ J/K is Boltzmann's constant, and T denotes the temperature in kelvin. Further, the noise originates from many independent random particle movements.
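A quick numerical comparison of the quantum formula with the flat KT/2 approximation, at an assumed room temperature T = 290 K:

import math

h = 6.6e-34     # Planck's constant, J*s
K = 1.38e-23    # Boltzmann's constant, J/K
T = 290.0

def Sn(f):
    # Sn(f) = (hf/2) / (e^{hf/(KT)} - 1)
    return 0.5 * h * f / math.expm1(h * f / (K * T))

for f in (1e9, 1e11, 2e12):       # 1 GHz, 100 GHz, 2 THz
    print(f"{f:.0e} Hz: {Sn(f):.3e} vs KT/2 = {K * T / 2:.3e}")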

Figure: Power spectrum of thermal noise.



Thermal and Filtered Noise Model

Statement (Thermal Noise Model)


The thermal noise is assumed to have the following properties,
1 Thermal noise is a stationary process.
2 Thermal noise is a zero-mean process.
3 Thermal noise is a Gaussian process.
4 Thermal noise is a white process with a PSD Sn(f) = KT/2 = N0/2.

Statement (Filtered Noise Process)


The PSD of an ideally bandpass filtered noise is

SX(f) = |H(f)|² N0/2



Filtered Noise Model

Example (Filtered Noise Process)


If the Gaussian white noise passes through the shown filter, the PSD of the
filtered noise is
(
N0
N0 , |f − fc | ≤ W
SX (f ) = |H(f )|2 = 2
2 0, otherwise

Figure: Filter transfer function H(f ).



Filtered Noise Model

For filtered white Gaussian noise, the following properties of Xc(t) and Xs(t) can be proved:
1 Xc(t) and Xs(t) are zero-mean, lowpass, jointly WSS, and jointly Gaussian random processes.
2 If the power in the process X(t) is PX, then the power in each of the processes Xc(t) and Xs(t) is also PX.
3 The processes Xc(t) and Xs(t) have a common power spectral density, i.e., SXc(f) = SXs(f) = [SX(f + fc) + SX(f − fc)] u(f/(2fc)).
4 If fc and −fc are the axes of symmetry of the positive and negative frequencies, respectively, then Xc(t) and Xs(t) will be independent processes.



Filtered Noise Model

Example (Filtered Noise Process)


For the bandpass white noise at the output of filter given below, power
spectral density of the process Z (t) = aXc (t) + bXs (t) is SZ (f ) = N0 (a2 +
f
b 2 ) u ( 2W )).

Figure: Filter transfer function H(f ).



Filtered Noise Model

Example (Filtered Noise Process (cont.))


For the bandpass white noise at the output of filter given below, power
spectral density of the process Z (t) = aXc (t) + bXs (t) is SZ (f ) = N0 (a2 +
f
b 2 ) u ( 2W ).

Figure: Power spectral densities of the in-phase and quadrature components of the example
filtered noise.



Filtered Noise Model

Example (Filtered Noise Process (cont.))


For the bandpass white noise at the output of filter given below, power
spectral density of the process Z (t) = aXc (t) + bXs (t) is SZ (f ) = N0 (a2 +
f
b 2 ) u ( 2W ).

Since fc is the axis of symmetry of the noise power spectral density, the
in-phase and quadrature components of the noise will be independent with
zero mean. So,

RZ (τ ) = E [aXc (t+τ )+bXs (t+τ )][aXc (t)+bXs (t)] = a2 RXc (τ )+b 2 RXs (τ )


f
Since SXc (f ) = SXs (f ) = N0 u ( 2W ),

f
SZ (f ) = a2 SXc (f ) + b 2 SXs (f ) = N0 (a2 + b 2 ) u ( )
2W



Noise Equivalent Bandwidth
Definition (Noise Equivalent Bandwidth)

The noise equivalent bandwidth of a filter with the frequency response H(f) is defined as

Bneq = (∫_{−∞}^{∞} |H(f)|²df)/(2H²max)

where Hmax denotes the maximum of |H(f)| in the passband of the filter.

The power content of the filtered noise is

PX = ∫_{−∞}^{∞} |H(f)|²Sn(f)df = (N0/2) ∫_{−∞}^{∞} |H(f)|²df = N0 Bneq H²max

Figure: Noise equivalent bandwidth of a typical filter.



Noise Equivalent Bandwidth

Example (Noise Equivalent Bandwidth)

The noise equivalent bandwidth of a lowpass RC filter is 1/(4RC).

Figure: Frequency response of a lowpass RC filter.

H(f) = 1/(1 + j2πfRC) ⇒ |H(f)| = 1/√(1 + 4π²f²R²C²) ⇒ Hmax = 1

Bneq = (∫_{−∞}^{∞} |H(f)|²df)/(2H²max) = (1/(2RC))/2 = 1/(4RC)
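A numeric cross-check of the closed form, assuming illustrative component values R = 1 kΩ and C = 1 µF (so 1/(4RC) = 250 Hz):

import numpy as np

R, C = 1e3, 1e-6
f = np.linspace(-1e5, 1e5, 2_000_001)
H2 = 1.0 / (1.0 + (2 * np.pi * f * R * C)**2)   # |H(f)|^2 of the RC filter
df = f[1] - f[0]
Bneq = H2.sum() * df / (2 * 1.0**2)             # Hmax = 1
print(Bneq, 1 / (4 * R * C))                    # both ≈ 250 Hz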
The End
