
The z-Transform

Definition of the z-Transform

The z-transform of a sequence x[n] is defined as

X(z) = \sum_{n=-\infty}^{\infty} x[n] z^{-n}

where z is a complex variable. Notice the close relationship to the Fourier transform:

X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n}

The Fourier transform is X(z) with z = e^{j\omega} (evaluated on the unit circle).
The Unit Circle in the Complex z-Plane
The z-transform is a function of a complex variable and is often viewed on the complex z-plane.
Region of Convergence

For a given sequence, x[n], the set of values of z for which the z-transform power
series converges is called the region of convergence (ROC).
Example: Finding the z-Transform

Find the z-transform of the signal x[n] = a^n u[n], where a denotes a real or complex number.

X(z) = \sum_{n=-\infty}^{\infty} a^n u[n] z^{-n} = \sum_{n=0}^{\infty} (a z^{-1})^n

ROC: the range of z for which |a z^{-1}| < 1, i.e., |z| > |a|

X(z) = \sum_{n=0}^{\infty} (a z^{-1})^n = \frac{1}{1 - a z^{-1}} = \frac{z}{z - a}, \quad ROC: |z| > |a|
Example: Finding the z-Transform (Continued)

X(z) = \frac{1}{1 - a z^{-1}} = \frac{z}{z - a}, \quad ROC: |z| > |a|

X(z) = \frac{P(z)}{Q(z)} \quad (zeros are roots of the numerator; poles are roots of the denominator)
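As a quick numerical sanity check, a truncated power series should approach the closed form z/(z - a) at any point inside the ROC. A minimal Python sketch (numpy assumed; the values of a, z, and the truncation length are arbitrary choices):

import numpy as np

a = 0.5
z = 1.2 + 0.3j                  # a test point with |z| > |a|, inside the ROC
n = np.arange(200)              # truncate the infinite sum at 200 terms
X_series = np.sum(a**n * z**(-n))
X_closed = z / (z - a)
print(X_series, X_closed)       # the two values agree to many decimal places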
Common z-Transform Pairs
Properties of the Region of Convergence (ROC)

▶ Fourier transform converges if and only if the ROC contains the unit circle
▶ ROC cannot contain any poles
▶ Finite-duration sequence: ROC is entire
z-plane except possibly z = 0 or z = ∞
▶ Right-sided sequence: ROC extends
outward from outermost pole
▶ Left-sided sequence: ROC extends
inward from innermost pole
▶ Two-sided sequence: ROC is a ring
bounded by poles
The Inverse z-Transform
Inverse z-Transform

Determine the inverse z-transform from the following complex contour integral:
x[n] = \frac{1}{2\pi j} \oint_C X(z) z^{n-1} \, dz

Less formal but sufficient procedures:


▶ Power Series Expansion
▶ Partial Fraction Expansion and Table Lookup
Power Series Expansion

Given a z-transform X(z) with its corresponding ROC, we can expand X(z) into a power series of the form

X(z) = \sum_{n=-\infty}^{\infty} c_n z^{-n}

Then, for all n,

x[n] = c_n
Example: Finite Length Sequence

Given the following z-transform, find the sequence x[n]:

X(z) = z^2 \left(1 - \tfrac{1}{2} z^{-1}\right)\left(1 + z^{-1}\right)\left(1 - z^{-1}\right) \quad ROC: |z| > 0

After multiplying the factors:

X(z) = z^2 - \tfrac{1}{2} z - 1 + \tfrac{1}{2} z^{-1}

Therefore,

x[n] = \{1, -\tfrac{1}{2}, -1, \tfrac{1}{2}\} \quad (starting at n = -2)

or,

x[n] = \delta[n+2] - \tfrac{1}{2}\delta[n+1] - \delta[n] + \tfrac{1}{2}\delta[n-1]
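Multiplying out the factors can be checked numerically; a sketch using numpy's polynomial helpers (each factor listed by its coefficients in ascending powers of z^{-1}):

import numpy as np

# factors 1 - 0.5 z^{-1}, 1 + z^{-1}, 1 - z^{-1}
p = np.polymul(np.polymul([1, -0.5], [1, 1]), [1, -1])
print(p)   # [ 1.  -0.5 -1.   0.5]  ->  1 - 0.5 z^{-1} - z^{-2} + 0.5 z^{-3}
# the leading z^2 factor shifts the sequence two samples left, so x[n] starts at n = -2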
Example: Long Division

Determine the inverse z-transform of

X(z) = \frac{1}{1 - 1.5 z^{-1} + 0.5 z^{-2}} \quad ROC: |z| > 1

Using long division while eliminating the lowest power of z^{-1} in each step:

X(z) = \frac{1}{1 - 1.5 z^{-1} + 0.5 z^{-2}} = 1 + \tfrac{3}{2} z^{-1} + \tfrac{7}{4} z^{-2} + \tfrac{15}{8} z^{-3} + \tfrac{31}{16} z^{-4} + \ldots

By comparing with the definition of the z-transform:

x[n] = \{1, \tfrac{3}{2}, \tfrac{7}{4}, \tfrac{15}{8}, \tfrac{31}{16}, \ldots\}
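The same coefficients drop out of a recursive filter driven by an impulse, since X(z) here is a causal system function. A sketch assuming scipy is available:

import numpy as np
from scipy.signal import lfilter

b, a = [1.0], [1.0, -1.5, 0.5]        # X(z) = B(z)/A(z) in powers of z^{-1}
impulse = np.zeros(8)
impulse[0] = 1
print(lfilter(b, a, impulse))          # [1. 1.5 1.75 1.875 1.9375 ...]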

Partial Fraction Expansion and Table Lookup

We want to express the function X(z) as a linear combination

X(z) = \alpha_1 X_1(z) + \alpha_2 X_2(z) + \ldots + \alpha_K X_K(z)

where X_1(z), ..., X_K(z) have inverse transforms x_1[n], ..., x_K[n] that can be found in a table. The inverse z-transform can be found using the linearity property:

x[n] = \alpha_1 x_1[n] + \alpha_2 x_2[n] + \ldots + \alpha_K x_K[n]

Useful when X(z) is a rational function:

X(z) = \frac{B(z)}{A(z)} = \frac{b_0 + b_1 z^{-1} + \ldots + b_M z^{-M}}{a_0 + a_1 z^{-1} + \ldots + a_N z^{-N}}
Partial Fraction Expansion and Table Lookup (Continued)

Let’s assume that a_0 = 1 (we can divide both numerator and denominator by a_0 if a_0 ≠ 1):

X(z) = \frac{B(z)}{A(z)} = \frac{b_0 + b_1 z^{-1} + \ldots + b_M z^{-M}}{1 + a_1 z^{-1} + \ldots + a_N z^{-N}}
Proper rational function if M < N
Improper rational function if M ≥ N

First, convert any improper rational function into a sum of individual terms plus a proper rational function by carrying out long division (with each polynomial written in reverse order), stopping when the order of the remainder is less than the order of the denominator.
Partial Fraction Expansion and Table Lookup (Continued)
Let’s assume X(z) is now a proper rational function, where M < N

X(z) = \frac{B(z)}{A(z)} = \frac{b_0 + b_1 z^{-1} + \ldots + b_M z^{-M}}{1 + a_1 z^{-1} + \ldots + a_N z^{-N}}

We can eliminate the negative powers of z by multiplying the numerator and denominator by z^N:

X(z) = \frac{b_0 z^N + b_1 z^{N-1} + \ldots + b_M z^{N-M}}{z^N + a_1 z^{N-1} + \ldots + a_N}

Since N > M, when we divide through by z, this function is always proper, and for distinct poles:

\frac{X(z)}{z} = \frac{b_0 z^{N-1} + b_1 z^{N-2} + \ldots + b_M z^{N-M-1}}{z^N + a_1 z^{N-1} + \ldots + a_N} = \frac{A_1}{z - p_1} + \frac{A_2}{z - p_2} + \ldots + \frac{A_N}{z - p_N}
Example: Using Partial Fraction Expansion
Determine the partial-fraction expansion of the proper function:
X(z) = \frac{1}{1 - 1.5 z^{-1} + 0.5 z^{-2}} = \frac{z^2}{z^2 - 1.5 z + 0.5}

Poles are p_1 = 1 and p_2 = 0.5 (roots of the denominator), so:

\frac{X(z)}{z} = \frac{z}{(z - 1)(z - 0.5)} = \frac{A_1}{z - 1} + \frac{A_2}{z - 0.5}

Multiplying through by the denominator term (z - 1)(z - 0.5), we obtain:

z = (z - 0.5) A_1 + (z - 1) A_2

Solving for A_1 and A_2:

\frac{X(z)}{z} = \frac{2}{z - 1} + \frac{-1}{z - 0.5}
Example: Using Partial Fraction Expansion (Continued)

\frac{X(z)}{z} = \frac{2}{z - 1} - \frac{1}{z - 0.5}

X(z) = \frac{2z}{z - 1} - \frac{z}{z - 0.5} = \frac{2}{1 - z^{-1}} - \frac{1}{1 - 0.5 z^{-1}}

Three possible regions of convergence:
▶ ROC: |z| > 1, causal / right-sided
−→ x[n] = 2(1)^n u[n] - (0.5)^n u[n] = (2 - 0.5^n) u[n]
▶ ROC: |z| < 0.5, anticausal / left-sided
−→ x[n] = [-2 + (0.5)^n] u[-n-1]
▶ ROC: 0.5 < |z| < 1, two-sided
−→ x[n] = -2(1)^n u[-n-1] - (0.5)^n u[n]
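The residues above can be checked numerically; a sketch assuming scipy is available (scipy.signal.residuez expands B(z)/A(z) written in powers of z^{-1}):

from scipy.signal import residuez

b, a = [1.0], [1.0, -1.5, 0.5]
r, p, k = residuez(b, a)        # residues r, poles p, direct terms k
print(r, p, k)                  # r ≈ [2, -1], p ≈ [1, 0.5], k empty
# i.e., X(z) = 2/(1 - z^{-1}) - 1/(1 - 0.5 z^{-1})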
Properties of the z-Transform
Linearity Property of the z-Transform

If
x_1[n] ←→ X_1(z)
and
x_2[n] ←→ X_2(z)
then
x[n] = a_1 x_1[n] + a_2 x_2[n] ←→ X(z) = a_1 X_1(z) + a_2 X_2(z)
The region of convergence of X(z) is the intersection of the ROC of each of the
individual z-transforms.
Time Shifting Property of the z-Transform

If
x[n] ←→ X(z)
then
x[n - k] ←→ z^{-k} X(z)

The ROC will be the same except for z = 0 if k > 0 and z = ∞ if k < 0.
Example: Using the Time Shifting Property of the z-Transform

Determine x[n] if

X(z) = z^{-1} \left( \frac{1}{1 - \frac{1}{4} z^{-1}} \right) \quad ROC: |z| > \tfrac{1}{4}

The factor z^{-1} is associated with a time shift of one sample to the right.
First, let’s find the inverse transform of

\frac{1}{1 - \frac{1}{4} z^{-1}} \quad ROC: |z| > \tfrac{1}{4} \qquad from table: \left(\tfrac{1}{4}\right)^n u[n]

and shift this time-domain sequence one sample to the right:

x[n] = \left(\tfrac{1}{4}\right)^{n-1} u[n - 1]
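A quick numerical check of the shifted sequence, assuming scipy (the numerator coefficient list [0, 1] places a z^{-1} in B(z), which delays the output by one sample):

import numpy as np
from scipy.signal import lfilter

b, a = [0.0, 1.0], [1.0, -0.25]
impulse = np.zeros(6)
impulse[0] = 1
print(lfilter(b, a, impulse))   # [0. 1. 0.25 0.0625 ...] = (1/4)^{n-1} u[n-1]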
Convolution Property of the z-Transform

If
x_1[n] ←→ X_1(z)
and
x_2[n] ←→ X_2(z)
then
x[n] = x_1[n] ∗ x_2[n] ←→ X(z) = X_1(z) X_2(z)
Example: Using the Convolution Property

Compute the convolution, x[n] = x_1[n] ∗ x_2[n], of the following two signals:

x_1[n] = \{1, -2, 1\}

x_2[n] = 1 for 0 ≤ n ≤ 5, and 0 elsewhere

X_1(z) = 1 - 2z^{-1} + z^{-2}
X_2(z) = 1 + z^{-1} + z^{-2} + z^{-3} + z^{-4} + z^{-5}
X(z) = X_1(z) X_2(z) = 1 - z^{-1} - z^{-6} + z^{-7}
x[n] = \{1, -1, 0, 0, 0, 0, -1, 1\}
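The same result falls out of a direct convolution in numpy, a one-line check:

import numpy as np

x1 = np.array([1, -2, 1])
x2 = np.ones(6)                # 1 for 0 <= n <= 5
print(np.convolve(x1, x2))     # [ 1. -1.  0.  0.  0.  0. -1.  1.]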

Summary of Properties of the z-Transform
Analysis of LTI Systems in the z-Domain
System Function of a Linear Time-Invariant System

An LTI system can be represented as the convolution of the input and impulse
response:
y[n] = x[n] ∗ h[n]
From the convolution property of the z-transform:
Y (z) = X(z)H(z)

where the System Function is

H(z) = \frac{Y(z)}{X(z)}

h[n] and H(z) are a z-transform pair:

H(z) = \sum_{n=-\infty}^{\infty} h[n] z^{-n}
Finding the System Function From the Difference Equation
With zero input prior to n = 0 and the system at initial rest (zero initial conditions), a causal LTI system is defined by this constant-coefficient difference equation (assuming a_0 = 1):

y[n] = -\sum_{k=1}^{N} a_k y[n-k] + \sum_{k=0}^{M} b_k x[n-k]

We can find the system function, H(z), by taking the z-transform of both sides and applying the time-shifting property:

Y(z) = -\sum_{k=1}^{N} a_k z^{-k} Y(z) + \sum_{k=0}^{M} b_k z^{-k} X(z)

Y(z) \left(1 + \sum_{k=1}^{N} a_k z^{-k}\right) = X(z) \left(\sum_{k=0}^{M} b_k z^{-k}\right)
Finding the System Function From the Difference Equation
(Continued)
Solving for the system function:

H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{k=0}^{M} b_k z^{-k}}{1 + \sum_{k=1}^{N} a_k z^{-k}} = \frac{\sum_{k=0}^{M} b_k z^{-k}}{\sum_{k=0}^{N} a_k z^{-k}}

▶ This is a rational system function, called a pole-zero system, with N poles and M zeros
▶ Since this is a causal system, ROC is outward from the pole farthest from the
origin
▶ If all poles are inside the unit circle, then system is stable and has a frequency
response
▶ Due to the presence of poles, this system is an infinite-impulse response (IIR)
system
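A sketch of how the coefficients map to poles, zeros, and a stability check, assuming scipy (the b, a values here are hypothetical):

import numpy as np
from scipy.signal import tf2zpk

b = [1.0, 0.5]                   # hypothetical b_k coefficients
a = [1.0, -0.9, 0.2]             # a_0 = 1; poles from z^2 - 0.9 z + 0.2
z, p, k = tf2zpk(b, a)
print("zeros:", z, "poles:", p)  # poles at 0.5 and 0.4
print("stable:", np.all(np.abs(p) < 1))   # causal system stable iff all poles inside unit circle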
Special Case: All-Zero System

If a_k = 0 for 1 ≤ k ≤ N, then H(z) reduces to:

H(z) = \sum_{k=0}^{M} b_k z^{-k} = \frac{1}{z^M} \sum_{k=0}^{M} b_k z^{M-k}

▶ Contains M zeros
▶ Contains M -th order pole at the origin, z = 0
▶ Since poles at the origin are considered trivial, this is an all-zero system
▶ Has a finite impulse response (the bk coefficients)
▶ Called a FIR system or a moving average (MA) system
Special Case: All-Pole System

If b_k = 0 for 1 ≤ k ≤ M, then H(z) reduces to:

H(z) = \frac{Y(z)}{X(z)} = \frac{b_0}{1 + \sum_{k=1}^{N} a_k z^{-k}} = \frac{b_0 z^N}{\sum_{k=0}^{N} a_k z^{N-k}}

▶ Contains N poles (locations determined by parameters ak )


▶ Contains N -th order zero at the origin, z = 0
▶ Since zeros at the origin are considered trivial, this is called an all-pole system
▶ Has an infinite impulse response (due to the presence of poles)
▶ Called an IIR system
Stability, Causality, and the ROC

LTI system with impulse response h[n]

H(z) is the system function, with pole-zero plot

Three possible ROCs:

▶ |z| < 1/2
▶ anticausal and not stable
▶ 1/2 < |z| < 2
▶ stable but non-causal
▶ |z| > 2
▶ causal but not stable
Example: Finding the System Function and Impulse Response from
the Difference Equation
Determine the system function and the unit sample response (impulse response) of
the system described by the difference equation:

y[n] = \tfrac{1}{2} y[n-1] + 2 x[n]

Taking the z-transform of the difference equation, we obtain

Y(z) = \tfrac{1}{2} z^{-1} Y(z) + 2 X(z)

Solving for the system function:

H(z) = \frac{Y(z)}{X(z)} = \frac{2}{1 - \frac{1}{2} z^{-1}} \quad ROC: |z| > \tfrac{1}{2} \quad ←→ \quad h[n] = 2 \left(\tfrac{1}{2}\right)^n u[n]
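A numerical check of this impulse response, assuming scipy:

import numpy as np
from scipy.signal import lfilter

b, a = [2.0], [1.0, -0.5]        # from y[n] = 0.5 y[n-1] + 2 x[n]
impulse = np.zeros(6)
impulse[0] = 1
print(lfilter(b, a, impulse))    # [2. 1. 0.5 0.25 ...] = 2 (1/2)^n u[n]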
Probability Models and Random Variables
Deterministic and Random Signals

Deterministic signal: specified explicitly by a formula or implicitly by a difference equation driven by a deterministic signal:

x[n] = \cos(\omega_0 n)
y[n] = -a y[n-1] + b u[n]
y[n] = -a x[n] y[n-1]

A random or stochastic signal is governed by probability.


Probability Theory

▶ experiment: two or more possible outcomes


▶ event: combination of one or more outcomes
▶ probability: every event assigned a number between 0 and 1 called the
probability of the event; probabilities of all possible events must sum to one
▶ random variable: function that maps each outcome of an experiment to a
number
▶ random process: sequence of random variables
▶ realization: set of measurements of a random process (or a sample function)
▶ ensemble: set of all possible realizations
Probability Density Function (pdf)
Let x be a continuous random variable. The cumulative distribution function (CDF) is

P_x(x) = P(x ≤ x)

Note that P_x(-\infty) = 0 and P_x(\infty) = 1.

The probability density function (pdf), p_x(x), is defined as

p_x(x) = \frac{dP_x(x)}{dx}

The distribution function is an integral of the probability density function:

P_x(x) = \int_{-\infty}^{x} p_x(\xi) \, d\xi
Probability Density Function (pdf) (Continued)

The pdf has unit area,

\int_{-\infty}^{\infty} p_x(\xi) \, d\xi = P_x(\infty) = 1.

The probability that an observed random variable is in the interval [a, b) is

P(a ≤ x < b) = \int_a^b p_x(\xi) \, d\xi = P_x(b) - P_x(a)

Example PDF: Uniform Distribution

p_x(x) = \begin{cases} 1/(b-a), & a ≤ x < b \\ 0, & \text{otherwise} \end{cases} \qquad E\{x\} = \frac{b+a}{2} \qquad \mathrm{var}\{x\} = \frac{1}{12}(b-a)^2
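A quick Monte Carlo check of these two moments, as a numpy sketch (the endpoints a, b are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 5.0
x = rng.uniform(a, b, size=1_000_000)
print(x.mean(), (b + a) / 2)        # sample mean ≈ (b+a)/2 = 3.5
print(x.var(), (b - a)**2 / 12)     # sample variance ≈ (b-a)^2/12 = 0.75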
Averages of Random Variables
The expected value, or mean, is formally defined:
m_x = E\{x\} = \int_{-\infty}^{\infty} x \, p_x(x) \, dx

Function of a random variable:

E\{g(x)\} = \int_{-\infty}^{\infty} g(\xi) \, p_x(\xi) \, d\xi

Variance (average squared distance from the mean):

\mathrm{var}\{x\} = \sigma_x^2 = \int_{-\infty}^{\infty} (x - m_x)^2 \, p_x(x) \, dx = E\{x^2\} - m_x^2

\sigma_x is the standard deviation, or the root-mean-square (RMS) value


Expected Value and Variance Properties
Let a and b be (nonrandom) constants. Then:

E\{ax\} = a E\{x\}
\mathrm{var}\{ax\} = a^2 \mathrm{var}\{x\}
E\{x + a\} = E\{x\} + a
\mathrm{var}\{x + a\} = \mathrm{var}\{x\}

Suppose a random variable z has zero mean and unit variance. Then the random
variable x = az + b has mean and variance:

E\{x\} = E\{az + b\} = a E\{z\} + b = b

\mathrm{var}\{x\} = E\{(az + b - b)^2\} = a^2 E\{z^2\} = a^2

Normal, or Gaussian, Distribution

The standard normal distribution,

p_z(z) = \frac{1}{\sqrt{2\pi}} e^{-z^2/2}

can be shown to have zero mean and unit variance. The random variable x = \sigma z + \mu has mean \mu and variance \sigma^2. The pdf of this random variable is also normal, or Gaussian:

p_x(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-(x-\mu)^2/2\sigma^2} \qquad -\infty < x < \infty
Physical Interpretations

Assume the random process x[n] is a physical signal, with units of volts.

▶ The expected value, m_x, is the baseline, or “DC level”, measured in volts.

▶ The standard deviation, \sigma_x, is the RMS value, in volts. This is the value measured by an AC voltmeter.

▶ The variance, \sigma_x^2, has units of volts². Recall that V²/R has units of power (watts). Thus, we may interpret \sigma_x^2 as the AC power dissipated in a 1 Ω resistor.
Signal-to-Noise Ratio
We use power to compare the strengths of a desired signal and a background noise
with the signal power-to-noise power ratio, or SNR:
SNR = \frac{P_{signal}}{P_{noise}} = \frac{\sigma_{signal}^2 / R_{load}}{\sigma_{noise}^2 / R_{load}} = \frac{\sigma_{signal}^2}{\sigma_{noise}^2}

or, in decibels,

SNR_{dB} = 10 \log_{10}\left(\frac{\sigma_{signal}^2}{\sigma_{noise}^2}\right) \, dB

SNR is also defined as a ratio of RMS voltages,

SNR = \frac{\sigma_{signal}}{\sigma_{noise}} \implies SNR_{dB} = 20 \log_{10}\left(\frac{\sigma_{signal}}{\sigma_{noise}}\right) \, dB
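A sketch of measuring SNR in dB from sample arrays (numpy assumed; the signal, noise level, and lengths are hypothetical, and both are zero mean so variance equals power):

import numpy as np

rng = np.random.default_rng(1)
n = np.arange(10_000)
signal = np.sin(2 * np.pi * 0.05 * n)         # hypothetical desired signal, power 1/2
noise = 0.1 * rng.standard_normal(n.size)     # hypothetical zero-mean noise
snr_db = 10 * np.log10(signal.var() / noise.var())
print(snr_db)                                  # ≈ 17 dB for these choices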
Random Signals: Statistical Averages
Discrete-Time Random Process
▶ Random process is an indexed family of random variables \{x_n\}
▶ Set of all possible sequences is an ensemble, or the random process
▶ Each sample value of x[n] is governed by a probability law
▶ probability density function: p_{x_n}(x_n, n)
▶ joint probability density: p_{x_n, x_m}(x_n, n, x_m, m)

Stationary Random Process: When all probability distributions are invariant to translation of the time axis:

p_{x_n, x_m}(x_n, n, x_m, m) = p_{x_{n+k}, x_{m+k}}(x_{n+k}, n+k, x_{m+k}, m+k)

This is stationarity in the strict sense: the joint PDFs of sets of random variables are identical even when each random variable in the second set is displaced in time from the first set by an amount k.
Statistical (Ensemble) Averages: Mean, Variance
Average / Mean (expected value):
m_{x_n} = E\{x_n\} = \int_{-\infty}^{\infty} x \, p_{x_n}(x, n) \, dx

For uncorrelated (or linearly independent) random variables:

E\{x_n y_m\} = E\{x_n\} E\{y_m\}

Mean-square value, or average power (average of |x_n|^2):

E\{|x_n|^2\} = \int_{-\infty}^{\infty} |x|^2 \, p_{x_n}(x, n) \, dx

Variance (mean-square value of x_n - m_{x_n}):

\sigma_{x_n}^2 = E\{|x_n - m_{x_n}|^2\} = E\{|x_n|^2\} - |m_{x_n}|^2


Statistical (Ensemble) Averages: Autocorrelation and Cross-correlation
Autocorrelation, a measure of dependence between the random process at different times:

\phi_{xx}[n, m] = E\{x_n x_m^*\} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_n x_m^* \, p_{x_n, x_m}(x_n, n, x_m, m) \, dx_n \, dx_m

Autocovariance:

\gamma_{xx}[n, m] = E\{(x_n - m_{x_n})(x_m - m_{x_m})^*\} = \phi_{xx}[n, m] - m_{x_n} m_{x_m}^*

Cross-correlation, a measure of dependence between two different random signals:

\phi_{xy}[n, m] = E\{x_n y_m^*\}

Cross-covariance:

\gamma_{xy}[n, m] = E\{(x_n - m_{x_n})(y_m - m_{y_m})^*\} = \phi_{xy}[n, m] - m_{x_n} m_{y_m}^*


Examples With Random Signals
Continuous Sinusoidal Signal With Random Amplitude
Let x(t) = A cos(2πt) where A is a random variable. Find the mean,
autocorrelation, and autocovariance of x(t).

m_{x_t} = E\{A \cos(2\pi t)\} = E\{A\} \cos(2\pi t)

\phi_{xx}(t_n, t_m) = E\{x_{t_n} x_{t_m}\} = E\{A\cos(2\pi t_n) \cdot A\cos(2\pi t_m)\} = E\{A^2\} \cos(2\pi t_n)\cos(2\pi t_m)

\gamma_{xx}(t_n, t_m) = \phi_{xx}(t_n, t_m) - m_{x_{t_n}} m_{x_{t_m}}
= \left(E\{A^2\} - E\{A\}^2\right) \cos(2\pi t_n)\cos(2\pi t_m)
= \mathrm{var}[A] \cos(2\pi t_n)\cos(2\pi t_m)

Note: Mean is time-varying and correlation/covariance depends on times tn and tm


Continuous Sinusoidal Signal With Random Phase
If the amplitude, A, and frequency, ω, are fixed quantities and θ is a uniformly distributed random variable on the interval (0, 2π), the random signal is:

x_t = x(t) = A \cos(\omega t + \theta)

m_{x_t} = E\{x_t\} = E\{A\cos(\omega t + \theta)\} = \int_0^{2\pi} A\cos(\omega t + \theta) \left(\frac{1}{2\pi}\right) d\theta = 0 = m_x

The autocorrelation of the random process (also the autocovariance, since the mean is zero):

\phi_{xx}(t_n, t_m) = E\{x_{t_n} x_{t_m}\} = E\{A\cos(\omega t_n + \theta) \cdot A\cos(\omega t_m + \theta)\}

= \frac{A^2}{2\pi} \int_0^{2\pi} \frac{1}{2}\left[\cos(\omega(t_n - t_m)) + \cos(\omega(t_n + t_m) + 2\theta)\right] d\theta = \frac{A^2}{2} \cos(\omega(t_n - t_m))

\phi_{xx}(\tau) = \frac{A^2}{2} \cos(\omega\tau) \quad where \ \tau = t_n - t_m \ (depends only on the time difference)
Cross-Correlation of Signal Plus Noise Process

We observe a random signal, y[n], which has a desired signal, x[n], plus noise, v[n]:

y[n] = x[n] + v[n]

The cross-correlation between the observed and desired signal, assuming that the
noise and desired signal are uncorrelated:

\phi_{xy}[n, m] = E\{x_n y_m\} = E\{x_n (x_m + v_m)\}
= E\{x_n x_m\} + E\{x_n v_m\}
= \phi_{xx}[n, m] + E\{x_n\} E\{v_m\}
= \phi_{xx}[n, m] + m_{x_n} m_{v_m}
Random Signals:
Wide-Sense Stationarity, Ergodicity, Power Density Spectrum
Statistical (Ensemble) Averages: Wide-Sense Stationary
Stationary random process: statistical properties invariant to shift of time origin.
▶ First-order PDF and averages are independent of time
▶ Second-order joint PDFs and averages depend only on the time difference
Therefore, the mean and variance are independent of n:

m_x = E\{x_n\}
\sigma_x^2 = E\{|x_n - m_x|^2\}

The autocorrelation is a one-dimensional sequence, a function of the time difference (or lag) m:

\phi_{xx}[n+m, n] = \phi_{xx}[m] = E\{x_{n+m} x_n^*\}

If the probability distributions are not time invariant but the above equations for the averages still hold, the random process is wide-sense stationary.
Time Averages and Ergodicity

A random process is ergodic if its averages can be obtained from a single realization, which allows us to estimate ensemble averages using time averages:

\hat{m}_x = \frac{1}{L} \sum_{n=0}^{L-1} x[n]

\hat{\sigma}_x^2 = \frac{1}{L} \sum_{n=0}^{L-1} |x[n] - \hat{m}_x|^2

\langle x[n+m] x^*[n] \rangle_L = \frac{1}{L} \sum_{n=0}^{L-1} x[n+m] x^*[n]

We will assume ergodicity and wide-sense stationarity unless otherwise specified.
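These time-average estimators translate directly to code; a sketch with numpy, using a white Gaussian realization as a stand-in process:

import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(100_000)              # one realization of a WSS process
L = len(x)
m_hat = x.sum() / L                            # time-average estimate of the mean
var_hat = np.sum(np.abs(x - m_hat)**2) / L     # time-average estimate of the variance
m = 3
acorr_m = np.dot(x[m:], x[:L - m]) / L         # autocorrelation estimate at lag m
print(m_hat, var_hat, acorr_m)                 # ≈ 0, ≈ 1, ≈ 0 for white noise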


Fourier Transform Representation: Power Density Spectrum
The spectral characteristic of a random process is the Fourier transform of the autocorrelation function (Wiener-Khintchine theorem):

\Phi_{xx}(e^{j\omega}) = \sum_{m=-\infty}^{\infty} \phi_{xx}[m] e^{-j\omega m} \quad ←→ \quad \phi_{xx}[m] = \frac{1}{2\pi} \int_{-\pi}^{\pi} \Phi_{xx}(e^{j\omega}) e^{j\omega m} \, d\omega

For a zero-mean process (m_x = 0), defining P_{xx}(\omega) = \Phi_{xx}(e^{j\omega}), at zero lag (m = 0) the average power of the random process is:

E\{|x[n]|^2\} = \phi_{xx}[0] = \sigma_x^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} \Phi_{xx}(e^{j\omega}) \, d\omega = \frac{1}{2\pi} \int_{-\pi}^{\pi} P_{xx}(\omega) \, d\omega

▶ P_{xx}(\omega) is the power density spectrum
▶ If P_{xx}(\omega) is a constant, the random process is a white-noise process
▶ P_{xx}(\omega) is always real valued; for real signals it is also even: P_{xx}(\omega) = P_{xx}(-\omega)
Correlation and Covariance Sequences
Wide-Sense Stationary...A Frequent Assumption
The autocorrelation and autocovariance are functions of the two observation times,
n and n + m. This simplifies with two assumptions, which are frequently valid:
1. The mean mxn is constant.
2. The correlation and covariance depend only on the time difference m, called
the lag, rather than on the instants n and n + m separately.

\phi_{xx}[m] = E\{x_{n+m} x_n\}

\gamma_{xx}[m] = E\{(x_{n+m} - m_x)(x_n - m_x)\} = \phi_{xx}[m] - m_x^2

A signal with these properties is called wide-sense stationary (wss); we make this
assumption moving forward.
Interpreting Autocorrelation \phi_{xx}[m]
▶ Autocorrelation is a measure of the rate of change of a random process
▶ Autocorrelation at lag m = 0 gives the average power of the random signal:

\phi_{xx}[0] = E\{x_n^2\}

▶ The covariance at zero lag, \gamma_{xx}[0], is the variance, \sigma_x^2
▶ this is also the average power when the mean is zero
▶ The autocorrelation sequence has even symmetry:

\phi_{xx}[m] = \phi_{xx}[-m]

▶ Autocorrelation is a maximum at lag m = 0:

|\phi_{xx}[m]| ≤ \phi_{xx}[0]


White Noise Process (contains all frequencies)
White Noise: A sequence of uncorrelated random variables x[n] with zero mean and constant variance \sigma_x^2 has this autocorrelation function:

\phi_{xx}[m] = \sigma_x^2 \delta[m]

With power spectral density (PSD):

P_{xx}(\omega) = \Phi_{xx}(e^{j\omega}) = \sum_{m=-\infty}^{\infty} \phi_{xx}[m] e^{-j\omega m} = \sum_{m=-\infty}^{\infty} \sigma_x^2 \delta[m] e^{-j\omega m} = \sigma_x^2

The PSD is constant; therefore, all frequencies are present in the same amount.
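A numpy sketch estimating the first few autocorrelation lags of a white-noise realization, which should come out close to \sigma_x^2 \delta[m]:

import numpy as np

rng = np.random.default_rng(3)
sigma = 2.0
x = sigma * rng.standard_normal(100_000)     # white-noise realization, variance 4
L = len(x)
est = [np.dot(x[m:], x[:L - m]) / L for m in range(4)]   # phi_xx[m], m = 0..3
print(np.round(est, 3))                       # ≈ [4, 0, 0, 0]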
Filtered Random Process
Let x be a white-noise process, so all the x_n are mutually independent and uncorrelated. Form a new random process by the difference equation

y_n = x_n + x_{n-1}

so y_1 = x_1 + x_0, y_2 = x_2 + x_1, etc. Calculate a few covariances:

\gamma_{yy}[1, 1] = E\{y_1 y_1\} = E\{(x_1 + x_0)(x_1 + x_0)\} = E\{x_1 x_1 + x_0 x_1 + x_1 x_0 + x_0 x_0\}
= \sigma_x^2 + 0 + 0 + \sigma_x^2 = 2\sigma_x^2

\gamma_{yy}[1, 2] = E\{(x_1 + x_0)(x_2 + x_1)\} = E\{x_1 x_2 + x_0 x_2 + x_1 x_1 + x_0 x_1\}
= 0 + 0 + \sigma_x^2 + 0 = \sigma_x^2

\gamma_{yy}[1, 3] = E\{(x_1 + x_0)(x_3 + x_2)\} = E\{x_1 x_3 + x_0 x_3 + x_1 x_2 + x_0 x_2\}
= 0 + 0 + 0 + 0 = 0

\gamma_{yy}[2, 3] = E\{(x_2 + x_1)(x_3 + x_2)\} = E\{x_2 x_3 + x_1 x_3 + x_2 x_2 + x_1 x_2\}
= 0 + 0 + \sigma_x^2 + 0 = \sigma_x^2
Filtered Random Process (Continued)
Filtered white noise:

y_n = x_n + x_{n-1}

Autocorrelation of the output:

\phi_{yy}[m] = \{0, \sigma_x^2, 2\sigma_x^2, \sigma_x^2, 0\}

Impulse response of the filter:

h[n] = \{1, 1\}

Note:

h[n] ∗ h[-n] = \{0, 1, 2, 1, 0\}

Compare:

\phi_{xx}[m] = \sigma_x^2 \delta[m]
h[n] ∗ h[-n] = \{0, 1, 2, 1, 0\}
\phi_{yy}[m] = \{0, \sigma_x^2, 2\sigma_x^2, \sigma_x^2, 0\} = \sigma_x^2 \delta[m] ∗ \{0, 1, 2, 1, 0\}
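A simulation sketch of this filtered process with numpy, estimating the output autocorrelation at lags -2..2:

import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(200_000)          # unit-variance white noise
y = x[1:] + x[:-1]                         # y[n] = x[n] + x[n-1]
L = len(y)
est = [np.dot(y[abs(m):], y[:L - abs(m)]) / L for m in range(-2, 3)]
print(np.round(est, 2))                    # ≈ [0, 1, 2, 1, 0] = sigma^2 (h[n] * h[-n])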
Discrete-Time Random Signals and LTI Systems
Mean of LTI System Output
▶ h[n] is the impulse response of a stable LTI system
▶ x[n] is a real-valued input that is a sample sequence of a wide-sense stationary discrete-time random process with mean m_x and autocorrelation \phi_{xx}[m]
▶ y[n], the output of the LTI system, is also a random process

y[n] = \sum_{k=-\infty}^{\infty} h[n-k] x[k] = \sum_{k=-\infty}^{\infty} h[k] x[n-k]

Find the mean of the output process:

m_y[n] = E\{y[n]\} = \sum_{k=-\infty}^{\infty} h[k] E\{x[n-k]\} = m_x \sum_{k=-\infty}^{\infty} h[k] = m_y

▶ The output mean is also constant. In terms of the frequency response: m_y = H(e^{j0}) m_x


Autocorrelation Function of LTI System Output

\phi_{yy}[n, n+m] = E\{y[n] y[n+m]\}

= E\left\{ \sum_{k=-\infty}^{\infty} \sum_{r=-\infty}^{\infty} h[k] h[r] x[n-k] x[n+m-r] \right\}

= \sum_{k=-\infty}^{\infty} h[k] \sum_{r=-\infty}^{\infty} h[r] E\{x[n-k] x[n+m-r]\}

= \sum_{k=-\infty}^{\infty} h[k] \sum_{r=-\infty}^{\infty} h[r] \phi_{xx}[m+k-r]

= \phi_{yy}[m]

▶ The output is also wide-sense stationary for an LTI system with a wide-sense stationary input
Autocorrelation Function of LTI System Output (Continued)
Making the substitution l = r - k, the autocorrelation of the output is

\phi_{yy}[m] = \sum_{k=-\infty}^{\infty} h[k] \sum_{r=-\infty}^{\infty} h[r] \phi_{xx}[m+k-r]

= \sum_{l=-\infty}^{\infty} \phi_{xx}[m-l] \sum_{k=-\infty}^{\infty} h[k] h[l+k]

= \sum_{l=-\infty}^{\infty} \phi_{xx}[m-l] \, c_{hh}[l]

where c_{hh}[l] is the (deterministic) autocorrelation sequence of h[n], defined as

c_{hh}[l] = \sum_{k=-\infty}^{\infty} h[k] h[l+k] \quad (equivalent to h[n] ∗ h[-n])
Using Fourier Transforms: Power Density Spectrum of Output Process
Since the autocorrelation of the output is a convolution:

\phi_{yy}[m] = \sum_{l=-\infty}^{\infty} \phi_{xx}[m-l] \, c_{hh}[l]

we can represent this using Fourier transforms (we now assume m_x = 0):

\Phi_{yy}(e^{j\omega}) = C_{hh}(e^{j\omega}) \Phi_{xx}(e^{j\omega})

Since C_{hh}(e^{j\omega}) = H(e^{j\omega}) H^*(e^{j\omega}) = |H(e^{j\omega})|^2, we now have the power density spectrum of the output process:

\Phi_{yy}(e^{j\omega}) = |H(e^{j\omega})|^2 \Phi_{xx}(e^{j\omega})

average power: E\{y^2[n]\} = \phi_{yy}[0] = \frac{1}{2\pi} \int_{-\pi}^{\pi} |H(e^{j\omega})|^2 \Phi_{xx}(e^{j\omega}) \, d\omega
Using Fourier Transforms: Cross-Power Density Spectrum
Cross-correlation between the input and output of an LTI system:

\phi_{yx}[m] = E\{x[n] y[n+m]\}

= E\left\{ x[n] \sum_{k=-\infty}^{\infty} h[k] x[n+m-k] \right\}

= \sum_{k=-\infty}^{\infty} h[k] \phi_{xx}[m-k]

Therefore, the cross-correlation is the convolution of the impulse response with the input autocorrelation sequence. This can be represented in the frequency domain:

\Phi_{yx}(e^{j\omega}) = H(e^{j\omega}) \Phi_{xx}(e^{j\omega})


Examples of White Noise as Input to LTI Systems
Random Signals and LTI Systems in the Frequency Domain
Fourier transforms of the autocorrelation and cross-correlation sequences:

\phi_{xx}[m] \overset{DTFT}{\longleftrightarrow} \Phi_{xx}(e^{j\omega})
\phi_{xy}[m] \overset{DTFT}{\longleftrightarrow} \Phi_{xy}(e^{j\omega})

Power spectral density of the output process:

\Phi_{yy}(e^{j\omega}) = |H(e^{j\omega})|^2 \Phi_{xx}(e^{j\omega})

average power: E\{y^2[n]\} = \phi_{yy}[0] = \frac{1}{2\pi} \int_{-\pi}^{\pi} |H(e^{j\omega})|^2 \Phi_{xx}(e^{j\omega}) \, d\omega

Cross-power spectral density of the output process:

Φyx (ejω ) = H(ejω )Φxx (ejω )


Example: Using White Noise to Represent Random Signals
The autocorrelation of a white-noise signal is \phi_{xx}[m] = \sigma_x^2 \delta[m], and its power spectrum is constant over all frequencies ω. Let’s assume the signal has zero mean:

\Phi_{xx}(e^{j\omega}) = \sigma_x^2 \qquad -\pi ≤ \omega ≤ \pi

If we have a system where h[n] = a^n u[n], then the frequency response is:

H(e^{j\omega}) = \frac{1}{1 - a e^{-j\omega}}

The output of this system can represent any random signal with this power spectrum:

\Phi_{yy}(e^{j\omega}) = |H(e^{j\omega})|^2 \Phi_{xx}(e^{j\omega}) = \left|\frac{1}{1 - a e^{-j\omega}}\right|^2 \sigma_x^2 = \frac{\sigma_x^2}{1 + a^2 - 2a\cos(\omega)}
Example: Using White Noise to Identify An Unknown System
Using a zero-mean white-noise signal as an input to an LTI system, where \phi_{xx}[m] = \sigma_x^2 \delta[m] and \Phi_{xx}(e^{j\omega}) = \sigma_x^2, the cross-correlation between the output and input is:

\phi_{yx}[m] = \sum_{k=-\infty}^{\infty} h[k] \phi_{xx}[m-k] = \sum_{k=-\infty}^{\infty} h[k] \sigma_x^2 \delta[m-k] = \sigma_x^2 h[m]

Also,

\Phi_{yx}(e^{j\omega}) = H(e^{j\omega}) \Phi_{xx}(e^{j\omega}) = \sigma_x^2 H(e^{j\omega})

By using white noise as an input, an unknown system H(e^{j\omega}) can be identified: cross-correlate the input sequence with the output sequence to obtain \phi_{yx}[m]; its Fourier transform is proportional to the system frequency response.
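A simulation sketch of this identification procedure (numpy and scipy assumed; the "unknown" filter h_true is a hypothetical choice):

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(5)
h_true = np.array([1.0, 0.8, 0.3, -0.2])     # hypothetical unknown impulse response
x = rng.standard_normal(500_000)              # zero-mean white-noise probe, sigma^2 = 1
y = lfilter(h_true, [1.0], x)                 # observed output of the unknown system
L = len(x)
# estimate phi_yx[m] = E{x[n] y[n+m]} by a time average for m = 0..5
phi_yx = np.array([np.dot(y[m:], x[:L - m]) / L for m in range(6)])
print(np.round(phi_yx, 2))                    # ≈ [1. 0.8 0.3 -0.2 0. 0.] = sigma^2 h[m]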
