SP Slides 2
The Fourier transform is X(z) with z = e^{jω} (evaluated on the unit circle)
The Unit Circle in the Complex z-Plane
The z-transform is a function of a complex variable and is often viewed on the
complex z-plane:
Region of Convergence
For a given sequence, x[n], the set of values of z for which the z-transform power
series converges is called the region of convergence (ROC).
Example: Finding the z−Transform
Find the z-Transform of the signal x[n] = an u[n], where a denotes a real or
complex number.
X(z) = Σ_{n=−∞}^{∞} a^n u[n] z^{−n} = Σ_{n=0}^{∞} (a z^{−1})^n
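As a quick numerical sanity check (the values of a and the evaluation point z below are illustrative choices, not from the slides), the truncated power series can be compared against the closed-form geometric-series result 1/(1 − az^{−1}), which holds for |z| > |a|:

```python
import numpy as np

# z-transform of x[n] = a^n u[n]: the power series sum_{n>=0} (a z^-1)^n
# converges to 1/(1 - a z^-1) for |z| > |a|.
a = 0.5
z = 1.2 * np.exp(1j * 0.3)   # illustrative point outside the circle |z| = |a|

n = np.arange(200)
partial_sum = np.sum((a / z) ** n)    # truncated power series
closed_form = 1.0 / (1.0 - a / z)     # closed-form geometric series

assert np.isclose(partial_sum, closed_form)
```

Moving z inside the circle |z| = |a| makes the partial sums diverge, which is the ROC condition in action.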
Determine the inverse z-transform from the following complex contour integral:
x[n] = (1/2πj) ∮_C X(z) z^{n−1} dz
Given a z-transform X(z) with its corresponding ROC, we can expand X(z) into a
power series of the form
X(z) = Σ_{n=−∞}^{∞} c_n z^{−n}
X(z) = z^2 (1 − (1/2)z^{−1})(1 + z^{−1})(1 − z^{−1}),  ROC: |z| > 0
X(z) = z^2 − (1/2)z − 1 + (1/2)z^{−1}
Therefore,
x[n] = {1, −1/2, −1, 1/2}  (↑ marks n = 0, at the value −1)
or,
x[n] = δ[n + 2] − (1/2)δ[n + 1] − δ[n] + (1/2)δ[n − 1]
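The expansion above can be verified mechanically: multiplying coefficient lists of polynomials in z^{−1} is a convolution, so the three factors expand as follows (a minimal sketch using numpy only):

```python
import numpy as np

# Expand X(z) = z^2 (1 - 0.5 z^-1)(1 + z^-1)(1 - z^-1) by convolving the
# coefficient lists of each factor (coefficients in increasing powers of z^-1).
f1 = np.array([1.0, -0.5])
f2 = np.array([1.0, 1.0])
f3 = np.array([1.0, -1.0])

coeffs = np.convolve(np.convolve(f1, f2), f3)
# The leading z^2 shifts everything two samples left, so these are the
# sample values x[-2], x[-1], x[0], x[1].
print(coeffs)   # coefficients: 1, -0.5, -1, 0.5
```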
Example: Long Division
Using long division, eliminating the lowest power of z^{−1} at each step:
X(z) = 1 / (1 − 1.5z^{−1} + 0.5z^{−2}) = 1 + (3/2)z^{−1} + (7/4)z^{−2} + (15/8)z^{−3} + (31/16)z^{−4} + ...
By comparing with the definition of the z-transform:
x[n] = {1, 3/2, 7/4, 15/8, 31/16, ...}  (↑ marks n = 0)
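The long-division recursion can be sketched in a few lines of numpy: each step peels off one coefficient of z^{−n} and subtracts its contribution from the remainder, exactly as in the division above.

```python
import numpy as np

# Power-series (long) division of X(z) = 1 / (1 - 1.5 z^-1 + 0.5 z^-2).
num = [1.0]
den = [1.0, -1.5, 0.5]

n_terms = 5
x = np.zeros(n_terms)
rem = np.zeros(n_terms + len(den))   # running remainder polynomial
rem[:len(num)] = num
for n in range(n_terms):
    x[n] = rem[n] / den[0]           # next coefficient of z^-n
    for k in range(len(den)):
        rem[n + k] -= x[n] * den[k]  # subtract x[n] * z^-n * den(z)

print(x)   # 1, 1.5, 1.75, 1.875, 1.9375
```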
Partial Fraction Expansion and Table Lookup
The idea is to expand X(z) into a sum X(z) = X_1(z) + ... + X_K(z), where
X_1(z), ..., X_K(z) have inverse transforms x_1[n], ..., x_K[n] that can be found
in a table. The inverse z-transform can then be found using the linearity property:
X(z) = B(z)/A(z) = (b_0 + b_1 z^{−1} + ... + b_M z^{−M}) / (a_0 + a_1 z^{−1} + ... + a_N z^{−N})
Partial Fraction Expansion and Table Lookup (Continued)
Let’s assume that a_0 = 1 (we can divide both numerator and denominator by a_0 if
a_0 ≠ 1):
X(z) = B(z)/A(z) = (b_0 + b_1 z^{−1} + ... + b_M z^{−M}) / (1 + a_1 z^{−1} + ... + a_N z^{−N})
Proper rational function if M < N
Improper rational function if M ≥ N
First, convert any improper rational function into a sum of individual terms plus a
proper rational function by carrying out long division (with each polynomial
written in reverse order), stopping when the order of the remainder is less than
the order of the denominator.
Partial Fraction Expansion and Table Lookup (Continued)
Let’s assume X(z) is now a proper rational function, where M < N:
X(z) = B(z)/A(z) = (b_0 + b_1 z^{−1} + ... + b_M z^{−M}) / (1 + a_1 z^{−1} + ... + a_N z^{−N})
Multiplying numerator and denominator by z^N:
X(z) = (b_0 z^N + b_1 z^{N−1} + ... + b_M z^{N−M}) / (z^N + a_1 z^{N−1} + ... + a_N)
Since N > M, when we divide through by z, this function is always proper, and for
distinct poles:
X(z)/z = (b_0 z^{N−1} + b_1 z^{N−2} + ... + b_M z^{N−M−1}) / (z^N + a_1 z^{N−1} + ... + a_N)
       = A_1/(z − p_1) + A_2/(z − p_2) + ... + A_N/(z − p_N)
Example: Using Partial Fraction Expansion
Determine the partial-fraction expansion of the proper function:
X(z) = 1 / (1 − 1.5z^{−1} + 0.5z^{−2}) = z^2 / (z^2 − 1.5z + 0.5)
Poles are p_1 = 1 and p_2 = 0.5 (roots of the denominator), so:
X(z)/z = z / ((z − 1)(z − 0.5)) = A_1/(z − 1) + A_2/(z − 0.5)
z = (z − 0.5)A_1 + (z − 1)A_2
Solving for A_1 and A_2 (A_1 = 2, A_2 = −1):
X(z)/z = 2/(z − 1) + (−1)/(z − 0.5)
Example: Using Partial Fraction Expansion (Continued)
X(z)/z = 2/(z − 1) − 1/(z − 0.5)
X(z) = 2z/(z − 1) − z/(z − 0.5) = 2/(1 − z^{−1}) − 1/(1 − 0.5z^{−1})
Three possible regions of convergence:
▶ ROC: |z| > 1 (causal / right-sided)
  −→ x[n] = 2(1)^n u[n] − (0.5)^n u[n] = (2 − 0.5^n) u[n]
▶ ROC: |z| < 0.5 (anticausal / left-sided)
  −→ x[n] = [−2 + (0.5)^n] u[−n − 1]
▶ ROC: 0.5 < |z| < 1 (two-sided)
  −→ x[n] = −2(1)^n u[−n − 1] − (0.5)^n u[n]
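The residues A_1 = 2 and A_2 = −1 and the causal inverse can be checked with scipy (a sketch; `residuez` performs exactly this partial-fraction expansion in powers of z^{−1}):

```python
import numpy as np
from scipy.signal import residuez, lfilter

# Partial-fraction expansion of X(z) = 1 / (1 - 1.5 z^-1 + 0.5 z^-2).
b, a = [1.0], [1.0, -1.5, 0.5]
r, p, k = residuez(b, a)
print(r, p)   # residues 2 and -1 at poles 1 and 0.5 (ordering may vary)

# Causal (|z| > 1) inverse: x[n] = (2 - 0.5^n) u[n]; compare with the
# impulse response obtained by filtering a unit impulse.
n = np.arange(10)
x_closed = 2.0 - 0.5 ** n
impulse = np.zeros(10); impulse[0] = 1.0
x_filter = lfilter(b, a, impulse)
assert np.allclose(x_closed, x_filter)
```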
Properties of the z-Transform
Linearity Property of the z-Transform
If
x1 [n] ←→ X1 (z)
and
x2 [n] ←→ X2 (z)
then
x[n] = a_1 x_1[n] + a_2 x_2[n] ←→ X(z) = a_1 X_1(z) + a_2 X_2(z)
The region of convergence of X(z) is the intersection of the ROC of each of the
individual z-transforms.
Time Shifting Property of the z-Transform
If
x[n] ←→ X(z)
then
x[n − k] ←→ z −k X(z)
The ROC will be the same, except possibly at z = 0 (if k > 0) or z = ∞ (if k < 0).
Example: Using the Time Shifting Property of the z-Transform
Determine x[n] if
X(z) = z^{−1} · 1/(1 − (1/4)z^{−1}),  ROC: |z| > 1/4
The factor z^{−1} is associated with a time shift of one sample to the right.
First, let’s find the inverse transform of 1/(1 − (1/4)z^{−1}), ROC: |z| > 1/4;
from the table: (1/4)^n u[n].
Then, applying the time shifting property:
x[n] = (1/4)^{n−1} u[n − 1]
Convolution Property of the z-Transform
If
x1 [n] ←→ X1 (z)
and
x2 [n] ←→ X2 (z)
then
x[n] = x1 [n] ∗ x2 [n] ←→ X(z) = X1 (z)X2 (z)
Example: Using the Convolution Property
Compute the convolution, x[n] = x1 [n] ∗ x2 [n], of the following two signals:
X1 (z) = 1 − 2z −1 + z −2
X2 (z) = 1 + z −1 + z −2 + z −3 + z −4 + z −5
X(z) = X1 (z)X2 (z) = 1 − z −1 − z −6 + z −7
x[n] = {1, −1, 0, 0, 0, 0, −1, 1}  (↑ marks n = 0)
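Because multiplying the z-transforms is the same as convolving the coefficient sequences, the product above can be checked directly with `numpy.convolve`:

```python
import numpy as np

# Convolution property: the product X1(z) X2(z) has coefficients equal to
# the convolution of the two coefficient sequences.
x1 = np.array([1, -2, 1])           # X1(z) = 1 - 2 z^-1 + z^-2
x2 = np.ones(6, dtype=int)          # X2(z) = 1 + z^-1 + ... + z^-5
x = np.convolve(x1, x2)
print(x)   # 1, -1, 0, 0, 0, 0, -1, 1
```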
Summary of Properties of the z-Transform
Analysis of LTI Systems in the z-Domain
System Function of a Linear Time-Invariant System
An LTI system can be represented as the convolution of the input and impulse
response:
y[n] = x[n] ∗ h[n]
From the convolution property of the z-transform:
Y (z) = X(z)H(z)
We can find the system function, H(z), by taking the z-transform of both sides of
the difference equation and applying the time shifting property:
Y(z) = −Σ_{k=1}^{N} a_k z^{−k} Y(z) + Σ_{k=0}^{M} b_k z^{−k} X(z)
Y(z) (1 + Σ_{k=1}^{N} a_k z^{−k}) = X(z) (Σ_{k=0}^{M} b_k z^{−k})
Finding the System Function From the Difference Equation
(Continued)
Solving for the system function:
H(z) = Y(z)/X(z) = (Σ_{k=0}^{M} b_k z^{−k}) / (1 + Σ_{k=1}^{N} a_k z^{−k}) = (Σ_{k=0}^{M} b_k z^{−k}) / (Σ_{k=0}^{N} a_k z^{−k}),  with a_0 = 1
▶ This is a rational system function, called a pole-zero system, with N poles and
  M zeros
▶ Since this is a causal system, the ROC is outward from the pole farthest from
  the origin
▶ If all poles are inside the unit circle, then the system is stable and has a
  frequency response
▶ Due to the presence of poles, this system is an infinite impulse response (IIR)
  system
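As a sketch (the b and a coefficients below are illustrative values, not from the slides), scipy's `tf2zpk` factors a rational system function into its zeros and poles, and the stability condition on the poles can be checked directly:

```python
import numpy as np
from scipy.signal import tf2zpk

# Zeros and poles of a rational system function H(z) = B(z)/A(z).
b = [1.0, 0.5]            # B(z) = 1 + 0.5 z^-1   -> one zero
a = [1.0, -0.9, 0.2]      # A(z) = 1 - 0.9 z^-1 + 0.2 z^-2 -> two poles
zeros, poles, gain = tf2zpk(b, a)

# For a causal system: stable iff every pole lies inside the unit circle.
print(poles, np.all(np.abs(poles) < 1))
```

Here the poles land at 0.5 and 0.4, both inside the unit circle, so this example system is stable.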
Special Case: All-Zero System
▶ Contains M zeros
▶ Contains M -th order pole at the origin, z = 0
▶ Since poles at the origin are considered trivial, this is an all-zero system
▶ Has a finite impulse response (the bk coefficients)
▶ Called an FIR system or a moving-average (MA) system
Special Case: All-Pole System
H(z) = Y(z)/X(z) = b_0 / (1 + Σ_{k=1}^{N} a_k z^{−k}) = b_0 z^N / (Σ_{k=0}^{N} a_k z^{N−k}),  with a_0 = 1
H(z) = Y(z)/X(z) = 2 / (1 − (1/2)z^{−1}),  ROC: |z| > 1/2  ←→  h[n] = 2(1/2)^n u[n]
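This transform pair can be confirmed numerically by filtering a unit impulse through the corresponding difference equation (a sketch using `scipy.signal.lfilter`):

```python
import numpy as np
from scipy.signal import lfilter

# Check H(z) = 2 / (1 - 0.5 z^-1)  <->  h[n] = 2 (1/2)^n u[n]
# by computing the impulse response of the difference equation.
impulse = np.zeros(8); impulse[0] = 1.0
h = lfilter([2.0], [1.0, -0.5], impulse)

n = np.arange(8)
assert np.allclose(h, 2 * 0.5 ** n)
```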
Probability Models and Random Variables
Deterministic and Random Signals
x[n] = cos ω0 n
y[n] = −ay[n − 1] + bu[n]
y[n] = −ax[n]y[n − 1]
The probability density function is the derivative of the distribution function:
p_x(x) = dP_x(x)/dx
The distribution function is an integral of the probability density function:
P_x(x) = ∫_{−∞}^{x} p_x(ξ) dξ
Probability Density Function (pdf) (Continued)
E {ax} = aE {x}
var {ax} = a2 var {x}
E {x + a} = E {x} + a
var {x + a} = var {x}
Suppose a random variable z has zero mean and unit variance. Then the random
variable x = az + b has mean E{x} = aE{z} + b = b and variance var{x} = a^2 var{z} = a^2.
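These transformation rules are easy to confirm empirically (the sample values of a and b below are illustrative):

```python
import numpy as np

# Empirical check: if z has zero mean and unit variance, then x = a z + b
# has mean b and variance a^2.
rng = np.random.default_rng(0)
z = rng.standard_normal(200_000)   # approximately zero mean, unit variance
a, b = 3.0, -1.0
x = a * z + b

print(x.mean(), x.var())   # close to b = -1 and a^2 = 9
```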
Assume the random process x[n] is a physical signal, with units of volts.
▶ The standard deviation, σx , is the RMS value, in volts. This is the value
measured by an AC voltmeter.
▶ The variance, σ_x^2, has units of volts^2. Recall V^2/R has units of power (watts).
  Thus, we may interpret σ_x^2 as the AC power dissipated in a 1 Ω resistor.
Signal-to-Noise Ratio
We use power to compare the strengths of a desired signal and a background noise
with the signal power-to-noise power ratio, or SNR:
SNR = P_signal / P_noise = (σ_signal^2 / R_load) / (σ_noise^2 / R_load) = σ_signal^2 / σ_noise^2
or, in decibels,
SNR_dB = 10 log_10(σ_signal^2 / σ_noise^2) dB
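For a concrete instance (the variance values below are illustrative, in volts^2), the linear and decibel forms of the SNR are:

```python
import numpy as np

# SNR from signal and noise variances, linear and in dB.
var_signal = 4.0    # illustrative signal power, volts^2
var_noise = 0.25    # illustrative noise power, volts^2

snr = var_signal / var_noise
snr_db = 10 * np.log10(snr)
print(snr, snr_db)   # 16.0, about 12.04 dB
```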
This is stationary in the strict sense: the joint PDFs of the two sets of random
variables are identical, even when each random variable in the second set is
displaced in time from the first set by an amount k.
Statistical (Ensemble) Averages: Mean, Variance
Average / Mean (expected value):
m_{x_n} = E{x_n} = ∫_{−∞}^{∞} x p_{x_n}(x, n) dx
Autocovariance:
γ_xx[n, m] = E{(x_n − m_{x_n})(x_m − m_{x_m})*} = φ_xx[n, m] − m_{x_n} m_{x_m}*
Cross-Covariance:
γ_xy[n, m] = E{(x_n − m_{x_n})(y_m − m_{y_m})*}
Example: consider a sinusoid with random phase θ, uniformly distributed over [0, 2π):
x_t = x(t) = A cos(ωt + θ)
m_{x_t} = E{x_t} = E{A cos(ωt + θ)} = ∫_0^{2π} A cos(ωt + θ) (1/2π) dθ = 0 = m_x
The autocorrelation of the random process (also autocovariance since zero mean):
We observe a random signal, y[n], which has a desired signal, x[n], plus noise, v[n]:
y[n] = x[n] + v[n]
The cross-correlation between the observed and desired signal, assuming that the
noise and desired signal are uncorrelated, is:
φ_yx[m] = φ_xx[m]
m_x = E{x_n}
σ_x^2 = E{|x_n − m_x|^2}
If the probability distributions are not time invariant but the above equations for
the averages still hold, the random process is wide-sense stationary.
Time Averages and Ergodicity
If the process is zero mean (m_x = 0) and we define P_xx(ω) = Φ_xx(e^{jω}), then at
zero lag (m = 0), the average power of the random process is:
E{|x[n]|^2} = φ_xx[0] = σ_x^2 = (1/2π) ∫_{−π}^{π} Φ_xx(e^{jω}) dω = (1/2π) ∫_{−π}^{π} P_xx(ω) dω
A signal with these properties is called wide-sense stationary (wss); we make this
assumption moving forward.
Interpreting Autocorrelation ϕxx [m]
▶ Autocorrelation is a measure of rate of change of a random process
▶ Autocorrelation at lag m = 0 gives the average power of the random signal:
For white noise, the PSD is constant; therefore all frequencies are present in the same amount.
Filtered Random Process
Let x be a white noise process so all the xn are mutually independent and
uncorrelated. Form a new random process by the difference equation
yn = xn + xn−1
y1 = x1 + x0 , y2 = x2 + x1 , etc. Calculate a few covariances:
γ_yy[1, 1] = E{y_1 y_1} = E{(x_1 + x_0)(x_1 + x_0)} = E{x_1 x_1 + x_0 x_1 + x_1 x_0 + x_0 x_0}
           = σ_x^2 + 0 + 0 + σ_x^2 = 2σ_x^2
γ_yy[1, 2] = E{(x_1 + x_0)(x_2 + x_1)} = E{x_1 x_2 + x_1 x_1 + x_0 x_2 + x_0 x_1}
           = 0 + σ_x^2 + 0 + 0 = σ_x^2
γ_yy[1, 3] = E{(x_1 + x_0)(x_3 + x_2)} = E{x_1 x_3 + x_1 x_2 + x_0 x_3 + x_0 x_2}
           = 0 + 0 + 0 + 0 = 0
γ_yy[2, 3] = E{(x_2 + x_1)(x_3 + x_2)} = E{x_2 x_3 + x_2 x_2 + x_1 x_3 + x_1 x_2}
           = 0 + σ_x^2 + 0 + 0 = σ_x^2
Filtered Random Process (Continued)
Filtered white noise:
yn = xn + xn−1
Autocorrelation of output:
ϕyy [m] = {0, σx2 , 2σx2 , σx2 , 0}
Impulse response of filter:
h[n] = {1, 1}
Note:
h[n] ∗ h[−n] = {0, 1, 2, 1, 0}
Compare:
ϕxx [m] = σx2 δ[m]
h[n] ∗ h[−n] = {0, 1, 2, 1, 0}
ϕyy [m] = {0, σx2 , 2σx2 , σx2 , 0} = σx2 δ[m] ∗ {0, 1, 2, 1, 0}
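The autocorrelation pattern {σ_x^2, 2σ_x^2, σ_x^2} around lag zero can be observed directly by simulating the filtered white-noise process (a sketch; the sample size and seed are arbitrary choices):

```python
import numpy as np

# Simulate y[n] = x[n] + x[n-1] with white noise x and estimate the
# autocorrelation at small lags; expect about sigma^2 * {2, 1, 0} at m = 0, 1, 2.
rng = np.random.default_rng(1)
sigma2 = 1.0
N = 200_000
x = rng.normal(0.0, np.sqrt(sigma2), N)
y = x[1:] + x[:-1]

def acorr(v, m):
    """Biased sample autocorrelation estimate of E{v[n+m] v[n]} for m >= 0."""
    return np.mean(v[m:] * v[:len(v) - m])

assert abs(acorr(y, 0) - 2 * sigma2) < 0.05   # phi_yy[0] ~ 2 sigma^2
assert abs(acorr(y, 1) - sigma2) < 0.05       # phi_yy[1] ~ sigma^2
assert abs(acorr(y, 2)) < 0.05                # phi_yy[2] ~ 0
```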
Discrete-Time Random Signals and LTI Systems
Mean of LTI System Output
▶ h[n] impulse response of stable LTI system
▶ x[n] is real-valued input that is sample sequence of wide-sense stationary
discrete-time random process with mean mx and autocorrelation ϕxx [m]
▶ y[n] is output of LTI system and also a random process
y[n] = Σ_{k=−∞}^{∞} h[n − k] x[k] = Σ_{k=−∞}^{∞} h[k] x[n − k]
The output autocorrelation is the input autocorrelation convolved with the
deterministic autocorrelation of the impulse response, c_hh[m] = h[m] ∗ h[−m]:
φ_yy[m] = φ_xx[m] ∗ c_hh[m]
We can represent this using Fourier transforms (we now assume m_x = 0).
Since C_hh(e^{jω}) = H(e^{jω}) H*(e^{jω}) = |H(e^{jω})|^2, we now have the power density
spectrum of the output process:
Φ_yy(e^{jω}) = |H(e^{jω})|^2 Φ_xx(e^{jω})
Therefore, the cross-correlation is the convolution of the impulse response with the
input autocorrelation sequence:
φ_yx[m] = h[m] ∗ φ_xx[m]
This can be represented in the frequency domain:
Φ_yx(e^{jω}) = H(e^{jω}) Φ_xx(e^{jω})
If we have a system where h[n] = an u[n] then the frequency response is:
H(e^{jω}) = 1 / (1 − a e^{−jω})
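This closed-form frequency response can be cross-checked against scipy's `freqz` evaluation of the difference-equation coefficients (a sketch; a = 0.7 is an illustrative choice):

```python
import numpy as np
from scipy.signal import freqz

# Frequency response of h[n] = a^n u[n]: H(e^jw) = 1 / (1 - a e^{-jw}).
a = 0.7
w, H = freqz([1.0], [1.0, -a], worN=512)   # H(z) = 1 / (1 - a z^-1) on |z| = 1
H_closed = 1.0 / (1.0 - a * np.exp(-1j * w))

assert np.allclose(H, H_closed)
```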
The output of this system can represent any random signal with this power
spectrum:
Also,