Lecture 7 - IIR Filters: James Barnes (James.Barnes@colostate.edu)
Spring 2014
Lecture 8
1. IIR Filter Design Overview
2. REVIEW Analog → Digital Filter Design via Impulse Invariance
3. NEW Analog → Digital Filter Design via Bilinear Transformation
4. Pre-warping
5. Low-pass to High-pass Frequency Transformation
An IIR filter is one class of linear time-invariant system. We can represent the
function of the filter as a difference equation:
y[n] = -\sum_{k=1}^{N} a_k y[n-k] + \sum_{k=0}^{M} b_k x[n-k]    (1)
The first sum represents the "Auto-Regressive" or IIR part and the second sum
represents the "Moving Average" or FIR part. In general, a filter can have either
or both parts.
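The difference equation can be evaluated directly, sample by sample. A minimal NumPy sketch (the function name difference_eq and the one-pole example coefficients are mine, not from the lecture):

```python
import numpy as np

def difference_eq(b, a, x):
    """Evaluate y[n] = -sum_{k=1}^{N} a_k y[n-k] + sum_{k=0}^{M} b_k x[n-k].

    b = [b0, ..., bM] is the feed-forward (FIR) part;
    a = [a1, ..., aN] is the feedback (IIR) part, with a0 = 1 implicit.
    """
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = 0.0
        for k, bk in enumerate(b):           # moving-average part
            if n - k >= 0:
                acc += bk * x[n - k]
        for k, ak in enumerate(a, start=1):  # auto-regressive part
            if n - k >= 0:
                acc -= ak * y[n - k]
        y[n] = acc
    return y

# Example: y[n] = 0.5 y[n-1] + x[n], a one-pole smoother driven by a step
y = difference_eq([1.0], [-0.5], np.ones(5))
```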
\Big(1 + \sum_{k=1}^{N} a_k z^{-k}\Big) Y(z) = \Big(\sum_{k=0}^{M} b_k z^{-k}\Big) X(z)    (2)
or
A(z)H(z) = B(z). (5)
Therefore
H(z) = \frac{B(z)}{A(z)} = \frac{\sum_{k=0}^{M} b_k z^{-k}}{1 + \sum_{k=1}^{N} a_k z^{-k}}.    (6)
Another form of the transfer function we will use is the factored or product form:
H(z) = b_0 \frac{\prod_{k=1}^{M} (1 - z_k z^{-1})}{\prod_{k=1}^{N} (1 - p_k z^{-1})},    (7)
where z_k and p_k are the zero and pole locations, respectively. Since we are
concerned with systems having a real impulse response h[n], the poles and zeros
must either be real or occur in complex conjugate pairs.
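The zeros z_k and poles p_k are the roots of B(z) and A(z). A short NumPy check (the example coefficients are hypothetical) that a real h[n] yields real roots or conjugate pairs:

```python
import numpy as np

# Hypothetical example: B(z) = 1 - z^{-2}, A(z) = 1 - 0.9 z^{-1} + 0.81 z^{-2}
b = [1.0, 0.0, -1.0]
a = [1.0, -0.9, 0.81]

zeros = np.roots(b)   # the z_k: roots of z^2 B(z) in the variable z
poles = np.roots(a)   # the p_k: roots of z^2 A(z)

# The zeros are real (+1 and -1); the poles are a complex-conjugate pair
# with magnitude sqrt(0.81) = 0.9, so this filter is stable.
```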
Recall that for a filter to have linear phase, H(z) must satisfy
H(z) = \pm z^{-M} H(z^{-1}), i.e. h[n] = \pm h[M - n].
Taking the inverse z-transform of (4), we get the standard convolution sum form

\sum_{k=0}^{N} a_k y[n-k] = \sum_{k=0}^{M} b_k x[n-k]    (9)

(with a_0 = 1).
But using the fact that x[n] = δ[n] ⇒ y[n] = h[n], (9) can be written
\sum_{k=0}^{N} a_k h[n-k] = \sum_{k=0}^{M} b_k \delta[n-k].    (10)
h[0] = b_0,
h[1] = b_1 - a_1 h[0],
...
h[M] = b_M - a_1 h[M-1] - ... - a_N h[M-N],
h[M+1] = -a_1 h[M] - a_2 h[M-1] - ... - a_N h[M-N+1].
Note that even though the filter has only a finite number of coefficients, the
impulse response can have an "infinite" duration because of the recursive nature
of the equation for y[n].
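The recursion for h[n] above is easy to run numerically. A sketch, assuming the same coefficient conventions as (1) (the function name impulse_response is mine); a single pole at z = 0.5 gives h[n] = 0.5^n, which never dies out:

```python
import numpy as np

def impulse_response(b, a, L):
    """First L samples of h[n] from the recursion
    h[n] = b_n - sum_{k=1}^{N} a_k h[n-k], where b_n = 0 for n > M."""
    h = np.zeros(L)
    for n in range(L):
        h[n] = b[n] if n < len(b) else 0.0
        for k, ak in enumerate(a, start=1):
            if n - k >= 0:
                h[n] -= ak * h[n - k]
    return h

# One pole at z = 0.5: H(z) = 1/(1 - 0.5 z^{-1})
h = impulse_response([1.0], [-0.5], 6)
```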
H(z) = H_1(z) H_2(z) = B(z) \frac{1}{A(z)}    (13)
[Direct Form I block diagram: the b[1] ... b[M] feed-forward taps, each behind a z^{-1} delay, feed the -a[1] ... -a[N] feedback taps through states s[1] ... s[N].]
Properties
● Memory cells: M+N+1
● Mpy/Add ops: M+N+1
H(z) = H_2(z) H_1(z) = \frac{1}{A(z)} B(z)    (14)
[Direct Form II block diagram: the -a[1] ... -a[N] feedback taps come first, sharing one delay line with states s1[n] ... sN[n], followed by the b[1] ... b[N-1] feed-forward taps.]
Properties
● Memory cells: Greater of [M,N]
● Mpy/Add ops: M+N+1
Transposition Theorem: reversing the direction of all signal flow paths and
exchanging the input and output leaves the filter function unchanged.
[Transposed Direct Form II block diagram: the b[k] and -a[k] taps feed a shared delay line with states s2[n] ... sN[n] through three-input adders.]
Properties
● Memory cells: Greater of [M,N]
● Mpy/Add ops: M+N+1
● Fewer adders, but 3 inputs
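The transposed structure needs only one state array of length max(M, N). A sketch of the per-sample loop, assuming the tap conventions above (the function name df2t_filter is mine):

```python
import numpy as np

def df2t_filter(b, a, x):
    """Transposed Direct Form II.
    b = [b0, ..., bM]; a = [a1, ..., aN] with a0 = 1 implicit."""
    K = max(len(b) - 1, len(a))   # number of state cells: greater of [M, N]
    bb = np.zeros(K + 1)
    bb[:len(b)] = b
    aa = np.zeros(K)
    aa[:len(a)] = a
    s = np.zeros(K)               # shared state cells s1 ... sK
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        yn = bb[0] * xn + s[0]
        # each state cell uses a 3-input adder: next state, b-tap, and -a-tap
        for k in range(K - 1):
            s[k] = s[k + 1] + bb[k + 1] * xn - aa[k] * yn
        s[K - 1] = bb[K] * xn - aa[K - 1] * yn
        y[n] = yn
    return y

# One-pole example: H(z) = 1/(1 - 0.5 z^{-1}) driven by a step
y = df2t_filter([1.0], [-0.5], np.ones(5))
```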
Fixed point arithmetic (and to a lesser extent floating point) can cause deviations
from ideal behavior because of
● Round-off error in computing sum-of-products expressions
● Arithmetic overflow in computing SOP expressions
● Arithmetic overflow in filter tap coefficients in adaptive filters
● "Quantization" of pole and zero locations due to finite precision, leading to
a different frequency dependence than the design target.
Pole and zero locations are determined from the ak and bk values. Because of the
finite precision of fixed-point arithmetic, placement is constrained to a grid of
points. Example for a 2-pole system, using 4-bit precision:
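The effect can be demonstrated directly. A toy sketch (the rounding model with 3 fractional bits and the example pole pair at 0.90 and 0.95 are my choices, not from the lecture); here rounding the a_k coefficients moves one pole onto the unit circle:

```python
import numpy as np

def quantize(c, frac_bits):
    """Round each coefficient to the nearest multiple of 2**-frac_bits
    (a toy model of fixed-point coefficient storage)."""
    q = 2.0 ** (-frac_bits)
    return np.round(np.asarray(c) / q) * q

# Two closely spaced poles: A(z) = (1 - 0.90 z^{-1})(1 - 0.95 z^{-1})
a = np.array([1.0, -1.85, 0.855])
poles_ideal = np.roots(a)                # 0.90 and 0.95
poles_quant = np.roots(quantize(a, 3))   # grid spacing 1/8
# Quantization pushes one pole out to z = 1: the filter is no longer stable.
```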
If Δp_i is the error in pole location p_i due to quantization, we can compute Δp_i
from the expression

\Delta p_i = \sum_{k=1}^{N} \frac{\partial p_i}{\partial a_k} \Delta a_k.    (17)
\Delta p_i = -\sum_{k=1}^{N} \frac{p_i^{N-k}}{\prod_{l=1, l \neq i}^{N} (p_i - p_l)} \Delta a_k.    (19)
Because of the product term in the denominator, the error term can get large for
poles which are close together. The error can be minimized by breaking up the
filter into sections with poles as far apart as possible.
Typically, the filter is broken up into second-order sections, where each
section's poles and zeros are complex conjugate pairs. These will not be close
together, and the error will be minimized.
[Cascade-form block diagram: x[n] enters a gain G followed by N_s second-order sections in series, producing y[n].]

H(z) = G \prod_{k=1}^{N_s} \frac{1 + b[1,k] z^{-1} + b[2,k] z^{-2}}{1 + a[1,k] z^{-1} + a[2,k] z^{-2}}
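The cascade form maps directly to code. A sketch, assuming unity leading coefficient in each section with the overall gain G pulled out front (the function names biquad and cascade are mine):

```python
import numpy as np

def biquad(b1, b2, a1, a2, x):
    """One second-order section:
    H_k(z) = (1 + b1 z^{-1} + b2 z^{-2}) / (1 + a1 z^{-1} + a2 z^{-2})."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n]
        if n >= 1:
            y[n] += b1 * x[n - 1] - a1 * y[n - 1]
        if n >= 2:
            y[n] += b2 * x[n - 2] - a2 * y[n - 2]
    return y

def cascade(G, sections, x):
    """H(z) = G * prod_k H_k(z); sections is a list of (b1, b2, a1, a2)."""
    y = G * np.asarray(x, dtype=float)
    for b1, b2, a1, a2 in sections:
        y = biquad(b1, b2, a1, a2, y)
    return y
```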