EE430 Lectures
These lecture notes were prepared to help students follow the lectures more easily and efficiently. This is a fast-paced course with a significant amount of material, and to cover all of it at a reasonable pace in the lectures, we rely on these partially-complete lecture notes. In particular, we included important results, properties, comments and examples, but left out most of the mathematics, derivations and solutions of examples, which we do on the board and expect the students to write into the provided empty spaces in the notes. We hope that this approach will reduce the note-taking burden on the students and leave more time to stress important concepts and discuss more examples.
These lecture notes were prepared using mainly our textbook, "Discrete-Time Signal Processing" by Alan V. Oppenheim, Ronald W. Schafer and John R. Buck. Lecture notes of Professors Tolga Ciloglu, Aydin Alatan and Engin Tuncer were also very helpful when preparing these notes. Most figures and tables in the notes are also taken from the textbook.
This is the first version of the notes, so they may contain errors, and we believe there is room for improvement in many aspects. In this regard, we are open to feedback and comments, especially from the students taking the course.
Finally, I would like to thank the students who took the course in my section in Fall 2018/2019.
They have helped me type some parts of these notes. (Çağnur Tekerekoğlu, İbrahim Üste, Canberk
Sönmez, İsmail Mert Meral, Selen Keleş, Enes Muhavvid Şahin, Yüksel Mert Salar, Umut Utku
Erdem, Abbas Raimkulov, Fatih Yıldırım, Ferdi Akdoğan, Uğur Berk Şahin, Barış Şafak Gökçe,
Furkan Kılıç, Oytun Akpulat, Güner Dilşad Er, Zülfü Serhat Kük, Hilal Köksal, Alper Bilgiç, Emre
Onat Keser, Faruk Tellioğlu, Tahir Çimen, Berrin Güney, Mahmoud ALAsmar, Safa Özer, Tamer
Aktekin, Barış Fındık, Batuhan Kircova, Ahmed Akyol, Şevket Doğmuş, Emre Can, Mert Elmas,
Halil Temurtaş, Yüksel Yönsel, Eren Berk Kama, Ahmet Nazlıoğlu, Dilge Hüma Aydın, Abdullah
Aslam, Özer Karanfil)
Fatih Kamışlı
December 27, 2018.
1
Contents
2 The Z Transform 34
2.1 The Z Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.2 Properties of the ROC for the Z Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.3 The Inverse Z Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.4 Z Transform Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.5 Z Transform and LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2
4 Sampling of Continuous-time (CT) Signals 69
4.1 Periodic Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.2 Frequency-domain Representation of Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.3 Reconstruction of a Band-Limited Signal from Its Samples . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.4 Discrete-time processing of continuous-time signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.4.1 Impulse Invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.5 CT Processing of DT Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.6 Changing Sampling Rate Using DT Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.6.1 Sampling Rate Reduction by an Integer Factor . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.6.2 Increasing Sampling Rate by an Integer Factor . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
4.6.3 Simple Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.6.4 Changing Sampling Rate by Non-Integer Factor . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.6.5 Sampling of band-pass signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.7 Digital Processing of Analog Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.7.1 Prefiltering to Avoid Aliasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.7.2 Analog-to-Digital (A/D) Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.7.3 Analysis of Quantization error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.7.4 Digital-to-Analog (D/A) Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3
8 Computation of Discrete Fourier Transform 137
8.1 Direct Computation of DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
8.1.1 Direct Evaluation of the DFT definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
8.1.2 The Goertzel Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
8.2 Decimation-in-time FFT Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.3 Decimation-in-Frequency FFT Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.4 More general FFT algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
4
Chapter 1
Discrete-time Signals and Systems
Contents
1.1 Discrete-time (DT) signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.1.1 Basic sequences and sequence operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2 Discrete-time (DT) systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2.1 Memoryless systems: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2.2 Linear systems: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2.3 Time-invariant systems: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2.4 Causality: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2.5 Stability: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3 Linear time-invariant (LTI) systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.3.1 Computation of convolution sum: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4 Properties of convolution and LTI systems . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.1 Properties of convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.2 Properties of LTI systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.3 FIR and IIR systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.5 Linear constant-coefficient difference equations (LCCDE) . . . . . . . . . . . . . . . . . 17
1.6 Frequency domain representation of DT signals and systems . . . . . . . . . . . . . . . 21
1.7 Representation of sequences by Fourier transforms . . . . . . . . . . . . . . . . . . . . . 24
1.8 Symmetry properties of DT Fourier transforms . . . . . . . . . . . . . . . . . . . . . . . 27
1.9 DT Fourier transform theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
This chapter covers the fundamental concepts of discrete-time (DT) signals and systems. In par-
ticular, we cover Sections 2.1 to 2.9 from our textbook.
5
1.1 Discrete-time (DT) signals
Any sequence x[n] can be written in terms of delayed and scaled unit impulses δ[n − k]:
6
Unit step sequence
Exponential sequences
General form :
7
Complex exponentials
x[n] =
Properties :
1. Complex exponentials A e^{j(ω0 + 2πr)n} with frequencies (ω0 + 2πr), r ∈ Z (e.g. ω0 , ω0 + 2π, ω0 + 4π, ...) are equivalent to each other:
2. Based on the above property, when discussing complex exponentials A e^{jω0 n} (or sinusoids cos(ω0 n + φ)), we only need to consider an interval of length 2π for the frequency ω0 :
3. Complex exponentials A e^{jω0 n} (or sinusoids cos(ω0 n + φ)) are periodic only if ω0 /(2π) is a ratio of integers, i.e.
Remember periodicity requirement for any sequence x[n]:
4. (Prop. 1 + Prop. 3) There are only N distinguishable frequencies for which the complex exponentials A e^{jω0 n} (or sinusoids cos(ω0 n + φ)) are periodic with N :
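The periodicity conditions in Props. 3 and 4 can be checked numerically. Below is a small Python/NumPy sketch (the two frequencies are illustrative choices, not from the notes): ω0 = 2π/5 makes ω0 /(2π) rational, giving period N = 5, while ω0 = 1 makes ω0 /(2π) irrational, so no period exists.

```python
import numpy as np

n = np.arange(0, 200)

# omega0 = 2*pi/5: omega0/(2*pi) = 1/5 is rational -> periodic with N = 5
x1 = np.cos(2 * np.pi / 5 * n)
print(np.allclose(x1[:100], x1[5:105]))   # True: x1[n] == x1[n + 5]

# omega0 = 1: omega0/(2*pi) = 1/(2*pi) is irrational -> not periodic for any N
x2 = np.cos(1.0 * n)
print(any(np.allclose(x2[:100], x2[N:100 + N]) for N in range(1, 100)))  # False
```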
8
5. For complex exponentials A e^{jω0 n} (or sinusoids cos(ω0 n + φ)), the rate of oscillation of the complex exponential (or sinusoid) determines whether the frequency is high or low:
9
Note : For the CT complex exponential x(t) = A e^{jΩ0 t} , none of the above 5 properties holds:
1.
2.
3.
4.
5.
Note : First time shift, then time reversal ≠ first time reversal, then time shift
10
1.2 Discrete-time (DT) systems
Notation:
1.2.1 Memoryless systems:
Output y[n] does not depend on past or future values of the input x[n].
Ex:
1.2.2 Linear systems:
The system satisfies the following relation for any a, b, x1 [n], x2 [n]:
11
1.2.3 Time-invariant systems:
Any time shift at the input causes a time shift at the output by the same amount.
1.2.4 Causality:
Current output sample y[n] depends only on current and past input samples x[n], x[n−1], x[n−2], ...
Ex:
1.2.5 Stability:
A system is stable if and only if (iff) every bounded input (i.e. ) produces a
bounded output (i.e. ).
12
Ex:
13
1.3.1 Computation of convolution sum:
Ex: x[n] = δ[n + 2] + 2δ[n] − δ[n − 3] is input to LTI system with impulse response h[n] =
3δ[n] + 2δ[n − 1] + δ[n − 2]. Find output y[n] using two methods.
Echo method : Add outputs to each weighted and delayed delta function in the input. (Useful when
input has few samples.)
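Since the solution is worked on the board, here is a quick numerical check of this example in Python/NumPy (np.convolve computes the linear convolution; the only subtlety is tracking the starting index of the output):

```python
import numpy as np

# x[n] = d[n+2] + 2 d[n] - d[n-3]  -> samples for n = -2 .. 3
x = np.array([1, 0, 2, 0, 0, -1])
# h[n] = 3 d[n] + 2 d[n-1] + d[n-2] -> samples for n = 0 .. 2
h = np.array([3, 2, 1])

y = np.convolve(x, h)               # linear convolution
n = np.arange(-2, -2 + len(y))      # output support starts at (-2) + 0 = -2
print(list(zip(n, y)))              # y = [3, 2, 7, 4, 2, -3, -2, -1] for n = -2..5
```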
14
Ex: Impulse response of LTI system is h[n] = u[n] − u[n − N ] and input x[n] = an u[n], 0 < a < 1.
Find output y[n].
Commutative property:
Associative property:
15
Figure 1.1: (Figure 2.11 in textbook) (a) Parallel combination of LTI systems (b) an equivalent system.
Figure 1.2: (Figure 2.12 in textbook) (a) Cascade combination of LTI systems (b) equivalent cascade system (c) single equivalent system.
Step-2 : Necessity, i.e. for an LTI system to be stable, we must have Σ_{n=−∞}^{∞} |h[n]| < ∞.
16
Invertibility property: LTI system (h[n]) is invertible ⇐⇒ There is another LTI system
(g[n]) such that h[n] ∗ g[n] = δ[n].
FIR : Finite (-duration) Impulse Response (h[n] has finite number of nonzero samples)
Ex:
IIR : Infinite (-duration) Impulse Response (h[n] has infinite number of nonzero samples)
Ex:
1.5 Linear constant-coefficient difference equations (LCCDE)
An important subclass of LTI systems is one where the input x[n] and output y[n] satisfy an LCCDE:
input x[n] given, output y[n] found (given x[n], the LCCDE is solved for y[n])
Why study LCCDEs? They can be useful to represent and implement LTI systems.
Ex: Accumulator :
Initial/auxiliary conditions
LCCDEs require initial/auxiliary conditions on y[n] samples to find a unique solution y[n] for a given x[n]. Consider the following first-order LCCDE :
17
LCCDE and LTI systems
Consider the following example to gain insight into the following results.
Ex: y[n] + ay[n − 1] = x[n] for x[n] = 0, n < 0 and initial condition y[−1] = C
LCCDE can represent LTI systems if the initial/auxiliary conditions are so-called zero initial
conditions:
(Type I)
(Type II)
Note :
– Type I (IRC) conditions lead to LTI systems that are causal.
– Type II (IRC) conditions lead to LTI systems that are anti-causal.
– Question: There are LTI systems that are neither causal nor anti-causal (i.e. h[n] is two-sided). What initial conditions lead to such systems?
General solution of LCCDE is obtained as a sum of particular solution and homogeneous solution:
18
Particular soln: Given a particular input xp [n], the particular solution yp [n] is any solution that satisfies the LCCDE.
Homogeneous soln: Solution yh [n] which satisfies the LCCDE for zero input (i.e. x[n] = 0)
– In general, yh [n] is a weighted sum of signals of the type z^n , n z^n , n^2 z^n , ..., where z is complex.
– Consider yh [n] = z^n and plug it into the homogeneous equation:
– Note: If there is a root zr with multiplicity m > 1, then zr^n , n zr^n , ..., n^{m−1} zr^n should be included in yh [n].
Ex: 3 roots z1 , z2 , z2 → yh [n] = A1 z1^n + A2 z2^n + B2 n z2^n
– Ak in yh [n] are determined from the initial conditions.
LCCDE :
If a set of auxiliary conditions is satisfied, then forward or backward recursion can be used to find/calculate the solution.
Ex:
19
LCCDE of FIR and IIR systems
In this course, we are mostly interested in finding the impulse response of LTI systems represented by an LCCDE. Given an LCCDE of the general form
Σ_{k=0}^{N} ak y[n − k] = Σ_{m=0}^{M} bm x[n − m]        (1.1)
1. Consider the LCCDE with RHS only x[n]: Σ_{k=0}^{N} ak y[n − k] = x[n]
3. Find h[0], h[1], .., h[N − 1] from LCCDE using x[n] = δ[n] and zero initial conditions, i.e.
h[−1] = h[−2] = ... = 0 (This is for causal LTI system, perform similarly if anti-causal LTI
system desired)
4. Find unknown constants in homogeneous solution yh [n] using h[0], h[1], .., h[N − 1] to get ĥ[n]
(This is for causal LTI system, perform similarly if anti-causal LTI system desired)
Ex: y[n] − 3y[n − 1] − 4y[n − 2] = x[n] + 2x[n − 1]. Find impulse response h[n] of causal LTI system
represented by this LCCDE.
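Before solving analytically on the board, the first few samples of h[n] can be generated by forward recursion with zero initial conditions. A Python sketch (one can check it against the closed form h[n] = (6/5) 4^n − (1/5)(−1)^n , n ≥ 0, obtained from the roots z = 4, −1 of the characteristic equation):

```python
import numpy as np

# y[n] = 3 y[n-1] + 4 y[n-2] + x[n] + 2 x[n-1]; feed x[n] = delta[n] and
# recurse forward with zero initial conditions (h[-1] = h[-2] = 0).
x = np.zeros(5); x[0] = 1.0
h = np.zeros(5)
for n in range(5):
    h[n] = x[n]
    if n >= 1:
        h[n] += 2 * x[n - 1] + 3 * h[n - 1]
    if n >= 2:
        h[n] += 4 * h[n - 2]
print(h)   # [1, 5, 19, 77, 307]
```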
20
Transform domain approaches
Transform domain approaches are most useful for LCCDEs describing LTI systems.
Ex: LCCDE : y[n] − 3y[n − 1] − 4y[n − 2] = x[n] + 2x[n − 1]
Consider an LTI system with impulse response h[n] and input x[n]. The output y[n] is
If x[n] = ejωn for −∞ < n < ∞ (i.e. complex exponential with frequency ω)
21
=⇒ e^{jωn} is the eigenfunction for all LTI systems.
The corresponding eigenvalue is H(e^{jω} ), also called the frequency response of the system.
It will be shown that a broad class of signals can be represented by a sum of complex exponentials. Note that the signals e^{jωn} and e^{j(ω+2π)n} are equal, and hence the system cannot distinguish between these eigenfunctions.
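This eigenfunction property is easy to verify numerically; a Python/NumPy sketch with an arbitrary FIR impulse response (the filter and frequency below are illustrative choices, not from the notes):

```python
import numpy as np

# Any LTI system: the complex exponential e^{j w0 n} is an eigenfunction.
h = np.array([1.0, 2.0, 3.0])     # an arbitrary FIR impulse response
w0 = 0.7
n = np.arange(200)
x = np.exp(1j * w0 * n)

# Frequency response at w0: H(e^{j w0}) = sum_k h[k] e^{-j w0 k}
H_w0 = np.sum(h * np.exp(-1j * w0 * np.arange(len(h))))

y = np.convolve(x, h)             # edges differ because x is truncated at n = 0
print(np.allclose(y[2:200], H_w0 * x[2:200]))   # True: y[n] = H(e^{j w0}) x[n]
```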
Ex: The input to an LTI system is x[n] = A cos(ω0 n + φ). Find the output in terms of H(e^{jω} ).
22
An important class of LTI systems, called frequency selective filters, have frequency response
H(ejω ) that is unity (i.e. 1) over a range of frequencies and 0 for the remaining frequencies.
Figure 1.3: (Figure 2.17 in textbook) Ideal lowpass filter showing (a) periodicity of frequency response and (b) one period of frequency response.
Figure 1.4:
(Figure 2.18 in textbook) Ideal frequency
selective filters (a) Highpass filter (b) Bandstop
filter (c) Bandpass filter.
Ex: Moving average system: y[n] = (1/(M1 + M2 + 1)) Σ_{k=−M1}^{M2} x[n − k]. LTI? If so, find h[n] and H(e^{jω} ).
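As a numerical sanity check (using the causal special case M1 = 0, M2 = 4, i.e. a 5-point average, chosen for illustration), the frequency response can be evaluated directly from its defining sum:

```python
import numpy as np

# Causal 5-point moving average (M1 = 0, M2 = 4): h[n] = 1/5 for n = 0..4
k = np.arange(5)
H = lambda w: np.mean(np.exp(-1j * w * k))   # H(e^{jw}) evaluated directly

print(abs(H(0)))              # ≈ 1   (DC passes unchanged)
print(abs(H(2 * np.pi / 5)))  # ≈ 0   (a zero of the response)
print(abs(H(np.pi)))          # ≈ 0.2 (high frequencies attenuated: lowpass-like)
```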
23
Suddenly applied complex exponential inputs
This subsection (Sec. 2.6.2 in the textbook) is a reading assignment. It discusses LTI systems when the inputs are of the form x[n] = e^{jω0 n} u[n] instead of x[n] = e^{jω0 n} .
1.7 Representation of sequences by Fourier transforms
If the above summation converges (i.e. the DTFT of x[n] exists), the sequence x[n] can be obtained from X(e^{jω} ) as follows:
Notes :
– DTFT X(e^{jω} ) is periodic with 2π
– DTFT X(e^{jω} ) specifies how much of each frequency component (e^{jωn} ) is required to synthesize the sequence x[n]
– X(e^{jω} ) is in general complex
24
Remember the "frequency response" definition of LTI systems:
1. (Sufficient condition) If x[n] is absolutely summable (i.e. Σ_{n=−∞}^{∞} |x[n]| < ∞), then Σ_{n=−∞}^{∞} x[n] e^{−jωn} converges, i.e. the DTFT X(e^{jω} ) exists.
Ex: x[n] = a^n u[n]. Does X(e^{jω} ) exist? If so, for which values of a, and what is X(e^{jω} )?
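A quick numerical illustration of the sufficient condition (the values a = 0.8 and ω = 1.3 are arbitrary choices): truncating the absolutely summable sum already matches the closed form 1/(1 − a e^{−jω}):

```python
import numpy as np

a = 0.8                      # |a| < 1 -> absolutely summable -> DTFT exists
n = np.arange(2000)          # truncated sum; the tail is negligible for |a| < 1
w = 1.3                      # an arbitrary frequency
X_sum = np.sum((a ** n) * np.exp(-1j * w * n))
X_closed = 1.0 / (1.0 - a * np.exp(-1j * w))
print(np.isclose(X_sum, X_closed))   # True
```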
2. Some sequences are not absolutely summable but square summable. If x[n] is square summable (Σ_{n=−∞}^{∞} |x[n]|^2 < ∞), then Σ_{n=−∞}^{∞} x[n] e^{−jωn} converges in the mean-square sense, i.e. for a given X(e^{jω} ), if we define XM (e^{jω} ) = Σ_{n=−M}^{M} x[n] e^{−jωn} , then lim_{M→∞} ∫_{−π}^{π} |X(e^{jω} ) − XM (e^{jω} )|^2 dω = 0. In other words, the error |X(e^{jω} ) − XM (e^{jω} )| may not be zero at each ω value, but the energy of the error is.
(**) Ex: H(e^{jω} ) = { 1, |ω| < ωc ; 0, ωc < |ω| < π }, with periodicity 2π.
25
Figure 1.5: (Figure 2.21 in textbook) Convergence of the Fourier Transform.
3. For some sequences that are neither absolutely nor square summable (e.g. periodic sequences such as e^{j(2π/5)n} ), Σ_{n=−∞}^{∞} x[n] e^{−jωn} converges in the generalized-function sense.
(***) Ex: x[n] = 1 for all n.
26
Ex: Examine the convergence of the DTFT for x[n] = e^{jω0 n} .
2. DTFT synthesis or analysis equation may not be easily calculated for some x[n] or X(ejω ) :
Note that:
−→ Adding (2) and (3) gives (1).
−→ From (2):
From (3):
27
(Hence, conjugate symmetry & conjugate antisymmetry are a generalization of the even-odd decomposition from EE301.)
1-
2-
3-
4-
5-
28
6-
Figure 1.6: (Table 2.1 in textbook) Symmetry Properties of the Fourier Transform.
29
1.9 DT Fourier transform theorems
Energy of a DT Signal:
Average Power:
30
If E is finite =⇒ P = 0 (x[n] is an energy signal)
If P is finite and nonzero =⇒ x[n] is a power signal
31
Figure 1.7: (Table 2.2 in textbook) Theorems of the Fourier Transform.
32
Examples
Use DTFT and known signal & DTFT pairs to find DTFT or IDTFT of given expression.
Ex: X(e^{jω} ) = 1 / ( (1 − a e^{jω} )(1 − b e^{jω} ) )
Ex: H(e^{jω} ) = { e^{−jω nd} , ωc < |ω| < π ; 0, |ω| < ωc }, with periodicity 2π.
33
Chapter 2
The Z Transform
Contents
2.1 The Z Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.2 Properties of the ROC for the Z Transform . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.3 The Inverse Z Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.4 Z Transform Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.5 Z Transform and LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
This chapter covers the Z transform and its properties. In particular, we cover Sections 3.1-3.4 from
our textbook.
34
2.1 The Z Transform
Notation :
Z-transform operator Z{·} :
Correspondence between x[n] and its z-transform:
The complex plane (z-plane):
From the above definitions of DTFT and the Z-transform, we can make the following observations:
For z = r · ejω
Convergence of Z-Transform
35
Hence, we can make the following observations :
The Region of Convergence (ROC) is defined, for a given x[n], as the range of z values for which its Z transform converges, i.e. | Σ_{n=−∞}^{∞} x[n] z^{−n} | < ∞.
If the unit circle (i.e. all z s.t. |z| = r = 1) is inside the ROC, then X(z)|_{z=e^{jω}} converges uniformly, and the DTFT can be obtained from the Z transform as X(e^{jω} ) = X(z)|_{z=e^{jω}} , which thus also converges uniformly.
Unit circle is inside ROC of X(z) ⇐⇒ ⇐⇒
The following sequences do not have a uniformly converging DTFT since they are not absolutely summable. Note that their Z transforms do not converge (uniformly) for any z either (i.e. their ROCs are empty), since x1 [n] r^{−n} and x2 [n] r^{−n} are not absolutely summable for any value of r.
x1 [n] = sin(ω0 n) / (πn)
x2 [n] = cos(ω0 n)
Their DTFTs X1 (ejω ) and X2 (ejω ) do not converge uniformly (x1 [n], x2 [n] are not absolutely
summable) but are defined in other means. x1 [n] is square summable and X1 (ejω ) converges in
the mean-square sense, and x2 [n] is neither absolutely nor square summable but X2 (ejω ) converges
in generalized function sense.
Obtaining the DTFT from the Z transform (X(e^{jω} ) = X(z)|_{z=e^{jω}} ) should only be used if x[n] is absolutely summable, i.e. the DTFT converges uniformly (and the ROC of X(z) includes the unit circle). In other words, since X1 (e^{jω} ) and X2 (e^{jω} ) do not converge uniformly, they cannot be obtained from the Z transforms, which do not exist.
Z-transform X(z) has a region of convergence (ROC) for any finite value of a :
36
DTFT X(e^{jω} ) exists/converges uniformly only for some values of a :
For the values of a for which the DTFT exists/converges uniformly (i.e. the ROC includes the unit circle), the DTFT can be obtained from the Z transform via X(e^{jω} ) = X(z)|_{z=e^{jω}}
– Let a = 2,
– Let a = 1/2,
From Ex1 & Ex2 above, we can see that different sequences have the same X(z) expression but
with different ROCs.
=⇒
37
Ex4: x[n] = (1/2)^n u[n] + (−1/3)^n u[n] (right-sided sequence)
38
Some common Z Transform pairs
Note that from pairs 5 and 6, almost all the other pairs can be derived.
39
4. x[n] finite-duration seq. =⇒ ROC entire z-plane except possibly at z = 0 and/or z = ∞
5. x[n] right-sided seq. =⇒ ROC outwards from outermost finite pole (possibly including z = ∞)
6. x[n] left-sided seq. =⇒ ROC inwards from innermost finite pole (possibly including z = 0)
7. x[n] two-sided seq. =⇒ ROC is a ring in the z-plane bounded by poles (or is empty)
8. ROC must be a connected region (i.e. it cannot be a union of multiple disconnected regions).
Note: These properties of ROC limit the possible ROCs that can be associated with a given
pole-zero plot (or X(z)).
40
2.3 The Inverse Z Transform
For typical sequences & Z transforms in this course, less formal methods are sufficient & preferred:
1. Inspection method
2. Partial fraction expansion
3. Power series expansion
Inspection Method:
Remember the Table 2.2 with common signal and Z transform pairs. (Remember basic ones, e.g. 5
and 6, and derive others from those and Z transform properties.)
Partial Fraction Expansion:
Any rational X(z) can be expanded into a sum of partial fractions. (Then use the inspection method to determine the time-domain expression for each fraction.)
41
Ex: X(z) = (1 + 2z^{−1} + z^{−2}) / (1 − (3/2) z^{−1} + (1/2) z^{−2}) = (1 + z^{−1})^2 / ( (1 − (1/2) z^{−1})(1 − z^{−1}) ) with ROC |z| > 1 is given. Find x[n].
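The partial fraction expansion for this example can be cross-checked in Python with scipy.signal.residuez (a sketch; the hand computation is what is done on the board):

```python
import numpy as np
from scipy.signal import residuez, lfilter

# X(z) = (1 + 2 z^-1 + z^-2) / (1 - 1.5 z^-1 + 0.5 z^-2), ROC |z| > 1
b = [1.0, 2.0, 1.0]
a = [1.0, -1.5, 0.5]

r, p, k = residuez(b, a)   # residues, poles, direct (polynomial) terms
# Expansion: X(z) = 2 - 9/(1 - 0.5 z^-1) + 8/(1 - z^-1)

# ROC |z| > 1 -> every term is right-sided:
n = np.arange(8)
x = 2.0 * (n == 0) - 9.0 * 0.5**n + 8.0

# Cross-check against forward recursion on the corresponding LCCDE:
delta = np.zeros(8); delta[0] = 1.0
print(np.allclose(x, lfilter(b, a, delta)))   # True
```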
42
Power Series Method:
X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n} = ...
Ex: The power series expansion for functions such as log, sin, etc. is known:
X(z) = log(1 + a z^{−1} ), |z| > |a|
Ex: X(z) = 1 / (1 − a z^{−1} ), |z| > |a|        Ex: X(z) = 1 / (1 − a z^{−1} ), |z| < |a|
43
Linearity:
Time-shifting:
Differentiation of X(z):
44
Conjugation of a Complex Sequence:
Time Reversal:
Convolution of Sequences:
45
2.5 Z Transform and LTI Systems
LTI System:
Ex: Input x[n] = A u[n] to an LTI system with h[n] = a^n u[n]. Find the output y[n] (assume |a| < 1).
LCCDE:
Causality and Stability of LTI Systems and the ROC of the system function H(z):
LTI system both stable and causal ⇐⇒ All poles of H(z) inside unit circle.
46
Chapter 3
The Discrete Fourier Transform
Contents
3.1 Discrete (-Time) Fourier Series (DFS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.2 Properties of DFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.3 DTFT of Periodic Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.4 Sampling the DTFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.5 The Discrete Fourier Transform (DFT) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.6 Properties of DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.7 Computing Linear Convolution using DFT . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.7.1 Linear Convolution of Two Finite Length Sequences . . . . . . . . . . . . . . . . . . . . . . 63
3.7.2 Circular Convolution as Linear Convolution with Aliasing . . . . . . . . . . . . . . . . . . . 64
3.7.3 Implementing LTI systems Using DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
This chapter covers the Discrete Fourier Transform (DFT) and its properties. In particular, we
cover Sections 8.1-8.7 from our textbook. Reading assignment for this chapter is :
In many applications, it is desirable to have finite extent frequency domain representations with
discrete variables :
N-point finite x[n] ←→ N-point finite X[k], where k is discrete
The DFT provides such a finite extent frequency domain representation with a discrete independent
variable while also having many of the beautiful properties of the Fourier representations.
The DFT of a finite sequence x[n] can be obtained by generating a periodic version of the sequence
(x̃[n]), obtaining its DTFS coefficients (X̃[k]), and then taking one period of it (X[k]) :
47
3.1 Discrete (-Time) Fourier Series (DFS)
Consider a sequence x̃[n] periodic with N :
How to determine X̃[k] from x̃[n] ? By multiplying both sides with e^{−j(2π/N)rn} and summing from n = 0 to n = N − 1.
Let's introduce a new notation for e^{−j(2π/N)} :
The analysis and synthesis equations for the DFS are :
48
Note that both x̃[n] and X̃[k] are periodic with N .
Ex: DFS of a periodic impulse train x̃[n] = Σ_{r=−∞}^{∞} δ[n − rN ]
49
3.2 Properties of DFS
Linearity :
Shift of Seq :
Duality Property:
Symmetry Properties:
50
Periodic Convolution:
Multiplication Property:
DFS properties
51
3.3 DTFT of Periodic Signals
Consider sequence x̃[n] periodic with N, which we can represent with DFS representation :
Ex: DTFT of the periodic impulse train p̃[n] = Σ_{r=−∞}^{∞} δ[n − rN ]
Time and Frequency Domain Relation between finite-extent x[n] and its periodic version x̃[n]
Summary : Samples of DTFT X(ejω ) of finite extent sequence x[n] give DFS coefficients X̃[k] of
x̃[n].
52
Note : The same result would be obtained even if x[n] is L-point (i.e. x[n] = 0 outside 0 ≤ n ≤ L − 1) and x̃[n] is generated with period N , i.e. x̃[n] = Σ_{r=−∞}^{∞} x[n − rN ], where L > N or L < N . (More details in the next section.)
Ex: Consider x[n] = { 1, 0 ≤ n ≤ 4 ; 0, otherwise }
Exercises:
1) Find DFS coefficients X̃1 [k] for x̃1 [n] = Σ_{r=−∞}^{∞} x[n − 8r]
2) Find DFS coefficients X̃2 [k] for x̃2 [n] = Σ_{r=−∞}^{∞} x[n − 4r]
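These exercises can be checked numerically. A Python/NumPy sketch (note that np.fft.fft(x, N) zero-pads when N ≥ L but truncates when N < L, so for N = 4 the time-domain folding must be done explicitly):

```python
import numpy as np

x = np.ones(5)                                   # x[n] = 1, 0 <= n <= 4
dtft = lambda w: np.sum(x * np.exp(-1j * w * np.arange(5)))

# N = 8 >= L = 5: np.fft.fft zero-pads, giving DTFT samples at w = 2 pi k / 8
X8 = np.fft.fft(x, 8)
print(np.allclose(X8, [dtft(2 * np.pi * k / 8) for k in range(8)]))   # True

# N = 4 < L = 5: fold the tail back first (time aliasing x~[n] = sum_r x[n - 4r]);
# the 4-point DFS of one period still equals the DTFT samples at w = 2 pi k / 4.
x_fold = x[:4].copy()
x_fold[0] += x[4]                                # one period of x~ is [2, 1, 1, 1]
X4 = np.fft.fft(x_fold)
print(np.allclose(X4, [dtft(2 * np.pi * k / 4) for k in range(4)]))   # True
```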
3. The periodic sequence X̃[k] can be seen as DFS coefficients of a periodic sequence x̃[n].
What is x̃[n] equal to in terms of non-periodic x[n]?
53
Figure 3.5: (Figure 8.9 in textbook)
Periodic x̃[n] with N=7. Aliasing in time domain.
4. If x[n] is finite with L samples and N ≥ L, then x[n] can be recovered from x̃[n] as one period of it:
Otherwise (N < L), x[n] cannot be perfectly recovered from x̃[n] (aliasing in some samples).
(Note that this looks like the dual of the sampling theorem: we need to take sufficiently many samples, i.e. N samples s.t. N ≥ L, in the DTFT domain to avoid aliasing in the time domain.)
Proof of 1-3 :
Remember the DTFT and DFS representations for finite and periodic sequences :
54
3.5 The Discrete Fourier Transform (DFT)
In many applications, it is desirable to have finite-extent frequency-domain representations with discrete variables :
finite-extent x[n] (N -point) ←→ finite-extent X[k] (N -point), where k is discrete
The DFT provides such a finite extent frequency domain representation with a discrete independent
variable while also having many of the beautiful properties of the Fourier representations.
The DFT of a finite sequence x[n] can be obtained by generating a periodic version of the sequence
(x̃[n]), obtaining its DTFS coefficients (X̃[k]), and then taking one period of it (X[k]) :
Note that the DFS coefficients can be obtained from DFT coefficients :
55
Remember the DFS equations relating x̃[n] and X̃[k]:
Since summations in DFS relations are over one period (i.e 0 to N − 1) in which x̃[n] = x[n], we
obtain the following DFT analysis and synthesis equations :
Sampling relation between DTFT and DFS carries over to DFT: (remember Sec. 8.4)
Note that since the DFT relates finite-extent x[n] to finite-extent X[k], the DFT relation can be written using matrix notation :
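As a sketch of the matrix form (the test vector is an arbitrary choice), the N × N DFT matrix can be built and compared against np.fft.fft:

```python
import numpy as np

N = 5
n = np.arange(N)
# DFT matrix: W[k, m] = e^{-j 2 pi k m / N}
W = np.exp(-2j * np.pi * np.outer(n, n) / N)

x = np.array([1.0, 2.0, 0.0, -1.0, 3.0])
X = W @ x                                      # analysis: X = W x
print(np.allclose(X, np.fft.fft(x)))           # True

x_rec = (W.conj().T @ X) / N                   # synthesis: x = (1/N) W^H X
print(np.allclose(x_rec, x))                   # True
```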
56
Ex: Consider the L = 5 point sequence x[n] :
x[n] = { 1, 0 ≤ n ≤ 4 ; 0, otherwise }
Find its 5-pt DFT and 10-pt DFT (i.e. N = 5 and N = 10).
Exercises :
1. You are given the same x[n] and its DTFT X(e^{jω} ). Form the 4-sample sequence
X1 [k] = { X(e^{jω} )|_{ω=2πk/4} , k = 0, 1, 2, 3 ; 0 , other k }
57
2. You are given the same x[n]. Its 5-pt DFT is X[k]. Form
x2 [n] = { x[n/2] , n even ; 0 , otherwise }
Note that in the DFT definition, both x[n] and X[k] are finite and zero outside the range [0, ..., N − 1].
⇒ This condition must be preserved in DFT properties, i.e. after manipulation of a signal's DFT.
To derive or understand the following DFT properties, one can use the DFS-DFT relation and the corresponding DFS properties.
⇒ Generate periodic x̃[n], apply the DFS property, then take one period of x̃[n] and of the obtained DFS X̃[k].
Linearity Property:
58
Ex:
Duality Property:
59
Symmetry Properties :
Symmetry properties of DFT can be obtained using the symmetry properties of DFS and taking
one period of signal and DFS :
For real x[n] (i.e. x[n] = x∗ [n]), we have X[k] = X ∗ [((−k))N ] and thus :
Similarly :
60
Note that circular convolution means: apply periodic convolution and take one period of the result.
2. To obtain the DFT relation, take one period of the periodic signal and of its DFS :
Multiplication Property :
61
Verify circular convolution property with N = 2L
Note that
62
3.7 Computing Linear Convolution using DFT
There are efficient algorithms to compute the DFT of a sequence. In other words, there are
algorithms that require less computation (i.e. multiplication, addition) than computing the DFT
directly from its definition summation. One famous such efficient algorithm is the Fast Fourier
Transform (FFT) algorithm.
Due to the efficiency of the FFT algorithm, it may be more efficient (i.e. require less overall
multiplication and additions) to implement convolution of two sequences x3 [n] = x1 [n] ∗ x2 [n]
using the circular convolution property of DFT as follows :
1. Compute the N-point DFTs X1 [k] and X2 [k] of x1 [n] and x2 [n] using the FFT algorithm
2. Multiply the DFTs: X3 [k] = X1 [k] · X2 [k]
3. Compute the N-point inverse DFT of X3 [k] to obtain the circular convolution x3 [n] = x1 [n] ⊛N x2 [n].
Note that the above procedure gives the circular convolution x1 [n] ⊛N x2 [n], but we wanted the (regular or linear) convolution x1 [n] ∗ x2 [n]. The two can be made equal by carefully choosing the DFT size N . (More details below.)
63
3.7.2 Circular Convolution as Linear Convolution with Aliasing
3. X̃[k] can be used as DFS coefficients :
Now consider similar discussion for x3 [n] = x1 [n] ∗ x2 [n], where x1 [n] is L-point, x2 [n] is P -point
and x3 [n] is (L + P − 1)-point sequence :
4. But since X3 [k] = X1 [k] · X2 [k], from circular convolution theorem, we have:
64
Plot Σ_{r=−∞}^{∞} x3 [n − rN ] :
Summary: For L-point x1 [n], P -point x2 [n] and (L + P − 1)-point linear convolution result x3 [n] =
x1 [n] ∗ x2 [n], we have the following results :
If N ≥ (L + P − 1) :
Otherwise N < (L + P − 1) :
Note that circular convolution can be implemented with DFTs (i.e. the FFT algorithm) :
65
Ex: Let x1 [n] = x2 [n] = { 1, 0 ≤ n < 3 ; 0, otherwise }. Find and plot x1 [n] ⊛5 x2 [n].
Exercise : Find and plot x1 [n] ⊛2 x2 [n] and x1 [n] ⊛1 x2 [n]. In MATLAB, find these results again by IDFT_N{DFT_N{x1 [n]} · DFT_N{x2 [n]}}, where N = 5, 2, 1. Use the fft(x,N) command for DFT_N{x[n]}.
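The same experiment can be reproduced in Python/NumPy (here with N = 5 and N = 3 rather than N = 2, 1, so that fft(x, N) does not truncate the length-3 sequences; the N = 3 case shows the time-domain aliasing):

```python
import numpy as np

x = np.array([1.0, 1.0, 1.0])          # x1[n] = x2[n] = 1 for 0 <= n < 3
lin = np.convolve(x, x)                # linear convolution: [1, 2, 3, 2, 1]

# N = 5 >= L + P - 1: circular convolution via the DFT equals the linear result
c5 = np.fft.ifft(np.fft.fft(x, 5) * np.fft.fft(x, 5)).real
print(np.allclose(c5, lin))            # True

# N = 3: the linear result aliased with period 3 -> [1+2, 2+1, 3] = [3, 3, 3]
c3 = np.fft.ifft(np.fft.fft(x, 3) * np.fft.fft(x, 3)).real
print(np.allclose(c3, [3.0, 3.0, 3.0]))  # True
```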
This procedure may require less computation (addition, multiplication) than implementation
from convolution definition since DFT or IDFT operations can be calculated very efficiently
using the FFT algorithm.
Let’s compare with signal x[n] of length T = 900, and h[n] of length P = 100 :
66
In practice, the signal x[n] can be very long (e.g. a speech signal of 20 seconds).
=⇒ Divide the signal x[n] into small pieces/blocks, convolve each block with h[n] (we can use the FFT here for efficient computation), and combine the convolution results. There are two well-known such methods:
1. Overlap-Add Method :
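Since the details are filled in on the board, here is a minimal Python/NumPy sketch of the overlap-add idea (the block length L and the test signals are illustrative choices):

```python
import numpy as np

def overlap_add(x, h, L=256):
    """Block convolution of a long x with a short h (a minimal sketch)."""
    P = len(h)
    N = L + P - 1                              # FFT size per block (>= L + P - 1)
    H = np.fft.fft(h, N)
    y = np.zeros(len(x) + P - 1)
    for start in range(0, len(x), L):
        block = x[start:start + L]             # non-overlapping input block
        m = len(block) + P - 1                 # length of this block's linear conv.
        yk = np.fft.ifft(np.fft.fft(block, N) * H).real
        y[start:start + m] += yk[:m]           # overlapping tails add up
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
h = rng.standard_normal(10)
print(np.allclose(overlap_add(x, h), np.convolve(x, h)))   # True
```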
67
2. Overlap-Save Method :
Divide x[n] into overlapping length-L blocks xk [n], where the first (P − 1) samples overlap with the previous block.
Circularly convolve each block xk [n] with the filter h[n] :
yk [n] = xk [n] ⊛L h[n]
For a visual illustration of the overlap-add and overlap-save methods, you can watch the following videos on YouTube (watch at least the range from 1:30 to 3:00) :
68
Chapter 4
Sampling of Continuous-time (CT) Signals
Contents
4.1 Periodic Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.2 Frequency-domain Representation of Sampling . . . . . . . . . . . . . . . . . . . . . . . 70
4.3 Reconstruction of a Band-Limited Signal from Its Samples . . . . . . . . . . . . . . . . 73
4.4 Discrete-time processing of continuous-time signals . . . . . . . . . . . . . . . . . . . . . 74
4.4.1 Impulse Invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.5 CT Processing of DT Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.6 Changing Sampling Rate Using DT Processing . . . . . . . . . . . . . . . . . . . . . . . 78
4.6.1 Sampling Rate Reduction by an Integer Factor . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.6.2 Increasing Sampling Rate by an Integer Factor . . . . . . . . . . . . . . . . . . . . . . . . . 80
4.6.3 Simple Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.6.4 Changing Sampling Rate by Non-Integer Factor . . . . . . . . . . . . . . . . . . . . . . . . 82
4.6.5 Sampling of band-pass signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.7 Digital Processing of Analog Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.7.1 Prefiltering to Avoid Aliasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.7.2 Analog-to-Digital (A/D) Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.7.3 Analysis of Quantization error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.7.4 Digital-to-Analog (D/A) Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
This chapter covers the sampling of continuous-time signals and related topics. Under some con-
straints given by the sampling theorem, a continuous-time signal can be accurately represented by
the samples taken from it at regular discrete points in time. This property enables continuous-time
signal processing to be implemented through a process of sampling, discrete-time processing, and
then subsequent reconstruction of a continuous-time signal.
We cover Sections 4.1-4.6 and 4.8 from our textbook. Reading assignment for this chapter is :
4.1 Periodic Sampling
Nyquist Sampling Theorem:
Let xc(t) be a bandlimited signal with Xc(jΩ) = 0 for |Ω| ≥ ΩN. Then xc(t) is uniquely determined by its samples x[n] = xc(nT), n = 0, ±1, ±2, ..., if Ωs ≥ 2ΩN, where T and Ωs = 2π/T are the sampling period and sampling frequency, respectively.
Ex: xc(t) = cos(4000πt). Sample with T = 1/6000 sec.
Ex: xc(t) = cos(16000πt). Sample with the same T = 1/6000 sec.
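The two examples give identical sample sequences, since 16000π rad/s lies above half the sampling rate Ωs = 2π/T = 12000π rad/s and aliases onto 4000π; a quick numerical check:

```python
import numpy as np

T = 1/6000                      # sampling period from the examples above
n = np.arange(12)
x1 = np.cos(4000*np.pi*n*T)     # omega = 4000*pi*T = 2*pi/3 rad/sample
x2 = np.cos(16000*np.pi*n*T)    # omega = 8*pi/3 = 2*pi/3 + 2*pi: aliased

assert np.allclose(x1, x2)      # the two CT signals are indistinguishable from samples
```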
4.3 Reconstruction of a Band-Limited Signal from Its Samples
Figure 4.2: (Figure 4.2 in textbook) Sampling with periodic impulse train and conversion to DT signal.
Figure 4.3: (Figure 4.10 in textbook) (a) Ideal band-limited signal reconstruction system. (b) Equivalent representation as ideal D/C converter.
Remember that Hr (jΩ) has gain T and cutoff frequency Ωc such that :
Note that hr(t)|t=0 = 1 and hr(t)|t=nT = 0 for n = ±1, ±2, ... Hence, xr(t)|t=mT = x[m] = xc(mT) for any integer m, i.e. the reconstructed signal xr(t) and the original signal xc(t) have the same values at the sampling instants t = mT, independently of the sampling period T.
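This interpolation property can be checked numerically; a sketch with an assumed bandlimited input (np.sinc(t) is sin(πt)/(πt), so hr(t) = sinc(t/T)):

```python
import numpy as np

T = 1.0
n = np.arange(-20, 21)
x = np.cos(0.3*np.pi*n)                    # samples x[n] = xc(nT)

def xr(t):
    # ideal reconstruction xr(t) = sum_n x[n] hr(t - nT), with hr(t) = sinc(t/T)
    return np.sum(x * np.sinc((t - n*T)/T))

# hr(0) = 1 and hr(mT) = 0 for m != 0, so xr(mT) = x[m] exactly:
for m in range(-5, 6):
    assert np.isclose(xr(m*T), x[m + 20])
```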
Consider also frequency-domain analysis of D/C conversion:
Summary : If xc(t) is band-limited and the Nyquist theorem is satisfied during sampling (i.e. Xc(jΩ) = 0 for |Ω| ≥ π/T), then the overall system is equivalent to a CT LTI system with frequency response
Consider the reverse of our discussion so far. In other words, we are now given a desired CT system H(jΩ) that we need to implement with "DT processing of CT signals". How do we choose T and H(e^{jω}) (or h[n]) in the system ?
Answer is given in two steps, remembering our previous main result:
2. Let H(ejω ) =
Under the condition in 1., the relationship between the desired CT and the DT system can also be
written in time domain :
Pf.
Note that such a system is not used to implement DT systems in practice; however, it can be a useful model to interpret certain DT systems.
CT filter output :
Overall DT system interpretation : Generate xc(t) with the D/C converter from the samples x[n], shift xc(t) by T·∆, and sample the shifted signal with the C/D converter.
4.6 Changing Sampling Rate Using DT Processing
Let’s discuss sampling rate reduction and increase by integer factors first, then by non-integer
factors next.
Pfs:
Note : In order to avoid aliasing in xd[n] or Xd(e^{jω}), we must have X(e^{jω}) = 0 for π/M < |ω| < π, or equivalently, x[n] must have been obtained by sampling xc(t) at least M times higher than the Nyquist rate.
If the desired M is too large, s.t. X(e^{jω}) ≠ 0 for π/M < |ω| < π, or equivalently x[n] was not obtained at M times higher than the Nyquist rate, then an ideal LPF with cutoff ωc = π/M can be used to discard part of the spectrum before downsampling so that aliasing is avoided:
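A sketch of this filter-then-downsample structure in SciPy (M, the filter length and the test tones are assumed; filtfilt is used so no delay-compensation bookkeeping is needed):

```python
import numpy as np
from scipy.signal import firwin, filtfilt

M = 2
n = np.arange(4000)
x_lo = np.sin(0.3*np.pi*n)           # below pi/M: survives decimation
x = x_lo + np.sin(0.8*np.pi*n)       # above pi/M: would alias to 0.4*pi

h = firwin(101, 1.0/M)               # LPF, cutoff pi/M (firwin cutoff is a fraction of pi)
y = filtfilt(h, [1.0], x)[::M]       # discard high band, then keep every M-th sample

mid = slice(100, -100)               # ignore filter edge effects
assert np.allclose(y[mid], x_lo[::M][mid], atol=0.02)
```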
Relationship in frequency domain :
the linear interpolator with hi [n] =
Figure 4.6: (Figure 4.28 in textbook) System for changing sampling rate by non-integer factor
If M > L ⇒ sampling period increases (i.e. sampling rate or frequency decreases)
Note : Do not place the decimator first and the interpolator after it; this can cause aliasing.
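SciPy's resample_poly implements exactly this interpolate-by-L-then-decimate-by-M cascade (with a single combined filter); a sketch with an assumed test tone:

```python
import numpy as np
from scipy.signal import resample_poly

n = np.arange(3000)
x = np.sin(0.25*np.pi*n)              # tone at omega = 0.25*pi

L, M = 3, 2
y = resample_poly(x, up=L, down=M)    # interpolator first, then decimator

# the tone moves to omega' = omega*M/L = pi/6 in the new sampling grid
m = np.arange(len(y))
mid = slice(200, -200)                # ignore filter edge effects
assert np.allclose(y[mid], np.sin(np.pi/6*m)[mid], atol=0.05)
```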
Ex: (A previous MT question)
a) Assume L = 3, M = 2 and T1 = 1/4000 sec. Find T2 and H1(e^{jω}) such that xr(t) = xc(t).
b) Assume T1 = T2 = 1/4000 sec. Find L, M and H1(e^{jω}) s.t. Xr(jΩ) is as follows :
c) Assume L = M = 3 and H1(e^{jω}) is fixed. Let p[n] = x[n] ∗ h2[n]. If p[n] = y[n], find H2(e^{jω}) in terms of H1(e^{jω}).
(Approach to solve such problems : Remember frequency domain relations of building blocks used
in sampling (e.g. C/D converter). Write after each building block in the given system the relation
between its input and output in frequency domain, and also plot the output’s spectrum in terms of
the input’s spectrum.)
Solution :
Solution to a) :
Solution to b) :
4.6.5 Sampling of band-pass signals
It may be possible to sample this band-pass signal below Nyquist rate of 2(Ωc + Ωb ) and still recover
it from the samples :
If the replicas of Xc(jΩ) in Xs(jΩ) fit into the empty slots, it is possible to avoid aliasing and recover Xc(jΩ) back from Xs(jΩ) by band-pass filtering :
Ex:
In general, for band-pass signals, the minimum sampling rate Ωs,min can be found in two steps :
1. Find the integer r such that r ≤ (Ωc + Ωb)/(2Ωb) < r + 1
2. Ωs,min = 2(Ωc + Ωb)/r
1.
2.
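The two steps above can be wrapped into a small helper (the band is taken to occupy Ωc − Ωb ≤ |Ω| ≤ Ωc + Ωb; the numeric example is assumed):

```python
import numpy as np

def min_bandpass_rate(omega_c, omega_b):
    # step 1: integer r with r <= (Wc + Wb)/(2 Wb) < r + 1
    r = int((omega_c + omega_b) // (2*omega_b))
    # step 2: minimum sampling rate
    return 2*(omega_c + omega_b)/r

# band edges 3 and 4 rad/s (omega_c = 3.5, omega_b = 0.5): r = 4, so
# Omega_s,min = 2*4/4 = 2 rad/s, four times below the Nyquist rate 2*(Wc + Wb) = 8
assert np.isclose(min_bandpass_rate(3.5, 0.5), 2.0)
```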
4.7 Digital Processing of Analog Signals
Up to now, we used ideal building blocks, such as ideal C/D, D/C converters and ideal low-pass
filters. This allowed us to focus on the essential mathematical relationships between a CT signal,
its samples and the reconstruction from the samples. In practice, these building blocks are not
ideal. For example, CT signals are not exactly band-limited, ideal filters can not be implemented
and ideal C/D and D/C converters are approximated by devices called analog-to-digital (A/D) and
digital-to-analog (D/A) converters. Thus a more realistic model for sampling is as follows.
Figure 4.7: (Figure 4.41 in textbook) (a) Discrete-time filtering of continuous-time signals (b) Digital processing of
analog signals
If the input CT signal xc (t) is not band-limited or the required Nyquist frequency is too high for
your digital system, aliasing will occur in sampling. To avoid aliasing, a low-pass filter (called
anti-aliasing filter) can be used to reduce the bandwidth of the input to half of the desired
sampling frequency Ωs :
Since
Hence,
If a sharp cut-off anti-aliasing filter cannot be used (because it is difficult to implement an ideal LPF in CT), then the following system can be used :
Figure 4.8: (Figure 4.43 in textbook) Using oversampled A/D conversion to simplify CT anti-aliasing filter.
Note :
Ex: (Signal is not band-limited due to noise and a simple CT anti-aliasing filter is available)
4.7.2 Analog-to-Digital (A/D) Conversion
An ideal C/D converter converts a CT signal xa (t) to a DT signal x[n], where each sample of the
DT signal is known with infinite precision. In practice, the DT signal samples have finite precision,
i.e. are quantized, and such conversion is performed using an A/D converter circuit.
– A device or circuit that converts a continuous voltage at its input to a binary code representing a quantized value of the input.
– Requires a constant input voltage for some time (T) to operate (i.e. needs a sample-and-hold device in front of it).
Quantizer:
A non-linear system that transforms an input sample x[n] into one of a finite set of prescribed values x̂[n], which can be represented with the Q(·) operator as
x̂[n] = Q(x[n]).
x̂[n] is called the quantized value. The following figure shows a typical quantizer, where the input values x[n] are rounded to the nearest quantization level.
Figure 4.12: (Figure 4.48 in textbook) Typical quantizer for A/D conversion.
Figure 4.13: (Figure 4.49 in textbook) Sampling, quantization and coding (and D/A conversion discussed later)
4.7.3 Analysis of Quantization error
Quantization introduces error (called quantization error) in the sample value of x[n] :
In our typical quantizer the error is constrained :
Figure 4.14: (Figure 4.51 in textbook) a) Unquantized samples of the signal x[n] b) Quantized samples x̂[n] with
3-bit quantizer. c) Quantization error e[n] for 3-bit quantizer. d) Quantization error e[n] for 8 bit quantizer.
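The figure's behavior (error bounded by Δ/2, roughly 6 dB of SNR per extra bit) can be reproduced with a small NumPy sketch (test sine assumed; saturation is ignored, i.e. the signal stays inside the quantizer range):

```python
import numpy as np

def quantize(x, B, xmax=1.0):
    # uniform rounding quantizer with step delta = 2*xmax / 2^B
    delta = 2*xmax / 2**B
    return delta*np.round(x/delta)

n = np.arange(50000)
x = 0.9*np.sin(2*np.pi*0.01237*n)

def snr_db(B):
    e = x - quantize(x, B)
    return 10*np.log10(np.mean(x**2)/np.mean(e**2))

assert np.max(np.abs(x - quantize(x, 3))) <= (2/2**3)/2 + 1e-12  # |e[n]| <= delta/2
gain = snr_db(8) - snr_db(3)         # 5 extra bits: roughly 5 x 6 = 30 dB improvement
assert 25 < gain < 36
```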
4.7.4 Digital-to-Analog (D/A) Conversion
Chapter 5
Contents
5.1 Frequency Response of LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.2 Systems Characterized by LCCDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.3 Frequency Response for Rational System Functions . . . . . . . . . . . . . . . . . . . . 99
5.4 Relationship Between Magnitude And Phase . . . . . . . . . . . . . . . . . . . . . . . . . 108
5.5 All-pass Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
5.6 Minimum-phase Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5.6.1 Minimum-phase and All-pass Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5.6.2 Frequency Response Compensation of Non-minimum-Phase Systems . . . . . . . . . . . . . 114
5.6.3 Properties of Minimum-phase Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
5.7 Linear Systems with Generalized Linear Phase . . . . . . . . . . . . . . . . . . . . . . . 118
5.7.1 Linear Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
5.7.2 Generalized Linear Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
5.7.3 Causal Generalized Linear Phase Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
This chapter presents the representation and analysis of LTI systems using the Fourier and Z transforms in more detail. Many properties of LTI systems can be obtained from the frequency response (Fourier transform of the impulse response) and the system function (Z transform of the impulse response). We cover Sections 5.1-5.7 from our textbook and the reading assignment for this chapter is :
5.1 Frequency Response of LTI Systems
Ex: Ideal delay LTI system: hid [n] = δ[n − nd ] ⇐⇒ Hid (ejω ) = e−jωnd
Hence the only distortion in the ideal delay system is time delay, which is acceptable in many
applications.
Figure 5.1: (a) Continuous phase (b) Principal value
of phase (c) Integer multiples of 2π added
In the narrowband (ω0 − ∆ω < |ω| < ω0 + ∆ω), H(ejω ) can be approximated as follows ( assume
h[n] is real) :
Then, output to ”narrowband” signal x0 [n] is :
Ex: Consider LTI system with H(ejω ) and input with X(ejω ) shown below. Output y[n] can be
approximately calculated and plotted as below using group-delay and magnitude of H(ejω ).
Figure 5.2: (a) Continuous phase (b) Principal value of phase (c) Integer multiples of 2π added
Time dispersion can occur if group-delay differs significantly for each ’pocket’ of the input signal.
5.2 Systems Characterized by LCCDE
LTI systems can be represented with LCCDEs, which yield a rational system function H(z) :
∑_{k=0}^{N} ak y[n − k] = ∑_{k=0}^{M} bk x[n − k]
⇒ ∑_{k=0}^{N} ak Y(z) z^{−k} = ∑_{k=0}^{M} bk X(z) z^{−k}
Notes :
(1 − ck z^{−1}) terms ⇒
(1 − dk z^{−1}) terms ⇒
System is stable and causal ⇒ ROC of H(z) includes the unit circle, and all poles of H(z) are inside the unit circle.
Inverse Systems
An LTI system with impulse response h[n] and its inverse LTI system with impulse response hi [n]
must satisfy the following relationship :
Note that not all LTI systems have an inverse. (E.g. Ideal low-pass filter does not.)
Systems with rational H(z) have inverses:
Ex: H(z) = (1 − 0.5z^{−1}) / (1 − 0.9z^{−1}), ROC: |z| > 0.9
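For this example both the system and its inverse are stable and causal (pole and zero both inside the unit circle); cascading them numerically returns an impulse:

```python
import numpy as np
from scipy.signal import lfilter

imp = np.zeros(200); imp[0] = 1.0
h = lfilter([1, -0.5], [1, -0.9], imp)   # H(z) = (1 - 0.5 z^-1)/(1 - 0.9 z^-1)
g = lfilter([1, -0.9], [1, -0.5], h)     # inverse Hi(z) = (1 - 0.9 z^-1)/(1 - 0.5 z^-1)

# h[n] * hi[n] = delta[n]
assert np.isclose(g[0], 1.0) and np.allclose(g[1:], 0.0, atol=1e-12)
```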
Note :
An LTI system and its inverse are both stable and causal ⇔
Such systems are called minimum-phase systems (more details in the next sections.)
Impulse Response for Rational System Functions
Consider a rational system function H(z) as multiplication and division of first order terms and its
partial fraction expansion.
If the system is causal, then the ROC of H(z) is outside of outermost pole ⇒
Two classes of system functions can be identified from a given system function :
5.3 Frequency Response for Rational System Functions
Consider an LTI system with the following rational system function H(z) and the corresponding
frequency response and related expressions.
H(z) = (b0/a0) · ∏_{k=1}^{M} (1 − ck z^{−1}) / ∏_{k=1}^{N} (1 − dk z^{−1})
=⇒ If the ROC includes the unit circle : H(e^{jω}) = (b0/a0) · ∏_{k=1}^{M} (1 − ck e^{−jω}) / ∏_{k=1}^{N} (1 − dk e^{−jω})
⇒ |H(e^{jω})| = |b0/a0| · ∏_{k=1}^{M} |1 − ck e^{−jω}| / ∏_{k=1}^{N} |1 − dk e^{−jω}|
⇒ |H(e^{jω})|² =
⇒ G(dB) = 20 log10 |H(e^{jω})| = 20 log10 |b0/a0| + ∑_{k=1}^{M} 20 log10 |1 − ck e^{−jω}| − ∑_{k=1}^{N} 20 log10 |1 − dk e^{−jω}|
⇒ ∠H(e^{jω}) = ∠(b0/a0) + ∑_{k=1}^{M} ∠(1 − ck e^{−jω}) − ∑_{k=1}^{N} ∠(1 − dk e^{−jω})
⇒ grd[H(e^{jω})] = −(d/dω) arg[H(e^{jω})] = 0 − ∑_{k=1}^{M} (d/dω) arg[(1 − ck e^{−jω})] + ∑_{k=1}^{N} (d/dω) arg[(1 − dk e^{−jω})]
Vector Diagrams in z-plane :
A simple geometric construction with vectors is often very useful in approximate sketching of fre-
quency response functions directly from the pole-zero plot as follows.
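The key identity behind the vector diagrams is |1 − c e^{−jω}| = |e^{jω} − c|, the distance from the point e^{jω} on the unit circle to the zero (or pole) c. A numerical sketch (pole/zero locations assumed) comparing the polynomial form with the product of distances:

```python
import numpy as np

zeros = np.array([0.9*np.exp(1j*np.pi/3), 0.9*np.exp(-1j*np.pi/3)])     # assumed
poles = np.array([0.8*np.exp(1j*2*np.pi/3), 0.8*np.exp(-1j*2*np.pi/3)])
gain = 2.0                                                              # b0/a0

w = np.linspace(0, np.pi, 256)
ejw = np.exp(1j*w)

# polynomial form of H(e^jw)
H = gain * np.prod(1 - zeros[:, None]*np.exp(-1j*w), axis=0) \
         / np.prod(1 - poles[:, None]*np.exp(-1j*w), axis=0)

# geometric form: product of distances to zeros over distances to poles
mag = abs(gain) * np.prod(np.abs(ejw - zeros[:, None]), axis=0) \
                / np.prod(np.abs(ejw - poles[:, None]), axis=0)

assert np.allclose(np.abs(H), mag)
```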
Figure 5.3: Frequency response for a single zero at r = 0.9 and three values of θ. (a) Log magnitude (b) Phase (c) Group-delay
Figure 5.4: Frequency response for a single zero with θ = π and four values of r = 1, 0.9, 0.7, 0.5. (a) Log magnitude (b) Phase (c) Group-delay
Note that there will be a phase jump of π if there is a zero or pole on the unit circle (i.e. r = 1).
1. The magnitude has a minimum/maximum for a zero/pole at (around for higher orders) ω = θ.
5. These four effects become stronger as |r| → 1, i.e., as the zero/pole approaches the unit circle.
In general, when there are multiple poles and zeros, the above observations on single zero/pole
can be used to make a rough sketch of frequency response magnitude, phase and group delay.
Note, however, that with multiple poles and zeros, the absolute rate of change of the phase is not
necessarily maximum at the exact pole/zero angle, i.e., ω = θ.
A great animation illustrating approximate frequency response sketches can be found here (https://engineering.purdue.edu/VISE/ee438/demos/flash/polezero.html).
Some example pole-zero plots and corresponding frequency response plots are given below for your
investigation. (From Tolga Ciloglu’s notes.)
5.4 Relationship Between Magnitude And Phase
In general, knowledge of the magnitude response |H(e^{jω})| or the phase response ∠H(e^{jω}) gives no information about the other. However, for LTI systems described by LCCDEs (i.e. rational system functions), there are some constraints between |H(e^{jω})| and ∠H(e^{jω}).
– If |H(e^{jω})| (and the number of poles and zeros) is known ⇒ finite number of choices for ∠H(e^{jω}).
– If ∠H(e^{jω}) (and the number of poles and zeros) is known ⇒ finite number of choices for |H(e^{jω})|.
– If H(e^{jω}) is a minimum-phase system, the constraints imply a unique choice :
  – If |H(e^{jω})| (and the number of poles and zeros) is known ⇒ ∠H(e^{jω}) can be found uniquely.
  – If ∠H(e^{jω}) (and the number of poles and zeros) is known ⇒ |H(e^{jω})| can be found uniquely (within a scaling constant).
2. The poles (zeros) of H(z) are conjugate reciprocals of the poles (zeros) of H*(1/z*) :
Let us define C(z) and notice that it gives the squared magnitude response |H(e^{jω})|² when evaluated on the unit circle :
C(z) = H(z) H*(1/z*)
C(z)|z=e^{jω} = H(e^{jω}) H*(e^{jω}) = |H(e^{jω})|²
Poles and zeros of C(z) occur in conjugate reciprocal pairs, one from H(z) and the other from H*(1/z*).
– One is inside the unit circle, the other outside (or both on the unit circle at the same location).
– Without further information on the system, we don't know which one is inside or outside.
– For H(z), having a pole (or zero) at dk or at its conjugate reciprocal 1/dk* results in the same C(z) and therefore the same |H(e^{jω})|² or |H(e^{jω})|.
If H(z) is minimum-phase :
– We can identify both poles and zeros of H(z) uniquely (i.e. uniquely identify H(z)) : From
each conjugate reciprocal pair of poles (zeros) of C(z), choose the pole (zero) inside the
unit circle.
Hence, a pole at dk and a pole at 1/dk* give the same magnitude response |H(e^{jω})|, but their phase responses ∠H1(e^{jω}) and ∠H2(e^{jω}) are different.
Ex: Given pole-zero plot of C(z) below, we want to determine poles and zeros of H(z).
With this much information, we can choose for H(z) one from each conjugate reciprocal pair
of poles and zeros from C(z) =⇒
If H(z) is minimum-phase =⇒
Note that if the number of poles/zeros were not restricted, the number of choices for H(z) would be unlimited without any extra information on the system :
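A tiny numerical sketch of this "pick the inside zero from each pair" idea (example FIR assumed): from the coefficients of C(z) alone, the minimum-phase factor is recovered by keeping the zeros inside the unit circle.

```python
import numpy as np

h = np.array([1.0, -0.5])            # H(z) = 1 - 0.5 z^-1 (minimum-phase, real)
c = np.convolve(h, h[::-1])          # coefficients of C(z) = H(z) H*(1/z*) (up to a shift)

zc = np.roots(c)                     # zeros of C(z): the conjugate reciprocal pair {0.5, 2}
assert np.allclose(np.sort(np.abs(zc)), [0.5, 2.0])

h_min = np.poly(zc[np.abs(zc) < 1])  # keep the inside zero; monic result (gain aside)
assert np.allclose(h_min, h)
```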
5.5 All-pass Systems
An all-pass system is a system with constant magnitude response (i.e. |H(e^{jω})| = c).
Remembering that a pole (zero) at a and a pole (zero) at its conjugate reciprocal 1/a* give the same magnitude response, check the system H(z) = A*(1/z*) / A(z), where A(z) = 1 − a z^{−1}, |a| < 1.
The canonical (stable & causal, i.e. pole inside the unit circle, i.e. |a| < 1) form of an all-pass system Hap(z) is then given as follows :
Hap(z) = (z^{−1} − a*) / (1 − a z^{−1})
Let us show that |Hap(e^{jω})| = 1 :
A more general form of all-pass systems is the product of first-order all-pass terms as follows :
Hap(z) = ∏_{k=1}^{M} (z^{−1} − ak*) / (1 − ak z^{−1})
A more general form of all-pass systems with real impulse response hap [n] is as follows :
Note, for the above all-pass system to be stable and causal, we need |ek | < 1 and |dk | < 1 for all k.
In summary, in an all-pass system, for every non-zero pole a, a zero at its conjugate reciprocal 1/a* exists.
For the canonical first-order all-pass Hap(z) = (z^{−1} − a*) / (1 − a z^{−1}), where a = r e^{jθ}, the magnitude, phase and group-delay are as follows :
|Hap(e^{jω})| = 1
∠Hap(e^{jω}) =
grd[Hap(e^{jω})] = −(d/dω) ∠Hap(e^{jω}) = 1 + 2 · [1 / (1 + ((r sin(ω−θ)) / (1 − r cos(ω−θ)))²)] · (d/dω)[(r sin(ω−θ)) / (1 − r cos(ω−θ))] = ... = (1 − r²) / |1 − r e^{jθ} e^{−jω}|²
For higher-order all-pass systems, the phase and group-delay are sum of such terms.
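These closed-form expressions are easy to verify numerically; a sketch with an assumed pole a = r e^{jθ}, r = 0.9, θ = 0:

```python
import numpy as np
from scipy.signal import freqz

r, theta = 0.9, 0.0
a = r*np.exp(1j*theta)
w, H = freqz([-np.conj(a), 1.0], [1.0, -a], worN=8192, whole=True)

assert np.allclose(np.abs(H), 1.0)                       # unit magnitude everywhere

grd = (1 - r**2)/np.abs(1 - r*np.exp(1j*theta)*np.exp(-1j*w))**2
assert np.all(grd > 0)                                   # positive group-delay

grd_num = -np.gradient(np.unwrap(np.angle(H)), w)        # -d/dw of continuous phase
assert np.allclose(grd_num[5:-5], grd[5:-5], atol=0.05)  # matches the closed form
```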
Important properties of the phase and group-delay of stable and causal all-pass systems :
1. Stable and causal all-pass systems have positive group-delay, i.e. grd[Hap (ejω )] > 0
Higher-order stable and causal all-pass systems' group-delays are sums of such terms.
2. Stable and causal all-pass systems with real and positive H(e^{jω})|ω=0 (satisfied if hap[n] is real) have negative continuous phase (i.e. arg[H(e^{jω})] ≤ 0) that starts at 0 for ω = 0 and decreases monotonically for increasing ω.
Hap(e^{jω})|ω=0 =
Another useful property for any all-pass system is that the phase response has a total change of "order × 2π" over any frequency range [ω0, ω0 + 2π].
For the above plots (magnitude, principal value of phase, group-delay) of 3 stable and causal
all-pass systems, observe the properties on group-delay (grd[H(ejω )]) and continuous phase
(arg[H(ejω )]). Can you guess the orders of the systems and the locations of the poles ?
5.6 Minimum-phase Systems
A minimum-phase system is a system that is stable and causal and has an inverse system that is also stable and causal. Hence, a minimum-phase system Hmin(z) must have all of its poles and zeros inside the unit circle.
Remember that given C(z) = H(z) H*(1/z*) or |H(e^{jω})|² = C(z)|z=e^{jω}, one can find the system H(z) uniquely if the system H(z) is minimum-phase. We just need to choose, from each conjugate reciprocal pair of poles of C(z), the one inside the unit circle, and similarly from each conjugate reciprocal pair of zeros of C(z), the one inside the unit circle.
5.6.1 Minimum-phase and All-pass Decomposition
Any rational system function H(z) can be decomposed into the product of a minimum-phase system and an all-pass system :
Let's justify by example. Suppose H(z) has many poles & zeros inside the unit circle and one zero outside at z = 1/c* (i.e. |c| < 1) :
The general procedure for obtaining the Minimum-phase and All-pass decomposition of a given
system function H(z) is as follows :
1. Assign all poles & zeros of the given H(z) that are inside the unit circle to Hmin(z), and the ones outside the unit circle to Hap(z).
2. For all poles&zeros chosen to Hap (z) in step-1, add conjugate reciprocal zeros&poles to Hap (z)
to obtain an all-pass system. (They will be added inside unit circle).
3. Cancel the conjugate reciprocal zeros&poles added in step-2 by adding poles&zeros at same
location to Hmin (z). (They will be added inside unit circle).
Ex:
Note that, if the given H(z) is stable and causal, then decomposition will also yield a
stable and causal all-pass system Hap (z).
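A numerical sketch of the procedure with an assumed toy system H(z) = 1 − 2z^{−1} (one zero at z = 2, outside the unit circle), which gives Hmin(z) = −2(1 − 0.5z^{−1}) and Hap(z) = (z^{−1} − 0.5)/(1 − 0.5z^{−1}):

```python
import numpy as np
from scipy.signal import freqz

w = np.linspace(0, np.pi, 512)
_, H    = freqz([1.0, -2.0], [1.0], worN=w)          # H(z) = 1 - 2 z^-1
_, Hmin = freqz([-2.0, 1.0], [1.0], worN=w)          # Hmin(z) = -2 + z^-1
_, Hap  = freqz([-0.5, 1.0], [1.0, -0.5], worN=w)    # first-order all-pass

assert np.allclose(H, Hmin*Hap)                      # H = Hmin . Hap
assert np.allclose(np.abs(Hap), 1.0)                 # all-pass factor: unit magnitude
assert np.allclose(np.abs(H), np.abs(Hmin))          # same magnitude response
```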
5.6.2 Frequency Response Compensation of Non-minimum-phase Systems
Assume a signal has been distorted and we would like to undo/compensate for the distortion. In many cases it is desired that the compensating system Hc(z) is stable and causal.
Figure 5.7: Distortion compensation
If the distorting system Hd(z) is minimum phase, we can choose the compensating system as Hc(z) =
Then the overall system becomes an all-pass system : G(z) = Hd(z)Hc(z) =
There are 3 important properties of minimum-phase systems relative to all other stable and
causal systems that have the same frequency response magnitude |H(ejω )|.
1. Minimum phase-lag property : The minimum-phase system is the system with the smallest
phase-lag.
2. Minimum group-delay property : The minimum-phase system is the system with the
smallest group-delay.
3. Minimum energy-delay property : The minimum-phase system is the system with the
smallest energy-delay.
To discuss the meaning and derivation of these properties, first remember the discussion in Section 5.4. Given a magnitude response |H(e^{jω})| (or equivalently C(z) = H(z) H*(1/z*)), there are a finite number of possible systems H(z) that have this given magnitude response. (These different H(z) could be obtained by choosing a pole (zero) from each conjugate reciprocal pair of poles (zeros) of C(z).) Amongst these finite number of possible systems H(z), one of them is a minimum-phase system, some are stable & causal, and others are not stable & causal. All of these systems can be
decomposed with minimum-phase and all-pass decomposition :
The first system H0 (z) is the minimum-phase system with the given magnitude response H(ejω ).
Assume the next N systems (Hk (ejω ), k = 1, ..., N ) are stable&causal and the following M systems
(Hk (ejω ), k = N + 1, ..., N + M ) are not stable&causal. Amongst all the stable&causal systems
(Hk (ejω ), k = 0, ..., N ), the minimum-phase system H0 (ejω ) is the system with the
smallest phase-lag
smallest group-delay
smallest energy-delay.
1. Minimum phase-lag property
The phase-lag is defined as the negative of continuous phase response : Phase-lag = −arg[H(ejω )].
Writing the phase-lag for the decomposition of all the stable and causal systems, we can con-
clude that the minimum-phase systems H0 (z) will have the smallest phase-lag since the phase
lag of stable&causal all-pass systems is always positive (as discussed in Section 5.5).
2. Minimum group-delay property
Similarly, writing the group-delay for the decomposition of all the stable and causal systems, the minimum-phase system H0(z) has the smallest group-delay, since the group-delay of stable & causal all-pass systems is always positive (as discussed in Section 5.5).
3. Minimum energy-delay property
First note that all of these systems (having the same magnitude response) have the same total energy, by Parseval's theorem :
∑_{n=0}^{∞} |h[n]|² = (1/2π) ∫_{−π}^{π} |H(e^{jω})|² dω = (1/2π) ∫_{−π}^{π} |Hmin(e^{jω})|² dω = ∑_{n=0}^{∞} |hmin[n]|²
Define the partial energy of a system with impulse response h[n] as E[k] = ∑_{m=0}^{k} |h[m]|².
Amongst all the stable and causal systems, the minimum-phase system H0(z) has the smallest energy-delay, i.e. the largest partial energy :
∑_{m=0}^{k} |h[m]|² ≤ ∑_{m=0}^{k} |hmin[m]|²  for all k = 0, 1, 2, ...
The proof of this property is more tedious than those of the first two properties and we will
skip it here. But the proof can be obtained from Problems 5.71 and 5.72 in the textbook.
5.7 Linear Systems with Generalized Linear Phase
∠H(ejω ) = −αω
⇒ grd[H(ejω )] = α
Hence, a linear-phase LTI system has frequency response in the following form :
Linear phase systems delay all frequencies by the same amount and thus preserve time-domain
synchronization of different frequency components of an input signal (i.e. there will be no time-
dispersion like in the example of Sec. 5.1) :
Linear phase systems can be interpreted as a cascade of a magnitude filter and time-shift :
(Recall Sec. 4.5 CT Processing of DT signals for how to interpret time-delay by non-integer α.)
One of the most important properties of linear-phase systems is that the impulse response hlp [n] (if
real) has even symmetry (around n = α) if α or 2α is an integer :
Proof :
Figure 5.11: (Figure 5.32 in textbook) hlp[n] = sin(ωc(n − α)) / (π(n − α)) ←→ (a) α = 5 (b) α = 4.5 (c) α = 4.3
Figure 5.11 : hlp[n] = sin(ωc(n − α)) / (π(n − α)) ←→ Hlp(e^{jω}) = e^{−jωα} for |ω| < ωc, and 0 for ωc < |ω| < π
Many properties of Linear Phase (LP) systems also apply to a larger class of systems called Gener-
alized Linear Phase (GLP) system, that have the following more general frequency response form:
If we ignore the discontinuities in the phase resulting from the addition of ±π due to the sign change
of A(ejω ), the continuous phase and group-delay of GLP systems are as follows :
Consider the two following expansions of the frequency response of a GLP system (note that the
second equation applies when h[n] is real) :
H(ejω ) = A(ejω )e−j(αω−β) =
H(ejω ) =
If we cross multiply the above equations and use sin(a − b) = sin(a) cos(b) − cos(a) sin(b), we get
∑_{n=−∞}^{∞} h[n] sin(ω(n − α) + β) = 0
This equation is a necessary condition on h[n], α, β for the system to be a generalized linear phase (GLP) system. (It is not a sufficient condition, i.e. a system with h[n], α, β satisfying the above condition is not guaranteed to be GLP; however, every GLP system must satisfy the above condition.) It does not tell us how to find GLP systems.
Two sets of conditions for real impulse responses that do guarantee generalized linear phase
systems are :
Pf.
5.7.3 Causal Generalized Linear Phase Systems
2. Anti-symmetric Case: h[n] = −h[2α − n] for 0 ≤ n ≤ M = 2α, and h[n] = 0 otherwise −→ H(e^{jω}) =
Depending on whether M is even or odd (i.e. α is integer or half-integer; note that length of h[n] is
M +1), the above two cases can be split into two special cases, yielding overall 4 FIR GLP systems:
1. Type-I FIR GLP system (h[n] symmetric, M even): h[n] = h[M − n], 0≤n≤M
2. Type-II FIR GLP system (h[n] symmetric, M odd ): h[n] = h[M − n], 0≤n≤M
3. Type-III FIR GLP system (h[n] anti-sym., M even): h[n] = −h[M − n], 0≤n≤M
4. Type-IV FIR GLP system (h[n] anti-sym., M odd ): h[n] = −h[M − n], 0≤n≤M
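The four symmetry types can be checked numerically: for a symmetric h[n], e^{jωM/2} H(e^{jω}) is purely real, and for an anti-symmetric h[n] it is purely imaginary. A sketch with assumed Type-I and Type-III impulse responses:

```python
import numpy as np

def dtft(h, w):
    return np.exp(-1j*np.outer(w, np.arange(len(h)))) @ h

w = np.linspace(0.01, np.pi - 0.01, 100)

h1, M1 = np.array([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0]), 6   # Type-I: symmetric, M even
assert np.allclose((np.exp(1j*w*M1/2)*dtft(h1, w)).imag, 0.0, atol=1e-12)

h3, M3 = np.array([1.0, 2.0, 0.0, -2.0, -1.0]), 4           # Type-III: anti-sym., M even
assert np.allclose((np.exp(1j*w*M3/2)*dtft(h3, w)).real, 0.0, atol=1e-12)
```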
Ex: For the given example FIR impulse responses h[n] above for each type, verify the type, find
and plot magnitude, phase responses and group-delay.
Type-I&II : ( i.e. h[n] = h[M − n] )
H(z) =
=⇒ H(z) = H(z^{−1}) z^{−M}
Type-III&IV : ( i.e. h[n] = −h[M − n] )
H(z) =
=⇒ H(z) = −H(z −1 )z −M
The above obtained results can also be summarized as follows (from Tolga Ciloglu’s notes):
Finally, from the properties of the zeros (i.e. zeros occur in groups of 4: complex conjugate and conjugate reciprocal locations), any FIR GLP system with real impulse response can be decomposed into a product of a minimum-phase term, a maximum-phase term (all of its zeros outside the unit circle) and a term with all of its zeros on the unit circle :
H(z) = Hmin (z)Huc (z)Hmax (z).
Chapter 6
Contents
6.1 Filter Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.2 Design of DT IIR Filters from CT Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.2.1 Filter Design by Impulse Invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.2.2 Filter Design by Bilinear Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
6.3 Design of DT FIR filters by Windowing Method . . . . . . . . . . . . . . . . . . . . . . 130
6.3.1 Commonly used Windows and Their Properties . . . . . . . . . . . . . . . . . . . . . . . . . 131
6.3.2 Incorporation of Generalized Linear Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
6.3.3 Kaiser Window Filter Design Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Frequency selective filters are an important class of LTI systems. This chapter discusses DT IIR
and FIR filter design techniques. We cover Sections 7.1-7.3 and 7.5 from our textbook and reading
assignment for this chapter is :
6.1 Filter Specifications
The goal in discrete-time filter design is to determine parameters of a system function or dif-
ference equation that approximates a given/desired frequency response within specified
tolerances, for example given by δp1 , δp2 , δs , ωp , ωs . (Typically there is no constraint on the phase
of the frequency response.)
DT IIR filter design techniques are based on mapping well-known CT IIR filter designs to DT
IIR filter design via mappings between the CT frequency and the DT frequency axis.
Most prevalent DT FIR design techniques are the windowing method and the Parks-McClellan
algorithm.
CT IIR filter design is highly advanced, with straightforward closed-form design formulas. Hence, DT IIR design methods are based on transforming a CT IIR filter to a DT IIR filter meeting the desired specifications. Two transformation methods are discussed :
Impulse Invariance
Bilinear Transformation.
6.2.1 Filter Design by Impulse Invariance
Filter Design by Impulse Invariance is based on sampling a CT impulse response (i.e. filter) hc (t) :
h[n] = Td hc (nTd )
=⇒ H(ejω ) =
If the CT filter is bandlimited so that Hc(jΩ) = 0 for |Ω| ≥ π/Td, then
H(e^{jω}) = Hc(jω/Td), |ω| < π
This equation indicates a linear relation (i.e mapping) between the CT and DT frequencies :
In practice, no CT filter is exactly bandlimited. Hence, some aliasing occurs, which can be negligibly small if the CT frequency response approaches zero quickly.
=⇒ H(z) = ∑_{k=1}^{N} (Td Ak) / (1 − e^{sk Td} z^{−1}),  |z| > max_k |e^{sk Td}|
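A numerical sketch with an assumed one-pole prototype Hc(s) = 1/(s + 1) (i.e. hc(t) = e^{−t}u(t), s1 = −1, A1 = 1), checking that the mapped H(z) indeed has impulse response h[n] = Td · hc(nTd):

```python
import numpy as np
from scipy.signal import lfilter

Td, s1, A1, N = 0.1, -1.0, 1.0, 100

imp = np.zeros(N); imp[0] = 1.0
h = lfilter([Td*A1], [1.0, -np.exp(s1*Td)], imp)   # H(z) = Td A1/(1 - e^{s1 Td} z^-1)

assert np.allclose(h, Td*np.exp(s1*Td*np.arange(N)))   # h[n] = Td hc(n Td)
```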
Step-1: Transform the given DT filter specifications to CT filter specifications via Ω = ω/Td :
Step-2: Design the CT filter parameters and system function Hc(s) (with Butterworth or another CT IIR filter technique) meeting the specifications
Step-3: Transform back the CT system function to the DT system function :

6.2.2 Filter Design by Bilinear Transformation
The bilinear transformation maps a CT system function Hc(s) to a DT system function H(z) via the substitution
s = (2/Td) · (1 − z^{−1}) / (1 + z^{−1})
=⇒ H(z) =
Step-1: Transform the given DT filter specifications to CT filter specifications via the bilinear mapping Ω = (2/Td) tan(ω/2) :
Step-2: Design CT filter parameters and system function Hc (s) (with Butterworth or other CT IIR
filter technique) meeting the specifications
Step-3: Transform back the CT system function to the DT system function via s = (2/Td) · (1 − z^{−1}) / (1 + z^{−1}) :
6.3 Design of DT FIR filters by Windowing Method
Ideally, the desired DT filter Hd(e^{jω}) is an ideal filter, such as the ideal low-pass filter with some cut-off frequency ωc, which corresponds to an impulse response hd[n] = sin(ωc n)/(πn) with infinitely many samples.
To obtain FIR approximation of hd [n], the simplest method is to truncate hd [n] by windowing.
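A minimal sketch of the windowing method (M, ωc and the Hamming window are assumed choices): delay the ideal impulse response by M/2, truncate to 0 ≤ n ≤ M, and taper with the window.

```python
import numpy as np

M, wc = 50, 0.5*np.pi
n = np.arange(M + 1)
alpha = M/2
hd = (wc/np.pi)*np.sinc(wc*(n - alpha)/np.pi)   # ideal LPF delayed by M/2
h = hd*np.hamming(M + 1)                        # windowed FIR approximation

Hmag = lambda w: np.abs(np.exp(-1j*np.outer(w, n)) @ h)
assert np.all(np.abs(Hmag(np.array([0.1*np.pi, 0.2*np.pi])) - 1) < 0.01)  # passband
assert np.all(Hmag(np.array([0.7*np.pi, 0.9*np.pi])) < 0.01)              # stopband
```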
2. The peak amplitudes of the main and side lobes of the window increase (as the window length M increases) in a manner such that the area under each lobe remains constant.
Note: In the convolution, as W (ej(ω−θ) ) slides past a discontinuity in Hd (ejθ ), the integral of
W (ej(ω−θ) )Hd (ejθ ) will oscillate as each side-lobe of W (ej(ω−θ) ) moves past the discontinuity.
3. (Due to 1&2) the oscillations in the shape of H(ejω ) occur more rapidly but the oscillation
amplitudes remain constant.
Figure 6.9: (Figures 7.30 and 7.31 in textbook) Fourier transforms (log magnitude) of windows for M = 50. a) Rectangular b) Bartlett c) Hann d) Hamming e) Blackman
Then, the windowed impulse response h[n] = hd[n] w[n] will also be symmetric around n = M/2, i.e.
h[n] = h[M − n] for 0 ≤ n ≤ M, and h[n] = 0 otherwise.
(If the desired impulse response is anti-symmetric, hd[n] = −hd[M − n], then h[n] will be an anti-symmetric FIR =⇒ H(e^{jω}) = j Ao(e^{jω}) e^{−jωM/2})
In summary :
Figure 6.10: (Figure 7.31 in textbook) Illustration of the type of approximation obtained at a discontinuity of the ideal frequency response. Note that the pass-band and stop-band tolerances are the same due to the symmetry of the sliding window.
6.3.3 Kaiser Window Filter Design Method
There is a fundamental trade-off between main-lobe width and side-lobe area for any window, i.e.
as we change window shape to decrease side-lobe area (i.e. oscillation amplitudes), main-lobe width
(i.e. transition band) will increase.
A near-optimal window design method for this trade-off is the Kaiser window.
w[n] = I0[β(1 − ((n − α)/α)²)^{1/2}] / I0(β) for 0 ≤ n ≤ M, and w[n] = 0 otherwise.

Here, α = M/2 and I0(·) is the zeroth-order modified Bessel function of the first kind.
The Kaiser window has two parameters (length parameter M and shape parameter β), while the other windows had only the length parameter M. By varying β and M, the window shape and length can be adjusted to trade side-lobe area (or amplitude) for main-lobe width.
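The window above can be computed directly; a minimal sketch, with I0 evaluated by its power series (which converges quickly for the β values used in practice):

```python
import math

def i0(x, terms=25):
    """Zeroth-order modified Bessel function of the first kind (power series):
    I0(x) = sum_k (x^2/4)^k / (k!)^2."""
    s, t = 1.0, 1.0
    for k in range(1, terms):
        t *= (x / (2.0 * k)) ** 2
        s += t
    return s

def kaiser(M, beta):
    """Kaiser window of length M + 1 with shape parameter beta."""
    alpha = M / 2.0
    return [i0(beta * math.sqrt(1.0 - ((n - alpha) / alpha) ** 2)) / i0(beta)
            for n in range(M + 1)]
```

With β = 0 the window reduces to the rectangular window, and larger β tapers the ends more strongly, as in Figure 6.11.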
Figure 6.11: (Figure 7.32 in textbook) a) Kaiser windows for β = 0, 3, 6 and M = 20 b) FTs corresponding to the windows in a) c) FTs of windows with β = 6 and M = 10, 20, 40.
Relationship of the Kaiser window to other windows
Figure 6.12: (Figure 7.33 in textbook) Comparison of fixed windows with the Kaiser window in a low-pass filter design application (M = 32, ωc = π/2). "Kaiser 6" means the Kaiser window with β = 6.
There are approximate formulas to determine M and β for given filter specifications δp1 , δp2 , δs , ωp , ωs .
2. The cut-off frequency ωc of the ideal LPF must be found. Due to the symmetry of the approximation at the discontinuity of Hd(e^{jω}), we set ωc = (ωp + ωs)/2 = 0.5π.
3. To determine the Kaiser window parameters β and M, we first compute

∆ω = ωs − ωp = 0.2π and A = −20 log₁₀ δ = 60.

Then Kaiser's empirical formulas

β = 0.1102(A − 8.7) (valid for A > 50), M = (A − 8)/(2.285 ∆ω)

give β = 5.653 and M = 37, where α = M/2 = 37/2 = 18.5.
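The computation in steps 2-3 can be sketched as follows. The specifications δ = 0.001, ωp = 0.4π, ωs = 0.6π are assumptions chosen here so that the numbers match the example (∆ω = 0.2π, A = 60 dB):

```python
import math

# assumed specifications (hypothetical, chosen to match the example numbers)
delta = 0.001
wp, ws = 0.4 * math.pi, 0.6 * math.pi

A = -20.0 * math.log10(delta)   # stop-band attenuation in dB -> 60
dw = ws - wp                    # transition width = 0.2 * pi
wc = (wp + ws) / 2.0            # ideal LPF cut-off = 0.5 * pi

# Kaiser's empirical design formulas for the shape parameter beta
if A > 50:
    beta = 0.1102 * (A - 8.7)
elif A >= 21:
    beta = 0.5842 * (A - 21.0) ** 0.4 + 0.07886 * (A - 21.0)
else:
    beta = 0.0                  # A < 21 dB: rectangular window suffices
M = math.ceil((A - 8.0) / (2.285 * dw))

print(round(beta, 3), M)  # 5.653 37
```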
Chapter 7
Chapter 8
Contents
8.1 Direct Computation of DFT
8.1.1 Direct Evaluation of the DFT definition
8.1.2 The Goertzel Algorithm
8.2 Decimation-in-time FFT Algorithms
8.3 Decimation-in-Frequency FFT Algorithm
8.4 More general FFT algorithms
The Discrete Fourier Transform (DFT) plays an important role in many digital signal processing
applications. This chapter discusses several methods that allow efficient computation of the values of
the DFT. In particular, we discuss the Goertzel and the Fast Fourier Transform (FFT) algorithms.
We cover Sections 9.1-9.3 from our textbook, and the reading assignment for this chapter is :
Remember the DFT and IDFT equations for an N-point signal x[n], where w_N = e^{−j2π/N}:

X[k] = Σ_{n=0}^{N−1} x[n] w_N^{kn},  k = 0, 1, ..., N − 1

x[n] = (1/N) Σ_{k=0}^{N−1} X[k] w_N^{−kn},  n = 0, 1, ..., N − 1
Let us first examine the computational complexity of the computation of the DFT directly from its
definition.
X[k] = Σ_{n=0}^{N−1} x[n] w_N^{kn}
     = x[0]w_N^{0} + x[1]w_N^{k} + x[2]w_N^{2k} + ... + x[N − 1]w_N^{k(N−1)},  k = 0, 1, ..., N − 1
Summary: With direct computation of the N-point DFT, the number of computations required is proportional to N², i.e. it has O(N²) complexity.
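A direct evaluation of the definition (a plain-Python sketch, not the textbook's code) makes the O(N²) cost explicit: the double loop performs N complex multiply-adds for each of the N values of k.

```python
import cmath

def dft(x):
    """Direct O(N^2) evaluation of the DFT definition."""
    N = len(x)
    wN = cmath.exp(-2j * cmath.pi / N)   # w_N = e^{-j 2 pi / N}
    # N values of k, each requiring a sum of N terms
    return [sum(x[n] * wN ** (k * n) for n in range(N)) for k in range(N)]

X = dft([1, 1, 1, 1])   # X[0] = 4, the remaining samples are ~0
```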
The following two properties are used to obtain more efficient/faster algorithms to compute DFT:
1. w_N^{k(N−n)} = w_N^{−kn} = (w_N^{kn})*  (complex conjugate symmetry)
2. w_N^{kn} = w_N^{k(n+N)} = w_N^{(k+N)n}  (periodicity in n and k)
1. Goertzel algorithm
When only a few samples of the N-point DFT are required (i.e. a few samples of the DTFT), the Goertzel algorithm can be more efficient. If all N values of the N-point DFT are required, the FFT algorithms are more efficient.
Notes:
Initial rest conditions (y[n] = 0, n < 0, if x[n] = 0, n < 0) are used in difference equations.
Computation of each sample of y_k[n] requires 1 complex multiplication + 1 complex addition, or equivalently 4 real multiplications + (2+2) real additions.

To compute X[k] = y_k[N], we need to compute y_k[N − 1], y_k[N − 2], ..., y_k[1] recursively (i.e. we need N iterations of the difference equation).

Hence, computing the DFT value X[k] = y_k[N] for a particular k requires:
N compl. mult. + N compl. additions ⇒ 4N real mult. + 4N real additions
(this complexity is similar to the direct computation's complexity)
The number of multiplications in Goertzel algorithm can be reduced by a factor of 2 with a second
order flow-graph as follows :
Figure 8.1: (Figure 9.2 in textbook) Flow-graph of second order recursive computation of X[k]
In direct computation or the Goertzel algorithm, we can calculate X[k] for only M values of k ⇒ total complexity proportional to M N.
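The second-order recursion of Figure 8.1 can be sketched as follows (an illustration under the conventions above, not the textbook's implementation). Only the real coefficient 2 cos(2πk/N) is used inside the loop; the complex multiplication by w_N^k is applied once, at n = N:

```python
import cmath
import math

def goertzel(x, k):
    """Compute the single DFT sample X[k] = y_k[N] via the second-order recursion
    s[n] = x[n] + 2 cos(2 pi k / N) s[n-1] - s[n-2]."""
    N = len(x)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / N)
    s_prev, s_prev2 = 0.0, 0.0
    for n in range(N):                      # N iterations of the recursion
        s = x[n] + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    s = coeff * s_prev - s_prev2            # one extra step with x[N] = 0
    # the FIR part (1 - w_N^k z^-1) is applied only once, at n = N
    return s - cmath.exp(-2j * math.pi * k / N) * s_prev

X1 = goertzel([1, 2, 3, 4], 1)   # close to -2 + 2j, the DFT sample X[1]
```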
8.2 Decimation-in-time FFT Algorithms
Consider a DFT with length N = 2^α, where α is an integer, and separate the DFT summation into two sums over the even and odd samples :
⇒ X[k] = G[((k))_{N/2}] + w_N^{k} H[((k))_{N/2}],  k = 0, 1, ..., N − 1

where G[k] and H[k] are two periods of the N/2-point DFTs of the even and odd samples of x[n], respectively.
Let us evaluate the above result for N = 8 and also examine the corresponding flow-graph :
In a similar manner, an N/2-point DFT can be computed using two N/4-point DFTs :
In a similar manner, one can split all DFT blocks until only 2-point DFTs need to be computed, which is easy:
Overall, computation of the 8-pt DFT example becomes the following flow-graph :
Note the resulting complete decimation-in-time decomposition of 8-point DFT consists of regular
structures, called butterfly structures :
Figure 8.6: (Figure 9.8 in textbook) Flow-graph of basic butterfly computation.
This butterfly structure requires 2 complex multiplications + 2 complex additions, but can be
simplified to 1 complex multiplication + 2 complex additions as follows :
With this simplified butterfly structure, the overall 8-pt DFT computation becomes as follows :
Figure 8.8: (Figure 9.11 in textbook) Flow-graph of 8-point DFT using simplified butterfly computation.
The overall complexity of the N-point FFT is:
(N/2) log₂N complex multiplications
N log₂N complex additions
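The full decimation-in-time decomposition can be sketched as a recursive radix-2 FFT (a minimal illustration, assuming the length is a power of 2):

```python
import cmath

def fft(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    N = len(x)
    if N == 1:
        return list(x)
    G = fft(x[0::2])     # N/2-point DFT of the even-indexed samples
    H = fft(x[1::2])     # N/2-point DFT of the odd-indexed samples
    X = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * H[k]   # twiddle: w_N^k H[k]
        X[k] = G[k] + t                                # simplified butterfly:
        X[k + N // 2] = G[k] - t                       # 1 mult., 2 additions
    return X
```

Each of the log₂N stages performs N/2 twiddle multiplications, which is exactly the (N/2) log₂N count.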
Ex: Let N = 1024 = 210 . For N-point DFT calculation, compare direct computation and FFT
algorithm complexity.
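A back-of-the-envelope sketch of the comparison (counting complex multiplications only):

```python
import math

N = 1024                                   # N = 2^10
direct = N * N                             # direct DFT: ~N^2 complex mults
fft_mults = (N // 2) * int(math.log2(N))   # FFT: (N/2) log2 N complex mults
print(direct, fft_mults, direct // fft_mults)   # 1048576 5120 204
```

The FFT needs roughly 200 times fewer complex multiplications for N = 1024.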
Remember the definition of the N-point DFT: X[k] = Σ_{n=0}^{N−1} x[n] w_N^{kn},  k = 0, 1, ..., N − 1
Let us write the definition for only the even values of k (indexed by 2r, r = 0, ..., N/2 − 1) and simplify the expression :

X[2r] = Σ_{n=0}^{N−1} x[n] w_N^{2rn},  r = 0, 1, ..., N/2 − 1
      = Σ_{n=0}^{N/2−1} x[n] w_N^{2rn} + Σ_{n=N/2}^{N−1} x[n] w_N^{2rn}
      = Σ_{n=0}^{N/2−1} x[n] w_N^{2rn} + Σ_{n=0}^{N/2−1} x[n + N/2] w_N^{2rn} w_N^{rN}
      = Σ_{n=0}^{N/2−1} (x[n] + x[n + N/2]) w_{N/2}^{rn}

(using w_N^{rN} = 1 and w_N^{2rn} = w_{N/2}^{rn})

Hence, the even samples of the N-point DFT X[k] can be obtained from the N/2-point DFT of the sequence x₀[n] = x[n] + x[n + N/2], n = 0, ..., N/2 − 1.
(Remember what happens when the DTFT of an N-point signal is sampled at less than N points !)
In a similar manner, one can obtain the following result for the odd samples of the N-point DFT :

X[2r + 1] = Σ_{n=0}^{N−1} x[n] w_N^{(2r+1)n},  r = 0, 1, ..., N/2 − 1
          = ...
          = Σ_{n=0}^{N/2−1} [(x[n] − x[n + N/2]) w_N^{n}] w_{N/2}^{rn},  r = 0, 1, ..., N/2 − 1
Overall, the flow-graph of the decimation-in-frequency decomposition of the N-point DFT into two N/2-point DFTs is as follows :
Figure 8.10: (Figure 9.19 in textbook) Flow-graph of decimation-in-frequency decomposition of N-point DFT into two N/2-point DFTs.
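The one-stage decimation-in-frequency split above can be verified numerically; a minimal sketch (using a plain direct DFT as the reference):

```python
import cmath

def dft(x):
    # direct DFT, used only as a reference here
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [1, 2, 3, 4, 5, 6, 7, 8]
N = len(x)
h = N // 2
# even samples X[2r]: N/2-point DFT of x[n] + x[n + N/2]
even = dft([x[n] + x[n + h] for n in range(h)])
# odd samples X[2r+1]: N/2-point DFT of (x[n] - x[n + N/2]) w_N^n
odd = dft([(x[n] - x[n + h]) * cmath.exp(-2j * cmath.pi * n / N)
           for n in range(h)])
X = dft(x)
```

Here even[r] matches X[2r] and odd[r] matches X[2r + 1], exactly as in the derivation above.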
We can repeat the decomposition procedure until only 2-point DFTs are needed and obtain the
following flow-graphs :
For N-point DFT calculations where N is not a power of 2 but is a composite number, i.e. N = N1 N2 (e.g. N = 10 = 2 · 5), computationally efficient FFT algorithms can still be obtained. The N-point DFT can be expressed as a combination of N1 N2-point DFTs or as a combination of N2 N1-point DFTs. A similar statement also holds for larger composite numbers such as N = N1 N2 N3 (e.g. N = 30 = 2 · 5 · 3).