Ee430 Lectures

This document contains lecture notes for an electrical engineering course on digital signal processing. It introduces key concepts related to discrete-time signals and systems, including linear time-invariant systems, convolution, Fourier transforms, sampling, and filtering. The notes are intended to help students follow the lectures and reduce note-taking burden. They were prepared using a digital signal processing textbook as a reference.


Lecture Notes

EE430 Digital Signal Processing

Department of Electrical and Electronics Engineering

Middle East Technical University (METU)


Preface

These lecture notes were prepared to help students follow the lectures more easily and efficiently. This is a fast-paced course with a significant amount of material, and to cover all of it at a reasonable pace in the lectures, we intend to benefit from these partially-complete lecture notes. In particular, we included important results, properties, comments and examples, but left out most of the mathematics, derivations and solutions of examples, which we do on the board and expect the students to write into the provided empty spaces in the notes. We hope that this approach will reduce the note-taking burden on the students and will leave more time to stress important concepts and discuss more examples.

These lecture notes were prepared using mainly our textbook, "Discrete-Time Signal Processing" by Alan V. Oppenheim, Ronald W. Schafer and John R. Buck. Lecture notes of Professors Tolga Ciloglu, Aydin Alatan and Engin Tuncer were also very helpful when preparing these notes. Most figures and tables in the notes are also taken from the textbook.

This is the first version of the notes. Therefore the notes may contain errors and we also believe
there is room for improving the notes in many aspects. In this regard, we are open to feedback and
comments, especially from the students taking the course.

Finally, I would like to thank the students who have taken the course in my section in Fall 2018/2019.
They have helped me type some parts of these notes. (Çağnur Tekerekoğlu, İbrahim Üste, Canberk
Sönmez, İsmail Mert Meral, Selen Keleş, Enes Muhavvid Şahin, Yüksel Mert Salar, Umut Utku
Erdem, Abbas Raimkulov, Fatih Yıldırım, Ferdi Akdoğan, Uğur Berk Şahin, Barış Şafak Gökçe,
Furkan Kılıç, Oytun Akpulat, Güner Dilşad Er, Zülfü Serhat Kük, Hilal Köksal, Alper Bilgiç, Emre
Onat Keser, Faruk Tellioğlu, Tahir Çimen, Berrin Güney, Mahmoud ALAsmar, Safa Özer, Tamer
Aktekin, Barış Fındık, Batuhan Kircova, Ahmed Akyol, Şevket Doğmuş, Emre Can, Mert Elmas,
Halil Temurtaş, Yüksel Yönsel, Eren Berk Kama, Ahmet Nazlıoğlu, Dilge Hüma Aydın, Abdullah
Aslam, Özer Karanfil)

Fatih Kamışlı
December 27th , 2018.

1
Contents

1 Discrete-time Signals and Systems 5


1.1 Discrete-time (DT) signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.1.1 Basic sequences and sequence operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2 Discrete-time (DT) systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2.1 Memoryless systems: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2.2 Linear systems: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2.3 Time-invariant systems: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2.4 Causality: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2.5 Stability: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3 Linear time-invariant (LTI) systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.3.1 Computation of convolution sum: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4 Properties of convolution and LTI systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.1 Properties of convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.2 Properties of LTI systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.3 FIR and IIR systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.5 Linear constant-coefficient difference equations (LCCDE) . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6 Frequency domain representation of DT signals and systems . . . . . . . . . . . . . . . . . . . . . . . . 21
1.7 Representation of sequences by Fourier transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.8 Symmetry properties of DT Fourier transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.9 DT Fourier transform theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

2 The Z Transform 34
2.1 The Z Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.2 Properties of the ROC for the Z Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.3 The Inverse Z Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.4 Z Transform Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.5 Z Transform and LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

3 The Discrete Fourier Transform (DFT) 47


3.1 Discrete (-Time) Fourier Series (DFS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.2 Properties of DFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.3 DTFT of Periodic Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.4 Sampling the DTFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.5 The Discrete Fourier Transform (DFT) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.6 Properties of DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.7 Computing Linear Convolution using DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.7.1 Linear Convolution of Two Finite Length Sequences . . . . . . . . . . . . . . . . . . . . . . . . 63
3.7.2 Circular Convolution as Linear Convolution with Aliasing . . . . . . . . . . . . . . . . . . . . . 64
3.7.3 Implementing LTI systems Using DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

2
4 Sampling of Continuous-time (CT) Signals 69
4.1 Periodic Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.2 Frequency-domain Representation of Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.3 Reconstruction of a Band-Limited Signal from Its Samples . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.4 Discrete-time processing of continuous-time signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.4.1 Impulse Invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.5 CT Processing of DT Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.6 Changing Sampling Rate Using DT Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.6.1 Sampling Rate Reduction by an Integer Factor . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.6.2 Increasing Sampling Rate by an Integer Factor . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
4.6.3 Simple Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.6.4 Changing Sampling Rate by Non-Integer Factor . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.6.5 Sampling of band-pass signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.7 Digital Processing of Analog Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.7.1 Prefiltering to Avoid Aliasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.7.2 Analog-to-Digital (A/D) Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.7.3 Analysis of Quantization error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.7.4 Digital-to-Analog (D/A) Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

5 Transform Analysis of LTI Systems 93


5.1 Frequency Response of LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.2 Systems Characterized by LCCDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.3 Frequency Response for Rational System Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.4 Relationship Between Magnitude And Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
5.5 All-pass Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
5.6 Minimum-phase Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5.6.1 Minimum-phase and All-pass Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5.6.2 Frequency Response Compensation of Non-minimum-Phase Systems . . . . . . . . . . . . . . . 114
5.6.3 Properties of Minimum-phase Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
5.7 Linear Systems with Generalized Linear Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
5.7.1 Linear Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
5.7.2 Generalized Linear Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
5.7.3 Causal Generalized Linear Phase Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

6 Filter Design Techniques 124


6.1 Filter Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.2 Design of DT IIR Filters from CT Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.2.1 Filter Design by Impulse Invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.2.2 Filter Design by Bilinear Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
6.3 Design of DT FIR filters by Windowing Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
6.3.1 Commonly used Windows and Their Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
6.3.2 Incorporation of Generalized Linear Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
6.3.3 Kaiser Window Filter Design Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

7 Structures for Discrete-time Systems 136

3
8 Computation of Discrete Fourier Transform 137
8.1 Direct Computation of DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
8.1.1 Direct Evaluation of the DFT definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
8.1.2 The Goertzel Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
8.2 Decimation-in-time FFT Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.3 Decimation-in-Frequency FFT Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.4 More general FFT algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

4
Chapter 1

Discrete-time Signals and Systems

Contents
1.1 Discrete-time (DT) signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.1.1 Basic sequences and sequence operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2 Discrete-time (DT) systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2.1 Memoryless systems: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2.2 Linear systems: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2.3 Time-invariant systems: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2.4 Causality: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2.5 Stability: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3 Linear time-invariant (LTI) systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.3.1 Computation of convolution sum: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4 Properties of convolution and LTI systems . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.1 Properties of convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.2 Properties of LTI systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4.3 FIR and IIR systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.5 Linear constant-coefficient difference equations (LCCDE) . . . . . . . . . . . . . . . . . 17
1.6 Frequency domain representation of DT signals and systems . . . . . . . . . . . . . . . 21
1.7 Representation of sequences by Fourier transforms . . . . . . . . . . . . . . . . . . . . . 24
1.8 Symmetry properties of DT Fourier transforms . . . . . . . . . . . . . . . . . . . . . . . 27
1.9 DT Fourier transform theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

This chapter covers the fundamental concepts of discrete-time (DT) signals and systems. In particular, we cover Sections 2.1 to 2.9 from our textbook.

Reading assignment for this chapter :

• Sections 2.1 to 2.9 from our textbook.

5
1.1 Discrete-time (DT) signals

A DT signal is simply a sequence of numbers indexed by integer n.

Our notation to show a DT signal is :

A DT signal can be obtained from

• an inherently discrete event (e.g. number of students attending lecture n)

• sampling of a continuous-time (CT) signal :

1.1.1 Basic sequences and sequence operations


Unit sample sequence

Any sequence x[n] can be written in terms of delayed and scaled δ[n].

For an arbitrary x[n], we have :
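This decomposition, x[n] = Σ_k x[k] δ[n − k], can be illustrated numerically; the sketch below (not part of the notes, with made-up sequence values) rebuilds x[n] from shifted, scaled unit samples:

```python
import numpy as np

def delta(n):
    """Unit sample sequence: delta[n] = 1 for n = 0, else 0."""
    return (n == 0).astype(float)

n = np.arange(-3, 4)                          # indices n = -3..3
x = np.array([0., 1., 2., 0., -1., 0., 3.])   # an arbitrary x[n] on those indices

# x[n] = sum_k x[k] * delta[n - k]
x_rebuilt = sum(x[i] * delta(n - k) for i, k in enumerate(n))

print(np.allclose(x, x_rebuilt))   # True: the weighted impulses rebuild x[n]
```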

6
Unit step sequence

Relations between unit step and sample sequences:

Exponential sequences

General form :

If A and α are real, x[n] is real.

• A > 0, 0 < α < 1 : x[n] decreases in time

• A > 0, −1 < α < 0 : x[n] decreases in magnitude with alternating sign
• |α| > 1 : x[n] grows in magnitude as n increases

If A and α are complex :

7
Complex exponentials

x[n] =

Properties :

1. Complex exponentials A e^{j(ω0+2πr)n} with frequencies (ω0 + 2πr), r ∈ Z (e.g. ω0, ω0 + 2π, ω0 + 4π, ...) are equivalent to each other:

2. Based on the above property, when discussing complex exponentials A e^{jω0 n} (or sinusoids cos(ω0 n + φ)), we only need to consider an interval of length 2π for the frequency ω0 :


3. Complex exponentials A e^{jω0 n} (or sinusoids cos(ω0 n + φ)) are periodic only if ω0/(2π) is a ratio of integers, i.e.
Remember the periodicity requirement for any sequence x[n]:

Ex: Are the following sequences periodic? If so, find their periods.

x1[n] = cos(n),   x2[n] = cos(2πn/8),   x3[n] = cos(3πn/8 + φ)
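These periodicity questions can be spot-checked numerically. The sketch below (not part of the original notes) tests whether ω0/(2π) is rational, using a rational approximation with a tolerance; the bounds `max_den` and `tol` are assumptions chosen for illustration:

```python
from fractions import Fraction
import math

def dt_period(omega0, max_den=10**4, tol=1e-9):
    """Smallest period N of e^{j*omega0*n} (or cos(omega0*n + phi)),
    or None if omega0/(2*pi) is not (numerically) rational."""
    ratio = omega0 / (2 * math.pi)
    f = Fraction(ratio).limit_denominator(max_den)
    if abs(float(f) - ratio) > tol:
        return None               # omega0/(2*pi) irrational -> not periodic
    return f.denominator          # N, where omega0 * N = 2*pi*k

print(dt_period(1.0))              # x1[n] = cos(n): not periodic -> None
print(dt_period(2 * math.pi / 8))  # x2[n]: period 8
print(dt_period(3 * math.pi / 8))  # x3[n]: period 16
```

Note x3[n] has period 16, not 8, since ω0/(2π) = 3/16 requires N = 16 (with k = 3).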

4. (Prop. 1 + Prop. 3) There are only N distinguishable frequencies for which the complex exponentials A e^{jω0 n} (or sinusoids cos(ω0 n + φ)) are periodic with period N :

8
5. For complex exponentials A e^{jω0 n} (or sinusoids cos(ω0 n + φ)),

• low frequencies are in the vicinity of ω = 0

• high frequencies are in the vicinity of ω = π

The rate of oscillation of the complex exponential (or sinusoid) determines whether a frequency is high or low:

9
Note : For CT complex exponential x(t) = Aejφ0 t , none of the above 5 properties hold :

1.

2.

3.

4.

5.

Transformation of independent variable n

Time shift: x[n] → x[n − k]

Time reversal: x[n] → x[−n]

Note : First time shift, then time reversal ≠ first time reversal, then time shift

10
1.2 Discrete-time (DT) systems
Notation:

1.2.1 Memoryless systems:

Output y[n] does not depend on past or future values of input x[n].
Ex:

1.2.2 Linear systems:

The system satisfies the following relation for any a, b, x1 [n], x2 [n]:

In a linear system, if input

Ex: Are these systems linear ?

11
1.2.3 Time-invariant systems:

Any time shift at the input causes a time shift at the output by the same amount.

Ex: Are these systems time-invariant ? (Accumulator)

Ex: Are these systems time-invariant ? (Compressor)

1.2.4 Causality:

Current output sample y[n] depends only on current and past input samples x[n], x[n−1], x[n−2], ...
Ex:

1.2.5 Stability:

A system is stable if and only if (iff) every bounded input (i.e. ) produces a
bounded output (i.e. ).
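The stability condition (absolute summability of the impulse response, stated below in Sec. 1.4.2) can be sketched numerically for h[n] = a^n u[n]; the values of a and b below are assumed examples, not from the notes:

```python
import numpy as np

# h[n] = a^n u[n] is absolutely summable iff |a| < 1,
# with sum_{n>=0} |a|^n = 1/(1 - |a|) (geometric series) -> stable.
a = 0.5
partial = np.cumsum(np.abs(a ** np.arange(200)))
print(partial[-1])                 # approaches 1/(1 - 0.5) = 2

# For |b| >= 1 the partial sums grow without bound -> not stable.
b = 1.1
partial_b = np.cumsum(np.abs(b ** np.arange(100)))
print(partial_b[-1] > 1e4)         # True: the sum keeps growing
```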

12
Ex:

1.3 Linear time-invariant (LTI) systems

LTI systems have both linearity and time-invariance properties.


LTI systems are a very important class of systems.
The output of an LTI system to an arbitrary input can be calculated by the famous convolution sum:

Hence, for any input x[n] to an LTI system, output

An LTI system is completely characterized by its impulse response h[n] = T {δ[n]}.

13
1.3.1 Computation of convolution sum:

Ex: x[n] = δ[n + 2] + 2δ[n] − δ[n − 3] is input to LTI system with impulse response h[n] =
3δ[n] + 2δ[n − 1] + δ[n − 2]. Find output y[n] using two methods.

Echo method : Add the outputs due to each weighted and delayed delta function in the input. (Useful when the input has few samples.)

Sliding average method : Apply definition of convolution sum.
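Both methods can be cross-checked numerically for the example above; this is a sketch in NumPy, not part of the notes:

```python
import numpy as np

# x[n] = d[n+2] + 2 d[n] - d[n-3]   (support n = -2..3)
# h[n] = 3 d[n] + 2 d[n-1] + d[n-2] (support n = 0..2)
x = np.array([1, 0, 2, 0, 0, -1])   # values at n = -2..3
h = np.array([3, 2, 1])             # values at n = 0..2

# sliding method: direct convolution sum
y = np.convolve(x, h)               # output support starts at n = -2 + 0 = -2

# echo method: superpose a copy of h for each nonzero sample of x
y_echo = np.zeros(len(x) + len(h) - 1)
for k, xk in enumerate(x):
    y_echo[k:k + len(h)] += xk * h

print(y)   # [ 3  2  7  4  2 -3 -2 -1], i.e. y[n] for n = -2..5
```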

14
Ex: Impulse response of LTI system is h[n] = u[n] − u[n − N ] and input x[n] = an u[n], 0 < a < 1.
Find output y[n].

1.4 Properties of convolution and LTI systems

1.4.1 Properties of convolution

• Distribution over addition:

• Commutative property:

• Associative property:

1.4.2 Properties of LTI systems

• Impulse response property: An LTI system is completely characterized/specified/determined by its impulse response h[n].
=⇒

15
Figure 1.1: (Figure 2.11 in textbook) (a) Parallel combination of LTI systems (b) an equivalent system.
Figure 1.2: (Figure 2.12 in textbook) (a) Cascade combination of LTI systems (b) equivalent cascade system (c) single equivalent system.

• Memory property: LTI system is memoryless ⇐⇒

• Causality property: LTI system is causal ⇐⇒

• Stability property: LTI system is stable ⇐⇒


Proof given in two steps.
Step-1 : Sufficiency, i.e. if Σ_{n=−∞}^{∞} |h[n]| < ∞, then the LTI system is stable.

Step-2 : Necessity, i.e. for the LTI system to be stable, we must have Σ_{n=−∞}^{∞} |h[n]| < ∞.

16
• Invertibility property: LTI system (h[n]) is invertible ⇐⇒ There is another LTI system (g[n]) such that h[n] ∗ g[n] = δ[n].

1.4.3 FIR and IIR systems

FIR : Finite (-duration) Impulse Response (h[n] has finite number of nonzero samples)
Ex:

IIR : Infinite (-duration) Impulse Response (h[n] has infinite number of nonzero samples)
Ex:

1.5 Linear constant-coefficient difference equations (LCCDE)

An important subclass of LTI systems where the input x[n] and output y[n] satisfy an LCCDE

• a_k, b_m are real constants

• input x[n] given, output y[n] found (given x[n], LCCDE solved for y[n])
• Why study LCCDE? They can be useful to represent and implement LTI systems.
Ex: Accumulator :

Initial/auxiliary conditions

LCCDEs require initial/auxiliary conditions on y[n] samples to find a unique solution y[n] for a given x[n]. Consider the following first-order LCCDE :

17
LCCDE and LTI systems

Consider the following example to gain insight for the following results.
Ex: y[n] + ay[n − 1] = x[n] for x[n] = 0, n < 0 and initial condition y[−1] = C

LCCDE can represent LTI systems if the initial/auxiliary conditions are so-called zero initial
conditions:

• (Type I)
• (Type II)
• Note :
– Type I (IRC) conditions lead to LTI systems that are causal.
– Type II (IRC) conditions lead to LTI systems that are anti-causal.
– Question: There are LTI systems that are neither causal nor anti-causal (i.e. h[n] is two-sided). What initial conditions lead to such systems?

• Other initial conditions lead to systems that are not LTI.


General Solutions of LCCDE

General solution of LCCDE is obtained as a sum of particular solution and homogeneous solution:

18
• Particular soln: Given a particular input x_p[n], particular solution y_p[n] is any solution that satisfies the LCCDE.

– Finding particular solutions is not easy in general.


– For some types of input signals (e.g. constant, exponential, sinusoid), the particular solution has the same type/form. (Ex: x_p[n] = α^n → y_p[n] = Kα^n. Put y_p[n] into LCCDE, find K.)

• Homogeneous soln: Solution y_h[n] which satisfies the LCCDE for zero input (i.e. x[n] = 0)

– In general, y_h[n] is a weighted sum of z^n, n z^n, n^2 z^n, ... type signals where z are complex
– Consider y_h[n] = z^n and plug into the homogeneous equation:

– Note: If there is a root z_r with multiplicity m > 1, then z_r^n, n z_r^n, ..., n^{m−1} z_r^n should be included in y_h[n].
Ex: 3 roots : z1, z2, z2 → y_h[n] = A1 z1^n + A2 z2^n + B2 n z2^n
– A_k in y_h[n] are determined from the initial conditions.

Forward/Backward Solvability of LCCDE

LCCDE :

Forward recursive solution:

Backwards recursive solution:

If a set of auxiliary conditions is satisfied, then forward or backward recursion can be used to compute the solution.
Ex:
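As a sketch of forward recursion on the first-order LCCDE from above (y[n] + a y[n−1] = x[n]), with an impulse input and zero initial condition; the specific values are assumed for illustration:

```python
import numpy as np

# Forward recursion for y[n] + a*y[n-1] = x[n], n >= 0,
# given the auxiliary condition y[-1] = C.
def solve_forward(a, C, x):
    y = np.zeros(len(x))
    y_prev = C
    for n, xn in enumerate(x):
        y[n] = xn - a * y_prev   # rearranged LCCDE: y[n] = x[n] - a*y[n-1]
        y_prev = y[n]
    return y

a, C = 0.5, 0.0                  # zero initial condition -> causal LTI system
x = np.zeros(10); x[0] = 1.0     # x[n] = delta[n]
h = solve_forward(a, C, x)       # impulse response h[n] = (-a)^n u[n]

print(np.allclose(h, (-a) ** np.arange(10)))   # True
```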

19
LCCDE of FIR and IIR systems

If N = 0 in the LCCDE equation, we have

• no recursion, and thus no initial/auxiliary conditions are required to compute the output

• actually, the LCCDE is in the form of a convolution where

If N ≥ 1 in the LCCDE equation, we have

• recursion, and initial conditions are required to compute the output

• if zero initial conditions are used, the system is LTI and h[n] is IIR
Finding Impulse Response from LCCDE

In this course, we are mostly interested in finding the impulse response of LTI systems represented by LCCDE. Given an LCCDE of the general form

Σ_{k=0}^{N} a_k y[n − k] = Σ_{m=0}^{M} b_m x[n − m]    (1.1)

1. Consider the LCCDE with RHS only x[n]: Σ_{k=0}^{N} a_k y[n − k] = x[n]

2. Find homogeneous solution y_h[n]

3. Find h[0], h[1], ..., h[N − 1] from the LCCDE using x[n] = δ[n] and zero initial conditions, i.e. h[−1] = h[−2] = ... = 0 (This is for causal LTI system, perform similarly if anti-causal LTI system desired)

4. Find unknown constants in homogeneous solution y_h[n] using h[0], h[1], ..., h[N − 1] to get ĥ[n] (This is for causal LTI system, perform similarly if anti-causal LTI system desired)

5. Since system is LTI, write h[n] = Σ_{m=0}^{M} b_m ĥ[n − m].

Ex: y[n] − 3y[n − 1] − 4y[n − 2] = x[n] + 2x[n − 1]. Find impulse response h[n] of causal LTI system
represented by this LCCDE.
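This example can be cross-checked numerically; the sketch below (not part of the notes) computes h[n] by forward recursion with zero initial conditions and compares it against the closed form h[n] = (6/5)4^n − (1/5)(−1)^n, which is what the 5-step procedure yields (roots z = 4 and z = −1 of the characteristic equation):

```python
import numpy as np

# y[n] - 3 y[n-1] - 4 y[n-2] = x[n] + 2 x[n-1], causal, zero initial conditions.
# Impulse response via recursion: h[n] = 3 h[n-1] + 4 h[n-2] + d[n] + 2 d[n-1].
N = 8
h = np.zeros(N)
for n in range(N):
    h[n] = (1.0 if n == 0 else 0.0) + (2.0 if n == 1 else 0.0)
    if n >= 1:
        h[n] += 3 * h[n - 1]
    if n >= 2:
        h[n] += 4 * h[n - 2]

# Closed form from the 5-step procedure: h[n] = (6/5) 4^n - (1/5) (-1)^n, n >= 0
n = np.arange(N)
h_closed = (6 / 5) * 4.0 ** n - (1 / 5) * (-1.0) ** n

print(h[:4])                       # [ 1.  5. 19. 77.]
```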

20
Transform domain approaches

Transform domain approaches are most useful for LCCDEs describing LTI systems.
Ex: LCCDE : y[n] − 3y[n − 1] − 4y[n − 2] = x[n] + 2x[n − 1]

1.6 Frequency domain representation of DT signals and systems

Consider an LTI system with impulse response h[n] and input x[n]. The output y[n] is

If x[n] = ejωn for −∞ < n < ∞ (i.e. complex exponential with frequency ω)

21
=⇒ • e^{jωn} is the eigenfunction for all LTI systems.

• The corresponding eigenvalue is H(e^{jω}), also called the frequency response of the system.

Ex: What is the frequency response of an ideal delay system?

It will be shown that a broad class of signals can be represented by a sum of complex exponentials

Hence, for an LTI system, the output can be easily calculated

Note that the frequency response is periodic with 2π:

Therefore, frequency response can be defined only over a range of 2π :

Note that the signals ejωn and ej(ω+2π)n are equal and hence the system cannot differentiate between
these eigenfunctions.

Ex: Input to an LTI system is x[n] = A cos(ω0 n + φ). Find output in terms of H(ejω ).
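The eigenfunction result can be verified numerically; the 2-tap averager h[n] below is an assumed example system, not from the notes. After the 1-sample transient, the output is exactly the input sinusoid scaled by |H(e^{jω0})| and shifted by ∠H(e^{jω0}):

```python
import numpy as np

# Steady-state response of an LTI system to a sinusoid:
# y[n] = A |H(e^{j w0})| cos(w0 n + phi + angle(H(e^{j w0}))).
h = np.array([0.5, 0.5])                 # assumed 2-tap averaging system
w0, A, phi = np.pi / 3, 2.0, 0.4

# frequency response evaluated at w0
H = np.sum(h * np.exp(-1j * w0 * np.arange(len(h))))

n = np.arange(50)
x = A * np.cos(w0 * n + phi)
y = np.convolve(x, h)[:len(n)]           # transient only at n = 0

y_pred = A * np.abs(H) * np.cos(w0 * n + phi + np.angle(H))
print(np.allclose(y[1:], y_pred[1:]))    # True: they agree after n = 0
```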

22
An important class of LTI systems, called frequency selective filters, have frequency response
H(ejω ) that is unity (i.e. 1) over a range of frequencies and 0 for the remaining frequencies.

Figure 1.3: (Figure 2.17 in textbook) Ideal lowpass filter showing (a) periodicity of frequency response and (b) one period of frequency response.
Figure 1.4: (Figure 2.18 in textbook) Ideal frequency selective filters (a) Highpass filter (b) Bandstop filter (c) Bandpass filter.

Ex: Moving average system: y[n] = (1/(M1 + M2 + 1)) Σ_{k=−M1}^{M2} x[n − k]. LTI? If so, find h[n] and H(e^{jω}).
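For this moving-average example the frequency response has a Dirichlet-kernel closed form, H(e^{jω}) = (1/(M1+M2+1)) [sin(ω(M1+M2+1)/2)/sin(ω/2)] e^{−jω(M2−M1)/2}. A numerical cross-check against the direct DTFT sum, with assumed values M1 = 2, M2 = 3 (a sketch, not from the notes):

```python
import numpy as np

M1, M2 = 2, 3
L = M1 + M2 + 1
h = np.ones(L) / L                      # h[n] = 1/L for -M1 <= n <= M2

w = np.linspace(0.1, np.pi, 200)        # avoid w = 0 (sin(w/2) in denominator)
n = np.arange(-M1, M2 + 1)

# direct DTFT sum over the support of h[n]
H_sum = np.array([np.sum(h * np.exp(-1j * wi * n)) for wi in w])

# Dirichlet-kernel closed form
H_cf = (np.sin(w * L / 2) / np.sin(w / 2)) * np.exp(-1j * w * (M2 - M1) / 2) / L

print(np.allclose(H_sum, H_cf))         # True
```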

23
Suddenly applied complex exponential inputs

This subsection (Sec. 2.6.2 in textbook) is a reading assignment. It discusses LTI systems when inputs are of the form x[n] = e^{jω0 n} u[n] instead of x[n] = e^{jω0 n}.

1.7 Representation of sequences by Fourier transforms

The Discrete-time Fourier Transform (DTFT) of a sequence x[n] is defined as:

If the above summation converges (i.e. the DTFT of x[n] exists), the sequence x[n] can be obtained from X(e^{jω}) as follows:

Notes :

• DTFT X(e^{jω}) is periodic with 2π

• DTFT X(e^{jω}) specifies how much of each frequency component (e^{jωn}) is required to synthesize sequence x[n]

• X(e^{jω}) is in general complex

• Note that the phase is not unique:

24
• Remember the "frequency response" definition of LTI systems:

Convergence (Existence) of DTFT

Convergence of DTFT means

1. (Sufficient condition) If x[n] is absolutely summable (i.e. Σ_{n=−∞}^{∞} |x[n]| < ∞), then Σ_{n=−∞}^{∞} x[n]e^{−jωn} converges, i.e. DTFT X(e^{jω}) exists.

It can also be shown that in this case the sum Σ_{n=−∞}^{∞} x[n]e^{−jωn} converges uniformly to a continuous function of ω, i.e. X(e^{jω}) is continuous.

Ex: x[n] has finite length =⇒

(*) Ex: x[n] = (1/2)^n u[n] =⇒

Ex: x[n] = a^n u[n]. Does X(e^{jω}) exist? If so, for which values of a, and what is X(e^{jω})?
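For this exponential example, when |a| < 1 the DTFT converges to the closed form X(e^{jω}) = 1/(1 − a e^{−jω}). A numerical sketch comparing a truncated DTFT sum to the closed form (a = 0.8 is an assumed value; not part of the notes):

```python
import numpy as np

# X(e^{jw}) = 1/(1 - a e^{-jw}) for x[n] = a^n u[n] with |a| < 1.
a = 0.8
w = np.linspace(-np.pi, np.pi, 101)
n = np.arange(400)                        # truncation; the tail a^400 is negligible

X_sum = np.array([np.sum(a**n * np.exp(-1j * wi * n)) for wi in w])
X_cf = 1.0 / (1.0 - a * np.exp(-1j * w))

print(np.allclose(X_sum, X_cf))           # True
```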

2. Some sequences are not absolutely summable but square summable. If x[n] is square summable (Σ_{n=−∞}^{∞} |x[n]|^2 < ∞), then Σ_{n=−∞}^{∞} x[n]e^{−jωn} converges in the mean-square sense, i.e. for a given X(e^{jω}), if we define X_M(e^{jω}) = Σ_{n=−M}^{M} x[n]e^{−jωn}, then lim_{M→∞} ∫_{−π}^{π} |X(e^{jω}) − X_M(e^{jω})|^2 dω = 0. In other words, the error |X(e^{jω}) − X_M(e^{jω})| may not be zero at each ω value, but the energy of the error is.

(**) Ex: H(e^{jω}) = { 1, |ω| < ωc ; 0, ωc < |ω| < π }, with periodicity 2π.
25
Figure 1.5: (Figure 2.21 in textbook) Convergence of the Fourier Transform.

3. For some sequences that are neither absolutely nor square summable (e.g. periodic sequences such as e^{j(2π/5)n}), Σ_{n=−∞}^{∞} x[n]e^{−jωn} converges in the generalized functions sense.
(***) Ex: x[n] = 1 for all n.

26
Ex: Examine convergence of DTFT for x[n] = e^{jω0 n}.

Summary of convergence/existence of DTFT :

1. For a sequence x[n], the DTFT analysis equation Σ_{n=−∞}^{∞} x[n]e^{−jωn} converges (i.e. DTFT X(e^{jω}) exists) in 3 different ways depending on x[n].

• x[n] absolutely summable → Σ_{n=−∞}^{∞} x[n]e^{−jωn} converges uniformly, X(e^{jω}) is continuous
• x[n] square summable → Σ_{n=−∞}^{∞} x[n]e^{−jωn} converges in the mean-square sense, X(e^{jω}) may have discontinuities
• x[n] neither absolutely nor square summable → Σ_{n=−∞}^{∞} x[n]e^{−jωn} converges in the generalized functions sense, X(e^{jω}) has impulse functions

2. The DTFT synthesis or analysis equation may not be easily calculated for some x[n] or X(e^{jω}) :

1.8 Symmetry properties of DT Fourier transforms

Definitions: Conjugate-symmetric sequence xe [n] satisfies


Conjugate-antisymmetric sequence xo [n] satisfies

Any sequence can be written as a sum of conjugate-symmetric & conjugate-antisymm. sequences:

Note that:
−→ Adding (2) and (3) gives (1).
−→ From (2):
From (3):

−→ If x[n] is real, then

27
(Hence, conjugate-symmetry & conjugate-antisymmetry is a generalization of the even-odd decomposition from EE301.)

Conjugate-symmetry & conjugate-antisymmetry definitions and decomposition also apply to CT signals and FTs (CTFT, DTFT, ...).

Same notes apply!


−→ Adding (5) and (6) gives (4).
−→ From (5):
From (6):

−→ If X(ejw ) is real, ...

For a sequence x[n] with DTFT X(ejw ), i.e. x[n] ←→ X(ejw ):

1-

2-

3-

4-

5-

28
6-

If x[n] is real (i.e. x[n] = x∗ [n]), we have :


7- X(ejw ) = X ∗ (e−jw ) (i.e. DTFT is conjugate symmetric)

8- XR (ejw ) = XR (e−jw ) (i.e. real part of DTFT is even)

9- XI (ejw ) = −XI (e−jw ) (i.e. imaginary part of DTFT is odd)

10- |X(ejw )| = |X(e−jw )| (i.e. magnitude of DTFT is even)

11- ∠X(ejw ) = −∠X(e−jw ) (i.e. phase of DTFT is odd)

12- even part xe [n] ←→ XR (ejw ) (from 5)

13- odd part xo [n] ←→ jXI (ejw ) (from 6)
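Properties 7, 10 and 11 for a real sequence can be spot-checked numerically; the short sequence below is an assumed example, not from the notes:

```python
import numpy as np

# For real x[n]: X(e^{jw}) = X*(e^{-jw}), |X| is even, the phase is odd.
x = np.array([3.0, -1.0, 2.0, 0.5])      # an assumed real sequence, n = 0..3
n = np.arange(len(x))

def dtft(x, w):
    """DTFT of the finite sequence x at a single frequency w."""
    return np.sum(x * np.exp(-1j * w * n))

for w in (0.3, 1.1, 2.5):
    Xp, Xm = dtft(x, w), dtft(x, -w)
    assert np.isclose(Xp, np.conj(Xm))              # property 7
    assert np.isclose(abs(Xp), abs(Xm))             # property 10
    assert np.isclose(np.angle(Xp), -np.angle(Xm))  # property 11
```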

See Table 2.1 (Ch. 2, p. 57) and Fig. 2.22 in the textbook (DTFT real, imaginary parts, magnitude, phase).

Figure 1.6: (Table 2.1 in textbook) Symmetry Properties of the Fourier Transform.

29
1.9 DT Fourier transform theorems

Consider following notation :

• Linearity of DTFT: x1[n] ⇐⇒ X1(e^{jω}), x2[n] ⇐⇒ X2(e^{jω})

• Time & Frequency Shifting: x[n] ⇐⇒ X(e^{jω})

• Time Reversal: x[n] ⇐⇒ X(e^{jω})

• Differentiation in Frequency: x[n] ⇐⇒ X(e^{jω})

• Parseval's Theorem: x[n] ⇐⇒ X(e^{jω}), y[n] ⇐⇒ Y(e^{jω})

Energy of a DT Signal:

If E is finite =⇒ x[n] : Energy Signal

Average Power:

30
If E is finite =⇒ P = 0
If P is finite =⇒ x[n] : Power Signal
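Parseval's relation (energy in time equals energy in frequency) can be checked numerically: for a finite-length x[n], sampling the DTFT on a fine enough FFT grid makes the frequency-domain integral exact. A sketch (the random sequence and grid size are assumptions, not from the notes):

```python
import numpy as np

# Parseval: sum |x[n]|^2 = (1/2pi) * integral_{-pi}^{pi} |X(e^{jw})|^2 dw.
rng = np.random.default_rng(0)
x = rng.standard_normal(16)

N = 64                                   # >= 2*len(x) - 1, so sampling is exact
X = np.fft.fft(x, N)                     # X[k] = X(e^{jw}) at w = 2*pi*k/N

energy_time = np.sum(x**2)
energy_freq = np.sum(np.abs(X)**2) / N   # Riemann sum of (1/2pi)|X|^2 dw

print(np.isclose(energy_time, energy_freq))   # True
```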

• Convolution Theorem: x[n] ⇐⇒ X(e^{jω}), h[n] ⇐⇒ H(e^{jω})

Remember the eigenfunction property & linearity of LTI systems

• Windowing Theorem: x[n] ⇐⇒ X(e^{jω}), w[n] ⇐⇒ W(e^{jω})
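The convolution theorem (y = x ∗ h implies Y(e^{jω}) = X(e^{jω})H(e^{jω})) can be checked numerically on short sequences; the values below are assumed examples, not from the notes:

```python
import numpy as np

x = np.array([1.0, 2.0, 0.0, -1.0])
h = np.array([0.5, 0.25, 0.25])
y = np.convolve(x, h)                    # y = x * h

w = np.linspace(-np.pi, np.pi, 50)

def dtft(s, w):
    """DTFT of a finite sequence s (support starting at n = 0) on grid w."""
    n = np.arange(len(s))
    return np.array([np.sum(s * np.exp(-1j * wi * n)) for wi in w])

print(np.allclose(dtft(y, w), dtft(x, w) * dtft(h, w)))   # True
```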

31
Figure 1.7: (Table 2.2 in textbook) Theorems of the Fourier Transform.

Figure 1.8: (Table 2.3 in textbook) Pairs of the Fourier Transform.

32
Examples

Use DTFT and known signal & DTFT pairs to find DTFT or IDTFT of given expression.

Ex: x[n] = a^n u[n − 5]

Ex: X(e^{jω}) = 1 / ((1 − a e^{jω})(1 − b e^{jω}))

Ex: H(e^{jω}) = { e^{−jωn_d}, ωc < |ω| < π ; 0, |ω| < ωc }, with periodicity 2π.

33
Chapter 2

The Z Transform

Contents
2.1 The Z Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.2 Properties of the ROC for the Z Transform . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.3 The Inverse Z Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.4 Z Transform Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.5 Z Transform and LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

This chapter covers the Z transform and its properties. In particular, we cover Sections 3.1-3.4 from
our textbook.

Reading assignment for this chapter :

• Sections 3.1-3.4 from our textbook.


As discussed in the previous chapter, the DT Fourier transform does not converge for some sequences. The Z transform is defined to handle a broader class of signals, since it can converge where the DTFT does not. In addition, the Z transform can be more convenient in some analytical problems.
2.1 The Z Transform

Remember DTFT of x[n] :

The Z-Transform of x[n] :

Notation :

ˆ Z-transform operator Z{·} :
ˆ Correspondence between x[n] and its Z-transform:
The complex plane (z-plane):

From the above definitions of DTFT and the Z-transform, we can make the following observations:

ˆ For z = r · e^{jω} :

ˆ On the unit circle, i.e. for z = 1 · e^{jω} :


Convergence of Z-Transform

(Uniform) Convergence of Z-transform means :

Hence, we can make the following observations :

ˆ Convergence of the Z-transform (i.e. Σ_{n=−∞}^{∞} |x[n]| |z|^{−n} < ∞) depends on the magnitude |z| = r but not on the angle ∠z = ω.

ˆ The Region of Convergence (ROC) is defined, for a given x[n], as the range of z values for which its Z transform converges, i.e. |Σ_{n=−∞}^{∞} x[n] z^{−n}| < ∞.

ˆ If the unit circle (i.e. all z s.t. |z| = r = 1) is inside the ROC, then X(z)|_{z=1·e^{jω}} converges uniformly and the DTFT can be obtained from the Z transform as X(e^{jω}) = X(z)|_{z=e^{jω}}, and thus also converges uniformly.
Unit circle is inside ROC of X(z) ⇐⇒ ⇐⇒

The following sequences do not have a uniformly converging DTFT since they are not absolutely summable. Note that their Z transforms do not converge (uniformly) for any z either (i.e. their ROCs are empty), since x1[n] r^{−n} and x2[n] r^{−n} are not absolutely summable for any value of r.

ˆ x1[n] = sin(ω0 n) / (πn)

ˆ x2[n] = cos(ω0 n)

Their DTFTs X1(e^{jω}) and X2(e^{jω}) do not converge uniformly (x1[n], x2[n] are not absolutely summable) but are defined by other means: x1[n] is square summable and X1(e^{jω}) converges in the mean-square sense, while x2[n] is neither absolutely nor square summable but X2(e^{jω}) converges in the generalized-function sense.
Obtaining the DTFT from the Z transform (X(e^{jω}) = X(z)|_{z=e^{jω}}) should only be used if x[n] is absolutely summable, i.e. the DTFT converges uniformly (and the ROC of X(z) includes the unit circle). In other words, since X1(e^{jω}) and X2(e^{jω}) do not converge uniformly, they cannot be obtained from Z transforms, which do not exist.

A rational Z transform is expressed as follows :

Ex1: x[n] = an u[n]

Notes for this example:

ˆ Z-transform X(z) has a region of convergence (ROC) for any finite value of a :

ˆ DTFT X(e^{jω}) exists/converges uniformly only for some values of a :

ˆ For a values for which the DTFT exists/converges uniformly (i.e. ROC includes unit circle), DTFT can be obtained from the Z transform via X(e^{jω}) = X(z)|_{z=e^{jω}}

  – Let a = 2,

  – Let a = 1/2,

Ex2: x[n] = −an u[−n − 1]

From Ex1 & Ex2 above, we can see that different sequences have the same X(z) expression but
with different ROCs.
=⇒
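The conclusion above can be illustrated numerically. The sketch below uses an arbitrary a = 0.5 and evaluates a truncated Z-transform sum of each sequence at a test point inside its own ROC; both truncated sums match the common expression X(z) = 1/(1 − a z^{−1}):

```python
import numpy as np

# Ex1 and Ex2 share X(z) = 1/(1 - a z^{-1}) but have different ROCs.
# Evaluate each truncated Z-transform sum at a point inside its own ROC
# (a = 0.5, z = 2.0 and z = 0.25 are arbitrary test values).
a = 0.5
X = lambda z: 1.0 / (1 - a / z)

n = np.arange(0, 200)                    # x1[n] = a^n u[n], ROC |z| > |a|
S1 = np.sum(a**n * 2.0**(-n.astype(float)))

m = np.arange(-200, 0)                   # x2[n] = -a^n u[-n-1], ROC |z| < |a|
S2 = np.sum(-(a**m.astype(float)) * 0.25**(-m.astype(float)))
```

Evaluating either sum at a point outside its ROC would diverge instead.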

Ex3: x[n] = (1/2)^n u[n] + (−1/3)^n u[n] (Sum of two exponential sequences)

Ex4: x[n] = (1/2)^n u[n] + (−1/3)^n u[n] (Right-sided sequence)

Ex5: x[n] = (−1/3)^n u[n] − (1/2)^n u[−n − 1] (Two-sided sequence)

Ex6: x[n] = a^n (u[n] − u[n − N]) (Finite-length sequence)

Some common Z Transform pairs

Figure 2.1: (Table 3.1 in textbook) Some common Z Transform pairs.

Note that from pairs 5 and 6, almost all the other pairs can be derived.

2.2 Properties of the ROC for the Z Transform

Assume X(z) is rational for the properties below.


1. ROC is a ring or disc centered at origin in z-plane

2. DTFT X(ejω ) converges uniformly ⇐⇒ ROC of X(z) includes unit circle

3. ROC cannot contain any poles

4. x[n] finite-duration seq. =⇒ ROC entire z-plane except possibly at z = 0 and/or z = ∞

5. x[n] right-sided seq. =⇒ ROC outwards from outermost finite pole (possibly including z = ∞)

6. x[n] left-sided seq. =⇒ ROC inwards from innermost finite pole (possibly including z = 0)

7. x[n] two-sided seq. =⇒ ROC is a ring in the z-plane bounded by poles (or is empty)

8. ROC must be a connected region (i.e. it cannot be the union of multiple unconnected regions).

Note: These properties of ROC limit the possible ROCs that can be associated with a given
pole-zero plot (or X(z)).

Ex: Indicate all possible ROCs of following pole-zero plot:

2.3 The Inverse Z Transform

For typical sequences & Z Transforms in this course, less formal methods are sufficient & preferred:

1. Inspection method

2. Partial fraction expansion method

3. Power series expansion method

Inspection Method:

Recognize some transform pairs "by inspection".

Remember the table with common signal and Z transform pairs (Figure 2.1). (Remember basic ones, e.g. 5 and 6, and derive others from those and Z transform properties.)

Partial fraction expansion method:

Any rational X(z) can be expanded into a sum of partial fractions. (Then use inspection method
to determine time domain expression for each fraction)

Ex: X(z) = (1 + 2z^{−1} + z^{−2}) / (1 − (3/2)z^{−1} + (1/2)z^{−2}) = (1 + z^{−1})² / ((1 − (1/2)z^{−1})(1 − z^{−1})) with ROC |z| > 1 is given. Find x[n].
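A candidate answer for this example can be checked numerically. The constants below come from carrying out the partial-fraction expansion (verify them by hand before trusting the sketch); the check sums the resulting x[n] back into a Z-transform at a test point in the ROC:

```python
import numpy as np

# Candidate result of the partial-fraction expansion of this example:
#   X(z) = 2 - 9/(1 - (1/2) z^-1) + 8/(1 - z^-1)     (verify by hand!)
# so by inspection x[n] = 2 d[n] - 9 (1/2)^n u[n] + 8 u[n].
# The Z-transform of this x[n], summed numerically, should match the
# original rational X(z) at a test point with |z| > 1.
n = np.arange(0, 300)
x = -9.0 * 0.5**n + 8.0
x[0] += 2.0                               # the 2*delta[n] term

z = 1.5                                   # arbitrary point in the ROC |z| > 1
X_from_x = np.sum(x * z**(-n.astype(float)))
X_rational = (1 + 2/z + z**-2) / (1 - 1.5/z + 0.5 * z**-2)
```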

Power Series Method:

X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n} = ...

Ex: X(z) = z² (1 − (1/2)z^{−1})(1 + z^{−1})(1 − z^{−1}) =

Ex: Power series expansions for functions such as log, sin, etc. are known.
X(z) = log(1 + az^{−1}), |z| > |a|

Ex: X(z) = 1/(1 − az^{−1}), |z| > |a|        Ex: X(z) = 1/(1 − az^{−1}), |z| < |a|

2.4 Z Transform Properties

Consider pairs : x[n] ←→ X(z), ROC : Rx


x1 [n] ←→ X1 (z), ROC : R1
x2 [n] ←→ X2 (z), ROC : R2

Linearity:

Time-shifting:

Multiplication with an Exponential Sequence:

Ex: x[n] = rn cos(ω0 n)u[n]

Differentiation of X(z):

Ex: n·a^n u[n] ⇐⇒ X(z) =

Conjugation of a Complex Sequence:

Time Reversal:

Convolution of Sequences:

Initial Value Theorem:

Figure 2.2: (Table 3.2 in textbook) Z Transform properties

2.5 Z Transform and LTI Systems
LTI System:

Ex: Input x[n] = Au[n] to an LTI system with h[n] = an u[n]. Find output y[n] (assume |a| < 1)

LCCDE:

Ex: Suppose an LTI system is described by: y[n] = ay[n − 1] + x[n].


Find impulse response h[n] if system is causal. Find impulse response h[n] if system is not causal.

Causality and Stability of LTI Systems and the ROC of the system function H(z):

ˆ Causal LTI systems have right-sided impulse response ⇐⇒

ˆ Stability of LTI Systems ⇐⇒ ⇐⇒ DTFT H(e^{jω}) converges uniformly ⇐⇒ ROC of H(z) ...

ˆ LTI system both stable and causal ⇐⇒ All poles of H(z) inside unit circle.

Chapter 3

The Discrete Fourier Transform (DFT)

Contents
3.1 Discrete (-Time) Fourier Series (DFS)
3.2 Properties of DFS
3.3 DTFT of Periodic Signals
3.4 Sampling the DTFT
3.5 The Discrete Fourier Transform (DFT)
3.6 Properties of DFT
3.7 Computing Linear Convolution using DFT
3.7.1 Linear Convolution of Two Finite Length Sequences
3.7.2 Circular Convolution as Linear Convolution with Aliasing
3.7.3 Implementing LTI systems Using DFT

This chapter covers the Discrete Fourier Transform (DFT) and its properties. In particular, we
cover Sections 8.1-8.7 from our textbook. Reading assignment for this chapter is :

ˆ Sections 8.1-8.7 from our textbook.


The DT Fourier transform and the Z transform provide frequency-domain representations with continuous variables, ω or z :
N-point finite x[n] ←→ X(e^{jω}), X(z), where ω and z are continuous

In many applications, it is desirable to have finite extent frequency domain representations with
discrete variables :
N-point finite x[n] ←→ N-point finite X[k], where k is discrete

The DFT provides such a finite extent frequency domain representation with a discrete independent
variable while also having many of the beautiful properties of the Fourier representations.

The DFT of a finite sequence x[n] can be obtained by generating a periodic version of the sequence
(x̃[n]), obtaining its DTFS coefficients (X̃[k]), and then taking one period of it (X[k]) :

3.1 Discrete (-Time) Fourier Series (DFS)

Consider a sequence x̃[n] periodic with N :

As CT periodic signals, DT periodic signals can be represented as a weighted sum of harmonically related complex exponentials :

How to determine X̃[k] from x̃[n] ? By multiplying both sides with e^{−j(2π/N)rn} and summing from n = 0 to n = N − 1.

Let's introduce a new notation for e^{−j(2π/N)} :
The analysis and synthesis equations for the DFS are :

Note that both x̃[n] and X̃[k] are periodic with N .

Signal & DFS pair: x̃[n] ←−DFS−→ X̃[k]

Ex: DFS of Periodic Impulse Train x̃[n] = Σ_{r=−∞}^{∞} δ[n − rN]
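The impulse-train example can be checked with np.fft.fft, which computes exactly the DFS analysis sum over one period (here one period contains a single impulse at n = 0, so X̃[k] = 1 for all k; N = 7 is an arbitrary period):

```python
import numpy as np

# DFS of the periodic impulse train: one period (length N) contains a single
# impulse at n = 0, and np.fft.fft evaluates the DFS analysis sum over that
# period, giving X~[k] = 1 for all k.  N = 7 is an arbitrary choice.
N = 7
x_period = np.zeros(N)
x_period[0] = 1.0
X_dfs = np.fft.fft(x_period)
```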

Ex: DFS of Periodic Rectangular Pulse Train

Figure 3.1: Magnitude and phase of X[k]

3.2 Properties of DFS

For following properties assume the following signal DFS pairs :

Linearity :

Shift of Seq :

Duality Property:

Symmetry Properties:

Properties similar to DTFT properties exist for DFS.

Periodic Convolution:

Figure 3.2: (Table 8.1 in textbook) Computation of periodic convolution

Multiplication Property:

DFS properties

Figure 3.3: (Table 8.1 in textbook) DFS properties

3.3 DTFT of Periodic Signals

Consider sequence x̃[n] periodic with N, which we can represent with DFS representation :

It is also useful to represent periodic signals with DTFT :

To show the DTFT relation, take IDTFT of X̃(ejω ) :

Ex: DTFT of periodic impulse train p̃[n] = Σ_{r=−∞}^{∞} δ[n − rN]

Time and Frequency Domain Relation between finite-extent x[n] and its periodic version x̃[n]

Consider N -point finite extent sequence

Generate its periodic version

Summary : Samples of DTFT X(ejω ) of finite extent sequence x[n] give DFS coefficients X̃[k] of
x̃[n].

Note : The same result would be obtained even if x[n] is L-point (i.e. x[n] = 0 outside 0 ≤ n ≤ L − 1) and x̃[n] is generated with period N , i.e. x̃[n] = Σ_{r=−∞}^{∞} x[n − rN], where L > N or L < N . (More details in the next section.)

Ex: Consider x[n] = 1 for 0 ≤ n ≤ 4, and 0 otherwise.

Exercises:
1) Find DFS coefficients X̃1[k] for x̃1[n] = Σ_{r=−∞}^{∞} x[n − 8r]

2) Find DFS coefficients X̃2[k] for x̃2[n] = Σ_{r=−∞}^{∞} x[n − 4r]

3) Take samples from X(e^{jω}) at ω = k(2π/3), k = ..., −1, 0, 1, ..., to form X̃3[k].
Find the sequence x̃3[n] whose DFS coefficients are X̃3[k].

3.4 Sampling the DTFT

1. Consider non-periodic x[n] with DTFT X(e^{jω}) : (no assumption on length of x[n])

2. Take samples X̃[k] from X(e^{jω}) at frequencies ω_k = k(2π/N), k = 0, ±1, ±2, ...
The samples X̃[k] will be periodic with N since X(e^{jω}) is periodic with 2π.

3. The periodic sequence X̃[k] can be seen as DFS coefficients of a periodic sequence x̃[n].
What is x̃[n] equal to in terms of non-periodic x[n] ?

Figure 3.4: (Figure 8.8 in textbook) (a) x[n]. (b) Periodic x̃[n] with N=12.

Figure 3.5: (Figure 8.9 in textbook) Periodic x̃[n] with N=7. Aliasing in time domain.

4. If x[n] is finite with L samples and N ≥ L, then x[n] can be recovered from x̃[n] as one period
of it:

Otherwise (N < L), x[n] cannot be perfectly recovered from x̃[n] (aliasing in some samples).
(Note this looks like the dual of the sampling theorem: we need to take sufficiently many samples, i.e. N samples s.t. N ≥ L, in the DTFT domain to avoid aliasing in the time domain.)
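Steps 1–4 can be illustrated numerically. The sketch below uses an arbitrary L = 12 sequence and N = 7 < L, so the recovered x̃[n] must show time-domain aliasing, i.e. equal one period of Σ_r x[n − rN]:

```python
import numpy as np

# Sample the DTFT of an L-point x[n] at only N < L frequencies and observe
# aliasing in time.  x[n] = 0.9^n for 0 <= n < 12 is an arbitrary choice.
L, N = 12, 7
nn = np.arange(L)
x = 0.9**nn

# Step 2: N samples of X(e^jw) at w_k = 2*pi*k/N
Xk = np.array([np.sum(x * np.exp(-2j * np.pi * k * nn / N)) for k in range(N)])

# Step 3: treat the samples as DFS coefficients and invert them
x_tilde = np.fft.ifft(Xk).real

# One period of sum_r x[n - rN]: samples n = 0..L-N-1 receive aliased copies
expected = x[:N].copy()
expected[:L - N] += x[N:]
```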

Proof of 1-3 :

3.5 The Discrete Fourier Transform (DFT)

Remember the DTFT and DFS representations for finite and periodic sequences :

finite-extent x[n] (N -point) ←→ X(ejω ), periodic with 2π and ω is continuous


periodic x̃[n] (with N ) ←→ X̃[k], periodic with N

In many applications, it is desirable to have finite extent frequency domain representations with
discrete variables :
finite-extent x[n] (N -point) ←→ finite-extent X[k] (N -point), where k is discrete

The DFT provides such a finite extent frequency domain representation with a discrete independent
variable while also having many of the beautiful properties of the Fourier representations.

The DFT of a finite sequence x[n] can be obtained by generating a periodic version of the sequence
(x̃[n]), obtaining its DTFS coefficients (X̃[k]), and then taking one period of it (X[k]) :

Formal Development of DFT:

Assume finite-duration sequence x[n] of duration/length M :

Generate periodic x̃[n] with period N , such that N ≥ M :

Since N ≥ M , x[n] can be recovered from x̃[n] , as one period of x̃[n]:

Since x̃[n] is periodic with N , we have its DFS coefficients:

Define finite extent DFT coefficients X[k] as one period of X̃[k]

Note that the DFS coefficients can be obtained from DFT coefficients :

Remember the DFS equations relating x̃[n] and X̃[k]:

Since summations in DFS relations are over one period (i.e 0 to N − 1) in which x̃[n] = x[n], we
obtain the following DFT analysis and synthesis equations :

Arrow notation for DFT:

Sampling relation between DTFT and DFS carries over to DFT: (remember Sec. 8.4)

Note that since the DFT relates finite-extent x[n] to finite-extent X[k], the DFT relation can be written using matrix notation :

Ex: Consider the L = 5 point sequence x[n] = 1 for 0 ≤ n ≤ 4, and 0 otherwise.

Find its 5-pt DFT and 10-pt DFT (i.e. N = 5 and N = 10).
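A quick NumPy companion to this example (np.fft.fft(x, N) zero-pads x to length N, which matches the N-point DFT of the finite sequence here):

```python
import numpy as np

# The L = 5 rectangle and its 5-point and 10-point DFTs.
x = np.ones(5)
X5 = np.fft.fft(x, 5)     # N = 5: a single nonzero coefficient at k = 0
X10 = np.fft.fft(x, 10)   # N = 10: denser samples of the same DTFT
```

With N = 5 the rectangle fills the whole DFT window, so all energy lands at k = 0; with N = 10, the DFT samples the Dirichlet-shaped DTFT more densely and nonzero values appear at odd k.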

Exercises :

1. You are given the same x[n] and its DTFT X(e^{jω}). Form the 4-sample sequence

   X1[k] = X(e^{jω})|_{ω = k(2π/4)} for k = 0, 1, 2, 3, and 0 for other k.

   Find x1[n] which has 4-pt DFT X1[k].

2. You are given the same x[n]. Its 5-pt DFT is X[k]. Form

   x2[n] = x[n/2] for n even, and 0 otherwise.

   Find the 10-pt DFT of x2[n] as X2[k] in terms of X[k].

3.6 Properties of DFT

Note that in the DFT definition, both x[n] and X[k] are finite and zero outside the range [0, ..., N − 1].
⇒ This condition must be preserved in DFT properties, i.e. after manipulation of a signal's DFT.

To derive or understand the following DFT properties, one can use the DFS-DFT relation and the corresponding DFS properties.
⇒ Generate periodic x̃[n], apply the DFS property, then take one period of x̃[n] and of the obtained DFS X̃[k].

Linearity Property:

Circular Shift of a Sequence:

Ex:

Figure 3.6: (Figure 8.12 in textbook) Circular shift of a finite sequence x[n].

Circular Shift of a DFT:

Duality Property:

Note that ((−k))N = Thus, N x[((−k))10 ] =

Symmetry Properties :

Symmetry properties of DFT can be obtained using the symmetry properties of DFS and taking
one period of signal and DFS :

For real x[n] (i.e. x[n] = x∗ [n]), we have X[k] = X ∗ [((−k))N ] and thus :

Similarly :

Note again that ((−n))N =

Circular Convolution Property :

Consider x1 [n] with duration N1 and x2 [n] with duration N2 .


Let X1 [k] and X2 [k] be their N sample DFT’s , where N ≥ max{N1 , N2 }.

Remember the periodic convolution definition :

Note that circular convolution means: apply periodic convolution and take one period of the result.

1. Remember periodic convolution property of DFS:

2. To obtain DFT relation , take one period of periodic signal and its DFS :

Multiplication Property :

Ex: Two identical rectangular pulses of duration L : x1 [n] = x2 [n] =

ˆ Verify circular convolution property with N = L :

ˆ Verify circular convolution property with N = 2L

Note that

Summary of DFT properties:

Figure 3.7: (Table 8.2 in textbook) DFT properties

3.7 Computing Linear Convolution using DFT

There are efficient algorithms to compute the DFT of a sequence. In other words, there are
algorithms that require less computation (i.e. multiplication, addition) than computing the DFT
directly from its definition summation. One famous such efficient algorithm is the Fast Fourier
Transform (FFT) algorithm.

Due to the efficiency of the FFT algorithm, it may be more efficient (i.e. require less overall
multiplication and additions) to implement convolution of two sequences x3 [n] = x1 [n] ∗ x2 [n]
using the circular convolution property of DFT as follows :

1. Compute N-point DFT’s X1 [k] and X2 [k] of x1 [n] and x2 [n] using the FFT algorithm

2. Compute the product of the DFT’s : X3 [k] = X1 [k] · X2 [k]

3. Compute the N-point inverse DFT of X3[k] to obtain the circular convolution x3[n] = x1[n] ⊛N x2[n].

Note that the above procedure gives the circular convolution x1[n] ⊛N x2[n], but we wanted the (regular or linear) convolution x1[n] ∗ x2[n]. The two can be made equal by carefully choosing the DFT size N . (More details below)
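The three-step procedure can be sketched with NumPy (arbitrary test sequences; with N = L + P − 1 the circular convolution equals the linear one):

```python
import numpy as np

# DFT-based convolution, steps 1-3 above, with N chosen as L + P - 1 so
# that the circular convolution equals the linear convolution.
x1 = np.array([1.0, 2.0, 3.0, 4.0])            # L = 4 (arbitrary data)
x2 = np.array([1.0, -1.0, 2.0])                # P = 3 (arbitrary data)
N = len(x1) + len(x2) - 1                      # N = 6

X3 = np.fft.fft(x1, N) * np.fft.fft(x2, N)     # steps 1 and 2
x3 = np.fft.ifft(X3).real                      # step 3
ref = np.convolve(x1, x2)                      # direct linear convolution
```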

3.7.1 Linear Convolution of Two Finite Length Sequences

Consider the following : x1[n] with length L, x2[n] with length P

x3 [n] = x1 [n] ∗ x2 [n] =

Figure 3.8: (Figure 8.17 in textbook)

3.7.2 Circular Convolution as Linear Convolution with Aliasing

Remember Sect. 8.4 Sampling the DTFT :

1. For L-pt x[n] :

2. Sample DTFT X(e^{jω}) at ω_k = k(2π/N), k ∈ Z :

3. X̃[k] can be used as DFS coefficients :

Consider a similar discussion with DFT instead of DFS:

1. For L-pt x[n] :

2. Sample DTFT X(e^{jω}) in [0, 2π) :

3. X[k] can be viewed as DFT coeffs of an N-point sequence xp[n] :

Now consider a similar discussion for x3[n] = x1[n] ∗ x2[n], where x1[n] is L-point, x2[n] is P-point and x3[n] is an (L + P − 1)-point sequence :

1. For x3[n] = x1[n] ∗ x2[n] :

2. Sample X3(e^{jω}) in [0, 2π) :

3. X3[k] can be viewed as DFT coeffs of an N-point sequence x3p[n] :

4. But since X3[k] = X1[k] · X2[k], from the circular convolution theorem, we have:

Plot Σ_{r=−∞}^{∞} x3[n − rN] :

Thus circular convolution ...

Summary: For L-point x1 [n], P -point x2 [n] and (L + P − 1)-point linear convolution result x3 [n] =
x1 [n] ∗ x2 [n], we have the following results :
If N ≥ (L + P − 1) :

Otherwise N < (L + P − 1) :

Note that circular convolution can be implemented with DFT's (i.e. the FFT algorithm) :
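The "circular convolution = linear convolution with time aliasing" relation can be checked numerically; the pulses and the choice N = 4 < L + P − 1 = 5 below are arbitrary:

```python
import numpy as np

# Circular convolution as linear convolution with time aliasing, for two
# 3-point pulses and N = 4 (so aliasing must occur).
x1 = np.ones(3)
x2 = np.ones(3)
N = 4

circ = np.fft.ifft(np.fft.fft(x1, N) * np.fft.fft(x2, N)).real
lin = np.convolve(x1, x2)                 # [1, 2, 3, 2, 1], length 5

aliased = np.zeros(N)                     # fold the linear result mod N
for n, v in enumerate(lin):
    aliased[n % N] += v
```

The sample at n = 0 wraps around and adds to the first output sample, exactly as the folding sum predicts.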


Ex: Let x1[n] = x2[n] = 1 for 0 ≤ n < 3, and 0 otherwise. Find and plot x1[n] ⊛5 x2[n].

Exercise : Find and plot x1[n] ⊛2 x2[n] and x1[n] ⊛1 x2[n]. In MATLAB, find these results again by IDFT_N{DFT_N{x1[n]} · DFT_N{x2[n]}}, where N = 5, 2, 1. Use the fft(x,N) command for DFT_N{x[n]}.

3.7.3 Implementing LTI systems Using DFT

Consider T-point input sequence x[n], P-point impulse response h[n] :

ˆ Linear convolution gives (T+P-1)-point output: y[n] = x[n] ∗ h[n]


ˆ Circular convolution with length N s.t. N ≥ T + P − 1 will give the same (T+P−1)-point output as linear convolution : y[n] = x[n] ⊛N h[n]

ˆ Circular convolution can be implemented with DFT/IDFT:

Why do we implement the linear convolution x[n] ∗ h[n] not directly from the definition (Σ_{k=−∞}^{∞} x[k] h[n − k]), but from this procedure with DFT/IDFT?

ˆ This procedure may require less computation (addition, multiplication) than implementation
from convolution definition since DFT or IDFT operations can be calculated very efficiently
using the FFT algorithm.

ˆ Definition of convolution requires (T + P − 1) · P ≈ T · P multiplications


ˆ One N-pt DFT or IDFT with the FFT algorithm requires (N/2) log2 N multiplications

ˆ Let’s compare with signal x[n] of length T = 900, and h[n] of length P = 100 :
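The comparison in the last bullet can be roughed out as follows (these are coarse multiplication counts to show the order of magnitude, not exact operation counts):

```python
import numpy as np

# Coarse multiplication counts for T = 900, P = 100 (rough estimates only).
T, P = 900, 100
direct = (T + P - 1) * P                        # convolution definition
N = 1024                                        # smallest power of 2 >= T+P-1 = 999
fft_based = 3 * (N // 2) * int(np.log2(N)) + N  # 2 DFTs + 1 IDFT + N products
```

Even for this modest example, the FFT-based route needs roughly 6x fewer multiplications.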

In practice, the signal x[n] can be very long (e.g. a speech signal of 20 seconds).
=⇒ Divide the signal x[n] into small pieces/blocks, convolve each block with h[n] (we can use the FFT here for efficient computation), and combine the convolution results. There are two well-known such methods:

1. Overlap-Add Method :

Divide x[n] into non-overlapping length-L blocks xr[n].

To obtain y[n], add the convolution results yr[n] from each block.

The last (P−1) samples of each block's convolution result will overlap with the first (P−1) samples of the next block's result.

Overlapping samples are added when forming the overall output y[n].

Note that the convolution of each block and the filter can be performed with the FFT algorithm for efficient computation.

Figure 3.9: (Figure 8.22 and 8.23 in textbook)
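The steps above can be sketched as a minimal overlap-add implementation (the block length L and test signals are arbitrary assumptions, and overlap_add is a hypothetical helper name, not code from the textbook):

```python
import numpy as np

# Minimal overlap-add sketch: split x into non-overlapping length-L blocks,
# convolve each block with h via size-N FFTs (N >= L + P - 1), and add the
# overlapping tails of consecutive block results.
def overlap_add(x, h, L=8):
    P = len(h)
    N = 1
    while N < L + P - 1:                  # FFT size per block
        N *= 2
    H = np.fft.fft(h, N)
    y = np.zeros(len(x) + P - 1)
    for start in range(0, len(x), L):
        block = x[start:start + L]
        yr = np.fft.ifft(np.fft.fft(block, N) * H).real
        seg = len(block) + P - 1          # valid part of this block's result
        y[start:start + seg] += yr[:seg]
    return y

x = np.sin(0.1 * np.arange(50))           # arbitrary "long" input
h = np.array([0.5, 1.0, 0.25])            # arbitrary P = 3 impulse response
y = overlap_add(x, h)
```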

2. Overlap-Save Method :

Divide x[n] into overlapping length-L blocks xk[n], where the first (P−1) samples overlap with the previous block.

Perform the L-point circular convolution of each block xk[n] with the filter h[n] :
yk[n] = xk[n] ⊛L h[n]

Since L < (L + P − 1), the first (P − 1) samples of the circular convolution result yk[n] will be time-aliased, and the remaining L − (P − 1) samples will be correctly equal to the linear convolution result xk[n] ∗ h[n].

Discard these incorrect first (P − 1) samples when combining the yk[n] to obtain the overall result y[n].

The discarded first (P − 1) samples of yk[n] come from the convolution summation over the first (P − 1) samples of xk[n]. But due to the overlap of the blocks xk[n], these first (P − 1) samples of xk[n] are equal to the last (P − 1) samples of the previous block xk−1[n]. Thus combining the yk[n] in this manner correctly produces the overall output y[n] = x[n] ∗ h[n].

Note that the circular convolution of each overlapping block can be performed with the FFT algorithm for efficient computation.

Figure 3.10: (Figure 8.24 in textbook)

For a visual illustration of the Overlap add and save methods, you can watch the following videos
on youtube (Watch at least the range from 1:30 to 3:00) :

ˆ Overlap-add method : https://youtu.be/FPzZj30hPY4


ˆ Overlap-save method : https://youtu.be/gulQfZPcnw8.

Chapter 4

Sampling of Continuous-time (CT)


Signals

Contents
4.1 Periodic Sampling
4.2 Frequency-domain Representation of Sampling
4.3 Reconstruction of a Band-Limited Signal from Its Samples
4.4 Discrete-time processing of continuous-time signals
4.4.1 Impulse Invariance
4.5 CT Processing of DT Signals
4.6 Changing Sampling Rate Using DT Processing
4.6.1 Sampling Rate Reduction by an Integer Factor
4.6.2 Increasing Sampling Rate by an Integer Factor
4.6.3 Simple Interpolation
4.6.4 Changing Sampling Rate by Non-Integer Factor
4.6.5 Sampling of band-pass signals
4.7 Digital Processing of Analog Signals
4.7.1 Prefiltering to Avoid Aliasing
4.7.2 Analog-to-Digital (A/D) Conversion
4.7.3 Analysis of Quantization error
4.7.4 Digital-to-Analog (D/A) Conversion

This chapter covers the sampling of continuous-time signals and related topics. Under some con-
straints given by the sampling theorem, a continuous-time signal can be accurately represented by
the samples taken from it at regular discrete points in time. This property enables continuous-time
signal processing to be implemented through a process of sampling, discrete-time processing, and
then subsequent reconstruction of a continuous-time signal.

We cover Sections 4.1-4.6 and 4.8 from our textbook. Reading assignment for this chapter is :

ˆ Sections 4.1-4.6 and 4.8 from our textbook.

4.1 Periodic Sampling

Sampling is taking samples from a continuous-time signal at periodic points in time :

A convenient mathematical method for sampling is (shorter model is x[n] = ):

Figure 4.1: (Figure 4.2 in textbook) Sampling with periodic impulse train and conversion to DT signal.

4.2 Frequency-domain Representation of Sampling

Let’s analyse the mathematical sampling model in frequency domain :

Nyquist Sampling Theorem:

Let xc(t) be a bandlimited signal with Xc(jΩ) = 0 for |Ω| ≥ ΩN . Then xc(t) is uniquely determined by its samples x[n] = xc(nT), n = 0, ±1, ±2, ..., if Ωs ≥ 2ΩN , where T and Ωs = 2π/T are the sampling period and frequency, respectively.

Let’s also relate DTFT{x[n]} = X(ejω ) to Xs (jΩ) and Xc (jΩ):

Ex: xc(t) = cos(4000πt). Sample with T = 1/6000 sec.

Ex: xc(t) = cos(16000πt). Sample with the same T = 1/6000 sec.
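The two examples can be reproduced numerically: sampled at T = 1/6000 sec, the 8 kHz cosine produces exactly the same sequence as the 2 kHz cosine, since cos(16000π·nT) = cos(8πn/3) = cos(2πn/3) = cos(4000π·nT):

```python
import numpy as np

# Sampling both example signals with T = 1/6000 sec gives identical
# sequences: the 8 kHz cosine aliases onto the 2 kHz cosine.
T = 1.0 / 6000.0
n = np.arange(20)
x1 = np.cos(4000 * np.pi * n * T)
x2 = np.cos(16000 * np.pi * n * T)
```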

4.3 Reconstruction of a Band-Limited Signal from Its Samples

Figure 4.2: (Figure 4.2 in textbook) Sampling with periodic impulse train and conversion to DT signal.

Figure 4.3: (Figure 4.10 in textbook) (a) Ideal band-limited signal reconstruction system. (b) Equivalent representation as ideal D/C converter.

To reconstruct a CT signal, first, form a CT impulse train using the DT sequence :

Next, filter it with an ideal reconstruction filter Hr (jΩ) :

Remember that Hr (jΩ) has gain T and cutoff frequency Ωc such that :

A convenient and common choice for Ωc is center of that range:

Note that hr(t)|_{t=0} = 1 and hr(t)|_{t=nT} = 0 for n = ±1, ±2, ... Hence, xr(t)|_{t=mT} = x[m] = xc(mT) for any integer m, i.e. the reconstructed signal xr(t) and the original signal xc(t) have the same values at the sampling instants t = mT, independently of the sampling period T .

Consider also frequency-domain analysis of D/C conversion:

4.4 Discrete-time processing of continuous-time signals

Figure 4.4: (Figure 4.11 in textbook) DT processing of CT signals

Remember C/D and D/C converters’ frequency domain relations :

ˆ If DT system is identity (y[n] = x[n]) :


The overall system becomes sampling xc(t) and reconstructing the signal yr(t) from the samples.
=⇒

ˆ If DT system is not LTI :


Difficult to find a general relation between Xc (jΩ) and Yr (jΩ).

ˆ If DT system is LTI with frequency response H(ejω ) :


=⇒

Summary : If xc (t) is band-limited and Nyquist theorem is satisfied during sampling (i.e. Xc (jΩ) =
0 for |Ω| > Tπ ), then the overall system is equivalent to a CT LTI system with frequency response

What if xc (t) is not band-limited or Nyquist theorem is not satisfied ?


Then one approach is to go step-by-step in the frequency domain (by possibly also drawing fre-
quency domain representations) over all steps in the entire system.

Ex: Consider an LTI DT system with H(e^{jω}) = 1 for |ω| < ωc, and 0 for ωc < |ω| < π. For band-limited inputs xc(t), the overall CT system (DT processing of CT signals) with a sampling period T that satisfies the Nyquist theorem will behave like a CT LTI system with

4.4.1 Impulse Invariance

Consider the reverse of our discussion so far. In other words, we are now given a desired CT system H(jΩ) that we need to implement with "DT processing of CT signals". How do we choose T and H(e^{jω}) (or h[n]) in the system ?
The answer is given in two steps, remembering our previous main result:

1. Choose T such that

2. Let H(ejω ) =

Under the condition in 1., the relationship between the desired CT and the DT system can also be
written in time domain :

Pf.

Ex: We wish to obtain an ideal LPF DT filter with cut-off ωc = 2π/3 (h[n] ←→ H(e^{jω})).
We can do it by sampling an ideal LPF CT filter with cut-off Ωc . (h(t) ←→ Hc(jΩ))

4.5 CT Processing of DT Signals

Figure 4.5: (Figure 4.16 in textbook) CT processing of DT signals

Note that such a system is not used to implement DT systems in practice; however, it can be a useful model to interpret certain DT systems.

By definition of D/C converter (with ideal reconstruction filter ):

D/C conversion in frequency domain :

CT filter output :

C/D conversion in frequency domain :

Substitute previous equations :

Summary: Overall DT system behaves with frequency response:

Ex: Consider DT system with H(ejω ) = e−jω∆ , |ω| < π.


When ∆ is an integer :

When ∆ is not an integer :

Overall DT system interpretation : generate xc(t) with the D/C converter from the samples x[n], shift xc(t) by T·∆, and sample the shifted signal with the C/D converter.

Also possible to obtain relation in time domain:

4.6 Changing Sampling Rate Using DT Processing

Sample xc (t) with T :


Often there is a need to change the sampling rate of x[n], i.e. generate a new signal x1[n] = xc(nT1) (where T1 ≠ T) from x[n].

Conceptually it can be done as :

In practice it is done completely in DT as :

Let’s discuss sampling rate reduction and increase by integer factors first, then by non-integer
factors next.

4.6.1 Sampling Rate Reduction by an Integer Factor

Relationship in time domain :

Relationship in frequency domain :

Pfs:

Note : In order to avoid aliasing in xd[n] or Xd(e^{jω}), we must have X(e^{jω}) = 0 for π/M < |ω| < π, or equivalently, x[n] must have been obtained by sampling xc(t) at least M times higher than the Nyquist rate.

Consider same X(ejω ), but let M = 3 now. Then Xd (ejω )=

If the desired M is too large, s.t. X(e^{jω}) ≠ 0 for π/M < |ω| < π, or equivalently x[n] was not obtained at M times higher than the Nyquist rate, then an ideal LPF with cutoff ωc = π/M can be used to discard part of the spectrum before downsampling so that aliasing is avoided:

4.6.2 Increasing Sampling Rate by an Integer Factor

Relationship in time domain :

Relationship in frequency domain :

4.6.3 Simple Interpolation


Ideal LPF hi[n] = sin(πn/L) / (πn/L) is difficult to implement, as it has infinitely many samples. Simpler filters may be enough, or may be forced by computational requirements in some applications. One such simple filter is the linear interpolator with hi[n] =
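The triangular linear-interpolator filter can be tried numerically. The sketch below interpolates by L = 2 (expand with zeros, then convolve with the length-(2L−1) triangle) and checks that the original samples are preserved and the new samples are midpoints; the test data is arbitrary:

```python
import numpy as np

# Interpolation by L = 2 with the triangular (linear-interpolator) filter.
L = 2
x = np.array([0.0, 1.0, 4.0, 9.0, 16.0])   # arbitrary samples

xe = np.zeros(L * len(x))                  # expander output (zeros inserted)
xe[::L] = x

# Triangle of support 2L-1: [1/2, 1, 1/2] for L = 2
h = np.concatenate([np.arange(1, L + 1), np.arange(L - 1, 0, -1)]) / L
xi = np.convolve(xe, h)[L - 1 : L - 1 + len(xe)]   # align and trim
```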

4.6.4 Changing Sampling Rate by Non-Integer Factor

This can be achieved by interpolation followed by decimation.

Figure 4.6: (Figure 4.28 in textbook) System for changing sampling rate by non-integer factor

The ideal LPFs can be combined to obtain :

If M > L ⇒ sampling period increases (i.e. sampling rate or frequency decreases)

If M < L ⇒ sampling period decreases (i.e. sampling rate or frequency increases)

Note : Do not apply the decimator first and then the interpolator; this can cause aliasing.

Ex: Let L = 2, M = 3, i.e. a net decrease in sampling rate.

Here, x[n] is sampled at exactly the Nyquist rate, and thus the ideal LPF discards part of the spectrum.

Ex: (A previous MT question)

a) Assume L = 3, M = 2 and T1 = 1/4000 sec. Find T2 and H1(e^{jω}) such that xr(t) = xc(t).

b) Assume T1 = T2 = 1/4000 sec. Find L, M and H1(e^{jω}) s.t. Xr(jΩ) is as follows :
c) Assume L = M = 3 and H1 (ejω ) is fixed. Let p[n] = x[n] ∗ h2 [n]. If p[n] = y[n], find H2 (ejω ) in
terms of H1 (ejω ).

(Approach to solve such problems : Remember frequency domain relations of building blocks used
in sampling (e.g. C/D converter). Write after each building block in the given system the relation
between its input and output in frequency domain, and also plot the output’s spectrum in terms of
the input’s spectrum.)

Solution :

c) If p[n] = y[n], then y[n] = x[n] ∗ h2 [n], which implies

Solution to a) : Solution to b) :

4.6.5 Sampling of band-pass signals

Remember Nyquist sampling theorem :

Suppose we have a band-pass signal of the form :

It may be possible to sample this band-pass signal below Nyquist rate of 2(Ωc + Ωb ) and still recover
it from the samples :

If replicas of Xc (jΩ) in Xs (jΩ) fit into the empty slots, it is possible to avoid aliasing and recover
from Xs (jΩ) back the Xc (jΩ) by band-pass filtering :

Ex:

In general, for band-pass signals, the minimum sampling rate Ωs,min can be found in two steps :

1. Find the integer r such that r ≤ (Ωc + Ωb) / (2Ωb) < r + 1

2. Ωs,min = 2 (Ωc + Ωb) / r
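The two steps can be wrapped in a small helper. This assumes, as in the rate 2(Ωc + Ωb) above, that the band-pass signal occupies frequencies up to Ωc + Ωb (Ωb = half-bandwidth); the example values below are arbitrary, not the ones from the figure:

```python
# The two-step recipe for the minimum band-pass sampling rate.
def min_bandpass_rate(omega_c, omega_b):
    # Step 1: integer r with r <= (Wc + Wb) / (2 Wb) < r + 1
    r = int((omega_c + omega_b) // (2 * omega_b))
    # Step 2: minimum sampling rate
    return 2 * (omega_c + omega_b) / r

rate = min_bandpass_rate(9.0, 1.0)   # band edge at 10, bandwidth 2
```

For this integer-band case the minimum rate equals twice the bandwidth, well below the low-pass Nyquist rate of 20.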

Ex: For the above example, we have

1.

2.

4.7 Digital Processing of Analog Signals

Up to now, we used ideal building blocks, such as ideal C/D, D/C converters and ideal low-pass
filters. This allowed us to focus on the essential mathematical relationships between a CT signal,
its samples and the reconstruction from the samples. In practice, these building blocks are not
ideal. For example, CT signals are not exactly band-limited, ideal filters can not be implemented
and ideal C/D and D/C converters are approximated by devices called analog-to-digital (A/D) and
digital-to-analog (D/A) converters. Thus a more realistic model for sampling is as follows.

Figure 4.7: (Figure 4.41 in textbook) (a) Discrete-time filtering of continuous-time signals (b) Digital processing of
analog signals

4.7.1 Prefiltering to Avoid Aliasing

If the input CT signal xc (t) is not band-limited or the required Nyquist frequency is too high for
your digital system, aliasing will occur in sampling. To avoid aliasing, a low-pass filter (called
anti-aliasing filter) can be used to reduce the bandwidth of the input to half of the desired
sampling frequency Ωs :

Since

Hence,

If anti-aliasing filter is not ideal LPF, then :

If a sharp cut-off anti-aliasing filter cannot be used (because an ideal LPF is difficult to implement in CT), then the following system can be used :

Figure 4.8: (Figure 4.43 in textbook) Using oversampled A/D conversion to simplify CT anti-aliasing filter.

Note :

Ex: (Signal is not band-limited due to noise and a simple CT anti-aliasing filter is available)

Figure 4.9: (Figure 4.44 in textbook)

4.7.2 Analog-to-Digital (A/D) Conversion

An ideal C/D converter converts a CT signal xa (t) to a DT signal x[n], where each sample of the
DT signal is known with infinite precision. In practice, the DT signal samples have finite precision,
i.e. are quantized, and such conversion is performed using an A/D converter circuit.

Figure 4.10: Practical A/D conversion.

Figure 4.11: Conceptual/mathematical model for the practical A/D conversion.

Practical A/D converter:

• A device or circuit that converts a continuous voltage at its input to a binary code representing a quantized value of the input.

• Requires a constant input voltage for some time (T) to operate (i.e. needs a sample-and-hold device in front of it).

Quantizer:

A non-linear system that transforms an input sample x[n] to one of finite possible prescribed values
x̂[n], which can be represented with the Q(.) operator as

x̂[n] = Q(x[n]).

x̂[n] is called the quantized value. The following figure shows a typical quantizer, where the input values x[n] are rounded to the nearest quantization level.

Note that the quantization operation is an irreversible (i.e. lossy) operation.

Figure 4.12: (Figure 4.48 in textbook) Typical quantizer for A/D conversion.

Figure 4.13: (Figure 4.49 in textbook) Sampling, quantization and coding (and D/A conversion discussed later)

4.7.3 Analysis of Quantization error

Quantization introduces error (called quantization error) in the sample value of x[n] :
In our typical quantizer the error is constrained :

Figure 4.14: (Figure 4.51 in textbook) a) Unquantized samples of the signal x[n] b) Quantized samples x̂[n] with
3-bit quantizer. c) Quantization error e[n] for 3-bit quantizer. d) Quantization error e[n] for 8 bit quantizer.

A simplified but useful model for quantization:

Typical assumptions on the random noise signal e[n]:

• Stationary white noise, uncorrelated with x[n].

• Each e[n] has uniform pdf in [−∆/2, ∆/2] =⇒ the variance of the error is σe² = ∆²/12 = (XM 2^{−B})²/12
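This noise model is easy to check in simulation. A minimal sketch for a rounding quantizer (the bit count, full-scale level and random test signal are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
B, Xm = 8, 1.0                        # (B+1)-bit quantizer with full-scale level Xm
delta = Xm * 2.0 ** (-B)              # quantization step ∆

x = rng.uniform(-Xm, Xm, 200_000)     # input samples spanning the full range
e = delta * np.round(x / delta) - x   # quantization error e[n] = Q(x[n]) - x[n]

# error confined to [-∆/2, ∆/2], with variance close to ∆²/12
assert np.max(np.abs(e)) <= delta / 2 + 1e-12
print(np.var(e), delta**2 / 12)
```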

4.7.4 Digital-to-Analog (D/A) Conversion

Remember the ideal D/C converter model :

Figure 4.15: (Figure 4.10 in textbook) (a) Ideal band-limited signal reconstruction system. (b) Equivalent representation as ideal D/C converter.

In practice a D/A converter followed by a compensated reconstruction filter is used :

Chapter 5

Transform Analysis of LTI Systems

Contents
5.1 Frequency Response of LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.2 Systems Characterized by LCCDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.3 Frequency Response for Rational System Functions . . . . . . . . . . . . . . . . . . . . 99
5.4 Relationship Between Magnitude And Phase . . . . . . . . . . . . . . . . . . . . . . . . . 108
5.5 All-pass Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
5.6 Minimum-phase Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5.6.1 Minimum-phase and All-pass Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5.6.2 Frequency Response Compensation of Non-minimum-Phase Systems . . . . . . . . . . . . . 114
5.6.3 Properties of Minimum-phase Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
5.7 Linear Systems with Generalized Linear Phase . . . . . . . . . . . . . . . . . . . . . . . 118
5.7.1 Linear Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
5.7.2 Generalized Linear Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
5.7.3 Causal Generalized Linear Phase Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

This chapter presents the representation and analysis of LTI systems using the Fourier and Z transforms in more detail. Many properties of LTI systems can be obtained from the frequency response (Fourier transform of the impulse response) and the system function (Z transform of the impulse response). We cover Sections 5.1-5.7 from our textbook and the reading assignment for this chapter is :

• Sections 5.1-5.7 from our textbook.

5.1 Frequency Response of LTI Systems

For an LTI system, the input-output relation can be represented as follows :

Note that the phase ∠H(ejω ) is not uniquely defined :

Principal value of phase ( ARG[H(ejω )] ):

Continuous phase function ( arg[H(ejω )] ):

(See Figure 5.1 below).

Group Delay τ (ω) :

Ex: Ideal delay LTI system: hid [n] = δ[n − nd ] ⇐⇒ Hid (ejω ) = e−jωnd

Hence the only distortion in the ideal delay system is time delay, which is acceptable in many
applications.
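As a numerical sketch, the group delay τ(ω) = −d(arg H)/dω of the ideal delay can be recovered directly from its frequency response (nd = 3 is an illustrative delay):

```python
import numpy as np

nd = 3
w = np.linspace(-np.pi, np.pi, 2001)[1:-1]   # open interval to avoid endpoint wraps
H = np.exp(-1j * w * nd)                     # H_id(e^{jw}) = e^{-jw*nd}

mag = np.abs(H)                              # = 1: no magnitude distortion
phase = np.unwrap(np.angle(H))               # continuous phase arg[H] = -w*nd (+ const)
grd = -np.diff(phase) / np.diff(w)           # numerical group delay

assert np.allclose(mag, 1.0)
assert np.allclose(grd, nd)                  # constant delay of nd samples
```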

Figure 5.1: (a) Continuous phase (b) Principal value
of phase (c) Integer multiples of 2π added

Ex: Filtering of narrowband signals:

In the narrowband region (ω0 − ∆ω < |ω| < ω0 + ∆ω), H(ejω) can be approximated as follows (assume h[n] is real) :

Then, the output to the "narrowband" signal x0[n] is :

Similarly, for narrowband


Ex: Consider LTI system with H(ejω ) and input with X(ejω ) shown below. Output y[n] can be
approximately calculated and plotted as below using group-delay and magnitude of H(ejω ).

Figure 5.2: (a) Continuous phase (b) Principal value of phase (c) Integer multiples of 2π added

Time dispersion can occur if group-delay differs significantly for each ’pocket’ of the input signal.

5.2 Systems Characterized by LCCDE

LTI systems can be represented with LCCDE, which yields a rational system function H(z).

Σ_{k=0}^{N} a_k y[n − k] = Σ_{k=0}^{M} b_k x[n − k]

Σ_{k=0}^{N} a_k Y(z) z^{−k} = Σ_{k=0}^{M} b_k X(z) z^{−k}
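The resulting rational H(z) = Y(z)/X(z) can be evaluated on the unit circle directly from the a_k, b_k coefficients; scipy.signal.freqz does exactly this. A small sketch (the coefficients are arbitrary, and SciPy is assumed to be available):

```python
import numpy as np
from scipy.signal import freqz

# LCCDE: y[n] - 0.9 y[n-1] = x[n] - 0.5 x[n-1]
b = [1.0, -0.5]                   # feedforward coefficients b_k
a = [1.0, -0.9]                   # feedback coefficients a_k
w, H = freqz(b, a, worN=512)      # frequency response on w in [0, pi)

# check against H(z) = (b0 + b1 z^-1)/(a0 + a1 z^-1) evaluated at z = e^{jw}
z = np.exp(1j * w)
H_direct = (b[0] + b[1] / z) / (a[0] + a[1] / z)
assert np.allclose(H, H_direct)
```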

Notes :

• (1 − c_k z^{−1}) terms ⇒

• (1 − d_k z^{−1}) terms ⇒

• Given the LCCDE, you can obtain H(z).

• Given H(z), you can obtain the LCCDE.

• The LCCDE alone does not uniquely determine the system for FIR h[n] (need auxiliary conditions).

• H(z) alone does not uniquely determine the system (need ROC).
For such LTI systems, remember the following important properties :

• System is causal ⇒ h[n] is right-sided ⇔ ROC of H(z) is outside of the outermost pole.

• System is stable ⇔ Σ_{n=−∞}^{∞} |h[n]| < ∞ ⇔ ROC of H(z) includes the unit circle.

• System is stable and causal ⇒ all poles of H(z) are inside the unit circle.
Inverse Systems

An LTI system with impulse response h[n] and its inverse LTI system with impulse response hi [n]
must satisfy the following relationship :

Note that not all LTI systems have an inverse. (E.g. Ideal low-pass filter does not.)
Systems with rational H(z) have inverses:

Ex: H(z) = (1 − 0.5z^{−1}) / (1 − 0.9z^{−1}),  ROC: |z| > 0.9
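A quick numerical check of this example: the impulse responses below follow from the partial-fraction expansions of H(z) and of its (also causal and stable) inverse Hi(z) = (1 − 0.9z^{−1})/(1 − 0.5z^{−1}), and their cascade should be the identity system.

```python
import numpy as np

n = np.arange(200)
# h[n] for H(z) = (1 - 0.5 z^-1)/(1 - 0.9 z^-1), causal (ROC |z| > 0.9):
h = 0.9 ** n
h[1:] -= 0.5 * 0.9 ** (n[1:] - 1)     # h[n] = 0.9^n u[n] - 0.5 * 0.9^(n-1) u[n-1]
# hi[n] for the inverse Hi(z) = (1 - 0.9 z^-1)/(1 - 0.5 z^-1), causal:
hi = 0.5 ** n
hi[1:] -= 0.9 * 0.5 ** (n[1:] - 1)

cascade = np.convolve(h, hi)[:200]    # (h * hi)[n] is exact for n < 200
delta = np.zeros(200)
delta[0] = 1.0
assert np.allclose(cascade, delta, atol=1e-8)   # h[n] * hi[n] = δ[n]
```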

Note :

• An LTI system and its inverse are both stable and causal ⇔

• Such systems are called minimum-phase systems (more details in the next sections).
Impulse Response for Rational System Functions

Consider a rational system function H(z) as multiplication and division of first order terms and its
partial fraction expansion.

If the system is causal, then the ROC of H(z) is outside of outermost pole ⇒

Two classes of system functions can be identified from a given system function :

1. (IIR system) At least one nonzero pole of H(z) exists.

2. (FIR system) H(z) has no poles except at z = 0 (i.e. N = 0).



Ex: h[n] = a^n u[n], |a| < 1

Ex: h[n] = a^n (u[n] − u[n − M − 1]), |a| < 1

5.3 Frequency Response for Rational System Functions

Consider an LTI system with the following rational system function H(z) and the corresponding
frequency response and related expressions.
H(z) = (b0/a0) · [ Π_{k=1}^{M} (1 − c_k z^{−1}) ] / [ Π_{k=1}^{N} (1 − d_k z^{−1}) ]

=⇒ If the ROC includes the unit circle: H(ejω) = (b0/a0) · [ Π_{k=1}^{M} (1 − c_k e^{−jω}) ] / [ Π_{k=1}^{N} (1 − d_k e^{−jω}) ]

=⇒ |H(ejω)| = |b0/a0| · [ Π_{k=1}^{M} |1 − c_k e^{−jω}| ] / [ Π_{k=1}^{N} |1 − d_k e^{−jω}| ]

=⇒ |H(ejω)|² =

=⇒ G(dB) = 20 log10 |H(ejω)| = 20 log10 |b0/a0| + Σ_{k=1}^{M} 20 log10 |1 − c_k e^{−jω}| − Σ_{k=1}^{N} 20 log10 |1 − d_k e^{−jω}|

=⇒ ∠H(ejω) = ∠(b0/a0) + Σ_{k=1}^{M} ∠(1 − c_k e^{−jω}) − Σ_{k=1}^{N} ∠(1 − d_k e^{−jω})

=⇒ grd[H(ejω)] = −Σ_{k=1}^{M} (d/dω)(arg[1 − c_k e^{−jω}]) + Σ_{k=1}^{N} (d/dω)(arg[1 − d_k e^{−jω}])

Frequency Response of a single zero (or pole):


Frequency response functions for a single pole or zero at z = ck , i.e. a single factor like below, can
be obtained as follows.

(1 − c_k e^{−jω}) = (1 − r e^{jθ} e^{−jω}),  where c_k = r e^{jθ}.

|1 − r e^{jθ} e^{−jω}|² = (1 − r e^{jθ} e^{−jω})(1 − r e^{−jθ} e^{jω}) = 1 + r² − 2r cos(ω − θ)

ARG[1 − r e^{jθ} e^{−jω}] = ARG[1 − r cos(ω − θ) + j r sin(ω − θ)] = arctan( r sin(ω − θ) / (1 − r cos(ω − θ)) )

grd[1 − r e^{jθ} e^{−jω}] = −(d/dω) arctan( r sin(ω − θ) / (1 − r cos(ω − θ)) ) = ... = (r² − r cos(ω − θ)) / (1 + r² − 2r cos(ω − θ))
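These three closed-form expressions can be verified numerically; the zero location below (r = 0.9, θ = π/4) is an arbitrary illustrative choice:

```python
import numpy as np

r, theta = 0.9, np.pi / 4
w = np.linspace(-np.pi, np.pi, 40001)
F = 1 - r * np.exp(1j * theta) * np.exp(-1j * w)       # the factor (1 - c e^{-jw})

# closed-form magnitude-squared, phase and group delay from the notes
mag2 = 1 + r**2 - 2 * r * np.cos(w - theta)
phase = np.arctan(r * np.sin(w - theta) / (1 - r * np.cos(w - theta)))
grd = (r**2 - r * np.cos(w - theta)) / (1 + r**2 - 2 * r * np.cos(w - theta))

assert np.allclose(np.abs(F)**2, mag2)
assert np.allclose(np.angle(F), phase)
# group delay vs. a numerical derivative of the (unwrapped) phase
grd_num = -np.gradient(np.unwrap(np.angle(F)), w)
assert np.allclose(grd[1:-1], grd_num[1:-1], atol=1e-3)
```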

Vector Diagrams in z-plane :
A simple geometric construction with vectors is often very useful in approximate sketching of fre-
quency response functions directly from the pole-zero plot as follows.

Figure 5.3: Frequency response for a single zero at r = 0.9 and three values of θ. (a) Log magnitude (b) Phase (c) Group-delay

Figure 5.4: Frequency response for a single zero with θ = π and four values of r = 1, 0.9, 0.7, 0.5. (a) Log magnitude (b) Phase (c) Group-delay

Note that there will be phase jump of π if there is a zero or pole on the unit circle (i.e.
r = 1).

For a zero or pole at rejθ , the following observations can be made :

1. The magnitude has a minimum/maximum for a zero/pole at (around for higher orders) ω = θ.

2. The absolute rate of change of the phase is maximum around ω = θ.

3. • For r < 1, the phase tends to increase/decrease for a zero/pole towards ω = θ.

   • For r > 1, the phase tends to decrease/increase for a zero/pole towards ω = θ.

4. (After (2)) The absolute value of group delay has a maximum around ω = θ.

5. These four effects become stronger as |r| → 1, i.e., as the zero/pole approaches the unit circle.

Frequency Response of multiple zeros or poles:


Vector diagrams are again useful. Consider a 2nd order system with complex conjugate pair of poles.

Figure 5.5: Complex conjugate poles at 0.9e^{±jπ/4}. (a) Log mag. (b) Phase (c) Grd

In general, when there are multiple poles and zeros, the above observations on single zero/pole
can be used to make a rough sketch of frequency response magnitude, phase and group delay.

Note, however, that with multiple poles and zeros, the absolute rate of change of the phase is not
necessarily maximum at the exact pole/zero angle, i.e., ω = θ.

A great animation illustrating approximate frequency response sketches can be found here: https://engineering.purdue.edu/VISE/ee438/demos/flash/pole_zero.html

Some example pole-zero plots and corresponding frequency response plots are given below for your
investigation. (From Tolga Ciloglu’s notes.)

5.4 Relationship Between Magnitude And Phase

In general knowledge about magnitude response |H(ejω )| or phase response ∠H(ejω ) gives no in-
formation about the other. However, for LTI systems described by LCCDE (i.e. rational system
functions), there are some constraints between |H(ejω )| and ∠H(ejω ).
• If |H(ejω)| (and the number of poles and zeros) is known ⇒ finite number of choices for ∠H(ejω).

• If ∠H(ejω) (and the number of poles and zeros) is known ⇒ finite number of choices for |H(ejω)| (within a scaling constant).

• If H(ejω) is a minimum-phase system, the constraints imply a unique choice :

  – If |H(ejω)| (and the number of poles and zeros) is known ⇒ ∠H(ejω) can be found uniquely.
  – If ∠H(ejω) (and the number of poles and zeros) is known ⇒ |H(ejω)| can be found uniquely (within a scaling constant).

To discuss these, first note the following two facts :


1. H(z) and H*(1/z*) give the same magnitude response |H(ejω)| :

2. The poles (zeros) of H(z) are conjugate reciprocals of the poles (zeros) of H*(1/z*) :

Let us define C(z) and notice that it gives the squared magnitude response |H(ejω )|2 when evaluated
on the unit circle.
C(z) = H(z) H*(1/z*)

C(z)|_{z=ejω} = H(ejω) H*(ejω) = |H(ejω)|²

A general form C(z) is as follows :

If we know |H(ejω)|², we can find C(z) by replacing ejω with z.


Then from C(z) we can infer as much information as possible about H(z), i.e its poles and zeros
(and hence ∠H(ejω )) :

• Poles and zeros of C(z) occur in conjugate reciprocal pairs, one from H(z) and the other from H*(1/z*).

  – One is inside the unit circle, the other outside (or both on the unit circle at the same location).
  – Without further information on the system, we don't know which one is inside or outside.
  – For H(z), having a pole (or zero) at d_k or at its conjugate reciprocal 1/d_k* results in the same C(z) and therefore the same |H(ejω)|² or |H(ejω)|.

• If H(z) is causal and stable :

  – We can identify the poles of H(z) uniquely : from each conjugate reciprocal pair of poles of C(z), choose the pole inside the unit circle. But the zeros of H(z) cannot be identified uniquely.

• If H(z) is minimum-phase :

  – We can identify both poles and zeros of H(z) uniquely (i.e. uniquely identify H(z)) : from each conjugate reciprocal pair of poles (zeros) of C(z), choose the pole (zero) inside the unit circle.

Ex: Consider two different systems with

H1(z) = 1 − 0.5e^{j(π/4)} z^{−1} =⇒

H2(z) = 1 − 0.5e^{−j(π/4)} z =⇒

Hence, a zero (or pole) at d_k and at its conjugate reciprocal 1/d_k* give the same magnitude response |H(ejω)|, but their phase responses ∠H1(ejω) and ∠H2(ejω) are different.
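A short numerical check of this example (zero at a = 0.5e^{jπ/4} versus its conjugate reciprocal):

```python
import numpy as np

a = 0.5 * np.exp(1j * np.pi / 4)
w = np.linspace(-np.pi, np.pi, 1001)
H1 = 1 - a * np.exp(-1j * w)            # H1(e^{jw}): zero at a (inside unit circle)
H2 = 1 - np.conj(a) * np.exp(1j * w)    # H2(e^{jw}): zero at 1/a* (outside)

assert np.allclose(np.abs(H1), np.abs(H2))          # identical magnitude responses ...
assert not np.allclose(np.angle(H1), np.angle(H2))  # ... but different phase responses
```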

Ex: Given pole-zero plot of C(z) below, we want to determine poles and zeros of H(z).

Figure 5.6: Given C(z).

• With this much information, we can choose for H(z) one from each conjugate reciprocal pair of poles and zeros of C(z) =⇒

• If H(z) is stable and causal =⇒

• If h[n] is also real =⇒

• If H(z) is minimum-phase =⇒

• Note that if the number of poles/zeros were not restricted, the number of choices for H(z) would be unlimited even with extra information on the system :

5.5 All-pass Systems

An All-pass system is a system with constant magnitude response (i.e. |H(ejω )| = c).

Remembering that a pole (zero) at a and a pole (zero) at its conjugate reciprocal 1/a* give the same magnitude response, check the system H(z) = A*(1/z*)/A(z), where A(z) = 1 − az^{−1}, |a| < 1.

The canonical (stable & causal, i.e. pole inside unit circle, i.e. |a| < 1) form of the all-pass Hap(z) is then given as follows :

Hap(z) = (z^{−1} − a*) / (1 − az^{−1})

Let us show that |Hap(z)|_{z=ejω} = 1 :

A more general form of all-pass systems, the product of first-order all-pass terms, is as follows :

Hap(z) = Π_{k=1}^{M} (z^{−1} − a_k*) / (1 − a_k z^{−1})

A more general form of all-pass systems with real impulse response hap [n] is as follows :

Note, for the above all-pass system to be stable and causal, we need |ek | < 1 and |dk | < 1 for all k.

In summary, in an all-pass system, for every non-zero pole at a, a zero exists at its conjugate reciprocal 1/a*.

For the canonical first-order all-pass Hap(z) = (z^{−1} − a*)/(1 − az^{−1}), where a = r e^{jθ}, the magnitude, phase and group-delay are as follows :

• |Hap(ejω)| = 1

• ∠Hap(ejω) =

• grd[Hap(ejω)] = −(d/dω)∠Hap(ejω) = 1 + 2 · [1 / (1 + (r sin(ω−θ)/(1 − r cos(ω−θ)))²)] · (d/dω)(r sin(ω−θ)/(1 − r cos(ω−θ))) = ... = (1 − r²) / |1 − r e^{jθ} e^{−jω}|²

For higher-order all-pass systems, the phase and group-delay are sum of such terms.
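These expressions can be checked numerically; the pole a = 0.8e^{jπ/3} below is an arbitrary stable (|a| < 1) choice:

```python
import numpy as np

a = 0.8 * np.exp(1j * np.pi / 3)
r, theta = np.abs(a), np.angle(a)
w = np.linspace(-np.pi, np.pi, 20001)
z = np.exp(1j * w)
Hap = (1 / z - np.conj(a)) / (1 - a / z)   # canonical first-order all-pass on |z| = 1

assert np.allclose(np.abs(Hap), 1.0)       # constant unit magnitude

grd_formula = (1 - r**2) / np.abs(1 - r * np.exp(1j * theta) * np.exp(-1j * w))**2
grd_num = -np.gradient(np.unwrap(np.angle(Hap)), w)
assert np.all(grd_formula > 0)             # positive group delay for r < 1
assert np.allclose(grd_num[1:-1], grd_formula[1:-1], atol=1e-3)
```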

Important properties of the phase and group-delay of stable and causal all-pass systems :

1. Stable and causal all-pass systems have positive group-delay, i.e. grd[Hap (ejω )] > 0

• Consider first the first-order Hap(z) = (z^{−1} − a*)/(1 − az^{−1}) :

• Higher-order stable and causal all-pass systems' group-delays are sums of such terms.
2. Stable and causal all-pass systems with real and positive H(ejω )|ω=0 (satisfied if hap [n]
real) have negative continuous phase (i.e. arg[H(ejω )] ≤ 0) that starts at 0 for ω = 0 and
decreases monotonically for increasing ω.

• Hap(ejω)|_{ω=0} =

• .

Typical plots indicating the above properties on phase and group-delay :

Another useful property for any all-pass system is that the phase response has a total change of "order × 2π" over a range of frequencies [ω0, ω0 + 2π].

For the above plots (magnitude, principal value of phase, group-delay) of 3 stable and causal
all-pass systems, observe the properties on group-delay (grd[H(ejω )]) and continuous phase
(arg[H(ejω )]). Can you guess the orders of the systems and the locations of the poles ?

5.6 Minimum-phase Systems

A system that is stable and causal and has an inverse system that is also stable and causal.

Hence, a minimum phase system Hmin (z) must have all of its poles and zeros inside the unit circle.

Remember that given C(z) = H(z)H*(1/z*) or |H(ejω)|² = C(z)|_{z=ejω}, one can find the system H(z) uniquely if the system H(z) is minimum-phase. We just need to choose from each conjugate reciprocal pair of poles of C(z) the ones inside the unit circle, and similarly from each conjugate reciprocal pair of zeros of C(z) the ones inside the unit circle.

5.6.1 Minimum-phase and All-pass Decomposition

Any rational system function H(z) can be decomposed into the product of a minimum-phase and an all-pass system:

Let's justify by example. Suppose H(z) has many poles & zeros inside the unit circle and one zero outside at z = 1/c* (i.e. |c| < 1).

The general procedure for obtaining the Minimum-phase and All-pass decomposition of a given
system function H(z) is as follows :

1. Assign all poles & zeros of the given H(z) that are inside the unit circle to Hmin(z), and the ones outside the unit circle to Hap(z).

2. For all poles & zeros assigned to Hap(z) in step 1, add conjugate reciprocal zeros & poles to Hap(z) to obtain an all-pass system. (They will be added inside the unit circle.)

3. Cancel the conjugate reciprocal zeros & poles added in step 2 by adding poles & zeros at the same locations to Hmin(z). (They will be added inside the unit circle.)

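A minimal sketch of these steps, acting only on the zeros of H(z) (assuming the poles of a stable, causal H(z) are already inside the unit circle); the helper name and example zeros are illustrative:

```python
import numpy as np

def minphase_allpass_zeros(zeros):
    """Split the zeros of H(z) for a minimum-phase/all-pass decomposition (sketch).
    Returns (zeros of Hmin, list of (pole, zero) pairs of Hap)."""
    zmin, ap = [], []
    for z0 in zeros:
        if abs(z0) <= 1:
            zmin.append(z0)                   # step 1: already inside -> goes to Hmin
        else:
            zmin.append(1 / np.conj(z0))      # steps 2-3: reflected zero goes to Hmin
            ap.append((1 / np.conj(z0), z0))  # Hap keeps the zero at z0, pole at 1/z0*
    return zmin, ap

zeros = [0.5, 2.0]                            # one zero outside the unit circle
zmin, ap = minphase_allpass_zeros(zeros)
assert all(abs(z0) <= 1 for z0 in zmin)

# reflecting z0 -> 1/z0* scales |H(e^{jw})| by the constant |z0|, so the magnitude
# shape is preserved (the all-pass factor carries the excess phase)
w = np.linspace(-np.pi, np.pi, 512)
e = np.exp(1j * w)
H = np.prod([e - z0 for z0 in zeros], axis=0)
Hmin = np.prod([e - z0 for z0 in zmin], axis=0)
scale = np.prod([abs(z0) for z0 in zeros if abs(z0) > 1])
assert np.allclose(np.abs(H), scale * np.abs(Hmin))
```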
Ex:

Note that, if the given H(z) is stable and causal, then decomposition will also yield a
stable and causal all-pass system Hap (z).

5.6.2 Frequency Response Compensation of Non-minimum-Phase Systems

Assume a signal has been distorted and we would like to undo/compensate for the distortion.
In many cases it is desired that compensating system Hc (z) is stable and causal.

Figure 5.7: Distortion compensation

• If the distorting system Hd(z) is minimum phase, we can choose the compensating system as Hc(z) = 1/Hd(z), which is also stable and causal.

• If the distorting system Hd(z) is not minimum phase, we can use the decomposition

  Then the overall system becomes an all-pass system : G(z) = Hd(z)Hc(z) =

  – Magnitude distortion is compensated for exactly, i.e. |G(ejω)| = 1.
  – Phase distortion is modified to ∠G(ejω) = ∠Hap(ejω).

5.6.3 Properties of Minimum-phase Systems

There are 3 important properties of minimum-phase systems relative to all other stable and
causal systems that have the same frequency response magnitude |H(ejω )|.

1. Minimum phase-lag property : The minimum-phase system is the system with the smallest
phase-lag.

2. Minimum group-delay property : The minimum-phase system is the system with the
smallest group-delay.

3. Minimum energy-delay property : The minimum-phase system is the system with the
smallest energy-delay.

To discuss the meaning and derivation of these properties, first remember the discussion in Section 5.4. Given a magnitude response |H(ejω)| (or equivalently C(z) = H(z)H*(1/z*)), there is a finite number of possible systems H(z) that have this given magnitude response. (These different H(z) could be obtained by choosing a pole (zero) from each conjugate reciprocal pair of poles (zeros) of C(z).) Amongst this finite number of possible systems H(z), one is a minimum-phase system, some are stable & causal, and others are not stable & causal. All of these systems can be decomposed with the minimum-phase and all-pass decomposition :

H0 (ejω ) = Hmin (ejω )


H1 (ejω ) = Hmin (ejω )Hap,1 (ejω )
H2 (ejω ) = Hmin (ejω )Hap,2 (ejω )
..
.
HN (ejω ) = Hmin (ejω )Hap,N (ejω )
HN +1 (ejω ) = Hmin (ejω )Hap,N +1 (ejω )
..
.
HN +M (ejω ) = Hmin (ejω )Hap,N +M (ejω )

The first system H0(z) is the minimum-phase system with the given magnitude response |H(ejω)|. Assume the next N systems (Hk(ejω), k = 1, ..., N) are stable & causal and the following M systems (Hk(ejω), k = N + 1, ..., N + M) are not stable & causal. Amongst all the stable & causal systems (Hk(ejω), k = 0, ..., N), the minimum-phase system H0(ejω) is the system with the

• smallest phase-lag
• smallest group-delay
• smallest energy-delay.
1. Minimum phase-lag property
   The phase-lag is defined as the negative of the continuous phase response : phase-lag = −arg[H(ejω)].
   Writing the phase-lag for the decomposition of all the stable and causal systems, we can conclude that the minimum-phase system H0(z) will have the smallest phase-lag, since the phase-lag of stable & causal all-pass systems is always positive (as discussed in Section 5.5).

−arg[H0 (ejω )] = −arg[Hmin (ejω )]


−arg[H1 (ejω )] = −arg[Hmin (ejω )] − arg[Hap,1 (ejω )]
..
.
−arg[HN (ejω )] = −arg[Hmin (ejω )] − arg[Hap,N (ejω )]

2. Minimum group-delay property

   In a similar manner, writing the group-delay for the decomposition of all the stable and causal systems, we can conclude that the minimum-phase system H0(z) will have the smallest group-delay, since the group-delay of stable & causal all-pass systems is always positive (as discussed in Section 5.5).

grd[H0 (ejω )] = grd[Hmin (ejω )]


grd[H1 (ejω )] = grd[Hmin (ejω )] + grd[Hap,1 (ejω )]
..
.
grd[HN (ejω )] = grd[Hmin (ejω )] + grd[Hap,N (ejω )]

3. Minimum energy-delay property


First, note that all systems with the same magnitude response |H(ejω )| (i.e. Hk (ejω ), k =
0, ..., N + M ) have the same total energy as hmin [n] :

Σ_{n=0}^{∞} |h[n]|² = (1/2π) ∫_{−π}^{π} |H(ejω)|² dω = (1/2π) ∫_{−π}^{π} |Hmin(ejω)|² dω = Σ_{n=0}^{∞} |hmin[n]|²

Define the partial energy of a system with impulse response h[n] as E[k] = Σ_{m=0}^{k} |h[m]|².

Amongst all the stable and causal systems, the minimum-phase system H0(z) will have the smallest energy-delay, i.e. the largest partial energy:

Σ_{m=0}^{k} |h[m]|² ≤ Σ_{m=0}^{k} |hmin[m]|²   for all k = 0, 1, 2, ...

The proof of this property is more tedious than those of the first two properties and we will
skip it here. But the proof can be obtained from Problems 5.71 and 5.72 in the textbook.
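A tiny FIR illustration of the energy-delay property: the two length-2 systems below (zero at 0.5 versus its reciprocal at 2, values chosen for illustration) have the same |H(ejω)| and the same total energy, but the minimum-phase one concentrates its energy earliest.

```python
import numpy as np

h_min = np.array([1.0, -0.5])    # Hmin(z) = 1 - 0.5 z^-1  (zero inside the unit circle)
h_max = np.array([-0.5, 1.0])    # -0.5 (1 - 2 z^-1)       (zero outside)

# same magnitude response (checked on a 64-point frequency grid)
assert np.allclose(np.abs(np.fft.fft(h_min, 64)), np.abs(np.fft.fft(h_max, 64)))

E_min = np.cumsum(h_min**2)      # partial energies E[k]
E_max = np.cumsum(h_max**2)
assert np.isclose(E_min[-1], E_max[-1])    # equal total energy
assert np.all(E_max <= E_min + 1e-12)      # minimum-phase front-loads the energy
```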

Figure 5.8: (a) Four systems with the same frequency response magnitude |H(ejω)|.

Figure 5.9: (b) Impulse responses of the four systems.

Figure 5.10: (c) Partial energies of the four systems.

5.7 Linear Systems with Generalized Linear Phase

5.7.1 Linear Phase



Consider an LTI system with frequency response H(ejω) = |H(ejω)| e^{j∠H(ejω)}. The system is/has linear phase if the system's phase response is a linear function of ω :

∠H(ejω) = −αω  ⇒  grd[H(ejω)] = α

Hence, a linear-phase LTI system has frequency response in the following form :

Hlp (ejω ) = |Hlp (ejω )|e−jαω

Linear phase systems delay all frequencies by the same amount and thus preserve time-domain
synchronization of different frequency components of an input signal (i.e. there will be no time-
dispersion like in the example of Sec. 5.1) :

Linear phase systems can be interpreted as a cascade of a magnitude filter and time-shift :

(Recall Sec. 4.5 CT Processing of DT signals for how to interpret time-delay by non-integer α.)
One of the most important properties of linear-phase systems is that the impulse response hlp [n] (if
real) has even symmetry (around n = α) if α or 2α is an integer :

hlp [n] = hlp [2α − n], if α or 2α is integer

Proof :

Figure 5.11: (Figure 5.32 in textbook) hlp[n] = sin(ωc(n − α))/(π(n − α)) for (a) α = 5 (b) α = 4.5 (c) α = 4.3

Figure 5.11 : hlp[n] = sin(ωc(n − α))/(π(n − α)) ←→ Hlp(ejω) = e^{−jωα} for |ω| < ωc, and 0 for ωc < |ω| ≤ π

5.7.2 Generalized Linear Phase

Many properties of Linear Phase (LP) systems also apply to a larger class of systems called Gener-
alized Linear Phase (GLP) system, that have the following more general frequency response form:

Hglp (ejω ) = A(ejω )e−jαω+jβ

If we ignore the discontinuities in the phase resulting from the addition of ±π due to the sign change
of A(ejω ), the continuous phase and group-delay of GLP systems are as follows :

∠Hglp (ejω ) = −αω + β


⇒ grd[Hglp (ejω )] = α

Ex: H1 (ejω ) = (1 + cos(ω))e−jω , H2 (ejω ) = (1 + 2 cos(ω))e−jω

Symmetry properties of generalized linear phase systems

Consider the two following expansions of the frequency response of a GLP system (note that the
second equation applies when h[n] is real) :
H(ejω ) = A(ejω )e−j(αω−β) =
H(ejω ) =

If we cross-multiply the above equations and use sin(a − b) = sin(a) cos(b) − cos(a) sin(b), we get

Σ_{n=−∞}^{∞} h[n] sin(ω(n − α) + β) = 0

This equation is a necessary condition on h[n], α, β for the system to be a generalized linear phase (GLP) system. (It is not a sufficient condition, i.e. a system with h[n], α, β satisfying the above condition is not guaranteed to be GLP. However, every GLP system must satisfy the above condition.) It does not tell us how to find GLP systems.

Two sets of conditions for real impulse responses that do guarantee generalized linear phase
systems are :

1. Symmetric Case: h[n] = h[2α − n] −→ H(ejω ) =

2. Anti-symmetric Case: h[n] = −h[2α − n] −→ H(ejω ) =

Pf.

5.7.3 Causal Generalized Linear Phase Systems

If the system is causal, the necessary condition for GLP becomes: Σ_{n=0}^{∞} h[n] sin(ω(n − α) + β) = 0.

If the system is causal, the two conditions that guarantee GLP imply that h[n] must be FIR; in particular, h[n] = 0 for n < 0 and n > 2α = M. Hence, these two conditions become :

1. Symmetric Case: h[n] = h[2α − n] for 0 ≤ n ≤ M = 2α (and 0 otherwise) −→ H(ejω) =

2. Anti-symmetric Case: h[n] = −h[2α − n] for 0 ≤ n ≤ M = 2α (and 0 otherwise) −→ H(ejω) =

Depending on whether M is even or odd (i.e. α is integer or half-integer; note that length of h[n] is
M +1), the above two cases can be split into two special cases, yielding overall 4 FIR GLP systems:

1. Type-I FIR GLP system (h[n] symmetric, M even): h[n] = h[M − n], 0≤n≤M

2. Type-II FIR GLP system (h[n] symmetric, M odd ): h[n] = h[M − n], 0≤n≤M

3. Type-III FIR GLP system (h[n] anti-sym., M even): h[n] = −h[M − n], 0≤n≤M

4. Type-IV FIR GLP system (h[n] anti-sym., M odd ): h[n] = −h[M − n], 0≤n≤M
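A sketch that classifies a real FIR h[n] into these four types and checks the forced zeros at z = ±1 derived in the next part (the helper name and the test sequences are illustrative):

```python
import numpy as np

def fir_glp_type(h):
    # Classify a real FIR impulse response of length M+1 into GLP Types I-IV (sketch)
    h = np.asarray(h, dtype=float)
    M = len(h) - 1
    if np.allclose(h, h[::-1]):
        return 1 if M % 2 == 0 else 2        # symmetric: Type I (M even) / II (M odd)
    if np.allclose(h, -h[::-1]):
        return 3 if M % 2 == 0 else 4        # anti-symmetric: Type III / IV
    return None

h2 = [1.0, 2.0, 2.0, 1.0]                    # symmetric, M = 3 -> Type II
h3 = [1.0, 0.0, -1.0]                        # anti-symmetric, M = 2 -> Type III
assert fir_glp_type(h2) == 2
assert fir_glp_type(h3) == 3
# np.polyval(h, z0) evaluates z0^M * H(z0), which vanishes iff H(z0) = 0 (z0 != 0)
assert np.isclose(np.polyval(h2, -1.0), 0.0)  # Type II: forced zero at z = -1
assert np.isclose(np.polyval(h3, 1.0), 0.0)   # Type III: forced zeros at z = 1 ...
assert np.isclose(np.polyval(h3, -1.0), 0.0)  # ... and at z = -1
```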

Ex: For the given example FIR impulse responses h[n] above for each type, verify the type, find
and plot magnitude, phase responses and group-delay.

Locations of Zeros for FIR Generalized Linear Phase Systems

• Type I & II (i.e. h[n] = h[M − n]) :

  H(z) =

  =⇒ H(z) = H(z^{−1}) z^{−M}

  From this relation, the following results can be obtained :

  – If z0 is a zero of H(z) =⇒ 1/z0 is also a zero of H(z).
  – If h[n] is real and z0 is a zero of H(z) =⇒ z0*, 1/z0, 1/z0* are also zeros of H(z).
  – For z = −1, H(−1) = H(−1)(−1)^{−M} =⇒ for odd M (Type II), H(−1) = −H(−1), i.e. H(−1) = 0.

• Type III & IV (i.e. h[n] = −h[M − n]) :

  H(z) =

  =⇒ H(z) = −H(z^{−1}) z^{−M}

  From this relation, the following results can be obtained :

  – (same) If z0 is a zero of H(z) =⇒ 1/z0 is also a zero of H(z).
  – (same) If h[n] is real and z0 is a zero of H(z) =⇒ z0*, 1/z0, 1/z0* are also zeros of H(z).
  – For z = 1 =⇒ H(1) = −H(1), i.e. H(1) = 0 for both Type III & IV.
  – For z = −1 =⇒ H(−1) = −H(−1)(−1)^{−M}, i.e. H(−1) = 0 for (M even) Type III only.

The above obtained results can also be summarized as follows (from Tolga Ciloglu’s notes):

Finally, from the properties of the zeros (i.e. quadruples of complex conjugate and conjugate reciprocal zeros), any FIR GLP system with real impulse response can be decomposed into a product of a minimum-phase term, a maximum-phase term (all of its zeros outside the unit circle) and a term with all of its zeros on the unit circle :

H(z) = Hmin(z) Huc(z) Hmax(z).

Chapter 6

Filter Design Techniques

Contents
6.1 Filter Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.2 Design of DT IIR Filters from CT Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.2.1 Filter Design by Impulse Invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
6.2.2 Filter Design by Bilinear Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
6.3 Design of DT FIR filters by Windowing Method . . . . . . . . . . . . . . . . . . . . . . 130
6.3.1 Commonly used Windows and Their Properties . . . . . . . . . . . . . . . . . . . . . . . . . 131
6.3.2 Incorporation of Generalized Linear Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
6.3.3 Kaiser Window Filter Design Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

Frequency selective filters are an important class of LTI systems. This chapter discusses DT IIR
and FIR filter design techniques. We cover Sections 7.1-7.3 and 7.5 from our textbook and reading
assignment for this chapter is :

• Sections 7.1-7.3 and 7.5 from our textbook.

6.1 Filter Specifications

The goal in discrete-time filter design is to determine parameters of a system function or dif-
ference equation that approximates a given/desired frequency response within specified
tolerances, for example given by δp1 , δp2 , δs , ωp , ωs . (Typically there is no constraint on the phase
of the frequency response.)

Figure 6.1: (Figure 7.1 in textbook) Low-pass filter tolerance scheme

Design techniques for IIR and FIR filters are different.

• DT IIR filter design techniques are based on mapping well-known CT IIR filter designs to DT IIR filter designs via mappings between the CT frequency and the DT frequency axis.

• The most prevalent DT FIR design techniques are the windowing method and the Parks-McClellan algorithm.

6.2 Design of DT IIR Filters from CT Filters

CT IIR filter design is highly advanced with straightforward closed form design formulas. Hence,
design methods are based on transforming a CT IIR filter to a DT IIR filter meeting desired
specifications. Two transformation methods are discussed :

• Impulse Invariance

• Bilinear Transformation
6.2.1 Filter Design by Impulse Invariance

Filter Design by Impulse Invariance is based on sampling a CT impulse response (i.e. filter) hc (t) :

h[n] = Td hc (nTd )

=⇒ H(ejω ) =
If the CT filter is bandlimited so that Hc(jΩ) = 0 for |Ω| ≥ π/Td, then

H(ejω) = Hc(jω/Td),  |ω| < π

This equation indicates a linear relation (i.e mapping) between the CT and DT frequencies :

ω = ΩTd , in |ω| < π

In practice, no CT filter is exactly bandlimited. Hence, some aliasing occurs, but it can be negligibly small if the CT frequency response approaches zero quickly.

Impulse invariance transformation is carried over to rational system functions as follows :


Causal system with Hc(s) = Σ_{k=1}^{N} A_k / (s − s_k) ←→

=⇒ H(z) = Σ_{k=1}^{N} Td A_k / (1 − e^{s_k Td} z^{−1}),   |z| > max_k |e^{s_k Td}|

Hence, a pole at s = s_k in the s-plane is mapped to a pole at z = e^{s_k Td} in the z-plane.

=⇒ If the CT system is stable (i.e. Re{s_k} < 0), then |z_k| = |e^{s_k Td}| < 1, i.e. the DT system is also stable.
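The pole mapping and its stability preservation can be illustrated directly (Td and the CT poles below are arbitrary illustrative values):

```python
import numpy as np

Td = 1e-3
s_poles = np.array([-1000 + 2000j, -1000 - 2000j, -500.0])   # stable: Re{s_k} < 0

z_poles = np.exp(s_poles * Td)     # impulse invariance: z_k = e^{s_k * Td}

assert np.all(s_poles.real < 0)
assert np.all(np.abs(z_poles) < 1) # stable CT poles land inside the unit circle
```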

Ex: Impulse Invariance with a Butterworth Filter (a well-studied CT IIR filter)


Specifications of desired DT filter :

0.89125 ≤ |H(ejω)| ≤ 1,   0 < |ω| < 0.2π

|H(ejω)| ≤ 0.17783,   0.3π < |ω| < π
ω
Step-1: Transform given DT filter specifications to CT filter specifications via Ω =
Td

Step-2: Design CT filter parameters and system function Hc (s) (with Butterworth or other CT IIR
filter technique) meeting the specifications

Step-3: Transform back CT system function to DT system function :
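Steps 1-2 can be sketched with the Butterworth magnitude formula |Hc(jΩ)|² = 1/(1 + (Ω/Ωc)^{2N}). The script below assumes Td = 1 (so Ω = ω) and meets the passband spec exactly, leaving margin at the stopband to absorb aliasing; these are common but not mandatory design choices:

```python
import math

# DT specs mapped to CT via Omega = omega/Td, assuming Td = 1
Wp, Ws = 0.2 * math.pi, 0.3 * math.pi
Gp, Gs = 0.89125, 0.17783                # magnitude bounds at the band edges

dp = 1 / Gp**2 - 1                       # need (Wp/Wc)**(2N) <= dp
ds = 1 / Gs**2 - 1                       # need (Ws/Wc)**(2N) >= ds

# Smallest integer order N with (Ws/Wp)**(2N) >= ds/dp
N = math.ceil(0.5 * math.log(ds / dp) / math.log(Ws / Wp))

# Meet the passband spec exactly; the stopband is then exceeded,
# which leaves some margin for aliasing error
Wc = Wp / dp ** (1 / (2 * N))

print(N, round(Wc, 4))                   # prints: 6 0.7032
```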

6.2.2 Filter Design by Bilinear Transformation

This design method is based on non-linear mapping (transformation) between s and z :

s = (2/Td) · (1 − z^{−1})/(1 + z^{−1})

=⇒ H(z) = Hc( (2/Td) · (1 − z^{−1})/(1 + z^{−1}) )

If we solve for z, we obtain : z = (1 + (Td/2)s) / (1 − (Td/2)s)

Properties of Bilinear Transformation :


ˆ Left-half s-plane maps to interior of unit circle in z-plane.
(A pole in left-half s-plane maps to a pole inside the unit circle in z-plane → Causal & stable
CT filter → Causal & stable DT filter)
Pf

ˆ The jΩ axis in s-plane maps onto unit circle in z-plane


Pf.

ˆ Mapping or transformation between the CT and DT frequency axes :

Ω = (2/Td) tan(ω/2) ←→ ω = 2 arctan(ΩTd/2)
Pf.

Figure 6.2: (Figures 7.6 and 7.7 in textbook) Mapping of s-plane onto z-plane and mapping of CT frequency axis onto DT frequency axis via bilinear transformation.

Figure 6.3: (Figure 7.8 in textbook) Mapping of filter specifications.

ˆ There is no aliasing in bilinear transformation unlike in impulse invariance.
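The warping and its inverse can be checked numerically (Td = 2 below is an arbitrary illustrative choice):

```python
import math

Td = 2.0

def warp(w):            # DT frequency -> CT frequency: W = (2/Td)*tan(w/2)
    return (2 / Td) * math.tan(w / 2)

def unwarp(W):          # CT frequency -> DT frequency: w = 2*arctan(W*Td/2)
    return 2 * math.atan(W * Td / 2)

# The two mappings invert each other exactly on |w| < pi
for w in [0.1, 0.5 * math.pi, 0.99 * math.pi]:
    assert abs(unwarp(warp(w)) - w) < 1e-12

# Every CT frequency, however large, lands inside (-pi, pi): no aliasing
assert abs(unwarp(1e9)) < math.pi

# The mapping is nearly linear (W ~ w/Td) only for small frequencies
assert abs(warp(0.01) - 0.01 / Td) < 1e-6
```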


Ex: Bilinear transformation with a Butterworth Filter (Same specifications as previous example)
Specifications of desired DT filter :

0.89125 ≤|H(ejω )| ≤ 1, 0 < |ω| < 0.2π


|H(ejω )| ≤ 0.17783, 0.3π < |ω| < π

Step-1: Transform given DT filter specifications to CT filter specifications via the bilinear warping Ω = (2/Td) tan(ω/2) :
Td

Step-2: Design CT filter parameters and system function Hc (s) (with Butterworth or other CT IIR
filter technique) meeting the specifications

Step-3: Transform back CT system function to DT system function via s = (2/Td) · (1 − z^{−1})/(1 + z^{−1}) :
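As a concrete sketch of Step-3 (first-order example, with a and Td illustrative), substituting s = (2/Td)(1 − z⁻¹)/(1 + z⁻¹) into Hc(s) = a/(s + a) gives H(z) = a(1 + z⁻¹)/((2/Td + a) + (a − 2/Td)z⁻¹), and the two frequency responses then agree exactly at warped frequency pairs:

```python
import cmath, math

a, Td = 1.0, 0.5                 # illustrative filter parameter and Td

def Hc(W):                       # CT frequency response of a/(s + a)
    return a / (1j * W + a)

def H(w):                        # DT frequency response after bilinear transform
    zinv = cmath.exp(-1j * w)
    return a * (1 + zinv) / ((2 / Td + a) + (a - 2 / Td) * zinv)

# Exact agreement at W = (2/Td)*tan(w/2), since s evaluated on the unit
# circle equals j*(2/Td)*tan(w/2)
for w in [0.1, 1.0, 2.5]:
    W = (2 / Td) * math.tan(w / 2)
    assert abs(H(w) - Hc(W)) < 1e-12
```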

6.3 Design of DT FIR filters by Windowing Method

Ideally, the desired DT filter Hd(ejω) is an ideal filter, such as the ideal low-pass filter with some cut-off frequency ωc, which corresponds to an impulse response hd[n] = sin(ωc n)/(πn) with infinitely many samples.

To obtain FIR approximation of hd [n], the simplest method is to truncate hd [n] by windowing.
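A minimal sketch of this truncation, assuming a causal design delayed by M/2 (the values of M and ωc below are illustrative):

```python
import math

M, wc = 20, 0.4 * math.pi        # illustrative window length and cut-off
alpha = M / 2

def hd(n):
    # Delayed ideal LPF impulse response sin(wc*(n-alpha))/(pi*(n-alpha))
    if n == alpha:
        return wc / math.pi      # limit value at n = alpha
    return math.sin(wc * (n - alpha)) / (math.pi * (n - alpha))

# Rectangular windowing: keep only samples 0..M
h = [hd(n) for n in range(M + 1)]

# The result is symmetric, h[n] = h[M-n], hence generalized linear phase
for n in range(M + 1):
    assert abs(h[n] - h[M - n]) < 1e-12
```

Replacing the implicit rectangular window with a tapered one only changes the line that builds `h`.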

Figure 6.4: (Figure 7.28 in textbook) Magnitude response of rectangular window (M=7)

Figure 6.5: (Figure 7.27 in textbook) Convolution implied by truncation i.e. windowing.

As M (i.e. window length) increases :

1. Main and side lobe widths of the window decrease.

2. The peak amplitudes of main and side lobes of the window increase in a manner such that the area under each lobe remains constant.
Note: In the convolution, as W (ej(ω−θ) ) slides past a discontinuity in Hd (ejθ ), the integral of
W (ej(ω−θ) )Hd (ejθ ) will oscillate as each side-lobe of W (ej(ω−θ) ) moves past the discontinuity.

3. (Due to 1&2) the oscillations in the shape of H(ejω ) occur more rapidly but the oscillation
amplitudes remain constant.

6.3.1 Commonly used Windows and Their Properties

By using smoother tapered windows (instead of rectangular window),

ˆ the height of the side-lobes of window frequency response can be reduced


ˆ but the main-lobe width increases (while the window length M is held constant).

=⇒ The oscillation amplitudes in H(ejω) (discussed in 3 above) reduce; however, a wider transition band (from pass-band to stop-band) occurs.

Figure 6.6: (Figure 7.29 in textbook) Commonly used windows

Figure 6.7: Mathematical expressions of commonly used windows.

As window gets smoother (rectangular → Blackman), window's frequency response's

ˆ side-lobe amplitude decreases (=⇒ oscillation amplitudes in H(ejω) decrease)

ˆ main-lobe width increases (=⇒ transition band in H(ejω) widens).

Figure 6.8: (Table 7.2 in textbook) Comparison of commonly used windows

Figure 6.9: (Figures 7.30 and 7.31 in textbook) Fourier transforms (log magnitude) of windows for M=50. a) Rectangular b) Bartlett c) Hann d) Hamming e) Blackman
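This trade-off can also be observed numerically. The sketch below compares peak side-lobe levels of a length-51 rectangular and Hamming window by brute-force search over the DTFT magnitude (the search start 4π/M is a rough main-lobe edge, so the numbers are only approximate):

```python
import cmath, math

M = 50

def dtft_mag(w_seq, omega):
    # |W(e^{j*omega})| of a finite sequence, by direct summation
    return abs(sum(w_seq[n] * cmath.exp(-1j * omega * n)
                   for n in range(len(w_seq))))

rect = [1.0] * (M + 1)
hamm = [0.54 - 0.46 * math.cos(2 * math.pi * n / M) for n in range(M + 1)]

def peak_sidelobe_db(w_seq):
    peak = dtft_mag(w_seq, 0.0)          # main-lobe peak at omega = 0
    # conservative search start of 4*pi/M (edge of the Hamming main lobe)
    start = 4 * math.pi / M
    side = max(dtft_mag(w_seq, start + k * 0.001)
               for k in range(int((math.pi - start) / 0.001)))
    return 20 * math.log10(side / peak)

# Hamming side lobes are far lower than rectangular side lobes at equal M
assert peak_sidelobe_db(hamm) < peak_sidelobe_db(rect) < 0
```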

6.3.2 Incorporation of Generalized Linear Phase


Note that all windows are symmetric around n = M/2 :

If the desired impulse response hd[n] is also chosen symmetric around n = M/2 :

Then, windowed impulse response h[n] = hd[n]w[n] will also be symmetric around n = M/2, i.e.

h[n] = h[M − n], 0 ≤ n ≤ M ;  h[n] = 0, otherwise

Hence, H(ejω) will have GLP due to symmetry :

H(ejω) = Ae(ejω) e^{−jωM/2}, where Ae(ejω) is real and even

(If the desired impulse response is anti-symmetric as hd[n] = −hd[M − n], then h[n] will be anti-symmetric FIR =⇒ H(ejω) = jAo(ejω) e^{−jωM/2})

In summary :

Figure 6.10: (Figure 7.31 in textbook) Illustration of type of approximation obtained at a discontinuity of the ideal frequency response. Note that pass and stop-band tolerances are the same due to the symmetry of the sliding window.

6.3.3 Kaiser Window Filter Design Method

There is a fundamental trade-off between main-lobe width and side-lobe area for any window, i.e.
as we change window shape to decrease side-lobe area (i.e. oscillation amplitudes), main-lobe width
(i.e. transition band) will increase.

A near-optimal window design method for this trade-off is the Kaiser window.
w[n] = I0[β(1 − ((n − α)/α)²)^{1/2}] / I0(β), 0 ≤ n ≤ M ;  w[n] = 0, otherwise

Here, α = M/2 and I0(·) is the zeroth-order modified Bessel function of the first kind.

Kaiser window has two parameters (length parameter M and shape parameter β) while the other windows had only the length parameter M . By varying β and M , the window shape and length can be adjusted to trade side-lobe area (or amplitude) for main-lobe width.

Figure 6.11: (Figure 7.32 in textbook) a) Kaiser windows for β = 0, 3, 6 and M = 20 b) FT corresponding to windows in a) c) FT of windows with β = 6 and M = 10, 20, 40.

Relationship of the Kaiser window to other windows

Figure 6.12: (Figure 7.33 in textbook) Comparison of fixed windows with Kaiser window in low-pass filter design application (M=32, ωc = π/2). Kaiser 6 means Kaiser window with β = 6.

There are approximate formulas to determine M and β for given filter specifications δp1 , δp2 , δs , ωp , ωs .

Ex: DT filter specifications : ωp = 0.4π, ωs = 0.6π, δp1 = δp2 = 0.01, δs = 0.001.


1. Since filters designed by window method inherently have δp1 = δp2 = δs , we set δ = 0.001.

2. Cut-off frequency ωc of ideal LPF must be found. Due to the symmetry of approximation at the discontinuity of Hd(ejω), we set ωc = (ωp + ωs)/2 = 0.5π.
3. To determine Kaiser window parameters β and M , we first compute
∆ω = ωs − ωp = 0.2π and A = −20 log10 δ = 60.

4. We now use approximate formulas to determine Kaiser window parameters β and M :

β = 0.1102(A − 8.7), A > 50
β = 0.5842(A − 21)^{0.4} + 0.07886(A − 21), 21 ≤ A ≤ 50
β = 0.0, A < 21

M = (A − 8) / (2.285 ∆ω)
Formulas give β = 5.653 and M = 37.

5. Finally, the impulse response of the filter is given by

h[n] = (sin(ωc(n − α)) / (π(n − α))) · I0[β(1 − ((n − α)/α)²)^{1/2}] / I0(β), 0 ≤ n ≤ M ;  h[n] = 0, otherwise

where α = M/2 = 37/2 = 18.5.
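Steps 3-4 of the example can be reproduced directly from the design formulas:

```python
import math

delta = 0.001
wp, ws = 0.4 * math.pi, 0.6 * math.pi

A = -20 * math.log10(delta)          # 60 dB
dw = ws - wp                         # transition width 0.2*pi

# Kaiser shape parameter from A
if A > 50:
    beta = 0.1102 * (A - 8.7)
elif A >= 21:
    beta = 0.5842 * (A - 21) ** 0.4 + 0.07886 * (A - 21)
else:
    beta = 0.0

# Window length from A and the transition width
M = math.ceil((A - 8) / (2.285 * dw))

print(round(beta, 3), M)             # prints: 5.653 37
```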

Chapter 7

Structures for Discrete-time Systems

Chapter 8

Computation of Discrete Fourier


Transform

Contents
8.1 Direct Computation of DFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
8.1.1 Direct Evaluation of the DFT definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
8.1.2 The Goertzel Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
8.2 Decimation-in-time FFT Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.3 Decimation-in-Frequency FFT Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.4 More general FFT algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

The Discrete Fourier Transform (DFT) plays an important role in many digital signal processing
applications. This chapter discusses several methods that allow efficient computation of the values of
the DFT. In particular, we discuss the Goertzel and the Fast Fourier Transform (FFT) algorithms.
We cover Sections 9.1-9.3 from our textbook and reading assignment for this chapter is :

ˆ Sections 9.1-9.3 from our textbook.

Remember the DFT and IDFT equations for an N-point signal x[n] :

X[k] = Σ_{n=0}^{N−1} x[n] wN^{kn}, k = 0, 1, ..., N − 1

x[n] = (1/N) Σ_{k=0}^{N−1} X[k] wN^{−kn}, n = 0, 1, ..., N − 1

The DFT and IDFT expressions differ only in wN^{±kn} and the 1/N factor.


=⇒ Any fast computation procedure for DFT thus can also be used (with little modification) for
fast computation of the IDFT.

8.1 Direct Computation of DFT

8.1.1 Direct Evaluation of the DFT definition

Let us first examine the computational complexity of the computation of the DFT directly from its definition.

X[k] = Σ_{n=0}^{N−1} x[n] wN^{kn}
     = x[0]wN^{0} + x[1]wN^{k} + x[2]wN^{2k} + ... + x[N − 1]wN^{(N−1)k}, k = 0, 1, ..., N − 1

ˆ In general, both x[n] and wN^{kn} are complex numbers.

ˆ For one particular k value, computing X[k] requires:

ˆ Then, to compute DFT X[k] for N values of k = 0, 1, ...N − 1 requires:

Summary: With direct computation of N-point DFT, number of computations required is propor-
tional to N 2 , i.e has O(N 2 ) complexity.

The following two properties are used to obtain more efficient/faster algorithms to compute DFT:
1. wN^{k(N−n)} = wN^{−kn} = (wN^{kn})* (complex conjugate symmetry)

2. wN^{kn} = wN^{k(n+N)} = wN^{(k+N)n} (periodicity in n and k)

There are 3 well-known algorithms that exploit these properties :

1. Goertzel algorithm

2. Decimation-in-time FFT algorithm

3. Decimation-in-frequency FFT algorithm

When only a few samples of an N-point DFT are required (i.e. a few samples of the DTFT), then
the Goertzel algorithm can be more efficient. If all N values of the N-point DFT are required, the
FFT algorithms are more efficient.

8.1.2 The Goertzel Algorithm


Uses periodicity of wN^{−kn}, in particular wN^{−kN} = e^{j(2π/N)kN} = 1, as follows :

Let’s define a sequence yk [n]:

Since x[n] = 0 for n < 0 and n ≥ N , we have :

In z-transform domain, we have :

Notes:

ˆ Initial rest conditions (yk[n] = 0, n < 0, since x[n] = 0, n < 0) are used in difference equations.

ˆ Computation of each sample of yk[n] requires 1 complex multiplication + 1 complex addition, or equivalently 4 real multiplications + (2+2) real additions.

ˆ To compute X[k] = yk[N], we need to compute yk[N − 1], yk[N − 2], ..., yk[1], recursively (i.e. we need N iterations of the difference equation).

ˆ Hence, computing the DFT value X[k] = yk[N] for a particular k requires:
N compl. mult. + N compl. additions =⇒ 4N real mult. + 4N real additions
(this complexity is similar to direct computation's complexity)

ˆ However, Goertzel algorithm avoids computation or storage of wN^{0}, wN^{k}, wN^{2k}, ..., wN^{(N−1)k} that direct computation requires. (Goertzel algorithm needs only wN^{−k}.)

The number of multiplications in Goertzel algorithm can be reduced by a factor of 2 with a second
order flow-graph as follows :

Figure 8.1: (Figure 9.2 in textbook) Flow-graph of second order recursive computation of X[k]
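A sketch of the second-order recursion in this flow-graph, for one bin k: the real-coefficient difference equation v[n] = x[n] + 2cos(2πk/N)v[n−1] − v[n−2] is iterated N+1 times (with x[N] = 0 appended), and the single complex multiply X[k] = v[N] − wN^{k} v[N−1] is applied once at the end:

```python
import cmath, math

def goertzel(x, k):
    N = len(x)
    c = 2 * math.cos(2 * math.pi * k / N)   # the only recurring coefficient
    v1 = v2 = 0.0                           # v[n-1], v[n-2]: initial rest
    for n in range(N + 1):                  # iterate n = 0..N, with x[N] = 0
        v = (x[n] if n < N else 0.0) + c * v1 - v2
        v2, v1 = v1, v
    # one complex multiplication, applied only at the final step
    return v1 - cmath.exp(-2j * math.pi * k / N) * v2

# Check against the DFT definition for every bin
x = [1.0, 2.0, 3.0, 4.0]
for k in range(4):
    direct = sum(x[n] * cmath.exp(-2j * math.pi * k * n / 4) for n in range(4))
    assert abs(goertzel(x, k) - direct) < 1e-9
```

For real input the recursion itself stays real, which is where the factor-of-2 saving in multiplications comes from.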

Notes on comparing algorithms :

ˆ In direct computation or Goertzel algorithm, we can calculate X[k] for only M values of k
=⇒ Total complexity proportional to M N .

ˆ In the FFT algorithms, X[k] needs to be calculated for all N values of k


=⇒ Total complexity proportional to N log2 N .

8.2 Decimation-in-time FFT Algorithms

Consider DFT with length N = 2^α, where α is an integer, and separate the DFT summation into two sums over even & odd samples :

=⇒ X[k] = G[((k))N/2] + wN^{k} H[((k))N/2], k = 0, 1, ..., N − 1

where G[k] and H[k] are two periods of the N/2-point DFTs of the even and odd samples of x[n], respectively.

Let us evaluate the above result for N = 8 and also examine the corresponding flow-graph :

X[0] = G[0] + wN^{0} H[0]    k = 0
X[1] = G[1] + wN^{1} H[1]    k = 1
X[2] = G[2] + wN^{2} H[2]    ...
X[3] = G[3] + wN^{3} H[3]    k = N/2 − 1
−−−−−−−−−−
X[4] = G[0] + wN^{4} H[0]    k = N/2
X[5] = G[1] + wN^{5} H[1]    k = N/2 + 1
X[6] = G[2] + wN^{6} H[2]    ...
X[7] = G[3] + wN^{7} H[3]    k = N − 1

Figure 8.2: (Figure 9.4 in textbook) Flow-graph of decimation-in-time decomposition of N-point DFT into two N/2-point DFTs.

In a similar manner, an N/2-point DFT can be computed using two N/4-point DFTs :

Figure 8.3: (Figure 9.5 in textbook) Flow-graph of decimation-in-time decomposition of N/2-point DFT into two N/4-point DFTs.

Note also that w_{N/2}^{k} = wN^{2k}.

In a similar manner, one can split all DFT blocks until only 2-pt DFT’s need to be computed, which
is easy:

Overall, computation of the 8-pt DFT example becomes the following flow-graph :

Figure 8.4: (Figure 9.6 in textbook) Flow-graph of decimation-in-time decomposition of N/2-point DFT into two N/4-point DFTs.

Figure 8.5: (Figure 9.9 in textbook) Flow-graph of complete decimation-in-time decomposition of 8-point DFT

Note the resulting complete decimation-in-time decomposition of 8-point DFT consists of regular
structures, called butterfly structures :

Figure 8.6: (Figure 9.8 in textbook) Flow-graph of basic butterfly computation.

This butterfly structure requires 2 complex multiplications + 2 complex additions, but can be
simplified to 1 complex multiplication + 2 complex additions as follows :

Figure 8.7: (Figure 9.10 in textbook) Flow-graph of simplified butterfly computation.

With this simplified butterfly structure, the overall 8-pt DFT computation becomes as follows :

Figure 8.8: (Figure 9.11 in textbook) Flow-graph of 8-point DFT using simplified butterfly computation.

In this final flow-graph of 8-point DFT,

ˆ there are log2 8 = 3 stages (in general log2 N stages),

ˆ each stage has N/2 simplified butterfly structures, and

ˆ each simplified butterfly structure has 1 complex mult. + 2 complex additions.


=⇒ Overall for N-point DFT, decimation-in-time FFT algorithm requires :

(N/2) log2 N complex multiplications
N log2 N complex additions

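A compact recursive sketch of decimation-in-time (not the in-place flow-graph, but the same arithmetic), instrumented to count twiddle multiplications; the count comes out to (N/2) log2 N:

```python
import cmath, math

mults = 0          # global counter of complex twiddle multiplications

def fft(x):
    # Recursive radix-2 decimation-in-time FFT, N a power of 2
    global mults
    N = len(x)
    if N == 1:
        return x[:]
    G = fft(x[0::2])                   # N/2-point DFT of even samples
    H = fft(x[1::2])                   # N/2-point DFT of odd samples
    X = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * H[k]   # one twiddle multiply
        mults += 1
        X[k] = G[k] + t                # simplified butterfly
        X[k + N // 2] = G[k] - t
    return X

x = [complex(n) for n in range(8)]
X = fft(x)
direct = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / 8) for n in range(8))
          for k in range(8)]
assert all(abs(a - b) < 1e-9 for a, b in zip(X, direct))
assert mults == 8 // 2 * int(math.log2(8))             # 12 multiplications
```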
Ex: Let N = 1024 = 2^10. For N-point DFT calculation, compare direct computation and FFT algorithm complexity.

In-place computation and bit-reversed data access

In-place computation is possible in the FFT flow-graph of Figure 8.8 :

ˆ One complex array of N storage elements (registers) is sufficient to perform the FFT computation in Figure 8.8.
stage-0 (reg-A) −→ (butterflies) −→ stage-1 (reg-A) −→ (butterflies) −→ stage-2 (reg-A) −→ ...

ˆ A pair of elements at the input of the m-th stage (X_{m−1}[p] and X_{m−1}[q]) are processed with a simplified butterfly structure, and the outputs (X_m[p] and X_m[q]) are stored in the same storage elements (registers) as the inputs.

Figure 8.9: (Figure 9.12 in textbook).

Input data to the FFT computation can be accessed/stored in bit-reversed order :


x[0] = x[000] −→ x[000] = x[0]
x[1] = x[001] −→ x[100] = x[4]
x[2] = x[010] −→ x[010] = x[2]
x[3] = x[011] −→ x[110] = x[6]
x[4] = x[100] −→ x[001] = x[1]
x[5] = x[101] −→ x[101] = x[5]
x[6] = x[110] −→ x[011] = x[3]
x[7] = x[111] −→ x[111] = x[7]
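The bit-reversed ordering above is produced by reversing the 3 address bits of each index:

```python
def bit_reverse(n, bits):
    # Reverse the low `bits` bits of the integer n
    r = 0
    for _ in range(bits):
        r = (r << 1) | (n & 1)
        n >>= 1
    return r

order = [bit_reverse(n, 3) for n in range(8)]
print(order)   # prints: [0, 4, 2, 6, 1, 5, 3, 7]
```

Note that bit reversal is its own inverse, so the same routine maps natural order to bit-reversed order and back.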

8.3 Decimation-in-Frequency FFT Algorithm


Calculate even and odd samples of DFT X[k] from two N/2-pt DFTs of manipulated input.

Remember the definition of the N-point DFT : X[k] = Σ_{n=0}^{N−1} x[n] wN^{kn}, k = 0, 1, ..., N − 1

Let us write the definition for only even values of k (indexed by 2r, r = 0, ..., N/2 − 1) and simplify the expression :

X[2r] = Σ_{n=0}^{N−1} x[n] wN^{2rn},  r = 0, 1, ..., N/2 − 1

      = Σ_{n=0}^{N/2−1} x[n] wN^{2rn} + Σ_{n=N/2}^{N−1} x[n] wN^{2rn}

      = Σ_{n=0}^{N/2−1} x[n] wN^{2rn} + Σ_{n=0}^{N/2−1} x[n + N/2] wN^{2nr} wN^{rN}

      = Σ_{n=0}^{N/2−1} (x[n] + x[n + N/2]) w_{N/2}^{nr},  r = 0, 1, ..., N/2 − 1

where the last step uses wN^{rN} = 1 and wN^{2nr} = w_{N/2}^{nr}.

Hence, even samples of N-point DFT X[k] can be obtained from the N/2-point DFT of a sequence x0[n] = x[n] + x[n + N/2], n = 0, ..., N/2 − 1.
(Remember what happens when the DTFT of an N-point signal is sampled at less than N points !)

In a similar manner, one can obtain the following result for the odd samples of the N-point DFT :

X[2r + 1] = Σ_{n=0}^{N−1} x[n] wN^{(2r+1)n},  r = 0, 1, ..., N/2 − 1

          = ...

          = Σ_{n=0}^{N/2−1} [(x[n] − x[n + N/2]) wN^{n}] w_{N/2}^{nr},  r = 0, 1, ..., N/2 − 1

Overall, the flow-graph of decimation-in-frequency decomposition of N-point DFT into two N/2-point DFTs is as follows :

Figure 8.10: (Figure 9.19 in textbook) Flow-graph of decimation-in-frequency decomposition of N-point DFT into two N/2-point DFTs.
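The two results above can be verified numerically for an arbitrary 8-point signal (the input values below are arbitrary): even DFT samples come from the N/2-point DFT of x[n] + x[n + N/2], odd samples from the N/2-point DFT of (x[n] − x[n + N/2])·wN^n.

```python
import cmath

def dft(x):
    # Reference direct DFT
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 0.0, -1.0, 3.0, 1.0, 4.0, 2.0]   # arbitrary test signal
N = len(x)
wN = cmath.exp(-2j * cmath.pi / N)

# One decimation-in-frequency stage: build the two half-length inputs
even_in = [x[n] + x[n + N // 2] for n in range(N // 2)]
odd_in = [(x[n] - x[n + N // 2]) * wN ** n for n in range(N // 2)]

E, O = dft(even_in), dft(odd_in)    # two N/2-point DFTs
X = dft(x)                          # full N-point DFT for comparison
for r in range(N // 2):
    assert abs(E[r] - X[2 * r]) < 1e-9        # even samples
    assert abs(O[r] - X[2 * r + 1]) < 1e-9    # odd samples
```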

We can repeat the decomposition procedure until only 2-point DFTs are needed and obtain the
following flow-graphs :

Figure 8.11: (Figure 9.20 in textbook)

Figure 8.12: (Figure 9.22 in textbook) Flow-graph of complete decimation-in-frequency decomposition of an 8-point DFT computation

Again, in-place computation and bit-reversed ordering can be used.

The computation complexity is the same as in the decimation-in-time FFT algorithm.

8.4 More general FFT algorithms

For N-point DFT calculations where N is not a power of 2 but is a composite number, i.e. N = N1 N2 (e.g. N = 10 = 2 · 5), computationally efficient FFT algorithms can also be obtained. The N-point DFT can be expressed as a combination of N1 N2-point DFTs or as a combination of N2 N1-point DFTs. A similar statement is valid also for larger composite numbers such as N = N1 N2 N3 (e.g. N = 30 = 2 · 5 · 3).
