INTRODUCTION TO
DIGITAL SIGNAL
PROCESSING AND
FILTER DESIGN
B. A. Shenoi
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any
form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise,
except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without
either the prior written permission of the Publisher, or authorization through payment of the
appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers,
MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests
to the Publisher for permission should be addressed to the Permissions Department, John Wiley &
Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at
http://www.wiley.com/go/permission.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best
efforts in preparing this book, they make no representations or warranties with respect to the
accuracy or completeness of the contents of this book and specifically disclaim any implied
warranties of merchantability or fitness for a particular purpose. No warranty may be created or
extended by sales representatives or written sales materials. The advice and strategies contained
herein may not be suitable for your situation. You should consult with a professional where
appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other
commercial damages, including but not limited to special, incidental, consequential, or other
damages.
For general information on our other products and services or for technical support, please contact
our Customer Care Department within the United States at (800) 762-2974, outside the United
States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print
may not be available in electronic formats. For more information about Wiley products, visit our
web site at www.wiley.com.
10 9 8 7 6 5 4 3 2 1
CONTENTS
Preface xi
1 Introduction 1
1.1 Introduction 1
1.2 Applications of DSP 1
1.3 Discrete-Time Signals 3
1.3.1 Modeling and Properties of Discrete-Time Signals 8
1.3.2 Unit Pulse Function 9
1.3.3 Constant Sequence 10
1.3.4 Unit Step Function 10
1.3.5 Real Exponential Function 12
1.3.6 Complex Exponential Function 12
1.3.7 Properties of cos(ω0 n) 14
1.4 History of Filter Design 19
1.5 Analog and Digital Signal Processing 23
1.5.1 Operation of a Mobile Phone Network 25
1.6 Summary 28
Problems 29
References 30
Problems 347
References 353
Index 415
PREFACE
to emphasize the important concepts and the results stated in those sentences.
Many of the important results are mentioned more than once or summarized in
order to emphasize their significance.
The other attractive feature of this book is that all the problems given at the
end of the chapters are problems that can be solved by using only the material
discussed in the chapters, so that students would feel confident that they have an
understanding of the material covered in the course when they succeed in solving
the problems. Because of these considerations, the author claims that the book is written with a student-oriented approach. Yet students should know that while the ability to solve the problems is important, understanding the theory behind them is far more important.
The following paragraphs are addressed to the instructors teaching a junior-
level course on digital signal processing. The first seven chapters cover well-
defined topics: (1) an introduction, (2) time-domain analysis and z-transform,
(3) frequency-domain analysis, (4) infinite impulse response filters, (5) finite
impulse response filters, (6) realization of structures, and (7) quantization filter
analysis. Chapter 8 discusses hardware design, and Chapter 9 covers MATLAB.
The book treats the mainstream topics in digital signal processing with a well-
defined focus on the fundamental concepts.
Most of the senior–graduate-level textbooks treat the theory of finite wordlength
in great detail, but the students get no help in analyzing the effect of finite word-
length on the frequency response of a filter or designing a filter that meets a set
of frequency response specifications with a given wordlength and quantization
format. In Chapter 7, we discuss the use of a MATLAB tool known as the “FDA
Tool” to thoroughly investigate the effect of finite wordlength and different formats
of quantization. This is another attractive feature of the textbook, and the material
included in this chapter is not found in any other textbook published so far.
When the students have taken a course on digital signal processing, and join an
industry that designs digital signal processing (DSP) systems using commercially
available DSP chips, they have very little guidance on what they need to learn.
It is with that concern that additional material in Chapter 8 has been added,
leading them to the material that they have to learn in order to succeed in their
professional development. It is very brief but important material presented to
guide them in the right direction. The textbooks that are written on DSP hardly
provide any guidance on this matter, although there are quite a few books on
the hardware implementation of digital systems using commercially available
DSP chips. Only a few schools offer laboratory-oriented courses on the design
and testing of digital systems using such chips. Even the minimal amount of
information in Chapter 8 is not found in any other textbook that contains “digital
signal processing” in its title. However, Chapter 8 is not an exhaustive treatment
of hardware implementation but only an introduction to what the students have
to learn when they begin a career in the industry.
Chapter 1 is devoted to discrete-time signals. It describes some applications
of digital signal processing and defines and suggests several ways of describing
discrete-time signals. Examples of a few discrete-time signals and some basic
IIR filters include direct-form and cascade and parallel structures, lattice–ladder
structures with autoregressive (AR), moving-average (MA), and allpass struc-
tures as special cases, and lattice-coupled allpass structures. Again, this chapter
contains a large number of examples worked out numerically and using the func-
tions from MATLAB and Signal Processing Toolbox; the material is more than
what is found in many other textbooks.
The effect of finite wordlength on the frequency response of filters realized
by the many structures discussed in Chapter 6 is treated in Chapter 7, and the
treatment is significantly different from that found in all other textbooks. There
is no theoretical analysis of finite wordlength effect in this chapter, because it
is beyond the scope of a junior-level course. I have chosen to illustrate the use
of a MATLAB tool called the “FDA Tool” for investigating these effects on the
different structures, different transfer functions, and different formats for quan-
tizing the values of filter coefficients. The additional choices such as truncation,
rounding, saturation, and scaling to find the optimum filter structure, besides the
alternative choices for the many structures, transfer functions, and so on, make
this a more powerful tool than the theoretical results. Students would find expe-
rience in using this tool far more useful than the theory in practical hardware
implementation.
Chapters 1–7 cover the core topics of digital signal processing. Chapter 8, on hardware implementation of digital filters, briefly describes the simulation of digital filters in Simulink®; the generation of C code from the Simulink model using Real-Time Workshop® (Simulink and Real-Time Workshop are registered trademarks of The MathWorks, Inc.); the generation of assembly language code from the C code; and the linking of the separate sections of the assembly language code into executable object code under the Code Composer Studio from Texas Instruments. Information on DSP Development Starter kits and simulator and
emulator boards is also included. Chapter 9, on MATLAB and Signal Processing
Toolbox, concludes the book.
The author suggests that the first three chapters, which discuss the basics of
digital signal processing, can be taught at the junior level in one quarter. The pre-
requisite for taking this course is a junior-level course on linear, continuous-time
signals and systems that covers Laplace transform, Fourier transform, and Fourier
series in particular. Chapters 4–7, which discuss the design and implementation
of digital filters, can be taught in the next quarter or in the senior year as an
elective course depending on the curriculum of the department. Instructors must
use discretion in choosing the worked-out problems for discussion in the class,
noting that the real purpose of these problems is to help the students understand
the theory. There are a few topics that are either too advanced for a junior-level
course or take too much of class time. Examples of such topics are the derivation
of the objective function that is minimized by the Remez exchange algorithm, the
formulas for deriving the lattice–ladder realization, and the derivation of the fast
Fourier transform algorithm. It is my experience that students are interested only
in the use of MATLAB functions that implement these algorithms, and hence I
have deleted a theoretical exposition of the last two topics and also a description
B. A. Shenoi
May 2005
CHAPTER 1
Introduction
1.1 INTRODUCTION
mobile phones, besides voice, music, and other audio signals—all of which are
classified as multimedia—because of limited hardware in the mobile phones and
not the software that has already been developed. However, the computers can
be used to carry out the same functions more efficiently with greater memory and
large bandwidth. We see a seamless integration of wireless telephones and com-
puters already developing in the market at present. The new technologies being
used in the abovementioned applications are known by such terms as CDMA,
TDMA,1 spread spectrum, echo cancellation, channel coding, adaptive equaliza-
tion, ADPCM coding, and data encryption and decryption, some of which are
used in the software to be introduced in the third-generation (3G) mobile phones.
2. Speech Processing. The quality of speech transmission in real time over
telecommunications networks from wired (landline) telephones or wireless (cel-
lular) telephones is very high. Speech recognition, speech synthesis, speaker
verification, speech enhancement, text-to-speech translation, and speech-to-text
dictation are some of the other applications of speech processing.
3. Consumer Electronics. We have already mentioned cellular or mobile phones. Then we have HDTV, digital cameras, digital phones, answering machines, fax machines and modems, music synthesizers, and the recording and mixing of music signals to produce CDs and DVDs. Surround-sound entertainment systems including CD and DVD players, laser printers, copying machines, and scanners are
found in many homes. But the TV set, PC, telephones, CD-DVD players, and
scanners are present in our homes as separate systems. However, the TV set can
be used to read email and access the Internet just like the PC; the PC can be
used to tune and view TV channels, and record and play music as well as data
on CD-DVD, in addition to being used to make telephone calls over VoIP. This trend
toward the development of fewer systems with multiple applications is expected
to accelerate in the near future.
4. Biomedical Systems. The variety of machines used in hospitals and biomed-
ical applications is staggering. Included are X-ray machines, MRI, PET scanning,
bone scanning, CT scanning, ultrasound imaging, fetal monitoring, patient monitoring, and ECG and EEG mapping. Other examples of advanced digital signal processing are found in hearing aids and cardiac pacemakers.
5. Image Processing. Image enhancement, image restoration, image under-
standing, computer vision, radar and sonar processing, geophysical and seismic
data processing, remote sensing, and weather monitoring are some of the applica-
tions of image processing. Reconstruction of two-dimensional (2D) images from
several pictures taken at different angles and three-dimensional (3D) images from
several contiguous slices has been used in many applications.
6. Military Electronics. The applications of digital signal processing in mili-
tary and defense electronics systems use very advanced techniques. Some of the
applications are GPS and navigation, radar and sonar image processing, detection
¹Code- and time-division multiple access. In the following sections we will mention several technical terms and well-known acronyms without any explanation or definition. A few of them will be described in detail in the remaining part of this book.
[Figure: continuous-time signals x1(t) (a) and x2(t) (b).]

Figure 1.2 The continuous-time function f (t) and the discrete-time function f (n).
Binary Number    Decimal Value
0 111            7/8 = 0.875
0 110            6/8 = 0.750
0 101            5/8 = 0.625
0 100            4/8 = 0.500
0 011            3/8 = 0.375
0 010            2/8 = 0.250
0 001            1/8 = 0.125
0 000            0.0 = 0.000
1 000            −0.0 = −0.000
1 001            −1/8 = −0.125
1 010            −2/8 = −0.250
1 011            −3/8 = −0.375
1 100            −4/8 = −0.500
1 101            −5/8 = −0.625
1 110            −6/8 = −0.750
1 111            −7/8 = −0.875
filters. However, we use the terms digital filter and discrete-time system inter-
changeably in this book. Continuous-time signals and systems are also called
analog signals and analog systems, respectively. A system that contains both the
ZOH circuit and the quantizer is called an analog-to-digital converter (ADC),
which will be discussed in more detail in Chapter 7.
Consider an analog signal as shown by the solid line in Figure 1.2. When it
is sampled, let us assume that the discrete-time sequence has values as listed
in the second column of Table 1.2. They are expressed in only six significant
decimal digits and their values, when truncated to four digits, are shown in the
third column. When these values are quantized by the quantizer with four binary
digits (bits), the decimal values are truncated to the values at the finite discrete
levels. In decimal number notation, the values are listed in the fourth column,
and in binary number notation, they are listed in the fifth column of Table 1.2.
The binary values of f (n) listed in the fifth column of Table 1.2 are plotted in Figure 1.4.
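The truncation step described above can be sketched in a few lines of Python (an illustration, not the book's code; the function name and the choice of one sign bit plus three fraction bits are assumptions based on the quantization levels tabulated earlier):

```python
def quantize_sm(x, frac_bits=3):
    """Truncate |x| (assumed |x| < 1) to frac_bits binary fraction bits,
    keeping the sign: sign-magnitude truncation with 1 sign bit and
    frac_bits magnitude bits, matching the levels tabulated above."""
    step = 2.0 ** -frac_bits              # quantization step, 1/8 for 3 bits
    mag = int(abs(x) / step) * step       # truncate the magnitude toward zero
    return -mag if x < 0 else mag

print(quantize_sm(0.875))    # -> 0.875 (already on the 1/8 grid)
print(quantize_sm(0.3012))   # -> 0.25
print(quantize_sm(-0.49))    # -> -0.375
```

Note that truncation always moves the magnitude toward zero, which is why 0.3012 maps to 2/8 rather than the nearer level 3/8.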
A continuous-time signal f (t) or a discrete-time signal f (n) expresses the
variation of a physical quantity as a function of one variable. A black-and-white
photograph can be considered as a two-dimensional signal f (m, r), when the
intensity of the dots making up the picture is measured along the horizontal axis
(x axis; abscissa) and the vertical axis (y axis; ordinate) of the picture plane
and are expressed as a function of two integer variables m and r, respectively.
We can consider the signal f (m, r) as the discretized form of a two-dimensional
signal f (x, y), where x and y are the continuous spatial variables for the hor-
izontal and vertical coordinates of the picture and T1 and T2 are the sampling
periods (measured in meters) along the x and y axes, respectively. In other words, f (x, y)|x=mT1, y=rT2 = f (m, r).

[Table 1.2 columns: n; Decimal Values of f (n); Values of f (n) Truncated to Four Digits; Quantized Values of f (n); Binary Number Form.]

Figure 1.4 Binary values in Table 1.2, after truncation of f (n) to 4 bits.
A black-and-white video signal f (x, y, t) is a 3D function of two spatial
coordinates x and y and one temporal coordinate t. When it is discretized, we
have a 3D discrete signal f (m, p, n). When a color video signal is to be modeled,
it is expressed by a vector of three 3D signals, each representing one of the
three primary colors—red, green, and blue—or their equivalent forms of two
luminance and one chrominance. So this is an example of multivariable function
or a multichannel signal:
F(m, p, n) = [ fr (m, p, n)   fg (m, p, n)   fb (m, p, n) ]^T    (1.1)
We denote the DT sequence by x(n) and also the value of a sample of the
sequence at a particular value of n by x(n). If a sequence has zero values for
n < 0, then it is called a causal sequence. It is misleading to state that the
causal function is a sequence defined for n ≥ 0, because, strictly speaking, a DT
sequence has to be defined for all values of n. Hence it is understood that a causal
sequence has zero-valued samples for −∞ < n < 0. Similarly, when a function
is defined for N1 ≤ n ≤ N2 , it is understood that the function has zero values for
−∞ < n < N1 and N2 < n < ∞. So the sequence x1 (n) in Equation (1.2) has
zero values for 2 < n < ∞ and for −∞ < n < −3. The discrete-time sequence
x2 (n) given below is a causal sequence. In this form for representing x2 (n), it is
implied that x2 (n) = 0 for −∞ < n < 0 and also for 4 < n < ∞:
x2(n) = {1, −2, 0.4, 0.3, 0.4, 0, 0, 0}    (1.3)
         ↑
The length of a finite sequence is often defined by other authors as the number
of samples, which becomes a little ambiguous in the case of a sequence like x2 (n)
given above. The function x2 (n) is the same as x3 (n) given below:
x3(n) = {1, −2, 0.4, 0.3, 0.4, 0, 0, 0, 0, 0, 0}    (1.4)
         ↑
But does it have more samples? So the length of the sequence x3 (n) would be
different from the length of x2 (n) according to the definition above. When a
sequence such as x4 (n) given below is considered, the definition again gives an
ambiguous answer:
x4(n) = {0, 0, 0.4, 0.3, 0.4}    (1.5)
         ↑
and it is plotted in Figure 1.5a. It is often called the unit sample function and also
the unit impulse function. But note that the function δ(n) has a finite numerical
Figure 1.5 Unit pulse functions δ(n), δ(n − 3), and δ(n + 3).
value of one at n = 0 and zero at all other values of integer n, whereas the unit
impulse function δ(t) is defined entirely in a different way.
When the unit pulse function is delayed by k samples, it is described by
δ(n − k) = { 1, n = k;  0, n ≠ k }    (1.7)
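Definition (1.7) translates directly into code; the following Python sketch (illustrative, not from the book) defines the unit pulse and a delayed copy of it:

```python
def delta(n, k=0):
    """Unit pulse delayed by k samples: 1 at n == k, 0 elsewhere."""
    return 1 if n == k else 0

print([delta(n) for n in range(-2, 4)])     # -> [0, 0, 1, 0, 0, 0]
print([delta(n, 3) for n in range(-2, 4)])  # -> [0, 0, 0, 0, 0, 1]
```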
[Figure: the unit step sequences u(n) (a), u(n − 2) (b), u(−n) (c), u(−n + 2) (d), and u(−n − 2) (e).]
We also define the function u(−n), obtained from the time reversal of u(n), as a
sequence that is zero for n > 0. The sequences u(−n + k) and u(−n − k), where
k is a positive integer, are obtained when u(−n) is delayed by k samples and
advanced by k samples, respectively. In other words, u(−n + k) is obtained by
A special discrete-time sequence that we often use is the function defined for
n ≥ 0:
An example of x1(n) = (0.8)^n u(n) is plotted in Figure 1.7a. The function x2(n) = x1(n − 3) = (0.8)^(n−3) u(n − 3) is obtained when x1(n) is delayed by three samples. It is plotted in Figure 1.7b. But the function x3(n) = (0.8)^n u(n − 3) is obtained by chopping off the first three samples of x1(n) = (0.8)^n u(n), and as shown in Figure 1.7c, it is different from x2(n).
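The distinction between delaying x1(n) and chopping off its first three samples can be checked numerically. A brief Python sketch (the sequence names follow the text; the unit step helper u is an assumption):

```python
def u(n):
    """Unit step: 1 for n >= 0, 0 otherwise."""
    return 1 if n >= 0 else 0

ns = range(8)
x1 = [0.8 ** n * u(n) for n in ns]            # x1(n) = (0.8)^n u(n)
x2 = [0.8 ** (n - 3) * u(n - 3) for n in ns]  # x2(n) = x1(n - 3), delayed copy
x3 = [0.8 ** n * u(n - 3) for n in ns]        # first three samples chopped off

assert x2[3] == 1.0                   # the delayed copy restarts at 1 at n = 3
assert abs(x3[3] - 0.8 ** 3) < 1e-12  # the chopped copy continues the decay
assert x2 != x3
print("x2 and x3 differ, as Figure 1.7 shows")
```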
[Figure 1.7: plots of x1(n) (a), x2(n) (b), and x3(n) (c) for 0 ≤ n ≤ 12.]
[Figure: plots of y(n) for 0 ≤ n ≤ 40, panels (a) and (b).]
Example 1.1
[Figure 1.9: plot of x3(n) for 0 ≤ n ≤ 40.]
where K1, K2, K3, and N are integers. The value of N that satisfies this
condition is 20 when K1 = 2, K2 = 5, and K3 = 6. So N = 20 is the fundamental
period of x3 (n). The sequence x3 (n) plotted in Figure 1.9 for 0 ≤ n ≤ 40 shows
that it is periodic with a period of 20 samples.
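The periodicity argument can be verified numerically. The sketch below assumes, for illustration, that x3(n) is a sum of cosines with frequencies 2πK/N for K = 2, 5, 6 and N = 20, consistent with the constants quoted in the text; the signal model itself is an assumption, since the defining equation is not reproduced here:

```python
import math

N = 20
ks = (2, 5, 6)  # K1, K2, K3 from the text (illustrative signal model)
x = lambda n: sum(math.cos(2 * math.pi * k * n / N) for k in ks)

# x(n + 20) == x(n) for every n, so 20 samples is a period ...
assert all(abs(x(n + N) - x(n)) < 1e-9 for n in range(100))
# ... and no smaller positive shift works, so 20 is the fundamental period
assert not any(all(abs(x(n + p) - x(n)) < 1e-9 for n in range(100))
               for p in range(1, N))
print("fundamental period:", N)
```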
Figure 1.10 Plot of cos(ω0 n) for different values of ω0 between 0 and 2π.
We have plotted the sequences v1 (n) and v2 (n) in Figure 1.11, to verify this
property.
Remember that in Chapter 3, we will use the term “folding” to describe new
implications of this property. We will also show in Chapter 3 that a large class
of discrete-time signals can be expressed as the weighted sum of exponential
sequences of the form ej ω0 n , and such a model leads us to derive some powerful
analytical techniques of digital signal processing.
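One consequence of sampling on the integer grid, related to the folding behavior mentioned above, is that cos((2π − ω0)n) equals cos(ω0 n) for all integers n. A brief Python check (the value ω0 = 0.2π is an arbitrary choice, and v1, v2 here are illustrative stand-ins for the sequences in Figure 1.11):

```python
import math

w0 = 0.2 * math.pi   # an arbitrary frequency between 0 and 2*pi
v1 = [math.cos(w0 * n) for n in range(40)]
v2 = [math.cos((2 * math.pi - w0) * n) for n in range(40)]

# cos((2*pi - w0)*n) = cos(2*pi*n - w0*n) = cos(w0*n) at every integer n
assert all(abs(a - b) < 1e-9 for a, b in zip(v1, v2))
print("v1 and v2 are identical on the integer grid")
```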
We have described several ways of characterizing the DT sequences in this
chapter. Using the unit sample function and the unit step function, we can express
the DT sequences in other ways as shown below.
For example, δ(n) = u(n) − u(n − 1) and u(n) = Σ_{m=−∞}^{n} δ(m). A mathematical way of modeling a sequence
x(n) = 2 3 1.5 0.5 −1 4 (1.24)
↑
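These identities, and the modeling of a finite sequence as a weighted sum of shifted unit pulses, can be checked in Python (a sketch; the starting index n = 0 for the samples of (1.24) is an assumption made for illustration):

```python
def u(n):
    return 1 if n >= 0 else 0

def delta(n):
    return 1 if n == 0 else 0

# delta(n) = u(n) - u(n - 1), and u(n) is the running sum of delta(m)
assert all(delta(n) == u(n) - u(n - 1) for n in range(-5, 6))
assert all(u(n) == sum(delta(m) for m in range(-20, n + 1))
           for n in range(-5, 6))

# a finite sequence as a weighted sum of shifted unit pulses; the samples
# of (1.24) are assumed, for illustration, to start at n = 0
samples = [2, 3, 1.5, 0.5, -1, 4]
x = lambda n: sum(c * delta(n - k) for k, c in enumerate(samples))
assert [x(n) for n in range(6)] == samples
print("identities verified")
```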
[Figure 1.11: plots of v1(n) and v2(n) for 0 ≤ n ≤ 40.]
Filtering is the most common form of signal processing used in all the appli-
cations mentioned in Section 1.2, to remove the frequencies in certain parts
and to improve the magnitude, phase, or group delay in some other part(s) of
the spectrum of a signal. The vast literature on filters consists of two parts:
(1) the theory of approximation to derive the transfer function of the filter such
that the magnitude, phase, or group delay approximates the given frequency
response specifications and (2) procedures to design the filters using the hardware
components. Originally filters were designed using inductors, capacitors, and
transformers and were terminated by resistors representing the load and the internal resistance of the source. These were called LC (inductance-capacitance)
filters that admirably met the filtering requirements in the telephone networks for
many decades of the nineteenth and twentieth centuries. When the vacuum tubes
and bipolar junction transistors were developed, the design procedure had to
be changed in order to integrate the models for these active devices into the
filter circuits, but the mathematical theory of filter approximation was being
advanced independently of these devices. In the second half of the twentieth
century, operational amplifiers using bipolar transistors were introduced and fil-
ters were designed without inductors to realize the transfer functions. The design
procedure was much simpler, and device technology also was improved to fabri-
cate resistors in the form of thick-film and later thin-film depositions on ceramic
substrates instead of using printed circuit boards. These filters did not use inductors and transformers and were known as active-RC (resistance-capacitance)
filters. In the second half of the century, switched-capacitor filters were devel-
oped, and they are the most common type of filters being used at present for
audio applications. These filters contained only capacitors and operational ampli-
fiers using complementary metal oxide semiconductor (CMOS) transistors. They
used no resistors and inductors, and the whole circuit was fabricated by the
very large scale integration (VLSI) technology. The analog signals were con-
verted to sampled data signals by these filters and the signal processing was
treated as analog signal processing. But later, these signals came to be treated as discrete-time signals, and the theory of discrete-time systems is currently used to
analyze and design these filters. Examples of an LC filter, an active-RC filter,
and a switched-capacitor filter that realize a third-order lowpass filter function
are shown in Figures 1.12–1.14.
The evolution of digital signal processing has a different history. At the begin-
ning, the development of discrete-time system theory was motivated by a search
for numerical techniques to perform integration and interpolation and to solve
differential equations. When computers became available, the solution of phys-
ical systems modeled by differential equations was implemented by the digital
[Figures 1.12–1.14: circuit diagrams of an LC filter, an active-RC filter, and a switched-capacitor filter, each realizing a third-order lowpass function.]
as the preconditioning filter or antialiasing filter—such that the output of the
lowpass filter attenuates the frequencies considerably beyond a well-chosen fre-
quency so that it can be considered a bandlimited signal. It is this signal that
is sampled and converted to a discrete-time signal and coded to a digital signal
by the analog-to-digital converter (ADC) that was briefly discussed earlier in
this chapter. We consider the discrete-time signal as the input to the digital filter
designed in such a way that it improves the information contained in the original
analog signal or its equivalent discrete-time signal generated by sampling it. A
typical example of a digital lowpass filter is shown in Figure 1.15.
The output of the digital filter is next fed to a digital-to-analog converter
(DAC), as shown in Figure 1.17, which also uses a lowpass analog filter that smooths the sampled-data signal from the DAC and is known as the “smoothing filter.”
Thus we obtain an analog signal yd (t) at the output of the smoothing filter as
shown. It is obvious that compared to the analog filter shown in Figure 1.16, the
circuit shown in Figure 1.17 requires considerably more hardware or involves a
lot more signal processing in order to filter out the undesirable frequencies from
the analog signal x(t) and deliver an output signal yd (t). It is appropriate to
compare these two circuit configurations and determine whether it is possible to
get the output yd (t) that is the same or nearly the same as the output y(t) shown
in Figure 1.16; if so, what are the advantages of digital signal processing instead
of analog signal processing, even though digital signal processing requires more
circuits compared to analog signal processing?
[Figure: block diagram of a digital filter with input x(n), output y(n), adders (Σ), and unit-delay (z−1) elements.]
The basic elements in digital filters are the multipliers, adders, and delay ele-
ments, and they carry out multiplication, addition, and shifting operations on
numbers according to an algorithm determined by the transfer function of the
filters or their equivalent models. (These models will be discussed in Chapter 3
and also in Chapter 7.) They provide more flexibility and versatility compared
to analog filters. The coefficients of the transfer function and the sample values
of the input signal can be stored in the memory of the digital filter hardware or
on the computer (PC, workstation, or the mainframe computer), and by changing
the coefficients, we can change the transfer function of the filter, and by changing the sample values of the input, we can find the response of the filter due
to any number of input signals. This flexibility is not easily available in ana-
log filters.
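The multiply, add, and delay operations described above amount to evaluating a difference equation. A minimal Python sketch (not the book's example) of a direct-form FIR filter shows how changing the stored coefficients changes the filter while the algorithm stays the same:

```python
def fir_filter(b, x):
    """Direct-form FIR filter: y(n) = sum_k b[k] * x(n - k).
    The multipliers are b, the adder is the running sum, and the
    delays are the references to past input samples."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, bk in enumerate(b):
            if n - k >= 0:           # samples before n = 0 are zero
                acc += bk * x[n - k]
        y.append(acc)
    return y

pulse = [1.0, 0.0, 0.0, 0.0]
print(fir_filter([0.5, 0.5], pulse))         # -> [0.5, 0.5, 0.0, 0.0]
# same structure, new stored coefficients, hence a different filter:
print(fir_filter([0.25, 0.5, 0.25], pulse))  # -> [0.25, 0.5, 0.25, 0.0]
```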
The digital filters are easily programmed to do time-shared filtering under a time-division multiplexing scheme, whereas the analog signals cannot be interleaved
between timeslots. Digital filters can be designed to serve as time-varying filters
also by changing the sampling frequency and by changing the coefficients as a
function of time, namely, by changing the algorithm accordingly.
The digital filters have the advantage of high precision and reliability. Very
high precision can be obtained by increasing the number of bits to represent
the coefficients of the filter transfer function and the values of the input signal.
Again we can increase the dynamic range of the signals and transfer function
coefficients by choosing floating-point representation of binary numbers. The
values of the inductors, capacitors, and the parameters of the operational amplifiers and CMOS transistors, and so on, used in the analog filters cannot
achieve such high precision. Even if the analog elements can be obtained with
high accuracy, they are subject to great drift in their value due to manufacturing
tolerance, temperature, humidity, and other parameters—depending on the type
of device technology used—over long periods of service, and hence their filter
response degrades slowly and eventually fails to meet the specifications. In the
case of digital filters, such effects are nonexistent because the wordlength of
the transfer function coefficients, as well as that of the results of addition and multiplication within the filter, does not change with respect to time or any of the environmental
conditions that plague the analog circuits. Consequently, the reliability of digital
filters is much higher than that of analog filters, and this means that they are more
economical in application. Of course, catastrophic failures due to unforeseen
factors are equally possible in both cases. If we are using computers to analyze,
design, and simulate these filters, we can assume even double-precision format
for the numbers that represent filter coefficients and signal samples. We point
out that we can carry out the simulation, analysis, and design of any number
of filters and under many conditions, for example, Monte Carlo analysis, worst-
case analysis, or iterative optimization to test the design before we build the
hardware and test it again and again. Of course, we can do the same in the case
of analog filters or continuous-time systems also (e.g., analog control systems)
using such software as MATLAB and Simulink.2 During the manufacture of
analog filters, we may have to tune each of them to correct for manufacturing
tolerances, but there is no such need to test the accuracy of the wordlength in
digital filters.
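The effect of coefficient wordlength discussed above can be probed by rounding a coefficient to b fractional bits. A small Python sketch (the coefficient value and bit widths are illustrative, not from the book):

```python
def round_to_bits(c, b):
    """Round a coefficient to b fractional bits (a grid of spacing 2**-b)."""
    return round(c * 2 ** b) / 2 ** b

c = 0.3012                     # an arbitrary coefficient value
for b in (4, 8, 16):
    q = round_to_bits(c, b)
    print(b, "bits:", q, "error:", abs(c - q))
```

Each additional bit halves the worst-case representation error, which is why increasing the wordlength raises the precision of the realized filter.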
Data on digital filters can be stored on magnetic tapes, compact disks (CDs),
digital videodisks (DVDs), and optical disks for an indefinite length of time.
They can be retrieved without any degradation or loss of data; a good example
is the music recorded on CDs. In contrast, analog signals deteriorate slowly
as time passes and cannot be retrieved easily without any loss. There is no
easy way of storing the transfer function coefficients that define the analog system, or of feeding the input signals stored on these storage devices to the analog system.
By using digital filters, we can realize many transfer functions that cannot be
realized by analog filters. For example, in addition to those already mentioned
above, we can realize the following characteristics from digital filters:
1. Transition bands much smaller than what can be achieved from analog fil-
ters; an example would be a lowpass filter with a bandwidth of 5000 Hz
and a passband ripple of 0.5 dB, and 100 dB attenuation above 5010 Hz. In
spectrum analyzers and synthesizers, vocoders (voice coders), and similar devices, extremely low tolerances on the magnitude and phase responses
over adjacent passbands are required, and digital filters can be designed to
meet these specifications.
2. Finite duration impulse response and filters with linear phase. Neither of
these characteristics can be achieved by analog filters. Digital filters with
these characteristics are used extensively in many applications.
3. Bandwidths of the order of 5 Hz, or even a fraction thereof, as commonly required to process biomedical or seismic signals.
4. Programmable filters, multirate filters, multidimensional filters, and adap-
tive filters. Programmable filters are used to adjust the frequency-selective
properties of the filters. Multirate filters are used in the processing of
many complex signals with different rates of fluctuation, whereas two-
dimensional digital filters are the filters used in image processing. Adaptive
filters are used invariably when the transmission medium between the trans-
mitter and receiver changes—either as the transmission line is switched to
2 MATLAB and Simulink are registered trademarks of The MathWorks, Inc., Natick, MA. The software is available from The MathWorks, Inc.
ANALOG AND DIGITAL SIGNAL PROCESSING 25
[Figure: cellular frequency-reuse pattern; cells numbered 1-4 repeat across the coverage area.]
using the same frequency are kept sufficiently apart so that there is no cochannel
interference; also the frequencies used in a BTS are separated by 200 kHz. The
base transceiver stations located on towers over a coverage area are connected by
fixed digital lines to a mobile switching center (MSC), and the mobile switching center is connected to the public switched telephone network (PSTN), as shown in Figure 1.19, as well as to the Internet, to which other MSCs are also connected.
When a phone initiates a call to send voice, text, an instant message, or other media, it registers with the network via the BTS closest to its location; the BTS tracks its location and passes this information to the mobile switching center (MSC) over fixed digital lines, and the MSC updates this information continuously as it is received from the BTS. Each mobile phone has a home location register (HLR) and a visitor location register (VLR) assigned to it. The HLR contains information such as the identity of the user, the phone number assigned to the user in the user's home network, and the services to which the user has subscribed, whereas the
VLR contains information about the mobile phone when it is used outside the
home network. So when a mobile phone initiates a call, it sends the information
to the BTS about its identity and so on from the VLR or the HLR depending on
the location of the phone at the time the call originates. The mobile switching
center checks the data from its HLR or VLR to authenticate the call and gives
permission for the phone to access the network. As the caller moves within the
cell, the BTS monitors the strength of the signal between the phone and the
receiver, and if this falls below a certain level, it may transfer control of the
phone to the BTS in the next cell, which may offer a stronger signal. If no such
cell is nearby, the caller is cut off (i.e., will not be able to receive or to send
[Figure 1.19: A cellular network. Cell phones communicate with base transceiver stations (BTS); the base stations are managed by base station controllers (BSC) and connected to the mobile switching center (MSC), which maintains the visitor location register (VLR) and home location register (HLR) and connects to the PSTN.]
a call). As the caller moves from one cell to another cell, the BTS serving it
will transfer control to the BTS in the cell that it has moved to. This is the
main feature that makes mobile telephony possible. All of these operations are
carried out by the computers serving the mobile cellular phone network, and that
technology is known as computer networking technology. It is different from the
theory of digital signal processing. This textbook offers an introduction to the
fundamental theory of digital signal processing, which is used in such techniques
as speech compression, multipath equalization, and echo cancellation, mentioned
in the previous section.
There are some disadvantages and limitations in digital signal processing in
comparison with analog signal processing. By looking at the two circuit config-
urations in Figures 1.16 and 1.17, it is obvious that the digital signal processor
is a more complex system, because of the additional components (the analog lowpass filters, ADC, and DAC) on either side of the digital signal processor, besides the additional control and programming circuits, which are not shown in the
figures. Another disadvantage is that the digital signal processor can process sig-
nals within a range of frequencies that is limited mainly by the highest sampling
TABLE 1.3 Data on a Few Currently Available ADCs

Sampling Rate          Resolution   Maximum Frequency   Power
(samples per second)   (bits)       in Input Signal
96,000                 24           48 kHz              90 mW
96,000                 18           48 kHz              60 mW
96,000                 16           48 kHz              40 mW
65,000,000             14           500 MHz             0.6 W
400,000,000            8            1 GHz               3 W
frequency of the ADC and DAC that are available. As the frequency is increased,
the wordlength of these devices decreases and therefore the accuracy and dynamic
range of the input and output data decrease.
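The last point can be quantified with the standard rule of thumb that an ideal b-bit converter provides about 20 log10(2^b), roughly 6.02b dB, of dynamic range. This is a textbook approximation, not a figure taken from Table 1.3; a minimal sketch applying it to some of the resolutions in the table:

```python
import math

def dynamic_range_db(bits):
    """Approximate dynamic range of an ideal b-bit converter: 20*log10(2**b) ~ 6.02*b dB."""
    return 20 * math.log10(2 ** bits)

# Resolutions from Table 1.3: a 24-bit audio ADC down to an 8-bit GHz-rate ADC.
for b in (24, 16, 14, 8):
    print(f"{b:2d} bits -> {dynamic_range_db(b):6.1f} dB")
```

The drop from roughly 144 dB at 24 bits to roughly 48 dB at 8 bits is what the text means by the accuracy and dynamic range decreasing with wordlength.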
For example, data on a few ADCs currently available are given in Table 1.3 [3].
Hence digital signal processing is restricted to approximately one megahertz,
and analog signal processors are necessary for processing signals above that
frequency, for example, processing of radar signals. In such applications, analog
signal processing is a more attractive and viable choice, and currently a lot of
research is being directed toward what is known as mixed-signal processing. Note
in Table 1.3, that as the resolution (wordlength) for a given signal decreases, the
power consumption also decreases, but that is just the power consumed by the
ADCs; the power increases as the sampling frequency increases, even when
the resolution is decreased. The digital signal processor itself consumes a lot
more power, and hence additional power management circuits are added to the
whole system. In contrast, the analog signal processors consume less power.
The LC filters consume almost negligible power and can operate at frequencies
in the megahertz range. The active-RC filters and switched-capacitor filters are
restricted to the audiofrequency range, but they consume more power than do
the LC filters. It is expected that mixed-signal processing carried out on a single
system or a single chip will boost the maximum frequency of the signal that can
be processed, by a significant range, beyond what is possible with a strictly digital
signal processing system. Therefore we will see more and more applications of DSP with increasing frequencies, because the advantages of DSP outweigh its disadvantages relative to analog signal processing.
1.6 SUMMARY
In this introductory chapter, we defined the discrete-time signal and gave a few
examples of these signals, along with some simple operations that can be applied to them. In particular, we pointed out the difference between a sinusoidal
signal, which is a continuous-time signal, and a discrete-time signal. We dis-
cussed the basic procedure followed to sample and quantize an analog signal
and compared the advantages and disadvantages of digital signal processing with
those of directly processing the analog signal through an analog system, tak-
ing a filter as an example. In doing so, we introduced many terms or acronyms
that we have not explained. Some of them will be explained in great detail in
the following chapter. In Chapter 2 we will discuss several ways of modeling
a discrete-time system and the methods used to find the response in the time
domain, when excitation is effected by the discrete-time signals.
PROBLEMS
1.1 Given two discrete-time signals x1(n) = {0.9 0.5 0.8 1.0 1.5 2.0 0.2} and x2(n) = {1.0 0.3 0.6 0.4}, where ↑ marks the sample at n = 0, sketch each of the following:
(a) y1 (n) = x1 (n) + 3x2 (n)
(b) y2 (n) = x1 (n) − x2 (n − 5)
(c) y3 (n) = x1 (n)x2 (n)
(d) y4 (n) = x1 (−n + 4)
(e) y5 (n) = y4 (n)x2 (n)
(f) y6 (n) = x2 (−n − 3)
(g) y7 (n) = y4 (n)y6 (n)
1.2 Sketch each of the following, where x1 (n) and x2 (n) are the DT sequences
given in Problem 1.1:
(a) v1 (n) = x1 (n)x2 (4 − n)
(b) v2(n) = Σ_{k=−∞}^{∞} x1(k) x2(n − k)
(c) v3(n) = Σ_{k=−∞}^{∞} x2(k) x1(n − k)
(d) v4(n) = Σ_{n=0}^{10} x2²(n)
(e) v5(n) = x1(2n)
(e) v5 (n) = x1 (2n)
1.3 Repeat Problem 1.1 with x1(n) = {1.0 0.8 0.2 −0.2 −0.5 −0.7} and x2(n) = {0.5 0.2 0.1 0.2 0.6}, where ↑ marks the sample at n = 0.
1.4 Repeat Problem 1.2 with x1 (n) and x2 (n) as given in Problem 1.3.
1.5 Find the even and odd parts of x1 (n) and x2 (n) given in Problem 1.1.
Even part of x1(n) is defined as [x1(n) + x1(−n)]/2, and the odd part as [x1(n) − x1(−n)]/2.
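The decomposition defined in this problem is easy to mechanize for any finite-length sequence; a sketch (the sequence used below is illustrative, not the problem's data):

```python
def even_odd(x, origin):
    """Split a finite sequence into even and odd parts:
    xe(n) = [x(n) + x(-n)]/2,  xo(n) = [x(n) - x(-n)]/2.
    x[i] holds x(i - origin); the result is indexed symmetrically about n = 0."""
    n_max = max(origin, len(x) - 1 - origin)

    def val(n):
        i = n + origin
        return x[i] if 0 <= i < len(x) else 0.0

    ns = range(-n_max, n_max + 1)
    xe = [(val(n) + val(-n)) / 2 for n in ns]
    xo = [(val(n) - val(-n)) / 2 for n in ns]
    return xe, xo

# Illustrative causal sequence x = {1, 2, 3} with x(0) = 1.
xe, xo = even_odd([1.0, 2.0, 3.0], origin=0)
print(xe)  # -> [1.5, 1.0, 1.0, 1.0, 1.5]   (even part on n = -2..2)
print(xo)  # -> [-1.5, -1.0, 0.0, 1.0, 1.5] (odd part on n = -2..2)
```

Note that adding the two parts sample by sample recovers x(n), as the definition requires.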
1.6 Repeat Problem 1.5 with x1 (n) and x2 (n) as given in Problem 1.3.
1.7 Find S1(k) = Σ_{n=0}^{k} n and sketch it, for k = 0, 1, . . . , 5.
1.8 Find and sketch S2(k) = Σ_{n=−∞}^{k} x1(n) and S3(k) = Σ_{n=−∞}^{k} x2(n), where x1(n) and x2(n) are as given in Problem 1.1.
1.9 Repeat Problem 1.8 with x1 (n) and x2 (n) as given in Problem 1.3.
1.10 Express the sequences x1 (n) and x2 (n) as a summation of weighted and
shifted unit step functions, where x1 (n) and x2 (n) are as given in Prob-
lem 1.1.
1.11 Repeat Problem 1.10 with the sequences x1 (n) and x2 (n) given in Prob-
lem 1.3.
1.12 Given x(n) = [0.5e^{j(π/6)}]^n [u(n) − u(n − 4)], calculate the values of |x(n)| and sketch them. What are the real and imaginary parts of x(n)?
1.13 Express the real and imaginary parts of x(n) = Σ_{k=0}^{4} [0.5e^{j(π/6)}]^k δ(n − k).
1.14 What are the real and imaginary parts of q(n) = Σ_{n=0}^{∞} (0.3 − j0.4)^n ?
Time-Domain Analysis
and z Transform
A LINEAR, TIME-INVARIANT SYSTEM 33
values of the input, then the system is defined as a causal system. This means
that the output does not depend on the future values of the input. We will discuss
these concepts again in more detail in later sections of this chapter. In this book,
we consider only discrete-time systems that are linear and time-invariant (LTI)
systems.
Another way of defining a system in general is that it is an interconnection
of components or subsystems, where we know the input–output relationship of
these components, and that it is the way they are interconnected that determines
the input–output relationship of the whole system. The model for the DT system
can therefore be described by a circuit diagram showing the interconnection of
its components, which are the delay elements, multipliers, and adders, which are
introduced below. In the following sections we will use both of these definitions
to model discrete-time systems. Then, in the remainder of this chapter, we will
discuss several ways of analyzing the discrete-time systems in the time domain,
and in Chapter 3 we will discuss frequency-domain analysis.
[Figure: an adder, with inputs x3(n) and x4(n) and output y3(n) = x3(n) + x4(n).]
[Figure 2.2: a discrete-time system built from three delay elements z^{−1} in cascade, multipliers b(0), b(1), b(2), b(3), and an adder. The input x(n), with samples x(0), x(1), x(2), x(3), and its delayed versions are weighted to form v0(n), v1(n), v2(n), v3(n), which are summed to produce y(n).]
v1(n) shown on the left is the signal x(n) delayed by T seconds or one sample, so v1(n) = x(n − 1). Similarly, v2(n) and v3(n) are the signals obtained from x(n) when it is delayed by 2T and 3T seconds: v2(n) = x(n − 2) and
v3 (n) = x(n − 3). When we say that the signal x(n) is delayed by T , 2T , or 3T
seconds, we mean that the samples of the sequence are present T , 2T , or 3T
seconds later, as shown by the plots of the signals to the left of v1 (n), v2 (n),
and v3 (n). But at any given time t = nT , the samples in v1 (n), v2 (n), and v3 (n)
are the samples of the input signal that occur T , 2T , and 3T seconds previous to
t = nT . For example, at t = 3T , the value of the sample in x(n) is x(3), and the
values present in v1 (n), v2 (n) and v3 (n) are x(2), x(1), and x(0), respectively.
A good understanding of the operation of the discrete-time system as illustrated
above is essential in analyzing, testing, and debugging the operation of the sys-
tem when available software is used for the design, simulation, and hardware
implementation of the system.
[Figure 2.3: a discrete-time system with three adders, interconnected through delay elements z^{−1} and multipliers with gains 0.8, −0.1, 0.5, 0.3, −0.4, 0.6, −0.2; the outputs of the three adders are denoted y1(n), y2(n), y3(n).]
It is easily seen that the output signal in Figure 2.2 is
y(n) = b(0)x(n) + b(1)x(n − 1) + b(2)x(n − 2) + b(3)x(n − 3)
where b(0), b(1), b(2), b(3) are the gain constants of the multipliers. It is also
easy to see from the last expression that the output signal is the weighted sum of
the current value and the previous three values of the input signal. So this gives
us an input–output relationship for the system shown in Figure 2.2.
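The weighted-sum relationship just described can be sketched directly in code; the delay line is simulated by indexing into past input samples, and the coefficient and input values below are illustrative, not taken from the figure:

```python
def fir_filter(x, b):
    """Weighted sum of the current and previous len(b)-1 input samples:
    y(n) = b[0]*x(n) + b[1]*x(n-1) + ...  (x(n-k) is taken as 0 for n-k < 0)."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, bk in enumerate(b):
            if n - k >= 0:
                acc += bk * x[n - k]
        y.append(acc)
    return y

# Four multipliers b(0)..b(3), as in the structure of Figure 2.2.
b = [1.0, 0.5, 0.25, 0.125]
x = [1.0, 2.0, 3.0, 4.0]
print(fir_filter(x, b))  # -> [1.0, 2.5, 4.25, 6.125]
```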
Now we consider another example of a discrete-time system, shown in Figure 2.3. A fundamental rule is to express the output of each adder, generating as many equations as there are adders in the circuit diagram of the discrete-time system. (This step is similar to writing the node equations for an analog electric circuit.) Denoting the outputs of the three adders as y1(n), y2(n), and y3(n), we get
These three equations give us a mathematical model derived from the model
shown in Figure 2.3 that is schematic in nature. We can also derive (draw
the circuit realization) the model shown in Figure 2.3 from the model given in
Equations (2.1). We will soon describe a method to obtain a single input–output
relationship between the input x(n) and the output y(n) = y3 (n), after eliminat-
ing the internal variables y1 (n) and y2 (n); that relationship constitutes the third
model for the system. The general form of such an input–output relationship is
y(n) = −Σ_{k=1}^{N} a(k) y(n − k) + Σ_{k=0}^{M} b(k) x(n − k)   (2.2)
Σ_{k=0}^{N} a(k) y(n − k) = Σ_{k=0}^{M} b(k) x(n − k);   a(0) = 1   (2.3)
Equation (2.2) shows that the output y(n) is determined by the weighted sum of the previous N values of the output and the weighted sum of the current value and the previous M values of the input. Very often the coefficient a(0) as shown in (2.3) is normalized to unity.
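The recursion (2.2) translates directly into code. A minimal sketch with hypothetical coefficients and zero initial states (a[0] is kept at the normalized value 1):

```python
def difference_eq(x, a, b):
    """Recursive solution of Eq. (2.2):
    y(n) = -sum_{k=1}^{N} a[k]*y(n-k) + sum_{k=0}^{M} b[k]*x(n-k),
    with zero initial states; a[0] is taken as 1 and is not used in the loop."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(1, len(a)):          # feedback terms
            if n - k >= 0:
                acc -= a[k] * y[n - k]
        for k in range(len(b)):             # feedforward terms
            if n - k >= 0:
                acc += b[k] * x[n - k]
        y.append(acc)
    return y

# First-order illustration: y(n) = 0.5*y(n-1) + x(n), driven by a unit step.
print(difference_eq([1.0] * 5, a=[1.0, -0.5], b=[1.0]))
# -> [1.0, 1.5, 1.75, 1.875, 1.9375]
```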
Soon we will introduce the z transform to represent the discrete-time signals
in the set of equations above, thereby generating more models for the system, and
from these models in the z domain, we will derive the transfer function H (z−1 )
and the unit sample response or the unit impulse response h(n) of the system.
From any one of these models in the z domain, we can derive the other models in
the z domain and also the preceding models given in the time domain. It is very
important to know how to obtain any one model from any other given model
so that the proper tools can be used efficiently, for analysis of the discrete-time
system. In this chapter we will elaborate on the different models of a discrete-
time system and then discuss many tools or techniques for finding the response of
discrete-time systems when they are excited by different kinds of input signals.
resulting output is called the unit impulse response h(n) (or more appropriately
the unit sample response) and is infinite in length.
Consider a system in which the multiplier constants a(k) = 0 for k =
1, 2, 3, . . . , N . Then Equation (2.2) reduces to the form
y(n) = Σ_{k=0}^{M} b(k) x(n − k)   (2.4)
= b(0)x(n) + b(1)x(n − 1) + b(2)x(n − 2) + · · · + b(M)x(n − M)
Let us find the unit impulse response of this system, using the recursive algo-
rithm, as before:
y(0) = b(0)
y(1) = b(1)
y(2) = b(2)
y(3) = b(3)
...
y(M) = b(M)
This example leads to the following two observations: (1) the samples of the unit
impulse response are the same as the coefficients b(n), and (2) therefore the unit
impulse response h(n) of the system is finite in length.
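The two observations can be confirmed numerically: applying x(n) = δ(n) to the recursion (2.4) returns the multiplier coefficients, followed by zeros. A sketch with illustrative b(k):

```python
def impulse_response_fir(b, length):
    """Apply x(n) = delta(n) to y(n) = sum_k b[k]*x(n-k) (Eq. 2.4); the output
    reproduces the coefficients b(n) and is zero for n > M, so h(n) is finite."""
    x = [1.0] + [0.0] * (length - 1)        # unit impulse
    y = []
    for n in range(length):
        y.append(sum(bk * x[n - k] for k, bk in enumerate(b) if n - k >= 0))
    return y

b = [0.3, -0.1, 0.6, 0.2]                   # illustrative b(0)..b(3)
print(impulse_response_fir(b, 6))           # -> [0.3, -0.1, 0.6, 0.2, 0.0, 0.0]
```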
So we have shown without proof but by way of example that the unit impulse
response of the system modeled by an equation of the form (2.2) is infinite in
length, and hence such a system is known as an infinite impulse response (IIR)
filter, whereas the system modeled by an equation of the form (2.4), which has
a unit impulse response that is finite in length, is known as the finite impulse
response (FIR) filter. We will have a lot more to say about these two types of
filters later in the book. Equation (2.3) is the ordinary, linear, time-invariant,
difference equation of N th order, which, if necessary, can be rewritten in the
recursive difference equation form (2.2). The equation can be solved in the time
domain by the following four methods:
We should point out that methods 1–3 require that the DT system be modeled by a
single-input, single-output equation. If we are given a large number of difference
equations describing the DT system, then methods 1–3 are not suitable for finding
the output response in the time domain. Method 4, using the z transform, is the
only powerful and general method to solve such a problem, and hence it will be
treated in greater detail and illustrated by several examples in this chapter. Given
a model in the z-transform domain, we will show how to derive the recursive
algorithm and the unit impulse response h(n) so that the convolution sum can be
applied. So the z-transform method is used most often for time-domain analysis,
and the frequency-domain analysis is closely related to this method, as will be
discussed in the next chapter.
y(n) = Σ_{k=0}^{∞} x(k) h(n − k)   (2.5)
For example, even though we know that h(n) = 0 for −∞ < n < 0, if the input sequence x(n) is defined for −M < n < ∞, then we have to use the formula y(n) = Σ_{k=−∞}^{∞} x(k) h(n − k). If x(n) = 0 for −∞ < n < 0, then we have to use the formula y(n) = Σ_{k=0}^{∞} x(k) h(n − k).
To understand the procedure for implementing the summation formula, we
choose a graphical method in the following example. Remember that the recur-
sive algorithm cannot be used if the DT system is described by more than one
difference equation, and the convolution sum requires that we have the unit pulse
response of the system. We will find that these limitations are not present when
we use the z-transform method for analyzing the DT system performance in the
time domain.
Example 2.1
Given an h(n) and x(n), we change the independent variable from n to k and
plot h(k) and x(k) as shown in Figure 2.4a,b. Note that the input sequence is
defined for −2 ≤ k ≤ 5 but h(k) is a causal sequence defined for 0 ≤ k ≤ 4. Next
we do a time reversal and plot h(−k) in Figure 2.4c. When n ≥ 0, we obtain
h(n − k) by delaying (or shifting to the right) h(−k) by n samples; when n < 0,
the sequence h(−k) is advanced (or shifted to the left). For every value of n,
we have h(n − k) and x(k) and we multiply the samples of h(n − k) and x(k)
at each value of k and add the products.
For our example, we show the summation of the product when n = −2 in
Figure 2.4d, and show the summation of the product when n = 3 in Figure 2.4e.
The output y(−2) has only one nonzero product, x(−2)h(0). But the output sample y(3) is equal to x(0)h(3) + x(1)h(2) + x(2)h(1) + x(3)h(0).
But note that when n > 9, and n < −2, the sequences h(n − k) and x(k) do
not have overlapping samples, and therefore y(n) = 0 for n > 9 and n < −2.
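The fold-shift-multiply-add procedure can be sketched in code. The index bookkeeping matches Example 2.1 (x(n) on −2 ≤ n ≤ 5, causal h(n) on 0 ≤ n ≤ 4), but the sample values are illustrative, since the figure's numbers are not reproduced here:

```python
def convolve(x, x_start, h, h_start):
    """Convolution sum y(n) = sum_k x(k)*h(n-k) for finite sequences.
    x[i] holds x(x_start + i); likewise for h. Returns (y, y_start)."""
    y_start = x_start + h_start
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xv in enumerate(x):
        for j, hv in enumerate(h):
            y[i + j] += xv * hv
    return y, y_start

x = [1, 2, 1, 0, 1, 2, 1, 1]   # x(-2)..x(5), illustrative values
h = [1, 1, 1, 1, 1]            # h(0)..h(4), illustrative values
y, n0 = convolve(x, -2, h, 0)
print(n0, len(y))              # -> -2 12
```

The result is nonzero only for −2 ≤ n ≤ 9, in agreement with the observation that the folded, shifted h(n − k) and x(k) have no overlapping samples for n > 9 or n < −2.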
Example 2.2
As another example, let us assume that the input sequence x(n) and also the
unit impulse response h(n) are given for 0 ≤ n < ∞. Then output y(n) given
[Figure 2.4: (a) x(k), defined for −2 ≤ k ≤ 5; (b) h(k), defined for 0 ≤ k ≤ 4; (c) the folded sequence h(−k); (d) h(−2 − k); (e) h(3 − k); (f) h(10 − k).]
2.2.1 Definition
In many textbooks, the z transform of a sequence x(n) is simply defined as
Z[x(n)] = X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n}   (2.8)
and the inverse z transform as the contour integral
Z^{−1}[X(z)] = x(n) = (1/2πj) ∮_C X(z) z^{n−1} dz   (2.9)
Example 2.3
There is only one term in the z transform of δ(n), which is one when n = 0. Hence Z[δ(n)] = 1.
z TRANSFORM THEORY 43
Example 2.4
It is obvious that the region of convergence for the z transform of δ(n) is the
entire z plane.
Example 2.5
1 It can be shown that the z transform for the anticausal sequence f(n) = −α^n u(−n − 1) is F(z) = −Σ_{n=−∞}^{−1} α^n z^{−n}, which also converges to z/(z − α), which is the same as X(z) in (2.18), but its ROC is |z| < α. So the inverse z transform of a function is not unique; only when we know its ROC does the inverse z transform become unambiguous.
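The footnote's point can be checked numerically: partial sums of the causal series (for α^n u(n)) and of the anticausal series both approach z/(z − α), but each only at points inside its own ROC. A sketch, with α and the test points chosen for illustration:

```python
def causal_sum(alpha, z, terms=3000):
    """Partial sum of sum_{n=0}^{inf} alpha^n z^-n; converges for |z| > |alpha|."""
    return sum((alpha / z) ** n for n in range(terms))

def anticausal_sum(alpha, z, terms=3000):
    """Partial sum of -sum_{n=-inf}^{-1} alpha^n z^-n; converges for |z| < |alpha|."""
    return -sum((z / alpha) ** n for n in range(1, terms))

alpha = 0.8
closed = lambda z: z / (z - alpha)
print(abs(causal_sum(alpha, 2.0) - closed(2.0)))      # |z| > alpha: ~0
print(abs(anticausal_sum(alpha, 0.4) - closed(0.4)))  # |z| < alpha: ~0
```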
44 TIME-DOMAIN ANALYSIS AND z TRANSFORM
Example 2.6
and its region of convergence is the region outside the unit circle in the z plane:
|z| > 1.
Example 2.7
Given a sequence x(n) = r^n cos(θn)u(n), where 0 < r ≤ 1, to derive its z transform, we express it as follows:
x(n) = r^n [(e^{jθn} + e^{−jθn})/2] u(n)
     = [(r^n e^{jθn})/2 + (r^n e^{−jθn})/2] u(n)
     = (1/2)(re^{jθ})^n u(n) + (1/2)(re^{−jθ})^n u(n)   (2.20)
Now one can use the previous results and obtain the z transform of x(n) = r^n cos(θn)u(n) as
X(z) = z(z − r cos(θ)) / (z² − (2r cos(θ))z + r²)   (2.21)
and its region of convergence is given by |z| > r. Of course, if the sequence given is x(n) = e^{−an} cos(ω₀n)u(n), we simply substitute e^{−a} for r in (2.21) to get the z transform of x(n). It is useful to have a list of z transforms for
discrete-time sequences that are commonly utilized; they are listed in Table 2.1.
It is also useful to know the properties of z transforms that can be used to
generate and add more z transforms to Table 2.1, as illustrated by the following
example.
nx(n)u(n) ⇐⇒ −z dX(z)/dz   (2.22)
TABLE 2.1 z Transform Pairs

No.  x(n), n ≥ 0                                           X(z)
1    δ(n)                                                  1
2    δ(n − m)                                              z^{−m}
3    u(n)                                                  z/(z − 1)
4    a u(n)                                                az/(z − 1)
5    a^n                                                   z/(z − a)
6    na^n                                                  az/(z − a)²
7    n²                                                    z(z + 1)/(z − 1)³
8    n³                                                    z(z² + 4z + 1)/(z − 1)⁴
9    n²a^n                                                 az(z + a)/(z − a)³
10   [n(n − 1)/2!] a^{n−2}                                 z/(z − a)³
11   [n(n − 1)(n − 2)···(n − m + 2)/(m − 1)!] a^{n−m+1}    z/(z − a)^m
12   r^n e^{jθn}                                           z/(z − re^{jθ})
13   r^n cos(θn)                                           z(z − r cos(θ))/(z² − (2r cos(θ))z + r²)
14   r^n sin(θn)                                           rz sin(θ)/(z² − (2r cos(θ))z + r²)
15   e^{−αn} cos(θn)                                       z(z − e^{−α} cos(θ))/(z² − (2e^{−α} cos(θ))z + e^{−2α})
Proof: X(z) = Σ_{n=0}^{∞} x(n) z^{−n}. Differentiating both sides with respect to z, we get
dX(z)/dz = Σ_{n=0}^{∞} x(n)(−n z^{−n−1}) = −z^{−1} Σ_{n=0}^{∞} n x(n) z^{−n}
−z dX(z)/dz = Σ_{n=0}^{∞} n x(n) z^{−n} = Z[n x(n) u(n)]
Now consider the z transform given by (2.18) and also listed in Table 2.1:
x(n) = a^n u(n) ⇐⇒ z/(z − a) = X(z)   (2.23)
Using this differentiation property recursively, we can show that
na^n u(n) ⇐⇒ az/(z − a)²   (2.24)
and
n²a^n u(n) ⇐⇒ az(z + a)/(z − a)³   (2.25)
(1/2)(n + 1)(n + 2) a^n u(n) ⇐⇒ z³/(z − a)³   (2.26)
The transform pair given by (2.26) is an addition to Table 2.1. Indeed, we can find the z transforms of n³a^n u(n), n⁴a^n u(n), . . . , using (2.22), and then find the z transforms of
(1/3!)(n + 1)(n + 2)(n + 3) a^n u(n)
(1/4!)(n + 1)(n + 2)(n + 3)(n + 4) a^n u(n)   (2.27)
...
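The new pair (2.26) can be spot-checked numerically by comparing a long partial sum of its defining series with the closed form at a point inside the ROC |z| > |a| (the values of a and z below are illustrative):

```python
def series(a, z, terms=2000):
    """Partial sum of  sum_n (1/2)(n+1)(n+2) a^n z^-n."""
    return sum(0.5 * (n + 1) * (n + 2) * a ** n * z ** (-n) for n in range(terms))

def closed(a, z):
    """Closed form from pair (2.26): z^3 / (z - a)^3."""
    return z ** 3 / (z - a) ** 3

a, z = 0.5, 2.0   # |z| > |a|, inside the region of convergence
print(abs(series(a, z) - closed(a, z)))  # ~0
```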
or
x(n − m)u(n − m) ⇐⇒ z^{−m} X(z) + Σ_{n=0}^{m−1} x(n − m) z^{−n}   (2.31)
Example 2.8
Consider the difference equation y(n) − 0.5y(n − 1) = 5x(n − 1), where
x(n) = (0.2)^n u(n)
y(−1) = 2
Let Z[y(n)] = Y(z). From (2.28), we have Z[y(n − 1)] = z^{−1}Y(z) + y(−1) and Z[x(n − 1)] = z^{−1}X(z) + x(−1), where X(z) = z/(z − 0.2) and x(−1) = 0, since x(n) is zero for −∞ < n < 0. Substituting these results, we get
Y(z) − 0.5[z^{−1}Y(z) + y(−1)] = 5[z^{−1}X(z) + x(−1)]
Y(z) − 0.5[z^{−1}Y(z) + y(−1)] = 5z^{−1}X(z)
Y(z)[1 − 0.5z^{−1}] = 0.5y(−1) + 5z^{−1}X(z)   (2.34)
Y(z) = 0.5y(−1)/(1 − 0.5z^{−1}) + [5z^{−1}/(1 − 0.5z^{−1})]X(z)
Y(z) = 0.5y(−1)z/(z − 0.5) + [5/(z − 0.5)]X(z)
Substituting y(−1) = 2 and X(z) = z/(z − 0.2) in this last expression, we get
Y(z) = z/(z − 0.5) + 5z/[(z − 0.5)(z − 0.2)]   (2.35)
     = Y0i(z) + Y0s(z)   (2.36)
where Y0i (z) is the z transform of the zero input response and Y0s (z) is the
z transform of the zero state response as explained below.
Now we have to find the inverse z transform of the two terms on the right side of (2.35). The inverse transform of the first term Y0i(z) = z/(z − 0.5) is easily found as y0i(n) = (0.5)^n u(n). Instead of finding the inverse z transform
of the second term by using the complex integral given in (2.9), we resort to
the same approach as used in solving differential equations by means of Laplace
transform, namely, by decomposing Y0s (z) into its partial fraction form to obtain
the inverse z transform of each term. We have already derived the z transform of Ra^n u(n) as Rz/(z − a), and it is easy to write the inverse z transform of terms like R_k z/(z − a_k). Hence we should expand the second term in the form
Y0s(z) = R1 z/(z − 0.5) + R2 z/(z − 0.2)   (2.37)
by a slight modification to the partial fraction expansion procedure that we are familiar with. Dividing Y0s(z) by z, we get
Y0s(z)/z = 5/[(z − 0.5)(z − 0.2)] = R1/(z − 0.5) + R2/(z − 0.2)
Now we can easily find the residues R1 and R2 using the normal procedure and
get
R1 = [(z − 0.5) Y0s(z)/z]_{z=0.5} = [5/(z − 0.2)]_{z=0.5} = 16.666
R2 = [(z − 0.2) Y0s(z)/z]_{z=0.2} = [5/(z − 0.5)]_{z=0.2} = −16.666
Therefore
Y0s(z)/z = 5/[(z − 0.5)(z − 0.2)] = 16.666/(z − 0.5) − 16.666/(z − 0.2)
Y0s(z) = 16.666z/(z − 0.5) − 16.666z/(z − 0.2)   (2.38)
Now we obtain the inverse z transform y0s(n) = 16.666[(0.5)^n − (0.2)^n]u(n). The total output satisfying the given difference equation is therefore given as
y(n) = y0i(n) + y0s(n) = {(0.5)^n + 16.666[(0.5)^n − (0.2)^n]} u(n)
     = [17.666(0.5)^n − 16.666(0.2)^n] u(n)
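The result can be verified by running the recursion directly. The difference equation implied by (2.34) is y(n) − 0.5y(n − 1) = 5x(n − 1), with y(−1) = 2 and x(n) = (0.2)^n u(n); a sketch of the check (using the exact residue 5/0.3 = 50/3 in place of the rounded 16.666):

```python
def y_closed(n):
    """Total solution derived above: y(n) = (1 + 50/3)(0.5)^n - (50/3)(0.2)^n."""
    return (1 + 50.0 / 3) * 0.5 ** n - (50.0 / 3) * 0.2 ** n

# Recursion y(n) = 0.5*y(n-1) + 5*x(n-1), started from the initial state y(-1) = 2.
y_prev = 2.0
for n in range(8):
    x_prev = 0.2 ** (n - 1) if n - 1 >= 0 else 0.0
    y_n = 0.5 * y_prev + 5.0 * x_prev
    assert abs(y_n - y_closed(n)) < 1e-9
    y_prev = y_n
print("recursion matches the closed form")
```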
Thus the modified partial fraction procedure to find the inverse z transform of any function F(z) is to divide the function F(z) by z, expand F(z)/z into its normal partial fraction form, and then multiply each of the terms by z to get F(z) in the form Σ_{k=1}^{K} R_k z/(z − a_k). From this form, the inverse z transform f(n) is obtained as f(n) = Σ_{k=1}^{K} R_k (a_k)^n u(n).
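This procedure is easy to mechanize when all the poles are simple. A sketch (the helper names are our own), applied to the zero state term Y0s(z)/z = 5/[(z − 0.5)(z − 0.2)] from the example above:

```python
def residues(poles, num):
    """For F(z)/z = num(z) / prod_k (z - p_k) with simple poles, compute
    R_k = num(p_k) / prod_{j != k} (p_k - p_j), so that F(z) = sum_k R_k z/(z - p_k)."""
    R = []
    for k, pk in enumerate(poles):
        denom = 1.0
        for j, pj in enumerate(poles):
            if j != k:
                denom *= pk - pj
        R.append(num(pk) / denom)
    return R

def inverse(R, poles, n):
    """f(n) = sum_k R_k (p_k)^n u(n)."""
    return sum(r * p ** n for r, p in zip(R, poles)) if n >= 0 else 0.0

R = residues([0.5, 0.2], lambda z: 5.0)
print(R)                              # R1 ~ 16.666, R2 ~ -16.666
print(inverse(R, [0.5, 0.2], 2))      # y0s(2)
```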
However, there is an alternative method: when a transfer function expressed in the form
H(z^{−1}) = N(z^{−1}) / Π_{k=1}^{K} (1 − a_k z^{−1})
has only simple poles, it can be expanded as
H(z^{−1}) = Σ_{k=1}^{K} R_k/(1 − a_k z^{−1})   (2.39)
where
R_k = [H(z^{−1})(1 − a_k z^{−1})]_{z=a_k}
Then the inverse z transform is the sum of the inverse z transforms of all the terms in (2.39): h(n) = Σ_{k=1}^{K} R_k (a_k)^n u(n). We prefer the first method because we are already familiar with the partial fraction expansion of H(s) and know how to find the residues when it has multiple poles in the s plane. This method will be illustrated by several examples that are worked out in the following pages.
to be zero is called the zero input response and is determined only by the initial
conditions given. The initial conditions specified with the difference equation are
better known as initial states. (But the term state has a specific definition in the
theory of linear discrete-time systems, and the terminology of initial states is consistent with this definition.) When the initial state y(−1) in the problem presented above is assumed to be zero, the z transform of the total response Y(z) contains only the term Y0s(z) = [5/(z − 0.5)]X(z) = 5z/[(z − 0.5)(z − 0.2)], which gives a response y0s(n) = 16.666[(0.5)^n − (0.2)^n]u(n). This is the response y(n) when
the initial condition or the initial state is zero and hence is called the zero state
response. The zero state response is the response due to input only, and the zero
input response is due to the initial states (initial conditions) only. We repeat it
in order to avoid the common confusion that occurs among students! The zero
input response is computed by neglecting the input function and computing the
response due to initial states only, and the zero state response is computed by
neglecting the initial states (if they are given) and computing the response due to
input function only. Students are advised to know the exact definition and mean-
ing of the zero input response and zero state response, without any confusion
between these two terms.
Let us denote the solution to this equation as the output y(n) when an input x(n)
is applied. Such a system is said to be time-invariant if the output is y(n − N )
when the input is x(n − N ), which means that if the input sequence is delayed
by N samples, the output also is delayed by N samples. For this reason, a time-
invariant discrete-time system is also called a shift-invariant system. Again, from the preceding discussion about linearity of a system, it should be obvious that the outputs y(n) and y(n − N) must be chosen as the zero state response only or
the zero input response only, when the abovementioned test for a system to be
time-invariant is applied.
We will consider a few more examples to show how to solve a linear shift-
invariant difference equation, using the z transform in this section, and later we
show how to solve a single-input, single-output difference equation using the
classical method. Students should be familiar with the procedure for decompos-
ing a proper, rational function of a complex variable in its partial fraction form,
when the function has simple poles, multiple poles, and pairs of complex conju-
gate poles. A “rational” function in a complex variable is the ratio between two
polynomials with real coefficients, and a “proper” function is one in which the
degree of the numerator polynomial is less than that of the denominator polyno-
mial. It can be shown that the degree of the numerator in the transfer function
H (s) of a continuous-time system is at most equal to that of its denominator. In
contrast, it is relevant to point out that the transfer function of a discrete-time
system when expressed in terms of the variable z−1 need not be a proper func-
tion. For example, let us consider the following example of an improper function
of the complex variable z−1 :
H(z^{−1}) = z^{−2} + 0.2z^{−1} − 4.0 + (−4.8z^{−1} + 8.0)/(z^{−2} − z^{−1} + 2.0)   (2.41)
          = z^{−2} + 0.2z^{−1} − 4.0 + H1(z^{−1})   (2.42)
Since the inverse z transform of z^{−m} is δ(n − m), we get the inverse z transform of the first three terms as δ(n − 2) + 0.2δ(n − 1) − 4.0δ(n), and we add it to the inverse z transform of H1(z^{−1}), which will be derived below.
Let us choose the second term on the right side of (2.41) as an example of a
transfer function with complex poles:
H1(z^{−1}) = (−4.8z^{−1} + 8.0)/(z^{−2} − z^{−1} + 2.0)
Multiplying the numerator and denominator by z², and factorizing the denominator, we find that H1(z^{−1}) has a complex conjugate pair of poles at 0.25 ± j0.6614:
H1(z) = 8(z² − 0.6z)/(2z² − z + 1)
      = 8(z² − 0.6z)/[2(z² − 0.5z + 0.5)]
      = 4(z² − 0.6z)/[(z − 0.25 − j0.6614)(z − 0.25 + j0.6614)]
Let us expand H1(z)/z into its modified partial fraction form:
H1(z)/z = (2 + j1.0583)/(z − 0.25 − j0.6614) + (2 − j1.0583)/(z − 0.25 + j0.6614)
It is preferable to express the residues and the poles in their exponential form and then multiply by z to get
H1(z) = 2.2627e^{j0.4867} z/(z − 0.7071e^{j1.209}) + 2.2627e^{−j0.4867} z/(z − 0.7071e^{−j1.209})
The inverse z transform of H1(z) is given by
h1(n) = [2.2627e^{j0.4867}(0.7071e^{j1.209})^n + 2.2627e^{−j0.4867}(0.7071e^{−j1.209})^n] u(n)
      = [2.2627(0.7071)^n e^{j1.209n} e^{j0.4867} + 2.2627(0.7071)^n e^{−j1.209n} e^{−j0.4867}] u(n)
      = 2.2627(0.7071)^n [e^{j(1.209n+0.4867)} + e^{−j(1.209n+0.4867)}] u(n)
      = 2.2627(0.7071)^n {2 cos(1.209n + 0.4867)} u(n)
      = 4.5254(0.7071)^n {cos(1.209n + 0.4867)} u(n)
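Dividing the numerator and denominator of H1(z) by 2z² gives H1(z^{−1}) = (4 − 2.4z^{−1})/(1 − 0.5z^{−1} + 0.5z^{−2}), so the unit sample response also satisfies the recursion h(n) = 0.5h(n − 1) − 0.5h(n − 2) + 4δ(n) − 2.4δ(n − 1). This gives an independent numerical check of the closed form just derived (the constants below are the rounded values from above, hence the loose tolerance):

```python
import math

def h1_closed(n):
    """Closed form derived above: h1(n) = 4.5254 (0.7071)^n cos(1.209 n + 0.4867)."""
    return 4.5254 * 0.7071 ** n * math.cos(1.209 * n + 0.4867)

# Run the recursion implied by H1(z^-1) and compare term by term.
h = []
for n in range(10):
    v = 4.0 * (n == 0) - 2.4 * (n == 1)
    if n >= 1:
        v += 0.5 * h[n - 1]
    if n >= 2:
        v -= 0.5 * h[n - 2]
    h.append(v)
    assert abs(v - h1_closed(n)) < 1e-2   # rounded constants in the closed form
print(h[:4])
```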
Example 2.10
Consider the difference equation
y(n) − 0.3y(n − 1) + 0.02y(n − 2) = x(n) − 0.1x(n − 1)
where
x(n) = (−0.2)^n u(n);   y(−1) = 1;   y(−2) = 0.6
Using the z transform for each term in this difference equation, we get
Y(z) − 0.3[z^{−1}Y(z) + y(−1)] + 0.02[z^{−2}Y(z) + y(−1)z^{−1} + y(−2)] = X(z) − 0.1[z^{−1}X(z) + x(−1)]
We know X(z) = z/(z + 0.2) and x(−1) = 0. Substituting these and the given initial states, we get
Y(z)[1 − 0.3z^{−1} + 0.02z^{−2}] = [0.288 − 0.02z^{−1}] + X(z)[1 − 0.1z^{−1}]
When the input x(n) is zero, X(z) = 0; hence the second term on the right side
is zero, leaving only the first term due to initial conditions given. It is the z
transform of the zero input response y0i (n).
The inverse z transform of this first term gives the zero input response y0i(n). The second term on the right side,
X(z)[1 − 0.1z^{−1}] / [1 − 0.3z^{−1} + 0.02z^{−2}] = Y0s(z)
gives the response when the initial conditions (also called the initial states) are zero, and hence its inverse z transform is the zero state response y0s(n).
Substituting the values of the initial states and for X(z), we obtain
Y0i(z) = [0.288 − 0.02z^{−1}]/[1 − 0.3z^{−1} + 0.02z^{−2}] = [0.288z² − 0.02z]/[z² − 0.3z + 0.02] = z[0.288z − 0.02]/[(z − 0.1)(z − 0.2)]
and
Y0s(z) = X(z)[1 − 0.1z^{−1}]/[1 − 0.3z^{−1} + 0.02z^{−2}] = [z/(z + 0.2)] · [1 − 0.1z^{−1}]/[1 − 0.3z^{−1} + 0.02z^{−2}]
       = z[z² − 0.1z]/[(z + 0.2)(z² − 0.3z + 0.02)] = z²(z − 0.1)/[(z + 0.2)(z − 0.1)(z − 0.2)]
We notice that there is a pole and a zero at z = 0.1 in the second term on the
right, which cancel each other, and Y0s (z) reduces to z2 /[(z + 0.2)(z − 0.2)]. We
divide Y0i (z) by z, expand it into its normal partial fraction form
Y0i(z)/z = [0.288z − 0.02]/[(z − 0.1)(z − 0.2)] = 0.376/(z − 0.2) − 0.088/(z − 0.1)
and multiply by z to get
Y0i(z) = 0.376z/(z − 0.2) − 0.088z/(z − 0.1)
Similarly, we expand Y0s(z)/z = z/[(z + 0.2)(z − 0.2)] in the form 0.5/(z + 0.2) + 0.5/(z − 0.2) and get
Y0s(z) = z²/[(z + 0.2)(z − 0.2)] = 0.5z/(z + 0.2) + 0.5z/(z − 0.2)
Therefore, the zero input response is y0i(n) = [0.376(0.2)^n − 0.088(0.1)^n]u(n) and the zero state response is y0s(n) = 0.5[(−0.2)^n + (0.2)^n]u(n).
USING z TRANSFORM TO SOLVE DIFFERENCE EQUATIONS 55
Here we discuss the case of a function that has multiple poles and expand it into
its partial fraction form. Let
G(z)/z = N(z) / [(z − z0)^r (z − z1)(z − z2)(z − z3) · · · (z − zm)]
After obtaining the residues and the coefficients, we multiply the expansion by z:
G(z) = C0 z/(z − z0)^r + C1 z/(z − z0)^(r−1) + · · · + Cr−1 z/(z − z0)
       + k1 z/(z − z1) + k2 z/(z − z2) + · · · + km z/(z − zm)
Then we find the inverse z transform of each term to get g(n), using the z
transform pairs given in Table 2.1. To illustrate this method, we consider the function G(z) given in (2.44).
56 TIME-DOMAIN ANALYSIS AND z TRANSFORM
Therefore we have
G(z) = −2z/(z − 2)^3 − z/(z − 2)^2 + 3z/(z − 2) − 3z/(z − 1)    (2.45)
Now note that the inverse z transform of az/(z − a)^2 is easily obtained from Table 2.1 as na^n u(n). We now have to reduce the term −z/(z − 2)^2 to −(1/2) · 2z/(z − 2)^2 so that its inverse z transform is correctly written as −(1/2)n2^n u(n). From the transform pair 6 in Table 2.1, we get the inverse z transform of z/(z − a)^3 as [n(n − 1)/2!]a^(n−2) u(n). Therefore the inverse z transform of −2z/(z − 2)^3 is obtained as −2[n(n − 1)/2!]2^(n−2) u(n) = −n(n − 1)2^(n−2) u(n).
[Figure (signal flowgraph): delay elements z^−1, multiplier coefficients 0.8, 0.5, 0.3, −0.4, 0.6, and −0.2, and node variables Y1(z) and Y2(z).]
Note that these are linear algebraic equations—three equations in three unknown
functions Y1 (z), Y2 (z), and Y3 (z), where the initial states and X(z) are known.
After rearranging these equations into the matrix form shown in (2.48), we can, by use of matrix algebra, find any one or all three of the unknown functions
Y1 (z), Y2 (z), and Y3 (z), when the input X(z) is zero—their inverse z transforms
yield zero input responses. We can find them when all the initial states are
zero—their inverse z transform will yield zero state responses. Of course we
can find the total responses y1 (n), y2 (n), and y3 (n), under the given initial states
and the input function x(n). This outlines a powerful algebraic method for the
analysis of discrete-time systems described by any large number of equations in
either the discrete-time domain or the z-transform domain. We use this method
to find the zero input response and the zero state response and their sum, which
is the total response denoted as y1 (n), y2 (n), and y3 (n).
be denoted by (c1 ), (c2 ), . . . , (cN ), which are the natural frequencies. Assuming
that all the roots are distinct and separate, the natural response assumes the form

yc(n) = A1(c1)^n + A2(c2)^n + · · · + AN(cN)^n
If, however, the characteristic polynomial has a repeated root (cr ) with a mul-
tiplicity of R, then the R terms in yc (n) corresponding to this natural frequency
(cr ) are assumed to be of the form
[B0 + B1 n + B2 n^2 + · · · + BR−1 n^(R−1)](cr)^n
Suppose that the system is described by a set of linear difference equations such
as (2.47). When we solve for Y1 (z), Y2 (z), or Y3 (z), we get the determinant of
the system matrix shown in (2.48) as the denominator in Y1 (z), Y2 (z), and Y3 (z).
The roots of this system determinant are the poles of the z transform Y1 (z),
Y2 (z), Y3 (z) and appear in the partial fraction expansion for these functions. The
inverse z transform of each term in the partial fraction expansion will exhibit
the corresponding natural frequency. All terms containing the natural frequencies
make up the natural response of the system. The important observation to be made
is that terms with these natural frequencies appear in the zero input response as
well as the zero state response; hence the amplitudes Aj of the term with the
natural frequency (cj ) have to be computed as the sum of terms with the natural
frequency (cj ) found in both the zero input and zero state response. It follows,
therefore, that the natural response is not the same as the zero input response
and the forced response is not the same as the zero state response. Computation
of these different components in the total response must be carried out by using
the correct definition of these terms.
Now that we have described the method for finding the complementary func-
tion for a system described by an nth linear ordinary difference equation, we
discuss the computation of the particular function or particular solution, due to
the specified input function. Note that this classical method can be used when
there is only one such equation, and it is not very easy when there are many
equations describing the given discrete-time system. Also, when the order of the
characteristic polynomial or the system determinant is more than 3, finding the
zeros of the characteristic polynomial or the system determinant analytically is
not possible. We have to use numerical techniques to find these zeros, which are
the natural frequencies of the system. If and when we have found the natural
frequencies, the natural response can be identified as the function yc (n) given in
the preceding section. Next we have to choose the form of the particular function
that depends on the form of the input or the forcing function. Hence it is the
forced response, and the sum of the natural response (complementary function)
and the forced response (particular function) is the total response. The form of
the particular function is chosen as listed in Table 2.2.
We substitute the particular function in the nonhomogeneous difference
equation, and by comparing the coefficients on both sides of the resulting
TABLE 2.2 Form of the Particular Function

     Input Function                        Particular Function
1    A(α)^n, α ≠ ci (i = 1, 2, . . .)      B(α)^n
2    A(α)^n, α = ci                        [B0 + B1 n](α)^n
3    A cos(ω0 n + θ)                       B cos(ω0 n + φ)
4    Σ_{i=0}^{m} Ai n^i α^n                Σ_{i=0}^{m} Bi n^i α^n
Example 2.12
Solve the linear difference equation given below, using the classical method:
Therefore B = 1 and the particular function yp (n) = (0.1)n . So the total solu-
tion is
Solving these two equations, we get A1 = 9.903 and A2 = −8.4. So the total
response is
Example 2.13
Let us reconsider Example 2.10. The zero input response and the zero state
response in this example were found to be
The characteristic polynomial for the system given in Example 2.10 is easily
seen as [1 − 0.3z−1 + 0.02z−2 ]. After multiplying it by z2 , we find the natu-
ral frequencies as the zeros of [z2 − 0.3z + 0.02] = [(z − 0.2)(z − 0.1)] to be
(c1 ) = (0.2) and (c2 ) = (0.1). Note that the zero input response y0i (n) has a
term 0.376(0.2)n u(n), which has the natural frequency equal to (0.2) and the
term −0.088(0.1)n u(n) with the natural frequency of (0.1), while the zero state
response y0s (n) also contains the term 0.5(0.2)n u(n) with the natural frequency
of (0.2). We also noticed that the pole of Y0s (z) at z = 0.1 was canceled by a zero
at z = 0.1. Therefore there is no term in the zero state response y0s (n) with the
natural frequency of (0.1). So the term containing the natural frequency of (0.2) is
the sum 0.5(0.2)n u(n) + 0.376(0.2)n u(n) = 0.876(0.2)n u(n), whereas the other
term with the natural frequency of (0.1) is −0.088(0.1)n u(n). Consequently, the
natural response of the system is 0.876(0.2)n u(n) − 0.088(0.1)n u(n).
The remaining term 0.5(−0.2)^n u(n) is the forced response, with the frequency (−0.2) that is found in the forcing function or the input function x(n) = (−0.2)^n u(n). Thus the total response of the system is now expressed as the sum of its natural response 0.876(0.2)^n u(n) − 0.088(0.1)^n u(n) and its forced response 0.5(−0.2)^n u(n). We repeat that in the zero state response, there are
terms with natural frequencies of the system, besides terms with input frequen-
cies; hence it is erroneous to state that the zero input response is equal to the
natural response or that the zero state response is the forced response.
Example 2.14
[Figure for Example 2.14 (signal flowgraph): input X(z), internal variable Y1(z), output Y2(z); delay elements z^−1, summing nodes Σ, and multiplier coefficients 2.0, −0.2, −0.1, and −0.4.]
equation, and hence solving for output by using the recursive algorithm or
the classical method is not possible. However, we can transform the difference
equations to their equivalent z-transform equations. They become linear, alge-
braic equations that can be solved to find the z transform of the output using
matrix algebra. The inverse z transform of the output function gives us the final
solution in the time domain. So it is the z-transform method that is the more pow-
erful method for time-domain analysis. To illustrate this method, let us transform
the two equations above in the time domain to get the following:
Y2(z) = [2z^−2 / (1 + 0.3z^−1 + 0.42z^−2 + 0.04z^−3)] X(z) = [2z / (z^3 + 0.3z^2 + 0.42z + 0.04)] X(z)
When we substitute the z transform of the given input above and find the inverse
z transform, we get the output y2 (n).
In this example, the natural frequencies of the system are computed as the
zeros of the system determinant
| 1 + 0.2z^−1 + 0.4z^−2          0            |
|        −2z^−1            1 + 0.1z^−1        |
As long as these poles of H (z) are not canceled by its zeros, that is, if there
are no common factors between its numerator and the denominator, its inverse z
transform will display all three natural frequencies. If some poles of the transfer
function are canceled by its zeros, and it is therefore given in its reduced form, we
may not be able to identify all the natural frequencies of the system. Therefore
the only way to find all the natural frequencies of the system is to look for
the zeros of the system determinant or the characteristic polynomial. We see
that in this example, the system response does contain three terms in its natural
response, corresponding to the three natural frequencies of the system. But if
and when there is a cancellation of its poles by some zeros, the natural response
components corresponding to the canceled poles will not be present in the zero
state response h(n). So we repeat that in some cases, the poles of the transfer
function may not display all the natural frequencies of the system.
Note that the inverse z transform of Y2 (z) is computed from Y2 (z) = H (z)X(z)
when the initial states are zero. Therefore the response y2 (n) is just the zero state
response of the system, for the given input x(n).
nonzero component. All terms with their frequencies that lie within the unit circle
of the z plane approach zero as n → ∞, and terms with simple poles that lie on
the unit circle contribute to the steady-state response.
For example, let us consider a function
Y(z) = 0.5z/(z − 1) + z/(z − 0.2)^2 + 0.4z/(z + 0.4) + 0.5e^(j40°)z/(z − e^(j50°)) + 0.5e^(−j40°)z/(z − e^(−j50°))
In this example, Y (z) has a double pole at z = 0.2 and a simple pole at z = −0.4,
and the terms [5n(0.2)n + 0.4(−0.4)n ] u(n) corresponding to these frequencies
inside the unit circle constitute the transient response in y(n) since these terms
approach zero as n → ∞. The other terms in Y(z) have a pole at z = 1 and a pair at z = e^(±j50°). These are frequencies that lie on the unit circle, and their inverse z transform is [0.5 + cos(50°n + 40°)]u(n), which remains bounded and is nonzero as n → ∞. It is the steady-state component in y(n), and obviously the sum of the transient response and the steady-state response is the total response y(n) of the system. The frequencies at z = e^(±j50°) may be the natural
frequencies of the system or may be the frequencies of the forcing function; this
also applies for the other frequencies that show up as the poles of Y (z). The
natural response and forced response are therefore not necessarily the same as
the transient response and the steady-state response. Only by using the different
definitions of these terms should one determine the different components that add
up to the total response. In summary, we have shown how to express the total
response as the sum of two terms in the following three different ways:
The transfer function H (z) of a system is defined as the ratio of the z transform
of the output and the z transform of the input, under the condition that all initial
states are zero and there are no other independent sources within the system. For
the system described in Figure 2.6, the ratio
Y2(z)/X(z) = 2z/(z^3 + 0.3z^2 + 0.02z + 0.8) = H(z)
is the transfer function. So we can also use the relationship Y2 (z) = H (z)X(z).
That means that when X(z) = 1 (i.e., x(n) = δ(n)) and the initial states are zero, we have Y2(z) = H(z), so that the output is the unit impulse response h(n).
CONVOLUTION REVISITED 65
Then
Y(z) = Σ_{n=0}^{∞} y(n)z^−n = Σ_{n=0}^{∞} [ Σ_{k=0}^{∞} x(k)h(n − k) ] z^−n
1. x(n) ∗ h(n) = h(n) ∗ x(n); that is, the convolution sum is commutative. By using the algebraic relationships for the z transforms of the discrete-time sequences, it is now easy to prove the following additional properties.
2. KX1(z)X2(z) = X1(z)KX2(z). Hence the convolution sum operation is linear: Kx1(n) ∗ x2(n) = x1(n) ∗ Kx2(n).
3. [X1(z)X2(z)]X3(z) = X1(z)[X2(z)X3(z)]. Hence the convolution sum operation is associative: x1(n) ∗ [x2(n) ∗ x3(n)] = [x1(n) ∗ x2(n)] ∗ x3(n).
4. X1(z)[X2(z) + X3(z)] = X1(z)X2(z) + X1(z)X3(z). Hence the convolution sum operation is distributive: x1(n) ∗ [x2(n) + x3(n)] = x1(n) ∗ x2(n) + x1(n) ∗ x3(n).
One can store the coefficients h(n) and x(n) for a system being investigated,
on a personal computer or workstation, do the time reversal off line, delay the
time-reversed sequence, and multiply the terms and add the products as explained
in Figure 2.4. Computer software has been developed to perform the convolution
of two sequences in a very rapid and efficient manner—even when the sequences
are very long.² But real hardware that contains electronic devices such as the delay element, multiplier, and adder cannot reverse a sequence in real time; instead, it operates on the incoming samples of the input as follows. When the
sample x0 enters the system at t = 0, it launches the sequence x0 h(nT ), which
appears at the output; when the next sample x1 enters the system at t = T , it
launches the sequence x1 h(nT − T ), which appears at the output, and when the
next sample x2 enters the system, the sequence at the output is x2 h(nT − 2T ),
and so on. At any time t = mT, the value of the output sample is

y(mT) = Σ_{k=0}^{m} x(k)h(mT − kT)
This is the physical process being implemented by the real hardware; an example
of this process was described in Figure 2.2. However, a real hardware can be
programmed to store the input data x(n) and h(n) in its memory registers and to
implement the convolution sum.
It is important to remember that convolution can be used to find the output,
even when the input sequence does not have a z transform, that is, when we
cannot use the z-transform approach. This makes convolution a very fundamen-
tal operation for signal processing and is one of the most powerful algorithms
implemented by the electronic hardware as it does not know what z transform is!
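The convolution sum that this hardware carries out can be sketched in a few lines. The book's computations use MATLAB; the following pure-Python helper (the name convolve is ours) implements the same operation:

```python
# Direct convolution sum: each input sample x(k) launches the sequence x(k) h(n - k),
# and the output is the sum of all such launched sequences.
def convolve(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        for i, hi in enumerate(h):
            y[k + i] += xk * hi
    return y

print(convolve([1, 2], [1, 1, 1]))   # [1.0, 3.0, 3.0, 2.0]
```

The double loop mirrors the physical process described above: the outer loop walks over incoming samples, and the inner loop adds the delayed, scaled impulse response to the output.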
Example 2.15
Suppose that the input sequence is x(n) = (0.1)^(n^2) u(n) and the unit impulse response is h(n) = {0.2, 0.4, 0.6, 0.8, 1.0}.
The z transform X(z) for the infinite sequence x(n) does not have a closed-form expression, whereas it is easy to write the z transform H(z) = 0.2 + 0.4z^−1 + 0.6z^−2 + 0.8z^−3 + z^−4. Therefore we cannot find X(z)H(z) = Y(z) as a rational function and invert it to get y(n). However, the polynomial H(z) can be multiplied by the power series X(z) = Σ_{n=0}^{∞} (0.1)^(n^2) z^−n to get y(n), according
² Two methods used to improve the efficiency of computation are known as the overlap-add and
overlap-save methods. Students interested in knowing more details of these methods may refer to
other books.
to either one of the algorithms x(n) ∗ h(n) or h(n) ∗ x(n). For example
y(0) = 0.2
y(1) = 0.4 + 0.1(0.2)
y(2) = 0.6 + (0.1)(0.4) + (0.1)4 (0.2)
y(3) = 0.8 + (0.1)(0.6) + (0.1)4 (0.4) + (0.1)9 (0.2)
·
·
·
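A Python sketch of this computation (the truncation of x(n) to eight samples is our choice; it is harmless because (0.1)^(n^2) decays extremely fast):

```python
# Convolve x(n) = (0.1)^(n^2) u(n), truncated, with h(n) = {0.2, 0.4, 0.6, 0.8, 1.0}.
x = [0.1 ** (n * n) for n in range(8)]
h = [0.2, 0.4, 0.6, 0.8, 1.0]
y = [sum(x[k] * h[n - k] for k in range(len(x)) if 0 <= n - k < len(h))
     for n in range(len(x) + len(h) - 1)]
print(y[0], y[1])   # 0.2 and 0.42, matching y(0) and y(1) above
```

The first few outputs agree with the hand computation: y(0) = 0.2 and y(1) = 0.4 + 0.1(0.2) = 0.42.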
Recollect that we have obtained two different equations for finding the output
due to a given input. They are the convolution sum (2.6) and the linear difference
equation (2.2), which are repeated below.
y(n) = Σ_{k=0}^{∞} x(k)h(n − k)    (2.53)

y(n) = −Σ_{k=1}^{N} a(k)y(n − k) + Σ_{k=0}^{M} b(k)x(n − k)    (2.54)
In Equation (2.53), the product of the input sequence and the current and previ-
ous values of the unit impulse response are added, whereas in Equation (2.54)
the previous values of the output and present and past values of the input are
multiplied by the fixed coefficients and added. The transfer function H(z) for the first case is given by H(z) = Σ_{n=0}^{∞} h(n)z^−n, and for the second case, we use the z transform on both sides to get

Y(z) = −[Σ_{k=1}^{N} a(k)z^−k] Y(z) + [Σ_{k=0}^{M} b(k)z^−k] X(z)

Y(z)[1 + Σ_{k=1}^{N} a(k)z^−k] = [Σ_{k=0}^{M} b(k)z^−k] X(z)

H(z) = Y(z)/X(z) = [Σ_{k=0}^{M} b(k)z^−k] / [1 + Σ_{k=1}^{N} a(k)z^−k]
So we can derive the transfer function H (z) from the linear difference equation
(2.54), which defines the input–output relationship.
We can also obtain the linear difference equation defining the input–output relationship from the transfer function H(z), simply by reversing the steps as follows. Given the transfer function H(z), we get Y(z)[1 + Σ_{k=1}^{N} a(k)z^−k] = [Σ_{k=0}^{M} b(k)z^−k] X(z). Finding the inverse z transform for each term, we arrive at
the input–output relationship for the system, as shown by the following example.
Example 2.16
By expressing the inverse z transform of each term, we get the linear difference
equation or the input–output relationship
Since the transfer function has been defined and derived by setting the initial
conditions to zero, one may assert that from the transfer function we cannot
find the response due to initial conditions, but this is not true. In the preceding
example, after we have derived the input–output relationship from the given
transfer function, we write the corresponding z-transform equation including the
terms containing the initial conditions, in the form
We substitute the initial conditions y(−1), y(−2) and y(−3), in these equations
and obtain the zero input response as well as the zero state response of the
system. Therefore the transfer function H (z) constitutes a complete model of the
discrete-time system.
In this section, we review the important concepts and techniques that we have
discussed so far. For this purpose, we select one more example below.
A MODEL FROM OTHER MODELS 71
Example 2.17
The circuit for a discrete-time system is shown in Figure 2.7. The equations that
describe it are:
y1 (n) = −x(n) + y3 (n − 2)
y2 (n) = d2 y1 (n) + x(n − 1) − y3 (n − 1) (2.55)
y3 (n) = x(n − 2) + d1 y2 (n)
Let us try to eliminate the internal variables y1 (n) and y2 (n) and get a difference
equation relating the output y3 (n) and x(n):
y3(n) = x(n − 2) + d1 [d2 y1(n) + x(n − 1) − y3(n − 1)]
      = x(n − 2) + d1 d2 {−x(n) + y3(n − 2)} + d1 x(n − 1) − d1 y3(n − 1)
[Figure 2.7 (circuit diagram): the input x(n) passes through two delays z^−1, giving x(n − 1) and x(n − 2); multipliers −1, d2, and d1, two more delays z^−1, and summing nodes Σ produce y1(n), y2(n), and the output y3(n).]
Now we can derive the difference equation relating the input and output, in the
form of (2.56).
Let us choose d1 = 0.5 and d2 = −0.5. Then the preceding transfer function
reduces to
H(z)/z = 4/z + k1/(z + 0.25 − j0.433) + k1*/(z + 0.25 + j0.433)
       = 4/z + (1.9843e^(j160.9°))/(z − 0.5e^(j120°)) + (1.9843e^(−j160.9°))/(z − 0.5e^(−j120°))
Therefore we have
H(z) = 4 + (1.9843e^(j160.9°))z/(z − 0.5e^(j120°)) + (1.9843e^(−j160.9°))z/(z − 0.5e^(−j120°))

and the inverse z transform is

h(n) = 4δ(n) + 1.9843[e^(j160.9°)(0.5e^(j120°))^n + e^(−j160.9°)(0.5e^(−j120°))^n]u(n)
     = 4δ(n) + 3.9686(0.5)^n cos(120°n + 160.9°)u(n)    (2.60)
The first model is a circuit diagram, whereas the remaining ones are mathematical
models describing the discrete-time system.
In the example worked out above, we have shown how to derive model 2 from model 1, model 3 from model 2, and so on, up to model 6 from model 5. It is easy to see
that we can get model 5 from model 6 and model 3 from model 5. But when we
get a model 2 or 4 from model 3 or 4, the result is not unique. We will show
that getting a circuit model from model 5 is not unique, either. Yet the flexibility
to generate one model from many of the other models makes the analysis of
discrete-time systems very versatile and requires that we learn how to choose the
most appropriate model to find the output of a system for a given input, with
initial states also given. Even when the transfer function of a system is derived
under zero initial states, we can get model 3 and then can include the previous
values of the output as the initial states and obtain the total output.
Property 2.3: Time Reversal If X(z) is the z transform of a causal sequence
x(n), n ≥ 0, then the z transform of the sequence x(−n) is X(z−1 ). The sequence
x(−n) is obtained by reversing the sequence of time, which can be done only by
storing the samples of x(n) and generating the sequence x(−n) by reversing the
order of the sequence. If a discrete-time sequence or data x(n) is recorded on an
audiocassette or a magnetic tape, it has to be played in reverse to generate x(−n).
The sequence x(−n) and its z transform X(z−1 ) are extensively used in the
simulation and analysis of digital signal processing for the purpose of designing
digital signal processors, although the sequence x(−n) cannot be generated in
real time by actual electronic signal processors.
Let X(z) = Σ_{n=0}^{∞} x(n)z^−n. Then

Σ_{n=−∞}^{0} x(−n)z^−n = Σ_{n=0}^{∞} x(n)z^n = X(z^−1)
Example 2.18
Z[(0.5)^n u(n)] = Σ_{n=0}^{∞} (0.5)^n z^−n = 1/(1 − 0.5z^−1) = z/(z − 0.5) = X(z)

Then

Z[(0.5)^−n u(−n)] = Σ_{n=−∞}^{0} (0.5)^−n z^−n = X(z^−1) = z^−1/(z^−1 − 0.5) = 1/(1 − 0.5z)
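A quick numerical sketch of Example 2.18 in Python (the evaluation point is our own choice): the reversed series converges for |z| < 2 and must match X(1/z) there:

```python
# Check Z[x(-n)] = X(z^-1) for x(n) = (0.5)^n u(n): the reversed series
# sum_{n<=0} x(-n) z^{-n} = sum_{m>=0} (0.5 z)^m converges for |z| < 2.
def X(w):
    return w / (w - 0.5)            # closed form of Z[(0.5)^n u(n)]

z = 0.5
reversed_sum = sum((0.5 * z) ** m for m in range(200))   # partial sum of Z[x(-n)] at z
print(abs(reversed_sum - X(1 / z)))                      # ~0
```

At z = 0.5 both sides evaluate to 4/3, confirming the property at that point.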
If

F(z) = (0.1 + 0.25z^−1 + 0.6z^−2) / (1.0 + 0.4z^−1 + 0.5z^−2 + 0.3z^−3 + 0.08z^−4)    (2.61)
From the coefficients in the quotient, we see that x(0) = 0.1, x(1) = 0.21, x(2) =
−0.134, x(3) = −0.0514 and by continuing this procedure, we can get x(4),
x(5), . . ..
The method of finding the first few coefficients of the inverse z transform of a transfer function X(z) can be shown to have a recursive formula [1,6], which is given as follows. Let the transfer function X(z) be expressed in the form

X(z) = [Σ_{n=0}^{M} bn z^−n] / [Σ_{n=0}^{N} an z^−n] = x0 + x1 z^−1 + x2 z^−2 + x3 z^−3 + · · ·    (2.66)

The samples of the inverse z transform are given by the recursive formula

xn = (1/a0)[bn − Σ_{i=1}^{n} x(n−i) ai],    n = 1, 2, . . .    (2.67)

where x0 = b0/a0 (with bn = 0 for n > M and ai = 0 for i > N).
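The recursion (2.67) is only a few lines of code. Here is a pure-Python sketch (the helper name inverse_z_samples is ours), applied to the rational function that reappears in Example 2.31:

```python
# Expand B(z)/A(z) (coefficient lists for z^0, z^-1, ...) into x0 + x1 z^-1 + ... via (2.67).
def inverse_z_samples(b, a, count):
    x = []
    for n in range(count):
        bn = b[n] if n < len(b) else 0.0                          # b_n = 0 beyond the numerator
        acc = sum(x[n - i] * a[i] for i in range(1, min(n, len(a) - 1) + 1))
        x.append((bn - acc) / a[0])
    return x

print(inverse_z_samples([0.1, 0.25], [1, 0.4, 0.5], 4))
# approximately [0.1, 0.21, -0.134, -0.0514]
```

This is exactly the synthetic long division that MATLAB's deconv performs in Example 2.31 below.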
The final value theorem states that x(∞) = lim_{z→1} (z − 1)X(z), when and if (z − 1)X(z) has all its poles inside the unit circle.
Let us express

Z[x(n)] = lim_{N→∞} Σ_{n=0}^{N} x(n)z^−n  and  Z[x(n + 1)] = lim_{N→∞} Σ_{n=0}^{N} x(n + 1)z^−n
Then

Σ_{n=0}^{N} {x(n + 1) − x(n)}z^−n = (z − 1)X(z) − zx(0)  (as N → ∞)

Letting z → 1, we get

lim_{N→∞} Σ_{n=0}^{N} {x(n + 1) − x(n)} = lim_{z→1} [(z − 1)X(z) − zx(0)] = x(∞) − x(0)

where we have assumed that lim_{N→∞} x(N + 1) = x(∞) has a finite or zero value. This condition is satisfied when (z − 1)X(z) has all its poles inside the unit circle. Under this condition, we have proved that x(∞) = lim_{z→1} (z − 1)X(z).
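As an illustration (our own example, not from the text): for x(n) = [1 − (0.5)^n]u(n), X(z) = z/(z − 1) − z/(z − 0.5), so (z − 1)X(z) = z − z(z − 1)/(z − 0.5) has its only pole at z = 0.5, inside the unit circle, and the theorem applies:

```python
# Final value theorem check for x(n) = 1 - (0.5)^n u(n).
def x(n):
    return 1 - 0.5 ** n

fv = 1.0 - 1.0 * (1.0 - 1.0) / (1.0 - 0.5)   # (z - 1)X(z) evaluated at z = 1
print(fv, x(60))                             # both are (essentially) 1
```

The algebraic limit and a large-n sample of the sequence agree, as the theorem predicts.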
Proof:

Z[r^n x(n)u(n)] = Σ_{n=0}^{∞} r^n x(n)z^−n = Σ_{n=0}^{∞} x(n)(z/r)^−n = X(z/r)
2.8 STABILITY
When H(z) = N(z)/∏_{i=1}^{K}(z − γi), where the γi are the poles of H(z) such that |γi| < 1 for i = 1, 2, 3, . . . , K, we get

Σ_{n=0}^{∞} |h(n)| ≤ ∏_{i=1}^{K} 1/(1 − |γi|) < ∞

Then the output y(n) is bounded in magnitude when the input x(n) is bounded, and the system is BIBO-stable.
There are a few tests that we can use to determine whether the poles of a transfer function

H(z) = [b0 z^N + b1 z^(N−1) + · · · + bM z^(N−M)] / [a0 z^N + a1 z^(N−1) + a2 z^(N−2) + · · · + aN]

lie inside the unit circle.
TABLE 2.4 The Jury–Marden Array

Row      Coefficients
1        a0     a1     a2     . . .  aN−1   aN
2        aN     aN−1   aN−2   . . .  a1     a0
3        c0     c1     c2     . . .  cN−1
4        cN−1   cN−2   . . .  c1     c0
5        d0     d1     d2     . . .  dN−2
6        dN−2   dN−3   . . .  d0
. . .
2N − 3   r0     r1     r2
One of them is the Jury–Marden test, which resembles the Routh–Hurwitz test that the students have learned from an earlier course, and it is easier than the other tests that are available.³
We consider the coefficients of the denominator arranged in descending pow-
ers of z, specifically D(z) = a0 zN + a1 zN −1 + a2 zN −2 + · · · + aN where a0 > 0.
The first row of the Jury–Marden array lists the coefficients a0 , a1 , a2 , . . . , aN
(see Table 2.4), and the second row lists these coefficients in the reverse order,
aN, aN−1, aN−2, . . . , a2, a1, a0. So we start with the two rows whose elements are chosen directly from the given polynomial as follows:
a0 a1 a2 . . . aN −1 aN
aN aN −1 aN −2 . . . a1 a0
The elements of the third row are computed as second-order determinants
according to the following rule:
ci = | a0   aN−i |        for i = 0, 1, 2, . . . , (N − 1)
     | aN   ai   |
For example,

c0 = | a0   aN |
     | aN   a0 |

c1 = | a0   aN−1 |
     | aN   a1   |

c2 = | a0   aN−2 |
     | aN   a2   |
³ However, in Chapter 6 we describe the use of a MATLAB function tf2latc, which is based on the Schur–Cohn test.
Note that the entries in the first column of the determinants do not change as i
changes in computing ci . The coefficients of the fourth row are the coefficients
of the third row in reverse order, as shown in the array below. The elements of
the fifth row are computed by
di = | c0     cN−1−i |        for i = 0, 1, 2, . . . , (N − 2)
     | cN−1   ci     |
For example,

d0 = | c0     cN−1 |
     | cN−1   c0   |

d1 = | c0     cN−2 |
     | cN−1   c1   |

d2 = | c0     cN−3 |
     | cN−1   c2   |
and the elements of the sixth row are those of the fifth row in reverse order.
Note that the number of elements in these rows is one less than that in the two rows above. As we continue this procedure, the number of elements in each successive pair of rows decreases by one, until we construct (2N − 3) rows and end up with the last row having three elements. Let us denote them as r0, r1, r2.
The Jury–Marden test states that the denominator polynomial D(z) = a0 z^N + a1 z^(N−1) + · · · + aN has all its roots inside the unit circle in the z plane if and only if the following three conditions are satisfied. Note here that we need to express the denominator polynomial in positive powers of z, because we have to evaluate it at z = ±1 in the first two criteria shown below:

1. D(1) > 0
2. (−1)^N D(−1) > 0
3. |a0| > |aN|, and also

|c0| > |cN−1|
|d0| > |dN−2|
. . .
|r0| > |r2|
Example 2.19

Consider D(z) = 5z^5 + 4z^4 + 3z^3 + z^2 + z + 1, whose coefficients form row 1 of the Jury–Marden array below:

Row      Coefficients
1 5 4 3 1 1 1
2 1 1 1 3 4 5
3 24 19 14 2 1
4 1 2 14 19 24
5 575 454 322 29
6 29 322 454 575
7 329784 251712 171984
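The array construction lends itself to a short program. A pure-Python sketch (the helper name jury_marden is ours) reproduces rows 3, 5, and 7 above:

```python
# Build the Jury-Marden array using the 2x2-determinant rule described in the text.
def jury_marden(coeffs):
    rows = [list(coeffs)]
    row = list(coeffs)
    while len(row) > 3:
        rev = row[::-1]
        rows.append(rev)                                          # even rows: previous row reversed
        row = [row[0] * row[i] - row[-1] * rev[i] for i in range(len(row) - 1)]
        rows.append(row)                                          # odd rows: c_i = a0*a_i - aN*a_(N-i)
    return rows

rows = jury_marden([5, 4, 3, 1, 1, 1])
print(rows[2])   # [24, 19, 14, 2, 1]
print(rows[4])   # [575, 454, 322, 29]
print(rows[6])   # [329784, 251712, 171984]
```

For this polynomial, D(1) = 15 > 0, (−1)^5 D(−1) = 3 > 0, and each computed row has its first element larger in magnitude than its last, so all the zeros of D(z) lie inside the unit circle.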
Example 2.20
Now consider another example: D(z) = 3z4 + 5z3 + 3z2 + 2z + 1. The Jury–
Marden array is constructed as shown below:
Jury–Marden Array

Row   Coefficients
1      3    5    3    2    1
2      1    2    3    5    3
3      8   13    6    1
4      1    6   13    8
5     63   98   35
Although we have calculated all the entries in the array, we find that the second
criterion is not satisfied because (−1)4 D(−1) = 0. We conclude that there is at
least one zero of D(z) that is not inside the unit circle. Indeed, it is found that
there is one zero at z = −1.000. It is a good idea to check at the beginning,
whether the first two criteria are satisfied, because if one or both of these two
criteria (which are easy to check) fail, there is no need to compute the entries in
the rows after the first two rows of the Jury–Marden array.
1. Recursive algorithm
2. Convolution sum
3. z-Transform method
Consider, for example, the discrete-time system described by

y(n) − 0.4y(n − 1) − 0.05y(n − 2) = x(n)

where the initial states are given as y(−1) = 2 and y(−2) = 1.0. We learned how to find the output of this system for any given input, by using the recursive
algorithm. Assuming x(n) = δ(n) and the initial two states in this example to
be zero, we found the unit impulse response h(n). Knowing the unit impulse
response, we can find the response when any input is given, by using the convolution algorithm. It was pointed out that the convolution algorithm can be used to find only the zero state response, since it uses h(n), whereas the recursive algorithm computes the total response due to the given input and the initial states.
Now we use the z transform to convert the difference equation above to get
Therefore
We obtain the transfer function H (z) = 1/[1 − 0.4z−1 − 0.05z−2 ] from the given
linear difference equation describing the discrete-time system.
But when we decide to use MATLAB functions, note that if the given input is a
finite-length sequence x(n), we can easily find the coefficients of the polynomial
in the descending powers of z as the entries in the row vector that will be
required for defining the polynomial X(z). But if the input x(n) is infinite in
length, MATLAB cannot find a closed-form expression for the infinite power
series X(z) = ∞ n=0 x(n)z −n
; we have to find the numerator and denominator
coefficients of X(z).
SOLUTION USING MATLAB FUNCTIONS 83
Example 2.21
For the input x(n) = [(−0.2)^n + 0.5(0.3)^n]u(n), the z transform is

X(z) = z/(z + 0.2) + 0.5z/(z − 0.3)    (2.70)
     = (1.5z^2 − 0.2z)/(z^2 − 0.1z − 0.06)    (2.71)
     = (1.5 − 0.2z^−1)/(1 − 0.1z^−1 − 0.06z^−2)    (2.72)
Y(z) = (0.85 + 0.1z^−1)/[1 − 0.4z^−1 − 0.05z^−2] + (1.5 − 0.2z^−1)/([1 − 0.4z^−1 − 0.05z^−2][1 − 0.1z^−1 − 0.06z^−2])    (2.73)
We illustrate the use of the MATLAB function conv to find the product of the two polynomials in the denominator of the zero state term Y0s(z):
den2=conv(d1,d2)
where the entries for the row vectors d1 and d2 are the coefficients in ascending
powers of z−1 for the two polynomials [1 − 0.4z−1 − 0.05z−2 ] and [1 − 0.1z−1 −
0.06z−2 ].
So we use the following MATLAB statements to find the coefficients of their product by convolution:

d1=[1 -0.4 -0.05];
d2=[1 -0.1 -0.06];
den2=conv(d1,d2)
MATLAB gives us the vector den2 = [1.00 -0.50 -0.07 0.029 0.003].
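The conv function multiplies polynomials by convolving their coefficient vectors; a pure-Python equivalent (the helper name polymul is ours) reproduces den2:

```python
# Polynomial multiplication as coefficient convolution, mirroring MATLAB's conv.
def polymul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

den2 = polymul([1, -0.4, -0.05], [1, -0.1, -0.06])
print([round(c, 4) for c in den2])   # [1.0, -0.5, -0.07, 0.029, 0.003]
```

This confirms the product [1 − 0.5z^−1 − 0.07z^−2 + 0.029z^−3 + 0.003z^−4].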
Example 2.22

The MATLAB function

[r,p,k]=residuez(num,den)

gives the residues in r, the poles in p, and the direct terms in k of the partial fraction expansion of a rational function, in the form

X(z) = r(1)/(1 − p(1)z^−1) + r(2)/(1 − p(2)z^−1) + · · · + k(1) + k(2)z^−1 + · · ·
[y,T]=impz(num,den,K)
y=filter(num,den,x)
So we enter the samples of the input in the row vector x, besides the vectors for
the coefficients of num and den of H (z−1 ). When the vector x is simply 1, the
output vector y is obviously the unit sample response h(n). This function even
allows us to find the output when initial states are given, if we use

[y,F]=filter(num,den,x,I0)

where I0 is the vector listing the initial conditions and F is the vector of final conditions. It is
important to know that although the transfer function H (z−1 ) is the z transform of
the zero state response, the function filter implements the recursive algorithm
based on the transfer function and can find the total response when initial states
are also given. So this function is a more useful function in signal processing
applications.
Example 2.23
Let us consider the z transform of the zero input function found in (2.73):
Y0i(z^−1) = (0.85 + 0.1z^−1)/[1 − 0.4z^−1 − 0.05z^−2]    (2.77)
To find the partial fraction expansion, we use the following MATLAB script:
num=[0.85 0.1];
den=[1 -0.4 -0.05] ;
[r,p,k]=residuez(num,den)
and we get
r= 0.8750
− 0.0250
p= 0.5000
− 0.1000
k=[]
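The reported residues and poles can be cross-checked by recombining the partial fractions and comparing with Y0i numerically (a Python sketch; the test points are our own choice):

```python
# Recombine r(1)/(1 - p(1)z^-1) + r(2)/(1 - p(2)z^-1) and compare with
# (0.85 + 0.1 z^-1)/(1 - 0.4 z^-1 - 0.05 z^-2) at a few sample points.
r, p = [0.875, -0.025], [0.5, -0.1]
diffs = []
for z in (2.0, 3.0, -1.5):                  # points away from the poles
    zi = 1 / z
    lhs = sum(ri / (1 - pi * zi) for ri, pi in zip(r, p))
    rhs = (0.85 + 0.1 * zi) / (1 - 0.4 * zi - 0.05 * zi ** 2)
    diffs.append(abs(lhs - rhs))
print(max(diffs))                           # ~0
```

Agreement at several points away from the poles is strong evidence that the expansion is correct.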
Example 2.24
To find the 20 samples of the zero input response y0i (n) directly from (2.77), we
use the function impz in the following script:
num=[0.85 0.1];
den=[1 -0.4 -0.05];
[y,T]=impz (num,den,20)
y = 0.8500
0.4400
0.2185
0.1094
0.0547
0.0273
0.0137
0.0068
0.0034
0.0017
0.0009
0.0004
0.0002
0.0001
0.0001
0.0000
0.0000
0.0000
0.0000
0.0000
Example 2.25
As the second example, we consider the z transform of the zero state response Y0s(z^−1) in (2.73),

Y0s(z^−1) = (1.5 − 0.2z^−1)/([1 − 0.4z^−1 − 0.05z^−2][1 − 0.1z^−1 − 0.06z^−2])

and use the following MATLAB program to find the partial fraction expansion:

num=[1.5 -0.2];
den=conv([1 -0.4 -0.05],[1 -0.1 -0.06]);
[r,p,k]=residuez(num,den)
r= 1.6369
− 0.5625
0.5714
− 0.1458
p= 0.5000
0.3000
− 0.2000
− 0.1000
k=[]
Example 2.26
To find the zero state response by using the function filter, we choose an input of finite length, say, 10 samples of x(n) = (−0.2)^n + 0.5(0.3)^n:

b=[1];
a=[1 -0.4 -0.05];
n=(0:9);
x=[(-0.2).^n+0.5*(0.3).^n];
y=filter(b,a,x)
y = columns 1–7:
1.5000 0.5500 0.3800 0.1850 0.0987 0.0496 0.0252
columns 8–10:
0.0127 0.0064 0.0032
We can also compute the closed-form samples y(n) = 1.6369(0.5)^n − 0.5625(0.3)^n + 0.5714(−0.2)^n − 0.1458(−0.1)^n for n = 0, 1, 2, . . . , 9, using the following program, and find that the result agrees with that obtained by the function filter:
n=(0:9);
y=[1.6369*(0.5).^n-0.5625*(0.3).^n+0.5714*(-0.2).^n-0.1458*(-0.1).^n]
The output is
y = columns 1–7:
1.5000 0.5500 0.3800 0.1850 0.0986 0.0496 0.0252
columns 8–10:
0.0127 0.0064 0.0032
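In Python, the same recursion that filter carries out here, y(n) = 0.4y(n − 1) + 0.05y(n − 2) + x(n), reproduces these values (a sketch with our own variable names):

```python
# Zero-state response of y(n) = 0.4 y(n-1) + 0.05 y(n-2) + x(n)
# for x(n) = (-0.2)^n + 0.5 (0.3)^n, n = 0..9.
x = [(-0.2) ** n + 0.5 * 0.3 ** n for n in range(10)]
y = []
for n in range(10):
    yn = x[n] + 0.4 * (y[n - 1] if n >= 1 else 0.0) + 0.05 * (y[n - 2] if n >= 2 else 0.0)
    y.append(yn)
print([round(v, 4) for v in y[:4]])   # [1.5, 0.55, 0.38, 0.185]
```

The first samples match both the filter output and the closed-form expression above.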
Example 2.27
Now let us verify whether the result from the function impz also agrees with the
results above. We use the script

num=[1.5 -0.2];
den=conv([1 -0.4 -0.05],[1 -0.1 -0.06]);
[y,T]=impz(num,den,14)
We get the following result, which also agrees with the results from the pre-
ceding two methods:
y = 1.5000
0.5500
0.3800
0.1850
0.0986
0.0496
0.0252
0.0127
0.0064
0.0032
0.0016
0.0008
0.0004
0.0002
Example 2.28
To find the unit impulse response h(n) using the function filter, we identify
the transfer function H (z−1 ) in (2.73) as 1/[1 − 0.4z−1 − 0.05z−2 ].
From the MATLAB program
b=[1];
a=[1 -0.4 -0.05];
[r,p,k]=residuez(b,a),
we get
r= 0.8333
0.1667
p= 0.5000
− 0.1000
k=[]
H(z^−1) = 0.8333z/(z − 0.5) + 0.1667z/(z + 0.1)

so that h(n) = [0.8333(0.5)^n + 0.1667(−0.1)^n]u(n).
To find the unit impulse response using the function impz, we use
b=[1];
a=[1 -0.4 -0.05];
[y,T]=impz(b,a,20)
and get
y = 1.0000
0.4000
0.2100
0.1040
0.0521
0.0260
0.0130
0.0065
0.0033
0.0016
0.0008
0.0004
0.0002
0.0001
0.0001
0.0000
0.0000
0.0000
0.0000
0.0000
Example 2.29
To get the same result, using the function filter, we use x =[1 zeros(1,
19)] which creates a vector [1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]:
b=[1 0 0];
a=[1 -0.4 -0.05];
x=[1 zeros(1,19)];
y=filter(b,a,x)
The output is
y = columns 1–7:
1.0000 0.4000 0.2100 0.1040 0.0521 0.0260 0.0130
columns 8–14:
0.0065 0.0033 0.0016 0.0008 0.0004 0.0002 0.0001
columns 15–20:
0.0001 0.0000 0.0000 0.0000 0.0000 0.0000
Example 2.30
Now we consider the use of the function residuez when the transfer function
has multiple poles. Let us choose G(z) from (2.44) and (2.45) and also reduce
it to a rational function in ascending powers of z−1 as shown in (2.80):
r= 3.0000 + 0.0000i
0.5000 − 0.0000i
− 0.5000
− 3.0000
p= 2.0000 + 0.0000i
2.0000 − 0.0000i
2.0000
1.0000
k=[]
G(z) = 3z/(z − 2) + 0.5z^2/(z − 2)^2 − 0.5z^3/(z − 2)^3 − 3z/(z − 1)
which differs from the partial fraction expansion shown in (2.45) or (2.79). But
let us expand
0.5z^2/(z − 2)^2 = z/(z − 2)^2 + 0.5z/(z − 2)
and
−0.5z^3/(z − 2)^3 = −2z/(z − 2)^3 − 2z/(z − 2)^2 − 0.5z/(z − 2)
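The two partial-fraction identities above can be spot-checked numerically at an arbitrary point z ≠ 2; the following is a quick sketch in Python rather than MATLAB:

```python
# Numerical spot-check of the two partial-fraction identities for the
# repeated pole at z = 2, evaluated at an arbitrary test point (any z != 2).
z = 3.7
lhs1 = 0.5 * z**2 / (z - 2) ** 2
rhs1 = z / (z - 2) ** 2 + 0.5 * z / (z - 2)
lhs2 = -0.5 * z**3 / (z - 2) ** 3
rhs2 = -2 * z / (z - 2) ** 3 - 2 * z / (z - 2) ** 2 - 0.5 * z / (z - 2)
print(abs(lhs1 - rhs1) < 1e-12, abs(lhs2 - rhs2) < 1e-12)
```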
Example 2.31
We can use a MATLAB function deconv(b,a) to find a few values in the inverse
z transform of a transfer function, and it is based on the recursive formula given
by (2.65). Let us select the transfer function (2.67) to illustrate this function.
%MATLAB program to find a few samples of the inverse z transform
where
x = 0.1000 0.2100 − 0.1340 − 0.0514
r =0 0 0 0 0.0876 0.0257
X(z) = (0.1 + 0.25z^{-1})/(1 + 0.4z^{-1} + 0.5z^{-2})
     = 0.1 + 0.21z^{-1} − 0.134z^{-2} − 0.0514z^{-3} + (0.0876z^{-4} + 0.0257z^{-5})/(1 + 0.4z^{-1} + 0.5z^{-2}).
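The long division behind deconv can be sketched in a few lines of Python (this is a sketch, not the book's MATLAB code): divide b(z^{-1}) = 0.1 + 0.25z^{-1} by a(z^{-1}) = 1 + 0.4z^{-1} + 0.5z^{-2}, padding b with zeros to extract four quotient terms and the remainder.

```python
# Polynomial long division mirroring MATLAB's deconv: repeatedly subtract
# quotient-scaled copies of a from the (zero-padded) numerator coefficients.

def poly_divide(b, a, n_terms):
    """Return n_terms quotient coefficients plus the working remainder array."""
    work = list(b) + [0.0] * (n_terms + len(a) - 1 - len(b))
    q = []
    for i in range(n_terms):
        qi = work[i] / a[0]
        q.append(qi)
        for j, aj in enumerate(a):
            work[i + j] -= qi * aj
    return q, work

q, rem = poly_divide([0.1, 0.25], [1.0, 0.4, 0.5], 4)
print([round(v, 4) for v in q])           # 0.1, 0.21, -0.134, -0.0514
print([round(v, 4) for v in rem[4:6]])    # 0.0876, 0.0257
```

These match the x and r vectors printed above.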
2.10 SUMMARY
In this chapter we discussed how to choose, from among the models introduced earlier, the one that is appropriate for solving a given problem in the time-domain analysis of the system. The recursive algorithm and the convolution sum were described first; then the theory and application of the z transform were discussed in detail, for finding the response of the system in the time domain.
In this process, many properties of the z transform of discrete-time signals were
introduced. Some fundamental concepts and applications that we discussed in
this chapter are (1) using a recursive algorithm to find the output in the time
domain, due to a given input and initial conditions; (2) finding the output (zero
input response, zero state response, natural response, forced response, transient
response, steady-state response, etc.) of a discrete-time system from a linear
difference equation (or set of equations), using the z transform; (3) finding the
transfer function and the unit impulse response of the system; and (4) finding
the output due to any input by means of the convolution sum. We also showed the
method for obtaining the single input–output relation from the transfer function
and then solving for the zero input and zero state response by introducing the
initial conditions of the output into the linear difference equation.
The concept of stability and a procedure for testing the stability of a discrete-time system were discussed in detail, followed by a description of many
MATLAB functions that facilitate the time-domain analysis of such systems. In
the next chapter, we consider the analysis of these systems in the frequency
domain, which forms the foundation for the design of digital filters.
PROBLEMS
2.1 Given a linear difference equation as shown below, find the output y(n)
for 0 ≤ n ≤ 5, using the recursive algorithm
where y(−1) = 1.5, y(−2) = −1.0, and x(n) = (0.2)n u(n). Find the out-
put sample y(4) using the recursive algorithm.
2.5 What are the (a) zero state response, (b) zero input response, (c) natu-
ral response, (d) forced response, (e) transient response, (f) steady-state
response, and (g) unit impulse response of the system described in Prob-
lem 2.4?
2.6 Given an input sequence x(−3) = 0.5, x(−2) = 0.1, x(−1) = 0.9, x(0) =
1.0, x(1) = 0.4, x(2) = −0.6, and h(n) = (0.8)n u(n), find the output y(n)
for −5 ≤ n ≤ 5, using the convolution sum.
2.7 Find the samples of the output y(n) for 0 ≤ n ≤ 4, using the convolution
sum y(n) = x(n) ∗ h(n), where x(n) = {1.0 0.5 − 0.2 0.4 0.4}
↑
and h(n) = (0.8)n u(n).
2.8 Given an input sequence x(n) = {−0.5 0.2 0.0 0.2 − 0.5} and
↑
the unit impulse response h(n) = {0.1 −0.1 0.1 − 0.1}, find the
↑
output using the convolution sum, for 0 ≤ n ≤ 6.
2.9 Given an input x(n) = (0.5)n u(n) and h(n) = (0.8)n u(n), find the output
y(n) for 0 ≤ n ≤ 4, using the convolution sum formula and verify that
answer by using the z transforms X(z) and H (z).
2.10 When x(n) = {1.0 0.5 − 0.2 0.4 0.4}, and h(n) = (0.8)n u(n),
↑
find the output y(n) for 0 ≤ n ≤ 6, using the convolution formula.
2.11 Find the output y(n) using the convolution sum formula, y(n) = v(n) ∗
x(n), where v(n) = (−1)n u(n) and x(n) = (−1)n u(n).
2.12 Find the output sample y(3), using the convolution sum formula for
y(n) = x(n) ∗ h(n), where x(n) = e^{0.5n} u(n) and h(n) = e^{−0.5n} u(n).
2.13 Find the output y(5), using the convolution sum, when an LTI-DT system
defined by h(n) = (0.5)n u(n) is excited by an input x(n) = (0.2)n ; 2 ≤
n ≤ ∞.
2.14 Given h(n) = (−1)n u(n) and x(n) = {0.1 0.2 0.3 0.4 0.5 0.6},
↑
find the value of y(n) = x(n) ∗ h(n) at n = 3, from the convolution sum.
2.15 An LTI, discrete-time system is defined by its h(n) = (0.8)n u(n). Find
the output y(n) for n = 1, 2, 3, 4, when the input is given by x(n) =
{1.0 0.5 − 0.5 0.2 0.2 0.4 0.6 0.8}, using the convolution
↑
sum.
2.16 (a) Plot the output y(n) for −3 ≤ n ≤ 3, when x(n) = {1.0 0.5 0.0
↑
0.5 1.0} is convolved with h(n) = (−1)n u(n).
(b) Plot the output y(n) for −4 ≤ n ≤ 4, when x(n) = (−1)n u(−n + 3)
is convolved with h(n) = (−1)n u(n − 2).
2.17 The input sequence is x(n) = {1.0 − 0.5 1.0 −0.5 1.0 − 0.5
↑
1.0 − 0.5} and the unit pulse response h(n) = {0.1 0.2 0.3}. Find
↑
the output sample y(1) and y(4), using the convolution sum formula.
2.18 Show that the z transform of x(n) = (n + 1)a^n u(n) is X(z) = z^2/(z − a)^2.
2.19 Find the z transform of the following sequences:
(a) x1 (n) = (0.1)n−3 u(n)
(b) x2 (n) = (0.1)n u(n − 3)
(c) x3 (n) = e−j πn cos(0.5πn)u(n)
2.20 Find the z transform of the following two functions:
(a) x1 (n) = n(0.5)n−2 u(n)
(b) x2 (n) = (0.5)n u(n − 2)
2.21 Find the z transform of the following two functions:
(a) x1 (n) = −na n u(−n − 1)
(b) x2 (n) = (−1)n cos( π3 n)u(n)
2.22 Find the z transform of the following functions:
(a) x1 (n) = (−1)n 2−n u(n)
(b) x2 (n) = na n sin(ω0 n)u(n)
(c) x3 (n) = (n2 + n)a n−1 u(n − 1)
(d) x4 (n) = (0.5)n [u(n) − u(n − 5)]
2.23 Show that

X(z) = 1 + z^{-1} + z^{-2} + · · · + z^{-(N−1)} =
    N                            when z = 1
    (1 − z^{-N})/(1 − z^{-1})    when z ≠ 1
2.24 Find the z transform of an input x(n) = (−1)n [u(n − 4) − u(n −
8)]. When an LTI, discrete-time system, defined by its h(n) = {1.0
↑
0.8 0.6 0.4}, is excited by this x(n), what is the output y(n); n ≥ 0?
2.25 An LTI discrete-time system has a unit pulse response h(n) =
(0.1)n u(n). What is its output y(n) when it is excited by an input
x(n) = (n + 1)(0.5)n u(n)?
2.26 Find the inverse z transform of H (z) = (0.3z + 1.0)/[(z + 0.5)(z + 0.2)^2 (z + 0.3)].
PROBLEMS 97
H1 (z) = (z + 0.6)/[(z^2 + 0.8z + 0.5)(z − 0.4)]

H2 (z) = [(z + 0.4)(z + 1)]/(z − 0.5)^2
2.33 Find the inverse z transform of H (z) = z/[(z + 0.5)^2 (z^2 + 0.25)].
2.34 Find the inverse z transform of H (z) = [0.1z(z + 1)]/[(z − 1)(z^2 − z + 0.9)].
2.35 Find the inverse z transform of F (z) = (z + 0.5)/[z(z^2 + 0.2z + 0.02)].
2.36 Find the inverse z transform of the following two functions:
G1 (z) = (1 + 0.1z^{-1} + 0.8z^{-2})/(1 + z^{-1})

G2 (z) = (0.2z^2 + z + 1.0)/[(z + 0.2)(z + 0.1)]
h(n) = [r^n sin((n + 1)θ)/sin θ] u(n)
2.39 Show that the inverse z transform of H (z) = z/(z − a)^3 is given by
where y(−1) = 1, y(−2) = 0, and x(n) = u(n), find the (a) zero state
response, (b) zero input response, (c) natural response, (d) forced
response, (e) transient response, and (f) steady-state response of the
system.
2.41 Given an LTI discrete-time system described by
where y(−1) = 1, y(−2) = −2, and x(n) = (−0.3)n u(n), find the
(a) zero state response, (b) zero input response, (c) natural response,
(d) forced response, (e) transient response, and (f) steady-state response
of the system above. What is the unit pulse response h(n) of this system?
2.42 An LTI discrete-time system is described by its difference equation
y(n) − 0.09y(n − 2) = u(n), where y(−1) = 1 and y(−2) = 0. Find its
(a) zero state response, (b) zero input response, (c) natural response,
(d) forced response, (e) transient response, and (f) steady-state response,
and (g) the unit pulse response.
2.43 Given an LTI discrete-time system described by
where y(−1) = 0, y(−2) = 0.4, and x(n) = (−1)n u(n), find the (a) nat-
ural response, (b) forced response, (c) transient response, and (d) steady-
state response of the system.
2.45 Given an LTI-DT system defined by the difference equation
and y(−1) = y(−2) = 0, find its (a) natural response, (b) forced
response, (c) transient response, and (d) steady-state response, when it is
excited by x(n) = u(n). What is its unit impulse response h(n)?
2.46 Find the total response y(n) of the LTI-DT system defined by the fol-
lowing difference equation
where y(−1) = 2, y(−2) = 2, and x(n) = (e−0.1n )u(n), find its unit pulse
response h(n).
2.51 The difference equation describing an LTI discrete-time system is given
below. Solve for y(n)
where y(−1) = 2 and x(n) = (0.5)n u(n), find y(n) and also the unit
impulse response h(n).
2.56 Given the transfer function H (z) = z/[(z − 1)2 (z + 1)] of a digital filter,
compute and plot the values of h(n) for n = 0, 1, 2, 3, 4, 5. What is the
value of limn→∞ h(n)?
2.57 Given the input X(z−1 ) = 1.0 + 0.1z−1 + 0.2z−2 and the transfer func-
tion H (z) = z/[(z − 0.2)(z + 0.3)], find the output y(n).
2.58 If the z transform of y(n) = x(n) ∗ h(n) is X(z)H (z), what is the con-
volution sum formula for x(−n) ∗ h(n)? What is the z transform of
x(−n) ∗ h(n)?
2.59 Given an LTI discrete-time system described by the difference equation
find h(n) and the zero state response when x(n) = u(n).
2.60 Derive the transfer function H (z) of the LTI discrete-time system
described by the circuit shown in Figure 2.8.
2.61 Derive the transfer function H (z) of the LTI-DT system described by the
circuit given in Figure 2.9. Obtain the difference equation relating the
input x(n) to the output y(n).
2.62 Derive the single input–single output relationship as a difference equation
for the LTI-DT system shown in Figure 2.10.
2.63 Obtain the transfer function H (z) = Y3 (z)/X(z) as the ratio of polyno-
mials, for the discrete-time system shown in Figure 2.11.
2.64 Write the equations in the z domain to describe the LTI-DT system shown
in Figure 2.12. and find the z transform Y2 (z).
[Figures 2.8–2.12: block diagrams of the LTI-DT systems referenced in Problems 2.60–2.64, built from unit delays z−1, adders Σ, and constant-gain branches; diagrams not reproduced here.]
2.65 Derive the transfer function H (z) for the circuit shown in Figure 2.13
and find its unit impulse response h(n).
2.66 Write the equations in the z domain to describe the LTI-DT system given
in Figure 2.14 and derive the transfer function H (z) = Y3 (z)/X(z), as a
ratio of two polynomials.
[Figures 2.13–2.15: block diagrams of the circuits referenced in Problems 2.65–2.67; diagrams not reproduced here.]
2.67 Repeat Problem 2.66 for the circuit given in Figure 2.15.
2.68 Find the unit pulse response of the LTI-DT system shown in Figure 2.16.
2.69 Find the unit pulse response h(n) of the discrete-time system shown in
Figure 2.17.
[Figures 2.16–2.18: block diagrams of the systems referenced in Problems 2.68–2.70; diagrams not reproduced here.]
2.70 Find the transfer function H (z) of the discrete-time system given in
Figure 2.18.
2.71 Derive the transfer function of the digital filter shown in Figure 2.19 and
find the samples h(0), h(1), and h(2).
2.72 Derive the transfer function H (z) for the digital filter shown in Figure 2.20
and find its unit impulse response h(n).
2.73 Find the unit sample response h(n) of the discrete-time system shown in
Figure 2.21.
[Figures 2.19–2.22: block diagrams of the digital filters referenced in Problems 2.71–2.74; diagrams not reproduced here.]
2.74 Derive the transfer function H (z) = Y (z)/X(z) for the LTI-DT system
shown in Figure 2.22.
2.75 A moving-average filter is defined by y(n) = (1/N) Σ_{k=0}^{N−1} x(n − k). Find the transfer function of the filter when N = 10.
2.76 In the partial fraction expansion of H (z) = N (z)/∏_{k=1}^{K} (z − z_k) = Σ_{k=1}^{K} R_k/(z − z_k), which has simple poles at z = z_k, show that the residues R_k can be found from the formula R_k = N (z_k)/D′(z_k), where D′(z) = dD(z)/dz.
2.77 The transfer function H (z) is expanded into its partial fraction form as
shown below:
H (z) = z/[(z − 0.1)(z − 0.2)(1 − 0.3z^{-1})(1 − 0.5z^{-1})]
      = K1 z/(z − 0.1) + K2 z/(z − 0.2) + R3/(1 − 0.3z^{-1}) + R4/(1 − 0.5z^{-1})
H (z^{-1}) = 1/[(1 − 0.5z^{-1})(1 − 0.1z^{-1})] = R1/(1 − 0.5z^{-1}) + R2/(1 − 0.1z^{-1})
H (z^{-1}) = N (z^{-1})/∏_{n=1}^{N} (1 − a_n z^{-1}) = Σ_{n=1}^{N} R_n/(1 − a_n z^{-1})
what is the general method for finding the residues Rn ? What is the
unit impulse response h(n) of this system?
2.79 (a) In the expression given below, find the values of K1 and K2 and find
h(n):
H (z^{-1}) = 1/[(z^{-1} − 0.5)(z^{-1} − 0.1)] = K1/(z^{-1} − 0.5) + K2/(z^{-1} − 0.1)
H (z^{-1}) = N (z^{-1})/∏_{n=1}^{N} (z^{-1} − a_n) = Σ_{n=1}^{N} K_n/(z^{-1} − a_n)
H (z) = (z + 0.1)/(z^2 + 0.5z + 0.4)
2.82 Derive the linear difference equation for the input–output relationship
for the system with its transfer function H (z)
H (z) = z(z + 0.4)/(z^3 + 0.2z^2 − 0.4z + 0.05)
H (z^{-1}) = z^{-1}/(1 + 0.3z^{-1} + 0.02z^{-2})
find the zero state response when the input is a unit step function. What
is the zero input response that satisfies the initial conditions y(−1) = 2
and y(−2) = 4?
2.84 Use the Jury–Marden test to determine whether the discrete-time system
defined by the following transfer function is stable:
H (z) = (z + 0.5)/(z^3 + z^2 + 2z + 5)
are inside the unit circle |z| = 1, using the Jury–Marden test.
2.87 Determine whether the three zeros of the polynomial
P (z) = z^3 + 2z^2 + 4z + 6
are inside the unit circle |z| = 1, using the Jury–Marden test.
2.88 Apply the Jury–Marden test to determine whether the polynomial has its
zeros inside the unit circle in the z plane:
MATLAB Problems
2.89 Find the roots of the following two polynomials:
2.90 Plot the poles and zeros of the transfer function H1 (z) = N1 (z)/D1 (z), where N1 (z) and D1 (z) are the polynomials given above.
2.91 Find the polynomials that have the zeros given below and also the product
of the two polynomials N2 (z)D2 (z):
2.92 Plot the poles and zeros of H2 (z) = N2 (z)/D2 (z) in the z plane.
2.93 Find the values of R1 , R2 , and R3 in the expansion of the transfer function
G(z), using the MATLAB function residuez:
G(z) = (1 + 0.6z)/[(z − 0.8)(z + 0.5)^2] = R1 z/(z − 0.8) + R2 z/(z + 0.5)^2 + R3 z/(z + 0.5)
2.94 Find the values of K1 , K2 , K3 , K4 , K5 in the expansion of the following
transfer functions, using the MATLAB function residuez:
H1 (z) = (z − 0.3)/[(z − 0.2)^3 (z + 0.4)(z + 0.5)]
       = K1 z/(z − 0.2)^3 + K2 z/(z − 0.2)^2 + K3 z/(z − 0.2) + K4 z/(z + 0.4) + K5 z/(z + 0.5)

H2 (z) = z^2/[(z + 0.5)^2 (z + 0.1)^2 (z − 0.2)]
       = K1 z/(z + 0.5)^2 + K2 z/(z + 0.5) + K3 z/(z + 0.1)^2 + K4 z/(z + 0.1) + K5 z/(z − 0.2)
2.95 Plot the magnitude, phase, and group delay of the transfer function
H1 (z−1 ) given below:
H1 (z^{-1}) = (0.20 − 0.45z^{-1})/(1 − 1.3z^{-1} + 0.75z^{-2}) + (2.1 + 1.45z^{-1})/(1 − 1.07z^{-1} + 0.30z^{-2}) + (1.8 − 0.60z^{-1})/(1 − z^{-1} + 0.25z^{-2})
2.96 Given H2 (z) = (1 − z−1 )/(1 − 0.9z−1 ), plot the magnitude of H3 (z) =
H2 (zej 1.5 )H2 (ze−j 1.5 ) and the magnitude of H4 (z) = H2 (zej 1.5 ) +
H2 (ze−j 1.5 ).
2.97 Find the partial fraction expansion of the following two transfer functions
and evaluate their unit pulse response for 0 ≤ n ≤ 10:
H1 (z) = z(z − 0.5)/[(z − 0.8)(z + 0.6)]

H2 (z) = (z − 0.6)/[(z + 0.6)(z^2 + 0.8z + 0.9)]
H3 (z) = (z − 0.5)/[(z + 0.4)(z + 0.2)^2]
2.99 Find the output y1 (n), y2 (n), and y3 (n) for 0 ≤ n ≤ 15 of the LTI-DT
systems defined by the preceding transfer functions H1 (z), H2 (z), and
H3 (z), respectively, assuming that they are excited by an input sequence
x(n) = {0.5 0.2 − 0.3 0.1}.
↑
Write your code using the MATLAB function filter, and submit it with
the computer output.
2.100 An LTI-DT system is described by the following difference equation
REFERENCES
9. V. K. Ingle and J. G. Proakis, Digital Signal Processing Using MATLAB (R) V.4, PWS
Publishing, 1997.
10. S. K. Mitra, Digital Signal Processing Laboratory Using MATLAB, McGraw-Hill,
1999.
11. J. G. Proakis and D. G. Manolakis, Digital Signal Processing, Prentice-Hall, 1996.
CHAPTER 3
Frequency-Domain Analysis
3.1 INTRODUCTION
In the previous chapter, we derived the definition for the z transform of a discrete-
time signal by impulse-sampling a continuous-time signal xa (t) with a sampling
period T and using the transformation z = esT . The signal xa (t) has another
equivalent representation in the form of its Fourier transform X(j ω). It contains
the same amount of information as xa (t) because we can obtain xa (t) from X(j ω)
as the inverse Fourier transform of X(j ω). When the signal xa (t) is sampled with a sampling period T , to generate the discrete-time signal represented by Σ_{k=0}^{∞} xa (kT )δ(t − kT ), the following questions need to be answered:
We address these questions in this chapter, arrive at the definition for the discrete-
time Fourier transform (DTFT) of the discrete-time system, and describe its prop-
erties and applications. In the second half of the chapter, we discuss another trans-
form known as the discrete-time Fourier series (DTFS) for periodic, discrete-time
signals. There is a third transform called discrete Fourier transform (DFT), which
is simply a part of the DTFS, and we discuss its properties as well as its applica-
tions in signal processing. The use of MATLAB to solve many of the problems
or to implement the algorithms will be discussed at the end of the chapter.
THEORY OF SAMPLING 113
and evaluating it on the unit circle in the z plane; thus, when z = ej ωT , we get
X(e^{jωT}) = Σ_{n=−∞}^{∞} x(nT ) e^{−jωnT}    (3.5)
1 The material in this section is adapted from a section with the same heading, in the author’s book
Magnitude and Delay Approximation of 1-D and 2-D Digital Filters [1], with permission from the
publisher, Springer-Verlag.
2 We have chosen Ω (measured in radians per second) to denote the frequency variable of an analog function in this section and will choose the same symbol to represent the frequency response of a lowpass, normalized, prototype analog filter in Chapter 5.
3 Here we have used the bilateral z transform of the DT sequence, since we have assumed that it
is defined for −∞ < n < ∞ in general. But the theory of bilateral z transform is not discussed in
this book.
114 FREQUENCY-DOMAIN ANALYSIS
Note that the signal e^{jωnT} is assumed to have values for −∞ < n < ∞ in general, whereas h(kT ) is a causal sequence: h(kT ) = 0 for −∞ < k < 0. Hence the summation Σ_{k=−∞}^{∞} h(kT ) e^{−jωkT} in (3.6) can be replaced by Σ_{k=0}^{∞} h(kT ) e^{−jωkT}.
It is denoted as H (e^{jωT}) and is a complex-valued function of ω, having a magnitude response |H (e^{jωT})| and phase response θ (e^{jωT}). Thus we have the following result:

y(nT ) = e^{jωnT} |H (e^{jωT})| e^{jθ(e^{jωT})}    (3.7)
which shows that when the input is a complex exponential function e^{jωnT}, the magnitude of the output y(nT ) is |H (e^{jωT})| and the phase of the output y(nT ) is (ωnT + θ ). If we choose a sinusoidal input x(nT ) = Re(Ae^{jωnT}) = A cos(ωnT ), then the output y(nT ) is also a sinusoidal function given by y(nT ) = A|H (e^{jωT})| cos(ωnT + θ ). Therefore we multiply the amplitude of the sinusoidal input by |H (e^{jωT})| and increase the phase by θ (e^{jωT}) to get the amplitude and phase of the sinusoidal output. For the reason stated above, H (e^{jωT}) is called the frequency response of the discrete-time system. We use a similar expression Σ_{k=−∞}^{∞} x(kT ) e^{−jωkT} = X(e^{jωT}) for the frequency response of any input signal x(kT ) and call it the discrete-time Fourier transform (DTFT) of x(kT ).
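The sinusoidal steady-state property can be demonstrated with a short sketch. The first-order system y(n) = 0.5y(n − 1) + x(n) below is an assumed illustration (it is not one of the book's examples); its frequency response is H (e^{jω}) = 1/(1 − 0.5e^{−jω}).

```python
import cmath
import math

# Steady-state response of an LTI-DT system to a sinusoid: the amplitude is
# scaled by |H(e^{jw})| and the phase shifted by theta(e^{jw}), as stated above.
# Assumed illustrative system: y(n) = 0.5 y(n-1) + x(n).
w = 0.3 * math.pi
H = 1.0 / (1.0 - 0.5 * cmath.exp(-1j * w))
mag, theta = abs(H), cmath.phase(H)

y = 0.0
N = 2000                       # long enough for the transient to die out
for n in range(N):
    y = 0.5 * y + math.cos(w * n)

# Predicted steady-state sample |H| cos(w n + theta) at n = N - 1
predicted = mag * math.cos(w * (N - 1) + theta)
print(abs(y - predicted) < 1e-6)
```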
To find a relationship between the Fourier transform Xa (jΩ) of the continuous-time function xa (t) and the Fourier transform X(e^{jωT}) of the discrete-time sequence, we start with the observation that the DTFT X(e^{jωT}) is a periodic function of ω with a period ωs = 2π/T , namely, X(e^{jωT + jrωs T}) = X(e^{jωT + jr2π}) = X(e^{jωT}), where r is any integer. It can therefore be expressed in a Fourier series form
X(e^{jωT}) = Σ_{n=−∞}^{∞} C_n e^{−jωnT}    (3.8)
By comparing (3.5) with (3.8), we conclude that x(nT ) are the Fourier series
coefficients of the periodic function X(ej ωT ), and these coefficients are evaluated
from
C_n = x(nT ) = (T /2π) ∫_{−(π/T)}^{π/T} X(e^{jωT}) e^{jωnT} dω    (3.10)
Therefore
X(e^{jωT}) = Σ_{n=−∞}^{∞} x(nT ) e^{−jωnT}    (3.11)
However, each term in this summation can be reduced to an integral over the range −(π/T ) to π/T by a change of variable from Ω to Ω + 2πr/T , to get
x(nT ) = (T /2π) Σ_{r=−∞}^{∞} (1/T ) ∫_{−(π/T)}^{π/T} Xa (jΩ + j2πr/T ) e^{jΩnT} e^{j2πrn} dΩ    (3.13)
Note that ej 2πrn = 1 for all integer values of r and n. By changing the order of
summation and integration, this equation can be reduced to
x(nT ) = (T /2π) ∫_{−(π/T)}^{π/T} [ (1/T ) Σ_{r=−∞}^{∞} Xa (jΩ + j2πr/T ) ] e^{jΩnT} dΩ    (3.14)
This shows that the discrete-time Fourier transform (DTFT) of the sequence
x(nT ) generated by sampling the continuous-time signal xa (t) with a sampling
period T is obtained by a periodic duplication of the Fourier transform Xa (j ω)
of xa (t) with a period 2π/T = ωs and scaled by T . To illustrate this result,
a typical analog signal xa (t) and the magnitude of its Fourier transform are
sketched in Figure 3.1. In Figure 3.2a the discrete-time sequence generated by
sampling xa (t) is shown, and in Figure 3.2b the magnitudes of a few terms of (3.16) as well as the magnitude |X(e^{jωT})| are shown.
Ideally the Fourier transform of xa (t) approaches zero only as the frequency
approaches ∞. Hence it is seen that, in general, when Xa (j ω)/T is duplicated
and added as shown in Figure 3.2b, there is an overlap of the frequency responses
at all frequencies. The frequency responses of the individual terms in (3.16) add
up, giving the actual response as shown by the curve for X(ej ω ). [We have
Figure 3.1 An analog signal xa (t) and the magnitude of its Fourier transform X(j ω).
Figure 3.2 The discrete-time signal xa (nT ) obtained from the analog signal xa (t) and the magnitude of its discrete-time Fourier transform X(e^{jω}).
disregarded the effect of phase in adding the duplicates of X(j ω).] Because of
this overlapping effect, more commonly known as “aliasing,” there is no way of
retrieving X(j ω) from X(ej ω ) by any linear operation; in other words, we have
lost the information contained in the analog function xa (t) when we sample it.
Aliasing of the Fourier transform can be avoided if and only if (1) the function
xa (t) is assumed to be bandlimited—that is, if it is a function such that its
Fourier transform Xa (j ω) ≡ 0 for |ω| > ωb ; and (2) the sampling period T is
chosen such that ωs = 2π/T > 2ωb . When the analog signal xb (t) is bandlimited
as shown in Figure 3.3b and is sampled at a frequency ωs ≥ 2ωb , the resulting
discrete-time signal xb (nT ) and its Fourier transform X(ej ω ) are as shown in
Figure 3.4a,b, respectively.
If this bandlimited signal xb (nT ) is passed through an ideal lowpass filter
with a bandwidth of ωs /2, the output will be a signal with a Fourier transform
equal to X(ej ωT )Hlp (j ω) = Xb (j ω)/T . The unit impulse response of the ideal
lowpass filter with a bandwidth ωb obtained as the inverse Fourier transform of
Hlp (j ω) is given by
hlp (t) = (1/2π) ∫_{−∞}^{∞} Hlp (jω) e^{jωt} dω
        = (1/2π) ∫_{−ωs/2}^{ωs/2} T e^{jωt} dω    (3.17)
        = sin(ωs t/2)/(ωs t/2) = sin(πt/T )/(πt/T )    (3.18)
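The closed form in (3.18) can be verified numerically; the sketch below integrates (1/2π)∫ T e^{jωt} dω over the passband by the midpoint rule (only the cosine part survives by symmetry) and compares the result with sin(πt/T )/(πt/T ). T and t are arbitrary illustration values.

```python
import math

# Numerical spot-check of (3.17)-(3.18): the inverse Fourier transform of the
# ideal lowpass response (gain T over |w| < ws/2) equals sin(pi t/T)/(pi t/T).
T = 0.4
ws = 2 * math.pi / T
t = 0.73

N = 20000                      # midpoint-rule points over [-ws/2, ws/2]
dw = ws / N
# Imaginary part integrates to zero by symmetry, so only cos(w t) is summed.
integral = sum(T * math.cos((-ws / 2 + (k + 0.5) * dw) * t)
               for k in range(N)) * dw / (2 * math.pi)
closed = math.sin(math.pi * t / T) / (math.pi * t / T)
print(abs(integral - closed) < 1e-5)
```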
Figure 3.3 A bandlimited analog signal and the magnitude of its Fourier transform.
Figure 3.4 The discrete-time signal obtained from the bandlimited signal and the mag-
nitude of its Fourier transform.
The output signal will be the result of convolving the discrete input sequence
xb (nT ) with the unit impulse response hlp (t) of the ideal analog lowpass fil-
ter. But we have not defined the convolution between a continuous-time signal
and samples of discrete-time sequence. Actually it is the superposition of the
responses due to the delayed impulse responses hlp (t − nT ), weighted by the
samples xb (nT ), which gives the output xb (t). Using this argument, Shannon [2]
derived the formula for reconstructing the continuous-time function xb (t), from
only the samples x(n) = xb (nT )—under the condition that xb (t) be bandlimited
up to a maximum frequency ωb and be sampled with a period T < π/ωb . This
formula (3.19) is commonly called the reconstruction formula, and the statement
that the function xb (t) can be reconstructed from its samples xb (nT ) under the
abovementioned conditions is known as Shannon’s sampling theorem:
xb (t) = Σ_{n=−∞}^{∞} xb (nT ) · sin[(π/T )(t − nT )] / [(π/T )(t − nT )]    (3.19)
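The reconstruction formula (3.19) can be sketched directly. In the Python fragment below, the signal cos(2πt) (1 Hz) and the sampling period T = 0.1 s (fs = 10 Hz, well above the Nyquist rate) are assumed illustration values, and the infinite sum is truncated to 4001 samples.

```python
import math

# Sketch of Shannon's reconstruction formula (3.19): rebuild a bandlimited
# signal from its samples by sinc interpolation, with a truncated sum.
T = 0.1
n0 = 2000
samples = [math.cos(2 * math.pi * n * T) for n in range(-n0, n0 + 1)]

def reconstruct(t):
    total = 0.0
    for i, xn in enumerate(samples):
        n = i - n0
        u = (math.pi / T) * (t - n * T)
        total += xn * (1.0 if abs(u) < 1e-12 else math.sin(u) / u)
    return total

t = 0.537                       # a point between sampling instants
err = abs(reconstruct(t) - math.cos(2 * math.pi * t))
print(err < 1e-2)               # interpolation error dominated by truncation
```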
The reconstruction process is indicated in Figure 3.5a. An explanation of the reconstruction is also given in Figure 3.5b, where it is seen that the delayed impulse response sin[(π/T )(t − nT )]/[(π/T )(t − nT )], weighted by xb (nT ), has a value of xb (nT ) at t = nT and contributes zero value at all other sampling instants t = kT , k ≠ n, so that the reconstructed analog signal interpolates exactly between these sample values of the discrete samples.
Figure 3.5 Reconstruction of the bandlimited signal from its samples, using an ideal
lowpass analog filter.
This revolutionary theorem implies that the samples xb (nT ) contain all the
information that is contained in the original analog signal xb (t), if it is bandlim-
ited and if it has been sampled with a period T < π/ωb . It lays the necessary
foundation for all the research and developments in digital signal processing
that is instrumental in the extraordinary progress in the information technology
that we are witnessing.4 In practice, any given signal can be rendered almost
bandlimited by passing it through an analog lowpass filter of fairly high order.
Indeed, it is common practice to pass an analog signal through an analog lowpass filter before it is sampled. Such filters used to precondition the analog signals are called antialiasing filters. As an example, it is known that the maximum
frequency contained in human speech is about 3400 Hz, and hence the sampling
frequency is chosen as 8 kHz. Before the human speech is sampled and input to
telephone circuits, it is passed through a filter that provides an attenuation of at
least 30 dB at 4000 Hz. It is obvious that if there is a frequency above 4000 Hz
in the speech signal, for example, at 4100 Hz, when the signal is sampled at 8000 Hz, due to aliasing of the spectrum of the sampled signal, there will be a frequency component at 3900 Hz as well as at 4100 Hz. Because of this phenomenon, we can say that
the frequency of 4100 Hz is folded into 3900 Hz, and 4000 Hz is hence called
the “folding frequency.” In general, half the sampling frequency is known as the
folding frequency (expressed in radians per second or in hertz).
4 This author feels that Shannon deserved an award (such as the Nobel prize) for his seminal contributions to sampling theory and information theory.
Example 3.1
Consider a continuous-time signal xa (t) = e^{−0.2t} u(t) that has the Fourier transform X(jω) = 1/(jω + 0.2). The magnitude |X(jω)| = |1/(jω + 0.2)| = 1/√(ω^2 + 0.04), and when we choose a frequency of 200π, we see that this magnitude is negligibly small; at ω = 0.5, we can neglect the duplicates at jk400π and give the magnitude of the frequency response as

(1/0.005) |1/(0.2 + j0.5)| = 371.3907
The two magnitudes at ω = 0.5 are nearly equal; the small difference is
attributable to the slight aliasing in the frequency response. See Figure 3.6, which
illustrates the equivalence of the two equations. But (3.16) is not useful when
a sequence of arbitrary values (finite or infinite in length) is given because it
is difficult to guess the continuous-time signal of which they are the sampled
values; even if we do know the continuous-time signal, the choice of a sampling
frequency to avoid aliasing may not be practical, for example, when the signal is
a highpass signal. Hence we refer to (3.11) whenever we use the acronym DTFT
in our discussion.
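The near equality claimed in Example 3.1 is easy to confirm with a short sketch: compare (1/T )|Xa (jω)| with the DTFT magnitude of the sampled sequence, whose geometric sum has the closed form 1/(1 − e^{−(0.2+jω)T}).

```python
import cmath

# Check of Example 3.1: compare (1/T)|Xa(jw)| with the DTFT magnitude of the
# sampled sequence at w = 0.5 rad/s, for xa(t) = e^{-0.2t} u(t), T = 0.005 s.
T, w = 0.005, 0.5
analog = (1.0 / T) * abs(1.0 / complex(0.2, w))          # 371.3907...
dtft = abs(1.0 / (1.0 - cmath.exp(-(0.2 + 1j * w) * T)))
print(round(analog, 2), round(dtft, 2))                  # nearly equal
```

The small gap between the two numbers is exactly the slight aliasing the text attributes it to.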
Figure 3.6 Equivalence of the two definitions for the Fourier transform of a discrete-time
signal.
The expressions for the DTFT X(ej ω ) and the IDTFT x(n) are
X(e^{jω}) = Σ_{n=0}^{∞} x(n) e^{−jωn}    (3.22)

x(n) = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω    (3.23)
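A quick round trip through this transform pair can be sketched numerically: form X(e^{jω}) as a finite sum for a short causal sequence (an arbitrary example, not one from the text), then recover x(n) by integrating (1/2π) X(e^{jω}) e^{jωn} over one period.

```python
import cmath
import math

# Round trip through the DTFT pair (3.22)-(3.23) for a short causal sequence.
x = [1.0, 0.5, -0.2, 0.4]

def X(w):
    return sum(xn * cmath.exp(-1j * w * n) for n, xn in enumerate(x))

def idtft(m, N=4096):
    # Midpoint-rule evaluation of (1/2pi) * integral of X(e^{jw}) e^{jwm} dw
    dw = 2 * math.pi / N
    acc = 0.0 + 0.0j
    for k in range(N):
        w = -math.pi + (k + 0.5) * dw
        acc += X(w) * cmath.exp(1j * w * m)
    return (acc * dw / (2 * math.pi)).real

recovered = [idtft(m) for m in range(4)]
print([round(v, 6) for v in recovered])   # recovers 1.0, 0.5, -0.2, 0.4
```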
The DTFT and its inverse (IDTFT) are extensively used for the analysis and
design of discrete-time systems and in applications of digital signal processing
such as speech processing, speech synthesis, and image processing. Remember
that the terms frequency response of a discrete-time signal and the discrete-time
Fourier transform (DTFT) are synonymous and will be used interchangeably.
This is also known as the frequency spectrum; its magnitude response and
phase response are generally known as the magnitude spectrum and phase spec-
trum, respectively. We will also use the terms discrete-time signal, discrete-time
sequence, discrete-time function, and discrete-time series synonymously.
We will represent the frequency response of the digital filter either by
H (ej ωT ) or more often by H (ej ω ) for convenience. Whenever it is expressed
as H (ej ω )—which is very common practice in the published literature—the
frequency variable ω is to be understood as the normalized frequency ωT =
ω/fs . We may also represent the normalized frequency ωT by θ (radians). In
Figure 3.7a, we have shown the magnitude response of an ideal lowpass filter,
demonstrating that it transmits all frequencies from 0 to ωc and rejects frequencies
higher than ωc . The frequency response H (ej ω ) is periodic, and its magnitude is
an even function. In Figure 3.7b suppose we have shown the magnitude response
of the lowpass filter only over the frequency range [0 π]. We draw its magni-
tude for negative values of ω since it is an even function and extend it by repeated
duplication with a period of 2π, thereby obtaining the magnitude response for all
values of ω over the range (−∞, ∞). Therefore, if the frequency specifications
are given over the range [0 π], we know the specifications for all values of the
normalized frequency ω, and the specifications for digital filters are commonly
given for only this range of frequencies. Note that we have plotted the magni-
tude response as a function of the normalized frequency ω. Therefore the range
[0 π] corresponds to the actual frequency range [0 ωs /2] and the normalized
frequency π corresponds to the Nyquist frequency (and 2π corresponds to the
sampling frequency).
Sometimes the frequency ω is even normalized by πfs so that the Nyquist
frequency has a value of 1, for example, in MATLAB functions. In Figures 3.7c,d,
we have shown the magnitude response of an ideal highpass filter. In Figure 3.8
we show the magnitude responses of an ideal bandpass and bandstop filter.
It is convenient to do the analysis and design of discrete-time systems on
the basis of the normalized frequency. When the frequency response of a fil-
ter, for example, shows a magnitude of 0.5 (i.e., −6 dB) at the normalized
DTFT AND IDTFT 123
[Figure 3.7 Magnitude responses of an ideal lowpass filter, (a) over (−2π, 2π) and (b) over (−π, π), and of an ideal highpass filter, (c) over (−2π, 2π) and (d) over (0, π); plots not reproduced.]
frequency 0.3π, the actual frequency can be easily computed as 30% of the
Nyquist frequency, and when the sampling period T or the sampling frequency
ωs (or fs = 1/T ) is given, we know that 0.3π represents (0.3)(ωs /2) rad/s or
(0.3)(fs /2) Hz. By looking at the plot, one should therefore be able to determine
what frequency scaling has been chosen for the plot. And when the actual sam-
pling period is known, we know how to restore the scaling and find the value
of the actual frequency in radians per second or in hertz. So we will choose the
normalized frequency in the following sections, without ambiguity.
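The conversion described above can be captured in a short sketch (Python here, purely as an illustration; the function name is ours, not from the text): given a normalized frequency in rad/sample and a sampling rate, the actual frequency in hertz follows directly.

```python
import math

def actual_frequency_hz(omega_norm, fs):
    # omega_norm is in rad/sample; pi corresponds to the Nyquist frequency fs/2
    return (omega_norm / math.pi) * (fs / 2)

# 0.3*pi with fs = 10000 Hz is 30% of the Nyquist frequency
f_hz = actual_frequency_hz(0.3 * math.pi, 10000)
print(f_hz)
```

With fs = 10000 Hz, the call evaluates to 1500 Hz, in agreement with the "30% of the Nyquist frequency" rule stated in the text.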
The magnitude response of the ideal filters shown in Figures 3.7 and 3.8 cannot
be realized by any transfer function of a digital filter. The term “designing a digital
filter” has different meanings depending on the context.
One meaning is to find a transfer function H(z) such that its magnitude |H(e^jω)|
approximates the ideal magnitude response as closely as possible. Different
approximation criteria have been proposed to define how closely the magnitude
|H(e^jω)| approximates the ideal magnitude. In Figure 3.9a, we show the
approximation of the ideal lowpass filter meeting the elliptic function criteria.
It shows an error in the passband as well as in the stopband.
124 FREQUENCY-DOMAIN ANALYSIS
[Figure 3.8 Magnitude responses (a) |Hbp(e^jω)| of an ideal bandpass filter and (b) |Hbs(e^jω)| of an ideal bandstop filter, with band edges ωc1 and ωc2.]
Figure 3.9 Approximation of ideal lowpass and highpass filter magnitude response.
[Figure: magnitude-response panels (a) and (b) with passband and stopband regions marked, plotted against normalized frequency ω/π.]
Now suppose that the input signal x(n) is defined for −∞ < n < 0 or −∞ < n < ∞.
In this case, the unilateral z transform of x(n) cannot be used. Therefore we
cannot find the output y(n) as the
inverse z transform of X(z)H (z). However, we can find the DTFT of the input
sequence even when it is defined for −∞ < n < ∞, and then multiply it by the
DTFT of h(n) to get the DTFT of the output as Y (ej ω ) = X(ej ω )H (ej ω ). Its
IDTFT yields the output y(n). This is one advantage of using the discrete-time
Fourier transform theory. So for time-domain analysis, we see that the DTFT-
IDTFT pair offers an advantage over the z-transform method, when the input
signal is defined for −∞ < n < 0 or −∞ < n < ∞. An example is given later
to illustrate this advantage over the z-transform theory in such cases.
The relationship Y (ej ω ) = X(ej ω )H (ej ω ) offers a greater advantage as it is
the basis for the design of all digital filters. When we want to eliminate certain
frequencies or a range of frequencies in the input signal, we design a filter such
that the magnitude of H (ej ω ) is very small at these frequencies or over the
range of frequencies that would therefore form the stopband. The magnitude of
the frequency response H (ej ω ) at all other frequencies is maintained at a high
level, and these frequencies constitute the passband. The magnitude and phase
responses of the filter are chosen so that the magnitude and phase responses of
the output of the filter will have an improved quality of information. We will
discuss the design of digital filters in great detail in Chapters 4 and 5. We give
only a simple example of its application in the next section.
Example 3.2
Suppose that the input signal has a lowpass magnitude response with a bandwidth
of 0.7π as shown in Figure 3.11 and we want to filter out all frequencies outside
the range between ω1 = 0.3π and ω2 = 0.4π. Note that the sampling frequency
of both signals is set at 2π. If we pass the input signal through a bandpass
filter with a passband between ω1 = 0.3π and ω2 = 0.4π, then the frequency
response of the output is given by a bandpass response with a passband between
ω1 = 0.3π and ω2 = 0.4π, with all the other frequencies having been filtered
out. It is interesting to observe that the maximum frequency in the output is
0.4π; therefore, we can reconstruct y(t) from the samples y(n) and then sample
at a lower sampling frequency of 0.8π, instead of the original frequency of 2π.
If the sampling frequency in this example is 10,000 Hz, then the Nyquist fre-
quency is 5000 Hz, and therefore the input signal has a bandwidth of 3500 Hz,
corresponding to the normalized bandwidth of 0.7π, whereas the bandpass filter
has a passband between 1500 and 2000 Hz. The output of the bandpass filter has
a passband between 1500 and 2000 Hz. Since the maximum frequency in the
output signal is 2000 Hz, one might think of reconstructing the continuous-time
signal using a sampling frequency of 4000 Hz. But this is a bandpass signal
with a bandwidth of 500 Hz, and 2000 Hz is 4 times the bandwidth; according
to the sampling theorem for bandpass signals, we can reconstruct the output sig-
nal y(t) using a sampling frequency of twice the bandwidth, namely, 1000 Hz
instead of 4000 Hz. The theory and the procedure for reconstructing the analog
[Figure 3.11 Magnitude spectrum |X(e^jω)|, with the frequencies 0.3π and 0.4π marked on the ω axis.]
bandpass signal from its samples is beyond the scope of this book and will not
be treated further.
But the correct expression for the DTFT of x(−n) is of the form 1 + ae^jω +
a²e^j2ω + a³e^j3ω + · · ·. So the compact form for this series is
∑_{n=−∞}^{0} a^{−n}e^{−jωn}. With this clarification, we now prove the
time-reversal property: if x(n) ⇔ X(e^jω), then x(−n) ⇔ X(e^{−jω}).
Example 3.3
Consider x(n) = δ(n). Then, from the definition for DTFT, we see that δ(n) ⇔
X(ej ω ) = 1 for all ω.
From the time-shifting property, we get δ(n − k) ⇔ e^{−jωk}.
The Fourier transform e^{−jωk} has a magnitude of one at all frequencies but a
linear phase as a function of ω that yields a constant group delay of k samples.
If we extend this result by considering an infinite sequence of unit impulses,
which can be represented by ∑_{k=−∞}^{∞} δ(n − k), its DTFT would yield
∑_{k=−∞}^{∞} e^{−jωk}. But this does not converge to any form of expression.
Hence we resort to a different approach, as described below, and derive the
result (3.28).
Example 3.4
Example 3.5
2π ∑_{k=−∞}^{∞} δ(ω − 2πk) ⇔ 1 (for all n)    (3.27)

Proof: The inverse DTFT of 2π ∑_{k=−∞}^{∞} δ(ω − 2πk) is evaluated as

(1/2π) ∫_{−π}^{π} [2π ∑_{k=−∞}^{∞} δ(ω − 2πk)] e^{jωn} dω
    = ∫_{−π}^{π} ∑_{k=−∞}^{∞} δ(ω − 2πk) e^{jωn} dω

where we have used e^{j2πkn} = 1 for all n. When we integrate the sequence of
impulses from −π to π, we have only the impulse at ω = 0. Therefore

∫_{−π}^{π} ∑_{k=−∞}^{∞} δ(ω − 2πk) e^{jωn} dω = ∫_{−π}^{π} δ(ω) e^{jωn} dω = 1 (for all n)
To point out some duality in the results we have obtained above, let us repeat
them:
When x(n) = 1 at n = 0 and 0 at n ≠ 0, that is, when we have δ(n), its DTFT
is X(e^jω) = 1 for all ω.
When x(n) = 1 for all n, specifically, when we have ∑_{k=−∞}^{∞} δ(n − k), its
DTFT is X(e^jω) = 2π ∑_{k=−∞}^{∞} δ(ω − 2πk).
From these results, we can obtain the DTFT for the following sinusoidal
sequences:

cos(ω0 n) = (1/2)[e^{jω0 n} + e^{−jω0 n}] ⇔ π ∑_{k=−∞}^{∞} [δ(ω − ω0 − 2πk) + δ(ω + ω0 − 2πk)]

sin(ω0 n) = (1/2j)[e^{jω0 n} − e^{−jω0 n}] ⇔ (π/j) ∑_{k=−∞}^{∞} [δ(ω − ω0 − 2πk) − δ(ω + ω0 − 2πk)]    (3.30)
Now compare the results in (3.28) and (3.30), which are put together in (3.31)
and (3.32) in order to show the dualities in the properties of the two transform
pairs. Note in particular that cos(ωk) is a discrete-time Fourier transform and a
function of ω, where k is a fixed integer, whereas cos(ω0 n) is a discrete-time
sequence where ω0 is fixed and is a function of n:
(1/2)[δ(n + k) + δ(n − k)] ⇐⇒ cos(ωk)    (3.31)

cos(ω0 n) ⇐⇒ π ∑_{k=−∞}^{∞} [δ(ω − ω0 − 2πk) + δ(ω + ω0 − 2πk)]    (3.32)
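The pair (3.31) can be spot-checked numerically. The following sketch (an illustration only, with helper names of our choosing) evaluates the DTFT of the two-impulse sequence at an arbitrary frequency and compares it with cos(ωk).

```python
import cmath
import math

def dtft(samples, omega):
    # DTFT of a finite sequence given as a {n: x(n)} dictionary
    return sum(v * cmath.exp(-1j * omega * n) for n, v in samples.items())

k = 3
x = {k: 0.5, -k: 0.5}       # (1/2)[delta(n - k) + delta(n + k)]
w = 0.7                     # an arbitrary frequency in rad/sample
error = abs(dtft(x, w) - math.cos(w * k))
print(error)
```

The error is zero up to rounding, confirming that the DTFT of the symmetric impulse pair is the real function cos(ωk).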
Let us show the duality of the other functions derived in (3.26) and (3.29):

δ(n − k) ⇐⇒ e^{−jωk}

whereas

x(n) = 1 for all n ⇐⇒ 2π ∑_{k=−∞}^{∞} δ(ω − 2πk)

and

e^{jω0 n} ⇐⇒ 2π ∑_{k=−∞}^{∞} δ(ω − ω0 − 2πk)
Example 3.6
[Figure: the sequence f(n) for −5 ≤ n ≤ 5 and the magnitude |G(e^jω)|.]
Example 3.7
Example 3.8
Let us consider the DTFT of some more sequences. For example, the DTFT of
x1(n) = a^n u(n) is derived below:

X1(e^jω) = ∑_{n=0}^{∞} a^n e^{−jωn} = ∑_{n=0}^{∞} (ae^{−jω})^n

This infinite series converges to 1/(1 − ae^{−jω}) = e^jω/(e^jω − a) when
|ae^{−jω}| < 1, that is, when |a| < 1. So the DTFT of (0.4)^n u(n) is
1/(1 − 0.4e^{−jω}) and the DTFT of (−0.4)^n u(n) is 1/(1 + 0.4e^{−jω}). Note
that both of them are causal sequences.
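The convergence claim is easy to verify numerically. This sketch (illustrative, not from the text) compares a truncated version of the geometric series with the closed form 1/(1 − ae^{−jω}) for a = 0.4.

```python
import cmath

a, w = 0.4, 1.0                                   # |a| < 1, arbitrary frequency
partial = sum((a * cmath.exp(-1j * w)) ** n for n in range(200))
closed = 1 / (1 - a * cmath.exp(-1j * w))
print(abs(partial - closed))                      # truncation error is negligible
```

Because |ae^{−jω}| = 0.4, the discarded tail of the series decays geometrically and 200 terms already match the closed form to machine precision.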
If we are given a sequence x13 (n) = a |n| , where |a| < 1, we split the sequence
as a causal sequence x1 (n) from 0 to ∞, and a noncausal sequence x3 (n)
from −∞ to −1. In other words, we can express x1 (n) = a n u(n) and x3 (n) =
a −n u(−n − 1). We derive the DTFT of x13 (n) as
X13(e^jω) = ∑_{n=0}^{∞} a^n e^{−jωn} + ∑_{n=−∞}^{−1} a^{−n} e^{−jωn} = X1(e^jω) + X3(e^jω)

= ∑_{n=0}^{∞} (ae^{−jω})^n − 1 + ∑_{m=0}^{∞} (ae^jω)^m

= 1/(1 − ae^{−jω}) − 1 + 1/(1 − ae^jω)    for |a| < 1

= 1/(1 − ae^{−jω}) + ae^jω/(1 − ae^jω)

= (1 − a²)/(1 − 2a cos ω + a²)    for |a| < 1

Hence we have shown that

a^{|n|} ⇔ (1 − a²)/(1 − 2a cos ω + a²)    for |a| < 1
These results are valid when |a| < 1. From the result a n u(n) ⇔ 1/(1 − ae−j ω ),
by application of the time-reversal property, we also find that x4 (n) = x1 (−n) =
a −n u(−n) ⇔ 1/(1 − aej ω ) for |a| < 1 whereas we have already determined that
x3 (n) = a −n u(−n − 1) ⇔ aej ω /(1 − aej ω ). Note that x3 (n) is obtained from
x4 (n) by deleting the sample of x4 (n) at n = 0, specifically, x4 (n) − 1 = x3 (n).
We used this result in deriving X3 (ej ω ) above. The sequence x13 (n) is plotted in
Figure 3.13, while the plots of x1 (n), x3 (n) are shown in Figures 3.14 and 3.15,
respectively.
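The two-sided pair a^{|n|} ⇔ (1 − a²)/(1 − 2a cos ω + a²) can be checked the same way; in this sketch (illustrative only) the truncation limits are chosen large enough that the discarded tail is negligible for |a| < 1.

```python
import cmath
import math

a, w = 0.8, 0.5
# Truncated two-sided DTFT sum of a^{|n|}
X_trunc = sum(a ** abs(n) * cmath.exp(-1j * w * n) for n in range(-200, 201))
closed = (1 - a * a) / (1 - 2 * a * math.cos(w) + a * a)
print(abs(X_trunc - closed))
```

The agreement also confirms that the DTFT of this real, even sequence is real and positive, as the closed form shows.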
[Figures 3.13–3.16: plots of x13(n) = a^{|n|}, x1(n) = a^n u(n) and x3(n) = a^{−n}u(−n − 1) for a = 0.8, and x5(n) = α^n u[−(n + 1)] for α = 2.]
Now let us consider the case of x5 (n) = α n u[−(n + 1)], where |α| > 1. A plot
of this sequence is shown in Figure 3.16 for α = 2. Its DTFT is derived below:

X5(e^jω) = ∑_{n=−∞}^{∞} α^n u[−(n + 1)]e^{−jωn}

= ∑_{n=−∞}^{−1} (αe^{−jω})^n = ∑_{m=1}^{∞} ((1/α)e^jω)^m

= 1/(αe^{−jω} − 1)
So we have the transform pair

x5(n) = α^n u[−(n + 1)] ⇔ 1/(αe^{−jω} − 1) = e^jω/(α − e^jω)    when |α| > 1    (3.33)

It is important to exercise caution in determining the differences between this pair
(3.33), which is valid for |α| > 1, and the earlier pairs, which are valid for |a| < 1.
All of them are given below (again, the differences between the different DTFT-
IDTFT pairs and the corresponding plots should be studied carefully and clearly
understood):

x1(n) = a^n u(n) ⇔ 1/(1 − ae^{−jω}) = e^jω/(e^jω − a)    when |a| < 1    (3.34)

x4(n) = a^{−n} u(−n) ⇔ 1/(1 − ae^jω) = e^{−jω}/(e^{−jω} − a)    when |a| < 1    (3.35)

x3(n) = a^{−n} u(−n − 1) ⇔ ae^jω/(1 − ae^jω)    when |a| < 1    (3.36)

x13(n) = x1(n) + x3(n) ⇔ (1 − a²)/(1 − 2a cos ω + a²)    when |a| < 1    (3.37)

For the sequence x5(n) = α^n u[−(n + 1)], note that the transform pair is given
by (3.38), which is valid when |α| > 1:

x5(n) = α^n u[−(n + 1)] ⇔ 1/(αe^{−jω} − 1) = e^jω/(α − e^jω)    when |α| > 1    (3.38)
Example 3.9
A few examples are given below to help explain these differences. From the
results given above, we see that
1. If the DTFT X1 (ej ω ) = 1/(1 − 0.8e−j ω ), its IDTFT is x1 (n) = (0.8)n u(n).
2. The IDTFT of X3 (ej ω ) = 0.8ej ω /(1 − 0.8ej ω ) is given by x3 (n) =
(0.8)−n [u(−n − 1)].
3. The IDTFT of X4 (ej ω ) = 1/(1 − 0.8ej ω ) is x4 (n) = (0.8)−n u(−n). But
4. The IDTFT of X5 (ej ω ) = ej ω /(2 − ej ω ) is x5 (n) = (2)n u(−n − 1).
Note the differences in the examples above, particularly the DTFT-IDTFT pair
for x5 (n).
The magnitude and phase responses of X1 (ej ω ), X3 (ej ω ), and X13 (ej ω ) are
shown in Figures 3.17, 3.18, and 3.19, respectively. The magnitude responses of
X1(e^jω), X4(e^jω), and X3(e^jω) given below appear the same except for a scale factor.
[Figures 3.17–3.19: magnitude and phase responses of X1(e^jω), X3(e^jω), and X13(e^jω).]
|X3(e^jω)| = 0.8/√([1 − 0.8 cos(ω)]² + [0.8 sin(ω)]²)    (3.42)

= 0.8/√(1 + 0.64 − 1.6 cos(ω))    (3.43)

|X13(e^jω)| = 0.36/(1 − 1.6 cos ω + 0.64)    (3.44)
Note that a^n u(n) ⇔ 1/(1 − ae^{−jω}) = e^jω/(e^jω − a) is valid only when |a| < 1.
When a = 1, we get the unit step sequence u(n), but the DTFT 1/(1 − e^{−jω})
has an infinite number of poles at ω = 0, ±2πk, where k is an integer. In order
to avoid these singularities in 1/(1 − e^{−jω}) = e^jω/(e^jω − 1), the DTFT of the
unit step sequence u(n) is derived in a different way, as described below.
We express the unit step function as the sum of two functions

u(n) = u1(n) + u2(n)

where

u1(n) = 1/2    for −∞ < n < ∞

and

u2(n) = { 1/2 for n ≥ 0;  −1/2 for n < 0 }

Therefore

U2(e^jω) = 1/(1 − e^{−jω})

We know that the DTFT of u1(n) is U1(e^jω) = π ∑_{k=−∞}^{∞} δ(ω − 2πk). Adding
U1(e^jω) and U2(e^jω), we get

U(e^jω) = 1/(1 − e^{−jω}) + π ∑_{k=−∞}^{∞} δ(ω − 2πk)

This gives us the DTFT of the unit step function u(n), which is unique.
DTFT OF UNIT STEP SEQUENCE 139
It is worth comparing the DTFT of e^{jω0 n}u(n) given above with the DTFT of
e^{−an}u(n), where |e^{−a}| < 1:

e^{−an}u(n) ⇔ 1/(1 − e^{−a}e^{−jω})    (3.49)
Since the DTFT of a^n u(n) is 1/(1 − ae^{−jω}), we add this DTFT to that of na^n u(n)
and get

(n + 1)a^n u(n) ⇔ 1/(1 − ae^{−jω})²    (3.51)
Example 3.10
Consider a rectangular pulse

xr(n) = { 1, |n| ≤ N;  0, |n| > N }

which is plotted in Figure 3.20. It is also known as a rectangular window (of
length 2N + 1) and will be used in Chapter 5 when we discuss the design of
FIR filters. Its DTFT is derived as follows:

Xr(e^jω) = ∑_{n=−N}^{N} e^{−jωn}

We use the formula

∑_{n=−N}^{N} r^n = (r^{−N} − r^{N+1})/(1 − r), r ≠ 1;  = 2N + 1, r = 1    (3.53)

and get

Xr(e^jω) = (e^{−j(N+1)ω} − e^{jNω})/(e^{−jω} − 1)

= e^{−j0.5ω}(e^{−j(N+0.5)ω} − e^{j(N+0.5)ω}) / [e^{−j0.5ω}(e^{−j0.5ω} − e^{j0.5ω})]

= { sin[(N + 0.5)ω]/sin[0.5ω],  ω ≠ 0;   2N + 1,  ω = 0 }
which is shown in Figure 3.21.
[Figure 3.20: the rectangular pulse xr(n); Figure 3.21: its DTFT plotted against normalized frequency.]
Using the time-shifting property, we can find the DTFT of the sequence
xr2(n) = xr(n − N) as

Xr2(e^jω) = e^{−jNω} sin[(N + 0.5)ω]/sin[0.5ω]    (3.54)

where xr2(n) = { 1, 0 ≤ n ≤ 2N;  0, otherwise }
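A quick numerical check of the closed form in Example 3.10 (an illustrative sketch; the variable names are ours): the sum of 2N + 1 complex exponentials should equal sin[(N + 0.5)ω]/sin(0.5ω) for ω ≠ 0.

```python
import cmath
import math

N, w = 5, 0.9
# Direct evaluation of the DTFT sum of the rectangular window
direct = sum(cmath.exp(-1j * w * n) for n in range(-N, N + 1))
closed = math.sin((N + 0.5) * w) / math.sin(0.5 * w)
print(abs(direct - closed))
```

The direct sum is real (the window is even), and it agrees with the closed form to machine precision.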
Example 3.11
Let us find the IDTFT of a rectangular spectrum H(e^jω), which is shown as the
magnitude of an ideal lowpass filter in Figure 3.7a with a cutoff frequency of ωc:

h(n) = (1/2π) ∫_{−π}^{π} H(e^jω)e^{jωn} dω

= (1/2π) ∫_{−ωc}^{ωc} e^{jωn} dω

= (1/2π) [e^{jωn}/(jn)] evaluated between −ωc and ωc

= sin(ωc n)/(πn) = (ωc/π) sinc(ωc n)    (3.55)
[Figure 3.22: values of the samples h(n) in (3.55) plotted against the index n.]
Remember that we will use (3.55) and (3.56) in the design of FIR filters discussed
in Chapter 5.
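The impulse response (3.55) of the ideal lowpass filter is easy to tabulate; the following sketch (function name ours, not from the text) handles the n = 0 sample separately, where the limiting value is ωc/π.

```python
import math

def ideal_lowpass_h(n, wc):
    # h(n) = sin(wc*n)/(pi*n), with the limiting value wc/pi at n = 0
    return wc / math.pi if n == 0 else math.sin(wc * n) / (math.pi * n)

wc = 0.4 * math.pi
h = [ideal_lowpass_h(n, wc) for n in range(-3, 4)]
print([round(v, 4) for v in h])
```

The samples are symmetric about n = 0, as expected for the IDTFT of a real, even spectrum, and the center sample equals ωc/π = 0.4.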
Example 3.12
The properties and the DTFT-IDTFT pairs discussed here are often used in
frequency-domain analysis of discrete-time systems, including the design of
digital filters. Consider a system with the unit impulse response h(n) = (0.2)^n u(n)
and the input x(n) = (0.5)^{−n}u(−n). From the pairs derived above, we have
H(e^jω) = 1/(1 − 0.2e^{−jω}) = e^jω/(e^jω − 0.2)

X(e^jω) = 1/(1 − 0.5e^jω) = e^{−jω}/(e^{−jω} − 0.5)

Y(e^jω) = H(e^jω)X(e^jω) = [e^jω/(e^jω − 0.2)][e^{−jω}/(e^{−jω} − 0.5)]

Now let

Y(e^jω) = k1 e^jω/(e^jω − 0.2) + k2 e^{−jω}/(e^{−jω} − 0.5)

so that we can easily obtain the inverse DTFT of each term. Note the difference
in the two terms.
Then we compute k1 from the following method, which is slightly different
from the partial fraction method we have used earlier:

Y(e^jω)e^{−jω}(e^jω − 0.2) = e^{−jω}/(e^{−jω} − 0.5) = k1 + k2 [(e^jω − 0.2)/(e^{−jω} − 0.5)]e^{−j2ω}

Evaluating this at e^jω = 0.2, we get k1 = 1/(1 − 0.5e^jω)|_{e^jω=0.2} = 1.111.

Similarly, k2 = Y(e^jω)(1 − 0.5e^jω)|_{e^jω=2} = 1/(1 − 0.2e^{−jω})|_{e^jω=2} = 1.111
Example 3.13
Let x(n) = e^{j0.3πn} and h(n) = (0.2)^n u(n). As in Example 3.12, we can find the
DTFT of x(n) = e^{j0.3πn} as X(e^jω) = 2π ∑_{k=−∞}^{∞} δ(ω − 0.3π − 2πk) and the
DTFT of h(n) as

H(e^jω) = 1/(1 − 0.2e^{−jω}) = e^jω/(e^jω − 0.2)

Thus

Y(e^jω) = X(e^jω)H(e^jω) = 2π ∑_{k=−∞}^{∞} δ(ω − 0.3π − 2πk)H(e^jω)

= 2π ∑_{k=−∞}^{∞} δ(ω − 0.3π − 2πk)H(e^{j0.3π})

= 2π ∑_{k=−∞}^{∞} δ(ω − 0.3π − 2πk) e^{j0.3π}/(e^{j0.3π} − 0.2)

= 1.1146e^{−j0.1813} · 2π ∑_{k=−∞}^{∞} δ(ω − 0.3π − 2πk)

Therefore y(n) = 1.1146e^{−j0.1813} e^{j0.3πn} = 1.1146e^{j(0.3πn−0.1813)}.
As an alternative method, we recollect that from the convolution of e^{jωn} and
h(n), we obtained y(n) = e^{jωn}H(e^jω). In this example H(e^jω) = e^jω/(e^jω − 0.2)
and ω = 0.3π. Therefore y(n) = e^{j0.3πn}H(e^{j0.3π}):

y(n) = e^{j0.3πn} e^{j0.3π}/(e^{j0.3π} − 0.2) = 1.1146e^{j(0.3πn−0.1813)}
By this method, we can also find that when the input is Re{x(n)} = cos(0.3πn),
the output is given by y(n) = Re{1.1146ej (0.3πn−0.1813) } = 1.1146 cos(0.3πn −
0.1813).
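The numerical values 1.1146 and −0.1813 quoted in Example 3.13 follow from evaluating H(e^jω) at ω = 0.3π, as this short sketch confirms:

```python
import cmath
import math

w = 0.3 * math.pi
H = cmath.exp(1j * w) / (cmath.exp(1j * w) - 0.2)   # H(e^{j 0.3 pi})
print(round(abs(H), 4), round(cmath.phase(H), 4))   # 1.1146 -0.1813
```

The magnitude scales the input sinusoid and the (negative) phase delays it, exactly as in y(n) = 1.1146 cos(0.3πn − 0.1813).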
In words, this result X2∗ (e−j ω ) = X1 (ej ω ) means that we find the DTFT of the
complex conjugate sequence (ae−j θ )n and replace ω by −ω and then find the
complex conjugate of the result, which is the same as the DTFT of the sequence
(aej θ )n .
However, when x(n) is real, we know that x(n) = x∗(n), in which case
X1(e^jω) = X2(e^jω). So we have the following result:

|X(e^jω)| = |X(e^{−jω})|    (3.60)

Ang X(e^jω) = −Ang X(e^{−jω})    (3.61)
The even and odd parts of the sequence x(n) are defined by xe(n) = [x(n) +
x(−n)]/2 and xo(n) = [x(n) − x(−n)]/2, respectively. When x(n) is real and
we use the time-reversal property, we get xe(n) ⇔ [X(e^jω) + X(e^{−jω})]/2 =
Re{X(e^jω)}:
δ(n) ⇔ 1

δ(n − k) ⇔ e^{−jωk}
If a function is a finite sequence, such as the unit impulse response of an FIR filter
H(z^{−1}) = ∑_{k=0}^{N} bk z^{−k}, then the difference equation for that filter is given by

y(n) = ∑_{k=0}^{N} bk x(n − k)    (3.64)

and

Y(e^jω) = H(e^jω)X(e^jω) = ∑_{k=0}^{N} bk X(e^jω)e^{−jωk}

so that

H(e^jω) = ∑_{k=0}^{N} bk e^{−jωk}    (3.65)
There are two equivalent approaches to find the frequency response of this filter,
as described below.
We find the inverse z transform h(n) of H(z^{−1}), which gives an infinite number
of samples of its unit impulse response, and now we can evaluate its frequency
response or its DTFT as H(e^jω) = ∑_{n=0}^{∞} h(n)e^{−jωn}. The other approach uses
the difference equation y(n) + ∑_{k=1}^{N} ak y(n − k) = ∑_{k=0}^{M} bk x(n − k) and finds
the DTFT of both sides as given by

Y(e^jω)[1 + ∑_{k=1}^{N} ak e^{−jωk}] = X(e^jω) ∑_{k=0}^{M} bk e^{−jωk}

so that

H(e^jω) = [∑_{k=0}^{M} bk e^{−jωk}] / [1 + ∑_{k=1}^{N} ak e^{−jωk}]    (3.66)

In short, we can state that H(e^jω) = H(z^{−1})|_{z=e^jω}, provided both exist.
To compute and plot the magnitude, phase, and/or the group delay of the FIR
or IIR filter transfer functions H (z−1 ), we use the MATLAB functions freqz,
abs, angle, unwrap, grpdelay very extensively in signal processing and
filter design. These functions are found in the Signal Processing Toolbox of
MATLAB.
When the sequence of coefficients bk and ak are known, they are entered as
the values in the vectors for the numerator and denominator. The function freqz
is used with several variations for the input variables as described below:
[h,w] = freqz(num,den,w)
[h,f] = freqz(num,den,f,Fs)
[h,f] = freqz(num,den,K,Fs)
[h,w] = freqz(num,den,K,'whole')
[h,f] = freqz(num,den,K,'whole',Fs)
The vectors num and den are the row vectors of the numerator and denominator
coefficients bk and ak, respectively. The function freqz computes the values
of the frequency response as a column vector h at the discrete values of the
frequency w. The set of default frequencies w lies between 0 and π, and the set f
is a vector of frequencies that we can arbitrarily choose between 0 and Fs/2,
where Fs is the sampling frequency in hertz. We can choose a value for K as the
number of frequency points within the default range; preferably K should be
large enough to yield a smooth plot.
USE OF MATLAB TO COMPUTE DTFT 149
[gd,w]=grpdelay(num,den,K)
[gd,w]=grpdelay(num,den,K,'whole')
Note that we can change the name for the variables num,den,h,H,
HdB,f,FT,K,ph,Ph,gd in the statements above to other variables as we like.
After we have computed H,HdB, ph,Ph,gd, we plot them using the plotting
function with different choices of variables, as illustrated in the examples given
below. When we plot H, HdB, ph, Ph, or gd, we normally plot them
as a function of the normalized frequency on a linear scale, between 0 and π.
But the function semilogx(...) plots them as a function of log10(w), and
therefore a plot of semilogx(w,HdB) becomes the familiar Bode plot of the digital
filter; alternatively, we define ww = log10(w) as the new frequency variable
and plot the magnitudes using plot(ww,H) or plot(ww,HdB).
The MATLAB function freqz(num,den) without any other arguments com-
putes and plots the magnitude in decibels as well as the phase response as a
function of frequency in the current figure window.
Example 3.14
xlabel(’Normalized frequency’)
subplot(1,2,2)
plot(w,HdB);grid
title(’Magnitude in dB of the frequency response’)
ylabel(’Magnitude in dB’)
xlabel(’Normalized frequency’)
figure(2)
subplot(1,2,1)
plot(w,ph);grid
title(’Phase response of the filter’)
ylabel(’Phase angle in radians’)
xlabel(’Normalized frequency’)
subplot(1,2,2)
plot(w,Ph);grid
title(’Unwrapped phase response filter’)
ylabel(’Phase angle in radians’)
xlabel(’Normalized frequency’)
%end
The magnitude response and phase response of this IIR filter are plotted in
Figures 3.23 and 3.24, respectively.
[Figure 3.23: magnitude response (linear and in dB); Figure 3.24: phase response and unwrapped phase response, all versus normalized frequency.]
Example 3.15
[Figure 3.25: magnitude response over the normalized frequency range from 0 to 2π.]
The magnitude and phase responses for the normalized frequency range from
0 to 2π are shown in Figures 3.25 and 3.26, respectively. The phase response is
found to be linear as a function of the frequency in this example. We will work
out many more examples of computing and plotting the DTFT or the frequency
response of filters in Chapter 4, using the MATLAB functions listed above.
Example 3.16
In this example, we choose the sampling frequency Fs = 200 Hz, and the Nyquist
interval is divided into 100 equal parts as seen in the statement f = [0:99] in the
MATLAB program given below. The sample values of the signal are entered by
us, when prompted by the program. In the example, we entered [0.4 0.6 0.8]
as the input signal. The magnitude and phase are plotted in Figure 3.27 as a
function of the frequency from 0 to 100 Hz. But the group delay is plotted as a
function of the normalized frequency from 0 to π radians.
[Figure 3.26: phase response over the normalized frequency range from 0 to 2π.]
Figure 3.27 Magnitude, phase, and group delay responses of an FIR filter.
3.6.1 Introduction
We discussed the DTFT-IDTFT pair for a discrete-time function given by

X(e^jω) = ∑_{n=−∞}^{∞} x(n)e^{−jωn}    (3.67)

and

x(n) = (1/2π) ∫_{−π}^{π} X(e^jω)e^{jωn} dω    (3.68)
DTFS AND DFT 155
The theory for deriving the pair and their properties and applications is very
elegant, but from a practical point of view, we see some limitations in computing
the DTFT and IDTFT. For example, the input signal is usually aperiodic and may
be finite in length, whereas the unit impulse response of an IIR filter is also aperiodic
but infinite in length; however, the values of its samples become almost negligible
in many practical applications as n becomes large but finite. So in (3.67), it is
reasonable to assume that the number of terms is finite, but X(e^jω) is a function
of the continuous variable ω. We have given some examples of analytically
deriving closed-form expressions for this function and plotting it as a function
of the variable ω. We showed how we can do it by using MATLAB functions.
Let us consider one more example of a discrete-time function x(n) and its
DTFT X(ej ω ). Figure 3.28a shows a nonperiodic discrete-time function x(n)
Figure 3.28 (a) A nonperiodic signal; (b) its magnitude response; (c) its phase response.
156 FREQUENCY-DOMAIN ANALYSIS
that is of finite length. Figures 3.28b,c show the magnitude response |X(e^jω)|
and phase response ∠X(e^jω) of the DTFT X(e^jω).
The function X(ej ω ) in (3.68) is a function of the continuous variable ω,
and the integration is not very suitable for computation by a digital computer.
Of course, we can discretize the frequency variable and find discrete values
for X(ej ωk ) where ωk are discrete values of the frequency. In contrast to the
case of a continuous-time signal with a frequency response X(j ω), we notice
that we need to compute the DTFT at only a finite number of values since
X(ej ω ) is periodic, and therefore we need to compute it over one period only. In
(3.68), x(n) can be computed approximately if the integration is substituted by
a summation, and such a summation will be finite because the values of X(e^jωk)
have to be chosen only over the interval [−π, π]. [We may also note that the
reconstruction formula used to obtain x(t) from its samples x(n) is not suitable
for a digital computer, either.] These limitations are mitigated by a theory based on
the model for a discrete-time signal that is periodic, and in the next section, we
describe the discrete-time Fourier series (DTFS) representation for such discrete-
time periodic signals. This theory exploits the property of the DTFT that it is
periodic, and hence we need to use only a finite frequency range of one period
that is sufficient to find its inverse.
xp(n) = ∑_{k=0}^{N−1} Xp(k)e^{j(2π/N)kn}    (3.70)
To find these coefficients, let us multiply both sides by e^{−jmω0 n}, where
ω0 = 2π/N, and sum over n from n = 0 to (N − 1):

∑_{n=0}^{N−1} xp(n)e^{−jmω0 n} = ∑_{n=0}^{N−1} ∑_{k=0}^{N−1} Xp(k)e^{j(2π/N)kn} e^{−jmω0 n}    (3.71)

Interchanging the order of summation gives

∑_{n=0}^{N−1} xp(n)e^{−jmω0 n} = ∑_{k=0}^{N−1} Xp(k) [∑_{n=0}^{N−1} e^{j(2π/N)(k−m)n}]    (3.72)

It is next shown that ∑_{n=0}^{N−1} e^{j(2π/N)(k−m)n} is equal to N when k = m and zero
for all values of k ≠ m. When k = m, the summation reduces to ∑_{n=0}^{N−1} e^{j0} =
N, and when k ≠ m, we apply (3.52) and find that the summation yields zero.
Hence there is only one nonzero term Xp(m)N in (3.72). The final result (with
m relabeled as k) is

Xp(k) = (1/N) ∑_{n=0}^{N−1} xp(n)e^{−jnω0 k}    (3.73)
In other words, when the DTFT of the finite-length sequence x(n) is evaluated
at the discrete frequency ωk = (2π/N)k (which is the kth sample when the
frequency range [0, 2π] is divided into N equally spaced points) and divided
by N, we get the value of the Fourier series coefficient Xp(k).
The expression in (3.70) is known as the discrete-time Fourier series
(DTFS) representation for the discrete-time, periodic function xp (n) and (3.73),
which gives the complex-valued coefficients of the DTFS is the inverse DTFS
(IDTFS). Because both xp (n) and Xp (k) are periodic, with period N , we observe
that the two expressions above are valid for −∞ < n < ∞ and −∞ < k < ∞,
respectively. Note that some authors abbreviate DTFS to DFS.
To simplify the notation, let us denote e^{−j(2π/N)} by WN so that (3.70) and
(3.73) are rewritten in compact form for the DTFS-IDTFS pair as

xp(n) = ∑_{k=0}^{N−1} Xp(k)WN^{−kn},   −∞ < n < ∞    (3.75)

Xp(k) = (1/N) ∑_{n=0}^{N−1} xp(n)WN^{kn},   −∞ < k < ∞    (3.76)
Figure 3.29 (a) A periodic signal xp(n); (b) its magnitude response |Xp(k)|, with the DFT samples |X(k)| indicated; (c) its phase response.
x(n) = ∑_{k=0}^{N−1} X(k)e^{j(2π/N)kn} = ∑_{k=0}^{N−1} X(k)W^{−kn},   0 ≤ n ≤ N − 1    (3.77)

X(k) = (1/N) ∑_{n=0}^{N−1} x(n)e^{−j(2π/N)kn} = (1/N) ∑_{n=0}^{N−1} x(n)W^{kn},   0 ≤ k ≤ N − 1    (3.78)
n=0 n=0
Note that we have not derived these properties from any new theory but only
defined them as a part of the infinite sequences for the DTFS and IDTFS derived
above.
Also note that whereas (3.75) is termed the discrete-time Fourier series (DTFS)
representation of xp (n), in which Xp (k) are the coefficients of the Fourier
series [and (3.76) is the IDTFS], it is (3.78) that is known as the discrete-time
Fourier transform (DFT) [and (3.77) is known as the inverse DFT (IDFT)]! In
Sections 3.6.1 and 3.6.2, note that we have used different notations to distinguish
the DTFT-IDTFT pair from the DFT-IDFT pair.
In most of the textbooks, and in MATLAB, the DFT-IDFT are simply defined
as given below, without any reference to the theory for deriving the DTFS, from
which the DFT are selected. Also note that the scale factor (1/N ) has been moved
from (3.78) to (3.77) in defining the DFT-IDFT pair as shown in (3.79) and (3.80)
(we will use these two equations for the DFT-IDFT pair in the remaining pages):
x(n) = (1/N) ∑_{k=0}^{N−1} X(k)e^{j(2π/N)kn} = (1/N) ∑_{k=0}^{N−1} X(k)W^{−kn},   0 ≤ n ≤ N − 1    (3.79)

X(k) = ∑_{n=0}^{N−1} x(n)e^{−j(2π/N)kn} = ∑_{n=0}^{N−1} x(n)W^{kn},   0 ≤ k ≤ N − 1    (3.80)
In Figure 3.29b, we have shown the DFT as a subset of the Fourier series
coefficients Xp(k), for k = 0, 1, 2, . . . , (N − 1). But we can choose any other
N consecutive samples as the DFT of x(n) [e.g., −[(N − 1)/2] ≤ n ≤ [(N −
1)/2]], so we will use the notation ((n))N to denote n modulo N, that is, that n
ranges over one period of N samples.
Given a nonperiodic discrete-time function x(n), we constructed a mathe-
matical artifact xp (n) and derived the Fourier series representation for it and
also derived its inverse to get xp (n). Then we defined the DFT and IDFT as
argued above so that we could determine the frequency response of the non-
periodic function as samples of the DTFT X(ej ω ) at N equally spaced points
ωk = (2π/N)k. We know x(n) is nonperiodic, but since X(e^jω) is periodic with
a period 2π, X(e^jωk) = Xp(k) is periodic with a period N, so one can choose
the range of n as ((n))N in Equations (3.79) and (3.80).
The two equations for the DFT and IDFT give us a numerical algorithm
to obtain the frequency response at least at the N discrete frequencies, and
by choosing a large value for N , we get a fairly good idea of the frequency
response for x(n).6 Indeed, we show below that from the samples of X(k), we
can reconstruct the DTFT of x(n) = X(ej ω ), which is a function of the contin-
uous variable ω. This is the counterpart of Shannon’s reconstruction formula to
obtain x(t) from its samples x(n), provided x(n) is bandlimited and the sam-
pling period Ts < (π/ωb ). There are similar conditions to be satisfied in deriving
the formula in the frequency domain, to reconstruct X(ej ω ) from its samples
X(ej ωk ) = X(k).
X(e^jω) = ∑_{n=0}^{N−1} x(n)e^{−jωn} = ∑_{k=0}^{N−1} X(k) (1/N) ∑_{n=0}^{N−1} e^{j(2πkn/N)} e^{−jωn}    (3.81)
6. Later we will discuss what is known as the "picket fence effect," because of which we may not get
a fairly good idea of the frequency response.
Now we use (3.52) in the summation ∑_{n=0}^{N−1} e^{j(2πkn/N)}e^{−jωn} and reduce it as
follows:

∑_{n=0}^{N−1} e^{j(2πkn/N)}e^{−jωn} = [1 − e^{−j(ωN−2πk)}] / [1 − e^{−j[ω−(2πk/N)]}]

= [e^{−j[(ωN−2πk)/2]} / e^{−j[(ωN−2πk)/2N]}] · sin[(ωN − 2πk)/2] / sin[(ωN − 2πk)/2N]

= e^{−j[ω−(2πk/N)][(N−1)/2]} sin[(ωN − 2πk)/2] / sin[(ωN − 2πk)/2N]    (3.82)

Substituting the last expression in (3.81), we obtain the final result to reconstruct
the DTFT X(e^jω), from only the finite number of the DFT samples X(k), as
given below:

X(e^jω) = (1/N) ∑_{k=0}^{N−1} X(k) {sin[(ωN − 2πk)/2] / sin[(ωN − 2πk)/2N]} e^{−j[ω−(2πk/N)][(N−1)/2]}    (3.83)
If x(n) has M samples and we sample X(e^jω) at N points in the range [0, 2π],
where N < M, then the N-point IDFT will yield only N samples in the discrete-
time domain, and it can be shown that this result gives rise to aliasing of the
sequence in the time domain. So we choose N ≥ M and pad the given function
x(n) with (N − M) zeros to make it a discrete-time function of length N. In
that case the sampling interval satisfies the condition (2π/N) ≤ (2π/M), which
is dual to the condition that the sampling period T ≤ (1/2B) be satisfied for
Shannon's reconstruction formula in the time domain. To satisfy this condition
for reconstruction in the frequency domain, we make the length equal to N by
padding x(n) with zeros, thereby avoiding aliasing in the discrete-time domain.
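Formula (3.83) can be tested numerically. The sketch below (with helper names of our own) rebuilds X(e^jω) at an off-grid frequency from the N DFT samples and compares it with the directly computed DTFT.

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def reconstruct(X, w):
    # Interpolation formula (3.83); w must not fall exactly on a grid point
    N = len(X)
    total = 0j
    for k, Xk in enumerate(X):
        t = w * N - 2 * math.pi * k
        kernel = math.sin(t / 2) / math.sin(t / (2 * N))
        total += Xk * kernel * cmath.exp(-1j * (w - 2 * math.pi * k / N) * (N - 1) / 2)
    return total / N

x = [1.0, 0.5, -0.25, 0.75]            # hypothetical length-4 sequence
X = dft(x)
w = 0.37                               # an off-grid frequency
direct = sum(x[n] * cmath.exp(-1j * w * n) for n in range(len(x)))
print(abs(reconstruct(X, w) - direct))  # ~0
```

The agreement at an arbitrary ω illustrates the point of this section: the N DFT samples carry exactly the same information as the continuous-frequency DTFT of a length-N sequence.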
in Table 3.4, we have used a notation such as X((−k))N , which means that we
choose DTFS coefficients Xp (k), use time reversal to get Xp (−k), and then
select any N samples that form a period. The double bracket with a subscript
N calls for three operations: choosing the DTFS coefficients Xp (k), carrying
out the operation indicated by the index within the brackets, and then selecting
n = 0, 1, 2, . . . , (N − 1) or n modulo N . This again confirms the statement that
all operations are carried out by the DTFS and then one period of the result
is chosen as the DFT of x(n). This is very significant when we carry out the
periodic convolution of the DTFS of x(n) and f (n) and select the values of this
convolution for n = 0, 1, 2, . . . , (N − 1). We illustrate this by Example 3.17.
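The modulo-N (double-bracket) indexing described above can be sketched in NumPy (a stand-in for MATLAB; circular_index is our own helper name):

```python
import numpy as np

def circular_index(x, idx):
    """Evaluate x((idx))_N, i.e., x at the index reduced modulo N."""
    x = np.asarray(x)
    return x[np.mod(idx, len(x))]

x = np.array([1.0, 1.0, 0.6, 0.6])
n = np.arange(4)
# Circular time reversal x((-n))_N: x(0) stays put and the rest of the
# period reverses, giving x(0), x(3), x(2), x(1).
print(circular_index(x, -n))
```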
Example 3.17
Let x(n) = [1.0 1.0 0.6 0.6] and f(n) = [1.0 0.6 0.4]. We can easily find the output y1(n) by using either one of two methods: (1) the convolution sum $y_1(n) = x(n) * f(n) = \sum_{m=0}^{N-1} x(m)f(n-m)$ or (2) one of the two transforms, namely, the z transform or the discrete-time Fourier transform (DTFT) of x(n) and f(n), and find the inverse z transform of [X(z)F(z)] or the inverse DTFT of [X(ejω)F(ejω)]. Indeed, we can give a proof to show that the z
[Figure 3.30: graphical evaluation of the linear convolution, showing x(m), f(−m), f(1−m), f(2−m), f(3−m), f(4−m), f(5−m), f(6−m), and the result y1(n) in panels (a)–(i).]
transform of the convolution sum y1 (n) = x(n) ∗ f (n) is [X(z)F (z)]. There-
fore, the results from both these methods agree with each other and we get
the output y1 (n) = [1.0 1.6 1.6 1.36 0.6 0.24], which is a sequence of
length 6. It is identified as the result of linear convolution. The three sequences
are shown in Figure 3.30, including the graphical procedure for carrying out the
linear convolution.
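The convolution sum of method 1 can be evaluated directly; this NumPy sketch (the function name is ours) reproduces the sequence y1(n) quoted above:

```python
import numpy as np

def linear_convolution(x, f):
    """Direct evaluation of y1(n) = sum_m x(m) f(n - m)."""
    Ly = len(x) + len(f) - 1
    y = np.zeros(Ly)
    for n in range(Ly):
        for m in range(len(x)):
            if 0 <= n - m < len(f):
                y[n] += x[m] * f[n - m]
    return y

x = [1.0, 1.0, 0.6, 0.6]
f = [1.0, 0.6, 0.4]
y1 = linear_convolution(x, f)
print(y1)   # the values 1.0, 1.6, 1.6, 1.36, 0.6, 0.24
```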
Now we ask the following question: what do we get if we compute the DTFS of the periodic sequences xp(n) and fp(n) generated by extending x(n) and f(n), multiply Xp(k) and Fp(k), and find the inverse DTFS of their product Xp(k)Fp(k) to get yp(n)? From Table 3.3, we notice that Xp(k)Fp(k) is the DTFS of the periodic convolution

$$
x(n) \circledast f(n) = \sum_{m=0}^{N-1} x(m) f((n-m))_N = y_p(n)
$$

a result that can be proved analytically. We will provide a numerical example below to verify this property. However, does this result of periodic convolution match the result of applying the familiar linear convolution? We show in the example chosen below that yp(n) is not the periodic extension of y1(n); that is, the results of the periodic convolution and the linear convolution do not match in the example chosen.
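The periodic convolution can also be sketched numerically; in this NumPy fragment (our own helper, not from the book) the N = 4 result differs from the length-6 linear convolution, exactly as claimed:

```python
import numpy as np

def periodic_convolution(x, f, N):
    """y_p(n) = sum_m x(m) f((n - m))_N over one period n = 0..N-1."""
    x = np.concatenate([x, np.zeros(N - len(x))])
    f = np.concatenate([f, np.zeros(N - len(f))])
    return np.array([np.sum(x * f[np.mod(n - np.arange(N), N)])
                     for n in range(N)])

x = [1.0, 1.0, 0.6, 0.6]
f = [1.0, 0.6, 0.4]
yp = periodic_convolution(x, f, N=4)   # one period: 1.6, 1.84, 1.6, 1.36
# The same period comes from the inverse DFT of the product of 4-point DFTs:
yp_fft = np.real(np.fft.ifft(np.fft.fft(x, 4) * np.fft.fft(f, 4)))
print(yp, yp_fft)
```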
Example 3.18
With N = 4, the DTFS coefficients of x(n) are computed from the formula $X_p(k) = \sum_{n=0}^{N-1} x(n)e^{-j(2\pi/N)kn}$:

$$
X_p(0) = \sum_{n=0}^{3} x(n)e^{-j(2\pi/N)(0\cdot n)} = x(0)e^{-j(2\pi/4)(0)} + x(1)e^{-j(2\pi/4)(0)} + \cdots
$$

$$
X_p(1) = \sum_{n=0}^{3} x(n)e^{-j(2\pi/N)(1\cdot n)} = x(0)e^{-j(2\pi/4)(0)} + x(1)e^{-j(2\pi/4)(1)} + \cdots
$$

The inverse DTFS of the product Yp(k) = Xp(k)Fp(k) is

$$
y_p(n) = \frac{1}{N}\sum_{k=0}^{3} Y_p(k)e^{j(2\pi/N)kn}
$$
[Figure 3.31: periodic convolution with N = 4, showing xp(m), fp(m), fp(−m), fp(1−m), fp(2−m), fp(3−m), fp(4−m), fp(5−m), and the result y(n).]
But we notice that this does not match the result of the linear convolution y1(n) = x(n) ∗ f(n) = [1.0 1.6 1.6 1.36 0.6 0.24]. It is obvious that the length of y1(n) is 6, whereas xp(n), fp(n), Xp(k), Fp(k), and yp(n) are all of length 4, and for that reason alone, we do not expect the two results to match. If we look carefully at Figures 3.31 and 3.30, we see another reason why they do not match. In Figure 3.31, fp(4 − m), fp(5 − m), fp(6 − m), and fp(7 − m) are found to be the same as the sequences fp(−m), fp(1 − m), fp(2 − m), and fp(3 − m), respectively, because of the periodicity of fp(n − m).
168 FREQUENCY-DOMAIN ANALYSIS
Example 3.19
With N = 8, the DTFS coefficients are computed as

$$
X_p(0) = \sum_{n=0}^{7} x(n)e^{-j(2\pi/N)(0\cdot n)}
$$

$$
X_p(6) = \sum_{n=0}^{7} x(n)e^{-j(2\pi/N)(6\cdot n)}
$$

and the product of the two sets of coefficients is

$$
Y_p(k) = [6.4 \;\; 0.4 - j3.5233 \;\; 0.0 - j0.48 \;\; 0.4 - j0.3233 \;\; 0.0 + j0.0 \;\; 0.4 + j0.3233 \;\; 0.0 + j0.48 \;\; 0.4 + j3.5233]
$$

The inverse DTFS of Yp(k), given by $y_p(n) = \frac{1}{N}\sum_{k=0}^{7} Y_p(k)e^{j(2\pi/N)kn}$, is computed to obtain yp(n) = [1.0 1.6 1.6 1.36 0.6 0.24 0.0 0.0].
As shown in Figures 3.32 and 3.33, we get the same result from the periodic
convolution. Moreover, we see that this result matches the result y1 (n) obtained
by linear convolution!
In general, if the length of x(n) is l1 and that of f (n) is l2 , we know that the
length of y1 (n) from linear convolution will be l1 + l2 − 1. So what we need to do
to match the result of linear convolution and periodic convolution of two signals
is to choose N to be equal to or greater than l1 + l2 − 1. With such a choice, we
can use the DTFS coefficients Xp (k) and Fp (k), each of length N ≥ l1 + l2 − 1,
and then compute the N inverse DTFS coefficients of Xp (k)Fp (k). Because the
formulas for computing the N coefficients of their DFT [i.e., X(k) and F (k)] are
the same as for computing their DTFS coefficients and consequently the DFT
(and inverse DFT) coefficients are a subset of the coefficients of DTFS (and
IDTFS), we conclude that if we are given, say, x(n) of length l1 as the input
signal and h(n) of length l2 as the unit impulse response of a linear discrete-time
system, then we can pad each of the signals with an appropriate number of zeros
to make both of them to be of length N ≥ l1 + l2 − 1, and find the inverse DFT
of X(k)H (k) to get the N samples of the output y(n) of the linear discrete-
time system. Conversely, if we are given any signal, we can easily obtain the N
[Figure 3.32: circular convolution of xp(n) and fp(n) with N = 8, showing xp(1−m), fp(−m), fp(1−m), fp(2−m), and fp(3−m).]
coefficients of its DFT, which indicates the frequency response of the signal at
N discrete frequencies equally spaced between 0 and 2π.
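The zero-padding rule just described can be sketched as a small utility (NumPy standing in for MATLAB's fft and ifft; the function name is ours):

```python
import numpy as np

def fft_linear_convolution(x, h):
    """Linear convolution through the DFT: pad to N >= l1 + l2 - 1 first."""
    N = len(x) + len(h) - 1          # the smallest safe DFT length
    X = np.fft.fft(x, N)             # fft(x, N) zero-pads x to length N
    H = np.fft.fft(h, N)
    return np.real(np.fft.ifft(X * H))

x = [1.0, 1.0, 0.6, 0.6]
h = [1.0, 0.6, 0.4]
y = fft_linear_convolution(x, h)     # the values 1.0, 1.6, 1.6, 1.36, 0.6, 0.24
print(y)
```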
Note that when we computed each of the eight samples of the DFT in the previous example, the complex numbers e−j(2π/N)kn = W kn, k = 0, 1, 2, . . . , (N − 1), were multiplied with the eight real-valued samples of the signal and the products were added. So the total number of multiplications is 8² = 64 and the number of additions is 8 × 7 = 56 in computing the eight samples of the DFT. The same numbers of multiplications and additions are required to find the inverse DFT; in this case, samples of both X(k) and W −kn are complex-valued. In general, direct computation of the DFT and IDFT using (3.79) and (3.80) requires N² multiplications and N(N − 1) additions; so they become very large
FAST FOURIER TRANSFORM 171
[Figure 3.33 panels: fp(4−m), fp(5−m), fp(6−m), and the result y(n).]
Figure 3.33 Circular convolution of xp (n) and fp (n) with N = 8 (continued from
Fig. 3.34).
numbers when N is chosen very large, in order to increase the resolution of the
frequency response X(k) of a given signal or to find the unit impulse response
of a filter as the IDFT of the given frequency response of a filter.
The fast Fourier transform (FFT) is a numerical algorithm that has been developed
to improve the computational efficiency by an enormous amount and is the most
popular method used in spectral analysis in digital signal processing, specifically,
to find the DFT of the signal and also the inverse DFT of the frequency response
to get the discrete-time signal. This is only a computational algorithm and not
another transform. In this FFT algorithm, when the length N is chosen as 2^R, where R is an integer (a radix-2 algorithm), the number of complex multiplications is of the order of (N/2) log2(N) and the number of complex additions is of the order of N log2 N. As an illustration of this efficiency, let us choose N = 256; in this case the number of complex multiplications is 65,536 in the direct computation, whereas it is 1024 in the FFT algorithm, which is an improvement by a factor of 64. As N increases to higher values, the improvement factor increases very significantly; for example, when N = 1024, we realize an improvement by a factor of 204.8.
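The operation counts quoted above can be tabulated directly (a small sketch that simply evaluates N² against (N/2) log2 N):

```python
import math

# Complex multiplications: N*N for the direct DFT, (N/2)*log2(N) for a
# radix-2 FFT.
for N in (256, 1024):
    direct = N * N
    fft_mults = (N // 2) * int(math.log2(N))
    print(N, direct, fft_mults, direct / fft_mults)
# N = 256 : 65536 versus 1024, an improvement by a factor of 64
# N = 1024: 1048576 versus 5120, an improvement by a factor of 204.8
```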
Algorithms based on a radix of 4 have been developed to further improve the computational efficiency. Also, when the length of the signals to be convolved is large (e.g., N = 1024), some novel modifications of the FFT-based convolution have been proposed. They are called the overlap-add method and the overlap-save method. Basically, in these methods, the signals are decomposed into sequences of contiguous segments of shorter length, the convolution of each segment is carried out in the basic form, and then the responses are carefully combined to get the same result as that obtained by the direct FFT method applied to the original signals. The MATLAB functions y = fftfilt(b,x) and y = fftfilt(b,x,N) implement the convolution between the input signal x and the unit impulse response b of the FIR filter using the overlap-add method; the default value for the FFT length is 512, but it can be changed to any other value by including it as the argument N in the second command.
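A bare-bones overlap-add filter can be written in a few lines of NumPy; this is our own sketch of the idea behind fftfilt, not its actual implementation. Each length-L block of the input is convolved with b through an FFT of length L + M − 1, and the overlapping tails of adjacent blocks are added:

```python
import numpy as np

def overlap_add(b, x, L=8):
    """Convolve x with the FIR impulse response b by the overlap-add method."""
    M = len(b)
    nfft = L + M - 1                 # room for each block's linear convolution
    B = np.fft.fft(b, nfft)
    y = np.zeros(len(x) + M - 1)
    for start in range(0, len(x), L):
        block = x[start:start + L]   # the last block may be shorter than L
        yblk = np.real(np.fft.ifft(np.fft.fft(block, nfft) * B))
        y[start:start + len(block) + M - 1] += yblk[:len(block) + M - 1]
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(50)
b = np.array([1.0, 0.6, 0.4])
print(np.allclose(overlap_add(b, x), np.convolve(x, b)))   # True
```

In practice the block length L is chosen so that the FFT length is a power of 2; the tiny L used here just keeps the sketch readable.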
Example 3.20
Let us consider the same example for the signal x(n) that was chosen in the
previous example; that is, let x(n) = [1.0 1.0 0.6 0.6]. First we compute
its DTFT and plot it in Figure 3.34. Then we compute a 10-point DFT of the
same signal using the function fft found in the Signal Processing Toolbox of
MATLAB. The function is invoked by the following simple command:
X = fft(x,N)
In this function, X is the output vector containing the complex-valued DFT of the given signal x(n), and N is the length of the DFT, which is chosen as 10 in this example. The absolute value of the DFT is computed, and the magnitude |X(k)|, k = 0, 1, 2, . . . , 9, is superimposed on the same plot. It is seen that the values of the DFT match the values of the DTFT at the discrete frequencies ωk = 2πk/10, k = 0, 1, 2, . . . , 9,
illustrate what is known as the “picket fence effect”. Note that the frequency
response in Figure 3.34 has a local minimum value at the normalized frequency
of 2.5 and 7.5. But if we plot the DFT values alone, we will miss the fact that the
frequency response of the signal has a minimum value at these frequencies. This
USE OF MATLAB TO COMPUTE DFT AND IDFT 173
[Figure 3.34: magnitude of the DTFT and the 10-point DFT of x(n), plotted against the normalized frequency 5ω/π and the index k.]
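The picket fence effect in this example can be verified numerically; the sketch below (NumPy in place of MATLAB) evaluates the DTFT at the dip ω = 0.5π and compares it with the two neighboring 10-point DFT samples:

```python
import numpy as np

x = np.array([1.0, 1.0, 0.6, 0.6])
n = np.arange(4)

# The DTFT dips at w = 0.5*pi, i.e., at the normalized frequency 5w/pi = 2.5.
w0 = 0.5 * np.pi
dip = np.abs(np.sum(x * np.exp(-1j * w0 * n)))   # |0.4 - j0.4| = 0.5657

# The 10-point DFT samples the DTFT only at w_k = 2*pi*k/10; the nearest
# samples, k = 2 and k = 3, straddle the dip and both lie well above it.
X10 = np.abs(np.fft.fft(x, 10))
print(dip, X10[2], X10[3])
```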
Note also that MATLAB array indices begin at 1, so the function fft actually computes

$$
X(k+1) = \sum_{n=0}^{N-1} x(n+1) W_N^{kn} \qquad (3.84)
$$
whereas from theory we know that both indices run from 0 to (N − 1) and the
frequency response is normally displayed by MATLAB for the frequency range
of [0 π].
In superimposing the values of DFT on the plot for the DTFT, this fact about
the MATLAB function fft is important. It serves as an example where a thorough understanding of theory is necessary for using MATLAB in digital signal
processing.
Example 3.21
We now consider another example showing the use of the MATLAB function fft(x,N) and comparing the values of its DFT with the frequency response (DTFT) of a discrete-time signal. We pick a signal x(n) = sin(0.1πn) for 0 ≤ n ≤ 10, which is plotted in Figure 3.35. We find its frequency response using the function [h,w] = freqz(x,1,'whole') and plot it for the full period of 2π in Figure 3.36, in order to compare it with the values of its DFT X(k), which always gives N samples for 0 ≤ k ≤ (N − 1), corresponding to the full frequency range of [0, 2π]. The DFT coefficients X(k) are computed from the MATLAB function X = fft(x,64). The absolute values of X(k) are plotted in Figure 3.37, showing that they match X(ejω).
Example 3.22
In this example, we consider the same signal x(n) = [1 1 0.6 0.6] and
h(n) = [1 0.6 0.4], which we considered in Example 3.17 and find the output
[Figure 3.35: values of the signal samples x(n) versus the index n.]
[Figure 3.36: magnitude of the DTFT versus the normalized frequency in radians.]
[Figure 3.37: magnitude of the 64-point DFT versus the index k.]
y(n) of the discrete-time system using the FFT technique. The MATLAB program
we use and the final output from the program are given below:
x = [ 1 1 0.6 0.6];
h = [ 1 0.6 0.4];
X=fft(x,8);
H=fft(h,8);
Y=X.*H;
y=ifft(Y,8)
The output is
Example 3.23
Let us consider the DFT samples of a lowpass filter response as given below
and use the MATLAB function x = ifft(X,8) and plot the output x(n). The
MATLAB program and the output x(n) are given below and the plot of x(n) is
shown in Figure 3.38. Again note that the time index n in this figure runs from
1 to 8, instead of from 0 to 7 in this 8-point IDFT:
[Figure 3.38: values of the IDFT x(n) versus the index n + 1.]
The output is
3.9 SUMMARY
We have discussed several topics in this chapter. First we showed that if a signal
is bandlimited and we sample it at a frequency larger than twice the maximum
frequency in the signal, we can use digital signal processing of the signal instead
of analog signal processing, because the digital signal has all the information that
is contained in the analog signal. Shannon's sampling theorem and the formula for reconstructing the analog signal from the samples were explained. When such
an analog signal is sampled, the frequency response of the discrete-time signal
is a continuous function of the digital frequency but is periodic with a period
equal to the sampling frequency. Next we discussed several properties of the
frequency response of the discrete-time signal (DTFT), illustrating them with
numerical examples; they are summarized below:
IDTFT of X(ejω):

$$
x(n) = \frac{1}{2\pi}\int_{-\pi}^{\pi} X(e^{j\omega}) e^{j\omega n}\, d\omega \qquad (3.87)
$$
$$
x_p(n) = \sum_{k=0}^{N-1} X(k)e^{j(2\pi/N)kn} = \sum_{k=0}^{N-1} X(k)W^{-kn}, \qquad -\infty \le n \le \infty \qquad (3.88)
$$
The discrete Fourier transform (DFT) and its inverse (IDFT) are a subset of
the DTFS and IDTFS coefficients, derived from the periodic DTFS and IDTFS
coefficients. They can be considered as nonperiodic sequences. A few examples
were worked out to show that the values of the DTFT, when evaluated at the
discrete frequencies, are the same as the DFT coefficients:
$$
X(k) = \sum_{n=0}^{N-1} x(n)e^{-j(2\pi/N)kn} = \sum_{n=0}^{N-1} x(n)W^{kn}, \qquad 0 \le k \le (N-1) \qquad (3.90)
$$
IDFT of X(k) with length N:

$$
x(n) = \frac{1}{N}\sum_{k=0}^{N-1} X(k)e^{j(2\pi/N)kn} = \frac{1}{N}\sum_{k=0}^{N-1} X(k)W^{-kn}, \qquad 0 \le n \le (N-1) \qquad (3.91)
$$
The FFT algorithm for computing the DFT-IDFT coefficients offers very significant computational efficiency and hence is used extensively in signal processing, filter analysis, and design. It provides a unified computational approach to find the frequency response from the time domain and vice versa. More examples were added to show that the use of the FFT and IFFT functions from MATLAB provides a common framework for getting the frequency response of a discrete-time system from the discrete-time signal and finding the discrete-time signal from the frequency response. Remember that the terms discrete-time (digital) signal, sequence, and function have been used interchangeably in this book; we have also used the terms discrete-time Fourier transform (DTFT), frequency response, and spectrum synonymously in this chapter.
PROBLEMS
3.1 A signal f (t) = e−0.1t u(t) is sampled to generate a DT signal f (n) at such
a high sampling rate that we can assume that there is no aliasing. Find a
closed-form expression for the frequency response of the sequence f (n).
3.2 Find the Fourier transform X(j ω) of the signal x(t) = te−0.1t u(t) and
choose a frequency at which the attenuation is more than 60 dB. Assuming
PROBLEMS 179
[Figure 3.39: magnitude |F(jω)| of the bandlimited signal, shaped as cos(2ω).]
that the signal is bandlimited by that frequency, what is the minimum sam-
pling frequency one can choose to sample x(t) without losing too much
information?
3.3 A continuous-time function f (t) with a bandwidth of 200 Hz is sam-
pled at 1000 Hz, and the sampled values are given by f (nT ) = {1.0
↑
0.4 0.1 0.001}. Find the value of f (t) at t = 0.005.
3.4 A bandlimited analog signal f (t) has a Fourier transform F (j ω) as shown
in Figure 3.39. What is the maximum sampling period T that can be used
to avoid aliasing in the frequency response F (ej ω ) of the sampled sequence
f (n)? Find the Fourier series coefficients for F (ej ω ).
3.5 Find the DTFT of x(n) = {−1 1 0 1 −1} and compute its value
↑
at j ω = j 0.4π. If the 10-point DFT Xk (j ωk ) of this x(n) is computed,
what is the value of the index k at which the DFT is equal to X(ej 0.4π )?
3.6 Find the DTFT of a finite sequence {1.0 0.0 −1.0} and evaluate it at
↑
ω = 0.5π. Calculate the value of the DTFT at ω = 0.5π, using the DFT
for this sequence to verify this result.
3.7 Find the DTFT of the following two functions:
(a) x1 (n) = 10(0.5)n cos(0.2πn + π3 )u(n)
(b) x2 (n) = n(0.2)n u(n)
3.8 Find the DTFT of x1 (n) = (0.5)n u(n) and x2 (n) = (0.5)n ; −5 ≤ n ≤ 5.
3.9 Find the DTFT of the following sequences:
x1 (n) = u(n) − u(n − 6)
x2 (n) = (0.5)n u(n + 3)
x3 (n) = (0.5)n+3 u(n)
x4 (n) = (0.5)−n+2 u(−n + 2)
x5 (n) = (0.3)n−2 u(−n + 2)
Determine the value of b (other than 0.5) such that the square of the mag-
nitude of its transfer function H (ej ω ) is a constant equal to b2 for all
frequencies.
3.13 A comb filter is defined by its transfer function H (z) = (1 − z−N )/N .
Determine the frequency response of the filter in a closed-form expression
for N = 10.
3.14 Show that the magnitude response of an IIR filter with

$$
\sum_{n=0}^{N-1} e^{j(2\pi/N)kn} = \begin{cases} N, & k = 0, \pm N, \pm 2N, \ldots \\ 0, & \text{otherwise} \end{cases}
$$
$$
H(e^{j\omega}) = \frac{e^{j\omega}}{1 - 0.6e^{-j\omega}}, \qquad X(e^{j\omega}) = 2e^{-j\omega} - 5e^{-j5\omega} + e^{-j6\omega}
$$
3.23 Given H(ejω) and X(ejω) as shown below, find the output y(n):

$$
H(e^{j\omega}) = \frac{1}{e^{j\omega} + 0.3}, \qquad X(e^{j\omega}) = \frac{e^{j\omega}}{(1 + 0.5e^{-j\omega})(1 - 0.5e^{j\omega})}
$$
$$
Y(e^{j\omega}) = \frac{1 - e^{-j2\omega}}{(1 + 0.2e^{j\omega})(1 - 0.4e^{-j\omega})(e^{j\omega} + 0.5)}
$$

$$
Y(e^{j\omega}) = \frac{1}{(e^{j\omega} + 0.1)(1 - e^{-j\omega})(1 + e^{j\omega})}
$$
3.26 If the input of an LTI-DT system is x(n) = (0.2)n u(−n) and its unit pulse
response h(n) is (0.4)n u(n), what is its output y(n)?
3.27 Given an input x(n) = (0.2)−n u(−n) + (0.5)n u(n) and the unit impulse
response of an LTI-DT system as (0.4)n u(n), find its output y(n).
3.28 Given a sequence x1 (n) = (0.3)−n u(−n) and another sequence x2 (n) =
(0.6)n u(−n), find their convolution sum x1 (n) ∗ x2 (n), using their DTFT.
3.29 Find the convolution y(n) = x1 (n) ∗ x2 (n) where x1 (n) = 0.5−n u(−n) and
x2 (n) = (0.2)−n u(−n).
3.30 Find the DTFT of xe(n) and xo(n), where x(n) = (0.4)n u(n), xe(n) = [x(n) + x(−n)]/2 is the even part of x(n), and xo(n) = [x(n) − x(−n)]/2 is the odd part of x(n).
[Figure: magnitude |H(ejω)| of a filter, with the levels 1.0 and 0.05 marked at the frequencies 0.1π, 0.2π, 0.6π, and π.]
What is the magnitude at 6600 Hz? Compute the sample X8 (2) of its
8-point DFT.
3.42 Given the 6-point DFT of f (n), as given below compute the value of f (3):
F (0) = 10.0; F (1) = −3.5 − j 2.6; F (2) = −2.5 − j 0.866
F (3) = −2.0; F (4) = −2.5 + j 0.866; F (5) = −3.5 + j 2.6
3.43 Compute the 6-point IDFT of X(k) given below:
X(k) = {3 + j0  −1 + j0  0 + j1.732  5 + j0  0 − j1.732  −1 − j0}
3.44 If the N-point DFT of a real sequence x(n) is XN(k), prove that the DFT of x((−n))N is X∗N(k), using the property x((−n))N = x(N − n). Show that the DFT of the even part xe(n) = [x(n) + x(−n)]/2 is given by Re X(k) and the DFT of the odd part xo(n) = [x(n) − x(−n)]/2 is given by j Im X(k).
3.45 Find the even part and odd part of the following functions:
x1 (n) = {1 −1 2 0 1 1}
x2 (n) = {1 2 1 −1 0 −2 0 1}
x3 (n) = {1 1 −1 3}
x4 (n) = {0 1 2 −1 1 0}
3.46 Determine which of the following functions have real-valued DFT and which have imaginary-valued DFT:
x1(n) = {1  0.5  1  0  0  1  0.5}
x2(n) = {1  0.5  −1  1  0  1  −1  0.5}
x3(n) = {0  0.5  −1  1  0  −1  1  −0.5}
x4(n) = {1  2  0  0  1  0  0  −2}
3.47 Compute the 4-point DFT and 8-point DFT of x(n) = {1  0.5  −1.5}. Plot their magnitudes and compare their values.
3.48 Calculate the 5-point DFT of the same x(n) = {1  0.5  −1.5}.
3.49 Calculate the 6-point DFT of x(n) = {1  1  0.5  0  −0.5}.
3.50 Given the following samples of the 8-point DFT
X(1) = 1.7071 − j 1.5858
X(3) = 0.2929 + j 4.4142
X(6) = −0 + j 2
find the values of X(2), X(5), and X(7).
3.51 Given the values of X(4), X(13), X(17), X(65), X(81), and X(90) of a 128-point DFT function, what are the values of X(124), X(63), X(115), X(38), X(111), and X(47)?
MATLAB Problems
3.52 Compute the 16-point and 32-point DFTs of the 4-point sequence x(n) =
{1 0.5 0 −0.5}. Plot their magnitudes and compare them.
3.53 Compute the 24-point DFT of the sequence in Problem 3.52, plot the
magnitude of this DFT. Now compute 24-point IDFT of this DFT and
compare it with x(n) given above.
3.54 Plot the magnitude of the following transfer functions:

$$
X_1(z) = \frac{0.5 + 1.2z^{-1}}{1 + 0.2z^{-1} + 0.4z^{-2} + z^{-3} - z^{-4} + 0.06z^{-5}}
$$

$$
X_2(z) = \frac{z^{-3} - 0.8z^{-5} + z^{-1} - 6}{1 + z^{-1} + 0.8z^{-2} - 0.4z^{-3} - 0.3z^{-4} + z^{-5} + 0.05z^{-6}}
$$

$$
X_3(z) = \frac{(1 - 0.3z)(1 + 0.2z + z^2)}{(z^2 + 0.2z + 1.0)(z^2 - 0.1z + 0.05)(z - 0.3)}
$$

$$
X_4(z) = \frac{z}{z + 0.4} - \frac{z + 0.5}{(z + 0.1)^2} + \frac{0.8}{z}
$$
3.55 Plot the magnitude and phase responses of the following functions:

$$
H_1(e^{j\omega}) = \frac{0.2e^{j\omega} + 0.9e^{j2\omega}}{1 - 0.6e^{j\omega} + 0.6e^{j2\omega} - 0.5e^{j3\omega} + e^{j4\omega}}
$$

$$
H_2(e^{j\omega}) = \frac{1 + 0.4e^{-j\omega}}{1 + 0.5e^{-j\omega} - 0.4e^{-j2\omega} + e^{-j3\omega} + 0.3e^{-j4\omega} + 0.1e^{-j5\omega}}
$$

$$
H_3(e^{j\omega}) = H_1(e^{j\omega})H_2(e^{j\omega})
$$

$$
H(z) = \frac{0.25 + z^{-1}}{1 - 0.8z^{-1} + 0.4z^{-2} - 0.05z^{-3}}
$$
3.57 From the real sequence x(n) = {1  −1  2  0.5  0  −1  2  1}, show that the DFT of xe(n) is Re X(k), where the even part xe(n) = [x(n) + x((−n))N]/2.
3.58 From the real sequence in Problem 3.57, obtain its odd part and show that
its DFT = j ImX(k).
REFERENCES 185
REFERENCES
1. B. A. Shenoi, Magnitude and Delay Approximation of 1-D and 2-D Digital Filters,
Springer-Verlag, 1999.
2. C. E. Shannon, Communication in the presence of noise, Proc. IRE 37, 10–21 (Jan. 1949).
3. J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms, and Applications, Prentice-Hall, 1996.
4. B. P. Lathi, Signal Processing and Linear Systems, Berkeley Cambridge Press, 1998.
5. A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, Prentice-
Hall, 1989.
6. V. K. Ingle and J. G. Proakis, Digital Signal Processing Using MATLAB (R) V.4, PWS
Publishing, 1997.
7. S. K. Mitra, Digital Signal Processing—A Computer-Based Approach, McGraw-Hill,
2001.
8. S. K. Mitra and J. F. Kaiser, eds., Handbook for Digital Signal Processing, Wiley-
Interscience, 1993.
9. A. Antoniou, Digital Filters, Analysis, Design and Applications, McGraw-Hill, 1993.
CHAPTER 4
INFINITE IMPULSE RESPONSE FILTERS
4.1 INTRODUCTION
INTRODUCTION 187
Figure 4.1 Magnitude responses of analog filters: (a) lowpass filter; (b) highpass filter;
(c) bandpass filter; (d) bandstop filter.
Let us select any one of the following methods to specify the IIR filters. The recursive algorithm is given by

$$
y(n) = -\sum_{k=1}^{N} a(k)y(n-k) + \sum_{k=0}^{M} b(k)x(n-k) \qquad (4.1)
$$

$$
\sum_{k=0}^{N} a(k)y(n-k) = \sum_{k=0}^{M} b(k)x(n-k); \qquad a(0) = 1 \qquad (4.2)
$$
$$
\tau(\omega) = \frac{1}{1+u^2}\frac{du}{d\omega} - \frac{1}{1+v^2}\frac{dv}{d\omega} \qquad (4.8)
$$

where

$$
u = \frac{\sum_{k=0}^{M} b(k)\sin(k\omega)}{\sum_{k=0}^{M} b(k)\cos(k\omega)} \qquad (4.9)
$$
MAGNITUDE APPROXIMATION OF ANALOG FILTERS 189
and

$$
v = \frac{\sum_{k=0}^{N} a(k)\sin(k\omega)}{\sum_{k=0}^{N} a(k)\cos(k\omega)} \qquad (4.10)
$$
Designing an IIR filter usually means that we find a transfer function H (z)
in the form of (4.3) such that its magnitude response (or the phase response, the
group delay, or both the magnitude and group delay) approximates the specified
magnitude response in terms of a certain criterion. For example, we may want
to amplify the input signal by a constant without any delay or with a constant
amount of delay. But it is easy to see that the magnitude response of a filter or
the delay is not a constant in general and that they can be approximated only by
the transfer function of the filter. In the design of digital filters (and also in the
design of analog filters), three approximation criteria are commonly used: (1) the
Butterworth approximation, (2) the minimax (equiripple or Chebyshev) approxi-
mation, and (3) the least-pth approximation or the least-squares approximation.
We will discuss them in this chapter in the same order as listed here. Designing a
digital filter also means that we obtain a circuit realization or the algorithm that
describes its performance in the time domain. This is discussed in Chapter 6. It
also means the design of the filter is implemented by different types of hardware,
and this is discussed in Chapters 7 and 8.
Two analytical methods are commonly used for the design of IIR digital fil-
ters, and they depend significantly on the approximation theory for the design
of continuous-time filters, which are also called analog filters. Therefore, it is
essential that we review the theory of magnitude approximation for analog filters
before discussing the design of IIR digital filters.
The transfer function of an analog filter H(s) is a rational function of the complex frequency variable s, with real coefficients, and is of the form1

$$
H(s) = \frac{c_0 + c_1 s + c_2 s^2 + \cdots + c_m s^m}{d_0 + d_1 s + d_2 s^2 + \cdots + d_n s^n}, \qquad m \le n \qquad (4.11)
$$

The frequency response or the Fourier transform of the filter is obtained as a function of the frequency ω,2 by evaluating H(s) at s = jω:

$$
H(j\omega) = \frac{c_0 + jc_1\omega - c_2\omega^2 - jc_3\omega^3 + c_4\omega^4 + \cdots + (j)^m c_m\omega^m}{d_0 + jd_1\omega - d_2\omega^2 - jd_3\omega^3 + d_4\omega^4 + \cdots + (j)^n d_n\omega^n} \qquad (4.12)
$$

$$
= |H(j\omega)|\, e^{j\phi(\omega)} \qquad (4.13)
$$
1
Much of the material contained in Sections 4.2–4.10 has been adapted from the author’s book
Magnitude and Delay Approximation of 1-D and 2-D Digital Filters and is included with permission
from its publisher, Springer-Verlag.
2
In Sections 4.2–4.8, discussing the theory of analog filters, we use ω and Ω to denote the angular frequency in radians per second. The notation ω there should not be confused with the normalized digital frequency used in H(ejω).
190 INFINITE IMPULSE RESPONSE FILTERS
Example 4.1
$$
H(s) = \frac{s+1}{s^2 + 2s + 2} \qquad (4.16)
$$

The first step is to multiply H(s) by H(−s) and evaluate the product at s = jω:

$$
|H(j\omega)|^2 = \frac{\omega^2 + 1}{\omega^4 + 4} \qquad (4.19)
$$
From this example, we see that to find the transfer function H (s) in (4.16) from
the magnitude squared function in (4.19), we reverse the steps followed above in
deriving the function (4.19) from the H (s). In other words, we substitute j ω =
s (or ω2 = −s 2 ) in the given magnitude squared function to get H (s)H (−s)
and factorize its numerator and denominator. For every pole at sk (and zero)
in H (s), there is a pole at −sk (and zero) in H (−s). So for every pole in
the left half of the s plane, there is a pole in the right half of the s plane,
and it follows that a pair of complex conjugate poles in the left half of the s
plane appear with a pair of complex conjugate poles in the right half-plane also,
thereby displaying a quadrantal symmetry. Therefore, when we have factorized
the product H (s)H (−s), we pick all its poles that lie in the left half of the
s-plane and identify them as the poles of H (s), leaving their mirror images in
the right half of the s-plane as the poles of H (−s). This assures us that the
transfer function is a stable function. Similarly, we choose the zeros in the left
half-plane as the zeros of H (s), but we are free to choose the zeros in the
right half-plane as the zeros of H (s) without affecting the magnitude. It does
change the phase response of H (s), giving a non–minimum phase response.
Consider a simple example: F1(s) = (s + 1) and F2(s) = (s − 1). Then F22(s) = (s + 1)[(s − 1)/(s + 1)] has the same magnitude as the function F1(s), since the magnitude of (s − 1)/(s + 1) is equal to |(jω − 1)/(jω + 1)| = 1 for all frequencies. But the phase of F22(jω) is increased by the phase response of the allpass function (s − 1)/(s + 1). Hence F22(s) is a non-minimum-phase function. In general, any function that has all its zeros in the left half of the s plane (or, for a discrete-time system, inside the unit circle in the z plane) is defined as a minimum phase function. If it has at least one zero in the right half-plane (outside the unit circle), it becomes a non-minimum-phase function.
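The factor-and-pick procedure of this section can be sketched numerically: substitute ω² = −s² in the magnitude squared function of Example 4.1, find the roots, and keep only the left-half-plane poles (and, for a minimum phase choice, the left-half-plane zeros). NumPy's root finder performs the factorization:

```python
import numpy as np

# |H(jw)|^2 = (w^2 + 1)/(w^4 + 4); with w^2 = -s^2 this becomes
# H(s)H(-s) = (1 - s^2)/(s^4 + 4).
num = np.poly1d([-1.0, 0.0, 1.0])            # 1 - s^2
den = np.poly1d([1.0, 0.0, 0.0, 0.0, 4.0])   # s^4 + 4

lhp_zeros = [z for z in num.roots if z.real < 0]   # s = -1
lhp_poles = [p for p in den.roots if p.real < 0]   # s = -1 +/- j1

print(np.real(np.poly(lhp_zeros)))   # numerator s + 1
print(np.real(np.poly(lhp_poles)))   # denominator s^2 + 2s + 2
```

The recovered factors are exactly the numerator and denominator of H(s) in (4.16), confirming the quadrantal symmetry argument.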
[Figure 4.2 sketch: the passband magnitude lies between 1.0 and 1 − δp up to ωp, the stopband magnitude is at most δs beyond ωs, and the transition band lies between ωp and ωs.]
Figure 4.2 Magnitude response of an ideal lowpass analog filter showing the tolerances.
bandwidth ωp, (4) a stopband frequency ωs, and (5) the magnitude of the filter at ωs. The transfer function of the analog filter with these practical specifications will be denoted by H(s) in the following discussion, and the prototype lowpass filter will be denoted by H(p). Before we proceed with the analytical design procedure, we normalize the magnitude of the filter by H0 for convenience and scale the frequencies ωp and ωs by ωp so that the bandwidth of the prototype filter and its stopband frequency become Ωp = 1 and Ωs = ωs/ωp, respectively. The specifications on the magnitude at Ωp and Ωs are satisfied by the proper choice of D2n and n in the function (4.22) as explained below.
If, for example, the magnitude at the passband frequency is required to be $1/\sqrt{2}$, which means that the log magnitude required is −3 dB, then we choose D2n = 1. If the magnitude at the passband frequency Ω = Ωp = 1 is required to be 1 − δp, then we choose D2n, normally denoted by ε², such that

$$
|H(j1)|^2 = \frac{1}{1 + D_{2n}} = \frac{1}{1 + \varepsilon^2} = (1 - \delta_p)^2 \qquad (4.23)
$$

If the passband attenuation is specified as Ap dB, then

$$
10\log\frac{1}{1+\varepsilon^2} = -A_p
$$
$$
10\log(1+\varepsilon^2) = A_p
$$
$$
\log(1+\varepsilon^2) = 0.1A_p
$$
$$
(1+\varepsilon^2) = 10^{0.1A_p}
$$

From the last equation, we get the formulas $\varepsilon^2 = 10^{0.1A_p} - 1$ and $\varepsilon = \sqrt{10^{0.1A_p} - 1}$.
Let us consider the common case of a Butterworth filter with a log magnitude of −3 dB at the bandwidth Ωp to develop the design procedure for a Butterworth lowpass filter. In this case, we use the function for the prototype filter in the form

$$
|H(j\Omega)|^2 = \frac{1}{1 + \Omega^{2n}} \qquad (4.24)
$$
[Figure 4.3: magnitude of the Butterworth lowpass prototype versus frequency in rad/sec for n = 2 and n = 6.]
The attenuation in decibels is

$$
\alpha = -10\log|H(j\Omega)|^2 = 10\log(1 + \Omega^{2n})
$$

The attenuation over the passband only is shown in Figure 4.4a, and the maximum attenuation in the passband is 3 dB for all n; the attenuation characteristic of the filters over 1 ≤ Ω ≤ 10 for n = 1, 2, . . . , 10 is shown in Figure 4.4b.
[Figure 4.4: (a) passband attenuation α in dB over 0 ≤ Ω ≤ 1.0 for n = 1, 2, . . . , 10; (b) stopband attenuation α in dB over 0 ≤ Ω ≤ 10 for n = 1, 2, . . . , 10.]
To find H(p) from (4.24), we reverse the steps used to derive the magnitude squared function from H(p), as illustrated by Example 4.1 earlier. First we substitute Ω = p/j, or equivalently Ω² = −p², in (4.24):

$$
|H(j\Omega)|^2\Big|_{\Omega^2=-p^2} = \frac{1}{1 + (-1)^n p^{2n}} = H(p)H(-p) \qquad (4.25)
$$

The poles of H(p)H(−p) are the roots of the equation

$$
1 + (-1)^n p^{2n} = 0 \qquad (4.26)
$$

or the equation

$$
p^{2n} = \begin{cases} 1 = e^{j2k\pi}, & n \text{ odd} \\ -1 = e^{j(2k+1)\pi}, & n \text{ even} \end{cases} \qquad (4.27)
$$
We notice that in both cases, the poles have a magnitude of one and the angle
between any two adjacent poles as we go around the unit circle is equal to π/n.
There are n poles in the left half of the p plane and n poles in the right half of
the p plane, as illustrated for the cases of n = 2 and n = 3 in Figure 4.5. For
every pole of H (p) at p = pa that lies in the left half-plane, there is a pole of
H (−p) at p = −pa that lies in the right half-plane. Because of this property,
we identify n poles that are in the left half of the p plane as the poles of H (p)
so that it is a stable transfer function; the poles that are in the right half-plane
are assigned as the poles of H (−p). The n poles that are in the left half of the
p plane are given by

$$
p_k = \exp\left[j\left(\frac{2k+n-1}{2n}\right)\pi\right], \qquad k = 1, 2, 3, \ldots, n \qquad (4.31)
$$

When we have found these n poles, we construct the denominator polynomial D(p) of the prototype filter H(p) = 1/D(p) from

$$
D(p) = \prod_{k=1}^{n}(p - p_k) \qquad (4.32)
$$
[Figure 4.5: poles of H(p)H(−p) on the unit circle of the p plane for n = 2 and n = 3; adjacent poles are separated by the angle π/n.]
The only unknown parameter at this stage of design is the order n of the filter function H(p), which is required in (4.31). This is calculated using the specification that at the stopband frequency Ωs, the log magnitude is required to be no more than −As dB, or the minimum attenuation in the stopband to be As dB:

$$
10\log|H(j\Omega_s)|^2 = -10\log(1 + \Omega_s^{2n}) \le -A_s \qquad (4.33)
$$

which yields

$$
n \ge \frac{\log(10^{0.1A_s} - 1)}{2\log\Omega_s} \qquad (4.34)
$$

The denominator of the prototype filter can also be expressed in polynomial form:

$$
D(p) = 1 + d_1 p + d_2 p^2 + \cdots + d_n p^n \qquad (4.35)
$$

But there is no need to do so, since the coefficients can be computed from (4.32). They are also listed in many books for n up to 10 in polynomial form, and in some books in a factored form also [3,2]. We list a few of them in Table 4.1.
198 INFINITE IMPULSE RESPONSE FILTERS
TABLE 4.1 Butterworth Polynomial D(p) in Polynomial and Factored Form

n = 1: p + 1
n = 2: p^2 + √2 p + 1
n = 3: p^3 + 2p^2 + 2p + 1 = (p + 1)(p^2 + p + 1)
n = 4: p^4 + 2.61326p^3 + 3.41421p^2 + 2.61326p + 1
       = (p^2 + 0.76537p + 1)(p^2 + 1.84776p + 1)
n = 5: p^5 + 3.23607p^4 + 5.23607p^3 + 5.23607p^2 + 3.23607p + 1
Example 4.2
Figure 4.6 Magnitude response specifications of prototype filters: (a) Butterworth filter;
(b) Chebyshev (equiripple) filter.
Hence the transfer function of the normalized prototype filter of third order is
H(p) = 1 / (p^3 + 2p^2 + 2p + 1)   (4.39)
To restore the magnitude scale, we multiply this function by H0 . Now the filter
function is
H(p) = H0 / (p^3 + 2p^2 + 2p + 1)   (4.40)
[Figure 4.7: magnitude responses in dB of the filters of Examples (2), (3), and (4), plotted against frequency in radians/sec on a linear scale.]
and

pk = ε^{−1/n} exp[j((2k + n − 1)/(2n))π],   k = 1, 2, 3, . . . , n   (4.45)

Comparing (4.45) with (4.31), it is obvious that the poles have been scaled by a factor ε^{−1/n}. So the maximum attenuation at Ωp = 1 is the specified value of Ap; also the frequency at which the attenuation is 3 dB is equal to ε^{−1/n}.
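A short sketch of (4.45) (illustrative; the helper names are chosen here). Note that for Ap = 0.5 dB and n = 3 the pole radius ε^{−1/3} works out to 1.4199, the value that appears in Example 4.3:

```python
import cmath
import math

def ripple_epsilon(A_p):
    """epsilon from the passband spec: 10 log(1 + eps^2) = A_p dB at Omega = 1."""
    return math.sqrt(10 ** (0.1 * A_p) - 1)

def scaled_butterworth_poles(n, eps):
    """Poles of eq. (4.45): the poles of (4.31) scaled by the radius eps**(-1/n)."""
    r = eps ** (-1.0 / n)
    return [r * cmath.exp(1j * math.pi * (2 * k + n - 1) / (2 * n))
            for k in range(1, n + 1)]
```

All n poles share the common radius ε^{−1/n}, which is exactly the scaling property stated above.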
Example 4.3
H(p) = H0 / [(p + 1.4199)(p + 0.71 − j1.2297)(p + 0.71 + j1.2297)]
     = H0 / [(p + 1.4199)(p^2 + 1.42p + 2.0163)]   (4.46)
Since the maximum value has been normalized to 0 dB, which occurs at Ω = 0, we equate the magnitude of H(p) evaluated at p = j0 to one. Therefore H0 = (1.4199)(2.0163) = 2.8629. To raise the magnitude level to 5 dB, we have to multiply this constant by √(10^{0.5}) = 1.7783. Of course, we can compute the same value for H0 in one step, from the specification 20 log |H(j0)| = 20 log H0 − 20 log[(1.4199)(2.0163)] = 5. The frequency scale is restored by putting p = s/1000 in (4.46) to get (4.47) as the transfer function of the filter that meets the given specifications:

H(s) = (2.8629)(1.7783) / {[s/1000 + 1.4199][(s/1000)^2 + 1.42(s/1000) + 2.0163]}
     = 5.09 × 10^9 / {[s + 1419.9][s^2 + 1420s + 2.0163 × 10^6]}   (4.47)
The plot is marked as "Example (3)" in Figure 4.7. It is the magnitude response of the prototype filter given by (4.46). It has a magnitude of −0.5 dB at Ω = 1 and approximately −33 dB at Ω = 5, which exceeds the specified value.
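The numbers quoted for Example 4.3 can be checked by evaluating (4.46) on the jΩ axis (an illustrative sketch; `prototype_mag_db` is a name chosen here):

```python
import math

H0 = 2.8629  # (1.4199)(2.0163), normalizing |H(j0)| to one

def prototype_mag_db(omega):
    """20 log10 |H(jOmega)| for the prototype H(p) of eq. (4.46)."""
    p = 1j * omega
    h = H0 / ((p + 1.4199) * (p ** 2 + 1.42 * p + 2.0163))
    return 20 * math.log10(abs(h))
```

Evaluating at Ω = 0, 1, and 5 reproduces the 0 dB, −0.5 dB, and roughly −33 dB values cited above.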
|H(jΩ)|^2 = H0^2 / (1 + ε^2 Cn^2(Ω))   (4.48)

where Cn(Ω) is the Chebyshev polynomial of degree n. It is defined by

Cn(Ω) = cos(n cos^{−1} Ω),   |Ω| ≤ 1   (4.49)

The polynomial Cn(Ω) approximates a value of zero over the closed interval −1 ≤ Ω ≤ 1. The polynomials of the lowest degrees are

C0(Ω) = 1
C1(Ω) = Ω
C2(Ω) = 2Ω^2 − 1
C3(Ω) = 4Ω^3 − 3Ω
C4(Ω) = 8Ω^4 − 8Ω^2 + 1
C5(Ω) = 16Ω^5 − 20Ω^3 + 5Ω   (4.50)

Polynomials of higher degree are generated by the recurrence relation

C0(Ω) = 1,  C1(Ω) = Ω
C_{k+1}(Ω) = 2Ω Ck(Ω) − C_{k−1}(Ω)   (4.52)
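The recurrence (4.52) is easy to implement and check against the closed forms in (4.50) and the definition (4.49) (a small illustrative sketch; `chebyshev` is a name chosen here):

```python
import math

def chebyshev(n, x):
    """C_n(x) generated by the recurrence (4.52)."""
    c_prev, c = 1.0, x        # C_0 and C_1
    if n == 0:
        return c_prev
    for _ in range(n - 1):
        c_prev, c = c, 2 * x * c - c_prev
    return c
```

For |x| ≤ 1 the result agrees with cos(n cos^{−1} x), while for |x| > 1 the same recurrence gives the rapidly growing stopband values of the polynomial.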
Figure 4.8 Chebyshev polynomials and Chebyshev filter: (a) magnitude of Chebyshev
polynomials; (b) attenuation of a Chebyshev I filter.
To see that Cn(Ω) = cos(n cos^{−1} Ω) is indeed a polynomial of order n, write Ω = cos(φ) and consider

cos(nφ) = Re[e^{jnφ}]
        = Re[(cos(φ) + j sin(φ))^n] = Re[(Ω + j√(1 − Ω^2))^n]
        = Re[(Ω + √(Ω^2 − 1))^n]   (4.53)

Expanding (Ω + √(Ω^2 − 1))^n by the binomial theorem and choosing the real part, we get the polynomial for Cn(Ω):

cos(nφ) = Ω^n + [n(n − 1)/2!] Ω^{n−2}(Ω^2 − 1)
        + [n(n − 1)(n − 2)(n − 3)/4!] Ω^{n−4}(Ω^2 − 1)^2 + · · ·   (4.54)
Recall that since n is a positive integer, the expansion expressed above has a finite number of terms, and hence we conclude that it is a polynomial (of degree n). We also note from (4.50) that

Cn^2(0) = 0 for n odd and Cn^2(0) = 1 for n even   (4.55)

But

Cn^2(1) = 1 for n odd as well as n even   (4.56)
[Figure: |H(jΩ)|^2 of Chebyshev I filters, with passband edge Ωp and stopband edge Ωs: (a) n = 3 (odd) and n = 4 (even); (b) n = 5 (odd) and n = 4 (even).]
Since 1 + ε^2 Cn^2(Ω) = 0 gives Cn^2(Ω) = −1/ε^2 = (j/ε)^2, we derive, with Ω = cos(φ1 + jφ2),

Cn(Ω) = ± j/ε = cos(nφ) = cos(n(φ1 + jφ2))
      = cos(nφ1) cosh(nφ2) − j sin(nφ1) sinh(nφ2)   (4.59)

Equating the real and imaginary parts, we get

cos(nφ1) cosh(nφ2) = 0   (4.60)

and

sin(nφ1) sinh(nφ2) = ∓ 1/ε   (4.61)

From (4.60) we get

φ1 = (2k − 1)π/(2n)   (4.62)

and, since sin(nφ1) = ±1 at these angles, (4.61) gives

φ2 = (1/n) sinh^{−1}(1/ε)   (4.63)
Now Ω = cos(φ) = cos(φ1 + jφ2) = cos(φ1) cosh(φ2) − j sin(φ1) sinh(φ2). Therefore

p = jΩ = sin(φ1) sinh(φ2) + j cos(φ1) cosh(φ2)   (4.64)

These are the roots in the p plane that satisfy the condition 1 + ε^2 Cn^2(Ω) = 0. Hence the 2n poles of H(p)H(−p) are given by

pk = sinh(φ2) sin[(2k − 1)π/(2n)] + j cosh(φ2) cos[(2k − 1)π/(2n)]
for k = 1, 2, . . . , 2n   (4.65)

The 2n poles of H(p)H(−p) given by (4.65) can be shown to lie on an elliptic contour in the p plane, with a major semiaxis equal to cosh(φ2) along the jΩ axis and a minor semiaxis equal to sinh(φ2) along the real axis. The poles in the left half of the p plane only are given by

pk = −sinh(φ2) sin[(2k − 1)π/(2n)] + j cosh(φ2) cos[(2k − 1)π/(2n)]
   = −sinh(φ2) sin(θk) + j cosh(φ2) cos(θk),   k = 1, 2, 3, . . . , n   (4.67)

where φ2 is obtained from (4.63). In (4.67), note that θk are the angles measured from the imaginary axis of the p plane and the poles lie in the left half of the p plane.
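The ellipse property is easy to verify numerically (an illustrative sketch; φ2 is computed from (4.63) and `cheb1_poles` is a name chosen here):

```python
import math

def cheb1_poles(n, eps):
    """Left-half-plane poles of eq. (4.67)."""
    phi2 = math.asinh(1 / eps) / n        # eq. (4.63)
    return [complex(-math.sinh(phi2) * math.sin((2 * k - 1) * math.pi / (2 * n)),
                    math.cosh(phi2) * math.cos((2 * k - 1) * math.pi / (2 * n)))
            for k in range(1, n + 1)]
```

Substituting each pole into the ellipse equation (σ/sinh φ2)^2 + (Ω/cosh φ2)^2 = 1 confirms the elliptic contour described above.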
The formula for finding the order n is derived from the requirement that 10 log[1 + ε^2 Cn^2(Ωs)] ≥ As. It is

n ≥ cosh^{−1} √[(10^{0.1As} − 1)/(10^{0.1Ap} − 1)] / cosh^{−1}(Ωs)   (4.68)
and the value of n is chosen as the next higher integer for calculating the poles using (4.67). Given ωp, Ap, ωs, and As as the specifications for a Chebyshev lowpass filter H(s), its maximum value in the passband is normalized to one, and its frequencies are scaled by ωp, to get the values of Ωp = 1 and Ωs = ωs/ωp for the prototype filter at which the attenuations are Ap and As, respectively. The design procedure to find H(s) starts with the magnitude squared function (4.48) and proceeds as follows:

1. Calculate ε = √(10^{0.1Ap} − 1).
2. Calculate n from (4.68) and choose the next integer value for n.
3. Calculate φ2 from (4.63).
4. Calculate the poles pk (k = 1, 2, . . . , n) from (4.67).
5. Compute H(p) = H0 / [∏_{k=1}^{n} (p − pk)] = H0 / [Σ_{k=0}^{n} dk p^k].
6. Find the value of H0 by equating

H(0) = H0/d0 = 1 for n odd, and H(0) = H0/d0 = 1/√(1 + ε^2) for n even
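The whole procedure, steps 1–6, fits in a short sketch (illustrative; the polynomial expansion is done directly rather than with a library, and `cheb1_prototype` is a name chosen here):

```python
import math

def cheb1_prototype(n, A_p):
    """Steps 1-6: returns (H0, real coefficients of D(p) in descending powers)."""
    eps = math.sqrt(10 ** (0.1 * A_p) - 1)            # step 1
    phi2 = math.asinh(1 / eps) / n                    # step 3, eq. (4.63)
    poles = [complex(-math.sinh(phi2) * math.sin((2 * k - 1) * math.pi / (2 * n)),
                     math.cosh(phi2) * math.cos((2 * k - 1) * math.pi / (2 * n)))
             for k in range(1, n + 1)]                # step 4, eq. (4.67)
    d = [1 + 0j]
    for r in poles:                                   # step 5: expand prod (p - p_k)
        d = [a - r * b for a, b in zip(d + [0], [0] + d)]
    d0 = d[-1].real
    H0 = d0 if n % 2 else d0 / math.sqrt(1 + eps ** 2)  # step 6
    return H0, [c.real for c in d]
```

For odd n this gives H(0) = 1 exactly, and the magnitude at the passband edge Ω = 1 comes out as 1/√(1 + ε^2), i.e., the attenuation there is exactly Ap dB.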
Example 4.4
When we substitute p = s/2500 in this H (p) and simplify the expression, we get
H(s) = 19.886 × 10^{12} / [(s^2 + 1566s + 714 × 10^6)(s + 1566)]   (4.70)
1 − 1/(1 + ε^2 Cn^2(1/Ω)) = ε^2 Cn^2(1/Ω) / (1 + ε^2 Cn^2(1/Ω)) = 1 / (1 + 1/(ε^2 Cn^2(1/Ω)))   (4.72)
[Figure: the Chebyshev II magnitude |H(jΩ)|^2 constructed from |H(j(1/Ω))|^2 and 1 − |H(j(1/Ω))|^2.]
when n is odd, the number of finite zeros in the stopband is (n − 1)/2 = m. When
n is an odd integer, the term sec θk , which is involved in the design procedure
described below, attains a value of ∞ when k = (n + 1)/2. So one of the zeros
is shifted to j ∞; the remaining finite zeros appear in conjugate pairs on the
imaginary axis, and hence the numerator of the Chebyshev II filter is expressed
as shown in step 6 in Section 4.2.7. Note that the value of ε calculated in step 1 is different from the value calculated in the design of Chebyshev I filters, and therefore the value of φ2 used in steps 3 and 4 is different from the φ2 used in the design of Chebyshev I filters. Hence it would be misleading to state that the
poles of the Chebyshev II filters are obtained as “the reciprocals of the poles of
the Chebyshev I filters.”
6. Compute

H(p) = H0 ∏_{k=1}^{m} (p^2 + Ω_{0k}^2) / ∏_{k=1}^{n} (p − pk)

and calculate H0 = ∏_{k=1}^{n} (pk) / ∏_{k=1}^{m} (Ω_{0k}^2).
7. Restore the magnitude scale.
8. Restore the frequency scale by putting p = s/ωs in H (p) to get H (s) for
the inverse Chebyshev filter.
Example 4.5
7. Calculate H0 = 0.049995.
8. Hence we simplify H (p) to the final form:
The magnitude response of (4.73) is plotted in Figure 4.11. It is seen that the
prototype filter meets the desired specifications. Now we only have to denormal-
ize the frequency by 2000, so that the passband of the specified filter changes
from 0.5 to 1000 rad/s, and it meets the specifications given in Example 4.5.
[Figure 4.11: magnitude in dB (20 down to −100) of the Chebyshev II prototype filter, versus frequency in radians/sec on a linear scale from 0 to 5.]
−1 ≤ Ω ≤ 1 of the lowpass prototype filter. We calculate the frequency Ωs to which the specified stopband frequency ωs maps, by putting s = jωs in (4.75). The stopband frequency is found to be Ωs = ωp/ωs. So the specified magnitude response of the highpass filter is transformed into that of the lowpass prototype equiripple filter. We design the prototype lowpass filter to meet these specifications and then substitute p = ωp/s in H(p) to get the transfer function H(s) of the specified highpass filter.
Example 4.6
H(p) = (0.715693)(1.7783) / [(p^2 + 0.626456p + 1.142447)(p + 0.626456)]

and, after the substitution p = ωp/s,

H(s) = 1.7783 s^3 / {[s^2 + 1370.9s + 5.4707 × 10^6][s + 3990]}   (4.76)
The magnitude response of (4.76) is plotted in Figure 4.12 and is found to exceed
the specifications of the given highpass filter. The design of a highpass filter with
a maximally flat passband response or with an equiripple response in both the
passband and the stopband is carried out in a similar manner.
[Figure 4.12: magnitude in dB of the highpass filter of Example 4.6, versus frequency in rad/sec on a log scale.]
[Figure: magnitude specification of a bandpass filter, with maximum 1.0, passband level 1 − δ1 between ω1 and ω2, and stopband level δ2 below ω3 and above ω4.]
the maximum magnitude are specified. The type of passband response required
may be a Butterworth or Chebyshev response.
The lowpass–bandpass (LP–BP) frequency transformation p = g(s) that is
used for the design of a specified bandpass filter is
p = (1/B)[(s^2 + ω0^2)/s]   (4.77)
ANALOG FREQUENCY TRANSFORMATIONS 215
√
where B = ω2 − ω1 is the bandwidth of the filter and ω0 = ω1 ω2 is the geo-
metric mean frequency of the bandpass filter.
A frequency s = jωk in the bandpass filter is mapped to a frequency p = jΩk under this transformation, which is obtained by

jΩk = (j/B)[(ωk^2 − ω0^2)/ωk]   (4.78)
    = j(ω0/B)[(ωk/ω0) − (ω0/ωk)]   (4.79)
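Under this mapping the geometric mean frequency ω0 goes to Ω = 0 and the band edges ω1, ω2 go to Ω = −1 and Ω = +1, which a short sketch confirms (illustrative; `lp_bp_map` is a name chosen here):

```python
def lp_bp_map(omega, omega1, omega2):
    """Prototype frequency Omega to which a bandpass frequency omega maps,
    eqs. (4.77)-(4.79), with B = omega2 - omega1 and omega0^2 = omega1*omega2."""
    B = omega2 - omega1
    w0_sq = omega1 * omega2
    return (omega ** 2 - w0_sq) / (B * omega)
```

This symmetry about ω0 on a logarithmic scale is why ω0 is called the geometric mean frequency of the bandpass filter.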
Example 4.7
[Figure 4.14: magnitude in dB of the bandpass filter of Example 4.7, versus frequency in rad/sec on a log scale from 10^3 to 10^6.]
H(s) = 5.7658 × 10^{19} s^4 / D(s)

where

D(s) = (s^4 + 2.7162 × 10^4 s^3 + 101.729 × 10^8 s^2 + 2.7162 × 10^{13} s + 10^{18})
       × (s^4 + 6.5583 × 10^4 s^3 + 44.462 × 10^8 s^2 + 6.5583 × 10^{13} s + 10^{18})   (4.82)
To verify the design, we have plotted the magnitude response of the bandpass
filter in Figure 4.14.
[Figure 4.15: magnitude specification of a bandstop filter, with maximum 1.0, passband level 1 − δ1 below ω1 and above ω2, and a stopband between ω3 and ω4.]
This transformation transforms the entire passband of the bandstop filter to the passband |Ω| ≤ 1 of the prototype lowpass filter. So we have to find the frequency
Example 4.8
Suppose that we are given the specification of a bandstop filter as shown in Figure 4.15. In this example, we are given ω1 = 1500, ω2 = 2000, ωs = ω4 = 1800, Ap = 0.2 dB, and As = 55 dB. The passband is required to have a maximally flat response. With these specifications, we design the bandstop filter following the procedure given below:

1. B = 2000 − 1500 = 500 and ω0 = √((2000)(1500)) = 1732.1.
2. The LP–BS frequency transformation is p = 500[s/(s^2 + 3 × 10^6)].
3. Let s = jωs = j1800. Then we get Ωs = 3.74.
4. Following the design procedure used in Example 4.2, we get ε = √(10^{0.02} − 1) = 0.21709, and from (4.44), we get n = 5.946 and choose n = 6.
5. The six poles are calculated from (4.45) as pk = −0.33385 ± j1.2459, −0.9121 ± j0.9121, and −1.246 ± j0.3329.
6. The transfer function of the lowpass prototype filter H(p) is constructed from H(p) = H0/[∏_{k=1}^{6} (p − pk)] as

H(p) = (1.664)^3 / [(p^2 + 0.6677p + 1.664)(p^2 + 1.824p + 1.664)(p^2 + 2.492p + 1.664)]   (4.85)

7. Next we have to substitute p = 500[s/(s^2 + 3 × 10^6)] in this H(p) and simplify the expression to get the transfer function H(s) of the specified bandstop filter. This completes the design of the bandstop filter. The magnitude response is found to exceed the given specifications.
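Steps 1–4 of this example can be reproduced numerically (an illustrative sketch; exact arithmetic gives Ωs = 3.75, the text's 3.74 reflecting intermediate rounding):

```python
import math

w1, w2, ws = 1500.0, 2000.0, 1800.0   # bandstop specification
Ap, As = 0.2, 55.0

B = w2 - w1                            # step 1: 500
w0 = math.sqrt(w1 * w2)                # 1732.05...
s = 1j * ws
p = B * s / (s ** 2 + w0 ** 2)         # steps 2-3: LP-BS transformation at s = j*1800
Omega_s = abs(p)                       # 3.75
eps = math.sqrt(10 ** (0.1 * Ap) - 1)  # step 4: 0.21709
n_exact = math.log10((10 ** (0.1 * As) - 1) / eps ** 2) / (2 * math.log10(Omega_s))
# n_exact ~ 5.95, so n = 6 as chosen in step 4
```

The order bound used here is the ε-generalized Butterworth formula cited in the example as (4.44).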
The sections above briefly summarize the theory of approximating the piece-
wise constant magnitude of analog filters. This theory will be required for approx-
imating the magnitude of digital filters, which will be treated in the following
sections. The analog frequency transformations p = g(s) applied to the lowpass
prototype to generate the other types of filters are listed in Table 4.2.
[Table 4.2: the analog frequency transformations, listing for each type of transformation the function p = g(s) and the parameters used.]
In contrast to analog filters, digital filters are described by two types of transfer
functions: transfer functions of finite impulse response filters and those of infinite
impulse response filters. The methods for designing FIR filters will be treated
in the next chapter. Now that we have reviewed the methods for approximating
the magnitude of analog filters, it is necessary to understand the relationship
between the frequency-domain description of analog and digital filters, in order
to understand the frequency transformation that is used to transform the analog
frequency response specifications to those of the digital filters.
The procedures used for designing IIR filters employ different transformations
of the form s = f (z) to transform H (s) into H (z). The transformation s = f (z)
must satisfy the requirement that the digital filter transfer function H (z) be stable,
when it is obtained from the analog filter transfer functions H (s). The transfer
functions for the analog filters obtained by the methods described above are stable
functions; that is, their poles are in the left half of the s plane. When H (s) and
f (z) are stable in the s and z domains, respectively, the poles of H (s) in the left
half of the s plane map to the poles inside the unit circle in the z plane; therefore
H (z) also is a stable transfer function. We also would like to have frequencies
from −∞ to ∞ on the j ω axis of the s plane mapped into frequencies on the
boundary of the unit circle—without encountering any discontinuities.
We have already introduced the transformation z = esT , in Chapter 2, when
we derived the z transform of a discrete-time signal x(nT ) generated from the
analog signal x(t).
We plot the magnitude response of the analog filter as a function of ω. Under
the impulse-invariant transformation, s = j ω maps to z = ej ωT . Although the
magnitude of the digital filter H (ej ωT ) is a function of the variable ej ωT , we
cannot plot it as a function of ej ωT . We can plot the magnitude response of
the digital filter only as a function of ωT . (Again, we point out that the nor-
malized digital frequency ωT is commonly denoted in the DSP literature by
the symbol ω.) When s = j ω increases values from −j ∞ along the imaginary
axis to +j ∞, the variable ej ωT increases counterclockwise from e−j π to ej π
(passing through z = 1) along the boundary of the unit circle in the z plane and
repeats itself since ej ωT = ej (ωT +2rπ) , where r is an integer. The strips in the
left half of the s plane bounded by ±j [(2r − 1)π/T ] and ±j [(2r + 1)π/T ]
on the j ω axis are mapped to the inside and the boundary of the unit cir-
cle in the z plane as shown in Figure 4.16. Therefore the frequency response X*(jω) = Σ_{n=0}^{∞} x(nT)e^{−jωnT} is periodic and will avoid aliasing only if the analog signal is bandlimited to frequencies below π/T.
[Figure 4.16: the strips of the s plane bounded by ±π/T, ±3π/T, . . . on the jω axis map onto the unit circle of the z plane under z = e^{sT}.]
H(s) = Σ_{k=1}^{K} Rk/(s + sk)   (4.86)

The unit impulse response hk(t) of a typical term Rk/(s + sk) is Rk e^{−sk t}. When it is sampled with a sampling period T, and the z transform is evaluated, it becomes

Σ_{n=0}^{∞} Rk e^{−sk nT} z^{−n} = Rk/(1 − e^{−sk T} z^{−1}) = Rk z/(z − e^{−sk T})   (4.87)

Hence H(z) derived from H(s) under the transformation z = e^{sT} is given by

H(z) = Σ_{k=1}^{K} Rk z/(z − e^{−sk T})   (4.88)
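The matching of h(nT) claimed for (4.87)–(4.88) can be demonstrated with a single pole (an illustrative sketch; the values R = 2, sk = 3, T = 0.1 are arbitrary):

```python
import math

def sampled_analog(R, sk, T, N):
    """h(t) = R e^{-sk t} sampled at t = nT."""
    return [R * math.exp(-sk * n * T) for n in range(N)]

def digital_unit_pulse(R, sk, T, N):
    """Unit-pulse response of H(z) = R z/(z - e^{-sk T}),
    realized as the recursion y[n] = e^{-sk T} y[n-1] + R delta[n]."""
    a = math.exp(-sk * T)
    y, state = [], 0.0
    for n in range(N):
        state = a * state + (R if n == 0 else 0.0)
        y.append(state)
    return y
```

The two sequences agree sample by sample, which is exactly the impulse-invariance property; the frequency responses, however, agree only to the extent that h(t) is bandlimited, as discussed below.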
Because the unit pulse response h(nT ) of the digital filter matches the unit
impulse response h(t) at the instants of sampling t = nT , the transformation z =
esT is called the impulse-invariant transformation. But the frequency response of
H (z) will not match the frequency response of H (s) unless h(t) is bandlimited.
If the magnitude response of the analog filter H (j ω) is very small for frequencies
larger than some frequency ωb , and h(t) is sampled at a frequency greater than
2ωb , the frequency response of the digital filter H (z) obtained from the impulse-
invariant transformation may give rise to a small amount of aliasing that may
or may not be acceptable in practical design applications. However, this method
BILINEAR TRANSFORMATION 221
is not applicable for the design of highpass, bandstop, and allpass filters since
their frequency responses are not bandlimited at all. If the impulse-invariant
transformation is applied to a minimum phase analog filter H (s), the resulting
digital filter may or may not be a minimum phase filter. For these reasons, the
impulse-invariant transformation is not used very often in practical applications.
The bilinear transformation is the one that is the most often used for designing
IIR filters. It is defined as
s = (2/T)[(z − 1)/(z + 1)]   (4.89)
To find how frequencies on the unit circle in the z plane map to those in the s
plane, let us substitute z = ej ωT in (4.89). Note that ω is the angular frequency
in radians per second and ωT is the normalized frequency in the z plane. Instead
of using ω as the notation for the normalized frequency of the digital filter, we
may denote θ as the normalized frequency to avoid any confusion in this section:
s = (2/T)(e^{jωT} − 1)/(e^{jωT} + 1) = (2/T)(e^{j(ωT/2)} − e^{−j(ωT/2)})/(e^{j(ωT/2)} + e^{−j(ωT/2)}) = j(2/T) tan(ωT/2)
  = j2fs tan(ωT/2) = jλ
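The algebra above is easy to confirm numerically (an illustrative sketch; the function names are chosen here):

```python
import cmath
import math

def bilinear_image(omega, T):
    """Image of z = e^{j omega T} under s = (2/T)(z - 1)/(z + 1), eq. (4.89)."""
    z = cmath.exp(1j * omega * T)
    return (2 / T) * (z - 1) / (z + 1)

def warped(omega, T):
    """Closed form of the same point: s = j(2/T) tan(omega T / 2) = j lambda."""
    return 1j * (2 / T) * math.tan(omega * T / 2)
```

The image is purely imaginary for every point on the unit circle, confirming that the circle maps onto the jλ axis.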
This transformation maps the poles inside the unit circle in the z plane to the
inside of the left half of the s plane and vice versa. It also maps the frequencies
on the unit circle in the z plane to frequencies on the entire imaginary axis of
the s plane, where s = σ + j λ. So this transformation satisfies both conditions
that we required for the mapping s = f (z) mentioned in the previous section or
its inverse relationship z = b(s). This mapping is shown in Figure 4.17 and may
be compared with the mapping shown in Figure 4.16.
To understand the mapping in some more detail, let us consider the frequency
response of an IIR filter over the interval (0, ωs/2), where ωs/2 = π/T is the Nyquist frequency. As an example, we choose the frequency response H(e^{jωT}) = H(e^{jθ}) of a Butterworth bandpass digital filter as shown in Figure 4.18a.
In Figure 4.18, we have also shown the curve depicting the relationship between
ωT and λ = 2fs tan (ωT /2). The value of λ corresponding to any value of ωT = θ
can be calculated from λ = 2fs tan (θ/2) as illustrated by mapping a few frequen-
cies such as ω1 T , ω2 T in Figure 4.18. The magnitude of the frequency response
of the digital filter at any normalized frequency ωk T is the magnitude of H (s) at
the corresponding frequency s = j λk , where λk = 2fs tan (ωk T /2).
The plot in Figure 4.17 shows that the magnitude response of the digital filter
over the Nyquist interval (0, π) maps over the entire range (0, ∞) of λ. So there
is a nonlinear mapping whereby the frequencies in the ω domain are warped
[Figure 4.17: the bilinear transformation maps the unit circle of the z plane onto the entire jλ axis and the inside of the unit circle onto the left half of the s plane.]
Figure 4.18 Mapping of the digital filter response under bilinear transformation and
analog BP ⇒ LP transformation.
when mapped to the λ domain. Similarly, the frequencies in the interval (0, −π)
are mapped to the entire interval (0, −∞) of λ. From the periodic nature of
the function tan(.), we also see that the periodic replicates of the digital filter
frequency response in the ω domain map to the same frequency response in the
λ domain and the transfer function H (s) obtained under the bilinear transform
behaves like that of an analog filter. But it is to be pointed out that we use only
Example 4.9
The specified magnitude response of a maximally flat bandpass digital filter has a
maximum value of 1.0 in its passband, which lies between the cutoff frequencies
θ1 = 0.4π and θ2 = 0.5π. The magnitude at these cutoff frequencies is specified
to be no less than 0.93, and at the frequency θ3 = 0.7π in the stopband, the
magnitude is specified to be no more than 0.004. Design the IIR digital filter that
approximates these specifications, using the bilinear transformation.
It is obvious from these specifications that the frequencies are normalized
frequencies. So θ1 = 0.4π and θ2 = 0.5π are the normalized cutoff frequencies
and θ3 = 0.7π is the frequency in the stopband. The specified magnitude response
is plotted in Figure 4.19a. The two cutoff frequencies ω1, ω2 and the stopband frequency ω3 map to λ1, λ2, and λ3, respectively. In this example, we have chosen to scale the frequencies in the s plane by fs; thus, the values for λ1, λ2, and λ3 given below are obtained by the bilinear transform s = 2[(z − 1)/(z + 1)]:
The frequency response of the “analog” filter H (s) is plotted in Figure 4.19b.
[Figure 4.19: (a) specified magnitude |H(jθ)| of the bandpass digital filter, with levels 1.00, 0.93, and 0.004 at θ1, θ2, θ3; (b) magnitude |H(jλ)| of the "analog" filter; (c) the warping curve λ = 2fs tan(θ/2) relating the two frequency axes.]
Figure 4.20 Magnitude response of the analog prototype lowpass filter in Example 4.8.
H(p) = 2.5317 / (p^4 + 3.296p^3 + 5.4325p^2 + 5.24475p + 2.5317)   (4.90)

H(s) = 0.2267 s^4 / D(s)
[Figure: magnitude in dB of the bandpass digital filter designed in Example 4.9, versus normalized frequency from 0.2 to 0.7.]
Example 4.10
We choose the same specifications as in Example 4.9 and illustrate the procedure
to design the IIR filter using the digital spectral transformation from Table 4.3.
Let us choose the passband of the lowpass prototype digital filter to be θp =
0.5π. The values for the cutoff frequencies specified for the bandpass filter are
θl = 0.4π, θu = 0.5π. So we calculate

α = cos((θu + θl)/2) / cos((θu − θl)/2) = cos(0.45π)/cos(0.05π) = 0.158

K = cot((θu − θl)/2) tan(θp/2) = cot(0.05π) tan(0.25π) = 6.314

so that the LP–BP spectral transformation becomes

z^{−1} → −(z^{−2} − 0.273z^{−1} + 0.727) / (0.727z^{−2} − 0.273z^{−1} + 1)
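The α and K computations above can be checked directly (an illustrative sketch of the arithmetic only):

```python
import math

theta_l, theta_u = 0.4 * math.pi, 0.5 * math.pi   # bandpass cutoff frequencies
theta_p = 0.5 * math.pi                           # prototype passband edge

alpha = math.cos((theta_u + theta_l) / 2) / math.cos((theta_u - theta_l) / 2)
K = math.tan(theta_p / 2) / math.tan((theta_u - theta_l) / 2)  # cot(.)tan(.)

# coefficients appearing in the LP-BP spectral transformation
c1 = 2 * alpha * K / (K + 1)     # ~0.273
c2 = (K - 1) / (K + 1)           # ~0.727
```

These two coefficients are exactly the 0.273 and 0.727 in the transformation quoted above.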
Now we have to find the frequency θs in the lowpass digital "prototype" filter to which the prescribed stopband frequency θ′s = 0.7π of the bandpass filter maps, by substituting z = e^{j0.7π} in the digital spectral transformation given above. The value is found to be θs = 2.8 rad = 0.8913π rad. Therefore the specification for the lowpass prototype digital filter to be designed is given as shown in Figure 4.22b.
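The mapping of θ′s = 0.7π to θs ≈ 2.8 rad can be verified by evaluating the spectral transformation on the unit circle (an illustrative sketch; the constants are recomputed from the specification and `prototype_frequency` is a name chosen here):

```python
import cmath
import math

theta_l, theta_u, theta_p = 0.4 * math.pi, 0.5 * math.pi, 0.5 * math.pi
alpha = math.cos((theta_u + theta_l) / 2) / math.cos((theta_u - theta_l) / 2)
K = math.tan(theta_p / 2) / math.tan((theta_u - theta_l) / 2)

def prototype_frequency(theta_bp):
    """Frequency of the lowpass prototype to which the bandpass
    frequency theta_bp maps under the LP-BP spectral transformation."""
    zi = cmath.exp(-1j * theta_bp)            # z^{-1} on the unit circle
    num = zi ** 2 - (2 * alpha * K / (K + 1)) * zi + (K - 1) / (K + 1)
    den = ((K - 1) / (K + 1)) * zi ** 2 - (2 * alpha * K / (K + 1)) * zi + 1
    return -cmath.phase(-num / den)           # angle of the prototype z
```

As a cross-check, the upper cutoff θu = 0.5π maps back to the prototype passband edge θp = 0.5π, as the transformation requires.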
Using the mapping of λ = 2 tan(θ/2) versus θ, we map this lowpass frequency response to the lowpass filter response |H(jλ)| as shown in Figure 4.22c. We calculate λp = 2 tan(π/4) = 1.998 and λs = 2 tan(2.8/2) = 11.6 as the edge of the passband and the edge of the stopband of this filter, respectively. So we scale its frequency by 1.998 to get the frequency response of the lowpass prototype filter H(jΩ) in order to get a normalized bandwidth Ωp = 1. The stopband frequency Ωs is scaled down to 5.8, which is slightly different from the value obtained in
TABLE 4.3 Digital Spectral Transformations

LP–LP:  z^{−1} → (z^{−1} − a)/(1 − a z^{−1})
        a = sin((θp − θ′p)/2) / sin((θp + θ′p)/2)
        θp = passband of prototype filter; θ′p = passband of new LP filter

LP–HP:  z^{−1} → −(z^{−1} + a)/(1 + a z^{−1})
        a = cos((θp + θ′p)/2) / cos((θp − θ′p)/2)
        θ′p = cutoff frequency of the HP filter

LP–BP:  z^{−1} → −[z^{−2} − (2αK/(K + 1)) z^{−1} + (K − 1)/(K + 1)] / [((K − 1)/(K + 1)) z^{−2} − (2αK/(K + 1)) z^{−1} + 1]
        α = cos((θu + θl)/2) / cos((θu − θl)/2)
        K = cot((θu − θl)/2) tan(θp/2)
        θl, θu = lower and upper cutoff frequencies of the BP filter

LP–BS:  z^{−1} → [z^{−2} − (2α/(K + 1)) z^{−1} + (1 − K)/(1 + K)] / [((1 − K)/(1 + K)) z^{−2} − (2α/(K + 1)) z^{−1} + 1]
        α = cos((θu + θl)/2) / cos((θu − θl)/2)
        K = tan((θu − θl)/2) tan(θp/2)
        θl, θu = lower and upper cutoff frequencies of the BS filter
DIGITAL SPECTRAL TRANSFORMATION 229
[Figure 4.22: (a) specified bandpass response with cutoff frequencies θl, θu and stopband edge θ′s; (b) lowpass prototype digital filter response with edges θp, θs; (c) lowpass analog filter response with edges λp, λs; (d) normalized prototype response with edges Ωp, Ωs. Magnitude levels 1.00, 0.93, and 0.004 throughout.]
Example 4.8 because of numerical inaccuracies. But the order of the lowpass prototype analog filter is required to be the same, and hence the transfer function is the same as in Example 4.8. The transfer function is repeated below; note that we use H(p) to denote the lowpass filter in this example:
H(p) = 2.5317 / (p^4 + 3.2962p^3 + 5.4325p^2 + 5.2447p + 2.5317)   (4.93)
Next we restore the frequency scale by substituting p = s/1.998 in H (p) to get
the transfer function H (s) as
H(s) = 40.5072 / (s^4 + 6.5924s^3 + 21.73s^2 + 41.9576s + 40.5072)   (4.94)
and then apply the bilinear transformation s = 2[(z − 1)/(z + 1)] on this H (s)
to get the transfer function of the lowpass prototype digital filter H (z) as
Hap(z^{−1}) = z^{−N} D(z) / D(z^{−1})   (4.100)
When the allpass filter has all its poles inside the unit circle in the z plane, it is a stable function, and its zeros are outside the unit circle as a result of the mirror image symmetry. Therefore a stable allpass filter function is a non–minimum-phase function.
From (4.99), it is easy to see that the magnitude response of Hap(e^{jω}) is equal to one at all frequencies and is independent of all the coefficients:

|Hap(e^{jω})| = |1 + a(1)e^{jω} + a(2)e^{j2ω} + · · · + a(N)e^{jNω}| / |1 + a(1)e^{−jω} + a(2)e^{−j2ω} + · · · + a(N)e^{−jNω}| = 1   (4.101)
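The unity-magnitude property (4.101) holds for any coefficient set, which a short sketch confirms (illustrative; the coefficients used in the test are arbitrary):

```python
import cmath

def allpass_magnitude(a, omega):
    """|H_ap(e^{j omega})| for H_ap(z^{-1}) = z^{-N} D(z)/D(z^{-1}),
    where D(z^{-1}) = 1 + a[0] z^{-1} + ... + a[N-1] z^{-N}."""
    z = cmath.exp(1j * omega)
    N = len(a)
    num = z ** (-N) * (1 + sum(a[k] * z ** (k + 1) for k in range(N)))
    den = 1 + sum(a[k] * z ** (-(k + 1)) for k in range(N))
    return abs(num / den)
```

The result is 1 because, for real coefficients, the numerator and denominator polynomials evaluated on the unit circle are complex conjugates of each other.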
But the phase response (and the group delay) is dependent on the coefficients of
the allpass filter. We know that the phase response—as defined by (4.6)—of an
IIR filter designed to approximate a specified magnitude response is a nonlinear
function of ω and therefore its group delay defined by (4.8) is far from a constant
value. When an allpass filter is cascaded with such a filter, the resulting filter has a frequency response H1(e^{jω})Hap(e^{jω}) = |H1(e^{jω})||Hap(e^{jω})| e^{j[θ(ω)+φ(ω)]} = |H1(e^{jω})| e^{j[θ(ω)+φ(ω)]}. So the magnitude response does not change when the IIR
filter is cascaded with an allpass filter, but its phase response θ (ω) changes by the
addition of the phase response φ(ω) contributed by the allpass filter. The allpass
filters Hap (z) are therefore very useful for modifying the phase response (and
the group delay) of filters without changing the magnitude of a given IIR filter
H1 (z), when they are cascaded with H1 (z). However, the method used to find the
coefficients of the allpass filter Hap(z) such that the group delay of H1(z)Hap(z)
is a very close approximation to a constant in the passband of the filter H1 (z)
poses a highly nonlinear problem, and only computer-aided optimization has
been utilized to solve this problem. When the allpass filters have been designed
to compensate for the group delay of the IIR filters that have been designed to
approximate a specified magnitude only, such that the cascade connection of the
two filters has a group delay that approximates a constant value, the allpass filters
are known as delay equalizers.
The design of IIR digital filters with Butterworth, Chebyshev I, Chebyshev II, and elliptic filter responses, using MATLAB functions, is based on the theories of bilinear transformation and analog filters. So they are commonly used to approximate the piecewise constant magnitude characteristic of ideal LP, HP, BP, and BS filters. The MATLAB function yulewalk is used to design IIR filters that approximate an arbitrary magnitude response.
The four functions to estimate the order of the Butterworth, Chebyshev I, Cheby-
shev II, and elliptic filters are given respectively as
1. [N,Wn] = buttord(Wp,Ws,Rp,Rs)
2. [N,Wn] = cheb1ord(Wp,Ws,Rp,Rs)
3. [N,Wn] = cheb2ord(Wp,Ws,Rp,Rs)
4. [N,Wn] = ellipord(Wp,Ws,Rp,Rs)
where N is the order of the LP and HP filters (2N is the order of the BP and BS
filters) and Wn is the frequency scaling factor. These two variables are then used
in the four MATLAB functions to get the vectors b = [b(1) b(2) b(3) . . .
b(N+1)] and a = [a(1) a(2) a(3) . . . a(N+1)], for the coefficients of the
numerator and denominator of H(z^{−1}) in descending powers of z. The constant coefficient a(1) is equal to unity. The following four MATLAB functions are used to design the digital filters; they are described below, after we have obtained the order N of the IIR filter:
1. [b,a] = butter(N,Wn)
2. [b,a] = cheby1(N,Rp,Wn)
3. [b,a] = cheby2(N,Rs,Wn)
4. [b,a] = ellip(N,Rp,Rs,Wn)
After we have obtained the coefficients of the transfer function, we use the
function freqz(b,a,N0) to get the magnitude response, phase response, and
group delay response, which can then be plotted. N0 is the number of dis-
crete frequencies in the interval [0 π] which is chosen by the user. For the
design of a high pass filter and a bandstop filter, we have to include a string
’high’ and ’stop’ as the last argument in the filter functions, for example,
[b,a] = butter(N,Wn,’high’) for designing a Butterworth highpass filter
and [b,a]=cheby2(N,Rs,Wn,’stop’) for designing a Chebyshev II stopband
filter. In these functions, the values of N and Wn are those obtained in the first step,
as the output variables from the functions for estimating the order of the filter.
We illustrate the use of these MATLAB functions by a few examples.
Example 4.11
[Figure: magnitude response of the filter designed in Example 4.11, plotted on a linear scale and in dB, versus normalized frequency from 0 to 1.]
Example 4.12
[Figure: magnitude response plotted on a linear scale versus normalized frequency from 0 to 1.]
[b,a]=cheby1(N,0.5,Wn)
[h,w]=freqz(b,a,256);
H=abs(h);
HdB=20*log10(H);
plot(w/pi,H);grid
title(’Magnitude response of a Chebyshev I Bandpass filter’)
ylabel(’Magnitude’)
xlabel(’Normalized frequency’)
%end
The order of this filter is found to be 12, and its magnitude response is shown
in Figure 4.25.
Example 4.13
[b,a]=butter(N,Wn,’stop’);
[h,w]=freqz(b,a,256);
H=abs(h);
plot(w/pi,H);grid
title(’Magnitude response of a Butterworth Bandstop filter’)
ylabel(’Magnitude’)
xlabel(’Normalized frequency’)
%end
The order of this filter is 8, and its magnitude response, shown in Figure 4.26,
acts like a notch filter. It can be used to filter out a single frequency at which
the attenuation is more than 65 dB. Since this frequency is ω = 0.2, it is 20% of the Nyquist frequency or 10% of the sampling frequency. So if the sampling frequency is chosen as 600 Hz, we can use this filter to filter out the undesirable hum at 60 Hz due to the power supply in audio equipment.
The coefficients of the digital filter are copied below from the output of the
MATLAB script shown above:
[Figure 4.26: magnitude response of the Butterworth bandstop filter, linear scale, versus normalized frequency from 0 to 1.]
Example 4.14
This filter is of order 11; its magnitude response is shown in Figures 4.27 and 4.28.
[Figure 4.27: magnitude response on a linear scale versus normalized frequency. Figure 4.28: the same response in dB.]
Now we introduce another function called yulewalk to find an IIR filter that
approximates an arbitrary magnitude response. The method minimizes the error
between the desired magnitude represented by a vector D and the magnitude of
the IIR filter H (ej ω ) in the least-squares sense.
In addition to the maximally flat approximation and the minimax (Chebyshev
or equiripple) approximation we have discussed so far, there is the least- squares
approximation, which is used extensively in the design of filters as well as other
systems. The error that is minimized in a more general case is known as the
least-pth approximation. It is defined by
J3(ω) = ∫_{ω∈R} W(e^{jω}) |H(e^{jω}) − D(e^{jω})|^p dω
[num,den] = yulewalk(N,F,D)
YULE–WALKER APPROXIMATION 239
where F is a vector of discrete frequencies in the range between 0 and 1.0, where
1.0 represents the Nyquist frequency; the vector F must include 0 and 1.0. The
vector D contains the desired magnitudes at the frequencies in the vector F; hence
the two vectors have the same length. N is the order of the filter. The coefficients
of the numerator and denominator are output data in the vectors num and den as
shown in (4.102).
Example 4.15
The magnitude of the IIR filter of order 10 obtained in this example is shown
in Figure 4.29. We can increase or decrease the order of the filter and choose the
design that satisfies the requirements for the application under consideration.
[Figure 4.29: magnitude of the IIR filter of order 10 designed in Example 4.15 versus normalized frequency; the + marks show the desired magnitudes in the vector D.]
4.11 SUMMARY
In this chapter, three major topics have been discussed. First, the theory and
design procedure for approximating the piecewise constant magnitude of ideal
analog filters was discussed, followed by the theory and design procedure for
the design of the IIR filters. These are lowpass, highpass, bandpass, or bandstop
filters that approximate the desired piecewise constant magnitudes in either the
maximally flat sense or the equiripple sense. It is to be pointed out that the
constant group delay of analog filters does not transform to a constant group delay
of the IIR filter obtained by the bilinear transformation. Separate procedures for
designing IIR filters that approximate a constant group delay have been described
in [10].
Next we described the MATLAB functions that are used for designing these
IIR filters as well as elliptic function filters. Finally, we described the use of the
MATLAB function yulewalk that approximates an arbitrary magnitude response
in the least-squares sense. Design of IIR filters that approximate given frequency specifications with additional approximation criteria is described in Chapter 7.
PROBLEMS
4.1 Find the function |H(jω)|² from the transfer functions given below:

H1(s) = (s + 3)/(s² + 2s + 2)

H2(s) = (s² + s + 1)/[s(s² + 4s + 20)]
4.2 Find the transfer function H(s) from the functions given below:

|H1(jω)|² = (ω² + 9)/[(ω² + 4)(ω² + 1)]   (4.103)

|H2(jω)|² = (ω² + 4)/[(ω² + 16)(ω⁴ + 1)]   (4.104)
4.3 An analog signal x(t) = e−2t u(t) is sampled to generate the discrete-
time sequence x(nT ) = e−2nT u(n). Find the z transform X(z) of the DT
sequence for T = 0.1, 0.05, 0.01 s.
4.4 An analog signal x(t) = 10 cos(2t)u(t) is sampled to generate the discrete-
time sequence x(nT ) = 10 cos(2nT )u(n). Find the z transform X(z) of
the DT sequence for T = 0.1, 0.01 s.
4.5 Derive transfer function H1 (z) obtained when the impulse-invariant trans-
formation is applied and H2 (z) when the bilinear transformation s =
[Figure: lowpass magnitude specification |H(jω)| in dB, from 0 to −0.4 dB up to 1000 rad/s, with stopband beyond 3000 rad/s.]
[Figure 4.31 Bandpass magnitude specification in dB: passband ripple −1.5 dB, stopband level −55 dB, at the frequencies 4 × 10⁴, 10⁵, and 4 × 10⁵ rad/s.]
4.11 What is the order of an analog bandpass Chebyshev I filter that has a
magnitude response as shown in Figure 4.31?
4.12 Determine the sampling period T such that a frequency s = j 15 of the
analog filter maps to the normalized frequency ω = 0.3π of the digital
filter.
4.13 A digital Butterworth lowpass filter is designed by applying the bilinear
transformation on the transfer function of an analog Butterworth low-
pass filter that has an attenuation of 45 dB at 1200 rad/s. What is the
[Figure: magnitude specifications |H(e^{jω})| with passband levels 1.00–0.96 and 1.00–0.98, stopband levels 0.004 and 0.045, and band edges ωs1, 3000, 5000, ωs2.]
[Figure: magnitude specification |H(e^{j2πf})| with passband level 1.00–0.95, stopband level 0.002, and band edges at 200, 300, 400, and 1000 Hz.]
4.20 Design a Chebyshev analog highpass filter that approximates the specifi-
cations as shown in Figure 4.35.
4.21 Design a Butterworth bandpass IIR filter that approximates the specifica-
tions given in Figure 4.36. Show all calculations step by step. Plot the
magnitude using MATLAB.
4.22 A Butterworth bandpass IIR filter of order 10 meets the following spec-
ifications: ωp1 = 0.5π, ωp2 = 0.65π, ωs2 = 0.8π, Ap = 0.5 dB. What is
the attenuation at ωs2 ?
4.23 A Chebyshev I bandstop digital filter satisfies the following spec-
ifications: ωp1 = 0.1π, ωp2 = 0.8π, ωs2 = 0.4π, αp = 0.8, αs = 55. Find
the transfer function H (p) of the lowpass analog prototype filter.
[Figure 4.35 Analog highpass specification |H(jω)|: passband level 1.00–0.93, stopband level 0.004, with band edges at 300 and 800 rad/s.]
[Figure 4.36 Digital filter specification |H(e^{jωT})| with levels 1.00, 0.93, 0.004 and 1.00, 0.92, 0.003.]
[Figures 4.38 and 4.39 Two allpass filter circuits; the first has multiplier coefficients a1 and a2, the second has multiplier coefficients −1.0, 0.5, and 0.06.]
4.27 Derive the transfer function of the two circuits shown in Figures 4.38 and
4.39 and verify that they are allpass filters.
4.28 The transfer function of an analog allpass filter H(s) = (s² − as + b)/(s² + as + b) has a magnitude response equal to one at all frequencies.
Show that the IIR filter obtained by the application of the bilinear trans-
formation on H (s) is also an allpass digital filter.
MATLAB Problems
4.29 Design a Butterworth bandstop filter with Wp1 = 0.2, Ws1 = 0.35, Ws2 =
0.55, Wp2 = 0.7, Rp = 0.25, and Rs = 45. Plot the magnitude and the
group delay response.
4.30 Design a Chebyshev I bandpass filter to meet the following specifications:
Ws1 = 0.4, Wp1 = 0.45, Wp2 = 0.55, Ws2 = 0.6, Rp = 0.3, Rs = 50. Plot
the magnitude (in decibels) and the group delay to verify that the given
specifications have been met.
4.31 Design a Chebyshev II highpass filter with Ws = 0.1, Wp = 0.3, Rp = 0.8,
Rs = 60 dB. Plot the magnitude (in decibels) and the group delay of the
filter to verify that the design meets the specifications.
4.32 Design an elliptic lowpass filter with Wp = 0.2, Ws = 0.35, Rp = 0.8,
Rs = 40. Plot the magnitude (in decibels) and the group delay of the
filter.
4.33 Design an elliptic lowpass filter with Wp = 0.3, Ws = 0.4, Rp = 0.5,
Rs = 55. Plot the magnitude (in decibels) and the group delay of the
filter. Plot a magnified plot of the response in the stopband to verify that
the specifications have been met.
4.34 Design a Butterworth bandpass filter with Ws1 = 0.3, Wp1 = 0.5, Wp2 =
0.55, Ws2 = 0.8, Rp = 0.5, and Rs = 50. Plot its magnitude and phase
response.
4.35 Design an IIR filter with the following specifications: F = [0 0.2 0.4
0.5 1.0], D = [1.0 0.5 0.7 0.9 1.0], using the yulewalk function.
Plot the magnitude of the filter.
4.36 Design an IIR filter with the following specifications, using the MATLAB
function yulewalk: F = [0.0 0.3 0.5 0.7 0.9 1.0]; D = [0.2
0.4 0.5 0.3 0.6 1.0]. Plot the magnitude of the filter.
4.37 Design an IIR filter that approximates the magnitude response with the
specifications F = [0.0 0.2 0.4 0.6 0.8 1.0]; D = [1.0 0.18
0.35 0.35 0.18 1.0] using the MATLAB function yulewalk. Plot
the magnitude and group delay response of the filter.
REFERENCES

5 FINITE IMPULSE RESPONSE FILTERS

5.1 INTRODUCTION
From the previous two chapters, we have become familiar with the magnitude
response of ideal lowpass, highpass, bandpass, and bandstop filters, which was
approximated by IIR filters. In the previous chapter, we also discussed the theory
and a few prominent procedures for designing the IIR filters.
The general form of the difference equation for a linear, time-invariant,
discrete-time system (LTIDT system) is
y(n) = −Σ_{k=1}^{N} a(k)y(n − k) + Σ_{k=0}^{M} b(k)x(n − k)   (5.1)

When all of the coefficients a(k) are zero, this reduces to the FIR filter

y(n) = Σ_{k=0}^{M} b(k)x(n − k)   (5.4)
     = b(0)x(n) + b(1)x(n − 1) + · · · + b(M)x(n − M)   (5.5)
In this chapter, the properties of the FIR filters and their design will be discussed. When the input function x(n) is the unit sample function δ(n), the output is the unit impulse response h(n), and the transfer function of the FIR filter is

H(z^{−1}) = Σ_{k=0}^{M} h(k)z^{−k} = h(0) + h(1)z^{−1} + h(2)z^{−2} + · · · + h(M)z^{−M}   (5.6)
The FIR filters have a few advantages over the IIR filters as defined by (5.1):
1. We can easily design the FIR filter to meet the required magnitude response
in such a way that it achieves a constant group delay. Group delay is defined
as τ = −(dθ/dω), where θ is the phase response of the filter. The phase
response of a filter with a constant group delay is therefore a linear function
of frequency. It transmits all frequencies with the same amount of delay,
which means that there will not be any phase distortion and the input signal
will be delayed by a constant when it is transmitted to the output. A filter
with a constant group delay is highly desirable in the transmission of digital
signals.
2. The samples of its unit impulse response are the same as the coefficients
of the transfer function as seen from (5.5) and (5.6). There is no need to
calculate h(n) from H (z−1 ), such as during every stage of the iterative opti-
mization procedure or for designing the structures (circuits) from H (z−1 ).
3. The FIR filters are always stable and are free from limit cycles that arise
as a result of finite wordlength representation of multiplier constants and
signal values.
4. The effect of finite wordlength on the specified frequency response or the
time-domain response or the output noise is smaller than that for IIR filters.
5. Although the unit impulse response h(n) of an IIR filter is an infinitely
long sequence, it is reasonable to assume in most practical cases that the
value of the samples becomes almost negligible after a finite number; thus,
choosing a sequence of finite length for the discrete-time signal allows us
to use powerful numerical methods for processing signals of finite length.
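Advantage 2 is easy to check numerically: filtering a unit sample through the difference equation (5.4) returns the coefficients b(k) themselves. A minimal Python sketch (the coefficient values here are arbitrary):

```python
def fir_filter(b, x):
    """Direct-form FIR filter: y(n) = sum_k b[k] x(n - k), as in Eq. (5.4)."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, bk in enumerate(b):
            if 0 <= n - k:
                acc += bk * x[n - k]
        y.append(acc)
    return y

b = [0.2, 0.5, 0.2]                 # coefficients of the transfer function
delta = [1.0, 0.0, 0.0, 0.0, 0.0]   # unit sample function delta(n)
h = fir_filter(b, delta)            # unit impulse response
# h begins with exactly the coefficients 0.2, 0.5, 0.2 and is zero afterward
```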
5.1.1 Notations

It is to be remembered that in this chapter we choose the order of the FIR filter, that is, the degree of the polynomial H(z^{−1}) = Σ_{n=0}^{N} h(n)z^{−n}, as N, and the length of the coefficient sequence h(n) as N + 1. Since the indices of a vector in MATLAB start at 1, the transfer function is expressed there as

H(z^{−1}) = Σ_{n=0}^{N} h(n + 1)z^{−n}   (5.7)
The notation and meaning of angular frequency used in the literature on discrete-
time systems and digital signal processing also have to be clearly understood by
the students. One is familiar with a sinusoidal signal x(t) = A sin(wt) in which
w = 2πf is the angular frequency in radians per second, f is the frequency in
hertz, and its reciprocal is the period Tp in seconds. So we have w = 2π/Tp
radians per second. Now if we sample this signal with a uniform sampling
period, we need to differentiate the period Tp from the sampling period denoted
by Ts. Therefore, the sampled sequence is given by x(nTs) = A sin(wnTs) =
A sin(2πnTs/Tp) = A sin(2πf n/fs) = A sin(wn/fs). The frequency w (in radians
per second) normalized by fs is almost always denoted by ω and is called the
normalized frequency (measured in radians). The frequency w is the analog fre-
quency variable, and the frequency ω is the normalized digital frequency. On
this basis, the sampling frequency ωs = 2π radians. Sometimes, w is normalized
by πfs or 2πfs so that the corresponding sampling frequency becomes 2 or 1
radian(s). Note that almost always, the sampling period is denoted simply by T
in the literature on digital signal processing when there is no ambiguity and the
normalized frequency is denoted by ω = wT . The difference between the angular
frequency in radians per second and the normalized frequency usually used in
DSP literature has been pointed out in several instances in this book.
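The relation ω = wT between the analog frequency w (radians per second) and the normalized digital frequency ω (radians) can be checked with a few lines of Python; the 50 Hz tone and 400 Hz sampling rate below are arbitrary choices for illustration:

```python
import math

f = 50.0                  # analog frequency in hertz
fs = 400.0                # sampling frequency in hertz
w = 2 * math.pi * f       # analog angular frequency in radians per second
T = 1.0 / fs              # sampling period in seconds
omega = w * T             # normalized digital frequency in radians

# Sampling x(t) = sin(wt) at t = nT gives the same sequence as sin(omega * n)
x1 = [math.sin(w * n * T) for n in range(8)]
x2 = [math.sin(omega * n) for n in range(8)]
```

Here omega = 2π(50)/400 = π/4 radian, and the sampling frequency maps to ωs = 2π radians, as stated above.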
5.2 LINEAR PHASE FIR FILTERS

Now we consider the special types of FIR filters in which the coefficients h(n) of the transfer function H(z^{−1}) = Σ_{n=0}^{N} h(n)z^{−n} are assumed to be symmetric or antisymmetric. Since the order of the polynomial in each of these two types
can be either odd or even, we have four types of filters with different properties,
which we describe below.
Type I. The coefficients are symmetric [i.e., h(n) = h(N − n)], and the order
N is even.
Example 5.1
As shown in Figure 5.1a, for this type I filter, with N = 6, we see that h(0) =
h(6), h(1) = h(5), h(2) = h(4). Using these equivalences in the above, we get
[Figure 5.1 Unit impulse responses of the four types of linear phase FIR filters: (a) type I, N = 6; (b) type II, N = 7; (c) type III, N = 6; (d) type IV, N = 7. Parts (a) and (b) mark the center of symmetry, and parts (c) and (d) the center of antisymmetry.]
LINEAR PHASE FIR FILTERS 253
H(e^{jω}) = Σ_{n=0}^{N} h(n)e^{−jnω}
Type II. The coefficients are symmetric [i.e., h(n) = h(N − n)], and the order
N is odd.
Example 5.2
Therefore, the phase angle is θ(ω) = −3.5ω, and the group delay is τ = 3.5 samples.
In the general case of type II filter, we obtain
H(e^{jω}) = Σ_{n=0}^{N} h(n)e^{−jnω} = e^{jθ(ω)} {HR(ω)}

= e^{−j(N/2)ω} { Σ_{n=1}^{(N+1)/2} 2h((N + 1)/2 − n) cos((n − 1/2)ω) }   (5.12)
which shows a linear phase θ (ω) = −[(N/2)ω] and a constant group delay =
N/2 samples.
Type III. The coefficients are antisymmetric [i.e., h(n) = −h(N − n)], and the
order N is even.
Example 5.3
Note that the phase angle for this filter is θ (ω) = −3ω + π/2, which is still a
linear function of ω. The group delay is τ = 3 samples for this filter.
In the general case, it can be shown that
H(e^{jω}) = e^{−j[(Nω−π)/2]} { Σ_{n=1}^{N/2} 2h(N/2 − n) sin(nω) }   (5.18)
and it has a linear phase θ (ω) = −[(N ω − π)/2] and a group delay τ = N/2
samples.
Type IV. The coefficients are antisymmetric [i.e., h(n) = −h(N − n)], and the
order N is odd.
Example 5.4
H(e^{jω}) = e^{−j3.5ω} {h(0)[e^{j3.5ω} − e^{−j3.5ω}] + h(1)[e^{j2.5ω} − e^{−j2.5ω}]
+ h(2)[e^{j1.5ω} − e^{−j1.5ω}] + h(3)[e^{j0.5ω} − e^{−j0.5ω}]}
= e^{−j3.5ω} {h(0)2j sin(3.5ω) + h(1)2j sin(2.5ω) + h(2)2j sin(1.5ω)
+ h(3)2j sin(0.5ω)}
= e^{−j[3.5ω−(π/2)]} {2h(0) sin(3.5ω) + 2h(1) sin(2.5ω) + 2h(2) sin(1.5ω)
+ 2h(3) sin(0.5ω)}   (5.20)
This type IV filter with N = 7 has a linear phase θ (ω) = −3.5ω + π/2 and a
constant group delay τ = 3.5 samples.
The transfer function of the type IV linear phase filter in general is given by
H(e^{jω}) = e^{−j[(Nω−π)/2]} { Σ_{n=1}^{(N+1)/2} 2h((N + 1)/2 − n) sin((n − 1/2)ω) }   (5.21)
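These linear phase properties are easy to verify numerically: for a symmetric h(n), the product e^{j(N/2)ω}H(e^{jω}) is purely real, and for an antisymmetric h(n) it is purely imaginary (the extra π/2 in the phase). A Python check, using arbitrary coefficient values of order N = 4:

```python
import cmath

def H(h, w):
    """Frequency response H(e^{jw}) = sum_n h(n) e^{-jnw}."""
    return sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))

h_sym = [1.0, 2.0, 3.0, 2.0, 1.0]     # type I: h(n) = h(N - n), N = 4
h_anti = [1.0, 2.0, 0.0, -2.0, -1.0]  # type III: h(n) = -h(N - n), N = 4
N = 4

for w in [0.3, 1.1, 2.5]:
    rotated = H(h_sym, w) * cmath.exp(1j * w * N / 2)
    assert abs(rotated.imag) < 1e-9    # purely real -> phase is -(N/2)w
    rotated = H(h_anti, w) * cmath.exp(1j * w * N / 2)
    assert abs(rotated.real) < 1e-9    # purely imaginary -> extra pi/2 in phase
```

In both cases the group delay is the constant N/2 = 2 samples.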
The frequency responses of the four types of FIR filters are summarized below:
H(e^{jω}) = e^{−j(N/2)ω} { h(N/2) + Σ_{n=1}^{N/2} 2h(N/2 − n) cos(nω) }   for type I

H(e^{jω}) = e^{−j(N/2)ω} { Σ_{n=1}^{(N+1)/2} 2h((N + 1)/2 − n) cos((n − 1/2)ω) }   for type II

H(e^{jω}) = e^{−j[(Nω−π)/2]} { Σ_{n=1}^{N/2} 2h(N/2 − n) sin(nω) }   for type III

H(e^{jω}) = e^{−j[(Nω−π)/2]} { Σ_{n=1}^{(N+1)/2} 2h((N + 1)/2 − n) sin((n − 1/2)ω) }   for type IV
Figure 5.2 Magnitude responses of the four types of linear phase FIR filters.
explained later. For example, type I filters have a nonzero magnitude at ω = 0 and
also a nonzero value at the normalized frequency ω/π = 1 (which corresponds to
the Nyquist frequency), whereas type II filters have nonzero magnitude at ω = 0
but a zero value at the Nyquist frequency. So it is obvious that type II filters are
not suitable for designing highpass and bandstop filters, whereas both of them
are suitable for lowpass filters. The type III filters have zero magnitude at ω = 0
and also at ω/π = 1, so they are suitable for designing bandpass filters but not
lowpass and bandstop filters. Type IV filters have zero magnitude at ω = 0 and
a nonzero magnitude at ω/π = 1. They are not suitable for designing lowpass
and bandstop filters but are candidates for bandpass and highpass filters.
In Figure 5.3a, the phase response of a type I filter is plotted showing the
linear relationship. When the transfer function has a zero on the unit circle in
the z plane, its phase response displays a jump discontinuity of π radians at the
corresponding frequency, and the plot uses a jump discontinuity of 2π whenever
the phase response exceeds ±π so that the total phase response remains within
the principal range of ±π. If there are no jump discontinuities of π radians,
that is, if there are no zeros on the unit circle, the phase response becomes a
[Figure 5.3 (a), (b) Phase response plots; part (a) shows the linear phase response of a type I filter.]
Similarly, the FIR filters with antisymmetric coefficients satisfy the property
[Figure: zeros of a linear phase FIR filter plotted in the z plane relative to the unit circle.]
Example 5.5
We consider the example of a type I FIR filter with H (z−1 ) = 0.4 + 0.6z−1 +
0.8z−2 + 0.2z−3 + 0.8z−4 + 0.6z−5 + 0.4z−6 , to illustrate these properties. When
it is expressed in the form H (z) = z−6 [0.4 + 0.6z + 0.8z2 + 0.2z3 + 0.8z4 +
0.6z5 + 0.4z6 ] and factorized, we get
These properties confirm the properties of the magnitude response of the filters
as illustrated by Figure 5.2. A zero at z = 1 corresponds to ω = 0, and a zero
at z = −1 corresponds to ω = π. As an example, we note that the type III FIR
filter has zero magnitude at ω = 0 and ω = π, consistent with the statement above that the
transfer function of the type III FIR filter has an odd number of zeros both at
z = 1 and z = −1.
Another important result that will be used in the Fourier series method for
designing FIR filters is given below. This is true of all FIR as well as IIR
filters and not just linear phase FIR filters. The Fourier transform (DTFT) of any
discrete-time sequence x(n) is
X(e^{jω}) = Σ_{n=−∞}^{∞} x(n)e^{−jnω}   (5.28)
Since X(e^{jω}) is a periodic function with a period of 2π, it has a Fourier series
representation in the form

X(e^{jω}) = Σ_{n=−∞}^{∞} c(n)e^{−jnω}   (5.29)

where

c(n) = (1/2π) ∫_{−π}^{π} X(e^{jω})e^{jnω} dω   (5.30)
Comparing (5.28) and (5.29), we see that x(n) = c(n) for −∞ < n < ∞.
When we consider the frequency response of the LTI-DT system H(e^{jω}) =
Σ_{n=0}^{∞} h(n)e^{−jnω}, where h(n) = 0 for n < 0, we will find that c(n) = 0 for
n < 0. So we note that the Fourier series coefficients c(n) evaluated from (5.30)
are the same as the coefficients h(n) of the IIR or FIR filter. Evaluating the
coefficients c(n) = h(n) by the integral in the Equation (5.30) is easy when we
choose H (ej ω ) to be a constant in the subinterval within the interval of inte-
gration [−π, π] with zero phase or when H (ej ω ) is piecewise constant over
different disjoint passbands and stopbands, within [−π, π]. This result facili-
tates the design of FIR filters that approximate the magnitude response of ideal
lowpass, highpass, bandpass, and bandstop filters.1 The Fourier series method
based on the abovementioned properties of FIR filters for designing them is
discussed next.
5.3 FOURIER SERIES METHOD MODIFIED BY WINDOWS

The magnitude responses of four ideal classical types of digital filters are shown in
Figure 5.5. Let us consider the magnitude response of the ideal, desired, lowpass
digital filter to be HLP (ej ω ), in which the cutoff frequency is given as ωc . It has
a constant magnitude of one and zero phase over the frequency |ω| < ωc . From
(5.30), we get
cLP(n) = (1/2π) ∫_{−π}^{π} HLP(e^{jω})e^{jnω} dω = (1/2π) ∫_{−ωc}^{ωc} e^{jnω} dω

= (1/2π) [e^{jnω}/(jn)]_{−ωc}^{ωc} = (e^{jωc n} − e^{−jωc n})/(2j(πn))

= sin(ωc n)/(πn);   −∞ < n < ∞   (5.31)
¹Two other types of frequency response for which the Fourier series coefficients have been derived
are those for the Hilbert transformer and the differentiator. Students interested in them may refer to
other textbooks.
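The closed form (5.31) can be cross-checked against a direct numerical evaluation of the integral in (5.30); a small Python sketch, where the cutoff 0.3π and the step count are arbitrary choices:

```python
import math

def c_lp(n, wc):
    """Fourier series coefficients of the ideal lowpass response, Eq. (5.31)."""
    return wc / math.pi if n == 0 else math.sin(wc * n) / (math.pi * n)

def c_lp_numeric(n, wc, steps=20000):
    """Midpoint-rule evaluation of Eq. (5.30): the ideal response is 1 on
    [-wc, wc] and 0 elsewhere, and the odd imaginary part integrates to zero,
    leaving the integral of cos(n*w)."""
    dw = 2 * wc / steps
    total = sum(math.cos(n * (-wc + (k + 0.5) * dw)) for k in range(steps)) * dw
    return total / (2 * math.pi)

wc = 0.3 * math.pi
# c_lp(n, wc) and c_lp_numeric(n, wc) agree to high accuracy for every n
```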
Figure 5.5 Magnitude responses of four ideal filters. (Reprinted from Ref. 9, with per-
mission from John Wiley & Sons, Inc.)
Continuing with the design of the lowpass filter, we choose the finite series
HM(e^{jω}) = Σ_{n=−M}^{M} cLP(n)e^{−jnω}, which contains (2M + 1) coefficients from −M
to M, as an approximation to the infinite series Σ_{n=−∞}^{∞} cLP(n)e^{−jnω}. In other
words, we approximate the ideal frequency response that exactly matches the given
HLP(e^{jω}) containing the infinite number of coefficients by HM(e^{jω}), which con-
tains a finite number of coefficients. As M increases, the finite series of HM(e^{jω})
approximates the ideal response HLP(e^{jω}) in the least mean-squares sense; that is,
the error defined as

J(c, ω) = (1/2π) ∫_{−π}^{π} |HM(e^{jω}) − HLP(e^{jω})|² dω   (5.37)

= (1/2π) ∫_{−π}^{π} | Σ_{n=−M}^{M} [sin(ωc n)/(πn)] e^{−jnω} − HLP(e^{jω}) |² dω

decreases as M increases.
So we have the product hw (n) = c(n) · wR (n), which is of finite length as shown
in Figure 5.7(c). Therefore the frequency response of the product of these two
Figure 5.6 Frequency response of a lowpass filter, showing Gibbs overshoot. (Reprinted
from Ref. 9, with permission from John Wiley & Sons, Inc.)
[Figure 5.7 Coefficients of the FIR filter modified by a rectangular window function: (a) g(n), −M ≤ n ≤ M; (b) the rectangular window wR(n); (c) the product hw(n); (d) hw(n) delayed to the range 0 ≤ n ≤ 2M.]
functions is obtained from the convolution of Ψ(e^{jω}), the frequency response of the
rectangular window, with the frequency response HLP(e^{jω}) of the ideal, desired filter:

HM(e^{jω}) = (1/2π) ∫_{−π}^{π} HLP(e^{jφ}) Ψ(e^{j(ω−φ)}) dφ   (5.40)
Figure 5.8 Convolution of the frequency response of a rectangular window with an ideal
filter. (Reprinted from Ref. 9, with permission from John Wiley & Sons, Inc.)
Bartlett window:²

w(n) = 1 − |n|/(M + 1);   −M ≤ n ≤ M

Hann window:

w(n) = (1/2)[1 + cos(2πn/(2M + 1))];   −M ≤ n ≤ M

Hamming window:

w(n) = 0.54 + 0.46 cos(2πn/(2M + 1));   −M ≤ n ≤ M

Blackman window:

w(n) = 0.42 + 0.5 cos(2πn/(2M + 1)) + 0.08 cos(4πn/(2M + 1));   −M ≤ n ≤ M
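The window values are straightforward to tabulate; a Python sketch of the Hann and Hamming formulas above, with M = 5 chosen arbitrarily:

```python
import math

def hann(n, M):
    """Hann window of length 2M + 1, centered at n = 0."""
    return 0.5 * (1 + math.cos(2 * math.pi * n / (2 * M + 1)))

def hamming(n, M):
    """Hamming window of length 2M + 1, centered at n = 0."""
    return 0.54 + 0.46 * math.cos(2 * math.pi * n / (2 * M + 1))

M = 5
w_hamming = [hamming(n, M) for n in range(-M, M + 1)]
# Both windows are symmetric about n = 0 and peak there with value 1.0;
# the Hamming window does not taper all the way to zero at the ends.
```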
The frequency responses of the window functions listed above have different
mainlobe widths ΔωM and different peak magnitudes of their sidelobes. In the
plot of HM(e^{jω}) shown in Figure 5.9, it is seen that the difference between the
two frequencies at which the peak error in HM(e^{jω}) occurs is denoted as ΔωM.
When the frequency response of the window functions is convolved with the
frequency response of the desired lowpass filter, the transition bandwidth of the
filter is determined by the width of the mainlobe of the window chosen and hence
is different for filters modified by the different window functions. The relative
sidelobe level Asl is defined as the difference in decibels between the magni-
tudes of the mainlobe of the window function chosen and the largest sidelobe.
It determines the maximum attenuation As = −20 log10(δ) in the stopband of
the filter.
In Figure 5.9 we have also shown the transition bandwidth Δω and the center
frequency ωc = (ωp + ωs)/2, where ωp and ωs are respectively the cutoff fre-
quencies of the passband and the stopband. The value of the ripple δ does not
depend on the length (2M + 1) of the filter or the cutoff frequency ωc of the
²In many textbooks, the Bartlett window is also called a triangular window, but in MATLAB, the
Bartlett window is different from the triangular window.
Figure 5.9 Frequency response of an ideal filter and final design. (Reprinted from Ref. 9,
with permission from John Wiley & Sons, Inc.)
filter. The width of the mainlobe ΔωM, the transition bandwidth Δω, and the
relative sidelobe attenuation Asl for the few chosen window functions are listed in
Table 5.1. The last column lists the minimum attenuation As = −20 log10 δs real-
ized by the lowpass filters, using the corresponding window functions. It should
be pointed out that the numbers in Table 5.1 have been obtained by simulating
the performance of type I FIR filters with ωc = 0.4π and M = 128 [1], and they
would change if other types of filters and other values for ωc and M are chosen.
From Table 5.1, we see that as As increases, with a fixed value for M, the transition
bandwidth Δω also increases. Since we would like to have a large value for As and a
small value for Δω, we have to make a tradeoff between them. The choice of the
window function and the value for M are the only two freedoms that we have for
controlling the transition bandwidth Δω, but the minimum stopband attenuation
As depends only on the window function we choose, and not the value of M.
Two window functions that provide control over both δs (hence As) and the
width of the transition bandwidth Δω are the Dolph–Chebyshev window [6] and
the Kaiser window [7], which have the additional parameters r and β,
respectively. The Kaiser window is defined by

w(n) = I0{β√(1 − (n/M)²)} / I0{β};   −M ≤ n ≤ M   (5.41)
268 FINITE IMPULSE RESPONSE FILTERS
We compute the values of the Kaiser window function in three steps as follows:
N = (αs − 8)/(2.285 Δω)   (5.44)
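The order estimate (5.44) and the window definition (5.41) can be sketched in Python, evaluating I0 by its power series; the attenuation, transition width, and β below are arbitrary illustrative values, not a complete Kaiser design procedure:

```python
import math

def i0(x, terms=30):
    """Zeroth-order modified Bessel function of the first kind, by power series."""
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= (x / (2.0 * k)) ** 2   # builds (x/2)^(2k) / (k!)^2 incrementally
        total += term
    return total

def kaiser_order(alpha_s, delta_omega):
    """Kaiser's order estimate, Eq. (5.44): N = (alpha_s - 8)/(2.285 * delta_omega)."""
    return math.ceil((alpha_s - 8.0) / (2.285 * delta_omega))

def kaiser(n, M, beta):
    """Kaiser window, Eq. (5.41), for -M <= n <= M."""
    return i0(beta * math.sqrt(1.0 - (n / M) ** 2)) / i0(beta)

N = kaiser_order(60.0, 0.2 * math.pi)   # 60 dB attenuation, transition width 0.2*pi
```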
³The formulas given by Kaiser may not give a robust estimate of the order for all cases of FIR filters.
A more reliable estimate is given by an empirical formula [10] shown below, and that formula is
used in the MATLAB function remezord:

N ≅ [D∞(δp, δs) − F(δp, δs)((ωs − ωp)/2π)²] / [(ωs − ωp)/2π]

where D∞(δp, δs) (when δp ≥ δs) = [a1(log10 δp)² + a2(log10 δp) + a3] log10 δs − [a4(log10 δp)² +
a5(log10 δp) + a6], and F(δp, δs) = b1 + b2[log10 δp − log10 δs], with a1 = 0.005309, a2 =
0.07114, a3 = −0.4761, a4 = 0.00266, a5 = 0.5941, a6 = 0.4278, b1 = 11.01217, b2 = 0.51244.
When δp < δs, they are interchanged in the expression for D∞(δp, δs) above.
Example 5.6
Design a bandpass filter that approximates the ideal magnitude response given in
Figure 5.5(c), in which ωc2 = 0.6π and ωc1 = 0.2π. Let us select a Hamming
window of length N = 11 and plot the magnitude response of the filter.
The coefficients cBP(n) of the Fourier series for the magnitude response given
are computed from formula (5.35) given below:

cBP(n) = (ωc2 − ωc1)/π;   n = 0

cBP(n) = sin(ωc2 n)/(πn) − sin(ωc1 n)/(πn);   |n| ≥ 1
But since the Hamming window function has a length of 11, we need to compute
the coefficients cBP(n) only from n = −5 to n = 5. We also calculate
the 11 coefficients of the Hamming window, using the formula

wH(n) = 0.54 + 0.46 cos(2πn/N);   −5 ≤ n ≤ 5
Their products hw (n) = cBP (n)wH (n) are computed next. The 11 coefficients
cBP (n), wH (n) and hw (n) for −5 ≤ n ≤ 5 are listed below. Next the coefficients
hw (n) are delayed by five samples to get the coefficients of the FIR filter function
[i.e., h(n) = hw (n − 5)], and these are also listed for 0 ≤ n ≤ 10 below. The
plot of the four sequences and the magnitude response of the FIR are shown in
Figures 5.10 and 5.11, respectively.
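The products hw(n) = cBP(n)wH(n) of this example can be computed with a short Python sketch (the book carries out the same steps in MATLAB), using exactly the formulas and values given above:

```python
import math

wc1, wc2 = 0.2 * math.pi, 0.6 * math.pi   # bandpass edges from Example 5.6
N = 11                                    # length of the Hamming window

def c_bp(n):
    """Fourier series coefficients of the ideal bandpass response."""
    if n == 0:
        return (wc2 - wc1) / math.pi
    return math.sin(wc2 * n) / (math.pi * n) - math.sin(wc1 * n) / (math.pi * n)

def w_h(n):
    """Hamming window values used in Example 5.6."""
    return 0.54 + 0.46 * math.cos(2 * math.pi * n / N)

hw = {n: c_bp(n) * w_h(n) for n in range(-5, 6)}
h = [hw[n - 5] for n in range(11)]   # delay by five samples: h(n) = hw(n - 5)
```

The center coefficient is h(5) = cBP(0)wH(0) = 0.4, and the coefficients are symmetric, h(n) = h(10 − n), so the filter is a type I linear phase filter.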
Example 5.7
Design a lowpass FIR filter of length 11, with a cutoff frequency ωc = 0.3π.
Using a Hamming window, find the values of the samples h(3) and h(9) of the
FIR filter given by H(z^{−1}) = Σ_{n=0}^{10} h(n)z^{−n}.
Since the length of the FIR filter is given as 11, its order is N = 10. The
coefficients hw (n) have to be known for −5 ≤ n ≤ 5 and delayed by five samples.
Figure 5.10 Coefficients of the filter obtained during the design procedure.
[Figure 5.11 Magnitude response of the FIR bandpass filter versus normalized frequency.]
Figure 5.12 (a) Ideal magnitude response of a multilevel FIR filter; (b) Magnitude
response of a lowpass filter with a spline function of zero order.
Since only h(3) and h(9) are asked for, by looking at Figure 5.10, we notice that
these samples are the same as hw (−2) and hw (4) because when they are shifted
by five samples, they become h(3) and h(9). So we have to calculate only
cLP (−2), cLP (4) and the values w(−2), w(4) of the Hamming window. Then
hw (−2) = cLP (−2)w(−2) and hw (4) = cLP (4)w(4).
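The two samples can be computed directly in Python, using cLP(n) = sin(0.3πn)/(πn) from (5.31) and the Hamming window of length 2M + 1 = 11:

```python
import math

wc = 0.3 * math.pi
M = 5

def c_lp(n):
    """Fourier series coefficients of the ideal lowpass response with wc = 0.3*pi."""
    return wc / math.pi if n == 0 else math.sin(wc * n) / (math.pi * n)

def w_h(n):
    """Hamming window of length 2M + 1 = 11."""
    return 0.54 + 0.46 * math.cos(2 * math.pi * n / (2 * M + 1))

# h(n) = hw(n - 5), so h(3) = hw(-2) and h(9) = hw(4)
h3 = c_lp(-2) * w_h(-2)
h9 = c_lp(4) * w_h(4)
```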
If the frequency response of an FIR filter has multiple magnitude levels, it
is easy to extend the method as illustrated by Figure 5.12a. We design a lowpass
filter with a cutoff frequency ωc1 and a maximum magnitude of 0.8, another
lowpass filter with a cutoff frequency ωc2 , and a maximum magnitude of 0.2
in the passband; we design a highpass filter with a cutoff frequency ωc3 and a
maximum value of 0.5 and another bandpass filter with cutoff frequencies ωc3 and
ωc4 and a maximum magnitude of 1.0. If all of these filters are designed to have
zero phase or the same phase response, then the sum of the four filters described
above will approximate the magnitude levels over the different passbands. Each
of the four filters should be designed to have very low sidelobes so that they
don’t spill over too much into the passbands of the adjacent filter.
Even when we design an FIR filter having a constant magnitude over one
passband or one stopband, using the methods described above will produce a
transition band between the ideal passband and the stopband. Instead of mitigating
the Gibbs overshoot at points of discontinuity by using tapered windows, we
can make a modification to the ideal piecewise, constant magnitude response to
remove the discontinuities. We choose a spline function of order p ≥ 0 between
the passband and the stopband [5]. The spline function of zero order is a straight
line joining the edge of the passband and the stopband, as shown in Figure 5.12b.
The Fourier series coefficients for the lowpass frequency response in this case are
given by
hLP(n) = ωc/π;   n = 0

hLP(n) = [2 sin(Δωn/2)/(Δωn)] · [sin(ωc n)/(πn)];   |n| > 0   (5.45)
The design procedure using this formula seems easier than the Fourier series method
using window functions—since we do not have to compute the coefficients of
window functions and multiply these coefficients by those of the ideal frequency
response. But it is applicable for the design of lowpass filters only. However,
extensive simulation of this design procedure shows that as the transition bandwidth Δω
is decreased and as p is increased, the magnitude response of the filter exhibits
ripples in the passband as well as the stopband, and it is not much better than
the response we can obtain from the windowed FIR filters.
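A sketch of (5.45) in Python; note that as the transition width Δω shrinks, the factor 2 sin(Δωn/2)/(Δωn) tends to 1 and the coefficients revert to those of the ideal lowpass response (5.31):

```python
import math

def h_lp_spline(n, wc, dw):
    """Lowpass coefficients with a zero-order spline transition, Eq. (5.45).
    wc is the cutoff (center of the transition band), dw the transition width."""
    if n == 0:
        return wc / math.pi
    return (2.0 * math.sin(dw * n / 2.0) / (dw * n)) * (math.sin(wc * n) / (math.pi * n))
```

The extra factor is a sin(x)/x taper, smaller than 1 in magnitude, so each coefficient for n ≠ 0 is attenuated relative to the ideal lowpass coefficient.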
Example 5.8
If the magnitude response specified in the passband of an FIR filter lies between
1 + δp and 1 − δp, then the maximum attenuation αp (in decibels) in the passband
is αp = 20 log10[(1 + δp)/(1 − δp)], from which we obtain

δp = (10^{0.05αp} − 1)/(10^{0.05αp} + 1)
If the passband magnitude lies between 1 and (1 − δp ), then the maximum atten-
uation in the passband αp = −20 log(1 − δp ), in which case δp is given by
(1 − 10−0.05αp ). If the magnitude in the stopband lies below δs , the minimum
attenuation in the stopband is given by αs = −20 log(δs ), from which we obtain
δs = 10−0.05αs . These relations are used to find the value of δp and δs if the
attenuations αp and αs in the passband and stopband are specified in decibels.
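These conversions are one-liners; a Python sketch in which the attenuation values are arbitrary examples:

```python
import math

def passband_ripple(alpha_p):
    """delta_p when the passband magnitude lies between 1 + delta_p and 1 - delta_p."""
    r = 10.0 ** (0.05 * alpha_p)
    return (r - 1.0) / (r + 1.0)

def stopband_ripple(alpha_s):
    """delta_s from the minimum stopband attenuation alpha_s in decibels."""
    return 10.0 ** (-0.05 * alpha_s)

def stopband_attenuation(delta_s):
    """Inverse relation: alpha_s = -20 log10(delta_s)."""
    return -20.0 * math.log10(delta_s)

dp = passband_ripple(0.5)    # e.g. 0.5 dB passband attenuation
ds = stopband_ripple(40.0)   # e.g. 40 dB stopband attenuation gives ds = 0.01
```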
5.4 DESIGN OF WINDOWED FIR FILTERS USING MATLAB

In the MATLAB function [N, fpoints, magpoints,wt] = remezord
(edgepoints, bandmag, dev, Fs), the input vector edgepoints lists the
edges of the disjoint bands between 0 and the Nyquist frequency but does not
include the frequency at ω = 0 and the Nyquist frequency, as the default value of
the Nyquist frequency is 1.0 (and therefore the sampling frequency Fs=2). The
vector bandmag lists the magnitudes over each of the passbands and stopbands.
If there is a transition band between the passband and stopband, it is considered
as a “don’t care” region. Since the first edge at 0 and the last one at the Nyquist
frequency are not included in the vector edgepoints, the length of the vector
edgepoints is two times that of bandmag minus 2. For example, let us choose
a bandpass filter with a stopband [0 0.1], a transition band [0.1 0.12], a pass-
band [0.12 0.3], a transition band [0.3 0.32], and a stopband [0.32 1.0].
The input vector edgepoints and the output vector fpoints are the same,
when Fs=2, namely, [0.1 0.12 0.3 0.32]. The input vector bandmag is of
length 3, and the values may be chosen, for example, as [0 1 0] for the band-
pass filter. The vector dev lists the values for the maximum deviations δp and
δs in the passbands and stopbands, calculated from the specifications for αp and
αs as explained above. The output vector fpoints is the same as edgepoints
when Fs has the default value of 2; if the edgepoints and the sampling fre-
quency Fs are actual frequencies in hertz, then the output vector fpoints gives
their values normalized by the actual Nyquist frequency Fs/2. But it must be
pointed out that the output vector magpoints lists the magnitudes at both ends
of the passbands and stopbands. In the above example, the vector magpoints
is [0 0 1 1 0 0]. The output of this function is used as the input data to
fir1 and (also the function remez, discussed later) to obtain the unit impulse
response coefficients of the FIR filter.
Let us consider a lowpass filter with a passband over [0 0.3] and a magnitude
1.0 and a stopband over [0.4 1.0] with a magnitude 0.0. In this case, there is
a transition band between 0.3 and 0.4 over which the magnitude is not specified,
and therefore it is a “don’t care” region. The vector edgepoints is [0.3 0.4],
and the vector bandmag is [1.0 0.0]. For the previous example of a bandpass
filter, we have already mentioned that edgepoints is [0.1 0.12 0.3 0.32]
and the vector bandmag is [0 1 0] . Let us select δp = δs = 0.01 for both
filters. The function remezord([0.3 0.4], [1.0 0.0], [0.01 0.01]) is used, and it yields a value N = 39, with the same vector fpoints as edgepoints and the
vector wt=[1.0 1.0]. If we choose a sampling frequency of 2000 Hz, we use
remezord([0.3 0.4], [1.0 0.0], [0.01 0.01], 2000), and the output
would be N = 39, fpoints=[0.3 0.4], magpoints=[1 1 0 0] and the vec-
tor wt = [1.0 1.0]. The elements in the vector wt will be unequal if δp = δs
[i.e., wt = [(δs /δp ) 1]]. These output values are used as input in fir1 (or
remez) for the design of the lowpass filter.
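MATLAB's remezord has no direct SciPy counterpart. As a rough substitute, Kaiser's well-known estimate of the equiripple filter order can be used; the formula below is standard but is not taken from this text, so treat it as an approximation:

```python
import math

def kaiser_order_estimate(dp, ds, edges):
    """Kaiser's estimate of the order N of an equiripple lowpass FIR filter.
    edges = (passband edge, stopband edge), normalized so that 1.0 is Nyquist."""
    df = abs(edges[1] - edges[0]) / 2.0   # transition width in cycles/sample
    n = (-20 * math.log10(math.sqrt(dp * ds)) - 13.0) / (14.6 * df)
    return math.ceil(n)
```

For δp = δs = 0.01 and band edges [0.3 0.4] it returns 37, in the same range as the value N = 39 produced by remezord.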
Calling remezord with the vectors edgepoints, bandmag, and dev of the
bandpass example gives N = 195, with the same vector fpoints as the input
vector edgepoints, magpoints=[0 0 1 1 0 0], and wt=[1 1 1] as in the input.
The MATLAB function kaiserord given below is used to estimate the order
N of the FIR filter using the Kaiser window. The input parameters for this
function are the same as for remezord, but the outputs are the approximate order
N of the filter designed with the Kaiser window, the normalized frequencies at the
bandedges, the window parameter beta, and the filter type ftype:
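SciPy provides a direct analogue, scipy.signal.kaiserord, which returns the number of taps and the Kaiser parameter beta for a given attenuation and transition width; a sketch for the lowpass specification above, with the cutoff 0.35 placed in the middle of the transition band [0.3, 0.4]:

```python
import numpy as np
from scipy import signal

# delta_p = delta_s = 0.01 corresponds to 40 dB; the transition width 0.1
# is normalized so that 1.0 is the Nyquist frequency.
numtaps, beta = signal.kaiserord(40.0, 0.1)

# Use the estimate to design the lowpass filter with a Kaiser window.
taps = signal.firwin(numtaps, 0.35, window=('kaiser', beta))
```

The returned taps are symmetric, so the filter has exactly linear phase.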
b=fir1(N,Wc)
b=fir1(N,Wc,’ftype’)
b=fir1(N,Wc,’ftype’,window)
These forms give the N + 1 samples of the unit impulse response of the linear
phase FIR filter, that is, the coefficients of its transfer function. Wc is the cutoff
frequency of the lowpass filter when ’ftype’ is omitted, and it is the cutoff
frequency of the highpass filter when ’ftype’ is typed as ’high’. For a bandpass
filter, Wc is a two-element vector Wc=[W1 W2], which lists the two cutoff
frequencies, ωc1 and ωc2 (ωc2 ≥ ωc1 ). (Use help fir1 to get details when
there are multiple passbands.) When ’ftype’ is typed as ’stop’, the vector Wc
represents the cutoff frequencies of the bandstop filter.
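The same four designs can be sketched with scipy.signal.firwin, which plays the role of fir1 (its pass_zero argument replaces ’ftype’); the band edges below repeat the examples of this section:

```python
import numpy as np
from scipy import signal

N = 40                                              # filter order, as in fir1
b_lp = signal.firwin(N + 1, 0.3)                    # lowpass (Hamming by default)
b_hp = signal.firwin(N + 1, 0.3, pass_zero=False)   # highpass, like 'high'
b_bp = signal.firwin(N + 1, [0.12, 0.3], pass_zero=False)  # bandpass
b_bs = signal.firwin(N + 1, [0.12, 0.3])            # bandstop, like 'stop'
```

Note that firwin takes the number of taps N + 1; the odd number of taps (type I) accommodates the highpass and bandstop cases, which must not have a zero at the Nyquist frequency.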
If the filter is a lowpass filter, it becomes a type I filter when the N obtained from
remezord is even, and a type II filter when N is odd. Note that the frequency
response of type II filters has a zero magnitude at the Nyquist frequency, that is,
their transfer function has a zero at z = −1 and therefore is a polynomial of odd
order. The highpass and bandstop filters that do not have a zero magnitude at the
Nyquist frequency cannot be realized as type II filters. When designing a highpass
or bandstop filter, N must be an even integer, and the function fir1 automatically
increases the value of N by 1 to make it an even number if the output from
remezord is an odd integer. Since the program assumes real values for the
magnitude and zero value for the phase, we do not get types III and IV filters
from this type of frequency specification. The window by default is the Hamming
window in fir1, but we can choose the rectangular (boxcar), Bartlett, triangular,
Hamming, Hanning, Kaiser, and Dolph–Chebyshev (chebwin) windows in the
function fir1. After getting the coefficients of the FIR filter, we can find the
magnitude (phase and group delay also) of the filter to verify that it meets the
specifications; otherwise we may have to increase the value of N , or change the
values in the vector dev.
Example 5.9
Now that we have obtained all the input data needed to design an LP filter with
N = 39 and ωc = 0.3 and a BP filter with N = 195 and ωc = [0.12 0.3], we
design the FIR filters with the Hamming window and the Kaiser window. So we
have four cases, discussed below.
The M-files for designing the four filters are given below, and the resulting
magnitude responses are shown in Figures 5.13–5.17.
It must be noted from Figure 5.9 that the magnitude of the filter designed by
the Fourier series method is 0.5 at ωc , whatever the window function used to
minimize the Gibbs overshoot. The order N = 39 for the lowpass filter, obtained
from the function remezord, is only an estimate; in this case it is too low, because
the resulting magnitude response of the filter does not meet the passband
error δp = 0.01 specified and used in that function. So we have to change the
value for the cutoff frequency ωc and the order N of the filter by trial and error
Figure 5.13 Magnitude response of a FIR lowpass filter using Hamming window.
Figure 5.14 Magnified frequency response of a FIR lowpass filter in the passband.
until the specifications are met. For the lowpass FIR filter, we have had to choose
ωc = 0.35 and N = 65 so that at the frequency ω = 0.3, the error δp ≤ 0.01 and
at ω = 0.4, the error δs ≤ 0.01 (equivalent to an attenuation of 40 dB). The magnitude response of this
final design is shown in Figures 5.13 and 5.14. Similar changes in the design of
the other filters designed by the Fourier series method are necessary.
Figure 5.15 Magnitude response of a FIR bandpass filter using Hamming window.
Figure 5.16 Magnitude response of a FIR lowpass filter using Kaiser window.
Figure 5.17 Magnitude response of a FIR bandpass filter using Kaiser window.
The frequency response in the passband of FIR filters designed by using the mod-
ified Fourier series method as described above has a monotonically decreasing
response and a maximum error from the desired ideal response in the passband, at
the cutoff frequency ωc . Now we discuss another important method that “spreads
out” the error over the passband in an equiripple fashion, such that the maximum
error is the same at several points and can be made very small. This method
minimizes the maximum error in the passband and is called the minimax design
or the equiripple design. An example of the equiripple or Chebyshev response
of a lowpass filter is shown in Figure 5.18.
Figure 5.18 Equiripple (Chebyshev) magnitude response of a lowpass filter.
For types I–IV FIR filters, the frequency response was shown in (5.22) to be
of the following form:

$$H(e^{j\omega}) = e^{-j(N/2)\omega}\left\{ h\!\left(\frac{N}{2}\right) + 2\sum_{n=1}^{N/2} h\!\left(\frac{N}{2}-n\right)\cos(n\omega) \right\}$$
In general, Equations (5.47)–(5.50) are of the form $H(e^{j\omega}) = e^{-j(N\omega/2)}e^{j\beta}H_R(\omega)$,
where β is either 0 or π/2 depending on the type of filter,⁴ and HR (ω)
is a real function of ω, which can have positive or negative values. It is easy to
see that HR (ω) for type I filters can be reduced to the form (2M = N )
$$H_R(\omega) = \sum_{k=0}^{M} \tilde{a}[k]\cos(k\omega) \qquad (5.51)$$

where

$$\tilde{a}[0] = h[M], \qquad \tilde{a}[k] = 2h[M-k], \quad 1 \le k \le M \qquad (5.52)$$
Consider HR (ω) for the type II filter shown in (5.48) and given below:

$$H_R(\omega) = 2\sum_{n=1}^{(N+1)/2} h\!\left(\frac{N+1}{2} - n\right)\cos\!\left(\left(n - \frac{1}{2}\right)\omega\right)$$
⁴This is not the same parameter β that is used in Kaiser’s window.
where $b[k] = 2h\left[\frac{2M+1}{2} - k\right]$, $1 \le k \le \frac{2M+1}{2}$. This can be further
reduced to the form

$$H_R(\omega) = \cos\!\left(\frac{\omega}{2}\right)\sum_{k=0}^{(2M-1)/2} \tilde{b}[k]\cos(k\omega) \qquad (5.54)$$

where

$$b[1] = \frac{1}{2}\left(\tilde{b}[1] + 2\tilde{b}[0]\right)$$
$$b[k] = \frac{1}{2}\left(\tilde{b}[k] + \tilde{b}[k-1]\right), \quad 2 \le k \le \frac{2M-1}{2} \qquad (5.55)$$
$$b\!\left[\frac{2M+1}{2}\right] = \frac{1}{2}\,\tilde{b}\!\left[\frac{2M-1}{2}\right]$$
Let us consider the function HR (ω) for a type III filter. Equation (5.49) can be
reduced to the form

$$H_R(\omega) = \sum_{k=1}^{M} c[k]\sin(k\omega) \qquad (5.56)$$

$$H_R(\omega) = \sin(\omega)\sum_{k=0}^{M-1} \tilde{c}[k]\cos(k\omega) \qquad (5.57)$$

where (taking $\tilde{c}[M] = 0$)

$$c[1] = \tilde{c}[0] - \tfrac{1}{2}\tilde{c}[2]$$
$$c[k] = \tfrac{1}{2}\left(\tilde{c}[k-1] - \tilde{c}[k+1]\right), \quad 2 \le k \le M-1 \qquad (5.58)$$
$$c[M] = \tfrac{1}{2}\tilde{c}[M-1]$$
Similarly, for a type IV filter,

$$H_R(\omega) = \sum_{k=1}^{(2M+1)/2} d[k]\sin\!\left(\left(k - \tfrac{1}{2}\right)\omega\right) \qquad (5.59)$$

where $d[k] = 2h\left[\frac{2M+1}{2} - k\right]$, $1 \le k \le \frac{2M+1}{2}$. Equation (5.59) can
be reduced to the form

$$H_R(\omega) = \sin\!\left(\frac{\omega}{2}\right)\sum_{k=0}^{(2M-1)/2} \tilde{d}[k]\cos(k\omega) \qquad (5.60)$$
where

$$d[1] = \tilde{d}[0] - \tfrac{1}{2}\tilde{d}[1]$$
$$d[k] = \tfrac{1}{2}\left(\tilde{d}[k-1] - \tilde{d}[k]\right), \quad 2 \le k \le \frac{2M-1}{2}$$
$$d\!\left[\frac{2M+1}{2}\right] = \frac{1}{2}\,\tilde{d}\!\left[\frac{2M-1}{2}\right]$$
where

$$Q(\omega) = \begin{cases} 1 & \text{for type I} \\ \cos\left(\frac{\omega}{2}\right) & \text{for type II} \\ \sin(\omega) & \text{for type III} \\ \sin\left(\frac{\omega}{2}\right) & \text{for type IV} \end{cases} \qquad (5.62)$$
and

$$P(\omega) = \sum_{k=0}^{K} \alpha[k]\cos(k\omega) \qquad (5.63)$$
where

$$\alpha[k] = \begin{cases} \tilde{a}[k] & \text{for type I} \\ \tilde{b}[k] & \text{for type II} \\ \tilde{c}[k] & \text{for type III} \\ \tilde{d}[k] & \text{for type IV} \end{cases} \qquad (5.64)$$
and

$$K = \begin{cases} M & \text{for type I} \\ \dfrac{2M-1}{2} & \text{for type II} \\ M-1 & \text{for type III} \\ \dfrac{2M-1}{2} & \text{for type IV} \end{cases} \qquad (5.65)$$
The coefficients α[k] in (5.67) are the unknown variables that have to be found
such that the maximum absolute value of the error |J (ω)| over the subintervals
of 0 ≤ ω ≤ π is minimized. It has been shown [13] that when this minimum
value is achieved, the frequency response exhibits an equiripple behavior:

$$\min_{\{\alpha[k]\}}\;\max_{\{S\}}\;\left| \tilde{W}(e^{j\omega})\left[P(\omega) - \tilde{H}_d(e^{j\omega})\right] \right| \qquad (5.69)$$
where {S} is used to denote the union of the disjoint frequency bands in 0 ≤
ω ≤ π.
Once these coefficients are determined, the coefficients h(n) can be obtained
from the inverse relationships between α[k] and a[k], b[k], c[k], and d[k] depend-
ing on the type of filter and then using the relationship between h[n] and these
coefficients.
Parks and McClellan [2] were the original authors who solved the preceding
problem of minimizing the maximum absolute value of the error function J (ω),
using the theory of Chebyshev approximation, and developed an algorithm to
implement it by using a scheme called the Remez exchange algorithm. They also
published a computer program (in FORTRAN) for designing equiripple, linear
phase FIR filters. Although major improvements have been made by others to this
algorithm and to the software [13], it is still referred to as the Parks–McClellan
algorithm or the Remez exchange algorithm. We will work out a few examples of
designing such filters using the MATLAB function remez in the following section.
The first step is to estimate the order of the FIR filter, using the function
remezord, which was explained earlier. The next step is to find the coefficients of the
FIR filter using the function remez, which has several options:
The vector fpoints lists the edges of the passbands and stopbands, starting
from ω = 0 and ending with ω = 1 (which is the normalized Nyquist frequency).
In contrast to the function remezord, this vector fpoints includes 0 and 1.0
as the first and last entries. The edges between the passbands and the adjacent
stopband must have a separation of at least 0.1; otherwise, the program
automatically creates a transition band of 0.1 between them, and these transition bands
are considered as “don’t care” regions. The vector magpoints lists the
magnitudes in the frequency response at each edge of the passband and stopband.
The weighting function can be prescribed for each frequency band as explained
above. The function remez chooses type I filters for even-order N and type II
filters for odd order as the default choice. The flags ’hilbert’ and
’differentiator’ are used for the option ftype for designing the Hilbert transformer
and the differentiator, respectively. The other input variables are the same as
the outputs obtained from remezord, and hence we can use the two functions
remezord and remez together in one M-file to design an equiripple FIR filter
with linear phase as listed and described below.
band=’);
[N, fpoints, magpoints, wt]=remezord(edgepoints, bandmag,
dev, Fs);
disp(’Order of the FIR filter is’);disp(N);
b=remez(N, fpoints, magpoints, wt);
[h, w]=freqz(b, 1, 256);
H=abs(h);
Hdb=20*log10(H);
plot(w/pi,Hdb);grid
title(’Magnitude response of the equiripple, linear phase FIR filter’)
ylabel(’Magnitude in dB’)
xlabel(’Normalized frequency’)
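A SciPy counterpart of this M-file, using scipy.signal.remez and scipy.signal.freqz, might look like the following sketch; the order 41 (i.e., 42 coefficients) anticipates the value found by trial in Example 5.11:

```python
import numpy as np
from scipy import signal

# Equiripple lowpass: passband edge 0.3, stopband edge 0.4 (Nyquist = 1 when
# fs = 2), desired magnitudes 1 and 0, equal weights since delta_p = delta_s.
numtaps = 42                 # order N = 41
b = signal.remez(numtaps, [0, 0.3, 0.4, 1], [1, 0], fs=2)

w, h = signal.freqz(b, 1, 256)
H_db = 20 * np.log10(np.abs(h))      # magnitude in dB, as plotted in the M-file
```

The resulting deviation stays within about δ = 0.01 in both bands, as claimed for N = 41 in the text.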
This program can be used to design lowpass, highpass, bandpass, and bandstop
filters. If the filter does not meet the given specification, one should increase
the order of the filter by 1, 2, or 3 until the specifications in the passbands and
stopbands are met. But when the cutoff frequencies are very close to 0 or 1, and
when we are designing highpass and bandstop filters, the value of N estimated
by remezord may not be acceptable, and we may have to choose it arbitrarily
to meet the given specifications. If one is interested in getting an enlarged view
of the magnitude response over a frequency range such as the passband, then
the following lines may be added to the program listed above. In the function
axis, choose wc1 = 0 and wc2 = passband edge frequency for the lowpass filter
and wc1 and wc2 as the lower and upper cutoff frequencies of the passband of a
bandpass filter or the stopband of a bandstop filter:
Example 5.11
Let us work a few examples using the program displayed above for designing
lowpass and bandpass equiripple filters with the same specifications as the earlier
ones. We type in the following input data for designing an equiripple lowpass
filter:
edgepoints:[0.3 0.4]
bandmag :[1 0]
dev :[0.01 0.01]
wt :[1 1]
The frequency response of the lowpass filter is shown in Figure 5.19, and a
magnified view of the response in the passband is shown in Figure 5.20. It is
seen that the deviation in the passband is within 0.087 dB, which is equivalent
to δp = 0.01 in the passband, but the magnitude in the stopband is not equal to
or less than −40 dB, corresponding to δs = 0.01.
Therefore we increase the value of N from 39 to 41 and show the resulting
filter response displayed in Figure 5.21, which does meet the stopband magnitude
required.
Next we design an equiripple bandpass filter meeting the same specifications
as those that used the Hamming window and the Kaiser window. The input
parameter values are the following:
Figure 5.19 Magnitude response of an equiripple FIR lowpass filter with N = 39.
Figure 5.20 Magnified plot of the passband response of an equiripple FIR lowpass filter.
Figure 5.21 Magnitude response of an equiripple FIR lowpass filter with N = 41.
When we design an equiripple bandstop filter with the same vector edgepoints
as in the preceding bandstop filter, we get a response that does not meet the
desired specifications even after the order of the FIR filter is increased from 195
to 205. Also, the design of a highpass filter using the Remez algorithm is not
always successful. In Section 5.7, an alternative approach to solve such problems
is suggested.
[Figure: magnitude response of the equiripple FIR bandpass filter with N = 195.]
In the methods considered above for the design of linear phase FIR filters, the
magnitude response was specified as constant over disjoint bands, and the transi-
tion bands were “don’t care” regions. In this section, we discuss briefly the
MATLAB function fir2, which designs a linear phase filter from magnitudes
specified at an arbitrary set of frequencies:
b=fir2(N, F, M)
b=fir2(N, F, M, window)
b=fir2(N, F, M, window, npt)
and so on. As input parameters for this function, N is the order of the filter and
F is the vector of frequencies between 0 and 1 at which the magnitudes are
specified. In the vector F, we include the end frequencies 0 and 1 and list the
magnitudes at these frequencies in the vector M, so the lengths of F and M are the
same. The argument npt is the number of gridpoints equally spaced between 0
and 1; the default value is 512. Frequencies at the edge of adjacent bands can be
included; such a frequency will appear twice in the vector F, indicating a jump discontinuity.
The output of this function is the N + 1 coefficients of the unit impulse response
of the FIR filter with linear phase.
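The role of fir2 is played in SciPy by scipy.signal.firwin2; a sketch with an illustrative lowpass specification containing a jump discontinuity at 0.3 (the frequency and magnitude values below are assumptions, not taken from the text):

```python
import numpy as np
from scipy import signal

# Magnitudes on a grid from 0 to 1 (Nyquist); repeating a frequency once
# writes a jump discontinuity, exactly as with fir2.
F = [0.0, 0.3, 0.3, 1.0]
M = [1.0, 1.0, 0.0, 0.0]
b = signal.firwin2(65, F, M)          # 65 taps, Hamming window by default

w, h = signal.freqz(b, 1, 512)
```

The 65 returned coefficients are symmetric, so the design has linear phase.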
Example 5.12
filters only. To design FIR filters with linear phase in general, let us first review
the following results for the DTFT of a digital filter
$$H(e^{j\omega}) = H^*(e^{-j\omega}) = \left|H(e^{j\omega})\right| e^{-j(M\omega - \beta)}$$

$$H(k) = H(e^{j(2\pi/N)k}) = \sum_{n=0}^{N-1} h(n)e^{-j(2\pi/N)kn} \qquad (5.70)$$
$$\;\;\;= \left|H(e^{j(2\pi/N)k})\right| e^{-j[M(2\pi/N)k - \beta]} \quad \text{for } k = 0, 1, 2, \ldots, N-1$$

$$h(n) = \frac{1}{N}\sum_{k=0}^{N-1} H(k)e^{j(2\pi/N)nk} \quad \text{for } n = 0, \ldots, N-1 \qquad (5.71)$$
These relationships provide us with a general method for finding the unit
sample response of an FIR filter with linear phase, from the values of the
DFT samples that are the values of H (ej ω ) e−j (Mω−β) at the discrete fre-
quencies ωk = (2π/N )k. Therefore we can prescribe both magnitude and phase
over the entire frequency range, including the transition bands of the filter. The
method followed to find the unit sample response coefficients h(n) is completely
numerical, and we know that the efficient FFT techniques are used to compute
the DFT and IDFT samples. We have already used the MATLAB functions fft
and ifft that compute the DFT and IDFT, respectively, in Chapter 3; the
function fir2 simply implements the IDFT of H (k) specified by its input vectors
F and M.
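The frequency sampling relations (5.70) and (5.71) can be exercised directly with the FFT; a sketch for a type I (β = 0) lowpass filter, with N = 33 and a cutoff of 0.3π chosen only for illustration:

```python
import numpy as np

# Prescribe |H(k)| at w_k = (2*pi/N)k with linear phase -M*w_k and take the IDFT.
N = 33
M = (N - 1) // 2
k = np.arange(N)
A = np.where(np.minimum(k, N - k) * 2.0 / N <= 0.3, 1.0, 0.0)  # |H(k)|, symmetric
Hk = A * np.exp(-1j * 2 * np.pi * k * M / N)
h = np.real(np.fft.ifft(Hk))       # unit sample response; imaginary part ~ 0
```

The resulting h(n) is real and symmetric about n = M, and its DFT interpolates the prescribed samples exactly.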
5.8 SUMMARY
In this chapter, we discussed the design theory of FIR filters with linear phase
and a magnitude response that approximate the magnitudes of ideal LP, HP, BP,
and BS filters as well as some filters that have magnitude specifications that are
smooth but not necessarily piecewise constant. We also described a few very
efficient and well-known MATLAB functions that obtain very good results in
designing these filters. But there are cases when these functions (and a few others
not included in this chapter) may not work satisfactorily. Students are encouraged
to work extensively with these MATLAB functions, with a variety of specifications and
input arguments, and build their experience and insight about the relative
merits and advantages of the various methods and the MATLAB functions. It was
pointed out that the function remez does not work very efficiently in designing
highpass and bandstop filters. We suggest below an alternative approach to solve
this problem. However, the three transformations given below [1] are more general
and useful in transforming the magnitude response of a type I filter into that
of a wide variety of other magnitude response characteristics.
Consider a type I filter $H(z) = \sum_{n=0}^{2M} h(n)z^{-n}$, with $h(2M-n) = h(n)$, that has
a passband frequency ωp and stopband frequency ωs. Its zero phase frequency
response is $H_R(\omega) = h(M) + \sum_{n=1}^{M} 2h(M-n)\cos(n\omega)$. We can obtain three
new classes of type I filters that have the following transfer functions:

$$G(z) = \begin{cases} z^{-M} - H(z) & \text{transformation A} \\ (-1)^M H(-z) & \text{transformation B} \\ z^{-M} - (-1)^M H(-z) & \text{transformation C} \end{cases}$$
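Transformation A is easy to verify numerically: for a type I lowpass filter of order 2M, subtracting the coefficients from a unit sample at n = M gives G(z) = z^{−M} − H(z); a sketch using scipy.signal.firwin for the prototype (the cutoff 0.35 repeats the earlier lowpass design):

```python
import numpy as np
from scipy import signal

lp = signal.firwin(41, 0.35)       # type I lowpass prototype, order 2M = 40
hp = -lp.copy()
hp[20] += 1.0                      # G(z) = z^{-M} - H(z), with M = 20

w, Hlp = signal.freqz(lp, 1, 512)
_, Hhp = signal.freqz(hp, 1, 512)
```

Since H(e^{jω}) + G(e^{jω}) = e^{−jMω} identically, the two zero phase responses add up to 1, which is the complementary (lowpass-to-highpass) behavior sketched in Figure 5.24b.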
Figure 5.24 Transformation of a lowpass filter response to other types of filter responses: (a) H (ω); (b) G(ω) = 1 − H (ω); (c) G(ω) = H (π − ω); (d) G(ω) = 1 − H (π − ω).
The responses of these filters are plotted in Figure 5.24. We notice that transformation A
transforms a type I lowpass filter with a passband (cutoff) frequency
ωp and a stopband frequency ωs into a highpass filter with the cutoff frequency
ωs and a stopband frequency ωp, whereas transformation B transforms the
lowpass filter into a highpass filter with bandedge frequencies π − ωs and π − ωp,
as shown in Figure 5.24c.
PROBLEMS
5.1 Derive the function for the frequency response in the form ej θ {HR (ω)}
for the FIR filters H (z−1 ) given below. Identify the type of filters also:
5.7 The passband of a lowpass FIR filter lies between 1.04 and 0.96, and its
stopband lies below 0.0016. Find the value of the passband attenuation αp
and the stopband attenuation αs .
5.8 The passband of a lowpass FIR filter lies between 1.15 and 0.9, and its
stopband lies below 0.0025. Find the value of the passband attenuation αp
and the stopband attenuation αs .
5.9 The passband of a lowpass FIR filter lies between 1 + δp and 1 − δp , and
its stopband lies below δs . If the passband attenuation is 0.15 dB and the
stopband attenuation is 45 dB, what are the values of δp and δs ?
5.10 The passband of a lowpass FIR filter lies between 1 + δp and 1 − δp , and
its stopband lies below δs . If the passband attenuation is 0.85 dB and the
stopband attenuation is 85 dB, what are the values of δp and δs ?
5.11 Design a lowpass FIR filter of length 15, with ωc = 0.6π, using the
Fourier series method; truncate it with a Hann window, and delay the
samples by seven samples to get the transfer function of the causal filter.
5.12 In designing an FIR BP filter with ωc2 = 0.5π and ωc1 = 0.1π, using
the Fourier series method and a rectangular window of length 9, what
are the values of h(3) and h(9) in the transfer function of the causal FIR
filter?
5.13 In designing a bandpass FIR filter $H(z^{-1}) = \sum_{n=0}^{10} h(n)z^{-n}$, using the
Fourier series method and a Bartlett window in order to approximate the
magnitude response of the filter with ωc2 = 5π/6 and ωc1 = π/2, what
are the values of the samples h(3) and h(7)?
5.14 An FIR bandpass filter has cutoff frequencies at 0.25π and 0.5π. Find the
coefficients h(3) and h(6) of its transfer function $H(z^{-1}) = \sum_{n=0}^{10} h(n)z^{-n}$,
assuming that it is designed using the Fourier series method and a Black-
man window.
5.15 Design an FIR filter of length 9, to get a highpass response, with ωc =
0.4π, using a Hamming window.
5.16 The coefficients of the Fourier series for the frequency response of the
differentiator are given by
$$c(n) = \begin{cases} 0 & \text{for } n = 0 \\ \dfrac{\cos(\pi n)}{n} & \text{for } |n| > 0 \end{cases}$$
Using a Hann window of length 9, find the value of the coefficient h(6)
of the causal FIR filter that approximates the magnitude response of the
differentiator.
[Figure 5.25 The sequences x1 (n) (a) and x2 (n) (b), each of length 11 with sample values ±0.8, used in Problems 5.18 and 5.19.]
5.17 The coefficients of the Fourier series for the frequency response of a dig-
ital filter are c(n) = (0.5)|n| ; −∞ < n < ∞. A window function w(n) =
(−1)n , for −7 ≤ n ≤ 7, is applied to this sequence, and the product is
delayed by seven samples to get a causal sequence h(n). What is the
value of the fourth and eighth samples of h(n)?
5.18 Let x1 (n) be a window of length 11 shown in Figure 5.25a and y1 (n) =
x1 (n) ∗ x1 (n). Plot the function y1 (n) and derive its frequency response
Y1 (ej ω ).
[Figure 5.26 The DTFT magnitudes X(e^{jω}) for parts (a) and (b), used in Problem 5.23.]
5.19 Find the convolution sum y2 (n) = x2 (n) ∗ x2 (n) where x2 (n) is as shown
in Figure 5.25b. Plot y2 (n) and derive its frequency response (DTFT)
Y2 (ej ω ).
5.20 What is the frequency response of the filter attained by cascading the two
filters described by Figure 5.27a,b?
5.21 Plot the spectrum of the product y(n) = x1 (n)x2 (n) where x1 (n) = 10 cos
(0.5πn) and x2 (n) = cos(0.25πn).
5.22 If the signal y(n) given in Problem 5.21 is the input to the filter shown
in Figure 5.27b what is the output signal?
5.23 Derive the expressions for the Fourier series coefficients (for −∞ <
n < ∞) for the DTFT of an LTI DT system as shown in Figure 5.26a,b.
5.24 Derive the expressions for the Fourier series coefficients for −∞ < n <
∞ for the frequency response of the LTI-DT system as shown in
Figure 5.27a,b, respectively.
[Figure 5.27 The frequency responses for parts (a) and (b), used in Problems 5.20, 5.22, and 5.24.]
5.25 Design an HP, FIR filter of length 21 and ωc = 0.4π, using a Hann
window. Plot its magnitude response using the MATLAB function
fft.
5.26 Derive the expressions for the Fourier series coefficients (for −∞ <
n < ∞) for the DTFT of an LTI-DT system as shown in Figure 5.28a,b.
[Figure 5.28 The frequency responses X(e^{jω}) for parts (a) and (b), used in Problem 5.26.]
[Figure 5.29 The lowpass filter responses for parts (a) (Nyquist frequency 500 Hz) and (b), used in Problems 5.27–5.29.]
5.27 Find the Fourier series coefficients for the frequency response of the low-
pass digital filter as shown in Figure 5.29a, in which the Nyquist frequency
is 500 Hz.
5.28 Find the Fourier series coefficients for −5 < n < 5 for the frequency
response of the lowpass filter shown in Figure 5.29b.
5.29 Find the coefficients of the unit impulse response for 0 ≤ n ≤ 64, using
the MATLAB function fir2 after sampling the frequency response of
the lowpass filter shown in Figure 5.29b. Compare the result with that
obtained in Problem 5.28.
MATLAB Problems
5.30 Design a lowpass FIR filter of length 21, with ωp = 0.2π and ωs = 0.5π,
using the spline function of order p = 2, 4 for the transition band. Plot
the magnitude response of these filters on the same plot. Compare their
characteristics.
5.31 Design a lowpass FIR filter of length 41 with ωp = 0.3π and ωs = 0.5π,
using the spline function of order p = 2, 4 for the transition band. Show
the magnitude responses of these filters on the same plot. Compare their
characteristics.
5.32 Design a lowpass FIR filter of length 41 with ωp = 0.4π and ωs = 0.5π,
using the spline function of order p = 2, 4 for the transition band. Give
the magnitude responses of these filters on the same plot. Compare their
characteristics.
5.33 Design a lowpass FIR filter with a passband cutoff frequency ωc =
0.25π and a magnitude of 2 dB, a stopband frequency ωs = 0.4π,
5.43 Design a bandpass FIR filter with a passband between fc1 = 6 kHz and
fc2 = 7 kHz with αp = 0.2 dB, and two stopbands with stopband fre-
quencies at fs1 = 4 kHz and fs2 = 9 kHz with αs = 35 dB. The sampling
frequency is 20 kHz. Plot the magnitude response to verify that the spec-
ifications are met.
REFERENCES
1. T. Saramaki, Finite impulse response filter design, in Handbook for Digital Signal
Processing, S. K. Mitra and J. F. Kaiser, eds., Wiley-Interscience, New York, 1993,
Chapter 4, pp. 155–278.
2. T. W. Parks and J. H. McClellan, Chebyshev approximation of nonrecursive digital
filters with linear phase, IEEE Trans. Circuit Theory CT-19, 189–194 (1972).
3. G. C. Temes and D. Y. F. Zai, Least pth approximation, IEEE Trans. Circuit Theory
CT-16, 235–237 (1969).
4. L. R. Rabiner and B. Gold, Theory and Application of Digital Signal Processing,
Prentice-Hall, 1975.
5. T. W. Parks and C. S. Burrus, Digital Filter Design, Wiley-Interscience, New York,
1987.
6. H. D. Helms, Nonrecursive digital filters: Design method for achieving specifications
on frequency response, IEEE Trans. Audio Electroacoust. AU-16, 336–342 (Sept.
1968).
7. J. F. Kaiser, Nonrecursive digital filter design using the I0 -sinh window function,
Proc. 1974 IEEE Int. Symp. Circuits and Systems, April 1974, pp. 20–23.
8. S. K. Mitra, Digital Signal Processing—A Computer-Based Approach, McGraw-Hill,
New York, 2001.
9. S. K. Mitra and J. F. Kaiser, eds., Handbook for Digital Signal Processing, Wiley-
Interscience, 1993.
10. O. Herrmann, L. R. Rabiner, and D. S. K. Chan, Practical design rules for optimum
finite impulse response lowpass digital filters, Bell Syst. Tech. J. 52, 769–799 (1973).
11. F. J. Harris, On the use of windows for harmonic analysis with the discrete Fourier
transform, Proc. IEEE 66, 51–83 (1978).
12. J. K. Gautam, A. Kumar, and R. Saxena, Windows: A tool in signal processing, IETE
Tech. Rev. 12, 217–226 (1995).
13. A. Antoniou, Digital Filters: Analysis, Design and Applications, McGraw-Hill, 1993.
14. A. Antoniou, New improved method for the design of weighted-Chebyshev,
nonrecursive, digital filters, IEEE Trans. Circuits Syst. CAS-30, 740–750 (1983).
15. R. W. Hamming, Digital Filters, Prentice-Hall, 1977.
Filter Realizations
6.1 INTRODUCTION
Once we have obtained the transfer function of an FIR or IIR filter that approxi-
mates the desired specifications in the frequency domain or the time domain, our
next step is to investigate as many filter structures as possible, before we decide
on the optimal or suboptimal algorithm for actual implementation or applica-
tion. A given transfer function can be realized by several structures or what we
will call “circuits,” and they are all equivalent in the sense that they realize the
same transfer function under the assumption that the coefficients of the transfer
function have infinite precision. But in reality, the algorithms for implementing
the transfer function in hardware depend on the filter structure chosen to realize
the transfer function. We must also remember that the real hardware has a finite
number of bits representing the coefficients of the filter as well as the values
of the input signal at the input. The internal signals at the input of multipliers
and the signals at the output of the multipliers and adders also are represented
by a finite number of bits. The effect of rounding or truncation in the additions
and multiplications of signal values depends on, for example, the type of
representation of binary numbers: whether they are in fixed-point or floating-point form,
or whether they are in sign-magnitude or two’s-complement form. The effects
of all these finite wordlengths used in hardware implementation
are commonly called “finite wordlength effects,” which we will study in
Chapter 7.
In this chapter we develop several methods for realizing the FIR and IIR filters
by different structures. The analysis or simulation of any transfer function can
be easily done on a general-purpose computer, personal computer, or worksta-
tion with a high number of bits for the wordlength. We can also investigate the
performance of noncausal systems or unstable systems on personal computers.
Simulating the performance of an actual microprocessor or a digital signal
processor (DSP chip) by connecting to the PC a development kit that contains
the microprocessor or the DSP chip is far preferable to designing and building
the digital filter hardware with different finite wordlengths and testing its
performance. Of course, extensive analysis (simulation) of a given filter function under
other design criteria such as stability, modularity, pipeline architecture, and noise
immunity is also carried out on a personal computer or workstation using very
powerful software that is available today.
It is true that real hardware can be programmed to implement a large number
of algorithms, by storing the data that represent the input signals and coefficients
of the filter in a memory. But remember that it can implement an algorithm only
in the time domain, whereas programming it to find the frequency response is
only a simulation. Three algorithms in the time domain that we have discussed
in earlier chapters are the recursive algorithm, convolution sum, and the FFT
algorithm. It is the difference equations describing these algorithms that have to
be implemented by real digital hardware.
Consider the general example of an IIR filter function:

$$H(z) = \frac{\sum_{n=0}^{M} b(n)z^{-n}}{1 + \sum_{n=1}^{N} a(n)z^{-n}} \qquad (6.1)$$

and the difference equation it represents:

$$\sum_{k=0}^{N} a(k)y(n-k) = \sum_{k=0}^{M} b(k)x(n-k); \qquad a(0) = 1 \qquad (6.2)$$
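The difference equation (6.2) is what the hardware actually implements; a direct recursive sketch, which can be checked against scipy.signal.lfilter (any coefficient values used with it are illustrative):

```python
import numpy as np
from scipy import signal

def difference_equation(b, a, x):
    """Recursive implementation of (6.2): sum_k a(k) y(n-k) = sum_k b(k) x(n-k),
    with a(0) = 1 and zero initial conditions."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc
    return y
```

For any a with a(0) = 1, the output agrees with scipy.signal.lfilter(b, a, x).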
In the following pages, it will be shown that this transfer function (6.1) can
be realized by several structures. We must remember that the algorithms used
to implement them in the time domain will vary for the different structures. All
the equivalent structures realize the same transfer function only under infinite
precision of the coefficients; otherwise their performance depends on the number
of bits used to represent the coefficients, as well as the input signal and the form
for representing the binary numbers. The same statement can be made for the
realization of an FIR filter function treated in the next section. The purpose of
realizing different structures and studying the effects of quantization is to find the
best possible structure that has the minimum quantization effect on the output of
the system.
Using these operations, we get the transpose of the structure of Figure 6.1
as Figure 6.2. This is known as the direct form II structure; remember that this
(direct form II) structure will be called the direct form I transposed structure in the
next chapter.
[Figure 6.2 Direct form II structure, the transpose of Figure 6.1.]
The cascade form of the FIR transfer function is

$$H(z) = \begin{cases} h(0)\displaystyle\prod_{m=1}^{M/2}\left(1 + h(1m)z^{-1} + h(2m)z^{-2}\right) & \text{when } M \text{ is even} \\[2ex] h(0)\left(1 + h(10)z^{-1}\right)\displaystyle\prod_{m=1}^{(M-1)/2}\left(1 + h(1m)z^{-1} + h(2m)z^{-2}\right) & \text{when } M \text{ is odd} \end{cases} \qquad (6.6)$$
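The cascade decomposition can be checked by multiplying the section polynomials back together; the section coefficients below are illustrative, not taken from a designed filter:

```python
import numpy as np

# Cascade form for odd M = 5: a first-order section and two second-order sections.
h0 = 0.5
sections = [np.array([1.0, 0.7]),         # 1 + h(10) z^-1
            np.array([1.0, 0.6, 0.9]),    # 1 + h(11) z^-1 + h(21) z^-2
            np.array([1.0, -0.4, 0.2])]   # 1 + h(12) z^-1 + h(22) z^-2

overall = np.array([h0])
for s in sections:
    overall = np.convolve(overall, s)     # polynomial multiplication in z^-1
```

The convolution of the section coefficient vectors is exactly the coefficient vector of the product polynomial, so overall has degree 5 and leading coefficient h(0).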
[Figure: cascade realization of H (z) in (6.6) with first- and second-order sections.]
Since the polynomials in the square brackets contain only even powers of z−1,
we denote them as A0 (z2 ) and z−1 A1 (z2 ). Hence we express H1 (z) =
A0 (z2 ) + z−1 A1 (z2 ). A block diagram showing this realization is presented in
Figure 6.5(a), where the two functions A0 (z2 ) and A1 (z2 ) are subfilters connected
in parallel.
[Figure 6.5: (a) parallel connection of the subfilters A0(z²) and A1(z²); (b) parallel connection of the subfilters B0(z³), B1(z³), and B2(z³) (diagrams not reproduced).]
[Figure 6.6: canonic realization in which the subfilters A0(z²) and z^{-1}A1(z²) share the delay elements; multipliers h(0)–h(8) tap a common delay line (diagram not reproduced).]
These filters can be realized in either the direct form I or direct form II as
described earlier and illustrated in Figures 6.1 and 6.2, respectively. But there
would be 8 unit delays in building A0 (z) and 7 unit delays in z−1 A1 (z), which
adds up to 15 unit delay elements. We prefer to realize a circuit that would
require a minimum number of unit delays that is equal to the order of the filter.
A realization that contains the minimum number of delays is defined as a canonic
realization. To reduce the total number of delays to 8, we cause the two subfilters
to share the unit delays in order to get a canonic realization. Such a circuit
realization is shown in Figure 6.6.
Example 6.4
Consider the same example and decompose (6.7) as the sum of three terms:
H(z) = [h(0) + h(3)z^{-3} + h(6)z^{-6}] + z^{-1}[h(1) + h(4)z^{-3} + h(7)z^{-6}] + z^{-2}[h(2) + h(5)z^{-3} + h(8)z^{-6}]

     = B0(z³) + z^{-1} B1(z³) + z^{-2} B2(z³)    (6.9)

In general, the transfer function can be decomposed into M branches as

H(z) = Σ_{m=0}^{M−1} z^{-m} E_m(z^M)    (6.10)
where
E_m(z) = Σ_{n=0}^{(N+1)/M} h(Mn + m) z^{-n}    (6.11)
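The polyphase decomposition (6.10)-(6.11) can be checked numerically. The following Python sketch (an illustration, not part of the text) splits an arbitrary 9-tap impulse response into M = 3 subfilters E_m and verifies that Σ_m z^{-m} E_m(z^M) reproduces the original H(z).

```python
# Sketch of the polyphase decomposition: split h(n) into M subfilters
# E_m holding h(Mn + m), then rebuild the full-rate impulse response.
import numpy as np

h = np.arange(1.0, 10.0)          # h(0)..h(8), arbitrary example values
M = 3
E = [h[m::M] for m in range(M)]   # E_m contains the taps h(Mn + m)

rebuilt = np.zeros_like(h)
for m in range(M):
    up = np.zeros(len(E[m]) * M)  # E_m(z^M): insert M-1 zeros between taps
    up[::M] = E[m]
    up = np.concatenate([np.zeros(m), up])[:len(h)]  # apply the z^-m delay
    rebuilt += up

ok = np.allclose(rebuilt, h)
```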
[Three-branch realization of (6.9), with the subfilters sharing the z^{-3} delay elements and multipliers h(0)–h(8) (diagram not reproduced).]
Example 6.5
By sharing the multipliers, we get the realization shown in Figure 6.9, which
uses only four multipliers. It is still a canonic realization that uses six delay
elements.
[Shared-multiplier realization described in Example 6.5, with multipliers h(0)–h(8) and shared z^{-1} and z^{-3} delay elements (diagram not reproduced).]
[Diagram not reproduced.]
Figure 6.9 Direct-form structure of type I linear phase FIR filter function H3 (z).
[Diagram not reproduced.]
Figure 6.10 Direct-form structure of type II linear phase FIR filter function H4 (z).
Example 6.6
This is realized by the canonic circuit shown in Figure 6.10, thereby reducing
the total number of multipliers from 7 to 4. Similar cost saving is achieved in
the realization of FIR filters with antisymmetric coefficients.
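The multiplier saving can be demonstrated in a few lines. In this Python sketch (an illustration with hypothetical symmetric taps, not the author's code), the delayed input samples are folded, that is, added in symmetric pairs, before multiplication, so an order-M type I linear phase filter needs only M/2 + 1 multiplications per output sample; the result is compared against ordinary convolution.

```python
# Sketch: folded implementation of a symmetric (type I) linear phase FIR
# filter, h(n) = h(M - n).  Taps and input are arbitrary examples.
import numpy as np

h = np.array([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0])  # symmetric, order M = 6
x = np.arange(10.0)

def folded_fir(h, x):
    M = len(h) - 1                                 # filter order, even here
    xp = np.concatenate([np.zeros(M), x])          # zero initial conditions
    y = np.zeros(len(x))
    for n in range(len(x)):
        s = xp[n + M - np.arange(M + 1)]           # s[k] = x(n - k)
        folded = s[:M // 2] + s[M:M // 2:-1]       # pair x(n-k) with x(n-(M-k))
        y[n] = np.dot(h[:M // 2], folded) + h[M // 2] * s[M // 2]
    return y

y_folded = folded_fir(h, x)
y_direct = np.convolve(h, x)[:len(x)]
ok = np.allclose(y_folded, y_direct)
```

Here only four products (h(0), h(1), h(2), and the center tap h(3)) are formed per output, matching the reduction from 7 to 4 multipliers described above.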
The transfer function (6.1) of an IIR filter is the ratio of a numerator polynomial
to a denominator polynomial. First we decompose it as the product of an all-pole
function H1 (z) and a polynomial H2 (z)
H(z) = (Σ_{n=0}^{M} b(n) z^{-n}) / (1 + Σ_{n=1}^{N} a(n) z^{-n})    (6.16)

     = H1(z) H2(z) = [1 / (1 + Σ_{n=1}^{N} a(n) z^{-n})] [Σ_{n=0}^{M} b(n) z^{-n}]    (6.17)
and construct a cascade connection of an FIR filter H2 (z) and the all-pole IIR
filter H1(z). Again we select an example to illustrate the method. Let H2(z) = b(0) + b(1)z^{-1} + b(2)z^{-2} + b(3)z^{-3} and

H1(z) = 1 / (1 + a(1)z^{-1} + a(2)z^{-2} + a(3)z^{-3})
The realization of H1 (z) in direct form I is shown in Figure 6.11 as the filter
connected in cascade with the realization of the FIR filter H2 (z) also in direct
form I structure. The structure for the IIR filter is also called a direct form
I because the gain constants of the multipliers are directly available from the
coefficients of the transfer function.
We note that H1 (z) = V (z)/X(z) and H2 (z) = Y (z)/V (z). We also note that
the signals at the output of the three delay elements of the filter for H1 (z) are
[Figure 6.11: cascade of the all-pole filter H1(z) (multipliers −a(1), −a(2), −a(3)) and the FIR filter H2(z) (multipliers b(0)–b(3)), each in direct form I, with the signals v(n−1), v(n−2), v(n−3) at the delay outputs (diagram not reproduced).]
[Figure 6.12: direct form II realization in which the two filters share one set of delay elements (diagram not reproduced).]
the same as those at the output of the three delay elements of filter H2 (z). Hence
we let the two circuits share one set of three delay elements, thereby reducing
the number of delay elements. The result of merging the two circuits is shown
in Figure 6.12 and is identified as the direct form II realization of the IIR filter.
Its transpose is shown in Figure 6.13. Both of them use the minimum number
of delay elements equal to the order of the IIR filter and hence are canonic
realizations.
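For concreteness, here is a Python sketch (an illustration, not the author's code) of the direct form II computation of Figure 6.12: a single delay line holds the intermediate signal v(n), which feeds both the recursive part (through the −a(k) multipliers) and the nonrecursive part (through the b(k) multipliers). The coefficients in the quick check are arbitrary.

```python
# Sketch of direct form II: one shared delay line stores v(n-1)..v(n-N).
def df2(b, a, x):
    """a[0] must be 1; len(b) == len(a) for simplicity."""
    N = len(a) - 1
    v = [0.0] * N                 # shared delay line, v(n-1)..v(n-N)
    y = []
    for xn in x:
        vn = xn - sum(a[k] * v[k - 1] for k in range(1, N + 1))
        yn = b[0] * vn + sum(b[k] * v[k - 1] for k in range(1, N + 1))
        y.append(yn)
        v = [vn] + v[:-1]         # shift the delay line
    return y

# Quick check: H(z) = (1 + z^-1)/(1 - 0.5 z^-1) has h = 1, 1.5, 0.75, ...
y = df2([1.0, 1.0, 0.0], [1.0, -0.5, 0.0], [1.0, 0.0, 0.0])
```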
The two filters realizing H1 (z) and H2 (z) can be cascaded in the reverse order
[i.e., H (z) = H2 (z)H1 (z)], and when their transpose is obtained, we see that
the three delay elements of H2 (z) can be shared with H1 (z), and thus another
realization identified as direct form I as well as its transpose can be obtained.
The filter function (6.16) can be decomposed as the product of transfer functions
in the form
H(z) = [N1(z) N2(z) ··· NK(z)] / [D1(z) D2(z) D3(z) ··· DK(z)]    (6.18)

     = [N1(z)/D1(z)] [N2(z)/D2(z)] [N3(z)/D3(z)] ··· [NK(z)/DK(z)]    (6.19)

     = H1(z) H2(z) H3(z) ··· HK(z)    (6.20)
[Figure 6.13: transpose of the direct form II realization, with multipliers b(0)–b(3) and −a(1), −a(2), −a(3) (diagram not reproduced).]
where K = N/2 when N is even and the polynomials D1 (z), D2 (z), D3 (z), and
so on are second-order polynomials, with complex zeros appearing in conjugate
pairs in any such polynomial. When N is odd, K = (N − 1)/2, and one of the
denominator polynomials is a first-order polynomial. The numerator polynomials
N1 (z), N2 (z), . . . may be first-order or second-order polynomials or a constant:
H(z) = H0 [(1 + β11 z^{-1}) / (1 + α11 z^{-1})] ∏_k [(1 + β1k z^{-1} + β2k z^{-2}) / (1 + α1k z^{-1} + α2k z^{-2})]    (6.21)
Each of the transfer functions H1 (z), H2 (z), . . . , HK (z) is realized by the direct
form I or direct form II or their transpose structures and then connected in
cascade. They can also be cascaded in many other sequential orders, for example,
H (z) = H1 (z)H3 (z)H5 (z) . . . or H (z) = H2 (z)H1 (z)H4 (z)H3 (z) . . . .
There are more choices in the realization of H (z) in the cascade connection in
addition to those indicated above. We can pair the numerators N1 (z), N2 (z), . . .
and denominators D1 (z), D2 (z), D3 (z), . . . in many different combinations; in
other words, we can pair the poles and zeros of the polynomials in different
Example 6.9
H(z) = z(0.16z − 0.18) / [(z − 0.2)(z + 0.1)(z + 0.4)(z² + z + 0.5)]    (6.22)
Let us choose the last expression, (6.24), and rewrite it in inverse powers of z,
as given by
H(z^{-1}) = [z^{-1} / (1 + z^{-1} + 0.5z^{-2})] [z^{-1} / (1 + 0.4z^{-1})] [(0.16z^{-1} − 0.18z^{-2}) / (1 − 0.1z^{-1} − 0.02z^{-2})]    (6.25)
[Figure 6.14: cascade realization of (6.25), with multipliers −1, −0.5, −0.4, 0.1, 0.02, 0.16, and −0.18 (diagram not reproduced).]
The IIR transfer function can also be expanded as the sum of second-order
structures. It is decomposed into its partial fraction form, combining the terms
with complex conjugate poles together such that we have an expansion with real
coefficients only. We will choose the same example as (6.22) to illustrate this
form of realization.
One form of the partial fraction expansion of (6.22) is
H(z) = R1/(z + 0.1) + R2/(z − 0.2) + R3/(z + 0.4) + (R4 z + R5)/(z² + z + 0.5)

     = R1 z^{-1}/(1 + 0.1z^{-1}) + R2 z^{-1}/(1 − 0.2z^{-1}) + R3 z^{-1}/(1 + 0.4z^{-1}) + (R4 z^{-1} + R5 z^{-2})/(1 + z^{-1} + 0.5z^{-2})    (6.31)
which gives rise to additional structures.
So, the transfer function given by (6.22) was decomposed in the form of (6.25)
and realized by the cascade structure shown in Figure 6.14; it was decomposed in
the form of (6.30) and realized by the parallel connection in the structure shown
in Figure 6.15.
The algorithm used to implement the structure in Figure 6.14 is of the form
y1(n) = x(n − 1) − y1(n − 1) − 0.5 y1(n − 2)
y2(n) = y1(n − 1) − 0.4 y2(n − 1)
y(n) = 0.16 y2(n − 1) − 0.18 y2(n − 2) + 0.1 y(n − 1) + 0.02 y(n − 2)

[Figure 6.15: parallel-form realization of the partial fraction expansion (6.30) (diagram not reproduced).]
whereas the algorithm employed to implement the structure shown in Figure 6.15
has the form
y1(n) = x(n) + 0.1 y1(n − 1) + 0.02 y1(n − 2)
y2(n) = 4.2 y1(n) − 1.117 y1(n − 1)
y3(n) = x(n) − 0.4 y3(n − 1)
y4(n) = x(n) − y4(n − 1) − 0.5 y4(n − 2)
y5(n) = 1.1314 y4(n) − 0.1594 y4(n − 1)
y(n) = y3(n) + y2(n) + 5.2134 y5(n)
Remember that under ideal conditions both algorithms give the same output
for a given input signal and the two structures realize the same transfer function
(6.22). But when the two algorithms have to be programmed and implemented
by hardware devices, the results would be very different and the accuracy of
the resulting output, the speed of execution, and the throughput would depend not only on the finite wordlength but also on many other factors, including the architecture of the DSP chip, the number of program instructions per cycle, and the dynamic range of the input signal. We will discuss these factors
in a later chapter.
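Under infinite precision, the equivalence of the structures can be confirmed numerically. The Python sketch below (an illustration, not part of the text) filters an impulse through the three sections of (6.25) in cascade and compares the result with a single direct-form filter whose polynomials are the products of the section polynomials.

```python
# Sketch: cascade of the three sections of (6.25) versus the equivalent
# single direct-form filter, compared on an impulse input.
import numpy as np

secs = [([0.0, 1.0],         [1.0, 1.0, 0.5]),     # z^-1/(1 + z^-1 + 0.5 z^-2)
        ([0.0, 1.0],         [1.0, 0.4]),          # z^-1/(1 + 0.4 z^-1)
        ([0.0, 0.16, -0.18], [1.0, -0.1, -0.02])]  # (0.16 z^-1 - 0.18 z^-2)/(...)

def lfilt(b, a, x):           # minimal direct-form filter, a[0] == 1
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0) \
             - sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
    return y

x = np.zeros(40); x[0] = 1.0
y_cascade = x
for b, a in secs:
    y_cascade = lfilt(b, a, y_cascade)

b_all, a_all = [1.0], [1.0]
for b, a in secs:
    b_all = np.convolve(b_all, b); a_all = np.convolve(a_all, a)
y_overall = lfilt(b_all, a_all, x)
ok = np.allclose(y_cascade, y_overall)
```

With finite wordlength the two computations would no longer agree exactly, which is precisely the point made above.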
Next in importance is the structure shown in Figure 6.16. The transfer function
G(z) = Y (z)/X(z) is given by 12 [A1 (z) + A2 (z)], where A1 (z) and A2 (z) are
the allpass filters connected in parallel. But in this figure, there is another trans-
fer function, H (z) = V (z)/X(z), which is given by H (z) = 12 [A1 (z) − A2 (z)].
The structure shown in Figure 6.16 is also called the lattice structure or lattice-
coupled allpass structure by some authors. A typical allpass filter function is of
the form
A(z) = N(z)/D(z) = ± (a_n + a_{n−1}z^{-1} + a_{n−2}z^{-2} + ··· + a_1 z^{-(n−1)} + a_0 z^{-n}) / (a_0 + a_1 z^{-1} + a_2 z^{-2} + ··· + a_{n−1} z^{-(n−1)} + a_n z^{-n})    (6.32)
which shows that the order of the coefficients in the numerator is the reverse of
that in the denominator, when both the numerator and denominator polynomial
are expressed in descending powers of z. Equation (6.32) can be expressed in another form as A(z) = ± z^{-n} D(z^{-1}) / D(z).
The zeros of the numerator polynomial D(z−1 ) are the reciprocals of the zeros
of the denominator D(z), and therefore the numerator polynomial D(z−1 ) is the
mirror image polynomial of D(z).
When the allpass filter has all its poles inside the unit circle in the z plane, it is a stable function, and its zeros lie outside the unit circle as a result of the mirror image relationship between the numerator and the denominator.

[Figure 6.16: two allpass filters A1(z) and A2(z) connected in parallel on the input X(z), producing the outputs Y(z) = G(z)X(z) and V(z) = H(z)X(z) (diagram not reproduced).]
But the phase response (and the group delay) is dependent on the coefficients of the allpass filter. We know that the phase response of a filter designed to approximate a specified magnitude response is a nonlinear function of ω, and therefore its group delay is far from a constant value. When an allpass filter is cascaded with such a filter, the resulting filter has a frequency response H1(e^{jω})A(e^{jω}) = |H1(e^{jω})| |A(e^{jω})| e^{j[θ(ω)+φ(ω)]} = |H1(e^{jω})| e^{j[θ(ω)+φ(ω)]}. So the
magnitude response does not change when the IIR filter is cascaded with an all-
pass filter, but its phase response θ (ω) changes by the addition of the phase
response φ(ω) contributed by the allpass filter. The allpass filters A(z) are there-
fore very useful for modifying the phase response (and the group delay) of filters
without changing the magnitude of a given IIR filter H1 (z), when they are cas-
caded with H1 (z). However, the method used to find the coefficients of the allpass
filter A(z) such that the group delay of H1 (z)A(z) is a very close approximation
to a constant in the passband of the filter H1 (z) poses a highly nonlinear problem,
and only computer-aided optimization has been utilized to solve this problem.
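The magnitude-preserving property is easy to verify numerically; in the Python sketch below (an illustration; the first-order allpass with d = 0.4 and the filter H1 are arbitrary examples), the cascade H1(z)A(z) is sampled on the unit circle and its magnitude is compared with |H1(e^{jω})|.

```python
# Sketch: cascading H1(z) with an allpass A(z) changes only the phase.
import numpy as np

w = np.linspace(0, np.pi, 64)
zi = np.exp(-1j * w)                   # z^-1 evaluated on the unit circle

A = (0.4 + zi) / (1 + 0.4 * zi)        # first-order allpass, |A| = 1
H1 = (1 + zi) / (1 - 0.5 * zi)         # an arbitrary stable IIR filter

mag_unchanged = np.allclose(np.abs(H1 * A), np.abs(H1))
```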
Normally IIR filters are designed from specifications for the magnitude only, and the resulting group delay is far from a linear function of frequency. There are many applications that call for a constant group delay or a linear phase response, and in
such cases, the filters are cascaded with an allpass filter that does not affect its
magnitude—except by a constant—but is designed such that it compensates for
the phase distortion of the IIR filter. Allpass filters designed for this purpose are
cascaded with the IIR filters and are known as delay equalizers.
An important property of allpass filters is that if the wordlength of the coefficients changes, the magnitude response of an allpass filter does not change at any frequency. Recall that a second-order allpass filter was analyzed in Chapter 2,
and that if the transfer function of an allpass filter is of a higher order, it can
be realized by cascading second-order filters and possibly one first-order allpass
filter. We illustrate a few more structures that realize first-order allpass trans-
fer functions as well as second-order allpass functions later in the chapter, in
Figures 6.23 and 6.24.
Let us assume that the two allpass filters shown in Figure 6.16 are of order
(N − r) and r, respectively, and are given by

A1(z) = z^{-(N−r)} D1(z^{-1}) / D1(z)    (6.35)

and

A2(z) = z^{-r} D2(z^{-1}) / D2(z)    (6.36)
Substituting them in G(z) = 12 [A1 (z) + A2 (z)] and H (z) = 12 [A1 (z) − A2 (z)],
we get
G(z) = (1/2) [z^{-(N−r)} D1(z^{-1}) D2(z) + z^{-r} D2(z^{-1}) D1(z)] / [D1(z) D2(z)]    (6.37)

and

H(z) = (1/2) [z^{-(N−r)} D1(z^{-1}) D2(z) − z^{-r} D2(z^{-1}) D1(z)] / [D1(z) D2(z)]    (6.38)
If we denote

G(z) = P(z)/D(z) = (Σ_{n=0}^{N} p_n z^{-n}) / D(z)    (6.39)

and

H(z) = Q(z)/D(z) = (Σ_{n=0}^{N} q_n z^{-n}) / D(z)    (6.40)
then it can be shown that the following conditions are satisfied by (6.37) and (6.38).

Property 6.1 P(z^{-1}) = z^N P(z). Hence p_n = p_{N−n}; that is, the coefficients of P(z) are symmetric.

Property 6.2 Q(z^{-1}) = −z^N Q(z). Hence q_n = −q_{N−n}; that is, the coefficients of Q(z) are antisymmetric.
Property 6.3 P(z)P(z^{-1}) + Q(z)Q(z^{-1}) = D(z)D(z^{-1}); in other words, G(z) and H(z) are said to form a power complementary pair.
In the next chapter the structure for realizing G(ej ω ) will be termed a lattice-
coupled allpass filter and because of the property stated here, the structure for
realizing H (ej ω ) will be called a lattice-coupled allpass power complementary
filter.
Property 6.4

|G(e^{jω})| = (1/2) |e^{jθ1(ω)} + e^{jθ2(ω)}| = (1/2) |1 + e^{j(θ1(ω) − θ2(ω))}| ≤ 1    (6.42)
In the following analysis, we will assume that the four conditions described above are satisfied by G(z) and H(z) and derive the result that they can be obtained in the form G(z) = (1/2)[A1(z) + A2(z)] and H(z) = (1/2)[A1(z) − A2(z)].
Consider Property 6.3: P(z)P(z^{-1}) + Q(z)Q(z^{-1}) = D(z)D(z^{-1}). Using Properties 6.1 and 6.2, we get

[P(z) + Q(z)][P(z) − Q(z)] = z^{-N} D(z) D(z^{-1})    (6.49)

Furthermore, since the same properties give P(z) − Q(z) = z^{-N}[P(z^{-1}) + Q(z^{-1})], the zeros of [P(z) − Q(z)] are reciprocals of the zeros of [P(z) + Q(z)].
It has been found that the Butterworth, Chebyshev, and elliptic lowpass filters
of odd order satisfy the four properties described above. We know from Chapter 4
that their transfer function G(z) obtained from the bilinear transformation of
the analog lowpass prototype filters has no poles on the unit circle. In other
words, the zeros of D(z) are within the unit circle, and therefore the zeros of
D(z−1 ) are outside the unit circle, because they are the reciprocals of the zeros
of D(z). From (6.49) we see that the zeros of [P (z) + Q(z)] and [P (z) − Q(z)]
cannot lie on the unit circle. Let us assume that [P(z) + Q(z)] has r zeros z_k (k = 1, 2, ..., r) inside the unit circle and (N − r) zeros z_j (j = r + 1, ..., N) outside it. Thus we identify
P(z) + Q(z) = α ∏_{k=1}^{r} (1 − z^{-1} z_k) ∏_{j=r+1}^{N} (z^{-1} − z_j^{-1})    (6.52)

P(z) − Q(z) = (1/α) ∏_{k=1}^{r} (z^{-1} − z_k) ∏_{j=r+1}^{N} (1 − z^{-1} z_j^{-1})    (6.53)
Then

G(z) + H(z) = [P(z) + Q(z)] / D(z) = α ∏_{j=r+1}^{N} (z^{-1} − z_j^{-1}) / (1 − z^{-1} z_j^{-1}) = α A1(z)    (6.54)

G(z) − H(z) = [P(z) − Q(z)] / D(z) = (1/α) ∏_{k=1}^{r} (z^{-1} − z_k) / (1 − z^{-1} z_k) = (1/α) A2(z)    (6.55)
G(z) = (1/2) [A1(z) + A2(z)]    (6.56)

H(z) = (1/2) [A1(z) − A2(z)]    (6.57)

Thus we can decompose G(z) as the sum of two allpass functions, A1(z)/2 and A2(z)/2.
Once we have derived the two allpass functions, we easily obtain H (z) as the
difference of A1 (z)/2 and A2 (z)/2 and realize it by the structure of Figure 6.16.
Because of the complementary power property, we see that H (z) realizes a
highpass filter.
We know the numerator polynomial P(z) and the denominator polynomial D(z) of the filter function G(z), and hence we can compute the right side of Equation (6.58). Let us denote Q²(z) = Q(z)Q(z) as R(z) = Σ_{n=0}^{2N} r_n z^{-n}. The coefficients of R(z) = Q(z)Q(z) are computed by convolution of the coefficients of Q(z) with the coefficients of Q(z):

r_n = q_n ∗ q_n = Σ_{k=0}^{n} q_k q_{n−k}    (6.59)
We use the zeros of P(z) + Q(z) that are inside the unit circle to form the denominator polynomial of A2(z); by reversing the order of the coefficients of the polynomial having these zeros, we get the numerator of A2(z), which has the zeros at z_k^{-1}.
We identify the zeros of P (z) + Q(z) that are outside the unit circle as the
(N − r) zeros zj (j = r + 1, r + 2, . . . , N ) of A1 (z). By reversing the order of
the coefficients of the numerator polynomial having these zeros, we obtain the
denominator polynomial of A1 (z). It has (N − r) zeros at zj−1 as shown in (6.54).
This completes the design procedure used to obtain A1 (z) and A2 (z) from G(z).
An example is worked out in Section 6.5.
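The convolution (6.59) and the recursion that inverts it (the same recursion used in the MATLAB program of Section 6.5) can be sketched in Python as follows; the coefficients of Q(z) are arbitrary, with q_0 > 0 so that the square root picks the correct sign.

```python
# Sketch: recover the coefficients of Q(z) from R(z) = Q^2(z) using
#   q_0 = sqrt(r_0),  q_1 = r_1/(2 q_0),
#   q_n = (r_n - sum_{k=1}^{n-1} q_k q_{n-k}) / (2 q_0).
import numpy as np

q_true = np.array([1.0, -2.0, 0.5, 3.0])   # arbitrary coefficients of Q(z)
r = np.convolve(q_true, q_true)            # coefficients of R(z), as in (6.59)

N = len(q_true) - 1
q = np.zeros(N + 1)
q[0] = np.sqrt(r[0])
q[1] = r[1] / (2 * q[0])
for n in range(2, N + 1):
    term = sum(q[k] * q[n - k] for k in range(1, n))
    q[n] = (r[n] - term) / (2 * q[0])

ok = np.allclose(q, q_true)
```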
Figure 6.17 (a) Lattice structure for an FIR filter; (b) transpose of the lattice structure
for the FIR filter in (a).
In Figure 6.19a, the value of the ladder coefficient v5 happens to be zero for the numerical example, and therefore the multiplier v5 is absent. The lattice parameters are also known as the reflection coefficients, and it has been shown that the poles of the IIR filter function are inside the unit circle of the z plane if |ki| < 1. So this method can be used to test whether an IIR filter is stable.
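A sketch of that stability test in Python (the step-down recursion below is the standard one, not code from the text): convert the monic denominator into reflection coefficients and check that every |k_i| < 1.

```python
# Sketch: step-down (reverse Levinson) recursion from a monic denominator
# a = [1, a1, ..., aN] to reflection coefficients; stable iff all |k| < 1.
import numpy as np

def reflection_coeffs(a):
    a = np.asarray(a, dtype=float)
    ks = []
    while len(a) > 1:
        k = a[-1]                      # k_m is the last coefficient at order m
        ks.append(k)
        if abs(k) >= 1:
            break                      # already unstable; stop the recursion
        a = (a[:-1] - k * a[:0:-1]) / (1 - k * k)   # step down one order
    return ks[::-1]

def is_stable(a):
    return all(abs(k) < 1 for k in reflection_coeffs(a))

stable = is_stable([1.0, -0.5])        # pole at z = 0.5: stable
unstable = is_stable([1.0, -1.5])      # pole at z = 1.5: unstable
```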
Many of the computations involved in the realization of FIR and IIR filters as
presented in this chapter can be carried out by MATLAB functions. For example,
an FIR filter realization in the cascaded structure can be obtained by finding the
roots of the transfer function and then forming the second-order polynomials from each complex conjugate pair of roots or each pair of real roots.
To find the roots of a polynomial H(z) = Σ_{n=0}^{N} b(n) z^{-n}, we use the MATLAB function R = roots(b), where the vector b = [b(0), b(1), b(2), ..., b(N)] and R is the vector of the N roots. Choosing a pair of complex conjugate roots or a pair of real roots, we construct the second-order polynomials
using the MATLAB function Pk =poly(Rk ), where Rk is the list of two roots and
Pk is the vector of the coefficients of the second-order polynomial. Of course, if
H (z) is an odd-order polynomial, one first-order polynomial with a single real
root will be left as a term in the decomposition of H (z).
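The same computation can be sketched in Python with NumPy, whose np.roots and np.poly play the roles of the MATLAB roots and poly functions; the coefficient vector b below is an arbitrary example with one real root and one complex conjugate pair.

```python
# Sketch: factor an FIR polynomial into first- and second-order sections
# with np.roots/np.poly (NumPy's counterparts of MATLAB's roots and poly).
import numpy as np

b = [1.0, -1.0, 0.64, -0.24]       # H(z) = 1 - z^-1 + 0.64 z^-2 - 0.24 z^-3
R = np.roots(b)                    # the N roots of H(z)

cplx = [r for r in R if abs(r.imag) > 1e-9]   # complex conjugate pair
real = [r for r in R if abs(r.imag) <= 1e-9]  # real root(s)
P1 = np.real(np.poly(cplx))        # second-order factor
P2 = np.real(np.poly(real))        # first-order factor

recombined = np.polymul(P1, P2)    # should reproduce b
ok = np.allclose(recombined, b)
```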
Example 6.11
0.2682 + 0.8986i
0.2682 - 0.8986i
0.3383 + 0.6284i
0.3383 - 0.6284i
0.4166
Then we continue
R1=[0.2682+0.8986*i 0.2682-0.8986*i];
P1=poly(R1)
P1=
1.0000 -0.5364 0.8794
R2=[0.3383+0.6284*i 0.3383-0.6284*i];
P2=poly(R2)
P2 =
1.0000 -0.6766 0.5093
From the output data for the coefficients of P1 and P2 displayed above, we
construct the polynomial
Example 6.12
Consider the same FIR filter as given in Example 6.11. We use the simple
MATLAB function k = tf2latc(b) to get the vector output k listing the reflec-
tion coefficients ki , i = 1, 2, 3, 4, 5, where b is the vector of the coefficients given
in Example 6.11.
The vector output k for the lattice coefficients is
-0.3597
0.9325
-0.5745
0.5238
-0.1866
and the structure of the lattice realization for the FIR filter or the MA model is
shown in Figure 6.18, where the lattice coefficients are as listed above.
[Figure 6.18: lattice structure for the FIR (MA) filter, with reflection coefficients K1–K5 (diagram not reproduced).]
Example 6.13
To get a cascade realization of an IIR filter, one could factorize both the numerator
and the denominator as the product of second-order polynomials (and possibly
one first-order polynomial) as illustrated in Example 6.9. Another approach is to
use the MATLAB functions tf2zp and zp2sos as explained below.
First we use the function [z,p,k] = tf2zp(num,den) to get the output
vector [z,p,k], which lists the zeros, poles, and the gain constant for the IIR
filter. Then the function sos = zp2sos(z,p,k) gives the coefficients of the
second-order polynomials of each section in a matrix of order L × 6 in the
following format:
⎡ n01  n11  n21  d01  d11  d21 ⎤
⎢ n02  n12  n22  d02  d12  d22 ⎥
⎢  ·    ·    ·    ·    ·    ·  ⎥
⎣ n0L  n1L  n2L  d0L  d1L  d2L ⎦
The six elements in each row define the transfer function of each second-order
section Hi (z) used in the product form as indicated below:
H(z) = ∏_{i=1}^{L} Hi(z) = ∏_{i=1}^{L} (n0i + n1i z^{-1} + n2i z^{-2}) / (d0i + d1i z^{-1} + d2i z^{-2})
These two MATLAB functions can be used to factorize an FIR function also.
Instead of the algorithm described above, we treat the polynomial H(z) of the FIR filter as the denominator polynomial of an IIR filter and set the numerator to unity.
To illustrate this, let us consider the previous example and run the two functions
in the following MATLAB script:
num=1;
den=b
[z,p,k] = tf2zp(num,den);
sos = zp2sos(z,p,k)
sos =
0 0.5089 0 1.0000 -0.4166 0
0 0 1.0000 1.0000 -0.6766 0.5094
0 0 1.0000 1.0000 -0.5363 0.8794
Using the entries in this sos matrix, we write the factorized form of H (z) as
follows:
H(z) = [0.2545z^{-1} / (1 − 0.4166z^{-1})] × [(1 + 0.8204z^{-1} + 0.6247z^{-2}) / (1 − 0.6766z^{-1} + 0.5094z^{-2})] × [(1 − 0.4204z^{-1} + 0.3201z^{-2}) / (1 − 0.5363z^{-1} + 0.8794z^{-2})]    (6.64)
This agrees with the result of expressing H(z) as the ratio of a fourth-order numerator polynomial and a fifth-order denominator polynomial in positive powers of z. So care must be taken to express the transfer function in positive powers of z and then to check the results after constructing the factorized form, because the function zp2sos works only if the zeros are inside the unit circle of the z plane.
But the factorized form of H (z) constructed from the sos matrix leads us cor-
rectly to the next step of drawing the realization structures for each section, for
example, by the direct form, and connecting them in cascade. Such a realization
is similar to that shown in Figure 6.14.
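The grouping performed by zp2sos can be sketched in Python/NumPy as follows (an illustration of the idea, not MATLAB's actual algorithm), using the fifth-order denominator that appears in Example 6.14; conjugate pole pairs are combined into real second-order factors, and the product of the factors is checked against the original polynomial.

```python
# Sketch: group complex conjugate poles into real second-order factors.
import numpy as np

den = [1.9650, -3.2020, 4.4350, -3.1400, 1.5910, -0.3667]  # Example 6.14
p = np.roots(den)

sections = []
used = [False] * len(p)
for i, pi in enumerate(p):
    if used[i]:
        continue
    used[i] = True
    if abs(pi.imag) > 1e-9:            # complex pole: pair with its conjugate
        j = next(j for j in range(len(p))
                 if not used[j] and np.isclose(p[j], np.conj(pi)))
        used[j] = True
        sections.append(np.real(np.poly([pi, p[j]])))  # real quadratic factor
    else:                              # real pole: first-order factor
        sections.append(np.real(np.poly([pi])))

prod = np.array([1.0])                 # multiply the monic factors back out
for s in sections:
    prod = np.polymul(prod, s)
ok = np.allclose(prod * den[0], den)   # den = leading gain times the product
```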
Example 6.14
b =
0.5000 0.2000 0.3000 0 0.1000
a =
1.9650 -3.2020 4.4350 -3.1400 1.5910 -0.3667
[r,p,k]=residuez(b,a)
r =
-0.1632 - 0.1760i
-0.1632 + 0.1760i
0.1516 - 0.0551i
0.1516 + 0.0551i
0.2777
p =
0.2682 + 0.8986i
0.2682 - 0.8986i
0.3383 + 0.6284i
0.3383 - 0.6284i
0.4166
k =
[]
r1 =
[-0.1632 + 0.1760i -0.1632-0.1760i]
p1 =
0.2682 - 0.8986i 0.2682+0.8986i
r2 =
[0.1516 - 0.0551i 0.1516 + 0.0551i]
p2 =
0.3383 + 0.6284i 0.3383 - 0.6284i
[b1,a1]=residuez(r1,p1,0)
b1 =
-0.3264 0.4038 0
a1 =
1.0000 -0.5364 0.8794
[b2,a2]=residuez(r2,p2,0)
b2 =
0.3032 -0.0333 0
a2 =
1.0000 -0.6766 0.5093
The residue and the remaining real pole are 0.2777 and 0.4166, respectively. So we construct the transfer function H(z) as the sum of three terms.
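For distinct poles, the computation behind residuez can be sketched in Python/NumPy (an illustration, not the library's algorithm): each residue is H(z)(1 − p_i z^{-1}) evaluated at z = p_i. The example filter is a hypothetical one whose residues, 2 and −1, are known in closed form.

```python
# Sketch: residues of H(z) = B(z)/A(z) in z^-1, for distinct poles.
import numpy as np

def residues(b, a):
    """b, a: coefficients in powers of z^-1; poles must be distinct."""
    p = np.roots(a)                       # poles (a[0] need not be 1)
    r = []
    for i, pi in enumerate(p):
        zi = 1.0 / pi                     # value of z^-1 at the pole
        num = np.polyval(b[::-1], zi)     # B evaluated in powers of z^-1
        den = a[0] * np.prod([1 - pj * zi for j, pj in enumerate(p) if j != i])
        r.append(num / den)
    return r, p

# H(z) = 1/((1 - 0.5 z^-1)(1 - 0.25 z^-1)) has residues 2 and -1
b = [1.0]
a = np.convolve([1, -0.5], [1, -0.25])
r, p = residues(b, a)
```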
Example 6.15
Figure 6.19 (a) Lattice–ladder structure for an IIR filter (ARMA model); (b) lattice
structure for an all-pole IIR filter Y (z)/X(z) (AR model) and an allpass filter V (z)/X(z).
-0.5745
0.5238
-0.1866
v =
0.3831
0.3164
0.2856
0.1532
0.1000
0
Example 6.16
In order to illustrate the derivation of a lattice structure for an all-pole (AR model)
filter, we select a transfer function
H(z^{-1}) = 1 / (1 − 0.2051z^{-1} − 0.0504z^{-2} + 0.0154z^{-3})    (6.66)
and use the MATLAB function [k] = tf2latc(1,den) to get the vector output
for the lattice coefficients as shown below:
k = −0.2145
−0.0473
0.0154
The lattice structure for this filter is shown in Figure 6.19b, where H (z−1 ) =
Y (z)/X(z).
Suppose that we select an allpass transfer function Hap(z^{-1}) whose numerator is the mirror image of the denominator in (6.66), that is, Hap(z^{-1}) = (0.0154 − 0.0504z^{-1} − 0.2051z^{-2} + z^{-3}) / (1 − 0.2051z^{-1} − 0.0504z^{-2} + 0.0154z^{-3}), and use [k,v] = tf2latc(num,den) to obtain
k = −0.2145
−0.0473
0.0154
v = 0.0000
0.0000
0.0000
1.0000
Although this allpass transfer function has a numerator and a denominator and
hence is not an AR model, the lattice structure for realizing it is the same as the
lattice structure for an AR model in Figure 6.19b, but the output is V (z) and not
Y (z). Hence the allpass transfer function realized is Hap (z−1 ) = V (z)/X(z).
When we compare Figure 6.17 for the lattice structure for the third-order
FIR (MA) filter and Figure 6.19b for the lattice structure for the third-order
IIR all-pole (AR) or the allpass (AP) filter, carefully note the direction of the
multipliers and their signs, which are different. Also note that the output terminals
are different for the all-pole filter and allpass filters in Figure 6.19b.
if ftype==1
disp('Butterworth Lowpass Filter')
[N,Wn]=buttord(Wp,Ws,Ap,As);
M=mod(N,2);
if M==0
N=N+1
end
[b,a]=butter(N,Wn);
end
if ftype==2
disp('Chebyshev I Lowpass Filter')
[N,Wn]=cheb1ord(Wp,Ws,Ap,As);
M=mod(N,2);
if M==0
N=N+1
end
[b,a]=cheby1(N,Ap,Wn);
end
if ftype==3
disp('Chebyshev II Lowpass filter')
[N,Wn]=cheb2ord(Wp,Ws,Ap,As);
M=mod(N,2);
if M==0
N=N+1
end
[b,a]=cheby2(N,As,Wn);
end
if ftype==4
disp('Elliptic Lowpass Filter')
[N,Wn]=ellipord(Wp,Ws,Ap,As);
M=mod(N,2);
if M==0
N=N+1
end
[b,a]=ellip(N,Ap,As,Wn);
end
[h0,w]=freqz(b,a,256);
H0=abs(h0);
plot(w/pi,H0);grid
axis([0.0 1.0 0.0 1.0])
title('MAGNITUDE OF SPECIFIED LP FILTER')
ylabel('Magnitude')
xlabel('Normalized frequency')
% TO FIND Q(z)
k=sum(a)/sum(b);
b=b*k;
fliped_a=fliplr(a);
%R(z)=Q2(z)=P2(z)-z^-N D(z^-1)D(z)
R=conv(b,b)-conv(a,fliped_a);
% Calculate Q
Q(1)=R(1)^(0.5);
Q(2)=R(2)/(2*Q(1));
for n=2:N
term=0;
for k=1:n-1
term=Q(k+1)*Q(n-k+1)+term;
end
Q(n+1)=(R(n+1)-term)/(2*Q(1));
end
%Zeros of P+Q is calculated
j=1;
k=0;
P_plus_Q=b+Q;
zeros=roots(P_plus_Q);
for i=1:N
if abs(zeros(i))<1
zero_in(j)=zeros(i);
j=j+1;
else
k=k+1;
zero_out(k)=zeros(i);
end
end
A1N=poly(zero_out); %Numerator of A1(z)
A1D=fliplr(A1N); %Denominator of A1(z)
A1=tf(A1N,A1D,1);
A2D=poly(zero_in); %Denominator of A2(z)
A2N=fliplr(A2D); %Numerator of A2(z)
A2=tf(A2N,A2D,1);
G=0.5*(A1+A2); % LOWPASS FILTER FROM THE TWO ALLPASS FILTERS
[numlp,denlp]=tfdata(G,'v');
[h1,w]=freqz(numlp,denlp,256);
H1=abs(h1);
figure
plot(w/pi,H1);grid
axis([0.0 1.0 0.0 1.0])
title('MAGNITUDE OF LP FILTER FROM THE TWO ALLPASS FILTERS')
ylabel('Magnitude')
xlabel('Normalized frequency')
H=0.5*(A1-A2); % HIGHPASS FILTER FROM THE TWO ALLPASS FILTERS
[numhp,denhp]=tfdata(H,'v');
[h2,w]=freqz(numhp,denhp,256);
H2=abs(h2);
figure
plot(w/pi,H2);grid
axis([0.0 1.0 0.0 1.0])
title('MAGNITUDE OF HP FILTER FROM THE TWO ALL PASS FILTERS')
ylabel('Magnitude')
xlabel('Normalized frequency')
%END
Example 6.17
We illustrate the use of this program by taking the example of an elliptic lowpass
filter with the specifications Wp = 0.4, Ws = 0.6, Ap = 0.3, and As = 35, which
have been chosen only to highlight the passband and stopband responses. A
complete session for running this example is given below, including the three
magnitude response plots mentioned above:
4
Elliptic Lowpass Filter
N =
5
A1N
A1N =
1.0000 -1.3289 1.9650
A1D
A1D =
1.9650 -1.3289 1.0000
A2N
A2N =
-0.3667 1.1036 -0.9532 1.0000
A2D
A2D =
1.0000 -0.9532 1.1036 -0.3667
A1
Transfer function:
z^2 - 1.329 z + 1.965
-----------------------
1.965 z^2 - 1.329 z + 1
Sampling time: 1
A2
Transfer function:
-0.3667 z^3 + 1.104 z^2 - 0.9532 z + 1
--------------------------------------
z^3 - 0.9532 z^2 + 1.104 z - 0.3667
Sampling time: 1
Transfer function:
0.1397 z^5 + 0.1869 z^4 + 0.3145 z^3 + 0.3145 z^2 + 0.1869 z
+ 0.1397
-------------------------------------------------------------
1.965 z^5 - 3.202 z^4 + 4.435 z^3 - 3.14 z^2 + 1.591 z
- 0.3667
Sampling time: 1
Transfer function:
0.8603 z^5 - 2.469 z^4 + 4.021 z^3 - 4.021 z^2 + 2.469 z
- 0.8603
----------------------------------------------------------
1.965 z^5 - 3.202 z^4 + 4.435 z^3 - 3.14 z^2 + 1.591 z
- 0.3667
Sampling time: 1
We rewrite the transfer function G(z−1 ) in the following form for reference in
the next chapter:
G(z^{-1}) = [0.1397 (1 + 1.337z^{-1} + 2.251z^{-2} + 2.251z^{-3} + 1.337z^{-4} + z^{-5})] / [1.965 (1 − 1.629z^{-1} + 2.256z^{-2} − 1.597z^{-3} + 0.8096z^{-4} − 0.1866z^{-5})]    (6.68)
The magnitude response of the lowpass elliptic filter G(z), the magnitude
response of the parallel connection G(z) = 12 [A1 (z) + A2 (z)], and that of the
highpass filter H (z) = 12 [A1 (z) − A2 (z)] are shown in Figures 6.20, 6.21, and
6.22, respectively.
The two allpass filter functions (6.69) and (6.71) obtained in the example above are expressed in the form of (6.70) and (6.72), respectively. The function
A1 (z) can be realized in the direct form, and A2 (z) can be realized in many
of the structures that we have already discussed, for example, the direct form,
[Figure 6.20: magnitude response of the specified lowpass elliptic filter, magnitude versus normalized frequency (plot not reproduced).]
[Figure 6.21: magnitude response of the lowpass filter obtained from the two allpass filters in parallel (plot not reproduced).]
Figure 6.22 Magnitude response of a highpass filter from the two allpass filters in
parallel.
[Figure 6.23: first-order allpass structures (a)–(d), each built with the single multiplier d1 (diagrams not reproduced).]
parallel form, or lattice–ladder form. But the class of allpass functions of first
and second orders can be realized by many structures that employ the fewest
multipliers [1]. A few examples of first-order and second-order allpass filters
are shown in Figures 6.23 and 6.24, respectively. Their transfer functions are
respectively given by
AI(z) = (d1 + z^{-1}) / (1 + d1 z^{-1})

AII(z) = (d1 d2 + d1 z^{-1} + z^{-2}) / (1 + d1 z^{-1} + d1 d2 z^{-2})
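Both functions are allpass because the numerator coefficients are the denominator coefficients in reverse order; a quick numerical check (a Python sketch with arbitrary multiplier values d1 and d2):

```python
# Sketch: verify |AI(e^jw)| = |AII(e^jw)| = 1 at every frequency.
import numpy as np

w = np.linspace(0, np.pi, 64)
zi = np.exp(-1j * w)                   # z^-1 on the unit circle
d1, d2 = 0.5, -0.3                     # arbitrary multiplier values

AI = (d1 + zi) / (1 + d1 * zi)
AII = (d1 * d2 + d1 * zi + zi**2) / (1 + d1 * zi + d1 * d2 * zi**2)

ai_unit = np.allclose(np.abs(AI), 1.0)
aii_unit = np.allclose(np.abs(AII), 1.0)
```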
We choose the simpler structure of the second-order allpass filter from Figure 6.24a for A1(z) and A2(z), which requires fewer delay elements than do the remaining four
second-order structures. When these two allpass filters are connected in parallel
(as shown in Fig. 6.16), we get the structure shown in Figure 6.25 for the transfer
function G(z) of the fifth-order elliptic lowpass filter chosen in this example:
A1(z) = (z² − 1.329z + 1.965) / (1.965z² − 1.329z + 1)    (6.69)

      = (1 − 1.329z^{-1} + 1.965z^{-2}) / (1.965 − 1.329z^{-1} + z^{-2})

      = (0.5089 − 0.6763z^{-1} + z^{-2}) / (1 − 0.6763z^{-1} + 0.5089z^{-2})    (6.70)
[Figure 6.24: second-order allpass structures (a)–(d), each built with the two multipliers d1 and d2 (diagrams not reproduced).]
Figure 6.25 A fifth-order elliptic lowpass IIR filter realized as the parallel connection
of two allpass structures.
A2(z) = [(0.8805 − 0.5368z^{-1} + z^{-2}) / (1 − 0.5367z^{-1} + 0.8805z^{-2})] [(−0.4165 + z^{-1}) / (1 − 0.4165z^{-1})]    (6.73)
Instead of designing the allpass functions found in Figures 6.24 and 6.25, we
can design them in the form of lattice allpass structures as described earlier.
The lattice coefficients for the second-order filter A1 (z) and the third-order filter
A2 (z) are found by using the MATLAB function [k,v] = tf2latc(num,den),
and after obtaining the lattice structures for them, they are connected in parallel
as shown in Figure 6.27.
K2 =
-0.3385
0.8717
-0.3667
V2 =
0
0
0
1
The circuit realizing the third-order transfer function A2(z) in the form of a lattice–ladder structure is shown in Figure 6.26, where the ladder coefficients are V0 = V1 = V2 = 0 and V3 = 1, as shown by the vector V2 above.
The circuit realizing the fifth-order lowpass elliptic filter as the parallel connec-
tion of two allpass filters A1 (z) and A2 (z), each realized by the lattice structures,
is shown in Figure 6.27.
Now let us compare the different circuits that we have designed to realize
a lowpass fifth-order, IIR filter. All of these circuits have been designed to
meet the following same specifications—Wp = 0.4, Ws = 0.6, Ap = 0.3, and
As = 35—and have been realized by a cascade connection, a parallel con-
nection, and a lattice–ladder connection as shown in Figures 6.14, 6.15, and
6.19, respectively. They use more than the minimum number of five multipliers,
whereas the lattice-coupled allpass filter shown in Figure 6.25 uses five multipli-
ers—disregarding the multipliers with a gain of −1 or 12 because they represent
[Figure 6.26: lattice–ladder structure realizing the third-order allpass function A2(z), with lattice coefficients K1–K3 and ladder coefficients V0–V3 (diagram not reproduced).]
Figure 6.27 Two lattice–ladder allpass structures connected in parallel to realize a fifth-
order lowpass elliptic filter.
minor operations on binary numbers. The direct-form IIR filter for a fifth-order
filter would also require more than five multipliers, whereas the filter shown in
Figure 6.27, which has lattice–ladder coupled allpass filters, requires 10 multipliers. Therefore we conclude that the parallel connection of allpass filters as shown in Figure 6.25 requires the minimum number of multipliers and thus offers an advantage over the other structures.
The realization of IIR filters as a parallel connection of allpass filters has
another advantage, as explained below. It was pointed out that the magnitude
response of allpass filters does not change when the multiplier constants are
quantized to finite wordlength. The other advantage is that there are many struc-
tures for realizing allpass filters that contain a minimum number of multipliers
(and delay elements). In the method of realizing the lowpass filter by a connec-
tion of two allpass filters in parallel, we used Property (6.4) in Equation (6.42)
which is reproduced below:
|G(e^jω)| = (1/2)|e^jθ1(ω) + e^jθ2(ω)| = (1/2)|1 + e^j(θ1(ω)−θ2(ω))| ≤ 1
This shows that the lowpass filter containing the two allpass structures in parallel
has a magnitude response equal to or less than unity. The magnitude response
in Figure 6.28 illustrates this property in the passband and attains the maximum
value at three frequencies in the passband, which are marked by arrows. As
346 FILTER REALIZATIONS
[Figure 6.28 plot: passband magnitude response, magnitude 0.91–1.01 versus frequency 1–10 kHz.]
long as the allpass filters maintain a constant magnitude at all frequencies and
remain stable when their multiplier constants change in wordlength, the magnitude
response of the lowpass filter cannot exceed this constant at these three frequen-
cies, where the derivative of the magnitude response is zero. Hence for small
changes in wordlength (e.g., by 1 or 2 bits), the change in magnitude response
at these frequencies is almost zero. At other frequencies in the passband, the
change in magnitude is also expected to be small, if not zero. Simulation of
their performance with small changes in wordlength has verified that the change
in their magnitude response is significantly smaller than that displayed by the
other structures. This shows that the structure of allpass filters in parallel has
many advantages compared to the other structures that have been proposed for
realizing IIR filters. In the next chapter, where the effect of finite wordlength is
studied in greater detail, the structure for allpass filters in parallel will be called
lattice-coupled allpass structure. But these structures can be used to design only
lowpass filters (of odd order) whereas the lattice and lattice–ladder structures
can realize any transfer function in general.
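The bound above can be checked numerically. The sketch below (Python rather than MATLAB, using the four-digit allpass coefficients of A1(z) and A2(z) quoted in Chapter 7 for this same fifth-order elliptic design) evaluates |G(e^jω)| = (1/2)|A1 + A2| on a dense grid; because the printed coefficients are rounded to four digits, A2 is allpass only approximately, so the unity bound holds only to within about 10^-4:

```python
import cmath
import math

def parallel_allpass_gain(w):
    # |G(e^{jw})| for G = (A1 + A2)/2, with the four-digit coefficients
    # printed in the text; z below is z^{-1} evaluated on the unit circle
    z = cmath.exp(-1j * w)
    a1 = (0.5089 - 0.6763 * z + z * z) / (1 - 0.6763 * z + 0.5089 * z * z)
    a2 = ((0.8805 - 0.5368 * z + z * z) / (1 - 0.5367 * z + 0.8805 * z * z)
          * (-0.4165 + z) / (1 - 0.4165 * z))
    return 0.5 * abs(a1 + a2)

# sample the magnitude response from 0 to pi
gains = [parallel_allpass_gain(math.pi * k / 1024) for k in range(1025)]
```

The maximum of `gains` stays at (essentially) unity, while the stopband values drop to the 35 dB level, confirming the structural bound.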
6.6 SUMMARY
When we have obtained the transfer functions of FIR and IIR filters that
approximate a given set of specifications—as explained in the previous two
chapters—our next step is to choose the best structures that would meet some
important criteria before the algorithm in the time domain can be programmed
or a filter can be designed and built in hardware. It is obvious that the algorithm
for implementing a filter will depend on the particular structure being considered
to realize it. Under ideal assumptions that the magnitude of the input signals and
the values of the multiplier constants are available with infinite precision, any
one of the several alternative structures will realize the transfer function. But
when they are expressed with a finite number of bits, their actual performance
may be quite different, particularly when they are represented by a fixed-point
binary representation. So it is necessary to investigate in great detail their perfor-
mance in the time domain and the frequency domain and compare them. Some
of the performance criteria used for comparison are the degradation in
the frequency response, the stability and potential limit cycles, the complexity of
the algorithm flow control, and the number of multiplications and additions per
output sample. Extensive simulation on a computer is essential to address these
issues before we choose a few structures for further investigation.
In this chapter, we discussed several structures to realize the FIR and IIR filters
and commented on the effects of finite wordlength. More detailed discussion of
this criterion and other issues will be included in the next chapter.
PROBLEMS
6.1 Draw the direct form and the cascade form of the FIR filter with the
following transfer function:
6.2 Find the polyphase structure for the FIR filter in Problem 6.1 and its
transpose.
6.3 Determine the transpose of the direct-form structure realizing the FIR filter
6.4 Determine the polyphase structure for the FIR filter given in Problem 6.3.
6.5 Find the polyphase structure for the FIR filter
6.6 Obtain the transfer functions H1 (z) = Y (z)/X(z) and H2 (z) = G(z)/X(z)
of the lattice circuit shown in Figure 6.29.
6.7 Draw the direct form and transpose of the circuit shown in Figure 6.29.
6.8 (a) Derive the transfer function H1 (z) = Y (z)/X(z) of the lattice structure
shown in Figure 6.30.
[Figure 6.29 Lattice structure with coefficients K1 and K2, input X(z), output Y(z).]
[Figure 6.30 Lattice structure with coefficients ±K1 and ±K2, input X(z), output Y(z).]
(b) Derive the transfer function G(z) = G2(z)/Y(z) and show that the
transfer function H2(z) = G2(z)/X(z) is an allpass function.
(c) If the transfer function for the lattice structure shown in Figure 6.30
is H1(z) = 1/(1 + 1.38z^−1 + 1.3z^−2), what are the values of K1 and
K2?
6.9 Draw the transpose of the lattice structure shown in Figure 6.29.
6.10 Plot the unit pulse response of the filter shown in Figure 6.31a,b.
6.11 Derive the transfer function H (z) = Y (z)/X(z) for the structure shown in
Figure 6.32.
6.12 Draw the transpose of the structure shown in Figure 6.32.
6.13 (a) Draw the circuit in a parallel structure, to realize the following transfer
function H (z−1 ) and find its inverse z transform h(n):
H(z^−1) = (1 + 0.2z^−1)z^−2 / [(1 − 0.6z^−1 + 0.25z^−2)(1 + 0.4z^−1)]

H(z) = (1 + 0.2z)z^2 / [(1 − 0.6z + 0.25z^2)(1 + 0.4z)]
[Figure 6.31 Structures for Problem 6.10: (a) structure with multipliers −1.0 and 4.0; (b) tapped-delay-line structure with input X(n) and output y(n).]
[Figure 6.32 Structure for Problems 6.11 and 6.12, with output Y(z).]
6.14 Find the z transform X(z) of [(0.8)^n − (0.4)^n] u(n). What is the inverse
z transform of X(−z)?
6.15 Draw the digital filter circuit in both cascade and parallel forms to realize
the following transfer function:
6.16 Draw the direct form I, direct form II, cascade and parallel structures for
the transfer function
H(z^−1) = z^−1 / [(1 + 0.2z^−1)(1 + 0.6z^−1 + 0.2z^−2)]

H(z^−1) = (1 + 0.1z^−1) / [(1 + 0.3z^−1)(1 + 0.5z^−1)]
obtain the cascade and parallel structures to realize it. Draw their transpose
structures also.
6.19 Draw the direct form II structure for the structure shown in Figure 6.33.
Find the unit pulse response of this structure for r = 0.6 and θ = π/5.
[Figure 6.33 Coupled-form structure for Problem 6.19, with multipliers ±r cos θ and ±r sin θ, two delays, input X(n), and output Y(n).]
6.20 Obtain as many structures as you can to realize the following transfer
function:
H(z) = (z + 0.2) / [(z + 0.1)(z + 0.4)(z^2 + 0.5z + 0.06)]
6.21 Determine the cascade and parallel structures for the transfer function
H(z) = (1 + 0.3z^−1) / [(1 − 0.3z^−1)(1 − 0.5e^(jπ/3) z^−1)(1 − 0.5e^(−jπ/3) z^−1)]
MATLAB Problems
6.22 Find the direct-form and cascade structures realizing the FIR filter function
H (z−1 )
6.23 Find the direct-form and cascade structures realizing the FIR filters
and
6.24 Determine the lattice structures to realize the FIR filters in Problem 6.23.
6.25 Find the direct form I and the cascade structures to realize the following
IIR filters:
H1(z) = (1 − 0.25z^−1)/(1 + 0.9z^−1) + z^−1/(1 + 0.5z^−1) + (1 + 0.4z^−1)/(1 + 0.2z^−1 + 0.08z^−2)

H2(z^−1) = (1 + 0.1z^−1 + z^−2 − 0.2z^−3)/(1 + z^−1 + 0.24z^−2) + 4z^−1/[(1 − 0.8z^−1)(1 − 0.4z^−1)]
6.26 Find the structure in the parallel and cascade connections to realize the
following filters:
H1(z^−1) = z^−1/(1 − 0.5z^−1) + z^−1/(1 + 0.5z^−1) + z^−1/(1 − 0.2z^−1) + z^−1/(1 + 0.2z^−1)

H2(z^−1) = [(1 − 0.25z^−1)/(1 + 0.9z^−1) + z^−1/(1 + 0.5z^−1)] × [(1 + z^−1)/(1 + 0.4z^−1) + 2z^−2/(1 + 0.6z^−1 + 0.6z^−2)]
H(z^−1) = (1.2 + z^−1) / (1.0 + 1.1z^−1 + 0.5z^−2 + 0.1z^−3)
6.31 Find the lattice–ladder structure for the following IIR filter:
H(z^−1) = (0.01 − 0.75z^−1) / (1 − 0.75z^−1 + 0.01z^−2)
6.32 Determine the lattice–ladder structure for the following IIR filter:
REFERENCES
1. S. K. Mitra and K. Hirano, Digital allpass networks, IEEE Trans. Circuits Syst. CAS-21,
688–700 (1974).
2. J. G. Proakis and D. G. Manolakis, Digital Signal Processing, Prentice-Hall, 1996.
3. S. K. Mitra, Digital Signal Processing—A Computer Based Approach. McGraw-Hill,
2001.
4. S. K. Mitra and J. F. Kaiser, eds., Handbook for Digital Signal Processing, Wiley-
Interscience, 1993.
5. B. A. Shenoi, Magnitude and Delay Approximation of 1-D and 2-D Digital Filters,
Springer-Verlag, 1999.
CHAPTER 7
QUANTIZED FILTER ANALYSIS
7.1 INTRODUCTION
The analysis and design of discrete-time systems, digital filters, and their realiza-
tions, computation of DFT-IDFT, and so on discussed in the previous chapters
of this book were carried out by using mostly the functions in the Signal Pro-
cessing Toolbox working in the MATLAB environment, and the computations
were carried out with double precision. This means that all the data representing
the values of the input signal, coefficients of the filters, or the values of the unit
impulse response, and so forth were represented with 64 bits; therefore, these
numbers have a range approximately between 10^−308 and 10^308 and a precision
of 2^−52 ≈ 2.22 × 10^−16. Obviously this range is so large and the precision with
which the numbers are expressed is so fine that the numbers can be assumed to
have almost “infinite precision.” Once these digital filters and DFT-IDFT have
been obtained by the procedures described so far, they can be further analyzed
by mainframe computers, workstations, and PCs under “infinite precision.” But
when the algorithms describing the digital filters and FFT computations have
to be implemented as hardware in the form of special-purpose microprocessors
or application-specific integrated circuits (ASICs) or the digital signal processor
(DSP) chip, many practical considerations and constraints come into play. The
registers used in these hardware systems to store the numbers have finite length,
and the memory capacity required for processing the data is determined by the
number of bits—also called the wordlength—chosen for storing the data. More
memory means more power consumption and hence the need to minimize the
wordlength. In microprocessors and DSP chips and even in workstations and PCs,
we would like to use registers with as few bits as possible and yet obtain high
computational speed, low power, and low cost. Moreover, portable devices such as
cell phones and personal digital assistants (PDAs) have a limited amount of mem-
ory and run on batteries of low voltage and limited life.
These constraints become more severe in other devices such as digital hearing
aids and biomedical probes embedded in capsules to be swallowed. So there is a
FILTER DESIGN–ANALYSIS TOOL 355
great demand for designing digital filters and systems in which they are embed-
ded, with the lowest possible number of bits to represent the data or to store the
data in their registers. When the filters are built with registers of finite length and
the analog-to-digital converters (ADCs) are designed to operate at increasingly
high sampling rates, thereby reducing the number of bits with which the samples
of the input signal are represented, the frequency response of the filters and the
results of DFT-IDFT computations via the FFT are expected to differ from those
designed with “infinite precision.” This process of representing the data with a
finite number of bits is known as quantization, which occurs at several points
in the structure chosen to realize the filter or the steps in the FFT computation
of the DFT-IDFT. As pointed out in the previous chapter, a vast number of
structures are available to realize a given transfer function, when we assume infi-
nite precision. But when we design the hardware with registers of finite length to
implement their corresponding difference equation, the effect of finite wordlength
is highly dependent on the structure. Therefore we find it necessary to analyze
this effect for a large number of structures. This analysis is further compounded
by the fact that quantization can be carried out in several ways and the arithmetic
operations of addition and multiplication of numbers with finite precision yield
results that are influenced by the way that these numbers are quantized.
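As a small illustration of coefficient quantization, the Python sketch below (an illustrative sketch, not the toolbox's code) rounds the coefficient a2 of a second-order denominator 1 + a1 z^−1 + a2 z^−2 to 8 fractional bits and tracks the radius of the complex pole pair, which for such a section is √a2; the numerical values are borrowed from the allpass section A1(z) used later in this chapter:

```python
import math

def quantize_coeff(c, frac_bits):
    # round a coefficient to the nearest multiple of 2**-frac_bits
    return round(c * (1 << frac_bits)) / (1 << frac_bits)

# denominator 1 - 0.6763 z^-1 + 0.5089 z^-2: complex poles with radius sqrt(a2)
r_ref = math.sqrt(0.5089)                     # full-precision pole radius
r_q8 = math.sqrt(quantize_coeff(0.5089, 8))   # radius after 8-bit quantization
```

Here the pole radius moves by less than 0.001 and the section remains stable, but with fewer bits, or with a less forgiving structure, the shift can be much larger.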
In this chapter, we discuss a new MATLAB toolbox called FDA Tool avail-
able1 for analyzing and designing the filters with a finite number of bits for the
wordlength. The different forms of representing binary numbers and the results of
adding and multiplying such numbers will be explained in a later section of this
chapter. The third factor that influences the deviation of filter performance from
the ideal case is the choice of FIR or IIR filter. The type of approximation chosen
for obtaining the desired frequency response is another factor that also influences
the effect of finite wordlength. We discuss the effects of all these factors in this
chapter, illustrating their influence by means of a design example.
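The double-precision range and precision quoted in the introduction can be verified directly in any IEEE 754 environment; here is a Python check (MATLAB's eps returns the same value):

```python
import sys

# double precision: the "infinite precision" default in this chapter's sense
print(sys.float_info.epsilon)  # 2.220446049250313e-16, i.e. 2**-52
print(sys.float_info.max)      # about 1.7977e+308
```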
An enormous amount of research has been carried out to address these problems,
but analyzing the effects of quantization on the performance of digital filters
and systems is not well illustrated by specific examples. Although there is no
analytical method available at present to design or analyze a filter with finite
precision, some useful insight can be obtained from the research work, which
serves as a guideline in making preliminary decisions on the choice of suitable
structures and quantization forms. Any student interested in this research work
should read the material on finite wordlength effects found in other textbooks
[1,2,4]. In this chapter, we discuss the software for filter design and analysis
that has been developed by The MathWorks to address the abovementioned
1 MATLAB and its Signal Processing Toolbox are found in computer systems of many schools and universities, but the FDA Tool may not be available in all of them.
356 QUANTIZED FILTER ANALYSIS
problem.2 This filter design–analysis (FDA) tool, found in the Filter
Design Toolbox, works in conjunction with the Signal Processing (SP) Toolbox.
Unlike the SP Toolbox, the FDA Tool has been developed by making extensive use
of the object-oriented programming capability of MATLAB, and the syntax for the
functions available in the FDA Tool is different from the syntax for the functions
we find in MATLAB and the SP Toolbox. When we log on to MATLAB and type
fdatool, we get two screens on display. On one screen, we type the fdatool
functions as command lines to design and analyze quantized filters, whereas the
other screen is a graphical user interface (GUI) to serve the same purpose. The
GUI window shown in Figure 7.1a displays a dialog box with an immense array
of design options as explained below.
First we design a filter with double precision on the GUI window using the
FDA Tool or on the command window using the Signal Processing Toolbox and
then import it into the GUI window. In the dialog box for the FDA Tool, we can
choose the following options under the Filter Type panel:
1. Lowpass
2. Highpass
3. Bandpass
4. Bandstop
5. Differentiator. By clicking the arrow on the tab for this feature, we get
the following additional options.
6. Hilbert transformer
7. Multiband
8. Arbitrary magnitude
9. Raised cosine
10. Arbitrary group delay
11. Half-band lowpass
12. Half-band highpass
13. Nyquist
Below the Filter Type panel is the panel for the design method. When the
button for IIR filter is clicked, the dropdown list gives us the following options
specifying the type of frequency response:
• Butterworth
• Chebyshev I
• Chebyshev II
• Elliptic
• Least-pth norm
• Constrained least-pth norm
2 The author acknowledges that the material on the FDA Tool described in this chapter is based on the Help Manual for the Filter Design Toolbox found in MATLAB version 6.5.
(a)
(b)
Figure 7.1 Screen capture of fdatool window: (a) window for filter design;
(b) window for quantization analysis.
• Equiripple
• Least squares
• Window
• Maximally flat
• Least-pth norm
• Constrained equiripple
To the right of the panel for design method is the one for filter order. We can
either specify the order of the filter or let the program compute the minimum order
(by use of SP Tool functions buttord, cheb1ord, ellipord, etc.). Remember to choose an
odd order for the lowpass filter when it is to be designed as a parallel connection
of two allpass filters, if an even number is given as the minimum order. Below
this panel is the panel for other options, which are available depending on the
abovementioned inputs. For example, if we choose a FIR filter with the window
option, this panel displays an option for the windows that we can choose. By
clicking the button for the windows, we get a dropdown list of more than 10
windows. To the right of this panel are two panels that we use to specify the
frequency specifications, that is, to specify the sampling frequency, cutoff fre-
quencies for the passband and stopband, the magnitude in the passband(s) and
stopband(s), and so on depending on the type of filter and the design method
chosen. These can be expressed in hertz, kilohertz, megahertz, gigahertz, or nor-
malized frequency. The magnitude can be expressed in decibels, with magnitude
squared or actual magnitude as displayed when we click Analysis in the main
menu bar and then click the option Frequency Specifications in the drop-
down list. The frequency specifications are displayed in the Analysis panel,
which is above the panel for frequency specifications, when we start with the
filter design.
The options available under any of these categories are dependent on the
other options chosen. All the FDA Tool functions, which are also the functions
of the SP Tool, are called overloaded functions. After all the design options are
chosen, we click the Design Filter button at the bottom of the dialog box. The
program designs the filter and displays the magnitude response of the filter in the
Analysis area. But it is only a default choice, and by clicking the appropriate
icons shown above this area, the Analysis area displays one of the following
features:
• Magnitude response
• Phase response
• Magnitude and phase response
• Group delay response
• Impulse response
• Step response
• Pole–zero plot
• Filter coefficients
This information can also be displayed by clicking the Analysis button in the
main menu bar, and choosing the information we wish to display in the Anal-
ysis area. We can also choose some additional information, for example, by
clicking the Analysis Parameters. At the bottom of this dropdown list is the
option Full View Analysis. When this is chosen, whatever is displayed in the
Analysis area is shown in a new panel of larger dimensions with features that
are available in a figure displayed under the SP Tool. For example, by clicking the
Edit button and then selecting either Figure Properties, Axis Properties,
or Current Object Properties, the Property Editor becomes active and
properties of these three objects can be modified.
Finally, we look at the first panel titled Current Filter Information.
This lists the structure, order, and number of sections of the filter that we have
designed. Below this information, it indicates whether the filter is stable and
points out whether the source is the designed filter (i.e., reference filter designed
with double precision) or the quantized filter with a finite wordlength. The default
structure for the IIR reference filter is a cascade connection of second-order
sections, and for the FIR filter, it is the direct form. When we have completed
the design of the reference filter with double precision, we verify whether it
meets the desired specification, and if we wish, we can convert the structure of
the reference filter to any one of the other types listed below. We click the Edit
button on the main menu and then the Convert Structure button. A dropdown
list shows the structures to which we can convert from the default structure or
the one that we have already converted.
For IIR filters, the structures are
1. Direct form I
2. Direct form II
3. Direct form I transposed
4. Direct form II transposed
5. Lattice ARMA
6. Lattice-coupled allpass
7. Lattice-coupled allpass—power complementary
8. State space
Items 6 and 7 in this list refer to structures of the two allpass networks in
parallel as described in Chapter 6, with transfer functions G(z) = (1/2)[A1(z) +
A2(z)] and H(z) = (1/2)[A1(z) − A2(z)], respectively. The allpass filters A1(z) and
A2(z) are realized in the form of lattice allpass structures like the one shown
in Figure 6.19b. The MA and AR structures are considered special cases of the
lattice ARMA structure, which are also discussed in Chapter 6.
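As a reminder of what one of these structures computes, here is a minimal Python sketch (not from the book) of a direct form II transposed second-order section, the building block of the default cascade structure for IIR filters:

```python
def df2t_sos(b, a, x):
    # direct form II transposed second-order section:
    # b = [b0, b1, b2], a = [1, a1, a2]; two state variables w1, w2
    w1 = w2 = 0.0
    y = []
    for xn in x:
        yn = b[0] * xn + w1            # output uses only one state read
        w1 = b[1] * xn - a[1] * yn + w2
        w2 = b[2] * xn - a[2] * yn
        y.append(yn)
    return y

# impulse response of H(z) = 1/(1 - 0.5 z^-1): 1, 0.5, 0.25, ...
imp = df2t_sos([1.0, 0.0, 0.0], [1.0, -0.5, 0.0], [1.0, 0.0, 0.0, 0.0])
```

A cascade realization simply feeds the output of one such section into the next.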
For FIR filters, the options for the structures are
• Direct-form FIR
• Direct-form FIR transposed
• Direct-form symmetric FIR
the first icon on the left-hand bar in the dialog box and adding the frequency
specifications for the new filter.
When we have finished the analysis of the reference filter, we can move to
construct the quantized filter as an object, by clicking the last icon on the bar
above the Analysis area and the second icon on the left-hand bar, which sets
the quantization parameters. The panel below the Analysis area now changes
as shown in Figure 7.1b. We can construct three objects inside the FDA Tool:
qfilt, qfft, and quantizer. Each of them has several properties, and these
properties have values, which may be strings or numerical values. Currently
we use the objects qfilt and quantizer to analyze the performance of the
reference filter when it is quantized. When we click the Turn Quantization
On button and the Set Quantization Parameters icon, we can choose the
quantization parameters for the coefficients of the filter. Quantization of the filter
coefficients alone is sufficient for finding the finite wordlength effect on the
magnitude response, phase response, and group delay response of the quantized
filter, which we can compare with the response of the reference filter displayed
in the Analysis area. Quantization of the other data listed below is necessary
when we have to filter an input signal:
The object quantizer is used to convert each of these data, and this object has
four properties: Mode, Round Mode, Overflow mode, and Format. In order to
understand the values of these properties, it is necessary to review and understand
the binary representation of numbers and the different results of adding them and
multiplying them. These will be discussed next.
Numbers representing the values of the signal, the coefficients of both the filter
and the difference equation or the recursive algorithm and other properties cor-
responding to the structure for the filter are represented in binary form. They are
based on the radix of 2 and therefore consist of only two binary digits, 0 and 1,
which are more commonly known as bits, just as the decimal numbers based on a
radix of 10 have 10 decimal numbers from 0 to 9. Placement of the bits in a string
determines the binary number as illustrated by the example x2 = 1001 1010,
BINARY NUMBERS AND ARITHMETIC 361
where the bits b3, b2, b1, b0, b−1, b−2, b−3, b−4 are either 1 or 0. In general, when
x2 is represented with I integer bits and F fractional bits, its decimal value is
x10 = Σ_{i=−F}^{I−1} b_i 2^i    (7.4)
In the binary representation (7.3), the integer part contains I bits and the bit bI −1
at the leftmost position is called the most significant bit (MSB); the fractional
part contains F bits, and the bit b−F at the rightmost position is called the least
significant bit (LSB). This can only represent the magnitude of positive numbers
and is known as the unsigned fixed-point binary number. In order to represent
positive as well as negative numbers, one more bit called the sign bit is added to
the left of the MSB. The sign bit, represented by the symbol s in (7.5), assigns
a negative sign when this bit is 1 and a positive sign when it is 0. So it becomes
a signed magnitude fixed-point binary number. Therefore a signed magnitude
number x2 = 11001 1010 is x10 = −9.625. In general, the signed magnitude
fixed-point number is given by
x10 = (−1)^s Σ_{i=−F}^{I−1} b_i 2^i    (7.5)
So two other forms of representing the numbers are more commonly used: the
one’s-complement and two’s-complement forms (also termed one-complementary
and two-complementary forms) for representing the signed magnitude fixed-point
numbers. In the one’s-complement form, the bits of the magnitude are replaced
by their complements; that is, the ones are replaced by zeros and vice versa. By
adding a one as the least significant bit to the one’s-complement form, we get
the two’s-complement form of binary representation; the sign bit is retained in
both forms. But it must be observed that when the binary number is positive, the
signed magnitude form, one’s-complement form, and two’s-complement form are
the same.
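The conversions described above can be sketched in a few lines of Python (hypothetical helper names, not from the book; the word format here is a sign bit followed by the magnitude bits, with F fractional bits):

```python
def sign_magnitude_value(bits, frac_len):
    # bits: sign bit followed by magnitude bits, e.g. "110011010" for -9.625
    sign = -1 if bits[0] == "1" else 1
    return sign * int(bits[1:], 2) / (2 ** frac_len)

def twos_complement_bits(value, word_len, frac_len):
    # two's-complement bit pattern of a signed fixed-point number
    # (the sign bit is included in word_len)
    q = round(value * (1 << frac_len))
    if q < 0:
        q += 1 << word_len   # modular wrap gives the two's complement
    return format(q, "0{}b".format(word_len))
```

For a positive number the two's-complement pattern coincides with the signed magnitude pattern, as noted above; for −9.625 the magnitude bits are complemented and incremented.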
Example 7.1
[Bit-field layout: sign bit s, exponent E (8 bits, b8–b1), fraction F (23 bits, b−1–b−23); formats (a) and (b).]
Figure 7.2 IEEE format of bits for the 32- and 64-bit floating-point numbers.
Here, (1.F) is the normalized mantissa with one integer bit and 23 fractional bits,
whereas (0.F) is only the fractional part with 23 bits. Most of the commercial
DSP chips use this 32-bit, single-precision, floating-point binary representation,
although 64-bit processors are becoming available. Note that there is no provision
for storing the binary point in these chips; their registers simply store the bits
and implement the rules listed above. The binary point is used only as a notation
in our discussion of the binary number representation and is not counted in the
total number of bits.
The IEEE 754-1985 standard for the 64-bit, double-precision, floating-point
number is expressed by
It uses one sign bit, 11 bits for the exponent E, and 52 bits for F (one bit is
added to normalize it but is not counted). The representation for this format is
shown in Figure 7.2b.
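A quick way to see these fields is to unpack the bit pattern of a double. The Python sketch below extracts the sign, the 11-bit biased exponent, and the 52-bit fraction (bias 1023; the single-precision format of Figure 7.2a works the same way with 8 exponent bits and bias 127):

```python
import struct

def double_fields(x):
    # split a 64-bit IEEE 754 double into sign (1 bit),
    # biased exponent E (11 bits), and fraction F (52 bits)
    (u,) = struct.unpack(">Q", struct.pack(">d", x))
    return u >> 63, (u >> 52) & 0x7FF, u & ((1 << 52) - 1)
```

For example, 1.0 = (−1)^0 × 1.0 × 2^(1023−1023), so its sign is 0, its biased exponent is 1023, and its fraction field is zero.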
Example 7.2
Consider the 16-bit floating-point number with 8 bits for the unbiased exponent
and 4 bits for the denormalized fractional part, namely, E = 8 and F = 4. The
X2 = 0100000010 0110
Y2 = 100000111 0110
1. Round: round
2. Floor: floor
3. Ceiling: ceil
4. Fix: fix
5. Convergent: convergent
TABLE 7.1 Dynamic Range of Floating-Point Numbers Found in FDA Tool (columns: type of floating-point data, normalized minimum value, normalized maximum value, exponent bias, precision)
In the round operation, a number is rounded to the nearest quantization level;
negative numbers that lie halfway between two quantization levels are
rounded toward negative infinity, and positive numbers that lie halfway between
two quantization levels are rounded toward positive infinity. The
operation called floor is commonly known as truncation, since it discards all
the bits beyond the b bits, and this results in the number nearest to negative
infinity. These two are the most commonly used operations in binary arithmetic.
They are illustrated in Figure 7.3, where the dotted line indicates the actual value
of x and the solid line shows the quantized value xQ with b bits.
The ceiling operation rounds the value to the nearest quantization level
toward positive infinity, and the fix operation rounds to the nearest level toward
zero. The convergent operation is the same as rounding except that in the case
when the number is exactly halfway, it is rounded down if the penultimate bit is
zero and rounded up if it is one.
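The five modes can be mimicked with a few lines of Python (a sketch, not the FDA Tool's implementation; the value is scaled by 2^b so that quantization works on integer levels, and the round mode breaks ties away from zero as described above):

```python
import math

def quantize(x, b, mode):
    # quantize x to b fractional bits under the five modes described above
    s = x * (1 << b)
    if mode == "floor":
        q = math.floor(s)            # truncation, toward -infinity
    elif mode == "ceil":
        q = math.ceil(s)             # toward +infinity
    elif mode == "fix":
        q = math.trunc(s)            # toward zero
    elif mode == "convergent":
        q = round(s)                 # Python rounds ties to the even level
    else:                            # "round": ties away from zero
        q = math.floor(s + 0.5) if s >= 0 else math.ceil(s - 0.5)
    return q / (1 << b)
```

For example, with b = 3 the value 0.3125 lies exactly halfway between the levels 0.25 and 0.375, so round and convergent disagree on it.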
Suppose that two positive numbers or two negative numbers in the fixed-point
format with b bits are added together. It is possible that the result could exceed
[Figure 7.3 Quantizer characteristics x versus xQ for rounding and truncation; quantization step 2^−b.]
the lower or upper limits of the range within which numbers with b bits lie. For
a signed magnitude, fixed-point number with wordlength w and fraction length
f, the numbers range from −2^(w−f−1) to 2^(w−f−1) − 2^−f, whereas the range for
floating-point numbers is as given in Table 7.1. When the sum or difference of
two fixed-point numbers or the product of two floating-point numbers exceeds
its normal range of values, there is an overflow or underflow of numbers. The
overflow mode in the FDA panel for the quantized filter gives two choices: to
use saturate or to wrap. Choosing the saturate mode sets values that fall
outside the normal range to a value within the maximum or minimum value in
the range; that is, values greater than the maximum value are set to the maximum
value, and values less than the minimum value are set to the minimum value in
the range. This is the default choice for the overflow mode.
There is a third choice: to scale all the data. This choice is made by clicking
the Optimization button. Then from the dialog box that is displayed, we can
use additional steps to adjust the quantization parameters, scale the coefficients
without changing the overall gain of the filter response, and so on. The coefficients
are scaled appropriately such that there is no overflow or underflow of the data
at the output of every section in the realization.
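Both overflow modes are easy to sketch for the signed fixed-point range −2^(w−f−1) to 2^(w−f−1) − 2^−f given above (Python; an illustrative helper, not the toolbox code):

```python
def fixed_point_limit(x, w, f, mode="saturate"):
    # signed fixed-point range for wordlength w, fraction length f
    lo = -(2 ** (w - f - 1))
    hi = 2 ** (w - f - 1) - 2 ** (-f)
    if mode == "saturate":
        return min(max(x, lo), hi)   # clip to the nearest range limit
    span = 2 ** (w - f)              # "wrap": modular (two's-complement) overflow
    return (x - lo) % span + lo
```

With w = 8 and f = 4 the range is [−8, 7.9375]; saturation pins an out-of-range sum at a limit, while wrapping sends it to the opposite end of the range.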
Before we investigate the effects of finite wordlength and the many realization
structures, by using all the options in the dialog box in the FDA Tool, it is useful
to know some of the insight gleaned from the vast amount of research on this
complex subject. It has been found that in general, the IIR filters in the cascade
connection of second-order sections, each of them realized in direct form II, are
less sensitive to quantization than are those realized in the single section of direct
form I and direct form II. The lattice ARMA structure and the special case of
the AR structure are less sensitive to quantization than is the default structure
described above. The lattice-coupled allpass structure, also known as “two allpass
structures in parallel,” is less sensitive than the lattice ARMA structure. We will
determine whether realizing the two allpass filters A1 (z) and A2 (z) by lattice
allpass structures has any advantages of further reduction in the quantization
effects. If the specified frequency response can be realized by an FIR filter,
then the direct-form or the lattice MA structure realizing it may be preferable to
the structures described above, because the software development and hardware
design of an FIR filter are simpler and the filter is always stable, can have linear
phase, and is free from limit cycles.
We first design the reference filter that meets the desired specifications; then
we try different structures for the quantized filter with different levels and types of
quantization. Comparing the frequency response, phase response, and group delay
response of the reference filter with those of the quantized filter, we find out which
structure has the lowest deviation from the frequency response, phase response,
and so on of the reference filter, with the lowest finite wordlength. The FDA Tool
offers us powerful assistance in trying a large number of options available for the
type of filter, design method, frequency specification, quantization of the several
coefficients, and other variables; by comparing the results for the reference filter
QUANTIZATION ANALYSIS OF IIR FILTERS 367
and the quantized filter, it allows us to make a suboptimal choice of the filter.
This is illustrated by the following example.
Let us select the same fifth-order IIR lowpass elliptic filter that was considered
in Example 6.17. Its transfer function G(z) is given by
G(z) = (0.1397/1.965) × (1 + 1.337z^−1 + 2.251z^−2 + 2.251z^−3 + 1.337z^−4 + z^−5) / (1 − 1.629z^−1 + 2.256z^−2 − 1.597z^−3 + 0.8096z^−4 − 0.1866z^−5)    (7.9)
The frequency specifications for the filter are given as ωp = 0.4, ωs = 0.6,
Ap = 0.3 dB, and As = 35 dB. The transfer function G(z) was decomposed as
the sum of two allpass filters A1(z) and A2(z) such that G(z) = (1/2)[A1(z) + A2(z)],
where
A1(z) = (0.5089 − 0.6763z^−1 + z^−2) / (1 − 0.6763z^−1 + 0.5089z^−2)    (7.10)

and

A2(z) = [(0.8805 − 0.5368z^−1 + z^−2) / (1 − 0.5367z^−1 + 0.8805z^−2)] × [(−0.4165 + z^−1) / (1 − 0.4165z^−1)]    (7.11)
Recollect the following lattice coefficients used to realize the lattice structures
for A1 (z) and A2 (z) computed in Chapter 6:
For A1(z):

K1 = [−0.4482  0.5089]^T,    V1 = [0  0  1]^T

For A2(z):

K2 = [−0.2855  0.8805]^T,    V2 = [0  0  1]^T

k3 = [−0.4165],    v3 = [0  1]^T
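These reflection coefficients can be reproduced from the allpass denominators with the standard step-down (Schur) recursion; the Python sketch below (not the book's code) recovers K1, K2, and k3 from the printed four-digit denominators, to within the rounding of those values:

```python
def reflection_coeffs(a):
    # step-down recursion: denominator [1, a1, ..., aN] -> [k1, ..., kN]
    a = list(a)
    ks = []
    for m in range(len(a) - 1, 0, -1):
        k = a[m]                      # the last coefficient is k_m
        ks.append(k)
        # deflate to the order-(m-1) polynomial
        a = [(a[i] - k * a[m - i]) / (1 - k * k) for i in range(m)]
    return ks[::-1]
```

For the denominator 1 − 0.6763z^−1 + 0.5089z^−2 of A1(z), this yields k2 = 0.5089 and k1 = −0.6763/(1 + 0.5089) ≈ −0.4482, matching K1 above.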
Now we launch the FDA Tool by typing fdatool in the MATLAB command window and enter the following specifications to design the reference filter under infinite precision. This is a lowpass, IIR, elliptic filter with sampling frequency = 48,000 Hz, Fpass = 9600 Hz, and Fstop = 14,400 Hz, which correspond to a normalized sampling frequency = 2, Fpass = 0.4, and Fstop = 0.6, respectively. The maximum passband attenuation is set as Ap = 0.3 dB and the minimum stopband attenuation as As = 35 dB. When we design this filter, we find that the minimum order is reported as 4; we therefore increase the order to 5 so that we can realize the allpass networks in parallel and compare the result with the frequency response of other types of filters. With this selection, the frequency response and phase response displayed in the Analysis area are as shown in Figure 7.4. The coefficients of the numerator and denominator of the IIR reference filter are given below.
Numerator coefficients (normalized to render the constant coefficient of the numerator as one) of this reference filter are

1.000000000000
1.337660698390
2.251235030190
2.251235030190
1.337660698390
1.000000000000

Denominator coefficients of this reference filter are

1.000000000000000
−1.629530257267632
2.257141351394922
−1.598167067780082
0.809623494277134
−0.186626971448986

Figure 7.4 Magnitude response of an IIR elliptic lowpass (reference) filter.
As expected, these results match the coefficients in Equation (7.9) to within an accuracy of four digits, because both filters were designed by the same Signal Processing Toolbox function ellip.
Next, we turn on the quantization and click the Set Quantization Parameters button. The quantization parameters are all set to default values similar to those shown in Figure 7.1. We change the format for the fixed-point coefficients of the filter from [16 15] to [9 8] without changing the format for any of the other data, although most of the DSP chips currently available use 16 or 32 bits. The magnitude response of the cascade connection of two second-order sections and one first-order section in direct form II, when we quantize the filter coefficients to a 9-bit wordlength, is shown in Figure 7.5, along with the magnitude
Figure 7.5 Magnitude response of reference filter and quantized filter with format [9 8] in cascade connection of second-order sections (direct form II).
Figure 7.6 Magnified plot of the magnitudes (in decibels) of the two filters in Figure 7.5.
response of the reference filter. Figure 7.6 shows a magnified plot of the magnitude in decibels in the passband; the response of the quantized filter is very close to that of the reference filter. But most of the DSP chips available on the market have a wordlength that is a power of 2 (wordlengths of 8, 16, 32, etc.), so we try a quantization of 8 bits; the magnitude response of this filter is shown in Figure 7.7. We see that the deviation of the magnitude from that of the reference filter is pronounced near the edge of the passband. Although we would prefer a wordlength of 8 rather than 9, this deviation is considered excessive, so we must try other structures. As an alternative, we convert the direct form II structure to the lattice ARMA structure with the same wordlength of 8 bits; the resulting magnitude response is shown in Figure 7.8, and a magnified plot of this response in its passband is shown in Figure 7.9. It does not show a significant improvement over the response in Figure 7.7 for the quantized filter in the default structure of direct form II with the same wordlength of 8 bits.
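The experiment can also be sketched outside the FDA Tool. The fragment below (Python/SciPy; the rounding model is a simplification of FDATool's fixed-point arithmetic — coefficients are merely rounded to steps of 2^−8 or 2^−7, with overflow handling and internal-signal quantization ignored) rounds the coefficients of the second-order-section cascade to the grids implied by the formats [9 8] and [8 7] and measures the passband deviation:

```python
import numpy as np
from scipy import signal

b, a = signal.ellip(5, 0.3, 35, 0.4)     # reference filter
sos = signal.tf2sos(b, a)                # cascade of second-order sections

def quantize(c, frac_bits):
    """Round every coefficient to the nearest multiple of 2**-frac_bits."""
    step = 2.0 ** frac_bits
    return np.round(np.asarray(c) * step) / step

w = np.linspace(1e-3, 0.4 * np.pi, 256)  # passband frequencies only
_, h_ref = signal.sosfreqz(sos, worN=w)

devs = {}
for frac_bits in (8, 7):                 # formats [9 8] and [8 7]
    _, h_q = signal.sosfreqz(quantize(sos, frac_bits), worN=w)
    devs[frac_bits] = np.max(np.abs(np.abs(h_q) - np.abs(h_ref)))
print(devs)                              # passband deviation of the quantized cascade
```

Under this simplified model the deviation is largest near the passband edge, in line with the behavior observed in the FDA Tool.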
So we decide to convert the lattice ARMA structure to the lattice-coupled allpass structure, in which each allpass filter is realized as a lattice allpass structure. Starting with a 9-bit fixed-point quantization for the filter coefficients, we get the result shown in Figure 7.10: hardly any difference is seen between the reference filter and the quantized filter with the format [9 8], the same format as was used for the direct form II structure.

Figure 7.7 Magnitude responses of reference filter and quantized filter with format [8 7] in cascade connection of second-order sections.

Figure 7.8 Magnitude responses of reference filter and quantized filter with format [8 7] and lattice ARMA structure.

Figure 7.9 Magnified plot of the magnitude responses (in decibels) of reference filter and quantized filter with format [8 7] in lattice ARMA structure.

Figure 7.10 Magnitude responses of reference filter and quantized filter with format [9 8] in lattice-coupled allpass structure.

Figure 7.11 Magnitude responses of reference filter and quantized filter with format [7 6] in lattice-coupled allpass structure.

Next we try a 7-bit wordlength for this structure; the magnitude response is shown in Figure 7.11. Again, we prefer to choose an 8-bit wordlength for this structure. The magnitude and phase responses of the
filter with 8 bits are shown in Figure 7.12. A magnified plot of the magnitude in decibels in the passband of this 8-bit filter is shown in Figure 7.13. It shows that the maximum attenuation for the reference filter is 0.3 dB as specified, and that the deviation from the specified passband magnitude for the quantized filter is about 0.1 dB. This amount of deviation is less than that exhibited by the lattice ARMA filter in Figure 7.9. Therefore this lattice-coupled allpass structure for the IIR filter is chosen as a compromise.
The lattice coefficients of the second-order allpass filter A1 (z) and those for the
third-order allpass filter A2 (z) realizing the reference filter are printed out and
shown in the right column of Figure 7.14. The lattice coefficients for the two
allpass filters displayed in Figure 7.14 match those given in the vectors K1, V1,
K2, V2, k3 and v3 given at the beginning of this section, within an accuracy
of four digits. In the left column are shown the corresponding coefficients of the quantized filter with an 8-bit wordlength in the fixed-point, signed-magnitude format [8 7].
Figure 7.12 Magnitude and phase responses of reference filter and the quantized filter with format [8 7] in lattice-coupled allpass structure.
Figure 7.13 Magnified plot of the magnitude responses (in decibels) of reference filter and quantized filter with format [8 7] in lattice-coupled allpass structure.
QUANTIZATION ANALYSIS OF FIR FILTERS
Figure 7.14 Coefficients of reference filter and quantized filter with format [8 7] in a
lattice-coupled allpass structure.
Figure 7.15 Magnitude response of a lowpass equiripple FIR (reference) filter of order 16.
Figure 7.16 Magnitude and phase responses of reference FIR filter and quantized filter with format [7 6] for filter coefficients.
Figure 7.17 Magnitude and phase responses of FIR reference filter and quantized filter with format [8 7] for filter coefficients.
Figure 7.18 Magnified magnitude responses (in decibels) of reference FIR filter and quantized filter with format [8 7] for the filter coefficients.
Figure 7.19 Data for reference FIR filter and quantized FIR filter with 8 bits of wordlength.
of 8 bits. The number of multipliers required in the FIR direct form is only 9 because of the symmetry in its coefficients, whereas the lattice-coupled allpass network requires 10 multipliers, which is not a significant difference. However, we know that the phase response of the FIR filter is linear, which is a great advantage over the IIR filter. Hardware implementation of the FIR filter is simpler than that of the IIR filter. Unlike the IIR filter, the FIR filter does not exhibit limit cycles and is always stable. This leads us to investigate the 8-bit FIR filter further as a candidate for generating the code to program a DSP chip of our choice.
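The count of 9 multipliers follows from coefficient symmetry: an order-16 linear-phase filter has 17 taps, of which only ⌈17/2⌉ = 9 are distinct. A sketch of the corresponding equiripple design using SciPy's `remez` (standing in here for the FDA Tool's equiripple method; the band edges are those of the running example):

```python
import numpy as np
from scipy import signal

fs = 48000
# Order-16 (17-tap) equiripple lowpass: passband to 9600 Hz, stopband from 14,400 Hz
h = signal.remez(17, [0, 9600, 14400, fs / 2], [1, 0], fs=fs)

print(np.allclose(h, h[::-1]))   # symmetric impulse response, hence exactly linear phase
print(len(h) // 2 + 1)           # 9 distinct coefficients, hence 9 multipliers
```

The symmetry check is what guarantees the linear phase cited in the text, and the distinct-coefficient count is what reduces the multiplier budget from 17 to 9.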
It must be pointed out that the specifications we selected for the digital filter may or may not meet typical application requirements. Also, we would like to point out that while we argued that an 8-bit wordlength may be preferable to a 9-bit wordlength, currently most of the digital signal processors (DSPs) are 16-bit or 32-bit devices. The design process using the fdatool is meant only to illustrate the different choices and decisions that an engineer may face before arriving at a particular digital filter that will be considered for further investigation as described below.
Now we assume that we have designed the digital filter and tested its performance using the fdatool, with the coefficients of the filter and the input samples represented by a finite number of bits. We have also considered the effect of rounding or truncating the results of adding signals or multiplying a signal value by a filter coefficient, and ascertained that there is no possibility of limit cycles or unstable operation in the filter. Very often a digital filter is a prominent part of a digital system such as a cell phone, which has other components besides the power supply, keyboard, and other I/O interfaces. So we have to simulate the performance of the whole system with all components connected together in the form of a block diagram.
7.7 SUMMARY
In this chapter we described the use of the MATLAB tool called the fdatool to design digital filters with finite wordlengths for the coefficients in fixed-point and floating-point representations, and investigated several different types of filter structures and magnitude response specifications. Once we have narrowed down the choice to a filter that meets the frequency response specifications, we have to simulate its performance using Simulink, to check that the filter works satisfactorily under the different types of input signals that will be applied in practice. In Chapter 8 we discuss this and other practical considerations that are necessary for hardware design of the filter or of the whole digital system in which the filter is embedded.
PROBLEMS

REFERENCES

HARDWARE DESIGN USING DSP CHIPS

8.1 INTRODUCTION
Figure 8.1 Screen capture of the Simulink browser and block diagram of a model.
The GUI interface is used to drag and drop these blocks from the blockset and
connect them to describe a block diagram representation of the dynamic system,
which may be a continuous-time system or a discrete-time system. A mechanical
system model [6] is shown in Figure 8.1. Simulink is based on object-oriented programming, and the blocks are represented as objects with appropriate properties, usually specified in a dialog box. Indeed, the fdatool that we used in Chapter 7 can be launched from Simulink as an object or from the MATLAB command window, because the two are integrated to operate in a seamless fashion. Simulink itself can be launched either by typing simulink in the MATLAB command window or by clicking the Simulink icon in its toolbar.
For the simulation of a digital filter, we choose the DSP blockset, which
contains the following blocks in a tree structure:
DSP Blockset
    →DSP Sinks
    →DSP Sources
    →Estimation
    →Filtering
        →Adaptive Filters
        →Filter Design
            →Analog Filter Design
            →Digital Filter Design
            →Digital Filter
            →Filter Realization Wizard
            →Overlap-Add FFT Filter
            →Overlap-Save FFT Filter
        →Multirate Filters
    →Math Functions
    →Platform Specific I/O
    →Quantizers
    →Signal Management
    →Signal Operations
    →Statistics
    →Transforms
DESIGN PRELIMINARIES
All the design and simulation of digital filters and digital systems done by MATLAB and Simulink is based on numerical computation. When this work is completed, we have to decide on one of the following choices:

1. Design a VLSI chip, using software such as VHDL, to meet our particular design specifications.
2. Select a DSP chip from manufacturers such as Texas Instruments, Analog Devices, Lucent, or Motorola and program it to work as a digital system.
3. Choose a general-purpose microprocessor and program it to work as a digital signal processor system.
4. Design the system using field-programmable gate arrays (FPGAs).
the same as the clock frequency of the CPU in the chip, or the rate at which data will be transferred to and from the memory by the CPU (central processing unit). This in turn determines the rating in mips (millions of instructions per second). The power required is determined by the amount of data and memory space used by the processor. Other considerations are the I/O (input/output) interfaces and additional devices such as the power supply circuit, the microcontroller, add-on memory, and peripheral devices. Finally, the most important consideration is the cost per chip. We also need to consider the reliability of the software and the technical support provided by the manufacturer; the credibility and sustainability of the manufacturer also become important if the market for the digital filter or the system is expected to last for many years.
The selection of the DSP chip is facilitated by an evaluation of the chips available from the major manufacturers listed above and their detailed specifications. For example, the DSP Selection Guide, which can be downloaded from the TI (Texas Instruments) Website www.dspvillage.ti.com, is an immense source of information on all the chips available from them.
The DSP chips provided by TI are divided into three categories. The family of the TMS320C6000 DSP platform is designed for systems with very high performance, ranging within 1200–5760 mips for fixed-point operation and 600–1350 mflops (million floating-point operations per second) for floating-point operation. The fixed-point DSPs are designated TMS320C62x and TMS320C64x, and the floating-point DSPs belong to the TMS320C67x family. The fixed-point TMS320C62x DSPs are optimized for multichannel, multifunction applications such as wireless base stations, remote-access servers, digital subscriber loop (DSL) systems, central office switches, call processing, speech recognition, image processing, biometric equipment, industrial scanners, precision instruments, and multichannel telephone systems. They use 16 bits for multiplication and 32 bits for instructions, in single-precision as well as double-precision format. The fixed-point TMS320C64x DSPs offer the highest level of performance at clock rates of up to 720 MHz and 5760 mips, and
they are best suited for applications in digital communications and video and
image processing, wireless LAN (local area networking), network cameras, base
station transceivers, DSL, and pooled modems, and so on. The floating-point
TMS320C67x DSPs operate at 225 MHz and are used in similar applications.
The TMS320C5000 DSP family is used in consumer digital equipment, namely, products used on the Internet and in consumer electronics. Therefore these chips are optimized for power consumption as low as 0.05 mW/mips and speeds of up to 300 MHz and 600 mips; the TMS320C54x DSPs are well known as the industry leader in portable devices such as cell phones (2G, 2.5G, and 3G), digital audio (MP3) players, digital cameras, personal digital assistants (PDAs), GPS
receivers, and electronic books. The TMS320C55x DSPs also deliver the highest
power efficiency and are software-compatible with the TMS320C54x DSPs.
The TMS320C2000 DSPs are designed for applications in the digital control industry, including industrial drives, servo control, factory automation, office equipment, and controllers for pumps, fans, HVAC (heating–ventilation–air conditioning) units, and other home appliances. The TMS320C28x DSPs offer 32-bit fixed-point processing and 150 mips operation, whereas the TMS320C24x DSPs offer a maximum of 40 mips operation.
More detailed information and specifications for the DSPs and other devices such as ADCs and codecs (coders/decoders) supplied by TI can be found in the DSP Selection Guide. The amount of information on software and hardware development tools, application notes, and other resource material that is freely available on this Website is enormous and indispensable. We must remember that DSP chips produced by other manufacturers such as Analog Devices may be better suited for specific applications, and they, too, provide a lot of information about their chips and their applications.
CODE GENERATION
The next task is to generate a code in machine language that the DSP we have
selected understands and that implements the algorithm for the digital system
we have designed. First we have to convert the algorithm for the system under
development to a code in C/C++ language. This can be done manually by one
who is experienced in C language programming. Or we simulate the performance
of the whole system modeled in Simulink, and use a blockset available in it,
known as the Real-Time Workshop [7] to generate the ANSI Standard C code
for the model.2 The C code can be run on PCs, DSPs, and microcontrollers in real
time and non–real time in a variety of target environments. We connect a rapid
prototyping target, for example, the xPC Target, to the physical system but use
the Simulink model as the interface to the physical target. With this setup, we test
and evaluate the performance of the physical target. When the simulation is found
to work satisfactorily, the Real-Time Workshop is used to create and download
an executable code to the target system. Now we can monitor the performance of
the target system and tune its parameters, if necessary. The Real-Time Workshop
is useful for validating the basic concept and overall performance of the whole
system that responds to a program in C code.
An extension of Real-Time Workshop, called the Real-Time Workshop Embedded Coder, is used to generate optimized C code for embedded discrete-time systems.
Note that the C code is portable in the sense that it is independent of any man-
ufacturer’s DSP chip. But the manufacturers may provide their own software to
generate the C code also, optimized for their particular DSP chip. However, pro-
gramming a code in machine language is different for DSP chips from different
manufacturers, and the different manufacturers provide the tools necessary to
obtain the machine code from the C code for their DSP chips.
2. Depending on the version of the MATLAB/Simulink package installed on the computer in the college or university, software such as the FDA Tool, Real-Time Workshop, and others mentioned in this chapter may or may not be available.
CODE COMPOSER STUDIO
Texas Instruments calls its integrated development environment (IDE) the Code Composer Studio. The major steps to be carried out are outlined in Figure 8.2; basically, these steps denote the C compiler, assembler, linker, debugger, simulator, and emulator functions. It must be pointed out that the other manufacturers also design DSP chips for various applications meeting different specifications; their own software bundles follow steps similar to those mentioned above for the Code Composer Studio (CCS) from Texas Instruments (TI).
First, the Code Composer Studio compiles the C/C++ code to an assembly language code in either mnemonic form or algebraic form, for the particular
Figure 8.2 Software development flow for generating the object code from the C code (C/C++ source files → C/C++ compiler → assembly source files → assembler, with macro library files → COFF object files → executable COFF object module).
The [filenames] list the C program files, other assembly language files, and even object files, with their default extensions .c, .asm, and .obj, respectively. The C language is not very efficient in carrying out a few specific operations used in DSP applications, such as fixed-point data processing. For this reason, assembly language files are added to the C language program files in order to improve the efficiency of the time-critical sections of the code delivered by the assembler. We can choose from many options in [-options] and in [link options] to control the way the compiler shell processes the files listed in [filenames] and the way the linker processes the object files. For more details, students should refer to the TI simulator user's guide [25].
The next step is translation of the assembly language code by the assembler
to the object code in binary form (or in machine language) specific to the DSP
platform. The CCS command to invoke the assembler is of the form
Since there might be several C program files that implement the original algorithm in small sections, the assembler produces the output file in several sections. It may also collect, from an external library, assembly source files that implement processes used again and again at several stages of the software, and load them into the list of [filenames]. For example, Texas Instruments provides a large number of highly optimized functions in three libraries, namely, the
DSP library (DSPLib), the image processing library (IMAGELib), and the chip
support library (CSLib). Then there are assembly files that are long programs
and therefore are shortened to a macro so that they can be invoked by a single
or a few lines of instructions. All of these external files are added to the list
of assembly language files and converted to binary form, under a single format known as the common-object file format (COFF). The assembler produces the object file in COFF format; the list file shows the binary object code as well as the assembly source code, and where the program and the variables are allocated in the memory space. But they are allocated in temporary locations, not
in absolute locations. Therefore these relocatable object files can be archived into
a library of reusable files that may be used elsewhere. There are many options
in the assembler, and their use is described in Ref. 25.
The linker utility is invoked to combine all the object files generated by the assembler into one single linked object code; this is done by assigning absolute addresses in the physical memory of the target DSP chip as specified by a memory map. The memory map is created by a linker command file, which lists the various sections of the assembly code and specifies the starting address and length of the memory space in RAM and ROM (random-access and read-only memory), where the individual sections are to be located in the RAM and ROM, and the various options. Then the linker command is invoked as follows:
The linker can call additional object files from an external library and also the runtime support (RTS) library files that are necessary during the debugging procedure. It also has many options that can be used to control the linker output, which is an executable COFF object module that has .out as its extension. Detailed
information on the linker can be found in Ref. 17. Remember that information
on compiler, assembler, and linker commands may be different for other DSP
platforms, and information on these commands may be found in TI references
appropriate for the DSP platform chosen.
After we have created the executable COFF object module, we have to test and debug it by using software simulation and/or hardware emulation. For low-cost simulation, we use the DSP starter kits, for example, the TMS320C5402 DSP starter kit (DSK) for the TMS320C54x DSPs; for more detailed evaluation and debugging, we use an evaluation board such as the TMS320C5409. Finally, we have emulator boards such as the XDS510 JTAG emulator, which are used to run the object code under real-time conditions.
The executable object code is downloaded to the DSP on the DSK board.
The simulator program installed on the PC that is connected to the DSK board
accepts the object code as its input and under the user’s control, simulates the
same actions that would be taken by the DSP device as it executes the object
code. The user can execute the object code one line at a time; insert breakpoints at particular lines of the object program to halt the operation of the program; view the contents of the data memory, program memory, auxiliary registers, stacks, and so on; display the contents of the registers, for example, the input and output of a filtering operation; and change the contents of any register if so desired. One can also observe or monitor the registers controlling the I/O hardware, serial ports, and other components. If minor changes are made, the Code Composer Studio reassembles and links the files quickly to accelerate the debugging process; otherwise the entire program has to be reassembled and linked before debugging can proceed. When the monitoring and bug fixing at all breakpoints are over, execution of the program is resumed manually. By inserting probe points, Code Composer Studio enables us to read data from a file, or write it to a file, on the host PC, halting the execution of the program momentarily and then resuming it. It should be obvious that simulation on a DSK is a slow process and does not check the performance of the peripheral devices that would be connected to the digital system.
In order to test the performance of the object code on the DSP in real time,
we connect an emulator board to the PC by a parallel printer cable, and the
XDS 510 Emulator conforms to the JTAG scan-based interface standard. The
peripheral devices are also connected to the emulator board. A DSP/BIOS II
plug-in is included in the Code Composer Studio to run the emulation of the
software. It also contains the RTDX (real-time data exchange) module that allows
transfer of data between the target DSP and the host PC in real time. The Code
Composer Studio enables us to test and debug the performance of the software
under real-time conditions, at full sampling rate. Without disrupting the execution
of the software, the emulator controls its execution of the breakpoints, single-
step execution, and monitoring of the memory and registers, and checks the
performance of the whole system, including the peripheral devices. When the
emulation of the whole system is found to operate correctly, the software is
approved for production and marketing.
This is a very brief outline of the hardware design process, carried out after
the design of the digital system is completed by use of MATLAB and Simulink.
Students are advised to refer to the extensive literature available from TI and
other manufacturers, in order to become proficient in the use of all software
tools available from them. For example, Analog Devices offers a development
software called Visual DSP++, which includes a C++ compiler, assembler, linker,
user interface, and debugging utilities for their ADSP-21xx DSP chips.
8.7 CONCLUSION
The material presented above is only a very brief outline of the design procedure that is necessary to generate the assembly language code from the C code,
generate the object code using the assembler, and link the various sections of
the object code to obtain the executable object code in machine language. Then
this code is debugged by using an evaluation board, simulator, and emulator; all
of these steps are carried out by using an integrated, seamless software such as
the Code Composer Studio that was used to illustrate the steps. Like any design
process, this is an iterative procedure that may require that we go back to earlier
steps to improve or optimize the design, until we are completely satisfied with
the performance of the whole system in real-time conditions. Then the software
development is complete and is ready for use in the DSP chips chosen for the
specific application.
REFERENCES
MATLAB Primer
9.1 INTRODUCTION
1. The software is available from The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098, phone 508-647-7000, fax 508-647-7001, email [email protected], Website http://www.mathworks.com.
2. If you are logging on to a workstation connected to a computer network, you may have to set the proper environment by typing setenv DISPLAY network number: or some other command before launching MATLAB.
>>A = [1 2 0 3 1 5]
it displays
A = 1 2 0 3 1 5
If the array is typed without assigning a name for the array, >>[1 2 0 3 1 5],
MATLAB responds with
ans = 1 2 0 3 1 5
When you type the elements with a semicolon between them, the elements are displayed as a column vector, for example
the directory to the disk drive a: if you have one in your computer, so that the contents of the workspace are saved on the floppy disk in that drive. Instead of using the semicolon between the elements, you can type each element on the next line or, leaving one space, type three dots at the end of the line and continue on the next line as shown below; this is useful when the array is very long and extends beyond the end of the line:
>>C=[1 2 0
3 1 5
0 4 -2]
or
>>C=[ 1 2 0; 3 1 5; ...
0 4 -2]
displays the answer
C = 1 2 0
3 1 5
0 4 -2
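For readers following along without MATLAB, the same arrays can be entered in Python with NumPy (a sketch; rows are nested lists instead of semicolon-separated):

```python
import numpy as np

A = np.array([1, 2, 0, 3, 1, 5])       # the row vector A
C = np.array([[1, 2, 0],
              [3, 1, 5],
              [0, 4, -2]])             # the 3 x 3 matrix C
print(A.shape, C.shape)                # (6,) (3, 3)
```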
ans
1
2
0
3
1
5
D =
1 3 0
2 1 4
0 5 -2
ans =
1
394 MATLAB PRIMER
3
0
2
1
4
0
5
-2
F= 5 10 0
15 5 25
0 20 -10
FF=x+C gives the output as
FF= 6 7 5
8 6 10
5 9 3
Addition +
Subtraction -
Multiplication *
Power or exponent ^
Transpose ’
Left division \
Right division /
Note that the command x=M\b gives us the solution to the equation M*x=b,
where M is a square matrix that is assumed to be nonsingular. In matrix algebra,
the solution is given by x = M^(-1)*b. Left division is the more commonly used
operation in applications of matrix algebra. (The command for right division,
x=b/M, gives the solution to the equation x*M=b, assuming that x and M are
compatible for multiplication; in matrix algebra this solution is given by
x = b*M^(-1).)
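For readers who want to verify the left-division result outside MATLAB, the following plain-Python sketch solves M*x = b by Gaussian elimination with partial pivoting; the function name solve_linear is our own, and the data match the example that follows:

```python
# Illustrative sketch (not from the book): solving M*x = b, which is what
# MATLAB's left division x = M\b computes for a nonsingular square M.
def solve_linear(M, b):
    """Gaussian elimination with partial pivoting on copies of M and b."""
    n = len(M)
    A = [row[:] + [bi] for row, bi in zip(M, b)]  # augmented matrix [M | b]
    for k in range(n):
        # pivot: bring the largest-magnitude entry in column k up to row k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n + 1):
                A[i][j] -= f * A[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (A[i][n] - s) / A[i][i]
    return x

# Same data as the MATLAB example below: M = A, b = [2; 4; 4]
M = [[1, 2, 1], [0, 1, 1], [2, 1, 1]]
b = [2, 4, 4]
print([round(v, 9) + 0.0 for v in solve_linear(M, b)])  # [0.0, -2.0, 6.0]
```

The rounding in the final line only suppresses floating-point residue; the exact solution is x = [0, -2, 6].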
When we use the same variables used above but define them with new values,
they become the current values for the variables, so we define and use them
below as examples of the operations described above:
>>A=[1 2 1;0 1 1;2 1 1];
>>B=[2 1 0;1 1 1;-1 2 1];
>>C=A+B
C=
3 3 1
1 2 2
1 3 2
>>D=A*B
D=
3 5 3
0 3 2
4 5 2
>>M=A;
>>b=[2;4;4];
>>x=M\b
x =
    0.0000
   -2.0000
    6.0000
Whereas addition and subtraction of matrices are carried out term by term on
the elements in corresponding positions, multiplication and "division" of
matrices follow different rules; MATLAB gives the correct answer in all the
preceding operations. MATLAB has another type of operation, carried out when we
put a dot before the sign for the mathematical operation between two matrices:
the multiplication (.*), division (./), and exponentiation (.^) of the terms in
the corresponding positions of two compatible matrices are the three array
operations.
Instead of multiplying the two matrices as D=A*B, now we type a dot before
the sign for multiplication. For example, the answer to >>D=A.*B is
D=
2 2 0
0 1 1
-2 2 1
Now, with X=[1 2;3 4], we compute U=X.^2 and V=2.^X and get the following outputs:
>>U=X.^2
U =
     1     4
     9    16
>>V=2.^X
V =
     2     4
     8    16
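For readers who want to verify these array operations outside MATLAB, here is a plain-Python sketch (the helper elementwise is our own name, not a MATLAB function):

```python
# Illustrative sketch (not from the book): MATLAB's "dot" operators .*, ./ and .^
# act element by element on matrices of the same size.
def elementwise(op, A, B):
    """Apply a two-argument function op to corresponding elements of A and B."""
    return [[op(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2, 1], [0, 1, 1], [2, 1, 1]]
B = [[2, 1, 0], [1, 1, 1], [-1, 2, 1]]
X = [[1, 2], [3, 4]]

D = elementwise(lambda a, b: a * b, A, B)   # MATLAB: D = A.*B
U = [[x ** 2 for x in row] for row in X]    # MATLAB: U = X.^2
V = [[2 ** x for x in row] for row in X]    # MATLAB: V = 2.^X

print(D)  # [[2, 2, 0], [0, 1, 1], [-2, 2, 1]]
print(U)  # [[1, 4], [9, 16]]
print(V)  # [[2, 4], [8, 16]]
```

The outputs agree with the MATLAB results shown above.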
A matrix can be expanded by adding new matrices and column or row vectors
as illustrated by the following examples:
>>F=[A B]
F=
1 2 1 2 1 0
0 1 1 1 1 1
2 1 1 -1 2 1
>>b=[5 4 2];
>>G=[A;B;b]
G =
1 2 1
0 1 1
2 1 1
2 1 0
1 1 1
-1 2 1
5 4 2
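As a cross-check outside MATLAB, the two forms of concatenation can be mimicked with Python lists (an illustrative sketch, not from the primer):

```python
# Illustrative sketch (not from the book): MATLAB's [A B] places matrices side
# by side, and [A;B;b] stacks them; here the same effect with Python lists.
A = [[1, 2, 1], [0, 1, 1], [2, 1, 1]]
B = [[2, 1, 0], [1, 1, 1], [-1, 2, 1]]
b = [5, 4, 2]

F = [ra + rb for ra, rb in zip(A, B)]  # MATLAB: F = [A B]  (3 x 6)
G = A + B + [b]                        # MATLAB: G = [A; B; b]  (7 x 3)

print(F[0])    # [1, 2, 1, 2, 1, 0]
print(len(G))  # 7
print(G[6])    # [5, 4, 2]
```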
The division operator ./ can also be used to divide a scalar by each of the
matrix elements, provided there are no zeros in the matrix; for example,
>>6./[3 2; 3 6] gives
ans =
     2     3
     2     1
A single element of a matrix can be changed by assigning it a new value; for
example, >>G(7,2)=6 replaces the element in row 7, column 2 of G and displays
G =
     1     2     1
     0     1     1
     2     1     1
     2     1     0
     1     1     1
    -1     2     1
     5     6     2
The colon sign : can be used to extract a submatrix from a matrix as shown by
the following examples:
>>Q=[2 5 6; 3 2 4; -3 1 8]
>>Q(:,2) gives a submatrix with elements in all rows and the second column only:
ans =
5
2
1
The command Q(3,:) gives the elements in all columns and the third row only:
ans =
-3 1 8
The command Q(1:2,2:3) gives the elements in the rows from 1 to 2 and in
the columns from 2 to 3:
ans =
5 6
2 4
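As a cross-check outside MATLAB, the same extractions can be mimicked with Python list slicing (an illustrative sketch; note that MATLAB indices start at 1 while Python's start at 0):

```python
# Illustrative sketch (not from the book): MATLAB colon indexing, mirrored with
# Python list slicing on a nested list. MATLAB is 1-based, Python is 0-based.
Q = [[2, 5, 6], [3, 2, 4], [-3, 1, 8]]

col2 = [row[1] for row in Q]          # MATLAB: Q(:,2)   (all rows, column 2)
row3 = Q[2]                           # MATLAB: Q(3,:)   (row 3, all columns)
block = [row[1:3] for row in Q[0:2]]  # MATLAB: Q(1:2,2:3)

print(col2)   # [5, 2, 1]
print(row3)   # [-3, 1, 8]
print(block)  # [[5, 6], [2, 4]]
```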
There are many other operations that can be applied to a matrix A, as listed below:
MATRIX OPERATIONS
There are a few special matrices; we will list only three that are often found
useful in manipulating matrices:
ones(m,n), which gives a matrix with the number one in all its m rows and
n columns
zeros(m,n), which gives a matrix with zeros in all its m rows and n columns
eye(m), which gives the “identity matrix” of order m × m.
We note that the inverse of a matrix A is obtained from the function inv(A),
the determinant from det(A), and the rank from rank(A).
Since this is only a primer on MATLAB, it does not contain all the information
on its functions. You should refer to the user’s guide that accompanies every
software program mentioned above or any other books on MATLAB [1–3]. When
you have started MATLAB or changed to any of the subdirectories for the toolboxes,
online help is readily available. You type help functionname, where
functionname is the name of the function on which detailed information is
desired, and immediately that information is displayed on the command window.
So there is no need to memorize the syntax and various features of the function
and so on. The best way to learn the use of MATLAB and the toolboxes is to
try the functions on the computer, using the help utility if necessary.
As an example of plotting, the following statements compute samples of sin(pi*t) and display them as a stem plot:

t = [0.0:0.1:2.0];
v=sin(pi*t);
stem(v);grid
title(’Values of sin(pi*t)’)
ylabel(’Values of sin(pi*t)’)
xlabel(’Values of 10t’)
figure
[Figures: a stem plot of sin(pi*t) against the sample index 0-25 (labeled "Values of 10t"), and a line plot of sin(pi*t) against t from 0 to 2, the latter produced by the statements below]
plot(t,v);grid
title(’Plot of sin(pi*t)’)
ylabel(’Value of sin(pi*t)’)
xlabel(’Value of t’)
A third argument can be added to plot to select the color of the curve:

y yellow
m magenta
c cyan
r red
b blue
w white
k black
The next argument that can be added selects the marker or line style used to
draw the curve. For example, plot(t,v,'g+') will plot the curve with the + sign
instead of the solid line, which is the default. Other markers and line styles
that are available are

o  circle
.  point
*  star
-  solid line
:  dotted line
-- dashed line
-. dash-dot
One can plot several curves in the same figure; for example, we can plot both
v and y versus t by the command plot(t,v,'g-',t,y,'r*'). Another
way of plotting more than one variable in the same figure is to use the command
hold on after plotting the first variable and then typing the command for plotting
the second variable:

plot(t,v,'g');
hold on
plot(t,y,'r')
The use of the MATLAB commands subplot, grid, and axis has been
described and used earlier in the book. The commands gtext and ginput are
also very useful in plotting. There is a tool called fvtool (filter visualization
tool) in the more recent versions of the Signal Processing Toolbox, which offers
several other features in plotting the response of digital filters. You may type help
gtext, help ginput, or help fvtool to get more information about them.
TRIGONOMETRIC FUNCTIONS

sin    sine
cos    cosine
tan    tangent
asin   arcsine
acos   arccosine
atan   arctangent
atan2  four-quadrant arctangent
sinh   hyperbolic sine
cosh   hyperbolic cosine
tanh   hyperbolic tangent
asinh  hyperbolic arcsine
acosh  hyperbolic arccosine
atanh  hyperbolic arctangent
MATHEMATICAL FUNCTIONS
abs absolute value or magnitude
angle phase angle of a complex number
sqrt square root
real real part of a complex number
imag imaginary part
conj complex conjugate
round round toward nearest integer
fix round toward zero
floor round toward −∞
ceil round toward ∞
sign signum function
rem remainder
exp exponential (base e)
log natural logarithm
log10 log base 10
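The rounding functions round, fix, floor, and ceil are easy to confuse for negative arguments. As a cross-check outside MATLAB (an illustrative sketch, not from the primer), Python's math module provides direct counterparts, with math.trunc playing the role of fix:

```python
# Illustrative sketch (not from the book): the four rounding functions above
# differ only in how they treat the fractional part of a negative number.
import math

print(round(-2.6))       # -3   round: toward the nearest integer
print(math.trunc(-2.6))  # -2   fix:   toward zero
print(math.floor(-2.6))  # -3   floor: toward -infinity
print(math.ceil(-2.6))   # -2   ceil:  toward +infinity
print(math.fmod(7, 3))   # 1.0  rem(7,3): remainder after division
```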
of precision if and when it is necessary, for example, when we use the functions
and scripts in the Signal Processing Toolbox.
But this computation is carried out more easily by the following two statements,
which give the same values for x(n), n = 1, 2, 3, . . . , 7; we have to insert a
dot before the exponent (^) since n is a row vector of seven elements. It is very
helpful to find the order of a matrix or a vector A by using the statement
S=size(A), so as to know when to use the dot for the term-by-term operation,
particularly when we get an error message about the dimensions of the matrices:
>>n=1:7;
x(n)=(0.5).^n
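The same computation can be checked outside MATLAB; a plain-Python equivalent of the vectorized statement (an illustrative sketch, not from the primer) is:

```python
# Illustrative sketch (not from the book): the vectorized MATLAB statement
# x(n) = (0.5).^n for n = 1, 2, ..., 7, written as a Python list comprehension.
x = [0.5 ** n for n in range(1, 8)]
print(x[0], x[6])  # 0.5 0.0078125
```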
When the script is executed, the program displays the prompt Type in
the input parameters for xyz and waits for the input from the keyboard.
We may have requests for input for several parameters, and when the data for
all the parameters are entered by us from the keyboard, the program is executed.
This is helpful when we wish to find the response (output) of the program with
different values for the input parameters.
Similarly, when we add the statement
the program displays the values for the parameter after the script has been
executed, which may not be otherwise displayed as the output from the pro-
gram.
When any statement is preceded by the % character, the statement is not exe-
cuted by the program; it is used only as a comment for information or explanation
of what the program does. It is a good practice to add a few lines with this %
character at the beginning of any script that we write and include the name of
the file also.
Example 9.1
We click File, then New, on the menu bar when we are in the command window and
choose M-file. An edit window appears next. Now we give an example of an M-file
that we write using the built-in text editor:
This file is saved with a name Ration.m on the current drive, and then we get
back to the command window, in which we type >>Ration. All the statements
of the M-file Ration.m are executed immediately, and the plot is shown in the
graphics window (see Fig. 9.3). If there are any error messages, we launch the
file in the edit window, then edit and correct the statements where necessary.
This example is a simple one, but we have many examples of M-files as well as
files used in an interactive mode discussed earlier in the book.
[Figure 9.3: plot produced by the M-file Ration.m, with "Value of x" from 0 to 1 on the horizontal axis]

SIGNAL PROCESSING TOOLBOX
The Signal Processing Toolbox is a collection of about 160 functions that are
extensively used for the analysis, design, and realization of discrete-time sys-
tems and tasks or operations such as modeling, detection, filtering, prediction,
and spectral analysis in digital signal processing. They run under MATLAB,
which has about 330 functions and operations. By typing help function in
the command window, where function is the name of these functions, detailed
information about them is displayed. By typing help signal, we get a com-
plete list of all the functions in the Signal Processing Toolbox, when this has
been installed as a subdirectory of the MATLAB directory. If we know the name
of the function that does the numerical processing but not the syntax and other
details, we can type help function. But when we don't know the name of the
MATLAB function that carries out the processing, we may have to
go through the list of all MATLAB functions and choose the appropriate one for
the purpose. The list of all MATLAB functions in the Signal Processing Toolbox
is given in Section 9.2.1, and students are encouraged to use the help utility and
become familiar with as many of the functions as possible. That should improve
their efficiency in calling up the appropriate function immediately when the need
arises while they write and edit the script. Note that we can use any of the thou-
sands of functions found in all other toolboxes and in the simulation software
called Simulink that runs under MATLAB, which makes this software extremely
powerful and versatile.
Filter analysis.
abs - Magnitude.
angle - Phase angle.
filternorm - Compute the 2-norm or inf-norm of a digital
filter.
freqs - Laplace transform frequency response.
freqspace - Frequency spacing for frequency response.
freqz - Z-transform frequency response.
fvtool - Filter Visualization Tool.
grpdelay - Group delay.
impz - Discrete impulse response.
phasez - Digital filter phase response.
phasedelay - Phase delay of a digital filter.
unwrap - Unwrap phase.
zerophase - Zero-phase response of a real filter.
zplane - Discrete pole-zero plot.
Filter implementation.
conv - Convolution.
conv2 - 2-D convolution.
convmtx - Convolution matrix.
deconv - Deconvolution.
fftfilt - Overlap-add filter implementation.
filter - Filter implementation.
filter2 - Two-dimensional digital filtering.
filtfilt - Zero-phase version of filter.
filtic - Determine filter initial conditions.
latcfilt - Lattice filter implementation.
medfilt1 - 1-Dimensional median filtering.
sgolayfilt - Savitzky-Golay filter implementation.
sosfilt - Second-order sections (biquad) filter
implementation.
upfirdn - Up sample, FIR filter, down sample.
Filter discretization.
Windows.
Window object.
Transforms.
Cepstral analysis.
Parametric modeling.
Linear Prediction.
Waveform generation.
Specialized operations.
See also SIGDEMOS, AUDIO, and, in the Filter Design Toolbox, FILTERDESIGN.
If we type help functionname, we get information about the syntax and
use of the function, but if we type type functionname, we get
the program listing also. An example of this is given below; one can modify any
function, save it with a different name, and run it:
xi = 4*xi.^2;
w = besseli(0,beta*sqrt(1-xi/xind))/bes;
w = abs([w(n:-1:odd+1) w])’;
% [EOF] kaiser.m
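The listing above is the tail end of kaiser.m, which builds the window from the zeroth-order modified Bessel function I0 (besseli(0,...)). As a rough illustration of the same idea, and not the toolbox code itself, the window can be formed directly from the power series for I0 (the names i0 and kaiser_window below are our own):

```python
# Illustrative sketch (not the toolbox's kaiser.m): a Kaiser window computed
# from the power series of the zeroth-order modified Bessel function I0.
import math

def i0(x, terms=25):
    """I0(x) = sum over m of ((x/2)^m / m!)^2, truncated after `terms` terms."""
    return sum(((x / 2) ** m / math.factorial(m)) ** 2 for m in range(terms))

def kaiser_window(N, beta):
    """Length-N Kaiser window with shape parameter beta (N >= 2)."""
    denom = i0(beta)
    return [i0(beta * math.sqrt(1 - (2 * k / (N - 1) - 1) ** 2)) / denom
            for k in range(N)]

w = kaiser_window(9, 6.0)
# The window is symmetric about its midpoint and peaks at 1 in the middle.
print(round(w[4], 6))  # 1.0
```

Larger beta narrows the window and increases the stopband attenuation of the resulting FIR filter, which is why kaiser.m takes beta as a design parameter.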
REFERENCES
INDEX

Bartlett window, finite impulse response (FIR) filters, 266, 268–269
Base station controller (BSC), 25
Base transceiver stations (BTSs), 25–27
Bessel function, 268
Bilinear transformations, infinite impulse response (IIR) filters, 221–226
Binary coding, 6
Binary numbers, in quantized filter analysis, 360–367
Binomial theorem, 203
Biomedical systems, 2, 354
Blackman window, finite impulse response (FIR) filters, 266, 268
Bode plot, 149
Bone scanning, 2
Bounded-input bounded-output (BIBO) stability, 77–78
Butterworth bandpass digital filters, 221, 236
Butterworth lowpass filters
    design theory of, 194–201
    filter realization
        generally, 323–324
        using MATLAB, 334–337
Butterworth magnitude response, 192–194
Butterworth polynomials, 197–198
C/C++ language, 385–386
Canonic realization, FIR filters, 309–310
Cardiac pacemakers, 2
Cascade realization
    finite impulse response (FIR) filters, 306–307
    infinite impulse response (IIR) filters, 313–317, 329–331, 366
Cauer filter, 212
Causal sequence, 9, 133
Causal system, 33
Cell phones, 354
Cell repeat pattern, mobile network system, 26
Channel coding, 2
Characteristic roots, 58
Chebyshev (I/II) approximation, 189, 202, 208–209, 284
Chebyshev (I) bandpass filter, 125, 215, 235
Chebyshev (I/II) highpass filters, 213, 237–238
Chebyshev (I/II) lowpass filters
    characterized, 323–324
    design of, 210–211
    design theory of, 204–208
    realization using MATLAB, 334–337
Chebyshev polynomials, properties of, 202–204
Circuit boards, filter design and, 19
Circuit model, discrete-time system, 71–73
Closed-form expression, 65, 155
Code-division multiple access (CDMA) technology, 2, 25
Common-object file format (COFF), 387–388
Complementary function/complementary solution, 58
Complementary metal oxide semiconductor (CMOS) transistors, 19, 23
Complex conjugate poles, 51–54
Complex conjugate response, discrete-time Fourier transform, 145
Computed tomography (CT) scanning, 2
Computer networking technology, 27
Conjugation property, discrete-time Fourier transform, 145
Consumer electronics, 2
Continuous-time filters, see Analog filters
Continuous-time function, 113
Continuous-time signal, 3–4, 21, 28, 41–42
Continuous-time systems, 24
Convolution
    allpass filters, 325
    defined, 25
    discrete-time Fourier series (DTFS), 164–169
    linear phase finite impulse response (FIR) filters, 265
Convolution sum
    discrete-time Fourier transform (DTFT), 125
    filter realizations, 304
    time-domain analysis, 38–41, 82, 94
    z-transform theory, 65–70
Cooley–Tukey algorithm, 21
Cos(ω0 n), properties of, 14–19
CPU (central processing unit), 384
Cramer's rule, 62
Cutoff frequency
    finite impulse response (FIR) filters, 266, 293–294
    frequency-domain analysis, 141
    infinite impulse response (IIR) filters, 213, 226–227
    linear phase FIR filters, 272
Data encryption, 2
Decryption, 2
Delay, see also Group delay
    defined, 33
    equalizers, 231, 321
    hardware containing, 68
    z-transform theory, 46–49
Demodulation, 25