Analog and Digital Signal Processing
Ashok Ambardar
Michigan Technological University
Pacific Grove Albany Belmont Bonn Boston Cincinnati Detroit Johannesburg London
Madrid Melbourne Mexico City New York Paris Singapore Tokyo Toronto Washington
CONTENTS
LIST OF TABLES xi
PREFACE xiii
FROM THE PREFACE TO THE FIRST EDITION xv
1 OVERVIEW 1
1.0 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 The Frequency Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 From Concept to Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 ANALOG SIGNALS 8
2.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1 Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Operations on Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Signal Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4 Harmonic Signals and Sinusoids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.5 Commonly Encountered Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.6 The Impulse Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.7 The Doublet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.8 Moments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3 DISCRETE SIGNALS 39
3.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.1 Discrete Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.2 Operations on Discrete Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.3 Decimation and Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.4 Common Discrete Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.5 Discrete-Time Harmonics and Sinusoids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.6 Aliasing and the Sampling Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.7 Random Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4 ANALOG SYSTEMS 68
4.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.2 System Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.3 Analysis of LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.4 LTI Systems Described by Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . 76
4.5 The Impulse Response of LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.6 System Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.7 Application-Oriented Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5 DISCRETE-TIME SYSTEMS 96
5.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.1 Discrete-Time Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.2 System Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
5.3 Digital Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5.4 Digital Filters Described by Difference Equations . . . . . . . . . . . . . . . . . . . . . . . . 103
5.5 Impulse Response of Digital Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
5.6 Stability of Discrete-Time LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
5.7 Connections: System Representation in Various Forms . . . . . . . . . . . . . . . . . . . . . 116
5.8 Application-Oriented Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
10 MODULATION 300
10.0 Scope and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
10.1 Amplitude Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
10.2 Single-Sideband AM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
10.3 Angle Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
10.4 Wideband Angle Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
10.5 Demodulation of FM Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
10.6 The Hilbert Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
REFERENCES 798
INDEX 801
LIST OF TABLES
PREFACE
In keeping with the goals of the first edition, this second edition of Analog and Digital Signal Processing
is geared to junior and senior electrical engineering students and stresses the fundamental principles and
applications of signals, systems, transforms, and filters. The premise is to help the student think clearly in
both the time domain and the frequency domain and switch from one to the other with relative ease. The
text assumes familiarity with elementary calculus, complex numbers, and basic circuit analysis.
This edition has undergone extensive revision and refinement, in response to reviewer comments and to
suggestions from users of the first edition (including students). Major changes include the following:
1. At the suggestion of some reviewers, the chapters have been reorganized. Specifically, continuous and
discrete aspects (that were previously covered together in the first few chapters) now appear in separate
chapters. This should allow instructors easier access to either sequential or parallel coverage of analog
and discrete signals and systems.
2. The material in each chapter has been pruned and streamlined to make the book more suited as a
textbook. We highlight the most important concepts and problem-solving methods in each chapter by
including boxed review panels. The review panels are reinforced by discussions and worked examples.
Many new figures have been added to help the student grasp and visualize critical concepts.
3. New application-oriented material has been added to many chapters. The material focuses on how the
theory developed in the text finds applications in diverse fields such as audio signal processing, digital
audio special effects, echo cancellation, spectrum estimation, and the like.
4. Many worked examples in each chapter have been revised and new ones added to reinforce and extend
key concepts. Problems at the end of each chapter are now organized into Drill and Reinforcement,
Review and Exploration, and Computation and Design and include a substantial number of new
problems. The computation and design problems, in particular, should help students appreciate the
application of theoretical principles and guide instructors in developing projects suited to their own
needs.
5. The Matlab-based software supplied with the book has been revised and expanded. All the routines
have been upgraded to run on the latest version (currently, v5) of both the professional edition and
student edition of Matlab, while maintaining downward compatibility with earlier versions.
6. The Matlab appendices (previously at the end of each chapter) have been consolidated into a separate
chapter and substantially revamped. This has allowed us to present integrated application-oriented
examples spanning across chapters in order to help the student grasp important signal-processing
concepts quickly and effectively. Clear examples of Matlab code based on native Matlab routines,
as well as the supplied routines, are included to help accelerate the learning of Matlab syntax.
7. A set of new self-contained, menu-driven, graphical user interface (GUI) programs with point-and-click
features is now supplied for ease of use in visualizing basic signal processing principles and concepts.
These GUIs require no experience in Matlab programming, and little experience with its syntax,
and thus allow students to concentrate their efforts on understanding concepts and results. The
programs cover signal generation and properties, time-domain system response, convolution, Fourier
series, frequency response and Bode plots, analog filter design, and digital filter design. The GUIs are
introduced at the end of each chapter, in the Computation and Design section of the problems. I
am particularly grateful to Craig Borghesani, Terasoft, Inc. (http://world.std.com/~borg/) for his
help and Matlab expertise in bringing many of these GUIs to fruition.
This book has profited from the constructive comments and suggestions of the following reviewers:
Professor Khaled Abdel-Ghaffar, University of California at Davis
Professor Tangul Basar, University of Illinois
Professor Martin E. Kaliski, California Polytechnic State University
Professor Roger Goulet, Université de Sherbrooke
Professor Ravi Kothari, University of Cincinnati
Professor Nicholas Kyriakopoulos, George Washington University
Professor Julio C. Mandojana, Mankato State University
Professor Hadi Saadat, Milwaukee School of Engineering
Professor Jitendra K. Tugnait, Auburn University
Professor Peter Willett, University of Connecticut
Here, at Michigan Technological University, it is also our pleasure to acknowledge the following:
Professor Clark R. Givens for lending mathematical credibility to portions of the manuscript
Professor Warren F. Perger for his unfailing help in all kinds of TEX-related matters
Professor Tim Schulz for suggesting some novel DSP projects, and for supplying several data files
Finally, at PWS Publishing, Ms Suzanne Jeans, Editorial Project Manager, and the editorial and production
staff (Kirk Bomont, Liz Clayton, Betty Duncan, Susan Pendleton, Bill Stenquist, Jean Thompson, and
Nathan Wilbur), were instrumental in helping meet (or beat) all the production deadlines.
We would appreciate hearing from you if you find any errors in the text or discover any bugs in the
software. Any errata for the text and upgrades to the software will be posted on our Internet site.
FROM THE PREFACE TO THE FIRST EDITION
This book on analog and digital signal processing is intended to serve both as a text for students and as a
source of basic reference for professionals across various disciplines. As a text, it is geared to junior/senior
electrical engineering students and details the material covered in a typical undergraduate curriculum. As
a reference, it attempts to provide a broader perspective by introducing additional special topics towards
the later stages of each chapter. Complementing this text, but deliberately not integrated into it, is a set of
powerful software routines (running under Matlab) that can be used not only for reinforcing and visualizing
concepts but also for problem solving and advanced design.
The text stresses the fundamental principles and applications of signals, systems, transforms and filters.
It deals with concepts that are crucial to a full understanding of time-domain and frequency-domain rela-
tionships. Our ultimate objective is that the student be able to think clearly in both domains and switch
from one to the other with relative ease. It is based on the premise that what might often appear obvious
to the expert may not seem so obvious to the budding expert. Basic concepts are, therefore, explained and
illustrated by worked examples to bring out their importance and relevance.
Scope
The text assumes familiarity with elementary calculus, complex numbers, basic circuit analysis and (in a few
odd places) the elements of matrix algebra. It covers the core topics in analog and digital signal processing
taught at the undergraduate level. The links between analog and digital aspects are explored and emphasized
throughout. The topics covered in this text may be grouped into the following broad areas:
1. An introduction to signals and systems, their representation and their classification.
2. Convolution, a method of time-domain analysis, which also serves to link the time domain and the
frequency domain.
3. Fourier series and Fourier transforms, which provide a spectral description of analog signals, and
their applications.
4. The Laplace transform, which forms a useful tool for system analysis and its applications.
5. Applications of Fourier and Laplace techniques to analog filter design.
6. Sampling and the discrete-time Fourier transform (DTFT) of sampled signals, and the DFT and
the FFT, all of which reinforce the central concept that sampling in one domain leads to a periodic
extension in the other.
7. The z-transform, which extends the DTFT to the analysis of discrete-time systems.
8. Applications of digital signal processing to the design of digital filters.
We have tried to preserve a rational approach and include all the necessary mathematical details, but we
have also emphasized heuristic explanations whenever possible. Each chapter is more or less structured as
follows:
1. A short opening section outlines the objectives and topical coverage and points to the required back-
ground.
2. Central concepts are introduced in early sections and illustrated by worked examples. Special topics
are developed only in later sections.
3. Within each section, the material is broken up into bite-sized pieces. Results are tabulated and sum-
marized for easy reference and access.
4. Whenever appropriate, concepts are followed by remarks, which highlight essential features or limita-
tions.
5. The relevant software routines and their use are outlined in Matlab appendices to each chapter.
Sections that can be related to the software are specially marked in the table of contents.
6. End-of-chapter problems include a variety of drills and exercises. Matlab code to generate answers
to many of these appears on the supplied disk.
A solutions manual for instructors is available from the publisher.
Software
A unique feature of this text is the analog and digital signal processing (ADSP) software toolbox for signal
processing and analytical and numerical computation designed to run under all versions of Matlab. The
routines are self-demonstrating and can be used to reinforce essential concepts, validate the results of ana-
lytical paper and pencil solutions, and solve complex problems that might, otherwise, be beyond the skills
of analytical computation demanded of the student.
The toolbox includes programs for generating and plotting signals, regular and periodic convolution,
symbolic and numerical solution of differential and difference equations, Fourier analysis, frequency response,
asymptotic Bode plots, symbolic results for system response, inverse Laplace and inverse z-transforms, design
of analog, IIR and FIR filters by various methods, and more.
Since our primary intent is to present the principles of signal processing, not software, we have made no
attempt to integrate Matlab into the text. Software related aspects appear only in the appendices to each
chapter. This approach also maintains the continuity and logical flow of the textual material, especially for
users with no inclination (or means) to use the software. In any case, the self-demonstrating nature of the
routines should help you to get started even if you are new to Matlab. As an aside, all the graphs for this
text were generated using the supplied ADSP toolbox.
We hasten to provide two disclaimers. First, our use of Matlab is not to be construed as an endorsement
of this product. We just happen to like it. Second, our routines are supplied in good faith; we fully expect
them to work on your machine, but provide no guarantees!
Acknowledgements
This book has gained immensely from the incisive, sometimes provoking, but always constructive, criticism
of Dr. J. C. Mandojana. Many other individuals have also contributed in various ways to this effort. Special
thanks are due, in particular, to
Drs. R.W. Bickmore and R.T. Sokolov, who critiqued early drafts of several chapters and provided
valuable suggestions for improvement.
Dr. A.R. Hambley, who willingly taught from portions of the final draft in his classes.
Drs. D.B. Brumm, P.H. Lewis and J.C. Rogers, for helping set the tone and direction in which the
book finally evolved.
Mr. Scott Ackerman, for his invaluable computer expertise in (the many) times of need.
At PWS Publishing, the editor Mr. Tom Robbins, for his constant encouragement, and Ms. Pam
Rockwell for her meticulous attention to detail during all phases of editing and production, and Ken
Morton, Lai Wong, and Lisa Flanagan for their behind-the-scenes help.
The students, who tracked down inconsistencies and errors in the various drafts, and provided extremely
useful feedback.
The Mathworks, for permission to include modified versions of a few of their m-files with our software.
We would also like to thank Dr. Mark Thompson, Dr. Hadi Saadat and the following reviewers for their
useful comments and suggestions:
Campus lore has it that students complain about texts prescribed by their instructors as being too
highbrow or tough and not adequately reflecting student concerns, while instructors complain about texts
as being low-level and, somehow, less demanding. We have consciously tried to write a book that both the
student and the instructor can tolerate. Whether we have succeeded remains to be seen and can best be
measured by your response. And, if you have read this far, and are still reading, we would certainly like to
hear from you.
Chapter 1
OVERVIEW
1.0 Introduction
I listen and I forget,
I see and I remember, I do and I learn.
A Chinese Proverb
This book is about signals and their processing by systems. This chapter provides an overview of the
terminology of analog and digital processing and of the connections between the various topics and concepts
covered in subsequent chapters. We hope you return to it periodically to fill in the missing details and get
a feel for how all the pieces fit together.
1.1 Signals
Our world is full of signals, both natural and man-made. Examples are the variation in air pressure when we
speak, the daily highs and lows in temperature, and the periodic electrical signals generated by the heart.
Signals represent information. Often, signals may not convey the required information directly and may
not be free from disturbances. It is in this context that signal processing forms the basis for enhancing,
extracting, storing, or transmitting useful information. Electrical signals perhaps offer the widest scope for
such manipulations. In fact, it is commonplace to convert signals to electrical form for processing.
The value of a signal, at any instant, corresponds to its (instantaneous) amplitude. Time may assume
a continuum of values, t, or discrete values, nts , where ts is a sampling interval and n is an integer.
The amplitude may also assume a continuum of values or be quantized to a finite number of discrete levels
between its extremes. This results in four possible kinds of signals, as shown in Figure 1.1.
The music you hear from your compact disc (CD) player due to changes in the air pressure caused by
the vibration of the speaker diaphragm is an analog signal because the pressure variation is a continuous
function of time. However, the information stored on the compact disc is in digital form. It must be processed
and converted to analog form before you can hear the music. A record of the yearly increase in the world
population describes time measured in increments of one (year), and the population increase is measured in
increments of one (person). It is a digital signal with discrete values for both time and population.
Few other technologies have revolutionized the world as profoundly as those based on digital signal
processing. For example, the technology of recorded music was, until recently, completely analog from end
to end, and the most important commercial source of recorded music used to be the LP (long-playing) record.
The advent of the digital compact disc has changed all that in the span of just a few short years and made
the long-playing record practically obsolete. Signal processing, both analog and digital, forms the core of
this application and many others.
1.2 Systems
Systems may process analog or digital signals. All systems obey energy conservation. Loosely speaking, the
state of a system refers to variables, such as capacitor voltages and inductor currents, which yield a measure
of the system energy. The initial state is described by the initial value of these variables or initial conditions.
A system is relaxed if initial conditions are zero. In this book, we study only linear systems (whose
input-output relation is a straight line passing through the origin). If a complicated input can be split into
simpler forms, linearity allows us to find the response as the sum of the response to each of the simpler
forms. This is superposition. Many systems are actually nonlinear. The study of nonlinear systems often
involves making simplifying assumptions, such as linearity.
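Superposition lends itself to a simple numerical check. The Python sketch below (illustrative only; the moving-average and squaring systems are assumed examples, not from the text) shows a linear system passing the check and a nonlinear one failing it:

```python
import numpy as np

def moving_average(x):
    # A 3-point moving average: a linear, time-invariant operation.
    return np.convolve(x, np.ones(3) / 3.0)

def squarer(x):
    # A memoryless squaring system: nonlinear.
    return np.asarray(x) ** 2

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(50), rng.standard_normal(50)
a, b = 2.0, -3.0

# Superposition: the response to a*x1 + b*x2 must equal a*y1 + b*y2.
lin_ok = np.allclose(moving_average(a * x1 + b * x2),
                     a * moving_average(x1) + b * moving_average(x2))
nonlin_ok = np.allclose(squarer(a * x1 + b * x2),
                        a * squarer(x1) + b * squarer(x2))

print(lin_ok, nonlin_ok)   # the averager passes, the squarer fails
```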
For an RC circuit with time constant τ = RC, input vi(t), and output v0(t), the responses to some standard inputs are:

Input                      Response
A, t ≥ 0                   A(1 − e^(−t/τ)), t ≥ 0
A cos(ω0t)                 [A/(1 + ω0²τ²)^(1/2)] cos(ω0t + θ), where θ = −tan⁻¹(ω0τ)
A cos(ω0t), t ≥ 0          [A/(1 + ω0²τ²)^(1/2)] cos(ω0t + θ) + [Aω0τ/(1 + ω0²τ²)] e^(−t/τ), t ≥ 0

[Figure: An RC circuit with input vi(t) and output v0(t); its step response rises as 1 − e^(−t/τ).]
[Figure: (a) Input cos(ω0t) and response (dark). (b) Input cos(ω0t), t > 0, and response (dark).]
It is not our intent here to see how the solutions arise but how to interpret the results in terms of system
performance. The cosine input yields only a sinusoidal component as the steady-state response. The
response to the suddenly applied step and the switched cosine also includes a decaying exponential term
representing the transient component.
Figure 1.4 Step response of an RC circuit for various τ and the concept of rise time
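The notion of rise time can be made concrete with a short computation. In this Python sketch (the amplitude and time constant are assumed values, not from the text), the 10%–90% rise time of the step response A(1 − e^(−t/τ)) is measured numerically; analytically it works out to τ ln 9 ≈ 2.2τ:

```python
import numpy as np

A, tau = 1.0, 0.5                       # assumed amplitude and time constant
t = np.linspace(0.0, 5.0 * tau, 100001)
y = A * (1.0 - np.exp(-t / tau))        # step response of the RC circuit

# 10%-90% rise time; y is monotonic, so searchsorted finds the crossings.
t10 = t[np.searchsorted(y, 0.1 * A)]
t90 = t[np.searchsorted(y, 0.9 * A)]
rise = t90 - t10
print(rise, tau * np.log(9.0))          # measured vs analytical tau*ln(9)
```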
[Figure: A periodic signal as a sum of harmonics ck cos(2πkf0t + θk); its magnitude spectrum (ck versus f at the frequencies kf0) and phase spectrum (θk versus f at the frequencies kf0).]
[Figure: Cosine input (dashed) and system response versus time t.]
If the input consists of unit cosines at different frequencies, the magnitude and phase (versus frequency)
of the ratio of the output to the input describes the frequency response, as shown in Figure 1.7. The
magnitude spectrum clearly shows the effects of attenuation at high frequencies.
[Figure 1.7: The frequency response: magnitude and phase versus frequency f.]
There are measures analogous to bandwidth that describe the time duration of a signal over which much
of the signal is concentrated. The time constant provides one such measure.
The relation τB = 1 clearly brings out the reciprocity in time and frequency. The smaller the duration τ
or the more localized the time signal, the larger is its bandwidth B or frequency spread. The quantity τB is
a measure of the time-bandwidth product, a relation analogous to the uncertainty principle of quantum
physics. We cannot simultaneously make both duration and bandwidth arbitrarily small.
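The reciprocity can be illustrated numerically. In the Python sketch below (illustrative only; the rectangular pulse and the first-null measure of spectral width are assumptions, not the text's definition of B), halving the pulse duration doubles the measured bandwidth:

```python
import numpy as np

def first_null_bandwidth(T, fs=1000.0, N=100000):
    # Rectangular pulse of width T seconds sampled at fs; for a rect
    # pulse the first spectral null sits at f = 1/T.
    t = np.arange(N) / fs
    x = (t < T).astype(float)
    X = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(N, 1.0 / fs)
    null = np.argmax(X < 1e-6 * X[0])   # index of the first (near-)zero
    return f[null]

B1 = first_null_bandwidth(0.1)    # pulse of duration 0.1 s
B2 = first_null_bandwidth(0.05)   # half the duration
print(B1, B2)                     # halving the duration doubles the width
```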
[Figure: (a) Cosines at different frequencies. (b) Sum of 100 cosines. (c) The limiting form is an impulse.]
The time-domain response to an impulse is called the impulse response. A system is completely char-
acterized in the frequency domain by its frequency response or transfer function. A system is completely
characterized in the time domain by its impulse response. Naturally, the transfer function and impulse
response are two equivalent ways of looking at the same system.
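For a discrete-time system this equivalence is easy to demonstrate: the impulse response of an FIR filter is just its coefficient sequence, and its frequency response is the DTFT (sampled here via the FFT) of that sequence. A Python sketch, with an assumed example filter:

```python
import numpy as np

# An assumed FIR system: y[n] = 0.5 x[n] + 0.3 x[n-1] + 0.2 x[n-2].
h = np.array([0.5, 0.3, 0.2])        # impulse response = FIR coefficients

# Feeding a unit impulse through the system returns h itself.
impulse = np.zeros(8)
impulse[0] = 1.0
response = np.convolve(impulse, h)[:8]

# The frequency response is the DTFT of h, evaluated on a grid via the FFT.
H = np.fft.rfft(h, 256)

# Sanity checks: impulse response recovers h; at dc, H is the coefficient sum.
print(np.allclose(response[:3], h), np.isclose(H[0], h.sum()))
```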
1.3.4 Convolution
The idea of decomposing a complicated signal into simpler forms is very attractive for both signal and system
analysis. One approach to the analysis of continuous-time systems describes the input as a sum of weighted
impulses and finds the response as a sum of weighted impulse responses. This describes the process of
convolution. Since the response is, in theory, a cumulative sum of infinitely many impulse responses, the
convolution operation is actually an integral.
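In discrete form, the integral becomes a cumulative sum of weighted, shifted impulse responses. The Python sketch below (illustrative; the exponential signals are assumed examples) approximates the convolution integral by a Riemann sum and compares it against the known closed form (e^(−t)u(t)) * (e^(−t)u(t)) = t e^(−t)u(t):

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
x = np.exp(-t)                      # input e^{-t}, t >= 0
h = np.exp(-t)                      # impulse response e^{-t}, t >= 0

# y(t) = ∫ x(λ) h(t − λ) dλ  ≈  Σ x[k] h[n−k] dt  (a Riemann sum)
y = np.convolve(x, h)[:len(t)] * dt

y_exact = t * np.exp(-t)            # closed form of (e^{-t}u) * (e^{-t}u)
print(np.max(np.abs(y - y_exact)))  # small discretization error
```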
Transformed-domain methods replace convolution by the simpler operation of multiplication, but there is a price to pay. Since the
response is evaluated in the transformed domain, we must have the means to remap this response to the
time domain through an inverse transformation. Examples of this method include phasor analysis (for
sinusoids and periodic signals), Fourier transforms, and Laplace transforms. Phasor analysis only allows
us to find the steady-state response of relaxed systems to periodic signals. The Fourier transform, on the
other hand, allows us to analyze relaxed systems with arbitrary inputs. The Laplace transform uses
a complex frequency to extend the analysis both to a larger class of inputs and to systems with nonzero
initial conditions. Different methods of system analysis allow different perspectives on both the system and
the analysis results. Some are more suited to the time domain, others offer a perspective in the frequency
domain, and yet others are more amenable to numerical computation.
Chapter 2

ANALOG SIGNALS
2.1 Signals
The study of signals allows us to assess how they might be processed to extract useful information. This is
indeed what signal processing is all about. An analog signal may be described by a mathematical expression
or graphically by a curve or even by a set of tabulated values. Real signals, alas, are not easy to describe
quantitatively. They must often be approximated by idealized forms or models amenable to mathematical
manipulation. It is these models that we concentrate on in this chapter.
Piecewise continuous signals possess different expressions over different intervals. Continuous
signals, such as x(t) = sin(t), are defined by a single expression for all time.
Periodic signals are infinite-duration signals that repeat the same pattern endlessly. The smallest
repetition interval is called the period T and leads to the formal definition x(t) = x(t ± nT) for integer n.
All time-limited functions of finite amplitude have finite absolute area. The criterion of absolute integrability
is often used to check for system stability or justify the existence of certain transforms.
The area of x²(t) is tied to the power or energy delivered to a 1-Ω resistor. The instantaneous power
pi(t) (in watts) delivered to a 1-Ω resistor may be expressed as pi(t) = x²(t), where the signal x(t) represents
either the voltage across it or the current through it. The total energy E delivered to the 1-Ω resistor is
called the signal energy (in joules) and is found by integrating the instantaneous power pi(t) for all time:

E = ∫_{−∞}^{∞} pi(t) dt = ∫_{−∞}^{∞} |x(t)|² dt    (2.3)
The absolute value |x(t)| allows this relation to be used for complex-valued signals. The energy of some
common signals is summarized in the following review panel.
The signal power P equals the time average of the signal energy over all time. If x(t) is periodic with
period T , the signal power is simply the average energy per period, and we have
P = (1/T) ∫_T |x(t)|² dt    (for periodic signals)    (2.4)

Notation: We use ∫_T to mean integration over any convenient one-period duration.
The average value can never exceed the rms value and thus xav ≤ xrms. Two useful results pertaining to the
power in sinusoids and complex exponentials are listed in the following review panel.
If x(t) is a nonperiodic power signal, we can compute the signal power (or average value) by averaging its
energy (or area) over a finite stretch T0, and letting T0 → ∞ to obtain the limiting forms

P = lim(T0→∞) (1/T0) ∫_{T0} |x(t)|² dt        xav = lim(T0→∞) (1/T0) ∫_{T0} x(t) dt    (for nonperiodic signals)    (2.6)
We emphasize that these limiting forms are useful only for nonperiodic signals.
Figure E2.1A The signals for Example 2.1(a)
2.1 Signals 11
Comment: The third term describes twice the area of x(t)y(t) (and equals 12).
(b) The signal x(t) = 2e^(−t) − 6e^(−2t), t > 0 is an energy signal. Its energy is

Ex = ∫_0^∞ x²(t) dt = ∫_0^∞ (4e^(−2t) − 24e^(−3t) + 36e^(−4t)) dt = 2 − 8 + 9 = 3 J    [Note: ∫_0^∞ e^(−αt) dt = 1/α]
Comment: As a consistency check, ensure that the energy is always positive!
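One such consistency check can be carried out numerically. A Python sketch (not part of the book's Matlab software) that approximates the energy integral of this example by a trapezoidal sum:

```python
import numpy as np

dt = 1e-4
t = np.arange(0.0, 40.0, dt)            # the exponentials are negligible by t = 40
x = 2.0 * np.exp(-t) - 6.0 * np.exp(-2.0 * t)

# E = ∫ x^2(t) dt by the trapezoidal rule; the analytical value is 3 J.
f = x ** 2
E = 0.5 * dt * np.sum(f[:-1] + f[1:])
print(E)   # ≈ 3.0
```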
(c) Find the signal power for the periodic signals shown in Figure E2.1C.
Figure E2.1C The signals for Example 2.1(c)
We use the results of Review Panel 2.2 to find the energy in one period.
For x(t): The energy Ex in one period is the sum of the energy in each half-cycle. We compute
Ex = (1/2)A²(0.5T) + (1/2)(−A)²(0.5T) = 0.5A²T.
The power in x(t) is thus Px = Ex/T = 0.5A².
For y(t): The energy Ey in one period of y(t) is Ey = 0.5A²τ.
Thus Py = Ey/T = 0.5A²τ/T = 0.5A²D, where D = τ/T is the duty ratio.
For a half-wave rectified sine, D = 0.5 and the signal power equals 0.25A².
For a full-wave rectified sine, D = 1 and the signal power is 0.5A².
For f(t): The energy Ef in one period is Ef = (1/3)A²(0.5T) + (1/3)(−A)²(0.5T) = A²T/3.
The signal power is thus Pf = Ef/T = A²/3.
(d) Let x(t) = Ae^(jω0t). Since x(t) is complex valued, we work with |x(t)| (which equals A) to obtain

Px = (1/T) ∫_0^T |x(t)|² dt = (1/T) ∫_0^T A² dt = A²
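These power results are easy to verify numerically. A Python sketch (the amplitude and frequency are assumed values), approximating P = (1/T) ∫_T |x(t)|² dt by a trapezoidal sum over one period:

```python
import numpy as np

A, f0 = 3.0, 5.0
T = 1.0 / f0
t = np.linspace(0.0, T, 100001)
dt = t[1] - t[0]

def avg_power(x):
    # P = (1/T) ∫_T |x(t)|^2 dt via a trapezoidal sum over one period
    f = np.abs(x) ** 2
    return 0.5 * dt * np.sum(f[:-1] + f[1:]) / T

P_exp = avg_power(A * np.exp(2j * np.pi * f0 * t))   # complex exponential → A^2
P_cos = avg_power(A * np.cos(2 * np.pi * f0 * t))    # real cosine → A^2/2
print(P_exp, P_cos)
```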
x(t), f(t) = 1 + x(t − 1), g(t) = x(1 − t), h(t) = x(0.5t + 0.5), w(t) = x(−2t + 2)
To generate f(t) = 1 + x(t − 1), we delay x(t) by 1 and add a dc offset of 1 unit.
To generate g(t) = x(1 − t), we fold x(t) and then shift right by 1.
Consistency check: With t = 1 − tn, the edge of x(t) at t = 2 translates to tn = 1 − t = −1.
To generate h(t) = x(0.5t + 0.5), first advance x(t) by 0.5 and then stretch by 2 (or first stretch by 2
and then advance by 1).
Consistency check: With t = 0.5tn + 0.5, the edge of x(t) at t = 2 translates to tn = 2(t − 0.5) = 3.
To generate w(t) = x(−2t + 2), advance x(t) by 2 units, then shrink by 2 and fold.
Consistency check: With t = −2tn + 2, the edge of x(t) at t = 2 translates to tn = −0.5(t − 2) = 0.
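These consistency checks all amount to solving t = αtn + β for tn. A small Python helper (hypothetical, for illustration; `edge_map` is not a routine from the book's software) automates the mapping:

```python
def edge_map(a, b, t_edge):
    # For a transformed signal x(a*t + b), a feature of x(t) located at
    # t = t_edge appears in the transformed signal at tn = (t_edge - b)/a.
    return (t_edge - b) / a

# Edge of x(t) at t = 2, under the three transformations discussed:
print(edge_map(-1.0, 1.0, 2.0))    # g(t) = x(1 - t)       → tn = -1
print(edge_map(0.5, 0.5, 2.0))     # h(t) = x(0.5t + 0.5)  → tn = 3
print(edge_map(-2.0, 2.0, 2.0))    # w(t) = x(-2t + 2)     → tn = 0
```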
(b) Express the signal y(t) of Figure E2.2B in terms of the signal x(t).
[Figure E2.2B The signals x(t) and y(t) for Example 2.2(b): x(t) is a pulse of height 2 over −1 ≤ t ≤ 1; y(t) is a pulse of height 4 over −1 ≤ t ≤ 5.]
We note that y(t) is amplitude scaled by 2. It is also a folded, stretched, and shifted version of x(t).
If we fold 2x(t) and stretch by 3, the pulse edges are at (−3, 3). We need a delay of 2 to get y(t), and
thus y(t) = 2x[−(t − 2)/3] = 2x(−t/3 + 2/3).
Alternatively, with y(t) = 2x(αt + β), we use t = αtn + β to solve for α and β by noting that t = −1
corresponds to tn = 5 and t = 1 corresponds to tn = −1. Then

−1 = 5α + β
 1 = −α + β        ⟹    α = −1/3,  β = 2/3
For an even symmetric signal, the signal values at t = −α and t = α are equal. The area of an even
symmetric signal is twice the area on either side of the origin. For an odd symmetric signal, the signal values
at t = −α and t = α are equal but opposite in sign, and the signal value at the origin equals zero. The area
of an odd symmetric signal over symmetric limits (−α, α) is always zero.
Combinations (sums and products) of symmetric signals are also symmetric under certain conditions as
summarized in the following review panel. These results are useful for problem solving.
To find xe (t) and xo (t) from x(t), we fold x(t) and invoke symmetry to get
Figure E2.3A(1) The signals for Example 2.3(a)
For x(t), we create 0.5x(t) and 0.5x(−t), then add the two to give xe(t) and subtract to give xo(t) as shown in Figure E2.3A(2). Note how the components get added (or subtracted) when there is overlap.
Figure E2.3A(2) The process for finding the even and odd parts of x(t)
The process for finding the even and odd parts of y(t) is identical and shown in Figure E2.3A(3).
Figure E2.3A(3) The process for finding the even and odd parts of y(t)
In either case, as a consistency check, make sure that the even and odd parts display the appropriate
symmetry and add up to the original signal.
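This consistency check is easy to carry out numerically. A Python sketch (NumPy assumed; the piecewise x(t) below is a made-up example, not the signal of the figures):

```python
import numpy as np

# Even and odd parts: xe(t) = 0.5[x(t) + x(-t)], xo(t) = 0.5[x(t) - x(-t)].
t = np.linspace(-3, 3, 601)                 # symmetric grid, so folding is exact
x = np.where((t >= 0) & (t <= 2), t, 0.0)   # hypothetical example signal
xfold = x[::-1]                             # x(-t) on a symmetric grid
xe = 0.5 * (x + xfold)
xo = 0.5 * (x - xfold)

# Consistency checks: correct symmetry, and the parts add up to x(t)
assert np.allclose(xe, xe[::-1])            # even part is even
assert np.allclose(xo, -xo[::-1])           # odd part is odd
assert np.allclose(xe + xo, x)              # reconstruction recovers x(t)
```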
(b) Let x(t) = (sin t + 1)². To find its even and odd parts, we expand x(t) to get
x(t) = sin²t + 2 sin t + 1 = 1.5 − 0.5 cos(2t) + 2 sin t
The even part is xe(t) = 1.5 − 0.5 cos(2t), and the odd part is xo(t) = 2 sin t.
The complex exponential form requires two separate plots (its real part and imaginary part, for example)
for a graphical description.
If we write xp(t) = A cos(ω₀t + θ) = A cos[ω₀(t − tp)], the quantity tp = −θ/ω₀ is called the phase delay
and describes the time delay in the signal caused by a phase shift of θ.
The various time and frequency measures are related by

f₀ = 1/T        ω₀ = 2π/T = 2πf₀        θ = −ω₀tp = −2πf₀tp = −2πtp/T        (2.13)
We emphasize that an analog sinusoid or harmonic signal is always periodic and unique for any choice of
period or frequency (quite in contrast to digital sinusoids, which we study later).
For a combination of sinusoids at different frequencies, say y(t) = x1(t) + x2(t) + ···, the signal power Py equals the sum of the individual powers, and the rms value equals √Py. The reason is that squaring y(t) produces cross-terms such as 2x1(t)x2(t), all of which integrate to zero.
The frequencies (in rad/s) of the individual components are 2π/3, π/2, and π/3, respectively.
The fundamental frequency is ω₀ = GCD(2π/3, π/2, π/3) = π/6 rad/s. Thus, T = 2π/ω₀ = 12 seconds.
The signal power (the sum of the powers in the individual components) is Px = 36 W.
The rms value is xrms = √Px = √36 = 6.
(b) The signal x(t) = sin(t) + sin(πt) is almost periodic because the frequencies ω₁ = 1 rad/s and ω₂ = π rad/s of the two components are non-commensurate.
The signal power is Px = 0.5(1² + 1²) = 1 W.
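Both of these facts are easy to confirm by averaging over a long interval. A Python sketch for the almost periodic signal of part (b):

```python
import numpy as np

# Power of a sum of sinusoids at different frequencies: the cross-terms
# average to zero, so Py = 0.5*(A1^2 + A2^2 + ...) even when the
# frequencies are non-commensurate. Check for x(t) = sin(t) + sin(pi*t):
t = np.linspace(0.0, 1000.0, 1_000_001)   # long interval, fine grid
x = np.sin(t) + np.sin(np.pi * t)
Px = np.mean(x ** 2)
print(Px)   # close to 0.5*(1 + 1) = 1 W
```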
rect(t) = { 1, |t| < 0.5;  0, elsewhere }  (width = 1)        tri(t) = { 1 − |t|, |t| ≤ 1;  0, elsewhere }  (width = 2)        (2.16)

Both are even symmetric and possess unit area and unit height. The signal f(t) = rect[(t − α)/τ] describes a rectangular pulse of width τ, centered at t = α. The signal g(t) = tri[(t − α)/τ] describes a triangular pulse of width 2τ, centered at t = α. These pulse signals serve as windows to limit and shape arbitrary signals. Thus, h(t) = x(t)rect(t) equals x(t) abruptly truncated past |t| = 0.5, whereas x(t)tri(t) equals x(t) linearly tapered about t = 0 and zero past |t| = 1.
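A Python sketch of these two windows (the definitions follow Eq. (2.16); the windowed signal x(t) = cos(t) is an arbitrary choice):

```python
import numpy as np

def rect(t):
    """Unit-width, unit-height pulse: 1 for |t| < 0.5, else 0."""
    return np.where(np.abs(t) < 0.5, 1.0, 0.0)

def tri(t):
    """Unit-height triangle of width 2: 1 - |t| for |t| <= 1, else 0."""
    return np.where(np.abs(t) <= 1.0, 1.0 - np.abs(t), 0.0)

t = np.linspace(-2, 2, 4001)
dt = t[1] - t[0]
# Both windows have (approximately, on this grid) unit area.
print(np.sum(rect(t)) * dt, np.sum(tri(t)) * dt)

x = np.cos(t)
h = x * rect(t)   # x(t) abruptly truncated past |t| = 0.5
g = x * tri(t)    # x(t) linearly tapered, zero past |t| = 1
```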
An arbitrary signal may be represented in different forms, each of which has its advantages, depending on the context. For example, we will find a signal description by intervals quite useful in convolution, a description by a linear combination of shifted steps and ramps very useful in Laplace transforms, and a description by linear combinations of shifted rect and tri functions extremely useful in Fourier transforms.
(b) Refer to Figure E2.5B. Describe x(t) by a linear combination of rect and /or tri functions, y(t) by a
linear combination of steps and/or ramps, and both x(t) and y(t) by intervals.
Figure E2.5B The signals x(t) and y(t) for Example 2.5(b)
The signal x(t) may be described by a linear combination of shifted rect and tri functions as x(t) = 3 rect[(t − 3)/6] − 3 tri[(t − 3)/3].
The signal y(t) may be described by a linear combination of shifted steps and ramps as y(t) = r(t) − r(t − 3) − 3u(t − 3).
Caution: We could also write y(t) = t rect[(t − 1.5)/3], but this is a product (not a linear combination) and not the preferred form.
The signals x(t) and y(t) may be described by intervals as

x(t) = { 3 − t, 0 < t ≤ 3;  t − 3, 3 ≤ t < 6;  0, elsewhere }        y(t) = { t, 0 ≤ t < 3;  0, elsewhere }
sinc(t) = sin(πt)/(πt)        (2.17)

Since the sine term oscillates while the factor 1/πt decreases with time, sinc(t) shows decaying oscillations. At t = 0, the sinc function produces the indeterminate form 0/0. Using the approximation sin(θ) ≈ θ (or l'Hôpital's rule), we establish that sinc(t) = 1 in the limit as t → 0.
Figure: The sinc and sinc-squared functions
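NumPy's np.sinc uses this same normalized definition sin(πt)/(πt), with the limiting value sinc(0) = 1 built in, so the behavior is easy to probe numerically:

```python
import numpy as np

t = np.array([0.0, 0.5, 1.0, 2.0, 2.5])
print(np.sinc(t))   # sinc(0) = 1; zeros at all nonzero integers

assert np.sinc(0.0) == 1.0                      # limiting value at t = 0
assert abs(np.sinc(1.0)) < 1e-12                # zero crossing at t = 1
assert abs(np.sinc(2.5)) <= 1 / (np.pi * 2.5)   # decaying 1/(pi*|t|) envelope
```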
It says that δ(t) is of zero duration but possesses finite area. To put the best face on this, we introduce a third, equally bizarre criterion that says δ(t) is unbounded (infinite or undefined) at t = 0 (all of which would make any mathematician wince).
Figure 2.2 The genesis of the impulse function
As we decrease τ, its width shrinks and the height increases proportionately to maintain unit area. As τ → 0, we get a tall, narrow spike with unit area that satisfies all criteria associated with an impulse.
Signals such as the triangular pulse (1/τ)tri(t/τ), the exponentials (1/τ)e^{−t/τ}u(t) and (1/2τ)e^{−|t|/τ}, the sinc functions (1/τ)sinc(t/τ) and (1/τ)sinc²(t/τ), the Gaussian (1/τ)e^{−π(t/τ)²}, and the Lorentzian τ/[π(τ² + t²)] all possess unit area, and all are equivalent to the unit impulse δ(t) as τ → 0.
The signal δ(t − t₀) describes an impulse located at t = t₀. Its area may be evaluated using any lower and upper limits (say, t₁ and t₂) that enclose its time of occurrence, t₀:

∫_{t₁}^{t₂} δ(t − t₀) dt = { 1, t₁ < t₀ < t₂;  0, otherwise }        (2.19)

Notation: The area of the impulse Aδ(t) equals A and is also called its strength. The function Aδ(t) is shown as an arrow with its area A labeled next to the tip. For visual appeal, we make its height proportional to A. Remember, however, that its height at t = 0 is infinite or undefined. An impulse with negative area is shown as an arrow directed downward.
This extremely important result is called the sifting property. It is the sifting action of an impulse (what
it does) that purists actually regard as a formal definition of the impulse.
From the product property, f(t) = x(t)δ(t − 1) = x(1)δ(t − 1). This is an impulse function with strength x(1) = 2.
The derivative g(t) = x′(t) includes the ordinary derivative (slopes) of x(t) and an impulse function of strength 4 at t = 3.
By the sifting property, I = ∫ x(t)δ(t − 2) dt = x(2) = 4.
(c) Evaluate I₁ = ∫₀^∞ 4t² δ(t + 1) dt.
The result is I₁ = 0 because δ(t + 1) (an impulse at t = −1) lies outside the limits of integration.
(d) Evaluate I₂ = ∫_{−4}^{2} cos(2πt) δ(2t + 1) dt.
Using the scaling and sifting properties of the impulse, we get

I₂ = ∫_{−4}^{2} cos(2πt)[0.5δ(t + 0.5)] dt = 0.5 cos(2πt)|_{t=−0.5} = −0.5
Figure 2.3 The ideally sampled signal is a (nonperiodic) impulse train
Note that even though xI (t) is an impulse train, it is not periodic. The strength of each impulse equals
the signal value x(kts ). This form actually provides a link between analog and digital signals.
To approximate a smooth signal x(t) by impulses, we section it into narrow rectangular strips of width ts as shown in Figure 2.4 and replace each strip at the location kts by an impulse ts x(kts)δ(t − kts) whose strength equals the area ts x(kts) of the strip. This yields the impulse approximation

x(t) ≈ Σ_{k=−∞}^{∞} ts x(kts) δ(t − kts)        (2.24)

Figure 2.4 Sectioning a signal into narrow rectangular strips and replacing each strip by an impulse
A signal x(t) may thus be regarded as a weighted, infinite sum of shifted impulses.
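Since each impulse strength equals a strip area, the strengths in Eq. (2.24) should sum to the total area of x(t). A quick numeric check (the signal x(t) = e^{−|t|}, whose area is 2, is a hypothetical example):

```python
import numpy as np

# Approximate x(t) by impulses of strength ts*x(k*ts) located at t = k*ts.
ts = 0.01
k = np.arange(-1000, 1001)           # covers -10 <= t <= 10
x = np.exp(-np.abs(k * ts))          # hypothetical example: x(t) = exp(-|t|)
strengths = ts * x                   # impulse strengths ts*x(k*ts)
print(strengths.sum())               # approximates the area of x(t), which is 2
```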
then correspond to δ′(t). Now, x′(t) is odd and shows two pulses of height 1/τ² and −1/τ² with zero area. As τ → 0, x′(t) approaches +∞ and −∞ from below and above, respectively. Thus, δ′(t) is an odd function characterized by zero width, zero area, and amplitudes of +∞ and −∞ at t = 0. Formally, we write

δ′(t) = { 0, t ≠ 0;  undefined, t = 0 }        ∫ δ′(t) dt = 0        δ′(−t) = −δ′(t)        (2.26)

The two infinite spikes in δ′(t) are not impulses (their area is not constant), nor do they cancel. In fact, δ′(t) is indeterminate at t = 0. The signal δ′(t) is therefore sketched as a set of two spikes, which leads to the name doublet. Even though its area ∫ δ′(t) dt is zero, its absolute area ∫ |δ′(t)| dt is infinite.
With α = −1, we get δ′(−t) = −δ′(t). This implies that δ′(t) is an odd function.
The product property of the doublet has a surprise in store. The derivative of x(t)δ(t − α) may be described in one of two ways. First, using the rule for derivatives of products, we have

d/dt [x(t)δ(t − α)] = x′(t)δ(t − α) + x(t)δ′(t − α) = x′(α)δ(t − α) + x(t)δ′(t − α)        (2.29)

Second, using the product property of impulses, we also have

d/dt [x(t)δ(t − α)] = d/dt [x(α)δ(t − α)] = x(α)δ′(t − α)        (2.30)

Comparing the two equations and rearranging, we get the rather unexpected result

x(t)δ′(t − α) = x(α)δ′(t − α) − x′(α)δ(t − α)        (2.31)

This is the product property. Unlike impulses, x(t)δ′(t − α) does not just equal x(α)δ′(t − α)!
Integrating the two sides that describe the product property, we obtain

∫ x(t)δ′(t − α) dt = ∫ x(α)δ′(t − α) dt − ∫ x′(α)δ(t − α) dt = −x′(α)        (2.32)

This describes the sifting property of doublets. The doublet δ′(t − α) sifts out the negative derivative of x(t) at t = α.
Remark: Higher derivatives of δ(t) obey δ^{(n)}(−t) = (−1)ⁿ δ^{(n)}(t), are alternately odd and even, and possess zero area. All are limiting forms of the same sequences that generate impulses, provided their ordinary derivatives (up to the required order) exist. None are absolutely integrable. The impulse is unique in being the only absolutely integrable function from among all its derivatives and integrals (the step, ramp, etc.).
The first derivative x′(t) results in a rectangular pulse (the ordinary derivative of x(t)) and an impulse (due to the jump) at t = 3.
The second derivative x″(t) yields two impulses at t = 0 and t = 2 (the derivative of the rectangular pulse) and a doublet at t = 3 (the derivative of the impulse).
(c) Evaluate I = ∫_{−2}^{2} [(t − 3)δ(2t + 2) + 8 cos(πt) δ′(t − 0.5)] dt.
With δ(2t + 2) = 0.5δ(t + 1), the sifting property of impulses and doublets gives

I = 0.5(t − 3)|_{t=−1} − 8 d/dt [cos(πt)]|_{t=0.5} = 0.5(−1 − 3) + 8π sin(0.5π) = −2 + 8π = 23.1327
2.8 Moments
Moments are general measures of signal size based on area. The nth moment is defined as

mₙ = ∫ tⁿ x(t) dt        (2.33)

The zeroth moment m₀ = ∫ x(t) dt is just the area of x(t). The normalized first moment mx = m₁/m₀ is called the mean. Moments about the mean are called central moments. The nth central moment is denoted μₙ.

mx = ∫ t x(t) dt / ∫ x(t) dt        μₙ = ∫ (t − mx)ⁿ x(t) dt        (2.34)
To account for complex-valued signals or sign changes, it is often more useful to define moments in terms of the absolute quantities |x(t)| or |x(t)|². The second central moment μ₂ is called the variance. It is often denoted by σ² and defined by

σ² = μ₂ = m₂/m₀ − mx²        (2.35)

The first few moments find widespread application. In physics, if x(t) represents the mass density, then mx equals the centroid, μ₂ equals the moment of inertia, and σ equals the radius of gyration. In probability theory, if x(t) represents the density function of a random variable, then mx equals the mean, σ² equals the variance, and σ equals the standard deviation.
For power signals, the normalized second moment, m₂/m₀, equals the total power, and σ² equals the ac power (the difference between the total power and the dc power). The variance σ² may thus be regarded as the power in a signal with its dc offset removed. For an energy signal, mx is a measure of the effective signal delay, and σ is a measure of its effective width, or duration.
(b) Find the signal delay and duration for the signal x(t) = e^{−t}u(t).
The moments of x(t) are m₀ = ∫₀^∞ e^{−t} dt = 1, m₁ = ∫₀^∞ t e^{−t} dt = 1, m₂ = ∫₀^∞ t² e^{−t} dt = 2.
We find that delay = mx = m₁/m₀ = 1, and duration = σ = (m₂/m₀ − mx²)^{1/2} = (2 − 1)^{1/2} = 1.
CHAPTER 2 PROBLEMS
DRILL AND REINFORCEMENT
2.1 (Operations on Signals) For each signal x(t) of Figure P2.1, sketch the following:
(a) y(t) = x(−t)    (b) f(t) = x(t + 3)
(c) g(t) = x(2t − 2)    (d) h(t) = x(2 − 2t)
(e) p(t) = x[0.5(t − 2)]    (f) s(t) = x(0.5t − 1)
(g) xe(t) (its even part)    (h) xo(t) (its odd part)
2.2 (Symmetry) Find the even and odd parts of each signal x(t).
(a) x(t) = et u(t) (b) x(t) = (1 + t)2 (c) x(t) = [sin(t) + cos(t)]2
2.3 (Symmetry) Evaluate the following integrals using the concepts of symmetry.
(a) I = ∫_{−3}^{3} (4 − t²) sin(5t) dt        (b) I = ∫_{−2}^{2} [4 − t³ cos(0.5t)] dt
2.4 (Classification) For each periodic signal shown in Figure P2.4, evaluate the average value xav , the
energy E in one period, the signal power P , and the rms value xrms .
Figure P2.4 Periodic signals for Problem 2.4
2.5 (Signal Classification) Classify each signal as a power signal, energy signal, or neither and find its
power or energy as appropriate.
(a) te^{−t}u(t)    (b) e^{−t}[u(t) − u(t − 1)]    (c) te^{−|t|}
(d) e^{−t}    (e) 10e^{−t} sin(t)u(t)    (f) sinc(t)u(t)
2.6 (Periodic Signals) Classify each of the following signals as periodic, nonperiodic, or almost periodic
and find the signal power where appropriate. For each periodic signal, also find the fundamental
frequency and the common period.
(a) x(t) = 4 3 sin(12t) + sin(30t) (b) x(t) = cos(10t)cos(20t)
(c) x(t) = cos(10t) cos(20t) (d) x(t) = cos(10t)cos(10t)
(e) x(t) = 2 cos(8t) + cos2 (6t) (f ) x(t) = cos(2t) 2 cos(2t 4 )
2.7 (Periodic Signals) Classify each of the following signals as periodic, nonperiodic, or almost periodic
and find the signal power where appropriate. For each periodic signal, also find the fundamental
frequency and the common period.
(a) x(t) = 4 3 sin2 (12t) (b) x(t) = cos() + cos(20t) (c) x(t) = cos(t) + cos2 (t)
2.8 (Signal Description) For each signal x(t) shown in Figure P2.8,
(a) Express x(t) by intervals.
(b) Express x(t) as a linear combination of steps and/or ramps.
(c) Express x(t) as a linear combination of rect and/or tri functions.
(d) Sketch the first derivative x (t).
(e) Find the signal energy in x(t).
2.10 (Impulses and Comb Functions) Sketch the following signals. Note that the comb function is a periodic train of unit impulses with unit spacing, defined as comb(t) = Σ_{k=−∞}^{∞} δ(t − k).
2.12 (Generalized Derivatives) Sketch the signals x(t), x′(t), and x″(t) for the following:
(a) x(t) = 4 tri[(t − 2)/2]    (b) x(t) = e^{−t}u(t)    (c) x(t) = 2 rect(0.5t) + tri(t)
(d) x(t) = e^{−|t|}    (e) x(t) = (1 − e^{−t})u(t)    (f) x(t) = e^{−2t} rect[(t − 1)/2]
2.13 (Ideally Sampled Signals) Sketch the ideally sampled signal and the impulse approximation for
each of the following signals, assuming a sampling interval of ts = 0.5 s.
(a) x(t) = rect(t/4) (b) x(t) = tri(t/2) (c) x(t) = sin(t) (d) x(t) = t rect(0.5t)
2.15 (rms Value) Find the signal power and rms value for a periodic pulse train with peak value A and
duty ratio D if the pulse shape is the following:
2.16 (Sketching Signals) Sketch the following signals. Which of these signals (if any) are identical?
(a) x(t) = r(t 2) (b) x(t) = tu(t) 2u(t 2) (c) x(t) = 2u(t) (t 2)u(t 2)
(d) x(t) = tu(t 2) 2u(t 2) (e) x(t) = tu(t 2) 2u(t) (f ) x(t) = (t 2)u(t) u(t 2)
2.17 (Signals and Derivatives) Sketch each signal x(t) and represent it as a linear combination of step
and/or ramp functions where possible.
(a) x(t) = u(t + 1)u(1 t) (b) x(t) = sgn(t)rect(t) (c) x(t) = t rect(t)
(d) x(t) = t rect(t 0.5) (e) x(t) = t rect(t 2) (f ) x(t) = u(t + 1)u(1 t)tri(t + 1)
2.18 (Areas) Use the signals x(t) = δ(t) and x(t) = sinc(t) as examples to justify the following:
(a) If the area of |x(t)| is finite, the area of x2 (t) need not be finite.
(b) If the area of x2 (t) is finite, the area of |x(t)| need not be finite.
(c) If the area of x2 (t) is finite, the area of x(t) is also finite.
2.19 (Energy) Consider an energy signal x(t), over the range −3 ≤ t ≤ 3, with energy E = 12 J. Find the range of the following signals and compute their signal energy.
2.20 (Power) Consider a periodic signal x(t) with time period T = 6 and power P = 4 W. Find the time
period of the following signals and compute their signal power.
Use this result to show that for any energy signal, the signal energy equals the sum of the energy in
its odd and even parts.
2.22 (Areas and Energy) The area of the signal e^{−t}u(t) equals unity. Use this result and the notion of how the area changes upon time scaling to find the following (without formal integration).
(a) The area of x(t) = e^{−2t}u(t).
(b) The energy of x(t) = e^{−2t}u(t).
(c) The area of y(t) = 2e^{−2t}u(t) − 6e^{−t}u(t).
(d) The energy of y(t) = 2e^{−2t}u(t) − 6e^{−t}u(t).
2.23 (Power) Over one period, a periodic signal increases linearly from A to B in T1 seconds, decreases
linearly from B to A in T2 seconds, and equals A for the rest of the period. What is the power of this
periodic signal if its period T is given by T = 2(T1 + T2 )?
2.24 (Power and Energy) Use simple signals such as u(t), u(t 1), et u(t) (and others) as examples to
argue for or against the following statements.
(a) The sum of energy signals is an energy signal.
(b) The sum of a power and an energy signal is a power signal.
(c) The algebraic sum of two power signals can be an energy signal or a power signal or identically
zero.
(d) The product of two energy signals is zero or an energy signal.
(e) The product of a power and energy signal is an energy signal or identically zero.
(f ) The product of two power signals is a power signal or identically zero.
2.25 (Switched Periodic Signals) Let x(t) be a periodic signal with power Px . Show that the power Py
of the switched periodic signal y(t) = x(t)u(t t0 ) is given by Py = 0.5Px . Use this result to compute
the signal power for the following:
(a) y(t) = u(t) (b) y(t) = | sin(t)|u(t) (c) y(t) = 2 sin(2t)u(t) + 2 sin(t)
(d) y(t) = 2 u(t) (e) y(t) = (1 et )u(t) (f ) y(t) = 2 sin(t)u(t) + 2 sin(t)
2.26 (Power and Energy) Compute the signal energy or signal power as appropriate for each x(t).
(a) x(t) = e^{−2t}u(t)    (b) x(t) = e^{−(t−1)}u(t)    (c) x(t) = e^{−(1−t)}u(1 − t)
(d) x(t) = e^{1+2t}u(1 − t)    (e) x(t) = e^{−(1−2t)}u(1 − 2t)    (f) x(t) = e^{−t}u(t − 2)
(g) x(t) = e^{−|1−t|}    (h) x(t) = sinc(3t − 1)    (i) x(t) = e^{−t²/2}
2.27 (Power and Energy) Classify each signal as a power signal, energy signal, or neither, and compute
the signal energy or signal power where appropriate.
(a) x(t) = u(t)    (b) x(t) = 1 + u(t)    (c) x(t) = 1/(1 + |t|)
(d) x(t) = 1/(1 + t²)    (e) x(t) = 1 + cos(πt)u(t)    (f) x(t) = 1/√t, t ≥ 1
(g) x(t) = 1/t, t ≥ 1    (h) x(t) = cos(πt)u(t)    (i) x(t) = cos(πt)u(t) − cos[π(t − 4)]u(t − 4)
2.28 (Periodicity) The sum of two periodic signals is periodic if their periods T1 and T2 are commensurate.
Under what conditions will their product be periodic? Use sinusoids as examples to prove your point.
2.29 (Periodicity) Use Euler's identity to confirm that the signal x(t) = e^{j2πf₀t} is periodic with period T = 1/f₀ and use this result in the following:
(a) Is the signal y(t) = x(2t) + 3x(0.5t) periodic? If so, what is its period?
(b) Is the signal f(t) = 2e^{j16πt} + 3e^{j7πt} periodic? If so, what is its period?
(c) Is the signal g(t) = 4e^{j16πt} − 5e^{7πt} periodic? If so, what is its period?
(d) Is the signal h(t) = 3e^{j16πt} − 2e^{7π} periodic? If so, what is its period?
(e) Is the signal s(t) = Σ_{k=−∞}^{∞} X[k]e^{j2πkf₀t} periodic? If so, what is its period?
2.30 (Periodicity) It is claimed that each of the following signals is periodic. Verify this claim by sketching
each signal and finding its period. Find the signal power for those that are power signals.
2.32 (Periodicity) It is claimed that the sum of an energy signal x(t) and its shifted (by multiples of T )
replicas is a periodic signal with period T . Verify this claim by sketching the following and, for each
case, compare the area of one period of the periodic extension with the total area of x(t).
(a) The sum of x(t) = tri(t/2) and its replicas shifted by T = 6.
(b) The sum of x(t) = tri(t/2) and its replicas shifted by T = 4.
(c) The sum of x(t) = tri(t/2) and its replicas shifted by T = 3.
2.33 (Periodic Extension) The sum of an absolutely integrable signal x(t) and its shifted (by multiples of T) replicas is called the periodic extension of x(t) with period T. Show that the periodic extension of the signal x(t) = e^{−t}u(t) with period T is y(t) = x(t)/(1 − e^{−T}), 0 ≤ t < T. How does the area of one period of y(t) compare with the total area of x(t)? Sketch y(t) and find its signal power.
2.34 (Half-Wave Symmetry) Argue that if a half-wave symmetric signal x(t) with period T is made
up of several sinusoidal components, each component is also half-wave symmetric over one period T .
Which of the following signals show half-wave symmetry?
(a) x(t) = cos(2πt) + cos(6πt) + cos(10πt)
(b) x(t) = 2 + cos(2πt) + sin(6πt) + sin(10πt)
(c) x(t) = cos(2πt) + cos(4πt) + sin(6πt)
2.35 (Derivatives) Each of the following signals is zero outside the interval −1 ≤ t ≤ 1. Sketch the signals x(t), x′(t), and x″(t).
(a) x(t) = cos(0.5πt)    (b) x(t) = 1 + cos(πt)    (c) x(t) = tri(t)    (d) x(t) = 1 − t²
2.36 (Practical Signals) Energy signals that are commonly encountered as the response of analog systems
include the decaying exponential, the exponentially damped ramp, and the exponentially damped sine.
Compute the signal energy for the following:
(a) x(t) = e^{−t/τ}u(t)    (b) x(t) = te^{−t/τ}u(t)    (c) f(t) = e^{−t} sin(2t)u(t)
2.37 (Time Constant) For an exponential signal of the form x(t) = Ae^{−t/τ}u(t), the quantity τ is called the time constant and provides a measure of how rapidly the signal decays. A practical estimate of the time it takes for the signal to decay to less than 1% of its initial value is 5τ. What is the actual time it takes for x(t) to decay to exactly 1% of its initial value? How well does the practical estimate compare with the exact result?
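As a quick numeric check of the 5τ rule of thumb (a Python sketch; τ = 1 is an assumed normalization):

```python
import math

# Solve A*exp(-t/tau) = 0.01*A for t: the exact 1% decay time is tau*ln(100).
tau = 1.0
t_exact = tau * math.log(100)
print(t_exact)   # about 4.6052, so the 5*tau estimate is slightly conservative
```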
2.38 (Rise Time) The rise time is a measure of how fast a signal reaches a constant final value and is
commonly defined as the time it takes to rise from 10% to 90% of the final value. Compute the rise
time of the following signals.
(a) x(t) = (1 − e^{−t})u(t)    (b) y(t) = { sin(0.5πt), 0 ≤ t ≤ 1;  1, t ≥ 1 }
2.39 (Rise Time and Scaling) In practice, the rise time tR of the signal x(t) = (1 − e^{−t/τ})u(t) is often approximated as tR ≈ 2.2τ.
(a) What is the actual rise time of x(t), and how does it compare with the practical estimate?
(b) Compute the rise time of the signals f(t) = x(3t) and g(t) = x(t/3). How are these values related to the rise time of x(t)? Generalize this result to find the rise time of h(t) = x(αt).
2.40 (Settling Time) The settling time is another measure for signals that reach a nonzero final value.
The 5% settling time is defined as the time it takes for a signal to settle to within 5% of its final value.
Compute the 5% settling time of the following signals.
(a) x(t) = (1 − e^{−t})u(t)    (b) y(t) = { sin(0.5πt), 0 ≤ t ≤ 1;  1, t ≥ 1 }
2.41 (Signal Delay) The delay of an energy signal is a measure of how far the signal has been shifted
from its mean position and is defined in one of two ways:
D₁ = ∫ t x(t) dt / ∫ x(t) dt        D₂ = ∫ t x²(t) dt / ∫ x²(t) dt

(a) Verify that the delays D₁ and D₂ of x(t) = rect(t) are both zero.
(b) Find and compare the delays D₁ and D₂ of the following signals.
(1) x(t) = rect(t − 2)    (2) x(t) = e^{−t}u(t)    (3) x(t) = te^{−t}u(t)
2.42 (Signal Models) Argue that each of the following models can describe the signal of Figure P2.42, and find the parameters A and τ for each model.
(a) x(t) = Ate^{−t/τ}u(t)    (b) x(t) = A(e^{−t/τ} − e^{−2t/τ})    (c) x(t) = At/(τ + t²)
Figure P2.42 Signals for Problem 2.42
2.43 (Instantaneous Frequency) The instantaneous phase of the sinusoid x(t) = cos[θ(t)] is defined as its argument θ(t), and the instantaneous frequency fi(t) is then defined by the derivative of the instantaneous phase as fi(t) = θ′(t)/2π. Consider the signal y(t) = cos(2πf₀t + φ). Show that its instantaneous frequency is constant and equals f₀ Hz.
2.44 (Chirp Signals) Signals whose frequency varies linearly with time are called swept-frequency signals, or chirp signals. Consider the signal x(t) = cos[θ(t)], where the time-varying phase θ(t) is also called the instantaneous phase. The instantaneous frequency ωi(t) = θ′(t) is defined as the derivative of the instantaneous phase (in rad/s).
(a) What is the expression for θ(t) and x(t) if the instantaneous frequency is to be 10 Hz?
(b) What is the expression for θ(t) and x(t) if the instantaneous frequency varies linearly from 0 to 100 Hz in 2 seconds?
(c) What is the expression for θ(t) and x(t) if the instantaneous frequency varies linearly from 50 Hz to 100 Hz in 2 seconds?
(d) Set up a general expression for a chirp signal x(t) whose frequency varies linearly from f₀ Hz to f₁ Hz in t₀ seconds.
2.45 (Chirp Signals) Chirp signals whose frequency varies linearly with time are often used in signal-processing applications (such as radar). Consider the signal x(t) = cos(αt²). How does the instantaneous frequency of x(t) vary with time? What value of α will result in a signal whose frequency varies from dc to 10 Hz in 4 seconds?
2.46 (Impulses as Limiting Forms) Argue that the following signals describe the impulse x(t) = Aδ(t) as τ → 0. What is the constant A for each signal?
(a) x(t) = (1/τ)e^{−t²/2τ²}    (b) x(t) = τ/(τ² + t²)    (c) x(t) = (1/τ)sinc(t/τ)    (d) x(t) = (1/τ)e^{−|t|/τ}
2.47 (Impulses) It is possible to show that the signal δ[f(t)] is a string of impulses at the roots tk of f(t) = 0 whose strengths equal 1/|f′(tk)|. Use this result to sketch the following signals.
2.48 (Periodicity) Use Matlab to plot each signal over the range 0 ≤ t ≤ 3, using a small time step (say, 0.01 s). If periodic, determine the period and compute the signal power (by hand if possible or using Matlab otherwise).
(a) x(t) = sin(2t) (b) y(t) = ex(t) (c) z(t) = ejx(t)
(d) f (t) = cos[x(t)] (e) g(t) = cos[x2 (t)]
2.49 (Curious Signals) Let s(t) = sin(πt). Use Matlab to sketch each of the following signals over the range −2 ≤ t ≤ 6, using a small time step (say, 0.02 s). Confirm that each signal is periodic and find the period and power.
the period and power.
(a) x(t) = u[s(t)] (b) y(t) = sgn[s(t)] (c) f (t) = sgn[s(t)] + sgn[s(t + 0.5)]
(d) g(t) = r[s(t)] (e) h(t) = es(t)
2.50 (Numerical Integration) A crude way to compute the definite integral I = ∫ x(t) dt is to approximate x(t) by N rectangular strips of width ts and find I as the sum of the areas under each strip:

I ≈ ts[x(t₁) + x(t₂) + ··· + x(tN)],    tk = kts

This is called the rectangular rule. The sum of the quantity in brackets can be computed using the Matlab routine sum and then multiplied by ts to approximate the integral I as the area.
(a) Use the rectangular rule to approximate the integrals of x(t) = tri(t) and y(t) = sin²(πt), 0 ≤ t ≤ 1, with N = 5 and N = 10 and compare with the exact values. Does increasing the number of strips N lead to more accurate results?
(b) The trapezoidal rule uses trapezoidal strips of width ts to approximate the integral I. Show that this rule leads to the approximation

I ≈ ts[½x(t₁) + x(t₂) + ··· + ½x(tN)],    tk = kts

(c) Use the trapezoidal rule to approximate the integral of the signals y(t) = sin²(πt), 0 ≤ t ≤ 1, and x(t) = tri(t) with N = 5 and N = 10 and compare with the exact values. Are the results for a given N more accurate than those found by the rectangular rule?
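A Python sketch of both rules as stated (NumPy's sum plays the role of the Matlab routine sum; the test integrand sin²(πt) over 0 ≤ t ≤ 1 has the exact value 0.5):

```python
import numpy as np

def rect_rule(x, ts):
    """Rectangular rule: ts * [x(t1) + ... + x(tN)]."""
    return ts * np.sum(x)

def trap_rule(x, ts):
    """Trapezoidal rule: ts * [x(t1)/2 + x(t2) + ... + x(tN)/2]."""
    return ts * (np.sum(x) - 0.5 * x[0] - 0.5 * x[-1])

# Integral of y(t) = sin^2(pi*t) over 0 <= t <= 1 (exact value: 0.5)
for N in (5, 10):
    ts = 1.0 / N
    t = ts * np.arange(1, N + 1)        # tk = k*ts
    y = np.sin(np.pi * t) ** 2
    print(N, rect_rule(y, ts), trap_rule(y, ts))
```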
2.51 (Numerical Integration) A crude way to find the running integral y(t) = ∫₀ᵗ x(λ) dλ is to approximate x(t) by rectangular strips and compute

y(t) ≈ ts Σ_{k=0}^{n} x(kts),    t = nts

The cumulative sum can be computed using the Matlab routine cumsum and then multiplied by ts to obtain y(t). Let x(t) = 10e^{−t} sin(2πt), 0 ≤ t ≤ T₀. Find an exact closed-form result for y(t).
(a) Let T₀ = 2. Plot the exact expression and the approximate running integral of x(t) using a time step of ts = 0.1 s and ts = 0.01 s. Comment on the differences.
(b) Let T₀ = 5. Plot the approximate running integral of x(t) with ts = 0.1 s and ts = 0.01 s. From the graph, can you predict the area of x(t) as T₀ → ∞? Does the error between this predicted value and the value from the exact result decrease as ts decreases? Should it? Explain.
2.52 (Numerical Derivatives) The derivative x′(t) can be numerically approximated by the slope

x′(t)|_{t=nts} ≈ (x[n] − x[n − 1])/ts
where ts is a small time step. The Matlab routine diff yields the difference x[n] − x[n − 1] (whose length is 1 less than the length of x[n]). Use Matlab to obtain the approximate derivative of x(t) = 10e^{−t} sin(2πt)u(t) over 0 ≤ t ≤ 3 with ts = 0.1 s. Compute the exact derivative of x(t) and plot both the exact and approximate results on the same plot. Also plot the error between the exact and approximate results. What happens to the error if ts is halved?
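A Python sketch of the backward-difference estimate (np.diff plays the role of Matlab's diff; the π in sin(2πt) is an assumption restored from the garbled extraction):

```python
import numpy as np

def approx_deriv(x, ts):
    """Backward-difference derivative estimate (one sample shorter than x)."""
    return np.diff(x) / ts

ts = 0.1
t = np.arange(0, 3 + ts, ts)
x = 10 * np.exp(-t) * np.sin(2 * np.pi * t)
# Exact derivative by the product rule
xd = 10 * np.exp(-t) * (2 * np.pi * np.cos(2 * np.pi * t) - np.sin(2 * np.pi * t))

err = np.max(np.abs(approx_deriv(x, ts) - xd[1:]))
print(err)   # first-order accurate: shrinks roughly by half when ts is halved
```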
2.53 (Signal Operations) The ADSP routine operate can be used to plot scaled and/or shifted versions of a signal x(t). Let x(t) = 2u(t + 1) − r(t + 1) + r(t − 1). Use Matlab to plot the signals x(t), y(t) = x(2t − 1), and f(t) = x(1 − 2t).
2.54 (Periodic Signals) The ADSP routines lcm1 and gcd1 allow us to find the LCM or GCD of an array of rational fractions. Use these routines to find the common period and fundamental frequency of the signal x(t) = 2 cos(2.4πt) − 3 sin(5.4πt) + cos(14.4πt − 0.2π).
2.55 (Energy and Power) The ADSP routine enerpwr also computes the energy (or power) if x(t) is a string expression (it does not require you to specify a time step). Let x(t) = 6 sinc(2t), −0.5 ≤ t ≤ 0.5. Use the routine enerpwr to find the energy in this signal and compute the signal power if x(t) describes one period of a periodic signal with period T = 1.4 s.
2.56 (Beats) Consider the amplitude modulated signal given by x(t) = cos(2πf₀t)cos[2π(f₀ + Δf)t].
(a) This signal can be expressed in the form x(t) = A cos(2πf₁t) + B cos(2πf₂t). How are A, B, f₁, and f₂ related to the parameters of the original signal? Use Matlab to plot x(t) for 0 ≤ t ≤ 1 s with a time step of 1/8192 s, f₀ = 400 Hz, and A = B = 1 for the following values of Δf.
2.57 (Chirp Signals) Consider the chirp signal x(t) = cos(πt²/6). Plot x(t) over 0 ≤ t ≤ T using a small time step (say, 0.02 s) with T = 2, 6, 10 s. What do the plots reveal as T is increased? Is this signal periodic? Should it be? How does its instantaneous frequency vary with time?
2.58 (Simulating an Impulse) An impulse may be regarded as a tall, narrow spike that arises as a limiting form of many ordinary functions such as the sinc and Gaussian. Consider the Gaussian signal x(t) = (1/t₀)e^{−π(t/t₀)²}. As we decrease t₀, its height increases to maintain a constant area.
(a) Plot x(t) over −2 ≤ t ≤ 2 for t₀ = 1, 0.5, 0.1, 0.05, 0.01 using a time step of 0.1t₀. What can you say about the symmetry in x(t)?
(b) Find the area of x(t) for each t₀, using the Matlab command sum. How does the area change with t₀?
(c) Does x(t) approach an impulse as t₀ → 0? If x(t) → Aδ(t), what is the value of A?
(d) Plot the (numerical) derivative x′(t) for each t₀, using the Matlab command diff. Find the area of x′(t) for each t₀. How does the area change with t₀? What can you say about the nature of x′(t) as t₀ → 0?
Chapter 3
DISCRETE SIGNALS
A discrete signal x[n] is called right-sided if it is zero for n < N (where N is finite), causal if it is zero for n < 0, left-sided if it is zero for n > N, and anti-causal if it is zero for n ≥ 0.
40 Chapter 3 Discrete Signals
Signals for which the absolute sum Σ|x[n]| is finite are called absolutely summable. For nonperiodic signals, the signal energy E is a useful measure. It is defined as the sum of the squares of the signal values

E = Σ_{n=−∞}^{∞} |x[n]|²        (3.3)
The absolute value allows us to extend this relation to complex-valued signals. Measures for periodic signals
are based on averages, since their signal energy is infinite. The average value xav and signal power P of a
periodic signal x[n] with period N are defined as the average sum per period and average energy per period,
respectively:

xav = (1/N) Σ_{n=0}^{N−1} x[n]        P = (1/N) Σ_{n=0}^{N−1} |x[n]|²    (3.4)

Note that the index runs from n = 0 to n = N − 1 and includes all N samples in one period. Only for
nonperiodic signals is it useful to use the limiting forms

xav = lim_{M→∞} [1/(2M+1)] Σ_{n=−M}^{M} x[n]        P = lim_{M→∞} [1/(2M+1)] Σ_{n=−M}^{M} |x[n]|²    (3.5)
Signals with finite energy are called energy signals (or square summable). Signals with finite power are called
power signals. All periodic signals are power signals.
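These definitions translate directly into a few lines of code. The following Python sketch (the function names and sample values are illustrative, not from the text) computes the energy of a finite signal and the average value and power over one period of a periodic signal:

```python
# Energy of a finite (nonperiodic) signal: E = sum over n of |x[n]|^2.
def energy(x):
    return sum(abs(v) ** 2 for v in x)

# Average value and power over one period of a periodic signal:
# xav = (1/N) * sum(x[n]), P = (1/N) * sum(|x[n]|^2), with n = 0..N-1.
def average_and_power(one_period):
    N = len(one_period)
    xav = sum(one_period) / N
    P = sum(abs(v) ** 2 for v in one_period) / N
    return xav, P

print(energy([1, 2, 2]))                    # 9
print(average_and_power([6, -6, 0, 0]))     # (0.0, 18.0)
```

The absolute value in both sums makes the same code work for complex-valued signals.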
xav = (1/4) Σ_{n=0}^{3} x[n] = 0        P = (1/4) Σ_{n=0}^{3} x²[n] = (1/4)(36 + 36) = 18 W

P = (1/4) Σ_{n=0}^{3} |x[n]|² = (1/4)(36 + 36 + 36 + 36) = 36 W
In either case, a sample of x[n] at the original index n will be plotted at a new index n_N, and this
correspondence can serve as a consistency check in sketches.
Figure E3.2 The signals for Example 3.2
3.2.1 Symmetry
If a signal x[n] is identical to its mirror image x[−n], it is called an even symmetric signal. If x[n]
differs from its mirror image x[−n] only in sign, it is called an odd symmetric or antisymmetric signal.
Mathematically,

xe[n] = xe[−n]        xo[n] = −xo[−n]    (3.6)

In either case, the signal extends over symmetric limits −N ≤ n ≤ N. For an odd symmetric signal, xo[0] = 0
and the sum of xo[n] over symmetric limits (−M, M) equals zero:

Σ_{k=−M}^{M} xo[k] = 0    (3.7)
3.2 Operations on Discrete Signals 43
To find xe[n] and xo[n] from x[n], we fold x[n] and invoke symmetry to get

xe[n] = ½ (x[n] + x[−n])        xo[n] = ½ (x[n] − x[−n])

Naturally, if x[n] has even symmetry, xo[n] will equal zero, and if x[n] has odd symmetry, xe[n] will equal
zero.
The various signals are sketched in Figure E3.3A. As a consistency check, you should confirm that
xo[0] = 0, Σ xo[n] = 0, and that the sum xe[n] + xo[n] recovers x[n].
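The folding-based decomposition is easy to verify numerically. Below is a minimal Python sketch (the function name and sample values are mine); it assumes the signal is listed over symmetric limits −N ≤ n ≤ N, so that folding is simply list reversal:

```python
# Even-odd decomposition: xe[n] = (x[n] + x[-n])/2, xo[n] = (x[n] - x[-n])/2.
# x holds samples over symmetric limits -N..N, so x[-n] is the reversed list.
def even_odd(x):
    folded = x[::-1]
    xe = [(a + b) / 2 for a, b in zip(x, folded)]
    xo = [(a - b) / 2 for a, b in zip(x, folded)]
    return xe, xo

x = [0, 0, 1, 2, 3]            # samples at n = -2..2 (illustrative values)
xe, xo = even_odd(x)
# Consistency checks: xo is zero at the center (n = 0), sums to zero,
# and xe + xo recovers x.
assert xo[len(x) // 2] == 0
assert sum(xo) == 0
assert [a + b for a, b in zip(xe, xo)] == x
print(xe, xo)
```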
(b) Let x[n] = u[n] − u[n − 5]. Find and sketch its odd and even parts.
The signal x[n] and the genesis of its odd and even parts are shown in Figure E3.3B. Note the value
of xe [n] at n = 0 in the sketch.
Figure E3.3B The signal x[n] and its odd and even parts for Example 3.3(b)
3.3.1 Decimation
Suppose x[n] corresponds to an analog signal x(t) sampled at intervals ts. The signal y[n] = x[2n] then
corresponds to the compressed signal x(2t) sampled at ts and contains only alternate samples of x[n]
(corresponding to x[0], x[2], x[4], . . .). We can also obtain y[n] directly from x(t) (not its compressed
version) if we sample it at intervals 2ts (or at a sampling rate S = 1/(2ts)). This means a twofold reduction
in the sampling rate. Decimation by a factor of N is equivalent to sampling x(t) at intervals Nts and implies
an N-fold reduction in the sampling rate. The decimated signal x[Nn] is generated from x[n] by retaining
every Nth sample (corresponding to the indices k = Nn) and discarding all others.
3.3.2 Interpolation
If x[n] corresponds to x(t) sampled at intervals ts , then y[n] = x[n/2] corresponds to x(t) sampled at ts /2
and has twice the length of x[n] with one new sample between adjacent samples of x[n]. If an expression for
x[n] (or the underlying analog signal) were known, it would be no problem to determine these new sample
values. If we are only given the sample values of x[n] (without its analytical form), the best we can do is
interpolate between samples. For example, we may choose each new sample value as zero (zero interpolation),
a constant equal to the previous sample value (step interpolation), or the average of adjacent sample values
(linear interpolation). Zero interpolation is referred to as up-sampling and plays an important role in
practical interpolation schemes. Interpolation by a factor of N is equivalent to sampling x(t) at intervals
ts /N and implies an N-fold increase in both the sampling rate and the signal length.
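The three interpolation choices can be sketched in a few lines of Python (a rough sketch with hypothetical helper names, not library code; it interpolates past the last sample toward zero, as in the examples that follow):

```python
# Decimation by N keeps every Nth sample; interpolation by N inserts N - 1
# new values after each sample (zero, step, or linear fill).
def decimate(x, N):
    return x[::N]

def interpolate(x, N, mode="zero"):
    y = []
    for i, v in enumerate(x):
        y.append(v)
        nxt = x[i + 1] if i + 1 < len(x) else 0   # toward zero at the end
        for k in range(1, N):
            if mode == "zero":
                y.append(0)
            elif mode == "step":
                y.append(v)
            else:   # linear
                y.append(v + (nxt - v) * k / N)
    return y

x = [1, 2, 5, -1]
print(interpolate(x, 3, "step"))   # [1, 1, 1, 2, 2, 2, 5, 5, 5, -1, -1, -1]
print(decimate(interpolate(x, 2, "zero"), 2))   # recovers [1, 2, 5, -1]
```

Note that decimating a signal that was just interpolated by the same factor recovers the original samples, which previews the caveats discussed next.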
3.3 Decimation and Interpolation 45
Some Caveats
Consider the two sets of operations shown below:

x[n] → decimate by 2 → x[2n] → interpolate by 2 → a signal that need not equal x[n]
x[n] → interpolate by 2 → x[n/2] → decimate by 2 → x[n]

We see that decimation is indeed the inverse of interpolation (the second set of operations recovers x[n]
exactly), but the converse is not necessarily true. After all, it is highly unlikely for any interpolation scheme
to recover or predict the exact values of the samples that were discarded during decimation. In situations
where both interpolation and decimation are to be performed in succession, it is therefore best to interpolate
first. In practice, of course, interpolation or decimation should preserve the information content of the
original signal, and this imposes constraints on the rate at which the original samples were acquired.
The step-interpolated signal is h[n] = x[n/3] = {1, 1, 1, 2, 2, 2, 5, 5, 5, −1, −1, −1}.
The linearly interpolated signal is s[n] = x[n/3] = {1, 4/3, 5/3, 2, 3, 4, 5, 3, 1, −1, −2/3, −1/3}.
In linear interpolation, note that we interpolated the last two values toward zero.
(b) Let x[n] = {3, 4, 5, 6}. Find g[n] = x[2n − 1] and the step-interpolated signal h[n] = x[0.5n − 1].
In either case, we first find y[n] = x[n − 1] = {3, 4, 5, 6}. Then
g[n] = y[2n] = x[2n − 1] = {4, 6}.
h[n] = y[n/2] = x[0.5n − 1] = {3, 3, 4, 4, 5, 5, 6, 6}.
(c) Let x[n] = {3, 4, 5, 6}. Find y[n] = x[2n/3], assuming step interpolation where needed.
Since we require both interpolation and decimation, we first interpolate and then decimate to get
After interpolation: g[n] = x[n/3] = {3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6}.
After decimation: y[n] = g[2n] = x[2n/3] = {3, 3, 4, 5, 5, 6}.
This is just an impulse with strength x[k]. The product property leads directly to

Σ_{n=−∞}^{∞} x[n] δ[n − k] = x[k]    (3.13)

This is the sifting property. The impulse extracts the value x[k] from x[n] at the impulse location n = k.
The product and sifting properties are analogous to their analog counterparts.
For example, the signals u[n] and r[n] may be expressed as a train of shifted impulses:

u[n] = Σ_{k=0}^{∞} δ[n − k]        r[n] = Σ_{k=0}^{∞} k δ[n − k]    (3.15)
The signal u[n] may also be expressed as the cumulative sum of δ[n], and the signal r[n] may be described
as the cumulative sum of u[n]:

u[n] = Σ_{k=−∞}^{n} δ[k]        r[n] = Σ_{k=−∞}^{n−1} u[k]    (3.16)
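These cumulative-sum relations are quick to check in pure Python (the function names are mine; note that since r[n] = n u[n], the running sum that builds r[n] must stop at n − 1):

```python
# delta[n], u[n], r[n] for integer n (all three are zero for n < 0).
def delta(n):
    return 1 if n == 0 else 0

def u(n):
    return 1 if n >= 0 else 0

def r(n):
    return n if n >= 0 else 0   # r[n] = n*u[n]

for n in range(6):
    # u[n] is the cumulative sum of delta[k] for k <= n
    assert u(n) == sum(delta(k) for k in range(-10, n + 1))
    # r[n] is the cumulative sum of u[k] for k <= n - 1 (since r[n] = n*u[n])
    assert r(n) == sum(u(k) for k in range(-10, n))
print("cumulative-sum identities verified")
```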
(b) Mathematically describe the signals of Figure E3.6B in at least two different ways.
Figure E3.6B The signals for Example 3.6(b)
1. The signal x[n] may be described as the sequence x[n] = {4, 2, −1, 3} (starting at n = −1).
It may also be written as x[n] = 4δ[n + 1] + 2δ[n] − δ[n − 1] + 3δ[n − 2].
2. The signal y[n] may be represented variously as
A numeric sequence: y[n] = {0, 0, 2, 4, 6, 6, 6} (starting at n = 0).
A sum of shifted impulses: y[n] = 2δ[n − 2] + 4δ[n − 3] + 6δ[n − 4] + 6δ[n − 5] + 6δ[n − 6].
A sum of steps and ramps: y[n] = 2r[n − 1] − 2r[n − 4] − 6u[n − 7].
Note carefully that the argument of the step function is [n − 7] (and not [n − 6]).
3. The signal h[n] may be described as h[n] = 6 tri(n/3) or variously as
A numeric sequence: h[n] = {0, 2, 4, 6, 4, 2, 0} (starting at n = −3).
A sum of impulses: h[n] = 2δ[n + 2] + 4δ[n + 1] + 6δ[n] + 4δ[n − 1] + 2δ[n − 2].
A sum of steps and ramps: h[n] = 2r[n + 3] − 4r[n] + 2r[n − 3].
3.5 Discrete-Time Harmonics and Sinusoids 49
This complex-valued signal requires two separate plots (the real and imaginary parts, for example) for a
graphical description. If 0 < r < 1, x[n] describes a signal whose real and imaginary parts are exponentially
decaying cosines and sines. If r = 1, the real and imaginary parts are pure cosines and sines with a peak
value of unity. If r > 1, we obtain exponentially growing sinusoids.
The quantities f and ω = 2πf describe analog frequencies. The normalized frequency F = f/S is called the
digital frequency and has units of cycles/sample. The frequency Ω = 2πF is the digital radian frequency
with units of radians/sample. The various analog and digital frequencies are compared in Figure 3.1. Note
that the analog frequency f = S (or ω = 2πS) corresponds to the digital frequency F = 1 (or Ω = 2π).
Are all discrete-time sinusoids and harmonics periodic in time? Not always! To understand this idea,
suppose x[n] is periodic with period N such that x[n] = x[n + N]. This leads to

cos(2πnF + θ) = cos[2π(n + N)F + θ] = cos(2πnF + θ + 2πNF)    (3.21)

The two sides are equal provided NF equals an integer k. In other words, F must be a rational fraction (ratio
of integers) of the form k/N. What we are really saying is that a DT sinusoid is periodic only if its digital
frequency is a rational fraction. The period N equals the denominator of k/N, provided common factors
have been canceled from its numerator and denominator. The significance of k is that it takes k full periods
of the analog sinusoid to yield one full period of the sampled sinusoid. The common period of a combination
of periodic DT sinusoids equals the least common multiple (LCM) of their individual periods. If F is not a
rational fraction, there is no periodicity, and the DT sinusoid is classified as nonperiodic or almost periodic.
Examples of periodic and nonperiodic DT sinusoids appear in Figure 3.2.
Even though a DT sinusoid may not always be periodic, it will always have a periodic envelope.
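This rationality test is mechanical enough to code. A small Python sketch using exact fractions (the function names are mine, not from the text):

```python
from fractions import Fraction
from math import lcm

# A DT sinusoid cos(2*pi*F*n + theta) is periodic only when F = k/N is a
# rational fraction; in lowest terms the period is the denominator N.
def period(F):
    return F.denominator          # Fraction is stored in lowest terms

# The common period of several periodic DT sinusoids is the LCM of the
# individual periods.
def common_period(*Fs):
    return lcm(*(period(F) for F in Fs))

print(period(Fraction(1, 10)))                           # 10
print(common_period(Fraction(1, 10), Fraction(3, 20)))   # 20
```

Working with `Fraction` rather than floats matters here: a float like 0.1 is only an approximation of 1/10, and the period test hinges on exact rationality.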
Figure 3.2 (a) cos(0.125πn) is periodic (period N = 16), and its envelope is periodic. (b) cos(0.5n) is not
periodic (check peaks or zeros), although its envelope is periodic.
(b) What is the period of the harmonic signal x[n] = e^(j0.2nπ) + e^(j0.3nπ)?
The digital frequencies in x[n] are F1 = 0.1 = 1/10 = k1/N1 and F2 = 0.15 = 3/20 = k2/N2. Thus
N1 = 10, N2 = 20, and the common period is N = LCM(10, 20) = 20.
(c) The signal x(t) = 2 cos(40πt) + sin(60πt) is sampled at 75 Hz. What is the common period of the
sampled signal x[n], and how many full periods of x(t) does it take to obtain one period of x[n]?
The frequencies in x(t) are f1 = 20 Hz and f2 = 30 Hz. The digital frequencies of the individual
components are F1 = 20/75 = 4/15 = k1/N1 and F2 = 30/75 = 2/5 = k2/N2. Their periods are
N1 = 15 and N2 = 5, so the common period is N = LCM(15, 5) = 15 samples. This corresponds to
15/75 = 0.2 s, or two full periods of x(t) (whose fundamental frequency is 10 Hz).
Consider an analog signal x(t) = cos(2πf0t + θ) and its sampled version x[n] = cos(2πnF0 + θ), where
F0 = f0/S. If x[n] is to be a unique representation of x(t), we must be able to reconstruct x(t) from x[n].
In practice, reconstruction uses only the copy or image of the periodic spectrum of x[n] in the principal
period −0.5 ≤ F ≤ 0.5, which corresponds to the analog frequency range −0.5S ≤ f ≤ 0.5S. We use a
lowpass filter to remove all other replicas or images, and the output of the lowpass filter corresponds to the
reconstructed analog signal. As a result, the highest frequency fH we can identify in the signal reconstructed
from its samples is fH = 0.5S.
Whether the frequency of the reconstructed analog signal matches x(t) or not depends on the sampling
rate S. If S > 2f0, the digital frequency F0 = f0/S is always in the principal range −0.5 ≤ F ≤ 0.5, and the
reconstructed analog signal is identical to x(t). If S < 2f0, the digital frequency exceeds 0.5. Its image in
the principal range appears at the lower digital frequency Fa = F0 − M (corresponding to the lower analog
frequency fa = f0 − MS), where M is an integer that places the digital frequency Fa between −0.5 and 0.5
(or the analog frequency fa between −0.5S and 0.5S). The reconstructed analog signal xa(t) = cos(2πfat + θ)
is at a lower frequency fa = SFa than f0 and is no longer a replica of x(t). This phenomenon, where a
reconstructed sinusoid appears at a lower frequency than the original, is called aliasing. The real problem
is that the original signal x(t) and the aliased signal xa(t) yield identical sampled representations at the
sampling frequency S and prevent unique identification of x(t) from its samples!
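The folding rule fa = f0 − MS is easy to automate. A small Python sketch (the helper name is mine, not from the text):

```python
# Alias of an f0-Hz sinusoid sampled at S Hz: fa = f0 - M*S, with the
# integer M chosen so that fa lies in the principal range (-0.5S, 0.5S].
def aliased_frequency(f0, S):
    M = round(f0 / S)
    return f0 - M * S

# A 100-Hz sinusoid at various sampling rates:
print(aliased_frequency(100, 240))   # 100 (S > 2*f0: no aliasing)
print(aliased_frequency(100, 140))   # -40
print(aliased_frequency(100, 90))    # 10
print(aliased_frequency(100, 35))    # -5
```

A negative result simply corresponds to a phase reversal of the reconstructed sinusoid, as the example below shows.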
Thus, five periods of x(t) yield 12 samples (one period) of the sampled signal.
(b) A 100-Hz sinusoid is sampled at rates of 240 Hz, 140 Hz, 90 Hz, and 35 Hz. In each case, has aliasing
occurred, and if so, what is the aliased frequency?
To avoid aliasing, the sampling rate must exceed 200 Hz. If S = 240 Hz, there is no aliasing, and
the reconstructed signal (from its samples) appears at the original frequency of 100 Hz. For all other
choices of S, the sampling rate is too low and leads to aliasing. The aliased signal shows up at a lower
frequency. The aliased frequencies corresponding to each sampling rate S are found by subtracting out
multiples of S from 100 Hz to place the result in the range −0.5S ≤ f ≤ 0.5S. If the original signal
has the form x(t) = cos(200πt + θ), we obtain the following aliased frequencies and aliased signals:
1. S = 140 Hz, fa = 100 − 140 = −40 Hz, xa(t) = cos(−80πt + θ) = cos(80πt − θ)
2. S = 90 Hz, fa = 100 − 90 = 10 Hz, xa(t) = cos(20πt + θ)
3. S = 35 Hz, fa = 100 − 3(35) = −5 Hz, xa(t) = cos(−10πt + θ) = cos(10πt − θ)
We thus obtain a 40-Hz sinusoid (with reversed phase), a 10-Hz sinusoid, and a 5-Hz sinusoid (with
reversed phase), respectively. Notice that negative aliased frequencies simply lead to a phase reversal
and do not represent any new information. Finally, had we used a sampling rate exceeding the Nyquist
rate of 200 Hz, we would have recovered the original 100-Hz signal every time. Yes, it pays to play by
the rules of the sampling theorem!
(c) Two analog sinusoids x1 (t) (shown light) and x2 (t) (shown dark) lead to an identical sampled version as
illustrated in Figure E3.8C. Has aliasing occurred? Identify the original and aliased signal. Identify the
digital frequency of the sampled signal corresponding to each sinusoid. What is the analog frequency
of each sinusoid if S = 50 Hz? Can you provide exact expressions for each sinusoid?
Figure E3.8C The sinusoids for Example 3.8(c)
Look at the interval (0, 0.1) s. The sampled signal shows five samples per period. This covers three
full periods of x1(t), and so F1 = 3/5. This also covers two full periods of x2(t), and so F2 = 2/5. Clearly,
x1(t) (with |F1| > 0.5) is the original signal that is aliased to x2(t). The sampling interval is 0.02 s.
So, the sampling rate is S = 50 Hz. The original and aliased frequencies are f1 = SF1 = 30 Hz and
f2 = SF2 = 20 Hz.
From the figure, we can identify exact expressions for x1(t) and x2(t) as follows. Since x1(t) is a delayed
cosine with x1(0) = 0.5, we have x1(t) = cos(60πt − π/3). With S = 50 Hz, the frequency f1 = 30 Hz
actually aliases to f2 = −20 Hz, and thus x2(t) = cos(−40πt − π/3) = cos(40πt + π/3). With F = 30/50 = 0.6
(or F = −0.4), the expression for the sampled signal is x[n] = cos(2πnF − π/3).
(d) A 100-Hz sinusoid is sampled, and the reconstructed signal (from its samples) shows up at 10 Hz.
What was the sampling rate S?
If you said 90 Hz (100 − S = 10), you are not wrong. But you could also have said 110 Hz (100 − S =
−10). In fact, we can also subtract out integer multiples of S from 100 Hz, and S is then found from
the following expressions (as long as we ensure that S > 20 Hz):
1. 100 − MS = 10
2. 100 − MS = −10
Solving the first expression for S, we find, for example, S = 45 Hz (with M = 2) or S = 30 Hz (with
M = 3). Similarly, the second expression gives S = 55 Hz (with M = 2). Which of these sampling
rates was actually used? We have no way of knowing!
period. The frequency fr of the reconstructed signal is then fr = 540F = 200 Hz.
2. If S = 70 Hz, the digital frequency of the sampled signal is F = 100/70 = 10/7, which does not lie in
the principal period. The frequency in the principal period is F = 10/7 − 1 = 3/7, and the frequency fr
of the reconstructed signal is then fr = SF = 70(3/7) = 30 Hz. A negative sign in F would simply
translate to a phase reversal in the reconstructed signal.
3.7 Random Signals 55
3.7.1 Probability
Figure 3.3 shows the results of two experiments, each repeated under identical conditions. The first exper-
iment always yields identical results no matter how many times it is run and yields a deterministic signal.
We need to run the experiment only once to predict what the next, or any other run, will yield.
Figure 3.3 (a) Four realizations of a deterministic signal. (b) Four realizations of a random signal.
The second experiment gives a different result or realization x(t) every time the experiment is repeated
and describes a stochastic or random system. A random signal or random process X(t) comprises
the family or ensemble of all such realizations obtained by repeating the experiment many times. Each
realization x(t), once obtained, ceases to be random and can be subjected to the same operations as we use
for deterministic signals (such as derivatives, integrals, and the like). The randomness of the signal stems
from the fact that one realization provides no clue as to what the next, or any other, realization might yield.
At a given instant t, each realization of a random signal can assume a different value, and the collection of
all such values defines a random variable. Some values are more likely to occur, or more probable, than
others. The concept of probability is tied to the idea of repeating an experiment a large number of times
in order to estimate this probability. Thus, if the value 2 V occurs 600 times in 1000 runs, we say that the
probability of occurrence of 2 V is 0.6.
The probability of an event A, denoted Pr(A), is the proportion of successful outcomes to the (very
large) number of times the experiment is run and is a fraction between 0 and 1 since the number of successful
runs cannot exceed the total number of runs. The larger the probability Pr(A), the greater the chance of
event A occurring. To fully characterize a random variable, we must answer two questions:
1. What is the range of all possible (nonrandom) values it can acquire? This defines an ensemble space,
which may be finite or infinite.
2. What are the probabilities for all the possible values in this range? This defines the probability
distribution function F (x). Clearly, F (x) must always lie between 0 and 1.
It is common to work with the derivative of the probability distribution function called the probability
density function f (x). The distribution function F (x) is simply the running integral of the density f (x):
f(x) = dF(x)/dx        or        F(x) = ∫_{−∞}^{x} f(λ) dλ    (3.22)
The probability that X lies between x1 and x2 is Pr[x1 < X ≤ x2] = F(x2) − F(x1). The total area of f(x) is 1.
The mean, or expectation, is a measure of where the distribution is centered. The variance σ² measures the
spread of the distribution about its mean. The less the spread, the smaller the variance. The variance
is also a measure of the ac power in a signal. The quantity σ is known as the standard deviation and
provides a measure of the uncertainty in a physical measurement.
In a uniform distribution, every value is equally likely, since the random variable shows no preference for
a particular value. The density function f(x) is just a rectangular pulse defined by

f(x) = 1/(β − α) for α ≤ x ≤ β, and 0 otherwise    (uniform distribution)    (3.25)
and the distribution F(x) is a ramp that flattens out. When quantizing signals in uniform steps, the error
in representing a signal value is assumed to be uniformly distributed between −0.5Δ and 0.5Δ, where Δ is
the quantization step. The density function of the phase of a sinusoid with random phase is also uniformly
distributed, between −π and π.
The bell-shaped Gaussian probability density is also referred to as normal and is defined by

f(x) = [1/(σ√(2π))] exp[−(x − mx)²/(2σ²)]    (normal distribution)    (3.26)
The mean (or variance) of the sum of Gaussian distributions equals the sum of the individual means (or
variances). The probability distribution of combinations of statistically independent, random phenomena
often tends to a Gaussian. This is the central limit theorem.
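A quick numerical illustration of the central limit theorem (all parameters below are illustrative): averaging many sums of 12 independent uniform variables on (0, 1) gives a distribution with mean 12(1/2) = 6 and variance 12(1/12) = 1, close to Gaussian.

```python
import random

# Sum of 12 independent uniforms on (0, 1): mean 6, variance 1, and a
# distribution that is nearly Gaussian (central limit theorem).
random.seed(1)
trials = 20000
sums = [sum(random.random() for _ in range(12)) for _ in range(trials)]
mean = sum(sums) / trials
var = sum((s - mean) ** 2 for s in sums) / trials
print(round(mean, 1), round(var, 1))   # close to 6.0 and 1.0
```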
The idea of distributions also applies to deterministic periodic signals for which they can be found as
exact analytical expressions. Consider the periodic signal x(t) of Figure 3.5. The probability Pr[X < 0] that
x(t) < 0 is zero. The probability Pr[X < 3] that x(t) is less than 3 is 1. Since x(t) is linear over one period
(T = 3), all values in this range are equally likely, and F (x) must vary linearly from 0 to 1 over this range.
This yields the distribution F (x) and density f (x) as shown. Note that the area of f (x) equals unity.
In many situations, we use artificially generated signals (which can never be truly random) with prescribed
statistical features called pseudorandom signals. Such signals are actually periodic (with a very long
period), but over one period their statistical features approximate those of random signals.
Histograms: The estimates fk of a probability distribution are obtained by constructing a histogram from
a large number of observations. A histogram is a bar graph of the number of observations falling within
specified amplitude levels, or bins, as illustrated in Figure 3.6.
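Such a histogram is straightforward to build by hand. The following Python sketch (bin count and data are illustrative, not from the text) counts observations per bin for a uniformly distributed sample:

```python
import random

# Build a 10-bin histogram of 1000 uniform observations on (0, 1).
random.seed(2)
data = [random.random() for _ in range(1000)]
bins = 10
counts = [0] * bins
for v in data:
    k = min(int(v * bins), bins - 1)   # bin index; clamp v = 1.0 into last bin
    counts[k] += 1
# For a uniform density, each bin holds roughly 1000/10 = 100 observations.
print(counts)
```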
Signal-to-Noise Ratio: For a noisy signal x(t) = s(t) + An(t), with a signal component s(t) and a noise
component An(t) (with noise amplitude A), the signal-to-noise ratio (SNR) is the ratio of the signal power
σs² and the noise power A²σn², usually defined in decibels (dB) as

SNR = 10 log(σs²/(A²σn²)) dB    (3.28)
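As a quick numerical illustration of Eq. (3.28) (the power values below are made up, not from the text):

```python
import math

# SNR in dB from signal power and noise power (Eq. 3.28, with the noise
# power taken as A^2 * sigma_n^2).
def snr_db(signal_power, noise_power):
    return 10 * math.log10(signal_power / noise_power)

# Illustrative values: a unit-amplitude sine has power 0.5; noise power 0.05.
print(round(snr_db(0.5, 0.05), 1))   # 10.0 dB
```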
(a) One realization of a noisy sine (b) Average of 8 realizations (c) Average of 48 realizations
CHAPTER 3 PROBLEMS
DRILL AND REINFORCEMENT
3.1 (Discrete Signals) Sketch each signal and find its energy or power as appropriate.
(a) x[n] = {6, 4, 2, 2} (b) x[n] = {3, 2, 1, 0, 1}
(c) x[n] = {0, 2, 4, 6} (d) x[n] = u[n] − u[n − 4]
(e) x[n] = cos(nπ/2) (f) x[n] = 8(0.5)^n u[n]
3.2 (Operations) Let x[n] = {6, 4, 2, 2}. Sketch the following signals and find their signal energy.
(a) y[n] = x[n − 2] (b) f [n] = x[n + 2] (c) g[n] = x[−n + 2] (d) h[n] = x[−n − 2]
3.3 (Operations) Let x[n] = 8(0.5)^n (u[n + 1] − u[n − 4]). Sketch the following signals.
(a) y[n] = x[n − 3] (b) f [n] = x[n + 1] (c) g[n] = x[n + 4] (d) h[n] = x[n − 2]
3.4 (Decimation and Interpolation) Let x[n] = {4, 0, 2, 1, 3}. Find and sketch the following
signals and compare their signal energy with the energy in x[n].
(a) The decimated signal d[n] = x[2n]
(b) The zero-interpolated signal f [n] = x[n/2]
(c) The step-interpolated signal g[n] = x[n/2]
(d) The linearly interpolated signal h[n] = x[n/2]
3.5 (Symmetry) Sketch each signal and its even and odd parts.
(a) x[n] = 8(0.5)^n u[n] (b) x[n] = u[n] (c) x[n] = 1 + u[n]
(d) x[n] = u[n] − u[n − 4] (e) x[n] = tri((n − 3)/3) (f) x[n] = {6, 4, 2, 2}
3.6 (Sketching Discrete Signals) Sketch each of the following signals:
(a) x[n] = r[n + 2] − r[n − 2] − 4u[n − 6] (b) x[n] = rect(n/6)
(c) x[n] = rect((n − 2)/4) (d) x[n] = 6 tri((n − 4)/3)
3.8 (Discrete-Time Harmonics) Check each of the following signals for periodicity, and compute the
common period N if periodic.
(a) x[n] = cos(nπ/2) (b) x[n] = cos(n/2)
(c) x[n] = sin(nπ/4) − 2 cos(nπ/6)
(d) x[n] = 2 cos(nπ/4) + cos²(nπ/4)
3.9 (Digital Frequency) Set up an expression for each signal, using a digital frequency |F | < 0.5, and
another expression using a digital frequency in the range 4 < F < 5.
3.10 (Sampling and Aliasing) Each of the following sinusoids is sampled at S = 100 Hz. Determine if
aliasing has occurred and set up an expression for each sampled signal using a digital frequency in the
principal range (|F | < 0.5).
(a) x(t) = cos(320πt + π/4) (b) x(t) = cos(140πt − π/4) (c) x(t) = sin(60πt)
3.12 (Signal Representation) The two signals shown in Figure P3.12 may be expressed as
(a) x[n] = Aα^n (u[n] − u[n − N]) (b) y[n] = A cos(2πFn + θ)
Find the constants in each expression and then find the signal energy or power as appropriate.
Figure P3.12 Signals for Problem 3.12
3.13 (Energy and Power) Classify the following as energy signals, power signals, or neither and find the
energy or power as appropriate.
(a) x[n] = 2^n u[n] (b) x[n] = 2^n u[n − 1] (c) x[n] = cos(nπ)
(d) x[n] = cos(nπ/2) (e) x[n] = (1/n) u[n − 1] (f) x[n] = (1/√n) u[n − 1]
(g) x[n] = (1/n²) u[n − 1] (h) x[n] = e^(jnπ) (i) x[n] = e^(jnπ/2)
(j) x[n] = e^((j+1)nπ/4) (k) x[n] = j^(n/4) (l) x[n] = (√j)^n + (√(−j))^n
3.14 (Energy and Power) Sketch each of the following signals, classify each as an energy signal or power
signal, and find the energy or power as appropriate.
(a) x[n] = Σ_{k=−∞}^{∞} y[n − kN], where y[n] = u[n] − u[n − 3] and N = 6
(b) x[n] = Σ_{k=−∞}^{∞} 2^(n−5k) (u[n − 5k] − u[n − 5k − 4])
3.15 (Sketching Signals) Sketch the following signals and describe how they are related.
(a) x[n] = [n] (b) f [n] = rect(n) (c) g[n] = tri(n) (d) h[n] = sinc(n)
3.16 (Discrete Exponentials) A causal discrete exponential has the form x[n] = α^n u[n].
(a) Assume that α is real and positive. Pick convenient values for α > 1, α = 1, and α < 1; sketch
x[n]; and describe the nature of the sketch for each choice of α.
(b) Assume that α is real and negative. Pick convenient values for α < −1, α = −1, and α > −1;
sketch x[n]; and describe the nature of the sketch for each choice of α.
(c) Assume that α is complex and of the form α = Ae^(jθ), where A is a positive constant. Pick
convenient values for θ and for A < 1, A = 1, and A > 1; sketch the real part and imaginary
part of x[n] for each choice of A; and describe the nature of each sketch.
(d) Assume that α is complex and of the form α = Ae^(jθ), where A is a positive constant. Pick
convenient values for θ and for A < 1, A = 1, and A > 1; sketch the magnitude and
phase of x[n] for each choice of A; and describe the nature of each sketch.
3.17 (Interpolation and Decimation) Let x[n] = 4 tri(n/4). Sketch the following signals and describe
how they differ.
(a) x[2n/3], using zero interpolation followed by decimation
(b) x[2n/3], using step interpolation followed by decimation
(c) x[2n/3], using decimation followed by zero interpolation
(d) x[2n/3], using decimation followed by step interpolation
3.18 (Fractional Delay) Starting with x[n], we can generate the signal x[n − 2] (using a delay of 2) or
x[2n − 3] (using a delay of 3 followed by decimation). However, to generate a fractional delay of the
form x[n − M/N] requires a delay, interpolation, and decimation!
(a) Describe the sequence of operations required to generate x[n − 2/3] from x[n].
(b) Let x[n] = {1, 4, 7, 10, 13}. Sketch x[n] and x[n − 2/3]. Use linear interpolation where required.
(c) Generalize the results of part (a) to generate x[n − M/N] from x[n]. Are there any restrictions
on M and N?
3.19 (The Roots of Unity) The N roots of the equation z^N = 1 can be found by writing it as z^N = e^(j2πk)
to give z = e^(j2πk/N), k = 0, 1, . . . , N − 1. What is the magnitude of each root? The roots can be
displayed as vectors directed from the origin whose tips lie on a circle.
(a) What is the length of each vector and the angular spacing between adjacent vectors? Sketch for
N = 5 and N = 6.
(b) Extend this concept to find the roots of z^N = −1 and sketch for N = 5 and N = 6.
3.20 (Digital Sinusoids) Find the period N of each signal if periodic. Express each signal using a digital
frequency in the principal range (|F| < 0.5) and in the range 3 ≤ F ≤ 4.
3.21 (Aliasing and Signal Reconstruction) The signal x(t) = cos(320πt + π/4) is sampled at 100 Hz,
and the sampled signal x[n] is reconstructed at 200 Hz to recover the analog signal xr(t).
(a) Has aliasing occurred? What is the period N and the digital frequency F of x[n]?
(b) How many full periods of x(t) are required to generate one period of x[n]?
(c) What is the analog frequency of the recovered signal xr (t)?
(d) Write expressions for x[n] (using |F | < 0.5) and for xr (t).
3.22 (Digital Pitch Shifting) One way to accomplish pitch shifting is to play back (or reconstruct) a
sampled signal at a different sampling rate. Let the analog signal x(t) = sin(15800πt + 0.25π) be
sampled at a sampling rate of 8 kHz.
(a) Find its sampled representation with digital frequency |F | < 0.5.
(b) What frequencies are heard if the signal is reconstructed at a rate of 4 kHz?
(c) What frequencies are heard if the signal is reconstructed at a rate of 8 kHz?
(d) What frequencies are heard if the signal is reconstructed at a rate of 20 kHz?
3.23 (Discrete-Time Chirp Signals) Consider the signal x(t) = cos[φ(t)], where φ(t) = αt². Show that
its instantaneous frequency fi(t) = φ′(t)/(2π) varies linearly with time.
(a) Choose α such that the frequency varies from 0 Hz to 2 Hz in 10 seconds, and generate the
sampled signal x[n] from x(t), using a sampling rate of S = 4 Hz.
(b) It is claimed that, unlike x(t), the signal x[n] is periodic. Verify this claim, using the condition
for periodicity (x[n] = x[n + N]), and determine the period N of x[n].
(c) The signal y[n] = cos(πF0 n²/M), n = 0, 1, . . . , M − 1, describes an M-sample chirp whose digital
frequency varies linearly from 0 to F0. What is the period of y[n] if F0 = 0.25 and M = 8?
3.24 (Time Constant) For exponentially decaying discrete signals, the time constant is a measure of
how fast a signal decays. The 60-dB time constant describes the (integer) number of samples it takes
for the signal level to decay by a factor of 1000 (or 20 log 1000 = 60 dB).
(a) Let x[n] = (0.5)n u[n]. Compute its 60-dB time constant and 40-dB time constant.
(b) Compute the time constant in seconds if the discrete-time signal is derived from an analog signal
sampled at 1 kHz.
3.25 (Signal Delay) The delay D of a discrete-time energy signal x[n] is defined by

D = [ Σ_{k=−∞}^{∞} k x²[k] ] / [ Σ_{k=−∞}^{∞} x²[k] ]
(a) Verify that the delay of the symmetric sequence x[n] = {4, 3, 2, 1, 0, 1, 2, 3, 4} (centered at
n = 0) is zero.
(b) Compute the delay of the signals g[n] = x[n − 1] and h[n] = x[n − 2].
(c) What is the delay of the signal y[n] = 1.5(0.5)^n u[n] − 2δ[n]?
3.26 (Periodicity) It is claimed that the sum of an absolutely summable signal x[n] and its shifted (by
multiples of N ) replicas is a periodic signal xp [n] with period N . Verify this claim by sketching the
following and, for each case, compute the power in the resulting periodic signal xp [n] and compare the
sum and energy of one period of xp [n] with the sum and energy of x[n].
3.27 (Periodic Extension) The sum of an absolutely summable signal x[n] and its shifted (by multiples
of N) replicas is called the periodic extension of x[n] with period N. Show that one period of the
periodic extension of the signal x[n] = α^n u[n] with period N is y[n] = x[n]/(1 − α^N), 0 ≤ n ≤ N − 1.
How does the one-period sum of y[n] compare with the sum of x[n]? What is the signal power in x[n]
and y[n]?
3.28 (Signal Norms) Norms provide a measure of the size of a signal. The p-norm, or Hölder norm,
‖x‖p for discrete signals is defined by ‖x‖p = (Σ|x|^p)^(1/p), where 0 < p < ∞ is a positive integer. For
p = ∞, we also define ‖x‖∞ as the peak absolute value |x|max.
(a) Let x[n] = {3, j4, 3 + j4}. Find ‖x‖1, ‖x‖2, and ‖x‖∞.
(b) What is the significance of each of these norms?
3.29 (Discrete Signals) Plot each signal x[n] over −10 ≤ n ≤ 10. Then, using the ADSP routine operate
(or otherwise), plot each signal y[n] and compare with the original.
(a) x[n] = u[n + 4] − u[n − 4] + 2δ[n + 6] − δ[n − 3], with y[n] = x[n − 4]
(b) x[n] = r[n + 6] − r[n + 3] − r[n − 3] + r[n − 6], with y[n] = x[−n − 4]
(c) x[n] = rect(n/10) − rect((n − 3)/6), with y[n] = x[n + 4]
(d) x[n] = 6 tri(n/6) − 3 tri(n/3), with y[n] = x[−n + 4]
3.30 (Signal Interpolation) Let h[n] = sin(nπ/3), 0 ≤ n ≤ 10. Using the ADSP routine interpol (or
otherwise), plot h[n] and the zero-interpolated, step-interpolated, and linearly interpolated signals
using interpolation by 3.
3.31 (Discrete Exponentials) A causal discrete exponential may be expressed as x[n] = α^n u[n], where
the nature of α dictates the form of x[n]. Plot the following over 0 ≤ n ≤ 40 and comment on the
nature of each plot.
3.32 (Discrete-Time Sinusoids) Which of the following signals are periodic, and with what period? Plot
each signal over −10 ≤ n ≤ 30. Do the plots confirm your expectations?
(a) x[n] = 2 cos(nπ/2) + 5 sin(nπ/5)
(b) x[n] = 2 cos(nπ/2) − sin(nπ/3)
3.33 (Complex-Valued Signals) A complex-valued signal x[n] requires two plots for a complete description
in one of two forms: the magnitude and phase vs. n, or the real part vs. n and the imaginary part
vs. n.
(a) Let x[n] = {2, 1 + j, j2, 2 − j2, 4}. Sketch each form for x[n] by hand.
(b) Let x[n] = e^(j0.3nπ). Use Matlab to plot each form over −30 ≤ n ≤ 30. Is x[n] periodic? If so,
can you identify its period from the Matlab plots? From which form, and how?
3.34 (Complex Exponentials) Let x[n] = 5 − 2e^{j(nπ/9 − π/4)}. Plot the following signals and, for each case, derive analytic expressions for the signals plotted and compare with your plots. Is the signal x[n] periodic? What is the period N? Which plots allow you to determine the period of x[n]?
(a) The real part and imaginary part of x[n] over −20 ≤ n ≤ 20
(b) The magnitude and phase of x[n] over −20 ≤ n ≤ 20
(c) The sum of the real and imaginary parts over −20 ≤ n ≤ 20
(d) The difference of the real and imaginary parts over −20 ≤ n ≤ 20
3.35 (Complex Exponentials) Let x[n] = (j)^n + (−j)^n. Plot the following signals and, for each case, derive analytic expressions for the sequences plotted and compare with your plots. Is the signal x[n] periodic? What is the period N? Which plots allow you to determine the period of x[n]?
(a) The real part and imaginary part of x[n] over −20 ≤ n ≤ 20
(b) The magnitude and phase of x[n] over −20 ≤ n ≤ 20
3.36 (Discrete-Time Chirp Signals) An N-sample chirp signal x[n] whose digital frequency varies linearly from F0 to F1 is described by

x[n] = cos(2π[F0 n + ((F1 − F0)/(2N)) n^2]),   n = 0, 1, . . . , N − 1
(a) Generate and plot 800 samples of a chirp signal x whose digital frequency varies from F = 0 to
F = 0.5. Observe how the frequency of x varies linearly with time, using the ADSP command
timefreq(x).
(b) Generate and plot 800 samples of a chirp signal x whose digital frequency varies from F = 0 to
F = 1. Is the frequency always increasing? If not, what is the likely explanation?
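The chirp formula above translates directly into a few lines of code. A minimal Python sketch (the ADSP routine timefreq is left aside, and the function name here is an assumption):

```python
import math

def chirp(F0, F1, N):
    """N samples of a chirp whose digital frequency sweeps linearly from F0
    toward F1, per x[n] = cos(2*pi*(F0*n + (F1 - F0)*n^2 / (2*N)))."""
    return [math.cos(2 * math.pi * (F0 * n + (F1 - F0) * n * n / (2 * N)))
            for n in range(N)]

x = chirp(0.0, 0.5, 800)   # part (a): frequency sweeps from F = 0 to F = 0.5
```

For part (b), chirp(0.0, 1.0, 800) sweeps past F = 0.5, beyond which the sampled frequency aliases and appears to decrease again.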
3.37 (Chirp Signals) It is claimed that the chirp signal x[n] = cos(πn^2/6) is periodic (unlike the analog chirp signal x(t) = cos(πt^2/6)). Plot x[n] over 0 ≤ n ≤ 20. Does x[n] appear periodic? If so, can you identify the period N? Justify your results by trying to find an integer N such that x[n] = x[n + N] (the basis for periodicity).
Chapter 3 Problems 65
3.38 (Signal Averaging) Extraction of signals from noise is an important signal-processing application.
Signal averaging relies on averaging the results of many runs. The noise tends to average out to zero,
and the signal quality or signal-to-noise ratio (SNR) improves.
(a) Generate samples of the sinusoid x(t) = sin(800πt) sampled at S = 8192 Hz for 2 seconds. The sampling rate is chosen so that you may also listen to the signal if your machine allows.
(b) Create a noisy signal s[n] by adding x[n] to samples of uniformly distributed noise such that s[n]
has an SNR of 10 dB. Compare the noisy signal with the original and compute the actual SNR
of the noisy signal.
(c) Sum the signal s[n] 64 times and average the result to obtain the signal xa[n]. Compare the averaged signal xa[n], the noisy signal s[n], and the original signal x[n]. Compute the SNR of the averaged signal xa[n]. Is there an improvement in the SNR? Do you notice any (visual and audible) improvement? Should you?
(d) Create the averaged result xb[n] of 64 different noisy signals and compare the averaged signal xb[n] with the original signal x[n]. Compute the SNR of the averaged signal xb[n]. Is there an improvement in the SNR? Do you notice any (visual and/or audible) improvement? Explain how the signal xb[n] differs from xa[n].
(e) The improvement in SNR is a function of the noise distribution. Generate averaged signals, using different noise distributions (such as Gaussian noise) and comment on the results.
3.39 (The Central Limit Theorem) The central limit theorem asserts that the sum of independent noise
distributions tends to a Gaussian distribution as the number N of distributions in the sum increases.
In fact, one way to generate a random signal with a Gaussian distribution is to add many (typically 6
to 12) uniformly distributed signals.
(a) Generate the sum of uniformly distributed random signals using N = 2, N = 6, and N = 12 and
plot the histograms of each sum. Does the histogram begin to take on a Gaussian shape as N
increases? Comment on the shape of the histogram for N = 2.
(b) Generate the sum of random signals with different distributions using N = 6 and N = 12. Does the central limit theorem appear to hold even when the distributions are not identical (as long as you select a large enough N)? Comment on the physical significance of this result.
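The experiment in part (a) is easy to sketch without plotting; the pure-Python snippet below checks the sum's mean and variance against the values the central limit theorem predicts (N/2 and N/12 for uniform(0,1) terms), rather than drawing histograms:

```python
import random

random.seed(1)

def uniform_sum(N, samples=20000):
    """Each output value is the sum of N independent uniform(0,1) variables."""
    return [sum(random.random() for _ in range(N)) for _ in range(samples)]

s = uniform_sum(12)
mean = sum(s) / len(s)                           # expect about N/2 = 6
var = sum((v - mean) ** 2 for v in s) / len(s)   # expect about N/12 = 1
```

Grouping the values of s into bins (or using matplotlib's hist) shows the bell shape emerging; for N = 2 the histogram is triangular, not Gaussian.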
3.40 (Music Synthesis I) A musical composition is a combination of notes, or signals, at various frequencies. An octave covers a range of frequencies from f0 to 2f0. In the western musical scale, there are 12 notes per octave, logarithmically equispaced. The frequencies of the notes from f0 to 2f0 correspond to

f = 2^{k/12} f0,   k = 0, 1, 2, . . . , 11

The 12 notes are as follows (the ♯ and ♭ stand for sharp and flat, and each pair of notes in parentheses has the same frequency):

A (A♯ or B♭) B C (C♯ or D♭) D (D♯ or E♭) E F (F♯ or G♭) G (G♯ or A♭)
An Example: Raga Malkauns: In Indian classical music, a raga is a musical composition based on
an ascending and descending scale. The notes and their order form the musical alphabet and grammar
from which the performer constructs musical passages, using only the notes allowed. The performance
of a raga can last from a few minutes to an hour or more! Raga malkauns is a pentatonic raga (with five notes) and the following scales:
Ascending: D F G B♭ C D   Descending: C B♭ G F D
The final note in each scale is held twice as long as the rest. To synthesize this scale in Matlab, we start with a frequency f0 corresponding to the first note D and go up in frequency to get the notes in the ascending scale; when we reach the note D, which is an octave higher, we go down in frequency to get the notes in the descending scale. Generate sampled sinusoids at these frequencies, using an appropriate sampling rate (say, 8192 Hz); concatenate them, assuming silent passages between each note; and play the resulting signal, using the Matlab command sound.
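The book's Matlab fragment is not reproduced in this copy; as a stand-in, here is a rough Python sketch of the same idea. The starting frequency (D ≈ 293.66 Hz) and the semitone offsets for D F G B♭ C D are assumptions based on the scale above:

```python
import math

S = 8192            # sampling rate (Hz)
f0 = 293.66         # assumed starting frequency for the note D (Hz)

def tone(f, dur=0.4):
    """dur seconds of a sampled sinusoid at f Hz."""
    return [math.sin(2 * math.pi * f * n / S) for n in range(int(dur * S))]

silence = [0.0] * 50
# Semitone offsets of D F G Bb C D' from the starting D
ascent = [0, 3, 5, 8, 10, 12]
descent = [10, 8, 5, 3, 0]     # C Bb G F D on the way down

signal = []
for k in ascent + descent:
    signal += tone(f0 * 2 ** (k / 12)) + silence
```

The list can be written to a WAV file or played back with an audio library; holding the final note of each scale twice as long is a one-argument change to tone's dur.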
3.41 (Music Synthesis II) The raw scale of raga malkauns will sound pretty dry! The reason for
this is the manner in which the sound from a musical instrument is generated. Musical instruments
produce sounds by the vibrations of a string (in string instruments) or a column of air (in woodwind
instruments). Each instrument has its characteristic sound. In a guitar, for example, the strings are
plucked, held, and then released to sound the notes. Once plucked, the sound dies out and decays.
Furthermore, the notes are never pure but contain overtones (harmonics). For a realistic sound, we
must include the overtones and the attack, sustain, and release (decay) characteristics. The sound
signal may be considered to have the form x(t) = e(t)cos(2πf0 t + θ), where f0 is the pitch and e(t) is the envelope that describes the attack-sustain-release characteristics of the instrument played. A
crude representation of some envelopes is shown in Figure P3.41 (the piecewise linear approximations
will work just as well for our purposes). Woodwind instruments have a much longer sustain time and
a much shorter release time than do plucked string and keyboard instruments.
Figure P3.41 Envelopes of woodwind instruments and of string and keyboard instruments, and their piecewise linear approximations (dark), for Problem 3.41
Experiment with the scale of raga malkauns and try to produce a guitar-like sound, using the appro-
priate envelope form. You should be able to discern an audible improvement.
3.42 (Music Synthesis III) Synthesize the following notes, using a woodwind envelope, and synthesize
the same notes using a plucked string envelope.
F♯(0.3) D(0.4) E(0.4) A(1) A(0.4) E(0.4) F♯(0.3) D(1)
All the notes cover one octave, and the numbers in parentheses give a rough indication of their relative
duration. Can you identify the music? (It is Big Ben.)
3.43 (Music Synthesis IV) Synthesize the first bar of Pictures at an Exhibition by Mussorgsky, which
has the following notes:
A(3) G(3) C(3) D(2) G′(1) E(3) D(2) G′(1) E(3) C(3) D(3) A(3) G(3)
All the notes cover one octave except the note G′, which is an octave above G. The numbers in
parentheses give a rough indication of the relative duration of the notes (for more details, you may
want to listen to an actual recording). Assume that a keyboard instrument (such as a piano) is played.
3.44 (DTMF Tones) In dual-tone multi-frequency (DTMF) or touch-tone telephone dialing, each number
is represented by a dual-frequency tone. The frequencies for each digit are listed in Chapter 18.
(a) Generate DTMF tones corresponding to the telephone number 487-2550, by sampling the sum of
two sinusoids at the required frequencies at S = 8192 Hz for each digit. Concatenate the signals
by putting 50 zeros between each signal (to represent silence) and listen to the signal using the
Matlab command sound.
(b) Write a Matlab program that generates DTMF signals corresponding to a vector input repre-
senting the digits in a phone number. Use a sampling frequency of S = 8192 Hz.
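A sketch of part (b) in Python (the text's own frequency table is in Chapter 18; the values below are the standard DTMF row/column frequencies):

```python
import math

S = 8192  # sampling rate (Hz)
# Standard DTMF (row, column) frequency pairs in Hz
DTMF = {'1': (697, 1209), '2': (697, 1336), '3': (697, 1477),
        '4': (770, 1209), '5': (770, 1336), '6': (770, 1477),
        '7': (852, 1209), '8': (852, 1336), '9': (852, 1477),
        '*': (941, 1209), '0': (941, 1336), '#': (941, 1477)}

def dtmf(digits, dur=0.2):
    """Two-tone burst per digit, with 50 zero samples of silence between digits."""
    out = []
    for d in digits:
        fl, fh = DTMF[d]
        out += [math.sin(2 * math.pi * fl * n / S) +
                math.sin(2 * math.pi * fh * n / S)
                for n in range(int(dur * S))]
        out += [0.0] * 50
    return out

x = dtmf("4872550")   # the telephone number from part (a)
```

Playing x at S = 8192 Hz (sound(x, 8192) in Matlab, or a WAV writer in Python) should produce recognizable touch-tone dialing.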
Chapter 4
ANALOG SYSTEMS
4.1 Introduction
In its broadest sense, a physical system is an interconnection of devices and elements subject to physical
laws. A system that processes analog signals is referred to as an analog system or continuous-time (CT)
system. The signal to be processed forms the excitation or input to the system. The processed signal is
termed the response or output.
The response of any system is governed by the input and the system details. A system may of course
be excited by more than one input, and this leads to the more general idea of multiple-input systems. We
address only single-input, single-output systems in this text. The study of systems involves the input, the
output, and the system specifications. Conceptually, we can determine any one of these in terms of the other
two. System analysis implies a study of the response subject to known inputs and system formulations.
Known input-output specifications, on the other hand, usually allow us to identify, or synthesize, the system.
System identification or synthesis is much more difficult because many system formulations are possible
for the same input-output relationship.
Most real-world systems are quite complex and almost impossible to analyze quantitatively. Of necessity,
we are forced to use models or abstractions that retain the essential features of the system and simplify
the analysis, while still providing meaningful results. The analysis of systems refers to the analysis of the
models that in fact describe such systems, and it is customary to treat the system and its associated models
synonymously. In the context of signal processing, a system that processes the input signal in some fashion
is also called a filter.
4.1 Introduction 69
Such variables may represent physical quantities or may have no physical significance whatever. Their choice
is governed primarily by what the analysis requires. For example, capacitor voltages and inductor currents
are often used as state variables since they provide an instant measure of the system energy. Any inputs
applied to the system result in a change in the energy or state of the system. All physical systems are, by convention, referenced to a zero-energy state (variously called the ground state, the rest state, the relaxed state, or the zero state) at t = −∞.
The behavior of a system is governed not only by the input but also by the state of the system at the
instant at which the input is applied. The initial values of the state variables define the initial conditions
or initial state. This initial state, which must be known before we can establish the complete system
response, embodies the past history of the system. It allows us to predict the future response due to any
input regardless of how the initial state was arrived at.
4.1.2 Operators
Any equation is based on a set of operations. An operator is a rule or a set of directions (a recipe, if you will) that shows us how to transform one function to another. For example, the derivative operator s ≡ d/dt transforms a function x(t) to y(t) = s{x(t)} = dx(t)/dt. If an operator or a rule of operation is represented by the symbol O, the equation

O{x(t)} = y(t)   (4.1)

implies that if the function x(t) is treated exactly as the operator O requires, we obtain the function y(t). For example, the operation O{ } = 4(d/dt){ } + 6 says that to get y(t), we must take the derivative of x(t), multiply by 4, and then add 6 to the result: 4(d/dt){x(t)} + 6 = 4 dx/dt + 6 = y(t).
If an operation on the sum of two functions is equivalent to the sum of operations applied to each
separately, the operator is said to be additive. In other words,
O{x1 (t) + x2 (t)} = O{x1 (t)} + O{x2 (t)} (for an additive operation) (4.2)
If an operation on Kx(t) is equivalent to K times the operation on x(t), where K is a scalar, the operator is said to be homogeneous. In other words,
O{Kx(t)} = KO{x(t)} (for a homogeneous operation) (4.3)
Together, the two describe the principle of superposition. An operator O is termed a linear operator
if it is both additive and homogeneous. In other words,
O{Ax1 (t) + Bx2 (t)} = AO{x1 (t)} + BO{x2 (t)} (for a linear operation) (4.4)
If an operation performed on a linear combination of x1 (t) and x2 (t) produces the same results as a linear
combination of operations on x1 (t) and x2 (t) separately, the operation is linear. If not, it is nonlinear.
Linearity thus implies superposition. An important concept that forms the basis for the study of linear
systems is that the superposition of linear operators is also linear.
Testing an Operator for Linearity: If an operator fails either the additive or the homogeneity test, it is nonlinear. In all but a few (usually contrived) cases, if an operator passes either the additive or the homogeneity test, it is linear (meaning that it will also pass the other). In other words, only one test, additivity or homogeneity, suffices to confirm linearity (or lack thereof) in most cases.
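The full superposition test of (4.4) can also be checked numerically. In the Python sketch below, the operators and the test points are made up for illustration; this is a spot check at sample times, not a proof:

```python
def is_linear(O, x1, x2, ts, A=2.0, B=-3.0, tol=1e-9):
    """Check O{A x1 + B x2} == A O{x1} + B O{x2} at the sample times ts."""
    combined = O(lambda t: A * x1(t) + B * x2(t))
    return all(abs(combined(t) - (A * O(x1)(t) + B * O(x2)(t))) < tol
               for t in ts)

scale4 = lambda x: (lambda t: 4 * x(t))     # O{x} = 4x, a linear operator
square = lambda x: (lambda t: x(t) ** 2)    # O{x} = x^2, a nonlinear operator

ts = [0.5, 1.0, 2.0]
print(is_linear(scale4, lambda t: t, lambda t: t * t, ts))   # True
print(is_linear(square, lambda t: t, lambda t: t * t, ts))   # False
```

A failed check proves nonlinearity; a passed check at a few points only suggests linearity, matching the caveat about contrived cases above.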
70 Chapter 4 Analog Systems
(d) Consider the derivative operator O{ } = d{ }/dt, which transforms x(t) to x′(t).
We find that AO{x(t)} = Ax′(t) and O{Ax(t)} = [Ax(t)]′ = Ax′(t).
The two are equal, and the derivative operator is homogeneous and thus linear.
Of course, to be absolutely certain, we could use the full force of the linearity relation to obtain
O{Ax1(t) + Bx2(t)} = (d/dt)[Ax1(t) + Bx2(t)] and AO{x1(t)} + BO{x2(t)} = Ax1′(t) + Bx2′(t).
The two results are equal, and we thus confirm the linearity of the derivative operator.
y^(n)(t) + a1 y^(n−1)(t) + ··· + an−1 y^(1)(t) + an y(t) = b0 x^(m)(t) + b1 x^(m−1)(t) + ··· + bm−1 x^(1)(t) + bm x(t)   (4.5)

The order n of the differential equation refers to the order of the highest derivative of the output y(t). It is customary to normalize the coefficient of the highest derivative of y(t) to 1. The coefficients ak and bk may be functions of x(t) and/or y(t) and/or t. Using the derivative operator s^k ≡ d^k/dt^k, with s^0 ≡ 1, we may recast this equation in operator notation as

{s^n + a1 s^(n−1) + ··· + an−1 s + an}y(t) = {b0 s^m + b1 s^(m−1) + ··· + bm−1 s + bm}x(t)

Notation: For low-order systems, we will also use the notation y′(t) ≡ dy(t)/dt, y″(t) ≡ d^2y(t)/dt^2, etc.
4.2 System Classification 71
For a linear system, scaling the input leads to an identical scaling of the output. In particular, this means
zero output for zero input and a linear input-output relation passing through the origin. This is possible only
if every system element obeys a similar relationship at its own terminals. Since independent sources have
terminal characteristics that are constant and do not pass through the origin, a system that includes such
sources is therefore nonlinear. Formally, a linear system must also be relaxed (with zero initial conditions) if
superposition is to hold. We can, however, use superposition even for a system with nonzero initial conditions
(or internal sources) that is otherwise linear. We treat it as a multiple-input system by including the initial
conditions (or internal sources) as additional inputs. The output then equals the superposition of the outputs
due to each input acting alone, and any changes in the input are related linearly to changes in the response.
As a result, the response can be written as the sum of a zero-input response (due to the initial conditions
alone) and the zero-state response (due to the input alone). This is the principle of decomposition,
which allows us to analyze linear systems in the presence of nonzero initial conditions. Both the zero-input
response and the zero-state response obey superposition individually.
(c) y(t) = x(αt) is linear but time varying. With t → αt, we see that AO{x(t)} = A[x(αt)] and O{Ax(t)} = Ax(αt). The two are equal.
To test for time invariance, we find that O{x(t − t0)} = x(αt − t0) but y(t − t0) = x[α(t − t0)]. The two are not equal, and the time-scaling operation is time varying. Figure E4.3C illustrates this for y(t) = x(2t), using a shift of t0 = 2.
Figure E4.3C Illustrating time variance of the system for Example 4.3(c)
(d) y(t) = x(t − 2) is linear and time invariant. The operation t → t − 2 reveals that
AO{x(t)} = A[x(t − 2)] and O{Ax(t)} = Ax(t − 2). The two are equal.
O{x(t − t0)} = x(t − t0 − 2) and y(t − t0) = x(t − t0 − 2). The two are equal.
(b) What can you say about the linearity and time invariance of the four circuits and their governing differential equations shown in Figure E4.4B?
For (a), 2i′(t) + 3i(t) = v(t). This is LTI because all the element values are constants.
For (b), 2i′(t) + 3i^2(t) = v(t). This is nonlinear due to the nonlinear element.
For (c), 2i′(t) + 3i(t) + 4 = v(t). This is nonlinear due to the 4-V internal source.
For (d), 2i′(t) + 3t i(t) = v(t). This is time varying due to the time-varying resistor.
y^(n)(t) + a1 y^(n−1)(t) + ··· + an−1 y^(1)(t) + an y(t) = x(t)   (4.8)
{a0 s^n + a1 s^(n−1) + ··· + an−1 s + an}y(t) = x(t)   (4.9)
Table 4.1 Form of the Natural Response for Analog LTI Systems
Entry   Root of Characteristic Equation   Form of Natural Response
4   Complex, repeated: (α ± jβ)^(p+1)   e^{αt} cos(βt)(A0 + A1 t + A2 t^2 + ··· + Ap t^p) + e^{αt} sin(βt)(B0 + B1 t + B2 t^2 + ··· + Bp t^p)
Table 4.2 Form of the Forced Response for Analog LTI Systems
Note: If the right-hand side (RHS) is e^{αt}, where α is also a root of the characteristic equation repeated r times, the forced response form must be multiplied by t^r.
Entry   Forcing Function (RHS)   Form of Forced Response
5   t   C0 + C1 t
6   t^p   C0 + C1 t + C2 t^2 + ··· + Cp t^p
The forced response arises due to the interaction of the system with the input and thus depends on
both the input and the system details. It satisfies the given differential equation and has the same form
as the input. Table 4.2 summarizes these forms for various types of inputs. The constants in the forced
response can be found uniquely and independently of the natural response or initial conditions simply by
satisfying the given differential equation.
The total response is found by first adding the forced and natural response and then evaluating the
undetermined constants (in the natural component) using the prescribed initial conditions.
Remarks: For stable systems, the natural response is also called the transient response, since it decays to
zero with time. For systems with harmonic or switched harmonic inputs, the forced response is a harmonic
at the input frequency and is termed the steady-state response.
1. Since x(t) = 4e^{−3t}, we select the forced response as yF(t) = Ce^{−3t}. Then
yF′(t) = −3Ce^{−3t}, yF″(t) = 9Ce^{−3t}, and yF″(t) + 3yF′(t) + 2yF(t) = (9C − 9C + 2C)e^{−3t} = 4e^{−3t}.
Thus, C = 2, yF(t) = 2e^{−3t}, and y(t) = yN(t) + yF(t) = K1 e^{−t} + K2 e^{−2t} + 2e^{−3t}.
Using initial conditions, we get y(0) = K1 + K2 + 2 = 3 and y′(0) = −K1 − 2K2 − 6 = 4.
This gives K2 = −11, K1 = 12, and y(t) = (12e^{−t} − 11e^{−2t} + 2e^{−3t})u(t).
2. Since x(t) = 4e^{−2t} has the same form as a term of yN(t), we must choose yF(t) = Cte^{−2t}.
Then yF′(t) = −2Cte^{−2t} + Ce^{−2t}, and yF″(t) = −2C(1 − 2t)e^{−2t} − 2Ce^{−2t}. Thus,
yF″(t) + 3yF′(t) + 2yF(t) = (−2C + 4Ct − 2C − 6Ct + 3C + 2Ct)e^{−2t} = 4e^{−2t}. This gives C = −4.
Thus, yF(t) = −4te^{−2t}, and y(t) = yN(t) + yF(t) = K1 e^{−t} + K2 e^{−2t} − 4te^{−2t}.
Using initial conditions, we get y(0) = K1 + K2 = 3 and y′(0) = −K1 − 2K2 − 4 = 4.
Thus, K2 = −11, K1 = 14, and y(t) = (14e^{−t} − 11e^{−2t} − 4te^{−2t})u(t).
EXAMPLE 4.7 (Zero-Input and Zero-State Response for the Single-Input Case)
Let y″(t) + 3y′(t) + 2y(t) = x(t) with x(t) = 4e^{−3t} and initial conditions y(0) = 3 and y′(0) = 4.
Find its zero-input response and zero-state response.
The characteristic equation is s^2 + 3s + 2 = 0 with roots s1 = −1 and s2 = −2.
Its natural response is yN(t) = K1 e^{s1 t} + K2 e^{s2 t} = K1 e^{−t} + K2 e^{−2t}.
1. The zero-input response is found from yN(t) and the prescribed initial conditions:
2. Similarly, yzs(t) is found from the general form of y(t) but with zero initial conditions.
Since x(t) = 4e^{−3t}, we select the forced response as yF(t) = Ce^{−3t}.
Then, yF′(t) = −3Ce^{−3t}, yF″(t) = 9Ce^{−3t}, and yF″(t) + 3yF′(t) + 2yF(t) = (9C − 9C + 2C)e^{−3t} = 4e^{−3t}.
Thus, C = 2, yF(t) = 2e^{−3t}, and yzs(t) = K1 e^{−t} + K2 e^{−2t} + 2e^{−3t}.
With zero initial conditions, we obtain
yzs(0) = K1 + K2 + 2 = 0
yzs′(0) = −K1 − 2K2 − 6 = 0
3. The total response is the sum of yzs(t) and yzi(t):
y^(n)(t) + a1 y^(n−1)(t) + ··· + an y(t) = b0 x^(m)(t) + b1 x^(m−1)(t) + ··· + bm x(t)   (4.12)
3. The ZIR is found from yzi(t) = C1 e^{−t} + C2 e^{−2t}, with y(0) = 0 and y′(0) = 1. This yields
yzi(0) = C1 + C2 = 0 and yzi′(0) = −C1 − 2C2 = 1. We find C1 = 1 and C2 = −1. Then,
yzi(t) = e^{−t} − e^{−2t}
4. Finally, the total response is y(t) = yzs(t) + yzi(t) = −e^{−t} + 11e^{−2t} − 10e^{−3t}, t ≥ 0.
Impulse response h(t): The output of a relaxed LTI system if the input is a unit impulse δ(t).
Step response s(t): The output of a relaxed LTI system if the input is a unit step u(t).
s(t) = (1/a)(1 − e^{−at}) u(t)   (4.15)

The impulse response h(t) equals the derivative of the step response. Thus,

h(t) = s′(t) = d/dt [ (1/a)(1 − e^{−at}) u(t) ] = e^{−at} u(t)   (4.16)
Similarly, it turns out that the impulse response of the second-order system y″(t) + a1 y′(t) + a2 y(t) = x(t) can be found as the solution to the homogeneous equation y″(t) + a1 y′(t) + a2 y(t) = 0, with initial conditions y(0) = 0 and y′(0) = 1. These results can be generalized to higher-order systems. For the nth-order, single-input system given by
the impulse response h(t) is found as the solution to the homogeneous equation

h^(n)(t) + a1 h^(n−1)(t) + ··· + an h(t) = 0,   h^(n−1)(0) = 1 (and all other ICs zero)   (4.18)

Note that the highest-order initial condition is h^(n−1)(0) = 1 and all other initial conditions are zero.
and compute its impulse response h0(t) from the homogeneous equation

h0^(n)(t) + a1 h0^(n−1)(t) + ··· + an h0(t) = 0,   h0^(n−1)(0) = 1   (4.20)
(b) Find the impulse response of the system y′(t) + 2y(t) = x′(t) + 3x(t).
The impulse response h0(t) of the single-input system y′(t) + 2y(t) = x(t) is h0(t) = e^{−2t}u(t).
The impulse response of the given system is thus
h(t) = h0′(t) + 3h0(t) = δ(t) − 2e^{−2t}u(t) + 3e^{−2t}u(t) = δ(t) + e^{−2t}u(t).
(c) Find the impulse response of the system y″(t) + 3y′(t) + 2y(t) = x″(t).
The impulse response h0(t) of the system y″(t) + 3y′(t) + 2y(t) = x(t) is (from Example 4.9)
h0(t) = (e^{−t} − e^{−2t})u(t). The required impulse response is then h(t) = h0″(t). We compute:
h0′(t) = (−e^{−t} + 2e^{−2t})u(t)
h(t) = h0″(t) = d/dt [h0′(t)] = (e^{−t} − 4e^{−2t})u(t) + δ(t)
4.6 System Stability 85
y^(n)(t) + a1 y^(n−1)(t) + ··· + an y(t) = b0 x^(m)(t) + b1 x^(m−1)(t) + ··· + bm x(t),   m ≤ n   (4.22)
the conditions for BIBO stability involve the roots of the characteristic equation. A necessary and sufficient condition for BIBO stability of an LTI system is that every root of its characteristic equation must have a negative real part (and the highest derivative of the input must not exceed that of the output). This
criterion is based on the results of Tables 4.1 and 4.2. Roots with negative real parts ensure that the natural
(and zero-input) response always decays with time (see Table 4.1), and the forced (and zero-state) response
always remains bounded for every bounded input. Roots with zero real parts make the system unstable.
Simple (non-repeated) roots with zero real parts produce a constant (or sinusoidal) natural response which
is bounded, but if the input is also a constant (or a sinusoid at the same frequency), the forced response is a
ramp or growing sinusoid (see Table 4.2) and hence unbounded. Repeated roots with zero real parts result
in a natural response that is itself a growing sinusoid or polynomial and thus unbounded.
If the highest derivative of the input exceeds (not just equals) that of the output, the system is unstable. For example, if y(t) = dx(t)/dt, a step input (which is bounded) produces an impulse output (which is unbounded at t = 0). In the next chapter, we shall see that the stability condition described here is entirely equivalent
to having an LTI system whose impulse response h(t) is absolutely integrable. The stability of nonlinear or
time-varying systems must usually be checked by other means.
(b) The system y″(t) + 3y′(t) = x(t) is unstable. The roots of its characteristic equation s^2 + 3s = 0 are s1 = 0 and s2 = −3, and one of the roots does not have a negative real part. Although its natural response is bounded (it has the form yN(t) = Au(t) + Be^{−3t}u(t)), the input x(t) = u(t) produces a forced response of the form Ctu(t), which becomes unbounded.
(c) The system y‴(t) + 3y″(t) = x(t) is unstable. The roots of its characteristic equation s^3 + 3s^2 = 0 are s1 = s2 = 0, and s3 = −3. They result in the natural response yN(t) = Au(t) + Btu(t) + Ce^{−3t}u(t), which becomes unbounded.
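The root test used in these examples is mechanical enough to automate. A hedged Python sketch for second-order characteristic equations (the helper names are my own):

```python
import cmath

def quad_roots(a, b, c):
    """Roots of the characteristic equation a*s^2 + b*s + c = 0."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

def bibo_stable(roots):
    """BIBO stability requires every root to have a strictly negative real part."""
    return all(r.real < 0 for r in roots)

print(bibo_stable(quad_roots(1, 3, 2)))   # s = -1, -2: stable -> True
print(bibo_stable(quad_roots(1, 3, 0)))   # s = 0, -3: root at s = 0 -> False
```

The strict inequality matters: a simple root at s = 0, as in example (b), already fails the test even though the natural response itself stays bounded.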
The impulse response h(t) equals the derivative of the step response. Thus,

h(t) = s′(t) = (1/τ) e^{−t/τ} u(t)   (impulse response)   (4.25)
Performance Measures
The time-domain performance of systems is often measured in terms of their impulse response and/or step
response. For an exponential signal Ae^{−t/τ}, the smaller the time constant τ, the faster is the decay. For first-order systems, the time constant τ is a useful measure of the speed of the response, as illustrated in Figure 4.1.
The smaller the time constant, the faster the system responds, and the more the output resembles
(matches) the applied input. An exponential decays to less than 1% of its peak value in about 5τ. As a result, the step response is also within 1% of its final value in about 5τ. This forms the basis for the observation that it takes about 5τ to reach steady state. For higher-order systems, the rate of decay and the
4.7 Application-Oriented Examples 87
time to reach steady state depend on the largest time constant τmax (corresponding to the slowest decay) associated with the exponential terms in its impulse response. A smaller τmax implies a faster response and
a shorter time to reach steady state. The speed of response is also measured by the rise time, which is often
defined as the time it takes for the step response to rise from 10% to 90% of its final value. Another useful
measure of system performance is the delay time, which is often defined as the time it takes for the step
response to reach 50% of its final value. These measures are also illustrated in Figure 4.1. Another measure
is the settling time, defined as the time it takes for the step response to settle to within a small fraction
(typically, 5%) of its final value.
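For the first-order step response s(t) = 1 − e^{−t/τ}, these measures follow in closed form, since s(t) = level occurs at t = −τ ln(1 − level). A short Python check (τ = 1 assumed for illustration):

```python
import math

tau = 1.0  # time constant (assumed)

def t_at(level):
    """Time at which s(t) = 1 - exp(-t/tau) first reaches the given level."""
    return -tau * math.log(1 - level)

rise_time = t_at(0.90) - t_at(0.10)   # 10%-90% rise time = tau*ln(9), about 2.2*tau
delay_time = t_at(0.50)               # 50% delay time    = tau*ln(2), about 0.69*tau
settling_5 = t_at(0.95)               # 5% settling time  = tau*ln(20), about 3*tau
```

These closed-form values are consistent with the 5τ rule of thumb above: reaching within 1% of the final value takes −τ ln(0.01) ≈ 4.6τ.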
Figure E4.12 Circuits for Example 4.12: a second-order Bessel filter, a second-order Butterworth filter, and a third-order Butterworth filter
CHAPTER 4 PROBLEMS
DRILL AND REINFORCEMENT
4.1 (Operators) Which of the following describe linear operators?
(a) O{ } = 4{ }   (b) O{ } = 4{ } + 3   (c) y(t) = ∫^t x(λ) dλ
(d) O{ } = sin{ }   (e) y(t) = x(4t)   (f) y(t) = 4 dx(t)/dt + 3x(t)
4.2 (System Classification) In each of the following systems, x(t) is the input and y(t) is the output.
Classify each system in terms of linearity, time invariance, memory, and causality.
(a) y″(t) + 3y′(t) = 2x′(t) + x(t)   (b) y″(t) + 3y(t)y′(t) = 2x′(t) + x(t)
(c) y″(t) + 3tx(t)y′(t) = 2x′(t)   (d) y″(t) + 3y′(t) = 2x^2(t) + x(t + 2)
(e) y(t) + 3 = x^2(t) + 2x(t)   (f) y(t) = 2x(t + 1) + 5
(g) y″(t) + e^t y′(t) = |x′(t − 1)|   (h) y(t) = x^2(t) + 2x(t + 1)
(i) y″(t) + cos(2t)y′(t) = x′(t + 1)   (j) y(t) + t ∫^t y(t) dt = 2x(t)
(k) y′(t) + ∫_0^t y(t) dt = |x′(t)| − x(t)   (l) y′(t) + t ∫_0^{t+1} y(t) dt = x′(t) + 2
4.3 (Classification) Classify the following systems in terms of their linearity, time invariance, causality,
and memory.
(a) The modulation system y(t) = x(t)cos(2πf0 t).
(b) The modulation system y(t) = [A + x(t)]cos(2πf0 t).
(c) The modulation system y(t) = cos[2πf0 t x(t)].
(d) The modulation system y(t) = cos[2πf0 t + x(t)].
(e) The sampling system y(t) = x(t) Σ_{k=−∞}^{∞} δ(t − kts).
4.4 (Forced Response) Evaluate the forced response of the following systems.
(a) y′(t) + 2y(t) = u(t)   (b) y′(t) + 2y(t) = cos(t)u(t)
(c) y′(t) + 2y(t) = e^{−t}u(t)   (d) y′(t) + 2y(t) = e^{−2t}u(t)
(e) y′(t) + 2y(t) = tu(t)   (f) y′(t) + 2y(t) = te^{−2t}u(t)
4.5 (Forced Response) Evaluate the forced response of the following systems.
(a) y″(t) + 5y′(t) + 6y(t) = 3u(t)   (b) y″(t) + 5y′(t) + 6y(t) = 6e^{−t}u(t)
(c) y″(t) + 5y′(t) + 6y(t) = 5 cos(t)u(t)   (d) y″(t) + 5y′(t) + 6y(t) = 2e^{−2t}u(t)
(e) y″(t) + 5y′(t) + 6y(t) = 2tu(t)   (f) y″(t) + 5y′(t) + 6y(t) = (6e^{−t} + 2e^{−2t})u(t)
4.6 (Steady-State Response) The forced response of a system to sinusoidal inputs is termed the steady-
state response. Evaluate the steady-state response of the following systems.
(a) y′(t) + 5y(t) = 2u(t)   (b) y′(t) + y(t) = cos(t)u(t)
(c) y′(t) + 3y(t) = sin(t)u(t)   (d) y′(t) + 4y(t) = cos(t) + sin(2t)
(e) y″(t) + 5y′(t) + 6y(t) = cos(3t)u(t)   (f) y″(t) + 4y′(t) + 4y(t) = cos(2t)u(t)
4.7 (Zero-State Response) Evaluate the zero-state response of the following systems.
(a) y′(t) + 2y(t) = u(t)   (b) y′(t) + y(t) = cos(t)u(t)
(c) y′(t) + y(t) = r(t)   (d) y′(t) + 3y(t) = e^{−t}u(t)
(e) y′(t) + 2y(t) = e^{−2t}u(t)   (f) y′(t) + 2y(t) = e^{−2t}cos(t)u(t)
4.8 (Zero-State Response) Evaluate the zero-state response of the following systems.
(a) y″(t) + 5y′(t) + 6y(t) = 6u(t)   (b) y″(t) + 4y′(t) + 3y(t) = 2e^{−2t}u(t)
(c) y″(t) + 2y′(t) + 2y(t) = 2e^{−t}u(t)   (d) y″(t) + 4y′(t) + 5y(t) = cos(t)u(t)
(e) y″(t) + 4y′(t) + 3y(t) = r(t)   (f) y″(t) + 5y′(t) + 4y(t) = (2e^{−t} + 2e^{−3t})u(t)
4.9 (System Response) Evaluate the natural, forced, zero-state, zero-input, and total response of the
following systems.
(a) y′(t) + 5y(t) = u(t)   y(0) = 2
(b) y′(t) + 3y(t) = 2e^{−2t}u(t)   y(0) = 1
(c) y′(t) + 4y(t) = 8tu(t)   y(0) = 2
(d) y′(t) + 2y(t) = 2 cos(2t)u(t)   y(0) = 4
(e) y′(t) + 2y(t) = 2e^{−2t}u(t)   y(0) = 6
(f) y′(t) + 2y(t) = 2e^{−2t}cos(t)u(t)   y(0) = 8
4.10 (System Response) Evaluate the response y(t) of the following systems.
(a) y′(t) + y(t) = 2x′(t) + x(t)   x(t) = 4e^{−2t}u(t)   y(0) = 2
(b) y′(t) + 3y(t) = 3x′(t)   x(t) = 4e^{−2t}u(t)   y(0) = 0
(c) y′(t) + 4y(t) = x′(t) − x(t)   x(t) = 4u(t)   y(0) = 6
(d) y′(t) + 2y(t) = x(t) + 2x(t − 1)   x(t) = 4u(t)   y(0) = 0
(e) y′(t) + 2y(t) = x′(t) − 2x(t − 1)   x(t) = 2e^{−t}u(t)   y(0) = 0
(f) y′(t) + 2y(t) = x′(t) − 2x′(t − 1) + x(t − 2)   x(t) = 2e^{−t}u(t)   y(0) = 4
4.11 (System Response) For each of the following, evaluate the natural, forced, zero-state, zero-input, and total response. Assume y′(0) = 1 and all other initial conditions zero.
(a) y″(t) + 5y′(t) + 6y(t) = 6u(t)   y(0) = 0   y′(0) = 1
(b) y″(t) + 5y′(t) + 6y(t) = 2e^{−t}u(t)   y(0) = 0   y′(0) = 1
(c) y″(t) + 4y′(t) + 3y(t) = 36tu(t)   y(0) = 0   y′(0) = 1
(d) y″(t) + 4y′(t) + 4y(t) = 2e^{−2t}u(t)   y(0) = 0   y′(0) = 1
(e) y″(t) + 4y′(t) + 4y(t) = 8 cos(2t)u(t)   y(0) = 0   y′(0) = 1
(f) [(s + 1)^2(s + 2)]y(t) = e^{−2t}u(t)   y(0) = 0   y′(0) = 1   y″(0) = 0
4.12 (System Response) Evaluate the response y(t) of the following systems.
(a) y″(t) + 3y′(t) + 2y(t) = 2x′(t) + x(t)   x(t) = 4u(t)   y(0) = 2   y′(0) = 1
(b) y″(t) + 4y′(t) + 3y(t) = 3x′(t)   x(t) = 4e^{−2t}u(t)   y(0) = 0   y′(0) = 0
(c) y″(t) + 4y′(t) + 4y(t) = x′(t) − x(t)   x(t) = 4u(t)   y(0) = 6   y′(0) = 3
(d) y″(t) + 2y′(t) + 2y(t) = x(t) + 2x(t − 1)   x(t) = 4u(t)   y(0) = 0   y′(0) = 0
(e) y″(t) + 5y′(t) + 6y(t) = x′(t) − 2x(t − 1)   x(t) = 2e^{−t}u(t)   y(0) = 0   y′(0) = 0
(f) y″(t) + 5y′(t) + 4y(t) = x′(t) − 2x′(t − 1)   x(t) = 3e^{−t}u(t)   y(0) = 4   y′(0) = 4
4.13 (Impulse Response) Find the impulse response of the following systems.
(a) y′(t) + 3y(t) = x(t) (b) y′(t) + 4y(t) = 2x(t)
(c) y′(t) + 2y(t) = x′(t) − 2x(t) (d) y′(t) + y(t) = x′(t) − x(t)
Chapter 4 Problems 91
4.14 (Impulse Response) Find the impulse response of the following systems.
(a) y″(t) + 5y′(t) + 4y(t) = x(t) (b) y″(t) + 4y′(t) + 4y(t) = 2x(t)
(c) y″(t) + 4y′(t) + 3y(t) = 2x′(t) − x(t) (d) y″(t) + 2y′(t) + y(t) = x″(t) + x′(t)
4.15 (Stability) Which of the following systems are stable, and why?
(a) y′(t) + 4y(t) = x(t) (b) y′(t) − 4y(t) = 3x(t)
(c) y′(t) + 4y(t) = x′(t) + 3x(t) (d) y″(t) + 5y′(t) + 4y(t) = 6x(t)
(e) y′(t) + 4y(t) = 2x′(t) − x(t) (f) y″(t) + 5y′(t) + 6y(t) = x′(t)
(g) y″(t) − 5y′(t) + 4y(t) = x(t) (h) y″(t) + 2y′(t) − 3y(t) = 2x′(t)
4.16 (Impulse Response) The voltage input to a series RC circuit with a time constant τ is x(t) = (1/τ)e^−t/τ u(t).
4.17 (System Response) The step response of an LTI system is given by s(t) = (1 − e^−t)u(t).
(a) Establish its impulse response h(t) and sketch both s(t) and h(t).
(b) Evaluate and sketch the response y(t) to the input x(t) = rect(t − 0.5).
4.19 (System Classification) Investigate the linearity, time invariance, memory, causality, and stability
of the following operations.
(a) y(t) = y(0) + ∫[0, t] x(λ) dλ (b) y(t) = ∫[0, t] x(λ) dλ, t > 0
(c) y(t) = ∫[t−1, t+1] x(λ) dλ (d) y(t) = ∫[0, t+α] x(λ + 2) dλ
(e) y(t) = ∫[t, t+α] x(λ − 2) dλ (f) y(t) = ∫[t−1, t] x(λ + 1) dλ
4.20 (Classification) Check the following for linearity, time invariance, memory, causality, and stability.
(a) The time-scaling system y(t) = x(2t)
(b) The folding system y(t) = x(−t)
(c) The time-scaling system y(t) = x(0.5t)
(d) The sign-inversion system y(t) = sgn[x(t)]
(e) The rectifying system y(t) = |x(t)|
4.21 (Classification) Consider the two systems (1) y(t) = x(αt) and (2) y(t) = x(t + α).
(a) For what values of α is each system linear?
(b) For what values of α is each system causal?
(c) For what values of α is each system time invariant?
(d) For what values of α is each system instantaneous?
92 Chapter 4 Analog Systems
4.22 (System Response) Consider the relaxed system y′(t) + y(t) = x(t).
(a) The input is x(t) = u(t). What is the response?
(b) Use the result of part (a) (and superposition) to find the response of this system to the input
x1 (t) shown in Figure P4.22.
(c) The input is x(t) = tu(t). What is the response?
(d) Use the result of part (c) (and superposition) to find the response of this system to the input
x2 (t) shown in Figure P4.22.
(e) How are the results of parts (a) and (b) related to the results of parts (c) and (d)?
Figure P4.22 Input signals x1(t) and x2(t) for Problem 4.22
4.23 (System Response) Consider the relaxed system y′(t) + (1/τ)y(t) = x(t).
(a) What is the response of this system to the unit step x(t) = u(t)?
(b) What is the response of this system to the unit impulse x(t) = δ(t)?
(c) What is the response of this system to the rectangular pulse x(t) = u(t) − u(t − α)? Under what
conditions for τ and α will the response resemble (be a good approximation to) the input?
4.24 (System Response) It is known that the response of the system y′(t) + αy(t) = x(t), α ≠ 0, is given
by y(t) = (5 + 3e^−2t)u(t).
(a) Identify the natural and forced response.
(b) Identify the values of α and y(0).
(c) Identify the zero-input and zero-state response.
(d) Identify the input x(t).
4.25 (System Response) It is known that the response of the system y′(t) + αy(t) = x(t) is given by
y(t) = (5e^−t + 3e^−2t)u(t).
(a) Identify the zero-input and zero-state response.
(b) What is the zero-input response of the system y′(t) + αy(t) = x(t) if y(0) = 10?
(c) What is the response of the relaxed system y′(t) + αy(t) = x(t − 2)?
(d) What is the response of the relaxed system y′(t) + αy(t) = x′(t) + 2x(t)?
4.26 (System Response) It is known that the response of the system y′(t) + αy(t) = x(t) is given by
y(t) = (5 + 2t)e^−3t u(t).
(a) Identify the zero-input and zero-state response.
(b) What is the zero-input response of the system y′(t) + αy(t) = x(t) if y(0) = 10?
(c) What is the response of the relaxed system y′(t) + αy(t) = x(t − 2)?
(d) What is the response of the relaxed system y′(t) + αy(t) = 2x(t) + x′(t)?
(e) What is the complete response of the system y′(t) + αy(t) = x′(t) + 2x(t) if y(0) = 4?
4.27 (Impulse Response) Consider the relaxed system y′(t) + (1/τ)y(t) = x(t).
(a) What is the impulse response of this system?
(b) What is the response of this system to the rectangular pulse x(t) = (1/α)[u(t) − u(t − α)]? Show
that as α → 0, we obtain the system impulse response h(t).
(c) What is the response of this system to the exponential input x(t) = (1/α)e^−t/α u(t)? Show that as
α → 0, we obtain the system impulse response h(t).
4.28 (System Response) Find the response of the following systems for t ≥ 0.
(a) y′(t) + 2y(t) = 2e^−(t−1) u(t − 1) y(0) = 5
(b) y′(t) + 2y(t) = e^−2t u(t) + 2e^−(t−1) u(t − 1) y(0) = 5
(c) y′(t) + 2y(t) = te^−t + 2e^−(t−1) u(t − 1) y(0) = 5
(d) y′(t) + 2y(t) = cos(2t) + 2e^−(t−1) u(t − 1) y(0) = 5
4.29 (Impulse Response) Find the step response and impulse response of each circuit in Figure P4.29.
Figure P4.29 Circuits for Problem 4.29 (Circuits 1 through 6: RC and RL configurations)
4.30 (Impulse Response) The input-output relation for an LTI system is shown in Figure P4.30. What
is the impulse response h(t) of this system?
Figure P4.30 Input x(t) and output y(t) for Problem 4.30
4.31 (System Response) Consider two relaxed RC circuits with τ1 = 0.5 s and τ2 = 5 s. The input to
both is the rectangular pulse x(t) = 5 rect(t − 0.5) V. The output is the capacitor voltage.
(a) Find and sketch the outputs y1 (t) and y2 (t) of the two circuits.
(b) At what time t > 0 does the output of both systems attain the same value?
4.32 (Classification and Stability) Argue for or against the following statements, assuming relaxed
systems and constant element values. You may validate your arguments using simple circuits.
(a) A system with only resistors is always instantaneous and stable.
(b) A system with only inductors and/or capacitors is always stable.
(c) An RLC system with at least one resistor is always linear, causal, and stable.
4.33 (Differential Equations from Impulse Response) Though there is an easy way of obtaining a
system differential equation from its impulse response using transform methods, we can also obtain such
a representation by working in the time domain itself. Let a system be described by h(t) = e^−t u(t). If
we compute h′(t) = δ(t) − e^−t u(t), we find that h′(t) + h(t) = δ(t), and the system differential equation
follows as y′(t) + y(t) = x(t). Using this idea, determine the system differential equation corresponding
to each impulse response h(t).
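The time-domain idea above can also be checked numerically: approximate δ(t) by a tall, narrow pulse and integrate y′(t) + y(t) = x(t); the computed response should approach h(t) = e^−t u(t). A minimal sketch (forward Euler integration is an illustrative choice here, not a method prescribed by the text):

```python
import math

# Approximate the impulse delta(t) by a tall narrow pulse (height 1/dt,
# width dt), then integrate y'(t) + y(t) = x(t) by forward Euler.
# The computed response should approach h(t) = e^(-t) u(t).
dt = 1e-4
T = 2.0
n = int(T / dt)
y = 0.0
ys = []
for k in range(n):
    x = 1.0 / dt if k == 0 else 0.0   # impulse approximation at t = 0
    y += dt * (x - y)                 # Euler step for y' = x - y
    ys.append(y)

t_index = int(1.0 / dt)               # compare at t = 1 s
print(ys[t_index], math.exp(-1.0))    # the two values should be close
```

With this step size the numerical response agrees with e^−t to a few parts in 10⁵, supporting the claim that h′(t) + h(t) = δ(t).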
4.34 (Inverse Systems) If the input to a system is x0(t) and its response is y0(t), the inverse system
is defined as a system that recovers x0(t) when its input is y0(t). Inverse systems are often used to
undo the effects of measurement systems such as transducers. The system equation of the inverse of
many LTI systems can be found simply by switching the input and output. For example, if the system
equation is y(t) = x(t − 3), the inverse system is x(t) = y(t − 3) (or y(t) = x(t + 3), by time invariance).
Find the inverse of each system and determine whether the inverse system is stable.
(a) y′(t) + 2y(t) = x(t) (b) y″(t) + 2y′(t) + y(t) = x′(t) + 2x(t)
4.35 (Inverse Systems) A requirement for a system to have an inverse is that unique inputs must produce
unique outputs. Thus, the system y(t) = |x(t)| does not have an inverse because of the sign ambiguity.
Determine which of the following systems are invertible and, for those that are, find the inverse system.
(a) y(t) = x²(t) (b) y(t) = e^x(t) (c) y(t) = cos[x(t)]
(d) y(t) = e^jx(t) (e) y(t) = x(t − 2) (f) y′(t) + y(t) = x(t)
4.36 (System Response in Symbolic Form) The ADSP routine sysresp1 yields a symbolic result for
the system response (see Chapter 21 for examples of its usage). Consider the system y′(t) + 2y(t) =
2x(t). Use sysresp1 to obtain its
(a) Step response.
(b) Impulse response.
(c) Zero-state response to x(t) = 4e^−3t u(t).
(d) Complete response to x(t) = 4e^−3t u(t) with y(0) = 5.
4.37 (System Response) Use the ADSP routine sysresp1 to find the step response and impulse response
of the following filters and plot each result over 0 ≤ t ≤ 4. Compare the features of the step response
of each filter. Compare the features of the impulse response of each filter.
(a) y′(t) + y(t) = x(t) (a first-order lowpass filter)
(b) y″(t) + √2 y′(t) + y(t) = x(t) (a second-order Butterworth lowpass filter)
4.38 (Rise Time and Settling Time) For systems whose step response rises to a nonzero final value,
the rise time is commonly defined as the time it takes to rise from 10% to 90% of the final value. The
settling time is another measure for such signals. The 5% settling time, for example, is defined as the
time it takes for a signal to settle to within 5% of its final value. For each system, use the ADSP
routine sysresp1 to find the impulse response and step response and plot the results over 0 ≤ t ≤ 4.
For those systems whose step response rises toward a nonzero final value, use the ADSP routine trbw
to numerically estimate the rise time and the 5% settling time.
(a) y′(t) + y(t) = x(t) (a first-order lowpass filter)
(b) y″(t) + √2 y′(t) + y(t) = x(t) (a second-order Butterworth lowpass filter)
(c) y″(t) + y′(t) + y(t) = x′(t) (a bandpass filter)
(d) y‴(t) + 2y″(t) + 2y′(t) + y(t) = x(t) (a third-order Butterworth lowpass filter)
4.39 (System Response) Consider the system y″(t) + 4y′(t) + Cy(t) = x(t).
(a) Use sysresp1 to obtain its step response and impulse response for C = 3, 4, 5 and plot each
response over an appropriate time interval.
(b) How does the step response differ for each value of C? For what value of C would you expect
the smallest rise time? For what value of C would you expect the smallest 3% settling time?
(c) Confirm your predictions in the previous part by numerically estimating the rise time and settling
time, using the ADSP routine trbw.
4.40 (Steady-State Response in Symbolic Form) The ADSP routine ssresp yields a symbolic ex-
pression for the steady-state response to sinusoidal inputs (see Chapter 21 for examples of its usage).
Find the steady-state response to the input x(t) = 2 cos(3t − π/3) for each of the following systems and
plot the results over 0 ≤ t ≤ 3, using a time step of 0.01 s.
(a) y′(t) + αy(t) = 2x(t), for α = 1, 2
(b) y″(t) + 4y′(t) + Cy(t) = x(t), for C = 3, 4, 5
4.41 (Numerical Simulation of Analog Systems) The ADSP routine ctsim returns estimates of
the system response using numerical integration such as Simpson's rule and Runge-Kutta methods.
Consider the differential equation y′(t) + αy(t) = x(t). In the following, use the second-order Runge-
Kutta method throughout.
(a) Let x(t) = rect(t − 0.5) and α = 1. Evaluate its response y(t) analytically. Use ctsim to evaluate
its response y1(t) over 0 ≤ t ≤ 3, using a time step of 0.1 s. Plot both results on the same graph
and compare.
(b) Let x(t) = sin(t), 0 ≤ t ≤ π. Use ctsim to evaluate its response y1(t) over 0 ≤ t ≤ 6, using
α = 1, 3, 10 and a time step of 0.02 s. Plot each response along with the input x(t) on the same
graph. Does the response begin to resemble the input as α is increased? Should it? Explain.
(c) Let x(t) = sin(t), 0 ≤ t ≤ π. Use ctsim to evaluate its response y1(t) over 0 ≤ t ≤ 6, using
α = 100 and a time step of 0.02 s. Plot the response along with the input x(t) on the same graph.
Now change the time step to 0.03 s. What is the response? To explain what is happening, find
and plot the response for time steps of 0.0201 s, 0.0202 s, and 0.0203 s. Describe what happens
to the computed response and why.
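Since the ADSP routine ctsim itself is not reproduced here, a minimal stand-in can be written directly. The sketch below implements Heun's method (a second-order Runge-Kutta scheme), assuming the first-order system y′(t) + αy(t) = x(t) with α = 1 and the pulse input of part (a):

```python
import math

def rk2(x, alpha, dt, n_steps):
    """Integrate y'(t) = x(t) - alpha*y(t) from y(0) = 0 using Heun's
    method (a second-order Runge-Kutta scheme)."""
    y = 0.0
    out = [y]
    for k in range(n_steps):
        t = k * dt
        f1 = x(t) - alpha * y
        yp = y + dt * f1                  # predictor (Euler step)
        f2 = x(t + dt) - alpha * yp
        y = y + 0.5 * dt * (f1 + f2)      # corrector (trapezoidal average)
        out.append(y)
    return out

# Part (a) setup: x(t) = rect(t - 0.5), a unit pulse on 0 <= t < 1
x = lambda t: 1.0 if 0.0 <= t < 1.0 else 0.0
y = rk2(x, alpha=1.0, dt=0.001, n_steps=3000)

# Analytical value while the pulse is on: y(t) = 1 - e^(-t), so y(1) = 1 - 1/e
print(y[1000], 1.0 - math.exp(-1.0))
```

For a fast system such as α = 100, this explicit scheme remains stable only for time steps below roughly 2/α = 0.02 s, which is exactly the behavior probed in part (c).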
Chapter 5
DISCRETE-TIME SYSTEMS
5.2 System Classification 97
Together, the two describe the principle of superposition. An operator O is termed a linear operator
if it is both additive and homogeneous:
O{Ax1 [n] + Bx2 [n]} = AO{x1 [n]} + BO{x2 [n]} (for a linear operation) (5.4)
Otherwise, it is nonlinear. In many instances, it suffices to test only for homogeneity (or additivity) to
confirm the linearity of an operation (even though one does not imply the other). An important concept
that forms the basis for the study of linear systems is that the superposition of linear operators is also linear.
The order N describes the output term with the largest delay. It is customary to normalize the leading
coefficient to unity.
5.2.1 Linearity
A linear system is one for which superposition applies and implies that the system is relaxed (with zero initial
conditions) and the system equation involves only linear operators. However, we can use superposition even
for a system with nonzero initial conditions that is otherwise linear. We treat it as a multiple-input system by
including the initial conditions as additional inputs. The output then equals the superposition of the outputs
due to each input acting alone, and any changes in the input are related linearly to changes in the response.
As a result, its response can be written as the sum of a zero-input response (due to the initial conditions
alone) and the zero-state response (due to the input alone). This is the principle of decomposition,
which allows us to analyze linear systems in the presence of nonzero initial conditions. Both the zero-input
response and the zero-state response obey superposition.
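The decomposition principle is easy to verify numerically. A minimal Python sketch, assuming (for illustration only) the first-order system y[n] = 0.5y[n − 1] + x[n]:

```python
def respond(x, y_init, a=0.5):
    """Response of y[n] = a*y[n-1] + x[n], starting from y[-1] = y_init."""
    y, out = y_init, []
    for v in x:
        y = a * y + v
        out.append(y)
    return out

x = [1.0] * 20                      # step input
zero = [0.0] * 20
total = respond(x, y_init=3.0)      # nonzero input and initial condition
zsr = respond(x, y_init=0.0)        # zero-state response (relaxed system)
zir = respond(zero, y_init=3.0)     # zero-input response (input removed)

# Decomposition: total response = ZIR + ZSR, sample by sample
ok = all(abs(t - (i + s)) < 1e-12 for t, i, s in zip(total, zir, zsr))
print(ok)
```

The same check fails for a nonlinear recursion (try squaring the output term), which is why decomposition is reserved for systems that are otherwise linear.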
98 Chapter 5 Discrete-Time Systems
(c) y[n] = x[2n] is linear but time varying. The operation n → 2n reveals that
AO{x[n]} = A(x[2n]), and O{Ax[n]} = (Ax[2n]). The two are equal.
O{x[n − n0]} = x[2n − n0], but y[n − n0] = x[2(n − n0)]. The two are not equal.
(d) y[n] = x[n − 2] is linear and time invariant. The operation n → n − 2 reveals that
AO{x[n]} = A(x[n − 2]), and O{Ax[n]} = (Ax[n − 2]). The two are equal.
O{x[n − n0]} = x[n − n0 − 2], and y[n − n0] = x[n − n0 − 2]. The two are equal.
1. Terms containing products of the input and/or output make a system equation nonlinear. A constant
term also makes a system equation nonlinear.
2. Coecients of the input or output that are explicit functions of n make a system equation time varying.
Time-scaled inputs or outputs such as y[2n] also make a system equation time varying.
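These rules can be checked by brute force: shift the input, and compare against the shifted output. A small Python sketch contrasting the time-varying system y[n] = x[2n] with the time-invariant delay y[n] = x[n − 2] (the test signal and shift amount are made-up values):

```python
def decimate(x):          # y[n] = x[2n]
    return [x[2 * n] for n in range(len(x) // 2)]

def delay2(x):            # y[n] = x[n - 2], zero-padded on the left
    return [0, 0] + x[:-2]

def shift(x, n0):         # x[n - n0], zero-padded on the left
    return [0] * n0 + x[:len(x) - n0]

x = [1, 2, 3, 4, 5, 6, 7, 8]
n0 = 1

# Time invariance: shifting the input must shift the output identically.
print(decimate(shift(x, n0)) == shift(decimate(x), n0))  # time varying
print(delay2(shift(x, n0)) == shift(delay2(x), n0))      # time invariant
```

The first comparison fails and the second succeeds, matching the conclusions reached analytically above.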
(c) y[n] + 2y²[n] = 2x[n] − x[n − 1]. This is nonlinear but time invariant.
(d) y[n] − 2y[n − 1] = 2x[n]x[n] = 2x²[n]. This is nonlinear but time invariant.
This describes an Nth-order recursive filter whose present output depends on its own past values y[n − k]
and on the past and present values of the input. It is also called an infinite impulse response (IIR) filter
because its impulse response h[n] (the response to a unit impulse input) is usually of infinite duration. Now
consider the difference equation described by
Its present response depends only on the input terms and shows no dependence (recursion) on past values of
the response. It is called a nonrecursive filter, or a moving average filter, because its response is just
a weighted sum (moving average) of the input terms. It is also called a finite impulse response (FIR)
filter (because its impulse response is of finite duration).
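The FIR/IIR distinction shows up directly in the impulse response. A short Python sketch, assuming a 3-point moving average for the FIR case and y[n] = 0.5y[n − 1] + x[n] for the IIR case (both illustrative choices, not systems taken from the text):

```python
def fir_moving_average(x):
    """Nonrecursive (FIR) filter: y[n] = (x[n] + x[n-1] + x[n-2]) / 3."""
    xp = [0.0, 0.0] + x                       # zero initial input samples
    return [(xp[n + 2] + xp[n + 1] + xp[n]) / 3 for n in range(len(x))]

def iir_first_order(x, a=0.5):
    """Recursive (IIR) filter: y[n] = a*y[n-1] + x[n], relaxed."""
    y, out = 0.0, []
    for v in x:
        y = a * y + v
        out.append(y)
    return out

impulse = [1.0] + [0.0] * 9
h_fir = fir_moving_average(impulse)   # finite: three nonzero samples, then 0
h_iir = iir_first_order(impulse)      # infinite: (0.5)^n, never exactly zero
print(h_fir)
print(h_iir)
```

The moving-average impulse response dies out after three samples, while the recursive filter's response (0.5)^n only decays toward zero.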
Delay elements in cascade result in an output delayed by the sum of the individual delays. The operational
notation for a delay of k units is z^−k. A nonrecursive filter described by
can be realized using a feed-forward structure with N delay elements, and a recursive filter of the form
requires a feedback structure (because the output depends on its own past values). Each realization is
shown in Figure 5.2 and requires N delay elements. The general form described by
requires both feed-forward and feedback and 2N delay elements, as shown in Figure 5.3. However, since
LTI systems may be cascaded in any order (as we shall learn in the next chapter), we can switch the two
subsystems to obtain a canonical realization with only N delays, as also shown in Figure 5.3.
Figure 5.2 Realization of a nonrecursive (left) and recursive (right) digital filter
Figure 5.3 Direct (left) and canonical (right) realization of a digital filter
The state variable representation describes an nth-order system by n simultaneous first-order difference
equations called state equations in terms of n state variables. It is useful for complex or nonlinear systems
and those with multiple inputs and outputs. For LTI systems, state equations can be solved using matrix
methods. The state variable form is also readily amenable to numerical solution. We do not pursue this
method in this book.
(b) Consider a system described by y[n] = a1 y[n − 1] + b0 nu[n]. Let the initial condition be y[−1] = 0. We
then successively compute
y[0] = a1 y[−1] = 0
y[1] = a1 y[0] + b0 u[1] = b0
y[2] = a1 y[1] + 2b0 u[2] = a1 b0 + 2b0
y[3] = a1 y[2] + 3b0 u[3] = a1[a1 b0 + 2b0] + 3b0 = a1²b0 + 2a1 b0 + 3b0
Using the closed form for the sum Σ k x^k from k = 1 to k = N, we get
y[n] = b0 [a1^(n+1) − (n + 1)a1 + n] / (1 − a1)²
What a chore! More elegant ways of solving difference equations are described later in this chapter.
(c) Consider the recursive system y[n] = y[n − 1] + x[n] − x[n − 3]. If x[n] equals δ[n] and y[−1] = 0, we
successively obtain
y[0] = y[−1] + δ[0] − δ[−3] = 1    y[3] = y[2] + δ[3] − δ[0] = 1 − 1 = 0
y[1] = y[0] + δ[1] − δ[−2] = 1    y[4] = y[3] + δ[4] − δ[1] = 0
y[2] = y[1] + δ[2] − δ[−1] = 1    y[5] = y[4] + δ[5] − δ[2] = 0
The impulse response of this recursive filter is zero after the first three values and has a finite length.
It is actually a nonrecursive (FIR) filter in disguise!
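The "FIR in disguise" behavior is easy to reproduce by programming the recursion directly (a Python sketch of the same computation):

```python
def response(x, N):
    """Recursion for y[n] = y[n-1] + x[n] - x[n-3], relaxed (y[-1] = 0).
    x is a function of n so that negative indices are handled cleanly."""
    y_prev, out = 0.0, []
    for n in range(N):
        y_prev = y_prev + x(n) - x(n - 3)
        out.append(y_prev)
    return out

delta = lambda n: 1.0 if n == 0 else 0.0   # unit impulse
h = response(delta, 8)
print(h)   # 1, 1, 1, then zero ever after: a finite impulse response
```

Even though the equation is written recursively, the feedback contribution cancels after n = 2, so the filter behaves exactly like the FIR filter y[n] = x[n] + x[n − 1] + x[n − 2].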
Table 5.1 Form of the Natural Response for Discrete LTI Systems
Entry Root of Characteristic Equation    Form of Natural Response
4 Complex, repeated p + 1 times: (re^±jΩ)^(p+1)    r^n cos(nΩ)(A0 + A1 n + A2 n² + ... + Ap n^p)
    + r^n sin(nΩ)(B0 + B1 n + B2 n² + ... + Bp n^p)
Table 5.2 Form of the Forced Response for Discrete LTI Systems
Note: If the right-hand side (RHS) is α^n, where α is also a root of the characteristic
equation repeated p times, the forced response form must be multiplied by n^p.
Entry Forcing Function (RHS)    Form of Forced Response
5 n    C0 + C1 n
6 n^p    C0 + C1 n + C2 n² + ... + Cp n^p
(Realization: a feedback structure with input x[n], output y[n], one delay element z^−1, and feedback gain 0.6.)
The difference equation describing this system is y[n] − 0.6y[n − 1] = x[n] = (0.4)^n, n ≥ 0.
Its characteristic equation is 1 − 0.6z^−1 = 0 or z − 0.6 = 0.
Its root z = 0.6 gives the form of the natural response yN[n] = K(0.6)^n.
Since x[n] = (0.4)^n, the forced response is yF[n] = C(0.4)^n.
We find C by substituting yF[n] into the difference equation
yF[n] − 0.6yF[n − 1] = (0.4)^n = C(0.4)^n − 0.6C(0.4)^(n−1).
Cancel out (0.4)^n from both sides and solve for C to get
C − 1.5C = 1 or C = −2.
Thus, yF[n] = −2(0.4)^n. The total response is y[n] = yN[n] + yF[n] = −2(0.4)^n + K(0.6)^n.
We use the initial condition y[−1] = 10 on the total response to find K:
y[−1] = 10 = −5 + K/0.6, and K = 9.
Thus, y[n] = −2(0.4)^n + 9(0.6)^n, n ≥ 0.
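A closed form found this way can always be cross-checked against the recursion it came from. A Python sketch for the system y[n] = 0.6y[n − 1] + (0.4)^n with y[−1] = 10:

```python
# Verify the closed-form total response y[n] = -2(0.4)^n + 9(0.6)^n
# by direct recursion of y[n] = 0.6*y[n-1] + (0.4)^n with y[-1] = 10.
y = 10.0                                   # y[-1]
for n in range(12):
    y = 0.6 * y + 0.4 ** n                 # the recursion
    closed = -2 * 0.4 ** n + 9 * 0.6 ** n  # the closed form
    assert abs(y - closed) < 1e-9
print("closed form matches recursion")
```

This kind of spot check catches sign and constant errors in the natural/forced decomposition immediately.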
(b) Consider the difference equation y[n] − 0.5y[n − 1] = 5 cos(0.5nπ), n ≥ 0, with y[−1] = 4.
Its characteristic equation is 1 − 0.5z^−1 = 0 or z − 0.5 = 0.
Its root z = 0.5 gives the form of the natural response yN[n] = K(0.5)^n.
Since x[n] = 5 cos(0.5nπ), the forced response is yF[n] = A cos(0.5nπ) + B sin(0.5nπ).
We find yF[n − 1] = A cos[0.5(n − 1)π] + B sin[0.5(n − 1)π] = A sin(0.5nπ) − B cos(0.5nπ). Then
yF[n] − 0.5yF[n − 1] = (A + 0.5B)cos(0.5nπ) − (0.5A − B)sin(0.5nπ) = 5 cos(0.5nπ)
Equate the coefficients of the cosine and sine terms to get
(A + 0.5B) = 5, (0.5A − B) = 0 or A = 4, B = 2, and yF[n] = 4 cos(0.5nπ) + 2 sin(0.5nπ).
The total response is y[n] = K(0.5)^n + 4 cos(0.5nπ) + 2 sin(0.5nπ). With y[−1] = 4, we find
y[−1] = 4 = 2K − 2 or K = 3, and thus y[n] = 3(0.5)^n + 4 cos(0.5nπ) + 2 sin(0.5nπ), n ≥ 0.
The steady-state response is 4 cos(0.5nπ) + 2 sin(0.5nπ), and the transient response is 3(0.5)^n.
5.4 Digital Filters Described by Difference Equations 107
(c) Consider the difference equation y[n] − 0.5y[n − 1] = 3(0.5)^n, n ≥ 0, with y[−1] = 2.
Its characteristic equation is 1 − 0.5z^−1 = 0 or z − 0.5 = 0.
Its root, z = 0.5, gives the form of the natural response yN[n] = K(0.5)^n.
Since x[n] = (0.5)^n has the same form as the natural response, the forced response is yF[n] = Cn(0.5)^n.
We find C by substituting yF[n] into the difference equation:
yF[n] − 0.5yF[n − 1] = 3(0.5)^n = Cn(0.5)^n − 0.5C(n − 1)(0.5)^(n−1).
Cancel out (0.5)^n from both sides and solve for C to get Cn − C(n − 1) = 3, or C = 3.
Thus, yF[n] = 3n(0.5)^n. The total response is y[n] = yN[n] + yF[n] = K(0.5)^n + 3n(0.5)^n.
We use the initial condition y[−1] = 2 on the total response to find K:
y[−1] = 2 = 2K − 6, and K = 4.
Thus, y[n] = 4(0.5)^n + 3n(0.5)^n = (4 + 3n)(0.5)^n, n ≥ 0.
Comparison with the generic realization of Figure 5.2 reveals that the system difference equation is
EXAMPLE 5.7 (Zero-Input and Zero-State Response for the Single-Input Case)
(a) Consider the difference equation y[n] − 0.6y[n − 1] = (0.4)^n, n ≥ 0, with y[−1] = 10.
The forced response and the form of the natural response were found in Example 5.6(a) as:
yF[n] = −2(0.4)^n    yN[n] = K(0.6)^n
1. Its ZSR is found from the form of the total response yzs[n] = −2(0.4)^n + K(0.6)^n, with zero
initial conditions:
yzs[−1] = 0 = −5 + K/0.6    K = 3    yzs[n] = −2(0.4)^n + 3(0.6)^n, n ≥ 0
2. Its ZIR is found from the natural response yzi[n] = K(0.6)^n, with the given initial conditions:
yzi[−1] = 10 = K/0.6    K = 6    yzi[n] = 6(0.6)^n, n ≥ 0
3. The total response is y[n] = yzi[n] + yzs[n] = −2(0.4)^n + 9(0.6)^n, n ≥ 0.
This matches the results of Example 5.6(a).
(b) Let y[n] − (1/6)y[n − 1] − (1/6)y[n − 2] = 4, n ≥ 0, with y[−1] = 0 and y[−2] = 12.
1. The ZIR has the form of the natural response yzi[n] = K1(1/2)^n + K2(−1/3)^n (see Example 5.6(d)).
To find the constants, we use the given initial conditions y[−1] = 0 and y[−2] = 12:
0 = K1(1/2)^−1 + K2(−1/3)^−1 = 2K1 − 3K2    12 = K1(1/2)^−2 + K2(−1/3)^−2 = 4K1 + 9K2
Thus, K1 = 1.2, K2 = 0.8, and
yzi[n] = 1.2(1/2)^n + 0.8(−1/3)^n, n ≥ 0
2. The ZSR has the same form as the total response. Since the forced response (found in Exam-
ple 5.6(d)) is yF[n] = 6, we have
yzs[n] = K1(1/2)^n + K2(−1/3)^n + 6
To find the constants, we assume zero initial conditions, y[−1] = 0 and y[−2] = 0, to get
y[−1] = 0 = 2K1 − 3K2 + 6    y[−2] = 0 = 4K1 + 9K2 + 6
(c) (Linearity of the ZSR and ZIR) An IIR filter is described by y[n] − y[n − 1] − 2y[n − 2] = x[n],
with x[n] = 6u[n] and initial conditions y[−1] = 1, y[−2] = 4.
1. Find the zero-input response, zero-state response, and total response.
2. How does the total response change if y[−1] = 1, y[−2] = 4 as given, but x[n] = 12u[n]?
3. How does the total response change if x[n] = 6u[n] as given, but y[−1] = 2, y[−2] = 8?
For the ZSR, we use the form of the total response and zero initial conditions:
yzs[n] = yF[n] + yN[n] = −3 + A(−1)^n + B(2)^n,    y[−1] = y[−2] = 0
Impulse response h[n]: The output of a relaxed LTI system if the input is a unit impulse δ[n]
Step response s[n]: The output of a relaxed LTI system if the input is a unit step u[n]
the impulse response h[n] (with x[n] = δ[n]) is an (M + 1)-term sequence of the input terms, which may be
written as
h[n] = B0 δ[n] + B1 δ[n − 1] + ... + BM δ[n − M] or h[n] = {B0, B1, . . . , BM} (5.20)
Since the input δ[n] is zero for n > 0, we must apparently assume a forced response that is zero and thus
solve for the natural response using initial conditions (leading to a trivial result). The trick is to use at
least one nonzero initial condition, which we must find by recursion. By recursion, we find h[0] = 1. Since
δ[n] = 0, n > 0, the impulse response is found as the natural response of the homogeneous equation
subject to the nonzero initial condition h[0] = 1. All the other initial conditions are assumed to be zero
(h[−1] = 0 for a second-order system, h[−1] = h[−2] = 0 for a third-order system, and so on).
The impulse response of the given system is h[n] = h0[n] + 3h0[n − 1]. We find
h[n] = [1.2(1/2)^n + 0.8(−1/3)^n]u[n] + [3.6(1/2)^(n−1) + 2.4(−1/3)^(n−1)]u[n − 1]
Comment: Remember that the impulse response of this recursive system is of finite length.
h[1] = 0.4h[0] = 0.4    h[2] = 0.4h[1] = (0.4)²    h[3] = 0.4h[2] = (0.4)³    etc.
The general form is easily discerned as h[n] = (0.4)^n and is valid for n ≥ 0.
Comment: The causal impulse response of y[n] − αy[n − 1] = x[n] is h[n] = α^n u[n].
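The same recursion is trivial to automate (a Python sketch of the computation above):

```python
# Impulse response of y[n] - 0.4*y[n-1] = x[n] by recursion:
# with x[n] = delta[n] and h[-1] = 0, we expect h[n] = (0.4)^n for n >= 0.
h_prev = 0.0
h = []
for n in range(10):
    x = 1.0 if n == 0 else 0.0        # unit impulse delta[n]
    h_prev = 0.4 * h_prev + x
    h.append(h_prev)
print(h[:4])
```

The computed samples follow the geometric form (0.4)^n, confirming the general result h[n] = α^n u[n] for this family of first-order filters.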
(b) Find the anti-causal impulse response of the first-order system y[n] − 0.4y[n − 1] = x[n].
For the anti-causal impulse response, we assume h[n] = 0, n ≥ 0, and solve for h[n], n < 0, by recursion
from h[n − 1] = 2.5(h[n] − δ[n]). With h[−1] = 2.5(h[0] − δ[0]) = −2.5, and δ[n] = 0, n ≠ 0, we find
h[−2] = 2.5h[−1] = −(2.5)²    h[−3] = 2.5h[−2] = −(2.5)³    h[−4] = 2.5h[−3] = −(2.5)⁴    etc.
The general form is easily discerned as h[n] = −(2.5)^−n = −(0.4)^n and is valid for n ≤ −1.
Comment: The anti-causal impulse response of y[n] − αy[n − 1] = x[n] is h[n] = −α^n u[−n − 1].
5.6 Stability of Discrete-Time LTI Systems 115
(b) The system y[n] − y[n − 1] = x[n] is unstable. The root of its characteristic equation z − 1 = 0 is z = 1,
which gives the natural response yN[n] = Ku[n]; this is actually bounded. However, for an input x[n] = u[n],
the forced response will have the form Cnu[n], which becomes unbounded.
(c) The system y[n] − 2y[n − 1] + y[n − 2] = x[n] is unstable. The roots of its characteristic equation
z² − 2z + 1 = 0 are equal and produce the unbounded natural response yN[n] = Au[n] + Bnu[n].
(d) The system y[n] − 0.5y[n − 1] = nx[n] is linear, time varying, and unstable. The (bounded) step input
x[n] = u[n] results in a response that includes the ramp nu[n], which becomes unbounded.
(e) The system y[n] = x[n] − 2x[n − 1] is stable because it describes an FIR filter.
(b) Let h[n] = 3(0.6)^n u[n]. This suggests a difference equation whose left-hand side is y[n] − 0.6y[n − 1].
We then set up h[n] − 0.6h[n − 1] = 3(0.6)^n u[n] − 1.8(0.6)^(n−1) u[n − 1]. This simplifies to
h[n] − 0.6h[n − 1] = 3(0.6)^n u[n] − 3(0.6)^n u[n − 1] = 3(0.6)^n (u[n] − u[n − 1]) = 3(0.6)^n δ[n] = 3δ[n]
The difference equation corresponding to h[n] − 0.6h[n − 1] = 3δ[n] is y[n] − 0.6y[n − 1] = 3x[n].
(c) Let h[n] = 2(0.5)^n u[n] + (−0.5)^n u[n]. This suggests the characteristic equation (z − 0.5)(z + 0.5) = 0.
The left-hand side of the difference equation is thus y[n] − 0.25y[n − 2]. We now compute
h[n] − 0.25h[n − 2] = 2(0.5)^n u[n] + (−0.5)^n u[n] − 0.25(2(0.5)^(n−2) u[n − 2] + (−0.5)^(n−2) u[n − 2])
This simplifies to
h[n] − 0.25h[n − 2] = [2(0.5)^n + (−0.5)^n](u[n] − u[n − 2]) = [2(0.5)^n + (−0.5)^n](δ[n] + δ[n − 1])
This simplifies further to h[n] − 0.25h[n − 2] = 3δ[n] + 0.5δ[n − 1].
Finally, the difference equation is y[n] − 0.25y[n − 2] = 3x[n] + 0.5x[n − 1].
Not all systems have an inverse. For a system to have an inverse, or be invertible, distinct inputs must
lead to distinct outputs. If a system produces an identical output for two different inputs, it does not have
an inverse. For invertible LTI systems described by difference equations, finding the inverse system is as
easy as switching the input and output variables.
The original system is described by y[n] = x[n] − 0.5x[n − 1]. By switching the input and output, the
inverse system is described by y[n] − 0.5y[n − 1] = x[n]. The realization of each system is shown in
Figure E5.16A(2). Are they related? Yes. If you flip the realization of the echo system end-on-end
and change the sign of the feedback signal, you get the inverse realization.
g[n] = (4δ[n] + 4δ[n − 1]) − (2δ[n − 1] + 2δ[n − 2]) = 4δ[n] + 2δ[n − 1] − 2δ[n − 2]
y0[0] = 0.5y0[−1] + 4 = 4    y0[1] = 0.5y0[0] + 2 = 4    y0[2] = 0.5y0[1] − 2 = 0
All subsequent values of y0[n] are zero since the input terms are zero for n > 2. The output is thus
y0[n] = {4, 4}, the same as the input to the overall system.
(c) The linear (but time-varying) decimating system y[n] = x[2n] does not have an inverse. Two inputs
that differ only in the samples discarded (for example, the signals {1, 2, 4, 5} and {1, 3, 4, 8}) yield the same
output {1, 4}. If we try to recover the original signal by interpolation, we cannot uniquely identify
the original signal.
(d) The linear (but time-varying) interpolating system y[n] = x[n/2] does have an inverse. Its inverse is a
decimating system that discards the very samples inserted during interpolation and thus recovers the
original signal.
(e) The LTI system y[n] = x[n] + 2x[n − 1] also has an inverse. Its inverse is found by switching the input
and output as y[n] + 2y[n − 1] = x[n]. This example also shows that the inverse of an FIR filter results
in an IIR filter.
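The switch-input-and-output rule is easy to test numerically: cascade a system with its claimed inverse and check that the input is recovered. A Python sketch using the echo system y[n] = x[n] − 0.5x[n − 1] and the test signal {4, 4} (both as in the example above; the echo gain 0.5 is taken from that example):

```python
def echo(x):
    """FIR echo system: y[n] = x[n] - 0.5*x[n-1]."""
    xp = [0.0] + x
    return [xp[n + 1] - 0.5 * xp[n] for n in range(len(x))]

def inverse(g):
    """Inverse obtained by switching input and output:
    y[n] - 0.5*y[n-1] = x[n], i.e., y[n] = 0.5*y[n-1] + g[n]."""
    y, out = 0.0, []
    for v in g:
        y = 0.5 * y + v
        out.append(y)
    return out

x = [4.0, 4.0, 0.0, 0.0, 0.0]
print(inverse(echo(x)))   # recovers the original input
```

The cascade returns {4, 4} exactly, mirroring the recursion worked out by hand above, and it also illustrates the FIR-to-IIR inversion: the forward system is nonrecursive while its inverse is recursive.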
5.8 Application-Oriented Examples 119
This describes an FIR filter whose output y[n] equals the sum of the input x[n] and its delayed (by D samples)
and attenuated (by α) replica (the echo term). Its realization is sketched in Figure 5.5. The D-sample
delay is implemented by a cascade of D delay elements and represented by the block marked z^−D. This
filter is also called a comb filter (for reasons to be explained in later chapters).
Figure 5.5 Realization of the echo filter (left) and the reverb filter (right)
Reverberations are due to multiple echoes (from the walls and other structures in a concert hall, for
example). For simplicity, if we assume that the signal suffers the same delay D and the same attenuation α
in each round-trip to the source, we may describe the action of reverb by
Subtracting the second equation from the first, we obtain a compact form for a reverb filter:
This is an IIR filter whose realization is also sketched in Figure 5.5. Its form is reminiscent of the inverse of
the echo system y[n] + αy[n − D] = x[n], but with α replaced by −α.
In concept, it should be easy to tailor the simple reverb filter to simulate realistic effects by including
more terms with different delays and attenuations. In practice, however, this is no easy task, and the filter
designs used by commercial vendors in their applications are often proprietary.
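The reverb recursion itself is only a few lines of code. A Python sketch of the comb filter y[n] = x[n] + αy[n − D], with made-up values α = 0.5 and D = 3:

```python
def reverb(x, alpha=0.5, D=3):
    """IIR reverb (comb) filter: y[n] = x[n] + alpha * y[n - D].
    Each echo returns D samples later, attenuated by alpha."""
    y = []
    for n, v in enumerate(x):
        fb = alpha * y[n - D] if n >= D else 0.0   # relaxed before n = 0
        y.append(v + fb)
    return y

# A single impulse produces an endless train of decaying echoes.
out = reverb([1.0] + [0.0] * 9)
print(out)
```

A single impulse in produces echoes of strength α, α², α³, ... every D samples out, which is exactly the infinite impulse response that distinguishes reverb from the single-echo FIR filter.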
where x1 [n] corresponds to one period (N samples) of the signal x[n]. This form actually describes a reverb
system with no attenuation whose delay equals the period N. Hardware implementation often uses a circular
buffer or wave-table (in which one period of the signal is stored), and cycling over it generates the periodic
signal. The same wave-table can also be used to change the frequency (or period) of the signal (to double
the frequency, for example, we would cycle over alternate samples) or for storing a new signal.
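The circular-buffer idea can be sketched in a few lines of Python (the 4-sample table below holds made-up values):

```python
# Wave-table synthesis sketch: one period of the signal is stored in a
# table, and cycling over it with modulo indexing generates the periodic
# signal. The table values here are made up for illustration.
table = [1.0, 2.0, 3.0, 4.0]         # one period, N = 4 samples
N = len(table)

def wavetable(n, step=1):
    """Sample n of the periodic signal; step = 2 cycles over alternate
    table entries, doubling the frequency (halving the period)."""
    return table[(n * step) % N]

periodic = [wavetable(n) for n in range(8)]           # period 4
doubled = [wavetable(n, step=2) for n in range(8)]    # period 2
print(periodic)
print(doubled)
```

Stepping through the table by 2 skips alternate samples and halves the period, which is the frequency-doubling trick mentioned above; loading new contents into the same table stores a new signal.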
CHAPTER 5 PROBLEMS
DRILL AND REINFORCEMENT
5.2 (System Classification) In each of the systems below, x[n] is the input and y[n] is the output.
Check each system for linearity, shift invariance, memory, and causality.
(a) y[n] − y[n − 1] = x[n] (b) y[n] + y[n + 1] = nx[n]
(c) y[n] − y[n + 1] = x[n + 2] (d) y[n + 2] − y[n + 1] = x[n]
(e) y[n + 1] − x[n]y[n] = nx[n + 2] (f) y[n] + y[n − 3] = x²[n] + x[n + 6]
(g) y[n] − 2^n y[n] = x[n] (h) y[n] = x[n] + x[n − 1] + x[n − 2]
5.3 (Response by Recursion) Find the response of the following systems by recursion to n = 4 and
try to discern the general form for y[n].
(a) y[n] − ay[n − 1] = δ[n] y[−1] = 0
(b) y[n] − ay[n − 1] = u[n] y[−1] = 1
(c) y[n] − ay[n − 1] = nu[n] y[−1] = 0
(d) y[n] + 4y[n − 1] + 3y[n − 2] = u[n − 2] y[−1] = 0 y[−2] = 1
5.4 (Forced Response) Find the forced response of the following systems.
(a) y[n] − 0.4y[n − 1] = u[n] (b) y[n] − 0.4y[n − 1] = (0.5)^n
(c) y[n] + 0.4y[n − 1] = (0.5)^n (d) y[n] − 0.5y[n − 1] = cos(nπ/2)
5.5 (Forced Response) Find the forced response of the following systems.
(a) y[n] − 1.1y[n − 1] + 0.3y[n − 2] = 2u[n] (b) y[n] − 0.9y[n − 1] + 0.2y[n − 2] = (0.5)^n
(c) y[n] + 0.7y[n − 1] + 0.1y[n − 2] = (0.5)^n (d) y[n] − 0.25y[n − 2] = cos(nπ/2)
5.6 (Zero-State Response) Find the zero-state response of the following systems.
(a) y[n] − 0.5y[n − 1] = 2u[n] (b) y[n] − 0.4y[n − 1] = (0.5)^n
(c) y[n] − 0.4y[n − 1] = (0.4)^n (d) y[n] − 0.5y[n − 1] = cos(nπ/2)
5.7 (Zero-State Response) Find the zero-state response of the following systems.
(a) y[n] − 1.1y[n−1] + 0.3y[n−2] = 2u[n]  (b) y[n] − 0.9y[n−1] + 0.2y[n−2] = (0.5)^n
(c) y[n] + 0.7y[n−1] + 0.1y[n−2] = (0.5)^n  (d) y[n] − 0.25y[n−2] = cos(nπ/2)
5.8 (System Response) Let y[n] − 0.5y[n−1] = x[n], with y[−1] = 1. Find the response of this system
for the following inputs.
(a) x[n] = 2u[n]  (b) x[n] = (0.25)^n u[n]  (c) x[n] = n(0.25)^n u[n]
(d) x[n] = (0.5)^n u[n]  (e) x[n] = n(0.5)^n  (f) x[n] = (0.5)^n cos(0.5nπ)
122 Chapter 5 Discrete-Time Systems
5.10 (System Response) Sketch a realization for each system, assuming zero initial conditions. Then
evaluate the complete response from the information given. Check your answer by computing the first
few values by recursion.
(a) y[n] − 0.4y[n−1] = x[n], x[n] = (0.5)^n u[n], y[−1] = 0
(b) y[n] − 0.4y[n−1] = 2x[n] + x[n−1], x[n] = (0.5)^n u[n], y[−1] = 0
(c) y[n] − 0.4y[n−1] = 2x[n] + x[n−1], x[n] = (0.5)^n u[n], y[−1] = 5
(d) y[n] + 0.5y[n−1] = x[n] − x[n−1], x[n] = (0.5)^n u[n], y[−1] = 2
(e) y[n] + 0.5y[n−1] = x[n] − x[n−1], x[n] = (0.5)^n u[n], y[−1] = 0
5.11 (System Response) For each system, evaluate the natural, forced, and total response. Assume that
y[−1] = 0, y[−2] = 1. Check your answer for the total response by computing its first few values by
recursion.
(a) y[n] + 4y[n−1] + 3y[n−2] = u[n]  (b) y[n] + 4y[n−1] + 4y[n−2] = 2^n u[n]
(c) y[n] + 4y[n−1] + 8y[n−2] = cos(nπ)u[n]  (d) {(1 + 2z⁻¹)²}y[n] = n(2)^n u[n]
(e) {1 + (3/4)z⁻¹ + (1/8)z⁻²}y[n] = (1/3)^n u[n]  (f) {1 + 0.5z⁻¹ + 0.25z⁻²}y[n] = cos(0.5nπ)u[n]
(g) {z² + 4z + 4}y[n] = 2^n u[n]  (h) {1 − 0.5z⁻¹}y[n] = (0.5)^n cos(0.5nπ)u[n]
5.12 (System Response) For each system, set up a difference equation and compute the zero-state,
zero-input, and total response, assuming x[n] = u[n] and y[−1] = y[−2] = 1.
(a) {1 − z⁻¹ − 2z⁻²}y[n] = x[n]  (b) {z² − z − 2}y[n] = x[n]
(c) {1 − (3/4)z⁻¹ + (1/8)z⁻²}y[n] = {z⁻¹}x[n]  (d) {1 − (3/4)z⁻¹ + (1/8)z⁻²}y[n] = {1 + z⁻¹}x[n]
(e) {1 − 0.25z⁻²}y[n] = x[n]  (f) {z² − 0.25}y[n] = {2z² + 1}x[n]
5.13 (Impulse Response by Recursion) Find the impulse response h[n] by recursion up to n = 4 for
each of the following systems.
(a) y[n] − y[n−1] = 2x[n]  (b) y[n] − 3y[n−1] + 6y[n−2] = x[n−1]
(c) y[n] − 2y[n−3] = x[n−1]  (d) y[n] − y[n−1] + 6y[n−2] = nx[n−1] + 2x[n−3]
5.14 (Analytical Form for Impulse Response) Classify each filter as recursive or FIR (nonrecursive),
and causal or noncausal, and find an expression for its impulse response h[n].
(a) y[n] = x[n] + x[n−1] + x[n−2]  (b) y[n] = x[n+1] + x[n] + x[n−1]
(c) y[n] + 2y[n−1] = x[n]  (d) y[n] + 2y[n−1] = x[n−1]
(e) y[n] + 2y[n−1] = 2x[n] + 6x[n−1]  (f) y[n] + 2y[n−1] = x[n+1] + 4x[n] + 6x[n−1]
(g) {1 + 4z⁻¹ + 3z⁻²}y[n] = {z⁻²}x[n]  (h) {z² + 4z + 4}y[n] = {z + 3}x[n]
(i) {z² + 4z + 8}y[n] = x[n]  (j) y[n] + 4y[n−1] + 4y[n−2] = x[n] − x[n+2]
5.15 (Stability) Investigate the causality and stability of the following systems.
(a) y[n] = x[n−1] + x[n] + x[n+1]  (b) y[n] = x[n] + x[n−1] + x[n−2]
(c) y[n] − 2y[n−1] = x[n]  (d) y[n] − 0.2y[n−1] = x[n] − 2x[n+2]
(e) y[n] + y[n−1] + 0.5y[n−2] = x[n]  (f) y[n] − y[n−1] + y[n−2] = x[n] − x[n+1]
(g) y[n] − 2y[n−1] + y[n−2] = x[n] − x[n−3]  (h) y[n] − 3y[n−1] + 2y[n−2] = 2x[n+3]
Chapter 5 Problems 123
5.17 (System Classification) Classify the following systems in terms of their linearity, time invariance,
memory, causality, and stability.
(a) y[n] = x[n/3] (zero interpolation)
(b) y[n] = cos(nπ)x[n] (modulation)
(c) y[n] = [1 + cos(nπ)]x[n] (modulation)
(d) y[n] = cos(nπx[n]) (frequency modulation)
(e) y[n] = cos(nπ + x[n]) (phase modulation)
(f) y[n] = x[n] − x[n−1] (differencing operation)
(g) y[n] = 0.5x[n] + 0.5x[n−1] (averaging operation)
(h) y[n] = (1/N) Σ_{k=0}^{N−1} x[n−k] (moving average)
(i) y[n] − αy[n−1] = x[n], 0 < α < 1 (exponential averaging)
(j) y[n] = 0.4(y[n−1] + 2) + x[n]
5.18 (Classification) Classify each system in terms of its linearity, time invariance, memory, causality,
and stability.
(a) The folding system y[n] = x[−n].
(b) The decimating system y[n] = x[2n].
(c) The zero-interpolating system y[n] = x[n/2].
(d) The sign-inversion system y[n] = sgn{x[n]}.
(e) The rectifying system y[n] = |x[n]|.
5.19 (Classification) Classify each system in terms of its linearity, time invariance, causality, and stability.
(a) y[n] = round{x[n]}  (b) y[n] = median{x[n+1], x[n], x[n−1]}
(c) y[n] = x[n] sgn(n)  (d) y[n] = x[n] sgn{x[n]}
5.20 (Inverse Systems) Are the following systems invertible? If not, explain why; if invertible, find the
inverse system.
(a) y[n] = x[n] − x[n−1] (differencing operation)
(b) y[n] = (1/3)(x[n] + x[n−1] + x[n−2]) (moving average operation)
(c) y[n] = 0.5x[n] + x[n−1] + 0.5x[n−2] (weighted moving average operation)
(d) y[n] − αy[n−1] = (1 − α)x[n], 0 < α < 1 (exponential averaging operation)
(e) y[n] = cos(nπ)x[n] (modulation)
(f) y[n] = cos(x[n])
(g) y[n] = e^x[n]
124 Chapter 5 Discrete-Time Systems
5.21 (An Echo System and Its Inverse) An echo system is described by y[n] = x[n] + 0.5x[n−N].
Assume that the echo arrives after 1 ms and the sampling rate is 2 kHz.
(a) What is the value of N? Sketch a realization of this echo system.
(b) What is the impulse response and step response of this echo system?
(c) Find the difference equation of the inverse system. Then, sketch its realization and find its
impulse response and step response.
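For the stated delay (1 ms) and sampling rate (2 kHz), the echo delay is N = 2 samples. A Python sketch of the echo system and its recursive inverse (the test signal is made up for illustration):

```python
def echo(x, N=2, g=0.5):
    """Echo system: y[n] = x[n] + g*x[n-N] (FIR)."""
    return [x[n] + (g * x[n - N] if n >= N else 0.0) for n in range(len(x))]

def echo_inverse(y, N=2, g=0.5):
    """Inverse system: xhat[n] = y[n] - g*xhat[n-N] (recursive, IIR)."""
    xhat = []
    for n in range(len(y)):
        xhat.append(y[n] - (g * xhat[n - N] if n >= N else 0.0))
    return xhat

x = [1.0, 2.0, -1.0, 0.5, 3.0, 0.0, 1.0]   # arbitrary test signal
restored = echo_inverse(echo(x))            # should reproduce x
```

Note the structural swap: the FIR echo has an IIR (recursive) inverse.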
5.22 (Reverb) A reverb filter is described by y[n] = x[n] + 0.25y[n−N]. Assume that the echoes arrive
every millisecond and the sampling rate is 2 kHz.
(a) What is the value of N? Sketch a realization of this reverb filter.
(b) What is the impulse response and step response of this reverb filter?
(c) Find the difference equation of the inverse system. Then, sketch its realization and find its
impulse response and step response.
5.23 (System Response) Consider the system y[n] − 0.5y[n−1] = x[n]. Find its zero-state response to
the following inputs.
(a) x[n] = u[n]  (b) x[n] = (0.5)^n u[n]  (c) x[n] = cos(0.5nπ)u[n]
(d) x[n] = (−1)^n u[n]  (e) x[n] = j^n u[n]  (f) x[n] = (j)^n u[n] + (−j)^n u[n]
5.24 (System Response) For the system realization shown in Figure P5.24, find the response to the
following inputs and initial conditions.
(a) x[n] = u[n], y[−1] = 0  (b) x[n] = u[n], y[−1] = 4
(c) x[n] = (0.5)^n u[n], y[−1] = 0  (d) x[n] = (0.5)^n u[n], y[−1] = 6
(e) x[n] = (−0.5)^n u[n], y[−1] = 0  (f) x[n] = (−0.5)^n u[n], y[−1] = 2
Figure P5.24 System realization for Problem 5.24
5.26 (System Response) Find the impulse response of the following filters.
(a) y[n] = x[n] − x[n−1] (differencing operation)
(b) y[n] = 0.5x[n] + 0.5x[n−1] (averaging operation)
(c) y[n] = (1/N) Σ_{k=0}^{N−1} x[n−k], N = 3 (moving average)
(d) y[n] = (2/(N(N+1))) Σ_{k=0}^{N−1} (N−k)x[n−k], N = 3 (weighted moving average)
(e) y[n] − αy[n−1] = (1 − α)x[n], α = (N−1)/(N+1), N = 3 (exponential averaging)
5.27 (System Response) It is known that the response of the system y[n] + αy[n−1] = x[n], α ≠ 0, is
given by y[n] = [5 + 3(0.5)^n]u[n].
(a) Identify the natural response and forced response.
(b) Identify the values of α and y[−1].
(c) Identify the zero-input response and zero-state response.
(d) Identify the input x[n].
5.28 (System Response) It is known that the response of the system y[n] + 0.5y[n−1] = x[n] is described
by y[n] = [5(−0.5)^n + 3(0.5)^n]u[n].
(a) Identify the zero-input response and zero-state response.
(b) What is the zero-input response of the system y[n] + 0.5y[n−1] = x[n] if y[−1] = 10?
(c) What is the response of the relaxed system y[n] + 0.5y[n−1] = x[n−2]?
(d) What is the response of the relaxed system y[n] + 0.5y[n−1] = x[n−1] + 2x[n]?
5.29 (System Response) It is known that the response of the system y[n] + αy[n−1] = x[n] is described
by y[n] = (5 + 2n)(0.5)^n u[n].
(a) Identify the zero-input response and zero-state response.
(b) What is the zero-input response of the system y[n] + αy[n−1] = x[n] if y[−1] = 10?
(c) What is the response of the relaxed system y[n] + αy[n−1] = x[n−1]?
(d) What is the response of the relaxed system y[n] + αy[n−1] = 2x[n−1] + x[n]?
(e) What is the complete response of y[n] + αy[n−1] = x[n] + 2x[n−1] if y[−1] = 4?
5.30 (System Interconnections) Two systems are said to be in cascade if the output of the first system
acts as the input to the second. Find the response of the following cascaded systems if the input is a
unit step and the systems are described as follows. In which instances does the response differ when the
order of cascading is reversed? Can you use this result to justify that the order in which the systems
are cascaded does not matter in finding the overall response if both systems are LTI?
(a) System 1: y[n] = x[n] − x[n−1]  System 2: y[n] = 0.5y[n−1] + x[n]
(b) System 1: y[n] = 0.5y[n−1] + x[n]  System 2: y[n] = x[n] − x[n−1]
(c) System 1: y[n] = x²[n]  System 2: y[n] = 0.5y[n−1] + x[n]
(d) System 1: y[n] = 0.5y[n−1] + x[n]  System 2: y[n] = x²[n]
5.31 (Systems in Cascade and Parallel) Consider the realization of Figure P5.31.
Figure P5.31 System realization for Problem 5.31
(a) Find its impulse response if α = β. Is the overall system FIR or IIR?
(b) Find its difference equation and impulse response if α ≠ β. Is the overall system FIR or IIR?
(c) Find its difference equation and impulse response if α = β = 1. What is the function of the
overall system?
5.32 (Difference Equations from Impulse Response) Find the difference equations describing the
following systems.
(a) h[n] = δ[n] + 2δ[n−1]  (b) h[n] = {2, 3, 1}
(c) h[n] = (0.3)^n u[n]  (d) h[n] = (0.5)^n u[n] − (−0.5)^n u[n]
5.33 (Difference Equations from Impulse Response) A system is described by the impulse response
h[n] = (−1)^n u[n]. Find the difference equation of this system. Then find the difference equation of
the inverse system. Does the inverse system describe an FIR filter or IIR filter? What function does
it perform?
5.34 (Difference Equations from Differential Equations) Consider an analog system described by
the differential equation y″(t) + 3y′(t) + 2y(t) = 2u(t).
(a) Confirm that this describes a stable analog system.
(b) Convert this to a difference equation using the backward Euler algorithm and check the stability
of the resulting digital filter.
(c) Convert this to a difference equation using the forward Euler algorithm and check the stability
of the resulting digital filter.
(d) Which algorithm is better in terms of preserving stability? Can the results be generalized to any
arbitrary analog system?
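The stability comparison in Problem 5.34 can be previewed by mapping the analog poles s = −1, −2 of y″ + 3y′ + 2y = 2u(t) into the z-plane. A Python sketch (the time step ts = 1.5 is deliberately large to expose the difference; forward Euler maps s to z = 1 + s·ts, backward Euler to z = 1/(1 − s·ts), and digital stability requires |z| < 1):

```python
def forward_euler_pole(s, ts):
    # forward Euler: s -> (z - 1)/ts, so a pole at s maps to z = 1 + s*ts
    return 1 + s * ts

def backward_euler_pole(s, ts):
    # backward Euler: s -> (1 - 1/z)/ts, so a pole at s maps to z = 1/(1 - s*ts)
    return 1 / (1 - s * ts)

analog_poles = [-1.0, -2.0]       # roots of s^2 + 3s + 2 = 0 (stable analog system)
ts = 1.5                          # a deliberately large time step
fwd = [forward_euler_pole(s, ts) for s in analog_poles]   # -0.5 and -2.0
bwd = [backward_euler_pole(s, ts) for s in analog_poles]  # 0.4 and 0.25
fwd_stable = all(abs(z) < 1 for z in fwd)   # False for this ts
bwd_stable = all(abs(z) < 1 for z in bwd)   # True for any ts > 0
```

Backward Euler maps the entire left half-plane inside the unit circle, so it always preserves stability; forward Euler does so only for a small enough time step.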
5.35 (Difference Equations) For the filter realization shown in Figure P5.35, find the difference equation
relating y[n] and x[n] if the impulse response of the filter is given by
Figure P5.35 Filter realization for Problem 5.35
5.36 (Periodic Signal Generators) Find the difference equation of a filter whose impulse response is a
periodic sequence with first period x[n] = {1, 2, 3, 4, 6, 7, 8}. Sketch a realization for this filter.
5.37 (Recursive and IIR Filters) The terms recursive and IIR are not always synonymous. A recursive
filter could in fact have a finite impulse response. Use recursion to find the impulse response h[n]
for each of the following recursive filters. Which filters (if any) describe IIR filters?
(a) y[n] − y[n−1] = x[n] − x[n−2]
(b) y[n] − y[n−1] = x[n] − x[n−1] − 2x[n−2] + 2x[n−3]
5.38 (Recursive Forms of FIR Filters) An FIR filter may always be recast in recursive form by the
simple expedient of including identical factors on the left-hand and right-hand side of its difference
equation in operational form. For example, the filter y[n] = (1 − z⁻¹)x[n] is FIR, but the identical
filter (1 + z⁻¹)y[n] = (1 + z⁻¹)(1 − z⁻¹)x[n] has the difference equation y[n] + y[n−1] = x[n] − x[n−2]
and can be implemented recursively. Find two different recursive difference equations (with different
orders) for each of the following filters.
(a) y[n] = x[n] − x[n−2]  (b) h[n] = {1, 2, 1}
Chapter 5 Problems 127
5.39 (Nonrecursive Forms of IIR Filters) An FIR filter may always be exactly represented in recursive
form, but we can only approximately represent an IIR filter by an FIR filter by truncating its impulse
response to N terms. The larger the truncation index N, the better is the approximation. Consider the
IIR filter described by y[n] − 0.8y[n−1] = x[n]. Find its impulse response h[n] and truncate it to three
terms to obtain h₃[n], the impulse response of the approximate FIR equivalent. Would you expect the
greatest mismatch in the response of the two filters to identical inputs to occur for lower or higher
values of n? Compare the step response of the two filters up to n = 6 to justify your expectations.
5.40 (Nonlinear Systems) One way to solve nonlinear difference equations is by recursion. Consider the
nonlinear difference equation y[n]y[n−1] − 0.5y²[n−1] = 0.5Au[n].
(a) What makes this system nonlinear?
(b) Using y[−1] = 2, recursively obtain y[0], y[1], and y[2].
(c) Use A = 2, A = 4, and A = 9 in the results of part (b) to confirm that this system finds the
square root of A.
(d) Repeat parts (b) and (c) with y[−1] = 1 to check whether the choice of the initial condition
affects system operation.
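Solving the difference equation above for y[n] gives y[n] = 0.5(y[n−1] + A/y[n−1]) for n ≥ 0, which is Newton's iteration for √A. A Python sketch (the iteration count is an arbitrary choice, comfortably more than needed for convergence):

```python
def sqrt_by_recursion(A, y_m1=2.0, iterations=12):
    """Recursion y[n] = 0.5*(y[n-1] + A/y[n-1]), obtained by solving
    y[n]y[n-1] - 0.5*y^2[n-1] = 0.5*A*u[n] for y[n] (Newton's method)."""
    y = y_m1
    for _ in range(iterations):
        y = 0.5 * (y + A / y)
    return y

roots = {A: sqrt_by_recursion(A) for A in (2, 4, 9)}
```

Any positive initial condition (y[−1] = 2 or y[−1] = 1) converges to the same root, which is the point of part (d).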
5.41 (LTI Concepts and Stability) Argue that neither of the following describes an LTI system. Then,
explain how you might check for their stability and determine which of the systems are stable.
(a) y[n] + 2y[n−1] = x[n] + x²[n]  (b) y[n] − 0.5y[n−1] = nx[n] + x²[n]
5.42 (Response of Causal and Noncausal Systems) A difference equation may describe a causal or
noncausal system depending on how the initial conditions are prescribed. Consider a first-order system
governed by y[n] + αy[n−1] = x[n].
(a) With y[n] = 0, n < 0, this describes a causal system. Assume y[−1] = 0 and find the first few
terms y[0], y[1], . . . of the impulse response and step response, using recursion, and establish the
general form for y[n].
(b) With y[n] = 0, n > 0, we have a noncausal system. Assume y[0] = 0 and rewrite the difference
equation as y[n−1] = {x[n] − y[n]}/α to find the first few terms y[0], y[−1], y[−2], . . . of the
impulse response and step response, using recursion, and establish the general form for y[n].
5.43 (Numerical Integration Algorithms) Numerical integration algorithms approximate the area y[n]
from y[n−1] or y[n−2] (one or more time steps away). Consider the following integration algorithms.
Use each of the rules to approximate the area of x(t) = sinc(t), 0 ≤ t ≤ 3, with ts = 0.1 s and ts = 0.3 s,
and compare with the expected result of 0.53309323761827. How does the choice of the time step ts
affect the results? Which algorithm yields the most accurate results?
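The list of candidate algorithms did not survive extraction here, but the flavor of the comparison can be shown with the trapezoidal rule (one standard choice; its use below is an assumption, not from the problem statement). A Python sketch comparing ts = 0.1 s and ts = 0.3 s against the quoted area:

```python
import math

def sinc(t):
    # sinc(t) = sin(pi*t)/(pi*t), with sinc(0) = 1
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def trapezoid(f, a, b, ts):
    """Trapezoidal rule: area accumulates as y[n] = y[n-1] + (ts/2)(x[n] + x[n-1])."""
    n = round((b - a) / ts)
    total = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        total += f(a + k * ts)
    return total * ts

exact = 0.53309323761827
err_fine = abs(trapezoid(sinc, 0.0, 3.0, 0.1) - exact)
err_coarse = abs(trapezoid(sinc, 0.0, 3.0, 0.3) - exact)
```

As expected for a rule with error proportional to ts², the smaller time step gives the more accurate area.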
5.44 (System Response) Use the Matlab routine filter to obtain and plot the response of the filter
described by y[n] = 0.25(x[n] + x[n−1] + x[n−2] + x[n−3]) to the following inputs and comment on
your results.
(a) x[n] = 1, 0 ≤ n ≤ 60
(b) x[n] = 0.1n, 0 ≤ n ≤ 60
(c) x[n] = sin(0.1nπ), 0 ≤ n ≤ 60
(d) x[n] = 0.1n + sin(0.5nπ), 0 ≤ n ≤ 60
(e) x[n] = Σ_{k=−∞}^{∞} δ[n − 5k], 0 ≤ n ≤ 60
(f) x[n] = Σ_{k=−∞}^{∞} δ[n − 4k], 0 ≤ n ≤ 60
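Outside Matlab, the same difference-equation evaluation is easy to reproduce. A stdlib Python sketch of a filter-style routine (same b, a coefficient convention as Matlab's filter, zero initial conditions), applied to the 4-point moving average above with a constant input:

```python
def dt_filter(b, a, x):
    """Evaluate a[0]y[n] + a[1]y[n-1] + ... = b[0]x[n] + b[1]x[n-1] + ...
    by recursion, assuming zero initial conditions."""
    y = []
    for n in range(len(x)):
        acc = sum(bk * x[n - k] for k, bk in enumerate(b) if n - k >= 0)
        acc -= sum(ak * y[n - k] for k, ak in enumerate(a) if k > 0 and n - k >= 0)
        y.append(acc / a[0])
    return y

# 4-point moving average: y[n] = 0.25(x[n] + x[n-1] + x[n-2] + x[n-3])
b = [0.25, 0.25, 0.25, 0.25]
x = [1.0] * 10                      # constant input, as in part (a)
y = dt_filter(b, [1.0], x)          # ramps up over 3 samples, then holds at 1
```

The startup transient (0.25, 0.5, 0.75) before the steady value of 1 is the averaging window filling up.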
5.45 (System Response) Use the Matlab routine filter to obtain and plot the response of the filter
described by y[n] − y[n−4] = 0.25(x[n] + x[n−1] + x[n−2] + x[n−3]) to the following inputs and
comment on your results.
(a) x[n] = 1, 0 ≤ n ≤ 60
(b) x[n] = 0.1n, 0 ≤ n ≤ 60
(c) x[n] = sin(0.1nπ), 0 ≤ n ≤ 60
(d) x[n] = 0.1n + sin(0.5nπ), 0 ≤ n ≤ 60
(e) x[n] = Σ_{k=−∞}^{∞} δ[n − 5k], 0 ≤ n ≤ 60
(f) x[n] = Σ_{k=−∞}^{∞} δ[n − 4k], 0 ≤ n ≤ 60
5.46 (System Response) Use Matlab to obtain and plot the response of the following systems over the
range 0 ≤ n ≤ 199.
(a) y[n] = x[n/3], x[n] = (0.9)^n u[n] (assume zero interpolation)
(b) y[n] = cos(0.2nπ)x[n], x[n] = cos(0.04nπ) (modulation)
(c) y[n] = [1 + cos(0.2nπ)]x[n], x[n] = cos(0.04nπ) (modulation)
5.47 (System Response) Use Matlab to obtain and plot the response of the following filters, using direct
commands (where possible) and also using the routine filter, and compare your results. Assume that
the input is given by x[n] = 0.1n + sin(0.1nπ), 0 ≤ n ≤ 60. Comment on your results.
(a) y[n] = (1/N) Σ_{k=0}^{N−1} x[n−k], N = 4 (moving average)
(b) y[n] = (2/(N(N+1))) Σ_{k=0}^{N−1} (N−k)x[n−k], N = 4 (weighted moving average)
(c) y[n] − αy[n−1] = (1 − α)x[n], α = (N−1)/(N+1), N = 4 (exponential average)
5.48 (System Response) Use Matlab to obtain and plot the response of the following filters, using
direct commands and using the routine filter, and compare your results. Use an input that consists
of the sum of the signal x[n] = 0.1n + sin(0.1nπ), 0 ≤ n ≤ 60, and uniformly distributed random noise
with a mean of 0. Comment on your results.
(a) y[n] = (1/N) Σ_{k=0}^{N−1} x[n−k], N = 4 (moving average)
(b) y[n] = (2/(N(N+1))) Σ_{k=0}^{N−1} (N−k)x[n−k], N = 4 (weighted moving average)
(c) y[n] − αy[n−1] = (1 − α)x[n], α = (N−1)/(N+1), N = 4 (exponential averaging)
5.49 (System Response) Use the Matlab routine filter to obtain and plot the response of the following
FIR filters. Assume that x[n] = sin(nπ/8), 0 ≤ n ≤ 60. Comment on your results. From the results,
can you describe the function of these filters?
(a) y[n] = x[n] − x[n−1] (first difference)
(b) y[n] = x[n] − 2x[n−1] + x[n−2] (second difference)
(c) y[n] = (1/3)(x[n] + x[n−1] + x[n−2]) (moving average)
(d) y[n] = 0.5x[n] + x[n−1] + 0.5x[n−2] (weighted average)
5.50 (System Response in Symbolic Form) The ADSP routine sysresp1 returns the system response
in symbolic form. See Chapter 21 for examples of its usage. Obtain the response of the following filters
and plot the response for 0 ≤ n ≤ 30.
(a) The step response of y[n] − 0.5y[n−1] = x[n]
(b) The impulse response of y[n] − 0.5y[n−1] = x[n]
(c) The zero-state response of y[n] − 0.5y[n−1] = (0.5)^n u[n]
(d) The complete response of y[n] − 0.5y[n−1] = (0.5)^n u[n], y[−1] = 4
(e) The complete response of y[n] + y[n−1] + 0.5y[n−2] = (0.5)^n u[n], y[−1] = 4, y[−2] = 3
5.51 (Inverse Systems and Echo Cancellation) A signal x(t) is passed through the echo-generating
system y(t) = x(t) + 0.9x(t − τ) + 0.8x(t − 2τ), with τ = 93.75 ms. The resulting echo signal y(t) is
sampled at S = 8192 Hz to obtain the sampled signal y[n].
(a) The difference equation of a digital filter that generates the output y[n] from x[n] may be written
as y[n] = x[n] + 0.9x[n−N] + 0.8x[n−2N]. What is the value of the index N?
(b) What is the difference equation of an echo-canceling filter (inverse filter) that could be used to
recover the input signal x[n]?
(c) The echo signal is supplied as echosig.mat. Load this signal into Matlab (using the command
load echosig). Listen to this signal using the Matlab command sound. Can you hear the
echoes? Can you make out what is being said?
(d) Filter the echo signal using your inverse filter and listen to the filtered signal. Have you removed
the echoes? Can you make out what is being said? Do you agree with what is being said? If so,
please thank Prof. Tim Schulz (http://www.ee.mtu.edu/faculty/schulz) for this problem.
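A Python sketch of parts (a) and (b): the delay index is N = 0.09375 s × 8192 Hz = 768, and the inverse filter recovers x[n] through the recursion x̂[n] = y[n] − 0.9x̂[n−N] − 0.8x̂[n−2N]. The demo below uses a small N and a made-up signal so it runs quickly; the filter structure is unchanged:

```python
def add_echoes(x, N, g1=0.9, g2=0.8):
    """Echo generator: y[n] = x[n] + g1*x[n-N] + g2*x[n-2N]."""
    def tap(n, d):
        return x[n - d] if n >= d else 0.0
    return [x[n] + g1 * tap(n, N) + g2 * tap(n, 2 * N) for n in range(len(x))]

def cancel_echoes(y, N, g1=0.9, g2=0.8):
    """Inverse filter: xhat[n] = y[n] - g1*xhat[n-N] - g2*xhat[n-2N]."""
    xhat = []
    for n in range(len(y)):
        t1 = xhat[n - N] if n >= N else 0.0
        t2 = xhat[n - 2 * N] if n >= 2 * N else 0.0
        xhat.append(y[n] - g1 * t1 - g2 * t2)
    return xhat

N = 4   # small delay for illustration; the problem's value is 0.09375 s x 8192 Hz = 768
x = [0.5, -1.0, 2.0, 0.0, 1.0, -0.5, 0.25, 3.0, -2.0, 1.5, 0.0, 0.75]
restored = cancel_echoes(add_echoes(x, N), N)
```

As in Problem 5.21, the FIR echo generator has a recursive (IIR) inverse, and the round trip reproduces the input.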
Chapter 6
CONTINUOUS CONVOLUTION
6.1 Introduction
The convolution method for finding the zero-state response y(t) of a system to an input x(t) applies to linear
time-invariant (LTI) systems. The system is assumed to be described by its impulse response h(t). An
informal way to establish a mathematical form for y(t) is illustrated in Figure 6.1.
Figure 6.1 An informal illustration of convolution: the input x(t) is divided into narrow strips, each strip is replaced by an impulse, and the response follows by superposition of the shifted impulse responses
We divide x(t) into narrow rectangular strips of width ts at t = kts, k = 0, ±1, ±2, . . . and replace each strip
by an impulse whose strength ts x(kts) equals the area under each strip:

x(t) ≈ Σ_{k=−∞}^{∞} ts x(kts)δ(t − kts)    (sum of shifted impulses)    (6.1)
Since x(t) is a sum of weighted shifted impulses, the response y(t), by superposition, is a sum of the
weighted shifted impulse responses:

y(t) = Σ_{k=−∞}^{∞} ts x(kts)h(t − kts)    (sum of shifted impulse responses)    (6.2)
In the limit as ts → 0, kts describes a continuous variable λ, and both x(t) and y(t) may be
represented by integral forms to give

x(t) = ∫_{−∞}^{∞} x(λ)δ(t − λ) dλ        y(t) = ∫_{−∞}^{∞} x(λ)h(t − λ) dλ    (6.3)
Note that the result for x(t) is a direct consequence of the sifting property of impulses. The result

y(t) = x(t) ⋆ h(t) = ∫_{−∞}^{∞} x(λ)h(t − λ) dλ    (6.4)

describes the convolution integral for finding the zero-state response of a system. In this book, we use
the shorthand notation x(t) ⋆ h(t) to describe the convolution of the signals x(t) and h(t).

Notation: We use x(t) ⋆ h(t) (or x(t) * h(t) in figures) as a shorthand notation for ∫_{−∞}^{∞} x(λ)h(t − λ) dλ
Figure 6.2 Convolution as a process of sliding a folded signal past another
Apart from its physical significance, the convolution integral is just another mathematical operation. It
takes only the change of variable λ → t − λ to show that

x(t) ⋆ h(t) = ∫_{−∞}^{∞} x(λ)h(t − λ) dλ = ∫_{∞}^{−∞} x(t − λ)h(λ)(−dλ) = ∫_{−∞}^{∞} x(t − λ)h(λ) dλ = h(t) ⋆ x(t)    (6.5)

This is the commutative property, one where the order is unimportant. It says that, at least mathematically,
we can switch the roles of the input and the impulse response for any system.
For two causal signals x(t)u(t) and h(t)u(t), the product x(λ)u(λ)h(t − λ)u(t − λ) is nonzero only over
the range 0 ≤ λ ≤ t (because u(λ) is zero for λ < 0 and u(t − λ) is a left-sided step, which is zero for λ > t).
Since both u(λ) and u(t − λ) are unity in this range, the convolution integral simplifies to

y(t) = ∫_0^t x(λ)h(t − λ) dλ,    x(t) and h(t) zero for t < 0    (6.6)
This result generalizes to the fact that the convolution of two right-sided signals is also right-sided and the
convolution of two left-sided signals is also left-sided.
This is simply another way of describing h(t) as the impulse response of a system. With h(t) = δ(t), we have
the less obvious result δ(t) ⋆ δ(t) = δ(t). These two results are illustrated in Figure 6.3.
Convolution is a linear operation and obeys superposition. It is also a time-invariant operation and
implies that shifting the input (or the impulse response) by α shifts the output (the convolution) by α.
Figure E6.1 The signals of Example 6.1 and their convolution and product
134 Chapter 6 Continuous Convolution
(b) Let x(t) = e^−t u(t + 3) and h(t) = e^−t u(t − 1). Then x(λ) = e^−λ u(λ + 3) and h(t − λ) =
e^−(t−λ) u(t − λ − 1). Since u(λ + 3) = 0 for λ < −3, and u(t − λ − 1) = 0 for λ > t − 1, we obtain

y(t) = ∫_{−∞}^{∞} e^−λ u(λ + 3)e^−(t−λ) u(t − λ − 1) dλ = ∫_{−3}^{t−1} e^−λ e^−(t−λ) dλ

Since e^−t is not a function of λ, we can pull it out of the integral to get

y(t) = e^−t ∫_{−3}^{t−1} dλ = (t + 2)e^−t, t − 1 ≥ −3    or    y(t) = (t + 2)e^−t u(t + 2)
(c) Consider the convolution of x(t) = u(t + 1) − u(t − 1) with itself. Changing the arguments to x(λ) and
x(t − λ) results in the convolution

y(t) = ∫_{−∞}^{∞} [u(λ + 1) − u(λ − 1)][u(t − λ + 1) − u(t − λ − 1)] dλ

Since u(t − λ + 1) = 0 for λ > t + 1, and u(t − λ − 1) = 0 for λ > t − 1, the integration limits for the four
integrals can be simplified and result in

y(t) = ∫_{−1}^{t+1} dλ − ∫_{−1}^{t−1} dλ − ∫_{1}^{t+1} dλ + ∫_{1}^{t−1} dλ
6.3 Some Properties of Convolution 135
Based on each result and its range, we can express the convolution y(t) as
Properties Based on Linearity  A linear operation on the input to a system results in a similar operation
on the response. Thus, the input x′(t) results in the response y′(t), and we have x′(t) ⋆ h(t) = y′(t). In
fact, the derivative of any one of the convolved signals results in the derivative of the convolution. Repeated
derivatives of either x(t) or h(t) lead to the general result

x^(m)(t) ⋆ h(t) = x(t) ⋆ h^(m)(t) = y^(m)(t)        x^(m)(t) ⋆ h^(n)(t) = y^(m+n)(t)    (6.8)

Integration of the input to a system results in integration of the response. The step response thus equals
the running integral of the impulse response. More generally, the convolution x(t) ⋆ u(t) equals the running
integral of x(t) because

x(t) ⋆ u(t) = ∫_{−∞}^{∞} x(λ)u(t − λ) dλ = ∫_{−∞}^{t} x(λ) dλ    (6.9)
Properties Based on Time Invariance  If the input to a system is shifted by α, so too is the response.
In other words, x(t − α) ⋆ h(t) = y(t − α). In fact, shifting any one of the convolved signals by α shifts the
convolution by α. If both x(t) and h(t) are shifted, we can use this property in succession to obtain
x(t − α) ⋆ h(t − β) = y(t − α − β).
The concepts of linearity and shift invariance lie at the heart of many other properties of convolution.
Time Scaling  If both x(t) and h(t) are scaled by α to x(αt) and h(αt), the duration property suggests
that the convolution y(t) is also scaled by α. In fact, x(αt) ⋆ h(αt) = (1/|α|)y(αt), where the scale factor 1/|α| is
required to satisfy the area property. The time-scaling property is valid only when both functions are scaled
by the same factor.
Symmetry  If both signals are folded (α = −1), so is their convolution. As a consequence of this, the
convolution of an odd symmetric and an even symmetric signal is odd symmetric, whereas the convolution of
two even symmetric (or two odd symmetric) signals is even symmetric. Interestingly, the convolution of x(t)
with its folded version x(−t) is also even symmetric, with a maximum at t = 0. The convolution x(t) ⋆ x(−t)
is called the autocorrelation of x(t) and is discussed later in this chapter.
1. u(t) ⋆ u(t) = ∫_{−∞}^{∞} u(λ)u(t − λ) dλ = ∫_0^t dλ = tu(t) = r(t)

2. e^−t u(t) ⋆ e^−t u(t) = ∫_{−∞}^{∞} e^−λ e^−(t−λ) u(λ)u(t − λ) dλ = e^−t ∫_0^t dλ = te^−t u(t)
3. u(t) ⋆ e^−t u(t) = ∫_{−∞}^{∞} u(t − λ)e^−λ u(λ) dλ = ∫_0^t e^−λ dλ = (1 − e^−t)u(t)

4. rect(t) ⋆ rect(t) = [u(t + 0.5) − u(t − 0.5)] ⋆ [u(t + 0.5) − u(t − 0.5)] = r(t + 1) − 2r(t) + r(t − 1) = tri(t)
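Analytic pairs such as u(t) ⋆ e^−t u(t) = (1 − e^−t)u(t) can be checked numerically through the impulse approximation of Eq. (6.2): sample both signals, convolve the samples, and scale by ts. A Python sketch (the step size ts = 0.01 and the 3 s window are arbitrary choices):

```python
import math

def conv_approx(x, h, ts):
    """Approximate continuous convolution: y(n*ts) ~ ts * sum_k x[k]h[n-k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += ts * xi * hj
    return y

ts = 0.01
t = [k * ts for k in range(300)]            # samples of 0 <= t < 3
x = [math.exp(-tk) for tk in t]             # e^-t u(t)
h = [1.0] * len(t)                          # u(t)
y = conv_approx(x, h, ts)

# compare with the analytic result (1 - e^-t)u(t) at t = 1
approx = y[100]
exact = 1 - math.exp(-1.0)
```

The agreement improves as ts shrinks, mirroring the limiting argument that led to the convolution integral.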
(b) Using linearity, the convolution yr(t) = r(t) ⋆ e^−t u(t) = u(t) ⋆ u(t) ⋆ e^−t u(t) is the running integral of
the step response s(t) = u(t) ⋆ e^−t u(t) and equals

yr(t) = ∫_0^t s(t) dt = ∫_0^t (1 − e^−t) dt = r(t) − (1 − e^−t)u(t)

(c) Using shifting and superposition, the response y1(t) to the input x(t) = u(t) − u(t − 2) equals

y1(t) = [1 − e^−t]u(t) − [1 − e^−(t−2)]u(t − 2)

(d) Using the area property, the area of y1(t) equals [∫ e^−t dt][∫ x(t) dt] = (1)(2) = 2.

Comment: Try integrating y1(t) directly at your own risk to arrive at the same answer!
(e) Starting with e^−t u(t) ⋆ e^−t u(t) = te^−t u(t), and using the scaling property with α = −1,

e^t u(−t) ⋆ e^t u(−t) = (1/|−1|)(−t)e^t u(−t) = −te^t u(−t)

(f) Starting with u(t) ⋆ e^−t u(t) = (1 − e^−t)u(t), and using the scaling property with α = −1,

u(−t) ⋆ e^t u(−t) = (1 − e^t)u(−t)
(h) Let x(t) = u(t + 3) − u(t − 1) and h(t) = u(t + 1) − u(t − 1).
Using superposition, the convolution y(t) = x(t) ⋆ h(t) may be described as

y(t) = u(t + 3) ⋆ u(t + 1) − u(t + 3) ⋆ u(t − 1) − u(t − 1) ⋆ u(t + 1) + u(t − 1) ⋆ u(t − 1)

Since u(t) ⋆ u(t) = r(t), we invoke time invariance for each term to get

y(t) = r(t + 4) − r(t + 2) − r(t) + r(t − 2)

The signals and their convolution are shown in Figure E6.3H. The convolution y(t) is a trapezoid
extending from t = −4 to t = 2 whose duration is 6 units, whose starting time equals the sum of the
starting times of x(t) and h(t), and whose area equals the product of the areas of x(t) and h(t).
Figure E6.3H The signals for Example 6.3(h) and their convolution
The Recipe for Convolution by Ranges is summarized in the following review panel. To sketch x(λ)
versus λ, simply relabel the axes. To sketch x(t − λ) versus λ, fold x(λ) and delay by t. For example, if the
end points of x(λ) are (−4, 3), the end points of (the folded) x(t − λ) will be (t − 3, t + 4).
We used the indefinite integral ∫ λe^λ dλ = (λ − 1)e^λ to simplify the results. The convolution results match
at the range end points. The convolution is plotted in Figure E6.5A.
Figure E6.5A The convolution of the signals for Example 6.5
The pairwise sum gives the end points of the convolution ranges as [−3, −1, 1, 3]. For each range, we
superpose x(t − λ) = 2, t − 1 ≤ λ ≤ t + 1, and h(λ) = λ, −2 ≤ λ ≤ 2, to obtain the following results:
The convolution results match at the range end points and are plotted in Figure E6.6A.
Figure E6.6A The convolution of the signals for Example 6.6
As a consistency check, note how the convolution results match at the end points of each range. Note
that one of the convolved signals has even symmetry, the other has odd symmetry, and the convolution result
has odd symmetry.
6.4 Convolution by Ranges (Graphical Convolution) 141
The convolution is plotted in Figure E6.7A. The convolution results match at the range end points. Since
x(t) is constant while h(t) is piecewise linear, their convolution must yield only linear or quadratic forms.
Our results also confirm this.
Figure E6.7A The convolution of the signals for Example 6.7
Figure E6.8 The signals for Example 6.8 and their convolution
The convolution starts at t = −3. The convolution ranges cover unit intervals up to t = 3. The area of
x(λ)h(t − λ) with t chosen for each end point yields the following results:
Note that h(t) = x(−t). The convolution x(t) ⋆ x(−t) is called the autocorrelation of x(t) and is always even
symmetric, with a maximum at the origin.
The response of the first system is y1(t) = x(t) ⋆ h1(t). The response y(t) of the second system is

y(t) = y1(t) ⋆ h2(t) = [x(t) ⋆ h1(t)] ⋆ h2(t) = x(t) ⋆ [h1(t) ⋆ h2(t)]    (6.11)

If we wish to replace the cascaded system by an equivalent LTI system with impulse response h(t) such that
y(t) = x(t) ⋆ h(t), it follows that h(t) = h1(t) ⋆ h2(t). Generalizing this result, the impulse response h(t) of
N ideally cascaded LTI systems is simply the convolution h(t) = h1(t) ⋆ h2(t) ⋆ ⋯ ⋆ hN(t) of the N individual
impulse responses. If the hk(t) are energy signals, the order of cascading is unimportant. The overall impulse
response of systems in parallel equals the sum of the individual impulse responses, as shown in Figure 6.5.
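The order-independence of cascaded LTI systems reduces to the commutativity of convolution, which is easy to verify numerically for finite sequences. A Python sketch (the two impulse responses are made-up values):

```python
def conv(a, b):
    """Discrete convolution of two finite sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

h1 = [1.0, 0.5, 0.25]          # hypothetical impulse responses
h2 = [2.0, -1.0]
forward = conv(h1, h2)         # system 1 followed by system 2
reverse = conv(h2, h1)         # system 2 followed by system 1
```

Both orderings yield the same overall impulse response, so either cascade produces the same output for any input.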
Figure E6.9A The interconnected system for Example 6.9(a)
The time constant of the RC circuit is τ = 1. Its impulse response is thus h1(t) = e^−t u(t). The input-
output relation for the second system has the form y0(t) = x0(t) + x0′(t). Its impulse response is thus
h2(t) = δ(t) + δ′(t).
The overall impulse response h(t) is given by their convolution:

h(t) = e^−t u(t) ⋆ [δ(t) + δ′(t)] = e^−t u(t) + δ(t) − e^−t u(t) = δ(t)

This means that the overall system output equals the applied input and the second system acts as the
inverse of the first.
The output g(t) is thus g(t) = 2e^−t u(t).
The output f(t) is given by the convolution f(t) = 2e^−t u(t) ⋆ e^−t u(t) = 2te^−t u(t).
(b) Refer to the cascaded system shown in Figure E6.9B. Will the outputs g(t) and w(t) be equal? Explain.
Figure E6.9B The cascaded systems for Example 6.9(b)
The impulse response of the RC circuit is h(t) = e^−t u(t). For the first system, the output f(t) is
f(t) = 4e^−2t u(t). Using convolution, the output g(t) is given by

g(t) = 4e^−2t u(t) ⋆ e^−t u(t) = 4(e^−t − e^−2t)u(t)

For the second system, the outputs v(t) and w(t) are

v(t) = e^−t u(t) ⋆ 2e^−t u(t) = 2te^−t u(t)        w(t) = v²(t) = 4t²e^−2t u(t)

Clearly, w(t) and g(t) are not equal. The reason is that the order of cascading is unimportant only for
LTI systems and the squaring block is nonlinear.
If x(t) is bounded such that |x(t)| < M, then its folded, shifted version x(t − λ) is also bounded. Since
the absolute value of any integral cannot exceed the integral of the absolute value of its integrand, the
convolution integral yields the following inequality:

|y(t)| ≤ ∫_{−∞}^{∞} |h(λ)||x(t − λ)| dλ < M ∫_{−∞}^{∞} |h(λ)| dλ    (6.14)

For BIBO stability, therefore, h(t) must be absolutely integrable. This is both a necessary and sufficient
condition. If satisfied, we are guaranteed a stable LTI system. In particular, if h(t) is an energy signal, we
have a stable system.
Causal systems are also called physically realizable. Causality actually imposes a powerful constraint on
h(t). The even and odd parts of a causal h(t) cannot be independent, and h(t) can in fact be found from its
even symmetric (or odd symmetric) part alone.
where K = 1/(1 + jω₀) is a (complex) constant. The response y(t) = Kx(t) is also a harmonic at the input
frequency ω₀. More generally, the response of LTI systems to any periodic input is also periodic with the
same period as the input. In the parlance of convolution, the convolution of two signals, one of which is
periodic, is also periodic and has the same period as the input.
The following review panel lists the periodic extensions of two useful signals. The area of one period of the
periodic extension xpe (t) equals the total area of x(t). In fact, adding y(t) and its infinitely many shifted
versions to obtain the periodic extension is equivalent to wrapping y(t) around in one-period segments and
adding them up instead. The wraparound method can thus be used to find the periodic output as the
periodic extension of the response to one period.
Figure E6.11A The pulse signal for Example 6.11(a) and its periodic extension
(b) The periodic extension of x(t) = e^{−t}u(t) with period T may be expressed, using wraparound, as

xpe(t) = e^{−t} + e^{−(t+T)} + e^{−(t+2T)} + · · · = e^{−t} Σ_{k=0}^{∞} e^{−kT} = e^{−t} / (1 − e^{−T})        (6.20)
Figure E6.11B The exponential signal for Example 6.11(b) and its periodic extension
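The geometric-series form in (6.20) can be verified numerically by summing a few dozen shifted replicas. A Python sketch (the period T = 2 is an assumed value):

```python
import numpy as np

# Wraparound computation of the periodic extension of x(t) = e^-t u(t),
# checked against the closed form x_pe(t) = e^-t / (1 - e^-T) from Eq. (6.20).
T = 2.0                              # assumed period for this sketch
t = np.arange(0, T, 0.01)            # one period, 0 <= t < T

# Add the shifted replicas e^-(t + kT); 50 terms suffice since e^-kT decays fast.
x_pe = sum(np.exp(-(t + k * T)) for k in range(50))

closed_form = np.exp(-t) / (1 - np.exp(-T))
print(np.max(np.abs(x_pe - closed_form)))  # essentially zero
```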
(a) (Periodic Extension) One period of the periodic extension of h0(t) is given by h(t) = Ae^{−t}, where A = 1/(1 − e^{−2}). We first find the regular convolution of one period of x(t) with h(t). The pairwise sum gives the end points of the convolution ranges as [0, 1, 2, 3]. For each range, we superpose x(t − λ) = 1, t − 1 ≤ λ ≤ t, and h(λ) = Ae^{−λ}, 0 ≤ λ ≤ 2, to obtain the following results:
We wrap around the last 1-unit range past t = 2 (replacing t by t + 2), and add it to the first term, to get one period of the periodic output yp(t) as

yp(t) = A(1 − e^{−t}) + A[e^{−(t+1)} − e^{−2}] = 1 − e^{−(t−1)}/(1 + e),    0 ≤ t ≤ 1
yp(t) = A[e^{−(t−1)} − e^{−t}] = e^{−(t−2)}/(1 + e),    1 ≤ t ≤ 2
Figure E6.12A The regular and periodic convolution of the signals for Example 6.12(a)
(b) (The Cyclic Method) The output for one period may be computed using the cyclic approach by first creating x(t − λ) and a one-period segment of h(λ), as shown in Figure E6.12B.
Figure E6.12B The signals x(t − λ) and one period of h(λ) for Example 6.12(b)
We then slide the folded signal x(t − λ) past h(λ) for a one-period duration (2 units), and find the periodic convolution as follows:
As x(t − λ) slides right over one period of h(λ), we see portions of two pulses in partial view for 0 ≤ t ≤ 1, and one pulse in full view for 1 ≤ t ≤ 2. As expected, both methods yield identical results.
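The wraparound computation of this example can be mimicked numerically: convolve the one-period segments, fold the tail past t = 2 back onto the first period, and compare with the closed-form result. A Python sketch (the step size is an arbitrary choice):

```python
import numpy as np

# Periodic convolution by wraparound (Example 6.12): one period of the
# input is a unit pulse on [0, 1), one period of the impulse response is
# h(t) = A e^-t on [0, 2) with A = 1/(1 - e^-2), and the period is T = 2.
dt = 0.001
t = np.arange(0, 2, dt)              # one period
A = 1 / (1 - np.exp(-2))

x = np.where(t < 1, 1.0, 0.0)
h = A * np.exp(-t)

# Regular convolution of the one-period segments spans 0 <= t < 4 ...
y = np.convolve(x, h) * dt           # length 2N - 1; treat as [0, 4)
y = np.append(y, 0.0)                # pad to exactly 2N samples
N = len(t)

# ... so wrap the tail past t = 2 back onto the first period and add.
yp = y[:N] + y[N:]

# Closed-form result derived in the text
yp_exact = np.where(t < 1,
                    1 - np.exp(-(t - 1)) / (1 + np.e),
                    np.exp(-(t - 2)) / (1 + np.e))
print(np.max(np.abs(yp - yp_exact)))  # small discretization error only
```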
150 Chapter 6 Continuous Convolution
The process is exactly like finding the system response to periodic inputs, except that no periodic extension is required. The periodic convolution of other power signals with non-commensurate periods must be found from a limiting form, by averaging the convolution of one periodic signal with a finite stretch T0 of the other, as T0 → ∞:

yp(t) = x(t) ⊛ h(t) = lim_{T0→∞} (1/T0) ∫_{T0} xT(λ)h(t − λ) dλ    (for nonperiodic power signals)        (6.22)
What better choice for φk(t) than one that yields a response that is just a scaled version of itself, such that yk(t) = Ak φk(t)? Then

y(t) = Σ_k αk yk = Σ_k αk Ak φk        (6.24)

Finding the output thus reduces to finding just the scale factors Ak, which may be real or complex. Signals φk(t) that are preserved in form by a system except for a scale factor Ak are called eigensignals, eigenfunctions, or characteristic functions because they are intrinsic (in German, eigen) to the system. The factor Ak by which the eigensignal is scaled is called the eigenvalue of the system or the system function.
The response equals the product of the eigensignal e^{st} and the system function (which is a function only of the variable s). If we denote this system function by H(s), we have

H(s) = ∫_{−∞}^{∞} h(λ)e^{−sλ} dλ        (6.26)
This is also called the transfer function. It is actually a description of h(t) by a weighted sum of complex
exponentials and is, in general, also complex. Now, the signal x(t) also yields a similar description, called
the two-sided Laplace transform:

X(s) = ∫_{−∞}^{∞} x(λ)e^{−sλ} dλ    (two-sided Laplace transform)        (6.27)

For a causal signal of the form x(t)u(t), we obtain the one-sided Laplace transform:

X(s) = ∫_0^∞ x(λ)e^{−sλ} dλ    (one-sided Laplace transform)        (6.28)
Since y(t) also equals x(t) ⋆ h(t), convolution in the time domain is equivalent to multiplication in the
transformed domain. This is one of the most fundamental results, and one that we shall use repeatedly in
subsequent chapters.
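For sampled signals, this time-domain/transformed-domain equivalence can be illustrated with the DFT, a discrete stand-in for the transforms above. A Python sketch (the sequences are arbitrary; zero-padding avoids circular wraparound):

```python
import numpy as np

# Convolution in time equals multiplication in the transformed domain.
# Zero-pad to the full length of the linear convolution so the DFT
# product matches the direct convolution sum.
x = np.array([1.0, 2.0, 3.0, 0.5])
h = np.array([0.5, -1.0, 2.0])

n = len(x) + len(h) - 1
Y = np.fft.fft(x, n) * np.fft.fft(h, n)   # multiply the transforms
y_freq = np.real(np.fft.ifft(Y))          # back to the time domain

y_time = np.convolve(x, h)                # direct convolution sum
print(np.max(np.abs(y_time - y_freq)))    # essentially zero
```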
With only slight modifications, we can describe several other transformed-domain relations as follows:
1. With s = j2πf, we use e^{j2πft} as the eigensignals and transform h(t) to the frequency domain in terms of its steady-state transfer function H(f) and the signal x(t) to its Fourier transform X(f):

X(f) = ∫_{−∞}^{∞} x(λ)e^{−j2πfλ} dλ    (Fourier transform)        (6.30)
2. For a single harmonic x(t) = e^{j2πf0t}, the impulse response h(t) transforms to a complex constant H(f0) = Ke^{jφ}. This produces the response Ke^{j(2πf0t+φ)} and describes the method of phasor analysis.
3. For a periodic signal xp(t) described by a combination of harmonics e^{jk2πf0t} at the discrete frequencies kf0, we use superposition to obtain a frequency-domain description of xp(t) over one period in terms of its Fourier series coefficients X[k].
(a) If x(t) = h(t), the response is y(t) = x(t) ⋆ h(t) = te^{−t}u(t). The moments of y(t) are

m0(y) = ∫_0^∞ y(t) dt = 1        m1(y) = ∫_0^∞ ty(t) dt = 2        Dy = 2
Comment: For a cascade of N identical lowpass filters (with τ = 1), the overall effective delay De is De = NDh = N, and the overall effective duration Te is Te = √(NTh²) = Th√N.
Here, mN is the sum of the individual means (delays), σN² is the sum of the individual variances, and the constant K equals the product of the areas under each of the convolved functions:

K = ∏_{k=1}^{n} ∫_{−∞}^{∞} xk(t) dt        (6.37)
This result is one manifestation of the central limit theorem. It allows us to assert that the response
of a complex system composed of many subsystems is Gaussian, since its response is based on repeated
convolution. The individual responses need not be Gaussian and need not even be known.
The central limit theorem fails if any function has zero area, making K = 0. Sufficient conditions for it to hold require finite values of the average, the variance, and the absolute third moment. All time-limited functions and many others satisfy these rather weak conditions. The system function H(f) of a
large number of cascaded systems is also a Gaussian because convolution in the time domain is equivalent to
multiplication in the frequency domain. In probability theory, the central limit theorem asserts that the sum
of n statistically independent random variables approaches a Gaussian for large n, regardless of the nature
of their distributions.
6.9 Convolution Properties Based on Moments 155
Figure E6.15 The repeated convolution and its Gaussian approximation for the signals of Example 6.15: (a) repeated convolution of e^{−t}u(t) for n = 40; (b) repeated convolution of rect(t) for n = 3 (amplitude versus time in seconds, with the Gaussian approximation overlaid)
To find the Gaussian form as n → ∞, we start with the mean mh, variance σ², and area A for e^{−t}u(t). We find

A = m0 = ∫_0^∞ h(t) dt = 1        mh = m1/m0 = 1        σ² = 1

For n cascaded systems, we have mN = nmh = n, σN² = nσ² = n, and K = A^n = 1. These values lead to the Gaussian approximation for yn(t) as

yn(t) ≈ [1/√(2πn)] exp[−(t − n)²/(2n)]
(b) An even more striking example is provided by the convolution of even symmetric rectangular pulses,
shown in Figure E6.15(b) for n = 3. Notice how the result begins to take on a Gaussian look after
only a few repeated convolutions.
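The drift toward a Gaussian can be checked numerically. A Python sketch of the n = 3 rect case (the grid spacing is an assumed choice; each convolution is scaled by dt to approximate the continuous-time integral):

```python
import numpy as np

# Repeated self-convolution of rect(t), compared against the Gaussian
# predicted by the moment properties: mean 0, variance n/12, area K = 1.
dt = 0.01
t = np.arange(-0.5, 0.5, dt)
rect = np.ones_like(t)

y = rect.copy()
for _ in range(2):                      # three rects convolved in all (n = 3)
    y = np.convolve(y, rect) * dt

n_rects, var = 3, 3 / 12.0
ty = np.arange(len(y)) * dt - 1.5       # support of the n = 3 result is [-1.5, 1.5]
gauss = np.exp(-ty**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

print(np.max(np.abs(y - gauss)))        # already small for n = 3
```

Even at n = 3 the peak value (0.75) is within a few percent of the Gaussian peak, and the agreement improves rapidly with n.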
6.10 Correlation
Correlation is an operation similar to convolution. It involves sliding one function past the other and finding
the area under the resulting product. Unlike convolution, however, no folding is performed. The correlation
rxx(t) of two identical functions x(t) is called autocorrelation. For two different functions x(t) and y(t), the correlation rxy(t) or ryx(t) is referred to as cross-correlation.
Using the symbol ⋆⋆ to denote correlation, we define the two operations as

rxx(t) = x(t) ⋆⋆ x(t) = ∫_{−∞}^{∞} x(λ)x(λ − t) dλ        (6.39)

rxy(t) = x(t) ⋆⋆ y(t) = ∫_{−∞}^{∞} x(λ)y(λ − t) dλ        (6.40)

ryx(t) = y(t) ⋆⋆ x(t) = ∫_{−∞}^{∞} y(λ)x(λ − t) dλ        (6.41)
The variable t is often referred to as the lag. The definitions of cross-correlation are not standard, and some
authors prefer to switch the definitions of rxy (t) and ryx (t).
At t = 0, we have

rxy(0) = ∫_{−∞}^{∞} x(λ)y(λ) dλ = ryx(0)        (6.43)

Thus, rxy(0) = ryx(0). The cross-correlation also satisfies the inequality

|rxy(t)| ≤ √(rxx(0)ryy(0)) = √(Ex Ey)        (6.44)
where Ex and Ey represent the signal energy in x(t) and y(t), respectively.
Correlation as Convolution
The absence of folding actually implies that the correlation of x(t) and y(t) is equivalent to the convolution of x(t) with the folded version y(−t), and we have rxy(t) = x(t) ⋆⋆ y(t) = x(t) ⋆ y(−t).
Commutation
The absence of folding means that the correlation depends on which function is shifted and, in general, x(t) ⋆⋆ y(t) ≠ y(t) ⋆⋆ x(t). Since shifting one function to the right is actually equivalent to shifting the other function to the left by an equal amount, the correlation rxy(t) is related to ryx(t) by rxy(t) = ryx(−t).
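The relation rxy(t) = ryx(−t) is easy to confirm for sampled signals, since correlation is convolution with a folded signal. A Python sketch (the sequences are arbitrary):

```python
import numpy as np

# Correlating x against y, and y against x, gives time-reversed versions
# of the same sequence: r_xy(t) = r_yx(-t).
x = np.array([1.0, 2.0, 0.0, -1.0])
y = np.array([0.5, 1.0, -2.0])

# Correlation as convolution with a folded signal: r_xy = x * y(-t)
r_xy = np.convolve(x, y[::-1])
r_yx = np.convolve(y, x[::-1])

print(np.allclose(r_xy, r_yx[::-1]))   # True
```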
Periodic Correlation
The correlation of two periodic signals or power signals is defined in the same sense as periodic convolution:

rxy(t) = (1/T) ∫_T x(λ)y(λ − t) dλ        rxy(t) = lim_{T0→∞} (1/T0) ∫_{T0} x(λ)y(λ − t) dλ        (6.45)
The first form defines the correlation of periodic signals with identical periods T , which is also periodic with
the same period T . The second form is reserved for nonperiodic power signals or random signals.
6.10.2 Autocorrelation
The autocorrelation operation involves identical functions. It can thus be performed in any order and
represents a commutative operation.
Symmetry
Since rxy(t) = ryx(−t), we have rxx(t) = rxx(−t). This means that the autocorrelation of a real function is
even. The autocorrelation of an even function x(t) also equals the convolution of x(t) with itself, because
the folding operation leaves an even function unchanged.
Maximum Value
It turns out that the autocorrelation function is symmetric about the origin where it attains its maximum
value. It thus satisfies
rxx(t) ≤ rxx(0)        (6.46)

It follows that rxx(0), the value at the origin, is finite and nonnegative.
Periodic Autocorrelation
For periodic signals, we define periodic autocorrelation in much the same way as periodic convolution. If
we shift a periodic signal with period T past itself, the two line up after every period, and the periodic
autocorrelation also has period T .
Figure E6.16A The signal for Example 6.16(a) and its autocorrelation
(b) Consider the autocorrelation of x(t) = e^{−t}u(t). As we shift x(λ − t) = e^{−(λ−t)}u(λ − t) past x(λ) = e^{−λ}u(λ), we obtain two ranges (t < 0 and t > 0) over which the autocorrelation results are described as follows:

For t < 0, the product is nonzero for λ ≥ 0, and rxx(t) = ∫_0^∞ e^{−λ}e^{−(λ−t)} dλ = 0.5e^{t}

For t > 0, the product is nonzero for λ ≥ t, and rxx(t) = ∫_t^∞ e^{−λ}e^{−(λ−t)} dλ = 0.5e^{−t}

At t = 0, rxx(0) = 0.5, and together the two ranges give rxx(t) = 0.5e^{−|t|}.
(c) The cross-correlation of the signals x(t) and h(t), shown in Figure E6.16C, may be found using the
convolution of one signal and the folded version of the other. Observe that rxh(t) = rhx(−t).
(Figure: radar ranging. A target at range R returns the transmitted signal s(t) as the delayed echo s(t − t0); the echo is applied to a matched filter with impulse response h(t) = s(−t), whose output y(t) peaks at t = t0.)
A transmitter sends out an interrogating signal s(t), and the reflected and delayed signal (the echo) s(t − t0) is processed by a correlation receiver, or matched filter, whose impulse response is matched to the signal to obtain the target range. In fact, its impulse response is chosen as h(t) = s(−t), a folded version of the transmitted signal, in order to maximize the signal-to-noise ratio. The response y(t) of the matched filter is the convolution of the received echo and the folded signal h(t) = s(−t), or the correlation of s(t − t0) (the echo) and s(t) (the signal). This response attains a maximum at t = t0, which represents the time taken to cover the round-trip distance 2R. The target range R is then given by R = 0.5ct0, where c is the velocity of signal propagation.
Why not use the received signal directly to estimate the delay? The reason is that we may not be able
to detect the presence (let alone the exact onset) of the received signal because it is usually much weaker
than the transmitted signal and also contaminated by additive noise. However, if the noise is uncorrelated
with the original signal (as it usually is), their cross-correlation is very small (ideally zero), and the cross-
correlation of the original signal with the noisy echo yields a peak (at t = t0 ) that stands out and is much
easier to detect. Ideally, of course, we would like to transmit narrow pulses (approximating impulses) whose
autocorrelation attains a sharp peak.
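A matched-filter delay estimate can be simulated along these lines. A Python sketch (the pulse shape, delay, amplitudes, and noise level are all assumed values):

```python
import numpy as np

# Matched-filter sketch: the receiver correlates the noisy echo s(t - t0)
# with the transmitted pulse s(t); the correlation peaks at the delay t0.
rng = np.random.default_rng(0)
dt = 0.01
s = np.ones(100)                          # transmitted pulse, 1 s long

t0_index = 250                            # true delay of 2.5 s (assumed)
echo = np.zeros(1000)
echo[t0_index:t0_index + 100] = 0.2 * s   # weak echo
echo += 0.05 * rng.standard_normal(1000)  # additive noise

# Matched filter h(t) = s(-t): convolution with h is correlation with s
y = np.correlate(echo, s, mode="valid")
print(np.argmax(y) * dt)                  # close to the true delay of 2.5 s
```

Even though the echo is buried in noise, the correlation peak stands out and locates the delay, which is the point made in the text.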
CHAPTER 6 PROBLEMS
DRILL AND REINFORCEMENT
6.1 (Convolution Kernel) For each signal x(t), sketch x(λ) vs. λ and x(t − λ) vs. λ, and identify significant points along each axis.
(a) x(t) = r(t)  (b) x(t) = u(t − 2)  (c) x(t) = 2tri[0.5(t − 1)]  (d) x(t) = e^{−|t|}
6.2 (Convolution Concepts) Using the defining relation, compute y(t) = x(t) ⋆ h(t) at t = 0.
(a) x(t) = u(t − 1)  h(t) = u(t + 2)
(b) x(t) = u(t)  h(t) = tu(t − 1)
(c) x(t) = tu(t + 1)  h(t) = (t + 1)u(t)
(d) x(t) = u(t)  h(t) = cos(0.5πt)rect(0.5t)
6.3 (Analytical Convolution) Evaluate each convolution y(t) = x(t) ⋆ h(t) and sketch y(t).
(a) x(t) = e^{−t}u(t)  h(t) = r(t)
(b) x(t) = te^{−t}u(t)  h(t) = u(t)
(c) x(t) = e^{−t}u(t)  h(t) = cos(t)u(t)
(d) x(t) = e^{−t}u(t)  h(t) = cos(t)
(e) x(t) = 2t[u(t + 2) − u(t − 2)]  h(t) = u(t) − u(t − 4)
(f) x(t) = 2tu(t)  h(t) = rect(t/2)
(g) x(t) = r(t)  h(t) = (1/t)u(t − 1)
6.4 (Convolution with Impulses) Sketch the convolution y(t) = x(t) h(t) for each pair of signals
shown in Figure P6.4.
Figure P6.4 The signals x(t) and h(t) for Problem 6.4
6.5 (Convolution by Ranges) For each pair of signals x(t) and h(t) shown in Figure P6.5, establish
the convolution ranges. Then sketch x()h(t ) for each range, evaluate the convolution over each
range, and sketch the convolution result y(t).
Figure P6.5 The signals x(t) and h(t) for Problem 6.5
Chapter 6 Problems 161
6.7 (Properties) The step response of a system is s(t) = e^{−t}u(t). What is the system impulse response h(t)? Compute the response of this system to the following inputs.
(a) x(t) = r(t)  (b) x(t) = rect(t/2)  (c) x(t) = tri[(t − 2)/2]  (d) x(t) = δ(t + 1) − δ(t − 1)
6.8 (Properties) The step response of each system is s(t) and the input is x(t). Compute the response
of each system.
(a) s(t) = r(t) − r(t − 1)  x(t) = sin(2πt)u(t)
(b) s(t) = e^{−t}u(t)  x(t) = e^{−t}u(t)
6.10 (Cascaded Systems) Find the response y(t) of the following cascaded systems.
(a) x(t) = u(t) → h1(t) = e^{−t}u(t) → h2(t) = e^{−t}u(t) → y(t)
6.11 (Stability) Investigate the stability and causality of the following systems.
(a) h(t) = e^{−(t−1)}u(t − 1)  (b) h(t) = e^{−t}u(t + 1)  (c) h(t) = δ(t)
(d) h(t) = (1 − e^{−t})u(t)  (e) h(t) = δ(t) − e^{−t}u(t)  (f) h(t) = sinc(t − 1)
6.12 (Causality) Argue that the impulse response h(t) of a causal system must be zero for t < 0. Based
on this result, if the input to a causal system starts at t = t0 , at what time does the response start?
6.13 (Signal-Averaging Filter) Consider a signal-averaging filter whose impulse response is described by h(t) = (1/T)rect[(t − 0.5T)/T].
(a) What is the response of this filter to the unit step input x(t) = u(t)?
(b) What is the response of this filter to a periodic sawtooth signal x(t) with peak value A, duty
ratio D, and period T ?
6.14 (Periodic Extension) For each signal shown in Figure P6.14, sketch the periodic extension with
period T = 6 and T = 4.
Figure P6.14 The signals for Problem 6.14
6.15 (Convolution and Periodic Inputs) The voltage input to a series RC circuit with time constant τ = 1 is a rectangular pulse train starting at t = 0. The pulses are of unit width and unit height and
repeat every 2 seconds. The output is the capacitor voltage.
(a) Use convolution to compute the output at t = 1 s and t = 2 s.
(b) Assume that the input has been applied for a long time. What is the steady-state output?
6.16 (Periodic Convolution) Find and sketch the periodic convolution yp(t) = x(t) ⊛ h(t) of each pair
of periodic signals shown in Figure P6.16.
Figure P6.16 The periodic signals for Problem 6.16
6.17 (Inverse Systems) Given a system whose impulse response is h(t) = e^{−t}u(t), we wish to find the impulse response hI(t) of an inverse system such that h(t) ⋆ hI(t) = δ(t). The form that we require for the inverse system is hI(t) = K1δ(t) + K2δ′(t).
(a) For what values of K1 and K2 will h(t) ⋆ hI(t) = δ(t)?
(b) Is the inverse system stable? Is it causal?
(c) What is the impulse response hI(t) of the inverse system if h(t) = 2e^{−3t}u(t)?
6.18 (Correlation) Let x(t) = rect(t + 0.5) and h(t) = t rect(t − 0.5).
(a) Find the autocorrelation rxx (t).
(b) Find the autocorrelation rhh (t).
(c) Find the cross-correlation rhx (t).
(d) Find the cross-correlation rxh (t).
(e) How are the results of parts (c) and (d) related?
6.20 (Operations on the Impulse) Explain the difference between each of the following operations on the impulse δ(t − 1). Use sketches to plot results if appropriate.
(a) [e^{−t}u(t)]δ(t − 1)  (b) ∫_{−∞}^{∞} e^{−t}δ(t − 1) dt  (c) e^{−t}u(t) ⋆ δ(t − 1)
6.21 (Convolution) Compute and sketch the convolution of the following pairs of signals.
(a) x(t) = Σ_{k=−∞}^{∞} δ(t − k)  h(t) = rect(t)
(b) x(t) = Σ_{k=−∞}^{∞} δ(t − 3k)  h(t) = tri(t)
(c) x(t) = Σ_{k=−∞}^{∞} rect(t − 2k)  h(t) = rect(t)
6.22 (Impulse Response and Step Response) Find the step response s(t) of each system whose impulse
response h(t) is given.
(a) h(t) = rect(t − 0.5)  (b) h(t) = sin(2πt)u(t)
(c) h(t) = sin(2πt)rect(t − 0.5)  (d) h(t) = e^{−|t|}
6.23 (Convolution and System Response) Consider a system described by the differential equation y′(t) + 2y(t) = x(t).
(a) What is the impulse response h(t) of this system?
(b) Find its output if x(t) = e^{−2t}u(t) by convolution.
(c) Find its output if x(t) = e^{−2t}u(t) and y(0) = 0 by solving the differential equation.
(d) Find its output if x(t) = e^{−2t}u(t) and y(0) = 1 by solving the differential equation.
(e) Are any of the outputs identical? Should they be? Explain.
6.24 (System Response) Consider the two inputs and two circuits shown in Figure P6.24.
(a) Find the impulse response of each circuit.
(b) Use convolution to find the response of circuit 1 to input 1. Assume R = 1 Ω, C = 1 F.
(c) Use convolution to find the response of circuit 2 to input 1. Assume R = 1 Ω, C = 1 F.
(d) Use convolution to find the response of circuit 1 to input 2. Assume R = 1 Ω, C = 1 F.
(e) Use convolution to find the response of circuit 2 to input 2. Assume R = 1 Ω, C = 1 F.
(f) Use convolution to find the response of circuit 1 to input 1. Assume R = 2 Ω, C = 1 F.
(g) Use convolution to find the response of circuit 1 to input 2. Assume R = 2 Ω, C = 1 F.
Figure P6.24 The circuits for Problem 6.24
6.25 (Impulse Response and Step Response) The step response of a system is s(t) = δ(t). The input to the system is a periodic square wave described for one period by x(t) = sgn(t), −1 ≤ t ≤ 1. Sketch the system output.
6.26 (Impulse Response and Step Response) The input to a system is a periodic square wave with period T = 2 s described for one period by xp(t) = sgn(t), −1 ≤ t ≤ 1. The output is a periodic triangular wave described by yp(t) = tri(t) − 0.5, −1 ≤ t ≤ 1. What is the impulse response of the system? What is the response of this system to the single pulse x(t) = rect(t − 0.5)?
6.27 (Convolution) An RC lowpass filter has the impulse response h(t) = (1/τ)e^{−t/τ}u(t), where τ is the time constant. Find its response to the following inputs for τ = 0.5 and τ = 1.
(a) x(t) = e^{−2t}u(t)  (b) x(t) = e^{2t}u(−t)  (c) x(t) = e^{−2|t|}
6.28 (Convolution) Find the convolution of each pair of signals.
(a) x(t) = e^{−|t|}  h(t) = e^{−|t|}
(b) x(t) = e^{−t}u(t) − e^{t}u(−t)  h(t) = x(t)
(c) x(t) = e^{−t}u(t) − e^{t}u(−t)  h(t) = x(−t)
6.29 (Convolution by Ranges) Consider a series RC lowpass filter with τ = 1. Use convolution by ranges to find the capacitor voltage, its maximum value, and the time of maximum for each input x(t).
(a) x(t) = rect(t − 0.5)  (b) x(t) = t rect(t − 0.5)  (c) x(t) = (1 − t)rect(t − 0.5)
6.30 (Cascading) The impulse response of two cascaded systems equals the convolution of their impulse responses. Does the step response sC(t) of two cascaded systems equal s1(t) ⋆ s2(t), the convolution of their step responses? If not, how is sC(t) related to s1(t) and s2(t)?
6.31 (Cascading) System 1 compresses a signal by a factor of 2, and system 2 is an RC lowpass filter with τ = 1. Find the output of each cascaded combination. Will their outputs be identical? Should they be? Explain.
(a) x(t) = 2e^{−t}u(t) → system 1 → system 2 → y(t)
(b) x(t) = 2e^{−t}u(t) → system 2 → system 1 → y(t)
6.32 (Cascading) System 1 is a squaring circuit, and system 2 is an RC lowpass filter with τ = 1. Find the output of each cascaded combination. Will their outputs be identical? Should they be? Explain.
(a) x(t) = 2e^{−t}u(t) → system 1 → system 2 → y(t)
(b) x(t) = 2e^{−t}u(t) → system 2 → system 1 → y(t)
6.33 (Cascading) System 1 is a highpass RC circuit with h(t) = δ(t) − e^{−t}u(t), and system 2 is an RC lowpass filter with τ = 1. Find the output of each cascaded combination. Will their outputs be identical? Should they be? Explain.
(a) x(t) = 2e^{−t}u(t) → system 1 → system 2 → y(t)
(b) x(t) = 2e^{−t}u(t) → system 2 → system 1 → y(t)
6.34 (Cascading) System 1 is a highpass RC circuit with h(t) = δ(t) − e^{−t}u(t), and system 2 is an RC lowpass filter with τ = 1.
(a) Find the impulse response hP (t) of their parallel connection.
(b) Find the impulse response h12 (t) of the cascade of system 1 and system 2.
(c) Find the impulse response h21 (t) of the cascade of system 2 and system 1.
(d) Are h12 (t) and h21 (t) identical? Should they be? Explain.
(e) Find the impulse response hI (t) of a system whose parallel connection with h12 (t) yields hP (t).
6.35 (Cascading) System 1 is described by y(t) = x′(t) + x(t), and system 2 is an RC lowpass filter with τ = 1.
(a) What is the output of the cascaded system to the input x(t) = 2e^{−t}u(t)?
(b) What is the output of the cascaded system to the input x(t) = δ(t)?
(c) How are system 1 and system 2 related? Should they be? Explain.
Figure P6.36 The circuits for Problem 6.36
6.37 (Stability and Causality) Check for the causality and stability of each of the following systems.
(a) h(t) = e^{−(t+1)}u(t)  (b) h(t) = e^{−(t−1)}u(t + 1)
(c) h(t) = δ(t) − e^{−t}u(t)  (d) h(t) = δ(t) − e^{t}u(1 − t)
6.38 (Stability and Causality) Check for the causality and stability of the parallel connection and
cascade connection of each pair of systems.
(a) h1(t) = e^{−t}u(t)  h2(t) = δ(t)
(b) h1(t) = e^{−(t−3)}u(t − 3)  h2(t) = δ(t + 2)
(c) h1(t) = e^{−t}u(t)  h2(t) = e^{−(t−2)}u(t − 1)
(d) h1(t) = e^{−t}u(t)  h2(t) = e^{t}u(−t)
(e) h1(t) = e^{−|t|}  h2(t) = e^{−|t−1|}
(f) h1(t) = e^{−|t|}  h2(t) = e^{−|t+1|}
(g) h1(t) = e^{−|t+1|}  h2(t) = e^{−|t−1|}
(a) y′(t) = x(t)  (b) y′(t) + y(t) = x(t)  (c) y^(n)(t) = x(t)  (d) y(t) = x^(n)(t)
6.40 (Convolution and System Classification) The impulse response of three systems is
h1(t) = 2δ(t)        h2(t) = δ(t) + δ(t − 3)        h3(t) = e^{−t}u(t)
(a) Find the response of each to the input x(t) = u(t) − u(t − 1).
(b) For system 1, the input is zero at t = 2 s, and so is the response. Does the statement "zero output if zero input" apply to dynamic or instantaneous systems or both? Explain.
(c) Argue that system 1 is instantaneous. What about the other two?
(d) What must be the form of h(t) for an instantaneous system?
6.41 (Convolution and Smoothing) Convolution is usually a smoothing operation unless one signal is an impulse or its derivative, but exceptions occur even for smooth signals. Evaluate and comment on the duration and smoothing effects of the following convolutions.
(a) y(t) = rect(t) ⋆ tri(t)  (b) y(t) = rect(t) ⋆ δ(t)
(c) y(t) = rect(t) ⋆ δ′(t)  (d) y(t) = sinc(t) ⋆ sinc(t)
(e) y(t) = e^{−t²} ⋆ e^{−t²}  (f) y(t) = sin(2πt) ⋆ rect(t)
6.42 (Eigensignals) The input x(t) and response y(t) of two systems are given. Which of the systems are
linear, and why?
(a) x(t) = cos(t), y(t) = 0.5 sin(t − 0.25)
(b) x(t) = cos(t), y(t) = cos(2t)
6.43 (Eigensignals) If the input to a system is its eigensignal, the response has the same form as the eigensignal. Justify the following statements by computing the system response by convolution (if the impulse response is given) or by solving the given differential equation. You may pick convenient numerical values for the parameters.
(a) Every signal is an eigensignal of the system described by h(t) = Aδ(t).
(b) The signal x(t) = e^{jαt} is an eigensignal of any LTI system such as that described by the impulse response h(t) = e^{−t}u(t).
(c) The signal x(t) = cos(t) is an eigensignal of any LTI system described by a differential equation such as y′(t) + y(t) = x(t).
(d) The signal x(t) = sinc(αt) is an eigensignal of ideal filters described by h(t) = sinc(βt), β ≥ α.
6.44 (Eigensignals) Which of the following can be the eigensignal of an LTI system?
(a) x(t) = e^{−2t}u(t)  (b) x(t) = e^{j2t}  (c) x(t) = cos(2t)  (d) x(t) = e^{jt} + e^{j2t}
6.45 (Stability) Investigate the causality and stability of the following systems.
(a) h(t) = u(t)  (b) h(t) = e^{−2t}u(t)  (c) h(t) = δ(t − 1)
(d) h(t) = rect(t)  (e) h(t) = sinc(t)  (f) h(t) = sinc²(t)
6.46 (Invertibility) Determine which of the following systems are invertible and, for those that are, find
the impulse response of the inverse system.
(a) h(t) = e^{−2t}u(t)  (b) h(t) = δ(t − 1)  (c) h(t) = sinc(t)
6.47 (Periodic Extension) The periodic extension xpe(t) with period T has the same form and area as x(t). Use this concept to find the constants in the following assumed form for xpe(t) of each signal.
(a) Signal: x(t) = e^{−t/τ}u(t)  Periodic extension for 0 ≤ t ≤ T: xpe(t) = Ke^{−t/τ}
(b) Signal: x(t) = te^{−t/τ}u(t)  Periodic extension for 0 ≤ t ≤ T: xpe(t) = (A + Bt)e^{−t/τ}
6.48 (The Duration Property) The convolution duration usually equals the sum of the durations of the
convolved signals. But consider the following convolutions.
y1(t) = u(t) ⋆ sin(πt)[u(t) − u(t − 2)]        y2(t) = rect(t) ⋆ cos(2πt)u(t)
(a) Evaluate each convolution and find its duration. Is the duration infinite? If not, what causes it
to be finite?
(b) In the first convolution, replace the sine pulse by an arbitrary signal x(t) of zero area and finite
duration Td and argue that the convolution is nonzero only for a duration Td .
(c) In the second convolution, replace the cosine by an arbitrary periodic signal xp(t) with zero average value and period T = 1. Argue that the convolution is nonzero for only 1 unit.
6.49 (Convolutions that Replicate) Signals that replicate under self-convolution include the impulse,
sinc, Gaussian, and Lorentzian. For each of the following known results, determine the constant A
using the area property of convolution.
(a) δ(t) ⋆ δ(t) = Aδ(t)
(b) sinc(t) ⋆ sinc(t) = A sinc(t)
(c) e^{−t²} ⋆ e^{−t²} = Ae^{−t²/2}
(d) 1/(1 + t²) ⋆ 1/(1 + t²) = A/(1 + 0.25t²)
6.50 (Convolution and Moments) For each of the following signal pairs, find the moments m0 , m1 , and
m2 and verify each of the convolution properties based on moments (as discussed in the text).
(a) x(t) = h(t)  h(t) = rect(t)
(b) x(t) = e^{−t}u(t)  h(t) = e^{−2t}u(t)
6.51 (Central Limit Theorem) Show that the n-fold repeated convolution of the signal h(t) = e^{−t}u(t) with itself has the form

hn(t) = (t^n e^{−t}/n!)u(t)

(a) Show that hn(t) has a maximum at t = n.
(b) Assume a Gaussian approximation hn(t) ≈ gn(t) = K exp[−α(t − n)²]. Equating hn(t) and gn(t) at t = n, show that K = e^{−n}n^n/n!.
(c) Use the Stirling limit to show that K ≈ 1/√(2πn). The Stirling limit is defined by

lim_{n→∞} (n^n/n!)√n e^{−n} = 1/√(2π)

(d) Show that α = 1/(2n) by equating the areas of hn(t) and gn(t).
6.52 (Matched Filters) A folded, shifted version of a signal s(t) defines the impulse response of a matched
filter corresponding to s(t).
(a) Find and sketch the impulse response of a matched filter for the signal s(t) = u(t) u(t 1).
(b) Find the response y(t) of this matched filter to the signal x(t) = s(t − D), where D = 2 s corresponds to the signal delay.
(c) At what time tm does the response y(t) attain its maximum value, and how is tm related to the
signal delay D?
6.53 (Autocorrelation Functions) A signal x(t) can qualify as an autocorrelation function only if it
satisfies certain properties. Which of the following qualify as valid autocorrelation functions, and why?
(a) rxx(t) = e^{−t}u(t)  (b) rxx(t) = e^{−|t|}  (c) rxx(t) = te^{−t}u(t)
(d) rxx(t) = |t|e^{−|t|}  (e) rxx(t) = sinc²(t)  (f) rxx(t) = 1/(1 + t²)
(g) rxx(t) = t/(1 + t²)  (h) rxx(t) = (1 + t²)/(4 + t²)  (i) rxx(t) = (t² − 1)/(t² + 4)
6.54 (Correlation) Find the cross-correlations rxh (t) and rhx (t) for each pair of signals.
(a) x(t) = e^{−t}u(t)  h(t) = e^{−t}u(t)
(b) x(t) = e^{−t}u(t)  h(t) = e^{t}u(−t)
(c) x(t) = e^{−|t|}  h(t) = e^{−|t|}
(d) x(t) = e^{−(t−1)}u(t − 1)  h(t) = e^{−t}u(t)
6.55 (Animation of Convolution) Use ctcongui to animate the convolution of each of the following
pairs of signals and determine whether it is possible to visually identify the convolution ranges.
(a) x(t) = rect(t)  h(t) = rect(t)
(b) x(t) = rect(t)  h(t) = e^{−t}[u(t) − u(t − 5)]
(c) x(t) = rect(t)  h(t) = tri(t)
(d) x(t) = e^{−t}u(t)  h(t) = (1 − t)[u(t) − u(t − 1)]
(e) x(t) = e^{−t}u(t)  h(t) = t[u(t) − u(t − 1)]
6.56 (Convolution of Sinc Functions) It is claimed that the convolution of the two identical signals x(t) = h(t) = sinc(t) is y(t) = x(t) ⋆ h(t) = sinc(t). Let both x(t) and h(t) be described over the symmetric limits −α ≤ t ≤ α. Use ctcongui to animate the convolution of x(t) and h(t) for α = 1, α = 5, and α = 10 (you may want to choose a smaller time step in ctcongui for larger values of α). Does the convolution begin to approach the required result as α increases?
Chapter 7
DISCRETE CONVOLUTION
By superposition, the response to x[n] is the sum of scaled and shifted versions of the impulse response:
y[n] = Σ_{k=−∞}^{∞} x[k]h[n − k] = x[n] ⋆ h[n]        (7.2)
This is the defining relation for the convolution operation, which we call linear convolution, and denote
by y[n] = x[n] ⋆ h[n] (or by x[n] ∗ h[n] in the figures) in this book. The expression for computing y[n] is called the convolution sum. As with continuous-time convolution, the order in which we perform the operation does not matter, and we can interchange the arguments of x and h without affecting the result. Thus,

y[n] = Σ_{k=−∞}^{∞} x[n − k]h[k] = h[n] ⋆ x[n]        (7.3)

Notation: We use x[n] ⋆ h[n] to denote Σ_{k=−∞}^{∞} x[k]h[n − k]
Note that (n + 1)u[n] also equals r[n + 1], and thus u[n] ⋆ u[n] = r[n + 1].
(b) Let x[n] = h[n] = a^n u[n], |a| < 1. Then x[k] = a^k u[k] and h[n − k] = a^{n−k}u[n − k]. The lower limit on the convolution sum simplifies to k = 0 (because u[k] = 0, k < 0), the upper limit to k = n (because u[n − k] = 0, k > n), and we get

y[n] = Σ_{k=−∞}^{∞} a^k a^{n−k} u[k]u[n − k] = Σ_{k=0}^{n} a^k a^{n−k} = a^n Σ_{k=0}^{n} 1 = (n + 1)a^n u[n]
(d) Let x[n] = (0.8)^n u[n] and h[n] = (0.4)^n u[n]. Then

y[n] = Σ_{k=−∞}^{∞} (0.8)^k u[k](0.4)^{n−k}u[n − k] = Σ_{k=0}^{n} (0.8)^k(0.4)^{n−k} = (0.4)^n Σ_{k=0}^{n} 2^k

Using the closed-form result for the sum, we get y[n] = (0.4)^n [(1 − 2^{n+1})/(1 − 2)] = (0.4)^n(2^{n+1} − 1)u[n].
7.2 Convolution Properties 171
(e) Let x[n] = nu[n + 1] and h[n] = a^n u[n], |a| < 1. With h[n − k] = a^{n−k}u[n − k] and x[k] = ku[k + 1], the lower and upper limits on the convolution sum become k = −1 and k = n. Then

y[n] = Σ_{k=−1}^{n} ka^{n−k} = −a^{n+1} + a^n Σ_{k=0}^{n} ka^{−k}
     = −a^{n+1} + [a^{n+1}/(1 − a)²][1 − (n + 1)a^{−n} + na^{−n−1}]
The results of analytical discrete convolution often involve finite or infinite summations. In the last part, we
used known results to generate the closed-form solution. This may not always be feasible, however.
The sums of the samples in x[n], h[n], and y[n] are related by

Σ_{n=−∞}^{∞} y[n] = (Σ_{n=−∞}^{∞} x[n])(Σ_{n=−∞}^{∞} h[n])        (7.5)
For causal systems (h[n] = 0, n < 0) and causal signals (x[n] = 0, n < 0), y[n] is also causal. Thus,

y[n] = x[n] ⋆ h[n] = h[n] ⋆ x[n] = Σ_{k=0}^{n} x[k]h[n − k] = Σ_{k=0}^{n} h[k]x[n − k]        (7.6)
k=0 k=0
An extension of this result is that the convolution of two left-sided signals is also left-sided and the convolution
of two right-sided signals is also right-sided.
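The sum relation (7.5) is easy to confirm numerically; the sketch below (not from the text, using hypothetical test sequences) checks it with NumPy.

```python
import numpy as np

# Check of (7.5): the samples of y[n] = x[n] * h[n] sum to
# (sum of the samples of x) times (sum of the samples of h).
x = np.array([2, -1, 3])      # hypothetical test data
h = np.array([1, 2, 2, 3])
y = np.convolve(x, h)
print(y.sum(), x.sum() * h.sum())  # both equal 32
```

This property is a handy sanity check on a hand-computed convolution: if the sums do not match, the result is wrong.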
(b) We find y[n] = u[n] ⋆ x[n]. Since the step response is the running sum of the impulse response, the
convolution of a signal x[n] with a unit step is the running sum of the signal x[n]:
x[n] ⋆ u[n] = Σ_{k=−∞}^{n} x[k]
(c) We find y[n] = rect(n/2N) ⋆ rect(n/2N), where rect(n/2N) = u[n + N] − u[n − N − 1].
The convolution contains four terms:

y[n] = u[n+N] ⋆ u[n+N] − u[n+N] ⋆ u[n−N−1] − u[n−N−1] ⋆ u[n+N] + u[n−N−1] ⋆ u[n−N−1]

Using the result u[n] ⋆ u[n] = r[n + 1] and the shifting property, we obtain

y[n] = r[n + 2N + 1] − 2r[n] + r[n − 2N − 1] = (2N + 1)tri[n/(2N + 1)]
The convolution of two rect functions (with identical arguments) is thus a tri function.
h[n] = {1, 2, 2, 3}
x[n] = {2, −1, 3}

Input                Response
2δ[n]                2h[n] = {2, 4, 4, 6}
−δ[n − 1]            −h[n − 1] = {−1, −2, −2, −3}
3δ[n − 2]            3h[n − 2] = {3, 6, 6, 9}
Sum = x[n]           Sum = y[n] = {2, 3, 5, 10, 3, 9}
7.3 Convolution of Finite Sequences 173
So, y[n] = {2, 3, 5, 10, 3, 9} = 2δ[n] + 3δ[n − 1] + 5δ[n − 2] + 10δ[n − 3] + 3δ[n − 4] + 9δ[n − 5].
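The superposition view of this example can be coded directly: write x[n] as a sum of weighted, shifted impulses, and add the correspondingly weighted, shifted copies of h[n]. A sketch (not from the text):

```python
import numpy as np

# Superposition: x[k] delta[n-k] at the input produces x[k] h[n-k] at the output.
h = np.array([1, 2, 2, 3])
x = np.array([2, -1, 3])

y = np.zeros(len(x) + len(h) - 1)
for k, xk in enumerate(x):
    y[k:k + len(h)] += xk * h      # add the shifted, scaled copy of h[n]
print(y.astype(int))               # [ 2  3  5 10  3  9]
```

This is exactly the sum-by-column tabulation above, one row per input impulse.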
Figure E7.3A The discrete convolution for Example 7.3(a)
(b) Let h[n] = {2, 5, 0, 4} and x[n] = {4, 1, 3}.
We note that the convolution starts at n = −3 and use this to set up the index array and generate the
convolution as follows:
n        −3   −2   −1    0    1    2
h[n]      2    5    0    4
x[n]      4    1    3
          8   20    0   16
               2    5    0    4
                    6   15    0   12
y[n]      8   22   11   31    4   12

The marker is placed by noting that the convolution starts at n = −3, and we get

y[n] = {8, 22, 11, 31, 4, 12}, with the marker (n = 0) on the sample 31
(c) (Response of an Averaging Filter) Let x[n] = {2, 4, 6, 8, 10, 12, . . .}.
What system will result in the response y[n] = {1, 3, 5, 7, 9, 11, . . .}?
At each instant, the response is the average of the input and its previous value. This system describes
an averaging or moving average filter. Its difference equation is simply y[n] = 0.5(x[n] + x[n − 1]).
Its impulse response is thus h[n] = 0.5{δ[n] + δ[n − 1]}, or h[n] = {0.5, 0.5}.
Using discrete convolution, we find the response as follows:
x:    2    4    6    8   10   12   ...
h:  0.5  0.5
      1    2    3    4    5    6   ...
           1    2    3    4    5    6   ...
y:    1    3    5    7    9   11   ...
This result is indeed the averaging operation we expected.
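The same averaging operation can be confirmed in a few lines of Python (a sketch, not part of the text); only end effects differ from the tabulation above.

```python
import numpy as np

# 2-point moving average y[n] = 0.5(x[n] + x[n-1]) as convolution with {0.5, 0.5}.
x = np.array([2, 4, 6, 8, 10, 12])
h = np.array([0.5, 0.5])
y = np.convolve(x, h)
# First six samples: 1 3 5 7 9 11; the trailing sample (6) is an end effect,
# because the finite input is treated as zero beyond its last sample.
print(y)
```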
y[3] = sum of products = 15 + 0 + 16 = 31    y[4] = sum of products = 0 + 4 = 4    y[5] = sum of products = 12
Figure E7.4 The discrete signals for Example 7.4 and their convolution (the folded sequence {3, 1, 4} slides past {2, 5, 0, 4}; at each shift, the convolution equals the sum of the pointwise products)
h(z) = 2z^3 + 5z^2 + 0z + 4        x(z) = 4z^2 + 1z + 3
h1[n] = {2, 0, 5, 0, 0, 0, 4}      x1[n] = {4, 0, 1, 0, 3}
h1(z) = 2z^6 + 5z^4 + 0z^2 + 4     x1(z) = 4z^4 + 1z^2 + 3
If we append zeros to one of the convolved sequences, the convolution result will not change but will show
as many appended zeros. In fact, the zeros may be appended to the beginning or end of a sequence and
will appear at the corresponding location in the convolution result.
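The polynomial view also explains zero interpolation: replacing z by z² in the polynomials zero-interpolates the sequences, and the product y(z²) is the zero-interpolated convolution. A numerical sketch (not from the text):

```python
import numpy as np

# Convolving the zero-interpolated sequences h1[n], x1[n] gives the
# zero-interpolated version of the original convolution y[n].
h = np.array([2, 5, 0, 4])
x = np.array([4, 1, 3])
y = np.convolve(h, x)

h1 = np.zeros(2 * len(h) - 1); h1[::2] = h   # {2, 0, 5, 0, 0, 0, 4}
x1 = np.zeros(2 * len(x) - 1); x1[::2] = x   # {4, 0, 1, 0, 3}
y1 = np.convolve(h1, x1)

assert np.allclose(y1[::2], y)    # even-index samples reproduce y[n]
assert not y1[1::2].any()         # odd-index samples are all zero
print("zero-interpolation property holds")
```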
y[n] = y1[n] ⋆ h2[n] = (x[n] ⋆ h1[n]) ⋆ h2[n] = x[n] ⋆ (h1[n] ⋆ h2[n])    (7.7)
If we wish to replace the cascaded system by an equivalent LTI system with impulse response h[n] such that
y[n] = x[n] ⋆ h[n], it follows that h[n] = h1[n] ⋆ h2[n]. Generalizing this result, the impulse response h[n] of
N ideally cascaded LTI systems is simply the convolution of the N individual impulse responses.
The overall impulse response of systems in parallel equals the sum of the individual impulse responses, as
shown in Figure 7.1:
The impulse response of the first system is h1[n] = δ[n] − 0.5δ[n − 1]. The overall impulse response hC[n]
is given by the convolution

hC[n] = (δ[n] − 0.5δ[n − 1]) ⋆ (0.5)^n u[n] = (0.5)^n u[n] − 0.5(0.5)^{n−1} u[n − 1]

This simplifies to

hC[n] = (0.5)^n (u[n] − u[n − 1]) = (0.5)^n δ[n] = δ[n]
What this means is that the overall system output equals the applied input. The second system thus acts
as the inverse of the first.
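This inverse-system relation is easy to confirm numerically (a sketch, not from the text; the truncation length 20 is arbitrary):

```python
import numpy as np

# Cascade of h1[n] = delta[n] - 0.5 delta[n-1] with h2[n] = (0.5)^n u[n]:
# the overall impulse response should be delta[n].
n = np.arange(20)
h1 = np.zeros(20); h1[0], h1[1] = 1.0, -0.5
h2 = 0.5**n
hc = np.convolve(h1, h2)[:20]     # first 20 samples are exact

assert np.allclose(hc, np.eye(1, 20).ravel())   # delta[n]
print("the cascade acts as an identity system")
```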
In other words, h[n] must be absolutely summable. This is both a necessary and sufficient condition, and if
met we are assured a stable system. We always have a stable system if h[n] is an energy signal. Since the
impulse response is defined only for linear systems, the stability of nonlinear systems must be investigated
by other means.
7.4.1 Causality
In analogy with analog systems, causality of discrete-time systems implies a non-anticipating system with
an impulse response h[n] = 0, n < 0. This ensures that an input x[n]u[n − n0] starting at n = n0 results in
a response y[n] also starting at n = n0 (and not earlier). This follows from the convolution sum:

y[n] = Σ_{k=−∞}^{∞} x[k]u[k − n0]h[n − k]u[n − k] = Σ_{k=n0}^{n} x[k]h[n − k]    (7.12)
(c) A filter described by the difference equation y[n] − 0.5y[n − 1] = nx[n] is causal but time varying. It
is also unstable. If we apply a step input u[n] (bounded input), then y[n] = nu[n] + 0.5y[n − 1]. The
term nu[n] grows without bound and makes this system unstable. We caution you that this approach
is not a formal way of checking for the stability of time-varying systems.
Index n    0   1   2   3   4   5   6   7   8   9   10
x[n]       1   2  −3   1   2  −3   1   2  −3   1   ...
h[n]       1   1
           1   2  −3   1   2  −3   1   2  −3   1   ...
               1   2  −3   1   2  −3   1   2  −3   ...
y[n]       1   3  −1  −2   3  −1  −2   3  −1  −2   ...
The convolution y[n] is periodic with period N = 3, except for start-up effects (which last for one
period). One period of the convolution is y[n] = {−2, 3, −1}.
(b) Let x[n] = {1, 2, −3, 1, 2, −3, 1, 2, −3, . . .} and h[n] = {1, 1, 1}.
The convolution y[n] = x[n] ⋆ h[n], using the sum-by-column method, is found as follows:
7.5 System Response to Periodic Inputs 179
Index n    0   1   2   3   4   5   6   7   8   9   10
x[n]       1   2  −3   1   2  −3   1   2  −3   1   ...
h[n]       1   1   1
           1   2  −3   1   2  −3   1   2  −3   1   ...
               1   2  −3   1   2  −3   1   2  −3   ...
                   1   2  −3   1   2  −3   1   2   ...
y[n]       1   3   0   0   0   0   0   0   0   0   ...
Except for start-up effects, the convolution is zero. The system h[n] = {1, 1, 1} is a moving average
filter. It extracts the 3-point running sum, which is always zero for the given periodic signal x[n].
One way to find the system response to periodic inputs is to find the response to one period of the input and
then use superposition. In analogy with analog signals, if we add an absolutely summable signal (or energy
signal) x[n] and its infinitely many replicas shifted by multiples of N , we obtain a periodic signal with period
N , which is called the periodic extension of x[n]:
xpe[n] = Σ_{k=−∞}^{∞} x[n + kN]    (7.13)
An equivalent way of finding one period of the periodic extension is to wrap around N -sample sections of
x[n] and add them all up. If x[n] is shorter than N , we obtain one period of its periodic extension simply
by padding x[n] with zeros (to increase its length to N ).
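The wraparound recipe translates directly into code. A sketch (not from the text; the function name is ours) that pads x[n] to a multiple of N, slices it into N-sample sections, and adds them:

```python
import numpy as np

# One period of the periodic extension x_pe[n]: wrap N-sample sections of
# x[n] around and add them (zero-padding x[n] if its length is not a
# multiple of N).
def periodic_extension(x, N):
    x = np.asarray(x, dtype=float)
    pad = (-len(x)) % N                     # zeros needed to fill the last section
    x = np.concatenate([x, np.zeros(pad)])
    return x.reshape(-1, N).sum(axis=0)

# Wrapping {2, 1, 1, 3, 1} with N = 3 gives {5, 2, 1}:
print(periodic_extension([2, 1, 1, 3, 1], 3))   # [5. 2. 1.]
```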
The methods for finding the response of a discrete-time system to periodic inputs rely on the concepts
of periodic extension and wraparound. In analogy with the continuous case, we find the regular convolution
due to one period of the input and use wraparound to generate one period of the periodic output.
(b) We find the response yp [n] of a moving average filter described by h[n] = {2, 1, 1, 3, 1} to a periodic
signal whose one period is xp [n] = {2, 1, 3}, with N = 3, using two methods.
1. The regular convolution of one period of each sequence gives

y[n] = {2, 1, 3} ⋆ {2, 1, 1, 3, 1} = {4, 4, 9, 10, 8, 10, 3}

To find yp[n], values past N = 3 are wrapped around and summed to give

                                        4   4   9
{4, 4, 9, 10, 8, 10, 3} = wrap around = 10  8  10  = sum = {17, 12, 19}
                                        3
2. In analogy with continuous convolution, we could also create the periodic extension hp[n], with
N = 3, and use wraparound to get hp[n] = {5, 2, 1}. The regular convolution of one period of
each sequence gives y[n] = {2, 1, 3} ⋆ {5, 2, 1} = {10, 9, 19, 7, 3}. This result is then wrapped
around past N = 3 samples and gives yp[n] = {17, 12, 19}, as before.
yp[n] = xp[n] ⊛ hp[n] = hp[n] ⊛ xp[n] = Σ_{k=0}^{N−1} xp[k]hp[n − k] = Σ_{k=0}^{N−1} hp[k]xp[n − k]    (7.14)
An averaging factor of 1/N is sometimes included with the summation. Periodic convolution can be imple-
mented using wraparound. We find the linear convolution of one period of xp [n] and hp [n], which will have
(2N 1) samples. We then extend its length to 2N (by appending a zero), slice it in two halves (of length
N each), line up the second half with the first, and add the two halves to get the periodic convolution.
Then, we append a zero, wrap around the last four samples, and add.
Index n 0 1 2 3
First half of y[n] 1 2 4 4
Wrapped around half of y[n] 5 4 1 0
Periodic convolution yp [n] 6 6 5 4
(b) Find the periodic convolution of xp [n] = {1, 2, 3} and hp [n] = {1, 0, 2}, with period N = 3.
The regular convolution is easily found to be yR [n] = {1, 2, 5, 4, 6}.
Appending a zero and wrapping around the last three samples gives yp [n] = {5, 8, 5}.
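The pad-and-wrap recipe for periodic convolution is short enough to code directly (a sketch, not from the text; the function name is ours):

```python
import numpy as np

# Periodic convolution by regular convolution plus wraparound: convolve one
# period of each signal (2N-1 samples), append a zero to make 2N samples,
# slice into two halves of N, and add them.
def periodic_conv(x, h):
    N = len(x)
    y = np.convolve(x, h)            # regular convolution, 2N-1 samples
    y = np.concatenate([y, [0.0]])   # extend to 2N samples
    return y[:N] + y[N:]             # line up the halves and add

# {1, 2, 3} (*) {1, 0, 2}: regular convolution {1, 2, 5, 4, 6} wraps to {5, 8, 5}.
print(periodic_conv([1, 2, 3], [1, 0, 2]))   # [5. 8. 5.]
```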
Shifting the folded sequence turns it clockwise. At each turn, the convolution equals the sum of the
pairwise products. This approach clearly brings out the cyclic nature of periodic convolution.
182 Chapter 7 Discrete Convolution
Note that each diagonal of the circulant matrix has equal values. Such a constant diagonal matrix is also
called a Toeplitz matrix. Its matrix product with an N × 1 column matrix h describing h[n] yields the
periodic convolution y = Ch as an N × 1 column matrix.
(a) The circulant matrix Cx and periodic convolution y1[n] are given by

       [ 1  2  0 ]        [ 1 ]            [ 1  2  0 ][ 1 ]   [ 5 ]
Cx =   [ 0  1  2 ]    h = [ 2 ]    y1[n] = [ 0  1  2 ][ 2 ] = [ 8 ]
       [ 2  0  1 ]        [ 3 ]            [ 2  0  1 ][ 3 ]   [ 5 ]

Comment: Though not required, normalization by N = 3 gives yp1[n] = y1[n]/3 = {5/3, 8/3, 5/3}.
(b) The periodic convolution y2[n] of x[n] and h[n] over a two-period window yields

       [ 1  2  0  1  2  0 ]        [ 1 ]            [ 10 ]
       [ 0  1  2  0  1  2 ]        [ 2 ]            [ 16 ]
C2 =   [ 2  0  1  2  0  1 ]   h2 = [ 3 ]   y2[n] =  [ 10 ]
       [ 1  2  0  1  2  0 ]        [ 1 ]            [ 10 ]
       [ 0  1  2  0  1  2 ]        [ 2 ]            [ 16 ]
       [ 2  0  1  2  0  1 ]        [ 3 ]            [ 10 ]

We see that y2[n] has double the length (and values) of y1[n], but it is still periodic with N = 3.
Comment: Normalization by a two-period window width (6 samples) gives

yp2[n] = y2[n]/6 = {5/3, 8/3, 5/3, 5/3, 8/3, 5/3}

Note that one period (N = 3) of yp2[n] is identical to the normalized result yp1[n] of part (a).
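The circulant-matrix construction takes only a few lines with NumPy (a sketch, not from the text); each column of Cx is a circular shift of the sequence x[n].

```python
import numpy as np

# Circulant-matrix form of periodic convolution: y = Cx h, where column k of
# Cx is x[n] circularly shifted down by k samples.
x = np.array([1, 0, 2])
h = np.array([1, 2, 3])

Cx = np.column_stack([np.roll(x, k) for k in range(len(x))])
print(Cx)        # rows: [1 2 0], [0 1 2], [2 0 1]
print(Cx @ h)    # [5 8 5]
```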
7.7 Connections: Discrete Convolution and Transform Methods 183
In this complex exponential z^n, the quantity z has the general form z = re^{j2πF}.
The response of a linear system to z^n is Cz^n, where C is a (possibly complex) constant.
The quantity Hp(F) describes the discrete-time Fourier transform (DTFT) or discrete-time fre-
quency response of h[n]. Unlike the analog system function H(f), the DT system function Hp(F) is
periodic with a period of unity because e^{j2πkF} = e^{j2πk(F+1)}. This periodicity is also a direct consequence
of the discrete nature of h[n].
As with the impulse response h[n], which is just a signal, any signal x[n] may similarly be described by
its DTFT Xp(F). The response y[n] = x[n] ⋆ h[n] may then be transformed to its DTFT Yp(F) to give

Yp(F) = Σ_{k=−∞}^{∞} y[k]e^{−j2πFk} = Hp(F)Xp(F)    (7.18)
Once again, convolution in the time domain corresponds to multiplication in the frequency domain.
The response equals the input (eigensignal) modified by the system function H(z), where

H(z) = Σ_{k=−∞}^{∞} h[k]z^{−k}    (two-sided z-transform)    (7.20)
The complex quantity H(z) describes the z-transform of h[n] and is not, in general, periodic in z. Denoting
the z-transform of x[n] and y[n] by X(z) and Y(z), we write

Y(z) = Σ_{k=−∞}^{∞} y[k]z^{−k} = H(z)X(z)    (7.21)
The DTFT is thus the z-transform evaluated on the unit circle |z| = 1.
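The convolution-multiplication correspondence (7.18) can be spot-checked numerically for finite sequences by evaluating the DTFT sum at a few frequencies. A sketch, not from the text:

```python
import numpy as np

# Evaluate the DTFT sum X(F) = sum_k x[k] exp(-j 2 pi F k) for a finite
# sequence assumed to start at n = 0, and verify Yp(F) = Hp(F) Xp(F).
def dtft(x, F):
    k = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * F * k))

x = np.array([2.0, -1.0, 3.0])     # hypothetical test sequences
h = np.array([1.0, 2.0, 2.0, 3.0])
y = np.convolve(x, h)

for F in (0.0, 0.1, 0.25, 0.5):
    assert np.isclose(dtft(y, F), dtft(x, F) * dtft(h, F))
print("Yp(F) = Hp(F) Xp(F) verified")
```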
7.8 Deconvolution
Given the system impulse response h[n], the response y[n] of the system to an input x[n] is simply the
convolution of x[n] and h[n]. Given x[n] and y[n] instead, how do we find h[n]? This situation arises very
often in practice and is referred to as deconvolution or system identification.
For discrete-time systems, we have a partial solution to this problem. Since discrete convolution may
be thought of as polynomial multiplication, discrete deconvolution may be regarded as polynomial division.
One approach to discrete deconvolution is to use the idea of long division, a familiar process, illustrated in
the following example.
The polynomial h(w) may be deconvolved out of x(w) and y(w) by performing the division y(w)/x(w):
If all goes well, we need to evaluate h[n] only at M − N + 1 points, where M and N are the lengths of y[n]
and x[n], respectively.
Naturally, problems arise if a remainder is involved. This may well happen in the presence of noise,
which could modify the values in the output sequence even slightly. In other words, the approach is quite
susceptible to noise or roundoff error and not very practical.
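In the noise-free case, the long-division idea works mechanically; numpy.polydiv divides coefficient sequences (highest power first), which is exactly polynomial division. A sketch, not from the text, using hypothetical sequences:

```python
import numpy as np

# Deconvolution as polynomial division: recover h[n] from y[n] = x[n] * h[n].
x = np.array([4.0, 1.0, 3.0])
h = np.array([2.0, 5.0, 0.0, 4.0])
y = np.convolve(x, h)              # the "measured" output

h_rec, remainder = np.polydiv(y, x)
assert np.allclose(h_rec, h) and np.allclose(remainder, 0)
print(h_rec)                       # [2. 5. 0. 4.]
```

The quotient has exactly M − N + 1 = 6 − 3 + 1 = 4 samples, as stated above; adding even small noise to y makes the remainder nonzero and the recovered h[n] unreliable.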
rhx[n] = h[n] ⋆⋆ x[n] = Σ_{k=−∞}^{∞} h[k]x[k − n] = Σ_{k=−∞}^{∞} h[k + n]x[k]    (7.28)
Some authors prefer to switch the definitions of rxh [n] and rhx [n].
To find rxh[n], we line up the last element of h[n] with the first element of x[n] and start shifting h[n]
past x[n], one index at a time. We sum the pointwise product of the overlapping values to generate the
correlation at each index. This is equivalent to performing the convolution of x[n] and the folded signal
h[−n]. The starting index of the correlation equals the sum of the starting indices of x[n] and h[−n].
Similarly, rhx[n] equals the convolution of x[−n] and h[n], and its starting index equals the sum of the
starting indices of x[−n] and h[n]. However, rxh[n] does not equal rhx[n]. The two are folded versions of
each other and related by rxh[n] = rhx[−n].
7.9.1 Autocorrelation
The correlation rxx[n] of a signal x[n] with itself is called the autocorrelation. It is an even symmetric
function (rxx[n] = rxx[−n]) with a maximum at n = 0 and satisfies the inequality |rxx[n]| ≤ rxx[0].
Correlation is an eective method of detecting signals buried in noise. Noise is essentially uncorrelated
with the signal. This means that if we correlate a noisy signal with itself, the correlation will be due only to
the signal (if present) and will exhibit a sharp peak at n = 0.
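This peak-at-zero behavior is easy to demonstrate (a sketch, not from the text; the signal, noise level, and seed are arbitrary choices of ours):

```python
import numpy as np

# Autocorrelation as convolution with the folded signal: r_ss[n] = s[n] * s[-n].
# For a sinusoid buried in uncorrelated noise, the peak occurs at lag n = 0.
rng = np.random.default_rng(0)
x = np.sin(0.1 * np.pi * np.arange(200))       # clean periodic signal
s = x + rng.uniform(-1, 1, x.size)             # noisy observation

rss = np.convolve(s, s[::-1])                  # correlation via folding
lag0 = len(s) - 1                              # index of lag n = 0 in the result
assert rss[lag0] == rss.max()                  # |r[n]| <= r[0]
print("autocorrelation peaks at n = 0")
```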
Since autocorrelation is an even symmetric function, we have rxx[n] = a^{|n|}/(1 − a^2).
(b) Let x[n] = a^n u[n], |a| < 1, and y[n] = rect(n/2N). To find rxy[n], we shift y[k] and sum the products
over different ranges. Since y[k − n] shifts the pulse to the right over the limits (−N + n, N + n), the
correlation rxy[n] equals zero until n = −N. We then obtain

−N ≤ n ≤ N − 1 (partial overlap):  rxy[n] = Σ_{k=0}^{N+n} x[k]y[k − n] = Σ_{k=0}^{N+n} a^k = (1 − a^{N+n+1})/(1 − a)

n ≥ N (total overlap):  rxy[n] = Σ_{k=n−N}^{n+N} a^k = a^{n−N} Σ_{m=0}^{2N} a^m = a^{n−N} (1 − a^{2N+1})/(1 − a)
As with discrete periodic convolution, an averaging factor of 1/N is sometimes included in the summation.
We can also find the periodic correlation rxh [n] using convolution and wraparound, provided we use one
period of the folded, periodic extension of the sequence h[n].
CHAPTER 7 PROBLEMS
DRILL AND REINFORCEMENT
7.1 (Folding) For each signal x[n], sketch g[k] = x[3 − k] vs. k and h[k] = x[2 + k] vs. k.
(a) x[n] = {1, 2, 3, 4} (b) x[n] = {3, 3, 3, 2, 2, 2}
7.2 (Closed-Form Convolution) Find the convolution y[n] = x[n] ⋆ h[n] for the following:
(a) x[n] = u[n]            h[n] = u[n]
(b) x[n] = (0.8)^n u[n]    h[n] = (0.4)^n u[n]
(c) x[n] = (0.5)^n u[n]    h[n] = (0.5)^n {u[n + 3] − u[n − 4]}
(d) x[n] = α^n u[n]        h[n] = α^n u[n]
(e) x[n] = α^n u[n]        h[n] = β^n u[n]
(f) x[n] = α^n u[n]        h[n] = rect(n/2N)
7.3 (Convolution of Finite Sequences) Find the convolution y[n] = x[n] ⋆ h[n] for each of the following
signal pairs. Use a marker to indicate the origin n = 0.
(a) x[n] = {1, 2, 0, 1} h[n] = {2, 2, 3}
(b) x[n] = {0, 2, 4, 6} h[n] = {6, 4, 2, 0}
(c) x[n] = {3, 2, 1, 0, 1} h[n] = {4, 3, 2}
(d) x[n] = {3, 2, 1, 1, 2} h[n] = {4, 2, 3, 2}
(e) x[n] = {3, 0, 2, 0, 1, 0, 1, 0, 2} h[n] = {4, 0, 2, 0, 3, 0, 2}
(f ) x[n] = {0, 0, 0, 3, 1, 2} h[n] = {4, 2, 3, 2}
7.4 (Properties) Let x[n] = h[n] = {3, 4, 2, 1}. Compute the following:
(a) y[n] = x[n] ⋆ h[n]            (b) g[n] = x[n] ⋆ h[−n]
(c) p[n] = x[−n] ⋆ h[n]           (d) f[n] = x[−n] ⋆ h[−n]
(e) r[n] = x[n − 1] ⋆ h[n + 1]    (f) s[n] = x[n − 1] ⋆ h[n + 4]
7.5 (Properties) Let x[n] = h[n] = {2, 6, 0, 4}. Compute the following:
(a) y[n] = x[2n] ⋆ h[2n]
(b) Find g[n] = x[n/2] ⋆ h[n/2], assuming zero interpolation.
(c) Find p[n] = x[n/2] ⋆ h[n], assuming step interpolation where necessary.
(d) Find r[n] = x[n] ⋆ h[n/2], assuming linear interpolation where necessary.
7.6 (Application) Consider a 2-point averaging filter whose present output equals the average of the
present and previous input.
(a) Set up a difference equation for this system.
(b) What is the impulse response of this system?
(c) What is the response of this system to the sequence {1, 2, 3, 4, 5}?
(d) Use convolution to show that the system performs the required averaging operation.
7.7 (Stability) Investigate the causality and stability of the following systems.
(a) h[n] = (2)^n u[n − 1]     (b) y[n] = 2x[n + 1] + 3x[n] − x[n − 1]
(c) h[n] = (0.5)^n u[n]       (d) h[n] = {3, 2, 1, 1, 2}
(e) h[n] = (0.5)^n u[−n]      (f) h[n] = (0.5)^{|n|}
7.8 (Periodic Convolution) Find the regular convolution y[n] = x[n] ⋆ h[n] of one period of each pair
of periodic signals. Then, use wraparound to compute the periodic convolution yp[n] = x[n] ⊛ h[n]. In
each case, specify the minimum number of padding zeros we must use if we wish to find the regular
convolution from the periodic convolution of the zero-padded signals.
(a) x[n] = {1, 2, 0, 1} h[n] = {2, 2, 3, 0}
(b) x[n] = {0, 2, 4, 6} h[n] = {6, 4, 2, 0}
(c) x[n] = {3, 2, 1, 0, 1} h[n] = {4, 3, 2, 0, 0}
(d) x[n] = {3, 2, 1, 1, 2} h[n] = {4, 2, 3, 2, 0}
7.11 (Convolution and Interpolation) Consider the following system with x[n] = {0, 3, 9, 12, 15, 18}.
(a) Find the response y[n] if N = 2 and the filter impulse response is h[n] = tri(n/2). Show that,
except for end eects, the output describes a linear interpolation between the samples of x[n].
(b) Find the response y[n] if N = 3 and the filter impulse response is h[n] = tri(n/3). Does the
output describe a linear interpolation between the samples of x[n]?
(c) Pick N and h[n] if the system is to perform linear interpolation by 4.
7.12 (Correlation) For each pair of signals, compute the autocorrelation rxx [n], the autocorrelation rhh [n],
the cross-correlation rxh [n], and the cross-correlation rhx [n]. For each result, indicate the location of
the origin n = 0 by a marker.
Chapter 7 Problems 191
(a) x[n] = {1, 2, 0, 1} h[n] = {2, 2, 3}
(b) x[n] = {0, 2, 4, 6} h[n] = {6, 4, 2}
(c) x[n] = {3, 2, 1, 2} h[n] = {4, 3, 2}
(d) x[n] = {3, 2, 1, 1, 2} h[n] = {4, 2, 3, 2}
7.14 (Periodic Correlation) For each pair of periodic signals described for one period, compute the
periodic autocorrelations rxx [n] and rhh [n], and the periodic cross-correlations rxh [n] and rhx [n]. For
each result, indicate the location of the origin n = 0 by a marker.
(a) x[n] = {1, 2, 0, 1} h[n] = {2, 2, 3, 0}
(b) x[n] = {0, 2, 4, 6} h[n] = {6, 4, 2, 0}
(c) x[n] = {3, 2, 1, 2} h[n] = {0, 4, 3, 2}
(d) x[n] = {3, 2, 1, 1, 2} h[n] = {4, 2, 3, 2, 0}
7.16 (Convolution) Find the convolution y[n] = x[n] ⋆ h[n] for each pair of signals.
(a) x[n] = (0.4)^n u[n]    h[n] = (0.5)^n u[n]
(b) x[n] = α^n u[n]        h[n] = α^n u[n]
(c) x[n] = α^n u[n]        h[n] = β^n u[n]
(d) x[n] = nα^n u[n]       h[n] = α^n u[n]
7.17 (Step Response) Given the impulse response h[n], find the step response s[n] of each system.
(a) h[n] = (0.5)^n u[n]                    (b) h[n] = (0.5)^n cos(nπ)u[n]
(c) h[n] = (0.5)^n cos(nπ + 0.5π)u[n]      (d) h[n] = (0.5)^n cos(nπ + 0.25π)u[n]
(e) h[n] = n(0.5)^n u[n]                   (f) h[n] = n(0.5)^n cos(nπ)u[n]
7.18 (Convolution and System Response) Consider the system y[n] − 0.5y[n − 1] = x[n].
(a) What is the impulse response h[n] of this system?
(b) Find its output if x[n] = (0.5)n u[n] by convolution.
(c) Find its output if x[n] = (0.5)^n u[n] and y[−1] = 0 by solving the difference equation.
(d) Find its output if x[n] = (0.5)^n u[n] and y[−1] = 2 by solving the difference equation.
(e) Are any of the outputs identical? Should they be? Explain.
7.19 (Convolution of Symmetric Sequences) The convolution of sequences that are symmetric about
their midpoint is also endowed with symmetry (about its midpoint). Compute y[n] = x[n] ⋆ h[n] for
each pair of signals and use the results to establish the type of symmetry (about the midpoint) in
the convolution if the convolved signals are both even symmetric (about their midpoint), both odd
symmetric (about their midpoint), or one of each type.
(a) x[n] = {2, 1, 2}      h[n] = {1, 0, 1}
(b) x[n] = {2, 1, 2}      h[n] = {1, 1}
(c) x[n] = {2, 2}         h[n] = {1, 1}
(d) x[n] = {2, 0, −2}     h[n] = {1, 0, −1}
(e) x[n] = {2, 0, −2}     h[n] = {1, −1}
(f) x[n] = {2, −2}        h[n] = {1, −1}
(g) x[n] = {2, 1, 2}      h[n] = {1, 0, −1}
(h) x[n] = {2, 1, 2}      h[n] = {1, −1}
(i) x[n] = {2, −2}        h[n] = {1, 1}
7.20 (Convolution and Interpolation) Let x[n] = {2, 4, 6, 8}.
(a) Find the convolution y[n] = x[n] ⋆ x[n].
(b) Find the convolution y1[n] = x[2n] ⋆ x[2n]. Is y1[n] related to y[n]? Should it be? Explain.
(c) Find the convolution y2[n] = x[n/2] ⋆ x[n/2], assuming zero interpolation. Is y2[n] related to
y[n]? Should it be? Explain.
(d) Find the convolution y3[n] = x[n/2] ⋆ x[n/2], assuming step interpolation. Is y3[n] related to
y[n]? Should it be? Explain.
(e) Find the convolution y4[n] = x[n/2] ⋆ x[n/2], assuming linear interpolation. Is y4[n] related to
y[n]? Should it be? Explain.
7.21 (Linear Interpolation) Consider a system that performs linear interpolation by a factor of N . One
way to construct such a system, as shown, is to perform up-sampling by N (zero interpolation between
signal samples) and pass the up-sampled signal through a filter with impulse response h[n] whose
output y[n] is the linearly interpolated signal.
x[n] → up-sample (zero interpolate) by N → filter → y[n]
(a) What should h[n] be for linear interpolation by a factor of N ?
(b) Let x[n] = 4tri(0.25n). Find y1 [n] = x[n/2] by linear interpolation.
(c) Find the system output y[n] for N = 2. Does y[n] equal y1 [n]?
7.22 (Causality) Argue that the impulse response h[n] of a causal system must be zero for n < 0. Based
on this result, if the input to a causal system starts at n = n0 , when does the response start?
7.23 (Numerical Convolution) The convolution y(t) of two analog signals x(t) and h(t) may be approx-
imated by sampling each signal at intervals ts to obtain the signals x[n] and h[n], and folding and
shifting the samples of one function past the other in steps of ts (to line up the samples). At each
instant kts , the convolution equals the sum of the product samples multiplied by ts . This is equivalent
to using the rectangular rule to approximate the area. If x[n] and h[n] are convolved using the sum-
by-column method, the columns make up the product, and their sum multiplied by ts approximates
y(t) at t = kts .
(a) Let x(t) = rect(t/2) and h(t) = rect(t/2). Find y(t) = x(t) ⋆ h(t) and compute y(t) at intervals
of ts = 0.5 s.
(b) Sample x(t) and h(t) at intervals of ts = 0.5 s to obtain x[n] and h[n]. Compute y[n] = x[n] ⋆ h[n]
and the convolution estimate yR (nts ) = ts y[n]. Do the values of yR (nts ) match the exact result
y(t) at t = nts ? If not, what are the likely sources of error?
(c) Argue that the trapezoidal rule for approximating the convolution is equivalent to subtracting
half the sum of the two end samples of each column from the discrete convolution result and
then multiplying by ts . Use this rule to obtain the convolution estimate yT (nts ). Do the values
of yT (nts ) match the exact result y(t) at t = nts ? If not, what are the likely sources of error?
(d) Obtain estimates based on the rectangular rule and trapezoidal rule for the convolution y(t) of
x(t) = 2 tri(t) and h(t) = rect(t/2) by sampling the signals at intervals of ts = 0.5 s. Which rule
would you expect to yield a better approximation, and why?
7.25 (Impulse Response of Difference Algorithms) Two systems to compute the forward and back-
ward difference are described by
Forward difference: yF[n] = x[n + 1] − x[n]    Backward difference: yB[n] = x[n] − x[n − 1]
(a) What is the impulse response of each system?
(b) Which of these systems is stable? Which of these systems is causal?
(c) Find the impulse response of their parallel connection. Is the parallel system stable? Is it causal?
(d) What is the impulse response of their cascade? Is the cascaded system stable? Is it causal?
7.26 (System Response) Find the response of the following filters to the unit step x[n] = u[n], and to
the alternating unit step x[n] = (−1)^n u[n], using convolution concepts.
(a) h[n] = δ[n] − δ[n − 1]    (differencing operation)
(b) h[n] = {0.5, 0.5}    (2-point average)
(c) h[n] = (1/N) Σ_{k=0}^{N−1} δ[n − k], N = 3    (moving average)
(d) h[n] = [2/(N(N + 1))] Σ_{k=0}^{N−1} (N − k)δ[n − k], N = 3    (weighted moving average)
(e) y[n] + [(N − 1)/(N + 1)] y[n − 1] = [2/(N + 1)] x[n], N = 3    (exponential average)
7.27 (Eigensignals) Which of the following can be the eigensignal of an LTI system?
(a) x[n] = 0.5^n u[n]    (b) x[n] = e^{jnπ/2}    (c) x[n] = e^{jnπ} + e^{jnπ/2}
(d) x[n] = cos(nπ/2)     (e) x[n] = j^n          (f) x[n] = (j)^n + (−j)^n
7.29 (Systems in Cascade and Parallel) Consider the realization of Figure P7.29.
Figure P7.29 System realization for Problem 7.29
(a) Find its impulse response if α ≠ β. Is the overall system FIR or IIR?
(b) Find its impulse response if α = β. Is the overall system FIR or IIR?
(c) Find its impulse response if α = β = 1. What does the overall system represent?
7.30 (Cascading) The impulse response of two cascaded systems equals the convolution of their impulse
responses. Does the step response sC [n] of two cascaded systems equal s1 [n] s2 [n], the convolution of
their step responses? If not, how is sC [n] related to s1 [n] and s2 [n]?
7.31 (Cascading) System 1 is a squaring circuit, and system 2 is an exponential averager described by
h[n] = (0.5)n u[n]. Find the output of each cascaded combination. Will their output be identical?
Should it be? Explain.
(a) 2(0.5)^n u[n] → system 1 → system 2 → y[n]
(b) 2(0.5)^n u[n] → system 2 → system 1 → y[n]
7.32 (Cascading) System 1 is an IIR filter with the difference equation y[n] = 0.5y[n − 1] + x[n], and
system 2 is a filter with impulse response h[n] = δ[n] − δ[n − 1]. Find the output of each cascaded
combination. Will their output be identical? Should it be? Explain.
(a) 2(0.5)^n u[n] → system 1 → system 2 → y[n]
(b) 2(0.5)^n u[n] → system 2 → system 1 → y[n]
7.33 (Cascading) System 1 is an IIR filter with the difference equation y[n] = 0.5y[n − 1] + x[n], and
system 2 is a filter with impulse response h[n] = δ[n] − (0.5)^n u[n].
(a) Find the impulse response hP [n] of their parallel connection.
(b) Find the impulse response h12 [n] of the cascade of system 1 and system 2.
(c) Find the impulse response h21 [n] of the cascade of system 2 and system 1.
(d) Are h12 [n] and h21 [n] identical? Should they be? Explain.
(e) Find the impulse response hI [n] of a system whose parallel connection with h12 [n] yields hP [n].
7.34 (Cascading) System 1 is a lowpass filter described by y[n] = 0.5y[n − 1] + x[n], and system 2 is
described by h[n] = δ[n] − 0.5δ[n − 1].
(a) What is the output of the cascaded system to the input x[n] = 2(0.5)n u[n]?
(b) What is the output of the cascaded system to the input x[n] = [n]?
(c) How are the two systems related?
7.35 (Periodic Convolution) Consider a system whose impulse response is h[n] = (0.5)^n u[n]. Show that
one period of its periodic extension with period N is given by hpe[n] = (0.5)^n / [1 − (0.5)^N], 0 ≤ n ≤ N − 1.
Use this result to find the response of this system to the following periodic inputs.
(a) x[n] = cos(nπ)       (b) x[n] = {1, 1, 0, 0}, with N = 4
(c) x[n] = cos(0.5nπ)    (d) x[n] = (0.5)^n, 0 ≤ n ≤ 3, with N = 4
7.36 (Convolution in Practice) Often, the convolution of a long sequence x[n] and a short sequence h[n]
is performed by breaking the long signal into shorter pieces, finding the convolution of each short piece
with h[n], and gluing the results together. Let x[n] = {1, 1, 2, 3, 5, 4, 3, 1} and h[n] = {4, 3, 2, 1}.
(a) Split x[n] into two equal sequences x1 [n] = {1, 1, 2, 3} and x2 [n] = {5, 4, 3, 1}.
(b) Find the convolution y1[n] = h[n] ⋆ x1[n].
(c) Find the convolution y2[n] = h[n] ⋆ x2[n].
(d) Find the convolution y[n] = h[n] ⋆ x[n].
(e) How can you find y[n] from y1 [n] and y2 [n]? (Hint: Shift y2 [n] and use superposition. This forms
the basis for the overlap-add method of convolution discussed in Chapter 16.)
7.37 (Correlation) Find the correlation rxh[n] of the following signals.
(a) x[n] = α^n u[n]      h[n] = α^n u[n]
(b) x[n] = nα^n u[n]     h[n] = α^n u[n]
(c) x[n] = rect(n/2N)    h[n] = rect(n/2N)
7.38 (Mean and Variance from Autocorrelation) The mean value mx of a random signal x[n] (with
nonzero mean value) may be computed from its autocorrelation function rxx[n] as mx^2 = lim_{|n|→∞} rxx[n].
The variance of x[n] is then given by σx^2 = rxx(0) − mx^2. Find the mean, variance, and average power
of a random signal whose autocorrelation function is rxx[n] = 10[(1 + 2n^2)/(2 + 5n^2)].
7.39 (Nonrecursive Forms of IIR Filters) An FIR filter may always be exactly represented in recursive
form, but we can only approximately represent an IIR filter by an FIR filter by truncating its impulse
response to N terms. The larger the truncation index N , the better is the approximation. Consider
the IIR filter described by y[n] 0.8y[n 1] = x[n]. Find its impulse response h[n] and truncate it to
20 terms to obtain hA [n], the impulse response of the approximate FIR equivalent. Would you expect
the greatest mismatch in the response of the two filters to identical inputs to occur for lower or higher
values of n?
(a) Use the Matlab routine filter to find and compare the step response of each filter up to n = 15.
Are there any differences? Should there be? Repeat by extending the response to n = 30. Are
there any differences? For how many terms does the response of the two systems stay identical,
and why?
(b) Use the Matlab routine filter to find and compare the response to x[n] = 1, 0 ≤ n ≤ 10 for
each filter up to n = 15. Are there any differences? Should there be? Repeat by extending the
response to n = 30. Are there any differences? For how many terms does the response of the
two systems stay identical, and why?
7.40 (Convolution of Symmetric Sequences) The convolution of sequences that are symmetric about
their midpoint is also endowed with symmetry (about its midpoint). Use the Matlab command
conv to find the convolution of the following sequences and establish the type of symmetry (about the
midpoint) in the convolution.
(a) x[n] = sin(0.2nπ), −10 ≤ n ≤ 10     h[n] = sin(0.2nπ), −10 ≤ n ≤ 10
(b) x[n] = sin(0.2nπ), −10 ≤ n ≤ 10     h[n] = cos(0.2nπ), −10 ≤ n ≤ 10
(c) x[n] = cos(0.2nπ), −10 ≤ n ≤ 10     h[n] = cos(0.2nπ), −10 ≤ n ≤ 10
(d) x[n] = sinc(0.2n), −10 ≤ n ≤ 10     h[n] = sinc(0.2n), −10 ≤ n ≤ 10
7.41 (Extracting Periodic Signals Buried in Noise) Extraction of periodic signals buried in noise
requires autocorrelation (to identify the period) and cross-correlation (to recover the signal itself).
(a) Generate the signal x[n] = sin(0.1nπ), 0 ≤ n ≤ 499. Add some uniform random noise (with a
noise amplitude of 2 and a mean of 0) to obtain the noisy signal s[n]. Plot each signal. Can you
identify any periodicity from the plot of x[n]? If so, what is the period N? Can you identify any
periodicity from the plot of s[n]?
(b) Obtain the periodic autocorrelation rpx[n] of x[n] and plot. Can you identify any periodicity
from the plot of rpx[n]? If so, what is the period N? Is it the same as the period of x[n]?
(c) Use the value of N found above (or identify N from x[n] if not) to generate the 500-sample
impulse train i[n] = Σ_k δ[n − kN], 0 ≤ n ≤ 499. Find the periodic cross-correlation y[n] of
s[n] and i[n]. Choose a normalizing factor that makes the peak value of y[n] unity. How is the
normalizing factor related to the signal length and the period N?
(d) Plot y[n] and x[n] on the same plot. Is y[n] a close match to x[n]? Explain how you might
improve the results.
Chapter 8
FOURIER SERIES
The constant term a0 accounts for any dc offset in xp(t), and a0, ak, and bk are called the trigonometric
Fourier series coefficients. There is a pair of terms (a sine and cosine) at each harmonic frequency kf0.
The polar form combines each sine-cosine pair at the frequency kf0 into a single sinusoid:

xp(t) = c0 + Σ_{k=1}^{∞} ck cos(2πkf0t + θk)    (8.2)
Here, c0 = a0, and ck and θk are called the polar coefficients and are related to the trigonometric coefficients
ak and bk. This relationship is best found by comparing the phasor representation of the time-domain terms

ck∠θk = ak − jbk    (8.3)
The exponential form invokes Euler's relation to express each sine-cosine pair at the frequency kf0 by
complex exponentials at ±kf0:
xp(t) = Σ_{k=−∞}^{∞} X[k]e^{j2πkf0t}    (8.4)
Here, the index k ranges from −∞ to ∞, and X[0] = a0. To see how this form arises, we invoke Euler's
relation to give
ak cos(2πkf0t) + bk sin(2πkf0t) = 0.5ak(e^{j2πkf0t} + e^{−j2πkf0t}) − j0.5bk(e^{j2πkf0t} − e^{−j2πkf0t})    (8.5)
Combining terms at kf0 and −kf0, we obtain
ak cos(2πkf0t) + bk sin(2πkf0t) = 0.5(ak − jbk)e^{j2πkf0t} + 0.5(ak + jbk)e^{−j2πkf0t}    (8.6)
The relation between the trigonometric and exponential forms is thus
X[k] = 0.5(ak − jbk), k ≥ 1    (8.7)
The coefficient X[−k] is simply the complex conjugate of X[k]:
X[−k] = 0.5(ak + jbk) = X*[k], k ≥ 1    (8.8)
The connection between the three forms of the coefficients is much like the connection between the three
forms of a complex number, and the Argand diagram is a useful way of showing these connections.
X[k] = 0.5(ak − jbk) = 0.5ck∠θk, k ≥ 1    (8.9)
The conjugate symmetry of X[k] also implies that ak is an even symmetric function of k and that bk
is an odd symmetric function of k.
To invoke orthogonality, we integrate the Fourier series over one period T to give
∫_T xp(t) dt = ∫_T a0 dt + Σ_k ∫_T ak cos(2πkf0t) dt + Σ_k ∫_T bk sin(2πkf0t) dt    (8.12)
All but the first term on the right-hand side integrate to zero, and we obtain
a0 = (1/T) ∫_T xp(t) dt    (8.13)
The dc offset a0 thus represents the average value of xp(t).
If we multiply xp(t) by cos(2πkf0t) and then integrate over one period T, we get
∫_T xp(t)cos(2πkf0t) dt = a0 ∫_T cos(2πkf0t) dt
+ Σ ∫_T ak cos(2πkf0t)cos(2πkf0t) dt + Σ ∫_T bk sin(2πkf0t)cos(2πkf0t) dt
Once again, by orthogonality, all terms of the right-hand side are zero, except the one containing the
coefficient ak, and we obtain
∫_T xp(t)cos(2πkf0t) dt = ak ∫_T cos²(2πkf0t) dt = ak ∫_T 0.5[1 + cos(4πkf0t)] dt = 0.5T ak    (8.14)
The polar coefficients ck and θk, or the exponential coefficients X[k], may now be found using the
appropriate relations. In particular, a formal expression for evaluating X[k] directly is also found as follows:
X[k] = 0.5ak − j0.5bk = (1/T) ∫_T xp(t)cos(2πkf0t) dt − (j/T) ∫_T xp(t)sin(2πkf0t) dt    (8.17)
(b) Let xp(t) = 3 + cos(4πt) − 2 sin(20πt). Its fundamental frequency is f0 = GCD(2, 10) = 2 Hz. The
signal contains a dc term (for k = 0), a fundamental (k = 1) component at 2 Hz, and a fifth (k = 5)
harmonic at 10 Hz. We thus have a0 = 3, a1 = 1, and b5 = −2. All other Fourier series coefficients are zero.
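These coefficient values are easy to verify numerically. The following sketch (plain Python; the Riemann-sum approach and sample count are choices of the sketch, not part of the text) estimates a0, ak, and bk over one period T = 1/f0. With f0 = 2 Hz, the 2 Hz cosine is the k = 1 harmonic and the 10 Hz sine is the k = 5 harmonic:

```python
import math

# Estimate the trigonometric Fourier series coefficients of
# xp(t) = 3 + cos(4*pi*t) - 2*sin(20*pi*t) by Riemann sums over one period.
f0 = 2.0                 # fundamental frequency in Hz (GCD of 2 and 10)
T = 1.0 / f0
n = 20000                # samples per period (an assumption of this sketch)
dt = T / n

def x(t):
    return 3 + math.cos(4 * math.pi * t) - 2 * math.sin(20 * math.pi * t)

# a0 is the average value; ak and bk are projections on cosine and sine harmonics
a0 = sum(x(i * dt) for i in range(n)) * dt / T

def a(k):
    return (2 / T) * sum(x(i * dt) * math.cos(2 * math.pi * k * f0 * i * dt)
                         for i in range(n)) * dt

def b(k):
    return (2 / T) * sum(x(i * dt) * math.sin(2 * math.pi * k * f0 * i * dt)
                         for i in range(n)) * dt
```

The sums return a0 ≈ 3, a(1) ≈ 1, and b(5) ≈ −2, with all other coefficients essentially zero.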
Figure E8.1 The periodic signals x(t) and y(t) for Example 8.1(c and d)
Comment: Note how the Fourier series coefficients of y(t) depend only on the duty ratio t0/T.
(Two signals whose hidden symmetry is exposed once the dc offset is removed.)
If such hidden symmetry is present, some of the coefficients will be zero and need not be computed. We
can then pick the symmetric (zero-offset) version (using simplifications through symmetry) or the original
signal (but using no such simplifications) to evaluate the remaining coefficients.
X[0] = 0.5    X[k] = j/(2πk) (k ≠ 0)
Figure E8.2B Periodic sawtooth and its Fourier series coefficients for Example 8.2(b)
(c) (A Triangular Pulse Train) From Figure E8.2C, we compute a0 = 0.5. Since xp(t) has even
symmetry, bk = 0. With T = 2 (or f0 = 0.5) and xp(t) = t, 0 ≤ t ≤ 1, we compute ak, k > 0,
as follows:
ak = (4/T) ∫_0^{T/2} xp(t)cos(kω0t) dt = 2 ∫_0^1 t cos(kπt) dt
= [2/(kπ)²][cos(kπt) + kπt sin(kπt)] |_0^1 = 2[cos(kπ) − 1]/(kπ)² = −4/(k²π²) (k odd)
Note that ak = 0 for even k, implying hidden half-wave symmetry (hidden because a0 ≠ 0).
X[0] = 0.5    X[k] = −2/(π²k²) (k odd)
Figure E8.2C Triangular pulse train and its Fourier series coefficients for Example 8.2(c)
(d) (A Half-Wave Symmetric Sawtooth Signal) From Figure E8.2D, xp(t) has half-wave symmetry.
Thus, a0 = 0, and the coefficients ak and bk are nonzero only for odd k.
With T = 2 (f0 = 0.5) and xp(t) = t, 0 ≤ t ≤ 1, we compute ak and bk only for odd k as follows:
ak = (4/T) ∫_0^{T/2} xp(t)cos(2πkf0t) dt = 2 ∫_0^1 t cos(kπt) dt
= [2/(kπ)²][cos(kπt) + kπt sin(kπt)] |_0^1 = 2[cos(kπ) − 1]/(kπ)² = −4/(π²k²) (k odd)
bk = 2 ∫_0^1 t sin(kπt) dt = [2/(kπ)²][sin(kπt) − kπt cos(kπt)] |_0^1 = −2 cos(kπ)/(kπ) = 2/(kπ) (k odd)
X[0] = 0    X[k] = −2/(π²k²) − j/(πk) (k odd)
Figure E8.2D Half-wave symmetric sawtooth and its Fourier series coefficients for Example 8.2(d)
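These closed forms are easy to confirm numerically. A Python sketch (the piecewise definition of one period and the sample count are assumptions of the sketch):

```python
import math

# One period of the half-wave symmetric sawtooth: T = 2, f0 = 0.5,
# x(t) = t on (0, 1) and x(t) = -(t - 1) on (1, 2), so that x(t + 1) = -x(t).
T, n = 2.0, 20000
dt = T / n

def x(t):
    return t if t < 1 else -(t - 1)

def a(k):
    return (2 / T) * sum(x(i * dt) * math.cos(math.pi * k * i * dt)
                         for i in range(n)) * dt

def b(k):
    return (2 / T) * sum(x(i * dt) * math.sin(math.pi * k * i * dt)
                         for i in range(n)) * dt
```

The sums reproduce a1 ≈ −4/π² and b1 ≈ 2/π, with vanishing coefficients for even k.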
(e) (A Trapezoidal Signal) From Figure E8.2E, this signal has both odd and hidden half-wave sym-
metry. Thus, a0 = 0 = ak, and the bk are nonzero only for odd k. With T = 8 (f0 = 1/8); and
xp(t) = t, 0 ≤ t ≤ 1, and xp(t) = 1, 1 ≤ t ≤ 2; we use simplifications due to both types of symmetry
to obtain
bk = (8/T) ∫_0^{T/4} xp(t)sin(kπt/4) dt = ∫_0^1 t sin(kπt/4) dt + ∫_1^2 sin(kπt/4) dt = I1 + I2
Now, simplifying the two integrals I1 and I2 for odd k only, we find
I1 = [1/(kπ/4)²][sin(kπt/4) − (kπt/4)cos(kπt/4)] |_0^1 = 16 sin(kπ/4)/(k²π²) − 4 cos(kπ/4)/(kπ)
I2 = −[cos(kπt/4)/(kπ/4)] |_1^2 = −4[cos(kπ/2) − cos(kπ/4)]/(kπ) = 4 cos(kπ/4)/(kπ) (k odd)
Then
bk = I1 + I2 = 16 sin(kπ/4)/(k²π²) (k odd)
X[0] = 0    X[k] = −j8 sin(kπ/4)/(π²k²) (k odd)
Figure E8.2E Trapezoidal pulse train and its Fourier series coefficients for Example 8.2(e)
(f) (A Half-Wave Rectified Sine) From Figure E8.2F, this signal has no symmetry. Its average value
equals a0 = 1/π. With T = 2 (f0 = 0.5) and xp(t) = sin(πt), 0 ≤ t ≤ 1, we compute the coefficient ak
This may seem plausible to the unwary, but if bk = 0, the signal x(t) must show even symmetry. It
does not! The catch is that for k = 1 the first term has the indeterminate form 0/0. We must thus
evaluate b1 separately, either by l'Hopital's rule,
b1 = 0.5 lim_{k→1} {sin[(1 − k)π]/[(1 − k)π]} = 0.5 lim_{k→1} {cos[(1 − k)π]/1} = 0.5
or directly from the defining relation with k = 1,
b1 = ∫_0^1 sin(πt)sin(πt) dt = 0.5 ∫_0^1 [1 − cos(2πt)] dt = 0.5
Thus, b1 = 0.5 and bk = 0, k > 1. By the way, we must also evaluate a1 separately (it equals zero).
Comment: Indeterminate forms typically arise for signals with sinusoidal segments.
X[0] = 1/π    X[1] = −j/4    X[k] = 1/[π(1 − k²)] (k even)
Figure E8.2F Half-wave rectified sine and its Fourier series coefficients for Example 8.2(f)
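The exponential coefficients can be checked the same way. This sketch (assumptions: a direct Riemann sum with 10,000 samples over the nonzero half period) evaluates X[k] = (1/T) ∫ xp(t)e^{−j2πkf0t} dt for the half-wave rectified sine:

```python
import cmath, math

# Half-wave rectified sine: T = 2, f0 = 0.5, x(t) = sin(pi*t) on (0, 1), else 0.
T, n = 2.0, 20000
dt = T / n

def X(k):
    # x(t) vanishes on (1, 2), so it suffices to integrate over (0, 1)
    return sum(math.sin(math.pi * i * dt) * cmath.exp(-1j * math.pi * k * i * dt)
               for i in range(n // 2)) * dt / T
```

The sums reproduce X[0] ≈ 1/π, X[1] ≈ −j/4, and X[2] ≈ −1/(3π).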
Equivalent formulations in terms of ck or X[k] are obtained by recognizing that ck² = ak² + bk² = 4|X[k]|² and
|X[k]|² = X[k]X*[k] to give
P = a0² + 0.5 Σ_{k=1}^{∞} (ak² + bk²) = Σ_{k=−∞}^{∞} |X[k]|² = Σ_{k=−∞}^{∞} X[k]X*[k]    (8.21)
The equivalence of the time-domain and frequency-domain expressions for the signal power forms the so-
called Parseval's relation:
P = (1/T) ∫_T xp²(t) dt = Σ_{k=−∞}^{∞} |X[k]|²    (8.22)
We point out that Parseval's relation holds only if one period of xp(t) is square integrable (an energy signal).
Therefore, it does not apply to signals such as the impulse train.
It is far easier to obtain the total power using xp(t). However, finding the power in or up to a given
number of harmonics requires the harmonic coefficients. The power PN up to the Nth harmonic is simply
PN = c0² + 0.5 Σ_{k=1}^{N} ck² = a0² + 0.5 Σ_{k=1}^{N} (ak² + bk²) = Σ_{k=−N}^{N} |X[k]|²    (8.23)
This is a sum of positive quantities and is always less than the total signal power. As N increases, PN
approaches the true signal power P.
Figure E8.3 The periodic signal of Example 8.3
X[k] = 0.5 sinc(0.5k)e^{−jπk/2}    |X[k]| = 0.5 sinc(0.5k) = sin(kπ/2)/(kπ)
The coefficients are zero for even k. To compute the power up to the second harmonic (k = 2), we use
P2 = Σ_{k=−2}^{2} |X[k]|² = 1/π² + 0.25 + 1/π² = 0.25 + 2/π² = 0.4526 W
To compute the total power from the harmonics, we would have to sum the infinite series:
P = Σ_{k=−∞}^{∞} |X[k]|² = 0.25 + Σ_{k odd} 1/(k²π²) = 0.25 + (2/π²) Σ_{k odd, k>0} (1/k²) = 0.25 + (2/π²)(π²/8) = 0.5 W
The summation over the positive odd indices equals π²/8, to give P = 0.5 W, as before. Finding the total power
from the harmonics is tedious! It is far better to find the total power directly from x(t).
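The slow convergence of PN toward P is easy to tabulate. A sketch using the coefficient magnitudes derived above (X[0] = 0.5 and |X[k]| = 1/(kπ) for odd k):

```python
import math

def PN(N):
    # Power up to the Nth harmonic: X[0] = 0.5, |X[k]| = 1/(k*pi) for odd k
    p = 0.25                                  # |X[0]|^2
    for k in range(1, N + 1, 2):              # odd harmonics only
        p += 2 * (1.0 / (k * math.pi)) ** 2   # the +k and -k terms together
    return p
```

PN(2) returns 0.25 + 2/π² ≈ 0.4526 W, and PN creeps toward the total power of 0.5 W only for very large N, which illustrates how tedious the harmonic route is.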
(The spectrum may be plotted against the harmonic index k, the radian frequency ω = kω0, or the cyclic frequency f = kf0.)
The magnitude spectrum and phase spectrum describe plots of the magnitude and phase of each
harmonic. They are plotted as discrete signals and are sometimes called line spectra. One-sided spectra refer
to plots of ck and θk for k ≥ 0 (positive frequencies). Two-sided spectra refer to plots of |X[k]| and θk for
all k (all frequencies, positive and negative).
For real periodic signals, the X[k] display conjugate symmetry. As a result, the two-sided magnitude
|X[k]| shows even symmetry, and the two-sided phase spectrum shows reversed phase at negative indices (or
negative frequencies). The magnitude is usually sketched as a positive quantity. An example is shown in
Figure 8.3.
For real signals, the X[k] are purely real if xp(t) is even symmetric or purely imaginary if xp(t) is
odd symmetric. In such cases, it is more meaningful to sketch the amplitude spectrum as the real (or
imaginary) part of X[k] (which may include sign changes). An example is shown in Figure 8.4.
We can identify xk(t) from the magnitude and phase spectrum. If the spectra are one-sided (plots of ck),
the kth harmonic is simply xk(t) = ck cos(2πkf0t + θk). If the spectra are two-sided (plots of |X[k]|), the
kth harmonic is xk(t) = 2|X[k]|cos(2πkf0t + θk), where θk is the phase at the frequency f = kf0.
EXAMPLE 8.4 (Signal Symmetry and Measures from Fourier Series Spectra)
(a) Consider the signal x(t) whose two-sided spectra are shown in Figure E8.4.
Figure E8.4 The spectra of the periodic signals x(t) and y(t) of Example 8.4(a and b). For x(t), the two-sided magnitudes are 2 (dc), 3 (at ±12 Hz), 4 (at ±20 Hz), and 1 (at ±28 Hz), with phases of ±180° at ±12 Hz and ±28 Hz. For y(t), the one-sided magnitudes are 2, 3, 4, and 1 at 5, 15, 25, and 35 Hz, with phases of 90° at 5 Hz and −90° at 35 Hz.
The harmonic frequencies are 12 Hz, 20 Hz, and 28 Hz. The fundamental frequency is given by
f0 = GCD(12, 20, 28) = 4 Hz. The time period is T = 1/f0 = 0.25 s. Only the dc and odd-indexed
harmonics are present. Thus, the signal shows hidden (not true) half-wave symmetry. The phase of
the harmonics is either zero or ±180°, implying only cosine terms. Thus, we also have even symmetry.
Since the spectrum is two-sided, the total signal power and the rms value are found as
P = Σ |X[k]|² = 1 + 16 + 9 + 4 + 9 + 16 + 1 = 56 W    xrms = √P = √56 = 7.4833
We may also write the Fourier series for the signal by inspection as either
x(t) = 2 + 6 cos(24πt − 180°) + 8 cos(40πt) + 2 cos(56πt + 180°) or
x(t) = 2 − 6 cos(24πt) + 8 cos(40πt) − 2 cos(56πt).
(b) Consider the signal y(t) whose one-sided spectra are shown in Figure E8.4.
The harmonic frequencies are 5 Hz, 15 Hz, 25 Hz, and 35 Hz. Thus, the fundamental frequency is
f0 = 5 Hz, and the time period is T = 1/f0 = 0.2 s. Only the odd-indexed harmonics are present.
Since the dc value is zero, the signal shows half-wave symmetry. The phase of the harmonics is either
zero (cosine terms) or ±90° (sine terms). Thus, both sines and cosines are present, and the signal is
neither odd symmetric nor even symmetric.
Since the spectrum is one-sided, the total signal power and the rms value are found as
P = c0² + 0.5 Σ ck² = 0.5(4 + 9 + 16 + 1) = 15 W    yrms = √P = √15 = 3.8730
We may also write the Fourier series for the signal by inspection as either
y(t) = 2 cos(10πt + 90°) + 3 cos(30πt) + 4 cos(50πt) + cos(70πt − 90°) or
y(t) = −2 sin(10πt) + 3 cos(30πt) + 4 cos(50πt) + sin(70πt).
xp(t − α) ⇔ X[k]e^{−j2πkf0α}    (8.26)
The magnitude spectrum shows no change. The term exp(−j2πkf0α) introduces an additional phase of
−2πkf0α, which varies linearly with k (or frequency). It changes the phase of each harmonic in proportion
to its frequency kf0.
Time compression by α changes the fundamental frequency of the scaled signal yp(t) from f0 to αf0. The
spectral coefficients Y[k] are identical to X[k] for any k, but are located at the frequencies αkf0. The more
compressed the time signal, the farther apart are its harmonics in the frequency domain.
8.5.4 Folding
The folding operation means α = −1 in the scaling property and gives xp(−t) ⇔ X[−k] = X*[k].
The magnitude spectrum remains unchanged. However, the phase is reversed. This also implies that the ak
remain unchanged, but the bk change sign. Since xp(t) + xp(−t) ⇔ X[k] + X*[k] = 2Re{X[k]}, we see that
the Fourier series coefficients of the even part of xp(t) equal Re{X[k]}, and the Fourier series coefficients of
the odd part of xp(t) equal jIm{X[k]}.
8.5.5 Derivatives
The derivative of xp(t) is found by differentiating each term in its series:
dxp(t)/dt = Σ_{k=−∞}^{∞} d/dt [X[k]e^{j2πkf0t}] = Σ_{k=−∞}^{∞} j2πkf0 X[k]e^{j2πkf0t}    (8.30)
In operational notation,
xp′(t) ⇔ j2πkf0 X[k]    (8.31)
The dc term X[0] vanishes, whereas all other X[k] are multiplied by j2πkf0. The magnitude of each
harmonic is scaled by 2πkf0, in proportion to its frequency, and the spectrum has significantly larger high-
frequency content. The derivative operation enhances the sharp details and features in any time function
and contributes to its high-frequency spectrum.
8.5.6 Integration
Integration of a periodic signal whose average value is zero (X[0] = 0) gives
∫_0^t xp(t) dt ⇔ X[k]/(j2πkf0), (k ≠ 0) + constant    (8.32)
The integrated signal is also periodic. If we keep track of X[0] separately, integration and differentiation
may be thought of as inverse operations. The coefficients X[k]/(j2πkf0) imply faster convergence and reduced
high-frequency components. Since integration is a smoothing operation, the smoother a time signal, the
smaller its high-frequency content.
X[0] = 1/π    X[1] = −j0.25    X[k] = 1/[π(1 − k²)] (k even)
Use properties to find the Fourier series coecients of y(t), f (t), and g(t).
Figure E8.5 The periodic signals for Example 8.5
(a) We note that y(t) = x(t − 0.5T), and thus Y[k] = X[k]e^{−j2πkf0(0.5T)} = X[k]e^{−jπk} = (−1)^k X[k].
The coefficients change sign only for odd k, and we have
Y[0] = X[0] = 1/π    Y[1] = −X[1] = j0.25    Y[k] = X[k] = 1/[π(1 − k²)] (k even)
(b) The sinusoid f(t) may be described as f(t) = x(t) − y(t). Its Fourier series coefficients may be written
as F[k] = X[k] − Y[k], and are given by F[1] = 2X[1] = −j0.5 = −F[−1], with F[k] = 0 for all other k.
(c) The full-wave rectified sine g(t) may be described as g(t) = x(t) + y(t). Its Fourier series coefficients
G[k] = X[k] + Y[k] are given by
G[0] = 2/π    G[1] = 0    G[k] = 2/[π(1 − k²)] (k even)
Note that the time period of g(t) is in fact half that of x(t) or y(t). Since only even-indexed coefficients
are present, the Fourier series coefficients may also be written as
G[k] = 2/[π(1 − 4k²)] (all k)
Its first derivative y(t) = x′(t) contains only impulses. We use the sifting property to find the Y[k] as
Y[k] = (1/T) ∫_{−T/2}^{T/2} [δ(t + 0.5t0) − δ(t − 0.5t0)]e^{−j2πkf0t} dt = (1/T)(e^{jπkf0t0} − e^{−jπkf0t0}) = 2j sin(πkf0t0)/T
The coefficients of x(t) and y(t) are related by X[k] = Y[k]/(j2πkf0). Thus,
X[k] = Y[k]/(j2πkf0) = sin(πkf0t0)/(πkf0T) = (t0/T)[sin(πkf0t0)/(πkf0t0)] = (t0/T)sinc(kf0t0)
Its first derivative y(t) = x′(t) contains rectangular pulses of width t0 and height 1/t0. We use the
result of the previous part and the shifting property to find the Y[k] as
Y[k] = (1/T)sinc(kf0t0)(e^{jπkf0t0} − e^{−jπkf0t0}) = 2j sin(πkf0t0)sinc(kf0t0)/T
The coefficients of x(t) and y(t) are related by X[k] = Y[k]/(j2πkf0). Thus,
X[k] = Y[k]/(j2πkf0) = sin(πkf0t0)sinc(kf0t0)/(πkf0T) = (t0/T)sinc(kf0t0)[sin(πkf0t0)/(πkf0t0)] = (t0/T)sinc²(kf0t0)
Figure E8.6C The signals for Example 8.6(c)
Its fundamental frequency is ω0 = 2π/T = 2π rad/s. Over one period, x(t) = sin(πt). Two derivatives
result in impulses, and the sum y(t) = π²x(t) + x″(t) yields an impulse train with strengths of 2π units
whose Fourier series coefficients are simply Y[k] = 2π/T = 2π.
Since Y[k] = π²X[k] + (jkω0)²X[k] = π²X[k](1 − 4k²), we have
X[k] = 2/[π(1 − 4k²)]
You can confirm this result by computing the defining integral for X[k] directly.
Z[k] = X[k]Y[k] = (A²t1t2/T²)sinc(kf0t1)sinc(kf0t2)e^{−jπkf0(t1+t2)}
216 Chapter 8 Fourier Series
The signal z(t) represents the periodic convolution of x(t) and y(t). We use the wraparound method
of periodic convolution to study three cases, as illustrated in Figure E8.7B.
1. A = 2, t1 = t2 = 1, and T = 2: There is no wraparound, and z(t) describes a triangular waveform.
The Fourier coefficients simplify to
Z[k] = 0.25 sinc²(k/2)e^{−jπk} = −0.25 sinc²(k/2), (k odd)
This real result reveals the half-wave and even nature of z(t).
2. A = 1, t1 = 2, t2 = 1, and T = 4: There is no wraparound, and z(t) describes a trapezoidal
waveform. The Fourier coefficients simplify to
Z[k] = 0.125 sinc(k/2)sinc(k/4)e^{−j3πk/4}
We find Z[0] = 1/8. Since sinc(k/2) = 0 for even k, the signal z(t) should show only hidden
half-wave symmetry. It does.
3. A = 1, t1 = t2 = 2, and T = 3: We require wraparound past 3 units, and the Fourier coefficients
of z(t) simplify to
Z[k] = (4/9)sinc²(2k/3)e^{−j4πk/3}
The signal has no symmetry, but since sinc(2k/3) = 0 for k = 3, 6, . . ., the third harmonic and its
multiples are absent.
Figure E8.7B The signals and their convolution for Example 8.7(b)
(Reconstruction plots: (c) triangular wave, T = 2, k = 31; (d) half-rectified sine, T = 2, k = 31; (e) trapezoid, T = 8, k = 31; (f) exponential, T = 2, k = 31.)
As N increases, we would intuitively expect the oscillations to die out and lead to perfect reconstruction
as N → ∞. This is not what always happens. The same intuition also tells us that we can never get
perfect reconstruction for signals with jumps because we simply cannot approximate a discontinuous signal
by continuous signals (sinusoids). These intuitive observations are indeed confirmed by Figure 8.7, which
shows reconstructions of various signals, including those with jumps.
(Reconstruction plots: sawtooth, T = 2, with k = 21, k = 51, and k → ∞.)
(b) A true sawtooth pulse exhibits the Gibbs effect as shown in Figure 8.8. A triangular pulse with a
very steep slope approximating a sawtooth does not, as shown in Figure 8.6(b), and leads to perfect
reconstruction for large enough k.
The signal xR(t) also equals the periodic convolution of xp(t) and wD(t), the signal corresponding to the
rectangular spectral window given by
wD(t) = Σ_{k=−N}^{N} rect(k/2N)e^{j2πkf0t} = Σ_{k=−N}^{N} e^{j2πkf0t} = (2N + 1) sinc[(2N + 1)f0t]/sinc(f0t)    (8.35)
The signal wD(t) is called the Dirichlet kernel and shows variations similar to a sinc function. Its
periodic convolution with xp(t) is what leads to overshoot and oscillations in the reconstructed signal
xR(t) = xp(t) ⊛ wD(t), as illustrated in Figure 8.9. In other words, the Gibbs effect arises due to the abrupt
truncation of the signal spectrum by the rectangular window.
Figure 8.9 Periodic convolution of a signal with jumps and the Dirichlet kernel shows the Gibbs effect
The use of a tapered spectral window W[k] (whose coefficients or weights typically decrease with increasing |k|)
yields much better convergence and a much smoother reconstruction. The reconstruction xW(t) is
given by
xW(t) = xp(t) ⊛ w(t) = Σ_{k=−N}^{N} W[k]X[k]e^{j2πkf0t}    (8.36)
Table 8.1 lists some useful spectral windows. Note that all windows display even symmetry and are positive,
with W[0] = 1; all windows (except the rectangular or boxcar window) are tapered; and for all windows
(except the boxcar and Hamming), the end samples equal zero.
Every window (except the rectangular) provides a reduction, or elimination, of overshoot. The smoothed
reconstruction of a square wave using some of these windows is shown in Figure 8.10. Details of how the
smoothing comes about are described later.
Figure 8.10 The use of tapered windows (none, Bartlett, and Hamming, each with k = 7) leads to a smoother reconstruction free of overshoot and oscillations
In other words, to obtain the output, we multiply the input magnitude by Hk and augment the input phase
by φk. Euler's relation and superposition (or the fact that cos θ = Re[e^{jθ}]) allow us to generalize this result
for sinusoids:
ck cos(kω0t + θk) ⟹ ck Hk cos(kω0t + θk + φk)    (8.39)
The response yp(t) of an LTI system to a periodic input xp(t) with period T and fundamental frequency
ω0 = 2π/T may be found as the superposition of the responses due to each of the harmonics that make up the
input (by evaluating H(ω) successively at each input frequency kω0). Note that yp(t) is also periodic with
period T, since it contains the same harmonic frequencies as the input, but with different magnitudes and
phases. For a half-wave symmetric input, the output is also half-wave symmetric because the even-indexed
harmonics are missing in both the input and the output. The convergence rate of yp(t), and other types
of symmetry (or the lack thereof), depend on both xp(t) and the system function. A closed form for the
response using Fourier series analysis is very difficult to obtain. We could of course use convolution to find
the exact time-domain result, but it lacks the power to untangle the frequency information for which the
Fourier series is so well suited.
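The harmonic-by-harmonic procedure is mechanical enough to sketch in a few lines. Here H(f) is taken, purely as an assumed example, to be an RC lowpass filter with RC = 1 s, and the input is described by its one-sided harmonics (frequency in Hz, magnitude ck, phase in radians):

```python
import cmath, math

RC = 1.0

def H(f):
    # Assumed example system: RC lowpass, H(f) = 1/(1 + j*2*pi*f*RC)
    return 1 / (1 + 2j * math.pi * f * RC)

def respond(harmonics):
    """Each input harmonic (f, c, phase) maps to (f, c*|H(f)|, phase + arg H(f))."""
    return [(f, c * abs(H(f)), ph + cmath.phase(H(f))) for f, c, ph in harmonics]

# Input 8 + 6*cos(2*pi*t): a dc term plus a 1 Hz cosine
out = respond([(0.0, 8.0, 0.0), (1.0, 6.0, 0.0)])
```

The dc term passes unchanged, while the 1 Hz component is scaled by |H(1)| ≈ 0.157 and shifted in phase by about −81°, so the output remains periodic with the same period as the input.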
H(ω) = 1/(1 + jωRC)
Figure E8.9A The circuit for Example 8.9(a): an RC lowpass filter with input x(t) and output y(t)
(b) Consider the system shown in Figure E8.9B(1). Sketch G[k] and g(t) and find the output y(t).
Figure E8.9B(1) The system for Example 8.9(b)
The output of the filter produces the spectrum shown in Figure E8.9B(2). The signal g(t) is g(t) = 8 + 6 cos(2πt).
Figure E8.9B(2) The filter output of the system for Example 8.9(b)
The transfer function of the RC filter is T(f) = 1/(1 + j2πf).
Since the input comprises components only at dc (f = 0) and f = 1 Hz, we evaluate T(0) = 1 and
T(1) = 1/(1 + j2π) = 0.1572∠−80.96°. Then the output y(t) is given by
y(t) = 8 + (6)(0.1572)cos(2πt − 80.96°) = 8 + 0.9431 cos(2πt − 80.96°)
(c) Consider the system shown in Figure E8.9C(1). Sketch g(t) and find the output spectrum Y [k].
Figure E8.9C(1) The system for Example 8.9(c)
The output of the nonlinear system is unity whenever the input equals or exceeds unity. Now, since
x(t) = √2 cos(0.5πt) equals 1 at t = ±0.5, the output g(t) describes a rectangular pulse train, as shown in
Figure E8.9C(2).
Figure E8.9C(2) The output g(t) of the system for Example 8.9(c)
The Fourier series coefficients of g(t) are G[k] = 0.25 sinc(0.25k). Since f0 = 1/T = 0.25 Hz, the spectrum
Y[k] of the ideal filter contains harmonics only up to |k| = 4 (or 1 Hz), as shown.
The largest harmonic is c1 at the fundamental frequency ω0. The magnitude of the filter output (the capacitor
voltage) due to the component at ω0 equals
|V1| = |c1 H(ω0)| = |c1/(1 + jω0τ)| ≈ |c1/(ω0τ)|, ω0τ ≫ 1
The dc output is simply V0 = c0. For R < 1%, we require |V0/V1| > 100 and thus
|c0ω0τ/c1| > 100 or τ > |100c1/(c0ω0)| = 25/f0
The time constant τ can now be used to select values for R and C (if f0 is specified).
(Figure 8.11: a sinusoid clipped below its peak value over one period, and the signal loss L and third-harmonic distortion HD3 plotted against the clipping angle in degrees.)
can happen, for example, if the amplifier reaches saturation and clips the input sinusoid to levels below its
peak value, as shown in Figure 8.11(a).
The clipping angle φ serves as a measure of the distortion. The larger its value, the more severe is the
distortion. The Fourier series of the clipped signal contains harmonics other than the desired fundamental,
and the magnitude of the fundamental is reduced from the ideal case. This reduction results in what is often
called signal loss. It is the difference between the gain A of an ideal amplifier and the (amplified) level of
the fundamental component c1 at the output, expressed as a ratio
L = |(A − c1A)/A| = |1 − c1|    (8.40)
Total harmonic distortion (THD) is a measure of the power contained in the unwanted harmonics as
compared to the desired output. It is defined as the square root of the ratio of the power Pu in the unwanted
harmonics and the power P1 in the desired component (the fundamental). Since the unwanted power Pu is
simply the difference between the total ac power PAC and the power P1 in the fundamental, we have
THD = (Pu/P1)^{1/2} = [(PAC − P1)/P1]^{1/2}    (total harmonic distortion)    (8.41)
Note that Pu describes only the power in the harmonics and excludes the dc component (even if present).
The square root operation is used because it has been customary to deal with rms values instead of power.
We can also find the distortion HDk due only to the kth harmonic as
HDk = (Pk/P1)^{1/2} = |ck/c1|    (kth harmonic distortion)    (8.42)
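As a concrete check of the definition, consider an ideal square wave with levels ±1, whose polar coefficients ck = 4/(kπ) for odd k are a standard result (the square-wave choice is an assumed example, not part of the text):

```python
import math

# THD of a unit square wave (levels +/-1, dc = 0): ck = 4/(k*pi) for odd k
P1 = 0.5 * (4 / math.pi) ** 2      # power in the fundamental
Pac = 1.0                          # total ac power of the +/-1 square wave
thd = math.sqrt((Pac - P1) / P1)   # Eq. (8.41)
```

This yields the well-known value THD ≈ 0.483, or about 48.3%, for a square wave.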
(b) The Fourier series coefficients of the symmetrically clipped sine of Figure 8.11 can be found to be
c1 = (1/π)[sin 2φ + π − 2φ]    ck = (2/kπ)[sin((k + 1)φ)/(k + 1) − sin((k − 1)φ)/(k − 1)] (k odd)
The signal wD(t) corresponding to WD[k] is called the Dirichlet kernel. It represents the sum of harmonics
of unit magnitude at multiples of f0, and can be written as the summation
wD(t) = Σ_{k=−N}^{N} e^{j2πkf0t}    (8.44)
From tables (at the end of this book), the closed form for wD(t) is
wD(t) = M sinc(Mf0t)/sinc(f0t)
where M = 2N + 1. This kernel is periodic with period T = 1/f0 and has some very interesting properties,
as Figure 8.12 illustrates. Over one period,
1. Its area equals T, it attains a maximum peak value of M at t = 0, and its value at t = 0.5T is ±1.
2. It shows N maxima, a positive mainlobe of width 2T/M, and decaying positive and negative sidelobes
of width T/M, with 2N zeros at kT/M, k = 1, 2, . . . , 2N.
3. The ratio R of the mainlobe height and the peak sidelobe magnitude stays nearly constant (between 4
and 4.7) for finite M, and R ≈ 1.5π ≈ 4.71 (or 13.5 dB) for very large M.
4. Increasing M increases the mainlobe height and compresses the sidelobes. As M → ∞, wD(t) ap-
proaches an impulse with strength T.
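The closed form and the peak properties are easy to confirm numerically (a sketch with f0 = 1 Hz, so T = 1; the test point is arbitrary):

```python
import cmath, math

def wD(t, N):
    # Dirichlet kernel as the direct sum of 2N+1 unit harmonics (Eq. 8.44), f0 = 1
    return sum(cmath.exp(2j * math.pi * k * t) for k in range(-N, N + 1)).real

N = 10
M = 2 * N + 1
t = 0.123                                                    # arbitrary test point
closed = math.sin(M * math.pi * t) / math.sin(math.pi * t)   # M*sinc(M*t)/sinc(t)
```

The direct sum matches the closed form, peaks at M = 21 at t = 0, and has unit magnitude at t = 0.5T.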
(Figure 8.12: the Dirichlet kernel for T = 1 and N = 3, 5, and 10, with peak values 7, 11, and 21.)
Since XN[k] = X[k]WD[k], the reconstructed signal xN(t) equals the periodic convolution wD(t) ⊛ xp(t):
xN(t) = wD(t) ⊛ xp(t) = (1/T) ∫_T wD(λ)xp(t − λ) dλ    (8.46)
If xp(t) contains discontinuities, xN(t) exhibits an overshoot and oscillations near each discontinuity, as
shown in Figure 8.13.
Figure 8.13 The Dirichlet kernel leads to overshoot when reconstructing signals with jumps
With increasing N, the mainlobe and sidelobes become taller and narrower, and the overshoot (and
associated undershoot) near the discontinuity becomes narrower but more or less maintains its height.
It can be shown that if a periodic function xp(t) jumps by Jk at the times tk, the Fourier reconstruction,
as N → ∞, yields not only the function xp(t) but also a pair of straight lines at each tk, which extend
equally above and below the discontinuity by an amount Ak, given by
Ak = [(1/π)si(π) − 1/2]Jk = 0.0895Jk    (8.47)
where si(x) is the sine integral, defined as
si(x) = ∫_0^x [sin(ξ)/ξ] dξ    (8.48)
The lines thus extend by about 9% on either side of the jump Jk, as shown in Figure 8.14. This fraction is
independent of both the nature of the periodic signal and the number of terms used, as long as N is large.
We are of course describing the Gibbs effect.
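The 9% figure can be observed directly from a truncated series. The sketch below builds the partial sum of a square wave with levels ±1 (jump J = 2 at t = 0) and measures the peak overshoot as a fraction of the jump (the harmonic count and sampling grid are assumptions of the sketch):

```python
import math

def xN(t, N):
    # Partial Fourier sum of a square wave with levels +/-1, T = 1, jump at t = 0
    return sum((4 / (k * math.pi)) * math.sin(2 * math.pi * k * t)
               for k in range(1, N + 1, 2))

N = 199
peak = max(xN(i / 100000, N) for i in range(1, 2000))   # search near the jump
overshoot = (peak - 1) / 2                              # fraction of the jump J = 2
```

The measured overshoot stays very close to 0.0895, more or less independent of N once N is large.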
Figure 8.14 The Gibbs effect occurs when reconstructing signals with jumps
This clearly reveals a triangular weighting on the individual terms rk. For a periodic signal xp(t) with
rk = X[k]e^{j2πkf0t}, the arithmetic mean x̄N(t) of the partial sums may be written, by analogy, as
x̄N(t) = Σ_{k=−(N−1)}^{N−1} [(N − |k|)/N] X[k]e^{j2πkf0t} = Σ_{k=−N}^{N} WF[k]X[k]e^{j2πkf0t}    (8.54)
In this result, WF[k] describes a tapered triangular window, called the Bartlett window, whose weights
decrease linearly with |k|. With WF[±N] = 0, we may write
WF[k] = tri(k/N) = 1 − |k|/N, −N ≤ k ≤ N    (8.55)
The reconstructed signal x̄N(t) is
x̄N(t) = (1/T) ∫_{−T/2}^{T/2} wF(λ)xp(t − λ) dλ    (8.56)
Figure 8.15 The Fejer kernel leads to a smooth reconstruction of signals with jumps
The quantity WL [k] = sinc(k/N ) describes the Lanczos window, which has a sinc taper. Its kernel, wL (t),
reduces the reconstruction overshoot to less than 1.2% (as compared with 9% for the Dirichlet kernel) but
does not eliminate it (as the Fejer kernel does). The reconstruction, however, shows a much steeper slope at
the discontinuities as compared with the Fejer kernel.
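The effect of a taper is easy to demonstrate numerically: applying Bartlett (Fejer) weights to the same square-wave partial sum removes the overshoot entirely, at the price of a slower rise near the jump. (A sketch; the square wave and the value of N are assumed examples.)

```python
import math

def partial(t, N, w=lambda k, N: 1.0):
    # Windowed partial sum of a square wave with levels +/-1 and T = 1
    return sum(w(k, N) * (4 / (k * math.pi)) * math.sin(2 * math.pi * k * t)
               for k in range(1, N + 1, 2))

def bartlett(k, N):
    return 1 - k / N        # triangular taper, zero at k = N

N = 99
ts = [i / 20000 for i in range(1, 2000)]
peak_rect = max(partial(t, N) for t in ts)            # rectangular window
peak_bart = max(partial(t, N, bartlett) for t in ts)  # Bartlett window
```

The rectangular-window peak exceeds the level of 1 by roughly 9% of the jump, while the Bartlett-windowed reconstruction never overshoots it.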
Here, θxy is the angle subtended by the two vectors, and X and Y represent the lengths of the vectors X
and Y, respectively. The inner product of two vectors equals zero if the angle they subtend is 90°, whereas
the inner product of a vector with itself simply equals the square of its length.
(Figure: a vector X, its projection X̃ onto the subspace spanned by the basis vectors, and the error X − X̃, shown in two and in three dimensions.)
To extend this idea to function spaces, let x(t) be a periodic signal with period T, approximated by x̃(t)
in terms of the orthogonal set of basis functions φk(t) = e^{j2πkf0t} as
x̃(t) = Σ_{k=−N}^{N} αk φk(t)    (8.62)
In analogy with vectors, this relation suggests that the coefficients αk describe the projections of x(t) on
the basis functions φk(t). For a least squares solution, the αk must therefore be chosen such that the error
x(t) − x̃(t) is orthogonal to every basis function φk(t). Since the φk(t) are complex, we require
∫_T [x(t) − x̃(t)]φk*(t) dt = 0, k = 0, ±1, . . . , ±N    (8.63)
Substitution for x̃(t) and separation of terms results in
∫_T [x(t) − x̃(t)]φk*(t) dt = ∫_T x(t)φk*(t) dt − ∫_T [Σ_m αm φm(t)]φk*(t) dt = 0    (8.64)
By orthogonality, the second integral reduces to αk T, where T represents the energy Ek in φk(t), and we have
αk = (1/T) ∫_T x(t)φk*(t) dt = (1/T) ∫_T x(t)e^{−j2πkf0t} dt    (8.66)
The αk correspond exactly to the Fourier series coefficients X[k] and yield a least squares estimate for x(t).
Of course, the least squares method may also be implemented at a brute-force level if we minimize the mean
squared error by setting its derivative (with respect to αk) to zero. This approach yields the same results.
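The equivalence is easy to demonstrate on a discretized example: computing αk by the projection formula and then perturbing one coefficient can only increase the mean squared error. (A sketch; the test signal cos³(2πt) and the grid size are assumptions.)

```python
import cmath, math

n, T = 2000, 1.0
ts = [i * T / n for i in range(n)]
xs = [math.cos(2 * math.pi * t) ** 3 for t in ts]   # = 0.75 cos(2*pi*t) + 0.25 cos(6*pi*t)

def alpha(k):
    # Projection of Eq. (8.66), approximated by a Riemann sum
    return sum(xv * cmath.exp(-2j * math.pi * k * t)
               for t, xv in zip(ts, xs)) / n

def mse(coeffs):
    # Mean squared error of the approximation built from coeffs {k: alpha_k}
    err = 0.0
    for t, xv in zip(ts, xs):
        approx = sum(a * cmath.exp(2j * math.pi * k * t) for k, a in coeffs.items())
        err += abs(xv - approx) ** 2
    return err / n

best = {k: alpha(k) for k in (-3, -1, 1, 3)}
```

Here alpha(1) returns 0.375 and alpha(3) returns 0.125 (the exponential coefficients of cos³), mse(best) is essentially zero, and replacing alpha(1) by any other value raises the error.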
8.11.1 Existence
The idea of existence may be justified by defining the coefficients of the series by the proposed Fourier
relations and then showing that the series with these coefficients does indeed represent x(t). We start from
the opposite end and assume the coefficients can be found using the prescribed relations. Otherwise, the
existence of Fourier series is very difficult to prove in a formal, rigorous sense. This approach requires that
the Fourier series coefficients |X[k]| be finite. Thus,
|X[k]| ≤ (1/T) ∫_T |x(t)||e^{−j2πkf0t}| dt < ∞    (8.67)
Since |e^{−j2πkf0t}| = 1, we have
∫_T |x(t)| dt < ∞    (8.68)
Thus, x(t) must be absolutely integrable over one period. A consequence of absolute integrability is the
Riemann-Lebesgue theorem, which states that the coefficients X[k] approach zero as k → ∞:
lim_{k→∞} ∫_T x(t)e^{−j2πkf0t} dt = 0    (8.69)
8.11.2 Convergence
This result also leads us toward the idea of convergence of the series with the chosen coefficients. Convergence
to the actual function x(t) for every value of t is known as uniform convergence. This requirement is
satisfied for every continuous function x(t), since we are trying to represent a function in terms of sines and
cosines, which are themselves continuous. We face problems when reconstructing functions with jumps. If
we require convergence to the midpoint of the jump at each discontinuity, we sidestep such problems. In
fact, this is exactly the condition that obtains for functions with jump discontinuities.
Even though our requirement calls for x(t) to be absolutely integrable over one period, it includes square
integrable (energy) signals for which

∫_T |x(t)|² dt < ∞    (8.70)
Obviously, every continuous or piecewise continuous signal, over one period, is square integrable. For such a
signal, it turns out that the Fourier series also converges to x(t) in the mean (or in the mean squared sense),
in that as more terms are added to the truncated series xN(t), the mean squared error decreases
and approaches zero as N → ∞:

∫_T |x(t) − xN(t)|² dt → 0 as N → ∞    (8.71)

A consequence of this result is that if xN(t) converges in the mean to x(t), it cannot converge to any other
function. In other words, x(t) is uniquely represented by xN(t) as N → ∞, even though the series may not
converge pointwise to x(t) at every value of t.
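Convergence in the mean is easy to observe numerically. The following sketch (Python/NumPy for illustration; the square wave and the particular values of N are arbitrary choices) shows the mean squared error of the truncated series shrinking as harmonics are added:

```python
import numpy as np

# Square wave over one period [0, 1): x = 1 on [0, 0.5), -1 on [0.5, 1).
M = 4096
t = np.arange(M) / M
x = np.where(t < 0.5, 1.0, -1.0)

def mse_of_truncation(N):
    """Mean squared error after truncating the Fourier series to |k| <= N."""
    k = np.arange(-N, N + 1)
    Xk = np.array([np.mean(x * np.exp(-2j * np.pi * kk * t)) for kk in k])
    xN = (np.exp(2j * np.pi * np.outer(t, k)) @ Xk).real
    return np.mean((x - xN) ** 2)

errs = [mse_of_truncation(N) for N in (1, 5, 25, 125)]
print(errs)          # steadily decreasing toward zero
```

The error never reaches zero for any finite N (the jump cannot be matched pointwise), but it decreases monotonically, which is exactly what Equation (8.71) asserts.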
8.11.3 Uniqueness
Uniqueness means that if two periodic signals x(t) and y(t) possess the same Fourier series, then they are
equal. Here, equality still allows the two functions to differ at a finite set of points (such as points
of discontinuity) over one period. For signals that are square integrable over one period, convergence in the
mean implies uniqueness. An important result in this regard is the converse of Parseval's relation, known as
the Riesz-Fischer theorem, which, loosely stated, tells us that any set of coefficients X[k] with finite total power (P < ∞),

P = Σ_{k=−∞}^{∞} |X[k]|² < ∞    (8.72)

is the Fourier series of some square integrable periodic signal.
Since its inception, the theory of Fourier series has been an area of intense mathematical activity and
research and has resulted in many advances, most of little concern from a practical standpoint and
well beyond our scope. But the convergence problem remains unsolved. No set of necessary and sufficient
conditions has yet been formulated.
8.12.1 Prologue
To read between Jacobi's lines, we must realize that Fourier was more an engineer or physicist than a
mathematician. If we bear in mind the two key ingredients of the Fourier series, harmonically related
sinusoids and the independence of the coefficients, we find that in this specific context, three facts emerge:
1. Fourier's own memoirs indicate quite a lack of mathematical rigor in the manner in which he obtained
his results concerning Fourier series.
2. Fourier certainly made no claim to originality in obtaining the formulas for calculating the coefficients,
and such formulas were, in fact, described by others years earlier.
3. Fourier was not the first to propose the use of harmonically related sinusoids to represent functions.
This, too, was suggested well before his time.
What, then, is Fourier's claim to fame in the context of the series named after him? To be honest, his
single contribution was to unify these ideas, look beyond the subtle details, and proclaim that any arbitrary
function could be expressed as an infinite sum of harmonically related sinusoids. That Fourier realized
the enormity of his hypothesis is evident from the fact that he spent his last years basking in the glory of
his achievements, in the company of many a sycophant. But it is due more to the achievements of other
celebrated mathematicians, both contemporaries and successors, who sought to instill mathematical rigor
by filling in and clarifying the details of his grand vision and, in so doing, invented concepts that now form
the very fabric of modern mathematical analysis, that Fourier's name is immortalized.
8.12 A Historical Perspective 235
denying their very existence for an arbitrary function. In any event, the propagation of heat was made the
subject of the grand prix de mathematiques for 1812. It is curious that there was only one other candidate
besides Fourier! Whether this topic was selected because of Fourier's goading and the implied challenge to
his critics or because of the genuine interest of the referees in seeing his work furthered remains shrouded in
mystery. Fourier submitted his prize paper late in 1811, and even though he won the prize, the judges (the
famous trio of Laplace, Lagrange, and Legendre among them) were less than impressed with the rigor of
Fourier's methods and decided to withhold publication in the Academy's memoirs.
Remember that 1812 was also a time of great turmoil in France. Napoleon's escape from Elba in 1815
and the second restoration led to a restructuring of the Academy, to which Fourier was finally nominated in
1816 amidst protests and elected only a year later. And the year 1822 saw his election to the powerful post
of Secretary of the Academy and, with it, the publication of his treatise on heat conduction, which contained,
without change, his first paper of 1807. That he had resented criticism of his work all along became evident
when, a couple of years later, he caused publication in the Academy's memoirs of his prize-winning paper
of 1811, exactly as it was communicated. Amidst all the controversy, however, the importance of his ideas
did not go unrecognized (Lord Kelvin, a physicist himself, called Fourier's work "a great mathematical poem")
and soon led to a spate of activity by Poisson, Cauchy, and others. Cauchy, in particular, took the first step
toward lending mathematical credibility to Fourier's work by proposing in 1823 a definition of an integral as
the limit of a sum and giving it a geometric interpretation. But it was Dirichlet's decade of dedicated work
that truly patched the holes, sanctified Fourier's method with rigorous proofs, and set the stage for further
progress. He not only extended the notion of a function, giving a definition that went beyond geometric
visualization, but, more important, he provided seminal results on the sufficient conditions for the convergence
of the series (1829) and the criterion of absolute integrability (1837), which we encountered toward the end
of this chapter.
Motivated by Dirichlet, Riemann, after completing his doctorate, started work on his probationary essay
on trigonometric series in 1851 in the hope of gaining an academic post at the University of Göttingen. This
essay, completed in 1853, was unfortunately published only in 1867, a year after he died. Meanwhile, his
probationary lecture on the hypotheses that lie at the foundations of geometry, on June 10, 1854, shook its
very foundations and paved the way for his vision of a new differential geometry (which was to prove crucial
to the development of the theory of relativity). But even his brief excursion into Fourier series resulted in
two major advances. First, Riemann widened the concept of the integral of a continuous function beyond
Cauchy's definition as the limit of a sum, to include functions that were neither continuous nor limited to
a finite number of discontinuities. Second, and more important, he proved that, for bounded integrable
functions, the Fourier coefficients tend to zero as the harmonic index increases without limit, suggesting that
convergence at a point depends only on the behavior of the function in the vicinity of that point.
The issues of convergence raised by Riemann in his essay, and the problems of convergence in general,
motivated several personalities who made major contributions, Heine and Cantor among them. Heine in
1870 established uniform convergence (first conceived in 1847) at all points except at discontinuities for
functions subject to the Dirichlet conditions. Cantor's studies, on the other hand, probed the foundations
of analysis and led him in 1872 to propose the abstract theory of sets.
There were other developments in the nineteenth century, too. In 1875 du Bois-Reymond showed that if
a trigonometric series converges to an integrable function x(t), then that series must be the Fourier series for
x(t). And a few years later, in 1881, Jordan introduced the concept of functions of bounded variation, leading
to his own convergence conditions. Interestingly enough, Parseval, who proposed a theorem for summing
series of products in 1805 that now carries his name, was not directly involved in the study of Fourier series,
and his theorem seems to have been first used in this context nearly a century later, in 1893. The continuous
form, to be encountered in the chapter on Fourier transforms and often referred to as the energy relation,
was first used by Rayleigh in connection with blackbody radiation. Both forms were later (much later, in
the twentieth century, in fact) generalized by Plancherel.
The Gibbs effect was apparently first observed and described by the English mathematician Wilbraham.
However, it was made more widely known by the American physicist Michelson (the famous Michelson-
Morley experiment of 1887 is dubbed the greatest experiment in physics that ever failed, because its
negative results led to a firm foundation for the theory of relativity). Michelson, who had developed a
harmonic analyzer in 1898, could not get perfect square-wave reconstruction (without overshoot) by adding
its 80 harmonics. Convinced that it was not an artifact of the device itself, he described his problem in a
letter to the scientific journal Nature. A year later, in 1899, Gibbs (who is remembered more for his work
in thermodynamics leading to the phase rule and, lest we forget, for introducing the notation · and × to
represent the dot product and cross product of vectors) offered an explanation of the effect that now bears
his name in a letter to the same journal. The story goes that Michelson had written to Gibbs to seek an
explanation for his observations. It is curious that the collected works of Gibbs (which include his letters
to Michelson) make no reference to such correspondence. Gibbs' letter to Nature dealt with a sawtooth
waveform, did not contain any proof, and received scant attention at the time. Only in 1906 did Bôcher
demonstrate how the Gibbs effect actually occurs for any waveform with jump discontinuities.
In his studies on Fourier series smoothing, Fejér was led to propose summation methods usually reserved
for divergent series. In 1904 he showed that summation by the method of arithmetic means results in a
smoothing effect at the discontinuities and the absence of the Gibbs phenomenon. This was to be the
forerunner of the concept of spectral windows that are now so commonly used in spectral analysis.
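The smoothing produced by arithmetic means is easy to demonstrate numerically. In the sketch below (Python/NumPy for illustration; the square wave and the choice N = 79 are arbitrary, loosely echoing the 80-harmonic reconstruction mentioned above), the ordinary partial sum overshoots the jump while the arithmetic-mean (Fejér) sum, obtained by applying triangular weights to the same coefficients, does not:

```python
import numpy as np

# Square wave on [0, 1): compare the partial (Dirichlet) sum with the
# arithmetic-mean (Fejer) sum built from the same coefficients.
M = 8192
t = np.arange(M) / M
x = np.where(t < 0.5, 1.0, -1.0)

N = 79                                   # highest harmonic retained
k = np.arange(-N, N + 1)
Xk = np.array([np.mean(x * np.exp(-2j * np.pi * kk * t)) for kk in k])
E = np.exp(2j * np.pi * np.outer(t, k))

partial = (E @ Xk).real                              # unit weights: Gibbs overshoot
fejer = (E @ (Xk * (1 - np.abs(k) / (N + 1)))).real  # triangular (Fejer) weights

print(partial.max())     # exceeds 1 (about 1.09 near each jump)
print(fejer.max())       # does not overshoot the signal's maximum of 1
```

The triangular weighting of the coefficients is precisely the spectral-window idea referred to here: the window trades sharper reproduction of the jump for freedom from overshoot.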
8.12.5 Epilogue
In the words of Fourier, "The profound study of nature is the most fertile source of mathematical discoveries."
Be that as it may, it surely would have been difficult, even for Fourier himself, to predict the household
recognition of his name within the scientific world that would follow from his own study of a tiny aspect of
that nature. Tiny or not, let us also remember that every time we describe a definite integral by the notation
∫_a^b, we are using yet another one of his gifts. What else is there to say but viva Fourier!
238 Chapter 8 Fourier Series
CHAPTER 8 PROBLEMS
DRILL AND REINFORCEMENT
8.1 (Fourier Series Concepts) Express each of the following periodic signals by its Fourier series in all
three forms, identify the time period and the harmonics present, compute the signal power and the
rms value, and, for parts (a) and (b), sketch the one-sided and two-sided spectra.
(a) x(t) = 4 + 2 sin(4πt + π/4) + 3 sin(16πt) + 4 cos(16πt)
(b) x(t) = Σ_{k=−4, k≠0}^{4} (6/k) sin(kπ/2) e^{jkπ/3} e^{j6kπt}
(a) Identify the fundamental frequency f0 and time period T.
(b) Identify the Fourier series coefficients ak, bk, ck, θk, and X[k].
(c) Identify any symmetry (true and hidden) present in x(t).
8.4 (Fourier Series Coefficients) Sketch a few periods of each of the following periodic signals described
over one period, and find the indicated Fourier series coefficient.
(a) X[k] for x(t) = e^{−t}, 0 ≤ t ≤ 1, with T = 1
(b) ak for x(t) = rect(t − 0.5), with T = 2
(c) bk for x(t) = (1 + t), 0 ≤ t ≤ 1, with T = 1
8.5 (Symmetry) The magnitude and phase spectra of two periodic signals are shown in Figure P8.5.
(a) Identify the harmonics present in the Fourier series of each signal.
(b) Identify the symmetry (hidden or otherwise) in each periodic signal (if any).
(c) Write out the Fourier series of each signal in polar form.
(d) Find the signal power and the rms value of each signal.
8.6 (Symmetry) A periodic signal x(t) is described by x(t) = t, 0 ≤ t ≤ 1, over a portion of its time
period T. Sketch this periodic signal over −2T ≤ t ≤ 2T and indicate what symmetry it possesses
about t = T/2 and t = T/4 for the following cases.
(a) x(t) possesses only even symmetry, and T = 2.
(b) x(t) possesses only odd symmetry, and T = 2.
(c) x(t) possesses even and half-wave symmetry, and T = 4.
(d) x(t) possesses odd and half-wave symmetry, and T = 4.
8.7 (Fourier Series) For each periodic signal shown in Figure P8.7,
(a) Compute the Fourier series coefficients ak, bk, and X[k] and, where appropriate, simplify for
odd k and even k. Evaluate special or indeterminate cases separately, if necessary.
(b) Compute the signal power in the fundamental component.
(c) Compute the signal power up to the fourth harmonic.
(d) Compute the total signal power.
(e) Identify the convergence rate.
8.8 (Properties) The magnitude and phase spectra of a periodic signal x(t) are shown in Figure P8.8.
(a) Write out the Fourier series in polar form and simplify where appropriate.
(b) Sketch the magnitude and phase spectra of f (t) = x(2t).
(c) Sketch the magnitude and phase spectra of g(t) = x(t − 1/6).
(d) Sketch the magnitude and phase spectra of h(t) = x′(t).
[Figure P8.8: two-sided magnitude spectrum and phase spectrum (phase in degrees, between −90 and 90) versus f (Hz), with spectral components at f = ±1 Hz and ±3 Hz.]
Figure P8.8 The spectra for Problem 8.8
8.9 (Properties) Let X[k] be the exponential Fourier series coefficients of a periodic signal x(t). Find
the Fourier series coefficients of the following:
(a) f(t) = x(2t)  (b) g(t) = x(−t)  (c) h(t) = x(−2t)  (d) y(t) = 2 + x(2t)
8.10 (Derivative Method) Sketch each periodic signal x(t) described over one period (with T = 1) and
find its exponential Fourier series coefficients X[k], using the derivative method.
(a) x(t) = rect(2t)  (b) x(t) = e^{−t} rect(t − 0.5)  (c) x(t) = t rect(t)  (d) x(t) = tri(2t)
8.11 (Convergence and the Gibbs Effect) Consider a periodic signal with time period T = 2 whose
one period is described by x(t) = tri(2t). Answer the following without computing its Fourier series
coefficients.
(a) What is the convergence rate? Will its Fourier series reconstruction show the Gibbs effect? If
so, what is the peak overshoot at each discontinuity?
(b) What value will its Fourier series converge to at t = 0, t = 0.25, t = 0.5, and t = 1?
8.12 (Convergence and the Gibbs Effect) Consider a periodic signal with time period T = 2 whose
one period is described by x(t) = 6 rect(t − 0.5). Answer the following without computing its Fourier
series coefficients.
(a) What is the convergence rate? Will its Fourier series reconstruction show the Gibbs effect? If
so, what is the peak overshoot at each discontinuity?
(b) What value will its Fourier series converge to at t = 0, t = 0.5, t = 1, and t = 1.5?
8.13 (Convergence and the Gibbs Effect) Consider a periodic signal with time period T = 2 whose
one period is described by x(t) = 6 rect(t) sgn(t). Answer the following without computing its Fourier
series coefficients.
(a) What is the convergence rate? Will its Fourier series reconstruction show the Gibbs effect? If
so, what is the peak overshoot at each discontinuity?
(b) What value will its Fourier series converge to at t = 0, t = 0.5, t = 1, and t = 1.5?
8.14 (Convergence and the Gibbs Effect) Consider a periodic signal with time period T = 2 whose one
period is described by x(t) = 4e^{−t} rect(t − 0.5). Answer the following without computing its Fourier
series coefficients.
(a) What is the convergence rate? Will its Fourier series reconstruction show the Gibbs effect? If
so, what is the peak overshoot at each discontinuity?
(b) What value will its Fourier series converge to at t = 0, t = 0.5, t = 1, and t = 1.5?
8.15 (Modulation) A periodic signal x(t) is described by x(t) = 2 + Σ_{k=1}^{4} (6/k) sin²(kπ/2) cos(1600πkt).
(a) Sketch the two-sided spectrum of x(t).
(b) Sketch the two-sided spectrum of the modulated signal y(t) = x(t) cos(1600πt).
(c) Find the signal power in x(t) and in the modulated signal y(t).
8.16 (System Analysis) The periodic signal x(t) = |sin(250πt)| is applied to an ideal filter as shown:
x(t) → ideal filter → y(t)
Sketch the output spectrum of the filter and the time-domain output y(t) if
(a) The ideal filter blocks all frequencies past 200 Hz.
(b) The ideal filter passes only frequencies between 200 and 400 Hz.
(c) The ideal filter blocks all frequencies past 400 Hz.
8.17 (System Analysis) A periodic signal x(t) is described by x(t) = 2 + Σ_{k=1}^{∞} (6/k) sin²(kπ/2) cos(1600πkt).
This signal forms the input to the following system:
x(t) → ideal filter → RC lowpass filter (τ = 1 ms) → y(t)
Sketch the output spectrum of the ideal filter and the time-domain output y(t) if
(a) The ideal filter blocks all frequencies past 200 Hz.
(b) The ideal filter passes only frequencies between 200 Hz and 2 kHz.
(c) The ideal filter blocks all frequencies past 2 kHz.
8.18 (System Analysis) The signal x(t) = |10 sin(πt)| volts is applied to each circuit shown in Figure P8.18.
Assume that R = 1 Ω, C = 1 F, and L = 1 H. For each circuit,
(a) Find the Fourier series coecients c0 and ck for the signal x(t).
(b) Find the dc component of the output y(t).
(c) Find the fundamental component of the output y(t).
(d) Find the power in y(t) up to (and including) the second harmonic.
[Figure P8.18: six circuits (Circuit 1 through Circuit 6) built from R, L, and C elements, each with input x(t) and output y(t).]
Figure P8.18 The circuits for Problem 8.18
8.19 (Application) The input to an amplifier is x(t) = cos(10πt), and the output of the amplifier is given
by y(t) = 10 cos(10πt) + 2 cos(30πt) + cos(50πt).
(a) Compute the third harmonic distortion.
(b) Compute the total harmonic distortion.
8.20 (Application) The signal x(t) = sin(10πt) is applied to the system whose output is y(t) = x³(t).
(a) Which harmonics are present in the output y(t)?
(b) Compute the total harmonic distortion and the third harmonic distortion (if any) in y(t).
8.21 (Application) A square wave with zero dc offset and period T is applied to an RC circuit with time
constant τ. The output is the capacitor voltage. Without extensive computations,
(a) Sketch the filter output if τ = 100T.
(b) Sketch the filter output if τ = 0.001T.
8.22 (Design) We wish to design a dc power supply using the following scheme:
8.23 (Design) We wish to design a dc power supply using the following scheme:
x(t) → half-wave rectifier → RC lowpass filter → y(t)
The input is a pure sine wave with T = 0.02 s. What value of τ, the filter time constant, will ensure a
ripple of less than 1% in the filter output y(t)? What value of C is required if R = 1 kΩ?
8.26 (Fourier Series) Sketch the periodic signal x(t) whose nonzero Fourier series coefficients are
bk = ∫_0^2 12t sin(0.5kπt) dt
8.27 (Fourier Series) Sketch the periodic signal x(t) whose nonzero Fourier series coefficients are
ak = ∫_0^1 12t cos(0.5kπt) dt  (k odd)
8.28 (Modulation) The signal x(t) = cos(2πfC t) is modulated by an even symmetric periodic square wave
s(t) with time period T and one period described by s(t) = rect(t/τ). Find and sketch the spectrum
of the modulated signal if fC = 1 MHz, T = 1 ms, and τ = 0.1 ms.
8.29 (Application) The signal x(t) = sin(10πt) is applied to the system whose output is y(t) = |x(t)|.
(a) Sketch the output y(t). Which harmonics are present in y(t)?
(b) Compute the total harmonic distortion and the third harmonic distortion (if any) in y(t).
8.30 (Application) The signal x(t) = sin(10πt) is applied to the system whose output is y(t) = sgn[x(t)].
(a) Sketch the output y(t). Which harmonics are present in y(t)?
(b) Compute the total harmonic distortion and the third harmonic distortion (if any) in y(t).
8.31 (Application) Consider two periodic signals g(t) and h(t) described by

g(t) = 2 + Σ_{k=1}^{∞} (4/k) cos(1600πkt)        h(t) = Σ_{k=1}^{∞} (8/k²) sin(800πkt)
(a) Each is passed through an ideal lowpass filter that blocks frequencies past 1 kHz to obtain the
filtered signals x(t) and y(t). Sketch the two-sided spectra of x(t) and y(t).
(b) The filtered signals are multiplied to obtain the signal w(t) = x(t)y(t). Sketch the two-sided
spectra of w(t).
(c) The signal w(t) is passed through an ideal lowpass filter that blocks frequencies past 1 kHz to
obtain the filtered z(t). Sketch z(t) and its two-sided spectra.
8.32 (Fourier Series) The Fourier series coefficients of a periodic signal x(t) with period T = 2 are zero
for k ≥ 3. It is also known that x(t) = x(−t) and x(t) = −x(t − 1). The signal power in x(t) is 4. Find
an expression for x(t).
8.33 (Fourier Series) A periodic signal x(t) with period T = 0.1 s is applied to an ideal lowpass filter
that blocks all frequencies past 15 Hz. Measurements of the filter output y(t) indicate that yav = 2,
yrms = 3, and y(0) = 4. Find an expression for y(t).
8.34 (System Analysis) A rectangular pulse train with a duty ratio of 0.5 is described over one period
by x(t) = rect(t). This signal is applied to a system whose differential equation is y′(t) + y(t) = x(t).
(a) What type of symmetry (if any) is present in the input x(t)?
(b) What type of symmetry (if any) will be present in the output y(t)?
(c) Compute the dc component and second harmonic component of y(t).
(d) Will the Fourier series reconstruction of y(t) exhibit the Gibbs effect? Should it? Explain.
8.35 (Design) For a series RLC circuit excited by a voltage source, the voltage across the resistor peaks at
the resonant frequency ω0 = 1/√(LC). The sharpness of the frequency response increases with the quality
factor Q = ω0L/R. Specify the Q of an RLC circuit that is excited by a half-wave symmetric square
wave pulse train and produces a resistor voltage that is essentially a sinusoid at the 25th harmonic of
the input, with a contribution of less than 5% due to any other harmonics.
8.36 (Design) A periodic square wave signal with zero dc offset and T = 1 ms is applied to an RC lowpass
filter with R = 1 kΩ.
(a) Find C such that the output phase differs from the input phase by exactly 45° at 5 kHz.
(b) The half-power frequency is defined as the frequency at which the output power equals half the
input power. How is this related to the time constant of the RC filter? What is the half-power
frequency of the RC filter designed in part (a)?
8.37 (Convergence) The Fourier series of a periodic signal is x(t) = 2 + Σ_{k odd} (4/k) sin(kπt).
8.38 (Gibbs Effect) A periodic signal with convergence rate 1/k is applied to a series RC circuit.
(a) Will the capacitor voltage exhibit the Gibbs effect? Should it? Explain.
(b) Will the resistor voltage exhibit the Gibbs effect? Should it? Explain.
8.40 (Closed Forms for Infinite Series) Parseval's theorem provides an interesting approach to finding
closed-form solutions to infinite series. Starting with the Fourier series coefficients of each signal, use
Parseval's theorem to generate the following closed-form results.
(a) Σ_{k=1}^{∞} 1/k² = π²/6, using a sawtooth periodic signal
(b) Σ_{k=1,odd}^{∞} 1/k⁴ = π⁴/96, using a triangular periodic signal
(c) Σ_{k=1}^{∞} 1/(1 − 4k²)² = π²/16 − 0.5, using a full-wave rectified sine
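As a quick numerical sanity check on these closed forms (a Python sketch for illustration; the problem itself asks for Parseval-based derivations, and the truncation point is an arbitrary choice):

```python
import math

K = range(1, 200001)   # truncation point is arbitrary
s_a = sum(1.0 / k**2 for k in K)
s_b = sum(1.0 / k**4 for k in K if k % 2 == 1)
s_c = sum(1.0 / (1.0 - 4.0 * k**2) ** 2 for k in K)

print(abs(s_a - math.pi**2 / 6) < 1e-4)            # True
print(abs(s_b - math.pi**4 / 96) < 1e-9)           # True
print(abs(s_c - (math.pi**2 / 16 - 0.5)) < 1e-9)   # True
```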
8.41 (Spectral Bounds) Starting with x⁽ⁿ⁾(t) ⇔ (j2πkf0)ⁿ X[k] and the inequality
|∫ x(λ) dλ| ≤ ∫ |x(λ)| dλ, and noting that |e^{jθ}| = 1, prove that

|X[k]| ≤ (1/(T |2πkf0|ⁿ)) ∫_T |xp⁽ⁿ⁾(t)| dt

This result sets the nth bound on the spectrum in terms of the absolute area under the nth derivative
of xp(t). Since the derivatives of an impulse possess zero area, the number of nonzero bounds that
can be found equals the number of times we can differentiate before only derivatives of impulses occur.
Use this result to find all the nonzero spectral bounds (starting with n = 0) for the following periodic
signals defined over one period.
(a) x(t) = rect(t), with T = 2
(b) x(t) = tri(t), with T = 2
(c) x(t) = |sin(t)|, with T = π
8.42 (Smoothing Kernels) The periodic signal (Dirichlet kernel) d(t) corresponding to D[k] = rect(k/2N)
is d(t) = M sinc(Mf0t)/sinc(f0t), where M = 2N + 1. Use this result (and the convolution property) to find the
periodic signal f(t) (the Fejér kernel) corresponding to the triangular window F[k] = tri(k/M).
8.43 (Significance of Fourier Series) To appreciate the significance of Fourier series, consider a periodic
sawtooth signal x(t) = t, 0 < t < 1, with time period T = 1, whose nonzero Fourier series coefficients
are a0 = 0.5 and bk = −1/(kπ), k > 0. Its Fourier series up to the first harmonic is x1(t) = 0.5 − (1/π) sin(2πt).
(a) Find the power PT in x(t), the power P1 in x1(t), and the power error PT − P1.
(b) Approximate x(t) by y1(t) = A0 + B1 sin(2πt) as follows. Pick two time instants over (0, 1), say
t1 = 1/4 and t2 = 1/2, and substitute into the expression for y1(t) to obtain two equations (by
equating y1(t1) to x(t1) and y1(t2) to x(t2)) in the two unknowns A0 and B1.
(c) Solve for A0 and B1, find the power Py in y1(t), and find the power error PT − Py. Does x1(t)
or y1(t) have a smaller power error?
(d) Start with t1 = 1/6, t2 = 1/4, and recompute A0 and B1. Why are they different? Compute the
power error. Is the power error less than for part (c)? Is there a unique way to choose t1 and t2
to yield the smallest power error?
(e) If we want to extend this method to an approximation with many more harmonics and coefficients,
what problems do you expect?
(f) Argue that the Fourier series method of computing the coefficients is better.
8.44 (Orthogonal and Orthonormal Signals) A set of signals φk satisfying ∫_a^b φj φk* dt = 0, j ≠ k, is said
to be orthogonal over the interval (a, b). If, in addition, the energy Ek = ∫_a^b |φk|² dt = 1 in each φk,
the set is called orthonormal.
(a) Show that an even symmetric signal is always orthogonal to any odd symmetric signal over a
symmetric duration −α ≤ t ≤ α.
(b) Let x(t) = cos(2πt) and y(t) = cos(4πt). Are they orthogonal over 0 ≤ t ≤ 1?
(c) Let x(t) = cos(2πt) and y(t) = cos(4πt). Are they orthogonal over 0 ≤ t ≤ 0.25?
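Orthogonality over an interval can always be checked numerically by approximating the defining integral. A Python sketch using a midpoint-rule integral (the sample count is an arbitrary choice) for the two signals above:

```python
import numpy as np

def inner(x, y, a, b, n=200001):
    """Midpoint-rule approximation of the inner product integral over (a, b)."""
    t = a + (np.arange(n) + 0.5) * (b - a) / n
    return (b - a) * np.mean(x(t) * y(t))

x = lambda t: np.cos(2 * np.pi * t)
y = lambda t: np.cos(4 * np.pi * t)

print(abs(inner(x, y, 0.0, 1.0)) < 1e-8)    # True: inner product vanishes over a full period
print(abs(inner(x, y, 0.0, 0.25)) > 0.05)   # True: it does not vanish over (0, 0.25)
```

The contrast between the two intervals is the point of parts (b) and (c): harmonically related sinusoids are guaranteed orthogonal only over a full period.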
8.45 (Orthogonal Sets) A set of functions {φk} is orthogonal only if ∫_a^b φj φk* dt = 0, j ≠ k, for every pair
of signals in the set. Of the following sets, which are orthogonal and which are also orthonormal?
(a) {1, t, t²}, −1 ≤ t ≤ 1
(b) {e^{−t/2} u(t), (1 − t) e^{−t/2} u(t), (1 − 2t + 0.5t²) e^{−t/2} u(t)}
(c) {e^{−t²/2}, t e^{−t²/2}}, −∞ < t < ∞
(d) {e^{−t}, 2e^{−t} − 3e^{−2t}, 3e^{−t} − 12e^{−2t} + 10e^{−3t}}, t ≥ 0
8.46 (Generalized Fourier Series) A generalized Fourier series describes an energy signal x(t), a ≤ t ≤ b,
by a sum of orthogonal basis signals φk(t), k = 0, 1, . . ., over the same interval a ≤ t ≤ b as

x(t) = Σ_{k=0}^{∞} αk φk(t)

(a) Show that the relation for finding αk is αk = (1/Ek) ∫_a^b x(t) φk*(t) dt, where Ek is the energy in φk(t).
(b) Show that the energy in the generalized Fourier series equals Σ_k |αk|² Ek.
8.49 (What Did Michelson See?) As described in the historical perspective (Section 8.12), Gibbs'
explanation for the effect that bears his name was prompted by a letter from Michelson, who failed to
perfectly reconstruct a square wave from its first 80 harmonics using his harmonic analyzer.
(a) Use Matlab to reconstruct a square wave up to 80 harmonics and describe what Michelson might
have observed.
(b) The explanation offered by Gibbs was for the reconstruction of a sawtooth wave. Use Matlab
to reconstruct a sawtooth wave up to 80 harmonics and describe what Gibbs might have observed.
A numerical approach to estimating X[k] is based on sampling x(t) at N uniformly spaced instants
t = nts, 0 ≤ n ≤ N − 1, where ts = T/N, to generate N equations whose solution yields the X[k].
Argue that N = 2M + 1 such equations are needed to find all the coefficients X[k], −M ≤ k ≤ M.
We wish to use this approach for finding the Fourier series coefficients of
8.51 (Numerical Approximation of Fourier Series Coefficients) There is yet another way by which
we can approximate the Fourier series coefficients of a periodic signal x(t). We sample x(t) at N
uniformly spaced instants t = nts, 0 ≤ n ≤ N − 1, where ts = T/N. Argue that the integral for
computing the coefficients may be approximated by the summation

X[k] = (1/T) ∫_0^T x(t) e^{−j2πkt/T} dt ≈ (1/N) Σ_{n=0}^{N−1} x(nts) e^{−j2πkn/N}

We wish to use this approach for finding the Fourier series coefficients of
(a) x(t) = t, 0 ≤ t ≤ 1, with T = 1    (b) x(t) = tri(t), −1 ≤ t ≤ 1, with T = 2
Compute the X[k], 0 ≤ k ≤ N − 1, for N = 6 and N = 10, and compare with the exact results.
Comment on how the values of X[k], k > 0.5N, are related to X[N − k]. Can you explain why they
are related? What are the effects of increasing N on the accuracy of X[k] for the first few harmonics?
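A Python sketch of this approximation (for illustration; the summation is simply a DFT of the samples divided by N, here applied to the sawtooth of part (a) with N = 10):

```python
import numpy as np

# X[k] ~ (1/N) * sum_n x(n*ts) * exp(-j*2*pi*k*n/N): a DFT divided by N.
def approx_coeffs(samples):
    N = len(samples)
    n = np.arange(N)
    return np.array([np.sum(samples * np.exp(-2j * np.pi * k * n / N)) / N
                     for k in range(N)])

# Part (a): sawtooth x(t) = t on [0, 1) with T = 1, sampled at N = 10 points.
N = 10
ts = 1.0 / N
X = approx_coeffs(np.arange(N) * ts)

print(round(X[0].real, 4))                    # 0.45, near the exact a0 = 0.5
print(np.allclose(X[N - 1], np.conj(X[1])))   # True: X[N-k] = X*[k] for real x(t)
```

The conjugate relation X[N − k] = X*[k] holds because the samples are real: this is the aliasing the problem asks about, and it is why only about N/2 distinct coefficients can be recovered from N samples.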
Chapter 9
9.1 Introduction
The approach we adopt to develop the Fourier transform serves to unite the representations for periodic
functions and their aperiodic counterparts, provided the ties that bind the two are also construed to be the
very ones that separate them and are therefore understood in their proper context.
Recall that a periodic signal xp(t) with period T and its exponential Fourier series coefficients X[k] are
related by

xp(t) = Σ_{k=−∞}^{∞} X[k] e^{j2πkf0t}        X[k] = (1/T) ∫_{−T/2}^{T/2} xp(t) e^{−j2πkf0t} dt    (9.1)
If the period T of a periodic signal xp (t) is stretched without limit, the periodic signal no longer remains
periodic but becomes a single pulse x(t) corresponding to one period of xp (t). The transition from a periodic
to an aperiodic signal also represents a transition from a power signal to an energy signal.
The harmonic spacing f0 = 1/T approaches zero, and its Fourier series spectrum becomes a continuous
curve. In fact, if we replace f0 by an infinitesimally small quantity df → 0, the discrete frequency kf0 may be
replaced by the continuous frequency f. The factor 1/T in the integral relation means that the coefficients
X[k] approach zero and are no longer a useful indicator of the spectral content of the aperiodic signal x(t).
However, if we eliminate the dependence of X[k] on the offending factor 1/T in the integral and work with
T X[k], as follows,
T X[k] = ∫_{−T/2}^{T/2} xp(t) e^{−j2πkf0t} dt    (9.2)

the integral on the right-hand side often exists as T → ∞ (even though T X[k] is of indeterminate form), and
we obtain meaningful results. Further, since kf0 → f, the integral describes a function of f. As a result, we define

X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt    (9.3)
This relation describes the Fourier transform X(f) of the signal x(t) and may also be written in terms of
the frequency variable ω as

X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt    (the ω-form)    (9.4)

The Fourier transform provides a frequency-domain representation of the aperiodic signal x(t).
If T → ∞, resulting in the aperiodic signal x(t), it is the quantity T X[k] that describes its spectrum X(f),
and we must modify the above expression (multiply and divide by T) to give

xp(t) = Σ_{k=−∞}^{∞} (1/T)[T X[k]] e^{j2πkf0t} = Σ_{k=−∞}^{∞} [T X[k]] e^{j2πkf0t} f0    (9.6)

As T → ∞, we have f0 → df, kf0 → f, and T X[k] → X(f), and the summation tends to the integral

x(t) = ∫_{−∞}^{∞} X(f) e^{j2πft} df    (9.7)

This is the inverse Fourier transform, which allows us to obtain x(t) from its spectrum X(f). The inverse
transform relation may also be written in terms of the variable ω (by noting that dω = 2π df) to give

x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω    (from the ω-form)    (9.8)
Unlike the Fourier series, it does matter whether we use the f-form or the ω-form, especially when using the
inverse transform relation! We prefer the f-form because the Fourier transform and its inverse are almost
symmetric and yield easily memorized forms for many signals. However, we shall work out some examples
in the ω-form and provide all properties in both forms.
The signal x(t) and its Fourier transform X(f) or X(ω) form a unique transform pair, and their
relationship is shown symbolically using a double arrow: x(t) ⇔ X(f).
1
X[k] = At0 sinc(kf0 t0 )
T
Figure E9.1A The rectangular pulse train xp(t) (height A, pulse width t0, period T) and its single pulse x(t) for Example 9.1(a)
The Fourier transform of the single pulse x(t) that represents one period of xp(t) is then

    X(f) = A t0 sinc(f t0)

Note the equivalence of the envelope (a sinc function), the removal of the 1/T factor, and the change to the continuous variable f (from kf0) as we go from the series to the transform.
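This series-to-transform equivalence is easy to check numerically. The sketch below (Python/NumPy; the values A = 1, t0 = 1, T = 4 are assumed for illustration and are not from the text) computes X[k] by direct integration over one period and verifies that T X[k] samples the envelope A t0 sinc(f t0) at f = kf0. Note that np.sinc uses the same sin(πx)/(πx) convention as the text.

```python
import numpy as np

# Assumed illustrative values: pulse height A, width t0, period T.
A, t0, T = 1.0, 1.0, 4.0
f0 = 1 / T

def series_coeff(k):
    """X[k] = (1/T) * integral over one period of xp(t) e^{-j 2 pi k f0 t} dt."""
    t = np.linspace(-T/2, T/2, 200001)
    x = np.where(np.abs(t) <= t0/2, A, 0.0)     # one period of the pulse train
    return np.trapz(x * np.exp(-2j*np.pi*k*f0*t), t) / T

# T*X[k] should equal the single-pulse transform A*t0*sinc(f*t0) at f = k*f0.
for k in range(-4, 5):
    assert abs(T*series_coeff(k) - A*t0*np.sinc(k*f0*t0)) < 1e-3
```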
(b) Suppose the Fourier transform of x(t) = tri(t) is X(f) = sinc²(f). The Fourier series coefficients of its corresponding periodic extension xp(t) with period T are

    X[k] = (1/T) X(k f0) = (1/T) sinc²(k/T)
Figure E9.1B Periodic extensions of x(t) = tri(t) for Example 9.1(b), with T = 2, T = 1, and T = 1.5
For real signals, X(f) is conjugate symmetric, with X(−f) = X*(f). This means that the magnitude |X(f)| (or Re{X(f)}) displays even symmetry, and the phase φ(f) (or Im{X(f)}) displays odd symmetry. It is customary to plot the magnitude and phase of X(f) as two-sided functions.
The phase spectrum may be restricted to values in the principal range (−π, π). Sometimes, it is more convenient to unwrap the phase (by adding/subtracting multiples of 2π) and plot it as a monotonic function.
The Fourier transform X(f ) of a real, even symmetric signal x(t) is always a real and even symmetric
function of f , and of the form X(f ) = A(f ). The Fourier transform X(f ) of a real, odd symmetric signal
x(t) is always imaginary and odd symmetric in f , and of the form X(f ) = jA(f ). For such signals, it is
convenient to plot just the amplitude spectrum A(f ).
We observe that x(t) has odd symmetry and X(f) is purely imaginary. The signal x(t) and its amplitude spectrum A(f) = 2 sin(πf) are plotted in Figure E9.2A(1).
The magnitude spectrum |X(f)| and phase spectrum are sketched in Figure E9.2A(2). The sign changes in the amplitude account for the phase jumps of π. The unwrapped phase is obtained by adding or subtracting multiples of 2π at the phase jumps to make the phase a monotonic function.
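Phase unwrapping can be automated. As a minimal sketch (the delay value of 2 units is an assumed illustration, not from the text), a pure time shift produces the linear phase −4πf, which the principal-value computation wraps into (−π, π]; numpy.unwrap restores the monotonic form by adding or subtracting multiples of 2π at the jumps:

```python
import numpy as np

# A delay of 2 units (assumed for illustration) gives phase -4*pi*f.
f = np.linspace(0, 2, 400)
true_phase = -4*np.pi*f
wrapped = np.angle(np.exp(1j*true_phase))   # principal value in (-pi, pi]
unwrapped = np.unwrap(wrapped)              # removes the 2*pi jumps

assert np.allclose(unwrapped, true_phase, atol=1e-8)
```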
Figure E9.2A(2) Magnitude and phase spectra of the signal for Example 9.2
Table 9.1 Selected Fourier transform pairs

    Entry   x(t)                          X(f)                                  X(ω)
    1       δ(t)                          1                                     1
    2       rect(t)                       sinc(f)                               sinc(ω/2π)
    3       tri(t)                        sinc²(f)                              sinc²(ω/2π)
    4       sinc(t)                       rect(f)                               rect(ω/2π)
    7       e^{−αt} u(t)                  1/(α + j2πf)                          1/(α + jω)
    8       t e^{−αt} u(t)                1/(α + j2πf)²                         1/(α + jω)²
    9       e^{−α|t|}                     2α/(α² + 4π²f²)                       2α/(α² + ω²)
    10      e^{−πt²}                      e^{−πf²}                              e^{−ω²/4π}
    11      sgn(t)                        1/(jπf)                               2/(jω)
    12      u(t)                          0.5δ(f) + 1/(j2πf)                    πδ(ω) + 1/(jω)
    13      e^{−αt} cos(2πβt) u(t)        (α + j2πf)/[(α + j2πf)² + (2πβ)²]     (α + jω)/[(α + jω)² + (2πβ)²]
    14      e^{−αt} sin(2πβt) u(t)        2πβ/[(α + j2πf)² + (2πβ)²]            2πβ/[(α + jω)² + (2πβ)²]
    15      Σ_n δ(t − nT)                 (1/T) Σ_k δ(f − k/T)                  (2π/T) Σ_k δ(ω − 2πk/T)
    16      xp(t) = Σ_k X[k] e^{j2πk f0 t}   Σ_k X[k] δ(f − kf0)                2π Σ_k X[k] δ(ω − kω0)
254 Chapter 9 The Fourier Transform
Table 9.2 Properties of the Fourier transform

    Property        Signal                X(f)                                  X(ω)
    Multiplication  x(t)h(t)              X(f) ⋆ H(f)                           (1/2π) X(ω) ⋆ H(ω)
    Modulation      x(t) cos(2πβt)        0.5[X(f + β) + X(f − β)]              0.5[X(ω + 2πβ) + X(ω − 2πβ)]
    Times-t         −j2πt x(t)            X′(f)                                 2π X′(ω)
    Integration     ∫_{−∞}^{t} x(λ) dλ    X(f)/(j2πf) + 0.5X(0)δ(f)             X(ω)/(jω) + πX(0)δ(ω)
    Conjugation     x*(t)                 X*(−f)                                X*(−ω)
The Fourier transform is a linear operation and obeys superposition. Its properties are summarized in Table 9.2 for both the f-form and the ω-form. The conversion between the two forms is sometimes not so obvious and may include (or omit) factors of 2π. Our suggestion: use one form consistently.
(c) (The rect Function) The signal x(t) = rect(t) is unity over (−0.5, 0.5) and zero elsewhere. We find X(f) by evaluating the defining integral and using Euler's relation to give

    X(f) = ∫_{−1/2}^{1/2} e^{−j2πf t} dt = [e^{−j2πf t}/(−j2πf)]_{−1/2}^{1/2} = sin(πf)/(πf) = sinc(f)
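This pair is easy to confirm numerically. The sketch below evaluates the defining integral on a fine grid over (−0.5, 0.5) and compares it with np.sinc, which uses the same sin(πf)/(πf) convention as the text:

```python
import numpy as np

# Numerical check of rect(t) <-> sinc(f) at a few sample frequencies.
t = np.linspace(-0.5, 0.5, 100001)
for f in (0.25, 0.5, 1.0, 2.5):
    X = np.trapz(np.exp(-2j*np.pi*f*t), t)   # defining integral over the pulse
    assert abs(X - np.sinc(f)) < 1e-6
```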
The interchange of time and frequency (t → f) also includes a sign reversal to account for the sign reversal in the exponentials of the direct and inverse transforms. For even symmetric functions, we can simply use t → f. For example, the transform pair rect(t) ⇔ sinc(f) leads to sinc(t) ⇔ rect(−f) = rect(f). Similarly, the pair δ(t) ⇔ 1 gives 1 ⇔ δ(−f) = δ(f). The transform pair for the decaying exponential gives

    e^{−αt} u(t) ⇔ 1/(α + j2πf)        (by symmetry)        1/(α + j2πt) ⇔ e^{αf} u(−f)

In general, a known pair x(t) ⇔ X(f) yields the new pair X(t) ⇔ x(−f) by symmetry.
The time-scaling property follows from a change of variable. For α > 0, for example, with λ = αt we have

    ∫_{−∞}^{∞} x(αt) e^{−j2πf t} dt = (1/α) ∫_{−∞}^{∞} x(λ) e^{−j2πfλ/α} dλ = (1/α) X(f/α)        (9.12)
The scaling property is its own dual, in that a scaling by α in one domain results in inverse scaling (by 1/α) and amplitude scaling by 1/|α| in the other. For example, compression of x(t) to x(αt) leads to a stretching of X(f) by α and an amplitude reduction by |α|. The multiplier 1/|α| ensures that the scaled signal and the scaled spectrum possess the same energy.
The folding property follows from the scaling property with α = −1: x(−t) ⇔ X(−f).
The time-shift property follows from the defining relation and the change of variable λ = t − α:

    ∫_{−∞}^{∞} x(t − α) e^{−j2πf t} dt = ∫_{−∞}^{∞} x(λ) e^{−j2πf(λ+α)} dλ = e^{−j2πfα} X(f)        (9.14)
The convolution property follows if we interchange the order of integration and use a change of variables in the defining integral. We obtain

    ∫_{−∞}^{∞} [∫_{−∞}^{∞} x(t − λ) h(λ) dλ] e^{−j2πf t} dt = ∫_{−∞}^{∞} [∫_{−∞}^{∞} x(t − λ) e^{−j2πf(t−λ)} dt] h(λ) e^{−j2πfλ} dλ = X(f)H(f)
9.2 Fourier Transform Pairs and Properties 257
The modulation property follows from x(t) e^{j2πβt} ⇔ X(f − β) (frequency shift) and Euler's relation:

    x(t) cos(2πβt) = 0.5 x(t)[e^{j2πβt} + e^{−j2πβt}] ⇔ 0.5[X(f + β) + X(f − β)]        (9.17)

It is a special case of the frequency-domain convolution property. If a signal x(t) is modulated by cos(2πβt), its spectrum X(f) gets shifted to higher frequencies and centered at f = ±β.
The derivative property follows from the definition of X(f) if we use integration by parts (assuming that x(t) → 0 as |t| → ∞):

    ∫_{−∞}^{∞} x′(t) e^{−j2πf t} dt = x(t) e^{−j2πf t} |_{−∞}^{∞} + j2πf ∫_{−∞}^{∞} x(t) e^{−j2πf t} dt = j2πf X(f)        (9.18)
The integration property follows from the convolution property if we describe the running integral of x(t) as the convolution of x(t) with u(t). With u(t) ⇔ 0.5δ(f) + 1/(j2πf) (we derive this pair later), we have

    ∫_{−∞}^{t} x(λ) dλ = x(t) ⋆ u(t) ⇔ X(f)[0.5δ(f) + 1/(j2πf)] = 0.5X(0)δ(f) + X(f)/(j2πf)        (9.20)

Since X(0) equals the area (not the absolute area) of x(t), this relation holds only if ∫ x(t) dt (or X(0)) is finite. The second term disappears if ∫ x(t) dt = X(0) = 0. If X(0) = 0, integration and differentiation may be regarded as inverse operations.
The pair t e^{−αt} u(t) ⇔ 1/(α + j2πf)² follows from the convolution property, since

    t e^{−αt} u(t) = e^{−αt} u(t) ⋆ e^{−αt} u(t)

We could also start with e^{−αt} u(t) ⇔ 1/(α + j2πf) and use the times-t property to get

    t e^{−αt} u(t) ⇔ (j/2π) d/df [1/(α + j2πf)] = 1/(α + j2πf)²

(c) Applying the times-t property successively to the previous result, we obtain

    tⁿ e^{−αt} u(t) ⇔ n!/(α + j2πf)^{n+1}
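A quick numerical sanity check of the n = 1 pair (with α = 2 assumed for illustration; the integral is truncated at t = 20, where e^{−2t} is negligible):

```python
import numpy as np

# Verify t e^{-at} u(t) <-> 1/(a + j 2 pi f)^2 by direct numerical integration.
a = 2.0
t = np.linspace(0, 20, 400001)
x = t * np.exp(-a*t)
for f in (0.0, 0.3, 1.0):
    X = np.trapz(x * np.exp(-2j*np.pi*f*t), t)
    assert abs(X - 1/(a + 2j*np.pi*f)**2) < 1e-6
```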
(d) For x(t) = e^{−α|t|} = e^{−αt} u(t) + e^{αt} u(−t), start with e^{−αt} u(t) ⇔ 1/(α + j2πf) and use the folding property and superposition:

    e^{−α|t|} ⇔ 1/(α + j2πf) + 1/(α − j2πf) = 2α/(α² + 4π²f²)
(e) For x(t) = sgn(t), use the limiting form of y(t) = e^{−αt} u(t) − e^{αt} u(−t) as α → 0 to give

    e^{−αt} u(t) − e^{αt} u(−t) ⇔ 1/(α + j2πf) − 1/(α − j2πf) = −j4πf/(α² + 4π²f²) → 1/(jπf)        so sgn(t) ⇔ 1/(jπf)
(f) For x(t) = u(t), start with x(t) = ∫_{−∞}^{t} y(λ) dλ, where y(t) = δ(t) ⇔ 1 and Y(0) = 1:

    u(t) = ∫_{−∞}^{t} y(λ) dλ ⇔ Y(f)/(j2πf) + 0.5Y(0)δ(f) = 1/(j2πf) + 0.5δ(f)
We cannot extend this result to find the transform of r(t) from u(t) because U(0) = ∞. The Fourier transform of signals that are not absolutely integrable usually contains impulses. We could also have obtained the transform of u(t) using u(t) = 0.5 + 0.5 sgn(t).
(g) For x(t) = cos(2πβt) = 0.5e^{j2πβt} + 0.5e^{−j2πβt}, start with 1 ⇔ δ(f) and use the dual of the time-shift property:

    cos(2πβt) = 0.5e^{j2πβt} + 0.5e^{−j2πβt} ⇔ 0.5δ(f − β) + 0.5δ(f + β)

Its magnitude spectrum (see Figure E9.4H(a)) is an impulse pair at f = ±β, with strengths of 0.5.
(h) For x(t) = cos(2πβt + θ), start with cos(2πβt) ⇔ 0.5δ(f − β) + 0.5δ(f + β), and use the shifting property with t → t + θ/(2πβ) (and the product property of impulses) to get

    cos(2πβt + θ) ⇔ 0.5e^{jθf/β}[δ(f − β) + δ(f + β)] = 0.5e^{jθ}δ(f − β) + 0.5e^{−jθ}δ(f + β)
Its magnitude spectrum is an impulse pair at f = ±β with strengths of 0.5. Its phase spectrum shows a phase of θ at f = β and −θ at f = −β. The spectra are shown in Figure E9.4H(b). These resemble the two-sided spectrum of its Fourier series coefficients X[k], except that the magnitude spectrum is now plotted as impulses.
Figure E9.4H The Fourier transforms of cos(2πβt) and cos(2πβt + θ)
(i) For x(t) = cos(2πβt) u(t), start with u(t) ⇔ 0.5δ(f) + 1/(j2πf) and use modulation to give

    cos(2πβt) u(t) ⇔ 0.25[δ(f + β) + δ(f − β)] + 0.5[1/(j2π(f + β)) + 1/(j2π(f − β))]

This can be simplified further, if desired.
(a) The signal x(t) is a linear combination of rect and tri functions, x(t) = rect(t/2) tri(t), as shown in
Figure E9.5A.
Figure E9.5A Describing the signal for Example 9.5(a)
(b) The signal y(t) may be regarded as the derivative of tri(t), as shown in Figure E9.5B.
Figure E9.5B Describing the signal for Example 9.5(b)
(c) The signal v(t) may be described as v(t) = t rect(t/2), as shown in Figure E9.5C.
Figure E9.5C Describing the signal for Example 9.5(c)
By the times-t property, V(f) = (j/2π) d/df [2 sinc(2f) e^{−j2πf}]. This can be simplified, if required.
(d) The transform of the trapezoidal pulse g(t) may be found in several ways. It can be described as the
sum of three tri functions, as shown in Figure E9.5D(1).
Figure E9.5D(1) First way of describing the signal for Example 9.5(d)
Thus, g(t) = tri(t + 1) + tri(t) + tri(t − 1). The pair tri(t) ⇔ sinc²(f) and the shift property give

    G(f) = sinc²(f) e^{j2πf} + sinc²(f) + sinc²(f) e^{−j2πf} = sinc²(f)[1 + 2 cos(2πf)]
It may also be described as g(t) = 2 tri(t/2) − tri(t), as shown in Figure E9.5D(2).
Figure E9.5D(2) Second way of describing the signal for Example 9.5(d)
A third way is to describe g(t) as the convolution g(t) = rect(t/3) ⋆ rect(t), as suggested by Figure E9.5D(3).
Figure E9.5D(3) Third way of describing the signal for Example 9.5(d)
(e) One way to find the transform of h(t) is to regard it as the sum of two signals, as shown in Fig-
ure E9.5E(1), whose transforms we have already found.
Figure E9.5E(1) Describing the signal for Example 9.5(e)
With h(t) = t rect(t/2) + [rect(t/2) − tri(t)], superposition gives H(f) = (j/2π) d/df [2 sinc(2f)] + 2 sinc(2f) − sinc²(f).
Another method is to take its derivative h′(t) = 2 rect(t − 0.5) − 2δ(t − 1), as shown in Figure E9.5E(2).
Since h′(t) ⇔ j2πf H(f), we get j2πf H(f) = 2 sinc(f) e^{−jπf} − 2e^{−j2πf}, or

    H(f) = [sinc(f) e^{−jπf} − e^{−j2πf}]/(jπf)
Figure E9.5E(2) Another way of describing the signal for Example 9.5(e)
We can also use the times-t property if we express the signal as h(t) = 2t rect(t − 0.5).
(f) The signal s(t) = cos(πt) rect(t) may be regarded as the product of rect(t) and cos(πt), as shown in Figure E9.5F.
Figure E9.5F Describing the signal for Example 9.5(f)
With cos(πt) ⇔ 0.5δ(f + 0.5) + 0.5δ(f − 0.5) and rect(t) ⇔ sinc(f), the convolution property gives

    S(f) = [0.5δ(f + 0.5) + 0.5δ(f − 0.5)] ⋆ sinc(f) = 0.5 sinc(f + 0.5) + 0.5 sinc(f − 0.5)
(g) Consider the tone-burst signal x(t) = cos(2πf0 t) rect(t/t0). To find its spectrum, we use the modulation property and the transform pair rect(t/t0) ⇔ t0 sinc(f t0) to give

    X(f) = [0.5δ(f + f0) + 0.5δ(f − f0)] ⋆ t0 sinc(f t0) = 0.5 t0 sinc[t0(f + f0)] + 0.5 t0 sinc[t0(f − f0)]
This is just the sum of two shifted sinc functions. The signal x(t) and its spectrum are shown in
Figure E9.5G for t0 = 2 s and f0 = 5 Hz.
Figure E9.5G The tone-burst signal for Example 9.5(g) and its spectrum
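The two-shifted-sinc result can be checked numerically for the figure's values t0 = 2 s and f0 = 5 Hz by evaluating the defining integral directly:

```python
import numpy as np

# Tone burst x(t) = cos(2 pi f0 t) rect(t/t0), t0 = 2, f0 = 5 (as in the figure).
t0, f0 = 2.0, 5.0
t = np.linspace(-t0/2, t0/2, 200001)
x = np.cos(2*np.pi*f0*t)
for f in (3.0, 4.75, 5.0, 5.25, 7.0):
    X = np.trapz(x * np.exp(-2j*np.pi*f*t), t)            # defining integral
    expected = 0.5*t0*(np.sinc(t0*(f + f0)) + np.sinc(t0*(f - f0)))
    assert abs(X - expected) < 1e-6
```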
The Fourier transform of a periodic signal is an impulse train. The impulses are located at the harmonic frequencies kf0, and the impulse strengths equal the Fourier series coefficients X[k]. The impulse train is not, in general, periodic, since the strengths X[k] are different. Note that we can also relate the Fourier transform of a periodic signal xp(t) to the Fourier transform X1(f) of its single period x1(t) if we recognize that T X[k] = X1(kf0). Thus,
    xp(t) ⇔ Σ_{k=−∞}^{∞} X[k] δ(f − kf0) = (1/T) Σ_{k=−∞}^{∞} X1(kf0) δ(f − kf0)        (9.22)
This result demonstrates, once again, the important concept that sampling in one domain is a consequence
of periodic extension in the other.
The strength of the impulse at f = kf0 equals the Fourier series coefficient X[k].
Recall that we can also find the Fourier series coefficients of a periodic signal with period T = 1/f0 directly from the Fourier transform X(f) of its single period, using X[k] = (1/T) X(kf0).
The signal x(t) and its transform X(f ) are identical in form.
Figure E9.6A The impulse train for Example 9.6(a) and its spectrum
(b) The Fourier transform of one period x1 (t) of the signal x(t) shown in Figure E9.6B is given by X1 (f ) =
12 sinc(2f ). The transform X(f ) of the signal x(t) (with T = 12, and f0 = 1/12) is thus
    X(f) = (1/T) Σ_{k=−∞}^{∞} X1(kf0) δ(f − kf0) = Σ_{k=−∞}^{∞} sinc(k/6) δ(f − k/12)
Figure E9.6B The rectangular pulse train for Example 9.6(b) and its spectrum
For X(f), we write X(f) = 1 − rect(f − 0.5). Thus, its inverse is x(t) = δ(t) − sinc(t) e^{jπt}.
For y(t), each impulse pair in its magnitude spectrum corresponds to a sinusoid whose phase is read from the phase plot. We thus have y(t) = 6 cos(4πt − 30°) + 10 cos(8πt + 60°).
(b) Find the inverse transform of H(f) = j3πf/(1 + jπf).
Rewrite this as H(f) = 3 · j2πf/(2 + j2πf).
Now, start with g(t) = e^{−2t} u(t) ⇔ 1/(2 + j2πf) and use the derivative property g′(t) ⇔ j2πf G(f), to get

    h(t) = 3 d/dt [e^{−2t} u(t)] = 3δ(t) − 6e^{−2t} u(t)

Note that the impulse arises due to the jump in e^{−2t} u(t) at t = 0.
A time shift of x(t) causes no change in the magnitude spectrum |X(f)| but produces a linear phase change (with f) in the phase spectrum. In other words, the phase difference 2πfα varies linearly with f. The magnitude spectrum of a differentiated signal increases in direct proportion to f. The phase is augmented by 90° for all frequencies. Since differentiation enhances sharp details and features in a time function, it follows that the sharp details in a signal are responsible for the high-frequency content of its spectrum.
Similarly, the magnitude spectrum of an integrated signal decreases in inverse proportion to f, and the phase is augmented by −90° for all frequencies. Since integration is a smoothing operation, the smoother a function, the less significant the high-frequency content in its spectrum.
Modulation of x(t) by cos(2πβt) shifts the scaled-by-half transform 0.5X(f) by ±β. However, its magnitude equals half the sum of the magnitudes 0.5|X(f + β)| + 0.5|X(f − β)| only in special cases (when x(t) is band-limited to |f| ≤ β, for example).
(a) Sketch the magnitude and phase spectra for y(t) = x(2t).
The spectrum is compressed in frequency (by 2) and the magnitude is halved (see Figure E9.8A).
Figure E9.8A The spectrum of x(2t) for Example 9.8(a)
(b) Sketch the magnitude and phase spectra for y(t) = x(t − 2).
Only the phase spectrum changes. Since the original phase is zero, the total phase is just −4πf, as sketched in Figure E9.8B.
Figure E9.8B The spectrum of x(t 2) for Example 9.8(b)
(c) Sketch the magnitude and phase spectra for y(t) = x′(t).
The magnitude is scaled by 2πf, and 90° is added to the phase, as shown in Figure E9.8C. We have plotted the magnitude and phase to ensure conjugate symmetry.
Figure E9.8C The spectrum of x′(t) for Example 9.8(c)
(d) Sketch the magnitude and phase spectra for y(t) = t x(t).
We use the property −j2πt x(t) ⇔ X′(f), or t x(t) ⇔ (j/2π) X′(f). The magnitude spectrum is differentiated and divided by 2π. A phase of π/2 is added to the phase already present (see Figure E9.8D). Since the differentiated spectrum contains impulses, the phase plot is discrete as shown, and we have plotted the magnitude and phase to ensure conjugate symmetry.
Figure E9.8D The spectrum of tx(t) for Example 9.8(d)
(e) Sketch the spectra for y(t) = x²(t) and h(t) = x(t) ⋆ x(t).
For y(t), the spectrum is the convolution of two rectangular pulses. The phase is zero.
For h(t), the magnitude spectrum gets squared (to 4). The phase is zero (see Figure E9.8E).
Figure E9.8E The spectra of x2 (t) and x(t) x(t) for Example 9.8(e)
Figure E9.8F The spectra of x(t)cos(4πt) and x(t)cos(2πt) for Example 9.8(f)
We use |X(f)| (and not X(f)) to compute the signal energy because of its complex nature. This relation, called Parseval's theorem, or Rayleigh's theorem, applies only to energy signals. It is the counterpart of, and may be derived from, Parseval's relation for the power in a periodic signal xp(t):

    P = (1/T) ∫_{−T/2}^{T/2} x²p(t) dt = Σ_{k=−∞}^{∞} |X[k]|²        (9.24)
Since x(t) corresponds to one period of xp(t) as T → ∞, the energy in x(t) may be expressed as

    E = ∫_{−∞}^{∞} x²(t) dt = lim_{T→∞} ∫_{−T/2}^{T/2} x²p(t) dt = lim_{T→∞} T Σ_{k=−∞}^{∞} |X[k]|² = lim_{T→∞} Σ_{k=−∞}^{∞} |T X[k]|² f0        (9.25)
(b) Let x(t) = e^{−αt} u(t). What is ωB if the frequency band (−ωB, ωB) is to contain 50% of the total signal energy?
We require EB = 0.5E. Using the results of part (a) (with E = 1/(2α)),

    EB = 0.25/α = (1/πα) tan⁻¹(ωB/α)        or        ωB = α tan(π/4) = α

Thus, 50% of the total energy of x(t) = e^{−αt} u(t) is contained in the frequency range |ω| ≤ α.
Figure E9.9C The spectra of x(t) = 8 sinc(4t)cos(2πt) for Example 9.9(c)
By Parsevals relation, we find the energy as the area of |X(f )|2 graphically to give E = 12 J.
(d) What fraction of the total energy is contained in the central lobe of the spectrum of x(t) = rect(t)?
The total energy in x(t) is simply E = ∫ x²(t) dt = 1.
Since X(f) = sinc(f), whose central lobe extends over |f| ≤ 1, we seek

    EB = ∫_{−1}^{1} sinc²(f) df = 2 ∫_{0}^{1} sinc²(f) df
This integral can only be evaluated numerically and yields EB = 0.9028 J. Thus, about 90% of the
energy is concentrated in the central lobe. This result is true of any rectangular pulse shape.
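The quoted value is easy to reproduce with a numerical quadrature (np.sinc uses the same sin(πf)/(πf) convention as the text):

```python
import numpy as np

# Central-lobe energy of sinc(f): E_B = 2 * integral from 0 to 1 of sinc^2(f).
f = np.linspace(0, 1, 200001)
EB = 2 * np.trapz(np.sinc(f)**2, f)
assert abs(EB - 0.9028) < 5e-4    # about 90% of the unit energy of rect(t)
```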
The central ordinate theorems follow directly from the defining relations for x(t) and X(f) by setting t = 0 or f = 0:

    X(0) = ∫_{−∞}^{∞} x(t) dt        x(0) = ∫_{−∞}^{∞} X(f) df = (1/2π) ∫_{−∞}^{∞} X(ω) dω        (9.29)
For real signals, we need use only Re{X(f)} to find x(0). For such signals, Im{X(f)} is odd and integrates to zero over (−∞, ∞). If x(t) possesses a discontinuity at the origin, we actually obtain x(0) as the average value at the discontinuity. The central ordinate theorems form a useful check in problem solving.
The time-limited/band-limited theorem asserts that no signal can be both time-limited and band-
limited simultaneously. A time-limited function xL (t) may be regarded as the product of an infinite-duration
function x(t) and a rect function that restricts its view over a finite duration. The spectrum XL (f ) is just
the convolution of X(f ) and a sinc function of infinite duration and is thus also of infinite extent. Likewise,
a band-limited transform (confined to a finite frequency range |f | < fB ) corresponds to a time signal of
infinite extent. Of course, a signal may be of infinite extent in both domains.
Figure E9.10A The signals for Example 9.10(a)
By the central ordinate theorem, we find I = ∫ X(f) df = x(0) = 0.75 (from the figure).
Similarly, J = ∫ Y(f) df = y(0) = 0.5 (from the figure). Note that y(0) equals the average of the discontinuity in y(t) at t = 0. You can confirm this by direct (but tedious) evaluation of the integral.
(b) Refer to the system shown in Figure E9.10B. What is the value of Y (0)?
Figure E9.10B The system for Example 9.10(b)
The output y(t) is the convolution of x(t) and h(t). By the central ordinate theorem, Y (0) equals the
area of y(t). The area property of convolution says that the area of y(t) equals the product of the area
of x(t) (which is 6) and the area of h(t) (which is 0.5). Thus, Y (0) = (6)(0.5) = 3.
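The area property is easy to verify numerically. A sketch (the pulse shape for x(t) is assumed here, since the text only fixes its area of 6):

```python
import numpy as np

# Area property of convolution: area(y) = area(x) * area(h).
dt = 2e-3
t = np.arange(0, 12, dt)
x = np.where(t <= 2, 3.0, 0.0)   # assumed pulse: height 3, width 2, area 6
h = np.exp(-2*t)                 # h(t) = e^{-2t} u(t), area 1/2
y = np.convolve(x, h) * dt       # y = x * h on a grid of spacing dt
assert abs(np.sum(y)*dt - 3.0) < 0.02   # Y(0) = area of y = (6)(0.5) = 3
```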
"
(c) Evaluate I = sinc4 (f ) df .
With x(t) = tri(t) ⇔ X(f) = sinc²(f), Parseval's theorem suggests that the integral I corresponds to the energy in tri(t) (a triangular pulse of width 2 and unit height), and thus I = 2/3.
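This Parseval result can be confirmed by truncated numerical integration, since sinc⁴(f) decays as 1/f⁴:

```python
import numpy as np

# Integral of sinc^4(f) over all f, truncated at |f| = 100 (tail is negligible).
f = np.linspace(0, 100, 1000001)
I = 2 * np.trapz(np.sinc(f)**4, f)   # integrand is even, so double [0, 100]
assert abs(I - 2/3) < 1e-3           # energy of tri(t)
```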
"
(d) Evaluate I = 12 sinc(4t)cos(2t) dt.
" " 1
1 y(t)
I= x(t)y (t) dt =
t
e dt = 1 e
1
= 0.6321 x(t)
0 t
1
The quantity H(f ) or H() defines the system transfer function. The transfer function also equals the
Fourier transform of the impulse response h(t). The input-output relations in the time domain and frequency
domain are illustrated in Figure 9.1.
Figure 9.1 System input-output relations in the time domain and frequency domain
(b) Let y″(t) + 3y′(t) + 2y(t) = 2x′(t) + 3x(t). Then [(jω)² + 3jω + 2] Y(ω) = (2jω + 3) X(ω).
Since H(ω) = Y(ω)/X(ω), we obtain H(ω) = (3 + 2jω)/[2 + 3jω + (jω)²].
(c) Let H(ω) = Y(ω)/X(ω) be given as above. Then cross-multiplication gives [(jω)² + 3jω + 2] Y(ω) = (2jω + 3) X(ω).
Its inverse transform gives y″(t) + 3y′(t) + 2y(t) = 2x′(t) + 3x(t).
Figure E9.12 The circuit for Example 9.12
Let v(t) = δ(t). Then V(ω) = 1. We transform the circuit, as shown in Figure E9.12, and find

    H(ω) = I(ω)/V(ω) = 1/(2 + jω + 3/jω) = jω/(3 + 2jω + (jω)²)
(b) An LTI system is described by H(f) = 4/(2 + j2πf). Find its response y(t) if the input is x(t) = u(t).
The response Y(f) is given by

    Y(f) = H(f)X(f) = [4/(2 + j2πf)][0.5δ(f) + 1/(j2πf)]

We separate terms, use the product property of impulses, f(x)δ(x) = f(0)δ(x), and simplify:

    Y(f) = 2δ(f)/(2 + j2πf) + 2/(jπf(2 + j2πf)) = δ(f) + 2/(jπf(2 + j2πf))
9.4 Frequency Response of Filters 275
The first term has no recognizable inverse, but (using partial fractions) we can write it as

    2/(jπf(2 + j2πf)) = 1/(jπf) − 2/(2 + j2πf)

The response Y(f) now equals

    Y(f) = 1/(jπf) − 2/(2 + j2πf) + δ(f) = 2[1/(j2πf) + 0.5δ(f)] − 2/(2 + j2πf)

from which y(t) = 2u(t) − 2e^{−2t} u(t).
A frequency-selective filter is a device that passes a certain range of frequencies and blocks the rest.
The range of frequencies passed defines the passband, and the range of frequencies blocked defines the
stopband. The band-edge frequencies are called the cutoff frequencies. Ideally, a filter should have
perfect transmission in the passband and perfect rejection in the stopband. Perfect transmission implies a
transfer function with constant gain and linear phase over the passband. If the filter has a constant gain |H(ω)|, the output is an amplitude-scaled replica of the input. Similarly, if the output undergoes a linear phase shift with frequency, the output is a time-shifted replica of the input. If the gain is not constant over the required frequency range, we have amplitude distortion. If the phase shift is not linear with frequency, we have phase distortion, as the signal undergoes different delays for different frequencies.
The phase and group delay of a system whose transfer function is H(ω) = |H(ω)|∠φ(ω) are defined as

    tp = −φ(ω)/ω   (phase delay)        tg = −dφ(ω)/dω   (group delay)        (9.34)
If φ(ω) varies linearly with frequency, tp and tg are not only constant but also equal. For LTI systems (with rational transfer functions), the phase φ(ω) is a transcendental function, but the group delay is always a rational function of ω² and is much easier to work with in many filter applications.
HLP (f ) = rect(f /2fC ) hLP (t) = 2fC sinc(2fC t) (ideal LPF) (9.35)
Other filter types include highpass filters that block low frequencies (below fC , say), bandpass filters that
pass only a range of frequencies (of width 2fC centered at f0 , say), and bandstop filters that block a range
of frequencies (of width 2fC centered at f0 , say). The magnitude spectrum of these filters is shown in
Figure 9.2. The transfer function and impulse response of these filters can be generated from the lowpass
form, using properties of the Fourier transform.
Figure 9.2 The magnitude spectra of the ideal lowpass, highpass, bandpass, and bandstop filters
HHP (f ) = 1 rect(f /2fC ) hHP (t) = (t) 2fC sinc(2fC t) (ideal HPF) (9.36)
For an ideal bandpass filter (BPF), we use the modulation property to get

    HBP(f) = rect[(f + f0)/2fC] + rect[(f − f0)/2fC]        hBP(t) = 4fC sinc(2fC t) cos(2πf0 t)        (ideal BPF)        (9.37)

The transfer function and impulse response of an ideal bandstop filter (BSF) follow as

    HBS(f) = 1 − rect[(f + f0)/2fC] − rect[(f − f0)/2fC]        hBS(t) = δ(t) − 4fC sinc(2fC t) cos(2πf0 t)        (9.38)
Ideal filters are unstable because their impulse response (which contains a sinc function) is not absolutely integrable. The infinite extent of the sinc function also makes such filters noncausal and not physically realizable. Truncating the impulse response (in an effort to make it causal and/or stable) leads to some undesirable results. One-sided truncation leads to infinite spikes at the band edges. Two-sided truncation produces overshoot and oscillations near the band edges (the Gibbs effect). Physically realizable filters cannot display a region of constant gain, or linear phase, over their passband.
In practice, we often require the step response of lowpass filters to have a monotonic form, free of overshoot. For a monotonic step response s(t), its derivative (the impulse response) must be entirely positive (s′(t) = h(t) > 0). An ideal lowpass filter can never have a monotonic step response because its impulse response (a sinc function) is not entirely positive. Even though ideal filters are unrealizable, they form the yardstick by which the design of practical filters is measured.
For the RC lowpass filter of Figure 9.3, the transfer function is

    H(ω) = Y(ω)/X(ω) = 1/(1 + jωRC)
Figure 9.3 An RC lowpass filter
The quantity τ = RC is the circuit time constant. The magnitude |H(f)| and phase φ(f) of the transfer function are sketched in Figure 9.4 and given by

    |H(f)| = 1/√(1 + 4π²f²τ²)        φ(f) = −tan⁻¹(2πfτ)        (9.40)
The system is called a lowpass filter because |H(f)| decays monotonically with positive f, leading to a reduced output amplitude at higher frequencies. At f = 1/(2πτ), the magnitude equals 1/√2 (or 0.707) and the phase equals −45°. The frequency f = 1/(2πτ) is called the half-power frequency because the output power of a sinusoid at this frequency is only half the input power. The frequency range 0 ≤ f ≤ 1/(2πτ) defines the half-power bandwidth, over which the magnitude is at least 1/√2 (or 0.707) times the peak magnitude.
The time-domain performance of this system is measured by its impulse response h(t), or by its step response s(t), plotted in Figure 9.5 and described by

    h(t) = (1/τ) e^{−t/τ} u(t)        s(t) = (1 − e^{−t/τ}) u(t)        (9.41)
Figure 9.5 Impulse response h(t) and step response s(t) of the RC lowpass filter
The step response rises smoothly to unity and is within 1% of the final value in about 5τ. Other performance measures include the rise time tr (the time taken to rise from 10% to 90% of the final value), the delay time td (the time taken to rise to 50% of the final value), and the settling time tP% (the time taken to settle to within P% of its final value). Commonly used measures are the 5% settling time and the 2% settling time. Exact expressions for these measures are found to be

    tr = τ ln 9        td = τ ln 2        t5% = τ ln 20        t2% = τ ln 50        (9.42)
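These expressions can be checked by locating the threshold crossings of s(t) = 1 − e^{−t/τ} on a dense time grid (a sketch with τ = 1 assumed):

```python
import numpy as np

# Rise time and 2% settling time of s(t) = 1 - exp(-t/tau), tau = 1.
tau = 1.0
t = np.linspace(0, 10*tau, 1000001)
s = 1 - np.exp(-t/tau)                 # monotonically increasing step response
t10 = t[np.searchsorted(s, 0.1)]       # 10% crossing
t90 = t[np.searchsorted(s, 0.9)]       # 90% crossing
assert abs((t90 - t10) - tau*np.log(9)) < 1e-4    # tr = tau ln 9

t2 = t[np.searchsorted(s, 0.98)]       # within 2% of final value
assert abs(t2 - tau*np.log(50)) < 1e-4            # t2% = tau ln 50
```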
A smaller time constant implies a faster rise time and a larger bandwidth. The phase delay and group delay of the RC lowpass filter are given by

    tp = −φ(f)/(2πf) = tan⁻¹(2πfτ)/(2πf)        tg = −(1/2π) dφ(f)/df = τ/(1 + 4π²f²τ²)        (9.43)
We have H(f) = 2 rect(f/2). This produces the output G(f) = H(f)X(f), as sketched.
The signal g(t) is thus g(t) = 12 + 6 cos(2πt).
The transfer function of the RC filter is T(f) = 1/(1 + j2πf). Since the input contains components at dc (f = 0) and f = 1 Hz, we compute T(0) = 1 and T(1) = 1/(1 + j2π) = 0.1572∠−80.96°.
The output y(t) is given by y(t) = 12 + (6)(0.1572) cos(2πt − 80.96°) = 12 + 0.9431 cos(2πt − 80.96°).
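This gain-and-phase computation is a one-liner to check numerically (τ = 1 s, as in the example):

```python
import numpy as np

# RC filter T(f) = 1/(1 + j 2 pi f tau) evaluated at f = 1 Hz, tau = 1 s.
tau = 1.0
T1 = 1/(1 + 2j*np.pi*1.0*tau)
assert abs(abs(T1) - 0.1572) < 1e-4                   # |T(1)| = 0.1572
assert abs(np.degrees(np.angle(T1)) + 80.96) < 0.01   # angle = -80.96 degrees

# The 1-Hz component of amplitude 6 emerges with amplitude 0.9431.
assert abs(6*abs(T1) - 0.9431) < 1e-3
```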
(b) Refer to the cascaded system of Figure E9.15B. Will the outputs g(t) and w(t) be equal? Explain.
Figure E9.15B The cascaded systems for Example 9.15(b)
The transfer function of the RC circuit is T(f) = 1/(1 + j2πf).
For the first system, the output f(t) is f(t) = 2e^{−2t} u(t). Using its Fourier transform gives

    G(f) = 2/[(1 + j2πf)(2 + j2πf)] = 2/(1 + j2πf) − 2/(2 + j2πf)        g(t) = 2(e^{−t} − e^{−2t}) u(t)
For the second system, the outputs V(f), v(t), and w(t) are

    V(f) = 2/(1 + j2πf)²        v(t) = 2t e^{−t} u(t)        w(t) = v(2t) = 4t e^{−2t} u(t)
Clearly, w(t) and g(t) are not equal. The reason is that the order of cascading is unimportant only for
LTI systems, and the scaling block represents a time-varying system.
(c) Refer to the system shown in Figure E9.15C. Find the signal energy in g(t) and y(t). Also find the
output y(t) at t = 0.
Figure E9.15C The system for Example 9.15(c)
The transform of the input is X(f) = 1, and the transfer function of the RC filter is T(f) = 1/(1 + j2πf).
The output of the RC filter is G(f) = 1/(1 + j2πf). Thus, g(t) = e^{−t} u(t). Its signal energy is

    Eg = ∫_{0}^{∞} e^{−2t} dt = 1/2
Now, Y(f) = 2G(f) for −0.5 ≤ f ≤ 0.5. By Parseval's theorem, the signal energy in y(t) is

    Ey = ∫_{−1/2}^{1/2} |Y(f)|² df = ∫_{−1/2}^{1/2} 4/(1 + 4π²f²) df = (4/π) tan⁻¹(π) = 1.6076

By the central ordinate theorem, y(0) equals the area of Y(f). We need compute only the area of Re[Y(f)], because the odd part of Y(f) is imaginary and integrates to zero. Thus,

    y(0) = ∫_{−1/2}^{1/2} Re[Y(f)] df = ∫_{−1/2}^{1/2} 2/(1 + 4π²f²) df = (2/π) tan⁻¹(π) = 0.8038
Figure E9.15D(1) The twin-T circuit for Example 9.15(d)
With the input V1(f) = 1, the node equations and algebraic simplification result in

    jωC(Va − 1) + 2Va/R + jωC(Va − H) = 0
    (Vb − 1)/R + (Vb − H)/R + 2jωC Vb = 0
    (H − Vb)/R + jωC(H − Va) = 0

which give

    H(ω) = (1/R²C² − ω²)/[(1/R²C² − ω²) + j4ω/(RC)]
Note that H(0) = 1 = H(∞). Also, H(ω) = 0 when ω = 1/RC. These results confirm the bandstop nature of this filter. The frequency ω0 = 1/RC is called the notch frequency or null frequency.
The stopband is defined as the region between the half-power frequencies ω1 and ω2. At the half-power frequencies, |H(ω)| = 1/√2 or |H(ω)|² = 0.5, and we have

    |H(ω)|² = (1/R²C² − ω²)²/[(1/R²C² − ω²)² + 16ω²/R²C²] = 0.5
Rearranging and simplifying this equation, we obtain

    (1/R²C² − ω²)² = 16ω²/R²C²        or        1/R²C² − ω² = ±4ω/RC

The solution of the two quadratics yields the two positive frequencies

    ω1 = ω0(√5 − 2)        ω2 = ω0(√5 + 2)
The width of the stopband thus equals Δω = ω2 − ω1 = 4ω0 rad/s. The magnitude and phase spectra of this filter are sketched in Figure E9.15D(2).
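The notch and half-power frequencies are easy to confirm numerically from the derived transfer function (a sketch with R = C = 1 assumed, so that ω0 = 1):

```python
import numpy as np

# Twin-T notch: H(w) = (w0^2 - w^2)/((w0^2 - w^2) + j*4*w*w0), w0 = 1/RC = 1.
def H(w, w0=1.0):
    a = w0**2 - w**2
    return a/(a + 4j*w*w0)

assert abs(H(1.0)) < 1e-12                 # perfect null at the notch frequency
for w in (np.sqrt(5) - 2, np.sqrt(5) + 2): # w0*(sqrt(5) -+ 2)
    assert abs(abs(H(w)) - 1/np.sqrt(2)) < 1e-9   # half-power points
```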
Figure E9.15D(2) The spectra of the twin-T circuit for Example 9.15(d)
The phase of H(ω) equals 0 at both ω = 0 and ω = ∞. There is a phase reversal from −90° to +90° at ω = ω0. The phase equals −45° at ω1 and +45° at ω2.
    Hn(f) = [H(f)]ⁿ = 1/(1 + j2πf)ⁿ
To see that Hn(f) also approaches a Gaussian form as n → ∞, we take logarithms and use the expansion ln(1 + x) = x − (1/2)x² + (1/3)x³ − ··· to obtain

    ln[Hn(f)] = −n ln(1 + j2πf) = −j2πnf + n(j2πf)²/2 − ···

If we retain terms only up to the second order in f (which is equivalent to assuming that f decays faster than 1/√n), we have

    ln[Hn(f)] ≈ −j2πnf − 2π²nf²        or        Hn(f) ≈ e^{−2π²nf²} e^{−j2πnf}
This describes the Fourier transform of the Gaussian approximation to hn(t). The term e^{−j2πnf} accounts for the delay of n units.
This is the celebrated Wiener-Khintchine theorem. Application of the central ordinate theorem yields

    rxx(0) = ∫_{−∞}^{∞} Rxx(f) df = ∫_{−∞}^{∞} |X(f)|² df = E        (9.45)
For power signals (and periodic signals), we must use averaged measures consistent with power (and not
energy). This leads to the concept of power spectral density (PSD) that corresponds to a continuous
function whose area equals the total signal power. The PSD is measured in watts per hertz (WHz1 ) and
may be thought of as the average power associated with a 1-Hz frequency bin centered at f hertz. For
example, the spectrum of a periodic signal with period T is a train of-impulses at the locations f = kf0
(where f0 = 1/T ) with strengths X[k]. The total signal power equals |X[k]|2 . To express its PSD as a
continuous function of frequency, we describe the signal power |X[k]| of each harmonic at f = kf0 by an
2
In other words, the PSD of a periodic signal is a train of impulses at f = kf0 with strengths |X[k]|2 whose
total area (the sum of the impulse strengths) equals the total signal power.
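As a check on this bookkeeping, the sketch below compares the time-averaged power of an illustrative two-tone periodic signal with the sum of its impulse strengths |X[k]|² (the signal itself is an arbitrary choice):

```python
import numpy as np

T = 1.0                       # period
f0 = 1.0 / T
t = np.linspace(0.0, T, 4000, endpoint=False)
x = 2 * np.cos(2 * np.pi * t) + np.sin(4 * np.pi * t)   # illustrative periodic signal

# Fourier series coefficients X[k] = (1/T) * integral of x(t) e^{-j2*pi*k*f0*t} over one period
ks = np.arange(-5, 6)
X = np.array([np.mean(x * np.exp(-1j * 2 * np.pi * k * f0 * t)) for k in ks])

psd_area = np.sum(np.abs(X) ** 2)   # total area (sum of impulse strengths) of the PSD
power = np.mean(x ** 2)             # time-averaged signal power
print(psd_area, power)              # the two agree (2.5 for this signal)
```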
It is also possible to describe the PSD of a power signal x(t) in terms of a limiting form of its truncated
version xT(t), as T → ∞. Truncation ensures that xT(t) represents an energy signal, and its Fourier transform
XT(f) yields the total energy via Parseval's energy relation:

∫_{−∞}^{∞} x²T(t) dt = ∫_{−∞}^{∞} |XT(f)|² df    (9.47)

Using the limits (−T, T) on the left-hand side allows us to replace xT(t) by its original version x(t), and
dividing both sides by 2T, we get

(1/2T) ∫_{−T}^{T} x²(t) dt = (1/2T) ∫_{−∞}^{∞} |XT(f)|² df    (9.48)

Taking limits as T → ∞ and interchanging the integration and limiting operations on the right-hand side
yields

lim_{T→∞} (1/2T) ∫_{−T}^{T} x²(t) dt = lim_{T→∞} (1/2T) ∫_{−∞}^{∞} |XT(f)|² df = ∫_{−∞}^{∞} lim_{T→∞} [|XT(f)|²/(2T)] df    (9.49)

Since the left-hand side is just the signal power in x(t), the integrand on the right-hand side must represent
Rxx(f) if it is to yield the power when integrated. We thus express the PSD as

Rxx(f) = lim_{T→∞} |XT(f)|²/(2T)    (9.50)
For power signals, rxx (t) and Rxx (f ) form a transform pair and play a role analogous to the signal x(t) and
its spectrum X(f ). For example, rxx (0) equals the average power in x(t). The PSD is also referred to as the
power spectrum and is often used to characterize random signals. It is important to realize that the PSD
is not a unique indicator of the underlying time signal because many time signals can yield the same power
spectral density.
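Equation (9.50) lends itself to a simple discrete sketch: for x(t) = cos(2πf0t), the area under |XT(f)|²/2T should approach the signal power of 0.5 as T grows. The sampling rate, tone frequency, and truncation below are arbitrary illustrative choices:

```python
import numpy as np

f0, fs, T = 5.0, 100.0, 10.0          # tone frequency, sample rate, half-width (illustrative)
t = np.arange(-T, T, 1.0 / fs)        # samples of the truncated signal x_T(t)
x = np.cos(2 * np.pi * f0 * t)

XT = np.fft.fft(x) / fs               # discrete approximation to the Fourier transform X_T(f)
df = fs / len(x)                      # frequency spacing of the FFT bins
psd = np.abs(XT) ** 2 / (2 * T)       # |X_T(f)|^2 / 2T
power_from_psd = np.sum(psd) * df     # area under the PSD
print(power_from_psd)                 # approaches the tone power of 0.5
```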
284 Chapter 9 The Fourier Transform
Just as the autocorrelation function yields the PSD as its Fourier transform, the cross-correlation function
yields the cross-spectral density. We have

rxy(t) = x(t) ⋆⋆ y(t) ⇔ X(f)Y*(f) = Rxy(f)    ryx(t) = y(t) ⋆⋆ x(t) ⇔ Y(f)X*(f) = Ryx(f)    (9.52)

If x(t) is the input to a system with impulse response h(t) and y(t) is the corresponding output, the
cross-correlation of the output and input equals

ryx(t) = y(t) ⋆⋆ x(t) = [h(t) ∗ x(t)] ⋆⋆ x(t) = h(t) ∗ rxx(t)    (9.53)

Here, ⋆⋆ denotes correlation and ∗ denotes convolution.
The frequency spread of a signal's spectrum is inversely related to
the time duration. Time-bandwidth relations quantify this fundamental concept and assert that measures
of duration in the time domain are inversely related to measures of bandwidth in the frequency domain,
and their product is a constant. Measures of duration and bandwidth are especially useful for lowpass and
bandpass filters. The duration usually refers to the duration of the impulse response. Since the impulse
response and step response are related, the rise time of the step response is often used as a practical measure
of impulse-response duration.
The commonly used 10%–90% rise time Tr is defined as the time required for the step response s(t)
to rise from 10% to 90% of its final value: Tr = t90 − t10, where s(t10) = 0.1 s(∞) and s(t90) = 0.9 s(∞).
The bandwidth is a measure of the frequency spread of the spectrum of a signal and corresponds to the
range of frequencies that contains a significant fraction of the total signal energy. We shall use definitions
based on the one-sided bandwidth that includes only positive frequencies.
For a signal band-limited to B, the absolute bandwidth Babs is just its frequency extent B. For signals
whose spectrum shows sidelobes, we sometimes define a null bandwidth as the half-width of the central lobe
of the spectrum. A practical measure of bandwidth is the half-power bandwidth or 3-dB bandwidth
B3dB, the range of positive frequencies over which the magnitude exceeds 1/√2 times its maximum.
It should come as no surprise to you that measures based on the central ordinate theorems will yield
the result T0 B0 = 0.5 for any Fourier transform pair for which the required quantities exist.
(b) Consider an RC lowpass filter with time constant τ. Its transfer function H(f), impulse response h(t),
and step response s(t) are given by

H(f) = 1/(1 + j2πfτ)    h(t) = (1/τ)e^{−t/τ} u(t)    s(t) = (1 − e^{−t/τ})u(t)

The half-power bandwidth B3dB is the range of positive frequencies for which |H(f)| ≥ 1/√2.
This corresponds to a bandwidth of B3dB = 1/(2πτ).
The 10%–90% rise time Tr is computed by finding the times at which the step response reaches 10% and
90% of its final value. Since s(t10) = 0.1 gives t10 = τ ln(10/9), and s(t90) = 0.9 gives t90 = τ ln 10, we
obtain Tr = t90 − t10 = τ ln 9 ≈ 2.2τ.
The 3-dB bandwidth B3dB of this filter and the ratio Beq/B3dB are given by

B3dB = 1/(2πτ)    Beq/B3dB = π/2 ≈ 1.57
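The rise time and half-power bandwidth of a first-order lowpass filter combine into the well-known rule of thumb Tr·B3dB = ln 9/(2π) ≈ 0.35. A quick numerical check (τ = 1 is an arbitrary choice; the crossing times are found by bisection):

```python
import math

tau = 1.0                                   # illustrative time constant

def s(t):
    """Step response of the RC lowpass filter."""
    return 1.0 - math.exp(-t / tau)

def crossing(level, lo=0.0, hi=100.0):
    """Invert s(t) = level by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if s(mid) < level:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Tr = crossing(0.9) - crossing(0.1)          # 10%-90% rise time, equal to tau*ln(9)
B3dB = 1.0 / (2 * math.pi * tau)            # half-power bandwidth
print(Tr, Tr * B3dB)                        # ~2.197*tau and ~0.35
```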
9.6 Time-Bandwidth Measures 287
(b) The nth-order Butterworth lowpass filter with a 3-dB bandwidth of B3dB = 1/(2π) is defined by the
magnitude-squared function |H(f)|² = 1/[1 + (2πf)^{2n}]. With |H(0)|² = 1 and ∫₀^∞ dx/(1 + xⁿ) = 1/sinc(1/n),
its equivalent bandwidth Beq is given by

Beq = [∫₀^∞ |H(f)|² df] / |H(0)|² = ∫₀^∞ df/[1 + (2πf)^{2n}] = 1/[2π sinc(0.5/n)] = 1/[4n sin(0.5π/n)]

We find that Beq equals 0.25 for n = 1, √2/8 for n = 2, and 1/6 for n = 3. We also find that the ratio
Beq/B3dB equals 1.57 for n = 1, 1.111 for n = 2, and 1.047 for n = 3, and approaches unity with
increasing n.
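The closed form for Beq can be checked by direct numerical integration of |H(f)|²; the integration range and grid below are arbitrary discretization choices:

```python
import numpy as np

def beq_numeric(n, fmax=500.0, npts=2_000_000):
    """Crude rectangle-rule estimate of the equivalent bandwidth."""
    f = np.linspace(0.0, fmax, npts)
    H2 = 1.0 / (1.0 + (2 * np.pi * f) ** (2 * n))
    return np.sum(H2) * (f[1] - f[0])

def beq_closed(n):
    """Closed-form equivalent bandwidth 1/[4n sin(0.5*pi/n)]."""
    return 1.0 / (4 * n * np.sin(0.5 * np.pi / n))

for n in (1, 2, 3):
    print(n, beq_numeric(n), beq_closed(n))   # numeric and closed forms agree
```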
Energy-based measures for duration and the time-bandwidth product rely on the following definitions
for equivalent duration and equivalent bandwidth:

Teq = [∫_{−∞}^{∞} h(t) dt]² / ∫_{−∞}^{∞} h²(t) dt    Beq = ∫_{−∞}^{∞} |H(f)|² df / [2|H(0)|²]    Teq Beq = 0.5    (9.58)

The equality Teq Beq = 0.5 holds for any transform pair for which the required quantities exist. To
accommodate complex-valued functions, we often replace h(t) by |h(t)| and H(f) by |H(f)|. In such a case, it
turns out that Teq Beq ≥ 0.5.
Another set of measures, based on the moments of |h(t)|² and |H(f)|², yields the rms duration and rms
bandwidth defined by

Trms = 2[∫_{−∞}^{∞} t²|h(t)|² dt / ∫_{−∞}^{∞} |h(t)|² dt]^{1/2}    Brms = [∫_{−∞}^{∞} f²|H(f)|² df / ∫_{−∞}^{∞} |H(f)|² df]^{1/2}    Trms Brms ≥ 1/(2π)    (9.60)

In the moment-based measures, the equality in the time-bandwidth product holds only for a Gaussian signal.
(b) Consider an RC lowpass filter with time constant τ. Its transfer function H(f) and impulse response
h(t) are given by

H(f) = 1/(1 + j2πfτ)    h(t) = (1/τ)e^{−t/τ} u(t)

From Example 9.18(a), its equivalent bandwidth is Beq = 1/(4τ). Its equivalent duration is given by

Teq = [∫₀^∞ (1/τ)e^{−t/τ} dt]² / ∫₀^∞ (1/τ²)e^{−2t/τ} dt = 1 / [1/(2τ)] = 2τ
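The product Teq·Beq = 0.5 for this filter can be confirmed by evaluating both integrals numerically; the grids and truncation limits below are arbitrary discretization choices (τ = 1):

```python
import numpy as np

tau = 1.0                                  # illustrative time constant
t = np.linspace(0.0, 50 * tau, 1_000_000)
h = (1.0 / tau) * np.exp(-t / tau)         # impulse response of the RC filter
dt = t[1] - t[0]

# Equivalent duration: [integral of h]^2 / integral of h^2
Teq = (np.sum(h) * dt) ** 2 / (np.sum(h ** 2) * dt)

f = np.linspace(-200.0, 200.0, 2_000_001)
H = 1.0 / (1.0 + 1j * 2 * np.pi * f * tau) # transfer function
df = f[1] - f[0]
# Equivalent bandwidth: integral of |H|^2 over all f, divided by 2|H(0)|^2
Beq = np.sum(np.abs(H) ** 2) * df / (2 * abs(H[len(f) // 2]) ** 2)

print(Teq, Beq, Teq * Beq)   # ~2*tau, ~1/(4*tau), ~0.5
```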
(c) Consider the cascade of two identical RC lowpass filters with time constant τ = 1. The transfer
function H(f) and impulse response h(t) of the cascaded system are given by

H(f) = 1/(1 + j2πf)²    h(t) = te^{−t} u(t)

The time-bandwidth product is Trms Brms = 3.4641/(2π) ≈ 0.5513 and satisfies the inequality Trms Brms ≥ 1/(2π).
CHAPTER 9 PROBLEMS
DRILL AND REINFORCEMENT
9.1 (Fourier Transforms from the Definition) Sketch each signal x(t) and, starting with the defining
relation, find its Fourier transform X(f ).
(a) x(t) = rect(t − 0.5) (b) x(t) = 2t rect(t) (c) x(t) = te^{−2t} u(t) (d) x(t) = e^{−2|t|}
9.2 (Fourier Transform in Sinc Forms) Consider the signals x(t) sketched in Figure P9.2. Express
each signal by linear combinations of rect and/or tri functions and evaluate the Fourier transform X(f )
in terms of sinc functions.
[Figure P9.2 shows six piecewise linear signals (Signals 1–6) defined over the intervals marked on their time axes.]
Figure P9.2 Figure for Problem 9.2
9.3 (Fourier Transforms) Sketch each signal x(t), find its Fourier transform X(f ), and sketch its
magnitude and phase spectrum.
(a) x(t) = e^{−2|t−1|} (b) x(t) = te^{−2|t|}
(c) x(t) = e^{−2t} cos(2t)u(t) (d) x(t) = u(1 − |t|)
(e) x(t) = u(1 − |t|)sgn(t) (f) x(t) = sinc(t) sinc(2t)
(g) x(t) = cos²(2t) sin(2t) (h) x(t) = (1 − e^{−t})u(t)
9.4 (Properties) The Fourier transform of x(t) is X(f ) = rect(f /2). Use properties to find and sketch
the magnitude and phase of the Fourier transform of the following:
(a) d(t) = x(t − 2) (b) f(t) = x′(t) (c) g(t) = x(−t)
(d) h(t) = tx(t) (e) p(t) = x(2t) (f) r(t) = x(t)cos(2πt)
(g) s(t) = x²(t) (h) v(t) = tx′(t) (i) w(t) = x(t) ∗ x(t)
9.5 (Properties) Consider the transform pair x(t) ⇔ X(f), where x(t) = te^{−2t} u(t). Without evaluating
X(f), find the time signals corresponding to the following:
(a) Y(f) = X(2f) (b) D(f) = X(f − 1) + X(f + 1) (c) G(f) = X′(f)
(d) H(f) = f X′(f) (e) M(f) = j2πf X(2f) (f) P(f) = X(f/2)cos(4πf)
(g) R(f) = X²(f) (h) S(f) = (1 − 4π²f²)X(f) (i) V(f) = X(f) ∗ X(f)
9.7 (Fourier Transforms) Compute the Fourier transform of the following signals and plot their amplitude
spectrum and phase spectrum.
(a) x(t) = δ(t + 1) + δ(t − 1) (b) x(t) = δ(t + 1) + δ(t) + δ(t − 1)
(c) x(t) = δ(t + 1) − δ(t − 1) (d) x(t) = δ(t + 1) − δ(t) + δ(t − 1)
9.8 (Inverse Transforms) Find the inverse Fourier transforms of the following:
(a) u(1 − |f|) (b) j2πf / (1 + 4π²f²)
(c) j2πf / (1 + 6jπf − 8π²f²) (d) e^{−4jπf} / (1 + 4π²f²)
(e) sinc(0.25f) cos(2πf) (f) 8 sinc(2f) sinc(4f)
(g) 4 sinc(2f) / (1 + j2πf) (h) cos(πf) e^{−jπf} / (1 + j2πf)
9.9 (Fourier Transforms and Convolution) Sketch each signal y(t), express it as a convolution of two
functions x(t) and h(t), and find its Fourier transform Y (f ) as the product of X(f ) and H(f ).
(a) y(t) = r(t + 4) − 2r(t) + r(t − 4)
(b) y(t) = r(t + 4) − r(t + 2) − r(t − 2) + r(t − 4)
(c) y(t) = te^{−2t} u(t)
9.10 (Fourier Transforms and Convolution) Given the Fourier transforms X1 (f ) and X2 (f ) of two
signals x1 (t) and x2 (t), compute the Fourier transform Y (f ) of the product x1 (t)x2 (t).
(a) X1(f) = rect(f/4) X2(f) = rect(f/4)
(b) X1(f) = rect(f/2) X2(f) = X1(f/2)
(c) X1(f) = δ(f) + δ(f + 4) + δ(f − 4) X2(f) = δ(f + 2) + δ(f − 2)
(d) X1(f) = tri(f) X2(f) = δ(f + 1) + δ(f) + δ(f − 1)
9.11 (Modulation) Sketch the spectrum of the modulated signal y(t) = x(t)m(t) if
(a) X(f) = rect(f) m(t) = cos(πt)
(b) X(f) = rect(0.25f) m(t) = cos(2πt)
(c) X(f) = tri(f) m(t) = cos(10πt)
(d) X(f) = tri(f) m(t) = 0.5 Σ_{k=−∞}^{∞} δ(t − 0.5k)
9.12 (Parseval's Theorem) A series RC circuit with time constant τ = 1 is excited by a voltage input
x(t). The output is the capacitor voltage. Find the total energy in the input and output. Then,
compute the signal energy in the output over |f| ≤ 1/(2π) Hz and over |f| ≤ 1 Hz.
9.14 (Application) A signal x(t) is passed through an ideal lowpass filter as shown:
x(t) ideal lowpass filter y(t)
The filter blocks frequencies above 5 Hz. Find X(f ) and Y (f ) and sketch their spectra for the following
x(t). Also compute and sketch the output y(t).
(a) x(t) = 4 + 2 sin(4πt) + 6 cos(12πt) + 3 sin(16πt) + 4 cos(16πt)
(b) x(t) = 8 − 8 cos²(6πt)
(c) x(t) = |sin(3πt)|
9.15 (Frequency Response of Filters) Sketch the magnitude spectrum and phase spectrum of each of
the following filters and find their response to the input x(t) = cos(t).
9.17 (System Formulation) Set up the system differential equation for each of the following systems.
(a) H(ω) = 3 / (2 + jω) (b) H(ω) = (1 + j2ω − ω²) / [(1 − ω²)(4 − ω²)]
(c) H(ω) = 2/(1 + jω) − (1 − 2jω)/(2 + jω) (d) H(ω) = jω / [(1 + jω)(2 + jω)]
9.18 (Frequency Response) Find the frequency response H(f ) and impulse response h(t) of each system.
(a) y″(t) + 3y′(t) + 2y(t) = 2x′(t) + x(t) (b) y′(t) + 3y(t) = 2x(t)
(c) y″(t) + 4y′(t) + 4y(t) = 2x′(t) + x(t) (d) y(t) = 0.2x(t)
9.20 (Ideal Filters) Express the frequency response H(f) and impulse response h(t) of each filter in terms
of the frequency response HLP(f) and impulse response hLP(t) of an ideal lowpass filter with cutoff
frequency fC.
(a) A lowpass filter with cutoff frequency 0.25fC
(b) A highpass filter with cutoff frequency 2fC
(c) A bandpass filter with center frequency 8fC and passband 2fC
(d) A bandstop filter with center frequency 8fC and stopband 2fC
9.21 (Time-Bandwidth Product) Compute the time-bandwidth product of a lowpass filter whose
impulse response is h(t) = (1/τ)e^{−t/τ} u(t), using the following:
9.28 (Properties) Let x(t) = 4 sinc(4t). Find its Fourier transform X(f ) and use the result to find the
Fourier transforms of the following signals.
(a) y(t) = x(−t) (b) g(t) = x(2t − 2)
(c) h(t) = e^{j2πt} x′(t) (d) p(t) = (t − 1)x(t − 1)e^{j2πt}
(e) r(t) = tx′(t) (f) s(t) = ∫_{−∞}^{t} x(λ) dλ
(g) w(t) = x(t)cos²(2πt) (h) v(t) = x²(t)cos²(2πt)
9.29 (Convolution) Find the Fourier transform Y(f) of each signal y(t).
(a) y(t) = e^{−t} u(t) ∗ e^{−t} u(t) (b) y(t) = e^{−t} u(t) ∗ e^{t} u(−t)
(c) y(t) = rect(t) ∗ rect(t) (d) y(t) = rect(t) ∗ Σ_{k=−∞}^{∞} δ(t − 2k)
(e) y(t) = rect(t) ∗ Σ_{k=1}^{∞} δ(t − 2k) (f) y(t) = 0.25[rect(t) ∗ Σ_{k=−∞}^{∞} δ(t − 2k)] sinc²(t/4)
9.34 (Central Ordinate Theorems) Use the central ordinate theorem and the Fourier transform X(f )
to compute x(0). Verify your results by finding x(0) directly from x(t). For which cases does x(0)
equal the initial value x(0+)?
(a) x(t) = rect(t) (b) x(t) = u(t) (c) x(t) = e^{−t} u(t) (d) x(t) = rect(t − 0.5)
9.35 (Periodic Extension) Find the Fourier transform X(f ) of each signal x(t) and the Fourier transform
Y (f ) of its periodic extension y(t) with period T = 2.
(a) x(t) = δ(t) (b) x(t) = rect(t) (c) x(t) = e^{−t} u(t)
9.36 (Symmetry) Recall that conjugate symmetry of the Fourier transform applies only to real time
signals. Identify the real/complex nature and symmetry of the time signals whose Fourier transform is
(a) Real (b) Real and even (c) Real and odd
9.37 (High-Frequency Decay) The high-frequency behavior of an energy signal may be estimated,
without computing X(f ), by taking successive derivatives of x(t). If it takes n derivatives to first show
impulses, the high-frequency decay rate is proportional to 1/fⁿ. The high-frequency decay rate is an
indicator of how smooth the time signal x(t) is; the faster the high-frequency decay rate, the smoother
is the time signal x(t). Without computing the Fourier transform, predict the high-frequency decay
rate of the following signals.
(a) x(t) = rect(t) (b) x(t) = tri(t) (c) x(t) = [1 + cos(πt)]rect(0.5t)
(d) x(t) = e^{−t} u(t) (e) x(t) = cos(0.5πt)rect(0.5t) (f) x(t) = (1 − t²)rect(0.5t)
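The decay-rate rule of this problem can be illustrated numerically. The transforms of rect(t) and tri(t) are sinc(f) and sinc²(f); fitting a line to log|X| versus log f at the spectral peaks (the half-integer frequencies are an illustrative choice of sample points) recovers slopes near −1 and −2:

```python
import numpy as np

# Spectral peak locations (midway between the nulls of sinc)
f = np.arange(10) + 10.5            # f = 10.5, 11.5, ..., 19.5
X_rect = np.abs(np.sinc(f))         # |sinc(f)|; np.sinc includes the pi factor
X_tri = np.sinc(f) ** 2             # sinc^2(f)

slope_rect = np.polyfit(np.log(f), np.log(X_rect), 1)[0]
slope_tri = np.polyfit(np.log(f), np.log(X_tri), 1)[0]
print(slope_rect, slope_tri)        # close to -1 (decay 1/f) and -2 (decay 1/f^2)
```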
9.38 (The Initial Value Theorem) The value of a causal energy signal x(t)u(t) at t = 0+ may be found
from its transform X(f), using the initial value theorem, which states that x(0+) = lim_{f→∞} [j2πf X(f)]. Find
the initial value x(0+) for the signal corresponding to each X(f), compare with the value x(0) obtained
by using the central ordinate theorem, and explain the differences, if any.
(a) x(t) = u(t) (b) x(t) = rect(t − 0.5) (c) x(t) = tri(t − 1) (d) x(t) = e^{−t} u(t)
9.39 (System Response) For each circuit of Figure P9.39, let R = 1 Ω, C = 1 F, and L = 1 H as needed.
(a) Find the transfer function H(f ).
(b) Sketch the magnitude and phase of H(f ) and identify the filter type.
(c) Find the response if the input is x(t) = (t).
(d) Find the response if the input is x(t) = u(t).
(e) Find the response if the input is x(t) = e^{−t} u(t).
[Figure P9.39 shows six circuits (Circuits 1–6) built from R, C, and L elements, each with input x(t) and output y(t).]
Figure P9.39 Figure for Problem 9.39
9.40 (System Response) The signal x(t) = 4 + cos(4πt) − sin(8πt) forms the input to a filter whose
impulse response is h(t), as shown. Find the response y(t).
x(t) filter h(t) y(t)
(a) h(t) = sinc(5t) (b) h(t) = sinc(5t − 2)
(c) h(t) = sinc²(5t − 2) (d) h(t) = e^{−t} u(t)
(e) h(t) = δ(t) − e^{−t} u(t) (f) h(t) = sinc(t)cos(8πt)
(g) h(t) = sinc²(t)cos(5πt) (h) h(t) = sinc²(t)cos(16πt)
9.41 (System Response) The impulse train input x(t) = Σ_{n=0}^{∞} (0.4)ⁿ δ(t − 3n) is applied to a filter and the
filter output is y(t). Compute Y(f) if the filter impulse response is
(a) h(t) = sinc(t) (b) h(t) = e^{−2t} u(t)
9.42 (Application) Each of the following signals is applied to an ideal lowpass filter as shown.
x(t) ideal filter y(t)
The filter blocks frequencies above 2 Hz. Compute the response Y (f ) and (where possible) y(t).
(a) x(t) = e^{−t} u(t) (b) x(t) = e^{−|t|} (c) x(t) = sinc(8t)
(d) x(t) = rect(2t) (e) x(t) = |sin(3πt/2)| (f) x(t) = cos²(3πt)
9.43 (Cascaded Systems) Find the response y(t) and sketch the magnitude and phase of Y (f ) for each
cascaded system. Do the two systems yield the same output? Should they? Explain.
(a) x(t) = cos(t) → phase shift of π/2 → squaring circuit → y(t)
(b) x(t) = cos(t) → squaring circuit → phase shift of π/2 → y(t)
9.44 (Modulation and Filtering Concepts) System 1 is a modulator that multiplies a signal by cos(6πt),
and system 2 is an ideal lowpass filter with a cutoff frequency of 3 Hz. Compute the output of all systems
in both the time domain and the frequency domain if the input is x(t) = sinc²(2t) and the systems are
connected in cascade as shown. Do the two configurations produce identical outputs?
(a) x(t) system 1 system 2 y(t)
(b) x(t) system 2 system 1 y(t)
9.45 (Modulation and Filtering Concepts) System 1 is a modulator that multiplies a signal by cos(6πt),
system 2 is an ideal lowpass filter with a cutoff frequency of 3 Hz, and system 3 is a squaring system.
Compute the output of all systems in both the time domain and the frequency domain if the input
is x(t) = sinc(2t) and the systems are connected in cascade as shown. Are any of the overall outputs
identical?
(a) x(t) system 1 system 2 system 3 y(t)
(b) x(t) system 3 system 2 system 1 y(t)
(c) x(t) system 2 system 3 system 1 y(t)
(d) x(t) system 3 system 1 system 2 y(t)
(e) x(t) system 1 system 3 system 2 y(t)
(f ) x(t) system 2 system 1 system 3 y(t)
9.46 (System Response) The transfer function of a system is H(ω) = (2 + 2jω) / (4 + 4jω − ω²). Find the
time-domain response y(t) and the spectrum Y(f) for the following inputs.
(a) x(t) = 4 cos(2t) (b) x(t) = e^{−t} u(t) (c) x(t) = δ(t) (d) x(t) = 2δ(t) + δ′(t)
9.47 (System Response) The transfer function of a filter is given by H(f) = rect(0.1f)e^{−jπf}. Find its
response y(t) to each input x(t).
(a) x(t) = 1 (b) x(t) = cos(8πt)
(c) x(t) = 4 + cos(πt − 0.25π) − cos(40πt) (d) x(t) = sinc[5(t − 1)]
9.48 (System Response) The signal x(t) = sinc(0.5t) + sinc(0.25t) is applied to a filter whose impulse
response is h(t) = A sinc(αt).
(a) For what values of A and α will the filter output equal sinc(0.5t)?
(b) For what values of A and α will the filter output equal sinc(0.25t)?
(c) For what values of A and α will the filter output equal x(t)?
(d) What will be the output if A = 1 and α = 1?
9.49 (System Response) Find the output of a filter whose impulse response is h(t) = 4 sinc(5t − 8) to
the following inputs.
(a) x(t) = 1 (b) x(t) = cos(2πt) (c) x(t) = cos(12πt)
(d) x(t) = sinc(t) (e) x(t) = sinc(12t) (f) x(t) = Σ_{k=−∞}^{∞} tri(2t − k)
9.50 (System Response) Find the output of a filter whose impulse response is h(t) = 4 sinc²(2t − 1) to
the following inputs.
(a) x(t) = 1 (b) x(t) = cos(2πt) (c) x(t) = cos(12πt)
(d) x(t) = sinc(t) (e) x(t) = sinc(12t) (f) x(t) = Σ_{k=−∞}^{∞} tri(2t − k)
9.51 (Integral Equations) Integral equations arise in many contexts. One approach to solving integral
equations is by using the convolution property of the Fourier transform. Find the transfer function
and impulse response of a filter whose input-output relation is described by the following:
" t
(a) y(t) = x(t) 2 y()e(t) u(t ) d
" t
(b) y(t) = x(t) + y()e3(t) u(t ) d
9.52 (System Representation) Find the transfer function H(f) of each system described by its impulse
response h(t), find the differential equation and its order (where possible), and determine if the system
is stable and/or causal.
(a) h(t) = e^{−2t} u(t) (b) h(t) = (1 − e^{−2t})u(t)
(c) h(t) = sinc(t) (d) h(t) = te^{−t} u(t)
(e) h(t) = 0.5δ(t) (f) h(t) = (e^{−t} − e^{−2t})u(t)
(g) h(t) = δ(t) − e^{−t} u(t) (h) h(t) = sinc²(t + 1)
9.53 (System Response) Consider the relaxed system y′(t) + (1/τ)y(t) = (1/τ)x(t).
(a) What is the frequency response H(f ) of this system?
(b) What is the response y(t) of this system if the input is x(t) = cos(2πf0t)?
(c) Compute and sketch y(t) if f0 = 10 Hz and τ = 1 s. Repeat for f0 = 1 Hz and τ = 1 ms.
(d) Under what conditions on τ and f0 would you expect the response to resemble (be a good
approximation to) the input?
9.54 (Eigensignals) If the input to a system is its eigensignal, the response has the same form as the
eigensignal. Argue for or against the following statements by computing the system response in the
frequency domain using the Fourier transform.
(a) Every signal is an eigensignal of the system described by h(t) = Aδ(t).
(b) The signal x(t) = e^{j2πt} is an eigensignal of any LTI system such as that described by the impulse
response h(t) = e^{−t} u(t).
(c) The signal x(t) = e^{j2πt} is an eigensignal of any LTI system described by a differential equation
such as y′(t) + y(t) = x(t).
(d) The signal x(t) = cos(t) is an eigensignal of any LTI system described by a differential equation
such as y′(t) + y(t) = x(t).
(e) The signal x(t) = sinc(t) is an eigensignal of ideal filters described by h(t) = α sinc(αt), α > 1.
(f) The signal x(t) = cos(2πt) is an eigensignal of ideal filters described by h(t) = α sinc(αt), α > 2.
(g) A signal x(t) band-limited to B Hz is an eigensignal of the ideal filter h(t) = α sinc(αt), α > 2B.
9.55 (Frequency Response of Filters) The frequency response of a filter is H(ω) = 8 / (8 − ω² + 4jω).
(a) What is the dc gain A0 of this filter?
(b) At what frequency does the gain equal A0/√2? At what frequency (approximately) does the
gain equal 0.01A0?
(c) Compute the response y(t) of this filter to the input x(t) = 4 + 4 cos(2t) − 4 sin(2t) + cos(30t).
9.56 (Frequency Response of Filters) The frequency response of a filter is H(ω) = 1 / (1 − ω² + 2jω).
(a) What is the dc gain A0 of this filter?
(b) At what frequency does the gain equal A0/√2? At what frequency (approximately) does the
gain equal 0.01A0?
(c) Compute the response y(t) of this filter to the input x(t) = 4 + 4 cos(t) − 4 sin(t) + cos(10t).
9.57 (Power Spectral Density) Sketch the power spectral density (PSD) of the following:
(a) x(t) = 8 cos(10t) + 4 sin(20t) + 6 cos(30t + π/4)
(b) x(t) = 4 + 8 cos(10t) + 4 sin(20t)
9.58 (Power Spectral Density) A transform X(f ) can qualify as the PSD of an autocorrelation function
only if it satisfies certain properties. For each of the following that qualify as a valid PSD, find the
autocorrelation function. For those that do not, explain why they cannot be a valid PSD.
(a) R(f) = 1 / (1 + j2πf) (b) R(f) = 1 / (1 + 4π²f²)
(c) R(f) = 1 / (1 + 2πf + 4π²f²) (d) R(f) = 3 / (4 + 20π²f² + 16π⁴f⁴)
(e) R(f) = 1 / (1 − 32π²f² + 16π⁴f⁴) (f) R(f) = sinc²(2f)
(g) R(f) = δ(f) + sinc²(2f) (h) R(f) = δ(f) + 2πf / (1 + 4π²f²)
9.59 (Power Spectral Density) Find the PSD corresponding to the following autocorrelation functions
and describe their spectra.
(a) rxx(t) = Aδ(t) (white noise) (b) rxx(t) = Ae^{−|t|}
(c) rxx(t) = A sinc(t) (band-limited white noise) (d) rxx(t) = Ae^{−t²}
9.60 (Dirichlet Kernel) Find the quantities indicated for the following signals.
(a) x(t) = δ(t + 1) + δ(t) + δ(t − 1). Find X(f).
(b) d(t) = Σ_{k=−N}^{N} δ(t − k). Show that D(f) = (2N + 1) sinc[(2N + 1)f] / sinc(f).
This is the Dirichlet kernel. Does D(f) equal X(f) for N = 1?
(c) y(t) = Σ_{k=−N}^{N} δ(t − kt0). Find its Fourier transform Y(f).
9.61 (Bandwidth) Find the indicated bandwidth measures for each signal x(t).
(a) x(t) = sinc(t) (the absolute bandwidth)
(b) x(t) = rect(t) (the null bandwidth)
(c) x(t) = e^{−t} u(t) (the half-power bandwidth)
(d) x(t) = e^{−πt²} (the rms bandwidth)
(e) x(t) = e^{−πt²} (the equivalent bandwidth)
9.62 (Time-Bandwidth Product) Compute the time-bandwidth product of an ideal lowpass filter,
assuming that the duration corresponds to the width of the central lobe of its impulse response h(t) and
the bandwidth equals the absolute bandwidth.
9.63 (Time-Bandwidth Product) Find the equivalent bandwidth and the time-bandwidth product for
a second-order Butterworth filter with a cutoff frequency of 1 rad/s.
9.64 (Time-Bandwidth Product) Compute the indicated time-bandwidth product for each system.
(a) h(t) = tri(t), using equivalent duration and equivalent bandwidth
(b) h(t) = e^{−t²}, using the moment-based measures TM and BM as defined in the text
9.65 (Equivalent Noise Bandwidth) Find the equivalent bandwidth and the half-power bandwidth for
(a) A first-order lowpass filter with time constant τ.
(b) The cascade of two first-order lowpass filters, each with time constant τ.
(c) A second-order Butterworth filter with a cutoff frequency of 1 rad/s.
9.66 (Frequency Response of Oscilloscopes) To permit accurate displays and time measurements,
commercial oscilloscopes are designed to have an approximately Gaussian response. This also permits
a monotonic step response (with no ringing) and a constant group delay (freedom from distortion).
However, a more fundamental reason is based on the fact that an oscilloscope is a complex instrument
with many subsystems and the overall response depends on the response of the individual systems.
How does this fact account for the Gaussian response of commercial oscilloscopes (think central limit
theorem)?
9.67 (Oscilloscope Bandwidth) When displaying step-like waveforms on an oscilloscope, the measured
rise time Tm is related to the oscilloscope rise time To and the signal rise time Ts by Tm² ≈ Ts² + To².
(a) Pick a reasonable estimate of the rise time-bandwidth product and determine the oscilloscope
bandwidth required to measure a signal rise time of 1 ns with an error of 10% or less.
(b) The signal is connected to the oscilloscope through a probe whose time constant is 0.1 ns. Make a
reasonable estimate of the probe rise time from its time constant and determine the oscilloscope
bandwidth required to measure a signal rise time of 1 ns with an error of 10% or less.
9.68 (Frequency Response of Butterworth Filters) The gain of an nth-order Butterworth lowpass
filter is described by

|H(ω)| = 1 / √[1 + (ω/ωc)^{2n}]

(a) Show that the half-power frequency of this filter is ω = ωc for any n.
(b) Use Matlab to plot the magnitude spectrum for ωc = 1 and n = 2, 3, 4, 5 on the same graph. At
what frequency do these filters have identical magnitudes? Describe how the magnitude spectrum
changes as the order n is increased.
9.69 (Performance Measures) Performance measures for lowpass filters include the rise time, the 5%
settling time, the half-power bandwidth, and the time-bandwidth product. Plot the step response,
magnitude spectrum, and phase spectrum of the following filters. Then, use the ADSP routine trbw
to numerically estimate each performance measure.
(a) A first-order lowpass filter with τ = 1
(b) The cascade of two first-order filters, each with τ = 1
(c) y″(t) + √2 y′(t) + y(t) = x(t) (a second-order Butterworth lowpass filter)
(d) y‴(t) + 2y″(t) + 2y′(t) + y(t) = x(t) (a third-order Butterworth lowpass filter)
9.70 (Steady-State Response in Symbolic Form) The ADSP routine ssresp yields a symbolic ex-
pression for the steady-state response to sinusoidal inputs (see Chapter 21 for examples of its usage).
Find the steady-state response to the input x(t) = 2 cos(3t − π/3) for each of the following systems.
(a) y′(t) + αy(t) = 2x(t), for α = 1, 2
(b) y″(t) + 4y′(t) + Cy(t) = x(t), for C = 3, 4, 5
Chapter 10
MODULATION
10.1.1 DSBSC AM
The simplest amplitude modulation scheme is to multiply the carrier xC (t) by the message signal xS (t), as
illustrated in Figure 10.1.
The modulated signal xM(t) and its spectrum XM(f) are given by

xM(t) = xS(t) cos(2πfC t)    XM(f) = 0.5[XS(f + fC) + XS(f − fC)]    (10.1)
The spectrum of the modulated signal shows the message spectrum centered about ±fC. It exhibits
symmetry about fC. The portion for |f| > fC is called the upper sideband (USB) and that for |f| < fC, the
lower sideband (LSB). Since xS(t) is band-limited to B, the spectrum occupies a band of 2B (centered
about fC), which is twice the signal bandwidth.
This scheme is called double-sideband suppressed-carrier (DSBSC) AM. The lowpass signal xS (t) is band-
limited to B and called a baseband signal. The signal xM (t) is also band-limited to B + fC , but it is called
a bandpass signal because its spectrum does not include the origin.
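The sideband structure is easy to observe numerically: modulating a low-frequency tone by a carrier moves its spectral line from ±f0 to fC ± f0. The sketch below locates the spectral peaks of the modulated signal with an FFT (all frequencies and durations are arbitrary illustrative choices):

```python
import numpy as np

fs, dur = 1000.0, 4.0                 # sample rate and duration (illustrative)
f0, fC = 5.0, 100.0                   # message tone and carrier frequencies
t = np.arange(0.0, dur, 1.0 / fs)

xS = np.cos(2 * np.pi * f0 * t)       # baseband message
xM = xS * np.cos(2 * np.pi * fC * t)  # DSBSC modulated signal

XM = np.abs(np.fft.rfft(xM))
freqs = np.fft.rfftfreq(len(xM), 1.0 / fs)
peaks = freqs[XM > 0.9 * XM.max()]    # frequencies of the dominant spectral lines
print(peaks)                          # lines at fC - f0 and fC + f0 (95 and 105 Hz here)
```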
10.1 Amplitude Modulation 301
[Figure 10.1: the message spectrum is multiplied by the carrier spectrum (impulses of strength 0.5 at ±fC) to produce the DSBSC AM spectrum, with the LSB shown shaded.]
In the time domain, note that the envelope of the modulated signal xM (t) follows the message xS (t),
provided xS(t) ≥ 0, as illustrated in Figure 10.2.
[Figure 10.2 shows (a) a unipolar message (dark) and its modulated signal, and (b) a bipolar message (dark) and its modulated signal.]
Figure 10.2 The envelope of the modulated signal corresponds to the message
signal only if the message amplitude is always positive.
10.1.2 Standard AM
In standard AM, the modulated signal is transmitted along with the carrier signal, as illustrated in
Figure 10.3. The message signal xS(t) modulates the carrier xC(t) to give the modulated signal xM(t) described
by
xM(t) = [AC + xS(t)] cos(2πfC t)    (10.2)

With AS = |xS(t)|max and μ = AS/AC, we may rewrite this as

xM(t) = AC cos(2πfC t) + (μAC/AS) xS(t) cos(2πfC t)    (10.3)

Here, μ is called the modulation index. The spectrum of the modulated signal xM(t) equals

XM(f) = (AC/2)[δ(f + fC) + δ(f − fC)] + (μAC/2AS)[X(f + fC) + X(f − fC)]    (10.4)
The spectrum is identical to that of DSBSC AM, except for the presence of two impulses at ±fC due to the
carrier component. Its bandwidth equals 2B (centered about fC ), which is twice the signal bandwidth.
Figure 10.3 Standard AM
In the time domain, if xS(t) varies slowly compared with the carrier, the envelope of xM(t) follows xS(t),
provided 0 < μ < 1 (or |xS(t)|max < AC). For μ > 1, the envelope no longer resembles xS(t), and we have
over-modulation (see Figure 10.2).
The spectrum of xM(t) consists of the carrier component with strength AC, and two sidebands at fC ± f0
with strengths 0.5μAC and zero phase.
The power PM in the modulated signal equals

PM = 0.5AC² + 0.25μ²AC² = 0.25AC²(2 + μ²)

The total power in the two sidebands is 0.25μ²AC². The efficiency η is thus

η = 0.25μ²AC² / [0.25AC²(2 + μ²)] = μ² / (2 + μ²)
The maximum possible efficiency occurs when μ = 1 (with AS = AC) and equals 1/3 (33.33%). With a
modulation index of μ = 0.5, the efficiency drops to 1/9 (11.11%). For periodic signals with μ = 1, it turns out
that η ≤ 1/(1 + pf²), where pf is the peak factor (the ratio of the peak and rms values). Since xrms ≤ xp,
we have pf ≥ 1. Large values of pf imply poor efficiency.
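The efficiency figures quoted above are easy to tabulate. A minimal sketch:

```python
def am_efficiency(mu):
    """Sideband power / total power for tone-modulated standard AM."""
    return mu ** 2 / (2.0 + mu ** 2)

for mu in (1.0, 0.5, 0.2):
    print(mu, am_efficiency(mu))
# mu = 1 gives 1/3 (33.33%); mu = 0.5 gives 1/9 (11.11%)
```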
[Figure: demodulation of DSBSC AM. Multiplying the received signal by the carrier gives xD(t), whose spectrum contains the message term 0.5X(f) plus copies centered at ±2fC; a lowpass filter recovers the 0.5X(f) term.]
If xD (t) is passed through a lowpass filter whose bandwidth exceeds the message bandwidth B, the high-
frequency component at 2fC is blocked, and we recover 0.5xS (t), which corresponds to the message signal.
The demodulation scheme for standard AM is shown in Figure 10.5.
[Figure 10.5: the standard AM spectrum (with carrier impulses at ±fC) is multiplied by the carrier spectrum and lowpass filtered to recover the message.]
If xD (t) is passed through a lowpass filter whose bandwidth exceeds the message bandwidth B, the high-
frequency component at 2fC is blocked, and we recover 0.5[AC + xS (t)]. If xS (t) contains no dc component,
we can use ac coupling to eliminate the dc component 0.5AC and recover the message component 0.5xS (t).
This method is called synchronous or coherent detection, since it requires a signal whose frequency
and phase (but not amplitude) must be synchronized or matched to the carrier signal at the transmitter. For
DSBSC the coherent carrier may be obtained by transmitting a small fraction of the carrier (a pilot signal)
along with xM (t). Synchronous detection is costly in practice.
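Synchronous detection can be simulated directly: multiply the DSBSC signal by the synchronized carrier and lowpass filter the product. In the sketch below the "filter" is an idealized FFT mask, and all frequencies are arbitrary illustrative choices:

```python
import numpy as np

fs, dur = 1000.0, 4.0                         # sample rate and duration (illustrative)
f0, fC = 5.0, 100.0                           # message tone and carrier frequencies
t = np.arange(0.0, dur, 1.0 / fs)

xS = np.cos(2 * np.pi * f0 * t)               # message
xM = xS * np.cos(2 * np.pi * fC * t)          # DSBSC signal
xD = xM * np.cos(2 * np.pi * fC * t)          # multiply by the coherent carrier

# Idealized lowpass filter (cutoff between f0 and 2*fC), applied in the frequency domain
XD = np.fft.rfft(xD)
freqs = np.fft.rfftfreq(len(xD), 1.0 / fs)
XD[freqs > 50.0] = 0.0                        # block the component at 2*fC
recovered = np.fft.irfft(XD, n=len(xD))

err = np.max(np.abs(recovered - 0.5 * xS))    # coherent detection recovers 0.5*xS(t)
print(err)
```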
[Figure: an envelope detector (a diode followed by a parallel RC network) and its waveforms, showing (a) the modulated signal and the detector output (dark) and (b) the output of the lowpass filter.]
The charging time constant (which equals RDC, where RD includes the source and diode resistance) is usually
small enough (with RDC ≪ 1/fC) to allow rapid charging and let the output follow the input (if fC ≫ B).
Figure 10.8 illustrates how the discharging time constant τD affects the detector output. The discharging
time constant must be carefully chosen to satisfy τD > 1/fC, to ensure that the output is maintained
between peaks, and τD < 1/B, to ensure that the capacitor discharges rapidly enough to follow downward
excursions of the message and does not miss any peaks. Envelope detection is an inexpensive scheme that
finds widespread use in commercial AM receivers.
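A crude discrete-time model of the envelope detector (ideal diode, instantaneous charging, exponential discharge with time constant τD) shows this trade-off; the message, carrier, and τD values below are arbitrary illustrative choices satisfying 1/fC < τD < 1/B:

```python
import numpy as np

fs, dur = 10_000.0, 2.0
fC, fm, mu = 100.0, 1.0, 0.5                 # carrier, message frequency, modulation index
t = np.arange(0.0, dur, 1.0 / fs)
env = 1.0 + mu * np.cos(2 * np.pi * fm * t)  # envelope (AC = 1)
xM = env * np.cos(2 * np.pi * fC * t)        # standard AM signal

tauD = 0.1                                   # discharge time constant: 1/fC < tauD < 1/fm
decay = np.exp(-1.0 / (fs * tauD))
out = np.empty_like(xM)
out[0] = xM[0]
for n in range(1, len(xM)):
    # charge instantly to the input if it exceeds the decayed capacitor voltage
    out[n] = max(xM[n], out[n - 1] * decay)

ripple = np.mean(np.abs(out - env))
print(ripple)                                # small compared with the envelope swing
```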
[(a) A large τD means missed peaks. (b) A small τD means rapid discharging.]
Figure 10.8 The effect of the discharging time constant τD on the detected signal
[Figure: frequency-division multiplexing. Each message spectrum Xi(f), of bandwidth Bi, is modulated onto a distinct carrier frequency fi, and the modulated signals are summed to form the multiplexed signal. At the receiver, a bandpass filter centered at each fi extracts the corresponding modulated signal, which is then demodulated and lowpass filtered to recover Xi(f).]
Each message is modulated using a carrier at a different frequency, such that the spectra of the modulated signals do not overlap. To recover the messages, the multiplexed signal is fed to bandpass filters with different center frequencies that extract the individual modulated signals. Each modulated signal is demodulated separately, at the same carrier frequency as used in modulation. The drawback is that this scheme is wasteful of equipment, requiring many local oscillators, and even more wasteful of transmission bandwidth and power.
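A minimal two-channel sketch of this idea (illustrative frequencies and filters, not from the text): two tone messages are placed in non-overlapping carrier slots, and one channel is recovered with a bandpass filter followed by coherent demodulation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100_000
t = np.arange(0, 0.05, 1 / fs)
m1 = np.cos(2 * np.pi * 200 * t)                     # message 1
m2 = np.cos(2 * np.pi * 300 * t)                     # message 2
f1, f2 = 8_000, 12_000                               # non-overlapping carrier slots
fdm = m1 * np.cos(2 * np.pi * f1 * t) + m2 * np.cos(2 * np.pi * f2 * t)

# Receiver for channel 1: bandpass filter around f1, then coherent demodulation
b, a = butter(4, [(f1 - 1000) / (fs / 2), (f1 + 1000) / (fs / 2)], btype='band')
ch1 = filtfilt(b, a, fdm)                            # isolates the f1 slot
b2, a2 = butter(4, 1000 / (fs / 2))
rec1 = 2 * filtfilt(b2, a2, ch1 * np.cos(2 * np.pi * f1 * t))    # ≈ m1
```

Each additional channel needs its own bandpass filter and local oscillator, which is exactly the equipment cost the text points out.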
Figure E10.2 Signal spectra for Example 10.2 and spectrum of the multiplexed signal (band edges at 0.6, 7.4, 8.6, 15.4, 16.6, 23.4, 24.6, and 31.4 kHz; axis not to scale)
The frequency-multiplexed signal is shown in Figure E10.2. Since there are four messages, we choose four carrier frequencies at 4, 12, 20, and 28 kHz, to provide a separation of 1.2 kHz between adjacent message bands. The spectrum of the multiplexed signal shows the message spectra in the bands 0.6–7.4 kHz, 8.6–15.4 kHz, 16.6–23.4 kHz, and 24.6–31.4 kHz. The bandwidth of the multiplexed signal is thus 31.4 kHz.
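The arithmetic can be checked directly; the message bandwidth and guard band below are read off the band edges above (note that the fourth carrier must sit at 28 kHz to center the 24.6–31.4 kHz band).

```python
B, guard = 3.4, 1.2                     # message bandwidth and guard band, kHz
spacing = 2 * B + guard                 # 8 kHz between adjacent carriers
carriers = [4 + k * spacing for k in range(4)]      # 4, 12, 20, 28 kHz
edges = [(fc - B, fc + B) for fc in carriers]       # 0.6-7.4, 8.6-15.4, ...
top = carriers[-1] + B                  # total multiplexed bandwidth: 31.4 kHz
```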
[Figure: Block diagram of a superheterodyne AM receiver. The antenna feeds an RF section (tunable BPF and amplifier), a mixer driven by a local oscillator at fC + fIF, an IF section (fixed BPF and amplifier) with AGC, an envelope detector, and an audio amplifier driving the speaker. A common tuning knob adjusts both the RF section and the local oscillator.]
The incoming signal passes through an RF section that includes a tunable bandpass amplifier. The tuning knob serves two functions. It adjusts the center frequency of the RF amplifier to the required carrier frequency fC. It also adjusts the local oscillator frequency to fC + fIF in order to shift the incoming message to the fixed intermediate frequency fIF = 455 kHz. The mixer produces the heterodyned signal, which is then filtered and amplified by an IF amplifier (with a bandwidth of 10 kHz) and demodulated using an envelope detector. The audio amplifier (a lowpass filter with a cutoff frequency of about 5 kHz) raises the message power to a level suitable for the speakers. The automatic gain control (AGC) compensates for the variations in the RF signal level at the receiver input due to fading. It uses the dc offset of the detector output (which is proportional to the RF signal level) to control the gain of the IF amplifier.
For an RF tuning range of 540–1600 kHz, the frequency of the local oscillator must cover 85–1145 kHz if fLO = fC − fIF, or 995–2055 kHz if fLO = fC + fIF. The tuning range of 13:1 in the first case is harder to implement than the 2:1 tuning range in the second. For this reason, the local oscillator frequency is chosen as fLO = fC + fIF (hence the name superheterodyne). If the local oscillator is tuned to receive the frequency fC = fLO − fIF, we will also pick up the image frequency fI = fLO + fIF = fC + 2fIF. The bandwidth of the RF amplifier should thus be narrow enough to reject such image signals. Strong signals at subfrequencies (such as fC/2) may also cause interference (due to nonlinearities in the IF amplifier).
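A quick numeric check of these relations (frequencies in kHz): the desired station and its image both sit exactly fIF away from the local oscillator, which is why the image must be rejected before mixing.

```python
f_IF = 455.0                      # intermediate frequency, kHz
fC = 540.0                        # desired station at the bottom of the AM band, kHz
f_LO = fC + f_IF                  # high-side local oscillator: 995 kHz
f_image = fC + 2 * f_IF           # image frequency: 1450 kHz
# Both fC and f_image differ from f_LO by exactly f_IF,
# so the (tunable) RF stage must reject the image before the mixer.
```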
10.1 Amplitude Modulation 309
Figure 10.12 Conceptual block diagram of a spectrum analyzer: the test signal feeds a narrow-band bandpass filter (variable center frequency), followed by a rectifier and lowpass filter whose output drives the vertical input of an oscilloscope
The center frequency of the bandpass filter can be varied in response to a ramp signal, which also serves as the horizontal channel input to an oscilloscope. The output of the bandpass filter is rectified to produce the rms value of the signal, filtered, and finally applied to the vertical channel input of the oscilloscope. The oscilloscope display approximates the magnitude spectrum of the input signal as the center frequency of the bandpass filter scans the frequency range of the input signal. If the rectifier is replaced by a square-law detector, the display corresponds to the PSD. The accuracy of the display is governed by the sharpness of the bandpass filter and the sweep rate of the ramp generator. The narrower the passband of the bandpass filter, the better the accuracy. The sweep rate of the ramp generator should be low enough to permit the bandpass filter to produce a steady-state response. If the filter bandwidth is Δf, its rise time will be around 1/Δf. If the sweep rate (in Hz per second) is R, the filter will provide an adequate response only if Δf/R > 1/Δf, or R < (Δf)². The sweep rate is thus limited by the filter bandwidth.
In practice, it is difficult to design variable-frequency narrow-band bandpass amplifiers that can cover a
wide range of frequencies. A practical alternative is to design a bandpass filter with a fixed center frequency f0
and use a voltage-controlled oscillator (VCO) to tune in the signal frequencies, as illustrated in Figure 10.13.
[Figure 10.13: The test signal and the VCO output feed a mixer, followed by a bandpass filter (fixed center frequency), a rectifier and lowpass filter, and a linear or log amplifier driving the oscilloscope vertical input; a periodic ramp generator drives both the VCO and the horizontal input.]
The VCO generates a frequency that sweeps from f0 to f0 + B, where B is the bandwidth of the test signal. At some input frequency fin, the output of the mixer contains the frequencies (f0 + fin) − fin = f0 and (f0 + fin) + fin = f0 + 2fin. Only the lower component, at f0, is passed by the bandpass filter. The amplitude spectrum may also be displayed in decibels (dB) (using appropriate calibration) if the rectified and filtered signal is amplified by a logarithmic amplifier before being fed to the oscilloscope vertical channel. Logarithmic displays are useful in detecting small-amplitude signals in the presence of large-amplitude signals. Due to system imperfections and instrument noise, a spectrum analyzer does not display a decibel value of −∞ if no input signal is present (as it ideally should), and its dynamic range is typically limited to between 60 and 90 dB.
10.2 Single-Sideband AM
Single-sideband (SSB) modulation makes use of the symmetry in the spectrum of an amplitude-modulated signal to reduce the transmission bandwidth. Conceptually, we can transmit just the upper sideband (by using a bandpass or highpass filter) or the lower sideband (by using a bandpass or lowpass filter). This process requires filters with sharp cutoffs. A more practical method is based on the idea of the Hilbert transform, an operation that shifts the phase of x(t) by −π/2. The phase shift can be achieved by passing x(t) through a system whose transfer function H(f) is

H(f) = −j sgn(f) = −j for f > 0 and j for f < 0   (10.9)
In the time domain, the phase-shifted signal x̂(t) is given by the convolution

x̂(t) = (1/πt) ⋆ x(t)   (10.10)

The phase-shifted signal x̂(t) defines the Hilbert transform of x(t). The spectrum X̂(f) of the Hilbert-transformed signal equals the product of X(f) with the transform of 1/πt. In other words,

X̂(f) = −j sgn(f)X(f)   (10.11)
A system that shifts the phase of a signal by −π/2 is called a Hilbert transformer or a quadrature filter. Such a system can be used to generate an SSB AM signal, as illustrated in Figure 10.14.
First, xS(t) is modulated by xC(t) to give the in-phase component:

xMI(t) = xC(t)xS(t) = cos(2πfC t)xS(t)   (10.12)

Next, both xS(t) and xC(t) are shifted in phase by −90° to yield their Hilbert transforms x̂S(t) and x̂C(t). Note that shifting the carrier xC(t) = cos(2πfC t) by −90° simply yields x̂C(t) = sin(2πfC t). Then, x̂C(t) is modulated by x̂S(t) to give the quadrature component:

xMQ(t) = sin(2πfC t)x̂S(t)   (10.13)

The lower sideband xML(t) and upper sideband xMU(t) are obtained from xMI(t) and xMQ(t) as

xML(t) = xMI(t) + xMQ(t) = xS(t)cos(2πfC t) + x̂S(t)sin(2πfC t)   (10.14)

xMU(t) = xMI(t) − xMQ(t) = xS(t)cos(2πfC t) − x̂S(t)sin(2πfC t)   (10.15)
Since the transmitted modulated signal contains only one sideband, its envelope does not correspond to the
message signal.
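scipy.signal.hilbert computes the analytic signal x + jx̂, so the phasing method can be sketched directly; the tone and carrier frequencies below are illustrative choices that fit a whole number of cycles into the window, making the FFT-based transform essentially exact.

```python
import numpy as np
from scipy.signal import hilbert

fs, fC, f0 = 50_000, 5_000, 400
t = np.arange(0, 0.1, 1 / fs)
xS = np.cos(2 * np.pi * f0 * t)           # single-tone message

xS_h = np.imag(hilbert(xS))               # Hilbert transform of the message
carrier = np.cos(2 * np.pi * fC * t)
carrier_h = np.sin(2 * np.pi * fC * t)    # Hilbert transform of the carrier

xLSB = xS * carrier + xS_h * carrier_h    # lower sideband: cos(2*pi*(fC - f0)*t)
xUSB = xS * carrier - xS_h * carrier_h    # upper sideband: cos(2*pi*(fC + f0)*t)
```

For a single tone, the sum and difference collapse (by the cosine addition formulas) to single tones at fC ∓ f0, exactly as the spectra suggest.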
[Figure 10.14: Generating an SSB signal using the Hilbert transform. The message spectrum modulates a cosine carrier (in-phase branch); the Hilbert transformer H(f) = −j sgn(f) produces the phase-shifted message, which modulates a sine carrier (quadrature branch); the sum or difference of the two branches retains only one sideband.]
[Figure: Synchronous demodulation. The spectrum of the modulated signal is multiplied by the carrier spectrum (impulses of strength 0.5 at ±fC); lowpass filtering then yields the spectrum of the demodulated signal, 0.5X(f).]
The phase-shifted (by −90°) signals sin(2πf0 t) and sin(2πfC t) yield the quadrature component. The sum and difference of xMI(t) and xMQ(t) generate the SSB signals. The frequencies of the LSB and USB signals are fC − f0 and fC + f0, respectively.
The argument of the sinusoid may itself be regarded as an instantaneous phase angle θC(t). The carrier completes one full cycle as θC(t) changes by 2π. The peak phase deviation is defined as |φ(t)|max, where φ(t) is the deviation of θC(t) from the carrier phase 2πfC t. The derivative θ′C(t) describes the rate of change of θC(t) and yields the instantaneous frequency as

ωM(t) = dθC/dt = 2πfC + φ′(t)    fM(t) = fC + (1/2π)φ′(t)   (10.20)
θPM(t) = 2πfC t + kP xS(t)    fPM(t) = fC + (kP/2π)x′S(t)   (10.22)

The peak frequency deviation equals

ΔfP = (kP/2π)|x′S(t)|max   (10.23)
In analogy with AM, we also define a modulation index βP, called the deviation ratio, in terms of the highest frequency B in the message as

βP = ΔfP/B   (10.24)
ΔfF = kF|xS(t)|max    βF = ΔfF/B   (10.27)
A PM signal may be viewed as a special case of an FM signal, with the modulating quantity dxS(t)/dt (instead of xS(t)). Similarly, an FM signal may be viewed as a special case of a PM signal, with the modulating quantity ∫xS(t) dt (instead of xS(t)). This is illustrated in Figure 10.16.
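This equivalence is easy to check numerically: an FM signal whose modulating signal is the (suitably scaled) derivative of xS(t) reproduces the PM signal of xS(t). The parameters below are arbitrary illustrative choices.

```python
import numpy as np

fs, fC, f_msg = 100_000, 1_000, 100
t = np.arange(0, 0.02, 1 / fs)
kP = 0.5
xS = np.sin(2 * np.pi * f_msg * t)            # message, zero at t = 0

pm = np.cos(2 * np.pi * fC * t + kP * xS)     # PM of xS

# FM whose modulating signal is (kP/2*pi) * dxS/dt should reproduce the PM signal,
# since 2*pi * integral of (kP/2*pi)*xS'(t) = kP*xS(t)
dxS = kP * f_msg * np.cos(2 * np.pi * f_msg * t)   # analytic (kP/2*pi)*dxS/dt
phase = 2 * np.pi * np.cumsum(dxS) / fs            # running integral of the modulator
fm = np.cos(2 * np.pi * fC * t + phase)
```

Up to the small error of the running-sum integration, the two waveforms coincide sample for sample.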
Figure 10.16 How phase modulation and frequency modulation are related. Panels: (b) PM signal (note the phase reversal at t = 1); (d) FM signal. Time t in seconds.
(a) The carrier frequency and message frequency are fC = 100 Hz and f0 = 5 Hz, respectively.
Its instantaneous phase is θC(t) = 200πt + 0.4 sin(10πt).
The peak phase deviation thus equals 0.4 rad.
Its instantaneous frequency equals fM(t) = (1/2π)θ′C(t) = 100 + 2 cos(10πt).
The peak frequency deviation Δf thus equals 2 Hz.
The deviation ratio then equals β = Δf/f0 = 2/5 = 0.4.

(c) If xM(t) is regarded as an FM signal, then 2πkF∫xS(t) dt = 0.4 sin(10πt).
Suppose kF = 4 Hz/unit. Taking derivatives, we get
2πkF xS(t) = 8πxS(t) = 4π cos(10πt), or xS(t) = 0.5 cos(10πt).
As a check, we evaluate ΔfF = kF|xS(t)|max = 4(0.5) = 2 Hz.
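A one-line check of the arithmetic in part (a): differentiating the phase deviation Δφ·sin(2πf0 t) and dividing by 2π gives a peak frequency deviation of Δφ·f0.

```python
fC, f0 = 100.0, 5.0         # carrier and message frequencies (Hz)
peak_phase = 0.4            # peak of 0.4*sin(10*pi*t), in radians
peak_dev = peak_phase * f0  # peak frequency deviation: d/dt then /(2*pi) -> 2 Hz
beta = peak_dev / f0        # deviation ratio: 0.4
```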
If ΦC(f) represents the Fourier transform of φC(t), the spectrum XM(f) of the modulated signal can be described by

XM(f) = 0.5AC[δ(f + fC) + δ(f − fC) − jΦC(f + fC) + jΦC(f − fC)]   (10.30)
Figure 10.17 compares the spectra of standard AM, narrowband PM, and narrowband FM signals. The spectrum of both PM and FM signals contains two impulses at ±fC, with strength 0.5AC, and two sidebands. The sidebands for PM retain the same shape as the message spectrum X(f), but for FM, the factors 1/(f ∓ fC) make X(f) droop as we move away from ±fC. This makes FM more susceptible to noise. To counter the unwanted effects of noise, we use pre-emphasis to boost the high frequencies in the message before modulation (using a differentiating filter with |H(f)| ∝ f). At the receiver, demodulation is followed by de-emphasis (using a lowpass filter) to recover the original message.
[Figure 10.17: Spectra of standard AM, narrowband PM, and narrowband FM; the FM sidebands droop away from ±fC.]
The spectrum of xM(t) consists of a carrier at fC of strength AC and two components at fC ± f0 (both of strength 0.5βAC), whose phase is 0° at f = fC + f0 and 180° at f = fC − f0. Thus, a narrowband angle-modulated signal is similar to an AM signal except for a phase of 180° in the component at f = fC − f0. Like AM, the bandwidth of the single-tone angle-modulated signal xM(t) also equals 2f0.
Since φ(t) is assumed periodic, so is e^{jφ(t)}, which can be described by its Fourier series:

e^{jφ(t)} = Σ_{k=−∞}^{∞} X[k]e^{j2πkf0 t}    X[k] = (1/T)∫T e^{j[φ(t) − 2πkf0 t]} dt   (10.34)
The spectrum of xM(t) shows components at the frequencies fC ± kf0. Figure 10.18 shows some signals used for frequency modulation.
Triangular wave: X[k] = (1/α)[C(k₊) − C(k₋)]cos(0.5πα²k) + (1/α)[S(k₊) − S(k₋)]sin(0.5πα²k), where C(·) and S(·) denote the Fresnel integrals and we define α² = 2β, k₊ = k + 0.5, and k₋ = k − 0.5.
Figure 10.19 shows the spectra, centered about fC, of the resulting FM signals for each modulating signal (with β = 10). Except for square-wave modulation, the coefficients |X[k]| are seen to be negligible for |k| > β, and most of the power is concentrated in the components in the range fC ± βf0.
[Figure 10.19: Spectra of FM signals (β = 10) for (a) sine-wave, (b) square-wave, (c) sawtooth-wave, and (d) triangular-wave modulation, plotted against the harmonic index k (f = fC ± kf0).]
For narrowband FM (β < 0.25 or so), the bandwidth is approximately 2B, and more than 98% of the power lies in this frequency band for single-tone modulation. For larger values of β, Carson's rule underestimates the bandwidth, and for β > 2, it is sometimes modified to BWB = 2B(2 + β). The second method is a numerical estimate based on applying Parseval's theorem to the signal e^{jφ(t)} and its spectral coefficients X[k].
P = (1/T)∫T |e^{jφ(t)}|² dt = Σ_{k=−∞}^{∞} |X[k]|² = 1   (10.38)

Since |e^{jφ(t)}| = 1, the total power equals 1. The sum of |X[k]|² over a finite number of harmonics |k| ≤ N equals the fraction of the total power in the frequency band fC ± Nf0. If we define BWB as the range over which a large fraction (say, 98%) of the total power is concentrated, we can estimate N and compute the bandwidth as BWB ≈ 2Nf0.
(b) An FM signal is generated using a single-tone sinusoidal modulating signal at a frequency f0, with β = 6. What fraction of the total power is contained in the frequency range fC ± kf0, |k| ≤ 4?
For single-tone modulation, the Fourier series coefficients are |X[k]| = |Jk(β)|. With β = 6, we find |X[0]| = |J0(6)| = 0.1506, |X[1]| = |J1(6)| = 0.2767, |X[2]| = |J2(6)| = 0.2429, |X[3]| = |J3(6)| = 0.1148, and |X[4]| = |J4(6)| = 0.3576. With |X[−k]| = |X[k]|, we use Parseval's relation to obtain the power as

P = Σ_{k=−4}^{4} |X[k]|² = 0.5759 W

This represents about 57.6% of the total power. In fact, it turns out that more than 99% of the total power is contained in the harmonics |k| ≤ 7, which leads to a bandwidth of BWB ≈ 2Nf0 = 14f0. This compares well with Carson's rule, which gives BWB ≈ 2f0(1 + β) = 14f0.
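These Bessel-function sums are easy to verify with scipy.special.jv:

```python
from scipy.special import jv   # Bessel functions of the first kind, J_k

beta = 6.0
P4 = sum(jv(k, beta) ** 2 for k in range(-4, 5))    # power in |k| <= 4, ~0.576
P7 = sum(jv(k, beta) ** 2 for k in range(-7, 8))    # power in |k| <= 7, > 0.99
```

The identity Σ Jk(β)² = 1 over all k mirrors the unit total power in (10.38).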
(c) An FM signal is generated using a square-wave modulating signal whose fundamental frequency is f0. What fraction of the total power is contained in the frequency range fC ± kf0, |k| ≤ 4, if β = 6?
For square-wave modulation, the Fourier series coefficients are

X[k] = 0.5 sinc[0.5(β − k)] + 0.5(−1)^k sinc[0.5(β + k)]

With β = 6, we find X[0] = 0, |X[1]| = 0.1091, X[2] = 0, |X[3]| = 0.1415, and X[4] = 0.
With |X[−k]| = |X[k]|, we use Parseval's relation to obtain the power as

P = Σ_{k=−4}^{4} |X[k]|² = 0.0638 W

This represents only about 6.4% of the total power. Even though the first few harmonics contain a very small fraction of the total power, it turns out that 97.8% of the total power is contained in the harmonics |k| ≤ 7, which leads to a bandwidth of BWB ≈ 2Nf0 = 14f0. This also compares well with Carson's rule, which gives BWB ≈ 2f0(1 + β) = 14f0.
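The same check works for the square-wave coefficients; np.sinc uses the sin(πx)/(πx) convention assumed in the formula above.

```python
import numpy as np

beta = 6
def X(k):
    # X[k] = 0.5*sinc[0.5*(beta-k)] + 0.5*(-1)^k*sinc[0.5*(beta+k)]
    return 0.5 * np.sinc(0.5 * (beta - k)) + 0.5 * (-1) ** k * np.sinc(0.5 * (beta + k))

P4 = sum(X(k) ** 2 for k in range(-4, 5))    # ~0.0638 (about 6.4% of the power)
P7 = sum(X(k) ** 2 for k in range(-7, 8))    # ~0.978
```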
The idea is to generate a signal that is proportional to the modulating message signal xS(t). This may be accomplished in several ways. One commonly used method is differentiation of the FM signal followed by envelope detection. Assuming an ideal differentiator, its output will be

xD(t) = x′FM(t) = −AC[2πfC + 2πkF xS(t)] sin[2πfC t + 2πkF∫xS(t) dt]   (10.40)

This describes a signal that is both amplitude modulated and frequency modulated. The AM portion contains the message, which can be recovered by an envelope detector, because the detector is insensitive to the FM characteristics of the signal. A practical scheme for demodulation based on this approach is illustrated in Figure 10.20.
The differentiator is preceded by a hard limiter and a bandpass filter. The limiter converts the FM signal to a square wave whose zero crossings correspond to those of the FM signal and thus contain the information about the message signal. Its function is to remove any spurious amplitude variations that may have been introduced into the FM signal during generation or transmission. The bandpass filter is centered at the carrier frequency, and the filtered waveform is thus a sinusoidal FM signal with a constant peak amplitude (constant envelope). The differentiator introduces AM variations proportional to the message, as explained previously, and the envelope detector recovers the message.
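A numerical sketch of this slope-detection idea (differentiate, then envelope-detect); the parameters are illustrative, and the envelope is taken from the analytic signal rather than a diode detector.

```python
import numpy as np
from scipy.signal import hilbert

fs, fC, kF = 500_000, 10_000, 2_000
t = np.arange(0, 0.01, 1 / fs)
xS = np.cos(2 * np.pi * 500 * t)                    # message tone
phase = 2 * np.pi * fC * t + 2 * np.pi * kF * np.cumsum(xS) / fs
x_fm = np.cos(phase)                                # unit-amplitude FM signal

xD = np.gradient(x_fm, 1 / fs)                      # (approximately) ideal differentiator
env = np.abs(hilbert(xD))                           # envelope = 2*pi*(fC + kF*xS)
rec = (env / (2 * np.pi) - fC) / kF                 # recovered message, ~xS
```

Removing the constant term 2πfC and rescaling by kF recovers the message, as in (10.40).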
Another method for demodulation of FM signals is based on the Hilbert transform (and described in
the following section). Yet another demodulation method is based on the so-called phase-locked loop (PLL)
described in Chapter 12.
10.5.1 FM Transmission
FM broadcasting was introduced in the 1930s and now includes the immensely popular stereo broadcasting (introduced in the United States of America in 1961) and (the much less frequently used) quadraphonic FM broadcasting (introduced in 1975). In the United States of America, commercial FM broadcast stations use carrier frequencies in the range 88–108 MHz, allocated by the FCC (Federal Communications Commission). The carrier frequencies are separated by 200 kHz (to allow transmission of high-fidelity audio signals), and the frequency deviation is fixed at 75 kHz. During transmission, the two stereo channels are combined to produce sum and difference signals. The difference signal amplitude-modulates a 38-kHz carrier to produce a DSB signal that is added to the sum signal. A 19-kHz pilot signal (obtained from the 38-kHz carrier by frequency division) is also added. Together, the three signals produce a stereo multiplexed signal whose spectrum X(f) is shown in Figure 10.21. This multiplexed signal modulates the carrier to generate the FM signal for transmission.
Figure 10.21 Spectrum of a stereo multiplexed signal for modulating the FM carrier: the sum signal L+R occupies 0–15 kHz, the pilot is at 19 kHz, and the DSB difference signal L−R occupies 23–53 kHz (centered at 38 kHz)
10.5.2 FM Receivers
The block diagram of a typical FM receiver is shown in Figure 10.22 and consists of an FM discriminator followed by circuits for generating either monophonic or stereo signals. For monophonic reception, the discriminator output is lowpass filtered by a filter whose bandwidth extends to 15 kHz to obtain the sum signal, which is fed to the speaker. In stereophonic reception, the discriminator output is also applied to a bandpass filter and then synchronously demodulated by a 38-kHz carrier (extracted from the pilot signal) and lowpass filtered to obtain the difference signal. Finally, the sum and difference signals are combined in a matrix to yield the left and right stereo-channel signals. In practice, pre-emphasis filters are used to combat the effect of noise during transmission (followed by de-emphasis filters at the receiver).
[Figure 10.22: The FM discriminator output feeds a 0–15 kHz lowpass filter (yielding 0.5(L+R)), a 23–53 kHz bandpass filter followed by synchronous demodulation and a 0–15 kHz lowpass filter (yielding 0.5(L−R)), and a 19-kHz pilot filter (with a frequency doubler and a stereo indicator light); a matrix combines the sum and difference signals into the left and right speaker channels.]
(d) The Hilbert transform of a periodic signal xp(t) with zero offset, described by its Fourier series, is

xp(t) = Σ_{k=1}^{∞} [ak cos(kω0 t) + bk sin(kω0 t)]    x̂p(t) = Σ_{k=1}^{∞} [ak sin(kω0 t) − bk cos(kω0 t)]

This tells us, for example, that the Hilbert transform of the folded signal x(−t) is −x̂(−t).
If xB(t) is a band-limited signal and xC(t) is a signal whose spectrum does not overlap the spectrum of xB(t), the Hilbert transform of the product x(t) = xB(t)xC(t) equals the product of xB(t) and the Hilbert transform of xC(t):

x̂(t) = xB(t)x̂C(t)   (for non-overlapping spectra)   (10.45)
(f) Let xB(t) represent a signal band-limited to B that modulates a high-frequency carrier given by xC(t) = cos(2πfC t), with fC > B, to yield the modulated signal xM(t) = xB(t)cos(2πfC t). The Hilbert transform of xM(t) then equals x̂M(t) = xB(t)sin(2πfC t).
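This product theorem is easy to verify numerically with scipy.signal.hilbert; the tone frequencies below are chosen to give whole cycles over the window, so the FFT-based transform is essentially exact.

```python
import numpy as np
from scipy.signal import hilbert

fs = 20_000
t = np.arange(0, 1, 1 / fs)
xB = np.cos(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 75 * t)  # band-limited message
fC = 500.0                                                           # carrier well above B
xM = xB * np.cos(2 * np.pi * fC * t)

xM_hat = np.imag(hilbert(xM))                 # Hilbert transform of the product
expected = xB * np.sin(2 * np.pi * fC * t)    # xB(t) times the carrier's Hilbert transform
```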
xa(t) = x(t) + jx̂(t)   (analytic signal or pre-envelope)   (10.46)

The real and imaginary parts of an analytic signal form a Hilbert transform pair. For example, the analytic signal of the real signal x(t) = cos(t) is simply

xa(t) = x(t) + jx̂(t) = cos(t) + j sin(t) = e^{jt}   (10.47)
The phase of xa (t) is, however, identical to the phase of x(t). Just as an analytic time signal corresponds to
a causal transform, an analytic transform corresponds to a causal time signal. This is the basis for finding
the causal impulse response corresponding to a given magnitude spectrum (or transfer function). We simply
set up the analytic transform and find its inverse.
The analytic signal of x(t) is also called the pre-envelope of x(t). The complex envelope of x(t) is defined, in analogy with phasors, at a given frequency fC as

xe(t) = xa(t)e^{−j2πfC t}

The complex envelope xe(t) is also an analytic signal in that its imaginary part equals the Hilbert transform of its real part:

xe(t) = xI(t) + jxQ(t) = xI(t) + jx̂I(t)   (10.50)
Note that xQ(t) = x̂I(t) is the Hilbert transform of the in-phase component. The quantities xI(t) and xQ(t) are called the in-phase and quadrature components of x(t) at the frequency fC. In analogy with phasors, we may also describe the real signal x(t) in terms of its complex envelope at the frequency fC as

x(t) = Re[xa(t)] = Re{xe(t)e^{j2πfC t}}   (10.51)

The signal x(t) may then be expressed in terms of its in-phase and quadrature components as

x(t) = Re{[xI(t) + jxQ(t)]e^{j2πfC t}} = xI(t)cos(2πfC t) − xQ(t)sin(2πfC t)   (10.52)
This has the same form as a USB SSB AM signal. Since xe(t) = xa(t)e^{−j2πfC t}, its spectrum can be found using the modulation property to give

Xe(f) = Xa(f + fC) = 2X(f + fC)u(f + fC)   (10.53)

The spectrum of xe(t) is also one-sided. If x(t) represents a bandpass signal whose spectrum is centered at fC and confined to the band (fC − B, fC + B), the spectrum Xe(f) is band-limited to B and centered at the origin.
The natural envelope of x(t) is a real quantity that represents the magnitude of xa(t) or xe(t). If the spectrum X(f) undergoes a constant phase change to X(f)e^{jα sgn(f)}, the analytic signal changes to xa(t)[cos(α) + j sin(α)], whose envelope is still |xa(t)| and is thus invariant to the phase change. Since a(t) ≥ |x(t)|, the natural envelope encloses the family of all possible signals we can generate from x(t) by changing its phase spectrum by an arbitrary constant α. At the instants where a(t) = |x(t)|, the Hilbert transform x̂(t) equals zero.
This encloses all the signals we can generate from sinc(t) by changing its phase spectrum by an arbitrary constant α.
xa(t) = xM(t) + jx̂M(t) = f(t)cos(2πfC t + θ) + jf(t)sin(2πfC t + θ) = f(t)e^{j(2πfC t + θ)}

Its complex envelope is xe(t) = xa(t)e^{−j2πfC t} = f(t)e^{jθ}. The natural envelope is a(t) = |f(t)|. The in-phase and quadrature components are xMI(t) = f(t)cos θ and xMQ(t) = f(t)sin θ.

xa(t) = xM(t) + jx̂M(t) = cos[2πfC t + φ(t)] + j sin[2πfC t + φ(t)] = e^{j[2πfC t + φ(t)]}

Its complex envelope is xe(t) = e^{jφ(t)}. Its natural envelope is a(t) = 1. Its in-phase and quadrature components are

xMI(t) = cos[φ(t)]    xMQ(t) = sin[φ(t)]
This represents the most general form of a modulated signal, describing both amplitude and phase modulation of the carrier. In particular, φ(t) = 0 for AM and a(t) = 1 for PM or FM. To demodulate xM(t), we first form its pre-envelope:

xa(t) = xM(t) + jx̂M(t)   (10.55)

If a(t) is band-limited to a frequency B < fC, then x̂M(t) = a(t)sin[2πfC t + φ(t)], and we get the complex envelope

xD(t) = xa(t)e^{−j2πfC t} = a(t)e^{jφ(t)}

The message signal corresponds to Re[xD(t)] for AM, to the phase φ(t) (proportional to xS(t)) for PM, and to dφ(t)/dt (proportional to xS(t)) for FM.
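The recipe above, form the pre-envelope, shift down to the complex envelope, then read off the envelope and phase, can be sketched with an arbitrary test signal (the envelope and phase rates here are illustrative choices well below the carrier frequency):

```python
import numpy as np
from scipy.signal import hilbert

fs, fC = 20_000, 1_000
t = np.arange(0, 1, 1 / fs)
a = 1 + 0.5 * np.cos(2 * np.pi * 5 * t)     # slowly varying envelope (AM content)
phi = 0.3 * np.sin(2 * np.pi * 8 * t)       # slowly varying phase (PM/FM content)
xM = a * np.cos(2 * np.pi * fC * t + phi)

xa = hilbert(xM)                            # pre-envelope: xM plus j times its Hilbert transform
xD = xa * np.exp(-2j * np.pi * fC * t)      # complex envelope a(t)*exp(j*phi(t))
a_rec = np.abs(xD)                          # recovered envelope
phi_rec = np.angle(xD)                      # recovered phase
```

Both the envelope and the phase are recovered essentially exactly, so a single structure demodulates AM, PM, or FM.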
CHAPTER 10 PROBLEMS
DRILL AND REINFORCEMENT
10.1 (DSBSC AM) Consider the message x(t) = 2 cos(2πf1 t) used to modulate the carrier cos(2πfC t), to generate a DSBSC AM signal xM(t).
(a) Write an expression for xM(t) and sketch its spectrum.
(b) Sketch the spectrum of the demodulated signal if demodulation is achieved by cos(2πfC t).
10.2 (DSBSC AM) A 1.5-MHz carrier is modulated by a music signal whose frequencies range from
50 Hz to 15 kHz. What is the range of frequencies over which the upper and lower sidebands extend?
10.3 (Standard AM) Consider the message x(t) = 2 cos(2πf1 t) + cos(2πf2 t) used to modulate the carrier cos(2πfC t) to generate xM(t) = [AC + x(t)]cos(2πfC t).
(a) What value of AC ensures a modulation index μ = 0.5?
(b) What is the efficiency when μ = 0.5?
(c) Sketch the spectrum of x(t) and xM(t) if f1 = 10 Hz, f2 = 20 Hz, and fC = 100 Hz.
10.4 (Standard AM) An AM station operates with a modulation index of 0.8 and transmits a total
power of 50 kW.
(a) What is the power in the transmitted carrier?
(b) What fraction of the total power resides in the message?
10.5 (Standard AM) Find the efficiency of a single-tone AM modulator with a modulation index of 0.5 if the carrier power equals 32 W.
10.6 (Synchronous Demodulation) The signal x(t) = 2 cos(2πf1 t) + cos(2πf2 t) is used to modulate the carrier cos(2πfC t), to generate xM(t) = [AC + x(t)]cos(2πfC t). The modulated signal xM(t) is synchronously demodulated using the same carrier signal.
(a) Sketch the spectrum of the demodulated signal.
(b) Sketch the spectrum of an ideal lowpass filter that can be used to recover x(t).
10.7 (Envelope Detection) The following signals are used to generate DSBSC AM signals. Which signals can be recovered using envelope detection?
(a) x(t) = cos(2πf1 t) (b) x(t) = 2 cos(2πf1 t) + cos(2πf2 t)
(c) x(t) = 2 + cos(2πf1 t) (d) x(t) = 2 + 2 cos(2πf1 t) + cos(2πf2 t)
10.8 (SSB AM) The message x(t) = 2 cos(2πf1 t) + cos(2πf2 t) modulates the carrier cos(2πfC t), to generate xM(t) = x(t)cos(2πfC t). Find expressions for the upper and lower sideband components of xM(t).
10.9 (Instantaneous Frequency) Find the instantaneous frequency of the following signals.
(a) x(t) = cos(10πt + π/4) (b) x(t) = cos(10πt + 2πt)
(c) x(t) = cos(10πt + 2πt²) (d) x(t) = cos[10πt + 2 sin(2πt)]
10.11 (FM) The peak deviation in commercial FM is 75 kHz. The frequency of the modulating signal varies between 50 Hz and 15 kHz. What is the permissible range for the modulation index?
10.12 (FM) A signal, band-limited to 15 kHz, is to be transmitted using FM. The peak deviation is 30 kHz. What bandwidth is required? Use Carson's rule or its modification as appropriate.
10.14 (Amplitude Modulation) A message signal x(t) containing three unit-amplitude cosines at 100, 200, and 300 Hz is used to modulate a carrier at fC = 10 kHz to generate the AM signals x1(t) and x2(t). Sketch the spectrum of each AM signal and compute the power in the sidebands as a fraction of the total power.
(a) x1(t) = [1 + 0.2x(t)]cos(2πfC t)
(b) x2(t) = x(t)cos(2πfC t)
10.15 (Standard AM) The periodic signal x(t) = 4 + 4 cos(2πt) + 2 cos(4πt) modulates the carrier cos(40πt) to generate the standard AM signal xAM(t) = [20 + x(t)]cos(40πt). What is the modulation index? What is the power in the AM signal? What fraction of the total signal power resides in the sidebands?
10.17 (Standard AM) Consider a periodic signal x(t) with |x(t)| < 1, zero dc offset, band-limited to B Hz, and with power P. This signal modulates a high-frequency carrier A cos(2πfC t), fC ≫ B, to generate the standard AM signal xAM(t) = A[1 + x(t)]cos(2πfC t). What is the power in the AM signal as a function of P?
10.18 (Synchronous Detection) Synchronous detection requires both phase and frequency coherence. Let the signal x(t) = 2 cos(2πf0 t) be used to modulate the carrier cos(2πfC t) to generate a DSBSC AM signal xM(t).
(a) What is the spectrum of the demodulated signal if the demodulating carrier is cos(2πfC t + θ)?
(b) Are there any values of θ for which it is impossible to recover the message?
(c) Let the demodulating carrier have a frequency offset to become cos[2π(fC + Δf)t]. Find the spectrum of the demodulated signal. Do we recover the message signal in this case?
10.19 (Synchronous Detection) Let x(t) = cos(100πt) be used to modulate the carrier cos(2000πt) to generate a DSBSC AM signal xM(t). The modulated signal xM(t) is demodulated using the signal cos[2π(1000 + Δf)t + θ]. For each case, explain whether (and how) the original signal can be recovered from the demodulated signal.
(a) Δf = 0, θ = 0
(b) Δf = 0, θ = 0.25π
(c) Δf = 0, θ = 0.5π
(d) Δf = 10 Hz, θ = 0
(e) Δf = 10 Hz, θ = 0.5π
10.21 (SSB AM) The message x(t) = sinc²(t) modulates the carrier cos(10πt) to generate the modulated signal xM(t) = x(t)cos(2πfC t).
(a) Sketch the spectrum of x(t), the carrier, and xM (t).
(b) Sketch the spectrum of the LSB SSB signal.
(c) The signal in part (b) is synchronously demodulated. Sketch the spectrum of the demodulated
signal.
10.22 (Wideband FM) Consider an FM signal with β = 5. Find the power contained in the harmonics fC ± kf0, k = 0, 1, 2, 3, for the following:
10.23 (The Hilbert Transform) Unlike most other transforms, the Hilbert transform belongs to the same domain as the signal transformed. The Hilbert transform shifts the phase of x(t) by −π/2. Find the Hilbert transforms of the following:
(a) x(t) = cos(2πf t) (b) x(t) = sin(2πf t) (c) x(t) = cos(2πf t) + sin(2πf t)
(d) x(t) = e^{j2πf t} (e) x(t) = δ(t) (f) x(t) = 1/(πt)
10.24 (Properties of the Hilbert Transform) Using some of the results of the preceding problem as examples, or otherwise, verify the following properties of the Hilbert transform.
(a) The magnitude spectra of x(t) and x̂(t) are identical.
(b) The Hilbert transform of x(t) taken twice returns −x(t).
(c) The Hilbert transform of an even function is odd, and vice versa.
(d) The Hilbert transform of x(αt) is sgn(α)x̂(αt).
(e) The Hilbert transform of x(t) ⋆ y(t) equals x(t) ⋆ ŷ(t) or x̂(t) ⋆ y(t).
(f) The Hilbert transform of a real signal is also real.
10.26 (Modulation) A signal xB(t) band-limited to B Hz modulates a carrier xC(t) = cos(2πfC t). If fC > B, show that the Hilbert transform of the modulated signal xM(t) = xB(t)cos(2πfC t) is given by the expression x̂M(t) = xB(t)sin(2πfC t).
10.27 (Amplitude Modulation) The signal x(t) = cos(20t) modulates a carrier at fC = 100 Hz. Obtain
and plot the modulated signal for each modulation scheme listed. For which cases does the message
signal correspond to the envelope of the modulated signal? For which cases can the message signal be
recovered by envelope detection? For which cases can the message signal be recovered by synchronous
detection?
(a) DSBSC AM
(b) Standard AM with a modulation index of 0.5.
(c) Standard AM with a modulation index of 0.8.
(d) Standard AM with a modulation index of 1.2.
(e) SSB AM with only the upper sideband present.
10.28 (Demodulation of AM Signals) The signal x(t) = cos(20t) modulates a carrier at fC = 100 Hz.
Generate the modulated signals as in Problem 10.27. To demodulate the signals, multiply each
modulated signal by a carrier signal whose frequency and phase are identical to the modulated carrier
and plot the resulting signal. How would you extract the message from the demodulated signal?
10.29 (Amplitude Modulation) The phase of the carrier is important in demodulating AM signals by
synchronous detection. Generate the modulated signals as in Problem 10.27. Demodulate each signal,
using the following carrier signals. For which cases can you recover the message exactly? Are there
any cases for which the recovered message signal equals zero? Explain.
(a) Frequency f0, same phase as the modulating carrier
(b) Frequency f0, phase offset of 5°
(c) Frequency f0, phase offset of 30°
(d) Frequency f0, phase offset of 90°
(e) Frequency 1.1f0, phase offset of 0°
Chapter 11
We also define the two-sided Laplace transform by allowing the lower limit to be −∞. However, this form finds limited use and is not pursued here.
The complex quantity s = σ + jω generalizes the concept of frequency to the complex domain. It is often referred to as the complex frequency, with ω measured in radians/second, and σ in nepers/second. The relation between x(t) and X(s) is denoted symbolically by the transform pair x(t) ⇔ X(s). The double arrow suggests a one-to-one correspondence between the signal x(t) and its Laplace transform X(s).
The Laplace transform provides unique correspondence only for causal signals, since the Laplace transforms of x(t) and its causal version x(t)u(t) are clearly identical due to the lower limit of 0. To include signals such as δ(t), which are discontinuous at the origin, the lower limit is chosen as 0− whenever appropriate.
330
11.1 The Laplace Transform 331
11.1.1 Convergence
The Laplace transform is an integral operation. It exists if x(t)e^{-σt} is absolutely integrable. Clearly, only
certain choices of σ will make this happen. The range of σ that ensures existence defines the region of
convergence (ROC) of the Laplace transform. For example, the Laplace transform X(s) of the signal
x(t) = e^{-2t}u(t) exists only if σ > -2, which ensures that the product is a decaying exponential. Thus, σ > -2
defines the region of convergence of the transform X(s). Since we shall deal only with causal signals (whose
ROC is σ > σ0), we avoid explicit mention of the ROC (except in Example 11.1).
The choice 0- for the lower limit is due to the fact that δ(t) is discontinuous at t = 0. The ROC is the
entire s-plane.
(b) (Unit Step) From the defining relation, the transform of x(t) = u(t) is given by
X(s) = ∫_0^∞ u(t)e^{-st} dt = ∫_0^∞ e^{-st} dt = 1/s
Since exp(-st) = exp[-(σ + jω)t], which equals zero at the upper limit only if σ > 0, the region of
convergence of X(s) is σ > 0.
(e) (Switched Cosine) With x(t) = cos(ωt)u(t), we use Euler's relation to get
X(s) = ∫_0^∞ cos(ωt)e^{-st} dt = (1/2)∫_0^∞ [e^{-(s+jω)t} + e^{-(s-jω)t}] dt = s/(s² + ω²)
The region of convergence of X(s) is σ > 0.
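The two transforms above are easy to confirm numerically. The sketch below (Python, illustrative only, not part of the book's MATLAB-based ADSP routines) approximates the defining integral by the trapezoidal rule, truncating the upper limit where the integrand is negligible:

```python
import math

def laplace(x, s, T=40.0, n=50_000):
    """Trapezoidal approximation of X(s) = integral of x(t)e^{-st}, t from 0 to T."""
    dt = T / n
    total = 0.5 * (x(0.0) + x(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * dt
        total += x(t) * math.exp(-s * t)
    return total * dt

s = 1.0
step = laplace(lambda t: 1.0, s)              # u(t)        <->  1/s
cosw = laplace(lambda t: math.cos(2 * t), s)  # cos(2t)u(t) <->  s/(s^2+4)
print(step, cosw)  # close to 1/s = 1.0 and s/(s^2+4) = 0.2
```

Both values agree with the closed forms to within the quadrature error; the numerical check is meaningful only for s inside the region of convergence (σ > 0 here).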
16. Final value: x(t)|_{t→∞} = lim_{s→0} [sX(s)] (if the poles of X(s) lie in the LHP)
334 Chapter 11 The Laplace Transform
The times-sin and times-cos properties form a direct extension if the sines and cosines are expressed
as exponentials using Euler's relation:
∫_0^∞ x(t)cos(ωt)e^{-st} dt = 0.5∫_0^∞ x(t)[e^{jωt} + e^{-jωt}]e^{-st} dt = 0.5[X(s + jω) + X(s - jω)]   (11.3)
The quantity e^{-s} may be regarded as a unit-shift operator that shifts or delays the time function by 1 unit.
Similarly, e^{-sτ} contributes a delay of τ units to the time signal.
The derivative property follows from the defining relation, using integration by parts:
∫_{0-}^∞ x′(t)e^{-st} dt = x(t)e^{-st} |_{0-}^∞ + s∫_{0-}^∞ x(t)e^{-st} dt = -x(0-) + sX(s)   (11.6)
The times-t and derivative properties are duals. Multiplication by the variable (t or s) in one domain results
in a derivative in the other.
The convolution property involves changing the order of integration and a change of variable (t - λ = τ):
∫_0^∞ [∫_0^∞ x(t - λ)u(t - λ)h(λ) dλ] e^{-st} dt = ∫_0^∞ [∫_0^∞ x(t - λ)e^{-s(t-λ)} dt] h(λ)e^{-sλ} dλ = X(s)H(s)
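The property can be spot-checked with the pair x(t) = e^{-t}u(t) and h(t) = e^{-2t}u(t), whose convolution e^{-t} - e^{-2t} is known in closed form (a Python sketch with the transforms entered by hand):

```python
# L{x*h} should equal X(s)H(s); compare both sides at sample values of s.
for s in (0.5, 1.0, 3.0 + 2.0j):
    X = 1 / (s + 1)                 # L{e^{-t}u(t)}
    H = 1 / (s + 2)                 # L{e^{-2t}u(t)}
    Y = 1 / (s + 1) - 1 / (s + 2)   # L{e^{-t} - e^{-2t}} (the convolution)
    assert abs(Y - X * H) < 1e-12
print("convolution property confirmed at all sample points")
```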
(Another way) Start with te^{-t}u(t) ⇔ 1/(s + 1)², as in part (a).
Then, rewrite x(t) as (1/3)(3t)e^{-3t}u(t) and use the time-scaling property to get
X(s) = (1/3) · (1/3)/(s/3 + 1)² = 1/(s + 3)²
X(s) = d²/ds² [e^{-s}/s] = d/ds [-e^{-s}/s - e^{-s}/s²] = e^{-s} [1/s + 2/s² + 2/s³]
Comment: Note how we expressed x(t) by terms containing the argument t - 1 of the shifted
step to ensure correct use of the time-shift property.
(Another way) Start with the known pair y(t) = e^{-3t}u(t) ⇔ Y(s) = 1/(s + 3).
Then, use the times-cos property cos(ωt)y(t) ⇔ 0.5[Y(s + jω) + Y(s - jω)] to give
X(s) = 0.5 [1/(s + j2 + 3) + 1/(s - j2 + 3)] = (s + 3)/[(s + 3)² + 4]
(e) (Signals Described Graphically) Find the transform of the signals shown in Figure E11.2E.
[Figure E11.2E The signals for Example 11.2(e): four unit-height, unit-width signals w(t), x(t), h(t), and y(t); h(t) is a half-cycle sine and y(t) a decaying exponential e^{-2t}.]
Y(s) = W(s + 2) = (1 - e^{-(s+2)})/(s + 2)
4. The signal h(t) may be described as h(t) = sin(πt)w(t). Using the times-sin property and the
identity e^{-jπ} = -1 (to simplify the result), we obtain
H(s) = j0.5 [(1 - e^{-(s+jπ)})/(s + jπ) - (1 - e^{-(s-jπ)})/(s - jπ)] = π(1 + e^{-s})/(s² + π²)
An easier way is to write h(t) = sin(πt)u(t) + sin[π(t - 1)]u(t - 1) and use the time-shift property
to get
H(s) = π/(s² + π²) + πe^{-s}/(s² + π²) = π(1 + e^{-s})/(s² + π²)
11.2 Properties of the Laplace Transform 337
(b) We can write H(s) = 1/(s + 1) + 1/(s + 2).
The impulse response thus equals h(t) = e^{-t}u(t) + e^{-2t}u(t).
(c) Since H(s) = Y(s)/X(s), we can also write (s² + 3s + 2)Y(s) = (2s + 3)X(s).
This leads to y″(t) + 3y′(t) + 2y(t) = 2x′(t) + 3x(t).
Note that H(s), h(t), and the differential equation are three ways of describing the same system.
11.3 Poles and Zeros of the Transfer Function 339
Note that X(s) will in general contain terms with real constants and pairs of terms with complex conjugate
residues, and can be written as
X(s) = K1/(s + p1) + K2/(s + p2) + ··· + A1/(s + r1) + A1*/(s + r1*) + A2/(s + r2) + A2*/(s + r2*) + ···   (11.19)
For a real root, the residue (coefficient) will also be real. For each pair of complex conjugate roots, the
residues will also be complex conjugates, and we thus need compute only one of these.
11.4 The Inverse Laplace Transform 341
X(s) = P(s)/[(s + p1)(s + r)^k] = K1/(s + p1) + A0/(s + r)^k + A1/(s + r)^{k-1} + ··· + A_{k-1}/(s + r)   (11.20)
Observe that the constants Aj ascend in index j from 0 to k - 1, whereas the denominators (s + r)^m descend in
power m from k to 1. The coefficient K1 for the non-repeated term K1/(s + p1) is found by evaluating (s + p1)X(s)
at s = -p1:
K1 = (s + p1)X(s) |_{s=-p1}   (11.21)
The coefficients Aj for the repeated roots require (s + r)^k X(s) (and its derivatives) for their evaluation. We
successively find
A0 = (s + r)^k X(s) |_{s=-r}
A1 = d/ds [(s + r)^k X(s)] |_{s=-r}
A2 = (1/2!) d²/ds² [(s + r)^k X(s)] |_{s=-r}
An = (1/n!) dⁿ/dsⁿ [(s + r)^k X(s)] |_{s=-r}
Even though this process allows us to find the coefficients independently of each other, the algebra in finding
the derivatives can become tedious if the multiplicity k of the roots exceeds 2 or 3.
2. Terms corresponding to real factors will have the form Ke^{-pt}u(t).
3. Terms corresponding to each complex conjugate pair of roots will be of the form Ae^{-rt} + A*e^{-r*t}.
Using Euler's relation, this can be reduced to a real term in one of the forms listed in Table 11.3.
With 1.5 + j0.5 = 1.581∠18.4° = 1.581∠0.1024π, Table 11.3 also yields x(t) as
x(t) = u(t) - 2e^{-t}u(t) + 3.162e^{-2t}cos(2t - 0.1024π)u(t)
(b) (Repeated Poles) Let X(s) = 4/[(s + 1)(s + 2)³]. Its partial fraction expansion is
X(s) = K1/(s + 1) + A0/(s + 2)³ + A1/(s + 2)² + A2/(s + 2)
We compute K1 = (s + 1)X(s) |_{s=-1} = 4/(s + 2)³ |_{s=-1} = 4.
Since (s + 2)³X(s) = 4/(s + 1), we also successively compute
A0 = 4/(s + 1) |_{s=-2} = -4
A1 = d/ds [4/(s + 1)] |_{s=-2} = -4/(s + 1)² |_{s=-2} = -4
A2 = (1/2!) d²/ds² [4/(s + 1)] |_{s=-2} = 4/(s + 1)³ |_{s=-2} = -4
This gives the result
X(s) = 4/(s + 1) - 4/(s + 2)³ - 4/(s + 2)² - 4/(s + 2)
From Table 11.3, x(t) = 4e^{-t}u(t) - 2t²e^{-2t}u(t) - 4te^{-2t}u(t) - 4e^{-2t}u(t).
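A partial fraction expansion of this kind is easy to validate by sampling both sides away from the poles, since two equal rational functions must agree everywhere. A small Python check (illustrative only; in the book's MATLAB environment, the built-in residue function performs the expansion directly):

```python
# Cross-check of 4/((s+1)(s+2)^3) = 4/(s+1) - 4/(s+2)^3 - 4/(s+2)^2 - 4/(s+2)
def X(s):
    return 4 / ((s + 1) * (s + 2) ** 3)

def pfe(s):
    return 4 / (s + 1) - 4 / (s + 2) ** 3 - 4 / (s + 2) ** 2 - 4 / (s + 2)

# sample real and complex test points away from the poles s = -1 and s = -2
for s in (0.0, 1.5, -0.5, 2.0 + 1.0j):
    assert abs(X(s) - pfe(s)) < 1e-12
print("expansion agrees with X(s) at all test points")
```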
(b) (The Effect of Delay) Let X(s) = (se^{-2s} + 1)/[(s + 1)(s + 2)]. We split this into two terms:
X(s) = se^{-2s}/[(s + 1)(s + 2)] + 1/[(s + 1)(s + 2)] = e^{-2s}X1(s) + X2(s)
We find the partial fractions of X1(s) and X2(s) to get
X1(s) = s/[(s + 1)(s + 2)] = 2/(s + 2) - 1/(s + 1)
X2(s) = 1/[(s + 1)(s + 2)] = 1/(s + 1) - 1/(s + 2)
Thus, x1(t) = 2e^{-2t}u(t) - e^{-t}u(t) and x2(t) = e^{-t}u(t) - e^{-2t}u(t).
From the time-shift property, the inverse transform of e^{-2s}X1(s) equals x1(t - 2), and we get
x(t) = x1(t - 2) + x2(t) = [2e^{-2(t-2)} - e^{-(t-2)}]u(t - 2) + (e^{-t} - e^{-2t})u(t)
A system with poles in the left half-plane and simple, finite poles on the jω-axis is also called wide-sense
stable, or marginally stable.
A polynomial with real coefficients whose roots lie entirely in the left half of the s-plane is called
strictly Hurwitz. If it also has simple jω-axis roots, it is called Hurwitz. The coefficients of
Hurwitz polynomials are all nonzero and of the same sign. A formal method called the Routh test
allows us to check whether a polynomial is Hurwitz (without having to find its roots) and also gives us the
number (but not the location) of right half-plane roots.
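As an illustration of the idea (a minimal Python sketch, not the book's ADSP software), the Routh array can be built row by row and the sign changes in its first column counted. This basic version assumes no zero ever appears in the first column, so it cannot classify polynomials with jω-axis roots:

```python
def routh_rhp_count(coeffs):
    """Count right half-plane roots via the Routh array.
    coeffs: real coefficients in descending powers of s,
    e.g. [1, 7, 18, 20, 8] for s^4 + 7s^3 + 18s^2 + 20s + 8.
    Basic sketch: assumes no zero appears in the first column."""
    n = len(coeffs)                       # degree + 1 = number of Routh rows
    row1 = [float(c) for c in coeffs[0::2]]
    row2 = [float(c) for c in coeffs[1::2]]
    while len(row2) < len(row1):          # pad the shorter row with zeros
        row2.append(0.0)
    rows = [row1, row2]
    for _ in range(n - 2):                # build the remaining rows
        top, bot = rows[-2], rows[-1]
        new = [(bot[0] * top[j + 1] - top[0] * bot[j + 1]) / bot[0]
               for j in range(len(bot) - 1)]
        new.append(0.0)                   # keep all rows the same length
        rows.append(new)
    first_col = [r[0] for r in rows]
    # each sign change in the first column marks one RHP root
    return sum(1 for a, b in zip(first_col, first_col[1:]) if a * b < 0)

print(routh_rhp_count([1, 7, 18, 20, 8]))  # strictly Hurwitz: 0 RHP roots
print(routh_rhp_count([1, 4, 1, -6]))      # (s-1)(s+2)(s+3): 1 RHP root
```

A polynomial is strictly Hurwitz exactly when this count is zero and no first-column entry vanishes.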
Asymptotic Stability
If the zero-input response of a system decays to zero with time, we classify the system as being asymptotically
stable. Such a system has a transfer function whose poles lie entirely within the LHP, and this also
implies BIBO stability. BIBO stability and asymptotic stability are often used interchangeably.
Any passive linear system that includes one or more resistive elements is always asymptotically stable.
The resistive elements dissipate energy and allow the system to relax to zero state no matter what the initial
state.
Liapunov Stability
If the zero-input response always remains bounded (it may or may not approach zero), we classify the system
as being stable in the sense of Liapunov. In addition to poles in the LHP, the transfer function of such
a system may also contain simple (not repeated) poles on the jω-axis. Asymptotic stability is a special case
of Liapunov stability.
346 Chapter 11 The Laplace Transform
A passive system with resistors is both asymptotically and Liapunov stable. A passive system with only
lossless elements (e.g., L and C) is Liapunov stable but not asymptotically stable. The energy in such a
system due to a nonzero initial state remains constant. It cannot increase because the system is passive. It
cannot be dissipated because the system is lossless.
Note that this theorem predicts the initial value at t = 0+. If X(s) is not strictly proper, we get correct results
by using the strictly proper part of X(s) obtained after long division, since the remaining part corresponds
to impulses and their derivatives (which occur at t = 0 and are zero at t = 0+). If X(s) or its strictly
proper part equals P(s)/Q(s), where P(s) is of degree M and Q(s) is of degree N, then x(0+) = 0 if N - M > 1.
If N - M = 1, x(0+) equals the ratio of the leading coefficients of P(s) and Q(s). In other words, x(0+) is
nonzero only if N - M = 1.
The final value theorem predicts the value of the time signal x(t) as t → ∞ from its transform X(s),
and reads
x(∞) = lim_{s→0} [sX(s)]   (final value theorem)   (11.23)
Note that this theorem applies only if the poles of X(s) = P(s)/Q(s), with common factors in P(s) and
Q(s) canceled, lie in the left half of the s-plane. The only jω-axis pole permitted is a simple pole at s = 0.
We should expect x(∞) = 0 if all poles of X(s) lie to the left of the jω-axis (because x(t) contains only
damped exponentials). We should also expect x(∞) to be constant if there is a single pole at s = 0 (which
corresponds to a step function). Finally, x(∞) will be indeterminate if there are conjugate pole pairs on the
jω-axis (because x(t) includes sinusoids whose final value is indeterminate). The final value theorem yields
erroneous results if used in this case.
(a) X(s) = 12(s + 1)/[s(s² + 4)].
We find
Initial value: x(0+) = lim_{s→∞} sX(s) = 0, since N - M = 2 for X(s).
Final value: the theorem does not apply, since X(s) has a conjugate pole pair on the jω-axis (at s = ±j2).
(b) X(s) = (2s + 6)/[s(4s + 2)].
We find
x(0+) = lim_{s→∞} sX(s) = lim_{s→∞} (2s + 6)/(4s + 2) = lim_{s→∞} (2 + 6/s)/(4 + 2/s) = 0.5
x(∞) = lim_{s→0} sX(s) = lim_{s→0} (2s + 6)/(4s + 2) = 3
(c) X(s) = (4s + 5)/(2s + 1).
Since this is not strictly proper, we use long division to rewrite it as
X(s) = 2 + 3/(2s + 1) = 2 + Y(s)
The strictly proper part Y(s) gives the initial value of x(t) as
x(0+) = lim_{s→∞} sY(s) = lim_{s→∞} 3s/(2s + 1) = 1.5
For the final value, we use X(s) directly to obtain
x(∞) = lim_{s→0} sX(s) = lim_{s→0} (4s² + 5s)/(2s + 1) = 0
conditions the total response equals the sum of the zero-state response (due only to the input) and the
zero-input response (due only to the initial conditions). These components can be easily identified using
Laplace transforms.
A1 = d/ds [(3s² + 19s + 30)/(s + 1)] |_{s=-2} = -11
(b) (Zero-State Response) For the zero-state response, we assume zero initial conditions to obtain
(s² + 3s + 2)Yzs(s) = 4/(s + 2)
This gives
Yzs(s) = 4/[(s + 2)(s² + 3s + 2)] = 4/(s + 1) - 4/(s + 2)² - 4/(s + 2)
Inverse transformation gives yzs(t) = (4e^{-t} - 4te^{-2t} - 4e^{-2t})u(t).
11.6 The Laplace Transform and System Analysis 349
(c) (Zero-Input Response) For the zero-input response, we assume zero input to obtain
(s² + 3s + 2)Yzi(s) = 3s + 13    Yzi(s) = (3s + 13)/(s² + 3s + 2) = 10/(s + 1) - 7/(s + 2)
Upon inverse transformation, yzi(t) = (10e^{-t} - 7e^{-2t})u(t). The total response equals
y(t) = yzs(t) + yzi(t) = (14e^{-t} - 4te^{-2t} - 11e^{-2t})u(t)
This matches the result found from the direct solution.
(b) (Zero-Input Response) For the zero-input response, write the transfer function H(s) = Y(s)/X(s) as
(s² + 3s + 2)Y(s) = X(s) to obtain the system differential equation y″(t) + 3y′(t) + 2y(t) = x(t).
(c) (Total Response) The total response is the sum of the zero-state response and the zero-input
response. Thus,
y(t) = yzs(t) + yzi(t) = (14e^{-t} - 4te^{-2t} - 11e^{-2t})u(t)
[Figure 11.2 Modeling inductors and capacitors with nonzero initial conditions: in the s-domain, an inductor of L henrys with initial current i(0) becomes an impedance of sL ohms in series with a voltage source Li(0), or in parallel with a current source i(0)/s; a capacitor of C farads with initial voltage v(0) becomes an impedance of 1/sC ohms in series with a voltage source v(0)/s, or in parallel with a current source Cv(0).]
[Figure E11.12A Circuit for Example 11.12(a): an RL circuit (R = 2 Ω, L = 4 H) shown in the time domain and the s-domain.]
The transfer function and impulse response for each of the labeled outputs is found as
HR(s) = VR(s)/V(s) = 2/(4s + 2) = 0.5/(s + 0.5)    hR(t) = 0.5e^{-0.5t}u(t)
HL(s) = VL(s)/V(s) = 4s/(4s + 2) = 1 - 0.5/(s + 0.5)    hL(t) = δ(t) - 0.5e^{-0.5t}u(t)
HI(s) = I(s)/V(s) = 1/(4s + 2) = 0.25/(s + 0.5)    hI(t) = 0.25e^{-0.5t}u(t)
The system transfer function depends on what is specified as the output.
(b) (The Effect of Initial Conditions) Consider the RL circuit and its s-domain versions in the absence
of initial conditions and with the initial condition i(0) = 1 A, as shown in Figure E11.12B.
[Figure E11.12B Circuit for Example 11.12(b): the source v(t) drives R = 2 Ω in series with L = 4 H, with output vL(t) across the inductor; the s-domain versions use the impedance 4s, without (i(0) = 0) and with (i(0) = 1 A) a source of 1/s accounting for the initial current.]
1. With zero initial conditions,
Vzs(s) = V(s)HL(s) = s/[(s + 1)(s + 0.5)] = 2/(s + 1) - 1/(s + 0.5)
The zero-state response then equals vzs(t) = 2e^{-t}u(t) - e^{-0.5t}u(t).
2. With the initial condition i(0) = 1 A, we must include the effect of the initial conditions and
transform the circuit as shown. A node equation gives
[VL(s) - V(s)]/2 + VL(s)/4s + 1/s = 0    or    [1/2 + 1/4s] VL(s) = V(s)/2 - 1/s
Substituting V(s) = 1/(s + 1) and solving for VL(s), we get
VL(s) = -(s + 2)/[(s + 1)(s + 0.5)] = 2/(s + 1) - 3/(s + 0.5)
3. Since the zero-state response is vzs(t) = 2e^{-t}u(t) - e^{-0.5t}u(t), the initial condition is responsible
for the additional term -2e^{-0.5t}u(t), which corresponds to the zero-input response vzi(t).
We could also compute the zero-input response vzi(t) separately by short-circuiting the input and
finding Vzi(s) from the resulting node equation:
Vzi(s) = -2/(s + 0.5)    vzi(t) = -2e^{-0.5t}u(t)
2. x(t) = 8cos(2t) + 16sin(2t): Both terms are at the same frequency, ω = 2 rad/s.
Since H(2) = 0.3536∠45°, each input term is affected by this transfer function to give
yss(t) = 8(0.3536)cos(2t + 45°) + 16(0.3536)sin(2t + 45°) = 2.8284cos(2t + 45°) + 5.6569sin(2t + 45°)
(b) Find the forced response of the system H(s) = s²/(s² + 3s + 2) to the input x(t) = 3e^{-t}sin(t - 75°)u(t).
We recognize the complex input frequency as s0 = -1 + j and evaluate H(s) at s = s0 = -1 + j:
H(s) |_{s=-1+j} = (-1 + j)²/[(-1 + j)² + 3(-1 + j) + 2] = 1.4142∠135°
The forced response thus equals
yF(t) = 3(1.4142)e^{-t}sin(t - 75° + 135°)u(t) = 4.2426e^{-t}sin(t + 60°)u(t)
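The complex evaluation in this example is mechanical and easy to reproduce (a Python sketch using the standard cmath module):

```python
import cmath, math

s0 = complex(-1.0, 1.0)                  # complex input frequency s0 = -1 + j
H = s0 ** 2 / (s0 ** 2 + 3 * s0 + 2)     # H(s) evaluated at s = s0
mag = abs(H)
ph = math.degrees(cmath.phase(H))
print(mag, ph)  # about 1.4142 and 135.0 degrees
```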
Here, pk are the poles of the transfer function H(s) and Ak are constants. The steady-state response yss(t)
for the first period (0, T) is then
yss(t) = y1(t) + yN(t),  0 ≤ t ≤ T   (11.31)
The constants Ak in yss(t) may be evaluated by letting yss(0) = yss(T) (which ensures periodicity of yss(t))
or by evaluating Ak = (s - pk)Y(s)|_{s=pk}, where Y(s) = X(s)H(s) and X(s) is the Laplace transform of the
switched periodic signal xp(t)u(t). Both approaches are illustrated in the following example.
The steady-state response for the first period (0, 2) then equals
yss(t) = y1(t) + Ke^{-t}u(t) = (1 - e^{-t})u(t) - [1 - e^{-(t-1)}]u(t - 1) + [e^{-t}/(1 + e)]u(t)
We can also express yss(t) by intervals as
yss(t) = 1 - e^{-(t-1)}/(1 + e),  0 ≤ t ≤ 1
yss(t) = e^{-(t-2)}/(1 + e),  1 ≤ t ≤ 2
By the way, we can also find yss(t) using periodic convolution (see Example 6.12).
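Two consistency properties of this result, periodicity (yss(0) = yss(T)) and continuity at t = 1, can be checked directly from the piecewise form (Python sketch):

```python
import math

e = math.e

def yss(t):
    # piecewise steady-state response over the first period (0, 2)
    if 0 <= t <= 1:
        return 1 - math.exp(-(t - 1)) / (1 + e)
    return math.exp(-(t - 2)) / (1 + e)

print(yss(0.0), yss(2.0))               # both equal 1/(1+e)
print(yss(1.0), math.exp(1) / (1 + e))  # the two branches meet at t = 1
```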
The concentration following the Nth dose and prior to the next dose is
c(t) = c0[e^{-t/τ} + e^{-(t-T)/τ} + e^{-(t-2T)/τ} + ··· + e^{-(t-NT)/τ}],  NT ≤ t ≤ (N + 1)T   (11.35)
As N becomes large, the infinite series sums to 1/(1 - e^{-T/τ}), and the concentration reaches a steady state
described by
c(t) = [c0/(1 - e^{-T/τ})] e^{-(t-NT)/τ},  NT ≤ t ≤ (N + 1)T   (11.37)
Immediately after the Nth dose (t = NT) and just prior to the (N + 1)th dose (t = (N + 1)T), the
concentrations are
csat = c0/(1 - e^{-T/τ})    cmin = c0e^{-T/τ}/(1 - e^{-T/τ}) = csat - c0   (11.38)
[Figure 11.3: sketch of c(t), which jumps by c0 at each dose time T, 2T, 3T, . . . , NT and in the steady state swings between csat and csat - c0.]
The response c(t) is sketched in Figure 11.3. In the steady state, the drug concentration hovers between
a saturation level of csat and a minimum level of csat - c0. Too low a saturation level may be ineffective,
while too high a saturation level may cause harmful side effects. We must thus choose the interval T and
the appropriate dose c0 carefully in order to arrive at an optimum saturation level csat for a given drug.
csat = c0/(1 - e^{-T/τ})  or  20 = 10/(1 - e^{-T/7})
This gives T = 7 ln 2 ≈ 4.85 hours. So, the safe interval between doses of 10 mg/L is about 5 hours.
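Inverting the saturation formula gives the dosing interval directly as T = -τ ln(1 - c0/csat). A quick Python check with the example's values (csat = 20 mg/L, c0 = 10 mg/L, τ = 7 h):

```python
import math

c_sat, c0, tau = 20.0, 10.0, 7.0
T = -tau * math.log(1 - c0 / c_sat)  # inverse of c_sat = c0/(1 - e^{-T/tau})
print(round(T, 2))  # 4.85 hours (= 7 ln 2)
```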
In words, the overall transfer function of a cascaded system is the product of the individual transfer functions
(assuming ideal cascading with no loading eects).
Similarly, for N systems in parallel, the overall transfer function is the algebraic sum of the N individual
transfer functions:
11.7 Connections
The Fourier transform describes a signal as a sum (integral) of weighted harmonics, or complex exponentials.
However, it cannot handle exponentially growing signals, and it cannot handle initial conditions in system
analysis. In addition, the Fourier transform of signals that are not absolutely integrable usually includes
impulses. The Laplace transform overcomes these shortcomings by redefining the Fourier transform as the
sum of exponentially weighted harmonics:
X(s) = ∫_{-∞}^{∞} x(t)e^{-(σ+j2πf)t} dt = ∫_{-∞}^{∞} x(t)e^{-st} dt   (11.41)
This is the two-sided Laplace transform. If we change the lower limit in the integration to zero, we obtain
the one-sided Laplace transform:
X(s) = ∫_0^∞ x(t)e^{-st} dt   (11.42)
It applies only to causal signals but permits the analysis of the response of systems with arbitrary initial
conditions to arbitrary inputs.
This connection allows us to relate the Laplace transform and Fourier transform of absolutely integrable
causal signals such as the decaying exponential e^{-αt}u(t), the exponentially damped ramp tⁿe^{-αt}u(t), and the
exponentially damped sinusoid e^{-αt}cos(ωt + θ)u(t).
(b) The signal x(t) = u(t) is not absolutely integrable, but e^{-σt}x(t) is absolutely integrable for σ > 0.
Since this excludes σ = 0 (the jω-axis), we find the Laplace transform of u(t) by dropping the impulsive part of
its Fourier transform
X(f) = 0.5δ(f) + 1/(j2πf)
and replacing the quantity j2πf by s, to give X(s) = 1/s.
CHAPTER 11 PROBLEMS
DRILL AND REINFORCEMENT
11.1 (The Laplace Transform and its ROC) Use the defining relation to find the Laplace transform
and its region of convergence for the following:
(a) x(t) = e^{-3t}u(t)
(b) x(t) = e^{3t}u(t)
11.4 (Properties) The Laplace transform of x(t) = e2t u(t) is X(s). Find the time signal corresponding
to the following transforms without computing X(s).
(a) Y(s) = X(2s)    (b) F(s) = X′(s)
(c) G(s) = sX(s)    (d) H(s) = sX′(s)
11.5 (Pole-Zero Patterns) Sketch the pole-zero patterns of the following systems. Which of these
describe stable systems?
(a) H(s) = (s + 1)²/(s² + 1)
(b) H(s) = s²/[(s + 2)(s² + 2s - 3)]
(c) H(s) = s²/[s(s + 1)]
(d) H(s) = 2(s² + 4)/[s(s² + 1)(s + 2)]
(e) H(s) = 16/[s²(s + 4)]
(f) H(s) = 2(s + 1)/[s(s² + 1)²]
11.6 (Initial and Final Value Theorems) Find the initial and final values for each X(s) for which the
theorems apply.
(a) X(s) = s/(s + 1)
(b) X(s) = (s + 2)²/(s + 1)
(c) X(s) = (s + 1)²/(s² + 1)
(d) X(s) = s²/[(s + 2)(s² + 2s + 2)]
(e) X(s) = 2/[s(s + 1)]
(f) X(s) = 2(s² + 1)/[s(s + 2)(s + 5)]
(g) X(s) = 16/(s² + 4)²
(h) X(s) = 2(s + 1)/[s(s² + 1)²]
Chapter 11 Problems 361
11.7 (Partial Fractions and Inverse Transforms) Find the inverse transforms of the following:
(a) H(s) = 2/[s(s + 1)]
(b) H(s) = 2s/[(s + 1)(s + 2)(s + 3)]
(c) H(s) = 4s/[(s + 3)(s + 1)²]
(d) H(s) = 4(s + 2)/[(s + 3)(s + 1)²]
(e) H(s) = 2/[(s + 2)(s² + 4s + 5)]
(f) H(s) = 4(s + 1)/[(s + 2)(s² + 2s + 2)]
(g) H(s) = 2(s² + 2)/[(s + 2)(s² + 4s + 5)]
(h) H(s) = 2/[(s + 2)(s + 1)³]
11.8 (Inverse Transforms for Nonstandard Forms) Find the time signal h(t) corresponding to each
Laplace transform H(s).
(a) H(s) = s/(s + 1)
(b) H(s) = (s + 2)²/(s + 1)
(c) H(s) = 4(s - 2e^{-s})/[(s + 1)(s + 2)]
(d) H(s) = 4(s² - e^{-s})/[(s + 1)(s + 2)]
11.9 (Transfer Function) Find the transfer function, dierential equation, and order of the following
systems and determine if the system is stable.
(a) h(t) = e^{-2t}u(t)    (b) h(t) = (1 - e^{-2t})u(t)
(c) h(t) = te^{-t}u(t)    (d) h(t) = 0.5δ(t)
(e) h(t) = δ(t) - e^{-t}u(t)    (f) h(t) = (e^{-t} + e^{-2t})u(t)
11.10 (Transfer Function) Find the transfer function and impulse response of the following systems.
(a) y″(t) + 3y′(t) + 2y(t) = 2x′(t) + x(t)
(b) y″(t) + 4y′(t) + 4y(t) = 2x′(t) + x(t)
(c) y(t) = 0.2x(t)
11.11 (System Formulation) Set up the impulse response and the system dierential equation from each
transfer function.
(a) H(s) = 3/(s + 2)
(b) H(s) = (1 + 2s + s²)/[(1 + s²)(4 + s²)]
(c) H(s) = 2/(1 + s) - 1/(2 + s)
(d) H(s) = 2s/(1 + s) - s/(2 + s)
11.12 (System Response) The transfer function of a system is H(s) = (2 + 2s)/(4 + 4s + s²). Find its response
y(t) for each input x(t).
(a) x(t) = δ(t)    (b) x(t) = 2δ(t) + δ′(t)
(c) x(t) = e^{-t}u(t)    (d) x(t) = te^{-t}u(t)
(e) x(t) = 4cos(2t)u(t)    (f) x(t) = [4cos(2t) + 4sin(2t)]u(t)
11.13 (System Analysis) Find the zero-state, zero-input, and total response of the following systems,
assuming x(t) = e^{-2t}u(t), y(0) = 1, and y′(0) = 2.
(a) y″(t) + 4y′(t) + 3y(t) = 2x′(t) + x(t)
(b) y″(t) + 4y′(t) + 4y(t) = 2x′(t) + x(t)
(c) y″(t) + 4y′(t) + 5y(t) = 2x′(t) + x(t)
11.14 (Transfer Function) Find the transfer function H(s) and impulse response h(t) of each circuit
shown in Figure P11.14. Assume that R = 1 , C = 1 F, and L = 1 H where required.
[Figure P11.14 Circuits for Problem 11.14: six circuits (Circuit 1 through Circuit 6) built from R, C, and L, each with input x(t) and output y(t).]
11.15 (Steady-State Response) The transfer function of a system is H(s) = (2 + 2s)/(4 + 4s + s²). Find the
steady-state response of this system for the following inputs.
(a) x(t) = 4u(t)    (b) x(t) = 4cos(2t)u(t)
(c) x(t) = [cos(2t) + sin(2t)]u(t)    (d) x(t) = [4cos(t) + 4sin(2t)]u(t)
11.16 (Response to Periodic Inputs) Find the steady-state response and the total response of an RC
lowpass filter with time constant = 2 for the switched periodic inputs x(t) whose one period x1 (t)
and time period T are defined as follows:
(a) x1 (t) = u(t) u(t 1), with T = 2.
(b) x1 (t) = t[u(t) u(t 1)], with T = 1.
11.18 (The Laplace Transform) Find the Laplace transform of the following signals.
(a) x(t) = cos(t - π/4)u(t)    (b) x(t) = cos(t - π/4)u(t - π/4)
(c) x(t) = cos(t)u(t - π/4)    (d) x(t) = |sin(t)|u(t)
(e) x(t) = u[sin(t)]u(t)    (f) x(t) = δ[sin(t)]u(t)
11.19 (Properties) The Laplace transform of a signal x(t) is X(s) = 4/(s + 2)². Find the Laplace transform
of the following without computing x(t).
11.20 (Properties) The Laplace transform of x(t) = e2t u(t) is X(s). Find the time signal corresponding
to the following transforms without computing X(s).
(a) Y(s) = sX′(2s)
(b) G(s) = e^{-2s}X(s)
(c) H(s) = se^{-2s}X(2s)
11.21 (Partial Fractions and Inverse Transforms) Find the partial fraction expansion for each H(s).
Then compute the time signal x(t).
(a) H(s) = 4/(s² + 2s + 2)²    (b) H(s) = 4s/(s² + 2s + 2)²
(c) H(s) = 32/(s² + 4)²    (d) H(s) = 4/[s(s + 1)²]
11.22 (Inverse Transforms for Nonstandard Forms) Find the time signal corresponding to the fol-
lowing Laplace transforms.
(a) H(s) = (s + 1)²/(s² + 1)    (b) H(s) = s³/[(s + 2)(s² + 2s + 2)]
(c) H(s) = (1 - e^{-s})/[s(1 - e^{-3s})]    (d) H(s) = (1 - e^{-s})/[s²(1 + e^{-s})]
11.23 (Initial Value Theorem) If H(s) = N (s)/D(s) is a ratio of polynomials with N (s) of degree N
and D(s) of degree D, in which of the following cases does the initial value theorem apply, and if it
does, what can we say about the initial value h(0+)?
(a) D ≤ N
(b) D > N + 1
(c) D = N + 1
11.24 (Inversion and Partial Fractions) There are several ways to set up a partial fraction expansion
and find the inverse transform. For a quadratic denominator [(s + α)² + ω²] with complex roots, we use
a linear term As + B in the numerator. For H(s) = (As + B)/[(s + α)² + ω²], we find h(t) = e^{-αt}[K1 cos(ωt) +
K2 sin(ωt)].
(a) Express the constants K1 and K2 in terms of A, B, α, and ω.
(b) Let H(s) = 2(s² + 2)/[(s + 2)(s² + 2s + 2)] = C/(s + 2) + (As + B)/[(s + α)² + ω²]. Find the constants A, B, and C by
comparing the numerator of H(s) with the assumed form and then find h(t).
(c) Extend these results to find x(t) if X(s) = 40/[(s² + 4)(s² + 2s + 2)].
11.25 (Inversion and Partial Fractions) Let H(s) = (s + 2)/[(s + 3)(s + 4)³]. Its partial fraction expansion
(PFE) has the form
H(s) = K/(s + 3) + A0/(s + 4)³ + A1/(s + 4)² + A2/(s + 4)
Finding the two constants K and A0 is easy, but A1 and A2 require derivatives of (s + 4)³H(s). An
alternative is to recognize that H(s) and its PFE are valid for any value of s, excluding the poles. All
we need is to evaluate the PFE at two values of s to yield two equations in two unknowns, assuming
K and A0 are already known.
(a) Try this approach, using s = -2 and s = -5, to find the PFE constants A1 and A2, assuming you
have already evaluated K and A0.
(b) Repeat part (a), choosing s = 0 and s = -6. Is there a best choice?
11.26 (Integral Equations) Integral equations arise in many contexts. One approach to solving integral
equations is by using the convolution property of the Laplace transform. Find the transfer function
and impulse response of a filter whose input-output relation is described by
(a) y(t) = x(t) - 2∫_0^t y(λ)e^{-(t-λ)}u(t - λ) dλ
(b) y(t) = x(t) + ∫_0^t y(λ)e^{-3(t-λ)}u(t - λ) dλ
11.27 (System Analysis) Consider a system whose impulse response is h(t) = 2e^{-2t}u(t). Find its response
to the following inputs.
(a) x(t) = δ(t)    (b) x(t) = u(t)
(c) x(t) = e^{-t}u(t)    (d) x(t) = e^{-2t}u(t)
(e) x(t) = cos(t)    (f) x(t) = cos(t)u(t)
(g) x(t) = cos(2t)    (h) x(t) = cos(2t)u(t)
11.28 (System Analysis) For each circuit shown in Figure P11.28, assume that R = 1 , C = 1 F, and
L = 1 H where required and
(a) Find the transfer function H(s) and the impulse response h(t).
(b) Find the response to x(t) = et u(t), assuming vC (0) = 0 and iL (0) = 0.
(c) Find the response to x(t) = u(t), assuming vC (0) = 1 V and iL (0) = 2 A (directed down).
[Figure P11.28 Circuits for Problem 11.28: four circuits (Circuit 1 through Circuit 4) built from R, L, and C, each with input x(t) and output y(t); in Circuit 3 the output is taken across L and C, and in Circuit 4 across R and L.]
11.29 (System Analysis) Consider a system whose impulse response is h(t) = 2e^{-2t}cos(t)u(t). Let an
input x(t) produce the output y(t). Find x(t) for the following outputs.
(a) y(t) = cos(2t) (b) y(t) = 2 + cos(2t) (c) y(t) = cos2 (t)
(a) Let x(t) = e^{-t}u(t). Find the output y(t), using Laplace transforms. Verify your results, using
time-domain convolution.
(b) Let x(t) = u(t). Find the output y(t), using Laplace transforms. Verify your results, using
time-domain convolution.
(c) Let x(t) = cos(t). Find the output y(t), using Laplace transforms. Verify your results, using
time-domain convolution.
11.31 (Response to Periodic Inputs) Find the steady-state response and the total response of the
following systems to the switched periodic inputs x(t) whose one period equals x1 (t).
(a) H(s) = (s + 3)/(s² + 3s + 2),  x1(t) = u(t) - u(t - 1),  T = 2
(b) H(s) = (s + 2)/(s² + 4s + 3),  x1(t) = tri(t - 1),  T = 2
11.32 (Stability) A perfect differentiator is described by y(t) = dx(t)/dt.
(a) Find its transfer function H(s) and use the condition for BIBO stability to show that the system
is unstable.
(b) Verify your conclusion by finding the impulse response h(t) and applying the condition for BIBO
stability in the time domain.
11.33 (Stability) A perfect integrator is described by y(t) = ∫_{-∞}^t x(λ) dλ.
(a) Find its transfer function H(s) and use the condition for BIBO stability to show that the system
is unstable.
(b) Verify your conclusion by finding the impulse response h(t) and applying the condition for BIBO
stability in the time domain.
11.34 (Model-Order Reduction) For a stable system, the effect of poles much farther from the jω-axis
than the rest is negligible after some time, and the behavior of the system may be approximated by
a lower-order model from the remaining or dominant poles.
(a) Let H(s) = 100/[(s + 1)(s + 20)]. Find its impulse response h(t), discard the term in h(t) that
makes the smallest contribution to obtain the reduced impulse response hR(t), and establish
the transfer function of the reduced model by computing HR(s).
(b) If the poles (s + αk) of H(s) to be neglected are written in the form αk(1 + sτk), HR(s) may
also be computed directly from H(s) if we discard just the factors (1 + sτk) from H(s). Obtain
HR(s) from H(s) using this method and explain any differences from the results of part (a).
(c) As a rule of thumb, poles with magnitudes ten times larger than the rest may be neglected. Use
this idea to find the reduced model HR(s) and its order if H(s) = 400/[(s² + 2s + 200)(s + 20)(s + 2)].
11.35 (System Response in Symbolic Form) The ADSP routine sysresp2 yields a symbolic expression
for the system response (see Chapter 21 for examples of its usage). Consider a system described by
the differential equation y′(t) + 2y(t) = 2x(t). Use sysresp2 to obtain
(a) Its step response.
(b) Its impulse response.
(c) Its zero-state response to x(t) = 4e^{-3t}u(t).
(d) Its complete response to x(t) = 4e^{-3t}u(t), with y(0) = 5.
11.36 (System Response in Symbolic Form) Consider the system y″(t) + 4y′(t) + Cy(t) = x(t).
(a) Use sysresp2 to obtain its step response and impulse response for C = 3, 4, 5 and plot each
response.
(b) How does the step response differ for each value of C? For what value of C would you expect
the smallest rise time? For what value of C would you expect the smallest 3% settling time?
(c) Confirm your predictions in part (b) by numerically estimating the rise time and settling time,
using the ADSP routine trbw.
11.37 (Steady-State Response in Symbolic Form) The ADSP routine ssresp yields a symbolic
expression for the steady-state response to sinusoidal inputs (see Chapter 21 for examples of its
usage). Find the steady-state response to the input x(t) = 2cos(3t - π/3) for each of the following
systems, and plot the results over 0 ≤ t ≤ 3.
(a) y′(t) + αy(t) = 2x(t), for α = 1, 2
(b) y″(t) + 4y′(t) + Cy(t) = x(t), for C = 3, 4, 5
Chapter 12
APPLICATIONS OF
THE LAPLACE TRANSFORM
367
368 Chapter 12 Applications of the Laplace Transform
The output is the resistor voltage. The transfer function may be written as
H(s) = Y(s)/X(s) = R/(R + 1/sC) = sRC/(1 + sRC) = sτ/(1 + sτ)
The magnitude and phase of the transfer function are sketched in Figure E12.1A(2).
[Figure E12.1A(2) Frequency response of the RC circuit for Example 12.1(a): the magnitude rises from 0 toward 1, passing through 0.707 at ω = 1/τ; the phase falls from 90° toward 0°, passing through 45° at ω = 1/τ.]
The system describes a highpass filter because |H(ω)| increases monotonically from |H(0)| = 0 to a
maximum of Hmax = |H(∞)| = 1. Its half-power frequency (where |H(ω)| = Hmax/√2) is ω = 1/τ,
but its half-power bandwidth is infinite. The phase decreases from a maximum of 90° at ω = 0 to 0°
as ω → ∞. The phase at ω = 1/τ is 45°.
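These half-power claims follow from evaluating H(s) = sτ/(1 + sτ) at s = jω; a short Python check at ω = 1/τ:

```python
import cmath, math

tau = 1.0

def H(w):
    # frequency response of the highpass filter H(s) = s*tau/(1 + s*tau)
    s = 1j * w
    return s * tau / (1 + s * tau)

mag = abs(H(1 / tau))
ph = math.degrees(cmath.phase(H(1 / tau)))
print(mag, ph)  # about 0.7071 (= 1/sqrt(2)) and 45.0 degrees
```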
(b) Discuss the frequency response of the RLC filter shown in Figure E12.1A(1).
We find the transfer function H(s) and frequency response H(ω) of this filter as
H(s) = s/(s² + s + 1)    H(ω) = jω/[(1 - ω²) + jω]
This describes a bandpass filter because its magnitude response is zero at ω = 0 and very small at
high frequencies, with a peak in between. Frequency-domain measures for bandpass filters include the
half-power frequencies ω1, ω2 at which the gain is 1/√2 times the peak gain, the center frequency
ω0 (defined as ω0 = √(ω1ω2), the geometric mean of the half-power frequencies), and the half-power
bandwidth (the frequency band B = ω2 - ω1 covered by the half-power frequencies). The peak gain
12.2 Minimum-Phase Filters 369
S(s) = H(s)/s = 1/(s² + s + 1) = 1/[(s + 1/2)² + (√3/2)²]
Inverse transformation yields s(t) = (2/√3)e^{-t/2}sin(√3t/2). This is a damped sinusoid that is typical of the
step response of bandpass filters. The response nearly dies out in about 5 time constants
(or 10 s), or less than three half-cycles.
H(s) = K (s - z1)(s - z2) ··· (s - zM) / [(s - p1)(s - p2) ··· (s - pN)],  N > M, K > 0, Re[zk] < 0, Re[pk] < 0   (12.2)
It has the smallest group delay and smallest deviation from zero phase, at every frequency, among all transfer functions with the same magnitude spectrum |H(ω)|. A stable system is called mixed phase if some of its zeros lie in the right half-plane (RHP) and maximum phase if all its zeros lie in the RHP. Of all possible stable systems with the same magnitude response, there is only one minimum-phase system.

All have the same magnitude but different phase and delay. H1(s) is minimum phase (no zeros in the RHP), H2(s) is mixed phase (one zero in the RHP), and H3(s) is maximum phase (all zeros in the RHP).
Figure: Quadrantal symmetry of the roots of H(s)H(−s) in the s-plane, shown for a complex root and for a real root
The system transfer function H(s) = KP(s)/Q(s) is obtained from H(s)H(−s) by selecting only the left half-plane poles (for stability) and left half-plane zeros (to ensure minimum phase) and a value of K that matches the magnitudes of H(s) and H(ω) at a convenient frequency (such as dc).
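This spectral-factorization recipe can be sketched in code. The example below uses a made-up magnitude-squared function |H(ω)|² = 4/(ω⁴ + 5ω² + 4) (not one from the text), substitutes ω² = −s², keeps only the left half-plane roots, and matches the gain at dc:

```python
import numpy as np

# Illustrative (made-up) magnitude-squared: |H(w)|^2 = 4 / (w^4 + 5 w^2 + 4).
# Substituting w^2 = -s^2 gives H(s)H(-s) = 4 / (s^4 - 5 s^2 + 4).
denom = np.poly1d([1, 0, -5, 0, 4])      # s^4 - 5 s^2 + 4
poles = denom.roots                       # +/-1 and +/-2: quadrantal symmetry
lhp = sorted(p.real for p in poles if p.real < 0)
assert np.allclose(lhp, [-2.0, -1.0])

# Minimum-phase factor: H(s) = K / ((s+1)(s+2)); matching |H(0)| = 1 at dc gives K = 2
K = 2.0
def H(s):
    return K / ((s + 1) * (s + 2))

# Check |H(jw)|^2 against the prescribed magnitude at a few frequencies
for w in [0.0, 0.5, 1.0, 3.0]:
    assert np.isclose(abs(H(1j*w))**2, 4 / (w**4 + 5*w**2 + 4))
```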
The decibel magnitude HdB = 20 log|H(ω)| is plotted on a linear scale versus log(ω). Since any phase variation can be brought into the range −180° to 180°, the phase θ(ω) is typically plotted on a linear scale versus log(ω). Since log(0) = −∞ and the logarithm of a negative number is not real, log and Bode plots use only positive frequencies and exclude dc. For LTI systems whose transfer function is a ratio of polynomials in jω, a rough sketch can be quickly generated using linear approximations called asymptotes over different frequency ranges to obtain asymptotic Bode plots.
The log operator provides nonlinear compression. A decade (tenfold) change in |H(ω)| results in exactly a 20-dB change in its decibel magnitude, because

20 log|10H(ω)| = 20 log|H(ω)| + 20 log 10 = HdB + 20   (12.3)

Similarly, a twofold (an octave) change in |H(ω)| results in approximately a 6-dB change in its decibel magnitude, because

20 log|2H(ω)| = 20 log|H(ω)| + 20 log 2 ≈ HdB + 6   (12.4)
The slope of the magnitude is measured in decibels per decade (dB/dec) or decibels per octave (dB/oct). For example, if |H(ω)| = Aω^k, the slope of HdB versus ω is 20k dB/dec or 6k dB/oct.
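A quick numerical check of the decade and octave rules (the gain A and slope index k below are illustrative):

```python
import math

def dB(x):
    return 20 * math.log10(x)

# A tenfold (decade) change is exactly 20 dB; a twofold (octave) change is ~6 dB
assert abs(dB(10) - 20.0) < 1e-12
assert abs(dB(2) - 6.0206) < 1e-3

# For |H(w)| = A * w^k, the slope is 20k dB/dec (about 6k dB/oct): try k = 2
A, k = 3.0, 2
slope_dec = dB(A * 10**k) - dB(A * 1**k)       # over one decade (w: 1 -> 10)
slope_oct = dB(A * 2**k) - dB(A * 1**k)        # over one octave (w: 1 -> 2)
assert abs(slope_dec - 20*k) < 1e-9
assert abs(slope_oct - 6.02*k) < 0.01
```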
The numerator and denominator can be factored into linear and quadratic factors in jω with real coefficients. A standard form is obtained by setting the real part of each factored term to unity to obtain
The standard factored form offers two advantages. Upon taking logarithms, products transform to a sum. This allows us to sketch the decibel magnitude of the simpler individual factors separately and then use superposition to plot the composite response. Similarly, the phase is the algebraic sum of the phase contributions due to each factor, and we can thus plot the composite phase by superposition. Note that H(ω) is a constant or proportional to some power of jω as ω → ∞ or ω → 0. There are only four types of terms that we must consider in order to plot the spectrum for any rational transfer function: a constant, the term jω, the term (1 + jω/α), and a quadratic term.
Figure 12.2 Asymptotic Bode magnitude plots for some standard forms: H(ω) = jω (slope 20 dB/dec through 0 dB at ω = 1), H(ω) = 1/jω (slope −20 dB/dec), H(ω) = 1 + jω/α (slope 20 dB/dec past the break), and H(ω) = 1/(1 + jω/α) (slope −20 dB/dec past the break)
The decibel magnitude of H(ω) = 1 + jω/α is HdB = 20 log|1 + jω/α| dB, and yields separate linear approximations for low and high frequencies:

HdB = 20 log|1 + jω/α| ≈ 0 for ω ≪ α (slope = 0 dB/dec), and HdB ≈ 20 log(ω/α) for ω ≫ α (slope = 20 dB/dec)   (12.7)
The two straight lines intersect at ω = α, where HdB ≈ 3 dB. The frequency α is called the break frequency, or corner frequency. For a linear factor, it is also called the half-power frequency, or the 3-dB frequency. The difference between the asymptotic and true magnitude equals 3 dB at ω = α, 1 dB at frequencies an octave above and below α, and nearly 0 dB at frequencies a decade above and below α. All the slopes and magnitudes change sign if a term is in the denominator and increase k-fold if a term is repeated k times.
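These asymptote errors are easy to confirm for a linear factor; the sketch below assumes a break frequency α = 10 rad/s:

```python
import math

alpha = 10.0  # break frequency of the linear factor (1 + jw/alpha)

def true_dB(w):
    return 20 * math.log10(abs(1 + 1j*w/alpha))

def asymptote_dB(w):
    return 0.0 if w <= alpha else 20 * math.log10(w/alpha)

# Error between the true and asymptotic magnitude:
# ~3 dB at the break, ~1 dB an octave away, ~0 dB a decade away
assert abs(true_dB(alpha) - asymptote_dB(alpha) - 3.01) < 0.01
assert abs(true_dB(2*alpha) - asymptote_dB(2*alpha) - 0.97) < 0.01
assert abs(true_dB(alpha/2) - asymptote_dB(alpha/2) - 0.97) < 0.01
assert true_dB(10*alpha) - asymptote_dB(10*alpha) < 0.05
```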
The decibel value of a constant H(ω) = K is HdB = 20 log|K| dB, a constant for all ω. If K is negative, the negative sign contributes a phase of 180° to the phase plot.
The Bode magnitude plot for a transfer function with several terms is the sum of similar plots for each
of its individual terms. Since the asymptotic plots are linear, so is the composite plot. It can actually be
sketched directly if we keep track of the following in the transfer function:
1. The initial slope is zero if terms of the form (jω)^±k are absent. Otherwise, the initial slope equals ±20k dB/dec, and the asymptote always passes through 0 dB at ω = 1 if we ignore the constant term.
2. If the break frequencies are arranged in ascending order, the slope changes at each successive break frequency. The change in slope equals ±20k dB/dec for (1 + jω/ωB)^±k. The slope increases by 20k dB/dec if ωB corresponds to a numerator term, and decreases by 20k dB/dec otherwise.
3. A constant term contributes a constant decibel value and shifts the entire plot. We include this last.
As a consistency check, the final slope should equal 20n dB/dec, where n is the difference between the order of the numerator and denominator polynomials in H(ω).
12.3 Bode Plots 373
A rough sketch of the true plot may also be generated by adding approximate correction factors at the break frequencies. The correction factor at the break frequency α, corresponding to a linear factor 1 + jω/α, is 3 dB if the surrounding break frequencies are at least a decade apart, or 2 dB if an adjacent break frequency is an octave away. For repeated factors with multiplicity k, these values are multiplied by k. At all other frequencies, the true value must be computed directly from H(ω).
The term 1/jω provides a starting asymptote of −20 dB/dec whose value is 0 dB at ω = 1 rad/s. We can now sketch a composite plot by including the other terms:

At ω₁ = 0.25 rad/s (numerator), the slope changes by +20 dB/dec to 0 dB/dec.
At ω₂ = 10 rad/s (numerator), the slope changes by +20 dB/dec to +20 dB/dec.
At ω₃ = 20 rad/s (denominator), the slope changes by −20 dB/dec to 0 dB/dec.
We find the asymptotic magnitudes at the break frequencies:

ω₁ = 0.25 rad/s (numerator): 12 dB (2 octaves below ω = 1 rad/s).
ω₂ = 10 rad/s (numerator): 12 dB (zero slope from ω = 0.25 rad/s to ω = 10 rad/s).
ω₃ = 20 rad/s (denominator): 18 dB (1 octave above ω = 10 rad/s, with slope 6 dB/oct).

Finally, the constant 5 shifts the plot by 20 log 5 = 14 dB. Its Bode plot is shown in Figure E12.4(a).
Figure E12.4 Bode magnitude plots for Example 12.4(a and b), showing the asymptotic (dark) and exact magnitude
(b) (Repeated Factors) Let H(ω) = (1 + jω)(1 + jω/100) / [(1 + jω/10)²(1 + jω/300)].
Its Bode plot is sketched in Figure E12.4(b). We make the following remarks:

The starting slope is 0 dB/dec, since a term of the form (jω)^±k is absent.
The slope changes by −40 dB/dec at ωB = 10 rad/s due to the repeated factor.
For a true plot, the correction factor at ωB = 10 rad/s (denominator) is −6 dB.
The true magnitudes at ω = 100 rad/s and ω = 300 rad/s, which are neither an octave apart nor a decade apart, must be computed directly from H(ω).
(c) For the Bode magnitude plot shown in Figure E12.4C, find ω₁, ω₂, and H(s).
Figure E12.4C Bode plot for Example 12.4(c): the magnitude is flat at 6 dB below ω₁, rises at 20 dB/dec to a peak of 26 dB at ω = 1 rad/s, and falls at −20 dB/dec to 0 dB at ω₂
The 20-dB/dec slope and a 20-dB rise from ω₁ to 1 rad/s places ω₁ a decade below 1 rad/s. Thus, ω₁ = 0.1 rad/s. The −20-dB/dec (or −6-dB/oct) slope and a 26-dB drop from 1 rad/s to ω₂ places ω₂ a decade (a 20-dB drop) plus an octave (a 6-dB drop) away, at ω₂ = 20 rad/s. We may also find ω₂ from the slope = rise over run rule, provided we compute the run from logarithms of the frequencies:

slope = −20 dB/dec = (0 − 26)/(log ω₂ − log 1)    ⟹    log ω₂ = 1.3    ⟹    ω₂ ≈ 20 rad/s
The 20-dB/dec slope at 0.1 rad/s corresponds to a numerator factor (1 + jω/0.1), and the slope change of −40 dB/dec at 1 rad/s corresponds to the quadratic denominator factor (1 + jω)². The decibel shift of 6 dB corresponds to a linear gain of K = 2 (because 20 log K = 6 dB). Thus,

H(ω) = 2(1 + jω/0.1)/(1 + jω)²        H(s) = 2(1 + s/0.1)/(1 + s)² = 20(s + 0.1)/(s + 1)²
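The plot reading can be double-checked numerically. The sketch below evaluates the recovered H(s) = 20(s + 0.1)/(s + 1)² and confirms the 6-dB dc level, the 6-dB correction at the repeated break ω = 1 rad/s, and the location of ω₂:

```python
import numpy as np

# H(s) read off the plot in Example 12.4(c): H(s) = 20 (s + 0.1) / (s + 1)^2
def H(s):
    return 20 * (s + 0.1) / (s + 1)**2

def dB(w):
    return 20 * np.log10(abs(H(1j * w)))

# Low-frequency level: H(0) = 2, i.e. the 6-dB shift (20 log 2 ~ 6 dB)
assert abs(dB(1e-4) - 20*np.log10(2)) < 1e-3

# At w = 1 rad/s the asymptote is 26 dB; the repeated denominator factor
# (multiplicity k = 2) contributes a -3k = -6 dB correction there
assert abs(dB(1.0) - (26 - 6)) < 0.1

# A 26-dB drop at -20 dB/dec places w2 a decade plus an octave above 1 rad/s
w2 = 10**(26/20)
assert abs(w2 - 20) < 0.1
```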
The phase of jω is 90° for all ω. The phase of (1 + jω/α) is θ(ω) = tan⁻¹(ω/α). At ω = α, we have the exact result θ(α) = 45°. For low and high frequencies, we find that θ(ω) → 0° as ω → 0, and θ(ω) → 90° as ω → ∞.
To generate an asymptotic phase plot, we also need a linear approximation in the intermediate frequency range. The break frequency ω = α provides a convenient reference. Since the difference between the true and asymptotic magnitude almost disappears within a decade of the break frequency, it is customary to start the high-frequency and low-frequency approximations a decade above and a decade below ω = α. Over the two-decade range (0.1α, 10α), we approximate the phase by a straight line with a slope of 45°/dec (which corresponds to 13.5°/oct). As with Bode magnitudes, all the slopes and phase values change sign if a term is in the denominator and increase k-fold if a term is repeated k times. The constant term contributes zero phase if positive and 180° if negative.
The Bode phase plot for a transfer function with several terms is just the sum of similar plots for each of
its individual terms. Since asymptotic plots are linear, the composite plot can be sketched directly without
sketching the individual terms, if we note the following:
1. A negative constant contributes 180°, while the term (jω)^±k also contributes a constant value ±90k°. Both may be used as the last step to shift the phase plot by an appropriate amount.

2. If the break frequencies are arranged in ascending order, the slope changes at frequencies one decade on either side of each successive break frequency. The slope increases by 45k°/dec for a factor of the form (1 + jω/ωB)^k.
Here are some consistency checks. The initial and final asymptotes have zero slope. The initial phase is nonzero only if a term of the form (jω)^±k is present. The final phase equals 90n°, where n is the difference between the order of the numerator and denominator polynomials in H(ω).
Next, we find the slopes of the asymptotes for the individual terms. The frequencies at which the slopes change are [0.05, 1, 2, 5, 100, 200] rad/s. The slopes for the composite plot are found as follows:

2. 45°/dec between 0.05 rad/s and 1 rad/s (due to the first term only).
3. 90°/dec between 1 rad/s and 2 rad/s (due to the first two terms).
5. 45°/dec between 5 rad/s and 100 rad/s (due to the last two terms).
6. 90°/dec between 100 rad/s and 200 rad/s (due to the last term only).

We sketch the asymptotic plot and shift it by −90° due to the term 1/jω. As a check, both the initial and final values of the phase equal −90°. The phase plot is shown in Figure E12.5.
Figure E12.5 Bode phase plot for Example 12.5
Figure 12.4 Magnitude and phase of a quadratic factor at the break frequency, for ζ = 0.01, 0.1, 0.2, 0.5, and 1
Figure 12.4 shows the magnitude and phase plots for various values of ζ. At the break frequency ω = α, the true magnitude may differ significantly from the asymptotic value, and depends on ζ (or Q), with

HdB = −20 log(2ζ) = 20 log Q   (12.10)
For ζ = 0.5 (or Q = 1), the true value matches the asymptotic value, but for ζ < 0.5 (or Q > 1), the true value exceeds the asymptotic value.

The phase plot reveals even greater differences. For ζ = 1 (or Q = 0.5), the asymptotic phase plot is identical to that for a squared linear factor. For ζ < 1 (or Q > 0.5), the transition between 0° and 180° occurs in less than two decades. For ζ < 0.1 (or Q > 5), the phase change occurs in an almost step-like fashion, and the asymptotes extend almost up to the break frequency itself.
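Equation (12.10) is easy to confirm: for a quadratic denominator factor with a normalized break frequency, the exact magnitude at the break is 1/(2ζ) = Q, i.e. 20 log Q dB above the 0-dB asymptote:

```python
import numpy as np

wn = 1.0  # break frequency of the quadratic factor (normalized)

def Hq(w, zeta):
    """Quadratic denominator factor 1 / (1 + 2*zeta*(jw/wn) + (jw/wn)^2)."""
    jw = 1j * w / wn
    return 1 / (1 + 2*zeta*jw + jw**2)

for zeta in [0.01, 0.1, 0.2, 0.5, 1.0]:
    Q = 1 / (2*zeta)
    # At the break frequency the true magnitude is exactly 1/(2*zeta) = Q,
    # so it sits 20*log10(Q) dB away from the asymptote (Eq. 12.10)
    true_dB = 20 * np.log10(abs(Hq(wn, zeta)))
    assert np.isclose(true_dB, 20*np.log10(Q))

# zeta = 0.5 (Q = 1): the true value matches the asymptote at the break
assert np.isclose(20*np.log10(abs(Hq(wn, 0.5))), 0.0)
```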
H(ω) = (1 + jω/0.5) / [1 + j2(0.1)(ω/10) + (jω/10)²]
The Bode magnitude and phase plots are sketched in Figure E12.6. The Bode magnitude is 0 dB up to ω = 0.5 rad/s, with a 20-dB/dec asymptote starting at ω = 0.5 rad/s and another of −20 dB/dec starting at ω = 10 rad/s. The asymptotic magnitude at ω = 10 rad/s is 26 dB. The true magnitude differs significantly. With ζ = 0.1, we compute −20 log(2ζ) = −20 log(0.2) = 14 dB. Thus, the true value at ω = 10 rad/s differs from the asymptotic value by 14 dB and equals 40 dB. The phase plot shows zero phase until ω = 0.05 rad/s, followed by asymptotes with slopes of 45°/dec until ω = 1 rad/s, −45°/dec until ω = 5 rad/s, and −90°/dec until ω = 100 rad/s. After ω = 100 rad/s, the phase stays constant at −90°. Note how different the asymptotic phase plot is compared with the exact phase plot, especially in the mid-frequency range.
Figure E12.6 Bode magnitude and phase plots for Example 12.6, showing the asymptotic (dark) and exact curves
Measure Explanation
Time delay Time between application of input and appearance of response
Typical measure: Time to reach 50% of final value
Rise time Measure of the steepness of initial slope of response
Typical measure: Time to rise from 10% to 90% of final value
Overshoot Deviation (if any) beyond the final value
Typical measure: Peak overshoot
Settling time Time for oscillations to settle to within a specified value
Typical measure: Time to settle to within 5% or 2% of final value
Damping Rate of change toward final value
Typical measure: Damping factor or quality factor Q
Figure: Pole locations of a second-order system in the s-plane: high Q corresponds to poles closer to the jω-axis, and low Q to poles farther from it
Frequency-Domain Measures
For Q < 1/√2, the magnitude |H(ω)| is monotonic, but for Q > 1/√2 it shows overshoot and a peak near ωp, and the peaking increases with Q, as illustrated in Figure 12.6.
For Q > 1/√2, the frequency ωpk of the peak and the peak magnitude Hpk are found by setting d|H(ω)|/dω to zero to give

ωpk = ωp[1 − 1/(2Q²)]^(1/2)        Hpk = Q/[1 − 1/(4Q²)]^(1/2),   Q > 1/√2   (12.13)
Time-Domain Measures
The nature of the step response depends on the poles of H(s), as illustrated in Figure 12.7.
For Q < 0.5, the poles are real and distinct, and the step response shows a smooth, monotonic rise to
the final value (overdamped) with a large time constant. For Q > 0.5, the poles are complex conjugates,
and the step response is underdamped with overshoot and decaying oscillations (ringing) about the final
value. This results in a small rise time but a large settling time. The frequency of oscillations increases with
Q. For Q = 0.5, the poles are real and equal, and the response is critically damped and yields the fastest
monotonic approach to the steady-state value with no overshoot. The 5% settling time and 2% settling time
12.4 Performance Measures
Figure E12.7 The filters for Example 12.7: a second-order Bessel filter and a second-order Butterworth filter
The peak overshoot occurs at tOS = 3.63 s. The approximate 5% and 2% settling times and their exact values (in parentheses) are

t5% = 6Q/ωp = 2 (2.19) s        t2% = 8Q/ωp = 2.66 (2.5) s
The rise time is numerically estimated as tr = 1.573 s, and the delay time as td = 0.9 s.
The phase response θ(ω) and group delay tg are given by

θ(ω) = −tan⁻¹[3ω/(3 − ω²)]        tg = −dθ(ω)/dω = (3ω² + 9)/(ω⁴ + 3ω² + 9)
(b) The second-order lowpass Butterworth filter is described by H(s) = 1/(s² + s√2 + 1).

Comparison of the denominator with s² + sωp/Q + ωp² gives ωp = 1 rad/s and Q = 1/√2 (the highest value of Q that still ensures a monotonic |H(ω)|). The half-power frequency is ω = 1 rad/s.
Since Q > 0.5, the step response will be underdamped and show overshoot. In fact, the step response s(t) corresponds to the inverse transform of S(s) = H(s)/s.
The rise time is numerically estimated as tr = 2.15 s and the delay time as td = 1.43 s. The settling
time, rise time, and delay time are much larger than those for the Bessel filter.
12.5 Feedback
Feedback plays an extremely important role in regulating the performance of systems. In essence, we
compare the output of a system with the desired input and correct for any deficiencies to make improvements.
Feedback systems abound in nature. Temperature regulation in humans is an example. When we are cold,
the body responds by shivering to increase the temperature. The concept of feedback has been effectively used to both study and design systems in a wide variety of disciplines ranging from biomedical engineering to space exploration and has spawned the fast-growing discipline of automatic control systems. A typical
feedback system involves the structure shown in Figure 12.8.
The system is typically analyzed under normal operating conditions or about a set point that describes
a steady state. Each variable then describes its deviation from its corresponding steady-state value. The
plant describes the system to be controlled. The input R(s) is a reference signal. The output C(s) is
fed back to generate an error signal E(s), which controls the output in a manner that can be adjusted
by compensators in either the forward or return path or both. Any disturbances (unwanted signals) that
cause a change in the steady state C(s) are modeled as a separate signal D(s).
Figure 12.8 A typical feedback system: the reference R(s) and the fed-back output C(s) form the error E(s), which drives the plant through compensators in the forward and return paths; disturbances enter as D(s)

Figure 12.9 Equivalent feedback system with forward block G(s) and feedback block H(s)
If we combine the forward compensator and the plant, we obtain the feedback system of Figure 12.9.
Depending on the requirements, such a feedback system can be operated for tracking or for regulation.
In the tracking mode, the system tracks (or responds to) the reference signal such that C(s) R(s). This
requires minimization of the error signal E(s). During regulation, the system minimizes changes in C(s) (in
the presence of a disturbance), assuming a constant reference. Regulation is important in situations where
we must maintain the response in the face of changing system parameters. How feedback affects system stability is also an important aspect.
Tracking
Since good tracking requires C(s) ≈ R(s), this implies

G(s)/[1 + G(s)H(s)] ≈ 1   (12.21)

To ensure this we could, for example, choose H(s) = 1 (unity feedback) and G(s) ≫ 1.
Regulation
The regulating effect of feedback is based on choosing a large gain for the term G(s)H(s), such that |G(s)H(s)| ≫ 1. Then

T(s) = G(s)/[1 + G(s)H(s)] ≈ 1/H(s)   (12.22)

The system transfer function depends only on the feedback path. Since it is also independent of G(s), the system is insensitive to variations in the plant dynamics.
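This insensitivity to plant variations is easy to demonstrate with scalar (dc) gains; the values below are illustrative:

```python
# Regulation: with a large loop gain, T = G/(1 + G*H) ~ 1/H, nearly
# independent of the plant gain G. Evaluate at dc (s = 0) with scalar gains.
H = 0.1                      # feedback gain -> ideal T ~ 1/H = 10

def T(G):
    return G / (1 + G * H)

# A 2:1 change in the plant gain barely moves the closed-loop gain
t1, t2 = T(1e4), T(2e4)
assert abs(t1 - 10) / 10 < 2e-3
assert abs(t2 - t1) / t1 < 1e-3

# With a small loop gain, feedback has little effect: T ~ G
assert abs(T(0.01) - 0.01) / 0.01 < 0.01
```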
The Open-Loop Gain
Feedback is hardly useful when the open-loop gain is small (|GH| ≪ 1), since T(s) ≈ G(s). Clearly, the influence of feedback (closing the loop) depends on the term G(s)H(s), regardless of whether we want to study tracking, regulation, or stability. Curiously, G(s)H(s) (with its sign reversed) is called the open-loop transfer function. If we consider only the parts in the feedback loop, as shown in Figure 12.10, and open the loop at any point, an input I(s) inserted at one end returns as −G(s)H(s)I(s) at the other end (to act as the new input at its starting point). Thus, the open-loop transfer function equals

OLT = O(s)/I(s) = −G(s)H(s)   (12.23)
Figure 12.10 Opening the feedback loop: the input I(s) returns as O(s) = −G(s)H(s)I(s)
Advantages of Feedback
Even though feedback reduces the overall gain, it can lead to a stable overall system, regardless of the complexity, variations, or even lack of stability of the plant itself. Since H(s) can usually be realized using inexpensive, highly reliable devices and elements, feedback affords a simple, inexpensive means of regulating stable systems or stabilizing unstable ones. The major advantages of feedback include:
1. Insensitivity to plant variations
2. Reduced effects of distortion
3. Improvement in the system bandwidth
4. Improvement in the system linearity and stability
For a large open-loop gain, the overall transfer function of a feedback system is T (s) 1/H(s), and its
stability (and linearity) is dictated primarily by H(s). The plant G(s) itself could be unstable or highly
nonlinear, but if H(s) is stable (or linear), so is the overall system.
Figure E12.8A Feedback amplifier for Example 12.8(a), with forward gain GA1 and feedback gain A2
(b) (Disturbance Rejection) Amplifiers are prone to distortion at full-power output. If the distortion
is small, they may be modeled by the linear system of Figure E12.8B. The structure is similar to that
of part (a) but the output C(s) now equals the sum of an undistorted output and a term D(s) that
accounts for distortion.
Figure E12.8B Amplifier with the distortion modeled as a disturbance D(s) added at the output, for Example 12.8(b)
(c) (Noise Reduction) Feedback is effective in combating noise only if it appears in the output, not if it adds to the input. The error signal E(s) is generated by comparing the input with the measured output. Measurement error results in a noisy signal C′(s) = C(s) + N(s) and an error that equals E(s) = R(s) − C′(s) = R(s) − N(s) − C(s). This is equivalent to a system with the noisy input R′(s) = R(s) − N(s). In the absence of any other disturbances (if D(s) = 0), we get

C(s) = [GA1/(1 + GA1A2)]R′(s) = [GA1/(1 + GA1A2)]R(s) − [GA1/(1 + GA1A2)]N(s)

It should be obvious that both the input R(s) and the noise N(s) are multiplied by the same factor, and no noise reduction is thus possible! The noise must be minimized by other means. For example, the effect of measurement noise may be minimized by using more accurate sensors.
(d) (Bandwidth Improvement) Consider an amplifier with transfer function G(s) = K/(1 + sτ). Its half-power bandwidth is B = 1/τ. If this amplifier is made part of a feedback system as illustrated in Figure E12.8D, the overall transfer function may be written as

T(s) = G(s)/[1 + AG(s)] = K1/(1 + sτ1),   where K1 = K/(1 + KA) and τ1 = τ/(1 + KA)
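The gain-bandwidth trade can be confirmed numerically; the values of K, τ, and A below are illustrative:

```python
import numpy as np

K, tau, A = 100.0, 1.0, 0.1   # plant G(s) = K/(1 + s*tau), feedback gain A

def T(w):
    s = 1j * w
    G = K / (1 + s*tau)
    return G / (1 + A*G)

K1 = K / (1 + K*A)            # closed-loop dc gain
tau1 = tau / (1 + K*A)        # closed-loop time constant
B1 = 1 / tau1                 # closed-loop half-power bandwidth

# The dc gain drops from K = 100 to K1 ~ 9.09 ...
assert np.isclose(abs(T(0)), K1)
# ... but the half-power bandwidth grows from 1/tau = 1 to (1 + KA)/tau = 11 rad/s
assert np.isclose(abs(T(B1)), K1 / np.sqrt(2))
```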
Figure E12.8D Feedback amplifier for Example 12.8(d), with gain A in the feedback path
Figure: Realizing an inverse system: with a large gain G ≫ 1 in the forward path and H(s) in the feedback path, T(s) = C(s)/R(s) ≈ 1/H(s)
It is thus possible to realize the inverse system corresponding to some H(s) simply by designing a feedback
system with H(s) in the feedback path, and a large open-loop gain. And as an added advantage, the exact
details of H(s) need not even be known! This is a useful way to undo the damaging influence of some system
H(s), such as a sensor or measurement device, on a signal. We feed the signal to the device and feed its
output to a feedback system with the same device in the feedback path. The overall transfer function of the
cascaded system is unity, and we recover the input signal!
Figure: Phase-locked loop demodulator: the FM signal xFM(t) is multiplied by the VCO output x0(t), and the product e(t) is lowpass filtered to give the output y(t)
Under ideal conditions, the rate of change of θ0(t) is also proportional to the PLL output y(t), such that

θ0(t) = 2πk0 ∫₀ᵗ y(λ) dλ        y(t) = (1/(2πk0)) dθ0/dt   (12.28)

If θ0(t) ≈ θi(t), it follows that y(t) ≈ m(t)(kF/k0) describes a scaled version of the demodulated message. In other words, for demodulation, the PLL behaves as an ideal differentiator.
If we assume that both θi(t) and θ0(t) vary slowly compared to 2πfCt, and the filter rejects frequencies outside the range |f| ≤ 2fC, the filter output may be written as

dθ0/dt = 0.5AB(2πk0)sin(θi − θ0) = K sin(θi − θ0)   (12.30)
Here, K = 0.5AB(2πk0). The system is said to be synchronized, or in phase-lock, if θ0 ≈ θi. When this happens, we obtain the linear approximation

dθ0/dt ≈ K[θi(t) − θ0(t)]   (12.31)
dt
This linear model is amenable to analysis using Laplace transforms. The Laplace transform of both sides gives

sθ0(s) = K[θi(s) − θ0(s)]        θ0(s) = [K/(s + K)]θi(s)   (12.32)
Since dθ0/dt = 2πk0y(t) and dθi/dt = 2πkF m(t), we have Y(s) = sθ0(s)/(2πk0) and θi(s) = 2πkF M(s)/s. We then obtain the following:

For demodulation, we would like y(t) ≈ m(t) or Y(s) ≈ M(s). If K ≫ 1 (implying a very large loop gain), we obtain Y(s) ≈ M(s)(kF/k0) and do in fact recover the message signal (scaled by the factor kF/k0, of course). It is important to realize that this result holds only when the PLL has locked on to the phase of the FM signal and when the phase error θi − θ0 is small. If the phase error is not restricted to small values, the output will suffer from distortion. Naturally, during the locking process, the phase error is much too large to warrant the use of a linear model.
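The linear model (12.31) can be simulated directly. The sketch below (illustrative values; forward-Euler integration) drives a first-order loop with a single-tone message and checks that y(t) ≈ (kF/k0)m(t) once the loop settles:

```python
import numpy as np

# Linearized first-order PLL: d(theta0)/dt = K*(theta_i - theta0).
# For a single-tone message m(t) = cos(wm*t), theta_i(t) = (2*pi*kF/wm)*sin(wm*t).
K, kF, k0, wm = 200.0, 1.0, 1.0, 2.0
dt, N = 1e-4, 200000
t = np.arange(N) * dt
theta_i = (2*np.pi*kF/wm) * np.sin(wm*t)
theta0 = np.zeros(N)
for n in range(N - 1):                      # forward-Euler integration
    theta0[n+1] = theta0[n] + dt*K*(theta_i[n] - theta0[n])

# The PLL output y = (1/(2*pi*k0)) d(theta0)/dt should track (kF/k0) m(t)
# as long as the loop gain K is much larger than the message frequency wm
y = np.gradient(theta0, dt) / (2*np.pi*k0)
m = np.cos(wm*t)
err = np.max(np.abs(y[N//2:] - (kF/k0)*m[N//2:]))
assert err < 0.05
```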
Figure: Linearized model of the PLL, relating M(s), θi(s), θ0(s), and Y(s)
produces a phase that varies with y(t). Under linear operation, the output of the detector (multiplier and lowpass filter) is approximated by θi − θ0, and its action is thus modeled by a summing junction.
The various transfer functions are readily obtained as
For a large open-loop gain |KH(s)| ≫ 1, we have T(s) = Y(s)/M(s) ≈ 1, and the output y(t) equals the message signal m(t), as required. If the input phase does not vary, the error signal θi(s) − θ0(s) equals zero, and so does the steady-state response.
12.6.4 Implementation
The block H(s) may be realized in passive or active configurations. If H(s) = 1, we obtain a first-order loop with T(s) = K/(s + K). Its frequency response and phase error may be written as

Y(f)/M(f) = θ0(f)/θi(f) = K/(K + j2πf)        [θi(f) − θ0(f)]/θi(f) = j2πf/(K + j2πf)   (12.36)
The loop gain K completely characterizes the closed-loop response. If H(s) is a first-order system of the form H(s) = (sτ2 + 1)/(sτ1 + α), we obtain a second-order loop whose frequency response and phase error can be described by

Y(s)/M(s) = θ0(s)/θi(s) = K(sτ2 + 1)/[τ1s² + (α + Kτ2)s + K]   (12.37)

The parameters K, α, τ1, and τ2 are chosen to confine the phase error to small values while minimizing the distortion in the output.
CHAPTER 12 PROBLEMS
DRILL AND REINFORCEMENT
12.1 (Filter Response) Consider a filter described by H(s) = 0.2s/(s² + 0.2s + 16).
(a) What traditional filter type does it describe?
(b) Estimate its response to the input x(t) = cos(0.2t) + sin(4t) + cos(50t).
12.2 (Filter Analysis) Find the impulse response h(t) and step response s(t) of each filter in Figure P12.2.
Figure P12.2 Filters for Problem 12.2: a second-order Bessel filter, a second-order Butterworth filter, and a third-order Butterworth filter
12.3 (Phase Delay and Group Delay) For each filter, find the phase delay and group delay.
(a) H(s) = (s − 1)/(s + 1) (an allpass filter)
(b) H(s) = 1/(s² + √2·s + 1) (a second-order Butterworth lowpass filter)
(c) H(s) = 3/(s² + 3s + 3) (a second-order Bessel lowpass filter)
12.4 (Minimum-Phase Systems) Find the stable minimum-phase transfer function H(s) from the following |H(ω)|² and classify each filter by type. Also find a stable transfer function that is not minimum phase, if possible.
(a) |H(ω)|² = ω²/(ω² + 4)
(b) |H(ω)|² = 4ω²/(ω⁴ + 17ω² + 16)
(c) |H(ω)|² = (ω⁴ + 8ω² + 16)/(ω⁴ + 17ω² + 16)
12.5 (Bode Plots) Sketch the Bode magnitude and phase plots for each transfer function.
(a) H(ω) = (10 + jω)/[(1 + jω)(100 + jω)]        (b) H(ω) = 10jω(10 + jω)/[(1 + jω)(100 + jω)]
(c) H(ω) = (100 + jω)/[jω(10 + jω)²]        (d) H(ω) = (1 + jω)/[(10 + jω)(5 + jω)]
(e) H(s) = 10s(s + 10)/[(s + 1)(s + 5)²]        (f) H(s) = 10(s + 1)/[s(s² + 10s + 100)]
12.6 (Bode Plots) The transfer function of a system is H(ω) = (10 + 10jω)/[(10 + jω)(2 + jω)]. Compute the following quantities in decibels (dB) at ω = 1, 2, and 10 rad/s.
(a) The asymptotic, corrected and exact magnitude
(b) The asymptotic and exact phase
12.7 (Bode Plots) The transfer function of a system is H(ω) = (1 + jω)/(0.1 + jω)².

(a) Find the asymptotic phase in degrees at ω = 0.01, 0.1, 1, and 10 rad/s.
(b) What is the exact phase in degrees at ω = 0.01, 0.1, 1, and 10 rad/s?
12.8 (Minimum-Phase Systems) For each Bode magnitude plot shown in Figure P12.8,
(a) Find the minimum-phase transfer function H(s).
(b) Plot the asymptotic Bode phase plot from H(s).
Figure P12.8 Bode magnitude plots for Problem 12.8
12.9 (Frequency Response and Bode Plots) Find and sketch the frequency response magnitude of H(ω) and the Bode magnitude plots for each of the following filters. What is the expected and exact value of the decibel magnitude of each filter at ω = 1 rad/s?
(a) H(s) = 1/(s + 1) (a first-order Butterworth lowpass filter)
(b) H(s) = 1/(s² + √2·s + 1) (a second-order Butterworth lowpass filter)
(c) H(s) = 1/[(s + 1)(s² + s + 1)] (a third-order Butterworth lowpass filter)
12.10 (Bode Plots of Nonminimum-Phase Filters) Even though it is usually customary to use Bode
plots for minimum-phase filters, they can also be used for nonminimum-phase filters. Consider the
filters described by
(a) H(s) = 10(s + 1)/[s(s + 10)]        (b) H(s) = 10(s − 1)/[s(s + 10)]        (c) H(s) = 10(s + 1)e⁻ˢ/[s(s + 10)]
Which of these describe minimum-phase filters? Sketch the Bode magnitude and phase plots for each filter and compare. What are the differences (if any)? Which transfer function do you obtain (working backward) from the Bode magnitude plot of each filter alone? Why?
12.11 (Lead-Lag Compensators) Lead-lag compensators are often used in control systems and have the generic form described by H(s) = (1 + sτ1)/(1 + sτ2). Sketch the Bode magnitude and phase plots of this compensator for the following values of τ1 and τ2.

(a) τ1 = 0.1 s, τ2 = 10 s        (b) τ1 = 10 s, τ2 = 0.1 s
|H(ω)| = 1/[1 + (ω/α)^(2n)]^(1/2)
12.13 (Subsonic Filters) Some audio amplifiers include a subsonic filter to remove or reduce unwanted
low-frequency noise (due, for example, to warped records) that might otherwise modulate the audible
frequencies, causing intermodulation distortion. One manufacturer states that their subsonic filter
provides a 12-dB/oct cut for frequencies below 15 Hz. Sketch the Bode plot of such a filter and find
its transfer function H(s).
12.14 (Audio Equalizers) Many audio amplifiers have bass, midrange, and treble controls to tailor the frequency response. The controls adjust the cutoff frequencies of lowpass, bandpass, and highpass filters connected in cascade (not parallel). The bass control provides a low-frequency boost (or cut) without affecting the high-frequency response. Examples of passive bass and treble control circuits are shown in Figure P12.14. The potentiometer setting controls the boost or cut.
Figure P12.14 Passive bass-control and treble-control circuits for Problem 12.14
(a) Find the transfer function of the bass-control filter. What is its gain for k = 0, k = 0.5, and
k = 1? What is the high-frequency gain? What are the minimum and maximum values of the
low-frequency boost and cut in decibels provided by this filter?
(b) Find the transfer function of the treble-control filter. What is its gain for k = 0, k = 0.5, and
k = 1? What is the low-frequency gain? What are the minimum and maximum values of the
high-frequency boost and cut in decibels provided by this filter?
(c) The Sony TA-AX 500 audio amplifier lists values of the half-power frequency for its bass control
filter as 300 Hz and for its treble control filter as 5 kHz. Select the circuit element values for
the two filters to satisfy these half-power frequency requirements.
12.15 (Allpass Filters) Allpass filters exhibit a constant gain and have a transfer function of the form H(s) = P(−s)/P(s). Such filters are used as delay equalizers in cascade with other filters to adjust the phase or to compensate for the effects of phase distortion. Two examples are shown in Figure P12.15.
Figure P12.15 First-order and second-order allpass circuits for Problem 12.15
(b) Assume R = 1 Ω, L = 1 H, and C = 1 F and find expressions for the gain, phase delay, and group delay of each filter as a function of frequency.
12.16 (Allpass Filters) Consider a lowpass filter with impulse response h(t) = e⁻ᵗu(t). The input to this filter is x(t) = cos(t). We expect the output to be of the form y(t) = A cos(t + θ). Find the values of A and θ. What should be the transfer function H1(s) of a first-order allpass filter that can be cascaded with the lowpass filter to correct for the phase distortion and produce the signal z(t) = B cos(t) at its output? If we use the first-order allpass filter of Problem 12.15, what will be the value of B?
12.17 (RIAA Equalization) Audio signals usually undergo a high-frequency boost (and low-frequency
cut) before being used to make the master for commercial production of phonograph records. During
playback, the signal from the phono cartridge is fed to a preamplifier (equalizer) that restores the
original signal. The frequency response of the preamplifier is based on the RIAA (Recording Industry
Association of America) equalization curve whose Bode plot is shown in Figure P12.17, with break
frequencies at 50, 500, and 2122 Hz.
Figure P12.17 RIAA equalization curve (break frequencies at 50, 500, and 2122 Hz, with a −20 dB/dec segment) and op-amp equalizer circuit for Problem 12.17
(a) What is the transfer function H(s) of the RIAA equalizer? From your expression, obtain the
exact decibel magnitude at 1 kHz.
(b) It is claimed that the op-amp circuit shown in Figure P12.17 can realize the transfer function
H(s) (except for a sign inversion). Justify this claim and determine the values for the circuit
elements if R2 = 10 kΩ.
12.18 (Pre-Emphasis and De-Emphasis Filters for FM) To improve efficiency and signal-to-noise
ratio, commercial FM stations have been using pre-emphasis (a high-frequency boost) for input signals
prior to modulation and transmission. At the receiver, after demodulation, the signal is restored using
a de-emphasis circuit that reverses this process. Figure P12.18 shows the circuits and their Bode plots.
Figure P12.18 Pre-emphasis and de-emphasis circuits and their Bode plots (±20 dB/dec about a break frequency near 2.1 kHz) for Problem 12.18
The lower break-frequency corresponds to f = 2122 Hz (or τ = 75 μs), and the upper break-frequency
should exceed the highest frequency transmitted (say, 15 kHz). Find the transfer function of each
circuit and compute the element values required. While pre-emphasis and de-emphasis schemes have
been used for a long time, FM stations are turning to alternative compression schemes (such as Dolby
B) to transmit high-fidelity signals without compromising their frequency response or dynamic range.
12.19 (Allpass Filters) Argue that for a stable allpass filter, the location of each left half-plane pole must
be matched by a right half-plane zero. Does an allpass filter represent a minimum-phase, mixed-phase,
or maximum-phase system?
12.20 (Allpass Filters) Argue that the group delay tg of a stable allpass filter is always greater than or
equal to zero.
12.21 (Allpass Filters) The overall delay of cascaded systems is simply the sum of the individual delays.
By cascading a minimum-phase filter with an allpass filter, argue that a minimum-phase filter has the
smallest group delay from among all filters with the same magnitude spectrum.
12.22 (Design of a Notch Filter) To design a second-order notch filter with a notch frequency of ω0,
we select conjugate zeros at s = ±jω0, where the response goes to zero. The poles are then placed
close to the zeros but to the left of the jω-axis to ensure a stable filter. We thus locate the poles at
s = −αω0 ± jω0, where the parameter α determines the sharpness of the notch.
(a) Find the transfer function H(s) of a notch filter with ω0 = 10 rad/s and α as a variable.
(b) Find the bandwidth of the notch filter for α = 0.1.
(c) Will smaller values of α (i.e., α ≪ 1) result in a sharper notch?
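The pole-zero placement described in this problem is easy to check numerically. A minimal Python/NumPy sketch (the book's own numerical examples use Matlab; the variable names here are illustrative):

```python
import numpy as np

# Notch design per Problem 12.22: zeros at s = +/- j*w0, poles at
# s = -a*w0 +/- j*w0, where a plays the role of the sharpness parameter.
w0, a = 10.0, 0.1
num = np.array([1.0, 0.0, w0 ** 2])                         # s^2 + w0^2
den = np.array([1.0, 2 * a * w0, (a * w0) ** 2 + w0 ** 2])  # (s + a*w0)^2 + w0^2

def gain(w):
    """Magnitude of H(jw)."""
    s = 1j * w
    return abs(np.polyval(num, s) / np.polyval(den, s))

print(gain(10.0))   # exactly zero at the notch frequency
print(gain(1.0))    # close to unity well away from the notch
```

Sweeping `a` toward zero pulls the poles onto the zeros and sharpens the notch, which is the point of part (c).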
12.23 (Design of a Bandpass Filter) Let us design a bandpass filter with lower and upper cutoff
frequencies ω1 and ω2 by locating conjugate poles at s = −αω1 ± jω1 and s = −αω2 ± jω2 (where
α < 1) and a pair of zeros at s = 0.
(a) Find the transfer function H(s) of a bandpass filter with ω1 = 40 rad/s, ω2 = 50 rad/s, and
α = 0.1.
(b) What is the filter order?
(c) Estimate the center frequency and bandwidth.
(d) What is the effect of changing α on the frequency response?
12.22 (Frequency Response of Filters) The transfer function of a second-order Butterworth filter is
H(s) = 1/(s² + √2 s + 1).
(a) Sketch its magnitude and phase spectrum and its Bode magnitude plot.
12.25 (Frequency Response of Filters) The transfer function of a second-order Butterworth lowpass
filter is H(s) = 1/[(0.1s)² + 0.1√2 s + 1].
12.27 (Compensating Probes for Oscilloscopes) Probes are often used to attenuate large-amplitude
signals before displaying them on an oscilloscope. Ideally, the probe should provide a constant
attenuation for all frequencies. However, the capacitive input impedance of the oscilloscope and other
stray capacitance effects cause loading and distort the displayed signal (especially if it contains step
changes). Compensating probes employ an adjustable compensating capacitor to minimize the effects
of loading. This allows us, for example, to display a square wave as a square wave (with no undershoot
or ringing). An equivalent circuit using a compensating probe is shown in Figure P12.27.
Figure P12.27 Source, compensating probe (Rp, Cp), and scope input (R, C) for Problem 12.27
(a) Assuming Rs = 0.1R, show that Rp = 8.9R provides a 10:1 attenuation at dc.
(b) In the absence of Cp (no compensation) but with Rs = 0.1R and Rp = 8.9R as before, find
the transfer function H(s) = Vo (s)/Vi (s). Compute and sketch its step response. What are the
time constant and rise time?
(c) In the presence of Cp , and with Rs = 0.1R and Rp = 8.9R as before, compute the transfer
function Hp (s) = Vo (s)/Vi (s). Show that if Rp Cp = RC, then Hp (s) = 0.1/(1 + 0.01sRC).
Compute and sketch its step response. How does the presence and choice of the compensating
capacitor affect the composite time constant and rise time? Does compensation work?
12.28 (Design of Allpass Filters) Consider a lowpass filter with impulse response h(t) = e⁻ᵗu(t). The
input to this filter is x(t) = cos(t) + cos(t/3). Design an allpass filter H2(s) that can be cascaded
with the lowpass filter to correct for the phase distortion and produce a signal that has the form
z(t) = B cos(t) + C cos(t/3) at its output. If the allpass filter has a gain of 0.5, what will be the values
of B and C?
12.30 (Bode Plots) The gain of an nth-order Butterworth lowpass filter is described by

|H(ω)| = 1/√[1 + (ω/ωB)^(2n)]
(a) Show that the half-power bandwidth of this filter is ω = ωB for any n.
(b) Use paper and pencil to make a sketch of the asymptotic Bode plot of a Butterworth filter
for ωB = 1 and n = 2, 3, 4. What are the asymptotic and true magnitudes of each filter at the
half-power frequency?
(c) Determine the minimum-phase transfer function H(s) for n = 2, 3, 4, 5.
(d) Use Matlab to plot the Bode magnitude plot for ωB = 1 and n = 2, 3, 4, 5 on the same graph.
Do these plots confirm your results from the previous part?
(e) Use Matlab to plot the phase for ωB = 1 and n = 2, 3, 4, 5 on the same graph. Describe how
the phase spectrum changes as the order n is increased. Which filter has the most nearly linear
phase in the passband?
(f) Use Matlab to plot the group delay for ωB = 1 and n = 2, 3, 4, 5 on the same plot. Describe
how the group delay changes as the order n is increased. Which filter has the most nearly
constant group delay in the passband?
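Parts (d)-(f) ask for Matlab plots; an equivalent sketch in Python (assuming NumPy/SciPy rather than the book's ADSP routines) computes the Bode magnitudes of part (d) at two spot frequencies:

```python
import numpy as np
from scipy import signal

# Bode magnitudes of unit-bandwidth (wB = 1 rad/s) analog Butterworth
# filters of orders 2 through 5, evaluated at w = 1 and w = 10.
mags = {}
for n in (2, 3, 4, 5):
    b, a = signal.butter(n, 1.0, analog=True)
    w, h = signal.freqs(b, a, worN=np.array([1.0, 10.0]))
    mags[n] = 20 * np.log10(np.abs(h))
    print(n, np.round(mags[n], 2))
```

Every order gives about −3 dB at the half-power frequency ω = 1, while the value at ω = 10 reflects the −20n dB/dec rolloff.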
12.31 (Bode Plots) The ADSP routine bodelin plots the asymptotic Bode magnitude and phase plots
of a minimum-phase transfer function H(s). Plot the asymptotic Bode magnitude and phase plots of
the following filters.
(a) H(s) = 1/(s² + √2 s + 1) (a second-order Butterworth lowpass filter)
(b) H(s) = s/(s² + s + 1) (a second-order bandpass filter)
(c) y‴(t) + 2y″(t) + 2y′(t) + y(t) = x(t) (a third-order Butterworth lowpass filter)
12.32 (System Response) Use the ADSP routine sysresp2 to find the step response and impulse response
of the following filters and plot each result. Compare the features of the step response of each filter.
Compare the features of the impulse response of each filter.
(a) y′(t) + y(t) = x(t) (a first-order lowpass filter)
(b) y″(t) + √2 y′(t) + y(t) = x(t) (a second-order Butterworth lowpass filter)
(c) y″(t) + y′(t) + y(t) = x′(t) (a bandpass filter)
(d) y‴(t) + 2y″(t) + 2y′(t) + y(t) = x(t) (a third-order Butterworth lowpass filter)
12.33 (Frequency Response) Use the ADSP routine bodelin to obtain Bode plots of the following
filters. Also plot the frequency response on a linear scale. Compare the features of the frequency
response of each filter.
(a) y′(t) + y(t) = x(t) (a first-order lowpass filter)
(b) y″(t) + √2 y′(t) + y(t) = x(t) (a second-order Butterworth lowpass filter)
(c) y″(t) + y′(t) + y(t) = x′(t) (a bandpass filter)
(d) y‴(t) + 2y″(t) + 2y′(t) + y(t) = x(t) (a third-order Butterworth lowpass filter)
12.34 (Performance Measures) Performance measures for lowpass filters include the rise time, the 5%
settling time, the half-power bandwidth, and the time-bandwidth product. Plot the step response,
magnitude spectrum, and phase spectrum of the following systems. Then, use the ADSP routine trbw
to numerically estimate each performance measure.
(a) A first-order lowpass filter with τ = 1
(b) The cascade of two first-order filters, each with τ = 1
(c) H(s) = 1/(s² + √2 s + 1) (a second-order Butterworth lowpass filter)
(d) y‴(t) + 2y″(t) + 2y′(t) + y(t) = x(t) (a third-order Butterworth lowpass filter)
12.35 (Steady-State Response in Symbolic Form) The ADSP routine ssresp yields a symbolic
expression for the steady-state response to sinusoidal inputs (see Chapter 21 for examples of its
usage). Find the steady-state response to the input x(t) = 2 cos(3t − π/3) for each of the following
systems.
(a) H(s) = 2/(s + α), for α = 1, 2
(b) y″(t) + 4y′(t) + Cy(t) = x(t), for C = 3, 4, 5
Chapter 13
ANALOG FILTERS
13.1 Introduction
A filter may be regarded as a frequency-selective device that allows us to shape the magnitude or phase
response in a prescribed manner. The terminology of filters based on magnitude specifications is illustrated
in Figure 13.1. The passband ripple (deviation from maximum gain) and stopband ripple (deviation from
zero gain) are usually described by their attenuation (reciprocal of the linear gain).
Of the four classical filter types based on magnitude specifications, the Butterworth filter is monotonic
in the passband and stopband, the Chebyshev I filter displays ripples in the passband but is monotonic in
the stopband, the Chebyshev II filter is monotonic in the passband but has ripples in the stopband, and the
elliptic filter has ripples in both bands.
The design of analog filters typically relies on frequency specifications (passband and stopband edges)
and magnitude specifications (maximum passband attenuation and minimum stopband attenuation) to
generate a minimum-phase filter transfer function with the smallest order that meets or exceeds specifications.
Most design strategies are based on converting the given frequency specifications to those applicable to a
lowpass prototype (LPP) with a cutoff frequency of 1 rad/s (typically the passband edge), designing the
lowpass prototype, and converting to the required filter type using frequency transformations.
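This prototype-plus-transformation workflow can be sketched with SciPy's design helpers (an illustration, not the book's own procedure; `lp2hp` applies the s → ωx/s substitution discussed below):

```python
from scipy import signal

# Design a third-order Butterworth lowpass prototype (cutoff 1 rad/s),
# then frequency-transform it to a highpass filter with wx = 100 rad/s.
b_lp, a_lp = signal.butter(3, 1.0, analog=True)
b_hp, a_hp = signal.lp2hp(b_lp, a_lp, wo=100.0)   # s -> 100/s

# The prototype cutoff maps to 100 rad/s; far above it the gain is ~1.
w, h = signal.freqs(b_hp, a_hp, worN=[100.0, 1e5])
print(abs(h))
```

The same pattern works with `lp2lp`, `lp2bp`, and `lp2bs` for the other transformations of this section.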
The lowpass-to-lowpass (LP2LP) transformation converts a lowpass prototype with a cutoff frequency
of 1 rad/s to a lowpass filter with a cutoff frequency of ωx rad/s, using the transformation s → s/ωx.
This is illustrated in Figure 13.2.
The lowpass-to-highpass (LP2HP) transformation converts a lowpass prototype HP(s) with a cutoff
frequency of 1 rad/s to a highpass filter H(s) with a cutoff frequency of ωx rad/s, using the nonlinear
transformation s → ωx/s. This is illustrated in Figure 13.3.
The lowpass-to-bandpass (LP2BP) transformation uses

s → (s² + ω0²)/(sB)    (13.1)

Here, ω0 is the geometric mean of the band edges ωL and ωH, with ωL ωH = ω0², and the bandwidth is given
by B = ωH − ωL. Any pair of geometrically symmetric bandpass frequencies ωa and ωb, with ωa ωb = ω0²,
corresponds to the lowpass prototype frequency (ωb − ωa)/B. The lowpass prototype frequency at infinity
is mapped to the bandpass origin. This quadratic transformation yields a transfer function with twice the
order of the lowpass filter.
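A quick numerical check of the LP2BP mapping (a sketch assuming SciPy; the band edges wL and wH are hypothetical values chosen for illustration):

```python
from scipy import signal

# LP2BP sketch: s -> (s^2 + w0^2)/(s*B), with band edges wL = 40 and
# wH = 90 rad/s, w0 = sqrt(wL*wH) (geometric mean), B = wH - wL.
wL, wH = 40.0, 90.0
w0, B = (wL * wH) ** 0.5, wH - wL

b_lp, a_lp = signal.butter(2, 1.0, analog=True)       # prototype, 1 rad/s
b_bp, a_bp = signal.lp2bp(b_lp, a_lp, wo=w0, bw=B)    # order doubles to 4

w, h = signal.freqs(b_bp, a_bp, worN=[wL, w0, wH])
print([round(float(m), 3) for m in abs(h)])   # [0.707, 1.0, 0.707]
```

The band edges map to the prototype frequencies ±1 (half-power points), and the center frequency w0 maps to the prototype origin, where the gain is unity.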
The lowpass-to-bandstop (LP2BS) transformation is the reciprocal mapping s → sB/(s² + ω0²).
Here B = ωH − ωL and ω0² = ωH ωL. The lowpass origin maps to the bandstop frequency ω0. Since the roles
of the passband and the stopband are now reversed, a pair of geometrically symmetric bandstop frequencies
ωa and ωb, with ωa ωb = ω0², maps to the lowpass prototype frequency νLP = B/(ωb − ωa). This quadratic
transformation also yields a transfer function with twice the order of the lowpass filter.
For a highpass filter with band edges ωp and ωs, the specifications for a lowpass prototype with a passband
edge of 1 rad/s are νp = 1 rad/s and νs = ωp/ωs rad/s. The LP2HP transformation is s → ωp/s.
For bandpass or bandstop filters, we start with the band-edge frequencies arranged in increasing order as
[ω1, ω2, ω3, ω4]. If these frequencies show geometric symmetry, the center frequency ω0 satisfies ω0² = ω2ω3 = ω1ω4.
The band edges of the lowpass prototype are νp = 1 rad/s and νs = (ω4 − ω1)/(ω3 − ω2) rad/s for both
the bandpass and bandstop case. The bandwidth is B = ω4 − ω1 for bandstop filters and B = ω3 − ω2 for
bandpass filters. These values of ω0 and B are used in subsequent transformation of the lowpass prototype
to the required filter type.
If the given specifications [ω1, ω2, ω3, ω4] are not geometrically symmetric, some of the band edges must
be relocated (increased or decreased) in a way that the new transition widths do not exceed the original. This
is illustrated in Figure 13.6. It is common practice to hold the passband edges fixed and relocate the other
frequencies. For example, for the bandpass case, the passband edges [ω2, ω3] are held fixed, and ω0² = ω2ω3.
To ensure that the new transition widths do not exceed the original, we must increase ω1 or decrease ω4 (or
do both) such that ω0² = ω1ω4 (for geometric symmetry).
Figure 13.6 The band edges for bandpass and bandstop filters. In the bandpass case, the passband edges f2 and f3 are held fixed and f1 is increased or f4 decreased to ensure f0² = f2 f3; in the bandstop case, f1 and f4 are held fixed and f2 is increased or f3 decreased to ensure f0² = f1 f4.
(b) For a highpass filter with ωp = 500 rad/s and ωs = 200 rad/s, [ω1, ω2] = [200, 500] rad/s. The lowpass
prototype band edges are νp = 1 rad/s and νs = 2.5 rad/s. The highpass filter is designed from the
lowpass prototype, using the LP2HP transformation s → ωp/s, with ωp = 500 rad/s.
(c) Consider a bandpass filter with band edges [ω1, ω2, ω3, ω4] = [16, 18, 32, 48] rad/s.
For a fixed passband, we choose B = ω3 − ω2 and ω0² = ω2ω3 = 576.
Since ω1ω4 > ω0², we recompute ω4 = ω0²/ω1 = 36 rad/s, which ensures both geometric symmetry and
transition widths not exceeding the original.
Thus, [ω1, ω2, ω3, ω4] = [16, 18, 32, 36] rad/s and B = ω3 − ω2 = 32 − 18 = 14 rad/s.
With ω4 − ω1 = 36 − 16 = 20 rad/s, the lowpass prototype band edges are

νp = 1 rad/s    νs = (ω4 − ω1)/B = 20/14 = 1.4286 rad/s

The bandpass filter is designed from the LPP using the LP2BP transformation

s → (s² + ω0²)/(sB) = (s² + 576)/(14s)
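The band-edge arithmetic in this example is simple enough to verify directly (a Python sketch of the same bookkeeping):

```python
# Band-edge bookkeeping for the bandpass case, with the passband fixed.
w1, w2, w3, w4 = 16.0, 18.0, 32.0, 48.0
w0_sq = w2 * w3                      # 576, fixed by the passband edges
if w1 * w4 > w0_sq:                  # restore geometric symmetry
    w4 = w0_sq / w1                  # relocated upper stopband edge
B = w3 - w2                          # bandpass bandwidth
nu_s = (w4 - w1) / B                 # lowpass prototype stopband edge
print(w4, B, round(nu_s, 4))         # 36.0 14.0 1.4286
```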
(d) Consider a bandstop filter with band edges [ω1, ω2, ω3, ω4] = [16, 18, 32, 48] rad/s.
For a fixed passband, we choose B = ω4 − ω1, and ω0² = (16)(48) = 768.
Since ω2ω3 < ω0², we recompute ω3 = ω0²/ω2 = 42.667 rad/s to give

[ω1, ω2, ω3, ω4] = [16, 18, 42.6667, 48] rad/s    B = ω4 − ω1 = 48 − 16 = 32 rad/s
filter transfer function from this approximation. The magnitude squared function and attenuation for an
nth-order lowpass prototype are of the form

|H(ν)|² = 1/[1 + ε²Ln²(ν)]    AdB(ν) = 10 log[1 + ε²Ln²(ν)] (dB)    (13.3)

Here, Ln(ν) is an nth-order polynomial or rational function, and ε controls the passband ripple (deviation
from maximum gain). The various classical filters differ primarily in the choice for Ln²(ν) to best meet desired
specifications. We require Ln²(ν) ≈ 0 in the passband (to ensure a gain of nearly unity) and Ln²(ν) ≫ 1 in
the stopband (to ensure a gain of nearly zero). The attenuation equation is the only design equation we
need. We find the filter order n and the parameter ε by evaluating the attenuation relation at the passband
and stopband edges. This establishes the exact form of |H(ν)|², from which the prototype transfer function
HP(s) can be readily obtained. If we design the prototype to exactly meet attenuation requirements at the
cutoff frequency (usually the passband edge), the attenuation requirements will be exceeded at all the other
frequencies. The final step is to use frequency transformations to obtain the required filter.
[Magnitude sketch: the prototype gain |H(ν)| equals 1 at ν = 0, falls to 1/√(1 + ε²) at the passband edge ν = 1, and decreases toward zero beyond the stopband edge νs.]
For an nth-order Butterworth filter, the attenuation rate at high frequencies is 20n dB/dec, and the
attenuation may be approximated by

A(ν) = 10 log(1 + ε²ν^(2n)) ≈ 10 log(ε²ν^(2n)) = 20 log ε + 20n log ν    (13.6)
This equation suggests that we find the 2n roots of −1. With e^(jπ(2k−1)) = −1, we get
The poles lie on a circle of radius R = (1/ε)^(1/n) in the s-plane. The poles are equally spaced π/n radians
apart. Their angular orientation (with respect to the positive jω-axis) is given by θk = (2k − 1)π/2n. There
can never be a pole on the jω-axis because θk can never be zero. The pole locations are illustrated for n = 2
and n = 3 in Figure 13.8.
[Sketch: for n = 2 the poles lie 45° on either side of the negative real axis; for n = 3 they lie at 30°, 90°, and 150° from the positive jω-axis. Pole radius R = (1/ε)^(1/n).]
Figure 13.8 The poles of a Butterworth lowpass filter lie on a circle.
The real and imaginary parts of the left half-plane poles are given by pk = −R sin θk + jR cos θk.
Each conjugate pole pair yields a real quadratic factor of the form s² + 2sR sin θk + R².
For odd n, there is always one real pole located at s = −R. The factored form for HP(s) (which is useful
for realizing higher-order filters by cascading individual sections) may then be written

HP(s) = K/QP(s)    QP(s) = (s + R)^M ∏(k=1 to int(n/2)) (s² + 2sR sin θk + R²),  M = 0 for n even, M = 1 for n odd    (13.16)
For unit peak gain, we choose K = QP(0). We can also express the denominator QP(s) in polynomial
(unfactored) form as

QP(s) = q0 sⁿ + q1 sⁿ⁻¹ + q2 sⁿ⁻² + · · · + qn−1 s + qn    (13.17)
To find qk, we use the recursion relation

q0 = 1    qk = qk−1 · R cos[(k − 1)π/2n] / sin(kπ/2n)    (13.18)
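The recursion is easily coded and checked against the known third-order 3-dB result Q(s) = s³ + 2s² + 2s + 1 (for which R = 1). A Python sketch:

```python
import numpy as np

# Butterworth denominator coefficients via the recursion (13.18):
# q0 = 1, qk = q(k-1) * R*cos[(k-1)*pi/(2n)] / sin(k*pi/(2n)).
def butterworth_poly(n, R=1.0):
    q = [1.0]
    for k in range(1, n + 1):
        q.append(q[-1] * R * np.cos((k - 1) * np.pi / (2 * n))
                 / np.sin(k * np.pi / (2 * n)))
    return q

print([round(c, 4) for c in butterworth_poly(3)])   # [1.0, 2.0, 2.0, 1.0]
```

For n = 2 the same routine returns the familiar coefficients [1, √2, 1].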
(b) Find the attenuation at f = 7.5 kHz of a fourth-order Butterworth filter whose 1 dB passband edge is
located at 3 kHz.
We find ε² = 10^(0.1Ap) − 1 = 0.2589. Since f = 7.5 kHz corresponds to a lowpass prototype (normalized)
frequency of ν = 7.5/3 = 2.5 rad/s, and n = 4, we find
(c) Find the pole radius and orientations of a third-order Butterworth lowpass filter with a passband edge
of 100 rad/s and ε = 0.7.
The lowpass prototype poles lie on a circle of radius R = (1/ε)^(1/n) = (1/0.7)^(1/3) = 1.1262.
The pole radius of the actual filter is 100R = 112.62 rad/s. The left half-plane pole orientations are
θk = (2k − 1)π/2n = (2k − 1)π/6, k = 1, 2, 3, or 30°, 90°, and 150°, with respect to the jω-axis.
(d) (Design from a Specified Order) What is the transfer function H(s) of a third-order Butterworth
lowpass prototype with ε = 0.7 and a peak gain of 10?
The lowpass prototype poles lie on a circle of radius R = (1/ε)^(1/n) = (1/0.7)^(1/3) = 1.1262. The
left half-plane pole orientations are θk = [π/6, π/2, 5π/6] rad. The pole locations are thus s = −1.1262 and
s = −1.1262 sin(π/6) ± j1.1262 cos(π/6) = −0.5631 ± j0.9754. The denominator of H(s) is thus
Q(s) = s³ + 2.2525s² + 2.5369s + 1.4286.
For a peak gain of 10, the numerator K of H(s) = K/Q(s) is K = 10Q(0) = 14.286, and thus

H(s) = K/Q(s) = 14.286/(s³ + 2.2525s² + 2.5369s + 1.4286)
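A numerical check of this example (a Python/NumPy sketch; np.poly builds the monic polynomial from the computed pole locations):

```python
import numpy as np

# Example (d): third-order Butterworth prototype, eps = 0.7, peak gain 10.
eps, n = 0.7, 3
R = (1 / eps) ** (1 / n)
theta = [(2 * k - 1) * np.pi / (2 * n) for k in (1, 2, 3)]
poles = [-R * np.sin(t) + 1j * R * np.cos(t) for t in theta]
Q = np.real(np.poly(poles))      # denominator coefficients of H(s)
K = 10 * Q[-1]                   # numerator giving a peak (dc) gain of 10
print(np.round(Q, 4), round(K, 3))
```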
(e) (Design from High-Frequency Decay Rate) Find the transfer function of a 3-dB Butterworth
lowpass filter with a passband edge of 50 Hz and a high-frequency decay rate of 40 dB/dec.
For an nth-order filter, the high-frequency decay rate is 20n dB/dec. Thus, n = 2. The pole radius of
a 3-dB lowpass prototype is R = 1. With θk = [π/4, 3π/4] rad, the pole locations are described by
s = −sin(π/4) ± j cos(π/4) = −0.707 ± j0.707. This gives QP(s) = s² + √2 s + 1 and HP(s) = 1/QP(s).
With ωp = 2πfp = 100π rad/s, the LP2LP transformation gives the required transfer function

H(s) = HP(s/100π) = 9.8696×10⁴/(s² + 444.29s + 9.8696×10⁴)
(f) (Design from a Specified Order) Design a fourth-order Butterworth bandpass filter with a 2-dB
passband of 200 Hz and a center frequency of f0 = 1 kHz.
To design the filter, we start with a second-order lowpass prototype. We find ε² = 10^(0.1Ap) − 1 = 0.5849.
With n = 2, the pole radius is R = (1/ε)^(1/n) = 1.1435.
The pole locations are s = −R sin(π/4) ± jR cos(π/4) = −0.8086 ± j0.8086.
The prototype denominator is

QP(s) = (s + 0.8086 + j0.8086)(s + 0.8086 − j0.8086) = s² + 1.6171s + 1.3076

The prototype transfer function is

HP(s) = 1.3076/(s² + 1.6171s + 1.3076)

We now use the LP2BP transformation s → (s² + ω0²)/(Bs), with ω0 = 2πf0 = 2000π rad/s and
B = 2π(200) = 400π rad/s, to obtain the required transfer function:

HBP(s) = 2.0648×10⁶ s² / [s⁴ + 2.0321×10³ s³ + 8.1022×10⁷ s² + 8.0226×10¹⁰ s + 1.5585×10¹⁵]
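The LP2BP step of this example can be reproduced with SciPy's `lp2bp` (a sketch; the resulting denominator coefficients should agree with the transfer function above):

```python
import numpy as np
from scipy import signal

# Second-order 2-dB Butterworth prototype, then LP2BP with
# w0 = 2*pi*1000 rad/s and B = 2*pi*200 rad/s.
eps_sq = 10 ** 0.2 - 1                   # 2-dB ripple parameter, eps^2
R = eps_sq ** -0.25                      # pole radius (1/eps)^(1/n), n = 2
b_lp = [R ** 2]                          # unit dc gain prototype numerator
a_lp = [1.0, 2 * R * np.sin(np.pi / 4), R ** 2]

w0, B = 2 * np.pi * 1000, 2 * np.pi * 200
b_bp, a_bp = signal.lp2bp(b_lp, a_lp, wo=w0, bw=B)
print(a_bp)    # denominator of the fourth-order bandpass filter
```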
n    QP(s) (3-dB Butterworth prototype, R = 1)
1    1 + s
2    1 + √2 s + s²
3    (1 + s)(1 + s + s²)
4    (1 + 0.76536s + s²)(1 + 1.84776s + s²)
5    (1 + s)(1 + 0.6180s + s²)(1 + 1.6180s + s²)
6    (1 + 0.5176s + s²)(1 + √2 s + s²)(1 + 1.9318s + s²)
Choosing K = 1.965 for unit gain, the prototype transfer function becomes
HP(s) = 1.965/(s⁵ + 3.704s⁴ + 6.861s³ + 7.853s² + 5.556s + 1.965)
We obtain the required filter as H(s) = HP(s/4) using the LP2LP transformation s → s/4 to get
H(s) = HP(s/4) = 2012.4/(s⁵ + 14.82s⁴ + 109.8s³ + 502.6s² + 1422.3s + 2012.4)
Comment: Had we been asked to exactly meet the attenuation at the stopband edge, we would first
compute the prototype frequency ν̃s at which the stopband attenuation is exactly met, and the stretching
factor α = νs/ν̃s, as

ν̃s = [(10^(0.1As) − 1)/ε²]^(1/2n) = 1.8124    α = νs/ν̃s = 2/1.8124 = 1.1035
We would need to frequency scale HP(s) using s → s/α and then denormalize using s → s/4. We
can combine these operations and scale HP(s) to H2(s) = HP(s/4α) = HP(s/4.4141) to match the
attenuation at the stopband edge. We get
H2(s) = HP(s/4.4141) = 3293.31/(s⁵ + 16.35s⁴ + 133.68s³ + 675.44s² + 2109.23s + 3293.31)
The linear and decibel magnitudes of the two filters are sketched in Figure E13.3. The passband gain is
0.89 (corresponding to Ap = 1 dB). The stopband gain is 0.1 (corresponding to As = 20 dB). Note how
the design attenuation is exactly met at the passband edge for the original filter H(s) (shown light),
and at the stopband edge for the redesigned filter H2(s) (shown dark).
Figure E13.3 Butterworth lowpass filters for Example 13.3(a and b): (a) linear magnitude and (b) decibel magnitude, meeting passband, stopband (dark), and 3-dB (dashed) specifications.
(b) (A Design with Additional Constraints) Consider the design of a Butterworth lowpass filter to
meet the following specifications: Ap ≤ 1 dB for ω ≤ 4 rad/s, As ≥ 20 dB for ω ≥ 8 rad/s, and a
half-power frequency of ω3 = 6 rad/s.
Since we must exactly meet specifications at the half-power frequency ω3, it is convenient to select
the cutoff frequency as ν3 = 1 rad/s and the prototype band edges as νp = ωp/ω3 = 4/6 rad/s and
νs = ωs/ω3 = 8/6 rad/s. Since ν3 = 1, we have ε² = 1. To find the filter order, we solve the attenuation
relation for n at both νp = ωp/ω3 and νs = ωs/ω3 and choose n as the larger value. Thus,
HP(s) = 1/(s⁸ + 5.126s⁷ + 13.138s⁶ + 21.848s⁵ + 25.691s⁴ + 21.848s³ + 13.138s² + 5.126s + 1)
H(s) = 1679616/(s⁸ + 30.75s⁷ + 473s⁶ + 4718s⁵ + 33290s⁴ + 169876s³ + 612923s² + 1434905s + 1679616)
Its linear and decibel magnitude, shown dashed in Figure E13.3, exactly meets the half-power (3-dB)
frequency requirement, and exceeds the attenuation specifications at the other band edges.
(c) (A Bandstop Filter) Design a Butterworth bandstop filter with 2-dB passband edges of 30 Hz and
100 Hz, and 40-dB stopband edges of 50 Hz and 70 Hz.
The band edges are [f1, f2, f3, f4] = [30, 50, 70, 100] Hz. Since f1 f4 = 3000 and f2 f3 = 3500,
the specifications are not geometrically symmetric. Assuming a fixed passband, we relocate the upper
stopband edge f3 to ensure geometric symmetry f2 f3 = f1 f4. This gives f3 = (30)(100)/50 = 60 Hz.
The lowpass prototype band edges are νp = 1 rad/s and νs = (f4 − f1)/(f3 − f2) = 7 rad/s.
We compute ε² = 10^(0.1Ap) − 1 = 0.5849 and the lowpass prototype order (n = 3), which leads to

HP(s) = 1.3076/(s³ + 2.1870s² + 2.3915s + 1.3076)
The linear and decibel magnitude of this filter is sketched in Figure E13.3C.
Figure E13.3C Butterworth bandstop filter of Example 13.3(c): (a) linear magnitude meeting the passband specifications and (b) decibel magnitude.
HP(s) = 1/(s + R)    HP(ν) = 1/(R + jν)    φ(ν) = −tan⁻¹(ν/R)    (13.21)

HP(s) = 1/(s² + 2sR sin θk + R²)    (13.22)

HP(ν) = 1/{[1 − (ν/R)²] + j2(ν/R) sin θk}    φP(ν) = −tan⁻¹[2(ν/R) sin θk / (1 − (ν/R)²)]    (13.23)
The total phase φ(ν) equals the sum of the contributions due to each of the individual sections. The group
delay tg = −dφ/dν may be found by differentiating the phase and leads to the result

tg(ν) = (1/R) Σ(k=1 to n) (ν/R)^(2(k−1)) / {[1 + (ν/R)^(2n)] sin θk}    (13.24)
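This closed form can be validated against a numerical derivative of the phase; a Python sketch for the 3-dB third-order prototype (R = 1), treating the reconstructed formula as an assumption to be checked:

```python
import numpy as np

# Group delay of an nth-order 3-dB Butterworth prototype via the
# closed-form sum over sections (R is the pole radius, here R = 1).
def butter_group_delay(nu, n, R=1.0):
    x = nu / R
    total = sum(x ** (2 * (k - 1)) / np.sin((2 * k - 1) * np.pi / (2 * n))
                for k in range(1, n + 1))
    return total / (R * (1 + x ** (2 * n)))

# Numerical check: phase derivative of H(s) = 1/[(s + 1)(s^2 + s + 1)], n = 3.
den = np.polymul([1.0, 1.0], [1.0, 1.0, 1.0])
w, dw = 0.5, 1e-6
ph = [np.angle(1 / np.polyval(den, 1j * v)) for v in (w - dw, w + dw)]
tg_num = -(ph[1] - ph[0]) / (2 * dw)
print(butter_group_delay(w, 3), tg_num)   # the two should agree closely
```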
The phase approaches −nπ/2 radians at high frequencies. The group delay is fairly constant for small
frequencies but peaks near the passband edge. As the filter order n increases, the delay becomes less flat,
and the peaking is more pronounced. The phase or group delay of other filter types may be found using the
appropriate frequency transformations.
The Chebyshev polynomials can also be obtained from the recursion relation Tn(x) = 2x Tn−1(x) − Tn−2(x), with T0(x) = 1 and T1(x) = x.
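A sketch of this recursion in Python, with coefficients kept in descending powers of x:

```python
import numpy as np

# Chebyshev polynomials via T_n(x) = 2x*T_(n-1)(x) - T_(n-2)(x),
# starting from T_0 = 1 and T_1 = x.
def cheb_poly(n):
    t_prev, t = np.array([1.0]), np.array([1.0, 0.0])
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, np.polysub(np.polymul([2.0, 0.0], t), t_prev)
    return t

print(cheb_poly(2))   # T2(x) = 2x^2 - 1
print(cheb_poly(4))   # T4(x) = 8x^4 - 8x^2 + 1
```

Note the leading coefficient 2^(n−1) and the alternating signs, consistent with the properties listed below Figure 13.9.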
Figure 13.9 The Chebyshev polynomials Tn(x) (shown for several orders over −1 ≤ x ≤ 1)
Figure 13.9 illustrates the characteristics of Chebyshev polynomials. The leading coefficient of an nth-order
Chebyshev polynomial always equals 2ⁿ⁻¹. The constant coefficient is always ±1 for even n and 0
for odd n. The nonzero coefficients alternate in sign and point to the oscillatory nature of the polynomial.
Chebyshev polynomials possess two remarkable characteristics in the context of both polynomial approximation
and filter design. First, Tn(x) oscillates about zero in the interval (−1, 1), with n + 1 maxima and
minima (that equal ±1). Outside the interval (−1, 1), |Tn(x)| shows a very steep increase. Over (−1, 1),
the normalized Chebyshev polynomial T̃n(x) = Tn(x)/2ⁿ⁻¹ has the smallest maximum absolute value (which
equals 1/2ⁿ⁻¹) among all normalized nth-degree polynomials. This is the celebrated Chebyshev theorem.
[Magnitude sketch: Chebyshev lowpass prototypes for n = 2 and n = 3, with the gain rippling between 1 and 1/√(1 + ε²) in the passband and decreasing monotonically beyond the passband edge toward νs.]
The filter order can be found by counting the number of maxima and minima in the passband (excluding
the passband edge). Since |Tn(ν)| increases monotonically (and rapidly) for ν > 1, the gain decreases rapidly
outside the passband. The attenuation rate at high frequencies is 20n dB/dec. Since Tn(ν) ≈ 2ⁿ⁻¹νⁿ for
large ν, the attenuation may be approximated by

A(ν) = 10 log[1 + ε²Tn²(ν)] ≈ 20 log εTn(ν) ≈ 20 log(ε 2ⁿ⁻¹νⁿ) = 20 log ε + 20(n − 1) log 2 + 20n log ν    (13.30)

The Chebyshev filter provides an additional 6(n − 1) dB of attenuation in the stopband compared with the
Butterworth filter, and this results in a much sharper transition for the same filter order. This improvement
is made at the expense of introducing ripples in the passband, however.
This results in two relations, obtained by equating the real and imaginary parts:

cos(nα) cosh(nβ) = 0    sin(nα) sinh(nβ) = 1/ε    (13.39)

Since cosh(nβ) ≥ 1 for all nβ, the first relation gives

cos(nα) = 0    or    αk = (2k − 1)π/2n,  k = 1, 2, . . . , 2n    (13.40)

Note that αk (in radians) has the same expression as the Butterworth pole angles θk. For β > 0, with sin(nαk) = ±1, we get

sinh(nβ) = 1/ε    or    β = (1/n) sinh⁻¹(1/ε)    (13.41)
Since s = j cos(α + jβ), we have s = sin α sinh β + j cos α cosh β. The poles of HP(s) are just the left half-plane roots (with negative real parts):

pk = −sin θk sinh β + j cos θk cosh β,  k = 1, 2, . . . , n
The real parts σk and imaginary parts ωk of the poles satisfy

σk²/sinh²β + ωk²/cosh²β = 1    (13.44)
This suggests that the Chebyshev poles lie on an ellipse in the s-plane, with a major semi-axis (along the
jω-axis) that equals cosh β and a minor semi-axis (along the σ-axis) that equals sinh β. The pole locations
for n = 2 and n = 3 are illustrated in Figure 13.11.
Figure 13.11 Pole locations of Chebyshev lowpass prototypes (shown for n = 2 and n = 3)
Comparison with the 3-dB Butterworth prototype poles pB = −sin θk + j cos θk reveals that the real part
of the Chebyshev pole is scaled by sinh β, and the imaginary part by cosh β. The Chebyshev poles may thus
be found directly from the orientations of the Butterworth poles (for the same order) that lie on a circle, by
contracting the circle to an ellipse. This leads to the geometric evaluation illustrated in Figure 13.12.
We draw concentric circles of radii cosh β and sinh β, with radial lines along θk, the angular orientations
of the Butterworth poles. On each radial line, label the point of intersection with the larger circle (of radius
cosh β) as A, and with the smaller circle as B. Draw a horizontal line through A and a vertical line through
B. Their intersection represents a Chebyshev pole location.
Figure 13.12 Geometric construction of the Chebyshev poles from the Butterworth pole orientations
The lowpass prototype transfer function HP(s) = K/QP(s) may be readily obtained in factored form by
expressing the denominator QP(s) as

QP(s) = (s − p1)(s − p2) · · · (s − pn)    (13.45)

Each conjugate pole pair will yield a quadratic factor. For odd n, QP(s) will also contain the linear factor
s + sinh β.
For unit dc gain, we choose K = QP(0) for any filter order. For unit peak gain, however, we choose

K = QP(0) for n odd    K = QP(0)/√(1 + ε²) for n even    (13.46)
Figure E13.4A Chebyshev magnitude spectrum for Example 13.4(a)
The filter order is n = 2 because we see a total of two maxima and minima in the passband. Since
T2²(0) = 1 and |H(0)| = 0.9, we find |H(0)|² = 1/(1 + ε²) = (0.9)². This gives ε² = 0.2346. Since
T2(ν) = 2ν² − 1, we have T2²(ν) = 4ν⁴ − 4ν² + 1. Thus,

|H(ν)|² = 1/[1 + ε²T2²(ν)] = 1/[1 + ε²(4ν⁴ − 4ν² + 1)] = 1/(1.2346 − 0.9383ν² + 0.9383ν⁴)
(b) What is the attenuation at f = 1.5 kHz of a second-order Chebyshev lowpass filter with a 1-dB
passband of 500 Hz?
With Ap = 1 dB, we find ε² = 10^(0.1Ap) − 1 = 0.2589. The prototype (normalized) frequency
corresponding to f = 1.5 kHz is ν = 3 rad/s, and the attenuation at ν = 3 rad/s is found from
(c) Find the half-power frequency of a fifth-order Chebyshev lowpass filter with a 2-dB passband edge at
1 kHz.
With Ap = 2 dB, we find ε² = 10^(0.1Ap) − 1 = 0.5849, ε = 0.7648.
The prototype half-power frequency is ν3 = cosh[(1/n) cosh⁻¹(1/ε)] = 1.01174.
The half-power frequency of the actual filter is thus f3 = 1000ν3 = 1011.74 Hz.
(d) (Design from a Specified Order) Find the transfer function of a third-order Chebyshev lowpass
prototype with a passband ripple of 1.5 dB.
We find ε² = 10^(0.1Ap) − 1 = 0.4125. With n = 3, we find

θk = [π/6, π/2, 5π/6]    β = (1/n) sinh⁻¹(1/ε) = 0.4086    sinh β = 0.4201    cosh β = 1.0847

The Chebyshev poles are pk = −sin θk sinh β + j cos θk cosh β = [−0.2101 ± j0.9393, −0.4201].
The denominator of HP(s) is thus QP(s) = s³ + 0.8402s² + 1.1030s + 0.3892.
Since n is odd, the prototype transfer function HP(s) with unit (dc or peak) gain is

HP(s) = 0.3892/(s³ + 0.8402s² + 1.1030s + 0.3892)
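This example can be cross-checked against SciPy's Chebyshev-I prototype routine `cheb1ap` (a sketch; `rp` is the passband ripple in dB, and SciPy normalizes the passband edge to 1 rad/s as the text does):

```python
import numpy as np
from scipy import signal

# Third-order Chebyshev lowpass prototype with 1.5-dB passband ripple,
# built from the pole formula pk = -sin(tk)*sinh(b) + j*cos(tk)*cosh(b).
eps = (10 ** 0.15 - 1) ** 0.5
n = 3
beta = np.arcsinh(1 / eps) / n
theta = [(2 * k - 1) * np.pi / (2 * n) for k in range(1, n + 1)]
poles = [-np.sin(t) * np.sinh(beta) + 1j * np.cos(t) * np.cosh(beta)
         for t in theta]
Q = np.real(np.poly(poles))              # denominator coefficients
print(np.round(Q, 4))                    # compare with the text's QP(s)

z, p, k = signal.cheb1ap(n, 1.5)         # SciPy's prototype poles
print(np.round(np.real(np.poly(p)), 4))  # should match Q
```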
(e) (A Bandpass Filter) Design a fourth-order Chebyshev bandpass filter with a peak gain of unity, a
2-dB passband of 200 Hz, and a center frequency of f0 = 1 kHz.
To design the filter, we start with a second-order lowpass prototype. We find ε² = 10^(0.1Ap) − 1 = 0.5849.
With n = 2, θk = [π/4, 3π/4], β = (1/n) sinh⁻¹(1/ε) = 0.5415, sinh β = 0.5684, and cosh β = 1.1502.
The Chebyshev poles are pk = −sin θk sinh β + j cos θk cosh β = −0.4019 ± j0.8133. The denominator
of the prototype transfer function HP(s) is thus QP(s) = s² + 0.8038s + 0.8231.
Since n is even, the numerator of HP(s) for unit peak gain is K = QP(0)/√(1 + ε²) = 0.8231/√1.5849 = 0.6538, and the
prototype transfer function with unit peak gain is

HP(s) = K/QP(s) = 0.6538/(s² + 0.8038s + 0.8231)
With ω0 = 2πf0 = 2000π rad/s and B = 2π(200) = 400π rad/s, the LP2BP transformation s → (s² + ω0²)/(Bs)
yields the required transfer function H(s) as

H(s) = 1.0234×10⁶ s² / [s⁴ + 1.0101×10³ s³ + 8.0257×10⁷ s² + 3.9877×10¹⁰ s + 1.5585×10¹⁵]
The denominator QP(s) of the prototype transfer function HP(s) = K/QP(s) is thus
QP(s) = s³ + 0.9883s² + 1.2384s + 0.4913.
Since n is odd, we choose K = QP(0) = 0.4913 for unit (dc or peak) gain to get

HP(s) = 0.4913/(s³ + 0.9883s² + 1.2384s + 0.4913)
Using the attenuation equation, we compute Ap = A(νp) = 1 dB and As = A(νs) = 22.456 dB. The
attenuation specification is thus exactly met at the passband edge νp and exceeded at the stopband
edge νs.
Finally, the LP2LP transformation $s \to s/4$ gives the required lowpass filter $H(s)$ as
$$H(s) = H_P(s/4) = \frac{31.4436}{s^3 + 3.9534s^2 + 19.8145s + 31.4436}$$
Comment: To match $A_s$ to the stopband edge, we first find $\tilde\nu_s$ and the stretching factor $\alpha$:
$$\tilde\nu_s = \cosh\left[\frac{1}{n}\cosh^{-1}\frac{(10^{0.1A_s}-1)^{1/2}}{\epsilon}\right] = 1.8441 \qquad \alpha = \frac{\nu_s}{\tilde\nu_s} = \frac{2}{1.8441} = 1.0845$$
The filter $H_S(s) = H_P(s/\alpha)$ matches the stopband specs and is given by
$$H_S(s) = H_P(s/\alpha) = \frac{0.6267}{s^3 + 1.0719s^2 + 1.4566s + 0.6267}$$
The LP2LP transformation $s \to s/4$ on $H_S(s)$ gives the required filter $H_2(s) = H_S(s/4)$. We can also combine the two scaling operations into $H_2(s) = H_P(s/4\alpha) = H_P(s/4.3381)$, to get
$$H_2(s) = H_S(s/4) = H_P(s/4.3381) = \frac{40.1096}{s^3 + 4.2875s^2 + 23.3057s + 40.1096}$$
The linear and decibel magnitude of these filters is plotted in Figure E13.5 and shows an exact match
at the passband edge for H(s) (shown light) and at the stopband edge for H2 (s) (shown dark).
[Figure E13.5 Chebyshev lowpass filters for Example 13.5(a and b): (a) passband, stopband (dark), and 3-dB (dashed) specs (linear magnitude); (b) dB magnitude of the filters in (a)]
(b) We design a Chebyshev lowpass filter to meet the specifications: $A_p \le 1$ dB for $\omega \le 4$ rad/s, $A_s \ge 20$ dB for $\omega \ge 8$ rad/s, and $\omega_3 = 6$ rad/s.
We find an initial estimate of the order (by ignoring the half-power frequency specification) as
$$\epsilon^2 = 10^{0.1A_p} - 1 = 0.2589 \qquad n = 2.783 \quad\Rightarrow\quad n = 3$$
Next, we normalize the stopband edge with respect to the half-power frequency to give $\nu_s = 8/6$ rad/s. From the estimated order ($n = 3$), we compute $\nu_3$ as
$$\nu_3 = \cosh\left[\tfrac{1}{3}\cosh^{-1}(1/\epsilon)\right] = 1.0949$$
We use $\nu_3$ to find the true stopband edge $\tilde\nu_s = \nu_s\nu_3$ and check whether the attenuation $A(\tilde\nu_s)$ exceeds $A_s$. If not, we increase n by 1, recompute $\nu_3$ for the new order, and repeat the previous step until it does. Starting with n = 3, we successively find
  n     $\nu_3$     $\tilde\nu_s = \nu_s\nu_3$     $\tilde A_s = A(\tilde\nu_s)$     $\tilde A_s \ge 20$ dB?
  3     1.0949      1.4598                          12.5139                           No
  4     1.0530      1.4040                          18.4466                           No
  5     1.0338      1.3784                          24.8093                           Yes
Thus, n = 5. With this value of n, we compute the prototype transfer function $H_P(s)$ as
$$H_P(s) = \frac{0.1228}{s^5 + 0.9368s^4 + 1.6888s^3 + 0.9744s^2 + 0.5805s + 0.1228}$$
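The order search behind the table can be coded directly from the relations in the text, $A(\nu) = 10\log_{10}[1 + \epsilon^2 T_n^2(\nu)]$ and $\nu_3 = \cosh[\tfrac{1}{n}\cosh^{-1}(1/\epsilon)]$. The loop below is an illustrative sketch, not an ADSP routine:

```python
import math

def cheb_T(n, x):
    """Chebyshev polynomial Tn(x) via its cosh/cos closed forms."""
    return math.cosh(n * math.acosh(x)) if x >= 1 else math.cos(n * math.acos(x))

Ap, As, nu_s = 1.0, 20.0, 8.0 / 6.0        # specs normalized to w3 = 6 rad/s
eps = math.sqrt(10**(0.1 * Ap) - 1)

n = 3                                       # initial order estimate from the text
while True:
    nu3 = math.cosh(math.acosh(1 / eps) / n)    # half-power frequency
    nus_true = nu_s * nu3                       # true stopband edge
    A = 10 * math.log10(1 + eps**2 * cheb_T(n, nus_true)**2)
    if A >= As:
        break
    n += 1
```

The loop stops at n = 5 with an attenuation of about 24.81 dB at the true stopband edge, matching the last row of the table.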
We rescale $H_P(s)$ to $H_3(s) = H_P(s\nu_3)$ (for unit half-power frequency) and scale it to $H_A(s) = H_3(s/\omega_3) = H_3(s/6)$ to get the required filter that exactly meets 3-dB specifications. We can implement these two steps together to get $H_A(s) = H_P(1.0338s/6) = H_P(s/5.8037)$ as
$$H_A(s) = \frac{808.7896}{s^5 + 5.4371s^4 + 56.8852s^3 + 190.4852s^2 + 658.6612s + 808.7896}$$
Its linear and decibel magnitude, shown dashed in Figure E13.5, exactly meets the half-power (3-dB)
frequency requirement, and exceeds the attenuation specifications at the other band edges.
(c) (A Bandpass Filter) Let us design a Chebyshev bandpass filter for which we are given:
Passband edges: $[\omega_1, \omega_2, \omega_3, \omega_4] = [0.89, 1.019, 2.221, 6.155]$ rad/s
Maximum passband attenuation: $A_p = 2$ dB        Minimum stopband attenuation: $A_s = 20$ dB
The frequencies are not geometrically symmetric. So, we assume fixed passband edges and compute $\omega_0^2 = \omega_2\omega_3 = 2.2632$ (i.e., $\omega_0 = 1.5045$ rad/s). Since $\omega_1\omega_4 > \omega_0^2$, we decrease $\omega_4$ to $\omega_4 = \omega_0^2/\omega_1 = 2.54$ rad/s.
Then, $B = \omega_3 - \omega_2 = 1.202$ rad/s, $\nu_p = 1$ rad/s, and
$$\nu_s = \frac{\omega_4 - \omega_1}{B} = \frac{2.54 - 0.89}{1.202} = 1.3738 \text{ rad/s}$$
The ripple factor and order work out to $\epsilon^2 = 10^{0.1A_p} - 1 = 0.5849$ and $n = 4$.
The half-power frequency is $\nu_3 = \cosh[\tfrac{1}{n}\cosh^{-1}(1/\epsilon)] = 1.018$. To find the LHP poles of the prototype filter, we need
$$\nu = \tfrac{1}{n}\sinh^{-1}(1/\epsilon) = 0.2708 \qquad \theta_k = \frac{(2k-1)\pi}{2n} = \frac{(2k-1)\pi}{8}, \quad k = 1, 2, 3, 4$$
From the LHP poles $p_k = -\sinh\nu\sin\theta_k + j\cosh\nu\cos\theta_k$, we compute $p_1, p_3 = -0.1049 \pm j0.958$ and $p_2, p_4 = -0.2532 \pm j0.3968$. The denominator $Q_P(s)$ of the prototype $H_P(s) = K/Q_P(s)$ is thus $Q_P(s) = s^4 + 0.7162s^3 + 1.2565s^2 + 0.5168s + 0.2058$ and, with $K = Q_P(0)/\sqrt{1+\epsilon^2} = 0.1634$ for unit peak gain,
$$H_P(s) = \frac{0.1634}{s^4 + 0.7162s^3 + 1.2565s^2 + 0.5168s + 0.2058}$$
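The geometric-symmetry bookkeeping at the start of this example is easy to mistype, so a quick sketch is worth having (illustrative Python, not an ADSP routine). Note the small difference from the text's rounded $\nu_s = 1.3738$, which comes from quoting $\omega_4$ as 2.54:

```python
# Band edges [w1, w2, w3, w4]; passband is [w2, w3] (Example 13.5(c))
w1, w2, w3, w4 = 0.89, 1.019, 2.221, 6.155
w0sq = w2 * w3                       # fixed passband: w0^2 = w2 * w3
if w1 * w4 > w0sq:                   # restore geometric symmetry
    w4 = w0sq / w1                   # shrink the upper stopband edge
B = w3 - w2                          # bandwidth
nu_p = 1.0                           # prototype passband edge
nu_s = (w4 - w1) / B                 # prototype stopband edge
```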
13.5 The Inverse Chebyshev Approximation 421
We transform this using the LP2BP transformation $s \to (s^2 + \omega_0^2)/sB$ to give the eighth-order analog bandpass filter $H(s)$ as
$$H(s) = \frac{0.34s^4}{s^8 + 0.86s^7 + 10.87s^6 + 6.75s^5 + 39.39s^4 + 15.27s^3 + 55.69s^2 + 9.99s + 26.25}$$
The linear and decibel magnitude of this filter is shown in Figure E13.5C.
[Figure E13.5C Chebyshev I bandpass filter for Example 13.5(c): (a) Chebyshev BPF meeting passband specs (linear magnitude); (b) dB magnitude of the filter in (a)]
Since $T_n(\nu) \approx 2^{n-1}\nu^n$ for large $\nu$, the phase at high frequencies approaches $-n\pi/2$ radians. The phase response of the Chebyshev filter is not nearly as linear as that of a Butterworth filter of the same order. We would like to improve the delay performance, and somehow retain equiripple behavior to ensure a steep transition.
The equiripple property clearly dictates the use of Chebyshev polynomials. To improve the delay characteristics, we need, somehow, to transfer the ripples to a region outside the passband. This is achieved by a frequency transformation that reverses the characteristics of the normal Chebyshev response. This results in the inverse Chebyshev filter, or Chebyshev II filter, with equiripple behavior in the stopband and a monotonic response (which is also maximally flat) in the passband. Consider an nth-order Chebyshev lowpass prototype, with ripple $\epsilon$, described by
$$|H_C(\nu)|^2 = \frac{1}{1 + \epsilon^2 L^2(\nu)} = \frac{1}{1 + \epsilon^2 T_n^2(\nu)} \qquad (13.50)$$
[Figure 13.13 The responses $|H_C(\nu)|^2$, $|H_C(1/\nu)|^2$, and $|H(\nu)|^2 = 1 - |H_C(1/\nu)|^2$, with passband ripple between 1 and $1/(1+\epsilon^2)$]
What we need next is a means to convert this to a lowpass form. We cannot use the inverse transformation $\nu \to 1/\nu$ again because it would relocate the ripples in the passband. Subtracting $|H_C(1/\nu)|^2$ from unity, however, results in
$$|H(\nu)|^2 = 1 - |H_C(1/\nu)|^2 = 1 - \frac{1}{1 + \epsilon^2 T_n^2(1/\nu)} = \frac{\epsilon^2 T_n^2(1/\nu)}{1 + \epsilon^2 T_n^2(1/\nu)} \qquad (13.52)$$
The function $|H(\nu)|^2$ (also shown in Figure 13.13) now possesses a lowpass form that is monotonic in the passband and rippled in the stopband, just as required. The ripples in $|H(\nu)|^2$ start at $\nu = 1$ rad/s, which defines the start of the stopband, the frequency where the response first reaches the maximum stopband magnitude. This suggests that we normalize the frequency with respect to $\omega_s$, the stopband edge. In keeping with previous forms, we may express the inverse Chebyshev filter characteristic as
$$|H(\nu)|^2 = \frac{\epsilon^2 T_n^2(1/\nu)}{1 + \epsilon^2 T_n^2(1/\nu)} = \frac{1}{1 + [1/\epsilon^2 T_n^2(1/\nu)]} = \frac{1}{1 + L_{2n}^2(\nu)} \qquad (13.53)$$
The function $L_{2n}^2(\nu)$ is now a rational function rather than a polynomial. The general form of a maximally flat rational function is $N(\nu)/[N(\nu) + A\nu^{2n}]$, where the degree of $N(\nu)$ is less than 2n. To show that $|H(\nu)|^2$ yields a maximally flat response at $\nu = 0$, we start with $T_n(1/\nu)$ (using n = 2 and n = 3 as examples):
$$T_2(1/\nu) = \frac{2}{\nu^2} - 1 = \frac{2 - \nu^2}{\nu^2} \qquad T_3(1/\nu) = \frac{4}{\nu^3} - \frac{3}{\nu} = \frac{4 - 3\nu^2}{\nu^3} \qquad (13.54)$$
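For n = 2, substituting $T_2(1/\nu) = (2-\nu^2)/\nu^2$ into (13.53) gives $|H(\nu)|^2 = \epsilon^2(2-\nu^2)^2/[\epsilon^2(2-\nu^2)^2 + \nu^4]$, which is exactly the maximally flat form $N(\nu)/[N(\nu) + A\nu^{2n}]$ with $N(\nu) = \epsilon^2(2-\nu^2)^2$ and A = 1. A quick numerical spot check (illustrative only; the ripple value is arbitrary):

```python
eps = 0.4                                    # an arbitrary ripple factor
for nu in [0.1, 0.5, 0.9, 1.5]:
    T2 = 2 / nu**2 - 1                       # T2(1/nu) = (2 - nu^2)/nu^2
    lhs = eps**2 * T2**2 / (1 + eps**2 * T2**2)          # eq. (13.53)
    N = eps**2 * (2 - nu**2)**2
    rhs = N / (N + nu**4)                    # maximally flat rational form
    assert abs(lhs - rhs) < 1e-12
```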
The linear and decibel magnitude of H(s) (dark) and H2 (s) (light) is shown in Figure E13.6.
[Figure E13.6 Chebyshev II lowpass filters for Example 13.6(a and b): (a) passband, stopband (dark), and 3-dB (dashed) specs (linear magnitude); (b) dB magnitude of the lowpass filters in (a)]
(b) Consider the design of a Chebyshev II lowpass filter that meets the specifications $A_p \le 1$ dB for $\omega \le 4$ rad/s, $A_s \ge 20$ dB for $\omega \ge 8$ rad/s, and $\omega_3 = 6$ rad/s.
We find an initial estimate of the order (by ignoring the half-power frequency specification) as
$$\nu_p = \frac{\omega_p}{\omega_s} = 0.5 \qquad \epsilon^2 = \frac{1}{10^{0.1A_s} - 1} = 0.0101 \qquad n = \frac{\cosh^{-1}\{[1/(10^{0.1A_p}-1)\epsilon^2]^{1/2}\}}{\cosh^{-1}(1/\nu_p)} = 2.78 \quad\Rightarrow\quad n = 3$$
We normalize the stopband edge with respect to the half-power frequency to give $\nu_s = 8/6$. The half-power frequency is $\nu_3 = 1/\cosh[\tfrac{1}{n}\cosh^{-1}(1/\epsilon)] = 0.65$. We use this to find the true stopband edge $\tilde\nu_s = \nu_s\nu_3$ and check that the attenuation $A(\tilde\nu_s)$ exceeds $A_s$. If not, we increase n by 1, recompute $\nu_3$ for the new order, and repeat the previous step until it does. Starting with n = 3, we find
  n     $\nu_3$     $\tilde\nu_s = \nu_s\nu_3$     $\tilde A_s = A(\tilde\nu_s)$     $\tilde A_s \ge 20$ dB?
  3     0.65        0.8667                          11.6882                           No
  4     0.7738      1.0318                          25.2554                           Yes
Thus, n = 4 and $\nu_3 = 0.7738$, and we find the prototype transfer function $H_P(s)$ as
$$H_P(s) = \frac{0.1s^4 + 0.8s^2 + 0.8}{s^4 + 2.2615s^3 + 2.6371s^2 + 1.7145s + 0.8}$$
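The same iterative search can be coded for the Chebyshev II case, where the attenuation follows from (13.53) as $A(\nu) = 10\log_{10}[1 + 1/(\epsilon^2 T_n^2(1/\nu))]$ and the half-power frequency is $\nu_3 = 1/\cosh[\tfrac{1}{n}\cosh^{-1}(1/\epsilon)]$. The loop below is an illustrative sketch:

```python
import math

def cheb_T(n, x):
    """Chebyshev polynomial Tn(x) for any real x >= 0."""
    return math.cosh(n * math.acosh(x)) if x >= 1 else math.cos(n * math.acos(x))

As, nu_s = 20.0, 8.0 / 6.0
eps = math.sqrt(1 / (10**(0.1 * As) - 1))    # eps^2 = 0.0101

n = 3                                        # initial estimate from the text
while True:
    nu3 = 1 / math.cosh(math.acosh(1 / eps) / n)   # half-power frequency
    nus_true = nu_s * nu3                          # true stopband edge
    A = 10 * math.log10(1 + 1 / (eps**2 * cheb_T(n, 1 / nus_true)**2))
    if A >= As:
        break
    n += 1
```

The loop stops at n = 4, reproducing the last row of the table.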
We first scale $H_P(s)$ to $H_3(s) = H_P(s\nu_3)$ (for unit half-power frequency), and then scale $H_3(s)$ to $H(s) = H_3(s/\omega_3) = H_3(s/6)$ to get the required filter that exactly meets 3-dB specifications. We can implement these two steps together to get $H(s) = H_P(0.7738s/6) = H_P(s/7.7535)$ as
$$H(s) = H_P(s/7.7535) = \frac{0.1s^4 + 48.0940s^2 + 2891.2863}{s^4 + 17.5344s^3 + 158.5376s^2 + 799.1547s + 2891.2863}$$
Its linear and decibel magnitude, shown dashed in Figure E13.6, exactly meets the half-power (3-dB)
frequency requirement and exceeds the attenuation specifications at the other band edges.
Comment: We designed a Chebyshev I filter for the same specifications in Example 13.5(a and b). With $\omega_3$ not specified, both types (Chebyshev I and II) yield identical orders (n = 3). With $\omega_3$ also specified, the Chebyshev I filter yields n = 5, but the Chebyshev II filter yields n = 4 (a lower order).
13.6 The Elliptic Approximation 427
[Figure: magnitude $|H(\nu)|$ of an elliptic lowpass prototype, with passband ripple between 1 and $1/(1+\epsilon^2)^{1/2}$ up to $\nu = 1$, and equiripple stopband lobes beyond $\nu_s$]
Here, $R_n(\nu, \xi)$ is the so-called Chebyshev rational function, $\epsilon^2$ describes the passband ripple, and the additional parameter $\xi$ provides a measure of the stopband ripple magnitude. Clearly, $R_n(\nu, \xi)$ must be sought in the rational function form $A(\nu^2)/B(\nu^2)$, with both the numerator and denominator satisfying near optimal constraints and possessing properties analogous to Chebyshev polynomials. This implies that, for a given order n,
1. $R_n(\nu, \xi)$ should exhibit oscillatory behavior with equal extrema in the passband ($|\nu| < 1$), and with all its n zeros within the passband.
2. $R_n(\nu, \xi)$ should exhibit oscillatory behavior, with equal extrema in the stopband ($|\nu| > 1$), and with all its n poles within the stopband.
3. $R_n(\nu, \xi)$ should be even symmetric if n is even and odd symmetric if n is odd.
We also impose the additional simplifying constraint
$$R_n(1/\nu, \xi) \propto \frac{1}{R_n(\nu, \xi)} \qquad (13.72)$$
This provides a reciprocal relation between the poles and zeros of $R_n$ and suggests that if we can find a function of this form with equiripple behavior in the passband $0 < \nu < 1$, it will automatically result in equiripple behavior in the reciprocal range $1 < \nu < \infty$ representing the stopband. The functional form for $R_n(\nu, \xi)$ that meets these criteria may be described in terms of its root locations $\nu_k$ by
$$R_n(\nu, \xi) = C\,\nu^N \prod_{k=1}^{\mathrm{int}(n/2)} \frac{\nu^2 - \nu_k^2}{\nu^2 - B/\nu_k^2} \qquad (13.73)$$
where int(x) is the integer part of x, $\nu_k$ is a root of the numerator, the constants B and C are chosen to ensure that $R_n(0, \xi) = 1$ for even n and $R_n(0, \xi) = 0$ for odd n, and N = 0 for even n and N = 1 for odd n.
Elliptic filters yield the lowest filter order for given specifications by permitting ripples in both the
passband and stopband. For a given order, they exhibit the steepest transition region, but the very presence
of the ripples also makes for the most nonlinear phase and the worst delay characteristics.
If the amplitude equals $\pi/2$, we get the complete elliptic integral of the first kind, $u(\pi/2, m)$, denoted by K(m), which is now a function only of m:
$$K(m) = u(\pi/2, m) = \int_0^{\pi/2} (1 - m\sin^2 x)^{-1/2}\, dx \qquad (13.77)$$
We thus have $K'(m) = K(1 - m)$. The quantity $m' = 1 - m$ is called the complementary parameter, and $k' = (1-m)^{1/2}$ is called the complementary modulus.
The Jacobian elliptic functions also involve two arguments, one being the elliptic integral $u(\phi, m)$, and the other the parameter m. Some of these functions are the elliptic sine $\mathrm{sn}(u, m)$, the elliptic cosine $\mathrm{cn}(u, m)$, and the function $\mathrm{dn}(u, m)$.
The Jacobian elliptic functions are actually doubly periodic for complex arguments. In particular, $\mathrm{sn}(u, m)$, where u is complex, has a real period of 4K and an imaginary period of 2K'. Jacobian elliptic functions resemble trigonometric functions and even satisfy some of the same identities. For example,
$$\mathrm{sn}^2 + \mathrm{cn}^2 = 1 \qquad \mathrm{cn}^2 = 1 - \mathrm{sn}^2 \qquad \mathrm{dn} = \frac{d\phi}{du} = (1 - m\sin^2\phi)^{1/2} = (1 - m\,\mathrm{sn}^2)^{1/2} \qquad (13.80)$$
The Jacobian elliptic sine $\mathrm{sn}(u, m)$ is illustrated in Figure 13.15. It behaves much like the trigonometric sine, except that it is flattened and elongated. The degree of elongation (called the period) and flatness depend on the parameter m. For m = 0, $\mathrm{sn}(u, m)$ is identical to $\sin(u)$. For small u, $\mathrm{sn}(u, m)$ closely follows the sine and, with increasing m, becomes more flattened and elongated, reaching an infinite period when m = 1. This ability of $\mathrm{sn}(u, m)$ to change shape with changing m is what provides us with the means to characterize the function $R_n$.
[Figure 13.15 The Jacobian elliptic sine sn(u, m)]
Here, N = 0 for even n, and N = 1 for odd n. The constant C is found by recognizing that $R_n(\nu, \xi) = 1$ at $\nu = 1$ rad/s, leading to
$$C = \prod_{k=1}^{\mathrm{int}(n/2)} \frac{1 - p_k^2}{1 - z_k^2} \qquad (13.82)$$
The poles $p_k$ and zeros $z_k$ are imaginary, and their locations depend on the order n. They are also related by the reciprocal relation
$$p_k = \frac{\nu_s}{z_k}, \qquad k = 1, 2, \ldots, \mathrm{int}(n/2) \qquad (13.83)$$
where $\nu_s$ is the first value of $\nu$ at which $R_n(\nu, \xi) = \xi$, and
$$z_k = \begin{cases} \mathrm{sn}[2kK(m)/n,\ m], & n \text{ odd} \\ \mathrm{sn}[(2k-1)K(m)/n,\ m], & n \text{ even} \end{cases} \qquad k = 1, 2, \ldots, \mathrm{int}(n/2) \qquad (13.84)$$
where sn is the Jacobian elliptic sine function, $K(m) = K(1/\nu_s^2)$ is the complete elliptic integral of the first kind, and $\xi$ is the maximum magnitude of the oscillations for $|\nu| > \nu_s$.
Note that $R_n(\nu, \xi)$ obeys the reciprocal relation $R_n(\nu, \xi) = \xi/R_n(\nu_s/\nu, \xi)$. The frequencies $\nu_{\mathrm{pass}}$ of the passband maxima (where $R_n(\nu, \xi) = 1$) are given by
$$\nu_{\mathrm{pass}} = \begin{cases} \mathrm{sn}[(2k+1)K(m)/n,\ m], & n \text{ odd} \\ \mathrm{sn}[2kK(m)/n,\ m], & n \text{ even} \end{cases} \qquad k = 1, 2, \ldots, \mathrm{int}(n/2) \qquad (13.86)$$
The frequencies $\nu_{\mathrm{stop}}$ of the stopband maxima (where $R_n(\nu, \xi) = \xi$) are related to $\nu_{\mathrm{pass}}$ by
$$\nu_{\mathrm{stop}} = \frac{\nu_s}{\nu_{\mathrm{pass}}} \qquad (13.87)$$
Filter Order
Like the Jacobian elliptic functions, $R_n(\nu, \xi)$ is also doubly periodic. The periods are not independent but are related through the order n, as follows:
$$n = \frac{K(1/\nu_s^2)\,K'(1/\xi^2)}{K'(1/\nu_s^2)\,K(1/\xi^2)} = \frac{K(1/\nu_s^2)\,K(1 - 1/\xi^2)}{K(1 - 1/\nu_s^2)\,K(1/\xi^2)} \qquad (13.88)$$
$$H_P(s)H_P(-s) = \frac{\displaystyle\prod_{k=1}^{\mathrm{int}(n/2)} (s^2 + p_k^2)^2}{\displaystyle\prod_{k=1}^{\mathrm{int}(n/2)} (s^2 + p_k^2)^2 + (-s^2)^N \epsilon^2 C^2 \prod_{k=1}^{\mathrm{int}(n/2)} (s^2 + z_k^2)^2}, \quad \text{where } N = \begin{cases} 0, & n \text{ even} \\ 1, & n \text{ odd} \end{cases} \qquad (13.94)$$
The numerator P(s) is found from the conjugate symmetric roots $\pm jp_k$ and equals
$$P(s) = \prod_{k=1}^{\mathrm{int}(n/2)} (s^2 + p_k^2) \qquad (13.95)$$
$$Q(s)Q(-s) = \prod_{k=1}^{\mathrm{int}(n/2)} (s^2 + p_k^2)^2 + (-s^2)^N \epsilon^2 C^2 \prod_{k=1}^{\mathrm{int}(n/2)} (s^2 + z_k^2)^2 = 0, \quad \text{where } N = \begin{cases} 0, & n \text{ even} \\ 1, & n \text{ odd} \end{cases} \qquad (13.96)$$
For n > 2, finding the roots of this equation becomes tedious and often requires numerical methods. For unit dc gain, we choose $K = Q(0)/P(0)$. Since $R_n(0, \xi) = 0$ for odd n and $R_n(0, \xi) = 1$ for even n, $H(0) = 1$ for odd n and $H(0) = 1/\sqrt{1+\epsilon^2}$ for even n. Therefore, for a peak gain of unity, we choose $K = Q(0)/P(0)$ for odd n and $K = Q(0)/[P(0)\sqrt{1+\epsilon^2}]$ for even n.
$$\epsilon^2 = 10^{0.1A_p} - 1 = 0.5849 \qquad \xi^2 = \frac{10^{0.1A_s} - 1}{\epsilon^2} = 169.2617$$
We let $m = 1/\nu_s^2 = 0.25$ and $\rho = 1/\xi^2 = 0.0059$. Then, $K(m) = 1.6858$, $K'(m) = K(1-m) = 2.1565$, $K(\rho) = 1.5731$, and $K'(\rho) = K(1-\rho) = 3.9564$. The filter order is given by
$$n = \frac{K(m)K'(\rho)}{K'(m)K(\rho)} = \frac{(1.6858)(3.9564)}{(2.1565)(1.5731)} = 1.97$$
We choose n = 2. Note that to exactly meet passband specifications, we (iteratively) find that the actual $\nu_s$ that yields n = 2 is not $\nu_s = 2$ rad/s but $\nu_s = 1.942$ rad/s. However, we perform the following computations using $\nu_s = 2$ rad/s. With n = 2, we compute
The zeros and poles of $R_n(\nu, \xi)$ are then $z_k = 0.7321$ and $p_k = \nu_s/z_k = 2.7321$.
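The complete elliptic integrals quoted in this example can be reproduced from the arithmetic-geometric mean, using the identity $K(m) = \pi/[2\,\mathrm{agm}(1, \sqrt{1-m})]$. This is an illustrative sketch (not an ADSP routine), which also reproduces the order estimate from (13.88):

```python
import math

def K(m):
    """Complete elliptic integral of the first kind via the AGM iteration."""
    a, b = 1.0, math.sqrt(1.0 - m)
    for _ in range(30):                  # the AGM converges quadratically
        a, b = (a + b) / 2, math.sqrt(a * b)
    return math.pi / (2 * a)

m, rho = 0.25, 0.0059                    # m = 1/nu_s^2, rho = 1/xi^2
n_exact = (K(m) * K(1 - rho)) / (K(1 - m) * K(rho))   # eq. (13.88), about 1.97
```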
With $H_P(s) = KP(s)/Q(s)$, we find P(s) from the poles $p_k$ as
$$P(s) = \prod_{k=1}^{\mathrm{int}(n/2)} (s^2 + p_k^2) = s^2 + (2.7321)^2 = s^2 + 7.4641$$
The two left half-plane roots are $r_{1,2} = -0.3754 \pm j0.8587$, and we get
$$Q(s) = (s + 0.3754 + j0.8587)(s + 0.3754 - j0.8587) = s^2 + 0.7508s + 0.8783$$
The prototype transfer function may now be written as
$$H_P(s) = K\,\frac{s^2 + 7.4641}{s^2 + 0.7508s + 0.8783}$$
Since n is even, we choose $K = 0.8783/(7.4641\sqrt{1+\epsilon^2}) = 0.09347$ for a peak gain of unity. The required filter H(s) is found using the LP2LP transformation $s \to s/\omega_p = s/4$, to give
$$H(s) = H_P(s/\omega_p) = H_P(s/4) = \frac{0.09347s^2 + 11.1624}{s^2 + 3.0033s + 14.0527}$$
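The final LP2LP scaling is routine polynomial bookkeeping: replacing s by $s/\omega_p$ in $H_P(s) = K(s^2+c)/(s^2+as+b)$ and clearing fractions gives $H(s) = (Ks^2 + Kc\omega_p^2)/(s^2 + a\omega_p s + b\omega_p^2)$. A quick illustrative check against the quoted coefficients:

```python
K, c = 0.09347, 7.4641        # prototype numerator K (s^2 + c)
a, b = 0.7508, 0.8783         # prototype denominator s^2 + a s + b
wp = 4.0                      # LP2LP transformation: s -> s / wp

num = [K, 0.0, K * c * wp**2]            # K s^2 + K c wp^2
den = [1.0, a * wp, b * wp**2]           # s^2 + a wp s + b wp^2
```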
The linear and decibel magnitude of H(s) is shown in Figure E13.7.
[Figure E13.7 Elliptic lowpass filter for Example 13.7: (a) elliptic lowpass filter meeting passband specs (linear magnitude); (b) dB magnitude of the elliptic filter in (a)]
If the denominators of each term are expressed in terms of the first-order binomial approximation $(1-x)^{-n} \approx (1 + nx)$ for $x \ll 1$, we obtain
$$\phi(\omega) \approx -\frac{q_1\omega}{q_0}\left(1 + \frac{\omega^2}{q_0}\right) + \frac{1}{3}\left(\frac{q_1\omega}{q_0}\right)^3 \left(1 + \frac{\omega^2}{q_0}\right)^3 \qquad (13.106)$$
Retaining terms to $\omega^3$, and comparing with the required phase $\phi(\omega) = -\omega$, we get
$$\phi(\omega) \approx -\frac{q_1}{q_0}\omega - \left(\frac{q_1}{q_0^2} - \frac{q_1^3}{3q_0^3}\right)\omega^3 = -\omega \quad\Rightarrow\quad \frac{q_1}{q_0} = 1 \qquad \frac{q_1}{q_0^2} - \frac{q_1^3}{3q_0^3} = 0 \qquad (13.107)$$
Although the two equations in $q_0$ and $q_1$ are nonlinear, they are easy enough to solve. Substitution of the first equation into the second yields $q_0 = 3$ and $q_1 = 3$, and the transfer function H(s) becomes
$$H(s) = \frac{K}{s^2 + 3s + 3} \qquad (13.108)$$
We choose K = 3 for unit dc gain. The delay $t_g(\omega)$ is given by
$$t_g(\omega) = -\frac{d\phi}{d\omega} = \frac{d}{d\omega}\tan^{-1}\left(\frac{3\omega}{3 - \omega^2}\right) = \frac{9 + 3\omega^2}{\omega^4 + 3\omega^2 + 9} \qquad (13.109)$$
Once again, the delay is a maximally flat function.
This approach becomes quite tedious for higher-order filters due to the nonlinear equations involved (even
for the second-order case).
$$\frac{\cosh(s)}{\sinh(s)} = \frac{E(s)}{O(s)} = \frac{1}{s} + \cfrac{1}{\cfrac{3}{s} + \cfrac{1}{\cfrac{5}{s} + \cfrac{1}{\cfrac{7}{s} + \cdots + \cfrac{1}{\cfrac{2n-1}{s} + \cdots}}}} \qquad (13.113)$$
13.7 The Bessel Approximation 435
All terms in the continued fraction expansion are positive. We truncate this expansion to the required order,
reassemble E(s)/O(s), and establish the polynomial form Q(s) = E(s) + O(s), and H(s) = K/Q(s). Finally,
we choose K so as to normalize the dc gain to unity.
$$\frac{E(s)}{O(s)} = \frac{1}{s} + \cfrac{1}{\cfrac{3}{s}} = \frac{1}{s} + \frac{s}{3} = \frac{3 + s^2}{3s}$$
This leads to the polynomial representation
$$H(s) = \frac{3}{s^2 + 3s + 3}$$
This is identical to the H(s) based on maximal flatness.
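The truncate-and-reassemble procedure can be automated. The sketch below (illustrative, not the book's ADSP code) builds $E(s)/O(s)$ from the bottom of the continued fraction up, using coefficient lists in descending powers of s, and returns $Q(s) = E(s) + O(s)$:

```python
def padd(a, b):
    """Add coefficient lists (descending powers), aligning constant terms."""
    w = max(len(a), len(b))
    a = [0.0] * (w - len(a)) + a
    b = [0.0] * (w - len(b)) + b
    return [x + y for x, y in zip(a, b)]

def bessel_Q(n):
    """Q(s) = E(s) + O(s) for the nth-order Bessel filter, obtained by
    evaluating the truncated continued fraction (13.113) from the bottom up."""
    num, den = [2.0 * n - 1.0], [1.0, 0.0]       # innermost term (2n-1)/s
    for k in range(n - 1, 0, -1):
        # (2k-1)/s + den/num = ((2k-1)*num + s*den) / (s*num)
        num, den = padd([(2 * k - 1) * c for c in num], den + [0.0]), num + [0.0]
    return padd(num, den)
```

For n = 2 this returns the coefficients of $s^2 + 3s + 3$, and for n = 3 those of $s^3 + 6s^2 + 15s + 15$.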
$$\frac{E(s)}{O(s)} = \frac{1}{s} + \cfrac{1}{\cfrac{3}{s} + \cfrac{1}{\cfrac{5}{s}}} = \frac{1}{s} + \frac{5s}{15 + s^2} = \frac{15 + 6s^2}{15s + s^3}$$
leading to the polynomial approximation $Q(s) = E(s) + O(s) = s^3 + 6s^2 + 15s + 15$ and, with K = 15 for unit dc gain,
$$H(s) = \frac{15}{s^3 + 6s^2 + 15s + 15}$$
The expressions for the magnitude, phase, and group delay of the Bessel filter in terms of Bessel functions derive from this relationship and yield the following results:
$$|H(\omega)|^2 = \frac{Q_n^2(0)}{\frac{\pi}{2}\,\omega^{2n+1}\left[J_{n+1/2}^2(\omega) + J_{-n-1/2}^2(\omega)\right]} \qquad (13.124)$$
$$\phi(\omega) = -\omega + \tan^{-1}\left[\frac{J_{n+1/2}(\omega)}{J_{-n-1/2}(\omega)}\cos(n\pi)\right] \qquad (13.125)$$
$$t_g = -\frac{d\phi}{d\omega} = 1 - \frac{2}{\pi\omega\left[J_{n+1/2}^2(\omega) + J_{-n-1/2}^2(\omega)\right]} = 1 - \frac{\omega^{2n}|H(\omega)|^2}{Q_n^2(0)} \qquad (13.126)$$
Magnitude Response
As the filter order increases, the magnitude response of Bessel filters tends to a Gaussian shape, as shown in Figure 13.16. On this basis, it turns out that the attenuation $A_x$ at any frequency $\omega_x$, for reasonably large filter order (n > 3, say), can be approximated by
$$A_x \approx \frac{10\omega_x^2}{(2n-1)\ln 10} \text{ dB} \qquad (13.127)$$
The frequency $\omega_p$ corresponding to a specified passband attenuation $A_p$ equals
$$\omega_p \approx \left[0.1A_p(2n-1)\ln 10\right]^{1/2} \qquad \omega_{3\mathrm{dB}} \approx \left[(2n-1)\ln 2\right]^{1/2}, \quad n \ge 3 \qquad (13.128)$$
[Figure 13.16 The magnitude response (linear) of Bessel lowpass filters for orders n = 1 through n = 7]
$$A_p \approx \frac{10\omega^2}{(2n-1)\ln 10} \quad\Rightarrow\quad n \approx \frac{1}{2} + \frac{5\omega^2}{A_p \ln 10} \qquad (13.129)$$
3. We find the normalized transfer function $H_N(s)$ and calculate $A(\omega) = -10\log|H(\omega)|^2$ from either of the following relations.
5. If $1 - t_n$ exceeds the given tolerance, or $A(\omega)$ exceeds $A_p$, we increase the filter order n by one and repeat steps 3 and 4.
6. Finally, we establish the prototype transfer function $H_P(s)$ and the required filter using $s \to st_g$ to obtain $H(s) = H_P(st_g)$, which provides the required delay $t_g$.
Another popular but much less flexible approach resorts to nomograms of delay and attenuation for
various filter orders to establish the smallest order that best matches specifications.
[Figure E13.9 Magnitude and delay of Bessel lowpass filter for Example 13.9: (a) dB magnitude; (b) group delay]
CHAPTER 13 PROBLEMS
DRILL AND REINFORCEMENT
13.1 (Lowpass Prototypes) For each set of filter specifications, find the normalized lowpass prototype
frequencies. Assume a fixed passband, where necessary.
(a) Lowpass: passband edge at 2 kHz; stopband edge at 3 kHz
(b) Highpass: passband edge at 3 kHz; stopband edge at 2 kHz
(c) Bandpass: passband edges at [10, 15] kHz; stopband edges at [5, 20] kHz
(d) Bandstop: passband edges at [20, 40] kHz; stopband edges at [30, 34] kHz
13.2 (Lowpass Prototypes) For each set of bandpass filter specifications, find the normalized lowpass
prototype frequencies. Assume a fixed center frequency f0 (if given) or a fixed passband (if given) or
fixed stopband, in that order.
(a) Passband edges at [10, 14] Hz; stopband edges at [5, 20] Hz; f0 = 12 Hz
(b) Passband edges at [10, 40] Hz; lower stopband edge at 5 Hz
(c) Passband edges at [10, 40] Hz; lower stopband edge at 5 Hz; f0 = 25 Hz
(d) Stopband edges at [5, 50] Hz; lower passband edge at 15 Hz; f0 = 25 Hz
13.3 (Lowpass Prototypes) For each set of bandstop filter specifications, find the normalized lowpass
prototype frequencies. Assume a fixed center frequency f0 (if given) or a fixed passband (if given) or
a fixed stopband, in that order.
(a) Passband edges at [20, 40] Hz; stopband edges at [26, 36] Hz; f0 = 30 Hz
(b) Passband edges at [20, 80] Hz; lower stopband edge at 25 Hz
(c) Stopband edges at [40, 80] Hz; lower passband edge at 25 Hz
(d) Passband edges at [20, 60] Hz; lower stopband edge at 30 Hz; f0 = 40 Hz
13.4 (Prototype Transformations) Let $H(s) = \dfrac{1}{s^2 + s + 1}$ describe the transfer function of a lowpass filter with a passband of 1 rad/s. Use frequency transformations to find the transfer function of the following filters.
(a) A lowpass filter with a passband of 10 rad/s
(b) A highpass filter with a cutoff frequency of 1 rad/s
(c) A highpass filter with a cutoff frequency of 10 rad/s
(d) A bandpass filter with a passband of 1 rad/s and a center frequency of 1 rad/s
(e) A bandpass filter with a passband of 10 rad/s and a center frequency of 100 rad/s
(f) A bandstop filter with a stopband of 1 rad/s and a center frequency of 1 rad/s
(g) A bandstop filter with a stopband of 2 rad/s and a center frequency of 10 rad/s
13.5 (Butterworth Filter Poles) Sketch the pole locations of a third-order and a fourth-order Butter-
worth lowpass filter assuming the following information.
(a) A half-power frequency of 1 rad/s
(b) $\epsilon = 0.707$ and a passband edge at 1 rad/s
(c) $\epsilon = 0.707$ and a passband edge at 100 Hz
13.6 (Butterworth Filters) Set up the form of $|H(\nu)|^2$ for a third-order Butterworth lowpass filter with $\epsilon = 0.7$ and find the following:
(a) The attenuation at $\nu = 0.5$ rad/s and $\nu = 2$ rad/s
(b) The half-power frequency $\nu_3$
(c) The pole orientations and magnitudes
(d) The transfer function H(s)
(e) The high-frequency decay rate
13.7 (Butterworth Filters) Consider a fifth-order lowpass Butterworth filter with a passband of 1 kHz
and a maximum passband attenuation of 1 dB. What is the attenuation, in decibels, of this filter at
a frequency of 2 kHz?
13.8 (Butterworth Filters) We wish to design a Butterworth lowpass filter with a 2-dB bandwidth of
1 kHz and an attenuation of at least 25 dB beyond 2 kHz. What is the actual attenuation, in decibels,
of the designed filter at f = 1 kHz and f = 2 kHz?
13.9 (Butterworth Filter Design) Design a Butterworth lowpass filter with a peak gain of 2 that meets
the following sets of specifications.
(a) A half-power frequency of 4 rad/s and high-frequency decay rate of 60 dB/dec
(b) A 1-dB passband edge at 10 Hz and filter order n = 2
(c) Minimum passband gain of 1.8 at 5 Hz and filter order n = 2
13.10 (Butterworth Filter Design) Design a Butterworth filter that meets the following sets of speci-
fications. Assume Ap = 2 dB, As = 40 dB, and a fixed passband where necessary.
(a) Lowpass filter: passband edge at 2 kHz; stopband edge at 6 kHz
(b) Highpass filter: passband edge at 4 kHz; stopband edge at 1 kHz
(c) Lowpass filter: passband edge at 10 Hz; stopband edge at 50 Hz, f3dB = 15 Hz
(d) Bandpass filter: passband edges at [20, 30] Hz; stopband edges at [10, 50] Hz
(e) Bandstop filter: passband edges at [10, 60] Hz; stopband edges at [20, 25] Hz
13.11 (Chebyshev Filter Poles) Analytically evaluate the poles of the following filters.
(a) A third-order Chebyshev lowpass filter with $\epsilon = 0.5$
(b) A fourth-order Chebyshev lowpass filter with a passband ripple of 2 dB
13.12 (Chebyshev Filter Poles) Use a geometric construction to locate the poles of a Chebyshev lowpass
filter with the following characteristics.
(a) $\epsilon = 0.4$ and order n = 3
(b) A passband ripple of 1 dB and filter order n = 4
13.13 (Chebyshev Filters) Set up $|H(\nu)|^2$ for a third-order Chebyshev lowpass prototype with $\epsilon = 0.7$ and find the following:
(a) The attenuation at $\nu = 0.5$ rad/s and $\nu = 2$ rad/s
(b) The half-power frequency $\nu_3$
(c) The pole orientations and magnitudes
(d) The transfer function H(s)
(e) The high-frequency decay rate
13.14 (Chebyshev Filters) Consider a fifth-order lowpass Chebyshev filter with a passband of 1 kHz and
a passband ripple of 1 dB. What is the attenuation of this filter (in dB) at f = 1 kHz and f = 2 kHz?
13.15 (Chebyshev Filters) What is the order n, the value of the ripple factor , and the normalized
passband attenuation Ap of the Chebyshev lowpass filter of Figure P13.15?
[Figure P13.15 Chebyshev lowpass filter for Problem 13.15: the magnitude $|H(\nu)|$ ripples between 10 and 9.55 in the passband $0 \le \nu \le 1$, with stopband edge $\nu_s$]
13.16 (Chebyshev Filters) A fourth-order Chebyshev filter shows a passband ripple of 1 dB up to 2 kHz.
What is the half-power frequency of the lowpass prototype? What is the actual half-power frequency?
13.17 (Chebyshev Filter Design) Design a Chebyshev lowpass filter with a peak gain of unity that
meets the following sets of specifications.
(a) A half-power frequency of 4 rad/s and high-frequency decay rate of 60 dB/dec
(b) A 1-dB passband edge at 10 Hz and filter order n = 2
(c) A minimum passband gain of 0.9 at the passband edge at 5 Hz and filter order n = 2
13.18 (Chebyshev Filter Design) Design a Chebyshev filter to meet the following sets of specifications.
Assume Ap = 2 dB, As = 40 dB, and a fixed passband where necessary.
(a) Lowpass filter: passband edge at 2 kHz; stopband edge at 6 kHz
(b) Highpass filter: passband edge at 4 kHz; stopband edge at 1 kHz
(c) Lowpass filter: passband edge at 10 Hz; stopband edge at 50 Hz; f3dB = 15 Hz
(d) Bandpass filter: passband edges at [20, 30] Hz; stopband edges at [10, 50] Hz
(e) Bandstop filter: passband edges at [10, 60] Hz; stopband edges at [20, 25] Hz
13.19 (Inverse Chebyshev Filter Poles) Find the poles and pole locations of a third-order inverse Chebyshev lowpass filter with $\epsilon = 0.1$.
13.20 (Inverse Chebyshev Filters) Set up $|H(\nu)|^2$ for a third-order inverse Chebyshev lowpass filter with $\epsilon = 0.1$ and find the following:
(a) The attenuation at $\nu = 0.5$ rad/s and $\nu = 2$ rad/s
(b) The half-power frequency $\nu_3$
(c) The pole orientations and magnitudes
(d) The transfer function H(s)
(e) The high-frequency decay rate
13.21 (Inverse Chebyshev Filters) Consider a fourth-order inverse Chebyshev lowpass filter with a
stopband edge of 2 kHz and a minimum stopband ripple of 40 dB. What is the attenuation of this
filter, in decibels, at f = 1 kHz and f = 2 kHz?
13.22 (Inverse Chebyshev Filter Design) Design an inverse Chebyshev filter to meet the following sets
of specifications. Assume Ap = 2 dB, As = 40 dB, and a fixed passband where necessary.
13.23 (Elliptic Filter Design) Design an elliptic filter that meets the following sets of specifications.
Assume Ap = 2 dB, As = 40 dB, and a fixed passband where necessary.
(a) Lowpass filter: passband edge at 2 kHz; stopband edge at 6 kHz
(b) Highpass filter: passband edge at 6 kHz; stopband edge at 2 kHz
13.24 (Bessel Filters) Design a Bessel lowpass filter with a passband edge of 100 Hz and a passband
delay of 1 ms with a 1% tolerance for each of the following cases.
(a) The passband edge corresponds to the half-power frequency.
(b) The maximum attenuation in the passband is 0.3 dB.
$$R(\nu) = \frac{N(\nu)}{N(\nu) + A\nu^n}$$
13.26 (Butterworth Filter Design) Design a Butterworth bandpass filter to meet the following sets of
specifications. Assume Ap = 2 dB and As = 40 dB. Assume a fixed center frequency (if given) or a
fixed passband (if given) or fixed stopband, in that order.
(a) Passband edges at [30, 50] Hz; stopband edges at [5, 400] Hz; f0 = 40 Hz
(b) Passband edges at [20, 40] Hz; lower stopband edge at 5 Hz
(c) Stopband edges at [5, 50] Hz; lower passband edge at 15 Hz; f0 = 20 Hz
13.27 (Butterworth Filter Design) Design a Butterworth bandstop filter to meet the following sets of
specifications. Assume Ap = 2 dB and As = 40 dB. Assume a fixed center frequency (if given) or a
fixed passband (if given) or fixed stopband, in that order.
(a) Passband edges at [20, 50] Hz; stopband edges at [26, 36] Hz; f0 = 30 Hz
(b) Stopband edges at [30, 100] Hz; lower passband edge at 50 Hz
(c) Stopband edges at [50, 200] Hz; lower passband edge at 80 Hz; f0 = 90 Hz
13.28 (Chebyshev Filter Design) Design a Chebyshev bandpass filter that meets the following sets of
specifications. Assume Ap = 2 dB and As = 40 dB. Assume a fixed center frequency (if given) or a
fixed passband (if given) or fixed stopband, in that order.
(a) Passband edges at [30, 50] Hz; stopband edges at [5, 400] Hz; f0 = 40 Hz
(b) Passband edges at [20, 40] Hz; lower stopband edge at 5 Hz
(c) Stopband edges at [5, 50] Hz; lower passband edge at 15 Hz; f0 = 20 Hz
13.29 (Chebyshev Filter Design) Design a Chebyshev bandstop filter that meets the following sets of
specifications. Assume Ap = 2 dB and As = 40 dB. Assume a fixed center frequency (if given) or a
fixed passband (if given) or fixed stopband, in that order.
(a) Passband edges at [20, 50] Hz; stopband edges at [26, 36] Hz; f0 = 30 Hz
(b) Stopband edges at [30, 100] Hz; lower passband edge at 50 Hz
(c) Stopband edges at [50, 200] Hz; lower passband edge at 80 Hz; f0 = 90 Hz
13.30 (Inverse Chebyshev Filter Design) Design an inverse Chebyshev bandpass filter that meets the
following sets of specifications. Assume Ap = 2 dB and As = 40 dB. Assume a fixed center frequency
(if given) or a fixed passband (if given) or fixed stopband, in that order.
(a) Passband edges at [30, 50] Hz; stopband edges at [5, 400] Hz; f0 = 40 Hz
(b) Passband edges at [20, 40] Hz; lower stopband edge at 5 Hz
13.31 (Inverse Chebyshev Filter Design) Design an inverse Chebyshev bandstop filter that meets the
following sets of specifications. Assume Ap = 2 dB and As = 40 dB. Assume a fixed center frequency
(if given) or a fixed passband (if given) or fixed stopband, in that order.
(a) Passband edges at [20, 50] Hz; stopband edges at [26, 36] Hz; f0 = 30 Hz
(b) Stopband edges at [30, 100] Hz; lower passband edge at 50 Hz
13.32 (Group Delay) Find a general expression for the group delay of a second-order lowpass filter described by the transfer function $H(s) = \dfrac{K}{s^2 + As + B}$. Use this result to compute the group delay of the following filters. Do any of the group delays show a maximally flat form?
(a) A second-order normalized Butterworth lowpass filter
(b) A second-order normalized Chebyshev lowpass filter with Ap = 1 dB
(c) A second-order normalized Bessel lowpass filter
13.33 (Identifying Analog Filters) Describe how you would go about identifying a lowpass transfer
function H(s) as a Butterworth, Chebyshev I, Chebyshev II, or elliptic filter. Consider both the pole
locations (and orientations) and the zero locations as possible clues.
13.34 (Subsonic Filters) Some audio amplifiers or equalizers include a subsonic filter to remove or reduce
unwanted low-frequency noise (due, for example, to warped records) that might otherwise modulate
the audible frequencies causing intermodulation distortion. The audio equalizer of one manufacturer
is equipped with a subsonic filter that is listed as providing an 18-dB/octave cut for frequencies below
15 Hz. Sketch the Bode plot of such a filter. Design a Butterworth filter that can realize the design
specifications.
13.35 (A Multi-Band Filter) A requirement exists for a multi-band analog filter with the following
specifications.
Passband 1: From dc to 20 Hz
Peak gain = 0 dB
Maximum passband attenuation = 1 dB (from peak)
Minimum attenuation at 40 Hz = 45 dB (from peak)
No ripples allowed in the passband or stopband
Passband 2: From 150 Hz to 200 Hz
Peak gain = 12 dB
Maximum passband ripple = 1 dB
Minimum attenuation at 100 Hz and 300 Hz = 50 dB (from peak)
Ripples permitted in the passband but not in the stopband
(a) Design each filter separately using the ADSP routine afd. Then adjust the gain(s) and combine
the two stages to obtain the filter transfer function H(s). To adjust the gain, you need to scale
only the numerator array of a transfer function by the linear (not dB) gain.
(b) Plot the filter magnitude spectrum of H(s) over 0 ≤ f ≤ 400 Hz in increments of 1 Hz.
Tabulate the attenuation in dB at f = 20, 40, 100, 150, 200, 300 Hz and compare with expected
design values. What is the maximum attenuation in dB in the range [0, 400] Hz, and at what
frequency does it occur?
(c) Find the poles and zeros of H(s). Is the filter stable (with all its poles in the left half of the
s-plane)? Should it be? Is it minimum phase (with all its poles and zeros in the left half of the
s-plane)? Should it be? If not, use the ADSP routine minphase to generate the minimum-phase
transfer function HM (s). Check the poles and zeros of HM (s) to verify that it is a stable,
minimum-phase filter.
(d) Plot the magnitude of H(s) and HM (s) in dB against the frequency on a log scale on the same
plot. Are the magnitudes identical? Should they be? Explain. Use the ADSP routine bodelin
to obtain a Bode plot from HM (s). Can you identify the break frequencies from the plot? How
does the asymptotic Bode magnitude plot compare with the actual magnitude in dB?
Chapter 14
SAMPLING AND
QUANTIZATION
Figure 14.1 The ideal sampling operation: the analog signal x(t) is multiplied by the sampling function i(t), a unit impulse train with spacing ts, to produce the ideally sampled signal xI(t)
Here, the discrete signal x[n] simply represents the sequence of sample values x(nts ). Clearly, the sampling
operation leads to a potential loss of information in the ideally sampled signal xI (t), when compared with its
underlying analog counterpart x(t). The smaller the sampling interval ts , the less is this loss of information.
14.1 Ideal Sampling
Intuitively, there must always be some loss of information, no matter how small an interval we use. Fortu-
nately, our intuition notwithstanding, it is indeed possible to sample signals without any loss of information.
The catch is that the signal x(t) must be band-limited to some finite frequency B.
The spectra associated with the various signals in ideal sampling are illustrated in Figure 14.2. The
impulse train i(t) is a periodic signal with period T = ts = 1/S and Fourier series coefficients I[k] = S. Its
Fourier transform is a train of impulses (at f = kS) whose strengths equal I[k].

I(f) = Σ_k I[k] δ(f − kS) = S Σ_k δ(f − kS),    k = −∞ to ∞    (14.2)
The ideally sampled signal xI(t) is the product of x(t) and i(t). Its spectrum XI(f) is thus described by the
convolution

XI(f) = X(f) ⋆ I(f) = X(f) ⋆ S Σ_k δ(f − kS) = S Σ_k X(f − kS)    (14.3)
The spectrum XI (f ) consists of X(f ) and its shifted replicas or images. It is periodic in frequency, with a
period that equals the sampling rate S.
Figure 14.2 The spectra of the signals in ideal sampling: the analog signal spectrum X(f) (band-limited to B), the sampling-function spectrum I(f) (impulses of strength S at multiples of S), and their convolution XI(f) = X(f) ⋆ I(f)
Since the spectral image at the origin extends over (−B, B), and the next image (centered at S) extends
over (S − B, S + B), the images will not overlap if S − B ≥ B, or S ≥ 2B.
Figure 14.3 illustrates the spectra of an ideally sampled band-limited signal for three choices of the
sampling frequency S. As long as the images do not overlap, each period is a replica of the scaled analog
signal spectrum SX(f). We can thus extract X(f) (and hence x(t)) as the principal period of XI(f) (between
−0.5S and 0.5S) by passing the ideally sampled signal through an ideal lowpass filter with a cutoff frequency
of 0.5S and a gain of 1/S over the frequency range −0.5S ≤ f ≤ 0.5S.
Figure 14.3 Spectrum of an ideally sampled signal for three choices of the sampling frequency
This is the celebrated sampling theorem, which tells us that an analog signal band-limited to a fre-
quency B can be sampled without loss of information if the sampling rate S exceeds 2B (or the sampling
interval ts is smaller than 1/(2B)). The critical sampling rate SN = 2B is often called the Nyquist rate or
Nyquist frequency, and the critical sampling interval tN = 1/SN = 1/(2B) is called the Nyquist interval.
If the sampling rate S is less than 2B, the spectral images overlap and the principal period (−0.5S, 0.5S)
of XI(f) is no longer an exact replica of X(f). In this case, we cannot exactly recover x(t), and there is loss
of information due to undersampling. Undersampling results in spectral overlap. Components of X(f)
outside the principal range (−0.5S, 0.5S) fold back into this range (due to the spectral overlap from adjacent
images). Thus, frequencies higher than 0.5S appear as lower frequencies in the principal period. This is
aliasing. The frequency 0.5S is also called the folding frequency.
Sampling is a band-limiting operation in the sense that in practice we typically extract only the principal
period of the spectrum, which is band-limited to the frequency range (−0.5S, 0.5S). Thus, the highest
frequency we can recover or identify is 0.5S, and it depends only on the sampling rate S.
Figure 14.4 Relation between the actual and aliased frequency
A periodic signal xp (t) with period T can be described by a sum of sinusoids at the fundamental frequency
f0 = 1/T and its harmonics kf0 . In general, such a signal may not be band-limited and cannot be sampled
without aliasing for any choice of sampling rate.
The reconstructed signal corresponds to xS(t) = 13 cos(2πt) + sin(6πt) + 6 cos(8πt) − 6 sin(8πt), which
cannot be distinguished from xp(t) at the sampling instants t = nts, where ts = 0.1 s. To avoid aliasing
and recover xp(t), we must choose S > 2B = 66 Hz.
(c) Suppose we sample a sinusoid x(t) at 30 Hz and obtain the periodic spectrum of the sampled signal,
as shown in Figure E14.2C. Is it possible to uniquely identify x(t)?
Spectrum of a sinusoid x(t) sampled at 30 Hz
(Unit-height spectral lines appear at ±10, ±20, 40, 50, 70, and 80 Hz.)
Figure E14.2C Spectrum of sampled sinusoid for Example 14.2(c)
We can certainly identify the period as 30 Hz, and thus S = 30 Hz. But we cannot uniquely identify
x(t) because it could be a sinusoid at 10 Hz (with no aliasing) or a sinusoid at 20 Hz, 50 Hz, 80 Hz, etc.
(all aliased to 10 Hz). However, the analog signal y(t) reconstructed from the samples will describe a
10-Hz sinusoid because reconstruction extracts only the principal period, (−15, 15) Hz, of the periodic
spectrum. In the absence of any a priori information, we almost invariably use the principal period as
a means to uniquely identify the underlying signal from its periodic spectrum, for better or worse.
If the sampling rate is chosen to be less than f0 (i.e., S < f0), the spectral component at f0 will alias
to the smaller frequency fa = f0 − S. To ensure no phase reversal, the aliased frequency must be less
than 0.5S. Thus, f0 − S < 0.5S, or S > f0/1.5. The choice of sampling frequency S is thus bounded by
f0/1.5 < S < f0. Subsequent recovery by a lowpass filter with a cutoff frequency of fC = 0.5S will yield the
signal y(t) = 1 + cos[2π(f0 − S)t], which represents a time-stretched version of x(t). With y(t) = x(t/α),
the stretching factor is α = f0/(f0 − S). The closer S is to the fundamental frequency f0, the larger is the
stretching factor and the more slowed down the signal y(t). This reconstructed signal y(t) is what is displayed
on the oscilloscope (with appropriate scale factors to reflect the parameters of the original signal). As an
example, if we wish to see a 100-MHz sinusoid slowed down to 2 MHz, then f0 − S = 2 MHz, S = 98 MHz,
α = 50, and fC = 49 MHz. We may use an even lower sampling rate SL = S/L, where L is an integer, as
long as the aliased component appears at 2 MHz and shows no phase reversal. For example, SL = 49 MHz
(for L = 2), SL = 14 MHz (for L = 7), or SL = 7 MHz (for L = 14) will all yield a 2-MHz aliased signal
with no phase reversal. The only disadvantage is that at these lower sampling rates we acquire less than one
sample per period and must wait much longer in order to acquire enough samples to build up a one-period
display.
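The arithmetic in this example is easy to verify with a short script (the helper names below are ours, not the text's); it folds a tone at f0 into the principal period and computes the stretching factor α = f0/(f0 − S):

```python
import math

def alias_freq(f0, S):
    """Aliased frequency of a tone at f0 sampled at rate S (principal period)."""
    f = math.fmod(f0, S)            # fold into [0, S)
    return f if f <= 0.5 * S else S - f

def stretch_factor(f0, S):
    """Time-stretching factor alpha = f0 / (f0 - S) for a sampling oscilloscope."""
    return f0 / (f0 - S)

# A 100-MHz sinusoid sampled at 98 MHz (all frequencies in MHz):
f0, S = 100.0, 98.0
print(alias_freq(f0, S))        # 2.0 (MHz)
print(stretch_factor(f0, S))    # 50.0
# Lower rates S/L also alias 100 MHz to 2 MHz:
for SL in (49.0, 14.0, 7.0):
    print(SL, alias_freq(f0, SL))   # all alias to 2.0 MHz
```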
More generally, if a periodic signal x(t) has a fundamental frequency of f0 and is band-limited to the Nth
harmonic frequency Nf0, the sampling rate must satisfy Nf0/(N + 0.5) < S < f0, while the stretching factor
remains α = f0/(f0 − S). For the same stretching factor, the sampling rate may also be reduced to SL = S/L
(where L is an integer) as long as none of the aliased components shows phase reversal.
For recovery, we must sample at exactly S. However, other choices also result in bounds on the sampling
frequency S. These bounds are given by

2fH/k ≤ S ≤ 2fL/(k − 1),    k = 1, 2, . . . , N    (14.5)

Here, the integer k can range from 1 to N, with k = N yielding the smallest value S = 2fH/N, and k = 1
corresponding to the Nyquist rate S ≥ 2fH.
Figure E14.3 Spectra of bandpass signal and its sampled versions for Example 14.3
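The bounds of Eq. (14.5) can be tabulated with a short script. As a sketch, we assume the band of Example 14.3 spans fL = 4 kHz to fH = 6 kHz, so that N = 3 and a 7-kHz sampling rate falls in the k = 2 range:

```python
def bandpass_rates(fL, fH):
    """Valid sampling-rate ranges 2fH/k <= S <= 2fL/(k-1) from Eq. (14.5)."""
    N = int(fH // (fH - fL))        # largest usable value of k
    ranges = []
    for k in range(1, N + 1):
        lo = 2 * fH / k
        hi = float('inf') if k == 1 else 2 * fL / (k - 1)
        ranges.append((k, lo, hi))
    return ranges

# Band of 4-6 kHz: k=1 gives S >= 12 (Nyquist), k=2 gives 6 <= S <= 8,
# and k=3 gives the single choice S = 4 kHz.
for k, lo, hi in bandpass_rates(4.0, 6.0):
    print(k, lo, hi)
```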
The spectrum XN(f) of the sampled signal xS(t) = x(t)p(t) is described by the convolution

XN(f) = X(f) ⋆ P(f) = X(f) ⋆ Σ_k P[k] δ(f − kS) = Σ_k P[k] X(f − kS)    (14.7)
Figure 14.6 Natural sampling: the analog signal is multiplied by a periodic pulse train p(t) with period ts to produce the naturally sampled signal
Figure 14.7 The spectra of the signals in natural sampling: the analog signal spectrum X(f), the sampling-function spectrum P(f) (impulses of strength P[k] at multiples of S), and their convolution XN(f) = X(f) ⋆ P(f)
The various spectra for natural sampling are illustrated in Figure 14.7. Again, XN(f) is a superposition
of X(f) and its amplitude-scaled (by P[k]), shifted replicas. Since the P[k] decay as 1/k, the spectral images
get smaller in height as we move away from the origin. In other words, XN(f) does not describe a periodic
spectrum. However, if there is no spectral overlap, the image centered at the origin equals P[0]X(f), and
X(f) can still be recovered by passing the sampled signal through an ideal lowpass filter with a cutoff
frequency of 0.5S and a gain of 1/P[0] over −0.5S ≤ f ≤ 0.5S, where P[0] is the dc offset in p(t). In theory,
natural sampling can be performed by any periodic signal with a nonzero dc offset.
equivalent to ideal sampling followed by a system whose impulse response h(t) = rect[(t − 0.5ts)/ts] is a
pulse of unit height and duration ts (to stretch the incoming impulses). This is illustrated in Figure 14.8.
The sampled signal xZOH(t) can be regarded as the convolution of h(t) and an ideally sampled signal:

xZOH(t) = h(t) ⋆ xI(t) = h(t) ⋆ [Σ_n x(nts) δ(t − nts)]    (14.8)
The transfer function H(f) of the zero-order-hold circuit is the sinc function

H(f) = ts sinc(f ts) e^{−jπf ts} = (1/S) sinc(f/S) e^{−jπf/S}    (14.9)
Since the spectrum of the ideally sampled signal is S Σ X(f − kS), the spectrum of the zero-order-hold
sampled signal xZOH(t) is given by the product

XZOH(f) = sinc(f/S) e^{−jπf/S} Σ_k X(f − kS)    (14.10)
This spectrum is illustrated in Figure 14.9. The term sinc(f/S) attenuates the spectral images X(f − kS)
and causes sinc distortion. The higher the sampling rate S, the less is the distortion in the spectral image
X(f) centered at the origin.
Figure 14.9 Spectrum of a zero-order-hold sampled signal
An ideal lowpass filter with unity gain over −0.5S ≤ f ≤ 0.5S recovers the distorted signal

X̂(f) = X(f) sinc(f/S) e^{−jπf/S},    −0.5S ≤ f ≤ 0.5S    (14.11)
To recover X(f) with no amplitude distortion, we must use a compensating filter that negates the effects
of the sinc distortion by providing a concave-shaped magnitude spectrum corresponding to the reciprocal of
the sinc function over the principal period |f| ≤ 0.5S, as shown in Figure 14.10.
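A sketch of the required compensation, in dB, as a function of frequency (the function names are ours):

```python
import math

def sinc(x):
    """Normalized sinc, sin(pi*x)/(pi*x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def zoh_compensation_db(f, S):
    """Gain (in dB) a compensating filter must add at frequency f to undo
    the sinc(f/S) droop of a zero-order hold, for |f| <= 0.5S."""
    return -20 * math.log10(sinc(f / S))

# At the folding frequency f = 0.5S the droop is 20*log10(sinc(0.5)),
# about -3.9 dB, so the compensator must supply roughly +3.9 dB there.
print(round(zoh_compensation_db(0.5, 1.0), 2))
```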
(b) The magnitude spectrum of the zero-order-hold sampled signal is a version of the ideally sampled signal
distorted (multiplied) by sinc(f/S), and described by

XZOH(f) = sinc(f/S) Σ_k X(f − kS) = 1.25 sinc(0.0002f) Σ_k rect[(f − 5000k)/4000]
(c) The spectrum of the naturally sampled signal, assuming a rectangular sampling pulse train p(t) with
unit height and a duty ratio of 0.5, is given by

XN(f) = Σ_k P[k] X(f − kS) = 1.25 Σ_k P[k] rect[(f − 5000k)/4000]

Here, P[k] are the Fourier series coefficients of p(t), with P[k] = 0.5 sinc(0.5k).
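The coefficients P[k] = 0.5 sinc(0.5k) are easily computed; note that P[0] = 0.5 is the dc offset, and that the even harmonics vanish for a 50% duty ratio:

```python
import math

def sinc(x):
    """Normalized sinc, sin(pi*x)/(pi*x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def P(k, duty=0.5):
    """Fourier series coefficients of a unit-height rectangular pulse train
    with the given duty ratio: P[k] = duty * sinc(duty * k)."""
    return duty * sinc(duty * k)

# First few coefficients for a 50% duty ratio:
print([round(P(k), 4) for k in range(5)])
```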
The discrete signal x[n] is just the sequence of samples x(nts). We can recover x(t) by passing xI(t) through
an ideal lowpass filter with a gain of ts and a cutoff frequency of 0.5S. The frequency-domain and time-
domain equivalents are illustrated in Figure 14.11.
The impulse response of the ideal lowpass filter is a sinc function given by h(t) = sinc(t/ts ). The recovered
signal x(t) may therefore be described as the convolution
x(t) = xI(t) ⋆ h(t) = [Σ_n x(nts) δ(t − nts)] ⋆ h(t) = Σ_n x[n] h(t − nts)    (14.14)
This describes the superposition of shifted versions of h(t) weighted by the sample values x[n]. Substituting
for h(t), we obtain the following result that allows us to recover x(t) exactly from its samples x[n] as a sum
of scaled shifted versions of sinc functions:
x(t) = Σ_n x[n] sinc[(t − nts)/ts]    (14.15)
14.2 Sampling, Interpolation, and Signal Recovery
The signal x(t) equals the superposition of shifted versions of h(t) weighted by the sample values x[n]. At
each sampling instant, we replace the sample value x[n] by a sinc function whose peak value equals x[n] and
whose zero crossings occur at all the other sampling instants. The sum of these sinc functions yields the
analog signal x(t), as illustrated in Figure 14.12.
Figure 14.12 Ideal recovery of an analog signal from its samples by sinc interpolation
If we use a lowpass filter whose impulse response is hf(t) = 2tsB sinc(2Bt) (i.e., whose cutoff frequency
is B instead of 0.5S), the recovered signal x(t) may be described by the convolution

x(t) = xI(t) ⋆ hf(t) = 2tsB Σ_n x[n] sinc[2B(t − nts)]    (14.16)

This general result is valid for any oversampled signal with ts ≤ 0.5/B and reduces to the previously obtained
result if the sampling rate S equals the Nyquist rate (i.e., S = 2B).
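Equation (14.15) translates directly into code. The sketch below uses a finite set of illustrative samples (a truncation of the infinite sum), so it is only approximate away from the sampling instants:

```python
import math

def sinc(x):
    """Normalized sinc, sin(pi*x)/(pi*x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def sinc_interp(samples, t, ts=1.0):
    """Band-limited reconstruction, Eq. (14.15):
    x(t) = sum over n of x[n] * sinc((t - n*ts)/ts)."""
    return sum(xn * sinc((t - n * ts) / ts) for n, xn in enumerate(samples))

x = [1.0, 2.0, 3.0, 4.0]
# The interpolation passes exactly through every sample ...
print(sinc_interp(x, 2.0))            # equals x[2] = 3.0
# ... and fills in values in between using all the samples at once:
print(round(sinc_interp(x, 2.5), 4))  # 4.1592
```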
Sinc interpolation is unrealistic from a practical viewpoint. The infinite extent of the sinc means that it
cannot be implemented on-line, and perfect reconstruction requires all its past and future values. We could
truncate it on either side after its magnitude becomes small enough. But, unfortunately, it decays very
slowly and must be preserved for a fairly large duration (covering many past and future sampling instants)
in order to provide a reasonably accurate reconstruction. Since the sinc function is also smoothly varying, it
cannot properly reconstruct a discontinuous signal at the discontinuities even with a large number of values.
Sinc interpolation is also referred to as band-limited interpolation and forms the yardstick by which all
other schemes are measured in their ability to reconstruct band-limited signals.
In addition, we also require hi(t) to be absolutely integrable to ensure that it stays finite between the
sampling instants. The interpolated signal x̂(t) is simply the convolution of hi(t) with the ideally sampled
signal xI(t), or a summation of shifted versions of the interpolating function

x̂(t) = hi(t) ⋆ xI(t) = Σ_n x[n] hi(t − nts)    (14.18)

At each instant nts, we erect the interpolating function hi(t − nts), scale it by x[n], and sum to obtain
x̂(t). At a sampling instant t = kts, the interpolating function hi(kts − nts) equals zero unless n = k,
when it equals unity. As a result, x̂(t) exactly equals x(t) at each sampling instant. At all other times, the
interpolated signal x̂(t) is only an approximation to the actual signal x(t).
Step Interpolation
Step interpolation is illustrated in Figure 14.13 and uses a rectangular interpolating function of width ts,
given by h(t) = rect[(t − 0.5ts)/ts], to produce a stepwise or staircase approximation to x(t). Even though
it appears crude, it is quite useful in practice.

Figure 14.13 The interpolating function for step interpolation is a unit-height rectangular pulse of width ts
At any time between two sampling instants, the reconstructed signal equals the previously sampled value
and does not depend on any future values. This is useful for on-line or real-time processing where the output
is produced at the same rate as the incoming data. Step interpolation results in exact reconstruction of
signals that are piecewise constant.
A system that performs step interpolation is just a zero-order-hold. A practical digital-to-analog converter
(DAC) for sampled signals uses a zero-order-hold for a staircase approximation (step interpolation) followed
by a lowpass (anti-imaging) filter (for smoothing the steps).
Linear Interpolation
Linear interpolation is illustrated in Figure 14.14 and uses the interpolating function h(t) = tri(t/ts ) to
produce a linear approximation to x(t) between the sample values.
At any instant t between adjacent sampling instants nts and (n + 1)ts , the reconstructed signal equals
x[n] plus an increment that depends on the slope of the line joining x[n] and x[n + 1]. We have
x̂(t) = x[n] + [x[n + 1] − x[n]] (t − nts)/ts,    nts ≤ t < (n + 1)ts    (14.19)
Figure 14.14 The interpolating function for linear interpolation is the triangular pulse h(t) = tri(t/ts)
This operation requires one future value of the input and cannot actually be implemented on-line. It can,
however, be realized with a delay of one sampling interval ts , which is tolerable in many situations. Systems
performing linear interpolation are also called first-order-hold systems. They yield exact reconstructions
of piecewise linear signals.
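Both hold operations are one-liners in code. The sketch below uses illustrative samples with ts = 1; note that the first-order hold needs the future sample x[n + 1]:

```python
def step_interp(samples, t, ts=1.0):
    """Zero-order hold: hold the most recent sample value."""
    n = int(t // ts)
    return samples[n]

def linear_interp(samples, t, ts=1.0):
    """First-order hold, Eq. (14.19): connect adjacent samples by lines."""
    n = int(t // ts)
    frac = (t - n * ts) / ts
    return samples[n] + (samples[n + 1] - samples[n]) * frac

x = [1.0, 2.0, 3.0, 4.0]
print(step_interp(x, 2.5))    # holds x[2] = 3.0
print(linear_interp(x, 2.5))  # midway between x[2] and x[3]: 3.5
```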
(b) If we use linear interpolation, the signal value at t = 2.5 s is simply the average of the values at t = 2
and t = 3. Thus, x̂(2.5) = 0.5(3 + 2) = 2.5.

So, x̂(2.5) = 0.1273 − 0.4244 + 1.9099 + 2.5465 = 4.1592.
x̂(t) = sinc(t) cos(0.5πt)/(1 − t²) + 2 sinc(t − 1) cos[0.5π(t − 1)]/[1 − (t − 1)²]
       + 3 sinc(t − 2) cos[0.5π(t − 2)]/[1 − (t − 2)²] + 4 sinc(t − 3) cos[0.5π(t − 3)]/[1 − (t − 3)²]

Thus, x̂(2.5) = 0.0171 − 0.2401 + 1.8006 + 2.4008 = 3.9785.
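This is raised-cosine interpolation with roll-off R = 0.5 (the kernel sinc(t) cos(0.5πt)/(1 − t²) above). The sketch below assumes the sample values x[n] = {1, 2, 3, 4}, which reproduce the four tabulated terms; a guard handles the kernel's removable singularity at |t| = 1/(2R):

```python
import math

def sinc(x):
    """Normalized sinc, sin(pi*x)/(pi*x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def raised_cosine_interp(samples, t, R=0.5, ts=1.0):
    """Raised-cosine interpolation with roll-off R:
    h(u) = sinc(u) * cos(pi*R*u) / (1 - (2*R*u)**2), u = (t - n*ts)/ts."""
    total = 0.0
    for n, xn in enumerate(samples):
        u = (t - n * ts) / ts
        denom = 1.0 - (2 * R * u) ** 2
        if abs(denom) < 1e-12:
            total += xn * sinc(u) * math.pi / 4   # limit at |u| = 1/(2R)
        else:
            total += xn * sinc(u) * math.cos(math.pi * R * u) / denom
    return total

x = [1.0, 2.0, 3.0, 4.0]                 # assumed sample values
print(round(raised_cosine_interp(x, 2.5), 4))   # 3.9785
```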
14.3 Quantization
The importance of digital signals stems from the proliferation of high-speed digital computers for signal
processing. Due to the finite memory limitations of such machines, we can process only finite data sequences.
We must not only sample an analog signal in time but also quantize (round or truncate) the signal amplitudes
to a finite set of values. Since quantization affects only the signal amplitude, both analog and discrete-time
signals can be quantized. Quantized discrete-time signals are called digital signals.
Each quantized sample is represented as a group (word) of zeros and ones (bits) that can be processed
digitally. The finer the quantization, the longer the word. Like sampling, improper quantization leads to loss
of information. But unlike sampling, no matter how fine the quantization, its effects are irreversible, since
word lengths must necessarily be finite. The systematic treatment of quantization theory is very difficult
because finite word lengths appear as nonlinear effects. Quantization always introduces some noise, whose
effects can be described only in statistical terms, and is usually considered only in the final stages of any
design; many of its effects (such as overflow and limit cycles) are beyond the realm of this text.
The effect of quantization errors due to rounding or truncation is quite difficult to quantify analytically
unless statistical estimates are used. The dynamic range or full-scale range of a signal x(t) is defined as
its maximum variation D = xmax − xmin. If x(t) is sampled and quantized to L levels using a quantizer with
a full-scale range of D, the quantization step size, or resolution, Δ, is defined as

Δ = D/L    (14.24)

This step size also corresponds to the least significant bit (LSB). The dynamic range of a quantizer is often
expressed in decibels. For a 16-bit quantizer, the dynamic range is 20 log 2^16 ≈ 96 dB.
For quantization by rounding, the quantization error must lie between −Δ/2 and Δ/2. If L is large, the
error is equally likely to take on any value between −Δ/2 and Δ/2 and is thus uniformly distributed. Its
probability density function f(ε) has the form shown in Figure 14.17.

Figure 14.17 Probability density function of a signal quantized by rounding: f(ε) = 1/Δ over (−Δ/2, Δ/2)
The noise power PN equals its variance σ² (the second central moment), and is given by

PN = σ² = ∫_{−Δ/2}^{Δ/2} ε² f(ε) dε = (1/Δ) ∫_{−Δ/2}^{Δ/2} ε² dε = Δ²/12    (14.25)

The quantity σ = Δ/√12 defines the rms quantization error. With Δ = D/L, we compute

10 log PN = 10 log(D²/12L²) = 20 log D − 20 log L − 10.8    (14.26)
A statistical estimate of the SNR in decibels, denoted by SNRS, is thus provided by

SNRS (dB) = 10 log PS − 10 log PN = 10 log PS + 10.8 + 20 log L − 20 log D    (14.27)

For a B-bit quantizer with L = 2^B levels (and Δ = D/2^B), we obtain

SNRS (dB) = 10 log PS + 10.8 + 6B − 20 log D    (14.28)
This result suggests a 6-dB improvement in the SNR for each additional bit. It also suggests a reduction in
the SNR if the dynamic range D is chosen to exceed the signal limits. In practice, signal levels do not often
reach the extreme limits of their dynamic range. This allows us to increase the SNR by choosing a smaller
dynamic range D for the quantizer but at the expense of some distortion (at very high signal levels).
(b) Consider the ramp x(t) = 2t over (0, 1). For a sampling interval of 0.1 s and L = 4, we obtain the
sampled signal, quantized (by rounding) signal, and error signal as

x[n] = {0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0}
xQ[n] = {0, 0.0, 0.5, 0.5, 1.0, 1.0, 1.0, 1.5, 1.5, 2.0, 2.0}
e[n] = {0, −0.2, 0.1, −0.1, 0.2, 0.0, −0.2, 0.1, −0.1, 0.2, 0.0}
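The table can be reproduced by rounding each sample to the nearest multiple of Δ = D/L = 0.5:

```python
def quantize_round(x, delta):
    """Round each sample to the nearest multiple of the step size delta."""
    return [round(v / delta) * delta for v in x]

ts, delta = 0.1, 0.5                           # step size D/L = 2/4
x = [round(2 * n * ts, 1) for n in range(11)]  # ramp x(t) = 2t sampled at 0.1 s
xq = quantize_round(x, delta)
err = [round(q - v, 1) for q, v in zip(xq, x)]
print(xq)   # [0.0, 0.0, 0.5, 0.5, 1.0, 1.0, 1.0, 1.5, 1.5, 2.0, 2.0]
print(err)  # [0.0, -0.2, 0.1, -0.1, 0.2, 0.0, -0.2, 0.1, -0.1, 0.2, 0.0]
```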
3. If x(t) forms one period of a periodic signal with T = 1, we can also find PS and SNRS as

PS = (1/T) ∫_0^T x²(t) dt = 4/3        SNRS = 10 log(4/3) + 10 log 12 + 20 log 4 − 20 log 2 = 18.062 dB
Why the differences between the various results? Because SNRS is a statistical estimate. The larger
the number of samples N, the less SNRQ and SNRS differ. For N = 500, for example, we find that
SNRQ = 18.0751 dB and SNRS = 18.0748 dB are very close indeed.
(c) Consider the sinusoid x(t) = A cos(2πf t). The power in x(t) is PS = 0.5A². The dynamic range of
x(t) is D = 2A (the peak-to-peak value). For a B-bit quantizer, we obtain the widely used result

SNRS = 10 log PS + 10.8 + 6B − 20 log D = 6B + 1.76 dB
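The 6B + 1.76 dB rule can be checked numerically from PS = 0.5A² and PN = Δ²/12 with Δ = 2A/2^B:

```python
import math

def snr_sinusoid_db(B, A=1.0):
    """Statistical SNR of a sinusoid A*cos(2*pi*f*t) through a B-bit rounding
    quantizer whose full-scale range D = 2A matches the signal swing."""
    PS = 0.5 * A ** 2          # signal power
    delta = 2 * A / 2 ** B     # step size D/L with L = 2**B levels
    PN = delta ** 2 / 12       # quantization noise power
    return 10 * math.log10(PS / PN)

# Close to the 6B + 1.76 dB rule of thumb:
for B in (8, 12, 16):
    print(B, round(snr_sinusoid_db(B), 2))
```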
Figure 14.18 Power spectral density of the quantization noise for quantization at the rate S (left), at the oversampled rate NS (center), and with a noise-shaping filter HNS(f) at the rate NS (right)
S2 = NS Hz, we obtain the same in-band quantization noise power when its power spectral density is also
σ²/S, as illustrated in Figure 14.18. Thus,

σ²/S = σ₂²/(NS)    or    σ₂² = Nσ²    (14.29)

D²/[(12)2^{2B}] = D²/[(12N)2^{2(B−ΔB)}]    N = 2^{2ΔB}    ΔB = 0.5 log₂ N    (14.30)
This result suggests that we gain 0.5 bits for every doubling of the sampling rate. For example, N = 4
(four-times oversampling) leads to a gain of 1 bit. In practice, a better trade-off between bits and samples
is provided by quantizers that not only use oversampling but also shape the noise spectrum (using filters) to
further reduce the in-band noise, as shown in Figure 14.18. A typical noise shape is the sine function, and
a pth-order noise-shaping filter has the form HNS(f) = |2 sin(πf/NS)|^p, −0.5NS ≤ f ≤ 0.5NS, where N is
the oversampling factor. Equating the filtered in-band noise power to σ², we obtain

σ² = [σ₂²/(NS)] ∫_{−S/2}^{S/2} |HNS(f)|² df    (14.31)
If N is large, HNS(f) ≈ |2πf/(NS)|^p over the much smaller in-band range (−0.5S, 0.5S), and we get

σ² = [σ₂²/(NS)] ∫_{−S/2}^{S/2} (2πf/NS)^{2p} df = σ₂² π^{2p}/[(2p + 1)N^{2p+1}]    (14.32)
This shows that noise shaping (or error-spectrum shaping) results in a savings of p log₂ N additional bits.
With p = 1 and N = 4, for example, ΔB ≈ 2 bits. This means that we can make do with a (B − 2)-bit DAC
during reconstruction if we use noise shaping with four-times oversampling. In practical implementation,
noise shaping is achieved by using an oversampling sigma-delta ADC. State-of-the-art CD players make use
of this technology. What does it take to achieve 16-bit quality using a 1-bit quantizer (which is just a sign
detector)? Since ΔB = 15, we could, for example, use oversampling by N = 64 (to about 2.8 MHz for audio
signals sampled at 44.1 kHz) and p = 3 (third-order noise shaping).
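Combining Eqs. (14.30) and (14.32), the equivalent bits gained work out to ΔB = 0.5 log₂[(2p + 1)N^{2p+1}/π^{2p}], with p = 0 recovering plain oversampling. A quick check of the numbers quoted above:

```python
import math

def bits_saved(N, p):
    """Equivalent bits gained from N-times oversampling with a pth-order
    noise-shaping filter; p = 0 gives plain oversampling, 0.5*log2(N)."""
    return 0.5 * math.log2((2 * p + 1) * N ** (2 * p + 1) / math.pi ** (2 * p))

print(round(bits_saved(4, 0), 2))   # plain 4x oversampling: 1.0 bit
print(round(bits_saved(4, 1), 2))   # p = 1, N = 4: about 2 bits
print(round(bits_saved(64, 3), 1))  # p = 3, N = 64: comfortably over 15 bits
```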
14.4 Digital Processing of Analog Signals
Figure 14.19 Analog-to-digital conversion: a sampler and hold circuit convert the analog signal to a staircase signal, which a quantizer and encoder convert to a digital signal (a bit stream such as 01101001110)
An analog lowpass pre-filter or anti-aliasing filter (not shown) limits the highest analog signal frequency
to allow a suitable choice of the sampling rate and ensure freedom from aliasing. The sampler operates
above the Nyquist sampling rate and is usually a zero-order-hold device. The quantizer limits the sampled
signal values to a finite number of levels (16-bit quantizers allow a signal-to-noise ratio close to 100 dB). The
encoder converts the quantized signal values to a string of binary bits or zeros and ones (words) whose
length is determined by the number of quantization levels of the quantizer.
A digital signal processing system, in hardware or software (consisting of digital filters), processes the
encoded digital signal (or bit stream) in a desired fashion. Digital-to-analog conversion essentially reverses
the process and is accomplished by the system shown in Figure 14.20.
Figure 14.20 Digital-to-analog conversion: a decoder and hold circuit convert the digital signal to a staircase signal, which an analog post-filter smooths into an analog signal
A decoder converts the processed bit stream to a discrete signal with quantized signal values. The zero-
order-hold device reconstructs a staircase approximation of the discrete signal. The lowpass analog post-
filter, or anti-imaging filter, extracts the central period from the periodic spectrum, removes the unwanted
replicas (images), and results in a smoothed reconstructed signal.
Figure 14.21 Block diagram of a sample-and-hold system (a switch, hold capacitor C, and buffer) and an implementation using an FET as a switch
high input resistance of the buffer amplifier (when the switch is open). Ideally, the sampling should be as
instantaneous as possible and, once acquired, the level should be held constant until it can be quantized and
encoded. Practical circuits also include an operational amplifier at the input to isolate the source from the
hold capacitor and provide better tracking of the input. In practice, the finite aperture time TA (during
which the signal is being measured), the finite acquisition time TH (to switch from the hold mode to the
sampling mode), the droop in the capacitor voltage (due to the leakage of the hold capacitor), and the finite
conversion time TC (of the quantizer) are all responsible for less than perfect performance.
A finite aperture time limits both the accuracy with which a signal can be measured and the highest
frequency that can be handled by the ADC. Consider the sinusoid x(t) = A sin(2πf0t). Its derivative
x′(t) = 2πAf0 cos(2πf0t) describes the rate at which the signal changes. The fastest rate of change equals
2πAf0 at the zero crossings of x(t). If we assume that the signal level can change by no more than ΔX
during the aperture time TA, we must satisfy

2πAf0 ≤ ΔX/TA    or    f0 ≤ ΔX/(πDTA)    (14.34)
where D = 2A corresponds to the full-scale range of the quantizer. Typically, ΔX may be chosen to equal the
rms quantization error (which equals Δ/√12) or 0.5 LSB (which equals 0.5Δ). In the absence of a sample-
and-hold circuit (using only a quantizer), the aperture time must correspond to the conversion time of the
quantizer. This time may be much too large to permit conversion of frequencies encountered in practice. An
important reason for using a sample-and-hold circuit is that it holds the sampled value constant and allows
much higher frequencies to be handled by the ADC. The maximum sampling rate S that can be used for
an ADC (with a sample-and-hold circuit) is governed by the aperture time TA and hold time TH of the
sample-and-hold circuit, as well as the conversion time TC of the quantizer, and corresponds to

S ≤ 1/(TA + TH + TC)    (14.35)
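Equations (14.34) and (14.35) make a quick calculator; the timing values below are illustrative assumptions:

```python
import math

def max_sampling_rate(TA, TH, TC):
    """Upper bound on the ADC sampling rate, Eq. (14.35)."""
    return 1.0 / (TA + TH + TC)

def max_frequency(delta_X, D, TA):
    """Highest sinusoid frequency measurable to within delta_X during the
    aperture time TA, Eq. (14.34): f0 <= delta_X / (pi * D * TA)."""
    return delta_X / (math.pi * D * TA)

# Aperture 4 ns, hold 10 us, conversion 23.3 us (times in seconds):
print(max_sampling_rate(4e-9, 10e-6, 23.3e-6))   # about 30 kHz
```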
Naturally, this sampling rate must also exceed the Nyquist rate. During the hold phase, the capacitor
discharges through the high input resistance R of the buffer amplifier, and the capacitor voltage is given
by vC(t) = V0 e^{−t/τ}, where V0 is the acquired value and τ = RC. The voltage during the hold phase is
not strictly constant but shows a droop. The maximum rate of change occurs at t = 0 and is given by
|v′C(t)| at t = 0, which equals V0/τ. A proper choice of the holding capacitor C can minimize the droop. If the maximum
droop is restricted to ΔV during the hold time TH, we must satisfy

V0/(RC) ≤ ΔV/TH    or    C ≥ V0TH/(RΔV)    (14.36)
To impose a lower bound, V0 is typically chosen to equal the full-scale range, and ΔV may be chosen to
equal the rms quantization error (which equals Δ/√12) or 0.5 LSB (which corresponds to 0.5Δ). Naturally,
an upper bound on C is also imposed by the fact that during the capture phase the capacitor must be able
to charge very rapidly to the input level.
A digital-to-analog converter (DAC) allows the coded and quantized signal to be converted to an analog
signal. In its most basic form, it consists of a summing operational amplifier, as shown in Figure 14.22.
Figure 14.22 A digital-to-analog converter based on a summing operational amplifier, with input resistors R0, R1, . . . , RN−1, feedback resistor RF, and switches that connect each input resistor to the input Vi or to ground
The switches connect either to ground or to the input. The output voltage V0 is a weighted sum of only
those inputs that are switched in. If the resistor values are selected as

R0 = R    R1 = R/2    R2 = R/2²    · · ·    RN−1 = R/2^{N−1}    (14.37)

the output is given by

V0 = (RF/R) Vi [bN−1 2^{N−1} + bN−2 2^{N−2} + · · · + b1 2¹ + b0 2⁰]    (14.38)
where the coefficients bk correspond to the position of the switches and equal 1 (if connected to the input)
or 0 (if connected to the ground). Practical circuits are based on modifications that use only a few different
resistor values (such as R and 2R) to overcome the problem of choosing a wide range of resistor values
(especially for high-bit converters). Of course, a 1-bit DAC simply corresponds to a constant gain.
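Equation (14.38) is just a binary-weighted sum; a minimal sketch (unit Vi, RF, and R assumed):

```python
def dac_output(bits, Vi=1.0, RF=1.0, R=1.0):
    """Output of the weighted-resistor DAC of Eq. (14.38):
    V0 = (RF/R) * Vi * sum of b_k * 2**k, with bits[k] = b_k (LSB first)."""
    return (RF / R) * Vi * sum(b * 2 ** k for k, b in enumerate(bits))

# 4-bit word b3 b2 b1 b0 = 1011 (decimal 11), listed LSB first:
print(dac_output([1, 1, 0, 1]))   # 11.0
```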
The conversion time of most practical quantizers is much larger (in the microsecond range), and as a
result, we can use such a quantizer only if it is preceded by a sample-and-hold circuit whose aperture
time is less than 5 ns.
(b) If we digitize a signal band-limited to 15 kHz, using a sample-and-hold circuit (with a capture time of
4 ns and a hold time of 10 µs) followed by a 12-bit quantizer, the conversion time of the quantizer can
be computed from

S ≤ 1/(TA + TH + TC)    or    TC = (1/S) − TA − TH = 1/[(30)10³] − TA − TH = 23.3 µs

The value of TC is well within the capability of practical quantizers.
(c) Suppose the sample-and-hold circuit is buffered by an amplifier with an input resistance of 1 MΩ. To
ensure a droop of no more than 0.5 LSB during the hold phase, we require

C ≥ V0TH/(RΔV)

Now, if V0 corresponds to the full-scale value, ΔV = 0.5 LSB = V0/2^{B+1}, and

C ≥ V0TH/[R(V0/2^{B+1})] = (10)10⁻⁶(2¹³)/10⁶ = 81.9 nF
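The capacitor bound reduces to C ≥ TH·2^{B+1}/R when ΔV = V0/2^{B+1}; checking the numbers above:

```python
def min_hold_capacitor(TH, R, B):
    """Smallest hold capacitor keeping droop under 0.5 LSB, Eq. (14.36):
    C >= V0*TH/(R*dV) with dV = V0/2**(B+1), so C >= TH * 2**(B+1) / R."""
    return TH * 2 ** (B + 1) / R

# 12-bit quantizer, 10-us hold time, 1-Mohm buffer input resistance:
C = min_hold_capacitor(10e-6, 1e6, 12)
print(C)   # 8.192e-08 F, i.e. about 81.9 nF
```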
Figure 14.23 Staircase reconstruction of a signal at low (left) and high (right) sampling rates
The smaller steps in the staircase reconstruction for higher sampling rates lead to a better approximation
of the analog signal, and these smaller steps are much easier to smooth out using lower-order filters with less
stringent specifications.
(The spectral images, centered at multiples of S, are attenuated by the sinc(f/S) envelope of the zero-order hold.)
Figure E14.9A Spectra for Example 14.9(a)
(b) If we assume zero-order-hold sampling and S = 176.4 kHz, the signal spectrum is multiplied by
sinc(f/S). This already provides a signal attenuation of −20 log[sinc(20/176.4)] = 0.184 dB at the
passband edge of 20 kHz and −20 log[sinc(156.4/176.4)] = 18.05 dB at the stopband edge of 156.4 kHz.
The new filter attenuation specifications are thus Ap = 0.5 − 0.184 = 0.316 dB and As = 60 − 18.05 =
41.95 dB, and we require a Butterworth filter whose order is given by
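The two sinc-droop figures can be verified directly:

```python
import math

def sinc(x):
    """Normalized sinc, sin(pi*x)/(pi*x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def zoh_attenuation_db(f, S):
    """Attenuation (positive dB) due to the zero-order hold's sinc(f/S) droop."""
    return -20 * math.log10(abs(sinc(f / S)))

S = 176.4e3   # four-times oversampled CD rate
print(round(zoh_attenuation_db(20e3, S), 3))     # 0.184 dB at the passband edge
print(round(zoh_attenuation_db(156.4e3, S), 2))  # 18.05 dB at the stopband edge
```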
We see that oversampling allows us to use reconstruction filters of much lower order. In fact, the
earliest commercial CD players used four-times oversampling (during the DSP stage) and second-order
or third-order Bessel reconstruction filters (for linear phase).
The effects of sinc distortion may also be minimized by using digital filters with a 1/sinc response
during the DSP phase itself (prior to reconstruction).
The history of sound recording dates back to Edison's invention of the phonograph in 1877. The analog recording of audio signals on long-playing (LP)
records suffers from poor signal-to-noise ratio (about 60 dB), inadequate separation between stereo channels
(about 30 dB), wow and flutter, and wear due to mechanical tracking of the grooves. The CD overcomes
the inherent limitations of LP records and cassette tapes and yields a signal-to-noise ratio, dynamic range,
and stereo separation all in excess of 90 dB. It makes full use of digital signal processing techniques during
both recording and playback.
14.5.1 Recording
A typical CD recording system is illustrated in Figure 14.24. The analog signal recorded from each micro-
phone is passed through an anti-aliasing filter, sampled at the industry standard of 44.1 kHz and quantized
to 16 bits in each channel. The two signal channels are then multiplexed, the multiplexed signal is encoded,
and parity bits are added for later error correction and detection. Additional bits are also added to provide
information for the listener (such as playing time and track number). The encoded data is then modulated
for efficient storage, and more bits (synchronization bits) are added for subsequent recovery of the sampling
frequency. The modulated signal is used to control a laser beam that illuminates the photosensitive layer
of a rotating glass disc. As the laser turns on or off, the digital information is etched on the photosensitive
layer as a pattern of pits and lands in a spiral track. This master disk forms the basis for mass production
of the commercial CD from thermoplastic material.
Figure 14.24 Block diagram of a typical compact disc recording system (the left and right microphone signals pass through analog anti-aliasing filters and 16-bit ADCs, and are then multiplexed, encoded, modulated, and synchronized before optical recording)
How much information can a CD store? With a sampling rate of 44.1 kHz and 32 bits per sample (in
stereo), the bit rate is (44.1)10³(32) = (1.41)10⁶ audio bits per second. After encoding, modulation, and
synchronization, the number of bits roughly triples to give a bit rate of (4.32)10⁶ channel bits per second. For
a recording time of an hour, this translates to about (15)10⁹ bits, or 2 gigabytes (with 8 bits corresponding
to a byte).
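The bit-rate arithmetic above can be sketched as follows (the factor-of-three increase to 4.32 × 10⁶ channel bits per second is the rough figure quoted in the text):

```python
# Sketch of the CD bit-rate arithmetic (44.1 kHz, 16 bits x 2 channels).
fs = 44.1e3                 # sampling rate (Hz)
audio_rate = fs * 16 * 2    # 32 bits per stereo sample pair -> audio bits/s
channel_rate = 4.32e6       # after encoding/modulation/sync (roughly 3x)

bits_per_hour = channel_rate * 3600
gigabytes = bits_per_hour / 8 / 1e9   # 8 bits per byte

print(audio_rate, bits_per_hour, round(gigabytes, 2))
```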
14.5.2 Playback
The CD player reconstructs the audio signal from the information stored on the compact disc in a series
of steps that essentially reverses the process at the recording end. A typical CD player is illustrated in
Figure 14.25. During playback, the tracks on a CD are optically scanned by a laser to produce a digital
signal. This digital signal is demodulated, and the parity bits are used for the detection of any errors (due
to manufacturing defects or dust, for example) and to correct the errors by interpolation between samples
(if possible) or to mute the signal (if correction is not possible). The demodulated signal is now ready for
reconstruction using a DAC. However, the analog reconstruction filter following the DAC must meet tight
specifications in order to remove the images that occur at multiples of 44.1 kHz. Even though the images are
well above the audible range, they must be filtered out to prevent overloading of the amplifier and speakers.
What is done in practice is to digitally oversample the signal (by a factor of 4) to a rate of 176.4 kHz and
pass it through the DAC. A digital filter that compensates for the sinc distortion of the hold operation is
also used prior to digital-to-analog conversion. Oversampling relaxes the requirements of the analog filter,
which must now smooth out much smaller steps. The sinc compensating filter also provides an additional
attenuation of 18 dB for the spectral images and further relaxes the stopband specifications of the analog
reconstruction filter. The earliest systems used a third-order Bessel filter with a 3-dB passband of 30 kHz.
Another advantage of oversampling is that it reduces the noise floor and spreads the quantization noise over
a wider bandwidth. This allows us to round the oversampled signal to 14 bits and use a 14-bit DAC to
provide the same level of performance as a 16-bit DAC.
Figure 14.25 Block diagram of a typical compact disc player (the optical pickup output passes through demodulation and error correction, four-times oversampling, 14-bit DACs, and analog lowpass filters to the amplifier and speakers)
Figure 14.27 Block diagram of a dynamic range processor (a variable-gain amplifier whose gain is adjusted by the control signal c(t) from an analog level detector)
The gain is typically restored to unity over an interval of a second or two (called the release time). Analog circuits for dynamic range processing may
use a peak detector (much like the one used for the detection of AM signals) that provides a control signal.
Digital circuits replace the rectifier by simple binary operations and simulate the control signal, the
attack characteristics, and the release time by using digital filters. In concept, the compression ratio (or gain),
the delay, and the attack and release characteristics may be adjusted independently.
14.6.1 Companders
Dynamic range expanders and compressors are often used to combat the effects of noise during transmission
of signals, especially if the dynamic range of the channel is limited. A compander is a combination of a
compressor and expander. Compression allows us to increase the signal level relative to the noise level. An
expander at the receiving end returns the signal to its original dynamic range. This is the principle behind
noise reduction systems for both professional and consumer use. An example is the Dolby noise reduction
system.
In the professional Dolby A system, the input signal is split into four bands by a lowpass filter with a
cutoff frequency of 80 Hz, a bandpass filter with band edges at [80, 3000] Hz, and two highpass filters with
cutoff frequencies of 3 kHz and 8 kHz. Each band is compressed separately before being mixed and recorded.
During playback, the process is reversed. The characteristics of the compression and expansion are shown
in Figure 14.28.
During compression, signal levels below −40 dB are boosted by a constant 10 dB, signal levels between
−40 dB and −20 dB are compressed by 2:1, and signal levels above −20 dB are not affected. During playback
(expansion), signal levels below −30 dB are cut by 10 dB, signal levels between −30 dB and −20 dB face a 1:2
expansion, and signal levels above −20 dB are not affected. In the immensely popular Dolby B system found
in consumer products (and also used by some FM stations), the input signal is not split, but a pre-emphasis
circuit is used to provide a high-frequency boost above 600 Hz. Another popular system is dbx, which uses
pre-emphasis above 1.6 kHz with a maximum high-frequency boost of 20 dB.
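The Dolby A level mapping just described can be sketched as a piecewise-linear function of the signal level in dB. The code below is an illustration (not from the text); it also checks that the playback expansion inverts the recording compression:

```python
# Sketch of the Dolby A level mapping described above, with levels in dB:
# below -40 dB boost by 10 dB, -40..-20 dB compress 2:1 (to -30..-20 dB),
# above -20 dB unchanged. Playback applies the inverse mapping.

def compress(level_db):
    if level_db < -40:
        return level_db + 10
    if level_db < -20:
        return -20 + (level_db + 20) / 2   # 2:1 compression
    return level_db

def expand(level_db):                      # playback: the inverse mapping
    if level_db < -30:
        return level_db - 10
    if level_db < -20:
        return -20 + (level_db + 20) * 2   # 1:2 expansion
    return level_db

# Expansion should undo compression at every level
for x in (-60, -45, -30, -25, -10):
    assert abs(expand(compress(x)) - x) < 1e-12
```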
474 Chapter 14 Sampling and Quantization
Figure 14.28 Compression (during recording) and expansion (during playback) characteristics for Dolby A noise reduction, with input and output levels in dB
Voice-grade telephone communication systems also make use of dynamic range compression because the
distortion caused is not significant enough to affect speech intelligibility. Two commonly used compressors
are the μ-law compander (used in North America and Japan) and the A-law compander (used in Europe).
For a signal x(t) whose peak level is normalized to unity, the two compression schemes are defined by

y_μ(x) = [ln(1 + μ|x|)/ln(1 + μ)] sgn(x)

y_A(x) = { [A|x|/(1 + ln A)] sgn(x),              0 ≤ |x| ≤ 1/A
         { [(1 + ln(A|x|))/(1 + ln A)] sgn(x),    1/A ≤ |x| ≤ 1        (14.40)
Figure 14.29 Characteristics of μ-law compressors (for μ = 0, 4, and 100) and A-law compressors (for A = 1, 2, and 100), with normalized input and output
The characteristics of these compressors are illustrated in Figure 14.29. The value μ = 255 has become
the standard in North America, and A = 100 is typically used in Europe. For μ = 0 (and A = 1), there is no
compression or expansion. The μ-law compander is nearly linear for μ|x| ≪ 1. In practice, compression is
based on a piecewise linear approximation of the theoretical μ-law characteristic and allows us to use fewer
bits to digitize the signal. At the receiving end, an expander (ideally, a true inverse of the compression law)
restores the dynamic range of the original signal (except for the effects of quantization). The inverse for
μ-law compression is

|x| = [(1 + μ)^|y| − 1]/μ        (14.41)

The quantization of the compressed signal using the same number of bits as the uncompressed signal results
in a higher quantization SNR. For example, the value μ = 255 increases the SNR by about 24 dB. Since
the SNR improves by 6 dB per bit, we can use a quantizer with fewer (only B − 4) bits to achieve the same
performance as a B-bit quantizer with no compression.
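A minimal sketch of μ-law companding per (14.40) and (14.41), with μ = 255 assumed; the round trip restores the signal, and small amplitudes are boosted before quantization:

```python
# Sketch: mu-law compression (14.40) and its inverse (14.41), mu = 255.
import math

MU = 255.0

def mu_compress(x):
    """y = [ln(1 + mu|x|)/ln(1 + mu)] sgn(x), for |x| <= 1."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_expand(y):
    """|x| = [(1 + mu)^|y| - 1]/mu, with the sign of y."""
    return math.copysign(((1 + MU) ** abs(y) - 1) / MU, y)

# Round trip restores the signal (up to floating point)
for x in (-1.0, -0.25, 0.0, 1e-3, 0.5, 1.0):
    assert abs(mu_expand(mu_compress(x)) - x) < 1e-12

# Small signals are boosted before quantization: for mu|x| << 1 the gain
# approaches mu/ln(1 + mu), about 46 for mu = 255
print(mu_compress(1e-4) / 1e-4)
```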
Chapter 14 Problems 475
CHAPTER 14 PROBLEMS
DRILL AND REINFORCEMENT
14.1 (Sampling Operations) The signal x(t) = cos(4000πt) is sampled at intervals of ts. Sketch the
sampled signal over 0 ≤ t ≤ 2 ms and also sketch the spectrum of the sampled signal over a frequency
range of at least −5 kHz ≤ f ≤ 5 kHz for the following choices of the sampling function and ts.
(a) Impulse train (ideal sampling function) with ts = 0.2 ms
(b) Impulse train (ideal sampling function) with ts = 0.25 ms
(c) Impulse train (ideal sampling function) with ts = 0.8 ms
(d) Rectangular pulse train with pulse width td = 0.1 ms and ts = 0.2 ms
(e) Flat-top (zero-order-hold) sampling with ts = 0.2 ms
14.2 (Sampling Operations) The signal x(t) = sinc(4000t) is sampled at intervals of ts. Sketch the
sampled signal over 0 ≤ t ≤ 2 ms and also sketch the spectrum of the sampled signal for the following
choices of the sampling function and ts.
(a) Impulse train (ideal sampling function) with ts = 0.2 ms
(b) Impulse train (ideal sampling function) with ts = 0.25 ms
(c) Impulse train (ideal sampling function) with ts = 0.4 ms
(d) Rectangular pulse train with pulse width td = 0.1 ms and ts = 0.2 ms
(e) Flat-top (zero-order-hold) sampling with ts = 0.2 ms
14.3 (Digital Frequency) Express the following signals using a digital frequency |F| < 0.5.
(a) x[n] = cos(4nπ/3)
(b) x[n] = cos(4nπ/7) + sin(8nπ/7)
14.4 (Sampling Theorem) Establish the Nyquist sampling rate for the following signals.
(a) x(t) = 5 sin(300πt + π/3)    (b) x(t) = cos(300πt) − sin(300πt + 51°)
(c) x(t) = 3 cos(300πt) + 5 sin(500πt)    (d) x(t) = 3 cos(300πt)sin(500πt)
(e) x(t) = 4 cos²(100πt)    (f) x(t) = 6 sinc(100t)
(g) x(t) = 10 sinc²(100t)    (h) x(t) = 6 sinc(100t)cos(200πt)
14.5 (Sampling Theorem) A sinusoid x(t) = A cos(2πf0 t) is sampled at three times the Nyquist rate
for six periods. How many samples are acquired?
14.6 (Sampling Theorem) The sinusoid x(t) = A cos(2πf0 t) is sampled at twice the Nyquist rate for
1 s. A total of 100 samples is acquired. What is f0 and the digital frequency of the sampled signal?
14.7 (Sampling Theorem) A sinusoid x(t) = sin(150πt) is sampled at a rate of five samples per three
periods. What fraction of the Nyquist sampling rate does this correspond to? What is the digital
frequency of the sampled signal?
14.8 (Spectrum of Sampled Signals) Given the spectrum X(f ) of an analog signal x(t), sketch the
spectrum of its sampled version x[n], assuming a sampling rate of 50, 40, and 30 Hz.
14.9 (Spectrum of Sampled Signals) Sketch the spectrum of the following sampled signals against the
digital frequency F .
(a) x(t) = cos(200πt), ideally sampled at 450 Hz
(b) x(t) = sin(400πt − π/4), ideally sampled at 300 Hz
(c) x(t) = cos(200πt) + sin(350πt), ideally sampled at 300 Hz
(d) x(t) = cos(200πt + π/4) + sin(250πt − π/4), ideally sampled at 120 Hz
14.10 (Sampling and Aliasing) The signal x(t) = e⁻ᵗu(t) is sampled at a rate S such that the maximum
aliased magnitude (at f = 0.5S) is less than 5% of the peak magnitude of the un-aliased image.
Estimate the sampling rate S.
14.11 (Signal Recovery) A sinusoid x(t) = sin(150πt) is ideally sampled at 80 Hz. Describe the signal
y(t) that is recovered if the sampled signal is passed through the following filters.
(a) An ideal lowpass filter with cutoff frequency fC = 10 Hz
(b) An ideal lowpass filter with cutoff frequency fC = 100 Hz
(c) An ideal bandpass filter with a passband between 60 Hz and 80 Hz
(d) An ideal bandpass filter with a passband between 60 Hz and 100 Hz
14.12 (Interpolation) The signal x[n] = {1, 2, 3, 2} (with ts = 1 s) is passed through an interpolating
filter.
(a) Sketch the output if the filter performs step interpolation.
(b) Sketch the output if the filter performs linear interpolation.
(c) What is the interpolated value at t = 2.5 s if the filter performs sinc interpolation?
(d) What is the interpolated value at t = 2.5 s if the filter performs raised cosine interpolation
(assume that R = 0.5)?
14.13 (Quantization SNR) Consider the signal x(t) = t², 0 ≤ t ≤ 2. Choose ts = 0.1 s, four quantization
levels, and rounding to find the following:
(a) The sampled signal x[n]
(b) The quantized signal xQ [n]
(c) The actual quantization signal to noise ratio SNRQ
(d) The statistical estimate SNRS of the quantization SNR
(e) An estimate of the SNR, assuming x(t) to be periodic, with period T = 2 s
14.15 (Sampling and Reconstruction) A periodic signal whose one full period is x(t) = tri(20t) is
band-limited by an ideal analog lowpass filter whose cuto frequency is fC . It is then ideally sampled
at 80 Hz. The sampled signal is reconstructed using an ideal lowpass filter whose cuto frequency is
40 Hz to obtain the signal y(t). Find y(t) if fC = 20, 40, and 60 Hz.
14.16 (Sampling and Reconstruction) Sketch the spectra at the intermediate points and at the output
of the following cascaded systems. Assume that the input is x(t) = 5sinc(5t), the sampler operates at
S = 10 Hz and performs ideal sampling, and the cuto frequency of the ideal lowpass filter is 5 Hz.
(a) x(t) → sampler → ideal LPF → y(t)
(c) x(t) → sampler → h(t) = u(t) − u(t − 0.1) → ideal LPF with |H(f)| = |1/sinc(0.1f)| → y(t)
14.17 (Sampling and Aliasing) A signal x(t) is made up of the sum of pure sines with unit peak value
at the frequencies 10, 40, 200, 220, 240, 260, 300, 320, 340, 360, 380, and 400 Hz.
(a) Sketch the magnitude and phase spectra of x(t).
(b) If x(t) is sampled at S = 140 Hz, which components, if any, will show aliasing?
(c) The sampled signal is passed through an ideal reconstruction filter whose cuto frequency is
fC = 0.5S. Write an expression for the reconstructed signal y(t) and sketch its magnitude
spectrum and phase spectrum. Is y(t) identical to x(t)? Should it be? Explain.
(d) What is the minimum sampling rate S that will allow ideal reconstruction of x(t) from its
samples?
14.18 (Sampling and Aliasing) The signal x(t) = cos(100πt) is applied to the following systems. Is it
possible to find a minimum sampling rate required to sample the system output y(t)? If so, find the
Nyquist sampling rate.
(a) y(t) = x²(t)    (b) y(t) = x³(t)
(c) y(t) = |x(t)|    (d) h(t) = sinc(200t)
(e) h(t) = sinc(500t)    (f) h(t) = δ(t − 1)
(g) y(t) = x(t)cos(400πt)    (h) y(t) = u[x(t)]
14.19 (Sampling and Aliasing) The sinusoid x(t) = cos(2πf0 t + θ) is sampled at 400 Hz and shows up
as a 150-Hz sinusoid upon reconstruction. When the signal x(t) is sampled at 500 Hz, it again shows
up as a 150-Hz sinusoid upon reconstruction. It is known that f0 < 2.5 kHz.
(a) If each sampling rate exceeds the Nyquist rate, what is f0?
(b) By sampling x(t) again at a different sampling rate, explain how you might establish whether
aliasing has occurred.
(c) If aliasing occurs and the reconstructed phase is θ, find all possible values of f0.
(d) If aliasing occurs but the reconstructed phase is −θ, find all possible values of f0.
14.20 (Sampling and Aliasing) An even symmetric periodic triangular waveform x(t) with period T = 4
is ideally sampled by the impulse train i(t) = Σ_{k=−∞}^{∞} δ(t − k) to obtain the impulse-sampled signal
xs(t) described over one period by xs(t) = δ(t) − δ(t − 2).
(a) Sketch the signals x(t) and xs(t).
(b) If xs(t) is passed through an ideal lowpass filter with a cutoff frequency of 0.6 Hz, sketch the
filter output y(t).
(c) How do xs(t) and y(t) change if the sampling function is i(t) = Σ_{k=−∞}^{∞} (−1)ᵏ δ(t − k)?
14.21 (Sampling Oscilloscopes) It is required to ideally sample a signal x(t) at S Hz and pass the
sampled signal s(t) through an ideal lowpass filter with a cutoff frequency of 0.5S Hz such that its
output y(t) = x(t/α) is a stretched-by-α version of x(t).
(a) Suppose x(t) = 1 + cos(20πt). What value of S will ensure that the output of the lowpass filter
is y(t) = x(0.1t)? Sketch the spectra of x(t), s(t), and y(t) for the chosen value of S.
(b) Suppose x(t) = 2 cos(80πt) + cos(160πt) and the sampling rate is chosen as S = 48 Hz. Sketch
the spectra of x(t), s(t), and y(t). Will the output y(t) be a stretched version of x(t) with
y(t) = x(t/α)? If so, what will be the value of α?
(c) Suppose x(t) = 2 cos(80πt) + cos(100πt). What value of S will ensure that the output of the
lowpass filter is y(t) = x(t/20)? Sketch the spectra of x(t), s(t), and y(t) for the chosen value
of S to confirm your results.
14.22 (Bandpass Sampling) The signal x(t) is band-limited to 500 Hz. The smallest frequency present
in x(t) is f0 . Find the minimum rate S at which we can sample x(t) without aliasing and explain how
we can recover the signal if
(a) f0 = 0.
(b) f0 = 300 Hz.
(c) f0 = 400 Hz.
14.23 (Bandpass Sampling) A signal x(t) is band-limited to 40 Hz and modulated by a 320-Hz carrier
to generate the modulated signal y(t). The modulated signal is processed by a square law device that
produces g(t) = y 2 (t).
(a) What is the minimum sampling rate for x(t) to prevent aliasing?
(b) What is the minimum sampling rate for y(t) to prevent aliasing?
(c) What is the minimum sampling rate for g(t) to prevent aliasing?
14.24 (Interpolation) We wish to sample a speech signal band-limited to 4 kHz using zero-order-hold
sampling.
(a) Select the sampling frequency if the spectral magnitude of the sampled signal at 4 kHz is to be
within 90% of its peak magnitude.
(b) On recovery, the signal is filtered using a Butterworth filter with an attenuation of less than 1 dB
in the passband and more than 30 dB for all image frequencies. Compute the total attenuation
in decibels due to both the sampling and filtering operations at 4 kHz and 12 kHz.
(c) What is the order of the Butterworth filter?
14.25 (Quantization SNR) A sinusoid with a peak value of 4 V is sampled and then quantized by a
12-bit quantizer whose full-scale range is 5 V. What is the quantization SNR of the quantized signal?
14.26 (Quantization Noise Power) The quantization noise power based on quantization by rounding
is σ² = Δ²/12, where Δ is the quantization step size. Find similar expressions for the quantization
noise power based on quantization by truncation and quantization by sign-magnitude truncation.
Can these estimates be improved by taking more signal samples (using the same interpolation
schemes)?
(b) Use the sinc interpolation formula

x(t) = Σ_{n=−∞}^{∞} x[n] sinc[(t − nts)/ts]

to obtain an estimate of x(0.5). With t = 0.5 and ts = 1/S = 1, compute the summation for
|n| ≤ 10, 20, 50 to generate three estimates of x(0.5). How good are these estimates? Would you
expect the estimate to converge to the actual value as more terms are included in the summation
(i.e., as more signal samples are included)? Compare the advantages and disadvantages of sinc
interpolation with the schemes in part (a).
14.38 (Interpolating Functions) To interpolate a signal x[n] by N, we use an up-sampler (that places
N − 1 zeros after each sample) followed by a filter that performs the appropriate interpolation as
shown:
x[n] up-sample N interpolating filter y[n]
The filter impulse response for step interpolation, linear interpolation, and ideal (sinc) interpolation
is
hS[n] = u[n] − u[n − (N − 1)]    hL[n] = tri(n/N)    hI[n] = sinc(n/N), |n| ≤ M
Note that the ideal interpolating function is actually of infinite length but must be truncated in
practice. Generate the test signal x[n] = cos(0.5nπ), 0 ≤ n ≤ 3. Up-sample this by N = 8 (seven
zeros after each sample) to obtain the signal xU [n]. Use the Matlab routine filter to filter xU [n],
using
(a) The step interpolation filter to obtain the filtered signal xS [n]. Plot xU [n] and xS [n] on the
same plot. Does the system perform the required interpolation? Does the result look like a sine
wave?
(b) The linear interpolation filter to obtain the filtered signal xL [n]. Plot xU [n] and a delayed (by
8) version of xL [n] (to account for the noncausal nature of hL [n]) on the same plot. Does the
system perform the required interpolation? Does the result look like a sine wave?
(c) The ideal interpolation filter (with M = 4, 8, 16) to obtain the filtered signal xI [n]. Plot xU [n]
and a delayed (by M ) version of xI [n] (to account for the noncausal nature of hI [n]) on the
same plot. Does the system perform the required interpolation? Does the result look like a
sine wave? What is the effect of increasing M on the interpolated signal? What is the effect of
increasing both M and the signal length? Explain.
14.39 (Sampling and Quantization) The signal x(t) = cos(2πt) + cos(6πt) is sampled at S = 20 Hz.
(a) Generate 200 samples of x[n] and superimpose plots of x(t) vs. t and x[n] vs. n/S. Use a 4-bit
quantizer to quantize x[n] by rounding and obtain the signal xR [n]. Superimpose plots of x(t)
vs. t and xR [n] vs. n/S. Obtain the quantization error signal e[n] = x[n] − xR [n] and plot e[n]
and its 10-bin histogram. Does it show a uniform distribution? Compute the quantization SNR
in decibels. Repeat for 800 samples. Does an increase in the signal length improve the SNR?
(b) Generate 200 samples of x[n] and superimpose plots of x(t) vs. t and x[n] vs. n/S. Use an
8-bit quantizer to quantize x[n] by rounding and obtain the signal xR [n]. Superimpose plots
of x(t) vs. t and xR [n] vs. n/S. Obtain the quantization error signal e[n] = x[n] − xR [n] and
plot e[n] and its 10-bin histogram. Compute the quantization SNR in decibels. Compare the
quantization error, the histogram, and the quantization SNR for the 8-bit and 4-bit quantizer
and comment on any differences. Repeat for 800 samples. Does an increase in the signal length
improve the SNR?
(c) Based on your knowledge of the signal x(t), compute the theoretical SNR for a 4-bit and 8-bit
quantizer. How does the theoretical value compare with the quantization SNR obtained in parts
(a) and (b)?
(d) Repeat parts (a) and (b), using quantization by truncation of x[n]. How does the quantization
SNR for truncation compare with the quantization SNR for rounding?
Chapter 15
THE DISCRETE-TIME FOURIER TRANSFORM
Using the Fourier transform pair δ(t − α) ⇔ exp(−j2παf), the spectrum Xp(f) may also be described by

xI(t) = Σ_{k=−∞}^{∞} x(kts)δ(t − kts)        Xp(f) = Σ_{k=−∞}^{∞} x(kts)e^{−j2πkts f}        (15.2)
Note that Xp(f) is periodic with period S and its central period equals SX(f). To recover the analog signal
x(t), we pass the sampled signal through an ideal lowpass filter whose gain equals 1/S over −0.5S ≤ f ≤ 0.5S.
Formally, we obtain x(t) (or its samples x(nts)) from the inverse Fourier transform result

x(t) = (1/S) ∫_{−S/2}^{S/2} Xp(f)e^{j2πf t} df        x(nts) = (1/S) ∫_{−S/2}^{S/2} Xp(f)e^{j2πf nts} df        (15.3)
Equations (15.2) and (15.3) define a transform pair. They allow us to obtain the periodic spectrum Xp(f) of
an ideally sampled signal from its samples x(nts), and to recover the samples x(nts) from the spectrum. We
point out that these relations are the exact duals of the Fourier series relations for a periodic signal xp(t) and
its discrete spectrum X[k] (the Fourier series coefficients). We can revise these relations for discrete-time
signals if we use the digital frequency F = f/S and replace x(nts) by the discrete sequence x[n] to obtain

Xp(F) = Σ_{k=−∞}^{∞} x[k]e^{−j2πkF}        x[n] = ∫_{−1/2}^{1/2} Xp(F)e^{j2πnF} dF        (the F-form)  (15.4)
15.2 Connections: The DTFT and the Fourier Transform 483
The first result defines Xp (F ) as the discrete-time Fourier transform (DTFT) of x[n]. The second result is
the inverse DTFT (IDTFT), which allows us to recover x[n] from its spectrum. The DTFT Xp (F ) is periodic
with unit period because it assumes unit spacing between samples of x[n]. The interval −0.5 ≤ F ≤ 0.5 (or
0 ≤ F ≤ 1) defines the principal period.
The DTFT relations may also be written in terms of the radian frequency Ω as

Xp(Ω) = Σ_{k=−∞}^{∞} x[k]e^{−jkΩ}        x[n] = (1/2π) ∫_{2π} Xp(Ω)e^{jnΩ} dΩ        (the Ω-form)  (15.5)

The quantity Xp(Ω) is now periodic with period Ω = 2π and represents a scaled (stretched by 2π) version
of Xp(F). The principal period of Xp(Ω) corresponds to the interval −π ≤ Ω ≤ π or 0 ≤ Ω ≤ 2π. We will
find it convenient to work with the F-form because, as in the case of Fourier transforms, it rids us of factors
of 2π in many situations.
For real signals, the DTFT shows conjugate symmetry about F = 0 (or Ω = 0), with Xp(−F) = Xp*(F).
The magnitude spectrum is thus even symmetric about F = 0 and about F = 0.5, and the DTFT is periodic
with period F = 1.
Naturally, if we work with Xp(Ω), it shows conjugate symmetry about the origin Ω = 0 and about
Ω = π, and may be plotted only over its principal period (−π ≤ Ω ≤ π) (with conjugate symmetry about
Ω = 0) or over (0 ≤ Ω ≤ 2π) (with conjugate symmetry about Ω = π). The principal range for each form is
illustrated in Figure 15.2.

Figure 15.2 The principal period of the DTFT in the F-form (0 ≤ F ≤ 1) and the Ω-form (0 ≤ Ω ≤ 2π), related by Ω = 2πF
If a real signal x[n] is even symmetric, its DTFT is always real and even symmetric in F , and has the form
Xp (F ) = A(F ). If a real signal x[n] is odd symmetric, its DTFT is always imaginary and odd symmetric
in F , and has the form Xp (F ) = jA(F ). A real symmetric signal is called a linear-phase signal. The
quantity A(F ) is called the amplitude spectrum. For linear-phase signals, it is much more convenient to
plot just the amplitude spectrum (rather than the magnitude spectrum and phase spectrum separately).
(b) The DTFT of the sequence x[n] = {1, 0, 3, −2} also follows from the definition as

Xp(F) = Σ_k x[k]e^{−j2πkF} = 1 + 3e^{−j4πF} − 2e^{−j6πF}

For finite sequences, the DTFT can be written just by inspection. Each term is the product of a sample
value at index n and the exponential e^{−j2πnF} (or e^{−jnΩ}).
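As a numeric illustration (not from the text), the by-inspection DTFT of a four-point sequence, taken here as {1, 0, 3, −2}, can be checked directly against the defining sum:

```python
# Numeric check: the DTFT of a finite sequence from the definition,
# its term-by-term (by inspection) form, and its unit periodicity in F.
import cmath

x = [1, 0, 3, -2]

def dtft(F):
    return sum(x[n] * cmath.exp(-2j * cmath.pi * n * F) for n in range(len(x)))

F = 0.3
by_inspection = 1 + 3 * cmath.exp(-4j * cmath.pi * F) - 2 * cmath.exp(-6j * cmath.pi * F)
assert abs(dtft(F) - by_inspection) < 1e-12   # matches the by-inspection form
assert abs(dtft(F) - dtft(F + 1)) < 1e-12     # periodic with unit period
assert abs(dtft(0) - sum(x)) < 1e-12          # central ordinate: Xp(0) = sum of x[n]
```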
486 Chapter 15 The Discrete-Time Fourier Transform
Entry   x[n]                     Xp(F)                                  Xp(Ω)
1       δ[n]                     1                                      1
2       αⁿu[n], |α| < 1          1/(1 − αe^{−j2πF})                     1/(1 − αe^{−jΩ})
3       nαⁿu[n], |α| < 1         αe^{−j2πF}/(1 − αe^{−j2πF})²           αe^{−jΩ}/(1 − αe^{−jΩ})²
5       α^{|n|}, |α| < 1         (1 − α²)/(1 − 2α cos(2πF) + α²)        (1 − α²)/(1 − 2α cos Ω + α²)
6       1                        δ(F)                                   2πδ(Ω)
(c) The DTFT of the exponential signal x[n] = αⁿu[n] follows from the definition and the closed form for
the resulting geometric series:

Xp(F) = Σ_{k=0}^{∞} αᵏe^{−j2πkF} = Σ_{k=0}^{∞} (αe^{−j2πF})ᵏ = 1/(1 − αe^{−j2πF}),    |α| < 1

The sum converges only if |αe^{−j2πF}| < 1, or |α| < 1 (since |e^{−j2πF}| = 1). In the Ω-form,

Xp(Ω) = Σ_{k=0}^{∞} αᵏe^{−jkΩ} = Σ_{k=0}^{∞} (αe^{−jΩ})ᵏ = 1/(1 − αe^{−jΩ}),    |α| < 1
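A quick numeric check of the closed form (a sketch with an assumed value α = 0.8):

```python
# Sketch: closed form 1/(1 - a e^{-j2 pi F}) vs. a truncated version of the
# defining geometric sum, for a = 0.8 (|a| < 1 so the series converges).
import cmath

a, F = 0.8, 0.15
closed = 1 / (1 - a * cmath.exp(-2j * cmath.pi * F))
partial = sum((a * cmath.exp(-2j * cmath.pi * F)) ** k for k in range(200))
assert abs(closed - partial) < 1e-12   # 0.8**200 is negligible
```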
(d) The signal x[n] = u[n] is a limiting form of αⁿu[n] as α → 1 but must be handled with care, since
u[n] is not absolutely summable. In analogy with the analog pair u(t) ⇔ 0.5δ(f) + 1/(j2πf), Xp(F) also
includes an impulse (now an impulse train due to the periodic spectrum). Over the principal period,

Xp(F) = 1/(1 − e^{−j2πF}) + 0.5δ(F)  (F-form)        Xp(Ω) = 1/(1 − e^{−jΩ}) + πδ(Ω)  (Ω-form)
15.3 Properties of the DTFT 487
Folding: With x[n] ⇔ Xp(F), the DTFT of the signal y[n] = x[−n] may be written (using a change of
variable m = −n) as

Yp(F) = Σ_{n=−∞}^{∞} x[−n]e^{−j2πnF} = Σ_{m=−∞}^{∞} x[m]e^{j2πmF} = Xp(−F)        (15.10)

A folding of x[n] to x[−n] results in a folding of Xp(F) to Xp(−F). The magnitude spectrum of the folded
signal stays the same, but the phase is reversed.
Time shift: With x[n] ⇔ Xp(F), the DTFT of the signal y[n] = x[n − m] may be written (using a change
of variable l = n − m) as

Yp(F) = Σ_{n=−∞}^{∞} x[n − m]e^{−j2πnF} = Σ_{l=−∞}^{∞} x[l]e^{−j2π(m+l)F} = Xp(F)e^{−j2πmF}        (15.11)

A time shift of x[n] to x[n − m] does not affect the magnitude spectrum. It augments the phase spectrum
by φ(F) = −2πmF (or φ(Ω) = −mΩ), which varies linearly with frequency.
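The folding and time-shift properties can be verified numerically on a short test sequence (values assumed for illustration):

```python
# Sketch: folding and time-shift properties of the DTFT on a finite sequence.
import cmath

def dtft(x, F):   # x maps index n -> x[n], nonzero on a finite support
    return sum(v * cmath.exp(-2j * cmath.pi * n * F) for n, v in x.items())

x = {0: 1.0, 1: -2.0, 2: 0.5, 3: 3.0}
F, m = 0.2, 4

# Folding: x[-n] <-> Xp(-F)
folded = {-n: v for n, v in x.items()}
assert abs(dtft(folded, F) - dtft(x, -F)) < 1e-12

# Time shift: x[n - m] <-> Xp(F) e^{-j2 pi m F}; magnitude is unchanged
shifted = {n + m: v for n, v in x.items()}
expected = dtft(x, F) * cmath.exp(-2j * cmath.pi * m * F)
assert abs(dtft(shifted, F) - expected) < 1e-12
assert abs(abs(dtft(shifted, F)) - abs(dtft(x, F))) < 1e-12
```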
Frequency shift: By duality, a frequency shift of Xp(F) to Xp(F − F0) yields the signal x[n]e^{j2πnF0}.

Half-period shift: If Xp(F) is shifted by 0.5 to Xp(F − 0.5), then x[n] changes to e^{jπn}x[n] = (−1)ⁿx[n].
Thus, samples of x[n] at odd index values (n = 1, 3, 5, . . .) change sign.
Modulation: Using the frequency-shift property and superposition gives the modulation property

cos(2πnF0)x[n] = x[n][e^{j2πnF0} + e^{−j2πnF0}]/2  ⇔  [Xp(F + F0) + Xp(F − F0)]/2        (15.12)
Convolution: The regular convolution of discrete-time signals results in the product of their DTFTs. The
product of discrete-time signals results in the periodic convolution of their DTFTs.
The times-n property: With x[n] ⇔ Xp(F), differentiation of the defining DTFT relation gives

dXp(F)/dF = Σ_{n=−∞}^{∞} (−j2πn)x[n]e^{−j2πnF}        (15.13)

The corresponding signal is (−j2πn)x[n], and thus the DTFT of y[n] = nx[n] is Yp(F) = (j/2π) dXp(F)/dF.
Parseval's relation: The DTFT is an energy-conserving transform, and the signal energy may be found
in either domain by Parseval's theorem:

Σ_{k=−∞}^{∞} x²[k] = ∫_1 |Xp(F)|² dF = (1/2π) ∫_{2π} |Xp(Ω)|² dΩ        (Parseval's relation)  (15.14)

The notation ∫_1 (or ∫_{2π}) means integration over any one-period duration (typically, the principal period).
Central ordinate theorems: The DTFT obeys the central ordinate relations (found by substituting F = 0
(or Ω = 0) in the DTFT, or n = 0 in the IDTFT):

x[0] = ∫_1 Xp(F) dF = (1/2π) ∫_{2π} Xp(Ω) dΩ        Xp(0) = Σ_{n=−∞}^{∞} x[n]        (central ordinates)  (15.15)
(b) The DTFT of the signal x[n] = (n + 1)αⁿu[n] may be found if we write x[n] = nαⁿu[n] + αⁿu[n] and
use superposition to give

Xp(F) = αe^{−j2πF}/(1 − αe^{−j2πF})² + 1/(1 − αe^{−j2πF}) = 1/(1 − αe^{−j2πF})²

In the Ω-form,

Xp(Ω) = αe^{−jΩ}/(1 − αe^{−jΩ})² + 1/(1 − αe^{−jΩ}) = 1/(1 − αe^{−jΩ})²

By the way, if we recognize that x[n] = αⁿu[n] ⋆ αⁿu[n], we can also use the convolution property to
obtain the same result.
(c) To find the DTFT of the N-sample exponential pulse x[n] = αⁿ, 0 ≤ n ≤ N − 1, express it as x[n] =
αⁿ(u[n] − u[n − N]) = αⁿu[n] − α^N α^{n−N} u[n − N] and use the shifting property to get

Xp(F) = 1/(1 − αe^{−j2πF}) − α^N e^{−j2πNF}/(1 − αe^{−j2πF}) = [1 − (αe^{−j2πF})^N]/(1 − αe^{−j2πF})

In the Ω-form,

Xp(Ω) = 1/(1 − αe^{−jΩ}) − α^N e^{−jNΩ}/(1 − αe^{−jΩ}) = [1 − (αe^{−jΩ})^N]/(1 − αe^{−jΩ})
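The closed form is just the finite geometric sum, which is easy to confirm numerically (assumed values α = 0.7, N = 8):

```python
# Sketch: the N-sample pulse DTFT is a finite geometric sum. With
# z = a e^{-j2 pi F}, the direct sum equals (1 - z^N)/(1 - z).
import cmath

a, N, F = 0.7, 8, 0.1
z = a * cmath.exp(-2j * cmath.pi * F)
direct = sum(z ** n for n in range(N))
closed = (1 - z ** N) / (1 - z)
assert abs(direct - closed) < 1e-12
```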
(d) The DTFT of the two-sided decaying exponential x[n] = α^{|n|}, |α| < 1, may be found by rewriting this
signal as x[n] = αⁿu[n] + α^{−n}u[−n] − δ[n] and using the folding property to give

Xp(F) = 1/(1 − αe^{−j2πF}) + 1/(1 − αe^{j2πF}) − 1

Simplification leads to the result

Xp(F) = (1 − α²)/(1 − 2α cos(2πF) + α²)    or    Xp(Ω) = (1 − α²)/(1 − 2α cos Ω + α²)
For example, with x[n] = 2(0.5)ⁿu[n] and Xp(F) = 2/(1 − 0.5e^{−j2πF}) = 4/(2 − e^{−j2πF}), the times-n
property gives the DTFT of y[n] = nx[n] as

Yp(F) = (j/2π) dXp(F)/dF = (j/2π)[−4(j2πe^{−j2πF})/(2 − e^{−j2πF})²] = 4e^{−j2πF}/(2 − e^{−j2πF})²

In the Ω-form,

Yp(Ω) = j dXp(Ω)/dΩ = j[−4(je^{−jΩ})/(2 − e^{−jΩ})²] = 4e^{−jΩ}/(2 − e^{−jΩ})²
where T is the period and f0 = 1/T is the fundamental frequency. This result is a direct consequence of the
fact that periodic extension in one domain leads to sampling in the other. By analogy, if X1 (F ) describes
the DTFT of one period x1 [n] (over 0 n N 1) of a periodic signal xp [n], the DTFT Xp (F ) of xp [n]
may be found from the sampled version of X1 (F ) as
Xp(F) = (1/N) Σ_{k=0}^{N−1} X1(kF0)δ(F − kF0)        (over one period 0 ≤ F < 1)        (15.18)
where N is the period and F0 = 1/N is the fundamental frequency. The summation index is shown from
k = 0 to k = N − 1 because the DTFT is periodic and because the result is usually expressed for one period
over 0 ≤ F < 1. The DTFT of a periodic signal with period N is a periodic impulse train with N impulses
per period. The impulse strengths correspond to N samples of X1(F), the DTFT of one period of xp[n].
Note that Xp(F) exhibits conjugate symmetry about k = 0 (corresponding to F = 0 or Ω = 0) and k = N/2
(corresponding to F = 0.5 or Ω = π). By analogy with analog signals, the quantity
XDFS[k] = (1/N) X1(kF0) = (1/N) Σ_{n=0}^{N−1} x1[n]e^{−j2πnk/N}        (15.19)
may be regarded as the Fourier series coefficients of the periodic signal xp[n]. The sequence XDFS[k] describes
the discrete Fourier series (DFS) coefficients (which we study in the next chapter). In fact, if the samples
correspond to a band-limited periodic signal xp(t) sampled above the Nyquist rate for an integer number of
periods, the coefficients XDFS[k] are an exact match to the Fourier series coefficients X[k] of xp(t).
The signal xp [n] and its DTFT Xp (F ) are shown in Figure E15.3A. Note that the DTFT is conjugate
symmetric about F = 0 (or k = 0) and F = 0.5 (or k = 0.5N = 2).
Figure E15.3A Periodic signal for Example 15.3(a) and its DTFT (a periodic impulse train over one period of the digital frequency F)
(b) Let x1[n] = {1, 0, 2, 0, 3} describe one period of a periodic signal xp[n].
Its period is N = 5. The discrete Fourier series (DFS) coefficients of xp[n] are
XDFS[k] = (1/N) X1(kF0) = 0.2 X1(kF0)
where
X1(kF0) = [1 + 2e^{−j4πF} + 3e^{−j8πF}] evaluated at F = k/5,   k = 0, 1, . . . , 4
We find that
XDFS[k] = {1.2, 0.0618 + j0.3355, −0.1618 + j0.7331, −0.1618 − j0.7331, 0.0618 − j0.3355}
The DTFT Xp(F) of the periodic signal xp[n], for one period 0 ≤ F < 1, is then
Xp(F) = Σ_{k=0}^{4} XDFS[k] δ(F − k/5)   (over one period 0 ≤ F < 1)
Both XDFS [k] and Xp (F ) are conjugate symmetric about k = 0 and about k = 0.5N = 2.5.
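As a numerical cross-check (a NumPy sketch, not part of the text), the DFS coefficients above follow directly from Eq. (15.19): NumPy's `fft` computes Σ_n x1[n]e^{−j2πnk/N}, so dividing by N gives XDFS[k].

```python
import numpy as np

# DFS coefficients of one period x1[n] = {1, 0, 2, 0, 3} via Eq. (15.19):
# XDFS[k] = (1/N) sum_n x1[n] exp(-j 2 pi n k / N), i.e., fft(x1)/N in NumPy.
x1 = np.array([1.0, 0.0, 2.0, 0.0, 3.0])
N = len(x1)
XDFS = np.fft.fft(x1) / N

print(np.round(XDFS, 4))
# Conjugate symmetry about k = 0 and k = N/2 = 2.5 means XDFS[N-k] = conj(XDFS[k]).
```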
(b) Let Xp(Ω) = 2e^{−jΩ}/(1 − 0.25e^{−j2Ω}). We factor the denominator and use partial fractions to get
Xp(Ω) = 2e^{−jΩ}/[(1 − 0.5e^{−jΩ})(1 + 0.5e^{−jΩ})] = 2/(1 − 0.5e^{−jΩ}) − 2/(1 + 0.5e^{−jΩ})
We then find x[n] = 2(0.5)^n u[n] − 2(−0.5)^n u[n].
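This inverse can be verified numerically; the sketch below (our illustration) sums the DTFT of the sequence, truncated where (0.5)^n is negligible, and compares it with the closed form.

```python
import numpy as np

# Check the partial-fraction inverse: x[n] = 2(0.5)^n u[n] - 2(-0.5)^n u[n]
# should have the DTFT Xp(W) = 2 e^{-jW} / (1 - 0.25 e^{-j2W}).
n = np.arange(60)                       # (0.5)^60 is negligible, so truncation is safe
x = 2 * 0.5**n - 2 * (-0.5)**n
W = np.linspace(-np.pi, np.pi, 201)     # digital frequency Omega
X_sum = np.array([np.sum(x * np.exp(-1j * w * n)) for w in W])
X_closed = 2 * np.exp(-1j * W) / (1 - 0.25 * np.exp(-2j * W))
err = np.max(np.abs(X_sum - X_closed))
print(err)                               # essentially zero
```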
(c) An ideal differentiator is described by Hp(F) = j2πF, |F| ≤ 0.5. Its magnitude and phase spectrum
are shown in Figure E15.4C.
Figure E15.4C DTFT of the ideal differentiator for Example 15.4(c)
To find its inverse h[n], we note that h[0] = 0 since Hp(F) is odd. For n ≠ 0, we also use the odd
symmetry of Hp(F) in the IDTFT to obtain
h[n] = ∫_{−1/2}^{1/2} j2πF [cos(2πnF) + j sin(2πnF)] dF = −4π ∫_{0}^{1/2} F sin(2πnF) dF
(d) A Hilbert transformer shifts the phase of a signal by −90°. Its magnitude and phase spectrum are
shown in Figure E15.4D.
Figure E15.4D DTFT of the Hilbert transformer for Example 15.4(d)
Its DTFT is given by Hp(F) = −j sgn(F), |F| ≤ 0.5. This is imaginary and odd. To find its inverse
h[n], we note that h[0] = 0 and
h[n] = ∫_{−1/2}^{1/2} −j sgn(F)[cos(2πnF) + j sin(2πnF)] dF = 2 ∫_{0}^{1/2} sin(2πnF) dF = [1 − cos(nπ)]/(nπ)
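The closed form for h[n] can be checked against the IDTFT integral by numerical quadrature (a NumPy sketch, not part of the text):

```python
import numpy as np

# Check h[n] = (1 - cos(n*pi)) / (n*pi) against the integral
# h[n] = 2 * integral_0^{1/2} sin(2*pi*n*F) dF, evaluated numerically.
F = np.linspace(0.0, 0.5, 100001)
nvals = np.arange(1, 6)
numeric = np.array([2 * np.trapz(np.sin(2 * np.pi * n * F), F) for n in nvals])
closed = (1 - np.cos(nvals * np.pi)) / (nvals * np.pi)
print(numeric)
print(closed)   # {2/pi, 0, 2/(3 pi), 0, 2/(5 pi)} for n = 1..5
```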
Hp(F) = Yp(F)/Xp(F)   or   Hp(Ω) = Yp(Ω)/Xp(Ω)   (15.22)
We emphasize that the transfer function is defined only for a relaxed LTI system, either as the ratio
Yp(F)/Xp(F) (or Yp(Ω)/Xp(Ω)) of the DTFT of the output y[n] and input x[n], or as the DTFT of the
impulse response h[n]. The equivalence between the time-domain and frequency-domain operations is
illustrated in Figure 15.3.
Figure 15.3 The equivalence between the time domain and frequency domain
Hp(F) = Yp(F)/Xp(F) = [B0 + B1 e^{−j2πF} + · · · + BM e^{−j2πMF}] / [1 + A1 e^{−j2πF} + · · · + AN e^{−j2πNF}]   (15.24)
In the Ω-form,
Hp(Ω) = Yp(Ω)/Xp(Ω) = [B0 + B1 e^{−jΩ} + · · · + BM e^{−jMΩ}] / [1 + A1 e^{−jΩ} + · · · + AN e^{−jNΩ}]   (15.25)
H(F) = 1 − 1/(1 − 0.5e^{−j2πF}) = −0.5e^{−j2πF}/(1 − 0.5e^{−j2πF})
In the Ω-form,
H(Ω) = 1 − 1/(1 − 0.5e^{−jΩ}) = −0.5e^{−jΩ}/(1 − 0.5e^{−jΩ})
From this we find
H(F) = Y(F)/X(F)   or   Y(F)(1 − 0.5e^{−j2πF}) = −X(F)(0.5e^{−j2πF})
In the Ω-form,
H(Ω) = Y(Ω)/X(Ω)   or   Y(Ω)(1 − 0.5e^{−jΩ}) = −X(Ω)(0.5e^{−jΩ})
Inverse transformation gives y[n] − 0.5y[n − 1] = −0.5x[n − 1].
Since the magnitude at high frequencies increases, this appears to be a highpass filter.
Comment: We can also use the central ordinate relations to compute Hp(0) and Hp(0.5) directly
from h[n]. Thus, Hp(0) = Σ_{n=0}^{3} h[n] = 0 and Hp(0.5) = Σ_{n=0}^{3} (−1)^n h[n] = −1 + 2 + 2 − 1 = 2.
(b) Let h[n] = (−0.8)^n u[n]. Identify the filter type and establish whether the impulse response is a linear-
phase sequence.
We find that Hp(0) = 1/(1 + 0.8) = 0.556 and Hp(0.5) = 1/(1 + 0.8e^{−jπ}) = 1/(1 − 0.8) = 5.
Since the magnitude at high frequencies increases, this appears to be a highpass filter.
(c) Consider a system described by y[n] = αy[n − 1] + x[n], 0 < α < 1. This is an example of a reverb
filter whose response equals the input plus a delayed version of the output. Its frequency response may
be found by taking the DTFT of both sides to give Yp(F) = αYp(F)e^{−j2πF} + Xp(F). Rearranging this
equation, we obtain
Hp(F) = Yp(F)/Xp(F) = 1/(1 − αe^{−j2πF})
Using Euler's relation, we rewrite this as
Hp(F) = 1/(1 − αe^{−j2πF}) = 1/[1 − α cos(2πF) + jα sin(2πF)]
Its magnitude and phase are given by
|Hp(F)| = 1/[1 − 2α cos(2πF) + α²]^{1/2}        φ(F) = −tan⁻¹[α sin(2πF) / (1 − α cos(2πF))]
A typical magnitude and phase plot for this system (for 0 < α < 1) is shown in Figure E15.6C. The
impulse response of this system equals h[n] = α^n u[n].
Figure E15.6C Frequency response of the system for Example 15.6(c)
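The magnitude and phase expressions can be checked against a direct evaluation of Hp(F). The NumPy sketch below uses the illustrative value α = 0.5 (our choice, not from the text):

```python
import numpy as np

# Verify the magnitude and phase expressions for the reverb filter
# Hp(F) = 1 / (1 - a*exp(-j*2*pi*F)) against direct evaluation.
a = 0.5                                  # illustrative value of alpha
F = np.linspace(-0.5, 0.5, 1001)
H = 1 / (1 - a * np.exp(-2j * np.pi * F))

mag = 1 / np.sqrt(1 - 2 * a * np.cos(2 * np.pi * F) + a**2)
ph = -np.arctan2(a * np.sin(2 * np.pi * F), 1 - a * np.cos(2 * np.pi * F))

mag_err = np.max(np.abs(np.abs(H) - mag))
ph_err = np.max(np.abs(np.angle(H) - ph))
print(mag_err, ph_err)                   # both essentially zero
```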
(d) Consider the 3-point moving average filter, y[n] = (1/3){x[n − 1] + x[n] + x[n + 1]}. The filter replaces
each input value x[n] by an average of itself and its two neighbors. Its impulse response h1[n] is simply
h1[n] = {1/3, 1/3, 1/3}. The frequency response is given by
H1(F) = Σ_{n=−1}^{1} h[n] e^{−j2πFn} = (1/3)[e^{j2πF} + 1 + e^{−j2πF}] = (1/3)[1 + 2 cos(2πF)]
Figure E15.6D Frequency response of the filters for Example 15.6(d and e)
(e) Consider the tapered 3-point averager h2[n] = {1/4, 1/2, 1/4}. Its frequency response is given by
H2(F) = Σ_{n=−1}^{1} h[n] e^{−j2πFn} = (1/4)e^{j2πF} + 1/2 + (1/4)e^{−j2πF} = 1/2 + (1/2)cos(2πF)
Figure E15.6D shows that |H2 (F )| decreases monotonically to zero at F = 0.5 and shows a much better
smoothing performance. This is a lowpass filter and actually describes the von Hann (or Hanning)
smoothing window. Other tapering schemes lead to other window types.
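The better smoothing behavior of the tapered averager is easy to see numerically (a sketch, not part of the text): H2(F) stays nonnegative and falls monotonically to zero at F = 0.5, while H1(F) changes sign.

```python
import numpy as np

# Compare the plain 3-point averager H1(F) = (1 + 2 cos(2 pi F))/3 with the
# tapered (von Hann) averager H2(F) = 0.5 + 0.5 cos(2 pi F) on 0 <= F <= 0.5.
F = np.linspace(0, 0.5, 501)
H1 = (1 + 2 * np.cos(2 * np.pi * F)) / 3
H2 = 0.5 + 0.5 * np.cos(2 * np.pi * F)

print(H1[-1], H2[-1])   # H1(0.5) = -1/3 (sign reversal), H2(0.5) = 0
# H2 decreases monotonically from 1 at F = 0 to 0 at F = 0.5.
print(np.all(np.diff(H2) <= 0))
```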
(f) Consider the difference operator described by y[n] = x[n] − x[n − 1]. Its impulse response may be
written as h[n] = δ[n] − δ[n − 1] = {1, −1}. This is actually odd symmetric about its midpoint
(n = 0.5). Its DTFT (in the Ω-form) is given by
Hp(Ω) = 1 − e^{−jΩ} = e^{−jΩ/2}(e^{jΩ/2} − e^{−jΩ/2}) = 2j sin(Ω/2)e^{−jΩ/2}
Its phase is θ(Ω) = π/2 − Ω/2 and shows a linear variation with Ω. Its amplitude A(Ω) = 2 sin(Ω/2)
increases from zero at Ω = 0 to two at Ω = π. In other words, the difference operator enhances high
frequencies and acts as a highpass filter.
Yp(Ω) = Hp(Ω)Xp(Ω) = 1/(1 − αe^{−jΩ})²
Its inverse transform gives the response as y[n] = (n + 1)α^n u[n]. We could, of course, also use
convolution to obtain y[n] = h[n] ⋆ x[n] directly in the time domain.
(b) Consider the system described by y[n] = 0.5y[n − 1] + x[n]. Its response to the step x[n] = 4u[n] is
found using Yp(F) = Hp(F)Xp(F):
Yp(F) = Hp(F)Xp(F) = [1/(1 − 0.5e^{−j2πF})] [4/(1 − e^{−j2πF}) + 2δ(F)]
(b) Consider a system described by h[n] = (0.8)^n u[n]. We find its steady-state response to the step input
x[n] = 4u[n]. The transfer function Hp(F) is given by
Hp(F) = 1/(1 − 0.8e^{−j2πF})
We evaluate Hp(F) at the input frequency F = 0 (corresponding to dc):
Hp(F)|_{F=0} = 1/(1 − 0.8) = 5
The steady-state response is then yss[n] = (5)(4) = 20.
(c) Design a 3-point FIR filter with impulse response h[n] = {α, β, α} that completely blocks the frequency
F = 1/3 and passes the frequency F = 0.125 with unit gain. What is the dc gain of this filter?
The filter transfer function is Hp(F) = αe^{j2πF} + β + αe^{−j2πF} = β + 2α cos(2πF).
From the information given, we have
H(1/3) = 0 = β + 2α cos(2π/3) = β − α        H(0.125) = 1 = β + 2α cos(π/4) = β + √2 α
This gives α = β = 0.4142 and h[n] = {0.4142, 0.4142, 0.4142}.
The dc gain of this filter is H(0) = Σ h[n] = 3(0.4142) = 1.2426.
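A quick numerical check of this design (our sketch, not part of the text):

```python
import numpy as np

# Verify the 3-point FIR design h[n] = {a, b, a} with H(F) = b + 2a cos(2 pi F):
# the null at F = 1/3 forces b = a; unit gain at F = 0.125 then gives a = 1/(1 + sqrt(2)).
a = b = 1 / (1 + np.sqrt(2))            # 0.4142
H = lambda F: b + 2 * a * np.cos(2 * np.pi * F)

print(H(1/3), H(0.125), H(0))            # ~0, 1, and the dc gain 3a = 1.2426
```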
15.6 Connections
The DTFT and Fourier series are duals. This allows us to carry over the operational properties of the
Fourier series virtually unchanged to the DTFT. A key difference is that while the discrete sequence x[n] is
usually real, the discrete spectrum X[k] is usually complex. Table 15.3 lists some of the analogies between
the properties of the DTFT and Fourier series.
The DTFT can also be related to the Fourier transform by recognizing that sampling in one domain
results in a periodic extension in the other. If we sample an analog signal x(t) to get x[n], we obtain
Xp (F ) as a periodic extension of X(f ) with unit period. For signals whose Fourier transforms X(f ) are not
band-limited, the results are often unwieldy.
EXAMPLE 15.9 (Connecting the DTFT, Fourier Series, and Fourier Transform)
(a) To find the DTFT of x[n] = rect(n/2N), with M = 2N + 1, we start with the Fourier series result
x(t) = M sinc(Mt)/sinc(t)  ⇔  X[k] = rect(k/2N)
Using duality, we obtain rect(n/2N) ⇔ M sinc(MF)/sinc(F).
(b) To find the DTFT of x[n] = cos(2πnα), we can start with the Fourier transform pair
cos(2παt) ⇔ 0.5[δ(f + α) + δ(f − α)]
Sampling x(t), (t → n), results in the periodic extension of X(f) with period 1 (f → F), and we get
cos(2πnα) ⇔ 0.5[δ(F + α) + δ(F − α)]
(c) To find the DTFT of x[n] = sinc(2nα), we can start with the Fourier transform pair
x(t) = 2α sinc(2αt) ⇔ X(f) = rect(f/2α)
Sampling x(t), (t → n), results in the periodic extension of X(f) with period 1 (f → F), and we get
2α sinc(2nα) ⇔ rect(F/2α)
(d) The DTFT of x[n] = e^{−n}u[n] = (e^{−1})^n u[n] may be found readily from the defining DTFT relation as
Xp(F) = 1/(1 − e^{−1}e^{−j2πF})
Suppose we start with the analog Fourier transform pair e^{−t}u(t) ⇔ 1/(1 + j2πf). Sampling the
time signal to x[n] = e^{−n}u[n] leads to periodic extension of X(f) and gives the DTFT as
Xp(F) = Σ_k 1/[1 + j2π(F + k)]
But does this equal 1/(1 − e^{−1}e^{−j2πF})? The closed-form result is pretty hard to find in
a handbook of mathematical tables! However, if we start with the ideally sampled
signal x(t) = Σ e^{−n}δ(t − n), its Fourier transform (using the shifting property) is
X(f) = 1 + e^{−1}e^{−j2πf} + e^{−2}e^{−j4πf} + e^{−3}e^{−j6πf} + · · · = 1/(1 − e^{−1}e^{−j2πf})
Since the DTFT corresponds to X(f) with f → F, we have the required equivalence!
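The geometric-series route is easy to verify numerically; the sketch below (our illustration) sums the defining DTFT relation directly, truncated where e^{−n} is negligible, and compares it with the closed form.

```python
import numpy as np

# Check that the DTFT of x[n] = e^{-n} u[n], summed directly as a geometric
# series, matches the closed form 1 / (1 - e^{-1} e^{-j 2 pi F}).
n = np.arange(60)                        # e^{-60} is negligible
F = np.linspace(-0.5, 0.5, 201)
X_sum = np.array([np.sum(np.exp(-n) * np.exp(-2j * np.pi * f * n)) for f in F])
X_closed = 1 / (1 - np.exp(-1) * np.exp(-2j * np.pi * F))
err = np.max(np.abs(X_sum - X_closed))
print(err)                                # essentially zero
```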
Figure 15.4 Features of the various transforms: the Fourier transform (t ⇔ f, both continuous), the Fourier series (periodic t with period T ⇔ discrete f with spacing f0), the DTFT (discrete n ⇔ F with unit period), and the DFT (discrete n ⇔ discrete k, both periodic)
a complicated form. Of course, the DTFT of signals that are not absolutely summable usually includes
impulses (u[n], for example) or does not exist (growing exponentials, for example). These shortcomings of
the DTFT can be removed as follows.
Since the DTFT describes a discrete-time signal as a sum of weighted harmonics, we include a real
weighting factor r^{−k} and redefine the transform as the sum of exponentially weighted harmonics of the form
z = re^{j2πF} to obtain
X(z) = Σ_{k=−∞}^{∞} x[k] e^{−j2πkF} r^{−k} = Σ_{k=−∞}^{∞} x[k] (re^{j2πF})^{−k} = Σ_{k=−∞}^{∞} x[k] z^{−k}   (15.26)
This defines the two-sided z-transform of x[k]. The weighting r^{−k} acts as a convergence factor to allow
transformation of exponentially growing signals. The DTFT of x[n] may thus be viewed as its z-transform
X(z) evaluated for r = 1 (along the unit circle in the z-plane).
For causal signals, we change the lower limit in the sum to zero to obtain the one-sided z-transform.
This allows us to analyze discrete LTI systems with arbitrary initial conditions in much the same way that
the Laplace transform does for analog LTI systems. The z-transform is discussed in later chapters.
[Figure: magnitude spectrum of an ideal lowpass filter, with unit gain for |F| ≤ FC]
Its impulse response hLP[n] is found using the IDTFT to give
hLP[n] = ∫_{−0.5}^{0.5} HLP(F)e^{j2πnF} dF = ∫_{−FC}^{FC} e^{j2πnF} dF = 2∫_{0}^{FC} cos(2πnF) dF = sin(2πnF)/(πn) |_{0}^{FC} = 2FC sinc(2nFC)   (15.27)
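This result can be cross-checked by evaluating the IDTFT integral numerically (a NumPy sketch; FC = 0.2 is our illustrative choice, and NumPy's `sinc` is the same normalized sinc, sin(πx)/(πx), used in the text):

```python
import numpy as np

# Verify h_LP[n] = 2*FC*sinc(2*n*FC) by computing the IDTFT integral
# over the passband |F| <= FC numerically.
FC = 0.2
F = np.linspace(-FC, FC, 20001)
nvals = np.arange(6)
numeric = np.array([np.trapz(np.cos(2 * np.pi * n * F), F) for n in nvals])
closed = 2 * FC * np.sinc(2 * nvals * FC)   # imaginary part integrates to zero
print(numeric)
print(closed)
```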
Figure 15.6 shows two ways to obtain a highpass transfer function from this lowpass filter.
Figure 15.6 Two ways to obtain a highpass filter from a lowpass filter H(F): the shifted version H(F − 0.5) and the complement 1 − H(F)
The first way is to shift H(F) by 0.5 to obtain HH1(F) = H(F − 0.5), a highpass filter whose cutoff frequency
is given by FH1 = 0.5 − FC. This leads to
hH1[n] = (−1)^n h[n] = 2(−1)^n FC sinc(2nFC)   (with cutoff frequency 0.5 − FC)   (15.30)
Alternatively, we see that HH2(F) = 1 − H(F) also describes a highpass filter, but with a cutoff frequency
given by FH2 = FC, and this leads to
hH2[n] = δ[n] − h[n] = δ[n] − 2FC sinc(2nFC)   (with cutoff frequency FC)
Figure 15.7 shows how to transform a lowpass filter to a bandpass filter or to a bandstop filter.
Figure 15.7 Transforming a lowpass filter to a bandpass filter or a bandstop filter with center frequency F0 and band edges F0 ± FC
To obtain a bandpass filter with a center frequency of F0 and band edges [F0 − FC, F0 + FC], we simply shift
H(F) by F0 to get HBP(F) = H(F + F0) + H(F − F0), and obtain
hBP[n] = 2 cos(2πnF0)h[n] = 4FC cos(2πnF0)sinc(2nFC)
A bandstop filter with center frequency F0 and band edges [F0 − FC, F0 + FC] can be obtained from the
bandpass filter using HBS(F) = 1 − HBP(F), to give
hBS[n] = δ[n] − hBP[n] = δ[n] − 4FC cos(2πnF0)sinc(2nFC)   (15.33)
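These spectral transformations are easy to try numerically. The sketch below truncates the ideal lowpass prototype (the cutoff FC = 0.2, center F0 = 0.25, and truncation length are our illustrative choices) and spot-checks the gains; because of truncation the checks are approximate:

```python
import numpy as np

# Build highpass and bandpass impulse responses from a truncated ideal
# lowpass prototype h[n] = 2*FC*sinc(2*n*FC) and spot-check their gains.
FC, N = 0.2, 500
n = np.arange(-N, N + 1)
h = 2 * FC * np.sinc(2 * n * FC)                  # lowpass, cutoff 0.2
hH1 = (-1.0)**n * h                               # highpass, cutoff 0.5 - FC = 0.3
F0 = 0.25
hBP = 4 * FC * np.cos(2 * np.pi * n * F0) * np.sinc(2 * n * FC)  # bandpass at F0

H = lambda hh, F: np.sum(hh * np.exp(-2j * np.pi * F * n)).real
print(H(hH1, 0.0), H(hH1, 0.5))    # ~0 in the stopband, ~1 at F = 0.5
print(H(hBP, 0.25), H(hBP, 0.0))   # ~1 at the center frequency, ~0 at dc
```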
The closed form for the summation is readily available. The quantity WD (F ) describes the Dirichlet
kernel or the aliased sinc function and also equals the periodic extension of sinc(MF ) with period F = 1.
Figure 15.8 shows the Dirichlet kernel for N = 3, 5, and 10.
The Dirichlet kernel has some very interesting properties. Over one period, we observe the following:
1. It shows N maxima, a positive main lobe of width 2/M, decaying positive and negative sidelobes of
width 1/M, with 2N zeros at F = k/M, k = ±1, ±2, . . . , ±N.
2. Its area equals unity, and it attains a maximum peak value of M (at the origin) and a minimum peak
value of unity (at F = 0.5 for odd N). Increasing M increases the mainlobe height and compresses the
sidelobes. The ratio R of the main lobe and peak sidelobe magnitudes stays nearly constant (between
4 and 4.7) for finite M, and R → 1.5π ≈ 4.71 (or 13.5 dB) for very large M. As M → ∞, WD(F)
approaches a unit impulse.
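The peak and unit-area properties can be checked numerically by writing the kernel as the finite cosine sum WD(F) = 1 + 2Σ_{k=1}^{N} cos(2πkF), which avoids the removable singularity at F = 0 (a sketch with N = 10, our illustrative choice):

```python
import numpy as np

# Check two properties of the Dirichlet kernel over one period:
# its peak value is M = 2N + 1 at F = 0, and its area is unity.
N = 10
M = 2 * N + 1
F = np.linspace(-0.5, 0.5, 20001)
WD = 1 + 2 * np.sum([np.cos(2 * np.pi * k * F) for k in range(1, N + 1)], axis=0)

peak = WD[np.argmin(np.abs(F))]          # value at F = 0
area = np.trapz(WD, F)
print(peak, area)                         # M = 21 and 1.0
```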
To find the DTFT of the triangular window x[n] = tri(n/M), we start with the DTFT pair
rect(n/2N) ⇔ M sinc(MF)/sinc(F), M = 2N + 1. Since rect(n/2N) ⋆ rect(n/2N) = (2N + 1)tri(n/(2N+1)) = M tri(n/M),
tri(n/M) ⇔ M sinc²(MF)/sinc²(F) = WF(F)   (15.36)
The quantity WF (F ) is called the Fejer kernel. Figure 15.9 shows the Fejer kernel for N = 3, 5, and 10.
It is always positive and shows M maxima over one period with a peak value of M at the origin.
Figure 15.10 (a) Dirichlet kernel N = 10 (b) Ideal LPF with FC = 0.25 (c) Their periodic convolution
What causes the overshoot and ringing is the slowly decaying sidelobes of WD(F) (the Dirichlet kernel
corresponding to the rectangular window). The overshoot and oscillations persist no matter how large the
truncation index N. To eliminate overshoot and reduce the oscillations, we multiply hLP[n] by a tapered
window whose DTFT sidelobes decay much faster (as we also did for Fourier series smoothing). The
triangular window (whose DTFT describes the Fejer kernel) is one familiar example; its periodic
convolution with the ideal filter spectrum leads to a monotonic response and the complete absence of
overshoot and oscillations, as shown in Figure 15.11. All windows are designed to minimize (or eliminate)
the overshoot in the spectrum while maintaining as abrupt a transition as possible.
Figure 15.11 (a) Fejer kernel N = 10 (b) Ideal LPF with FC = 0.25 (c) Their periodic convolution
tp(F0) = −∠Hp(F0)/(2πF0)   (phase delay)        tg(F) = −(1/2π) d∠Hp(F)/dF   (group delay)   (15.37)
Performance in the time domain is based on the characteristics of the impulse response and step response.
Figure 15.12 Spectrum and impulse response of first-order lowpass and highpass filters
We can find the half-power frequency of the lowpass filter from its magnitude squared function:
|HLP(F)|² = |1/[1 − α cos(2πF) + jα sin(2πF)]|² = 1/[1 − 2α cos(2πF) + α²]   (15.39)
At the half-power frequency, we have |HLP(F)|² = 0.5, and this gives
1/[1 − 2α cos(2πF) + α²] = 0.5   or   F = (1/2π) cos⁻¹[(α² − 1)/(2α)],  |F| < 0.5   (15.40)
The phase and group delay of the lowpass filter are given by
θ(F) = −tan⁻¹[α sin(2πF)/(1 − α cos(2πF))]        tg(F) = −(1/2π) dθ(F)/dF = [α cos(2πF) − α²]/[1 − 2α cos(2πF) + α²]   (15.41)
For low frequencies, the phase is nearly linear and the group delay is nearly constant, and they can be
approximated by
θ(F) ≈ −2παF/(1 − α),  |F| ≪ 1        tg(F) = −(1/2π) dθ(F)/dF ≈ α/(1 − α),  |F| ≪ 1   (15.42)
The time constant of the lowpass filter describes how fast the impulse response decays to a specified fraction
of its initial value. For a specified fraction ε, we obtain the effective time constant in samples as
τ = ln ε / ln α   (15.44)
This value is rounded up to an integer, if necessary. We obtain the commonly measured 1% or 40-dB time
constant if ε = 0.01 (corresponding to an attenuation of 40 dB) and the 0.1% or 60-dB time constant if
ε = 0.001. If the sampling interval ts is known, the time constant in seconds can be computed as
τ ts. The 60-dB time constant is also called the reverberation time. For higher-order filters whose
impulse response contains several exponential terms, the effective time constant is dictated by the term with
the slowest decay (that dominates the response).
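Eq. (15.44) is a one-liner to apply; the sketch below (our illustration, with α = 0.8 and a 100-Hz sampling rate as example values) computes the 40-dB and 60-dB time constants:

```python
import math

# Effective time constant (in samples) from Eq. (15.44): tau = ln(eps)/ln(alpha),
# rounded up to an integer.
def time_constant(alpha, eps):
    return math.ceil(math.log(eps) / math.log(alpha))

n40 = time_constant(0.8, 0.01)    # 1% (40-dB) time constant
n60 = time_constant(0.8, 0.001)   # 0.1% (60-dB) time constant, the reverberation time
print(n40, n60)                   # 21 and 31 samples
ts = 1 / 100                      # sampling interval for S = 100 Hz
print(n60 * ts)                   # 0.31 s
```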
Now, consider the first-order filter where we change the sign of α to give
HHP(F) = 1/(1 + αe^{−j2πF})        HHP(0) = 1/(1 + α)        HHP(0.5) = 1/(1 − α),  0 < α < 1   (15.45)
This describes a highpass filter because its magnitude at low frequencies (closer to F = 0) is small, with
|HHP(0)| < |HHP(0.5)|, as shown in Figure 15.12. The impulse response of this filter is given by
hHP[n] = (−α)^n u[n].
It is the alternating sign changes in the samples of hHP[n] that cause rapid time variations and lead to its
highpass behavior. Its spectrum HHP(F) is related to HLP(F) by HHP(F) = HLP(F − 0.5). This means
that a lowpass cutoff frequency F0 corresponds to the frequency 0.5 − F0 of the highpass filter.
H(F) = (A + Be^{−j2πF})/(B + Ae^{−j2πF}) = X/Y
where the numerator phasor is X = A∠0 + B∠−Ω and the denominator phasor is Y = B∠0 + A∠−Ω,
with Ω = 2πF. The vectors X and Y are of equal length for any value of F (or Ω).
The magnitude of H(F) is thus unity.
(b) Let h[n] = (0.8)^n u[n]. Identify the filter type, establish whether the impulse response is a linear-phase
sequence, and find its 60-dB time constant.
We have Hp(F) = 1/(1 − 0.8e^{−j2πF}), Hp(0) = 1/(1 − 0.8) = 5, and Hp(0.5) = 1/(1 − 0.8e^{−jπ}) = 1/(1 + 0.8) = 0.556.
The sequence h[n] is not a linear-phase sequence because it shows no symmetry.
Since the magnitude at high frequencies decreases, this appears to be a lowpass filter. The 60-dB time
constant of this filter is found as
τ = ln ε / ln α = ln 0.001 / ln 0.8 = 30.96 ⇒ 31 samples
For a sampling frequency of S = 100 Hz, this corresponds to τ = 31ts = 31/S = 0.31 s.
(c) Let h[n] = 0.8δ[n] + 0.36(−0.8)^{n−1}u[n − 1]. Identify the filter type and establish whether the impulse
response is a linear-phase sequence.
We find Hp(F) = 0.8 + 0.36e^{−j2πF}/(1 + 0.8e^{−j2πF}) = (0.8 + e^{−j2πF})/(1 + 0.8e^{−j2πF}). So, Hp(0) = 1 and Hp(0.5) = (0.8 − 1)/(1 − 0.8) = −1.
The sequence h[n] is not a linear-phase sequence because it shows no symmetry.
Since the magnitude is identical at low and high frequencies, this could be a bandstop or an allpass
filter. Since the numerator and denominator coefficients in Hp(F) appear in reversed order, it is an
allpass filter.
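The allpass behavior is easy to confirm numerically, assuming the reversed-coefficient transfer function (0.8 + e^{−j2πF})/(1 + 0.8e^{−j2πF}) derived above (a sketch, not part of the text):

```python
import numpy as np

# Reversed numerator/denominator coefficients give an allpass filter:
# Hp(F) = (0.8 + e^{-j 2 pi F}) / (1 + 0.8 e^{-j 2 pi F}).
F = np.linspace(-0.5, 0.5, 2001)
z1 = np.exp(-2j * np.pi * F)
H = (0.8 + z1) / (1 + 0.8 * z1)

dev = np.max(np.abs(np.abs(H) - 1))
print(dev)                                # |Hp(F)| = 1 at every frequency
H0 = H[np.argmin(np.abs(F))].real         # Hp(0) = 1
H5 = H[np.argmin(np.abs(F - 0.5))].real   # Hp(0.5) = -1
print(H0, H5)
```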
(d) Let h[n] = δ[n] − δ[n − 8]. Identify the filter type and establish whether the impulse response is a
linear-phase sequence.
We find Hp(F) = 1 − e^{−j16πF}. So, Hp(0) = 0 and Hp(0.5) = 0.
This suggests a bandpass filter with a maximum between F = 0 and F = 0.5. On closer examination,
we find that Hp(F) = 0 at F = 0, 0.125, 0.25, 0.375, 0.5 (where e^{−j16πF} = 1) and |Hp(F)| = 2 at the four
frequencies halfway between these. This multi-humped response is sketched in Figure E15.11D and
describes a comb filter.
We may write h[n] = {1, 0, 0, 0, 0, 0, 0, 0, −1}. This is a linear-phase sequence because it shows odd
symmetry about the index n = 4.
Comment: Note that the difference equation y[n] = x[n] − x[n − 8] is reminiscent of an echo filter.
Figure E15.11D Spectrum of the comb filter for Example 15.11(d)
[Figure: backward Euler and forward Euler algorithms]
Most numerical integration algorithms estimate the area y[n] from y[n − 1] by using step, linear, or
quadratic interpolation between the samples of x[n], as illustrated in Figure 15.15.
Only Simpson's rule finds y[n] over two time steps from y[n − 2]. Discrete algorithms are good approximations
only at low digital frequencies (F < 0.1, say). Even when we satisfy the sampling theorem, low
digital frequencies mean high sampling rates, well in excess of the Nyquist rate. This is why the sampling
rate is a critical factor in the frequency-domain performance of these operators. Another factor is stability.
For example, if Simpson's rule is used to convert analog systems to discrete systems, it results in an unstable
system. The trapezoidal integration algorithm and the backward difference are two popular choices.
Figure 15.15 Numerical integration algorithms: the rectangular rule and the trapezoidal rule
HT(F)/HI(F) = j2πF/[j2 tan(πF)] = πF/tan(πF) = 1 − (πF)²/3 − (πF)⁴/45 − · · ·
Figure E15.12 shows the magnitude and phase error by plotting the ratio HT(F)/HI(F). For an
ideal algorithm, this ratio should equal unity at all frequencies. At low frequencies (F ≪ 1), we
have tan(πF) ≈ πF and HT(F)/HI(F) ≈ 1, and the trapezoidal rule is a valid approximation to
integration. The phase response matches the ideal phase at all frequencies.
Figure E15.12 Frequency response of the numerical algorithms for Example 15.12: magnitude and phase of the integration algorithms (rectangular, trapezoidal, Simpson) and of the difference algorithms (backward, forward, central) over 0 ≤ F ≤ 0.2
(b) Simpson's numerical integration algorithm yields the following normalized result:
HS(F)/HI(F) = (2πF/3) [2 + cos(2πF)] / sin(2πF)
It displays a perfect phase match for all frequencies, but has an overshoot in its magnitude response
past F = 0.25 and thus amplifies high frequencies.
Yp(F) = Xp(F)e^{j2πF} − Xp(F) = Xp(F)[e^{j2πF} − 1]        HF(F) = Yp(F)/Xp(F) = e^{j2πF} − 1
HF(F)/HD(F) = 1 + (1/2!) j(2πF) − (1/3!)(2πF)² + · · ·
Again, we observe correspondence only at low digital frequencies (or high sampling rates). The high
frequencies are amplified, making the algorithm susceptible to high-frequency noise. The phase response
also deviates from the true phase, especially at high frequencies.
HC(F)/HD(F) = sin(2πF)/(2πF) = 1 − (1/3!)(2πF)² + (1/5!)(2πF)⁴ − · · ·
We see a perfect match only for the phase at all frequencies.
Figure 15.16 Spectra of a signal band-limited to B and sampled at three sampling rates S, NS, and S/M
The spectrum of the oversampled signal shows a gain of N but covers a smaller fraction of the principal
period. The spectrum of a signal sampled at the lower rate S/M is a stretched version with a gain of 1/M.
In terms of the digital frequency F, the period of all three sampled versions is unity. One period of the
spectrum of the signal sampled at S Hz extends to B/S, whereas the spectrum of the same signal sampled at
NS Hz extends only to B/(NS), and the spectrum of the same signal sampled at S/M Hz extends farther
out to BM/S. After an analog signal is first sampled, all subsequent sampling rate changes are typically
made by manipulating the signal samples (and not by resampling the analog signal). The key to the process
lies in interpolation and decimation.
Figure 15.17 Zero interpolation of a signal leads to spectrum compression
This describes Yp(F) as a scaled (compressed) version of the periodic spectrum Xp(F) and leads to N-fold
spectrum replication. The spectrum of the interpolated signal shows N compressed images in the principal
period |F| ≤ 0.5, with the central image occupying the frequency range |F| ≤ 0.5/N. This is exactly
analogous to the Fourier series result where spectrum zero interpolation produced replication (compression)
of the periodic signal.
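The compression property Yp(F) = Xp(NF) can be demonstrated directly from the DTFT definition (a NumPy sketch; the short test signal and the factor N = 3 are our illustrative choices):

```python
import numpy as np

# Zero interpolation by N compresses the spectrum: if y[n] is x[n] with
# N - 1 zeros inserted after each sample, then Yp(F) = Xp(N*F).
x = np.array([1.0, 2.0, 3.0, 2.0, 1.0])    # an arbitrary short test signal
Nup = 3
y = np.zeros(Nup * len(x))
y[::Nup] = x                               # zero-interpolated (up-sampled) signal

def dtft(sig, F):
    n = np.arange(len(sig))
    return np.array([np.sum(sig * np.exp(-2j * np.pi * f * n)) for f in np.atleast_1d(F)])

F = np.linspace(-0.5, 0.5, 401)
err = np.max(np.abs(dtft(y, F) - dtft(x, Nup * F)))
print(err)                                 # essentially zero
```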
Similarly, the spectrum of a decimated signal y[n] = x[nM] is a stretched version of the original spectrum
and described by
Yp(F) = (1/M) Xp(F/M)   (15.49)
The factor 1/M ensures that we satisfy Parseval's relation. If the spectrum of the original signal x[n] extends
to |F| ≤ 0.5/M in the central period, the spectrum of the decimated signal extends over |F| ≤ 0.5, and there
is no aliasing or overlap between the spectral images.
Figure E15.13 The spectra of the signals for Example 15.13
An up-sampler inserts N − 1 zeros between signal samples and results in an N-fold zero-interpolated
signal corresponding to the higher sampling rate NS. Zero interpolation by N results in N-fold replication
of the spectrum whose central image over |F| ≤ 0.5/N corresponds to the spectrum of the oversampled signal
(except for a gain of N). The spurious images are removed by passing the zero-interpolated signal through
a lowpass filter with a gain of N and a cutoff frequency of FC = 0.5/N to obtain the required oversampled
signal. If the original samples were acquired at a rate that exceeds the Nyquist rate, the cutoff frequency of
the digital lowpass filter can be made smaller than 0.5/N. This process yields exact results as long as the
underlying analog signal has been sampled above the Nyquist rate and the filtering operation is assumed
ideal.
Figure E15.14A The spectra at various points for the system of Example 15.14
Figure 15.20 Sampling rate reduction by an integer factor M: the input x[n] (at rate S) passes through an anti-aliasing digital filter (gain 1, FC = 0.5/M) to give v[n], and is then down-sampled (keeping every Mth sample) to give d[n] (at rate S/M)
First, the sampled signal is passed through a lowpass filter (with unit gain) to ensure that it is band-limited
to |F| ≤ 0.5/M in the principal period and to prevent aliasing during the decimation stage. It is then
decimated by a down-sampler that retains every Mth sample. The spectrum of the decimated signal is a
stretched version that extends to |F| ≤ 0.5 in the principal period. Note that the spectrum of the decimated
signal has a gain of 1/M as required, and it is for this reason that we use a lowpass filter with unit gain.
Figure 15.21 shows the signals and their spectra at various points in the system for M = 2. This process
yields exact results as long as the underlying analog signal has been sampled at a rate that is M times the
Nyquist rate (or higher) and the filtering operation is assumed ideal.
Figure 15.21 The spectra of various signals during sampling rate reduction
Fractional sampling-rate changes by a factor N/M can be implemented by cascading a system that
increases the sampling rate by N (interpolation) and a system that reduces the sampling rate by M (decimation).
In fact, we can replace the two lowpass filters that are required in the cascade (both of which operate at the
sampling rate NS) by a single lowpass filter whose gain is N and whose cutoff frequency is the smaller of
0.5/M and 0.5/N, as shown in Figure 15.22.
CHAPTER 15 PROBLEMS
DRILL AND REINFORCEMENT
15.1 (DTFT of Sequences) Find the DTFT Xp (F ) of the following signals and evaluate Xp (F ) at
F = 0, F = 0.5, and F = 1.
(a) x[n] = {1, 2, 3, 2, 1}        (b) x[n] = {1, 2, 0, 2, 1}
(c) x[n] = {1, 2, 2, 1}        (d) x[n] = {1, 2, −2, −1}
15.2 (DTFT from Definition) Find the DTFT Xp (F ) of the following signals.
(a) x[n] = (0.5)^{n+2} u[n]        (b) x[n] = n(0.5)^{2n} u[n]        (c) x[n] = (0.5)^{n+2} u[n − 1]
(d) x[n] = n(0.5)^{n+2} u[n − 1]        (e) x[n] = (n + 1)(0.5)^n u[n]        (f) x[n] = (0.5)^n u[n]
15.3 (Properties) The DTFT of x[n] is X(F) = 4/(2 − e^{−j2πF}). Find the DTFT of the following signals
without first computing x[n].
(a) y[n] = x[n − 2]        (b) d[n] = nx[n]
(c) p[n] = x[−n]        (d) g[n] = x[n] − x[n − 1]
(e) h[n] = x[n] ⋆ x[n]        (f) r[n] = x[n]e^{jnπ}
(g) s[n] = x[n]cos(nπ)        (h) v[n] = x[n − 1] + x[n + 1]
15.4 (Properties) The DTFT of the signal x[n] = (0.5)^n u[n] is X(F). Find the time signal corresponding
to the following transforms without first computing Xp(F).
(a) Y(F) = X(−F)        (b) G(F) = X(F − 0.25)
(c) H(F) = X(F + 0.5) + X(F − 0.5)        (d) P(F) = X*(F)
(e) R(F) = X²(F)        (f) S(F) = X(F)X*(F)
(g) D(F) = X(F)cos(4πF)        (h) T(F) = X(F + 0.25) − X(F − 0.25)
15.5 (Spectrum of Discrete Periodic Signals) Sketch the DTFT magnitude spectrum and phase
spectrum of the following signals over |F| ≤ 0.5.
(a) x[n] = cos(0.5nπ)
(b) x[n] = cos(0.5nπ) + sin(0.25nπ)
(c) x[n] = cos(0.5nπ)cos(0.25nπ)
15.6 (System Representation) Find the transfer function H(F ) and the system dierence equation for
the following systems described by their impulse response h[n].
(a) h[n] = (1/3)^n u[n]        (b) h[n] = [1 − (1/3)^n]u[n]
(c) h[n] = n(1/3)^n u[n]        (d) h[n] = 0.5δ[n]
(e) h[n] = δ[n] − (1/3)^n u[n]        (f) h[n] = [(1/3)^n + (1/2)^n]u[n]
15.7 (System Representation) Find the transfer function and impulse response of the following systems
described by their dierence equation.
(a) y[n] + 0.4y[n − 1] = 3x[n]        (b) y[n] − (1/6)y[n − 1] − (1/6)y[n − 2] = 2x[n] + x[n − 1]
(c) y[n] = 0.2x[n]        (d) y[n] = x[n] + x[n − 1] + x[n − 2]
15.8 (System Representation) Set up the system difference equation for the following systems described
by their transfer function.
(a) Hp(F) = 6e^{−j2πF}/(3e^{−j2πF} + 1)        (b) Hp(F) = (3 − e^{−j2πF})/(e^{−j4πF} + 2e^{−j2πF} + 3)
(c) Hp(F) = 6/(1 − 0.3e^{−j2πF})        (d) Hp(F) = (6 − 6e^{−j2πF} + 4e^{−j4πF})/[(1 + 2e^{−j2πF})(1 + 4e^{−j2πF})]
15.9 (Steady-State Response) Consider the filter y[n] + 0.25y[n − 2] = 2x[n] + 2x[n − 1]. Find the
filter transfer function H(F) of this filter and use this to compute the steady-state response to the
following inputs.
(a) x[n] = 5u[n]        (b) x[n] = 3 cos(0.5nπ + π/4) − 6 sin(0.5nπ − π/4)
(c) x[n] = 3 cos(0.5nπ)u[n]        (d) x[n] = 2 cos(0.25nπ) + 3 sin(0.5nπ)
15.10 (Response of Digital Filters) Consider the 3-point averaging filter described by the difference
equation y[n] = (1/3)(x[n] + x[n − 1] + x[n − 2]).
(a) Find its impulse response h[n].
(b) Find and sketch its frequency response H(F ).
(c) Find its response to x[n] = cos(nπ/3 + π/4).
15.11 (Frequency Response) Sketch the frequency response of the following digital filters and describe
the function of each filter.
(a) y[n] + 0.9y[n − 1] = x[n]        (b) y[n] − 0.9y[n − 1] = x[n]
(c) y[n] + 0.9y[n − 1] = x[n − 1]        (d) y[n] = x[n] − x[n − 4]
15.12 (DTFT of Periodic Signals) Find the DTFT of the following periodic signals described over one
period, with N samples per period.
(a) x[n] = {1, 0, 0, 0, 0}, N = 5
(b) x[n] = {1, 0, 1, 0}, N = 4
(c) x[n] = {3, 2, 1, 2}, N = 4
(d) x[n] = {1, 2, 3}, N = 3
15.13 (System Response to Periodic Signals) Consider the 3-point moving average filter described by
y[n] = x[n] + x[n − 1] + x[n − 2]. Find its response to the following periodic inputs.
(a) x[n] = {1, 0, 0, 0, 0}, N = 5
(b) x[n] = {1, 0, 1, 0}, N = 4
(c) x[n] = {3, 2, 1}, N = 3
(d) x[n] = {1, 2}, N = 2
15.14 (Sampling, Filtering, and Aliasing) The sinusoid x(t) = cos(2πf0t) is sampled at 1 kHz to yield
the sampled signal x[n]. The signal x[n] is passed through a 2-point averaging filter whose difference
equation is y[n] = 0.5(x[n] + x[n − 1]). The filtered output y[n] is reconstructed using an ideal lowpass
filter with a cutoff frequency of 0.5 kHz to generate the analog signal y(t). Find an expression for
y(t) if f0 = 0.2, 0.5, and 0.75 kHz.
(a) x[n] = sinc(0.2n)        (b) h[n] = sin(0.2nπ)        (c) g[n] = sinc²(0.2n)
15.16 (DTFT) Compute the DTFT of the following signals and plot their amplitude and phase spectra.
(a) x[n] = δ[n + 1] + δ[n − 1]        (b) x[n] = δ[n + 1] − δ[n − 1]
(c) x[n] = δ[n + 1] + δ[n] + δ[n − 1]        (d) x[n] = u[n + 1] − u[n − 1]
(e) x[n] = u[n + 1] − u[n − 2]        (f) x[n] = u[n] − u[n − 3]
15.17 (DTFT) Compute the DTFT of the following signals and sketch the magnitude and phase spectrum
over −0.5 ≤ F ≤ 0.5.
(a) x[n] = cos(0.4nπ)        (b) x[n] = cos(0.2nπ + π/4)
(c) x[n] = cos(nπ)        (d) x[n] = cos(1.2nπ + π/4)
(e) x[n] = cos(2.4nπ)        (f) x[n] = cos²(2.4nπ)
15.18 (DTFT) Compute the DTFT of the following signals and sketch the magnitude and phase spectrum
over −0.5 ≤ F ≤ 0.5.
(a) x[n] = sinc(0.2n)        (b) x[n] = sinc(0.2n)cos(0.4nπ)
(c) x[n] = sinc²(0.2n)        (d) x[n] = sinc(0.2n)cos(0.1nπ)
(e) x[n] = sinc²(0.2n)cos(0.4nπ)        (f) x[n] = sinc²(0.2n)cos(0.2nπ)
15.20 (Properties) The DTFT of a real signal x[n] is X(F ). How is the DTFT of the following signals
related to X(F )?
(a) y[n] = x[−n] (b) g[n] = x[n] ⋆ x[n]
(c) r[n] = x[n/4] (zero interpolation) (d) s[n] = (−1)ⁿ x[n]
(e) h[n] = (j)ⁿ x[n] (f) v[n] = cos(2πnF0)x[n]
(g) w[n] = cos(nπ)x[n] (h) z[n] = [1 + cos(nπ)]x[n]
(i) b[n] = (−1)^(n/2) x[n] (j) p[n] = e^(jnπ) x[n − 1]
15.21 (Properties) Let x[n] = tri(0.2n) and let X(F) be its DTFT. Compute the following without
evaluating X(F).
(a) The DTFT of the odd part of x[n]
(b) The value of X(F) at F = 0 and F = 0.5
(c) The phase of X(F)
(d) The phase of the DTFT of x[−n]
(e) The integrals ∫ X(F) dF and ∫ X(F − 0.5) dF, both over −0.5 ≤ F ≤ 0.5
(f) The integrals ∫ |X(F)|² dF and ∫ |dX(F)/dF|² dF, both over −0.5 ≤ F ≤ 0.5
(g) The derivative dX(F)/dF
15.22 (DTFT of Periodic Signals) Find the DTFT of the following periodic signals with N samples
per period.
(a) x[n] = {1, 1, 1, 1, 1}, N = 5 (b) x[n] = {1, 1, 1, 1}, N = 4
(c) x[n] = (−1)ⁿ (d) x[n] = 1 (n even) and x[n] = 0 (n odd)
15.23 (IDTFT) Compute the IDTFT x[n] of the following X(F) described over |F| ≤ 0.5.
15.24 (Properties) Confirm that the DTFT of x[n] = nαⁿu[n], using the following methods, gives identical
results.
(a) From the defining relation for the DTFT
(b) From the DTFT of y[n] = αⁿu[n] and the times-n property
(c) From the convolution result αⁿu[n] ⋆ αⁿu[n] = (n + 1)αⁿu[n + 1] and the shifting property
(d) From the convolution result αⁿu[n] ⋆ αⁿu[n] = (n + 1)αⁿu[n + 1] and superposition
15.25 (Properties) Find the DTFT of the following signals using the approach suggested.
(a) Starting with the DTFT of u[n], show that the DTFT of x[n] = sgn[n] is X(F) = −j cot(πF).
(b) Starting with rect(n/2N) ⇔ (2N + 1) sinc[(2N + 1)F]/sinc(F), use the convolution property to find the
DTFT of x[n] = tri(n/N).
(c) Starting with rect(n/2N) ⇔ (2N + 1) sinc[(2N + 1)F]/sinc(F), use the modulation property to find the
DTFT of x[n] = cos(nπ/2N)rect(n/2N).
15.26 (DTFT of Periodic Signals) One period of a periodic signal is given by x[n] = {1, 2, 0, 1}.
(a) Find the DTFT of this periodic signal.
(b) The signal is passed through a filter whose impulse response is h[n] = sinc(0.8n). What is the
filter output?
15.27 (Frequency Response) Consider a system whose frequency response H(F) in magnitude/phase
form is H(F) = A(F)e^(jθ(F)). Find the response y[n] of this system for the following inputs.
(a) x[n] = δ[n] (b) x[n] = 1 (c) x[n] = cos(2πnF0) (d) x[n] = (−1)ⁿ
15.28 (Frequency Response) Consider a system whose frequency response is H(F) = 2 cos(πF)e^(−jπF).
Find the response y[n] of this system for the following inputs.
(a) x[n] = δ[n] (b) x[n] = cos(0.5nπ)
(c) x[n] = cos(nπ) (d) x[n] = 1
(e) x[n] = e^(j0.4nπ) (f) x[n] = (j)ⁿ
15.29 (Frequency Response) The signal x[n] = {1, 0.5} is applied to a system with frequency response
H(F), and the resulting output is y[n] = δ[n] − 2δ[n − 1] − δ[n − 2]. What is H(F)?
15.30 (Frequency Response) Consider the 2-point averager y[n] = 0.5x[n] + 0.5x[n − 1].
(a) Sketch its magnitude (or amplitude) and phase spectrum.
(b) Find its response to the input x[n] = cos(nπ/2).
(c) Find its response to the input x[n] = δ[n].
(d) Find its response to the inputs x[n] = 1 and x[n] = 3 + 2δ[n] − 4 cos(nπ/2).
(e) Show that its half-power frequency is given by FC = cos⁻¹(√0.5)/π.
(f ) What is the half-power frequency of a cascade of N such 2-point averagers?
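The half-power result in part (e) is easy to verify numerically; a Python/NumPy sketch (this text's own computational problems use Matlab):

```python
import numpy as np

# 2-point averager y[n] = 0.5(x[n] + x[n-1]): H(F) = 0.5(1 + e^(-j2*pi*F)),
# so |H(F)| = |cos(pi*F)| and the half-power point solves cos^2(pi*FC) = 0.5.
F = np.linspace(0, 0.5, 100001)
H = 0.5 * (1 + np.exp(-2j * np.pi * F))
FC = F[np.argmin(np.abs(np.abs(H)**2 - 0.5))]
print(FC)                        # approximately 0.25 = arccos(sqrt(0.5))/pi
```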
15.31 (Frequency Response) Consider the 3-point averaging filter h[n] = (1/3){1, 1, 1}.
(a) Sketch its magnitude (or amplitude) and phase spectrum. What type of filter is this?
(b) Find its phase delay and group delay. Is this a linear-phase filter?
(c) Find its response to the input x[n] = cos(nπ/3).
(d) Find its response to the input x[n] = δ[n].
(e) Find its response to the input x[n] = (−1)ⁿ.
(f) Find its response to the input x[n] = 3 + 3δ[n] − 6 cos(nπ/3).
15.32 (Frequency Response) Consider a filter described by h[n] = (1/3){1, 1, 1}.
(a) Sketch its magnitude (or amplitude) and phase spectrum. What type of filter is this?
(b) Find its phase delay and group delay. Is this a linear-phase filter?
(c) Find its response to the input x[n] = cos(nπ/3).
(d) Find its response to the input x[n] = δ[n].
(e) Find its response to the input x[n] = (−1)ⁿ.
(f) Find its response to the input x[n] = 3 + 3δ[n] − 3 cos(2nπ/3).
15.33 (Frequency Response) Consider the tapered 3-point averaging filter h[n] = {0.5, 1, 0.5}.
(a) Sketch its magnitude (or amplitude) and phase spectrum. What type of filter is this?
(b) Find its phase delay and group delay. Is this a linear-phase filter?
(c) Find its response to the input x[n] = cos(nπ/2).
(d) Find its response to the input x[n] = δ[n − 1].
(e) Find its response to the input x[n] = 1 + (−1)ⁿ.
(f) Find its response to the input x[n] = 3 + 2δ[n] − 4 cos(nπ/2).
15.34 (Frequency Response) Consider the 2-point differencing filter h[n] = δ[n] − δ[n − 1].
(a) Sketch its magnitude (or amplitude) and phase spectrum. What type of filter is this?
(b) Find its phase and group delay. Is this a linear-phase filter?
(c) Find its response to the input x[n] = cos(nπ/2).
(d) Find its response to the input x[n] = u[n].
(e) Find its response to the input x[n] = (−1)ⁿ.
(f) Find its response to the input x[n] = 3 + 2u[n] − 4 cos(nπ/2).
15.35 (Frequency Response) Consider the 5-point averaging filter h[n] = (1/9){1, 2, 3, 2, 1}.
(a) Sketch its magnitude (or amplitude) and phase spectrum. What type of filter is this?
(b) Find its phase delay and group delay. Is this a linear-phase filter?
(c) Find its response to the input x[n] = cos(nπ/4).
(d) Find its response to the input x[n] = δ[n].
(e) Find its response to the input x[n] = (−1)ⁿ.
(f) Find its response to the input x[n] = 9 + 9δ[n] − 9 cos(nπ/4).
15.36 (Frequency Response) Consider the recursive filter y[n] + 0.5y[n − 1] = 0.5x[n] + x[n − 1].
(a) Sketch its magnitude (or amplitude) and phase spectrum. What type of filter is this?
(b) Find its phase delay and group delay. Is this a linear-phase filter?
(c) Find its response to the input x[n] = cos(nπ/2).
(d) Find its response to the input x[n] = δ[n].
(e) Find its response to the input x[n] = (−1)ⁿ.
(f) Find its response to the input x[n] = 4 + 2δ[n] − 4 cos(nπ/2).
15.37 (System Response to Periodic Signals) Consider the filter y[n] − 0.5y[n − 1] = 3x[n − 1]. Find
its response to the following periodic inputs.
(a) x[n] = {4, 0, 2, 1}, N = 4
(b) x[n] = (−1)ⁿ
(c) x[n] = 1 (n even) and x[n] = 0 (n odd)
(d) x[n] = (1/4)(0.5)ⁿ(u[n] − u[n − N]), N = 3
15.38 (System Response to Periodic Signals) Find the response if a periodic signal whose one period
is x[n] = {4, 3, 2, 3} (with period N = 4) is applied to the following filters.
(a) h[n] = sinc(0.4n)
(b) H(F ) = tri(2F )
(c) y[n] = x[n] + x[n − 1] + x[n − 2] + x[n − 3]
15.39 (Frequency Response) Consider the filter realization of Figure P15.39. Find the frequency re-
sponse H(F ) of the overall system if the impulse response h1 [n] of the filter in the forward path is
given by
Figure P15.39 Filter realization for Problem 15.39 (a feedback structure with a unit delay z⁻¹ in the feedback path)
15.40 (Interconnected Systems) Consider two systems with impulse response h1[n] = δ[n] + αδ[n − 1]
and h2[n] = (0.5)ⁿu[n]. Find the frequency response and impulse response of the combination if
(a) The two systems are connected in parallel with α = 0.5.
(b) The two systems are connected in parallel with α = −0.5.
(c) The two systems are connected in cascade with α = 0.5.
(d) The two systems are connected in cascade with α = −0.5.
526 Chapter 15 The Discrete-Time Fourier Transform
15.41 (Interconnected Systems) Consider two systems with impulse response h1[n] = δ[n] + αδ[n − 1]
and h2[n] = (0.5)ⁿu[n]. Find the response y[n] of the overall system to the input x[n] = (0.5)ⁿu[n] if
(a) The two systems are connected in parallel with α = 0.5.
(b) The two systems are connected in parallel with α = −0.5.
(c) The two systems are connected in cascade with α = 0.5.
(d) The two systems are connected in cascade with α = −0.5.
15.42 (Interconnected Systems) Consider two systems with impulse response h1[n] = δ[n] + αδ[n − 1]
and h2[n] = (0.5)ⁿu[n]. Find the response of the overall system to the input x[n] = cos(0.5nπ) if
(a) The two systems are connected in parallel with α = 0.5.
(b) The two systems are connected in parallel with α = −0.5.
(c) The two systems are connected in cascade with α = 0.5.
(d) The two systems are connected in cascade with α = −0.5.
15.43 (The Fourier Transform and DTFT) The analog signal x(t) = e⁻ᵗu(t) is ideally sampled at
the sampling rate S to yield the analog impulse train xi(t) = x(t) Σ_{k=−∞}^{∞} δ(t − kts), where ts = 1/S.
The sampled version of x(t) corresponds to x[n] = e^(−nts)u[n].
(a) Find the Fourier transforms X(f ) and Xi (f ), and the DTFT Xp (F ).
(b) Show that Xp (F ) and Xi (f ) are identical.
(c) How is Xp (F ) related to X(f )?
15.44 (Filter Concepts) Let H1(F) be an ideal lowpass filter with a cutoff frequency of F1 = 0.2 and
let H2(F) be an ideal lowpass filter with a cutoff frequency of F2 = 0.4. Make a block diagram of
how you would connect these filters (in cascade and/or parallel) to implement the following filters
and express their transfer function in terms of H1(F) and/or H2(F).
(a) An ideal highpass filter with a cutoff frequency of FC = 0.2
(b) An ideal highpass filter with a cutoff frequency of FC = 0.4
(c) An ideal bandpass filter with a passband covering 0.2 ≤ F ≤ 0.4
(d) An ideal bandstop filter with a stopband covering 0.2 ≤ F ≤ 0.4
15.45 (Linear-Phase Filters) Consider a filter whose impulse response is h[n] = {α, β, α}.
(a) Find its frequency response in amplitude/phase form H(F) = A(F)e^(jθ(F)) and confirm that the
phase θ(F) varies linearly with F.
(b) Determine the values of α and β if this filter is to completely block a signal at the frequency
F = 1/3 while passing a signal at the frequency F = 1/8 with unit gain.
(c) What will be the output of the filter in part (b) if the input is x[n] = cos(0.5nπ)?
15.46 (Echo Cancellation) A microphone, whose frequency response is limited to 5 kHz, picks up not
only a desired signal x(t) but also its echoes. However, only the first echo, arriving after 1.25 ms, has
a significant amplitude (of 0.5) relative to the desired signal x(t).
(a) Set up an equation relating the analog output and analog input of the microphone.
(b) The microphone signal is to be processed digitally in an effort to remove the echo. Set up a
difference equation relating the sampled output and sampled input of the microphone using an
arbitrary sampling rate S.
(c) Argue that if S equals the Nyquist rate, the difference equation for the microphone will contain
fractional delays and be difficult to implement.
(d) If the sampling rate S can be varied only in steps of 1 kHz, choose the smallest S that will
ensure integer delays and find the difference equation of the microphone. Use this to set up the
difference equation of an echo-canceling system that can recover the original signal.
(e) Sketch the frequency response of each system of the previous part. What is the difference
equation, transfer function, and frequency response of the overall system?
15.47 (Sampling and Aliasing) A speech signal x(t) band-limited to 4 kHz is sampled at 10 kHz to
obtain x[n]. The sampled signal x[n] is filtered by an ideal bandpass filter whose passband extends
over 0.03 ≤ F ≤ 0.3 to obtain y[n]. Will the sampled output y[n] be contaminated if x(t) also includes
an undesired signal at the following frequencies?
15.48 (DTFT and Sampling) The transfer function of a digital filter is H(F) = rect(2F)e^(−j0.5πF).
(a) What is the impulse response h[n] of this filter?
(b) The analog signal x(t) = cos(0.25πt) is applied to the system shown.
15.49 (Sampling and Filtering) The analog signal x(t) = cos(2πf0t) is applied to the following system:
15.50 (Sampling and Filtering) A periodic signal whose one full period is x(t) = 5tri(20t) is applied to
the following system:
x(t) pre-filter sampler H(F ) ideal LPF y(t)
The pre-filter band-limits the signal x(t) to B Hz. The sampler is ideal and operates at S = 80 Hz.
The transfer function of the digital filter is H(F) = tri(2F). The cutoff frequency of the ideal lowpass
filter is fC = 40 Hz. Find y(t) if
15.51 (Sampling and Filtering) A speech signal x(t) whose spectrum extends to 5 kHz is filtered by an
ideal analog lowpass filter with a cutoff frequency of 4 kHz to obtain the filtered signal xf(t).
(a) If xf (t) is to be sampled to generate y[n], what is the minimum sampling rate that will permit
recovery of xf (t) from y[n]?
(b) If x(t) is first sampled to obtain x[n] and then digitally filtered to obtain z[n], what must be
the minimum sampling rate and the impulse response h[n] of the digital filter such that z[n] is
identical to y[n]?
528 Chapter 15 The Discrete-Time Fourier Transform
15.52 (DTFT and Sampling) A signal is reconstructed using a filter that performs step interpolation
between samples. The reconstruction sampling interval is ts .
(a) What is the impulse response h(t) and transfer function H(f ) of the interpolating filter?
(b) What is the transfer function HC (f ) of a filter that can compensate for the non-ideal recon-
struction?
15.53 (DTFT and Sampling) A signal is reconstructed using a filter that performs linear interpolation
between samples. The reconstruction sampling interval is ts .
(a) What is the impulse response h(t) and transfer function H(f ) of the interpolating filter?
(b) What is the transfer function HC (f ) of a filter that can compensate for the non-ideal recon-
struction?
15.54 (Response of Numerical Algorithms) Simpson's and Tick's rules for numerical integration find
y[n] (the approximation to the area) over two time steps from y[n − 2] and are described by

Simpson's rule: y[n] = y[n − 2] + (x[n] + 4x[n − 1] + x[n − 2])/3
Tick's rule: y[n] = y[n − 2] + 0.3584x[n] + 1.2832x[n − 1] + 0.3584x[n − 2]

(a) Find the transfer function H(F) corresponding to each rule.
(b) For each rule, sketch |H(F)| over 0 ≤ F ≤ 0.5 and compare with the spectrum of an ideal
integrator.
(c) It is claimed that the coefficients in Tick's rule optimize H(F) in the range 0 < F < 0.25. Does
your comparison support this claim?
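Both rules have the form y[n] = y[n − 2] + b0x[n] + b1x[n − 1] + b2x[n − 2], so H(F) = (b0 + b1e^(−j2πF) + b2e^(−j4πF))/(1 − e^(−j4πF)). A numerical comparison against the ideal integrator 1/(j2πF), sketched in Python/NumPy rather than Matlab:

```python
import numpy as np

def H_rule(F, b):
    """H(F) for y[n] = y[n-2] + b[0]x[n] + b[1]x[n-1] + b[2]x[n-2]."""
    z = np.exp(-2j * np.pi * F)
    return (b[0] + b[1] * z + b[2] * z**2) / (1 - z**2)

F = np.linspace(0.01, 0.49, 49)
H_simpson = H_rule(F, [1/3, 4/3, 1/3])
H_tick = H_rule(F, [0.3584, 1.2832, 0.3584])
H_ideal = 1 / (2j * np.pi * F)       # ideal integrator, unit time step

# At low frequencies both rules track the ideal integrator closely;
# plotting the three magnitudes over 0 < F < 0.5 shows where they diverge.
print(abs(H_simpson[0]), abs(H_tick[0]), abs(H_ideal[0]))
```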
15.55 (Sampling and Filtering) The analog signal x(t) = sinc²(10t) is applied to the following system:
15.56 (Modulation) A signal x[n] is modulated by cos(0.5nπ) to obtain the signal x1[n]. The modulated
signal x1[n] is filtered by a filter whose transfer function is H(F) to obtain the signal y1[n].
(a) Sketch the spectra X(F), X1(F), and Y1(F) if X(F) = tri(4F) and H(F) = rect(2F).
(b) The signal y1[n] is modulated again by cos(0.5nπ) to obtain the signal y2[n], and filtered by
H(F) to obtain y[n]. Sketch Y2(F) and Y(F).
(c) Are the signals x[n] and y[n] related in any way?
15.57 (Sampling) A speech signal band-limited to 4 kHz is ideally sampled at a sampling rate S, and
the sampled signal x[n] is processed by the squaring filter y[n] = x²[n] whose output is ideally
reconstructed to obtain y(t) as follows:
15.58 (Sampling) Consider the signal x(t) = sin(200πt)cos(120πt). This signal is sampled at a sampling
rate S1 , and the sampled signal x[n] is ideally reconstructed at a sampling rate S2 to obtain y(t).
What is the reconstructed signal if
(a) S1 = 400 Hz and S2 = 200 Hz.
(b) S1 = 200 Hz and S2 = 100 Hz.
(c) S1 = 120 Hz and S2 = 120 Hz.
15.59 (Up-Sampling) The analog signal x(t) = 4000 sinc²(4000t) is sampled at 12 kHz to obtain the signal
x[n]. The sampled signal is up-sampled (zero interpolated) by N to obtain the signal y[n] as follows:
x(t) sampler x[n] up-sample N y[n]
Sketch the spectra X(F) and Y(F) over −1 ≤ F ≤ 1 for N = 2 and N = 3.
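The spectrum compression caused by zero interpolation, Y(F) = X(NF), can be seen directly from an FFT. A short sketch (Python/NumPy; the test frequency F0 = 0.125 is our choice so that an integer number of cycles fits the record):

```python
import numpy as np

x = np.cos(2 * np.pi * 0.125 * np.arange(64))   # single spectral line at F = 0.125
N = 2
y = np.zeros(N * len(x))
y[::N] = x                                      # N-1 zeros between samples

Y = np.abs(np.fft.fft(y))
Fy = np.fft.fftfreq(len(y))                     # digital frequencies of the bins
peaks = sorted(set(round(float(abs(f)), 4) for f in Fy[np.argsort(Y)[-4:]]))
print(peaks)                                    # [0.0625, 0.4375]: F0/N and its image
```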
15.60 (Up-Sampling) The signal x[n] is up-sampled (zero interpolated) by N to obtain the signal y[n].
Sketch X(F) and Y(F) over −1 ≤ F ≤ 1 for the following cases.
(a) x[n] = sinc(0.4n), N = 2
(b) X(F ) = tri(4F ), N = 2
(c) X(F ) = tri(6F ), N = 3
15.61 (Linear Interpolation) Consider a system that performs linear interpolation by a factor of N . One
way to construct such a system is to perform up-sampling by N (zero interpolation between signal
samples) and pass the up-sampled signal through an interpolating filter with impulse response h[n]
whose output is the linearly interpolated signal y[n] as shown.
15.62 (Interpolation) The input x[n] is applied to a system that up-samples by N followed by an ideal
lowpass filter with a cutoff frequency of FC to generate the output y[n]. Draw a block diagram of the
system. Sketch the spectra at various points of this system and find y[n] and Y (F ) for the following
cases.
(a) x[n] = sinc(0.4n), N = 2, FC = 0.4
(b) X(F ) = tri(4F ), N = 2, FC = 0.375
15.63 (Decimation) The signal x(t) = 2 cos(100πt) is ideally sampled at 400 Hz to obtain the signal x[n],
and the sampled signal is decimated by N to obtain the signal y[n]. Sketch the spectra X(F) and
Y(F) over −1 ≤ F ≤ 1 for the cases N = 2 and N = 3.
15.64 (Decimation) The signal x[n] is decimated by N to obtain the decimated signal y[n]. Sketch X(F )
and Y(F) over −1 ≤ F ≤ 1 for the following cases.
(a) x[n] = sinc(0.4n), N = 2
(b) X(F ) = tri(4F ), N = 2
(c) X(F ) = tri(3F ), N = 2
15.66 (Interpolation and Decimation) For each of the following systems, X(F ) = tri(4F ). The digital
lowpass filter is ideal and has a cutoff frequency of FC = 0.25 and a gain of 2. Sketch the spectra at
the various points over −1 ≤ F ≤ 1 and determine whether any systems produce identical outputs.
(a) x[n] up-sample N = 2 digital LPF down-sample M = 2 y[n]
(b) x[n] down-sample M = 2 digital LPF up-sample N = 2 y[n]
(c) x[n] down-sample M = 2 up-sample N = 2 digital LPF y[n]
15.67 (Interpolation and Decimation) For each of the following systems, X(F ) = tri(3F ). The digital
lowpass filter is ideal and has a cutoff frequency of FC = 1/3 and a gain of 2. Sketch the spectra at the
various points over −1 ≤ F ≤ 1 and determine whether any systems produce identical outputs.
(a) x[n] up-sample N = 2 digital LPF down-sample M = 2 y[n]
(b) x[n] down-sample M = 2 digital LPF up-sample N = 2 y[n]
(c) x[n] down-sample M = 2 up-sample N = 2 digital LPF y[n]
15.68 (Interpolation and Decimation) You are asked to investigate the claim that interpolation by N
and decimation by N performed in any order, as shown, will recover the original signal.
15.69 (Fractional Delay) The following system is claimed to implement a half-sample delay:
x(t) sampler H(F ) ideal LPF y(t)
The signal x(t) is band-limited to fC, and the sampler is ideal and operates at the Nyquist rate. The
digital filter is described by H(F) = e^(−jπF), |F| ≤ FC, and the cutoff frequency of the ideal lowpass
filter is fC.
(a) Sketch the magnitude and phase spectra at the various points in this system.
(b) Show that y(t) = x(t 0.5ts ) (corresponding to a half-sample delay).
15.70 (Fractional Delay) In practice, the signal y[n] = x[n − 0.5] may be generated from x[n] using
interpolation by 2 (to give x[0.5n]), followed by a one-sample delay (to give x[0.5(n − 1)]) and decimation
by 2 (to give x[n − 0.5]), as follows:
x[n] up-sample 2 digital LPF 1-sample delay down-sample 2 y[n]
Let X(F) = tri(4F). Sketch the magnitude and phase spectra at the various points and show that
Y(F) = X(F)e^(−jπF) (implying a half-sample delay).
15.71 (Group Delay) Show that the group delay tg of a filter described by its transfer function H(F)
may be expressed as

tg = [H′R(F)HI(F) − H′I(F)HR(F)] / (2π|H(F)|²)

Here, the primed quantities describe derivatives with respect to F. For an FIR filter with impulse
response h[n], these primed quantities are easily found by recognizing that H′R(F) = Re{H′(F)} and
H′I(F) = Im{H′(F)}, where

H(F) = Σ_{k=0}^{N} h[k]e^(−j2πkF)    H′(F) = −j2π Σ_{k=0}^{N} k h[k]e^(−j2πkF)

For an IIR filter described by H(F) = N(F)/D(F), the overall group delay may be found as the
difference of the group delays of N(F) and D(F). Use these results to find the group delay of
(a) h[n] = {α, 1}. (b) H(F) = 1 + αe^(−j2πF). (c) H(F) = (α + e^(−j2πF))/(1 + αe^(−j2πF)).
15.72 (Interconnected Systems) The signal x[n] = cos(2πnF0) forms the input to a cascade of two
systems whose impulse responses are described by h1[n] = {0.25, 0.5, 0.25} and h2[n] = δ[n] − δ[n − 1].
(a) Generate and plot x[n] over 0 ≤ n ≤ 100 if F0 = 0.1. Can you identify its period from the plot?
(b) Let the output of the first system be fed to the second system. Plot the output of each
system. Are the two outputs periodic? If so, do they have the same period? Explain.
(c) Reverse the order of cascading and plot the output of each system. Are the two outputs
periodic? If so, do they have the same period? Does the order of cascading alter the overall
response? Should it?
(d) Let the first system be described by y[n] = x²[n]. Find and plot the output of each system in
the cascade. Repeat after reversing the order of cascading. Does the order of cascading alter
the intermediate response? Does the order of cascading alter the overall response? Should it?
15.73 (Frequency Response) This problem deals with the cascade and parallel connection of two FIR
filters whose impulse response is given by
h1 [n] = {1, 2, 1} h2 [n] = {2, 0, 2}
(a) Plot the frequency response of each filter and identify the filter type.
(b) The frequency response of the parallel connection of h1[n] and h2[n] is HP1(F). If the second
filter is delayed by one sample and then connected in parallel with the first, the frequency
response changes to HP2(F). It is claimed that HP1(F) and HP2(F) have the same magnitude
and differ only in phase. Use Matlab to argue for or against this claim.
(c) Obtain the impulse responses hP1[n] and hP2[n] and plot their frequency response. Use Matlab
to compare their magnitude and phase. Do the results justify your argument? What type of
filters do hP1[n] and hP2[n] describe?
(d) The frequency response of the cascade of h1[n] and h2[n] is HC1(F). If the second filter is
delayed by one sample and then cascaded with the first, the frequency response changes to
HC2(F). It is claimed that HC1(F) and HC2(F) have the same magnitude and differ only in
phase. Use Matlab to argue for or against this claim.
(e) Obtain the impulse responses hC1[n] and hC2[n] and plot their frequency response. Use Matlab
to compare their magnitude and phase. Do the results justify your argument? What type of
filters do hC1[n] and hC2[n] represent?
15.74 (Nonrecursive Forms of IIR Filters) We can only approximately represent an IIR filter by an
FIR filter by truncating its impulse response to N terms. The larger the truncation index N, the
better is the approximation. Consider the IIR filter described by y[n] − 0.8y[n − 1] = x[n].
(a) Find its impulse response h[n].
(b) Truncate h[n] to three terms to obtain hN[n]. Plot the frequency responses H(F) and HN(F).
What differences do you observe?
(c) Truncate h[n] to ten terms to obtain hN[n]. Plot the frequency responses H(F) and HN(F).
What differences do you observe?
(d) If the input to the original filter and truncated filter is x[n], will the greatest mismatch in the
response y[n] of the two filters occur at earlier or later time instants n?
15.75 (Compensating Filters) Digital filters are often used to compensate for the sinc distortion of a
zero-order-hold DAC by providing a 1/sinc(F) boost. Two such filters are described by

Compensating Filter 1: y[n] = (1/16)(−x[n] + 18x[n − 1] − x[n − 2])
Compensating Filter 2: y[n] + (1/8)y[n − 1] = (9/8)x[n]

(a) For each filter, state whether it is FIR (and if so, linear phase) or IIR.
(b) Plot the frequency response of each filter and compare with |1/sinc(F)|.
(c) Over what digital frequency range does each filter provide the required sinc boost? Which of
these filters provides better compensation?
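A quick numerical comparison of the two compensators with the ideal boost, assuming the difference equations y[n] = (−x[n] + 18x[n − 1] − x[n − 2])/16 for Filter 1 and y[n] + y[n − 1]/8 = 9x[n]/8 for Filter 2 (a Python/NumPy sketch):

```python
import numpy as np

F = np.linspace(0.001, 0.5, 500)
z = np.exp(-2j * np.pi * F)

H1 = (-1 + 18 * z - z**2) / 16       # FIR compensator
H2 = (9 / 8) / (1 + z / 8)           # IIR compensator
boost = 1 / np.sinc(F)               # np.sinc(F) = sin(pi*F)/(pi*F)

# Near F = 0 both filters match the 1/sinc boost; plotting |H1|, |H2|,
# and boost over 0 < F < 0.5 shows how far up in F each one tracks it.
print(abs(H1[0]) - boost[0], abs(H2[0]) - boost[0])
```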
15.76 (Up-Sampling and Decimation) Let x[n] = cos(0.2nπ) + 0.5 cos(0.4nπ), 0 ≤ n ≤ 100.
(a) Plot the spectrum of this signal.
(b) Generate the zero-interpolated signal y[n] = x[n/2] and plot its spectrum. Can you observe
the spectrum replication? Is there a correspondence between the frequencies in y[n] and x[n]?
Should there be? Explain.
(c) Generate the decimated signal d[n] = x[2n] and plot its spectrum. Can you observe the stretch-
ing effect in the spectrum? Is there a correspondence between the frequencies in d[n] and x[n]?
Should there be? Explain.
(d) Generate the decimated signal g[n] = x[3n] and plot its spectrum. Can you observe the stretch-
ing effect in the spectrum? Is there a correspondence between the frequencies in g[n] and x[n]?
Should there be? Explain.
15.77 (Frequency Response of Interpolating Functions) The impulse response of filters for step
interpolation, linear interpolation, and ideal (sinc) interpolation by N are given by
15.78 (Interpolating Functions) To interpolate a signal x[n] by N, we use an up-sampler (that places
N − 1 zeros after each sample) followed by a filter that performs the appropriate interpolation. The
filter impulse response for step interpolation, linear interpolation, and ideal (sinc) interpolation is
chosen as
Note that the ideal interpolating function is actually of infinite length and must be truncated in
practice. Generate the test signal x[n] = cos(0.5nπ), 0 ≤ n ≤ 3. Up-sample this by N = 8 (seven
zeros after each sample) to obtain the signal xU[n]. Use the Matlab routine filter to filter xU[n]
as follows:
(a) Use the step-interpolation filter to obtain the filtered signal xS [n]. Plot xU [n] and xS [n] on the
same plot. Does the system perform the required interpolation? Does the result look like a sine
wave?
(b) Use the step-interpolation filter followed by the compensating filter y[n] = {−x[n] + 18x[n − 1] −
x[n − 2]}/16 to obtain the filtered signal xC[n]. Plot xU[n] and xC[n] on the same plot. Does
the system perform the required interpolation? Does the result look like a sine wave? Is there
an improvement compared to part (a)?
(c) Use the linear-interpolation filter to obtain the filtered signal xL [n]. Plot xU [n] and a delayed
(by 8) version of xL [n] (to account for the noncausal nature of hL [n]) on the same plot. Does
the system perform the required interpolation? Does the result look like a sine wave?
(d) Use the ideal interpolation filter (with M = 4, 8, 16) to obtain the filtered signal xI[n]. Plot
xU[n] and a delayed (by M) version of xI[n] (to account for the noncausal nature of hI[n]) on
the same plot. Does the system perform the required interpolation? Does the result look like a
sine wave? What is the effect of increasing M on the interpolated signal? Explain.
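The up-sample-then-filter structure of this problem can be sketched as follows for the step-interpolation case (Python/NumPy here, although the problem itself asks for the Matlab routine filter):

```python
import numpy as np

x = np.cos(0.5 * np.pi * np.arange(4))   # test signal: {1, 0, -1, 0}
N = 8
xU = np.zeros(N * len(x))
xU[::N] = x                              # seven zeros after each sample

h_step = np.ones(N)                      # step (zero-order-hold) interpolator
xS = np.convolve(xU, h_step)[:len(xU)]   # each sample held for N outputs
print(np.round(xS[:16]).astype(int))     # [1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0]
```

The staircase output makes it clear why a compensating (or sinc) filter is needed before the result resembles a sine wave.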
15.79 (FIR Filter Design) A 22.5-Hz sinusoid is contaminated by 60-Hz interference. We wish to sample
this signal and design a causal 3-point linear-phase FIR digital filter operating at a sampling frequency
of S = 180 Hz to eliminate the interference and pass the desired signal with unit gain.
(a) Argue that an impulse response of the form h[n] = {α, β, α} can be used. Choose α and β to
satisfy the design requirements.
(b) To test your filter, generate two signals x[n] and s[n], 0 ≤ n ≤ 50, by sampling x(t) = cos(45πt)
and s(t) = 3 cos(120πt) at 180 Hz. Generate the noisy signal g[n] = x[n] + s[n] and pass it
through your filter to obtain the filtered signal y[n]. Compare y[n] with the noisy signal g[n]
and the desired signal x[n] to confirm that the filter meets design requirements. What is the
phase of y[n] at the desired frequency? Can you find an exact expression for y[n]?
15.80 (Allpass Filters) Consider a lowpass filter with impulse response h[n] = (0.5)ⁿu[n]. The input to
this filter is x[n] = cos(0.2nπ). We expect the output to be of the form y[n] = A cos(0.2nπ + θ).
(a) Find the values of A and θ.
(b) What should be the transfer function H1(F) of a first-order allpass filter that can be cas-
caded with the lowpass filter to correct for the phase distortion and produce the signal z[n] =
B cos(0.2nπ) at its output?
(c) What should be the gain of the allpass filter in order that z[n] = x[n]?
15.81 (Frequency Response of Averaging Filters) The averaging of data uses both FIR and IIR
filters. Consider the following averaging filters:

Filter 1: y[n] = (1/N) Σ_{k=0}^{N−1} x[n − k]   (N-point moving average)
Filter 2: y[n] = [2/(N(N + 1))] Σ_{k=0}^{N−1} (N − k)x[n − k]   (N-point weighted moving average)
16.1 Introduction
The N-point discrete Fourier transform (DFT) XDFT[k] of an N-sample signal x[n] and the inverse
discrete Fourier transform (IDFT), which transforms XDFT[k] to x[n], are defined by

XDFT[k] = Σ_{n=0}^{N−1} x[n]e^(−j2πnk/N),   k = 0, 1, 2, . . . , N − 1     (16.1)

x[n] = (1/N) Σ_{k=0}^{N−1} XDFT[k]e^(j2πnk/N),   n = 0, 1, 2, . . . , N − 1     (16.2)

Each relation is a set of N equations. Each DFT sample is found as a weighted sum of all the samples in x[n].
One of the most important properties of the DFT and its inverse is implied periodicity. The exponential
exp(−j2πnk/N) in the defining relations is periodic in both n and k with period N:

e^(−j2πnk/N) = e^(−j2π(n+N)k/N) = e^(−j2πn(k+N)/N)     (16.3)

As a result, the DFT and its inverse are also periodic with period N, and it is sufficient to compute the
results for only one period (0 to N − 1). Both x[n] and XDFT[k] have a starting index of zero.
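Equations (16.1) and (16.2) translate directly into code. A brute-force Python/NumPy sketch (an N×N matrix of exponentials, not the fast algorithm of later sections):

```python
import numpy as np

def dft(x):
    """Direct evaluation of Eq. (16.1)."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # W[k, n] = e^(-j2*pi*nk/N)
    return W @ np.asarray(x, dtype=complex)

def idft(X):
    """Direct evaluation of Eq. (16.2), including the 1/N factor."""
    N = len(X)
    n = np.arange(N)
    W = np.exp(2j * np.pi * np.outer(n, n) / N)
    return (W @ np.asarray(X, dtype=complex)) / N

x = [1, 2, 1, 0]
X = dft(x)                                  # XDFT[k] = {4, -j2, 0, j2}
print(np.allclose(X, np.fft.fft(x)))        # True: matches the built-in DFT
print(np.allclose(idft(X), x))              # True: the IDFT recovers x[n]
```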
536 Chapter 16 The DFT and FFT
16.2.1 Symmetry
In analogy with all other frequency-domain transforms, the DFT of a real sequence possesses conjugate
symmetry about the origin, with XDFT[−k] = X*DFT[k]. Since the DFT is periodic, XDFT[−k] = XDFT[N − k].
This also implies conjugate symmetry about the index k = 0.5N, and thus XDFT[k] = X*DFT[N − k].
If N is odd, the conjugate symmetry is about the half-integer value 0.5N. The index k = 0.5N is called the
folding index. This is illustrated in Figure 16.1.
Conjugate symmetry suggests that we need compute only half the DFT values to find the entire DFT
sequence, another labor-saving concept! A similar result applies to the IDFT.
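The conjugate symmetry XDFT[k] = X*DFT[N − k] for real x[n] is easy to confirm numerically (Python/NumPy sketch; the random test signal is our choice):

```python
import numpy as np

x = np.random.default_rng(0).standard_normal(8)   # any real sequence, N = 8
X = np.fft.fft(x)
N = len(x)
k = np.arange(1, N)
print(np.allclose(X[k], np.conj(X[N - k])))       # True: X[k] = X*[N-k]
```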
Circular folding generates the signal x[−n] from x[n]. We fold x[n], create the periodic
extension of the folded signal, and pick N samples of the periodic extension over (0, N − 1).
Even symmetry of x[n] requires that x[n] = x[−n]. Its implied periodicity also means x[−n] = x[N − n],
and the periodic signal x[n] is said to possess circular even symmetry. Similarly, for circular odd
symmetry, we have x[n] = −x[N − n].
16.2.4 Convolution
Convolution in one domain transforms to multiplication in the other. Due to the implied periodicity in both
domains, the convolution operation describes periodic, not regular, convolution. This also applies to the
correlation operation.
Periodic Convolution
The DFT offers an indirect means of finding the periodic convolution y[n] = x[n] ⊛ h[n] of two sequences
x[n] and h[n] of equal length N. We compute their N-sample DFTs XDFT[k] and HDFT[k], multiply them
to obtain YDFT[k] = XDFT[k]HDFT[k], and find the inverse of YDFT[k] to obtain the periodic convolution y[n]:

x[n] ⊛ h[n] ⇔ XDFT[k]HDFT[k]     (16.5)
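The DFT route to periodic convolution in Eq. (16.5) takes only a few lines (Python/NumPy sketch):

```python
import numpy as np

def periodic_convolve(x, h):
    """Periodic (circular) convolution of equal-length sequences via the DFT."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

x = [1, 2, 1, 0]
c = periodic_convolve(x, x)
print(np.round(c).astype(int))     # [2 4 6 4]
```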
Periodic Correlation
Periodic correlation can be implemented using the DFT in almost exactly the same way as periodic convo-
lution, except for an extra conjugation step prior to taking the inverse DFT. The periodic correlation of two
sequences x[n] and h[n] of equal length N gives
If x[n] and h[n] are real, the final result rxh [n] must also be real (to within machine roundo).
Since N = 8, we need compute XDFT[k] only for k ≤ 0.5N = 4. Now, XDFT[0] = 1 + 1 = 2 and
XDFT[4] = 1 − 1 = 0. For the rest (k = 1, 2, 3), we compute XDFT[1] = 1 + e^(−jπ/4) = 1.707 − j0.707,
XDFT[2] = 1 + e^(−jπ/2) = 1 − j, and XDFT[3] = 1 + e^(−j3π/4) = 0.293 − j0.707.
By conjugate symmetry, XDFT[k] = X*DFT[N − k] = X*DFT[8 − k]. This gives
XDFT[5] = X*DFT[3] = 0.293 + j0.707, XDFT[6] = X*DFT[2] = 1 + j, XDFT[7] = X*DFT[1] = 1.707 + j0.707.
Thus, XDFT[k] = {2, 1.707 − j0.707, 1 − j, 0.293 − j0.707, 0, 0.293 + j0.707, 1 + j, 1.707 + j0.707}.
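The DFT values above are consistent with an 8-point signal whose only nonzero samples are x[0] = x[1] = 1 (an inference, since the problem statement is truncated here), giving XDFT[k] = 1 + e^(−jπk/4). A quick check in Python/NumPy:

```python
import numpy as np

x = np.zeros(8)
x[0] = x[1] = 1                  # assumed signal: XDFT[k] = 1 + e^(-j*pi*k/4)
X = np.fft.fft(x)
print(np.round(X[1], 3))         # approximately 1.707 - j0.707
print(abs(X[4]) < 1e-12)         # True: zero at the folding index k = 4
```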
(c) Consider the DFT pair x[n] = {1, 2, 1, 0} XDFT [k] = {4, j2, 0 , j2} with N = 4.
1. (Time Shift) To find y[n] = x[n − 2], we move the last two samples to the beginning to get
y[n] = x[n − 2] = {1, 0, 1, 2}, n = 0, 1, 2, 3.
To find the DFT of y[n] = x[n − 2], we use the time-shift property (with n0 = 2) to give
YDFT[k] = XDFT[k]e^{−j2πkn0/4} = XDFT[k]e^{−jπk} = {4, j2, 0, −j2}.
2. (Modulation) The sequence ZDFT[k] = XDFT[k − 1] equals {j2, 4, −j2, 0}. Its IDFT is
z[n] = x[n]e^{j2πn/4} = x[n]e^{jπn/2} = {1, j2, −1, 0}.
3. (Folding) The sequence g[n] = x[−n] is g[n] = {x[0], x[3], x[2], x[1]} = {1, 0, 1, 2}.
Its DFT equals GDFT[k] = XDFT[−k] = X*DFT[k] = {4, j2, 0, −j2}.
4. (Conjugation) The sequence p[n] = x*[n] is p[n] = x*[n] = x[n] = {1, 2, 1, 0}. Its DFT is
PDFT[k] = X*DFT[N − k] = {4, −j2, 0, j2}, which equals XDFT[k] itself, as expected for a real x[n].
5. (Product) The sequence h[n] = x[n]x[n] is the pointwise product. So, h[n] = {1, 4, 1, 0}.
Its DFT is HDFT[k] = (1/4) XDFT[k] ⊛ XDFT[k] = (1/4) {4, −j2, 0, j2} ⊛ {4, −j2, 0, j2}.
Keep in mind that this is a periodic convolution.
The result is HDFT[k] = (1/4){24, −j16, −8, j16} = {6, −j4, −2, j4}.
6. (Periodic Convolution) The periodic convolution c[n] = x[n] ⊛ x[n] gives
c[n] = {1, 2, 1, 0} ⊛ {1, 2, 1, 0} = {2, 4, 6, 4}.
Its DFT is given by the pointwise product
CDFT[k] = XDFT[k]XDFT[k] = {16, −4, 0, −4}.
7. (Regular Convolution) The regular convolution s[n] = x[n] ⋆ x[n] gives
s[n] = {1, 2, 1, 0} ⋆ {1, 2, 1, 0} = {1, 4, 6, 4, 1, 0, 0}.
Since x[n] has 4 samples (N = 4), the DFT SDFT[k] of s[n] is the product of the DFT of the
zero-padded (to length N + N − 1 = 7) signal xz[n] = {1, 2, 1, 0, 0, 0, 0} with itself and equals
{16, −2.35 − j10.28, −2.18 + j1.05, 0.02 + j0.03, 0.02 − j0.03, −2.18 − j1.05, −2.35 + j10.28}.
8. (Central Ordinates) It is easy to check that x[0] = (1/4) Σ XDFT[k] and XDFT[0] = Σ x[n].
9. (Parseval's Relation) We have Σ |x[n]|² = 1 + 4 + 1 + 0 = 6.
Since |XDFT[k]|² = {16, 4, 0, 4}, we also have (1/4) Σ |XDFT[k]|² = (1/4)(16 + 4 + 4) = 6.
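Several of these properties can be confirmed in a few lines. A sketch using NumPy (the periodic convolution of the DFT with itself is written out directly):

```python
import numpy as np

x = np.array([1.0, 2.0, 1.0, 0.0])
N = len(x)
X = np.fft.fft(x)                                        # {4, -j2, 0, j2}

# (time shift) y[n] = x[n - 2]  <=>  XDFT[k] exp(-j 2 pi k n0 / N)
y = np.roll(x, 2)
assert np.allclose(np.fft.fft(y), X * np.exp(-2j * np.pi * np.arange(N) * 2 / N))

# (product) x[n] x[n]  <=>  (1/N) times the periodic convolution of XDFT with itself
H = np.fft.fft(x * x)
circ = np.array([(X * np.roll(X[::-1], k + 1)).sum() for k in range(N)])
assert np.allclose(H, circ / N)

# (Parseval) sum |x[n]|^2 = (1/N) sum |XDFT[k]|^2
assert np.isclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(X) ** 2) / N)
```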
Finally, the transform pair for the sinusoid says that for a periodic sinusoid x[n] = cos(2πnF) whose digital
frequency is F = k0/N, the DFT is a pair of impulses at k = k0 and k = N − k0. By Euler's relation,
x[n] = 0.5e^{j2πnk0/N} + 0.5e^{−j2πnk0/N} and, by periodicity, 0.5e^{−j2πnk0/N} = 0.5e^{j2πn(N−k0)/N}. Then, with the
DFT pair 1 ⇔ Nδ[k], and the modulation property, we get the required result.
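A quick numerical check of this pair (with the illustrative choices N = 16 and k0 = 3):

```python
import numpy as np

N, k0 = 16, 3
n = np.arange(N)
x = np.cos(2 * np.pi * n * k0 / N)          # F = k0/N: an integer number of periods

X = np.fft.fft(x)
expected = np.zeros(N)
expected[k0] = expected[N - k0] = 0.5 * N   # impulses of strength 0.5N at k0 and N - k0

assert np.allclose(X, expected, atol=1e-9)
```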
16.3 Connections
From a purely mathematical or computational standpoint, the DFT simply tells us how to transform a set
of N numbers into another set of N numbers. Its physical significance (what the numbers mean), however,
stems from its ties to the spectra of both analog and discrete signals.
The processing of analog signals using digital methods continues to gain widespread popularity. The
Fourier series of periodic signals and the DTFT of discrete-time signals are duals of each other and are
similar in many respects. In theory, both offer great insight into the spectral description of signals. In
practice, both suffer from (similar) problems in their implementation. The finite memory limitations and
finite precision of digital computers constrain us to work with a finite set of quantized numbers for describing
signals in both time and frequency. This brings out two major problems inherent in the Fourier series and
the DTFT as tools for digital signal processing. Both typically require an infinite number of samples (the
Fourier series for its spectrum and the DTFT for its time signal). Both deal with one continuous variable
(time t or digital frequency F ). A numerical approximation that can be implemented using digital computers
requires that we replace the continuous variable with a discrete one and limit the number of samples to a
finite value in both domains. The DFT can then be viewed as a natural extension of the Fourier transform,
obtained by sampling both time and frequency in turn. Sampling in one domain leads to a periodic extension
in the other. A sampled representation in both domains also forces periodicity in both domains. This leads
to two slightly different but functionally equivalent sets of relations, depending on the order in which we
sample time and frequency, as listed in Table 16.2.
If we first sample an analog signal x(t), the sampled signal has a periodic spectrum Xp (F ) (the DTFT),
and sampling of Xp (F ) leads to the DFT representation. If we first sample the Fourier transform X(f ) in the
frequency domain, the samples represent the Fourier series coefficients of a periodic time signal xp(t), and
sampling of xp(t) leads to the discrete Fourier series (DFS) as the periodic extension of the frequency-domain
samples. The DFS differs from the DFT only by a constant scale factor.
Choice of sampling instants: The defining relation for the DFT (or DFS) mandates that samples of x[n]
be chosen over the range 0 n N 1 (through periodic extension, if necessary). Otherwise, the DFT (or
DFS) phase will not match the expected phase.
Table 16.2 Sampling relationships among the Fourier transform, DTFT, Fourier series, DFT, and DFS

Fourier transform (aperiodic/continuous signal, aperiodic/continuous spectrum):
$$x(t) = \int_{-\infty}^{\infty} X(f)e^{j2\pi ft}\,df \qquad\qquad X(f) = \int_{-\infty}^{\infty} x(t)e^{-j2\pi ft}\,dt$$

Sampling x(t) (DTFT): sampled time signal and periodic spectrum (period = 1):
$$x[n] = \int_{0}^{1} X_p(F)e^{j2\pi nF}\,dF \qquad\qquad X_p(F) = \sum_{n=-\infty}^{\infty} x[n]e^{-j2\pi nF}$$

Sampling X(f) (Fourier series): sampled spectrum and periodic time signal (period = T):
$$X[k] = \frac{1}{T}\int_{T} x_p(t)e^{-j2\pi kf_0t}\,dt \qquad\qquad x_p(t) = \sum_{k=-\infty}^{\infty} X[k]e^{j2\pi kf_0t}$$

Sampling Xp(F) (DFT): sampled/periodic spectrum and sampled/periodic time signal:
$$X_{\rm DFT}[k] = \sum_{n=0}^{N-1} x[n]e^{-j2\pi nk/N} \qquad\qquad x[n] = \frac{1}{N}\sum_{k=0}^{N-1} X_{\rm DFT}[k]e^{j2\pi nk/N}$$

Sampling xp(t) (DFS): sampled/periodic time signal and sampled/periodic spectrum:
$$x[n] = \sum_{k=0}^{N-1} X_{\rm DFS}[k]e^{j2\pi nk/N} \qquad\qquad X_{\rm DFS}[k] = \frac{1}{N}\sum_{n=0}^{N-1} x[n]e^{-j2\pi nk/N}$$
Choice of samples: If a sampling instant corresponds to a jump discontinuity, the sample value should be
chosen as the midpoint of the discontinuity. The reason is that the Fourier series (or transform) converges
to the midpoint of any discontinuity.
Choice of frequency axis: The computation of the DFT (or DFS) is independent of the sampling frequency
S or sampling interval ts = 1/S. However, if an analog signal is sampled at a sampling rate S, its spectrum
is periodic with period S. The DFT spectrum describes one period (N samples) of this spectrum starting
at the origin. For sampled signals, it is useful to plot the DFT (or DFS) magnitude and phase against the
analog frequency f = kS/N Hz, k = 0, 1, . . . , N 1 (with spacing S/N ). For discrete-time signals, we can
plot the DFT against the digital frequency F = k/N, k = 0, 1, . . . , N 1 (with spacing 1/N ). These choices
are illustrated in Figure 16.2.
Choice of frequency range: To compare the DFT results with conventional two-sided spectra, just
remember that by periodicity, a negative frequency −f0 (at the index −k0) in the two-sided spectrum
corresponds to the frequency S − f0 (at the index N − k0) in the (one-sided) DFT spectrum.
Identifying the highest frequency: The highest frequency in the DFT spectrum corresponds to the
folding index k = 0.5N and equals F = 0.5 for discrete signals or f = 0.5S Hz for sampled analog signals.
This highest frequency is also called the folding frequency. For purposes of comparison, it is sufficient to
plot the DFT spectra only over 0 ≤ k < 0.5N (or 0 ≤ F < 0.5 for discrete-time signals or 0 ≤ f < 0.5S Hz
for sampled analog signals).
Figure 16.2 Plotting the DFT spectrum against the index k = 0, 1, 2, . . . , N − 1, the analog frequency f = kS/N, or the digital frequency F = k/N
Plotting reordered spectra: The DFT (or DFS) may also be plotted as two-sided spectra to reveal
conjugate symmetry about the origin by creating its periodic extension. This is equivalent to creating a
reordered spectrum by relocating the DFT samples at indices past the folding index k = 0.5N to the left of
the origin (because X[−k] = X[N − k]). This process is illustrated in Figure 16.3.

Figure 16.3 Relocating the DFT samples past the folding index k = N/2 to the left of the origin to create a reordered (two-sided) spectrum
where Xp (F ) is periodic with unit period. If x[n] is a finite N -point sequence with n = 0, 1, . . . , N 1, we
obtain N samples of the DTFT over one period at intervals of 1/N as
$$X_{\rm DFT}[k] = \sum_{n=0}^{N-1} x[n]e^{-j2\pi nk/N}, \qquad k = 0, 1, \ldots, N-1 \qquad (16.13)$$
This describes the discrete Fourier transform (DFT) of x[n] as a sampled version of its DTFT evaluated
at the frequencies F = k/N, k = 0, 1, . . . , N 1. The DFT spectrum thus corresponds to the frequency
range 0 F < 1 and is plotted at the frequencies F = k/N, k = 0, 1, . . . , N 1.
To recover the finite sequence x[n] from N samples of XDFT[k], we use dF → 1/N and F → k/N to
approximate the integral expression in the inversion relation by

$$x[n] = \frac{1}{N}\sum_{k=0}^{N-1} X_{\rm DFT}[k]e^{j2\pi nk/N}, \qquad n = 0, 1, \ldots, N-1 \qquad (16.14)$$
This is the inverse discrete Fourier transform (IDFT). The periodicity of the IDFT implies that x[n]
actually corresponds to one period of a periodic signal.
If x[n] is a finite N -point signal with n = 0, 1, . . . , N 1, the DFT is an exact match to its DTFT Xp (F )
at F = k/N, k = 0, 1, . . . , N 1, and the IDFT results in perfect recovery of x[n].
If x[n] is not time-limited, its N -point DFT is only an approximation to its DTFT Xp (F ) evaluated at
F = k/N, k = 0, 1, . . . , N 1. Due to implied periodicity, the DFT, in fact, exactly matches the DTFT of
the periodic extension of x[n] with period N at these frequencies.
If x[n] is a discrete periodic signal with period N, its scaled DFT, (1/N)XDFT[k], is an exact match to the
impulse strengths in its DTFT Xp(F) at F = k/N, k = 0, 1, . . . , N − 1. In this case also, the IDFT results
in perfect recovery of one period of x[n] over 0 ≤ n ≤ N − 1.
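The defining relations (16.13) and (16.14) translate directly into code. A minimal sketch (the matrix-based dft/idft helpers below are illustrative, not an efficient FFT):

```python
import numpy as np

def dft(x):
    """XDFT[k] = sum_n x[n] exp(-j 2 pi n k / N), the defining relation (16.13)."""
    N = len(x)
    n = np.arange(N)
    return np.exp(-2j * np.pi * np.outer(n, n) / N) @ np.asarray(x, dtype=complex)

def idft(X):
    """x[n] = (1/N) sum_k XDFT[k] exp(+j 2 pi n k / N), relation (16.14)."""
    N = len(X)
    k = np.arange(N)
    return np.exp(2j * np.pi * np.outer(k, k) / N) @ np.asarray(X, dtype=complex) / N

x = np.array([1.0, 2.0, 1.0, 0.0])
assert np.allclose(dft(x), np.fft.fft(x))   # matches the FFT
assert np.allclose(idft(dft(x)), x)         # perfect recovery of x[n]
```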
Since x[n] is a finite sequence, the DFT and DTFT show an exact match at F = k/N, k = 0, 1, 2, 3.
With N = 4, and e^{j2πnk/N} = e^{jπnk/2}, we compute the IDFT of XDFT[k] = {4, −j2, 0, j2} to give

n = 0:  x[0] = 0.25 Σ (k=0 to 3) XDFT[k]e^0 = 0.25(4 − j2 + 0 + j2) = 1
n = 1:  x[1] = 0.25 Σ (k=0 to 3) XDFT[k]e^{jπk/2} = 0.25(4 − j2e^{jπ/2} + 0 + j2e^{j3π/2}) = 2
n = 2:  x[2] = 0.25 Σ (k=0 to 3) XDFT[k]e^{jπk} = 0.25(4 − j2e^{jπ} + 0 + j2e^{j3π}) = 1
n = 3:  x[3] = 0.25 Σ (k=0 to 3) XDFT[k]e^{j3πk/2} = 0.25(4 − j2e^{j3π/2} + 0 + j2e^{j9π/2}) = 0
$$X_p(F)\Big|_{F=k/N} = \frac{1}{1 - \alpha e^{-j2\pi k/N}}$$

The N-point DFT of x[n] is

$$X_{\rm DFT}[k] = \frac{1 - \alpha^N}{1 - \alpha e^{-j2\pi k/N}}, \qquad k = 0, 1, \ldots, N-1$$

Clearly, the N-sample DFT of x[n] does not match the DTFT of x[n] (unless N → ∞).
Comment: What does match, however, is the DFT of the N-sample periodic extension xpe[n] and the
DTFT of x[n]. We obtain one period of the periodic extension by wrapping around N-sample sections
of x[n] = α^n and adding them to give

$$x_{\rm pe}[n] = \alpha^n + \alpha^{n+N} + \alpha^{n+2N} + \cdots = \alpha^n\left(1 + \alpha^N + \alpha^{2N} + \cdots\right) = \frac{\alpha^n}{1 - \alpha^N} = \frac{x[n]}{1 - \alpha^N}$$

Its DFT is thus $\dfrac{1}{1 - \alpha e^{-j2\pi k/N}}$ and matches the DTFT of x[n] at F = k/N, k = 0, 1, . . . , N − 1.
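A concrete check of this comment (with the illustrative values α = 0.8 and N = 8):

```python
import numpy as np

alpha, N = 0.8, 8                       # illustrative values; any |alpha| < 1 works
n = np.arange(N)
k = np.arange(N)

X_trunc = np.fft.fft(alpha ** n)        # DFT of the truncated N-sample signal
dtft = 1 / (1 - alpha * np.exp(-2j * np.pi * k / N))   # DTFT of alpha^n u[n] at F = k/N

# the truncated DFT misses the DTFT by the factor (1 - alpha^N) ...
assert np.allclose(X_trunc, (1 - alpha ** N) * dtft)

# ... while the DFT of the periodic extension x[n]/(1 - alpha^N) matches it exactly
assert np.allclose(np.fft.fft(alpha ** n / (1 - alpha ** N)), dtft)
```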
If we acquire x[n], n = 0, 1, . . . , N − 1 as N samples of xp(t) over one period using a sampling rate of S Hz
(corresponding to a sampling interval of ts) and approximate the integral expression for X[k] by a summation
using dt → ts, t → nts, T = Nts, and f0 = 1/T = 1/(Nts), we obtain

$$X_{\rm DFS}[k] = \frac{1}{Nt_s}\sum_{n=0}^{N-1} x[n]e^{-j2\pi kf_0nt_s}\,t_s = \frac{1}{N}\sum_{n=0}^{N-1} x[n]e^{-j2\pi nk/N}, \qquad k = 0, 1, \ldots, N-1 \qquad (16.16)$$
The quantity XDFS[k] defines the discrete Fourier series (DFS) as an approximation to the Fourier series
coefficients of a periodic signal and equals 1/N times the DFT.
16.5 The DFT of Periodic Signals and the DFS 547
$$x[n] = \sum_{k=0}^{N-1} X_{\rm DFS}[k]e^{j2\pi kf_0nt_s} = \sum_{k=0}^{N-1} X_{\rm DFS}[k]e^{j2\pi nk/N}, \qquad n = 0, 1, 2, \ldots, N-1 \qquad (16.17)$$
This relation describes the inverse discrete Fourier series (IDFS). The sampling interval ts does not
enter into the computation of the DFS or its inverse. Except for a scale factor, the DFS and DFT relations
are identical.
1. The signal samples must be acquired from x(t) starting at t = 0 (using the periodic extension of the
signal, if necessary). Otherwise, the phase of the DFS coecients will not match the phase of the
corresponding Fourier series coecients.
2. The periodic signal must contain a finite number of sinusoids (to ensure a band-limited signal with a
finite highest frequency) and be sampled above the Nyquist rate. Otherwise, there will be aliasing,
whose effects become more pronounced near the folding frequency 0.5S. If the periodic signal is not
band-limited (contains an infinite number of harmonics), we cannot sample at a rate high enough to
prevent aliasing. For a pure sinusoid, the Nyquist rate corresponds to two samples per period.
3. The signal x(t) must be sampled for an integer number of periods (to ensure a match between the
periodic extension of x(t) and the implied periodic extension of the sampled signal). Otherwise, the
periodic extension of its samples will not match that of x(t), and the DFS samples will describe the
Fourier series coecients of a dierent periodic signal whose harmonic frequencies do not match those
of x(t). This phenomenon is called leakage and results in nonzero spectral components at frequencies
other than the harmonic frequencies of the original signal x(t).
If we sample a periodic signal for an integer number of periods, the DFS (or DFT) also preserves the effects
of symmetry. The DFS of an even symmetric signal is real, the DFS of an odd symmetric signal is imaginary,
and the DFS of a half-wave symmetric signal is zero at even values of the index k.
For the index k0 to lie in the range 0 ≤ k0 ≤ N − 1, we must ensure that 0 ≤ F < 1 (rather than the customary
−0.5 ≤ F < 0.5). The frequency corresponding to k0 will then be k0S/N and will equal f0 (if S > 2f0) or its
alias (if S < 2f0). The nonzero DFT values will equal XDFT[k0] = 0.5Ne^{jθ} and XDFT[N − k0] = 0.5Ne^{−jθ}.
These results are straightforward to obtain and can be easily extended, by superposition, to the DFT of a
combination of sinusoids, sampled over an integer number of periods.
(b) Let x(t) = 4 sin(72πt) be sampled at S = 128 Hz. Choose the minimum number of samples necessary
to prevent leakage and find the DFS and DFT of the sampled signal.
The frequency of x(t) is 36 Hz, so F = 36/128 = 9/32 = k0/N. Thus, N = 32, k0 = 9, and
the frequency spacing is S/N = 4 Hz. The DFS components will appear at k0 = 9 (36 Hz) and
N − k0 = 23 (92 Hz). The Fourier series coefficients of x(t) are −j2 (at 36 Hz) and j2 (at −36 Hz).
The DFS samples will be XDFS[9] = −j2, and XDFS[23] = j2. Since XDFT[k] = N XDFS[k], we get
XDFT[9] = −j64, XDFT[23] = j64, and thus

XDFT[k] = {0, . . . , 0, −j64 (k = 9), 0, . . . , 0, j64 (k = 23), 0, . . . , 0}
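These values are easy to confirm numerically (a sketch using NumPy's FFT):

```python
import numpy as np

S, N = 128, 32
n = np.arange(N)
x = 4 * np.sin(72 * np.pi * n / S)     # x(t) = 4 sin(72*pi*t) sampled at t = n/S

X_DFT = np.fft.fft(x)
X_DFS = X_DFT / N                      # the DFS is the DFT scaled by 1/N

assert np.isclose(X_DFT[9], -64j)      # k0 = 9 (36 Hz)
assert np.isclose(X_DFT[23], 64j)      # N - k0 = 23 (92 Hz)
assert np.isclose(X_DFS[9], -2j) and np.isclose(X_DFS[23], 2j)
```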
(c) Let x(t) = 4 sin(72πt) − 6 cos(12πt) be sampled at S = 21 Hz. Choose the minimum number of samples
necessary to prevent leakage and find the DFS and DFT of the sampled signal.
Clearly, the 36-Hz term will be aliased. The digital frequencies (between 0 and 1) of the two terms are
F1 = 36/21 = 12/7 ⇒ 5/7 = k0/N and F2 = 6/21 = 2/7. Thus, N = 7 and the frequency spacing is
S/N = 3 Hz. The DFS components of the first term will be −j2 at k = 5 (15 Hz) and j2 at N − k = 2
(6 Hz). The DFS components of the second term will be −3 at k = 2 and −3 at k = 5. The DFS
values will add up at the appropriate indices to give XDFS[5] = −3 − j2, XDFS[2] = −3 + j2, and

XDFS[k] = {0, 0, −3 + j2 (k = 2), 0, 0, −3 − j2 (k = 5), 0}        XDFT[k] = N XDFS[k] = 7XDFS[k]

Note how the 36-Hz component was aliased to 6 Hz (the frequency of the second component).
(d) The signal x(t) = 1 + 8 sin(80πt)cos(40πt) is sampled at twice the Nyquist rate for two full periods. Is
leakage present? If not, find the DFS of the sampled signal.
First, note that x(t) = 1 + 4 sin(120πt) + 4 sin(40πt). The frequencies are f1 = 60 Hz and f2 = 20 Hz.
The Nyquist rate is thus 120 Hz, and hence S = 240 Hz. The digital frequencies are F1 = 60/240 = 1/4
and F2 = 20/240 = 1/12. The fundamental frequency is f0 = GCD(f1, f2) = 20 Hz. Thus, two
full periods correspond to 0.1 s or N = 24 samples. There is no leakage because we acquire the
samples over two full periods. The index k = 0 corresponds to the constant (dc value). To find
the indices of the other nonzero DFS samples, we compute the digital frequencies (in the form k/N)
as F1 = 60/240 = 1/4 = 6/24 and F2 = 20/240 = 1/12 = 2/24. The nonzero DFS samples are
thus XDFS[0] = 1, XDFS[6] = −j2, and XDFS[2] = −j2, and the conjugates XDFS[18] = j2 and
XDFS[22] = j2. Thus,

XDFS[k] = {1, 0, −j2 (k = 2), 0, 0, 0, −j2 (k = 6), 0, . . . , 0, j2 (k = 18), 0, 0, 0, j2 (k = 22), 0}
Figure E16.6 One period of the square wave periodic signal for Example 16.6
Its Fourier series coefficients are X[k] = −j2/kπ (k odd). If we require the first four harmonics to be in
error by no more than about 5%, we choose a sampling rate S = 32f0, where f0 is the fundamental frequency.
This means that we acquire N = 32 samples for one period. The samples and their DFS up to k = 8 are
listed below, along with the error in the nonzero DFS values compared with the Fourier series coefficients.

x[n] = {0, 1, 1, . . . , 1, 1, 0, −1, −1, . . . , −1, −1}    (15 samples of 1, then 15 samples of −1)
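A sketch of this computation, using the 32 midpoint-convention samples above; note that the aliasing error grows for harmonics closer to the folding frequency:

```python
import numpy as np

N = 32
x = np.concatenate(([0.0], np.ones(15), [0.0], -np.ones(15)))  # midpoints at the jumps
X_DFS = np.fft.fft(x) / N

for k in (1, 3):                          # low harmonics: error well under 5%
    exact = -2j / (k * np.pi)             # Fourier series coefficient X[k] = -j2/(k*pi)
    assert abs(X_DFS[k] - exact) / abs(exact) < 0.05

# the relative error grows toward the folding frequency (aliasing of higher harmonics)
err = [abs(X_DFS[k] + 2j / (k * np.pi)) * k * np.pi / 2 for k in (1, 3, 5, 7)]
assert err == sorted(err)
```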
The DFT results will also show aliasing because the periodic extension of the signal with non-integer
periods will not, in general, be band-limited. As a result, we must resort to the full force of the defining
relation to compute the DFT.
Suppose we sample the 1-Hz sine x(t) = sin(2πt) at S = 16 Hz. Then, F0 = 1/16. If we choose N = 8,
the DFT spectral spacing equals S/N = 2 Hz. In other words, there is no DFT component at 1 Hz, the
frequency of the sine wave! Where should we expect to see the DFT components? If we express the digital
frequency F0 = 1/16 as F0 = kF/N, we obtain kF = NF0 = 0.5. Thus, F0 corresponds to the fractional
index kF = 0.5, and the largest DFT components should appear at the integer index nearest to kF , at k = 0
(or dc) and k = 1 (or 2 Hz). In fact, the signal and its DFS are given by
As expected, the largest components appear at k = 0 and k = 1. Since XDFS [k] is real, the DFS results
describe an even symmetric signal with nonzero average value. In fact, the periodic extension of the sampled
signal over half a period actually describes a full-rectified sine with even symmetry and a fundamental
frequency of 2 Hz (see Figure 16.4). The Fourier series coefficients of this full-rectified sine wave (with unit
peak value) are given by

$$X[k] = \frac{2}{\pi(1 - 4k^2)}$$
We confirm that XDFS [0] and XDFS [1] show an error of less than 5%. But XDFS [2], XDFS [3], and XDFS [4]
deviate significantly. Since the new periodic signal is no longer band-limited, the sampling rate is not high
enough, and we have aliasing. The value XDFS [3], for example, equals the sum of the Fourier series coecient
X[3] and all other Fourier series coefficients X[11], X[19], . . . that alias to k = 3. In other words,

$$X_{\rm DFS}[3] = \sum_{m=-\infty}^{\infty} X[3 + 8m] = \frac{2}{\pi}\sum_{m=-\infty}^{\infty} \frac{1}{1 - 4(3 + 8m)^2}$$

Although this sum is not easily amenable to a closed-form solution, it can be computed numerically and
does in fact approach XDFS[3] = −0.0293 (for a large but finite number of coefficients).
Minimizing Leakage
Ideally, we should sample periodic signals over an integer number of periods to prevent leakage. In practice,
it may not be easy to identify the period of a signal in advance. In such cases, it is best to sample over
as long a signal duration as possible (to reduce the mismatch between the periodic extension of the analog
and sampled signal). Sampling for a larger time (duration) not only reduces the effects of leakage but also
yields a more closely spaced spectrum, and a more accurate estimate of the spectrum of the original signal.
Another way to reduce leakage is to multiply the signal samples by a window function (as described later).
Figure E16.7 DFT results for Example 16.7: (a) length = 0.1 s, N = 20; (b) length = 0.125 s, N = 25; (c) length = 1.125 s, N = 225 (magnitude plotted against the analog frequency f in Hz)
(a) The duration of 0.1 s corresponds to one full period, and N = 20. No leakage is present, and the DFS
results reveal an exact match to the spectrum of x(t). The nonzero components appear at the integer indices
k1 = NF1 = Nf1/S = 1 and k2 = NF2 = Nf2/S = 5 (corresponding to 10 Hz and 50 Hz, respectively).
(b) The duration of 0.125 s does not correspond to an integer number of periods. The number of samples
over 0.125 s is N = 25. Leakage is present. The largest components appear at the integer indices closest to
k1 = NF1 = Nf1/S = 1.25 (i.e., k = 1 or 8 Hz) and k2 = Nf2/S = 6.25 (i.e., k = 6 or 48 Hz).
(c) The duration of 1.125 s does not correspond to an integer number of periods. The number of samples
over 1.125 s is N = 225. Leakage is present. The largest components appear at the integer indices closest to
k1 = NF1 = Nf1/S = 11.25 (i.e., k = 11 or 9.78 Hz) and k2 = Nf2/S = 56.25 (i.e., k = 56 or 49.78 Hz).
Comment: The spectra reveal that the longest duration (1.125 s) also produces the smallest leakage.
$$X_{\rm DFT}[k] \approx SX(f)\big|_{f=kS/N}, \qquad 0 \le k < 0.5N \quad (0 \le f < 0.5S) \qquad (16.19)$$
To find the DFT of an arbitrary signal with some confidence, we must decide on the number of samples N
and the sampling rate S, based on both theoretical considerations and practical compromises. For example,
one way to choose a sampling rate is based on energy considerations. We pick the sampling rate as 2B Hz,
where the frequency range up to B Hz contains a significant fraction P of the signal energy. The number of
samples should cover a large enough duration to include significant signal values.
spacing, we must choose a larger number of samples N . This increase in N cannot come about by increasing
the sampling rate S (which would leave the spectral spacing S/N unchanged), but by increasing the duration
over which we sample the signal. In other words, to reduce the frequency spacing, we must sample the signal
for a longer duration at the given sampling rate. However, if the original signal is of finite duration, we can
still increase N by appending zeros (zero-padding). Appending zeros does not improve accuracy because it
adds no new signal information. It only decreases the spectral spacing and thus interpolates the DFT at
a denser set of frequencies. To improve the accuracy of the DFT results, we must increase the number of
signal samples by sampling the signal for a longer time (and not just zero-padding).
(b) Let x(t) = tri(t). Its Fourier transform is X(f ) = sinc2 (f ). Let us choose S = 4 Hz and N = 8.
To obtain samples of x(t) starting at t = 0, we sample the periodic extension of x(t) as illustrated in
Figure E16.8B and obtain ts XDFT[k].
Since the highest frequency present in the DFT spectrum is 0.5S = 2 Hz, the DFT results are listed only
up to k = 4. Since the frequency spacing is S/N = 0.5 Hz, we compare ts XDFT [k] with X(kS/N ) =
sinc2 (0.5k). At k = 0 (dc) and k = 2 (1 Hz), we see a perfect match. At k = 1 (0.5 Hz), ts XDFT [k] is
in error by about 5.3%, but at k = 3 (1.5 Hz), the error is a whopping 62.6%.
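These error figures are easy to reproduce. A minimal sketch (the eight samples below are those of the periodic extension of tri(t) at S = 4 Hz, starting at t = 0):

```python
import numpy as np

S, N = 4, 8
ts = 1 / S
x = np.array([1.0, 0.75, 0.5, 0.25, 0.0, 0.25, 0.5, 0.75])  # periodic extension of tri(t)
X = ts * np.real(np.fft.fft(x))             # ts*XDFT[k]; the result is real here

f = 0.5 * np.arange(5)                      # f = kS/N for k = 0, ..., 4
exact = np.sinc(f) ** 2                     # X(f) = sinc^2(f)

assert np.isclose(X[0], exact[0])           # perfect match at dc
assert np.isclose(X[2], exact[2])           # perfect match at 1 Hz
err1 = abs(X[1] - exact[1]) / exact[1]      # about 5.3% at 0.5 Hz
err3 = abs(X[3] - exact[3]) / exact[3]      # about 62.6% at 1.5 Hz
assert 0.05 < err1 < 0.06 and 0.6 < err3 < 0.65
```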
Figure E16.8B The triangular pulse signal x(t) = tri(t) for Example 16.8(b): sampling its periodic extension with S = 4 Hz, N = 8 (period = 1); with S = 4 Hz, N = 16 (period = 2, zero-padded); and with S = 8 Hz, N = 16 (period = 1)
(Reducing Spectral Spacing) Let us decrease the spectral spacing by zero-padding to increase
the number of samples to N = 16. We must sample the periodic extension of the zero-padded signal.
Note how the padded zeros appear in the middle. Since the highest frequency present in the DFT
spectrum is 2 Hz, the DFT results are listed only up to the folding index k = 8.
The frequency separation is reduced to S/N = 0.25 Hz. Compared with X(kf0 ) = sinc2 (0.25k),
the DFT results for k = 2 (0.5 Hz) and k = 6 (1.5 Hz) are still off by 5.3% and 62.6%, respectively.
In other words, zero-padding reduces the spectral spacing, but the DFT results are no more
accurate. To improve the accuracy, we must pick more signal samples.
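The interpolation property of zero-padding can be sketched as follows (padding the eight triangular-pulse samples of this example in the middle, as in Figure E16.8B):

```python
import numpy as np

S, ts = 4, 0.25
x8 = np.array([1.0, 0.75, 0.5, 0.25, 0.0, 0.25, 0.5, 0.75])
x16 = np.concatenate((x8[:5], np.zeros(8), x8[5:]))   # zeros padded in the middle

X8 = np.fft.fft(x8)      # spectral spacing S/8  = 0.5 Hz
X16 = np.fft.fft(x16)    # spectral spacing S/16 = 0.25 Hz

# zero-padding only interpolates: the even-indexed padded samples ARE the original DFT
assert np.allclose(X16[::2], X8)

# and the error at 0.5 Hz (now at k = 2) is still about 5.3%
err = abs(ts * X16[2].real - np.sinc(0.5) ** 2) / np.sinc(0.5) ** 2
assert 0.05 < err < 0.06
```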
(Improving Accuracy) If we choose S = 8 Hz and N = 16, we obtain 16 samples shown in
Figure E16.8B. We list the 16-sample DFT up to k = 8 (corresponding to the highest frequency
of 4 Hz present in the DFT):
The DFT spectral spacing is still S/N = 0.5 Hz. In comparison with X(k/2) = sinc2 (0.5k), the
error in the DFT results for the 0.5-Hz component (k = 1) and the 1.5-Hz component (k = 3)
is now only about 1.3% and 12.4%, respectively. In other words, increasing the number of signal
samples improves the accuracy of the DFT results. However, the error at 2.5 Hz (k = 5) and
3.5 Hz (k = 7) is 39.4% and 96.4%, respectively, and implies that the effects of aliasing are more
predominant at frequencies closer to the folding frequency.
(c) Consider the signal x(t) = e^{−t}u(t) whose Fourier transform is X(f) = 1/(1 + j2πf). Since the energy
E in x(t) equals 0.5, we use Parseval's relation to estimate the bandwidth B that contains the fraction
P of this energy as

$$\int_{-B}^{B} \frac{1}{1 + 4\pi^2f^2}\,df = PE \qquad\Longrightarrow\qquad B = \frac{\tan(0.5\pi P)}{2\pi}$$
1. If we choose B to contain 95% of the signal energy (P = 0.95), we find B = 12.71/(2π) = 2.02 Hz.
Then, S > 4.04 Hz. Let us choose S = 5 Hz. For a spectral spacing of 1 Hz, we have S/N = 1 Hz
and N = 5. So, we sample x(t) at intervals of ts = 1/S = 0.2 s, starting at t = 0, to obtain x[n].
Since x(t) is discontinuous at t = 0, we pick x[0] = 0.5, not 1. The DFT results based on this set
of choices will not be very good because with N = 5 we sample only a 1-s segment of x(t).
2. A better choice is N = 15, a 3-s duration over which x(t) decays to 0.05. A more practical choice is
N = 16 (a power of 2, which allows efficient computation of the DFT, using the FFT algorithm).
This gives a spectral spacing of S/N = 5/16 Hz. Our rule of thumb (N > 8M ) suggests that
with N = 16, the DFT values XDFT [1] and XDFT [2] should show an error of only about 5%. We
see that ts XDFT[k] does compare well with X(f) (see Figure E16.8C), even though the effects of
aliasing are still evident.
16.7 Spectral Smoothing by Time Windows 555
Figure E16.8C DFT results for Example 16.8(c): (a) ts XDFT with N = 16, ts = 0.2; (b) ts XDFT with N = 128, ts = 0.04; (c) ts XDFT with N = 50, ts = 0.2 (magnitude plotted against the frequency f in Hz)
3. To improve the results (and minimize aliasing), we must increase S. For example, if we require
the highest frequency based on 99% of the signal energy, we obtain B = 63.6567/(2π) = 10.13 Hz.
Based on this, let us choose S = 25 Hz. If we sample x(t) over 5 s (by which time it decays to
less than 0.01), we compute N = (25)(5) = 125. Choosing N = 128 (the next higher power of 2),
we find that the 128-point DFT result ts XDFT [k] is almost identical to the true spectrum X(f ).
4. What would happen if we choose S = 5 Hz and five signal samples, but reduce the spectral spacing
by zero-padding to give N = 50? The 50-point DFT clearly shows the effects of truncation (as
wiggles) and is a poor match to the true spectrum. This confirms that improved accuracy does
not come by zero-padding but by including more signal samples.
Figure 16.5 Features of a DFT window (a Bartlett window sampled over N = 12 intervals and over N = 9 intervals; the spectral measures shown include the peak P, the 0.707P and 0.5P levels, the peak sidelobe level PSL, the high-frequency decay, and the widths W3, W6, WS, and WM over 0 ≤ F ≤ 0.5)
Other measures of window performance include the coherent gain (CG), equivalent noise bandwidth
(ENBW), and the scallop loss (SL). For an N -point window w[n], these measures are defined by
$$\mathrm{CG} = \frac{1}{N}\sum_{k=0}^{N-1}|w[k]| \qquad \mathrm{ENBW} = \frac{N\displaystyle\sum_{k=0}^{N-1}|w[k]|^2}{\left|\displaystyle\sum_{k=0}^{N-1}w[k]\right|^2} \qquad \mathrm{SL} = 20\log\frac{\left|\displaystyle\sum_{k=0}^{N-1}w[k]e^{-j\pi k/N}\right|}{\displaystyle\sum_{k=0}^{N-1}w[k]}\ \mathrm{dB} \qquad (16.20)$$
The reciprocal of the equivalent noise bandwidth is also called the processing gain. The larger the pro-
cessing gain, the easier it is to reliably detect a signal in the presence of noise.
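The measures in Eq. (16.20) translate directly into code. A sketch (the length N = 64 is illustrative; the scallop loss evaluates the window spectrum midway between DFT bins):

```python
import numpy as np

def window_measures(w):
    """Coherent gain, equivalent noise bandwidth, and scallop loss, per Eq. (16.20)."""
    N = len(w)
    k = np.arange(N)
    cg = np.sum(np.abs(w)) / N
    enbw = N * np.sum(np.abs(w) ** 2) / np.abs(np.sum(w)) ** 2
    sl_db = 20 * np.log10(np.abs(np.sum(w * np.exp(-1j * np.pi * k / N))) / np.sum(w))
    return cg, enbw, sl_db

cg, enbw, sl = window_measures(np.ones(64))   # rectangular window
assert np.isclose(cg, 1.0) and np.isclose(enbw, 1.0)
assert np.isclose(sl, -3.92, atol=0.01)       # the familiar -3.92 dB scallop loss
```

For a von Hann window the same function gives an ENBW of about 1.5, i.e., a processing gain (its reciprocal) of about 2/3.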
As we increase the window length, the mainlobe width of all windows decreases, but the peak sidelobe
level remains more or less constant. Ideally, for a given window length, the spectrum of a window should
approach an impulse with as narrow (and tall) a mainlobe as possible, and as small a peak sidelobe level as
possible. The aim is to pack as much energy in a narrow mainlobe as possible and make the sidelobe level
as small as possible. These are conflicting requirements in that a narrow mainlobe width also translates
to a higher peak sidelobe level. Some DFT windows and their spectral characteristics are illustrated in
Figure 16.7, and summarized in Table 16.3.
Figure 16.7 Commonly used DFT windows and their spectral characteristics: windows of length N = 20 and their magnitude spectra in dB, with peak sidelobe levels of about −26.5 dB (Bartlett), −31.5 dB (von Hann), −42 dB (Hamming), −58 dB (Blackman), and −45.7 dB (Kaiser, β = 2)
$$X_w(F) = 0.5N\,\frac{\operatorname{sinc}[N(F - F_0)]}{\operatorname{sinc}(F - F_0)} + 0.5N\,\frac{\operatorname{sinc}[N(F + F_0)]}{\operatorname{sinc}(F + F_0)} \qquad (16.22)$$
The N-point DFT of the windowed sinusoid is given by XDFT[k] = Xw(F)|F=k/N. If the DFT length N
equals M, the number of samples over k0 full periods of x[n], we see that sinc[N(F − F0)] = sinc(k − k0),
and this equals zero, unless k = k0. Similarly, sinc[N(F + F0)] = sinc(k + k0) is nonzero only if k = −k0
(or, by periodicity, k = N − k0). The DFT thus contains only two nonzero terms and equals

$$X_{\rm DFT}[k] = X_w(F)\Big|_{F=k/N} = 0.5N\delta[k - k_0] + 0.5N\delta[k - (N - k_0)] \qquad (\text{if } N = M) \qquad (16.23)$$
In other words, using an N -point rectangular window that covers an integer number of periods (M samples)
of a sinusoid (i.e., with M = N ) gives us exact results. The reason of course is that the DTFT sampling
instants fall on the nulls of the sinc spectrum. If the window length N does not equal M (an integer number
of periods), the sampling instants will fall between the nulls, and since the sidelobes of the sinc function are
large, the DFT results will show considerable leakage. To reduce the effects of leakage, we must use windows
whose spectrum shows small sidelobe levels.
16.7.3 Resolution
Windows are often used to reduce the effects of leakage and improve resolution. Frequency resolution refers
to our ability to clearly distinguish between two closely spaced sinusoids of similar amplitudes. Dynamic-range
resolution refers to our ability to resolve large differences in signal amplitudes. The spectrum of all
windows reveals a mainlobe and smaller sidelobes. It smears out the true spectrum and makes components
separated by less than the mainlobe width indistinguishable. The rectangular window yields the best fre-
quency resolution for a given length N since it has the smallest mainlobe. However, it also has the largest
peak sidelobe level of any window. This leads to significant leakage and the worst dynamic-range resolution
because small amplitude signals can get masked by the sidelobes of the window.
Tapered windows with less abrupt truncation show reduced sidelobe levels and lead to reduced leakage
and improved dynamic-range resolution. They also show increased mainlobe widths WM , leading to poorer
frequency resolution. The choice of a window is based on a compromise between the two conflicting re-
quirements of minimizing the mainlobe width (improving frequency resolution) and minimizing the sidelobe
magnitude (improving dynamic-range resolution).
The mainlobe width of all windows decreases as we increase the window length. However, the peak
sidelobe level remains more or less constant. To achieve a frequency resolution of Δf, the digital frequency
ΔF = Δf/S must equal or exceed the mainlobe width WM of the window. This yields the window length N.
For a given window to achieve the same frequency resolution as the rectangular window, we require a larger
window length (a smaller mainlobe width) and hence a larger signal length. The increase in signal length
must come by choosing more signal samples (and not by zero-padding). To achieve a given dynamic-range
resolution, however, we must select a window with small sidelobes, regardless of the window length.
Here, K depends on the window. To decrease Δf, we must increase N (more signal samples, not zero-padding).
Note that NFFT governs only the FFT spacing S/NFFT, whereas N governs only the frequency resolution
S/N (which does not depend on the zero-padded length). Figure E16.9A shows the FFT spectra,
plotted as continuous curves, over a selected frequency range. We make the following remarks:
1. For a given signal length N, the rectangular window resolves a smaller Δf but also has the largest
sidelobes (panels a and b). This means that the effects of leakage are more severe for a rectangular
window than for any other.
2. We can resolve a smaller Δf by increasing the signal length N alone (panel c). To resolve the
same Δf with a von Hann window, we must double the signal length N (panel d). This means that
we can improve resolution only by increasing the number of signal samples (adding more signal
information). How many more signal samples we require will depend on the desired resolution
and the type of window used.
3. We cannot resolve a smaller Δf by increasing the zero-padded length NFFT alone (panels e and
f). This means that increasing the number of samples by zero-padding cannot improve resolution.
Zero-padding simply interpolates the DFT at a denser set of frequencies. It cannot improve
the accuracy of the DFT results because adding more zeros does not add more signal information.
[Figure E16.9A DFT spectra for Example 16.9(a). Magnitude vs. analog frequency f [Hz] over 26-36 Hz for six cases: (a) N=256, NFFT=2048, no window; (b) N=256, NFFT=2048, von Hann window; (c) N=512, NFFT=2048, no window; (d) N=512, NFFT=2048, von Hann window; (e) N=256, NFFT=4096, no window; (f) N=256, NFFT=4096, von Hann window.]
16.7 Spectral Smoothing by Time Windows 561
(b) (Dynamic-Range Resolution) If A2 = 0.05 (26 dB below A1 ), the large sidelobes of the rectangular
window (13 dB below the peak) will mask the second peak at 31 Hz, even if we increase N and NFFT .
This is illustrated in Figure E16.9B(a) (where the peak magnitude is normalized to unity, or 0 dB) for
N = 512 and NFFT = 4096. For the same values of N and NFFT , however, the smaller sidelobes of the
von Hann window (31.5 dB below the peak) do allow us to resolve two distinct peaks in the windowed
spectrum, as shown in Figure E16.9B(b).
[Figure E16.9B DFT spectra for Example 16.9(b). Normalized magnitude [dB] vs. analog frequency f [Hz] over 28-32 Hz: (a) N=512, NFFT=4096, no window; (b) N=512, NFFT=4096, von Hann window.]
[Figure E16.10 DFT spectra for Example 16.10. Top panels: magnitude vs. analog frequency f [Hz] (0-250 Hz), with (c) DFT spectrum, S = 200 Hz, showing aliasing, and (d) S = 1 kHz, no change in spectral locations. Bottom panels: magnitude vs. digital frequency F, with a peak of 47.12 near F = 0.05.]
The comparison of the two DFT results suggests a peak at F = 0.05 and the presence of a sinusoid.
Since the sampling rate is S = 10 Hz, the frequency of the sinusoid is f = FS = 0.5 Hz. Let N1 = 80 and
N2 = 160. The peak in the N1-point DFT occurs at the index k1 = 4 because F = 0.05 = k1/N1 = 4/80.
Similarly, the peak in the N2-point DFT occurs at the index k2 = 8 because F = 0.05 = k2/N2 = 8/160.
Since the two spectra do not differ much, except near the peak, the difference in the peak values allows us
to compute the peak value A of the sinusoid from

XDFT2[k2] − XDFT1[k1] = 86.48 − 47.12 = 0.5N2·A − 0.5N1·A = 40A

Thus, A = 0.984, which implies the presence of the 0.5-Hz sinusoidal component 0.984 cos(πt + θ).
16.8 Applications in Signal Processing 563
Comment: The DFT results shown in Figure E16.10 are actually for the signal x(t) = cos(πt) + e^(−t),
sampled at S = 10 Hz. The sinusoidal component has unit peak value, and the DFT estimate (A = 0.984)
differs from this value by less than 2%. Choosing larger DFT lengths would improve the accuracy of the
estimate. By contrast, the 80-point DFT alone yields the estimate A = 47.12/40 = 1.178 (an 18% difference),
whereas the 160-point DFT alone yields A = 86.48/80 = 1.081 (an 8% difference).
Since each regular convolution contains 2N − 1 samples, we zero-pad h[n] and each section xk[n] with N − 1
zeros before finding yk[n] using the FFT. Splitting x[n] into equal-length segments is not a strict requirement.
We may use sections of different lengths, provided we keep track of how much each partial convolution must
be shifted before adding the results.
In either method, the FFT of the shorter sequence need be found only once, stored, and reused for
all subsequent partial convolutions. Both methods allow on-line implementation if we can tolerate a small
processing delay that equals the time required for each section of the long sequence to arrive at the processor
(assuming the time taken for finding the partial convolutions is less than this processing delay). The
correlation of two sequences may also be found in exactly the same manner, using either method, provided
we use a folded version of one sequence.
(b) To find their convolution using the overlap-save method, we start by creating the zero-padded sequence
x[n] = {0, 0, 1, 2, 3, 3, 4, 5}. If we choose M = 5, we get three overlapping sections of x[n] (we need to
zero-pad the last one) described by

x0[n] = {0, 0, 1, 2, 3}    x1[n] = {2, 3, 3, 4, 5}    x2[n] = {4, 5, 0, 0, 0}

The zero-padded h[n] becomes h[n] = {1, 1, 1, 0, 0}. Periodic convolution gives

x0[n] ⊛ h[n] = {5, 3, 1, 3, 6}
x1[n] ⊛ h[n] = {11, 10, 8, 10, 12}
x2[n] ⊛ h[n] = {4, 9, 9, 5, 0}

We discard the first two samples from each convolution and glue the results to obtain

y[n] = x[n] ⋆ h[n] = {1, 3, 6, 8, 10, 12, 9, 5, 0}

Note that the last sample (due to the zero-padding) is redundant, and may be discarded.
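The steps above can be sketched in a few lines of pure Python. This is an illustrative sketch (the text works in Matlab); the circular convolution here is computed directly, where in practice it would be done as IFFT(FFT · FFT):

```python
def circular_convolve(x, h):
    # M-point periodic (circular) convolution; in FFT-based practice this
    # step is IFFT(FFT(x) * FFT(h))
    M = len(x)
    return [sum(x[m] * h[(n - m) % M] for m in range(M)) for n in range(M)]

def overlap_save(x, h, M):
    Nh = len(h)
    hz = h + [0] * (M - Nh)               # zero-pad h[n] to length M
    xp = [0] * (Nh - 1) + x               # prepend Nh - 1 zeros to x[n]
    step = M - (Nh - 1)                   # new output samples per section
    y = []
    for i in range(0, len(x) + Nh - 1, step):
        sec = xp[i:i + M]
        sec = sec + [0] * (M - len(sec))  # zero-pad the last section
        yk = circular_convolve(sec, hz)
        y.extend(yk[Nh - 1:])             # discard the first Nh - 1 samples
    return y

y = overlap_save([1, 2, 3, 3, 4, 5], [1, 1, 1], 5)
# Reproduces the worked example; the trailing zero is redundant
assert y == [1, 3, 6, 8, 10, 12, 9, 5, 0]
```

The three sections generated inside the loop are exactly x0[n], x1[n], and x2[n] of the example.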
16.8.2 Deconvolution
Given a signal y[n] that represents the output of some system with impulse response h[n], how do we
recover the input x[n], where y[n] = x[n] ⋆ h[n]? One method is to undo the effects of convolution using
deconvolution. The time-domain approach to deconvolution was studied in Chapters 6 and 7. Here, we
examine a frequency-domain alternative based on the DFT (or FFT).
The idea is to transform the convolution relation using the FFT to obtain YFFT[k] = XFFT[k]HFFT[k],
compute XFFT[k] = YFFT[k]/HFFT[k] by pointwise division, and then find x[n] as the IFFT of XFFT[k].
This process does work in many cases, but it has two disadvantages. First, it fails if HFFT[k] equals zero at
some index, because we get division by zero. Second, it is quite sensitive to noise in the input x[n] and to
the accuracy with which y[n] is known.
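A minimal pure-Python sketch of this pointwise-division idea (the sequences here are illustrative choices, picked so that no H[k] is zero):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * n * k / N) for k in range(N)) / N
            for n in range(N)]

# Zero-pad x[n] = {1, 2, 3} and h[n] = {1, 0.5} to the convolution length 4
x = [1, 2, 3, 0]
h = [1, 0.5, 0, 0]
Y = [Xk * Hk for Xk, Hk in zip(dft(x), dft(h))]
y = [v.real for v in idft(Y)]             # y[n] = x[n] (regular conv) h[n]

# Deconvolution: X[k] = Y[k] / H[k], valid only because no H[k] is zero here
H = dft(h)
assert all(abs(Hk) > 1e-12 for Hk in H)
x_rec = [v.real for v in idft([Yk / Hk for Yk, Hk in zip(Y, H)])]
assert max(abs(a - b) for a, b in zip(x_rec, x)) < 1e-9
```

With noiseless data and nonzero H[k], the input is recovered exactly; the two disadvantages noted above appear as soon as some H[k] vanishes or y[n] is perturbed.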
The inverse DFT of Xzp[k] will include the factor 1/(MN), and its machine computation may show (small)
imaginary parts. We therefore retain only its real part, and scale by M, to obtain the interpolated signal
xI[n], which contains M − 1 interpolated values between each sample of x[n]:

xI[n] = M Re(IDFT{Xzp[k]})        (16.27)

This method is entirely equivalent to creating a zero-interpolated signal (which produces spectrum replication)
and filtering the replicated spectrum (by zeroing out the spurious images). For periodic band-limited
signals sampled above the Nyquist rate for an integer number of periods, the interpolation is exact. For all
others, imperfections show up as a poor match, especially near the ends, since we are actually interpolating
to zero outside the signal duration.
[Figure E16.12 Interpolated sinusoids for Example 16.12. Amplitude vs. time t: (a) interpolated sinusoid, 4 samples over one period; (b) interpolated sinusoid, 4 samples over a half-period.]
566 Chapter 16 The DFT and FFT
(b) For a sinusoid sampled over a half-period with four samples, interpolation does not yield exact results,
as shown in Figure E16.12(b). Since we are actually sampling one period of a full-wave rectified sine (the
periodic extension), the signal is not band-limited, and the chosen sampling frequency is too low. This
shows up as a poor match, especially near the ends of the sequence.
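The exact case of part (a) can be checked directly from Eq. (16.27). The following pure-Python sketch interpolates four samples of one period of a cosine by M = 4 (the chosen signal has a zero Nyquist bin, so the general problem of splitting X[N/2] does not arise here):

```python
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * n * k / N) for k in range(N)) / N
            for n in range(N)]

N, M = 4, 4
x = [math.cos(2 * math.pi * n / N) for n in range(N)]  # one period, 4 samples
X = dft(x)

# Zero-pad the spectrum: keep the low-frequency halves at each end and insert
# MN - N zeros in the middle (X[N/2] = 0 here, so no Nyquist-bin split needed)
MN = M * N
Xzp = X[:N // 2] + [0] * (MN - N) + X[N // 2:]
xI = [M * v.real for v in idft(Xzp)]                   # Eq. (16.27): scale by M

# Band-limited periodic signal, integer periods: the interpolation is exact
assert all(abs(xI[n] - math.cos(2 * math.pi * n / MN)) < 1e-9 for n in range(MN))
```

For the half-period case of part (b), the same code would show the end-point mismatch described above, since the periodic extension is no longer band-limited.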
The inverse FFT of the product, multiplied by the omitted factor j, yields the Hilbert transform.
[Figure: Welch and Tukey PSD estimates of a chirp signal (0.2 ≤ F ≤ 0.4), N = 400, with and without a von Hann window; magnitude vs. digital frequency F.]
[Figure 16.10 Spectrum of an IIR filter and its non-parametric FIR filter estimate; magnitude vs. digital frequency F (0 to 0.5).]
This approach is termed non-parametric because it presupposes no model for the system. In practice,
we use the FFT to approximate Rxx(f) and Ryx(f) (using the Welch method, for example) by the finite
N-sample sequences Rxx[k] and Ryx[k]. As a consequence, the transfer function H[k] = Ryx[k]/Rxx[k] also
has N samples and describes an FIR filter. Its inverse FFT yields the N-sample impulse response h[n].
Figure 16.10 shows the spectrum of an IIR filter defined by y[n] − 0.5y[n − 1] = x[n] (or h[n] = (0.5)^n u[n]),
and its 20-point FIR estimate obtained by using a 400-sample noise sequence and the Welch method.
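Why a 20-point FIR estimate is adequate for this IIR filter can be checked deterministically (this sketch is not the Welch estimate itself, which uses noise averaging; it compares the true IIR spectrum against its N-point FIR truncation, whose neglected tail contributes only a factor 0.5^N at the DFT frequencies):

```python
import cmath

# h[n] = (0.5)^n u[n] has H(F) = 1 / (1 - 0.5 exp(-j 2 pi F))
N = 20
h_fir = [0.5 ** n for n in range(N)]                   # truncated impulse response
H_fir = [sum(h_fir[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
         for k in range(N)]
H_iir = [1 / (1 - 0.5 * cmath.exp(-2j * cmath.pi * k / N)) for k in range(N)]

# Geometric-series tail: H_fir[k] = (1 - 0.5**N) H_iir[k], and 0.5**20 ~ 1e-6
err = max(abs(a - b) for a, b in zip(H_fir, H_iir))
assert err < 1e-5
```

This is why the 20-point non-parametric estimate in Figure 16.10 tracks the true IIR spectrum so closely.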
[Figure 16.11 Time-frequency plot for the sum of a sinusoid and a chirp signal: (a) 20-Hz sine + 60-100 Hz chirp signal, amplitude vs. time t [seconds]; (b) its time-frequency (waterfall) plot, time t [seconds] vs. analog frequency f [Hz].]
expressing signals in terms of wavelets (much like the Fourier series). Wavelets are functions that show the
best possible localization characteristics in both the time-domain and the frequency-domain, and are an
area of intense ongoing research.
XDFT[k] = Σ_{n=0}^{N−1} x[n] W_N^{nk},  k = 0, 1, . . . , N − 1        (16.32)

x[n] = (1/N) Σ_{k=0}^{N−1} XDFT[k] [W_N^{nk}]*,  n = 0, 1, . . . , N − 1        (16.33)
The first set of N DFT equations in N unknowns may be expressed in matrix form as
X = W_N x        (16.34)

Here, X and x are (N × 1) matrices, and W_N is an (N × N) square matrix called the DFT matrix. The
full matrix form is described by

[ X[0]   ]   [ W_N^0   W_N^0       W_N^0        . . .  W_N^0               ] [ x[0]   ]
[ X[1]   ]   [ W_N^0   W_N^1       W_N^2        . . .  W_N^(N−1)           ] [ x[1]   ]
[ X[2]   ] = [ W_N^0   W_N^2       W_N^4        . . .  W_N^(2(N−1))        ] [ x[2]   ]        (16.35)
[  .     ]   [  .       .           .            .      .                  ] [  .     ]
[ X[N−1] ]   [ W_N^0   W_N^(N−1)   W_N^(2(N−1)) . . .  W_N^((N−1)(N−1))    ] [ x[N−1] ]
x = W_N^(−1) X        (16.36)

The matrix W_N^(−1) is called the IDFT matrix. We may also obtain x directly from the IDFT relation
in matrix form, where the change of index from n to k, and the change in the sign of the exponent in
exp(−j2πnk/N), lead to a conjugate transpose of W_N. We then have

x = (1/N) [W_N*]^T X        (16.37)

Comparison of the two forms suggests that

W_N^(−1) = (1/N) [W_N*]^T        (16.38)

This very important result shows that W_N^(−1) requires only conjugation and transposition of W_N, an obvious
computational advantage.
The elements of the DFT and IDFT matrices satisfy A_ij = α^((i−1)(j−1)) for a constant α. Such matrices are known as
Vandermonde matrices. They are notoriously ill conditioned when it comes to numerical inversion. This
is not the case for W_N, however. The product of the DFT matrix W_N with its conjugate transpose
equals N times the identity matrix I, so the normalized matrix W_N/√N is unitary. For this reason, the
DFT and IDFT, which are based on (scaled) unitary operators, are also called unitary transforms.
XDFT[0] = x[0] + x[1] and XDFT[1] = x[0] − x[1]        (16.40)
The single most important result in the development of a radix-2 FFT algorithm is that an N-sample
DFT can be written as the sum of two (N/2)-sample DFTs formed from the even- and odd-indexed samples of
the original sequence. Here is the development:
XDFT[k] = Σ_{n=0}^{N−1} x[n] W_N^{nk} = Σ_{n=0}^{N/2−1} x[2n] W_N^{2nk} + Σ_{n=0}^{N/2−1} x[2n+1] W_N^{(2n+1)k}

XDFT[k] = Σ_{n=0}^{N/2−1} x[2n] W_N^{2nk} + W_N^k Σ_{n=0}^{N/2−1} x[2n+1] W_N^{2nk}

        = Σ_{n=0}^{N/2−1} x[2n] W_{N/2}^{nk} + W_N^k Σ_{n=0}^{N/2−1} x[2n+1] W_{N/2}^{nk}
If X e [k] and X o [k] denote the DFT of the even- and odd-indexed sequences of length N/2, we can rewrite
this result as
XDFT [k] = X e [k] + WNk X o [k], k = 0, 1, 2, . . . , N 1 (16.41)
Note carefully that the index k in this expression varies from 0 to N − 1 and that X^e[k] and X^o[k] are both
periodic in k with period N/2; we thus have two periods of each to yield XDFT[k]. Due to this periodicity, we
can split XDFT[k] and compute the first half and second half of the values as

XDFT[k] = X^e[k] + W_N^k X^o[k]        XDFT[k + N/2] = X^e[k] − W_N^k X^o[k],  k = 0, 1, . . . , N/2 − 1
This result is known as the Danielson-Lanczos lemma. Its signal-flow graph is shown in Figure 16.12 and
is called a butterfly due to its characteristic shape.
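Applied recursively, the Danielson-Lanczos lemma yields the complete radix-2 FFT. A short pure-Python decimation-in-time sketch (an illustration, not production code; N must be a power of 2):

```python
import cmath

def fft(x):
    # Recursive radix-2 FFT via the Danielson-Lanczos lemma:
    # X[k] = Xe[k] + W_N^k Xo[k],  X[k + N/2] = Xe[k] - W_N^k Xo[k]
    N = len(x)
    if N == 1:
        return list(x)
    Xe = fft(x[0::2])                     # DFT of even-indexed samples
    Xo = fft(x[1::2])                     # DFT of odd-indexed samples
    X = [0] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * Xo[k]   # twiddle factor * Xo[k]
        X[k] = Xe[k] + t                  # upper butterfly wing
        X[k + N // 2] = Xe[k] - t         # lower butterfly wing
    return X

# Compare against the direct DFT for an 8-point sequence
x = [1, 2, 1, 0, 3, -1, 0, 2]
N = len(x)
direct = [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
          for k in range(N)]
assert max(abs(a - b) for a, b in zip(fft(x), direct)) < 1e-9
```

Each level of recursion is one stage of N/2 butterflies, which is where the 0.5N log2 N operation count discussed later comes from.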
[Figure 16.12 The butterfly: with inputs A = X^e[k] and B = X^o[k] and twiddle factor W^t, the outputs are A + BW^t and A − BW^t.]
The inputs X^e and X^o are transformed into X^e + W_N^k X^o and X^e − W_N^k X^o. A butterfly operates on one
pair of samples and involves two complex additions and one complex multiplication. For N samples, there
are N/2 butterflies in all. Starting with N samples, this lemma reduces the computational complexity by
16.11 The FFT 573
evaluating the DFT of two (N/2)-point sequences. The DFT of each of these can once again be reduced to the
computation of sequences of length N/4 to yield
X^e[k] = X^ee[k] + W_{N/2}^k X^eo[k]        X^o[k] = X^oe[k] + W_{N/2}^k X^oo[k]        (16.42)

Since W_{N/2}^k = W_N^{2k}, we can rewrite this expression as
Separating even and odd indices, and letting x_a = x[n] and x_b = x[n + N/2],

XDFT[2k] = Σ_{n=0}^{N/2−1} [x_a + x_b] W_N^{2nk},  k = 0, 1, 2, . . . , N/2 − 1        (16.44)

XDFT[2k+1] = Σ_{n=0}^{N/2−1} [x_a − x_b] W_N^{(2k+1)n} = Σ_{n=0}^{N/2−1} [x_a − x_b] W_N^n W_N^{2nk},  k = 0, 1, . . . , N/2 − 1        (16.45)
[Figure: The DIF butterfly: with inputs A and B and twiddle factor W^t, the outputs are A + B and (A − B)W^t.]
The factors W^t, called twiddle factors, appear only in the lower corners of the butterfly wings at each
stage. Their exponents t have a definite order, described as follows for an N = 2^m-point FFT algorithm
with m stages:
1. Number P of distinct twiddle factors W^t at the ith stage: P = 2^(m−i).
2. Values of t in the twiddle factors W^t: t = 2^(i−1) Q, with Q = 0, 1, 2, . . . , P − 1.
The DIF algorithm is illustrated in Figure 16.14 for N = 2, N = 4, and N = 8.
[Figure: The DIT butterfly: with inputs A = X^e[k] and B = X^o[k] and twiddle factor W^t, the outputs are A + BW^t and A − BW^t.]
As with the decimation-in-frequency algorithm, the twiddle factors W^t at each stage appear only in the
bottom wing of each butterfly. The exponents t also have a definite (and similar) order.
In both the DIF algorithm and the DIT algorithm, it is possible to use a sequence in natural order and get
DFT results in natural order. This, however, requires more storage, since the computations cannot then be
done in place.
The difference between 0.5N log2 N and N^2 may not seem like much for small N. For example, with N = 16,
0.5N log2 N = 32 and N^2 = 256. For large N, however, the difference is phenomenal. For N = 1024 = 2^10,
0.5N log2 N ≈ 5000 and N^2 ≈ 10^6. This is like waiting 1 minute for the FFT result and (more than) 3 hours
for the identical direct DFT result. Note that N log2 N grows nearly linearly with N for large N, whereas N^2
shows a much faster, quadratic growth.
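The comparison is easy to tabulate. A two-line Python sketch of the complex-multiplication counts quoted above:

```python
import math

def dft_mults(N):
    # Direct DFT: N^2 complex multiplications
    return N * N

def fft_mults(N):
    # Radix-2 FFT: 0.5 N log2(N) complex multiplications (N a power of 2)
    return (N // 2) * int(math.log2(N))

for N in (16, 1024):
    print(N, dft_mults(N), fft_mults(N))
# N = 16:   256 vs 32
# N = 1024: 1048576 vs 5120 (roughly a factor of 200, the "1 minute vs 3 hours")
```

The ratio N/log2 N keeps growing with N, which is why the FFT made large transforms practical.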
In the DFT relations, since there are N factors that equal W^0 = 1, we actually require only N^2 − N complex
multiplications. In the FFT algorithms, the number of factors that equal 1 doubles (or halves) at each stage
and is given by 1 + 2 + 2^2 + · · · + 2^(m−1) = 2^m − 1 = N − 1. We thus actually require only 0.5N log2 N − (N − 1)
complex multiplications for the FFT. The DFT requires N^2 values of W^k, but the FFT requires at most
N such values at each stage. Due to the periodicity of W^k, only about (3/4)N of these values are distinct.
Once computed, they can be stored and used again. However, this hardly affects the comparison for large N.
Since computers use real arithmetic, the number of real operations may be found by noting that one complex
addition involves two real additions, and one complex multiplication involves four real multiplications and
two real additions (because (A + jB)(C + jD) = (AC − BD) + j(BC + AD)).
If we sample F at M intervals over one period, the frequency interval F0 equals 1/M and F = kF0 = k/M,
k = 0, 1, . . . , M − 1, and we get

XDFT[k] = Σ_{n=0}^{N−1} x[n] e^{−j2πnk/M},  k = 0, 1, . . . , M − 1        (16.48)
This is a set of M equations in N unknowns and describes the M-point DFT of the N-sample sequence x[n].
It may be written in matrix form as
X = W_M x        (16.50)

Here, X is an (M × 1) matrix, x is an (N × 1) matrix, and W_M is an (M × N) matrix. In full form,

[ X[0]   ]   [ W_M^0   W_M^0       W_M^0        . . .  W_M^0               ] [ x[0]   ]
[ X[1]   ]   [ W_M^0   W_M^1       W_M^2        . . .  W_M^(N−1)           ] [ x[1]   ]
[ X[2]   ] = [ W_M^0   W_M^2       W_M^4        . . .  W_M^(2(N−1))        ] [ x[2]   ]        (16.51)
[  .     ]   [  .       .           .            .      .                  ] [  .     ]
[ X[M−1] ]   [ W_M^0   W_M^(M−1)   W_M^(2(M−1)) . . .  W_M^((N−1)(M−1))    ] [ x[N−1] ]
For N < M , one period of x[n] is a zero-padded M -sample sequence. For N > M , however, one period of
x[n] is the periodic extension of the N -sample sequence with period M .
The sign of the exponent and the interchange of the indices n and k allow us to set up the matrix
formulation for obtaining x[n] using an (N × M) inversion matrix W_I that just equals (1/M)
times [W_M*]^T, the conjugate transpose of the (M × N) DFT matrix W_M. Its product with the (M × 1) matrix corresponding
to X[k] yields the matrix for x[n]. We thus have the forward and inverse matrix relations:

X = W_M x  (DFT)        x = W_I X = (1/M) [W_M*]^T X  (IDFT)        (16.54)

These results are valid for any choice of M and N. An interesting result is that the product W_I W_M (for M ≥ N) is the (N × N) identity matrix.
The important thing to realize is that x[n] is actually periodic with M = 4, and one period of x[n] is the
zero-padded sequence {1, 2, 1, 0}.
3. If M < N , the IDFT is periodic with period M < N . Its one period is the periodic extension of the
N -sample x[n] with period M . It thus yields a signal that corresponds to x[n] wrapped around after
M samples and does not recover the original x[n].
(a) For M = 3, we should get y[n] = {1, 2, 1} = x[n]. Let us find out.
With M = 3, we have F = k/3 for k = 0, 1, 2, and XDFT [k] becomes
XDFT[k] = [2 + 2 cos(2πk/3)] e^{−j2πk/3} = {4, −1/2 − j√3/2, −1/2 + j√3/2}

x = W_I X = (1/3) [ 1   1               1             ] [ 4             ]   [ 1 ]
                  [ 1   −1/2 + j√3/2   −1/2 − j√3/2   ] [ −1/2 − j√3/2  ] = [ 2 ]
                  [ 1   −1/2 − j√3/2   −1/2 + j√3/2   ] [ −1/2 + j√3/2  ]   [ 1 ]
This result is periodic with M = 3, and one period of this equals x[n].
(b) For M = 4, we should get a new sequence y[n] = {1, 2, 1, 0} that corresponds to a zero-padded
version of x[n], and we do (the details were worked out in Example 16.19).
(c) For M = 2, we should get a new sequence z[n] = {2, 2} that corresponds to the periodic extension of
x[n] with period 2.
With M = 2 and k = 0, 1, we have ZDFT[k] = [2 + 2 cos(πk)] e^{−jπk} = {4, 0}.
Since e^{−j2π/M} = e^{−jπ} = −1, we can find the IDFT z[n] directly from the definition as

z[0] = 0.5{ZDFT[0] + ZDFT[1]} = 2        z[1] = 0.5{ZDFT[0] − ZDFT[1]} = 2
The sequence z[n] = {2, 2} is periodic with M = 2. As expected, this equals one period of the periodic
extension of x[n] = {1, 2, 1} (with wraparound past two samples).
Chapter 16 Problems 581
CHAPTER 16 PROBLEMS
DRILL AND REINFORCEMENT
16.1 (DFT from Definition) Compute the DFT and DFS of the following signals.
(a) x[n] = {1, 2, 1, 2} (b) x[n] = {2, 1, 3, 0, 4}
(c) x[n] = {2, 2, 2, 2} (d) x[n] = {1, 0, 0, 0, 0, 0, 0, 0}
16.3 (Symmetry) For the DFT of each real sequence, compute the boxed quantities.
(a) XDFT[k] = {0, X1, 2 + j, 1, X4, j}
(b) XDFT[k] = {1, 2, X2, X3, 0, 1 − j, 2, X7}
16.4 (Properties) The DFT of x[n] is XDFT [k] = {1, 2, 3, 4}. Find the DFT of each of the following
sequences, using properties of the DFT.
(a) y[n] = x[n − 2]    (b) f[n] = x[n + 6]    (c) g[n] = x[n + 1]
(d) h[n] = e^{jπn/2} x[n]    (e) p[n] = x[n] ⊛ x[n]    (f) q[n] = x^2[n]
(g) r[n] = x[−n]    (h) s[n] = x*[n]    (i) v[n] = x^2[n]
16.5 (Replication and Zero Interpolation) The DFT of x[n] is XDFT [k] = {1, 2, 3, 4, 5}.
(a) What is the DFT of the replicated signal y[n] = {x[n], x[n]}?
(b) What is the DFT of the replicated signal f [n] = {x[n], x[n], x[n]}?
(c) What is the DFT of the zero-interpolated signal g[n] = x[n/2]?
(d) What is the DFT of the zero-interpolated signal h[n] = x[n/3]?
16.6 (DFT of Pure Sinusoids) Determine the DFS and DFT of x(t) = sin(2πf0t + π/3) (without doing
any DFT computations) if we sample this signal, starting at t = 0, and acquire the following:
(a) 4 samples over 1 period (b) 8 samples over 2 periods
(c) 8 samples over 1 period (d) 18 samples over 3 periods
(e) 8 samples over 5 periods (f ) 16 samples over 10 periods
16.7 (DFT of Sinusoids) The following signals are sampled starting at t = 0. Find their DFS and DFT
and identify the indices of the nonzero DFT components.
(a) x(t) = cos(4πt), sampled at 25 Hz, using the minimum number of samples to prevent leakage
(b) x(t) = cos(20πt) + 2 sin(40πt), sampled at 25 Hz with N = 15
(c) x(t) = sin(10πt) + 2 sin(40πt), sampled at 25 Hz for 1 s
(d) x(t) = sin(40πt) + 2 sin(60πt), sampled at intervals of 0.004 s for four periods
16.8 (Aliasing and Leakage) The following signals are sampled and the DFT of the sampled signal
obtained. Which cases will show aliasing, or leakage, or both? In which cases can the effects of
leakage and/or aliasing be avoided, and how?
16.9 (Spectral Spacing) What is the spectral spacing in the 500-point DFT of a sampled signal obtained
by sampling an analog signal at 1 kHz?
16.10 (Spectral Spacing) We wish to sample a signal of 1-s duration, band-limited to 50 Hz, and compute
the DFT of the sampled signal.
(a) Using the minimum sampling rate that avoids aliasing, what is the spectral spacing Δf, and
how many samples are acquired?
(b) How many padding zeros are needed to reduce the spacing to 0.5Δf, using the minimum sampling
rate to avoid aliasing, if we use the DFT?
(c) How many padding zeros are needed to reduce the spacing to 0.5Δf, using the minimum sampling
rate to avoid aliasing, if we use a radix-2 FFT?
16.11 (Convolution and the DFT) Consider two sequences x[n] and h[n] of length 12 samples and 20
samples, respectively.
(a) How many padding zeros are needed for x[n] and h[n] in order to find their regular convolution
y[n], using the DFT?
(b) If we pad x[n] with eight zeros and find the periodic convolution yp [n] of the resulting 20-point
sequence with h[n], for what indices will the samples of y[n] and yp [n] be identical?
16.12 (FFT) Write out the DFT sequence that corresponds to the following bit-reversed sequences obtained
using the DIF FFT algorithm.
(a) The 8-point DIF algorithm. (b) The 8-point DIT algorithm.
16.14 (Spectral Spacing and the FFT) We wish to sample a signal of 1-s duration, band-limited
to 100 Hz, in order to compute its spectrum. The spectral spacing should not exceed 0.5 Hz. Find
the minimum number N of samples needed and the actual spectral spacing Δf if we use
16.16 (Convolution) Find the periodic convolution of x[n] = {1, 2, 1} and h[n] = {1, 2, 3}, using
(a) The time-domain convolution operation.
(b) The DFT operation. Is this result identical to that of part (a)?
(c) The radix-2 FFT and zero-padding. Is this result identical to that of part (a)? Should it be?
16.17 (Correlation) Find the periodic correlation rxh of x[n] = {1, 2, 1} and h[n] = {1, 2, 3}, using
(a) The time-domain correlation operation.
(b) The DFT.
16.18 (Convolution of Long Sequences) Let x[n] = {1, 2, 1} and h[n] = {1, 2, 1, 3, 2, 2, 3, 0, 1, 0, 2, 2}.
(a) Find their convolution using the overlap-add method.
(b) Find their convolution using the overlap-save method.
(c) Are the results identical to the time-domain convolution of x[n] and h[n]?
16.21 (Properties) For each DFT pair shown, compute the values of the boxed quantities, using properties
such as conjugate symmetry and Parseval's theorem.
(a) {x0, 3, 4, 0, 2} ⇔ {5, X1, 1.28 − j4.39, X3, 8.78 − j1.4}
(b) {x0, 3, 4, 2, 0, 1} ⇔ {4, X1, 4 − j5.2, X3, X4, 4 − j1.73}
16.22 (Properties) Let x[n] = {1, 2, 3, 4, 5, 6}. Without evaluating its DFT X[k], compute the
following:
(a) X[0]    (b) Σ_{k=0}^{5} X[k]    (c) X[3]    (d) Σ_{k=0}^{5} |X[k]|^2    (e) Σ_{k=0}^{5} (−1)^k X[k]
16.23 (DFT Computation) Find the N-point DFT of each of the following signals.
(a) x[n] = δ[n]    (b) x[n] = δ[n − K], K < N
(c) x[n] = δ[n − 0.5N] (N even)    (d) x[n] = δ[n − 0.5(N − 1)] (N odd)
(e) x[n] = 1    (f) x[n] = δ[n − 0.5(N − 1)] + δ[n − 0.5(N + 1)] (N odd)
(g) x[n] = (−1)^n (N even)    (h) x[n] = e^{j4πn/N}
(i) x[n] = cos(4πn/N)    (j) x[n] = cos(4πn/N + 0.25π)
16.24 (Properties) The DFT of a signal x[n] is XDFT[k]. If we use its conjugate YDFT[k] = X*DFT[k] and
obtain its DFT as y[n], how is y[n] related to x[n]?
16.25 (Properties) Let X[k] = {1, 2, 1 − j, j2, 0, . . .} be the 8-point DFT of a real signal x[n].
(a) Determine X[k] in its entirety.
(b) What is the DFT Y[k] of the signal y[n] = (−1)^n x[n]?
(c) What is the DFT G[k] of the zero-interpolated signal g[n] = x[n/2]?
(d) What is the DFT H[k] of h[n] = {x[n], x[n], x[n]} obtained by threefold replication of x[n]?
16.26 (Spectral Spacing) We wish to sample the signal x(t) = cos(50πt) + sin(200πt) at 800 Hz and
compute the N-point DFT of the sampled signal x[n].
(a) Let N = 100. At what indices would you expect to see the spectral peaks? Will the peaks occur
at the frequencies of x(t)?
(b) Let N = 128. At what indices would you expect to see the spectral peaks? Will the peaks occur
at the frequencies of x(t)?
16.27 (Spectral Spacing) We wish to sample the signal x(t) = cos(50πt) + sin(80πt) at 100 Hz and
compute the N-point DFT of the sampled signal x[n].
(a) Let N = 100. At what indices would you expect to see the spectral peaks? Will the peaks occur
at the frequencies of x(t)?
(b) Let N = 128. At what indices would you expect to see the spectral peaks? Will the peaks occur
at the frequencies of x(t)?
16.28 (Spectral Spacing) We wish to identify the 21-Hz component from the N -sample DFT of a signal.
The sampling rate is 100 Hz, and only 128 signal samples are available.
(a) If N = 128, will there be a DFT component at 21 Hz? If not, what is the frequency closest to
21 Hz that can be identified? What DFT index does this correspond to?
(b) Assuming that all signal samples must be used and zero-padding is allowed, what is the smallest
value of N that will result in a DFT component at 21 Hz? How many padding zeros will be
required? At what DFT index will the 21-Hz component appear?
16.29 (Sampling Frequency) For each of the following signals, estimate the sampling frequency and
sampling duration by arbitrarily choosing the bandwidth as the frequency where |X(f)| is 5% of its
maximum and the signal duration as the time at which x(t) is 1% of its maximum.
(a) x(t) = e^{−t} u(t)    (b) x(t) = t e^{−t} u(t)    (c) x(t) = tri(t)
16.30 (Sampling Frequency and Spectral Spacing) It is required to sample the signal x(t) = e^{−t} u(t)
and compute the DFT of the sampled signal. The signal is sampled for a duration D that contains
95% of the signal energy. How many samples are acquired if the sampling rate S is chosen to ensure
that
(a) The aliasing level at f = 0.5S due to the first replica is less than 1% of the peak level?
(b) The energy in the aliased signal past f = 0.5S is less than 1% of the total signal energy?
16.31 (Sampling Rate and the DFT) A periodic square wave x(t) with a duty ratio of 0.5 and period T =
2 is sampled for one full period to obtain N samples. The N-point DFT of the samples corresponds
to the signal y(t) = A + B sin(πt).
(a) What are the possible values of N for which you could obtain such a result? For each such
choice, compute the values of A and B.
(b) What are the possible values of N for which you could obtain such a result using the radix-2
FFT? For each such choice, compute the values of A and B.
(c) Is it possible for y(t) to be identical to x(t) for any choice of sampling rate?
16.32 (Sampling Rate and the DFT) A periodic signal x(t) with period T = 2 is sampled for one full
period to obtain N samples. The signal reconstructed from the N -point DFS of the samples is y(t).
(a) Will the DFS show the effects of leakage?
(b) Let N = 8. How many harmonics of x(t) can be identified in the DFS? What constraints on
x(t) will ensure that y(t) = x(t)?
(c) Let N = 12. How many harmonics of x(t) can be identified in the DFS? What constraints on
x(t) will ensure that y(t) = x(t)?
16.33 (DFT from Definition) Use the defining relation to compute the N-point DFT of the following:
(a) x[n] = δ[n], 0 ≤ n ≤ N − 1
(b) x[n] = α^n, 0 ≤ n ≤ N − 1
(c) x[n] = e^{jπn/N}, 0 ≤ n ≤ N − 1
16.34 (DFT Concepts) The signal x(t) = cos(150πt) + cos(180πt) is to be sampled and analyzed using
the DFT.
(a) What is the minimum sampling rate Smin to prevent aliasing?
(b) With the sampling rate chosen as S = 2Smin , what is the minimum number of samples Nmin
required to prevent leakage?
(c) With the sampling rate chosen as S = 2Smin , and N = 3Nmin , what is the DFT of the sampled
signal?
(d) With S = 160 Hz and N = 256, at what DFT indices would you expect to see spectral peaks?
Will leakage be present? Will aliasing occur?
16.35 (DFT Concepts) The signal x(t) = cos(50πt) + cos(80πt) is sampled at S = 200 Hz.
(a) What is the minimum number of samples required to prevent leakage?
(b) Find the DFT of the sampled signal if x(t) is sampled for 1 s.
(c) What are the DFT indices of the spectral peaks if N = 128?
16.38 (DFT Properties) Consider the signal x[n] = n + 1, 0 ≤ n ≤ 7. Use Matlab to compute its DFT.
Confirm the following properties by computing the DFT.
(a) The DFT of y[n] = x[−n] to confirm the (circular) folding property
(b) The DFT of f[n] = x[n − 2] to confirm the (circular) shift property
(c) The DFT of g[n] = x[n/2] to confirm the zero-interpolation property
(d) The DFT of h[n] = {x[n], x[n]} to confirm the signal-replication property
(e) The DFT of p[n] = x[n] cos(0.5πn) to confirm the modulation property
(f) The DFT of r[n] = x^2[n] to confirm the multiplication property
(g) The DFT of s[n] = x[n] ⊛ x[n] to confirm the periodic convolution property
16.40 (Resolution) We wish to compute the radix-2 FFT of the signal samples acquired from the signal
x(t) = A cos(2πf0t) + B cos[2π(f0 + Δf)t], where f0 = 100 Hz. The sampling frequency is S = 480 Hz.
(a) Let A = B = 1. What is the smallest number of signal samples Nmin required for a frequency
resolution of Δf = 2 Hz if no window is used? How does this change if we wish to use a
von Hann window? What about a Blackman window? Plot the FFT magnitude for each case
to confirm your expectations.
(b) Let A = 1 and B = 0.02. Argue that we cannot obtain a frequency resolution of Δf = 2 Hz if
no window is used. Plot the FFT magnitude for various lengths to justify your argument. Of
the Bartlett, Hamming, von Hann, and Blackman windows, which ones can we use to obtain a
resolution of Δf = 2 Hz? Which one will require the minimum number of samples, and why?
Plot the FFT magnitude for each applicable window to confirm your expectations.
16.41 (Convolution) Consider the sequences x[n] = {1, 2, 1, 2, 1} and h[n] = {1, 2, 3, 3, 5}.
(a) Find their regular convolution using three methods: the convolution operation; zero-padding
and the DFT; and zero-padding to length 16 and the DFT. How are the results of each operation
related? What is the eect of zero-padding?
(b) Find their periodic convolution using three methods: regular convolution and wraparound; the
DFT; and zero-padding to length 16 and the DFT. How are the results of each operation related?
What is the eect of zero-padding?
16.42 (Convolution) Consider the signals x[n] = 4(0.5)^n, 0 ≤ n ≤ 4, and h[n] = n, 0 ≤ n ≤ 10.
(a) Find their regular convolution y[n] = x[n] h[n], using the Matlab routine conv.
(b) Use the FFT and IFFT to obtain the regular convolution, assuming the minimum length N
(that each sequence must be zero-padded to) for correct results.
(c) How do the results change if each sequence is zero-padded to length N + 2?
(d) How do the results change if each sequence is zero-padded to length N − 2?
16.43 (FFT of Noisy Data) Sample the sinusoid x(t) = cos(2πf₀t) with f₀ = 8 Hz at S = 64 Hz for 4 s
to obtain a 256-point sampled signal x[n]. Also generate 256 samples of a uniformly distributed noise
sequence s[n] with zero mean.
(a) Display the first 32 samples of x[n]. Can you identify the period from the plot? Compute and
plot the DFT of x[n]. Does the spectrum match your expectations?
(b) Generate the noisy signal y[n] = x[n] + s[n] and display the first 32 samples of y[n]. Do you
detect any periodicity in the data? Compute and plot the DFT of the noisy signal y[n]. Can
you identify the frequency and magnitude of the periodic component from the spectrum? Do
they match your expectations?
(c) Generate the noisy signal z[n] = x[n]s[n] (by elementwise multiplication) and display the first 32
samples of z[n]. Do you detect any periodicity in the data? Compute and plot the DFT of the
noisy signal z[n]. Can you identify the frequency of the periodic component from the spectrum?
16.44 (Filtering a Noisy ECG Signal: I) During recording, an electrocardiogram (ECG) signal, sampled
at 300 Hz, gets contaminated by 60-Hz hum. Two beats of the original and contaminated signal (600
samples) are provided on disk as ecgo.mat and ecg.mat. Load these signals into Matlab (for
example, use the command load ecgo). In an effort to remove the 60-Hz hum, use the DFT as a
filter to implement the following steps.
(a) Compute (but do not plot) the 600-point DFT of the contaminated ECG signal.
(b) By hand, compute the DFT indices that correspond to the 60-Hz signal.
(c) Zero out the DFT components corresponding to the 60-Hz signal.
(d) Take the IDFT to obtain the filtered ECG and display the original and filtered signal.
(e) Display the DFT of the original and filtered ECG signal and comment on the differences.
(f) Is the DFT effective in removing the 60-Hz interference?
588 Chapter 16 The DFT and FFT
16.45 (Filtering a Noisy ECG Signal: II) Continuing with Problem 16.44, load the original and
contaminated ECG signal sampled at 300 Hz with 600 samples provided on disk as ecgo.mat and
ecg.mat (for example, use the command load ecgo). Truncate each signal to 512 samples. In an
effort to remove the 60-Hz hum, use the DFT as a filter to implement the following steps.
(a) Compute (but do not plot) the 512-point DFT of the contaminated ECG signal.
(b) Compute the DFT indices closest to 60 Hz and zero out the DFT at these indices.
(c) Take the IDFT to obtain the filtered ECG and display the original and filtered signal.
(d) Display the DFT of the original and filtered ECG signal and comment on the differences.
(e) The DFT is not effective in removing the 60-Hz interference. Why?
(f ) From the DFT plots, suggest and implement a method for improving the results (by zeroing
out a larger portion of the DFT around 60 Hz, for example).
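The ECG files (ecgo.mat, ecg.mat) are not reproduced here, so as a sketch of the DFT-as-filter idea in Problems 16.44–16.45, this NumPy fragment uses a hypothetical 5-Hz stand-in for the ECG, adds 60-Hz hum, and zeros the two DFT bins where the hum falls. With 600 samples at 300 Hz the hum sits exactly on a bin, which is why Problem 16.44 works and the 512-sample version does not:

```python
import numpy as np

S, N = 300, 600                           # 300-Hz sampling, 600 samples (2 s)
t = np.arange(N)/S
clean = np.cos(2*np.pi*5*t)               # hypothetical stand-in for the ECG
noisy = clean + 0.5*np.cos(2*np.pi*60*t)  # contaminate with 60-Hz hum

X = np.fft.fft(noisy)
# Index k corresponds to frequency k*S/N, so 60 Hz falls at k = 60*N/S = 120,
# with its conjugate mirror at N - 120 = 480. Zero out both bins.
k60 = 60*N//S
X[k60] = 0
X[N - k60] = 0
filtered = np.fft.ifft(X).real

err = np.max(np.abs(filtered - clean))
print(err)   # essentially zero: the hum lies exactly on one DFT bin
```

Truncating to 512 samples moves 60 Hz off the bin grid (60·512/300 = 102.4), so its energy leaks into many bins and zeroing one index no longer removes it.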
16.46 (Decoding a Mystery Message) During transmission, a message signal gets contaminated by a
low-frequency signal and high-frequency noise. The message can be decoded only by displaying it in
the time domain. The contaminated signal is provided on disk as mystery1.mat. Load this signal
into Matlab (use the command load mystery1). In an effort to decode the message, use the DFT
as a filter to implement the following steps and determine what the decoded message says.
(a) Display the contaminated signal. Can you read the message?
(b) Take the DFT of the signal to identify the range of the message spectrum.
(c) Zero out the DFT component corresponding to the low-frequency signal.
(d) Zero out the DFT components corresponding to the high-frequency noise.
(e) Take the IDFT to obtain the filtered signal and display it to decode the message.
16.47 (Spectrum Estimation) The FFT is extensively used in estimating the spectrum of various signals,
detecting periodic components buried in noise, or detecting long-term trends. The monthly rainfall
data, for example, tends to show periodicity (an annual cycle). However, long-term trends may also
be present due to factors such as deforestation and soil erosion that tend to reduce rainfall amounts
over time. Such long-term trends are often masked by the periodicity in the data and can be observed
only if the periodic components are first removed (filtered).
(a) Generate a signal x[n] = 0.01n + sin(nπ/6), 0 ≤ n ≤ 500, and add some random noise to
simulate monthly rainfall data. Can you observe any periodicity or long-term trend from a plot
of the data?
(b) Find the FFT of the rainfall data. Can you identify the periodic component from the FFT
magnitude spectrum?
(c) Design a notch filter to remove the periodic component from the rainfall data. You may identify
the frequency to be removed from x[n] (if you have not been able to identify it from the FFT).
Filter the rainfall data through your filter and plot the filtered data. Do you observe any
periodicity in the filtered data? Can you detect any long-term trends from the plot?
(d) To detect the long-term trend, pass the filtered data through a moving average filter. Experiment
with different lengths. Does averaging of the filtered data reveal the long-term trend? Explain
how you might go about quantifying the trend.
16.48 (The FFT as a Filter) We wish to filter out the 60-Hz interference from the signal
x(t) = cos(100πt) + cos(120πt)
by sampling x(t) at S = 500 Hz and passing the sampled signal x[n] through a lowpass filter with a
cutoff frequency of fC = 55 Hz. The N-point FFT of the filtered signal may be obtained by simply
zeroing out the FFT X[k] of the sampled signal between the indices M = int(NfC/S) and N − M
(corresponding to the frequencies between fC and S − fC). This is entirely equivalent to multiplying
X[k] by a filter function H[k] of the form
H[k] = { 1, (M ones), (N − 2M − 1 zeros), (M ones) }
The FFT of the filtered signal equals Y [k] = H[k]X[k], and the filtered signal y[n] is obtained by
computing its IFFT.
(a) Start with the smallest value of N you need to resolve the two frequencies and successively
generate the sampled signal x[n], its FFT X[k], the filter function H[k], the FFT Y [k] of the
filtered signal as Y [k] = H[k]X[k], and its IFFT y[n]. Plot X[k] and Y [k] on the same plot and
x[n] and y[n] on the same plot. Is the 60-Hz signal completely blocked? Is the filtering eective?
(b) Double the value of N several times and, for each case, repeat the computations and plots of
part (a). Is there a noticeable improvement in the filtering?
(c) The filter described by the N -sample h[n] is not very useful because its true frequency response
H(F ) matches the N -point FFT only at N points and varies considerably in between. To see
this, superimpose the DTFT H(F ) of the N -sample h[n] (using the value of N from part (a))
over 0 F 1 (using 4N points) and the N -point FFT X[k]. How does H(F ) dier from
H[k]? What is the reason for this dierence? Can a larger N reduce the dierences? If not,
how can the dierences be minimized? (This forms the subject of frequency sampling filters
that we discuss in Chapter 20.)
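As a sketch of the filtering scheme described above (the choice N = 500, i.e. one full second of data, is an assumption that puts both tones exactly on DFT bins), the following NumPy fragment builds H[k], applies it, and recovers the 50-Hz component:

```python
import numpy as np

S, fC = 500, 55                    # sampling rate and cutoff (Hz)
N = 500                            # one second of data: 50 Hz and 60 Hz land on exact bins
n = np.arange(N)
x = np.cos(100*np.pi*n/S) + np.cos(120*np.pi*n/S)   # 50-Hz + 60-Hz tones

# H[k]: ones for k = 0..M and k = N-M..N-1, zeros in between.
M = int(N*fC/S)
H = np.zeros(N)
H[:M + 1] = 1
H[N - M:] = 1

y = np.fft.ifft(H*np.fft.fft(x)).real   # filtered signal
ideal = np.cos(100*np.pi*n/S)           # the 50-Hz tone alone
print(np.max(np.abs(y - ideal)))
```

Because the 60-Hz bin (k = 60) and its mirror (k = 440) fall in the zeroed band, the interference is removed exactly at the sample instants; part (c) of the problem explains why this does not mean the filter is good between the bins.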
16.50 (Decimation) To decimate a signal x[n] by N , we use a lowpass filter (to band-limit the signal
to F = 0.5/N ), followed by a down-sampler (that retains only every N th sample). In this problem,
ignore the lowpass filter.
(a) Generate the test signal x[n] = cos(0.2nπ) + cos(0.3nπ), 0 ≤ n ≤ 59. Plot its DFT. Can you
identify the frequencies present?
(b) Decimate x[n] by N = 2 to obtain the signal x₂[n]. Is the signal x[n] sufficiently band-limited
in this case? Plot the DFT of x2 [n]. Can you identify the frequencies present? Do the results
match your expectations? Would you be able to recover x[n] from band-limited interpolation
(by N = 2) of x2 [n]?
(c) Decimate x[n] by N = 4 to obtain the signal x₄[n]. Is the signal x[n] sufficiently band-limited in
this case? Plot the DFT of x₄[n]. Can you identify the frequencies present? If not, explain how
the result differs from that of part (b). Would you be able to recover x[n] from band-limited
interpolation (by N = 4) of x4 [n]?
16.51 (DFT of Large Data Sets) The DFT of a large N -point data set may be obtained from the DFT
of smaller subsets of the data. In particular, if N = RC, we arrange the data as an R × C matrix
(by filling along columns), find the DFT of each row, multiply each result at the location (r, c) by
W^(rc) = e^(−j2πrc/N), where r = 0, 1, ..., R − 1 and c = 0, 1, ..., C − 1, find the DFT of each column, and
reshape the result (by rows) into the required N-point DFT. Let x[n] = n + 1, 0 ≤ n ≤ 11.
(a) Find the DFT of x[n] by using this method with R = 3, C = 4.
(b) Find the DFT of x[n] by using this method with R = 4, C = 3.
(c) Find the DFT of x[n] using the Matlab command fft.
(d) Do all methods yield identical results? Can you justify your answer?
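The row-column recipe above can be sketched directly in NumPy (the book assumes Matlab; this is an equivalent illustration, not the text's solution) and checked against a direct FFT:

```python
import numpy as np

x = np.arange(1, 13)              # x[n] = n + 1, 0 <= n <= 11, so N = 12
N, R, C = 12, 3, 4

M = x.reshape(C, R).T             # fill an R x C matrix along columns: M[r,c] = x[c*R + r]
T = np.fft.fft(M, axis=1)         # DFT of each row (length C)
r = np.arange(R).reshape(-1, 1)
c = np.arange(C).reshape(1, -1)
T = T * np.exp(-2j*np.pi*r*c/N)   # twiddle factors W^(rc) = exp(-j*2*pi*r*c/N)
T = np.fft.fft(T, axis=0)         # DFT of each column (length R)
X = T.reshape(-1)                 # read the result out by rows

print(np.allclose(X, np.fft.fft(x)))
```

Swapping R = 4, C = 3 in the same code again matches the direct FFT, which answers part (d).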
16.52 (Time-Frequency Plots) This problem deals with time-frequency plots of a sum of sinusoids.
(a) Generate 600 samples of the signal x[n] = cos(0.1nπ) + cos(0.4nπ) + cos(0.7nπ), the sum of
three pure cosines at F = 0.05, 0.2, 0.35. Use the Matlab command fft to plot its DFT
magnitude. Use the ADSP routine timefreq to display its time-frequency plot. What do the
plots reveal?
(b) Generate 200 samples each of the three signals y₁[n] = cos(0.1nπ), y₂[n] = cos(0.4nπ), and
y₃[n] = cos(0.7nπ). Concatenate them to form the 600-sample signal y[n] = {y₁[n], y₂[n], y₃[n]}.
Plot its DFT magnitude and display its time-frequency plot. What do the plots reveal?
(c) Compare the DFT magnitude and time-frequency plots of x[n] and y[n]. How do they differ?
16.53 (Deconvolution) The FFT is a useful tool for deconvolution. Given an input signal x[n] and the
system response y[n], the system impulse response h[n] may be found from the IDFT of the ratio
HDFT [k] = YDFT [k]/XDFT [k]. Let x[n] = {1, 2, 3, 4} and h[n] = {1, 2, 3}.
(a) Obtain the convolution y[n] = x[n] ∗ h[n]. Now zero-pad x[n] to the length of y[n] and find the
DFT of the two sequences and their ratio HDFT[k]. Does the IDFT of HDFT[k] equal h[n] (to
within machine roundoff)? Should it?
(b) Repeat part (a) with x[n] = {1, −2, 3, −4} and h[n] = {1, −2, 3}. Does the method work for
this choice? Does the IDFT of HDFT[k] equal h[n] (to within machine roundoff)?
(c) Repeat part (a) with x[n] = {1, 2, 3} and h[n] = {1, 2, 3, 4}. Show that the method does
not work because the division yields infinite or indeterminate results (such as 1/0 or 0/0). Does the
method work if you replace zeros by very small quantities (for example, 10⁻¹⁰)? Should it?
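Part (a) of the deconvolution scheme can be sketched in a few NumPy lines (an illustration under the same data as the problem, not the book's own listing): zero-pad x[n] to the length of y[n], divide the DFTs, and invert.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 2.0, 3.0])
y = np.convolve(x, h)                 # system output, length 4 + 3 - 1 = 6

# Zero-pad x[n] to the length of y[n], divide the DFTs, and invert.
N = len(y)
HDFT = np.fft.fft(y) / np.fft.fft(x, N)
h_est = np.fft.ifft(HDFT).real

print(np.round(h_est, 6))             # h[n] zero-padded to length 6, within roundoff
```

The division is safe here because the zero-padded DFT of x[n] has no zero-valued bins; part (c) constructs a case where it does.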
16.54 (FFT of Two Real Signals at Once) Show that it is possible to find the FFT of two real sequences
x[n] and y[n] from a single FFT operation on the complex signal g[n] = x[n] + jy[n] as
XDFT[k] = 0.5(GDFT∗[N − k] + GDFT[k])        YDFT[k] = j0.5(GDFT∗[N − k] − GDFT[k])
Use this result to find the FFT of x[n] = {1, 2, 3, 4} and y[n] = {5, 6, 7, 8} and compare the results
with their FFT computed individually.
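The result above (with the conjugates restored) is easy to verify numerically; this NumPy sketch recovers both spectra from a single FFT of g[n] = x[n] + jy[n]:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([5.0, 6.0, 7.0, 8.0])
N = len(x)

G = np.fft.fft(x + 1j*y)                       # one FFT of the complex signal g[n]
Gflip = np.conj(G[(N - np.arange(N)) % N])     # G*[N-k], with index N taken modulo N

X = 0.5*(Gflip + G)                            # recovered FFT of x[n]
Y = 0.5j*(Gflip - G)                           # recovered FFT of y[n]

print(np.allclose(X, np.fft.fft(x)), np.allclose(Y, np.fft.fft(y)))
```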
16.55 (Quantization Error) Quantization leads to noisy spectra. Its effects can be studied only in
statistical terms. Let x(t) = cos(20πt) be sampled at 50 Hz to obtain the 256-point sampled signal
x[n].
(a) Plot the linear and decibel magnitude of the DFT of x[n].
(b) Quantize x[n] by rounding to B bits to generate the quantized signal y[n]. Plot the linear
and decibel magnitude of the DFT of y[n]. Compare the DFT spectra of x[n] and y[n] for
B = 8, 4, 2, and 1. What is the effect of decreasing the number of bits on the DFT spectrum
of y[n]?
(c) Repeat parts (a) and (b), using quantization by truncation. How do the spectra differ in this
case?
(d) Repeat parts (a)–(c) after windowing x[n] by a von Hann window. What is the effect of windowing?
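The rounding step of part (b) can be sketched as follows (a NumPy illustration, not the book's listing); rounding to B bits keeps the error within half an LSB, i.e. within 2^(−B) for a signal spanning [−1, 1):

```python
import numpy as np

S, N = 50, 256
n = np.arange(N)
x = np.cos(20*np.pi*n/S)              # samples of cos(20*pi*t) at 50 Hz

for B in (8, 4, 2):
    # Round each sample to B bits (one of 2**B levels spanning [-1, 1)).
    y = np.round(x * 2**(B - 1)) / 2**(B - 1)
    err = np.max(np.abs(y - x))
    # Rounding error is bounded by half an LSB = 2**(-B).
    print(B, err, err <= 2**(-B))
```

Replacing np.round with np.floor gives quantization by truncation, whose error is biased (always one-sided), which is what changes the spectra in part (c).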
16.56 (Sampling Jitter) During the sampling operation, the phase noise on the sampling clock can result
in jitter, or random variations in the time of occurrence of the true sampling instant. Jitter leads
to a noisy spectrum, and its effects can be studied only in statistical terms. Consider the analog signal
x(t) = cos(2πf₀t) sampled at a rate S that equals three times the Nyquist rate.
(a) Generate a time array tn of 256 samples at intervals of ts = 1/S. Generate the sampled signal
x[n] from values of x(t) at the time instants in tn . Plot the DFT magnitude of x[n].
(b) Add some uniformly distributed random noise with a mean of zero and a noise amplitude of
A·ts to tn to form the new time array tnn. Generate the sampled signal y[n] from values of
x(t) at the time instants in tnn. Plot the DFT magnitude of y[n] and compare with the DFT
magnitude of x[n] for A = 0.01, 0.1, 1, 10. What is the effect of increasing the noise amplitude
on the DFT spectrum of y[n]? What is the largest value of A for which you can still identify
the signal frequency from the DFT of y[n]?
(c) Repeat parts (a) and (b) after windowing x[n] and y[n] by a von Hann window. What is the
effect of windowing?
Chapter 17
THE z-TRANSFORM
Here, x[n] and X(z) form a transform pair, and the double arrow implies a one-to-one correspondence
between the two.
X(z) = 7z² + 3z + 1 + 4z⁻¹ − 8z⁻² + 5z⁻³
17.1 The Two-Sided z-Transform 593
Comparing x[n] and X(z), we observe that the quantity z⁻¹ plays the role of a unit delay operator. The
sample location n = 2 in x[n], for example, corresponds to the term with z⁻² in X(z). In concept, then, it
is not hard to go back and forth between a sequence and its z-transform if all we are given is a finite number
of samples.
Since the defining relation for X(z) describes a power series, it may not converge for all z. The values of
z for which it does converge define the region of convergence (ROC) for X(z). Two completely different
sequences may produce the same two-sided z-transform X(z), but with different regions of convergence. It is
important (unlike Laplace transforms) that we specify the ROC associated with each X(z), especially when
dealing with the two-sided z-transform.
(b) Let x[n] = 2δ[n + 1] + δ[n] − 5δ[n − 1] + 4δ[n − 2]. This describes the sequence x[n] = {2, 1, −5, 4}. Its
z-transform is evaluated as X(z) = 2z + 1 − 5z⁻¹ + 4z⁻². No simplifications are possible. The ROC is
the entire z-plane, except z = 0 and z = ∞ (or 0 < |z| < ∞).
(c) Let x[n] = u[n] u[n N ]. This represents a sequence of N samples, and its z-transform may be
written as
X(z) = 1 + z⁻¹ + z⁻² + ··· + z^−(N−1)
A closed-form solution may be found using the defining relation as follows:
X(z) = Σ_{k=0}^{N−1} z⁻ᵏ = (1 − z⁻ᴺ)/(1 − z⁻¹),  z ≠ 1        ROC: z ≠ 0
The ROC is the entire z-plane, except z = 0 (or |z| > 0). Note that if z = 1, we get X(z) = N.
(d) Let x[n] = u[n]. We evaluate its z-transform using the defining relation as follows:
X(z) = Σ_{k=0}^{∞} z⁻ᵏ = Σ_{k=0}^{∞} (z⁻¹)ᵏ = 1/(1 − z⁻¹) = z/(z − 1),        ROC: |z| > 1
The geometric series converges only for |z⁻¹| < 1 or |z| > 1, which defines the ROC for this X(z).
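The closed form is easy to check numerically: for any |z| > 1, the partial sums of the defining series approach z/(z − 1). A minimal sketch at the sample point z = 2 (an arbitrary choice inside the ROC):

```python
# Numerical check of X(z) = z/(z - 1) for x[n] = u[n], valid for |z| > 1:
# the partial sums of z**(-k) converge to the closed form.
z = 2.0
partial = sum(z**(-k) for k in range(200))   # truncated defining series
closed = z/(z - 1)
print(partial, closed)                        # both 2.0 to machine precision
```

For |z| ≤ 1 the partial sums fail to converge, which is exactly why the ROC excludes that region.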
594 Chapter 17 The z-Transform
Finite Sequences
 1   δ[n]                    1                                       all z
 2   u[n] − u[n − N]         (1 − z⁻ᴺ)/(1 − z⁻¹)                     z ≠ 0
Causal Signals
 3   u[n]                    z/(z − 1)                               |z| > 1
 4   αⁿu[n]                  z/(z − α)                               |z| > |α|
 5   (−α)ⁿu[n]               z/(z + α)                               |z| > |α|
 6   nu[n]                   z/(z − 1)²                              |z| > 1
 7   nαⁿu[n]                 αz/(z − α)²                             |z| > |α|
 8   cos(nΩ)u[n]             (z² − z cos Ω)/(z² − 2z cos Ω + 1)      |z| > 1
 9   sin(nΩ)u[n]             (z sin Ω)/(z² − 2z cos Ω + 1)           |z| > 1
Anti-Causal Signals
12   −u[−n − 1]              z/(z − 1)                               |z| < 1
13   −nu[−n − 1]             z/(z − 1)²                              |z| < 1
14   −αⁿu[−n − 1]            z/(z − α)                               |z| < |α|
15   −nαⁿu[−n − 1]           αz/(z − α)²                             |z| < |α|
(e) Let x[n] = αⁿu[n]. Using the defining relation, its z-transform and ROC are
X(z) = Σ_{k=0}^{∞} αᵏz⁻ᵏ = Σ_{k=0}^{∞} (α/z)ᵏ = 1/(1 − α/z) = z/(z − α),        ROC: |z| > |α|
Figure 17.1 The ROC (shown shaded) of the z-transform for various sequences: right-sided signals (exterior of a circle), left-sided signals (interior of a circle), and two-sided signals (an annulus)
The ROC excludes all pole locations (denominator roots) where X(z) becomes infinite. As a result,
the ROC of right-sided signals is |z| > |p|max and lies exterior to a circle of radius |p|max, the magnitude
of the largest pole. The ROC of causal signals, with x[n] = 0, n < 0, is also |z| > |p|max and includes
z = ∞. Similarly, the ROC of a left-sided signal x[n] is |z| < |p|min and lies interior to a
circle of radius |p|min, the smallest pole magnitude of X(z). Finally, the ROC of a two-sided signal x[n] is
|p|min < |z| < |p|max, an annulus whose radii correspond to the smallest and largest pole magnitudes of X(z).
We use inequalities of the form |z| < |α| (and not |z| ≤ |α|), for example, because X(z) may not converge
at the boundary |z| = |α|.
The ROC of Y(z) is |z| < |α|. The z-transform of αⁿu[n] is also z/(z − α), but with an ROC of |z| > |α|. Do
you see the problem? We cannot uniquely identify a signal from its transform alone, unless we also specify
the ROC. In this book, we will assume a right-sided signal if no ROC is specified.
(b) Let X(z) = z/(z − 2) + z/(z + 3).
Its ROC depends on the nature of x[n].
Its ROC depends on the nature of x[n].
If x[n] is assumed right-sided, the ROC is |z| > 3 (because |p|max = 3).
If x[n] is assumed left-sided, the ROC is |z| < 2 (because |p|min = 2).
If x[n] is assumed two-sided, the ROC is 2 < |z| < 3.
The region |z| < 2 and |z| > 3 does not correspond to a valid region of convergence because we must
find a region that is common to both terms.
With the change of variable m = k − N, the new summation index m still ranges from −∞ to ∞ (since N
is finite), and we obtain
Y(z) = Σ_{m=−∞}^{∞} x[m] z^−(m+N) = z⁻ᴺ Σ_{m=−∞}^{∞} x[m] z⁻ᵐ = z⁻ᴺX(z)        (17.5)
Scaling: The scaling property follows from the transform of y[n] = αⁿx[n], to yield
Y(z) = Σ_{k=−∞}^{∞} αᵏx[k] z⁻ᵏ = Σ_{k=−∞}^{∞} x[k] (z/α)⁻ᵏ = X(z/α)        (17.8)
If x[n] is multiplied by e^(jnΩ) or (e^(jΩ))ⁿ, we then obtain the pair e^(jnΩ)x[n] ⇔ X(ze^(−jΩ)). An extension of this
result, using Euler's relation, leads to the times-cos and times-sin properties:
cos(nΩ)x[n] = 0.5x[n][e^(jnΩ) + e^(−jnΩ)] ⇔ 0.5[X(ze^(−jΩ)) + X(ze^(jΩ))]        (17.9)
sin(nΩ)x[n] = −j0.5x[n][e^(jnΩ) − e^(−jnΩ)] ⇔ −j0.5[X(ze^(−jΩ)) − X(ze^(jΩ))]        (17.10)
In particular, if α = −1, we obtain the useful result (−1)ⁿx[n] ⇔ X(−z).
Convolution: The convolution property is based on the fact that convolution in the time domain corresponds to multiplication in the transformed domain. The z-transforms of sequences are polynomials, and
multiplication of two polynomials corresponds to the convolution of their coefficient sequences. This property
finds extensive use in the analysis of systems in the transformed domain.
Folding: The folding property comes about if we replace z by 1/z (or let k → −k in the
defining relation). With x[n] ⇔ X(z) and y[n] = x[−n], we get
Y(z) = Σ_{k=−∞}^{∞} x[−k] z⁻ᵏ = Σ_{k=−∞}^{∞} x[k] zᵏ = Σ_{k=−∞}^{∞} x[k] (1/z)⁻ᵏ = X(1/z)        (17.11)
If the ROC of x[n] is |z| > |α|, the ROC of the folded signal x[−n] becomes |1/z| > |α| or |z| < 1/|α|.
Figure 17.2 Finding the z-transform of an anti-causal signal from a causal version
(c) We find the transform of the N-sample exponential pulse x[n] = αⁿ(u[n] − u[n − N]). We let y[n] =
u[n] − u[n − N]. Its z-transform is
Y(z) = (1 − z⁻ᴺ)/(1 − z⁻¹),  z ≠ 1
Then, the z-transform of x[n] = αⁿy[n] becomes
X(z) = [1 − (z/α)⁻ᴺ]/[1 − (z/α)⁻¹],  z ≠ α
(d) The z-transforms of x[n] = cos(nΩ)u[n] and y[n] = sin(nΩ)u[n] are found using the times-cos and
times-sin properties:
X(z) = 0.5[ze^(−jΩ)/(ze^(−jΩ) − 1) + ze^(jΩ)/(ze^(jΩ) − 1)] = (z² − z cos Ω)/(z² − 2z cos Ω + 1)
Y(z) = −j0.5[ze^(−jΩ)/(ze^(−jΩ) − 1) − ze^(jΩ)/(ze^(jΩ) − 1)] = (z sin Ω)/(z² − 2z cos Ω + 1)
(e) The z-transforms of f[n] = αⁿcos(nΩ)u[n] and g[n] = αⁿsin(nΩ)u[n] follow from the results of part
(d) and the scaling property:
F(z) = [(z/α)² − (z/α)cos Ω]/[(z/α)² − 2(z/α)cos Ω + 1] = (z² − αz cos Ω)/(z² − 2αz cos Ω + α²)
G(z) = [(z/α)sin Ω]/[(z/α)² − 2(z/α)cos Ω + 1] = (αz sin Ω)/(z² − 2αz cos Ω + α²)
(f) We use the folding property to find the transform of x[n] = α⁻ⁿu[−n − 1]. We start with the transform
pair y[n] = αⁿu[n] ⇔ z/(z − α), ROC: |z| > |α|. With y[0] = 1 and x[n] = y[−n] − δ[n], we find
α⁻ⁿu[−n − 1] ⇔ (1/z)/[(1/z) − α] − 1 = −z/(z − 1/α),        ROC: |z| < 1/|α|
(g) We use the folding property to find the transform of x[n] = α^|n|, |α| < 1 (a two-sided decaying
exponential). We write this as x[n] = αⁿu[n] + α⁻ⁿu[−n] − δ[n] (a one-sided decaying exponential and
its folded version, less the extra sample included at the origin), as illustrated in Figure E17.3G.
Figure E17.3G The signal for Example 17.3(g)
X(z) = z/(z − α) + (1/z)/[(1/z) − α] − 1 = z/(z − α) − z/(z − 1/α),        ROC: |α| < |z| < 1/|α|
Note that the ROC is an annulus that corresponds to a two-sided sequence, and describes a valid region
only if |α| < 1.
X(z) = N(z)/D(z) = (B₀ + B₁z⁻¹ + B₂z⁻² + ··· + B_Mz⁻ᴹ)/(1 + A₁z⁻¹ + A₂z⁻² + ··· + A_Nz⁻ᴺ)        (17.13)
Here, the coefficient A₀ of the leading term in the denominator has been normalized to unity.
Denoting the roots of N(z) by zᵢ, i = 1, 2, ..., M and the roots of D(z) by pₖ, k = 1, 2, ..., N, we may
also express X(z) in factored form as
X(z) = K N(z)/D(z) = K [(z − z₁)(z − z₂)···(z − z_M)]/[(z − p₁)(z − p₂)···(z − p_N)]        (17.14)
Assuming that common factors have been canceled, the M roots of N(z) and the N roots of D(z) are termed
the zeros and the poles of the transfer function, respectively.
17.3 Poles, Zeros, and the z-Plane 601
Figure E17.4 The pole-zero patterns for Example 17.4
(b) What is the z-transform corresponding to the pole-zero pattern of Figure E17.4(b)? Does it represent
a symmetric signal?
If we let X(z) = KN(z)/D(z), the four zeros (at z = ±j0.5 and z = ±j2) correspond to the numerator N(z) given by
N(z) = (z − j0.5)(z + j0.5)(z − j2)(z + j2) = (z² + 0.25)(z² + 4) = z⁴ + 4.25z² + 1
The two poles at the origin correspond to the denominator D(z) = z². With K = 1, the z-transform
is given by
X(z) = K N(z)/D(z) = (z⁴ + 4.25z² + 1)/z² = z² + 4.25 + z⁻²
Checking for symmetry, we find that X(z) = X(1/z), and thus x[n] is even symmetric. In fact,
x[n] = δ[n + 2] + 4.25δ[n] + δ[n − 2] = {1, 0, 4.25, 0, 1}. We also note that each zero is paired with its
reciprocal (j0.5 with −j2, and −j0.5 with j2), a characteristic of symmetric sequences.
Y(z) = X(z)H(z)        or        H(z) = Y(z)/X(z)        (17.15)
The time-domain and z-domain equivalence of these operations is illustrated in Figure 17.3.
The transfer function is defined only for relaxed LTI systems, either as the ratio of the output Y (z) and
input X(z), or as the z-transform of the system impulse response h[n].
A relaxed LTI system is also described by the difference equation
y[n] + A₁y[n − 1] + ··· + A_N y[n − N] = B₀x[n] + B₁x[n − 1] + ··· + B_M x[n − M]        (17.16)
whose z-transform yields the transfer function
H(z) = Y(z)/X(z) = (B₀ + B₁z⁻¹ + ··· + B_Mz⁻ᴹ)/(1 + A₁z⁻¹ + ··· + A_Nz⁻ᴺ)        (17.17)
The transfer function is thus a ratio of polynomials in z. It also allows us to retrieve either the system
difference equation or the impulse response.
(b) Let h[n] = δ[n] − 0.4(0.5)ⁿu[n]. We obtain its transfer function as
H(z) = 1 − 0.4z/(z − 0.5) = (0.6z − 0.5)/(z − 0.5)
We also obtain the difference equation by expressing the transfer function as
H(z) = Y(z)/X(z) = (0.6z − 0.5)/(z − 0.5)        or        (0.6 − 0.5z⁻¹)/(1 − 0.5z⁻¹)
and using cross-multiplication to give
(z − 0.5)Y(z) = (0.6z − 0.5)X(z)        or        (1 − 0.5z⁻¹)Y(z) = (0.6 − 0.5z⁻¹)X(z)
The difference equation may then be found using forward differences or backward differences as
y[n + 1] − 0.5y[n] = 0.6x[n + 1] − 0.5x[n]        or        y[n] − 0.5y[n − 1] = 0.6x[n] − 0.5x[n − 1]
The impulse response, the transfer function, and the dierence equation describe the same system.
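That equivalence is easy to cross-check numerically: iterating the backward-difference form with an impulse input must reproduce h[n] = δ[n] − 0.4(0.5)ⁿu[n]. A minimal Python sketch (an illustration, not part of the text):

```python
# Iterate y[n] = 0.5*y[n-1] + 0.6*x[n] - 0.5*x[n-1] with an impulse input
# and compare against the closed form h[n] = delta[n] - 0.4*(0.5)**n.
def at(seq, i):
    # sample accessor that returns 0 outside the stored range (relaxed system)
    return seq[i] if 0 <= i < len(seq) else 0.0

nmax = 8
x = [1.0] + [0.0]*(nmax - 1)          # unit impulse
y = []
for n in range(nmax):
    y.append(0.5*at(y, n - 1) + 0.6*x[n] - 0.5*at(x, n - 1))

h = [(1.0 if n == 0 else 0.0) - 0.4*0.5**n for n in range(nmax)]
print([round(v, 6) for v in y[:3]])   # [0.6, -0.2, -0.1]
```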
Figure 17.4 The ROC of stable systems (shown shaded) always includes the unit circle
In the time domain, a causal system requires a causal impulse response h[n] with h[n] = 0, n < 0. In the
z-domain, this is equivalent to a transfer function H(z) that is proper and whose ROC lies outside a circle
of finite radius. For stability, the poles of H(z) must lie inside the unit circle. Thus, for a system to be both
causal and stable, the ROC must include the unit circle.
The stability of an anti-causal system requires all the poles to lie outside the unit circle, and an ROC
that lies inside a circle of finite radius. Thus, for a system to be both anti-causal and stable, the ROC must
include the unit circle. Similarly, the ROC of a stable, two-sided system must be an annulus that includes
the unit circle, and all its poles must lie outside this annulus.
17.5 The Inverse z-Transform 605
(b) Let H(z) = z/(z − α), as before.
If the ROC is |z| < |α|, its impulse response is h[n] = −αⁿu[−n − 1], and the system is anti-causal.
For stability, we require |α| > 1 (for the ROC to include the unit circle).
This sequence can also be written as x[n] = {0, 3, 0, 5, 2}.
(b) Let X(z) = 2z² − 5z + 5z⁻¹ − 2z⁻². This transform corresponds to a noncausal sequence. Its inverse
transform is written, by inspection, as
x[n] = {2, −5, 0, 5, −2}
Comment: Since X(z) = −X(1/z), x[n] should possess odd symmetry. It does.
(a) We find the right-sided inverse of H(z) = (z − 4)/(z² − z + 1).
We arrange the polynomials in descending powers of z and use long division to obtain

             z⁻¹ − 3z⁻² − 4z⁻³ − ···
 z² − z + 1 ) z − 4
              z − 1 + z⁻¹
              -----------------
                 −3 − z⁻¹
                 −3 + 3z⁻¹ − 3z⁻²
              -----------------
                    −4z⁻¹ + 3z⁻²
                    −4z⁻¹ + 4z⁻² − 4z⁻³
              -----------------
                          −z⁻² + 4z⁻³  ···

This leads to H(z) = z⁻¹ − 3z⁻² − 4z⁻³ − ···. The sequence h[n] can be written as
h[n] = δ[n − 1] − 3δ[n − 2] − 4δ[n − 3] − ···        or        h[n] = {0, 1, −3, −4, ...}
(b) We could also have found the inverse by setting up the difference equation corresponding to
H(z) = Y(z)/X(z), to give
y[n] = y[n − 1] − y[n − 2] + x[n − 1] − 4x[n − 2]
With h[−1] = h[−2] = 0 (a relaxed system), we recursively obtain the first few values of h[n] as
h[0] = 0,  h[1] = 1,  h[2] = −3,  h[3] = −4, ...
These are identical to the values obtained using long division in part (a).
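The recursion is short enough to verify by machine; assuming H(z) = (z − 4)/(z² − z + 1) as in part (a), this Python sketch generates the first few samples of h[n] from the difference equation:

```python
# h[n] = h[n-1] - h[n-2] + x[n-1] - 4*x[n-2], with x[n] an impulse
# (relaxed system: samples outside the stored range are zero).
def at(seq, i):
    return seq[i] if 0 <= i < len(seq) else 0.0

nmax = 6
x = [1.0] + [0.0]*(nmax - 1)   # unit impulse
h = []
for n in range(nmax):
    h.append(at(h, n - 1) - at(h, n - 2) + at(x, n - 1) - 4*at(x, n - 2))

print(h)   # [0.0, 1.0, -3.0, -4.0, -1.0, 3.0]
```

The first four values match the long-division quotient z⁻¹ − 3z⁻² − 4z⁻³ − ···.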
(c) We find the left-sided inverse of H(z) = (z − 4)/(1 − z + z²).
We arrange the polynomials in ascending powers of z and use long division to obtain

             −4 − 3z + z² + ···
 1 − z + z² ) −4 + z
              −4 + 4z − 4z²
              -----------------
                 −3z + 4z²
                 −3z + 3z² − 3z³
              -----------------
                        z² + 3z³
                        z² − z³ + z⁴
              -----------------
                            4z³ − z⁴  ···

This leads to the anti-causal sequence h[n] = {..., 1, −3, −4}, with h[0] = −4.
(d) We could also have found the inverse by setting up the difference equation in the form
y[n] = y[n + 1] − y[n + 2] + x[n + 1] − 4x[n]
With h[1] = h[2] = 0, we can generate h[0], h[−1], h[−2], ..., recursively, to obtain the same result as
in part (c).
In general, Y(z) will contain terms with real constants and terms with complex conjugate residues, and may
be written as
Y(z) = K₁/(z + p₁) + K₂/(z + p₂) + ··· + A₁/(z + r₁) + A₁*/(z + r₁*) + A₂/(z + r₂) + A₂*/(z + r₂*) + ···        (17.23)
For a real root, the residue (coefficient) will also be real. For each pair of complex conjugate roots, the
residues will also be complex conjugates, and we thus need to compute only one of these.
Repeated Factors
If the denominator of Y(z) contains the repeated term (z + r)ᵏ, the partial fraction expansion corresponding
to the repeated terms has the form
Y(z) = (other terms) + A₀/(z + r)ᵏ + A₁/(z + r)^(k−1) + ··· + A_(k−1)/(z + r)        (17.24)
Observe that the constants Aⱼ ascend in index j from 0 to k − 1, whereas the denominators (z + r)ⁿ descend
in power n from k to 1. Their evaluation requires (z + r)ᵏY(z) and its derivatives. We successively find
A₀ = (z + r)ᵏY(z) |_(z=−r)                A₂ = (1/2!) d²/dz² [(z + r)ᵏY(z)] |_(z=−r)
                                                                                    (17.25)
A₁ = d/dz [(z + r)ᵏY(z)] |_(z=−r)         Aₙ = (1/n!) dⁿ/dzⁿ [(z + r)ᵏY(z)] |_(z=−r)
Even though this process allows us to find the coecients independently of each other, the algebra in finding
the derivatives can become tedious if the multiplicity k of the roots exceeds 2 or 3. Table 17.3 lists some
transform pairs useful for inversion by partial fractions.
Note: For anti-causal sequences, we get the signal −x[n]u[−n − 1], where x[n] is as listed.

 1   z/(z − α)                                                     αⁿ
 2   z/(z − α)²                                                    nα^(n−1)
 3   z/(z − α)^(N+1)   (N > 1)                                     [n(n − 1)···(n − N + 1)/N!] α^(n−N)
 4   z(C + jD)/(z − αe^(jΩ)) + z(C − jD)/(z − αe^(−jΩ))            2αⁿ[C cos(nΩ) − D sin(nΩ)]
 5   zK∠φ/(z − αe^(jΩ)) + zK∠−φ/(z − αe^(−jΩ))                     2Kαⁿ cos(nΩ + φ)
 6   z(C + jD)/(z − αe^(jΩ))² + z(C − jD)/(z − αe^(−jΩ))²          2nα^(n−1){C cos[(n − 1)Ω] − D sin[(n − 1)Ω]}
 7   zK∠φ/(z − αe^(jΩ))² + zK∠−φ/(z − αe^(−jΩ))²                   2Knα^(n−1) cos[(n − 1)Ω + φ]
 8   zK∠φ/(z − αe^(jΩ))^(N+1) + zK∠−φ/(z − αe^(−jΩ))^(N+1)         2K[n(n − 1)···(n − N + 1)/N!] α^(n−N) cos[(n − N)Ω + φ]
Its first few samples, x[0] = 0, x[1] = 0, x[2] = 1, and x[3] = 0.75 can be checked by long division.
X(z) = −4/(z − 0.25) + 4/(z − 0.5)        x[n] = −4(0.25)^(n−1)u[n − 1] + 4(0.5)^(n−1)u[n − 1]
Its inverse requires the shifting property. This form is functionally equivalent to the previous case. For
example, we find that x[0] = 0, x[1] = 0, x[2] = 1, and x[3] = 0.75, as before.
z
(b) (Repeated Roots) We find the inverse of X(z) = .
(z 1)2 (z 2)
We obtain Y(z) = X(z)/z, and set up its partial fraction expansion as
Y(z) = X(z)/z = 1/[(z − 1)²(z − 2)] = A/(z − 2) + K₀/(z − 1)² + K₁/(z − 1)
Evaluating the constants gives A = 1, K₀ = −1, and K₁ = −1, so that
X(z) = z/(z − 2) − z/(z − 1)² − z/(z − 1)        x[n] = (2ⁿ − n − 1)u[n]
The first few values x[0] = 0, x[1] = 0, x[2] = 1, x[3] = 4, and x[4] = 11 can be easily checked by long
division.
(c) (Complex Roots) We find the inverse of X(z) = (z² − 3z)/[(z − 2)(z² − 2z + 2)].
We set up the partial fraction expansion for Y(z) = X(z)/z as
Y(z) = X(z)/z = (z − 3)/[(z − 2)(z² − 2z + 2)] = A/(z − 2) + K/(z − 1 − j) + K*/(z − 1 + j)
Evaluating the constants gives A = −0.5 and K = 0.25 − j0.75.
Substituting into the expression for Y(z), and multiplying through by z, we get
X(z) = −0.5z/(z − 2) + (0.25 − j0.75)z/(z − 1 − j) + (0.25 + j0.75)z/(z − 1 + j)
Alternatively, with 0.25 − j0.75 = 0.7906∠−71.56°, Table 17.3 (entry 5) also gives
x[n] = −0.5(2)ⁿu[n] + 2(0.7906)(√2)ⁿ cos(nπ/4 − 71.56°)u[n]
The two forms, of course, are identical. Which one we pick is a matter of personal preference.
(d) (Inverse Transform of Quadratic Forms) We find the inverse of X(z) = z/(z² + 4).
One way is partial fractions. Here, we start with Bβⁿ sin(nΩ)u[n] ⇔ Bβz sin Ω/(z² − 2βz cos Ω + β²).
Comparing denominators, β² = 4, and thus β = ±2. Let us pick β = 2.
We also have 2βcos Ω = 0. Thus, Ω = π/2.
Finally, comparing numerators, Bβz sin Ω = z or B = 0.5. Thus,
x[n] = 0.5(2)ⁿ sin(nπ/2)u[n]
Comment: Had we picked β = −2, we would still have come out with the same answer.
(e) (Inverse Transform of Quadratic Forms) Let X(z) = (z² + z)/(z² − 2z + 4). We start with the
forms Aβⁿcos(nΩ)u[n] ⇔ A(z² − βz cos Ω)/(z² − 2βz cos Ω + β²) and Bβⁿsin(nΩ)u[n] ⇔ Bβz sin Ω/(z² − 2βz cos Ω + β²).
Comparing denominators, we find β = 2. Then, 2βcos Ω = 2, and thus cos Ω = 0.5 or Ω = π/3.
Now, A(z² − βz cos Ω) = A(z² − z) and Bβz sin Ω = Bz√3. We express the numerator of X(z) as
a sum of these forms to get z² + z = (z² − z) + 2z = (z² − z) + (2/√3)(z√3) (with A = 1 and
B = 2/√3 = 1.1547). Thus,
x[n] = (2)ⁿ[cos(nπ/3) + 1.1547 sin(nπ/3)]u[n]
The partial fraction expansion of Y(z) = X(z)/z leads to X(z) = −4z/(z − 0.25) + 4z/(z − 0.5).
1. If the ROC is |z| > 0.5, x[n] is causal and stable, and we obtain x[n] = −4(0.25)ⁿu[n] + 4(0.5)ⁿu[n].
2. If the ROC is |z| < 0.25, x[n] is anti-causal and unstable, and we obtain x[n] = 4(0.25)ⁿu[−n − 1] − 4(0.5)ⁿu[−n − 1].
3. If the ROC is 0.25 < |z| < 0.5, x[n] is two-sided and unstable. This ROC is valid only if the term
−4z/(z − 0.25) is treated as causal and the term 4z/(z − 0.5) as anti-causal, giving x[n] = −4(0.25)ⁿu[n] − 4(0.5)ⁿu[−n − 1].
(b) Find the unique inverse transforms of the following, assuming each system is stable:
H₁(z) = z/[(z − 0.4)(z + 0.6)]        H₂(z) = 2.5z/[(z − 0.5)(z + 2)]        H₃(z) = z/[(z − 2)(z + 3)]
To find the appropriate inverse, the key is to recognize that the ROC must include the unit circle.
Looking at the pole locations, we see that h₁[n] must be causal (both poles of H₁(z) lie inside the
unit circle), h₂[n] must be two-sided (one pole inside and one outside the unit circle), and h₃[n] must
be anti-causal (both poles outside the unit circle).
The lower limit of zero in the summation implies that the one-sided z-transform of an arbitrary signal x[n]
and its causal version x[n]u[n] are identical. Most of the properties of the two-sided z-transform also apply
to the one-sided version.
(c) Let x[n] ⇔ X(z) = 4z/(z + 0.5)², with ROC: |z| > 0.5. Find the z-transform of the signals h[n] = nx[n]
and y[n] = x[n] ∗ x[n].
By the times-n property, H(z) = −z dX(z)/dz = (4z² − 2z)/(z + 0.5)³.
By the convolution property, Y(z) = X²(z) = 16z²/(z + 0.5)⁴.
(d) Let (−4)ⁿu[n] ⇔ X(z). Find the signals corresponding to F(z) = X²(z) and G(z) = X(2z).
By the convolution property, f[n] = (−4)ⁿu[n] ∗ (−4)ⁿu[n] = (n + 1)(−4)ⁿu[n].
By the scaling property, G(z) = X(2z) = X(z/0.5) corresponds to the signal g[n] = (0.5)ⁿx[n].
Thus, we have g[n] = (−2)ⁿu[n].
Figure: A signal x[n] and its right-shifted versions x[n − 1] and x[n − 2]
For causal signals (for which x[n] = 0, n < 0), this result reduces to x[n − N] ⇔ z⁻ᴺX(z).
17.6 The One-Sided z-Transform 615
Figure: A signal x[n] and its left-shifted versions x[n + 1] and x[n + 2]
(b) Consider the signal x[n] = αⁿ. Its one-sided z-transform is identical to that of αⁿu[n] and equals
X(z) = z/(z − α). If y[n] = x[n − 1], the right-shift property, with N = 1, yields
Y(z) = z⁻¹X(z) + x[−1] = 1/(z − α) + α⁻¹
The additional term α⁻¹ arises because x[n] is not causal.
(d) With y[n] = αⁿu[n] ⇔ z/(z − α) and y[0] = 1, the left-shift property gives
y[n + 1] = α^(n+1)u[n + 1] ⇔ z[z/(z − α)] − z = αz/(z − α)
With X(z) described by x[0] + x[1]z⁻¹ + x[2]z⁻² + ···, it should be obvious that only x[0] survives as
z → ∞, and the initial value equals x[0] = lim_(z→∞) X(z).
To find the final value, we evaluate (z − 1)X(z) at z = 1. It yields meaningful results only when the poles
of (z − 1)X(z) have magnitudes smaller than unity (lie within the unit circle in the z-plane). As a result:
1. x[] = 0 if all poles of X(z) lie within the unit circle (since x[n] will then contain only exponentially
damped terms).
2. x[] is constant if there is a single pole at z = 1 (since x[n] will then include a step).
3. x[] is indeterminate if there are complex conjugate poles on the unit circle (since x[n] will then
include sinusoids). The final value theorem can yield absurd results if used in this case.
(c) Find the causal signal corresponding to X(z) = (2 + z^−1)/(1 − z^−3).
Comparing with the z-transform for a switched periodic signal, we recognize N = 3 and X1(z) = 2 + z^−1.
Thus, the first period of x[n] is {2, 1, 0}.
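Long division confirms the first period. The helper below (a sketch; `series` is our own name) expands a ratio of polynomials in z^−1 into its power series, which for this X(z) should reproduce the coefficients 2, 1, 0 repeating with period 3:

```python
# Power-series (long-division) expansion of X(z) = (2 + z^-1)/(1 - z^-3).

def series(num, den, nterms):
    """Divide polynomials in z^-1 (coefficient lists, constant term first)."""
    num = num + [0.0] * (nterms + len(den))   # working copy, padded
    out = []
    for _ in range(nterms):
        c = num[0] / den[0]                   # next series coefficient
        out.append(c)
        for k, d in enumerate(den):           # subtract c * den, aligned
            num[k] -= c * d
        num.pop(0)                            # drop the zeroed leading term
    return out

coeffs = series([2, 1], [1, 0, 0, -1], 9)
print(coeffs)   # [2.0, 1.0, 0.0, 2.0, 1.0, 0.0, 2.0, 1.0, 0.0]
```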
(d) Find the causal signal corresponding to X(z) = z^−1/(1 + z^−3).
We first rewrite X(z) as

X(z) = (z^−1)(1 − z^−3)/[(1 + z^−3)(1 − z^−3)] = (z^−1 − z^−4)/(1 − z^−6)

Comparing with the z-transform for a switched periodic signal, we recognize the period as N = 6, and X1(z) = z^−1 − z^−4. Thus, the first period of x[n] is {0, 1, 0, 0, −1, 0}.
(c) (Zero-Input and Zero-State Response) Let y[n] − 0.5y[n − 1] = 2(0.25)^n u[n], with y[−1] = −2.
Upon transformation using the right-shift property, we obtain

Y(z) − 0.5{z^−1 Y(z) + y[−1]} = 2z/(z − 0.25)  or  (1 − 0.5z^−1)Y(z) = 2z/(z − 0.25) − 1

1. Zero-state response: For the zero-state response, we assume zero initial conditions to obtain

(1 − 0.5z^−1)Yzs(z) = 2z/(z − 0.25)  or  Yzs(z) = 2z^2/[(z − 0.25)(z − 0.5)]

Yzs(z)/z = 2z/[(z − 0.25)(z − 0.5)] = −2/(z − 0.25) + 4/(z − 0.5)

2. Zero-input response: For the zero-input response, we assume zero input (the right-hand side) and use the right-shift property to get

Yzi(z) − 0.5{z^−1 Yzi(z) + y[−1]} = 0  or  Yzi(z) = −z/(z − 0.5)
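Assuming the partial fractions above, the inverses give yzs[n] = −2(0.25)^n u[n] + 4(0.5)^n u[n] and yzi[n] = −(0.5)^n u[n], so the total response is y[n] = −2(0.25)^n + 3(0.5)^n for n ≥ 0. A direct recursion of the difference equation is a quick cross-check (a sketch in plain Python):

```python
# Run y[n] = 0.5 y[n-1] + 2 (0.25)**n with y[-1] = -2 and compare
# against the closed form y[n] = -2 (0.25)**n + 3 (0.5)**n.

y_prev = -2.0                        # y[-1]
for n in range(10):
    y = 0.5 * y_prev + 2 * 0.25 ** n
    closed = -2 * 0.25 ** n + 3 * 0.5 ** n
    assert abs(y - closed) < 1e-12   # recursion matches the z-transform result
    y_prev = y
print("recursion matches closed form")
```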
(b) (Step Response) Let H(z) = 4z/(z − 0.5).
To find its step response, we let x[n] = u[n]. Then X(z) = z/(z − 1), and the output equals

Y(z) = H(z)X(z) = [4z/(z − 0.5)][z/(z − 1)] = 4z^2/[(z − 1)(z − 0.5)]

Y(z)/z = 4z/[(z − 1)(z − 0.5)] = −4/(z − 0.5) + 8/(z − 1)

Thus,

Y(z) = −4z/(z − 0.5) + 8z/(z − 1)  ⇒  y[n] = 8u[n] − 4(0.5)^n u[n]

The first term in y[n] is the steady-state response, which can be found much more easily as described shortly in Section 17.8.1.
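The difference equation behind H(z) = 4z/(z − 0.5) is y[n] − 0.5y[n − 1] = 4x[n], so the step response is easy to cross-check by recursion (a sketch):

```python
# Simulate y[n] = 0.5 y[n-1] + 4 x[n], x[n] = u[n], zero initial conditions,
# and compare with the z-transform result y[n] = 8 - 4 (0.5)**n.

y_prev = 0.0
for n in range(20):
    y = 0.5 * y_prev + 4.0           # x[n] = 1 for n >= 0
    assert abs(y - (8 - 4 * 0.5 ** n)) < 1e-12
    y_prev = y
print("step response matches 8 u[n] - 4 (0.5)^n u[n]")
```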
(c) (A Second-Order System) Let H(z) = z^2/(z^2 − (1/6)z − 1/6). Let the input be x[n] = 4u[n] and the initial conditions be y[−1] = 0, y[−2] = 12.

1. Zero-state and zero-input response: The zero-state response is found directly from H(z) as

Yzs(z) = X(z)H(z) = 4z^3/[(z^2 − (1/6)z − 1/6)(z − 1)] = 4z^3/[(z − 1/2)(z + 1/3)(z − 1)]

Yzs(z) = −2.4z/(z − 1/2) + 0.4z/(z + 1/3) + 6z/(z − 1)  ⇒  yzs[n] = −2.4(1/2)^n u[n] + 0.4(−1/3)^n u[n] + 6u[n]

To find the zero-input response, we first set up the difference equation. We start with H(z) = Y(z)/X(z) = z^2/(z^2 − (1/6)z − 1/6), or (z^2 − (1/6)z − 1/6)Y(z) = z^2 X(z). This gives

y[n] − (1/6)y[n − 1] − (1/6)y[n − 2] = x[n]

We now assume x[n] = 0 (zero input) and transform this equation, using the right-shift property, to obtain the zero-input response from

Yzi(z) = 2z^2/(z^2 − (1/6)z − 1/6) = 2z^2/[(z − 1/2)(z + 1/3)] = 1.2z/(z − 1/2) + 0.8z/(z + 1/3)

Thus, yzi[n] = 1.2(1/2)^n u[n] + 0.8(−1/3)^n u[n], and the total response is

y[n] = yzs[n] + yzi[n] = −1.2(1/2)^n u[n] + 1.2(−1/3)^n u[n] + 6u[n]

2. Natural and forced response: By inspection, the natural component of y[n] is yN[n] = −1.2(1/2)^n u[n] + 1.2(−1/3)^n u[n], and the forced component is yF[n] = 6u[n].
The steady-state response corresponds to terms of the form z/(z − 1) (step functions). For this example, YF(z) = 6z/(z − 1) and yF[n] = 6u[n].
Since the poles of (z − 1)Y(z) lie within the unit circle, yF[n] can also be found by the final value theorem:

yF[n] = lim(z→1) (z − 1)Y(z) = lim(z→1) z^2(6z − 2)/(z^2 − (1/6)z − 1/6) = 6
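The whole example can be cross-checked by running the difference equation y[n] = (1/6)y[n − 1] + (1/6)y[n − 2] + x[n] from the given initial conditions (a sketch in plain Python):

```python
# Recursion from y[-1] = 0, y[-2] = 12 with x[n] = 4 u[n], compared
# against the total response y[n] = -1.2(1/2)**n + 1.2(-1/3)**n + 6.

y1, y2 = 0.0, 12.0                   # y[n-1], y[n-2]
for n in range(15):
    y = y1 / 6 + y2 / 6 + 4          # x[n] = 4 for n >= 0
    closed = -1.2 * 0.5 ** n + 1.2 * (-1 / 3) ** n + 6
    assert abs(y - closed) < 1e-9
    y1, y2 = y, y1
print("total response verified; y[n] -> 6 (the forced response)")
```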
Recall that Hp(F) is periodic with period 1 and, for real h[n], shows conjugate symmetry over the principal period (−0.5, 0.5). This implies that Hp(−F) = Hp*(F): |Hp(F)| is even symmetric and the phase φ(F) is odd symmetric. Since Hp(F) and H(z) are related by z = e^(j2πF) = e^(jΩ), the frequency response Hp(F) at F = 0 (or Ω = 0) is equivalent to evaluating H(z) at z = 1, and the response Hp(F) at F = 0.5 (or Ω = π) is equivalent to evaluating H(z) at z = −1.

Hp(F)|F=0 = Hp(Ω)|Ω=0 = H(z)|z=1        Hp(F)|F=0.5 = Hp(Ω)|Ω=π = H(z)|z=−1        (17.34)
The frequency response is useful primarily for stable systems for which the natural response does, indeed,
decay with time.
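For a concrete check, take the stable system h[n] = 4(0.5)^n u[n] with H(z) = 4z/(z − 0.5) (the step-response example above). Summing the DTFT numerically shows that the response at F = 0 equals H(1) and at F = 0.5 equals H(−1), as in Equation (17.34). This is a sketch; the truncation at 200 terms is our own choice:

```python
import cmath

def H(z):
    return 4 * z / (z - 0.5)         # H(z) for h[n] = 4 (0.5)**n u[n]

def dtft(F, terms=200):
    # Hp(F) = sum h[n] exp(-j 2 pi F n), truncated (the tail is negligible)
    return sum(4 * 0.5 ** n * cmath.exp(-2j * cmath.pi * F * n)
               for n in range(terms))

assert abs(dtft(0.0) - H(1)) < 1e-9      # H(1) = 8
assert abs(dtft(0.5) - H(-1)) < 1e-9     # H(-1) = 8/3
print("Hp(F)|F=0 = H(1) and Hp(F)|F=0.5 = H(-1)")
```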
Figure 17.7 Relating the variables z, F, and Ω through the unit circle (z = e^(jΩ) on |z| = 1; F = 0.25 corresponds to Ω = π/2, i.e., z = j, and F = −0.25 to Ω = −π/2, i.e., z = −j)
Typical plots of the magnitude and phase are shown in Figure E17.17 over the principal period (−0.5, 0.5). Note the conjugate symmetry (even symmetry of |Hp(F)| and odd symmetry of φ(F)).
Figure E17.17 Magnitude and phase of Hp(F) for Example 17.17
(b) Let H(z) = (2z − 1)/(z^2 + 0.5z + 0.5). We find its steady-state response to x[n] = 6u[n].
Since the input is constant for n ≥ 0, the input frequency F = 0 corresponds to z = e^(j2πF) = 1.
We thus require H(z)|z=1 = 0.5. Then, yss[n] = (6)(0.5) = 3.
Of course, in either part, the total response would require more laborious methods, but the steady-state
component would turn out to be just what we found here.
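The shortcut is easy to verify by brute force: the difference equation implied by H(z) = (2z − 1)/(z^2 + 0.5z + 0.5) is y[n] + 0.5y[n − 1] + 0.5y[n − 2] = 2x[n − 1] − x[n − 2], and running it with x[n] = 6u[n] settles at 6·H(1) = 3 (a sketch):

```python
def H(z):
    return (2 * z - 1) / (z * z + 0.5 * z + 0.5)

y1 = y2 = 0.0                        # y[n-1], y[n-2]: zero initial conditions
x1 = x2 = 0.0                        # x[n-1], x[n-2]
for n in range(200):
    x = 6.0                          # x[n] = 6 for n >= 0
    y = -0.5 * y1 - 0.5 * y2 + 2 * x1 - x2
    x2, x1 = x1, x
    y2, y1 = y1, y

assert abs(H(1) - 0.5) < 1e-12       # H(z) at z = 1
print("steady-state response:", round(y1, 6))   # 3.0
```

The poles have magnitude sqrt(0.5) < 1, so the transient dies out and only the steady-state term 6·H(1) survives.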
17.9 Connections
The z-transform may be viewed as a generalization of the DTFT to complex frequencies. The DTFT describes
a signal as a sum of weighted harmonics, or complex exponentials. However, it cannot handle exponentially
growing signals, and it cannot handle initial conditions in system analysis. The z-transform overcomes these
shortcomings by using exponentially weighted harmonics in its definition.
Xp(Ω) = Σ(n=−∞ to ∞) x[n] e^(−jnΩ)        X(z) = Σ(n=−∞ to ∞) x[n] (re^(jΩ))^(−n) = Σ(n=−∞ to ∞) x[n] z^(−n)
A lower limit of zero defines the one-sided z-transform and applies to causal signals, permitting the analysis of systems with arbitrary initial conditions and arbitrary inputs.
Many properties of the z-transform and the DTFT show correspondence. However, the shifting property
for the one-sided z-transform must be modified to account for initial values. Due to the presence of the
convergence factor r^−n, the z-transform X(z) of x[n] no longer displays the conjugate symmetry present in its DTFT. Since z = re^(j2πF) = re^(jΩ) is complex, the z-transform may be plotted as a surface in the z-plane. The DTFT (with z = e^(j2πF) = e^(jΩ)) is just the cross section of this surface along the unit circle (|z| = 1).
(b) The signal x[n] = u[n] is not absolutely summable. Its DTFT is Xp(F) = 1/(1 − e^(−j2πF)) + 0.5δ(F).
We can find the z-transform of u[n] from the non-impulsive part of the DTFT, with e^(j2πF) → z, to give X(z) = 1/(1 − z^−1) = z/(z − 1). However, we cannot recover the DTFT from its z-transform in this case.
Chapter 17 Problems 625
CHAPTER 17 PROBLEMS
DRILL AND REINFORCEMENT
17.1 (The z-Transform of Sequences) Use the defining relation to find the z-transform and its region
of convergence for the following:
(a) x[n] = {1, 2, 3, 2, 1} (b) x[n] = {−1, −2, 0, 2, 1}
(c) x[n] = {1, 1, 1, 1} (d) x[n] = {1, −1, 1, −1}
17.2 (z-Transforms) Find the z-transforms and specify the ROC for the following:
(a) x[n] = (2)^(n+2) u[n] (b) x[n] = n(2)^(0.2n) u[n]
(c) x[n] = (2)^(n+2) u[n − 1] (d) x[n] = n(2)^(n+2) u[n − 1]
(e) x[n] = (n + 1)(2)^n u[n] (f) x[n] = (n − 1)(2)^(n+2) u[n]
17.3 (z-Transforms) Find the z-transforms and specify the ROC for the following:
(a) x[n] = cos(nπ/4 − π/4) u[n] (b) x[n] = (0.5)^n cos(nπ/4) u[n]
(c) x[n] = (0.5)^n cos(nπ/4 − π/4) u[n] (d) x[n] = (1/3)^n (u[n] − u[n − 4])
(e) x[n] = n(0.5)^n cos(nπ/4) u[n] (f) x[n] = [(0.5)^n − (−0.5)^n] n u[n]
17.4 (Two-Sided z-Transform) Find the z-transform X(z) and its ROC for the following:
(a) x[n] = u[−n − 1] (b) x[n] = (0.5)^n u[−n − 1]
(c) x[n] = (0.5)^|n| (d) x[n] = u[−n − 1] + (1/3)^n u[n]
(e) x[n] = (0.5)^n u[−n − 1] + (1/3)^n u[n] (f) x[n] = (0.5)^|n| + (−0.5)^|n|
17.5 (Properties) Let x[n] = nu[n]. Find X(z), using the following:
(a) The defining relation for the z-transform
(b) The times-n property
(c) The convolution result u[n] ★ u[n] = (n + 1)u[n + 1] and the shifting property
(d) The convolution result u[n] ★ u[n] = (n + 1)u[n] and superposition
17.6 (Properties) The z-transform of x[n] is X(z) = 4z/(z + 0.5)^2, |z| > 0.5. Find the z-transform of the following using properties and specify the region of convergence.
(a) y[n] = x[n − 2] (b) d[n] = (2)^n x[n] (c) f[n] = nx[n]
(d) g[n] = (2)^n nx[n] (e) h[n] = n^2 x[n] (f) p[n] = δ[n − 2] ★ x[n]
(g) q[n] = x[−n] (h) r[n] = x[n] − x[n − 1] (i) s[n] = x[n] ★ x[n]
17.7 (Properties) The z-transform of x[n] = (2)n u[n] is X(z). Use properties to find the time signal
corresponding to the following:
(a) Y(z) = X(2z) (b) F(z) = X(1/z) (c) G(z) = zX′(z)
(d) H(z) = zX(z)/(z − 1) (e) D(z) = zX(2z)/(z − 1) (f) P(z) = z^−1 X(z)
(g) Q(z) = z^−2 X(2z) (h) R(z) = X^2(z) (i) S(z) = X(−z)
626 Chapter 17 The z-Transform
17.8 (Inverse Transforms of Polynomials) Find the inverse z-transform x[n] for the following:
(a) X(z) = 2 − z^−1 + 3z^−3 (b) X(z) = (2 + z^−1)^3
(c) X(z) = (z − z^−1)^2 (d) X(z) = (z − z^−1)^2 (2 + z)
17.9 (Inverse Transforms by Long Division) Assume that x[n] represents a right-sided signal. De-
termine the ROC of the following z-transforms and compute the values of x[n] for n = 0, 1, 2, 3.
(a) X(z) = (z + 1)^2/(z^2 + 1) (b) X(z) = (z + 1)/(z^2 + 2)
(c) X(z) = 1/(z^2 − 0.25) (d) X(z) = (1 − z^−2)/(2 + z^−1)
17.10 (Inverse Transforms by Partial Fractions) Assume that x[n] represents a right-sided signal.
Determine the ROC of the following z-transforms and compute x[n], using partial fractions.
(a) X(z) = z/[(z + 1)(z + 2)] (b) X(z) = 16/[(z − 2)(z + 2)]
(c) X(z) = 3z^2/[(z^2 − 1.5z + 0.5)(z − 0.25)] (d) X(z) = 3z^3/[(z^2 − 1.5z + 0.5)(z − 0.25)]
(e) X(z) = 3z^4/[(z^2 − 1.5z + 0.5)(z − 0.25)] (f) X(z) = 4z/[(z + 1)^2 (z + 3)]
17.11 (Inverse Transforms by Partial Fractions) Assume that x[n] represents a right-sided signal.
Determine the ROC of the following z-transforms and compute x[n], using partial fractions.
(a) X(z) = z/[(z^2 + z + 0.25)(z + 1)] (b) X(z) = z/[(z^2 + z + 0.25)(z + 0.5)]
(c) X(z) = 1/[(z^2 + z + 0.25)(z + 1)] (d) X(z) = z/[(z^2 + z + 0.5)(z + 1)]
(e) X(z) = z^3/[(z^2 − z + 0.5)(z − 1)] (f) X(z) = z^2/[(z^2 + z + 0.5)(z + 1)]
(g) X(z) = 2z/(z^2 − 0.25)^2 (h) X(z) = 2/(z^2 − 0.25)^2
(i) X(z) = z/(z^2 + 0.25)^2 (j) X(z) = z^2/(z^2 + 0.25)^2
17.12 (Inverse Transforms by Long Division) Assume that x[n] represents a left-sided signal. Deter-
mine the ROC of the following z-transforms and compute the values of x[n] for n = 1, 2, 3.
(a) X(z) = (z^2 + 4z)/(z^2 − z + 2) (b) X(z) = z/(z + 1)^2
(c) X(z) = z^2/(z^3 + z − 1) (d) X(z) = (z^3 + 1)/(z^2 + 1)
17.13 (The ROC and Inverse Transforms) Let X(z) = (z^2 + 5z)/(z^2 − 2z − 3). Which of the following describe a valid ROC for X(z)? For each valid ROC, find x[n], using partial fractions.
(a) |z| < 1 (b) |z| > 3 (c) 1 < |z| < 3 (d) |z| < 1 and |z| > 3
17.14 (Convolution) Compute y[n], using the z-transform. Then verify your results by finding y[n], using
time-domain convolution.
(a) y[n] = {−1, 2, 0, 3} ★ {2, 0, 3} (b) y[n] = {1, 2, 0, 2, 1} ★ {1, 2, 0, 2, 1}
(c) y[n] = (2)^n u[n] ★ (2)^n u[n] (d) y[n] = (2)^n u[n] ★ (3)^n u[n]
17.15 (Initial Value and Final Value Theorems) Assume that X(z) corresponds to a right-sided signal
x[n] and find the initial and final values of x[n].
(a) X(z) = 2/(z + 6) (b) X(z) = 2z^2/(z − 1/6)
(c) X(z) = (2z − 1)/(z^2 + z − 1) (d) X(z) = (2z^2 + z + 0.25)/[(z − 1)(z + 0.25)]
(e) X(z) = (z + 0.25)/(z^2 + 0.25) (f) X(z) = (2z + 1)/(z^2 − 0.5z − 0.5)
17.16 (System Representation) Find the transfer function and difference equation for the following causal systems. Investigate their stability, using each system representation.
(a) h[n] = (2)^n u[n] (b) h[n] = [1 − (1/3)^n] u[n]
(c) h[n] = n(1/3)^n u[n] (d) h[n] = 0.5δ[n]
(e) h[n] = δ[n] − (1/3)^n u[n] (f) h[n] = [(2)^n − (3)^n] u[n]
17.17 (System Representation) Find the transfer function and impulse response of the following causal
systems. Investigate the stability of each system.
(a) y[n] + 3y[n − 1] + 2y[n − 2] = 2x[n] + 3x[n − 1]
(b) y[n] + 4y[n − 1] + 4y[n − 2] = 2x[n] + 3x[n − 1]
(c) y[n] = 0.2x[n]
(d) y[n] = x[n] + x[n − 1] + x[n − 2]
17.18 (System Representation) Set up the system difference equations and impulse response of the following causal systems. Investigate the stability of each system.
(a) H(z) = 3/(z + 2) (b) H(z) = (1 + 2z + z^2)/[(1 + z^2)(4 + z^2)]
(c) H(z) = 2/(1 + z) − 1/(2 + z) (d) H(z) = 2z/(1 + z) − 1/(2 + z)
17.19 (Zero-State Response) Find the zero-state response of the following systems, using the z-transform.
(a) y[n] − 0.5y[n − 1] = 2u[n] (b) y[n] − 0.4y[n − 1] = (0.5)^n
(c) y[n] − 0.4y[n − 1] = (0.4)^n (d) y[n] − 0.5y[n − 1] = cos(nπ/2)
17.20 (System Response) Consider the system y[n] − 0.5y[n − 1] = x[n]. Find its zero-state response to the following inputs, using the z-transform.
(a) x[n] = u[n] (b) x[n] = (0.5)^n u[n] (c) x[n] = cos(nπ/2) u[n]
(d) x[n] = (−1)^n u[n] (e) x[n] = (j)^n u[n] (f) x[n] = (j)^n u[n] + (−j)^n u[n]
17.21 (Zero-State Response) Find the zero-state response of the following systems, using the z-transform.
(a) y[n] − 1.1y[n − 1] + 0.3y[n − 2] = 2u[n] (b) y[n] − 0.9y[n − 1] + 0.2y[n − 2] = (0.5)^n
(c) y[n] + 0.7y[n − 1] + 0.1y[n − 2] = (0.5)^n (d) y[n] − 0.25y[n − 2] = cos(nπ/2)
17.22 (System Response) Let y[n] − 0.5y[n − 1] = x[n], with y[−1] = 1. Find the response y[n] of this system for the following inputs, using the z-transform.
(a) x[n] = 2u[n] (b) x[n] = (0.25)^n u[n] (c) x[n] = n(0.25)^n u[n]
(d) x[n] = (0.5)^n u[n] (e) x[n] = n(0.5)^n (f) x[n] = (0.5)^n cos(0.5nπ)
17.23 (System Response) Find the response y[n] of the following systems, using the z-transform.
(a) y[n] + 0.1y[n − 1] − 0.3y[n − 2] = 2u[n], y[−1] = 0, y[−2] = 0
(b) y[n] − 0.9y[n − 1] + 0.2y[n − 2] = (0.5)^n, y[−1] = 1, y[−2] = 4
(c) y[n] + 0.7y[n − 1] + 0.1y[n − 2] = (0.5)^n, y[−1] = 0, y[−2] = 3
(d) y[n] − 0.25y[n − 2] = (0.4)^n, y[−1] = 0, y[−2] = 3
(e) y[n] − 0.25y[n − 2] = (0.5)^n, y[−1] = 0, y[−2] = 0
17.24 (System Response) For each system, evaluate the response y[n], using the z-transform.
(a) y[n] − 0.4y[n − 1] = x[n], x[n] = (0.5)^n u[n], y[−1] = 0
(b) y[n] − 0.4y[n − 1] = 2x[n] + x[n − 1], x[n] = (0.5)^n u[n], y[−1] = 0
(c) y[n] − 0.4y[n − 1] = 2x[n] + x[n − 1], x[n] = (0.5)^n u[n], y[−1] = 5
(d) y[n] + 0.5y[n − 1] = x[n] − x[n − 1], x[n] = (0.5)^n u[n], y[−1] = 2
(e) y[n] + 0.5y[n − 1] = x[n] − x[n − 1], x[n] = (0.5)^n u[n], y[−1] = 0
17.25 (System Response) Find the response y[n] of the following systems, using the z-transform.
(a) y[n] − 0.4y[n − 1] = 2(0.5)^(n−1) u[n − 1], y[−1] = 2
(b) y[n] − 0.4y[n − 1] = (0.4)^n u[n] + 2(0.5)^(n−1) u[n − 1], y[−1] = 2.5
(c) y[n] − 0.4y[n − 1] = n(0.5)^n u[n] + 2(0.5)^(n−1) u[n − 1], y[−1] = 2.5
17.26 (System Response) The transfer function of a system is H(z) = 2z(z − 1)/(z^2 + 4z + 4). Find its response y[n] for the following inputs.
(a) x[n] = δ[n] (b) x[n] = 2δ[n] + δ[n + 1] (c) x[n] = u[n]
(d) x[n] = (2)^n u[n] (e) x[n] = nu[n] (f) x[n] = cos(nπ/2) u[n]
17.27 (System Analysis) Find the impulse response h[n] and the step response s[n] of the causal digital
filters described by
(a) H(z) = 4z/(z − 0.5) (b) y[n] + 0.5y[n − 1] = 6x[n]
17.28 (System Analysis) Find the zero-state response, zero-input response, and total response for each
of the following systems, using the z-transform.
(a) y[n] − (1/4)y[n − 1] = (1/3)^n u[n], y[−1] = 8
(b) y[n] + 1.5y[n − 1] + 0.5y[n − 2] = (0.5)^n u[n], y[−1] = 2, y[−2] = 4
(c) y[n] + y[n − 1] + 0.25y[n − 2] = 4(0.5)^n u[n], y[−1] = 6, y[−2] = 12
(d) y[n] − y[n − 1] + 0.5y[n − 2] = (0.5)^n u[n], y[−1] = 1, y[−2] = 2
17.29 (Steady-State Response) The transfer function of a system is H(z) = 2z(z − 1)/(z^2 + 0.25). Find its steady-state response for the following inputs.
(a) x[n] = 4u[n] (b) x[n] = 4 cos(nπ/2 + π/4) u[n]
17.30 (Response of Digital Filters) Consider the averaging filter y[n] = 0.5x[n] + x[n 1] + 0.5x[n 2].
(a) Find its impulse response h[n], its transfer function H(z), and its frequency response H(F ).
(b) Find its response y[n] to the input x[n] = {2, 4, 6, 8}.
(c) Find its response y[n] to the input x[n] = cos(nπ/3).
(d) Find its response y[n] to the input x[n] = cos(nπ/3) + sin(2nπ/3) + cos(nπ/2).
17.31 (Frequency Response) For each filter, sketch the magnitude spectrum and identify the filter type.
(a) h[n] = δ[n] − δ[n − 2] (b) y[n] − 0.25y[n − 1] = x[n] − x[n − 1]
(c) H(z) = z^2/(z − 0.5) (d) y[n] − y[n − 1] + 0.25y[n − 2] = x[n] + x[n − 1]
17.32 (Transfer Function) The input to a digital filter is x[n] = {1, 0.5}, and the response is described by y[n] = δ[n + 1] − 2δ[n] − δ[n − 1].
(a) What is the filter transfer function H(z)?
(b) Does H(z) describe an IIR filter or FIR filter?
(c) Is the filter stable? Is it causal?
17.33 (System Response) A system is described by H(z) = z/[(z − 0.5)(z + 2)]. Find the ROC and impulse
response h[n] of this system and state whether the system is stable or unstable if
(a) h[n] is assumed to be causal.
(b) h[n] is assumed to be anti-causal.
(c) h[n] is assumed to be two-sided.
17.34 (System Response) Consider the 2-point averager with y[n] = 0.5x[n] + 0.5x[n 1].
(a) Sketch its frequency response and identify the filter type.
(b) Find its response y[n] to the input x[n] = cos(nπ/2).
(c) Find its response y[n] to the input x[n] = δ[n].
(d) Find its response y[n] to the input x[n] = 1.
(e) Find its response y[n] to the input x[n] = 3 + 2δ[n] − 4 cos(nπ/2).
17.35 (System Response) Consider the 3-point averager with h[n] = (1/3){1, 1, 1}.
(a) Sketch its frequency response and identify the filter type.
(b) Find its response y[n] to the input x[n] = cos(nπ/3).
(c) Find its response y[n] to the input x[n] = δ[n].
(d) Find its response y[n] to the input x[n] = (−1)^n.
(e) Find its response y[n] to the input x[n] = 3 + 3δ[n] − 6 cos(nπ/3).
17.36 (System Response) Consider the tapered 3-point averager with h[n] = {0.5, 1, 0.5}.
(a) Sketch its frequency response and identify the filter type.
(b) Find its response y[n] to the input x[n] = cos(nπ/2).
(c) Find its response y[n] to the input x[n] = δ[n − 1].
(d) Find its response y[n] to the input x[n] = 1 + (−1)^n.
(e) Find its response y[n] to the input x[n] = 3 + 2δ[n] − 4 cos(nπ/2).
17.37 (System Response) Consider the 2-point differencer with h[n] = δ[n] − δ[n − 1].
(a) Sketch its frequency response and identify the filter type.
(b) Find its response y[n] to the input x[n] = cos(nπ/2).
(c) Find its response y[n] to the input x[n] = u[n].
(d) Find its response y[n] to the input x[n] = (−1)^n.
(e) Find its response y[n] to the input x[n] = 3 + 2u[n] − 4 cos(nπ/2).
17.38 (Frequency Response) Consider the filter described by y[n] + 0.5y[n − 1] = 0.5x[n] + x[n − 1].
(a) Sketch its frequency response and identify the filter type.
(b) Find its response y[n] to the input x[n] = cos(nπ/2).
(c) Find its response y[n] to the input x[n] = δ[n].
(d) Find its response y[n] to the input x[n] = (−1)^n.
(e) Find its response y[n] to the input x[n] = 4 + 2δ[n] − 4 cos(nπ/2).
17.40 (z-Transforms) Find the z-transforms and specify the ROC for the following:
(a) x[n] = (0.5)^(2n) u[n] (b) x[n] = n(0.5)^(2n) u[n] (c) x[n] = (0.5)^n u[−n]
(d) x[n] = (0.5)^(−n) u[−n] (e) x[n] = (0.5)^n u[−n − 1] (f) x[n] = (0.5)^(−n) u[−n − 1]
17.41 (The ROC) The transfer function of a system is H(z). What can you say about the ROC of H(z)
for the following cases?
(a) h[n] is a causal signal.
(b) The system is stable.
(c) The system is stable, and h[n] is a causal signal.
17.42 (Poles, Zeros, and the ROC) The transfer function of a system is H(z). What can you say about
the poles and zeros of H(z) for the following cases?
(a) The system is stable.
(b) The system is causal and stable.
(c) The system is an FIR filter with real coefficients.
(d) The system is a linear-phase FIR filter with real coefficients.
(e) The system is a causal, linear-phase FIR filter with real coefficients.
17.43 (z-Transforms and ROC) Consider the signal x[n] = α^n u[n] + β^n u[−n − 1]. Find its z-transform X(z). Will X(z) represent a valid transform for the following cases?
17.44 (z-Transforms) Find the z-transforms (if they exist) and specify their ROC.
(a) x[n] = (2)^n u[n] + 2^n u[−n] (b) x[n] = (0.25)^n u[n] + 3^n u[−n]
(c) x[n] = (0.5)^n u[n] + 2^n u[−n − 1] (d) x[n] = (2)^n u[n] + (0.5)^n u[−n − 1]
(e) x[n] = cos(0.5nπ) u[n] (f) x[n] = cos(0.5nπ + 0.25π) u[n]
(g) x[n] = e^(jnπ) u[n] (h) x[n] = e^(jnπ/2) u[n]
(i) x[n] = e^(jnπ/4) u[n] (j) x[n] = (j)^n u[n] + (−j)^n u[n]
17.45 (z-Transforms and ROC) The causal signal x[n] = α^n u[n] has the transform X(z), whose ROC is |z| > |α|. Find the ROC of the z-transform of the following:
(a) y[n] = x[n − 5]
(b) p[n] = x[n + 5]
(c) g[n] = x[−n]
(d) h[n] = (−1)^n x[n]
(e) r[n] = α^n x[n]
17.46 (z-Transforms and ROC) The anti-causal signal x[n] = α^n u[−n − 1] has the transform X(z), whose ROC is |z| < |α|. Find the ROC of the z-transform of the following:
(a) y[n] = x[n − 5]
(b) p[n] = x[n + 5]
(c) g[n] = x[−n]
(d) h[n] = (−1)^n x[n]
(e) r[n] = α^n x[n]
17.47 (z-Transforms) Find the z-transform X(z) of x[n] = α^|n| and specify the region of convergence of X(z). Consider the cases |α| < 1 and |α| > 1 separately.
17.48 (Properties) The z-transform of a signal x[n] is X(z) = 4z/(z + 0.5)^2, |z| > 0.5. Find the z-transform and its ROC for the following.
(a) y[n] = (−1)^n x[n] (b) f[n] = x[2n]
(c) g[n] = (j)^n x[n] (d) h[n] = x[n + 1] + x[n − 1]
17.49 (Properties) The z-transform of the signal x[n] = (2)n u[n] is X(z). Use properties to find the time
signal corresponding to the following.
17.51 (Properties) Find the z-transform of x[n] = rect(n/2N) = u[n + N] − u[n − N − 1]. Use this result to evaluate the z-transform of y[n] = tri(n/N).
632 Chapter 17 The z-Transform
17.52 (Pole-Zero Patterns and Symmetry) Plot the pole-zero patterns for each X(z). Which of these
describe symmetric time sequences?
(a) X(z) = (z^2 + z − 1)/z (b) X(z) = (z^4 + 2z^3 + 3z^2 + 2z + 1)/z^2
(c) X(z) = (z^4 − z^3 + z − 1)/z^2 (d) X(z) = (z^2 − 1)(z^2 + 1)/z^2
17.53 (Switched Periodic Signals) Find the z-transform of each switched periodic signal.
(a) x[n] = {2, 1, 3, 0, . . .}, N = 4 (b) x[n] = cos(nπ/2) u[n]
(c) x[n] = {0, 1, 1, 0, 0, . . .}, N = 5 (d) x[n] = cos(0.5nπ + 0.25π) u[n]
17.54 (Inverse Transforms) For each X(z), find the signal x[n] for each valid ROC.
(a) X(z) = z/[(z + 0.4)(z − 0.6)] (b) X(z) = 3z^2/(z^2 − 1.5z + 0.5)
17.55 (Poles and Zeros) Make a rough sketch of the pole and zero locations of the z-transform of each
of the signals shown in Figure P17.55.
[Figure P17.55: Signal 1 through Signal 5, plotted against n]
17.56 (Poles and Zeros) Find the transfer function corresponding to each pole-zero pattern shown in
Figure P17.56 and identify the filter type.
[Figure P17.56: four pole-zero patterns in the z-plane, with radii such as 0.7, 0.6, and 0.4 marked]
17.57 (Causality and Stability) How can you identify whether a system is a causal and/or stable system
from the following information?
(a) Its impulse response h[n]
(b) Its transfer function H(z) and its region of convergence
(c) Its system difference equation
(d) Its pole-zero plot
17.58 (Inverse Transforms) Consider the stable system described by y[n] + αy[n − 1] = x[n] + x[n − 1].
(a) Find its causal impulse response h[n] and specify the range of α and the ROC of H(z).
(b) Find its anti-causal impulse response h[n] and specify the range of α and the ROC of H(z).
17.59 (Inverse Transforms) Each X(z) below represents the z-transform of a switched periodic signal
xp [n]u[n]. Find one period x1 [n] of each signal and verify your results using inverse transformation
by long division.
(a) X(z) = 1/(1 − z^−3) (b) X(z) = 1/(1 + z^−3)
(c) X(z) = (1 + 2z^−1)/(1 − z^−4) (d) X(z) = (3 + 2z^−1)/(1 + z^−4)
17.60 (Inverse Transforms) Let H(z) = (z − 0.5)(2z + 4)(1 − z^−2).
(a) Find its inverse transform h[n].
(b) Does h[n] describe a symmetric sequence?
(c) Does h[n] describe a linear phase sequence?
17.61 (Inverse Transforms) Let H(z) = z/[(z − 0.5)(z + 2)].
(a) Find its impulse response h[n] if it is known that this represents a stable system. Is this system
causal?
(b) Find its impulse response h[n] if it is known that this represents a causal system. Is this system
stable?
17.62 (Inverse Transforms) Let H(z) = z/[(z − 0.5)(z + 2)]. Establish the ROC of H(z), find its impulse response h[n], and investigate its stability for the following:

(d) h[n] = [2/(N(N + 1))] Σ(k=0 to N) (N − k) δ[n − k], N = 3 (weighted moving average)
(e) y[n] − αy[n − 1] = (1 − α)x[n], α = (N − 1)/(N + 1), N = 3 (exponential average)
17.69 (System Analysis) The impulse response of a system is h[n] = δ[n] − αδ[n − 1]. Determine α and make a sketch of the pole-zero plot for this system to act as
17.70 (System Analysis) The impulse response of a system is h[n] = α^n u[n]. Determine α and make a sketch of the pole-zero plot for this system to act as
(a) A stable lowpass filter. (b) A stable highpass filter. (c) An allpass filter.
17.71 (System Analysis) Consider a system whose impulse response is h[n] = (0.5)^n u[n]. Find its response to the following inputs.
(a) x[n] = δ[n] (b) x[n] = u[n]
(c) x[n] = (0.25)^n u[n] (d) x[n] = (0.5)^n δ[n]
(e) x[n] = cos(nπ) (f) x[n] = cos(nπ) u[n]
(g) x[n] = cos(0.5nπ) (h) x[n] = cos(0.5nπ) u[n]
17.72 (System Analysis) Consider a system whose impulse response is h[n] = n(0.5)^n u[n]. What input x[n] will result in each of the following outputs?
(a) y[n] = cos(0.5nπ)
(b) y[n] = 2 + cos(0.5nπ)
(c) y[n] = cos^2(0.25nπ)
17.73 (System Response) Consider the system y[n] − 0.25y[n − 2] = x[n]. Find its response y[n], using z-transforms, for the following inputs.
17.74 (Poles and Zeros) It is known that the transfer function H(z) of a filter has two poles at z = 0, two zeros at z = −1, and a dc gain of 8.
(a) Find the filter transfer function H(z) and impulse response h[n].
(b) Is this an IIR or FIR filter?
17.75 (System Response) The signal x[n] = (0.5)n u[n] is applied to a digital filter, and the response is
y[n]. Find the filter transfer function and state whether it is an IIR or FIR filter and whether it is a
linear-phase filter if the system output y[n] is the following:
(a) y[n] = δ[n] + 0.5δ[n − 1]
(b) y[n] = δ[n] − 2δ[n − 1]
(c) y[n] = (−0.5)^n u[n]
17.76 (Frequency Response) Sketch the pole-zero plot and frequency response of the following systems
and describe the function of each system.
(a) y[n] = 0.5x[n] + 0.5x[n − 1] (b) y[n] = 0.5x[n] − 0.5x[n − 1]
(c) h[n] = (1/3){1, 1, 1} (d) h[n] = (1/3){1, −1, 1}
(e) h[n] = {0.5, 1, 0.5} (f) h[n] = {0.5, −1, 0.5}
(g) H(z) = (z − 0.5)/(z + 0.5) (h) H(z) = z^2/(z + 0.5)
(i) H(z) = (z + 2)/(z + 0.5) (j) H(z) = (z^2 + 2z + 3)/(3z^2 + 2z + 1)
17.77 (Interconnected Systems) Consider two systems whose impulse responses are h1[n] = δ[n] + αδ[n − 1] and h2[n] = (0.5)^n u[n]. Find the overall system transfer function and the response y[n] of the overall system to the input x[n] = (0.5)^n u[n], and to the input x[n] = cos(nπ), if
(a) The two systems are connected in parallel with α = 0.5.
(b) The two systems are connected in parallel with α = −0.5.
(c) The two systems are connected in cascade with α = 0.5.
(d) The two systems are connected in cascade with α = −0.5.
17.78 (Interconnected Systems) The transfer function H(z) of the cascade of two systems H1(z) and H2(z) is known to be H(z) = (z^2 + 0.25)/(z^2 − 0.25). It is also known that the unit step response of the first system is [2 − (0.5)^n] u[n]. Determine H1(z) and H2(z).
17.79 (Frequency Response) Consider the filter realization of Figure P17.79. Find the transfer function H(z) of the overall system if the impulse response of the filter is given by
(a) h[n] = δ[n] − δ[n − 1]. (b) h[n] = 0.5δ[n] + 0.5δ[n − 1].
Find the difference equation relating y[n] and x[n] from H(z) and investigate the stability of the overall system.
[Figure P17.79: x[n], a summing junction, the filter, a z^−1 delay, and the output y[n]]
Figure P17.79 Filter realization for Problem 17.79
17.82 (System Response in Symbolic Form) The ADSP routine sysresp2 returns the system response
in symbolic form. See Chapter 21 for examples of its usage. Obtain the response of the following
filters and plot the response for 0 ≤ n ≤ 30.
(a) The step response of y[n] − 0.5y[n − 1] = x[n]
(b) The impulse response of y[n] − 0.5y[n − 1] = x[n]
(c) The zero-state response of y[n] − 0.5y[n − 1] = (0.5)^n u[n]
(d) The complete response of y[n] − 0.5y[n − 1] = (0.5)^n u[n], y[−1] = 4
(e) The complete response of y[n] + y[n − 1] + 0.5y[n − 2] = (0.5)^n u[n], y[−1] = 4, y[−2] = 3
17.83 (Steady-State Response in Symbolic Form) The ADSP routine ssresp yields a symbolic
expression for the steady-state response to sinusoidal inputs (see Chapter 21 for examples of its
usage). Find the steady-state response to the input x[n] = 2 cos(0.2nπ − π/3) for each of the following systems and plot the results over 0 ≤ n ≤ 50.
(a) y[n] − 0.5y[n − 1] = x[n]
(b) y[n] + y[n − 1] + 0.5y[n − 2] = 3x[n]
Chapter 18
APPLICATIONS OF
THE z -TRANSFORM
We choose M = N with no loss of generality, since some of the coefficients Bk may always be set to zero. The transfer function (with M = N) then becomes

H(z) = (B0 + B1 z^−1 + ··· + BN z^−N)/(1 + A1 z^−1 + A2 z^−2 + ··· + AN z^−N) = HN(z)HR(z)        (18.4)

The transfer function H(z) = HN(z)HR(z) is the product of the transfer functions of a recursive and a nonrecursive system. Its realization is thus a cascade of the realizations for the recursive and nonrecursive portions, as shown in Figure 18.2(a). This form describes a direct form I realization. It uses 2N delay elements to realize an Nth-order difference equation and is therefore not very efficient.
Since LTI systems can be cascaded in any order, we can switch the recursive and nonrecursive parts to get
the structure of Figure 18.2(b). This structure suggests that each pair of feed-forward and feedback signals
can be obtained from a single delay element instead of two. This allows us to use only N delay elements
638 Chapter 18 Applications of the z-Transform
Figure 18.1 Realization of a nonrecursive (left) and recursive (right) digital filter
Figure 18.2 Direct form I (left) and canonical, or direct form II (right), realization of a digital filter
and results in the direct form II, or canonic, realization. The term canonic implies a realization with the
minimum number of delay elements.
If M and N are not equal, some of the coefficients (Ak or Bk) will equal zero and will result in missing signal paths corresponding to these coefficients in the filter realization.
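A minimal direct form II filter in code makes the single-delay-line idea concrete. This is our own sketch (not the book's ADSP routines): the node value w[0] collects the input minus the feedback taps, and the output is a weighted sum taken from the same delay line:

```python
def direct_form_2(b, a, x):
    """Filter x with H(z) = (b0 + b1 z^-1 + ...)/(1 + a1 z^-1 + ...).
    a[0] is assumed to be 1."""
    N = len(a) - 1
    w = [0.0] * (N + 1)              # the single delay line; w[0] is current
    y = []
    for xn in x:
        # feedback path: w[n] = x[n] - a1 w[n-1] - ... - aN w[n-N]
        w[0] = xn - sum(a[k] * w[k] for k in range(1, N + 1))
        # feed-forward path: y[n] = b0 w[n] + ... taken off the same line
        y.append(sum(b[k] * w[k] for k in range(min(len(b), N + 1))))
        for k in range(N, 0, -1):    # shift the delay line
            w[k] = w[k - 1]
    return y

# check against h[n] = (0.5)**n u[n] for H(z) = 1/(1 - 0.5 z^-1)
h = direct_form_2([1.0], [1.0, -0.5], [1.0] + [0.0] * 7)
print(h[:4])   # [1.0, 0.5, 0.25, 0.125]
```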
(To transpose a realization: turn it around, convert nodes to summers and summers to nodes, and reverse the signal flow.)
Figure 18.3 Direct form II (left) and transposed (right) realization of a digital filter
H(z) = 3z/(2z^3 − z − 4) = 1.5z/(z^3 − 0.5z − 2)

This is a third-order system. To sketch its direct form II and transposed realizations, we compare H(z) with the generic third-order transfer function

H(z) = (B0 z^3 + B1 z^2 + B2 z + B3)/(z^3 + A1 z^2 + A2 z + A3)

The nonzero constants are B2 = 1.5, A2 = −0.5, and A3 = −2. Using these, we obtain the direct form II and transposed realizations shown in Figure E18.1.
[Figure E18.1: direct form II (left) and transposed (right) realizations, using three z^−1 delays and the coefficients 1.5, −0.5, and −2]
The overall transfer function of a cascaded system is the product of the individual transfer functions.
For n systems in cascade, the overall impulse response hC [n] is the convolution of the individual impulse
responses h1 [n], h2 [n], . . . . Since the convolution operation transforms to a product, we have
HC(z) = H1(z)H2(z) ··· Hn(z) (for n systems in cascade)        (18.5)
We can also factor a given transfer function H(z) into the product of first-order and second-order transfer
functions and realize H(z) in cascaded form.
For systems in parallel, the overall transfer function is the sum of the individual transfer functions. For
n systems in parallel,
HP(z) = H1(z) + H2(z) + ··· + Hn(z) (for n systems in parallel)        (18.6)
We can also use partial fractions to express a given transfer function H(z) as the sum of first-order and/or
second-order subsystems, and realize H(z) as a parallel combination.
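A numeric spot-check of such a decomposition (our own sketch): for H(z) = z^2/[(z − 1)(z − 0.5)], partial fractions give H(z) = 2z/(z − 1) − z/(z − 0.5), and the two sides agree at arbitrary test points away from the poles:

```python
def H(z):  return z * z / ((z - 1) * (z - 0.5))   # original transfer function
def H1(z): return 2 * z / (z - 1)                 # first-order subsystem
def H2(z): return z / (z - 0.5)                   # first-order subsystem

for z in (2.0, -1.5, 3 + 1j):        # arbitrary test points off the poles
    assert abs(H(z) - (H1(z) - H2(z))) < 1e-12
print("H(z) = H1(z) - H2(z) verified at all test points")
```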
(b) Find a cascaded realization for H(z) = z^2 (6z − 2)/[(z − 1)(z^2 − (1/6)z − 1/6)].
This system may be realized as a cascade H(z) = H1(z)H2(z), as shown in Figure E18.2B, where

H1(z) = z^2/(z^2 − (1/6)z − 1/6)        H2(z) = (6z − 2)/(z − 1)
[Figure E18.2B: cascade realization of H1(z) and H2(z)]
(c) Find a parallel realization for H(z) = z^2/[(z − 1)(z − 0.5)].
Using partial fractions, we find H(z) = 2z/(z − 1) − z/(z − 0.5) = H1(z) − H2(z).
z 1 z 0.5
The two subsystems H1 (z) and H2 (z) may now be used to obtain the parallel realization, as shown in
Figure E18.2C.
[Figure E18.2C: parallel realization of H1(z) and H2(z)]
(d) Is the cascade or parallel combination of two linear-phase filters also linear phase? Explain.
Linear-phase filters are described by symmetric impulse response sequences.
The impulse response of their cascade is also symmetric because it is the convolution of two symmetric
sequences. So, the cascade of two linear-phase filters is always linear phase.
The impulse response of their parallel combination is the sum of their impulse responses. Since the
sum of symmetric sequences is not always symmetric (unless both are odd symmetric or both are even
symmetric), the parallel combination of two linear-phase filters is not always linear phase.
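A small numeric illustration of both claims (our own sketch): convolving two even symmetric sequences yields a symmetric (linear-phase) result, while summing an even symmetric and an odd symmetric sequence does not:

```python
def convolve(a, b):
    """Direct convolution of two finite sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

h1 = [1, 2, 1]                       # even symmetric
h2 = [0.5, 1, 0.5]                   # even symmetric
h3 = [1, 0, -1]                      # odd symmetric

cascade = convolve(h1, h2)           # [0.5, 2.0, 3.0, 2.0, 0.5]
assert cascade == cascade[::-1]      # still symmetric: linear phase preserved

parallel = [a + b for a, b in zip(h1, h3)]    # [2, 2, 0]
assert parallel != parallel[::-1]    # symmetry (and linear phase) lost
print("cascade:", cascade, " parallel:", parallel)
```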
This cascaded system is called an identity system, and its impulse response equals hC[n] = δ[n]. The inverse system can be used to undo the effect of the original system. We can also describe hC[n] by the convolution h[n] ★ hI[n]. It is far easier to find the inverse of a system in the transformed domain.
The difference equation of the inverse system is y[n] = x[n] − x[n − 1].
Figure E18.4 Response of the systems for Example 18.4
The phase response confirms that H1 (z) is a minimum-phase system with no zeros outside the unit circle,
H2 (z) is a mixed-phase system with one zero outside the unit circle, and H3 (z) is a maximum-phase system
with all its zeros outside the unit circle.
For every root rk , there is a root at 1/rk . We thus select only the roots lying inside the unit circle to extract
the minimum-phase transfer function H(z). The following example illustrates the process.
The zeros of a linear-phase sequence display conjugate reciprocal symmetry: a complex zero r is paired with r*, 1/r, and 1/r*, while a real zero r is paired with 1/r.
Real zeros at z = 1 or z = -1 need not be paired (they form their own reciprocals), but all other real
zeros must be paired with their reciprocals. Complex zeros on the unit circle must be paired with their
conjugates (which also form their reciprocals), whereas complex zeros anywhere else must occur in conjugate
reciprocal quadruples. For a linear-phase sequence to be odd symmetric, there must be an odd number of
zeros at z = 1.
(b) Let x[n] = δ[n + 2] + 4.25δ[n] + δ[n - 2]. Sketch the pole-zero plot of X(z).
We find X(z) = z^2 + 4.25 + z^{-2}. Since X(z) = X(1/z), x[n] is even symmetric about n = 0.
In factored form, X(z) = (z^4 + 4.25z^2 + 1)/z^2 = (z + j0.5)(z - j0.5)(z + j2)(z - j2)/z^2.
Its four zeros are at ±j0.5 and ±j2. The pole-zero plot is shown in Figure E18.6 (left panel).
Note the conjugate reciprocal symmetry of the zeros.
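A quick numerical check of this symmetry (our own sketch): solving z^4 + 4.25z^2 + 1 = 0 as a quadratic in w = z^2 recovers the four zeros, and for every zero r the point 1/r* is also a zero.

```python
import cmath

# Zeros of X(z) = (z^4 + 4.25 z^2 + 1)/z^2: solve w^2 + 4.25 w + 1 = 0, w = z^2.
d = cmath.sqrt(4.25**2 - 4)
w1, w2 = (-4.25 + d) / 2, (-4.25 - d) / 2
zeros = [cmath.sqrt(w1), -cmath.sqrt(w1), cmath.sqrt(w2), -cmath.sqrt(w2)]

# Conjugate reciprocal symmetry: for every zero r, 1/r* is also a zero.
for r in zeros:
    recip = 1 / r.conjugate()
    assert any(abs(recip - s) < 1e-9 for s in zeros)

print(sorted(round(z.imag, 3) for z in zeros))  # -> [-2.0, -0.5, 0.5, 2.0]
```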
Figure E18.6 Pole-zero plots of the sequences for Example 18.6(b and c)
(c) Sketch the pole-zero plot for X(z) = z^2 + 2.5z - 2.5z^{-1} - z^{-2}. Is x[n] a linear-phase sequence?
Since X(z) = -X(1/z), x[n] is odd symmetric. In fact, x[n] = {1, 2.5, 0, -2.5, -1}.
In factored form, X(z) = (z - 1)(z + 1)(z + 0.5)(z + 2)/z^2.
We see a pair of reciprocal zeros at -0.5 and -2. The zeros at z = 1 and z = -1 are their own
reciprocals and are not paired. The pole-zero plot is shown in Figure E18.6 (right panel).
H(z) = 8(z - 1) / [(z - 0.6 - j0.6)(z - 0.6 + j0.6)]    (18.11)
H(Ω0) = 8(e^{jΩ0} - 1) / [(e^{jΩ0} - 0.6 - j0.6)(e^{jΩ0} - 0.6 + j0.6)] = 8N1/(D1 D2)    (18.12)
Analytically, the magnitude |H(Ω0)| is the ratio of the magnitudes of each term. Graphically, the complex
terms may be viewed in the z-plane as vectors N1, D1, and D2, directed from each pole or zero location
to the location Ω = Ω0 on the unit circle corresponding to z = e^{jΩ0}. The gain factor times the ratio of the
vector magnitudes (the product of distances from the zeros divided by the product of distances from the
poles) yields |H(Ω0)|, the magnitude at Ω = Ω0. The difference in the angles yields the phase at Ω = Ω0.
The vectors and the corresponding magnitude spectrum are sketched for several values of Ω in Figure 18.6.
Figure 18.6 Graphical evaluation of the frequency response using z-plane vectors
A graphical evaluation can yield exact results but is much more suited to obtaining a qualitative estimate
of the magnitude response. We observe how the vector ratio N1/(D1 D2) influences the magnitude as Ω is
increased from Ω = 0 to Ω = π. For our example, at Ω = 0, the vector N1 is zero, and the magnitude is zero.
For 0 < Ω < π/4 (point A), both |N1| and |D2| increase, but |D1| decreases. Overall, the response is small
but increasing. At Ω ≈ π/4, the vector D1 attains its smallest length, and we obtain a peak in the response.
For π/4 < Ω < π (points B and C), |N1| and |D1| are of nearly equal length, while |D2| is increasing. The
magnitude is thus decreasing. The form of this response is typical of a bandpass filter.
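The same evaluation is easy to carry out in code. This sketch (our own) computes |H(Ω)| directly from the vector lengths for the example system H(z) = 8(z - 1)/[(z - 0.6 - j0.6)(z - 0.6 + j0.6)]:

```python
import cmath, math

# |H| at z = e^{jW} as gain x (product of zero distances)/(product of pole distances).
zeros = [1.0]
poles = [0.6 + 0.6j, 0.6 - 0.6j]
gain = 8.0

def mag(W):
    z = cmath.exp(1j * W)
    num = math.prod(abs(z - q) for q in zeros)
    den = math.prod(abs(z - p) for p in poles)
    return gain * num / den

assert mag(0.0) == 0.0                       # zero on the unit circle at W = 0
assert mag(math.pi / 4) > mag(math.pi / 2)   # response falls past the pole angle
peak = max(mag(k * math.pi / 500) for k in range(1, 500))
assert peak > 20                             # sharp bandpass peak near W = pi/4
```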
Figure 18.7 A plot of the magnitude of H(z) = 8(z - 1)/(z^2 - 1.2z + 0.72) in the z-plane
1. For filter 1, the vector length of the numerator is always unity, but the vector length of the
denominator keeps increasing as we increase frequency (the points A and B, for example). The
magnitude (ratio of the numerator and denominator lengths) thus decreases with frequency and
corresponds to a lowpass filter.
2. For filter 2, the vector length of the numerator is always unity, but the vector length of the
denominator keeps decreasing as we increase the frequency (the points A and B, for example).
The magnitude increases with frequency and corresponds to a highpass filter.
3. For filter 3, the magnitude is zero at Ω = 0. As we increase the frequency (the points A and B,
for example), the ratio of the vector lengths of the numerator and denominator increases, and
this also corresponds to a highpass filter.
4. For filter 4, the magnitude is zero at the zero location Ω = Ω0. At any other frequency (the point
B, for example), the vector lengths of the numerator and denominator are almost equal and
result in almost constant gain. This describes a bandstop filter.
(b) Design a bandpass filter with center frequency f0 = 100 Hz, passband Δf = 10 Hz, stopband edges at 50 Hz
and 150 Hz, and sampling frequency S = 400 Hz.
We find Ω0 = 2πf0/S = π/2, ΔΩ = 2πΔf/S = π/20, Ωs = [π/4, 3π/4], and R ≈ 1 - 0.5ΔΩ = 0.9215.
Passband: Place poles at p1,2 = Re^{±jΩ0} = 0.9215e^{±jπ/2} = ±j0.9215.
Stopband: Place conjugate zeros at z1,2 = e^{±jπ/4} and z3,4 = e^{±j3π/4}.
We then obtain the transfer function as
H(z) = [(z - e^{jπ/4})(z - e^{-jπ/4})(z - e^{j3π/4})(z - e^{-j3π/4})] / [(z - j0.9215)(z + j0.9215)] = (z^4 + 1)/(z^2 + 0.8491)
Note that this filter is noncausal. To obtain a causal filter H1(z), we could, for example, use double poles
at each pole location to get
H1(z) = (z^4 + 1)/(z^2 + 0.8491)^2 = (z^4 + 1)/(z^4 + 1.6982z^2 + 0.7210)
The pole-zero pattern and gain of the modified filter H1 (z) are shown in Figure E18.7B.
Figure E18.7B Frequency response of the bandpass filter for Example 18.7(b)
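The printed design is easy to sanity-check numerically. This sketch (our own) evaluates the causal filter H1(z) = (z^4 + 1)/(z^2 + R^2)^2 on the unit circle and confirms the transmission zeros at the stopband edges and a strong peak at the center frequency:

```python
import cmath, math

# Bandpass filter of Example 18.7(b): double poles at z = +/- jR, R = 0.9215.
R = 0.9215

def H1(W):
    z = cmath.exp(1j * W)
    return (z**4 + 1) / (z**2 + R**2)**2

assert abs(H1(math.pi / 4)) < 1e-12        # transmission zero at 50 Hz
assert abs(H1(3 * math.pi / 4)) < 1e-12    # transmission zero at 150 Hz
assert abs(H1(math.pi / 2)) > 10 * abs(H1(math.pi / 8))   # strong peak at 100 Hz
```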
(c) Design a notch filter with notch frequency f0 = 60 Hz, stopband Δf = 5 Hz, and sampling frequency S = 300 Hz.
We compute Ω0 = 2πf0/S = 2π/5, ΔΩ = 2πΔf/S = π/30, and R ≈ 1 - 0.5ΔΩ = 0.9476.
Stopband: We place zeros at the notch frequency to get z1,2 = e^{±jΩ0} = e^{±j2π/5}.
Passband: We place poles along the orientation of the zeros at p1,2 = Re^{±jΩ0} = 0.9476e^{±j2π/5}.
We then obtain H(z) as
H(z) = [(z - e^{j2π/5})(z - e^{-j2π/5})] / [(z - 0.9476e^{j2π/5})(z - 0.9476e^{-j2π/5})] = (z^2 - 2z cos Ω0 + 1)/(z^2 - 2Rz cos Ω0 + R^2)
The pole-zero pattern and magnitude spectrum of this filter are shown in Figure E18.7C.
Figure E18.7C Frequency response of the bandstop filter for Example 18.7(c)
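A numerical check of the notch design (our own sketch): the zeros on the unit circle give a perfect null at 60 Hz, while the nearby poles keep the gain close to unity away from the notch.

```python
import cmath, math

# Notch filter of Example 18.7(c): zeros at W0 = 2*pi*60/300 on the unit circle,
# poles at radius R = 0.9476 along the same angle.
S, f0, R = 300.0, 60.0, 0.9476
W0 = 2 * math.pi * f0 / S

def H(W):
    z = cmath.exp(1j * W)
    return ((z**2 - 2 * math.cos(W0) * z + 1) /
            (z**2 - 2 * R * math.cos(W0) * z + R**2))

assert abs(H(W0)) < 1e-12          # perfect null at 60 Hz
assert abs(abs(H(0)) - 1) < 0.12   # gain stays near unity at dc
```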
18.5.1 Equalizers
Audio equalizers are typically used to tailor the sound to suit the taste of the listener. The most common
form of equalization is the tone controls (for bass and treble, for example) found on most low-cost audio
systems. Tone controls employ shelving filters that boost or cut the response over a selected frequency band
while leaving the rest of the spectrum unaffected (with unity gain). As a result, the filters for the various
controls are typically connected in cascade. Graphic equalizers offer the next level in sophistication and
employ a bank of (typically, second-order) bandpass filters covering a fixed number of frequency bands, and
with a fixed bandwidth and center frequency for each range. Only the gain of each filter can be adjusted by
the user. Each filter isolates a selected frequency range and provides almost zero gain elsewhere. As a result,
the individual sections are connected in parallel. Parametric equalizers offer the ultimate in versatility
and comprise filters that allow the user to vary not only the gain but also the filter parameters (such as
the cutoff frequency, center frequency, and bandwidth). Each filter in a parametric equalizer affects only a
selected portion of the spectrum (providing unity gain elsewhere), and as a result, the individual sections
are connected in cascade.
The transfer function of a first-order highpass filter with the same cutoff frequency ΩC is simply HHP(z) =
1 - HLP(z), which gives
HHP(z) = ((1 + α)/2) (z - 1)/(z - α)    (18.16)
A highpass shelving filter consists of a first-order highpass filter with adjustable gain G in parallel with a
lowpass filter. With HLP(z) + HHP(z) = 1, its transfer function may be written as
HSH(z) = HLP(z) + G HHP(z) = 1 + (G - 1)HHP(z)
The realizations of these lowpass and highpass shelving filters are shown in Figure 18.8.
Figure 18.8 Realization of the lowpass and highpass shelving filters
The response of a lowpass and highpass shelving filter is shown for various values of gain (and a fixed
α) in Figure 18.9. For G > 1, the lowpass shelving filter provides a low-frequency boost, and for 0 < G < 1
it provides a low-frequency cut. For G = 1, we have HSL = 1, and the gain is unity for all frequencies.
Similarly, for G > 1, the highpass shelving filter provides a high-frequency boost, and for 0 < G < 1 it
provides a high-frequency cut. In either case, the parameter α allows us to adjust the cutoff frequency.
Practical realizations of shelving filters and parametric equalizers typically employ allpass structures.
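The shelving behavior can be sketched in code. We assume the standard first-order complementary pair HLP(z) = ((1 - α)/2)(z + 1)/(z - α) and HHP(z) from Eq. (18.16), so that HLP + HHP = 1 (the HLP form is our assumption, chosen to be consistent with Eq. (18.16)):

```python
import cmath

# Lowpass shelving sketch: HSL(z) = G*HLP(z) + HHP(z) boosts low frequencies
# by G while leaving F = 0.5 at unity gain. Assumed forms:
#   HLP(z) = ((1-a)/2)(z+1)/(z-a),  HHP(z) = ((1+a)/2)(z-1)/(z-a).
a, G = 0.85, 4.0

def HLP(z): return 0.5 * (1 - a) * (z + 1) / (z - a)
def HHP(z): return 0.5 * (1 + a) * (z - 1) / (z - a)
def HSL(z): return G * HLP(z) + HHP(z)

z = cmath.exp(1j * 0.3)
assert abs(HLP(z) + HHP(z) - 1) < 1e-12    # complementary pair
assert abs(HSL(1) - G) < 1e-12             # dc gain equals the boost G
assert abs(abs(HSL(-1)) - 1) < 1e-12       # unity gain at F = 0.5
```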
Figure 18.9 Spectra of lowpass and highpass shelving filters for α = 0.85 and gains from K = 0.25 to K = 4
h[n] = cos(nΩ)u[n]    H(z) = (z^2 - z cos Ω)/(z^2 - 2z cos Ω + 1)    (18.19)
Similarly, a system whose impulse response is a pure sine is given by
h[n] = sin(nΩ)u[n]    H(z) = (z sin Ω)/(z^2 - 2z cos Ω + 1)    (18.20)
The realizations of these two systems, called digital oscillators, are shown in Figure 18.10.
Figure 18.10 Realization of the digital oscillators
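The cosine oscillator of Eq. (18.19) corresponds to the difference equation y[n] = 2 cos(Ω) y[n-1] - y[n-2] + x[n] - cos(Ω) x[n-1]. Driving this recursion with an impulse generates cos(nΩ)u[n] exactly, with no table lookup (a sketch of the idea):

```python
import math

# Impulse response of H(z) = (z^2 - z cos W)/(z^2 - 2z cos W + 1) is cos(nW)u[n].
W = 2 * math.pi * 0.1
c = math.cos(W)
x = [1.0] + [0.0] * 49          # impulse input
y = []
for n in range(50):
    yn = x[n] - c * (x[n - 1] if n >= 1 else 0.0)
    yn += 2 * c * (y[n - 1] if n >= 1 else 0.0) - (y[n - 2] if n >= 2 else 0.0)
    y.append(yn)

assert all(abs(y[n] - math.cos(n * W)) < 1e-9 for n in range(50))
```

In practice the recursion is restarted periodically, since rounding errors slowly accumulate in a marginally stable loop.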
would generate a combination of 770-Hz and 1336-Hz tones. There are four low frequencies and four high
frequencies. The low- and high-frequency groups have been chosen to ensure that the paired combinations
do not interfere with speech. The highest frequency (1633 Hz) is not currently in commercial use. The
tones can be generated by using a parallel combination of two programmable digital oscillators, as shown in
Figure 18.12.
The sampling rate typically used is S = 8 kHz. The digital frequency corresponding to a typical high-
frequency tone fH is ΩH = 2πfH/S. The code for each button selects the appropriate filter coefficients.
The keys pressed are identified at the receiver by first separating the low- and high-frequency groups using
a lowpass filter (with a cutoff frequency of around 1000 Hz) and a highpass filter (with a cutoff frequency
of around 1200 Hz) in parallel, and then isolating each tone, using a parallel bank of narrowband bandpass
filters tuned to the (eight) individual frequencies. The (eight) outputs are fed to a level detector and decision
logic that establishes the presence or absence of a tone. The keys may also be identified by computing the
FFT of the tone signal, followed by threshold detection.
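A minimal sketch of both ends of this scheme (our own code): a key's signal is the sum of two sampled sinusoids, and correlating against candidate frequencies identifies the pair. We pair the 770 Hz row tone with a 1336 Hz column tone; the column frequencies 1209 and 1336 Hz are the standard DTMF values, stated here as an assumption since the text lists only 1336 and 1633 Hz explicitly.

```python
import math

S = 8000  # sampling rate in Hz

def dtmf_tone(f_low, f_high, n_samples):
    # Sum of the row and column sinusoids, as generated by two oscillators.
    return [math.sin(2 * math.pi * f_low * n / S) +
            math.sin(2 * math.pi * f_high * n / S) for n in range(n_samples)]

def power_at(x, f):
    # Correlation-based tone detector (the spirit of a Goertzel/FFT bin).
    re = sum(v * math.cos(2 * math.pi * f * n / S) for n, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * f * n / S) for n, v in enumerate(x))
    return re * re + im * im

tone = dtmf_tone(770, 1336, 512)
assert power_at(tone, 770) > 10 * power_at(tone, 852)     # row tone detected
assert power_at(tone, 1336) > 10 * power_at(tone, 1209)   # column tone detected
```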
Figure 18.12 Generation of DTMF tones using a parallel pair of programmable digital oscillators. The keypad rows (1-2-3-A, 4-5-6-B, 7-8-9-C, *-0-#-D) select the low-frequency group (697, 770, 852, and 941 Hz).
Pole-zero plot, realization, and magnitude spectrum of a second-order digital resonator with poles at z = Re^{±jΩ0}.
The peak gain can be normalized to unity by dividing the transfer function by A. The impulse response of
this filter can be found by partial fraction expansion of H(z)/z, which gives
H(z)/z = z/[(z - Re^{jΩ0})(z - Re^{-jΩ0})] = K/(z - Re^{jΩ0}) + K*/(z - Re^{-jΩ0})    (18.28)
where
K = z/(z - Re^{-jΩ0}) |_{z=Re^{jΩ0}} = Re^{jΩ0}/(Re^{jΩ0} - Re^{-jΩ0}) = Re^{jΩ0}/(2jR sin Ω0) = e^{j(Ω0 - π/2)}/(2 sin Ω0)    (18.29)
Then, from lookup tables, we obtain
h[n] = (1/sin Ω0) R^n cos(nΩ0 + Ω0 - π/2)u[n] = (1/sin Ω0) R^n sin[(n + 1)Ω0]u[n]    (18.30)
To null the response at low and high frequencies (Ω = 0 and Ω = π), the two zeros in H(z) may be relocated
from the origin to z = 1 and z = -1, and this leads to the modified transfer function H1(z):
H1(z) = (z^2 - 1)/(z^2 - 2zR cos Ω0 + R^2)    (18.31)
compute the pole radius as R ≈ 1 - 0.5ΔΩ = 1 - 0.02π = 0.9372. The transfer function of the digital
resonator is thus
H(z) = Gz^2/[(z - Re^{jΩ0})(z - Re^{-jΩ0})] = Gz^2/(z^2 - 2Rz cos Ω0 + R^2) = Gz^2/(z^2 - 0.9372z + 0.8783)
For unity peak gain, we require G = 0.1054, and
H(z) = 0.1054z^2/(z^2 - 0.9372z + 0.8783)
The magnitude spectrum and passband detail of this filter are shown in Figure E18.8.
Figure E18.8 Frequency response of the digital resonator for Example 18.8
The passband detail reveals that the half-power frequencies are located at 46.74 Hz and 52.96 Hz, a close
match to the bandwidth requirement (of 6 Hz).
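The printed design values are easily verified on a frequency grid (our own sketch): the peak gain is unity at 50 Hz, and the response is at the half-power level (≈ 0.707) near the quoted band edges.

```python
import cmath, math

# Digital resonator of Example 18.8: H(z) = G z^2/(z^2 - 0.9372 z + 0.8783),
# G = 0.1054, sampling frequency S = 300 Hz.
S, G = 300.0, 0.1054

def mag(f):
    z = cmath.exp(2j * math.pi * f / S)
    return abs(G * z**2 / (z**2 - 0.9372 * z + 0.8783))

assert abs(mag(50) - 1.0) < 0.01                 # unity peak gain at 50 Hz
for f in (46.74, 52.96):                         # quoted half-power frequencies
    assert 0.69 < mag(f) < 0.72                  # close to 1/sqrt(2) = 0.707
```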
This corresponds to the system difference equation y[n] = x[n] - x[n - N], and represents a linear-phase
FIR filter whose impulse response h[n] (which is odd symmetric about its midpoint) has N + 1 samples with
h[0] = 1, h[N] = -1, and all other coefficients zero. There is a pole of multiplicity N at the origin, and the
zeros lie on the unit circle with locations specified by
z^N = e^{jNΩ} = 1 = e^{j2kπ}    Ωk = 2kπ/N,  k = 0, 1, . . . , N - 1    (18.33)
The pole-zero pattern and magnitude spectrum of this filter are shown for two values of N in Figure 18.14.
Figure 18.14 Pole-zero plot and frequency response of the comb filter H(z) = 1 - z^{-N}
The zeros of H(z) are uniformly spaced 2π/N radians apart around the unit circle, starting at Ω = 0.
For even N, there is also a zero at Ω = π. Being an FIR filter, it is always stable for any N. Its frequency
response is given by
H(F) = 1 - e^{-j2πFN}    (18.34)
Note that H(0) always equals 0, but H(0.5) = 0 for even N and H(0.5) = 2 for odd N. The frequency
response H(F) looks like a comb with N rounded teeth over its principal period -0.5 ≤ F ≤ 0.5.
A more general form of this comb filter is described by
H(z) = 1 - αz^{-N}
It has N poles at the origin, and its zeros are uniformly spaced 2π/N radians apart around a circle of radius
R = α^{1/N}, starting at Ω = 0. Note that this filter is no longer a linear-phase filter because its impulse
response is not symmetric about its midpoint. Its frequency response H(F) = 1 - αe^{-j2πFN} suggests that
H(0) = 1 - α for any N, and H(0.5) = 1 - α for even N and H(0.5) = 1 + α for odd N. Thus, its magnitude
varies between 1 - α and 1 + α, as illustrated in Figure 18.15.
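These endpoint values are simple to confirm numerically (our own sketch):

```python
import cmath, math

# Comb filters: H(F) = 1 - e^{-j2*pi*F*N} and the general form 1 - a e^{-j2*pi*F*N}.
def H(F, N):
    return 1 - cmath.exp(-2j * math.pi * F * N)

def Hgen(F, N, a):
    return 1 - a * cmath.exp(-2j * math.pi * F * N)

assert abs(H(0, 8)) < 1e-12                 # H(0) = 0 always
assert abs(H(0.5, 8)) < 1e-12               # even N: zero at F = 0.5
assert abs(abs(H(0.5, 7)) - 2) < 1e-12      # odd N: H(0.5) = 2

a = 0.5
mags = [abs(Hgen(k / 1000, 8, a)) for k in range(1001)]
assert min(mags) >= 1 - a - 1e-9            # magnitude stays within
assert max(mags) <= 1 + a + 1e-9            # [1 - a, 1 + a]
```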
Figure 18.15 Magnitude spectrum of the comb filter H(z) = 1 - αz^{-N}
This corresponds to the system difference equation y[n] = x[n] + x[n - N], and represents a linear-phase FIR
filter whose impulse response h[n] (which is even symmetric about its midpoint) has N + 1 samples with
h[0] = h[N] = 1, and all other coefficients zero. There is a pole of multiplicity N at the origin, and the zero
locations are specified by
z^N = e^{jNΩ} = -1 = e^{j(2k+1)π}    Ωk = (2k + 1)π/N,  k = 0, 1, . . . , N - 1    (18.37)
The pole-zero pattern and magnitude spectrum of this filter are shown for two values of N in Figure 18.16.
Figure 18.16 Pole-zero plot and frequency response of the comb filter H(z) = 1 + z^{-N}
The zeros of H(z) are uniformly spaced 2π/N radians apart around the unit circle, starting at Ω = π/N.
For odd N, there is also a zero at Ω = π. Its frequency response is given by
H(F) = 1 + e^{-j2πFN}
Note that H(0) = 2 for any N, but H(0.5) = 2 for even N and H(0.5) = 0 for odd N.
A more general form of this comb filter is described by
H(z) = 1 + αz^{-N}
It has N poles at the origin, and its zeros are uniformly spaced 2π/N radians apart around a circle of radius
R = α^{1/N}, starting at Ω = π/N. Note that this filter is no longer a linear-phase filter because its impulse
response is not symmetric about its midpoint. Its frequency response H(F) = 1 + αe^{-j2πFN} suggests that
H(0) = 1 + α for any N, and H(0.5) = 1 + α for even N and H(0.5) = 1 - α for odd N. Thus, its magnitude
varies between 1 - α and 1 + α, as illustrated in Figure 18.17.
Figure 18.17 Magnitude spectrum of the comb filter H(z) = 1 + αz^{-N}
Figure E18.9 Frequency response of the notch filters for Example 18.9
Comment: This notch filter also removes the dc component. If we want to preserve the dc component, we
must extract the zero at z = 1 (corresponding to F = 0) from N(z) (by long division), to give
N1(z) = (1 - z^{-5})/(1 - z^{-1}) = 1 + z^{-1} + z^{-2} + z^{-3} + z^{-4}
and use N1(z) to compute the new transfer function H1(z) = N1(z)/N1(z/α) as
H1(z) = (1 + z^{-1} + z^{-2} + z^{-3} + z^{-4})/(1 + (z/α)^{-1} + (z/α)^{-2} + (z/α)^{-3} + (z/α)^{-4}) = (z^4 + z^3 + z^2 + z + 1)/(z^4 + αz^3 + α^2 z^2 + α^3 z + α^4)
Figure E18.9(b) compares the response of this filter for α = 0.9 and α = 0.99, and reveals that the dc
component is indeed preserved by this filter.
Consider a stable, first-order allpass filter whose transfer function H(z) and frequency response H(F)
are described by
H(z) = (1 + αz)/(z + α)    HA(F) = (1 + αe^{j2πF})/(α + e^{j2πF}),  |α| < 1    (18.43)
If we factor out e^{jπF} from the numerator and denominator of H(F), we obtain the form
H(F) = (1 + αe^{j2πF})/(α + e^{j2πF}) = (e^{-jπF} + αe^{jπF})/(αe^{-jπF} + e^{jπF}),  |α| < 1    (18.44)
The numerator and denominator are complex conjugates. This implies that their magnitudes are equal (an
allpass characteristic) and that the phase of H(F) equals twice the numerator phase. Now, the numerator
may be simplified to
e^{-jπF} + αe^{jπF} = cos(πF) - j sin(πF) + α cos(πF) + jα sin(πF) = (1 + α)cos(πF) - j(1 - α)sin(πF)
The phase φ(F) of the allpass filter H(F) equals twice the numerator phase. The phase φ(F) and phase
delay tp(F) may then be written as
φ(F) = -2 tan^{-1}[((1 - α)/(1 + α)) tan(πF)]    tp(F) = -φ(F)/(2πF) = (1/(πF)) tan^{-1}[((1 - α)/(1 + α)) tan(πF)]    (18.45)
A low-frequency approximation for the phase delay is given by
tp(F) ≈ (1 - α)/(1 + α)    (low-frequency approximation)    (18.46)
The group delay tg(F) of this allpass filter equals
tg(F) = -(1/(2π)) dφ(F)/dF = (1 - α^2)/(1 + 2α cos(2πF) + α^2)    (18.47)
At F = 0 and F = 0.5, the group delay is given by
tg(0) = (1 - α^2)/(1 + 2α + α^2) = (1 - α)/(1 + α)    tg(0.5) = (1 - α^2)/(1 - 2α + α^2) = (1 + α)/(1 - α)    (18.48)
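The closed-form group delay can be checked against a numerical derivative of the phase (our own sketch), which also confirms the unit-magnitude allpass property:

```python
import cmath, math

# First-order allpass H(z) = (1 + a z)/(z + a) with a = 0.5:
# verify tg(F) = (1 - a^2)/(1 + 2a cos(2*pi*F) + a^2) numerically.
a = 0.5

def H(F):
    z = cmath.exp(2j * math.pi * F)
    return (1 + a * z) / (z + a)

def tg_numeric(F, h=1e-6):
    dphi = cmath.phase(H(F + h)) - cmath.phase(H(F - h))
    return -dphi / (2 * math.pi * 2 * h)

def tg_closed(F):
    return (1 - a * a) / (1 + 2 * a * math.cos(2 * math.pi * F) + a * a)

for F in (0.05, 0.17, 0.3, 0.45):
    assert abs(abs(H(F)) - 1) < 1e-12                # allpass: |H| = 1
    assert abs(tg_numeric(F) - tg_closed(F)) < 1e-4  # matches Eq. (18.47)
```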
To stabilize Hu(z), we use an allpass filter HAP(z) that is a cascade of P allpass sections. Its form is
HAP(z) = ∏_{m=1}^{P} (1 + αm z)/(z + αm),    |αm| < 1    (18.52)
The stabilized filter H(z) is then described by the cascade Hu(z)HAP(z) as
H(z) = Hu(z)HAP(z) = Hu(z) ∏_{m=1}^{P} (1 + αm z)/(z + αm) = Hs(z),    |αm| < 1    (18.53)
There are two advantages to this method. First, the magnitude response of the original filter is unchanged.
And second, the order of the new filter is the same as that of the original. The reason for the inequality |αm| < 1
(rather than |αm| ≤ 1) is that if Hu(z) has a pole on the unit circle, its conjugate reciprocal will also lie on
the unit circle, and no stabilization is possible.
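A one-pole sketch of the idea (our own example): Hu(z) = 1/(z - 2) has a pole at z = 2; the allpass section (1 + αz)/(z + α) with α = -0.5 places a zero there (since -1/α = 2), so the cascade reflects the pole to z = 0.5 while leaving the magnitude response untouched.

```python
import cmath, math

# Stabilize Hu(z) = 1/(z - 2) with the allpass (1 + a z)/(z + a), a = -0.5.
# The cascade simplifies to Hs(z) = -0.5/(z - 0.5): stable, same magnitude.
a = -0.5
def Hu(z):  return 1 / (z - 2)
def HAP(z): return (1 + a * z) / (z + a)
def Hs(z):  return -0.5 / (z - 0.5)

for F in (0.0, 0.1, 0.23, 0.4):
    z = cmath.exp(2j * math.pi * F)
    assert abs(abs(HAP(z)) - 1) < 1e-12            # allpass: unit magnitude
    assert abs(Hu(z) * HAP(z) - Hs(z)) < 1e-12     # cascade equals Hs(z)
    assert abs(abs(Hu(z)) - abs(Hs(z))) < 1e-12    # |H| unchanged on |z| = 1
```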
The cascade of HNM(z) and HAP(z) yields a minimum-phase filter with
H(z) = HNM(z)HAP(z) = HM(z)    (18.56)
Once again, H(z) has the same order as the original filter.
The direct sound provides clues to the location of the source, the early echoes provide an indication of
the physical size of the listening space, and the reverberation characterizes the warmth and liveliness that
we usually associate with sounds. The amplitude of the echoes and reverberation decays exponentially with
time. Together, these characteristics determine the psycho-acoustic qualities we associate with any perceived
sound. Typical 60-dB reverberation times (for the impulse response to decay to 0.001 of its peak value) for
concert halls are fairly long, up to two seconds. A conceptual model of a listening environment, also shown
in Figure 18.18, consists of echo filters and reverb filters.
A single echo can be modeled by a feed-forward system of the form
y[n] = x[n] + αx[n - D]    H(z) = 1 + αz^{-D}    (18.57)
This is just a comb filter in disguise. The zeros of this filter lie on a circle of radius R = α^{1/D}, with angular
orientations of θ = (2k + 1)π/D. Its comb-like magnitude spectrum H(F) shows minima of 1 - α at the
frequencies F = (2k + 1)/(2D), and peaks of 1 + α midway between the dips. To perceive an echo, the index
D must correspond to a delay of at least about 50 ms.
A reverb filter that describes multiple reflections has a feedback structure of the form
y[n] = αy[n - D] + x[n]    H(z) = 1/(1 - αz^{-D})    (18.58)
This filter has an inverse-comb structure, and its poles lie on a circle of radius R = α^{1/D}, with an angular
separation of θ = 2π/D. The magnitude spectrum shows peaks of 1/(1 - α) at the pole frequencies F = k/D,
and minima of 1/(1 + α) midway between the peaks.
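Both building blocks are one-line difference equations; this sketch (a = 0.5, D = 4 are our own illustrative values) shows the single echo and the exponentially decaying train of repeats produced by the feedback reverb:

```python
# Echo: y[n] = x[n] + a x[n-D]; reverb: y[n] = a y[n-D] + x[n].
def echo(x, a, D):
    return [xn + (a * x[n - D] if n >= D else 0.0) for n, xn in enumerate(x)]

def reverb(x, a, D):
    y = []
    for n, xn in enumerate(x):
        y.append(xn + (a * y[n - D] if n >= D else 0.0))
    return y

impulse = [1.0] + [0.0] * 15
assert echo(impulse, 0.5, 4) == [1, 0, 0, 0, 0.5] + [0.0] * 11
# The reverb repeats the impulse every D samples, decaying by a each pass:
assert reverb(impulse, 0.5, 4)[::4] == [1.0, 0.5, 0.25, 0.125]
```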
Conceptually, the two systems just described can form the building blocks for simulating the acoustics
of a listening space. Many reverb filters actually use a combination of reverb filters and allpass filters. A
typical structure is shown in Figure 18.19. In practice, however, it is more of an art than a science to create
realistic effects, and many of the commercial designs are proprietary information.
Figure 18.19 Echo and reverb filters for simulating acoustic effects
The reverb filters in Figure 18.19 typically incorporate irregularly spaced delays to allow the blending of
echoes, and the allpass filter serves to create the effect of early echoes. Some structures for the reverb filter
and allpass filter are shown in Figure 18.20. The first structure is the plain reverb. In the second structure,
the feedback path incorporates a first-order lowpass filter that accounts for the dependence (increase) of
sound absorption with frequency. The allpass filter has the form
H(z) = (-α + z^{-L})/(1 - αz^{-L}) = -α + ((1 - α^2)z^{-L})/(1 - αz^{-L})    (18.59)
The second form of this expression (obtained by long division) shows that, except for the constant term, the
allpass filter has the same form as a reverb filter.
Figure 18.20 Structures for the reverb filter and the allpass filter
Chorusing mimics a chorus (or group) singing (or playing) in unison. In practice, of course, the voices
(or instruments) are not in perfect synchronization, nor identical in pitch. The chorusing effect can be
implemented by a weighted combination of echo filters, each with a time-varying delay dn of the form
y[n] = x[n] + αx[n - dn].
Typical delay times used in chorusing are between 20 ms and 30 ms. If the delays are less than 10 ms (but
still variable), the resulting whooshing sound is known as flanging.
Phase shifting or phasing also creates many interesting effects, and may be achieved by passing the
signal through a notch filter whose frequency can be tuned by the user. It is the sudden phase jumps at
the notch frequency that are responsible for the phasing effect. The effects may also be enhanced by the
addition of feedback.
at multiples of the fundamental frequency of the note. The harmonics should decay exponentially in time,
with higher frequencies decaying at a faster rate. A typical structure, first described by Karplus and Strong,
is illustrated in Figure 18.22.
Figure 18.22 Karplus-Strong structures for plucked-string synthesis
In the Karplus-Strong structure, the lowpass filter has the transfer function GLP(z) = 0.5(1 + z^{-1}), and
contributes a 0.5-sample phase delay. The delay line contributes an additional D-sample delay, and the loop
delay is thus D + 0.5 samples. The overall transfer function of the first structure is
H(z) = 1/(1 - Az^{-D}GLP(z))    GLP(z) = 0.5(1 + z^{-1})    (18.61)
The frequency response of this Karplus-Strong filter is shown in Figure 18.23, for D = 8 and D = 16,
and clearly reveals a resonant structure with sharp peaks. The lowpass filter GLP (z) is responsible for the
decrease in the amplitude of the peaks, and for making them broader as the frequency increases.
Figure 18.23 Frequency response of the Karplus-Strong filter for D = 8 and D = 16
If A is close to unity, the peaks occur at (or very close to) multiples of the fundamental digital frequency
F0 = 1/(D + 0.5). For a sampling rate S, the note frequency thus corresponds to f0 = S/(D + 0.5) Hz.
However, since D is an integer, we cannot achieve precise tuning (control of the frequency). For example,
to generate a 1600-Hz note at a sampling frequency of 14 kHz requires that D + 0.5 = 8.75. The closest we
can come is by picking D = 8 (to give a frequency of f0 = 14000/8.5 ≈ 1647 Hz). To generate the exact frequency
requires an additional 0.25-sample phase delay. The second structure of Figure 18.22 includes an allpass
filter of the form GAP(z) = (1 + αz)/(z + α) in the feedback path that allows us to implement such delays by
appropriate choice of the allpass parameter α. For example, to implement the 0.25-sample delay, we would
require (using the low-frequency approximation for the phase delay)
0.25 = tp = (1 - α)/(1 + α)    α = (1 - 0.25)/(1 + 0.25) = 0.6    (18.62)
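The resonance structure of the first Karplus-Strong loop can be located numerically (our own sketch): with D = 8, the first nonzero peak of |H(F)| sits near F0 = 1/(D + 0.5), which at S = 14 kHz corresponds to roughly 1647 Hz, as computed above.

```python
import cmath, math

# Karplus-Strong loop: H(z) = 1/(1 - A z^-D GLP(z)), GLP(z) = 0.5(1 + z^-1).
A, D = 0.999, 8

def Hmag(F):
    z = cmath.exp(2j * math.pi * F)
    return abs(1 / (1 - A * z**(-D) * 0.5 * (1 + z**-1)))

F0 = 1 / (D + 0.5)                            # predicted fundamental
grid = [k / 4000 for k in range(200, 800)]    # search 0.05 <= F <= 0.2
Fpeak = max(grid, key=Hmag)
assert abs(Fpeak - F0) < 0.005                # first resonance near 1/(D + 0.5)
assert abs(F0 * 14000 - 1647) < 1.0           # ~1647 Hz at S = 14 kHz
```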
CHAPTER 18 PROBLEMS
DRILL AND REINFORCEMENT
18.1 (Realization) Sketch the direct form I, direct form II, and transposed realization for each filter.
(a) y[n] - (1/6)y[n - 1] - (1/2)y[n - 2] = 3x[n]
(b) H(z) = z^2/(z^2 - 0.25)
(c) y[n] - 3y[n - 1] + 2y[n - 2] = 2x[n - 2]
(d) H(z) = (2z^2 + z - 2)/(z^2 - 1)
18.2 (Realization) Find the transfer function and difference equation for each system realization shown
in Figure P18.2.
Figure P18.2 System realizations for Problem 18.2
18.3 (Inverse Systems) Find the difference equation of the inverse systems for each of the following.
Which inverse systems are causal? Which are stable?
(a) H(z) = (z^2 + 1/9)/(z^2 - 1/4)
(b) H(z) = (z + 2)/(z^2 + 0.25)
(c) y[n] - 0.5y[n - 1] = x[n] + 2x[n - 1]
(d) h[n] = n(-2)^n u[n]
18.4 (Minimum-Phase Systems) Classify each system as minimum phase, mixed phase, or maximum
phase. Which of the systems are stable?
(a) H(z) = (z^2 + 1/9)/(z^2 - 1/4)
(b) H(z) = (z^2 - 4)/(z^2 + 9)
(c) h[n] = n(-2)^n u[n]
(d) y[n] + y[n - 1] + 0.25y[n - 2] = x[n] - 2x[n - 1]
18.5 (Minimum-Phase Systems) Find the minimum-phase transfer function corresponding to the
systems described by the following:
(a) H(z)H(1/z) = 1/(3z^2 - 10z + 3)
(b) H(z)H(1/z) = (3z^2 + 10z + 3)/(5z^2 + 26z + 5)
(c) |H(F)|^2 = (1.25 + cos(2πF))/(8.5 + 4 cos(2πF))
18.6 (System Characteristics) Consider the system y[n] - αy[n - 1] = x[n] - βx[n - 1].
(a) For what values of α and β will the system be stable?
18.7 (Filter Design by Pole-Zero Placement) Design the following filters by pole-zero placement.
(a) A bandpass filter with a center frequency of f0 = 200 Hz, a 3-dB bandwidth of Δf = 20 Hz,
zero gain at f = 0 and f = 400 Hz, and a sampling frequency of 800 Hz.
(b) A notch filter with a notch frequency of 1 kHz, a 3-dB stopband of 10 Hz, and sampling frequency
8 kHz.
18.8 (Stabilization by Allpass Filters) The transfer function of a filter is H(z) = (z + 3)/(z - 2).
(a) Is this filter stable?
(b) What is the transfer function A1 (z) of a first-order allpass filter that can stabilize this filter?
What is the transfer function HS (z) of the stabilized filter?
(c) If HS (z) is not minimum phase, pick an allpass filter A2 (z) that converts HS (z) to a minimum-phase filter HM (z).
(d) Verify that |H(F )| = |HS (F )| = |HM (F )|.
Figure P18.9 Filter realization for Problem 18.9
18.10 (Feedback Compensation) Feedback compensation is often used to stabilize unstable filters. It is
required to stabilize the unstable filter G(z) = 6/(z - 1.2) by putting it in the forward path of a negative
feedback system. The feedback block has the form H(z) = (αz + β)/z.
(a) What values of α and β are required for the overall system to have two poles at z = 0.4 and
z = 0.6? What is the overall transfer function and impulse response?
(b) What values of α and β are required for the overall system to have both poles at z = 0.6?
What is the overall transfer function and impulse response? How does the double pole affect
the impulse response?
18.11 (Recursive and IIR Filters) The terms recursive and IIR are not always synonymous. A recursive
filter could in fact have a finite impulse response and even linear phase. For each of the following
recursive filters, find the transfer function H(z) and the impulse response h[n]. Which filters (if any)
describe IIR filters? Which filters (if any) are linear phase?
(a) y[n] - y[n - 1] = x[n] - x[n - 2]
(b) y[n] - y[n - 1] = x[n] - x[n - 1] - 2x[n - 2] + 2x[n - 3]
18.12 (Recursive Forms of FIR Filters) An FIR filter may always be recast in recursive form by the
simple expedient of including poles and zeros at identical locations. This is equivalent to multiplying
the transfer function numerator and denominator by identical factors. For example, the filter H(z) =
1 - z^{-1} is FIR, but if we multiply the numerator and denominator by the identical term 1 + z^{-1}, the
new filter and its difference equation become
H1(z) = (1 - z^{-1})(1 + z^{-1})/(1 + z^{-1}) = (1 - z^{-2})/(1 + z^{-1})    y[n] + y[n - 1] = x[n] - x[n - 2]
The difference equation can be implemented recursively. Find two different recursive difference equations
(with different orders) for each of the following filters.
(a) h[n] = {1, 2, 1}
(b) H(z) = (z^2 - 2z + 1)/z^2
(c) y[n] = x[n] x[n 2]
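The worked example above is easy to verify numerically (our own check, not a solution to the exercise): the recursive form y[n] + y[n - 1] = x[n] - x[n - 2] produces the same finite impulse response as H(z) = 1 - z^{-1}.

```python
# The recursive difference equation y[n] + y[n-1] = x[n] - x[n-2] hides the
# FIR filter h[n] = {1, -1}: the pole at z = -1 is cancelled by a zero.
def recursive(x):
    y = []
    for n, xn in enumerate(x):
        yn = xn - (x[n - 2] if n >= 2 else 0.0) - (y[n - 1] if n >= 1 else 0.0)
        y.append(yn)
    return y

impulse = [1.0] + [0.0] * 9
assert recursive(impulse) == [1.0, -1.0] + [0.0] * 8   # finite impulse response
```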
18.13 (Systems in Cascade and Parallel) Consider the filter realization of Figure P18.13.
(a) Find its transfer function and impulse response if α = β. Is the overall system FIR or IIR?
(b) Find its transfer function and impulse response if α ≠ β. Is the overall system FIR or IIR?
(c) Find its transfer function and impulse response if α = β = 1. What does the overall system
represent?
Figure P18.13 Filter realization for Problem 18.13
18.14 (Filter Concepts) Argue for or against the following. Use examples to justify your arguments.
(a) All the finite poles of an FIR filter must lie at z = 0.
(b) An FIR filter is always linear phase.
(c) An FIR filter is always stable.
(d) A causal IIR filter can never display linear phase.
(e) A linear-phase sequence is always symmetric about its midpoint.
(f ) A minimum-phase filter can never display linear phase.
(g) An allpass filter can never display linear phase.
18.15 (Filter Concepts) Let H(z) = 0.2(z + 1)/(z - 0.6). What type of filter does H(z) describe? Sketch its pole-zero plot. For the following problems, assume that the cutoff frequency of this filter is FC = 0.15.
(a) What filter does H1(z) = 1 - H(z) describe? Sketch its pole-zero plot. How is the cutoff
frequency of this filter related to that of H(z)?
(b) What filter does H2(z) = H(-z) describe? Sketch its pole-zero plot. How is the cutoff frequency
of this filter related to that of H(z)?
(c) What type of filter does H3(z) = 1 - H(z) - H(-z) describe? Sketch its pole-zero plot.
(d) Use a combination of the above filters to implement a bandstop filter.
18.16 (Inverse Systems) Consider a system described by h[n] = 0.5δ[n] + 0.5δ[n - 1].
(a) Sketch the frequency response H(F) of this filter.
(b) In an effort to recover the input x[n], it is proposed to cascade this filter with another filter
whose impulse response is h1[n] = 0.5δ[n] - 0.5δ[n - 1], as shown:
x[n] h[n] h1 [n] y[n]
What is the output of the cascaded filter to the input x[n]? Sketch the frequency response
H1 (F ) and the frequency response of the cascaded filter.
(c) What must be the impulse response h2 [n] of a filter connected in cascade with the original filter
such that the output of the cascaded filter equals the input x[n], as shown?
x[n] h[n] h2 [n] x[n]
(d) Are H2 (F ) and H1 (F ) related in any way?
18.17 (Linear Phase and Symmetry) Assume a sequence x[n] with real coefficients with all its poles
at z = 0. Argue for or against the following statements. You may want to exploit two useful facts.
First, each pair of terms with reciprocal roots such as (z - α) and (z - 1/α) yields an even symmetric
impulse response sequence. Second, the convolution of symmetric sequences is also endowed with
symmetry.
(a) If all the zeros lie on the unit circle, x[n] must be linear phase.
(b) If x[n] is linear phase, its zeros must always lie on the unit circle.
(c) If there are no zeros at z = 1 and x[n] is linear phase, it is also even symmetric.
(d) If there is one zero at z = 1 and x[n] is linear phase, it is also odd symmetric.
(e) If x[n] is even symmetric, there can be no zeros at z = 1.
(f ) If x[n] is odd symmetric, there must be an odd number of zeros at z = 1.
18.18 (Comb Filters) For each comb filter, identify the pole and zero locations and determine whether
it is a notch filter or a peaking filter.
(a) H(z) = (z⁴ − 0.4096)/(z⁴ − 0.6561)        (b) H(z) = (z⁴ − 1)/(z⁴ − 0.6561)
668 Chapter 18 Applications of the z-Transform
18.22 (Minimum-Phase Systems) Consider the filter y[n] = x[n] − 0.65x[n − 1] + 0.1x[n − 2].
(a) Find its transfer function H(z) and verify that it is minimum phase.
(b) Find an allpass filter A(z) with the same denominator as H(z).
(c) Is the cascade H(z)A(z) minimum phase? Is it causal? Is it stable?
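A quick way to verify part (a) numerically (a Python sketch; the factorization is computed, not quoted from the text) is to find the zeros of H(z) = (z² − 0.65z + 0.1)/z² and confirm they lie inside the unit circle:

```python
import math

# Zeros of H(z) = (z^2 - 0.65z + 0.1)/z^2, i.e. roots of z^2 - 0.65z + 0.1 = 0
b, c = -0.65, 0.1
disc = math.sqrt(b * b - 4 * c)
zeros = sorted([(-b + disc) / 2, (-b - disc) / 2])
print([round(z, 4) for z in zeros])      # [0.25, 0.4]
print(all(abs(z) < 1 for z in zeros))    # True: zeros (and poles, at z = 0) inside |z| = 1
```

Since all zeros and poles are inside the unit circle, the filter is minimum phase, as the problem asks you to verify.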
18.23 (Causality, Stability, and Minimum Phase) Consider two causal, stable, minimum-phase digital
filters described by
F(z) = z/(z − 0.5)        G(z) = (z − 0.5)/(z + 0.5)
Argue that the following filters are also causal, stable, and minimum phase.
(a) The inverse filter M (z) = 1/F (z)
(b) The inverse filter P (z) = 1/G(z)
(c) The cascade H(z) = F (z)G(z)
(d) The inverse of the cascade R(z) = 1/H(z)
(e) The parallel connection N (z) = F (z) + G(z)
18.24 (Allpass Filters) Consider the filter H(z) = (z + 2)/(z + 0.5). The input to this filter is x[n] = cos(2πnF0).
18.25 (Allpass Filters) Consider two causal, stable, allpass digital filters described by
F(z) = (0.5z − 1)/(0.5 − z)        G(z) = (0.5z + 1)/(0.5 + z)
(a) Is the filter L(z) = F 1 (z) causal? Stable? Allpass?
(b) Is the filter H(z) = F (z)G(z) causal? Stable? Allpass?
(c) Is the filter M (z) = H 1 (z) causal? Stable? Allpass?
(d) Is the filter N (z) = F (z) + G(z) causal? Stable? Allpass?
18.26 (Signal Delay) The delay D of a discrete-time energy signal x[n] is defined by

D = [Σ_{k=−∞}^{∞} k x²[k]] / [Σ_{k=−∞}^{∞} x²[k]]
(a) Verify that the delay of the linear-phase sequence x[n] = {4, 3, 2, 1, 0, 1, 2, 3, 4} (symmetric about the sample at n = 0) is zero.
(b) Compute the delay of the signals g[n] = x[n 1] and h[n] = x[n 2].
(c) What is the delay of the signal y[n] = 1.5(0.5)^n u[n] − 2δ[n]?
(d) Consider the first-order allpass filter H(z) = (1 + αz)/(z + α). Compute the signal delay for its impulse response h[n].
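The delay definition above is easy to check numerically; this Python sketch (with the tabulated sequence assumed to be centered at n = 0) reproduces parts (a) and (b):

```python
def delay(x, n0=0):
    # D = sum(n * x[n]^2) / sum(x[n]^2); n0 is the index of the first sample
    num = sum((n0 + k) * v * v for k, v in enumerate(x))
    den = sum(v * v for v in x)
    return num / den

x = [4, 3, 2, 1, 0, 1, 2, 3, 4]      # assumed centered: first sample at n = -4
print(delay(x, n0=-4))               # 0.0: symmetric about n = 0
print(delay(x, n0=-3))               # 1.0: g[n] = x[n - 1] shifts the delay by 1
print(delay(x, n0=-2))               # 2.0: h[n] = x[n - 2] shifts it by 2
```

The pattern suggests the general result that shifting any energy signal by m samples adds exactly m to its delay.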
18.27 (First-Order Filters) For each filter, sketch the pole-zero plot, sketch the frequency response to establish the filter type, and evaluate the phase delay at low frequencies. Assume that α = 0.5.
(a) H(z) = z/(z + α)        (b) H(z) = (z − 1/α)/(z + α)        (c) H(z) = (z + 1/α)/(z + α)
18.28 (Allpass Filters) Consider a lowpass filter with impulse response h[n] = (0.5)^n u[n]. If its input is x[n] = cos(0.5πn), the output will have the form y[n] = A cos(0.5πn + θ).
(a) Find the values of A and θ.
(b) What should be the transfer function H1(z) of a first-order allpass filter that can be cascaded with the lowpass filter to correct for the phase distortion and produce the signal z[n] = B cos(0.5πn) at its output?
(c) What should be the gain of the allpass filter in order that z[n] = x[n]?
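For part (a), the steady-state sinusoidal response follows from evaluating H(z) = z/(z − 0.5) at z = e^{j0.5π}; a small Python check (the closed-form values in the comments were worked out by hand, not taken from the text):

```python
import cmath, math

# Lowpass filter h[n] = (0.5)^n u[n]  =>  H(z) = z/(z - 0.5)
z = cmath.exp(0.5j * math.pi)            # input frequency 0.5*pi (F = 0.25)
Hval = z / (z - 0.5)
A, theta = abs(Hval), cmath.phase(Hval)
print(round(A, 4))        # 0.8944 = 1/sqrt(1.25)
print(round(theta, 4))    # -0.4636 rad = -atan(0.5)
# An allpass section with overall gain 1/A would then restore both phase and amplitude.
```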
18.29 (Allpass Filters) An unstable digital filter whose transfer function is H(z) = [(z + 0.5)(2z + 0.5)]/[(z + 5)(2z + 5)] is to be stabilized in a way that does not affect its magnitude spectrum.
(a) What must be the transfer function H1 (z) of a filter such that the cascaded filter described by
HS (z) = H(z)H1 (z) is stable?
(b) What is the transfer function HS (z) of the stabilized filter?
(c) Is HS (z) causal? Minimum phase? Allpass?
18.30 (FIR Filter Design) A 22.5-Hz signal is corrupted by 60-Hz hum. It is required to sample this
signal at 180 Hz and filter out the interference from the sampled signal.
(a) Design a minimum-length, linear-phase filter that passes the desired signal with unit gain and
completely rejects the interference signal.
(b) Test your design by applying a sampled version of the desired signal, adding 60-Hz interference,
filtering the noisy signal, and comparing the desired signal and the filtered signal.
18.31 (Comb Filters) Plot the frequency response of the following filters over 0 ≤ F ≤ 0.5 and describe
the action of each filter.
(a) y[n] = x[n] + αx[n − 4], α = 0.5
(b) y[n] = x[n] + αx[n − 4] + α²x[n − 8], α = 0.5
(c) y[n] = x[n] + αx[n − 4] + α²x[n − 8] + α³x[n − 12], α = 0.5
(d) y[n] = αy[n − 4] + x[n], α = 0.5
18.32 (Filter Design) An ECG signal sampled at 300 Hz is contaminated by interference due to 60-Hz
hum. It is required to design a digital filter to remove the interference and provide a dc gain of unity.
(a) Design a 3-point FIR filter (using zero placement) that completely blocks the interfering signal.
Plot its frequency response. Does the filter provide adequate gain at other frequencies in the
passband? Is this a good design?
(b) Design an IIR filter (using pole-zero placement) that completely blocks the interfering signal.
Plot its frequency response. Does the filter provide adequate gain at other frequencies in the
passband? Is this a good design?
(c) Generate one period (300 samples) of the ECG signal using the command yecg=ecgsim(3,9);.
Generate a noisy ECG signal by adding 300 samples of a 60-Hz sinusoid to yecg. Obtain filtered
signals, using each filter, and compare plots of the filtered signal with the original signal yecg.
Do the results support your conclusions of parts (a) and (b)? Explain.
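For part (a), zero placement puts conjugate zeros at the interference frequency F = 60/300 = 0.2 and scales for unit dc gain; a Python sketch (the coefficients here are derived for illustration, not quoted from the text):

```python
import cmath, math

F0 = 60 / 300                               # interference at digital frequency F = 0.2
c = 2 * math.cos(2 * math.pi * F0)          # conjugate zeros at exp(+/- j*2*pi*F0)
b = [1.0, -c, 1.0]                          # unnormalized FIR: 1 - c*z^-1 + z^-2
K = 1.0 / sum(b)                            # scale for unit dc gain

def gain(F):
    z = cmath.exp(2j * math.pi * F)
    return abs(K * (b[0] + b[1] / z + b[2] / z**2))

print(round(gain(0.0), 4))   # 1.0 at dc
print(round(gain(0.2), 4))   # 0.0 at the 60-Hz interference
print(round(gain(0.5), 4))   # about 1.89 near F = 0.5
```

The gain rising well above unity away from dc is precisely the weakness the "Is this a good design?" question points at; the IIR pole-zero design of part (b) flattens this response.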
18.33 (Nonrecursive Forms of IIR Filters) If we truncate the impulse response of an IIR filter to
N terms, we obtain an FIR filter. The larger the truncation index N , the better the FIR filter
approximates the underlying IIR filter. Consider the IIR filter described by y[n] 0.8y[n 1] = x[n].
(a) Find its impulse response h[n] and truncate it to N terms to obtain hN [n], the impulse response
of the approximate FIR equivalent. Would you expect the greatest mismatch in the response of
the two filters to identical inputs to occur for lower or higher values of n?
(b) Plot the frequency response H(F ) and HN (F ) for N = 3. Plot the poles and zeros of the two
filters. What differences do you observe?
(c) Plot the frequency response H(F ) and HN (F ) for N = 10. Plot the poles and zeros of the two
filters. Does the response of HN (F ) show a better match to H(F )? How do the pole-zero plots
compare? What would you expect to see in the pole-zero plot if you increase N to 50? What
would you expect to see in the pole-zero plot as N → ∞?
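The dc gain alone already shows the mismatch shrinking with N; a brief Python comparison of H(F) and HN(F) (a sketch of the computation behind parts (b) and (c), not the plots themselves):

```python
import cmath

def H(F):
    # IIR filter y[n] - 0.8 y[n-1] = x[n]  =>  H(z) = z/(z - 0.8)
    z = cmath.exp(2j * cmath.pi * F)
    return z / (z - 0.8)

def HN(F, N):
    # FIR approximation: h[n] = 0.8^n truncated to N terms
    z = cmath.exp(2j * cmath.pi * F)
    return sum(0.8 ** n * z ** (-n) for n in range(N))

for N in (3, 10):
    print(N, round(abs(H(0) - HN(0, N)), 4))   # dc mismatch: 2.56 for N = 3, 0.5369 for N = 10
```

The mismatch is largest where h[n] is truncated most severely, which answers the "lower or higher values of n" question in part (a): the tail of the geometric impulse response is what the FIR filter discards.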
18.34 (LORAN) A LORAN (long-range radio and navigation) system for establishing positions of marine
craft uses three transmitters that send out short bursts (10 cycles) of 100-kHz signals in a precise phase
relationship. Using phase comparison, a receiver (on the craft) can establish the position (latitude and
longitude) of the craft to within a few hundred meters. Suppose the LORAN signal is to be digitally
processed by first sampling it at 500 kHz and filtering the sampled signal using a second-order peaking
filter with a half-power bandwidth of 1 kHz. Design the peaking filter and use Matlab to plot its
frequency response.
18.35 (Decoding a Mystery Message) During transmission, a message signal gets contaminated by a
low-frequency signal and high-frequency noise. The message can be decoded only by displaying it in
the time domain. The contaminated signal is provided on disk as mystery1.mat. Load this signal into
Matlab (using the command load mystery1). In an effort to decode the message, try the following
steps and determine what the decoded message says.
(a) Display the contaminated signal. Can you read the message?
(b) Display the DFT of the signal to identify the range of the message spectrum.
(c) Design a peaking filter (with unit gain) centered about the message spectrum.
(d) Filter the contaminated signal and display the filtered signal to decode the message.
18.36 (Plucked-String Filters) Figure P18.36 shows three variants of the Karplus-Strong filter for synthesizing plucked-string instruments. Assume that GLP(z) = 0.5(1 + z^{−1}).
[Figure P18.36: three feedback-loop variants of the plucked-string filter, each built from a delay z^{−D}, a loop gain A, and the lowpass filter GLP(z).]
18.37 (More Plucked-String Filters) We wish to use the Karplus-Strong filter to synthesize a guitar
note played at exactly 880 Hz, using a sampling frequency of 10 kHz, by including a first-order allpass
filter GAP(z) = (1 + αz)/(z + α) in the feedback loop. Choose the second variant from Figure P18.36 and assume that GLP(z) = 0.5(1 + z^{−1}).
(a) What is the value of D and the value of the allpass parameter α?
(b) Plot the frequency response of the designed filter, using an appropriate value for A.
(c) How far off is the fundamental frequency of the designed filter from 880 Hz?
(d) Show that the exact relation for finding the parameter α from the phase delay tp at any digital frequency F is given by

α = sin[(1 − tp)πF] / sin[(1 + tp)πF]
(e) Use the result of part (d) to compute the exact value of α at the digital frequency corresponding to 880 Hz and plot the frequency response of the designed filter. Is the fundamental frequency of the designed filter any closer to 880 Hz? Will the exact value of α be more useful for lower or higher sampling rates?
18.38 (Generating DTMF Tones) In dual-tone multi-frequency (DTMF) or touch-tone telephone di-
aling, each number is represented by a dual-frequency tone (as described in the text). It is required
to generate the signal at each frequency as a pure cosine, using a digital oscillator operating at a
sampling rate of S = 8192 Hz. By varying the parameters of the digital filter, it should be possible to
generate a signal at any of the required frequencies. A DTMF tone can then be generated by adding
the appropriate low-frequency and high-frequency signals.
(a) Design the digital oscillator and use it to generate tones corresponding to all the digits. Each
tone should last for 0.1 s.
(b) Use the FFT to obtain the spectrum of each tone and confirm that its frequencies correspond
to the appropriate digit.
18.39 (Decoding DTMF Tones) To decode a DTMF signal, we must be able to isolate the tones and
then identify the digits corresponding to each tone. This may be accomplished in two ways: using
the FFT or by direct filtering.
(a) Generate DTMF tones (by any method) at a sampling rate of 8192 Hz.
(b) For each tone, use the FFT to obtain the spectrum and devise a test that would allow you to
identify the digit from the frequencies present in its FFT. How would you automate the decoding
process for an arbitrary DTMF signal?
(c) Apply each tone to a parallel bank of bandpass (peaking) filters, compute the output signal
energy, compare with an appropriate threshold to establish the presence or absence of a fre-
quency, and relate this information to the corresponding digit. You will need to design eight
peaking filters (centered at the appropriate frequencies) to identify the frequencies present and
the corresponding digit. How would you automate the decoding process for an arbitrary DTMF
signal?
18.40 (Phase Delay and Group Delay of Allpass Filters) Consider the filter H(z) = (z + 1/α)/(z + α).
(a) Verify that this is an allpass filter.
(b) Pick values of α that correspond to a phase delay of tp = 0.1, 0.5, 0.9. For each value of α, plot
the unwrapped phase, phase delay, and group delay of the filter.
(c) Over what range of digital frequencies is the phase delay a good match to the value of tp
computed in part (b)?
(d) How does the group delay vary with frequency as α is changed?
(e) For each value of α, compute the minimum and maximum values of the phase delay and the
group delay and the frequencies at which they occur.
Chapter 19 IIR Digital Filters
19.1 Introduction
Digital filters process discrete-time signals. They are essentially mathematical implementations of a filter
equation in software or hardware. They suffer from few limitations. Among their many advantages are high noise immunity, high accuracy (limited only by the roundoff error in the computer arithmetic), easy modification of filter characteristics, freedom from component variations, and, of course, low and constantly decreasing cost. Digital filters are therefore rapidly replacing analog filters in many applications where they can be used effectively. The term digital filtering is to be understood in its broadest sense, not only as a
smoothing or averaging operation, but also as any processing of the input signal.
674 Chapter 19 IIR Digital Filters
appropriate mapping and an appropriate spectral transformation. A causal, stable IIR filter can never
display linear phase for several reasons. The transfer function of a linear-phase filter must correspond to
a symmetric sequence and ensure that H(z) = H(1/z). For every pole inside the unit circle, there is a
reciprocal pole outside the unit circle. This makes the system unstable (if causal) or noncausal (if stable).
To make the infinitely long symmetric impulse response sequence of an IIR filter causal, we need an infinite
delay, which is not practical, and symmetric truncation (to preserve linear phase) simply transforms the IIR
filter into an FIR filter.
Only FIR filters can be designed with linear phase (no phase distortion). Their design is typically based
on selecting a symmetric impulse response sequence whose length is chosen to meet design specifications.
This choice is often based on iterative techniques or trial and error. For given specifications, FIR filters
require many more elements in their realization than do IIR filters.
Here, ts is the sampling interval corresponding to the sampling rate S = 1/ts . The discrete-time impulse
response hs [n] describes the samples h(nts ) of h(t) and may be written as
hs[n] = h(nts) = Σ_{k=−∞}^{∞} hs[k] δ[n − k]        (19.2)
These relations describe a mapping between the variables z and s. Since s = σ + jω, where ω is the continuous frequency, we can express the complex variable z as
The sampled signal hs [n] has a periodic spectrum given by its DTFT:
Hp(f) = S Σ_{k=−∞}^{∞} H(f − kS)        (19.6)
If the analog signal h(t) is band-limited to B and sampled above the Nyquist rate (S > 2B), the principal
period (−0.5 ≤ F ≤ 0.5) of Hp(f) equals SH(f), a scaled version of the true spectrum H(f). We may thus
relate the analog and digital systems by
H(f) = ts Hp(f)   or   Ha(s)|_{s=j2πf} = ts Hd(z)|_{z=e^{j2πf/S}},   |f| < 0.5S        (19.7)
If S < 2B, we have aliasing, and this relationship no longer holds.
[Figure: the mapping z = e^{sts}. The s-plane origin maps to z = 1, and segments of the jω-axis of length ωs map to the unit circle in the z-plane.]
The origin: The origin s = 0 is mapped to z = 1, as are all other points corresponding to s = 0 ± jkωs, for which z = e^{jkωs ts} = e^{jk2π} = 1.
The jω-axis: For points on the jω-axis, σ = 0, z = e^{jωts}, and |z| = 1. As ω increases from ω0 to ω0 + ωs, the digital frequency Ω increases from Ω0 to Ω0 + 2π, and segments of the jω-axis of length ωs = 2πS thus map to the unit circle, over and over.
The left half-plane: In the left half-plane, σ < 0. Thus, z = e^{σts}e^{jωts} or |z| = e^{σts} < 1. This describes the interior of the unit circle in the z-plane. In other words, strips of width ωs in the left half of the s-plane are mapped to the interior of the unit circle, over and over.
The right half-plane: In the right half-plane, σ > 0, and we see that |z| = e^{σts} > 1. Thus, strips of width ωs in the right half of the s-plane are repeatedly mapped to the exterior of the unit circle.
Strips of width ωs = 2πS (along the jω-axis) in the left half of the s-plane map to the interior of the unit circle, over and over.
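These mapping properties can be confirmed numerically from z = e^{sts}; the Python sketch below uses an arbitrary ts = 0.5 (an assumed value for illustration):

```python
import cmath, math

ts = 0.5                      # assumed sampling interval, S = 1/ts
ws = 2 * math.pi / ts         # strip width ws = 2*pi*S

def z_of(s):
    # The mapping z = e^{s*ts} underlying response-invariant design
    return cmath.exp(s * ts)

print(abs(z_of(0)))                       # 1.0: s = 0 maps to z = 1
print(abs(z_of(-2 + 3j)) < 1)             # True: left half-plane -> inside the unit circle
print(abs(z_of(+2 + 3j)) > 1)             # True: right half-plane -> outside the unit circle
print(abs(z_of(3j) - z_of(3j + 1j * ws)) < 1e-9)  # True: the jw-axis wraps every ws
```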
[Figure: response-invariant matching. The input x(t) is sampled (t = nts) to give x[n] ↔ X(z); the analog output y(t), with Y(s) = X(s)H(s), is sampled (t = nts) to give y[n] ↔ Y(z); the digital filter is then H(z) = Y(z)/X(z).]
Response-invariant matching yields a transfer function that is a good match only for the time-domain
response to the input for which it was designed. It may not provide a good match for the response to other
inputs. The quality of the approximation depends on the choice of the sampling interval ts , and a unique
correspondence is possible only if the sampling rate S = 1/ts is above the Nyquist rate (to avoid aliasing).
This mapping is thus useful only for analog systems such as lowpass and bandpass filters, whose frequency
response is essentially band-limited. This also implies that the analog transfer function H(s) must be strictly
proper (with numerator degree less than the denominator degree).
H(z) = Y(z)/X(z) = Y(z) = z/(z − e^{−ts}) = z/(z − e^{−1}) = z/(z − 0.3679)
[Figure E19.1 Frequency response of H(s) = 1/(s + 1) and H(z) for Example 19.1(a): (a) magnitudes of H(s) and H(z) for ts = 1 s (dc gains 1 and 1.582); (b) magnitudes after gain matching at dc.]
The dc gain of H(s) (at s = 0) is unity, but that of H(z) (at z = 1) is 1.582. Even if we normalize the dc gain of H(z) to unity, as in Figure E19.1(b), we see that the frequency response of the analog and digital filters is different. However, the analog impulse response h(t) = e^{−t} matches h[n] = e^{−n}u[n] at the sampling instants t = nts = n. A perfect match for the time-domain response for which the filter was designed lies at the heart of response-invariant mappings. The time-domain response to any other inputs will be different. For example, the step response of the analog filter is
S(s) = 1/[s(s + 1)] = 1/s − 1/(s + 1)   ⟹   s(t) = (1 − e^{−t})u(t)
To find the step response S(z) of the digital filter whose input is u[n] ⇔ z/(z − 1), we use partial fractions on S(z)/z to obtain

S(z) = z²/[(z − 1)(z − e^{−1})] = [e/(e − 1)] z/(z − 1) + [1/(1 − e)] z/(z − e^{−1})   ⟹   s[n] = [e/(e − 1)]u[n] + [1/(1 − e)]e^{−n}u[n]
The sampled version of s(t) is quite different from s[n]. Figure E19.1A reveals that, at the sampling instants t = nts, the impulse response of the two filters shows a perfect match, but the step response does not, and neither will the time-domain response to any other input.
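The match and mismatch are easy to tabulate; this Python sketch evaluates the closed forms quoted above for ts = 1 (the impulse responses agree exactly at t = n, while the step responses differ even in their final values, 1 versus e/(e − 1) ≈ 1.582):

```python
import math

e = math.e

def s_analog(n):
    # Sampled analog step response s(t) = (1 - e^{-t}) u(t) at t = n (ts = 1)
    return 1 - math.exp(-n)

def s_digital(n):
    # Digital step response s[n] = [e/(e-1)] u[n] + [1/(1-e)] e^{-n} u[n]
    return e / (e - 1) + math.exp(-n) / (1 - e)

for n in range(4):
    print(n, round(s_analog(n), 4), round(s_digital(n), 4))
# The sequences differ for every n; as n grows they approach 1 and e/(e - 1) = 1.582.
```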
[Figure E19.1A Impulse response and step response of H(s) and H(z) for Example 19.1(a): the impulse responses match at t = nts, but the step responses do not.]
(b) Convert H(s) = 4/[(s + 1)(s + 2)] to H(z), using various response-invariant transformations.
1. Impulse invariance: We choose x(t) = δ(t). Then, X(s) = 1, and

Y(s) = H(s)X(s) = 4/[(s + 1)(s + 2)] = 4/(s + 1) − 4/(s + 2)   ⟹   y(t) = 4e^{−t}u(t) − 4e^{−2t}u(t)
The sampled input and output are then
The ratio of their z-transforms yields the transfer function of the digital filter as
HI(z) = Y(z)/X(z) = Y(z) = 4z/(z − e^{−ts}) − 4z/(z − e^{−2ts})
3. Ramp invariance: We choose x(t) = r(t) = tu(t). Then, X(s) = 1/s², and

Y(s) = 4/[s²(s + 1)(s + 2)] = −3/s + 2/s² + 4/(s + 1) − 1/(s + 2)   ⟹   y(t) = (−3 + 2t + 4e^{−t} − e^{−2t})u(t)
19.3 Response Matching
The z-transform of h[n] (which has the form αⁿu[n], where α = e^{−pts}) yields the transfer function H(z) of the digital filter as

H(z) = z/(z − e^{−pts}),   |z| > e^{−pts}        (19.9)
This relation suggests that we can go directly from H(s) to H(z), using the mapping

1/(s + p)   ⟹   z/(z − e^{−pts})        (19.10)
We can now extend this result to filters of higher order. If H(s) is in partial fraction form, we can obtain
simple expressions for impulse-invariant mapping. If H(s) has no repeated roots, it can be described as a sum
of first-order terms, using partial fraction expansion, and each term can be converted by the impulse-invariant
mapping to give
H(s) = Σ_{k=1}^{N} Ak/(s + pk)   ⟹   H(z) = Σ_{k=1}^{N} Ak z/(z − e^{−pk ts}),   ROC: |z| > e^{−|p|max ts}        (19.11)
Here, the region of convergence of H(z) is in terms of the largest pole magnitude |p|max of H(s).
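Equation (19.11) is mechanical to apply once H(s) is in partial fraction form; the Python sketch below maps H(s) = 4/[(s + 1)(s + 2)] = 4/(s + 1) − 4/(s + 2) with ts = 0.5 (the sampling interval is an assumed value):

```python
import cmath, math

ts = 0.5   # assumed sampling interval

def impulse_invariant(terms):
    # Each term (Ak, pk), meaning Ak/(s + pk), maps to Ak*z/(z - e^{-pk*ts}), Eq. (19.11)
    return [(A, cmath.exp(-p * ts).real) for A, p in terms]

digital = impulse_invariant([(4, 1), (-4, 2)])   # H(s) = 4/(s+1) - 4/(s+2)

def h_digital(n):
    # h[n] = sum of Ak * (e^{-pk*ts})^n
    return sum(A * alpha ** n for A, alpha in digital)

print(round(h_digital(3), 6))                       # equals h(n*ts) = 4e^{-1.5} - 4e^{-3}
print(all(abs(alpha) < 1 for _, alpha in digital))  # True: stable poles stay inside |z| = 1
```

The second line checks the stability statement made below: left half-plane poles of H(s) land inside the unit circle.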
If the denominator of H(s) also contains repeated roots, we start with a typical kth term Hk(s) with a root of multiplicity M, and find

Hk(s) = Ak/(s + pk)^M   ⟹   hk(t) = [Ak/(M − 1)!] t^{M−1} e^{−pk t} u(t)        (19.12)
The sampled version hk [n], and its z-transform, can then be found by the times-n property of the z-transform.
Similarly, quadratic terms corresponding to complex conjugate poles in H(s) may also be simplified to obtain
a real form. These results are listed in Table 19.1. Note that impulse-invariant design requires H(s) in partial
fraction form and yields a digital filter H(z) in the same form. It must be reassembled if we need a composite
rational function form. The left half-plane poles of H(s) (corresponding to pk > 0) map into poles of H(z) that lie inside the unit circle (corresponding to |z| = e^{−pk ts} < 1). Thus, a stable analog filter H(s) is transformed into a stable digital filter H(z).
Table 19.1 Impulse-Invariant Transformations (α = e^{−pts})

Distinct:           A/(s + p)   ⟹   Az/(z − α)
Complex conjugate:  Ae^{jθ}/(s + p + jq) + Ae^{−jθ}/(s + p − jq)   ⟹   [2Az² cos θ − 2Aαz cos(θ + qts)]/[z² − 2αz cos(qts) + α²]
Repeated twice:     A/(s + p)²   ⟹   A ts αz/(z − α)²
Repeated thrice:    A/(s + p)³   ⟹   0.5A ts² αz(z + α)/(z − α)³
(b) Convert H(s) = 4/[(s + 1)(s² + 4s + 5)] to H(z), using impulse invariance, with ts = 0.5 s.
The partial fraction form for H(s) is

H(s) = 2/(s + 1) + (−1 − j)/(s + 2 + j) + (−1 + j)/(s + 2 − j)
For the second term, we write K = (−1 − j) = √2 e^{−j3π/4} = Ae^{jθ}. Thus, A = √2 and θ = −3π/4. We also have p = 2, q = 1, and α = e^{−pts} = 1/e. With these values, Table 19.1 gives
H(z) = 2z/(z − 1/√e) + [2√2 z² cos(−3π/4) − 2√2 (z/e) cos(0.5 − 3π/4)]/[z² − 2(z/e) cos(0.5) + e^{−2}]
H(z)|_{z=1} = 1/(1 − e^{−1}) = 1.582        HM(z)|_{z=1} = 0.5 (1 + e^{−1})/(1 − e^{−1}) = 1.082
For unit dc gain, the transfer functions of the original and modified digital filter become
Figure E19.3B compares the response of H(s), H(z), H1 (z), HM (z), and HM 1 (z). It clearly reveals
the improvement due to each modification.
[Figure E19.3B Impulse-invariant designs for H(s) = 1/(s + 1) (dashed): magnitudes of H(z), H1(z), HM(z), and HM1(z) over 0 ≤ F ≤ 0.5, with dc gains of 1.582 for H(z) and 1.082 for HM(z).]
h(0) = lim_{s→∞} sH(s) = lim_{s→∞} (2s² − s)/(s² + 5s + 4) = lim_{s→∞} (2 − 1/s)/(1 + 5/s + 4/s²) = 2
The transfer function of the digital filter using impulse-invariant mapping was found in part (a) as

H(z) = 3z/(z − e^{−2}) − z/(z − e^{−0.5}) = (2z² − 1.6843z)/(z² − 0.7419z + 0.0831)
The modified transfer function is thus

HM(z) = H(z) − 0.5h(0) = 3z/(z − e^{−2}) − z/(z − e^{−0.5}) − 1 = (z² − 0.9424z − 0.0831)/(z² − 0.7419z + 0.0831)
The power P of z^{−P} in H(z) is P = N − M, the difference between the degrees of the denominator and numerator polynomials of H(s). The constant A is chosen to match the gains of H(s) and H(z) at some convenient frequency (typically, dc).
For complex roots, we can replace each conjugate pair using the mapping

(s + p − jq)(s + p + jq)   ⟹   [z² − 2ze^{−pts} cos(qts) + e^{−2pts}]/z²        (19.15)
Since poles in the left half of the s-plane are mapped inside the unit circle in the z-plane, the matched z-
transform preserves stability. It converts an all-pole analog system to an all-pole digital system but may not
preserve the frequency response of the analog system. As with the impulse-invariant mapping, the matched
z-transform also suffers from aliasing errors.
(b) First modification: Replace both zeros in H(z) (the term z 2 ) by (z + 1)2 :
H1(z) = A1(z + 1)²/[(z − e^{−ts})(z − e^{−2ts})] = A1(z + 1)²/[(z − e^{−0.5})(z − e^{−1})] = A1(z + 1)²/(z² − 0.9744z + 0.2231)
(c) Second modification: Replace only one zero in H(z) (the term z 2 ) by z + 1:
H2(z) = A2 z(z + 1)/[(z − e^{−ts})(z − e^{−2ts})] = A2 z(z + 1)/[(z − e^{−0.5})(z − e^{−1})] = A2 z(z + 1)/(z² − 0.9744z + 0.2231)
Comment: In each case, the constants (A, A1 , and A2 ) may be chosen for a desired gain.
Numerical differences approximate the slope (derivative) at a point, as illustrated for the backward-difference and forward-difference algorithms in Figure 19.3.
[Figure 19.3: the backward Euler algorithm estimates the slope from the samples at n − 1 and n; the forward Euler algorithm uses the samples at n and n + 1 (spacing ts).]
The mappings that result from the operations (also listed in Table 19.2) are based on comparing the ideal derivative operator H(s) = s with the transfer function H(z) of each difference algorithm, as follows:

Backward difference: y[n] = (x[n] − x[n − 1])/ts
Forward difference: y[n] = (x[n + 1] − x[n])/ts
Central difference: y[n] = (x[n + 1] − x[n − 1])/(2ts)
z = 0.5 + 0.5 (1 + sts)/(1 − sts)   or   z − 0.5 = 0.5 (1 + sts)/(1 − sts)        (19.19)

For s = jω, this gives

z − 0.5 = 0.5 (1 + jωts)/(1 − jωts)   ⟹   |z − 0.5| = 0.5        (19.20)
Thus, the jω-axis is mapped into a circle of radius 0.5, centered at z = 0.5, as shown in Figure 19.4. Since this region is within the unit circle, the mapping preserves stability. It does, however, restrict the pole locations of the digital filter. Since the frequencies are mapped into a smaller circle, this mapping is a good approximation only in the vicinity of z = 1 or Ω ≈ 0 (where it approximates the unit circle), which implies high sampling rates.
19.5 Mappings from Discrete Algorithms
[Figure 19.4 Mapping region for the mapping based on the backward difference: the jω-axis maps to a circle of radius 0.5 centered at z = 0.5 in the z-plane.]
[Figure 19.5 Mapping region for the mapping based on the forward difference.]
(b) We convert the stable analog filter H(s) = 1/(s + α), α > 0, to a digital filter, using the forward-difference mapping s = (z − 1)/ts to obtain

H(z) = 1/[α + (z − 1)/ts] = ts/[z − (1 − αts)]
The digital filter has a pole at z = 1 − αts and is thus stable only if 0 < αts < 2 (to ensure |z| < 1). Since α > 0 and ts > 0, we are assured a stable system only if α < 2/ts. This implies that the sampling rate S must be chosen to ensure that S > 0.5α.
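The stability bound is easy to probe numerically; a minimal Python check of the pole z = 1 − αts (the values of α and ts are arbitrary illustrations):

```python
def fd_pole(alpha, ts):
    # The forward difference maps H(s) = 1/(s + alpha) to a pole at z = 1 - alpha*ts
    return 1 - alpha * ts

# With alpha = 3, stability requires ts < 2/alpha (equivalently S > 0.5*alpha = 1.5)
print(abs(fd_pole(3, 0.5)))   # 0.5 -> stable
print(abs(fd_pole(3, 1.0)))   # 2.0 -> unstable, even though H(s) itself is stable
```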
(c) We convert the stable analog filter H(s) = 1/(s + α), α > 0, to a digital filter, using the central-difference mapping s = (z² − 1)/(2zts) to obtain

H(z) = 1/[α + (z² − 1)/(2zts)] = 2zts/(z² + 2αts z − 1)
The digital filter has a pair of poles at z = −αts ± √[(αts)² + 1]. The magnitude of one of the poles is always greater than unity, and the digital filter is thus unstable for any α > 0.
Comment: Clearly, from a stability viewpoint, only the mapping based on the backward difference is useful for the filter H(s) = 1/(s + α). In fact, this mapping preserves stability for any stable analog filter.
The mappings resulting from these operators are also listed in Table 19.3 and are based on comparing
the transfer function H(s) = 1/s of the ideal integrator with the transfer function H(z) of each integration
algorithm, as follows:
Rectangular rule: y[n] = y[n − 1] + ts x[n]

Y(z) = z^{−1}Y(z) + ts X(z)   ⟹   H(z) = Y(z)/X(z) = zts/(z − 1)   ⟹   s = (z − 1)/(zts)        (19.22)
We remark that the rectangular algorithm for integration and the backward difference for the derivative generate identical mappings.
[Figures: the rectangular and trapezoidal integration rules, and the bilinear transformation, which maps the left half of the s-plane to the interior of the unit circle in the z-plane.]
Discrete difference and integration algorithms are good approximations only for small digital frequencies (F < 0.1, say) or high sampling rates S (small ts) that may be well in excess of the Nyquist rate. This is why the sampling rate is a critical factor in the frequency-domain performance of these algorithms. Another factor is stability. For example, the mapping based on the central-difference algorithm is not very useful because it always produces an unstable digital filter. Algorithms based on trapezoidal integration and the backward difference are popular because they always produce stable digital filters.
The pole location of H(z) is z = (2 − αts)/(2 + αts). Since this is always less than unity in magnitude (if α > 0 and ts > 0), we have a stable H(z).
(b) Simpson's algorithm for numerical integration finds y[n] over two time steps from y[n − 2] and is given by

y[n] = y[n − 2] + (ts/3)(x[n] + 4x[n − 1] + x[n − 2])
Derive a mapping based on Simpson's rule, use it to convert H(s) = 1/(s + α), α > 0, to a digital filter H(z), and comment on the stability of H(z).
The transfer function HS(z) of this algorithm is found as follows:

Y(z) = z^{−2}Y(z) + (ts/3)(1 + 4z^{−1} + z^{−2})X(z)   ⟹   HS(z) = ts(z² + 4z + 1)/[3(z² − 1)]
Comparison with the transfer function of the ideal integrator H(s) = 1/s gives

s = (3/ts) (z² − 1)/(z² + 4z + 1)
H(z) = ts(z² + 4z + 1)/[(3 + αts)z² + 4αts z − (3 − αts)]
The poles of H(z) are the roots of (3 + αts)z² + 4αts z − (3 − αts) = 0. The magnitude of one of these roots is always greater than unity (if α > 0 and ts > 0), and H(z) is thus an unstable filter.
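The claim holds for any α > 0 and ts > 0; this Python sketch solves the quadratic for a few (arbitrarily chosen) parameter pairs and checks that one pole always escapes the unit circle:

```python
import math

def simpson_poles(alpha, ts):
    # Poles of H(z) = ts(z^2 + 4z + 1)/((3 + a*ts)z^2 + 4*a*ts*z - (3 - a*ts)), a = alpha
    a2 = 3 + alpha * ts
    a1 = 4 * alpha * ts
    a0 = -(3 - alpha * ts)
    d = math.sqrt(a1 * a1 - 4 * a2 * a0)   # discriminant = 12(a*ts)^2 + 36 > 0, so roots are real
    return [(-a1 + d) / (2 * a2), (-a1 - d) / (2 * a2)]

for alpha, ts in [(1, 0.1), (2, 0.5), (5, 0.01)]:
    p = simpson_poles(alpha, ts)
    print(max(abs(r) for r in p) > 1)   # True in every case: one pole outside |z| = 1
```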
19.6 The Bilinear Transformation
This is a nonlinear relation between the analog frequency ω and the digital frequency Ω. When ω = 0, Ω = 0, and as ω → ∞, Ω → π. It is thus a one-to-one mapping that nonlinearly compresses the analog frequency range −∞ < f < ∞ to the digital frequency range −π < Ω < π. It avoids the effects of aliasing at the expense of distorting, compressing, or warping the analog frequencies, as shown in Figure 19.8.
The higher the frequency, the more severe is the warping. We can compensate for this warping (but not eliminate it) if we prewarp the frequency specifications before designing the analog system H(s) or applying the bilinear transformation. Prewarping of the frequencies prior to analog design is just a scaling (stretching) operation based on the inverse of the warping relation, and is given by

ω = C tan(0.5Ω)        (19.31)
Figure 19.9 shows a plot of ω versus Ω for various values of C, compared with the linear relation ω = Ω. The analog and digital frequencies always show a match at the origin (ω = Ω = 0) and at one other value dictated by the choice of C.
We point out that the nonlinear stretching effect of the prewarping often results in a filter of lower order, especially if the sampling frequency is not high enough. For high sampling rates, it turns out that the prewarping has little effect and may even be redundant.
[Figure 19.8 The warping relation ω = C tan(Ω/2): equal-width intervals of analog frequency map to compressed intervals of digital frequency.]
[Figure 19.9 The relation ω = C tan(Ω/2) for C = 0.5, 1, and 2, compared with the linear relation ω = Ω; for all values of C, the curves match at the origin.]
The popularity of the bilinear transformation stems from its simple, stable, one-to-one mapping. It avoids problems caused by aliasing and can thus be used even for highpass and bandstop filters. Though it does suffer from warping effects, it can also compensate for these effects, using a simple relation.
1. We pick C by matching ωA and the prewarped frequency ΩD, and obtain H(z) from H(s), using the transformation s = C(z − 1)/(z + 1). This process may be summarized as follows:

ωA = C tan(0.5ΩD)   ⟹   C = ωA/tan(0.5ΩD)   ⟹   H(z) = H(s)|_{s=C(z−1)/(z+1)}        (19.32)
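As a concrete Python sketch of this recipe (the prototype H(s) = 1/(s + 1) and design frequency ΩD = 0.5π are assumed for illustration):

```python
import cmath, math

# Bilinear design of H(s) = 1/(s + 1), matched at the prewarped frequency per Eq. (19.32)
wA, OmegaD = 1.0, 0.5 * math.pi
C = wA / math.tan(0.5 * OmegaD)     # C = 1 here, since tan(pi/4) = 1

def Hd(z):
    s = C * (z - 1) / (z + 1)       # the bilinear transformation s = C(z - 1)/(z + 1)
    return 1 / (s + 1)

z = cmath.exp(1j * OmegaD)
print(round(abs(Hd(z)), 4))         # 0.7071: matches |H(j*wA)| = 1/sqrt(2) exactly at OmegaD
```

The exact match at the design frequency, with warping everywhere else, is the essential trade-off of the bilinear transformation.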
2. We pick a convenient value for C (say, C = 1). This actually matches the response at an arbitrary prewarped frequency ωx given by

ωx = tan(0.5ΩD)        (19.33)

Next, we frequency scale H(s) to H1(s) = H(sωA/ωx), and obtain H(z) from H1(s), using the transformation s = (z − 1)/(z + 1) (with C = 1). This process may be summarized as follows (for C = 1):

ωx = tan(0.5ΩD)   ⟹   H1(s) = H(s)|_{s→sωA/ωx}   ⟹   H(z) = H1(s)|_{s=(z−1)/(z+1)}        (19.34)
The two methods yield an identical digital filter H(z). The first method does away with the scaling of H(s),
and the second method allows a convenient choice for C.
Figure E19.7(a) compares the magnitude of H(s) and H(z). The linear phase of the Bessel filter is not
preserved during the transformation (unless the sampling frequency is very high).
[Figure E19.7 Magnitude of the analog and digital filters for Example 19.7: (a) the Bessel filter H(s) and digital filter H(z); (b) the notch filter H(s) and digital filter H(z), which match perfectly at 60 Hz.]
(b) The twin-T notch filter H(s) = (s² + 1)/(s² + 4s + 1) has a notch frequency ω0 = 1 rad/s. Design a digital notch filter with S = 240 Hz and a notch frequency f0 = 60 Hz.
The digital notch frequency is Ω = 2πf0/S = 0.5π. We pick C by matching the analog notch frequency ω0 and the prewarped digital notch frequency to get

ω0 = C tan(0.5Ω)   ⟹   1 = C tan(0.25π)   ⟹   C = 1
Finally, we convert H(s) to H(z), using s = C(z − 1)/(z + 1) = (z − 1)/(z + 1), to get

H(z) = H(s)|_{s=(z−1)/(z+1)} = [(z − 1)² + (z + 1)²]/[(z − 1)² + 4(z² − 1) + (z + 1)²] = (z² + 1)/(3z² − 1)
We confirm that H(s) = 0 at s = ±jω0 = ±j and H(z) = 0 at z = e^{±jΩ} = e^{±jπ/2} = ±j. Figure E19.7(b) shows the magnitude of the two filters and the perfect match at f = 60 Hz (or F = 0.25).
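The confirmation can also be done numerically; a short Python check of H(z) = (z² + 1)/(3z² − 1) at dc, at 60 Hz, and at 120 Hz:

```python
import cmath, math

def H(z):
    # Digital notch filter from Example 19.7(b): H(z) = (z^2 + 1)/(3z^2 - 1)
    return (z * z + 1) / (3 * z * z - 1)

S = 240
for f in (0, 60, 120):
    z = cmath.exp(2j * math.pi * f / S)
    print(f, round(abs(H(z)), 4))   # unit gain at 0 and 120 Hz, zero gain at the 60-Hz notch
```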
(b) Use H(z) to design a bandpass filter with band edges of 1 kHz and 3 kHz.
The various digital frequencies are 1 = 0.25, 2 = 0.75, 2 1 = 0.5, and 2 + 1 = .
From Table 19.4, the parameters needed for the LP2BP transformation are

K = tan(π/4)/tan(π/4) = 1        β = cos(π/2)/cos(π/4) = 0        A1 = 0        A2 = 0
The LP2BP transformation is thus z → −z² and yields

HBP(z) = 3(z² − 1)² / (31z⁴ + 26z² + 7)
(c) Use H(z) to design a bandstop filter with band edges of 1.5 kHz and 2.5 kHz.
Once again, we need Ω1 = 3π/8, Ω2 = 5π/8, Ω2 − Ω1 = π/4, and Ω2 + Ω1 = π.
From Table 19.4, the LP2BS transformation requires the parameters

K = tan(π/8)/tan(π/4) = 0.4142        β = cos(π/2)/cos(π/8) = 0        A1 = 0        A2 = 0.4142

The LP2BS transformation is thus z → (z² + 0.4142)/(0.4142z² + 1) and yields

HBS(z) = 0.28(z² + 1)² / (z⁴ + 0.0476z² + 0.0723)
Figure E19.8 compares the magnitudes of each filter designed in this example.
Figure E19.8 Digital-to-digital transformations of a lowpass digital filter: the LP, HP, BP, and BS filters for Example 19.8
Similarly, if we wish to use the bilinear transformation to design a second-order digital notch (bandstop)
filter with a 3-dB notch bandwidth of ΔΩ and a notch frequency of Ω0, we once again start with the lowpass
19.7 Spectral Transformations for IIR Filters 697
analog prototype H(s) = 1/(s + 1), and apply the A2D LP2BS transformation, to obtain

HBS(z) = [1/(1 + C)] · [z² − 2βz + 1] / [z² − (2β/(1 + C))z + (1 − C)/(1 + C)]        β = cos Ω0        C = tan(0.5ΔΩ)        (19.37)
If either design calls for an A-dB bandwidth of ΔΩ, the constant C is replaced by KC, where

K = [1/(10^(0.1A) − 1)]^(1/2)        (A in dB)        (19.38)
This is equivalent to denormalizing the lowpass prototype such that its gain corresponds to an attenuation
of A decibels at unit radian frequency. For a 3-dB bandwidth, we obtain K = 1, as expected. These design
relations prove quite helpful in the quick design of notch and peaking filters.
The center frequency Ω0 is used to determine the parameter β = cos Ω0. If only the band edges Ω1 and
Ω2 are specified, but the center frequency Ω0 is not, β may also be found from the alternative relation of
Table 19.5 in terms of Ω1 and Ω2. The center frequency of the designed filter will then be based on the
geometric symmetry of the prewarped frequencies and can be computed from

tan(0.5Ω0) = √[tan(0.5Ω1) tan(0.5Ω2)]        (19.39)
In fact, the digital band edges Ω1 and Ω2 do not show geometric symmetry with respect to the center
frequency Ω0. We can find Ω1 and Ω2 in terms of ΔΩ and Ω0 by equating the two expressions for finding β
(in Table 19.5) to obtain

β = cos Ω0 = cos[0.5(Ω2 + Ω1)] / cos[0.5(Ω2 − Ω1)]        (19.40)

With ΔΩ = Ω2 − Ω1, we get
Figure E19.9 Response of bandpass filters for Example 19.9(a and b): (a) bandpass filter with f0 = 6 kHz and Δf = 5 kHz; (b) bandpass filter with band edges f1 = 4 kHz and f2 = 9 kHz
(b) Let us design a peaking (bandpass) filter with 3-dB band edges of 4 kHz and 9 kHz. The sampling
frequency is 25 kHz.
The digital frequencies are Ω1 = 2π(4/25) = 0.32π, Ω2 = 2π(9/25) = 0.72π, and ΔΩ = 0.4π.
We find C = tan(0.5ΔΩ) = 0.7265 and β = cos[0.5(Ω2 + Ω1)]/cos[0.5(Ω2 − Ω1)] = −0.0776. Substituting
these into the form for the required filter, we obtain

H(z) = 0.4208(z² − 1) / (z² + 0.0899z + 0.1584)
Figure E19.9(b) shows the magnitude spectrum. The band edges are at 4 kHz and 9 kHz, as expected.
The center frequency, however, is at 6.56 kHz. This is because the digital center frequency Ω0 must be
computed from β = cos Ω0 = −0.0776. This gives Ω0 = cos⁻¹(−0.0776) = 1.6485. This corresponds
to f0 = SΩ0/(2π) = 6.5591 kHz.
Comment: We could also have computed Ω0 from

tan(0.5Ω0) = √[tan(0.5Ω1) tan(0.5Ω2)] = 1.0809

Then, Ω0 = 2 tan⁻¹(1.0809) = 1.6485, as before.
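The numbers of this example follow directly from Eq. (19.39), as the short Python check below confirms (using the band edges of part (b)):

```python
import math

S = 25000.0                      # sampling rate (Hz)
W1 = 2 * math.pi * 4000.0 / S    # lower band edge, 0.32*pi
W2 = 2 * math.pi * 9000.0 / S    # upper band edge, 0.72*pi

# Geometric symmetry of the prewarped band edges fixes the center frequency:
# tan(0.5*W0) = sqrt(tan(0.5*W1) * tan(0.5*W2))
t0 = math.sqrt(math.tan(0.5 * W1) * math.tan(0.5 * W2))
W0 = 2 * math.atan(t0)
f0 = S * W0 / (2 * math.pi)

assert abs(t0 - 1.0809) < 1e-3
assert abs(W0 - 1.6485) < 1e-3
assert abs(f0 - 6559.1) < 1.0    # center frequency near 6.56 kHz, not 6.5 kHz
```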
(c) We design a peaking filter with a center frequency of 40 Hz and a 6-dB bandwidth of 2 Hz, operating
at a sampling rate of 200 Hz.
We compute ΔΩ = 2π(2/200) = 0.02π, Ω0 = 2π(40/200) = 0.4π, and β = cos Ω0 = 0.3090.
Since we are given the 6-dB bandwidth, we compute K and C as follows:

K = [1/(10^(0.1A) − 1)]^(1/2) = [1/(10^(0.6) − 1)]^(1/2) = 0.577        C = K tan(0.5ΔΩ) = 0.0182
Substituting these into the form for the required filter, we obtain

H(z) = 0.0179(z² − 1) / (z² − 0.6070z + 0.9642)
Figure E19.9C shows the magnitude spectrum. The blowup reveals that the 6-dB bandwidth (where
the gain is 0.5) equals 2 Hz, as required.
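The same check can be made numerically. This Python sketch (the text's examples use Matlab) confirms the unit peak gain at 40 Hz and the gain of 0.5 at the 6-dB band edges:

```python
import cmath, math

S = 200.0  # sampling rate (Hz)

def H(z):
    # Peaking filter designed in part (c): 6-dB bandwidth of 2 Hz at 40 Hz
    return 0.0179 * (z * z - 1) / (z * z - 0.6070 * z + 0.9642)

gain = lambda f: abs(H(cmath.exp(2j * math.pi * f / S)))

assert abs(gain(40.0) - 1.0) < 0.01   # unit gain at the center frequency
assert abs(gain(39.0) - 0.5) < 0.01   # gain of 0.5 (6 dB down) at the edges
assert abs(gain(41.0) - 0.5) < 0.01
assert gain(0.0) < 1e-9               # bandpass: zero gain at dc
```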
Figure E19.9C Response of peaking filter for Example 19.9(c)
Figure E19.10A Simulated ECG signal with 60-Hz interference for Example 19.10
If we design a high-Q notch filter with Q = 50 and a notch at f0 = 60 Hz, we have a notch bandwidth of
Δf = f0/Q = 1.2 Hz. The digital notch frequency is Ω0 = 2πf0/S = 2π(60/300) = 0.4π, and the digital
bandwidth is ΔΩ = 2πΔf/S = 2π(1.2/300) = 0.008π.
We find C = tan(0.5ΔΩ) = 0.0126 and β = cos Ω0 = 0.3090. Substituting these into the form for the
notch filter, we obtain

H1(z) = 0.9876(z² − 0.6180z + 1) / (z² − 0.6104z + 0.9752)
If we instead design a notch filter with the lower Q = 5, the bandwidth increases to Δf = 12 Hz (so that ΔΩ = 0.08π and C = tan(0.5ΔΩ) = 0.1263), and we obtain

H2(z) = 0.8878(z² − 0.6180z + 1) / (z² − 0.5487z + 0.7757)
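Both designs are easily checked by evaluating the transfer functions on the unit circle; the comparison at 58 Hz (our own choice of test frequency) also shows how much narrower the high-Q notch is. A Python sketch:

```python
import cmath, math

S = 300.0  # sampling rate (Hz)

def H1(z):  # Q = 50 notch
    return 0.9876 * (z * z - 0.6180 * z + 1) / (z * z - 0.6104 * z + 0.9752)

def H2(z):  # Q = 5 notch
    return 0.8878 * (z * z - 0.6180 * z + 1) / (z * z - 0.5487 * z + 0.7757)

gain = lambda H, f: abs(H(cmath.exp(2j * math.pi * f / S)))

for H in (H1, H2):
    assert gain(H, 60.0) < 0.01              # deep null at 60 Hz
    assert abs(gain(H, 0.0) - 1.0) < 1e-3    # unit gain at dc

# The higher-Q filter has the narrower notch: at 58 Hz it has recovered
# most of its gain, while the low-Q filter is still attenuating strongly.
assert gain(H1, 58.0) > gain(H2, 58.0)
```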
Figure E19.10B shows the magnitude spectrum of the two filters. Naturally, the filter H1 (z) (with the
higher Q) exhibits the sharper notch.
Figure E19.10B Response of the notch filters for Example 19.10: (a) 60-Hz notch filter with Q = 50; (b) 60-Hz notch filter with Q = 5
The filtered ECG signal corresponding to these two notch filters is shown in Figure E19.10C. Although
both filters are effective in removing the 60-Hz noise, the filter H2(z) (with the lower Q) shows a much
shorter start-up transient (because the highly oscillatory transient response of the high-Q filter H1(z) takes
much longer to reach steady state).
Figure E19.10C Output of the notch filters for Example 19.10: (a) filtered ECG signal using the 60-Hz notch filter with Q = 50; (b) filtered ECG signal using the 60-Hz notch filter with Q = 5
They employ a bank of (typically second-order) bandpass filters covering the audio frequency spectrum, with
a fixed bandwidth and center frequency for each range. Only the gain of each filter can be adjusted
by the user. Each filter isolates a selected frequency range and provides almost zero gain elsewhere, so
the individual sections are connected in parallel, as shown in Figure 19.10.
Figure 19.10 A graphic equalizer (left) and the display panel (right)
The input signal is split into as many channels as there are frequency bands, and the weighted sum of
the outputs of each filter yields the equalized signal. A control panel, usually calibrated in decibels, allows
for gain adjustment by sliders, as illustrated in Figure 19.10. The slider settings provide a rough visual
indication of the equalized response, hence the name graphic equalizer. The design of each second-order
bandpass section is based on its center frequency and its bandwidth or quality factor Q. A typical set of
center frequencies for a ten-band equalizer is [31.5, 63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000] Hz. A
typical range for the gain of each filter is ±12 dB (or 0.25 times to 4 times the nominal gain).
Figure E19.11 Frequency response of the audio equalizer for Example 19.11 (the equalized response is shown in dB on a log frequency scale)
The center frequency and bandwidth for the filters are given by

Ω0 = cos⁻¹ β        ΔΩ = cos⁻¹[2α/(1 + α²)], where α = (1 − C)/(1 + C)        (19.43)
It is interesting to note that HBS(z) = 1 − HBP(z). A tunable second-order equalizer stage HPAR(z) consists
of the bandpass filter HBP(z), with a peak gain of G, in parallel with the bandstop filter HBS(z):

HPAR(z) = G HBP(z) + HBS(z) = 1 + (G − 1)HBP(z)
For G = 1, we have HPAR(z) = 1, and the gain is unity for all frequencies. The parameters α and β allow
us to adjust the bandwidth and the center frequency, respectively. The response of this filter is shown for
various values of these parameters in Figure 19.12.
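The complementary relation and the equalizer-stage identity can be confirmed numerically. The Python sketch below uses the second-order bandpass and bandstop forms developed earlier in this section, with arbitrary (hypothetical) values of C, β, and G:

```python
import cmath, math

# Hypothetical parameter values for illustration only
C, beta, G = 0.3, 0.5, 4.0

def HBP(z):  # second-order bandpass (peaking) section
    return (C / (1 + C)) * (z * z - 1) / (
        z * z - (2 * beta / (1 + C)) * z + (1 - C) / (1 + C))

def HBS(z):  # second-order bandstop (notch) section, same denominator
    return (1 / (1 + C)) * (z * z - 2 * beta * z + 1) / (
        z * z - (2 * beta / (1 + C)) * z + (1 - C) / (1 + C))

def HPAR(z):  # equalizer stage: bandpass (gain G) in parallel with bandstop
    return G * HBP(z) + HBS(z)

for F in [0.05, 0.15, 0.3, 0.45]:
    z = cmath.exp(2j * math.pi * F)
    # complementary pair: HBS(z) = 1 - HBP(z)
    assert abs(HBS(z) + HBP(z) - 1.0) < 1e-12
    # so the stage reduces to HPAR(z) = 1 + (G - 1) HBP(z)
    assert abs(HPAR(z) - (1 + (G - 1) * HBP(z))) < 1e-12
```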
Figure 19.12 Response of the tunable second-order equalizer stage for various values of the gain G and the parameters α and β
Figure 19.13 The direct approach: the lowpass analog prototype HP(s) (with ΩC = 1 rad/s) undergoes an analog spectral transformation to the analog filter H(s), which an s-to-z mapping converts to the digital filter H(z)
A major disadvantage of this approach is that it cannot be used with mappings that suffer from aliasing
problems (such as the impulse-invariant mapping) to design highpass or bandstop filters.
The second, indirect approach, illustrated in Figure 19.14, tries to overcome this problem by designing
only the lowpass prototype HP (s) in the analog domain. This is followed by the required mapping to obtain
a digital lowpass prototype HP (z). The final step is the spectral (D2D) transformation of HP (z) to the
required digital filter H(z).
This approach allows us to use any mappings, including those (such as response invariance) that may
otherwise lead to excessive aliasing for highpass and bandstop filters. Designing HP (z) also allows us to
match its dc magnitude with HP (s) for subsequent comparison.
A third approach that applies only to the bilinear transformation is illustrated in Figure 19.15. We
prewarp the frequencies, design an analog lowpass prototype (from prewarped specifications), and apply
A2D transformations to obtain the required digital filter H(z).
Figure 19.15 The bilinear approach: the prewarped lowpass analog prototype HP(s) (with ΩC = 1 rad/s) is converted to the digital filter H(z) by an A2D transformation
A Step-by-Step Approach
Given the passband and stopband edges, the passband and stopband attenuation, and the sampling frequency
S, a standard recipe for the design of IIR filters is as follows:
1. Normalize (divide) the design band edges by S. This allows us to use a sampling interval ts = 1 in
subsequent design. For bilinear design, we also prewarp the normalized band edges.
2. Use the normalized band edges and attenuation specifications to design an analog lowpass prototype
HP(s) whose cutoff frequency is ΩC = 1 rad/s.
3. Apply the chosen mapping (with ts = 1) to convert HP(s) to a digital lowpass prototype filter HP(z)
with ΩD = 1.
4. Convert HP(z) to the required digital filter H(z), using the appropriate D2D spectral transformation.
5. For bilinear design, we can also convert HP(s) to H(z) directly (using A2D transformations).
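For a first-order lowpass design, the recipe collapses to a few lines. The Python sketch below (hypothetical cutoff and sampling frequencies of our own choosing; the text's examples use Matlab) carries out steps 1, 2, and 5, and verifies that prewarping places the half-power frequency exactly at the design cutoff:

```python
import cmath, math

# Hypothetical specifications: cutoff fc = 1 kHz at S = 8 kHz
S, fc = 8000.0, 1000.0

# Step 1: normalize the band edge and prewarp it (bilinear design, ts = 1).
Wd = 2 * math.pi * fc / S          # normalized digital cutoff (rad/sample)
nu = math.tan(0.5 * Wd)            # prewarped analog cutoff

# Step 2: analog lowpass prototype with cutoff 1 rad/s.
def HP(s):
    return 1.0 / (s + 1.0)

# Step 5: scale the prototype to the prewarped cutoff and apply the
# bilinear mapping s = (z - 1)/(z + 1) directly.
def H(z):
    s = (z - 1) / (z + 1)
    return HP(s / nu)

# On the unit circle, (z-1)/(z+1) = j*tan(W/2), so at W = Wd the scaled
# argument is exactly j, and the gain is |1/(1 + j)| = 1/sqrt(2).
g = abs(H(cmath.exp(2j * math.pi * fc / S)))
assert abs(g - 1.0 / math.sqrt(2.0)) < 1e-9
```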
HP(s) = 0.1634 / (s⁴ + 0.7162s³ + 1.2565s² + 0.5168s + 0.2058)

HBP(s) = 0.34s⁴ / (s⁸ + 0.86s⁷ + 10.87s⁶ + 6.75s⁵ + 39.39s⁴ + 15.27s³ + 55.69s² + 9.99s + 26.25)
Finally, we transform the bandpass filter HBP(s) to the digital filter H(z), using s → 2(z − 1)/(z + 1).
Figure E19.12A Bandpass filter for Example 19.12 designed by the bilinear transformation, with and without prewarping, compared with the analog filter
C = tan[0.5(Ω2 − Ω1)] = 0.3839        β = cos[0.5(Ω2 + Ω1)] / cos[0.5(Ω2 − Ω1)] = 0.277
1. Use the normalized (but unwarped) band edges [0.89, 0.94, 1.68, 2.51] to design an analog lowpass
prototype HP(s) with ΩC = 1 rad/s (fortunately, the unwarped and prewarped specifications yield
the same HP(s) for the specifications of this problem).
2. Convert HP(s) to HP(z) with ΩD = 1, using the chosen mapping, with ts = 1. For the backward-difference,
for example, we would use s = (z − 1)/zts = (z − 1)/z. To use the impulse-invariant
mapping, we would have to first convert HP(s) to partial fraction form.
3. Convert HP(z) to H(z), using the D2D LP2BP transformation with ΩD = 1, and the unwarped
passband edges [Ω1, Ω2] = [0.94, 1.68].
Figure E19.12C compares the response of two such designs, using the impulse-invariant mapping and
the backward-difference mapping (both with gain matching at dc). The design based on the backward-difference
mapping shows a poor match to the analog filter.
Figure E19.12C Bandpass filter for Example 19.12 designed by impulse invariance and backward difference
H(s) = (s + 0.5)(s + 1.5) / [(s + 1)(s + 2)(s + 4.5)(s + 8)(s + 12)]        (19.45)
Bilinear transformation of H(s) with ts = 0.01 s yields the digital transfer function H(z) = B(z)/A(z), whose
denominator coefficients are computed to double precision (Ak) and also truncated to seven significant digits (Atk).
The poles of H(z) all lie within the unit circle, and the designed filter is thus stable. However, if we use the
truncated coecients Atk to compute the roots, the filter becomes unstable because one pole moves out of
the unit circle! The bottom line is that stability is an important issue in the design of IIR digital filters.
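The effect is easy to demonstrate on a small hypothetical example (not the fifth-order filter above): a quadratic factor with poles at z = 0.99 and z = 0.98 loses stability when its constant coefficient is truncated to two digits. A Python sketch:

```python
import math

# Roots of z^2 + a1*z + a0 via the quadratic formula (real roots assumed)
def poles(a1, a0):
    d = math.sqrt(a1 * a1 - 4 * a0)
    return ((-a1 + d) / 2, (-a1 - d) / 2)

# Designed denominator: z^2 - 1.97 z + 0.9702, poles at z = 0.99 and 0.98
p1, p2 = poles(-1.97, 0.9702)
assert max(abs(p1), abs(p2)) < 1.0          # stable as designed

# Truncating the constant coefficient (0.9702 -> 0.97) moves a pole
# onto the unit circle: the filter is no longer stable.
q1, q2 = poles(-1.97, 0.97)
assert max(abs(q1), abs(q2)) >= 1.0 - 1e-9
```

The closer the poles cluster near the unit circle (as in sharp, high-order IIR designs), the less coefficient truncation it takes to destabilize the filter.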
CHAPTER 19 PROBLEMS
DRILL AND REINFORCEMENT
19.1 (Response Invariance) Consider the analog filter H(s) = 1/(s + 2).
(a) Convert H(s) to a digital filter H(z), using impulse invariance. Assume that the sampling
frequency is S = 2 Hz.
(b) Will the impulse response h[n] match the impulse response h(t) of the analog filter at the
sampling instants? Should it? Explain.
(c) Will the step response s[n] match the step response s(t) of the analog filter at the sampling
instants? Should it? Explain.
19.2 (Response Invariance) Consider the analog filter H(s) = 1/(s + 2).
(a) Convert H(s) to a digital filter H(z), using step invariance at a sampling frequency of S = 2 Hz.
(b) Will the impulse response h[n] match the impulse response h(t) of the analog filter at the
sampling instants? Should it? Explain.
(c) Will the step response s[n] match the step response s(t) of the analog filter at the sampling
instants? Should it? Explain.
19.3 (Response Invariance) Consider the analog filter H(s) = 1/(s + 2).
(a) Convert H(s) to a digital filter H(z), using ramp invariance at a sampling frequency of S = 2 Hz.
(b) Will the impulse response h[n] match the impulse response h(t) of the analog filter at the
sampling instants? Should it? Explain.
(c) Will the step response s[n] match the step response s(t) of the analog filter at the sampling
instants? Should it? Explain.
(d) Will the response v[n] to a unit ramp match the unit-ramp response v(t) of the analog filter at
the sampling instants? Should it? Explain.
19.4 (Response Invariance) Consider the analog filter H(s) = (s + 1)/[(s + 1)² + π²].
(a) Convert H(s) to a digital filter H(z), using impulse invariance. Assume that the sampling
frequency is S = 2 Hz.
(b) Convert H(s) to a digital filter H(z), using invariance to the input x(t) = e^(−t)u(t) at a sampling
frequency of S = 2 Hz.
19.5 (Impulse Invariance) Use the impulse-invariant transformation with ts = 1 s to transform the
following analog filters to digital filters.
(a) H(s) = 1/(s + 2)        (b) H(s) = 2/(s + 1) + 2/(s + 2)        (c) H(s) = 1/[(s + 1)(s + 2)]
19.6 (Impulse-Invariant Design) We are given the analog lowpass filter H(s) = 1/(s + 1), whose cutoff
frequency is known to be 1 rad/s. It is required to use this filter as the basis for designing a digital
filter by the impulse-invariant transformation. The digital filter is to have a cuto frequency of 50 Hz
and operate at a sampling frequency of 200 Hz.
(a) What is the transfer function H(z) of the digital filter if no gain matching is used?
(b) What is the transfer function H(z) of the digital filter if the gains of the analog filter and digital
filter are matched at dc? Does the gain of the two filters match at their respective cutoff
frequencies?
(c) What is the transfer function H(z) of the digital filter if the gain of the analog filter at its cutoff
frequency (1 rad/s) is matched to the gain of the digital filter at its cutoff frequency (50 Hz)?
Does the gain of the two filters match at dc?
19.7 (Matched z-Transform) Use the matched z-transform s + α ⇒ (z − e^(−αts))/z, with ts = 0.5 s and
gain matching at dc, to transform each analog filter H(s) to a digital filter H(z).
(a) H(s) = 1/(s + 2)        (b) H(s) = 1/[(s + 1)(s + 2)]
(c) H(s) = 1/(s + 1) + 1/(s + 2)        (d) H(s) = (s + 1)/[(s + 1)² + π²]
19.8 (Backward Euler Algorithm) The backward Euler algorithm for numerical integration is given
by y[n] = y[n 1] + ts x[n].
(a) Derive a mapping rule for converting an analog filter to a digital filter, based on this algorithm.
(b) Apply the mapping to convert the analog filter H(s) = 4/(s + 4) to a digital filter H(z), using a
sampling interval of ts = 0.5 s.
19.9 (Mapping from Difference Algorithms) Consider the analog filter H(s) = 1/(s + α).
(a) For what values of α is this filter stable?
(b) Convert H(s) to a digital filter H(z), using the mapping based on the forward difference at a
sampling rate S. Is H(z) always stable if H(s) is stable?
(c) Convert H(s) to a digital filter H(z), using the mapping based on the backward difference at a
sampling rate S. Is H(z) always stable if H(s) is stable?
19.10 (Bilinear Transformation) Consider the lowpass analog Bessel filter H(s) = 3/(s² + 3s + 3).
(a) Use the bilinear transformation to convert this analog filter H(s) to a digital filter H(z) at a
sampling rate of S = 2 Hz.
(b) Use H(s) and the bilinear transformation to design a digital lowpass filter H(z) whose gain at
f = 20 kHz matches the gain of H(s) at Ω = 3 rad/s. The sampling frequency is S = 80 kHz.
19.11 (Bilinear Transformation) Consider the analog filter H(s) = s/(s² + s + 1).
(a) What type of filter does H(s) describe?
(b) Use H(s) and the bilinear transformation to design a digital filter H(z) operating at S = 1 kHz
such that its gain at 250 Hz matches the gain of H(s) at Ω = 1 rad/s. What type of filter does
H(z) describe?
(c) Use H(s) and the bilinear transformation to design a digital filter H(z) operating at S = 10 Hz
such that gains of H(z) and H(s) match at 1 Hz. What type of filter does H(z) describe?
19.12 (Spectral Transformation of Digital Filters) The digital lowpass filter H(z) = (z + 1)/(z² − z + 0.2)
has a cutoff frequency of 0.5 kHz and operates at a sampling frequency S = 10 kHz. Use this filter
to design the following:
Chapter 19 Problems 709
19.13 (Spectral Transformation of Analog Prototypes) The analog lowpass filter H(s) = 2/(s² + 2s + 2)
has a cutoff frequency of 1 rad/s. Use this prototype to design the following digital filters.
(a) A lowpass filter with a passband edge of 100 Hz and S = 1 kHz
(b) A highpass filter with a cutoff frequency of 500 Hz and S = 2 kHz
(c) A bandpass filter with band edges at 400 Hz and 800 Hz, and S = 3 kHz
(d) A bandstop filter with band edges at 1 kHz and 1200 Hz, and S = 4 kHz
19.14 (IIR Filter Design) Design IIR filters that meet each of the following sets of specifications. Assume
a passband attenuation of Ap = 2 dB and a stopband attenuation of As = 30 dB.
(a) A Butterworth lowpass filter with passband edge at 1 kHz, stopband edge at 3 kHz, and S =
10 kHz, using the backward Euler transformation.
(b) A Butterworth highpass filter with passband edge at 400 Hz, stopband edge at 100 Hz, and
S = 2 kHz, using the impulse-invariant transformation.
(c) A Chebyshev bandpass filter with passband edges at 800 Hz and 1600 Hz, stopband edges at
400 Hz and 2 kHz, and S = 5 kHz, using the bilinear transformation.
(d) An inverse Chebyshev bandstop filter with passband edges at 200 Hz and 1.2 kHz, stopband
edges at 500 Hz and 700 Hz, and S = 4 kHz, using the bilinear transformation.
19.15 (Group Delay) A digital filter H(z) is designed from an analog filter H(s), using the bilinear
transformation Ωa = C tan(Ωd/2).
(a) Show that the group delays Ta and Td of H(s) and H(z) are related by

Td = 0.5C(1 + Ωa²)Ta
(b) Design a digital filter H(z) from the analog filter H(s) = 5/(s + 5) at a sampling frequency of
S = 4 Hz such that the gain of H(s) at Ω = 2 rad/s matches the gain of H(z) at 1 Hz.
(c) What is the group delay Ta of the analog filter H(s)?
(d) Use the above results to find the group delay Td of the digital filter H(z) designed in part (b).
19.16 (Digital-to-Analog Mappings) The bilinear transformation allows us to use a linear mapping to
transform a digital filter H(z) to an analog equivalent H(s).
(a) Develop such a mapping based on the bilinear transformation.
(b) Use this mapping to convert a digital filter H(z) = z/(z − 0.5) operating at S = 2 Hz to its analog
equivalent H(s).
19.17 (a) Using the fact that s = ln(z_t1)/t1 = ln(z_t2)/t2, show that z_t2 = (z_t1)^M, where M = t2/t1.
(b) Use the result of part (a) to convert the digital filter H1(z) = z/(z − 0.5) + z/(z − 0.25), with ts = 1 s,
to a digital filter H2(z), with ts = 0.5 s.
19.18 (Matched z-Transform) The analog filter H(s) = 4s(s + 1)/[(s + 2)(s + 3)] is to be converted to a digital
filter H(z) at a sampling rate of S = 4 Hz.
(a) Convert H(s) to a digital filter, using the matched z-transform s + α ⇒ (z − e^(−αts))/z.
(b) Convert H(s) to a digital filter, using the modified matched z-transform by moving all zeros at
the origin (z = 0) to z = 1.
(c) Convert H(s) to a digital filter, using the modified matched z-transform, by moving all but one
zero at the origin (z = 0) to z = 1.
19.20 (Bilinear Transformation) A second-order Butterworth lowpass analog filter with a half-power
frequency of 1 rad/s is converted to a digital filter H(z), using the bilinear transformation at a
sampling rate of S = 1 Hz.
(a) What is the transfer function H(s) of the analog filter?
(b) What is the transfer function H(z) of the digital filter?
(c) Are the dc gains of H(z) and H(s) identical? Should they be? Explain.
(d) Are the gains of H(z) and H(s) at their respective half-power frequencies identical? Explain.
19.21 (Digital-to-Analog Mappings) In addition to the bilinear transformation, the backward Euler
method also allows a linear mapping to transform a digital filter H(z) to an analog equivalent H(s).
(a) Develop such a mapping based on the backward Euler algorithm.
(b) Use this mapping to convert a digital filter H(z) = z/(z − 0.5) operating at S = 2 Hz to its analog
equivalent H(s).
19.22 (Digital-to-Analog Mappings) The forward Euler method also allows a linear mapping to trans-
form a digital filter H(z) to an analog equivalent H(s).
(a) Develop such a mapping based on the forward Euler algorithm.
(b) Use this mapping to convert a digital filter H(z) = z/(z − 0.5) operating at S = 2 Hz to its analog
equivalent H(s).
19.23 (Digital-to-Analog Mappings) Two other methods that allow us to convert a digital filter H(z) to
an analog equivalent H(s) are impulse invariance and the matched z-transform s + α ⇒ (z − e^(−αts))/z.
Let H(z) = z(z + 1)/[(z − 0.25)(z − 0.5)]. Find the analog filter H(s) from which H(z) was developed, assuming
a sampling frequency of S = 2 Hz and
(a) Impulse invariance. (b) Matched z-transform. (c) Bilinear transformation.
19.24 (Filter Specifications) A hi-fi audio signal band-limited to 20 kHz is contaminated by high-
frequency noise between 70 kHz and 110 kHz. We wish to design a digital filter to reduce the noise
by a factor of 100, with no appreciable signal loss. Pick the minimum sampling frequency that avoids
aliasing of the noise spectrum into the signal spectrum and develop the specifications for the digital
filter.
19.25 (IIR Filter Design) A digital filter is required to have a monotonic response in the passband and
stopband. The half-power frequency is to be 4 kHz, and the attenuation past 5 kHz is to exceed 20
dB. Design the digital filter, using impulse invariance and a sampling frequency of 15 kHz.
19.26 (Notch Filters) A notch filter is required to remove 50-Hz interference. Design such a filter,
using the bilinear transformation and assuming a bandwidth of 4 Hz and a sampling rate of 300 Hz.
Compute the gain of this filter at 40 Hz, 50 Hz, and 60 Hz.
19.27 (Peaking Filters) A peaking filter is required to isolate a 100-Hz signal with unit gain. Design
this filter, using the bilinear transformation and assuming a bandwidth of 5 Hz and a sampling rate
of 500 Hz. Compute the gain of this filter at 90 Hz, 100 Hz, and 110 Hz.
19.28 (IIR Filter Design) A fourth-order digital filter is required to have a passband between 8 kHz and
12 kHz and a maximum passband ripple that equals 5% of the peak magnitude. Design the digital
filter, using the bilinear transformation if the sampling frequency is assumed to be 40 kHz.
19.29 (IIR Filter Design) Lead-lag systems are often used in control systems and have the generic form
H(s) = (1 + sτ1)/(1 + sτ2). Use a sampling frequency of S = 10 Hz and the bilinear transformation to design
IIR filters from this lead-lag compensator if
(a) τ1 = 1 s, τ2 = 10 s.        (b) τ1 = 10 s, τ2 = 1 s.
19.30 (Pade Approximations) A delay of ts may be approximated by e^(−sts) ≈ 1 − sts + (sts)²/2! − · · ·. An
nth-order Pade approximation is based on a rational function of order n that minimizes the truncation
error of this approximation. The first-order and second-order Pade approximations are

P1(s) = [1 − 0.5sts] / [1 + 0.5sts]        P2(s) = [1 − 0.5sts + (sts)²/12] / [1 + 0.5sts + (sts)²/12]
Since e^(−sts) describes a delay of one sample (or z⁻¹), Pade approximations can be used to generate
inverse mappings for converting a digital filter H(z) to an analog filter H(s).
(a) Generate mappings for converting a digital filter H(z) to an analog filter H(s) based on the
first-order and second-order Pade approximations.
(b) Use each mapping to convert H(z) = z/(z − 0.5) to H(s), assuming ts = 0.5 s.
(c) Show that the first-order mapping is bilinear. Is this mapping related in any way to the bilinear
transformation?
19.31 (IIR Filter Design) It is required to design a lowpass digital filter H(z) from the analog filter
H(s) = 1/(s + 1). The sampling rate is S = 1 kHz. The half-power frequency of H(z) is to be ΩC = π/4.
(a) Use impulse invariance to design H(z) such that gain of the two filters matches at dc. Compare
the frequency response of both filters (after appropriate frequency scaling). Which filter would
you expect to yield better performance? To confirm your expectations, define the (in-band)
signal to (out-of-band) noise ratio (SNR) in dB as

SNR = 20 log(signal level / noise level) dB
(b) What is the SNR at the input and output of each filter if the input is
x(t) = cos(0.2ΩC t) + cos(1.2ΩC t) for H(s) and x[n] = cos(0.2nΩC) + cos(1.2nΩC) for H(z)?
(c) What is the SNR at the input and output of each filter if the input is
x(t) = cos(0.2ΩC t) + cos(3ΩC t) for H(s) and x[n] = cos(0.2nΩC) + cos(3nΩC) for H(z)?
(d) Use the bilinear transformation to design another filter H1 (z) such that gain of the two filters
matches at dc. Repeat the computations of parts (a) and (b) for this filter. Of the two digital
filters H(z) and H1 (z), which one would you recommend using, and why?
19.32 (The Effect of Group Delay) The nonlinear phase of IIR filters is responsible for signal distortion.
Consider a lowpass filter with a 1-dB passband edge at f = 1 kHz, a 50-dB stopband edge at f = 2 kHz,
and a sampling frequency of S = 10 kHz.
(a) Design a Butterworth filter HB (z) and an elliptic filter HE (z) to meet these specifications.
Using the Matlab routine grpdelay (or otherwise), compute and plot the group delay of each
filter. Which filter has the lower order? Which filter has a more nearly constant group delay in
the passband? Which filter would cause the least phase distortion in the passband? What are
the group delays NB and NE (expressed as the number of samples) of the two filters?
(b) Generate the signal x[n] = 3 sin(0.03πn) + sin(0.09πn) + 0.6 sin(0.15πn) over 0 ≤ n ≤ 100. Use
the ADSP routine filter to compute the response yB [n] and yE [n] of each filter. Plot the filter
outputs yB [n] and yE [n] (delayed by NB and NE , respectively) and the input x[n] on the same
plot to compare results. Does the filter with the more nearly constant group delay also result
in smaller signal distortion?
(c) Are all the frequency components of the input signal in the filter passband? If so, how can
you justify that the distortion is caused by the nonconstant group delay and not by the filter
attenuation in the passband?
19.33 (LORAN) A LORAN (long-range radio and navigation) system for establishing positions of marine
craft uses three transmitters that send out short bursts (10 cycles) of 100-kHz signals in a precise phase
relationship. Using phase comparison, a receiver (on the craft) can establish the position (latitude
and longitude) of the craft to within a few hundred meters. Suppose the LORAN signal is to be
digitally processed by first sampling it at 500 kHz and filtering the sampled signal using a second-order
peaking filter with a half-power bandwidth of 100 Hz. Use the bilinear transformation to design
the filter from an analog filter with unit half-power frequency. Compare your design with the digital
filter designed in Problem 18.34 (to meet the same specifications).
19.34 (Decoding a Mystery Message) During transmission, a message signal gets contaminated by a
low-frequency signal and high-frequency noise. The message can be decoded only by displaying it
in the time domain. The contaminated signal x[n] is provided on disk as mystery1.mat. Load this
signal into Matlab (use the command load mystery1). In an effort to decode the message, try the
following methods and determine what the decoded message says.
(a) Display the contaminated signal. Can you read the message? Display the DFT of the signal
to identify the range of the message spectrum.
(b) Use the bilinear transformation to design a second-order IIR bandpass filter capable of extracting
the message spectrum. Filter the contaminated signal and display the filtered signal to decode
the message. Use both the filter (filtering) and filtfilt (zero-phase filtering) commands.
(c) As an alternative method, first zero out the DFT component corresponding to the low-frequency
contamination and obtain the IDFT y[n]. Next, design a lowpass IIR filter (using impulse
invariance) to reject the high-frequency noise. Filter the signal y[n] and display the filtered
signal to decode the message. Use both the filter and filtfilt commands.
(d) Which of the two methods allows better visual detection of the message? Which of the two
filtering routines (in each method) allows better visual detection of the message?
19.35 (Interpolation) The signal x[n] = cos(2πF0 n) is to be interpolated by 5, using up-sampling followed
by lowpass filtering. Let F0 = 0.4.
(a) Generate and plot 20 samples of x[n] and up-sample by 5.
(b) What must be the cuto frequency FC and gain A of a lowpass filter that follows the up-sampler
to produce the interpolated output?
(c) Design a fifth-order digital Butterworth filter (using the bilinear transformation) whose half-
power frequency equals FC and whose peak gain equals A.
(d) Filter the up-sampled signal through this filter and plot the result. Is the result an interpolated
version of the input signal? Do the peak amplitudes of the interpolated signal and original
signal match? Should they? Explain.
19.37 (Numerical Integration Algorithms) It is claimed that mapping rules to convert an analog filter
to a digital filter, based on numerical integration algorithms that approximate the area y[n] from
y[n 2] or y[n 3] (two or more time steps away), do not usually preserve stability. Consider the
following integration algorithms.
(1) y[n] = y[n − 1] + (ts/12)(5x[n] + 8x[n − 1] − x[n − 2])        (Adams-Moulton rule)
(2) y[n] = y[n − 2] + (ts/3)(x[n] + 4x[n − 1] + x[n − 2])        (Simpson's rule)
(3) y[n] = y[n − 3] + (3ts/8)(x[n] + 3x[n − 1] + 3x[n − 2] + x[n − 3])        (Simpson's three-eighths rule)
Derive mapping rules for each algorithm, convert the analog filter H(s) = 1/(s + 1) to a digital filter
using each mapping with S = 5 Hz, and use Matlab to compare their frequency response. Which
of these mappings (if any) allow us to convert a stable analog filter to a stable digital filter? Is the
claim justified?
19.38 (RIAA Equalization) Audio signals usually undergo a high-frequency boost (and low-frequency
cut) before being used to make the master for commercial production of phonograph records. During
playback, the signal from the phono cartridge is fed to a preamplifier (equalizer) that restores the
original signal. The frequency response of the preamplifier is based on the RIAA (Recording Industry
Association of America) equalization curve whose Bode plot is shown in Figure P19.38, with break
frequencies at 50, 500, and 2122 Hz.
[Figure P19.38 Bode magnitude plot HdB of the RIAA equalization curve, with −20-dB/dec segments between the break frequencies.]
19.39 (An Audio Equalizer) Many hi-fi systems are equipped with graphic equalizers to tailor the
frequency response. Consider the design of a four-band graphic equalizer. The first section is to be
a lowpass filter with a passband edge of 300 Hz. The next two sections are bandpass filters with
passband edges of [300, 1000] Hz and [1000, 3000] Hz, respectively. The fourth section is a highpass
filter with a stopband edge of 3 kHz. Use a sampling rate of 20 kHz and the bilinear transformation
to design second-order IIR Butterworth filters to implement these filters. Superimpose plots of the
frequency response of each section and their parallel combination. Explain how you might adjust the
gain of each section to vary over ±12 dB.
Chapter 20
FIR Digital Filters
The term generalized means that φ(F) may include a jump (of π/2 at F = 0, if H(F) is imaginary). There
may also be phase jumps of 2π (if the phase is restricted to the principal range −π ≤ φ(F) ≤ π). If we plot
the magnitude |H(F)|, there will also be phase jumps of π (where the amplitude A(F) changes sign).
716 Chapter 20 FIR Digital Filters
Type 1 Sequences
A type 1 sequence h1 [n] and its amplitude spectrum A1 (F ) are illustrated in Figure 20.2.
[Figure 20.2 Features of a type 1 symmetric sequence: even symmetry about the center of symmetry, and an amplitude spectrum with even symmetry about F = 0 and F = 0.5.]
This sequence is even symmetric with odd length N , and a center of symmetry at the integer value
M = (N − 1)/2. Using Euler's relation, its frequency response H1(F) may be expressed as

H1(F) = [ h[M] + 2 Σ_{k=0}^{M−1} h[k] cos(2π(M − k)F) ] e^{−j2πMF} = A1(F) e^{−j2πMF}    (20.1)
Thus, H1(F) shows a linear phase of −2πMF, and a constant group delay of M. The amplitude spectrum
A1 (F ) is even symmetric about both F = 0 and F = 0.5, and both |H1 (0)| and |H1 (0.5)| can be nonzero.
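These symmetry properties are easy to check numerically. The following sketch (in Python with NumPy; the sequence h[n] is an arbitrary type 1 example of our choosing, not one from the text) verifies that the DTFT of an even-symmetric, odd-length sequence matches the amplitude-phase form of Eq. (20.1):

```python
import numpy as np

h = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # arbitrary type 1 sequence, N = 5
N = len(h)
M = (N - 1) // 2                          # center of symmetry, M = 2

F = np.linspace(0, 0.5, 101)
n = np.arange(N)
# Direct DTFT: H(F) = sum_n h[n] e^{-j2*pi*F*n}
H = np.array([np.sum(h * np.exp(-2j * np.pi * f * n)) for f in F])

# Amplitude of Eq. (20.1): A1(F) = h[M] + 2*sum_{k=0}^{M-1} h[k] cos(2*pi*(M-k)F)
A1 = h[M] + 2 * sum(h[k] * np.cos(2 * np.pi * (M - k) * F) for k in range(M))

# H(F) should equal A1(F) e^{-j2*pi*M*F}: linear phase, group delay M
assert np.allclose(H, A1 * np.exp(-2j * np.pi * M * F))
```

At F = 0 both sides reduce to the sum of the samples, as expected for the DC gain.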
Type 2 Sequences
A type 2 sequence h2 [n] and its amplitude spectrum A2 (F ) are illustrated in Figure 20.3.
[Figure 20.3 Features of a type 2 symmetric sequence: even symmetry about a half-integer center of symmetry, and an amplitude spectrum with odd symmetry about F = 0.5.]
This sequence is also even symmetric but of even length N , and a center of symmetry at the half-integer
value M = (N − 1)/2. Using Euler's relation, its frequency response H2(F) may be expressed as

H2(F) = [ 2 Σ_{k=0}^{M−1/2} h[k] cos(2π(M − k)F) ] e^{−j2πMF} = A2(F) e^{−j2πMF}    (20.2)
20.1 Symmetric Sequences and Linear Phase 717
Thus, H2(F) also shows a linear phase of −2πMF, and a constant group delay of M. The amplitude
spectrum A2 (F ) is even symmetric about F = 0, and odd symmetric about F = 0.5, and as a result, |H2 (0.5)|
is always zero.
Type 3 Sequences
A type 3 sequence h3 [n] and its amplitude spectrum A3 (F ) are illustrated in Figure 20.4.
[Figure 20.4 Features of a type 3 antisymmetric sequence: odd symmetry about the center of symmetry, and an amplitude spectrum that is odd symmetric about F = 0 and F = 0.5.]
This sequence is odd symmetric with odd length N , and a center of symmetry at the integer value
M = (N − 1)/2. Using Euler's relation, its frequency response H3(F) may be expressed as

H3(F) = j [ 2 Σ_{k=0}^{M−1} h[k] sin(2π(M − k)F) ] e^{−j2πMF} = A3(F) e^{jπ(0.5−2MF)}    (20.3)
Thus, H3(F) shows a generalized linear phase of π/2 − 2πMF, and a constant group delay of M. The amplitude
spectrum A3 (F ) is odd symmetric about both F = 0 and F = 0.5, and as a result, |H3 (0)| and |H3 (0.5)| are
always zero.
Type 4 Sequences
A type 4 sequence h4 [n] and its amplitude spectrum A4 (F ) are illustrated in Figure 20.5.
[Figure 20.5 Features of a type 4 antisymmetric sequence: odd symmetry about a half-integer center, and an amplitude spectrum that is odd symmetric about F = 0 and even symmetric about F = 0.5.]
This sequence is odd symmetric with even length N , and a center of symmetry at the half-integer value
M = (N − 1)/2. Using Euler's relation, its frequency response H4(F) may be expressed as

H4(F) = j [ 2 Σ_{k=0}^{M−1/2} h[k] sin(2π(M − k)F) ] e^{−j2πMF} = A4(F) e^{jπ(0.5−2MF)}    (20.4)
Thus, H4(F) also shows a generalized linear phase of π/2 − 2πMF, and a constant group delay of M. The
amplitude spectrum A4 (F ) is odd symmetric about F = 0, and even symmetric about F = 0.5, and as a
result, |H4 (0)| is always zero.
[Figure: Zero locations of linear-phase sequences. Zeros at z = 1 or z = −1 must occur an odd or an even number of times (if present), depending on the sequence type; all other zeros must show conjugate reciprocal symmetry.]
For a type 1 sequence, the number of zeros at z = −1 must be even. So, there must be another zero at
z = −1. Thus, we have a total of six zeros.
(b) Find the transfer function and impulse response of a causal type 3 linear-phase filter, assuming the
smallest length and smallest delay, if it is known that there is a zero at z = j and two zeros at z = 1.
The zero at z = j must be paired with its conjugate (and reciprocal) z = −j.
A type 3 sequence requires an odd number of zeros at z = 1 and z = −1. So, the minimum number of
zeros required is one zero at z = −1 and three zeros at z = 1 (with two already present).
The transfer function has the form

H(z) = (z + j)(z − j)(z + 1)(z − 1)³ = z⁶ − 2z⁵ + z⁴ − z² + 2z − 1

The transfer function of the causal filter with the minimum delay is

HC(z) = 1 − 2z⁻¹ + z⁻² − z⁻⁴ + 2z⁻⁵ − z⁻⁶    hC[n] = {1, −2, 1, 0, −1, 2, −1}
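As a numerical check, the sketch below (using NumPy; not part of the text) confirms that hC[n] is antisymmetric and that HC(z) has the required zeros at z = 1, z = −1, and z = ±j:

```python
import numpy as np

# Coefficients of HC(z) = 1 - 2z^-1 + z^-2 - z^-4 + 2z^-5 - z^-6
hC = np.array([1, -2, 1, 0, -1, 2, -1], dtype=float)

# Type 3: odd length, antisymmetric about the center
assert np.allclose(hC, -hC[::-1])

# Evaluate the polynomial at each claimed zero location
for z0 in (1.0, -1.0, 1j, -1j):
    assert abs(np.polyval(hC, z0)) < 1e-12
```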
Note that for an even length N , the index n is not an integer. Even though the designed filter is an
approximation to the ideal filter, it is the best approximation (in the mean square sense) compared to any
other filter of the same length. The problem is that it shows certain undesirable characteristics. Truncation
of the ideal impulse response h[n] is equivalent to multiplication of h[n] by a rectangular window w[n] of
length N . The spectrum of the windowed impulse response hW [n] = h[n]w[n] is the (periodic) convolution
of H(F) and W(F). Since W(F) has the form of a Dirichlet kernel, this spectrum shows overshoot and
ripples (the Gibbs effect). It is the abrupt truncation of the rectangular window that leads to overshoot and
ripples in the magnitude spectrum. To reduce or eliminate the Gibbs effect, we use tapered windows.
[Figure: Spectral measures for a window: the 3-dB and 6-dB widths (W3 and W6), the width WS to the peak sidelobe level (PSL), the mainlobe width WM, and the high-frequency decay.]
Amplitude-based measures for a window include the peak sidelobe level (PSL), usually in decibels, and
the decay rate DS in dB/dec. Frequency-based measures include the mainlobe width WM , the 3-dB and
6-dB widths (W3 and W6 ), and the width WS to reach the peak sidelobe level. The windows commonly used
in FIR filter design and their spectral features are listed in Table 20.2 and illustrated in Figure 20.8. As the
window length N increases, the width parameters decrease, but the peak sidelobe level remains more or less
constant. Ideally, the spectrum of a window should approximate an impulse and be confined to as narrow a
mainlobe as possible, with as little energy in the sidelobes as possible.
20.2 Window-Based Design 721
NOTATION:
GP: Peak gain of mainlobe    GS: Peak sidelobe gain    DS: High-frequency attenuation (dB/decade)
ASL: Sidelobe attenuation (GP/GS) in dB    W6: 6-dB half-width    WS: Half-width of mainlobe to reach PSL
W3: 3-dB half-width    WM: Half-width of mainlobe
Notes:
1. All widths (WM, WS, W6, W3) must be normalized (divided) by the window length N.
2. Values for the Kaiser window depend on the parameter β. Empirically determined relations are

GP = |sinc(jβ)| / I0(πβ),    GS/GP = 0.22 / sinh(πβ),    WM = (1 + β²)^{1/2},    WS = (0.661 + β²)^{1/2}
[Figure 20.8 Commonly used windows (N = 21) and their magnitude spectra in dB. The peak sidelobe level is about −31.5 dB for the von Hann window, −42.7 dB for the Hamming window, and −58.1 dB for the Blackman window; the Kaiser window is shown for β = 2.6.]
Most windows have been developed with some optimality criterion in mind. Ultimately, the trade-off
is a compromise between the conflicting requirements of a narrow mainlobe (or a small transition width)
and small sidelobe levels. Some windows are based on combinations of simpler windows. For example,
the von Hann (or Hanning) window is the sum of a rectangular and a cosine window, the Bartlett window
is the convolution of two rectangular windows, and the cos³ window is the product of a von Hann and
a cosine window. Other windows are designed to emphasize certain desirable features. The von Hann
window improves the high-frequency decay (at the expense of a larger peak sidelobe level). The Hamming
window minimizes the sidelobe level (at the expense of a slower high-frequency decay). The Kaiser window
has a variable parameter that controls the peak sidelobe level. Still other windows are based on simple
mathematical forms or easy application. For example, the cosᵃ windows have easily recognizable transforms,
and the von Hann window is easy to apply as a convolution in the frequency domain. An optimal time-limited
window should maximize the energy in its spectrum over a given frequency band. In the continuous-time
domain, this constraint leads to a window based on prolate spheroidal wave functions of the first order. The
Kaiser window best approximates such an optimal window in the discrete domain.
Here, Tn (x) is the Chebyshev polynomial of order n, and A is the sidelobe attenuation in decibels.
Harris windows offer a variety of peak sidelobe levels and high-frequency decay rates. Their characteristics
are listed in Table 20.3. Harris windows have the general form

w[n] = b0 − b1 cos(m) + b2 cos(2m) − b3 cos(3m),  where m = 2πn/(N − 1),  |n| ≤ (N − 1)/2    (20.7)
[Figure: Bartlett windows of length N = 13 (12 intervals) and N = 10 (9 intervals).]
An N -point FIR window uses N 1 intervals to generate N samples (including both end samples). Once
selected, the window sequence must be symmetrically positioned with respect to the symmetric impulse
response sequence. Windowing is then simply a pointwise multiplication of the two sequences.
The symmetrically windowed impulse response of an ideal lowpass filter may be written as
hW[n] = 2FC sinc(2nFC)w[n],  −0.5(N − 1) ≤ n ≤ 0.5(N − 1)    (20.8)
For even N , the index n is not an integer, even though it is incremented every unit, and we require a non-
integer delay to produce a causal sequence. If the end samples of w[n] equal zero, so will those of hW [n],
and the true filter length will be less by 2.
(b) With N = 6, we have FC = 0.25 and hN[n] = 2FC sinc(2nFC) = 0.5 sinc(0.5n), −2.5 ≤ n ≤ 2.5.
For a von Hann (Hanning) window, w[n] = 0.5 + 0.5 cos(2πn/(N − 1)) = 0.5 + 0.5 cos(0.4πn), −2.5 ≤ n ≤ 2.5.
Since the first sample of hW [n] is zero, the minimum delay required for a causal filter is only 1.5 (and
not 2.5) samples, and the transfer function of the causal sequence is
HC(z) = 0.0518 + 0.4072z⁻¹ + 0.4072z⁻² + 0.0518z⁻³
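The arithmetic of this example is easy to reproduce in a few lines of Python (a sketch, not the text's own code; NumPy's sinc is the normalized sin(πx)/(πx) used throughout this chapter):

```python
import numpy as np

N, FC = 6, 0.25
n = np.arange(N) - (N - 1) / 2            # non-integer indices -2.5 ... 2.5
hN = 2 * FC * np.sinc(2 * n * FC)          # 0.5*sinc(0.5n)
w = 0.5 + 0.5 * np.cos(2 * np.pi * n / (N - 1))   # von Hann window
hW = hN * w                                # end samples vanish with the window

# The interior samples match the HC(z) coefficients above
assert np.allclose(hW[1:5], [0.0518, 0.4072, 0.4072, 0.0518], atol=1e-4)
assert abs(hW[0]) < 1e-12 and abs(hW[-1]) < 1e-12
```

Since the end samples are zero, the causal filter needs a delay of only 1.5 samples, as noted above.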
[Figure: FIR filter gain specifications, showing the passband ripple δp, the stopband ripple δs, the attenuations Ap and As, and the band edges Fp and Fs.]
The gain specifications of an FIR filter are often based on the passband ripple δp (the maximum deviation
from unit gain) and the stopband ripple δs (the maximum deviation from zero gain). Since the decibel gain
is usually normalized with respect to the peak gain, the passband and stopband attenuation Ap and As (in
decibels) are related to these ripple parameters by

Ap (dB) = 20 log[(1 + δp)/(1 − δp)]    As (dB) = −20 log[δs/(1 + δp)] ≈ −20 log δs,  δp ≪ 1    (20.9)

To convert attenuation specifications (in decibels) to values for the ripple parameters, we use

δp = (10^{Ap/20} − 1)/(10^{Ap/20} + 1)    δs = (1 + δp)10^{−As/20} ≈ 10^{−As/20},  δp ≪ 1    (20.10)
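These two relations are straightforward to code. The sketch below (plain Python; the helper names are ours) converts attenuations to ripples via Eq. (20.10) and back via Eq. (20.9) as a round-trip consistency check:

```python
import math

def db_to_ripple(Ap, As):
    # Eq. (20.10): attenuation specs (dB) to ripple parameters
    dp = (10 ** (Ap / 20) - 1) / (10 ** (Ap / 20) + 1)
    ds = (1 + dp) * 10 ** (-As / 20)
    return dp, ds

def ripple_to_db(dp, ds):
    # Eq. (20.9): ripple parameters back to attenuations (dB)
    Ap = 20 * math.log10((1 + dp) / (1 - dp))
    As = -20 * math.log10(ds / (1 + dp))
    return Ap, As

dp, ds = db_to_ripple(1.0, 50.0)      # Ap = 1 dB, As = 50 dB
Ap, As = ripple_to_db(dp, ds)
assert abs(Ap - 1.0) < 1e-9 and abs(As - 50.0) < 1e-9
```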
Window-based design calls for normalization of the design frequencies by the sampling frequency, devel-
oping the lowpass filter by windowing the impulse response of an ideal filter, and conversion to the required
filter type, using spectral transformations. The method may appear deceptively simple (and it is), but some
issues can only be addressed qualitatively. For example, the choice of cutoff frequency is affected by the
window length N. The smallest length N that meets specifications depends on the choice of window, and
the choice of the window, in turn, depends on the (stopband) attenuation specifications.
[Figure: Ideal filter versus windowed spectrum (not to scale). The transition width FT is comparable to the window mainlobe width, and the peak stopband ripple is comparable to the peak sidelobe level of the window.]
The ideal spectrum has a jump discontinuity at F = FC . The windowed spectrum shows overshoot and
ripples, and a finite transition width but no abrupt jump. Its magnitude at F = FC equals 0.5 (corresponding
to an attenuation of 6 dB).
Table 20.4 lists the characteristics of the windowed spectrum for various windows. It excludes windows
(such as the Bartlett window) whose amplitude spectrum is entirely positive (because they result in a
complete elimination of overshoot and the Gibbs effect). Since both the window function and the impulse
response are symmetric sequences, the spectrum of the windowed filter is also endowed with symmetry. Here
are some general observations about the windowed spectrum:
1. Even though the peak passband ripple equals the peak stopband ripple (δp = δs), the passband (or
stopband) ripples are not of equal magnitude.
2. The peak stopband level of the windowed spectrum is typically slightly less than the peak sidelobe
level of the window itself. In other words, the filter stopband attenuation (listed as AWS in Table 20.4)
is typically greater (by a few decibels) than the peak sidelobe attenuation of the window (listed as
ASL in Tables 20.2 and 20.3). The peak sidelobe level, the peak passband ripple, and the passband
attenuation (listed as AWP in Table 20.4) remain more or less constant with N .
3. The peak-to-peak width across the transition band is roughly equal to the mainlobe width of the window
(listed as WM in Tables 20.2 and 20.3). The actual transition width (listed as FW in Table 20.4) of
the windowed spectrum (where the response first reaches 1 − δp and δs) is less than this width. The
transition width FWS is inversely related to the window length N (with FWS ≈ C/N, where C is more
or less a constant for each window).
The numbers vary in the literature, and the values in Table 20.4 were found here by using an ideal impulse
response h[n] = 0.5 sinc(0.5n), with FC = 0.25, windowed by a 51-point window. The magnitude specifi-
cations are normalized with respect to the peak magnitude. The passband and stopband attenuation are
computed from the passband and stopband ripple (with δp = δs), using the relations already given.
based on matching the given transition width specification FT to the transition width FWS = C/N of the
windowed spectrum (as listed in Table 20.4)
FT = Fs − Fp = FWS = C/N    N = C/(Fs − Fp)    (20.11)
Here, Fp and Fs are the digital passband and stopband frequencies. The window length depends on the
choice of window (which dictates the choice of C). The closer the match between the stopband attenuation
As and the stopband attenuation AWS of the windowed spectrum, the smaller is the window length N . In
any case, for a given window, this relation typically overestimates the smallest filter length, and we can often
decrease this length and still meet design specifications.
δp = (10^{Ap/20} − 1)/(10^{Ap/20} + 1)    δs = 10^{−As/20}    δ = min(δp, δs)    (20.12)
The ripple parameter δ is used to recompute the actual stopband attenuation in decibels as As0 = −20 log δ.
The Kaiser window parameter β is estimated from the actual stopband attenuation As0, as follows:
β = 0.0351(As0 − 8.7),  As0 > 50 dB
β = 0.186(As0 − 21)^{0.4} + 0.0251(As0 − 21),  21 dB ≤ As0 ≤ 50 dB    (20.15)
β = 0,  As0 < 21 dB
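A sketch of these Kaiser design relations in Python (function names are ours), together with the length estimate N = (As0 − 7.95)/(14.36(Fs − Fp)) + 1 that is applied in Example 20.4 later in this section:

```python
import math

def kaiser_beta(As0):
    # Eq. (20.15): Kaiser parameter from the actual stopband attenuation (dB)
    if As0 > 50:
        return 0.0351 * (As0 - 8.7)
    if As0 >= 21:
        return 0.186 * (As0 - 21) ** 0.4 + 0.0251 * (As0 - 21)
    return 0.0

def kaiser_length(As0, Fp, Fs):
    # Length estimate used in Example 20.4, rounded up
    N = (As0 - 7.95) / (14.36 * (Fs - Fp)) + 1
    return math.ceil(N)

# Values from Example 20.4(a): Fp = 1/6, Fs = 1/3, As0 = 50 dB
assert kaiser_length(50, 1 / 6, 1 / 3) == 19
assert abs(kaiser_beta(50) - 1.443) < 1e-2
```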
do not change the filter order (or length). The starting point is an ideal lowpass filter, with unit passband
gain and a cutoff frequency of FC, whose windowed impulse response hLP[n] is given by
hLP[n] = 2FC sinc(2nFC)w[n],  −0.5(N − 1) ≤ n ≤ 0.5(N − 1)    (20.16)
Here, w[n] is a window function of length N . If N is even, the index n takes on non-integer values. The
lowpass-to-highpass (LP2HP) transformation may be achieved in two ways, as illustrated in Figure 20.12.
The first transformation of Figure 20.12 is valid only if the filter length N is odd, and reads

hHP[n] = δ[n] − hLP[n] = δ[n] − 2FC sinc(2nFC)w[n],  −(N − 1)/2 ≤ n ≤ (N − 1)/2    (20.17)

Its cutoff frequency FH = 0.5(Fp + Fs) equals the cutoff frequency FC of the lowpass filter. The second
transformation (assuming a causal lowpass prototype hLP[n]) is valid for any length N, and reads

hHP[n] = (−1)ⁿ hLP[n] = 2(−1)ⁿ FC sinc(2nFC)w[n],  0 ≤ n ≤ N − 1    (20.18)

However, its cutoff frequency is 0.5 − 0.5(Fp + Fs). In order to design a highpass filter with a cutoff frequency
of FH = 0.5(Fp + Fs), we must start with a lowpass filter whose cutoff frequency is FC = 0.5 − 0.5(Fp + Fs).
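The equivalence of the two LP2HP routes can be checked directly: with a window that equals 1 at its center and odd N, δ[n] − hLP[n] (lowpass cutoff FH) and (−1)ⁿ hLP[n] (lowpass cutoff 0.5 − FH) produce the same coefficients. A Python sketch (centered indexing, von Hann window; parameters are ours):

```python
import numpy as np

N, FH = 21, 0.3                     # desired highpass cutoff
n = np.arange(N) - (N - 1) // 2     # centered indices, N odd
w = 0.5 + 0.5 * np.cos(2 * np.pi * n / (N - 1))   # von Hann (w = 1 at n = 0)
delta = (n == 0).astype(float)

h1 = delta - 2 * FH * np.sinc(2 * n * FH) * w            # Eq. (20.17), FC = FH
FC2 = 0.5 - FH
h2 = (-1.0) ** n * 2 * FC2 * np.sinc(2 * n * FC2) * w    # Eq. (20.18), centered

assert np.allclose(h1, h2)          # the two transformations coincide
```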
The LP2BP and LP2BS transformations preserve arithmetic (not geometric) symmetry about the center
frequency F0 . If the band edges are [F1 , F2 , F3 , F4 ] in increasing order, arithmetic symmetry means that
F1 + F4 = F2 + F3 = 2F0 and implies equal transition widths. If the transition widths are not equal, we must
relocate a band edge to make both transition widths equal to the smaller width, as shown in Figure 20.13.
[Relocating F4 converts band edges F1, F2, F3, F4 with no arithmetic symmetry into band edges with arithmetic symmetry.]
Figure 20.13 How to ensure arithmetic symmetry of the band edges
The LP2BP and LP2BS transformations are illustrated in Figure 20.14. In each transformation, the
center frequency F0 and the cutoff frequency FC of the lowpass filter are given by

F0 = 0.5(F2 + F3)    FC = 0.5(F3 + F4) − F0    (20.19)
The lowpass-to-bandpass (LP2BP) transformation is given by

hBP[n] = 2 cos(2πnF0)hLP[n] = 4FC sinc(2nFC)cos(2πnF0)w[n],  −(N − 1)/2 ≤ n ≤ (N − 1)/2    (20.20)
The lowpass-to-bandstop (LP2BS) transformation is given by
hBS[n] = δ[n] − hBP[n] = δ[n] − 4FC sinc(2nFC)cos(2πnF0)w[n],  −(N − 1)/2 ≤ n ≤ (N − 1)/2    (20.21)
Note that the LP2BS transformation requires a type 1 sequence (with even symmetry and odd length N).
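A sketch of the LP2BP transformation of Eq. (20.20) in Python (von Hann window and parameter values of our choosing): the bandpass response should be near unity at F0 and small at F = 0 and F = 0.5.

```python
import numpy as np

N, FC, F0 = 31, 0.1, 0.25
n = np.arange(N) - (N - 1) // 2
w = 0.5 + 0.5 * np.cos(2 * np.pi * n / (N - 1))   # von Hann window
hLP = 2 * FC * np.sinc(2 * n * FC) * w            # windowed lowpass prototype
hBP = 2 * np.cos(2 * np.pi * n * F0) * hLP        # Eq. (20.20)

def mag(h, F):
    # magnitude of the DTFT at digital frequency F
    return abs(np.sum(h * np.exp(-2j * np.pi * F * np.arange(len(h)))))

assert mag(hBP, F0) > 0.9                         # near unity in the passband
assert mag(hBP, 0.0) < 0.05 and mag(hBP, 0.5) < 0.05   # small at band extremes
```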
This describes a lowpass filter. The digital frequencies are Fp = fp /S = 0.1 and Fs = fs /S = 0.2.
With As = 40 dB, possible choices for a window are (from Table 20.4) von Hann (with AWS = 44 dB)
and Blackman (with AWS = 75.3 dB). Using FWS = Fs − Fp = C/N, the approximate filter lengths
for these windows (using the values of C from Table 20.4) are:

von Hann: N ≈ 3.21/0.1 ≈ 33    Blackman: N ≈ 5.71/0.1 ≈ 58
We choose the cutoff frequency as FC = 0.5(Fp + Fs) = 0.15. The impulse response then equals
hN [n] = 2FC sinc(2nFC ) = 0.3 sinc(0.3n)
Windowing gives the impulse response of the required lowpass filter
hLP [n] = w[n]hN [n] = 0.3w[n]sinc(0.3n)
As Figure E20.3A(a) shows, the design specifications are indeed met by each filter (but the lengths are
actually overestimated). The Blackman window requires a larger length because of the larger difference
between As and AWS. It also has the larger transition width.
[Panels: (a) lowpass filters using von Hann (FC = 0.15, N = 33) and Blackman (N = 58) windows; (b) minimum-length designs: von Hann (FC = 0.1313, N = 23) and Blackman (FC = 0.1278, N = 29).]
Figure E20.3A Lowpass FIR filters for Example 20.3(a and b)
(b) (Minimum-Length Design) By trial and error, the cutoff frequency and the smallest length that
just meet specifications turn out to be FC = 0.1313, N = 23, for the von Hann window, and FC =
0.1278, N = 29, for the Blackman window. Figure E20.3A(b) shows the response of these minimum-
length filters. The passband and stopband attenuation are [1.9, 40.5] dB for the von Hann window,
and [1.98, 40.1] dB for the Blackman window. Even though the filter lengths are much smaller, each
filter does meet the design specifications.
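The von Hann design of part (a) (N = 33, FC = 0.15) can be verified numerically. This Python sketch assumes the stated specs (Fp = 0.1, Fs = 0.2, As = 40 dB, and a passband attenuation of about 2 dB, which is our reading of the figure):

```python
import numpy as np

N, FC = 33, 0.15
n = np.arange(N) - (N - 1) // 2
w = 0.5 + 0.5 * np.cos(2 * np.pi * n / (N - 1))   # von Hann window
h = 2 * FC * np.sinc(2 * n * FC) * w              # windowed lowpass filter

def att_db(h, F):
    # attenuation in dB at digital frequency F (unity reference)
    H = abs(np.sum(h * np.exp(-2j * np.pi * F * np.arange(len(h)))))
    return -20 * np.log10(H)

assert att_db(h, 0.2) >= 40       # stopband spec met at Fs = 0.2
assert att_db(h, 0.1) <= 2        # small attenuation at Fp = 0.1
```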
1. Choose the cutoff frequency of the lowpass filter as FC = 0.5(Fp + Fs) = 0.15. The impulse
response hN [n] then equals
hN [n] = 2FC sinc(2nFC ) = 0.3 sinc(0.3n)
The windowed response is thus hW [n] = 0.3w[n]sinc(0.3n).
The impulse response of the required highpass filter is then
hHP[n] = δ[n] − hW[n] = δ[n] − 0.3w[n]sinc(0.3n)
2. Choose the cutoff frequency of the lowpass filter as FC = 0.5 − 0.5(Fp + Fs) = 0.35.
Then, the impulse response equals hN [n] = 2FC sinc(2nFC ) = 0.7 sinc(0.7n).
The windowed response is thus hW [n] = 0.7w[n]sinc(0.7n).
The impulse response of the required highpass filter is then
hHP[n] = (−1)ⁿ hW[n] = 0.7(−1)ⁿ w[n]sinc(0.7n)
The two methods yield identical results. As Figure E20.3B(a) shows, the design specifications are
indeed met by each window, but the lengths are actually overestimated.
[Panels: (a) highpass filters using Hamming (lowpass prototype FC = 0.35, N = 35) and Blackman (N = 58) windows; (b) minimum-length designs: Hamming (FC = 0.3293, N = 22) and Blackman (FC = 0.3277, N = 29).]
Figure E20.3B Highpass FIR filters for Example 20.3(c and d)
(d) (Minimum-Length Design) By trial and error, the cutoff frequency, and the smallest length that
just meet specifications, turn out to be FC = 0.3293, N = 22, for the Hamming window, and FC =
0.3277, N = 29, for the Blackman window. Figure E20.3B(b) shows the response of the minimum-
length filters. The passband and stopband attenuation are [1.94, 40.01] dB for the Hamming window,
and [1.99, 40.18] dB for the Blackman window. Each filter meets the design specifications, even though
the filter lengths are much smaller than the values computed from the design relations.
The specifications do not show arithmetic symmetry. The smaller transition width is 2 kHz. For
arithmetic symmetry, we therefore choose the band edges as [2, 4, 8, 10] kHz.
The digital frequencies are: passband [0.16, 0.32], stopband [0.08, 0.4], and F0 = 0.24.
The lowpass band edges become Fp = 0.5(Fp2 − Fp1) = 0.08 and Fs = 0.5(Fs2 − Fs1) = 0.16.
With As = 45 dB, one of the windows we can use (from Table 20.4) is the Hamming window.
For this window, we estimate FWS = Fs − Fp = C/N to obtain N = 3.47/0.08 ≈ 44.
We choose the cutoff frequency as FC = 0.5(Fp + Fs) = 0.12.
The lowpass impulse response is hN[n] = 2FC sinc(2nFC) = 0.24 sinc(0.24n), −21.5 ≤ n ≤ 21.5.
Windowing this gives hW[n] = hN[n]w[n], and the LP2BP transformation gives
hBP[n] = 2 cos(2πnF0)hW[n] = 2 cos(0.48πn)hW[n]
Its frequency response is shown in Figure E20.3C(a) and confirms that the specifications are met.
[Panels: (a) Hamming bandpass filter (N = 44, F0 = 0.24, lowpass prototype FC = 0.12); (b) minimum-length Hamming bandpass filter (N = 27, F0 = 0.24, FC = 0.0956).]
Figure E20.3C Bandpass FIR filters for Example 20.3(e)
The cutoff frequency and smallest filter length that meets specifications turn out to be smaller. By
decreasing N and FC, we find that the specifications are just met with N = 27 and FC = 0.0956. For
these values, the lowpass filter is hN[n] = 2FC sinc(2nFC) = 0.1912 sinc(0.1912n), −13 ≤ n ≤ 13.
Windowing and bandpass transformation yields the filter whose spectrum is shown in Figure E20.3C(b).
The attenuation is 3.01 dB at 4 kHz and 8 kHz, 45.01 dB at 2 kHz, and 73.47 dB at 12 kHz.
Thus, h[n] = 0 for even n (except n = 0), and the filter length N is always odd. Being a type 1 sequence,
its frequency response H(F) displays even symmetry about F = 0. It is also antisymmetric about F = 0.25.
A highpass half-band filter also requires FC = 0.25. If we choose FC = 0.5(Fp + Fs), the sampling frequency
S must equal 2(fp + fs) to ensure FC = 0.25, and cannot be selected arbitrarily. Examples of lowpass
and highpass half-band filters are shown in Figure 20.15. Note that the peak passband ripple and the peak
stopband ripple are of equal magnitude, with δp = δs = δ (as they are for any symmetric window).
[Figure 20.15 Amplitude of (a) a half-band lowpass filter and (b) a half-band highpass filter, each with equal ripple δ about 1 in the passband and about 0 in the stopband, and an amplitude of 0.5 at F = 0.25.]
Since the impulse response of bandstop and bandpass filters contains the term 2 cos(2πnF0)hLP[n], a
choice of F0 = 0.25 (for the center frequency) ensures that the odd-indexed terms vanish. Once again, the
sampling frequency S cannot be arbitrarily chosen, and must equal 4f0 to ensure F0 = 0.25. Even though
the choice of sampling rate may cause aliasing, the aliasing will be restricted primarily to the transition band
between fp and fs, where its effects are not critical.
Except for the restrictions in the choice of sampling rate S (which dictates the choice of FC for lowpass
and highpass filters, or F0 for bandpass and bandstop filters) and an odd length sequence, the design of
half-band filters follows the same steps as window-based design.
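The defining half-band property (h[n] = 0 for even n ≠ 0) survives windowing, since windows multiply pointwise. A Python sketch with a Hamming window (parameters are ours):

```python
import numpy as np

N = 19
n = np.arange(N) - (N - 1) // 2                     # centered indices, N odd
h = 2 * 0.25 * np.sinc(2 * n * 0.25)                # half-band prototype, FC = 0.25
w = 0.54 + 0.46 * np.cos(2 * np.pi * n / (N - 1))   # Hamming window (centered form)
hW = h * w

even_nonzero = (n % 2 == 0) & (n != 0)
assert np.allclose(hW[even_nonzero], 0.0)           # even-indexed samples vanish
assert np.isclose(h[n == 0][0], 0.5)                # center sample is 0.5
```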
1. If we use the Kaiser window, we compute the filter length N and the Kaiser parameter as
follows:
δp = (10^{Ap/20} − 1)/(10^{Ap/20} + 1) = 0.0575    δs = 10^{−As/20} = 0.00316    δ = 0.00316    As0 = −20 log δ = 50 dB

N = (As0 − 7.95)/(14.36(Fs − Fp)) + 1 = 18.57 ⇒ 19    β = 0.0351(As0 − 8.7) = 1.4431
The impulse response is therefore hN[n] = 0.5 sinc(0.5n), −9 ≤ n ≤ 9.
Windowing hN [n] gives the required impulse response hW [n].
Figure E20.4A(a) shows that this filter does meet specifications with an attenuation of 0.045 dB
at 8 kHz and 52.06 dB at 16 kHz.
[Panels: (a) Kaiser half-band lowpass filter (β = 1.44, N = 19, FC = 0.25); (b) Hamming half-band lowpass filter (N = 21, FC = 0.25).]
Figure E20.4A Lowpass half-band filters for Example 20.4(a)
2. If we choose a Hamming window, we use Table 20.4 to approximate the odd filter length as
FWS = Fs − Fp = C/N    N = C/(Fs − Fp) = 3.47/(1/6) ≈ 21
This value of N meets specifications and also turns out to be the smallest length that does. Its
response is plotted in Figure E20.4A(b). This filter shows an attenuation of 0.033 dB at 8 kHz
and an attenuation of 53.9 dB at 16 kHz.
1. If we use the Kaiser window, we must compute the filter length N and the Kaiser parameter
as follows:
δp = (10^{Ap/20} − 1)/(10^{Ap/20} + 1) = 0.0575    δs = 10^{−As/20} = 0.00316    δ = 0.00316    As0 = −20 log δ = 50 dB

N = (As0 − 7.95)/(14.36(Fs − Fp)) + 1 = 30.28 ⇒ 31    β = 0.0351(As0 − 8.7) = 1.4431
The prototype impulse response is hN[n] = 0.2 sinc(0.2n), −15 ≤ n ≤ 15. Windowing hN[n]
gives hW[n]. We transform hW[n] to the bandstop form hBS[n], using F0 = 0.25, to give
hBS[n] = δ[n] − 2 cos(2πnF0)hW[n] = δ[n] − 2 cos(0.5πn)hW[n]
Figure E20.4B(a) shows that this filter does meet specifications, with an attenuation of 0.046 dB
at 2 kHz and 3 kHz, and 53.02 dB at 1 kHz and 4 kHz.
[Panels: (a) Kaiser half-band bandstop filter (N = 31, F0 = 0.25, β = 1.44); (b) Hamming half-band bandstop filter (N = 35, F0 = 0.25).]
Figure E20.4B Bandstop half-band filters for Example 20.4(b)
2. For a Hamming window, we use Table 20.4 to approximate the odd filter length as
FWS = Fs − Fp = C/N    N = C/(Fs − Fp) = 3.47/0.1 ≈ 35
This is also the smallest filter length that meets specifications. The magnitude response is shown
in Figure E20.4B(b). We see an attenuation of 0.033 dB at [2, 3] kHz and 69.22 dB at [1, 4] kHz.
By analogy, the continuous (but periodic) spectrum H(F ) can also be recovered from its frequency
samples, using a periodic extension of the sinc interpolating function (that equals zero at the sampling
points). The reconstructed spectrum HN (F ) will show an exact match to a desired H(F ) at the sampling
instants, even though HN (F ) could vary wildly at other frequencies. This is the basis for FIR filter design
by frequency sampling. Given the desired form for H(F ), we sample it at N frequencies, and find the IDFT
of the N-point sequence H[k], k = 0, 1, . . . , N − 1. The following design guidelines stem from both design
aspects and computational aspects of the IDFT itself.
1. The N samples of H(F) must correspond to the digital frequency range 0 ≤ F < 1, with

H[k] = H(F)|_{F=k/N},  k = 0, 1, 2, . . . , N − 1    (20.24)

The reason is that most DFT and IDFT algorithms require samples in the range 0 ≤ k ≤ N − 1.
2. Since h[n] must be real, its DFT H[k] must possess conjugate symmetry about k = 0.5N (this is a
DFT requirement). Note that conjugate symmetry will always leave H[0] unpaired. It can be set to
any real value, in keeping with the required filter type (this is a design requirement). For example, we
must choose H[0] = 0 for bandpass or highpass filters.
3. For even length N, the computed end-samples of h[n] may not turn out to be symmetric. To ensure
symmetry, we must force h[0] to equal h[N − 1] (setting both to 0.5h[0], for example).
4. For h[n] to be causal, we must delay it (this is a design requirement). This translates to a linear
phase shift to produce the sequence |H[k]|e^{jφ[k]}. In keeping with conjugate symmetry about the index
k = 0.5N, the phase for the first N/2 samples of H[k] will be given by

φ[k] = −πk(N − 1)/N,  k = 0, 1, 2, . . . , 0.5(N − 1)    (20.25)

Note that for type 3 and type 4 (antisymmetric) sequences, we must also add a constant phase of 0.5π
to φ[k] (up to k = 0.5N). The remaining samples of H[k] are found by conjugate symmetry.
5. To minimize the Gibbs effect near discontinuities in H(F), we may allow the sample values to vary
gradually between jumps (this is a design guideline). This is equivalent to introducing a finite transition
width. The choice of the sample values in the transition band can affect the response dramatically.
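These guidelines can be sketched in Python for the N = 10 lowpass samples used in the example that follows; the inverse DFT, the symmetry of h[n], and the perfect match at the sample points are all checked:

```python
import numpy as np

N = 10
Hmag = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 1], dtype=float)   # |H[k]| samples
k = np.arange(N)
phi = -np.pi * k * (N - 1) / N           # Eq. (20.25): linear phase for causality
H = Hmag * np.exp(1j * phi)
H[N // 2 + 1:] = np.conj(H[1:N // 2][::-1])   # enforce conjugate symmetry

h = np.fft.ifft(H)
assert np.allclose(h.imag, 0, atol=1e-12)     # real impulse response
h = h.real
assert np.allclose(h, h[::-1])                # symmetric: linear phase
assert np.allclose(np.abs(np.fft.fft(h)), Hmag)   # exact match at sample points
```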
(a) Consider the design of a lowpass filter shown in Figure E20.5(a). Let us sample the ideal H(F) over
0 ≤ F < 1, with N = 10 samples. The magnitude of the sampled sequence H[k] is

|H[k]| = {1, 1, 1, 0, 0, 0, 0, 0, 1, 1}  (conjugate symmetric about k = 0.5N = 5)

With the linear phase φ[k] = −0.9πk of Eq. (20.25), the phase-shifted samples include

H[1] = e^{jφ[1]} = e^{−j0.9π}    H[9] = H*[1] = e^{j0.9π}    H[2] = e^{jφ[2]} = e^{−j1.8π}    H[8] = H*[2] = e^{j1.8π}
The inverse DFT of H[k] yields the symmetric real impulse response sequence h1 [n], with
h1 [n] = {0.0716, 0.0794, 0.1, 0.1558, 0.452, 0.452, 0.1558, 0.1, 0.0794, 0.0716}
Its DTFT magnitude H1 (F ), shown light in Figure E20.5(a), reveals a perfect match at the sampling
points but has a large overshoot near the cutoff frequency. To reduce the overshoot, let us pick
H[2] = 0.5e^{jφ[2]} = 0.5e^{−j1.8π}    H[8] = H*[2] = 0.5e^{j1.8π}
The inverse DFT of this new set of samples yields the new impulse response sequence h2 [n]:
Its frequency response H2 (F ), in Figure E20.5(a), not only shows a perfect match at the sampling
points but also a reduced overshoot, which we obtain at the expense of a broader transition width.
[Panels: (a) lowpass and (b) highpass filter designs by frequency sampling, showing magnitude versus digital frequency F.]
Figure E20.5 Lowpass and highpass filters for Example 20.5 (a and b)
(b) Consider the design of a highpass filter shown in Figure E20.5(b). Let us sample the ideal H(F ) (shown
dark) over 0 F < 1, with N = 10 samples. The magnitude of the sampled sequence H[k] is
|H[k]| = {0, 0, 0, 1, 1, 1, 1, 1, 0, 0}  (conjugate symmetric about k = 0.5N = 5)
The actual (phase-shifted) sequence is H[k] = |H[k]|e^{jφ[k]}. Since the impulse response h[n] must be
antisymmetric (for a highpass filter), φ[k] includes an additional phase of 0.5π and is given by

φ[k] = −πk(N − 1)/N + 0.5π = 0.5π − 0.9πk,  k ≤ 5

Note that H[k] is conjugate symmetric about k = 0.5N = 5, with H[k] = H*[N − k].
Now, H[k] = 0 for k = 0, 1, 2, 8, 9, and H[5] = e^{jφ[5]} = 1. The remaining samples are

H[3] = e^{jφ[3]} = e^{−j2.2π}    H[7] = H*[3] = e^{j2.2π}    H[4] = e^{jφ[4]} = e^{−j3.1π}    H[6] = H*[4] = e^{j3.1π}
The inverse DFT of H[k] yields the antisymmetric real impulse response sequence h1 [n], with
h1 [n] = {0.0716, 0.0794, 0.1, 0.1558, 0.452, 0.452, 0.1558, 0.1, 0.0794, 0.0716}
20.4 FIR Filter Design by Frequency Sampling 739
Its DTFT magnitude H1(F), shown light in Figure E20.5(b), reveals a perfect match at the sampling
points but a large overshoot near the cutoff frequency. To reduce the overshoot, let us choose

H[2] = 0.5e^{jφ[2]} = 0.5e^{−j1.3π}    H[8] = 0.5e^{j1.3π}
The inverse DFT of this new set of samples yields the new impulse response sequence h2 [n]:
h2 [n] = {0.0128, 0.0157, 0.1, 0.0606, 0.5108, 0.5108, 0.0606, 0.1, 0.0157, 0.0128}
Its frequency response H2 (F ), in Figure E20.5(b), not only shows a perfect match at the sampling
points but also a reduced overshoot, which we obtain at the expense of a broader transition width.
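The steps of this example are easy to verify numerically. The sketch below (in Python with NumPy; MATLAB or the ADSP routines would serve equally well) builds the phase-shifted samples H[k], enforces conjugate symmetry, and recovers h1[n] by the inverse DFT:

```python
import numpy as np

# Frequency-sampling design of the N = 10 highpass filter of Example 20.5(b).
# Sample magnitudes and the type 4 (antisymmetric) linear phase follow the text.
N = 10
k = np.arange(N)
mag = np.array([0, 0, 0, 1, 1, 1, 1, 1, 0, 0], dtype=float)
phase = 0.5 * np.pi - 0.9 * np.pi * k          # phi[k] = 0.5*pi - 0.9*pi*k, k <= 5

H = mag * np.exp(1j * phase)
H[N//2 + 1:] = np.conj(H[1:N//2][::-1])        # enforce H[k] = H*[N - k]

h1 = np.real(np.fft.ifft(H))                   # antisymmetric impulse response
```

The result is antisymmetric (h1[n] = −h1[N − 1 − n]) and its DFT magnitude reproduces the samples |H[k]| exactly, as the frequency-sampling method requires.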
Its impulse response hN[n] may be found using the inverse DFT as

h_N[n] = \frac{1}{N} \sum_{k=0}^{N-1} H_N[k] e^{j2\pi nk/N}    (20.27)

Interchanging summations, setting z^{-n} e^{j2\pi nk/N} = [z^{-1} e^{j2\pi k/N}]^n, and using the closed form for the finite geometric sum, we obtain

H(z) = \frac{1}{N} \sum_{k=0}^{N-1} H_N[k] \left[\frac{1 - z^{-N}}{1 - z^{-1} e^{j2\pi k/N}}\right]    (20.29)
740 Chapter 20 FIR Digital Filters
If we factor out exp(−jπF N) from the numerator, exp[−jπ(F − k/N)] from the denominator, and use Euler's relation, we can simplify this result to

H(F) = \sum_{k=0}^{N-1} H_N[k]\,\frac{\mathrm{sinc}[N(F - k/N)]}{\mathrm{sinc}(F - k/N)}\,e^{-j\pi(N-1)(F - k/N)} = \sum_{k=0}^{N-1} H_N[k]\,W[F - k/N]    (20.31)

Here, W[F − k/N] describes a sinc interpolating function, defined by

W[F - k/N] = \frac{\mathrm{sinc}[N(F - k/N)]}{\mathrm{sinc}(F - k/N)}\,e^{-j\pi(N-1)(F - k/N)}    (20.32)

and reconstructs H(F ) from its samples HN[k] taken at intervals 1/N. It equals 1 when F = k/N, and zero at the other sampling points. As a result, HN(F ) equals the desired H(F ) at the sampling instants, even though HN(F ) could vary wildly at other frequencies. This is the concept behind frequency sampling.
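The interpolation formula can be checked numerically for an arbitrary real length-N sequence, as in the following sketch (all names ours):

```python
import numpy as np

# Check of Eqs. (20.31)-(20.32) for an arbitrary real length-N sequence.
N = 8
rng = np.random.default_rng(0)
h = rng.standard_normal(N)
Hk = np.fft.fft(h)                       # frequency samples H_N[k]

def dtft(h, F):
    n = np.arange(len(h))
    return np.sum(h * np.exp(-2j * np.pi * F * n))

def W(x):
    # sinc interpolating function of Eq. (20.32); np.sinc(x) = sin(pi x)/(pi x)
    return np.sinc(N * x) / np.sinc(x) * np.exp(-1j * np.pi * (N - 1) * x)

F = 0.3                                  # an off-sample frequency
recon = sum(Hk[k] * W(F - k / N) for k in range(N))
assert np.isclose(recon, dtft(h, F))                        # sinc sum matches the DTFT
assert all(np.isclose(dtft(h, k / N), Hk[k]) for k in range(N))  # exact at F = k/N
```

The two assertions confirm both halves of the statement: the DTFT passes exactly through its N samples, and between samples it equals the sinc-weighted sum.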
The transfer function H(z) may also be written as the product of two transfer functions:

H(z) = H_1(z)H_2(z) = \left[\frac{1 - z^{-N}}{N}\right] \left[\sum_{k=0}^{N-1} \frac{H_N[k]}{1 - z^{-1} e^{j2\pi k/N}}\right]    (20.33)
This form of H(z) suggests a method of recursive implementation of FIR filters. We cascade a comb filter,
described by H1 (z), with a parallel combination of N first-order resonators, described by H2 (z). Note that
each resonator has a complex pole on the unit circle and the resonator poles actually lie at the same locations
as the zeros of the comb filter. Each pair of terms corresponding to complex conjugate poles may be combined
to form a second-order system with real coefficients for easier implementation.
Why implement FIR filters recursively? There are several reasons. In some cases, this may reduce the
number of arithmetic operations. In other cases, it may reduce the number of delay elements required. Since
the pole and zero locations depend only on N, such filters can be used for all FIR filters of length N by changing only the multiplicative coefficients.
Even with these advantages, things can go wrong. In theory, the poles and zeros balance each other.
In practice, quantization errors may move some poles outside the unit circle and lead to system instability.
One remedy is to multiply the poles and zeros by a real number slightly smaller than unity to relocate the
poles and zeros. The transfer function then becomes
H(z) = \left[\frac{1 - (\rho z^{-1})^N}{N}\right] \sum_{k=0}^{N-1} \frac{H_N[k]}{1 - \rho z^{-1} e^{j2\pi k/N}}    (20.34)

With ρ = 1 − ε, typically used values for ε range from 2^{−12} to 2^{−27} (roughly 10^{−4} to 10^{−9}) and have been shown to improve stability with little change in the frequency response.
Figure 20.16 An optimal filter has ripples of equal magnitude in the passband and stopband

We should therefore expect such a design to yield the smallest filter length, and a response that is equiripple in both the passband and the stopband. A typical spectrum is shown in Figure 20.16.
There are three important concepts relevant to optimal design:
1. The error between the approximation H(F ) and the desired response D(F ) must be equiripple. The error curve must show equal maxima and minima with alternating zero crossings. The more points at which the error goes to zero (the zero crossings), the higher the order of the approximating polynomial and the longer the filter length.
2. The frequency response H(F ) of a filter whose impulse response h[n] is a symmetric sequence can always be put in the form

H(F) = Q(F) \sum_{n=0}^{M} \alpha_n \cos(2\pi nF) = Q(F)P(F)    (20.35)

Here, Q(F ) equals 1 (type 1), cos(πF ) (type 2), sin(2πF ) (type 3), or sin(πF ) (type 4); M is related to the filter length N with M = int[(N − 1)/2] (types 1, 2, 4) or M = int[(N − 3)/2] (type 3); and the α_n are related to the impulse response coefficients h[n]. The quantity P(F ) may also be expressed as a power series in cos(2πF ) (or as a sum of Chebyshev polynomials). If we can select the α_n to best meet optimal constraints, we can design H(F ) as an optimal approximation to D(F ).
3. The error alternates between two equal maxima and minima (extrema).

In other words, we require M + 2 extrema (including the band edges) where the error attains its maximum absolute value. These frequencies yield the smallest filter length (number of coefficients α_k) for optimal design. In some instances, we may get M + 3 extremal frequencies, leading to so-called extra-ripple filters.
The design strategy to find the extremal frequencies invariably requires iterative methods. The most
popular is the algorithm of Parks and McClellan, which in turn relies on the so-called Remez exchange
algorithm.
The Parks-McClellan algorithm requires the band edge frequencies Fp and Fs, the ratio K = δp/δs of the passband and stopband ripple, and the filter length N. It returns the coefficients α_k and the actual design values of δp and δs for the given filter length N. If these values of δp and δs are not acceptable (or do not meet requirements), we can increase N (or change the ratio K), and repeat the design.
A good starting estimate for the filter length N is given by a relation similar to the Kaiser relation for half-band filters, and reads

N = 1 + \frac{-10\log(\delta_p\delta_s) - 13}{14.6\,\Delta F_T} \qquad \delta_p = \frac{10^{A_p/20} - 1}{10^{A_p/20} + 1} \qquad \delta_s = 10^{-A_s/20}    (20.39)

Here, ΔF_T is the digital transition width. More accurate (but more involved) design relations are also available.
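As a sketch, the estimate of Eq. (20.39) for illustrative specifications Ap = 1 dB, As = 50 dB, and a transition width ΔF = 0.05:

```python
import numpy as np

# Starting length estimate of Eq. (20.39); the specifications are illustrative.
def pm_length(Ap, As, dF):
    dp = (10**(Ap / 20) - 1) / (10**(Ap / 20) + 1)   # passband ripple
    ds = 10**(-As / 20)                              # stopband ripple
    N = 1 + (-10 * np.log10(dp * ds) - 13) / (14.6 * dF)
    return N, dp, ds

N, dp, ds = pm_length(Ap=1, As=50, dF=0.05)          # N ~ 34.4
```

Rounding the estimate up gives the trial length passed to the design routine; the ripples delivered by the design then tell us whether N must be increased.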
To explain the algorithm, consider a lowpass filter design. To approximate an ideal lowpass filter, we choose D(F ) and W(F ) as

D(F) = \begin{cases} 1, & 0 \le F \le F_p \\ 0, & F_s \le F \le 0.5 \end{cases} \qquad W(F) = \begin{cases} 1, & 0 \le F \le F_p \\ K = \delta_p/\delta_s, & F_s \le F \le 0.5 \end{cases}    (20.40)
To find the α_k in H(F ), we use the Remez exchange algorithm. Here is how it works. We start with a trial set of M + 2 frequencies Fk, k = 1, 2, . . . , M + 2. To force the alternation condition, we must satisfy ε(Fk) = −ε(Fk+1), k = 1, 2, . . . , M + 1. Since the maximum error is as yet unknown, we let δ = ε(Fk) = −ε(Fk+1). We now have M + 1 unknown coefficients α_k and the unknown δ, a total of M + 2 unknowns. We solve for these by using the M + 2 frequencies to generate the M + 2 equations:

(−1)^k δ = W(Fk)[D(Fk) − H(Fk)],    k = 1, 2, . . . , M + 2    (20.41)

Here, the quantity (−1)^k δ brings out the alternating nature of the error.
Once the α_k and δ are found, the right-hand side of this equation is known in its entirety and is used to compute the extremal frequencies. The problem is that these frequencies may no longer satisfy the alternation condition. So we must go back and evaluate a new set of α_k and δ, using the computed frequencies. We continue this process until the computed frequencies also turn out to be the actual extremal frequencies (to within a given tolerance, of course).
Do you see why it is called the exchange algorithm? First, we exchange an old set of frequencies Fk for a new one. Then we exchange an old set of α_k for a new one. Since the α_k and Fk actually describe the impulse response and frequency response of the filter, we are in essence going back and forth between the two domains until the coefficients α_k yield a spectrum with the desired optimal characteristics.
Many time-saving steps have been suggested to speed the computation of the extremal frequencies, and the α_k, at each iteration. The Parks-McClellan algorithm is arguably one of the most popular methods of filter design in the industry, and many of the better commercial software packages on signal processing include it in their stock list of programs. Having said that, we must also point out two disadvantages of this method. First, the filter length must still be estimated by empirical means. And second, we have no control over the actual ripple that the design yields. The only remedy, if this ripple is unacceptable, is to start afresh with a different set of weight functions or with a different filter length.
20.5 Design of Optimal Linear-Phase FIR Filters 743
Figure E20.6 Optimal filters for Example 20.6 (a and b): magnitude in dB versus digital frequency F
(b) (Optimal Half-Band Filter Design) We design an optimal half-band filter to meet the following
specifications:
Passband edge: 8 kHz, stopband edge: 16 kHz, Ap = 1 dB, and As = 50 dB.
We choose S = 2(fp + fs) = 48 kHz. The digital band edges are Fp = 1/6, Fs = 1/3, and FC = 0.25.
Next, we find the minimum ripple δ and the approximate filter length N as

\delta_p = \frac{10^{A_p/20} - 1}{10^{A_p/20} + 1} = 0.0575 \qquad \delta_s = 10^{-A_s/20} = 0.00316 \qquad \delta = \min(\delta_p, \delta_s) = 0.00316
N = 1 + \frac{-10\log(\delta^2) - 13}{14.6(F_s - F_p)} = 1 + \frac{-20\log(0.00316) - 13}{14.6(1/6)} \approx 16.2 \quad\Rightarrow\quad N = 19

The choice N = 19 is based on the form N = 4k − 1. Next, we design the lowpass prototype hP[n] of even length M = 0.5(N + 1) = 10 and band edges at 2Fp = 2(1/6) = 1/3 and Fs = 0.5. The result is
hP[n] = {0.0074, −0.0267, 0.0708, −0.173, 0.6226, 0.6226, −0.173, 0.0708, −0.0267, 0.0074}
Finally, we zero-interpolate 0.5hP [n] and choose the center coecient as 0.5 to obtain HHB [n]:
{0.0037, 0, −0.0133, 0, 0.0354, 0, −0.0865, 0, 0.3113, 0.5, 0.3113, 0, −0.0865, 0, 0.0354, 0, −0.0133, 0, 0.0037}
Its length is N = 19, as required. Figure E20.6(b) shows that this filter does meet specifications, with
an attenuation of 0.02 dB at 8 kHz and 59.5 dB at 16 kHz.
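The zero-interpolation step of this example can be sketched as follows (the prototype coefficients are taken from the text):

```python
import numpy as np

# Half-band construction: zero-interpolate 0.5*hP[n] and set the center
# coefficient to 0.5 (prototype coefficients from the example).
hP = np.array([0.0074, -0.0267, 0.0708, -0.173, 0.6226,
               0.6226, -0.173, 0.0708, -0.0267, 0.0074])
N = 2 * len(hP) - 1                     # 19 taps, as required
hHB = np.zeros(N)
hHB[0::2] = 0.5 * hP                    # even indices carry 0.5*hP[n]
hHB[N // 2] = 0.5                       # center coefficient set to 0.5
```

The result is a symmetric 19-tap sequence whose dc gain is very nearly unity, as expected of a half-band lowpass filter.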
The filter serves to remove the spectral replications due to the zero interpolation by the up-sampler. These
replicas occur at multiples of the input sampling rate. As a result, the filter stopband edge is computed
from the input sampling rate Sin as fs = Sin fp , while the filter passband edge remains fixed (by the
given specifications). The filter sampling rate corresponds to the output sampling rate Sout and the filter
gain equals the interpolation factor Ik . At each successive stage (except the first), the spectral images occur
at higher and higher frequencies, and their removal requires filters whose transition bands get wider with
each stage, leading to less complex filters with smaller filter lengths. Although it is not easy to establish the
optimum values for the interpolating factors and their order for the smallest overall filter length, it turns
out that interpolating factors in increasing order generally yield smaller overall lengths, and any multistage
design results in a substantial reduction in the filter length as compared to a single-stage design.
For a single-stage interpolator, the output sampling rate is Sout = 48 kHz, and we thus require a filter with a stopband edge of fs = Sin − fp = 4 − 1.8 = 2.2 kHz, a sampling rate of S = Sout = 48 kHz, and a gain of 12. If we use a crude approximation for the filter length as L ≈ 4/ΔF_T, where ΔF_T = (fs − fp)/S is the digital transition width, we obtain L = 4(48/0.4) = 480.
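The crude estimate can be reproduced directly:

```python
# Crude single-stage length estimate L ~ 4/dF from the text:
# Sin = 4 kHz, fp = 1.8 kHz, Sout = 48 kHz.
Sin, Sout, fp = 4.0, 48.0, 1.8
fs = Sin - fp                  # stopband edge set by the first image, 2.2 kHz
dF = (fs - fp) / Sout          # digital transition width at the output rate
L = 4 / dF                     # 480
```

The same three lines, evaluated per stage with that stage's Sin, Sout, and fs, give the stage lengths compared in the multistage designs below.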
(b) If we choose two-stage interpolation with I1 = 3 and I2 = 4, at each stage we compute the important
parameters as follows:
(c) If we choose three-stage interpolation with I1 = 2, I2 = 3, and I3 = 2, at each stage we compute the
important parameters, as follows:
(d) If we choose three-stage interpolation but with the different order I1 = 3, I2 = 2, and I3 = 2, at each stage we compute the important parameters, as follows:
Any multistage design results in a substantial reduction in the filter length as compared to a single-stage
design, and a smaller interpolation factor in the first stage of a multistage design does seem to yield smaller
overall lengths. Also remember that the filter lengths are only a crude approximation to illustrate the relative
merits of each design and the actual filter lengths will depend on the attenuation specifications.
For multistage operations, the actual filter lengths depend not only on the order of the interpolating
factors, but also on the given attenuation specifications Ap and As . Since attenuations in decibels add in
a cascaded system, the passband attenuation Ap is usually distributed among the various stages to ensure
an overall attenuation that matches specifications. The stopband specification needs no such adjustment.
Since the signal is attenuated even further at each stage, the overall stopband attenuation always exceeds
specifications. For interpolation, we require a filter whose gain is scaled (multiplied) by the interpolation
factor of the stage.
The actual filter length of the optimal filter turns out to be N = 233, and the filter shows a passband
attenuation of 0.597 dB and a stopband attenuation of 50.05 dB.
(b) Repeat the design using a three-stage interpolator with I1 = 2, I2 = 3, and I3 = 2. How do the results
compare with those of the single-stage design?
We distribute the passband attenuation (in decibels) equally among the three stages. Thus, Ap = 0.2 dB for each stage, and the ripple parameters for each stage are

\delta_p = \frac{10^{0.2/20} - 1}{10^{0.2/20} + 1} = 0.0115 \qquad \delta_s = 10^{-50/20} = 0.00316
For each stage, the important parameters and the filter length are listed in the following table:
In this table, for example, we compute the filter length for the first stage as

L \approx \frac{-10\log(\delta_p\delta_s) - 13}{14.6(F_s - F_p)} + 1 = \frac{S[-10\log(\delta_p\delta_s) - 13]}{14.6(f_s - f_p)} + 1 = 44
20.6 Application: Multistage Interpolation and Decimation 747
The actual filter lengths of the optimal filters turn out to be 47 (with design attenuations of 0.19 dB
and 50.31 dB), 13 (with design attenuations of 0.18 dB and 51.09 dB), and 4 (with design attenuations
of 0.18 dB and 50.91 dB). The overall filter length is only 64, about one-fourth of the filter length for the single-stage design.
The decimation filter has a gain of unity and operates at the input sampling rate Sin , and its stopband
edge is computed from the output sampling rate as fs = Sout − fp. At each successive stage (except the first), the transition bands get narrower.
The overall filter length does depend on the order in which the decimating factors are used. Although it
is not easy to establish the optimum values for the decimation factors and their order for the smallest overall
filter length, it turns out that decimation factors in decreasing order generally yield smaller overall lengths,
and any multistage design results in a substantial reduction in the filter length as compared to a single-stage
design.
The actual filter lengths also depend on the given attenuation specifications. Since attenuations in
decibels add in a cascaded system, the passband attenuation Ap is usually distributed among the various
stages to ensure an overall value that matches specifications.
(b) If we choose two-stage decimation with D1 = 4 and D2 = 3, at each stage we compute the important
parameters, as follows:
(c) If we choose three-stage decimation with D1 = 2, D2 = 3, and D3 = 2, at each stage we compute the
important parameters, as follows:
(d) If we choose three-stage decimation but with the different order D1 = 2, D2 = 2, and D3 = 3, at each stage we compute the important parameters, as follows:
Here, the dn have the form of binomial coefficients as indicated. Note that 2L − 1 derivatives of |H(F )|² are zero at F = 0 and 2K − 1 derivatives are zero at F = 0.5. This is the basis for the maximally flat response of the filter. The filter length equals N = 2(K + L) − 1, and is thus odd. The integers K and L are determined from the passband and stopband frequencies Fp and Fs that correspond to gains of 0.95 and 0.05 (or attenuations of about 0.5 dB and 26 dB), respectively. Here is an empirical design method:
1. Define the cutoff frequency as FC = 0.5(Fp + Fs) and let ΔF_T = Fs − Fp.
2. Obtain a first estimate of the odd filter length as N0 = 1 + 0.5/ΔF_T².
3. Define the parameter α as α = cos²(πFC).
4. Find the best rational approximation α ≈ K/Mmin, with 0.5(N0 − 1) ≤ Mmin ≤ (N0 − 1).
5. Evaluate L and the true filter length N from L = Mmin − K and N = 2Mmin − 1.
6. Find h[n] as the N-point inverse DFT of H(F ), F = 0, 1/N, . . . , (N − 1)/N.
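The recipe can be sketched directly. The code below applies it to Fp = 0.1 and Fs = 0.4 (the specifications of Problem 20.9); the rational-approximation search in step 4 is our own minimal interpretation:

```python
import math

# Empirical maximally flat design steps for Fp = 0.1, Fs = 0.4.
Fp, Fs = 0.1, 0.4
FC = 0.5 * (Fp + Fs)                        # step 1: FC = 0.25
FT = Fs - Fp
N0 = 1 + 0.5 / FT**2                        # step 2: first length estimate, ~6.6
alpha = math.cos(math.pi * FC)**2           # step 3: cos^2(pi/4) = 0.5

# Step 4: best rational approximation K/Mmin with 0.5(N0-1) <= Mmin <= N0-1.
cands = [(round(alpha * Mm), Mm)
         for Mm in range(math.ceil(0.5 * (N0 - 1)), math.floor(N0 - 1) + 1)]
K, Mmin = min(cands, key=lambda km: abs(km[0] / km[1] - alpha))

L = Mmin - K                                # step 5
N = 2 * Mmin - 1                            # true (odd) filter length
```

Step 6 (the N-point inverse DFT of H(F )) would then produce the impulse response h[n] itself.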
Figure E20.10 Features of the maximally flat lowpass filter for Example 20.10: (a) impulse response h[n], (b) magnitude spectrum in dB, (c) passband detail in dB
[Figure 20.19 shows the magnitude and phase spectrum of the ideal differentiator H(F ) = j2πF.]

The magnitude and phase spectrum of such a differentiator are shown in Figure 20.19. Since H(F ) is odd, h[0] = 0. To find h[n], n ≠ 0, we use the inverse DTFT to obtain

h[n] = \int_{-F_C}^{F_C} j2\pi F\, e^{j2\pi nF}\,dF    (20.44)

For F_C = 0.5, this yields h[0] = 0 and h[n] = \frac{\cos(n\pi)}{n}, n ≠ 0.
With h[0] = 0, we use the inverse DTFT to find its impulse response h[n], n ≠ 0, as

h[n] = \int_{-F_C}^{F_C} -j\,\mathrm{sgn}(F)\, e^{j2\pi nF}\,dF = \int_{0}^{F_C} 2\sin(2\pi nF)\,dF = \frac{1 - \cos(2\pi nF_C)}{n\pi}    (20.47)

For F_C = 0.5, this reduces to h[n] = \frac{1 - \cos(n\pi)}{n\pi}, n ≠ 0.
20.9 Least Squares and Adaptive Signal Processing 751
[Figure 20.21 shows the magnitude response of (a) FIR differentiators (Hamming window, FC = 0.5) and (b) Hilbert transformers (Hamming window, FC = 0.5) for several filter lengths N.]
The design of Hilbert transformers closely parallels the design of FIR differentiators. The sequence h[n] is truncated to hN[n]. The chosen filter must correspond to a type 3 or type 4 sequence, since H(F ) is imaginary. To ensure odd symmetry, the filter coefficients may be computed only for n > 0, with the same values (negated and in reversed order) used for n < 0. If N is odd, we must also include the sample h[0]. We may window hN[n] to minimize the ripples (due to the Gibbs effect) in the spectrum HN(F ). The Hamming window is a common choice, but others may also be used. To make the filter causal, we introduce a delay of (N − 1)/2 samples. The magnitude spectrum of a Hilbert transformer becomes flatter with increasing filter length N. Figure 20.21(b) shows the magnitude response of Hilbert transformers for both even and odd lengths N. Note that H(0) is always zero, but H(0.5) = 0 only for type 3 (odd-length) sequences.
In many practical situations, the set of equations is over-determined and thus amenable to a least-squares
solution. To solve for b, we simply premultiply both sides by X^T to obtain

X^T X b = X^T Y    (20.49)
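As a sketch with illustrative data:

```python
import numpy as np

# Least-squares solution of an overdetermined system via the normal
# equations X^T X b = X^T Y (Eq. 20.49); the data here is illustrative.
rng = np.random.default_rng(2)
X = rng.standard_normal((20, 3))          # 20 equations, 3 unknowns
b_true = np.array([1.0, -2.0, 0.5])
Y = X @ b_true                            # consistent right-hand side

b = np.linalg.solve(X.T @ X, X.T @ Y)     # normal-equations solve
```

When the system is consistent, as here, the normal equations recover the exact solution; with noisy data they return the least-squares fit instead.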
[Block diagram: the input x[n] drives an adaptive FIR filter; its output \hat{y}[n] is subtracted from the desired signal y[n] to form the error e[n], which the adaptation algorithm uses to update the filter coefficients.]

\hat{y}[n] = \sum_{k=0}^{M} b[k]\,x[n-k],    n = 0, 1, . . . , M + N + 1    (20.51)

b_n[k] = b_{n-1}[k] + 2\mu(y[n] - \hat{y}[n])x[n-k] = b_{n-1}[k] + 2\mu\,e[n]\,x[n-k],    0 ≤ k ≤ M    (20.53)
The parameter μ governs both the rate of convergence and the stability of the algorithm. Larger values result in faster convergence, but the filter coefficients tend to oscillate about the optimum values. Typically, μ is restricted to the range 0 < μ < 1/σ_x², where σ_x², the variance of x[n], provides a measure of the power in the input signal x[n].
System Identification
In system identification, the goal is to identify the transfer function (or impulse response) of an unknown
system. Both the adaptive filter and the unknown system are excited by the same input, and the signal y[n]
represents the output of the unknown system. Minimizing e[n] implies that the output of the unknown system
and the adaptive filter are very close, and the adaptive filter coecients describe an FIR approximation to
the unknown system.
Noise Cancellation
In adaptive noise-cancellation systems, the goal is to improve the quality of a desired signal y[n] that may
be contaminated by noise. The signal x[n] is a noise signal, and the adaptive filter minimizes the power in
e[n]. Since the noise power and signal power add (if they are uncorrelated), the signal e[n] (with its power
minimized) also represents a cleaner estimate of the desired signal y[n].
Channel Equalization
In adaptive channel equalization, the goal is to allow a modem to adapt to dierent telephone lines (so as
to prevent distortion and intersymbol interference). A known training signal y[n] is transmitted at the start
of each call, and x[n] is the output of the telephone channel. The error signal e[n] is used to generate an FIR filter (an inverse system) that cancels out the effects of the telephone channel. Once found, the filter coefficients are fixed, and the modem operates with the fixed filter.
CHAPTER 20 PROBLEMS
DRILL AND REINFORCEMENT
20.1 (Symmetric Sequences) Find H(z) and H(F ) for each sequence and establish the type of FIR
filter it describes by checking values of H(F ) at F = 0 and F = 0.5.
(a) h[n] = {1, 0, 1} (b) h[n] = {1, 2, 2, 1}
(c) h[n] = {1, 0, −1} (d) h[n] = {1, 2, −2, −1}
20.2 (Symmetric Sequences) What types of sequences can we use to design the following filters?
(a) Lowpass (b) Highpass (c) Bandpass (d) Bandstop
20.3 (Truncation and Windowing) Consider a windowed lowpass FIR filter with cutoff frequency 5 kHz and sampling frequency S = 20 kHz. Find the truncated, windowed sequence, the minimum delay (in samples and in seconds) to make the filter causal, and the transfer function H(z) of the causal filter if
(a) N = 7, and we use a Bartlett window.
(b) N = 8, and we use a von Hann (Hanning) window.
(c) N = 9, and we use a Hamming window.
20.4 (Spectral Transformations) Assuming a sampling frequency of 40 kHz and a fixed passband, find
the specifications for a digital lowpass FIR prototype and the subsequent spectral transformation to
convert to the required filter type for the following filters.
(a) Highpass: passband edge at 10 kHz, stopband edge at 4 kHz
(b) Bandpass: passband edges at 6 kHz and 10 kHz, stopband edges at 2 kHz and 12 kHz
(c) Bandstop: passband edges 8 kHz and 16 kHz, stopband edges 12 kHz and 14 kHz
20.5 (Window-Based FIR Filter Design) We wish to design a window-based linear-phase FIR filter.
What is the approximate filter length N required if the filter to be designed is
(a) Lowpass: fp = 1 kHz, fs = 2 kHz, S = 10 kHz, using a von Hann (Hanning) window?
(b) Highpass: fp = 2 kHz, fs = 1 kHz, S = 8 kHz, using a Blackman window?
(c) Bandpass: fp = [4, 8] kHz, fs = [2, 12] kHz, S = 25 kHz, using a Hamming window?
(d) Bandstop: fp = [2, 12] kHz, fs = [4, 8] kHz, S = 25 kHz, using a Hamming window?
20.6 (Half-Band FIR Filter Design) A lowpass half-band FIR filter is to be designed using a von Hann
window. Assume a filter length N = 11 and find its windowed, causal impulse response sequence and
the transfer function H(z) of the causal filter.
20.7 (Half-Band FIR Filter Design) Design the following half-band FIR filters, using a Kaiser window.
(a) Lowpass filter: 3-dB frequency 4 kHz, stopband edge 8 kHz, and As = 40 dB
(b) Highpass filter: 3-dB frequency 6 kHz, stopband edge 3 kHz, and As = 50 dB
(c) Bandpass filter: passband edges at [2, 3] kHz, stopband edges at [1, 4] kHz, Ap = 1 dB, and
As = 35 dB
(d) Bandstop filter: stopband edges at [2, 3] kHz, passband edges at [1, 4] kHz, Ap = 1 dB, and
As = 35 dB
Chapter 20 Problems 755
20.8 (Frequency-Sampling FIR Filter Design) Consider the frequency-sampling design of a lowpass
FIR filter with FC = 0.25.
(a) Choose eight samples over the range 0 ≤ F < 1 and set up the frequency response H[k] of the filter.
(b) Compute the impulse response h[n] for the filter.
(c) To reduce the overshoot, modify H[3] and recompute h[n].
20.9 (Maximally Flat FIR Filter Design) Design a maximally flat lowpass FIR filter with normalized
frequencies Fp = 0.1 and Fs = 0.4, and find its frequency response H(F ).
20.10 (FIR Differentiators) Find the impulse response of a digital FIR differentiator with
(a) N = 6, cutoff frequency FC = 0.4, and no window.
(b) N = 6, cutoff frequency FC = 0.4, and a Hamming window.
(c) N = 5, cutoff frequency FC = 0.5, and a Hamming window.
20.11 (FIR Hilbert Transformers) Find the impulse response of an FIR Hilbert transformer with
(a) N = 6, cutoff frequency FC = 0.4, and no window.
(b) N = 6, cutoff frequency FC = 0.4, and a von Hann window.
(c) N = 7, cutoff frequency FC = 0.5, and a von Hann window.
20.14 (Linear Phase and Symmetry) Assume a sequence h[n] with real coefficients with all its poles at z = 0. Argue for or against the following statements. You may want to exploit two useful facts. First, each pair of terms with reciprocal roots such as (z − α) and (z − 1/α) yields an even symmetric impulse response sequence. Second, the convolution of symmetric sequences is also endowed with symmetry.
(a) If all the zeros lie on the unit circle, h[n] must be linear phase.
(b) If h[n] is linear phase its zeros must always lie on the unit circle.
(c) If h[n] is odd symmetric, there must be an odd number of zeros at z = 1.
20.15 (Linear Phase and Symmetry) Assume a linear-phase sequence h[n] and refute the following statements by providing simple examples using zero locations only at z = ±1.
(a) If h[n] has zeros at z = −1, it must be a type 2 sequence.
(b) If h[n] has zeros at z = 1 and z = −1, it must be a type 3 sequence.
(c) If h[n] has zeros at z = 1, it must be a type 4 sequence.
20.16 (Linear Phase and Symmetry) The locations of the zeros at z = ±1 and their number provide useful clues about the type of a linear-phase sequence. What is the sequence type for the following zero locations at z = ±1? Other zero locations are arbitrary but in keeping with linear phase and conjugate reciprocal symmetry.
(a) No zeros at z = ±1
(b) One zero at z = 1, none at z = −1
(c) Two zeros at z = 1, one zero at z = −1
(d) One zero at z = −1, none at z = 1
(e) Two zeros at z = −1, none at z = 1
(f ) One zero at z = 1, one zero at z = −1
(g) Two zeros at z = −1, one zero at z = 1
20.17 (Linear-Phase Sequences) What is the smallest length linear-phase sequence that meets the
requirements listed? Identify all the zero locations and the type of linear-phase sequence.
(a) Zero location: z = e^{j0.25π}; even symmetry; odd length
(b) Zero location: z = 0.5e^{j0.25π}; even symmetry; even length
(c) Zero location: z = e^{j0.25π}; odd symmetry; even length
(d) Zero location: z = 0.5e^{j0.25π}; odd symmetry; odd length
20.18 (Linear-Phase Sequences) The zeros of various finite-length linear-phase filters are given. As-
suming the smallest length, identify the sequence type, find the transfer function of each filter, and
identify the filter type.
(a) Zero location: z = 0.5e^{j0.25π}
(b) Zero location: z = e^{j0.25π}
(c) Zero locations: z = 0.5, z = e^{j0.25π}
(d) Zero locations: z = 0.5, z = 1; odd symmetry
(e) Zero locations: z = 0.5, z = −1; even symmetry
20.19 (IIR Filters and Linear Phase) Even though IIR filters cannot be designed with linear phase,
they can actually be used to provide no phase distortion. Consider a sequence x[n]. We fold x[n],
feed it to a filter H(F ), and fold the filter output to obtain y1 [n]. We also feed x[n] directly to the
filter H(F ) to obtain y2 [n]. The signals y1 [n] and y2 [n] are summed to give y[n].
(a) Sketch a block diagram of this system.
(b) How is Y (F ) related to X(F )?
(c) Show that the signal y[n] suffers no phase distortion.
20.20 (FIR Filter Specifications) Figure P20.20 shows the magnitude and phase characteristics of a
causal FIR filter designed at a sampling frequency of 10 kHz.
[Three panels: magnitude in dB, magnitude, and phase versus digital frequency F over 0 to 0.5]
Figure P20.20 Filter characteristics for Problem 20.20
(a) What are the values of the passband ripple p and stopband ripple s ?
(b) What are the values of the attenuation As and Ap in decibels?
(c) What are the frequencies of the passband edge and stopband edge?
(d) What is the group delay of the filter?
(e) What is the filter length N ?
(f ) Could this filter have been designed using the window method? Explain.
(g) Could this filter have been designed using the optimal method? Explain.
20.21 (FIR Filter Design) It is desired to reduce the frequency content of a hi-fi audio signal band-
limited to 20 kHz and sampled at 44.1 kHz for purposes of AM transmission. Only frequencies up
to 10 kHz are of interest. Frequencies past 15 kHz are to be attenuated by at least 55 dB, and the
passband loss is to be less than 10%. Design a digital filter using the Kaiser window that meets these
specifications.
20.22 (FIR Filter Design) It is desired to eliminate 60-Hz interference from an ECG signal whose
significant frequencies extend to 35 Hz.
(a) What is the minimum sampling frequency we can use to avoid in-band aliasing?
(b) If the 60-Hz interference is to be suppressed by a factor of at least 100, with no appreciable
signal loss, what should be the filter specifications?
(c) Design the filter using a Hamming window and plot its frequency response.
(d) Test your filter on the signal x(t) = cos(40πt) + cos(70πt) + cos(120πt). Plot and compare the frequency response of the sampled test signal and the filtered signal to confirm that your design objectives are met.
20.23 (Digital Filter Design) We wish to design a lowpass filter for processing speech signals. The
specifications call for a passband of 4 kHz and a stopband of 5 kHz. The passband attenuation is to
be less than 1 dB, and the stopband gain is to be less than 0.01. The sampling frequency is 40 kHz.
(a) Design FIR filters, using the window method (with Hamming and Kaiser windows) and using
optimal design. Which of these filters has the minimum length?
(b) Design IIR Butterworth and elliptic filters, using the bilinear transformation, to meet the same
set of specifications. Which of these filters has the minimum order? Which has the best delay
characteristics?
(c) How does the complexity of the IIR filters compare with that of the FIR filters designed with the same specifications? What are the trade-offs in using an IIR filter over an FIR filter?
20.24 (The Effect of Group Delay) The nonlinear phase of IIR filters is responsible for signal distortion. Consider a lowpass filter with a 1-dB passband edge at fp = 1 kHz, a 50-dB stopband edge at fs = 2 kHz, and a sampling frequency of S = 10 kHz.
(a) Design a Butterworth filter HB (z), an elliptic filter HE (z), and an optimal FIR filter HO (z) to
meet these specifications. Using the Matlab routine grpdelay (or otherwise), compute and
plot the group delay of each filter. Which filter has the best (most nearly constant) group delay
in the passband? Which filter would cause the least phase distortion in the passband? What
are the group delays NB , NE , and NO (expressed as the number of samples) of the three filters?
(b) Generate the signal x[n] = 3 sin(0.03πn) + sin(0.09πn) + 0.6 sin(0.15πn) over 0 ≤ n ≤ 100.
Use the ADSP routine filter to compute the response yB [n], yE [n], and yO [n] of each filter.
Plot the filter outputs yB [n], yE [n], and yO [n] (delayed by NB , NE , and NO , respectively) and
the input x[n] on the same plot to compare results. Which filter results in the smallest signal
distortion?
(c) Are all the frequency components of the input signal in the filter passband? If so, how can you
justify that what you observe as distortion is actually the result of the nonconstant group delay
and not the filter attenuation in the passband?
20.25 (Raised Cosine Filters) The impulse response of a raised cosine filter has the form

h_R[n] = h[n]\,\frac{\cos(2\pi nRF_C)}{1 - (4nRF_C)^2}

where the roll-off factor R satisfies 0 < R < 1 and h[n] = 2F_C sinc(2nF_C) is the impulse response of an ideal lowpass filter.
(a) Let FC = 0.2. Generate the impulse response of an ideal lowpass filter with length 21 and the impulse response of the corresponding raised cosine filter with R = 0.2, 0.5, 0.9. Plot the magnitude spectra of each filter over 0 ≤ F ≤ 1 on the same plot, using linear and decibel scales. How does the response in the passband and stopband of the raised cosine filter differ from that of the ideal filter? How do the transition width and peak sidelobe attenuation of the raised cosine filter compare with those of the ideal filter for different values of R? What is the effect of increasing R on the frequency response?
(b) Compare the frequency response of hR [n] with that of an ideal lowpass filter with FC = 0.25.
Is the raised cosine filter related to this ideal filter?
20.26 (Interpolation) The signal x[n] = cos(2F0 n) is to be interpolated by 5, using up-sampling followed
by lowpass filtering. Let F0 = 0.4.
(a) Generate and plot 20 samples of x[n] and up-sample by 5.
(b) What must be the cutoff frequency FC and gain A of a lowpass filter that follows the up-sampler
to produce the interpolated output?
(c) Design an FIR filter (using the window method or optimal design) to meet these specifications.
(d) Filter the up-sampled signal through this filter and plot the result.
(e) Is the filter output an interpolated version of the input signal? Do the peak amplitudes of the
interpolated signal and the original signal match? Should they? Explain.
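A Python sketch of this interpolation chain (zero insertion, then a lowpass filter with cutoff FC = 0.5/5 = 0.1 and gain A = 5; scipy.signal.firwin stands in for the window-method design, and the 201-tap length is an illustrative choice, needed because the first spectral image sits close by at (1 − F0)/5 = 0.12):

```python
import numpy as np
from scipy import signal

F0, L = 0.4, 5                            # original digital frequency, interpolation factor
n = np.arange(100)
x = np.cos(2 * np.pi * F0 * n)

xu = np.zeros(len(x) * L)                 # up-sample: insert L-1 zeros between samples
xu[::L] = x

ntaps = 201
h = L * signal.firwin(ntaps, 0.1 / 0.5)   # gain A = L; firwin cutoffs are in Nyquist units
y = signal.lfilter(h, [1.0], np.concatenate([xu, np.zeros(ntaps // 2)]))
y = y[ntaps // 2:]                        # remove the linear-phase delay of (ntaps-1)/2 samples
```

Away from the edges, every fifth output sample should reproduce the original signal, confirming that the filter interpolates rather than alters the input.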
Chapter 20 Problems 759
20.27 (Multistage Interpolation) To relax the design requirements for the analog reconstruction filter,
many compact disc systems employ oversampling during the DSP stages. Assume that audio signals
are band-limited to 20 kHz and sampled at 44.1 kHz. Assume a maximum passband attenuation of
1 dB and a minimum stopband attenuation of 50 dB.
(a) Design a single-stage optimal interpolating filter that increases the sampling rate to 176.4 kHz.
(b) Design multistage optimal interpolating filters that increase the sampling rate to 176.4 kHz.
(c) Which of the two designs would you recommend?
(d) For each design, explain how you might incorporate compensating filters during the DSP stage
to offset the effects of the sinc distortion caused by the zero-order-hold reconstruction device.
20.28 (Multistage Interpolation) The sampling rate of a speech signal band-limited to 3.4 kHz and
sampled at 8 kHz is to be increased to 48 kHz. Design three different schemes that will achieve
this rate increase and compare their performance. Use optimal FIR filter design where required and
assume a maximum passband attenuation of 1 dB and a minimum stopband attenuation of 45 dB.
20.29 (Multistage Decimation) The sampling rate of a speech signal band-limited to 3.4 kHz and
sampled at 48 kHz is to be decreased to 8 kHz. Design three different schemes that will achieve
this rate decrease and compare their performance. Use optimal FIR filter design where required and
assume a maximum passband attenuation of 1 dB and a minimum stopband attenuation of 45 dB.
How do these filters compare with the filters designed for multistage interpolation in Problem 20.28?
20.30 (Filtering Concepts) This problem deals with time-frequency plots of a combination of sinusoids
and their filtered versions.
(a) Generate 600 samples of the signal x[n] = cos(0.1nπ) + cos(0.4nπ) + cos(0.7nπ) comprising the
sum of three pure cosines at F = 0.05, 0.2, 0.35. Use the Matlab command fft to plot its
DFT magnitude. Use the ADSP routine timefreq to display its time-frequency plot. What do
the plots reveal? Now design an optimal lowpass filter with a 1-dB passband edge at F = 0.1
and a 50-dB stopband edge at F = 0.15 and filter x[n] through this filter to obtain the filtered
signal xf [n]. Plot its DFT magnitude and display its time-frequency plot. What do the plots
reveal? Does the filter perform its function? Plot xf [n] over a length that enables you to identify
its period. Does the period of xf [n] match your expectations?
(b) Generate 200 samples each of the three signals y1 [n] = cos(0.1nπ), y2 [n] = cos(0.4nπ), and
y3 [n] = cos(0.7nπ). Concatenate them to form the 600-sample signal y[n] = {y1 [n], y2 [n], y3 [n]}.
Plot its DFT magnitude and display its time-frequency plot. What do the plots reveal? In what
way does the DFT magnitude plot differ from part (a)? In what way does the time-frequency
plot differ from part (a)? Use the optimal lowpass filter designed in part (a) to filter y[n], obtain
the filtered signal yf [n], plot its DFT magnitude, and display its time-frequency plot. What do
the plots reveal? In what way does the DFT magnitude plot differ from part (a)? In what way
does the time-frequency plot differ from part (a)? Does the filter perform its function? Plot
yf [n] over a length that enables you to identify its period. Does the period of yf [n] match your
expectations?
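For part (a), the peak locations are easy to sketch in Python: with N = 600 samples, components at F = 0.05, 0.2, and 0.35 fall exactly on DFT bins k = NF = 30, 120, and 210 (numpy.fft stands in for Matlab's fft):

```python
import numpy as np

N = 600
n = np.arange(N)
x = sum(np.cos(2 * np.pi * F * n) for F in (0.05, 0.20, 0.35))   # three pure cosines

X = np.abs(np.fft.rfft(x))     # one-sided DFT magnitude
peaks = np.argsort(X)[-3:]     # the three largest-magnitude bins
```

Since each frequency is an exact bin multiple, each cosine contributes a single line of height N/2 = 300 with no leakage.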
20.31 (Decoding a Mystery Message) During transmission, a message signal gets contaminated by a
low-frequency signal and high-frequency noise. The message can be decoded only by displaying it
in the time domain. The contaminated signal x[n] is provided on disk as mystery1.mat. Load this
signal into Matlab (use the command load mystery1). In an effort to decode the message, try the
following methods and determine what the decoded message says.
760 Chapter 20 FIR Digital Filters
(a) Display the contaminated signal. Can you read the message? Display the DFT of the signal
to identify the range of the message spectrum.
(b) Design an optimal FIR bandpass filter capable of extracting the message spectrum. Filter the
contaminated signal and display the filtered signal to decode the message. Use both the filter
(linear-phase filtering) and filtfilt (zero-phase filtering) commands.
(c) As an alternative method, first zero out the DFT component corresponding to the low-frequency
contamination and obtain its IDFT y[n]. Next, design an optimal lowpass FIR filter to reject
the high-frequency noise. Filter the signal y[n] and display the filtered signal to decode the
message. Use both the filter and filtfilt commands.
(d) Which of the two methods allows better visual detection of the message? Which of the two
filtering routines (in each method) allows better visual detection of the message?
20.32 (Filtering of a Chirp Signal) This problem deals with time-frequency plots of a chirp signal and
its filtered versions.
(a) Use the ADSP routine chirp to generate 500 samples of a chirp signal x[n] whose frequency
varies from F = 0 to F = 0.12. Use the Matlab command fft to plot its DFT magnitude.
Use the ADSP routine timefreq to display its time-frequency plot. What do the plots reveal?
Plot x[n] and confirm that its frequency is increasing with time.
(b) Design an optimal lowpass filter with a 1-dB passband edge at F = 0.04 and a 40-dB stopband
edge at F = 0.1 and use the Matlab command filtfilt to obtain the zero-phase filtered
signal y1 [n]. Plot its DFT magnitude and display its time-frequency plot. What do the plots
reveal? Plot y1 [n] and x[n] on the same plot and compare. Does the filter perform its function?
(c) Design an optimal highpass filter with a 1-dB passband edge at F = 0.06 and a 40-dB stopband
edge at F = 0.01 and use the Matlab command filtfilt to obtain the zero-phase filtered
signal y2 [n]. Plot its DFT magnitude and display its time-frequency plot. What do the plots
reveal? Plot y2 [n] and x[n] on the same plot and compare. Does the filter perform its function?
20.33 (Filtering of a Chirp Signal) This problem deals with time-frequency plots of a chirp signal and
a sinusoid and its filtered versions.
(a) Use the ADSP routine chirp to generate 500 samples of a signal x[n] that consists of the sum
of cos(0.6nπ) and a chirp whose frequency varies from F = 0 to F = 0.05. Use the Matlab
command fft to plot its DFT magnitude. Use the ADSP routine psdwelch to display its power
spectral density plot. Use the ADSP routine timefreq to display its time-frequency plot. What
do the plots reveal?
(b) Design an optimal lowpass filter with a 1-dB passband edge at F = 0.08 and a 40-dB stopband
edge at F = 0.25 and use the Matlab command filtfilt to obtain the zero-phase filtered
signal y1 [n]. Plot its DFT magnitude and display its PSD and time-frequency plot. What do
the plots reveal? Plot y1 [n]. Does it look like a signal whose frequency is increasing with time?
Do the results confirm that the filter performs its function?
(c) Design an optimal highpass filter with a 1-dB passband edge at F = 0.25 and a 40-dB stopband
edge at F = 0.08 and use the Matlab command filtfilt to obtain the zero-phase filtered
signal y2 [n]. Plot its DFT magnitude and display its PSD and time-frequency plot. What do
the plots reveal? Plot y2 [n]. Does it look like a sinusoid? Can you identify its period from the
plot? Do the results confirm that the filter performs its function?
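The chirp-plus-zero-phase-filtering workflow of these two problems can be sketched in Python, with scipy.signal.filtfilt playing the role of Matlab's filtfilt; the firwin window design stands in for the optimal filter, and the 0.07 cutoff is an illustrative choice between the band edges used in the text:

```python
import numpy as np
from scipy import signal

N = 500
n = np.arange(N)
F0, F1 = 0.0, 0.12                        # instantaneous frequency sweeps linearly F0 -> F1
x = np.cos(2 * np.pi * (F0 * n + (F1 - F0) * n**2 / (2 * N)))

h = signal.firwin(101, 0.07 / 0.5)        # lowpass, cutoff F = 0.07 (Nyquist-normalized arg)
y = signal.filtfilt(h, [1.0], x)          # zero-phase filtering
```

The early (low-frequency) part of the chirp passes nearly unchanged, while the tail, whose instantaneous frequency exceeds the cutoff, is strongly attenuated.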
20.34 (A Multi-Band Filter) A requirement exists for a multi-band digital FIR filter operating at 140
Hz with the following specifications:
Passband 1: from dc to 5 Hz
Maximum passband attenuation = 2 dB (from peak)
Minimum attenuation at 10 Hz = 40 dB (from peak)
Passband 2: from 30 Hz to 40 Hz
Maximum passband attenuation = 2 dB (from peak)
Minimum attenuation at 20 Hz and 50 Hz = 40 dB (from peak)
(a) Design the first stage as an odd-length optimal filter, using the ADSP routine firpm.
(b) Design the second stage as an odd-length half-band filter, using the ADSP routine firhb with
a Kaiser window.
(c) Combine the two stages to obtain the impulse response of the overall filter.
(d) Plot the overall response of the filter. Verify that the attenuation specifications are met at each
design frequency.
20.35 (Audio Equalizers) Many hi-fi systems are equipped with graphic equalizers to tailor the frequency
response. Consider the design of a four-band graphic equalizer. The first section is to be a lowpass
filter with a passband edge of 300 Hz. The next two sections are bandpass filters with passband edges
of [300, 1000] Hz and [1000, 3000] Hz, respectively. The fourth section is a highpass filter with a
stopband edge at 3 kHz. The sampling rate is to be 20 kHz. Implement this equalizer, using FIR
filters based on window design. Repeat the design, using an optimal FIR filter. Repeat the design,
using an IIR filter based on the bilinear transformation. For each design, superimpose plots of the
frequency response of each section and their parallel combination. What are the differences between
IIR and FIR design? Which design would you recommend?
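One useful sanity check for such an equalizer (a sketch under the assumption that all four sections share the same odd length and window, with no per-band gain scaling): contiguous window-designed FIR bands sum to a pure delay, so the parallel combination at unity gains is allpass. In Python, with scipy.signal.firwin standing in for the window design:

```python
import numpy as np
from scipy import signal

fs = 20000
ntaps = 201                                # same odd length for every section
lp  = signal.firwin(ntaps, 300, fs=fs, scale=False)                        # dc to 300 Hz
bp1 = signal.firwin(ntaps, [300, 1000], pass_zero=False, fs=fs, scale=False)
bp2 = signal.firwin(ntaps, [1000, 3000], pass_zero=False, fs=fs, scale=False)
hp  = signal.firwin(ntaps, 3000, pass_zero=False, fs=fs, scale=False)      # 3 kHz up

h_sum = lp + bp1 + bp2 + hp                # parallel combination at unity gains
```

Because the ideal band responses telescope to a shifted impulse and windowing is linear, h_sum is (to rounding error) a pure delay of (ntaps − 1)/2 samples.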
Chapter 21
MATLAB EXAMPLES
21.0 Introduction
In this last chapter, we conclude with examples that illustrate many of the principles of analog and digital
signal processing, using native Matlab commands and the ADSP toolbox routines supplied with this book.
The code can be readily customized for use with other similar examples. You may wish to consult the
extensive help facility within Matlab to learn more about the Matlab commands.
Installation
The following installation procedure is generic to all platforms. To install the ADSP toolbox,
1. Create two subdirectories (preferably with the names adsp and gui).
2. Copy the adsp files from the disk to your adsp subdirectory.
3. Copy the gui files from the disk to your gui subdirectory.
4. Add the names of the two subdirectories to the Matlab path.
Testing
To check the proper operation of the m-files in the ADSP toolbox, start Matlab and, at the Matlab
prompt (>>), type
tour
The ADSP routine tour executes many of the m-files and prompts you to press a key to continue.
For a tour of all the GUI (graphical user interface) programs, type
tourgui
21.2 Matlab Tips and Pointers
Bugs or Problems
If you encounter problems while installing or operating the toolbox, or wish to report any bugs or improvements,
we would like to hear from you. You may contact us at the following e-mail addresses, or visit our
Internet site for updates and other information:
Ashok Ambardar Michigan Technological University
Internet: http://www.ee.mtu.edu/faculty/akambard.html
e-mail: [email protected]
Brooks/Cole Publishing, 511 Forest Lodge Road, Pacific Grove, CA 93950-5098, USA
Internet: http://www.brookscole.com
e-mail: [email protected]
Q. Why do I get an error while using plot(t,x), with x and t previously defined?
A. Both t and x must have the same dimensions (size). To check, use size(t) and size(x).
Q. Why do I get an error while generating x(t) = 2^(-t) sin(2t), with the array t previously defined?
A. You probably did not use array operations (such as .* or .^). Try >> x=(2 .^ (-t)).*sin(2*t)
Q. How can I plot several graphs on the same plot?
A. Example: use >>plot(t,x),hold on,plot(t1,x1),hold off OR >>plot(t,x,t1,x1)
Q. How do I use subplot to generate 6 plots in the same window (2 rows of 3 subplots)?
A. Example: use >>subplot(2,3,n);plot(t,x) for the nth (count by rows) plot (n cycles from 1 to 6).
A subplot command precedes each plot command and has its own xlabel, ylabel, axis, title, etc.
t=-2:0.01:4; % CT index (-2 to 4)
x1=ustep(t-1);plot(t,x1) % (ustep is an ADSP routine)
x2=uramp(t-1);plot(t,x2) % (uramp is an ADSP routine)
x3=urect(0.5*t-1);plot(t,x3) % (urect is an ADSP routine)
x4=tri(2*t-2);plot(t,x4) % (tri is an ADSP routine)
x5=sinc(t/2);plot(t,x5)
x6=4*exp(-2*t).*cos(pi*t).*ustep(t);plot(t,x6)
t0=1;t1=-2:0.1*t0:2;x1=(1/t0)*exp(-pi*t1.*t1/t0/t0);
t0=0.5;t2=-2:0.1*t0:2;x2=(1/t0)*exp(-pi*t2.*t2/t0/t0);
t0=0.1;t3=-2:0.1*t0:2;x3=(1/t0)*exp(-pi*t3.*t3/t0/t0);
t0=0.05;t4=-2:0.1*t0:2;x4=(1/t0)*exp(-pi*t4.*t4/t0/t0);
t0=0.01;t5=-2:0.1*t0:2;x5=(1/t0)*exp(-pi*t5.*t5/t0/t0);
plot(t1,x1,t2,x2,t3,x3,t4,x4,t5,x5)
a1=sum(x1)*0.1;a2=sum(x2)*0.05;
a3=sum(x3)*0.01;a4=sum(x4)*0.005;
a5=sum(x5)*0.001; % First generate each area
a=[a1;a2;a3;a4;a5] % Display (all should be 1)
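The same experiment in Python: each Gaussian (1/t0)exp(−πt²/t0²) narrows and grows as t0 shrinks while keeping unit area, which is why it serves as a model for the impulse δ(t). This sketch mirrors the sum(x)*dt area computation above:

```python
import numpy as np

areas = []
for t0 in (1, 0.5, 0.1, 0.05, 0.01):
    dt = 0.1 * t0                          # finer grid as the pulse narrows
    t = np.arange(-2, 2 + dt, dt)
    x = (1 / t0) * np.exp(-np.pi * t**2 / t0**2)
    areas.append(dt * np.sum(x))           # Riemann-sum area, as in sum(x)*dt above
```

All five areas should come out (very nearly) equal to 1.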
magnitude and phase, and the sum and difference of the real and imaginary parts. For the last case, derive
analytic expressions for the sequences plotted and compare with your plots. Which plots allow you to determine
the period of w[n]? What is the period?
n=-20:20;
j=sqrt(-1);
w=7.071*exp(j*(n*pi/9-pi/4));
dtplot(n,real(w),'o'),dtplot(n,imag(w),'o')
dtplot(n,abs(w),'o'),dtplot(n,180*angle(w)/pi,'o')
dtplot(n,real(w)+imag(w),'o'),dtplot(n,real(w)-imag(w),'o')
n=-10:10;
x=uramp(n+6)-uramp(n+3)-uramp(n-3)+uramp(n-6);
[n1,y1]=operate(n,x,1,-4);
[n2,y2]=operate(n,x,1,4);
[n3,y3]=operate(n,x,-1,-4);
[n4,y4]=operate(n,x,-1,4);
dtplot(n,x),dtplot(n1,y1,'o') % Similar plot commands for y2 etc.
n=0:15; % DT index
x=uramp(n)-uramp(n-5)-5*uramp(n-10); % Generate x[n]
e=sum(x.*x) % Energy in x[n]
dtplot(n,x) % Plot x[n] using dtplot
xd=uramp(n-2)-uramp(n-7)-5*uramp(n-12); % Generate x[n-2]
stem(n,xd) % Plot x[n-2] using stem
[xe,xo,nn]=evenodd(x,n); % Even, odd parts, and index nn
dtplot(nn,xe) % Plot even part of x[n]
dtplot(nn,xo) % Plot odd part of x[n]
y=perext(x,7); % Periodic extension xpe[n]
np=0:20;dtplot(np,[y y y]) % Plot 3 periods over n=0:20
pwr=sum(y.*y)/7 % Power in xpe[n]
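The even-odd decomposition performed by evenodd can be sketched in Python: extend x[n] (defined for n ≥ 0) with zeros over a symmetric index range, then xe[n] = (x[n] + x[−n])/2 and xo[n] = (x[n] − x[−n])/2. A sketch using the same ramp signal:

```python
import numpy as np

def uramp(n):
    """Unit ramp r[n] = n u[n]."""
    return np.where(n >= 0, n, 0).astype(float)

n = np.arange(16)
x = uramp(n) - uramp(n - 5) - 5 * uramp(n - 10)   # same x[n] as above

nn = np.arange(-15, 16)                    # symmetric index range
xs = np.zeros(len(nn))
xs[15:] = x                                # x[n] is zero for n < 0
xe = (xs + xs[::-1]) / 2                   # even part
xo = (xs - xs[::-1]) / 2                   # odd part
```

The parts recombine to give x[n], and (since the cross term sums to zero) the signal energy splits exactly between them.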
21.5 Examples of Matlab Code
Comment:
On systems with sound capability, one could create a longer signal (2 s or so) at the natural sampling rate
and listen to the pure, noisy, and averaged signals for audible differences, using the Matlab command sound.
Consider an analog system whose transfer function is H(s) = 2/(s + 2).
Find and plot its impulse response h(t) and step response w(t).
Find its response ya (t) to x(t) = 2e^(-3t) u(t) if y(0) = 4.
Find its response yb (t) to x(t) = cos(3t)u(t).
The output arguments [yt, yzs, yzi] are the total, zero-state, and zero-input response.
The impulse response is found by omitting the last two arguments (X and IC).
Note: Since the output of sysresp1 is a symbolic (or string) expression, it must first be evaluated (using
the eval command) for the selected range of n before it can be plotted.
Response by Recursion
The ADSP routine dtsim allows the recursive solution of difference equations. Its syntax is
yd=dtsim(RHS, LHS, XN, IC)
Here XN is the input array x[n] over an index range n.
The response yd has the same length as XN. The argument IC can be omitted for zero initial conditions.
For zero initial conditions, the MATLAB command filter(RHS,LHS,XN) yields identical results.
Consider the second-order system y[n] − 0.64y[n−2] = x[n] with x[n] = 20(0.8)^n u[n], and the initial conditions
y[−1] = 5, y[−2] = 25. Find and plot the total, zero-state, and impulse response over 0 ≤ n ≤ 30, using both
dtsim and sysresp1.
RHS=[1 0 0];LHS=[1 0 -0.64]; % RHS and LHS of difference eq
X=[20 0.8 0 0 0 0]; IC=[5;25]; % Input and initial conditions
[yt,yzs]=sysresp1('z',RHS,LHS,X,IC) % Symbolic response
h=sysresp1('z',RHS,LHS) % Symbolic impulse response
n=0:30; % DT index to evaluate sysresp1 output
dtplot(n,eval(yt),o) % Evaluate response and plot
dtplot(n,eval(h),o) % Evaluate and plot impulse response
%
xin=20*(0.8 .^ n); % Input array for DTSIM
ytd=dtsim(RHS,LHS,xin,IC); % Numerical response
yzsd=dtsim(RHS,LHS,xin); % Numerical zero-state response
hd=dtsim(RHS,LHS,udelta(n)); % Numerical impulse response
dtplot(n,ytd,o) % Plot numerical response
dtplot(n,hd,o) % Plot numerical impulse response
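The recursion that dtsim carries out can also be written directly in Python. Taking the system as y[n] = x[n] + 0.64y[n−2] with y[−1] = 5 and y[−2] = 25, the zero-state case should agree with scipy.signal.lfilter (the counterpart of Matlab's filter):

```python
import numpy as np
from scipy import signal

n = np.arange(31)
x = 20 * 0.8 ** n                          # x[n] = 20(0.8)^n u[n]

def recurse(x, ym1, ym2):
    """Solve y[n] = x[n] + 0.64 y[n-2] by recursion, given y[-1] and y[-2]."""
    y = np.zeros(len(x))
    for k in range(len(x)):
        y[k] = x[k] + 0.64 * ym2
        ym1, ym2 = y[k], ym1               # shift the state one step
    return y

yt  = recurse(x, 5.0, 25.0)                # total response (with initial conditions)
yzs = recurse(x, 0.0, 0.0)                 # zero-state response
```

As a spot check, y[0] = x[0] + 0.64 y[−2] = 20 + 16 = 36 for the total response.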
The FFT results may show an imaginary part which is zero (to within machine roundo).
Comment: Note that y5 [n] has the same period as y1 [n] but is five times as long, and its DFT components are
five times as large. Normalizing (dividing) the DFT of y1 [n] by its length N = 6, or that of y5 [n] by its length
5N = 30, we obtain identical results over one period.
The spectrum of the correlated noisy signal can be used to estimate the frequency of the signal.
Generate two noisy signals by adding noise to a 20-Hz sinusoid sampled at ts = 0.01 s for 2 s.
1. Verify the presence of the signal by correlating the two noisy signals.
2. Estimate the frequency of the signal from the FFT spectrum of the correlation.
1. Generate 800 samples of a chirp signal x whose digital frequency varies from F = 0.05 to F = 0.4.
Observe how the frequency of x varies linearly with time, using the ADSP command timefreq(x).
2. Consider two filters described by
h1 [m] = δ[m] − 0.4 cos(0.4mπ) sinc(0.2m), −80 ≤ m ≤ 80        h2 [m] = 0.3 sinc(0.3m), −80 ≤ m ≤ 80
Generate the response of the two filters to xc and use timefreq and psdwelch to plot their spectra.
Identify each filter type and determine the range of digital frequencies it passes or blocks.
Comment: On machines with sound capability, one could use the natural sampling rate to generate and
listen to the chirp signal and the filter response for audible differences, using sound.
Comment: For large time constants, the response should be a triangular wave because the RC circuit would behave
(very nearly) as an integrator.
EXAMPLE 21.43 (Analog Filter Design from a Specified Order (Ch. 13))
The ADSP command [n,d]=lpp('name', n, [Ap As]) returns the transfer function of an nth-order lowpass
prototype. To transform this to the required form, we use the command [nn, dd]=lp2af('ty', n, d, fc,
f0). It requires one band edge (fc) if ty='lp' or ty='hp', and the bandwidth (fc) and center frequency
(f0) if ty='bp' or ty='bs'. All frequencies are in rad/s (not Hz). Design the following:
1. A third-order Chebyshev II LPF with a 2-dB passband of 20 Hz and As ≥ 40 dB
2. An eighth-order Chebyshev BPF with a 3-dB bandwidth of 1 kHz and a center frequency of 5 kHz
[n1, d1]=lpp('c2', 3, [2 40]) % Lowpass prototype; order=3
[n2, d2]=lp2af('lp', n1, d1, 20*2*pi) % Lowpass filter
[n3, d3]=lpp('c1', 4, 3); % Lowpass prototype; order=4
[n4, d4]=lp2af('bp', n3, d3, 4000*2*pi, 1000*2*pi) % BPF; order=8
We use two calls to afd to design the filters, adjust the gain, and combine the two stages in parallel.
Since 3.2 = 16/5, the resampled signal of the second part can be generated by decimating the 16-fold interpolated
signal of the first part by 5.
Comment: In theory, perfect recovery is possible for a band-limited signal using sinc interpolation. The
results do not show this because signal values outside the one-period range are assumed to be zero during
the interpolation.
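The sinc-interpolation experiment is easy to sketch in Python: the reconstruction x_r(t) = Σ_n x[n] sinc((t − n·ts)/ts) is exact at the sampling instants, while the edge regions degrade because samples outside the window are treated as zero (the sampling rate and test signal below are illustrative):

```python
import numpy as np

ts = 0.05                                  # sampling interval (20-Hz rate)
n = np.arange(40)                          # one 2-s window of samples
x = np.sin(2 * np.pi * 2 * n * ts)         # 2-Hz sine, well below the 10-Hz Nyquist limit

t = np.arange(0, 2, 0.005)                 # dense reconstruction grid
# x_r(t) = sum_n x[n] sinc((t - n*ts)/ts)
xr = np.sum(x[:, None] * np.sinc((t[None, :] - n[:, None] * ts) / ts), axis=0)
```

At t = k·ts the sinc terms collapse to a Kronecker delta, so the reconstruction passes through every sample exactly; the interior of the window is recovered well, the edges less so.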
EXAMPLE 21.50 (The DTFT, Periodicity, Convolution, and Multiplication (Ch. 15))
Let x[n] = tri((n − 4)/4), 0 ≤ n ≤ 8. Obtain the DTFT of x[n], g[n] = x[n] ⋆ x[n], and h[n] = x²[n] over
−2 ≤ F ≤ 1.99.
n=0:8; x=tri((n-4)/4);
F=-2:0.01:1.99; W=2*pi*F;
X=freqz(x,[1 zeros(1,8)],W);
G=freqz(conv(x,x),[1 zeros(1,16)],W);
H=freqz(x.*x,[1 zeros(1,8)],W);
plot(F,abs(X))
plot(F,abs(X).^2,F,abs(G),:)
Yp=convp(X,X)/length(X);
plot(F,abs(Yp),F,abs(H),:)
Comment: You should observe that |X(F)| is periodic with unit period, |X(F)|² equals |G(F)|, and |Yp(F)|
equals |H(F)|.
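These observations follow from the convolution property: g[n] = x[n] ⋆ x[n] has G(F) = X²(F), so |G(F)| = |X(F)|². A Python sketch that evaluates the DTFT by direct summation (in place of the freqz trick above):

```python
import numpy as np

n = np.arange(9)
x = 1 - np.abs(n - 4) / 4                  # tri((n-4)/4) over 0 <= n <= 8

F = np.arange(-2, 2, 0.01)                 # two periods of the unit-periodic DTFT

def dtft(sig, F):
    """X(F) = sum_k sig[k] exp(-j 2 pi F k), by direct summation."""
    k = np.arange(len(sig))
    return sig @ np.exp(-2j * np.pi * np.outer(k, F))

X = dtft(x, F)
G = dtft(np.convolve(x, x), F)             # g[n] = x[n] (*) x[n], linear convolution
```

|X|² matches |G| pointwise, and shifting F by 1 leaves X unchanged, confirming the unit period.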
EXAMPLE 21.52 (The Gibbs Effect and Periodic Convolution (Ch. 15))
An ideal LPF with cutoff frequency FC has the impulse response h[n] = 2FC sinc(2FC n). Symmetric truncation
(about n = 0) leads to the Gibbs effect (overshoot and oscillations in the frequency response). Truncation
of h[n] to |n| ≤ N is equivalent to multiplying h[n] by a rectangular window wD [n] = rect(n/2N ). In the
frequency domain, it is equivalent to the periodic convolution of H(F ) and WD (F ) (the Dirichlet kernel).
To eliminate overshoot and reduce the oscillations, we choose tapered windows whose DTFT sidelobes decay
much faster than those of the rectangular window.
Consider an ideal filter with FC = 0.2 whose impulse response h[n] is truncated over −20 ≤ n ≤ 20.
1. Show that its periodic convolution with the Dirichlet kernel results in the Gibbs effect.
2. Show that the Gibbs effect is absent in its periodic convolution with the Fejer kernel.
Comment: Since the spectrum of h[n] is generated over −0.5 ≤ F < 0.5, one period of the periodic convolution will cover
the same range only if the Dirichlet and Fejer kernels are generated over 0 ≤ F < 1. Also note that the
periodic convolution is normalized (divided) by the number of samples.
T=0.1;t=0:0.05/200:T;
x=cos(200*pi*t)-cos(400*pi*t);
S=500;tn1=0:1/S:T;
xn1=cos(200*pi*tn1)-cos(400*pi*tn1);
f1=alias(100,S);f2=alias(200,S);
xr1=cos(2*pi*f1*t)-cos(2*pi*f2*t);
plot(t,x,t,xr1),hold on
dtplot(tn1,xn1,'o'),hold off
S=300;tn2=0:1/S:T;
xn2=cos(200*pi*tn2)-cos(400*pi*tn2);
f1=alias(100,S);f2=alias(200,S);
xr2=cos(2*pi*f1*t)-cos(2*pi*f2*t);
plot(t,x,t,xr2),hold on
dtplot(tn2,xn2,'o'),hold off
S=70;tn3=0:1/S:T;
xn3=cos(200*pi*tn3)-cos(400*pi*tn3);
f1=alias(100,S);f2=alias(200,S);
xr3=cos(2*pi*f1*t)-cos(2*pi*f2*t);
plot(t,x,t,xr3),hold on
dtplot(tn3,xn3,'o'),hold off
Comment: You should observe that values of x[n] match x(t) and xr (t) for each sampling rate, but x(t)
and xr (t) match only when there is no aliasing.
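The folding computation performed by the ADSP routine alias can be sketched in Python under the usual definition fa = |f − S·round(f/S)| (an assumption about alias's exact convention):

```python
import numpy as np

def alias(f, S):
    """Principal alias (in [0, S/2]) of an analog frequency f for sampling rate S."""
    f = np.asarray(f, dtype=float)
    return np.abs(f - S * np.round(f / S))
```

For the rates above: S = 500 leaves both components (100 and 200 Hz) alone; S = 300 folds 200 Hz onto 100 Hz; S = 70 folds them to 30 Hz and 10 Hz.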
ts=1/8192;t=0:ts:2;
a1=2048;xc1=cos(pi*a1*t.*t);
fi1=a1*t;fa1=alias(fi1,8192);
plot(fi1,fa1)
a2=8192;xc2=cos(pi*a2*t.*t);
fi2=a2*t;fa2=alias(fi2,8192);
plot(fi2,fa2)
Comment: On machines with sound capability, you could listen to the signals using sound and understand
how the frequency changes with time.
EXAMPLE 21.65 (IIR Filter Design from a Specified Order (Ch. 19))
We start with a lowpass prototype (LPP) of the given order (using the ADSP routine lpp), convert it to
a digital LPP (using s2zinvar, s2zmatch, or s2zni), and apply D2D frequency transformations (using the
ADSP routine lp2iir) to obtain the required IIR filter. For the bilinear mapping, lp2iir allows direct A2D
transformation of the LPP. The general syntax for lp2iir is
[n, d]=lp2iir(ty, ty1, Nlpp, Dlpp, SF, F)
Here, ty = 'lp', 'hp', 'bp', or 'bs' and ty1='a' for A2D or ty1='d' for D2D transformation.
The argument F contains the cutoff band edge(s).
Design a sixth-order Chebyshev BPF with 3-dB edges at [2, 4] kHz and S = 10 kHz, using the bilinear
transformation. Repeat the design, using impulse invariance.
[n, d]=lpp('c1', 3, 3) % Analog lowpass prototype (order=3)
S=10000;F=[2000 4000];FN=F/S; % Frequency data
[NB, DB]=lp2iir('bp', 'a', n, d, 1, FN) % Direct design for bilinear
[N2, D2]=s2zinvar(n, d, 1, 'i', 0); % Digital LPP by impulse invariance
[NI, DI]=lp2iir('bp', 'd', N2, D2, 1, FN) % Convert to required filter
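An equivalent bilinear design can be sketched in Python with scipy.signal.cheby1, which prewarps the band edges and applies the bilinear transformation internally; a third-order Chebyshev I lowpass prototype transformed to bandpass gives the required sixth-order filter:

```python
import numpy as np
from scipy import signal

fs = 10000
# 3rd-order lowpass prototype -> 6th-order bandpass; 3-dB ripple puts the
# band edges at exactly -3 dB after the bilinear transformation
b, a = signal.cheby1(3, 3, [2000, 4000], btype='bandpass', fs=fs)

w, H = signal.freqz(b, a, worN=[1000, 2000, 3000, 4000], fs=fs)
mag = np.abs(H)
```

The magnitude should equal 10^(−3/20) ≈ 0.708 at both band edges, stay within the ripple band inside the passband, and be heavily attenuated at 1 kHz.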
Ambardar, A., and C. Borghesani, Mastering DSP Concepts Using Matlab (Prentice Hall, 1998).
Antoniou, A., Digital Filters (McGraw-Hill, 1993).
Burrus, C. S., et al., Computer-Based Exercises for Digital Signal Processing (Prentice Hall, 1994).
Cadzow, J. A., Foundations of Digital Signal Processing and Data Analysis (Macmillan, 1987).
Cunningham, E. P., Digital Filtering (Houghton Mifflin, 1992).
DeFatta, D. J., J. G. Lucas, and W. S. Hodgkiss, Digital Signal Processing (Wiley, 1988).
Ifeachor, E. C., and B. W. Jervis, Digital Signal Processing (Addison-Wesley, 1993).
Ingle, V. K., and J. G. Proakis, Digital Signal Processing Using Matlab (PWS, 1996).
Jackson, L. B., Digital Filters and Signal Processing (Kluwer, 1995).
Johnson, J. R., Introduction to Digital Signal Processing (Prentice Hall, 1989).
Kuc, R., Introduction to Digital Signal Processing (McGraw-Hill, 1988).
Ludeman, L. C., Fundamentals of Digital Signal Processing (Wiley, 1986).
Mitra, S. K., Digital Signal Processing (Wiley, 1998).
Oppenheim, A. V., and R. W. Schafer, Digital Signal Processing (Prentice Hall, 1975).
Oppenheim, A. V., and R. W. Schafer, Discrete-Time Signal Processing (Prentice Hall, 1989).
Orfanidis, S. J., Introduction to Signal Processing (Prentice Hall, 1996).
Porat, B., A Course in Digital Signal Processing (Wiley, 1997).
Proakis, J. G., and D. G. Manolakis, Digital Signal Processing (Prentice Hall, 1996).
Rabiner, L. R., and B. Gold, Theory and Application of Digital Signal Processing (Prentice Hall, 1974).
Steiglitz, K., A Digital Signal Processing Primer (Addison-Wesley, 1996).
Strum, R. D., and D. E. Kirk, First Principles of Discrete Systems and Digital Signal Processing
(Addison-Wesley, 1988).
Signals and Systems
Carlson, G. E., Signal and Linear System Analysis (Houghton Mifflin, 1992).
Chen, C. T., System and Signal Analysis (Saunders, 1989).
Gabel, R. A., and R. A. Roberts, Signals and Linear Systems (Wiley, 1987).
Haykin, S., and B. Van Veen, Signals and Systems (Wiley, 1999).
Kamen, E. W., and B. S. Heck, Fundamentals of Signals and Systems (Prentice Hall, 1997).
Kwakernaak, H., and R. Sivan, Modern Signals and Systems (Prentice Hall, 1991).
Lathi, B. P., Signals and Systems (Berkeley-Cambridge, 1987).
McGillem, C. D., and G. R. Cooper, Continuous and Discrete Signal and System Analysis (Saunders,
1991).
Oppenheim, A. V., et al., Signals and Systems (Prentice Hall, 1997).
Phillips, C. L., and J. M. Parr, Signals, Systems, and Transforms (Prentice Hall, 1995).
Siebert, W. McC., Circuits, Signals and Systems (MIT Press/McGraw-Hill, 1986).
Soliman, S. S., and M. D. Srinath, Continuous and Discrete Signals and Systems (Prentice Hall, 1990).
Strum, R. D., and D. E. Kirk, Contemporary Linear Systems using Matlab (PWS, 1994).
Taylor, F. J., Principles of Signals and Systems (McGraw-Hill, 1994).
Ziemer, R. E., W. H. Tranter, and D. R. Fannin, Signals and Systems: Continuous and Discrete
(Macmillan, 1993).
Window Functions
Geckinli, N. C., and D. Yavuz, Discrete Fourier Transformation and Its Applications to Power Spectra
Estimation (Elsevier, 1983).
Harris, F. J., "On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform,"
Proceedings of the IEEE, vol. 66, no. 1, pp. 51–83, Jan. 1978.
Nuttall, A. H., "Some Windows with Very Good Sidelobe Behavior," IEEE Transactions on Acoustics,
Speech and Signal Processing, vol. 29, no. 1, pp. 84–91, Feb. 1981 (containing corrections to the paper
by Harris).
Analog Filters
Daniels, R. W., Approximation Methods for Electronic Filter Design (McGraw-Hill, 1974).
Weinberg, L., Network Analysis and Synthesis (McGraw-Hill, 1962).
Applications of DSP
Elliott, D. F. (ed.), Handbook of Digital Signal Processing (Academic Press, 1987).
A good resource for audio applications on the Internet is
http://www.harmonycentral.com/Effects/
Mathematical Functions
Abramowitz, M., and I. A. Stegun (eds.), Handbook of Mathematical Functions (Dover, 1964).
Numerical Methods
Press, W. H., et al., Numerical Recipes (Cambridge University Press, 1986).