DIGITAL SIGNAL PROCESSING
Signals and Systems
S Salivahanan
A Vallavaraj
C Gnanapriya

Information contained in this work has been obtained by
Tata McGraw-Hill, from sources believed to be reliable.
However, neither Tata McGraw-Hill nor its authors
guarantee the accuracy or completeness of any information
published herein, and neither Tata McGraw-Hill nor its
authors shall be responsible for any errors, omissions, or
damages arising out of use of this information. This work
is published with the understanding that Tata McGraw-Hill
and its authors are supplying information but are not
attempting to render engineering or other professional
services. If such services are required, the assistance of an
appropriate professional should be sought.
Tata McGraw-Hill
© 2000, Tata McGraw-Hill Publishing Company Limited
21st reprint 2007
No part of this publication may be reproduced in any form or by any
means without the prior written permission of the publishers
This edition can be exported from India by the publishers,
Tata McGraw-Hill Publishing Company Limited
ISBN 0-07-463996-X
Published by Tata McGraw-Hill Publishing Company Limited,
7 West Patel Nagar, New Delhi 110 008, typeset at
Anvi Composers, A1/33 Pashchim Vihar, New Delhi 110 063 and printed at
A P Offset Pvt. Ltd., Naveen Shahdara, Delhi 110 032
Contents
Foreword
Preface vii
1. Classification of Signals and Systems 1
1.1 Introduction 1
1.2 Classification of Signals 3
1.3 Singularity Functions 9
1.4 Amplitude and Phase Spectra 15
1.5 Classification of Systems 17
1.6 Simple Manipulations of Discrete-time Signals 21
1.7 Representations of Systems 23
1.8 Analog-to-Digital Conversion of Signals 28
Review Questions 37
2. Fourier Analysis of Periodic and
Aperiodic Continuous-Time Signals and Systems 40
2.1 Introduction 40
2.2 Trigonometric Fourier Series 41
2.3 Complex or Exponential form of Fourier Series 52
2.4 Parseval’s Identity for Fourier Series 58
2.5 Power Spectrum of a Periodic Function 59
2.6 Fourier Transform 62
2.7 Properties of Fourier Transform 64
2.8 Fourier Transform of Some Important Signals 75
2.9 Fourier Transform of Power and Energy Signals 103
Review Questions 119
3. Applications of Laplace Transform to System Analysis 127
3.1 Introduction 127
3.2 Definition 128
3.3 Region of Convergence (ROC) 128
3.4 Laplace Transforms of Some Important Functions 129
3.5 Initial and Final Value Theorems
3.6 Convolution Integral 138
3.7 Table of Laplace Transforms 142
3.8 Partial Fraction Expansions 144
3.10 s-plane Poles and Zeros 147
3.11 Laplace Transform of Periodic Functions 154
3.12 Application of Laplace Transformation in
Analysing Networks 157
Review Questions 183
4. z-Transforms 193
4.1 Introduction 193
4.2 Definition of the z-transform 196
4.3 Properties of z-transform 203
Review Questions 228
5. Linear Time Invariant Systems 236
5.1 Introduction 236
5.2 Properties of a DSP System 238
5.3 Difference Equation and its Relationship with
System Function, Impulse Response and
Frequency Response 256
5.4 Frequency Response 260
Review Questions 272
6.3 Discrete-Time Fourier Transform (DTFT) 305
6.4 Fast Fourier Transform (FFT) 319
6.5 Computing an Inverse DFT by Doing a Direct DFT 344
6.6 Composite-radix FFT 352
Review Questions 376
7. Finite Impulse Response (FIR) Filters 380
7.1 Introduction 380
7.2 Magnitude Response and Phase Response of
Digital Filters 381
7.3 Frequency Response of Linear Phase FIR Filters 384
7.4 Design Techniques for FIR Filters 385
7.5 Design of Optimal Linear Phase FIR Filters 409
Review Questions 414
8. Infinite Impulse Response (IIR) Filters 417
8.1 Introduction 417
8.2 IIR Filter Design by Approximation of Derivatives 418
8.3 IIR Filter Design by Impulse Invariant Method 423
8.4 IIR Filter Design by the Bilinear Transformation 427
8.5 Butterworth Filters 432
8.6 Chebyshev Filters 439
8.7 Inverse Chebyshev Filters 444
8.8 Elliptic Filters 445
8.9 Frequency Transformation 446
Review Questions 450
9. Realisation of Digital Linear Systems 453
9.1 Introduction 453
9.2 Basic Realisation Block Diagram and the
Signal-flow Graph 453
9.3 Basic Structures for IIR Systems 455
9.4 Basic Structures for FIR Systems 482
Review Questions 489
10. Effects of Finite Word Length in Digital Filters 496
10.1 Introduction 496
10.2 Rounding and Truncation Errors 496
10.3 Quantisation Effects in Analog-to-Digital
Conversion of Signals 499
10.4 Output Noise Power from a Digital System 502
10.5 Coefficient Quantisation Effects in Direct Form
Realisation of IIR Filters 505
10.6 Coefficient Quantisation in Direct Form
Realisation of FIR Filters 508
10.7 Limit Cycle Oscillations 510
10.8 Product Quantisation 513
10.9 Scaling 518
10.10 Quantisation Errors in the Computation of DFT 519
Review Questions 521
11. Multirate Digital Signal Processing 523
11.1 Introduction 523
11.2 Sampling 524
11.3 Sampling Rate Conversion 525
11.4 Signal Flow Graphs 535
11.5 Filter Structures 539
11.6 Polyphase Decomposition 541
11.7 Digital Filter Design 551
11.8 Multistage Decimators and Interpolators 555
11.9 Digital Filter Banks 565
11.10 Two-channel Quadrature Mirror Filter Bank 572
11.11 Multilevel Filter Banks 578
12. Spectral Estimation 584
12.1 Introduction 584
12.2 Energy Density Spectrum 584
12.3 Estimation of the Autocorrelation and Power Spectrum
of Random Signals 586
12.4 DFT in Spectral Estimation 591
12.5 Power Spectrum Estimation: Non-Parametric
Methods 593
12.6 Power Spectrum Estimation: Parametric methods 606
Review Questions 628
13. Adaptive Filters 631
13.1 Introduction 631
13.2 Examples of Adaptive Filtering 637
13.3 The Minimum Mean Square Error Criterion 643
13.4 The Widrow LMS Algorithm 645
13.5 Recursive Least Square Algorithm 647
13.6 The Forward-Backward Lattice Method 650
13.7 Gradient Adaptive Lattice Method 654
Review Questions 655
14. Applications of Digital Signal Processing 658
14.1 Introduction 658
14.2 Voice Processing 658
14.3 Applications to Radar 671
14.4 Applications to Image Processing 673
14.5 Introduction to Wavelets 675
Review Questions 686
15. MATLAB Programs 688
15.1 Introduction 688
15.2 Representation of Basic Signals 688
15.3 Discrete Convolution 691
15.4 Discrete Correlation 693
15.5 Stability Test 695
15.6 Sampling Theorem 696
15.7 Fast Fourier Transform 699
15.8 Butterworth Analog Filters 700
15.9 Chebyshev Type-1 Analog Filters 706
15.10 Chebyshev Type-2 Analog Filters 712
15.11 Butterworth Digital IIR Filters 718
15.12 Chebyshev Type-1 Digital Filters 724
15.13 Chebyshev Type-2 Digital Filters 729
15.14 FIR Filter Design Using Window Techniques 735
15.15 Upsampling a Sinusoidal Signal 750
15.16 Down Sampling a Sinusoidal Sequence 750
15.17 Decimator 751
15.18 Estimation of Power Spectral Density (PSD) 751
15.19 PSD Estimator 752
15.20 Periodogram Estimation 753
15.21 State-space Representation 753
15.22 Partial Fraction Decomposition 753
15.23 Inverse z-transform 754
15.24 Group Delay 754
15.25 Overlap-add Method 755
15.26 IIR Filter Design-impulse Invariant Method 756
15.27 IIR Filter Design-bilinear Transformation 756
15.28 Direct Realisation of IIR Digital Filters 756
15.29 Parallel Realisation of IIR Digital Filters 757
15.30 Cascade Realisation of Digital IIR Filters 757
15.31 Decimation by Polyphase Decomposition 758
15.32 Multiband FIR Filter Design 758
15.33 Analysis Filter Bank 759
15.34 Synthesis Filter Bank 759
15.35 Levinson-Durbin Algorithm 759
15.36 Wiener Equation’s Solution 760
15.37 Short-time Spectral Analysis 760
15.38 Cancellation of Echo produced on the
Telephone—Base Band Channel 761
15.39 Cancellation of Echo Produced on the
Telephone—Pass Band Channel 763
Review Questions 765
Appendix A 773
Appendix B 774
Appendix C 782
Index 802

Chapter 1
Classification of Signals and Systems
1.1 INTRODUCTION
Signals play a major role in our life. In general, a signal can be a
function of time, distance, position, temperature, pressure, etc., and it
represents some variable of interest associated with a system. For
example, in an electrical system the associated signals are electric
current and voltage. In a mechanical system, the associated signals may
be force, speed, torque, etc. In addition to these, some examples of
signals that we encounter in our daily life are speech, music, picture
and video signals. A signal can be represented in a number of ways.
Most of the signals that we come across are generated naturally.
However, there are some signals that are generated synthetically. In
general, a signal carries information, and the objective of signal
processing is to extract this information.
Signal processing is a method of extracting information from the
signal which in turn depends on the type of signal and the nature of
information it carries. Thus signal processing is concerned with
representing signals in mathematical terms and extracting the
information by carrying out algorithmic operations on the signal.
Mathematically, a signal can be represented in terms of basic functions
in the domain of the original independent variable or it can be
represented in terms of basic functions in a transformed domain.
Similarly, the information contained in the signal can also be extracted
either in the original domain or in the transformed domain.
A system may be defined as an integrated unit composed of diverse,
interacting structures to perform a desired task. The task may vary such
as filtering of noise in a communication receiver, detection of range of a
target in a radar system, or monitoring steam pressure in a boiler. The
function of a system is to process a given input sequence to generate an
output sequence.
It is said that digital signal processing techniques originated in the
seventeenth century when finite difference methods, numerical
integration methods, and numerical interpolation methods were
developed to solve physical problems involving continuous variables and
functions. There has been a tremendous growth since then and today
digital signal processing techniques are applied in almost every field.
The main reason for such wide application is the numerous
advantages of digital signal processing techniques. Some of these
advantages are discussed subsequently.
Digital circuits do not depend on precise values of digital signals for
their operation. Digital circuits are less sensitive to changes in
component values. They are also less sensitive to variations in
temperature, ageing and other external parameters.
In a digital processor, the signals and system coefficients are
represented as binary words. This enables one to choose any accuracy
by increasing or decreasing the number of bits in the binary word.
Digital processing of a signal facilitates the sharing of a single
processor among a number of signals by time-sharing. This reduces the
processing cost per signal.
Digital implementation of a system allows easy adjustment of the
processor characteristics during processing. Adjustments in the
processor characteristics can be easily done by periodically changing
the coefficients of the algorithm representing the processor
characteristics. Such adjustments are often needed in adaptive filters.
Digital processing of signals also has a major advantage which is not
possible with the analog techniques. With digital filters, linear phase
characteristics can be achieved. Also multirate processing is possible
only in the digital domain. Digital circuits can be connected in cascade
without any loading problems, whereas this cannot be easily done with
analog circuits.
Storage of digital data is very easy. Signals can be stored on various
storage media such as magnetic tapes, disks and optical disks without
any loss. On the other hand, stored analog signals deteriorate rapidly as
time progresses and cannot be recovered in their original form.
For processing very low frequency signals like seismic signals, analog
circuits require inductors and capacitors of a very large size, whereas
digital processing is better suited to such applications.
Though the advantages are many, there are some drawbacks
associated with processing a signal in the digital domain. Digital
processing needs ‘pre’ and ‘post’ processing devices like analog-to-digital
and digital-to-analog converters and associated reconstruction filters.
This increases the complexity of the digital system. Also, digital
techniques suffer from frequency limitations. For reconstructing a
signal from its samples, the sampling frequency must be at least twice the
highest frequency component present in that signal. The available
frequency range of operation of a digital signal processor is primarily
determined by the sample-and-hold circuit and the analog-to-digital
converter, and as a result is limited by the technology available at that
time. The highest sampling frequency is presently around 1GHz
reported by K.Poulton, etal., in 1987. However, such high sampling
frequencies are. not used since the resolution of the A/D converter
decreases with an increase in the speed of the converter. But the
advantages of digital processing techniques outweigh the disadvantages
in many applications. Also, the cost of DSP hardware is decreasing
continuously. Consequently, the applications of digital signal processing
are increasing rapidly.
1.2 CLASSIFICATION OF SIGNALS
Signals can be classified based on their nature and characteristics in the
time domain. They are broadly classified as (i) continuous-time signals
and (ii) discrete-time signals. A continuous-time signal is a mathemati-
cally continuous function and the function is defined continuously in
the time domain. On the other hand, a discrete-time signal is specified
only at certain time instants. The amplitude of a discrete-time signal
between two time instants is simply not defined. Figure 1.1 shows typical
continuous-time and discrete-time signals.
Fig. 1.1 Continuous-Time and Discrete-Time Signals: (a) continuous-time signal x(t); (b) discrete-time signal x(n)
Both continuous-time and discrete-time signals are further classified
as
(i) Deterministic and non-deterministic signals
(ii) Periodic and aperiodic signals
(iii) Even and odd signals, and
(iv) Energy and power signals.
1.2.1 Deterministic and Non-deterministic Signals
Deterministic signals are functions that are completely specified in time.
The nature and amplitude of such a signal at any time can be predicted.
The pattern of the signal is regular and can be characterised
mathematically. Examples of deterministic signals are
(i) x(t) = at. This is a ramp whose amplitude increases linearly with
time, and its slope is a.
(ii) x(t) = A sin ωt. The amplitude of this signal varies sinusoidally
with time and its maximum amplitude is A.
(iii) x(n) = { 1,  n ≥ 0
               0,  otherwise
This is a discrete-time signal whose amplitude is 1 for the sampling
instants n ≥ 0 and, for all other samples, the amplitude is zero.
For all the signals given above, the amplitude at any time instant can
be predicted in advance. Contrary to this, a non-deterministic signal is
one whose occurrence is random in nature and its pattern is quite
irregular. A typical example of a non-deterministic signal is thermal
noise in an electrical circuit. The behaviour of such a signal is
probabilistic in nature and can be analysed only stochastically. Another
example which can be easily understood is the number of accidents in
a year. One cannot exactly predict what the figure would be in a
particular year and this varies randomly. Non-deterministic signals are
also called random signals.
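As a minimal sketch (not from the text; the slope, amplitude and frequency values below are assumed), the deterministic examples (i) to (iii) above can be generated numerically and any sample value predicted in advance, in contrast to a random signal:

```python
import numpy as np

# Deterministic signals: every amplitude value can be predicted in advance.
t = np.linspace(0.0, 1.0, 1001)        # dense time grid standing in for continuous time
a, A, w = 2.0, 1.5, 2 * np.pi * 5.0    # assumed slope, peak amplitude, angular frequency

ramp = a * t                           # (i)   x(t) = a t
sinusoid = A * np.sin(w * t)           # (ii)  x(t) = A sin(wt)

n = np.arange(-5, 6)                   # (iii) discrete-time signal: 1 for n >= 0, 0 otherwise
x_n = np.where(n >= 0, 1, 0)

print(ramp[:3])                        # 0.0, 0.002, 0.004  (slope a = 2, grid step 0.001)
print(sinusoid[:3])
print(dict(zip(n.tolist(), x_n.tolist())))
```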
1.2.2 Periodic and Aperiodic Signals
A continuous-time signal is said to be periodic if it exhibits periodicity,
ie.
x(t + T) = x(t),   −∞ < t < ∞
∫_{−∞}^{t} δ(τ) dτ = { 1,  t > 0
                       0,  t < 0          (1.20)

Since the area of the impulse function is all concentrated at t = 0, for
any value of t < 0 the integral becomes zero, and for t > 0, from Eq. 1.18,
the value of the integral is unity. The integral of the impulse function is
also a singularity function, called the unit-step function, and is
represented as

u(t) = { 1,  t > 0
         0,  t < 0          (1.21)

The value at t = 0 is taken to be finite and in most cases it is
unspecified. The discrete-time unit-step signal is defined as

u(n) = { 1,  n ≥ 0
         0,  n < 0
Fig. 1.4 Singularity Functions: (a) Unit-Impulse Function (b) Unit-Step Function (c) Unit-Ramp Function (d) Unit-Pulse Function
Proof

d/dt [x(t) δ(t − t₀)] = x(t) δ′(t − t₀) + ẋ(t) δ(t − t₀)
                      = x(t) δ′(t − t₀) + ẋ(t₀) δ(t − t₀),    t₁ < t₀ < t₂

Integrating, we get

∫_{t₁}^{t₂} d/dt [x(t) δ(t − t₀)] dt = ∫_{t₁}^{t₂} x(t) δ′(t − t₀) dt + ∫_{t₁}^{t₂} ẋ(t) δ(t − t₀) dt

[x(t) δ(t − t₀)]_{t₁}^{t₂} = ∫_{t₁}^{t₂} x(t) δ′(t − t₀) dt + ẋ(t₀)

LHS = 0.

Therefore,   ∫_{t₁}^{t₂} x(t) δ′(t − t₀) dt + ẋ(t₀) = 0

i.e.   ∫_{t₁}^{t₂} x(t) δ′(t − t₀) dt = − ẋ(t₀)

Similarly,

∫_{t₁}^{t₂} x(t) δ″(t − t₀) dt = ẍ(t₀)

Hence,

∫_{t₁}^{t₂} x(t) δ⁽ⁿ⁾(t − t₀) dt = (−1)ⁿ x⁽ⁿ⁾(t₀)
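These sifting relations can be checked numerically by approximating δ(t) with a narrow Gaussian pulse, whose derivative then approximates δ′(t). The sketch below is illustrative only; the test function x(t) = sin t, the location t₀ = 1 and the width σ are assumed values, not taken from the text.

```python
import numpy as np

# delta(t) approximated by a narrow Gaussian; its derivative approximates delta'(t).
sigma, t0 = 1e-3, 1.0
t = np.linspace(t0 - 0.05, t0 + 0.05, 200001)
dt = t[1] - t[0]

delta = np.exp(-(t - t0) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
ddelta = -(t - t0) / sigma ** 2 * delta        # derivative of the Gaussian pulse

x = np.sin(t)                                  # arbitrary smooth test function

sift = np.sum(x * delta) * dt                  # approximates  x(t0)  = sin(1)
sift_d = np.sum(x * ddelta) * dt               # approximates -x'(t0) = -cos(1)

print(sift, np.sin(t0))                        # both about  0.8415
print(sift_d, -np.cos(t0))                     # both about -0.5403
```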
1.3.6 Representation of Signals
In the signal given by x(at + b), i.e. x(a(t + b/a)), a is a scaling factor
and b/a is a pure time shift in the time domain.
If b/a is positive, then the signal x(t) is shifted to the left.
If b/a is negative, then the signal x(t) is shifted to the right.
If a is positive, then the signal x(t) will have a positive slope.
If a is negative, then the signal x(t) will have a negative slope.
If a is less than 0, then the signal x(t) is reflected or reversed through
the origin.
If |a| < 1, x(t) is expanded, and if |a| > 1, x(t) is compressed.
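A minimal numerical sketch of these rules is given below (the unit rectangular pulse Π(t) used as the test signal is an assumed definition, not code from the text); it reproduces part (a) of the worked example that follows, where Π(2t + 3) is a pulse of width 1/2 centred at −3/2.

```python
import numpy as np

def rect(t):
    """Unit rectangular pulse: 1 for |t| <= 1/2, 0 elsewhere (assumed definition)."""
    return np.where(np.abs(t) <= 0.5, 1.0, 0.0)

a, b = 2.0, 3.0                      # y(t) = x(a t + b): shift by b/a = 1.5 to the left,
t = np.arange(-3.0, 3.5, 0.25)       # compression by |a| = 2
y = rect(a * t + b)

nonzero = t[y > 0]
print(nonzero)                       # points inside [-1.75, -1.25]: centre -1.5, width 0.5
```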
Example 1.2    Sketch the following signals:
(a) x(t) = Π(2t + 3)    (b) x(t) = 2Π(t − 1/4)
(c) x(t) = cos(20πt − 5π)    and    (d) x(t) = r(−0.5t + 2)
Solution
(a) Π(2t + 3) = Π(2(t + 3/2))
Here the signal shown in Fig. E1.2(a) is shifted to the left, with centre at
−3/2. Since a = 2, i.e. |a| > 1, the signal is compressed. The signal
width becomes 1/2, with unity amplitude.
Fig. E1.2(a)    Fig. E1.2(b)
(b) x(t) = 2Π(t − 1/4)
Here the signal shown in Fig. E1.2(b) is shifted to the right, with
centre at 1/4. Since a = 1, the signal width is 1 and the amplitude is 2.
(c) x(t) = cos(20πt − 5π)
        = cos(20π(t − 5/20))
        = cos(20π(t − 1/4))
Here the signal x(t) shown in Fig. E1.2(c) is shifted to the right by 1/4.
Fig. E1.2(c)
(d) x(t) = r(−0.5t + 2)
        = r(−0.5(t − 4))
The given ramp signal is reflected through the origin and shifted to the
right, to t = 4. Since |a| = 0.5 < 1, the signal is expanded by a factor of 2.
When t = 0, the magnitude of the signal is x(0) = 2, as shown in Fig. E1.2(d).
Fig. E1.2(d)
Example 1.3    Write down the corresponding equation for the signal
shown in Fig. E1.3.
Fig. E1.3
Solution
Representation through addition of two unit-step functions
The signal x(t) can be obtained by adding the two pulses, i.e.
x(t) = 2[u(t) − u(t − 2)] + [u(t − 3) − u(t − 5)]
Representation through multiplication of two unit-step functions
x(t) = 2[u(t) u(−t + 2)] + [u(t − 3) u(−t + 5)]
     = 2 u(t) u(2 − t) + u(t − 3) u(5 − t)
1.4 AMPLITUDE AND PHASE SPECTRA
Let us consider a cosine signal of peak amplitude A, frequency f and
phase shift φ, in order to introduce the concept of amplitude and phase
spectra, i.e.

x(t) = A cos(2πft + φ)     (1.27)

The amplitude and phase of this signal can be plotted as a function of
frequency. The amplitude of the signal as a function of frequency is
referred to as the amplitude spectrum, and the phase of the signal as a
function of frequency is referred to as the phase spectrum of the signal. The
amplitude and phase spectra together are called the frequency spectrum
of the signal. The units of the amplitude spectrum depend on the
signal. For example, the amplitude spectrum of a voltage
signal is measured in volts, and the amplitude spectrum of a
current signal is measured in amperes. The unit of the phase spectrum
is usually radians. The frequency spectrum drawn for positive
values of frequency alone is called a single-sided spectrum.
The cosine signal can also be expressed in phasor form as the sum of
the two counter rotating phasors with complex-conjugate magnitudes,
i.e.

x(t) = (A/2) e^{j(2πft + φ)} + (A/2) e^{−j(2πft + φ)}

From this, the amplitude spectrum of the signal x(t) consists of two
components of amplitude A/2, one at frequency f and the other at frequency
−f. Similarly, the phase spectrum also consists of two phase
components, one at f and the other at −f. The frequency spectrum of
the signal, in this case, is called a double-sided spectrum. The following
example illustrates the single-sided and double-sided frequency spectra
of a signal.
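Before the worked example, the two A/2 components can also be seen numerically by taking the DFT of a sampled record of the cosine. The amplitude A = 10, frequency f = 50 Hz, phase φ = π/6 and sampling rate used below are assumed values chosen so that an integer number of cycles fits the record; this sketch is not part of the text.

```python
import numpy as np

# x(t) = A cos(2*pi*f*t + phi) sampled over exactly one second, so the DFT bins
# line up with +f and -f and show the two A/2 lines of the double-sided spectrum.
A, f, phi = 10.0, 50.0, np.pi / 6
fs, N = 1000.0, 1000                     # 1 s record -> 1 Hz bin spacing
n = np.arange(N)
x = A * np.cos(2 * np.pi * f * n / fs + phi)

X = np.fft.fft(x) / N                    # normalised DFT
k = int(f)                               # bin at +f; bin N-k corresponds to -f
print(abs(X[k]), np.angle(X[k]))         # -> 5.0 and +phi  (A/2 at +f)
print(abs(X[N - k]), np.angle(X[N - k])) # -> 5.0 and -phi  (A/2 at -f)
```

The bin at +f carries magnitude A/2 with phase +φ and the bin at −f carries A/2 with phase −φ, which is exactly the double-sided spectrum described above.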
Example    Sketch the single-sided and double-sided amplitude
and phase spectra of the signal
x(t) = 8 sin(20πt − …),   −∞ < t < ∞

The
unit-impulse function is nothing but the derivative of the unit-step
signal. Therefore, the impulse response of the system can also be
obtained by computing the derivative of the step response of the system.
1.7.3 State-Variable Technique
The state-variable technique provides a convenient formulation
procedure for modelling a multi-input, multi-output system. This
technique also facilitates the determination of the internal behaviour of
the system very easily. The state of a system at time t₀ is the minimum
information necessary to completely specify the condition of the system
at time t₀, and it allows determination of the system outputs at any time
t > t₀ when the inputs up to time t are specified. The state of a system at
time t₀ is a set of values, at time t₀, of a set of variables. These variables
are called the state variables. The number of state variables is equal to
the order of the system. The state variables are chosen such that they
correspond to physically measurable quantities. It is also convenient to
consider an n-dimensional space in which each coordinate is defined by
one of the state variables x₁, x₂, …, xₙ, where n is the order of the system.
This n-dimensional space is called the state space. The state vector is
an n-vector x whose elements are the state variables. The state vector
defines a point in the state space at any time t. As the time changes, the
system state changes and a set of points, which is nothing but the locus
of the tip of the state vector as time progresses, is called a trajectory of
the system.
A linear system of order n with m inputs and k outputs can be
represented by n first-order differential equations and k output
equations as shown below:

dx₁/dt = a₁₁x₁ + a₁₂x₂ + … + a₁ₙxₙ + b₁₁u₁ + b₁₂u₂ + … + b₁ₘuₘ
dx₂/dt = a₂₁x₁ + a₂₂x₂ + … + a₂ₙxₙ + b₂₁u₁ + b₂₂u₂ + … + b₂ₘuₘ
   ⋮                                                              (1.34)
dxₙ/dt = aₙ₁x₁ + aₙ₂x₂ + … + aₙₙxₙ + bₙ₁u₁ + bₙ₂u₂ + … + bₙₘuₘ

and

y₁ = c₁₁x₁ + c₁₂x₂ + … + c₁ₙxₙ + d₁₁u₁ + d₁₂u₂ + … + d₁ₘuₘ
y₂ = c₂₁x₁ + c₂₂x₂ + … + c₂ₙxₙ + d₂₁u₁ + d₂₂u₂ + … + d₂ₘuₘ
   ⋮                                                              (1.35)
yₖ = cₖ₁x₁ + cₖ₂x₂ + … + cₖₙxₙ + dₖ₁u₁ + dₖ₂u₂ + … + dₖₘuₘ
where uᵢ, i = 1, 2, …, m are the system inputs, xᵢ, i = 1, 2, …, n are
called the state variables and yᵢ, i = 1, 2, …, k are the system outputs.
Equations 1.34 are called the state equations, and Eqs 1.35 are the
output equations. Equations 1.34 and 1.35 together constitute the state-
equation model of the system. Generally, the a’s, b’s, c’s and d’s may be
functions of time. The solution of such a set of time-varying state
equations is very difficult. If the system is assumed to be time-invariant,
then the solution of the state equations can be obtained without much
difficulty.
The state variable representation of a system offers a number of
advantages. The most obvious advantage of this representation is that
multiple-input, multiple-output systems can be easily represented and
analysed. The model is in the time-domain, and one can obtain the
simulation diagram for the equations directly. This is of much use when
computer simulation methods are used to analyse the system. Also, a
compact matrix notation can be used for the state model and using the
laws of linear algebra the state equations can be very easily
manipulated. For example, Eqs 1.34 and 1.35 expressed in a compact
matrix form are shown below. Let us define the vectors

x = [x₁  x₂  …  xₙ]ᵀ,   u = [u₁  u₂  …  uₘ]ᵀ,   y = [y₁  y₂  …  yₖ]ᵀ     (1.36)

and the matrices

A = [aᵢⱼ] (n × n),   B = [bᵢⱼ] (n × m),   C = [cᵢⱼ] (k × n),   D = [dᵢⱼ] (k × m)     (1.37)

Now, Eqs 1.34 and 1.35 can be compactly written as

ẋ = Ax + Bu     (1.38a)
y = Cx + Du     (1.38b)

where ẋ = dx/dt. Equations 1.38 may be illustrated schematically as
shown in Fig.1.10. The double lines indicate a multiple-variable signal
flow path. The blocks represent matrix multiplication of the vectors and
matrices. The integrator block consists of n integrators with appropriate
connections specified by the A and B matrices.
Fig. 1.10 Block Diagram of the State-Variable Model of Eq. 1.38
State Equations for Discrete-time Systems
For a discrete-time system, the state equations form a set of first-order
difference equations constituting a recursion relation. This recursion
relation allows determination of the state of a system at the sampling
time kT from the state of the system and the input at the sampling time
(k − 1)T, where k is an integer. The state equations for a discrete-time
system can be modelled as shown below:

xₖ₊₁ = F xₖ + G uₖ     (1.39a)
yₖ = H xₖ + J uₖ     (1.39b)

The dependence of these parameters on T is suppressed for simplicity.
For a single-input, single-output system, uₖ and yₖ are scalars, G and H
become the vectors g and h, and J becomes a scalar d, which is zero in
most cases. The state-variable modelling of a discrete-time system finds
application in the digital simulation of continuous-time systems.
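The recursion of Eq. 1.39 is straightforward to evaluate directly. The sketch below assumes an arbitrary stable second-order single-input, single-output system; the matrices F, g, h and the scalar d are made-up values, not taken from the text.

```python
import numpy as np

# Discrete-time state-equation recursion (Eq. 1.39) for an assumed 2nd-order system.
F = np.array([[0.0, 1.0],
              [-0.5, 1.2]])      # state matrix (assumed, eigenvalues inside the unit circle)
g = np.array([0.0, 1.0])         # input vector
h = np.array([1.0, 0.0])         # output vector
d = 0.0                          # direct feed-through term (zero here)

x = np.zeros(2)                  # initial state x_0
u = np.ones(20)                  # unit-step input u_k
y = np.zeros_like(u)

for k in range(len(u)):
    y[k] = h @ x + d * u[k]      # output equation  y_k = h x_k + d u_k
    x = F @ x + g * u[k]         # state update     x_{k+1} = F x_k + g u_k

print(np.round(y, 4))            # step response settles towards its steady-state value
```

The same recursion is what a digital simulation of a continuous-time system evaluates once the differential equations of Eq. 1.38 have been discretised.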
1.8 ANALOG-TO-DIGITAL CONVERSION OF SIGNALS
A discrete-time signal is defined by specifying its value only at discrete
times, called sampling instants. When the sampled values are quantised
and encoded, a digital signal is obtained. A digital signal can be obtained
from the analog signal by using an analog-to-digital converter. In the
following sections, the process of analog-to-digital conversion is
discussed in some detail and this enables one to understand the
relationship between the digital signals and discrete-time signals.
Figure 1.11 shows the block diagram of an analog-to-digital
converter. The sampler extracts the sample values of the input signal at
the sampling instants. The output of the sampler is the discrete-time
signal with continuous amplitude. This signal is applied to a quantiser
which converts this continuous amplitude into a finite number of sample
values. Each sample value can be represented by a digital word of finite
word length. The final stage of analog-to-digital conversion is encoding.
The encoder assigns a digital word to each quantised sample. Sampling,
quantizing and encoding are discussed in the following sections.
continuous-time, continuous-amplitude input signal → Sampler →
discrete-time, continuous-amplitude signal → Quantiser →
discrete-time, discrete-amplitude signal → Encoder → digital output signal
Fig. 1.11 Analog-to-Digital Converter
1.8.1 Sampling of Continuous-time Signals
Sampling is a process by which a continuous-time signal is converted
into a discrete-time signal. This can be accomplished by representing
the continuous-time signal x(t), at a discrete number of points. These
discrete number of points are determined by the sampling period T, i.e.
the samples of x(t) can be obtained at the discrete points t = nT, where n is
an integer. The process of sampling is illustrated in Fig. 1.12. The
sampling unit can be thought of as a switch, where, to one of its inputs
the continuous-time signal is applied. The signal is available at the
output only during the instants the switch is closed. Thus, the signal at
the output end is not a continuous function of time but only discrete
samples. In order to extract samples of x(t), the switch closes briefly
every T seconds. Thus, the output signal has the same amplitude as x(t)
when the switch is closed and a value of zero when the switch is open.
The switch can be any high speed switching device.
The continuous-time signal x(t) must be sampled in such a way that
the original signal can be reconstructed from these samples. Otherwise,
the sampling process is useless. Let us obtain the condition necessary to
faithfully reconstruct the original signal from the samples of that signal.
The condition can be easily obtained if the signals are analysed in the
frequency domain. Let the sampled signal be represented by xₛ(t). Then,

xₛ(t) = x(t) g(t)     (1.40)
where g(t) is the sampling function. The sampling function is a
continuous train of pulses with a period of T seconds between the pulses,
and it models the action of the sampling switch. The sampling function
is shown in Fig. 1.12(c) and (d). The frequency spectrum of the sampled
Fig. 1.12 The Sampling Process: (a) Samples of x(t) (b) Modelling a Sampler as a Switch (c) Model of a Sampler (d) Sampling Function
signal xₛ(t) helps in determining the appropriate values of T for
reconstructing the original signal. The sampling function g(t) is periodic
and can be represented by a Fourier series (Fourier Series and
transforms are discussed in Chapter six), i.e.
g(t) = Σ_{n=−∞}^{∞} Cₙ e^{j2πnfₛt}     (1.41)

where

Cₙ = (1/T) ∫_{−T/2}^{T/2} g(t) e^{−j2πnfₛt} dt     (1.42)

is the nth Fourier coefficient of g(t), and fₛ = 1/T is the fundamental
frequency of g(t). The fundamental frequency fₛ is also called the
sampling frequency. From Eqs 1.40 and 1.41, we have

xₛ(t) = x(t) Σ_{n=−∞}^{∞} Cₙ e^{j2πnfₛt} = Σ_{n=−∞}^{∞} Cₙ x(t) e^{j2πnfₛt}     (1.43)

The spectrum of xₛ(t), denoted by Xₛ(f), can be determined by taking
the Fourier transform of Eq. 1.43, i.e.
Xₛ(f) = ∫_{−∞}^{∞} xₛ(t) e^{−j2πft} dt     (1.44)

Using Eq. 1.43 in the above equation,

Xₛ(f) = ∫_{−∞}^{∞} Σ_{n=−∞}^{∞} Cₙ x(t) e^{j2πnfₛt} e^{−j2πft} dt     (1.45)

Interchanging the order of integration and summation,

Xₛ(f) = Σ_{n=−∞}^{∞} Cₙ ∫_{−∞}^{∞} x(t) e^{−j2π(f − nfₛ)t} dt     (1.46)

But from the definition of the Fourier transform,

∫_{−∞}^{∞} x(t) e^{−j2π(f − nfₛ)t} dt = X(f − nfₛ)

Thus,

Xₛ(f) = Σ_{n=−∞}^{∞} Cₙ X(f − nfₛ)     (1.47)
From Eq. 1.47, it is understood that the spectrum of the sampled
continuous-time signal is composed of the spectrum of x(t) plus the
spectrum of x(t) translated to each harmonic of the sampling frequency.
The spectrum of the sampled signal is shown in Fig. 1.13. Each
frequency translated spectrum is multiplied by a constant. To
reconstruct the original signal, it is enough to just pass the spectrum of
x(t) and suppress the spectra of other translated frequencies. The
amplitude response of such a filter is also shown in Fig. 1.13. As this
filter is used to reconstruct the original signal, it is often referred to as a
reconstruction filter. The output of the reconstruction filter will be
C₀X(f) in the frequency domain and C₀x(t) in the time domain.
Fig. 1.13 Spectrum of Sampled Signal
The signal x(t), in this case, is assumed to have no frequency
components above fₘ, i.e. in the frequency domain, X(f) is zero for
|f| ≥ fₘ. Such a signal is said to be bandlimited. From Fig. 1.13, it is
clear that in order to recover X(f) from Xₛ(f), we must have

fₛ − fₘ ≥ fₘ

or equivalently,

fₛ ≥ 2fₘ hertz     (1.48)

That is, in order to recover the original signal from its samples, the
sampling frequency must be greater than or equal to twice the
maximum frequency in x(t). The sampling theorem is thus derived,
which states that a bandlimited signal x(t) having no frequency
components above fₘ hertz is completely specified by samples that are
taken at a uniform rate greater than 2fₘ hertz. The frequency equal to
twice the highest frequency in x(t), i.e. 2fₘ, is called the Nyquist rate.
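A quick numerical illustration of the Nyquist-rate requirement, with assumed frequencies not taken from the text: a 90 Hz cosine sampled at only 100 Hz yields exactly the same sample values as a 10 Hz cosine, so the two cannot be distinguished after sampling.

```python
import numpy as np

# A 90 Hz cosine undersampled at fs = 100 Hz (< 2 * 90 Hz) aliases to |90 - fs| = 10 Hz.
fs = 100.0
n = np.arange(16)
t = n / fs

x_high = np.cos(2 * np.pi * 90.0 * t)    # undersampled 90 Hz signal
x_low = np.cos(2 * np.pi * 10.0 * t)     # 10 Hz alias

print(np.allclose(x_high, x_low))        # True: the sample sets are indistinguishable
```

Sampling at fₛ ≥ 180 Hz, i.e. at least twice 90 Hz, would keep the two signals distinguishable.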
Sampling by Impulse Function
The sampling function g(t), discussed above, was periodic. The pulse
width of the sampling function must be very small compared to the
period T. The samples in digital systems are in the form of a number,
and the magnitude of these numbers represent the value of the signal
x(t) at the sampling instants. In this case, the pulse width of the
sampling function is infinitely small and an infinite train of impulse
functions of period T can be considered for the sampling function. That
is,
g(t) = Σ_{n=−∞}^{∞} δ(t − nT)     (1.49)
The sampling function as given in Eq. 1.49 is shown in Fig. 1.14. When
this sampling function is used, the weight of the impulse carries the
sample value.
The sampling function g(t) is periodic and can be represented by a
Fourier series as in Eq. 1.41, which is repeated here:

g(t) = Σ_{n=−∞}^{∞} Cₙ e^{j2πnfₛt}

where

Cₙ = (1/T) ∫_{−T/2}^{T/2} δ(t) e^{−j2πnfₛt} dt     (1.50)

Since δ(t) has its maximum energy concentrated at t = 0, a more
formal mathematical definition of the unit-impulse function may be
defined as a functional
Fig. 1.14 (a) Impulse Sampling Function (b) Spectrum of the Signal x(t) (c) Spectrum of Impulse Sampled Signal
∫_{−∞}^{∞} x(t) δ(t) dt = x(0)     (1.51)

where x(t) is continuous at t = 0. Using Eq. 1.51 in Eq. 1.50, we have

Cₙ = 1/T = fₛ     (1.52)

Thus Cₙ is the same as the sampling frequency fₛ for all n. The spectrum
of the impulse sampled signal xₛ(t) is given by

Xₛ(f) = fₛ Σ_{n=−∞}^{∞} X(f − nfₛ)     (1.53)

The spectra of the signal x(t) and of the impulse sampled signal xₛ(t)
are shown in Figs 1.14(b) and (c). The effect of impulse sampling is the
same as sampling with a train of pulses. However, all the frequency
translated spectra have the same amplitude. The original signal X(f)
can be reconstructed from Xₛ(f) using a low-pass filter. Figure 1.15
shows the effect of sampling at a rate lower than the Nyquist rate.
Consider a bandlimited signal x(t), with fₘ as its highest frequency
content, being sampled at a rate lower than the Nyquist rate, i.e.
sampling frequency fₛ < 2fₘ. This results in overlapping of adjacent
Fig. 1.15 Illustration of Aliasing: (a) Spectrum of the Input Signal (b) Spectrum of the Sampled Signal for fₛ > 2fₘ (c) Spectrum of the Sampled Signal for fₛ < 2fₘ
spectra, i.e. higher frequency components of Xₛ(f) get superimposed on
lower frequency components, as shown in Fig. 1.15. Here, faithful
reconstruction or recovery of the original continuous-time signal from
its sampled discrete-time equivalent by filtering is very difficult because
portions of X(f − fₛ) and X(f + fₛ) overlap X(f), and thus add to X(f) in
producing Xₛ(f). The original shape of the signal is lost due to
undersampling, i.e. down-sampling. This overlap is known as aliasing
or overlapping or fold over. Aliasing, as the name implies, means that a
signal can be impersonated by another signal. In practice, no signal is
strictly bandlimited but there will be some frequency beyond which the
energy is very small and negligible. This frequency is generally taken as
the highest frequency content of the signal.
To prevent aliasing, the sampling frequency fₛ should be greater than
two times the frequency of the sinusoidal signal being sampled. The
condition to be satisfied by the sampling frequency to prevent aliasing is
called the sampling theorem. In some applications, an analog anti-
aliasing filter is placed before the sample-and-hold circuit in order to prevent the
aliasing effect.Classification of Signals and Systems 35
A useful application of aliasing due to undersampling arises in the
sampling oscilloscope, which is meant for observing very high frequency
waveforms.
1.8.2 Signal Reconstruction
Any signal x(t) can be faithfully reconstructed from its samples if these
samples are taken at a rate greater than or equal to the Nyquist rate. It
can be seen from the spectrum of the sampled signal Xₛ(f) that it
consists of the spectra of the signal and its frequency translated
harmonics. Thus, if the spectrum of the signal alone can be separated
from that of the harmonics then the original signal can be obtained.
This can be achieved by filtering the sampled signal using a low-pass
filter with a bandwidth greater than fₘ and less than fₛ − fₘ hertz.
If the sampling function is an impulse sequence, we note from Eq. 1.53
that the spectrum of the sampled signal has an amplitude equal to
fₛ = 1/T. Therefore, in order to remove this scaling constant, the low-pass
filter must have an amplitude response of 1/fₛ = T. Assuming that
sampling has been done at the Nyquist rate, i.e. 2fₘ, the bandwidth
of the low-pass filter will be fₘ = fₛ/2. Therefore, the unit impulse response
of an ideal filter with this bandwidth is

h(t) = T ∫_{−fₛ/2}^{fₛ/2} e^{j2πft} df     (1.54)
That is,

h(t) = (T / j2πt) (e^{jπfₛt} − e^{−jπfₛt})

The above expression can alternatively be written as

h(t) = T fₛ (sin πfₛt) / (πfₛt) = sinc fₛt     (1.55)

The ideal reconstruction filter is shown in Fig. 1.16(a). The input to this
filter is the sampled signal x(nT) and the output of the filter is the
reconstructed signal x(t). The output signal x(t) is given by

x(t) = Σ_{n=−∞}^{∞} x(nT) h(t − nT)

Using Eq. 1.55, we get

x(t) = Σ_{n=−∞}^{∞} x(nT) sinc fₛ(t − nT)     (1.56)
The above expression is a convolution expression and the signal x(t)
is reconstructed by convolving its samples with the unit-impulse
response of the filter. Eq. 1.56 can also be interpreted as follows. The36 Digital Signal Processing
original signal can be reconstructed by weighting each sample by a sinc
function and adding them all. This process is shown in Fig. 1.16b.
Fig. 1.16 Signal Reconstruction: (a) Reconstruction Filter (b) Time Domain Representation
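Equation 1.56 can be evaluated directly as a finite sum. In the sketch below, the 10 Hz test cosine, the 100 Hz sampling rate and the truncation of the infinite sum to a finite window of samples are all assumptions made for illustration only.

```python
import numpy as np

# Sinc-interpolation reconstruction of Eq. 1.56:
#   x(t) = sum_n x(nT) * sinc(fs * (t - nT)),  with fs = 1/T.
fs = 100.0                          # sampling rate, well above the Nyquist rate of the test signal
T = 1.0 / fs
n = np.arange(-2000, 2001)          # finite window of samples (the infinite sum is truncated)
x_n = np.cos(2 * np.pi * 10.0 * n * T)          # samples of a 10 Hz cosine

t = np.array([0.0123, 0.0377, 0.0811])          # arbitrary reconstruction instants
# np.sinc(u) = sin(pi*u)/(pi*u), so sinc(fs*(t - nT)) matches Eq. 1.56
x_rec = np.array([np.sum(x_n * np.sinc(fs * (ti - n * T))) for ti in t])

print(np.round(x_rec, 4))
print(np.round(np.cos(2 * np.pi * 10.0 * t), 4))   # true values, matched to within truncation error
```

Because the interpolation sum is truncated, the reconstructed values agree with the true signal only to within a small truncation error, which shrinks as the sample window grows.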
1.8.3 Signal Quantisation and Encoding
A discrete-time signal with continuous-valued amplitudes is called a
sampled data signal, whereas a continuous-time signal with discrete-
valued amplitudes is referred to as a quantised boxcar signal.
Quantisation is a process by which the amplitude of each sample of a
signal is rounded off to the nearest permissible level. That is,
quantisation is conversion of a discrete-time continuous-amplitude
signal into a discrete-time, discrete-valued signal. Then encoding is
done by representing each of these permissible levels by a digital word
of fixed wordlength.
The process of quantisation introduces an error called quantisation
error and it is simply the difference between the value of the analog
input and the analog equivalent of the digital representation. This error
will be small if there are more permissible levels and the width of these
quantisation levels is very small. In the analog-to-digital conversion
process, the only source of error is the quantiser. Even if there are more
quantisation levels, error can occur if the signal is at its maximum or
minimum value for significant time intervals. Figure 1.17 shows how a
continuous-time signal is quantised in a quantiser that has 16
quantising levels.Classification of Signals and Systems 37
Fig. 1.17 Quantising and Encoding (quantisation levels 0 to 15 and their 4-bit code words 0000 to 1111)
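A minimal sketch of 4-bit (16-level) quantisation and encoding in the spirit of Fig. 1.17 is given below; the full-scale input range, the test signal and the mid-rise placement of the levels are assumptions, not specifications from the text.

```python
import numpy as np

# 4-bit quantisation and encoding: each sample is rounded to one of 16 levels,
# and each level is represented by a 4-bit code word.
bits = 4
levels = 2 ** bits                        # 16 quantisation levels
x_min, x_max = -1.0, 1.0                  # assumed full-scale input range
step = (x_max - x_min) / levels           # width of one quantisation level

t = np.arange(0, 8) / 8.0
x = 0.9 * np.sin(2 * np.pi * t)           # discrete-time, continuous-amplitude samples

idx = np.clip(np.round((x - x_min) / step - 0.5), 0, levels - 1).astype(int)
x_q = x_min + (idx + 0.5) * step          # analog equivalent of each quantised level
codes = [format(int(i), "04b") for i in idx]   # 4-bit code words (encoding)

error = x - x_q                           # quantisation error
print(codes)
print(np.round(error, 4), "max |error| <=", step / 2)
```

The quantisation error of each sample stays within plus or minus half a step, which is the error behaviour described above.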
REVIEW QUESTIONS
1.1 What are the major classifications of signals?
1.2 With suitable examples distinguish a deterministic signal from
a random signal.
1.3 What are periodic signals? Give examples.
1.4 Describe the procedure used to determine whether the sum of
two periodic signals is periodic or not.
1.5 Determine which of the following signals are periodic and
determine the fundamental period.
(a) x₁(t) = 10 sin 25πt    (b) x₂(t) = 10 sin √… πt
(c) x₃(t) = cos 10πt    (d) x₄(t) = x₁(t) + x₂(t)
(e) x₅(t) = x₁(t) + x₃(t)    (f) x₆(t) = x₂(t) + x₃(t)
1.6 What are even signals? Give examples.
1.7 What are odd signals? Give examples.
1.8 What is an energy signal?
1.9 What is a power signal?
1.10 What are singularity functions?
1.11 Define the unit-impulse function.
1.12 What is the unit-step function? How can it be obtained from a
unit-impulse function?
1.13 What is the unit-ramp function? How can it be obtained from a
unit-impulse function?
1.14 What is a pulse function?
1.15 Evaluate
(a) ∫ e^{…t} δ(t − 10) dt    (b) ∫ e^{…t} δ(t + …) dt
(c) ∫ 40 e^{…t} δ(t − 10) dt    and    (d) ∫ e^{…t} δ(t − 10) dt
Ans: (a) e^{…}    (b) 0    (c) 40.077    (d) e^{…}
1.16 Explain the terms single-sided spectrum and double-sided
spectrum with respect to a signal.
1.17 Sketch the single-sided and double-sided frequency spectra of
the signals
(a) x(t) = 10 sin(…),   −∞ < t < ∞
Fig. 2.1 Waveforms Representing Periodic Functions
Examples of periodic processes are the vibration of a tuning fork,
oscillations of a pendulum, conduction of heat, alternating current
passing through a circuit, propagation of sound in a medium, etc.
Fourier series may be used to represent either functions of time or
functions of space co-ordinates. In a similar manner, functions of two
and three variables may be represented as double and triple Fourier
series respectively. Periodic waveforms may be expressed in the form of
Fourier series. Non-periodic waveforms may be expressed by Fourier
transforms.
2.2 TRIGONOMETRIC FOURIER SERIES
A periodic function f(t) can be expressed in the form of trigonometric
series as
f(t) = (1/2)a₀ + a₁ cos ω₀t + a₂ cos 2ω₀t + a₃ cos 3ω₀t + …
       + b₁ sin ω₀t + b₂ sin 2ω₀t + b₃ sin 3ω₀t + …     (2.1)

where ω₀ = 2πf = 2π/T, f is the frequency, and the a's and b's are the
coefficients. The Fourier series exists only when the function f(t)
satisfies the following three conditions, called Dirichlet's conditions:
(i) f(t) is well defined and single-valued, except possibly at a finite
number of points, i.e. f(t) has a finite average value over the period T.
(ii) f(t) must possess only a finite number of discontinuities in the
period T.
(iii) f(t) must have a finite number of positive and negative maxima in
the period T.
Equation 2.1 may be expressed as the Fourier series

f(t) = a₀/2 + Σ_{n=1}^{∞} aₙ cos nω₀t + Σ_{n=1}^{∞} bₙ sin nω₀t     (2.2)
where aₙ and bₙ are the coefficients to be evaluated.
Integrating Eq. 2.2 over a full period, we get

∫_{−T/2}^{T/2} f(t) dt = (1/2)a₀ ∫_{−T/2}^{T/2} dt + ∫_{−T/2}^{T/2} Σ_{n=1}^{∞} (aₙ cos nω₀t + bₙ sin nω₀t) dt

The integral of a cosine or sine function over a complete period is zero.

Therefore,   ∫_{−T/2}^{T/2} f(t) dt = (1/2) a₀ T

Hence,   a₀ = (2/T) ∫_{−T/2}^{T/2} f(t) dt     (2.3)

or, equivalently,   a₀ = (2/T) ∫₀^T f(t) dt
Multiplying both sides of Eq. 2.2 by cos mω₀t and integrating, we have

∫_{−T/2}^{T/2} f(t) cos mω₀t dt = (1/2) ∫_{−T/2}^{T/2} a₀ cos mω₀t dt
      + ∫_{−T/2}^{T/2} Σ_{n=1}^{∞} aₙ cos nω₀t cos mω₀t dt + ∫_{−T/2}^{T/2} Σ_{n=1}^{∞} bₙ sin nω₀t cos mω₀t dt

Here,   (1/2) ∫_{−T/2}^{T/2} a₀ cos mω₀t dt = 0

∫_{−T/2}^{T/2} aₙ cos nω₀t cos mω₀t dt = (aₙ/2) ∫_{−T/2}^{T/2} [cos(m + n)ω₀t + cos(m − n)ω₀t] dt
      = { 0,       for m ≠ n
          aₙT/2,   for m = n

∫_{−T/2}^{T/2} bₙ sin nω₀t cos mω₀t dt = (bₙ/2) ∫_{−T/2}^{T/2} [sin(m + n)ω₀t − sin(m − n)ω₀t] dt = 0

Therefore,   ∫_{−T/2}^{T/2} f(t) cos nω₀t dt = aₙT/2,   for m = n

Hence,   aₙ = (2/T) ∫_{−T/2}^{T/2} f(t) cos nω₀t dt     (2.4)

or, equivalently,   aₙ = (2/T) ∫₀^T f(t) cos nω₀t dt
Similarly, multiplying both sides of Eq. 2.2 by sin mω₀t and
integrating, we get

∫_{−T/2}^{T/2} f(t) sin mω₀t dt = (1/2) ∫_{−T/2}^{T/2} a₀ sin mω₀t dt
      + ∫_{−T/2}^{T/2} Σ_{n=1}^{∞} aₙ cos nω₀t sin mω₀t dt + ∫_{−T/2}^{T/2} Σ_{n=1}^{∞} bₙ sin nω₀t sin mω₀t dt
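Equations 2.3 and 2.4, together with the analogous result bₙ = (2/T) ∫_{−T/2}^{T/2} f(t) sin nω₀t dt that this derivation leads to, can be checked numerically. The odd square wave used in the sketch below is an assumed test waveform, chosen because its exact coefficients (aₙ = 0 and bₙ = 4/nπ for odd n) are well known.

```python
import numpy as np

# Numerical Fourier coefficients of an odd square wave of period T:
# +1 on (0, T/2), -1 on (-T/2, 0).  Exact series: a_n = 0, b_n = 4/(n*pi) for odd n.
T = 2.0
w0 = 2 * np.pi / T
t = np.linspace(-T / 2, T / 2, 200001)
f = np.sign(np.sin(w0 * t))               # the square wave over one period
dt = t[1] - t[0]

a0 = (2 / T) * np.sum(f) * dt             # Eq. 2.3 evaluated as a Riemann sum
for n_ in range(1, 6):
    an = (2 / T) * np.sum(f * np.cos(n_ * w0 * t)) * dt    # Eq. 2.4
    bn = (2 / T) * np.sum(f * np.sin(n_ * w0 * t)) * dt    # b_n formula
    exact_bn = round(4 / (n_ * np.pi), 4) if n_ % 2 else 0.0
    print(n_, round(an, 4), round(bn, 4), exact_bn)

print("a0 =", round(a0, 4))
```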