Digital Signal Processing: Dr. Md. Aynal Haque
2014
Chapter 1
INTRODUCTION
1. Signal
A signal is any variable that carries or contains some kind of information that can be
conveyed, displayed or manipulated. It is a function of independent variables such as time,
distance, position, temperature, pressure, etc. Mathematically, we describe a signal as a
function of one or more independent variables. For example, the functions
s1(t) = 5t and s2(t) = 20t²
describe two signals, one that varies linearly with the independent variable t (time) and a second that varies quadratically with t. Another function is
s(x, y) = 3x + 2xy + 10y²
This describes a signal of two independent variables x and y that could represent the two
spatial coordinates in a plane. Speech, music, telegraph, electrocardiogram (ECG) and
electroencephalogram (EEG) are examples of information-bearing signals that are
functions of one independent variable, namely, time. An image is an example of a signal that is a function of two independent variables (spatial coordinates).
2. System
A system may be defined as a device that performs an operation on a signal. For example, a filter is a system: it is used to reduce noise and interference corrupting a desired information-bearing signal. The operations performed by a system usually can be specified
mathematically. The method or set of rules for implementing the system by a program that
performs the corresponding mathematical operations is called an algorithm. The
algorithm can be implemented in software, in hardware, or in a combination of both.
Inverse: The inverse of a system (denoted by T) is a second system (denoted by Ti) that,
when cascaded with the system T, yields the identity system.
x(t) →T → Ti → z(t) = x(t)
Causality: A system is causal if the output at any time t0 is dependent on the input only for
t ≤ t0. For example, y(t) = x(t−2) is causal, but y(t) = x(t+2) is noncausal.
Stability: A system is stable if the output remains bounded for any bounded input. This type of stability is known as BIBO (bounded-input, bounded-output) stability and is the one used most often. If |x(t)| ≤ M for all t, then for BIBO stability there must exist a finite R such that |y(t)| ≤ R for all t. For example, y(t) = x²(t) is stable, but y(t) = t x(t) is unstable.
Time Invariance: A system is said to be time invariant if a time shift in the input signal
results only in the same time shift in the output signal.
x(t) → system → y(t)   implies   x(t−t0) → system → y(t−t0)
Example: Consider the system y(t) = x(t) sin 2t.
● The system has no inverse, since sin 2t is zero at some instants and the input at those instants cannot be recovered.
● The system is causal, since the output does not depend on the input at a future time.
● The system is stable, since the output is bounded for all bounded inputs.
If |x(t)| ≤ M, |y(t)| ≤ M also.
● The system is time-varying: the response to the delayed input is yd(t) = sin 2t · x(t−t0), whereas the delayed output is y(t−t0) = sin 2(t−t0) · x(t−t0), and these are not the same.
● The system is linear, since a1x1(t) + a2x2(t) → sin 2t [a1x1(t) + a2x2(t)] = a1 sin 2t x1(t) + a2 sin 2t x2(t) = a1y1(t) + a2y2(t)
3. Signal Processing
Any operation on a signal is termed signal processing. It is concerned with the mathematical representation of the signal and the algorithmic operations carried out on it to extract the information it carries. Digital signal processing (DSP) is concerned with the digital representation of signals and the use of digital processors to analyze, modify and/or extract information from them.
Fig. 1: Analog signal processing
Chapter 2
DIGITAL SIGNAL
1. Digital Signal
A discrete-time signal having a set of discrete values represented by some symbols (code)
is called a digital signal.
2. A/D Conversion
We can obtain a digital signal from an analog one by performing analog-to-digital (A/D) conversion. Most signals of practical interest, such as speech, biological signals, seismic signals, radar signals, sonar signals, and various communications signals such as audio and video signals, are analog. To process analog signals by digital means, it is first necessary to convert them into digital form, that is, to convert them to a sequence of numbers having finite precision. This procedure is called analog-to-digital (A/D) conversion, and the corresponding devices are called A/D converters (ADCs). Conceptually, we view A/D conversion as a three-step process (sampling, quantization and coding). This process is illustrated in Fig. 4.
Fig. 4: Basic parts of an analog-to-digital (A/D) converter
Sampling: This is the conversion of a continuous-time signal into a discrete-time signal obtained by taking "samples" of the continuous-time signal at discrete time instants. Thus, if xa(t) is the input to the sampler, the output is xa(nT) ≡ x(n), where T is called the sampling interval.
Quantization: This is the conversion of a discrete-time, continuous-valued signal into a discrete-time, discrete-valued (digital) signal. The value of each signal sample is represented by a value selected from a finite set of possible values. The difference between the unquantized sample x(n) and the quantized output xq(n) is called the quantization error.
Coding: In the coding process, each discrete value xq(n) is represented by a fixed-length b-bit binary sequence.
The sampled signal is x(n) = xa(nT), where x(n) is the discrete-time signal obtained by "taking samples" of the analog signal xa(t) every T seconds. This procedure is illustrated in Fig. 5. The time interval T between successive samples is called the sampling period or sample interval, and its reciprocal Fs = 1/T is called the sampling rate (samples per second) or the sampling frequency (hertz).
Continuous-time signals have frequency variables F (Hz) and Ω = 2πF (rad/s) with −∞ < F < ∞, while discrete-time signals have f = F/Fs and ω = ΩT = 2πf with −1/2 ≤ f ≤ 1/2 and −π ≤ ω ≤ π.
From these relations we observe that the fundamental difference between continuous-time and discrete-time signals is in the range of values of the frequency variables F and f (or Ω and ω). Periodic sampling of a continuous-time signal implies a mapping of the infinite frequency range of the variable F (or Ω) into a finite frequency range for the variable f (or ω). Since the highest frequency in a discrete-time signal is ω = π (or f = 1/2), it follows that, with a sampling rate Fs, the corresponding highest values of F and Ω are
Fmax = Fs/2 = 1/(2T) and Ωmax = πFs = π/T
Example: The implications of these frequency relations can be fully appreciated by considering two analog sinusoidal signals whose frequencies differ by the sampling rate.
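The folding of the infinite analog frequency range into a finite digital one can be checked numerically. The sketch below (illustrative values; the specific frequencies are my choice, not recovered from the lost example) samples two sinusoids whose frequencies differ by exactly the sampling rate Fs, so they produce identical sample sequences (aliasing):

```python
import math

Fs = 40.0            # sampling rate in Hz (assumed for illustration)
F1, F2 = 10.0, 50.0  # F2 = F1 + Fs, so both map to the same f = F/Fs (mod 1)

# Sample both analog sinusoids cos(2*pi*F*t) at t = n/Fs
x1 = [math.cos(2 * math.pi * F1 * n / Fs) for n in range(8)]
x2 = [math.cos(2 * math.pi * F2 * n / Fs) for n in range(8)]

# The two sample sequences are indistinguishable
assert all(abs(a - b) < 1e-9 for a, b in zip(x1, x2))
```

Once sampled, nothing distinguishes the 50 Hz sinusoid from the 10 Hz one; this is precisely why the sampling theorem below demands Fs > 2Fmax.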
The Sampling theorem: If the highest frequency contained in an analog signal xa(t) is Fmax = B, then the signal should be sampled at a rate Fs > 2Fmax = 2B. The sampling rate FN = 2Fmax is called the Nyquist rate.
Example: Consider the discrete-time signal x(n) = (0.9)ⁿ, n ≥ 0, obtained by sampling the analog exponential signal xa(t) = (0.9)ᵗ, t ≥ 0 with a sampling frequency Fs = 1 Hz (see Fig. 6). Observation of Table 2, which shows the values of the first 10 samples of x(n), reveals that the description of the sample value x(n) requires n significant digits. It is obvious that this signal cannot be processed exactly by using a calculator or a digital computer, since only the first few samples can be stored and manipulated without error. For example, most calculators process numbers with only ten significant digits.
Fig. 6: Illustration of quantization
Table 2: Quantization with one significant digit using truncation or rounding

n | x(n) (discrete-time signal) | xq(n) (truncation) | xq(n) (rounding) | eq(n) = xq(n) − x(n) (rounding)
0 | 1           | 1.0 | 1.0 | 0.0
1 | 0.9         | 0.9 | 0.9 | 0.0
2 | 0.81        | 0.8 | 0.8 | −0.01
3 | 0.729       | 0.7 | 0.7 | −0.029
4 | 0.6561      | 0.6 | 0.7 | 0.0439
5 | 0.59049     | 0.5 | 0.6 | 0.00951
6 | 0.531441    | 0.5 | 0.5 | −0.031441
7 | 0.4782969   | 0.4 | 0.5 | 0.0217031
8 | 0.43046721  | 0.4 | 0.4 | −0.03046721
9 | 0.387420489 | 0.3 | 0.4 | 0.012579511
However, let us assume that we want to use only one significant digit. To eliminate the excess digits, we can either simply discard them (truncation) or discard them by rounding to the nearest level (rounding). The resulting quantized signals xq(n) are shown in Table 2. We discuss only quantization by rounding, although it is just as easy to treat truncation. The values allowed in the digital signal are called the quantization levels, whereas the distance Δ between two successive quantization levels is called the quantization step size or resolution. The rounding quantizer assigns each sample of x(n) to the nearest quantization level. In contrast, a quantizer that performs truncation would have assigned each sample of x(n) to the quantization level below it. The quantization error eq(n) in rounding is limited to the range −Δ/2 to Δ/2, that is,
−Δ/2 ≤ eq(n) ≤ Δ/2
In other words, the instantaneous quantization error cannot exceed half of the quantization step (see Table 2). If xmin and xmax represent the minimum and maximum values of x(n) and L is the number of quantization levels, then
Δ = (xmax − xmin) / (L − 1)
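Table 2 can be reproduced with a few lines of Python. This is a minimal sketch of a one-significant-digit quantizer (the function name `quantize` is mine); it confirms that the rounding error never exceeds half the step size Δ = 0.1:

```python
import math

def quantize(x, step=0.1, mode="round"):
    """Quantize x to the nearest multiple of `step` (rounding)
    or to the multiple just below it (truncation)."""
    if mode == "round":
        return round(x / step) * step
    return math.floor(x / step) * step

x = [0.9 ** n for n in range(10)]             # discrete-time signal of Table 2
xq_trunc = [quantize(v, mode="trunc") for v in x]
xq_round = [quantize(v, mode="round") for v in x]
eq = [q - v for q, v in zip(xq_round, x)]     # quantization error (rounding)

# e.g. x(4) = 0.6561 -> 0.6 by truncation but 0.7 by rounding (error 0.0439),
# and |eq(n)| never exceeds step/2 = 0.05
assert all(abs(e) <= 0.05 + 1e-12 for e in eq)
```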
2.3 Quantization of Sinusoidal Signals
Fig. 7 illustrates the sampling and quantization of an analog sinusoidal signal xa(t) = A cos Ω0t using a rectangular grid. Horizontal lines within the range of the quantizer indicate the allowed levels of quantization. Vertical lines indicate the sampling times. Thus, from the original analog signal xa(t) we obtain a discrete-time signal x(n) = xa(nT) by sampling, and a discrete-time, discrete-amplitude signal xq(nT) after quantization. If the sampling rate Fs satisfies the sampling theorem, quantization is the only error in the A/D conversion process. Thus, we can evaluate the quantization error by quantizing the analog signal xa(t) instead of the discrete-time signal x(n).
Inspection of Fig. 7 indicates that the signal xa(t) is almost linear between quantization levels (see Fig. 8). The corresponding quantization error eq(t) = xa(t) − xq(t) is shown in Fig. 8, where τ denotes the time for which xa(t) stays within one pair of quantization levels. Let us assume that both the signal and the noise are voltages dropped across a resistor of R ohm. The mean-square error power is
Pq = (1/2τ) ∫_{−τ}^{τ} eq²(t) dt = Δ²/12
since eq(t) = (Δ/2τ) t for −τ ≤ t ≤ τ. If the quantizer has b bits of accuracy and the quantizer covers the entire range 2A, the quantization step is Δ = 2A/2ᵇ. The average power of the sinusoid is Px = A²/2, so
SQNR = Px/Pq = (3/2) · 2^{2b}
Expressed in decibels (dB), the SQNR is
SQNR(dB) = 10 log10 SQNR = 1.76 + 6.02b
This implies that the SQNR increases approximately 6 dB for every bit added to the word length, i.e., for each doubling of the number of quantization levels.
Fig. 7: Sampling and quantization of a sinusoidal signal
Although this formula was derived for sinusoidal signals, a similar result holds for every signal whose dynamic range spans the range of the quantizer. This relationship is extremely important because it dictates the number of bits required by a specific application to assure a given signal-to-noise ratio. For example, most compact disc players use a sampling frequency of 44.1 kHz and 16-bit sample resolution, which implies an SQNR of more than 96 dB.
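The 1.76 + 6.02b rule can be verified empirically. The sketch below (my own test harness, assuming an ideal b-bit rounding quantizer spanning 2A) quantizes a full-scale sinusoid and measures the resulting SQNR, which tracks the formula closely:

```python
import math

def sqnr_db(b, n_samples=10000):
    """Quantize a full-scale sinusoid with a b-bit rounding quantizer
    and measure the signal-to-quantization-noise ratio in dB."""
    A = 1.0
    step = 2 * A / (2 ** b)          # quantizer covers the full range 2A
    sig_pwr = noise_pwr = 0.0
    for n in range(n_samples):
        x = A * math.sin(2 * math.pi * 0.01234 * n)  # arbitrary test frequency
        xq = round(x / step) * step
        sig_pwr += x * x
        noise_pwr += (xq - x) ** 2
    return 10 * math.log10(sig_pwr / noise_pwr)

theory = 1.76 + 6.02 * 16        # about 98.1 dB for 16-bit samples
measured = sqnr_db(16)
assert abs(measured - theory) < 1.5   # measured value tracks the formula
```

This also explains the "more than 96 dB" claim for 16-bit compact discs.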
3. Digital-to-Analog Conversion
To convert a digital signal into an analog signal we can use a digital-to-analog (D/A) converter. The task of a D/A converter is to interpolate between samples.
Chapter 3
DT SIGNAL ANALYSIS
1. DT System
A DT system operates of DT signals.
🡪 transformation / operation / processing
Some operations on DT Signal:
Adder: y(n) = x1(n) + x2(n)
Constant multiplier: y(n) = a x(n)
Signal multiplier: y(n) = x1(n) x2(n)
Unit delay: y(n) = x(n−1)
Unit advance: y(n) = x(n+1)
A general LTI system is described by the constant-coefficient difference equation
y(n) = −Σ_{k=1}^{N} ak y(n−k) + Σ_{k=0}^{M} bk x(n−k)
where ak, bk are system parameters while M, N are system orders. The system is also termed an autoregressive moving average (ARMA) system. It has two subclasses:
● If bk = 0 for all k ≥ 1, the system is termed an autoregressive (AR) system
● If ak = 0 for all k, the system is termed a moving average (MA) system
Two methods are used to analyze an LTI system:
● Direct solution of the I/O relation:
y(n) = F[y(n−1), y(n−2), . . ., y(n−N), x(n), x(n−1), . . ., x(n−M)]
● Impulse-resolution (convolution) method
We will use the second method first.
Since x(n) can be resolved into a weighted sum of impulses,
x(n) = Σ_{k=−∞}^{∞} x(k) δ(n−k)
we can find the response of the system to the impulse function, h(n) = T[δ(n)]. Then, using the superposition summation (convolution),
y(n) = Σ_{k=−∞}^{∞} x(k) h(n−k) = x(n) * h(n)
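The superposition summation can be sketched directly in Python for finite-length sequences (a minimal illustration; `convolve` is my own name for it):

```python
def convolve(x, h):
    """Direct evaluation of y(n) = sum_k x(k) h(n-k) for finite sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        for m, hm in enumerate(h):
            y[k + m] += xk * hm   # x(k) contributes to output sample k+m
    return y

# Response of h = {1, 2, 1} to the input x = {1, 1, 1}
assert convolve([1, 1, 1], [1, 2, 1]) == [1, 3, 4, 3, 1]
```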
Distributive 🡪 x(n) * [h1(n)+ h2(n)] = x(n) * h1(n)+ x(n) * h2(n)
If the impulse response has finite length, the system is termed a finite impulse response (FIR) system; otherwise the system is an infinite impulse response (IIR) system. The FIR system response can be calculated by convolution. However, an IIR system must first be represented by a recursive equation to solve for the response. Let us take the example of cumulative summation,
y(n) = Σ_{k=0}^{n} x(k)
So, to compute y(n) directly, we need to store all of x(0), . . ., x(n). Alternatively, we can express y(n) as
y(n) = y(n−1) + x(n) 🡨 Recursive system
Generally, a first-order recursive equation is written as
y(n) = a y(n−1) + x(n)
Putting n = 0, 1, 2, . . .
y(0) = a y(−1) + x(0)
y(1) = a² y(−1) + a x(0) + x(1)
y(2) = a³ y(−1) + a² x(0) + a x(1) + x(2)
. . .
y(n) = a^{n+1} y(−1) + aⁿ x(0) + a^{n−1} x(1) + . . . + x(n)
y(−1) is called the initial condition. If the system is initially relaxed at n = 0, then y(−1) = 0 and hence the response will depend on x(n) only. This output is termed the zero-state response.
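The recursion above can be iterated in a few lines (a minimal sketch; `first_order` is my own helper name). With a = 0.5 and a unit-step input the zero-state response is y(n) = Σ_{k=0}^{n} 0.5ᵏ = 2 − 0.5ⁿ:

```python
def first_order(a, x, y_init=0.0):
    """Iterate y(n) = a*y(n-1) + x(n); y_init is the initial condition y(-1)."""
    y, y_prev = [], y_init
    for xn in x:
        y_prev = a * y_prev + xn
        y.append(y_prev)
    return y

# Zero-state step response of y(n) = 0.5*y(n-1) + x(n)
y = first_order(0.5, [1.0] * 5)
assert y == [1.0, 1.5, 1.75, 1.875, 1.9375]
```

Changing `y_init` shows how a nonzero initial condition y(−1) adds the a^{n+1} y(−1) term of the expansion above.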
For an LTI system, y(n) = Σ_k h(k) x(n−k), so |y(n)| ≤ Σ_k |h(k)| |x(n−k)|. For bounded x(n), |x(n)| ≤ Mx, so |y(n)| ≤ Mx Σ_k |h(k)|. Thus y(n) is bounded provided that the sum of the absolute values of the impulse response is finite, i.e.,
Σ_{k=−∞}^{∞} |h(k)| < ∞
Example: Determine the range of values of a and b for which an LTI system with impulse response h(n) = aⁿ u(n) + bⁿ u(−n−1) is stable. The causal part contributes Σ_{n=0}^{∞} |a|ⁿ, which is finite for |a| < 1; the anti-causal part contributes Σ_{n=1}^{∞} |b|^{−n}, which is finite for |b| > 1. Hence the system is stable for |a| < 1 and |b| > 1.
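The stability condition can be probed numerically by truncating the absolute sum of h(n) (my own sketch; the truncation length is arbitrary):

```python
def h(n, a, b):
    """h(n) = a^n u(n) + b^n u(-n-1)."""
    return a ** n if n >= 0 else b ** n

def abs_sum(a, b, terms=2000):
    return sum(abs(h(n, a, b)) for n in range(-terms, terms))

# |a| < 1 and |b| > 1: both tails decay, so the truncated sum stays small
stable = abs_sum(0.5, 2.0)
# |b| < 1 makes the anti-causal tail b^n blow up as n -> -infinity
unstable = abs_sum(0.5, 0.5, terms=100)
assert stable < 10
assert unstable > 1e6
```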
The system can be realized as shown in Fig. 1 (a). This realization uses separate delays
(memory) for both the input and output signal samples. This realization is called direct
form I structure. The system can be viewed as two LTI systems in cascade. The first one
is a non-recursive system described by
v(n) = Σ_{k=0}^{M} bk x(n−k)
and the second is a recursive system
y(n) = −Σ_{k=1}^{N} ak y(n−k) + v(n)
For LTI systems, if we interchange the order of cascading, the system response remains the same. Interchanging the order (recursive part first, then non-recursive part), we obtain an alternate structure as shown in Fig. 1 (b), where the difference equations are
w(n) = −Σ_{k=1}^{N} ak w(n−k) + x(n)
y(n) = Σ_{k=0}^{M} bk w(n−k)
Since the two sets of delay units contain the same input w(n), the delays can be merged to obtain the structure of Fig. 1(c). This realization is termed the direct form II structure.
Fig. 1: Steps of converting direct form I (a) to direct form II realization (c)
Figure 2 shows the direct form I structure, where we need M+N delays and M+N+1 multiplications. The equations are
Non-recursive: v(n) = Σ_{k=0}^{M} bk x(n−k)
Recursive: y(n) = −Σ_{k=1}^{N} ak y(n−k) + v(n)
Fig. 2: Direct form I of generalized LTI system
Figure 3 shows the direct form II structure, where we need max(M, N) delays and M+N+1 multiplications. This form is also known as the canonic form. The equations are
Recursive: w(n) = −Σ_{k=1}^{N} ak w(n−k) + x(n)
Non-recursive: y(n) = Σ_{k=0}^{M} bk w(n−k)
Let us take an FIR system to distinguish recursive and non-recursive realizations. Suppose we have an FIR system that computes the moving average
y(n) = (1/(M+1)) Σ_{k=0}^{M} x(n−k)
This can be rearranged as
y(n) = y(n−1) + (1/(M+1)) [x(n) − x(n−M−1)]
This equation represents a recursive realization of the same FIR system, represented in Figure 5.
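The equivalence of the two moving-average realizations can be checked in Python (a minimal sketch; both function names are my own, and x(n) is taken as zero for n < 0):

```python
def ma_direct(x, M):
    """Non-recursive: y(n) = (1/(M+1)) * sum_{k=0}^{M} x(n-k)."""
    return [sum(x[n - k] for k in range(M + 1) if n - k >= 0) / (M + 1)
            for n in range(len(x))]

def ma_recursive(x, M):
    """Recursive: y(n) = y(n-1) + (x(n) - x(n-M-1))/(M+1),
    costing one add and one subtract per sample regardless of M."""
    y, prev = [], 0.0
    for n in range(len(x)):
        oldest = x[n - M - 1] if n - M - 1 >= 0 else 0.0
        prev = prev + (x[n] - oldest) / (M + 1)
        y.append(prev)
    return y

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
assert all(abs(a - b) < 1e-9
           for a, b in zip(ma_direct(x, 2), ma_recursive(x, 2)))
```

The recursive form trades M additions per sample for a single update, which is why it is preferred for long averaging windows.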
4. Correlation
Correlation is the measurement of the degree to which two signals are similar. The
application areas include radar, sonar, digital communications, geology, and so on. A
received signal from a target can be represented as y(n) = A x(n-D) + w(n)
D 🡪 round trip delay, assumed to be an integral multiple of sampling interval
w(n) 🡪 additive noise
Cross-correlation: the correlation between two different signals. The dependency of y(n) on x(n) is
rxy(l) = Σ_{n=−∞}^{∞} x(n) y(n−l),  l = 0, ±1, ±2, . . .
On the other hand, the dependency of x(n) on y(n) is
ryx(l) = Σ_{n=−∞}^{∞} y(n) x(n−l)
Here l is termed the lag parameter. We see from the expressions that correlation is similar to convolution except for the operation of folding. So we can write rxy(l) = x(l) * y(−l). Also note that rxy(l) = ryx(−l).
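For finite-length sequences the definition translates directly into code (a minimal sketch, with zeros assumed outside the stored samples; `xcorr` is my own name). It also confirms the symmetry rxy(l) = ryx(−l):

```python
def xcorr(x, y, l):
    """r_xy(l) = sum_n x(n) y(n-l) for finite sequences (zero outside)."""
    return sum(xn * y[n - l] for n, xn in enumerate(x) if 0 <= n - l < len(y))

x = [1.0, 2.0, 3.0]
y = [3.0, 2.0, 1.0]
# r_xy(l) = r_yx(-l)
assert all(xcorr(x, y, l) == xcorr(y, x, -l) for l in range(-2, 3))
# r_xx(0) is the signal energy
assert xcorr(x, x, 0) == sum(v * v for v in x)
```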
Autocorrelation: the correlation of a signal with a time-shifted version of itself; here y(n) = x(n):
rxx(l) = Σ_{n=−∞}^{∞} x(n) x(n−l)
When l = 0,
rxx(0) = Σ_{n=−∞}^{∞} x²(n) = Ex 🡪 Energy of the signal
Maximum correlation: Let x(n) and y(n) have finite energies Ex and Ey. For a time shift l, form the composite signal s(n) = a x(n) + b y(n−l). The energy of this signal is
Es = a² Ex + b² Ey + 2ab rxy(l)
Since Es ≥ 0, dividing by b² gives
(a/b)² Ex + 2(a/b) rxy(l) + Ey ≥ 0
This is a quadratic in a/b. If a quadratic is nonnegative for all values of its argument, its discriminant must be non-positive. So,
rxy²(l) ≤ Ex Ey, i.e., |rxy(l)| ≤ √(Ex Ey)
For autocorrelation,
|rxx(l)| ≤ rxx(0) = Ex
So the autocorrelation has its maximum value at zero lag. To scale down the correlation function, it is normally normalized between −1 and 1. The normalized correlations are
ρxy(l) = rxy(l)/√(Ex Ey) and ρxx(l) = rxx(l)/rxx(0)
4.2 Correlation of Periodic Sequences
Let x(n) and y(n) be power signals, with
rxy(l) = lim_{M→∞} (1/(2M+1)) Σ_{n=−M}^{M} x(n) y(n−l)
If x(n) and y(n) are periodic with period N, the averages over the infinite interval are identical to averages over one period:
rxy(l) = (1/N) Σ_{n=0}^{N−1} x(n) y(n−l)
So rxy(l) and rxx(l) are periodic sequences with period N, and 1/N is the normalization scale factor.
Let y(n) = x(n) + w(n), where x(n) is periodic with period N and w(n) is noise. Suppose we observe M samples of y(n), 0 ≤ n ≤ M−1, where M >> N. For practical purposes we can assume y(n) = 0 for n < 0 and n ≥ M. Using the normalization factor 1/M,
ryy(l) = rxx(l) + rxw(l) + rwx(l) + rww(l)
The first term is periodic and shows large peaks at l = 0, N, 2N, . . .. But as l approaches M, the peaks are reduced, since many of the products x(n)x(n−l) are 0. So ryy(l) does not signify much for l > M/2. If x(n) and w(n) are unrelated, rxw(l) and rwx(l) are very small. rww(l) will contain a peak at l = 0 and, because of its randomness, will rapidly decay to 0. So only rxx(l) is expected to have large peaks for l > 0. This property can be applied to the detection of a periodic signal buried in noise.
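A small simulation illustrates the detection idea (my own sketch: a period-10 sinusoid buried in unit-variance Gaussian noise, with M = 1000 observed samples). The autocorrelation estimate peaks again at lags N, 2N, . . . even though the signal is invisible in the raw samples:

```python
import math
import random

random.seed(1)
N, M = 10, 1000                # period N, observation length M >> N
x = [math.sin(2 * math.pi * n / N) for n in range(M)]   # periodic signal
w = [random.gauss(0.0, 1.0) for _ in range(M)]          # additive noise
y = [a + b for a, b in zip(x, w)]

def r_yy(l):
    """Autocorrelation estimate with normalization factor 1/M."""
    return sum(y[n] * y[n - l] for n in range(l, M)) / M

# Peaks reappear at multiples of the period N (signal power is 0.5)
assert r_yy(N) > 0.3 and r_yy(2 * N) > 0.3
# Away from a multiple of N the signal term is negative or small
assert r_yy(N // 2) < r_yy(N)
```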
Using autocorrelation,
ryy(l) = y(l) * y(−l) = [h(l) * x(l)] * [h(−l) * x(−l)]
= [h(l) * h(−l)] * [x(l) * x(−l)] = rhh(l) * rxx(l)
rhh(l) exists if the system is stable.
Chapter 4
Z TRANSFORM
1. Sampling and the Z Transform
Starting from the transform of a sampled signal, substitute z = e^{sT} (equivalently, write z = r e^{jω}). This gives the z-transform
X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n}
where the set of z for which the sum converges is the region of convergence (ROC). A basic building block is the power (exponential) function x(n) = aⁿ.
2. Significance of ROC
Consider a power function in both causal and anti-causal form. For the causal sequence x(n) = aⁿ u(n),
X(z) = Σ_{n=0}^{∞} (a z^{−1})ⁿ = 1/(1 − a z^{−1}), ROC: |z| > |a|
For the anti-causal sequence x(n) = −aⁿ u(−n−1),
X(z) = 1/(1 − a z^{−1}), ROC: |z| < |a|
The two sequences have the same X(z), but their ROCs are different. Without the ROC we cannot uniquely determine the sequence x(n). Generally, for a causal sequence the ROC is the exterior of the circle of radius |a|, and for an anti-causal sequence it is the interior of that circle.
3. Properties of Z Transform
Expansion of Y(z) = H(z) X(z) in partial fractions yields two groups of terms: those that originate in the poles of H(z) and those that originate in the poles of X(z). The sum of the terms in the partial fraction expansion that originate in the poles of X(z) is the z-transform of the forced response. The inverse z-transform (IZT) of the remaining terms yields the natural response. If the input x(n) is bounded, the forced response will remain bounded, since it is of the same functional form as x(n). Thus an unbounded output must come from the natural response, and unboundedness can occur only if the magnitude of at least one pole satisfies |pk| > 1.
So an LTI discrete-time causal system is BIBO stable provided that all poles of the system transfer function lie inside the unit circle in the z-plane.
Example: Let the system have
A(z) = (1 − 0.5z^{−1})(1 − 0.8z^{−1})(1 − 1.2z^{−1})
Then the poles of the transfer function are at 0.5, 0.8 and 1.2. The system is unstable because the pole at z = 1.2 lies outside the unit circle.
For H(z) = B(z)/A(z), the system will be stable if all the roots of A(z) have magnitude less than 1. A polynomial of degree m is written as
Am(z) = Σ_{k=0}^{m} am(k) z^{−k}, where am(0) = 1
The reverse (reciprocal) polynomial Bm(z) of degree m is
Bm(z) = z^{−m} Am(z^{−1}) = Σ_{k=0}^{m} am(m−k) z^{−k}
So the coefficients of Bm(z) are the same as those of Am(z), but in reverse order.
In the Schür-Cohn stability test, to determine whether the polynomial A(z) has all its roots inside the unit circle, we compute a set of coefficients called the reflection coefficients, k1, k2, . . . , kN, from the polynomials Am(z). Let AN(z) = A(z) and kN = aN(N); then compute the lower-degree polynomials Am(z), m = N, N−1, . . . , 1, as
A_{m−1}(z) = [Am(z) − km Bm(z)] / (1 − km²), where km = am(m)
The polynomial A(z) has all its roots inside the unit circle if and only if the coefficients km satisfy |km| < 1 for all m = 1, 2, . . . , N.
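The Schür-Cohn recursion is short enough to implement directly (a sketch under the convention that A(z) is given by its coefficient list [1, a(1), . . ., a(N)]; the function names are mine). It is tested against factored second-order polynomials whose roots are known:

```python
def reflection_coeffs(a):
    """Schur-Cohn recursion on A(z) = 1 + a[1] z^-1 + ... + a[N] z^-N.
    Returns k_N, ..., k_1; all roots lie inside the unit circle
    iff every |k_m| < 1."""
    A = list(a)
    ks = []
    for m in range(len(A) - 1, 0, -1):
        km = A[m]
        ks.append(km)
        if abs(km) == 1.0:          # recursion breaks down; not stable
            break
        B = A[::-1]                 # reverse polynomial B_m(z)
        A = [(A[i] - km * B[i]) / (1 - km * km) for i in range(m)]
    return ks

def is_stable(a):
    return all(abs(k) < 1 for k in reflection_coeffs(a))

# A(z) = (1 - 0.5 z^-1)(1 - 0.4 z^-1) = 1 - 0.9 z^-1 + 0.2 z^-2: roots inside
assert is_stable([1.0, -0.9, 0.2])
# A(z) = (1 - 0.5 z^-1)(1 - 1.2 z^-1) = 1 - 1.7 z^-1 + 0.6 z^-2: root at 1.2
assert not is_stable([1.0, -1.7, 0.6])
```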
5. Inverse Z-Transform
Since X(z) = Σ_n x(n) z^{−n}, the values of x(n) are the coefficients of the powers z^{−n} and so can be obtained by direct inspection of X(z). Normally, however, X(z) is expressed as a ratio of two polynomials in z^{−1} or in z.
5.1 Power Series Expansion Method
X(z) can be expanded into an infinite series in z^{−1} or z by long division (synthetic division); the coefficient of z^{−n} is then x(n).
Example: For X(z) = 1/(1 − a z^{−1}), |z| > |a|, expanding as a power series (or performing the long division) gives
X(z) = 1 + a z^{−1} + a² z^{−2} + . . .
so x(n) = aⁿ u(n).
For X(z) = B(z)/A(z), the long division can be performed by the recursive approach
x(n) = [b(n) − Σ_{k=1}^{n} a(k) x(n−k)] / a(0)
where a(k) and b(k) are the coefficients of A(z) and B(z), taken as zero beyond their degrees.
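The recursive long-division formula maps directly to code (a minimal sketch; `power_series` is my own name). For X(z) = 1/(1 − 0.5 z⁻¹) it reproduces x(n) = 0.5ⁿ:

```python
def power_series(b, a, n_terms):
    """Coefficients x(n) of X(z) = B(z)/A(z) = sum_n x(n) z^-n via the
    long-division recursion x(n) = (b(n) - sum_{k>=1} a(k) x(n-k)) / a(0)."""
    x = []
    for n in range(n_terms):
        bn = b[n] if n < len(b) else 0.0
        acc = sum(a[k] * x[n - k] for k in range(1, min(n, len(a) - 1) + 1))
        x.append((bn - acc) / a[0])
    return x

# X(z) = 1 / (1 - 0.5 z^-1)  ->  x(n) = 0.5^n
assert power_series([1.0], [1.0, -0.5], 5) == [1.0, 0.5, 0.25, 0.125, 0.0625]
```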
Starting from X(z) = Σ_n x(n) z^{−n}, multiply both sides by z^{n−1} and integrate over a contour C lying within the ROC of X(z) and enclosing the origin. Hence,
x(n) = (1/2πj) ∮_C X(z) z^{n−1} dz
Cauchy residue theorem: if f(z) exists on and inside the contour C and has no pole at z = z0, then
(1/2πj) ∮_C [f(z)/(z − z0)] dz = f(z0) if z0 is inside C, and 0 otherwise
So x(n) equals the sum of the residues of X(z) z^{n−1} at the poles inside C.
Time delay (one-sided z-transform): x(n−k) ↔ z^{−k} [X(z) + Σ_{n=1}^{k} x(−n) zⁿ]
If x(n) is causal, x(n−k) ↔ z^{−k} X(z).
Time advance: x(n+k) ↔ zᵏ [X(z) − Σ_{n=0}^{k−1} x(n) z^{−n}]
Example: Fibonacci sequence {1 1 2 3 5 8 13 . . .}. Any number equals the sum of the two previous numbers:
y(n) = y(n−1) + y(n−2)
Treating the sequence as the impulse response of y(n) = y(n−1) + y(n−2) + x(n) and taking the z-transform, the characteristic polynomial is 1 − z^{−1} − z^{−2}, so the poles are
p1,2 = (1 ± √5)/2
Hence y(n) can be written as a combination of p1ⁿ and p2ⁿ.
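The pole expansion gives the familiar Binet-style closed form, which we can check against the recursion (a sketch; the indexing is chosen so the sequence starts 1, 1, 2, . . . as in the text):

```python
import math

p1 = (1 + math.sqrt(5)) / 2       # system poles
p2 = (1 - math.sqrt(5)) / 2

def fib_closed(n):
    """Closed form from the pole expansion; fib_closed(0) = fib_closed(1) = 1."""
    return round((p1 ** (n + 1) - p2 ** (n + 1)) / math.sqrt(5))

# Compare against the recursion y(n) = y(n-1) + y(n-2)
seq = [1, 1]
for _ in range(10):
    seq.append(seq[-1] + seq[-2])
assert [fib_closed(n) for n in range(12)] == seq
```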
Let H(z) = B(z)/A(z) and X(z) = N(z)/Q(z), so that
Y(z) = H(z) X(z) = B(z) N(z) / [A(z) Q(z)]
If the system is initially relaxed, Y(z) above gives the complete response. Here the roots {pk} of A(z) are the system poles and the roots {qk} of Q(z) are the input poles. Let all the poles be simple, and let the zeros of B(z) and N(z) not coincide with {pk} and {qk}, so that no pole-zero cancellation occurs. We then have
y(n) = Σ_{k=1}^{N} Ak (pk)ⁿ u(n) + Σ_{k=1}^{L} Qk (qk)ⁿ u(n)
7.3 Transient and Steady-State Responses: The natural response of the system is the first sum above, Σ_k Ak (pk)ⁿ u(n). If |pk| < 1 for all k, it decays to zero as n → ∞ and is called the transient response; the forced response, which originates in the input poles, then remains as the steady-state response.
7.4 Pole-zero Cancellation: A pole and a zero at the same location may arise from (1) the system function H(z) itself and/or (2) the product H(z)X(z). Pole-zero cancellation due to (1) reduces the system order, while that due to (2) suppresses a pole by a zero. A zero located very near a pole results in a response term with very small amplitude.
Chapter 5
FOURIER ANALYSIS OF DT SIGNALS
In this chapter we shall discuss Fourier analysis of discrete-time signals. Our approach is parallel to that used for continuous-time signals. We first represent a periodic signal x(n) as a Fourier series formed by a discrete-time exponential (or sinusoid) and its harmonics. Later we extend this representation to an aperiodic signal x(n) by considering x(n) as a limiting case of a periodic signal with the period approaching infinity.
A discrete-time exponential e^{jrΩ0n} with Ω0 = 2π/N0 serves as the basic harmonic (2)
e^{j(r + mN0)Ω0 n} = e^{jrΩ0 n},  m integer (3)
Thus, the first harmonic is identical to the (N0+1)st harmonic, the second harmonic is identical to the (N0+2)nd harmonic, and so on. In other words, there are only N0 independent harmonics, and they range over an interval 2π (because the harmonics are separated by Ω0 = 2π/N0). Hence a periodic signal x(n) can be expressed as
x(n) = Σ_{r=0}^{N0−1} Dr e^{jrΩ0 n} (4)
To compute the coefficients Dr in the Fourier series, we multiply both sides of (4) by e^{−jmΩ0 k} and sum over k from k = 0 to (N0 − 1):
Σ_{k=0}^{N0−1} x(k) e^{−jmΩ0 k} = Σ_{k=0}^{N0−1} Σ_{r=0}^{N0−1} Dr e^{j(r−m)Ω0 k} (5)
The right-hand sum, after interchanging the order of summation, results in
Σ_{r=0}^{N0−1} Dr Σ_{k=0}^{N0−1} e^{j(r−m)Ω0 k} (6)
The inner sum is zero for all values of r ≠ m. It is nonzero, with value N0, only when r = m. This fact means the outside sum has only one term Dm N0 (corresponding to r = m). Therefore the right-hand side is equal to Dm N0, and
Dm = (1/N0) Σ_{k=0}^{N0−1} x(k) e^{−jmΩ0 k} (7)
We now have a discrete-time Fourier series (DTFS) representation of an N0-periodic signal x(n) as
x(n) = Σ_{r=0}^{N0−1} Dr e^{jrΩ0 n} (8)
where
Dr = (1/N0) Σ_{n=0}^{N0−1} x(n) e^{−jrΩ0 n} (9)
Observe that DTFS equations (8) and (9) are identical (within a scaling constant) to the
DFT equations derived at the end of this chapter. Therefore, we can compute the DTFS
coefficients using the efficient FFT algorithm.
In general, the Fourier coefficients Dr are complex, and they can be represented in the polar form as
Dr = |Dr| e^{j∠Dr} (10)
The plot of |Dr| vs. r is called the amplitude spectrum and that of ∠Dr vs. r is called the angle (or phase) spectrum. These two plots together are the frequency spectra of x(n).
Knowing these spectra, we can reconstruct or synthesize x(n). Therefore, the Fourier (or
frequency) spectra, which are an alternative way of describing a signal x(n), are in every
way equivalent (in terms of the information) to the plot of x(n) as a function of n. The
Fourier spectra of a signal constitute the frequency-domain description of x(n), in contrast to the time-domain description, where x(n) is specified as a function of time (n).
The results are very similar to the representation of a continuous-time periodic signal by an exponential Fourier series except that, generally, the continuous-time signal spectrum has infinite bandwidth and consists of an infinite number of exponential components (harmonics). The spectrum of the discrete-time periodic signal, in contrast, is bandlimited and has at most N0 components.
D_{r+N0} = (1/N0) Σ_n x(n) e^{−j(r+N0)Ω0 n} = (1/N0) Σ_n x(n) e^{−jrΩ0 n} = Dr
Therefore, if x(n) is N0-periodic, the coefficient sequence Dr is also N0-periodic, as is ∠Dr. Now, we can write,
x(n) = Σ_{r=⟨N0⟩} Dr e^{jrΩ0 n} (12)
Dr = (1/N0) Σ_{n=⟨N0⟩} x(n) e^{−jrΩ0 n} (13)
where ⟨N0⟩ denotes summation over any N0 consecutive values.
If we plot Dr for all values of r (rather than only 0 ≤ r ≤ N0 − 1), then the spectrum Dr is N0-periodic. Moreover, x(n) can be synthesized not only by the N0 exponentials corresponding to 0 ≤ r ≤ N0 − 1, but by any successive N0 exponentials in this spectrum, starting at any value of r (positive or negative). For this reason, it is customary to show the spectrum Dr for all values of r (not just over the interval 0 ≤ r ≤ N0 − 1). Yet we must remember that to synthesize x(n) from this spectrum, we need to add only N0 consecutive components.
The spectral components Dr are separated by the frequency Ω0 = 2π/N0, and there are a total of N0 components repeating periodically along the Ω axis. Thus, on the frequency scale Ω, Dr repeats every 2π interval. Equations (12) and (13) show that both x(n) and its spectrum Dr are periodic and both have exactly the same number of components (N0) over one period. The period of x(n) is N0 samples, and that of Dr is 2π radians.
For real x(n), D_{−r} = Dr*, so that the amplitude spectrum |Dr| is an even function, and ∠Dr is an odd function, of r (or Ω). All these concepts will be clarified by the examples to follow.
Example: Find the discrete-time Fourier series (DTFS) for x(n) = sin(0.1πn). Sketch the amplitude and phase spectra.
In this case the sinusoid sin(0.1πn) (Fig. 1a) is periodic because Ω0/2π = 1/20 is a rational number, and the period N0 is
N0 = m (2π/Ω0) = m (2π/0.1π) = 20m
The smallest value of m that makes 20m an integer is m = 1. Therefore the period N0 = 20, so that Ω0 = 2π/N0 = 0.1π, and from Eq. (13),
Dr = (1/20) Σ_{n=⟨20⟩} sin(0.1πn) e^{−jr(0.1π)n}
where the sum is performed over any 20 consecutive values of n. We shall select the range −10 ≤ r < 10 (values of r from −10 to 9). This choice corresponds to synthesizing x(n) using the spectral components in the fundamental frequency range (−π ≤ Ω < π). Expressing the sinusoid in exponential form, sin(0.1πn) = (1/2j)(e^{j0.1πn} − e^{−j0.1πn}), we get
Dr = (1/20)(1/2j) Σ_n [e^{j(1−r)0.1πn} − e^{−j(1+r)0.1πn}]
In these sums, r takes on all values between −10 and 9. The first sum on the right-hand side is zero for all values of r except r = 1, when the sum is equal to N0 = 20. Similarly, the second sum is zero for all values of r except r = −1, when it is equal to N0 = 20.
Therefore D1 = 1/2j and D−1 = −1/2j, and all other coefficients are zero. The corresponding Fourier series is given by
x(n) = (1/2j) e^{j0.1πn} − (1/2j) e^{−j0.1πn} = sin(0.1πn)
Figures 1b and c show the sketch of Dr for the interval (−10 ≤ r < 10). There are only two nonzero components, corresponding to r = 1 and −1; the remaining 18 coefficients are zero. Because of the periodicity property, the spectrum Dr is a periodic function of r with period N0 = 20. For this reason, we repeat the spectrum with period N0 = 20 (or Ω = 2π), as illustrated in Figs. 1b and c, which are periodic extensions of the spectrum in the range −10 ≤ r < 10. Observe that the amplitude spectrum is an even function, and the angle or phase spectrum is an odd function, of r (or Ω), as expected.
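Equation (9) can be evaluated numerically to confirm this result (a direct-summation sketch; in practice the same coefficients come from an FFT divided by N0):

```python
import cmath
import math

N0 = 20
x = [math.sin(0.1 * math.pi * n) for n in range(N0)]

# Eq. (9): Dr = (1/N0) * sum_n x(n) e^{-j r (2*pi/N0) n}, for -10 <= r < 10
D = [sum(x[n] * cmath.exp(-2j * math.pi * r * n / N0) for n in range(N0)) / N0
     for r in range(-10, 10)]

# Only r = 1 and r = -1 survive, with D1 = 1/2j and D_{-1} = -1/2j
for r, Dr in zip(range(-10, 10), D):
    if r == 1:
        assert abs(Dr - 1 / 2j) < 1e-9
    elif r == -1:
        assert abs(Dr + 1 / 2j) < 1e-9
    else:
        assert abs(Dr) < 1e-9
```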
Exercise: Find the period and the DTFS for over the
interval [Hint: Compute Dr first using Eq. (9)]
Answer:
Applying a limiting process, we now show that an aperiodic signal can be expressed as a continuous sum (integral) of everlasting exponentials. To represent an aperiodic signal x(n), such as the one illustrated in Fig. 2a, by everlasting exponential signals, let us construct a new periodic signal x_{N0}(n) formed by repeating the signal x(n) every N0 units, as shown in Fig. 2. The period N0 is made long enough to avoid overlap between the repeating cycles. The periodic signal x_{N0}(n) can be represented by an exponential Fourier series. If we let N0 → ∞, the signal repeats after an infinite
Fig. 2 Generation of a periodic signal by periodic extension of a signal
interval, so lim_{N0→∞} x_{N0}(n) = x(n). Thus, the Fourier series representing x_{N0}(n) will also represent x(n) in the limit N0 → ∞. The exponential Fourier series for x_{N0}(n) is given by
x_{N0}(n) = Σ_{r=⟨N0⟩} Dr e^{jrΩ0 n}, Ω0 = 2π/N0 (16)
where
Dr = (1/N0) Σ_{n=⟨N0⟩} x_{N0}(n) e^{−jrΩ0 n} (17)
The limits for the sum on the right-hand side of equation (17) should be from −N0/2 to N0/2. But because x_{N0}(n) = x(n) over this range and x(n) = 0 outside it, it does not matter if the limits are taken from −∞ to ∞. In the limit, as N0 → ∞, rΩ0 → Ω, a continuous variable. Also, Ω0 becomes infinitesimal (Ω0 🡪 0). For this reason it is appropriate to replace N0 Dr with a continuous function of Ω:
X(Ω) = Σ_{n=−∞}^{∞} x(n) e^{−jΩn} 🡪 DTFT (17.A)
With the same reasoning, the LHS of equation (16) becomes x(n), and the summation on the RHS becomes an integration over a range of 2π. So,
x(n) = (1/2π) ∫_{2π} X(Ω) e^{jΩn} dΩ (16.A)
🡪 IDTFT (18)
Symbolically,
Dr = (1/N0) X(rΩ0) (19)
This result shows that the Fourier coefficients Dr are (1/N0) times the samples of X(Ω) taken every Ω0 rad. Therefore (1/N0)X(Ω) is the envelope for the coefficients Dr. We now let N0 → ∞ by doubling N0 repeatedly. Doubling N0 halves the
fundamental frequency Ω0, so the spacing between successive spectral components (harmonics) is halved, and there are now twice as many components (samples) in the spectrum. At the same time, by doubling N0, the envelope (1/N0)X(Ω) of the coefficients Dr is halved, as seen from Eq. (19). If we continue this process of doubling repeatedly, the number of components doubles in each step; the spectrum progressively becomes denser, while its magnitude Dr becomes smaller. Note, however, that the relative shape of the envelope remains the same. In the limit, as N0 → ∞, the fundamental frequency Ω0 → 0, and the separation between successive harmonics, which is Ω0, approaches zero (infinitesimal); the spectrum becomes so dense that it appears continuous. But as the number of harmonics increases indefinitely, the harmonic amplitudes become vanishingly small (infinitesimal). We have the strange situation of having nothing of everything.
Example: Find the response of the DT system with input x(n) = (0.8)ⁿ u(n) and impulse response h(n) = (0.5)ⁿ u(n).
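As a numerical cross-check of this example (the closed form below is my own derivation from the convolution sum, not taken from the lost worked solution), the response is y(n) = (0.8^{n+1} − 0.5^{n+1}) / 0.3:

```python
def conv_causal(x, h):
    """y(n) = sum_{k=0}^{n} x(k) h(n-k) for causal sequences."""
    return [sum(x[k] * h[n - k] for k in range(n + 1)) for n in range(len(x))]

N = 20
x = [0.8 ** n for n in range(N)]
h = [0.5 ** n for n in range(N)]
y = conv_causal(x, h)

# Closed form: y(n) = (0.8^{n+1} - 0.5^{n+1}) / (0.8 - 0.5)
y_cf = [(0.8 ** (n + 1) - 0.5 ** (n + 1)) / 0.3 for n in range(N)]
assert all(abs(a - b) < 1e-9 for a, b in zip(y, y_cf))
```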
Here X(ω) is periodic with a period of 2π. Let X(ω) be sampled every δω, so that for 0 ≤ ω < 2π we have N samples; then δω = 2π/N. So, for ω = 2πk/N, we get
X(2πk/N) = Σ_{n=−∞}^{∞} x(n) e^{−j2πkn/N},  k = 0, 1, 2, . . ., N−1
Changing the limits of the inner sum (n to n − lN) and interchanging the order of summation, we get
X(2πk/N) = Σ_{n=0}^{N−1} [Σ_{l=−∞}^{∞} x(n − lN)] e^{−j2πkn/N}
A length-L sequence can be recovered from its samples at frequencies ωk = 2πk/N if N ≥ L. So we may write the sampled version of the DTFT as
X(k) = Σ_{n=0}^{N−1} x(n) e^{−j2πkn/N},  k = 0, 1, . . ., N−1 🡪 DFT
We have the CFT as
X(Ω) = ∫_{−∞}^{∞} x(t) e^{−jΩt} dt
Putting t = nT and Ω = kΩ0, and noting that as T → 0 the integration becomes a summation over the samples, the DFT emerges as the discrete counterpart of the CFT. So the N-point DFT provides the exact line spectrum of a periodic sequence with fundamental period N.
Example: x(n) = {1 0 0 1}
X(k) = {2, 1+j, 0, 1−j}
About the point k = N/2, the magnitudes show even symmetry while the phases show odd symmetry.
where,
WN = e-j2π/N 🡪Nth root of unity
In N-point DFT, we need N2 complex multiplications and N(N-1) complex additions.
Defining xN, XN and WN as follows, we can write
So,
where IN is an N×N identity matrix.
With simplified notation, the data are divided into two sequences: the even-numbered data and the odd-numbered data. Let X1(k) be the N/2-point DFT of the even-numbered data and X2(k) that of the odd-numbered data; then
X(k) = X1(k) + W_N^k X2(k)
x3(m) = Σ_{n=0}^{N−1} x1(n) x2((m−n) mod N),  m = 0, 1, 2, . . ., N−1
This expression is termed circular convolution. 🡺 Multiplication of two DFT sequences is equivalent to circular convolution of the two sequences in the time domain.
Example: Using circular convolution, determine the response of an LTI system with
x(n) = {2 1 2 1} and h(n) = {1 2 3 4}. Check your results using the DFT.
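This exercise can be worked both ways in a few lines (my own sketch; a naive DFT is used in place of an FFT). The circular convolution gives {14 16 14 16}, and the DFT route agrees:

```python
import cmath
import math

def circ_conv(x, h):
    """x3(m) = sum_n x(n) h((m-n) mod N)."""
    N = len(x)
    return [sum(x[n] * h[(m - n) % N] for n in range(N)) for m in range(N)]

def dft(x, sign=-1):
    N = len(x)
    return [sum(x[n] * cmath.exp(sign * 2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

x, h = [2, 1, 2, 1], [1, 2, 3, 4]
y_time = circ_conv(x, h)

# DFT check: IDFT( DFT(x) * DFT(h) ) gives the same result
Y = [a * b for a, b in zip(dft(x), dft(h))]
y_freq = [v / 4 for v in dft(Y, sign=+1)]     # inverse DFT for N = 4
assert y_time == [14, 16, 14, 16]
assert all(abs(a - b) < 1e-9 for a, b in zip(y_time, y_freq))
```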
3.7 DFT Applications
Filtering
Filtering of long data sequence: overlap-save method and overlap-add method
Frequency analysis of signals
……
Chapter 6
Filter Design
Filters are signal conditioners. A filter functions by accepting an input signal, blocking pre-specified frequency components, and passing the original signal minus those components to the output. For example, a typical phone line acts as a filter that limits frequencies to a range considerably smaller than the range of frequencies human beings can hear. That is why listening to CD-quality music over the phone is not as pleasing to the ear as listening to it directly.
A digital filter takes a digital input, gives a digital output, and consists of digital
components. In a typical digital filtering application, software running on a digital signal
processor (DSP) reads input samples from an A/D converter, performs the mathematical
manipulations dictated by theory for the required filter type, and outputs the result via a
D/A converter.
An analog filter, by contrast, operates directly on the analog inputs and is built entirely
with analog components, such as resistors, capacitors, and inductors.
There are many filter types, but the most common are low pass, high pass, band pass, and
band stop. A low pass filter allows only low frequency signals (below some specified
cutoff) through to its output, so it can be used to eliminate high frequencies. A low pass
filter is handy, in that regard, for limiting the uppermost range of frequencies in an audio
signal; it's the type of filter that a phone line resembles. A high pass filter does just the
opposite, by rejecting only frequency components below some threshold. An example
high pass application is cutting out the audible 50 Hz AC power "hum", which can be
picked up as noise accompanying almost any signal in our country.
The designer of a cell phone or any other sort of wireless transmitter would typically place
an analog band pass filter in its output RF stage, to ensure that only output signals within
its narrow, government-authorized range of the frequency spectrum are transmitted.
Engineers can use band stop filters, which pass both low and high frequencies, to block a
predefined range of frequencies in the middle.
1. Analog Filter
Filters of continuous-time signals are frequently referred to as analog filters. Let us
consider a distortionless transmission system, the one that passes any signal with no
change in general wave shape and frequency except possible amplification and time delay.
The output for such a system is y(t) = K·x(t − td), where x(t) is the input signal, K is a
constant gain and td is a constant time delay. Taking the Fourier transform,
Y(ω) = K·e^(−jωtd)·X(ω), so the system function is H(ω) = K·e^(−jωtd).
So, the magnitude spectrum should be constant and phase spectrum should be linear
(linear phase) for no or very little distortion. But for a practical system the magnitude
spectrum decreases beyond a certain frequency, and the region over which it remains
nearly constant is known as the bandwidth of the system. Beyond the bandwidth, the
phase spectrum flattens and becomes constant.
Since high-pass, band-pass and band-stop filters can be obtained by a suitable frequency
transformation and combination of a low-pass filter, only the LPF is considered here. For
an R-C LPF, the frequency response is H(ω) = 1/(1 + jωRC), with 3-dB cutoff ωc = 1/RC.
Then,
|H(ω)| = 1/√(1 + (ωRC)²) and θ(ω) = −tan⁻¹(ωRC)
From the filter responses, it is observed that:
● Response deviates markedly from an ideal response
● The higher the order of the filter, the sharper the cut-off and the narrower the transition region
● By suitable choice of coefficients in the frequency response function, a narrower
transition region can be obtained at the expense of peaks (ripple) occurring in the
pass band as well as in the stop band
So, filter design is a compromise (trade-off) between sharp cut-off and distortion.
Chebyshev Filter: For a given order the Chebyshev filter has a higher rate of cut-off than
the corresponding Butterworth filter. However, instead of the gain falling monotonically,
there are ripples in the response. Depending on the form of the filter, the ripples occur in
either the pass band (Type I) or the stop band (Type II). The equation of the form having
pass band ripple is,

|H(ω)|² = 1 / [1 + ε²·Tn²(ω/ωc)]

where n is the order of the filter, Tn is the nth-order Chebyshev polynomial and ε is the
parameter that controls the amount of ripple in the pass band. Within the pass band,
Tn(ω/ωc) oscillates between −1 and +1, so the magnitude ripples between a maximum of 1
and a minimum of 1/√(1 + ε²). The first few Chebyshev polynomials are:
n    Tn(x), x = ω/ωc
0    1
1    x
2    2x² − 1
3    4x³ − 3x
4    8x⁴ − 8x² + 1
5    16x⁵ − 20x³ + 5x
Fig. 1: Frequency responses of some classical analog filters;
(a) Butterworth, (b) Chebyshev Type I (c) Chebyshev type II, (d) Elliptic
2. Digital Filter
A digital filter is a mathematical algorithm, implemented in hardware and/or software, that
operates on a digital input signal to produce a digital output signal in order to achieve a
filtering objective. Application areas include data compression, biomedical signal
processing, speech processing, image processing, data transmission, digital audio,
telephone echo cancellation, and so on.
In practice, it is not feasible to compute the output of IIR filter as the impulse response is
of infinite duration. So the IIR filter equation is expressed in a recursive form as,

y(n) = Σ(k=0..N) ak·x(n−k) − Σ(k=1..M) bk·y(n−k)     (3)

where ak and bk are the coefficients of the filter and (M, N) is the filter order. For an IIR filter,
y(n) is a function of past outputs as well as present and past input samples. If bk = 0, then
IIR filter becomes an FIR one. The alternative representations of the filters in terms of
their z-transforms are:

for FIR,  H(z) = Σ(k=0..N−1) h(k)·z⁻ᵏ

for IIR,  H(z) = [Σ(k=0..N) ak·z⁻ᵏ] / [1 + Σ(k=1..M) bk·z⁻ᵏ]
Example: Given the following two equations which meet identical amplitude and
frequency response specifications:
1.
2.
Filter 1 is IIR and filter 2 is FIR. The block diagram of filter 1 is:
Fig. 2: Block diagram of (a) IIR and (b) FIR filters
So, the IIR filter is more economical. In general, for the same amplitude response specifications,
number of FIR filter coefficients ≈ 6 × the order of the IIR transfer function.
The finite wordlength effects that degrade a digital filter's performance include:
● Coefficient quantization
● Arithmetic round-off errors
● Overflow
Frequencies within a filter's stop band are, by contrast, highly attenuated. The transition
band represents frequencies in the middle, which may receive some attenuation but are not
removed completely from the output signal.
In the following figure, which shows the frequency response of a low pass filter, ωp is the
pass band ending frequency, ωs is the stop band beginning frequency, and As is the amount
of attenuation in the stop band. Frequencies between ωp and ωs fall within the transition
band and are attenuated to some lesser degree.
Given these individual filter parameters, one of numerous filter design software packages
can generate the required signal processing equations and coefficients for implementation
on a DSP. Before we can talk about specific implementations, however, some additional
terms need to be introduced. Ripple is usually specified as a peak-to-peak level in
decibels. It describes how little or how much the filter's amplitude varies within a band.
Smaller amounts of ripple represent more consistent response and are generally preferable.
Transition bandwidth describes how quickly a filter transitions from a pass band to a stop
band, or vice versa. The more rapid this transition, the smaller the transition bandwidth
and the more difficult the filter is to achieve. Though an almost instantaneous transition to
full attenuation is typically desired, real-world filters don't often have such ideal frequency
response curves. There is, however, a tradeoff between ripple and transition bandwidth, so
that decreasing either will only serve to increase the other.
The process of selecting the filter's length and coefficients is called filter design. The goal
is to set those parameters such that certain desired stop band and pass band parameters will
result from running the filter. Most engineers utilize a program such as MATLAB to do
their filter design. But whatever tool is used, the results of the design effort should be the
same:
▪ A frequency response plot, like the one shown in Figure, which verifies that the
filter meets the desired specifications, including ripple and transition bandwidth.
▪ The filter's length and coefficients.
The longer the filter (more taps), the more finely the response can be tuned. With the
length, N, and coefficients, float h[N] = { ... }, decided upon, the implementation of the
FIR filter is fairly straightforward. Listing 1 shows how it could be done in C. Running
this code on a processor with a multiply-and-accumulate instruction (and a compiler that
knows how to use it) is essential to achieving a large number of taps.
y(n) = Σ(k=0..N−1) h(k)·x(n−k)     (2)

where h(k), k = 0, 1, 2, ..., N−1, are the impulse response (coefficients) of the filter, H(z)
is the transfer function of the filter, and N is the filter length.
● FIR filters can have exactly linear phase response
● Very simple to implement
Linear phase response: When a signal passes through a filter, it is modified in amplitude
and/or phase. Assume the signal consists of several frequency components; each
component then experiences a phase delay and a group delay as it passes through the filter.
A filter with nonlinear phase characteristic will cause a phase distortion in the signal. This
is because the frequency components in the signal will each be delayed by an amount not
proportional to the frequency thereby altering their harmonic relationships. A filter is said
to have linear phase response if its phase response satisfies one of the following
relationships:

θ(ω) = −αω     (3)
θ(ω) = β − αω     (4)

where α and β are constants.
Satisfying (3) implies constant group and constant phase delay responses, and the impulse
response must have positive symmetry. So, for an ideal low-pass response with normalized
cutoff frequency fc (ωc = 2πfc), the impulse response is

hD(n) = 2fc·sinc(nωc), with hD(0) = 2fc     [sinc(x) = sin(x)/x]

Since hD(n) = hD(−n), the filter has linear phase response. As n → ∞, hD(n) → 0, so the
filter is not FIR. An obvious solution is to truncate the ideal impulse response by setting
hD(n) = 0 for |n| > M (say). However, this introduces undesirable ripples and overshoots →
the Gibbs phenomenon.
The more coefficients that are retained, the closer the filter spectrum is to the ideal
response. Direct truncation of hD(n) is equivalent to multiplying the ideal impulse response
by a rectangular window of the form,
w(n) = 1, n = 0, 1, ........, M-1
= 0, elsewhere.
A practical approach is to multiply hD(n) by a suitable window function, w(n), whose
duration is finite. In this way the resulting impulse response decays smoothly toward zero.
H(ω) shows that the ripples and overshoots, characteristic of direct truncation, are much
reduced. However, the transition width is wider than for the rectangular case. The
transition width of the filter is determined by the width of the main lobe of the window.
The side lobes produce ripples in both pass and stop bands.
Window              Pass band ripple (dB)   Main lobe relative to side lobe (dB)   Stop band attenuation (dB)
Rectangular         0.7416                  13                                     21
Hanning             0.0546                  31                                     44
Hamming             0.0194                  41                                     53
Blackman            0.0017                  57                                     74
Kaiser (β = 4.54)   0.0274                  -                                      50
Kaiser (β = 6.76)   0.00275                 -                                      70
Kaiser (β = 8.96)   0.000275                -                                      90 *

* The Kaiser window uses I0(x), the zero-order modified Bessel function of the first kind.
When β = 0, the Kaiser window reduces to the rectangular window. β is determined by the
stop band attenuation requirement.
Example: Obtain the coefficients of an FIR LPF to meet the following specifications.
Fp = 1.5 kHz ΔF = 0.5 kHz SB attenuation: > 50 dB Fs= 8 kHz
Solution: For stop band attenuation requirement, Hamming, Blackman or Kaiser window
can be used. Consider Hamming window for simplicity.
Δf = 0.5/8 = 0.0625 = 3.3/N → N = 52.8 ≈ 53
The filter coefficients are h(n) = hD(n)·w(n), −26 ≤ n ≤ 26
and
Since h(n) is symmetrical, compute h(0), h(1),......., h(26) and then use the fact that h(n) =
h(N-1-n) to compute h(27), h(28), . . . , h(52). The calculated values are provided below:
n     w(n)      hD(n)      h(n) = hD(n)·w(n)   52 − n
7     0.85049   0.04201    0.03573             45
8     0.80817   0          0                   44
9     0.76208   -0.03268   -0.0249             43
10    0.71288   -0.02251   -0.01605            42
11    0.66125   0.01107    0.00732             41
12    0.60792   0.02653    0.01613             40
13    0.55363   0.00937    0.00519             39
14    0.49915   -0.01608   -0.00803            38
15    0.44525   -0.01961   -0.00873            37
16    0.39268   0          0                   36
17    0.34217   0.0173     0.00592             35
18    0.29444   0.0125     0.00368             34
19    0.25016   -0.00641   -0.0016             33
20    0.20995   -0.01592   -0.00334            32
21    0.17437   -0.0058    -0.00101            31
22    0.14392   0.01023    0.00147             30
23    0.11903   0.01279    0.00152             29
24    0.10006   0          0                   28
25    0.08725   -0.01176   -0.00103            27
26    0.08081   -0.00866   -0.0007             26
7.2 Optimal Method
This method is very powerful and flexible and also easy to apply. The optimal method is
based on the concept of equiripple pass band and stop band.
where H(k), k = 0, 1, 2, ..., N−1, are the samples of the ideal or target frequency
response. For linear phase filters,
Basic equations:
The strength of IIR filters comes from the flexibility the feedback arrangement provides.
Fewer coefficients than FIR are needed for same set of specifications. IIR filters are used
when sharp cut-off and high throughput are the important requirements. The IIR filter can
become unstable if adequate care is not taken in design. For stability, all its poles must lie
inside the unit circle.
When a zero is placed at a given point on the z-plane, the frequency response will be zero
at the corresponding point. A pole, on the other hand, produces a peak at the
corresponding frequency point. Fig. 7(a) shows the pole-zero diagram of a simple filter,
whereas Fig. 7(b) is a sketch of its frequency response.
Fig. 7: (a) Pole-zero diagram of a simple filter, (b) Sketch of frequency response
Poles close to the unit circle give rise to large peaks, zeros close to or on the circle produce
troughs or minima. For the coefficients to be real, the poles and zeros must either be real
or occur in complex conjugate pairs.
Example: BPF, complete signal rejection at dc and 250 Hz, narrow pass band centered at
125 Hz, 3 dB bandwidth of 10 Hz, 500 Hz sampling frequency.
Solution: Place the zeros first. Since there is complete rejection at 0 and 250 Hz, the zeros
are placed at 0° and 360° × 250/500 = 180°. To center the pass band at 125 Hz, the poles
should be placed at ±360° × 125/500 = ±90°. For real coefficients the poles must occur as
a complex conjugate pair.
The radius r of the poles is determined by the desired bandwidth. For r > 0.9, a good
approximation is
r ≅ 1 − (BW/Fs)π = 1 − (10/500)π = 0.937
For complete rejection, the radius of the zeros is 1. So,
H(z) = (1 − z⁻²)/(1 + r²·z⁻²) = (1 − z⁻²)/(1 + 0.878·z⁻²)
Taking the inverse Laplace transform (ILT) gives the analog impulse response ha(t).
Sampling ha(t) periodically at t = nT,
So,
So, we have,
If we put z = e^(jωT), then H(z) at ω = 0 is 1223. To keep this high gain down and to avoid
overflow, it is common practice to multiply H(z) by T (= 1/Fs). Another way to remove
the effect of the sampling frequency is to use T = 1 and α = 2π × 150/1280 = 0.74. When
H(z) is multiplied by T,
(1)
Using trapezoidal formula of approximation,
(2)
(3)
at t = nT, (1) yields,
(4)
Putting (4) into (3) and defining x(n) = x(nT) and y(n) = y(nT),
s = (2/T)·(1 − z⁻¹)/(1 + z⁻¹) → Bilinear transformation
Since z = r·e^(jω) and s = σ + jΩ,
σ = (2/T)·(r² − 1)/(1 + 2r·cos ω + r²) and Ω = (2/T)·(2r·sin ω)/(1 + 2r·cos ω + r²)
If r < 1, σ < 0 and if r > 1, σ > 0. For the limiting case (r = 1), we have σ = 0 and
Ω = (2/T)·tan(ω/2).
Example: Design a single-pole low-pass digital filter with a 3-dB bandwidth of 0.2π using
the bilinear transformation.
For analog filter,
The time-domain difference equation that gives this response can be deduced as follows:
Note: The magnitude response doesn't look much like that of the ideal low-pass
filter (M is not constant over any finite range).
2. One zero at the origin and one negative real pole makes a high pass filter.
Similarly for the case p=-0.9 we see from the diagram that a high pass filter results.
The unit impulse response is
The DE giving the same filtering effect as H(z) is:
3. A pair of zeros at the origin and a complex conjugate pair of poles makes a band
pass filter.
and is a minimum at .
The unit impulse response h(n) is found by inverting the z-transform.
Invert the z-transform:
The DE which would give the same band pass filter response is:
We see from the diagram that the magnitude response stays close to unity over most of the
band and goes to zero at the frequencies of the zeros. By having the poles close to the
zeros, the pole and zero vector lengths nearly cancel, so their ratios don't vary much with
frequency except near the zeros.
The unit impulse response: