
Digital Signal Processing

Dr. Md. Aynal Haque


Department of Electrical and Electronic Engineering
Bangladesh University of Engineering and Technology
Dhaka 1000, Bangladesh

2014
Chapter 1
INTRODUCTION
1. Signal
A signal is any variable that carries or contains some kind of information that can be
conveyed, displayed or manipulated. It is a function of independent variables such as time,
distance, position, temperature, pressure, etc. Mathematically, we describe a signal as a
function of one or more independent variables. For example, the functions

s1(t) = 5t and s2(t) = 20t^2

describe two signals, one that varies linearly with the independent variable t (time) and a
second that varies quadratically with t. Another function,

s(x, y) = 3x + 2xy + 10y^2

describes a signal of two independent variables x and y that could represent the two
spatial coordinates in a plane. Speech, music, telegraph, electrocardiogram (ECG) and
electroencephalogram (EEG) are examples of information-bearing signals that are
functions of one independent variable, namely, time. An image is an example of a signal
that is a function of two independent variables (the spatial coordinates).

1.1 Classification of Signals


Signals can be classified in many ways.
● Generating source:
▪ Single channel / scalar (1 source)

▪ Multi channel / vector (multiple sources)


● Dimension (number of independent variables): 1-D, 2-D, 3-D, etc.
● Continuity of dependent (signal) and independent (time) variables:
▪ Continuous time (CT) or Analog signal: a continuous-time signal
with continuous amplitude.
▪ Discrete time (DT) or Sampled signal: discrete-time signal with
continuous-valued amplitude.
▪ Discrete or Quantized signal: discrete-time signal with
discrete-valued amplitude.
▪ Digital signal: a discrete-time signal with discrete-valued
amplitudes represented by a finite number of digits.
● Certainty of description:
▪ Deterministic signal: can be uniquely determined by a well-defined
process such as mathematical expression or rule, or table look-up,
etc. Deterministic signal can be linear or nonlinear.
▪ Random signal: is generated in a random fashion and cannot be
predicted ahead of time.
● Signals may be real, imaginary or complex.

2. System
A system may be defined as a device that performs an operation on a signal. For example,
a filter is a system: it is used to reduce noise and interference corrupting a desired
information-bearing signal. The operations performed by a system usually can be specified
mathematically. The method or set of rules for implementing the system by a program that
performs the corresponding mathematical operations is called an algorithm. An
algorithm can be implemented in software and/or hardware.

2.1 System Properties


Memory: A system has memory if its output at time t0, y(t0), depends on input values
other than x(t0). Otherwise, the system is memoryless or static. A system with memory is
also called a dynamic system. Ex: the voltage of a capacitor,

v(t0) = (1/C) ∫_{-∞}^{t0} i(t) dt

so the charge q(t0), and hence the voltage, is determined by the current i(t) for all t ≤ t0.

Invertibility: A system is said to be invertible if distinct inputs result in distinct outputs,
so that the input can be determined from the output.

y(t) = x^2(t) → not invertible (the sign of x(t) is lost)

Inverse: The inverse of a system (denoted by T) is a second system (denoted by Ti) that,
when cascaded with the system T, yields the identity system.
x(t) →T → Ti → z(t) = x(t)

Causality: A system is causal if the output at any time t0 depends on the input only for
t ≤ t0. y(t) = x(t-2) is causal, but y(t) = x(t+2) is noncausal.

Stability: A system is stable if the output remains bounded for any bounded input; this
type of stability is known as BIBO stability and is the one used most. If |x(t)| ≤ M < ∞ for all t, then
for BIBO stability there must exist R such that |y(t)| ≤ R < ∞ for all t. y(t) = x^2(t) is stable,
whereas, for example, the running integral y(t) = ∫_{-∞}^{t} x(τ) dτ is unstable.

Time Invariance: A system is said to be time invariant if a time shift in the input signal
results only in the same time shift in the output signal.
If x(t) → y(t), then time invariance requires x(t-t0) → y(t-t0):

x(t) → system → delay t0 → y(t-t0)
x(t) → delay t0 → system → yd(t-t0)

and the system is time invariant if yd(t-t0) = y(t-t0).

Example: y(t) = sin x(t): the response to x(t-t0) is sin[x(t-t0)] = y(t-t0) → TI system

y(t) = e^{-t} x(t): the response to x(t-t0) is e^{-t} x(t-t0) ≠ e^{-(t-t0)} x(t-t0) → time-varying system

Linearity: A system is linear if it meets the criteria of additivity and homogeneity.


Principle of superposition is applicable in linear system.
Additivity : if x1(t) → y1(t) and x2(t) → y2(t), then x1(t)+ x2(t) → y1(t)+ y2(t)
Homogeneity : ax1(t) → ay1(t)
Superposition : a1x1(t) + a2x2(t) → a1y1(t) +a2y2(t)
y(t) = kx(t) is linear; y(t) = x^2(t) is nonlinear.

Example: y(t) = sin(2t) x(t)

● The system is memoryless: the output is a function of the input at only the present time.
● Not invertible, since y(π) = 0 regardless of x(π); the system has no inverse.
● The system is causal, since the output does not depend on the input at a future time.
● The system is stable, since the output is bounded for all bounded inputs:
if |x(t)| ≤ M, then |y(t)| ≤ M also.
● The system is time-varying:
yd(t) = sin(2t) x(t-t0) and y(t-t0) = sin(2(t-t0)) x(t-t0) are not the same.
● The system is linear, since a1 x1(t) + a2 x2(t) → sin(2t) [a1 x1(t) + a2 x2(t)]
= a1 sin(2t) x1(t) + a2 sin(2t) x2(t) = a1 y1(t) + a2 y2(t)

3. Signal Processing
Any operation on a signal is termed signal processing. Signal processing is concerned with the
mathematical representation of the signal and the algorithmic operations carried out on it to
extract the information present. Digital signal processing (DSP) is concerned with the
digital representation of signals and the use of digital processors to analyze, modify and/or
extract information from them.

3.1 Advantages of DSP


● Guaranteed accuracy.
● Perfect reproducibility
● Use of more reliable, smaller, low-cost, low-power, high-speed ICs
using CMOS technology
● No drift of performance with temperature or age
● Greater flexibility
● Superior performance

3.2 Limitations of DSP


● Speed and cost
● Design time
● Finite word length problems
But these limitations are continually being diminished with the advent of new
technologies.

3.3 Application areas of DSP


● Image processing: pattern recognition, robotic vision, image enhancement,
facsimile, satellite weather map, animation
● Instrumentation and control: spectrum analysis, position and rate control, noise
reduction, data compression
● Speech / audio: speech recognition, speech synthesis, text to speech, digital audio,
equalization
● Military: secure communication, radar processing, sonar processing, missile
guidance
● Telecommunications: echo cancellation, adaptive equalization, modulation,
spread spectrum, video conferencing, data communication
● Medical: patient monitoring, scanners, EEG mapping, ECG analysis, X-ray
storage / enhancement
4. Basic Elements of DSP

Fig. 1: Analog signal processing

Fig. 2: Digital signal processing

Typical Signal Processing Operations


1. Elementary time-domain operations
● Scaling (amplification/attenuation): multiplication by a +ve or –ve constant
● Delay : y(t) = x(t-t0)
● Addition : y(t) = x1(t)+x2(t)-x3(t)+.......
● Integration
● Differentiation
2. Filtering: In addition to basic filters (LPF, HPF, BPF, BSF) the following are used
● Notch filter: band stop filter designed to block a single frequency.
● Multiband filter: has more than one pass band and more than one stop band.
● Comb filter: designed to block frequencies that are integral multiple of a
certain low frequency.
3. Generation of signal (prediction)
4. Modulation and demodulation
5. Multiplexing and demultiplexing

Chapter 2
DIGITAL SIGNAL

1. Digital Signal
A discrete-time signal having a set of discrete values represented by some symbols (code)
is called a digital signal.

Fig. 1: Analog signal

Fig. 2: Discrete-time signal

Fig. 3: Discrete signal

2. A/D Conversion
We can obtain a digital signal from an analog one by performing analog-to-digital (A/D)
conversion. Most signals of practical interest, such as speech, biological signals, seismic
signals, radar signals, sonar signals, and various communications signals such as audio
and video signals, are analog. To process analog signals by digital means, it is first
necessary to convert them into digital form, that is, to convert them to a sequence of
numbers having finite precision. This procedure is called analog-to-digital (A/D)
conversion, and the corresponding devices are called A/D converters (ADCs).
Conceptually, we view A/D conversion as a three-step process. This process is illustrated in
Fig. 4.

Fig. 4: Basic parts of an analog-to-digital (A/D) converter

Sampling: This is the conversion of a continuous-time signal into a discrete-time signal
obtained by taking "samples" of the continuous-time signal at discrete-time instants. Thus,
if x_a(t) is the input to the sampler, the output is x_a(nT) ≡ x(n), where T is called the
sampling interval.
Quantization: This is the conversion of a discrete-time, continuous-valued signal into a
discrete-time, discrete-valued (digital) signal. The value of each signal sample is
represented by a value selected from a finite set of possible values. The difference between
the unquantized sample x(n) and the quantized output x_q(n) is called the quantization
error.
Coding: In the coding process, each discrete value x_q(n) is normally represented by a
fixed-length b-bit binary sequence.

2.1 Sampling of Analog Signals


There are many ways to sample an analog signal. We limit our discussion to periodic or
uniform sampling, which is the type of sampling used most often in practice. This is
described by the relation

x(n) = x_a(nT), -∞ < n < ∞

where x(n) is the discrete-time signal obtained by "taking samples" of the analog signal
x_a(t) every T seconds. This procedure is illustrated in Fig. 5. The time interval T between
successive samples is called the sampling period or sample interval, and its reciprocal
F_s = 1/T is called the sampling rate (samples per second) or the sampling frequency
(hertz).

Fig. 5: Periodic sampling of analog signal


Table 1: Relation among frequency variables

Continuous-time signals: Ω (radians/sec), F (cycles/sec, Hz), with -∞ < Ω < ∞ and -∞ < F < ∞
Discrete-time signals: ω (radians/sample), f (cycles/sample), with -π ≤ ω ≤ π and -1/2 ≤ f ≤ 1/2
Relations: ω = ΩT, f = F/F_s

From these relations we observe that the fundamental difference between continuous-time
and discrete-time signals is in their range of values of the frequency variables F and f, or
Ω and ω. Periodic sampling of a continuous-time signal implies a mapping of the infinite
frequency range for the variable F (or Ω) into a finite frequency range for the variable f
(or ω). Since the highest frequency in a discrete-time signal is ω = π or f = 1/2, it follows
that, with a sampling rate F_s, the corresponding highest values of F (or Ω) are

F_max = F_s/2 = 1/(2T), Ω_max = πF_s = π/T

Therefore, sampling introduces an ambiguity, since the highest frequency in a
continuous-time signal that can be uniquely distinguished when such a signal is sampled at
a rate F_s is F_max = F_s/2 (or Ω_max = πF_s). To see what happens to frequencies above F_s/2,
let us consider the following example.

Example:
The implications of these frequency relations can be fully appreciated by considering the
two analog sinusoidal signals

x1(t) = cos 2π(10)t, x2(t) = cos 2π(50)t

which are sampled at a rate F_s = 40 Hz. The corresponding discrete-time signals or
sequences are

x1(n) = cos 2π(10/40)n = cos(πn/2)
x2(n) = cos 2π(50/40)n = cos(5πn/2)

Hence, x2(n) = cos(πn/2 + 2πn) = cos(πn/2) = x1(n). Thus, the sinusoidal signals are identical and, consequently,
indistinguishable. If we are given the sampled values generated by cos(πn/2), there is
some ambiguity as to whether these sampled values correspond to x1(t) or x2(t). Since
x2(t) yields exactly the same values as x1(t) when the two are sampled at F_s = 40
samples per second, we say that the frequency F2 = 50 Hz is an alias of the frequency
F1 = 10 Hz at the sampling rate of 40 samples per second. It is important to note that F2 is
not the only alias of F1. In fact, at the sampling rate of 40 samples per second, the
frequency 90 Hz is also an alias of F1, as is the frequency 130 Hz, and so on.
All the sinusoids cos 2π(F1 + 40k)t, k = 1, 2, 3, 4, ..., sampled at 40 samples per second,
yield identical values. Consequently, they are all aliases of F1 = 10 Hz.
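
The aliasing in this example is easy to verify numerically. The following Python sketch
(added for illustration, using the 10 Hz, 50 Hz and F_s = 40 Hz values from the example)
shows that the two sampled sinusoids are identical:

```python
import numpy as np

# x1(t) = cos(2*pi*10*t) and x2(t) = cos(2*pi*50*t) sampled at Fs = 40 Hz
Fs = 40.0
n = np.arange(8)                       # a few sample indices
x1 = np.cos(2 * np.pi * 10 * n / Fs)   # x1(n) = cos(pi*n/2)
x2 = np.cos(2 * np.pi * 50 * n / Fs)   # x2(n) = cos(5*pi*n/2) = cos(pi*n/2)
print(np.allclose(x1, x2))             # True: 50 Hz aliases to 10 Hz at 40 Hz
```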

7
The sampling theorem: If the highest frequency contained in an analog signal x_a(t) is
F_max = B, then the signal should be sampled at a rate F_s > 2F_max = 2B. The sampling rate
F_N = 2F_max = 2B is called the Nyquist rate.
Example: Consider the analog signal

x_a(t) = 3 cos 2000πt + 5 sin 6000πt + 10 cos 12000πt

(a) What is the Nyquist rate for this signal?

(b) Assume now that we sample this signal using a sampling rate F_s = 5000 samples/s.
What is the discrete-time signal obtained after sampling?
(c) What is the analog signal y_a(t) we can reconstruct from the samples if we use ideal
interpolation?
Answer: (a) 12 kHz (b) x(n) = 13 cos(2πn/5) - 5 sin(4πn/5)
(c) y_a(t) = 13 cos 2000πt - 5 sin 4000πt

2.2 Quantization of Continuous-amplitude Signals


As we have seen, a digital signal is a sequence of numbers (samples) in which each
number is represented by a finite number of digits (finite precision). The process of
converting a discrete-time, continuous-amplitude signal into a digital signal by expressing
each sample value as a finite (instead of an infinite) number of digits is called
quantization. The error introduced in representing the continuous-valued signal by a finite
set of discrete value levels is called quantization error or quantization noise. We denote
the quantizer operation on the samples x(n) as Q[x(n)] and let x_q(n) denote the sequence
of quantized samples at the output of the quantizer. Hence, x_q(n) = Q[x(n)]. Then the
quantization error is a sequence e_q(n) defined as the difference between the quantized
value and the actual sample value. Thus, e_q(n) = x_q(n) - x(n).
We illustrate the quantization process with an example. Let us consider the discrete-time
signal

x(n) = (0.9)^n for n ≥ 0, and x(n) = 0 otherwise,

obtained by sampling the analog exponential signal x_a(t) = (0.9)^t, t ≥ 0, with a sampling
frequency F_s = 1 Hz (see Fig. 6). Observation of Table 2, which shows the values of the
first 10 samples of x(n), reveals that the description of the sample value x(n) requires n
significant digits. It is obvious that this signal cannot be processed by using a calculator
or a digital computer, since only the first few samples can be stored and manipulated. For
example, most calculators process numbers with only ten significant digits.

Fig. 6: Illustration of quantization
Table 2: Quantization with one significant digit using truncation or rounding

n   x(n)          x_q(n) (truncation)   x_q(n) (rounding)   e_q(n) (rounding)
0   1             1.0                   1.0                  0.0
1   0.9           0.9                   0.9                  0.0
2   0.81          0.8                   0.8                 -0.01
3   0.729         0.7                   0.7                 -0.029
4   0.6561        0.6                   0.7                  0.0439
5   0.59049       0.5                   0.6                  0.00951
6   0.531441      0.5                   0.5                 -0.031441
7   0.4782969     0.4                   0.5                  0.0217031
8   0.43046721    0.4                   0.4                 -0.03046721
9   0.387420489   0.3                   0.4                  0.012579511

However, let us assume that we want to use only one significant digit. To eliminate the
excess digits, we can either simply discard them (truncation) or discard them by rounding
the resulting number (rounding). The resulting quantized signal x_q(n) is shown in
Table 2. We discuss only quantization by rounding, although it is just as easy to treat
truncation. The values allowed in the digital signal are called the quantization levels,
whereas the distance Δ between two successive quantization levels is called the
quantization step size or resolution. The rounding quantizer assigns each sample of x(n)
to the nearest quantization level. In contrast, a quantizer that performs truncation would
have assigned each sample of x(n) to the quantization level below it. The quantization
error e_q(n) in rounding is limited to the range of -Δ/2 to Δ/2, that is,

-Δ/2 ≤ e_q(n) ≤ Δ/2

In other words, the instantaneous quantization error cannot exceed half of the quantization
step (see Table 2). If x_min and x_max represent the minimum and maximum values of x(n)
and L is the number of quantization levels, then

Δ = (x_max - x_min) / (L - 1)

We define the dynamic range of the signal as x_max - x_min. In our example we have
x_max = 1, x_min = 0, and L = 11, which leads to Δ = 0.1. Note that if the dynamic range is fixed,
increasing the number of quantization levels results in a decrease of the quantization step
size. Thus the quantization error decreases and the accuracy of the quantizer increases.
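
As a check on Table 2, the following Python sketch (added here; it uses the step size
Δ = 0.1 derived in this section) reproduces the truncated and rounded values and the
rounding error:

```python
import numpy as np

# Quantize x(n) = (0.9)^n to one significant digit (step size delta = 0.1)
n = np.arange(10)
x = 0.9 ** n
delta = 0.1
x_trunc = np.floor(x / delta) * delta   # truncation: level below the sample
x_round = np.round(x / delta) * delta   # rounding: nearest level
e_q = x_round - x                       # quantization error, |e_q| <= delta/2
for k in n:
    print(f"{k}: x={x[k]:.9f}  trunc={x_trunc[k]:.1f}  round={x_round[k]:.1f}  e={e_q[k]:+.9f}")
```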

2.3 Quantization of Sinusoidal Signals
Fig. 7 illustrates the sampling and quantization of an analog sinusoidal signal
x_a(t) = A cos Ω0t using a rectangular grid. Horizontal lines within the range of the
quantizer indicate the allowed levels of quantization. Vertical lines indicate the sampling
times. Thus, from the original analog signal x_a(t) we obtain a discrete-time signal
x(n) = x_a(nT) by sampling and a discrete-time, discrete-amplitude signal x_q(n) after
quantization. If the sampling rate F_s satisfies the sampling theorem, quantization is the
only error in the A/D conversion process. Thus, we can evaluate the quantization error by
quantizing the analog signal x_a(t) instead of the discrete-time signal x(n).
Inspection of Fig. 7 indicates that the signal x_a(t) is almost linear between quantization
levels (see Fig. 8). The corresponding quantization error e_q(t) = x_a(t) - x_q(t) is shown in
Fig. 8. In Fig. 8, τ denotes the time that x_a(t) stays within the quantization levels. Let
us assume that both the signal and noise are voltages and that they pass through a
resistor of 1 ohm. The mean-square error power is

P_q = (1/2τ) ∫_{-τ}^{τ} e_q^2(t) dt

Since e_q(t) = (Δ/2τ)t for -τ ≤ t ≤ τ, we have

P_q = Δ^2/12

If the quantizer has b bits of accuracy and the quantizer covers the entire range 2A, the
quantization step is Δ = 2A/2^b. Hence,

P_q = A^2 / (3 · 2^{2b})

The average power of the signal x_a(t) is

P_x = (1/T_p) ∫_0^{T_p} (A cos Ω0t)^2 dt = A^2/2

The quality of the output of the A/D converter is usually measured by the
signal-to-quantization noise ratio (SQNR), which provides the ratio of the signal power to
the noise power:

SQNR = P_x/P_q = (3/2) · 2^{2b}

Expressed in decibels (dB), the SQNR is

SQNR(dB) = 10 log10 SQNR = 1.76 + 6.02b

This implies that the SQNR increases approximately 6 dB for every bit added to the word
length, i.e., for each doubling of the number of quantization levels.

Fig. 7: Sampling and quantization of a sinusoidal signal

Fig. 8: The quantization error

Although this formula was derived for sinusoidal signals, a similar result holds for every signal
whose dynamic range spans the range of the quantizer. This relationship is extremely
important because it dictates the number of bits required by a specific application to assure
a given signal-to-noise ratio. For example, most compact disc players use a sampling
frequency of 44.1 kHz and 16-bit sample resolution, which implies an SQNR of more than
96 dB.
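
The 6-dB-per-bit rule is easy to confirm numerically. The sketch below (mine, not from
the text; the 5 Hz test tone and b = 16 are arbitrary choices) quantizes a full-scale
sinusoid and compares the measured SQNR with 1.76 + 6.02b:

```python
import numpy as np

A, b = 1.0, 16
t = np.linspace(0, 1, 1_000_000, endpoint=False)
x = A * np.sin(2 * np.pi * 5 * t)          # full-scale sinusoid
delta = 2 * A / 2**b                       # step covering the range 2A
xq = np.round(x / delta) * delta           # rounding quantizer
sqnr = 10 * np.log10(np.mean(x**2) / np.mean((xq - x)**2))
print(sqnr, 1.76 + 6.02 * b)               # both approximately 98 dB
```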

2.4 Encoding of quantized samples


The coding process in an A/D converter assigns a unique binary number to each
quantization level. If we have L levels, we need at least L different binary numbers. With a
word length of b bits, we can create 2^b different binary numbers. Hence, we need
2^b ≥ L, or b ≥ log2 L. Thus, the number of bits required in the coder is the
smallest integer greater than or equal to log2 L. In our example it can easily be seen that
we need a coder with b = 4 bits (since 2^4 = 16 ≥ 11). Commercially available A/D converters may be obtained
with a finite precision of b = 16 or less. Generally, the higher the sampling speed and the
finer the quantization, the more expensive the device becomes.

Representation of Discrete Signal: x(n) = {…………………}

3. Digital-to-Analog Conversion
To convert a digital signal into an analog signal we can use a digital-to-analog (D/A)
converter. The task of a D/A converter is to interpolate between samples.

Chapter 3
DT SIGNAL ANALYSIS
1. DT System
A DT system operates on DT signals: y(n) = T[x(n)], where T denotes a
transformation / operation / processing.
Some operations on DT signals:
Adder: y(n) = x1(n) + x2(n)

Constant multiplier: y(n) = a x(n)

Signal multiplier: y(n) = x1(n) x2(n)

Unit delay: y(n) = x(n-1)

Unit advance: y(n) = x(n+1)

Example: y(n) = 0.25 y(n-1) + 0.5 x(n) + x(n-2) + …

2. Analysis of LTI Systems

The input-output relation of a generalized LTI system can be written as

y(n) = -Σ_{k=1}^{N} a_k y(n-k) + Σ_{k=0}^{M} b_k x(n-k)

where a_k, b_k are system parameters and M, N are the system orders. The system is also
termed an autoregressive moving average (ARMA) system. It has two subclasses:
● If b_k = 0 for k ≥ 1 (only the b_0 term remains), the system is termed an autoregressive (AR) system
● If a_k = 0 for all k, the system is termed a moving average (MA) system
Two methods are used to analyze LTI systems:
● Direct solution of the I/O relation:
y(n) = F[y(n-1), y(n-2), ..., y(n-N), x(n), x(n-1), ..., x(n-M)]
● Impulse resolution method
We will use the second method first.

2.1 Impulse Resolution Method


In this method, the following steps are followed:
● Resolve or decompose the input signal into a sum of elementary signals. The
elementary signals are selected in such a way that the response of the system to
each elementary signal is easily determined.
● Find the response of the system to the elementary inputs.
● Judiciously sum all the responses to find the total response.

The most used method is resolving the input into impulses:

x(n) = Σ_{k=-∞}^{∞} x(k) δ(n-k)

Example: Resolve a given x(n) into impulses.

Since x(n) can be resolved into a weighted sum of impulses, we can find the response of the
system to the impulse function as h(n) = T[δ(n)].
Then, using the superposition summation,

y(n) = Σ_{k=-∞}^{∞} x(k) h(n-k)

The final expression is termed the convolution summation or simply convolution. It says
that the response of an LTI system is the convolution between the excitation x(n) and the impulse
response h(n). Mathematically, y(n) = x(n) * h(n). The convolution involves the operations of
Folding: Find h(-k) from h(k)
Shifting: Find h(n-k) from h(-k); right shift for +ve n and left shift for -ve n
Multiplication and summation

Example: Find the convolution between


h(n) = {1 2 1 -1} and x(n) = {1 2 3 1} (Ex. 2.3.2 P 77)

Answer: y(n) = {... 1 4 8 8 3 -2 -1...}
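
The result can be verified with NumPy (a quick check, not part of the original example):

```python
import numpy as np

h = np.array([1, 2, 1, -1])
x = np.array([1, 2, 3, 1])
print(np.convolve(x, h))   # [ 1  4  8  8  3 -2 -1]
```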

2.1.1 Convolution Properties


Commutative 🡪 x(n) * h(n) = h(n) * x(n)
Associative 🡪 [x(n) * h1(n)] * h2(n) = x(n) * [h1(n) * h2(n)]

Distributive 🡪 x(n) * [h1(n)+ h2(n)] = x(n) * h1(n)+ x(n) * h2(n)

2.2 Recursive Solution


For a practical causal sequence,

y(n) = Σ_{k=0}^{N} h(k) x(n-k)

If N is finite, the system is termed a finite impulse response (FIR) system; otherwise the
system is an infinite impulse response (IIR) system. The FIR system response can be calculated
by convolution. However, the IIR system must first be represented by a recursive
equation to solve for the response. Let us take the example of cumulative summation,

y(n) = Σ_{k=0}^{n} x(k)

So, to compute y(n), we need to store all of x(n). Alternatively, we can express y(n) as

y(n) = y(n-1) + x(n) ← Recursive system

Generally, a first-order recursive equation is written as

y(n) = a y(n-1) + x(n)

Putting n = 0, 1, 2, …
y(0) = a y(-1) + x(0)
y(1) = a^2 y(-1) + a x(0) + x(1)
y(2) = a^3 y(-1) + a^2 x(0) + a x(1) + x(2)
y(n) = a^{n+1} y(-1) + a^n x(0) + a^{n-1} x(1) + ….. + x(n)

y(-1) is called the initial condition. If the system is initially relaxed at n = 0, then y(-1) = 0
and hence the response will depend on x(n) only. This output is termed the zero-state
response or forced response,

y_zs(n) = Σ_{k=0}^{n} a^k x(n-k), for all n ≥ 0

This is the convolution sum of a causal sequence with h(n) = a^n u(n). On the other hand, if the system is initially
non-relaxed and x(n) = 0 for all n, the output is called the zero-input response or natural
response, y_zi(n) = a^{n+1} y(-1). The complete response is the sum of the two.
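
A minimal Python sketch of this decomposition (added for illustration; a = 0.5, the input
and the initial condition y(-1) = 2 are assumed values):

```python
a = 0.5
x = [1.0, 2.0, 3.0, 4.0]

def recurse(a, x, y_init):
    """Run y(n) = a*y(n-1) + x(n) with initial condition y(-1) = y_init."""
    y, y_prev = [], y_init
    for xn in x:
        y_prev = a * y_prev + xn
        y.append(y_prev)
    return y

y_zs = recurse(a, x, 0.0)                # zero-state (forced) response
y_zi = recurse(a, [0.0] * len(x), 2.0)   # zero-input (natural) response
y_total = recurse(a, x, 2.0)             # complete response
# complete response = zero-input response + zero-state response
print(all(abs(t - (zi + zs)) < 1e-12
          for t, zi, zs in zip(y_total, y_zi, y_zs)))   # True
```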

Example: Find the homogeneous solution of the difference equation

y(n) - 3y(n-1) - 4y(n-2) = x(n) + x(n-1)

For the homogeneous solution, all the terms containing x(n) are set to 0; hence let
y(n) = λ^n, which gives λ^2 - 3λ - 4 = 0, i.e., (λ - 4)(λ + 1) = 0.

Answer: y_h(n) = C1 (4)^n + C2 (-1)^n

2.3 Stability of LTI System


BIBO Stability: y(n) is bounded for every bounded x(n). If x(n) is bounded, there exists a
constant M_x such that |x(n)| ≤ M_x < ∞ for all n. Similarly, for the output we require
|y(n)| ≤ M_y < ∞ for all n. We have

y(n) = Σ_{k=-∞}^{∞} h(k) x(n-k)

Taking the absolute value,

|y(n)| = |Σ_{k=-∞}^{∞} h(k) x(n-k)|

Now, we know that the absolute value of a sum of terms is always less than or
equal to the sum of the absolute values of the terms. Hence,

|y(n)| ≤ Σ_{k=-∞}^{∞} |h(k)| |x(n-k)|

But for bounded x(n), |x(n-k)| ≤ M_x. So,

|y(n)| ≤ M_x Σ_{k=-∞}^{∞} |h(k)|

So, y(n) is bounded provided that the sum of the absolute values of the impulse response is finite, i.e.,

Σ_{k=-∞}^{∞} |h(k)| < ∞

Thus, an LTI system is stable if its impulse response is absolutely summable.

Example: Determine the range of values of a and b for which an LTI system with impulse
response h(n) = an u(n) + bn u(-n-1) is stable.

Answer: Stable if both |a| < 1 and |b| > 1.
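
The condition can be illustrated numerically (my sketch; the values a = 0.9 and b = 1.5
or b = 0.9 are arbitrary test cases):

```python
import numpy as np

def abs_sum(a, b, terms=500):
    """Partial sum of |h(n)| for h(n) = a^n u(n) + b^n u(-n-1)."""
    n_pos = np.arange(terms)           # n >= 0 part: |a|^n
    n_neg = np.arange(1, terms + 1)    # n <= -1 part: |b|^(-n) = |1/b|^n
    return np.sum(np.abs(a) ** n_pos) + np.sum(np.abs(1 / b) ** n_neg)

print(abs_sum(0.9, 1.5))   # converges: 1/(1-0.9) + (2/3)/(1-2/3) = 12.0
print(abs_sum(0.9, 0.9))   # |b| < 1: the anti-causal partial sum blows up
```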

3. LTI System Realization Structure


We describe the structures for the realization of systems described by linear
constant-coefficient difference equations. Let us consider the first-order system

y(n) = -a1 y(n-1) + b0 x(n) + b1 x(n-1)

The system can be realized as shown in Fig. 1 (a). This realization uses separate delays
(memory) for both the input and output signal samples, and is called the direct
form I structure. The system can be viewed as two LTI systems in cascade. The first one
is a non-recursive system described by

v(n) = b0 x(n) + b1 x(n-1)

whereas the second is a recursive system described by

y(n) = -a1 y(n-1) + v(n)

For LTI systems, if we interchange the order of cascading, the system response remains the
same. Interchanging the order, non-recursive/recursive to recursive/non-recursive, we
obtain an alternate structure as shown in Fig. 1 (b), where the difference equations are:

w(n) = -a1 w(n-1) + x(n)
y(n) = b0 w(n) + b1 w(n-1)

Since the two delay units contain the same input w(n), the delays can be merged to obtain the
structure of Fig. 1(c). This realization is termed the direct form II structure.
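
A direct form II realization is easy to express in code. The sketch below (added for
illustration; the coefficient values are arbitrary) implements the pair of equations above
with the single shared delay w(n-1):

```python
def direct_form_ii(x, a1, b0, b1):
    """First-order direct form II: w(n) = -a1*w(n-1) + x(n); y(n) = b0*w(n) + b1*w(n-1)."""
    y, w_prev = [], 0.0
    for xn in x:
        w = -a1 * w_prev + xn            # recursive part (one delay element)
        y.append(b0 * w + b1 * w_prev)   # non-recursive part
        w_prev = w
    return y

# impulse response of y(n) = 0.5*y(n-1) + x(n) + 2*x(n-1), i.e. a1 = -0.5
print(direct_form_ii([1.0, 0.0, 0.0, 0.0], a1=-0.5, b0=1.0, b1=2.0))
# [1.0, 2.5, 1.25, 0.625]
```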

Fig. 1: Steps of converting direct form I (a) to direct form II realization (c)

The structures can readily be generalized for the LTI system

y(n) = -Σ_{k=1}^{N} a_k y(n-k) + Σ_{k=0}^{M} b_k x(n-k)

Figure 2 shows the direct form I structure, where we need M+N delays and M+N+1
multiplications. The equations are

v(n) = Σ_{k=0}^{M} b_k x(n-k)    (non-recursive part)

y(n) = -Σ_{k=1}^{N} a_k y(n-k) + v(n)    (recursive part)

Fig. 2: Direct form I of generalized LTI system

Figure 3 shows the direct form II structure, where we need max(M,N) delays and M+N+1
multiplications. This form is also known as the canonic form. The equations are

w(n) = -Σ_{k=1}^{N} a_k w(n-k) + x(n)    (recursive part)

y(n) = Σ_{k=0}^{M} b_k w(n-k)    (non-recursive part)

Fig. 3: Direct form II of generalized LTI system (M = N-2)

Let us take an FIR system to distinguish between realization in recursive and non-recursive
form. The I/O relation of an FIR system can be written as

y(n) = Σ_{k=0}^{M} h(k) x(n-k)

Suppose we have an FIR system to compute the moving average in the form

y(n) = (1/(M+1)) Σ_{k=0}^{M} x(n-k)

Clearly the system is FIR with impulse response

h(n) = 1/(M+1) for 0 ≤ n ≤ M

Figure 4 illustrates the structure of the non-recursive realization. Alternatively, we can write

y(n) = y(n-1) + (1/(M+1)) [x(n) - x(n-M-1)]

This equation represents a recursive realization of the same FIR system, represented in
Figure 5.

Fig. 4: Non recursive realization of FIR moving average system

Fig. 5: Recursive realization of FIR moving average system
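
Both realizations compute the same output, as the following Python sketch confirms
(added here; M = 3 and the input are assumed values):

```python
import numpy as np

M = 3
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

# Non-recursive: convolution with h(n) = 1/(M+1), 0 <= n <= M
h = np.full(M + 1, 1.0 / (M + 1))
y_direct = np.convolve(x, h)[: len(x)]

# Recursive: y(n) = y(n-1) + (x(n) - x(n-M-1))/(M+1), with x(n) = 0 for n < 0
xp = np.concatenate([np.zeros(M + 1), x])    # zero-padded for x(n-M-1)
y_rec = np.zeros(len(x))
for n in range(len(x)):
    prev = y_rec[n - 1] if n > 0 else 0.0
    y_rec[n] = prev + (xp[n + M + 1] - xp[n]) / (M + 1)

print(np.allclose(y_direct, y_rec))   # True: identical outputs
```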

4. Correlation
Correlation is a measure of the degree to which two signals are similar. The
application areas include radar, sonar, digital communications, geology, and so on. A
received signal from a target can be represented as y(n) = A x(n-D) + w(n)
D → round-trip delay, assumed to be an integer multiple of the sampling interval
w(n) → additive noise

Cross-correlation: The correlation between two dependent variables. The cross-correlation
of x(n) with y(n) is

r_xy(l) = Σ_{n=-∞}^{∞} x(n) y(n-l)

On the other hand, the cross-correlation of y(n) with x(n) is

r_yx(l) = Σ_{n=-∞}^{∞} y(n) x(n-l)

Here l is termed the lag parameter. We see from the expressions that correlation is
similar to convolution except for the operation of folding. So we can write r_xy(l) = x(l) * y(-l).
Also, note that r_xy(l) = r_yx(-l).

Autocorrelation: The correlation of the same dependent variable with the passage of
time. Here, y(n) = x(n):

r_xx(l) = Σ_{n=-∞}^{∞} x(n) x(n-l)

When l = 0,

r_xx(0) = Σ_{n=-∞}^{∞} x^2(n) = E_x → energy of the signal

If x(n) and y(n) are causal sequences of length N, then

r_xy(l) = Σ_{n=i}^{N-|k|-1} x(n) y(n-l)

where i = l, k = 0 for l ≥ 0, and i = 0, k = l for l < 0.
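
NumPy's correlate computes these sums directly; the sketch below (added for
illustration, with assumed short sequences) also checks the symmetry r_xy(l) = r_yx(-l)
noted above:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
rxy = np.correlate(x, y, mode="full")   # r_xy(l) for lags l = -(N-1)..(N-1)
ryx = np.correlate(y, x, mode="full")
print(rxy)                              # [ 6. 17. 32. 23. 12.]
print(np.allclose(rxy, ryx[::-1]))      # True: r_xy(l) = r_yx(-l)
```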

4.1 Properties of Correlation


Symmetry → r_xy(l) = r_yx(-l). So r_xx(l) = r_xx(-l) → even function

Maximum correlation: Let x(n) and y(n) have finite energy. If the time shift is l, we have
a composite signal s(n) = a x(n) + b y(n-l). The energy of this signal is

E_s = a^2 r_xx(0) + b^2 r_yy(0) + 2ab r_xy(l)

Since E_s ≥ 0,

r_xx(0) (a/b)^2 + 2 r_xy(l) (a/b) + r_yy(0) ≥ 0

This is a quadratic function of a/b. Now, if a quadratic is
nonnegative, its discriminant must be non-positive. So,

r_xy^2(l) ≤ r_xx(0) r_yy(0), i.e., |r_xy(l)| ≤ sqrt(E_x E_y)

For autocorrelation, |r_xx(l)| ≤ r_xx(0) = E_x.
So, the autocorrelation has its maximum value at 0 lag. To scale down the correlation
function, it is normally normalized between -1 and 1. The normalized correlations are

ρ_xy(l) = r_xy(l) / sqrt(r_xx(0) r_yy(0)) and ρ_xx(l) = r_xx(l) / r_xx(0)

4.2 Correlation of Periodic Sequences
Let x(n) and y(n) be power signals. Then

r_xy(l) = lim_{M→∞} (1/(2M+1)) Σ_{n=-M}^{M} x(n) y(n-l)

If x(n) and y(n) are periodic with period N, the averages over the infinite interval are
identical to averages over one period:

r_xy(l) = (1/N) Σ_{n=0}^{N-1} x(n) y(n-l)

So, r_xy(l) and r_xx(l) are periodic sequences with period N, and 1/N is the normalization scale
factor.

Let y(n) = x(n) + w(n). Suppose we observe M samples of y(n), 0 ≤ n ≤ M-1, where
M >> N. We can assume for practical purposes that y(n) = 0 for n < 0 and n ≥ M. Using the
normalization factor 1/M,

r_yy(l) = (1/M) Σ_{n=0}^{M-1} y(n) y(n-l) = r_xx(l) + r_xw(l) + r_wx(l) + r_ww(l)

The first term is periodic and shows large peaks at l = 0, N, 2N, .... But as l approaches M, the
peaks are reduced, since many products x(n)x(n-l) are 0. So, r_yy(l) does not signify much for l > M/2.
If x(n) and w(n) are unrelated, r_xw(l) and r_wx(l) are expected to be very small. r_ww(l) will contain a
peak at l = 0 and, because of its randomness, will rapidly decay to 0. So, only r_xx(l) is
expected to have large peaks for l > 0. This property can be applied to the detection of a
periodic signal buried in noise.
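
This detection idea can be demonstrated in a few lines of Python (my sketch; the period
N = 10, record length M = 1000 and unit-variance noise are assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 10, 1000
x = np.tile(np.sin(2 * np.pi * np.arange(N) / N), M // N)   # period N
y = x + rng.normal(0.0, 1.0, M)                             # buried in noise
ryy = np.correlate(y, y, mode="full")[M - 1:] / M           # lags l >= 0
print(np.round(ryy[:25], 2))   # peaks near l = 0, 10, 20 reveal the period
```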

4.3 Input Output Correlation Sequences


For an LTI system, we can calculate the output using the convolution of the input and
impulse response: y(n) = h(n) * x(n). Then
r_yx(l) = y(l) * x(-l) = h(l) * [x(l) * x(-l)] = h(l) * r_xx(l)
So, r_yx(l) is the output of the LTI system when the input is r_xx(l).
Similarly, r_xy(l) = h(-l) * r_xx(l).

Using the autocorrelation,
r_yy(l) = y(l) * y(-l) = [h(l) * x(l)] * [h(-l) * x(-l)]
= [h(l) * h(-l)] * [x(l) * x(-l)] = r_hh(l) * r_xx(l)
r_hh(l) exists if the system is stable.

Chapter 4
Z TRANSFORM
1. Sampling and the Z-Transform

The sampled signal can be written as

x_s(t) = Σ_{n=0}^{∞} x(nT) δ(t - nT)

The Laplace transform gives

X_s(s) = Σ_{n=0}^{∞} x(nT) e^{-snT}

Substituting z = e^{sT} (or s = (1/T) ln z),

X(z) = Σ_{n} x(n) z^{-n}

where x(n) ≡ x(nT).

Unit step function: x(n) = u(n) → X(z) = 1/(1 - z^{-1}) = z/(z-1), ROC |z| > 1

Power function: x(n) = a^n u(n) → X(z) = 1/(1 - a z^{-1}) = z/(z-a), ROC |z| > |a|

Unit ramp function: x(n) = n u(n) → X(z) = z^{-1}/(1 - z^{-1})^2 = z/(z-1)^2, ROC |z| > 1

The values of z for which X(z) is finite form the region of convergence (ROC).
Exercise: Find the Z-transform and ROC of x(n) = {………………….}
Values of z for which X(z) = ∞ are referred to as the poles of X(z). Values of z for which
X(z) = 0 are referred to as the zeros of X(z).

2. Significance of ROC
Consider a power function, both causal and anti-causal.
For the causal sequence, x(n) = a^n u(n):

X(z) = Σ_{n=0}^{∞} (a z^{-1})^n = 1/(1 - a z^{-1}), ROC |z| > |a|

For the anti-causal sequence, x(n) = -a^n u(-n-1):

X(z) = 1/(1 - a z^{-1}), ROC |z| < |a|

The two sequences have the same X(z) but their ROCs are different. Without the ROC we cannot
uniquely determine the sequence x(n). Generally, for a causal sequence the ROC is the exterior
of the circle of radius |a|, and for an anti-causal sequence it is the interior of the circle.

Exercise: Find X(z) and ROC for

x(n) = α^n u(n) + β^n u(-n-1)

Answer: X(z) = 1/(1 - α z^{-1}) - 1/(1 - β z^{-1}), ROC |α| < |z| < |β|
(the transform exists only if |α| < |β|)

3. Properties of Z Transform

4. Z Transform of LTI System


When possible, we try to model discrete-time systems with linear difference equations
with constant coefficients. The model is then linear and time invariant (LTI). The general
equation for this model is

y(n) = -Σ_{k=1}^{N} a_k y(n-k) + Σ_{k=0}^{M} b_k x(n-k)

where a_k, b_k → system parameters and M, N → system orders

Expansion of the general equation yields

y(n) + a_1 y(n-1) + ... + a_N y(n-N) = b_0 x(n) + b_1 x(n-1) + ... + b_M x(n-M)

But Z{y(n-k)} = z^{-k} Y(z) and Z{x(n-k)} = z^{-k} X(z).
Thus the Z-transform gives

Y(z) [1 + Σ_{k=1}^{N} a_k z^{-k}] = X(z) Σ_{k=0}^{M} b_k z^{-k}

By definition, the system transfer function H(z) is

H(z) = Y(z)/X(z) = (Σ_{k=0}^{M} b_k z^{-k}) / (1 + Σ_{k=1}^{N} a_k z^{-k})
Stability: For an Nth-order LTI system (N > M), write H(z) = N(z)/D(z). D(z) can be factored as

D(z) = Π_{k=1}^{N} (z - p_k)

The zeros p_k of this polynomial are the poles of H(z), where, by definition, the poles are
those values of z for which H(z) is unbounded. Let Y(z) = H(z) X(z). Then, in a partial fraction expansion,

Y(z) = Σ_{k=1}^{N} c_k z/(z - p_k) + Y_x(z)

where Y_x(z) is the sum of the terms, in the partial fraction expansion, that originate in the
poles of X(z). Hence Y_x(z) is the z-transform of the forced response.
The IZT yields

y(n) = Σ_{k=1}^{N} c_k (p_k)^n u(n) + y_x(n)

where the first summation is the natural response. If the input x(n) is bounded, y_x(n) will remain
bounded, since it is of the functional form of x(n). Thus an unbounded output must
be the result of at least one of the natural-response terms, c_k (p_k)^n, becoming unbounded.

This unboundedness can occur only if the magnitude of at least one pole satisfies |p_k| ≥ 1.

So, an LTI discrete-time causal system is BIBO stable provided that all poles of the system
transfer function lie inside the unit circle in the z-plane.

Example

Let H(z) = N(z)/D(z) with D(z) = (z - 0.5)(z - 0.8)(z - 1.2).
Then the poles of the transfer function are at 0.5, 0.8 and 1.2.
So, the system is unstable because the pole at z = 1.2 is outside the unit circle.
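
The pole locations can be checked numerically (a sketch added here, using only the pole
values stated in the example):

```python
import numpy as np

d = np.poly([0.5, 0.8, 1.2])        # denominator built from the poles
poles = np.roots(d)
print(poles)                        # [1.2 0.8 0.5]
print(np.all(np.abs(poles) < 1))    # False -> unstable (pole at z = 1.2)
```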

Schür-Cohn Stability Test: The test is a stability criterion: a procedure to
determine whether any root of the denominator of the system transfer function lies outside the
unit circle. We have

H(z) = B(z)/A(z)

where

A(z) = Σ_{k=0}^{N} a_k z^{-k}

The system is stable if all the roots of A(z) lie inside the unit circle. The polynomial of degree m is

A_m(z) = Σ_{k=0}^{m} a_m(k) z^{-k}, where a_m(0) = 1

The reverse / reciprocal polynomial B_m(z) of degree m is

B_m(z) = z^{-m} A_m(z^{-1}) = Σ_{k=0}^{m} a_m(m-k) z^{-k}

So, the coefficients of B_m(z) are the same as those of A_m(z), but in reverse order.

In the Schür-Cohn stability test, to determine whether the polynomial A(z) has all its roots inside
the unit circle, we compute a set of coefficients, called the reflection coefficients, k_1, k_2, . .
. . , k_N from the polynomials A_m(z). Let A_N(z) = A(z) and k_N = a_N(N); then compute the
lower-degree polynomials A_m(z), m = N, N-1, . . . . , 1 as

A_{m-1}(z) = [A_m(z) - k_m B_m(z)] / (1 - k_m^2), where k_m = a_m(m)

The polynomial A(z) has all its roots inside the unit circle if and only if the coefficients k_m
satisfy |k_m| < 1 for all m = 1, 2, . . ., N.

Example: [p1 = 2 and p2 = -0.25, so unstable]

Begin with A_2(z) = (1 - 2z^{-1})(1 + 0.25z^{-1}) = 1 - 1.75 z^{-1} - 0.5 z^{-2};
hence k_2 = -0.5 and B_2(z) = -0.5 - 1.75 z^{-1} + z^{-2}.

A_1(z) = [A_2(z) - k_2 B_2(z)] / (1 - k_2^2) = 1 - 3.5 z^{-1}

So, k_1 = -3.5.

Since |k_1| > 1, the system is unstable.
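
The recursion is short enough to implement directly. The Python sketch below (added
for illustration) computes the reflection coefficients and reproduces the example:

```python
import numpy as np

def schur_cohn_stable(a):
    """Stability test: coefficients of A(z) in ascending powers of z^-1, a[0] = 1."""
    a = np.asarray(a, dtype=float)
    while len(a) > 1:
        k = a[-1]                           # k_m = a_m(m)
        if abs(k) >= 1:
            return False
        b = a[::-1]                         # reverse polynomial B_m(z)
        a = (a - k * b)[:-1] / (1 - k**2)   # A_{m-1}(z)
    return True

print(schur_cohn_stable([1, -1.75, -0.5]))  # False: k_1 = -3.5 (example above)
print(schur_cohn_stable([1, -0.9, 0.2]))    # True: poles at 0.4 and 0.5
```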

Example: Consider the general second-order system

H(z) = 1 / (1 + a_1 z^{-1} + a_2 z^{-2}) → 2 zeros at 0 and 2 poles at p_1, p_2

For stability, |p_1| < 1 and |p_2| < 1.

We have p_{1,2} = [-a_1 ± sqrt(a_1^2 - 4a_2)] / 2 and a_1 = -(p_1 + p_2), a_2 = p_1 p_2

So, for stability, |a_2| = |p_1 p_2| < 1.

From the Schür-Cohn stability test, k_2 = a_2 and k_1 = a_1 / (1 + a_2)

Since we need |k_2| < 1 and |k_1| < 1, we obtain |a_2| < 1 and

also, |a_1| < 1 + a_2

5. Inverse Z-Transform

X(z) = Σ_{n} x(n) z^{-n} = ... + x(-1) z + x(0) + x(1) z^{-1} + x(2) z^{-2} + ...

So, the values of x(n) are the coefficients of z^{-n} and can be obtained
by direct inspection of X(z). Normally X(z) is expressed as a ratio of two
polynomials in z^{-1} or in z.
So, IZT may be obtained using


● Power series expansion method
● Partial fraction expansion method
● Residue method / Contour integration

5.1 Power Series Expansion Method
X(z) can be expanded into an infinite series in z^{-1} or z by long division or synthetic
division:

X(z) = Σ_{n} x(n) z^{-n}

so the values x(n) are read off as the coefficients of z^{-n}.

Example:

Alternatively, X(z) may be expressed in powers of z, and then, performing the long
division, we will have the same result.
For a rational X(z) = B(z)/A(z) with a causal x(n), the long division can be performed by
the recursive approach

x(n) = [b(n) - Σ_{k=1}^{n} a(k) x(n-k)] / a(0), n ≥ 0

where b(n) = 0 for n > M and a(k) = 0 for k > N.

5.2 Partial Fraction Expansion Method

If M < N, X(z) = B(z)/A(z) is a proper rational function; if M ≥ N it is an
improper rational function, and the polynomial part must first be divided out.

Distinct poles: All poles p_k are distinct. Then

X(z)/z = Σ_{k=1}^{N} A_k / (z - p_k), where A_k = [(z - p_k) X(z)/z] evaluated at z = p_k

and, for a causal sequence, x(n) = Σ_{k=1}^{N} A_k (p_k)^n u(n).

Example:

Multiple-order poles: Example: an X(z) with a simple pole at z = -1 and a double pole at z = 1. In such a case,

X(z)/z = A_1/(z+1) + A_2/(z-1) + A_3/(z-1)^2    (2)

To find A_2, the coefficient of the 1/(z-1) term, multiply (2) by (z-1)^2, differentiate
with respect to z and put z = 1.
5.3 Contour Integration Method

Contour integration uses the Cauchy integral theorem and the Cauchy residue theorem. We
have

X(z) = Σ_{k} x(k) z^{-k}

Multiplying both sides by z^{n-1} and integrating over a contour C within the ROC of X(z)
enclosing the origin,

(1/2πj) ∮_C X(z) z^{n-1} dz = Σ_k x(k) (1/2πj) ∮_C z^{n-1-k} dz

Let us now apply the Cauchy integral theorem, which states

(1/2πj) ∮_C z^{n-1-k} dz = 1 for k = n, and 0 for k ≠ n

Hence,

x(n) = (1/2πj) ∮_C X(z) z^{n-1} dz

Cauchy residue theorem: If f(z) exists on and inside the contour C and f(z) has no
pole at z = z0, then

(1/2πj) ∮_C [f(z)/(z - z0)] dz = f(z0) = residue of the pole at z = z0
(and 0 if z0 lies outside C)

Example: X(z) = 1/(1 - a z^{-1}), |z| > |a|

Here x(n) = (1/2πj) ∮_C z^n/(z - a) dz. Let us consider different cases.

Case I: n ≥ 0. Then z^n has only zeros and hence no poles inside C. The only pole inside C
is z = a, so x(n) = a^n.

Case II: n < 0. z^n has an |n|th-order pole at z = 0, which is also inside C.

For n = -1, the residues at z = 0 and z = a are -1/a and 1/a, so x(-1) = 0.

For n = -2, the residues at z = 0 and z = a again cancel, so x(-2) = 0.
Hence,

x(n) = a^n u(n)

6. One-Sided Z-Transform

X^+(z) = Σ_{n=0}^{∞} x(n) z^{-n}

The one-sided z-transform has the same properties as the two-sided z-transform except for
the time-shifting property.

Time delay: Z^+{x(n-k)} = z^{-k} [X^+(z) + Σ_{n=1}^{k} x(-n) z^{n}], k > 0
If x(n) is causal, Z^+{x(n-k)} = z^{-k} X^+(z).

Time advance: Z^+{x(n+k)} = z^{k} [X^+(z) - Σ_{n=0}^{k-1} x(n) z^{-n}], k > 0

Example: Fibonacci sequence {1 1 2 3 5 8 13 . . ..} Any number equals the sum of the
two previous numbers:

y(n) = y(n-1) + y(n-2)

The initial conditions can be found as y(-1) = 0, y(-2) = 1 (so that y(0) = y(1) = 1).

Taking the one-sided z-transform and using the shifting property,

Y^+(z) = z^{-1}[Y^+(z) + y(-1)z] + z^{-2}[Y^+(z) + y(-1)z + y(-2)z^2]

so that Y^+(z) = 1/(1 - z^{-1} - z^{-2}) = z^2/(z^2 - z - 1)

The poles are p_{1,2} = (1 ± sqrt(5))/2.

Taking partial fractions,

Y^+(z)/z = z/[(z - p_1)(z - p_2)] = [p_1/(p_1 - p_2)]/(z - p_1) + [p_2/(p_2 - p_1)]/(z - p_2)

Hence,

y(n) = (p_1^{n+1} - p_2^{n+1}) / sqrt(5), n ≥ 0
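
The closed form can be sanity-checked against the sequence itself (a quick sketch, added
here):

```python
from math import sqrt

p1, p2 = (1 + sqrt(5)) / 2, (1 - sqrt(5)) / 2
y = [round((p1 ** (n + 1) - p2 ** (n + 1)) / sqrt(5)) for n in range(8)]
print(y)   # [1, 1, 2, 3, 5, 8, 13, 21] -- the Fibonacci sequence
```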

7. LTI System Analysis in Z Domain


7.1 System Response with Rational System Functions

Let

H(z) = B(z)/A(z) and X(z) = N(z)/Q(z)

So,

Y(z) = H(z) X(z) = B(z) N(z) / [A(z) Q(z)]

if the system is initially relaxed.
Here the roots p_k of A(z) are the system poles and the roots q_k of Q(z) are the input poles.
Let all the poles be simple poles, and let the zeros of B(z) and N(z) not coincide with
the p_k and q_k, so that no pole-zero cancellation occurs. We then have

Y(z) = Σ_{k=1}^{N} A_k z/(z - p_k) + Σ_{k=1}^{L} Q_k z/(z - q_k)

The inverse z-transform gives

y(n) = Σ_{k=1}^{N} A_k (p_k)^n u(n) + Σ_{k=1}^{L} Q_k (q_k)^n u(n)

The first term is the natural response while the second is the forced response.

7.2 Non-zero Initial Conditions: x(n) is applied at n = 0.

Since x(n) is causal, X^+(z) = X(z), and we have

Y^+(z) = H(z) X(z) + N_0(z)/A(z)

where N_0(z) is the polynomial generated by the initial conditions y(-1), ..., y(-N).
7.3 Transient and Steady-State Responses: The natural response of a system is

y_nr(n) = Σ_{k=1}^{N} A_k (p_k)^n u(n)

If |p_k| < 1 for all k, y_nr(n) decays to 0 as n → ∞. In such a case it is
termed the transient response; the rate of decay depends on the pole values. Again, the
forced response is

y_fr(n) = Σ_{k=1}^{L} Q_k (q_k)^n u(n)

If the input poles q_k fall on the unit circle and x(n) is a
sinusoid, y_fr(n) is also a sinusoid: the steady-state response.

7.4 Pole-zero Cancellation: A pole and a zero at the same location may arise from (1) the
system function H(z) itself and/or (2) the product H(z)X(z). The cancellation of a pole-zero
pair due to (1) causes a reduction of the system order, while that due to (2) causes suppression
of the pole by the zero. A zero located very near a pole results in a response term with very
small amplitude.

Chapter 5
FOURIER ANALYSIS OF DT SIGNALS
In this chapter we shall discuss Fourier analysis of discrete-time signals. Our approach is
parallel to that used for continuous-time signals. We first represent a periodic signal x(n) as
a Fourier series formed by a discrete-time exponential (or sinusoid) and its harmonics.
Later we extend this representation to an aperiodic signal x(n) by considering x(n) as a
limiting case of a periodic signal with the period approaching infinity.

1. Discrete Time Fourier Series (DTFS)


A periodic signal of period N0 is called an N0-periodic signal. A continuous-time periodic
signal of period T0 can be represented as a trigonometric Fourier series consisting of a
sinusoid of the fundamental frequency ω0 = 2π/T0 and all its harmonics (sinusoids of
frequencies that are integer multiples of ω0). The exponential form of the Fourier series
consists of the exponentials e^{j0t}, e^{±jω0t}, e^{±j2ω0t}, e^{±j3ω0t}, . . . . For a parallel development of
the discrete-time case, recall that the fundamental frequency of a sinusoid of period N0 is Ω0 = 2π/N0.
Hence, an N0-periodic discrete-time signal x(n) can be represented by a discrete-time Fourier
series with fundamental frequency Ω0 and its harmonics. As in the continuous-time case, we
may use a trigonometric or an exponential form of the Fourier series. Because of its
compactness and ease of mathematical manipulation, the exponential form is preferable
to the trigonometric. For this reason, we shall bypass the trigonometric form and go
directly to the exponential form of the discrete-time Fourier series.

The exponential Fourier series consists of the exponentials e^{j0n}, e^{±jΩ0n}, e^{±j2Ω0n}, …..
and so on, so there would appear to be an infinite number of harmonics. However, discrete-time
exponentials whose frequencies are separated by 2π (or integer multiples of 2π) are
identical because

e^{j(Ω+2π)n} = e^{jΩn} e^{j2πn} = e^{jΩn}    (1)

The consequence of this result is that the r-th harmonic is identical to the (r+N0)-th
harmonic. To demonstrate this, let g_k denote the kth harmonic e^{jkΩ0n}. Then

g_{k+N0} = e^{j(k+N0)Ω0n} = e^{jkΩ0n} e^{j2πn} = e^{jkΩ0n} = g_k    (2)

and g_k = g_{k+N0} = g_{k+2N0} = ... = g_{k+mN0}, m integer    (3)

Thus, the first harmonic is identical to the (N0+1)-th harmonic, the second harmonic is
identical to the (N0+2)-th harmonic, and so on. In other words, there are only N0
independent harmonics, and they range over an interval of 2π (because the harmonics are
separated by Ω0 = 2π/N0). We may choose these N0 independent harmonics as e^{jrΩ0n} over 0
≤ r ≤ N0-1, or over -1 ≤ r ≤ N0-2, or over 1 ≤ r ≤ N0, or over any other suitable choice for
that matter. Every one of these sets will have the same harmonics, although in different
order. Let us take the first choice (0 ≤ r ≤ N0-1). This choice corresponds to the exponentials
e^{jrΩ0n} for r = 0, 1, 2, ..., N0-1. The Fourier series for an N0-periodic signal x(n) consists of
only these N0 harmonics, and can be expressed as

x(n) = Σ_{r=0}^{N0-1} D_r e^{jrΩ0n}, Ω0 = 2π/N0    (4)
To compute the coefficients D_r in the Fourier series, we multiply both sides of (4) by
e^{-jmΩ0k} (with n replaced by k) and sum over k from k = 0 to N0-1:

Σ_{k=0}^{N0-1} x(k) e^{-jmΩ0k} = Σ_{k=0}^{N0-1} Σ_{r=0}^{N0-1} D_r e^{j(r-m)Ω0k}    (5)

The right-hand sum, after interchanging the order of summation, results in

Σ_{r=0}^{N0-1} D_r Σ_{k=0}^{N0-1} e^{j(r-m)Ω0k}    (6)

The inner sum is zero for all values of r ≠ m. It is nonzero, with a value N0, only when r =
m. This fact means the outside sum has only one term, D_m N0 (corresponding to r = m).
Therefore, the right-hand side is equal to D_m N0, and

D_m = (1/N0) Σ_{k=0}^{N0-1} x(k) e^{-jmΩ0k}    (7)

We now have the discrete-time Fourier series (DTFS) representation of an N0-periodic signal
x(n) as

x(n) = Σ_{r=0}^{N0-1} D_r e^{jrΩ0n}    (8)

where

D_r = (1/N0) Σ_{n=0}^{N0-1} x(n) e^{-jrΩ0n}    (9)

Observe that the DTFS equations (8) and (9) are identical (within a scaling constant) to the
DFT equations derived at the end of this chapter. Therefore, we can compute the DTFS
coefficients using the efficient FFT algorithm.

1.1 Fourier Spectra


The Fourier series consists of N0 components

D_0, D_1 e^{jΩ0n}, D_2 e^{j2Ω0n}, ..., D_{N0-1} e^{j(N0-1)Ω0n}

The frequencies of these components are 0, Ω0, 2Ω0, ..., (N0-1)Ω0, where Ω0 = 2π/N0.
The amount of the rth harmonic is D_r. We can plot this amount D_r (the Fourier coefficient)
as a function of the frequency Ω = rΩ0. Such a plot, called the Fourier spectrum of x(n), gives us, at a
glance, a graphical picture of the amounts of the various harmonics of x(n).

In general, the Fourier coefficients D_r are complex, and they can be represented in
polar form as

D_r = |D_r| e^{j∠D_r}    (10)

The plot of |D_r| vs. Ω is called the amplitude spectrum and that of ∠D_r vs. Ω is called
the angle (or phase) spectrum. These two plots together are the frequency spectra of x(n).
Knowing these spectra, we can reconstruct or synthesize x(n). Therefore, the Fourier (or
frequency) spectra, which are an alternative way of describing a signal x(n), are in every
way equivalent (in terms of information) to the plot of x(n) as a function of n. The
Fourier spectra of a signal constitute the frequency-domain description of x(n), in contrast
to the time-domain description, where x(n) is specified as a function of time (n).

The results are very similar to the representation of a continuous-time periodic signal by
an exponential Fourier series, except that, generally, the continuous-time signal spectrum
bandwidth is infinite and consists of an infinite number of exponential components
(harmonics). The spectrum of the discrete-time periodic signal, in contrast, is bandlimited
and has at most N0 components.

Periodic Extension of Fourier Spectrum

Note that if g[r] is an N0-periodic function of r, then

Σ_{r=0}^{N0-1} g[r] = Σ_{r=<N0>} g[r]    (11)

where r = <N0> indicates summation over any N0 consecutive values of r. This follows
because the right-hand side of Eq. (11) is the sum of all N0 consecutive values of g[r].
Because g[r] is periodic, this sum must be the same regardless of where we start the first
term. Now e^{-jrΩ0n} is N0-periodic in r because

e^{-j(r+N0)Ω0n} = e^{-jrΩ0n} e^{-j2πn} = e^{-jrΩ0n}

Therefore, if x(n) is N0-periodic, x(n) e^{-jrΩ0n} is also N0-periodic. Hence, it follows that D_r is
also N0-periodic, as is D_r e^{jrΩ0n}. Now we can write

x(n) = Σ_{r=<N0>} D_r e^{jrΩ0n}    (12)

and

D_r = (1/N0) Σ_{n=<N0>} x(n) e^{-jrΩ0n}    (13)

If we plot D_r for all values of r (rather than only 0 ≤ r ≤ N0-1), then the spectrum D_r is
N0-periodic. Moreover, x(n) can be synthesized not only by the N0 exponentials
corresponding to 0 ≤ r ≤ N0-1, but by any successive N0 exponentials in this spectrum,
starting at any value of r (positive or negative). For this reason, it is customary to show the
spectrum D_r for all values of r (not just over the interval 0 ≤ r ≤ N0-1). Yet we must
remember that to synthesize x(n) from this spectrum, we need to add only N0 consecutive
components.

The spectral components D_r are separated by the frequency Ω0 = 2π/N0, and there are a
total of N0 components repeating periodically along the Ω axis. Thus, on the frequency
scale Ω, D_r repeats every 2π interval. Equations (12) and (13) show that both x(n) and its
spectrum D_r are periodic and both have exactly the same number of components (N0) over
one period. The period of x(n) is N0 samples and that of D_r is 2π radians.

D_r is complex in general, and D_{-r} is the conjugate of D_r if x(n) is real. Thus

|D_r| = |D_{-r}| and ∠D_r = -∠D_{-r}    (14)

so that the amplitude spectrum |D_r| is an even function, and ∠D_r is an odd function of r
(or Ω). All these concepts will be clarified by the examples to follow.

Example: Find the discrete-time Fourier series (DTFS) for x(n) = sin(0.1πn). Sketch
the amplitude and phase spectra.

In this case the sinusoid sin(0.1πn) (Fig. 1a) is periodic because Ω/2π = 1/20 is a rational
number, and the period N0 is

N0 = m(2π/Ω) = m(2π/0.1π) = 20m

The smallest value of m that makes 20m an integer is m = 1. Therefore, the period N0 = 20,
so that Ω0 = 2π/N0 = 0.1π, and from Eq. (12),

x(n) = Σ_{r=<20>} D_r e^{j0.1πrn}

where the sum is performed over any 20 consecutive values of r. We shall select the range
-10 ≤ r < 10 (values of r from -10 to 9). This choice corresponds to synthesizing x(n) using
the spectral components in the fundamental frequency range (-π ≤ Ω < π). Thus,

x(n) = Σ_{r=-10}^{9} D_r e^{j0.1πrn}

where, according to Eq. (13),

D_r = (1/20) Σ_{n=<20>} sin(0.1πn) e^{-j0.1πrn}
    = (1/20) Σ_{n=<20>} (1/2j)(e^{j0.1πn} - e^{-j0.1πn}) e^{-j0.1πrn}

Fig. 1: Discrete-time sinusoid sin(0.1πn) and its Fourier spectra

In these sums, r takes on all values between -10 and 9. The first sum on the right-hand
side is zero for all values of r except r = 1, when the sum is equal to N0 = 20. Similarly,
the second sum is zero for all values of r except r = -1, when it is equal to N0 = 20.
Therefore, D_1 = 1/2j and D_{-1} = -1/2j, and all other coefficients are zero. The
corresponding Fourier series is given by

x(n) = sin(0.1πn) = (1/2j)(e^{j0.1πn} - e^{-j0.1πn})    (15)

Here the fundamental frequency Ω0 = 0.1π, and there are only two nonzero components:

D_1 = 1/2j = (1/2) e^{-jπ/2} and D_{-1} = -1/2j = (1/2) e^{jπ/2}

Therefore, |D_1| = |D_{-1}| = 1/2, ∠D_1 = -π/2 and ∠D_{-1} = π/2

Figures 1b and 1c show the sketch of D_r for the interval (-10 ≤ r < 10). There are only two
nonzero components, corresponding to r = 1 and -1; the remaining 18 coefficients are zero.
Because of the periodicity property, the spectrum D_r is a periodic function of r with period
N0 = 20. For this reason we repeat the spectrum with period N0 = 20 (or Ω = 2π), as illustrated
in Figs. 1b and 1c, which are periodic extensions of the spectrum in the range -10 ≤ r < 10.
Observe that the amplitude spectrum is an even function, and the angle or phase spectrum
is an odd function of r (or Ω), as expected.
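
Since the DTFS coefficients are (within the 1/N0 scaling) the DFT of one period, they
can be computed with the FFT, as this added Python sketch shows:

```python
import numpy as np

N0 = 20
n = np.arange(N0)
x = np.sin(0.1 * np.pi * n)
D = np.fft.fft(x) / N0                 # D_r = (1/N0) * DFT{x(n)}
print(np.round(D[1], 3), np.round(D[N0 - 1], 3))   # -0.5j and +0.5j
# Only D_1 = 1/2j and D_-1 (= D_19) = -1/2j are nonzero, as found above.
```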

Exercise: Find the period and the DTFS for x(n) = … over the
interval … [Hint: Compute D_r first using Eq. (9)]
Answer: …

2. Fourier Integral and Fourier Transform


In Sec. 1 we succeeded in representing periodic signals as a sum of (everlasting)
exponentials. In this section we extend this representation to aperiodic signals. The
procedure is identical conceptually to that used for continuous-time signals.

Applying a limiting process, we now show that aperiodic signals can be expressed
as a continuous sum (integral) of everlasting exponentials. To represent an aperiodic signal
x(n) such as the one illustrated in Fig. 2a by everlasting exponential signals, let us
construct a new periodic signal x_{N0}(n) formed by repeating the signal x(n) every N0
units, as shown in Fig. 2. The period N0 is made long enough to avoid overlap between the
repeating cycles. The periodic signal x_{N0}(n) can be represented by an
exponential Fourier series. If we let N0 → ∞, the signal repeats after an infinite
interval, and therefore

lim_{N0→∞} x_{N0}(n) = x(n)

Fig. 2: Generation of a periodic signal by periodic extension of a signal x(n)

Thus, the Fourier series representing x_{N0}(n) will also represent x(n) in the limit N0 → ∞.
The exponential Fourier series for x_{N0}(n) is given by

x_{N0}(n) = Σ_{r=<N0>} D_r e^{jrΩ0n}, Ω0 = 2π/N0    (16)

where

D_r = (1/N0) Σ_{n=<N0>} x_{N0}(n) e^{-jrΩ0n}    (17)

The limits of the sum on the right-hand side of equation (17) should be from -N0/2 to N0/2.
But because x_{N0}(n) = x(n) for |n| < N0/2, it does not matter if the limits are taken from -∞ to
∞. In the limit as N0 → ∞, x_{N0}(n) → x(n). Also, Ω0 = 2π/N0 becomes
infinitesimal (Ω0 → 0). For this reason it is appropriate to replace Ω0 with an
infinitesimal notation ΔΩ. Since ΔΩ = 2π/N0, ΔΩ becomes so small that there will
be no discrete level change of frequency. For this reason, rΔΩ → Ω, a continuous variable.
Under these changes, Equation (17) becomes

N0 D_r = X(Ω) = Σ_{n=-∞}^{∞} x(n) e^{-jΩn}    (17.A)

where X(Ω) is the frequency-domain description of the signal, and is
termed the Fourier transform, more specifically in this case the discrete-time Fourier transform
(DTFT). Putting the value of D_r = X(rΔΩ) ΔΩ/2π into Equation (16), in the limiting case of N0 → ∞,

x(n) = lim_{N0→∞} Σ_{r=<N0>} (1/2π) X(rΔΩ) e^{jrΔΩn} ΔΩ

With the same reasoning, the LHS of the above equation becomes x(n), and the summation on the
RHS becomes an integration over any range of 2π. So,

x(n) = (1/2π) ∫_{2π} X(Ω) e^{jΩn} dΩ    (16.A)

The integration ∫_{2π} is known as the Fourier integral, and equation (16.A)
represents the inverse Fourier transform (IDTFT). The pair of DTFT and IDTFT are
rewritten below:

X(Ω) = Σ_{n=-∞}^{∞} x(n) e^{-jΩn}    → DTFT

x(n) = (1/2π) ∫_{2π} X(Ω) e^{jΩn} dΩ    → IDTFT    (18)

Symbolically, X(Ω) = F[x(n)] and x(n) = F^{-1}[X(Ω)].

The Fourier transform X(Ω) is the frequency-domain description of x(n).

It is interesting to see how the nature of the spectrum changes as N0 increases.

From the definitions of D_r and the FT, we have

D_r = (1/N0) X(rΩ0)    (19)

This result shows that the Fourier coefficients D_r are (1/N0) times the samples of X(Ω)
taken every Ω0 rad/s. Therefore, (1/N0) X(Ω) is the envelope for the
coefficients D_r. We now let N0 → ∞ by doubling N0 repeatedly. Doubling N0 halves the
fundamental frequency Ω0, so the spacing between successive spectral components
(harmonics) is halved, and there are now twice as many components (samples) in the
spectrum. At the same time, by doubling N0, the envelope of the coefficients D_r is halved,
as seen from Eq. (19). If we continue this process of doubling N0 repeatedly, the number
of components doubles in each step; the spectrum progressively becomes denser while its
magnitude D_r becomes smaller. Note, however, that the relative shape of the envelope
remains the same. In the limit, as N0 → ∞, the fundamental frequency Ω0 → 0 and D_r → 0.
The separation between successive harmonics, which is Ω0 = 2π/N0, approaches
zero (infinitesimal), and the spectrum becomes so dense that it appears continuous. But as
the number of harmonics increases indefinitely, the harmonic amplitudes become
vanishingly small (infinitesimal). We have a strange situation of having nothing of
everything.

Example: Find the DTFT of a 9-point rectangular window [x(n) = 1 for -4 ≤ n ≤ 4]

Answer: X(Ω) = Σ_{n=-4}^{4} e^{-jΩn} = sin(4.5Ω)/sin(Ω/2)

2.1 Properties of DTFT:

● Periodic with a period of 2π: X(Ω+2π) = X(Ω)
● Magnitude is even symmetric while phase is odd (for real x(n)):
|X(-Ω)| = |X(Ω)|, ∠X(-Ω) = -∠X(Ω)
● Time inversion results in frequency inversion: x(-n) ↔ X(-Ω)
● Time shifting results in multiplication by an exponential: x(n-k) ↔ e^{-jΩk} X(Ω)
● Convolution in the time domain is multiplication in the frequency domain (and vice versa):
x(n) * h(n) ↔ X(Ω) H(Ω)

Example: Find the response of the DT system with x(n) = (0.8)^n u(n) and h(n) = (0.5)^n u(n)

X(Ω) = 1/(1 - 0.8 e^{-jΩ}) and H(Ω) = 1/(1 - 0.5 e^{-jΩ})

So, Y(Ω) = X(Ω) H(Ω) = (8/3)/(1 - 0.8 e^{-jΩ}) - (5/3)/(1 - 0.5 e^{-jΩ})

and hence y(n) = [(8/3)(0.8)^n - (5/3)(0.5)^n] u(n).
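
The partial-fraction result can be verified in the time domain (a sketch added here; the
truncation length is an arbitrary choice that keeps the first 20 samples exact):

```python
import numpy as np

n = np.arange(30)
y = np.convolve(0.8 ** n, 0.5 ** n)[:20]                    # x(n) * h(n)
y_closed = (8/3) * 0.8 ** np.arange(20) - (5/3) * 0.5 ** np.arange(20)
print(np.allclose(y, y_closed))                             # True
```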

3. Discrete Fourier Transform (DFT)


3.1 Definition of DFT
From the previous section we have the DTFT of any sequence x(n):

X(ω) = Σ_{n=-∞}^{∞} x(n) e^{-jωn}

Here X(ω) is periodic with a period of 2π. Let X(ω) be sampled every δω so that
for 0 ≤ ω < 2π we have N samples. Then δω = 2π/N. So, for ω = 2πk/N, we get

X(2πk/N) = Σ_{n=-∞}^{∞} x(n) e^{-j2πkn/N}, k = 0, 1, 2, ….., N-1

Splitting the sum into blocks of length N (changing the inner index from n to n-lN) and
interchanging the order of summation, we get

X(2πk/N) = Σ_{n=0}^{N-1} [Σ_{l=-∞}^{∞} x(n-lN)] e^{-j2πkn/N}

The term

x_p(n) = Σ_{l=-∞}^{∞} x(n-lN)

is a periodic repetition of x(n); it is periodic with period N.
So, x_p(n) is the periodic extension of x(n). If the sequence x(n) is of length L and the
extension is of order N, where N ≥ L, the periodic extension reproduces the original signal
x(n), i.e., x_p(n) = x(n) for 0 ≤ n ≤ N-1. The spectrum of an aperiodic DT signal with finite
duration L can be exactly recovered from its samples at the frequencies ω_k = 2πk/N if N ≥ L.
So, we may write the sampled version of the DTFT as

X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N}, k = 0, 1, 2, ….., N-1 → DFT

Conversely, x(n) = (1/N) Σ_{k=0}^{N-1} X(k) e^{j2πkn/N}, n = 0, 1, 2, ….., N-1 → IDFT

3.2 Relation of DFT with other Transforms

3.2.1 Relation of CFT and DFT

We have the CFT as

X(ω) = ∫_{-∞}^{∞} x(t) e^{-jωt} dt

Putting t = nT and evaluating at the frequency samples ω = kω_0, and noting that as T → 0
the integration becomes a summation with dt → T, we obtain

X(kω_0) ≈ T Σ_{n} x(nT) e^{-jkω_0 nT}

So, CFT ≈ T × DFT

3.2.2 DFT and Fourier Coefficients

If x_p(n) is periodic with period N, its Fourier coefficients are

C_k = (1/N) Σ_{n=0}^{N-1} x_p(n) e^{-j2πkn/N}

If x_p(n) = x(n) for 0 ≤ n ≤ N-1, then X(k) = N C_k.

So, the N-point DFT provides the exact line spectrum of a periodic sequence with
fundamental period N.

3.2.3 DFT and Z-transform

If X(z) is sampled at N equally spaced points on the unit circle, z_k = e^{j2πk/N},

X(z)|_{z = e^{j2πk/N}} = Σ_{n} x(n) e^{-j2πkn/N}

If the sequence x(n) has a finite duration of length N or less, this equals the N-point DFT X(k).
Example: x(n) = {1 0 0 1}

X(k) = Σ_{n=0}^{3} x(n) e^{-jπkn/2} = 1 + e^{-j3πk/2} = {2 1+j1 0 1-j1}

About the point k = N/2, the magnitudes show even symmetry while the phases show odd symmetry.
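
The DFT values can be confirmed directly with NumPy (added check):

```python
import numpy as np

print(np.fft.fft([1, 0, 0, 1]))   # [2.+0.j  1.+1.j  0.+0.j  1.-1.j]
```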

3.3 Properties of DFT

● Periodicity: If x(n+N) = x(n), then X(k+N) = X(k) for all k
● Linearity: a1 x1(n) + a2 x2(n) → a1 X1(k) + a2 X2(k)
● Real-valued sequence: If x(n) is real, X(N-k) = X*(k) = X(-k)

● Real and even sequence: If x(n) is real and even, X(k) = Σ_{n=0}^{N-1} x(n) cos(2πkn/N) → DCT

● Real and odd sequence: If x(n) is real and odd, X(k) = -j Σ_{n=0}^{N-1} x(n) sin(2πkn/N)

3.4 DFT as a Linear Transformation
We have the DFT pair, rewritten as

X(k) = Σ_{n=0}^{N-1} x(n) W_N^{kn}, x(n) = (1/N) Σ_{k=0}^{N-1} X(k) W_N^{-kn}

where
W_N = e^{-j2π/N} → an Nth root of unity
In an N-point DFT, we need N^2 complex multiplications and N(N-1) complex additions.
Defining the vectors x_N, X_N and the N×N matrix W_N = [W_N^{kn}], we can write

X_N = W_N x_N

W_N is a symmetric matrix. If the inverse of W_N exists, then

x_N = W_N^{-1} X_N = (1/N) W_N* X_N

So, W_N W_N* = N I_N
where I_N is an N×N identity matrix.

3.5 Fast Fourier Transform (FFT)

With the simplified notation W_N = e^{-j2π/N},

X(k) = Σ_{n=0}^{N-1} x(n) W_N^{kn}

Two useful properties of W_N are

W_N^{k+N/2} = -W_N^{k} (symmetry) and W_N^{2} = W_{N/2}

The data is divided into two sequences: even-numbered data and odd-numbered data. Let
X1(k) be the N/2-point DFT of the even-numbered data and X2(k) that of the odd-numbered
data. Then

X(k) = X1(k) + W_N^{k} X2(k) and X(k + N/2) = X1(k) - W_N^{k} X2(k), k = 0, 1, ..., N/2 - 1

Since W_N^{k} X2(k) is common to the two DFT expressions, it needs to be calculated only once.

3.6 DFT Multiplications: Circular Convolution

Suppose X1(k) = DFT{x1(n)} and X2(k) = DFT{x2(n)}, and let

X3(k) = X1(k) X2(k), k = 0, 1, 2, . . . . ., N-1. Taking the IDFT, we get

x3(m) = (1/N) Σ_{k=0}^{N-1} X1(k) X2(k) e^{j2πkm/N}
      = (1/N) Σ_{k} Σ_{n} Σ_{l} x1(n) x2(l) e^{j2πk(m-n-l)/N}

The inner sum over k has the form

Σ_{k=0}^{N-1} a^k = N for a = 1, and (1 - a^N)/(1 - a) for a ≠ 1

where a = e^{j2π(m-n-l)/N}. Here a = 1 when m-n-l = pN (p an integer), and for a ≠ 1 we have
a^N = 1, so the sum is 0. So, the inner sum equals N for l = m - n - pN = ((m-n))_N, and 0
otherwise. So,

x3(m) = Σ_{n=0}^{N-1} x1(n) x2((m-n))_N, m = 0, 1, 2, . . ., N-1

This expression is termed circular convolution. ⇒ Multiplication of 2 DFT sequences
is equivalent to circular convolution of the 2 sequences in the time domain.

Example: Using circular convolution, determine the response of an LTI system with
x(n) = {2 1 2 1} and h(n) = {1 2 3 4}. Check your results using the DFT.

Answer: Using circular convolution, we get y(n) = {14 16 14 16}.


Using DFT, X(k) = {6 0 2 0} and H(k) = {10 -2+j2 -2 -2-j2}. So,
Y(k) = {60 0 -4 0} and using IDFT we get, y(n) = {14 16 14 16}
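
The equivalence is easy to reproduce (added sketch), computing the circular convolution
both directly and through the DFT:

```python
import numpy as np

x = np.array([2, 1, 2, 1])
h = np.array([1, 2, 3, 4])
N = len(x)
y_direct = np.array([sum(x[n] * h[(m - n) % N] for n in range(N))
                     for m in range(N)])                      # circular conv.
y_dft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))   # via DFT
print(y_direct)              # [14 16 14 16]
print(np.round(y_dft, 10))   # [14. 16. 14. 16.]
```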

3.7 DFT Applications
● Filtering
● Filtering of long data sequences: overlap-save method and overlap-add method
● Frequency analysis of signals
● …

Chapter 6
Filter Design
Filters are signal conditioners. A filter functions by accepting an input signal, blocking
pre-specified frequency components, and passing the original signal minus those
components to the output. For example, a typical phone line acts as a filter that limits
frequencies to a range considerably smaller than the range of frequencies human beings
can hear. That is why listening to CD-quality music over the phone is not as
pleasing to the ear as listening to it directly.

A digital filter takes a digital input, gives a digital output, and consists of digital
components. In a typical digital filtering application, software running on a digital signal
processor (DSP) reads input samples from an A/D converter, performs the mathematical
manipulations dictated by theory for the required filter type, and outputs the result via a
D/A converter.

An analog filter, by contrast, operates directly on the analog inputs and is built entirely
with analog components, such as resistors, capacitors, and inductors.

There are many filter types, but the most common are low pass, high pass, band pass, and
band stop. A low pass filter allows only low frequency signals (below some specified
cutoff) through to its output, so it can be used to eliminate high frequencies. A low pass
filter is handy, in that regard, for limiting the uppermost range of frequencies in an audio
signal; it's the type of filter that a phone line resembles. A high pass filter does just the
opposite, by rejecting only frequency components below some threshold. An example
high pass application is cutting out the audible 50 Hz AC power "hum", which can be
picked up as noise accompanying almost any signal in our country.

The designer of a cell phone or any other sort of wireless transmitter would typically place
an analog band pass filter in its output RF stage, to ensure that only output signals within
its narrow, government-authorized range of the frequency spectrum are transmitted.
Engineers can use band stop filters, which pass both low and high frequencies, to block a
predefined range of frequencies in the middle.

1. Analog Filter
Filters of continuous-time signals are frequently referred to as analog filters. Let us
consider a distortionless transmission system, one that passes any signal with no
change in general wave shape except possible amplification and time delay.
The output of such a system is y(t) = K x(t - t_d), where x(t) is the input signal. Taking the
Fourier transform,

Y(ω) = K e^{-jωt_d} X(ω)

The response of the system is

H(ω) = K e^{-jωt_d}, i.e., |H(ω)| = K and ∠H(ω) = -ωt_d

So, the magnitude spectrum should be constant and the phase spectrum should be linear
(linear phase) for no or very little distortion. But for a practical system the magnitude
spectrum decreases beyond some frequency, and the region where it remains constant is
known as the bandwidth of the system. Beyond the bandwidth, the phase spectrum flattens
to become constant.

Since high-pass, band-pass and band-stop filters can be obtained by a suitable frequency
transformation and combination of low-pass filters, only the LPF is considered here. For
an R-C LPF,

H(ω) = 1/(1 + jωRC) = 1/(1 + jω/ω_c)

where ω_c = 1/RC is known as the cut-off frequency. This is the equation for approximating
an ideal LPF by a first-order system. For a second-order system the cut-off is sharper.
From the filter responses, it is observed that:
● The response deviates markedly from an ideal response
● The higher the order of the filter, the sharper the cut-off and the narrower the transition region
● By a suitable choice of coefficients in the frequency response function, a narrower
transition region can be obtained at the expense of peaks (ripple) occurring in the
pass band as well as in the stop band

So, filter design is a compromise (trade-off) between sharp cut-off and distortion.

Butterworth Filter: The magnitude response is

|H(ω)|^2 = 1 / [1 + (ω/ω_c)^{2N}]

where N is the order of the filter.

Chebyshev Filter: For a given order, the Chebyshev filter has a higher rate of cut-off than
the corresponding Butterworth filter. However, instead of the gain falling monotonically,
there are ripples in the response. The form of the filter determines whether these ripples
occur in the pass or stop band. The equation of the form having pass-band ripple is

|H(ω)|^2 = 1 / [1 + ε^2 T_n^2(ω/ω_c)]

where n is the order of the filter, T_n is the nth-order Chebyshev polynomial and ε is the
parameter that controls the amount of ripple in the pass band. The ripple has a maximum
value of 1 and a minimum value of 1/sqrt(1 + ε^2). The values of T_n are:

n   T_n(x = ω/ω_c)
0   1
1   x
2   2x^2 - 1
3   4x^3 - 3x
4   8x^4 - 8x^2 + 1
5   16x^5 - 20x^3 + 5x
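
For a concrete comparison of the two families, here is a sketch using SciPy's analog
design routines (added for illustration; the order N = 4, cutoff 1 rad/s and 1 dB ripple
are assumed choices):

```python
import numpy as np
from scipy import signal

N, wc = 4, 1.0
b_b, a_b = signal.butter(N, wc, analog=True)       # Butterworth LPF
b_c, a_c = signal.cheby1(N, 1, wc, analog=True)    # Chebyshev I, 1 dB ripple
w = np.logspace(-1, 1, 5)
_, H_b = signal.freqs(b_b, a_b, worN=w)
_, H_c = signal.freqs(b_c, a_c, worN=w)
for wi, hb, hc in zip(w, abs(H_b), abs(H_c)):
    print(f"w={wi:6.3f}  |H_butter|={hb:.3f}  |H_cheby|={hc:.3f}")
# The Chebyshev response falls off faster beyond wc, at the cost of
# pass-band ripple.
```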

Fig. 1: Frequency responses of some classical analog filters;
(a) Butterworth, (b) Chebyshev Type I (c) Chebyshev type II, (d) Elliptic

2. Digital Filter
A digital filter is a mathematical algorithm, implemented in hardware and/or software, that
operates on a digital input signal to produce a digital output signal in order to achieve a
filtering objective. Application areas include: data compression, biomedical signal
processing, speech processing, image processing, data transmission, digital audio,
telephone echo cancellation, and so on.

Types of digital filters: Broadly 2 classes:

● Infinite impulse response (IIR) filter
● Finite impulse response (FIR) filter
The input and output signals of the filter are related by the convolution sum as,

y(n) = Σ_{k=0}^{∞} h(k) x(n-k)   for IIR,   (1)

y(n) = Σ_{k=0}^{N-1} h(k) x(n-k)   for FIR,   (2)

In practice, it is not feasible to compute the output of the IIR filter using equation (1), as
the impulse response is of infinite duration. So the IIR filter equation is expressed in a
recursive form as,

y(n) = Σ_{k=0}^{N} ak x(n-k) - Σ_{k=1}^{M} bk y(n-k)   (3)

where ak and bk are the coefficients of the filter and (M, N) is the filter order. For an IIR filter,

y(n) is a function of past outputs as well as present and past input samples. If bk = 0, the
IIR filter becomes an FIR one. The alternative representations of the filters in terms of
the z-transform are:

for FIR,   H(z) = Σ_{k=0}^{N-1} h(k) z^{-k}

for IIR,   H(z) = Σ_{k=0}^{N} ak z^{-k} / (1 + Σ_{k=1}^{M} bk z^{-k})

3. Choice between IIR and FIR filters


● FIR filters can have an exactly linear phase response, so no phase distortion is
introduced by the filter. The phase response of an IIR filter is nonlinear, especially
at the band edges.
● FIR filters are always stable; IIR filters may not be, because of the recursion.
● Quantization noise is much less in FIR filters.
● FIR requires a larger number of coefficients for a sharp cut-off.
● Analog filters can be readily transformed into equivalent IIR filters.
● In general, FIR is algebraically more difficult to synthesize.
So, the broad guideline for choosing is:
● Use IIR when the only important requirement is a sharp cut-off.
● Use FIR if the number of filter coefficients is not too large and if little or no phase
distortion is desired.

Example: Given the following two equations which meet identical amplitude and
frequency response specifications:

1.

2.
Here, 1 is an IIR filter and 2 is an FIR filter. Their block diagrams are shown in Fig. 2.


From examination:

                                  FIR    IIR
Number of multiplications          12      5
Number of additions                11      4
Storage locations                  24      8
(coefficients and data)

Fig. 2: Block diagram of (a) IIR and (b) FIR filters

So, the IIR filter is more economical. In general, for the same amplitude response
specifications, the number of FIR filter coefficients ≈ 6 × the order of the IIR transfer function.

4. Filter Design Steps


The design of a digital filter involves mainly 5 steps. The steps are not necessarily
independent, nor are they performed in the order given. There may be iterations to arrive at an efficient filter.
1. Specification of the filter requirements:
● Signal characteristics: types of signal source and sink, I/O interface, data
rate and width, highest frequency of interest.
● Filter characteristics: desired amplitude and/or phase responses and their
tolerances, speed of operation and modes of filtering (real time or batch).
● The manner of implementation
● Cost, etc
2. Calculation of suitable filter coefficients: Calculation of IIR filter coefficients is
traditionally based on the transformation of known analog filter characteristics into
an equivalent digital filter. Impulse invariant and bilinear transformations are the
two basic methods; the other is the pole-zero placement method. The methods used
for FIR filters are the window method, the frequency sampling method and the optimal method.
3. Representation of the filter by a suitable structure (Realization): It involves
converting a given transfer function, H(z), into a suitable filter structure. For IIR
filters, the structures commonly used are direct form, cascade form and parallel
form. The most widely used structure for FIR is the direct form. Two other FIR
structures are the frequency sampling structure and the fast convolution technique.
The lattice structure may be used to represent FIR as well as IIR filters.
4. Analysis of the effects of finite word length on filter performance: The use of a
limited number of bits may degrade the performance of the filter and, in some
cases, make it unusable. The main sources of performance degradation in digital
filters are:
● Input/output signal quantization (ADC noise)
● Coefficient quantization
● Arithmetic round-off errors
● Overflow

5. Implementation of the filter in software and/or hardware: Examination of the
difference equations shows that the computation of y(n) involves only
multiplications, additions/subtractions, and delays. To implement a filter, we need
the following basic building blocks (a sketch in C follows this list):
● Memory for storing filter coefficients
● Memory for storing the present and past inputs and outputs
● Hardware or software multiplier(s)
● Adder or arithmetic logic unit
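As a minimal illustration of these building blocks, the C sketch below realizes one
second-order section of equation (3); the function name and the placeholder coefficient
values are assumptions for illustration, not taken from these notes:

/* One second-order section of y(n) = sum a_k x(n-k) - sum b_k y(n-k).
   Coefficient values below are placeholders; a real design supplies them. */
static const double a[3] = {1.0, 0.0, 0.0};  /* feedforward a0, a1, a2       */
static const double b[2] = {0.0, 0.0};       /* feedback b1, b2              */
static double xd[3];                         /* memory: x(n), x(n-1), x(n-2) */
static double yd[2];                         /* memory: y(n-1), y(n-2)       */

double filter_sample(double x)
{
    double y;
    xd[2] = xd[1];  xd[1] = xd[0];  xd[0] = x;          /* delays           */
    y = a[0]*xd[0] + a[1]*xd[1] + a[2]*xd[2]            /* multiplies, adds */
      - b[0]*yd[0] - b[1]*yd[1];
    yd[1] = yd[0];  yd[0] = y;                          /* update outputs   */
    return y;
}

Coefficient memory, data memory, a multiplier and an adder are all visible here, one line each.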

5. Frequency Response of Filter


Simple filters are usually defined by their responses to the individual frequency
components that constitute the input signal. A filter's response to a given frequency is
characterized as falling in the pass band, the transition band, or the stop band. The pass
band response is the filter's effect on frequency components that are passed through
(mostly) unchanged.

Frequencies within a filter's stop band are, by contrast, highly attenuated. The transition
band represents frequencies in the middle, which may receive some attenuation but are not
removed completely from the output signal.

In the following figure, which shows the frequency response of a low pass filter, ωp is the
pass band ending frequency, ωs is the stop band beginning frequency, and As is the amount
of attenuation in the stop band. Frequencies between ωp and ωs fall within the transition
band and are attenuated to some lesser degree.

Fig. 3: Response of a low pass filter to various input frequencies

Given these individual filter parameters, one of numerous filter design software packages
can generate the required signal processing equations and coefficients for implementation
on a DSP. Before we can talk about specific implementations, however, some additional
terms need to be introduced. Ripple is usually specified as a peak-to-peak level in
decibels. It describes how little or how much the filter's amplitude varies within a band.
Smaller amounts of ripple represent more consistent response and are generally preferable.
Transition bandwidth describes how quickly a filter transitions from a pass band to a stop
band, or vice versa. The more rapid this transition, the narrower the transition bandwidth,
and the more difficult the filter is to achieve. Though an almost instantaneous transition to
full attenuation is typically desired, real-world filters don't often have such ideal frequency
response curves. There is, however, a tradeoff between ripple and transition bandwidth, so
that decreasing either will only serve to increase the other.

6. FIR Filter Design


A finite impulse response (FIR) filter is a filter structure that can be used to implement
almost any sort of frequency response digitally. An FIR filter is usually implemented by
using a series of delays, multipliers, and adders to create the filter's output. Figure below
shows the basic block diagram for an FIR filter of length N. The delays result in operating
on prior input samples. The hk values are the coefficients used for multiplication, so that
the output at time n is the summation of all the delayed samples multiplied by the
appropriate coefficients.

Fig. 4: Logical structure of an FIR filter

The process of selecting the filter's length and coefficients is called filter design. The goal
is to set those parameters such that certain desired stop band and pass band parameters will
result from running the filter. Most engineers utilize a program such as MATLAB to do
their filter design. But whatever tool is used, the results of the design effort should be the
same:
▪ A frequency response plot, like the one shown in the figure above, which verifies that
the filter meets the desired specifications, including ripple and transition bandwidth.
▪ The filter's length and coefficients.
The longer the filter (more taps), the more finely the response can be tuned. With the
length, N, and coefficients, float h[N] = { ... }, decided upon, the implementation of the
FIR filter is fairly straightforward. Listing 1 shows how it could be done in C. Running
this code on a processor with a multiply-and-accumulate instruction (and a compiler that
knows how to use it) is essential to achieving a large number of taps.
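Listing 1 itself is not reproduced in these notes, but a minimal C sketch of such a routine
is given below; the function name fir_filter and the shift-register delay line are
illustrative assumptions:

#define N 53                     /* filter length (number of taps)      */
float h[N] = {0};                /* designed coefficients go here       */
static float z[N];               /* delay line: z[k] holds x(n-k)       */

float fir_filter(float x)
{
    int k;
    float y = 0.0f;

    /* shift the delay line and insert the newest input sample */
    for (k = N - 1; k > 0; k--)
        z[k] = z[k - 1];
    z[0] = x;

    /* y(n) = sum of h[k] * x(n-k): one multiply-accumulate per tap */
    for (k = 0; k < N; k++)
        y += h[k] * z[k];

    return y;
}

On a DSP, the shift and the multiply-accumulate loop typically collapse into a single
hardware-assisted loop, which is why the MAC instruction matters for long filters.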

Key features of FIR filters are:

● Basic equations:

y(n) = Σ_{k=0}^{N-1} h(k) x(n-k)   (1)

H(z) = Σ_{k=0}^{N-1} h(k) z^{-k}   (2)

h(k), k = 0, 1, 2, ........., N-1, are the impulse response (coefficients) of the filter, H(z)
is the transfer function of the filter, and N is the filter length.
● FIR filters can have exactly linear phase response

● Very simple to implement

Linear phase response: When a signal passes through a filter, it is modified in amplitude
and/or phase. Let us assume the signal consists of several frequency components; two
delay measures are then of interest.

Phase delay: the amount of time delay each frequency component suffers, Tp = -θ(ω)/ω

Group delay: the average time delay the composite signal suffers at each frequency,
Tg = -dθ(ω)/dω

A filter with a nonlinear phase characteristic will cause phase distortion in the signal.
This is because the frequency components in the signal will each be delayed by an amount
not proportional to the frequency, thereby altering their harmonic relationships. A filter is
said to have a linear phase response if its phase response satisfies one of the following
relationships:

θ(ω) = -αω   (3)
θ(ω) = β - αω   (4)

where α and β are constants. Satisfying (3) implies constant group delay and constant
phase delay responses, and the impulse response must have positive symmetry. So,

h(n) = h(N-1-n),   n = 0, 1, 2, . . ., (N-1)/2 for odd N and n = 0, 1, 2, . . ., (N/2)-1
for even N, with α = (N-1)/2.

When (4) is satisfied, the filter will have a constant group delay only. In this case, the
impulse response has negative symmetry, h(n) = -h(N-1-n).

Types of linear phase filters: Depending on N (even/odd) and h(n) (positive/negative


symmetry):
Type 1: Positive symmetry, N odd
Type 2: Positive symmetry, N even
Type 3: Negative symmetry, N odd
Type 4: Negative symmetry, N even
7. FIR Coefficient Calculation
We will discuss three methods, namely Window method, Optimal method and Frequency
Sampling method.

7.1 Window Method


Use is made of the fact that the frequency response of a filter, HD(ω), is related to the
corresponding impulse response, hD(n), by the inverse Fourier transform (IFT). Consider
an ideal LPF with HD(ω) = 1 for |ω| ≤ ωc and 0 otherwise. By IFT,

hD(n) = 2fc sinc(nωc) for n ≠ 0, and hD(0) = 2fc   [sinc(x) = sin(x)/x]

Since hD(n) = hD(-n), the filter has a linear phase response. As n → ∞, hD(n) → 0, so the
filter is not FIR. An obvious solution is to truncate the ideal impulse response by setting
hD(n) = 0 for |n| > M (say). However, this introduces undesirable ripples and overshoots →
Gibbs phenomenon.
The more coefficients that are retained, the closer the filter spectrum is to the ideal
response. Direct truncation of hD(n) is equivalent to multiplying the ideal impulse response
by a rectangular window of the form,
w(n) = 1, n = 0, 1, ........, M-1
= 0, elsewhere.
A practical approach is to multiply hD(n) by a suitable window function, w(n), whose
duration is finite. In this way the resulting impulse response decays smoothly toward zero.
The resulting H(ω) shows that the ripples and overshoots, characteristic of direct truncation,
are much reduced. However, the transition width is wider than for the rectangular case. The
transition width of the filter is determined by the width of the main lobe of the window.
The side lobes produce ripples in both pass and stop bands.

Steps of window method


● Specify the ideal or desired frequency response of the filter, HD(ω).
● Obtain the impulse response, hD(n), by IFT.
● Select a window function that satisfies the pass band or attenuation specifications
and then determine the number of filter coefficients using the appropriate
relationship between the filter length and the transition width, Δf.
● Obtain values of w(n) and the values of actual FIR coefficients by h(n) =
hD(n)w(n).
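These steps translate directly into a few lines of code. The C sketch below computes the
coefficients for the Hamming-window low-pass design worked out in the example later in
this section (N = 53, fc = 1.5/8); the program structure is an illustration, not part of the
original notes:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979;
    const int    N  = 53;             /* filter length                  */
    const double fc = 1.5 / 8.0;      /* normalized cut-off, Fp/Fs      */
    const int    M  = (N - 1) / 2;    /* compute h(0)..h(M); h(-n)=h(n) */

    for (int n = 0; n <= M; n++) {
        /* ideal LPF: hD(n) = 2 fc sinc(n wc), hD(0) = 2 fc */
        double hD = (n == 0) ? 2.0 * fc
                             : sin(2.0 * PI * fc * n) / (PI * n);
        /* Hamming window, centered at n = 0 */
        double w = 0.54 + 0.46 * cos(2.0 * PI * n / N);
        printf("h(%2d) = %9.5f\n", n, hD * w);
    }
    return 0;
}

Its output reproduces the coefficient table of the example (h(0) = 0.375, h(1) = 0.29313, and so on).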

Ideal impulse responses

Filter type    hD(n), n ≠ 0                          hD(0)
Low pass       2fc sinc(nωc)                         2fc
High pass      -2fc sinc(nωc)                        1 - 2fc
Band pass      2f2 sinc(nω2) - 2f1 sinc(nω1)         2(f2 - f1)
Band stop      2f1 sinc(nω1) - 2f2 sinc(nω2)         1 - 2(f2 - f1)

Important features of common window functions

Name of       Transition   PB ripple   Main lobe relative   Maximum SB        Window function w(n)
window        width (Hz)   (dB)        to side lobe (dB)    attenuation (dB)
Rectangular   0.9/N        0.7416      13                   21                1
Hanning       3.1/N        0.0546      31                   44                0.5 + 0.5 cos(2πn/N)
Hamming       3.3/N        0.0194      41                   53                0.54 + 0.46 cos(2πn/N)
Blackman      5.5/N        0.0017      57                   74                0.42 + 0.5 cos(2πn/N) + 0.08 cos(4πn/N)
Kaiser        2.93/N       0.0274      -                    50                I0{β[1 - (2n/(N-1))^2]^{1/2}}/I0(β)
(β = 4.54)
Kaiser        4.32/N       0.00275     -                    70
(β = 6.76)
Kaiser        5.71/N       0.000275    -                    90 *
(β = 8.96)

* I0(x) is the zero-order modified Bessel function of the first kind. When β = 0, the Kaiser
window corresponds to the rectangular window. β is determined by the stop band
attenuation requirements. The window functions are given for -(N-1)/2 ≤ n ≤ (N-1)/2.

Example: Obtain the coefficients of an FIR LPF to meet the following specifications:
Fp = 1.5 kHz, ΔF = 0.5 kHz, SB attenuation > 50 dB, Fs = 8 kHz.

Solution: For the stop band attenuation requirement, a Hamming, Blackman or Kaiser
window can be used. Consider the Hamming window for simplicity.
Δf = 0.5/8 = 0.0625 = 3.3/N → N = 52.8 ≈ 53
The filter coefficients are h(n) = hD(n) w(n), -26 ≤ n ≤ 26, with

hD(n) = 2fc sinc(nωc),   fc = 1.5/8 = 0.1875,   ωc = 2πfc

and

w(n) = 0.54 + 0.46 cos(2πn/53)

Since h(n) is symmetrical, compute h(0), h(1), ....., h(26) and then use the fact that h(n) =
h(N-1-n) (after shifting the index range to 0 ≤ n ≤ 52) to obtain h(27), h(28), . . . , h(52).
The calculated values are provided below:

n     w(n)       hD(n)       h(n) = hD(n)w(n)      52 - n
0 1 0.375 0.375 52
1 0.99677 0.29408 0.29313 51
2 0.98713 0.11254 0.11109 50
3 0.97121 -0.0406 -0.03943 49
4 0.94924 -0.07958 -0.07554 48
5 0.92153 -0.02436 -0.02245 47
6 0.88846 0.03751 0.03333 46

7 0.85049 0.04201 0.03573 45
8 0.80817 0 0 44
9 0.76208 -0.03268 -0.0249 43
10 0.71288 -0.02251 -0.01605 42
11 0.66125 0.01107 0.00732 41
12 0.60792 0.02653 0.01613 40
13 0.55363 0.00937 0.00519 39
14 0.49915 -0.01608 -0.00803 38
15 0.44525 -0.01961 -0.00873 37
16 0.39268 0 0 36
17 0.34217 0.0173 0.00592 35
18 0.29444 0.0125 0.00368 34
19 0.25016 -0.00641 -0.0016 33
20 0.20995 -0.01592 -0.00334 32
21 0.17437 -0.0058 -0.00101 31
22 0.14392 0.01023 0.00147 30
23 0.11903 0.01279 0.00152 29
24 0.10006 0 0 28
25 0.08725 -0.01176 -0.00103 27
26 0.08081 -0.00866 -0.0007 26

Advantages: Simple to apply and simple to understand


Limitations:
● Lack of flexibility: both peak pass band and stop band ripples are approximately
equal
● Edge frequencies cannot be precisely specified because of the convolution of
HD(ω) and W(ω).
● Maximum ripple amplitude is fixed irrespective of N.

7.2 Optimal Method
This method is very powerful and flexible and also easy to apply. The optimal method is
based on the concept of equiripple pass band and stop band.

Fig. 5: (a) Frequency response of an optimal LPF, (b) Response of error

H(ω) → practical response,   HD(ω) → ideal response,   E(ω) = W(ω)[HD(ω) - H(ω)]


where W(ω) is the weighting function that allows the relative error of approximation
between different bands to be defined. The objective is to determine filter coefficients,
h(n), such that |E(ω)|max is minimized in the pass band and stop band. When max |E(ω)| is
minimized, the resulting filter response will have equiripple pass band and stop band. The
minima and maxima are known as extrema. For a linear phase LPF, the total number of
extrema is r+1 or r+2, where
r = (N+1)/2 for type 1 filters and r = N/2 for type 2 filters.

7.3 Frequency Sampling Method


This method allows us to design nonrecursive FIR filters both as standard frequency
selective filters (LP, HP, BP, BS) and as filters with arbitrary frequency responses. It also
allows a recursive implementation of FIR filters. The following figure shows (a) the
frequency response of an ideal LPF, (b) samples of the ideal LPF, and (c) the frequency
response of the LPF derived from the frequency samples of (b). The coefficients are the
inverse DFT of the samples,

h(n) = (1/N) Σ_{k=0}^{N-1} H(k) e^{j2πkn/N}

where H(k), k = 0, 1, 2, ....., N-1, are the samples of the ideal or target frequency
response. For linear phase filters with positive symmetry this reduces to

h(n) = (1/N){H(0) + 2 Σ_{k=1}^{(N/2)-1} |H(k)| cos[2πk(n - α)/N]}

when N is even, with α = (N-1)/2.


If N is odd, the upper limit of the summation is (N-1)/2. The resulting filter will have a
frequency response that is exactly the same as the original response at the sampling
instants. However, between the sampling instants, the response may be significantly
different. The more samples taken, the less error occurs.
Fig. 6: Frequency sampling concept;
(a) Ideal LPF response, (b) Samples of LPF, (c) Response derived from samples
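As a concrete, purely illustrative sketch of the method, the C program below samples an
ideal LPF at N = 15 points and recovers h(n) with the linear-phase formula above; the
choice of N and of which samples are set to 1 is an assumption for demonstration:

#include <stdio.h>
#include <math.h>

#define N 15   /* number of frequency samples (odd here) */

int main(void)
{
    const double PI = 3.14159265358979;
    const double alpha = (N - 1) / 2.0;   /* linear-phase delay */
    double Hmag[N] = {0.0};

    /* samples of an ideal LPF: the first few |H(k)| set to 1 */
    for (int k = 0; k <= 3; k++)
        Hmag[k] = 1.0;

    /* h(n) = (1/N){H(0) + 2 sum |H(k)| cos[2 pi k (n - alpha)/N]} */
    for (int n = 0; n < N; n++) {
        double s = Hmag[0];
        for (int k = 1; k <= (N - 1) / 2; k++)
            s += 2.0 * Hmag[k] * cos(2.0 * PI * k * (n - alpha) / N);
        printf("h(%2d) = %9.5f\n", n, s / N);
    }
    return 0;
}

The response of the resulting filter matches the target exactly at the N sampling points, as stated above.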

8. IIR Filter Design

Basic equations:

y(n) = Σ_{k=0}^{N} ak x(n-k) - Σ_{k=1}^{M} bk y(n-k),   H(z) = Σ_{k=0}^{N} ak z^{-k} / (1 + Σ_{k=1}^{M} bk z^{-k})
The strength of IIR filters comes from the flexibility the feedback arrangement provides.
Fewer coefficients than FIR are needed for same set of specifications. IIR filters are used
when sharp cut-off and high throughput are the important requirements. The IIR filter can
become unstable if adequate care is not taken in design. For stability, all its poles must lie
inside the unit circle.

9. Calculation of IIR Filter Coefficients


The methods fall mainly into two categories. One is direct calculation based on the
positions of poles and zeros in the z-plane, termed the Pole-Zero Placement (PZP)
method. The other category involves the conversion of analog filters into their equivalent
digital ones; this approach is the most successful for calculating IIR filter coefficients,
since there already exists a wealth of information on analog filters in the literature which
can be utilized. Two methods fall in this category: the Impulse Invariant (II) and Bilinear
Z-Transformation (BZT) methods.

9.1 Pole-Zero Placement Method


The idea is to place poles and zeros judiciously in the z-plane such that the resulting filter
has the desired frequency response. This approach is only useful for very simple filters,
e.g., notch filters, where the filter parameters need not be specified precisely.

When a zero is placed at a given point on the z-plane, the frequency response will be zero
at the corresponding point. A pole, on the other hand, produces a peak at the
corresponding frequency point. Fig. 7(a) shows the pole-zero diagram of a simple filter,
whereas Fig. 7(b) is a sketch of the frequency response.

Fig. 7: (a) Pole-zero diagram of a simple filter, (b) Sketch of frequency response
Poles close to the unit circle give rise to large peaks, zeros close to or on the circle produce
troughs or minima. For the coefficients to be real, the poles and zeros must either be real
or occur in complex conjugate pairs.

Example: BPF, complete signal rejection at dc and 250 Hz, narrow pass band centered at
125 Hz, 3 dB bandwidth of 10 Hz, 500 Hz sampling frequency.

Solution: First place the zeros. Since there is complete rejection at 0 and 250 Hz, the
zeros are placed at angles 0° and 360° × 250/500 = 180°. To have the pass band centered at
125 Hz, the poles should be placed at ±360° × 125/500 = ±90°. For real coefficients, the
poles must occur as a complex conjugate pair.
The radius r of the poles is determined by the desired bandwidth. For r > 0.9,
r ≅ 1 - (BW/Fs)π = 1 - 10π/500 = 0.937
For complete rejection, the radius of the zeros is 1. So,

H(z) = [(z - 1)(z + 1)]/[(z - j0.937)(z + j0.937)] = (1 - z^{-2})/(1 + 0.878 z^{-2})

So, the difference equation is

y(n) = x(n) - x(n-2) - 0.878 y(n-2)

Comparing H(z) with the general IIR transfer function, the filter is a 2nd order section with,
a0 = 1,  a1 = 0,  a2 = -1,  b1 = 0,  b2 = 0.878
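A direct C rendering of this difference equation is sketched below; the function name and
state-variable names are illustrative:

/* Band-pass section from the example: y(n) = x(n) - x(n-2) - 0.878 y(n-2).
   With Fs = 500 Hz this nulls dc and 250 Hz and peaks at 125 Hz. */
static double xm1, xm2;   /* x(n-1), x(n-2) */
static double ym1, ym2;   /* y(n-1), y(n-2) */

double bpf125(double x)
{
    double y = x - xm2 - 0.878 * ym2;
    xm2 = xm1;  xm1 = x;     /* update input delays  */
    ym2 = ym1;  ym1 = y;     /* update output delays */
    return y;
}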

9.2 Impulse Invariant Method


In this method, starting with a suitable analog transfer function, H(s), the impulse
response, h(t), is obtained using the inverse Laplace transform (ILT). The h(t) so obtained
is sampled to obtain h(nT), and then the desired transfer function, H(z), is obtained by
z-transformation of h(nT). Let

H(s) = Σ_{k=1}^{M} Ck/(s - pk)

Taking the ILT,

ha(t) = Σ_{k=1}^{M} Ck e^{pk t},   t ≥ 0

Sampling ha(t) periodically at t = nT,

h(nT) = Σ_{k=1}^{M} Ck e^{pk nT}

So,

H(z) = Σ_{n=0}^{∞} h(nT) z^{-n} = Σ_{k=1}^{M} Ck Σ_{n=0}^{∞} (e^{pk T} z^{-1})^n

Since |e^{pk T} z^{-1}| < 1 for stability, the inner sum converges to 1/(1 - e^{pk T} z^{-1}).
So, we have,

H(z) = Σ_{k=1}^{M} Ck/(1 - e^{pk T} z^{-1})

Example: H(s) = 1/(s^2 + √2 s + 1), where H(s) is a normalized (2nd order Butterworth)
LPF; 3 dB cut-off at 150 Hz; Fs = 1.28 kHz.

Solution: To frequency scale H(s), put s/α for s, where α = 2π × 150 = 942.48. Then,

H(s) = α^2/(s^2 + √2 αs + α^2) = C1/(s - p1) + C2/(s - p2)

Solving, p1 = -666.43(1 - j), p2 = p1*. The poles are complex conjugates;
C1 = -j666.43, C2 = C1*.
So,

H(z) = C1/(1 - e^{p1 T} z^{-1}) + C2/(1 - e^{p2 T} z^{-1})

If we put z = e^{jωT}, then H(z) at ω = 0 is 1223. To keep this high gain down and to avoid
overflow, it is common practice to multiply H(z) by T (= 1/Fs). The other way to remove
the effect of the sampling frequency is to use T = 1 and α = 2π × 150/1280 = 0.74. When
H(z) is multiplied by T,

T H(z) = 0.31 z^{-1}/(1 - 1.03 z^{-1} + 0.35 z^{-2})

Comparing, we get a0 = 0, a1 = 0.31, b1 = -1.03, b2 = 0.35.
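A quick numerical check of this result can be scripted; the short C program below
(illustrative, using C99 complex arithmetic) evaluates T·H(z) on the unit circle and
confirms a dc gain near 1 (it would be about 1223 without the multiplication by T):

#include <stdio.h>
#include <complex.h>

int main(void)
{
    const double a1 = 0.31, b1 = -1.03, b2 = 0.35;
    const double PI = 3.14159265358979;

    for (int i = 0; i <= 4; i++) {
        double w = i * PI / 4.0;                  /* 0 .. pi              */
        double complex z1 = cexp(-I * w);         /* z^-1 on unit circle  */
        double complex H = a1 * z1 / (1.0 + b1 * z1 + b2 * z1 * z1);
        printf("w = %5.3f rad : |T*H| = %.4f\n", w, cabs(H));
    }
    return 0;
}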

9.3 Bilinear z-transform (BZT) Method


It requires the mapping of H(s) from the s-plane to the z-plane. Consider an analog linear
filter with transfer function

H(s) = b/(s + a)

This equation comes from the differential equation

dy(t)/dt + a y(t) = b x(t)   (1)

Using the trapezoidal formula of approximation,

y(t) = ∫[t0 to t] y′(τ) dτ + y(t0)   (2)

where y′(τ) = dy(τ)/dτ. Setting t = nT and t0 = nT - T gives,

y(nT) = (T/2)[y′(nT) + y′(nT - T)] + y(nT - T)   (3)

At t = nT, (1) yields,

y′(nT) = -a y(nT) + b x(nT)   (4)

Putting (4) into (3) and defining x(n) = x(nT) and y(n) = y(nT),

(1 + aT/2) y(n) - (1 - aT/2) y(n-1) = (bT/2)[x(n) + x(n-1)]

Taking the z-transform on both sides,

H(z) = Y(z)/X(z) = b/[(2/T)(1 - z^{-1})/(1 + z^{-1}) + a]

Comparing with the original equation of H(s), we get,

s = (2/T)(1 - z^{-1})/(1 + z^{-1})   → Bilinear transformation

Since z = re^{jω} and s = σ + jΩ,

σ = (2/T)(r^2 - 1)/(1 + r^2 + 2r cos ω)  and  Ω = (2/T)(2r sin ω)/(1 + r^2 + 2r cos ω)

If r < 1, σ < 0 and if r > 1, σ > 0. For the limiting case (r = 1), we have σ = 0 and

Ω = (2/T) tan(ω/2)
The steps are:

● Determine the cut-off frequency (or PB edge frequency), ωc, of the digital filter.

● Obtain the equivalent analog filter cut-off frequency (Ω) using the prewarping
relation Ω = (2/T) tan(ωc/2).

● Determine the analog transfer function H(s). If it is normalized, denormalize it by
replacing s by s/Ω.

● Apply the BZT to obtain H(z) by putting s = (2/T)(1 - z^{-1})/(1 + z^{-1}).

Example: Design a single-pole low-pass digital filter with a 3-dB bandwidth of 0.2π using
the BZT applied to the analog filter H(s) = Ωc/(s + Ωc), where Ωc is the 3-dB cut-off
frequency.

Solution: Given, ωc = 0.2π. Prewarping, Ωc = (2/T) tan(0.1π) = 0.65/T.
For the analog filter, H(s) = Ωc/(s + Ωc). Applying the BZT,

H(z) = 0.245(1 + z^{-1})/(1 - 0.509 z^{-1})

So, a0 = a1 = 0.245 and b1 = -0.509.

The frequency response of the digital filter is

H(ω) = 0.245(1 + e^{-jω})/(1 - 0.509 e^{-jω})

We can check the answer: |H(0)| = 1 and |H(0.2π)| = 0.707.
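This check is easy to automate; a short, illustrative C99 program evaluating H on the unit circle:

#include <stdio.h>
#include <complex.h>

int main(void)
{
    const double PI = 3.14159265358979;
    const double w[2] = {0.0, 0.2 * PI};   /* dc and the 3-dB point */

    for (int i = 0; i < 2; i++) {
        double complex z1 = cexp(-I * w[i]);               /* z^-1  */
        double complex H = 0.245 * (1.0 + z1) / (1.0 - 0.509 * z1);
        printf("w = %.3f rad : |H| = %.4f\n", w[i], cabs(H));
    }
    return 0;
}

It prints |H| ≈ 1.0 at dc and ≈ 0.707 at ω = 0.2π, as required.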
10. Some Simple Digital Filters
1. One zero at the origin and one positive real pole makes a low pass filter:

H(z) = z/(z - p) = 1/(1 - p z^{-1}),   0 < p < 1

The steady-state frequency response is

H(ω) = e^{jω}/(e^{jω} - p),   with magnitude M(ω) = 1/(1 - 2p cos ω + p^2)^{1/2}

as shown in the diagram for the case p = 0.9.

The digital frequency is the angle ω of z = e^{jω} on the unit circle. As the frequency
increases, M(ω) decreases, hence it is a low pass filter.

The unit sample response of this filter is:

h(n) = p^n u(n)

The time domain DE that gives this response can be deduced as follows:

Y(z)(1 - p z^{-1}) = X(z)

Taking the inverse z-transform and using the shifting theorem:

y(n) = p y(n-1) + x(n)

Note: The magnitude response doesn't look much like that of the ideal low pass
filter (M is not constant over any finite range).

2. One zero at the origin and one negative real pole makes a high pass filter:

H(z) = z/(z + 0.9) = 1/(1 + 0.9 z^{-1})

Similarly, for the case p = -0.9 we see from the diagram that a high pass filter results.
The unit impulse response is h(n) = (-0.9)^n u(n).
The DE giving the same filtering effect as H(z) is:

y(n) = -0.9 y(n-1) + x(n)
3. A pair of zeros at the origin and a complex conjugate pair of poles makes a band
pass filter.

Let the poles be at, e.g., 0.9e^{jπ/2} and 0.9e^{-jπ/2} (r = 0.9, θ = π/2). Then

H(z) = z^2/[(z - 0.9e^{jπ/2})(z - 0.9e^{-jπ/2})] = z^2/(z^2 + 0.81) = 1/(1 + 0.81 z^{-2})

(The coefficient of z in the denominator is -2r cos θ, which is 0 for θ = π/2.)

From the diagram: as ω goes from 0 to π, M(ω) rises to a maximum at ω = π/2 and is a
minimum at ω = 0 and ω = π.

The unit sample response h(n) is found by inverting the z-transform:

h(n) = (0.9)^n cos(nπ/2) u(n)

The DE which would give the same band pass filter response is obtained from
Y(z)(1 + 0.81 z^{-2}) = X(z). Taking the inverse z-transform, applying the shifting theorem:

y(n) = -0.81 y(n-2) + x(n)
4. A band rejection filter can be made using:

o A pair of complex conjugate zeros on the unit circle, at e^{±jω0},

o together with a pair of complex conjugate poles of magnitude < 1 at the
same phase, i.e., at re^{±jω0} with r < 1.

e.g., let the zeros be at e^{±jπ/2} and the poles at 0.9e^{±jπ/2}. Then

H(z) = (1 + z^{-2})/(1 + 0.81 z^{-2})

We see from the diagram that M(ω) ≅ 1 away from ω0 and goes to zero at ω = ω0 = π/2.
By having the poles close to the zeros, the ratios of the zero-vector and pole-vector
lengths don't vary much with frequency except near ω0.

The unit impulse response is found by inverting the z-transform via partial fractions
(e.g., MATLAB's residuez gives r = -0.1173, -0.1173 at p = 0 + 0.9i, 0 - 0.9i, with
direct term 1.2346):

h(n) = 1.2346 δ(n) - 0.2346 (0.9)^n cos(nπ/2) u(n)

The DE to apply to get the same band rejection filter is obtained from
Y(z)(1 + 0.81 z^{-2}) = X(z)(1 + z^{-2}). Inverting the z-transform:

y(n) = -0.81 y(n-2) + x(n) + x(n-2)
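A quick impulse test confirms the h(n) just derived; the following illustrative C program
runs a unit impulse through the difference equation:

#include <stdio.h>

int main(void)
{
    double xm1 = 0, xm2 = 0, ym1 = 0, ym2 = 0;

    for (int n = 0; n < 8; n++) {
        double x = (n == 0) ? 1.0 : 0.0;          /* unit impulse  */
        double y = -0.81 * ym2 + x + xm2;         /* the filter DE */
        printf("h(%d) = %7.4f\n", n, y);
        xm2 = xm1;  xm1 = x;
        ym2 = ym1;  ym1 = y;
    }
    return 0;
}

The printed values (1, 0, 0.19, 0, -0.1539, ...) agree with 1.2346 δ(n) - 0.2346 (0.9)^n cos(nπ/2) u(n).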
