Design of Digital Filters

Digital Filter
• A digital filter is a mathematical algorithm implemented in
hardware/software that operates on a digital input to produce a digital
output.
• Digital filters are preferred in a number of applications like data
compression, speech processing, image processing, etc., because of the
following advantages.
1. Digital filters can have characteristics which are not possible with analog
filters such as linear phase response.
2. The performance of digital filters does not vary with environmental
changes, for example, thermal variations.
3. The frequency response of a digital filter can be adjusted if it is
implemented using a programmable processor.
4. Several input signals can be filtered by one digital filter without the need
to replicate the hardware.
5. Digital filters can be used at very low frequencies.
• The following are the main disadvantages of digital filters compared
with analog filters:
(i) Speed limitation
(ii) Finite word length effects
(iii) Long design and development times
• Digital filters are classified either as
1. finite duration impulse response (FIR) filters
2. infinite duration impulse response (IIR) filters,
depending on the form of the impulse response of the system.

FIR: the impulse response has a finite number of non-zero terms.
IIR: the impulse response has an infinite number of non-zero terms.
• FIR filters are usually implemented using structures with no
feedback (non-recursive structures – all zeros).
• Suppose a system has the following difference equation
representation with input x(n) and output y(n):
y(n) = -[a1 y(n-1) + ... + aN y(n-N)] + b0 x(n) + b1 x(n-1) + ... + bM x(n-M)

• An FIR filter of length M is described by the difference equation
y(n) = b0 x(n) + b1 x(n-1) + ... + b(M-1) x(n-M+1)

where {bk} is the set of filter coefficients.

• IIR filters are usually implemented using structures having feedback


(recursive structures – poles and zeros)
• The response of the FIR filter depends only on the present and past input
samples,
• whereas for the IIR filter, the present response is a function of the present
and past M values of the excitation as well as past values of the response.
• FIR filters have the following advantages over IIR filters:
1. They have linear phase characteristics.
2. FIR filters, realised non-recursively are always stable.
3. The design methods are generally linear.
4. They can be realised efficiently in hardware
5. The filter start-up transients have finite duration.
6. They have low sensitivity to finite word-length effects.
• FIR filters are employed in filtering problems where a linear phase
characteristic within the passband of the filter is required.
• If some phase distortion is tolerable, an IIR filter is preferable.
FIR & IIR Filter Design
Window Techniques
• The desired frequency response of any digital filter is periodic in frequency and can
be expanded in a Fourier series, i.e.

• The Fourier coefficients of the series hd(n) are identical to the impulse response of a
digital filter.
• There are two difficulties with this approach to designing a digital filter:
1. The impulse response is of infinite duration and
2. The filter is non-causal and unrealizable.
• No finite amount of delay can make the impulse response realizable. Hence the filter
resulting from a Fourier series representation of H(ejω) is an unrealizable IIR filter.
• The infinite duration impulse response can be converted to a finite duration impulse
response by truncating the infinite series at n= ±M.
• But, this results in undesirable oscillations in the passband and stopband of the digital
filter.
• These undesirable oscillations can be reduced by using a set of time-limited weighting
functions, w(n), referred to as window functions, to modify the Fourier coefficients.

• The desired frequency response H(ejω) and its Fourier coefficients {h(n)} are shown
above
• The finite duration weighting function w(n) and its Fourier transform W(ejω) are shown below:

• The Fourier transform of the weighting function consists of a main lobe, which contains most of
the energy of the window function and side lobes which decay rapidly.
• The sequence ĥ(n) = h(n)·w(n) is obtained to get an FIR approximation of H(ejω). The sequence
ĥ(n) is exactly zero outside the interval –M ≤ n ≤ M. The sequence ĥ(n) and its Fourier transform
Ĥ(ejω) are shown below:
• Ĥ(ejω) is nothing but the circular convolution of H(ejω) and W(ejω).

• The realisable causal sequence g(n), which is obtained by shifting ĥ(n), is shown in
the last row and this can be used as the desired filter impulse response.
• The desirable characteristics of window functions are:
1. The Fourier transform of the window function W(e jω) should have a small width of
main lobe containing as much of the total energy as possible.
2. The Fourier transform of the window function W(ejω) should have side lobes that
decrease in energy rapidly as ω tends to π.
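
To make the procedure concrete, the sketch below truncates the ideal low-pass Fourier coefficients at n = ±M and compares plain truncation with weighting by a window function (the Hamming window, one of the windows defined in the following pages). The cutoff ωc = 0.4π and M = 25 are assumed values, not taken from the notes.

```python
# Minimal NumPy sketch of FIR design by truncation and windowing (assumed values).
import numpy as np

M = 25                       # truncate the infinite series at n = +/-M
wc = 0.4 * np.pi             # assumed low-pass cutoff (rad/sample)

n = np.arange(-M, M + 1)
hd = (wc / np.pi) * np.sinc(wc * n / np.pi)   # Fourier coefficients hd(n) of H(e^jw)

def peak_stopband_db(h):
    """Peak stopband magnitude (dB) of the shifted, causal filter."""
    om = np.linspace(wc + 0.2, np.pi, 512)
    H = np.array([np.sum(h * np.exp(-1j * w0 * np.arange(len(h)))) for w0 in om])
    return 20 * np.log10(np.abs(H).max())

h_trunc = hd                                  # plain truncation (rectangular window)
h_wind = hd * np.hamming(2 * M + 1)           # weighted by a window function w(n)

print("truncation only  :", peak_stopband_db(h_trunc), "dB")  # large ripples (~ -21 dB)
print("with Hamming w(n):", peak_stopband_db(h_wind), "dB")   # much smaller ripples
```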
Rectangular Window Function
• The weighting function for the rectangular window is :

• The spectrum of wR(n) can be obtained by taking Fourier transform

• Substituting n = m - (M - 1)/2 and replacing m by n,


• The transition width of the main lobe is
approximately 4π/M.
• The first sidelobe will be 13 dB down the peak
of the main lobe and the rolloff will be at 20 dB
per decade.
• For a causal rectangular window, the frequency
response will be

• The linear phase response of the causal filter is


given by θ(ω) = ω(M - 1)T/2, and the non-
causal impulse response has a zero phase shift.
Hamming Window Function
• The causal Hamming window function is

• The non-causal Hamming window function is given by

• Non-causal Hamming window function is related to the rectangular window function


as
• The spectrum of Hamming window

• The width of the main lobe is approximately 8π/M and the peak of the first side lobe is
at about -43 dB. The side lobe roll-off is 20 dB/decade.
• For a causal Hamming window, the second and third terms are negative:
Hanning Window Function
• The window function of a causal Hanning window is:

• The window function of a non-causal Hanning window is

• The width of the main lobe is approximately 8π/M and the peak of the first side lobe is
at -32 dB.
Blackman Window Function
• The window function of a causal Blackman window is

• The window function of a non-causal Blackman window is

• The width of the main lobe is approximately 12π/M and the peak of the first side-lobe is
at –58 dB.
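
The main-lobe widths and first side-lobe levels quoted above can be verified numerically; the sketch below, with an assumed length of 51 samples, measures the highest side lobe of each window's spectrum.

```python
# Assumed sketch comparing the side-lobe levels of the four windows discussed above.
import numpy as np

M = 51
windows = {
    "rectangular": np.ones(M),
    "hamming":     np.hamming(M),
    "hanning":     np.hanning(M),
    "blackman":    np.blackman(M),
}

for name, w in windows.items():
    W = np.fft.fft(w, 8192)                                  # zero-padded spectrum W(e^jw)
    mag = 20 * np.log10(np.abs(W) / np.abs(W[0]) + 1e-12)    # normalised magnitude (dB)
    half = mag[:4096]                                        # 0 <= w <= pi
    # first null = first local minimum after the main lobe; the highest value
    # beyond it is the first (largest) side lobe
    nulls = np.where((half[1:-1] < half[:-2]) & (half[1:-1] < half[2:]))[0]
    first_null = nulls[0] + 1
    print(f"{name:12s} highest side lobe ~ {half[first_null:].max():6.1f} dB")
```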
Design of IIR Filter
• The system function describing an analog filter may be written as

• where {ak} and {bk} are the filter coefficients. The impulse response
of the analog filter is related to Ha(s) by the Laplace transform

• The analog filter having the rational system function Ha(s) can also be
described by the LCCDE
• The design techniques for IIR filters are presented with the restriction that the
filters be realisable and stable.
• An analog filter with system function H(s) is stable if all its poles lie in the left-
half of the s-plane.
• As a result, if the conversion techniques are to be effective, the technique
should possess the following properties:
(i) The jΩ axis in the s-plane should map onto the unit circle in the z-plane.
This gives a direct relationship between the two frequency variables in the two
domains.
(ii) The left-half plane of the s-plane should map into the inside of the unit
circle in the z-plane to convert a stable analog filter into a stable digital filter.
Impulse Invariance Method
• The impulse response of the discrete system
(digital filter) is chosen to be the discrete (sampled)
version of the impulse response of the analog system (filter).
• The desired impulse response of the digital filter
is obtained by uniformly sampling the impulse
response of the equivalent analog filter. That is,
h(n) = ha (nT)
where T is the sampling interval.
• Steps:
1. Get H(s) of an analog filter that satisfies the
prescribed magnitude response.
2. Apply the inverse Laplace transform to get the
impulse response h(t).
3. Obtain a discrete version of h(t) by replacing t by
nT i.e. h(nT).
4. Apply the Z-transform to h(nT) to get H(z) and
multiply by T.
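
A minimal sketch applying these four steps to the simple first-order analog filter Ha(s) = 1/(s + a); the values of a and T below are assumed for illustration.

```python
# Impulse invariance for Ha(s) = 1/(s + a):
#   h_a(t) = e^{-a t} u(t)  ->  h(n) = e^{-a n T}  ->  H(z) = T / (1 - e^{-aT} z^{-1})
import numpy as np

a, T = 2.0, 0.1                       # assumed analog pole location and sampling interval

# Steps 2-3: sample the analog impulse response
n = np.arange(50)
h = np.exp(-a * n * T)

# Step 4: digital filter coefficients (scaled by T)
b_d = [T]                             # numerator of H(z)
a_d = [1.0, -np.exp(-a * T)]          # denominator 1 - e^{-aT} z^{-1}

# The analog pole s = -a maps to the digital pole z = e^{sT}
print("digital pole:", np.exp(-a * T), "(inside the unit circle, so stable)")
```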
• The analog pole at s = p_i is mapped into a digital pole at z = e^(p_i T).
• Therefore, the analog poles and the digital poles are related by the
relation z = e^(sT).
• The general characteristic of the mapping z = e^(sT) can be obtained by
substituting s = σ + jΩ and expressing the complex variable z in the
polar form as z = r e^(jω):
r e^(jω) = e^(σT) e^(jΩT)
Therefore, r = e^(σT) and ω = ΩT.
• σ < 0 implies that 0 < r < 1 and σ > 0 implies that r > 1. When σ = 0,
we have r = 1. Therefore, the left-half of s-plane is mapped inside
the unit circle and the right-half of s-plane is mapped into points that
fall outside the unit circle in z plane.
• The mapping ω = ΩT implies that the interval -π/T ≤ Ω ≤ π/T maps
into the corresponding values of -π ≤ ω ≤ π .
• Some of the properties of the impulse invariant transformation are
given below.
Bilinear Transformation
• The IIR filter design using the impulse invariant method is
appropriate for the design of low-pass filters and band pass filters
whose resonant frequencies are low.
• This technique is not suitable for high-pass or band-reject filters.
• This limitation is overcome in the mapping technique called the
bilinear transformation.
• This transformation is a one-to-one mapping from the s-domain to
the z-domain.
• The bilinear transformation is obtained by using the trapezoidal
formula for numerical integration
• Let the system function of the analog filter be
..(1)

• The differential equation describing the analog filter can be obtained
as shown below.

• Taking inverse Laplace transform,

• Integrating between the limits (nT − T) and nT,


• The trapezoidal rule for numerical integration is given by

• Therefore the integration becomes

• Taking z-transform, the system function of the digital filter is

…(2)

• Comparing Eq (1) and (2)


Substituting e^(jω) = cos ω + j sin ω and simplifying, we get

where

If r < 1, then σ < 0, and if r > 1, then σ > 0. Thus, the left-half of the s-
plane maps onto the points inside the unit circle in the z-plane and the
transformation results in a stable digital system. For r = 1, σ is zero. In
this case,
…(3)

…(4)
Warping

• Eq (4) gives the relationship between the frequencies in the two


domains. It can be noted that the entire range in Ω is mapped only
once into the range -π ≤ ω ≤ π.
• However, the mapping is non-linear and the lower frequencies in
analog domain are expanded in the digital domain, whereas the
higher frequencies are compressed.
• The distortion introduced in the frequency scale of the digital filter
relative to that of the analog filter is due to the non-linearity of the
arctangent function.
• This effect of the bilinear transform is usually called frequency warping.
Relationship between ω and Ω as given in Eq. (4)
Pre-warping
• The analog filter is designed to compensate for the frequency warping by
applying Eq. (3) to every critical frequency specification, so that the corner
frequency or center frequency of the digital filter is placed correctly.
• This is called pre-warping the filter design.
• When a digital filter is designed as an approximation of an analog filter, the
frequency response of the digital filter can be made to match the frequency
response of the analog filter by considering the following:

because the factor 2/T cancels between the numerator and denominator while
calculating the order N of the filter and H(z).
Using Eq. (3),

The system function of the digital filter is given by

Simplifying further, we get
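
As a rough illustration of pre-warping followed by the bilinear transform, the sketch below designs a simple first-order low-pass filter; the sampling rate (8 kHz), corner frequency (1 kHz) and the use of scipy.signal.bilinear are assumptions for illustration only.

```python
# Sketch of pre-warping (Eq. (3)) followed by the bilinear transform.
import numpy as np
from scipy import signal

T = 1.0 / 8000.0                       # assumed sampling interval
wd = 2 * np.pi * 1000 * T              # desired digital corner frequency (rad/sample)

# Pre-warp: Omega_c = (2/T) tan(wd/2), so the corner of the digital filter
# lands exactly at wd despite the warping of Eq. (4)
Omega_c = (2.0 / T) * np.tan(wd / 2.0)

# First-order analog prototype Ha(s) = Omega_c / (s + Omega_c)
b_a, a_a = [Omega_c], [1.0, Omega_c]

# Bilinear transform s -> (2/T)(1 - z^-1)/(1 + z^-1); scipy takes fs = 1/T
b_z, a_z = signal.bilinear(b_a, a_a, fs=1.0 / T)

w, H = signal.freqz(b_z, a_z, worN=[wd])
print("gain at the designed corner:", 20 * np.log10(abs(H[0])), "dB (~ -3 dB)")
```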


Butterworth Filters
• The Butterworth low-pass filter has a magnitude response given by

where A is the filter gain and Ωc is the 3 dB cut-off frequency and N is


the order of the filter.
• The magnitude response has
a maximally flat pass-band
and stop-band.
• By increasing the filter order
N, the Butterworth response
approximates the ideal
response.
• The phase response of the
Butterworth filter becomes
more non-linear with
increasing N.
The analog magnitude response of the Butterworth filter with the
design parameters is shown here, where Ω1 = Ωp and Ω2 = Ωs.

Considering the low-pass filter:

…(1)

…(2)

where ε and δ1 are the parameters specifying the allowable passband ripple,
and λ and δ2 are the parameters specifying the allowable stopband attenuation.
Or, ….(3)

….(4)

Dividing (4) by (3) & considering equality

The order of the filter N is given by

The value of N is chosen as the next integer greater than or equal to the
value given by the above equation.
The transfer function of the Butterworth filter is

…(5)

….(6)

The coefficients bk and ck are given by

The parameter Bk can be obtained from

The system function of the equivalent


digital filter is obtained from H(s); using
the specified transformation technique,
viz. impulse invariant technique or
bilinear transformation.
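
Putting the pieces together, the following sketch computes the Butterworth order from assumed passband/stopband specifications, designs the analog filter, and converts it to H(z) with the bilinear transformation. All numerical values are assumed, and scipy is used in place of hand evaluation of the formulas.

```python
# Assumed end-to-end Butterworth IIR design sketch (order, analog design, bilinear).
import numpy as np
from scipy import signal

fs = 8000.0                      # assumed sampling rate (Hz)
fp, fstop = 1000.0, 2000.0       # assumed passband/stopband edge frequencies (Hz)
Ap, As = 1.0, 40.0               # assumed passband ripple and stopband attenuation (dB)

# Pre-warped analog edge frequencies (rad/s), Eq. (3)
Wp = 2 * fs * np.tan(np.pi * fp / fs)
Ws = 2 * fs * np.tan(np.pi * fstop / fs)

# Order N and 3-dB cutoff, then the analog Butterworth transfer function
N, Wc = signal.buttord(Wp, Ws, Ap, As, analog=True)
b_a, a_a = signal.butter(N, Wc, btype='low', analog=True)

# Bilinear transform to obtain the digital filter H(z)
b_z, a_z = signal.bilinear(b_a, a_a, fs=fs)
print("order N =", N)
```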
Poles of Normalised Butterworth
Filter
The Butterworth low-pass filter has a magnitude squared response given
by

For a normalised filter, Ωc=1

The normalised poles in the s-domain can be obtained by substituting


Ω=s/j and equating the denominator polynomial to zero
The poles in the left-half of the s-plane are given by,

The unnormalised poles, s'n, can also be obtained from the normalised poles
Example : Obtain the system functions of normalised Butterworth filters
for order N = 1 and 2
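
A short numerical sketch of this example, using the left-half-plane pole formula for the normalised (Ωc = 1) Butterworth filter:

```python
# Poles and H(s) of the normalised Butterworth filter for N = 1 and N = 2.
import numpy as np

for N in (1, 2):
    k = np.arange(N)
    # left-half-plane poles of the normalised Butterworth filter
    poles = np.exp(1j * np.pi * (2 * k + N + 1) / (2 * N))
    den = np.real(np.poly(poles))          # denominator polynomial of H(s)
    print(f"N={N}: poles = {np.round(poles, 4)}, H(s) = 1 / {np.round(den, 4)}")
# Expected: N=1 -> 1/(s + 1);  N=2 -> 1/(s^2 + 1.4142 s + 1)
```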
Chebyshev Filters
• The Chebyshev low-pass filter has a magnitude response

where A is the filter gain, ε is a constant and Ωc is the cut-off (passband edge) frequency.
• The Chebyshev polynomial of the first kind of Nth order, CN(x), is given by
• The magnitude response of the Chebyshev filter is shown.
• The magnitude response has equiripple passband and maximally flat
stopband.
• By increasing the filter order N, the Chebyshev response approximates the
ideal response.
• The phase response of the Chebyshev filter is more non-linear than the
Butterworth filter for a given filter length N.
• The design parameters of the Chebyshev filter are obtained by
considering the low-pass filter with the desired specifications as
below.

• Assuming Ωc = Ω1, we will have CN(Ωc/Ωc) = CN(1) = 1.

Dividing the stopband relation by the passband relation and assuming equality,

Choose N (the order of the filter) as the next integer greater than or equal to this value.

The transfer function of the Chebyshev filter, written in factored form, is

The coefficients bk and ck


Poles of a Normalised Chebyshev Filter
• The Chebyshev low-pass filter has a magnitude squared response

• For a normalised filter, Ωc = 1. Thus

• The normalised poles in the s-domain can be obtained by


substituting Ω =s/j= - js and equating the denominator polynomial to
zero,

The cosine term in the above equation has a complex argument


• Using trigonometric identities for the imaginary terms and with
minor manipulations, the poles of the normalised low-pass analog
Chebyshev filter are given by

• The unnormalised poles, s’n, can also be obtained from the


normalised poles

• The normalised poles lie on an ellipse in the s-plane and the


equation of the ellipse
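
The ellipse property can be checked numerically. The sketch below uses scipy's normalised Chebyshev-I prototype with an assumed order N = 4 and 1 dB ripple, and the standard semi-axis expressions sinh(u) and cosh(u) with u = asinh(1/ε)/N.

```python
# Check that the normalised Chebyshev-I poles lie on an ellipse (assumed N, ripple).
import numpy as np
from scipy import signal

N, rp = 4, 1.0
eps = np.sqrt(10 ** (0.1 * rp) - 1)          # ripple parameter epsilon
_, poles, _ = signal.cheb1ap(N, rp)          # normalised analog prototype poles

u = np.arcsinh(1.0 / eps) / N
a_ax, b_ax = np.sinh(u), np.cosh(u)          # minor and major semi-axes of the ellipse
check = (poles.real / a_ax) ** 2 + (poles.imag / b_ax) ** 2
print("ellipse equation value for each pole (should be ~1):", np.round(check, 4))
```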
Properties of Butterworth Approximation
Butterworth Filter Specifications
Elliptic Filters
• Square Magnitude Response Function for Elliptic Filters:

• Properties of the Rational Function Rn(ω):


1. Rn(ω) is an even function for n even and an odd function for n odd.
2. The zeros of Rn(ω) are in the range |ω| < 1; the poles of Rn(ω) are
in the range |ω| > 1.
3. The function Rn(ω) oscillates between ±1 in the passband.
4. Rn(ω) = 1 at ω = 1.
5. Rn(ω) oscillates between ±1/d and infinity in the stopband, where d
is the discrimination factor.
• The Rational Normalized Function Rn(ω) with Respect to Center
Frequency ω0 = 1

• Steps to Calculate the Elliptic Filter:


1. Find the selectivity factor k

2. Define
3. Find the expression

4. Find d

5. Find the filter order n

6. Calculate ε

7. Define β
8. Calculate:

9. Define

10. Calculate:

11. Define:
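
A rough sketch of an elliptic (Cauer) design in the spirit of these steps, delegating the elliptic-integral computations to scipy rather than evaluating them by hand; the specifications below are assumed.

```python
# Assumed elliptic low-pass design: selectivity factor, order, coefficients via scipy.
import numpy as np
from scipy import signal

wp, ws = 0.3, 0.4            # normalised passband/stopband edges (x pi rad/sample)
Ap, As = 1.0, 50.0           # passband ripple and stopband attenuation (dB)

k = wp / ws                  # selectivity factor (step 1)
n, wn = signal.ellipord(wp, ws, Ap, As)       # filter order (step 5)
b, a = signal.ellip(n, Ap, As, wn)            # equiripple passband and stopband

print("selectivity k =", k, " order n =", n)
```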
Finite Word Length Effect
• In digital signal processing, computations such as the FFT algorithm, A/D conversion
and filter implementations involve numbers and coefficients.
• These numbers and coefficients are stored in finite-length registers. Because the
mathematical manipulations are performed with fixed-point (finite precision) arithmetic,
the numbers and coefficients must be quantized, and different types of number
representation are used for this purpose; this introduces errors.
• The implementation of digital filters involves the use of finite precision
arithmetic. This leads to quantization of the filter coefficients and the results of
the arithmetic operations.
• These type of effect due to finite precision representation of numbers in digital
system are called finite word length effects.
• Unlike the finite word length of the signals to be processed, the finite word length of the
filter coefficients does not affect the linearity of the filter behaviour.
• It only restricts the realisable linear filter characteristics,
resulting in a discrete grid of possible pole-zero patterns.
• These effects divide into those due to "signal quantization" and those
due to "overflow".
 Errors arise due to quantization of numbers:
• Input quantization error.
• Product quantization error.
• Co-efficient quantization error.
 Truncation:
• Truncation is the process of reducing the size of a binary number by
discarding all bits less significant than the least significant bit that is
retained.
• Example: Truncate the binary numbers from 8 bits to 4 bits.
0.01011000 → 0.0101
(8 bits)       (4 bits)
1.10100111 → 1.1010
(8 bits)       (4 bits)

• In truncation of a binary number to b bits, all the less significant bits
beyond the bth bit are discarded.
 What is meant by rounding?
• Rounding is the process of reducing the size of a binary number to
'b' bits of finite word size, such that the rounded b-bit number is closest
to the original unquantized number.
• The rounding process consists of truncation followed by an addition.
• In rounding of a number to b bits, first the unquantized number is
truncated to b bits by retaining the most significant b bits. Then zero or
one is added to LSB of the truncated number depending on the bit that
is next to LSB.
• If the bit next to LSB is zero ,then zero is added to the LSB of the
truncated number.
• If the bit next to LSB is one, then one is added to LSB of the truncated
number.
 Example: Round off the binary number 0.11010 to 3 bits.
1. First, truncate the given number to 3 bits: 0.11010 → 0.110
2. Then addition is done. For the given number (0.11010), the bit next to the LSB,
i.e. the fourth bit, is 1. So 1 is added to the LSB of the truncated number from
step 1, giving 0.111.
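
A small sketch of these two quantization operations on positive binary fractions; the helper functions below are illustrative, not from the notes.

```python
# Truncation and rounding of a positive binary fraction to b fractional bits.
def truncate(x: float, b: int) -> float:
    """Keep only the b most significant fractional bits (discard the rest)."""
    return int(x * 2 ** b) / 2 ** b        # valid for positive fractions

def round_to_bits(x: float, b: int) -> float:
    """Truncate to b bits, then add 1 LSB if the bit next to the LSB is 1."""
    t = int(x * 2 ** b)
    if (x * 2 ** (b + 1)) % 2 >= 1:        # inspect the (b+1)-th fractional bit
        t += 1
    return t / 2 ** b

# 0.11010 (binary) = 0.8125 (decimal)
print(truncate(0.8125, 3))        # 0.75  = 0.110 binary (truncation)
print(round_to_bits(0.8125, 3))   # 0.875 = 0.111 binary (rounding, as in the example)
```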
Power Spectrum of signal
• The power spectrum of a signal gives the distribution of the signal power
among the various frequencies.
• The power spectrum is the Fourier transform of the autocorrelation function.
• It describes the characteristics of a time series in the frequency domain.
• So the power spectrum represents variance or power as a function of
frequency in the process and tells us where the energy is distributed.
• The task is to estimate the power spectrum from a given set of data.
• If the signal is random, then only an estimate of its spectrum can be
obtained.
Why is Power Spectrum Estimation used?
• To estimate the spectral characteristics of signals characterized as
random processes.
• To estimate spectra in the frequency domain when the signals are
random in nature.
• The aim of a power spectral estimation method is to obtain an approximate
estimate of the power spectral density of a given real random process.
• What is Estimation?
• Estimation theory is concerned with the determination of the best
estimate of an unknown parameter vector from an observation
signal, or the recovery of a clean signal degraded by noise and
distortion.
Parametric Modeling
• Parametric modeling techniques find the parameters for a
mathematical model describing a signal, system, or process.
• These techniques use known information about the system to
determine the model.
• The power spectrum of the signal x(m) is given as the product of the
power spectrum of the input signal and the squared magnitude
frequency response of the model:

• where H(f) is the frequency response of the model and PEE(f) is the
input power spectrum.
Signal Modeling
• The idea of signal modeling is to represent the signal via (some) model
parameters.
• In the model given below, the random signal x[n] is observed. Given the
observed signal x[n], the goal here is to find a model that best describes the
spectral properties of x[n] under the following assumptions:
 x[n] is WSS (Wide Sense Stationary) . A random process X(t) is said to be
wide-sense stationary (WSS) if its mean and autocorrelation functions are time
invariant, i.e.,
 E(X(t)) = μ, independent of t
 RX(t1, t2) is a function only of the time difference t2 − t1
 E[X(t)2] < ∞ (technical condition)
 The input signal to the LTI system is white noise ( noise containing many
frequencies with equal intensities) following Gaussian distribution – zero mean
and variance σ2.
 The LTI system is BIBO (Bounded Input Bounded Output) stable.
• In the model shown above, the input to the LTI system is a white noise
following Gaussian distribution – zero mean and variance σ2.
• The power spectral density (PSD) of the noise w[n] is

• The noise process drives the LTI system with frequency response H(ejɷ)
producing the signal of interest x[n].
• The PSD of the output is

• 3 cases are possible given the nature of the transfer function of the LTI
system
 Auto Regressive (AR) models : H(ejɷ) is an all-poles system
 Moving Average (MA) models : H(ejɷ) is an all-zeros system
 Auto Regressive Moving Average (ARMA) models : H(ejɷ) is a
pole-zero system
AR, MA, and ARMA equations
• General ARMA equations:

• Particular cases:

….MA

….AR

• Taking Z-transforms of both sides of the ARMA equation


AR models (all-poles model)
• In the AR model, the present output sample x[n] and the past N output
samples determine the source input w[n]

• Here, the LTI system is an Infinite Impulse Response (IIR) filter.


• This is evident from the fact that the above equation considers past
samples of x[n] when determining w[n], thereby creating a feedback
loop from the output of the filter.
• The frequency response of the IIR filter is

• The transfer function H(ejɷ) is an all-pole


transfer function (when the denominator is set
to zero, the transfer function goes to infinity -
> creating peaks in the spectrum).
• Poles are best suited to model resonant peaks
in a given spectrum. At the peaks, the poles
are closer to unit circle.
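
As an illustration, the sketch below generates an AR(2) process by driving an all-pole IIR filter with white Gaussian noise; the coefficients are assumed purely for illustration.

```python
# Sketch of an AR (all-pole) model: white Gaussian noise driving an IIR filter.
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
a = [1.0, -1.5, 0.9]                 # A(z) = 1 - 1.5 z^-1 + 0.9 z^-2 (all-pole denominator)
w = rng.standard_normal(4096)        # w[n]: zero-mean, unit-variance white noise

x = signal.lfilter([1.0], a, w)      # x[n] generated by the AR model

# All-pole frequency response -> resonant peak where the poles approach the unit circle
freq, H = signal.freqz([1.0], a, worN=512)
print("spectral peak near w =", freq[np.argmax(np.abs(H))], "rad/sample")
```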
MA models (all-zeros model)
• In the MA model, the present output sample x[n] is determined by the
present source input w[n] and past N samples of source input w[n]

• Here, the LTI system is a Finite Impulse Response (FIR) filter.
• This is evident from the above equation: no feedback is involved from
output to input.
• The frequency response of the FIR filter is

• The transfer function H(ejɷ) is an all-


zero transfer function (when the
numerator is set to zero, the transfer
function goes to zero -> creating nulls
in the spectrum).
• Zeros are best suited to model sharp
nulls in a given spectrum.
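
A matching sketch for the MA case, with assumed FIR coefficients:

```python
# Sketch of an MA (all-zeros) model: white noise passed through an FIR filter,
# so x[n] depends only on present and past input samples.
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
b = [1.0, -0.8, 0.3]                 # B(z) = 1 - 0.8 z^-1 + 0.3 z^-2  (all zeros)
w = rng.standard_normal(4096)        # white Gaussian input w[n]

x = signal.lfilter(b, [1.0], w)      # MA process x[n] (no feedback from output)

freq, H = signal.freqz(b, [1.0], worN=512)
print("spectral minimum near w =", freq[np.argmin(np.abs(H))], "rad/sample")
```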
ARMA model (pole-zero model)

• ARMA model is a generalized model that is a combination of AR and


MA model.
• The output of the filter is a linear combination of both weighted inputs
(present and past samples) and weighted outputs (past samples).

• The frequency response of this generalized filter is


Power Spectrum Estimation
• The power spectrum estimation deals with the estimation of the spectral
characteristics of signals characterized as random processes.
• Many of the phenomena that occur in nature are best characterized
statistically in terms of averages.
• Due to random fluctuations in such signals, we must adopt a statistical
view point, which deals with the average characteristics of random
signals.
• In particular, the autocorrelation function of random process is the
appropriate statistical average that we will use for characterizing
random signals in the time domain.
• The Fourier transform of the autocorrelation function, which yields the
power density spectrum, provides the transform from the time domain
to frequency domain.
Power Spectrum of Random Signals
• The finite-energy signals possess a Fourier transform and are
characterized in the spectral domain by their energy density spectrum.
• The important class of signals characterized as stationary random
processes do not have finite energy and hence do not possess a Fourier
transform.
• Such signals have finite average power and hence are characterized by a
power density spectrum.
• If x(t) is a stationary random process, its autocorrelation function is

• where E[.] denotes the statistical average.


• Then by Wiener-Khintchine theorem, the power density spectrum of the
stationary random process is the Fourier transform of the
autocorrelation function:
• From a single realization of the random process we can compute the time-average
autocorrelation function,

where 2T0 is the observation interval.


• The Fourier transform of Rxx(τ) provides an estimate Pxx(F) of the power density
spectrum, that is,

• The actual power density spectrum is the expected value of Pxx(F) in the limit as T0 →
∞,
• The estimate Pxx(F) can also be expressed as

• where X(F) is the Fourier transform of the finite-duration sequence x(n), 0 ≤ n ≤ N − 1.
• This form of the power density spectrum estimate is called the periodogram.
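
A minimal sketch of the periodogram computed directly from the FFT of a finite data record; the test signal is assumed, and scipy.signal.periodogram gives an equivalent estimate up to scaling conventions.

```python
# Direct periodogram |X(F)|^2 / N of a tone in white noise (assumed test signal).
import numpy as np
from scipy import signal

fs, N = 1000.0, 1024
rng = np.random.default_rng(0)
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 200 * t) + rng.standard_normal(N)

X = np.fft.rfft(x)
P_direct = (np.abs(X) ** 2) / N              # periodogram estimate (one-sided bins)
f_direct = np.fft.rfftfreq(N, 1 / fs)

f_sp, P_sp = signal.periodogram(x, fs, window='boxcar', scaling='spectrum')
print("direct peak:", f_direct[np.argmax(P_direct)], "Hz;",
      "scipy peak:", f_sp[np.argmax(P_sp)], "Hz")
```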
Nonparametric Methods for Power Spectrum
Estimation
• The nonparametric methods make no assumption about how the data were generated.
• The Bartlett Method: Averaging Periodograms:
• It reduces the variance of the periodogram. The N-point sequence is subdivided into K
non-overlapping segments, where each segment has length M. This results in the K
data segments
Where, is the frequency characteristic of the Bartlett window
Therefore, the variance of the Bartlett power spectrum estimate has been reduced by the factor K
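
A short sketch of the Bartlett procedure (segment length M = 256 is assumed):

```python
# Bartlett method: average the periodograms of K non-overlapping length-M segments.
import numpy as np

def bartlett_psd(x, M):
    K = len(x) // M                          # number of segments
    segs = x[:K * M].reshape(K, M)
    P = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / M
    return P.mean(axis=0)                    # averaging reduces the variance by ~K

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
Pxx = bartlett_psd(x, M=256)                 # K = 16 segments here
```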
The Blackman and Tukey Method: Smoothing the
Periodogram:

In this method the sample autocorrelation sequence is windowed first and then
Fourier transformed to yield the estimate of the power spectrum.
The Blackman-Tukey estimate is

where the window function w(m) has length 2M − 1 and is zero for |m| ≥ M.
The frequency domain equivalent expression:

where Pxx(α) is the periodogram.


The expected value of the Blackman-Tukey power spectrum estimate:

The variance of the Blackman-Tukey power spectrum estimate is
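
A rough sketch of the Blackman-Tukey estimate, windowing the biased sample autocorrelation with an assumed Bartlett lag window of length 2M − 1:

```python
# Blackman-Tukey estimate: window the sample autocorrelation, then Fourier transform.
import numpy as np

def blackman_tukey_psd(x, M, nfft=1024):
    N = len(x)
    r_full = np.correlate(x, x, mode='full') / N      # biased autocorrelation estimate
    mid = N - 1                                        # index of lag 0
    r = r_full[mid - (M - 1): mid + M]                 # lags -(M-1) ... (M-1)
    w = np.bartlett(2 * M - 1)                         # lag window, zero for |m| >= M
    return np.abs(np.fft.rfft(r * w, nfft))            # smoothed spectrum estimate

rng = np.random.default_rng(0)
Pbt = blackman_tukey_psd(rng.standard_normal(2048), M=64)
```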
Parametric Methods for Power Spectrum
Estimation:
• Parametric methods avoid the problem of spectral leakage and provide
better frequency resolution than do the nonparametric methods.
• Parametric methods also eliminate the need for window functions.
• The Yule-Walker Method for the AR Model Parameters:
• This method estimates the autocorrelation from the data and
uses the estimates to solve for the AR model parameters.
• The autocorrelation estimate is given by

• The corresponding power spectrum estimate is
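
A hedged sketch of the Yule-Walker method: the autocorrelation is estimated from the data, the Toeplitz normal equations are solved for the AR coefficients, and the AR spectrum is formed. The AR(2) test data and model order are assumed.

```python
# Yule-Walker AR parameter estimation and the corresponding power spectrum estimate.
import numpy as np
from scipy import signal, linalg

def yule_walker_ar(x, p):
    N = len(x)
    r = np.array([np.dot(x[:N - m], x[m:]) / N for m in range(p + 1)])  # r(0..p)
    a = linalg.solve_toeplitz((r[:p], r[:p]), -r[1:])   # Toeplitz system R a = -r
    sigma2 = r[0] + np.dot(a, r[1:])                    # driving-noise variance
    return np.concatenate(([1.0], a)), sigma2

rng = np.random.default_rng(0)
x = signal.lfilter([1.0], [1.0, -1.5, 0.9], rng.standard_normal(8192))  # AR(2) data
a_hat, s2 = yule_walker_ar(x, p=2)
freq, H = signal.freqz([1.0], a_hat, worN=512)
P_ar = s2 * np.abs(H) ** 2                              # AR power spectrum estimate
print("estimated AR coefficients:", np.round(a_hat, 3))  # roughly [1, -1.5, 0.9]
```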


MA Model for Power Spectrum Estimation
• The parameters of MA model are related to the statistical
autocorrelation γxx(m) by

• where coefficients {dm} are related to the MA parameters by the


expression
• It is apparent from these expressions that we do not have to
solve for the MA parameters {bk} to estimate the power
spectrum.
• The estimates of the autocorrelation γxx(m) for |m| ≤ q suffice.
• From such estimates we compute the estimated MA power
spectrum , given as
ARMA Model for Power Spectrum Estimation

• An ARMA model provides us with an opportunity to improve on the AR
spectrum estimate by using fewer model parameters.
• The ARMA model is particularly appropriate when the signal has been
corrupted by noise.
• The sequence x(n) can be filtered by an FIR filter to yield the sequence

• The filtered sequence v(n) for p ≤ n ≤ N − 1 is used to form the estimated


correlation sequences rvv(m) , from which we obtain the MA spectrum

• The estimated ARMA power spectrum is


Multirate Signal Processing
• A multirate DSP system uses multiple sampling rates within the system.
• Whenever a signal at one rate has to be used by a system that expects a
different rate, the rate has to be increased or decreased, and some
processing is required to do so.
• The most immediate reason is when you need to pass data between two
systems which use incompatible sampling rates.
• For example, professional audio systems use 48 kHz rate, but consumer CD
players use 44.1 kHz; when audio professionals transfer their recorded
music to CDs, they need to do a rate conversion.
• Different sampling rates can be achieved using an upsampler and
downsampler.
• The basic operations in multirate processing to achieve this are decimation
and interpolation.
• Decimation is for reducing the sampling rate and interpolation is for
increasing the sampling rate.
Sampling Rate Conversion

• The process of reducing the sampling rate of a signal


without resulting in aliasing is called decimation or
sampling rate reduction.
• If M is the integer sampling rate reduction factor for the
signal x(n), then the new sampling rate F' becomes F' = F/M.

• Let the signal x(n) be a full band signal, with non-zero values in the
frequency range -F/2 ≤ f ≤ F/2, where ω = 2πfT.

• To avoid aliasing caused by downsampling, the high frequency


components in signal x(n) must be removed by using a low-pass
filter which has the following spectral response:

• The sequence y(m) is obtained by selecting only every Mth sample of
the filtered output, which results in sampling rate reduction.
• If the impulse response of the filter is h(n), then the filtered output
w(n) is given by

• The decimated signal y(m) is y(m) = w(Mm)


The decimator is also known as subsampler, downsampler or undersampler. In
the practical case, where a non-ideal low-pass filter is used, the output signal
y(m) will not be a perfect one. Consider the signal w’(n) defined by

At the sampling instants of y(m), w’(n) = w(n) and in other cases, it is zero.
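
A minimal sketch of decimation by an assumed factor M = 4: the signal is low-pass filtered to remove components above π/M, then every Mth sample is kept. The filter length and sampling rate are assumptions.

```python
# Decimation = anti-aliasing low-pass filter followed by downsampling by M.
import numpy as np
from scipy import signal

M = 4
fs = 8000.0                                  # original sampling rate F
rng = np.random.default_rng(0)
x = rng.standard_normal(8000)                # full-band input signal x(n)

h = signal.firwin(numtaps=101, cutoff=1.0 / M)   # low-pass with cutoff pi/M
w_filt = signal.lfilter(h, [1.0], x)             # w(n) = sum_k h(k) x(n-k)
y = w_filt[::M]                                   # y(m) = w(Mm)

print("new sampling rate F' =", fs / M, "Hz")
# scipy.signal.decimate(x, M) performs a comparable filter-then-downsample operation.
```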
