Adsp All Module
B. Sainath
[email protected]
October 1, 2018
3 Applications
4 Filter Banks
5 Conclusions
y_D[n] = x[Mn]

Y_D(e^{jω}) = (1/M) Σ_{k=0}^{M−1} X(e^{j(ω−2πk)/M})

Stretch X(e^{jω}) by M ⇒ X(e^{jω/M})
Create (M − 1) copies of X(e^{jω/M}), shifting uniformly by 2πk, k a positive integer
Sum all these shifted 'stretched versions' with X(e^{jω/M}) & divide by M
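As a quick numerical check of the aliasing formula above (a Python sketch; the finite-length test signal and M = 2 are illustrative choices, not from the slides):

```python
import numpy as np

def dtft(x, w):
    """Evaluate the DTFT of a finite-length signal x at frequencies w (rad/sample)."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.exp(-1j * wi * n)) for wi in w])

# Finite-length test signal and downsampling factor (illustrative)
x = np.cos(0.3 * np.pi * np.arange(32)) * np.hanning(32)
M = 2
yD = x[::M]                       # y_D[n] = x[Mn]

w = np.linspace(-np.pi, np.pi, 201)
lhs = dtft(yD, w)
# (1/M) * sum over k of X(e^{j(w - 2*pi*k)/M})
rhs = sum(dtft(x, (w - 2 * np.pi * k) / M) for k in range(M)) / M
print(np.max(np.abs(lhs - rhs)))  # numerically zero
```

The two sides agree to machine precision, confirming that decimation stretches, copies, and averages the spectrum exactly as stated.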
where

X′(e^{jω}) = (1/2) [ X(e^{jω}) + X(e^{j(ω−π)}) ]
Suppose that x[n] is passed through an ideal LPF with ω_c = π/2 and then applied to a downsampler with M = 2. Sketch Y(e^{jω}).
Sketch of X_LP(e^{jω})
y_E[n] = x[n/L] for n = kL (k an integer), and 0 elsewhere.
Q: Verify that Y_E(e^{jω}) = X(e^{jωL})
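The identity can be verified numerically before deriving it (a Python sketch; the random test signal and L = 3 are illustrative choices):

```python
import numpy as np

def dtft(x, w):
    """Evaluate the DTFT of a finite-length signal x at frequencies w (rad/sample)."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.exp(-1j * wi * n)) for wi in w])

x = np.random.default_rng(0).standard_normal(16)
L = 3
yE = np.zeros(L * len(x))
yE[::L] = x                      # y_E[n] = x[n/L] at n = kL, 0 elsewhere

w = np.linspace(-np.pi, np.pi, 101)
lhs = dtft(yE, w)
rhs = dtft(x, w * L)             # X(e^{j w L}): spectrum compressed by L
print(np.max(np.abs(lhs - rhs)))  # numerically zero
```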
Polyphase representation
Valid for FIR/IIR; causal/non-causal
Applicable to any sequence (not just impulse response)
Type-1 & Type-2 PPR (in class)
PPR of interpolation filter
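Type-1 PPR can be checked numerically: splitting h[n] into polyphase components e_k[n] = h[nM + k] and filtering at the low rate reproduces the direct filter-then-downsample output (a Python sketch; h, x, and M are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
h = rng.standard_normal(9)        # example FIR impulse response
x = rng.standard_normal(64)
M = 3

# Type-1 polyphase components of h: e_k[n] = h[nM + k], k = 0..M-1
E = [h[k::M] for k in range(M)]

# Direct decimator: filter at the high rate, then keep every M-th sample
direct = np.convolve(x, h)[::M]

# Polyphase decimator: branch inputs x_k[n] = x[nM - k], filter with e_k, sum
out = np.zeros(len(direct))
for k in range(M):
    xk = np.concatenate([np.zeros(k), x])[::M]   # x_k[n] = x[nM - k]
    yk = np.convolve(xk, E[k])
    L = min(len(yk), len(out))
    out[:L] += yk[:L]

print(np.allclose(direct, out))   # True
```

The polyphase form does the same arithmetic, but every convolution runs at the low rate, which is the point of the decomposition.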
5 Applications
6 Conclusions
Quote
"Wavelet theory" is the result of a multidisciplinary effort that brought together mathematicians, physicists and engineers... this connection has created a flow of ideas that goes well beyond the construction of new bases or transforms.
— Stéphane Mallat, author of A Wavelet Tour of Signal Processing
Definition (continuous-time):

V_f(ω, t) = ∫_{−∞}^{∞} f(u) v(u − t) e^{−jωu} du = ∫_{−∞}^{∞} f(u) v_{ω,t}(u) du
V_f(ω, n) = Σ_{m=−∞}^{∞} f[m] v[m − n] e^{−jωm}

Signal f[m], window v[m]
n is discrete & ω is continuous
However, the STFT is performed on a computer using the FFT ⇒ both variables are discrete & quantized
STFT
can be used to obtain the time-frequency (t−f) information in signals of interest (e.g., an audio signal)
consists of the DFTs of portions of the time-domain signal
STEPS
Read the input signal to be analyzed.
E.g., audioread to read an audio signal of known sampling frequency fs:
[x,fs] = audioread('file.wav'); 'x' contains the samples & fs the sampling frequency
Plot the discrete-time (DT) signal. Duration of a DT signal with N samples = N/fs sec:
t = (0:length(x)-1)/fs; plot(t,x);
Plot the frequency-domain signal with FFT or freqz:
[H,W] = freqz(x); plot(W,abs(H)); (W in rad/sample)
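The same steps can be mirrored in Python (an illustrative stand-in for the MATLAB workflow above; a synthesized 440 Hz tone replaces the unspecified file.wav so the sketch is self-contained):

```python
import numpy as np

# In place of audioread('file.wav'), synthesize a known test signal
fs = 8000                              # sampling frequency, Hz
t = np.arange(0, 1.0, 1 / fs)          # duration N/fs = 1 s
x = np.sin(2 * np.pi * 440 * t)        # 440 Hz tone

# Frequency-domain view (the freqz/FFT step): magnitude of the DFT
H = np.fft.rfft(x)
f = np.fft.rfftfreq(len(x), 1 / fs)    # frequency axis in Hz
peak = f[np.argmax(np.abs(H))]
print(peak)                            # 440.0
```

Because the tone completes an integer number of cycles in the record, the spectral peak lands exactly on the 440 Hz bin.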
Figure: (left) time-domain signal x versus t in seconds; (right) frequency-response magnitude versus frequency in kHz
Synthesis equation:

f[n] = (M / v[0]) Σ_{p=−∞}^{∞} (1/√N) Σ_{k=0}^{N−1} V_f[pM, k] e^{jω_k n}
Fixed resolution; therefore, the STFT
suits analyzing processes where all the features appear at approximately the same scale
A wider window gives higher frequency resolution (but poor time resolution)
A narrower window gives good time resolution (but poor frequency resolution)
Time-frequency tradeoff!
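The tradeoff can be quantified by measuring the −3 dB mainlobe width of a windowed tone's spectrum: the width shrinks in proportion to the window length. A Python sketch (the 100 Hz tone, Hann window, and lengths 200 vs 2000 are illustrative assumptions):

```python
import numpy as np

def mainlobe_width(N, fs=1000, f0=100):
    """-3 dB width (Hz) of a Hann-windowed tone's spectral peak."""
    t = np.arange(N) / fs
    x = np.sin(2 * np.pi * f0 * t) * np.hanning(N)
    X = np.abs(np.fft.rfft(x, n=1 << 16))          # zero-padded for a smooth curve
    f = np.fft.rfftfreq(1 << 16, 1 / fs)
    above = f[X >= X.max() / np.sqrt(2)]           # bins within -3 dB of the peak
    return above.max() - above.min()

w_short, w_long = mainlobe_width(200), mainlobe_width(2000)
print(w_short, w_long)   # the 10x longer window is roughly 10x sharper in frequency
```

The longer window pins down frequency far more precisely, at the cost of averaging over ten times more signal in time.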
Based on the concepts covered in class, read the book chapter (by
Nawab & Quatieri) on STFT (check Nalanda).
Questions:
Let f[n] = exp(j(2π/N) f_d n). Let v[n] denote the analysis window. Determine the discrete-STFT of f[n]. What is the D-STFT when a rectangular window is used?
Let f[n] = cos((2π/N) f_d n). Let v[n] denote the analysis window. Determine the discrete-STFT of f[n]. What is the D-STFT when a rectangular window is used? What is the D-STFT if v[n] = δ[n]?
Figure: https://en.wikipedia.org/wiki/Haar_wavelet
Definitions in class
Scalogram is analogous to spectrogram in STFT
Energy computation from scalogram (in class)
Matlab command for continuous wavelet transform (CWT): cwt
http://in.mathworks.com/help/wavelet/ref/cwt.html
wt = cwt(x,wname) uses the analytic wavelet specified by wname to
compute the CWT
Orthogonality
Wavelets (or wavelet basis functions) are localized waveforms whose scaled and translated versions are all orthogonal to each other
The scaling function is orthogonal to translations of itself, but not to dilations of itself
d_m(k) = ⟨f, 2^{m/2} ψ(2^m t − k)⟩ = Σ_n h_1[n − 2k] c_{m+1}(n)
The filters are shifted by 2k (rather than k) so that only even indexed
terms (at filter o/ps) are retained
Eliminates redundant information
With these coefficients (computed using simple digital filters), we can recover P_{V_m} f (a finite-sum approximation to a finite-time f(t)!) ⟹ a new world of DSP!
Instead of processing signal samples, we can analyze & process a signal
using its DWT
Haar analysis example (in class)
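A one-level Haar analysis/synthesis pass can be sketched as follows (a Python illustration with made-up sample values; the in-class example may use different numbers). The shift-by-2 in the analysis equations appears here as taking sums and differences of non-overlapping sample pairs:

```python
import numpy as np

s = np.sqrt(2)
f = np.array([3.0, 1.0, 0.0, 4.0, 8.0, 6.0, 2.0, 2.0])   # example samples

# One analysis level: filter and keep even-indexed outputs (shift by 2k)
c = (f[0::2] + f[1::2]) / s        # approximation coefficients (lowpass)
d = (f[0::2] - f[1::2]) / s        # detail coefficients (highpass)

# Perfect reconstruction from (c, d)
rec = np.empty_like(f)
rec[0::2] = (c + d) / s
rec[1::2] = (c - d) / s
print(np.allclose(rec, f))          # True
```

Half the coefficients are discarded at each level, yet the signal is recovered exactly: the downsampling removes only redundant information.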
Theorem
Let {φ(t − n), n ∈ Z} denote an orthonormal basis, where φ(t) denotes the orthonormal scaling function. Then, to ensure a valid multiresolution analysis, the sequence

h[n] = ⟨φ(t), √2 φ(2t − n)⟩

must satisfy

|H(ω)|² + |H(ω + π)|² = 2, ω ∈ [0, 2π)
H(0) = √2
Σ_k c_k c_{k+2k′} = 2 if k′ = 0, and 0 otherwise.
where m = 0, 1, . . . , N_k/2 − 1 ⟹ N_k/2 vanishing moments
suppressing parts of the signal which are polynomial up to degree N_k/2 − 1
Examples: DB2, DB4, DB6, and so on
Determine the 4 scaling coefficients c_0, c_1, c_2, c_3 of DB4 (in class)
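The four DB4 scaling coefficients (in the dilation-equation convention with Σ c_k = 2, matching the theorem's Σ_k c_k c_{k+2k′} = 2 normalization) can be checked numerically against the orthogonality condition and one common form of the vanishing-moment conditions; a Python sketch:

```python
import numpy as np

r3 = np.sqrt(3)
# Four scaling coefficients of DB4 (dilation-equation convention, sum = 2)
c = np.array([1 + r3, 3 + r3, 3 - r3, 1 - r3]) / 4

# Orthogonality: sum_k c_k c_{k+2k'} = 2 for k' = 0, and 0 otherwise
print(np.sum(c * c))                  # ≈ 2
print(c[0] * c[2] + c[1] * c[3])      # ≈ 0  (k' = 1)

# N_k/2 = 2 vanishing moments: sum_k (-1)^k k^m c_k = 0 for m = 0, 1
signs = np.array([1, -1, 1, -1])
print(np.sum(signs * c))              # ≈ 0  (m = 0)
print(np.sum(signs * c * np.arange(4)))  # ≈ 0  (m = 1)
```

Two vanishing moments mean the associated wavelet annihilates constant and linear signal segments, which is why the detail coefficients of smooth signals are small.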
Figure: WPD over 3 levels. g[n]: LP approximation coefficients & h[n]: HP detail coefficients
f(n2^{−M}) ≈ c_M(n),

P_{V_M} f ≈ Σ_n f(n2^{−M}) φ(2^M t − n)
Q.5. Let φ(t) denote the father scaling function & ψ(t) denote the mother wavelet of DB2. Determine

∫_{−∞}^{∞} φ(t) ψ(t) dt
B. Sainath
[email protected]
December 2, 2018
2 Detection Theory
4 Estimation Theory
5 Types of Estimators
P_D = Q( Q^{−1}(P_FA) − √(NA²/σ²) )

d ≜ √(NA²/σ²) ⇐ deflection coefficient
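A Monte Carlo sanity check of the P_D expression for the DC-level-in-WGN detector (a Python sketch; the values of A, σ, N, and P_FA are illustrative choices, and the test statistic is the sample mean compared against the Neyman-Pearson threshold):

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()
Q = lambda x: 1 - nd.cdf(x)          # Gaussian right-tail probability
Qinv = lambda p: nd.inv_cdf(1 - p)

A, sigma, N, PFA = 0.5, 1.0, 40, 0.1
PD_theory = Q(Qinv(PFA) - np.sqrt(N * A**2 / sigma**2))

# Monte Carlo under H1: decide H1 when the sample mean exceeds the NP threshold
rng = np.random.default_rng(0)
trials = 100_000
T = A + sigma * rng.standard_normal((trials, N)).mean(axis=1)
thresh = Qinv(PFA) * sigma / np.sqrt(N)   # sets P(T > thresh | H0) = PFA
PD_sim = np.mean(T > thresh)
print(PD_theory, PD_sim)             # agree to about two decimal places
```

The simulated detection rate matches the closed-form Q-function expression, and increasing N (i.e., the deflection coefficient) pushes P_D toward 1.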
B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 8 / 30
MAP Detector Example
Figure: Effect of prior probability on decision regions: i) (left) MAP detector with P(H0) = P(H1) = 1/2; ii) (right) MAP detector with P(H0) = 1/4 & P(H1) = 3/4
Bayes rule:

p(Y|H1) / p(Y|H0) > p(H1) / p(H0).

For equal a priori probabilities, i.e., p(H1) = p(H0) = 1/2, we get

p(Y|H1) / p(Y|H0) > 1 ⟹ ML rule
Example (in class)
Figure: N-P detector in: a). Correlator structure b). Matched filter structure
DC level in WGN
Model: X [n] = A + W [n], n = 0, 1, . . . , N − 1
Goal: To estimate the parameter A
Average value of X[n] ⇐ unbiased estimate

Â = (1/N) Σ_{n=0}^{N−1} X[n] ≜ g(X[n])
H0: N,
H1: B exp(jΨ) + N
Determine the MAP decision rule and the ML decision rule. Simplify the rules to the extent possible.
Derive the minimum average probability of error. Express it in terms of the Marcum Q-function (refer to the Wiki page for the definition)
p(y|H0) = (λ_0^y / y!) exp(−λ_0),
p(y|H1) = (λ_1^y / y!) exp(−λ_1)
2 Wiener Filter
3 LMS Algorithm
4 Configurations
5 RLS Algorithm
6 Conclusions
Problem
Given signal {x[n], n = 0, 1, . . . , N}
Determine the optimum set of weights that minimizes Φ_e
An iterative solution
avoids computation of the ACF inverse ⇒ lower computational complexity
Let w_k[n] denote the current weight at the n-th iteration
Let w_k[n + 1] denote the weight at the next iteration, i.e., the updated filter weight
w_k[n + 1] = w_k[n] + μ ( R_dx[k] − Σ_{j=1}^{p} w_j[n] R_x[j, k] ), k = 1, . . . , p

In vector form: w[n + 1] = w[n] + μ e[n] x[n]
LMS Summary:
Initialization: w_k[0] = 0, k = 1, 2, . . . , p ≡ w[0] = 0
Filtering: for n = 1, . . . , ∞ compute

y[n] = Σ_{j=1}^{p} w_j[n] x_j[n]
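The full initialize-filter-update loop can be sketched end-to-end on a toy system-identification problem (a Python illustration; the unknown weights w_true, step size μ, and noise level are assumptions for the demo, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 4
w_true = np.array([0.6, -0.3, 0.2, 0.1])   # unknown system to identify
mu = 0.01                                  # step size
w = np.zeros(p)                            # initialization: w[0] = 0

x = rng.standard_normal(5000)
for n in range(p, len(x)):
    xn = x[n - p:n][::-1]                  # current input vector
    d = w_true @ xn + 0.01 * rng.standard_normal()   # desired signal
    y = w @ xn                             # filter output y[n]
    e = d - y                              # error e[n]
    w = w + mu * e * xn                    # LMS update: w[n+1] = w[n] + mu e[n] x[n]

print(np.round(w, 2))                      # close to w_true
```

No autocorrelation matrix is formed or inverted anywhere: each iteration costs only O(p) operations, which is exactly the attraction of LMS over the direct Wiener solution.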
Linear Prediction
I/p vector: set of past values
Desired signal: current i/p sample
Objective: To estimate the future values of a signal based on past values
of the signal
Application: Linear predictive coding (LPC) (e.g., speech compression)
Configurations of Adaptive Filters: Noise Cancellation

E[ e_opt x[n − k] ] = 0, k = 0, 1, 2, . . . , M − 1
Q. [Filtering of Noisy Signals]: Let y[n] = d[n] + v[n] denote the received signal. In it, d[n] denotes the desired signal & v[n] denotes noise with zero mean and variance σ_v². Assume that d[n] & v[n] are uncorrelated.
Derive the Wiener-Hopf equations
Q. [MSE Function]: Suppose that the input autocorrelation matrix of the given data is the 2 × 2 identity matrix & the cross-correlation vector is [2 4.5]^t. Assuming that σ_d² = 9, determine Φ_e, that is, the mean-square-error function in terms of the coefficients w_0 & w_1.
http://www.commsp.ee.ic.ac.uk/~mandic/ASP_Slides/
Fundamentals of Adaptive Filtering by Ali H. Sayed
https://in.mathworks.com/help/dsp/ug/overview-of-adaptive-filters-and-applications.html
https://en.wikipedia.org/wiki/Least_mean_squares_filter
3 Signal Reconstruction
4 Applications
Notation:
x[n]: real-valued, 1D, DT signal of length N ⇐ viewed as a column vector
ψ_j, j = 1, 2, . . . , N: an orthonormal basis
Ψ: [ψ_1 | ψ_2 | . . . | ψ_N], where the ψ_j's are column vectors
{s_j}: vector of weight coefficients
Representation of x:

x = Σ_{j=1}^{N} s_j ψ_j,  s_j = ⟨x, ψ_j⟩ = ψ_j^t x

x (in the time or space domain) & s (in the Ψ domain) are equivalent representations
K-sparse signal
Signal x is K-sparse if it is a linear combination of only K basis vectors
That is, only K of the s_j coefficients are non-zero & N − K are zero
Figure: a). CS measurement process with random Gaussian measurement matrix Φ &
Discrete cosine transform (DCT) matrix Ψ. b). Measurement process with Θ = ΦΨ.
CS Problem
Directly acquire the compressed signal (via measurements)
avoid the intermediate stage of acquiring N samples
Let {φ_j, j = 1, 2, . . . , M} denote a collection of measurement vectors and {y_j, j = 1, 2, . . . , M} denote the set of measurements:

y_j = ⟨x, φ_j⟩

We have

y = Φx = ΦΨs = Θs
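A small Python sketch of the measurement model, contrasting the minimum-norm solution (which satisfies the measurements but is not sparse) with a greedy sparse recovery via orthogonal matching pursuit (OMP is one standard recovery algorithm, named here as an illustration; the dimensions, support, and seed are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 64, 32, 3
s = np.zeros(N)
s[[5, 20, 41]] = [3.0, -2.0, 1.5]                  # K-sparse coefficient vector
Theta = rng.standard_normal((M, N)) / np.sqrt(M)   # i.i.d. Gaussian Theta = Phi Psi
y = Theta @ s                                      # M < N measurements

# Minimum-norm solution s_l2 = Theta^t (Theta Theta^t)^{-1} y:
# consistent with y, but spreads energy over the row space -- not sparse
s_l2 = Theta.T @ np.linalg.solve(Theta @ Theta.T, y)

# Orthogonal matching pursuit: greedily pick columns, least-squares refit
support, r = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(Theta.T @ r))))
    coef, *_ = np.linalg.lstsq(Theta[:, support], y, rcond=None)
    r = y - Theta[:, support] @ coef
s_hat = np.zeros(N)
s_hat[support] = coef

print(np.allclose(Theta @ s_l2, y), np.linalg.norm(s_l2 - s))
print(np.linalg.norm(s_hat - s))
```

The minimum-norm estimate fits the measurements exactly yet misses the true sparse s, while the greedy recovery finds the support and reconstructs s essentially exactly, which is the CS reconstruction objective in miniature.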
Sensing Matrix (SM) & UUP
CS Objectives
To design a stable measurement matrix Φ such that key information in any
K − sparse signal is not lost due to the reduction of dimensionality
To develop reconstruction algorithm for signal recovery from only K
measurements
Incoherence
The rows {φj } of Φ cannot sparsely represent the columns {ψi } of Ψ (&
vice versa)
Properties of Φ
An M × N i.i.d. Gaussian matrix Θ = ΦI = Φ satisfies the RIP with high probability if M ≥ cK log(N/K) ≪ N, where c is a small constant
Matrix Φ is universal
Θ = ΦΨ will be i.i.d. Gaussian ⟹ satisfies the RIP with high probability regardless of the orthonormal basis Ψ
Solution: ŝ = Θ^t (ΘΘ^t)^{−1} y

μ(Ψ, Φ) = max_{1≤i,j≤n} |⟨ψ_i, φ_j⟩|
y[n] denotes the received signal at the cognitive radio (CR) at the n-th sampling instant
s[n] denotes the primary signal
w[n] is the additive white Gaussian noise
Key objective: the CR user (secondary) has to decide whether the primary user's signal is present (H1) or not (H0)