
EE7713 : Advanced Topics in Communications

Digital Communications - Overview

Instructor: Prof. Engr. Dr. Noor M. Khan


Acme Center for Research in Wireless Communications (ARWiC),
Department of Electronic Engineering,
Muhammad Ali Jinnah University,
Islamabad Campus, Islamabad, PAKISTAN
Ph: +92 (51) 111-878787 Ext. (Office 116, ARWiC Lab 186)
Email: [email protected], [email protected]

1 Advanced Topics in Communications, Fall-2015, Week-3-4


 Recommended Books
– Bernard Sklar, Digital Communications: Fundamentals
and Applications, 2001.
– J. G. Proakis, Digital Communications, 2001
– Marvin Kenneth Simon, Mohamed-Slim Alouini, Digital
Communication over Fading Channels, John Wiley &
Sons, 2004



Digital Communications - Fundamentals
 Week-3
 This lecture will be delivered on the board and will cover the following concepts:
 Thermal Noise / AWGN
 Signal to Noise Ratio (SNR)
 Channel Bandwidth and Data Rate
 The Fourier Transform and Time/Frequency Domains
 Basic Diagram of A Communication System
 Modulation
 Baseband and Bandpass Modulation
Digital Communication: Transmitter



Digital Communication: Receiver



Performance Metrics
 Analog Communication Systems
– Metric is fidelity: we want m̂(t) ≈ m(t)
– SNR is typically used as the performance metric
 Digital Communication Systems
– Metrics are data rate (R bps) and probability of bit error Pb = P(b̂ ≠ b)
– Symbols are already known at the receiver
– Without noise, distortion, or synchronization problems, we would never make bit errors



Digital communication blocks



Processes Involved



EE7713 : Advanced Topics in Communications
Week 4:
Principles of Digital Communications-Overview
Detection
Matched Filter and Correlator Filter
Error Probability
ISI
Equalization
Adaptive Equalization
Detection
 The matched filter reduces the received signal to a single variable z(T), after which the detection of the symbol is carried out
 The concept of the maximum likelihood detector is based on Statistical Decision Theory
 It allows us to
– formulate the decision rule that operates on the data
– optimize the detection criterion

z(T) ≷ γ0 : decide H1 if z(T) > γ0, decide H2 if z(T) < γ0


Detection of a Binary Signal in Gaussian Noise

The output of the matched filter sampled at t = T is a Gaussian random variable


Bayes’ Decision Criterion and Maximum Likelihood Detector
 Hence:

z ≷ γ0 = (a1 + a2)/2 : decide H1 if z > γ0, decide H2 if z < γ0

where γ0 is the optimum threshold under the minimum error criterion
 For antipodal signals, s1(t) = −s2(t) ⇒ a1 = −a2, so γ0 = 0:

z ≷ 0 : decide H1 if z > 0, decide H2 if z < 0
Probability of Error
 Error will occur if
– s1 is sent but s2 is detected:

P(H2|s1) = P(e|s1)
P(e|s1) = ∫_{−∞}^{γ0} p(z|s1) dz

– s2 is sent but s1 is detected:

P(H1|s2) = P(e|s2)
P(e|s2) = ∫_{γ0}^{∞} p(z|s2) dz

 The total probability of error is the sum of the weighted error probabilities:

PB = Σ_{i=1}^{2} P(e, si) = P(e|s1)·P(s1) + P(e|s2)·P(s2)
   = P(H2|s1)·P(s1) + P(H1|s2)·P(s2)


 If the signals are equally probable, P(s1) = P(s2) = 1/2:

PB = P(H2|s1)·P(s1) + P(H1|s2)·P(s2) = (1/2)·[P(H2|s1) + P(H1|s2)]

and, by symmetry, PB = P(H1|s2)

 Numerically, PB is the area under the tail of either of the conditional distributions p(z|s1) or p(z|s2) and is given by:

PB = ∫_{γ0}^{∞} p(z|s2) dz = ∫_{γ0}^{∞} (1/(σ0·√(2π))) exp[−(1/2)·((z − a2)/σ0)²] dz


PB = ∫_{γ0}^{∞} (1/(σ0·√(2π))) exp[−(1/2)·((z − a2)/σ0)²] dz

 Substituting u = (z − a2)/σ0 and using γ0 = (a1 + a2)/2:

PB = ∫_{(a1−a2)/(2σ0)}^{∞} (1/√(2π)) exp(−u²/2) du

 The above integral cannot be evaluated in closed form; it defines the Q-function
 Hence:

PB = Q((a1 − a2)/(2σ0))   ⇐ equation B.18

with the approximation, for large z:

Q(z) ≅ (1/(z·√(2π))) exp(−z²/2)
Error probability for binary signals
 Recall:

PB = Q((a1 − a0)/(2σ0))   ⇐ equation B.18

 where we have replaced a2 by a0.

 To minimize PB, we need to maximize:

(a1 − a0)/σ0   or   (a1 − a0)²/σ0²

 We have:

(a1 − a0)²/σ0² = Ed/(N0/2) = 2Ed/N0

 Therefore:

(a1 − a0)/(2σ0) = (1/2)·√((a1 − a0)²/σ0²) = (1/2)·√(2Ed/N0) = √(Ed/(2N0))
 The probability of bit error is given by:

PB = Q(√(Ed/(2N0)))   (3.63)

 where Ed is the energy of the difference signal:

Ed = ∫_0^T [s1(t) − s0(t)]² dt = ∫_0^T s1²(t) dt + ∫_0^T s0²(t) dt − 2·∫_0^T s1(t)·s0(t) dt


 The probability of bit error for antipodal signals:

PB = Q(√(2Eb/N0))

 The probability of bit error for orthogonal signals:

PB = Q(√(Eb/N0))

 The probability of bit error for unipolar signals:

PB = Q(√(Eb/(2N0)))
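These three expressions can be checked numerically. A minimal sketch, assuming Python and using the identity Q(z) = ½·erfc(z/√2); the function names below are illustrative:

```python
import math

def q_func(z):
    """Gaussian tail probability: Q(z) = 0.5 * erfc(z / sqrt(2))."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def pb_antipodal(ebn0):
    """PB = Q(sqrt(2*Eb/N0)) for antipodal (bipolar) signaling."""
    return q_func(math.sqrt(2.0 * ebn0))

def pb_orthogonal(ebn0):
    """PB = Q(sqrt(Eb/N0)) for orthogonal signaling."""
    return q_func(math.sqrt(ebn0))

def pb_unipolar(ebn0):
    """PB = Q(sqrt(Eb/(2*N0))) for unipolar signaling."""
    return q_func(math.sqrt(ebn0 / 2.0))
```

For any fixed Eb/N0 (linear, not dB), antipodal gives the lowest PB and unipolar the highest, consistent with the ordering of the arguments of Q above.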


Error probability for binary signals

(Figure: table of Q-function values.)


Relation Between SNR (S/N) and Eb/N0
 In analog communication, the figure of merit is the average
signal power S to average noise power N ratio or the SNR.
 In the previous few slides, we have used the term Eb/N0 in
the bit error calculations. How are the two related?
 Eb can be written as S·Tb and N0 as N/W. So we have:

Eb/N0 = (S·Tb)/(N/W) = (S/N)·(W/Rb)   or   SNR = S/N = (Eb/N0)·(Rb/W)

 Thus Eb/N0 can be thought of as a normalized SNR.
 It makes more sense when we have multi-level signaling.
 Reading: Pages 117 and 118.
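The relation above is easy to capture in code. A small sketch; the names and example numbers are illustrative, and all quantities are linear (not dB):

```python
def ebn0_from_snr(snr, bandwidth_hz, bit_rate_bps):
    """Eb/N0 = (S/N) * (W / Rb), all quantities linear (not dB)."""
    return snr * bandwidth_hz / bit_rate_bps

def snr_from_ebn0(ebn0, bandwidth_hz, bit_rate_bps):
    """S/N = (Eb/N0) * (Rb / W)."""
    return ebn0 * bit_rate_bps / bandwidth_hz
```

For example, a channel with W = 4000 Hz, Rb = 2000 bps, and S/N = 10 gives Eb/N0 = 10 · (4000/2000) = 20, i.e. about 13 dB.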


 Orthogonal signals require a factor-of-2 increase in energy relative to bipolar (antipodal) signals to achieve the same error probability
 Since 10·log10(2) = 3 dB, we say that bipolar signaling offers a 3 dB better performance than orthogonal


Comparing BER Performance

For Eb/N0 = 10 dB (i.e. a factor of 10):
PB,orthogonal = Q(√10) ≈ 7.8 × 10⁻⁴
PB,antipodal = Q(√20) ≈ 3.9 × 10⁻⁶

 For the same received signal-to-noise ratio, antipodal signaling provides a lower bit error rate than orthogonal signaling


Correlation
 Measure of similarity between two signals:

cn = (1/√(Eg·Ez)) ∫_{−∞}^{+∞} g(t)·z(t) dt

 Cross-correlation:

ψgz(τ) = ∫_{−∞}^{+∞} g(t)·z(t + τ) dt

 Autocorrelation:

ψg(τ) = ∫_{−∞}^{+∞} g(t)·g(t + τ) dt
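The three correlation measures above have direct discrete-time analogues; a hedged sketch using NumPy, where np.correlate(z, g, "full") follows the convention c[τ] = Σ z[n+τ]·g[n], matching ψgz (function names are illustrative):

```python
import numpy as np

def normalized_correlation(g, z):
    """cn = (1/sqrt(Eg*Ez)) * sum_n g[n] z[n] (discrete analogue)."""
    return np.dot(g, z) / np.sqrt(np.sum(g ** 2) * np.sum(z ** 2))

def cross_correlation(g, z):
    """psi_gz(tau) = sum_n g[n] z[n + tau], for all lags."""
    return np.correlate(z, g, mode="full")

def autocorrelation(g):
    """psi_g(tau): cross-correlation of g with itself."""
    return cross_correlation(g, g)
```

As a sanity check, the normalized correlation of a signal with itself is 1, and the autocorrelation of any signal peaks at zero lag (the center of the "full" output).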


Characterization of band-limited
channels
Communication channels are limited:
 by finite bandwidth, because of FDM techniques or physical reasons,
 by transmit power constraints, because of amplifier nonlinearities or interference,
 by interference, in the form of noise, echo or impulse interference, or multipath.
A band-limited channel, e.g. the telephone channel, is characterized as a linear filter c(t) ↔ C(f).
Other impairments: non-linear distortion, frequency offset, phase jitter, impulse and thermal noise.


Characterization of band-limited
channels

The Received signal





Fundamentals of Equalization
 ISI is recognized as the major obstacle in high speed data
transmission over mobile radio channels
 Equalization is a technique used to eliminate the effects of ISI
 Mobile fading channel is time varying
 Equalizers must constantly follow the changes in the channel in
order to eliminate ISI
 The equalizers have to be adaptive.



Adaptive Equalization

 The general operating modes of an adaptive equalizer include equalizer training and equalizer channel tracking
 During training, a known, fixed-length training sequence is sent by the transmitter
 The receiver “knows” the training sequence and adjusts the equalizer coefficients appropriately until the symbols in the training sequence are received without ISI
 When training is finished, the equalizer coefficients remain at the optimum setting during data transmission.


System Using an Adaptive Equalizer

f(t) = combined impulse response of the transmitter, multipath channel, and receiver RF/IF

(Block diagram: original baseband message x(t) → modulator → transmitter → channel → receiver IF stage → matched filter → equalizer → detector → reconstructed message data x̂(t); noise is added at the receiver input, and the error e(t) between the equalizer output y(t) and the reference x(t) drives the equalizer adaptation.)


Received Signal

 x(t) – the original information signal
 f(t) – channel impulse response (combined impulse response of the transmitter, channel, and receiver)
 The signal received by the equalizer:

y(t) = x(t) ⊗ f(t) + nb(t)

 nb(t) – baseband noise
 ⊗ – convolution operation
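The received-signal model y(t) = x(t) ⊗ f(t) + nb(t) can be simulated discretely. A minimal sketch with an assumed three-tap channel and BPSK-like symbols (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.choice([-1.0, 1.0], size=100)                  # information symbols x
f = np.array([1.0, 0.4, 0.2])                          # assumed channel impulse response f
nb = 0.01 * rng.standard_normal(x.size + f.size - 1)   # baseband noise nb

y = np.convolve(x, f) + nb                             # y = x (*) f + nb
```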


Equalizer Response
 The output from the equalizer is given by:

x̂(t) = x(t) ⊗ f(t) ⊗ heq(t) + nb(t) ⊗ heq(t)

 where heq(t) is the equalizer impulse response.
 Let us put aside the impact of the noise term nb(t) ⊗ heq(t) for a while.


ISI Elimination Condition
 The desired output from the equalizer is x(t).
 In order to force the equalizer output x̂(t) to be equal to x(t), the following has to be satisfied:

f(t) ⊗ heq(t) = δ(t)

 δ(t) is the unit impulse (Dirac delta) function.


ISI Elimination Condition (frequency)
 In the frequency domain, the ISI elimination condition is given by:

Heq(f)·F(f) = 1  ⇔  Heq(f) = 1/F(f)

Heq(f) – Fourier transform of heq(t)
F(f) – Fourier transform of f(t)

The equalizer is an inverse filter to the channel filter!
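The inverse-filter condition Heq(f) = 1/F(f) can be illustrated on an FFT grid. A sketch assuming a three-tap channel with no spectral nulls; in practice, inverting F(f) where |F(f)| is small amplifies noise, which is one motivation for the MMSE formulation later in these slides:

```python
import numpy as np

f = np.array([1.0, 0.4, 0.2])   # assumed channel impulse response (no unit-circle zeros)
n_fft = 64
F = np.fft.fft(f, n_fft)        # F(f) sampled on an FFT grid
Heq = 1.0 / F                   # Heq(f) = 1/F(f); assumes F(f) != 0 everywhere
cascade = Heq * F               # channel * equalizer: should be flat (= 1), i.e. no ISI
```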


Channel Characterization
(Engineering Approach)

F(f) – Fourier transform of f(t):

F(f) = |F(f)|·e^{jθ(f)}
|F(f)| [dB] = 20·log10 |F(f)|

τ(f) = −(1/2π)·dθ(f)/df  – envelope time delay


Common Telephone Channel
Transfer Function
(Figure: magnitude response |F(f)| and envelope time delay τ(f) = −(1/2π)·dθ(f)/df of a common telephone channel.)


Telephone Channel Impulse
Response

(Figure: impulse response f(t) of a typical telephone channel.)


Discrete Channel Model

 For any real function satisfying certain sampling (Nyquist) conditions, the channel impulse response f(t) can be represented in discrete form, without loss of information, as the channel impulse response vector

f = [f0  f1  f2  . . .  fM]^T

The time index of the impulse response runs from 0 to M


Time Variable Discrete Channel Model

In order to represent a slowly time-varying system, we assume time variability of the total impulse response f(k):

f(k) = [f0(k), f1(k), f2(k), . . . , fM(k)]^T

As you may have noticed, the time variable appears independently twice:
1) in the channel impulse response index (0..M),
2) in the time change of the total channel impulse response f(k).
This is an obvious contradiction.


Time Variable Discrete Channel
Model – Double Time?
• This appearance of a double time reference in describing a time-variable system is a consequence of the fact that representing a system by an impulse response implicitly assumes a time-invariant system.
• At the moment, in mobile communications, we cannot do much better; the problem needs theoretical fixing.
• Surprisingly, the existing “wrong” model has served the engineering practice of mobile communications well so far.


Discrete Channel Model

(Figure: tapped-delay-line discrete channel model — the channel input xk feeds a chain of z⁻¹ delay elements producing xk−1, xk−2, …, xk−M; the taps are weighted by f0, f1, f2, …, fM and summed to form the channel output yk.)


Transmitted Symbols Vector
 Transmitted symbols can be represented as the transmitted symbols vector

xk = [xk  xk−1  . . .  xk−M]^T

 Then the channel output sample at the discrete time instant k is given by the convolution:

yk = Σ_{l=0}^{M} fl·xk−l + nk = xk^T f + nk
Transversal Linear Digital Equalizer

 Define the equalizer input signal vector as

yk = [yk  yk−1  yk−2  . . .  yk−N]^T

 Define the vector of equalizer coefficients:

wk = [w0k  w1k  w2k  . . .  wNk]^T

 Then the output of the equalizer is chosen to be:

x̂k = wk^H yk ;  where (·)^H ≡ ((·)*)^T


Transversal Linear Equalizer
(Figure: transversal linear equalizer — the input samples yk, yk−1, yk−2, …, yk−N are taken from a z⁻¹ delay line, weighted by w0k, w1k, w2k, …, wNk, and summed to form the equalizer output x̂k; the error ek is formed against the reference xk. In fact there can be any delay/advance D, so the taps may be yk+D, yk−1+D, yk−2+D, …, yk−N+D.)


The Equalizer Output Error

 The output of the “untrained” equalizer will produce symbol estimates x̂k with error:

ek = xk − x̂k

 The error can be positive or negative (in fact, it can be a complex number for complex symbols)
 The right measure of the error size is the squared error modulus (it works for complex signals too):

|ek|² = |xk − x̂k|² = (xk − x̂k)·(xk − x̂k)*


The Equalizer Output Error

• After substituting x̂k = wk^H yk:

|ek|² = (xk − wk^H yk)·(xk* − wk^T yk*)

 In developed form:

|ek|² = |xk|² − wk^H yk xk* − xk wk^T yk* + wk^H yk yk^H wk


Mean Squared Error
 In fact we want to minimize the average squared error over all symbols in the symbol alphabet
 Taking into account that the symbols appear with certain probabilities, we take the probabilistic expectation (MEAN) over all symbols in the symbol alphabet:

E[|ek|²] = E[|xk|²] − wk^H E[yk xk*] − wk^T E[xk yk*] + wk^H E[yk yk^H] wk

 The average symbol power E[|xk|²] is denoted σx²


Mean Squared Error (2)
 In the previous equation there are two statistical quantities of interest
 Channel impulse response vector:

p = E[yk xk*] = E[xk* yk  xk* yk−1  . . .  xk* yk−N]^T

 Channel correlation matrix:

R = E[yk yk^H] = E of the matrix with rows:
[ |yk|²       yk y*k−1     . . .  yk y*k−N   ]
[ yk−1 y*k    |yk−1|²      . . .  yk−1 y*k−N ]
[ . . .       . . .        . . .  . . .      ]
[ yk−N y*k    yk−N y*k−1   . . .  |yk−N|²    ]


Mean Square Error (3)

Due to the time invariance of the communication system, the autocorrelation does not depend on k: the matrix R = E[yk yk^H], whose (i, j) entry is E[yk−i y*k−j], is the same for every k.


Channel Impulse Response Vector
 The channel vector is statistically defined as

p = E[yk xk*] = E[xk* yk  xk* yk−1  . . .  xk* yk−N]^T

 but once the expectation E is evaluated, it has a simple deterministic value:

p = σx² [f0  0  . . .  0]^T

 carrying information about the channel impulse response (notice that p changes if a delay D is introduced in the vector yk)


Channel Correlation Matrix
 Similarly, the channel correlation matrix is a deterministic matrix:

R = (1/σx²) ×
[ σx²σ² + f0^H f0    f1^H f0             . . .  fN^H f0          ]
[ f1^H f0            σx²σ² + f0^H f0     . . .  fN−1^H f0        ]
[ . . .              . . .               . . .  . . .            ]
[ fN^H f0            fN−1^H f0           . . .  σx²σ² + f0^H f0  ]

where
fm = σx² [fm  fm+1  . . .  fM  0M+2  . . .  0N]^T ;  m = 0, 1, …, N−1
is the m-th rotation of the channel coefficient vector f to the left, with zero padding from the right.
Mean Squared Error (MSE)
 Coming back to the equation for the MSE:

E[|ek|²] = E[|xk|²] − wk^H E[yk xk*] − wk^T E[xk yk*] + wk^H E[yk yk^H] wk

 After the expectation operation has been performed, the MSE is:

E[|ek|²] = J(wk) = σx² − wk^H p − p^H wk + wk^H R wk

where σx² is the average symbol power E[|xk|²]


MSE Revisited
 The MSE

J(wk) = σx² − wk^H p − p^H wk + wk^H R wk

depends on:
– the channel: R, p
– the symbol power: σx²
– and, most importantly, the equalizer coefficients: wk


Minimum Mean Squared Error
 The Minimum Mean Squared Error (MMSE) is the optimum value of the MSE with respect to the right choice of equalizer coefficients wk:

MMSE = min_{wk} J(wk) = min_{wk} (σx² − wk^H p − p^H wk + wk^H R wk)


Mean Squared Error Gradient
 To determine the MMSE, the gradient of the MSE is used:

∇J = ∂J(wk)/∂wk = [∂J(wk)/∂w1  ∂J(wk)/∂w2  · · ·  ∂J(wk)/∂wN]^T

∇J = 2R wk − 2p


Optimum Equalizer Coefficient Set
 By setting

∇J = 0

 the equation for the equalizer optimum coefficient set wk is obtained:

2R wk − 2p = 0


MMSE Equalizer Coefficient Set
 The equation for the optimum coefficient set is:

wk = wopt = R⁻¹ p

 The MMSE is given by:

Jmin(wk) = σx² − p^H R⁻¹ p
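The solution wopt = R⁻¹p can be tried on a toy channel, with R and p estimated by time averages as suggested later in these slides. Everything below (channel taps, equalizer order, noise level) is an illustrative assumption, and real-valued signals are used so Hermitian transposes reduce to plain transposes:

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.array([1.0, 0.5, 0.2])             # assumed channel impulse response
N = 7                                      # equalizer order: N + 1 taps
x = rng.choice([-1.0, 1.0], size=5000)     # training symbols
y = np.convolve(x, f)[: x.size] + 0.01 * rng.standard_normal(x.size)

# Rows of Y are the input vectors y_k = [y_k, y_{k-1}, ..., y_{k-N}]
Y = np.array([y[k - N : k + 1][::-1] for k in range(N, y.size)])
x_ref = x[N:]                              # desired symbols aligned with y_k

K = Y.shape[0]
R = Y.T @ Y / K                            # time-average estimate of R
p = Y.T @ x_ref / K                        # time-average estimate of p
w = np.linalg.solve(R, p)                  # w_opt = R^{-1} p
x_hat = Y @ w                              # equalized symbol estimates
```

With this mild minimum-phase channel and low noise, the sign of x̂k should match the transmitted symbol almost always.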


Coefficient Set Calculation
 The MMSE coefficient set is calculated in terms of the channel impulse response vector f:

wk = R⁻¹(f) p(f)

 However, in reality we hardly ever have f (the channel information) available to us in explicit form.


How to Calculate R and p
 The first idea that comes to mind is to replace the statistical averages R and p with time averages:

p = E[xk* yk] ≈ (1/k) Σ_{l=0}^{k} xl* yl

R = E[yk yk^H] ≈ (1/k) Σ_{l=0}^{k} yl yl^H

 Complexity!! Just inverting R takes on the order of N³ complex multiplications (but the idea is not entirely useless)
Steepest Descent Algorithm
 Instead of inverting R, the coefficient set can be found iteratively by stepping against the gradient of the MSE:

wk+1 = wk + (α/2)·[−∇J(wk)]

∇J(wk) = 2R wk − 2p

wk+1 = wk + α·[p − R wk]
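The update wk+1 = wk + α[p − R wk] can be sketched with a small known R and p; for a step size inside the stability region it converges to R⁻¹p (toy values, illustrative names):

```python
import numpy as np

R = np.array([[1.3, 0.5],
              [0.5, 1.3]])        # toy correlation matrix (eigenvalues 0.8 and 1.8)
p = np.array([1.0, 0.2])          # toy cross-correlation vector
w_opt = np.linalg.solve(R, p)     # target: R^{-1} p

w = np.zeros(2)
alpha = 0.1                       # stable: 0 < alpha < 2 / lambda_max = 2/1.8
for _ in range(2000):
    w = w + alpha * (p - R @ w)   # w_{k+1} = w_k + alpha [p - R w_k]
```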


Simple Approximation for R and p

 The simplest approximation uses a single sample (k = 1):

p ≈ yk xk*
R ≈ yk yk^H

• Then wk+1 = wk + α·[p − R wk]
• becomes wk+1 = wk + α·ek*·yk


LMS Algorithm for Linear Equalizer

Symbol estimate: x̂k = wk^H yk
Error: ek = xk − x̂k
Coefficient update: wk+1 = wk + α·ek*·yk
Step size: 0 < α < 2 / Σ_{i=1}^{N} λi
where λi, i = 1, …, N, are the eigenvalues of R
LMS algorithm computational complexity: 2N + 1
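The three LMS equations above map directly to a few lines of code. A hedged sketch for a real-valued channel, so conj(e) = e; the channel, step size, and training length are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
f = np.array([1.0, 0.5, 0.2])            # assumed channel impulse response
N = 7                                     # equalizer has N + 1 taps
alpha = 0.01                              # step size inside the stability bound

x = rng.choice([-1.0, 1.0], size=20000)   # known training symbols
y = np.convolve(x, f)[: x.size] + 0.01 * rng.standard_normal(x.size)

w = np.zeros(N + 1)
errors = []
for k in range(N, x.size):
    yk = y[k - N : k + 1][::-1]           # y_k = [y_k, ..., y_{k-N}]
    x_hat = w @ yk                        # symbol estimate w^H y (real-valued here)
    e = x[k] - x_hat                      # error e_k = x_k - x_hat_k
    w = w + alpha * e * yk                # w_{k+1} = w_k + alpha e_k* y_k
    errors.append(e * e)
```

The squared error starts near 1 (with w = 0 the estimate is 0) and settles near the noise floor as the taps converge.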


Adaptive Transversal Linear Equalizer

(Figure: adaptive transversal linear equalizer — the delay-line taps yk, yk−1, yk−2, …, yk−N are weighted by w0k, w1k, w2k, …, wNk to form the output x̂k = wk^H yk; the error ek = xk − x̂k against the training signal xk drives the adaptive algorithm that updates the weights wk = [w0k w1k w2k . . . wNk]^T via wk+1 = wk + α·ek*·yk.)
Gradient Noise and the Tracking
Misadjustment (‘lag’)



RLS Algorithm
If pk = Σ_{l=0}^{k} λ^{k−l} yl xl*  and  Rk = Σ_{l=0}^{k} λ^{k−l} yl yl^H, then:

Symbol estimate: x̂k = wk^H yk
Error: ek = xk − x̂k
Kalman gain: kk = Rk−1⁻¹ yk / (λ + yk^H Rk−1⁻¹ yk)
Correlation matrix: Rk⁻¹ = (1/λ)·[Rk−1⁻¹ − kk yk^H Rk−1⁻¹]
Coefficient update: wk+1 = wk + kk·ek*


Algorithm Memory λ



Derivation of RLS Algorithm
If pk = Σ_{l=0}^{k} λ^{k−l} yl xl*  and  Rk = Σ_{l=0}^{k} λ^{k−l} yl yl^H, then

pk = λ·pk−1 + yk xk*
Rk = λ·Rk−1 + yk yk^H

We need Rk⁻¹. By the matrix inversion lemma:

Rk⁻¹ = λ⁻¹·Rk−1⁻¹ − [λ⁻² Rk−1⁻¹ yk yk^H Rk−1⁻¹] / (1 + λ⁻¹ yk^H Rk−1⁻¹ yk)

kk = [λ⁻¹ Rk−1⁻¹ yk] / (1 + λ⁻¹ yk^H Rk−1⁻¹ yk)


Derivation of RLS Algorithm (continued)
Substituting the definition of kk into the inversion-lemma expression:

Rk⁻¹ = λ⁻¹·Rk−1⁻¹ − λ⁻¹ kk yk^H Rk−1⁻¹

kk = [λ⁻¹·Rk−1⁻¹ − λ⁻¹ kk yk^H Rk−1⁻¹] yk

kk = Rk⁻¹ yk


Derivation of RLS Algorithm (continued)
Since wk = Rk⁻¹ pk and pk = λ·pk−1 + yk xk*:

wk = λ·Rk⁻¹ pk−1 + Rk⁻¹ yk xk*

Using Rk⁻¹ = λ⁻¹·Rk−1⁻¹ − λ⁻¹ kk yk^H Rk−1⁻¹ from the previous page:

wk = Rk−1⁻¹ pk−1 − kk yk^H Rk−1⁻¹ pk−1 + Rk⁻¹ yk xk*

With wk−1 = Rk−1⁻¹ pk−1 and kk = Rk⁻¹ yk:

wk = wk−1 − kk yk^H wk−1 + kk xk*
wk = wk−1 + kk·(xk* − yk^H wk−1),  where (yk^H wk−1)* = wk−1^H yk
wk = wk−1 + kk·ek*,  with ek = xk − wk−1^H yk


Initialization and Practical Realization of RLS Algorithm
Initialize P0 = R0⁻¹ = δ⁻¹ I, with δ a small positive constant (< 0.01·σx²), and w0 = 0. Then for k = 1, 2, …, compute (Pk = Rk⁻¹):

πk^H = yk^H Pk−1 ;   κk = λ + πk^H yk

Kalman gain: kk = πk / κk = Pk−1 yk / (λ + yk^H Pk−1 yk)   (using Pk−1 = Pk−1^H)

Error: ek = xk − wk−1^H yk

Coefficients: wk = wk−1 + kk·ek*

Correlation matrix: P′k−1 = kk πk^H ;   Pk = (1/λ)·(Pk−1 − P′k−1)

i.e.  Pk = λ⁻¹ Pk−1 − λ⁻¹ [Pk−1 yk yk^H Pk−1] / (λ + yk^H Pk−1 yk)
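The practical RLS recursion above (with P = R⁻¹) can be sketched as follows. The channel, forgetting factor λ, and δ are illustrative assumptions, and real-valued signals are used so conjugates drop out:

```python
import numpy as np

rng = np.random.default_rng(3)
f = np.array([1.0, 0.5, 0.2])             # assumed channel impulse response
N = 7                                      # equalizer has N + 1 taps
lam = 0.99                                 # forgetting factor (algorithm memory)
delta = 0.01                               # small positive initialization constant

x = rng.choice([-1.0, 1.0], size=3000)
y = np.convolve(x, f)[: x.size] + 0.01 * rng.standard_normal(x.size)

w = np.zeros(N + 1)
P = np.eye(N + 1) / delta                  # P_0 = delta^{-1} I
errors = []
for k in range(N, x.size):
    yk = y[k - N : k + 1][::-1]            # y_k = [y_k, ..., y_{k-N}]
    e = x[k] - w @ yk                      # a priori error e_k = x_k - w^H y_k
    kk = P @ yk / (lam + yk @ P @ yk)      # Kalman gain
    w = w + kk * e                         # w_k = w_{k-1} + k_k e_k* (real e here)
    P = (P - np.outer(kk, yk @ P)) / lam   # P_k = (P_{k-1} - k_k y_k^H P_{k-1}) / lam
    errors.append(e * e)
```

Compared with the LMS sketch earlier, RLS typically converges in far fewer symbols, at the cost of O(N²) work per update instead of O(N).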


RLS vs. LMS

