Digital Communications - Overview
The decision rule compares z(T) with the threshold γ_0:
z(T) > γ_0 ⇒ decide H_1;   z(T) < γ_0 ⇒ decide H_2
– s_2 is sent → s_1 is received:
P(H_1 | s_2) = P(e | s_2)
P(e | s_2) = ∫_{γ_0}^{∞} p(z | s_2) dz
The total probability of error is the sum over both error events:
P_B = ∑_{i=1}^{2} P(e, s_i) = P(e | s_1) P(s_1) + P(e | s_2) P(s_2)
    = P(H_2 | s_1) P(s_1) + P(H_1 | s_2) P(s_2)
For equally likely symbols and the optimum threshold γ_0 = (a_1 + a_2)/2, the two conditional error probabilities are equal, so
P_B = ∫_{γ_0}^{∞} (1/(σ_0 √(2π))) exp(−(z − a_2)²/(2σ_0²)) dz
Substituting u = (z − a_2)/σ_0 maps the lower limit γ_0 = (a_1 + a_2)/2 to (a_1 − a_2)/(2σ_0):
P_B = ∫_{(a_1 − a_2)/(2σ_0)}^{∞} (1/√(2π)) exp(−u²/2) du
The above integral cannot be evaluated in closed form; it defines the Gaussian tail (Q-) function. Hence
P_B = Q((a_1 − a_2)/(2σ_0))   ⇐ equation B.18
where, for large z,
Q(z) ≅ (1/(z √(2π))) exp(−z²/2)
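As a quick numerical sanity check (a sketch, not part of the slides), the exact Q-function is available through the complementary error function and can be compared against the large-argument approximation above:

```python
import math

def q_exact(z):
    # Q(z) = P(N(0,1) > z), computed via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def q_approx(z):
    # Large-argument approximation from the slide:
    # Q(z) ~ exp(-z^2/2) / (z * sqrt(2*pi))
    return math.exp(-z * z / 2.0) / (z * math.sqrt(2.0 * math.pi))

# The approximation tightens as z grows
for z in (1.0, 2.0, 4.0):
    print(z, q_exact(z), q_approx(z))
```

The approximation always overestimates Q(z), with the relative error shrinking as z increases.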
Advanced Topics in Communications, Fall-2015, Week-3-4 15
Error probability for binary signals
Recall:
P_B = Q((a_1 − a_0)/(2σ_0))   ⇐ equation B.18
where we have replaced a_2 by a_0.
We have
(a_1 − a_0)²/σ_0² = E_d/(N_0/2) = 2E_d/N_0
Therefore,
(a_1 − a_0)/(2σ_0) = (1/2)√((a_1 − a_0)²/σ_0²) = (1/2)√(2E_d/N_0) = √(E_d/(2N_0))
The probability of bit error is given by:
P_B = Q(√(E_d/(2N_0)))   (3.63)
where E_d is the energy of the difference signal:
E_d = ∫_0^T [s_1(t) − s_0(t)]² dt
For antipodal signals (E_d = 4E_b):
P_B = Q(√(2E_b/N_0))
The probability of bit error for orthogonal signals (E_d = 2E_b):
P_B = Q(√(E_d/(2N_0))) = Q(√(E_b/N_0))
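A short Monte Carlo sketch (with assumed parameters, e.g. E_b = 1 and E_b/N_0 = 6 dB; not from the slides) can confirm the antipodal formula against simulated detection:

```python
import math
import random

def q_func(z):
    # Q(z) via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def simulate_antipodal_ber(ebn0_db, n_bits=200_000, seed=1):
    # Baseband model (assumption): r = +/-sqrt(Eb) + n with Eb = 1,
    # n ~ N(0, N0/2); decide by the sign of r.
    rng = random.Random(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))   # noise std for Eb = 1
    errors = 0
    for _ in range(n_bits):
        bit = rng.choice((-1.0, 1.0))
        r = bit + rng.gauss(0.0, sigma)
        if (r > 0) != (bit > 0):
            errors += 1
    return errors / n_bits

ebn0_db = 6.0
theory = q_func(math.sqrt(2.0 * 10.0 ** (ebn0_db / 10.0)))   # Q(sqrt(2 Eb/N0))
measured = simulate_antipodal_ber(ebn0_db)
```

The simulated error rate should track the Q-function prediction to within Monte Carlo noise.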
E_b/N_0 = (S·T_b)/(N/W) = (S/N)·(W/R_b),   or equivalently   SNR = S/N = (E_b/N_0)·(R_b/W)
Thus E_b/N_0 can be thought of as a normalized SNR.
This makes more sense when we have multi-level signaling.
Reading: Page 117 and 118.
For E_b/N_0 = 10 dB:
P_B,orthogonal = 9.2×10⁻²
P_B,antipodal = 7.8×10⁻⁴
Cross-correlation:
ψ_gz(τ) = ∫_{−∞}^{+∞} g(t) z(t + τ) dt
Autocorrelation:
ψ_g(τ) = ∫_{−∞}^{+∞} g(t) g(t + τ) dt
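In discrete time the two integrals become sums, which numpy's `correlate` computes directly. This sketch (not from the slides) checks the zero-lag value and the evenness of the autocorrelation:

```python
import numpy as np

def cross_correlation(g, z):
    # Discrete analogue of psi_gz(tau) = sum_t g[t] z[t + tau].
    # np.correlate(z, g, "full")[i] corresponds to lag tau = i - (len(g) - 1).
    return np.correlate(z, g, mode="full")

def autocorrelation(g):
    # psi_g(tau): cross-correlation of a signal with itself
    return cross_correlation(g, g)

g = np.array([1.0, 2.0, 3.0])
ac = autocorrelation(g)   # symmetric, peaks at zero lag
```

For a real energy signal the autocorrelation is an even function with its maximum at τ = 0.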
The equalized signal is
x̂(t) = x(t) ⊗ f(t) ⊗ h_eq(t) + n_b(t) ⊗ h_eq(t)
where x(t) is the transmitted signal, f(t) the channel impulse response, h_eq(t) the equalizer impulse response, and n_b(t) the noise. Writing the channel frequency response in polar form:
F(f) = |F(f)| e^{jθ(f)}
|F(f)| [dB] = 20 log_10 |F(f)|
τ(f) = −(1/2π) dθ(f)/df   – envelope time delay
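The magnitude response and envelope delay can be evaluated numerically. This sketch (an illustration, not from the slides) uses a hypothetical symmetric, linear-phase FIR filter, for which the envelope delay should be flat at (L − 1)/2 = 2 samples:

```python
import numpy as np

# Hypothetical symmetric (linear-phase) FIR filter -- an assumption for illustration
h = np.array([0.25, 0.5, 1.0, 0.5, 0.25])

nfft = 1024
F = np.fft.rfft(h, nfft)              # F(f) = |F(f)| e^{j theta(f)}
freqs = np.fft.rfftfreq(nfft)         # normalized frequency (cycles/sample)

mag_db = 20.0 * np.log10(np.abs(F))   # |F(f)| in dB
theta = np.unwrap(np.angle(F))        # continuous phase theta(f)

# Envelope (group) delay: tau(f) = -(1/2pi) d theta(f) / d f
tau = -np.gradient(theta, freqs) / (2.0 * np.pi)
```

A flat τ(f) over the signal band means no delay distortion; a channel with frequency-dependent delay smears the pulses and causes ISI.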
The channel is modeled as a tapped delay line with tap vector
f = [f_0 f_1 f_2 . . . f_M]^T
[Figure: tapped-delay-line channel model: the input samples x_k, x_{k−1}, …, x_{k−M} pass through unit delays z⁻¹, are weighted by the taps f_0, f_1, …, f_M, and summed to produce the channel output y_k.]
x_k = [x_k x_{k−1} . . . x_{k−M}]^T
y_k = [y_k y_{k−1} y_{k−2} . . . y_{k−N}]^T
w_k = [w_{0k} w_{1k} w_{2k} . . . w_{Nk}]^T
Then the output of the equalizer is chosen to be:
x̂_k = w_k^H y_k,   where (·)^H ≡ ((·)*)^T
[Figure: linear transversal equalizer: the received samples y_{k+D}, y_{k−1+D}, …, y_{k−N+D} are weighted, summed to form x̂_k, and compared with x_k to produce the error e_k. In fact there could be any delay/advance D.]
e_k = x_k − x̂_k
|e_k|² = (x_k − w_k^H y_k)·(x_k* − w_k^T y_k*)
In developed form:
|e_k|² = |x_k|² − w_k^H y_k x_k* − x_k w_k^T y_k* + w_k^H y_k y_k^H w_k
p = E[y_k x_k*] = E[x_k* y_k   x_k* y_{k−1}   . . .   x_k* y_{k−N}]^T
R = σ_x² ·
[ σ² + f_0^H f_0      f_1^H f_0          . . .   f_N^H f_0
  f_1^H f_0           σ² + f_0^H f_0     . . .   f_{N−1}^H f_0
  . . .               . . .              . . .   . . .
  f_N^H f_0           f_{N−1}^H f_0      . . .   σ² + f_0^H f_0 ]
where f_m = [f_m f_{m+1} . . . f_M 0 . . . 0]^T, m = 0, 1, …, N − 1, and σ² is the noise variance normalized by σ_x².
J(w_k) = E[|e_k|²] = σ_x² − w_k^H p − p^H w_k + w_k^H R w_k
where σ_x² is the average symbol power E[|x_k|²].
The MSE depends on:
- the channel: R, p
- the symbol power: σ_x²
- and, most importantly, the equalizer coefficients!! w_k
MMSE = min_{w_k} J(w_k) = min_{w_k} (σ_x² − w_k^H p − p^H w_k + w_k^H R w_k)
The gradient with respect to the weights is
∇J = ∂J/∂w_k = [∂J/∂w_1   ∂J/∂w_2   ⋯   ∂J/∂w_N]^T = 2R w_k − 2p
Setting ∇J = 0:
w_k = w_opt = R⁻¹ p   (both R and p depend on the channel f)
J_min = σ_x² − p^H R⁻¹ p
In practice the expectations are replaced by time averages over the received data:
p = E[y_k x_k*] ≈ (1/k) ∑_{l=0}^{k} x_l* y_l
R = E[y_k y_k^H] ≈ (1/k) ∑_{l=0}^{k} y_l y_l^H
Steepest descent moves the weights along the negative gradient:
w_{k+1} = w_k + (1/2)α[−∇J(w_k)]
∇J(w_k) = 2R w_k − 2p
⇒ w_{k+1} = w_k + α[p − R w_k]
Replacing the expectations by their instantaneous (single-sample) estimates
p = E[y_k x_k*] ≈ y_k x_k*
R = E[y_k y_k^H] ≈ y_k y_k^H
• then w_{k+1} = w_k + α[p − R w_k]
• becomes w_{k+1} = w_k + α e_k* y_k,
since p − R w_k ≈ y_k(x_k* − y_k^H w_k) = e_k* y_k.
Symbol estimate: x̂_k = w_k^H y_k
Error: e_k = x_k − x̂_k
Coefficient update: w_{k+1} = w_k + α e_k* y_k
Step size: 0 < α < 2 / ∑_{i=1}^{N} λ_i, where λ_i, i = 1, …, N, are the eigenvalues of R
LMS algorithm computational complexity: 2N + 1 operations per update
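A minimal LMS sketch under assumed parameters (hypothetical two-tap channel, α = 0.01, BPSK training symbols; an illustration, not the course's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical channel and parameters (assumptions for illustration)
f = np.array([1.0, 0.4])
N = 6                            # equalizer taps: N + 1
alpha = 0.01                     # step size, well below 2 / sum of eigenvalues
n_sym = 5_000

x = rng.choice([-1.0, 1.0], size=n_sym)
y = np.convolve(x, f)[:n_sym] + 0.05 * rng.standard_normal(n_sym)

w = np.zeros(N + 1)
sq_err = []
for k in range(N, n_sym):
    yk = y[k - np.arange(N + 1)]   # y_k = [y_k, y_{k-1}, ..., y_{k-N}]
    e = x[k] - w @ yk              # e_k = x_k - w_k^H y_k (real-valued here)
    w = w + alpha * e * yk         # w_{k+1} = w_k + alpha e_k^* y_k
    sq_err.append(e * e)
```

The squared error starts near the symbol power and decays toward the residual MSE as the taps converge.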
[Figure: adaptive equalizer: the received samples y_k are filtered to form x̂_k = w_k^H y_k; during training, x̂_k is compared with a known training signal x_k, and the adaptive algorithm updates the equalizer weights w_k from the error.]
Error: e_k = x_k − x̂_k
Update: w_{k+1} = w_k + α e_k* y_k
w_k = [w_{0k} w_{1k} w_{2k} . . . w_{Nk}]^T
Gradient Noise and the Tracking Misadjustment ('lag')
Symbol estimate: x̂_k = w_{k−1}^H y_k
Error: e_k = x_k − x̂_k
Kalman gain: k_k = R_{k−1}^{−1} y_k / (λ + y_k^H R_{k−1}^{−1} y_k)
Correlation matrix inverse update: R_k^{−1} = (1/λ)[R_{k−1}^{−1} − k_k y_k^H R_{k−1}^{−1}]
If
p_k = ∑_{l=0}^{k} λ^{k−l} y_l x_l*,   R_k = ∑_{l=0}^{k} λ^{k−l} y_l y_l^H
then both quantities obey one-step recursions:
p_k = λ·p_{k−1} + y_k x_k*,   R_k = λ·R_{k−1} + y_k·y_k^H
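The equivalence between the exponentially weighted sums and the one-step recursions can be checked numerically (random data and an assumed λ = 0.95, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 0.95                       # forgetting factor (example value)
K = 50
y = rng.standard_normal((K, 4))  # received vectors y_l
x = rng.standard_normal(K)       # reference symbols x_l (real, so x* = x)

# Direct exponentially weighted sums at time k = K - 1
k = K - 1
p_direct = sum(lam ** (k - l) * y[l] * x[l] for l in range(K))
R_direct = sum(lam ** (k - l) * np.outer(y[l], y[l]) for l in range(K))

# One-step recursions: p_k = lam p_{k-1} + y_k x_k*, R_k = lam R_{k-1} + y_k y_k^H
p = np.zeros(4)
R = np.zeros((4, 4))
for l in range(K):
    p = lam * p + y[l] * x[l]
    R = lam * R + np.outer(y[l], y[l])
```

The recursion lets the receiver maintain R_k and p_k with O(N²) work per symbol instead of re-summing the whole history.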
We need R_k^{−1}:
R_k^{−1} = λ^{−1}·R_{k−1}^{−1} − λ^{−1} k_k y_k^H R_{k−1}^{−1},   with k_k = R_k^{−1} y_k
The coefficient update is
w_k = w_{k−1} + k_k (x_k* − y_k^H w_{k−1})
and since (y_k^H w_{k−1})* = w_{k−1}^H y_k,
w_k = w_{k−1} + k_k e_k*,   where e_k = x_k − w_{k−1}^H y_k
With P_k = R_k^{−1}:
π_k^H = y_k^H P_{k−1},   κ_k = λ + π_k^H y_k
Kalman gain: k_k = π_k / κ_k = P_{k−1} y_k / (λ + y_k^H P_{k−1} y_k)   (using P_{k−1} = P_{k−1}^H)
Error: e_k = x_k − w_{k−1}^H y_k
Coefficients: w_k = w_{k−1} + k_k e_k*
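A minimal RLS sketch under assumed parameters (hypothetical two-tap channel, λ = 0.99, BPSK training symbols; an illustration, not the course's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical channel and parameters (assumptions for illustration)
f = np.array([1.0, 0.4])
N = 6
lam = 0.99                       # forgetting factor lambda
n_sym = 2_000

x = rng.choice([-1.0, 1.0], size=n_sym)
y = np.convolve(x, f)[:n_sym] + 0.05 * rng.standard_normal(n_sym)

w = np.zeros(N + 1)
P = 100.0 * np.eye(N + 1)        # P_0 = R_0^{-1}, large initial value
sq_err = []
for k in range(N, n_sym):
    yk = y[k - np.arange(N + 1)]
    pi = P @ yk                               # P_{k-1} y_k
    kk = pi / (lam + yk @ pi)                 # Kalman gain k_k
    e = x[k] - w @ yk                         # a priori error e_k
    w = w + kk * e                            # w_k = w_{k-1} + k_k e_k^*
    P = (P - np.outer(kk, yk) @ P) / lam      # P_k = (1/lam)(P_{k-1} - k_k y_k^H P_{k-1})
    sq_err.append(e * e)
```

Compared with LMS, RLS converges in far fewer symbols at the cost of O(N²) work per update.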