Problem Formulation For Least-Squares Channel Estimation
Systems HO #23, Fall 2000-2001, Yumin Lee
Let

$$\mathbf{y} \equiv \begin{bmatrix} y_{\nu+N-1} \\ y_{\nu+N-2} \\ \vdots \\ y_{\nu} \end{bmatrix}$$

Then

$$\mathbf{y} = \begin{bmatrix} x_{\nu+N-1} & x_{\nu+N-2} & \cdots & x_{N-1} \\ x_{\nu+N-2} & x_{\nu+N-3} & \cdots & x_{N-2} \\ \vdots & \vdots & \ddots & \vdots \\ x_{\nu} & x_{\nu-1} & \cdots & x_{0} \end{bmatrix} \begin{bmatrix} p_0 \\ p_1 \\ \vdots \\ p_{\nu} \end{bmatrix} + \begin{bmatrix} \eta_{\nu+N-1} \\ \eta_{\nu+N-2} \\ \vdots \\ \eta_{\nu} \end{bmatrix} \equiv \mathbf{X}\mathbf{p} + \boldsymbol{\eta}$$
! Note that in order to completely specify $\mathbf{X}$, we need $x_0, x_1, \ldots, x_{N+\nu-1}$ (a total of $N+\nu$ training symbols).
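As a concrete illustration of the structure of $\mathbf{X}$, here is a minimal numerical sketch assuming NumPy; the dimensions and the random ±1 training symbols are illustrative choices, not values from the handout:

```python
import numpy as np

nu = 2          # channel memory (channel has nu+1 taps) -- example value
N = 6           # number of observations used for estimation -- example value
rng = np.random.default_rng(0)

# N + nu training symbols x_0, ..., x_{N+nu-1} (here a random +/-1 sequence)
x = rng.choice([-1.0, 1.0], size=N + nu)

# Build the N x (nu+1) matrix X: row i corresponds to time nu+N-1-i,
# and X[i, j] = x_{(nu+N-1-i) - j}, matching the displayed matrix above.
X = np.empty((N, nu + 1))
for i in range(N):
    for j in range(nu + 1):
        X[i, j] = x[nu + N - 1 - i - j]

print(X)
```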
The least-squares estimate $\hat{\mathbf{p}}$ minimizes $\|\mathbf{y} - \mathbf{X}\hat{\mathbf{p}}\|^2$, which requires the residual to be orthogonal to the columns of $\mathbf{X}$:

$$(\mathbf{y} - \mathbf{X}\hat{\mathbf{p}})^H \mathbf{X} = [0 \; 0 \; \cdots \; 0],$$

or

$$\mathbf{X}^H (\mathbf{y} - \mathbf{X}\hat{\mathbf{p}}) = \mathbf{0}.$$
Thus, the least-squares channel estimate is given by

$$\hat{\mathbf{p}} = (\mathbf{X}^H \mathbf{X})^{-1} \mathbf{X}^H \mathbf{y}.$$
For example, for one particular training sequence (here $\nu = 1$ and $N = 7$, as can be read off from the indices), the estimate evaluates to

$$\hat{\mathbf{p}} = \frac{1}{24} \begin{bmatrix} 4y_7 + 4y_6 + 3y_5 - 3y_4 + 3y_3 - 4y_2 - 3y_1 \\ 4y_7 + 4y_6 - 3y_5 + 3y_4 - 3y_3 - 4y_2 + 3y_1 \end{bmatrix}$$
! Note that the least-squares channel estimator is a linear, time-varying system.
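Continuing the sketch above, the estimate itself is one line: np.linalg.lstsq computes the same quantity as $(\mathbf{X}^H \mathbf{X})^{-1} \mathbf{X}^H \mathbf{y}$ but without forming the inverse explicitly (p_true and sigma are illustrative values):

```python
# True channel taps (illustrative values) and synthetic received samples
p_true = np.array([0.9, 0.4, -0.2])                  # nu+1 = 3 taps
sigma = 0.1                                          # noise standard deviation
y = X @ p_true + sigma * rng.standard_normal(N)

# Least-squares channel estimate p_hat = (X^H X)^{-1} X^H y;
# lstsq solves the same normal equations, but more stably.
p_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(p_hat)        # should be close to p_true
```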
To analyze the estimation error, recall that

$$\hat{\mathbf{p}} = (\mathbf{X}^H \mathbf{X})^{-1} \mathbf{X}^H \mathbf{y}.$$

Since

$$\mathbf{y} = \mathbf{X}\mathbf{p} + \boldsymbol{\eta},$$

we have

$$\hat{\mathbf{p}} = (\mathbf{X}^H \mathbf{X})^{-1} \mathbf{X}^H (\mathbf{X}\mathbf{p} + \boldsymbol{\eta}) = \mathbf{p} + (\mathbf{X}^H \mathbf{X})^{-1} \mathbf{X}^H \boldsymbol{\eta},
so the estimation error is

$$\boldsymbol{\varepsilon} = \hat{\mathbf{p}} - \mathbf{p} = (\mathbf{X}^H \mathbf{X})^{-1} \mathbf{X}^H \boldsymbol{\eta}.$$
The error covariance matrix is therefore

$$\boldsymbol{\Lambda} \equiv E[\boldsymbol{\varepsilon}\boldsymbol{\varepsilon}^H] = E\!\left[(\mathbf{X}^H \mathbf{X})^{-1} \mathbf{X}^H \boldsymbol{\eta}\boldsymbol{\eta}^H \mathbf{X} (\mathbf{X}^H \mathbf{X})^{-1}\right] = (\mathbf{X}^H \mathbf{X})^{-1} \mathbf{X}^H E[\boldsymbol{\eta}\boldsymbol{\eta}^H] \mathbf{X} (\mathbf{X}^H \mathbf{X})^{-1}.$$

If the noise is white with $E[\boldsymbol{\eta}\boldsymbol{\eta}^H] = \sigma^2 \mathbf{I}$, then

$$\boldsymbol{\Lambda} = \sigma^2 (\mathbf{X}^H \mathbf{X})^{-1} (\mathbf{X}^H \mathbf{X}) (\mathbf{X}^H \mathbf{X})^{-1} = \sigma^2 (\mathbf{X}^H \mathbf{X})^{-1}.$$
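As a quick sanity check of this formula, one can average the outer product of the error over many noise realizations and compare against $\sigma^2 (\mathbf{X}^H \mathbf{X})^{-1}$. A sketch reusing X, p_true, sigma, and rng from the snippets above; the trial count is arbitrary:

```python
# Empirically verify Lambda = sigma^2 (X^H X)^{-1} by averaging over noise draws
trials = 20000
errors = np.empty((trials, nu + 1))
for t in range(trials):
    y_t = X @ p_true + sigma * rng.standard_normal(N)
    p_hat_t, *_ = np.linalg.lstsq(X, y_t, rcond=None)
    errors[t] = p_hat_t - p_true

Lambda_empirical = errors.T @ errors / trials
Lambda_theory = sigma**2 * np.linalg.inv(X.T @ X)   # X real here, so X^H = X^T
print(np.round(Lambda_empirical, 4))
print(np.round(Lambda_theory, 4))
```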
! What criterion should the training sequence satisfy in order to get good performance? Note that

$$\mathrm{tr}(\boldsymbol{\Lambda}) = \overline{\varepsilon^2} = \sum_{j=0}^{\nu} E\left[\left|\hat{p}_j - p_j\right|^2\right]$$

is the sum of the mean-square estimation errors of the individual channel taps. Also note that each diagonal element of $\mathbf{X}^H\mathbf{X}$ is equal to a sum of squared training-symbol magnitudes, $\sum_j |x_j|^2$. Therefore, $\mathrm{tr}(\mathbf{X}^H\mathbf{X})$ is proportional to the energy of the training sequence. Thus, we would like to design the training sequence such that $\mathrm{tr}(\boldsymbol{\Lambda})$ is minimized, under the constraint that $\mathrm{tr}(\mathbf{X}^H\mathbf{X})$ is a constant.
! Let $\mathbf{A} = \mathbf{X}^H\mathbf{X}$, and let $\lambda_i$, $i = 0, 1, 2, \ldots, \nu$ be the eigenvalues of $\mathbf{A}$. Then we have

$$\mathrm{tr}(\mathbf{A}) = \sum_{i=0}^{\nu} \lambda_i$$

and

$$\mathrm{tr}(\boldsymbol{\Lambda}) = \sigma^2 \, \mathrm{tr}(\mathbf{A}^{-1}) = \sigma^2 \sum_{i=0}^{\nu} \frac{1}{\lambda_i}.$$

Therefore, we would like to design the training sequence such that $\sigma^2 \sum_{i=0}^{\nu} 1/\lambda_i$ is minimized under the constraint that $\sum_{i=0}^{\nu} \lambda_i$ is a constant. By the arithmetic-harmonic mean inequality, this goal is achieved by requiring all the eigenvalues of $\mathbf{X}^H\mathbf{X}$ to be equal. In this case $\mathbf{X}^H\mathbf{X}$ is a diagonal matrix with equal diagonal elements, and the optimal training sequence should satisfy

$$\mathbf{X}^H\mathbf{X} = \begin{bmatrix} \sum_{i=0}^{N-1} |x_i|^2 & 0 & \cdots & 0 \\ 0 & \sum_{i=0}^{N-1} |x_i|^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sum_{i=0}^{N-1} |x_i|^2 \end{bmatrix}.$$
! In other words, in order to obtain good performance, the (deterministic) autocorrelation of the training sequence should be very small for all nonzero lags within the window $[0, \nu]$. This requirement is very similar to that for the correlator-estimator, and will also be referred to as the whiteness requirement.
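To see the whiteness requirement numerically, one can form $\mathbf{X}^H\mathbf{X}$ for a pseudo-random ±1 training sequence and check that it is close to a scaled identity. A self-contained sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Whiteness check: for a pseudo-random +/-1 training sequence, X^H X is
# nearly N * I, because the deterministic autocorrelation of the sequence
# at nonzero lags is small.
nu, N = 3, 64
x = rng.choice([-1.0, 1.0], size=N + nu)
X = np.array([[x[nu + N - 1 - i - j] for j in range(nu + 1)]
              for i in range(N)])
A = X.T @ X
print(np.round(A, 1))   # diagonal entries ~ 64, off-diagonal much smaller
```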
! If the training sequence satisfies the whiteness requirement with per-symbol energy $E_X$ (so that $\sum_{i=0}^{N-1} |x_i|^2 = N E_X$), then we have

$$\sigma_p^2 \equiv \frac{\mathrm{tr}(\boldsymbol{\Lambda})}{\nu + 1} = \frac{\sigma^2}{N E_X}.$$
! Note that the mean-square estimation error per channel tap is inversely proportional to the signal-to-noise ratio $E_X/\sigma^2$, as one would expect. Furthermore, the mean-square estimation error per channel tap is also inversely proportional to the length $N$ of the training sequence: doubling the training length halves the per-tap estimation error.
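As a final sanity check, a short Monte Carlo sketch (self-contained; the sizes, trial counts, and the per_tap_mse helper are all illustrative) confirms the $\sigma_p^2 = \sigma^2/(N E_X)$ scaling: quadrupling $N$ should reduce the per-tap mean-square error by roughly a factor of four.

```python
import numpy as np

rng = np.random.default_rng(2)

def per_tap_mse(N, nu=2, sigma=0.1, trials=5000):
    """Monte Carlo per-tap MSE for a random +/-1 training sequence (E_X = 1)."""
    x = rng.choice([-1.0, 1.0], size=N + nu)
    X = np.array([[x[nu + N - 1 - i - j] for j in range(nu + 1)]
                  for i in range(N)])
    p = rng.standard_normal(nu + 1)          # arbitrary "true" channel taps
    se = 0.0
    for _ in range(trials):
        y = X @ p + sigma * rng.standard_normal(N)
        p_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        se += np.sum((p_hat - p) ** 2)
    return se / (trials * (nu + 1))

# Expect roughly sigma^2 / (N * E_X) = 0.01 / N in each case
for N in (16, 64):
    print(N, per_tap_mse(N))
```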