
Signal Processing for Comm. Systems HO #23
Fall 2000-2001 / Yumin Lee

LEAST-SQUARES CHANNEL ESTIMATION

Problem formulation for least-squares channel estimation

• The correlator-estimator is a heuristic channel estimator, in the sense that it does not attempt to minimize any criterion and is thus not optimal in any sense. The least-squares channel estimator, on the other hand, is optimal in the least-squares sense.
• Consider the discrete-time equivalent model

$$y_k = \sum_{j=0}^{\nu} p_j x_{k-j} + \eta_k, \qquad k = 0, 1, 2, \ldots$$

Let

$$y \equiv \begin{bmatrix} y_{\nu+N-1} \\ y_{\nu+N-2} \\ \vdots \\ y_{\nu} \end{bmatrix}.$$

Then

$$y = \begin{bmatrix} x_{\nu+N-1} & x_{\nu+N-2} & \cdots & x_{N-1} \\ x_{\nu+N-2} & x_{\nu+N-3} & \cdots & x_{N-2} \\ \vdots & \vdots & \ddots & \vdots \\ x_{\nu} & x_{\nu-1} & \cdots & x_{0} \end{bmatrix} \begin{bmatrix} p_0 \\ p_1 \\ \vdots \\ p_{\nu} \end{bmatrix} + \begin{bmatrix} \eta_{\nu+N-1} \\ \eta_{\nu+N-2} \\ \vdots \\ \eta_{\nu} \end{bmatrix} \equiv Xp + \eta.$$

Assuming that $X$ is known, we would like to find a channel estimate $\hat{p}$ based on the observation $y$ such that $y \approx X\hat{p}$, i.e., such that $\|y - X\hat{p}\|^2$ is minimized.

• Note that in order to completely specify $X$, we need $x_0, x_1, \ldots, x_{N+\nu-1}$ (a total of $N+\nu$ training symbols); a sketch of this construction follows below.
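
As a concrete illustration, the following is a minimal NumPy sketch of how $X$ can be assembled from the training symbols; the function name and conventions are illustrative, not part of the handout.

```python
import numpy as np

def build_training_matrix(x, nu, N):
    """Assemble the N x (nu+1) matrix X from the training symbols
    x[0], ..., x[N+nu-1]: row i corresponds to time k = nu+N-1-i and
    contains x[k], x[k-1], ..., x[k-nu], matching the model above."""
    assert len(x) >= N + nu, "need at least N+nu training symbols"
    return np.array([[x[k - j] for j in range(nu + 1)]
                     for k in range(nu + N - 1, nu - 1, -1)])
```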

Solution for the Least-Squares Channel Estimator


• Since $X\hat{p}$ is a linear combination of the columns of $X$ with coefficients taken from the vector $\hat{p}$, we are essentially looking for the linear combination of the columns of $X$ that is closest (in Euclidean distance) to the observation vector $y$.
• In other words, we want to find the vector in the column space of $X$ that is closest in Euclidean distance to $y$. Therefore, the optimal choice of $X\hat{p}$ is the projection of $y$ onto the column space of $X$.


• The vector $\hat{p}$ must therefore be chosen in such a way that $(y - X\hat{p})$ is orthogonal to every column of $X$. (Note: this is the orthogonality principle.) In other words, we must have

$$(y - X\hat{p})^H X = \begin{bmatrix} 0 & 0 & \cdots & 0 \end{bmatrix},$$

or

$$X^H (y - X\hat{p}) = 0.$$

Thus, the least-squares channel estimate is given by

$$\hat{p} = \left(X^H X\right)^{-1} X^H y.$$

• The matrix $(X^H X)^{-1} X^H$ is referred to as the pseudo-inverse of $X$ when $X$ is a tall matrix (i.e., it has more rows than columns). Note that $(X^H X)^{-1} X^H$ is a $(\nu+1) \times N$ matrix that can be pre-computed. Therefore, least-squares channel estimation is simply a matrix-vector multiplication, as in the sketch below.
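
A minimal sketch of this computation in NumPy (the helper name is illustrative; `np.linalg.lstsq` is used rather than forming the inverse explicitly, which is algebraically equivalent when $X$ has full column rank but numerically safer):

```python
import numpy as np

def ls_channel_estimate(X, y):
    """Least-squares channel estimate p_hat = (X^H X)^{-1} X^H y."""
    p_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return p_hat

# Equivalent to the handout's closed form (assumes full column rank):
# p_hat = np.linalg.inv(X.conj().T @ X) @ (X.conj().T @ y)
```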
• An example: $\nu = 1$, $N = 7$. Assume that $(x_0, x_1, \ldots, x_6, x_7) = (1, -1, -1, 1, -1, 1, 1, 1)$. Then we have

$$X = \begin{bmatrix} 1 & 1 \\ 1 & 1 \\ 1 & -1 \\ -1 & 1 \\ 1 & -1 \\ -1 & -1 \\ -1 & 1 \end{bmatrix}.$$
Thus

$$X^H X = \begin{bmatrix} 7 & -1 \\ -1 & 7 \end{bmatrix},$$

and

$$\left(X^H X\right)^{-1} X^H = \frac{1}{48} \begin{bmatrix} 7 & 1 \\ 1 & 7 \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 & -1 & 1 & -1 & -1 \\ 1 & 1 & -1 & 1 & -1 & -1 & 1 \end{bmatrix} = \frac{1}{48} \begin{bmatrix} 8 & 8 & 6 & -6 & 6 & -8 & -6 \\ 8 & 8 & -6 & 6 & -6 & -8 & 6 \end{bmatrix}.$$
Therefore,

$$\hat{p} = \frac{1}{24} \begin{bmatrix} 4y_7 + 4y_6 + 3y_5 - 3y_4 + 3y_3 - 4y_2 - 3y_1 \\ 4y_7 + 4y_6 - 3y_5 + 3y_4 - 3y_3 - 4y_2 + 3y_1 \end{bmatrix}.$$

Note that the least-squares channel estimator is a linear, time-varying system. The numbers in this example are verified numerically below.
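
The check below is an illustrative NumPy sketch, not part of the handout.

```python
import numpy as np

x = np.array([1, -1, -1, 1, -1, 1, 1, 1])      # (x0, ..., x7); nu=1, N=7
# Rows correspond to k = 7, 6, ..., 1; row k is [x_k, x_{k-1}].
X = np.array([[x[k], x[k - 1]] for k in range(7, 0, -1)])

A = X.T @ X
print(A)                       # [[ 7 -1]
                               #  [-1  7]]
W = np.linalg.inv(A) @ X.T     # the precomputable 2 x 7 matrix
print(48 * W)                  # [[ 8.  8.  6. -6.  6. -8. -6.]
                               #  [ 8.  8. -6.  6. -6. -8.  6.]]
```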


Requirement on Training Sequence for Least-Squares Channel Estimation

• The least-squares channel estimator is given by

$$\hat{p} = \left(X^H X\right)^{-1} X^H y.$$

Since

$$y = Xp + \eta,$$

we have

$$\hat{p} = \left(X^H X\right)^{-1} X^H (Xp + \eta) = p + \left(X^H X\right)^{-1} X^H \eta.$$

• For channel estimation, the number of observations $N$ must be greater than or equal to the number of channel taps $\nu + 1$, i.e., $X$ must be a tall matrix. Usually, we need $N \geq 2\nu$ to get good performance.
• If there is no noise ($\eta = 0$), then $\hat{p} = p$ regardless of the choice of $X$. Furthermore, if $E[\eta] = 0$, then we have $E(\hat{p}) = p$, i.e., the least-squares channel estimate is unbiased regardless of the choice of $X$.
• The estimation error vector is given by

$$\varepsilon = \hat{p} - p = \left(X^H X\right)^{-1} X^H \eta.$$

The error covariance matrix is given by

$$\Lambda \equiv E\left[\varepsilon \varepsilon^H\right] = E\left[\left(X^H X\right)^{-1} X^H \eta \eta^H X \left(X^H X\right)^{-1}\right] = \left(X^H X\right)^{-1} X^H E\left[\eta \eta^H\right] X \left(X^H X\right)^{-1}.$$

Assuming that $E[\eta \eta^H] = \sigma^2 I$, we have

$$\Lambda = \sigma^2 \left(X^H X\right)^{-1} \left(X^H X\right) \left(X^H X\right)^{-1} = \sigma^2 \left(X^H X\right)^{-1}.$$
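
A small Monte Carlo experiment can make this concrete. The sketch below reuses the $\nu=1$, $N=7$ example with arbitrary, assumed tap values; it is illustrative only.

```python
import numpy as np

# Empirical check that the error covariance approaches sigma^2 (X^H X)^{-1}.
rng = np.random.default_rng(0)
X = np.array([[1, 1], [1, 1], [1, -1], [-1, 1],
              [1, -1], [-1, -1], [-1, 1]], dtype=float)
p = np.array([0.9, 0.4])                  # assumed true taps
sigma, trials = 0.1, 50_000
E = np.empty((trials, 2))
for t in range(trials):
    y = X @ p + sigma * rng.standard_normal(7)
    p_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    E[t] = p_hat - p
print(E.T @ E / trials)                   # empirical error covariance
print(sigma**2 * np.linalg.inv(X.T @ X))  # sigma^2 (X^H X)^{-1}
```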

• What criterion should the training sequence satisfy in order to get good performance? Note that

$$\operatorname{tr}(\Lambda) = E\|\varepsilon\|^2 = \sum_{j=0}^{\nu} E\left|\hat{p}_j - p_j\right|^2$$

is the sum of the mean-square estimation errors of the individual channel taps. On the other hand, if we further assume that $x_0, x_1, \ldots, x_{N+\nu-1}$ is taken from a periodic sequence with period $N$, then it is easily verified that all the diagonal elements of $X^H X$ are equal to $\sum_{j=0}^{N-1} |x_j|^2$. Therefore, $\operatorname{tr}(X^H X)$ is proportional to the energy of the training sequence. Thus, we would like to design the training sequence such that $\operatorname{tr}(\Lambda)$ is minimized, under the constraint that $\operatorname{tr}(X^H X)$ is a constant.


• Let $A = X^H X$, and let $\lambda_i$, $i = 0, 1, 2, \ldots, \nu$, be the eigenvalues of $A$. Then we have

$$\operatorname{tr}(A) = \sum_{i=0}^{\nu} \lambda_i$$

and

$$\operatorname{tr}(\Lambda) = \sigma^2 \operatorname{tr}\left(A^{-1}\right) = \sigma^2 \sum_{i=0}^{\nu} \frac{1}{\lambda_i}.$$

Therefore, we would like to design the training sequence such that $\sigma^2 \sum_{i=0}^{\nu} 1/\lambda_i$ is minimized under the constraint that $\sum_{i=0}^{\nu} \lambda_i$ is a constant. By the arithmetic-harmonic mean inequality, this goal is achieved by requiring all the eigenvalues of $X^H X$ to be equal. In this case $X^H X$ is a multiple of the identity matrix, and the optimal training sequence should satisfy

$$X^H X = \begin{bmatrix} \sum_{i=0}^{N-1} |x_i|^2 & 0 & \cdots & 0 \\ 0 & \sum_{i=0}^{N-1} |x_i|^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sum_{i=0}^{N-1} |x_i|^2 \end{bmatrix}.$$
• In other words, in order to obtain good performance, the (deterministic) autocorrelation of the training sequence should be very small at all nonzero lags within the window $[0, \nu]$. This requirement is very similar to that for the correlator-estimator, and will also be referred to as the whiteness requirement.
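
For the $\nu=1$, $N=7$ example above, the eigenvalue spread of $X^H X$ can be inspected directly; the sketch below is illustrative only.

```python
import numpy as np

# tr(Lambda) is proportional to the sum of reciprocal eigenvalues of
# X^H X, which is smallest (for a fixed trace) when they are all equal.
X = np.array([[1, 1], [1, 1], [1, -1], [-1, 1],
              [1, -1], [-1, -1], [-1, 1]], dtype=float)
lam = np.linalg.eigvalsh(X.T @ X)
print(lam)                          # [6. 8.]: nearly equal, trace 14
print(np.sum(1.0 / lam))            # 0.2917..., proportional to tr(Lambda)
print(len(lam)**2 / np.sum(lam))    # 0.2857..., attained iff all equal
```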

Performance of the Least-Squares Channel Estimator

• A measure of the performance of a channel estimator is the mean-square estimation error per channel tap, defined as

$$\sigma_p^2 \equiv \frac{E\|\varepsilon\|^2}{\nu + 1} = \frac{\operatorname{tr}(\Lambda)}{\nu + 1}.$$
• Assume that the whiteness requirement is satisfied and that

$$\frac{1}{N} \sum_{i=0}^{N-1} |x_i|^2 = E_X.$$

Then we have

$$\sigma_p^2 \equiv \frac{\operatorname{tr}(\Lambda)}{\nu + 1} = \frac{\sigma^2}{N E_X}.$$
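
An illustrative Monte Carlo sketch (again reusing the $\nu=1$, $N=7$ example with assumed tap values) comparing the empirical per-tap error with $\sigma^2/(N E_X)$; the match is only approximate here because that training sequence satisfies the whiteness requirement only approximately.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[1, 1], [1, 1], [1, -1], [-1, 1],
              [1, -1], [-1, -1], [-1, 1]], dtype=float)
N, nu, sigma = 7, 1, 0.1
E_X = 1.0                           # (1/N) sum |x_i|^2 for the +/-1 sequence
p = np.array([0.9, 0.4])            # assumed true taps
trials, sse = 50_000, 0.0
for _ in range(trials):
    y = X @ p + sigma * rng.standard_normal(N)
    p_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse += np.sum((p_hat - p) ** 2)
print(sse / (trials * (nu + 1)))    # empirical sigma_p^2, ~1.46e-3
print(sigma**2 / (N * E_X))         # predicted sigma^2/(N E_X), ~1.43e-3
```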


• Note that the mean-square estimation error per channel tap is inversely proportional to the signal-to-noise ratio $E_X / \sigma^2$, as one would expect. Furthermore, the mean-square estimation error per channel tap is also inversely proportional to the length $N$ of the training sequence.
