
ECE 4110: Random Signals in Communications and Signal Processing

ECE Department, Cornell University, Fall 2011


Instructor: Salman Avestimehr

Homework 4 Solutions
By: Sina Lashgari

1. Let Y_i = X + Z_i for i = 1, 2, ..., n be n observations of a signal X ~ N(0, P). The additive noise components Z_1, Z_2, ..., Z_n are zero-mean jointly Gaussian random variables that are independent of X. Furthermore, assume that the noise components Z_1, ..., Z_n are uncorrelated, each with variance N. Find the best MSE estimate of X given Y_1, Y_2, ..., Y_n and its MSE. Hint: It might be convenient to assume a form of the estimator and use the orthogonality principle to claim optimality.

Problem 1 Solution

Following the hint, we guess a form for the estimator and then prove its optimality in terms of mean square error. First, note that since X and Y are jointly Gaussian, we have E[X|Y] = L[X|Y]. Therefore, the MMSE estimator is a linear combination of the Y_i's, and it is sufficient to search within the class of linear functions: we need a linear combination of the Y_i's whose error is orthogonal to each of Y_1 through Y_n. We consider X̂ = c Σ_{i=1}^n Y_i, where c is an unknown constant. For E[(X - c Σ_{i=1}^n Y_i) Y_j] to equal zero for every j in {1, 2, ..., n} we need

E[X Y_j] - c Σ_{i=1}^n E[Y_i Y_j] = P - c(nP + N) = 0,

so c = P/(nP + N). Therefore,

X̂ = (P/(nP + N)) Σ_{i=1}^n Y_i.

The resulting error is

E[(X - X̂)^2] = P - nP^2/(nP + N) = PN/(nP + N).

As n → ∞ or N → 0 this error goes to zero.
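As a quick numerical sanity check, here is a minimal simulation sketch with assumed illustrative values P = 2, N = 1, n = 5 (none of which are specified in the problem): it compares the empirical MSE of X̂ = (P/(nP+N)) Σ_i Y_i with the closed form PN/(nP + N).

```python
import numpy as np

# Illustrative parameters (assumed, not given in the problem)
P, N, n = 2.0, 1.0, 5
trials = 200_000
rng = np.random.default_rng(0)

X = rng.normal(0.0, np.sqrt(P), size=trials)        # X ~ N(0, P)
Z = rng.normal(0.0, np.sqrt(N), size=(trials, n))   # Z_i ~ N(0, N), independent of X
Y = X[:, None] + Z                                  # Y_i = X + Z_i

c = P / (n * P + N)                                 # coefficient from the orthogonality principle
X_hat = c * Y.sum(axis=1)

empirical_mse = np.mean((X - X_hat) ** 2)
theoretical_mse = P * N / (n * P + N)               # = P - nP^2/(nP+N)
print(empirical_mse, theoretical_mse)               # the two numbers should be close
```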

2. Suppose (X, Y_1, Y_2) is a zero-mean Gaussian random vector with covariance matrix

K = [ 4  2  1
      2  4  2
      1  2  1 ].

(a) Find the conditional pdf f_{X|Y_1,Y_2}(x|y_1, y_2).
(b) Calculate E[X|Y_1, Y_2].

Problem 2 Solution

(a) Since we are dealing with jointly Gaussian random variables, given Y_1 and Y_2, X is a Gaussian random variable with mean E[X|Y_1, Y_2] and variance equal to the MMSE error. The covariance matrix K_Y = [[4, 2], [2, 1]] is singular, so Y = (Y_1, Y_2) does not have a joint density. Indeed,

E[(Y_1 - 2Y_2)^2] = var(Y_1) - 4 cov(Y_1, Y_2) + 4 var(Y_2) = 4 - 8 + 4 = 0,

which shows that Y_1 = 2Y_2 with probability one. Thus it suffices to condition on Y_1 alone, i.e., to find f_{X|Y_1}. For jointly Gaussian random variables E[X|Y_1] = L[X|Y_1] = aY_1, where a is determined by orthogonality:

E[(X - aY_1)Y_1] = 0  ⟹  cov(X, Y_1) - a var(Y_1) = 0  ⟹  a = 2/4 = 0.5.

Since we are dealing with zero-mean Gaussian random variables, zero correlation implies that X - 0.5Y_1 and Y_1 are independent. Writing X = (X - 0.5Y_1) + 0.5Y_1, we conclude that given Y_1 = y_1, X ~ N(0.5 y_1, σ^2), where σ^2 = var(X - 0.5Y_1) = 4 - 2 + 1 = 3. Therefore

f_{X|Y_1,Y_2}(x|y_1, y_2) = (1/√(6π)) exp{ -(x - 0.5 y_1)^2 / 6 },   if y_1 = 2 y_2.

(b) E[X|Y_1, Y_2] = E[X|Y_1] = 0.5 Y_1.
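The short check below (a sketch using only the covariance matrix above) confirms that K_Y is singular with Y_1 = 2Y_2, and recovers the coefficient a = 0.5 and the conditional variance 3 by direct linear algebra on the (X, Y_1) block.

```python
import numpy as np

# Covariance of (X, Y1, Y2) from the problem statement
K = np.array([[4.0, 2.0, 1.0],
              [2.0, 4.0, 2.0],
              [1.0, 2.0, 1.0]])

K_Y = K[1:, 1:]                   # covariance of (Y1, Y2)
print(np.linalg.det(K_Y))         # 0.0: Y is degenerate, no joint density

# Variance of Y1 - 2*Y2 should be zero, i.e. Y1 = 2*Y2 almost surely
w = np.array([1.0, -2.0])
print(w @ K_Y @ w)                # 0.0

# Condition on Y1 only: a = cov(X, Y1)/var(Y1); conditional variance = var(X) - a*cov(X, Y1)
a = K[0, 1] / K[1, 1]
cond_var = K[0, 0] - a * K[0, 1]
print(a, cond_var)                # 0.5, 3.0
```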

3. Suppose that g(Y) is the linear least-squares error (LLSE) estimator for X given Y:

g(Y) = L[X|Y] = K_{XY} K_Y^{-1} (Y - E[Y]) + E[X].

Determine the mean square error E[(X - g(Y))^2] in terms of the means, covariances, and cross-covariances of X and Y.

Problem 3 Solution

E[(X - g(Y))^2] = E[(X - E[X] - K_{XY} K_Y^{-1}(Y - E[Y]))^2]
= E[(X - E[X])^2] + E[(K_{XY} K_Y^{-1}(Y - E[Y]))^2] - 2 E[(X - E[X]) K_{XY} K_Y^{-1}(Y - E[Y])]
= var(X) + K_{XY} K_Y^{-1} E[(Y - E[Y])(Y - E[Y])^T] (K_Y^{-1})^T K_{XY}^T - 2 K_{XY} K_Y^{-1} E[(Y - E[Y])(X - E[X])]
= var(X) + K_{XY} K_Y^{-1} K_Y K_Y^{-1} K_{XY}^T - 2 K_{XY} K_Y^{-1} K_{XY}^T
= var(X) - K_{XY} K_Y^{-1} K_{XY}^T,

where we have used the symmetry of K_Y (so that (K_Y^{-1})^T = K_Y^{-1}) and E[(Y - E[Y])(X - E[X])] = K_{XY}^T.
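To illustrate the formula, the sketch below uses an arbitrarily assumed jointly Gaussian model (the mean vector and covariance K are made-up example values) and compares var(X) - K_{XY} K_Y^{-1} K_{XY}^T with the Monte Carlo MSE of the LLSE estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed example: (X, Y1, Y2) jointly Gaussian with illustrative mean and covariance
mean = np.array([1.0, 0.0, 2.0])           # (E[X], E[Y1], E[Y2])
K = np.array([[3.0, 1.0, 0.5],
              [1.0, 2.0, 0.3],
              [0.5, 0.3, 1.0]])

K_XY = K[0, 1:]                            # cross-covariance of X with Y
K_Y = K[1:, 1:]                            # covariance of Y
mse_formula = K[0, 0] - K_XY @ np.linalg.solve(K_Y, K_XY)

# Monte Carlo check of E[(X - g(Y))^2] for g(Y) = K_XY K_Y^{-1} (Y - E[Y]) + E[X]
samples = rng.multivariate_normal(mean, K, size=200_000)
X, Y = samples[:, 0], samples[:, 1:]
g = mean[0] + (Y - mean[1:]) @ np.linalg.solve(K_Y, K_XY)
print(mse_formula, np.mean((X - g) ** 2))  # the two numbers should agree closely
```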

4. Let X be a Gaussian random vector with mean [1 4 6]^T and covariance matrix

[ 3  1  0
  1  2  1
  0  1  1 ].

(a) Compute E[X_1|X_2] and E[(X_1 - E[X_1|X_2])^2].
(b) Compute E[X_1|X_3] and E[(X_1 - E[X_1|X_3])^2].
(c) Compute E[X_1|X_2, X_3] and E[(X_1 - E[X_1|X_2, X_3])^2].
(d) Note that X_1 and X_3 are uncorrelated, and hence independent. Yet E[X_1|X_2, X_3] is a function of both X_2 and X_3. Why is that?

Problem 4 Solution

(a) The MMSE estimator in this case is an affine function:

E[X_1|X_2] = E[X_1] + cov(X_1, X_2)(var(X_2))^{-1}(X_2 - E[X_2]) = 1 + (1/2)(X_2 - 4) = X_2/2 - 1,
E[(X_1 - E[X_1|X_2])^2] = var(X_1) - (cov(X_1, X_2))^2 / var(X_2) = 3 - 1/2 = 5/2.

(b) X_1 and X_3 are Gaussian and uncorrelated, therefore independent:

E[X_1|X_3] = E[X_1] = 1,
E[(X_1 - E[X_1|X_3])^2] = var(X_1) = 3.

(c) Let Y^T = [X_2 X_3], so that K_{X_1,Y} = [1 0] and K_Y = [[2, 1], [1, 1]] with K_Y^{-1} = [[1, -1], [-1, 2]]. Then

E[X_1|Y] = E[X_1] + K_{X_1,Y} K_Y^{-1}(Y - E[Y]) = 1 + [1 -1]([X_2 X_3]^T - [4 6]^T) = X_2 - X_3 + 3,
E[(X_1 - E[X_1|Y])^2] = var(X_1) - K_{X_1,Y} K_Y^{-1} K_{X_1,Y}^T = 3 - 1 = 2.

(d) Independence of X_1 and X_3 does not imply their independence when conditioned on X_2, because X_2 may contain information common to X_1 and X_3. In general,

X_1, X_3 independent  ⇏  (X_1, X_3) | X_2 independent.
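The following sketch checks the arithmetic in parts (a)-(c): it computes the coefficients of each conditional mean and the corresponding MSE directly from the given mean and covariance matrix, using the standard Gaussian conditioning formulas.

```python
import numpy as np

mu = np.array([1.0, 4.0, 6.0])
K = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0]])

def conditional(i, obs):
    """Weights w with E[X_i | X_obs] = mu_i + w . (X_obs - mu_obs), and the resulting MSE."""
    obs = list(obs)
    K_oo = K[np.ix_(obs, obs)]        # covariance of the observed block
    k_io = K[i, obs]                  # cross-covariance of X_i with the observations
    w = np.linalg.solve(K_oo, k_io)
    mse = K[i, i] - k_io @ w
    return w, mse

print(conditional(0, [1]))      # (a): w = [0.5],    MSE = 2.5
print(conditional(0, [2]))      # (b): w = [0.0],    MSE = 3.0
print(conditional(0, [1, 2]))   # (c): w = [1, -1],  MSE = 2.0
```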

5. Let X_n be a sequence of i.i.d. equiprobable Bernoulli random variables and let Y_n = 2^n X_1 X_2 ... X_n.

(a) Does this sequence converge almost surely, and if so, to what limit?
(b) Does this sequence converge in mean square, and if so, to what limit?

Problem 5 Solution

(a) We have

Y_n = 2^n X_1 X_2 ... X_n = 2^n if X_1 = X_2 = ... = X_n = 1, and 0 otherwise.

Since P(Y_n ≠ 0) = 2^{-n} → 0, and the events {Y_n = 0} are increasing in n (once some X_i = 0, every later Y_n is also 0), with probability one Y_n = 0 for all sufficiently large n. Therefore Y_n → 0 a.s.

(b) E[Y_n] = 2^n P(X_1 = X_2 = ... = X_n = 1) + 0 · (1 - P(X_1 = X_2 = ... = X_n = 1)) = 2^n (1/2^n) = 1.

Furthermore,

E[Y_n^2] = (2^n)^2 (1/2^n) = 2^n → ∞.

Thus Y_n does not converge to zero in the m.s. sense.
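A small simulation sketch (with an assumed horizon of n up to 12 and an assumed number of trials) makes the contrast concrete: the empirical mean of Y_n stays near 1 and almost every sample path is eventually 0, yet the empirical second moment grows like 2^n.

```python
import numpy as np

rng = np.random.default_rng(2)
trials, n_max = 500_000, 12

# X_i are i.i.d. equiprobable Bernoulli; Y_n = 2^n * X_1 * ... * X_n
X = rng.integers(0, 2, size=(trials, n_max))
prod = np.cumprod(X, axis=1)                     # running product X_1 ... X_n
Y = prod * (2.0 ** np.arange(1, n_max + 1))      # Y_n for n = 1, ..., n_max

for n in (4, 8, 12):
    Yn = Y[:, n - 1]
    print(n, Yn.mean(), (Yn ** 2).mean(), np.mean(Yn == 0))
    # mean stays near 1, second moment is of order 2^n (noisy for large n),
    # and the fraction of zero paths is near 1 - 2^{-n}
```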

6. Suppose X_n → X in m.s. and Y_n → Y in m.s. Show that

(a) X_n + Y_n → X + Y in m.s. Hint: You may find the following inequality useful: (a + b)^2 ≤ 2a^2 + 2b^2.
(b) E[(X_n + Y_n)^2] → E[(X + Y)^2].
(c) E[X_n Y_n] → E[XY].

Problem 6 Solution

(a) E[(X_n + Y_n - (X + Y))^2] = E[((X_n - X) + (Y_n - Y))^2]
≤ E[2(X_n - X)^2 + 2(Y_n - Y)^2]
= 2 E[(X_n - X)^2] + 2 E[(Y_n - Y)^2] → 0 as n → ∞,

where we have used the hint and the fact that expectation is monotone.

(b) Write

E[(X_n + Y_n)^2] = E[((X + Y) + (X_n + Y_n - (X + Y)))^2]
= E[(X + Y)^2] + E[(X_n + Y_n - (X + Y))^2] + 2 E[(X + Y)(X_n + Y_n - (X + Y))].

From part (a) we know that the second term goes to zero as n → ∞. We use the Cauchy-Schwarz inequality to show that the last term also converges to zero:

E[|(X + Y)(X_n + Y_n - (X + Y))|] ≤ E[(X + Y)^2]^{0.5} E[(X_n + Y_n - (X + Y))^2]^{0.5} → 0.

Therefore E[(X_n + Y_n)^2] → E[(X + Y)^2].

(c) Write

E[X_n Y_n] = E[(X + (X_n - X))(Y + (Y_n - Y))]
= E[XY] + E[X(Y_n - Y)] + E[Y(X_n - X)] + E[(X_n - X)(Y_n - Y)].

Since Y_n → Y in m.s., the Cauchy-Schwarz inequality gives

E[|X(Y_n - Y)|] ≤ E[X^2]^{0.5} E[(Y_n - Y)^2]^{0.5} → 0.

Similarly, the third and fourth terms also go to zero, and therefore E[X_n Y_n] → E[XY].

Alternatively, you can use the identity X_n Y_n = ((X_n + Y_n)^2 - X_n^2 - Y_n^2)/2. From part (b) we know that E[(X_n + Y_n)^2] → E[(X + Y)^2], and we can similarly show that E[X_n^2] → E[X^2] and E[Y_n^2] → E[Y^2]. Using these facts,

E[X_n Y_n] = E[((X_n + Y_n)^2 - X_n^2 - Y_n^2)/2] → E[((X + Y)^2 - X^2 - Y^2)/2] = E[XY].
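To make the statements concrete, here is a small sketch under an assumed model X_n = X + U_n/n and Y_n = Y + V_n/n with U_n, V_n independent standard normals (so both sequences converge in m.s.): it checks the bound from the hint in part (a) and shows the empirical E[X_n Y_n] settling near E[XY].

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 500_000

# Assumed example: X ~ N(0,1), Y = 2X + 1, so E[XY] = 2
X = rng.normal(size=trials)
Y = 2 * X + 1

for n in (1, 5, 50):
    Xn = X + rng.normal(size=trials) / n          # E[(Xn - X)^2] = 1/n^2 -> 0
    Yn = Y + rng.normal(size=trials) / n
    lhs = np.mean((Xn + Yn - (X + Y)) ** 2)
    rhs = 2 * np.mean((Xn - X) ** 2) + 2 * np.mean((Yn - Y) ** 2)
    print(n, lhs, rhs, np.mean(Xn * Yn))          # lhs <= rhs, both -> 0; last column stays near E[XY] = 2
```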

7. Let X_1, X_2, ... be a sequence of random variables with mean μ and covariance cov(X_i, X_j) = σ^2 ρ^{|i-j|}, where |ρ| < 1. Let

S_n = (1/n) Σ_{i=1}^n X_i.

Show that S_n → μ in m.s.

Problem 7 Solution

E[(S_n - μ)^2] = (1/n^2) E[ Σ_{i=1}^n Σ_{j=1}^n (X_i - μ)(X_j - μ) ]
= (1/n^2) Σ_{i=1}^n Σ_{j=1}^n E[(X_i - μ)(X_j - μ)]
= (1/n^2) Σ_{i=1}^n Σ_{j=1}^n cov(X_i, X_j)
= (1/n^2) [ n σ^2 + 2(n-1) σ^2 ρ + 2(n-2) σ^2 ρ^2 + ... + 2 σ^2 ρ^{n-1} ]
≤ (1/n^2) [ 2n σ^2 + 2n σ^2 |ρ| + 2n σ^2 |ρ|^2 + ... + 2n σ^2 |ρ|^{n-1} ]
= (2σ^2/n) [ 1 + |ρ| + |ρ|^2 + ... + |ρ|^{n-1} ]
≤ (2σ^2/n) · 1/(1 - |ρ|) → 0 as n → ∞,

where the last inequality is the geometric series bound, valid because |ρ| < 1. Hence S_n → μ in m.s.
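A numerical sketch with assumed values μ = 1, σ = 2, ρ = 0.7: a stationary Gaussian AR(1)-type construction has exactly the covariance σ^2 ρ^{|i-j|}, and the empirical E[(S_n - μ)^2] shrinks with n and stays below the bound 2σ^2/(n(1 - |ρ|)) derived above.

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, rho = 1.0, 2.0, 0.7        # illustrative parameter choices
trials, n_max = 20_000, 400

# Stationary Gaussian AR(1) construction: cov(X_i, X_j) = sigma^2 * rho^{|i-j|}
Z = rng.normal(size=(trials, n_max))
X = np.empty((trials, n_max))
X[:, 0] = mu + sigma * Z[:, 0]
for k in range(1, n_max):
    X[:, k] = mu + rho * (X[:, k - 1] - mu) + sigma * np.sqrt(1 - rho**2) * Z[:, k]

for n in (10, 100, 400):
    Sn = X[:, :n].mean(axis=1)
    bound = 2 * sigma**2 / (n * (1 - abs(rho)))
    print(n, np.mean((Sn - mu) ** 2), bound)   # empirical MSE decreases and stays below the bound
```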
