
Nov. 22, 2003, revised Dec. 27, 2003
Hayashi Econometrics

Solution to Chapter 1 Analytical Exercises


1. (Reproducing the answer on p. 84 of the book)
\[
\begin{aligned}
(y - X\tilde\beta)'(y - X\tilde\beta)
&= [(y - Xb) + X(b - \tilde\beta)]'[(y - Xb) + X(b - \tilde\beta)]
&& \text{(by the add-and-subtract strategy)} \\
&= [(y - Xb)' + (b - \tilde\beta)'X'][(y - Xb) + X(b - \tilde\beta)] \\
&= (y - Xb)'(y - Xb) + (b - \tilde\beta)'X'(y - Xb) \\
&\qquad + (y - Xb)'X(b - \tilde\beta) + (b - \tilde\beta)'X'X(b - \tilde\beta) \\
&= (y - Xb)'(y - Xb) + 2(b - \tilde\beta)'X'(y - Xb) + (b - \tilde\beta)'X'X(b - \tilde\beta)
&& \text{(since } (b - \tilde\beta)'X'(y - Xb) = (y - Xb)'X(b - \tilde\beta)\text{)} \\
&= (y - Xb)'(y - Xb) + (b - \tilde\beta)'X'X(b - \tilde\beta)
&& \text{(since } X'(y - Xb) = 0 \text{ by the normal equations)} \\
&\ge (y - Xb)'(y - Xb)
&& \Bigl(\text{since } (b - \tilde\beta)'X'X(b - \tilde\beta) = z'z = \textstyle\sum_{i=1}^n z_i^2 \ge 0, \text{ where } z \equiv X(b - \tilde\beta)\Bigr).
\end{aligned}
\]
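As a numerical illustration of this decomposition, here is a minimal sketch (assuming NumPy; the simulated data and variable names are made up purely for illustration) checking that any trial coefficient vector $\tilde\beta$ yields a sum of squared residuals that exceeds the OLS value by exactly $(b - \tilde\beta)'X'X(b - \tilde\beta)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

# OLS coefficients from the normal equations X'X b = X'y
b = np.linalg.solve(X.T @ X, X.T @ y)

def ssr(beta):
    r = y - X @ beta
    return r @ r

beta_tilde = b + rng.normal(size=K)        # an arbitrary alternative coefficient vector
d = b - beta_tilde
# SSR(beta_tilde) = SSR(b) + (b - beta_tilde)'X'X(b - beta_tilde) >= SSR(b)
print(np.isclose(ssr(beta_tilde), ssr(b) + d @ X.T @ X @ d))   # True
print(ssr(beta_tilde) >= ssr(b))                               # True
```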

2. (a), (b). If $X$ is an $n \times K$ matrix of full column rank, then $X'X$ is symmetric and invertible.
It is very straightforward to show (and indeed you've been asked to show in the text) that
$M_X \equiv I_n - X(X'X)^{-1}X'$ is symmetric and idempotent and that $M_X X = 0$. In this question,
set $X = \mathbf{1}$ (the vector of ones).

(c)
\[
\begin{aligned}
M_{\mathbf 1}\, y &= [I_n - \mathbf 1(\mathbf 1'\mathbf 1)^{-1}\mathbf 1']\,y \\
&= y - \tfrac{1}{n}\mathbf 1\mathbf 1'y && \text{(since } \mathbf 1'\mathbf 1 = n\text{)} \\
&= y - \mathbf 1\,\tfrac{1}{n}\textstyle\sum_{i=1}^n y_i = y - \mathbf 1\,\bar y.
\end{aligned}
\]

(d) Replace y by X in (c).
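The claims in (a) through (c) are easy to spot-check numerically. A minimal sketch (assuming NumPy; the simulated data are for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 20, 3
X = rng.normal(size=(n, K))
y = rng.normal(size=n)

# M_X = I - X(X'X)^{-1}X' is symmetric, idempotent, and annihilates X
M_X = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T
print(np.allclose(M_X, M_X.T))          # symmetric
print(np.allclose(M_X @ M_X, M_X))      # idempotent
print(np.allclose(M_X @ X, 0))          # M_X X = 0

# Special case X = 1 (vector of ones): M_1 y demeans y
ones = np.ones(n)
M_1 = np.eye(n) - np.outer(ones, ones) / n
print(np.allclose(M_1 @ y, y - y.mean()))
```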


3. Special case of the solution to the next exercise.
4. From the normal equations (1.2.3) of the text, we obtain
(a)
\[
\begin{pmatrix} X_1' \\ X_2' \end{pmatrix}
\begin{pmatrix} X_1 & X_2 \end{pmatrix}
\begin{pmatrix} b_1 \\ b_2 \end{pmatrix}
=
\begin{pmatrix} X_1' \\ X_2' \end{pmatrix} y.
\]
Using the rules of multiplication of partitioned matrices, it is straightforward to derive ($\ast$)
and ($\ast\ast$) from the above.

(b) By premultiplying both sides of ($\ast$) in the question by $X_1(X_1'X_1)^{-1}$, we obtain
\[
X_1(X_1'X_1)^{-1}X_1'X_1 b_1 = -X_1(X_1'X_1)^{-1}X_1'X_2 b_2 + X_1(X_1'X_1)^{-1}X_1'y,
\]
or
\[
X_1 b_1 = -P_1 X_2 b_2 + P_1 y.
\]
Substitution of this into ($\ast\ast$) yields
\[
\begin{aligned}
X_2'(-P_1 X_2 b_2 + P_1 y) + X_2'X_2 b_2 &= X_2'y \\
X_2'(I - P_1)X_2 b_2 &= X_2'(I - P_1)y \\
X_2'M_1 X_2 b_2 &= X_2'M_1 y \\
X_2'M_1'M_1 X_2 b_2 &= X_2'M_1'M_1 y && \text{(since } M_1 \text{ is symmetric and idempotent)} \\
\widetilde X_2'\widetilde X_2\, b_2 &= \widetilde X_2'\widetilde y.
\end{aligned}
\]
Therefore,
\[
b_2 = (\widetilde X_2'\widetilde X_2)^{-1}\widetilde X_2'\widetilde y.
\]
(The matrix $\widetilde X_2'\widetilde X_2$ is invertible because $\widetilde X_2$ is of full column rank. To see that $\widetilde X_2$ is of full
column rank, suppose not. Then there exists a non-zero vector $c$ such that $\widetilde X_2 c = 0$. But
$\widetilde X_2 c = X_2 c - X_1 d$, where $d \equiv (X_1'X_1)^{-1}X_1'X_2 c$. That is, $X\gamma = 0$ for $\gamma \equiv \binom{-d}{c}$. This is
a contradiction because $X = [X_1 \;\; X_2]$ is of full column rank and $\gamma \neq 0$.)
(c) By premultiplying both sides of $y = X_1 b_1 + X_2 b_2 + e$ by $M_1$, we obtain
\[
M_1 y = M_1 X_1 b_1 + M_1 X_2 b_2 + M_1 e.
\]
Since $M_1 X_1 = 0$ and $\widetilde y \equiv M_1 y$, the above equation can be rewritten as
\[
\widetilde y = M_1 X_2 b_2 + M_1 e = \widetilde X_2 b_2 + M_1 e.
\]
$M_1 e = e$ because
\[
M_1 e = (I - P_1)e = e - P_1 e = e - X_1(X_1'X_1)^{-1}X_1'e = e
\quad\text{(since } X_1'e = 0 \text{ by the normal equations)}.
\]
(d) From (b), we have
\[
b_2 = (\widetilde X_2'\widetilde X_2)^{-1}\widetilde X_2'\widetilde y
= (\widetilde X_2'\widetilde X_2)^{-1}X_2'M_1'M_1 y
= (\widetilde X_2'\widetilde X_2)^{-1}\widetilde X_2'y.
\]
Therefore, $b_2$ is the OLS coefficient estimator for the regression of $y$ on $\widetilde X_2$. The residual
vector from the regression is
\[
\begin{aligned}
y - \widetilde X_2 b_2 &= (y - \widetilde y) + (\widetilde y - \widetilde X_2 b_2) \\
&= (y - M_1 y) + (\widetilde y - \widetilde X_2 b_2) \\
&= (y - M_1 y) + e && \text{(by (c))} \\
&= P_1 y + e.
\end{aligned}
\]
This does not equal $e$ because $P_1 y$ is not necessarily zero. The SSR from the regression
of $y$ on $\widetilde X_2$ can be written as
\[
\begin{aligned}
(y - \widetilde X_2 b_2)'(y - \widetilde X_2 b_2) &= (P_1 y + e)'(P_1 y + e) \\
&= (P_1 y)'(P_1 y) + e'e && \text{(since } P_1 e = X_1(X_1'X_1)^{-1}X_1'e = 0\text{)}.
\end{aligned}
\]
This does not equal $e'e$ if $P_1 y$ is not zero.


(e) From (c), $\widetilde y = \widetilde X_2 b_2 + e$. So
\[
\begin{aligned}
\widetilde y'\widetilde y &= (\widetilde X_2 b_2 + e)'(\widetilde X_2 b_2 + e) \\
&= b_2'\widetilde X_2'\widetilde X_2 b_2 + e'e && \text{(since } \widetilde X_2'e = 0\text{)}.
\end{aligned}
\]
Since $b_2 = (\widetilde X_2'\widetilde X_2)^{-1}\widetilde X_2'y$, we have $b_2'\widetilde X_2'\widetilde X_2 b_2 = y'\widetilde X_2(X_2'M_1 X_2)^{-1}\widetilde X_2'y$.
(f) (i) Let $\hat b_1$ be the OLS coefficient estimator for the regression of $\widetilde y$ on $X_1$. Then
\[
\begin{aligned}
\hat b_1 &= (X_1'X_1)^{-1}X_1'\widetilde y \\
&= (X_1'X_1)^{-1}X_1'M_1 y \\
&= (X_1'X_1)^{-1}(M_1 X_1)'y \\
&= 0 && \text{(since } M_1 X_1 = 0\text{)}.
\end{aligned}
\]
So $SSR_1 = (\widetilde y - X_1\hat b_1)'(\widetilde y - X_1\hat b_1) = \widetilde y'\widetilde y$.
(ii) Since the residual vector from the regression of $\widetilde y$ on $\widetilde X_2$ equals $e$ by (c), $SSR_2 = e'e$.
(iii) From the Frisch-Waugh Theorem, the residuals from the regression of $\widetilde y$ on $X_1$ and
$X_2$ equal those from the regression of $M_1\widetilde y$ ($= \widetilde y$) on $M_1 X_2$ ($= \widetilde X_2$). So $SSR_3 = e'e$.
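The partitioned-regression results above lend themselves to a quick numerical check. A minimal sketch (assuming NumPy; the simulated design and names are illustrative only) verifies that $b_2$ from the full regression equals the coefficient from regressing $\widetilde y = M_1 y$ on $\widetilde X_2 = M_1 X_2$, that this short regression reproduces the full-regression residuals $e$, and that regressing $y$ (rather than $\widetilde y$) on $\widetilde X_2$ gives the same $b_2$ but a larger SSR, as in (d):

```python
import numpy as np

rng = np.random.default_rng(2)
n, K1, K2 = 100, 2, 3
X1 = np.column_stack([np.ones(n), rng.normal(size=(n, K1 - 1))])
X2 = rng.normal(size=(n, K2))
X = np.hstack([X1, X2])
y = X @ rng.normal(size=K1 + K2) + rng.normal(size=n)

def ols(A, z):
    return np.linalg.solve(A.T @ A, A.T @ z)

b = ols(X, y)                          # full regression of y on [X1 X2]
e = y - X @ b                          # full-regression residuals
M1 = np.eye(n) - X1 @ np.linalg.inv(X1.T @ X1) @ X1.T
X2t, yt = M1 @ X2, M1 @ y              # the "tilde" (partialled-out) variables

b2_short = ols(X2t, yt)
print(np.allclose(b2_short, b[K1:]))               # Frisch-Waugh: b2 recovered
print(np.allclose(yt - X2t @ b2_short, e))         # same residuals, so SSR2 = e'e
print(np.allclose(ols(X2t, y), b[K1:]))            # regressing y on X2-tilde also gives b2 ...
ssr_d = np.sum((y - X2t @ ols(X2t, y))**2)
print(ssr_d >= e @ e)                              # ... but with a larger SSR (part (d))
```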

5. (a) The hint is as good as the answer.


(b) Let $\hat\varepsilon \equiv y - X\hat\beta$, the residuals from the restricted regression. By using the add-and-subtract
strategy, we obtain
\[
y - X\hat\beta = (y - Xb) + X(b - \hat\beta).
\]
So
\[
\begin{aligned}
SSR_R &= [(y - Xb) + X(b - \hat\beta)]'[(y - Xb) + X(b - \hat\beta)] \\
&= (y - Xb)'(y - Xb) + (b - \hat\beta)'X'X(b - \hat\beta) && \text{(since } X'(y - Xb) = 0\text{)}.
\end{aligned}
\]
But $SSR_U = (y - Xb)'(y - Xb)$, so
\[
\begin{aligned}
SSR_R - SSR_U &= (b - \hat\beta)'X'X(b - \hat\beta) \\
&= (Rb - r)'[R(X'X)^{-1}R']^{-1}(Rb - r) && \text{(using the expression for } \hat\beta \text{ from (a))} \\
&= \hat\lambda'\, R(X'X)^{-1}R'\,\hat\lambda && \text{(using the expression for } \hat\lambda \text{ from (a))} \\
&= \hat\varepsilon'\, X(X'X)^{-1}X'\,\hat\varepsilon && \text{(by the first-order conditions that } X'(y - X\hat\beta) = R'\hat\lambda\text{)} \\
&= \hat\varepsilon'\, P\,\hat\varepsilon.
\end{aligned}
\]

(c) The $F$-ratio is defined as
\[
F \equiv \frac{(Rb - r)'[R(X'X)^{-1}R']^{-1}(Rb - r)/r}{s^2}
\qquad \text{(where } r = \#\mathbf r\text{)} \tag{1.4.9}
\]
Since $(Rb - r)'[R(X'X)^{-1}R']^{-1}(Rb - r) = SSR_R - SSR_U$ as shown above, the $F$-ratio
can be rewritten as
\[
\begin{aligned}
F &= \frac{(SSR_R - SSR_U)/r}{s^2} \\
&= \frac{(SSR_R - SSR_U)/r}{e'e/(n - K)} \\
&= \frac{(SSR_R - SSR_U)/r}{SSR_U/(n - K)}.
\end{aligned}
\]
Therefore, (1.4.9) = (1.4.11).
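As a sanity check on (b) and (c), here is a minimal NumPy sketch (the restriction matrix and data are chosen purely for illustration) that computes the restricted estimator with the formula $\hat\beta = b - (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}(Rb - r)$ from (a) and confirms that $SSR_R - SSR_U$ equals the Wald numerator and that the two $F$ expressions coincide:

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 80, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ rng.normal(size=K) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                          # unrestricted OLS

# Restrictions R beta = r_vec: here, the last two coefficients are zero
R = np.zeros((2, K)); R[0, 2] = 1.0; R[1, 3] = 1.0
r_vec = np.zeros(2)
num_r = R.shape[0]

W = np.linalg.inv(R @ XtX_inv @ R.T)
beta_hat = b - XtX_inv @ R.T @ W @ (R @ b - r_vec)   # restricted OLS (formula from (a))

SSR_U = np.sum((y - X @ b)**2)
SSR_R = np.sum((y - X @ beta_hat)**2)
wald = (R @ b - r_vec) @ W @ (R @ b - r_vec)
s2 = SSR_U / (n - K)

print(np.isclose(SSR_R - SSR_U, wald))                                   # part (b)
print(np.isclose(wald / num_r / s2,
                 ((SSR_R - SSR_U) / num_r) / (SSR_U / (n - K))))         # (1.4.9) = (1.4.11)
```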

6. (a) Unrestricted model: $y = X\beta + \varepsilon$, where
\[
\underset{(n\times 1)}{y} = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}, \qquad
\underset{(n\times K)}{X} = \begin{pmatrix} 1 & x_{12} & \dots & x_{1K} \\ \vdots & \vdots & & \vdots \\ 1 & x_{n2} & \dots & x_{nK} \end{pmatrix}, \qquad
\underset{(n\times 1)}{\varepsilon} = \begin{pmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_n \end{pmatrix}.
\]
Restricted model: $y = X\beta + \varepsilon$, $R\beta = r$, where
\[
\underset{((K-1)\times K)}{R} = \begin{pmatrix} 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & 1 & \dots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & \dots & 0 & 1 \end{pmatrix}, \qquad
\underset{((K-1)\times 1)}{r} = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}.
\]
Obviously, the restricted OLS estimator of $\beta$ is
\[
\underset{(K\times 1)}{\hat\beta} = \begin{pmatrix} \bar y \\ 0 \\ \vdots \\ 0 \end{pmatrix}.
\qquad\text{So}\qquad
X\hat\beta = \begin{pmatrix} \bar y \\ \vdots \\ \bar y \end{pmatrix} = \mathbf 1\,\bar y.
\]
(You can use the formula for the restricted OLS estimator derived in the previous exercise,
$\hat\beta = b - (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}(Rb - r)$, to verify this.) If $SSR_U$ and $SSR_R$ are the
minimized sums of squared residuals from the unrestricted and restricted models, they are
calculated as
\[
SSR_R = (y - X\hat\beta)'(y - X\hat\beta) = \sum_{i=1}^n (y_i - \bar y)^2,
\qquad
SSR_U = (y - Xb)'(y - Xb) = e'e = \sum_{i=1}^n e_i^2.
\]
Therefore,
\[
SSR_R - SSR_U = \sum_{i=1}^n (y_i - \bar y)^2 - \sum_{i=1}^n e_i^2. \tag{A}
\]
On the other hand,
\[
(b - \hat\beta)'(X'X)(b - \hat\beta) = (Xb - X\hat\beta)'(Xb - X\hat\beta) = \sum_{i=1}^n (\hat y_i - \bar y)^2.
\]
Since $SSR_R - SSR_U = (b - \hat\beta)'(X'X)(b - \hat\beta)$ (as shown in Exercise 5(b)),
\[
\sum_{i=1}^n (y_i - \bar y)^2 - \sum_{i=1}^n e_i^2 = \sum_{i=1}^n (\hat y_i - \bar y)^2. \tag{B}
\]

(b)
\[
\begin{aligned}
F &= \frac{(SSR_R - SSR_U)/(K - 1)}{\sum_{i=1}^n e_i^2/(n - K)} && \text{(by Exercise 5(c))} \\
&= \frac{\bigl(\sum_{i=1}^n (y_i - \bar y)^2 - \sum_{i=1}^n e_i^2\bigr)/(K - 1)}{\sum_{i=1}^n e_i^2/(n - K)} && \text{(by equation (A) above)} \\
&= \frac{\sum_{i=1}^n (\hat y_i - \bar y)^2/(K - 1)}{\sum_{i=1}^n e_i^2/(n - K)} && \text{(by equation (B) above)} \\
&= \frac{\Bigl(\sum_{i=1}^n (\hat y_i - \bar y)^2 \big/ \sum_{i=1}^n (y_i - \bar y)^2\Bigr)\big/(K - 1)}{\Bigl(\sum_{i=1}^n e_i^2 \big/ \sum_{i=1}^n (y_i - \bar y)^2\Bigr)\big/(n - K)} && \text{(by dividing both numerator and denominator by } \textstyle\sum_{i=1}^n (y_i - \bar y)^2\text{)} \\
&= \frac{R^2/(K - 1)}{(1 - R^2)/(n - K)} && \text{(by the definition of } R^2\text{)}.
\end{aligned}
\]
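A small numerical check of this equivalence (a minimal sketch assuming NumPy; the simulated data are for illustration only):

```python
import numpy as np

rng = np.random.default_rng(4)
n, K = 60, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ np.array([1.0, 0.5, -0.8]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b
SSR_U = e @ e
SSR_R = np.sum((y - y.mean())**2)      # restricted fit is the sample mean (intercept only)
R2 = 1 - SSR_U / SSR_R

F_from_ssr = ((SSR_R - SSR_U) / (K - 1)) / (SSR_U / (n - K))
F_from_r2 = (R2 / (K - 1)) / ((1 - R2) / (n - K))
print(np.isclose(F_from_ssr, F_from_r2))   # True
```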

7. (Reproducing the answer on pp. 84-85 of the book)


(a) $\hat\beta_{GLS} - \beta = A\varepsilon$ where $A \equiv (X'V^{-1}X)^{-1}X'V^{-1}$, and $b - \hat\beta_{GLS} = B\varepsilon$ where
$B \equiv (X'X)^{-1}X' - (X'V^{-1}X)^{-1}X'V^{-1}$. So
\[
\begin{aligned}
\operatorname{Cov}(\hat\beta_{GLS} - \beta,\; b - \hat\beta_{GLS})
&= \operatorname{Cov}(A\varepsilon,\; B\varepsilon) \\
&= A\,\operatorname{Var}(\varepsilon)\,B' \\
&= \sigma^2 A V B'.
\end{aligned}
\]
It is straightforward to show that $AVB' = 0$.
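That $AVB' = 0$ can also be confirmed numerically. A minimal sketch (assuming NumPy; the positive definite $V$ and the design matrix are generated arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n, K = 30, 3
X = rng.normal(size=(n, K))
L = rng.normal(size=(n, n))
V = L @ L.T + n * np.eye(n)                 # an arbitrary positive definite V

V_inv = np.linalg.inv(V)
A = np.linalg.inv(X.T @ V_inv @ X) @ X.T @ V_inv     # GLS factor
B = np.linalg.inv(X.T @ X) @ X.T - A                 # OLS factor minus GLS factor

print(np.allclose(A @ V @ B.T, 0))          # A V B' = 0, so the covariance is zero
```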


(b) For the choice of $H$ indicated in the hint,
\[
\operatorname{Var}(\hat\beta) - \operatorname{Var}(\hat\beta_{GLS}) = -C V_q^{-1} C'.
\]
If $C \neq 0$, then there exists a nonzero vector $z$ such that $C'z \equiv v \neq 0$. For such $z$,
\[
z'[\operatorname{Var}(\hat\beta) - \operatorname{Var}(\hat\beta_{GLS})]z = -v'V_q^{-1}v < 0
\quad\text{(since } V_q \text{ is positive definite),}
\]
which is a contradiction because $\hat\beta_{GLS}$ is efficient.
