Applied Econometrics: Introduction To Matrix
Introduction to Matrix
Abdul Hakim
FE UII
Linear Regression Model
$$y = X\beta + \varepsilon$$
where $y$ is $T\times 1$, $X$ is $T\times k$, $\beta$ is $k\times 1$, and $\varepsilon$ is $T\times 1$.
• Assumptions (the classical assumptions used in the derivations below): $E(\varepsilon) = 0$, $E(\varepsilon\varepsilon') = \sigma^2 I_T$, and $X$ is nonstochastic with full column rank $k$.
Linear Regression Model
$$\begin{aligned}
\varepsilon'\varepsilon &= (y - X\beta)'(y - X\beta) \\
&= (y' - \beta'X')(y - X\beta) \\
&= y'y - \beta'X'y - y'X\beta + \beta'X'X\beta \\
&= y'y - 2\beta'X'y + \beta'X'X\beta
\end{aligned}$$
(the last step uses $y'X\beta = \beta'X'y$, since each is a scalar).
Linear Regression Model
$$\frac{\partial(\varepsilon'\varepsilon)}{\partial\hat{\beta}} = -2X'y + 2X'X\hat{\beta} = 0$$
$$\begin{aligned}
X'X\hat{\beta} &= X'y \\
(X'X)^{-1}X'X\hat{\beta} &= (X'X)^{-1}X'y \\
I\hat{\beta} &= (X'X)^{-1}X'y \\
\hat{\beta} &= (X'X)^{-1}X'y
\end{aligned}$$
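As a quick numerical illustration (not part of the original slides), here is a minimal Python/NumPy sketch of the normal equations; the data and variable names are hypothetical choices made only for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

T, k = 100, 3                      # sample size and number of regressors
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])  # intercept + regressors
beta_true = np.array([1.0, 2.0, -0.5])
eps = rng.normal(size=T)
y = X @ beta_true + eps

# OLS via the normal equations: solve (X'X) beta_hat = X'y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)                    # close to beta_true
```

Solving the linear system directly is numerically preferable to forming $(X'X)^{-1}$ explicitly, but both implement the same formula.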
Linear Regression Model
• Variance of $\hat{\beta}$:
$$\begin{aligned}
\mathrm{Var}(\hat{\beta}) &= E[(\hat{\beta}-\beta)(\hat{\beta}-\beta)'] \\
&= E\{[(X'X)^{-1}X'y - \beta][(X'X)^{-1}X'y - \beta]'\} \\
&= E\{[(X'X)^{-1}X'(X\beta+\varepsilon) - \beta][(X'X)^{-1}X'(X\beta+\varepsilon) - \beta]'\} \\
&= E\{[(X'X)^{-1}X'X\beta + (X'X)^{-1}X'\varepsilon - \beta][(X'X)^{-1}X'X\beta + (X'X)^{-1}X'\varepsilon - \beta]'\} \\
&= E\{[I\beta + (X'X)^{-1}X'\varepsilon - \beta][I\beta + (X'X)^{-1}X'\varepsilon - \beta]'\} \\
&= E\{[(X'X)^{-1}X'\varepsilon][(X'X)^{-1}X'\varepsilon]'\} \\
&= E\{[(X'X)^{-1}X'\varepsilon][\varepsilon'X(X'X)^{-1}]\} \\
&= E[(X'X)^{-1}X'\varepsilon\varepsilon'X(X'X)^{-1}] \\
&= (X'X)^{-1}X'E(\varepsilon\varepsilon')X(X'X)^{-1} \\
&= \sigma^2(X'X)^{-1}X'X(X'X)^{-1} \\
&= \sigma^2(X'X)^{-1}.
\end{aligned}$$
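To connect the formula to computation, here is a hedged sketch (continuing the hypothetical NumPy setup from the earlier example) that compares the Monte Carlo covariance of $\hat{\beta}$ with $\sigma^2(X'X)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
T, k = 100, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
sigma2 = 1.0

# Theoretical covariance of the OLS estimator: sigma^2 (X'X)^{-1}
var_theory = sigma2 * np.linalg.inv(X.T @ X)

# Monte Carlo: redraw epsilon many times with X held fixed
draws = []
for _ in range(5000):
    y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=T)
    draws.append(np.linalg.solve(X.T @ X, X.T @ y))
var_mc = np.cov(np.array(draws), rowvar=False)

print(np.round(var_theory, 4))
print(np.round(var_mc, 4))        # the two matrices should be close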
Gauss Markov Theorem
▧ If a model meets the classical assumptions, applying OLS to it yields BLUE (Best Linear Unbiased Estimators).
▧ Best means that the estimators have minimum variance compared to those obtained from other linear unbiased techniques.
▧ Unbiased means that the expected value of the estimators equals the true parameters.
Unbiasedness
$$\begin{aligned}
\hat{\beta} &= (X'X)^{-1}X'y \\
&= (X'X)^{-1}X'(X\beta + \varepsilon) \\
&= (X'X)^{-1}X'X\beta + (X'X)^{-1}X'\varepsilon \\
E(\hat{\beta}) &= E[(X'X)^{-1}X'X\beta] + E[(X'X)^{-1}X'\varepsilon] \\
&= (X'X)^{-1}X'X\,E(\beta) + (X'X)^{-1}X'E(\varepsilon) \\
&= I\beta + (X'X)^{-1}X'(0) \\
&= \beta.
\end{aligned}$$
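A small hedged check of this result, using the same hypothetical setup as the earlier NumPy sketches: averaging $\hat{\beta}$ over many simulated samples should reproduce $\beta$.

```python
import numpy as np

rng = np.random.default_rng(2)
T, k = 100, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
beta_true = np.array([1.0, 2.0, -0.5])

# Average beta_hat across repeated samples, holding X fixed (nonstochastic X)
estimates = []
for _ in range(5000):
    y = X @ beta_true + rng.normal(size=T)
    estimates.append(np.linalg.solve(X.T @ X, X.T @ y))

print(np.mean(estimates, axis=0))  # approximately [1.0, 2.0, -0.5]
```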
Best
• Say there is a non-OLS linear estimator $\tilde{\beta} = \hat{\beta} + Dy$, where $D$ is a $k\times T$ fixed matrix.
$$\begin{aligned}
E(\tilde{\beta}) &= E(\hat{\beta}) + E(Dy) \\
&= \beta + DE(y) \\
&= \beta + DX\beta,
\end{aligned}$$
which equals $\beta$ iff $DX = 0$.
$$\begin{aligned}
\tilde{\beta} &= \hat{\beta} + Dy \\
&= \hat{\beta} + D(X\beta + \varepsilon) \\
&= \hat{\beta} + DX\beta + D\varepsilon \\
&= \hat{\beta} + D\varepsilon,
\end{aligned}$$
because $DX = 0$.
Best
$$\begin{aligned}
\mathrm{Var}(\tilde{\beta}) &= E[(\tilde{\beta}-\beta)(\tilde{\beta}-\beta)'] \\
&= E[(\hat{\beta} + D\varepsilon - \beta)(\hat{\beta} + D\varepsilon - \beta)'] \\
&= E\{[(\hat{\beta}-\beta) + D\varepsilon][(\hat{\beta}-\beta) + D\varepsilon]'\} \\
&= E\{[(\hat{\beta}-\beta) + D\varepsilon][(\hat{\beta}-\beta)' + \varepsilon'D']\} \\
&= E[(\hat{\beta}-\beta)(\hat{\beta}-\beta)' + (\hat{\beta}-\beta)\varepsilon'D' + D\varepsilon(\hat{\beta}-\beta)' + D\varepsilon\varepsilon'D'] \\
&= E[(\hat{\beta}-\beta)(\hat{\beta}-\beta)' + D\varepsilon\varepsilon'D'] \quad \text{since } DX = 0 \\
&= E[(\hat{\beta}-\beta)(\hat{\beta}-\beta)'] + E(D\varepsilon\varepsilon'D') \\
&= \sigma^2(X'X)^{-1} + \sigma^2 DD'.
\end{aligned}$$
(The cross terms vanish because $E[(\hat{\beta}-\beta)\varepsilon'D'] = (X'X)^{-1}X'E(\varepsilon\varepsilon')D' = \sigma^2(X'X)^{-1}(DX)' = 0$.)
Best
▧ Since $DD'$ is a positive semi-definite matrix ($DD'$ is $k\times k$),
$$\mathrm{Var}(\tilde{\beta}) \geq \mathrm{Var}(\hat{\beta}).$$
▧ In other words, $\hat{\beta}$ is best (has the minimum variance) compared to estimators obtained from other linear unbiased techniques.
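A hedged numerical check of this ordering (continuing the hypothetical NumPy setup): build some $D$ with $DX = 0$ and confirm that $\mathrm{Var}(\tilde{\beta}) - \mathrm{Var}(\hat{\beta}) = \sigma^2 DD'$ has no negative eigenvalues. The particular construction of $D$ below is just one convenient choice, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(3)
T, k = 100, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
sigma2 = 1.0

# Any D of the form C(I - P) satisfies DX = 0, because (I - P)X = 0
P = X @ np.linalg.inv(X.T @ X) @ X.T
C = rng.normal(size=(k, T))
D = C @ (np.eye(T) - P)
print(np.allclose(D @ X, 0))                         # True: DX = 0

# Extra variance of the competing linear estimator: sigma^2 D D'
extra = sigma2 * D @ D.T
print(np.min(np.linalg.eigvalsh(extra)) >= -1e-10)   # True: positive semi-definite
```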
Best
▧ Notes
o Gauss-Markov applies to linear unbiased estimators. This means it is possible that there are non-linear unbiased estimators with a lower variance than that of the OLS estimator.
o Gauss-Markov applies to linear unbiased estimators. This also means it is possible that there are biased estimators with a lower MSE (mean squared error) than that of the OLS estimator.
Best
$$\begin{aligned}
E[(\hat{\beta}-\beta)(\hat{\beta}-\beta)'] &= E[(\hat{\beta} - E\hat{\beta} + E\hat{\beta} - \beta)(\hat{\beta} - E\hat{\beta} + E\hat{\beta} - \beta)'] \\
&= E\{[(\hat{\beta} - E\hat{\beta}) + (E\hat{\beta} - \beta)][(\hat{\beta} - E\hat{\beta}) + (E\hat{\beta} - \beta)]'\} \\
&= E\{[(\hat{\beta} - E\hat{\beta}) + (E\hat{\beta} - \beta)][(\hat{\beta} - E\hat{\beta})' + (E\hat{\beta} - \beta)']\} \\
&= E[(\hat{\beta} - E\hat{\beta})(\hat{\beta} - E\hat{\beta})' + (\hat{\beta} - E\hat{\beta})(E\hat{\beta} - \beta)' + (E\hat{\beta} - \beta)(\hat{\beta} - E\hat{\beta})' + (E\hat{\beta} - \beta)(E\hat{\beta} - \beta)'] \\
&= E[(\hat{\beta} - E\hat{\beta})(\hat{\beta} - E\hat{\beta})'] + 0 + 0 + E[(E\hat{\beta} - \beta)(E\hat{\beta} - \beta)'] \\
&= E[(\hat{\beta} - E\hat{\beta})(\hat{\beta} - E\hat{\beta})'] + E[(E\hat{\beta} - \beta)(E\hat{\beta} - \beta)'] \\
\mathrm{MSE}(\hat{\beta}) &= \mathrm{Variance}(\hat{\beta}) + \mathrm{Bias}^2.
\end{aligned}$$
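As an illustration of this decomposition, and of the note above that a biased estimator can beat OLS on MSE, here is a hedged sketch comparing OLS with an arbitrary shrinkage estimator $0.9\,\hat{\beta}$ in a scalar regression; the setup and the shrinkage factor are hypothetical choices, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 50
x = rng.normal(size=T)
beta_true = 0.3                      # small true coefficient, no intercept (scalar case)

ols, shrunk = [], []
for _ in range(20000):
    y = beta_true * x + rng.normal(size=T)
    b = (x @ y) / (x @ x)            # OLS slope
    ols.append(b)
    shrunk.append(0.9 * b)           # deliberately biased shrinkage estimator

for name, est in [("OLS", np.array(ols)), ("0.9*OLS", np.array(shrunk))]:
    var = est.var()
    bias2 = (est.mean() - beta_true) ** 2
    mse = ((est - beta_true) ** 2).mean()
    print(name, "var + bias^2 =", round(var + bias2, 5), "MSE =", round(mse, 5))
```

For each estimator the printed variance-plus-squared-bias matches its MSE, and in this particular setup the biased shrinkage estimator attains the lower MSE.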
Consistency of $\hat{\beta}$ to $\beta$
▧ Consistency can be defined as the fulfillment of the following
three conditions:
Consistency of $\hat{\beta}$ to $\beta$
▧ It has been proven that $E(\hat{\beta}) = \beta$.
Consistency of $\hat{\beta}$ to $\beta$
▧ Another definition of consistency: $\hat{\beta}$ is a consistent estimator of $\beta$ if $\mathrm{plim}_{T\to\infty}\hat{\beta} = \beta$.
The OLS residuals are
$$e = y - X\hat{\beta} = (I - P)y = (I - P)(X\beta + \varepsilon) = (I - P)\varepsilon,$$
where $P = X(X'X)^{-1}X'$ and the last step uses $(I - P)X = 0$, so that
$$e'e = y'(I - P)y = \varepsilon'(I - P)\varepsilon.$$
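A hedged sketch of these residual identities in NumPy, using the same hypothetical data-generating setup as the earlier examples: the residual-maker matrix $I - P$ annihilates $X$, so the residuals depend only on $\varepsilon$.

```python
import numpy as np

rng = np.random.default_rng(5)
T, k = 100, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
eps = rng.normal(size=T)
y = X @ beta_true + eps

P = X @ np.linalg.inv(X.T @ X) @ X.T     # projection onto the column space of X
M = np.eye(T) - P                        # residual-maker matrix I - P

e = y - X @ np.linalg.solve(X.T @ X, X.T @ y)   # OLS residuals
print(np.allclose(M @ X, 0))             # True: (I - P)X = 0
print(np.allclose(e, M @ y))             # True: e = (I - P)y
print(np.allclose(e, M @ eps))           # True: e = (I - P)eps
```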
Estimator of $\sigma^2$
$$\begin{aligned}
E(e'e) &= E[\varepsilon'(I - P)\varepsilon] \\
&= E[\mathrm{tr}(\varepsilon'(I - P)\varepsilon)] \quad (\text{a scalar equals its trace}) \\
&= E[\mathrm{tr}((I - P)\varepsilon\varepsilon')] \quad (\mathrm{tr}\,AB = \mathrm{tr}\,BA) \\
&= \mathrm{tr}[(I - P)E(\varepsilon\varepsilon')] \\
&= \sigma^2\,\mathrm{tr}(I - P) \\
&= \sigma^2[\mathrm{tr}\,I_T - \mathrm{tr}\,P] \\
&= \sigma^2[T - \mathrm{tr}\,X(X'X)^{-1}X'] \\
&= \sigma^2[T - \mathrm{tr}\,(X'X)^{-1}X'X] \\
&= \sigma^2[T - \mathrm{tr}\,I_k] \\
&= \sigma^2(T - k).
\end{aligned}$$
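A hedged Monte Carlo check of this result (same hypothetical setup as above): the average of $e'e$ over repeated samples should be close to $\sigma^2(T - k)$, so $s^2 = e'e/(T - k)$ is unbiased for $\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(6)
T, k = 100, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
sigma2 = 2.0

M = np.eye(T) - X @ np.linalg.inv(X.T @ X) @ X.T   # residual-maker matrix I - P

sse = []
for _ in range(5000):
    eps = rng.normal(scale=np.sqrt(sigma2), size=T)
    y = X @ beta_true + eps
    e = M @ y                       # OLS residuals
    sse.append(e @ e)

print(np.mean(sse))                 # approximately sigma2 * (T - k) = 2 * 97 = 194
print(np.mean(sse) / (T - k))       # approximately sigma2 = 2.0, i.e. E(s^2) = sigma^2
```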
Consistency of $s^2$ to $\sigma^2$
$$s^2 = \frac{e'e}{T - k}, \quad \text{so} \quad (T - k)s^2 = e'e = \varepsilon'(I - P)\varepsilon = \varepsilon'\varepsilon - \varepsilon'X(X'X)^{-1}X'\varepsilon.$$