Gauss-Markov Econometrics

The Gauss-Markov theorem states that if its conditions are met, ordinary least squares (OLS) provides the best linear unbiased estimator (BLUE). The Gauss-Markov conditions are that the errors have mean zero, constant variance, and are uncorrelated. This document shows that the OLS estimator is linear and conditionally unbiased under the Gauss-Markov conditions.

Gauss-Markov theorem activity

Title of the book: Introduction to Econometrics, 3rd edition

Authors: James H. Stock and Mark W. Watson
Publisher: Pearson International Edition

The Gauss-Markov theorem states that if the Gauss-Markov conditions hold, the
OLS estimator is BLUE (the best conditionally linear unbiased estimator).
The Gauss-Markov conditions are as follows:

1) E(u_i | X_1, X_2, …, X_n) = 0

2) var(u_i | X_1, X_2, …, X_n) = σ_u²

3) E(u_i u_j | X_1, X_2, …, X_n) = 0, i ≠ j

Condition 1 says the errors have conditional mean zero, condition 2 says the
errors are homoskedastic (constant variance σ_u²), and condition 3 says the
errors are uncorrelated across observations.

To show that the estimator β̂₁ is a linear conditionally unbiased estimator, first note
that, because Σ_{i=1}^{n} (X_i − X̄) = 0,

Σ_{i=1}^{n} (X_i − X̄)(Y_i − Ȳ) = Σ_{i=1}^{n} (X_i − X̄)Y_i − Ȳ Σ_{i=1}^{n} (X_i − X̄) = Σ_{i=1}^{n} (X_i − X̄)Y_i.

Substituting this result into the formula for β̂₁ yields

β̂₁ = [Σ_{i=1}^{n} (X_i − X̄)Y_i] / [Σ_{j=1}^{n} (X_j − X̄)²] = Σ_{i=1}^{n} â_i Y_i,  where  â_i = (X_i − X̄) / Σ_{j=1}^{n} (X_j − X̄)².

Because the weights â_i, i = 1, …, n depend on X_1, X_2, …, X_n but not
on Y_1, Y_2, …, Y_n, the OLS estimator β̂₁ is a linear estimator.
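The linearity argument can be checked numerically. A minimal sketch (the data below are illustrative, generated here rather than taken from the book), comparing the usual ratio form of β̂₁ with the weighted-sum form Σ â_i Y_i:

```python
import numpy as np

# Illustrative data only (not from the book): any X and Y work here.
rng = np.random.default_rng(0)
X = rng.normal(size=50)
Y = 2.0 + 3.0 * X + rng.normal(size=50)

# OLS slope in its usual ratio form.
beta1_hat = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)

# The same slope as a weighted sum of the Y_i, with weights that
# depend only on the X's: a_i = (X_i - Xbar) / sum_j (X_j - Xbar)^2.
a = (X - X.mean()) / np.sum((X - X.mean()) ** 2)
beta1_weighted = np.sum(a * Y)

print(np.isclose(beta1_hat, beta1_weighted))  # the two forms coincide
```

The point of the exercise is that the weights a depend only on the X values, so β̂₁ is linear in the Y_i.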
Under the Gauss-Markov conditions, β̂₁ is conditionally unbiased, and the variance of the
conditional distribution of β̂₁, given X_1, X_2, …, X_n, is

var(β̂₁ | X_1, X_2, …, X_n) = σ_u² / Σ_{i=1}^{n} (X_i − X̄)².

Then β̂₁ is conditionally unbiased because substituting Y_i = β_0 + β_1 X_i + u_i into the
weighted-sum expression gives β̂₁ = β_1 + [Σ_{i=1}^{n} (X_i − X̄)u_i] / [Σ_{i=1}^{n} (X_i − X̄)²], so that

E(β̂₁ | X_1, X_2, …, X_n) = β_1 + E[ ((1/n) Σ_{i=1}^{n} (X_i − X̄)u_i) / ((1/n) Σ_{i=1}^{n} (X_i − X̄)²) | X_1, X_2, …, X_n ] = β_1,

where the last equality follows from the first Gauss-Markov condition, E(u_i | X_1, X_2, …, X_n) = 0.
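The unbiasedness and variance claims can also be illustrated with a small Monte Carlo sketch (the sample size, coefficients, and error scale below are arbitrary choices, not from the book):

```python
import numpy as np

# Monte Carlo sketch: with mean-zero, homoskedastic, uncorrelated errors,
# the average of beta1_hat over many samples should be close to the true
# beta1, and its variance close to sigma_u^2 / sum_i (X_i - Xbar)^2.
rng = np.random.default_rng(1)
n, reps = 100, 20000
beta0, beta1, sigma_u = 1.0, 2.5, 1.5

X = rng.uniform(0, 10, size=n)           # X held fixed across replications
denom = np.sum((X - X.mean()) ** 2)

estimates = np.empty(reps)
for r in range(reps):
    u = rng.normal(0, sigma_u, size=n)   # errors satisfy the G-M conditions
    Y = beta0 + beta1 * X + u
    estimates[r] = np.sum((X - X.mean()) * (Y - Y.mean())) / denom

print(estimates.mean())                      # close to beta1 = 2.5
print(estimates.var(), sigma_u**2 / denom)   # the two variances are close
```

Holding X fixed across replications mirrors the conditioning on X_1, X_2, …, X_n in the derivation above.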

Points that I am not able to understand:

 Concept of homoskedasticity (how is it related to the Gauss-Markov theorem?)
 What does â_i stand for?
