Statistics II - Asymptotic Theory For Least Squares
Marcelo Sant’Anna
FGV EPGE
The asymptotic theory developed here applies to the broader linear projection
model:
$$y_i = x_i'\beta + e_i,$$
where $\beta = \left(E[x_i x_i']\right)^{-1} E[x_i y_i]$.
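As a quick illustration, the projection coefficient can be estimated by replacing the population moments with sample averages. Here is a minimal numpy sketch; the data-generating step is purely illustrative and not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x1 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1])        # x_i = (1, x1_i)'
y = 1.0 + 2.0 * x1 + rng.normal(size=n)      # illustrative DGP

# sample analogue of beta = (E[x x'])^{-1} E[x y]
Qxx_hat = (X.T @ X) / n
Qxy_hat = (X.T @ y) / n
beta_hat = np.linalg.solve(Qxx_hat, Qxy_hat) # equals the OLS estimator
print(beta_hat)                              # approx [1.0, 2.0]
```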
Theorem
Under the Assumption above,
$$\sqrt{n}\left(\hat\beta - \beta\right) \xrightarrow{d} N(0, V_\beta),$$
where
$$V_\beta = Q_{xx}^{-1} \Omega Q_{xx}^{-1},$$
$Q_{xx} = E[x_i x_i']$ and $\Omega = E[x_i x_i' e_i^2]$.
$V_\beta \neq V_{\hat\beta}$ from Ch. 4. Indeed, $n V_{\hat\beta} \xrightarrow{p} V_\beta$.
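To see the theorem at work, here is a minimal Monte Carlo sketch (the heteroskedastic design is chosen for this example, not taken from the slides) comparing the empirical variance of $\sqrt{n}(\hat\beta - \beta)$ with the sandwich formula $Q_{xx}^{-1}\Omega Q_{xx}^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1_000, 2_000
beta = np.array([1.0, 2.0])

# Monte Carlo draws of sqrt(n) * (beta_hat - beta)
draws = np.empty((reps, 2))
for r in range(reps):
    x1 = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1])
    e = rng.normal(size=n) * np.sqrt(0.5 + x1**2)  # heteroskedastic errors
    y = X @ beta + e
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    draws[r] = np.sqrt(n) * (beta_hat - beta)

# population moments for this design, by large-sample approximation:
# Qxx = E[x x'],  Omega = E[x x' sigma^2(x)]
m = 500_000
x1 = rng.normal(size=m)
Xm = np.column_stack([np.ones(m), x1])
s2 = 0.5 + x1**2
Qxx = (Xm.T @ Xm) / m
Omega = (Xm.T * s2) @ Xm / m
Qinv = np.linalg.inv(Qxx)
V_beta = Qinv @ Omega @ Qinv                       # sandwich formula

print("empirical variance:\n", np.cov(draws.T))
print("sandwich V_beta:\n", V_beta)
```

The two matrices should agree closely, illustrating both the limit distribution and the sandwich form of $V_\beta$.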
In the homoskedastic case, $\Omega = Q_{xx}\sigma^2$, so $V_\beta = Q_{xx}^{-1}\sigma^2$.
Example
In the homoskedastic errors case, when $k = 2$ and
$$E[x_i x_i'] = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix},$$
if $\rho > 0$ ($\rho < 0$), then $\hat\beta_1$ and $\hat\beta_2$ are asymptotically negatively (positively) correlated.
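To see where the sign flip comes from, note the standard formula for the inverse of a $2 \times 2$ matrix (a worked step, not shown on the slide):

```latex
Q_{xx}^{-1}
  = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\
                    \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}^{-1}
  = \frac{1}{\sigma_1^2\sigma_2^2(1-\rho^2)}
    \begin{pmatrix} \sigma_2^2 & -\rho\sigma_1\sigma_2 \\
                    -\rho\sigma_1\sigma_2 & \sigma_1^2 \end{pmatrix}
```

Since $V_\beta = Q_{xx}^{-1}\sigma^2$ here, the off-diagonal entry of $V_\beta$ has the opposite sign of $\rho$.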
In a similar fashion, a natural estimator for the asymptotic variance in the case of
heteroskedasticity is the plug-in estimator:
$$\hat V_\beta^{HC0} = \hat Q_{xx}^{-1} \hat\Omega \hat Q_{xx}^{-1},$$
where $\hat\Omega = \frac{1}{n}\sum_i x_i x_i' \hat e_i^2$.
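A minimal sketch of this plug-in (HC0) estimator in code form; the function and variable names are illustrative, not from the slides:

```python
import numpy as np

def hc0_variance(X, y):
    """Plug-in (HC0) estimator of the asymptotic variance V_beta."""
    n = X.shape[0]
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    ehat = y - X @ beta_hat                  # residuals
    Qxx_hat = (X.T @ X) / n                  # sample analogue of Qxx
    Omega_hat = (X.T * ehat**2) @ X / n      # (1/n) sum x_i x_i' ehat_i^2
    Qinv = np.linalg.inv(Qxx_hat)
    return beta_hat, Qinv @ Omega_hat @ Qinv # sandwich V_hat^{HC0}
```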
For a smooth transformation $\theta = r(\beta)$, the delta method gives $\sqrt{n}(\hat\theta - \theta) \xrightarrow{d} N(0, V_\theta)$, where $V_\theta = R' V_\beta R$ and $R = \partial r(\beta)' / \partial \beta$.
For example, consider the demand equation
$$\log(q_m) = \beta_0 - \beta_1 \log(p_m) + \varepsilon_m,$$
in which a parameter of interest $\theta$ may be a smooth function of $(\beta_0, \beta_1)$.
The t-statistic for $\theta$ is
$$T(\theta) = \frac{\hat\theta - \theta}{s(\hat\theta)},$$
where $s(\hat\theta)^2 = \frac{1}{n}\hat V_\theta$ and $\hat V_\theta$ is a consistent estimator of the asymptotic variance $V_\theta$.
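For the special case where $\theta$ is a single coefficient $\beta_j$ (so $\hat V_\theta$ is the $(j,j)$ element of $\hat V_\beta^{HC0}$), a minimal sketch reusing the hc0_variance helper from above:

```python
import numpy as np

def t_stat(X, y, j, theta0=0.0):
    """t-statistic for H0: beta_j = theta0, using the HC0 variance."""
    n = X.shape[0]
    beta_hat, V_hat = hc0_variance(X, y)     # helper sketched above
    se = np.sqrt(V_hat[j, j] / n)            # s(theta_hat)^2 = V_hat_jj / n
    return (beta_hat[j] - theta0) / se
```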
Theorem
Under the regularity conditions and assumption above, provided $V_\theta > 0$,
$$T(\theta) \xrightarrow{d} \frac{N(0, V_\theta)}{\sqrt{V_\theta}} \sim N(0, 1).$$
There are important differences between the asymptotic CI and the normal regression model CI. In particular, the normal regression CI we derived earlier:
- only applies to $\beta$ and not to functions $\theta$;
- is constructed under the assumption of homoskedastic errors, whereas here we allow for heteroskedasticity;
- uses the student-t distribution to compute $c$.
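For concreteness, a minimal sketch of the asymptotic interval $\hat\theta \pm c \cdot s(\hat\theta)$, where $c$ is the normal (not student-t) critical value; the function and argument names are illustrative:

```python
import numpy as np
from scipy.stats import norm

def asymptotic_ci(theta_hat, V_theta_hat, n, level=0.95):
    """Asymptotic CI: theta_hat +/- c * s(theta_hat), c from N(0,1)."""
    c = norm.ppf(1 - (1 - level) / 2)        # e.g. 1.96 for 95%
    se = np.sqrt(V_theta_hat / n)            # s(theta_hat)
    return theta_hat - c * se, theta_hat + c * se
```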
$$W(\theta) = \left(\hat\theta - \theta\right)' \hat V_{\hat\theta}^{-1} \left(\hat\theta - \theta\right) = n \left(\hat\theta - \theta\right)' \hat V_\theta^{-1} \left(\hat\theta - \theta\right),$$
where the second equality uses $\hat V_{\hat\theta} = n^{-1} \hat V_\theta$.
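A minimal sketch of computing $W(\theta)$ for a vector hypothesis $H_0: \theta = \theta_0$; comparing $W$ to a $\chi^2_q$ critical value is the standard practice assumed here, not stated on this slide:

```python
import numpy as np
from scipy.stats import chi2

def wald_test(theta_hat, V_theta_hat, theta0, n):
    """Wald statistic W and its chi-square p-value (q = dim(theta))."""
    d = theta_hat - theta0
    W = n * d @ np.linalg.solve(V_theta_hat, d)  # n d' V_hat^{-1} d
    q = theta_hat.size
    return W, chi2.sf(W, df=q)                   # chi2_q limit under H0
```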