
LECTURE 6: AUTOCORRELATION
1

CHAPTER 12
Autocorrelation or Serial Correlation
2
Time Series Data

A time series is the process of an economic variable observed over time,
e.g., GDP, M1, the interest rate, the exchange rate, imports, exports, the inflation rate, etc.

3
4
Decomposition of a time series

[Figure, slide 5: a series Xt plotted against time, decomposed into a trend component, a cyclical or seasonal component, and a random component.]
5
THE NATURE OF AUTOCORRELATION

6
The nature of the problem

• As with the multicollinearity and heteroscedasticity problems, autocorrelation arises when one of the OLS assumptions is violated.
• That assumption states: "there is no autocorrelation or serial correlation among the disturbances (error terms) entering into the population regression function."
  It can be written as: Cov(u_i, u_j) = 0 for i ≠ j.
• When this assumption is not satisfied, we get the so-called autocorrelation:
  Cov(u_i, u_j) ≠ 0 for i ≠ j.

7
More details on the related assumption
• The classical model assumes that the disturbance term relating to any observation is not influenced by the disturbance term relating to any other observation.
• This assumption is most relevant for time-series data.
• Examples:
  o When regressing output on labor and capital inputs, if a labor strike affects output this quarter, there is no reason to believe that this disruption will be carried over to the next quarter.
  o If output is lower this quarter, there is no reason to expect it to be lower next quarter.

8
Assumption violated ⇒ Autocorrelation

• Autocorrelation is represented by the following formula:

  Cov(u_i, u_j) = E{[u_i − E(u_i)][u_j − E(u_j)]} = E(u_i u_j) ≠ 0 for i ≠ j

• This formula means that the values of the error term are not independent; the error in one period in some way influences the error in another period.

• Applying this to the previous example: the disruption caused by a strike this quarter may very well affect output next quarter.

9
Definition: First-order autocorrelation, AR(1)

Yt = β1 + β2 X2t + ut,  t = 1, …, T

If Cov(ut, us) = E(ut us) ≠ 0 where t ≠ s,
and if
  ut = ρ ut−1 + vt,  where −1 < ρ < 1 (ρ: rho)
  and vt ~ iid(0, σv²) (white noise):
    E(vt) = 0,  Var(vt) = σv²,  Cov(vt, vs) = 0 for t ≠ s,

this scheme is called first-order autocorrelation and is denoted AR(1).
Autoregressive: the regression of ut can be explained by itself lagged one period.
ρ (rho): the first-order autocorrelation coefficient, or "coefficient of autocovariance".
10
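To make the AR(1) scheme concrete, here is a minimal Python sketch (not part of the lecture; ρ = 0.9, σv = 1 and T = 200 are illustrative values) that simulates ut = ρ ut−1 + vt and checks that the first-order sample correlation of u is close to ρ:

import numpy as np

rng = np.random.default_rng(0)
rho, sigma_v, T = 0.9, 1.0, 200        # assumed values, for illustration only

v = rng.normal(0.0, sigma_v, size=T)   # white noise v_t ~ iid N(0, sigma_v^2)
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + v[t]       # first-order autoregressive errors

print(np.corrcoef(u[1:], u[:-1])[0, 1])   # should be close to rho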
Example of serial correlation:

Consumption_t = β1 + β2 Income_t + error_t

Year  Consumption  Income  Error term
1973  230          320     u1973
…     …            …       …
1995  558          714     u1995
1996  699          822     u1996
1997  881          907     u1997
1998  925          1003    u1998
1999  984          1174    u1999
2000  1072         1246    u2000

The error term represents other factors that affect consumption, such as the tax rate. This year's tax rate may be determined by last year's tax rate:
  TaxRate2000 = ρ TaxRate1999 + v2000
  ⇒ ut = ρ ut−1 + vt,  vt ~ iid(0, σv²)
11
If ut = ρ1 ut−1 + vt, it is AR(1), first-order autoregressive.
If ut = ρ1 ut−1 + ρ2 ut−2 + vt, it is AR(2), second-order autoregressive.
……
If ut = ρ1 ut−1 + ρ2 ut−2 + … + ρn ut−n + vt, it is AR(n), nth-order autoregressive (higher-order autocorrelation).

Cov(ut, ut−1) > 0  ⇒  0 < ρ < 1: positive AR(1)
Cov(ut, ut−1) < 0  ⇒  −1 < ρ < 0: negative AR(1)
−1 < ρ < 1
12
[Figure, slide 13: plots of the residuals ût against time under positive autocorrelation — the current error term tends to have the same sign as the previous one.]
13

[Figure, slide 14: plots of ût against time under negative autocorrelation — the current error term tends to have the opposite sign from the previous one — and under zero autocorrelation, where the current error term appears random relative to the previous one.]
14
The error term at time t is a linear combination of the current and past disturbances.

0 < ρ < 1 or −1 < ρ < 0: the further the period lies in the past, the smaller the weight of that error term (ut−1, ut−2, …) in determining ut.

ρ = 1: the past is of equal importance to the current.

ρ > 1: the past is more important than the current.

15
The Consequences of
Serial Correlation

16
The consequences of serial correlation:

1. The estimated coefficients are still unbiased: E(β̂k) = βk.

2. The variance of β̂k is no longer the smallest.

3. The standard error of the estimated coefficient, Se(β̂k), becomes large.

Therefore, when AR(1) is present in the regression, the OLS estimator is no longer BLUE.

17
Two-variable regression model: Yt = β1 + β2 X2t + ut

The OLS estimator of β2:  β̂2 = Σxy / Σx²

If E(ut ut−1) = 0, then Var(β̂2) = σ² / Σxt²

If E(ut ut−1) ≠ 0 and ut = ρ ut−1 + vt, −1 < ρ < 1, then

  Var(β̂2)_AR1 = (σ² / Σxt²) [1 + 2ρ (Σxt xt+1 / Σxt²) + 2ρ² (Σxt xt+2 / Σxt²) + …]

If ρ = 0 (zero autocorrelation), then Var(β̂2)_AR1 = Var(β̂2).

If ρ ≠ 0 (autocorrelation), then Var(β̂2)_AR1 > Var(β̂2).

The AR(1) variance is not the smallest.

18
Autoregressive scheme:

ut = ρ ut−1 + vt
  with ut−1 = ρ ut−2 + vt−1  ⇒  ut = ρ[ρ ut−2 + vt−1] + vt = ρ² ut−2 + ρ vt−1 + vt
  with ut−2 = ρ ut−3 + vt−2  ⇒  ut = ρ²[ρ ut−3 + vt−2] + ρ vt−1 + vt
                                   = ρ³ ut−3 + ρ² vt−2 + ρ vt−1 + vt

With σu² = σv² / (1 − ρ²):
  E(ut ut−1) = ρ σu²
  E(ut ut−2) = ρ² σu²
  E(ut ut−3) = ρ³ σu²
  ……
  E(ut ut−k) = ρ^k σu²

It means the more periods in the past, the smaller the effect on the current period.
19
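A quick simulated check of this geometric decay (a sketch with illustrative values ρ = 0.8, σv = 1, not figures from the slides):

import numpy as np

rng = np.random.default_rng(1)
rho, sigma_v, T = 0.8, 1.0, 200_000

v = rng.normal(0.0, sigma_v, size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + v[t]

sigma_u2 = sigma_v**2 / (1 - rho**2)          # Var(u_t) under AR(1)
for k in range(1, 4):
    sample_cov = np.cov(u[k:], u[:-k])[0, 1]
    print(k, round(sample_cov, 3), round(rho**k * sigma_u2, 3))  # near-equal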
How to detect autocorrelation?

20
• Durbin-Watson d test
• The Breusch-Godfrey (BG) test of higher-order autocorrelation
• Durbin h test

21
How to detect autocorrelation?
[Data, slide 22: Gujarati (2003), Table 12.4, p. 460 — the wage-output data used in the detection examples below.]

22
Run OLS on ût = ρ ût−1 + vt and check the t-value of the coefficient.

The estimated coefficient gives ρ̂ = 0.914245; the original regression's DW = 0.122904 gives
  ρ̂ ≈ 1 − DW/2 = 1 − 0.122904/2 = 0.9385
23
Durbin-Watson autocorrelation test
From the OLS regression result: d or DW* = 0.1229
(At the 5% level of significance, k' = 1, n = 40)

dL = 1.442, dU = 1.544
H0: no autocorrelation (ρ = 0)
H1: autocorrelation exists (ρ > 0)

Since DW* = 0.1229 < dL = 1.442, it falls in the rejection region: reject H0, positive autocorrelation.

[Diagram: the d scale with the rejection region to the left of dL = 1.442, then dU = 1.544 and 2 marked; DW* = 0.1229 lies inside the rejection region.]
24
Durbin-Watson test
OLS: Y = β1 + β2 X2 + … + βk Xk + ut; obtain ût and the DW statistic (d).
Assuming an AR(1) process: ut = ρ ut−1 + vt, −1 < ρ < 1

I. H0: ρ = 0 (no autocorrelation)
   H1: ρ > 0 (positive autocorrelation)

Compare d* (DW*) with the critical values dL and dU:
  if d* < dL ⇒ reject H0
  if d* > dU ⇒ do not reject H0
  if dL ≤ d* ≤ dU ⇒ the test is inconclusive
25
Durbin-Watson d test (cont'd)

• Example 1:
DW = 1.8756; N = 23; K' = 2 excluding the intercept.
The critical values are dL = 1.168 and dU = 1.534 at 5%.
In this case we would not reject the null hypothesis of no autocorrelation, since 1.534 < 1.8756 < 4 − 1.534.

• Example 2:
DW = 0.1380; N = 32; K' = 1 (explanatory variable).
The critical values are dL = 1.37 and dU = 1.50 at 5%.
In this case, we cannot reject the hypothesis that there is positive serial correlation in the residuals, since 0.1380 < 1.37.

26
Durbin-Watson test (cont'd)

  d = Σ_{t=2}^{T} (ût − ût−1)² / Σ_{t=1}^{T} ût²  ≈  2(1 − ρ̂)

  ⇒ ρ̂ = 1 − d/2

Since −1 ≤ ρ̂ ≤ 1, this implies 0 ≤ d ≤ 4.

27
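The statistic is straightforward to compute from the residuals. A small sketch (the residual values below are made up for illustration):

import numpy as np

def durbin_watson(resid):
    # d = sum_{t=2}^{T} (u_t - u_{t-1})^2 / sum_{t=1}^{T} u_t^2
    diff = np.diff(resid)
    return np.sum(diff**2) / np.sum(resid**2)

resid = np.array([0.5, 0.6, 0.4, 0.7, 0.3, -0.2, -0.4, -0.1])  # toy residuals
d = durbin_watson(resid)
print(d, 1 - d / 2)   # d near 0 => strong positive AR(1); d near 2 => none

statsmodels ships the same computation as statsmodels.stats.stattools.durbin_watson, if you prefer not to roll your own.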
Durbin-Watson test (cont'd)
II. H0: ρ = 0 (no negative autocorrelation)
    H1: ρ < 0 (negative autocorrelation)

We use (4 − d) (when d is greater than 2):

  if (4 − d) < dL, i.e., 4 − dL < d < 4 ⇒ reject H0

  if (4 − d) > dU, i.e., dU < d < 4 − dU ⇒ do not reject H0

  if dL ≤ (4 − d) ≤ dU, i.e., 4 − dU ≤ d ≤ 4 − dL ⇒ inconclusive

28
Durbin-Watson test (cont'd)
III. H0: ρ = 0 (no autocorrelation)
     H1: ρ ≠ 0 (two-tailed test for autocorrelation, either positive or negative AR(1))

  If d < dL or d > 4 − dL ⇒ reject H0

  If dU < d < 4 − dU ⇒ do not reject H0

  If dL ≤ d ≤ dU or 4 − dU ≤ d ≤ 4 − dL ⇒ inconclusive

29
29
H0 :  = 0 H0 :  = 0
negative autocorrelation
positive autocorrelation H1 :  < 0
H1 :  > 0
reject reject
H0 H0
not not
reject reject

inconclusive inconclusive

dL du 2 4-du 4-dL 4 DW
0 (d)
1.372 1.546 2.45 2.63 1% & 5%
1.525 1.703 2.297 2.475 Critical values
0.23 30
For example:

  UM̂t = 23.1 − 0.078 CAPt − 0.146 CAPt−1 + 0.043 Tt
        (15.6)  (2.0)       (3.7)          (10.3)

  R̄² = 0.78, F = 78.9, σ̂u = 0.677, RSS = 29.3, observed DW = 0.23, n = 68

(i) K' = 3 (number of independent variables)
(ii) n = 68; α = 0.05 and α = 0.01 significance levels
(iii) at α = 0.05: dL = 1.525, dU = 1.703
      at α = 0.01: dL = 1.372, dU = 1.546

Since DW = 0.23 < dL at both levels, reject H0: positive autocorrelation exists.
31
The assumptions underlying the d (DW) statistic:
1. An intercept term must be included in the OLS regression.
2. The X's are nonstochastic.
3. It tests only AR(1): ut = ρ ut−1 + vt, where vt ~ iid(0, σv²).
4. The model must not include a lagged dependent variable:
   Yt = β1 + β2 Xt2 + β3 Xt3 + … + βk Xtk + γ Yt−1 + ut (autoregressive model).
5. No missing observations. For example:

   Year  Y     X
   1970  100   15
   …     …     …
   1980  235   20
   1981  N.A.  N.A.  (missing)
   1982  N.A.  N.A.  (missing)
   …
   1993  253   37
   1994  281   41
   1995  …     …
32
Durbin h Test

33
Lagged Dependent Variable and Autocorrelation

Yt = β1 + β2 X2t + β3 X3t + … + βk Xkt + λ1 Yt−1 + ut

With a lagged dependent variable, the DW statistic will often be close to 2, i.e., DW is not reliable: DW does not converge to 2(1 − ρ̂).

Durbin h test: compute

  h* = ρ̂ √[ n / (1 − n·Var(λ̂1)) ]

Compare h* to Zc, where Zc ~ N(0, 1), the standard normal distribution.

If |h*| > Zc ⇒ reject H0: ρ = 0 (no autocorrelation).

34
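A minimal sketch of the h statistic (the inputs — ρ̂, n and the estimated variance of the coefficient on Yt−1 — are made-up numbers for illustration):

import math

def durbin_h(rho_hat, n, var_lam):
    denom = 1 - n * var_lam
    if denom <= 0:
        # h is undefined when n * Var(lambda_hat) >= 1
        raise ValueError("Durbin h test not applicable")
    return rho_hat * math.sqrt(n / denom)

h = durbin_h(rho_hat=0.3, n=50, var_lam=0.004)
print(abs(h) > 1.96)   # compare |h| with the 5% critical value of N(0, 1)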
The Breusch-Godfrey (BG) or
Lagrange Multiplier test of higher-order
autocorrelation

35
Breusch-Godfrey (BG) test of higher-order autocorrelation,
also called Durbin's m test (Lagrange Multiplier, LM, test)

Test procedure:
(1) Run OLS and obtain the residuals ût.
(2) Regress ût on all the regressors in the model plus the additional regressors ût−1, ût−2, ût−3, …, ût−p:

    ût = α1 + α2 Xt + ρ̂1 ût−1 + ρ̂2 ût−2 + ρ̂3 ût−3 + … + ρ̂p ût−p + vt

    Obtain the R² value from this regression.
(3) Compute the BG statistic: (n − p)R².
(4) Compare the BG statistic with χ²p (p is the order of the test).
(5) If BG > χ²p, reject H0 (no autocorrelation): there is higher-order autocorrelation.
    If BG < χ²p, do not reject H0: there is no higher-order autocorrelation.
36
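In Python the same test is available directly in statsmodels; the sketch below runs it on simulated data (the series are generated for illustration, not the Gujarati wage-output data used later in the slides):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(2)
T = 100
x = rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):                      # build AR(1) errors with rho = 0.7
    u[t] = 0.7 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

res = sm.OLS(y, sm.add_constant(x)).fit()
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(res, nlags=2)  # p = 2
print(lm_stat, lm_pval)    # a small p-value rejects H0 of no autocorrelation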
[Screenshot, slide 37: in EViews, click VIEW to run the serial-correlation LM test.]

37
[Screenshot, slide 38: compare the test statistic with the critical values, and check the t-statistics on the lagged residuals to see the order of autocorrelation.]
38
REMEDIAL MEASURES

1. First-difference transformation
2. Add T = trend
3. Cochrane-Orcutt two-step procedure (CORC)
4. Cochrane-Orcutt iterative procedure
5. Generalized Least Squares (Prais-Winsten)
6. Durbin's two-step method

39
Remedy 1: First-difference transformation
  Yt = β1 + β2 Xt + ut
  Yt−1 = β1 + β2 Xt−1 + ut−1          (assume ρ = 1)
  ⇒ Yt − Yt−1 = β1 − β1 + β2 (Xt − Xt−1) + (ut − ut−1)
  ⇒ ΔYt = β2 ΔXt + Δut                (no intercept)

Remedy 2: Add T = trend
  Yt = β1 + β2 Xt + β3 T + ut
  Yt−1 = β1 + β2 Xt−1 + β3 (T − 1) + ut−1
  ⇒ (Yt − Yt−1) = (β1 − β1) + β2 (Xt − Xt−1) + β3 [T − (T − 1)] + (ut − ut−1)
  ⇒ ΔYt = β2 ΔXt + β3·1 + u′t
  ⇒ ΔYt = β1* + β2 ΔXt + u′t
If β̂1* > 0 (since β1* = β3) ⇒ an upward trend in Y.
40
Remedy 3: Cochrane-Orcutt two-step procedure (CORC) — a Generalized Least Squares (GLS) method

(1) Run OLS on Yt = β1 + β2 Xt + ut and obtain the residuals ût.
(2) Run OLS on ût = ρ ût−1 + vt and obtain ρ̂.
(3) Use ρ̂ to transform the variables:
      Yt* = Yt − ρ̂ Yt−1
      Xt* = Xt − ρ̂ Xt−1
    since subtracting ρ̂ times the lagged equation gives
      (Yt − ρ̂ Yt−1) = β1 (1 − ρ̂) + β2 (Xt − ρ̂ Xt−1) + (ut − ρ̂ ut−1)
(4) Run OLS on Yt* = β1* + β2* Xt* + ut*.
41
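A hedged sketch of the two-step procedure (assuming y and x are 1-D numpy arrays; the function name is mine, not from the slides):

import numpy as np
import statsmodels.api as sm

def cochrane_orcutt_two_step(y, x):
    # (1) OLS on the original model; obtain the residuals
    u = sm.OLS(y, sm.add_constant(x)).fit().resid
    # (2) OLS of u_t on u_{t-1} (no constant) to estimate rho
    rho = sm.OLS(u[1:], u[:-1]).fit().params[0]
    # (3) quasi-difference the data (the first observation is lost)
    y_star = y[1:] - rho * y[:-1]
    x_star = x[1:] - rho * x[:-1]
    # (4) OLS on the transformed model
    return rho, sm.OLS(y_star, sm.add_constant(x_star)).fit()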
Remedy 4: Cochrane-Orcutt iterative procedure
(5) If the DW test shows that autocorrelation still exists, iterate the procedure from (4) and obtain the residuals ût*.
(6) Run OLS on ût* = ρ ût−1* + v′t and obtain ρ̂2, the second-round estimate of ρ (ρ̂2 ≈ 1 − DW2/2).
(7) Use ρ̂2 to transform the variables:
      Yt** = Yt − ρ̂2 Yt−1
      Xt** = Xt − ρ̂2 Xt−1
(8) Run OLS on Yt** = β1** + β2** Xt** + ut**,
    which is (Yt − ρ̂2 Yt−1) = β1 (1 − ρ̂2) + β2 (Xt − ρ̂2 Xt−1) + (ut − ρ̂2 ut−1).
42
Cochrane-Orcutt iterative procedure (cont.)
(9) Check the DW3 statistic; if autocorrelation still exists, go into a third round of the procedure, and so on,
    until successive estimates of ρ differ only a little (e.g., |ρ̂2 − ρ̂| < 0.01).

Remedy 5: Generalized Least Squares (GLS) — the Prais-Winsten transformation
  Yt = β1 + β2 Xt + ut,  t = 1, …, T          (1)
  Assume AR(1): ut = ρ ut−1 + vt, −1 < ρ < 1
  ρYt−1 = ρβ1 + ρβ2 Xt−1 + ρut−1              (2)
  (1) − (2) ⇒ (Yt − ρYt−1) = β1 (1 − ρ) + β2 (Xt − ρXt−1) + (ut − ρut−1)
  GLS ⇒ Yt* = β1* + β2* Xt* + ut*
43
43
To avoid the loss of the first observation of each
variable, the first observation of Y* and X* should be
transformed as :
Yt=1*=  1 - ^2 (Yt=1)
Xt=1* =  1 - ^2 (Xt=1)

but Yt=2* = Yt=2 - ^Yt=1 ; Xt=2* = Xt=2 -  ^Xt=1

Yt=3* = Yt=3 - ^ Yt=2 ; Xt=3* = Xt=3 - ^Xt=2


.
…..

.
…..
.
…..

.
…..

.
…..
.
…..
Yt* = Yt - ^ Yt-1 ; Xt* = Xt - ^ Xt-1

44
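The transformation itself is two lines of code. A sketch (z is any 1-D series, ρ an AR(1) estimate; the function name is mine):

import numpy as np

def prais_winsten_transform(z, rho):
    z_star = np.empty_like(z, dtype=float)
    z_star[0] = np.sqrt(1 - rho**2) * z[0]   # rescale, don't drop, obs. 1
    z_star[1:] = z[1:] - rho * z[:-1]        # usual quasi-differencing
    return z_star

Applying it to both Y and X and running OLS on the transformed series gives the Prais-Winsten (full-sample GLS) estimates.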
Remedy 6: Durbin's two-step method: Yt = β1 + β2 Xt + ut

Since (Yt − ρYt−1) = β1 (1 − ρ) + β2 (Xt − ρXt−1) + vt,
  ⇒ Yt = β1* + β2 Xt − ρβ2 Xt−1 + ρYt−1 + vt

I. Run OLS on this specification:
     Yt = β1* + β2* Xt − β3* Xt−1 + β4* Yt−1 + vt
   and obtain β̂4* as an estimate of ρ (rho), i.e., ρ̂.
II. Transform the variables:
     Yt* = Yt − β̂4* Yt−1, i.e., Yt* = Yt − ρ̂Yt−1
     Xt* = Xt − β̂4* Xt−1, i.e., Xt* = Xt − ρ̂Xt−1
III. Run OLS on the model Yt* = β1 + β2 Xt* + u′t.
   Compare: β̂1* = β̂1 (1 − ρ̂) and β̂2* = β̂2.
45
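A sketch of the two steps (arrays assumed, helper name mine; the column order in step I determines which coefficient estimates ρ):

import numpy as np
import statsmodels.api as sm

def durbin_two_step(y, x):
    # I. OLS of Y_t on X_t, X_{t-1} and Y_{t-1}; the coefficient on Y_{t-1}
    #    serves as the estimate of rho
    X1 = sm.add_constant(np.column_stack([x[1:], x[:-1], y[:-1]]))
    rho = sm.OLS(y[1:], X1).fit().params[3]   # const, x_t, x_{t-1}, y_{t-1}
    # II./III. quasi-difference with rho-hat and re-run OLS
    y_star = y[1:] - rho * y[:-1]
    x_star = x[1:] - rho * x[:-1]
    return rho, sm.OLS(y_star, sm.add_constant(x_star)).fit()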
Example

46
Example: Gujarati (2003), Table 12-4, p. 460

  Wage(Yt) = β1 + β2 Output(Xt) + ut

  DW ≈ 2(1 − ρ̂):  0.1229 ≈ 2(1 − 0.9385)
47
  ut = ρ ut−1 + vt

  ρ̂ ≈ 1 − DW/2:  0.9142 ≈ 1 − 0.0614 = 0.9386
48
Cochrane-Orcutt two-step procedure (2)

Critical values: dL = 1.237, dU = 1.337.
Since DW > dU after the CO correction, there is no autocorrelation.
49
Running the Cochrane-Orcutt iterative procedure in EViews

50
[Screenshot, slide 51: EViews output with the iterated ρ̂ estimates.]
51
THE END

52
