
Questions and Answers on Regression Models with Lagged Dependent Variables and ARMA Models

L. Magee, Winter 2013

———————————————————–

1. Consider an AR(1) process: εt = ρεt−1 + ut, where E(ut) = 0, E(ut²) = σu², and E(ut us) = 0 for
all t ≠ s. Assume that εt is stationary. Derive a formula for Cov(εt, εt−s), the covariance of εt
and εt−s, that holds for s = 0, 1, 2, 3, . . ..

2. Let ut be white noise. That is,

E(ut) = 0 for all t
E(ut²) = σu² for all t
E(ut ut−s) = 0 for all t and s where s ≠ 0

For each of the following time series processes, determine the variance of yt as a function of σu²
and of the parameters appearing in the equations below. Also derive the first- and second-order
autocovariances and autocorrelations. Assume that the time series processes are stationary.

(a) yt = βyt−1 + ut (yt is an AR(1) process)
(b) yt = β + εt, where εt = ρεt−1 + ut (εt is an AR(1) process)
(c) yt = ut + θut−1 (yt is an MA(1) process)
(d) yt = ut + 0.6ut−1 + 0.2ut−2 + 0.1ut−3 (yt is an MA(3) process)

3. Consider a stationary AR(2) process

yt = µ + ρ1 yt−1 + ρ2 yt−2 + ut

where ρ2 ≠ 0. Are there values of ρ1 and ρ2 for which this process could be rewritten in moving
average form as an MA(2) process? If so, what are the values of ρ1 and ρ2? If no such values
exist, briefly explain why not.

4. An autoregressive distributed lag model is estimated as:

yt = 31.2 + 0.61yt−1 + 0.19yt−2 + 1.40xt + 0.58xt−1 + ut

Consider the effect on y of a one-unit increase in x at time t∗ in the following two cases:

(a) x remains one unit higher permanently after time t∗ .
(b) x immediately returns to its former level at time t∗ + 1.

Obtain the estimated effect on y in each of these cases at the four time periods: t∗ , t∗ + 1, t∗ + 2,
and the long run effect, t∗ + ∞.

5. Consider a regression model with a constant term and three explanatory variables, which include
the lagged dependent variable yt−1 and two other variables, x1t and x2t . The estimated model is

yt = 21.0 + 0.6yt−1 + 1.5x1t + 0.75x2t + et

(a) Obtain the estimated effect on y of a permanent one-unit increase in x1 at time t∗ (that is,
x1 remains one unit higher permanently after time t∗ ) at the four time periods: t∗ ; t∗ + 1;
t∗ + 2; and the long run effect, t∗ + ∞.
(b) Compare the size of the estimated effect on y of a permanent one-unit increase in x1 to
the size of the estimated effect on y of a permanent one-unit increase in x2 . Mention their
initial (time t∗ ) effects and their long run effects. No algebra or calculations are required.

6. For each of the following time series processes

(a) yt = µ + βyt−1 + ut
(b) yt = µ + ut + 0.6ut−1 + 0.2ut−2

derive

(i) the unconditional mean, E(yt )


(ii) the unconditional variance, Var(yt )
(iii) the first-order autocovariance, Cov(yt , yt−1 ) = E(yt − E(yt ))(yt−1 − E(yt−1 ))

Assume: E(ut) = 0 for all t; E(ut²) = σ² for all t; E(ut ut−s) = 0 for all t and s where s ≠ 0; and
that the time series processes are stationary.

7. An autoregressive distributed lag model is estimated as

yt = 11 + 0.7yt−1 − 0.4yt−2 + 9xt + 2xt−1 + ut

Consider the effect on y of a one-unit increase in x at time t∗ where x remains one unit higher
permanently after time t∗ . Obtain the estimated effect on y at time t∗ , t∗ + 1, t∗ + 2, and the
long run effect.

8. Consider a regression model with a constant term and three explanatory variables, which include
the lagged dependent variable yt−1 and two other variables, x1t and x2t . The estimated model is

yt = 2.1 + 0.8yt−1 − 2.0x1t + 0.5x2t + et

(a) Obtain the estimated effect on y of a permanent one-unit increase in x1 at time t∗ (that is,
x1 remains one unit higher permanently after time t∗ ) at the four time periods: t∗ ; t∗ + 1;
t∗ + 2; and the long run effect, t∗ + ∞.
(b) Compare the size of the estimated effect on y of a permanent one-unit increase in x1 with
the size of the estimated effect on y of a permanent one-unit increase in x2 . Mention their
initial (time t∗ ) effects and their long run effects. No algebra or calculations are required.

9. Suppose εt follows a stationary AR(1) process:

εt = ρεt−1 + ut, t = 1, . . . , n

where ut is white noise. Let ρ = 0.6 and Var(ut) = 5.

(a) What is the numerical value of the correlation between εt and εt−3?
(b) What is the numerical value of Var(εt)?
(c) Suppose that E(ut) = 10, instead of the usual zero-mean assumption. What is the numerical
value of E(εt)?

10. Let ut be white noise, where

E(ut) = 0 for all t
E(ut²) = 20 for all t
E(ut ut−s) = 0 for all t and s where s ≠ 0

Let yt = ut + 0.7ut−1 + 0.1ut−2 . Determine the numerical values of

(a) Var(yt)
(b) The covariance between yt and yt−1
(c) The correlation between yt and yt−1

Answers

1. (1 − ρL)εt = ut ⇒ εt = (1 − ρL)⁻¹ut
εt = ut + ρut−1 + ρ²ut−2 + . . .

Since E(εt) = 0 for all t,

Cov(εt, εt−s) = E(εt εt−s)
= E[(ut + ρut−1 + ρ²ut−2 + . . . + ρ^s ut−s + . . .)(ut−s + ρut−s−1 + ρ²ut−s−2 + . . .)]

Because E(ut us) = 0 for all t ≠ s, the only terms with non-zero expectations in this product are
those with equal subscripts on the u's. The above expression then simplifies to

Cov(εt, εt−s) = E(ρ^s ut−s² + ρ^(s+2) ut−s−1² + ρ^(s+4) ut−s−2² + . . .)
= ρ^s(σu² + ρ²σu² + ρ⁴σu² + . . .)
= ρ^s σu²(1 + ρ² + ρ⁴ + . . .)
= ρ^s σu²/(1 − ρ²), for all s = 0, 1, 2, . . .
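As a sanity check, this formula can be verified by Monte Carlo: simulate a long AR(1) series and compare sample autocovariances with ρ^s σu²/(1 − ρ²). This sketch is not part of the original answer; ρ = 0.5 and σu² = 1 are arbitrary illustration values.

```python
import random
import statistics

# Monte Carlo check of Cov(e_t, e_{t-s}) = rho^s * sigma_u^2 / (1 - rho^2).
# rho = 0.5 and sigma_u^2 = 1 are arbitrary illustration values, not from the question.
random.seed(0)
rho, n = 0.5, 400_000

e = [0.0]
for _ in range(n):                 # e_t = rho * e_{t-1} + u_t, with u_t ~ N(0, 1)
    e.append(rho * e[-1] + random.gauss(0.0, 1.0))
e = e[1:]
mean_e = statistics.fmean(e)

def sample_cov(s):
    """Sample covariance between e_t and e_{t-s}."""
    return statistics.fmean((e[t] - mean_e) * (e[t - s] - mean_e) for t in range(s, len(e)))

for s in range(4):
    print(f"s={s}: sample {sample_cov(s):.3f}, formula {rho**s / (1 - rho**2):.3f}")
```

With 400,000 draws the sample and formula columns agree to about two decimal places.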

2. (a) Var(yt) = Var(βyt−1 + ut)
= β²Var(yt−1) + σu²
= β²Var(yt) + σu²

so Var(yt) = σu²/(1 − β²)

Cov(yt, yt−1) = E(yt yt−1) (since E(yt) = 0)
= E[(βyt−1 + ut)yt−1] = βE(yt−1²) = βVar(yt)

For Cov(yt, yt−2), use: yt = βyt−1 + ut = β(βyt−2 + ut−1) + ut = β²yt−2 + βut−1 + ut. Then

Cov(yt, yt−2) = E(yt yt−2) = E[(β²yt−2 + βut−1 + ut)yt−2]
= β²E(yt−2²) = β²Var(yt)

Substitutions then give:

Corr(yt, yt−1) = Cov(yt, yt−1)/√(Var(yt)Var(yt−1)) = β

and similarly

Corr(yt, yt−2) = β²

(b) yt − β = εt = ρεt−1 + ut = ρ(yt−1 − β) + ut

This is like (a) except that now E(yt) = β instead of 0, and ρ replaces part (a)'s β. Then

Var(yt) = σu²/(1 − ρ²)
Cov(yt, yt−1) = E[(yt − β)(yt−1 − β)] = ρVar(yt)
Cov(yt, yt−2) = ρ²Var(yt)
Corr(yt, yt−1) = ρ
Corr(yt, yt−2) = ρ²

(c) Var(yt) = Var(ut + θut−1) = Var(ut) + θ²Var(ut−1) = σu² + θ²σu² = (1 + θ²)σu²

Cov(yt, yt−1) = E[(ut + θut−1)(ut−1 + θut−2)] = θE(ut−1²) = θσu²

Cov(yt, yt−2) = 0 (yt and yt−2 have no ut's in common and the ut's are uncorrelated)

Corr(yt, yt−1) = θσu²/((1 + θ²)σu²) = θ/(1 + θ²)
Corr(yt, yt−2) = 0

(d) Var(yt) = σu² + (0.6)²σu² + (0.2)²σu² + (0.1)²σu²
= (1 + 0.36 + 0.04 + 0.01)σu²
= 1.41σu²

Cov(yt, yt−1) = E[(ut + 0.6ut−1 + 0.2ut−2 + 0.1ut−3)(ut−1 + 0.6ut−2 + 0.2ut−3 + 0.1ut−4)]
= 0.6σu² + 0.12σu² + 0.02σu²
= 0.74σu²

Cov(yt, yt−2) = E[(ut + 0.6ut−1 + 0.2ut−2 + 0.1ut−3)(ut−2 + 0.6ut−3 + 0.2ut−4 + 0.1ut−5)]
= 0.2σu² + 0.06σu²
= 0.26σu²

Corr(yt, yt−1) = 0.74/1.41 = 0.52
Corr(yt, yt−2) = 0.26/1.41 = 0.18
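The MA(3) moments in part (d) can be cross-checked directly from the coefficients: for yt = Σj θj ut−j, Cov(yt, yt−s) = σu² Σj θj θj+s. A minimal sketch, working in units of σu²:

```python
# Cross-check of 2(d): for y_t = sum_j theta_j u_{t-j},
# Cov(y_t, y_{t-s}) = sigma_u^2 * sum_j theta_j * theta_{j+s}.
theta = [1.0, 0.6, 0.2, 0.1]   # theta_0 .. theta_3 from the question

def gamma(s):
    """Autocovariance at lag s, in units of sigma_u^2."""
    if s >= len(theta):
        return 0.0
    return sum(theta[j] * theta[j + s] for j in range(len(theta) - s))

print(round(gamma(0), 2))             # 1.41
print(round(gamma(1), 2))             # 0.74
print(round(gamma(2), 2))             # 0.26
print(round(gamma(1) / gamma(0), 2))  # 0.52
print(round(gamma(2) / gamma(0), 2))  # 0.18
```

The same one-liner reproduces parts (a)–(c) of the answer once their MA(∞) or MA(1) weights are plugged in.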

3. Write this process as

(1 − ρ1L − ρ2L²)yt = µ + ut

Invert the lag polynomial to get it in MA form:

yt = (1 − ρ1L − ρ2L²)⁻¹(µ + ut) = µ/(1 − ρ1 − ρ2) + (1 − ρ1L − ρ2L²)⁻¹ut

The inverse lag polynomial (1 − ρ1L − ρ2L²)⁻¹ is an infinite series of the form 1 + θ1L + θ2L² +
θ3L³ + . . ., which is an infinite-order MA, not an MA(2). One way to see this is to factor the
original quadratic lag polynomial as (1 − λ1L)(1 − λ2L) for some values λ1 and λ2, which are
both non-zero since ρ2 ≠ 0. The inverse of this factorized lag polynomial is the product of
two infinite-term geometric series

(1 − λ1L)⁻¹(1 − λ2L)⁻¹ = (1 + λ1L + λ1²L² + λ1³L³ + . . .)(1 + λ2L + λ2²L² + λ2³L³ + . . .)

which itself is an infinite series.
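The same point can be seen numerically: the MA weights θj of a stationary AR(2) satisfy the recursion θ0 = 1, θ1 = ρ1, θj = ρ1θj−1 + ρ2θj−2 for j ≥ 2, and when ρ2 ≠ 0 they never cut off at a finite lag. A sketch with arbitrary illustration values ρ1 = 0.5, ρ2 = 0.3 (not from the question):

```python
# MA(infinity) weights of (1 - rho1*L - rho2*L^2)^{-1}:
# theta_0 = 1, theta_1 = rho1, theta_j = rho1*theta_{j-1} + rho2*theta_{j-2}.
rho1, rho2 = 0.5, 0.3   # arbitrary stationary illustration values

theta = [1.0, rho1]
for _ in range(10):
    theta.append(rho1 * theta[-1] + rho2 * theta[-2])

# With rho2 != 0 the weights decay geometrically but never terminate,
# so no finite-order MA representation exists.
print([round(t, 4) for t in theta])
```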

4. (Note that ∆ represents the change in y due to a change in x. It does not represent the first-
difference operator here.)

(a) ∆yt∗ = 1.40∆xt∗ = 1.40(1) = 1.40

∆yt∗ +1 = 0.61∆yt∗ + 1.40∆xt∗ +1 + 0.58∆xt∗


= 0.61(1.40) + 1.40(1) + 0.58(1)
= 2.834

∆yt∗ +2 = 0.61∆yt∗ +1 + 0.19∆yt∗ + 1.40∆xt∗ +2 + 0.58∆xt∗ +1


= 0.61(2.834) + 0.19(1.40) + 1.40(1) + 0.58(1)
= 3.975

The permanent effect can be obtained from ∆y = 0.61∆y + 0.19∆y + 1.40∆x + 0.58∆x,
where ∆x is the permanent change in x. Then solve for ∆y:

(1 − 0.61 − 0.19)∆y = 1.98∆x
∆y = (1.98/0.2)∆x = 9.9∆x = 9.90

(b) ∆yt∗ = 1.40∆xt∗ = 1.40

∆yt∗ +1 = 0.61∆yt∗ + 0.58∆xt∗ (Now ∆xt∗ +1 = 0)


= 0.61(1.40) + 0.58(1)
= 1.434

∆yt∗ +2 = 0.61∆yt∗ +1 + 0.19∆yt∗


= 0.61(1.434) + 0.19(1.40)
= 1.141

The permanent effect is ∆y = 0 since the permanent change in x is ∆x = 0.
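Both cases can be reproduced by iterating the ADL recursion ∆yt = 0.61∆yt−1 + 0.19∆yt−2 + 1.40∆xt + 0.58∆xt−1 directly. A sketch (the function name `multipliers` is my own; the coefficients are from this question, and the same recursion with other coefficients handles questions 5, 7, and 8):

```python
# Dynamic multipliers for y_t = 31.2 + 0.61 y_{t-1} + 0.19 y_{t-2} + 1.40 x_t + 0.58 x_{t-1}.
def multipliers(dx, a1=0.61, a2=0.19, b0=1.40, b1=0.58):
    """dy[k] = effect on y at time t* + k, given the path dx of changes in x."""
    dy = []
    for k in range(len(dx)):
        effect = b0 * dx[k]
        if k >= 1:
            effect += a1 * dy[k - 1] + b1 * dx[k - 1]
        if k >= 2:
            effect += a2 * dy[k - 2]
        dy.append(effect)
    return dy

print([round(v, 3) for v in multipliers([1, 1, 1])])   # permanent: [1.4, 2.834, 3.975]
print([round(v, 3) for v in multipliers([1, 0, 0])])   # temporary: [1.4, 1.434, 1.141]
print(round((1.40 + 0.58) / (1 - 0.61 - 0.19), 1))     # long-run effect: 9.9
```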

5. (a) Effect at time t∗: 1.5
at time t∗ + 1: 1.5 + 0.6 × 1.5 = 2.4
at time t∗ + 2: 2.4 + (0.6)² × 1.5 = 2.94
at time t∗ + ∞: 1.5/(1 − 0.6) = 3.75
(b) At every time period, the effects of x2 on y are half as big as the effects of x1 on y. Reason:
The coefficient on x2 is half the size of the coefficient on x1 , and the dynamic pattern of the
effects is the same for both, because that depends only on the coefficient on yt−1 .

6. (a) (i) E(yt) = µ + βE(yt−1) + E(ut) ⇒ E(yt) = µ + βE(yt) ⇒ E(yt)(1 − β) = µ
⇒ E(yt) = µ/(1 − β)
(ii) Var(yt) = β²Var(yt−1) + Var(ut) ⇒ Var(yt) = β²Var(yt) + σ²
⇒ Var(yt) = σ²/(1 − β²)
(iii) yt − E(yt) = µ + βyt−1 + ut − E(µ + βyt−1 + ut) = µ + βyt−1 + ut − (µ + βE(yt−1))
= β(yt−1 − E(yt−1)) + ut
So E[(yt − E(yt))(yt−1 − E(yt−1))] = E[(β(yt−1 − E(yt−1)) + ut)(yt−1 − E(yt−1))]
= βE(yt−1 − E(yt−1))² = βVar(yt)
(b) (i) E(yt) = µ
(ii) Var(yt) = E(yt − µ)² = (1 + 0.6² + 0.2²)σ² = 1.4σ²
(iii) E[(yt − E(yt))(yt−1 − E(yt−1))] = E[(ut + 0.6ut−1 + 0.2ut−2)(ut−1 + 0.6ut−2 + 0.2ut−3)]
= 0.6E(ut−1²) + 0.12E(ut−2²) = 0.72σ²
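The moments in 6(a) can be checked by simulation. A sketch with arbitrary illustration values µ = 2, β = 0.5, σ² = 1 (so the formulas give E(yt) = 4, Var(yt) ≈ 1.33, and Cov(yt, yt−1) ≈ 0.67):

```python
import random
import statistics

# Simulation check of 6(a): y_t = mu + beta*y_{t-1} + u_t, u_t ~ N(0, 1).
# mu = 2, beta = 0.5, sigma^2 = 1 are arbitrary illustration values, not from the question.
random.seed(1)
mu, beta, n = 2.0, 0.5, 400_000

y = [mu / (1 - beta)]                  # start at the unconditional mean
for _ in range(n):
    y.append(mu + beta * y[-1] + random.gauss(0.0, 1.0))
y = y[1:]

ybar = statistics.fmean(y)
var_y = statistics.fmean((v - ybar) ** 2 for v in y)
cov_1 = statistics.fmean((y[t] - ybar) * (y[t - 1] - ybar) for t in range(1, len(y)))

print(round(ybar, 2))    # ~ mu/(1 - beta)        = 4.0
print(round(var_y, 2))   # ~ sigma^2/(1 - beta^2) = 1.33
print(round(cov_1, 2))   # ~ beta * Var(y_t)      = 0.67
```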

7. At t∗, ∆y = 9 × 1 = 9
at t∗ + 1, ∆y = 0.7 × 9 + 9 × 1 + 2 × 1 = 17.3
at t∗ + 2, ∆y = 0.7 × 17.3 − 0.4 × 9 + 9 × 1 + 2 × 1 = 19.51
The long run effect is ∆y = (9 + 2)/(1 − 0.7 + 0.4) = 11/0.7 = 15.71

8. (a) Effect at time t∗: −2.0
at time t∗ + 1: −2.0 + 0.8 × (−2.0) = −3.6
at time t∗ + 2: −2.0 + 0.8 × (−3.6) = −4.88
at time t∗ + ∞: −2.0/(1 − 0.8) = −10.0
(b) At every time period, the effect of a change in x2 on y is −0.25 times the effect of a change
in x1 on y. This is because the coefficient on x2 is −0.25 times the coefficient on x1 . This
ratio does not change over time, because the way that the effect changes over time in this
model depends only on the coefficient on yt−1 , in the same way for both the x1 and x2
effects.

9. (a) When εt follows a stationary AR(1) process with first-order autocorrelation coefficient ρ,
then Corr(εt, εt−s) = ρ^s. Therefore Corr(εt, εt−3) = ρ³ = (0.6)³ = 0.216
(b) Var(εt) = ρ²Var(εt−1) + Var(ut)
Var(εt) = 0.36Var(εt) + 5
Var(εt) = 5/(1 − 0.36) = 7.81
(c) E(εt) = ρE(εt−1) + E(ut)
E(εt) = 0.6E(εt) + 10
E(εt) = 10/(1 − 0.6) = 25
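The three numerical answers follow from the stationarity formulas in one short computation; a minimal sketch:

```python
# Numerical checks for question 9: rho = 0.6, Var(u_t) = 5, and E(u_t) = 10 in part (c).
rho, var_u, mean_u = 0.6, 5.0, 10.0

corr_3 = rho**3               # (a) Corr(e_t, e_{t-3}) = rho^3
var_e = var_u / (1 - rho**2)  # (b) Var(e_t) = Var(u_t)/(1 - rho^2)
mean_e = mean_u / (1 - rho)   # (c) E(e_t)   = E(u_t)/(1 - rho)
print(round(corr_3, 3), round(var_e, 2), mean_e)   # 0.216 7.81 25.0
```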

10. (a) Var(yt) = Var(ut) + (0.7)²Var(ut−1) + (0.1)²Var(ut−2)
= 20 + 0.49(20) + 0.01(20)
= 20(1 + 0.49 + 0.01) = 30

(b) Since E(yt) = 0,

Cov(yt, yt−1) = E(yt yt−1)
= E[(ut + 0.7ut−1 + 0.1ut−2)(ut−1 + 0.7ut−2 + 0.1ut−3)]
= 0.7E(ut−1²) + (0.7)(0.1)E(ut−2²)
= (0.7 + 0.07)20 = 15.4

(c) Corr(yt, yt−1) = Cov(yt, yt−1)/√(Var(yt)Var(yt−1))
= 15.4/√(30 × 30) = 0.513
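The variance, first-order autocovariance, and first-order autocorrelation for question 10 can be verified directly from the MA(2) coefficients, in the same way as question 2(d); a minimal sketch:

```python
# Checks for question 10: y_t = u_t + 0.7 u_{t-1} + 0.1 u_{t-2}, with Var(u_t) = 20.
theta = [1.0, 0.7, 0.1]
var_u = 20.0

var_y = var_u * sum(t * t for t in theta)                    # Var(y_t)        = 30
cov_1 = var_u * (theta[0] * theta[1] + theta[1] * theta[2])  # Cov(y_t,y_{t-1}) = 15.4
corr_1 = cov_1 / var_y                                       # Corr            ~ 0.513
print(round(var_y, 1), round(cov_1, 1), round(corr_1, 3))    # 30.0 15.4 0.513
```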
