
Problem Set 2-1

Vladislav Morozov

1 Problem 1
Process:
$$x_t - 1.1x_{t-1} + 0.24x_{t-2} = c + \varepsilon_t \tag{1}$$
a The simplest way to solve this is to check stationarity via a condition for ARMA models.
Examine the AR polynomial:

$$\varphi(z) = 1 - 1.1z + 0.24z^2 \tag{2}$$

This has two roots, $z = 5/4$ and $z = 10/3$, both larger than 1 in absolute value. By, say, theorems 3.1.1 and 3.1.3 in Brockwell and Davis, there exists a unique stationary solution to (1). Moreover, $x_t$ can be written as a sum of present and past shocks only: $x_t = \mu + \sum_{i=0}^{\infty} \psi_i \varepsilon_{t-i}$ (in such a case we say $x_t$ is causal).
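As a quick numerical sanity check (a sketch, not part of the formal argument), the root condition can be verified with numpy:

```python
import numpy as np

# AR polynomial phi(z) = 1 - 1.1 z + 0.24 z^2;
# np.roots expects coefficients from highest degree to lowest
roots = np.roots([0.24, -1.1, 1.0])
print(roots)                       # [3.333..., 1.25] = [10/3, 5/4]
print(np.all(np.abs(roots) > 1))   # True -> unique stationary, causal solution
```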

Now that we know that the process is stationary, we can compute the mean and the
autocovariances easily. Start with the mean:

$$\mathrm{E}(x_t) = c + 1.1\,\mathrm{E}(x_{t-1}) - 0.24\,\mathrm{E}(x_{t-2}) \tag{3}$$

This produces
$$\mathrm{E}(x_t) = \frac{c}{0.14} \tag{4}$$
The procedure for finding autocovariances is best explained in terms of a generic AR(2) process:
$$x_t = \varphi_1 x_{t-1} + \varphi_2 x_{t-2} + c + \varepsilon_t \tag{5}$$
First observe that
$$c = \frac{1 - \varphi_1 - \varphi_2}{1 - \varphi_1 - \varphi_2}\,c = \frac{1}{1 - \varphi_1 - \varphi_2}\,c - \frac{\varphi_1}{1 - \varphi_1 - \varphi_2}\,c - \frac{\varphi_2}{1 - \varphi_1 - \varphi_2}\,c \tag{6}$$

Let $\mathrm{E}(x_t) = \mu$; then $c = \mu - \varphi_1\mu - \varphi_2\mu$, so we can rewrite the process above as
$$(x_t - \mu) = \varphi_1(x_{t-1} - \mu) + \varphi_2(x_{t-2} - \mu) + \varepsilon_t \tag{7}$$

Defining the zero-mean process $Y_t = x_t - \mu$, we obtain
$$Y_t = \varphi_1 Y_{t-1} + \varphi_2 Y_{t-2} + \varepsilon_t \tag{8}$$


Thus, WLOG we can work with mean-zero processes. $Y$ has the same stationarity/causality/invertibility properties as $x_t$ and the same ACF.

Since $Y$ is mean zero, $\mathrm{Cov}(Y_{t+h}, Y_t) = \mathrm{E}(Y_{t+h} Y_t)$. Let $h \ge 0$, multiply the process by $Y_{t-h}$, and take expectations to obtain
$$\mathrm{E}(Y_{t-h} Y_t) = \varphi_1 \mathrm{E}(Y_{t-1} Y_{t-h}) + \varphi_2 \mathrm{E}(Y_{t-2} Y_{t-h}) + \mathrm{E}(Y_{t-h}\varepsilon_t) \tag{9}$$

This is just
$$\gamma_Y(h) = \varphi_1 \gamma_Y(h-1) + \varphi_2 \gamma_Y(h-2) \tag{10}$$
for $h \neq 0$. Note that we can divide this by $\gamma_Y(0)$ and obtain the same difference equation in autocorrelations:
$$\rho_Y(h) = \varphi_1 \rho_Y(h-1) + \varphi_2 \rho_Y(h-2) \tag{11}$$
Now, if we set $h = 1$, we have
$$\rho_Y(1) = \varphi_1 \rho_Y(0) + \varphi_2 \rho_Y(1) \tag{12}$$
Since $\rho_Y(0) = 1$,
$$\rho_Y(1) = \frac{\varphi_1}{1 - \varphi_2} \tag{13}$$
Given this, we can recursively find $\rho_Y(h)$ for any horizon. For instance,
$$\rho_Y(2) = \varphi_1 \rho_Y(1) + \varphi_2 \rho_Y(0) = \varphi_1\frac{\varphi_1}{1 - \varphi_2} + \varphi_2 \cdot 1 = \frac{\varphi_1^2 + \varphi_2(1 - \varphi_2)}{1 - \varphi_2} \tag{14}$$

and so on. Last, we need $\gamma_Y(0)$ to unravel the autocovariances from what we have. For that, set $h = 0$ and consider¹
$$\gamma_Y(0) = \varphi_1\gamma_Y(1) + \varphi_2\gamma_Y(2) + \sigma^2 = \varphi_1\rho_Y(1)\gamma_Y(0) + \varphi_2\rho_Y(2)\gamma_Y(0) + \sigma^2 \tag{15}$$

and so we get
$$\gamma_Y(0) = \frac{\sigma^2}{1 - \varphi_1\rho_Y(1) - \varphi_2\rho_Y(2)} = \frac{1 - \varphi_2}{(1 + \varphi_2)\left((1 - \varphi_2)^2 - \varphi_1^2\right)}\,\sigma^2 \tag{16}$$

Now we can just get all autocovariances from the autocorrelations. If we want to find them recursively from their own equation, we also need $\gamma_Y(1)$. This follows from noting that at $h = 1$:
$$\gamma_Y(1) = \varphi_1\gamma_Y(0) + \varphi_2\gamma_Y(1) \tag{17}$$
or
$$\gamma_Y(1) = \frac{\varphi_1}{1 - \varphi_2}\,\gamma_Y(0) = \frac{\varphi_1}{(1 + \varphi_2)\left((1 - \varphi_2)^2 - \varphi_1^2\right)}\,\sigma^2 \tag{18}$$
Equation (3.3.14) in Brockwell and Davis gives the general form. Substituting $\varphi_1 = 1.1$ and $\varphi_2 = -0.24$ gives the desired answer.
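As a numerical sketch (assuming $\sigma^2 = 1$), we can run the recursion (11), started from (13), together with the closed form (16) for the specific coefficients:

```python
import numpy as np

phi1, phi2, sigma2 = 1.1, -0.24, 1.0

# autocorrelations via the difference equation (11), started from (13)
rho = np.empty(11)
rho[0] = 1.0
rho[1] = phi1 / (1 - phi2)
for h in range(2, 11):
    rho[h] = phi1 * rho[h - 1] + phi2 * rho[h - 2]

# variance from the closed form (16); autocovariances are gamma(h) = rho(h) * gamma(0)
gamma0 = sigma2 * (1 - phi2) / ((1 + phi2) * ((1 - phi2) ** 2 - phi1 ** 2))
gamma = gamma0 * rho

print(rho[:3])   # [1.0, 0.887..., 0.735...]
print(gamma0)    # about 4.98
```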
¹ Remember that for $h = 0$ we get an $\mathrm{E}(Y_t\varepsilon_t) = \sigma^2$ term, which is not present otherwise.


b Yes, the process is already written in the inverted form (compare (3.1.18) in Brockwell and Davis).

c, d Yes, the AR polynomial has no roots on the unit circle. In particular, it also has no roots inside the unit circle, so the process is causal and the inverse of $\varphi$ involves only nonnegative powers (theorem 3.1.1 in Brockwell and Davis).
Write
$$(1 - 1.1z + 0.24z^2) = \left(1 - \frac{4}{5}z\right)\left(1 - \frac{3}{10}z\right) \tag{19}$$
and invert both sides to obtain
$$(1 - 1.1z + 0.24z^2)^{-1} = \left(\sum_{j=0}^{\infty}\left(\frac{4}{5}\right)^j z^j\right)\left(\sum_{k=0}^{\infty}\left(\frac{3}{10}\right)^k z^k\right) \tag{20}$$

Hence
$$Y_t = \left(\sum_{j=0}^{\infty}\left(\frac{4}{5}\right)^j L^j\right)\left(\sum_{k=0}^{\infty}\left(\frac{3}{10}\right)^k L^k\right)\varepsilon_t = \varepsilon_t + 1.1\,\varepsilon_{t-1} + \ldots \tag{21}$$

and
$$x_t = \mu + \left(\sum_{j=0}^{\infty}\left(\frac{4}{5}\right)^j L^j\right)\left(\sum_{k=0}^{\infty}\left(\frac{3}{10}\right)^k L^k\right)\varepsilon_t \tag{22}$$
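The product of the two geometric lag polynomials can be expanded numerically; a minimal sketch (truncating the infinite sums) of the resulting $\psi$ weights:

```python
import numpy as np

n = 50
j = np.arange(n)
g1 = (4 / 5) ** j    # coefficients of (1 - (4/5)L)^{-1}
g2 = (3 / 10) ** j   # coefficients of (1 - (3/10)L)^{-1}

# psi weights of the product in (21)-(22): a discrete convolution
psi = np.convolve(g1, g2)[:n]
print(psi[:3])  # [1.0, 1.1, 0.97]
```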

e The LR variance of an AR(2) is given by $\sum_{h=-\infty}^{\infty}\gamma(h)$, which in principle can be obtained from the autocovariances computed in (a). Alternatively, we can use the causal representation $y_t = \psi(L)\varepsilon_t$. Then
$$\sum_{h=-\infty}^{\infty}\gamma(h) = \sigma^2\sum_{h=-\infty}^{\infty}\sum_{j=-\infty}^{\infty}\psi_j\psi_{j+h} = \sigma^2\left[\sum_{j=-\infty}^{\infty}\psi_j\right]^2 = \sigma^2\psi^2(1) \tag{23}$$
In this case $\psi(z) = \varphi(z)^{-1}$, so
$$\sigma^2\psi^2(1) = \frac{\sigma^2}{(1 - 1.1 + 0.24)^2} \approx 51.02\,\sigma^2 \tag{24}$$
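A quick numerical check of (24) using the truncated $\psi$ weights from the sketch above:

```python
import numpy as np

n = 200
psi = np.convolve((4 / 5) ** np.arange(n), (3 / 10) ** np.arange(n))[:n]
print(psi.sum() ** 2)              # about 51.02
print(1 / (1 - 1.1 + 0.24) ** 2)   # 51.0204..., matching (24)
```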

2 Problem 2
a The process is covariance stationary by proposition 3.1.2 in Brockwell and Davis. Alternatively, we can check covariance stationarity by definition.
First, the mean:
$$\mathrm{E}(x_t) = c \tag{25}$$
which doesn't depend on $t$. Define $Y_t = x_t - c$; then the equation becomes
$$Y_t = \varepsilon_t + \theta_1\varepsilon_{t-1} + \theta_2\varepsilon_{t-2} \tag{26}$$


The ACF:
$$\gamma_Y(t + h, t) = \mathrm{Cov}\left(\varepsilon_{t+h} + \theta_1\varepsilon_{t+h-1} + \theta_2\varepsilon_{t+h-2},\ \varepsilon_t + \theta_1\varepsilon_{t-1} + \theta_2\varepsilon_{t-2}\right) \tag{27}$$
When the indices differ by more than 2, the covariance is zero. In fact, we can simply write:
$$\gamma_x(h) = \begin{cases} \sigma^2(1 + \theta_1^2 + \theta_2^2) = 1 + (-0.8)^2 + 0.15^2, & h = 0 \\ \sigma^2(\theta_1 + \theta_1\theta_2) = -0.8 - 0.15 \times 0.8, & h = \pm 1 \\ \sigma^2\theta_2 = 0.15, & h = \pm 2 \\ 0, & |h| > 2 \end{cases} \tag{28}$$

This also doesn’t depend on 𝑡, so 𝑥 is covariance stationary.
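A simulation sketch (assuming Gaussian shocks, which the argument does not require) confirming (28):

```python
import numpy as np

rng = np.random.default_rng(0)
theta1, theta2 = -0.8, 0.15

eps = rng.standard_normal(1_000_000)
x = eps[2:] + theta1 * eps[1:-1] + theta2 * eps[:-2]  # MA(2) with mean removed

# empirical autocovariances at lags 0..3
for h in range(4):
    print(h, round(np.mean(x[h:] * x[: len(x) - h]), 3))
# theory from (28): 1.6625, -0.92, 0.15, 0.0
```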

b Examine the MA polynomial:

$$\theta(z) = 1 - 0.8z + 0.15z^2 \tag{29}$$

Its roots are 2 and 10/3, both outside the unit circle. Hence by theorem 3.1.2 in Brockwell
and Davis the process is invertible.

c The LR variance is obtained by summing $\gamma_x(h)$. Alternatively, since $x_t$ is already written in causal form ($\psi = \theta$),
$$LRV = \sigma^2\psi^2(1) = (1 - 0.8 + 0.15)^2 = 0.1225 \tag{30}$$

d Many possible conditions. Easiest one: strengthen $\varepsilon_t$ to be strictly stationary. Then $x_t$ inherits strict stationarity. The long-run variance of $x_t$ is finite, so theorem 5 of slide 68 applies. Alternatively, with strict stationarity we can apply a CLT for $m$-dependent processes, since $x_t$ exhibits limited dependence².

3 Problem 3
a First, note that $x_t$ is covariance stationary. We need to examine the ACF of an AR(1). To find it, multiply
$$x_t = \varphi x_{t-1} + \varepsilon_t \tag{31}$$
by $x_{t-h}$, $h \ge 1$, and take expectations to obtain
$$\gamma(h) = \varphi\gamma(h-1) \tag{32}$$
leading to
$$\gamma(h) = \varphi^{|h|}\gamma(0) \tag{33}$$
² Intuitively, $x_t$ is $m$-dependent: $x_t$ and $x_{t+m+h}$ are independent for all $h \ge 1$. More formally, all finite-dimensional distributions separated by more than $m + 1$ periods must be independent.


In addition, multiply the defining equation by $x_t$ and take expectations to get $\gamma(0) = \varphi\gamma(1) + \sigma^2 = \varphi^2\gamma(0) + \sigma^2$, so
$$\gamma(0) = \frac{\sigma^2}{1 - \varphi^2} \tag{34}$$
Since $|\varphi| < 1$, $\gamma(h) \to 0$ as $h \to \infty$ and the process is ergodic for the mean (theorem 7.1.1 in Brockwell and Davis).
Similarly, since $|\varphi| < 1$, $\sum_{h=-\infty}^{\infty}|\gamma(h)| < \infty$ and the process is ergodic for the second moments (theorem 7.1.2).
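A simulation sketch (with a hypothetical $\varphi = 0.7$ and Gaussian shocks) illustrating (33) and (34):

```python
import numpy as np

rng = np.random.default_rng(2)
phi, T = 0.7, 1_000_000

eps = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t]

gamma0 = 1 / (1 - phi ** 2)  # eq. (34) with sigma^2 = 1
for h in range(3):
    # empirical vs. theoretical autocovariance at lag h
    print(h, round(np.mean(y[h:] * y[: T - h]), 3), round(phi ** h * gamma0, 3))
```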

b $\mathrm{E}(x_t) = 0$ for all $t$; however,
$$\mathrm{Var}(x_t) = \sigma^2 t \tag{35}$$
which is a function of $t$, so $x_t$ cannot be stationary (or $\sigma^2 = \infty$ and the process doesn't have second moments).

4 Problem 4
It's convenient to write the OLS estimator in sampling-error form:
$$\hat{\varphi} = \varphi + \frac{T^{-1}\sum_{t=2}^{T} y_{t-1}\varepsilon_t}{T^{-1}\sum_{t=2}^{T} y_{t-1}^2} \tag{36}$$

a.i In this case $y_{t-1}$ is independent from $\varepsilon_t$, since $y_{t-1} = \sum_{j=0}^{\infty}\varphi^j\varepsilon_{t-1-j}$. Then we have $\mathrm{E}(y_{t-1}\varepsilon_t) = 0$. In addition, $(\varepsilon_t, y_{t-1})$ is strictly stationary and ergodic (for example, see theorems 7.1.1 and 7.1.3 in Durrett's Probability). $y_t$ also possesses a finite second moment, so by the ergodic theorem + CMT, as $T \to \infty$ the second term tends to 0 in probability, and $\hat{\varphi}$ is consistent.
Observe that $\mathrm{E}(\varepsilon_t|y_{t-1}) = 0$ by model specification. This implies that $\mathrm{E}(\varepsilon_t y_{t-1}|y_{t-2}, \varepsilon_{t-1}, \ldots) = 0$, and so $y_{t-1}\varepsilon_t$ forms a strictly stationary MDS with finite second moments. The MDS CLT applies to the numerator; combining it with the ergodic theorem for the denominator and Slutsky's theorem,
$$\sqrt{T}(\hat{\varphi} - \varphi) \Rightarrow N\left(0,\ \mathrm{E}(y_{t-1}^2)^{-2}\,\mathrm{E}(\varepsilon_t^2 y_{t-1}^2)\right) \tag{37}$$
This second moment can be explicitly computed from the causal representation of $y$.
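A Monte Carlo sketch (with a hypothetical $\varphi = 0.5$ and iid standard normal errors) of the consistency and the $\sqrt{T}$ asymptotics; under independence the asymptotic variance in (37) reduces to $1 - \varphi^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, T, R = 0.5, 2000, 500

estimates = np.empty(R)
for r in range(R):
    eps = rng.standard_normal(T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + eps[t]
    # OLS of y_t on y_{t-1}, as in (36)
    estimates[r] = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

print(estimates.mean())              # close to 0.5
print(np.sqrt(T) * estimates.std())  # close to sqrt(1 - 0.25) ~ 0.866
```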

a.ii We immediately obtain $\mathrm{E}(y_{t-1}\varepsilon_t) = \mathrm{E}(y_{t-1}(\delta\varepsilon_{t-1} + v_t)) = \delta\sigma_\varepsilon^2 + \cdots \neq 0$. $\varepsilon_t$ is still strictly stationary, which implies that $y_{t-1}$ and $\varepsilon_t$ are jointly so. The ergodic theorem applies and shows that $\hat{\varphi}$ is inconsistent.

b Apply a Cochrane-Orcutt type transformation by computing $y_t - \delta y_{t-1}$:
$$y_t - \delta y_{t-1} = \varphi y_{t-1} - \varphi\delta y_{t-2} + v_t \tag{38}$$
or
$$y_t = (\varphi + \delta)y_{t-1} - \varphi\delta y_{t-2} + v_t \tag{39}$$
So the process is observationally equivalent to an AR(2) with iid errors and the coefficients given above. Orthogonality is restored in this case, and OLS is consistent for $\varphi + \delta$ and $-\varphi\delta$.
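A simulation sketch (with hypothetical values $\varphi = 0.5$, $\delta = 0.3$ and Gaussian $v_t$) of the AR(2) regression in (39) recovering $\varphi + \delta$ and $-\varphi\delta$:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, delta, T = 0.5, 0.3, 200_000

v = rng.standard_normal(T)
eps = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    eps[t] = delta * eps[t - 1] + v[t]   # AR(1) error
    y[t] = phi * y[t - 1] + eps[t]

# OLS of y_t on (y_{t-1}, y_{t-2}), as in (39)
X = np.column_stack([y[1:-1], y[:-2]])
b = np.linalg.lstsq(X, y[2:], rcond=None)[0]
print(b)  # approx [phi + delta, -phi * delta] = [0.8, -0.15]
```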


However, if you wish to estimate $(\varphi, \delta)$, this requires an additional identification condition. For example, observe that $\varphi = 0.5, \delta = -0.5$ and $\varphi = -0.5, \delta = 0.5$ correspond to the same $\varphi + \delta = 0$ and $\varphi\delta = -0.25$. Making an assumption on the sign of one coefficient is enough to resolve this.

c Yes, assuming joint stationarity and ergodicity.

5 Problem 5
a First arrow: $\mathrm{E}(\varepsilon_t|\varepsilon_{t-1}, \ldots) = \mathrm{E}(\varepsilon_t) = 0$ by independence. Second arrow: $\mathrm{E}(\varepsilon_t\varepsilon_{t-h}) = \mathrm{E}(\mathrm{E}(\varepsilon_t\varepsilon_{t-h}|\varepsilon_{t-1}, \varepsilon_{t-2}, \ldots)) = \mathrm{E}(\varepsilon_{t-h}\,\mathrm{E}(\varepsilon_t|\varepsilon_{t-1}, \varepsilon_{t-2}, \ldots)) = 0$ for $h \ge 1$.

b We give simple examples with two random variables; more sophisticated time series examples are also possible. For the first arrow, let $(X_1, X_2)$ take values in $\Omega = \{(0, 0), (1, -1), (1, 1)\}$ with equal probabilities. Then $\mathrm{E}(X_2) = 0$, and $\mathrm{E}(X_2|X_1) = \frac{1}{2}(1 - 1)\,\mathbb{I}_{X_1 = 1} + 0\cdot\mathbb{I}_{X_1 = 0} = 0$. However, the two variables are not independent.
For the second arrow, let $(X_1, X_2)$ take the following values: $P((-1, 1)) = P((1, 1)) = 1/4$ and $P((0, -1)) = 1/2$. Then $\mathrm{E}(X_2 X_1) = 0 = \mathrm{E}(X_2)\,\mathrm{E}(X_1)$, since $\mathrm{E}(X_1) = 0$. However, $\mathrm{E}(X_2|X_1 = -1) = 1 \neq 0$.
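A tiny enumeration check of the second example (using the support points as reconstructed above):

```python
# support points (x1, x2) with their probabilities
pts = {(-1, 1): 0.25, (1, 1): 0.25, (0, -1): 0.5}

E_x1 = sum(p * x1 for (x1, _), p in pts.items())
E_x1x2 = sum(p * x1 * x2 for (x1, x2), p in pts.items())
print(E_x1, E_x1x2)  # 0.0 0.0 -> uncorrelated, since E(X1 X2) = E(X1) E(X2) = 0

# conditional mean E(X2 | X1 = -1)
num = sum(p * x2 for (x1, x2), p in pts.items() if x1 == -1)
den = sum(p for (x1, _), p in pts.items() if x1 == -1)
print(num / den)  # 1.0 != 0 -> not a martingale difference
```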

6 Problem 6
a $\sum|\psi_i| < \infty$ implies that, starting from some $i_0$, for all $i \ge i_0$ we have $|\psi_i| \le 1$. Then for all $i \ge i_0$, $\psi_i^2 \le |\psi_i|$, hence
$$\sum_{i=i_0}^{\infty}\psi_i^2 \le \sum_{i=i_0}^{\infty}|\psi_i| < \infty \tag{40}$$
Adding the finitely many terms up to $i_0$ gives the result.

b We know
$$\gamma_Y(h) = \sigma^2\sum_{j=-\infty}^{\infty}\psi_j\psi_{j+h} \tag{41}$$
So let's sum across $h$:
$$\sum_{h=-\infty}^{\infty}|\gamma_Y(h)| = \sum_{h=-\infty}^{\infty}\left|\sigma^2\sum_{j=-\infty}^{\infty}\psi_j\psi_{j+h}\right| \le \sigma^2\sum_{j=-\infty}^{\infty}\sum_{h=-\infty}^{\infty}|\psi_j\psi_{j+h}| \tag{42}$$
$$= \sigma^2\sum_{j=-\infty}^{\infty}|\psi_j|\sum_{h=-\infty}^{\infty}|\psi_{j+h}| < \infty \tag{43}$$
