Problem Set 2-1
Vladislav Morozov
1 Problem 1
Process:
x_t - 1.1 x_{t-1} + 0.24 x_{t-2} = c + \varepsilon_t   (1)
a The simplest way to solve this is to check stationarity via a condition for ARMA models.
Examine the AR polynomial:

\varphi(z) = 1 - 1.1 z + 0.24 z^2   (2)
This has two roots 𝑧 = 5/4 and 𝑧 = 10/3, both larger than 1 in absolute value. By, say,
theorems 3.1.1+3.1.3 in Brockwell and Davis there exists a unique stationary solution to (1).
Moreover, x_t can be written as a sum of present and past shocks only: x_t = \mu + \sum_{i=0}^{\infty} \psi_i \varepsilon_{t-i} (in such a case we say x_t is causal).
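As a quick numerical sanity check (a minimal sketch with numpy; not part of the original solution), the roots can be confirmed directly:

```python
import numpy as np

# AR polynomial phi(z) = 1 - 1.1 z + 0.24 z^2;
# numpy.roots takes coefficients from the highest power down.
roots = np.roots([0.24, -1.1, 1.0])
print(roots)                       # 10/3 and 5/4
print(np.all(np.abs(roots) > 1))   # True: unique stationary, causal solution
```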
Now that we know that the process is stationary, we can compute the mean and the
autocovariances easily. Start with the mean: taking expectations of (1) and using stationarity,

(1 - 1.1 + 0.24) E(x_t) = c   (3)

This produces

E(x_t) = \frac{c}{0.14}   (4)
The procedure for finding autocovariances is best explained in terms of a generic AR(2) process:
x_t = \varphi_1 x_{t-1} + \varphi_2 x_{t-2} + c + \varepsilon_t   (5)
First observe that
c = \frac{1 - \varphi_1 - \varphi_2}{1 - \varphi_1 - \varphi_2} c = \frac{1}{1 - \varphi_1 - \varphi_2} c - \frac{\varphi_1}{1 - \varphi_1 - \varphi_2} c - \frac{\varphi_2}{1 - \varphi_1 - \varphi_2} c   (6)
Let E(x_t) = \mu; then c = \mu - \varphi_1 \mu - \varphi_2 \mu, so we can rewrite the above process as

x_t - \mu = \varphi_1 (x_{t-1} - \mu) + \varphi_2 (x_{t-2} - \mu) + \varepsilon_t   (7)

Defining Y_t = x_t - \mu,

Y_t = \varphi_1 Y_{t-1} + \varphi_2 Y_{t-2} + \varepsilon_t   (8)

Thus, WLOG we can work with mean-zero processes. Y_t has the same stationarity/causality/invertibility properties as x_t and the same ACF.
Since Y_t is mean zero, Cov(Y_{t+h}, Y_t) = E(Y_{t+h} Y_t). Let h \geq 0, multiply the process by Y_{t-h}, and take expectations to obtain

E(Y_t Y_{t-h}) = \varphi_1 E(Y_{t-1} Y_{t-h}) + \varphi_2 E(Y_{t-2} Y_{t-h}) + E(\varepsilon_t Y_{t-h})   (9)

This is just

\gamma_Y(h) = \varphi_1 \gamma_Y(h-1) + \varphi_2 \gamma_Y(h-2)   (10)
for h \neq 0. Note that we can divide this by \gamma_Y(0) and obtain the same difference equation in autocorrelations:

\rho_Y(h) = \varphi_1 \rho_Y(h-1) + \varphi_2 \rho_Y(h-2)   (11)
Now, if we set h = 1 and use \rho_Y(-1) = \rho_Y(1), we have

\rho_Y(1) = \varphi_1 \rho_Y(0) + \varphi_2 \rho_Y(1)   (12)

Since \rho_Y(0) = 1,

\rho_Y(1) = \frac{\varphi_1}{1 - \varphi_2}   (13)
Given this, we can recursively find 𝜌𝑌 (ℎ) for any horizon. For instance,
\rho_Y(2) = \varphi_1 \rho_Y(1) + \varphi_2 \rho_Y(0) = \varphi_1 \frac{\varphi_1}{1 - \varphi_2} + \varphi_2 = \frac{\varphi_1^2 + \varphi_2 (1 - \varphi_2)}{1 - \varphi_2}   (14)
and so on. Last, we need \gamma_Y(0) to unravel the autocovariances from what we have. For that, set h = 0 and consider¹

\gamma_Y(0) = \varphi_1 \gamma_Y(1) + \varphi_2 \gamma_Y(2) + \sigma^2   (15)

and so we get

\gamma_Y(0) = \frac{\sigma^2}{1 - \varphi_1 \rho_Y(1) - \varphi_2 \rho_Y(2)} = \frac{(1 - \varphi_2)\sigma^2}{(1 + \varphi_2)\left((1 - \varphi_2)^2 - \varphi_1^2\right)}   (16)
Now we can get all autocovariances from the autocorrelations. If we want to find them recursively from their own equation, we also need \gamma_Y(1). This follows by noting that at h = 1:
\gamma_Y(1) = \varphi_1 \gamma_Y(0) + \varphi_2 \gamma_Y(1)   (17)
or
\gamma_Y(1) = \frac{\varphi_1}{1 - \varphi_2} \gamma_Y(0) = \frac{\varphi_1}{(1 + \varphi_2)\left((1 - \varphi_2)^2 - \varphi_1^2\right)} \sigma^2   (18)
(3.3.14) in Brockwell and Davis gives the general form. Substituting 𝜑1 = 1.1 and 𝜑2 = −0.24
gives the desired answer.
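These closed forms are easy to verify numerically. A minimal sketch, assuming \sigma^2 = 1 and using statsmodels' arma_acovf (my choice of tool, not referenced in the original):

```python
import numpy as np
from statsmodels.tsa.arima_process import arma_acovf

phi1, phi2, sigma2 = 1.1, -0.24, 1.0
denom = (1 + phi2) * ((1 - phi2) ** 2 - phi1 ** 2)

# Closed forms (16) and (18)
gamma0 = (1 - phi2) * sigma2 / denom
gamma1 = phi1 * sigma2 / denom

# arma_acovf uses the lag-polynomial convention ar = [1, -phi1, -phi2]
acovf = arma_acovf(ar=[1, -phi1, -phi2], ma=[1], nobs=5, sigma2=sigma2)
print(np.allclose(acovf[:2], [gamma0, gamma1]))  # True
# Higher lags follow the recursion gamma(h) = phi1 gamma(h-1) + phi2 gamma(h-2)
```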
¹ Remember that we now get an E(Y_t \varepsilon_t) = \sigma^2 term, which is not present otherwise.
b Yes, the process is already written in the inverted form (compare (3.1.18) in Brockwell and Davis).
c, d Yes, the AR polynomial has no roots on the unit circle. In particular, it has no roots inside the unit circle either, so the process is causal and the inverse of \varphi involves only nonnegative powers of the lag operator (Theorem 3.1.1 in Brockwell and Davis).
Write

(1 - 1.1 z + 0.24 z^2) = \left(1 - \frac{4}{5} z\right)\left(1 - \frac{3}{10} z\right)   (19)
and invert both sides to obtain

(1 - 1.1 z + 0.24 z^2)^{-1} = \left(\sum_{j=0}^{\infty} \left(\frac{4}{5}\right)^j z^j\right) \left(\sum_{k=0}^{\infty} \left(\frac{3}{10}\right)^k z^k\right)   (20)
Hence

Y_t = \left(\sum_{j=0}^{\infty} \left(\frac{4}{5}\right)^j L^j\right) \left(\sum_{k=0}^{\infty} \left(\frac{3}{10}\right)^k L^k\right) \varepsilon_t = \varepsilon_t + 1.1 \varepsilon_{t-1} + \dots   (21)
For the sum of all autocovariances, recall that

\gamma(h) = \sigma^2 \sum_{j=-\infty}^{\infty} \psi_j \psi_{j+h}   (22)

Summing over h,

\sum_{h=-\infty}^{\infty} \gamma(h) = \sigma^2 \sum_{h=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} \psi_j \psi_{j+h} = \sigma^2 \left[\sum_{j=-\infty}^{\infty} \psi_j\right]^2 = \sigma^2 \psi^2(1)   (23)

Evaluating at the coefficients of (1),

\sigma^2 \psi^2(1) = \frac{\sigma^2}{(1 - 1.1 + 0.24)^2} \approx 51.02 \sigma^2   (24)
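A quick check of the \psi-weights and of (24) (a sketch assuming \sigma^2 = 1; arma2ma is statsmodels' utility for expanding the MA(\infty) representation, my choice rather than the original's):

```python
import numpy as np
from statsmodels.tsa.arima_process import arma2ma

# psi-weights of (1 - 1.1 L + 0.24 L^2)^{-1}
psi = arma2ma(ar=[1, -1.1, 0.24], ma=[1], lags=200)
print(psi[:3])           # [1.0, 1.1, 0.97]: psi_1 = 4/5 + 3/10 = 1.1
print(np.sum(psi) ** 2)  # ~51.02 = 1 / (1 - 1.1 + 0.24)^2
```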
2 Problem 2
a The process is covariance stationary by proposition 3.1.2 in Brockwell and Davis. Alternatively, we can check covariance stationarity by definition.
First, the mean:

E(x_t) = c   (25)

which doesn't depend on t. Define Y_t = x_t - c; then the equation becomes

Y_t = \varepsilon_t - 0.8 \varepsilon_{t-1} + 0.15 \varepsilon_{t-2}   (26)
The ACF can be computed directly:

E(Y_{t+h} Y_t) = E\left[(\varepsilon_{t+h} + \theta_1 \varepsilon_{t+h-1} + \theta_2 \varepsilon_{t+h-2})(\varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2})\right]   (27)

Whenever the indices of two \varepsilon's differ by more than 0, the expectation of their product is zero. In fact, we can simply write:

\gamma_x(h) = \begin{cases} \sigma^2 (1 + \theta_1^2 + \theta_2^2) = \sigma^2 \left(1 + (-0.8)^2 + 0.15^2\right), & h = 0 \\ \sigma^2 (\theta_1 + \theta_1 \theta_2) = \sigma^2 (-0.8 - 0.15 \times 0.8), & h = \pm 1 \\ \sigma^2 \theta_2 = 0.15\, \sigma^2, & h = \pm 2 \\ 0, & |h| > 2 \end{cases}   (28)
For invertibility, examine the MA polynomial \theta(z) = 1 - 0.8 z + 0.15 z^2 = (1 - z/2)(1 - 3z/10). Its roots are 2 and 10/3, both outside the unit circle. Hence by theorem 3.1.2 in Brockwell and Davis the process is invertible.
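Both the autocovariances in (28) and the invertibility check are easy to confirm numerically (a sketch assuming \sigma^2 = 1):

```python
import numpy as np
from statsmodels.tsa.arima_process import arma_acovf

theta1, theta2 = -0.8, 0.15

# MA(2) autocovariances; ma holds the MA polynomial coefficients
acovf = arma_acovf(ar=[1], ma=[1, theta1, theta2], nobs=4)
print(acovf)  # [1.6625, -0.92, 0.15, 0.0], matching (28)

# Invertibility: roots of 1 - 0.8 z + 0.15 z^2
print(np.roots([theta2, theta1, 1.0]))  # 10/3 and 2, both outside the unit circle
```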
3 Problem 3
a First, note that 𝑥𝑡 is covariance stationary. We need to examine the ACF of an AR(1).
To find it, multiply
x_t = \varphi x_{t-1} + \varepsilon_t   (31)
by x_{t-h}, h \geq 1, and take expectations to obtain

\gamma(h) = \varphi \gamma(h-1)   (32)

leading to

\gamma(h) = \varphi^{|h|} \gamma(0)   (33)
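A simulation check of (33) (a minimal sketch; the value \varphi = 0.5 is an illustrative choice, not from the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
phi, T = 0.5, 200_000

# Simulate x_t = phi x_{t-1} + eps_t with iid standard normal shocks
x = np.zeros(T)
eps = rng.standard_normal(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + eps[t]

gamma0 = x.var()
for h in (1, 2, 3):
    empirical = np.cov(x[h:], x[:-h])[0, 1]
    print(h, round(empirical, 3), round(phi ** h * gamma0, 3))  # columns agree
```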
² Intuitively, x_t is m-dependent: x_t and x_{t+m+h} are independent for all h \geq 1. More formally, finite-dimensional distributions separated by at least m + 1 periods must be independent.
4 Problem 4
It’s convenient to write the OLS estimator in sampling error form:
\hat{\varphi} = \varphi + \frac{T^{-1} \sum_{t=2}^{T} y_{t-1} \varepsilon_t}{T^{-1} \sum_{t=2}^{T} y_{t-1}^2}   (36)
With serially correlated errors, E(y_{t-1} \varepsilon_t) \neq 0, so OLS is inconsistent for \varphi. Suppose the error follows an AR(1), \varepsilon_t = \delta \varepsilon_{t-1} + v_t with v_t iid. Applying (1 - \delta L) to both sides of the model and rearranging gives
y_t = (\varphi + \delta) y_{t-1} - \varphi \delta y_{t-2} + v_t   (39)
So the process is observationally equivalent to an AR(2) with iid errors and coefficients given
above. Orthogonality is restored in this case and OLS is consistent for 𝜑 + 𝛿 and −𝜑𝛿.
However, if you wish to estimate (\varphi, \delta) separately, an additional identification condition is required. For example, observe that (\varphi, \delta) = (0.5, -0.5) and (\varphi, \delta) = (-0.5, 0.5) correspond to the same \varphi + \delta = 0 and \varphi\delta = -0.25. Making an assumption on the sign of one coefficient is enough to resolve this.
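The consistency claim is easy to illustrate by simulation. A sketch, where the data-generating process y_t = \varphi y_{t-1} + u_t with AR(1) errors u_t = \delta u_{t-1} + v_t is my reading of the problem, and \varphi = 0.6, \delta = 0.3 are illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, delta, T = 0.6, 0.3, 500_000

# y_t = phi y_{t-1} + u_t,  u_t = delta u_{t-1} + v_t,  v_t iid
v = rng.standard_normal(T)
u = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    u[t] = delta * u[t - 1] + v[t]
    y[t] = phi * y[t - 1] + u[t]

# OLS of y_t on (y_{t-1}, y_{t-2}): slopes converge to (phi + delta, -phi delta)
X = np.column_stack([y[1:-1], y[:-2]])
print(np.linalg.lstsq(X, y[2:], rcond=None)[0])  # approx [0.9, -0.18]
```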
5 Problem 5
a First arrow: E(\varepsilon_t | \varepsilon_{t-1}, \dots) = E(\varepsilon_t) = 0 by independence. Second arrow: for h \geq 1, E(\varepsilon_t \varepsilon_{t-h}) = E(E(\varepsilon_t \varepsilon_{t-h} | \varepsilon_{t-1}, \varepsilon_{t-2}, \dots)) = E(\varepsilon_{t-h} E(\varepsilon_t | \varepsilon_{t-1}, \varepsilon_{t-2}, \dots)) = 0.
b We give simple bivariate examples; more sophisticated time series examples are also possible. For the first arrow, let (X_1, X_2) take values in \Omega = \{(0, 0), (1, -1), (1, 1)\} with equal probabilities. Then E(X_2) = 0, and E(X_2 | X_1) = \frac{1}{2}(1 - 1) I_{X_1 = 1} + 0 \cdot I_{X_1 = 0} = 0. However, the two variables are not independent.
For the second arrow, let (X_1, X_2) take the following values: P((-1, 1)) = P((1, 1)) = 1/4 and P((0, -1)) = 1/2. Then E(X_1 X_2) = 0 = E(X_1) E(X_2), so the variables are uncorrelated. However, E(X_2 | X_1 = -1) = 1 \neq 0 = E(X_2).
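A tiny Monte Carlo check of the second example (a sketch, not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(0)
# Support {(-1, 1), (1, 1), (0, -1)} with probabilities (1/4, 1/4, 1/2)
support = np.array([[-1, 1], [1, 1], [0, -1]])
draws = support[rng.choice(3, size=1_000_000, p=[0.25, 0.25, 0.5])]
x1, x2 = draws[:, 0], draws[:, 1]

print(np.mean(x1 * x2), np.mean(x1) * np.mean(x2))  # both ~0: uncorrelated
print(x2[x1 == -1].mean())                          # 1.0: E(X2 | X1 = -1) != 0
```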
6 Problem 6
a \sum_i |\psi_i| < \infty implies |\psi_i| \to 0, so there is some i_0 with |\psi_i| \leq 1 for all i \geq i_0. Then for all i \geq i_0, \psi_i^2 \leq |\psi_i|, hence

\sum_{i=i_0}^{\infty} \psi_i^2 \leq \sum_{i=i_0}^{\infty} |\psi_i| < \infty   (40)
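A one-line numerical illustration (the choice \psi_i = 0.9^i is mine):

```python
import numpy as np

# Absolutely summable psi implies square summable; here the squared tail is smaller
psi = 0.9 ** np.arange(500)
print(psi.sum(), (psi ** 2).sum())  # ~10.0 and ~5.26: both finite
```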
b We know

\gamma_Y(h) = \sigma^2 \sum_{j=-\infty}^{\infty} \psi_j \psi_{j+h}   (41)