STA457 Week 4 Notes
Lecture 6
Lijia Wang
Last Time:
1 Definitions of stationarity
2 Estimation of correlation
3 Large sample properties of sample statistics
Today:
1 Autoregressive (AR) process
2 Moving average (MA) process
3 Autoregressive moving average (ARMA)
Over the next few weeks, we will learn the following models:
1 Autoregressive (AR)
2 Moving average (MA)
3 Autoregressive moving average (ARMA)
4 Autoregressive integrated moving average (ARIMA)
Autoregressive models are based on the idea that the current value of the series, x_t, can be explained as a function of p past values, x_{t-1}, x_{t-2}, \dots, x_{t-p}, where p determines the number of steps into the past needed to forecast the current value.
For example,

x_t = x_{t-1} - 0.9 x_{t-2} + w_t,

where w_t \sim N(0, 1). This is an autoregressive model of order 2, written AR(2).
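To see what such a series looks like, here is a minimal simulation sketch (not from the lecture; the sample size and burn-in length are arbitrary choices):

```python
import numpy as np

# Simulate the AR(2) example: x_t = x_{t-1} - 0.9 x_{t-2} + w_t, w_t ~ N(0, 1)
rng = np.random.default_rng(457)
n, burn = 500, 100                  # sample size and burn-in (arbitrary choices)
w = rng.standard_normal(n + burn)   # Gaussian white noise
x = np.zeros(n + burn)
for t in range(2, n + burn):
    x[t] = x[t - 1] - 0.9 * x[t - 2] + w[t]
x = x[burn:]                        # drop burn-in so start-up values wash out
```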
More generally, when the mean \mu of x_t is nonzero, the AR(p) model is written

x_t = \alpha + \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + w_t,

where

\alpha = \mu (1 - \phi_1 - \phi_2 - \cdots - \phi_p).
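The relation between \alpha and \mu follows by taking expectations of both sides under stationarity (a one-line check, not spelled out on the slide):

\mu = E[x_t] = \alpha + (\phi_1 + \phi_2 + \cdots + \phi_p)\mu \implies \alpha = \mu(1 - \phi_1 - \phi_2 - \cdots - \phi_p).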
Define the backshift operator

B x_t = x_{t-1}

and extend it to powers

B^k x_t = x_{t-k}.

The AR(p) model can then be written concisely as \phi(B) x_t = w_t, where

\phi(B) = 1 - \phi_1 B - \phi_2 B^2 - \cdots - \phi_p B^p

is the autoregressive operator.
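For instance, applying this notation to the earlier AR(2) example, x_t = x_{t-1} - 0.9 x_{t-2} + w_t rearranges to

x_t - x_{t-1} + 0.9 x_{t-2} = w_t, \quad \text{i.e.} \quad (1 - B + 0.9 B^2)\, x_t = w_t,

so \phi_1 = 1 and \phi_2 = -0.9.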
For the AR(1) process with |\phi| < 1, the autocorrelation function is

\rho(h) = \phi^h, \quad h \ge 0,

and x_t is stationary.
For any AR(1) process x_t = \phi x_{t-1} + w_t with |\phi| < 1, one can show that

x_t = \sum_{j=0}^{\infty} \phi^j w_{t-j} = \psi(B) w_t.
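The slide states the result; the backward iteration behind it (a standard argument, filled in here) goes as follows:

x_t = \phi x_{t-1} + w_t = \phi^2 x_{t-2} + \phi w_{t-1} + w_t = \cdots = \phi^k x_{t-k} + \sum_{j=0}^{k-1} \phi^j w_{t-j},

and since |\phi| < 1, the term \phi^k x_{t-k} vanishes in mean square as k \to \infty, leaving the stated linear process with \psi_j = \phi^j.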
For the linear process, we may show that the autocovariance function is given by

\gamma_x(h) = \sigma_w^2 \sum_{j=-\infty}^{\infty} \psi_{j+h} \psi_j,

for h \ge 0.
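Applying this with \psi_j = \phi^j for the causal AR(1) (a worked step, not on the slide):

\gamma_x(h) = \sigma_w^2 \sum_{j=0}^{\infty} \phi^{j+h} \phi^j = \sigma_w^2 \phi^h \sum_{j=0}^{\infty} \phi^{2j} = \frac{\sigma_w^2 \phi^h}{1 - \phi^2}, \quad h \ge 0,

which also gives \rho(h) = \gamma_x(h)/\gamma_x(0) = \phi^h, consistent with the ACF stated earlier.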
Consider the random walk model x_t = x_{t-1} + w_t, where w_t \sim wn(0, \sigma_w^2).

1 Show that the autocovariance function is \gamma_x(s, t) = \min\{s, t\}\, \sigma_w^2 (see the sketch after this list).
2 Show that x_t is not stationary.
3 Consider the AR(1) process x_t = \phi x_{t-1} + w_t with |\phi| > 1. Such processes are called explosive because the values of the time series quickly become large in magnitude.
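A sketch for items 1 and 2 (a standard computation, included here for completeness): writing x_t = \sum_{j=1}^{t} w_j (taking x_0 = 0),

\gamma_x(s, t) = \mathrm{cov}(x_s, x_t) = \mathrm{cov}\Big(\sum_{j=1}^{s} w_j, \sum_{k=1}^{t} w_k\Big) = \min\{s, t\}\, \sigma_w^2.

In particular \mathrm{var}(x_t) = t\, \sigma_w^2 grows with t, so the autocovariance depends on more than the lag and x_t is not stationary.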
For the explosive AR(1) process x_t = \phi x_{t-1} + w_t with |\phi| > 1, we may re-write it as

x_t = \frac{1}{\phi} x_{t+1} - \frac{1}{\phi} w_{t+1}.
Because |\phi|^{-1} < 1, this result suggests the stationary future-dependent AR(1) model

x_t = -\sum_{j=1}^{\infty} \phi^{-j} w_{t+j}.
When a process does not depend on the future, such as the AR(1) when |\phi| < 1, we say the process is causal. In the explosive case of this example, the process is stationary, but it is also future-dependent, and therefore not causal.
The technique of iterating backward works well for AR(1), but not for larger p. A general technique is matching coefficients. Since we write the AR(p) in the form

\phi(B) x_t = w_t,

the stationary solution usually has the form

x_t = \sum_{j=0}^{\infty} \psi_j w_{t-j} = \psi(B) w_t,

therefore

\phi(B) \psi(B) w_t = w_t,

and matching the coefficients of \phi(B)\psi(B) = 1 yields the solution.
Since

\phi^{-1}(z) = \frac{1}{1 - \phi z} = 1 + \phi z + \phi^2 z^2 + \cdots + \phi^j z^j + \cdots, \quad |z| \le 1,

matching coefficients recovers \psi_j = \phi^j in the AR(1) case.
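statsmodels can do this coefficient matching numerically; a minimal sketch, using the arma2ma helper from statsmodels.tsa.arima_process (which expects the AR polynomial coefficients (1, -\phi_1, \dots) and MA coefficients (1, \theta_1, \dots)):

```python
import numpy as np
from statsmodels.tsa.arima_process import arma2ma

phi = 0.9
ar = np.array([1, -phi])   # phi(B) = 1 - 0.9 B
ma = np.array([1.0])       # no MA part

psi = arma2ma(ar, ma, lags=5)
print(psi)                 # [1.0, 0.9, 0.81, 0.729, 0.6561] = phi**j
```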
Moving average models are based on the idea that the current value of the series, x_t, is a moving average of q past steps of white noise, w_{t-1}, w_{t-2}, \dots, w_{t-q}, where q determines the number of steps into the past.
AR(p) model: \{x_t\} on the left-hand side of the defining equation are assumed to be combined linearly;
MA(q) model: \{w_t\} on the right-hand side of the defining equation are combined linearly.
x_t = w_t + \theta_1 w_{t-1} + \cdots + \theta_q w_{t-q}, or in operator form,

x_t = \theta(B) w_t,

where \theta(B) = 1 + \theta_1 B + \cdots + \theta_q B^q is the moving average operator.
For the MA(1) model, x_t = w_t + \theta w_{t-1}, notice that the following models share the same ACFs:

1 \theta = 1/5 and \sigma_w = 5, i.e.

x_t = w_t + \tfrac{1}{5} w_{t-1}, \quad w_t \overset{\text{iid}}{\sim} N(0, 25);

2 \theta = 5 and \sigma_w = 1, i.e.

y_t = v_t + 5 v_{t-1}, \quad v_t \overset{\text{iid}}{\sim} N(0, 1).
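A quick numerical check (mine, using the standard MA(1) formulas \gamma(0) = \sigma_w^2(1 + \theta^2) and \gamma(1) = \sigma_w^2 \theta):

```python
# MA(1) autocovariances: gamma(0) = sigma2 * (1 + theta**2), gamma(1) = sigma2 * theta
for theta, sigma2 in [(1 / 5, 25.0), (5.0, 1.0)]:
    gamma0 = sigma2 * (1 + theta**2)
    gamma1 = sigma2 * theta
    print(theta, gamma0, gamma1, gamma1 / gamma0)
# Both parameterizations give gamma(0) = 26, gamma(1) = 5, rho(1) = 5/26.
```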
To discover which model is the invertible model, we can reverse the roles of x_t and w_t (mimicking the AR case),

w_t = -\theta w_{t-1} + x_t.
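Iterating this relation backward (a standard step, filled in here) gives

w_t = \sum_{j=0}^{\infty} (-\theta)^j x_{t-j},

which converges only when |\theta| < 1. Hence the \theta = 1/5 model is the invertible one: its noise can be recovered from present and past observations.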
To summarize the operator forms seen so far: a moving average is x_t = \theta(B) w_t; an invertible process can be re-expressed as \pi(B) x_t = w_t; and combining the autoregressive and moving average parts gives the ARMA model

\phi(B) x_t = \theta(B) w_t.
Written out, this is

x_t = \alpha + \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + w_t + \theta_1 w_{t-1} + \theta_2 w_{t-2} + \cdots + \theta_q w_{t-q},

where w_t \sim wn(0, \sigma_w^2).
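To simulate from an ARMA model, one option is statsmodels' arma_generate_sample; a minimal sketch with illustrative coefficients of my choosing, assuming the (1, -\phi_1, \dots) / (1, \theta_1, \dots) polynomial convention that statsmodels uses:

```python
import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample

# ARMA(1, 1) with phi = 0.5 and theta = 0.4 (illustrative values)
ar = np.array([1, -0.5])   # phi(B) = 1 - 0.5 B
ma = np.array([1, 0.4])    # theta(B) = 1 + 0.4 B

x = arma_generate_sample(ar, ma, nsample=500)
```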
Consider the model

(1 - 0.5B) x_t = (1 - 0.5B) w_t,

or

x_t = 0.5 x_{t-1} - 0.5 w_{t-1} + w_t.

This looks like an ARMA(1, 1), but both sides share the common factor (1 - 0.5B); cancelling it leaves x_t = w_t, i.e., white noise. The model is over-parameterized, an instance of parameter redundancy.
An ARMA process is causal when it can be written as the one-sided linear process

x_t = \psi(B) w_t,

where

\psi(B) = \sum_{j=0}^{\infty} \psi_j B^j \quad \text{and} \quad \sum_{j=0}^{\infty} |\psi_j| < \infty; \text{ we set } \psi_0 = 1.
Remark: An ARMA process is causal only when the roots of \phi(z) lie outside the unit circle; that is, \phi(z) = 0 only when |z| > 1.
Remark: An ARMA process is invertible only when the roots of \theta(z) lie outside the unit circle; that is, \theta(z) = 0 only when |z| > 1.
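Checking these root conditions numerically is straightforward; a minimal sketch using numpy.roots, with example polynomials of my choosing:

```python
import numpy as np

# phi(z) = 1 - 0.5 z  -> root at z = 2,    outside the unit circle: causal
# theta(z) = 1 + 0.4 z -> root at z = -2.5, outside the unit circle: invertible
# np.roots expects coefficients ordered from highest degree to lowest.
ar_roots = np.roots([-0.5, 1])   # roots of 1 - 0.5 z
ma_roots = np.roots([0.4, 1])    # roots of 1 + 0.4 z

print(np.all(np.abs(ar_roots) > 1))  # True -> causal
print(np.all(np.abs(ma_roots) > 1))  # True -> invertible
```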