Lecture 4
Y_t = ρY_{t−1} + u_t
• For simplicity we do not include an intercept term.
• Here |ρ| < 1, and u_t is an uncorrelated random error term with zero mean and
constant variance σ² (white noise).
• The assumption behind the AR(1) model is that the time-series behavior of Y_t is
largely determined by its value in the preceding period.
• We say in this case that Yt follows a first order autoregressive or AR(1) stochastic
process.
• The forecast value of y at time t is simply some proportion of its value at time (t-1)
plus a random shock or disturbance at time t, with the y-values expressed as
deviations around their mean values.
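As a minimal sketch, an AR(1) series can be simulated directly from the recursion above; the values of ρ, the sample size, and the random seed below are illustrative choices, not from the lecture:

```python
import numpy as np

# Simulate an AR(1) process: Y_t = rho * Y_{t-1} + u_t
# rho, n, and the seed are illustrative choices, not from the lecture
rng = np.random.default_rng(0)
rho = 0.7                       # |rho| < 1 keeps the process stationary
n = 500
u = rng.normal(0.0, 1.0, n)     # white noise: zero mean, constant variance
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + u[t]

# the sample lag-1 autocorrelation should be close to rho
lag1 = np.corrcoef(y[:-1], y[1:])[0, 1]
```

Because |ρ| < 1, shocks die out geometrically, so the simulated series fluctuates around zero rather than drifting away.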
An autoregressive (AR) process Cont.
• An AR process can be generalized so that y depends on more
than one of its previous period values:
Y_t = ρ_1Y_{t−1} + ρ_2Y_{t−2} + ... + ρ_pY_{t−p} + u_t
• The current value of y can thus follow a second order
autoregressive, or AR(2) process in which the value of y at time
t depends on its value in the previous two time periods.
• More generally, this can be extended to a pth‑order
autoregressive, or AR(p) process.
• Note that only the current and previous values of y are used;
there are no other regressors. It can be said in this sense that
the ‘data speak for themselves’.
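A sketch of the AR(2) case described above, where today's value depends on the previous two periods; the coefficients and seed are illustrative choices, not from the lecture:

```python
import numpy as np

# Simulate an AR(2) process: Y_t = rho1*Y_{t-1} + rho2*Y_{t-2} + u_t
# coefficients and seed are illustrative, not from the lecture
rng = np.random.default_rng(1)
rho1, rho2 = 0.5, 0.3           # chosen inside the AR(2) stationarity region
n = 500
u = rng.normal(size=n)          # white-noise errors
y = np.zeros(n)
for t in range(2, n):
    y[t] = rho1 * y[t - 1] + rho2 * y[t - 2] + u[t]

# sample lag-1 autocorrelation; in theory it equals rho1 / (1 - rho2)
lag1 = np.corrcoef(y[:-1], y[1:])[0, 1]
```

Extending to AR(p) only means carrying p lagged terms in the loop; no regressors other than y's own past values ever enter.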
A moving average (MA) process.
• The AR process described above may not be
the only mechanism that may have generated
y.
• The series may follow a moving average, or
MA process where y at time t is given by a
constant plus a moving average of the current
and past error terms.
• A first-order moving average, or MA(1),
process would be modeled as follows:

Y_t = μ + u_t + θu_{t−1}

where μ is a constant and u_t is a white-noise error term.
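As a sketch, an MA(1) series can be simulated by mixing each error with the one from the previous period; μ, θ, and the seed below are illustrative choices, not from the lecture:

```python
import numpy as np

# Simulate an MA(1) process: Y_t = mu + u_t + theta * u_{t-1}
# mu, theta, and the seed are illustrative, not from the lecture
rng = np.random.default_rng(2)
mu, theta = 0.0, 0.6
n = 500
u = rng.normal(size=n + 1)          # one extra draw supplies u_{t-1} at t = 0
y = mu + u[1:] + theta * u[:-1]     # each Y_t mixes u_t and u_{t-1}

lag1 = np.corrcoef(y[:-1], y[1:])[0, 1]  # theory: theta / (1 + theta^2)
lag2 = np.corrcoef(y[:-2], y[2:])[0, 1]  # theory: 0 beyond lag 1
```

Unlike an AR process, an MA(1) series has memory of exactly one period: its autocorrelation cuts off to zero after lag 1, which is the usual diagnostic for telling the two mechanisms apart.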
A moving average (MA) process Cont.