
Time Series

Dr. Muhammad Sabeeh Iqbal


THE UNIT ROOT TEST
You might be tempted to ask: why not use the usual t test? Unfortunately, under the null hypothesis that δ = 0 (i.e., ρ = 1), the t value of the estimated coefficient of Yt−1 does not follow the t distribution even in large samples; that is, it does not have an asymptotic normal distribution.
What is the alternative? Dickey and Fuller have shown that under the null hypothesis that δ = 0, the estimated t value of the coefficient of Yt−1 follows the τ (tau) statistic, whose critical values they tabulated on the basis of Monte Carlo simulations. This test is known as the Dickey–Fuller (DF) test.
THE UNIT ROOT TEST

Read Pages 754-757


THE UNIT ROOT TEST
The Augmented Dickey–Fuller (ADF) Test: the DF regression is augmented by adding lagged values of ΔYt to allow for possible serial correlation in the error term; the ADF statistic has the same asymptotic distribution as the DF statistic, so the same τ critical values apply.
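A minimal sketch of running the ADF test in Python, assuming the statsmodels library; the simulated random walk y is illustrative, not data from the text:

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=500))      # simulated random walk: has a unit root by construction

adf_stat, p_value, *_ = adfuller(y, autolag="AIC")   # lag length chosen by AIC
print(f"ADF statistic: {adf_stat:.3f}, p-value: {p_value:.3f}")
# Null hypothesis: the series has a unit root (delta = 0). A large p-value
# means we cannot reject the null, so we treat the series as nonstationary.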
TRANSFORMING NONSTATIONARY TIME SERIES
 Now that we know the problems associated with
nonstationary time series, the practical question is what to
do.
 To avoid the spurious regression problem that may arise from
regressing a nonstationary time series on one or more
nonstationary time series, we have to transform
nonstationary time series to make them stationary.
 The transformation method depends on whether the time series is a difference-stationary process (DSP) or a trend-stationary process (TSP). We consider each of these methods in turn.
TRANSFORMING NONSTATIONARY TIME SERIES
 Difference-Stationary Processes
 If a time series has a unit root, its first differences are stationary. Therefore, the solution here is to take the first differences of the time series.
 Trend-Stationary Process
 As we have seen in Figure 21.5, a TSP is stationary around
the trend line. Hence, the simplest way to make such a time
series stationary is to regress it on time and the residuals
from this regression will then be stationary.
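A minimal sketch of both transformations, assuming statsmodels and a simulated series y (hypothetical data, not from the text):

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
y = pd.Series(np.cumsum(rng.normal(size=200)))   # stand-in for a nonstationary series

# DSP: take first differences
dy = y.diff().dropna()

# TSP: regress on a time trend; the residuals are the detrended series
X = sm.add_constant(np.arange(len(y)))           # regressors: [1, t]
detrended = sm.OLS(y, X).fit().resid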
TRANSFORMING NONSTATIONARY TIME SERIES
 It should be pointed out that if a time series is DSP but we treat it as TSP, this is called under-differencing.
 On the other hand, if a time series is TSP but we treat it as DSP, this is called over-differencing. The consequences of these types of specification errors can be serious.
 In short, provided we check that the residuals from
regressions are I(0) or stationary, the traditional regression
methodology (including the t and F tests) that we have
considered extensively is applicable to data involving
(nonstationary) time series.
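A hedged sketch of that residual check (the variables x and y are hypothetical; strictly, a residual-based test should use Engle–Granger rather than standard ADF critical values, but the mechanics are the same):

import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(size=300))              # nonstationary regressor
y = 0.5 * x + rng.normal(size=300)               # y shares x's stochastic trend

resid = sm.OLS(y, sm.add_constant(x)).fit().resid
print(f"ADF p-value on residuals: {adfuller(resid)[1]:.4f}")
# A small p-value suggests the residuals are I(0), so the usual t and F
# tests on this regression are defensible.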
Time Series
Econometrics: Forecasting
 An Autoregressive (AR) Process
 Let Yt represent GDP at time t. If we model Yt as
(Yt − δ) = α1(Yt−1 − δ) + ut
 where δ is the mean of Y and where ut is an
uncorrelated random error term with zero mean and
constant variance σ2 (i.e., it is white noise), then we
say that Yt follows a first-order autoregressive, or AR(1), process.
Time Series
Econometrics: Forecasting
 But if we consider this model,
(Yt − δ) = α1(Yt−1 − δ) + α2(Yt−2 − δ) + ut
then we say that Yt follows a second-order autoregressive, or AR(2), process.
In general, we can have
(Yt − δ) = α1(Yt−1 − δ) + α2(Yt−2 − δ) + ··· + αp(Yt−p − δ) + ut
in which case Yt is a pth-order autoregressive, or AR(p), process.
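A minimal sketch, assuming statsmodels: simulate an AR(1) series with illustrative values δ = 10 and α1 = 0.7, then recover the coefficient by fitting an ARIMA(1, 0, 0) model:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
n, delta, alpha1 = 500, 10.0, 0.7
y = np.empty(n)
y[0] = delta
for t in range(1, n):
    y[t] = delta + alpha1 * (y[t - 1] - delta) + rng.normal()   # (Yt - d) = a1(Yt-1 - d) + ut

fit = ARIMA(y, order=(1, 0, 0)).fit()
print(fit.params)   # estimated AR(1) coefficient should be near 0.7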
Time Series
Econometrics: Forecasting
A Moving Average (MA) Process
 The AR process just discussed is not the only mechanism that may have generated Y.
Suppose we model Y as follows:
Yt = μ + β0ut + β1ut−1
 where μ is a constant and u, as before, is the white noise stochastic error term. Here Y
at time t is equal to a constant plus a moving average of the current and past error
terms. Thus, in the present case, we say that Y follows a first-order moving average, or
an MA(1), process.
 But if Y follows the expression
Yt = μ + β0ut + β1ut−1 + β2ut−2
then it is an MA(2) process.
 More generally,
Yt = μ + β0ut + β1ut−1 + β2ut−2 + ··· + βq ut−q
is an MA(q) process. In short, a moving average process is simply a linear combination of white noise error terms.
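A minimal sketch, assuming statsmodels and taking β0 = 1 (the usual normalization); μ = 2 and β1 = 0.6 are illustrative values:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
n, mu, beta1 = 500, 2.0, 0.6
u = rng.normal(size=n + 1)                # white noise
y = mu + u[1:] + beta1 * u[:-1]           # Yt = mu + ut + beta1*u(t-1): an MA(1)

fit = ARIMA(y, order=(0, 0, 1)).fit()
print(fit.params)   # estimated MA(1) coefficient should be near 0.6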
Time Series
Econometrics: Forecasting
An Autoregressive and Moving Average (ARMA) Process
Of course, it is quite likely that Y has characteristics of both AR and MA and is therefore ARMA. Thus, Yt follows an ARMA(1, 1) process if it can be written as
Yt = θ + α1Yt−1 + β0ut + β1ut−1
because there is one autoregressive and one moving average term. Here θ represents a constant term.
In general, in an ARMA( p, q) process, there will be p autoregressive and q
moving average terms.
Exercise: Write out the following:
AR(7), MA(2), ARMA(1,3) (a worked sketch follows).
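One way to write these out, following the notation above:
AR(7): (Yt − δ) = α1(Yt−1 − δ) + α2(Yt−2 − δ) + ··· + α7(Yt−7 − δ) + ut
MA(2): Yt = μ + β0ut + β1ut−1 + β2ut−2
ARMA(1, 3): Yt = θ + α1Yt−1 + β0ut + β1ut−1 + β2ut−2 + β3ut−3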
Time Series
Econometrics: Forecasting
THE BOX–JENKINS (BJ) METHODOLOGY
The million-dollar question obviously is: Looking at a time series, such as the U.S. GDP series in Figure 21.1, how does one know whether it follows a purely AR process (and if so, what is the value of p), a purely MA process (and if so, what is the value of q), an ARMA process (and if so, what are the values of p and q), or an ARIMA process, in which case we must know the values of p, d, and q? The BJ methodology comes in handy in answering this question. The method consists of four steps:
Step 1. Identification. That is, find out the appropriate values of p, d, and
q. We will show shortly how the correlogram and partial correlogram aid
in this task.
Step 2. Estimation. Having identified the appropriate p and q values, the next stage
is to estimate the parameters of the autoregressive and moving average terms
included in the model.
Time Series
Econometrics: Forecasting
THE BOX–JENKINS (BJ) METHODOLOGY
Step 3. Diagnostic checking. Having chosen a particular ARIMA model, and having
estimated its parameters, we next see whether the chosen model fits the data
reasonably well, for it is possible that another ARIMA model might do the job as
well. This is why Box–Jenkins ARIMA modeling is more an art than a science;
considerable skill is required to choose the right ARIMA model. One simple test of
the chosen model is to see if the residuals estimated from this model are white
noise; if they are, we can accept the particular fit; if not, we must start over. Thus,
the BJ methodology is an iterative process.
Step 4. Forecasting. One of the reasons for the popularity of the ARIMA modeling
is its success in forecasting. In many cases, the forecasts obtained by this method
are more reliable than those obtained from the traditional econometric modeling,
particularly for short-term forecasts. Of course, each case must be checked.
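A hedged sketch of Steps 2–4 in Python, assuming statsmodels; the simulated AR(1) data and the order (1, 0, 0) are illustrative choices, not the "right" model for any real series (Step 1 is sketched after the next slide):

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(5)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.8 * y[t - 1] + rng.normal()         # illustrative AR(1) data

# Step 2: estimation of the identified ARIMA(p, d, q) model
fit = ARIMA(y, order=(1, 0, 0)).fit()

# Step 3: diagnostic checking -- are the residuals white noise?
print(acorr_ljungbox(fit.resid, lags=[10]))      # large p-value: no leftover autocorrelation

# Step 4: forecasting
print(fit.forecast(steps=4))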
THE BOX–JENKINS (BJ) METHODOLOGY
 IDENTIFICATION
 The chief tools in identification are the autocorrelation function (ACF), the partial autocorrelation function (PACF), and the resulting correlograms, which are simply the plots of the ACF and PACF against the lag length. The autocorrelation at lag k is the simple correlation between observations (Yt and Yt−k) that are k time periods apart.
 In similar fashion, the partial autocorrelation measures the correlation between (time series) observations that are k time periods apart after controlling for correlations at intermediate lags (i.e., lags less than k). In other words, the partial autocorrelation is the correlation between Yt and Yt−k after removing the effect of the intermediate Y’s.
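A minimal sketch of Step 1, assuming statsmodels and matplotlib; the AR(1) series is simulated for illustration:

import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

rng = np.random.default_rng(5)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.8 * y[t - 1] + rng.normal()         # AR(1) with alpha1 = 0.8

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(y, lags=24, ax=ax1)     # for an AR process the ACF tails off geometrically
plot_pacf(y, lags=24, ax=ax2)    # and the PACF cuts off after lag p (here, lag 1)
plt.show()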
THE BOX–JENKINS (BJ) METHODOLOGY
 IDENTIFICATION
[Correlogram figures not reproduced in this extract.]
SARIMA
 Is there a seasonal pattern?
 Is there a trend?
 Does the current value seem to depend on the past, so that an AR or MA component may be present?
 If the answer is yes to all these questions, use SARIMA:
 ARIMA(p, d, q)(P, D, Q)m
ARIMA(p, d, q)(P, D, Q)m
 m = the number of periods after which the seasonal pattern repeats
 D = the degree of seasonal differencing required to make the data stationary
 P = the order of the seasonal AR component
 Q = the order of the seasonal MA component

 Let’s identify ARIMA(0,0,0)(0,0,1)12 using the ACF and PACF (a simulation sketch follows).
 Also, ARIMA(0,0,0)(1,0,0)12.
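A hedged simulation sketch, assuming statsmodels; the seasonal MA coefficient 0.8 is illustrative:

import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(6)
u = rng.normal(size=612)
y = u[12:] + 0.8 * u[:-12]       # Yt = ut + 0.8*u(t-12): ARIMA(0,0,0)(0,0,1)12

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(y, lags=36, ax=ax1)     # a single spike at lag 12 for the seasonal MA
plot_pacf(y, lags=36, ax=ax2)    # decaying spikes at lags 12, 24, 36
plt.show()

# Fitting with SARIMAX should recover the seasonal MA coefficient:
fit = SARIMAX(y, order=(0, 0, 0), seasonal_order=(0, 0, 1, 12)).fit(disp=False)
print(fit.params)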
Writing SARIMA models
 ARIMA(0,0,0)(0,0,1)12

 ARIMA(0,0,0)(1,0,0)12

 ARIMA(0,0,0)(1,0,1)12

 ARIMA(1,0,0)(2,1,1)12

 ARIMA(1,0,1)(1,1,1)12
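As a worked sketch for the first two models in the list, writing Φ and Θ for the seasonal AR and MA coefficients (conventional symbols, not defined on the slides):
ARIMA(0,0,0)(0,0,1)12: Yt = μ + ut + Θ1ut−12
ARIMA(0,0,0)(1,0,0)12: Yt = δ + Φ1Yt−12 + ut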
