var intro - Introduction to vector autoregressive models

Description        Remarks and examples        References        Also see

Description
Stata has a suite of commands for fitting, forecasting, interpreting, and performing inference on vector autoregressive (VAR) models and structural vector autoregressive (SVAR) models. The suite includes several commands for estimating and interpreting impulse-response functions (IRFs), dynamic-multiplier functions, and forecast-error variance decompositions (FEVDs). The table below describes the available commands.
var             [TS] var             Fit vector autoregressive models
svar            [TS] var svar        Fit structural vector autoregressive models
varbasic        [TS] varbasic        Fit a simple VAR and graph IRFs
varsoc          [TS] varsoc          Obtain lag-order selection statistics
varstable       [TS] varstable       Check the stability condition of VAR or SVAR estimates
varwle          [TS] varwle          Obtain Wald lag-exclusion statistics
vargranger      [TS] vargranger      Perform pairwise Granger causality tests
varlmar         [TS] varlmar         Perform LM test for residual autocorrelation
varnorm         [TS] varnorm         Test for normally distributed disturbances
irf             [TS] irf             Create and analyze IRFs and FEVDs
fcast compute   [TS] fcast compute   Compute dynamic forecasts
fcast graph     [TS] fcast graph     Graph forecasts generated by fcast compute
This entry provides an overview of vector autoregressions and structural vector autoregressions.
More rigorous treatments can be found in Hamilton (1994), Lütkepohl (2005), and Amisano and
Giannini (1997). Stock and Watson (2001) provide an excellent nonmathematical treatment of vector
autoregressions and their role in macroeconomics. Becketti (2013) provides an excellent introduction
to VAR analysis with an emphasis on how it is done in practice.
Remarks and examples
Introduction to VARs
A VAR is a model in which K variables are specified as linear functions of p of their own lags, p lags of the other K − 1 variables, and possibly additional exogenous variables. Algebraically, a p-order VAR model, written VAR(p), with exogenous variables xt is given by

$$y_t = v + A_1 y_{t-1} + \cdots + A_p y_{t-p} + B_0 x_t + B_1 x_{t-1} + \cdots + B_s x_{t-s} + u_t, \qquad t \in \{-\infty, \infty\} \tag{1}$$

where

    yt = (y1t, . . . , yKt)' is a K × 1 random vector,
    A1 through Ap are K × K matrices of parameters,
    xt is an M × 1 vector of exogenous variables,
    B0 through Bs are K × M matrices of coefficients,
    v is a K × 1 vector of parameters, and
    ut is assumed to be white noise; that is, E(ut) = 0, E(ut ut') = Σ, and E(ut us') = 0 for t ≠ s.

A VAR can be viewed as the reduced form of a system of dynamic simultaneous equations. Consider the system

$$W_0 y_t = a + W_1 y_{t-1} + \cdots + W_p y_{t-p} + \widetilde{W}_1 x_t + \widetilde{W}_2 x_{t-1} + \cdots + \widetilde{W}_s x_{t-s} + e_t \tag{2}$$

where a is a K × 1 vector of parameters, each Wi, i = 0, . . . , p, is a K × K matrix of parameters, and et is a K × 1 disturbance vector. Assuming that W0 is nonsingular, premultiplying (2) by the inverse of W0 yields the VAR in (1), with

$$v = W_0^{-1} a, \qquad A_i = W_0^{-1} W_i, \qquad B_i = W_0^{-1} \widetilde{W}_i, \qquad u_t = W_0^{-1} e_t \tag{3}$$
The cross-equation error variance-covariance matrix Σ contains all the information about contemporaneous correlations in a VAR and may be the VAR's greatest strength and its greatest weakness. Because no questionable a priori assumptions are imposed, fitting a VAR allows the dataset to speak for itself. However, without imposing some restrictions on the structure of Σ, we cannot make a causal interpretation of the results.
If we make additional technical assumptions, we can derive another representation of the VAR in
(1). If the VAR is stable (see [TS] varstable), we can rewrite yt as
$$y_t = \mu + \sum_{i=0}^{\infty} D_i x_{t-i} + \sum_{i=0}^{\infty} \Phi_i u_{t-i} \tag{4}$$

where μ is the K × 1 time-invariant mean of the process and Di and Φi are K × M and K × K matrices of parameters, respectively. Equation (4) states that the process by which the variables in yt fluctuate about their time-invariant means, μ, is completely determined by the parameters in Di and Φi and the (infinite) past history of the exogenous variables xt and the independent and identically distributed (i.i.d.) shocks or innovations, ut−1, ut−2, . . . . Equation (4) is known as the vector moving-average representation of the VAR. The Di are the dynamic-multiplier functions, or transfer functions. The moving-average coefficients Φi are also known as the simple IRFs at horizon i. The precise relationships between the VAR parameters and the Di and Φi are derived in Methods and formulas of [TS] irf create.
The joint distribution of yt is determined by the distributions of xt and ut and the parameters v, Bi, and Ai. Estimating the parameters in a VAR requires that the variables in yt and xt be
covariance stationary, meaning that their first two moments exist and are time invariant. If the yt are
not covariance stationary, but their first differences are, a vector error-correction model (VECM) can
be used. See [TS] vec intro and [TS] vec for more information about those models.
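For example, with two I(1) series that may be cointegrated, a common workflow is to confirm the unit roots and then fit a VECM. Below is a minimal sketch; the time variable t, the variables y1 and y2, and the lag and rank choices are all hypothetical:

    . tsset t
    . dfuller y1
    . dfuller d.y1
    . vec y1 y2, lags(2) rank(1)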
If the ut form a zero mean, i.i.d. vector process, and yt and xt are covariance stationary and are not correlated with the ut, consistent and efficient estimates of the Bi, the Ai, and v are obtained
via seemingly unrelated regression, yielding estimators that are asymptotically normally distributed.
When the equations for the variables yt have the same set of regressors, equation-by-equation OLS
estimates are the conditional maximum likelihood estimates.
Much of the interest in VAR models is focused on the forecasts, IRFs, dynamic-multiplier functions,
and the FEVDs, all of which are functions of the estimated parameters. Estimating these functions is
straightforward, but their asymptotic standard errors are usually obtained by assuming that ut forms
a zero mean, i.i.d. Gaussian (normal) vector process. Also, some of the specification tests for VARs
have been derived using the likelihood-ratio principle and the stronger Gaussian assumption.
In the absence of contemporaneous exogenous variables, the disturbance variance-covariance matrix Σ contains all the information about contemporaneous correlations among the variables. VARs are sometimes classified into three types by how they account for this contemporaneous correlation. (See Stock and Watson [2001] for one derivation of this taxonomy.) A reduced-form VAR, aside from estimating the variance-covariance matrix of the disturbance, does not try to account for
contemporaneous correlations. In a recursive VAR, the K variables are assumed to form a recursive
dynamic structural equation model in which the first variable is a function of lagged variables, the
second is a function of contemporaneous values of the first variable and lagged values, and so on.
In a structural VAR, the theory you are working with places restrictions on the contemporaneous
correlations that are not necessarily recursive.
Stata has two commands for fitting reduced-form VARs: var and varbasic. var allows for
constraints to be imposed on the coefficients. varbasic allows you to fit a simple VAR quickly
without constraints and graph the IRFs.
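For example, we can fit a two-lag reduced-form VAR to the West German macroeconomic data distributed with Stata (a minimal sketch; the lag choice is illustrative):

    . webuse lutkepohl2
    . var dln_inv dln_inc dln_consump, lags(1/2)

Replacing var with varbasic on the last line fits the same model and graphs the IRFs in one step.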
Because fitting a VAR of the correct order can be important, varsoc offers several methods for
choosing the lag order p of the VAR to fit. After fitting a VAR, and before proceeding with inference,
interpretation, or forecasting, checking that the VAR fits the data is important. varlmar can be used
to check for autocorrelation in the disturbances. varwle performs Wald tests to determine whether
certain lags can be excluded. varnorm tests the null hypothesis that the disturbances are normally
distributed. varstable checks the eigenvalue condition for stability, which is needed to interpret the
IRFs and FEVDs.
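Continuing the example above, a typical workflow is to choose the lag order, fit the VAR, and then run the specification checks (a sketch; the maximum lag considered is illustrative):

    . varsoc dln_inv dln_inc dln_consump, maxlag(4)
    . var dln_inv dln_inc dln_consump, lags(1/2)
    . varlmar
    . varwle
    . varnorm
    . varstable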
Introduction to SVARs
As discussed in [TS] irf create, a problem with VAR analysis is that, because Σ is not restricted
to be a diagonal matrix, an increase in an innovation to one variable provides information about the
innovations to other variables. This implies that no causal interpretation of the simple IRFs is possible:
there is no way to determine whether the shock to the first variable caused the shock in the second
variable or vice versa.
However, suppose that we had a matrix P such that Σ = PP'. We can then show that the variables in P⁻¹ut have zero mean and that E{P⁻¹ut (P⁻¹ut)'} = IK. We could rewrite (4) as

$$\begin{aligned} y_t &= \mu + \sum_{s=0}^{\infty} \Phi_s P P^{-1} u_{t-s} \\ &= \mu + \sum_{s=0}^{\infty} \Theta_s P^{-1} u_{t-s} \\ &= \mu + \sum_{s=0}^{\infty} \Theta_s w_{t-s} \end{aligned} \tag{5}$$

where Θs = Φs P and wt = P⁻¹ut. Because the wt are mutually orthogonal, a shock to one element of wt conveys no information about the others, and the Θs admit the causal interpretation that the simple IRFs lack.
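One such P is the Cholesky factor of Σ. As a quick illustration using Stata's matrix functions (the 2 × 2 covariance matrix here is hypothetical):

    . matrix Sigma = (4, 2 \ 2, 3)
    . matrix P = cholesky(Sigma)
    . matrix list P

Because P is lower triangular with PP' = Sigma, the implied shocks wt = P⁻¹ut are orthogonal with unit variances.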
A matrix P satisfying Σ = PP' is not unique, however, so additional restrictions are needed to single one out; SVAR models provide those restrictions. A short-run SVAR model without exogenous variables can be written as

$$A(I_K - A_1 L - A_2 L^2 - \cdots - A_p L^p) y_t = A \varepsilon_t = B e_t \tag{6}$$

where L is the lag operator; A, B, and A1, . . . , Ap are K × K matrices of parameters; εt is a K × 1 vector of innovations with εt ~ (0, Σ) and E[εt εs'] = 0K for all s ≠ t; and et is a K × 1 vector of orthogonalized disturbances; that is, et ~ (0, IK) and E[et es'] = 0K for all s ≠ t. In a short-run SVAR model, identification is obtained by placing restrictions on A and B, which are assumed to be nonsingular.
Equation (6) implies that Psr = A⁻¹B, where Psr is the P matrix identified by a particular short-run SVAR model. The latter equality in (6) implies that εt = A⁻¹Bet, so substituting into (4) expresses yt in terms of the orthogonalized shocks:

$$y_t = \mu + \sum_{s=0}^{\infty} \Theta_s^{\mathrm{sr}} e_{t-s} \tag{7}$$

where Θs^sr = Φs Psr.
Psr identifies the structural IRFs by defining a transformation of Σ, and Psr is identified by the restrictions placed on the parameters in A and B. Because there are only K(K + 1)/2 free parameters in Σ, only K(K + 1)/2 parameters may be estimated in an identified Psr. Because there are 2K² total parameters in A and B, the order condition for identification requires that at least 2K² − K(K + 1)/2 restrictions be placed on those parameters. Just as in the simultaneous-equations
framework, this order condition is necessary but not sufficient. Amisano and Giannini (1997) derive
a method to check that an SVAR model is locally identified near some specified values for A and B.
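For example, with K = 2, there are 2K² = 8 parameters in A and B but only K(K + 1)/2 = 3 free parameters in Σ, so at least 8 − 3 = 5 restrictions must be imposed for identification.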
Before moving on to models with long-run constraints, consider these limitations. We cannot place
constraints on the elements of A in terms of the elements of B, or vice versa. This limitation is
imposed by the form of the check for identification derived by Amisano and Giannini (1997). As
noted in Methods and formulas of [TS] var svar, this test requires separate constraint matrices for
the parameters in A and B. Also, we cannot mix short-run and long-run constraints.
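For instance, a just-identified two-variable short-run SVAR can be specified by constraining A to be lower triangular with ones on the diagonal and B to be diagonal. Below is a minimal sketch using the lutkepohl2 data; missing values (.) in the constraint matrices mark free parameters, and the lag choice is illustrative:

    . webuse lutkepohl2
    . matrix A = (1, 0 \ ., 1)
    . matrix B = (., 0 \ 0, .)
    . svar dln_inc dln_consump, lags(1/2) aeq(A) beq(B)

Here A and B contain 1 + 2 = 3 free parameters, exactly K(K + 1)/2 for K = 2, so the order condition holds with equality.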
Long-run restrictions
A general short-run SVAR has the form

$$A(I_K - A_1 L - A_2 L^2 - \cdots - A_p L^p) y_t = B e_t$$

To simplify the notation, let Ā = IK − A1L − A2L² − · · · − ApL^p. The model is assumed to be stable (see [TS] varstable), so Ā⁻¹, the matrix of long-run effects of the reduced-form VAR shocks, is well defined. Constraining A to be an identity matrix allows us to rewrite this equation as

$$y_t = \bar{A}^{-1} B e_t$$

which implies that Σ = BB'. Thus C = Ā⁻¹B is the matrix of long-run responses to the orthogonalized shocks, and

$$y_t = C e_t$$

In long-run models, the constraints are placed on the elements of C, and the free parameters are estimated. These constraints are often exclusion restrictions. For instance, constraining C[1, 2] to be zero can be interpreted as setting the long-run response of variable 1 to the structural shocks driving variable 2 to be zero.
Stata's svar command estimates the parameters of structural VARs. See [TS] var svar for more
information and examples.
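For example, to impose the long-run restriction C[1, 2] = 0 in a two-variable system (again a sketch on the lutkepohl2 data; the restriction is purely illustrative):

    . webuse lutkepohl2
    . matrix C = (., 0 \ ., .)
    . svar dln_inv dln_inc, lags(1/2) lreq(C)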
IRFs and FEVDs

Dynamic-multiplier functions describe how the endogenous variables react over time to a unit change in an exogenous variable. This is a different experiment from that in IRFs and FEVDs because
dynamic-multiplier functions consider a change in an exogenous variable instead of a shock to an
endogenous variable.
irf create estimates IRFs, Cholesky orthogonalized IRFs, dynamic-multiplier functions, and
structural IRFs and their standard errors. It also estimates Cholesky and structural FEVDs. The irf
graph, irf cgraph, irf ograph, irf table, and irf ctable commands graph and tabulate these
estimates. Stata also has several other commands to manage IRF and FEVD results. See [TS] irf for a
description of these commands.
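For example, after fitting a VAR, we can save the estimated functions to an IRF file and then graph or tabulate them (a sketch continuing the earlier example; the IRF result and file names are arbitrary):

    . var dln_inv dln_inc dln_consump, lags(1/2)
    . irf create order1, set(myirfs) step(8)
    . irf graph oirf, impulse(dln_inc) response(dln_consump)
    . irf table fevd, impulse(dln_inc) response(dln_consump)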
fcast compute computes dynamic forecasts and their standard errors from VARs. fcast graph
graphs the forecasts that are generated using fcast compute.
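For example (a sketch; the variable prefix f_ and the eight-period horizon are arbitrary):

    . var dln_inv dln_inc dln_consump, lags(1/2)
    . fcast compute f_, step(8)
    . fcast graph f_dln_inc, observed

fcast compute creates the new variables f_dln_inv, f_dln_inc, and f_dln_consump holding the dynamic forecasts, and the observed option overlays the actual series on the forecast graph.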
VARs allow researchers to investigate whether one variable is useful in predicting another variable.
A variable x is said to Granger-cause a variable y if, given the past values of y, past values of x are useful for predicting y. The Stata command vargranger performs Wald tests to investigate Granger
causality between the variables in a VAR.
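For example, continuing with the VAR fit above:

    . var dln_inv dln_inc dln_consump, lags(1/2)
    . vargranger

For each equation, vargranger reports Wald tests that all lags of each of the other endogenous variables can be excluded.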
References
Amisano, G., and C. Giannini. 1997. Topics in Structural VAR Econometrics. 2nd ed. Heidelberg: Springer.
Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Stock, J. H., and M. W. Watson. 2001. Vector autoregressions. Journal of Economic Perspectives 15: 101–115.
Watson, M. W. 1994. Vector autoregressions and cointegration. In Vol. 4 of Handbook of Econometrics, ed. R. F.
Engle and D. L. McFadden. Amsterdam: Elsevier.
Also see
[TS] var - Vector autoregressive models
[TS] var svar - Structural vector autoregressive models
[TS] vec intro - Introduction to vector error-correction models
[TS] vec - Vector error-correction models
[TS] irf - Create and analyze IRFs, dynamic-multiplier functions, and FEVDs