04 VAR
Macroeconometrics
Freddy Espino
Readings
• Enders, Ch. 5
• Sims, C. (1980), “Macroeconomics and Reality,” Econometrica 48(1), pp. 1-48.
Page 1 of 39
Class Notes VAR Models Freddy Espino
1. Introduction
• Much subsequent work has been aimed at refining and extending these techniques so that they are well suited to systems of several variables.
• In the two-variable case, we can let the time path of y1,t be affected by current and past realizations of y2,t, and the time path of y2,t be affected by current and past realizations of y1,t.
2. the εt's are white-noise disturbances with variances σ1² and σ2²; and
• Note that the ε's are pure innovations (or shocks) in y1,t and y2,t.
• Using matrix algebra, we can write the system in the compact form:
4. 𝐵𝑦𝑡 = 𝛤0 + 𝛤1 𝑦𝑡−1 + 𝜀𝑡
5. 𝑦𝑡 = 𝐴0 + 𝐴1 𝑦𝑡−1 + 𝑢𝑡
• The system in (4) is called the primitive system, and the system in (7) and (6) is called a Reduced-Form VAR.
• SVAR:
εt = [ε1,t, ε2,t]′ ~ iid(0, Σε),  Σε = [σ1² 0; 0 σ2²]
• VAR:
ut = [u1,t, u2,t]′ ~ iid(0, Ωu),  Ωu = [ω1² ω12; ω21 ω2²]
yt = A0 + A1 yt−1 + ⋯ + Ap yt−p + ut
yt = A0 + A(L) yt + ut
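The reduced-form system above can be estimated by per-equation OLS. A minimal numpy sketch; the coefficient values, sample size, and seed are illustrative assumptions, not from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stable VAR(1): y_t = A0 + A1 y_{t-1} + u_t
A0 = np.array([0.1, 0.2])
A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])
T = 2000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A0 + A1 @ y[t - 1] + rng.normal(size=2)

# Per-equation OLS: regress y_t on a constant and y_{t-1}
X = np.column_stack([np.ones(T - 1), y[:-1]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
A1_hat = coef[1:].T           # estimated A1 (rows = equations)
U = y[1:] - X @ coef          # reduced-form residuals
Omega_hat = U.T @ U / len(U)  # estimate of Omega_u
```

In practice a library routine such as statsmodels' `VAR` class performs exactly this per-equation OLS step.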
• It is important to note that the error terms u1,t and u2,t are composites of the two structural shocks ε1,t and ε2,t.
• Thus:
• E[u1,t] = E[u2,t] = 0
• A critical point to note is that u1,t and u2,t are correlated. Their covariance is:
• E[u1,t u2,t] = −(b21 σ1² + b12 σ2²)/(1 − b12 b21)² = ω12 = ω21
• We cannot estimate the SVAR using per-equation OLS, due to the simultaneity bias: both y1,t and y2,t are endogenous, and the regressors include the current value of the other endogenous variable, which is correlated with the structural error term.
restriction imposed.
𝑦𝑡 = 𝛾1 + 𝑐𝑡 + 𝑖𝑡 + 𝜀𝑦𝑡
period 𝑡, respectively.
• The terms εyt, εct and εit are zero-mean random disturbances.
[1 −1 −1; 0 1 0; 0 −β 1][yt; ct; it] = [γ1; γ2; γ3] + [0 0 0; α 0 0; 0 −β 0][yt−1; ct−1; it−1] + [εyt; εct; εit]
terms of its own lags, lags of the other endogenous variables, and current and past values of any exogenous variables.
3. Lag Selection
• In practice, a small number of lags will usually suffice.
• Information criteria (AIC, SBC) and likelihood-ratio tests can be used to select the most likely order.
• Each additional lag in a K-variable VAR adds K² parameters.
• A likelihood-ratio test compares the covariance matrices of the residuals: Σℓ versus Σℓ−1.
• If (T − c)(ln|Σℓ−1| − ln|Σℓ|) exceeds the χ² critical value with K² degrees of freedom, reject H0.
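Information-criterion lag selection can be sketched directly in numpy. The VAR(2) data-generating process and the exact BIC normalization below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a hypothetical VAR(2) so the true order is known
A1 = np.array([[0.5, 0.0], [0.0, 0.4]])
A2 = np.array([[0.3, 0.0], [0.0, 0.25]])
T = 2000
y = np.zeros((T, 2))
for t in range(2, T):
    y[t] = A1 @ y[t - 1] + A2 @ y[t - 2] + rng.normal(size=2)

def bic(y, p):
    """BIC for a VAR(p) fitted by OLS: ln|Sigma_p| + ln(T)/T * (number of coefficients)."""
    Tn, K = y.shape
    Y = y[p:]
    X = np.column_stack([np.ones(Tn - p)] + [y[p - j:Tn - j] for j in range(1, p + 1)])
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    U = Y - X @ B
    Sigma = U.T @ U / (Tn - p)
    return np.log(np.linalg.det(Sigma)) + np.log(Tn - p) / (Tn - p) * (K * K * p + K)

best_p = min(range(1, 6), key=lambda p: bic(y, p))
print(best_p)   # expected to select p = 2
```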
• The VAR(p)
𝑦𝑡 = 𝐴0 + 𝐴1 𝑦𝑡−1 + ⋯ + 𝐴𝑝 𝑦𝑡−𝑝 + 𝑢𝑡
𝑦𝑡 = 𝐴0 + 𝐴(𝐿) 𝑦𝑡 + 𝑢𝑡
[𝐼 − 𝐴(𝐿)]𝑦𝑡 = 𝐴0 + 𝑢𝑡
𝐶(𝐿)𝑦𝑡 = 𝐴0 + 𝑢𝑡
• The VAR is stable if all roots L of |I − A1 L − A2 L² − ⋯ − Ap Lᵖ| = 0 lie outside the unit circle.
A1 = [0.3 0.7; 0.8 0.6]
|[1 0; 0 1] − [0.3 0.7; 0.8 0.6] L| = 0
|[1 − 0.3L  −0.7L; −0.8L  1 − 0.6L]| = 0
1 − 0.9L − 0.38L² = 0
L1 = −3.1927
L2 = 0.8243
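The lag-polynomial roots above can be checked numerically (a small sketch; `np.roots` takes coefficients from the highest power down):

```python
import numpy as np

# Roots of 1 - 0.9L - 0.38L^2 = 0
roots = np.sort(np.roots([-0.38, -0.9, 1.0]))
print(roots)   # approximately [-3.1927, 0.8243]

# Stability requires every root to lie OUTSIDE the unit circle;
# |0.8243| < 1, so this example system is not stable.
```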
ξt = [yt; yt−1; yt−2; ⋮; yt−p+1]

ℱ = [A1 A2 A3 ⋯ Ap−1 Ap; I 0 0 ⋯ 0 0; 0 I 0 ⋯ 0 0; ⋮; 0 0 0 ⋯ I 0]

vt = [εt; 0; 0; ⋮; 0]
• A VAR(p) can be written as a VAR(1) in companion form:
𝜉𝑡 = ℱ𝜉𝑡−1 + 𝑣𝑡
• Or
[yt; yt−1; yt−2; ⋮; yt−p+1] = [A1 A2 A3 ⋯ Ap−1 Ap; I 0 0 ⋯ 0 0; 0 I 0 ⋯ 0 0; ⋮; 0 0 0 ⋯ I 0][yt−1; yt−2; yt−3; ⋮; yt−p] + [εt; 0; 0; ⋮; 0]
• The VAR is stable if |λ| < 1 for all eigenvalues λ of ℱ, i.e. for all solutions of |ℱ − λI| = 0, and unstable otherwise for other values of λ.
• Example:
A1 = [0.3 0.7; 0.8 0.6]
|[1 0; 0 1] λ − [0.3 0.7; 0.8 0.6]| = 0
|[λ − 0.3  −0.7; −0.8  λ − 0.6]| = 0
λ² − 0.9λ − 0.38 = 0
λ1 = 1.2132
λ2 = −0.3132
• Since |λ1| > 1, the system is not stable. Note that the eigenvalues are the reciprocals of the lag-polynomial roots: 1/0.8243 ≈ 1.2132 and 1/(−3.1927) ≈ −0.3132.
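The same conclusion follows from a quick numerical check of the companion-matrix eigenvalues:

```python
import numpy as np

A1 = np.array([[0.3, 0.7],
               [0.8, 0.6]])
lam = np.sort(np.linalg.eigvals(A1))
print(lam)   # approximately [-0.3132, 1.2132]

# The eigenvalues are the reciprocals of the lag-polynomial roots;
# |1.2132| > 1, so the system is not stable.
```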
5. Identification
• Suppose that you want to recover the SVAR from your estimated VAR.
o In our example, the reason is that y2,t is correlated with the error term ε1,t.
• Note there is no such problem in estimating the VAR: OLS can provide consistent estimates of its parameters.
• Is it possible to recover the structural parameters from the estimated VAR model?
• The VAR estimation yields ω1², ω2² and Cov[u1t, u2t].
• Suppose that you are willing to impose a restriction on the SVAR such that b12 = 0:
B = [1 0; b21 1]  ⇒  B⁻¹ = [1 0; −b21 1]
• We know that:
𝑢𝑡 = 𝐵−1 𝜀𝑡
[u1t; u2t] = [1 0; −b21 1][ε1t; ε2t]
• Thus
[u1t; u2t] = [ε1t; ε2t − b21 ε1t]
• The restriction manifests itself such that both ε1t and ε2t shocks affect the contemporaneous value of y2,t, but only ε1t shocks affect the contemporaneous value of y1,t.
• This triangular identification scheme is known as a Cholesky Decomposition.¹
𝑢𝑡 = 𝐵−1 𝜀𝑡
𝜀𝑡 = 𝐵𝑢𝑡
• Given Ωu, we can always find a lower-triangular matrix P so that:
Ω𝑢 = 𝑃𝑃′
𝐵 = 𝑃−1
¹ In linear algebra, the Cholesky Decomposition or Cholesky Factorization is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose:
• 𝑀 = 𝐿𝐿′
• Where L is a lower triangular matrix with real and positive diagonal entries, and 𝐿′ denotes the
conjugate transpose of L.
It was discovered by André-Louis Cholesky for real matrices, and posthumously published in 1924.
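The mapping Ωu = PP′, B = P⁻¹ can be verified numerically. A sketch assuming unit-variance structural shocks and a hypothetical b21 = 0.5:

```python
import numpy as np

rng = np.random.default_rng(2)

B = np.array([[1.0, 0.0],
              [0.5, 1.0]])            # b12 = 0 imposed; b21 = 0.5 is assumed
eps = rng.normal(size=(100_000, 2))   # unit-variance structural shocks
u = eps @ np.linalg.inv(B).T          # reduced-form errors: u_t = B^{-1} eps_t

Omega_hat = np.cov(u.T)
P = np.linalg.cholesky(Omega_hat)     # Omega_u = P P'
B_hat = np.linalg.inv(P)              # recovers B (up to sampling error)
print(np.round(B_hat, 2))
```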
• The VAR
𝑦𝑡 = 𝐴0 + 𝐴(𝐿) 𝑦𝑡 + 𝑢𝑡
𝑢𝑡 ~𝑖𝑖𝑑(0, Ω𝑢 )
𝐶(𝐿)𝑦𝑡 = 𝐴0 + 𝑢𝑡
yt = μ + Ψ(L)ut,  where μ = C(1)⁻¹A0 and Ψ(L) = C(L)⁻¹
• Therefore, because the elements of ut are contemporaneously correlated, we cannot simply hold uit constant and let only ujt vary, for all i ≠ j.
𝑢𝑡 = 𝐵−1 𝜀𝑡
𝜀𝑡 = 𝐵𝑢𝑡
• Then
𝑦𝑡 = 𝜇 + Ψ(𝐿)𝐵−1 𝐵𝑢𝑡
𝑦𝑡 = 𝜇 + Θ(𝐿)𝜀𝑡
• The coefficients θij(k) can be used to generate the effects of εt on the entire time path of the yt sequence:
∂yt/∂εt−k = ∂yt+k/∂εt = θ(k)
• The long-run cumulative effect of the shocks is the polynomial Θ(L) evaluated at 1:
Θ(1)
• Notice that the results of the IRF depend on the identification of the VAR.
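For a VAR(1) with a Cholesky identification, the orthogonalized IRFs are Θk = A1ᵏ P. A sketch with assumed values for A1 and Ωu:

```python
import numpy as np

A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])
Omega = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
P = np.linalg.cholesky(Omega)   # identification: impact matrix

H = 8
Theta = np.zeros((H, 2, 2))
Ak = np.eye(2)
for k in range(H):
    Theta[k] = Ak @ P           # Theta[k][i, j]: response of y_i at t+k to shock j at t
    Ak = Ak @ A1

# On impact, shock 2 does not move y_1: that is the Cholesky ordering at work.
```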
𝜉𝑡 = ℱ𝜉𝑡−1 + 𝑣𝑡
• For any horizon h ≥ 0, the IRF to a unit shock at time “t” is obtained from the upper-left block of ℱʰ (for a VAR(1), simply A1ʰ).
needed.
𝑦𝑡 = 𝐴0 + 𝐴1 𝑦𝑡−1 + 𝑢𝑡
• Suppose we know the data-generating process and the current and past realizations of the {εt} sequence.
𝐸𝑡 𝑦𝑡+1 = 𝐴0 + 𝐴1 𝑦𝑡
𝑦𝑡+1 = 𝐴0 + 𝐴1 𝑦𝑡 + 𝑢𝑡+1
𝐸𝑡 𝑦𝑡+2 = 𝐴0 + 𝐴1 𝐸𝑡 𝑦𝑡+1
𝐸𝑡 𝑦𝑡+2 = 𝐴0 + 𝐴1 (𝐴0 + 𝐴1 𝑦𝑡 )
[y1,t+n; y2,t+n] − Et[y1,t+n; y2,t+n] = Σ_{k=0}^{n−1} [θ11(k) θ12(k); θ21(k) θ22(k)][ε1,t+n−k; ε2,t+n−k]
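The n-step forecast is obtained by iterating Et yt+1 = A0 + A1 yt, as above. A small numeric sketch (parameter values assumed):

```python
import numpy as np

A0 = np.array([0.1, 0.2])
A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])
y_t = np.array([1.0, -0.5])

n = 2
f = y_t.copy()
for _ in range(n):
    f = A0 + A1 @ f                     # E_t y_{t+k} = A0 + A1 E_t y_{t+k-1}

# Closed form for n = 2: E_t y_{t+2} = A0 + A1 A0 + A1^2 y_t
closed = A0 + A1 @ A0 + A1 @ A1 @ y_t
```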
• Because all values of θjk(i)² are necessarily nonnegative, the variance of the forecast error increases as the forecast horizon n increases.
• The proportions of σ1(n)² due to shocks in the {ε1t} and {ε2t} sequences are:
σ1²[θ11(0)² + θ11(1)² + ⋯ + θ11(n−1)²]/σ1(n)²
• And
σ2²[θ12(0)² + θ12(1)² + ⋯ + θ12(n−1)²]/σ1(n)²
• If ε2t shocks explain none of the forecast error variance of y1,t at all forecast horizons, we can say that the y1,t sequence is exogenous.
• At the other extreme, ε2t could explain all the forecast error variance of y1,t at all horizons, in which case y1,t would be entirely endogenous.
effect on 𝑦1,𝑡 , but acted to affect the 𝑦1,𝑡 sequence with a lag.
the VAR.
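The decomposition can be computed directly from the orthogonalized MA coefficients. A sketch for a VAR(1) with assumed parameters; the shares of y1's n-step forecast-error variance sum to one:

```python
import numpy as np

A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])
Omega = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
P = np.linalg.cholesky(Omega)

n = 10
Theta = [np.linalg.matrix_power(A1, k) @ P for k in range(n)]

# sigma_1(n)^2 = sum over k and over shocks j of theta_1j(k)^2
sigma1_sq = sum(Th[0] @ Th[0] for Th in Theta)
shares = sum(Th[0] ** 2 for Th in Theta) / sigma1_sq
print(shares)   # share of y1's forecast-error variance due to shocks 1 and 2
```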
8. Granger Causality
• The concept is due to Granger (1969) and was applied to testing in macroeconomics by Sims (1972).
• In a two-equation model with 1 lag, y1,t does not Granger-cause y2,t if the coefficient on y1,t−1 in the y2 equation is zero.
• Thus, if y1,t does not improve the forecasting performance of y2,t, then y1,t does not Granger-cause y2,t. The null hypothesis is:
H0: a21 = 0
• If y2,t is some sort of forecast of the future, such as a future price, then y2,t may help to forecast y1,t even though it does not cause y1,t à la Granger.
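A Granger-causality test is an F-test that the lags of the candidate variable are jointly zero. A sketch for the two-variable, one-lag case; the DGP (a21 = 0.4, a12 = 0) is an assumption chosen so the answer is known:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# y1 Granger-causes y2 (a21 = 0.4), but y2 does not cause y1 (a12 = 0)
A1 = np.array([[0.5, 0.0],
               [0.4, 0.3]])
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A1 @ y[t - 1] + rng.normal(size=2)

def granger_pval(y, caused, causing):
    """p-value of H0: the lag of `causing` is zero in the `caused` equation (1 lag)."""
    Tn = y.shape[0]
    Y = y[1:, caused]
    X_full = np.column_stack([np.ones(Tn - 1), y[:-1]])   # const + both lags
    X_rest = np.delete(X_full, 1 + causing, axis=1)       # drop the `causing` lag
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    df = Tn - 1 - X_full.shape[1]
    F = (rss(X_rest) - rss(X_full)) / (rss(X_full) / df)
    return 1 - stats.f.cdf(F, 1, df)

p12 = granger_pval(y, caused=1, causing=0)   # should be near zero: y1 causes y2
p21 = granger_pval(y, caused=0, causing=1)   # no systematic rejection expected
```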
9. Structural Decomposition
• Sims’s (1980) VAR approach has the desirable property that all variables are treated symmetrically, so the researcher avoids “incredible identification restrictions.”
• However, the ordering used in the Cholesky decomposition is completely arbitrary.
E[ut ut′] = Ωu = [ω1² ω12; ω21 ω2²]
Ωu = E[(B⁻¹εt)(B⁻¹εt)′] = E[(B⁻¹)εt εt′(B⁻¹)′] = (B⁻¹)E[εt εt′](B⁻¹)′
• Notice that:
Σε = E[εt εt′] = D = [σ1² 0; 0 σ2²]
• Thus:
Ωu = (B⁻¹)D(B⁻¹)′
• The symmetry of the system is such that ω21 = ω12, so Ωu contains only three independent moments while the structural model has four unknowns (b12, b21, σ1², σ2²); the system is identified only if one restriction is imposed.
• The Cholesky decomposition forces all elements of B above the principal diagonal to be zero.
• With (n² − n)/2 such restrictions, the system is exactly identified.
• For example, define matrix 𝐶 = 𝐵−1 with elements 𝑐𝑖𝑗 . Hence, 𝑢𝑡 = 𝐶𝜀𝑡 .
• In a three-variable VAR:
𝑢1𝑡 = 𝜀1𝑡
[u1t; u2t; u3t] = [1 0 0; c21 1 0; c31 c32 1][ε1t; ε2t; ε3t]
• The decomposition also normalizes each diagonal coefficient to unity.
o For example, suppose it is known that an oil price shock does not
affect GDP for the first two quarters after the shock.
o Mountford and Uhlig (2008) show how such sign restrictions can
be used in identification.
innovations.
• Consider two observed series y1t ~ I(1) and y2t ~ I(0), and define yt = (Δy1t, y2t)′ so that yt ~ I(0); in the application, y1t is the log of real GNP and y2t is the unemployment rate.
that do not.
unemployment.
• They allow supply shocks (ε1t) to have a long-run impact on the level of real GNP, while demand shocks (ε2t) do not. The restriction can be represented as follows:
θ12(1) = Σ_{s=0}^{∞} θ12(s) = 0
• The restriction that shocks to ε1t and ε2t have no long-run effect on the level of the unemployment rate is automatic, since y2t is stationary.
• The long-run restriction makes the long-run impact matrix 𝛩(1) lower
triangular:
Θ(1) = [θ11(1) 0; θ21(1) θ22(1)]
𝛬 = 𝛹(1)Ω𝑢 𝛹(1)′
𝑦𝑡 = 𝜇 + 𝛹(𝐿)𝑢𝑡
𝑦𝑡 = 𝜇 + 𝛹(𝐿)𝐵−1 𝐵𝑢𝑡
𝑦𝑡 = 𝜇 + Θ(𝐿)𝜀𝑡
Ω𝑢 = 𝐵−1 𝐷𝐵−1′
𝛹(1) = 𝛩(1)𝐵
𝛬 = 𝛹(1)Ω𝑢 𝛹(1)′
𝛬 = 𝛩(1)𝐵Ω𝑢 𝐵′𝛩(1)′
𝛬 = 𝛩(1)𝐷𝛩(1)′
• Normalize D = I, so that the structural shocks ε1t and ε2t have unit variances. Thus, Θ(1) is the lower-triangular Cholesky factor of Λ:
Λ = Θ(1)Θ(1)′
• So that:
𝐵 = [𝐶(1)𝛩(1)]−1
𝐵 = [(𝐼 − 𝐴(1))𝛩(1)]−1
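These steps can be carried out numerically for a VAR(1), where C(1) = I − A1 and Ψ(1) = (I − A1)⁻¹. The parameter values below are assumptions for illustration:

```python
import numpy as np

A1 = np.array([[0.4, 0.1],
               [0.2, 0.3]])
Omega = np.array([[1.0, 0.2],
                  [0.2, 0.8]])

Psi1 = np.linalg.inv(np.eye(2) - A1)            # Psi(1) = C(1)^{-1}
Lam = Psi1 @ Omega @ Psi1.T                     # Lambda = Psi(1) Omega_u Psi(1)'
Theta1 = np.linalg.cholesky(Lam)                # Theta(1) Theta(1)' = Lambda, lower triangular
B = np.linalg.inv((np.eye(2) - A1) @ Theta1)    # B = [C(1) Theta(1)]^{-1}

# Consistency check: with D = I, Omega_u = B^{-1} (B^{-1})'
Binv = np.linalg.inv(B)
```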
• SVAR:
• Reduced VAR:
𝑦𝑡 = 𝐴1 𝑦𝑡−1 + ⋯ + 𝐴𝑝 𝑦𝑡−𝑝 + 𝑢𝑡
𝑢𝑡 = 𝐴−1 𝐵𝜀𝑡
𝐴𝑢𝑡 = 𝐵𝜀𝑡
𝑆 = 𝐴−1 𝐵 ⇒ 𝑢𝑡 = 𝑆𝜀𝑡
variables.
• The number of parameters of the reduced-form VAR (leaving out the deterministic terms) is K²p, and the number of distinct elements in Ωu is (K² + K)/2.
• The number of parameters in the matrices A and B is 2K².
• It follows that at least
2K² − (K² + K)/2 = K² + (K² − K)/2
restrictions must be imposed to identify A and B.
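The counting rule above is easy to verify with plain arithmetic (the K values are illustrative):

```python
# 2K^2 free parameters in A and B versus (K^2 + K)/2 distinct moments in Omega_u,
# so 2K^2 - (K^2 + K)/2 = K^2 + (K^2 - K)/2 restrictions are needed.
for K in (2, 3, 4):
    needed = 2 * K**2 - (K**2 + K) // 2
    assert needed == K**2 + (K**2 - K) // 2
    print(K, needed)   # 2 -> 5, 3 -> 12, 4 -> 22
```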
𝑌𝑡 = 𝐴(𝐿, 𝑞)𝑌𝑡−1 + 𝑈𝑡
𝑌𝑡 = [𝑇𝑡 , 𝐺𝑡 , 𝑋𝑡 ]′
𝑈𝑡 = [𝑡𝑡 , 𝑔𝑡 , 𝑥𝑡 ]′
1. tt = a1 xt + a2 e_t^g + e_t^t
2. gt = b1 xt + b2 e_t^t + e_t^g
3. xt = c1 tt + c2 gt + e_t^x

[1 0 −a1; 0 1 −b1; −c1 −c2 1][tt; gt; xt] = [1 a2 0; b2 1 0; 0 0 1][e_t^t; e_t^g; e_t^x]

Et = [e_t^t, e_t^g, e_t^x]′
A Ut = B Et
1. tt = a1 xt + a2 e_t^g + e_t^t
2. gt = b1 xt + b2 e_t^t + e_t^g
3. xt = c1 tt + c2 gt + e_t^x
4. xi,t = d1 tt + d2 gt + e_t^x(i)

[1 0 −a1 0; 0 1 −b1 0; −c1 −c2 1 0; −d1 −d2 0 1][tt; gt; xt; xi,t] = [1 a2 0 0; b2 1 0 0; 0 0 1 0; 0 0 0 1][e_t^t; e_t^g; e_t^x; e_t^x(i)]

Et = [e_t^t, e_t^g, e_t^x, e_t^x(i)]′
A Ut = B Et
restrictions.
Under the null hypothesis (H0) that the restrictions are valid, the test statistic is asymptotically χ²-distributed; if the statistic exceeds the critical value, H0 can be rejected.
iv. Now, allow for two sets of over-identifying restrictions such that the first set is nested in the second.
• Price Puzzle
• In many VAR studies, a contractionary monetary-policy shock is followed by an increase in the price level rather than a decrease.
• This anomaly is known as the “price puzzle”.