Lawrence J. Christiano
Dynamic Factor Models and Factor
Augmented Vector Autoregressions
Problem:
– the time series dimension of data is relatively short.
– the number of time series variables is huge.
DFM’s and FAVARs take the position:
– there are many variables and, hence, shocks,
– but, the principal driving force of all the variables may be just
a small number of shocks.
Factor view has a long-standing history in macro.
– almost the definition of macroeconomics: a handful of shocks
- demand, supply, etc. - are the principal economic drivers.
– Sargent and Sims: only two shocks can explain a large fraction
of the variance of US macroeconomic data.
1977, “Business Cycle Modeling Without Pretending to Have
Too Much A-Priori Economic Theory,” in New Methods in
Business Cycle Research, ed. by C. Sims et al., Minneapolis:
Federal Reserve Bank of Minneapolis.
Why Work with a Lot of Data?
Estimates of impulse responses to, say, a monetary policy
shock, may be distorted by not having enough data in the
analysis (Bernanke et al. (QJE, 2005)).
– Price puzzle:
measures of inflation tend to show a transitory rise after a monetary
policy tightening shock in standard (small-sized) VARs.
One interpretation: the monetary authority responds to a signal
about future inflation that is captured in data not included in a
standard, small-sized VAR.
May suppose that 'core inflation' is a factor that can only be
deduced from a large number of different data series.
May want to know (as in Sargent and Sims), whether the data
for one country or a collection of countries can be characterized
as the dynamic response to a few factors.
Outline
Observer equation
Factors
Parameters: θ = [θ_0, θ_1, ..., θ_n]
Identification Problem in DFM
DFM:
y_{i,t} = a_i + λ_i' f_t + ξ_{i,t}
f_t = φ_{0,1} f_{t-1} + ... + φ_{0,q} f_{t-q} + u_{0,t},   u_{0,t} ~ N(0, Σ_0)
ξ_{i,t} = φ_{i,1} ξ_{i,t-1} + ... + φ_{i,p} ξ_{i,t-p} + u_{i,t}.
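The three equations above can be simulated directly. A minimal Python sketch, assuming one factor, AR(1) dynamics for both the factor and the measurement errors, and unit shock variances (all hypothetical choices made only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, kappa = 200, 10, 1           # sample size, number of series, number of factors
phi0, phi_i = 0.8, 0.5             # AR(1) coefficients (hypothetical values)
a = rng.normal(size=n)             # intercepts a_i
lam = rng.normal(size=(n, kappa))  # loadings lambda_i

f = np.zeros((T, kappa))           # factors f_t
xi = np.zeros((T, n))              # measurement errors xi_{i,t}
for t in range(1, T):
    f[t] = phi0 * f[t - 1] + rng.normal(size=kappa)   # u_{0,t} ~ N(0, I)
    xi[t] = phi_i * xi[t - 1] + rng.normal(size=n)    # u_{i,t} ~ N(0, 1)

# Observer equation: y_{i,t} = a_i + lam_i' f_t + xi_{i,t}
y = a + f @ lam.T + xi
```

With n much larger than kappa, most of the comovement in the columns of `y` is driven by the single factor, which is the premise of the DFM.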
Suppose H is an arbitrary invertible κ × κ matrix.
– Above system is observationally equivalent to:
y_{i,t} = a_i + λ̃_i' f̃_t + ξ_{i,t}
f̃_t = φ̃_{0,1} f̃_{t-1} + ... + φ̃_{0,q} f̃_{t-q} + ũ_{0,t},   ũ_{0,t} ~ N(0, Σ̃_0),
where
f̃_t = H f_t,   λ̃_i' = λ_i' H^{-1},   φ̃_{0,j} = H φ_{0,j} H^{-1},   Σ̃_0 = H Σ_0 H'.
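The equivalence can be checked numerically. A minimal sketch, with an arbitrary (hypothetical) invertible H drawn at random:

```python
import numpy as np

rng = np.random.default_rng(0)
kappa = 2                             # number of factors (hypothetical)
f_t = rng.normal(size=kappa)          # a draw of the factors
lam_i = rng.normal(size=kappa)        # loadings lambda_i (hypothetical values)
H = rng.normal(size=(kappa, kappa))   # arbitrary rotation; invertible w.p. 1

f_tilde = H @ f_t                         # rotated factors
lam_tilde = np.linalg.inv(H).T @ lam_i    # lam_tilde' = lam_i' H^{-1}

# The common component lambda_i' f_t is unchanged, so the distribution of
# y_{i,t} cannot distinguish (lam_i, f_t) from (lam_tilde, f_tilde).
print(np.isclose(lam_i @ f_t, lam_tilde @ f_tilde))   # True
```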
Desirable to restrict model parameters so that there is no
change of parameters that leaves the system observationally
equivalent, yet has all different factors and parameter values.
Geweke-Zhou (1996) Identification
Note that for any model parameterization, one can always choose an H
so that Σ_0 = I_κ.
– Find C such that CC' = Σ_0 (there is a continuum of these),
set H = C^{-1}.
Geweke-Zhou (1996) suggest the identifying assumption,
Σ_0 = I_κ.
– But, this is not enough to achieve identification.
– There exists a continuum of orthonormal matrices with the property
CC' = I_κ.
Simple example: for κ = 2, for each ω ∈ [-π, π],

C = [  cos(ω)  sin(ω) ]
    [ -sin(ω)  cos(ω) ],   1 = cos²(ω) + sin²(ω).
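This rotation matrix is easy to verify numerically; a minimal sketch for one (hypothetical) value of ω:

```python
import numpy as np

omega = 0.7   # any angle in [-pi, pi] (hypothetical value)
C = np.array([[ np.cos(omega), np.sin(omega)],
              [-np.sin(omega), np.cos(omega)]])

# C is orthonormal, so CC' = I_kappa: the restriction Sigma_0 = I_kappa
# survives the rotation and cannot pin down the factors by itself.
print(np.allclose(C @ C.T, np.eye(2)))   # True
```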
Alternative identifications:
– Σ_0 diagonal and diagonal elements of Λ_{1,κ} equal to unity.
– Σ_0 unrestricted (positive definite) and Λ_{1,κ} = I_κ.
Next:
Define:
p( f_t | f_{t-q:t-1}, θ_0 ).
Compute the sequence:

P = [ θ^(1) ... θ^(M) ] = [ P_1 ... P_M ].
    [ f^(1) ... f^(M) ]
Parameters, θ
– observer equation: a_i, λ_i
– measurement error: σ_i², φ_i
– law of motion of factors: φ_0.
y_{i,t} - φ_i(L) y_{i,t-1} = (1 - φ_i(1)) a_i + λ_i' [1 - φ_i(L)L] f_t + u_{i,t},

or, with ỹ_{i,t} ≡ y_{i,t} - φ_i(L) y_{i,t-1} and f̃_{i,t} ≡ [1 - φ_i(L)L] f_t,

ỹ_{i,t} = (1 - φ_i(1)) a_i + λ_i' f̃_{i,t} + u_{i,t}.

Let

A_i = [ a_i ],   x_{i,t} = [ (1 - φ_i(1)) ],
      [ λ_i ]              [ f̃_{i,t}     ]

so

ỹ_{i,t} = A_i' x_{i,t} + u_{i,t},

where ỹ_{i,t} and x_{i,t} are known, conditional on φ_i^(s-1).
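The quasi-differencing step can be sketched in Python; the function name is hypothetical, and the inputs are the series, the factors, and the AR(p) coefficients of the measurement error from the previous Gibbs draw:

```python
import numpy as np

def quasi_difference(y_i, f, phi_i):
    """Quasi-difference series i and the factors with AR(p) coefficients phi_i.

    y_i : (T,) observed series; f : (T, kappa) factors; phi_i : (p,) AR coefs.
    Returns (y_tilde, f_tilde) for t = p, ..., T-1, i.e.
    y~_t = y_t - sum_j phi_j y_{t-j} and f~_t = f_t - sum_j phi_j f_{t-j}.
    """
    p, T = len(phi_i), len(y_i)
    y_tilde = y_i[p:].astype(float).copy()
    f_tilde = f[p:].astype(float).copy()
    for j, phi in enumerate(phi_i, start=1):
        y_tilde -= phi * y_i[p - j:T - j]
        f_tilde -= phi * f[p - j:T - j]
    return y_tilde, f_tilde
```

With p = 1 this reduces to y~_t = y_t - φ_{i,1} y_{t-1}, and the transformed regression error is the iid u_{i,t}.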
Drawing Observer Equation Parameters and
Measurement Error Variance
From the Normality of the observer equation error:
p( y_{i,t} | y_{i,t-p:t-1}, f_{t-p:t}, θ_i )
∝ (1/σ_i) exp{ -(1/2) [ y_{i,t} - φ_i(L) y_{i,t-1} - A_i' x_{i,t} ]² / σ_i² }
= (1/σ_i) exp{ -(1/2) [ ỹ_{i,t} - A_i' x_{i,t} ]² / σ_i² }

Then,

∏_{t=p+1}^T p( y_{i,t} | y_{i,t-p:t-1}, f_{t-p:t}, θ_i )
∝ (1/σ_i^{T-p}) exp{ -(1/(2σ_i²)) Σ_{t=p+1}^T [ ỹ_{i,t} - A_i' x_{i,t} ]² }.
Drawing Observer Equation Parameters and
Measurement Error Variance
As in the BVAR analysis, express in matrix terms:
p( y_i | y_{i,1:p}, f_{1:T}, θ_i )
∝ (1/σ_i^{T-p}) exp{ -(1/(2σ_i²)) Σ_{t=p+1}^T [ ỹ_{i,t} - A_i' x_{i,t} ]² }
= (1/σ_i^{T-p}) exp{ -(1/(2σ_i²)) [ y_i - X_i A_i ]' [ y_i - X_i A_i ] }

where

f_{p+1:T} = f^(s-1),   y_i = [ ỹ_{i,p+1} ; ... ; ỹ_{i,T} ],   X_i = [ x_{i,p+1}' ; ... ; x_{i,T}' ],

and f_{q-p:p} is fixed (could set to the unconditional mean of zero).
Note: calculations are conditional on factors, f^(s-1), from the
previous Gibbs sampling iteration.
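Given the Normal likelihood above, one Gibbs step for the observer-equation parameters and the measurement-error variance can be sketched as follows; this assumes a flat prior (dummy observations could be appended to y_i, X_i to impose priors), and the function names are hypothetical:

```python
import numpy as np

def draw_A(y_i, X_i, sigma2_i, rng):
    """Draw A_i | sigma_i^2, f, y from N(A_hat, sigma_i^2 (X_i'X_i)^{-1})."""
    XtX = X_i.T @ X_i
    A_hat = np.linalg.solve(XtX, X_i.T @ y_i)   # posterior mean (= OLS here)
    return rng.multivariate_normal(A_hat, sigma2_i * np.linalg.inv(XtX))

def draw_sigma2(y_i, X_i, A_i, rng):
    """Draw sigma_i^2 | A_i, f, y: inverse-gamma via a scaled inverse chi-square."""
    resid = y_i - X_i @ A_i
    return resid @ resid / rng.chisquare(len(y_i))
```

Because the series are conditionally independent given the factors, this pair of draws is repeated independently for each i = 1, ..., n.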
Including Dummy Observations
As in the BVAR analysis, T̄ dummy equations are one way to
represent priors, p(θ_i):

p(θ_i) ∏_{t=p+1}^T p( y_{i,t} | y_{i,t-p:t-1}, f_{t-p:t}, θ_i )

Σ_{t=p+1}^T [ ξ_{i,t} - A_i' x_{i,t} ]² = [ y_i - X_i A_i ]' [ y_i - X_i A_i ],

where

y_i = [ ξ_{i,p+1} ; ... ; ξ_{i,T} ],   X_i = [ x_{i,p+1}' ; ... ; x_{i,T}' ].
Draw Distributed Lag Coefficients in
Measurement Error Law of Motion
If we impose priors by dummies, then we draw φ_i from
N( A_i, σ_i² (X_i' X_i)^{-1} ).
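The dummy-observation device amounts to stacking T̄ artificial rows onto the data before taking the usual Normal draw. A minimal sketch (the function name and the choice of dummies are hypothetical):

```python
import numpy as np

def draw_coefs_with_dummies(y_i, X_i, y_dum, X_dum, sigma2_i, rng):
    """Append dummy observations (the prior) to the data, then draw the
    coefficients from N(A_bar, sigma_i^2 (X*'X*)^{-1}), where * denotes
    the augmented data set."""
    y_star = np.concatenate([y_i, y_dum])
    X_star = np.vstack([X_i, X_dum])
    XtX = X_star.T @ X_star
    A_bar = np.linalg.solve(XtX, X_star.T @ y_star)
    return rng.multivariate_normal(A_bar, sigma2_i * np.linalg.inv(XtX))
```

For example, dummy rows (y_dum = 0, X_dum = I) shrink the coefficients toward zero, with the tightness controlled by scaling the dummy rows.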
Draw Parameters in Law of Motion for
Factors
Law of motion of factors:

f_t = φ_{0,1} f_{t-1} + ... + φ_{0,q} f_{t-q} + u_{0,t},   u_{0,t} ~ N(0, Σ_0).
This can be put into our state space form (in which the errors
in the observation equation are iid) by quasi-differencing the
observer equation.
Observer Equation
Quasi-differencing:

[1 - φ_i(L)L] y_{i,t} = [1 - φ_i(1)] a_i + λ_i' [1 - φ_i(L)L] f_t + u_{i,t},

where the left side is ỹ_{i,t} and the first right-side term is the constant. Then,

a = [ [1 - φ_1(1)] a_1 ; ... ; [1 - φ_n(1)] a_n ],
ỹ_t = ( ỹ_{1,t} ; ... ; ỹ_{n,t} ),   F_t = ( f_t ; ... ; f_{t-p} ),

H = [ λ_1'  -λ_1' φ_{1,1}  ...  -λ_1' φ_{1,p} ]
    [  ...       ...       ...       ...      ]
    [ λ_n'  -λ_n' φ_{n,1}  ...  -λ_n' φ_{n,p} ],
u_t = ( u_{1,t} ; ... ; u_{n,t} ),

ỹ_t = a + H F_t + u_t
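Building the stacked intercept a and loading matrix H from the slide above is mechanical; a minimal sketch (the function name is hypothetical):

```python
import numpy as np

def build_observer_matrices(a, lam, phi):
    """Stack the quasi-differenced observer equation y~_t = a + H F_t + u_t.

    a   : (n,) intercepts a_i
    lam : (n, kappa) factor loadings lambda_i
    phi : (n, p) AR coefficients of the measurement errors
    Returns (a_vec, H) with H of shape (n, kappa*(p+1)).
    """
    n, kappa = lam.shape
    p = phi.shape[1]
    a_vec = (1.0 - phi.sum(axis=1)) * a   # [1 - phi_i(1)] a_i
    # Row i of H is [lam_i', -lam_i' phi_{i,1}, ..., -lam_i' phi_{i,p}]
    blocks = [lam] + [-phi[:, [j]] * lam for j in range(p)]
    return a_vec, np.hstack(blocks)
```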
Law of Motion of the State
Here, the state is denoted by F_t.
Law of motion:

( f_t     )   [ φ_{0,1}  φ_{0,2}  ...  φ_{0,q}  0_{κ×κ(p+1-q)} ] ( f_{t-1}   )   ( u_{0,t} )
( f_{t-1} )   [ I_κ      0_κ      ...  0_κ      0_{κ×κ(p+1-q)} ] ( f_{t-2}   )   ( 0_{κ×1} )
( f_{t-2} ) = [ 0_κ      I_κ      ...  0_κ      0_{κ×κ(p+1-q)} ] ( f_{t-3}   ) + ( 0_{κ×1} )
(  ...    )   [ ...      ...      ...  ...      ...            ] (  ...      )   (  ...    )
( f_{t-p} )   [ 0_κ      0_κ      ...  I_κ      0_{κ×κ(p+1-q)} ] ( f_{t-1-p} )   ( 0_{κ×1} )

LoM:

F_t = Φ F_{t-1} + u_t,   u_t ~ N( 0_{κ(p+1)×1}, V_{(p+1)κ×(p+1)κ} ).
State Space Representation of the Factors
Observer equation:
ỹ_t = a + H F_t + u_t.
State equation:
F_t = Φ F_{t-1} + u_t.