
Dynamic Factor Models and Factor Augmented Vector Autoregressions

Lawrence J. Christiano
Problem:
– the time series dimension of the data is relatively short.
– the number of time series variables is huge.
DFMs and FAVARs take the position:
– there are many variables and, hence, shocks,
– but the principal driving force of all the variables may be just a small number of shocks.
The factor view has a long-standing history in macro.
– It is almost the definition of macroeconomics: a handful of shocks - demand, supply, etc. - are the principal economic drivers.
– Sargent and Sims: only two shocks can explain a large fraction of the variance of US macroeconomic data.
1977, “Business Cycle Modeling Without Pretending to Have Too Much A Priori Economic Theory,” in New Methods in Business Cycle Research, ed. by C. Sims et al., Minneapolis: Federal Reserve Bank of Minneapolis.
Why Work with a Lot of Data?
Estimates of impulse responses to, say, a monetary policy shock may be distorted by not having enough data in the analysis (Bernanke et al. (QJE, 2005)).
– Price puzzle: measures of inflation tend to show a transitory rise after a monetary policy tightening shock in standard (small-sized) VARs.
One interpretation: the monetary authority responds to a signal about future inflation that is captured in data not included in a standard, small-sized VAR.
May suppose that ‘core inflation’ is a factor that can only be deduced from a large number of different data series.
May want to know (as in Sargent and Sims) whether the data for one country or a collection of countries can be characterized as the dynamic response to a few factors.
Outline

Describe the Dynamic Factor Model.
– Identification problem and one possible solution.
Derive the likelihood of the data and the factors.
Describe priors and the joint distribution of data, factors and parameters.
Go for the posterior distribution of parameters and factors.
– Gibbs sampling, a type of MCMC algorithm.
– Metropolis-Hastings could be used here, but would be very inefficient.
– Gibbs exploits the power of the Kalman smoother algorithm and the type of fast ‘direct sampling’ done with BVARs.
FAVAR
Dynamic Factor Model
Let $Y_t$ denote an $n \times 1$ vector of observed data.
$Y_t$ is related to $\kappa \ll n$ unobserved factors, $f_t$, by the measurement (or observer) equation:
$$y_{i,t} = a_i + \lambda_i' f_t + \xi_{i,t},$$
where $\lambda_i$ is a $\kappa \times 1$ vector of factor loadings and $\xi_{i,t}$ is the idiosyncratic component of $y_{i,t}$.
Law of motion of the factors:
$$f_t = \phi_{0,1} f_{t-1} + \dots + \phi_{0,q} f_{t-q} + u_{0,t}, \quad u_{0,t} \sim N(0, \Sigma_0).$$
Idiosyncratic shock to $y_{i,t}$ (‘measurement error’):
$$\xi_{i,t} = \phi_{i,1} \xi_{i,t-1} + \dots + \phi_{i,p_i} \xi_{i,t-p_i} + u_{i,t}, \quad u_{i,t} \sim N(0, \sigma_i^2).$$
$u_{i,t}$, $i = 0, \dots, n$, are drawn independently from each other and over time.
For convenience: $p_i = p$ for all $i$, and $q \le p + 1$. (A simulation sketch of this model follows.)
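As a concrete illustration, here is a minimal simulation sketch of this model in Python/NumPy; the dimensions (n = 10, κ = 2, q = p = 1) and all parameter values are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n, kappa, T = 10, 2, 200

a = rng.normal(size=n)                      # intercepts a_i
Lam = rng.normal(size=(n, kappa))           # factor loadings lambda_i'
phi0 = 0.8 * np.eye(kappa)                  # phi_{0,1}: factor AR(1)
phi = rng.uniform(0.2, 0.6, size=n)         # phi_{i,1}: idiosyncratic AR(1)
sig = rng.uniform(0.5, 1.0, size=n)         # sigma_i

f = np.zeros((T, kappa))                    # factors, with Sigma_0 = I
xi = np.zeros((T, n))                       # idiosyncratic components
for t in range(1, T):
    f[t] = phi0 @ f[t - 1] + rng.standard_normal(kappa)      # u_{0,t}
    xi[t] = phi * xi[t - 1] + sig * rng.standard_normal(n)   # u_{i,t}

Y = a + f @ Lam.T + xi                      # y_{i,t} = a_i + lambda_i' f_t + xi_{i,t}
print(Y.shape)                              # (T, n)
```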
Notation for Observer Equation

Observer equation:
$$y_{i,t} = a_i + \lambda_i' f_t + \xi_{i,t}$$
$$\xi_{i,t} = \phi_{i,1} \xi_{i,t-1} + \dots + \phi_{i,p_i} \xi_{i,t-p_i} + u_{i,t}, \quad u_{i,t} \sim N(0, \sigma_i^2).$$
Let $\theta_i$ denote the parameters of the $i$th observer equation:
$$\theta_i = \begin{bmatrix} \sigma_i^2 \\ a_i \\ \lambda_i \\ \phi_i \end{bmatrix}, \quad \phi_i = \begin{bmatrix} \phi_{i,1} \\ \vdots \\ \phi_{i,p} \end{bmatrix}, \quad i = 1, \dots, n,$$
where $\theta_i$ is $(2 + \kappa + p) \times 1$.
Notation for Law of Motion of Factors

Factors:
$$f_t = \phi_{0,1} f_{t-1} + \dots + \phi_{0,q} f_{t-q} + u_{0,t}, \quad u_{0,t} \sim N(0, \Sigma_0).$$
Let $\theta_0$ denote the parameters of the factors:
$$\theta_0 = \begin{bmatrix} \Sigma_0 & \phi_0' \end{bmatrix}, \quad \phi_0 = \begin{bmatrix} \phi_{0,1} \\ \vdots \\ \phi_{0,q} \end{bmatrix},$$
where $\theta_0$ is $\kappa \times (q+1)\kappa$ and $\phi_0$ is $\kappa q \times \kappa$.
All model parameters:
$$\theta = [\theta_0, \theta_1, \dots, \theta_n].$$
Identification Problem in DFM
DFM:
$$y_{i,t} = a_i + \lambda_i' f_t + \xi_{i,t}$$
$$f_t = \phi_{0,1} f_{t-1} + \dots + \phi_{0,q} f_{t-q} + u_{0,t}, \quad u_{0,t} \sim N(0, \Sigma_0)$$
$$\xi_{i,t} = \phi_{i,1} \xi_{i,t-1} + \dots + \phi_{i,p} \xi_{i,t-p} + u_{i,t}.$$
Suppose $H$ is an arbitrary invertible $\kappa \times \kappa$ matrix.
– The above system is observationally equivalent to:
$$y_{i,t} = a_i + \tilde{\lambda}_i' \tilde{f}_t + \xi_{i,t}$$
$$\tilde{f}_t = \tilde{\phi}_{0,1} \tilde{f}_{t-1} + \dots + \tilde{\phi}_{0,q} \tilde{f}_{t-q} + \tilde{u}_{0,t}, \quad \tilde{u}_{0,t} \sim N\left(0, \tilde{\Sigma}_0\right),$$
where
$$\tilde{f}_t = H f_t, \quad \tilde{\lambda}_i' = \lambda_i' H^{-1}, \quad \tilde{\phi}_{0,j} = H \phi_{0,j} H^{-1}, \quad \tilde{\Sigma}_0 = H \Sigma_0 H'.$$
It is desirable to restrict the model parameters so that there is no change of parameters that leaves the system observationally equivalent, yet has all different factors and parameter values. (See the sketch below.)
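A quick numerical check of this equivalence, assuming a hypothetical κ = 2 and a random invertible H: the common component λ_i' f_t, and hence the distribution of the data, is unchanged by the rotation.

```python
import numpy as np

rng = np.random.default_rng(1)
kappa = 2
H = rng.normal(size=(kappa, kappa))         # any invertible kappa x kappa
Hinv = np.linalg.inv(H)

f_t = rng.normal(size=kappa)                # some factor value
lam_i = rng.normal(size=kappa)              # loadings for series i
Sigma0 = np.eye(kappa)

f_tilde = H @ f_t                           # f~_t = H f_t
lam_tilde = Hinv.T @ lam_i                  # lambda~_i' = lambda_i' H^{-1}
Sigma0_tilde = H @ Sigma0 @ H.T             # Sigma~_0 = H Sigma_0 H'

# The common component lambda_i' f_t is unchanged, so the data cannot
# distinguish (f, lambda, Sigma_0) from (f~, lambda~, Sigma~_0):
print(np.allclose(lam_i @ f_t, lam_tilde @ f_tilde))   # True
```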
Geweke-Zhou (1996) Identification
Note that for any model parameterization, one can always choose an $H$ so that $\Sigma_0 = I_\kappa$.
– Find $C$ such that $CC' = \Sigma_0$ (there is a continuum of these), and set $H = C^{-1}$.
Geweke-Zhou (1996) suggest the identifying assumption $\Sigma_0 = I_\kappa$.
– But this is not enough to achieve identification.
– There exists a continuum of orthonormal matrices with the property $CC' = I_\kappa$.
Simple example: for $\kappa = 2$, for each $\omega \in [-\pi, \pi]$,
$$C = \begin{bmatrix} \cos(\omega) & \sin(\omega) \\ -\sin(\omega) & \cos(\omega) \end{bmatrix}, \quad 1 = \cos^2(\omega) + \sin^2(\omega).$$
– For each $C$, set $H = C^{-1} = C'$. That produces an observationally equivalent alternative parameterization, while leaving intact the normalization $\Sigma_0 = I_\kappa$, since
$$H \Sigma_0 H' = C' C = C^{-1} C = I_\kappa.$$
Geweke-Zhou (1996) Identification
Write:
$$\Lambda = \begin{bmatrix} \lambda_1' \\ \vdots \\ \lambda_\kappa' \\ \lambda_{\kappa+1}' \\ \vdots \\ \lambda_n' \end{bmatrix} = \begin{bmatrix} \Lambda_{1,\kappa} \\ \Lambda_{2,\kappa} \end{bmatrix},$$
where $\Lambda_{1,\kappa}$ is the top $\kappa \times \kappa$ block.
Geweke-Zhou also require that $\Lambda_{1,\kappa}$ be lower triangular.
– Then, in the simple example, the only orthonormal matrices $C$ that preserve a lower triangular $\Lambda_{1,\kappa}$ are themselves lower triangular (i.e., $\sin(\omega) = 0$, so $\cos(\omega) = \pm 1$).
Geweke-Zhou resolve the identification problem with one last assumption: the diagonal elements of $\Lambda_{1,\kappa}$ are non-negative (i.e., $\cos(\omega) = 1$ in the example).
Geweke-Zhou (1996) Identification

Identifying restrictions: $\Lambda_{1,\kappa}$ is lower triangular, $\Sigma_0 = I_\kappa$.
– Only the first factor, $f_{1,t}$, affects the first variable, $y_{1,t}$.
– Only $f_{1,t}$ and $f_{2,t}$ affect $y_{2,t}$, etc.
The ordering of the $y_{i,t}$ affects the interpretation of the factors.
Alternative identifications:
– $\Sigma_0$ diagonal and diagonal elements of $\Lambda_{1,\kappa}$ equal to unity.
– $\Sigma_0$ unrestricted (positive definite) and $\Lambda_{1,\kappa} = I_\kappa$.
Next:

Move in the direction of using the data to obtain the posterior distribution of parameters and factors.
Start by going after the likelihood.
Likelihood of Data and Factors
System, $i = 1, \dots, n$:
$$y_{i,t} = a_i + \lambda_i' f_t + \xi_{i,t}$$
$$f_t = \phi_{0,1} f_{t-1} + \dots + \phi_{0,q} f_{t-q} + u_{0,t}, \quad u_{0,t} \sim N(0, \Sigma_0)$$
$$\xi_{i,t} = \phi_{i,1} \xi_{i,t-1} + \dots + \phi_{i,p} \xi_{i,t-p} + u_{i,t}.$$
Define:
$$\phi_i(L) = \phi_{i,1} + \dots + \phi_{i,p} L^{p-1}, \quad L x_t \equiv x_{t-1}.$$
Then, the quasi-differenced observer equation is:
$$[1 - \phi_i(L) L] y_{i,t} = [1 - \phi_i(1)] a_i + \lambda_i' [1 - \phi_i(L) L] f_t + \underbrace{[1 - \phi_i(L) L] \xi_{i,t}}_{u_{i,t}}.$$
(A numerical check of this quasi-differencing step follows.)
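The following sketch verifies the quasi-differencing identity numerically for a single series with p = 1; all parameter values are illustrative assumptions. Applying $[1 - \phi_i(L)L]$ to both sides leaves only the iid error $u_{i,t}$.

```python
import numpy as np

rng = np.random.default_rng(2)
T, phi_i, a_i, lam_i, sig_i = 200, 0.5, 1.0, 2.0, 0.3

f = rng.standard_normal(T)                  # pretend factors are known
u = sig_i * rng.standard_normal(T)          # iid u_{i,t}
xi = np.zeros(T)
for t in range(1, T):
    xi[t] = phi_i * xi[t - 1] + u[t]        # AR(1) measurement error
y = a_i + lam_i * f + xi

# Quasi-differenced equation:
# y_t - phi_i y_{t-1} = (1 - phi_i) a_i + lam_i (f_t - phi_i f_{t-1}) + u_t
lhs = y[1:] - phi_i * y[:-1]
rhs = (1 - phi_i) * a_i + lam_i * (f[1:] - phi_i * f[:-1])
print(np.allclose(lhs - rhs, u[1:]))        # True: residual is iid u_{i,t}
```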
Likelihood of Data and of Factors

Quasi-differenced observer equation:
$$y_{i,t} = \phi_i(L) y_{i,t-1} + [1 - \phi_i(1)] a_i + \lambda_i' [1 - \phi_i(L) L] f_t + u_{i,t}.$$
Consider the MATLAB-style notation:
$$x_{t_1:t_2} \equiv x_{t_1}, \dots, x_{t_2}.$$
Note: $y_{i,t}$, conditional on $y_{i,t-p:t-1}$, $f_{t-p:t}$, $\theta_i$, is Normal:
$$p\left(y_{i,t} \mid y_{i,t-p:t-1}, f_{t-p:t}, \theta_i\right) = N\left(\phi_i(L) y_{i,t-1} + [1 - \phi_i(1)] a_i + \lambda_i' [1 - \phi_i(L) L] f_t, \ \sigma_i^2\right).$$
Likelihood of Data and of Factors
Independence of the $u_{i,t}$’s implies the conditional density of $Y_t = [y_{1,t}, \dots, y_{n,t}]'$:
$$\prod_{i=1}^n p\left(y_{i,t} \mid y_{i,t-p:t-1}, f_{t-p:t}, \theta_i\right).$$
Density of $f_t$ conditional on $f_{t-q:t-1}$:
$$p\left(f_t \mid f_{t-q:t-1}, \theta_0\right).$$
Conditional joint density of $Y_t$, $f_t$:
$$\left[\prod_{i=1}^n p\left(y_{i,t} \mid y_{i,t-p:t-1}, f_{t-p:t}, \theta_i\right)\right] p\left(f_t \mid f_{t-q:t-1}, \theta_0\right).$$
Likelihood of Data and of Factors
Likelihood of $Y_{p+1:T}$, $f_{p+1:T}$, conditional on initial conditions:
$$p\left(Y_{p+1:T}, f_{p+1:T} \mid Y_{1:p}, f_{p-q:p}, \theta\right) = \prod_{t=p+1}^T \left[\prod_{i=1}^n p\left(y_{i,t} \mid y_{i,t-p:t-1}, f_{t-p:t}, \theta_i\right)\right] p\left(f_t \mid f_{t-q:t-1}, \theta_0\right).$$
Likelihood of the initial conditions:
$$p\left(Y_{1:p}, f_{p-q+1:p} \mid \theta\right) = p\left(Y_{1:p} \mid f_{p-q+1:p}, \theta\right) p\left(f_{p-q+1:p} \mid \theta_0\right).$$
Likelihood of $Y_{1:T}$, $f_{p-q:T}$ conditional on the parameters only, $\theta$:
$$\prod_{t=p+1}^T \left[\prod_{i=1}^n p\left(y_{i,t} \mid y_{i,t-p:t-1}, f_{t-p:t}, \theta_i\right)\right] p\left(f_t \mid f_{t-q:t-1}, \theta_0\right) \times p\left(Y_{1:p} \mid f_{p-q+1:p}, \theta_i, i = 1, \dots, n\right) p\left(f_{p-q+1:p} \mid \theta_0\right).$$
Joint Density of Data, Factors and Parameters
Parameter priors: $p(\theta_i)$, $i = 0, \dots, n$.
Joint density of $Y_{1:T}$, $f_{p-q:T}$, $\theta$:
$$\prod_{t=p+1}^T p\left(f_t \mid f_{t-q:t-1}, \theta_0\right) \prod_{i=1}^n p\left(y_{i,t} \mid y_{i,t-p:t-1}, f_{t-p:t}, \theta_i\right) \times \left[\prod_{i=1}^n p\left(y_{i,1:p} \mid f_{p-q+1:p}, \theta\right) p(\theta_i)\right] p\left(f_{p-q+1:p} \mid \theta_0\right) p(\theta_0).$$
From here on, drop the density of the initial observations.
– If $T$ is not too small, this has no effect on the results.
– The BVAR lecture notes describe an example of how not to ignore initial conditions; for a general discussion, see Del Negro and Otrok (forthcoming, REStat, “Dynamic Factor Models with Time-Varying Parameters: Measuring Changes in International Business Cycles”).
Outline

Describe the Dynamic Factor Model. (done!)
– Identification problem and one possible solution.
Derive the likelihood of the data and the factors. (done!)
Describe priors and the joint distribution of data, factors and parameters. (done!)
Go for the posterior distribution of parameters and factors.
– Gibbs sampling, a type of MCMC algorithm.
– Metropolis-Hastings could be used here, but would be very inefficient.
– Gibbs exploits the power of the Kalman smoother algorithm and the type of fast ‘direct sampling’ done with BVARs.
FAVAR
Gibbs Sampling

The idea is similar to what we did with the Metropolis-Hastings algorithm.
Gibbs Sampling versus Metropolis-Hastings
Metropolis-Hastings: we needed to compute the posterior distribution of the parameters, $\theta$, conditional on the data.
– Output of the Metropolis-Hastings algorithm: a sequence of values of $\theta$ whose distribution corresponds to the posterior distribution of $\theta$ given the data:
$$P = \left[\theta^{(1)}, \dots, \theta^{(M)}\right].$$
Gibbs sampling algorithm: a sequence of values of the DFM model parameters, $\theta$, and unobserved factors, $f$, whose distribution corresponds to the posterior distribution conditional on the data:
$$P = \begin{bmatrix} \theta^{(1)} & \dots & \theta^{(M)} \\ f^{(1)} & \dots & f^{(M)} \end{bmatrix}.$$
Histograms of the elements in individual rows of $P$ represent the marginal distribution of the corresponding parameter or factor.
Gibbs Sampling Algorithm

Computes the sequence:
$$P = \begin{bmatrix} \theta^{(1)} & \dots & \theta^{(M)} \\ f^{(1)} & \dots & f^{(M)} \end{bmatrix} = [P_1, \dots, P_M].$$
Given $P_{s-1}$, compute $P_s$ in two steps (see the skeleton sketch below):
– Step 1: draw $\theta^{(s)}$ given $P_{s-1}$ (direct sampling, using the approach for BVARs).
– Step 2: draw $f^{(s)}$ given $\theta^{(s)}$ (direct sampling, based on information from the Kalman smoother).
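In code, the algorithm is just an alternation of two conditional draws. A minimal skeleton, assuming hypothetical helper functions draw_theta and draw_factors that implement the two steps detailed in the slides that follow:

```python
def gibbs_sampler(Y, theta_init, f_init, draw_theta, draw_factors, M):
    """Return [(theta^(1), f^(1)), ..., (theta^(M), f^(M))]."""
    theta, f = theta_init, f_init
    P = []
    for s in range(M):
        theta = draw_theta(Y, f)       # Step 1: theta^(s) | f^(s-1), Y
        f = draw_factors(Y, theta)     # Step 2: f^(s)     | theta^(s), Y
        P.append((theta, f))
    return P                           # rows of P -> marginal posteriors
```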
Step 1: Drawing Model Parameters

Parameters, $\theta$:
– observer equation: $a_i$, $\lambda_i$
– measurement error: $\sigma_i^2$, $\phi_i$
– law of motion of factors: $\phi_0$,
where the identification $\Sigma_0 = I$ is imposed.
– The algorithm must be adjusted if some other identification is used.
For each $i$:
– Draw $a_i$, $\lambda_i$, $\sigma_i^2$ from the Normal-Inverse Wishart, conditional on the $\phi_i^{(s-1)}$’s.
– Draw $\phi_i$ from the Normal, given $a_i$, $\lambda_i$, $\sigma_i^2$.
Drawing Observer Equation Parameters and Measurement Error Variance
The joint density of $Y_{1:T}$, $f_{p-q:T}$, $\theta$:
$$\prod_{t=p+1}^T p\left(f_t \mid f_{t-q:t-1}, \theta_0\right) \prod_{i=1}^n p\left(y_{i,t} \mid y_{i,t-p:t-1}, f_{t-p:t}, \theta_i\right) \times p(\theta_0) \prod_{i=1}^n p(\theta_i),$$
was derived earlier (but we have now dropped the densities associated with the initial conditions).
Recall,
$$p(A \mid B) = \frac{p(A, B)}{p(B)} = \frac{p(A, B)}{\int_A p(A, B) \, dA}.$$
Drawing Observer Equation Parameters and Measurement Error Variance
The conditional density of $\theta_i$ is obtained by dividing the joint density by itself, after integrating out $\theta_i$:
$$p\left(\theta_i \mid Y_{1:T}, f_{p-q:T}, \theta_{j, j \ne i}\right) = \frac{p\left(Y_{1:T}, f_{p-q:T}, \theta\right)}{\int_{\theta_i} p\left(Y_{1:T}, f_{p-q:T}, \theta\right) d\theta_i} \propto p(\theta_i) \prod_{t=p+1}^T p\left(y_{i,t} \mid y_{i,t-p:t-1}, f_{t-p:t}, \theta_i\right).$$
Here, we have taken into account that the numerator and denominator have many common terms.
We want to draw $\theta_i^{(s)}$ from this posterior distribution for $\theta_i$.
Gibbs sampling procedure:
– First, draw $a_i$, $\lambda_i$, $\sigma_i^2$, taking the other elements of $\theta_i$ from $\theta_i^{(s-1)}$.
– Then, draw the other elements of $\theta_i$ taking $a_i$, $\lambda_i$, $\sigma_i^2$ as given.
Drawing Observer Equation Parameters and Measurement Error Variance
The quasi-differenced observer equation:
$$\underbrace{y_{i,t} - \phi_i(L) y_{i,t-1}}_{\tilde{y}_{i,t}} = (1 - \phi_i(1)) a_i + \lambda_i' \underbrace{[1 - \phi_i(L) L] f_t}_{\tilde{f}_{i,t}} + u_{i,t},$$
or,
$$\tilde{y}_{i,t} = [1 - \phi_i(1)] a_i + \lambda_i' \tilde{f}_{i,t} + u_{i,t}.$$
Let
$$A_i = \begin{bmatrix} a_i \\ \lambda_i \end{bmatrix}, \quad x_{i,t} = \begin{bmatrix} 1 - \phi_i(1) \\ \tilde{f}_{i,t} \end{bmatrix},$$
so
$$\tilde{y}_{i,t} = A_i' x_{i,t} + u_{i,t},$$
where $\tilde{y}_{i,t}$ and $x_{i,t}$ are known, conditional on $\phi_i^{(s-1)}$.
Drawing Observer Equation Parameters and Measurement Error Variance
From the Normality of the observer equation error:
$$p\left(y_{i,t} \mid y_{i,t-p:t-1}, f_{t-p:t}, \theta_i\right) \propto \frac{1}{\sigma_i} \exp\left\{-\frac{1}{2} \frac{\left(y_{i,t} - \phi_i(L) y_{i,t-1} - A_i' x_{i,t}\right)^2}{\sigma_i^2}\right\} = \frac{1}{\sigma_i} \exp\left\{-\frac{1}{2} \frac{\left(\tilde{y}_{i,t} - A_i' x_{i,t}\right)^2}{\sigma_i^2}\right\}.$$
Then,
$$\prod_{t=p+1}^T p\left(y_{i,t} \mid y_{i,t-p:t-1}, f_{t-p:t}, \theta_i\right) \propto \frac{1}{\sigma_i^{T-p}} \exp\left\{-\frac{1}{2} \sum_{t=p+1}^T \frac{\left(\tilde{y}_{i,t} - A_i' x_{i,t}\right)^2}{\sigma_i^2}\right\}.$$
Drawing Observer Equation Parameters and Measurement Error Variance
As in the BVAR analysis, express this in matrix terms:
$$p\left(y_i \mid y_{i,1:p}, f_{1:T}, \theta_i\right) \propto \frac{1}{\sigma_i^{T-p}} \exp\left\{-\frac{1}{2} \sum_{t=p+1}^T \frac{\left(\tilde{y}_{i,t} - A_i' x_{i,t}\right)^2}{\sigma_i^2}\right\} = \frac{1}{\sigma_i^{T-p}} \exp\left\{-\frac{1}{2} \frac{[y_i - X_i A_i]'[y_i - X_i A_i]}{\sigma_i^2}\right\},$$
where
$$f_{p+1:T} = f^{(s-1)}, \quad y_i = \begin{bmatrix} \tilde{y}_{i,p+1} \\ \vdots \\ \tilde{y}_{i,T} \end{bmatrix}, \quad X_i = \begin{bmatrix} x_{i,p+1}' \\ \vdots \\ x_{i,T}' \end{bmatrix},$$
with $f_{p-q:p}$ fixed (could be set to the unconditional mean of zero).
Note: the calculations are conditional on the factors, $f^{(s-1)}$, from the previous Gibbs sampling iteration.
Including Dummy Observations
As in the BVAR analysis, $\bar{T}$ dummy observations are one way to represent the priors, $p(\theta_i)$:
$$p(\theta_i) \prod_{t=p+1}^T p\left(y_{i,t} \mid y_{i,t-p:t-1}, f_{t-p:t}, \theta_i\right).$$
Dummy observations (one can include the restriction that $\Lambda_{1,\kappa}$ is lower triangular by suitable construction of the dummies):
$$\bar{y}_i = \bar{X}_i A_i + \bar{U}_i, \quad \bar{U}_i = \begin{bmatrix} u_{i,1} \\ \vdots \\ u_{i,\bar{T}} \end{bmatrix}.$$
Stack the dummies with the actual data (see the stacking sketch below):
$$\underline{y}_i = \begin{bmatrix} y_i \\ \bar{y}_i \end{bmatrix}, \quad \underline{X}_i = \begin{bmatrix} X_i \\ \bar{X}_i \end{bmatrix},$$
where $\underline{y}_i$ is $(T - p + \bar{T}) \times 1$ and $\underline{X}_i$ is $(T - p + \bar{T}) \times (1 + \kappa)$.
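A minimal stacking sketch, under the assumption that the prior on $A_i$ is Normal with an illustrative scale τ: a prior $A_i \sim N(0, \tau^2 \sigma_i^2 I)$ corresponds to the dummies $\bar{y}_i = 0$, $\bar{X}_i = I/\tau$. The fake data below stand in for the quasi-differenced data and regressors.

```python
import numpy as np

rng = np.random.default_rng(3)
kappa, tau = 2, 10.0
y_i = rng.normal(size=(50, 1))              # stands in for stacked y~_{i,t}
X_i = rng.normal(size=(50, 1 + kappa))      # stands in for stacked x_{i,t}'

X_bar = np.eye(1 + kappa) / tau             # T-bar = 1 + kappa dummy rows
y_bar = np.zeros((1 + kappa, 1))

y_star = np.vstack([y_i, y_bar])            # (T - p + T-bar) x 1
X_star = np.vstack([X_i, X_bar])            # (T - p + T-bar) x (1 + kappa)
```

Stacking these dummies adds $I/\tau^2$ to $\underline{X}_i'\underline{X}_i$, i.e., it contributes exactly the prior precision.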
Including Dummy Observations
As in the BVAR:
$$p\left(y_i \mid y_{i,1:p}, f_{1:T}, \theta_i\right) p\left(\lambda_i, a_i \mid \sigma_i^2\right) \propto \frac{1}{\sigma_i^{T+\bar{T}-p}} \exp\left\{-\frac{1}{2} \frac{\left[\underline{y}_i - \underline{X}_i A_i\right]'\left[\underline{y}_i - \underline{X}_i A_i\right]}{\sigma_i^2}\right\}$$
$$= \frac{1}{\sigma_i^{T+\bar{T}-p}} \exp\left\{-\frac{1}{2} \frac{S + (A_i - \hat{A}_i)' \underline{X}_i' \underline{X}_i (A_i - \hat{A}_i)}{\sigma_i^2}\right\}$$
$$= \frac{1}{\sigma_i^{T+\bar{T}-p}} \exp\left\{-\frac{1}{2} \frac{S}{\sigma_i^2}\right\} \exp\left\{-\frac{1}{2} \frac{(A_i - \hat{A}_i)' \underline{X}_i' \underline{X}_i (A_i - \hat{A}_i)}{\sigma_i^2}\right\},$$
where
$$S = \left[\underline{y}_i - \underline{X}_i \hat{A}_i\right]' \left[\underline{y}_i - \underline{X}_i \hat{A}_i\right], \quad \hat{A}_i = \left(\underline{X}_i' \underline{X}_i\right)^{-1} \underline{X}_i' \underline{y}_i.$$
Inverse Wishart Distribution
Scalar version of the Inverse Wishart distribution (i.e., $m = 1$ in the BVAR discussion):
$$p\left(\sigma_i^2\right) = \frac{|\bar{S}|^{\nu/2}}{2^{\nu/2} \Gamma\left(\frac{\nu}{2}\right)} \left(\sigma_i^2\right)^{-\frac{\nu+2}{2}} \exp\left\{-\frac{\bar{S}}{2\sigma_i^2}\right\},$$
with degrees of freedom $\nu$ and shape $\bar{S}$ ($\Gamma$ denotes the Gamma function).
It is easy to verify (after collecting terms) that
$$p\left(y_i \mid y_{i,1:p}, f_{1:T}, \theta_i\right) p\left(\lambda_i, a_i \mid \sigma_i^2\right) p\left(\sigma_i^2\right) = N\left(\hat{A}_i, \sigma_i^2 \left(\underline{X}_i' \underline{X}_i\right)^{-1}\right) \times IW\left(\nu + T - p + \bar{T} - (\kappa + 1), \ \bar{S} + S\right).$$
Direct sampling from the posterior distribution (see the sketch below):
– Draw $\sigma_i^2$ from the $IW$. Then, draw $A_i$ from the $N$, given $\sigma_i^2$.
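A sketch of this direct-sampling step, using the stacked (y_star, X_star) from the previous sketch; the scalar IW draw is implemented as the scale divided by a chi-squared variate, and nu and S_bar are the prior degrees of freedom and shape.

```python
import numpy as np

def draw_Ai_sigma2(y_star, X_star, nu, S_bar, rng):
    T_eff, k = X_star.shape                   # T - p + T-bar, 1 + kappa
    XtX = X_star.T @ X_star
    A_hat = np.linalg.solve(XtX, X_star.T @ y_star)   # OLS on stacked data
    resid = y_star - X_star @ A_hat
    S = float(resid.T @ resid)
    # posterior: IW(nu + T - p + T-bar - (kappa + 1), S_bar + S)
    sigma2 = (S_bar + S) / rng.chisquare(nu + T_eff - k)
    # A_i | sigma2 ~ N(A_hat, sigma2 (X'X)^{-1})
    A_i = rng.multivariate_normal(A_hat.ravel(), sigma2 * np.linalg.inv(XtX))
    return A_i, sigma2
```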
Draw Distributed Lag Coefficients in Measurement Error Law of Motion
Given $\lambda_i$, $a_i$, $\sigma_i^2$, draw $\phi_i$.
Observer equation and measurement error process:
$$y_{i,t} = a_i + \lambda_i' f_t + \xi_{i,t}$$
$$\xi_{i,t} = \phi_{i,1} \xi_{i,t-1} + \dots + \phi_{i,p} \xi_{i,t-p} + u_{i,t}.$$
Conditional on $a_i$, $\lambda_i$ and the factors, $\xi_{i,t}$ can be computed from
$$\xi_{i,t} = y_{i,t} - a_i - \lambda_i' f_t,$$
so the measurement error law of motion can be written
$$\xi_{i,t} = A_i' x_{i,t} + u_{i,t}, \quad A_i = \phi_i = \begin{bmatrix} \phi_{i,1} \\ \vdots \\ \phi_{i,p} \end{bmatrix}, \quad x_{i,t} = \begin{bmatrix} \xi_{i,t-1} \\ \vdots \\ \xi_{i,t-p} \end{bmatrix}.$$
Draw Distributed Lag Coefficients in Measurement Error Law of Motion
The likelihood of $\xi_{i,t}$ conditional on $x_{i,t}$ is
$$p\left(\xi_{i,t} \mid x_{i,t}, \phi_i, \sigma_i^2\right) = N\left(A_i' x_{i,t}, \sigma_i^2\right) = \frac{1}{\sigma_i} \exp\left\{-\frac{1}{2} \frac{\left(\xi_{i,t} - A_i' x_{i,t}\right)^2}{\sigma_i^2}\right\},$$
where $\sigma_i^2$, drawn previously, is for present purposes treated as known.
Then, the likelihood of $\xi_{i,p+1}, \dots, \xi_{i,T}$ is
$$p\left(\xi_{i,p+1:T} \mid x_{i,p+1}, \phi_i, \sigma_i^2\right) \propto \frac{1}{\sigma_i^{T-p}} \exp\left\{-\frac{1}{2} \sum_{t=p+1}^T \frac{\left(\xi_{i,t} - A_i' x_{i,t}\right)^2}{\sigma_i^2}\right\}.$$
Draw Distributed Lag Coefficients in Measurement Error Law of Motion
$$\sum_{t=p+1}^T \left(\xi_{i,t} - A_i' x_{i,t}\right)^2 = [y_i - X_i A_i]'[y_i - X_i A_i],$$
where
$$y_i = \begin{bmatrix} \xi_{i,p+1} \\ \vdots \\ \xi_{i,T} \end{bmatrix}, \quad X_i = \begin{bmatrix} x_{i,p+1}' \\ \vdots \\ x_{i,T}' \end{bmatrix}.$$
Draw Distributed Lag Coefficients in Measurement Error Law of Motion
If we impose priors by dummies, then
$$p\left(\xi_{i,p+1:T} \mid x_{i,p+1}, \phi_i, \sigma_i^2\right) p(\phi_i) \propto \frac{1}{\sigma_i^{T-p}} \exp\left\{-\frac{1}{2} \frac{\left[\underline{y}_i - \underline{X}_i A_i\right]'\left[\underline{y}_i - \underline{X}_i A_i\right]}{\sigma_i^2}\right\},$$
where $\underline{y}_i$ and $\underline{X}_i$ represent the stacked data that include the dummies.
By Bayes’ rule,
$$p\left(\phi_i \mid \xi_{i,p+1:T}, x_{i,p+1}, \sigma_i^2\right) = N\left(\hat{A}_i, \sigma_i^2 \left(\underline{X}_i' \underline{X}_i\right)^{-1}\right).$$
So, we draw $\phi_i$ from $N\left(\hat{A}_i, \sigma_i^2 \left(\underline{X}_i' \underline{X}_i\right)^{-1}\right)$ (see the sketch below).
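A sketch of the $\phi_i$ draw under a flat prior (dummies representing $p(\phi_i)$ would be stacked exactly as above); it assumes y (length T) and f (T × κ) are NumPy arrays, and sigma2 is the draw from the previous step.

```python
import numpy as np

def draw_phi_i(y, f, a_i, lam_i, sigma2, p, rng):
    xi = y - a_i - f @ lam_i       # xi_{i,t} = y_{i,t} - a_i - lambda_i' f_t
    Y = xi[p:]                     # xi_{i,p+1:T}
    X = np.column_stack([xi[p - j - 1:-j - 1] for j in range(p)])  # lags 1..p
    XtX = X.T @ X
    A_hat = np.linalg.solve(XtX, X.T @ Y)      # OLS estimate of phi_i
    return rng.multivariate_normal(A_hat, sigma2 * np.linalg.inv(XtX))
```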
Draw Parameters in Law of Motion for Factors
Law of motion of the factors:
$$f_t = \phi_{0,1} f_{t-1} + \dots + \phi_{0,q} f_{t-q} + u_{0,t}, \quad u_{0,t} \sim N(0, \Sigma_0).$$
The factors, $f_{p+1:T}$, are treated as known, and they correspond to $f^{(s-1)}$, the factors in the $s-1$ iteration of the Gibbs sampling.
By Bayes’ rule:
$$p\left(\phi_0 \mid f_{p+1:T}\right) \propto p\left(f_{p+1:T} \mid \phi_0\right) p\left(\phi_0\right).$$
The priors can be implemented by dummy variables.
– This is a direct application of the methods developed for inference about the parameters of BVARs.
Draw $\phi_0$ from the $N$.
This Completes Step 1 of Gibbs Sampling

Gibbs sampling computes the sequence:
$$P = \begin{bmatrix} \theta^{(1)} & \dots & \theta^{(M)} \\ f^{(1)} & \dots & f^{(M)} \end{bmatrix} = [P_1, \dots, P_M].$$
Given $P_{s-1}$, compute $P_s$ in two steps.
– Step 1: draw $\theta^{(s)}$ given $P_{s-1}$ (direct sampling).
– Step 2: draw $f^{(s)}$ given $\theta^{(s)}$ (Kalman smoother).
We now have $\theta^{(s)}$, and must now draw the factors.
– This is done using the Kalman smoother.
Drawing the Factors
For this, we will put the DFM in the state-space form used to study Kalman filtering and smoothing.
– In that previous state-space form, the measurement error was assumed to be iid.
– We will make use of the fact that we have all the model parameters.
The DFM:
$$y_{i,t} = a_i + \lambda_i' f_t + \xi_{i,t}$$
$$f_t = \phi_{0,1} f_{t-1} + \dots + \phi_{0,q} f_{t-q} + u_{0,t}, \quad u_{0,t} \sim N(0, \Sigma_0)$$
$$\xi_{i,t} = \phi_{i,1} \xi_{i,t-1} + \dots + \phi_{i,p} \xi_{i,t-p} + u_{i,t}.$$
This can be put into our state-space form (in which the errors in the observation equation are iid) by quasi-differencing the observer equation.
Observer Equation
Quasi-differencing:
$$\underbrace{[1 - \phi_i(L) L] y_{i,t}}_{\tilde{y}_{i,t}} = \underbrace{[1 - \phi_i(1)] a_i}_{\text{constant}} + \lambda_i' [1 - \phi_i(L) L] f_t + u_{i,t}.$$
Then,
$$a = \begin{bmatrix} [1 - \phi_1(1)] a_1 \\ \vdots \\ [1 - \phi_n(1)] a_n \end{bmatrix}, \quad \tilde{y}_t = \begin{bmatrix} \tilde{y}_{1,t} \\ \vdots \\ \tilde{y}_{n,t} \end{bmatrix}, \quad F_t = \begin{bmatrix} f_t \\ \vdots \\ f_{t-p} \end{bmatrix},$$
$$H = \begin{bmatrix} \lambda_1' & -\lambda_1' \phi_{1,1} & \dots & -\lambda_1' \phi_{1,p} \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_n' & -\lambda_n' \phi_{n,1} & \dots & -\lambda_n' \phi_{n,p} \end{bmatrix}, \quad u_t = \begin{bmatrix} u_{1,t} \\ \vdots \\ u_{n,t} \end{bmatrix},$$
$$\tilde{y}_t = a + H F_t + u_t.$$
Law of Motion of the State
Here, the state is denoted by $F_t$.
Law of motion:
$$\begin{bmatrix} f_t \\ f_{t-1} \\ f_{t-2} \\ \vdots \\ f_{t-p} \end{bmatrix} = \begin{bmatrix} \phi_{0,1} & \phi_{0,2} & \dots & \phi_{0,q} & 0_{\kappa \times (p+1-q)\kappa} \\ I_\kappa & 0_\kappa & \dots & 0_\kappa & 0_{\kappa \times (p+1-q)\kappa} \\ 0_\kappa & I_\kappa & \dots & 0_\kappa & 0_{\kappa \times (p+1-q)\kappa} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0_\kappa & 0_\kappa & \dots & I_\kappa & 0_{\kappa \times (p+1-q)\kappa} \end{bmatrix} \begin{bmatrix} f_{t-1} \\ f_{t-2} \\ f_{t-3} \\ \vdots \\ f_{t-1-p} \end{bmatrix} + \begin{bmatrix} u_{0,t} \\ 0_{\kappa \times 1} \\ 0_{\kappa \times 1} \\ \vdots \\ 0_{\kappa \times 1} \end{bmatrix}.$$
LoM (a construction sketch follows):
$$F_t = \Phi F_{t-1} + u_t, \quad u_t \sim N\left(0_{(p+1)\kappa \times 1}, \ V_{(p+1)\kappa \times (p+1)\kappa}\right).$$
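A construction sketch for the companion matrix Φ and the state innovation variance V, with $\Sigma_0 = I_\kappa$ imposed; the function name and signature are illustrative assumptions.

```python
import numpy as np

def companion(phi0_list, kappa, p):
    """phi0_list = [phi_{0,1}, ..., phi_{0,q}], each kappa x kappa, q <= p + 1."""
    q = len(phi0_list)
    dim = (p + 1) * kappa                       # state is (f_t', ..., f_{t-p}')'
    Phi = np.zeros((dim, dim))
    Phi[:kappa, :q * kappa] = np.hstack(phi0_list)   # top block row
    Phi[kappa:, :-kappa] = np.eye(p * kappa)         # identity shift blocks
    V = np.zeros((dim, dim))
    V[:kappa, :kappa] = np.eye(kappa)                # only u_{0,t} is nonzero
    return Phi, V
```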
State Space Representation of the Factors
Observer equation:
$$\tilde{y}_t = a + H F_t + u_t.$$
Law of motion of the state:
$$F_t = \Phi F_{t-1} + u_t.$$
The Kalman smoother provides:
$$p\left(F_j \mid \tilde{y}_1, \dots, \tilde{y}_T\right), \quad j = 1, \dots, T,$$
together with the appropriate second moments.
Use this information to directly sample $f^{(s)}$ from the Kalman-smoother-provided Normal distribution, completing step 2 of the Gibbs sampler (see the sketch below).
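The draw of $F_{1:T}$ can be implemented by a forward Kalman filter followed by backward sampling (a Carter-Kohn-style simulation smoother). The sketch below assumes, for simplicity, a nonsingular state variance V and a crude initialization; with the singular companion-form V above, one would sample only the $f_t$ block at each step. Here R = diag(σ_1², ..., σ_n²) is the observation error variance.

```python
import numpy as np

def ffbs(y_tilde, a, H, R, Phi, V, rng):
    """y_tilde: T x n quasi-differenced data. Returns one draw of F_{1:T};
    the first kappa entries of each row are the sampled f_t."""
    T, dim = y_tilde.shape[0], Phi.shape[0]
    F_f = np.zeros((T, dim))                    # filtered means
    P_f = np.zeros((T, dim, dim))               # filtered variances
    F, P = np.zeros(dim), V.copy()
    for t in range(T):                          # forward filtering pass
        Fp, Pp = Phi @ F, Phi @ P @ Phi.T + V
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)   # Kalman gain
        F = Fp + K @ (y_tilde[t] - a - H @ Fp)
        P = Pp - K @ H @ Pp
        F_f[t], P_f[t] = F, P
    draw = np.zeros((T, dim))
    draw[-1] = rng.multivariate_normal(F_f[-1], P_f[-1])
    for t in range(T - 2, -1, -1):              # backward sampling pass
        Pp = Phi @ P_f[t] @ Phi.T + V
        J = P_f[t] @ Phi.T @ np.linalg.inv(Pp)
        m = F_f[t] + J @ (draw[t + 1] - Phi @ F_f[t])
        C = P_f[t] - J @ Phi @ P_f[t]
        draw[t] = rng.multivariate_normal(m, C)
    return draw
```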
Factor Augmented VARs (FAVAR)
FAVARs are DFMs which more closely resemble macro models.
– There are observables that act like ‘factors’, hitting all variables directly.
– Examples: the interest rate in the monetary policy rule, government spending, taxes, the price of housing, world trade, the international price of oil, uncertainty, etc.
The measurement equation:
$$y_{i,t} = a_i + \gamma_i' y_{0,t} + \lambda_i' f_t + \xi_{i,t}, \quad i = 1, \dots, n, \quad t = 1, \dots, T,$$
where $y_{0,t}$ and $\gamma_i'$ are $m \times 1$ and $1 \times m$ vectors, respectively.
The vectors $y_{0,t}$ and $f_t$ jointly follow a VAR:
$$\begin{bmatrix} f_t \\ y_{0,t} \end{bmatrix} = \Phi_{0,1} \begin{bmatrix} f_{t-1} \\ y_{0,t-1} \end{bmatrix} + \dots + \Phi_{0,q} \begin{bmatrix} f_{t-q} \\ y_{0,t-q} \end{bmatrix} + u_{0,t}, \quad u_{0,t} \sim N(0, \Sigma_0).$$
Literature on FAVARs is Large

Initial paper: Bernanke, Boivin, and Eliasz (2005, QJE), “Measuring the Effects of Monetary Policy: A Factor-Augmented Vector Autoregressive (FAVAR) Approach.”
The intention was to correct problems with conventional VAR-based estimates of the effects of monetary policy shocks.
Include a large number of variables:
– to better capture the actual policy rule of monetary authorities, which look at lots of data in making their decisions.
– so that the FAVAR can be used to obtain a comprehensive picture of the effects of a monetary policy shock on the whole economy.
– Bernanke et al. include 119 variables in their analysis.
Literature on FAVARs is Large
The literature is growing: “Large Bayesian Vector Autoregressions,” Banbura, Giannone, Reichlin (2010, Journal of Applied Econometrics), studies the importance of including sectoral data to get better estimates of impulse response functions to policy shocks and a better estimate of their impact.
DFMs have been taken in interesting directions, more suitable for multicountry settings; see, e.g., Canova and Ciccarelli (2013, ECB Working Paper No. 1507).
Time-varying FAVARs: Eickmeier, Lemke, Marcellino, “Classical time-varying FAVAR models - estimation, forecasting and structural analysis” (2011, Bundesbank Discussion Paper No. 04/2011). They argue that by allowing parameters to change over time, one gets better forecasts and can characterize how the economy is changing.
