Chapter 1
A221 Question 1a/ A172_Q1a) ii)
Explain the meaning of the finite distributed lag model. [5]
Answer: A finite distributed lag model (FDL) assumes a linear relationship between a dependent variable y and the current value and several lags of an
independent variable x. Equation (1) shows a finite distributed lag model of order q.
yt = α + β0xt + β1xt−1 + … + βqxt−q + et    (1)
The coefficient βs is the s-period delay multiplier, and the coefficient β0, the immediate (contemporaneous) impact of a change
in x on y, is the impact multiplier. If x increases by one unit today, the change in y after s periods will be β0 + β1 + … + βs;
this quantity is called the s-period interim multiplier. The total multiplier (also called the long-run propensity or long-run multiplier) is equal to the sum of all the β coefficients in the model.
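As a minimal sketch of these ideas (not taken from the notes), the snippet below simulates data from an order-2 FDL, estimates it by OLS with statsmodels, and reads the impact, interim, and total multipliers off the estimated betas. The true coefficients (0.5, 0.3, 0.2), the sample size, and all variable names are illustrative assumptions.

```python
# Sketch: simulate an order-2 finite distributed lag model and estimate it by OLS.
# The coefficient values (0.5, 0.3, 0.2) and series length are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, q = 200, 2
x = rng.normal(size=n)
e = rng.normal(scale=0.5, size=n)
y = 1.0 + 0.5 * x + 0.3 * np.roll(x, 1) + 0.2 * np.roll(x, 2) + e

# Build the regressor matrix [x_t, x_{t-1}, x_{t-2}], dropping the first q periods
# for which the lags are not available.
X = np.column_stack([x[q:], x[q - 1:-1], x[:-q]])
res = sm.OLS(y[q:], sm.add_constant(X)).fit()

b = res.params[1:]                      # estimated beta_0, beta_1, beta_2
print("impact multiplier      :", b[0])
print("1-period interim mult. :", b[0] + b[1])
print("total (long-run) mult. :", b.sum())
```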
1. Linear in Parameters
yt = β0 + β1xt1 + … + βkxtk + ut
where {ut : t = 1, 2, …, n} is the sequence of errors or disturbances. Here, n is the number of observations (time periods).
In the notation xtj, t denotes the time period, and j is, as usual, a label to indicate one of the k explanatory variables. The terminology used in cross-sectional
regression applies here: yt is the dependent variable, explained variable, or regressand; the xtj are the independent variables, explanatory variables, or
regressors. We should think of Assumption TS.1 as being essentially the same as Assumption MLR.1 (the first cross-sectional assumption), but we are now
specifying a linear model for time series data. The examples covered in Section 10.2 can be cast as (10.8) by appropriately defining xtj. For example, equation
(10.5) is obtained by setting xt1= zt, xt2= zt-1, and xt3= zt-2.
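As an illustrative sketch of that last step (the series z and all column names here are my own assumptions, not from the textbook), the regressors xt1 = zt, xt2 = zt−1, and xt3 = zt−2 can be built with lag (shift) operations in pandas:

```python
# Sketch: casting a model in z_t and its lags into the general form (10.8)
# by defining x_t1 = z_t, x_t2 = z_{t-1}, x_t3 = z_{t-2}. The series z is hypothetical.
import numpy as np
import pandas as pd

df = pd.DataFrame({"z": np.random.default_rng(1).normal(size=8)})
df["x_t1"] = df["z"]            # x_t1 = z_t
df["x_t2"] = df["z"].shift(1)   # x_t2 = z_{t-1}
df["x_t3"] = df["z"].shift(2)   # x_t3 = z_{t-2}
df = df.dropna()                # the first two periods have no valid lags
print(df)
```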
2. No Perfect Collinearity
In the sample (and therefore in the underlying time series process), no independent variable is constant nor a perfect linear combination of the
others. This assumption allows the explanatory variables to be correlated, but it rules out perfect correlation in the sample.
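A small sketch of how perfect collinearity shows up in practice (my own construction, not from the notes): if one regressor is an exact linear combination of the others, the regressor matrix loses rank and OLS has no unique solution.

```python
# Sketch: a regressor that is an exact linear combination of two others
# violates the no-perfect-collinearity assumption; the matrix rank reveals it.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 2))
X = np.column_stack([X, X[:, 0] + 2.0 * X[:, 1]])   # third column is redundant
print("columns:", X.shape[1], "rank:", np.linalg.matrix_rank(X))  # rank 2 < 3 signals perfect collinearity
```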
3. Zero Conditional Mean
Conditional on X, the expected value of the error at time t is zero: E(ut | X) = 0, t = 1, 2, …, n.
Assumption TS.3 implies that the error at time t, ut, is uncorrelated with each explanatory variable in every time period. Because the assumption is
stated in terms of the conditional expectation, we must also correctly specify the functional relationship between yt and the
explanatory variables. If ut is independent of X and E(ut) = 0, then Assumption TS.3 automatically holds.
4. Homoskedasticity
Conditional on X, the variance of ut is the same for all t: Var(ut | X) = Var(ut) = σ², t = 1, 2, …, n.
This assumption means that Var(ut | X) cannot depend on X (it is sufficient that ut and X are independent) and that Var(ut) must be constant over time.
When TS.4 does not hold, we say that the errors are heteroskedastic, just as in the cross-sectional case.
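One common way to check this assumption in practice is a Breusch-Pagan test on the OLS residuals; the sketch below uses statsmodels on simulated data, where the data-generating process and all names are illustrative assumptions rather than part of the notes.

```python
# Sketch: Breusch-Pagan test for heteroskedasticity on a simple simulated regression.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(3)
x = rng.normal(size=200)
u = rng.normal(size=200) * (0.5 + np.abs(x))   # error variance depends on x (heteroskedastic)
y = 1.0 + 0.5 * x + u

res = sm.OLS(y, sm.add_constant(x)).fit()
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(res.resid, res.model.exog)
print("Breusch-Pagan LM p-value:", lm_pvalue)   # a small p-value suggests heteroskedasticity
```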
5. No Serial Correlation
Conditional on X, the errors in two different time periods are uncorrelated: Corr(ut, us | X) = 0, for all t ≠ s.
The easiest way to think of this assumption is to ignore the conditioning on X. Then, Assumption TS.5 is simply
Corr(ut, us) = 0, for all t ≠ s.    (10.12)
(This is how the no serial correlation assumption is stated when X is treated as nonrandom.) When considering whether Assumption TS.5 is likely to hold,
we focus on equation (10.12) because of its simple interpretation.
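A quick diagnostic for serial correlation is the Durbin-Watson statistic; values near 2 are consistent with no serial correlation, while values well below 2 suggest positive serial correlation. The sketch below (simulated AR(1) errors with an illustrative coefficient of 0.7) computes it with statsmodels.

```python
# Sketch: Durbin-Watson statistic on a regression with AR(1) errors (rho = 0.7, illustrative).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + rng.normal()       # serially correlated errors
y = 1.0 + 0.5 * x + u

res = sm.OLS(y, sm.add_constant(x)).fit()
print("Durbin-Watson:", durbin_watson(res.resid))  # well below 2 -> positive serial correlation
```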
6. Normality
The errors ut are independent of X and are independently and identically distributed as Normal(0, σ²).
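Normality of the errors can be checked informally with a Jarque-Bera test on the OLS residuals; the sketch below uses simulated data, and all names and values are illustrative assumptions.

```python
# Sketch: Jarque-Bera test of normality applied to OLS residuals from simulated data.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import jarque_bera

rng = np.random.default_rng(5)
x = rng.normal(size=200)
y = 1.0 + 0.5 * x + rng.normal(size=200)       # normal errors, so the test should not reject

res = sm.OLS(y, sm.add_constant(x)).fit()
jb_stat, jb_pvalue, skew, kurtosis = jarque_bera(res.resid)
print("Jarque-Bera p-value:", jb_pvalue)       # a large p-value is consistent with normal errors
```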
A182_Question 1a)
Explain the meaning of long run propensity or long-run multiplier. [5]
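As a pointer back to the total multiplier defined in the FDL answer above, a minimal formula sketch for the order-q model in equation (1): the long-run propensity is the eventual change in y after a permanent one-unit increase in x.

```latex
% Long-run propensity (long-run multiplier, or total multiplier) of the FDL in equation (1):
% the eventual change in y after a permanent one-unit increase in x.
\mathrm{LRP} = \beta_0 + \beta_1 + \beta_2 + \cdots + \beta_q
```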
1.3 Functional form and dummy variables (refer to textbook pp. 356–362)
A182_Question 1(c)
Explain the meaning of a correlogram and draw the correlogram for the case of a highly persistent time series. [5]
A correlogram (also called an autocorrelation function (ACF) plot or autocorrelation plot) is a visual way to show serial correlation in data that
changes over time (i.e. time series data). Serial correlation (also called autocorrelation) is where an error at one point in time carries over to a
subsequent point in time. For example, you might overestimate the value of your stock market investments for the first quarter, leading to an
overestimate of values for the following quarters. Draw: see the correlogram on slide 9; for a highly persistent time series, the autocorrelations start near one and decay very slowly toward zero as the lag increases.
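As a hedged sketch of how such a correlogram can be produced (the AR(1) coefficient 0.95, the series length, and all names are illustrative assumptions), the snippet below simulates a highly persistent series and plots its ACF with statsmodels; the bars start near one and die out very slowly.

```python
# Sketch: correlogram (ACF plot) of a highly persistent series, simulated as an AR(1)
# with coefficient 0.95 (illustrative). The autocorrelations decay very slowly toward zero.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf

rng = np.random.default_rng(6)
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.95 * y[t - 1] + rng.normal()

plot_acf(y, lags=40, title="Correlogram of a highly persistent series")
plt.show()
```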