Bootstrap Prediction Interval
1. INTRODUCTION
Unlike ARIMA models, models with unobserved components may have several
disturbances. Therefore, the bootstrap procedures proposed by Thombs and
Schucany (1990) and Pascual et al. (2004) cannot be directly applied to them. To
overcome this problem, Wall and Stoffer (2002), hereafter WS, propose that
bootstrap prediction intervals for future observations can be obtained by using
the innovation form (IF) of the SS model, which is defined in terms of a single
disturbance. They show that their procedure works well in the context of
Gaussian SS models. Moreover, Pfeffermann and Tiller (2005) show that the
bootstrap estimator of the underlying unobserved components based on the IF is
asymptotically consistent. Following Pascual et al. (2004), in this article we
propose a bootstrap procedure to obtain prediction intervals of future
observations in SS models that simplifies the WS procedure, both computationally
and because it does not require the backward representation. As in Wall and
Stoffer (2002), our proposed bootstrap procedure is
based on the IF. We show that the new procedure has the advantage of being
much simpler without losing the good behaviour of bootstrap prediction intervals.
The rest of the article is organized as follows. In Section 2, we describe the
model and the filters, and propose a new bootstrap procedure. We analyse its finite-
sample properties, comparing them with those of the standard and the Wall and
Stoffer prediction intervals. Section 3 presents an application of the new bootstrap
procedure to a real time series. Section 4 concludes and offers some suggestions
for future research.
bootstrap prediction errors, they estimate the density of the conditional forecast
errors that can be used for constructing the corresponding bootstrap prediction
intervals. They are given by
\[
\left[\, \tilde{y}_{T+k|T} + Q_{\alpha/2,\,d_k}\, , \;\; \tilde{y}_{T+k|T} + Q_{1-\alpha/2,\,d_k} \,\right] \qquad (6)
\]
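As a minimal sketch of how the interval in (6) can be evaluated once a set of bootstrap k-step-ahead forecast errors is available, the code below computes the two percentile endpoints; the variable names (y_tilde_k, boot_errors_k) and the use of NumPy are illustrative assumptions, not part of the original procedure.

```python
import numpy as np

def percentile_interval(y_tilde_k, boot_errors_k, alpha=0.05):
    """Bootstrap prediction interval as in (6): the point forecast plus the
    alpha/2 and 1-alpha/2 quantiles of the bootstrap distribution of the
    k-step-ahead forecast errors."""
    lower = y_tilde_k + np.quantile(boot_errors_k, alpha / 2)
    upper = y_tilde_k + np.quantile(boot_errors_k, 1 - alpha / 2)
    return lower, upper

# Illustrative use with 999 simulated forecast errors for one horizon.
rng = np.random.default_rng(0)
d_k = rng.standard_normal(999)         # stand-in for bootstrap forecast errors
print(percentile_interval(2.3, d_k))   # 95% interval around the point forecast
```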
Step 4. Run the Kalman filter with θ̂ and the original observations and obtain
a bootstrap replicate of the state vector at time T which incorporates the
uncertainty caused by parameter estimation, â_{T|T−1}.
Step 5. Obtain conditional bootstrap k-step-ahead predictions, {ŷ_{T+k|T}, 1 ≤ k ≤ K},
by the following expressions
\[
\hat{a}_{T+k|T} = \hat{T}^{k}\,\hat{a}_{T|T-1} + \sum_{j=0}^{k-1}\hat{T}^{\,k-1-j}\hat{c} + \sum_{j=0}^{k-1}\hat{T}^{\,k-1-j}\hat{K}_{T+j}\hat{F}_{T+j}^{-1}\hat{v}_{T+j},
\]
\[
\hat{y}_{T+k|T} = \hat{Z}\hat{T}^{k}\,\hat{a}_{T|T-1} + \hat{Z}\sum_{j=0}^{k-1}\hat{T}^{\,k-1-j}\hat{c} + \hat{d} + \hat{Z}\sum_{j=0}^{k-1}\hat{T}^{\,k-1-j}\hat{K}_{T+j}\hat{F}_{T+j}^{-1}\hat{v}_{T+j} + \hat{v}_{T+k}, \qquad k = 1,\ldots,K,
\]
where v̂_T = y_T − Ẑ â_{T|T−1}, and the hat on top of the matrices means that they are
obtained by substituting the parameters by their corresponding bootstrap
estimates.
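A compact way to evaluate the expressions in Step 5 is to iterate the IF one step at a time rather than forming the powers of T̂ explicitly. The sketch below assumes a univariate, time-invariant SS model and that the filter run of Step 4 has already produced â_{T|T−1}, the last innovation v̂_T, and the gain and innovation variance (taken here as steady-state values); the function name and the way future innovations are passed in are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def ssb_forecast_path(a_T, v_path, T_mat, Z, c, d, K_gain, F_var, K_horizon):
    """One bootstrap path of k-step-ahead predictions (Step 5, sketch).

    a_T     : bootstrap replicate of the state at time T (from Step 4)
    v_path  : innovations v_T, v_{T+1}, ..., v_{T+K}; v_T is computed from the
              data, the remaining ones are resampled bootstrap draws
    K_gain, F_var : Kalman gain vector and innovation variance (steady state)
    """
    a = np.asarray(a_T, dtype=float).copy()
    y_hat = np.empty(K_horizon)
    for k in range(1, K_horizon + 1):
        j = k - 1
        # State prediction of the innovation form:
        # a_{t+1|t} = T a_{t|t-1} + c + K_t F_t^{-1} v_t
        a = T_mat @ a + c + K_gain * (v_path[j] / F_var)
        # Observation equation: y_{T+k} = Z a_{T+k|T+k-1} + d + v_{T+k}
        y_hat[j] = float(Z @ a) + d + v_path[k]
    return y_hat
```

Unrolling this recursion reproduces the two expressions above; repeating it over bootstrap replicates of the parameters and resampled future innovations yields a set of bootstrap predictions of y_{T+k} from which prediction intervals can be read off.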
TABLE I
Monte Carlo Average Coverages, Length and Percentage of Observations Left Out on the Right and on the Left of the Prediction Intervals for y_{T+k} Constructed Using ST, Wall and Stoffer (WS) and SSB when ε_t is N(0, 1), η_t is N(0, q) and the Nominal Coverage is 95%
[Figure: density panels for sample sizes T = 100 and T = 500.]
the coverage in both tails is larger in the model with q = 0.1, where the signal is
relatively small with respect to the non-Gaussian noise. Note that the inability of
the ST intervals to deal with the asymmetry in the distribution of ε_t becomes more
pronounced as the sample size increases. On the other hand, the coverages of the Wall
and Stoffer and SSB intervals are rather similar, with SSB again being slightly closer
to the nominal coverage for almost all models and sample sizes considered. Both
bootstrap intervals are able to cope with the asymmetry of the distribution of ε_t.
Consequently, according to the results reported in Table II, using the much
simpler SSB method does not imply a worse performance of the prediction
intervals. Figure 2 illustrates these results by plotting the kernel density of the
simulated yTþ1 together with the ST, Wall and Stoffer and SSB densities
obtained with a particular series generated by each of the models and sample
sizes considered. This figure also illustrates the lack of fit of the ST density when
q = 0.1 and q = 1. On the other hand, the shapes of the Wall and Stoffer and SSB
densities are similar, with SSB always being closer to the empirical density.
3. EMPIRICAL APPLICATION
TABLE II
Monte Carlo Average Coverages, Length and Percentage of Observations Left Out on the Right and on the Left of the Prediction Intervals for y_{T+k} Constructed Using ST, Wall and Stoffer (WS) and SSB when ε_t is χ²(1), η_t is N(0, q) and the Nominal Coverage is 95%
change in the United States' home equity debt outstanding, unscheduled payments,
observed from the first quarter of 1991 to the second quarter of 2007 (Mortgages)
and measured in USD billions. We use the observations up to the first quarter of
2001, T = 61, to estimate the local level model, leaving the rest to evaluate the
out-of-sample forecast performance of the procedure. The QML estimates of the
parameters are σ̂²_ε = 0.126 and q̂ = 0.671. These estimates are used in the Kalman
filter to obtain estimates of the innovations and their variances. Figure 3 plots the
correlogram and a kernel estimate of the density of the within-sample standardized
one-step-ahead errors. The correlations and partial correlations are not significant.
However, the density of the errors suggests that they are clearly far from
normality. Therefore, although the local level model seems appropriate to represent
the dependencies in the conditional mean of the Mortgages series, it is convenient to
implement a prediction procedure that takes into account the non-normality of the errors.
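For readers who want to reproduce this kind of fit on their own data, a local level model can be estimated by (Q)ML with off-the-shelf state space software. The sketch below uses statsmodels' UnobservedComponents on a simulated placeholder series, since the Mortgages data are not reproduced here; it is a generic illustration, not the authors' implementation.

```python
import numpy as np
import statsmodels.api as sm

# Placeholder series standing in for the quarterly Mortgages data
# (66 observations, of which the first 61 are used for estimation).
rng = np.random.default_rng(1)
y = np.cumsum(0.5 * rng.standard_normal(66)) + rng.standard_normal(66)

# Local level model: y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t,
# estimated by maximum likelihood via the Kalman filter.
model = sm.tsa.UnobservedComponents(y[:61], level='local level')
res = model.fit(disp=False)
print(res.params)  # estimated variances of eps_t and eta_t

# Within-sample one-step-ahead prediction errors (innovations) and the
# diagnostics discussed in the text: correlogram and (non-)normality.
innov = res.resid
std_innov = (innov - innov.mean()) / innov.std()
print(sm.tsa.acf(std_innov, nlags=8))
```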
[Figure 2: empirical, SSB, ST and WS densities of y_{T+1}, panels for q = 0.1, 1 and 2 and T = 50, 100 and 500.]
4. CONCLUSIONS
Figure 4. Prediction intervals for the out-of-sample forecasts of the Mortgages series: observed series together with the SSB, STD and WS prediction intervals, 2005:Q3 to 2007:Q1.
ACKNOWLEDGEMENTS
NOTES
REFERENCES
Durbin, J. and Koopman, S. J. (2001) Time Series Analysis by State Space Methods. New York:
Oxford University Press.
Efron, B. (1987) Better bootstrap confidence intervals. Journal of the American Statistical Association
82, 171–85.
Harvey, A. C., Ruiz, E. and Shephard, N. G. (1994) Multivariate stochastic variance models. The
Review of Economic Studies 61, 247–64.
Pascual, L., Romo, J. and Ruiz, E. (2004) Bootstrap predictive inference for ARIMA processes.
Journal of Time Series Analysis 25, 449–65.
Pfeffermann, D. and Tiller, R. (2005) Bootstrap approximation to prediction MSE for state-space
models with estimated parameters. Journal of Time Series Analysis 26, 893–916.
Thombs, L. A. and Schucany, W. R. (1990) Bootstrap prediction intervals for autoregression. Journal
of the American Statistical Association 85, 486–92.
Wall, K. D. and Stoffer, D. S. (2002) A state space approach to bootstrapping conditional forecasts
in ARMA models. Journal of Time Series Analysis 23, 733–51.