Interest Rate Models, Asset Allocation and Quantitative Techniques for

Central Banks and Sovereign Wealth Funds

Copyright material from www.palgraveconnect.com - licensed to Taiwan eBook Consortium - PalgraveConnect - 2011-03-03

10.1057/9780230251298 - Interest Rate Models, Asset Allocation and Quantitative Techniques for Central Banks and Sovereign Wealth
Funds, Edited by Arjan B. Berkelaar, Joachim Coche and Ken Nyholm
Also by Arjan B. Berkelaar, Joachim Coche and Ken Nyholm
CENTRAL BANK RESERVES AND SOVEREIGN WEALTH MANAGEMENT (edited)


Interest Rate Models, Asset
Allocation and Quantitative
Techniques for Central Banks
and Sovereign Wealth Funds

Edited By

Arjan B. Berkelaar
Joachim Coche
Ken Nyholm

Introduction, selection and editorial matter © Arjan B. Berkelaar, Joachim Coche
and Ken Nyholm 2010
All rights reserved. No reproduction, copy or transmission of this
publication may be made without written permission.
No portion of this publication may be reproduced, copied or transmitted
save with written permission or in accordance with the provisions of the
Copyright, Designs and Patents Act 1988, or under the terms of any licence
permitting limited copying issued by the Copyright Licensing Agency,
Saffron House, 6-10 Kirby Street, London EC1N 8TS.
Any person who does any unauthorized act in relation to this publication
may be liable to criminal prosecution and civil claims for damages.
The authors have asserted their rights to be identified as the authors of this work
in accordance with the Copyright, Designs and Patents Act 1988.
First published 2010 by
PALGRAVE MACMILLAN
Palgrave Macmillan in the UK is an imprint of Macmillan Publishers Limited,
registered in England, company number 785998, of Houndmills, Basingstoke,
Hampshire RG21 6XS.
Palgrave Macmillan in the US is a division of St Martin’s Press LLC,
175 Fifth Avenue, New York, NY 10010.
Palgrave Macmillan is the global academic imprint of the above companies
and has companies and representatives throughout the world.
Palgrave® and Macmillan® are registered trademarks in the United States,
the United Kingdom, Europe and other countries.
ISBN: 978-0-230-24012-4 hardback
This book is printed on paper suitable for recycling and made from fully
managed and sustained forest sources. Logging, pulping and manufacturing
processes are expected to conform to the environmental regulations of the
country of origin.
A catalogue record for this book is available from the British Library.
A catalog record for this book is available from the Library of Congress.
10 9 8 7 6 5 4 3 2 1
19 18 17 16 15 14 13 12 11 10
Printed and bound in Great Britain by
CPI Antony Rowe, Chippenham and Eastbourne

Contents

List of Illustrations vii

Notes on Contributors xiv
Preface xxi
Introduction xxii

Part I Interest Rate Modelling and Forecasting

1 Combining Canadian Interest Rate Forecasts 3
David Jamieson Bolder and Yuliya Romanyuk
2 Updating the Yield Curve to Analyst’s Views 31
Leonardo M. Nogueira
3 A Spread-Risk Model for Strategic Fixed-Income Investors 44
Fernando Monar Lora and Ken Nyholm
4 Dynamic Management of Interest Rate Risk for
Central Banks and Pension Funds 64
Arjan B. Berkelaar and Gabriel Petre

Part II Portfolio Optimization Techniques

5 A Strategic Asset Allocation Methodology
Using Variable Time Horizon 93
Paulo Maurício F. de Cacella, Isabela Ribeiro Damaso
and Antônio Francisco da Silva Jr.
6 Hidden Risks in Mean–Variance Optimization: An
Integrated-Risk Asset Allocation Proposal 112
José Luiz Barros Fernandes and José Renato Haas Ornelas
7 Efficient Portfolio Optimization in the Wealth
Creation and Maximum Drawdown Space 134
Alejandro Reveiz and Carlos León
8 Copulas and Risk Measures for Strategic Asset
Allocation: A Case Study for Central Banks
and Sovereign Wealth Funds 158
Cyril Caillault and Stéphane Monier


9 Practical Scenario-Dependent Portfolio
Optimization: A Framework to Combine Investor Views and
Quantitative Discipline into Acceptable Portfolio Decisions 178
Roberts L. Grava
10 Strategic Tilting around the SAA Benchmark 189
Aaron Drew, Richard Frogley, Tore Hayward and Rishab Sethi

11 Optimal Construction of a Fund of Funds 207
Petri Hilli, Matti Koivu and Teemu Pennanen

Part III Asset Class Modelling and Quantitative Techniques

12 Mortgage-Backed Securities in a Strategic
Asset Allocation Framework 225
Myles Brennan and Adam Kobor
13 Quantitative Portfolio Strategy – Including US MBS
in Global Treasury Portfolios 249
Lev Dynkin, Jay Hyman and Bruce Phelps
14 Volatility as an Asset Class for Long-Term Investors 265
Marie Brière, Alexander Burgues and Ombretta Signori
15 A Frequency Domain Methodology for
Time Series Modelling 280
Hens Steehouwer
16 Estimating Mixed Frequency Data: Stochastic
Interpolation with Preserved Covariance Structure 325
Tørres G. Trovik and Couro Kane-Janus
17 Statistical Inference for Sharpe Ratio 337
Friedrich Schmid and Rafael Schmidt

Index 359

List of Illustrations

Tables

I.1 The 50 largest public investment funds xxiii
I.2 Types of public investment funds xxviii
2.1 US Treasury yield curves for Example 2 36
3.1 Ratio of RMSFE for the US Treasury curve (N-S model) 58
3.2 Ratio of RMSFE for the swap-spreads 59
3.3 Ratio of RMSFE for the LIBOR-SWAP curve 60
4.1 OLS estimates of first-order autocorrelation
coefficients for interest rates 68
4.2 The ADF statistic for the null hypothesis of a unit root 69
4.3 The KPSS statistic for the null hypothesis of a
stationary process 69
4.4 Rejection frequencies for ADF and KPSS tests when
the true series follows an AR(1) process 70
4.5 Variance ratios for the US, UK and Eurozone 71
4.6 Statistics for benchmark portfolios 73
5.1 Allocations for portfolio number 70 (%) 106
5.2 Allocations for portfolio number 70 (%) 107
5.3 Allocations for portfolio 70 (%) 109
6.1 Main characteristics of the sample 119
6.2 Composition of the optimal portfolios according
to different criteria (%) 131
8.1 Reserves estimates for several central banks 159
8.2 Reserves estimates for ten SWFs 159
8.3 List of asset classes for CBs and SWFs 165
8.4 First four moments, parameter estimates, AIC
and KS test for each asset class of CBs' universe 166
8.5 First four moments, parameter estimates, AIC
and KS test for each asset class of SWFs' universe 168
8.6 Number of asset class pairs selected by copulas
according to the AIC (CB case) 169
8.7 Number of asset class pairs selected by
copulas according to the AIC (SWF case) 169
10.1 Equities versus bonds historical back-test 195
10.2 Strategic tilting historical back-test: summary of results 197
10.3 Historical back-test of tilting as a package 198


10.4 Monte Carlo simulation of tilting strategy 200
10.5 Long-run returns and regrets 201
11.1 Optimally constructed fund of funds 216
11.A.1 Data series used in the estimation 219
12.1 Historical performance statistics of selected
bond indices in % (Jan. 1990–Sept. 2008) 230
12.2 Composition of the US high grade fixed income

universe (as of 30 June 2008) 231
12.3 In-sample and out-of-sample estimations 242
12.4 Out-of-sample total return estimation
in % (Dec. 2006–Sept. 2008) 246
13.1 Performance comparison of TBA proxy and MBS
Fixed-Rate Index, Sept. 2001–Sept. 2008 253
13.2 TBA proxy portfolio holdings as of 30 Sept. 2008 256
13.3 Summary of historical performance, Sep. 2000–Sep. 2008 258
13.4 Ex-ante TEV between benchmark and
GlobalAgg (projected by GRM as of Mar. 2008) 259
13.5 Ex-ante TEV between Benchmark 2 and GlobalAgg
(projected by GRM as of Mar. 2008) 260
13.6 Performance of Benchmark 1.3 before and after
credit crisis, relative to G7 Treasuries and GlobalAgg 261
14.1 Portfolio allocation: minimum modified VaR 273
14.A.1 Descriptive statistics 278
14.A.2 Correlation matrix 279
14.A.3 Coskewness matrix 279
14.A.4 Cokurtosis matrix 279
15.1 20th century averages and (geometric)
average growth rates 297
15.2 Statistics of trend, low frequency and high
frequency components of five economic
and financial variables 300
15.3 Mean and variance of low, high frequency
and total model from (5) 317
15.4 Complex roots of low and high frequency
models from (5) 317
15.5 Six combinations of DGP representation
and frequency ranges for which error (6) is calculated 319
15.6 Mean errors (6) for each of the six combinations
in Table 15.5 based on 1000 simulations 321
17.1 Statistical impact of the variance stabilizing
transformation on the estimation of the Sharpe ratio 344
17.2 Estimated Sharpe ratio SR_{m,n}, mean, standard
deviation, maximum, minimum and length n of
the excess returns time series for different GBP 345


17.3 Estimated Sharpe ratio SR_{m,n}, mean, standard
deviation, maximum, minimum and length n of
the excess return time series for different ETFs 350
17.4 Estimates of the parameters μ, ω, α, and β of the
GARCH(1,1) model as defined in Formula (10)
for distinct time series of ETF excess returns 350
17.5 The one-sided hypothesis H0: SR_X > SR_Y,
as defined in (16), is tested 356

Figures

I.1 Reserves growth and the number of academic
publications on reserves and sovereign
wealth management xxxii
I.2 Research fields in economics and
finance: number of publications xxxii
1.1 Zero-coupon rates from January 1973 to August 2007 8
1.2 Forecasting interest rates 9
1.3 Predictive performance for frequentist forecasts
relative to random walk 10
1.4 Predictive performance for Bayesian forecasts
relative to random walk 11
1.5 Log predictive likelihood weights over the
training period of 120 points 17
1.6 Log marginal model likelihood weights over
the training period of 120 points 17
1.7 Dynamic model averaging 19
1.8 Static model averaging 19
1.9 Dynamic predictive performance for frequentist
combinations relative to random walk 20
1.10 Dynamic predictive performance for Bayesian
combinations relative to random walk 21
1.11 Dynamic predictive performance for Bayesian
log combinations relative to random walk 22
1.12 Static predictive performance for frequentist
combinations relative to random walk 23
1.13 Static predictive performance for Bayesian
combinations relative to random walk 24
1.14 Static predictive performance for Bayesian log
combinations relative to random walk 25
1.15 Predictive performance of best individual
models and best combinations relative to random
walk, static setting 26
2.1 US Treasury yield curves for Example 2 37


2.2 First three eigenvectors of the correlation matrix
of yield variations of Example 3 when PCA is applied
to all three curves simultaneously 39
2.3 Government bond yield curves for Example 3 40
3.1 Fitted and estimated factor loading
structures for spreads 49
3.2 Fitted and estimated factor loading structure for R 53

4.1 Duration deviation as a function of signal strength 74
4.2 Information ratios for short-duration
portfolios – level-based strategy 76
4.3 Five-year rolling information ratios for the UK
short-duration portfolios – level-based strategy 77
4.4 Information ratios for long-duration
portfolios – regression-based strategy 78
4.5 Five-year rolling information ratios for the
long-duration portfolios – regression-based strategy 79
4.6 Probit model implied probability of two-year yield
increasing over the next month 80
4.7 Information ratios for short- and long-duration
portfolios – probit-based strategy 81
4.8 Correlation coefficient between scoring signal and
subsequent two-year yield changes (five-year rolling) 82
4.9 Information ratios for long-duration regression-based
strategies – scoring strategy 83
4.10 Five-year rolling information ratios for the long-duration
portfolios – scoring strategy 84
4.11 Information ratios for long-duration regression-based
portfolios – momentum strategy 85
4.12 Five-year rolling information ratios for long-duration
portfolios – momentum strategy 86
4.13 Information ratios for short- and long-duration
portfolios – mixed strategy 87
4.14 Five-year information ratios for the US short-duration
portfolio – mixed strategy 88
5.1 Set of objectives in a multi-objective optimization 96
5.2 Discrete optimization model steps 99
5.3 Full range original efficient frontiers – example 1 102
5.4 Full range optimized frontiers – example 1 103
5.5 Detail of original efficient frontiers – example 1 104
5.6 Detail on optimized frontiers – example 1 104
5.7 Optimized frontier – example 2 scenario 1A 107
5.8 Optimized frontier – example 2 scenario 2A 109
6.1 Trade-off market and credit risk 116
6.2 Efficient frontiers 121


6.3 Skewness of the efficient frontiers 121
6.4 Kurtosis of the efficient frontiers 122
6.5 Expected default of the efficient frontiers 123
6.6 Efficient frontiers 124
6.7 Skewness of the efficient frontiers 125
6.8 Expected default of the efficient frontiers 126
6.9 Sharpe Ratio of the efficient frontiers 126

6.10 MPPM of the efficient frontiers 127
6.11 ASSR of the efficient frontiers 127
6.12 ACR of the efficient frontiers 128
6.13 ASCR of the efficient frontiers 129
7.1 Markowitz’s Efficient Frontier 137
7.2 S&P 500 MDD 141
7.3 Wealth creation-MDD’s frontier
(two assets: gold and MSCI) 146
7.4 Wealth creation-MDD’s Frontier (three
assets: gold, MSCI and MSCI-EM) 147
7.5 Wealth creation-MDD's Frontier (two, three,
five and 18 assets) 148
7.6 Calmar Ratio (two, three, five and 18 assets) 148
7.7 EF composition (18 assets, by asset class) 149
7.8 EF composition with expected return and MDDAER 151
7.9 EF with expected return and MDDAER 151
8.1 Diagram of the asset allocation process 161
8.2 50% equity–50% treasury bond portfolio value
at a 30-year horizon 163
8.3 43% equity–57% treasury bond portfolio
at a 30-year horizon 163
8.4 Strategic asset allocation for a global government
portfolio 171
8.5 Portfolios (a1) to (c1) and (a2) to (c2) 174
9.1 Scenarios used in the framework 181
9.2 Simulated interest rate and return paths 183
10.1 Tilting approaches 193
11.1 Evolution of the 0.1%, 5%, 50%, 95%
and 99.9% percentiles of monthly asset return
distributions over 20 years 213
11.2 Median and 95% confidence interval of the projected
pension expenditure c over the 82-year horizon 214
11.3 Development of the floor F with different discount
factors r over the 82-year horizon 215
11.4 Optimal initial allocation in the primitive assets 216
12.1 Historical risk and return of selected bond
indices (Jan. 1990–Sept. 2008) 229


12.2 Performance of the agency guaranteed versus
sub-prime MBS 232
12.3 Coupon return estimation 235
12.4 Historical relationship between duration and
yield (Jan. 1990–Sept. 2008) 236
12.5 Duration estimation 237
12.6 Price return estimation 237

12.7 The conditional nature of refinancing (1990–2008) 239
12.8 Paydown return estimation 240
12.9 Paydown parameter estimate based on ten-year
rolling samples 241
12.10 Monthly total return estimation 241
12.11 12-month cumulative total return estimation 242
12.12 Chronology of MBS spread history 244
12.13 Out-of-sample MBS monthly return fit
(Dec. 2006–Sept. 2008) 245
13.1 Total return volatility of a mix of US MBS and
G7 Treasuries during different time periods 250
13.2 Histogram of realized tracking errors of
TBA proxy portfolio vs. US MBS Fixed-Rate
Index, Sep. 2001–Sep. 2008 253
13.3 Histogram of standardized TEs of TBA proxy
portfolio vs. MBS Fixed-Rate Index,
Sept. 2001–Sept. 2008 255
13.4 Cumulative outperformance of Benchmark 1.3
relative to G7 Treasury Index and GlobalAgg 262
14.1 Efficient frontiers 272
15.1 Squared gain or PTF and phase of the first
order differencing operator 286
15.2 Decomposition of a long-term interest rate time
series for the Netherlands 289
15.3 Maximum Entropy (or autoregressive)
spectral density estimate 294
15.4 Frequency domain time series modelling approach 295
15.5 Observations of low frequency filtered time series 298
15.6 Annual observations of high frequency
filtered time series 299
15.7 Level effect in the high frequency volatility
of interest rates 301
15.8 Variance decompositions evaluated at every
12th month in a 35-year horizon 304
15.9 The risk of perspective distortion from
using short samples 308
15.10 The benefits of the decomposition approach 313


15.11 Out-of-sample forecasts and confidence intervals
of log GDP in the Netherlands 315
15.12 Non-normalized spectral densities 319
15.13 Example simulation of 200 years 320
16.1 In-sample fit relative to using true monthly data 330
16.2 Histograms for the overestimated correlation
between residuals in (7) 331

16.3 Term structure of risk, true estimate
and expected, biased estimate 333
16.4 Comparison of estimation risk when using
stochastic interpolation (left) versus using
quarterly data only (right) 334
17.1 Time series of excess returns of the GBP FFTW(U)
(upper left panel) and the corresponding
benchmark returns of the Lehman Global
Aggregate portfolio (upper right panel) 346
17.2 Weekly excess log-returns of the ETFs SPY and
DIA and corresponding partial ACF for the
squared excess returns up to June 2008 349

Notes on Contributors

Arjan B. Berkelaar is Head of Risk Management at KAUST Investment
Management Company and was previously Principal Investment Officer at the
World Bank, responsible for developing investment strategies and advising
the various internal and external clients of the World Bank Treasury on asset
allocation and related policy matters. He advised central banks on reserves
management issues, and sovereign wealth funds, including oil funds and
national pension reserve funds, on asset allocation and investment strategies.
He joined the World Bank in July 2000; before that, he worked at Ortec
Consultants, a pension consultancy firm in the Netherlands. Arjan has
published several papers in international journals and is a regular speaker
at international conferences. He holds a Ph.D. in Finance from Erasmus
University Rotterdam, an M.Sc. in Mathematics (summa cum laude) from
Delft University of Technology and is a CFA charter holder.
Joachim Coche works as a Senior Asset Management Specialist at the Bank
for International Settlements (BIS) in Basle, where he advises central bank
clients on the management of foreign exchange reserves. Prior to joining
the BIS, he worked at the World Bank Treasury, where he focused on the
development of asset allocation strategies for the Bank's fixed income
portfolios. Before joining the World Bank, he was a Senior Economist at the
European Central Bank. His main interests include strategic asset allocation,
asset and liability modelling, and central bank reserves management. Joachim
holds an M.Sc. and a Ph.D. in Economics from the University of Osnabrück.
Ken Nyholm works in the Risk Management Division of the European
Central Bank, focusing on the practical implementation of financial and
quantitative techniques in the area of fixed-income strategic asset allocation
for the bank's domestic and foreign currency portfolios, as well as asset and
liability management for pensions. Ken holds a Ph.D. in Finance and has
published numerous articles on yield curve modelling and financial market
microstructure. He has extensive teaching and communication experience,
obtained from teaching university courses at the master's level as well as
from conference speaking engagements and central banking seminars.
David Jamieson Bolder is currently a Senior Risk and Investment Analyst at
the Bank for International Settlements (BIS). His responsibilities involve
providing analytic support to the BIS' Treasury and Asset Management
functions. He has previously worked in quantitative roles at the Bank of
Canada, the World Bank Treasury, and the European Bank for Reconstruction
and Development. Over the course of his career, he has also authored a
number of papers on financial modelling, stochastic simulation and
optimization. Mr. Bolder, a Canadian national, holds Master's degrees in
Business Administration and Mathematics from the University of British
Columbia and the University of Waterloo, respectively.
Marie Brière is Head of Fixed Income, Forex and Volatility Strategy at Crédit
Agricole Asset Management, an associate researcher with the Centre Emile
Bernheim at Université Libre de Bruxelles, and affiliate professor at CERAM
Business School. A graduate of the ENSAE School of Economics, Statistics
and Finance and the holder of a Ph.D. in Economics, she also teaches
empirical finance, asset allocation and investment strategies. She is the
author of a book on the formation of interest rates and a number of
scientific articles published in books and academic journals.
Cyril Caillault, a French national, joined Fortis Investments in October
2004 as a Risk Manager before becoming responsible for the Quantitative
Strategies of European Fixed Income in July 2007. On the occasion of the
merger with ABN AMRO Asset Management, Mr Caillault was promoted to
Head of Quantitative Strategies for Fixed Income. As part of his role, Mr
Caillault is now in charge of developing and managing quantitative strategies
which are systematically implemented across portfolios in the Duration &
Yield Curve (including Inflation-Linked bonds), Absolute Return, Investment
Grade Credit, High Yield Credit and Aggregate Investment Centres. Prior
to joining Fortis Investments, Mr Caillault worked at Dexia Crédit Local
between 2001 and 2004. There, he developed models to forecast central
banks' rates while preparing his thesis, Market Risk, Measures and Backtesting:
A Dynamical Copulas Approach, which he defended before a jury of specialists
in March 2005. Mr Caillault holds a Ph.D. in Finance from the École
Normale Supérieure (France) and is a graduate in Mathematical Finance
from the University of Reims (France).
Antônio Francisco da Silva Jr. has an M.Sc. in Chemical Engineering, an
M.Sc. in Business (Finance) and a Ph.D. in Industrial Engineering. He has
worked at the Central Bank of Brazil since 1994 and has more than seven
years of experience in designing portfolios as well as risk and performance
attribution models. Currently he is a senior advisor in the Executive Office
for Monetary Policy Risk Management at the Central Bank of Brazil.
Paulo Maurício F. de Cacella is an Electronic Engineer with more than 20
years of experience in applied numerical methods and more than ten in
developing solutions for risk and performance models. Since 1992, he has
worked for the Central Bank of Brazil, where he developed, among other
things, the institutional framework for reserves investment based on a
reference portfolio, operational guidelines, and performance measurement
and evaluation. Currently he is a senior advisor in the Executive Office for
Monetary Policy Risk Management at the Central Bank.


Aaron Drew is a Senior Investment Strategist at the New Zealand
Superannuation Fund and works in the organization's Portfolio Research
Team. His current research interests focus on a range of strategic asset
allocation issues and investment opportunities in the New Zealand economy.
Aaron previously worked as an economist at the OECD in Paris (2001–2004)
and at the Reserve Bank of New Zealand, where he headed the Bank's
Research Team (2005–2007).

Lev Dynkin is the founder and Global Head of the Quantitative Portfolio
Strategies Group. The Institutional Investor survey has rated the group
number one in the category of Quantitative Portfolio Management three
years in a row since 2006, when the category was first introduced. Dynkin
joined Lehman Brothers Fixed Income Research in 1987 after working at
Coopers & Lybrand managing financial software development. In 2008 the
group became part of Barclays Capital Research. Dynkin began his career as
a research scientist in the area of theoretical and mathematical physics.
Dynkin focuses on the development of quantitative portfolio strategies
and analysis tools for global institutional fixed income investors, including
central banks and sovereign wealth funds, asset managers, pension funds,
endowments, insurance companies and hedge funds. His areas of interest
include optimal allocation of the portfolio risk budget, diversification
requirements, studies of investment style and the costs of investment
constraints, alpha generation, and benchmark replication and customization.
Dynkin has a Ph.D. in Physics from the University of St. Petersburg
(Russia) and is a member of the editorial advisory board of the Journal
of Portfolio Management. His publications, besides Lehman publications,
include: “DTS (Duration Times Spread): A New Measure of Spread
Exposure in Credit Portfolios”, Journal of Portfolio Management, Winter
2007; “Replicating Bond Indices with Liquid Derivatives”, Journal of Fixed
Income, March 2006; “Style Analysis and Classification of Hedge Funds”,
Journal of Alternative Investments, Fall 2006 (Martello Award for best
practitioner article); “Optimal Credit Allocation for Buy-and-Hold Investors”,
Journal of Portfolio Management, Summer 2004; “Sufficient Diversification
in Credit Portfolios”, Journal of Portfolio Management, Fall 2002; “Hedging
and Replication of Fixed-Income Portfolios”, Journal of Fixed Income, March
2002; “The Lehman Brothers Swap Indices”, Journal of Fixed Income, September
2002; “Tradable Proxy Portfolios for an MBS Index”, Journal of Fixed Income,
December 2001; “Value of Skill in Security Selection versus Asset Allocation
in Credit Markets”, Journal of Portfolio Management, Fall 2000 (Bernstein
Fabozzi/Jacobs Levy “Award of Excellence” for Outstanding Article);
“Constant-Duration Mortgage Index”, Journal of Fixed Income, June 2000;
“Value of Security Selection vs. Asset Allocation in Credit Markets”, Journal
of Portfolio Management, Summer 1999; “MBS Index Returns: A Detailed
Look”, Journal of Fixed Income, March 1999; and Quantitative Management of
Bond Portfolios, Princeton University Press, 2007.


José Luiz Barros Fernandes is a civil engineer from Universidade Federal


de Pernambuco; he has a Master’s degree in management from the
Universidade de Brasília and a Ph.D. in Business Administration and
Quantitative Methods from the Universidad Carlos III de Madrid. He is
currently working as advisor to the Executive Office for Integrated Risk
Management at the Central Bank of Brazil. He is in charge of evaluating and
proposing the strategic asset allocation of the international reserves to the Central Bank Board of Directors. His main academic interests are related to
investors’ behaviour and strategic asset allocation. He has published papers
in international journals such as the Applied Financial Economics and Journal
of Financial Risk Management.
Roberts Grava, vice president, is a client portfolio manager in J.P. Morgan
Asset Management’s London International Fixed Income Group, working
with official institutions throughout EMEA and the Americas. Before join-
ing JPMAM in 2007, he spent two years as Principal Financial Officer in the
World Bank Treasury’s SIP/RAMP program, working with a variety of official
institutions throughout Europe, Asia and the Middle East on various reserves
and sovereign wealth management topics, including asset allocation, risk
management, quantitative techniques, operations and governance. From
1994 to 2005, Roberts was a member of the board and head of the Market
Operations Department at the Central Bank of Latvia, where he was respon-
sible for reserves management, portfolio risk management, investment and
risk analytics, foreign currency interventions and domestic monetary policy
operations, national debt management and operations. From 1989 to 1994,
Roberts was a Senior Consultant at New York-based International Capital
Markets Group, a strategic, financial and communications consultancy for
large European and US financial institutions. He holds a B.A. in Economics
from Columbia University, and a Chartered Financial Analyst charter from
the CFA Institute.
Couro Kane-Janus is an Investment Strategist of Asset Allocation & Quantitative
Strategies at the World Bank Treasury. She is responsible for developing asset
allocation strategies for some of the World Bank’s internal clients. In add-
ition, Ms. Kane-Janus develops analytical tools that help governments in oil-
rich developing countries set up funds for the future. Ms. Kane-Janus also
advises Central Banks on Asset Allocation issues. Before joining the World
Bank in 2005, she worked for three years as a consultant in the field of statis-
tical arbitrage and equity derivatives at HypoVereinsbank in Germany. Prior
to that, she designed financial services systems at PricewaterhouseCoopers.
She holds a Ph.D. in Applied Mathematics from Ecole Polytechnique and
University Pierre & Marie Curie, France, and was a postdoctoral fellow at
California Institute of Technology, Pasadena.
Adam Kobor is responsible for managing mortgage-backed securities port-
folios at the World Bank. Prior to joining the Investment Management Department in 2008, he worked for the Quantitative Strategies, Risk & Analytics Department for six years. His responsibilities included preparing
strategic asset allocation recommendations for several internal and external
clients, as well as developing quantitative financial models. Prior to join-
ing the World Bank, he was a risk analyst at the National Bank of Hungary.
He holds a Ph.D. from the Budapest University of Economic Sciences (now
Corvinus University). He is a CFA and a CAIA charterholder.

Matti Koivu is a Financial Analyst at the Market and Operational Risk
Division of the Finnish Financial Supervision Authority. Prior to this, he
worked as an Economist in the Risk Management Division of the European
Central Bank, developing quantitative techniques for the management of
the ECB’s investment portfolios. He holds a Ph.D. in Operations Research.
His main research interests are related to stochastic optimization and simu-
lation techniques, time series analysis and asset and liability management.
He has published widely in these areas.
Carlos León holds an M.Sc. in Finance and Banking from HEC-Université de Lausanne (Switzerland), an M.A. in International Economics and a B.A. in Finance and International Relations from Externado de Colombia University
(Colombia). His working experience includes risk management positions
at Colombia’s Ministry of Finance-Public Credit General Directorate and
research at Banco de la República-Operations and Market Development
Department. He also gives graduate and undergraduate lectures on Finance
and International Economics at Externado de Colombia University.
Fernando Monar Lora was born in Madrid (Spain) on 17 July 1978. He
graduated with a degree in Economics from the Universitat de les Illes
Balears (Balearic Islands University, Spain), where he combined his studies
in Economics with responsible positions in a student association and in the
university’s representative and decision-making bodies. He joined the Asset
Management Division of the Banco de España (Bank of Spain) in August
2003. His duties at the Banco de España included the formulation of stra-
tegic asset allocation proposals, the maintenance of tactical and strategic
benchmark portfolios, the measurement and control of credit risk and the
analysis and modelling of Fixed-Income markets from a quantitative per-
spective. Married and with one son, he currently holds an Expert position
at the Strategic Asset Allocation Unit of the European Central Bank’s Risk
Management Division, further specializing in strategic asset allocation.
Leonardo Nogueira obtained an M.Sc. in Financial Engineering and
Quantitative Analysis and a Ph.D. in Finance at the ICMA Centre, University
of Reading, in the United Kingdom. He previously graduated from the
Federal University of Pernambuco in Recife, Brazil, with a B.Sc. in Computer
Science. Since 1998, Leonardo has worked for Banco Central do Brasil,
where he is currently responsible for the quantitative research of the foreign reserves department. He also joined the ICMA Centre in 2006 as a Lecturer in Finance. His research interests include, but are not limited to, pricing
and hedging of derivatives, risk management, volatility modelling, trading
systems and portfolio optimization.
José Renato Haas Ornelas holds a Ph.D. in Business Administration and
Management from Bocconi University, Italy. He has also obtained a Master’s
degree in Business Economics from Catholic University of Brasilia, an M.B.A. in Finance from Ibmec, a Bachelor's degree in Business Administration from
PUC-RJ and a Bachelor’s Degree in Computer Sciences from the Federal
University of Rio de Janeiro. He has worked for the Central Bank of Brazil
since 1998, and is currently working in the Executive Office for Integrated
Risk Management as advisor for market risk. He has published several art-
icles in Brazilian and international journals. His research topics include Risk
Management, Performance Measurement, Asset Allocation, Option Pricing
and Behavioural Finance.
Gabriel Petre is an investment officer of asset allocation & quantitative strat-
egies at the World Bank Treasury. He is part of the team responsible for devel-
oping asset allocation strategies for the World Bank’s pension and medical
funds. Gabriel also advises Central Banks on reserves management issues
and more recently has been advising governments in oil-rich developing
countries on setting up funds for the future. He joined the World Bank
in July 2006. Before joining the World Bank, he worked for three years at
The National Bank of Romania, as part of the team in charge of managing
the foreign reserves portfolio. Gabriel holds a Bachelor of Science from the
Academy of Economic Studies in Bucharest and is a CFA charterholder.
Alejandro Reveiz is currently a Senior Investment Officer at the World
Bank Group. Prior to this appointment, he was in charge of the Open Market
Operations, FX intervention and capital markets development at Banco
de la República de Colombia. He also headed the International Reserves
Department and the Research Department of the International Affairs
Division of the central bank. At the Latin American Reserves Fund (FLAR)
he was in charge of the Asset Management Operation both for internal port-
folios and external clients. He has vast experience in fixed income portfolio
management, capital markets regulation and central bank intervention. His
research interests focus primarily on the application of artificial intelligence
techniques and complexity theory to financial markets, in particular the
impact of regulation and of portfolio construction and management.
Before joining the Risk Control department at the Bank for International
Settlements as a Senior Risk Analyst, Rafael Schmidt taught as an assistant
professor at the University of Cologne and the London School of Economics
and Political Science. He has worked on various risk-management projects
at DaimlerChrysler (Financial Services), CSFB and LSE Enterprise, where he developed quantitative models and algorithms for risk-based pricing and credit risk quantification systems. Rafael has university degrees in
Mathematics, Economics, and Statistics from Syracuse University, New York,
and University of Ulm, Germany. He holds a Ph.D. and a habilitation in
Financial Statistics and Econometrics.
Hens Steehouwer studied Econometrics at the Erasmus University of
Rotterdam. From 1997 to 2005 he held various consultancy, R&D and management positions at ORTEC Finance in Rotterdam, the Netherlands.
During that time he worked for many pension funds and insurance com-
panies, both in the Netherlands and other countries. At the same time
he also worked on his Ph.D. thesis Macroeconomic Scenarios and Reality:
A Frequency Domain Approach for Analyzing Historical Time Series and
Generating Scenarios for the Future on empirical macroeconomics and the
modelling of economic scenarios (free download from www.ortec-finance.com). In 2005 he received his Ph.D. in Economics at the Free University
of Amsterdam. Since 2006, he has been head of the ORTEC Centre for
Financial Research (OCFR). The objective of the OCFR is to be the linchpin
between the applied models and methodologies of ORTEC Finance on the
one hand and all worldwide (academic) economic and financial research
on the other. An important current project at the OCFR is the implemen-
tation of a new scenario model according to the principles of the afore-
mentioned Ph.D. research. This new model will be released in 2009. Hens
Steehouwer is affiliated with the Econometric Institute of the Erasmus
University Rotterdam, a member of the Program Committee of INQUIRE
Europe (www.inquire-europe.org) and a member of the Editorial Board of
NETSPAR (www.netspar.nl). His research interests include Time Series and
Frequency Domain Analysis, Filtering Techniques, Long Term Growth,
Business Cycles, Market Consistent and Value Based Asset and Liability
Management, Scenario Analysis and Modelling, Monte Carlo Valuation
and Embedded Derivatives in Pension and Insurance contracts.
Tørres G. Trovik is a Senior Investment Officer in The World Bank Treasury,
Quantitative Strategies group. He joined Norges Bank (NBIM) as a Senior
Portfolio Manager in 1998. In 2002 he moved on to work on strategic asset
allocation and governance as a Special Advisor in the Governor’s Staff of
Norges Bank. His academic work has ranged from financial engineering in
continuous time to, more recently, a focus on econometric challenges with
real time output gap estimation. He obtained his Ph.D. at the Norwegian
School of Economics and Business Administration in 2001. He has par-
ticipated in several technical assistance missions for the IMF on sovereign
wealth funds. He has been a member of the Investment Advisory Board for
the Petroleum Fund on East Timor since 2005. Trovik joined the World Bank
in 2008.

Preface

On 24–25 November 2008, a conference on Strategic Asset Allocation for Central Banks and Sovereign Wealth Funds was held, jointly organized by

the Bank for International Settlements, the European Central Bank, and the
World Bank Treasury. A total of 35 speakers presented their perspectives on
asset allocation, quantitative investment strategies and risk management.
The proceedings of that conference are published in two books. This
book contains chapters on the themes of Interest Rate Modelling and
Forecasting, Portfolio Optimization Techniques, and Asset Class Modelling
and Quantitative Techniques.
Papers on the themes of Reserves Management and Sovereign Wealth
Fund Management are collected in the book Central Bank Reserves and
Sovereign Wealth Management, edited by Arjan B. Berkelaar, Joachim Coche
and Ken Nyholm and published by Palgrave Macmillan 2009 (ISBN 978-0-
230-58089-3).

Introduction

Reserves and asset accumulation

Over the past decade public entities, i.e. governments, central banks and
other public institutions, have accumulated significant investable assets,
especially in the areas of central bank foreign exchange reserves, commod-
ity savings funds, and pension reserve and social security funds.
Foreign exchange reserves (excluding gold) have grown to about USD seven
trillion by the end of 2008. While a discussion about reserves adequacy in
the context of recent market events is ongoing, there continues to be a view
that reserves in many countries are in excess of what is deemed adequate
to protect against exogenous shocks or adverse external financing condi-
tions. Some countries have therefore officially established
reserves investment corporations out of excess central bank reserves to seek
higher returns. In other countries central banks have notionally split the
reserves portfolio into separate tranches, including an investment tranche
that might be invested in a broader set of asset classes that goes beyond the
traditional investment universe of central bank reserves managers (covering
just government instruments, agencies and instruments issued by supra-
national institutions). An enhanced investment universe allows for add-
itional exposures to credit risk obtained, for example, via asset classes such
as agency bonds, mortgage backed securities (MBS), and in some cases even
idiosyncratic risk in the form of corporate bonds and equities. While risk
aversion globally (including that of central banks) has increased as a result
of the recent global financial crisis, the longer-term trend of reserves diver-
sification will likely continue.
With rising commodity prices in the past couple of years, several com-
modity-exporting countries have accumulated large amounts of foreign
currency assets. Many countries have established commodity funds to form
a buffer against volatile commodity prices and manage their new-found
riches more efficiently. By some estimates, commodity funds have accumu-
lated about USD two trillion. These funds serve different purposes, including
stabilization of fiscal revenues and inter-generational saving. Stabilization
funds typically invest in high-grade fixed income instruments, while sav-
ings funds seek to invest in investment-grade fixed income, public and pri-
vate equity and hedge funds.
Finally, as a result of aging populations and demographic shifts, many
countries have established pension reserve funds and social security funds to
support pay-as-you-go pension systems. Pension reserve funds are established and funded by the government through direct fiscal transfers. Social secur-
ity funds are part of the overall social security system. Inflows are mainly
surpluses of employee and/or employer contributions over current payouts,
as well as top-up contributions from the government through fiscal trans-
fers. According to estimates by the Organisation for Economic Co-operation
OECD, pension reserve and social security funds total around USD two tril-
lion (excluding the US social security trust fund, which does not have invest-

Copyright material from www.palgraveconnect.com - licensed to Taiwan eBook Consortium - PalgraveConnect - 2011-03-03
able assets).
Many of the funds identified above have been classified as ‘sovereign
wealth funds’ (SWFs) by the financial press. There is no single, universally
accepted definition of an SWF, but one simple working definition is: ‘an
investment fund controlled by a sovereign and invested (at least partially) in
foreign assets’. Table I.1 shows a list of various large public investment funds
across the world, including central banks, sovereign wealth funds and pen-
sion reserve funds. Estimated assets under management by the largest 50
funds total over USD 11 trillion. Forty-four of these funds are in emerging or developing countries, managing over USD three trillion.
Out of the 50 largest funds listed in Table I.1, 23 are institutions other
than central banks. Many of these sovereign wealth funds were established
in the last ten years.1 These new public funds' investment strategies are
likely to follow the lead of established funds and other institutional inves-
tors, moving from fixed income investments into equities, and even hedge
funds, private equity and other alternative investments.
Table I.1 The 50 largest public investment funds

Country | Name of the Fund | Estimated AUM (in USD bln)*
China | Central Bank Reserves | 1530
Japan | National Reserve Funds | 1218
Japan | Central Bank Reserves | 974
UAE | Abu Dhabi Investment Authority (ADIA) | 875
Russia | Central Bank Reserves | 542
Saudi Arabia | Various Funds | 433
Norway | The Government Pension Fund | 401
Singapore | GIC | 330
China | SAFE Investment Company | 312
India | Central Bank Reserves | 303
Kuwait | Kuwait Fund for Future Generations | 264
Korea | Central Bank Reserves | 258
Korea | National Pension Service | 229
Euro area | Central Bank Reserves | 222
Brazil | Central Bank Reserves | 206
China | China Investment Corporation | 200
Singapore | Central Bank Reserves | 177
China-HK | Hong Kong Monetary Authority | 173
Hong Kong SAR | Central Bank Reserves | 158
Russia | Reserve Fund | 141
Algeria | Central Bank Reserves | 141
Singapore | Temasek | 134
Sweden | National Pension Funds (AP1-AP4 and AP-6) | 133
Canada | Canadian Pension Plan | 111
Malaysia | Central Bank Reserves | 109
Thailand | Central Bank Reserves | 100
Libya | Libya Investment Authority (includes LAFICO) | 100
Mexico | Central Bank Reserves | 99
Libya | Central Bank Reserves | 87
Dubai | Dubai Investment Corporation | 82
Turkey | Central Bank Reserves | 77
China | National Social Security Fund | 74
Poland | Central Bank Reserves | 71
Nigeria | Central Bank Reserves | 62
United States | Central Bank Reserves | 61
United Arab Emirates | Central Bank Reserves | 61
Qatar | Qatar Investment Authority | 60
Indonesia | Central Bank Reserves | 57
Norway | Central Bank Reserves | 50
Algeria | Fonds de Régulation des Recettes de l'Algérie | 47
Argentina | Central Bank Reserves | 46
Switzerland | Central Bank Reserves | 45
Spain | Fondo de Reserva de la Seguridad Social | 45
Australia | Future Fund | 44
Canada | Central Bank Reserves | 43
United Kingdom | Central Bank Reserves | 42
France | Fonds de Reserve pour les Retraites | 42
Romania | Central Bank Reserves | 39
Kazakhstan | National Fund | 38
Ukraine | Central Bank Reserves | 37

* Data reflect latest available figures as reported by individual entities or authoritative sources, with various reporting dates between 2004 and 2008.


Public investment funds: Objectives and liabilities

We cannot paint all public investment funds with the same broad brush.
To better understand investment objectives, governance arrangements and
investment behaviour, it is helpful to classify the funds according to their
policy objectives and liability structure. As in Rozanov (2007),2 we distinguish between five types of public investment funds:

● stabilization and buffer funds, and central bank FX reserves,
● reserves investment corporations,
● savings and heritage funds,
● pension reserve and social security funds,
● government holding management companies.

Stabilization and buffer funds as well as central bank reserves are typically
invested with a focus on safety and liquidity. These funds face a contingent
liability that is subject to volatile prices such as exchange rates and/or com-
modity prices. Stabilization funds may need to transfer significant money
to the government budget when commodity prices drop precipitously.
Central banks may need to intervene in the foreign exchange markets when
the domestic currency comes under pressure. Capital preservation, either
in nominal or in real terms, is therefore of paramount importance. The
investment horizon in most instances ranges from one to three years, and managing credit and liquidity risk is critical. We include traditional cen-
tral bank reserves in the first category, while so-called excess reserves3 are
included under the category of reserves investment corporations – whether
a country has actually established such an organization or not – as the asset
allocation problem for both is the same.
It should be noted that the discipline of central bank reserves manage-
ment is evolving dramatically with the tremendous growth of central bank
reserves, stronger balance of payments positions and global capital flows.
As emerging market reserves have increased – both in outright terms and
beyond that needed for external financial stability – the investment return
and negative carry4 associated with holding reserves has become more of
an issue. Central banks have pursued mainly two strategies to address this
problem. Some countries have engaged in asset/liability management at
the national level and used ‘excess’ reserves to pay down foreign denomi-
nated debt, thus reducing the cost of carry on the national balance sheet.
Beyond debt repayment, central banks have also sought to increase long-
term returns through more efficient or aggressive investment strategies to
reduce the negative carry. This has been done in various ways:

1. shifting excess reserves into an SWF (e.g. China Investment Corporation) in a swap arrangement with the Ministry of Finance;
2. setting up a separate investment agency to manage the long-term investment tranche of the foreign currency reserves (e.g. Korea Investment Corporation);
3. managing the investment tranche within the central bank (e.g. the Swiss
National Bank, the Central Bank of Botswana).

The investment tranche is typically invested in broader investment instruments and over a longer investment horizon with less need for immediate
liquidity. The implicit liability of Central Bank reserves is typically char-
acterized by domestic short to medium-term debt that has been issued for
sterilization purposes.
Savings and heritage funds are typically established out of commodity rev-
enues and represent net wealth for a country – unlike central bank reserves
which are borrowed. The objective of these funds is to sustain government
spending after commodity resources have been depleted. Decision-makers
are faced with two trade-offs that will, together, determine the ultimate
size and life of the fund: the current versus future level of spending and the
investment strategy for the fund’s assets. Transfers to and from the fund are
typically determined by a savings or spending rule.
Broadly, there are two types of savings and spending rules. The first is
based only on fiscal considerations and any saving is a residual. In this case
commodity revenues typically flow into the budget first and a portion is
transferred to the fund. Transfer rules include balanced budget require-
ments whereby allocations to the fund are made only after balancing the
budget and there is no cap on the amount of deficit financing available
from commodity extraction and sales. Also included in this category are
those rules that rely on an administrative oil price to divide oil revenues
between the budget and savings. While these rules may stabilize the volatil-
ity of government revenues, they do not ensure any capital accumulation to
support future spending needs. The second type puts an explicit cap on the
spending of oil revenues ensuring some level of capital accumulation over
time. In this case commodity revenues typically flow into the fund first and
a portion is transferred to the budget. Various ad hoc spending rules have
been devised, but a general principle is that if the fund is to have a perman-
ent nature, the average real spending rate over time should not exceed the
expected real return on the portfolio.
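The sustainability principle behind such spending rules can be made concrete with a stylized simulation. The sketch below is purely illustrative and not drawn from the chapters; the 4 percent real return and the 3 and 6 percent spending rates are hypothetical figures:

```python
# Stylized illustration of the permanent-fund principle: real wealth is
# preserved only if the spending rate does not exceed the expected real
# return on the portfolio. All figures below are hypothetical.

def real_wealth_path(initial_wealth, real_return, spending_rate, years):
    """Evolve fund wealth in real terms under a proportional spending rule."""
    wealth = initial_wealth
    path = [wealth]
    for _ in range(years):
        # Each year the fund earns the real return, then pays out a
        # fixed fraction of wealth to the budget.
        wealth *= (1 + real_return) * (1 - spending_rate)
        path.append(wealth)
    return path

# Spending 3% against a 4% real return preserves real wealth over 50 years;
# spending 6% against the same return gradually depletes the fund.
print(round(real_wealth_path(100.0, 0.04, 0.03, 50)[-1], 1))  # → 155.0
print(round(real_wealth_path(100.0, 0.04, 0.06, 50)[-1], 1))  # → 32.2
```

The multiplicative form assumes the payout is taken after returns accrue each year; other timing conventions shift the numbers slightly but not the conclusion.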
Savings and heritage funds tend to have a perpetual investment horizon:
they are expected to provide for current and future generations in perpetuity. The asset allocation problem of savings and heritage funds is com-
parable to that of endowments and foundations, but there are important
differences as well. Many savings and heritage funds are in emerging mar-
ket countries. Typically commodity exporting countries receive commodity
revenues in USD. When commodities represent a large portion of a coun-
try's economic base, commodity price volatility can easily be transmitted to economic volatility and lead to the so-called Dutch disease.5 One of the
purposes of the commodity savings fund is to accumulate wealth in USD,
so only a portion of the fund to be transferred to the government budget
will be converted into the domestic currency. The bulk of the assets of the
fund will therefore be kept in foreign currency. Consequently, most – if
not all – of the assets will be in foreign investments. Managing exchange
rate risk therefore becomes important, particularly if the domestic currency

appreciates against e.g. USD. Savings funds are restricted, however, in their
ability to hedge foreign currency risk exposure relative to the domestic
currency.6
Our fourth category is pension reserve and social security funds. Unlike
savings funds and foreign reserves, these funds have explicit and clearly
defined liabilities. Also these funds typically have a significant allocation
to domestic assets. Some observers refer to such funds as sovereign pen-
sion funds and define them as a separate group of sovereign wealth funds.
This group is not well-defined, however. Pension reserve funds are funded
by the government from general tax revenues and have been set up to partially or fully pre-finance the future pension liabilities of the government,
particularly in light of an aging population. The purpose is to smooth the
expected rising fiscal burden on the public pay-as-you-go system. The assets
of these funds are owned by the government and are fully at its disposal.
These funds are rightfully labelled SWFs and are typically found in OECD
countries where populations are aging rapidly.
Pension reserve funds are usually established with a finite horizon of
about 40 to 50 years. The objective of these funds is to set aside and invest
a significant portion of financial resources over the next 20 to 25 years dur-
ing a so-called accumulation phase, making the accumulated assets grad-
ually available thereafter during a so-called withdrawal phase that also lasts
about 20 to 25 years at the discretion of the government or as mandated in
applicable pension reserve laws. During the accumulation phase withdraw-
als from the fund are not allowed (typically by law). Consequently, pension
reserve funds can allocate a significant portion of their assets to illiquid
and risky investments. During the withdrawal phase managing liquidity
becomes more important and the allocation should gradually be rebalanced
to fixed income assets. Pension reserve funds have only been established in
the last ten years and so all of these funds are currently in the accumulation
phase.
Social security funds, on the other hand, are part of the overall social
security system. These funds invest contributions from employers and/or
employees and are not typically funded by government revenues.7 In other
words, the money does not belong to the government. The government or
a separate arms-length agency is acting as fiduciary. These funds should
therefore not be classified as SWFs. A third group that is sometimes (mis-
takenly) included under the label sovereign pension funds are pension plans

Table I.2 Types of public investment funds

Stabilization funds and central bank adequate reserves*
  Objective: Stability, liquidity and return
  Source of funds: Commodity revenues, FX reserves
  Type of liability: Contingent liability (depends on unpredictable and volatile variables such as commodity prices and exchange rates)
  Risk appetite: Low
  Types of asset classes: High grade fixed income

Reserves investment corporations and central bank excess reserves
  Objective: Minimize opportunity cost of holding excess reserves
  Source of funds: FX excess reserves
  Type of liability: Domestic short to medium-term debt issued for sterilization purposes
  Risk appetite: Medium
  Types of asset classes: Investment-grade fixed income and public equities

Savings and heritage funds
  Objective: Share wealth across generations by converting non-renewable assets into financial assets
  Source of funds: Commodity revenues, fiscal revenues
  Type of liability: Contractually defined interim payouts (typically governed by a spending rule) with perpetual investment horizon
  Risk appetite: Medium to high
  Types of asset classes: Equities and alternative investments

Social security funds
  Objective: Fund social security benefits
  Source of funds: Contributions by participants (employees and employers)
  Type of liability: Fixed liabilities in domestic currency that are contractually defined
  Risk appetite: Medium to high
  Types of asset classes: Investment-grade fixed income, public equities, and some alternatives

Pension reserves funds
  Objective: Pre-finance all or a portion of future public pension liabilities and act as a fiscal smoothing mechanism
  Source of funds: Fiscal revenues
  Type of liability: Contractually defined obligations in domestic currency; typically drawdowns are prohibited for the first 15 to 20 years
  Risk appetite: High, particularly during the accumulation stage; risk appetite will decrease during the payout stage
  Types of asset classes: Equities and illiquid alternative investments

Government holding companies
  Objective: Maximize investment return subject to acceptable level of risk
  Source of funds: Fiscal revenues, privatization
  Type of liability: No identifiable liability
  Risk appetite: Very high
  Types of asset classes: Equities and illiquid alternative investments

Introduction xxix

that cover government workers. Unlike pension reserve funds, many social
security funds do not have an explicit end-date and are currently paying out
social security benefits to the eligible citizens.
The fifth category is government holding companies. Government hold-
ing companies are typically funded by privatization proceeds from
former national companies. Investments are mostly direct equity stakes in
various domestic companies on behalf of the government. Some government holding companies have also bought direct stakes in foreign com-
panies. These types of investments have received a lot of attention in the
press and are the subject of debate and concern in the developed world.
Government holding companies tend to behave more like private equity
funds and less like institutional investors. They do not have any identifi-
able liability.
It could be argued that there is a sixth category: development funds.
These funds are set up with the specific goal of developing the domestic
economy by taking large stakes in critical infrastructure projects. It should
be noted that this objective could also be achieved through the spending
policy of savings and heritage funds or even through domestic investments
by savings and heritage funds. Domestic investments require care, however,
to avoid contracting a bad case of Dutch disease and politicization (or even
corruption) of investment decisions.
Table I.2 presents a summary and overview of the five types of public
investment funds that we discussed above, including traditional central
bank reserves and social security funds. The table highlights the distinct
objectives of each type of fund, their typical liability structures and risk
appetites as well as the types of asset classes that these funds might invest
in. Many of the recently established funds are still far from the ideal asset
allocation. The process of moving from the current asset allocation towards
a more appropriate portfolio will likely be gradual. At first, funds will prob-
ably be managed conservatively, in ways not dramatically different from
how official reserves are managed.

Strategic asset allocation

The strategic asset allocation decision for any investor sets out the opti-
mal long-term portfolio, i.e. the portfolio with the highest expected total
return given the overall objectives, investment horizon and risk toler-
ance. It is generally accepted among practitioners and academics that the
strategic asset allocation (SAA) is the main driver of the risk and return
profile of any investment portfolio. The investment policy is typically
determined through portfolio optimization. The asset allocation should
have a long-term focus but be reviewed periodically – e.g. every one to
three years.


A typical decision framework for setting the strategic asset allocation is:

1. articulate the objectives for overall investment management and the eli-
gible investment universe;
2. specify the risk measures used to define the Board’s tolerance for invest-
ment risk (e.g. the probability of adverse outcomes, value-at-risk and
expected shortfalls), and set out what are unacceptable outcomes;
3. define the investment horizon over which the risk profile and success
of the strategic asset allocation in meeting the objectives should be
assessed;
4. formalize the methodology for developing the strategic asset allocation
proposal – including the determination of capital market assumptions for
each of the eligible asset classes and the techniques for deriving optimal
risk-efficient portfolios;
5. operationalize the strategic asset allocation by setting out portfolio
weights to each of the eligible asset classes, appropriate benchmarks, the
rebalancing strategy, and an overall budget for active risk.
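The last two steps ultimately reduce to an optimization over portfolio weights. A minimal sketch, with purely illustrative capital market assumptions for three hypothetical asset classes (the names, the numbers and the volatility cap standing in for the Board's risk tolerance are all assumptions, not figures from the book):

```python
import numpy as np

# Hypothetical capital market assumptions for three illustrative asset
# classes (government bonds, credit, equities) -- not from the book.
mu = np.array([0.03, 0.04, 0.07])            # expected annual returns
vol = np.array([0.04, 0.06, 0.15])           # annual volatilities
corr = np.array([[1.0, 0.6, -0.1],
                 [0.6, 1.0,  0.2],
                 [-0.1, 0.2, 1.0]])
cov = np.outer(vol, vol) * corr              # covariance matrix

def max_return_under_vol_cap(mu, cov, vol_cap, n_grid=50):
    """Grid-search long-only, fully invested portfolios and return the
    weights with the highest expected return whose volatility <= cap."""
    best_w, best_ret = None, -np.inf
    grid = np.linspace(0.0, 1.0, n_grid + 1)
    for w1 in grid:
        for w2 in grid:
            w3 = 1.0 - w1 - w2
            if w3 < 0.0:
                continue
            w = np.array([w1, w2, w3])
            if np.sqrt(w @ cov @ w) <= vol_cap and w @ mu > best_ret:
                best_ret, best_w = w @ mu, w
    return best_w, best_ret

w, r = max_return_under_vol_cap(mu, cov, vol_cap=0.06)
print(np.round(w, 2), round(r, 4))
```

A production implementation would use a proper quadratic-programming or shortfall-constrained solver over the fund's actual eligible universe; the grid search above only illustrates the structure of the problem.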

Academic research has, historically, focused on portfolio choice for individuals and defined benefit pension funds. Asset allocation for public
institutions, on the other hand, appears to be somewhat neglected. While
many of the techniques developed in the academic and practitioner lit-
erature can be applied to public investors, the unique circumstances and
the investment universe of public investment funds require additional
attention.
Some examples include:

● Public investment funds face policy objectives tied to their liabilities that
may differ from those of other institutional investors. These may include,
for example, reduction of the cost of sterilization for central banks and
reserves investment corporations, stabilization of government revenues
due to volatile exchange rates and oil prices, and domestic (infrastructure)
investments to support and grow the domestic economy.
● Balance sheet considerations at the national level are important and gov-
ernments want to avoid what is called ‘mental accounting’ in the discip-
line of behavioural finance, which can loosely be defined as: “the left
hand doesn’t know what the right hand is doing”. The simplest example
is one where a country has sizeable foreign debts and, at the same time,
holds significant foreign currency reserves. Financially, this country
would be better off by repaying its debts before it accumulates foreign
assets. Coordinated financial management at the national level is particu-
larly important for a country that has significant excess reserves, an oil
savings fund and/or a national pension reserve fund.


● Broader macroeconomic implications should be considered when designing investment policies and saving (funding) and spending (withdrawal)
rules. Many SWFs, for example, are forced to invest abroad (as the source
of their revenues is in foreign currency and the country wants to avoid
contracting Dutch disease) – unlike other institutional investors that typ-
ically have a large allocation to their domestic portfolio. The fund(s) need
to be integrated into the government budget.
● Another reason that public investment funds might need to invest a sig-
nificant portion of their assets abroad is the size of their domestic markets.
Many public investment funds are in developing or emerging market coun-
tries where domestic markets are not (yet) liquid and deep enough. If the
fund is large in relation to domestic financial markets the actions of the
public investment fund might move the markets, forcing them to go over-
seas with their investments. While investors in developed countries typic-
ally hedge a significant amount of their foreign investments back into the
domestic currency, this is not available to SWFs in emerging markets due
to the absence of depth and liquidity in forward currency markets.
● The investment universe for some public investors might be somewhat
different from that of individual investors and defined benefit pension
funds. Central banks invest mostly in fixed income securities. Modelling
yield curve dynamics over time is therefore important in constructing
the appropriate asset allocation. Pension reserve and social security funds
typically invest a (large) portion of their assets in domestic markets.
Modelling the returns on domestic assets can be challenging due to the
lack of data availability.
● Finally, investments by public funds are exposed to the public spotlight
and reputational considerations play a more important role than for other
institutional investors. This has implications for how best to design gov-
ernance arrangements to ensure that assets are managed as efficiently as
possible.

As can be surmised from the examples above, the asset allocation prob-
lem for public investment funds requires additional attention and analysis.
While research interest in central bank reserves management and sover-
eign wealth funds has increased in recent years (see Figure I.1), these topics
still appear to be an under-researched field in economics and finance (see
Figure I.2).
Against this background, the Bank for International Settlements, the
European Central Bank, and the World Bank Treasury organized a conference
on Strategic Asset Allocation for Central Banks and Sovereign Wealth Funds
on 24–25 November 2008. A total of 35 speakers presented their perspectives
on asset allocation, quantitative investment strategies, and risk management.
Many of the speakers were representatives from public investment funds.


[Chart: publications on foreign reserves and on SWFs (left axis, number of publications) and world foreign exchange reserves as a percentage of world GDP (right axis), plotted by year, 1968–2008]
Figure I.1 Reserves growth and the number of academic publications on reserves and
sovereign wealth management
Note: The ratio of global foreign exchange reserves as a percentage of World GDP. Number of
publications in the fields of foreign reserves and sovereign wealth funds as identifiable on basis
of title, keywords and abstracts in the EconLit database maintained by the American Economic
Association.

[Bar chart: number of EconLit publications by research field; fields related to foreign exchange reserves and sovereign wealth funds register far fewer publications than broad fields such as asset allocation and portfolio choice]
Figure I.2 Research fields in economics and finance: number of publications


Note: Number of matches in keywords, titles and abstracts of publications in EconLit database
maintained by the American Economic Association. Numbers are based on contributions pub-
lished between 1968 and 2008.


The presentations given can be broadly grouped into five different themes:

● Reserves management
● Sovereign wealth fund management
● Interest rate modelling and forecasting
● Portfolio optimization techniques
● Asset class modelling and quantitative techniques

This book is a collection of the chapters which cover the latter three themes
(chapters relating to the first two themes are collected in a separate book,
Central Bank Reserves and Sovereign Wealth Management). The next section
provides a brief summary of each of the chapters in this book.

Overview of the book

Theme 1: Interest rate modelling and forecasting


Interest rate modelling and forecasting are important for monetary policy
analysis, portfolio allocations and risk management decisions. A danger in
using interest rate models and forecasts is model risk: are we using the right
model? One solution is to use multiple models to forecast interest rates;
however, this not only greatly increases the amount of work that needs to
be carried out, but it still leaves open the question: which model forecast
should we use? Structural shifts or regime changes as well as possible model
misspecifications make it difficult for any single model to capture all of the
trends in the data and come out as the clear winner.
David Jamieson Bolder and Yuliya Romanyuk examine various techniques
for combining or averaging alternative models in the context of forecasting
the Canadian term structure of interest rates using both yield and macro-
economic data. They perform an empirical study with four different term
structure models from January 1973 to July 2007. They examine a number of
model-averaging schemes in both a frequentist and a Bayesian setting. The
forecasts from individual models and combination schemes are evaluated
in a number of ways; preliminary results show that model averaging gener-
ally assists in mitigating model risk, and that simple combination schemes
tend to outperform more complex variants. These findings carry significant
implications for central banking reserves management: a unified approach
towards accounting for model uncertainty can lead to improved forecasts
and, consequently, better decisions.
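As a toy illustration of forecast combination (the three "models" below are just noisy forecasts with different error variances, not the chapter's term structure models), inverse-MSE weighting can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy set-up: a "true" yield path plus three hypothetical model forecasts
# with different error variances -- illustrative only.
true_yield = 4.0 + 0.1 * np.arange(100)
forecasts = np.vstack([true_yield + rng.normal(0.0, s, size=100)
                       for s in (0.2, 0.4, 0.8)])

# Estimate inverse-MSE combination weights on the first half of the sample.
train, test = slice(0, 50), slice(50, 100)
mse = np.mean((forecasts[:, train] - true_yield[train]) ** 2, axis=1)
weights = (1.0 / mse) / np.sum(1.0 / mse)

combined = weights @ forecasts            # inverse-MSE weighted combination
equal = forecasts.mean(axis=0)            # simple equal-weight combination

def rmse(f):
    return float(np.sqrt(np.mean((f[test] - true_yield[test]) ** 2)))

print(rmse(combined), rmse(equal))        # both beat the worst single model
```

Consistent with the chapter's finding that simple schemes perform well, the equal-weight combination already improves markedly on the worst individual model in this toy setting.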
Leonardo Nogueira introduces a framework that allows analysts or port-
folio managers to map their views on a small set of interest rates into an
entire yield curve and ultimately into expected returns on bond portfolios.
The model builds on the theory of principal component analysis (PCA), can
easily be extended to other markets and has no restrictions on the number


of forecast variables or the number of views. It also operates on the first two moments of the joint probability distribution of yields and makes no
assumption about higher moments. This is an advantage relative to Bayesian
theory, for instance, in which a parametric distribution is often assumed for
the random variables.
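The mapping from views on a few rates to an entire curve can be sketched with PCA as follows; the tenors, the simulated yield-change history and the views are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
tenors = np.array([0.25, 1.0, 2.0, 5.0, 10.0, 30.0])

# Simulated yield-change history driven by level and slope factors
# (purely illustrative data, not the chapter's sample).
loadings = np.vstack([np.ones_like(tenors), np.exp(-tenors / 5.0)])
level = rng.normal(0.0, 0.10, size=(500, 1))
slope = rng.normal(0.0, 0.05, size=(500, 1))
dy = (level @ loadings[:1] + slope @ loadings[1:]
      + rng.normal(0.0, 0.01, size=(500, 6)))

# PCA: eigenvectors of the covariance of yield changes, largest first.
vals, vecs = np.linalg.eigh(np.cov(dy, rowvar=False))
pc = vecs[:, ::-1][:, :2]                  # first two principal components

# Views on just the 2y and 10y rates (indices 2 and 4): +20bp and +5bp.
view_idx = [2, 4]
views = np.array([0.20, 0.05])

# Factor scores consistent with the views, then the implied full-curve move.
scores, *_ = np.linalg.lstsq(pc[view_idx, :], views, rcond=None)
full_curve_move = pc @ scores
print(np.round(full_curve_move, 3))
```

With two retained components and two views the system is exactly identified, so the reconstructed curve reproduces the views at the 2y and 10y points and extends them coherently to the other tenors.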
Fernando Monar Lora and Ken Nyholm present a new empirical approach
to modelling spread-risk and forecasting credit spreads. Their object of interest is the price discount at which risky bonds trade relative to risk-
free bonds, i.e. the discounted value of excess cash flows associated with
credit spreads. Using US data for the LIBOR/Swap curve, they show that
one single time-varying risk factor is needed to successfully model observed
credit spreads. In an out-of-sample experiment they show that the suggested
model specification out-performs random-walk forecasts while improving
upon the information content of other reduced-form empirical models.
Arjan B. Berkelaar and Gabriel Petre consider several strategies to dynam-
ically manage interest rate duration for central bank and pension fund port-
folios. They examine level-dependent strategies, regression-based strategies,
scoring strategies and crossover moving average strategies. The performance
of each of these strategies is evaluated against a constant maturity bench-
mark at monthly, quarterly, semi-annual and annual rebalancing frequen-
cies. In general, they find weak evidence of mean-reversion in interest rates
over the medium- to long-term, and momentum in interest rates in the
short-term. Strategies based on mean reversion only work for central bank
portfolios when the rebalancing frequency is 12 months or longer. Scoring
and momentum strategies only work for central bank portfolios when the
rebalancing frequency is one month. For pension portfolios, strategies based
on mean reversion only work when the rebalancing frequency is over 12
months, while scoring and momentum strategies work at all rebalancing
frequencies. Overall, they find that while some of the strategies produce
positive information ratios, the results are not consistent over time. In gen-
eral, central banks and pension funds might be better off keeping the dur-
ation of their portfolios relatively constant.
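A crossover moving average rule of this kind can be sketched as follows; the window lengths and the sign convention are assumptions for illustration, not the chapter's specification:

```python
import numpy as np

def crossover_signal(yields, short_win=3, long_win=12):
    """Duration signal from a moving-average crossover on yield levels:
    if the short-window average is above the long-window average (yields
    trending up), shorten duration (-1); otherwise extend it (+1)."""
    y = np.asarray(yields, dtype=float)
    sig = np.zeros(len(y))
    for t in range(long_win, len(y)):
        short_ma = y[t - short_win:t].mean()
        long_ma = y[t - long_win:t].mean()
        sig[t] = -1.0 if short_ma > long_ma else 1.0
    return sig

rising = np.linspace(2.0, 6.0, 24)         # steadily rising yields
print(crossover_signal(rising)[-1])        # -1.0: shorten duration
```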

Theme 2: Portfolio optimization techniques


Constructing a strategic asset allocation involves defining investment
objectives, investment constraints and the investment horizon. Using these
inputs, a practical portfolio that reflects the investor’s risk and return pro-
file needs to be constructed. It is common to use portfolio optimization
techniques based on a set of expected returns, correlation and volatilities
over the desired investment horizon. One of the drawbacks of traditional
optimization approaches is their static nature. These models assume that
investors do not change their asset allocation over the investment horizon.
Investor preferences may change over the investment horizon, however,
and there is empirical evidence that capital market assumptions are time-


varying. The investment horizon of the strategic asset allocation is important and can result in wildly different portfolios and even affect investor
perception when setting risk preferences.
Paulo Maurício F. de Cacella, Isabela Ribeiro Damaso and Antônio Francisco
da Silva Jr. develop a dynamic model to allow portfolio optimization with a
variable time horizon instead of defining a single fixed investment horizon.
The model finds the best portfolio for an investor with a specific investment horizon, while minimizing costs relative to the efficient frontier if the
investor exits from the strategy sooner than expected. The problem can be
formulated as a variable investment horizon portfolio choice problem. De
Cacella et al. use a multi-objective evolutionary optimization algorithm to
find a set of viable portfolios that maximize expected return while min-
imizing exit or other costs. Several examples are provided to illustrate the
variable investment horizon methodology.
Another problem with traditional portfolio optimization models is that
market risk is the only risk factor used in determining the optimal asset
allocation. Certain asset classes, however, are exposed to other risk factors
that provide compensation to investors (i.e. risk premia). If those risk premia
are ignored in the analysis, optimal portfolios tend to be distorted with
credit and more negatively skewed assets dominating the portfolios. José
Luiz Barros Fernandes and José Renato Haas Ornelas propose a new per-
formance measure that takes into account both skewness and credit risk.
They illustrate that compared to traditional mean variance analysis, using
this new performance measure results in more balanced portfolios. An asset
allocation example with hedge funds, corporate bonds and high yield is
provided to illustrate the new performance measure.
Alejandro Reveiz and Carlos León propose an alternative to mean–vari-
ance optimization, namely a cumulative wealth and maximum drawdown
optimization framework. They discuss the technical advantages and coher-
ence of maximum drawdown and present an application of the new port-
folio optimization framework. The main findings indicate that the new
framework may help overcome some of the shortcomings of the traditional
mean–variance framework.
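Maximum drawdown itself is simple to compute from a cumulative wealth path:

```python
def max_drawdown(wealth):
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    peak, worst = wealth[0], 0.0
    for w in wealth:
        peak = max(peak, w)
        worst = max(worst, (peak - w) / peak)
    return worst

path = [100, 120, 90, 110, 130, 104]   # hypothetical wealth path
print(max_drawdown(path))              # 0.25 (the 120 -> 90 decline)
```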
Cross-market correlations typically increase significantly during turbu-
lent periods. The normal distribution typically used by financial practition-
ers does not capture this kind of dependence. Copula functions, however,
can be used to model more general dependence structures. The multivari-
ate distribution of returns can be separated into two parts: the marginal
distributions of the return for each asset class, and the dependence struc-
ture of the asset classes described by the copula. When the two are com-
bined, the multivariate return distribution is obtained. Cyril Caillault and
Stéphane Monier use copula functions and the normal inverse Gaussian dis-
tribution to model asset returns. Based on simulated asset returns, Caillault
and Monier compare optimal portfolios using traditional mean–variance


analysis and three alternative approaches: mean–Value-at-Risk, mean–expected shortfall and mean–Omega optimization. Using several examples,
they conclude that mean–variance analysis produces the least diversified
portfolios and that the three alternatives are superior to mean–variance
analysis. The mean–Omega optimization approach is preferred as it takes
into account all the moments of the distribution.
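The separation into marginals and a copula can be sketched with the simplest case, a Gaussian copula with illustrative marginals (the richer specifications of the chapter, such as the normal inverse Gaussian, would slot in the same way):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2)
nd = NormalDist()

# Gaussian copula: correlated normals -> uniforms -> chosen marginals.
rho = 0.7
chol = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
z = rng.standard_normal((10_000, 2)) @ chol.T
u = np.vectorize(nd.cdf)(z)                # copula sample on the unit square

# Illustrative marginals: exponential for asset 1, normal for asset 2.
asset1 = -np.log(1.0 - u[:, 0])                     # Exp(1) quantile transform
asset2 = 0.05 * np.vectorize(nd.inv_cdf)(u[:, 1])   # N(0, 0.05) marginal
print(round(float(np.corrcoef(asset1, asset2)[0, 1]), 2))
```

The dependence imposed through the copula survives the change of marginals, so the two simulated return series remain strongly positively correlated even though their univariate distributions differ completely.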
Roberts Grava outlines a framework that uses a minimum of inputs from
portfolio managers or investment strategists, in a format ‘native’ to their
habitat: horizon expectations for headline government interest rates, sector
spreads, FX rates, and equity index levels for a base case scenario, and as
many risk scenarios that they feel appropriate. Users do not have to specify
confidence levels for their base case or the probability of each risk scenario
occurring, instead specifying a minimum level of desired return, or max-
imum amount of acceptable loss or underperformance for each risk scen-
ario. Finally, a downside risk constraint for the entire portfolio is specified.
The optimization process focuses on discrete probability distributions of
forward-looking asset class returns and the optimization problem is set up
to maximize expected return under the base case, subject to a portfolio
risk limit, expressed as conditional Value-at-Risk (expected shortfall) with
a given confidence level. Any constraints germane to the user (individual
upper and lower bounds, group limits, currency exposure, duration devia-
tions, etc.) can be incorporated easily.
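The conditional Value-at-Risk used as the portfolio risk limit can be estimated directly from scenario returns; a minimal sketch with made-up scenarios:

```python
import numpy as np

def cvar(returns, alpha=0.95):
    """Conditional Value-at-Risk (expected shortfall): the average loss
    over the worst (1 - alpha) fraction of scenario returns."""
    r = np.sort(np.asarray(returns, dtype=float))    # worst outcomes first
    k = max(1, int(np.ceil((1.0 - alpha) * len(r))))
    return -float(r[:k].mean())

scenarios = [0.05, 0.02, -0.01, -0.08, 0.03, -0.15, 0.04, 0.01, -0.03, 0.02]
print(cvar(scenarios, alpha=0.90))   # 0.15: the single worst scenario
```

Because CVaR is an average over tail scenarios, it fits naturally into the discrete-scenario optimization described above and, unlike plain Value-at-Risk, can be expressed with linear constraints.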
Cochrane (1999) has argued that the finance literature generally supports
long-run mean reversion in asset returns that is at least partially predictable.8 Aaron Drew, Richard Frogley, Tore Hayward and Rishab Sethi describe
a dynamic portfolio asset allocation approach called ‘strategic tilting’ that
is consistent with exploiting mean reversion in asset returns. A range of his-
torical back-tests are presented that tend to support the strategy. Drew et al.
also show that in the presence of uncertainty about the return predictability
process, strategic tilting tends to perform at least as well as the traditional
approach of re-balancing asset classes to their weights in the strategic asset
allocation. Strategic tilting usually involves the risk of bearing short-term
underperformance for the prospect of longer-term gains. The underper-
formance may persist for a substantial period, and consequently most asset
managers are seemingly unwilling or unable to engage in strategic tilting,
even if they are pre-disposed towards believing in longer-run mean rever-
sion in asset markets. Drew et al. propose mechanisms to enhance the sus-
tainability of the strategy.
Petri Hilli, Matti Koivu and Teemu Pennanen study the problem of diver-
sifying a given initial capital over a finite number of investment funds that
follow different trading strategies. The investment funds operate in a mar-
ket where a finite number of underlying assets may be traded over a finite
discrete time. The goal is to find diversification that is optimal in terms of a
given convex risk measure. They formulate an optimization problem where


a portfolio manager is faced with uncertain asset returns as well as liabilities. The main contribution is a description of a computational procedure
for finding optimal diversification between funds. The procedure combines
simulations with large-scale convex optimization and can be efficiently
implemented with modern solvers for linear programming. The optimiza-
tion process is illustrated on a problem coming from the Finnish pension
insurance industry.

Theme 3: Asset class modelling and quantitative techniques
Strategic asset allocation involves modelling the risk and return character-
istics of different asset classes. Modelling the returns and risk of different
asset classes usually relies heavily on econometric techniques and time ser-
ies analysis. The resulting models can be used for forecasting asset returns,
in a Monte Carlo simulation framework to evaluate alternative portfolios
over time or as an input into a portfolio optimization process.
Myles Brennan and Adam Kobor present a return attribution model that
can be used to estimate the performance of the agency guaranteed mortgage
backed securities (MBS) universe under certain yield curve and spread sce-
narios. The proposed model can be considered as a framework to model the
MBS sector separately from governments and other fixed income sectors.
Driven by yield curve and spread scenarios, the proposed model facilitates
the generation of inputs relevant to an asset allocation optimizer. The his-
torical fit of the model is quite good, even though the model is linked only
to the seven-year swap rate. Brennan and Kobor suggest that going forward
fixed income analysts have to pay special attention to the factors that drive
the spread that they add to government yields. The expected path of the
spreads can be determined after a careful analysis of the housing and the
mortgage markets. In addition, the model can be easily extended into a mul-
tifactor model, including yield curve slope or volatility as well.
Lev Dynkin, Jay Hyman and Bruce Phelps consider the question of
whether a combination of G7 government bonds and MBS can achieve a
return profile which is broadly similar – on a risk-return basis – to that of
the Barclays Capital Global Aggregate Index. Several different variations on
such a benchmark are investigated, using different construction rules. First,
a simple blend of two existing market-weighted indices – G7 Treasuries and US fixed-rate MBS – is explored. However, this approach leads to a much
higher concentration of USD denominated debt. Second, this blend is rebal-
anced to achieve a global interest-rate exposure more similar to that of the
Lehman Global Aggregate. Third, it is investigated whether it is beneficial to
add securitized products in the euro-denominated portion of the index.
Marie Brière, Alexander Burgues and Ombretta Signori consider the ben-
efits of investing in volatility as an asset class. Exposure to volatility risk is
achieved through the combination of two very different sets of strategies: on
the one hand, long exposure to implied volatility and, on the other hand,


long exposure to the volatility risk premium, where the latter is defined as
the difference between the implied volatility of an underlying and its subse-
quent realized volatility. Both sets of strategies are consistent with the classic
motivations that prompt investors to move into an asset class, i.e. the pos-
sibility for diversification and expected return-enhancement. The remark-
ably strong negative correlation between implied volatility and equity prices
during market downturns offers timely protection against the risk of capital loss. Therefore, exposure to implied volatility is highly attractive to inves-
tors for diversification purposes. On the other hand, exposure to the vola-
tility risk premium has historically delivered attractive risk-adjusted returns
albeit with greater downside risk. Investing in the volatility premium can be
described as a strategy similar to selling insurance premia.
Hens Steehouwer describes a frequency domain methodology for time ser-
ies modelling. With this methodology it is possible to construct time series
models that give a better description of the empirical long-term behaviour
of economic and financial variables, bring together the empirical behaviour
of these variables at different horizons and observation frequencies and get
insight into and understanding of the corresponding dynamic behaviour.
Steehouwer introduces the most important frequency domain techniques
and concepts, describes and illustrates the methodology and finally pro-
vides the motivation for using these techniques. The methodology can help
investors construct better Monte Carlo simulations of economic and finan-
cial variables for asset allocation and risk management purposes.
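A basic frequency domain tool behind such an analysis is the periodogram; the following sketch, on a synthetic series rather than the chapter's data, recovers a deliberately planted eight-period cycle:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic series with a planted 8-period cycle plus noise (illustrative).
n = 256
t = np.arange(n)
x = np.sin(2.0 * np.pi * t / 8.0) + 0.3 * rng.standard_normal(n)

# Periodogram: squared FFT magnitudes at the Fourier frequencies.
freqs = np.fft.rfftfreq(n, d=1.0)
power = np.abs(np.fft.rfft(x - x.mean())) ** 2 / n

dominant = freqs[np.argmax(power)]
print(1.0 / dominant)   # dominant cycle length, here 8 periods
```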
Some financial series for alternative asset classes are available only at low
frequencies. Additional data on standard asset classes are typically available
at higher frequencies. A method to combine series of different frequencies is
needed to avoid throwing away available information in higher frequency
series. Tørres G. Trovik and Couro Kane-Janus suggest using a Brownian
bridge, restricted to adhere to a correlation structure in the full data set, to
fill in missing observations for low frequency series. The method is tested
against other available methods and evaluated both through simulation
and in terms of the precision added in various implementations.
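The unrestricted Brownian bridge step (without the correlation restriction described in the chapter, which requires the full data set) can be sketched as follows, with hypothetical observed values:

```python
import numpy as np

def brownian_bridge_fill(x0, x1, n_missing, sigma, rng):
    """Simulate the n_missing points between two observed values x0 and
    x1 with a Brownian bridge: each step is drawn from the distribution
    of a Brownian motion conditioned to end at x1."""
    n = n_missing + 1                        # unit time steps from x0 to x1
    path = [x0]
    for k in range(1, n):
        x = path[-1]
        remaining = n - (k - 1)              # steps left until the pin at x1
        mean = x + (x1 - x) / remaining
        var = sigma ** 2 * (remaining - 1) / remaining
        path.append(rng.normal(mean, np.sqrt(var)))
    path.append(x1)                          # the pinned endpoint itself
    return np.array(path)

rng = np.random.default_rng(4)
path = brownian_bridge_fill(100.0, 104.0, n_missing=3, sigma=1.0, rng=rng)
print(path[0], path[-1])   # 100.0 104.0 -- endpoints match the observations
```

The conditional mean interpolates linearly between the observations while the conditional variance peaks mid-interval, which is exactly the behaviour needed to fill a low-frequency series without distorting the observed points.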
Risk-adjusted financial performance of investment portfolios or invest-
ment funds is typically measured by the Sharpe ratio (or, when returns are measured relative to a benchmark, the information ratio). From an investor's point of view, the ratio describes how well
the return of an investment compensates the investor for the risk he takes.
Financial information systems, for example, publish lists where investment
funds are ranked by their Sharpe ratios. Investors are then advised to invest
into funds with a high ratio. The Sharpe ratio estimates historical and future
performance based on realized historical excess returns, and thus contains
estimation error. The derivation of the estimation error is important in
order to determine statistically significant rankings between the invest-
ments. For an investor, it is relevant to understand whether two investment
portfolios exhibit different Sharpe ratios due to statistical noise, or whether

the difference is significant. Friedrich Schmid and Rafael Schmidt derive explicit formulas for the estimation error of the Sharpe ratio for general correlation structures of excess returns. Particular emphasis is put on the case where the excess returns exhibit volatility clustering. Furthermore, in the case of temporally independent returns, a variance-stabilizing transformation is developed for general return distributions. An empirical analysis which examines excess returns of various financial funds is presented in order to illustrate the results.
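As a concrete illustration of estimation error in the Sharpe ratio (not taken from the chapter, which treats general correlation structures): for temporally independent returns, a standard asymptotic approximation (Lo 2002) gives the standard error of the estimated ratio as sqrt((1 + SR^2/2)/T). A minimal sketch, with function names of our own choosing:

```python
import numpy as np

def sharpe_ratio(excess_returns):
    """Sample Sharpe ratio of a series of excess returns."""
    r = np.asarray(excess_returns, dtype=float)
    return r.mean() / r.std(ddof=1)

def sharpe_std_error(sr, n_obs):
    """Asymptotic standard error of the estimated Sharpe ratio under
    temporally independent returns (delta-method approximation)."""
    return np.sqrt((1.0 + 0.5 * sr ** 2) / n_obs)

# Illustrative use on simulated monthly excess returns.
rng = np.random.default_rng(0)
r = rng.normal(0.05, 0.10, size=240)
sr = sharpe_ratio(r)
se = sharpe_std_error(sr, r.size)
print(f"SR = {sr:.2f}, 95% band = +/- {1.96 * se:.2f}")
```

Two funds whose Sharpe-ratio difference is small relative to these standard errors cannot be ranked with statistical confidence, which is the point the chapter develops formally.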

Notes
1. Eleven out of 23 funds held at institutions other than central banks were established
over the past ten years.
2. Andrew Rozanov (2007) Sovereign Wealth Funds: Defining Liabilities, SSgA.
3. The notion of excess reserves is not well defined. Typically economists use the so-
called Greenspan-Guidotti rule that defines excess reserves as reserves in excess
of short-term external debt (outstanding external debt with maturity less than
one year).
4. The negative carry emanates from the cost of sterilizing foreign currency inflows,
as in most instances domestic interest rates exceed the rate of return on the for-
eign assets in which the reserves are invested. In addition, many central banks
have experienced significant foreign exchange losses as a result of the appreci-
ation of their domestic currencies relative to reserve currencies.
5. Dutch disease is an economic phenomenon describing the relationship between a
boom in foreign currency revenues (e.g. from the exploitation of natural resources)
and an appreciation of the real exchange rate (with the consequence of making
the manufacturing sector less competitive) as these revenues are converted into
domestic currency and spent by the government.
6. Hedging foreign currency risk using forwards or swaps is often not possible due to
a lack of deep and liquid markets in many emerging market currencies and even
undesirable – particularly if the fund is large relative to the domestic economy.
7. In some instances, however, the government provides top-up contributions via
fiscal transfers.
8. John H. Cochrane (1999) “Portfolio advice for a multifactor world,” Economic
Perspectives, Federal Reserve Bank of Chicago, issue Q III, 59–78.

10.1057/9780230251298 - Interest Rate Models, Asset Allocation and Quantitative Techniques for Central Banks and Sovereign Wealth
Funds, Edited by Arjan B. Berkelaar, Joachim Coche and Ken Nyholm
Part I
Interest Rate Modelling and
Forecasting

1
Combining Canadian Interest Rate Forecasts

David Jamieson Bolder and Yuliya Romanyuk

1.1 Introduction and motivation

Model risk is a real concern for financial economists using interest-rate forecasts for the purposes of monetary policy analysis, strategic portfolio
allocations, or risk-management decisions. The issue is that one’s analysis
is always conditional upon the model selected to describe the uncertainty
in the future evolution of financial variables. Moreover, using an alterna-
tive model can, and does, lead to different results and possibly different
decisions. Selecting a single model is challenging because different models
generally perform in varying ways on alternative dimensions, and it is rare
that a single model dominates along all possible dimensions.
One possible solution is the use of multiple models. This has the advan-
tage of diversifying away, to a certain extent, the model risk inherent in
one’s analysis. It does, however, have some drawbacks. First of all, it is time
consuming insofar as one must repeat one’s analysis with each alternative
model. In the event one uses a simulation-based algorithm, for example,
this can also substantially increase one’s computational burden. A second
drawback relates to the interpretation of the results in the context of mul-
tiple models. In the event that one employs n models, there will be n separ-
ate sets of results and a need to determine the appropriate weight to place
on these n separate sets of results. The combination of these two drawbacks
reduces the appeal of employing a number of different models.
A better approach that has some theoretical and empirical support involves
combining, or averaging, a number of alternative models to create a single
combined model. This is not a new idea. The concept of model averaging has a
relatively long history in the forecasting literature. Indeed, there is evidence dat-
ing back to Bates and Granger (1969) and Newbold and Granger (1974) suggest-
ing that combination forecasts often outperform individual forecasts. Possible
reasons for this are that the models may be incomplete, they may employ dif-
ferent information sets, and they may be biased. Combining forecasts, there-
fore, acts to offset this incompleteness, bias, and variation in information sets.

Combined forecasts may also be enhanced by the covariances between individual forecasts. Thus, even if misspecified models are combined, the combination may, and often will, improve the forecasts (Kapetanios et al. 2006).
Another motivation for model averaging involves the combination of large
sets of data. This application is particularly relevant in economics, where
there is a literature describing management of large numbers of explanatory
variables through factor modelling (see, for example, Moench 2006 and Stock and Watson 2002). We can also combine factor-based models to enrich the
set of information used to generate forecasts, as suggested in Koop and Potter
(2003) in a Bayesian framework. There is a vast literature on Bayesian model
averaging; for a good tutorial on Bayesian model averaging, see Hoeting et al.
(1999). Draper (1995) is also a useful reference. A number of papers investi-
gate the predictive performance of models combined in a Bayesian setting
and find that there are accuracy and economic gains from using combined
forecasts (for example, Andersson and Karlsson 2007, Eklund and Karlsson
2007, Ravazzolo et al. 2007, and De Pooter et al. 2007).
However, model averaging is not confined to the Bayesian setting. For
example, Diebold and Pauly (1987) and Hendry and Clements (2004)
find that combining forecasts adds value in the presence of structural
breaks in the frequentist setting. Kapetanios et al. (2005) use a frequentist
information-theoretic approach for model combinations and show that it
can be a powerful alternative to both Bayesian and factor-based methods.
Likewise, in a series of experiments Swanson and Zeng (2001) find that combinations based on the Schwarz Information Criterion perform well relative
to other combination methods. Simulation results in Li and Tkacz (2004) sug-
gest that the general practice of combining forecasts, no matter what combin-
ation scheme is employed, can yield lower forecast errors on average.
It appears, therefore, that there is compelling evidence supporting the
combination of multiple models as well as a rich literature describing alter-
native combination algorithms. This chapter attempts to explore the impli-
cations for the aforementioned financial economist working with multiple
models of Canadian interest rates. We ask, and attempt to answer, a sim-
ple question: does model averaging work in this context and, if so, which
approach works best and most consistently? While the model averaging lit-
erature finds its origins in Bayesian econometrics, our analysis considers
both frequentist and Bayesian combination schemes. Moreover, the prin-
cipal averaging criterion used in determining how the models should be
combined is their out-of-sample forecasting performance. Simply put, we
generally require that the weight on a given model should be larger for those
models that forecast better out of sample. This is not uniformly true across
the various forecasting algorithms, but it underpins the logic behind most
of the nine combination algorithms examined in this chapter.
The rest of the chapter is organized in four main parts. In Section 1.2, we
describe the underlying interest-rate models and review their out-of-sample

forecasting performance. Next, in Section 1.3, we describe the alternative combination schemes. Section 1.4 evaluates the performance of the different model averaging approaches when applied to Canadian interest-rate data, and Section 1.5 concludes.

1.2 Models

The primary objective of this chapter is to investigate whether combined
forecasts improve the accuracy of out-of-sample Canadian interest-rate fore-
casts. The first step in attaining this objective is to introduce, describe, and
compare the individual interest-rate models that we will be combining. Min
and Zellner (1993) point out that if models are biased, combined forecasts
may perform worse than individual models. Consequently, it is critically
important to appraise the models and their forecasts carefully before com-
bining them. The models used in this work are empirically motivated from
previous work in this area. In particular, Bolder (2007) and Bolder and Liu
(2007) investigate a number of models, including affine (see, for example,
Dai and Singleton 2000, Duffie et al. 2003, Ang and Piazzesi 2003), in which
pure-discount bond prices are exponential-affine functions1 of the state var-
iables, and empirical-based (such as those in Bolder and Gusba 2002 and the
extension of the Nelson-Siegel model by Diebold and Li 2003). The results
indicate that forecasts of affine term-structure models are inferior to those
of empirically-motivated models.
Out of these models, we choose those with the best predictive ability,
in the hope that their combinations will further improve term-structure
forecasts. The four models examined in this chapter, therefore, are the
Nelson-Siegel (NS), Exponential Spline (ES), Fourier Series (FS) and a state-
space approach (SS). It should be stressed that none of these models are
arbitrage-free; in our experience, the probability of generating zero-cou-
pon rate forecasts that admit arbitrage is very low2 . An attractive feature of
the selected models is that they allow us to easily incorporate macroeco-
nomic factors into our analysis of the term structure, assuming a unidirec-
tional effect from macroeconomic factors to the term structure. This has
a documented effect of increasing forecasting efficiency. We do not model
feedback between macroeconomic and yield factors, since Diebold et al.
(2006) and Ang et al. (2007) find that the causality from macroeconomic
factors to yields is much higher than that from yields to macroeconomic
factors.
The models have the following basic structure:

Z(t, τ) = G(t, τ) Y_t,

Y_t = C + ∑_{l=1}^{L} Φ_l Y_{t−l} + ν_t,    ν_t ~ N(0, Σ)    (1)

Here, Z(t, τ) denotes the zero-coupon rate at time t for maturity τ, (τ − t) the term to maturity, and G the mapping from state variables (factors) Y to zero-coupon rates. We model the vector Y_t by a VAR(L) with L = 2, which we find works best for our purposes. For the ES and FS models, Z(t, τ) = −ln(P(t, τ))/(τ − t) and P(t, τ) = ∑_{k=1}^{n} Y_{k,t} g_k(τ − t), where P(t, τ) is the price of a zero-coupon bond at t for maturity τ. In the ES model, the g_k(τ − t) are orthogonalized exponential functions; in the FS model, they are trigonometric basis functions (see Bolder and Gusba 2002 for details).
For all models except SS, we find the factors Y_t at each time t by minimizing the squared distance between P(t, τ) above and the observed bond prices.
We augment the factors with three macroeconomic variables – the output
gap xt, consumer price inflation πt, and the overnight rate rt – and collect
these to form a time series. This procedure and the estimation of model-
specific parameters for the NS, ES and FS models are given in Bolder and
Liu (2007) and the references therein. In the SS model, we simply regress
the vector of zero-coupon rates Zt on the first three principal components,
extracted from the observed term structure up to time t, and the three
contemporaneous macroeconomic variables. Note that only the SS model
allows for a direct connection between the macroeconomic factors and the
zero-coupon rates. In the other three models, only the term-structure fac-
tors determine the yields or bond prices: in the mapping from state variables
to bond prices or zero-coupon rates, the coefficients for macroeconomic fac-
tors are set to zero3.
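The transition equation in (1) is a VAR(L) on the stacked factors, with L = 2 in our case. As an illustrative sketch (not the authors' implementation; the function and array names are ours), the VAR can be estimated by equation-by-equation OLS and iterated forward to produce h-step forecasts:

```python
import numpy as np

def fit_var(Y, lags=2):
    """OLS estimation of Y_t = C + sum_l Phi_l Y_{t-l} + noise.
    Y is (T, d); returns the intercept C (d,) and the list of (d, d)
    coefficient matrices Phi_1, ..., Phi_L."""
    T, d = Y.shape
    X = np.hstack([np.ones((T - lags, 1))] +
                  [Y[lags - l - 1:T - l - 1] for l in range(lags)])
    B, *_ = np.linalg.lstsq(X, Y[lags:], rcond=None)
    C = B[0]
    Phi = [B[1 + l * d:1 + (l + 1) * d].T for l in range(lags)]
    return C, Phi

def forecast_var(Y, C, Phi, h):
    """Iterate the fitted VAR forward h steps from the end of the sample."""
    hist = list(Y[-len(Phi):][::-1])   # most recent observation first
    for _ in range(h):
        nxt = C + sum(P @ y for P, y in zip(Phi, hist))
        hist = [nxt] + hist[:-1]
    return hist[0]
```

With `lags=2` this mirrors the VAR(2) dynamics used for the NS, ES, FS and SS factors; in the chapter the factor vector is augmented with the three macroeconomic variables before the VAR is fitted.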

1.2.1 A few words about Bayesian frameworks


The task of selecting appropriate parameters for the prior distributions
is not a trivial one, and a number of papers discuss this issue (see, for
instance, Litterman 1986, Kadiyala and Karlsson 1997, Raftery et al. 1997,
Fernandez et al. 2001). We have tried a variety of specifications, includ-
ing those in the references above as well as some calibrated ones. We have
found that for our purposes, the g-prior (Zellner 1986) appears to produce
the most satisfactory results. We estimate the parameters for the g-prior
from the in-sample data. While this may not be the optimal way
to estimate a prior distribution, and ideally we would like to set aside a
part of our data just for this purpose, we are constrained by the length
of the available time series. First, we have to forecast for relatively long
horizons and thus set aside a large proportion of the time series for the
out-of-sample testing. Second, we have to leave some part of the time ser-
ies to train model combinations. Third, our models are multidimensional
and require a sizeable portion of the data just for estimation. Finally, it is
difficult to have a strong independent (from observed data) prior belief
about the behaviour of parameters in high-dimensional models. For these
reasons, we estimate the g-prior and the posterior distribution using the
same in-sample data.

While our models have the general structure of state-space models, there
are differences. We assume that zero-coupon rates Z in observation equa-
tions are observed without error for all models except the SS. To estimate
the models in a full Bayesian setup, we could have introduced an error
term in each of these equations and then we would have had to use a fil-
ter to extract the unobserved state variables Y. However, because FS and
ES models are highly nonlinear (and the dimensions of the corresponding factors are high), such a procedure would be very computationally heavy
and might not be optimal4. Instead of this, we take the state variables as
given (from Bolder 2007) and estimate the transition VAR(2) equations in
the Bayesian framework for each of the models. This facilitates computa-
tions greatly, because we can use existing analytic results for VAR(L) mod-
els (for details and derivations, please refer to the appendix in Bolder and
Romanyuk 2008).
We use transition equations to determine weights for Bayesian model
averaging schemes. For consistency with the other models, we compute the
weights based on the transition equation of the SS model, even though the
observation equation for the SS model is a regression with an error term.
Technically speaking, this approach does not give proper Bayesian poster-
ior model probabilities for the four models that are competing to explain
the observed term structure, since the data y has to be the same (with the
same observed zero-coupon rates Z) and the explanatory variables different
depending on the model Mk. In our case, the y data differs for each transi-
tion equation: it is the NS, ES, FS or SS factors. So in effect we are assigning
weights to each model in the forecast combination based on how well the
transition equations capture the trends in the underlying factors of each
model. In light of our assumption that observation equations do not con-
tribute any new information since they have no error term5, this approach
appears reasonable.
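In practice the Bayesian averaging weights reduce to normalized posterior model probabilities. A minimal sketch, assuming equal prior model probabilities unless specified (the function name is ours, and the log marginal likelihoods would come from the transition equations described above):

```python
import numpy as np

def posterior_model_weights(log_marginal_likelihoods, prior=None):
    """Posterior model probabilities p(M_k | y) proportional to
    p(y | M_k) p(M_k), computed stably in log space."""
    lml = np.asarray(log_marginal_likelihoods, dtype=float)
    if prior is None:
        prior = np.full(lml.size, 1.0 / lml.size)   # equal prior odds
    log_post = lml + np.log(prior)
    log_post -= log_post.max()                      # guard against underflow
    w = np.exp(log_post)
    return w / w.sum()
```

Because log marginal likelihoods of long time series are large negative numbers, exponentiating them directly would underflow; subtracting the maximum first leaves the normalized weights unchanged.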

1.2.2 Forecasts of individual models


In practice, we do not observe zero-coupon rates. We do not even observe
prices of pure-discount bonds. We must use the observed prices of coupon-
bearing bonds and some model for zero-coupon rates to extract the zero-
coupon term structure. A number of alternative approaches for extracting
zero-coupon rates from government bond prices are found in Bolder and
Gusba (2002). Figure 1.1 shows the Canadian term structure of zero-coupon
rates from January 1973 to August 2007. As in many industrialized economies,
the Canadian term structure is characterized by periods of high volatile rates
in the late 1980s and the 1990s. Moreover, starting in 2005, the term struc-
ture becomes rather flat. Any single model will generally have difficulties
describing and forecasting both volatile and stable periods equally well.
To evaluate the forecasts of the four models, we use monthly data for
bond prices for different tenors and macroeconomic variables (output gap,

Figure 1.1 Zero-coupon rates from January 1973 to August 2007. The rates are
extracted from Government of Canada treasury bill and nominal bond prices using
a nine-factor exponential spline model described in Bolder and Gusba (2002).

consumer price inflation, and overnight rate) from January 1973 to August
2007. This constitutes 416 observations. We take the first 120 points as
our initial in-sample estimation data. Once the models are estimated, we
make out-of-sample interest rate forecasts for horizons h = 1, 12, 24, and 36
months at time T = 120 (the information set up to time T will be denoted
by filtration F_T). Next, for each model M_k, k = 1, ..., 4, we evaluate the vector of N tenors of forecasted zero-coupon rates Ẑ^{M_k}_{T+h} = E(Z_{T+h} | F_T, M_k) against the actual zero-coupon rates Z_{T+h} (N × 1), extracted from observed bond prices:

e^{M_k}_{T+h} = [ (Z_{T+h} − Ẑ^{M_k}_{T+h})′ (Z_{T+h} − Ẑ^{M_k}_{T+h}) / N ]^{1/2}    (2)

A schematic describing the various steps in the determination of these overlapping forecasts is found in Figure 1.2.
We subsequently re-estimate each model for each T ∈ [121, 416 − h] in-
sample points, calculating the corresponding forecast errors for each model.
Figure 1.3 shows the root mean squared deviations between the actual and

Starting Data: the first s data points {x_{t_1}, ..., x_{t_s}} are used for the first forecasts. Rolling Forecasts: we continue to update the data set and perform new forecasts.

1. Set i = s and k = 1.
2. Formulate E_{M_k}(Z_{t_{i+h}} | F_{t_i}).
3. Observe Z_{t_{i+h}}.
4. Compute e^{M_k}_{t_{i+h}} = Z_{t_{i+h}} − E_{M_k}(Z_{t_{i+h}} | F_{t_i}).
5. Repeat steps 1–3 for k = 2, ..., n models.
6. Repeat steps 1–4 for i = s + 1, ..., T − h observations.
7. Repeat steps 1–5 for h = 1, ..., H months.

Figure 1.2 Forecasting interest rates. This schematic describes the steps involved in generating rolling interest-rate forecasts, which, in this work, act as the principal input for the parametrization of our model-averaging schemes.

forecasted zero-coupon rates relative to the errors from random walk fore-
casts using a rolling window of 48 observations6. We include the Root Mean
Squared Error (RMSE) for the random walk model as a reference because, in
the term-structure literature, it is frequently used as a benchmark model and it
is not easy to beat, at least for affine models (see, for example, Duffee 2002 and
Ang and Piazzesi 2003). Note that the forecasts of the random walk are just the
last observed zero-coupon rates.
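The rolling evaluation of Figure 1.2, together with the error measure in equation (2) and the random-walk benchmark, can be sketched as follows; `forecast_model` stands in for any of the four estimated models, and all names are illustrative:

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean squared deviation across the N tenors, as in equation (2)."""
    d = np.asarray(actual, dtype=float) - np.asarray(predicted, dtype=float)
    return np.sqrt(d @ d / d.size)

def random_walk(history, h):
    """Benchmark: the forecast is simply the last observed curve."""
    return history[-1]

def rolling_errors(Z, forecast_model, h, start=120):
    """Re-estimate on each expanding window Z[:T] and record the
    h-step-ahead error against the realized curve (Z is 0-indexed,
    so the realization of Z_{T+h} sits at row T + h - 1)."""
    errors = []
    for T in range(start, len(Z) - h + 1):
        z_hat = forecast_model(Z[:T], h)
        errors.append(rmse(Z[T + h - 1], z_hat))
    return np.array(errors)
```

Averaging these errors over a rolling window and dividing by the corresponding random-walk errors reproduces the kind of relative-RMSE series plotted in Figure 1.3.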
From Figure 1.3, we observe that for all horizons, there are periods when
the models outperform the random walk, but none of the models seem to
outperform the random walk on average (over the sample period). As one
would expect, the forecasting performance of all four models deteriorates as
the forecasting horizon increases. For horizons beyond one month, all mod-
els have difficulties predicting interest rates during the period of high inter-
est rates in the early 1990s. The models also struggle to capture the flat term
structure observed in the early 2000s; however, the FS and the ES models
appear to be more successful at this than the NS and the SS models. While
all models perform similarly for the short-term horizon, certain patterns
emerge at longer horizons: the NS and SS models tend to move together, as
do the FS and ES models7. The heterogeneity between the models is a strong
motivating factor for model averaging. In particular, it suggests that there is
some potential for combining models to complement the information car-
ried by each model and thereby produce superior forecasts.
Figure 1.4 shows the performance of our models estimated in the Bayesian
setting relative to the random walk. Comparing with Figure 1.3, we see that
Bayesian forecasts are virtually identical to frequentist forecasts. We do not

Figure 1.3 Predictive performance for frequentist forecasts relative to random walk. The panels plot, for the 1-, 12-, 24- and 36-month horizons, the rolling RMSE of the NS, ES, FS and SS models relative to the random walk (RW).

test whether the Bayesian forecasts are statistically significantly different from the frequentist ones, since we are not comparing frequentist versus Bayesian estimation methods. We estimate the models in the Bayesian setting only because we need Bayesian forecast distributions to obtain weights for Bayesian model averaging schemes.

1.3 Model combinations

In this work, we investigate nine alternative model combination schemes, which we denote C1–C9. They are Equal Weights, Inverse Error, Simple OLS, Factor OLS, MARS, Predictive Likelihood, Marginal Model Likelihood, Log Predictive Likelihood, and Log Marginal Model Likelihood. We refer to the first five schemes as ad hoc, and the last four as Bayesian8. Our goal is to calculate weights for each model M_k, horizon h, and combination C_j: w^{C_j}_{k,h}, k = 1, ..., 4, j = 1, ..., 9, h = 1, 12, 24, 36 months. Conceptually, therefore, different model averaging schemes merely amount to alternative methods for determining the amount of weight (i.e. the w's) to place on each individual forecast.
Models estimated in the frequentist setting produce point forecasts,
whereas in the Bayesian setting we obtain forecast densities. There are two

Figure 1.4 Predictive performance for Bayesian forecasts relative to random walk. The panels plot, for the 1-, 12-, 24- and 36-month horizons, the rolling RMSE of the NS, ES, FS and SS models relative to the random walk (RW).

approaches to combine Bayesian forecasts: the first refers to averaging the individual densities directly (Mitchell and Hall 2005, Hall and Mitchell 2007, and Kapetanios et al. 2005), while the second refers to combining the moments of individual densities (Clyde and George 2004). For example, as indicated in the last article, a natural point prediction at time T for a zero-coupon rate vector h steps ahead is

Ẑ_{T+h} = E(Z_{T+h} | F_T) = ∑_{k=1}^{4} w^{C_j}_{k,h} E(Z_{T+h} | F_T, M_k) = ∑_{k=1}^{4} w^{C_j}_{k,h} Ẑ^{M_k}_{T+h},    (3)

where ẐTM+ h are the means of individual forecast densities.


k

1.3.1 C1: Equal weights


This is the simplest possible combination scheme. Each individual forecast
receives an equal weight as follows:

w^{C_1}_{k,h} = 1/n    (4)

While the Equal Weights combination is very simple, it is a standard benchmark for the evaluation of alternative model-averaging algorithms precisely
because it performs quite well relative to individual forecasts and more
complicated schemes (see, for example, Hendry and Clements 2004 and
Timmermann 2006).

1.3.2 C2: Inverse error

In this combination scheme, we assign higher weights to models whose fore-
casts perform better out of sample. We set aside M points from our sample
to evaluate the predictive performance of each model, and then we average
the forecast errors over these M points. More specifically, we estimate the
models using T = 120 initial points, make h-step forecasts and evaluate each
model’s performance by calculating the forecast error (2). Then we repeat
these steps for each T ∈ [121, 120 + M − h]. This procedure yields M − h + 1
forecast errors, which we average. The resulting weights are given by

w^{C_2}_{k,h} = [ 1 / ( ∑_{T=120}^{120+M−h} e^{M_k}_{T+h} / (M − h + 1) ) ] / ∑_{k=1}^{4} [ 1 / ( ∑_{T=120}^{120+M−h} e^{M_k}_{T+h} / (M − h + 1) ) ]    (5)

This combination scheme is also simple, but it differs from the Equal Weights
approach in that it requires data. We use M observations to train the weights
for this and all subsequent model combinations depending on the evalu-
ation approach. Indeed, the Equal Weights combination is the only tech-
nique that does not require a training period.
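Equation (5) simply normalizes the reciprocal average out-of-sample errors; a minimal sketch (names are illustrative):

```python
import numpy as np

def inverse_error_weights(errors):
    """errors: (n_models, n_training_forecasts) array of out-of-sample
    errors e^{M_k}_{T+h} over the training window.  Returns the weights
    of equation (5): normalized reciprocals of each model's mean error."""
    mean_err = np.asarray(errors, dtype=float).mean(axis=1)
    inv = 1.0 / mean_err
    return inv / inv.sum()
```

A model whose average error is half as large as another's receives twice the weight, and the weights are non-negative and sum to one by construction.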

1.3.3 C3: Simple OLS


Here we combine the forecasts from individual models using simple OLS
regression coefficients as weights. First, we estimate the models and make
h-step forecasts for each T ∈ [120, 120 + M − h]. We treat these M − h + 1
forecasts Ẑ^{M_k}_{T+h} as realizations of four predictor variables, and for each tenor
i ∈ [1, N], we regress9 the actual zero-coupon rates ZT+h against these individ-
ual forecasts for the respective tenor i:

Z_{T+h}(i) = b_{0,h}(i) + ∑_{k=1}^{4} b_{k,h}(i) Ẑ^{M_k}_{T+h}(i)    (6)

The weights for the simple OLS scheme are given by

w^{C_3}_{k,h}(i) = b_{k,h}(i)    (7)

This type of combination scheme is very flexible, since the weights are
unconstrained. What this implies is that one can place negative weights

on certain forecasts and significant positive weights on other forecasts. As a consequence of this flexibility, this approach turns out to be our best-performing combination. Its flexibility is not, however, without a cost, since we find the approach can be sensitive to the training period. We return to these points later in the chapter.
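The per-tenor regression (6) can be run with ordinary least squares; a minimal sketch, assuming one tenor at a time (names are illustrative):

```python
import numpy as np

def ols_combination(forecasts, realized):
    """forecasts: (n_obs, n_models) individual h-step forecasts for one
    tenor over the training window; realized: (n_obs,) actual rates.
    Returns the intercept b0 and unconstrained weights b_k of (6)-(7)."""
    X = np.column_stack([np.ones(len(realized)), forecasts])
    b, *_ = np.linalg.lstsq(X, realized, rcond=None)
    return b[0], b[1:]

def combine(b0, w, new_forecasts):
    """Combined forecast given a new vector of individual model forecasts."""
    return b0 + np.asarray(new_forecasts) @ w
```

Because the weights are unconstrained, they need not be positive nor sum to one, which is exactly the flexibility (and the training-period sensitivity) discussed above.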

1.3.4 C4: Factor OLS

A drawback of the simple OLS scheme is that we estimate the weights
separately for a set of prespecified zero-coupon tenors and then interpolate
for the remaining tenors. This leads to a fairly large number of regressions.
To reduce the number of parameters, therefore, we construct a
lower-dimensional alternative, which we term the factor OLS scheme.
First, we perform a basic decomposition of the zero-coupon term structure
as follows:

    Y_t(1) = Z_{t,15y}  (level)
    Y_t(2) = Z_{t,15y} − Z_{t,3m}  (slope)
    Y_t(3) = 2 Z_{t,2y} − (Z_{t,3m} + Z_{t,15y})  (curvature)    (8)

Here 3m, 2y and 15y refer to the 3-month bill and the 2- and 15-year bonds,
respectively. Clearly, this approach is motivated by the well-known level,
slope and curvature variables stemming from principal components analysis.
Now we have only three components from which we build the term
structure of zero-coupon yields. To obtain the OLS weights, we regress10
the actual l-th factor Y_{T+h}(l), l = 1, 2, 3, on the factors forecasted by
each model M_k, \hat{Y}^{M_k}_{T+h}(l):

    Y_{T+h}(l) = b_{0,h}(l) + \sum_{k=1}^{4} b_{k,h}(l) \hat{Y}^{M_k}_{T+h}(l)    (9)

The weights for the factor OLS scheme are

    w^{C4}_{k,h}(l) = b_{k,h}(l)    (10)

Once we have the combined forecasted factors \hat{Y}_{T+h}(l), we invert the
decomposition iteratively as follows:

    Z_{t,15y} = Y_t(1),  Z_{t,3m} = Y_t(1) − Y_t(2),  Z_{t,2y} = [Y_t(3) + 2 Y_t(1) − Y_t(2)] / 2    (11)

14 David Jamieson Bolder and Yuliya Romanyuk

The advantage of this averaging approach is that it reduces the number
of regressions and thus of estimated parameters. Its disadvantage is that we
are now forced to interpolate the entire curve from only three points. In
some cases, the error of such an approximation may be substantial.
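The mapping (8) and its inversion (11) are simple enough to sketch directly; this is an illustration with our own variable names, not the authors' code.

```python
import numpy as np

def to_factors(z3m, z2y, z15y):
    """Map three zero-coupon rates to (level, slope, curvature), as in (8)."""
    level = z15y
    slope = z15y - z3m
    curve = 2.0 * z2y - (z3m + z15y)
    return np.array([level, slope, curve])

def from_factors(y):
    """Invert the decomposition, as in (11): recover the three key rates."""
    level, slope, curve = y
    z15y = level
    z3m = level - slope
    z2y = (curve + 2.0 * level - slope) / 2.0
    return z3m, z2y, z15y
```

A quick round trip through both functions confirms that (11) exactly undoes (8).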

1.3.5 C5: MARS


The previous four schemes are relatively straightforward. For the purposes of
comparison, however, we opted to include a more mathematically complex
approach to combining the forecasts from individual models. The approach
we selected is termed Multivariate Adaptive Regression Splines (MARS), which
is a function-approximation technique based on the recursive-partitioning
algorithm. The basic idea behind this technique is to define piecewise linear
spline functions on an overlapping partition of the domain (Bolder and
Rubin 2007 provide a detailed description of the MARS algorithm). As such,
the MARS combination scheme can be considered an example of a
mathematically complicated nonparametric, nonlinear aggregation of our four
alternative models.
The combination is trained on a set of M − h + 1 realized zero-coupon rates
Z_{T+h} and their forecasts \hat{Z}^{M_k}_{T+h}, T ∈ [120, 120 + M − h], for all
tenors, horizons and models. Once trained, we combine the individual
forecasts according to the MARS algorithm. Note that, unlike in the previous
four schemes, we cannot write the combined forecast \hat{Z}_{T+h} as a linear
combination of weights w^{C5}_{k,h} and individual forecasts
\hat{Z}^{M_k}_{T+h}, owing to the nonlinearity and complexity of the MARS
scheme.

1.3.6 C6: Predictive likelihood


In our Bayesian model averaging schemes, the weights are some version of
posterior model probabilities. Theoretically, the posterior model
probabilities P(M_k | Y) are

    P(M_k | Y) = p(Y, M_k) / p(Y)
               = p(Y | M_k) P(M_k) / \sum_{j=1}^{n} p(Y | M_j) P(M_j)    (12)

We assume a priori that all models are equally likely, so we take the prior
model probabilities P(M_k) = 1/n.
The quantity p(Y | M_k) is the marginal model likelihood for model M_k,
which measures in-sample fit and fit to the prior distribution only. However,
out-of-sample forecasting ability is our main criterion for selecting models
and evaluating model combinations (Geweke and Whiteman 2006 indicate
that 'a model is as good as its predictions'). This and other recent papers
(for example, Ravazzolo et al. 2007, Eklund and Karlsson 2007, and Andersson
and Karlsson 2007) use the predictive likelihood, which is the predictive
density evaluated at the realized value(s), instead of the marginal model
likelihood, to average models in a Bayesian setting11. Following this stream
of literature, to obtain the weights for combination C6, for each model M_k
and horizon h, we (a) formulate E_{M_k}(Y_{T+h} | F_T) = \hat{Y}^{M_k}_{T+h};
(b) formulate the predictive density p(Y_T | M_k, F_{T−h}); (c) observe Y_T
and evaluate p(Y_T | M_k, F_{T−h}); and (d) use p(Y_T | M_k, F_{T−h}) to
combine the forecasts E_{M_k}(Y_{T+h} | F_T).

Substituting the predictive likelihood into (12) in place of the marginal
model likelihood, we obtain the weights for the predictive likelihood
combination. As with the previous combinations, we calculate the weights for
each T ∈ [120, 120 + M − h] and average the resulting M − h + 1 weights to
get the fixed weights that will be used to evaluate model combinations out
of sample:

    w^{C6}_{k,h} = (1 / (M − h + 1)) \sum_{T=120}^{120+M−h} [ p(Y_T | M_k, F_{T−h}) / \sum_{j=1}^{4} p(Y_T | M_j, F_{T−h}) ]    (13)

Strictly speaking, such weights are not proper posterior model probabilities,
but they have the advantage of measuring out-of-sample predictive ability.
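The averaging in (13) amounts to normalizing the predictive likelihoods across models at each training point and then averaging over the training period. A minimal sketch, with illustrative names:

```python
import numpy as np

def likelihood_weights(pred_lik):
    """Average normalized predictive likelihoods into fixed weights, as in (13).

    pred_lik: shape (T, K), where pred_lik[t, k] is the predictive
              likelihood p(Y_t | M_k, F_{t-h}) of model k at training point t.
    Returns a length-K vector of combination weights summing to one.
    """
    per_period = pred_lik / pred_lik.sum(axis=1, keepdims=True)  # normalize each T
    return per_period.mean(axis=0)                               # average over T
```

The same function applies to the marginal model likelihood combination (14); only the likelihood values fed in differ.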

1.3.7 C7: Marginal model likelihood


Even though the marginal model likelihood evaluates in-sample fit only, we
use it as one of our model combination schemes, since this is the classical
Bayesian model averaging approach (see, for instance, Madigan and Raftery
1994 and Kass and Raftery 1995). To generate a combined forecast, we
calculate the marginal model likelihood p(Y_T | M_k) for model M_k using T
in-sample data points. The weight for each model is its posterior
probability. Then we average the weights for each T ∈ [120, 120 + M − h], as
with the previous model combinations, to obtain the weights for the marginal
model likelihood combination:

    w^{C7}_{k} = (1 / (M − h + 1)) \sum_{T=120}^{120+M−h} [ p(Y_T | M_k) / \sum_{j=1}^{4} p(Y_T | M_j) ]    (14)

Unlike with weights based on the predictive likelihood, the weights based
on the marginal model likelihood do not depend on the forecasting
horizon h.


1.3.8 C8 and C9: Log likelihood weights


It turns out that in practice the weights based on the marginal model
likelihood and the predictive likelihood vary significantly depending on the
estimation period (see Bolder and Romanyuk 2008). To obtain a smoother set
of weights, we take the logarithms of the marginal model (or predictive)
likelihood values and transform them linearly into weights. We want these
weights w_k, k = 1, ..., 4, to satisfy w_k ∈ (0, 1) and
\sum_{k=1}^{4} w_k = 1, and the relative distances between the weights should
be preserved by the transformation.


One possibility for such a transformation is to let a be the lower bound of
the interval on which our observed log likelihoods lie, order the log
likelihoods in ascending order, and specify that

    [log(p(Y_T | M_i)) − a] / [log(p(Y_T | M_j)) − a] = w_{i,T} / w_{j,T}

for i = 1, 2, 3, j = 2, 3, 4, with \sum_{k=1}^{4} w_k = 1. For marginal model
likelihoods (alternatively, we could have used logs of predictive
likelihoods), the set of weights

    w_{k,T} = [log(p(Y_T | M_k)) − a] / \sum_{j=1}^{4} [log(p(Y_T | M_j)) − a]    (15)

solve the linear system and satisfy the desired properties for weights stated
above. Now the only tricky part is to choose a appropriately12. We take
a = log (p(Y T |M1)) − s, where s is the standard deviation of the log marginal
model (predictive) likelihoods from their mean.
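For a single training point T, the transformation (15) with the anchor a chosen as above can be sketched as follows. We take the smallest log likelihood to play the role of log(p(Y_T | M_1)) after the ascending ordering; function and variable names are ours.

```python
import numpy as np

def log_likelihood_weights(log_liks):
    """Map log likelihoods linearly into weights in (0, 1) summing to one, as in (15).

    log_liks: array of log marginal (or predictive) likelihoods, one per model.
    The anchor a is the smallest log likelihood minus the standard
    deviation s of the log likelihoods, as described in the text.
    """
    a = np.min(log_liks) - np.std(log_liks)
    shifted = log_liks - a            # strictly positive by construction
    return shifted / shifted.sum()    # preserves relative distances, sums to one
```

Because the shift is the same for every model, the ordering of the weights matches the ordering of the log likelihoods, but the spread is much smaller than for the raw likelihood weights.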
Figures 1.5 and 1.6 show the log predictive likelihood and log marginal
model likelihood weights, respectively, for T ∈ [120, 120 + M − h] and
M = 120. They are more stable than the raw predictive likelihood and
marginal model likelihood weights. Note that in Figure 1.6 the weights are
the same for all four forecasting horizons, since log marginal model
likelihood weights are independent of the forecasting horizon.
Finally, we average the weights over the training period. For the log
marginal model likelihood combination, the weights are

    w^{C8}_{k} = (1 / (M − h + 1)) \sum_{T=120}^{120+M−h} [ (log(p(Y_T | M_k)) − a) / \sum_{j=1}^{4} (log(p(Y_T | M_j)) − a) ]    (16)

For the log predictive likelihood combination, we have

    w^{C9}_{k,h} = (1 / (M − h + 1)) \sum_{T=120}^{120+M−h} [ (log(p(Y_T | M_k, F_{T−h})) − a) / \sum_{j=1}^{4} (log(p(Y_T | M_j, F_{T−h})) − a) ]    (17)

[Figure: four panels (1-, 12-, 24- and 36-month horizons) plotting weights
for the NS, ES, FS and SS models over 1984–1992; vertical axes run from
0 to 1.]
Figure 1.5 Log predictive likelihood weights over the training period of 120 points

[Figure: a single panel (all horizons) plotting the sMML weights for the
NS, ES, FS and SS models over 1983–1992; vertical axis runs from 0 to 1.]

Figure 1.6 Log marginal model likelihood weights over the training period of 120
points


1.4 Evaluating model-combination schemes

We use two methods to evaluate the performance of the nine previously
described model-combination schemes. We call these approaches dynamic
and static model averaging. For both we require the following ingredients:
forecasts from individual models to be combined, a subset of the data to
train the weights for model combinations, and the remainder of the data to
evaluate the out-of-sample forecasts of different model combinations.
We generate individual forecasts for our models \hat{Z}^{M_k}_{T+h},
k = 1, ..., 4, for T ∈ [120, 416 − h], as described in Section 1.2.2, and set
these aside. Next we take a subset of these forecasts of length M to evaluate
the predictive ability of the models and use this information to obtain the
weights for model combinations. In Section 1.3 we refer to this as training
the weights. The last observation used in the training period to evaluate
individual forecasts is 120 + M. Starting at this point T = 120 + M, we can
combine the models using their respective weights and evaluate the
out-of-sample predictive ability of the combinations using the remainder of
the sample. That is, we calculate the forecast error

    e^{C_j}_{T+h} = \sqrt{ (Z_{T+h} − \hat{Z}^{C_j}_{T+h})' (Z_{T+h} − \hat{Z}^{C_j}_{T+h}) / N }    (18)

for j = 1, ..., 9 model combinations at points T ∈ [120 + M, 416 − h].
Schematics with a graphic description of the dynamic and static forecasting
approaches are found in Figures 1.7 and 1.8.
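Equation (18) is simply the root mean squared error across the N tenors of a single forecasted curve; a one-function sketch (names are illustrative):

```python
import numpy as np

def curve_rmse(actual_curve, combined_curve):
    """Forecast error (18): RMSE across the N tenors of one curve."""
    diff = np.asarray(actual_curve) - np.asarray(combined_curve)
    return float(np.sqrt(diff @ diff / diff.size))
```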
The key difference between the two methods for evaluating the combin-
ations is their treatment of the training period. In the dynamic approach,
the parameters of the model averaging scheme are updated gradually as we
move forward in time. In this way, the most recent information regarding
the forecasting performance of the models is incorporated in the model-
averaging algorithm. The static approach, however, involves only a single
computation of the model-combination parameters. As we move through
time, therefore, the parameters are not updated to incorporate the most
recent forecasting performance. Such evaluation is not the typical approach
used in the forecasting literature, but is nonetheless appropriate for exam-
ining the usefulness of a given model-combination scheme for simulation
analysis, where one does not have the liberty of continuously updating
one’s information set. We expect that with a limited training set, the static
forecast combinations should underperform their dynamic counterparts.

1.4.1 Dynamic model averaging


The idea with dynamic model averaging is to use as much recent information
as possible to train the weights for model combinations. We consequently


[Figure schematic:
Starting data: the points X_{t_1}, ..., X_{t_s} are used for the first
forecasts.
Rolling forecasts: we continue to update the data set and perform new
forecasts.
Training data: forecasts from the periods up to t_m are used to estimate the
model-averaging parameters.]

0. Set i = m, j = 1, and h = 1.
1. Estimate P_{C_j}(M_k | F_{t_i}) for k = 1, ..., n.
2. Apply the weights to {\hat{Z}^{M_k}_{t_{i+h}}, k = 1, ..., n} to form E^{C_j}(Z_{t_{i+h}} | F_{t_i}).
3. Compute e^{C_j}_{t_{i+h}} = Z_{t_{i+h}} − E^{C_j}(Z_{t_{i+h}} | F_{t_i}).
4. Repeat steps 1–3 for j = 2, ..., κ model-averaging approaches.
5. Repeat steps 1–4 for i = m + 1, ..., T − h.
6. Repeat steps 1–5 for h = 2, ..., H forecasting horizons.

Figure 1.7 Dynamic model averaging. This schematic describes the steps
involved in dynamic model averaging, whereby the parameters for each
model-averaging algorithm are updated as new information becomes available.

[Figure schematic:
Starting data: the points X_{t_1}, ..., X_{t_s} are used for the first
forecasts.
Rolling forecasts: we continue to update the data set and perform new
forecasts.
Training data: forecasts from the periods up to t_m are used to estimate the
model-averaging parameters.]

0. Estimate once P_{C_j}(M_k | F_{t_m}) for k = 1, ..., n. Note: m is fixed.
1. Set i = m, j = 1, and h = 1.
2. Apply the fixed weights to {\hat{Z}^{M_k}_{t_{i+h}}, k = 1, ..., n} to form E^{C_j}(Z_{t_{i+h}} | F_{t_i}).
3. Compute e^{C_j}_{t_{i+h}} = Z_{t_{i+h}} − E^{C_j}(Z_{t_{i+h}} | F_{t_i}).
4. Repeat steps 2–3 for j = 1, ..., κ model-averaging approaches.
5. Repeat steps 2–4 for i = m + 1, ..., T − h observations.
6. Repeat steps 2–5 for h = 2, ..., H forecasting horizons.

Figure 1.8 Static model averaging. This schematic describes the steps
involved in static model averaging, whereby the parameters for each
model-averaging algorithm are estimated only once with a fixed set of
training data and are not updated as new information becomes available.


[Figure: four panels (1-, 12-, 24- and 36-month horizons) plotting rolling
RMSE relative to the random walk (RW) for the EW, IE, sOLS, fOLS and MARS
combinations over 1998–2006.]

Figure 1.9 Dynamic predictive performance for frequentist combinations
relative to random walk

update the training period as new information arrives: starting with
M = 120, we increase the training period until we run out of data (the last
value for M is 416 − h). The steps involved are given in Figure 1.7.
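In code, the expanding-window logic of Figure 1.7 looks roughly as follows, for a generic weight-training rule supplied by the chosen combination scheme. This is a sketch under our own naming conventions, not the authors' implementation:

```python
import numpy as np

def dynamic_evaluation(actual, forecasts, train_weights, m0=120):
    """Expanding-window evaluation of one combination scheme.

    actual:    shape (T, N), realized curves.
    forecasts: shape (T, K, N), the K individual models' h-step forecasts
               aligned with `actual`.
    train_weights: callable (actual[:t], forecasts[:t]) -> length-K weights,
               re-estimated at every step as the training window grows.
    Returns the out-of-sample RMSE at each evaluation point.
    """
    errors = []
    for t in range(m0, actual.shape[0]):
        w = train_weights(actual[:t], forecasts[:t])      # updated each period
        combined = np.tensordot(w, forecasts[t], axes=1)  # weighted curve
        diff = actual[t] - combined
        errors.append(np.sqrt(diff @ diff / diff.size))
    return np.array(errors)
```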
Figure 1.9 shows the predictive performance of the frequentist combinations
(C1–C5) relative to the random walk using a rolling window of 48
observations. With the exception of factor OLS, all combinations beat the
random walk on average at the one-month horizon. As the horizon increases,
the performance of the Inverse Error, Equal Weights and especially MARS
combinations worsens13, while the factor OLS scheme improves significantly.
Past the one-month horizon, the simple OLS scheme outperforms all other
frequentist combinations, approaching the random walk at the one- and
two-year horizons, and beating the random walk for the entire out-of-sample
evaluation period at the three-year horizon. An interesting result is that
the predictive performances of Inverse Error and Equal Weights are almost
identical in our setting.
Figure 1.10 shows the performance of the Bayesian model averaging
schemes C6 and C7 relative to the random walk, as well as Equal Weights
and simple OLS, for comparison with the frequentist combinations. We see


[Figure: four panels (1-, 12-, 24- and 36-month horizons) plotting rolling
RMSE relative to the random walk (RW) for the PL and MML combinations, with
the sOLS and EW schemes shown for reference, over 1998–2006.]
Figure 1.10 Dynamic predictive performance for Bayesian combinations relative to
random walk

that our Bayesian schemes do not beat the frequentist ones in the dynamic-
evaluation approach.
Figure 1.11 compares Bayesian log combinations C8 and C9 to the random
walk. The Equal Weights and simple OLS schemes are also displayed for ref-
erence. We observe that using weights based on the logs of marginal model
and predictive likelihoods improves the performance of Bayesian schemes
significantly: they beat the random walk and the simple OLS scheme at
the one-month horizon and get close to the Equal Weights combination at
longer horizons.

1.4.2 Static model averaging


We may not always be in the position where we can increase the training
period as is done in the dynamic setting14. So we have to test how well
the different combinations perform if we calculate the weights over a fixed
training period and apply these weights to all remaining individual fore-
casts out-of-sample, without updating the training period. The steps for
static model averaging are given in Figure 1.8.
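The static procedure of Figure 1.8 trains the weights once on the fixed window and then holds them constant out of sample; a hedged sketch, with our own names for the inputs:

```python
import numpy as np

def static_evaluation(actual, forecasts, train_weights, m0=120):
    """Fixed-window evaluation: weights estimated once, never updated.

    actual:    shape (T, N), realized curves.
    forecasts: shape (T, K, N), the K individual models' forecasts.
    train_weights: callable mapping a training sample to K weights.
    """
    w = train_weights(actual[:m0], forecasts[:m0])  # single training pass
    errors = []
    for t in range(m0, actual.shape[0]):
        combined = np.tensordot(w, forecasts[t], axes=1)
        diff = actual[t] - combined
        errors.append(np.sqrt(diff @ diff / diff.size))
    return np.array(errors)
```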


1-Month horizon 12-Month horizon


1.1 1.8
Rolling RMSE (bps.)

RW

Rolling RMSE (bps.)


1.05 1.6 PL
MML
1 1.4 sOLS
EW
0.95 1.2
0.9

Copyright material from www.palgraveconnect.com - licensed to Taiwan eBook Consortium - PalgraveConnect - 2011-03-03
1
0.85
0.8
1998 2000 2002 2004 2006 1998 2000 2002 2004 2006
Time (yrs.) Time (yrs.)

24-Month horizon 36-Month horizon


1.8 1.8
Rolling RMSE (bps.)

Rolling RMSE (bps.)


1.6
1.6
1.4
1.4
1.2
1.2
1
1
0.8
0.8
0.6
1998 2000 2002 2004 1998 2000 2002 2004
Time (yrs.) Time (yrs.)
Figure 1.11 Dynamic predictive performance for Bayesian log combinations relative
to random walk

Figures 1.12–1.14 show the predictive performance of our nine combinations
in the static model averaging setting. Comparing them with the corresponding
figures from the dynamic setting, we see that the Equal Weights, Inverse
Error and Bayesian schemes are more robust to the training period than the
other combinations – MARS, simple OLS and factor OLS – in the sense that the
predictive performance of the former combinations is quite similar in both
the dynamic and static settings and thus not very sensitive to the
estimation period. The performance of the latter schemes (particularly MARS)
deteriorates when we estimate the weights over a fixed training period.
However, the performance of the combinations relative to each other is the
same in both the dynamic and static settings: Equal Weights and simple OLS
are still the best frequentist schemes, and the Bayesian log likelihood
schemes are close to Equal Weights. Finally, for horizons beyond one month,
the simple OLS combination beats all other schemes and is only slightly
worse than the random walk at long horizons.

1.4.3 Best combinations vs. best individual models


Since the objective of this chapter is to answer the question of whether
there is benefit from using combinations of models as opposed to a single


[Figure: four panels (1-, 12-, 24- and 36-month horizons) plotting
predictive performance relative to the random walk (RW) for the EW, IE,
sOLS, fOLS and MARS combinations over 1998–2006.]

Figure 1.12 Static predictive performance for frequentist combinations
relative to random walk

best-performing model, it makes sense to address this question directly.


From Figure 1.3, we see that the Nelson-Siegel model performs well for short
horizons, and the Fourier Series model performs well for longer horizons.
Figure 1.15 compares these two models, and the combination schemes that
perform best in the static model averaging setting (Equal Weights, Log
Predictive Likelihood, and simple OLS), to the random walk.
We can make the following observations. All of our best combinations
beat the best individual models at the one-month horizon on average.
As the length of the horizon increases, Equal Weights and Log Predictive
Likelihood schemes outperform the Nelson-Siegel model, but not the
Fourier Series model. On average, the simple OLS combination outper-
forms both individual models at all horizons. While it may be tempting to
conclude that the simple OLS combination should be implemented instead
of a single model, we are not ready to accept this conclusion. First, simple
OLS is unconstrained, which means that the weights can be negative and
they need not sum to one. The idea of assigning negative weights to par-
ticular forecasts may be difficult to accept for policymakers. Consequently,
there may be practical obstacles to implementing this combination
scheme. Also, forecasts with unconstrained OLS weights and no intercept


1-Month horizon 12-Month horizon


1.15
1.1 1.8 RW
1.6 PL
1.05 MML
1 1.4 sOLS
EW
0.95 1.2
0.9

Copyright material from www.palgraveconnect.com - licensed to Taiwan eBook Consortium - PalgraveConnect - 2011-03-03
1
0.85 0.8

1998 2000 2002 2004 2006 1998 2000 2002 2004 2006

24-Month horizon 36-Month horizon


2
1.8
1.8
1.6
1.6
1.4 1.4

1.2 1.2
1 1
0.8 0.8
1998 2000 2002 2004 1998 2000 2002 2004

Figure 1.13 Static predictive performance for Bayesian combinations
relative to random walk

(as is the case in our situation) may be biased, as pointed out in Diebold
and Pauly (1987). Second, some preliminary testing results (not reported
here) show that the simple OLS scheme is sensitive to the subset of data
used for the training period and to the length of the training period, as
can be expected with least squares estimation in a relatively small sample.
Further analysis of this particular combination scheme, including hypoth-
esis testing and forecast error analysis such as that done in Li and Tkacz
(2004), is left for future work.

1.5 Final remarks

The main question of this chapter is whether or not one can combine
multiple interest-rate models to create a single model that outperforms
any one individual model. To this end, nine alternative model averaging
techniques are considered, including choices from the frequentist and
Bayesian literature as well as a few new alternatives. These approaches are
compared, in the context of both a dynamic and a static forecasting exer-
cise, with more than thirty years of monthly Canadian interest-rate and
macroeconomic data. We do not conduct hypothesis tests in this chapter,


[Figure: four panels (1-, 12-, 24- and 36-month horizons) plotting
predictive performance relative to the random walk (RW) for the PL and MML
log combinations, with the sOLS and EW schemes shown for reference, over
1998–2006.]

Figure 1.14 Static predictive performance for Bayesian log combinations relative to
random walk

so we do not claim any statistical improvements, but we can still make


some observations regarding the predictive performance of the different
model combinations.
The principal observation is that we find evidence of model combin-
ations outperforming the best individual forecasts over the evaluation
period. The degree of outperformance depends, however, on both the fore-
casting horizon and the type of model combination. At shorter forecasting
horizons, for example, almost all model combinations outperform the best
single forecast. As the forecasting horizon increases, however, only the sim-
ple OLS averaging scheme consistently outperforms the best single-model
forecast. Indeed, the simple OLS approach also outperforms, on a number
of occasions, the rather difficult random-walk forecasting benchmark; this
is something that none of the individual forecasts achieve on a consistent
basis. It is also clear that the simpler model combination approaches tend
to outperform their more complex counterparts. Similarly to our results,
Ravazzolo et al. (2007) find that the unconstrained OLS combination
scheme (like our simple OLS scheme) and combinations with time-varying
weights outperform more complex schemes. While this is consistent with
the evidence in the literature that simpler schemes dominate their more


[Figure: four panels (1-, 12-, 24- and 36-month horizons) plotting
predictive performance relative to the random walk (RW) for the NS and FS
individual models and the PL(log), sOLS and EW combinations over 1998–2006.]

Figure 1.15 Predictive performance of best individual models and best combinations
relative to random walk, static setting

complex counterparts, Stock and Watson (2004) note that it is difficult to


explain such findings in the context of combining weights in a stationary
environment.
Even though the simple OLS combination scheme generally performs
quite well, it does have the disadvantage of demonstrating some instabil-
ity with respect to the training period selected for the determination of
the model-combination parameters. We need to investigate the simple OLS
combination scheme further and test its sensitivity to the training period
(its length and the time over which the weights are trained). This type of
analysis should also be done for other combination schemes, such as Log
Predictive Likelihood, that have shown promise in our study. Another inter-
esting direction is to investigate the predictive performance of the com-
bination of the less stable simple OLS and the very stable, and generally
well-performing, Equal Weights.
One more possibility for further investigation is to consider combinations
that are based on time-varying weights. Ravazzolo et al. (2007) find that
time-varying combinations perform well in terms of predictive ability as


well as in economic sense, based on the results of an investment exercise.


Time-varying weights have the advantage that they may capture structural
breaks by assigning varying weights to the combined models at different
periods. However, we have to be careful about incorporating time-varying
weights in the context of funds management, since we may not be at liberty
to update the information set in operational activities.

Acknowledgements

We would like to thank Scott Hendry, Greg Tkacz, Greg Bauer, Chris D’Souza,
and Antonio Diez de los Rios from the Bank of Canada; Francesco Ravazzolo
from the Norges Bank; Michiel de Pooter from the Econometric Institute,
Erasmus University Rotterdam; and David Dickey from North Carolina State
University. We retain any and all responsibility for errors, omissions, and
inconsistencies that may appear in this work.

Notes
1. More complex mappings are considered by Leippold and Wu (2000) and Cairns
(2004), among others.
2. If such outcomes occur, there are a number of possible solutions. For example,
   one could substitute for the arbitrage forecast the previous forecast or some
   combination of previous forecasts.
3. Using the state-space (Diebold et al. 2006) adaptation of the Nelson-Siegel model,
De Pooter et al. (2007) account for the effects of macroeconomic variables in a
similar manner.
4. De Pooter et al. (2007) discuss issues that arise in the Bayesian inference of affine
models, whose parameters are highly nonlinear, similarly to our models.
5. While some may argue that such assumption is not realistic, we feel that it is
justified by the tangible benefits of greatly reduced estimation complexity and
computational effort. We think that such benefits would not be outweighed by
the advantages of introducing error into the observation equations to make the
already stylized models more realistic.
6. The random walk is scaled to one. Consequently, values higher than one imply
worse, and lower than one better, performance than the random walk. We opt
for graphs with relative root mean squared forecast errors as opposed to the com-
monly reported tables with the same information, because we have found graphs
easier to read.
7. The correlation between the forecast errors from the NS, SS, ES and FS models is
shown in Bolder and Romanyuk (2008).
8. The difference between the two types of schemes is that ad-hoc combinations can
be applied to forecasts generated in either a frequentist or a Bayesian setting,
whereas Bayesian combination schemes should be applied to Bayesian forecasts.
9. This can be done with or without the intercept β0,h and/or forcing βk,h to add up
to one. We have found (in studies unreported here) that unconstrained regres-
sion without an intercept works best in our case.

10.1057/9780230251298 - Interest Rate Models, Asset Allocation and Quantitative Techniques for Central Banks and Sovereign Wealth
Funds, Edited by Arjan B. Berkelaar, Joachim Coche and Ken Nyholm
28 David Jamieson Bolder and Yuliya Romanyuk

10. As with the simple OLS combination scheme, we can do this with or without an
intercept or forcing the coefficients to add up to one, but we obtain better results
for the specification with no intercept and no restrictions.
11. Model averaging based on predictive likelihood methods is not limited to the
Bayesian framework. Kapetanios et al. (2006) use the predictive likelihood, as
opposed to the likelihood of the observed data, to construct weights based on
information criteria in a frequentist setting.
12. There are many ways to do this. We are not claiming that our suggested method
is superior in any way; it is just a way to measure dispersion in the observed
data.
13. The MARS result is not surprising: as shown in Sephton, the MARS scheme
is very promising in-sample, but its performance does not carry over
out-of-sample.
14. For instance, as debt managers in a central bank, we may have to use weights
calculated over some fixed period to calculate term-structure forecasts for the
purposes of managing a foreign reserves portfolio or debt issuance for the next
couple of years.

Bibliography
Andersson, M.K. and Karlsson, S. (2007). ‘Bayesian Forecast Combination for VAR
Models’. Sveriges Riksbank Working Paper 216.
Ang, A., Dong, S. and Piazzesi, M. (2007). ‘No-Arbitrage Taylor Rules’. National Bureau
of Economic Research Working Paper 13448.
Ang, A. and Piazzesi, M. (2003). ‘A No-Arbitrage Vector Autoregression of Term
Structure Dynamics with Macroeconomic and Latent Variables’. Journal of Monetary
Economics, 50, 745–787.
Bates, J.M. and Granger, C. W. J. (1969). ‘The Combination of Forecasts’. Operational
Research Quarterly, 20(4), 451–468.
Bolder, D.J. (2007). ‘Term-Structure Dynamics for Risk Management: A Practitioner’s
Perspective’. Bank of Canada Working Paper 2006–48.
Bolder, D.J. and Gusba, S. (2002). ‘Exponentials, Polynomials, and Fourier Series:
More Yield Curve Modelling at the Bank of Canada’. Bank of Canada Working
Paper 2002–29.
Bolder, D.J. and Liu, S. (2007). ‘Examining Simple Joint Macroeconomic and Term-
Structure Models: A Practitioner’s Perspective’. Bank of Canada Working Paper
2007–49.
Bolder, D.J. and Romanyuk, Y. (2008). ‘Combining Canadian Interest-Rate Forecasts’.
Bank of Canada Working Paper 2008–34.
Bolder, D.J. and Rubin, T. (2007). ‘Optimization in a Simulation Setting: Use of
Function Approximation in Debt Strategy Analysis’. Bank of Canada Working
Paper 2007–13.
Cairns, A.J.G. (2004). ‘A Family of Term-Structure Models for Long-Term Risk
Management and Derivative Pricing’. Mathematical Finance, 14(3), 415–444.
Clyde, M. and George, E. I. (2004). ‘Model Uncertainty’. Statistical Science, 19(1),
81–94.
Dai, Q. and Singleton, K. J. (2000). ‘Specification Analysis of Affine Term Structure
Models’. Journal of Finance, 55(5), 1943–1978.

Combining Canadian Interest Rate Forecasts 29

De Pooter, M., Ravazzolo, F. and van Dijk, D. (2007). ‘Predicting the Term
Structure of Interest Rates: Incorporating Parameter Uncertainty, Model
Uncertainty and Macroeconomic Information’. Tinbergen Institute Discussion Paper
TI 2007028/4.
Diebold, F.X. and Li, C. (2003). ‘Forecasting the Term Structure of Government Bond
Yields’. National Bureau of Economic Research Working Paper 10048.
Diebold, F.X., and Pauly, P. (1987). ‘Structural Change and the Combination of
Forecasts’. Journal of Forecasting, 6, 21–40.

Copyright material from www.palgraveconnect.com - licensed to Taiwan eBook Consortium - PalgraveConnect - 2011-03-03
Diebold, F.X., Rudebusch, G. D. and Aruoba, S. B. (2006). ‘The Macroeconomy and
the Yield Curve: A Dynamic Latent Factor Approach’. Journal of Econometrics, 131,
309–338.
Draper, D. (1995). ‘Assessment and Propagation of Model Uncertainty’. Journal of the
Royal Statistical Society, Series B (Methodological), 57(1), 45–97.
Duffee, G.R. (2002). ‘Term Premia and Interest Rate Forecasts in Affine Models’.
Journal of Finance, 57(1), 405–443.
Duffie, D., Filipovic, D. and Schachermayer, W. (2003). ‘Affine Processes and
Applications in Finance’. Annals of Applied Probability, 13(3), 984–1053.
Eklund, J. and Karlsson, S. (2007). ‘Forecast Combination and Model Averaging Using
Predictive Measures’. Econometric Reviews, 26(2–4), 329–363.
Fernandez, C., Ley, E. and Steel, M. F. J. (2001). ‘Benchmark Priors for Bayesian Model
Averaging’. Journal of Econometrics, 100, 381–427.
Geweke, J. and Whiteman, C. (2006). ‘Bayesian Forecasting’, In Handbook of Economic
Forecasting, Vol. 1, Elliott, G., C.W.J. Granger and A. Timmermann (Eds), North-
Holland.
Hall, S.G. and Mitchell, J. (2007). ‘Combining Density Forecasts’. International Journal
of Forecasting, 23, 1–13.
Hendry, D.F. and Clements, M. P. (2004). ‘Pooling of Forecasts’. Econometrics Journal,
7, 1–31.
Hoeting, J.A., Madigan, D., Raftery, A. E. and Volinsky, C. T. (1999). ‘Bayesian Model
Averaging: A Tutorial’. Statistical Science, 14(4), 382–417.
Kadiyala, K.R. and Karlsson, S. (1997). ‘Numerical Methods for Estimation
and Inference in Bayesian VAR-Models’. Journal of Applied Econometrics, 12, 99–132.
Kapetanios, G., Labhard, V. and Price, S. (2005). ‘Forecasting Using Bayesian and
Information Theoretic Model Averaging: An Application to UK Inflation’. Bank of
England Working Paper 268.
Kapetanios, G., Labhard, V. and Price, S. (2006). ‘Forecasting Using Predictive
Likelihood Model Averaging’. Economics Letters, 91, 373–379.
Kass, R.E. and Raftery, A.E. (1995). ‘Bayes Factors’. Journal of the American Statistical
Association, 90(430), 773–795.
Koop, G. and Potter, S. (2003). ‘Forecasting in Dynamic Factor Models Using Bayesian
Model Averaging’. Econometrics Journal, 7, 550–565.
Leippold, M. and Wu, L. (2000). ‘Quadratic Term Structure Models’. Swiss Institute of
Banking and Finance Working Paper.
Li, F. and Tkacz, G. (2004). ‘Combining Forecasts with Nonparametric Kernel
Regressions’. Studies in Nonlinear Dynamics and Econometrics, 8(4), Article 2.
Litterman, R.B. (1986). ‘Forecasting with Bayesian Vector Autoregressions – Five Years
of Experience’. Journal of Business and Economic Statistics, 4(1), 25–38.
Litterman, R.B. and Scheinkman, J. (1991). ‘Common Factors Affecting Bond Returns’.
Journal of Fixed Income, 1, 54–61.


Madigan, D. and Raftery, A. E. (1994). ‘Model Selection and Accounting for Model
Uncertainty in Graphical Models Using Occam’s Window’. Journal of the American
Statistical Association, 89(428), 1535–1546.
Min, C. and Zellner, A. (1993). ‘Bayesian and Non-Bayesian Methods for Combining
Models and Forecasts with Applications to Forecasting International Growth Rates’.
Journal of Econometrics, 56, 89–118.


2
Updating the Yield Curve to
Analyst’s Views

Leonardo M. Nogueira

2.1 Introduction

Fixed income analysts are accustomed to monitoring a few benchmark yields
on a continuous basis and providing point estimates for these yields, or for
a combination of them. Yet, the optimization of fixed income portfolios
requires an accurate forecast of not only a few benchmark yields, but of com-
plete yield curves. This chapter derives a forecast of one or more yield curves
that is consistent with analysts’ views. The model is based on a novel appli-
cation of principal component analysis (PCA). It can be extended to other
markets and has no restrictions on the number of forecast variables, or the
number of views. We consider examples of forecasting the government bond
yield curves of the US, the Eurozone and the UK, simultaneously or not.
The translation of an analyst’s expectations about a few market variables
into reliable forecasts of other market variables is a long-standing problem
in financial modelling. For instance, in a hypothetical scenario for the fol-
lowing month, a fixed income analyst might have views on the US and UK
yield curves, and be interested in the movement of the Euro yield curve that
is consistent with those views. A solution to this type of problem requires
forecasting a large number of variables (such as all benchmark yields of the
Euro curve) and dealing with the complex correlation structure between
different sectors of the yield curves.
This chapter solves the forecasting problem by mapping the analyst’s
views to a forecast of the principal components of the set of market variables.
The mapping is unique, linear and correct under the assumption that the
analyst’s views can be fully explained by broad market movements (e.g. sur-
prises about inflation, GDP growth, central bank activity, etc.) rather than
by specific dynamics of individual market variables.
The proposed model can be applied to any set of correlated random vari-
ables. As it turns out, all we need to run the model is a covariance matrix
and a good representation of the analyst’s views. Having said that, for brev-
ity this chapter focuses on fixed income applications and considers only the



case of forecasting yield curves that are consistent with views on elements
of the same curves. The extension to other applications would require a
straightforward change of variables.
A typical fixed income analyst would express market views in terms of
projections to a limited set of benchmark yields or spreads. These views
would be used in turn by investors and fund managers to produce trading
strategies or to optimize fixed income portfolios. In this context, the model
thus derived could, for instance, be applied to extend the analyst’s views to
other markets or to check the consistency of the views.
In the following sections, we first describe the notation for the views,
introduce the model and provide a simple example from the US Treasury
yield curve. Next we discuss how to express uncertainty in the views.
Finally, we revise the example above and show how to find the Euro
yield curve that is consistent with a set of views on the US and UK yield
curves.

2.2 Expressing views

Let m be the number of yields to forecast and n be the number of analyst’s
views on these yields, with 1 ≤ n ≤ m. Define yt as the m × 1 vector of yields
at time t and suppose that an analyst expresses her views on the yield curve
for time t + 1 as:

Vyt+1 = qt+1 + εt+1        (1)

In (1), V is an n × m matrix that normally takes elements from the set {−1, 0, 1},
qt+1 is the n × 1 vector of expected values of the views and εt+1 is another n × 1
vector which captures the random error in the forecast. We assume that
E[εt+1] = 0, so that qt+1 = VE[yt+1] since V is non-random. We also assume that
var[εt+1] = Ω, in which Ω is the n × n covariance matrix that captures the
analyst’s uncertainty about the views. To avoid redundancy of views, we require
that rank(V) = n or, equivalently, that det(VVᵀ) ≠ 0. Although (1) is similar
to the specification of views in the Black-Litterman portfolio optimization
model, we emphasize that, unlike Black and Litterman (1992), we do
not take a Bayesian approach in this chapter.
The rationale for (1) is that the analyst has a forecast of where a few yields
should be at t + 1 but is not certain about the forecast, hence Ω denotes
this uncertainty. In practice, (1) could be the output of another forecasting
model that links the future values of a few benchmark yields to expected
movements in macroeconomic variables.
Example 1. Suppose the analyst holds two independent views on the US
Treasury bond yield curve for t + 1:

i. the expected five-year yield is 5% with a standard error of 1%;


ii. the expected two-year to ten-year spread is 50 basis points (bp) with a
standard error of ten bp.

These views may be written in matrix notation as:

V = [ 0  1  0 ; −1  0  1 ],   yt+1 = ( y2t+1, y5t+1, y10t+1 )ᵀ

qt+1 = ( 5%, 50bp )ᵀ,   Ω = var[εt+1] = diag( (1%)², (10bp)² )

where one can immediately identify the elements of (1).
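For concreteness, the elements of Example 1 map directly into code. The sketch below is our own illustration (not from the chapter), using NumPy, with yields in decimals and variable names of our choosing:

```python
import numpy as np

# Views on (y2, y5, y10): row 1 picks out the 5-year yield,
# row 2 picks out the 2-year to 10-year spread.
V = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0]])              # n x m with n = 2, m = 3

q_next = np.array([0.05, 0.0050])             # 5% level, 50 bp spread
Omega = np.diag([0.01 ** 2, 0.0010 ** 2])     # (1%)^2 and (10 bp)^2

# Redundancy check from the text: rank(V) = n, i.e. det(V V') != 0.
assert np.linalg.matrix_rank(V) == V.shape[0]
assert abs(np.linalg.det(V @ V.T)) > 1e-12
```

The two assertions implement the non-redundancy condition rank(V) = n from the text; a spread view and a level view on different points of the curve clearly pass it.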

2.3 Forecasting yields

Because the number of views (n) is typically less than the number of yields
to forecast (m), the solution to (1) is not unique in general. In fact, when
n < m there are an infinite number of yield curves that satisfy (1) and, to
choose among all possible solutions, we need a model that is consistent not
only with the views expressed in (1) but also with the covariance matrix of
yield variations.
One possibility is to use a multivariate regression on yield variations.
However, as the number of views and the number of variables grow, one
must account for both cross-section and time series properties of yield
curves, which can be challenging.
Another idea is to apply Bayesian theory to derive the conditional joint
probability distribution of yields given the analyst’s views – also known as
the ‘posterior’ distribution of yields. The drawback is that a tractable poster-
ior distribution can be obtained only in special cases. As a result, the joint
normal distribution is often used and this may be inappropriate if there is
evidence of strong non-normality in the data (see Meucci 2005: S.7.1 and
Rachev et al. 2008).
A third possibility is to assume a factor model for the yield curve. A good
example is the popular Nelson-Siegel family of models (see e.g. Nelson and
Siegel 1987; Diebold and Li 2006). Here the factors have the nice interpret-
ation of level, slope and curvature components of a term structure. But
unfortunately calibration is non-linear and extensions to multiple term
structures – for several countries or different asset classes – are not straight-
forward (see Diebold et al. 2008 for an extension of the Diebold-Li model to
multiple countries).


To overcome some of the drawbacks of the approaches above, this section


introduces an alternative, simpler model that (i) is tractable, (ii) does not rely
on a specific probability distribution, (iii) does not assume any structure for
the factors, (iv) is linear, (v) is easily extended to higher dimensionality, (vi) is
not restricted to term structures, and finally (vii) gives intuitive forecasts.
Given a set of m random variables to forecast and a set of n views on
linear combinations of these variables, with 1 ≤ n ≤ m, we assign a point
estimate and a standard error to each random variable. This is achieved by
mapping the n views to a forecast of the n most important principal com-
ponents of the set of normalized random variables. The mapping is unique,
linear and correct under the assumption that the analyst’s views can be
fully explained by movements on the first n principal components. These
movements are often associated with market-wide shocks, such as those
caused by surprises about inflation and unemployment rates, GDP growth,
monetary policy etc. By contrast, shocks caused by, say, the activity of a
large institutional investor or some temporary liquidity squeeze tend to
have a limited impact on the market and are captured by the remaining
m − n principal components.

2.3.1 The model


Suppose yields are observable at time t, such that qt = Vyt denotes the value
at time t of the linear combinations of yields on which views are taken.
Subtracting qt from both sides of (1) gives:

VΔy = Δq + εt+1        (2)

where, by definition, Δy = yt+1 − yt and Δq = qt+1 − qt.
Use E[∆y] = μm×1 to denote the unconditional mean of ∆y and var
[∆y] = S = DCD to denote the unconditional covariance matrix of ∆y, where
Dm×m is the diagonal matrix of standard deviations and Cm×m is the correl-
ation matrix. These matrices may be estimated at time t from historical data.
μ and S are defined as unconditional forecasts because they are calculated
before the views are taken into account.
From the spectral decomposition of a symmetric matrix (see Jolliffe
2002: S.2.1) we have C = WΛWᵀ, in which Λm×m is the diagonal matrix of
eigenvalues of C in descending order, and Wm×m denotes the normalized
eigenvectors of C in the same order as Λ. Define Λ̂(m−n)×(m−n) as the sub-matrix
of Λ with the smallest m − n eigenvalues along the diagonal and decompose
W into the sub-matrices W̃ and Ŵ according to:

W = W
def
 ˆ
W 
 m×n m×( m − n ) 


such that W̃ contains the first n columns of W and Ŵ contains the remain-
ing m − n columns of W.
Theorem 1. Under the assumption that all yield curve movements implicit
in the views can be fully explained by movements on the first n principal
components of normalized yield variations, the forecast yield curve at time
t + 1 is given by:

E*[yt+1] = yt + μ + DA( Δq − Vμ )        (3)

var*[yt+1] = D( AΩAᵀ + BΛ̂Bᵀ )D

with Am×n = W̃(VDW̃)⁻¹ and Bm×(m−n) = (Im − AVD)Ŵ, where Im is the m × m
identity matrix.
Theorem 1 gives the point estimate and the covariance matrix of the
vector of yields yt+1 that are consistent with the views expressed in (1). We
use a star (*) to stress that this forecast is conditional on the views and the
assumption above.
DA: Rn → Rm maps the n views to a forecast of movements for the m yields,
and DB: Rm−n → Rm maps the error of the approximation using PCA (prin-
cipal component analysis) to an error for the forecast. Therefore, var*[yt+1]
is the sum of two clearly defined terms: DAΩAᵀD, which captures the ana-
lyst’s uncertainty about the views, and DBΛ̂BᵀD, which captures the error in
the PCA approximation.
To use Theorem 1 we need to observe the yield curve at time t, to have
a set of subjective views {V, qt+1, Ω}, and to have forecasts for the (uncondi-
tional) mean vector and covariance matrix of yield variations. In the con-
text of yield curves we regard μ = 0 as an acceptable assumption because μ
is small in general and has a secondary role in the forecast.

2.4 Example from the US yield curve

The following example shows how Theorem 1 may be used to forecast the US
Treasury bond yield curve that is consistent with a set of views. We consider
the actual yields-to-maturity available from Bloomberg for nine benchmark
maturities (one-, three- and six-month Treasury bills; and one-, two-, three-,
five-, ten- and 30-year Treasury bonds).
Example 2. Today is 31 December 2007, and the analyst has two views on
the US yield curve on 31 January 2008:

i. the three-month yield is expected to decrease from 3.24% to 1.94%;
ii. the five-year yield is expected to decrease from 3.44% to 2.76%.


In matrix notation, the elements of (1) are:

 0 1 0 0 0 0 0 0 0
V=
 0 0 0 0 0 0 1 0 0
(
y Tt +1 = yt1+M1 yt3+M1 yt6+M1 yt1+Y1 yt2+Y1 yt3+Y1 yt5+Y1 yt10+1Y yt30+1Y )
 1.94

Copyright material from www.palgraveconnect.com - licensed to Taiwan eBook Consortium - PalgraveConnect - 2011-03-03
q t +1 = 
 2.76

We assume that Ω = VSVᵀ, where S is the estimated 9 × 9 covariance matrix
of monthly first differences of yields from February 2003 to December
2007.
Table 2.1 compares the current yield curve (on 31 December 2007), the
forecast and the realized curve on 31 January 2008 (i.e. what actually hap-
pened in the market). The standard errors are provided in brackets under
each forecast. Figure 2.1 shows the same curves and includes the confidence
intervals of yields in terms of two bands, each of them two standard errors
away from the forecast.
The views in Example 2 were deliberately chosen to match the realized
values of the three-month and five-year yields on 31 January 2008, and
this date was chosen because yield movements were exceptionally large.
Therefore, this example allows us to answer the following question: if the
analyst can provide very accurate forecasts of a few points of the yield
curve, how good is the forecast given by Theorem 1 for the remaining
points?
One example is certainly insufficient for a proof, but Figure 2.1 provides a
good indication that the forecast can be very accurate, even during periods
of extreme market activity. All realized values are within the confidence
intervals given by the two bands. Thus, providing we have an accurate fore-
cast of the three-month and the five-year US yields, we should be able to
forecast the entire yield curve accurately.1

Table 2.1 US Treasury yield curves for Example 2. The forecast of the long-term
yields is accurate, but one may experience problems with short-term yields. All values
are in percentages

             1M      3M      6M      1Y      2Y      3Y      5Y      10Y     30Y
31/12/07     2.61    3.24    3.39    3.25    3.05    3.02    3.44    4.02    4.45
Forecast     0.87    1.94    2.27    2.28    2.10    2.09    2.76    3.64    4.31
(st. error)  (0.41)  (0.20)  (0.19)  (0.21)  (0.26)  (0.29)  (0.28)  (0.25)  (0.22)
Realized     1.58    1.94    2.05    2.08    2.09    2.17    2.76    3.59    4.32


[Figure: yield curve on 31/12/2007 with forecast, realised values and
upper/lower confidence bands, plotted against maturities 1m to 30y]

Figure 2.1 US Treasury yield curves for Example 2. The realized values are very close
to the forecast for long-term yields

The difference between the forecast and the realized yield curves is larger
for the one-month yield. This may be due to a variety of reasons, but we
highlight that:

● The one-month yield shows a weak correlation to the rest of the curve.
However, the effect of correlation is already taken into account by Theorem
1 so that a weaker correlation would generally imply a larger standard
error in the forecast, as observed in this case.
● The analyst has two views; thus, Theorem 1 assumes that these views
are explained by the first and second principal components alone.
These components are responsible for the ‘trend’ and ‘tilt’ movements
of the yield curve. Hence, the forecast is the combination of a paral-
lel movement (because the views for both three-month and five-year
yields imply a negative trend) with a substantial ‘steepening’ of the
curve (the three-month to five-year spread increased from 20bp to
82bp). As a result, both views push the one-month yield downwards
and explain why its forecast is so low. See Loretan (1997) and Alexander
(2008a: Ch.II.2) for more applications of PCA to fixed income and other
financial markets.
● We used the historical, equally weighted covariance matrix of monthly
first differences of yields from February 2003 to December 2007, but this
matrix may be inappropriate for a distressed period. Alternatively one
could estimate the covariance matrix using EWMA or GARCH models,
for instance, because these models give higher weights to more recent
information. See e.g. Alexander (2008b) for a review of models to estimate
covariance matrices.
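As an illustration of the EWMA alternative mentioned above, a minimal sketch follows; this is our own code, and the smoothing constant λ = 0.94 is the conventional RiskMetrics-style choice, not a value taken from the chapter:

```python
import numpy as np

def ewma_covariance(dy, lam=0.94):
    """Exponentially weighted covariance of yield changes dy (T x m):
    S_t = lam * S_{t-1} + (1 - lam) * dy_t dy_t', so recent observations
    receive higher weight than in the equally weighted estimate."""
    T, m = dy.shape
    S = np.zeros((m, m))   # could also be seeded with a sample covariance
    for x in dy:
        S = lam * S + (1.0 - lam) * np.outer(x, x)
    return S
```

Seeding S with an initial sample covariance reduces start-up bias; with a long enough sample the seed matters little, since its weight decays as λ to the power of T.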


2.5 Expressing uncertainty in the views

One of the hardest tasks when expressing views is to choose the uncer-
tainty matrix Ω. When (1) is the output of another forecasting model, Ω
follows from this model and no further assumptions are necessary. However,
when no such model is available, one may consider one of the alternatives
below.

Alternative I: Assume that views are independent (as if drawn from inde-
pendent experiments) and define Ω as a diagonal matrix H according to the
analyst’s confidence on each view:

ΩI = H = κ( In − G )G⁻¹ = κ · diag( (1 − g1)/g1, … , (1 − gn)/gn )        (4)

where In is the n × n identity matrix, G is the n × n diagonal matrix of cred-
ibility weights gi ∈ (0, 1] and κ is an optional positive penalty term, possibly
linked to the risk aversion of the analyst. The drawback is that this defin-
ition is inconsistent with empirical evidence, since yields (and hence the
views) are highly correlated in practice.
Alternative II: As in Example 2 above, let S be the covariance matrix of
yield variations and set ΩII = VSVᵀ. This definition guarantees consistency
but does not allow the analyst to express confidence in the views.
Alternative III: Combine the alternatives above to be consistent with yield
correlations and capture the analyst’s confidence at the same time. This is
obtained if one defines ΩIII = HVSVᵀH. This effectively scales up or down
the variances given by ΩII according to the analyst’s confidence in the
views.
There are certainly many other ways of expressing uncertainty in the
views, but we believe that choosing one of the alternatives above provides a
reasonable starting point.
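The three alternatives are easy to compare side by side; the sketch below is our own code (hypothetical names), with g the vector of credibility weights and kappa the penalty term of equation (4):

```python
import numpy as np

def view_uncertainty(V, S, g, kappa=1.0, alternative="III"):
    """Uncertainty matrix Omega for the views, per Alternatives I-III.
    V: n x m view matrix; S: m x m covariance of yield changes;
    g: credibility weights g_i in (0, 1], one per view."""
    g = np.asarray(g, dtype=float)
    H = kappa * np.diag((1.0 - g) / g)    # eq. (4): kappa (I_n - G) G^{-1}
    if alternative == "I":
        return H                          # independent views
    if alternative == "II":
        return V @ S @ V.T                # consistent with yield correlations
    return H @ V @ S @ V.T @ H            # III: scaled by confidence
```

Note that a view held with full confidence (gi = 1) zeroes the corresponding row and column of ΩIII, which is exactly why the confidence interval for the UK ten-year view in Example 3 collapses to a point.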

2.6 Going global

We now return to our starting problem, in which the analyst had views on
the US and the UK yield curves and would like to forecast the impact of
these views on the Euro curve. This example illustrates that the model can
be easily applied to higher dimensionality without losing its tractability.


Example 3. Today is 30 April 2007, and the analyst has two views for
31 May 2007:

i. the US two-year yield is expected to increase from 4.59% to 4.91% with
80% confidence;
ii. the UK ten-year yield is expected to increase from 5.04% to 5.26% with
100% confidence.

Given these views, we ask: What is the impact of the views on our expect-
ation for the Euro curve?
To answer this question, we consider seven vertices (three-month, one-year,
two-year, three-year, five-year, ten-year, 30-year) for each of the three yield
curves (US, Eurozone and UK) and define yt+1 as the 21 × 1 vector of yields.
The unconditional mean vector and covariance matrix of yield variations are
estimated using monthly data from February 2003 to April 2007, and we set
Ω = ΩIII, as in Alternative III above, and assign a penalty term of 1.
To gain more intuition on the correlation structure among the three yield
curves, Figure 2.2 plots the first three eigenvectors of the correlation matrix
of the 21 yield variations. The first eigenvector, which is associated with
the first principal component, explains 62.8% of the total variability of
the data. This eigenvector is positive for all yields; thus it is interpreted as
the ‘trend’ component and implies that the three curves move up or down
together most of the time.2 The second eigenvector explains 12.5% of the
yield curve co-movements and has a mixed impact on the curves. According

[Figure: eigenvectors w1, w2 and w3 plotted against the 3m–30y maturities
of the US, Euro and UK curves]

Figure 2.2 First three eigenvectors of the correlation matrix of yield variations of
Example 3 when PCA is applied to all three curves simultaneously


[Figure: yield curves on 30/04/2007 with forecast, realised values and
upper/lower confidence bands, plotted against the 3m–30y maturities of the
US, Euro and UK curves]
Figure 2.3 Government bond yield curves for Example 3. The confidence intervals
clearly suggest a short duration strategy for the three curves. There is no confidence
interval for UK ten-year because the analyst is 100% confident about this view

to this eigenvector, some roughly parallel changes of the US curve (similar
to a ‘bull steepener’) cause virtually no change on the Euro curve but
change the slope of the UK curve, which also moves to the opposite direc-
tion (a ‘bear flattener’). Finally, the third eigenvector is approximately equal
for the three curves and explains a further 6.9% of the variance. This eigen-
vector explains the well-known correlation between the ‘tilt’ movements
of the curves. Having said that, we note that the shape of the eigenvectors
and their explanatory power are not constant over time, thus other patterns
could be observed for different sample periods.
Figure 2.3 summarizes the forecast results using Theorem 1. In the scen-
ario of Example 3 we have that, for instance:

● The Euro curve is expected to move upwards; in fact, the whole confi-
dence interval is above the current yield in most cases. Thus, a short dur-
ation strategy – which benefits from increasing yields – is appropriate in
this market.
● The forecast yield curves are very similar to the realized curves in the
three markets. This is remarkable given that we have 21 variables to fore-
cast but only two views. It also highlights the strong correlation between
the three yield curves, in which only two principal components are suffi-
cient to explain more than 75% of the data.3
● The poorest forecast is again in the short end of the curves, yet the real-
ized values are still within the confidence intervals. This is because the

Updating the Yield Curve to Analyst’s Views 41

views are expressed in the two-year and ten-year sectors of the curves and
these are weakly correlated with the three-month yields.
● The confidence interval for the UK 10Y collapses to a single point because
the analyst is 100% certain about this view (recall the definition of Ω III in
alternative III of the previous section).

We note that, according to Theorem 1, decreasing confidence in the views adds uncertainty to the forecast but does not affect expected values. This is because
the expected value in (3) is not a function of Ω. Theorem 1 provides, roughly,
the ‘most likely’ scenario for the yield curves that is consistent with the views.
In particular, the forecast of the US two-year is exactly 4.91% and the forecast
of the UK ten-year is exactly 5.26% because these are indeed the views. By con-
trast, in a Bayesian approach, such as in the Black-Litterman model, the fore-
cast arises from the combination of two (or more) probability distributions.
Thus, increasing or decreasing confidence in the views would shift the forecast
towards one distribution or the other, with an obvious impact on both the
expected value and the variance of the forecast variables.
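Theorem 1 and its expected-value formula (3) are stated earlier in the chapter. Purely as an illustration of this type of computation, the sketch below applies the standard conditional-multivariate-normal update to a vector of yield changes given one exact view. All numbers here (volatilities, correlations and the view itself) are hypothetical, not the chapter's estimates; consistent with the discussion above, the forecast reproduces the view exactly and the viewed yield's uncertainty collapses to zero.

```python
import numpy as np

# Hypothetical second moments of yield changes for three maturities
# (2y, 5y, 10y); volatilities and correlations are invented numbers.
vols = np.array([0.30, 0.25, 0.20])
corr = np.array([[1.00, 0.90, 0.80],
                 [0.90, 1.00, 0.95],
                 [0.80, 0.95, 1.00]])
sigma = np.outer(vols, vols) * corr

mu = np.zeros(3)                    # prior expected yield changes
P = np.array([[1.0, 0.0, 0.0]])     # one view, expressed on the 2y yield change
v = np.array([0.50])                # the 2y yield is expected to rise by 50bp

# Standard conditional-normal update of the full vector on the view.
gain = sigma @ P.T @ np.linalg.inv(P @ sigma @ P.T)
mu_post = mu + (gain @ (v - P @ mu)).ravel()
sigma_post = sigma - gain @ P @ sigma

# mu_post[0] equals the view exactly, and sigma_post[0, 0] is zero: the
# confidence interval of the viewed yield collapses to a single point.
```

As in the text, the view value is reproduced exactly in the forecast, the remaining maturities are pulled along according to their correlation with the viewed yield, and only the forecast variances (not the expected values) would change if uncertainty were attached to the view.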

2.7 Conclusions

Fixed income analysts deal constantly with the challenge of mapping their
expectations about the general macroeconomic environment to movements
of yield curves and ultimately into trading strategies. Given the complexity
of this problem, many analysts prefer to first develop a forecasting model of
a few benchmark yields, and only then consider the problem of forecasting
complete yield curves, if necessary.
This chapter assumed that an analyst is able to provide forecasts of at
least a few benchmark yields or combinations of yields. Then it constructed
the yield curve that is consistent with the analyst’s views and the histor-
ical correlations between yields, and computed confidence intervals for the
forecast. Thus, the model proposed here is useful for a study of scenario
analysis, when the analyst could generate alternative scenarios for the yield
curve depending on the expected developments in the macroeconomic
environment.
The model builds on the theory of principal component analysis (PCA),
can be easily extended to other markets and has no restrictions on the num-
ber of forecast variables or the number of views. It also operates in the first
two moments of the joint probability distribution of yields and makes no
assumption about higher moments. This is an advantage relative to Bayesian
theory, for instance, in which a parametric distribution is often assumed for
the random variables.
One extension of the model could use ICA (independent component ana-
lysis) to derive the common factors driving yields (see e.g. Hyvarinen et al.
2001). ICA works with independent factors (up to cokurtosis) whilst PCA


requires only that factors are uncorrelated. Thus, the forecast may work bet-
ter with ICA when there is evidence of strong non-normality in the data.
However, the benefit of applying ICA in this chapter would be marginal
given that the main source of error in the forecast arises from the views. If
they are incorrect, there is little ICA could do to improve the forecast.
Besides, PCA has been traditionally used in fixed income risk management to calculate the sensitivity of a bond portfolio to shocks on the yield curve (see e.g. Loretan 1997). For instance, to assess the sensitivity of a bond
portfolio to parallel, tilt or curvature movements of the yield curve, one
would disturb the first, second or third PC of yield variations, respectively.
However, this interpretation of PCs is not necessarily true when we consider
multiple curves, as observed in Example 3 above.
From a trader’s point of view, it is probably more intuitive to assess the
portfolio sensitivity to shocks on a few benchmark yields, because these
are the yields that traders are accustomed to monitoring on a continuous
basis. That is, an analyst could generate a series of alternative scenarios for
benchmark yields and use the model above to compute the yield curve
and the portfolio return that are consistent with each scenario. By doing
that, the analyst not only produces scenarios that are intuitive to traders,
but also avoids the necessity for an economic interpretation of principal
components.

Notes
1. In general, at least two views are necessary for a good forecast of the yield curve:
one on a short-term maturity and one on a long-term maturity. Views on indi-
vidual yields (as above) impose stronger constraints on the forecast than relative
views (such as on the two-year to ten-year spread). Thus, relative views tend to
produce larger standard errors.
2. The obvious exception is the three-month yield in each of the three curves, which
is dominated by government monetary policy and does not respond to parallel
shocks with the same magnitude as long-term bond yields.
3. Of course the two views of Example 3 are very accurate ones, and this is critical
for the forecast. Yet this does not diminish the importance of taking the cor-
relation between the yield curves into account when proposing trading ideas.
In fact, another good exercise is to use the forecast of the Euro curve given by
Figure 2.3 to check the consistency of the two views with a third view on the Euro
curve provided by the analyst, for instance.

Bibliography
Alexander, C. (2008a). Market Risk Analysis, Volume II: Practical Financial Econometrics.
John Wiley & Sons.
Alexander, C. (2008b). ‘Moving Average Models for Volatility and Correlation’. In
Handbook of Finance, Volume 1. Fabozzi, F.J. (ed.), Wiley.
Black, F. and Litterman, R. (1992). ‘Global Portfolio Optimization’. Financial Analysts
Journal, 48(5): 28–43.


Diebold, F. and Li, C. (2006). ‘Forecasting the Term Structure of Government Bond
Yields’. Journal of Econometrics, 130(2), 337–364.
Diebold, F., Li, C., and Yue, V. (2008). ‘Global Yield Curve Dynamics and Interactions:
A Generalized Nelson-Siegel Approach’. Journal of Econometrics, 146, 351–363.
Hyvarinen, A., Karhunen, J., and Oja, E. (2001). Independent Component Analysis.
Wiley-Interscience.
Jolliffe, I. (2002). Principal Component Analysis. Springer, 2nd edition.
Loretan, M. (1997). 'Generating Market Risk Scenarios using Principal Components Analysis: Methodological and Practical Considerations'. In The Measurement of
Aggregate Market Risk. Bank for International Settlements, CGFS Publications 7.
Meucci, A. (2005). Risk and Asset Allocation. Springer.
Nelson, C. and Siegel, A. (1987). ‘Parsimonious Modeling of Yield Curves’. Journal of
Business, 60, 473–489.
Rachev, S., Hsu, J., Bagasheva, B., and Fabozzi, F. (2008). Bayesian Methods in Finance.
John Wiley & Sons.

3
A Spread-Risk Model for Strategic
Fixed-Income Investors

Fernando Monar Lora and Ken Nyholm

3.1 Introduction

Surprisingly little attention has been paid in the academic literature to the
forecasting of credit spreads1. Although this is understandable, and in line
with traditional academic progression where one aims to fully understand
the in-sample behaviour of a phenomenon before starting to develop the-
ories and models for how this phenomenon could behave out-of-sample, it
leaves the financial practitioner in an unpleasant vacuum.
Extensive academic efforts have been devoted to the search for models and
factors that explain observed credit spreads, but presently the credit spread puz-
zle seems to be prevailing2. Traditional (probability of migration and default,
loss rate, risk-premia) and alternative explanations have been investigated, for
example, liquidity risk (Houweling et al. 2005, among others) and tax-effects
(Elton et al. 2001). Driessen (2005) manages to provide a relatively accurate
empirical decomposition of corporate bond returns into these several under-
lying factors, while Collin-Dufresne et al. (2001) show that residuals from a
multifactor model are not well-behaved, since the first principal component
extracted from these residuals can explain most of their covariance structure.
Even if a well-specified multifactor arbitrage-free model were found to
represent the dynamic evolution of credit-spreads in an appropriate man-
ner, their relevance for a financial practitioner interested in out-of-sample
forecasts for the credit-spreads may be questioned. Academic models are
often formulated in the paradigm of affine no-arbitrage models. While such
models are crucial for relative pricing purposes, such as derivatives pricing,
it is not clear that they also provide superior forecasts (see Diebold and Li
2006, among many others). A modelling objection that a practitioner might
have against the no-arbitrage affine setting, especially related to forecast-
ing, is that one is required to specify a functional form for the dynamic evo-
lution of the market price of risk3. First, it is not clear what an appropriate
functional form is. In the academic literature it seems that the functional
forms used in the various models are mainly chosen such that the model



stays tractable and provides a good in-sample fit. Second, in a forecasting experiment it is not obvious that one is better off by specifying the factor
dynamics and the market price of risk separately, as it is done in affine mod-
els, or jointly, as it is done in yield curve models formulated directly under
the empirical measure. For example, if the market price of risk specification,
that is, its functional form, changes over time and such flexibility is not
incorporated explicitly into the modelling framework, then it may be a better and more flexible approach to jointly model the dynamics of the yield
curve factors and the market price of risk, as it is done in the models spe-
cified directly under the empirical measure. In particular, the parameters
of the market price of risk equation help determine the yield curve factor
loadings in the affine model class; naturally, if the underlying dynamics of
the market price of risk change, then ‘pressure’ mounts on the remaining
free parameters of the model and eventually a wedge is created between the
correct model and the model that is actually implemented. A similar dichot-
omy between the factor dynamics and the yield curve factor loadings will
not occur in the empirical model counterparts – here only re-estimation is
necessary to realign the model parameters and data.
A similar reasoning applies to credit-spread modelling. Long-term inves-
tors are probably more interested in estimating the credit and liquidity risk
of instruments relevant to their investment universe under the empirical
measure, because their actions are taken in the measure. No-arbitrage mod-
els will give estimates of risk-neutral hazard and severity rates; however, since
these numbers pertain to a hypothetical risk-free trading environment, they
are not directly applicable in the trading process. Also, empirical evidence
(Driessen 2005) indicates that results obtained from the two modelling
approaches are materially different, that is, default probabilities estimated
from a risk-neutral model are significantly higher than similar estimates
obtained from empirical models. Empirical credit-risk models working under
the physical measure still use credit spreads as a relevant exogenous input for
calculating the loss associated with credit migrations. One could argue that
by recognizing the relationship between the relative riskiness of an instru-
ment, or credit rating category, and the evolution of its credit spreads, the
modelling capacities of those models for the strategic asset allocation process
can be improved. Hence, the information content of credit spread factors
may be relevant even when an empirical approach is chosen.
These considerations have led us to rely on the empirical model class in this
study. In particular, credit spreads are modelled as ‘add-ons’ to the govern-
ment yield curve estimated via a Dynamic Nelson-Siegel specification. Our
approach integrates regime-switches and can be seen as a credit-risk extension
to Bernadell et al. (2005); it is related to the modelling approach in Koivu et al.
(2007), but is perhaps more economically intuitive. In this sense, a pragmatic
modelling approach is advocated, based on a direct extraction of factors from
spreads. Since no-arbitrage considerations are not explicitly addressed in the


suggested modelling framework, some practical assumptions and simplifications are implemented regarding the pricing of coupon-bearing bonds and
the specification of the dynamics of risk-free and risky yield curves.
We set forth a spread yield curve modelling-framework that relies on one
single underlying time-varying risk factor (this model is labelled Risk Model
(RM)). We compare and contrast this model with two other empirically
derived models for the spreads based on Koivu et al. (2007). One of these

models is a single factor spread model (labelled SM1), and the other is a two
factor spread model (labelled SM2). The forecasts produced by each evaluated
model are compared to forecasts generated by the Random Walk model (RW);
in other words, we used the RW as a benchmark for forecasting performance.
The three models (RM, SM1 and SM2) rely on a dynamic Nelson-Siegel model
to represent the risk-free (US Treasury) curve, and use an ad-hoc empirically
derived representation for how the spread ‘add-on’ is parameterized. Models
SM1 and SM2 rely on factors that aim directly at modelling yields and spreads
as they are observed in relevant databases. A consequence of this approach
is that the economic intuition of the spread related factor(s) is lost. Contrary
to this, the RM installs a clear economic interpretation of the spread-related
factor, in that the factor in this model corresponds directly to the price of
(aggregated) risk implied by the spread curve. In particular, the spread factor
in the RM accounts for all sources of spread risk, for example liquidity risk,
default risk, migration risk and tax effects, as they are perceived and priced
by the market-participants. For the purpose of easy reference, we refer to this
single factor as ‘Implied Risk’. The advantage of such a ‘holistic’ representa-
tion of the market’s perceived disutility/price attached to spread risk is that
it can readily be applied to different corporate bond yield curves, as well as
to the LIBOR-SWAP curve, without the need to account for the specific risk
characteristics of each of these yield curve segments.
Since the investment horizon for a strategic investor is usually classified as
being medium-to-long term, the empirical comparison between the models
conducted in the chapter uses projection horizons of one to five years. It is
observed that the out-of-sample performance of the three models is similar,
and generally is better than that of the RW. The forecasting performance
of the models improves when regime-switches in the yield curve slope are
modelled explicitly. The single factor models (RM and SM1) seem to prod-
uce slightly better results than the SM2, probably as a consequence of the
well-known trade-off between parsimony and in-sample fit, and between
flexibility and out-of-sample performance4.

3.2 The data

The data consists of constant maturity yields for maturities three, six, 12,
24, 36, 60, 84 and 120 months for coupon-bearing instruments. We have
used as the risk-free rates the yields corresponding to the US Treasury in the


period from March 1954 to September 20085. As the risky curve we use data
from November 1988 to September 2008 corresponding to the US LIBOR/
SWAP curve6.

3.3 The observation equation for the Nelson-Siegel model

The following sections will present state-space models for the credit spreads that will serve as add-ons to the government yield curve. To model the gov-
ernment yield curve segment this chapter relies on a dynamic Nelson-Siegel
model (see Diebold and Li 2006), and includes a regime-switching extension
of the model as suggested by Bernadell et al. (2005).
The observation equation for the government segment is defined by the
original Nelson-Siegel (1987) model:

r_t = H^{govt}\,\beta_t^{govt} + u_t^{govt}, \qquad u_t^{govt} \sim N\left(0, \Omega^{govt}\right) \qquad (1)

where r_t is a vector containing the yields observed at maturities τ_1, ..., τ_n for the government segment of the market, and H^{govt} is the loading matrix defined by Nelson-Siegel (1987):

H^{govt} = \begin{bmatrix}
1 & \dfrac{1 - e^{-\lambda\tau_1}}{\lambda\tau_1} & \dfrac{1 - e^{-\lambda\tau_1}}{\lambda\tau_1} - e^{-\lambda\tau_1} \\
1 & \dfrac{1 - e^{-\lambda\tau_2}}{\lambda\tau_2} & \dfrac{1 - e^{-\lambda\tau_2}}{\lambda\tau_2} - e^{-\lambda\tau_2} \\
\vdots & \vdots & \vdots \\
1 & \dfrac{1 - e^{-\lambda\tau_n}}{\lambda\tau_n} & \dfrac{1 - e^{-\lambda\tau_n}}{\lambda\tau_n} - e^{-\lambda\tau_n}
\end{bmatrix} \qquad (2)

which corresponds to three yield curve factors, namely ‘level’, ‘slope’ and
‘curvature’. These three yield curve factors can be interpreted as the level
of the yield curve (which can be seen as the yield at infinite maturity), the
negative of the yield curve slope (representing the difference between the
short and the long ends of the yield curves), and the curvature of the yield
curve. The parameter λ determines the segment-specific time-decay in the maturity spectrum of factor sensitivities two and three, as can be seen from the definition of H^{govt} above.
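To make the construction of Equation (2) concrete, the loading matrix can be assembled numerically as in the sketch below. The maturities correspond to those listed in Section 3.2, while the decay parameter λ = 0.5 is an illustrative choice, not the value estimated in the chapter.

```python
import numpy as np

def ns_loadings(taus, lam):
    """Nelson-Siegel loading matrix H^govt of Equation (2): one row per
    maturity; columns hold the level, slope and curvature loadings."""
    taus = np.asarray(taus, dtype=float)
    x = lam * taus
    slope = (1.0 - np.exp(-x)) / x      # second-column loading
    curv = slope - np.exp(-x)           # third-column loading
    return np.column_stack([np.ones_like(taus), slope, curv])

# Maturities of Section 3.2 (in years) and an illustrative decay parameter.
taus = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0])
H = ns_loadings(taus, lam=0.5)

# Fitted yields are then H @ beta_t for a factor vector beta_t of
# (level, negative slope, curvature), as in Equation (1).
```

The level loading is constant at one, the slope loading decays from one towards zero as maturity grows, and the curvature loading is hump-shaped, matching the factor interpretation given above.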
Effectively, modelling a given spread curve segment would amount
to adding the projected r and the spreads (S) given by the models SM1,
SM2 or RM. Setting the Nelson-Siegel model in a dynamic context implies
that a time-series model is hypothesized for the dynamic evolution of the
underlying yield curve factors. This dynamic context will be presented in
Section 3.6.


3.4 Purely empirically founded spread models (SM1 and SM2)

The models SM1 and SM2 follow the parsimonious parameterization ori-
ginally developed for jointly modelling international yield curves in Koivu
et al. (2007). In that paper the authors strive to capture the specific
and empirically observed term-structure of spreads while modelling two
spread-related factors: one is a spread shift factor, and the other is a tilt factor which can narrow or widen spreads either in the short or long end
of the maturity spectrum. In this way Koivu et al. (2007) present a joint
model for yield curve segments with a single loading matrix containing
the coefficients for the factors corresponding to each maturity and yield
curve segment.
While their model was targeting the modelling of US, German and
Japanese yields, the current chapter applies the same methodology to the
default-free US treasury yield curve and various credit curves. We fol-
low their game plan and first extract Nelson-Siegel (N-S) factors from the
default free curve. Then we deduct the corresponding N-S government
curve from the credit yield curves. This leaves us with term-structures
of credit spreads (akin to the country spread term-structures obtained
by Koivu et al. 2007). Then we conduct a principal component analysis
to determine how many factors we need to model. We have found that
the first principal component explains more than 72% of the covariances
of the spreads, while the second one explains more than 22%. Based on
the principle of parsimony and tractability we retain a maximum of two
principal components, since these factors together explain approximately
95% of the variability of the term-structure of spreads. As in Koivu et al.
(2007), these factors are denoted by ‘shift’ and ‘tilt’, not to confuse them
with the N-S factors labelled ‘level’, ‘slope’ and ‘curvature’. We then fol-
low Koivu et al. (2007) in parameterizing the factor loading structure
applicable to the two identified factors. Accordingly, the ‘shift’ factor can
be modelled using a fixed weight, or loading, for all maturities, similar to
the level factor of the Nelson-Siegel model, and it can be normalized to
one. Regarding the other factor, after observing its shape, a second order
polynomial approximation in the maturity appears adequate to param-
eterize this loading.
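The factor-extraction step described above can be sketched as follows. The spread panel below is synthetic (the chapter uses the term structure of credit spreads left after deducting the fitted N-S government curve), so the explained-variance shares will not match the 72% and 22% reported in the text; the mechanics, however, are the same.

```python
import numpy as np

rng = np.random.default_rng(0)
taus = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0])  # maturities (years)
T = 250                                                      # observations

# Synthetic term structure of spreads: a common 'shift' plus a maturity-
# dependent 'tilt' and noise -- illustrative stand-ins for the data.
shift = 0.40 + 0.15 * rng.standard_normal(T)
tilt = 0.02 * rng.standard_normal(T)
S = (shift[:, None]
     + tilt[:, None] * (taus - taus.mean())
     + 0.01 * rng.standard_normal((T, len(taus))))

# Principal component analysis via an eigendecomposition of the covariance
# matrix of the spreads; eigenvalues are sorted in descending order.
eigval, eigvec = np.linalg.eigh(np.cov(S, rowvar=False))
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]
explained = eigval / eigval.sum()
```

With two dominant sources of variation in the data, the first two eigenvectors play the roles of the 'shift' and 'tilt' factors and together account for nearly all of the covariance of the spreads.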
The loading matrix for the spread factors for the SM2 then has the
following form:

H^{SM2} = \begin{bmatrix}
1 & a\tau_1^2 + b\tau_1 + c \\
1 & a\tau_2^2 + b\tau_2 + c \\
\vdots & \vdots \\
1 & a\tau_n^2 + b\tau_n + c
\end{bmatrix} \qquad (3)


where τ_i represents the maturity (expressed in years) of the yield curve maturity-segment i, for maturities one to n. The loading matrix for the SM1 model (H^{SM1}) is defined to consist only of the 'shift' factor, and is thus identical to the first column in H^{SM2}.
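A minimal sketch of Equation (3) follows: the 'shift' column is a constant one, the 'tilt' column is quadratic in maturity, and the coefficients a, b and c are fitted to an empirically extracted tilt loading by ordinary least squares. The 'empirical' loading vector used here is invented for illustration.

```python
import numpy as np

def h_sm2(taus, a, b, c):
    """Loading matrix of Equation (3): a constant 'shift' column and a
    quadratic-in-maturity 'tilt' column."""
    taus = np.asarray(taus, dtype=float)
    return np.column_stack([np.ones_like(taus), a * taus**2 + b * taus + c])

# Fit a, b, c to an extracted tilt loading by ordinary least squares.
# The 'empirical_tilt' vector below is invented for illustration.
taus = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0])
empirical_tilt = np.array([-1.2, -1.0, -0.7, -0.2, 0.2, 0.8, 1.2, 1.5])
X = np.column_stack([taus**2, taus, np.ones_like(taus)])
a, b, c = np.linalg.lstsq(X, empirical_tilt, rcond=None)[0]
H = h_sm2(taus, a, b, c)
```

The fitted tilt column is negative at short maturities and positive at long maturities, so a positive tilt factor widens long-dated spreads while narrowing short-dated ones, as described in the text.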
To illustrate the used factor loadings corresponding to the shift and
tilt factors, Figure 3.1 shows the empirical and the fitted loadings for the
average swap spreads, defined as the difference between the LIBOR/SWAP rates and the US Treasury curve. The fitted loadings are obtained by using
Equation (3). The weights and the average spreads have been normalized to
have an average of unity. The x-axis represents the maturity in years.
Figure 3.1 shows that the empirically observed loading structure for the
level factor is well approximated by Equation (3) and that it is not neces-
sary to include a separate mean in the factor model specification. It can

[Figure: loading values from −7 to 7 plotted against time-to-maturity from 0 to 10 years, showing the shift loading, the tilt loading, the normalised mean, a linear fit to the shift loading and a polynomial fit to the tilt loading.]
Figure 3.1 Fitted and estimated factor loading structures for spreads
Note: This figure shows the empirically extracted loading structure and the fitted loading pat-
terns following Equation (3) and using the spread between the LIBOR/SWAP curve and the US
government curve for the period from 1998 to 2008. ‘Shift Loading’ refers to the empirically
determined factor-loading structure for the ‘shift’ factor that explains the majority of the vari-
ability of the LIBOR/SWAP spreads. 'Tilt Loading' refers to the loading structure for the second most important factor that has an interpretation as tilting the spreads, i.e. narrowing the spread for short maturities while widening it for longer maturities, or vice versa. 'Linear (Shift loading)' and 'Poly. (Tilt loading)' refer to the fitted loading structure following Equation (3) for
the correspondingly estimated empirical factor loadings. ‘Normalized Mean’ is the normalized
average of the spread data.


also be observed that the fit of the tilt factor-loading is not perfect. While
the parameterized/fitted tilt loadings capture the main characteristics of
the observed empirical pattern, it is apparent that the fit is not perfect.
On the one hand, it underestimates the tilt effect in the very short end
of the maturity spectrum (below approximately six-month maturities)
and for medium to long-term maturities (for maturities between approxi-
mately four and nine years). On the other hand, the fitted tilt loading pattern overestimates tilt effects for the medium maturity spectrum (for
maturities between approximately six months and four years) and for very
long maturities (above nine years). However, it should be recalled that the
objective for the fitting exercise is not to precisely map the empirical load-
ing patterns, because these are estimates based on a specific data sample.
Rather, our interest is to find a flexible and parsimonious functional form
for the loading patterns that would accommodate as many credit market
segments and as many countries as possible. Nevertheless, it should be
emphasized that the identified fitted loading pattern in Figure 3.1 for the
tilt factor will give rise to modelling distortions if it is extrapolated for
maturities longer than ten years, and might also produce some distortions
if extrapolated in the short end of the maturity spectrum towards a matur-
ity of zero7.
According to the parsimonious loading structure set forth in Equation (3),
the vector of yield spreads relative to the N-S representation for the govern-
ment yield curve at time t can then be expressed as:

S_t = H^{SM2}\,\beta_t^{spread} + u_t^{spread}, \qquad u_t^{spread} \sim N\left(0, \Omega^{spread}\right) \qquad (4)

with

\beta_t^{spread} = \begin{bmatrix} \beta_t^{shift} \\ \beta_t^{tilt} \end{bmatrix} \qquad (5)

in the model labelled SM2. Consequently, in the model labelled SM1 these
equations take the following form:

S_t = H^{SM1}\,\beta_t^{spread} + u_t^{spread}, \qquad u_t^{spread} \sim N\left(0, \Omega^{spread}\right) \qquad (6)

where

\beta_t^{spread} = \beta_t^{shift} \qquad (7)

The variable u represents the vector of residuals, which is assumed to be normally distributed with a mean of zero and a diagonal covariance matrix Ω^{spread}.


Equations (4) and (6) represent alternative state equations in a state-space


model for the yield spreads. To complete such a yield curve model, we need
a state equation for the government yield curve and a transition equation
that would govern the time-series evolution of the spread and yield curve
factors. These currently absent components are presented below.
While SM1 and SM2 do a relatively good job of capturing the shape and
location of the spreads, they do not offer economically interpretable spread factors. In a sense, SM1 and SM2 are true to the original N-S model, which
relies on a parsimonious approach to the modelling of certain observed
characteristics of the cross-section of yields, but does not aim to link the
underlying factors to any economic variables or models. However, it may be
desirable to attach economically meaningful ‘tags’, especially to the spread
factors, because spread movements are often associated with changes in the
underlying economic environment.

3.5 A spread-risk model (RM)

This section presents a spread yield curve model (RM), which, akin to SM1
and SM2, models credit spreads as add-ons to the Treasury yield curve. The
distinguishing feature of this spread model, as compared to SM1, SM2 and
other models in the market, is that it is based on the existence of a single
underlying factor representing a time-varying risk-assessment parameter. By
directly linking the credit spread to the market’s perception of risk, the RM
presents an economically intuitive spread factor that is easily interpretable.
The price of a bond is denoted by P(τ, c, y), the present value of the future stream of cashflows, where τ is the years-to-maturity, c is the coupon rate, and y is the discount rate, also called the yield-to-maturity. The bond price can be determined by8:

P(\tau, c, y) = \frac{c}{y}\left(1 - e^{-y\tau}\right) + e^{-y\tau} \qquad (8)

Next, denote the LIBOR/SWAP rate corresponding to the maturity τ by l_τ, and the risk-free rate (treasury yield) with an identical maturity by r_τ. It is
possible to decompose the price of a risky bond into two components, the
first being the present value of its cashflows discounted at the risk-free rate,
and the second being the implied risk or discounted expected loss, adjusted
for risk-aversion. This second component, which comprises all types of
risks, tax effects and risk aversion correction, perceived to be relevant and
as priced by the market, is denoted by R. The price of a LIBOR-paying bond
with maturity , is then expressed as:

P(\tau, l_\tau, l_\tau) = P(\tau, l_\tau, r_\tau) - R_\tau \qquad (9)


Considering next a par-coupon LIBOR-bond and a par-coupon Treasury bond with the same maturity that have the same price (one):

P(\tau, r_\tau, r_\tau) = P(\tau, l_\tau, l_\tau) = P(\tau, l_\tau, r_\tau) - R_\tau \qquad (10)

Using (8) gives:

\frac{r_\tau}{r_\tau}\left(1 - e^{-r_\tau \tau}\right) + e^{-r_\tau \tau} = \frac{l_\tau}{r_\tau}\left(1 - e^{-r_\tau \tau}\right) + e^{-r_\tau \tau} - R_\tau \qquad (11)

If the spread is defined to be s_τ = l_τ − r_τ, it is possible to re-write (11) as:

R_\tau = \frac{s_\tau}{r_\tau}\left(1 - e^{-r_\tau \tau}\right) \qquad (12)

In essence, R can be seen as the present value of the expected disutility of bearing spread risk as it is priced by the market, that is, as the present value
of the cashflows a LIBOR linked bond pays in excess of the cashflows gener-
ated by the risk-free government bond.
An intuitive way of looking at the equivalence equation is to consider
two portfolios having similar risk profiles. One portfolio consists of a risky
bond, and the other consists of a risk-free treasury bond and sold protection
against all sources of risk relevant for the risky bond. The price of this writ-
ten protection can be expressed as R or as a fixed periodic coupon s to be
paid over the life of the bond. If only credit risk is considered, this portfolio
comparison example would amount to one portfolio consisting of a long
position in a credit-risk bond, and the other portfolio consisting of a credit
risk-free government bond and a short position in a Credit Default Swap
(referenced to the risky bond). However, the premise for the RM is that the
risks captured by R are not limited to credit risk alone.
From Equation (12) it is naturally possible to derive the risks implied by
the spreads between treasury yields and LIBOR rates for every maturity
and every data observation covered by the data sample, denoted by R_{τ,t}.
Following the approach outlined in connection with the identification of a
parsimonious loading structure for the spread factor(s) included in SM1
and SM2, a principal component analysis has been performed on R_{τ,t}. It is
found that the loading structure for the first principal component, which
explains roughly 93% of the variance-covariance of R_{τ,t}, can be parameterized
as a linear function of maturity. The average value of R_{τ,t} has a
similar pattern.
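The principal component step can be replicated on synthetic data. In the sketch below the panel is simulated (it is not the chapter's sample) with a single factor and linear-in-maturity loadings; the first principal component then recovers both a high explained-variance share and the linear loading pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
taus = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0])

# Synthetic panel of implied risks: R_{tau,t} = tau * g_t + noise.
g = np.abs(rng.normal(0.004, 0.002, size=250))
R = np.outer(g, taus) + rng.normal(0.0, 1e-4, size=(250, taus.size))

# Principal components of the de-meaned panel.
cov = np.cov(R - R.mean(axis=0), rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)          # eigenvalues in ascending order
share = eigval[-1] / eigval.sum()             # variance share of the first PC
pc1 = eigvec[:, -1]

# Normalised first-PC loadings lie close to tau / tau_max, i.e. linear in maturity.
slope, intercept = np.polyfit(taus, pc1 / pc1[-1], 1)
print(share, slope, intercept)
```

With the factor dominating the noise, the printed share is well above 0.9 and the fitted intercept is close to zero, mirroring the pattern reported for the empirical panel.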
Figure 3.2 displays the empirical and fitted loading patterns for a one-
factor model on R. It is observed that a linear loading pattern in maturity

A Spread-Risk Model for Fixed-Income Investors 53

[Figure: y-axis 'Implied risks (for price 100)', 0 to 3; x-axis 'Time-to-maturity (years)', 0 to 10; series: implied-risk loading, normalised mean, and a linear fit to the implied-risk loading]

Figure 3.2 Fitted and estimated factor loading structure for R


Note: This figure shows the empirically extracted loading structure and the fitted loading pat-
terns following Equation (15), and using the spread between the LIBOR/SWAP curve and the US
government curve for the period from 1998 to 2008. ‘Implied Risk loading’ refers to the empiric-
ally determined factor-loading structure for the ‘Implied Risk’ factor that explains the majority
of the variability of the risks implied by observed spreads. ‘Linear (Implied Risk loading)’ refers
to the fitted loading structure following Equation (15) for the correspondingly estimated empir-
ical factor loadings. ‘Normalized Mean’ is the normalized average of the spread data.

provides a good fit. The average of R is also shown in the figure to document
that it is not necessary to include a constant in the factor model.
The factor model for R can therefore be expressed as:

R_{\tau,t} = \tau g_t + u_{\tau,t} \quad (13)

where g_t is a general risk factor. In vector/matrix notation the model is:

R_t = L^{risk} g_t + u_t^{risk}, \qquad u_t^{risk} \sim N(0, \Omega^{risk}) \quad (14)

where:

L^{risk} = \begin{bmatrix} \tau_1 \\ \tau_2 \\ \vdots \\ \tau_n \end{bmatrix} \quad (15)

and where u_t^{risk} represents the vector of residuals.


Equation (14) gives the observation equation for the RM specification,
where the explained variables are the implied risks corresponding to the
observed spreads. However, since we are interested in finding a spread component
that can be used as an add-on to the observation equation for the
government yield curve segment, we need to reverse the transformation
presented in (12) to find the vector of spreads (S) corresponding to the projected
vector of risks (R), the vector of risk-free rates (r) and their corresponding
maturities (τ). To do this, an equation similar in spirit to (4) and (6) for
the SM1 and SM2 specifications is presented:

S_t = H_{risk,t}\, g_t \quad (16)

The loading structure in Equation (16) is time-varying, since it depends
on the risk-free rate corresponding to each maturity at time t. The loading
matrix would then be:

H_{risk,t} = \begin{bmatrix} \dfrac{\tau_1 r_{1,t}}{1 - e^{-\tau_1 r_{1,t}}} \\ \dfrac{\tau_2 r_{2,t}}{1 - e^{-\tau_2 r_{2,t}}} \\ \vdots \\ \dfrac{\tau_n r_{n,t}}{1 - e^{-\tau_n r_{n,t}}} \end{bmatrix} \quad (17)
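Reversing the transformation is then a matter of applying the time-varying loadings of Equation (17) to the factor. A sketch (the rates, maturities and factor level are illustrative assumptions):

```python
import numpy as np

def h_risk(taus, r_t):
    """Equation (17): loadings tau_i * r_{i,t} / (1 - e^{-tau_i * r_{i,t}})."""
    taus, r_t = np.asarray(taus, float), np.asarray(r_t, float)
    return taus * r_t / (1.0 - np.exp(-taus * r_t))

taus = np.array([1.0, 2.0, 5.0, 10.0])        # maturities in years
r_t = np.array([0.030, 0.032, 0.037, 0.040])  # risk-free rates at time t
g_t = 0.004                                   # implied-risk factor level

S_t = h_risk(taus, r_t) * g_t                 # Equation (16): projected spreads

# Consistency check: plugging S_t back into Equation (12) recovers R = tau * g.
R_t = (S_t / r_t) * (1.0 - np.exp(-r_t * taus))
print(np.allclose(R_t, taus * g_t))           # -> True
```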

An exploration of the g_t series indicates that an AR(1) model of the natural
logarithm of g_t (ln(g_t)) captures the most central dimensions of the
dynamic evolution of the variable. The logarithmic function is inspired by
the observed frequency distribution of g_t and the fact that the Implied Risk
factor is intuitively bounded from below at zero. Effectively, by assuming that
the shocks to g_t are lognormally distributed, higher spread volatility is
introduced in periods with higher implied risk and wider spreads, in line with the
findings of Ben Dor et al. (2007). Therefore, a log-risk factor is defined as:

b_t^{risk} = \ln(g_t) \quad (18)
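The effect of the lognormal assumption can be illustrated by simulation. Below, an AR(1) with illustrative parameters (not the chapter's estimates) drives ln(g_t); the factor stays positive and its absolute changes are larger when its level is high:

```python
import numpy as np

rng = np.random.default_rng(1)

# AR(1) for b_t = ln(g_t): b_t = mu + phi * (b_{t-1} - mu) + sigma * eps_t.
mu, phi, sigma = np.log(0.004), 0.95, 0.15
b = np.empty(2000)
b[0] = mu
for t in range(1, b.size):
    b[t] = mu + phi * (b[t - 1] - mu) + sigma * rng.normal()
g = np.exp(b)                                  # bounded below at zero

# Spreads are proportional to g, so spread changes widen with the level.
dg = np.abs(np.diff(g))
high = g[:-1] >= np.median(g[:-1])
print(g.min() > 0, dg[high].mean() > dg[~high].mean())
```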

Notice from Equation (9) that if the risk-free rate r stays constant over time
for a given maturity (τ), a change in R translates directly into an inverse
change in the price of the risky bond (ΔP = −ΔR). Therefore, by modelling
the dynamics of the factor explaining most of the variance of R, and knowing
the sensitivity of each maturity to this factor (L^{risk}), not only is the evolution
of the credit and liquidity risks implied in the yields of different bonds
modelled, but a more comprehensive measure of spread risk is also
captured. Hence, RM builds a bridge between risk measures for spread risk
(e.g. spread duration and duration-times-spread) and measures for credit and
liquidity risk under the risk-neutral measure. The spread risk of a portfolio


of instruments priced against the single risky curve, as presented in this section,
can be measured by using the exposure to the single risk factor (L^{risk}), its
level at time t (g_t), the volatility of its natural logarithm (σ_risk) and the portfolio
weights collected in a row vector w = \{w_1, w_2, \ldots, w_n\}:

SpreadRisk_t = w\, L^{risk} g_t\, \sigma_{risk} \quad (19)
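Equation (19) then reduces to a weighted maturity exposure scaled by the factor level and its log-volatility. A numerical sketch (the weights, maturities, g_t and σ_risk are all illustrative assumptions):

```python
import numpy as np

taus = np.array([2.0, 5.0, 10.0])   # maturities of the spread holdings
w = np.array([0.5, 0.3, 0.2])       # portfolio weights (row vector)
L_risk = taus                        # linear loading structure, Equation (15)
g_t = 0.004                          # implied-risk factor level at time t
sigma_risk = 0.15                    # volatility of ln(g_t)

# Equation (19): SpreadRisk_t = w L^{risk} g_t sigma_risk.
spread_risk = (w @ L_risk) * g_t * sigma_risk
print(round(spread_risk, 6))         # -> 0.0027
```

The weighted maturity exposure here is 0.5·2 + 0.3·5 + 0.2·10 = 4.5 years, so the measure scales linearly with the duration-like contribution of the spread products, as described in the text.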

We argue that the presented RM specification offers a clear economic
interpretation of the factor, which is a 'level' factor derived from the risk
implied in the spreads, as priced by market participants. The loading
matrix L^{risk} shows the sensitivity of individual bonds, and of maturity points
on the spread yield curve, to this level of risk. The loading pattern is found
to be linear in maturity. The maturity-contribution of the spread products
to a given portfolio, multiplied by the time-varying value of the risk factor
g_t and the volatility of its natural logarithm (σ_risk), can serve as a measure of
the spread risk of the portfolio, since changes in R represent, all else being
equal, the net price returns for a par-coupon bond arising from changes in
the underlying factor driving spread changes.
In conclusion, the model we present serves, on the one hand, the purpose
of modelling and measuring spread risk as the market risk arising from spread
changes, which investors see as the main source of mark-to-market
profits and losses for their Investment Grade portfolios; on the other
hand, it serves as a parsimonious specification for spread curves which can be
easily applied to various rating curves9 while offering valuable information
to enhance the empirical credit-risk models applied in strategic asset allocation
and risk management frameworks10.

3.6 The dynamics for the factors

We model the dynamic evolution of the spread and yield curve factors according
to the specifications set forth by Diebold and Li (2006) and Bernadell et
al. (2005); the latter approach facilitates the integration of regime-switches
in the slope factor. Accordingly, we assume the following autoregressive
specification for the factors:

b_t = C_t + F b_{t-1} + \epsilon_t, \qquad \epsilon_t \sim N(0, \Omega) \quad (20)
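Under the constant-intercept variant, one-step-ahead factor forecasts follow directly from Equation (20) by dropping the shock. A sketch with illustrative parameter values (the intercept is chosen consistent with assumed long-run factor means; none of these numbers are the chapter's estimates):

```python
import numpy as np

mu = np.array([0.05, -0.01, 0.002])        # assumed long-run factor means
F = np.diag([0.98, 0.95, 0.90])            # diagonal autoregressive matrix
C = (np.eye(3) - F) @ mu                   # constant intercept implied by the means
b_prev = np.array([0.048, -0.012, 0.001])  # factors at t-1

# Equation (20) without the shock: the conditional one-step forecast.
b_forecast = C + F @ b_prev
print(np.round(b_forecast, 5))
```

Iterating the same recursion h times yields the multi-year factor forecasts used in the out-of-sample comparison below.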

We explore two versions of this dynamic equation for the yield and spread
factors. One is a standard Vector Autoregressive model in which the intercept
C_t is set to be constant, that is, equal to C; this model variant we denote
VAR. Another variant of the model includes regime-switches, and this variant
is denoted RS-VAR. Following Bernadell et al. (2005), regime-switches
in the yield curve state-space model are implemented as

in Kim and Nelson (1999) and Hamilton (1989). The chosen model parameterization
hypothesis is that yield curve observations can be classified into
three distinct groups according to the relative slope (slope divided by level)
of the US Treasury yield curve. Different means for the factors
are estimated for each regime, and therefore the intercept C_t is allowed to
vary over time according to the projected evolution of the state-probabilities
\hat{p}_t = \Pi p_{t-1}, where \Pi is the transition probability matrix, in our case a 3
× 3 matrix containing the transition probabilities for switching from one
state to another.
The interpretation of these three regimes is based on the shape of the
yield curve, and they are labelled Steep, Normal and Inverse. The regime-switching
probabilities at time t are denoted by \hat{p}_t = [p_t^S \; p_t^N \; p_t^I]' and the diagonal
matrix F collects the autoregressive parameters. The constant in the state
equation for the default-free curve would then be determined by:

C_t = C \hat{p}_t = \begin{bmatrix} c_S^{level} & c_N^{level} & c_I^{level} \\ c_S^{slope} & c_N^{slope} & c_I^{slope} \\ c_S^{curve} & c_N^{curve} & c_I^{curve} \end{bmatrix} \begin{bmatrix} p_t^S \\ p_t^N \\ p_t^I \end{bmatrix} \quad (21)
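The time-varying intercept in Equation (21) combines the projected regime probabilities with regime-specific factor means. A sketch with an illustrative transition matrix Π (chosen so its columns sum to one, keeping p̂_t a probability vector) and illustrative regime means:

```python
import numpy as np

# Transition matrix: entry (i, j) is the probability of moving to regime i
# from regime j; regimes ordered Steep, Normal, Inverse. Illustrative values.
Pi = np.array([[0.90, 0.05, 0.05],
               [0.08, 0.90, 0.15],
               [0.02, 0.05, 0.80]])
p_prev = np.array([0.7, 0.2, 0.1])      # regime probabilities at t-1

p_hat = Pi @ p_prev                      # projected state probabilities

# Regime-specific means for level, slope and curvature (columns S, N, I).
C_regimes = np.array([[0.060, 0.050, 0.040],
                      [-0.020, -0.005, 0.010],
                      [0.002, 0.001, 0.000]])

C_t = C_regimes @ p_hat                  # Equation (21): time-varying intercept
print(round(p_hat.sum(), 10))            # -> 1.0 (still a probability vector)
```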

We deviate from the joint Kalman and Hamilton filter maximum likelihood
approach used in Bernadell et al. (2005) to estimate the regime-switching
model. To decrease estimation time we rely on a two-step approach
in which, first, the regime classifications are obtained from a univariate
regime-switching model on the yield curve slope, and, in a second step,
the remaining parameters of the dynamic Nelson-Siegel model are found
using Ordinary Least Squares (OLS) conditional on the estimated regime
probabilities from step one. The regime-switching spread and risk models
are also estimated taking the regime classifications as an exogenous input.
We rely on the Kalman filter to estimate the more complex SM2 model,
while OLS has been used for the SM1 and RM models. The non-regime-switching
models are estimated using the same techniques, but without
conditioning the estimates on regime classifications.

3.7 Out-of-sample comparison

Two data samples are used in our empirical application of the models out-
lined above. US Treasury yields observed monthly at maturities three, six,
12, 24, 36, 60, 84 and 120 months, covering the period from 1954:3 to
2008:9, are used as one sample. The second data sample, available for the
LIBOR/SWAP yield curve data, is somewhat shorter, and covers the period
from 1988:11 to 2008:9. We conduct a recursive out-of-sample forecasting
experiment in which the models are re-estimated on data samples that are
expanded by one observation at each recursion. The initial data sample is


fixed to cover a minimum of five years of data and consequently, our first
forecasts are generated using data until 1993:10. Models are compared at
forecast horizons of one, two, three, four and five years, reflecting a stra-
tegic/long-term investment horizon. We thus generate 167 forecasts for the
one-year horizon, 155 forecasts for the two-year horizon, 143 forecasts for
the three-year horizon, 131 forecasts for the four-year horizon and 119 fore-
casts for the five-year horizon.

The tested models comprise SM1, SM2 and RM, each with factor dynam-
ics following either a standard VAR(1) or a regime-switching VAR(1). Using
the abbreviations introduced above we compare the following models: SM1-
VAR, SM1-RS-VAR, SM2-VAR, SM2-RS-VAR, RM-VAR, RM-RS-VAR. As a bench-
mark we use the forecasts generated by RW; results are presented as the ratio
of root mean square forecast errors (RMSFE) of each model relative to the
RMSFE of the RW model.
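The evaluation metric can be sketched on simulated data. Below, a mean-reverting toy series (illustrative parameters, not the chapter's data) is forecast recursively with an AR(1) rule and with the random walk; the reported number is the RMSFE ratio, where values below one favour the model:

```python
import numpy as np

rng = np.random.default_rng(2)

def rmsfe(fc, actual):
    """Root mean squared forecast error over a set of forecast origins."""
    e = np.asarray(fc) - np.asarray(actual)
    return float(np.sqrt(np.mean(e ** 2)))

# Toy mean-reverting yield series.
mu, phi, sig = 0.05, 0.97, 0.002
y = np.empty(400)
y[0] = mu
for t in range(1, y.size):
    y[t] = mu + phi * (y[t - 1] - mu) + sig * rng.normal()

h = 24                                   # two-year horizon, monthly data
origins = range(60, y.size - h)          # recursive forecast origins
rw = [y[t] for t in origins]                          # random-walk forecast
ar = [mu + phi ** h * (y[t] - mu) for t in origins]   # AR(1) h-step forecast
actual = [y[t + h] for t in origins]

ratio = rmsfe(ar, actual) / rmsfe(rw, actual)
print(round(ratio, 3))                   # below one: the model beats the RW
```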
Results from the out-of-sample experiment are shown in Tables 3.1, 3.2,
and 3.3. In Table 3.1 a comparison is drawn between the two models used
to forecast the government yield curve, that is, the standard VAR and the
regime-switching VAR. Table 3.3 shows the forecast comparison between the
joined effects of forecasting the government yield curve and the LIBOR/SWAP
spreads. Finally, Table 3.2 presents the forecasting comparison of only the
spread part of the models. Since the ratios of RMSFE for the models are pre-
sented relative to the RMSFE of the RW, entries lower than unity mean that a
given model performs better than the random walk model, and entries above
unity indicate that the random walk forecast is better. Following this logic,
the lower the table entries are, the better the respective model performs.
Bold table entries indicate the best performing model for a given maturity
and a given forecasting horizon. It should be emphasized that no statistical
tests are performed to identify whether a given model’s out-performance
is statistically different from the Random Walk or from its competitors. In
fact, casual inspection of the performance numbers could give the impres-
sion that some of the models perform equally well.
In the comparison between the models NS-VAR and NS-RS-VAR, Table 3.1
indicates that the latter model performs better when applied to the US
Treasury yield curve. For all but a few maturities and forecasting horizons
the regime-switching model outperforms the random walk forecasts. It also
performs better than the no-regime-switching competitor model (NS-VAR).
In fact, it is noted that the NS-VAR model seems to have some trouble per-
forming better than the random walk model, indicated by the relatively
high number of table entries that are larger than one.
Econometric analysis of the dynamic evolution of yields in the time-
series dimension sometimes suggests an I(1) model as being more appro-
priate than an I(0) model. This means that yields themselves behave in a
way that is very similar to a random walk model. There are many econom-
ically founded counterarguments against yields being I(1), for example that


Table 3.1 Ratio of RMSFE for the US Treasury curve (N-S model)

Model  Horizon (years)  Maturity (months): 3 6 12 24 36 60 84 120
NS-VAR 1 year 0.962 0.963 1.005 1.022 1.039 1.050 1.041 1.056
2 years 0.919 0.921 0.958 0.988 1.032 1.109 1.125 1.197

3 years 0.860 0.854 0.894 0.936 1.001 1.116 1.158 1.270
4 years 0.890 0.881 0.934 1.000 1.079 1.202 1.238 1.328
5 years 1.033 1.022 1.073 1.145 1.223 1.347 1.372 1.456
NS-RS-VAR 1 year 0.988 0.993 1.016 1.000 0.991 0.983 0.978 0.997
2 years 0.934 0.932 0.950 0.945 0.958 0.987 0.980 1.019
3 years 0.856 0.844 0.852 0.847 0.871 0.921 0.923 0.987
4 years 0.819 0.804 0.821 0.840 0.881 0.943 0.945 0.995
5 years 0.875 0.863 0.883 0.912 0.950 1.000 0.978 1.015
Note: This table contains the root mean squared forecast errors for the evaluated dynamic N-S
models applied to the US Treasury yield curve, relative to the RMSFE of the Random Walk model.
NS-VAR refers to a model specification in which the dynamic evolution of the underlying yield
curve factors (in the state equation) follow a VAR(1) process. NS-RS-VAR refers to a model of the
underlying yield curve factors (in the state equation), which includes a three-state regime-switch-
ing specification, with regimes identified via the slope of the yield curve. Bold numbers in the
table indicate the best performing model for a given forecast horizon and a given maturity. No
statistical tests are performed to identify whether model performances are significantly different.

nominal yields cannot take on negative values. However, the near I(1)-ness
of yields makes it increasingly difficult to outperform the random walk
as the forecasting horizon and the maturity are increased. This is seen in
Table 3.1 for both models; however, it is more pronounced for the NS-VAR
specification.
Based on the results in Table 3.1 we conclude that the regime-switching
model (NS-RS-VAR) performs better than the random walk model and the
non-regime-switching counterpart (NS-VAR) on US Treasury yield curve data.
Table 3.2 presents the pure spread forecasting performance of the mod-
els, where the spreads are defined as the difference between the projected
LIBOR/SWAP and the projected government yield curves. It reveals that the
SM1-RS-VAR tends to perform best for short maturities and the RM-RS-VAR
performs best for the longer maturities.
The results for the forecasts of the LIBOR/SWAP curve are shown in
Table 3.3. These results show the combined quality of the forecasts produced
by the dynamic Nelson-Siegel models (with and without regime-switching)
and the spread models (SM1, SM2 and RM). Three models stand out in terms
of bold number entries. The first is RM-VAR, which seems to produce super-
ior results for the shorter forecasting horizons and the lower maturities.
RM-RS-VAR produces better results than its competitors for longer forecasting

Table 3.2 Ratio of RMSFE for the swap-spreads

Model  Horizon (years)  Maturity (months): 3 6 12 24 36 60 84 120

SM1-VAR 1 year 1.137 1.053 1.016 0.947 0.968 1.036 0.947 1.165
2 years 0.939 0.887 0.871 0.832 0.846 0.876 0.847 0.913
3 years 0.854 0.821 0.772 0.692 0.730 0.745 0.730 0.773
4 years 0.778 0.742 0.698 0.631 0.671 0.692 0.694 0.731

5 years 0.761 0.718 0.681 0.595 0.650 0.673 0.682 0.719
SM2-VAR 1 year 1.230 1.165 1.097 0.973 0.905 1.028 0.967 1.100
2 years 1.078 1.057 1.021 0.934 0.884 0.977 0.954 0.980
3 years 1.013 1.023 0.971 0.886 0.871 0.919 0.912 0.922
4 years 0.911 0.913 0.878 0.859 0.863 0.905 0.922 0.925
5 years 0.861 0.855 0.837 0.835 0.874 0.908 0.930 0.939
RM-VAR 1 year 1.240 1.139 1.096 0.999 1.054 1.132 0.990 1.143
2 years 1.002 0.937 0.914 0.878 0.924 0.950 0.883 0.908
3 years 0.916 0.873 0.820 0.757 0.821 0.810 0.758 0.760
4 years 0.833 0.790 0.750 0.721 0.773 0.764 0.727 0.717
5 years 0.814 0.764 0.737 0.695 0.758 0.751 0.723 0.708
SM1-RS-VAR 1 year 1.069 0.974 0.956 0.934 0.975 1.026 0.940 1.151
2 years 0.889 0.830 0.816 0.790 0.803 0.821 0.797 0.862
3 years 0.832 0.797 0.752 0.676 0.696 0.701 0.692 0.722
4 years 0.758 0.725 0.679 0.600 0.621 0.629 0.636 0.663
5 years 0.744 0.702 0.657 0.545 0.578 0.588 0.596 0.631
SM2-RS-VAR 1 year 1.150 1.081 1.035 0.988 0.946 1.004 0.931 1.048
2 years 0.980 0.943 0.929 0.892 0.840 0.873 0.854 0.880
3 years 0.906 0.894 0.853 0.779 0.753 0.766 0.765 0.767
4 years 0.814 0.795 0.748 0.670 0.661 0.664 0.679 0.691
5 years 0.785 0.757 0.704 0.598 0.605 0.595 0.599 0.640
RM-RS-VAR 1 year 1.118 1.004 0.977 0.955 1.018 1.030 0.928 1.015
2 years 0.910 0.834 0.814 0.781 0.808 0.811 0.772 0.783
3 years 0.859 0.811 0.763 0.686 0.719 0.702 0.675 0.659
4 years 0.782 0.739 0.694 0.624 0.652 0.630 0.607 0.586
5 years 0.760 0.707 0.666 0.567 0.606 0.584 0.558 0.542

Note: This table contains the root mean squared forecast errors for the evaluated spread curve models
applied to the LIBOR/SWAP spread term structures, relative to the RMSFE of the RW. The tested mod-
els comprise SM1, SM2 and RM; SM refers to purely empirical factor identification, while RM is linked
directly to economic intuition. Each model class is coupled with factor dynamics following either a
standard VAR(1) or a regime-switching VAR(1). SM1-VAR, SM2-VAR and RM-VAR refer to specifications
in which the dynamic evolution of the underlying spread factors (in the state equation) follows a
VAR(1) process; SM1-RS-VAR, SM2-RS-VAR and RM-RS-VAR include a three-state regime-switching
specification, in which regimes are identified via the slope of the yield curve. Bold numbers in the
table indicate the best performing model
for a given forecast horizon and a given maturity. No statistical tests are performed to identify whether
model performances are significantly different.

Table 3.3 Ratio of RMSFE for the LIBOR-SWAP curve

Model  Horizon (years)  Maturity (months): 3 6 12 24 36 60 84 120

SM1-VAR 1 year 0.987 0.999 1.012 1.015 1.015 1.013 1.002 0.990
2 years 0.904 0.919 0.940 0.972 0.997 1.029 1.037 1.027
3 years 0.810 0.823 0.851 0.903 0.944 1.002 1.031 1.042
4 years 0.802 0.822 0.867 0.950 1.009 1.084 1.119 1.135
5 years 0.909 0.934 0.992 1.097 1.160 1.232 1.258 1.256
SM2-VAR 1 year 1.008 1.022 1.034 1.035 1.036 1.035 1.022 1.000
2 years 0.931 0.945 0.965 0.995 1.019 1.050 1.057 1.045
3 years 0.840 0.852 0.878 0.927 0.967 1.023 1.052 1.069
4 years 0.825 0.842 0.884 0.962 1.017 1.088 1.123 1.146
5 years 0.917 0.939 0.994 1.093 1.152 1.218 1.242 1.254
RM-VAR 1 year 0.982 0.992 1.002 1.003 1.005 1.010 1.009 1.012
2 years 0.889 0.902 0.922 0.952 0.977 1.012 1.030 1.041
3 years 0.792 0.805 0.831 0.880 0.920 0.982 1.020 1.054
4 years 0.782 0.801 0.845 0.926 0.984 1.063 1.109 1.147
5 years 0.885 0.910 0.967 1.069 1.132 1.209 1.246 1.269
SM1-RS-VAR 1 year 1.020 1.024 1.017 0.988 0.967 0.950 0.943 0.948
2 years 0.920 0.925 0.928 0.929 0.929 0.931 0.928 0.920
3 years 0.824 0.823 0.823 0.827 0.836 0.851 0.855 0.851
4 years 0.766 0.769 0.782 0.814 0.838 0.867 0.875 0.870
5 years 0.801 0.812 0.841 0.891 0.915 0.929 0.916 0.883
SM2-RS-VAR 1 year 1.025 1.032 1.028 1.002 0.983 0.966 0.955 0.955
2 years 0.939 0.945 0.948 0.950 0.952 0.957 0.954 0.944
3 years 0.846 0.844 0.844 0.849 0.858 0.875 0.883 0.883
4 years 0.784 0.785 0.797 0.827 0.850 0.879 0.890 0.891
5 years 0.812 0.822 0.849 0.897 0.919 0.932 0.922 0.899
RM-RS-VAR 1 year 1.019 1.023 1.015 0.986 0.967 0.957 0.957 0.970
2 years 0.918 0.923 0.926 0.926 0.928 0.938 0.945 0.955
3 years 0.821 0.819 0.818 0.822 0.832 0.854 0.871 0.888
4 years 0.759 0.761 0.774 0.806 0.832 0.869 0.891 0.911
5 years 0.791 0.801 0.830 0.881 0.906 0.931 0.934 0.930

Note: This table contains the root mean squared forecast errors for the evaluated spread curve models
applied to the LIBOR/SWAP yield curve, relative to the RMSFE of the RW. The tested models comprise
SM1, SM2 and RM, where SM refers to purely empirical factor identification, while RM is linked directly
to economic intuition. Each model class is coupled with factor dynamics following either a standard
VAR(1) or a regime-switching VAR(1). SM1-VAR, SM2-VAR and RM-VAR refer to specifications in which
the dynamic evolution of the underlying yield curve factors (in the state equation) follows a VAR(1)
process; SM1-RS-VAR, SM2-RS-VAR and RM-RS-VAR include a three-state regime-switching specification,
in which regimes are identified via the slope of the yield curve. Bold numbers in the table indicate
the best performing model
for a given forecast horizon and a given maturity. No statistical tests are performed to identify whether
model performances are significantly different.


horizons and lower maturities, and for all but one forecasting horizon at
the 24- and 36-month maturities. For maturities above 36 months the SM1-
RS-VAR model performs best.
Regime-switching is seen to be important for all maturities and forecast-
ing horizons apart from maturities three, six and 12 and forecast horizons
below three years. For this particular subset of the forecasting space, the
RM-VAR model is better. While the models incorporating regime-switches

are better than the RW for the longer horizons and higher maturities, these
models are actually slightly worse than the RW model for the lower matur-
ities and shorter forecasting horizons, that is, the context in which these
models were outperformed by the RM-VAR model. It should also be noted
that when the RM-VAR model outperforms its competitors, it also performs
better than the RW.
Another dimension of the results is how well RM compares with the SM
models. Table 3.3 shows that RM dominates for all forecasting horizons at
maturities lower than 36 months, and SM1-RS-VAR dominates for maturities of
60, 84 and 120 months.
When comparing the above models, in particular the two better per-
forming models RM-RS-VAR and SM1-RS-VAR, it is somewhat surprising
that when evaluated at the level of LIBOR/SWAP yields (Table 3.3) the
RM-RS-VAR model performs better than the SM1-RS-VAR model for short to
medium maturities; the reverse is true when looking at spreads (Table 3.2),
where RM-RS-VAR performs better than the SM1-RS-VAR model for longer
maturities.
The reason is probably that there is no material difference between the
forecasting performances of the two models. In fact, closer inspection of the
numbers in the tables reveals that there are only very marginal differences
between the performances of these models.

3.8 Conclusion

An economically intuitive model is presented for the modelling of the
LIBOR/SWAP spread yield curves. The model relies on a single underlying
factor, which has an interpretation as the risk implied in credit-spreads, as
it is priced by the market. A parsimonious parameterization of the loading
structure for this implied-risk factor is identified, and we show how it can
be set in a state-space modelling context.
The model is tested on US LIBOR/SWAP data covering the period from
November 1988 to September 2008. In combination with a regime-switch-
ing dynamic Nelson-Siegel model for the US Treasury yields, it is shown that
this model generates forecasts that are as good, and sometimes better, than
a purely empirical model specification.
The main contribution of the chapter is in the presentation of an econom-
ically intuitive spread model that comprises all sources of spread risk.


Notes

Corresponding author: Fernando Monar Lora (Fernando.[email protected]).
1. One exception is Lekkos et al. (2007), who explore the ability of factor models to
predict the dynamics of the three-year, seven-year and ten-year maturity US and
UK interest rate swap spreads.

2. See Christensen (2008) for an excellent brief summary of the literature.
3. This object facilitates the translation of the underlying yield curve factor dynam-
ics into observable yield curves, i.e. it facilitates the mapping between the risk-
neutral and the empirical measures.
4. On the one hand, a more parsimonious model, such as the single-factor models
(SM1 and RM), will tend to out-perform less parsimonious models when evalu-
ated out-of-sample, and this out-performance is likely to increase with the fore-
casting horizon. On the other hand, the most flexible model (SM2) is likely to
perform best in-sample and perhaps at very short forecast horizons.
5. Source: Federal Reserve.
6. Source: British Bankers Association and Bloomberg.
7. Alternative parameterizations have been investigated without encouraging
results as regards the extrapolation of spreads.
8. Using unity as the base for the principal of the bond. It can be seen that a par-
coupon bond (c = y) has a price equal to its principal (one). This pricing function
is only applicable for yields different from zero. If the yield were zero, the price of
the bond would be the sum of the principal and the coupons.
9. A preliminary exploration of this possibility has shown how, for a given sector, a
single risk factor can explain most of the variation of the risks implied by observable
spreads.
10. The linkage between the extracted Implied-Risk factor and traditional empir-
ical credit-risk models constitutes a very interesting subject of study for future
applied research.

Bibliography
Ben Dor, A., Dynkin, L., Hyman, J., Houweling, P., Leeuwen, E.V. and Penninga, O.
2007, ‘DTS (Duration Times Spread)’, Journal of Portfolio Management, Winter 2007.
Bernadell, C., Coche, J. and Nyholm, K. 2005, ‘Yield Curve Prediction for the Strategic
Investor’, ECB working paper series, No. 472, April 2005.
Christensen, J.E. 2008, ‘The Corporate Bond Credit Spread Puzzle’, FRBSF Economic
Letter, No. 10, March, 1–3.
Collin-Dufresne, P., Goldstein, R.S. and Martin, J.S. 2001, ‘The Determinants of
Credit Spread Changes’, Journal of Finance, 56, 2177–2207.
Diebold, F.X. and Li, C. 2006, ‘Forecasting the term structure of government bond
yields’, Journal of Econometrics, 130, 337–64.
Driessen, J. 2005, ‘Is Default Event Risk Priced in Corporate Bonds?’, Review of Financial
Studies, 18(1), 165–195.
Elton, E.J., Gruber, M.J., Agrawal, D. and Mann, C. 2001, ‘Explaining the Rate Spread
on Corporate Bonds’, Journal of Finance, 56, 247–277.
Kim, C-J. and Nelson, C.R. 1999, ‘State Space Models with Regime Switching,’
Cambridge (MA): The MIT Press.


Hamilton, J.D. 1989, ‘A New Approach to the Economic Analysis of Nonstationary Time Series and the Business Cycle,’ Econometrica, 57, 357–84.
Houweling, P., Mentink, A. and Vorst, T. 2005, ‘Comparing Possible Proxies of
Corporate Bond Liquidity’, Journal of Banking and Finance, 29, 1331–1358.
Koivu, M., Nyholm, K. and Stromberg, J. 2007, ‘Joint modelling of international yield
curves’, European Central Bank, Risk Management Division, discussion paper.
Lekkos, I., Milas, C. and Panagiotidis, T. 2007, ‘Forecasting Interest Rate Swap Spreads using Domestic and International Risk Factors: Evidence from Linear and Non-linear Models’, Journal of Forecasting, 26, 601–619.
Nelson, C.R. and Siegel, A.F. 1987, ‘Parsimonious Modelling of Yield Curves’, Journal
of Business, 60, 473–489.

4
Dynamic Management of Interest
Rate Risk for Central Banks and
Pension Funds

Arjan B. Berkelaar and Gabriel Petre

4.1 Introduction

The strategic asset allocation decision for any investor sets out the port-
folio with the highest expected return given investors’ overall objectives,
investment horizon and risk tolerance. The outcome of the strategic asset allocation study is a policy benchmark. This benchmark is typically time-invariant and represents the ‘neutral’ position against which risk and return
are measured. Given that typically over 90% of the risk of investment port-
folios is derived from the policy benchmark, a great deal of effort goes into
the process of creating it. In most instances, this benchmark is reviewed
periodically, often on a three to five year timetable.
Basing a portfolio around a static benchmark is not typically an optimal
solution. Academic research in the area of dynamic asset allocation going
back to the late 1960s and early 1970s (e.g. Samuelson 1969, Merton 1971)
has shown that a static portfolio is only optimal under two conditions:

1. constant expected returns through time, and
2. constant relative risk aversion.

The latter condition means that investors’ willingness to bear investment risk does not depend on their level of wealth. If the conditions above
hold then a static policy benchmark is the optimal approach. If expected
returns are not constant through time or if the investor’s utility function does
not exhibit constant relative risk aversion, then in general a static policy
benchmark will not be optimal. While significant progress has been made
in recent years (e.g. Campbell and Viceira 2002, Brandt et al. 2005), solving
dynamic asset allocation problems is not straightforward.
In this chapter we consider a simpler problem: should investors keep
the duration of their portfolio constant or should duration be time-
varying? Consequently, we restrict our attention to interest rate risk only.


We motivate our study by considering the asset allocation problems of both central banks and defined benefit pension funds. Central banks typically
set their strategic policy annually such that the probability of a negative
return over 12 months is small (e.g. 1% or 5%). In order to adhere to this risk
constraint, the duration of the portfolio should be adjusted. Pension funds,
on the other hand, have a much longer investment horizon, given the long
nature of their liabilities. Pension liabilities are exposed to interest rate risk and a key question for a defined benefit pension fund is how much of this
interest rate risk to hedge. Given the historically low interest rates in recent
years, many pension funds have been reluctant to fully hedge the interest
rate risk in their liabilities. Is such a strategy optimal or should the duration
of the assets be matched to the duration of their liabilities regardless of the
level of interest rates?
Our chapter is empirical in nature. The objective is not to solve a dynamic
portfolio optimization problem but simply to test various strategies for vary-
ing duration over time. We consider four different types of strategies to
dynamically alter the duration of both a short-duration portfolio (represen-
tative of central bank reserves portfolios) and a long-duration portfolio (rep-
resentative of pension portfolios). The four strategies are: 1) level-dependent
strategies based on belief in mean reversion in interest rates, 2) regression-
based strategies (both linear and probit regressions) in which interest rate
changes are related to macro variables, 3) scoring strategies using a set of
macro variables, and 4) crossover moving average strategies based on short-
term momentum in interest rates. We assess the performance of each of
these strategies against a constant maturity benchmark. We calculate a
range of statistics for each of the strategies against the benchmark, but focus
primarily on the information ratio (the ratio of average annualized excess
return over annualized tracking error) in evaluating the results.
We consider four different rebalancing frequencies: one month, three
months, six months and 12 months, and we use two approaches for convert-
ing the signal into a duration deviation: a laddered approach and a smooth
approach using a sigmoid function.
We find that level-dependent strategies work reasonably well for the short-
duration portfolio in the UK. In the US and Eurozone, level-dependent strat-
egies only work when the rebalancing frequency is at least 12 months (with
information ratios between 0.1 and 0.4), suggesting that mean reversion
is slow. For the long-duration portfolios, level-dependent strategies do not
work, resulting in negative information ratios. Regression-based strategies
using linear regression do not work in general. The information ratio from
these strategies is negative most of the time. Improvements could be made
by including other macro variables, although we have included the usual
suspects as explanatory variables. Using a probit model improves the results
somewhat, but information ratios are not consistent across markets and
rebalancing frequencies.


Scoring strategies work reasonably well for the long-duration portfolios at all rebalancing frequencies, but information ratios are low (between 0.1
and 0.3). The scoring strategy does not work for the short-duration portfo-
lios, with the exception of the US market when the rebalancing frequency is
1 month. Momentum strategies also work reasonably well for long-duration
portfolios at all rebalancing frequencies, but again information ratios are low
(between 0.1 and 0.4). For short-duration portfolios the results are mixed: we only get positive information ratios for the US and Eurozone when the
rebalancing frequency is one month or 12 months. Combining the scoring
approach with momentum produces information ratios between 0.1 and
0.4 for the long-duration portfolios for all rebalancing frequencies, but only
works for the short-duration portfolio in the US market when the rebalan-
cing frequency is one month.
While some of the strategies discussed in this chapter produce positive
risk-adjusted returns, we conclude that, in general, central banks and pen-
sion funds would be better off keeping the duration of their portfolios rela-
tively constant. The remainder of this chapter is organized as follows. The
next section briefly introduces the asset allocation problem and duration
decision faced by central bank reserves managers and pension funds, and
performs some basic tests to determine whether interest rates exhibit mean
reversion. In Section 4.3 we discuss the various strategies that are tested in
this chapter. Section 4.4 summarizes our findings and conclusions are pre-
sented in Section 4.5.

4.2 Motivation

4.2.1 Asset allocation setting


Most central banks invest their foreign reserves conservatively, guided by
principles of liquidity and safety, and are concerned with market losses over
the accounting cycle. The investment universe for central bank reserves, in
most instances, consists of short- to medium-term government bonds of the
United States, the Eurozone, Japan and the UK. The asset allocation prob-
lem for central bank reserves is typically formulated as follows: maximize
the expected return of the portfolio such that the probability of a negative
return over 12 months is small, e.g. 1% or 5%. The duration of central bank
reserves portfolios is typically around two years.
When interest rates are high, the income return on fixed income invest-
ments provides a cushion and the probability of a negative return is rela-
tively small (particularly for lower-duration portfolios). When interest
rates are low, however, the probability of a negative return increases mark-
edly and the only way to avoid negative returns is by reducing the dur-
ation of the portfolio significantly. Consequently to avoid losses over an
annual horizon, the duration of central bank reserves portfolios will need
to be managed dynamically. Would central banks be better off – in terms of


risk-adjusted returns – by changing the duration of their portfolio over time or should they stick with a constant duration strategy?
Pension funds, on the other hand, have a much longer investment hori-
zon, promising retirement income to plan participants for many years to
come. Pension liabilities can be represented by an aggregate stream of cash
flows – the benefits that are expected to be paid to plan participants. The
present value of this stream of cash flows represents the liabilities of the pension fund. The duration of these liabilities is on the order of eight to
15 years depending on the maturity of the pension scheme. Typically pen-
sion funds in North America and Europe have fixed income allocations of
20% to 40% benchmarked to an intermediate fixed income index with dur-
ation of about five years. This results in a significant interest rate mismatch
between pension assets and liabilities that needs to be managed.
In recent years, pension funds have become increasingly aware of this
mismatch for a variety of reasons. To address the interest rate mismatch and
manage pension assets more in line with the underlying liabilities, pension
funds can invest in long-duration bonds or use a swap overlay to extend the
duration of their portfolio. One of the factors that has kept pension funds
from reducing the interest rate mismatch in their portfolio is the historic-
ally low level of interest rates over the past several years. The general feeling
seems to be that it is too costly at these low interest rates to fully reduce the
duration mismatch. This reflects a view that interest rates are likely to go up
in the future and many pension funds have been sitting on the sidelines,
waiting for interest rates to increase before putting on a hedging program.
Is this the optimal strategy or should pension funds hedge the duration of
their liabilities independent of the level of interest rates?

4.2.2 Are interest rates mean reverting?


Before discussing the various strategies for managing interest rate risk dyn-
amically we first consider the empirical evidence on mean reversion in
interest rates. We run tests for three month, two year, five year and ten year
interest rates in the US, Eurozone (we use German yields before 1999) and the
UK to determine whether interest rates are stationary (i.e. mean-reverting) or
follow integrated processes. We use monthly and quarterly generic yield data
from February 1967 to September 2008.
Consider the following AR(1) process for interest rates:

y_{t+1} = c + ρ y_t + ε_{t+1}

If ρ=1, interest rates follow a random walk and the process is said to be
integrated. If ρ<1, interest rates are mean-reverting and the process is said
to be stationary. Simple ordinary least squares (OLS) regressions result in
estimates for ρ that are less than one (see Table 4.1).
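The OLS estimate behind Table 4.1 amounts to regressing y_{t+1} on y_t. A minimal sketch on simulated data (illustrative parameters, not the authors' code or data):

```python
import numpy as np

def estimate_ar1(y):
    """OLS estimates of c and rho in y[t+1] = c + rho * y[t] + e[t+1]."""
    x, z = y[:-1], y[1:]
    rho = np.cov(x, z, bias=True)[0, 1] / np.var(x)
    c = z.mean() - rho * x.mean()
    return c, rho

# Simulate a stationary AR(1): rho = 0.95, long-term mean 5%, error sd 0.30%
rng = np.random.default_rng(0)
y = np.empty(500)
y[0] = 0.02
for t in range(499):
    y[t + 1] = 0.05 * (1 - 0.95) + 0.95 * y[t] + 0.003 * rng.standard_normal()

c_hat, rho_hat = estimate_ar1(y)  # rho_hat lands near (slightly below) 0.95
```

Note that in small samples the OLS estimate of ρ is biased downwards, which is one reason point estimates below one, as in Table 4.1, are not by themselves evidence of mean reversion.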


Table 4.1 OLS estimates of first-order autocorrelation coefficients for interest rates

3mth 2yr 5yr 10yr

US
Monthly data 0.986 0.990 0.992 0.993
Quarterly data 0.923 0.943 0.954 0.965

UK
Monthly data 0.985 0.986 0.990 0.992
Quarterly data 0.946 0.953 0.961 0.972

EUR
Monthly data 0.984 0.990 0.993 0.993
Quarterly data 0.944 0.962 0.970 0.974

While these estimates suggest that interest rates are mean-reverting, more
formal tests are required. The augmented Dickey-Fuller (ADF) test checks for
a unit root in the series – i.e. it tests whether we can reject the hypothesis
that ρ=1. The test can be performed with or without a time-trend in the
data. Table 4.2 shows the results for one and two lags for quarterly interest
rates.
The null hypothesis of a unit root in interest rates cannot be rejected,
except for the US three month interest rate. Using monthly data, the pres-
ence of a unit root cannot be rejected in any of the cases. An alternative test
is the so-called KPSS test (Kwiatkowski et al. 1992) where the null hypothesis
is that the time series is stationary. The KPSS test statistic is sensitive to the
choice of the truncation lag.
Table 4.3 shows the test statistics for the KPSS test with two different trun-
cation lags suggested by Kwiatkowski et al. (1992):

L_q = floor(q (T / 100)^{1/4})

with q=4 and q=12. Note that in almost all cases, the null hypothesis of sta-
tionary interest rates is rejected. Results for monthly data are similar.
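As a quick check of the formula, with the T = 167 quarterly observations referenced in the table notes (an assumption for this illustration), the two truncation lags work out as:

```python
import math

def kpss_truncation_lag(q, T):
    # L_q = floor(q * (T / 100) ** (1 / 4))
    return math.floor(q * (T / 100) ** 0.25)

lags = {q: kpss_truncation_lag(q, 167) for q in (4, 12)}  # → {4: 4, 12: 13}
```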
Both the ADF and KPSS tests have weak power. This is particularly the
case if the process is stationary but with a root close to the non-stationary
boundary, i.e. the tests are poor at deciding if the autocorrelation coefficient
is one or 0.95, especially with small sample sizes. For monthly data the first-
order autocorrelation coefficient for interest rates of different maturity in
the US, UK and EU ranges between 0.985 and 0.993. For quarterly data the
coefficient ranges between 0.923 and 0.974. This suggests that interest rates
are nearly integrated.


Table 4.2 The ADF statistic for the null hypothesis of a unit root

                    1 lag                     2 lags
            no trend      trend       no trend      trend

US3mth       −3.35        −4.14        −3.10        −3.98
US2yr        −2.85        −3.75        −2.55        −3.49
US5yr        −2.51        −3.39        −2.23        −3.14
US10yr       −1.98        −2.81        −1.80        −2.64

UK3mth       −1.72        −2.27        −2.03        −2.66
UK2yr        −2.16        −3.16        −2.11        −3.17
UK5yr        −2.18        −3.50        −1.99        −3.38
UK10yr       −2.15        −3.86        −1.76        −3.61

EUR3mth      −1.51        −1.69        −2.16        −2.41
EUR2yr       −1.29        −1.85        −1.71        −2.37
EUR5yr       −1.24        −2.15        −1.53        −2.56
EUR10yr      −1.07        −2.25        −1.34        −2.60

Note: This table shows the augmented Dickey-Fuller (ADF) test statistic for quarterly yields. At
the 5% significance level, the critical value of the ADF statistic for 167 observations is −2.87 when
there is no trend. The comparable figure when there is a trend is −3.45.

Table 4.3 The KPSS statistic for the null hypothesis of a stationary process

                truncation lag L_4          truncation lag L_12
            no trend      trend         no trend      trend

US3mth       1.176        0.398          0.526        0.193
US2yr        1.306        0.483          0.549        0.216
US5yr        1.296        0.554          0.531        0.238
US10yr       1.237        0.598          0.498        0.250

UK3mth       1.197        0.483          0.534        0.233
UK2yr        1.670        0.587          0.686        0.262
UK5yr        1.885        0.608          0.747        0.262
UK10yr       1.999        0.576          0.779        0.245

EUR3mth      0.558        0.301          0.279        0.158
EUR2yr       1.605        0.211          0.717        0.116
EUR5yr       2.040        0.270          0.863        0.147
EUR10yr      2.133        0.318          0.892        0.170

Note: This table shows the KPSS test statistic for quarterly yields. At the 5% signifi-
cance level, the critical value of the KPSS statistic is 0.463 when there is no trend.
The comparable figure when there is a trend is 0.146.


To illustrate the weakness of both the ADF and KPSS tests when time
series are nearly integrated, we perform a simple Monte Carlo exercise. We
simulate 5000 paths of 500 observations for an AR(1) process with differ-
ent autocorrelation coefficients (ρ=0.7, ρ=0.9, ρ=0.95, ρ=0.98 and ρ=0.99), a
long-term mean of 5% and a standard deviation for the error term of 0.30%.
The starting level is 2%. Table 4.4 shows the rejection frequency at a 5% sig-
nificance level for the 5000 simulations with the ADF and KPSS tests.

Note that despite the fact that the series follows an AR(1) process and
is therefore stationary, both tests have weak power, particularly when ρ is
close to unity. When ρ=0.98, the ADF test only rejects the presence of a unit
root in 39% of the cases. The KPSS test has even weaker power and rejects
stationarity in almost 95% of the cases when the truncation lag is L_4 and
in 64% of the cases when the truncation lag is L_12. The KPSS test even
rejects stationarity in 10% to 20% of the cases when ρ is only 0.7. The weak
power of the ADF and KPSS tests is a well known fact. A battery of unit root
tests has been developed over the years, but to date there is no uniformly
accepted test.
An alternative to unit root tests is the so-called variance ratio test. The
principle behind a variance ratio test is that if interest rates follow a random
walk then the variance of e.g. two-year changes in yields should be twice
the variance of one-year changes in yields, and the variance of ten-year
changes should be ten times the variance of one-year changes. If the vari-
ance ratio is less than one then this is evidence of mean reversion; if greater
than one, mean aversion. If the variance ratio is close to one for all horizons,
the series follows a random walk process.
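A simple variance ratio along these lines can be sketched as follows (overlapping q-period changes, without the small-sample and heteroskedasticity corrections used in formal variance ratio tests):

```python
import numpy as np

def variance_ratio(y, q):
    """Variance of q-period changes divided by q times the variance of 1-period changes."""
    d1 = np.diff(y)          # one-period yield changes
    dq = y[q:] - y[:-q]      # overlapping q-period yield changes
    return np.var(dq) / (q * np.var(d1))

rng = np.random.default_rng(1)
rw = np.cumsum(0.003 * rng.standard_normal(5000))   # random-walk yields
ar = np.empty(5000)
ar[0] = 0.0
for t in range(4999):                               # mean-reverting AR(1) yields
    ar[t + 1] = 0.9 * ar[t] + 0.003 * rng.standard_normal()

vr_rw = variance_ratio(rw, 12)   # near one for a random walk
vr_ar = variance_ratio(ar, 12)   # well below one under mean reversion
```

The parameter values are illustrative; with ρ = 0.9 the population 12-period ratio is (1 − ρ^12)/(12(1 − ρ)) ≈ 0.6, well below one.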
Table 4.5 shows variance ratios for both monthly and quarterly yield data
for the US, UK and Eurozone. The variance ratios suggest that quarterly
interest rates in the US and UK are mean-reverting – particularly at longer
horizons. There is little to no evidence of mean reversion for yields in the
Eurozone. Monthly interest rates are mean-averting in the short-term and
mean-reverting in the longer term. Mean aversion in the short-term sug-
gests that momentum strategies might work. Monthly and quarterly interest
rates for the Eurozone exhibit mean aversion.

Table 4.4 Rejection frequencies for ADF and KPSS tests when the true series follows an AR(1) process

            ADF (%)    KPSS L_4 (%)    KPSS L_12 (%)

ρ=0.7        100.0          20.1             9.3
ρ=0.9        100.0          52.2            20.3
ρ=0.95        98.0          77.5            36.7
ρ=0.98        38.8          94.9            63.9
ρ=0.99        15.6          97.4            76.7
Table 4.5 Variance ratios for the US, UK and Eurozone

US Yields UK Yields EU Yields

3-month 2-year 5-year 10-year 3-month 2-year 5-year 10-year 3-month 2-year 5-year 10-year

Months Monthly variance ratios

1 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
3 1.35 1.41 1.40 1.35 1.24 1.13 1.17 1.02 1.13 1.31 1.33 1.19
6 1.32 1.36 1.37 1.37 1.32 1.08 1.12 0.88 1.28 1.51 1.52 1.35
12 1.26 1.30 1.39 1.48 1.40 1.00 0.96 0.81 1.61 1.76 1.71 1.51
18 1.33 1.35 1.40 1.49 1.37 0.89 0.83 0.77 1.78 1.92 1.80 1.58
24 1.33 1.31 1.32 1.39 1.33 0.86 0.79 0.77 1.77 1.93 1.79 1.55
36 1.19 1.17 1.16 1.23 1.10 0.71 0.71 0.78 1.59 1.91 1.77 1.52
48 1.01 1.09 1.15 1.29 0.96 0.63 0.66 0.75 1.45 1.81 1.61 1.36
60 0.82 0.98 1.13 1.34 0.69 0.48 0.54 0.68 1.31 1.53 1.33 1.13

Quarters Quarterly variance ratios

1 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
2 0.84 0.85 0.86 0.93 1.11 0.96 0.90 0.80 1.16 1.14 1.11 1.12
4 0.75 0.76 0.82 0.95 1.19 0.93 0.80 0.76 1.41 1.37 1.26 1.26
6 0.75 0.77 0.81 0.93 1.15 0.79 0.69 0.72 1.55 1.48 1.33 1.32
8 0.77 0.76 0.77 0.88 1.13 0.77 0.64 0.70 1.53 1.51 1.33 1.29
12 0.69 0.68 0.68 0.79 0.95 0.65 0.58 0.72 1.35 1.50 1.31 1.26
16 0.57 0.64 0.67 0.82 0.84 0.57 0.53 0.68 1.23 1.40 1.19 1.11
20 0.46 0.57 0.65 0.85 0.60 0.42 0.43 0.60 1.12 1.19 0.98 0.92


4.2.3 Setup of the study


In the remainder of this chapter we consider several dynamic duration
strategies. Our objective is to test if any of these strategies can produce
superior risk-adjusted returns. We benchmark the returns on each of these
strategies against a constant maturity benchmark. For central banks the
benchmark is represented by the returns on two-year bonds, while for
pension funds the benchmark is represented by the returns on ten-year bonds.
For the purpose of this study we use monthly generic yields starting in
February 1967 for three major markets (US, UK and Eurozone).1 We estimate
the term structure of yields at each point in time using the Nelson-Siegel
yield curve model (Nelson and Siegel 1987). The Nelson-Siegel yield curve
model is given by:

Y_{n,t} = b_{1,t} + b_{2,t} [(1 − e^{−λn}) / (λn)] + b_{3,t} [(1 − e^{−λn}) / (λn) − e^{−λn}]

where Y_{n,t} is the yield on an n-period maturity bond at time t. The first term
b_{1,t} is a level factor, the second term is the negative of the slope, and the
third term represents the curvature. This can be seen from the limit behav-
iour of the above equation. The yield of a very long-maturity bond (n → ∞)
approaches b_{1,t}. The yield of a very short-maturity bond (n → 0)
approaches b_{1,t} + b_{2,t} (thus the long-term yield plus the negative of the slope).
Finally, the loading of the third factor approaches its maximum for inter-
mediate maturities while it is zero for both very short and very long bonds,
hence representing curvature.
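As an illustration, the three factor loadings can be coded directly from the equation above (the factor values and the decay parameter λ below are arbitrary placeholders, not the estimates used in the chapter):

```python
import numpy as np

def nelson_siegel_yield(n, b1, b2, b3, lam):
    """Nelson-Siegel yield for maturity n given level (b1), slope (b2) and curvature (b3)."""
    x = lam * n
    slope_load = (1.0 - np.exp(-x)) / x      # -> 1 as n -> 0, -> 0 as n -> infinity
    curve_load = slope_load - np.exp(-x)     # humped loading: zero at both ends
    return b1 + b2 * slope_load + b3 * curve_load

# Example: 5% level, -2% slope factor, 1% curvature, lambda = 0.6
ys = [nelson_siegel_yield(n, 0.05, -0.02, 0.01, 0.6) for n in (0.25, 2, 10, 30)]
```

The limit behaviour described above is easy to verify numerically: the yield approaches b1 + b2 = 3% for very short maturities and b1 = 5% for very long ones.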
We use the derived yield curves to calculate realized returns on both the
benchmark and the actual portfolio at each of the rebalancing frequencies.
The duration for the constant maturity benchmark portfolios at time t for a bond with maturity n is calculated as follows:

D_{n,t} = [1 − (1 + Y_{n,t})^{−n}] / [1 − (1 + Y_{n,t})^{−1}]
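A sketch of this duration calculation, assuming the annual compounding implied by the expression (the closed form applies to a par bond):

```python
def benchmark_duration(y, n):
    """Duration of an n-year constant-maturity par bond at yield y (annual compounding)."""
    return (1 - (1 + y) ** (-n)) / (1 - (1 + y) ** (-1))

d2 = benchmark_duration(0.05, 2)    # roughly 1.95, cf. the 1.9 average in Table 4.6
d10 = benchmark_duration(0.05, 10)  # roughly 8.1
```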

Table 4.6 shows some statistics for the benchmark portfolios using
monthly returns.
All strategies will be evaluated against these benchmarks. The duration
deviation of the actual portfolio against a constant maturity benchmark at
each decision point in time is a function of the type of strategy used (the
signal), the maximum deviation allowed, the translation function used to
convert the signal into a duration deviation and the rebalancing frequency.
The maximum duration deviation allowed in each period is ±two years


Table 4.6 Statistics for benchmark portfolios

2 year portfolios 10 year portfolios

Benchmark portfolios US UK EUR US UK EUR

Average annual return (%) 7.3 8.3 6.3 7.9 9.3 7.5
Volatility (%) 2.9 3.4 2.0 7.0 8.6 6.0
Worst-case one period loss (%) −2.4 −3.1 −1.7 −7.6 −6.9 −5.9

Copyright material from www.palgraveconnect.com - licensed to Taiwan eBook Consortium - PalgraveConnect - 2011-03-03
Maximum drawdown (%) −4.3 −3.8 −1.7 −15.2 −27.3 −12.1
Percent of negative periods (%) 19.2 16.4 15.5 33.6 36.1 34.2
Losing streak (# of consecutive periods of negative return) 4 3 4 5 6 6
Average duration 1.9 1.9 1.9 7.5 7.1 7.6

for the short-duration portfolios and ±four years for the long-duration
portfolios.

4.3 Description of the strategies

We consider four types of strategies in this chapter:

● Level-dependent strategies in which the duration deviation from the benchmark is contingent upon the level of nominal rates in comparison to the long-term historical average.
● Regression-based strategies where the exposure to interest rates is driven
either by the convergence from the current yield level to a ‘fair value’
model-based estimate, or by the probability of yields increasing or decreas-
ing in the following period as estimated by a probit type of model.
● Scoring-based strategies attempt to capture the direction of yield changes
over the next period by comparing the current values of a set of macro-
economic variables with their long-term average levels.
● Momentum-based strategies look to exploit the continuation of past
observed trends in yield movements. In this case the deviation from the
benchmark is based on a dual crossover moving average system.
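The dual crossover rule in the last bullet can be sketched as follows; the 3- and 12-month windows and the sign convention (rising yields hurt bond returns, so shorten duration) are illustrative assumptions, as the chapter does not spell out its exact parameters here:

```python
import numpy as np

def crossover_signal(yields, short_win=3, long_win=12):
    """+1 = extend duration, -1 = shorten; short MA above long MA flags rising yields."""
    short_ma = np.mean(yields[-short_win:])
    long_ma = np.mean(yields[-long_win:])
    return -1.0 if short_ma > long_ma else 1.0

sig = crossover_signal(np.linspace(0.04, 0.06, 24))  # steadily rising yields → shorten
```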

In addition we also consider possible mixed strategies resulting from the combination of the individual signals.
We use two alternative ways to translate the signal of each of the strat-
egies into active positions. The first is based on a laddered approach where
a duration position equal to 50% of the maximum deviation is taken if the
normalized z-score of the signal is higher than one (in absolute terms) and
a duration position equal to 100% of the deviation is taken if the z-score of
the strategy is higher than two (in absolute terms). This approach has the
advantage that it avoids frequent trading based on relatively weak signals.
It only results in a deviation from the benchmark when the signal is at least
one standard deviation away from the mean.


In the second approach, labelled the ‘smoothed approach’, the duration deviation is based on a sigmoid function of the form:

D = [2 / (1 + e^{−z/K}) − 1] D_max

where z is the normalized z-score of the strategy (the signal strength), K is
a calibration parameter set to 0.75 and Dmax is the maximum allowed dur-
ation deviation. This approach is more sensitive to the trading signal of the
strategy and thus more useful in capturing the informational content of
the strategy. However, it also results in more frequent trading and conse-
quently higher transaction costs. The duration deviation as a function of
signal strength is shown in Figure 4.1.
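Both translation rules can be sketched in a few lines; K = 0.75 follows the text, while the ±2-year limit in the example corresponds to the short-duration portfolios (an illustrative sketch, not the authors' implementation):

```python
import math

def laddered_deviation(z, d_max):
    """0%, 50% or 100% of the maximum deviation at |z| thresholds of 1 and 2."""
    if abs(z) >= 2:
        frac = 1.0
    elif abs(z) >= 1:
        frac = 0.5
    else:
        return 0.0
    return math.copysign(frac * d_max, z)

def smoothed_deviation(z, d_max, k=0.75):
    """Sigmoid translation: D = (2 / (1 + exp(-z / k)) - 1) * d_max."""
    return (2.0 / (1.0 + math.exp(-z / k)) - 1.0) * d_max

dev_ladder = laddered_deviation(1.5, 2.0)   # one year: half of the +/-2y limit
dev_smooth = smoothed_deviation(1.5, 2.0)   # about 1.52 years
```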

[Figure: two panels, ‘Laddered approach’ and ‘Smoothed approach’, each plotting the duration deviation (as % of the maximum deviation, −100 to 100) against signal strength (z-score, −4.0 to 4.0).]
Figure 4.1 Duration deviation as a function of signal strength


To identify the rebalancing policy that best captures a strategy’s information content, we assess the results over four different rebalancing
frequencies – monthly, quarterly, semiannually and yearly. At the begin-
ning of the period a decision is made with respect to the duration deviation
of the actual portfolio, and that decision is revisited at the end of each of the
four different rebalancing periods.
Each strategy is evaluated on an ex-post basis against the fixed-maturity benchmarks. For each strategy we calculate the following set of statistics,
which we have grouped into two broad categories:

● Quality of the strategy’s information content, as indicated by statistics such as:
  ● average excess return;
  ● tracking error;
  ● information ratio;
  ● percentage of periods with negative excess return;
  ● maximum drawdown.
● Consistency of the strategy, as evidenced by indicators such as:
  ● maximum losing/winning streak – maximum number of consecutive investment periods with negative/positive excess return;
  ● rolling information ratios over a five-year window;
  ● worst-case relative loss;
  ● maximum relative drawdown;
  ● volatility of duration changes.

In the next section we describe the results for each of the strategies. We
focus primarily on the information ratio in discussing the results, but detailed
information with all of the statistics listed above can be found in the annex.
We refer to the portfolios benchmarked against two-year constant maturity
bonds as short-duration portfolios. The portfolios benchmarked against ten-
year constant maturity bonds are referred to as long-duration portfolios.

4.4 Results

4.4.1 Level-dependent strategies


This strategy is based on the idea that yields are mean-reverting around an
equilibrium level which in this study is assumed to be the historical aver-
age yield level over the previous five years. As such, when current yields are
higher than the historical average, the expectation is that they will revert
back to the mean. This strategy results in extending duration when yields
are above the mean and reducing the duration compared to the benchmark
at yield levels below the mean.
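A sketch of the signal construction (a hypothetical helper, not the authors' implementation): the z-score of the latest yield against its trailing five-year (60-month) window, with the sign chosen so that above-average yields extend duration:

```python
import numpy as np

def level_signal(yields, window=60):
    """Z-score of the latest yield vs its trailing window; positive => extend duration."""
    hist = yields[-window:]
    return (yields[-1] - hist.mean()) / hist.std()

rng = np.random.default_rng(2)
path = 0.05 + 0.002 * rng.standard_normal(120)  # ten years of monthly yields near 5%
path[-1] = 0.08                                 # current yield far above the 5y mean
z = level_signal(path)                          # strongly positive signal
```

This z-score would then be fed through the laddered or smoothed translation function described in Section 4.3.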
Figure 4.2 shows the information ratio for the short-duration level-depend-
ent strategies. When the rebalancing period is one or three months, only
the short-duration UK portfolio outperforms the benchmark, suggesting


that nominal interest rates in the US and EU may not exhibit mean reversion in the short run. As the rebalancing period is lengthened, the results improve, resulting in positive information ratios for the short-duration US and EU portfolios when the rebalancing period is 12 or 18 months, pointing to mean reversion in interest rates over longer horizons. In addition, taking active positions only at the extremes, as implied by the laddered approach, generates higher returns and lower volatility compared to the smoothed approach. The results for the long-duration portfolios are not as good, only producing positive information ratios when the rebalancing period exceeds 12 months.

Figure 4.2 Information ratios for short-duration portfolios – level-based strategy. [Two panels, laddered approach and smoothed approach: information ratio (−0.50 to 0.50) against rebalancing period (1, 3, 6, 12, 18 months) for the US, EU and UK short-duration portfolios.]

Dynamic Management of Interest Rate Risk 77

Figure 4.3 Five-year rolling information ratios for the UK short-duration portfolios – level-based strategy. [Rolling IRs (−0.8 to 0.8), January 1977–2007.]

To assess the consistency of the informational content of the signal we


focus on the best performing level-based strategy, i.e. the short-duration UK
portfolio. While overall the strategy results in positive information ratios,
the five-year rolling information ratio is unstable suggesting that there can
be extended periods over which the strategy underperforms (Figure 4.3).

4.4.2 Linear regression-based strategy


In this approach we estimate a ‘fair value’ level of yields based on a linear
regression model and assume that yields will converge from the current
level to the ‘fair value’ level predicted by the regression model. The regression model can be described by the following equation:

$$Y_t = \beta_0 + \sum_{i=1}^{p} \beta_i X_{i,t-1} + \varepsilon_t$$

where we use the following factors as explanatory variables:

● unemployment;
● industrial production;
● index of leading indicators;
● inflation;
● inflation volatility;
● monetary policy rate;
● level, slope and curvature of the yield curve.
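This regression can be estimated with ordinary least squares; a minimal NumPy sketch under our own naming (the chapter does not specify an estimation routine):

```python
import numpy as np

def fair_value_yield(y, X):
    """Fit Y_t = b0 + sum_i b_i * X_{i,t-1} + e_t by OLS and return the
    fair-value prediction for the next period.

    y : length-T array of yields.
    X : (T, p) array of the explanatory variables listed above.
    """
    y = np.asarray(y, dtype=float)
    X = np.asarray(X, dtype=float)
    # Lag the regressors by one period: X_{t-1} explains Y_t.
    A = np.column_stack([np.ones(len(y) - 1), X[:-1]])
    beta, *_ = np.linalg.lstsq(A, y[1:], rcond=None)
    # Fair value for period T+1 uses the latest observed regressors X_T.
    return beta[0] + X[-1] @ beta[1:]
```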


We estimate the regression model over the entire history. While it is obvious
that the performance of the strategy is going to be driven by the predictive
power of the regression model, we chose to constrain the number and type
of independent variables included in the model to generic indicators which
are readily available (see Figure 4.4).
The strategy results in significant volatility in duration deviations for the
actual portfolios with no concrete evidence of predictive power. The strategy performs poorly across all the different short-duration portfolios, while generating slightly positive information ratios for the US and EU long-duration portfolios when the rebalancing period is short. Increasing the length of the rebalancing period improves the results for the short-duration portfolios, but the opposite is true for the long-duration portfolios.
When comparing the impact of the strategy on the risk profile of the actual portfolio compared to the benchmark portfolio, we find that the regression-based approach results, in general, in a worse risk profile, as measured by the worst-case loss and the maximum drawdown, compared to the benchmark portfolios. Furthermore, as in the case of the level-based strategy, the five-year rolling information ratio varies significantly over time for all the different portfolios (Figure 4.5).

Figure 4.4 Information ratios for long-duration portfolios – regression-based strategy. [Two panels, laddered and smoothed approach: information ratio (−0.50 to 0.50) against rebalancing period (1, 3, 6, 12 months) for the US, EU and UK long-duration portfolios.]

4.4.3 Probit regression model

As an alternative to the linear regression model in the previous section, we
use a probit model to estimate the probability of yields increasing or decreas-
ing. We use the same variables to estimate the probit regression. Compared
to the linear regression approach, this strategy allows for a higher degree of
imprecision as we are not looking to estimate the ‘fair value’ level of yields
but are merely trying to capture the probability of yields rising or falling
over the next period. To estimate the probit model we use changes in yields
and in the underlying explanatory variables. We estimate the parameters of
the following model:

$$P(Y = 1 \mid X = x) = \Phi(x'\beta)$$

where Y is one if yields are expected to increase and zero otherwise, x is the vector of explanatory variables and $\Phi$ is the cumulative distribution function of the standard normal distribution. The $\beta$ parameters are estimated using maximum likelihood.
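The maximum-likelihood step can be sketched with Fisher scoring, the standard Newton-type iteration for probit models. Function names and the estimation routine are our own illustration; the chapter does not prescribe one:

```python
import numpy as np
from math import erf

def _norm_cdf(z):
    """Standard normal CDF, vectorised via math.erf."""
    z = np.atleast_1d(z)
    return 0.5 * (1.0 + np.array([erf(v / np.sqrt(2.0)) for v in z]))

def _norm_pdf(z):
    return np.exp(-0.5 * np.asarray(z) ** 2) / np.sqrt(2.0 * np.pi)

def fit_probit(X, y, iters=25):
    """Fisher-scoring MLE of P(Y=1|x) = Phi(x'b).

    X : (n, k) regressor matrix (include a column of ones for an intercept).
    y : 0/1 outcomes (1 = yields increased).
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        z = X @ b
        P = _norm_cdf(z).clip(1e-9, 1.0 - 1e-9)
        f = _norm_pdf(z)
        grad = X.T @ (f * (y - P) / (P * (1.0 - P)))   # score vector
        W = f ** 2 / (P * (1.0 - P))
        info = X.T @ (X * W[:, None])                  # Fisher information
        b = b + np.linalg.solve(info, grad)            # scoring update
    return b

def prob_yield_up(x_new, b):
    """Implied probability that yields rise next period."""
    return _norm_cdf(np.asarray(x_new, dtype=float) @ b)[0]
```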

Figure 4.5 Five-year rolling information ratios for the long-duration portfolios – regression-based strategy. [Rolling IRs, January 1977–2007, for the US, EUR and UK long-duration portfolios.]


We find that the probabilities implied by the probit model do not send a strong enough signal (Figure 4.6), which in turn translates into only small deviations from the benchmark. The probit model works best with the smoothed approach. For the short-duration portfolios, the best results are obtained for the UK market. For the long-duration portfolios, the information ratios improve when we extend the length of the rebalancing period, suggesting that there might be a lag between the signal and the actual change in long-term yields (see Figure 4.7).

Figure 4.6 Probit model implied probability of the two-year yield increasing over the next month. [Two panels, US market and UK market: probability (0–1) plotted monthly, January 1972–2008.]


4.4.4 Scoring model


In contrast to the regression model, the scoring model tries to capture infor-
mation about the direction of interest rates. For each country we include the
following variables in the scoring model with a one-period lag:

● leading indicator index;
● inflation;
● slope of the yield curve;
● monetary policy rate.

For the US market we also include the duration of the MBS Master Index
from Merrill Lynch since it became available (late 1980s).

Figure 4.7 Information ratios for short- and long-duration portfolios – probit-based strategy. [Two panels, smoothed approach: information ratio (−0.50 to 0.50) against rebalancing period (1, 3, 6, 12 months); top panel short-duration, bottom panel long-duration US, EU and UK portfolios.]


In each period we compute the z-score of the changes in these variables


using the historical volatility of changes over the previous five years, and
then compute average z-scores across the different variables. Positive changes
in excess of the historical average are associated with expected increases in
interest rates, which in turn will result in a lower duration compared to the
benchmark. We find that there is a consistent positive correlation between
the normalized z-scores from the scoring system and the subsequent change

in yields. This suggests that there is informational content in the strategy.
Overall the correlation coefficient is highest for the US market (0.67 for
short-duration portfolios) and the correlation increases once the duration of
the MBS Master Index becomes available (Figure 4.8).
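The z-score computation can be sketched as follows (our naming; a 60-month window matches the five-year history of changes used in the chapter):

```python
import numpy as np

def scoring_signal(changes, window=60):
    """Average z-score of indicator changes.

    changes : (T, k) array of period-on-period changes in the indicators
              (leading index, inflation, curve slope, policy rate, ...).

    A positive score signals rising yields, i.e. shorten duration.
    """
    changes = np.asarray(changes, dtype=float)
    T = changes.shape[0]
    score = np.full(T, np.nan)
    for t in range(window, T):
        hist = changes[t - window:t]
        z = (changes[t] - hist.mean(axis=0)) / hist.std(axis=0)
        score[t] = z.mean()          # average z-score across the k variables
    return score
```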

Figure 4.8 Correlation coefficient between scoring signal and subsequent two-year yield changes (five-year rolling). [Two panels, US market and UK market: correlation coefficient (0–0.9), January 1977–2007.]


The scoring approach produces the best results for the long-duration port-
folios. For the short-duration portfolios, information ratios are positive only
for the US when the rebalancing frequency is one or three months. The
scoring approach results in positive information ratios for the long-duration
portfolios in both the US and the Eurozone at all rebalancing frequencies
(Figure 4.9). Unfortunately, rolling five-year information ratios are not sta-
ble over time (Figure 4.10).

Figure 4.9 Information ratios for long-duration portfolios – scoring strategy. [Two panels, laddered and smoothed approach: information ratio (−0.50 to 0.50) against rebalancing period (1, 3, 6, 12 months) for the US, EU and UK long-duration portfolios.]

Figure 4.10 Five-year rolling information ratios for the long-duration portfolios – scoring strategy. [Rolling IRs, January 1977–2007, for the US, EUR and UK long-duration portfolios.]

4.4.5 Momentum-based strategy

The underlying assumption of this strategy is that interest rates are trending and observed short-term trends will continue over the next period. In its most basic form, we compare the current level of yields with a moving average over the last 12 months (Tables 4.1–4.4). We also test alternative specifications by using a dual crossover moving average system:

● Fast moving average (Fast MA): average over the last two, three, four or
six months;
● Slow moving average (Slow MA): average over the last 12 or 24 months.

The decision rule for the smoothed approach implies shortening the dur-
ation of the actual portfolio compared to the benchmark when the level of
the Fast MA is above the Slow MA level and extending the duration when
the opposite is true. For the laddered approach, a duration deviation is only
triggered when the difference between the two moving averages is at least
one standard deviation away from the average difference.
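The smoothed decision rule reduces to a simple crossover comparison; a minimal sketch (our naming; the laddered variant would additionally require the MA difference to exceed one standard deviation of its historical average):

```python
import numpy as np

def momentum_signal(yields, fast=3, slow=12):
    """Dual crossover moving-average signal on yield levels.

    -1 : fast MA above slow MA, yields trending up -> shorten duration.
    +1 : fast MA below slow MA -> extend duration.
     0 : not enough history yet.
    """
    yields = np.asarray(yields, dtype=float)
    signal = np.zeros(len(yields))
    for t in range(slow, len(yields)):
        fast_ma = yields[t - fast:t].mean()
        slow_ma = yields[t - slow:t].mean()
        signal[t] = -1.0 if fast_ma > slow_ma else 1.0
    return signal
```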
Momentum strategies work best for the long-duration portfolios, produ-
cing positive information ratios at all rebalancing frequencies, suggesting
that there is evidence of trending in long-term interest rates (Figure 4.11).
The results are slightly better with the smoothed approach than with the
laddered approach. Extending the number of previous months included
in the Fast MA results in slight improvements of the information ratios.
Overall, however, the information ratios are modest and unstable over time
as evidenced by the five-year rolling information ratios (Figure 4.12).
For the short-duration portfolios, the results are mixed. With a one-month rebalancing period the strategy produces positive excess returns for all the portfolios except for the short-duration UK portfolio. The information ratios with a three-month and six-month rebalancing period are close to zero or negative. Extending the number of previous months included in the Fast MA results in negative information ratios for all short-duration portfolios.

Figure 4.11 Information ratios for long-duration portfolios – momentum strategy. [Two panels, smoothed approach: information ratio (−0.5 to 0.5) against rebalancing period (1, 3, 6, 12 months) and against fast MA length (MA-1m, MA-2m, MA-3m, MA-4m, MA-6m), for the US, EU and UK long-duration portfolios.]

4.4.6 Mixed strategy: Combining scoring and momentum


The scoring strategy previously described is informed mostly by variables
related to economic activity. We are looking to improve on that by combin-
ing the signal of the scoring strategy with the momentum strategy, which
is mostly driven by short-term trend dynamics. In this case the signal of
the strategy would be given by the sum of the standardized momentum
and scoring strategy such that conflicting signals of similar intensity would
cancel each other out and result in no deviation from the benchmark, while
similar signals would result in more extreme deviations compared to each
strategy individually.
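The combination rule can be sketched as the sum of the two standardised signals (our naming):

```python
import numpy as np

def mixed_signal(momentum, scoring):
    """Sum of standardised momentum and scoring signals.

    Opposing signals of similar strength cancel out (no duration deviation);
    aligned signals reinforce each other into a larger deviation.
    """
    def standardise(x):
        x = np.asarray(x, dtype=float)
        return (x - np.nanmean(x)) / np.nanstd(x)

    return standardise(momentum) + standardise(scoring)
```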


Figure 4.12 Five-year rolling information ratios for long-duration portfolios – momentum strategy. [Rolling IRs, January 1977–2007, for the US, EUR and UK long-duration portfolios.]

The informational content of the strategy is higher when the two signals
are combined, compared to both momentum and scoring strategies in iso-
lation. This is evidenced by the higher information ratios produced by the
strategy. The mixed strategy produces positive information ratios for the
long-duration portfolios in each of the three markets at all rebalancing fre-
quencies. For the short-duration portfolio the mixed strategy only works
well for the US when the rebalancing frequency is monthly.
The US short-duration portfolio based on the mixed strategy generates sig-
nificant risk-adjusted excess returns and positive rolling information ratios
(Figure 4.14). As in the case of the scoring strategy, here too we can observe
that the strategy performs better in more recent periods once the duration
of the MBS index becomes available, reinforcing the point that it represents
a useful market signal for the purpose of timing the market. The strategy
does not produce positive excess returns for short-duration portfolios in the
EUR and UK market, however. Extending the rebalancing period beyond
one month does not produce better information ratios and excess returns
for the short-duration portfolios.
Additional results and tables with respect to all the different strategies are
available from the authors upon request.

Figure 4.13 Information ratios for short- and long-duration portfolios – mixed strategy. [Two panels, smoothed approach: information ratio against rebalancing period (1, 3, 6, 12 months) for the US, EU and UK short-duration (top) and long-duration (bottom) portfolios.]

4.5 Conclusions

In this chapter we have considered four types of strategies to dynamically alter duration for both central banks and pension funds. We considered both a short-duration portfolio (representative of central bank reserves portfolios) and a long-duration portfolio (representative of pension portfolios).
We tested level-dependent strategies, regression-based strategies, scoring
strategies and crossover moving average strategies. The performance of each
of these strategies was evaluated against a constant maturity benchmark.
We used four different rebalancing frequencies: monthly, quarterly, semi-
annually and annually. At the beginning of each period a duration decision
is made and the decision is revisited at the end of each of the four differ-
ent rebalancing periods. We have also used two approaches for converting
the signal into a duration deviation: a laddered approach and a smooth
approach using a sigmoid function.
Figure 4.14 Five-year rolling information ratios for the US short-duration portfolio – mixed strategy. [Rolling IRs (−0.2 to 1.8), January 1977–2007.]

In general, we find weak evidence of mean reversion in interest rates. Therefore, strategies based on the concept of mean reversion do not perform well, especially over shorter periods. None of the strategies is successful in producing high information ratios and positive excess returns across all the different markets and duration spectrums. Some of the strategies produce positive information ratios but the results are not stable across time. In general, we conclude that central banks and pension funds might be better off keeping the duration of their portfolios relatively constant.

Notes
We would like to thank Alejandro Reveiz for helpful comments and suggestions.
1. We use German yields for the Eurozone before 1999.

Bibliography
Brandt, Michael W., Amit Goyal, Pedro Santa-Clara, and Jonathan R. Stroud (2005),
‘A Simulation Approach to Dynamic Portfolio Choice with an Application to
Learning about Return Predictability’, Review of Financial Studies, 18, 831–873.
Campbell, John Y. and Luis M. Viceira (2002), Strategic Asset Allocation: Portfolio Choice
for Long-term Investors, Oxford University Press.
Dickey, D.A. and Fuller, W. A. (1979), ‘Distribution of the Estimators for Autoregressive
Time Series with a Unit Root’, Journal of the American Statistical Association, 74,
427–431.
Kwiatkowski, Denis, Peter C.B. Phillips, Peter Schmidt and Yongcheol Shin (1992),
‘Testing the Null Hypothesis of Stationarity Against the Alternative of a Unit
Root: How Sure Are We That Economic Time Series Have a Unit Root?’, Journal of
Econometrics, 54, 159–178.


Merton, Robert C. (1971), ‘Optimum Consumption and Portfolio Rules in a


Continuous-time Model’, Journal of Economic Theory, 3, 373–413.
Nelson, C.R. and Siegel, A. F. (1987), ‘Parsimonious Modeling of Yield Curves’, Journal
of Business, 60(4), 473–489.
Samuelson, Paul A. (1969), ‘Lifetime Portfolio Selection by Dynamic Stochastic
Programming’, The Review of Economics and Statistics, 51(3), 239–246.

Part II
Portfolio Optimization Techniques

5
A Strategic Asset Allocation Methodology Using Variable Time Horizon

Paulo Maurício F. de Cacella, Isabela Ribeiro Damaso and Antônio Francisco da Silva Jr.

5.1 Introduction

Strategic asset allocation is usually the single most important decision that
determines the total return performance of a portfolio. This decision can
be made using a plethora of distinct methodologies that are well known in
the literature and among practitioners. The typical approach uses a stand-
ard Markowitz model or a similar one based on a double objective quad-
ratic optimization. The optimal portfolio can be obtained by defining risk
preferences with intended restrictions, asset return and risk expectations
and investment time horizon. Sometimes, additional tools like stress testing
are used to extend the restriction framework. However, only a small set of
investor’s preferences can be represented with standard models. As prefer-
ences tend to be multiple and even conflicting, a more complex framework
is necessary to reflect them properly.
To address the problem, we propose an approach that finds optimal
portfolios, with specific time horizons, for investors that want to min-
imize their costs from the efficient frontier if they exit from the strategy
sooner than expected. In practice, this is a variable time horizon choice.
The model also allows investors to include additional objectives to be
minimized.
The basic idea presented is to consider multi-objective optimization as a
tool to solve the problem. An evolutionary algorithm approach is used to
find a set of viable portfolios that maximizes expected return while min-
imizing exit or any other scenario costs along the frontiers at a specific risk
level. The proposed method can accept any portfolio selection model as
input and any number of objectives in the objective function.
We begin by discussing weaknesses of traditional strategic asset allocation
methodologies. Then we introduce a model for optimal strategic asset allo-
cation with a variable time horizon.


5.2 Weaknesses of strategic asset allocation approaches

The modern world of strategic asset allocation begins with the seminal work
of Markowitz (1952) on portfolio selection. It considers investor expected
behaviour of future prices and quadratic optimization techniques. It also
considers return as the expected return, i.e. mean, and risk as variance of
this expected return (Markowitz 1959). With this setup, the investor’s dou-

ble objective of maximizing return while minimizing risks is the trade-off
that the model intends to solve. This model is widely accepted by practition-
ers as shown by Perold (1984) and is used in several distinct areas including
indexing, active management and asset allocation.
Historically, some simplifications have been made in order to reduce computational requirements. These have led to single and multiple index models (Sharpe 1963; Cohen and Pogue 1967). A natural addition to the model
is to consider a third factor in return distributions – skewness, the third central moment. However, two problems arise: asset skewness estimation and non-convex optimization. Several authors also point out the limitations of mean–variance analysis when the utility function is not quadratic and return distributions are not normal (Samuelson 1970; Arditti and Levy 1975; Konno and Suzuki 1995; Harvey et al. 2004). To solve the problems of
investors with income from labor, and other sources of perturbations result-
ing in a consumption–investment trade-off, several authors have defined a
multi-period mean–variance approach (Samuelson 1969; Fama 1970,1976;
Hakansson 1971; Stapleton and Subrahmanyan 1978).
Event risks also pose a problem that can cause investors’ behaviour to
change in a more cautious way (Liu et al. 2003). Adaptations of the Markowitz
model for an uncertain time horizon have showed that investors facing this
problem would, in general, make sub-optimal allocation decisions resulting
in significant loss (Martellini and Urosevic 2005). Subsequently, multi-ob-
jective optimization techniques are being used as a tool for portfolio selec-
tion. Some of them are evolutionary algorithm-based (Lin and Gen 2007).
Currently available models for portfolio selection have well known prob-
lems in practice. Usually, investors adapt their risk preferences to the model
limitations. This results in sub-optimal portfolio selection as standard mod-
els accept a very limited set of preferences.

5.2.1 Optimization problems


The standard Markowitz approach has several practical problems for practi-
tioners. Among them we emphasize the dominant nature of the framework
versus the stochastic behaviour of risk and return forecasts. This charac-
teristic leads to the selection of a specific asset, with some risk level and an
expected return, discarding another asset with similar risk level but a slightly
lower expected return. This dominance frequently results in a very concen-
trated and unreasonable portfolio. Some more practical adaptations that


consider uncertainty in forecasts, like re-sampling models, try to overcome


these problems. But they are, in fact, double objective constrained models,
and no matter how we try to capture better expectations in the mean–variance framework or even extended frameworks (Harvey et al. 2004), in the end we are usually restricted to these double-objective optimizations.
However, investors’ objectives are much broader than a simple double
objective Markowitz model. They include other aspects like exit costs, stress scenarios and multiple time horizons. In this more complex objective set, it is likely that a wealth maximizer based on mean–variance or similar models will not be efficient with regard to other objectives. In fact, a solution that is slightly worse relative to the efficient frontier may be much better considering investors’ other objectives.

5.2.2 Forecasts
Portfolio selection is basically dependent on asset risks and returns, fore-
casts and investors’ set of risk preferences. Three approaches are frequently
used to build forecasts: they can be based on past data, on future expectations implied in current market prices, or on investor-defined scenarios. No matter which method is used, optimal portfolios can
be wildly different for the same set of preferences, depending on the past
time window used for forecasting or the weight averaging of implied data.
In that way, every asset allocation model based on forecasts is, at most, as
good as the forecasts themselves. However, it is important to notice that
multi-objective optimization can consider several scenarios at once, allow-
ing a better translation of investors’ preferences and better performance by
the selected portfolio over all scenarios.

5.2.3 Market and credit risk integration


Market and credit risk integration is a very complex task, as they do not
share the same distribution over time. Credit risk inferred from derivative instruments like Credit Default Swaps (CDS) or from spreads over a credit
risk-free bond is very difficult to separate from other market aspects like
liquidity risk, barrier costs and other sources of risk or imperfect markets.
More challenging than credit and market risk calculations is to define a
decision framework where investors can clearly state risk preferences in a
unique norm, where you can switch k credit risk units for j market risk ones.
Risk integration in strategic asset allocation requires an absolute evaluation
of risk and not a relative one. In this case, the time horizon chosen may
have a huge impact on strategic preference decisions.

5.2.4 The impact of the time horizon


Time horizon is such an important input to asset allocation that it affects not only expected returns and covariance matrices but, more importantly, investors' risk preferences. This is meant not in the sense of the long-discussed 'time diversification', but rather in the sense that the time horizon is usually a debatable decision, and portfolio behaviour before the target time horizon is very important as investors ponder possible exit strategies or scenarios.

10.1057/9780230251298 - Interest Rate Models, Asset Allocation and Quantitative Techniques for Central Banks and Sovereign Wealth
Funds, Edited by Arjan B. Berkelaar, Joachim Coche and Ken Nyholm

96 Paulo Maurício F. de Cacella et al.

5.2.5 Event risk


The vast majority of models used by institutional investors do not explicitly consider an event risk factor in the optimization calculations. As a consequence, they end up selecting portfolios that are more prone to riskier positions. This is an important problem that needs to be addressed, given its possible impact on investors' balance sheets. Distressed times in financial markets are likely to cause a corresponding collapse in the correlation structure and, as diversification benefits fade, the portfolio can experience huge losses. Special care must be taken with leveraged positions in negatively correlated assets. Investors usually test selected portfolios using historical data or scenarios; however, the test outputs are ad hoc and cannot be used in a standard mean–variance approach.

5.3 A model for variable time horizon strategic asset allocation

In this section, we present a model developed to overcome some of the restrictions discussed above. Additional decision variables are introduced and, to solve the multi-objective optimization problem, an evolutionary algorithm is used. The proposed model supports any portfolio selection methodology, distinct utility functions and multiple investor objectives. The framework does not solve the problems of incorrect forecasting or of poor portfolio selection models; it only allows a better translation of risk preferences into real portfolios, as it can capture all of the investors' risk preferences at once.

5.3.1 Set of objectives


Expanding the set of objectives is not an easy task. Frequently, objectives compete or conflict with one another. Figure 5.1 shows some conflicting objectives and other preferences that can be defined when setting the optimization parameters.

Figure 5.1 Set of objectives in a multi-objective optimization (inputs to the optimization: different time horizons; different expected return and covariance structures for each time horizon; different models for each frontier; exit strategies; stress scenarios; market and credit risk; investor restrictions)

A Strategic Asset Allocation Methodology 97
Let us discuss some objectives and preferences, and how the proposed
model deals with them.

5.3.1.1 Different time horizons


A time horizon is not a single number defined by investors. It is a broader concept in the sense that it comprises uncertain exit times, and investors can even attribute weights across time horizons. Investors may choose to have a preferred time horizon or not.

5.3.1.2 Different forecasts for each time horizon


As each efficient frontier for a specific time horizon is built separately, it is
possible to use different risk/return forecasts for each time horizon. Forward
curves, projections based on historical data and economic cycles may also
be used to price assets differently across time.

5.3.1.3 Different models for each time horizon


Here, we use a standard unconstrained Markowitz approach in our examples, unless otherwise stated. It is possible, however, to use any other model, such as Michaud (1998) or Black and Litterman (2001). The multi-objective optimization model does not depend on the portfolio selection model.

5.3.1.4 Exit strategies


As a new strategy may be in place well before the time horizon is reached, a common concern among investors is how the selected portfolio will behave until then. In this case, investors expect the portfolio to suffer a smaller loss relative to the efficient frontier associated with the exit time. Multi-objective models search for portfolios that are well-behaved across several frontiers while maximizing returns for a given risk level.

5.3.1.5 Stress scenarios and event risk


Another very common objective among investors is to guarantee that the selected portfolio will be optimal, with respect to losses, in some specific scenarios. There are standard scenarios based on past events as well as synthetic ones; some of them are extreme events.

When defining investor risk preferences, it is important to note that market disruptions that lead to dramatic changes in volatility and, frequently, to losses must be considered a priori in portfolio optimizations. Note also that exit strategies may trigger a rebalancing, but they do not protect the investor from losses.

The model can consider any number of scenarios to be optimized.


5.3.2 Multi-objective optimization


Due to the complex nature of multi-objective optimization, we opted for a heuristic stochastic method based on evolutionary algorithms. The optimization framework accepts any set of objectives, as long as it is possible to attribute a definite value for each objective to each portfolio. The objective function to be minimized is an operator that combines all of the separate objectives in a single equation; this objective function can be linear or non-linear. The framework also accepts constraints.
Finding a solution to a multi-objective optimization requires solving a non-linear vector optimization problem, and the solution to such a problem is usually not unique. Feasible solutions that satisfy all constraints, but cannot be improved further with regard to one criterion without compromising another, are called Pareto optimal. The set of Pareto optimal solutions forms the Pareto front, and every solution on the Pareto front is a feasible solution of the multi-objective optimization problem.
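To make the Pareto relations concrete, a minimal sketch in Python (the function names are ours, not the chapter's; objectives are treated as costs to be minimized):

```python
def dominates(a, b):
    """True if cost vector a is no worse than b in every objective
    and strictly better in at least one (smaller = better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Feasible solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]
```

For cost vectors such as (exit cost, scenario loss), `pareto_front([(1, 5), (2, 2), (5, 1), (3, 3)])` drops only (3, 3), which is dominated by (2, 2); the other three points trade off one objective against the other.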
There are several methods based on evolutionary approaches (Lee and El-Sharkawi 2008; Liu et al. 2001) that can be used to solve this kind of problem: aggregate objective functions (AOF), multi-objective optimization evolutionary algorithms (MOEA), normal constraint (NC), normal boundary intersection (NBI) and others.
Two problems must be solved in order to calculate a multi-objective optimization. The first is to establish an objective function that builds the design space. The second is to search this space in order to locate acceptable solutions that are expected to be global optima in terms of the decision variables.
An aggregated objective function is minimized by stochastic evolutionary modelling. The approach uses three operators: it mutates the original portfolio weights according to a user-defined mutation rate, it applies a special mutation tailor-made for portfolio optimization, and it uses a crossover operator. With these operators, the model may replace an original portfolio with a fitter one. The starting population is defined as N portfolios calculated on each separate frontier (time horizon). As traditional genetic models are not directly applicable to portfolio allocation problems, a more suitable solution based on evolutionary algorithms is proposed. A stochastic model is used because this kind of problem may have several local minima, so a global search algorithm, capable of finding the global minimum in the design space, is needed.
5.3.3 Details on the proposed discrete time horizon model
Pareto optimality is a concept that resolves the trade-off between a given set of mutually contradictory objectives. A solution is Pareto optimal when it is not possible to improve one objective without deteriorating at least one of the others. It is important to notice, however, that a limitation of Pareto optimality is its localization: a Pareto improvement does not necessarily define a global optimum. Using a global search algorithm, however, we can find the global optimum in the limit. Figure 5.2 shows the procedures used to calculate the Pareto front.

Figure 5.2 Discrete optimization model steps (starting from investors' risk preferences: an optimization model calculates one frontier for each time position from expected returns/covariances and a chosen model, e.g. Markowitz or Michaud; the starting population is based on the portfolios on each frontier; an objective function is defined from the multiple objectives, e.g. minimizing exit costs and scenario losses while maximizing returns; the objectives are weighted and the genetic parameters for the run are set, e.g. mutation, crossover and crowding rates and the number of generations; finally the Pareto front is calculated)
In the proposed method, the Pareto front is calculated considering, at a minimum, the cost function represented by the risk-adjusted distance from a portfolio to an optimal portfolio at the same risk level for each time horizon. The modelling begins by building several efficient frontiers, one for each desired time horizon. These frontiers are built from the expected returns and covariance matrix of the asset classes and from a portfolio selection model. After calculating N portfolios on each frontier, these points are used to perform a non-linear least-squares fit of the frontier, using any function as the model. Although a cubic polynomial was used in the examples, several exponential combinations were tested as alternatives. The objective is to allow a fast calculation of the risk-adjusted distance from any portfolio to a specific efficient frontier.
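This step can be sketched as follows, assuming NumPy and the cubic polynomial fit used in the examples (function names are ours):

```python
import numpy as np

def fit_frontier(risks, returns, degree=3):
    """Least-squares polynomial fit of a frontier's (risk, return) points."""
    return np.poly1d(np.polyfit(risks, returns, degree))

def frontier_distance(frontier, risk, ret):
    """Return shortfall of a portfolio versus the fitted frontier at the
    same risk level; roughly zero for portfolios on the frontier, and
    occasionally slightly negative because of fitting error."""
    return float(frontier(risk) - ret)
```

Evaluating the fitted polynomial is much cheaper than re-running the optimizer, which is what makes the multi-frontier objective fast to compute inside the evolutionary loop.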
The next step is to define the optimization objectives and the relative weight of each objective. In our simulations, equal objective weights are used, unless otherwise stated.
Now we are ready to start the evolutionary algorithm. The additional evolutionary run parameters to define are:

● the number of generations to run the model;
● the mutation rate, which specifies the stochastic change in composition for each offspring (we used a uniform distribution applied to each asset, followed by weight normalization);
● the crossover operator rate;
● the crowding operator.

The model runs with N portfolios that we call parent portfolios, generating offspring based on genetic rules. If an offspring better minimizes the objective function, it replaces its parent in the solution set. This runs for the desired number of generations. The expected result is a view of the Pareto front, with portfolios characterized as possible solutions to the problem.

The model is not affected by premature convergence, as elitism is enforced by replacing the population in such a way that an offspring and its parent are never simultaneously allowed into the next generation. The examples in this chapter concentrate on a preferred time horizon defined by the investor, with shorter time horizons used to calculate exit and/or scenario costs.
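The parent/offspring replacement scheme can be sketched as follows (a simplified illustration under our naming; the full model uses the three operators and the multi-frontier objective described in the next subsection):

```python
import random

def evolve(parents, objective, make_offspring, generations=200):
    """Elitist scheme: each parent produces one offspring per generation
    and is replaced only if the offspring scores lower on the aggregated
    objective, so a parent and its child never coexist."""
    population = list(parents)
    for _ in range(generations):
        for i, parent in enumerate(population):
            child = make_offspring(parent)
            if objective(child) < objective(parent):
                population[i] = child
    return population
```

Because only improvements are accepted, each population slot improves monotonically; with a perturbation-style `make_offspring`, the scheme behaves like N parallel local searches seeded from the frontier portfolios.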

5.3.3.1 Details on the evolutionary algorithm and global searching


To solve the multi-objective optimization, we used an evolutionary algorithm with two mutation types and a crossover operator. The first mutation operator generates offspring based on a mutation rate that affects the weight of each asset in the portfolio. This operator forces a diversification bias but works in the vicinity of the portfolio; its strength is searching for a local minimum. The second mutation operator exchanges the allocation of the asset with the highest weight in the portfolio with that of the asset with the lowest risk difference. This operator tries to find other local minima in the search space.

The crossover operator tries to find local minima across the whole search space. It is calculated by combining two portfolios selected at random, using a crossover rate and weighting the parents to generate an offspring. The starting population is defined as the frontier portfolios for each time horizon. The evolutionary algorithm optionally has a crowding operator that scatters portfolios more uniformly with respect to risk; it uses a defined risk error band to ensure that the final portfolio stays in the vicinity of a desired risk level.
We can set the probability of the crossover and mutation operators. In
our model, each parent portfolio gives one offspring that is tested against
an objective function. If the offspring is better, it replaces its parent in the
next generation.
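The first mutation operator and the crossover operator might be sketched as follows (an illustrative simplification assuming long-only weights renormalized to sum to one; the asset-exchange mutation and the crowding operator are omitted):

```python
import random

def mutate(weights, rate=0.05):
    """First mutation operator: uniform perturbation of each asset's
    weight, floored at zero and renormalized to sum to one."""
    w = [max(0.0, x + random.uniform(-rate, rate)) for x in weights]
    total = sum(w)
    return [x / total for x in w]

def crossover(a, b, alpha=0.5):
    """Crossover operator: weighted blend of two randomly selected
    parent portfolios, renormalized."""
    w = [alpha * x + (1.0 - alpha) * y for x, y in zip(a, b)]
    total = sum(w)
    return [x / total for x in w]
```

Renormalization after each operation keeps every offspring a valid fully-invested portfolio, which is why, as noted above, standard genetic operators need this kind of tailoring for allocation problems.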

5.3.3.2 Intuition about the solution


The problem can be understood as a utility maximization over three simultaneous points in time. The utility function is defined by the objective function setting, and the method is not limited to quadratic utility. The Pareto front does not depend on any information other than the investors' preferences and expectations.


The proposed problem is not treatable with standard Monte Carlo techniques: solving the efficient frontiers by brute force is not feasible in terms of computational requirements. The only viable path is an oriented stochastic approach based on some characteristics of the problem.

We started the population from portfolios located on the efficient frontiers because we knew that the desired solution should be in their vicinity. The evolutionary approach creates a path of local minima and jumps to other minima that are reachable only if the objective function is minimized at each step. In that way we stay near the efficient frontier while allowing the other objectives to be minimized.
5.3.3.3 Information about modelling
Limitations and drawbacks present in the modelling structure used in practical problems are discussed next. Some of these limitations allow much faster computation of each generation. The first limitation is that the model does not use the real efficient frontier to calculate portfolio distances, but rather a function fitted to points of real portfolios located on the frontier. We have used several functions, from exponential to polynomial ones, and they are available in the software. However, as the optimization is calculated using the distance of the expected return from the fitted function and not from the real frontier, this introduces errors into the objective function minimization. Although these errors are negligible in our examples, they must be taken into account depending on the quality of the fit. Sometimes, this cost can even be slightly negative.
For the sake of simplicity, standard unrestricted Markowitz optimization was used in the problems. The method and the software developed, however, straightforwardly allow the use of other models and restrictions.

The method we developed is well suited to parallel computation. Computing time is generally short – around five minutes on a modern computer for a study of nine asset classes over a run of 200 generations, using 30 portfolios on each frontier.
5.3.3.4 Basic data used in problems
Data available: Jan 1999 up to Sep 2007 for Gold (XAU), SP500, the 3-month USD Treasury bill, 1–3Yr USD Treasury bonds, 1–5Yr USD Treasury bonds, 1–7Yr USD Treasury bonds, 1–3Yr EUR bonds, 1–5Yr EUR bonds and 1–7Yr EUR bonds. The indexes used are from Lehman Brothers.
Expected return model: average returns over a historical time window that doubles the time horizon for each frontier. Returns in figures and tables are shown on an annual basis.
Covariance model: covariance matrix over a historical time window that doubles the time horizon for each frontier. Risks in figures and tables are shown on a monthly basis.
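Under our reading of these two models, the per-horizon forecasts could be computed as follows (a sketch; `returns` is a T × n matrix of monthly asset returns):

```python
import numpy as np

def horizon_forecasts(returns, horizon_months):
    """Mean vector and covariance matrix estimated over a trailing
    window twice as long as the time horizon, as described above."""
    window = returns[-2 * horizon_months:]
    return window.mean(axis=0), np.cov(window, rowvar=False)
```

A one-year frontier would thus be built from the last 24 months of data, a two-year frontier from the last 48 months, and so on, so each frontier embeds a different forecast.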


Number of frontiers: 3
Number of portfolios in the frontier: 30
Number of generations: 200
Weight on standard deviation/mean: 0.5
Mutation slow rate: 0.05
Mutation asset exchange rate: 0.05
Crossover rate: 0.05
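For reference, the run parameters above can be gathered into a single structure (the field names are ours):

```python
from dataclasses import dataclass

@dataclass
class RunConfig:
    n_frontiers: int = 3                 # time horizons of one, two and three years
    portfolios_per_frontier: int = 30
    generations: int = 200
    weight_std_vs_mean: float = 0.5      # weight on std dev versus mean of distances
    mutation_slow_rate: float = 0.05
    mutation_exchange_rate: float = 0.05
    crossover_rate: float = 0.05
```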

5.3.3.5 Example 1 – Find optimal portfolios in a three-step time horizon with
minimal mean and variance of expected return losses in each frontier, considering
exit costs (variable time horizon).
This example has the objective of finding a set of optimal portfolios that simultaneously minimizes the distance from each frontier, defined for each time horizon, and the variance of this distance; both quantities enter the objective function. The resulting Pareto front is the optimal choice for investors with an undefined time horizon, as it minimizes exit costs on each frontier, allowing the portfolio to be quasi-optimal in every case.
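With the 0.5 weight on standard deviation versus mean listed in the run parameters, the aggregated objective for this example might look as follows (our sketch; `distances` holds a portfolio's risk-adjusted distances to the one-, two- and three-year frontiers):

```python
import statistics

def aggregate_objective(distances, w_mean=0.5, w_std=0.5):
    """Penalize both the average exit cost across frontiers and its
    dispersion, so optimal portfolios stay near every frontier at once."""
    return w_mean * statistics.mean(distances) + w_std * statistics.pstdev(distances)
```

Under this objective, a portfolio with equal 1.2% distances to all three frontiers scores better than one whose distances swing between 1.0% and 3.2%, even at a similar average cost.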
The problem can be understood as a way to find the best portfolios, given a defined time horizon, for an investor who wants to minimize costs from the efficient frontier if he exits from the strategy sooner than expected. In practice, we can suppose that this is a variable time horizon choice.

Figure 5.3 Full range original efficient frontiers – example 1 (return %, annual, versus risk %, monthly, for the 1yr, 2yr and 3yr frontiers)

Figure 5.4 Full range optimized frontiers – example 1 (return %, annual, versus risk %, monthly, for the 1yr, 2yr and 3yr frontiers)
Figure 5.3 presents the three efficient frontiers, one for each time horizon (in this case one, two and three years), the fitted functions, and the scattering of each portfolio when calculated on the other frontiers. As we can see, there is a loss that varies with the risk level and across frontiers. We can also observe that portfolios obviously lie on the frontiers for their respective time horizons, but are far from the frontiers when calculated at different time horizons.

Figure 5.4 shows the Pareto front view, represented by the projections of each portfolio at each time horizon. Portfolios on the Pareto front are much more stable in terms of exit costs than the original portfolios, which makes them a better choice for investors with time horizon uncertainty. Figures 5.5 and 5.6 show details of the lower-risk portions of the frontiers; the effect of the optimization on exit costs is very clear.
In terms of genetic dynamics, the results, which can be seen at http://finance.tachyonweb.net, show that, for each portfolio, the number of successful offspring after mutation varies between five and 40, between zero and five in the case of exchanges, and between zero and two in the case of crossovers. The total number of generations was 200, and very few portfolios were already near the Pareto front. The number of successful offspring is related to the distance of the original portfolios from the optimal portfolios.

Figure 5.5 Detail of original efficient frontiers – example 1 (return % versus risk % for the 1yr, 2yr and 3yr frontiers at lower risk levels)

Figure 5.6 Detail on optimized frontiers – example 1 (return % versus risk % for the 1yr, 2yr and 3yr frontiers at lower risk levels)
Complete optimization and allocation data are available in tables 4 to 19 at http://finance.tachyonweb.net. Table 4 shows the allocation for each initial portfolio on each frontier. We have 30 portfolios on each frontier, for a total of 90. These allocations were calculated with a standard unrestricted Markowitz approach. Table 5 shows the allocations after the optimization run. We can see that the optimized portfolios are more diversified than the original ones in terms of the number of asset classes used. This gives us a richer set of choices than a Markowitz optimization with a single time horizon.

Table 6 presents details of the initial portfolios: the expected risk and return, the distance from each frontier, and the standard deviation and mean of these distances. The distance is not zero even for portfolios evaluated on the frontier for their own time horizon, because we use a fitted frontier rather than the real one to calculate distances; the error is less than 0.2% in the vast majority of cases.

Table 7 presents the same data for the optimized portfolios. As expected, the standard deviations are much lower than in the original set. This guarantees that the cost, represented by the mean, is almost the same for any time horizon.
To make the benefits of this approach clearer, we selected one portfolio to compare across the examples. The selected portfolio, number 70, has an original time horizon of three years and is located on the respective efficient frontier. Calculating the exit costs for the case in which investors decide to exit from their strategy earlier than the time horizon, we find that they are expected to lose 2.49% if they exit in the first year and 3.17% if they exit in the second year. For the same portfolio after optimization, however, the loss is 1.21% for exiting in the first year and 1.20% for exiting in the second year. If they exit at the desired time horizon of three years, investors lose 1.03% at the same risk level for both the original and the optimized portfolios. Comparing the allocations of the two portfolios, we can also see that the optimized portfolio is more diversified with respect to the number of asset classes used.
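As a quick arithmetic check on the portfolio-70 figures quoted above (the 1.03% three-year cost applies to both portfolios):

```python
import statistics

# Exit costs (%) in years one, two and three for portfolio 70.
original = [2.49, 3.17, 1.03]
optimized = [1.21, 1.20, 1.03]

# The optimized portfolio has both a lower mean exit cost and a far
# smaller spread across the possible exit dates.
assert statistics.mean(optimized) < statistics.mean(original)
assert statistics.pstdev(optimized) < statistics.pstdev(original)
```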
Table 5.1 compares a standard Markowitz optimal portfolio with a three-year time horizon and its optimized counterpart. The optimized portfolio is not only better in terms of the objective function, but is also more diversified with respect to the number of asset classes.

Table 5.1 Allocations for portfolio number 70 (%)

              XAU   SP500  TBill 3M  UST1-3Y  UST1-5Y  UST1-7Y  EURT1-3Y  EURT1-5Y  EURT1-7Y
Original    15.08    0.00      7.30    35.42     0.00     0.00      0.00      0.00     42.20
Optimized   10.22   24.97      8.36    15.31     0.00     3.17      2.87      0.42     34.69

5.3.3.6 Example 2 – Find optimal portfolios given a three-step time horizon and exit strategies with additional objectives based on scenarios
This example is an extension of Example 1 in which the objective function is defined not only by the mean and variance of the distances to the frontiers in each time horizon, but also by the exit costs calculated for a scenario in each frontier. The method allows the use of multiple scenarios simultaneously.

5.3.3.6.1 Example 2, Scenario 1 – bull market
The first scenario is a standard bull market in which USD interest rates rise, producing a two-standard-deviation loss event based on historical data, while stock and commodity returns rise by two standard deviations. This scenario was applied to each frontier calculation.
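Such a shock can be sketched as a shift of each asset's expected return by a number of its own standard deviations (the labels and volatilities below are illustrative, not taken from the chapter's data):

```python
def apply_scenario(expected, vols, shocks):
    """Shift each asset's expected return by a given number of its own
    standard deviations (positive = gain, negative = loss)."""
    return {a: expected[a] + shocks.get(a, 0.0) * vols[a] for a in expected}
```

The flight-to-quality scenario of Section 5.3.3.6.2 would simply flip the signs of the equity and commodity shocks and the bond shock.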
In the first simulation, we considered the scenario applied to the first year, with the second and third years having no modification in terms of the original return expectations. Figure 5.7 shows the three frontiers and the optimized portfolios. We can see that the optimized portfolios have a profile very similar to those in Example 1 (Figure 5.4). The genetic behaviour was also similar to that observed in Example 1, and can be seen at http://finance.tachyonweb.net.
To compare the costs and stability of the initial and optimized portfolios, we can examine the data in tables 9 and 10 (also available at http://finance.tachyonweb.net). Portfolio number 70, the same portfolio described in Example 1, which was defined by a standard Markowitz optimization with a three-year time horizon, has exit costs of 2.49% and 3.17% for the first and second years. With optimization using the bull market scenario in the first year, the selected portfolio has an exit cost of 1.23% in the first year, 1.24% in the second year and 1.23% in the third year. This leads to a better than average cost and standard deviation for the optimized portfolio. When considering the scenario's cost benefits, the optimized portfolio has a positive result of 2.48% in the first year, while the original portfolio has a result of 2.25%. Therefore, the optimized portfolio performs better both on exit costs and on the exit scenario in the first year.
Additionally, we generated a scenario with the losses and gains extending through all three years. In this case, the selected portfolio, number 70, has an exit cost of 0.83% in the first year, 1.46% in the second year and 1.31% in the third year. This leads to a better than average cost and standard deviation for the optimized portfolio. When considering the scenario's cost benefits, the optimized portfolio has a positive result of 3.38% in the first year, 3.03% in the second year and 3.74% in the third year, while the original portfolio has a result of 2.25% in the first year, 1.85% in the second and 1.90% in the third. Thus, the optimized portfolio performs better on exit costs and on the exit scenario in every year.

Table 5.2 Allocations for portfolio number 70 (%)

                   XAU   SP500  TBill 3M  UST1-3Y  UST1-5Y  UST1-7Y  EURT1-3Y  EURT1-5Y  EURT1-7Y
Original         15.08    0.00      7.30    35.42     0.00     0.00      0.00      0.00     42.20
Optimized E1     10.22   24.97      8.36    15.31     0.00     3.17      2.87      0.42     34.69
Optimized E2S1    9.98   25.40      1.77    14.68     4.79     5.84      3.48      5.01     29.06
Optimized E2S1B  16.03   21.38     22.67    16.11     1.80     0.12      1.87      0.00     20.02

Figure 5.7 Optimized frontier – example 2 scenario 1A (return % versus risk % for the 1yr, 2yr and 3yr frontiers)

Table 5.2 compares the original Markowitz portfolio to the optimized Example 1 portfolio and to the optimized Example 2 Scenario 1 portfolios, with a one-year and a three-year impact respectively. Both optimized portfolios are better diversified in terms of the number of asset classes present.

5.3.3.6.2 Example 2, Scenario 2 – flight to quality
The second scenario is a flight to quality in which USD interest rates fall by two standard deviations, based on historical data; commodity and stock returns also fall by two standard deviations. This scenario is, in some ways, the opposite of the previous one, and will show that we can still obtain a better portfolio selection under the current proposal.
In the first simulation, we considered the scenario applied to the first year, with the second and third years having no modification in terms of the original return expectations. Figure 5.8 presents the three frontiers and the optimized portfolios. We can see that the optimized portfolios have a slightly different profile than in Example 1 and in Example 2 Scenario 1, as seen in Figures 5.4 and 5.7.
To compare costs and stability of initial and optimized portfolios, we can
observe the data, on table 15 and table 16. These tables can be seen at http://
finance.tachyonweb.net. Portfolio number 70, defined in a standard Markowitz
optimization with a three-year time horizon, as detailed in Example 1, has exit
costs of 2.49% and 3.17% for the first and second years. With optimization and
using a bull market scenario, the selected portfolio has an exit cost of 1.69% in
the first year, 1.40% in the second year and 1.47% in the third year. This leads
to a better average cost and standard deviation for the optimized portfolio.
When considering the scenario's cost benefits, the optimized portfolio
has a negative result of 1.72% in the first year, while the original portfolio has
a negative result of 2.25%. Therefore, the optimized portfolio performed better
both on exit costs and on the exit scenario in the first year.
Additionally, we generated a scenario with losses and gains extending
through all three years. In this case, the selected portfolio has an exit cost
of 1.87% in the first year, 1.49% in the second year and 1.43% in the third
year. This leads to an average cost and standard deviation that are better
than those of the original portfolio. When considering the scenario’s cost
benefits, the optimized portfolio has a negative result of 1.62% in first year,
1.56% in the second year and 2.30% in the third year, while the original
portfolio has a negative result of 2.25% in the first year, 1.85% in the second
and 1.90% in the third. Thus, the optimized portfolio performs better on
exit costs and on the exit scenario in every year.
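The exit-cost comparisons above can be illustrated with a small sketch. The weights and per-asset costs below are made up, not the chapter's actual data, and modelling exit cost as turnover times a per-asset transaction cost is an assumption; the chapter does not spell out its cost model.

```python
import numpy as np

def exit_cost(w_from, w_to, unit_cost):
    """Exit cost of moving between two allocations, modelled here as
    turnover times a per-asset transaction cost (an assumption)."""
    w_from, w_to = np.asarray(w_from), np.asarray(w_to)
    return float(np.sum(np.abs(w_to - w_from) * np.asarray(unit_cost)))

# Illustrative three-asset example (weights and costs are made up):
w_y0 = np.array([0.40, 0.35, 0.25])     # allocation held in year 0
w_y1 = np.array([0.30, 0.30, 0.40])     # target allocation after the scenario
cost = np.array([0.010, 0.015, 0.020])  # cost per unit of notional traded

print(f"exit cost: {exit_cost(w_y0, w_y1, cost):.4%}")
```

Comparing this number across candidate portfolios and years is what drives the selections reported above.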
Table 5.3 compares the original Markowitz portfolio to optimized Example 1,
optimized Example 2 Scenarios 1A and 1B and optimized Example 2 Scenarios 2A and 2B. All optimized portfolios have better diversification in terms of
the number of asset classes present. All portfolios have the same risk level at
the three-year time horizon.

5.4 Conclusion

The single most important task for institutional investors is to select port-
folios that comply with their expectations about future performance and
with their risk preferences. Currently available methodologies allow the use
of only a subset of investor preferences. Usually, preferences are translated
into restrictions on asset classes or on expected losses based on historical


data. This necessarily leads to sub-optimal portfolio selection, as several preference trade-offs are not available in the optimization process.
We have proposed a framework based on multi-objective optimization
techniques that allow a more complete representation of investor prefer-
ences and trade-offs. We have shown through examples that the proposed

Table 5.3 Allocations for portfolio 70 (%)

                     XAU   SP500    TB3M  UST1-3Y  UST1-5Y  UST1-7Y  EURT1-3Y  EURT1-5Y  EURT1-7Y
Original           15.08    0.00    7.30    35.42     0.00     0.00      0.00      0.00     42.20
Optimized E1       10.22   24.97    8.36    15.31     0.00     3.17      2.87      0.42     34.69
Optimized E2S1      9.98   25.40    1.77    14.68     4.79     5.84      3.48      5.01     29.06
Optimized E2S1B    16.03   21.38   22.67    16.11     1.80     0.12      1.87      0.00     20.02
Optimized E2S2      5.04   26.41    0.00     0.00     3.39    15.43     19.75     11.72     18.27
Optimized E2S2B     4.60   25.49    0.00     0.00     9.37     9.06     13.61     11.00     26.87

[Figure: optimized portfolios plotted over the 1-, 2- and 3-year efficient frontiers; axes Return % vs Risk %]

Figure 5.8 Optimized frontier – example 2 scenario 2A


model generates better portfolios than the ones generated by a single time
horizon reference, as it considers exit and scenario costs. The method is
generic and can be used with any set of investors’ preferences. We can fore-
see several applications in credit-market risk integration and behavioural
finance, as it is well suited for multiple time horizons and for investors’
complex preferences.

Copyright material from www.palgraveconnect.com - licensed to Taiwan eBook Consortium - PalgraveConnect - 2011-03-03
Note
The views expressed in this work are those of the authors and do not reflect those of
the Banco Central do Brasil or its members.

Bibliography
Arditti, F.D. and Levy, H. (1975). ‘Portfolio Efficiency Analysis in Three Moments:
The Multi-Period Case’. Journal of Finance, 30, 797–809.
Black, F. and Litterman, R. (1992). ‘Global Portfolio Optimization’. Financial Analysts
Journal, Sept., 28–43.
Cohen, K.J. and Pogue, J.A. (1967). ‘An Empirical Evaluation of Alternative Portfolio
Selection Models’. Journal of Business, Apr., 166–193.
Fama, E.F. (1976). ‘Multiperiod Consumption–Investment Decisions: A Correction’.
American Economic Review, 66(4), 723–724.
Fama, E.F. (1970). ‘Multiperiod Consumption–Investment Decisions’. American
Economic Review, 60(1), 163–174.
Hakansson, N.H. (1971). ‘Multi-period Mean–Variance Analysis: Toward a General
Theory of Portfolio Choice’. Journal of Finance, 26, 857–884.
Harvey, C.R., Liechty, J., Liechty, M.W. and Mueller, P. (2004). Portfolio Selection with
Higher Moments. (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=634141).
Konno, H. and Suzuki, K. (1995). ‘A Mean–Variance–Skewness Optimization Model’.
Journal of Operations Research Society of Japan, 38, 137–187.
Lee, Y.K. and El-Sharkawi, M.A. (2008). Modern Heuristic Optimization Techniques.
IEEE Wiley-InterScience.
Lin, C.M. and Gen, M. (2007). ‘An Effective Decision-Based Genetic Algorithm
Approach to Multiobjective Portfolio Optimization Problem’. Applied Mathematical
Sciences, 1(5), 201–210.
Liu, G.P., Yang, J.B. and Whidborne, J.F. (2001). Multi-Objective Optimization and
Control. Research Studies Press, Baldock, England.
Liu, J., Longstaff, F.A. and Pan, J. (2003). ‘Dynamic Asset Allocation with Event Risk’.
Journal of Finance, 58(1), 231–259.
Markowitz, H.M. (1952). ‘Portfolio Selection’. Journal of Finance, March, 77–91.
Markowitz, H.M. (1959). ‘Portfolio Selection, Efficient Diversification of Investments’.
Cowles Foundation Monograph, 16.
Martellini, L. and Urosevic, B. (2005). Static Mean–Variance Analysis with Uncertain
Time Horizon. (http://www.edhec-risk.com).
Michaud, R. (1998). Efficient Asset Management. Boston: Harvard Business School
Press.
Perold, A.F. (1984). ‘Large-scale Portfolio Optimization’. Management Science, 30(10),
1143–1160.


Samuelson, P. A. (1969). ‘Lifetime Portfolio Selection by Dynamic Stochastic Programming’. Review of Economics and Statistics, 51, 239–246.
Samuelson, P. A. (1970). ‘The Fundamental Approximation Theorem of Portfolio
Analysis in Terms of Means, Variances and Higher Moments’. Review of Economic
Studies, 37, 537–542.
Sharpe, W.F. (1963). ‘A Simplified Model for Portfolio Analysis’. Management Science
(9), 277–293.
Stapleton, R. and Subrahmanyam, M. (1978). ‘A Multiperiod Equilibrium Asset-Pricing Model’. Econometrica, 46, Sept.

6
Hidden Risks in Mean–Variance
Optimization: An Integrated-Risk
Asset Allocation Proposal

José Luiz Barros Fernandes and José Renato Haas Ornelas

6.1 Introduction

The traditional mean–variance asset allocation approach (Markowitz 1952) considers the volatility of returns as the only risk factor. However, investors
are usually concerned about other types of risk or negative statistical proper-
ties of returns. For instance, investors usually care about credit and liquidity
risks, and the skewness and kurtosis of returns. Thus, there is a risk pre-
mium embedded in their returns to compensate for additional risk taking. If
those risk premia are not taken into account in the analysis, the results of the
model tend to be distorted, with portfolios carrying these hidden risks dom-
inating the risk-free portfolios. Moreover, the resulting portfolios for the
traditional model tend to be badly behaved due to the overconfidence on
the risk/return estimation. Black and Litterman (1992) realize that quantita-
tive asset allocation models have not played the important role they should
in global portfolio management, partly due to these problems.
BIS (2003) has been concerned about risk integration and has observed two
important trends, based on a survey of 31 financial institutions: first, there
is a greater emphasis on the management of risk on an integrated firm-wide
basis, and second, there are efforts to ‘aggregate’ risks through mathematical
risk models. The integration between market and credit risks has been tack-
led by a number of papers recently. Duffie and Singleton (2003), Hou (2005),
Kuritzkes et al. (2003) and Fernandes et al. (2008) provide frameworks for an
integration approach. Fernandes et al. (2008) in particular have proposed a
performance measure that integrates market and credit risk.
Markowitz’s framework can also be extended to consider higher moments
of the return’s distribution. Arditti and Levy (1975), Athayde and Flores
(2004) and Harvey et al. (2004) have proposed models in which an invest-
or’s utility is a function of mean, variance and skewness. Other papers,
including Koekebakker and Zakamouline (2008), Ziemba (2005) and Kaplan


and Knowles (2004), have proposed performance measures that focus on skewness. These models are especially important when we are dealing with asset classes with non-normal, negatively skewed and leptokurtic distributions. One interesting example is the hedge fund industry, as these funds’
tions. One interesting example is the hedge fund industry, as these funds’
returns deviate significantly from normality (Malkiel and Saha 2005). Amin
and Kat (2002) find that although the inclusion of hedge funds may signifi-
cantly improve a portfolio’s mean–variance characteristics, it can also be

expected to lead to significantly lower skewness and higher kurtosis.
This chapter has two main objectives. The first is to propose a novel risk
and performance measure that takes into account skewness and standard
deviation of returns and credit risk. Second, we show how traditional mean–
variance optimization may load two types of ‘hidden’ risks into the optimal
portfolio: skewness and credit risk. Toward this end, for a default-free set of
assets, we consider the effects of the inclusion of hedge funds, corporate and
high yield bonds in a mean–variance optimization, and observe that port-
folios in the efficient frontier have higher credit risk and lower skewness.
We consider a stochastic optimization technique based on Michaud (1998)
in order to overcome the estimation error problem. In short, we are propos-
ing an innovative portfolio optimization technique with a novel integrated
performance measure, which takes into account skewness as well as market
and credit risk.
Our results verify that a simplistic mean–variance asset allocation tends to be distorted in the presence of credit risk and negatively skewed assets. Moreover, the proposed performance ratio (the Adjusted for Skewness and Credit Ratio) shows that while the effect of skewness is weak, credit risk has a very strong impact on the asset allocation solution.
This chapter makes several contributions: first, we propose a perform-
ance measure that integrates skewness, along with market and credit risk,
into a single measure; second, we propose a metric to compare portfolios
with different skewness as well as market and credit risks, and so provide an
objective way to price the trade-off between the previous risks; and third,
we propose a portfolio optimization technique which takes into account
both estimation risk and the previous integrated performance measure, thus
providing several practical insights in terms of asset allocation. It is import-
ant to stress that our empirical evaluations are based on the viewpoint of a
long-term conservative investor using a comprehensive database of global
assets.
The remainder of this chapter is organized as follows. Section 6.2 offers a brief literature review, describing the traditional mean–variance methodology for
portfolio optimization and discussing the effects of skewness and credit risk.
Section 6.3 proposes an integrated performance measure that incorporates
skewness as well as market and credit risk. Section 6.4 presents the empirical
study, describing the data and the implementation, and provides the results.
Section 6.5 concludes the chapter, reviewing the main achievements.


6.2 Literature review

6.2.1 Portfolio optimization models


Markowitz’s (1952) mean–variance model is the traditional paradigm for
portfolio allocation. By using variance as a measure of risk, Markowitz for-
malized the intuition that investors optimize the risk and return trade-off.
The mean–variance model has been criticized because it assumes quadratic

preferences and symmetric return distributions. Also, it treats risk and return
not as estimates, but as though they are known with certainty. One way to
overcome the risk of estimation is to use portfolio resampling techniques.
Portfolio resampling allows an analyst to visualize the estimation error in
traditional portfolio optimization methods. Suppose we have estimated
both the variance–covariance matrix and the excess return vector by using
Y observations. It is important to remember that the point estimates are
random variables and so another sample from the same distribution would
result in different estimates. By repeating the sampling procedure n times,
we get n new sets of optimization inputs, and then a different efficient fron-
tier. The resampled portfolios generally include more assets in the solution
than the classical mean–variance efficient portfolio and should also exhibit
smoother transitions in allocations as return requirements change. Both
characteristics are desirable for practitioners.
Out of sample analysis (see, for instance, Markowitz and Usmen 2003)
has shown favorable results for resampled portfolios. Moreover, resampled
portfolios do seem to offer higher stability and so lower transaction costs,
and these are two important features for long-term investors.
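The resampling procedure can be sketched as follows. This is a simplified, illustrative version of Michaud's approach, not his full procedure: it bootstraps the return history, re-estimates only the covariance matrix, and resamples a single long-only minimum-variance portfolio rather than the whole frontier; the return data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

def min_var_weights(cov):
    """Closed-form minimum-variance weights, clipped to be long-only and
    renormalized (a simplification of a properly constrained optimizer)."""
    inv = np.linalg.inv(cov)
    ones = np.ones(cov.shape[0])
    w = inv @ ones / (ones @ inv @ ones)
    w = np.clip(w, 0.0, None)
    return w / w.sum()

def resampled_weights(returns, n_resamples=500):
    """Michaud-style resampling: re-estimate the covariance matrix on
    bootstrap samples of the return history, optimize each, and average
    the resulting weight vectors."""
    T = returns.shape[0]
    ws = [min_var_weights(np.cov(returns[rng.integers(0, T, size=T)].T))
          for _ in range(n_resamples)]
    return np.mean(ws, axis=0)

# Simulated monthly returns for three assets (purely illustrative numbers):
true_cov = np.array([[0.0004, 0.0001, 0.0000],
                     [0.0001, 0.0009, 0.0000],
                     [0.0000, 0.0000, 0.0025]])
rets = rng.multivariate_normal(np.zeros(3), true_cov, size=102)
print(resampled_weights(rets))  # smoother than a single-sample estimate
```

Averaging across bootstrap samples is what produces the smoother transitions and broader asset participation described above.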

6.2.2 Hedge funds and skewness

The literature shows that asset and portfolio return distributions may be non-normal, usually with fat tails and negative skewness. We can identify two reasons for this non-normality. One reason is that the individual
assets available may themselves have non-normal distributions. Thus, when
we incorporate non-normal assets into the portfolio, the resulting return
will be also non-normal. The other reason is the use of derivatives, which
can be used to change the portfolio leverage or to add negative skewness.
However, mean–variance optimization models ignore these higher
moments of the return distribution (Popova et al. 2007). Amin and Kat
(2002) find that although the inclusion of hedge funds may significantly
improve a portfolio’s mean–variance characteristics, it can also be expected
to lead to significantly lower skewness and higher kurtosis. The solution for
this problem is linked to the inclusion of the higher moments in the opti-
mization and performance models.
Arditti and Levy (1975), Athayde and Flores (2004) and Harvey et
al. (2004) have proposed models in which an investor’s utility is a function
of mean, variance and skewness. Other papers have proposed performance


measures that take into account skewness, as is the case of Koekebakker and
Zakamouline (2008), Ziemba (2005) and Kaplan and Knowles (2004). Several
other authors have proposed measures that go beyond the mean–variance
world. This is the case of Keating and Shadwick (2002) and Goetzmann
et al. (2007).

6.2.3 Credit Risk

As seen in Section 6.2.1, the only risk factor considered in a traditional Markowitz model is the volatility of returns. However, some asset
classes have other types of risk, and so there are risk premia embedded in
their returns to compensate for the additional risk. This is the case of credit
risk, liquidity risk, legal risk, etc. Therefore, in a risk-return framework with
the volatility being the only risk considered, assets with a premium for other
types of risk will dominate the remaining ones. In the case of credit risk,
corporate bonds tend to dominate treasury bonds, since the market risk will
be similar, but the risk premium for the credit risk and the corresponding
returns of corporate bonds are expected to be higher.
Generally, market risk and credit risk are the most important sources of
risk in terms of the impact on profit and loss at the portfolio level. Credit
risk is the uncertainty of changes in value related to a default or to a change
in the credit quality of the corresponding counterparty, while uncertainty
in market risk originates essentially from the volatility of market prices.
Besides, the information required for calculation of each type of risk is dif-
ferent (for credit risk, we need the probability of default and, for market risk,
the expected change in market value). That is why practitioners treat credit
and market risk separately, and their integration is not common. However, as
more shares of a portfolio are being allocated into high-yield assets, incorp-
orating credit risk within an integrated view is becoming mandatory.
In recognition of the current reality of the financial world (see BIS 2003)
and the recent academic trend, this chapter tackles the issue of integration
between market and credit risks. Duffie and Singleton (2003), Hou (2005),
Kuritzkes et al. (2003) and Jobst et al. (2006) provide frameworks for an inte-
gration approach. Fernandes et al. (2008) propose a portfolio optimization
framework that incorporates the credit risk dimension; however, they make
no reference to skewness.

6.3 A performance measure integrating skewness, volatility


and credit risk

We propose an integrated measure of performance that combines expected


default (or another measure of credit risk), volatility and skewness. This is
done in two steps. First, we convert credit risk (measured by the expected
default) into volatility, in a way that is similar to Fernandes et al. (2008).
Then, we use the Adjusted for Skewness Sharpe Ratio (ASSR) performance


index to adjust for skewness in the previous measure. Therefore, this new
measure adjusts the Sharpe Ratio for credit and skewness.
In the first step, the idea is, in line with Fernandes et al. (2008), to convert
the credit risk into market risk using the return’s dimension as the reference.
Our integrated risk measure is the sum of the market risk and the credit risk
converted to market risk in the way we describe below.
Consider, in the volatility × return plane, a yield curve A with a degree of credit risk CA = 0 (default-free), and a yield curve B with credit risk CB, higher than CA (see Figure 6.1). Consider also a portfolio P1 on the yield curve related to credit risk CB, and a portfolio P2 on the yield curve related to credit risk CA, both with the same level of volatility. The credit risk difference (CB – CA) for portfolios P1 and P2 can be associated1 with a premium RP that is the vertical distance between P1 and P2. This premium RP can also be obtained by moving from portfolio P2 to the right, over the curve A, until we reach a portfolio P3 with the same return as portfolio P1. This portfolio P3 will have a higher market risk than portfolios P2 and P1, and the same credit risk as P2, so that the premium RP can also be obtained from the difference between the market risk of portfolios P1 (or P2) and P3, which we call ∆M. Therefore, as the premium RP can be obtained either by increasing the market risk by ∆M or by increasing the credit risk by (CB – CA), we can argue that both risks are equivalent, that is, both risks are worth the same from the market’s point of view. This will be our definition of risk equivalence.

[Figure: yield curves A (default-free) and B (credit risk CB) in the volatility × expected return plane, with portfolios P1, P2 and P3, premium RP and market-risk increment ∆M]

Figure 6.1 Trade-off market and credit risk

This figure shows that a return premium RP can be generated by adding market or credit risk to a portfolio. Yield curves A and B have credit risk CA and CB respectively. We can add an amount RP to the expected return of portfolio P2 either by increasing its credit risk from CA to CB, i.e. going to portfolio P1, or by increasing its market risk by ∆M, i.e. going to portfolio P3.


Our Integrated Risk Measure (IRM) will use this equivalence. First, we
define the conversion value (CV) of credit risk in terms of market risk for a
portfolio Pi in the following way:

CV(Pi) = ∆Mi = MR(Pj) – MR(Pi)

given that:

R(Pj) = R(Pi)

where:

● MR(P) is the market risk of a portfolio P,
● CR(P) is the credit risk of a portfolio P,
● R(P) is the expected return of a portfolio P,
● Pj is a portfolio on the default-free curve.

The idea is that the market risk differential of Pi and Pj generates the same excess return as the credit risk of Pi, so that both are equivalent. We then define an IRM that is the credit risk converted into market risk plus the market risk of Pi:

IRM(Pi) = MR(Pi) + CV(Pi)

It is worth noting that IRM(Pi) will be the market risk of the default-free portfolio Pj. Fernandes et al. (2008) use a slightly different approach; they convert credit into market risk for each credit curve, in a recursive way. This chapter, however, projects each credit curve directly onto the default-free curve, without using intermediate credit layers.
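The conversion above can be sketched numerically. This is a minimal illustration, not the chapter's implementation: the default-free curve is a made-up set of (volatility, expected return) points, it is assumed monotone increasing, and linear interpolation is used to locate the equivalent portfolio Pj, a choice the chapter does not specify.

```python
import numpy as np

def conversion_value(vol_i, ret_i, frontier_vols, frontier_rets):
    """CV(Pi): the extra market risk a default-free portfolio needs in
    order to earn the same expected return as the credit portfolio Pi."""
    # MR(Pj): volatility on the default-free curve at Pi's return level;
    # linear interpolation over a monotone curve is an assumption here.
    mr_pj = np.interp(ret_i, frontier_rets, frontier_vols)
    return mr_pj - vol_i

def integrated_risk(vol_i, ret_i, frontier_vols, frontier_rets):
    """IRM(Pi) = MR(Pi) + CV(Pi), i.e. the market risk of the equivalent
    default-free portfolio Pj."""
    return vol_i + conversion_value(vol_i, ret_i, frontier_vols, frontier_rets)

# Stylized default-free curve (volatility, expected return); made-up numbers:
fv = np.array([0.01, 0.02, 0.03, 0.04])
fr = np.array([0.002, 0.004, 0.006, 0.008])

# A credit portfolio with 2% volatility earning the default-free 3% level:
print(f"IRM = {integrated_risk(0.02, 0.006, fv, fr):.4f}")
```

By construction the result equals the volatility of Pj, as noted in the text.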
In terms of financial efficiency, the traditional Sharpe Ratio is a measure
of the mean excess return per unit of market risk. It does not capture all the
effects from other types of risk. In order to overcome this pitfall, we propose
a ratio of mean excess return per unit of integrated risk.
Our ratio has the excess return in the numerator, as in the traditional
Sharpe Ratio. The denominator has the integrated measure of risk, i.e. the
market risk plus the credit risk converted into market risk, as described above. Then we reach an Adjusted for Credit
Ratio (ACR)2, as in Fernandes et al. (2008):

ACR(Pi) = (R(Pi) – R(TB)) / IRM(Pi)

where TB is the short-term default-free interest rate.


Note that the ACR of any portfolio on the default-free curve is calculated in the same way as the traditional Sharpe Ratio, since there is no credit risk


to be converted. As the credit exposure of a portfolio increases, one should expect an increase in the traditional Sharpe Ratio, since returns tend to
increase and the volatilities are little affected by higher credit risk. The ACR,
however, captures this higher credit risk through the integrated risk meas-
ure, located in the denominator of the ratio.
This metric takes into account market and credit risk and can be used to
price the trade-off between these two sources of risk. Moreover, it clarifies the portfolio choice problem in an integrative framework and so has multiple practical uses. However, it fails to consider the impact of skewness.
From Koekebakker and Zakamouline (2008), the ASSR is given by:

ASSR = SR·√(1 + (b·Sk/3)·SR)

where SR is the traditional Sharpe Ratio, b refers to the skewness preference and Sk is the skewness of the returns. The higher the parameter b is,
the more skewness-averse the investor is. If b = 0, we have the traditional
Sharpe Ratio, and investors are neutral to skewness. In order to take into
account both skewness and credit risk, we then propose the ASCR (Adjusted
for Skewness and Credit Ratio) given by:

ASCR = ACR·√(1 + (b·Sk/3)·ACR)

Our performance measure, while addressing credit risk, still has the good
properties of ASSR, such as: it does not depend on the choice of a threshold, it takes into account both downside and upside risk, and it can also be
derived from expected utility theory, which is a cornerstone of modern
finance. Note that the performance measure does not take into account any
diversification effects among credit risk, volatility and skewness.
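Both adjusted ratios are straightforward to compute once SR or ACR is known. Below is a minimal sketch with illustrative inputs; note that for a sufficiently negative b·Sk·SR the radicand can turn negative, a case left aside here.

```python
import math

def assr(sr, sk, b=1.0):
    """Adjusted for Skewness Sharpe Ratio (Koekebakker and Zakamouline 2008)."""
    return sr * math.sqrt(1.0 + (b * sk / 3.0) * sr)

def ascr(acr, sk, b=1.0):
    """Adjusted for Skewness and Credit Ratio: the same skewness
    adjustment applied to the ACR instead of the plain Sharpe Ratio."""
    return acr * math.sqrt(1.0 + (b * sk / 3.0) * acr)

# Two portfolios with the same ACR but opposite skewness (illustrative):
print(ascr(0.20, -0.5))   # negative skewness is penalized
print(ascr(0.20, +0.5))   # positive skewness is rewarded
print(ascr(0.20, 0.0))    # b·Sk = 0 recovers the unadjusted ratio
```

With b = 0 (a skewness-neutral investor), ASCR collapses to ACR, just as ASSR collapses to the Sharpe Ratio.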

6.4 Empirical study

6.4.1 Dataset
Our empirical investigation is based on a sample composed of treasury, cor-
porate and high yield bonds, equity indexes and hedge funds indexes. The
time series covers a period from January 2000 to June 2008, comprising
102 monthly observations. The hedge funds and bond indexes are from
Lehman Brothers, while the stock indexes were downloaded from Thomson DataStream. All total return time series are calculated on a US-dollar basis
and we use a three-month US treasury bill rate when calculating excess
returns3. Table 6.1 presents the description of each asset considered and
the main statistical properties of the excess returns. We believe that this


Table 6.1 Main characteristics of the sample

                                                        Excess returns
Asset Class                        Market             Mean (%)  Std Dev (%)  Skewness
US Treasury 1–5 years              Treasury US            0.15         0.68   −0.0045
US Treasury 5–10 years             Treasury US            0.28         1.62   −0.3701
US Treasury 10–20 years            Treasury US            0.37         2.22   −0.6651
US Treasury 20 years               Treasury US            0.41         2.82   −0.6745
EURO Treasuries                    Treasury Eurozone      0.59         2.95    0.5049
UK Gilts                           Treasury UK            0.36         2.52    0.398
FTSE100                            Equity UK              0.09         4.13   −0.2144
S&P 500                            Equity US             −0.18         4.07   −0.3056
DAX30                              Equity Eurozone        0.33         6.74   −0.4005
CAC30                              Equity Eurozone        0.27         5.47   −0.4345
EUR Corp AAA                       Credit Eurozone        0.58         2.88    0.5035
EUR Corp AA                        Credit Eurozone        0.60         2.91    0.5581
EUR Corp A                         Credit Eurozone        0.59         2.88    0.5672
EUR Corp BBB                       Credit Eurozone        0.55         2.85    0.4303
EUR HY BB                          Credit Eurozone        0.80         3.70    0.1236
EUR HY B                           Credit Eurozone        0.38         4.49   −0.1042
EUR HY CCC                         Credit Eurozone        0.25         6.87   −0.1357
US Corp AAA                        Credit US              0.27         1.25   −0.7929
US Corp AA                         Credit US              0.25         1.19   −0.4921
US Corp A                          Credit US              0.23         1.30   −0.3924
US Corp BBB                        Credit US              0.23         1.48   −0.0964
US HY BB                           Credit US              0.29         1.78   −1.3984
US HY B                            Credit US              0.19         2.48    0.0058
US HY CCC                          Credit US              0.21         3.89   −0.1417
HF – Commodities                   Hedge Funds            0.69         2.40    0.2115
HF – Foreign Exchange              Hedge Funds            0.43         2.41   −0.2157
HF – Equity – Market Neutral       Hedge Funds            0.20         0.81   −0.4331
HF – Equity Other Relative Value   Hedge Funds            0.40         1.28   −0.4814
HF – Equity – Long Only            Hedge Funds            0.97         4.51   −0.6355
HF – Equity – Long Bias            Hedge Funds            0.51         2.99    0.459
HF – Equity – Variable Bias        Hedge Funds            0.46         1.67    0.0347

This table shows the properties of the assets’ returns. Excess returns are the raw returns minus the Lehman Brothers 1–3 months Treasury Bill Index. All returns and standard deviations are calculated in USD and on a monthly basis. US Treasuries, EURO Treasuries and UK Gilts are the Lehman Brothers Treasury Indexes from the United States, Eurozone and United Kingdom. EUR Corp and US Corp are the Lehman Brothers Corporate Bond indexes from the United States and Eurozone. HY stands for Lehman Brothers High Yield Bond indexes. HF stands for Lehman Brothers Asset-Weighted Hedge Fund Indexes.


sample covers a diversified group of assets in terms of currency, market and instrument type.
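The columns of Table 6.1 can be computed along the following lines. This is a sketch with synthetic stand-in series; the moment-based sample skewness estimator is an assumption, as the chapter does not say which estimator was used.

```python
import numpy as np

def excess_return_stats(total_returns, tbill_returns):
    """Monthly excess-return statistics as in Table 6.1: mean, standard
    deviation and moment-based sample skewness."""
    ex = np.asarray(total_returns) - np.asarray(tbill_returns)
    mean = ex.mean()
    std = ex.std(ddof=1)
    dev = ex - mean
    skew = np.mean(dev ** 3) / np.mean(dev ** 2) ** 1.5
    return mean, std, skew

# Synthetic 102-month series standing in for an index and the T-bill rate:
rng = np.random.default_rng(1)
index_rets = rng.normal(0.004, 0.02, size=102)
tbill_rets = np.full(102, 0.001)
m, s, sk = excess_return_stats(index_rets, tbill_rets)
print(f"mean {m:.2%}  std dev {s:.2%}  skewness {sk:.4f}")
```

Applied to each of the 31 return series, this yields one row of the table.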
Regarding the hedge fund indexes, we have selected only the sub-indexes
from the Lehman Brothers Asset Weighted Hedge Fund Index that do
not contain relevant credit risk, namely equity, commodities and Foreign
Exchange (FX). The fixed income and multi-market sub-indexes were not
considered since they usually contain credit risk, but we are not able to measure the exposure to credit.
We have equity and treasury indexes in three currencies: US dollars, euros and British pounds. The credit indexes were divided by rating
from AAA to CCC for both the US and the Eurozone, comprising 14 indexes.
The only index that was split by maturity was the US Treasury.
We analyze the results using three sub-samples of assets besides the full
sample:

● default-free sub-sample: only treasuries and equity indexes;
● credit sub-sample: treasuries, corporate, high yield and equity indexes;
● hedge fund sub-sample: treasuries, equity and hedge fund indexes.

The idea is to begin with a limited number of traditional assets, and then
expand the frontier with the inclusion of hedge funds, credit bonds and
both together. As we can see from Table 6.1, the equity indexes had a poor
performance during this period, since we began the sample in the period
just before the burst of the Internet bubble. However, the best asset in terms
of returns was the hedge fund equity long-only sub-index.

6.4.2 Hidden risks in the mean–variance optimization

In this section, we perform resampled mean–variance optimization based on Michaud’s methodology, with four samples: the full sample and the three sub-samples described in the last section. Our goal is to show that optimization
that uses only variance as the risk measure may hide other types of risk, like
credit risk and skewness.
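The resampling step can be made concrete with a short sketch. The code below is a minimal long-only illustration using scipy's SLSQP solver, with simulated histories drawn from the fitted normal moments; it is not the authors' implementation, and the sample data, grid sizes and solver choice are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def frontier_weights(mu, cov, n_points=10):
    """Long-only mean-variance frontier: for a grid of target returns,
    find the minimum-variance, fully invested portfolio."""
    n = len(mu)
    weights = []
    for target in np.linspace(mu.min(), mu.max(), n_points):
        cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
                {"type": "eq", "fun": lambda w, t=target: w @ mu - t}]
        res = minimize(lambda w: w @ cov @ w, np.full(n, 1.0 / n),
                       method="SLSQP", bounds=[(0.0, 1.0)] * n,
                       constraints=cons)
        weights.append(res.x)
    return np.array(weights)

def resampled_frontier(returns, n_points=10, n_resamples=50, seed=0):
    """Michaud-style resampling: simulate alternative histories from the
    fitted moments, re-optimize on each, and average weights rank by rank."""
    rng = np.random.default_rng(seed)
    t_obs = returns.shape[0]
    mu, cov = returns.mean(axis=0), np.cov(returns, rowvar=False)
    acc = np.zeros((n_points, returns.shape[1]))
    for _ in range(n_resamples):
        sim = rng.multivariate_normal(mu, cov, size=t_obs)
        acc += frontier_weights(sim.mean(axis=0),
                                np.cov(sim, rowvar=False), n_points)
    return acc / n_resamples
```

Averaging the weights across resamples is what smooths the corner solutions that a single estimated frontier tends to produce.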
Figure 6.2 shows the traditional efficient frontiers for the four samples.
We can see that both credit assets and hedge funds significantly improve
the efficient frontier over the default-free frontier. The improvement con-
tributed by hedge funds is higher than that by credit assets. Also, there is
very little improvement when we move from the default-free plus hedge
funds to the full sample (i.e. when we add credit assets to the hedge fund
sub-sample). Thus, hedge funds have mean–variance characteristics that
seem to be better than those of credit assets.
However, this upgrade in the efficient frontier caused by hedge funds may
be due to some hidden risk factor. In fact, the literature on hedge funds
shows that these managers tend to load negative skewness strategies in order
to obtain higher Sharpe Ratios. If we plot the skewness of the efficient fron-
tier portfolios (Figure 6.3), we see that efficient portfolios with hedge funds

[Plot: expected return (y-axis) versus standard deviation (x-axis); series: Default free (EF), Default free + hedge funds (EF), Default free + credit (EF), Full sample (EF).]

Figure 6.2 Efficient frontiers


This graph shows the traditional mean–variance efficient frontiers for the four samples described
in Section 6.4.1: default-free, credit, hedge funds and full sample.

[Plot: skewness (y-axis) versus standard deviation (x-axis); series: Full sample (Skew), Default free + hedge funds (Skew), Default free (Skew), Default free + credit (Skew).]

Figure 6.3 Skewness of the efficient frontiers


This graph shows the return’s skewness for the traditional mean–variance efficient frontiers,
for the four samples described in Section 6.4.1: default-free, credit, hedge funds and the full
sample.


[Plot: kurtosis (y-axis) versus standard deviation (x-axis); series: Full sample (Kurt), Default free + hedge funds (Kurt), Default free (Kurt), Default free + credit (Kurt).]

Figure 6.4 Kurtosis of the efficient frontiers


This graph shows the return’s kurtosis for the traditional mean–variance efficient frontiers
for the four samples described in Section 6.4.1: default-free, credit, hedge funds and the full
sample.

actually have a lower skewness. For higher values of the standard deviation
the level of the skewness is about –0.5, while for lower standard deviations
the skewness is slightly negative. Portfolios without hedge funds have a
small positive skewness.
The same analysis may be done with the kurtosis on the efficient frontier
(see Figure 6.4). But for the kurtosis we do not have a clear picture: for low
standard deviations, portfolios with hedge funds have higher kurtosis, but
for high standard deviations these portfolios have lower kurtosis. Therefore,
we cannot say that kurtosis is a hidden risk in efficient portfolios with hedge
funds. It is interesting to note that the minimum variance portfolios have
especially high kurtosis.
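Given a set of frontier weights and the historical return matrix, the skewness and kurtosis plotted in Figures 6.3 and 6.4 can be computed portfolio by portfolio as sketched below (a generic illustration, not the authors' code):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def portfolio_moments(weights, returns):
    """Skewness and raw kurtosis of a portfolio's historical return series.
    weights: (N,) portfolio weights; returns: (T, N) asset return history."""
    port = returns @ weights                 # (T,) portfolio return series
    return skew(port), kurtosis(port, fisher=False)

# e.g. one point per frontier portfolio:
# skews = [portfolio_moments(w, R)[0] for w in frontier_W]
```

Raw (non-excess) kurtosis is used here so that a normal distribution scores 3, matching the scale of Figure 6.4.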
If hedge funds load on skewness, the corporate and high yield portfolios
load on credit risk. Figure 6.5 shows the expected default of efficient
frontier portfolios with credit assets.4 We see that the sub-sample with
default-free and credit assets always carries credit risk higher than that of
a single-A rating (expected default = 0.001%). When we include hedge funds,
the expected default decreases, since credit assets tend to be dominated by
hedge funds in our sample. The expected default of the full sample is
approximately equivalent to a single-A to double-A rating.
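The expected default of a frontier portfolio is just the weight-averaged default probability of its assets. A sketch follows; only the single-A (0.001%) and triple-B (0.26%) probabilities come from the chapter, while the remaining entries are placeholders standing in for the KMV EDF figures.

```python
import numpy as np

# Illustrative one-year default probabilities by rating. Only the "A"
# (0.001%) and "BBB" (0.26%) values appear in the chapter; the rest are
# made-up placeholders for the KMV expected default frequencies.
PD = {"TSY": 0.0, "AAA": 0.000002, "AA": 0.000005, "A": 0.00001,
      "BBB": 0.0026, "BB": 0.012, "B": 0.05, "CCC": 0.20}

def expected_default(weights, ratings):
    """E(d) = a'D: portfolio expected default as the weighted average of
    the assets' one-year default probabilities."""
    d = np.array([PD[r] for r in ratings])
    return float(weights @ d)

w = np.array([0.6, 0.2, 0.2])                 # treasuries, single-A, BB
ed = expected_default(w, ["TSY", "A", "BB"])  # 0.2*1e-5 + 0.2*0.012
```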

Hidden Risks in Mean–Variance Optimization 123

[Plot: expected default (y-axis) versus standard deviation (x-axis); series: Default free + credit (ExpDef), Full sample (ExpDef).]

Figure 6.5 Expected default of the efficient frontiers


This graph shows the expected default calculated based on credit ratings of the traditional
mean–variance efficient frontiers for the four samples described in Section 6.4.1: default-free,
credit, hedge funds and full sample.

Overall, we see that mean–variance optimization tends to load either credit
risk or skewness into the efficient portfolios. As investors are usually not
neutral to these factors, one should use an optimization procedure and a
performance evaluation that consider these risk factors. In the next section,
we will evaluate mean–variance portfolios using performance indexes that go
beyond mean–variance. Section 6.4.4 will deal with portfolio optimization
using the ASCR.

6.4.3 Portfolio performance evaluation of mean–variance optimization
In this section, we analyze the performance of the resampling mean–variance
optimization with credit risk restrictions. The idea is to evaluate the
performance using the traditional Sharpe Ratio, the ASSR, the ACR and the
Manipulation-proof Performance Measure (MPPM). Each performance measure
considers different risk factors.
In order to avoid portfolios with high credit risk and also to facilitate
the calculation of the ACR, we follow Fernandes et al. (2008) by adding a
restriction on the expected default of the portfolios in the optimization.
The minimization problem is:

Min_a  ½ aᵀVa


subject to:

E(R_a) = aᵀX = x
E(d) = aᵀD ≤ y

where d is the default rate of the portfolio and D is the vector of the default
probabilities for each asset. The vector D was built using the ratings of the
assets and the KMV one-year transition matrices as tabulated from expected
default frequencies.5 All of the treasury indexes were considered default-
free, i.e. the expected default was set to zero.
We used an expected default restriction (y) equivalent to a triple-B rating,
which means that the portfolio has a one-year expected default equal to 0.26%.
The minimization problem was then tested inside the resampling Michaud
methodology, using four samples: the full sample and three sub-samples.
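The credit-constrained minimization can be written down directly with a generic solver. The sketch below adds full-investment and no-short-sale constraints as assumptions (the chapter does not list them explicitly for this problem); y = 0.26% is the triple-B expected-default ceiling.

```python
import numpy as np
from scipy.optimize import minimize

def credit_constrained_minvar(V, X, D, target_return, y=0.0026):
    """Minimize 1/2 a'Va subject to a'X = target_return and a'D <= y.
    Full investment and long-only positions are assumed here."""
    n = len(X)
    cons = [{"type": "eq",   "fun": lambda a: a.sum() - 1.0},
            {"type": "eq",   "fun": lambda a: a @ X - target_return},
            {"type": "ineq", "fun": lambda a: y - a @ D}]   # a'D <= y
    res = minimize(lambda a: 0.5 * a @ V @ a, np.full(n, 1.0 / n),
                   method="SLSQP", bounds=[(0.0, 1.0)] * n,
                   constraints=cons)
    return res.x
```

Inside the Michaud resampling loop, V and X would be re-estimated on each simulated history while D and y stay fixed.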
Figure 6.6 shows the efficient frontiers of the four samples, using the
credit restricted minimization problem above. Again, we see that both
credit assets and hedge funds significantly improve the efficient fron-
tier over the default-free frontier. The improvement given by hedge
funds is greater than that given by credit assets. Also, there is very little

[Plot: expected return (y-axis) versus standard deviation (x-axis); series: Default free (EF), Default free + credit (EF), Default free + hedge funds (EF), Full sample (EF).]

Figure 6.6 Efficient frontiers


This graph shows the resampling credit-constrained mean–variance efficient frontiers for the
four samples described in Section 6.4.1: default-free, credit, hedge funds and the full sample. The
expected default of the portfolios is restricted to 0.26% over one year, which is equivalent to a
triple-B rating (investment grade).


improvement when we move from the default-free plus hedge funds to the
full sample (i.e. when we add credit assets to the hedge fund sub-sample).
Thus, hedge funds have mean–variance characteristics that seem better
than those of credit assets.
Figures 6.7 and 6.8 show the skewness and expected default of the effi-
cient frontiers, and the results are very similar to the credit unconstrained
case presented in the last section: portfolios with hedge funds have more
negative skewness, and portfolios in which corporates and high yields are
allowed tend to load as much credit as possible, but when hedge funds
and credit assets are put together, hedge funds dominate credit in our
sample.
As expected, the Sharpe Ratio of the full sample is higher than those of
the sub-samples, although the default-free plus hedge funds sub-sample
has very similar results (see Figure 6.9). When we use the new performance
measure tailored for hedge funds, the MPPM, the full sample is still
the best one, but the shape of the curves changes (see Figure 6.10). The
portfolio with the highest MPPM is now a portfolio with higher market risk
(around 2.5% standard deviation).
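For reference, Goetzmann et al. (2007) define the MPPM as Θ = ln((1/T) Σ_t [(1 + r_t)/(1 + rf_t)]^(1−ρ)) / ((1−ρ)Δt). A direct transcription follows; the risk-aversion ρ and the risk-free series used in the chapter are not stated, so both are left as inputs here.

```python
import numpy as np

def mppm(r, rf, rho=3.0, dt=1.0 / 12.0):
    """Manipulation-proof performance measure of Goetzmann et al. (2007).
    r, rf: per-period simple returns of the portfolio and the risk-free
    asset; rho: relative risk aversion; dt: period length in years."""
    ratio = (1.0 + np.asarray(r, dtype=float)) / (1.0 + np.asarray(rf, dtype=float))
    return float(np.log(np.mean(ratio ** (1.0 - rho))) / ((1.0 - rho) * dt))
```

A constant excess return collapses the measure to the annualized log excess return, e.g. a steady 1% monthly over a 0% risk-free rate gives 12·ln(1.01).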

[Plot: skewness (y-axis) versus standard deviation (x-axis); series: Default free (Skew), Default free + credit (Skew), Default free + hedge funds (Skew), Full sample (Skew).]

Figure 6.7 Skewness of the efficient frontiers


This graph shows the return’s skewness of the resampling credit-constrained mean–variance
efficient frontiers for the four samples described in Section 6.4.1: default-free, credit, hedge
funds and the full sample. The expected default of the portfolios is restricted to 0.26% over one
year, which is equivalent to a triple-B rating (investment grade).

[Plot: expected default (y-axis) versus standard deviation (x-axis); series: Default free + credit (ExpDef), Full sample (ExpDef).]

Figure 6.8 Expected default of the efficient frontiers


This graph shows the expected default calculated based on the credit ratings of resampling cred-
it-constrained mean–variance efficient frontiers for the four samples described in Section 6.4.1:
default-free, credit, hedge funds and the full sample. The expected default of the portfolios is
restricted to 0.26% over one year, which is equivalent to a triple-B rating (investment grade).

[Plot: Sharpe Ratio (y-axis) versus standard deviation (x-axis); series: Default free (IS), Default free + credit (IS), Default free + hedge funds (IS), Full sample (IS).]

Figure 6.9 Sharpe Ratio of the efficient frontiers


This graph shows the Sharpe Ratio of the resampling credit-constrained mean–variance efficient
frontiers for the four samples described in Section 6.4.1: default-free, credit, hedge funds and
the full sample. The expected default of the portfolios is restricted to 0.26% over one year,
which is equivalent to a triple-B rating (investment grade).

[Plot: MPPM (y-axis) versus standard deviation (x-axis); series: Default free (MPPM), Default free + credit (MPPM), Default free + hedge funds (MPPM), Full sample (MPPM).]

Figure 6.10 MPPM of the efficient frontiers


This graph shows the MPPM (Manipulation-Proof Performance Measure) of resampling credit-
constrained mean–variance efficient frontiers for the four samples described in Section 6.4.1:
default-free, credit, hedge funds and the full sample. The expected default of the portfolios is
restricted to 0.26% over one year, which is equivalent to a triple-B rating (investment grade).

[Plot: ASSR (y-axis) versus standard deviation (x-axis); series: Default free (ASSR), Default free + credit (ASSR), Default free + hedge funds (ASSR), Full sample (ASSR).]

Figure 6.11 ASSR of the efficient frontiers


This graph shows the ASSR (Adjusted for Skewness Sharpe Ratio) of resampling credit-constrained
mean–variance efficient frontiers for the four samples described in Section 6.4.1: default-free,
credit, hedge funds and the full sample. The expected default of the portfolios is restricted to
0.26% over one year, which is equivalent to a triple-B rating (investment grade).


[Plot: ACR (y-axis) versus standard deviation (x-axis); series: Default free (ACR), Default free + credit (ACR), Default free + hedge funds (ACR), Full sample (ACR).]

Figure 6.12 ACR of the efficient frontiers


This graph shows the ACR of resampling credit-constrained mean–variance efficient frontiers for
the four samples described in Section 6.4.1: default-free, credit, hedge funds and the full sample.
The expected default of the portfolios is restricted to 0.26% over one year, which is equivalent
to a triple-B rating (investment grade).

Results for the ASSR (Figure 6.11) show a picture very similar to the Sharpe
Ratio, although the adjustment for skewness narrowed the distance between
the frontiers with and without hedge funds. Therefore, we can say that the
adjustment for skewness proposed by Koekebakker and Zakamouline (2008)
does not entirely offset the higher return/volatility ratio of hedge funds.
It is worth noting that we set the investor's skewness preference parameter
(b) equal to two. If we increase this parameter, meaning that investors
have stronger preferences for positive skewness, the gap between the
frontiers with and without hedge funds will narrow.
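The Koekebakker–Zakamouline skewness adjustment is commonly quoted as ASSR = SR·sqrt(1 + (b/3)·S·SR), with S the return skewness and b the skewness-preference parameter. The sketch below uses this form with b = 2 as in the chapter; treat it as an approximation of their measure rather than the authors' exact computation.

```python
import numpy as np
from scipy.stats import skew

def assr(excess_returns, b=2.0):
    """Adjusted-for-skewness Sharpe ratio: the ordinary SR is scaled up
    (down) when the distribution is positively (negatively) skewed."""
    x = np.asarray(excess_returns, dtype=float)
    sr = x.mean() / x.std(ddof=1)
    adj = 1.0 + (b / 3.0) * skew(x) * sr
    return float(sr * np.sqrt(max(adj, 0.0)))
```

With zero skewness the measure collapses to the ordinary Sharpe Ratio, which is why the ASSR and SR rankings in Figures 6.9 and 6.11 look so similar.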
The results for the ACR (Figure 6.12) show a different picture from the
Sharpe Ratio, especially for portfolios with high credit loadings. These port-
folios are significantly downgraded by the ACR. For the Sharpe Ratio, the
full sample has a slightly better performance if compared with the hedge
fund sub-sample, but when we consider the ACR, the full sample under-
performs compared with the hedge funds’ sub-sample. When we compare
the default-free sub-sample with the credit sub-sample, the results are very
different: while for the Sharpe ratio the credit sub-sample performs well


[Plot: ASCR (y-axis) versus standard deviation (x-axis); series: Default free (ASCR), Default free + credit (ASCR), Default free + hedge funds (ASCR), Full sample (ASCR).]

Figure 6.13 ASCR of the efficient frontiers


This graph shows the ASCR of resampling credit-constrained mean–variance efficient frontiers
for the four samples described in Section 6.4.1: default-free, credit, hedge funds and the full sam-
ple. The expected default of the portfolios is restricted to 0.26% over one year, which is equiva-
lent to a triple-B rating (investment grade).

above, when we consider the ACR the default-free sub-sample outperforms


the credit sub-sample, as expected for the minimum variance portfolio.
This corroborates the view that credit assets are excessively loaded in mean–
variance optimization, but when we account for credit risk, they become
less attractive.
Finally, Figure 6.13 shows the ASCR. The results are very similar to those
for the ACR, although again the adjustment for skewness narrows the distance
between the frontiers with and without hedge funds. We may conclude
that, while the adjustment for credit is very relevant for credit portfolios,
the adjustment for skewness has a marginal effect on hedge fund portfolios,
at least with an investor's skewness preference parameter equal to two. If
we increase this parameter, the results may be less similar.
All optimizations carried out in this section are based on a mean–variance
framework, so the efficient frontiers are not optimal under other criteria.
We may find portfolios with a better ACR, ASSR or ASCR if we optimize
with these specific performance indexes in mind. In the next section we will
show a portfolio optimization based on the ASCR, so that we can get an
idea of how sub-optimal the mean–variance optimization is from the ASCR
perspective.


6.4.4 Portfolio optimization using alternative performance measures
Given that mean–variance optimization tends to load undesirable properties
into the portfolios, we need a way to find portfolios with our desired
properties. In this section, we show portfolio optimizations based on the
ASSR, ACR and ASCR, so that we can compare the resulting portfolio
compositions.
Therefore, we run the following optimization problem:

Max_a Perf(a)
s.t.: ∑ᵢ aᵢ = 1
      aᵢ ≥ 0

where a is the vector of asset weights and Perf is a measure of performance.
The weights of the best portfolio of the mean–variance optimization
under that specific performance measure are used as the initial guess for
the optimization algorithm. Although we cannot guarantee the convergence
of the optimization algorithm, this initial guess procedure produced very
fast results.6 We use as performance measures the SR, ASSR, ACR and ASCR.
The best ASCR of the full-sample mean–variance optimization was 0.4672,
but with the direct optimization we found an ASCR of 0.4962, showing that
mean–variance optimization is actually sub-optimal under the ASCR
framework.
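The warm-start procedure can be sketched as follows, with the performance measure passed in as a callable on the portfolio return series. The frontier weights and the toy measure in the test are illustrative; the chapter's SR, ASSR, ACR and ASCR would plug in as `perf`.

```python
import numpy as np
from scipy.optimize import minimize

def maximize_performance(perf, frontier_W, returns):
    """Max_a Perf(a) s.t. sum(a) = 1, a >= 0, warm-started from the best
    mean-variance frontier portfolio under the same measure.
    perf maps a portfolio return series to a score."""
    w0 = max(frontier_W, key=lambda w: perf(returns @ w))  # initial guess
    n = returns.shape[1]
    res = minimize(lambda a: -perf(returns @ a), w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq",
                                 "fun": lambda a: a.sum() - 1.0}])
    return res.x, -res.fun
```

Starting from the best frontier portfolio keeps a gradient-based solver in a good region, which is the practical reason the chapter reports fast convergence.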
The resulting optimal portfolio compositions for the full sample and the
credit sub-sample are shown in Table 6.2. We can see that the compositions
are not very different. This is expected, since all of the measures have a
common basis: they depend on mean and variance. The differences related to
credit risk are more relevant than those related to skewness. The measures
that take credit risk into account (ACR and ASCR) typically load less credit
risk than the others. This is especially relevant in the credit sample, since
in the full sample the hedge funds have a relatively strong dominance over
credit assets. For instance, in the credit sample, the SR and ASSR load about
18% in high yield bonds while the ACR and ASCR load only 8% in corporate
bonds. In the full sample, the credit assets play a minor role, with only 2.7%
loaded by the SR and 0.0% by the ACR.
The effects of skewness are not as clear as those of credit risk. Remember
that the skewness reported in Table 6.1 is that of the individual assets; when
they are put inside a portfolio, the co-skewness characteristics among
them determine the more attractive assets. In this way, we can say that the
commodities and equity market neutral hedge funds should have good
co-skewness properties, since their weights are significantly increased by the
two skewness-adjusted measures (ASSR and ASCR).


Table 6.2 Composition of the optimal portfolios according to different criteria (%)

                                  Full sample optimization by    Credit sample optimization by
Assets                            SR    ACR   ASSR  ASCR         SR    ACR   ASSR  ASCR
US Treasury 1–5 years             53.5  53.2  43.0  42.8         59.6  64.3  65.3  65.5
US Treasury 5–10 years            4.6   6.7   8.6   8.6          7.3   14.0  1.7   13.6
US Treasury 10–20 years           0.9   0.0   0.0   0.0          3.1   2.5   5.4   0.0
US Treasury 20+ years             0.0   0.0   0.0   0.0          0.0   0.0   0.0   0.0
EURO Treasuries                   1.0   0.5   1.8   2.2          4.7   9.7   4.5   11.3
UK Gilts                          4.6   4.7   3.0   2.6          3.4   0.0   3.6   0.0
FTSE100                           0.9   0.0   0.0   0.0          0.6   0.0   0.0   0.0
S&P 500                           0.0   0.0   0.0   0.0          0.0   0.0   0.0   0.0
DAX30                             0.1   0.6   0.0   0.0          0.8   1.5   1.4   1.2
CAC30                             0.6   0.0   0.0   0.0          0.9   0.0   0.0   0.0
EUR Corp AAA                      0.0   0.0   0.0   0.0          0.0   8.1   0.0   8.4
EUR Corp AA                       0.0   0.0   0.0   0.0          0.2   0.0   0.0   0.0
EUR Corp A                        0.0   0.0   0.0   0.0          0.2   0.0   0.0   0.0
EUR Corp BBB                      0.0   0.0   0.0   0.0          0.0   0.0   0.0   0.0
EUR HY BB                         1.1   0.0   1.7   0.9          7.8   0.0   11.7  0.0
EUR HY B                          0.0   0.0   0.0   0.0          0.0   0.0   0.0   0.0
EUR HY CCC                        0.0   0.0   0.0   0.0          0.0   0.0   0.0   0.0
US Corp AAA                       0.3   0.0   0.0   0.0          0.0   0.0   0.0   0.0
US Corp AA                        0.0   0.0   0.0   0.0          0.5   0.0   0.0   0.0
US Corp A                         0.0   0.0   0.0   0.0          0.0   0.0   0.0   0.0
US Corp BBB                       0.0   0.0   0.0   0.0          0.0   0.0   0.0   0.0
US HY BB                          1.3   0.0   0.0   0.0          11.0  0.0   6.4   0.0
US HY B                           0.0   0.0   0.0   0.0          0.0   0.0   0.0   0.0
US HY CCC                         0.0   0.0   0.0   0.0          0.0   0.0   0.0   0.0
HF – Commodities                  3.4   5.0   9.4   9.2
HF – Foreign Exchange             0.5   0.0   0.0   0.0
HF – Equity – Market Neutral      1.8   2.7   13.7  14.5
HF – Equity Other Relative Value  16.5  18.7  14.4  14.9
HF – Equity – Long Only           3.2   4.0   2.7   2.7
HF – Equity – Long Bias           0.0   0.0   0.0   0.0
HF – Equity – Variable Bias       5.7   3.9   1.6   1.6

This table shows the percentage allocation for each asset using four optimization processes and two
samples. The full sample and credit sample are those described in Section 6.4.1. We run optimizations
maximizing the following performance measures: SR, ASSR, ACR and ASCR. US Treasuries,
Euro Treasuries and UK Gilts are the Lehman Brothers Treasury Indexes from the US, Eurozone
and UK. EUR Corp and US Corp are the Lehman Brothers Corporate Bond indexes from the US and
Eurozone. HY stands for Lehman Brothers High Yield Bond indexes. HF stands for Lehman Brothers
Asset-Weighted Hedge Fund indexes.


6.5 Conclusions

We have shown how traditional mean–variance optimization may load two


types of ‘hidden’ risks into the optimal portfolio: skewness and credit risk.
Toward this end, we consider the effects of the inclusion of hedge funds,
corporate and high yield bonds in a mean–variance optimization, and see
that optimal portfolios have higher credit risk and lower skewness.

Also, we have developed a novel risk and performance measure that takes
into account skewness, market risk and credit risk. We adopt a stochastic
optimization technique based on Michaud (1998) in order to overcome the
estimation error problem. Thus, we are proposing an innovative portfolio
optimization technique with a novel integrated risk measure, which takes
into account skewness, market risk and credit risk.
Our results indicate that a simplistic mean–variance asset allocation tends
to be distorted in the presence of credit-risky and negatively skewed assets.
Moreover, the proposed ASCR (Adjusted for Skewness and Credit Performance
Ratio) shows that while the effect of skewness is weak, credit risk has a very
strong impact on the asset allocation solution.
Our empirical investigation has some limitations that can be overcome
in future studies. First, our sample time window is very difficult for equity
investments and very good for hedge funds. The use of different time
windows would bring different allocation results. Also, our information
is restricted to past returns, which implies that investors make decisions
based on past returns and do not use other conditioning information such
as variables which take into account the economic cycle. Finally, our exer-
cise is an ‘in-sample’ portfolio selection. It would be of interest to compare
the performance of the optimal portfolios given here in an out-of-sample
analysis.

Notes
The views expressed in this work are those of the authors and do not reflect those of
the Banco Central do Brasil or its members.

1. In fact, this premium can be associated with other types of risk, like liquidity
risk.
2. Fernandes et al. (2008) labelled this ratio ‘Adjusted Sharpe Ratio’. In this chapter,
to distinguish our version from other variations of the Sharpe Ratio, we include
the term ‘Credit’, labelling it the ‘Adjusted for Credit Sharpe Ratio’.
3. Our evaluation is performed from a US-based investor perspective.
4. We use the KMV one-year expected default frequencies. All the treasury indexes
are considered default-free, i.e. the expected default is set to zero.
5. Source: CreditMetrics Technical Document (1997), Table 6.3.
6. We have tested other portfolios of the mean–variance frontier as initial guess, and
the algorithm converged to the same portfolio.


Bibliography
Amin, Gaurav and Harry Kat (2002) ‘Stocks, Bonds and Hedge Funds: Not a Free
Lunch!’ Cass Business School Research Paper # 009.
Arditti, Fred D. and Haim Levy (1975) ‘Portfolio Efficiency Analysis in Three Moments:
The Multiperiod Case’, The Journal of Finance, 30(3), 797–809.
Athayde, G.M. and Flôres, R.G. (2004) 'Finding a Maximum Skewness Portfolio –
A General Solution to Three-moments Portfolio Choice', Journal of Economic
Dynamics & Control, 28, 1335–52.
BIS (2003) ‘Trends in Risk Integration and Aggregation’, Basel Committee on Banking
Supervision.
Black, Fisher and Robert Litterman (1992) ‘Global Portfolio Optimization’, Financial
Analysts Journal, 48, 5, 28–43.
Duffie, Darrell and Kenneth Singleton (2003) Credit Risk, Princeton University Press.
Fernandes, José Luiz, José Renato Ornelas and Marcelo Takami (2008) ‘Integrating
Market and Credit Risk in Stochastic Portfolio Optimization’, Icfai Journal of
Financial Risk Management, 5(1), 7–28.
Goetzmann, William, Jonathan Ingersoll, Matthew Spiegel and Ivo Welch (2007)
‘Portfolio Performance Manipulation and Manipulation-Proof Performance
Measures’, The Review of Financial Studies, 20(5), 1503–46.
Harvey, Campbell R., John Liechty, Merrill W. Liechty and Peter Mueller (2004)
‘Portfolio Selection with Higher Moments’, Available at SSRN: http://ssrn.com/
abstract=634141.
Hou, Yuanfeng (2003) ‘Integrating Market Risk and Credit Risk: A Dynamic Asset
Allocation Perspective’, Yale University, Working Paper.
Jobst, Norbert J., Gautam Mitra and Zenios Stavros (2006) ‘Integrating Market and
Credit Risk: A Simulation and Optimisation Perspective’, Journal of Banking and
Finance, 30, 717–42.
Kaplan, P.D. and Knowles, J.A. (2004) ‘Kappa: A Generalized Down-Side Risk-adjusted
Performance Measure’, Journal of Performance Measurement, 8(3), 42–54.
Keating C. and Shadwick, W. (2002) ‘A Universal Performance Measure’, Journal of
Performance Measurement 6, 59–84.
Koekebakker, Steen and Valeri Zakamouline (2008) ‘Portfolio Performance Evaluation
with Generalized Sharpe Ratios: Beyond the Mean and Variance’, Working Paper,
Proceedings of the 2008 Financial Management Association International.
Kuritzkes, Andrew, Til Shuermann and Scott Weiner (2003) ‘Risk Measurement, Risk
Management and Capital Adequacy in Financial Conglomerates’, In Herring, R.
and R. Litan (eds), Brookings-Wharton Papers in Financial Services, 141–94.
Malkiel, B. G. and Saha, A. (2005) ‘Hedge Funds: Risk and Return’, Financial Analysts
Journal, 61(6), 80–8.
Markowitz, Harry (1952) ‘Portfolio Selection’, The Journal of Finance, 7, 77–91.
Markowitz, Harry and Nilufer Usmen (2003) ‘Resampled Frontiers versus Diffuse
Bayes: An Experiment’, Journal of Investment Management, 1(4), 9–25.
Michaud, Richard (1998) Efficient Asset Management, Boston, MA: Harvard Business
School Press.
Popova, Ivilina, David P. Morton, Elmira Popova and Jot Yau (2007) 'Optimizing
Benchmark-Based Portfolios with Hedge Funds’, Journal of Alternative Investments,
Summer Volume 10(1), 35–55.
Ziemba, W. (2005) ‘The Symmetric Downside-Risk Sharpe Ratio’, Journal of Portfolio
Management, 32(1), 108–22.

7
Efficient Portfolio Optimization in
the Wealth Creation and Maximum
Drawdown Space

Alejandro Reveiz and Carlos León

7.1 Introduction

It is widely known that the Markowitz formulation of the portfolio
optimization problem, based on maximizing expected return and minimizing
risk, is the main pillar of the theoretical foundations of portfolio
management. Nevertheless, its limited impact on investment management
practice is also widely recognized1, which has fostered new approaches to
the portfolio optimization problem.
When analyzing the portfolio optimization problem for Colombian pension
funds, the authors were confronted with the typical shortcomings of
the Markowitz framework and faced some others not commonly discussed
in the literature. This chapter presents an intuitive, convenient and
theoretically robust approach to reformulating the portfolio optimization
problem. The reformulation mainly consists of a change in the solution
space: for the risk metric, from standard deviation or variance to a market
practitioner's measure known as maximum drawdown (MDD); and for the return
metric, the use of cumulative returns or end-of-period wealth. It also
changes the optimization mechanics, using the actual time series instead of
the estimated moments.
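MDD is the largest peak-to-trough loss of a cumulative wealth series, and it can be computed in a few lines (a generic sketch, with made-up numbers in the example):

```python
import numpy as np

def max_drawdown(wealth):
    """Maximum drawdown of a cumulative wealth (or price index) series:
    the largest percentage fall from a running peak to a later trough."""
    w = np.asarray(wealth, dtype=float)
    running_peak = np.maximum.accumulate(w)
    return float(np.max(1.0 - w / running_peak))

# e.g. for the series 100, 120, 90, 110, 80 the worst fall is from the
# 120 peak down to 80, i.e. a drawdown of one third.
```

Unlike variance, MDD depends on the ordering of returns, which is why the optimization must work on the actual time series rather than on estimated moments.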
The chapter is organized as follows: the first section presents a brief review of the traditional framework for portfolio optimization. The second summarizes selected recognized problems of this framework, focusing on those related to the choice of risk measure. Next, based on some desirable properties, both theoretical and practical, the change of risk measure from dispersion to MDD is justified. Finally, an MDD-based case of portfolio optimization is presented and discussed.

7.2 The Markowitz framework for portfolio optimization

The main contribution of the Markowitz (1952) formulation consists of recognizing that the rational behaviour of investors is better represented by the rule of considering expected return as desirable and variance of return as undesirable, instead of the short-sighted hypothesis of merely maximizing discounted returns, which prevailed at that time. This is known as the mean–variance criterion (MVC), which states that when an investor faces two portfolios, A and B, he will prefer portfolio A to B when

E(r_A) ≥ E(r_B)

and

σ²(r_A) ≤ σ²(r_B)

where

E(r_i)    Expected return of asset i

σ²(r_i)   Variance of asset i

When formalizing the portfolio's return–risk framework, Markowitz identified the benefit of diversification as the milestone of modern portfolio theory. Although diversification is an old and widely used concept, its first mathematical formalization was provided by Markowitz, who was also the first to show numerically how diversification can reduce risk for a given level of expected return2. For the case of N assets, a portfolio's expected return and variance are calculated as follows:

E(r_p) = WᵀE(r)    (1)

where

E( rp ) Portfolio’s expected return

 w1 
w 
W= 
2
Column vector of assets’ weights
 # 
 
wN 

 E( r1 ) 
 E( r ) 
E( r ) =  2  Column vector of assets’ expected returns
 # 
 
 E( rN )


and

σ²(r_p) = WᵀΩW = Σ_i Σ_j w_i w_j Cov(i, j)    (2)

where

␴ 2 ( rp ) Portfolio’s variance

 Var ( j, j ) Cov( j, k ) " Cov( j, N ) 


 Cov( k , j ) Var ( k , k ) # 
⍀=  Covariance matrix
 # % # 
 
Cov( N , j ) " " Var ( N , N )
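As a quick numerical illustration of equations (1) and (2), the sketch below evaluates E(r_p) and σ²(r_p) in NumPy. The three-asset weights, expected returns and covariance matrix are hypothetical numbers chosen for illustration, not taken from the chapter's data.

```python
import numpy as np

# Hypothetical three-asset example (illustrative numbers only)
w = np.array([0.5, 0.3, 0.2])          # W: column vector of weights
mu = np.array([0.08, 0.05, 0.12])      # E(r): expected returns
omega = np.array([[0.04, 0.01, 0.00],  # Ω: covariance matrix
                  [0.01, 0.02, 0.00],
                  [0.00, 0.00, 0.09]])

port_ret = w @ mu          # equation (1): E(r_p) = WᵀE(r)
port_var = w @ omega @ w   # equation (2): σ²(r_p) = WᵀΩW
# port_ret ≈ 0.079, port_var ≈ 0.0184
```

The same two lines generalize to any number of assets, since both formulas are pure matrix products.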

Using the expected return–standard deviation space and introducing the quadratic approximation to portfolio risk presented above, Markowitz was able to demonstrate the existence of a set of efficient combinations of expected return and risk which is commonly known as the efficient frontier (EF). Generally, for each point on the EF, the optimization procedure is carried out with the following quadratic program3:
min σ²_p = Σ_{i=1..N} Σ_{j=1..N} w_i w_j σ_ij = WᵀΩW    (3)

s.t.

Σ_{i=1..N} w_i = 1

Σ_{i=1..N} w_i E(r_i) = E(r_p)

The EF is a plot resulting from the optimization process above, in which for each level of expected return the minimum expected risk is attained, and all portfolios below the minimum-variance portfolio are discarded – see Figure 7.1.
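With equality constraints only (that is, allowing short sales), each point of program (3) reduces to a linear KKT system and can be obtained with a single linear solve. The sketch below is an illustrative implementation under that simplification; the function name and inputs are ours, not the chapter's.

```python
import numpy as np

def min_variance_weights(mu, omega, target):
    """One point of program (3): min WᵀΩW  s.t.  Σw_i = 1 and Σw_i E(r_i) = target.

    With equality constraints only, the KKT conditions are linear:
    stationarity (2Ωw + λ1·1 + λ2·μ = 0) stacked with the two constraints.
    """
    n = len(mu)
    ones = np.ones(n)
    kkt = np.block([
        [2.0 * omega, ones[:, None], mu[:, None]],
        [ones[None, :], np.zeros((1, 2))],
        [mu[None, :], np.zeros((1, 2))],
    ])
    rhs = np.concatenate([np.zeros(n), [1.0, target]])
    # First n entries of the solution are the weights; the rest are multipliers
    return np.linalg.solve(kkt, rhs)[:n]
```

Sweeping `target` over a range of expected returns and recording the resulting portfolio standard deviations traces an EF like that of Figure 7.1; a long-only frontier would require an inequality-constrained quadratic solver instead.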
Many other contributions to portfolio theory have been made since
Markowitz’s seminal work, but most still rely on its foundations. Surprisingly,
despite the fact that this theoretical framework is extensively known and has
survived the test of time, it is not widely applied to practical asset allocation.


Figure 7.1 Markowitz's Efficient Frontier (expected daily return, ×10^−4, against risk as the daily standard deviation)
Source: Authors' calculations

Some of the reasons why the optimization problem within the Markowitz framework is not practical are: the portfolio weights are sensitive to input changes; the portfolio weights are extreme and non-intuitive4; and historical estimation of risk and correlation has proven to fail constantly in practice5, among others.

7.3 Risk measurement and the portfolio problem

Dealing with some of the recognized shortcomings of modern portfolio theory for portfolio management of pension funds6, the authors have faced some others related to i) risk measurement and ii) the way investors would optimize their portfolios, taking into account the risk measure.
The first shortcoming relates to the choice of a risk measure. In contrast to risk, return is a rather clear concept and its calculation for an asset or a portfolio is straightforward. The ordinary statistical measure of risk is volatility, a metric of dispersion which measures the size of a typical observation's departure from its expected value. Litterman (2003) recognizes two main sources of weakness for volatility as the metric for risk: i) only in special cases, such as normally or Gaussian distributed returns, can volatility alone provide enough information to measure the likelihood of most events of interest, namely extreme events, and ii) volatility is a measure of risk that does not differentiate upside risk from downside risk, a rather important issue when considering non-symmetric distributions.
Going beyond Litterman's arguments, Taleb (2004, 2007) discusses the origins of the usage of Gaussian distributions and volatility in finance. He concludes that people in finance simply borrowed a technique from disciplines which have no trouble eliminating extreme values from their samples, such as education and medicine. Likewise, Rebonato (2007) recognizes the existence of overconfident extrapolation in the current approach to statistics applied to finance and highlights the weaknesses of the traditional 'frequentist' approach to probability. Taleb (2007) argues that because concepts such as standard deviation and correlation do not exist outside the Gaussian world, and because in a Gaussian world the odds of a deviation decline exponentially as it departs from the mean, relying on Gaussian distributions when dealing with aggregates in which magnitudes do matter, such as portfolio management, ignores unpredictable large deviations which, cumulatively, have a dramatic impact on wealth.
This risk metric issue is not new and was first mentioned by Markowitz (1952). Some authors7 have tried to address the weakness of volatility as a measure of risk, and found that metrics such as value-at-risk and expected shortfall generate EFs which are subsets of the mean–variance frontier if and only if the normality, or at least ellipticality, assumption holds. Nevertheless, the allocations obtained by these authors are similar to those of the mean–variance frontier, and the assumptions of normality of returns and equal treatment of upside and downside risk usually remain.
When analyzing the impact of implementing a 'multi-fund' scheme for Colombian pension funds' portfolio management, the authors were confronted with the way final investors, namely the future pensioners or the 'ordinary person', behave and decide under uncertainty. At first it is reasonable to assume that any individual has monotonic preferences and will be non-satiated in consumption (i.e. will prefer more rather than less of a good); according to this, in what is known as dominance, an investor will always prefer the investment that pays at least as much in all states of nature, and strictly more in at least one state8.
But if an investor faces alternative investment opportunities not covered by the dominance concept, namely if there is no investment that pays as much in all states of nature and strictly more in at least one, the decision becomes less clear and the mean–variance dominance concept emerges. According to mean–variance dominance, an individual will characterize the investment opportunities by their first two moments (mean and variance) and decide accordingly.
Even though the MVC and mean–variance dominance are milestones of modern portfolio theory, the choice of variance or standard deviation as a metric for risk is far from being practical and meaningful for what we can call an 'ordinary person'. Variance may not be the best measure of risk, not only because traditional calculations of variance rely on normality of returns and give equal treatment to upward and downward risk, but also because it is subject to estimation errors and escapes the knowledge and understanding of the common individual. Moreover, for pension investments, an individual is faced with the fact that investing is a one-time experiment during his lifecycle.

Despite not receiving much attention, Roy (1952) developed, in parallel to Markowitz, an alternative mathematical foundation for the optimization problem. Roy, concerned with the 'ordinary person's' behaviour under uncertainty, deviated from the Markowitz framework when defining the appropriate metric for risk; he instead developed the safety first concept. The safety first concept tries to handle two main observations made by Roy (1952):

i. The ordinary person has to consider the possible outcomes of a given course of
action on one occasion only: the average (or expected) outcome, if this conduct
were repeated a large number of times under similar conditions, is irrelevant.
ii. Is it reasonable that real people have, or consider themselves to have, a precise
knowledge of all possible outcomes of a given line of action, together with their
respective probabilities or potential for surprise9?

Roy, prompted by these observations, recognized that a) disasters do exist and are investors' most important source of concern, and b) investors generally suffer from limited knowledge. Consequently he developed the principle of safety first, which asserts that it is reasonable, and probable in practice, that an individual, given his lack of knowledge, will simply seek to reduce the chance of a disaster happening.
When facing the portfolio optimization problem, Roy concluded that the best structure for assets is the one which keeps the chance of disaster at the end of a given period of time as small as possible, thus providing an alternative source of support for the portfolio diversification principle. Despite it being somewhat obvious that disasters should be avoided, recent market developments show that prevailing risk models have performed poorly, mainly because they are not really designed to perform under stress.
As Greenspan (2008) points out, state-of-the-art statistical models perform poorly because the underlying data is generally drawn from both euphoria and fear periods, which show very different dynamics, namely in length and magnitude. Because contraction phases are far shorter10 and far more abrupt, the prevailing risk models' correlation benefits – based on average comovements, evident during euphoria or calm periods – collapse as all prices fall together, rendering the models ineffective. This argument is shared by Bhansali (2005) and Zimmermann et al. (2003).


Consequently, Bhansali (2005) and Zimmermann et al. (2003) emphasize that since most Value at Risk (VaR) and shortfall models are based on historically estimated covariance matrices, they are notorious for failing when most needed. Both agree that the difference in volatility and correlations between up and down market environments implies that the risk reduction potential of diversification is limited in down markets, thus making such models exhibit a downward bias and leaving them unable to foresee stress-type events. Furthermore, Bhansali (2005) highlights that most models lean very heavily on the notion of stability by assuming stable distributions for the dynamics of prices, which clearly ignores the impact of structural breaks and market discontinuities. Taleb (2007), as already mentioned, blames reliance on Gaussian distributions for the poor performance of finance when dealing with unpredictable large deviations, sharp jumps or discontinuities.
Thus, it can be pointed out that the ordinary person's lack of knowledge is, perhaps, not exclusive to uninformed individuals but shared even by experienced market participants. It is therefore desirable to include in portfolio construction a risk measure that better captures disasters and accounts for our lack of knowledge, the existence of novelty, and the animal spirits and irrational behaviour that govern financial markets.

7.4 Disasters and maximum drawdown as a measure of risk

In several areas of finance, practitioners tend to twist the commonly accepted theoretical model to make it useful when facing the realities of market practice11, or simply develop measures that pay attention to what they care about most. This is the case for risk, where many industry measures are used in spite of somewhat weak theoretical foundations. In line with Roy's (1952) worries about the convenience of minimizing the occurrence of disasters, some safety first-type measures are being used by money management professionals. One of these measures is the maximum drawdown (MDD).
Defined as the maximum sustained percentage decline (peak to trough) which has occurred in an investment (individual asset or portfolio) within a period, the MDD provides an intuitive and easy to understand measure of the loss arising from potential extreme events. The MDD calculation is not available in a closed-form formula and should be carried out recursively12. When calculating the MDD for period [0,T], let V_T be the end dollar value of the series and V_max the maximum dollar value of the series over the [0,T−1] period, given the prior calculation of the MDD for [0,T−1]:

MDD[0,T] = min( (V_T − V_max) / V_max , MDD[0,T−1] )    (4)
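The recursion in equation (4) compares each date's drawdown relative to the running maximum against the worst drawdown recorded so far. A minimal NumPy sketch (the function name is ours) vectorizes it with a running maximum:

```python
import numpy as np

def max_drawdown(prices):
    """Maximum peak-to-trough percentage decline of a price (or NAV) series.

    Equivalent to recursion (4): at each date the drawdown relative to the
    running maximum, (V_t - V_max) / V_max, is compared with the worst
    drawdown seen so far; vectorized here via a cumulative maximum.
    """
    prices = np.asarray(prices, dtype=float)
    running_max = np.maximum.accumulate(prices)
    drawdowns = (prices - running_max) / running_max
    return drawdowns.min()  # most negative value; 0 for a strictly rising series
```

Applied to the S&P 500 path of Figure 7.2, this would return approximately −0.1009, the −10.09% MDD quoted below.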


Figure 7.2 below presents the MDD concept. Using data from the S&P 500 Index for the 365 days of 2007, the MDD risk measure is calculated. For the sake of comparison the two sharpest declines are presented. The sharpest decline (shown by the dotted line) corresponds to the drop from observations 282 to 330, a 10.09% drop, which is the MDD for the S&P 500 Index during 2007.
Following Roy (1952), the MDD measure provides valuable information because, for an investor primarily concerned about the outcome of his decision on an occasion when a disaster may seriously erode his wealth, the average outcome is simply irrelevant. Any rational individual, when confronted with two assets with the same return, will prefer the one with the lowest MDD, as is the case with variance in the MVC.
When facing the pension fund problem in Colombia, Reveiz and León (2008) had to deal with very long term investment decisions, with investors who were quite sensitive to long-run wealth destruction due to sharp mark-to-market driven losses13. Similarly, Magdon-Ismail and Atiya (2004) recognize that most trading desks are interested in long-term performance, that is, systems that can survive over the long run, with superior return and small drawdowns; they state that a reasonably low MDD is in fact critical to the success of any fund.
Like other financial market practitioners, the authors find that the MDD provides a useful yet intuitive and sound market risk metric, which deals with risk in ways variance cannot. Some major advantages are: i) the MDD only comprises downward risk, which is a desirable property when considering the issue of the period (euphoria or fear) from which the model inputs are drawn; ii) because it corresponds to a proxy of the magnitude and length of disaster, the MDD gives a better picture of what market discontinuities and irrational behaviour may look like; iii) since it relies directly on historical returns, the MDD conveniently avoids normality or any other distributional assumption, as well as estimation errors; and iv) optimization with drawdown is straightforward.

Figure 7.2 S&P 500 MDD (the index over the 365 observations of 2007, with its two sharpest peak-to-trough declines marked: −9.43%, from a peak of 1553.08 to a trough of 1406.70, and −10.09%, from a peak of 1565.15 to a trough of 1407.22, the latter being the MDD)
Source: Authors' calculations
Despite these serious advantages, the MDD is not a widely discussed topic in financial theory. In an effort to contribute to its theoretical foundation, following Artzner et al. (1998), who developed a formal theory of financial risk, we now evaluate whether the MDD may be generally regarded as a convenient and sound risk measure. In an attempt to conceptualize what a risk metric should be, Artzner et al. (1998) postulate a set of axioms which ensure that a risk measure is, as they call it, coherent, that is, a risk measure which can be used to effectively regulate or manage risks.
Following Dowd (2005) and Cheng et al. (2004), let X and Y represent the
changes in the values of an investment, and let ρ(.) be a measure of risk,
which represents the minimum extra cash that has to be added to the risky
position in order to make it acceptable. The measure of risk ρ(.) is coherent
if it satisfies the following properties:

i. Monotonicity: Y ≥ X ⇒ ρ(Y) ≤ ρ(X). This means that if two random variables X and Y, representing dollar changes in the values of investments, are such that Y ≥ X, then their risk measures have to satisfy ρ(Y) ≤ ρ(X). In other words, because the dollar change in value of investment Y is always higher than that of X, the latter should be compensated so that it is acceptable to hold it; it can then be identified as more risky14.
ii. Subadditivity: ρ(X + Y) ≤ ρ(X) + ρ(Y). This means that the measure of risk of a portfolio composed of X and Y should always be equal to or lower than the sum of the risks of X and Y alone. This property, the single most important and desirable of them all, reflects that any reasonable risk measure should aggregate individual risks in such a way that there is some reduction, or at least no increment, when compared to the simple sum of individual risks; otherwise, firms or investors would be tempted to break up their accounts or investments in order to reduce risk.
iii. Positive homogeneity: ρ(hX) = hρ(X) for any h > 0. This means that the risk of a position is proportional to its scale or size, which makes sense when the positions are liquid; if the positions or instruments are not liquid enough there may be a case for ρ(hX) ≥ hρ(X) for h > 0, simply because selling a large position may face liquidity risk.
iv. Translation invariance: ρ(X + n) = ρ(X) − n. This means that the addition of a sure amount n reduces the cash needed to make the position acceptable.


It is of great importance because if the sure amount n is equal to ρ(X), then ρ(X + ρ(X)) = ρ(X) − ρ(X) = 0, which is a neutral position.

The following items provide the intuition behind the performance of the
MDD risk measure for each property:

i. Concerning monotonicity, the MDD offers a measure which ensures that the investment with the lowest performance should be compensated in order to make it acceptable to hold. If the MDD is calculated for two random variables X and Y, representing changes in the dollar value of an investment, and the dollar value of the random variable Y is always higher than that of X, then X happens to be riskier. It is worth mentioning that a special case for this property can be found: because the MDD is zero when applied to a strictly increasing price time-series, when the dollar value changes of X and Y are strictly positive then ρ(Y) = ρ(X) = 0.
ii. Subadditivity is guaranteed given that the computation of a portfolio's MDD results from a linear combination of the returns of the individual assets. Moreover, given that not all individual assets' extreme results happen at the same time, there may be a diversification gain (reduction in aggregated risk) because individual disasters tend to be averaged out; the portfolio's MDD diversification effect does not rely on the estimation of a correlation matrix, so it is free of estimation errors. Even if there is a major catastrophe in which strictly all assets exhibit major adverse movements at strictly the same time, the MDD will equal the weighted average of the individual MDDs, without any diversification gain. Thus, the property ρ(X + Y) ≤ ρ(X) + ρ(Y) will always hold.
iii. Regarding positive homogeneity, because the MDD is defined as the maximum sustained percentage decline (peak to trough) which has occurred in an investment within a period, scaling the dollar value of the random variable leaves it unchanged: the MDD will be the same. For the MDD to comply with positive homogeneity it suffices to adjust the way the measure is presented: if we convert the MDD, which is by construction a percentage, into an absolute monetary value by simply multiplying it by the size of the nominal position, this property will always hold. This is equivalent to creating a dollar-MDD measure15.
iv. Finally, translation invariance is handled with a similar presentation adjustment of the MDD measure. If the MDD is converted into an absolute monetary value by multiplying it by the size of the nominal position, which is equivalent to creating a dollar-MDD measure, this property will hold. These two conversions, for properties iii) and iv), are by no means a twist of the properties: they just take into account the fact that the risk measure ρ(.) represents the minimum extra cash that has to be added to the risky position in order to make it acceptable, not a percentage value as is the case with the MDD.
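The subadditivity intuition in item ii can be checked empirically. The toy example below uses hypothetical return series constructed so the two crashes do not coincide, and verifies that the portfolio's realized MDD is no greater than the weighted average of the individual MDDs:

```python
import numpy as np

def max_drawdown(prices):
    prices = np.asarray(prices, dtype=float)
    run_max = np.maximum.accumulate(prices)
    return abs(((prices - run_max) / run_max).min())

def wealth_path(returns):
    # Wealth path starting at 1, compounding the period returns
    return np.concatenate(([1.0], np.cumprod(1.0 + returns)))

ret_x = np.array([-0.50, 1.00])   # asset X: crash, then full recovery
ret_y = np.array([0.00, -0.50])   # asset Y: crash one period later
w = 0.5                           # equally weighted, rebalanced portfolio

mdd_port = max_drawdown(wealth_path(w * ret_x + (1 - w) * ret_y))
mdd_avg = (w * max_drawdown(wealth_path(ret_x))
           + (1 - w) * max_drawdown(wealth_path(ret_y)))
print(mdd_port, mdd_avg)  # 0.25 0.5: the individual disasters average out
```

Because the two 50% crashes happen at different dates, the portfolio's own worst decline (25%) is half the weighted average of the individual MDDs, which is exactly the diversification gain described above.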


According to the intuition presented, and based on empirical tests when deemed necessary, the authors conclude that besides being an intuitive and sound risk metric used by market practitioners, the MDD can be regarded as a coherent risk measure in the Artzner et al. (1998) sense, and thus useful to effectively regulate or manage risks.

7.5 The portfolio optimization problem under the MDD risk measure

Given the practical advantages and the coherence of the MDD as a measure of risk, it is tempting to use the EF framework, replacing the dispersion measure (variance or standard deviation) with the MDD.
In Markowitz’s MVC the EF results from an optimization which attains
the lowest possible portfolio dispersion for a given portfolio expected
return. The portfolio optimization proposed in this chapter differs from the
MVC’s not only in the risk measure: looking again to market practitioners’
alternative measures, the approximation to expected return via the average
of past returns is replaced by the total past effective return, which is simply
the wealth created by a portfolio over the period studied.
Using the total return as measure of expected return and the MDD as a
measure of risk, we find the Calmar Ratio (CR), which is a measure used by
some portfolio managers despite the fact that no explicit theoretical support
exists16:

CR(i,t) = TR(i,t) / MDD(i,t)    (5)

where:

TR(i,t) Total return of asset i over the period t


MDD(i,t) MDD of asset i over the period t

CR is a risk-adjusted performance measure which presents the trade-off between wealth creation and risk, with the latter considered as a measure of disaster.
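Equation (5) can be computed directly from a price path. The sketch below (function name and price path hypothetical) combines the total-return and MDD definitions used in this chapter:

```python
import numpy as np

def calmar_ratio(prices):
    """Equation (5): total (cumulative) return over the period divided by
    the MDD of the same price path."""
    prices = np.asarray(prices, dtype=float)
    total_return = prices[-1] / prices[0] - 1.0   # wealth created over the period
    run_max = np.maximum.accumulate(prices)
    mdd = abs(((prices - run_max) / run_max).min())
    return total_return / mdd

# Hypothetical path: +35% total return with a single 25% drawdown
cr = calmar_ratio([100, 120, 90, 135])  # ≈ 1.4 (0.35 / 0.25)
```

A higher CR means more wealth created per unit of disaster risk borne, which is the ranking criterion used for the frontiers compared in Figure 7.6.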
Then, when tackling the portfolio problem in the proposed modified
return-risk framework, the EF results from an optimization which attains
the lowest possible MDD for a given portfolio wealth creation level, once all
the portfolios below the minimum MDD portfolio are discarded. For each
point on the frontier, the optimization procedure would be carried out as
follows:

min MDD_p = MDD(A · W)    (6)


s.t.

Σ_{i=1..N} w_i = 1

Σ_{i=1..N} w_i TR_i = TRᵀW = TR_p

w_i ≥ 0

where

TR_i    Total return of asset i

TR_p    Portfolio's total return

TR = [TR_1, TR_2, …, TR_N]ᵀ    Column vector of assets' total returns

A       Matrix (time series) of asset prices, with K observations (rows) and N assets (columns)

W = [w_1, w_2, …, w_N]ᵀ    Column vector of assets' weights

Because the subadditivity property holds and there exists a diversification gain when combining assets, we could expect to find an EF to some extent similar in shape to Markowitz's mean–variance one, with four great advantages: i) no correlation matrix has to be estimated – the benefit of diversification corresponds to the realized risk reduction due to the combination of assets; ii) the resulting diversification benefit corresponds to the risk reduction when it matters most: in the middle of disaster; iii) the risk metric used is free of the dispersion-type risk measures' shortcomings; and iv) no distribution is assumed.


Using a set of daily prices for 18 assets or risk factors from February 1990 to December 2007, comprising commodities, equity indexes, sovereign fixed income indexes including major industrialized economies and emerging markets, US mortgages and corporate bonds, we construct the complete spectrum of wealth creation–MDD combinations attainable with different portfolios of assets or risk factors. For instructive purposes, in Figure 7.3, we begin by combining two risk factors, where each mark corresponds to one of the 30 portfolios constructed for each frontier, diversified and undiversified.

The line of red crosses corresponds to the lowest possible MDD for a given portfolio wealth creation level, with the portfolio's MDD calculated as shown in equation (6), and is thus the diversified frontier. The line of blue dots corresponds to the lowest possible MDD for a given portfolio wealth creation level, with the portfolio's MDD simply calculated as the weighted average of each asset's MDD; this corresponds to a major catastrophe in which strictly all assets exhibit major adverse movements at strictly the same time, and is thus the undiversified frontier.
The interpretation of a specific portfolio such as B could be as follows: a certain combination of gold and MSCI provides an investment opportunity which in the analyzed period attains a total return or wealth creation of 95.1% and is exposed to a 28.3% MDD. Portfolio A corresponds to the minimum-MDD diversified portfolio, equivalent to Markowitz's minimum-variance portfolio; the portfolios with a wealth creation equal to or above that of A constitute the EF. The horizontal difference between the diversified and undiversified frontiers for each level of wealth represents the benefit of diversification attained by the combination of assets within the portfolio. Because the diversified frontier dominates the undiversified one, the subadditivity property is once again verified.

Figure 7.3 Wealth creation-MDD's frontier (two assets: gold and MSCI; diversified and undiversified frontiers, with the minimum-MDD diversified portfolio A and the diversification benefit between the frontiers)
Source: Authors' calculations
Next we include one more asset, the MSCI-Emerging Markets (MSCI-EM) index, and analyze the results in Figure 7.4.
As with traditional Markowitz-type portfolio optimization, the addition of an extra asset expands the range of return and risk combinations. The three-asset diversified frontier, represented by the black asterisks, dominates the two-asset frontiers (red crosses). We now introduce two more frontiers, with five and 18 assets, but drop the undiversified frontiers for graphic ease (Figure 7.5). The risk-adjusted performance measure which presents the trade-off between wealth creation and the MDD (the CR) for each of the calculated frontiers is plotted in Figure 7.6.
An investor seeking to maximize the wealth creation per unit of risk
(MDD) would choose the highest CR. Figure 7.6 shows that the 18-asset
frontier achieves the highest CR levels, followed by the five-, three- and
two-asset frontiers respectively. These results show that the higher the
number of assets or risk factors, the higher the CR attainable due to the

[Figure: wealth creation (y-axis) vs. MDD (x-axis) for the diversified and undiversified two- and three-asset frontiers]
Figure 7.4 Wealth creation-MDD’s Frontier (three assets: gold, MSCI and MSCI-EM)
Source: Authors’ calculations

148 Alejandro Reveiz and Carlos León

[Figure: wealth creation (y-axis) vs. MDD (x-axis) for the two-, three-, five- and 18-asset frontiers]
Figure 7.5 Wealth creation-MDD’s Frontier (two, three, five and 18 assets17)
Source: Authors’ calculations

[Figure: Calmar ratio (y-axis) vs. portfolio # (x-axis) for the two-, three-, five- and 18-asset frontiers]
Figure 7.6 Calmar Ratio (two, three, five and 18 assets)
Source: Authors’ calculations


[Figure: portfolio share (y-axis) vs. portfolio # (x-axis), from the minimum-MDD portfolio to the maximum-total-return portfolio; asset classes shown: emerging markets equity, US mortgages, sovereign fixed income, equity, US AAA corporates, commodities]
Figure 7.7 EF composition (18 assets, by asset class)
Source: Authors’ calculations

diversification effect, which comes from disasters averaging out as more risk
factors are added to the portfolio.
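Both quantities on the axes of these frontiers are straightforward to evaluate for a candidate portfolio. The sketch below (Python; the data are made up and the function names are ours) computes the maximum drawdown of a cumulative wealth path, using the standard peak-to-trough definition (the chapter's equation (6) may differ in detail), and the resulting Calmar-style ratio of wealth creation per unit of MDD:

```python
import numpy as np

def max_drawdown(wealth):
    """Largest peak-to-trough relative decline of a cumulative wealth path."""
    running_peak = np.maximum.accumulate(wealth)
    return ((running_peak - wealth) / running_peak).max()

def wealth_path(returns, weights):
    """Cumulative wealth of a constant-mix (rebalanced) portfolio."""
    portfolio_returns = np.asarray(returns) @ np.asarray(weights)
    return np.cumprod(1.0 + portfolio_returns)

# Two hypothetical monthly return series (say, gold and an equity index)
rng = np.random.default_rng(0)
returns = rng.normal(0.005, 0.04, size=(120, 2))

w = wealth_path(returns, [0.5, 0.5])
mdd = max_drawdown(w)
calmar = (w[-1] - 1.0) / mdd   # wealth creation per unit of MDD
```

Adding a third series and recomputing `mdd` on the combined path would exhibit the subadditivity discussed above: drawdowns of the mix can only be as bad as, and are usually milder than, the weighted drawdowns of the parts.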
Finally, after discarding the non-efficient portfolios, the 18-asset efficient
frontier is composed of 16 portfolios. Figure 7.7 presents the portfolio break-
down by asset class, where the first portfolio is the minimum MDD port-
folio, and moving along the x-axis provides the composition of more risky
portfolios.
As expected, fixed income (sovereigns, mortgages and corporates) exhibits
a clear majority at low risk levels, with a small portion of commodities.
When moving along the x-axis, where riskier portfolios are found, fixed
income instruments and commodities are progressively replaced by equity;
nevertheless, the share of sovereign fixed income instruments is signifi-
cant even in high risk portfolios. The absence of emerging markets equity
is noteworthy.

7.6 Application of wealth creation-MDD optimization to forward-looking optimization using Genetic Algorithms (GA)

We have shown that optimization techniques can be successfully applied in
the wealth creation-MDD space. In this section we propose an application of


the techniques presented in this document to obtain a set of MDD-adjusted
expected returns (MDDAER) that would reflect an ‘environment-robust’
portfolio that accounts for wealth maximization and MDD using a genetic
algorithm for optimization purposes.
Thus, we calibrate18 a genetic algorithm (GA) with a fitness19 function
in the spirit of Loraschi et al. (1995), defined as

TR(i,t) · (1 − λ) − λ · MDD(i,t)    (7)

A value of λ equal to 1 reflects the highest risk aversion, whereas λ equal
to 0 emphasizes wealth maximization. For the example below, the sampling
period t − i is set to 36 months and λ is set to 0.5; that is, we weight both
objectives equally.
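Equation (7) can be evaluated directly on a window of returns, and a stylized GA can search portfolio weights against it. The sketch below is a toy illustration of that idea only, not the authors' calibrated implementation (they use a binary encoding and the Houck et al. (1995) toolbox); the data, population size and operator choices are all hypothetical:

```python
import numpy as np

def fitness(weights, returns, lam=0.5):
    """Equation (7): total return traded off against the maximum drawdown."""
    wealth = np.cumprod(1.0 + returns @ weights)
    peak = np.maximum.accumulate(wealth)
    mdd = ((peak - wealth) / peak).max()
    return (wealth[-1] - 1.0) * (1.0 - lam) - lam * mdd

def evolve(returns, pop_size=50, generations=100, seed=1):
    """Toy GA over long-only weight vectors: tournament selection,
    blend crossover and Gaussian mutation, renormalized to the simplex."""
    rng = np.random.default_rng(seed)
    pop = rng.dirichlet(np.ones(returns.shape[1]), size=pop_size)
    for _ in range(generations):
        fit = np.array([fitness(w, returns) for w in pop])
        # tournament selection of parents
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
        # blend crossover with a shuffled mate, then mutate and renormalize
        alpha = rng.random((pop_size, 1))
        children = alpha * parents + (1 - alpha) * parents[rng.permutation(pop_size)]
        children = np.clip(children + rng.normal(0.0, 0.02, children.shape), 1e-9, None)
        pop = children / children.sum(axis=1, keepdims=True)
    fit = np.array([fitness(w, returns) for w in pop])
    return pop[fit.argmax()]

# 36 months of hypothetical returns for three assets, as in the text
rng = np.random.default_rng(0)
returns = rng.normal([0.004, 0.006, 0.002], [0.02, 0.05, 0.01], size=(36, 3))
best = evolve(returns)
```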
The resulting portfolio allocations are used to determine the implicit
expected returns through a reverse optimization process (Sharpe 1974 and
Fisher 1975), as follows:

Π = δΣw    (8)

where

Π  MDDAER
Σ  covariance matrix
δ  market price of risk
w  portfolio weights

These MDDAER can then be used as an input in a mean–variance
optimization.
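The reverse optimization in equation (8) is a single matrix product; a minimal sketch with purely illustrative numbers (all values hypothetical):

```python
import numpy as np

# Hypothetical inputs: covariance matrix, market price of risk and the
# environment-robust portfolio weights returned by the GA
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.01]])
delta = 2.5
w = np.array([0.3, 0.2, 0.5])

pi = delta * sigma @ w    # equation (8): the MDD-adjusted expected returns
```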
Figures 7.8 and 7.9 present the asset allocation and efficient frontier for
the optimal portfolios computed with the historical expected returns20 and
the MDDAER.
The results in Figures 7.8 and 7.9 show that:

● For low risk portfolios, asset allocations are quite similar as low risk assets
are well described by a normal distribution and non-linearity is not sig-
nificant.
● Fixed income is favored in the MDDAER as coupon payments reduce
the volatility of wealth in the long term. In the fixed income subset, US

[Figure: two panels of asset shares (y-axis) vs. portfolio # (x-axis), one computed with expected returns (ER) and one with MDD-adjusted expected returns (MDDAER); assets: WTI, Wheat, Cotton, Platinum, Gold, Nasdaq, MSCI, MSCI EM, UST 5–10, UST 10+, GER 5–10, GER 10+, UK 5–10, UK 10+, JAP 5–10, JAP 10+, Mortgage Mtr, Corp AAA]
Figure 7.8 EF composition with expected return and MDDAER
Source: Authors’ calculations

[Figure: two panels of return % (y-axis) vs. standard deviation % (x-axis) comparing the EF computed with ER and the EF computed with MDDAER, evaluated under expected returns and under MDD-adjusted expected returns]
Figure 7.9 EF with expected return and MDDAER
Source: Authors’ calculations


Treasuries and Bunds are overweight as the flight to quality to those assets
results in a lower MDD.
● High risk portfolios are also overweight in fixed income, probably from the
GA picking up on the fixed income and equities diversification through
the economic cycle.

Recall that this diversification is obtained without including asset
binding restrictions; in fact the GA optimization learns the proper schemata (or
sets of combinations of assets) that contribute to reduce the impact of the
MDD and maximize wealth under different environments. As the evolu-
tionary process exposes the portfolios to distinct ‘environments’ to build
the schemata that maximize wealth and minimize the MDD, the resulting
expected returns reflect asset combinations that are robust through differ-
ent environments, thus obtaining a set of robust expected returns in the
evolutionary sense.
In order to obtain forward looking expectations, the MDDAER can be
combined with expected returns using the Black-Litterman (1992) model,
by means of the following equation:

E(R) = [(τΩ)⁻¹ + P′Σ⁻¹P]⁻¹ [(τΩ)⁻¹Π + P′Σ⁻¹V]    (9)

where the expected return vector E(R) is obtained from a Bayesian mix-
ing of the MDDAER Π and the investor views of the expected returns V,
where the former are used as the attractor to stabilize the asset allocations.
It must be noted that in this case, instead of the market, the attractor is
a portfolio that behaves well on average under all environments to maxi-
mize wealth while minimizing the MDD. The matrix Ω is the covariance
matrix of excess returns, Σ is the diagonal of error terms of the views, τ is
the effective weight placed on the views, and matrix P selects the assets on
which the views are imposed.
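Equation (9) can be sketched in a few lines; the numbers below are purely illustrative and the function name is ours:

```python
import numpy as np

def black_litterman(pi, omega, P, S, V, tau=0.05):
    """Equation (9): Bayesian mix of the MDDAER pi with investor views V."""
    prior_precision = np.linalg.inv(tau * omega)
    view_term = P.T @ np.linalg.inv(S)
    return np.linalg.solve(prior_precision + view_term @ P,
                           prior_precision @ pi + view_term @ V)

# Hypothetical three-asset example with one absolute view on the second asset
omega = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.01]])   # covariance of excess returns
pi = np.array([0.035, 0.0775, 0.0225])   # MDD-adjusted expected returns
P = np.array([[0.0, 1.0, 0.0]])          # the view concerns asset 2 only
V = np.array([0.10])                     # the view: asset 2 returns 10%
S = np.array([[0.02]])                   # error variance (confidence) of the view

e_r = black_litterman(pi, omega, P, S, V)
```

The posterior expectation for the viewed asset lands between its MDDAER and the stated view, pulled toward whichever has the higher precision.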
Variations in which several GA optimizations are computed in parallel to
find environment-adjusted portfolios (Reveiz 2008) or different objective
functions are straightforward.

7.7 Final remarks

Because the shortcomings of the standard academic tools for the portfolio
problem are widely recognized by practitioners, and because contemporary
state-of-the-art statistical models have performed poorly when confronted
with critical events, practitioners lack practical models for approaching
risk in a meaningful and sound manner.
We have presented a method for approaching the portfolio problem by
modifying the solution space for the portfolio optimization using wealth


and the MDD as metrics. The benefits of using the MDD can be summarized
as follows:

i. The MDD risk metric deals with what risk models should be more con-
cerned about: extreme adverse events.
ii. The MDD risk metric, because it does not rely on the assumption of full
knowledge of the probability distribution of future outcomes, is more

convenient and sound than traditional dispersion metrics.
iii. When calculated as in (6), the portfolio’s MDD benefits from combin-
ing assets, but it does so without estimation errors and is focused on the
diversification that matters the most for an investor: diversification in
extreme adverse events.
iv. Despite being a practitioners’ risk measure, the authors’ analysis and
empirical tests have shown that the MDD has interesting properties which
meet formal theoretical coherence criteria.
v. Wealth creation and the MDD together provide a new solution space for
portfolio optimization.

Moreover, the contribution of this chapter is not only linked to a change
in the solution space but also to the use of a genetic algorithm to obtain
the MDDAER that are computed from an environment-robust portfolio.
The latter can be used directly or can be combined with forward looking
expectations using the Black-Litterman equation to obtain efficient portfo-
lios through a mean–variance optimization, resulting in intuitively sensible
asset allocations.
The authors recognize that some critical implementation issues exist. As with
Markowitz and the majority of approaches to the portfolio problem, the
quality and length of the time series and their periodicity are extremely
relevant for the optimization result. Because the purpose of a risk
metric such as the MDD is to address extreme market movements and dis-
continuities, the authors suggest using a considerable amount of informa-
tion, which allows the model to find each asset’s infrequent but critical
wealth-destructive rare event and its marginal risk diversification impact
when added to a portfolio; it is strongly recommended to use enough infor-
mation to cover at least a complete business cycle, but it is always advisable
to cover more than one21.
Another issue concerns the power of the model to anticipate extreme
events. Although this model – or any model – is incapable of anticipating the
origin, magnitude and length of the next extreme event to come, the authors
find that using the MDD provides the investor with a more realistic and
sound risk measure, especially when compared to traditional models based
on Gaussian distributions and the estimation of volatility or correlation.
The main drawback of our proposal relates to the computational resources
needed. The mean–variance and other distribution moment optimiza-
tion procedures only require finding the asset weights which achieve the


minimum risk for each level of return, where the portfolio’s risk results from
a closed-form formula such as standard deviation or variance – an easy task
for any standard optimizer. Our proposal, based on the calculation of the
MDD as in (6), does not rely on any moment estimation, and thus requires a
more complex and time-demanding optimization procedure22 that finds the
weights directly from each asset’s time series, rather than from the moments,
to minimize the MDD, a non-closed-form risk measure. Finally, genetic algorithm

optimization needs to be calibrated (selection and mutation probabilities,
etc.) to the specific features of each problem, such as sample size and frequency.
This work gives a preliminary insight into the authors’ work on optimization
under the wealth and MDD solution space. Further research is underway23,
in particular on determining appropriate out-of-sample methodologies to
compare ex-post returns under constant and rebalancing scenarios with
other asset allocation techniques, as well as backtesting adapted to the
problem at hand.

Notes
The opinions and statements are the sole responsibility of the authors. We are grate-
ful to Juan Mario Laserna, Marco Ruíz, Ivonne Martinez, Miguel Sarmiento and
Vadim Cebotari, who provided valuable comments and suggestions.

1. He and Litterman (1999), Pedersen and Rudholm-Alfvin (2003), Pézier (2007),
Chabra (2005) and Bhansali (2005), among others.
2. Rubinstein (2002).
3. Cuthberson and Nitzsche (2004). Short-sale restrictions are commonly
included.
4. He and Litterman (1999).
5. Bhansali and Wise (2001) and Greenspan (2008).
6. Previous works from the authors concerning pension funds and their portfolio
optimization problem for the Colombian capital market are Reveiz and León
(2008) and Reveiz et al. (2008).
7. De Giorgi (2002) and Hurlimann (2002). For a review of other authors and alter-
native measures of risk please refer to Pedersen et al. (2003).
8. Danthine and Donaldson (2002).
9. His view was closer to Popper’s (1990) propensities and the Complexity dis-
cipline (see Reveiz 2008). Similarly, Rebonato (2007) shares Roy’s concern and
blames modern finance for assuming that investors ‘just know’ the objective
probabilities of all future states of the world.
10. Based on the IMF’s International Financial Statistics (IFS) annual industrial pro-
duction index, the authors estimated that in 1948–2006 the 22 industrial countries
were in contraction (negative growth) between one-fourth and one-fifth of the
time; meanwhile the US alone was in contraction nearly one-sixth of the time.
11. Perhaps the most famous and studied case is practitioners’ adjustments to the
Black & Scholes option pricing model. Practitioners, in order to make the pricing
model useful, violate the theoretical assumption of constant volatility and just
plug the volatility surface to approximate observed market prices.
12. Lohre et al. (2007).

10.1057/9780230251298 - Interest Rate Models, Asset Allocation and Quantitative Techniques for Central Banks and Sovereign Wealth
Funds, Edited by Arjan B. Berkelaar, Joachim Coche and Ken Nyholm
Efficient Portfolio Optimization and Drawdowns 155

13. The problem is compounded by the fact that contributions are not made con-
stantly, i.e. the probability of an affiliate contributing in a given month can be
lower than 40%.
14. For the sake of comprehension of the monotonicity property the reader should
avoid thinking of risk using conventional dispersion measures. Dispersion-type
measures of risk, such as standard deviation or variance, don’t comply with this
property; dispersion measures don’t differentiate between the signs of the ran-
dom variables, and thus they would only reveal the magnitude of the changes in

value of an investment X or Y, not their direction.
15. Because it is outside the scope of this chapter we disregard the liquidity issue,
which may cause the violation of the positive homogeneity property.
16. Pedersen et al. (2003) and Magdon-Ismail and Atiya (2004).
17. The 2-asset frontier corresponds to the combination of gold and MSCI; the
3-asset frontier corresponds to gold, MSCI and MSCI-EM; the 5-asset frontier
corresponds to gold, MSCI, MSCI-EM, US Treasury 5–10y and US AAA Corporates.
The 18-asset frontier adds commodities, other equity indexes, other sovereign
fixed income indexes and US mortgages.
18. It consists of a binary implementation with a normalized geometric fitness func-
tion, a simple crossover function with probability 0.6 and a binary mutation (0.1
probability) using the Houck et al. (1995) GA toolbox.
19. For details on Genetic Algorithms see Holland (1975).
20. These are simply the mean, standard deviation and correlation matrix of the
overall sample.
21. The authors have tested the power of the algorithm in daily, monthly and quar-
terly data, not only for the assets in the sample of this chapter but also including
alternatives, and found that the methodology performs well in all cases.
22. The computational time required to find the portfolio frontier in the wealth
creation-MDD space was about 598.6 seconds for the two-asset case, while the
time required for the mean–variance space was about 3.5 seconds.
23. León and Laserna (2008) present an application for the strategic asset allocation
of Colombian pension funds.

Bibliography
Artzner, P., Delbaen, F., Eber, J-M. and Heath, D. (1998) ‘Coherent Measures of Risk’,
Mathematical Finance, Vol. 9, November, 203–228.
Bhansali, V. (2005) ‘Putting Economics (Back) into Quantitative Models’, Risk
Magazine’s Annual Quant Congress, New York, 8–9 November.
Bhansali, V. and Wise, M. (2001) ‘Forecasting Portfolio Risk in Normal and Stressed
Markets’, Journal of Risk, 4(1), Fall 2001, 91–106.
Black, F. and Litterman, R. (1992) ‘Global Portfolio Optimization’, Financial Analysts
Journal, September–October, 28–43.
Chabra, A.B. (2005) ‘Beyond Markowitz: A Comprehensive Wealth Allocation Framework
for Individual Investors’, The Journal of Wealth Management, 7(4), Spring, 8–34.
Cheng, S., Liu, Y. and Wang, S. (2004) ‘Progress in Risk Measurement’, Advanced
Modelling and Optimization, Vol. 6, No. 1.
Cuthbertson, K. and Nitzsche, D. (2004) Quantitative Financial Economics, Second
Edition, John Wiley & Sons.
Danthine, J-P. and Donaldson, J.B. (2002) Intermediate Financial Theory, Prentice
Hall.

10.1057/9780230251298 - Interest Rate Models, Asset Allocation and Quantitative Techniques for Central Banks and Sovereign Wealth
Funds, Edited by Arjan B. Berkelaar, Joachim Coche and Ken Nyholm
156 Alejandro Reveiz and Carlos León

De Giorgi, E. (2002) ‘A Note on Portfolio Selection under Various Risk Measures’,
Institute of Empirical Research, University of Zurich, 19 August.
Dowd, K. (2005) Measuring Market Risk, Second Edition, John Wiley & Sons.
Fisher, L. (1975) ‘Using Modern Portfolio Theory to Maintain an Efficiently
Diversified Portfolio’, Financial Analysts Journal, 31(3), 73–85.
Greenspan, A. (2008) ‘We Will Never Have the Perfect Model of Risk’, Financial Times,
16 March.
He, G. and Litterman, R. (1999) ‘The Intuition Behind Black-Litterman Model
Portfolios’, Investment Management Research, Goldman Sachs Investment
Management.
Holland J. (1975) Adaptation in Natural and Artificial Systems, The University of
Michigan Press.
Houck, C.R., Joines, J.A. and Kay, M.G. (1995) ‘A Genetic Algorithm for Function
Optimization: A Matlab Implementation’, Technical Report NCSU-IE-TR-95-09,
North Carolina State University, Raleigh, NC.
Hurlimann, W. (2002) ‘An Alternative Approach to Portfolio Selection’, In Proceedings
of the 12th international AFIR Colloquium, Cancun, Mexico.
IMF (2008) International Financial Statistics, CD-ROM.
León, C. and Laserna, J.M. (2008) ‘Asignación Estratégica de Activos para Fondos
de Pensiones Obligatorias en Colombia: Un Enfoque Alternativo’, Borradores de
Economía, Vol. 523, Banco de la República.
Litterman, R. (2003) ‘Risk Measurement’, Modern Investment Management: An
Equilibrium Approach, John Wiley & Sons, Hoboken, New Jersey.
Lohre, H., Neumann, T. and Winterfeldt, T. (2007) ‘Portfolio Construction with
Downside Risk’, Available at SSRN: http://ssrn.com/abstract=1112982.
Loraschi, A. et al. (1995) ‘Distributed Genetic Algorithms with an Application to
Portfolio Selection’, SIGE Consulenza S.P.A., Milano, Italy.
Magdon-Ismail, M. and Atiya, A. (2004) ‘Maximum Drawdown’, Risk, October, 1085–
1100.
Markowitz, H.M. (1952) ‘Portfolio Selection’, The Journal of Finance, 7(1), March,
77–91.
Pedersen, C.S. and Rudholm-Alfvin, T. (2003) ‘Selecting a Risk-Adjusted Shareholder
Performance Measure’, Journal of Asset Management, 4(3), October, 152–172.
Pézier, J. (2007) ‘Global Portfolio Optimization Revisited: A Least Discrimination
Alternative to Black-Litterman’, ICMA Centre Discussion Papers in Finance, ICMA,
University of Reading.
Popper, K. (1990) A World of Propensities, Thoemmes, Bristol.
Rebonato, R. (2007) Plight of the Fortune Tellers, Princeton University Press.
Reveiz, A. (2008) ‘The Case for Active Management for the Perspective of Complexity
Theory’, Borradores de Economia. Vol. 495, Banco de la República.
Reveiz, A. and León, C. (2008) ‘Administración de fondos de pensiones y multifondos
en Colombia’, Borradores de Economía, Vol. 506, Banco de la República.
Reveiz, A., León, C., Laserna, J.M. and Martinez, I. (2008) ‘Recomendaciones para
la modificación del régimen de pensiones obligatorias de Colombia’, Ensayos sobre
Política Económica, 26(56), edición junio 2008, 78–113.
Roy, A.D. (1952) ‘Safety First and the Holding of Assets’, Econometrica, 20(3), July,
431–449.
Rubinstein, M. (2002) ‘Markowitz’s “Portfolio Selection”: A Fifty-Year Retrospective’,
The Journal of Finance, 57(3), June, 1041–1045.


Sharpe, W.F. (1974) ‘Imputing Expected Returns from Portfolio Composition’, Journal
of Financial and Quantitative Analysis, 9(3), 463–472.
Taleb, N.N. (2004) Fooled by Randomness, Random House.
Taleb, N.N. (2007) The Black Swan, Random House.
Zimmermann, H., Drobetz, W. and Oertmann, P. (2003) Global Asset Allocation, John
Wiley & Sons.


8
Copulas and Risk Measures for Strategic Asset Allocation: A Case Study for Central Banks and Sovereign Wealth Funds
Cyril Caillault and Stéphane Monier

8.1 Introduction

Between 1995 and 2008, global total official reserves, excluding gold, grew
from $1.3 trillion to $6.0 trillion. Growth has been particularly strong since
2002. The bulk of the increase took place in emerging economies, whereas
the reserves of the G-10 countries excluding Japan have remained stable.
Foreign exchange market interventions, on the other hand, have declined
substantially in developed economies. In Table 8.1, we present estimates of
the reserves of various central banks.
In the aftermath of the 1998 Asian crisis, some central banks (CB) started
to increase their reserves to defend against liquidity crises. The surge in
commodity prices since 2002 has led to increased reserves and they have
reached a level well above the one year of short-term debt proposed by the
Greenspan-Guidotti rule. Some of the resulting excess cash has been trans-
ferred to sovereign wealth funds (SWF). The purpose of the funds is clearly
stated as outperforming OECD inflation + 3%. In Table 8.2, you will find
estimates of the size of various SWFs.
Part of this wealth is managed externally by asset management compan-
ies, who consider central banks to be prestigious clients for two reasons.
First, in terms of marketing, having a central bank in the client database is
a strong recommendation; second, it is an opportunity to demonstrate the
company’s talent in managing money. At the same time, it is an opportun-
ity for central banks to benefit from the asset managers’ experience through
technology transfer.
The question for CBs and SWFs is how to allocate this wealth in line with
the objectives of safety, liquidity and return. Earlier in the 1990s, research


Risk Measures for Strategic Asset Allocation 159

in risk management made significant progress and a large number of new
tools were developed to measure risk. This research was pushed forward
by the 1988 Basel Accord, whose aim was to develop a risk-based capital
framework to strengthen and stabilize the banking system. The asset
management industry started to study whether these new methods could
be utilized in the efficient management of the portfolios of its clients. The
risk measures considered in this chapter are: volatility, value at risk (VaR)1,

Table 8.1 Reserves estimates for several central banks

Country          Assets Billion   Currency   Assets $Billion   Last Update    January 2002 $Billion   % Change
US                 75             US $          75             January-2009     68                      11
United Kingdom     49             US $          49             January-2009     37                      31
Canada             43             CN $          53             January-2009     34                      28
Germany           110             EURO €       141             January-2009     97                      13
France            129             EURO €       165             January-2009     71                      81
Italy              85             EURO €       109             January-2009     54                      59
China            1913             US $        1913             January-2009    217                     780
Russia            387             US $         387             January-2009     36                     963
Japan            1011             US $        1011             January-2009    401                     152
Brazil            186             US $         186             January-2009     36                     415

Source: Datastream

Table 8.2 Reserves estimates for ten SWFs

Country                  Abbreviation   Fund                                             Assets $Billion   Inception   Origin
United Arab Emirates     ADIA           Abu Dhabi Investment Authority                        875          1976        Oil
Norway                   GPF            Government Pension Fund of Norway                     391          1990        Oil
Singapore                GIC            Government of Singapore Investment Corporation        330          1981        Non-commodity
Kuwait                   KIA            Kuwait Investment Authority                           264          1953        Oil
China                    CIC            China Investment Corporation                          200          2007        Non-commodity
Singapore                –              Temasek Holdings1                                     159          1974        Non-commodity
Australia                FFMA           Australian Government Future Fund                      81          2004        Non-commodity
Qatar                    QIA            Qatar Investment Authority                             60          2005        Oil
Alaska (United States)   APFC           Alaska Permanent Fund                                  40          1976        Oil
Libya                    –              Libyan Investment Authority                            50          2007        Oil

160 Cyril Caillault and Stéphane Monier

expected shortfall (ES)2 and the Omega function3. Although these measures
were initially developed for the purposes of bank supervision, we show how
we can use them to determine a strategic asset allocation in the context of
central banks and sovereign wealth funds portfolios.
In addition, there is a debate over which is the best way to calculate these
risk measures. Most are based on estimates of volatility. RiskMetrics, an early
standard in VaR, uses an EWMA volatility model (exponentially weighted
moving average); others use ARCH/GARCH-based models, as developed by
Engle (1982) and extended by Bollerslev (1986), or regime-switching models
introduced by Hamilton (1988). Multifactor approaches are also widely used
in the Finance industry. Without any doubt, the most comprehensive model
is the BarCap Point risk management system; see Dynkin et al. (2007) for
further details. For a review of these methods see for instance Caillault and
Guégan (2008).
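The RiskMetrics EWMA recursion just mentioned is simple to reproduce; a sketch with the conventional daily decay factor of 0.94 and made-up data (the function name and seeding choice are ours):

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """RiskMetrics recursion: var_t = lam * var_{t-1} + (1 - lam) * r_{t-1}^2."""
    var = np.empty(len(returns))
    var[0] = returns[:20].var()        # seed with an initial sample variance
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1.0 - lam) * returns[t - 1] ** 2
    return np.sqrt(var)

rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, 500)         # hypothetical daily returns
sigma = ewma_volatility(r)
var_95 = 1.645 * sigma[-1]             # one-day 95% Gaussian VaR
```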
Our approach is different and is based on copula functions (see Sklar 1959
and Caillault and Guégan 2005). The existence of co-movements and con-
tagion in financial time series are important issues that have been widely
discussed in the literature. In the main, it has been shown that cross-market
correlation increases significantly during turbulent periods, which has
a major impact on market risk. The Gaussian distribution does not cap-
ture this kind of dependence; however, copula functions do. The copula
approach has two main advantages: 1) it allows us to relax the Gaussian
assumption, if necessary, and 2) it provides a full representation of the port-
folio distribution. The multivariate distribution of the portfolio can be sep-
arated into two parts: the marginal distributions of each asset class, and
the dependence structure of the portfolio, described by the copula. When
the two are combined, we get the portfolio distribution. Copulas are well
known in two dimensions, but for portfolios with a large number of assets,
the use of copulas is restricted to Student-t or the Gaussian distributions
for reasons of tractability. One problem is that both copulas are symmetric,
which is not necessarily true for our universe of asset classes. This chapter
proposes to investigate a convex sum of Clayton and Gumbel copulas for
dimensions greater than two. This copula gives us enough flexibility to fit
an asymmetric distribution.
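The separation into marginals and a dependence structure is easy to illustrate with the simplest case, a Gaussian copula combined with fat-tailed Student-t marginals; the Gumclay copula discussed above would replace the Gaussian dependence step. A sketch with illustrative parameters:

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(corr, n, rng):
    """Draw uniform variates whose dependence is a Gaussian copula."""
    z = rng.multivariate_normal(np.zeros(len(corr)), corr, size=n)
    return stats.norm.cdf(z)           # marginal-free dependence structure

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.6], [0.6, 1.0]])
u = gaussian_copula_sample(corr, 10_000, rng)

# Plug in fat-tailed marginals: Student-t with 4 degrees of freedom
returns = stats.t.ppf(u, df=4) * 0.01
```

Swapping the marginal `ppf` (to NIG or generalized Student-t) or the copula leaves the other component untouched, which is exactly the flexibility the approach provides.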
Thus, the aim of this chapter is twofold. First, we propose to model the
distribution of returns for a large number of asset classes using copula func-
tions and non-normal distributions such as the Normal Inverse Gaussian
and generalized Student-t distributions to model asymmetric and fat-tailed
behaviour observed in our data set. Second, we investigate a strategic asset
allocation for CBs and SWFs. Investment horizons are very different for CBs
and SWFs, at around one year and 30 years respectively, and as a conse-
quence, their benchmarks are also different. For CBs, the benchmark is typ-
ically treasury bonds with one to three year maturity. SWFs have a wide
range of nominal or real return objectives, typically OECD inflation + 3%.


We propose to compare four strategic asset allocation frameworks which
attempt to minimize the risk measure when a certain level of expected
return is fixed: mean–variance, mean–VaR, mean–ES and mean–Omega.

8.2 Strategic asset allocation methodology

In this section we describe the steps to obtain a strategic asset allocation.

At each step we provide some references for the tools we need in our pro-
cess. Figure 8.1 represents the methodology that we use to obtain a strategic
asset allocation. This process can be used for CBs and SWFs, as well as for
pension funds. First, the collection of the data has to be done carefully in
order to have a good representation of the asset classes in which we want
to allocate. Our sources are banks such as J.P. Morgan or Merrill Lynch, and
Bloomberg.
In the second step, we aim to fit the asset classes’ distributions and the
joint distribution of all asset classes. The marginal distributions considered
are the Normal Inverse Gaussian distribution (NIG)2 and the Generalized
Student-t distribution (GT)3; the copula functions4 are the Gaussian,
Student-t, Gumbel, Clayton and the convex sum of Clayton and Gumbel
copulas, referred to as the Gumclay copula.
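The convex-sum construction can be sketched for a bivariate pair (the chapter applies it in higher dimensions). The parameter values below are placeholders for illustration, not the chapter's estimates; any convex combination of copulas is itself a copula, so the sketch only needs the two component CDFs:

```python
import math

def clayton_cdf(u, v, theta):
    # Bivariate Clayton copula: C(u,v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def gumbel_cdf(u, v, theta):
    # Bivariate Gumbel copula: C(u,v) = exp(-((-ln u)^theta + (-ln v)^theta)^(1/theta)), theta >= 1
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1.0 / theta))

def gumclay_cdf(u, v, theta_gumbel, theta_clayton, w):
    # Convex sum of the two copulas, 0 <= w <= 1
    return w * gumbel_cdf(u, v, theta_gumbel) + (1.0 - w) * clayton_cdf(u, v, theta_clayton)
```

Because Gumbel contributes upper-tail dependence and Clayton lower-tail dependence, the mixture can capture asymmetric tail behaviour in both directions.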
The parameterization determined in the second step is then used in the
third step to generate future outcomes for the asset classes. In this step we
can select the investment horizon of the portfolio. The expected returns are

Data collection
(historical returns of different as set classes)

Individual level Multivariate level


Adjust the distribution Adjust the joint distribution

Scenarios generator (Monte Carlo simulations)


× years horizon

Heuristic optimization Funds/clients


(choice of risk measures and leverage) constraints

Final portfolio : Asset Asset Asset …………. Asset


class 1 class 2 class 3 class n

Leverage
0% 100%

Figure 8.1 Diagram of the asset allocation process

10.1057/9780230251298 - Interest Rate Models, Asset Allocation and Quantitative Techniques for Central Banks and Sovereign Wealth
Funds, Edited by Arjan B. Berkelaar, Joachim Coche and Ken Nyholm
162 Cyril Caillault and Stéphane Monier

based on the average historical returns as well as simulated real returns, but
other assumptions such as yield to maturity or the subjective views of the
portfolio manager can be considered.
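As a minimal sketch of the scenario-generation step, the following draws joint monthly returns by sampling on the copula scale and pushing the uniforms through the marginal quantile functions. For brevity it uses a Gaussian copula and Student-t marginals with made-up parameters; the chapter's own choices (Student-t or GumClay copulas, NIG and GT marginals) slot into the same two-stage recipe:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Illustrative inputs (not the chapter's estimates): a cross-correlation
# matrix and two fat-tailed marginals for a pair of asset classes.
corr = np.array([[1.0, 0.4],
                 [0.4, 1.0]])
marginals = [stats.t(df=5, loc=0.004, scale=0.015),   # e.g. bonds, monthly
             stats.t(df=4, loc=0.006, scale=0.040)]   # e.g. equities, monthly

def simulate_scenarios(n_scenarios, horizon_months):
    """Joint monthly returns: Gaussian copula plus inverse-CDF marginals."""
    chol = np.linalg.cholesky(corr)
    z = rng.standard_normal((n_scenarios * horizon_months, len(marginals))) @ chol.T
    u = stats.norm.cdf(z)                                   # uniforms on the copula scale
    r = np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])
    return r.reshape(n_scenarios, horizon_months, len(marginals))

paths = simulate_scenarios(n_scenarios=1000, horizon_months=12)
```

The investment horizon enters simply as the number of simulated periods per scenario path.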
The optimizations are then run on the simulated outcomes. The con-
straints of the client are added in the optimizer and the optimization is
launched for different frameworks: mean–variance, mean–VaR, mean–ES
and mean–Omega. The optimizer permits the leverage of the portfolio.

When the Gaussian assumption is relaxed, constraints are introduced and
risk measures like VaR, ES and Omega are used. The consequence is that
some problems cannot always be reduced to linear or quadratic programs.
The optimization becomes more complex as it may exhibit multiple local
extrema. In particular, the use of risk measures taking into account the fat
tails and the asymmetry of asset returns leads to a non-convex problem,
and the shape exhibits several local minima. The use of classical Newton-
Raphson type algorithms in such situations gives solutions which are highly
dependent on the starting point. In fact, the algorithm has to escape from
local minima and must accept uphill moves. These algorithms may not find
the optimal solution and heuristic optimization techniques have to be con-
sidered. The algorithm we use is called Threshold Accepting. This algorithm
can reject a solution if the solution is not below a certain threshold. We will
not describe the algorithm here, but refer the reader to Gilli et al. (2006).
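A bare-bones version of Threshold Accepting, in the spirit of Gilli et al. (2006), can be sketched as follows; the objective and neighbour function here are one-dimensional toys, not the portfolio problem, and the threshold sequence is an assumption:

```python
import math
import random

def threshold_accepting(objective, neighbour, x0, thresholds, steps_per_round=200, seed=0):
    """Minimize `objective` by local search that also accepts uphill moves
    as long as the deterioration stays below a shrinking threshold."""
    rnd = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    for tau in thresholds:                      # decreasing threshold sequence
        for _ in range(steps_per_round):
            y = neighbour(x, rnd)
            fy = objective(y)
            if fy - fx < tau:                   # downhill, or tolerably uphill
                x, fx = y, fy
                if fy < fbest:
                    best, fbest = y, fy
    return best, fbest

# Toy usage: a one-dimensional objective with several local minima.
f = lambda x: x * x + 3.0 * math.sin(5.0 * x)
step = lambda x, rnd: x + rnd.uniform(-1.0, 1.0)
x_star, f_star = threshold_accepting(f, step, x0=4.0, thresholds=[3.0, 1.0, 0.3, 0.0])
```

Early rounds with large thresholds let the search climb out of local minima; the final zero threshold reduces it to pure descent.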

8.3 An example to justify the use of an appropriate distribution

In this section, we show the relevance of having a good representation of
the distribution of returns, demonstrate the benefits of the use of copulas
to allocate the wealth of CBs or SWFs and show the impact of the portfolio
distribution choice on risk.
We consider the simple case where an SWF invests only in US equities
and US treasury bonds. The target of this portfolio is to design a strategy
capable of beating a 100% bond benchmark in 90% of cases. In Weinberger
and Golub (2007), the authors make very simple assumptions about US
equity and US treasury bond distributions: a zero correlation between
bonds and equities, and a normal distribution for both asset classes. Over a
30-year horizon, they observe that the portfolio meeting the condition is a
50% equity–50% treasury bond allocation. In Figure 8.2 we reproduce this
work and the portfolio found by the two authors is represented in the dark
blue bar. This portfolio is compared with the brown bar representing the
100% bond portfolio at the 10% quantile. Above this quantile, they con-
clude that the portfolio value will always beat the 100% bond benchmark.
But in this example we can also observe that the target is not reached for
other assumptions about the asset classes’ distributions. We made four more
sets of assumptions: NIG with a correlation of 0%, Gaussian distribution

Figure 8.2 50% equity–50% treasury bond portfolio value at a 30-year horizon; the initial value is 10M$. The bars show the portfolio value at quantiles 0.075 to 0.125 under five sets of assumptions (Gaussian distributions with 0% and 20% correlation, Normal Inverse Gaussian distributions with 0% and 20% correlation, and Normal Inverse Gaussian distributions with a Student-t copula) against the 100% bond portfolio.

Figure 8.3 43% equity–57% treasury bond portfolio at a 30-year horizon; the initial value is 10M$. The same five distributional assumptions and the 100% bond portfolio are compared across quantiles 0.075 to 0.125.


with a correlation of 20%, NIG with a correlation of 20% (Gaussian copula) and NIG with a Student-t copula, which we believe is the most realistic representation of the distribution of the four. It is interesting to note the double
impact of the NIG and the copula, represented by the light blue and orange
bars. The portfolio value decreases each time that the model distribution
becomes more realistic. This means that the portfolio is more risky than it
appears under the Gaussian and zero correlation assumptions. Figure 8.3

represents the final portfolio value of a 43%–57% allocation. We find that
a 43% equity–57% treasury bond portfolio is required to get a return above
benchmark in 90% of cases. Thus, an investor can reach the same return
target with less risk in the portfolio when the portfolio distribution is well
specified.
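The mechanism behind this example can be reproduced in miniature: under the Gaussian assumption, merely switching the bond–equity correlation from 0% to 20% lowers the 10% quantile of 30-year terminal wealth, so the same allocation is riskier than the zero-correlation model suggests. The annual return assumptions below are illustrative, not the chapter's estimates, and common random numbers are used so the comparison isolates the correlation effect:

```python
import numpy as np

rng = np.random.default_rng(7)
n, years, v0 = 20000, 30, 10e6

# Illustrative annual assumptions (not the chapter's estimates).
mu_e, sd_e = 0.07, 0.16      # equity
mu_b, sd_b = 0.045, 0.07     # treasury bonds

z1 = rng.standard_normal((n, years))
z2 = rng.standard_normal((n, years))

def q10_terminal(rho, eq_w=0.5):
    """10% quantile of 30-year terminal wealth for a fixed-mix portfolio."""
    zb = rho * z1 + np.sqrt(1.0 - rho ** 2) * z2   # correlate bond shocks with equity
    eq = mu_e + sd_e * z1
    bd = mu_b + sd_b * zb
    port = eq_w * eq + (1.0 - eq_w) * bd
    return np.quantile(v0 * np.prod(1.0 + port, axis=1), 0.10)

q_zero, q_corr = q10_terminal(0.0), q10_terminal(0.2)
```

Replacing the normal marginals with fat-tailed ones and the Gaussian copula with a tail-dependent one pushes the downside quantile lower still, which is the effect shown in Figures 8.2 and 8.3.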

8.4 The distributions of the CB and SWF asset class universe

Reserves held by CBs for the purpose of liquidity or foreign exchange man-
agement are usually invested in safe liquid assets. Recently, however, many
CBs have accumulated significantly more reserves than are needed for these
purposes. It is believed that CBs increasingly invest these excess reserves
with a longer investment horizon and with more emphasis on long-term
return. Thus, whereas a CB’s reserve portfolio will typically be invested in
high quality, lower maturity bonds, an SWF portfolio may well extend into
longer maturity or lower quality bonds. For SWFs that fully embrace this
transition away from the traditional conservative CB investment policy, the
portfolio may invest in public and private equity, real estate and even hedge
funds. The primary challenge in managing a CB reserve portfolio or an SWF
is to establish the appropriate asset allocation benchmark. This requires
an understanding of the purposes of the fund, the nature and timing of
the inflows and potential future spending, and the risk preference of its
sponsors.
To sum up, the asset class universe in which CBs typically invest has to be
safe and very liquid, with a maturity below five years. In order to add some
diversification to CB portfolios, we propose to build a portfolio based on the
following asset classes: government, agencies, mortgages and Pfandbriefe
indices with maturities below five years. SWFs look to maximize the long-
term return and are not restricted to five-year investments, but can seek
return in asset classes with higher maturities and risk profiles. The portfolio
that we propose can invest in Asset Backed Securities (ABS), credit corporate,
credit high yield, emerging countries debt and S&P 500; see Table 8.3 for the
details of the asset classes.
Having described our methodology, we now present our finding on the
marginal distributions. Our historic data set starts in January 1998 and ends
in September 2008. We estimate the parameters of the NIG and GT distri-
butions by maximizing the log likelihood functions. This maximization


Table 8.3 List of asset classes for CBs and SWFs

CBs universe (Y = Years)       SWFs universe (Y = Years)

US Gov: 1–3 Y                  US Gov: 1–10 Y
US Gov: 3–5 Y                  US Gov: 10+ Y
US Agen: 1–3 Y                 US Agen: 1–10 Y
US Agen: 3–5 Y                 US Agen: 10+ Y
US Mortgages: 0–3 Y            US Mortgages: 0–10 Y
US Mortgages: 3–5 Y            US Corp: 1–10 Y
EURO Gov: 1–3 Y                US Corp: 10+ Y
EURO Gov: 3–5 Y                US High yield
EURO Quasi Gov: 1–3 Y          US ABS
EURO Quasi Gov: 3–5 Y          EURO Gov: 1–10 Y
EURO Pfandbriefe: 1–3 Y        EURO Gov: 10+ Y
EURO Pfandbriefe: 3–5 Y        EURO Pfandbriefe: 1–10 Y
UK Gilts: 1–3 Y                EURO Pfandbriefe: 10+ Y
UK Gilts: 3–5 Y                EURO Corp: 1–10 Y
–                              EURO Corp: 10+ Y
–                              Euro High Yield
–                              Central Europe emerging ex Russia
–                              EURO Quasi Gov: 1–10 Y
–                              EURO Quasi Gov: 10+ Y
–                              UK Gilts: 1–10 Y
–                              UK Gilts: 10+ Y
–                              UK Corp & Coll: 1–10 Y
–                              UK Corp & Coll: 10+ Y
–                              Emerging Countries: Sovereigns BBB
–                              SP500

provides the Akaike information criteria (AIC); see Akaike (1974). We use it
to discriminate between the NIG and the GT. Also, we provide the results
of the Kolmogorov-Smirnov (KS) test with the null hypothesis that the
data comes from one of the two distributions. Table 8.4 provides the first four moments, the NIG and GT parameters, the AIC and the Kolmogorov-Smirnov test for each asset class at daily, weekly and monthly frequencies.
For daily and weekly frequencies, all asset classes are skewed and fat-tailed,
but this is not the case at a monthly frequency. Another interesting point
is the decrease of the kurtosis as the frequency decreases. Hence, the Euro
asset classes have a kurtosis below three and the use of the NIG and the GT
cannot be considered. In those specific cases, we keep the Gaussian distri-
bution by default. For all other asset classes, the AIC is lowest for the NIG, and the “False” outcome of the KS test (i.e. the NIG cannot be rejected) encourages us to select the
NIG as the appropriate distribution. We also provide the results for SWFs; we
obtain similar conclusions for Euro government: 1–10 Y, Euro government:
10+ Y, EURO Pfandbriefe: 1–10 Y, EURO quasi government: 1–10 Y, EURO
quasi government: 10+ Years, UK Gilts: 1–10 Y and UK Gilts: 10+ Y (see Table
8.5). These asset classes are modelled by the Gaussian distribution. US ABS is

Table 8.4 First four moments, parameters estimates, AIC and KS test for each asset class of CBs’ universe. “False” means that the
assumption that the data comes from the NIG (or GT) cannot be rejected

Monthly (Y = Years): Mean | SD | Skew | Kurtosis | NIG parameters (alpha, beta, mu, delta) | GT parameters (lambda, eta) | Akaike Criteria (NIG, GT) | KS Test (NIG, GT)

US Gov: 1–3 Y 0.41 0.49 0.05 3.63 4.44 0.17 0.37 1.08 0.02 13.53 1.45 1.78 FALSE TRUE
US Gov: 3–5 Y 0.49 1.06 -0.25 3.60 2.36 −0.47 1.00 2.50 −0.12 14.98 2.99 3.22 FALSE TRUE
US Agen: 1–3 Y 0.41 0.44 −0.06 3.83 4.37 −0.16 0.44 0.83 −0.02 11.26 1.19 1.57 FALSE TRUE
US Agen: 3–5 Y 0.48 0.86 −0.45 4.12 2.37 −0.66 0.93 1.55 −0.19 10.50 2.55 2.79 FALSE TRUE
US Mortgages: 0–3 Y 0.43 0.42 −0.85 7.14 2.48 −0.68 0.55 0.40 −0.25 5.91 1.06 1.47 FALSE TRUE
US Mortgages: 3–5 Y 0.48 0.72 −1.39 8.43 1.84 −0.87 0.83 0.65 −0.48 6.17 2.08 2.37 FALSE TRUE
EURO Gov: 1–3 Y 0.33 0.38 0.04 2.41 NA NA NA NA NA NA NA NA TRUE TRUE
EURO Gov: 3–5 Y 0.39 0.74 −0.13 2.48 NA NA NA NA NA NA NA NA TRUE TRUE
EURO Quasi Gov: 1–3 Y 0.32 0.36 0.04 2.38 NA NA NA NA NA NA NA NA TRUE TRUE
EURO Quasi Gov: 3–5 Y 0.38 0.71 −0.11 2.54 NA NA NA NA NA NA NA NA TRUE TRUE
EURO Pfandbriefe: 1–3 Y 0.32 0.36 0.11 2.61 NA NA NA NA NA NA NA NA TRUE TRUE
EURO Pfandbriefe: 3–5 Y 0.37 0.72 −0.14 2.53 NA NA NA NA NA NA NA NA TRUE TRUE
UK Gilts: 1–3 Y 0.46 0.44 −0.34 3.35 10.96 −4.78 1.21 1.54 −0.19 27.35 1.23 1.65 FALSE TRUE
UK Gilts: 3–5 Y 0.50 0.78 −0.29 2.86 NA NA NA NA NA NA NA 2.64 TRUE TRUE

Weekly (Y = Years): Mean | SD | Skew | Kurtosis | NIG parameters (alpha, beta, mu, delta) | GT parameters (lambda, eta) | Akaike Criteria (NIG, GT) | KS Test (NIG, GT)

US Gov: 1–3 Y 0.09 0.23 −0.33 5.17 5.39 −0.72 0.12 0.28 −0.11 6.93 −0.15 −0.04 FALSE TRUE
US Gov: 3–5 Y 0.10 0.50 −0.47 4.40 3.54 −0.93 0.32 0.80 −0.19 9.13 1.42 1.50 FALSE TRUE
US Agen: 1–3 Y 0.08 0.21 −0.44 5.52 5.66 −0.97 0.13 0.24 −0.14 6.64 −0.35 −0.24 FALSE TRUE
US Agen: 3–5 Y 0.10 0.43 −0.53 4.85 3.54 −0.90 0.26 0.60 −0.20 7.89 1.11 1.18 FALSE TRUE

US Mortgages: 0–3 Y 0.10 0.25 −0.93 7.85 3.85 −1.08 0.16 0.22 −0.26 5.68 −0.07 0.01 FALSE TRUE
US Mortgages: 3–5 Y 0.10 0.42 −0.79 6.30 2.88 −0.84 0.24 0.44 −0.25 6.40 0.97 1.02 FALSE TRUE
EURO Gov: 1–3 Y 0.07 0.17 −0.33 3.82 12.74 −2.94 0.15 0.36 −0.15 12.29 −0.67 −0.53 FALSE TRUE
EURO Gov: 3–5 Y 0.08 0.37 −0.38 3.49 10.29 −4.09 0.54 1.06 −0.20 20.16 0.81 1.72 FALSE FALSE
EURO Quasi Gov: 1–3 Y 0.07 0.17 −0.46 4.01 13.56 −4.24 0.17 0.32 −0.21 11.41 −0.77 −0.63 FALSE TRUE
EURO Quasi Gov: 3–5 Y 0.08 0.34 −0.41 3.42 16.27 −8.82 0.82 1.14 −0.23 25.36 0.69 0.80 FALSE FALSE
EURO Pfandbriefe: 1–3 Y 0.07 0.17 −0.48 4.24 11.51 −3.29 0.15 0.29 −0.20 9.90 −0.74 −0.59 FALSE TRUE
EURO Pfandbriefe: 3–5 Y 0.08 0.36 −0.48 3.78 8.38 −3.35 0.44 0.82 −0.23 14.41 0.75 0.84 FALSE FALSE
UK Gilts: 1–3 Y 0.09 0.21 −0.66 5.38 6.61 −1.86 0.17 0.26 −0.23 7.20 −0.33 −0.20 FALSE TRUE
UK Gilts: 3–5 Y 0.10 0.38 −0.42 4.30 4.64 −1.09 0.25 0.62 −0.17 9.33 0.89 0.98 FALSE TRUE
Daily (Y = Years): Mean | SD | Skew | Kurtosis | NIG parameters (alpha, beta, mu, delta) | GT parameters (lambda, eta) | Akaike Criteria (NIG, GT) | KS Test (NIG, GT)

US Gov: 1–3 Y 0.02 0.11 −0.12 7.97 7.17 −0.22 0.02 0.08 −0.03 5.21 −1.72 −1.66 FALSE TRUE
US Gov: 3–5 Y 0.02 0.24 −0.23 6.26 4.12 −0.31 0.04 0.23 −0.07 5.88 −0.12 −0.06 FALSE TRUE
US Agen: 1–3 Y 0.02 0.10 −0.20 6.89 9.09 −0.54 0.02 0.09 −0.05 5.57 −1.94 −1.89 FALSE TRUE
US Agen: 3–5 Y 0.02 0.20 −0.22 5.86 5.25 −0.40 0.04 0.20 −0.06 6.14 −0.47 −0.42 FALSE TRUE
US Mortgages: 0–3 Y 0.02 0.11 −0.55 7.97 7.72 −1.15 0.03 0.09 −0.14 5.34 −1.79 −1.75 FALSE TRUE
US Mortgages: 3–5 Y 0.02 0.19 −0.24 6.84 4.80 −0.34 0.03 0.17 −0.06 5.60 −0.65 −0.62 FALSE TRUE
EURO Gov: 1–3 Y 0.01 0.08 −0.31 6.64 12.29 −1.17 0.02 0.07 −0.08 5.71 −2.43 −2.37 FALSE TRUE
EURO Gov: 3–5 Y 0.02 0.16 −0.28 5.17 7.66 −0.88 0.04 0.19 −0.09 6.89 −0.91 −0.85 FALSE TRUE
EURO Quasi Gov: 1–3 Y 0.01 0.07 −0.25 9.40 9.80 −0.56 0.02 0.05 −0.06 4.95 −2.59 −2.53 FALSE TRUE
EURO Quasi Gov: 3–5 Y 0.02 0.14 −0.37 4.59 10.42 −1.85 0.05 0.21 −0.14 8.15 −1.09 −1.02 FALSE TRUE
EURO Pfandbriefe: 1–3 Y 0.01 0.07 −0.35 5.69 15.79 −1.98 0.02 0.08 −0.11 6.36 −2.57 −2.50 FALSE TRUE
EURO Pfandbriefe: 3–5 Y 0.02 0.15 −0.32 4.67 9.64 −1.46 0.05 0.21 −0.12 7.86 −1.03 −0.97 FALSE TRUE
UK Gilts: 1–3 Y 0.02 0.09 −0.45 7.00 10.13 −1.37 0.03 0.08 −0.12 5.62 −2.11 −2.05 FALSE TRUE
UK Gilts: 3–5 Y 0.02 0.16 −0.38 5.29 7.51 −1.13 0.05 0.19 −0.12 6.83 −0.87 −0.81 FALSE TRUE

Table 8.5 First four moments, parameters estimates, AIC and KS test for each asset class of SWFs’ universe. “False” means that the assumption that
the data comes from the NIG (or GT) cannot be rejected

Monthly (Y = Years): Mean | SD | Skew | Kurtosis | NIG parameters (alpha, beta, mu, delta) | GT parameters (lambda, eta) | Akaike Criteria (NIG, GT) | KS Test (NIG, GT)

US Gov: 1–10 Y 0.45 0.88 −0.24 3.54 3.02 −0.63 0.91 2.20 −0.12 16.23 2.62 2.81 FALSE TRUE
US Gov: 10+ Y 0.60 2.33 −0.68 4.42 1.02 −0.44 2.55 4.05 −0.30 10.20 4.53 6.10 FALSE TRUE

US Agen: 1–10 Y 0.46 0.83 −0.46 4.14 2.45 −0.70 0.90 1.50 −0.20 10.43 2.49 2.71 FALSE TRUE
US Agen: 10+ Y 0.60 2.26 −0.81 5.08 0.85 −0.36 2.10 3.23 −0.32 8.36 4.45 5.90 FALSE TRUE
US Mortgages: 0–10 Y 0.49 0.75 −0.39 3.32 11.64 −7.57 2.96 2.89 −0.22 34.12 2.30 2.56 FALSE TRUE
US Corp: 1–10 Y 0.41 1.19 −1.76 12.13 0.82 −0.38 0.83 0.82 −0.57 5.49 3.00 3.21 FALSE TRUE
US Corp: 10+ Y 0.41 2.24 −0.91 5.92 0.68 −0.26 1.54 2.65 −0.33 7.10 4.41 5.62 FALSE TRUE
US High yield 0.35 2.24 −0.84 6.17 0.58 −0.19 1.20 2.45 −0.28 6.62 4.27 5.30 FALSE TRUE
US ABS 0.21 0.72 −2.88 12.58 NA NA NA NA −1.27 6.19 NA 1.69 TRUE FALSE
EURO Gov: 1–10 Y 0.39 0.78 −0.16 2.43 NA NA NA NA 0.15 NA NA NA TRUE TRUE
EURO Gov: 10+ Y 0.52 1.84 −0.04 2.69 NA NA NA NA 0.10 NA NA NA TRUE TRUE
EURO Pfandbriefe: 1–10 Y 0.37 0.67 −0.12 2.47 NA NA NA NA 0.11 NA NA NA TRUE TRUE
EURO Pfandbriefe: 10+ Y 0.44 1.60 −0.50 3.96 1.59 −0.59 1.73 3.27 −0.23 12.30 3.79 4.43 FALSE TRUE
EURO Corp: 1–10 Y 0.32 0.97 −2.03 13.32 1.14 −0.61 0.73 0.64 −0.81 5.68 2.58 2.67 FALSE TRUE
EURO Corp: 10+ Y 0.35 1.75 −2.18 15.02 0.58 −0.30 1.02 1.08 −0.94 5.58 3.75 4.37 FALSE TRUE
Central Europe emerging ex Russia 0.86 1.47 1.01 9.82 0.54 0.13 0.59 1.06 0.25 5.19 3.47 4.20 FALSE TRUE
Euro High Yield 0.18 3.52 −0.48 5.41 0.35 −0.07 0.98 4.12 −0.16 6.80 5.05 6.81 FALSE TRUE
EURO Quasi Gov: 1–10 Y 0.40 0.77 −0.14 2.44 NA NA NA NA NA NA NA NA TRUE TRUE
EURO Quasi Gov: 10+ Y 0.52 1.56 −0.18 2.47 NA NA NA NA NA NA NA NA TRUE TRUE
UK Gilts: 1–10 Y 0.48 0.88 −0.27 2.80 NA NA NA NA NA NA NA NA TRUE TRUE
UK Gilts: 10+ Y 0.51 1.93 0.07 2.49 NA NA NA NA NA NA NA NA TRUE TRUE
UK Corp & Coll: 1–10 Y 0.41 0.96 −2.18 15.36 1.00 −0.51 0.76 0.58 −0.87 5.48 2.57 2.76 FALSE TRUE
UK Corp & Coll: 10 + Y 0.46 1.93 −0.19 4.56 0.73 −0.06 0.70 2.71 −0.07 7.93 4.18 5.08 FALSE TRUE
Emerging Countries: Sovereigns BBB 0.64 1.50 −1.31 6.89 1.42 −0.85 1.87 1.65 −0.54 7.32 3.53 4.24 FALSE TRUE
SP500 0.24 4.32 −0.51 3.55 1.54 −1.01 11.04 12.56 −0.28 21.92 5.78 9.31 FALSE TRUE


different from the others and the best fit here is the GT. Our feeling is that
the subprime crisis has so stressed this asset class that the tail is now very
fat (with a kurtosis above 12) and the GT is the only distribution we have
considered that is able to capture the part of the return resulting from this
turbulent period.
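The marginal-selection step described above (fit by maximum likelihood, compare AIC, check the KS test) can be sketched as follows. We use SciPy's norminvgauss for the NIG and, as a stand-in for the generalized Student-t (which SciPy does not ship), the ordinary Student-t; the sample is synthetic rather than the chapter's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic sample standing in for one asset class's monthly returns.
returns = stats.norminvgauss(a=2.0, b=-0.5, loc=0.5, scale=1.0).rvs(size=500, random_state=rng)

def fit_and_score(dist, data, k_params):
    """Maximum-likelihood fit, AIC = 2k - 2 log L (Akaike 1974), and KS test."""
    params = dist.fit(data)
    loglik = np.sum(dist.logpdf(data, *params))
    aic = 2 * k_params - 2 * loglik
    ks = stats.kstest(data, dist.cdf, args=params)
    return aic, ks.pvalue

aic_nig, p_nig = fit_and_score(stats.norminvgauss, returns, k_params=4)
aic_t, p_t = fit_and_score(stats.t, returns, k_params=3)   # Student-t stand-in for the GT
best = "NIG" if aic_nig < aic_t else "GT"
```

The lower AIC picks the candidate distribution; the KS p-value then checks that the chosen fit is not rejected.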
The estimation of the copula’s parameters is done by maximizing the
pseudo log-likelihood function. Hence, for a generic parameter \psi_c, the estimator is given by

\hat{\psi}_c = \arg\max_{\psi_c} \sum_{k=1}^{N} \log L\!\left( \psi_c ;\, \hat{F}_i^N(z_i^k),\, \hat{F}_j^N(z_j^k) \right),

where L(\psi; u, v) = \partial^2 C(u, v) / (\partial u\, \partial v) and \hat{F} is the empirical cumulative distribution
function. For tractability reasons, the joint distribution of each pair of asset
classes is first estimated. Tables 8.6 and 8.7 summarize the selected copula
for each pair of asset classes. The copula that is selected most frequently is
chosen as the copula for the global joint distribution.
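The pairwise pseudo-maximum-likelihood step can be illustrated for the Clayton copula (the same recipe applies to the other families): transform each series to pseudo-observations with its empirical CDF, then maximize the log copula density. Everything below is a self-contained toy, not the chapter's data:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def pseudo_obs(x):
    """Empirical CDF transform: ranks rescaled to the open interval (0, 1)."""
    n = len(x)
    return (np.argsort(np.argsort(x)) + 1) / (n + 1.0)

def clayton_logdensity(u, v, theta):
    # log c(u,v) for the bivariate Clayton copula, theta > 0
    return (np.log1p(theta) - (theta + 1.0) * (np.log(u) + np.log(v))
            - (2.0 + 1.0 / theta) * np.log(u ** -theta + v ** -theta - 1.0))

def fit_clayton(x, y):
    """Pseudo-MLE: maximize the pseudo log-likelihood over theta."""
    u, v = pseudo_obs(x), pseudo_obs(y)
    nll = lambda th: -np.sum(clayton_logdensity(u, v, th))
    return minimize_scalar(nll, bounds=(0.01, 20.0), method="bounded").x

# Toy usage: simulate Clayton(theta = 2) via its conditional inverse, then refit.
rng = np.random.default_rng(1)
u = rng.uniform(size=2000)
w = rng.uniform(size=2000)
v = ((w ** (-2.0 / 3.0) - 1.0) * u ** -2.0 + 1.0) ** (-0.5)   # theta = 2
theta_hat = fit_clayton(u, v)
```

Fitting each candidate family this way and comparing their AICs reproduces the per-pair selection reported in Tables 8.6 and 8.7.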
In the CB case, the Student-t copula is selected, whereas in the SWF case
it is the Gumclay copula. The choice for CBs is not an obvious one as the
Gumclay copula is selected almost the same number of times as the Student-t

Table 8.6 Number of asset class pairs selected by copulas according to the AIC (CB case)

# of pairs

Copula Daily Weekly Monthly

Gaussian 0 0 0
Student-t 91 55 46
Gumbel 0 0 1
Clayton 0 0 0
GumClay 0 36 45

Table 8.7 Number of asset class pairs selected by copulas according to the AIC (SWF case)

Copula # of pairs

Gaussian 18
Student-t 78
Gumbel 17
Clayton 32
GumClay 154


copula. This means that the tail dependence for these asset classes is quite
symmetrical and the selection of the best copula is dependent on the data
in the centre of the distribution. This effect is less pronounced with higher
frequency data (see Table 8.6). We can argue that the monthly frequency
tends to smooth the returns and the extreme effects as we observed for
their marginal distributions. Now, the parameters for the global dependence structure are determined as follows. For the Student-t copula, the cross-correlations and the degrees of freedom μ are easily estimated from the log likelihood estimator. The estimated degree of freedom μ is different for each pair of asset classes, but the global Student-t copula is defined for a unique μ.
Here, μ̂ is defined as the median of the estimated μ over all pairs of asset classes, and its value is 10.4. From these estimates we calculated the
tail dependence matrix, which contains non-null elements. We repeat this for the parameters θ1, θ2 and w of the GumClay copula and find that θ̂1 = 1.9, θ̂2 = 1.7 and ŵ = 0.6. From these estimates we can derive the tail dependence measures, obtaining λ̂u = 0.31 and λ̂l = 0.30. This result suggests the use of a copula with tail dependence. In the rest of the chapter we use the results obtained for monthly data.
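The standard closed-form tail dependence coefficients for the two components combine linearly under the convex sum, since tail dependence is a limit of the copula itself. The sketch below assumes w weights the Gumbel component (our convention, not stated in the chapter); with the estimates above it yields values in the same neighbourhood as the reported λ̂u = 0.31 and λ̂l = 0.30:

```python
# Tail dependence of the convex sum w * Gumbel(theta1) + (1 - w) * Clayton(theta2).
def gumbel_upper_tail(theta):
    # Gumbel: lambda_u = 2 - 2^(1/theta); no lower tail dependence
    return 2.0 - 2.0 ** (1.0 / theta)

def clayton_lower_tail(theta):
    # Clayton: lambda_l = 2^(-1/theta); no upper tail dependence
    return 2.0 ** (-1.0 / theta)

def gumclay_tails(theta1, theta2, w):
    lam_u = w * gumbel_upper_tail(theta1)            # only Gumbel contributes above
    lam_l = (1.0 - w) * clayton_lower_tail(theta2)   # only Clayton contributes below
    return lam_u, lam_l

lam_u, lam_l = gumclay_tails(1.9, 1.7, 0.6)
```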

8.5 Strategic allocation for CBs and SWFs

We can now perform strategic asset allocations for CBs and SWFs. For
CBs we propose a portfolio with US, Euro and UK asset classes. The target
return for this portfolio is designed to beat the JPM global government
index by 25 basis points, or 4.06%. Next we consider the case of an SWF
portfolio and we take a nominal rate of 5.2% as the target return. It represents the average of historical inflation in the US, Euro area and UK over the period September 1998–September 2008, i.e. 2.2%, plus 3%. But SWFs
would rather seek inflation protection over a shorter time horizon, let us
say, annually. In that specific case, we include inflation in the Monte Carlo
simulation. This requires determining the marginal distribution of infla-
tion and its dependence structure with the other asset classes. Hence, the
future outcome of inflation and the asset classes will be linked by the cop-
ula. This case is investigated in the last part of this section and we show
the differences we observe between the portfolios obtained for a given
nominal rate.
We consider four approaches. For the classical mean–variance method,
we are going to minimize the variance of the portfolio to get the target
return. In mean–VaR and mean–ES, the risk measures are expressed as
positive terms so we are going to minimize these measures to reach our
objectives. The confidence level ␣ we use is 95%. For the Omega func-
tion, a higher value indicates a better portfolio. Since here we consider
the negative of the Omega value, we seek to minimize its value in our
optimizer.
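The three measures can be computed directly on the simulated return scenarios; the sketch below follows the sign conventions just described (VaR and ES as positive numbers, Omega negated so that all three are minimized), with a toy Gaussian scenario set in place of the chapter's simulations:

```python
import numpy as np

def var_es_omega(returns, alpha=0.95, threshold=0.0):
    """VaR and ES as positive terms, Omega negated, all to be minimized."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, alpha)                 # 95% Value-at-Risk
    es = losses[losses >= var].mean()                # expected shortfall beyond VaR
    gains = np.clip(returns - threshold, 0.0, None).mean()
    shortfalls = np.clip(threshold - returns, 0.0, None).mean()
    omega = gains / shortfalls                       # Omega ratio at the threshold
    return var, es, -omega                           # negate Omega so lower is better

rng = np.random.default_rng(0)
sims = rng.normal(0.004, 0.02, size=100_000)         # toy monthly return scenarios
var95, es95, neg_omega = var_es_omega(sims)
```

Any of the three outputs can then serve as the objective handed to the Threshold Accepting optimizer, with the expected-return target imposed as a constraint.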


8.5.1 CB case: Figure 8.4


For this portfolio the mean–variance optimization allocates only to US mortgages: 0–3 Y, Euro quasi government: 1–3 Y and Euro Pfandbriefe: 1–3 Y, whereas the other methods put some weight on UK Gilts and Euro government. Mean–variance is the optimization method that diversifies the least.
This approach provides a portfolio with low volatility asset classes which
is what we can expect from an asset allocation based on volatility (in fact

these three asset classes have the lowest monthly volatility; see Table 8.4). Now, by looking closely at the kurtosis of US mortgages: 0–3 Y, we notice that it is well above three and, in this specific case, the mean–variance method neglects the fat-tail risk.
We can observe that the mean–VaR and mean–ES methods reduce the
weight to this asset class and mean–Omega does not allocate to it at all.

UK Gilts: 3–5 Years

UK Gilts: 1–3 Years

EURO Pfandbriefe: 3–5 Years

EURO Pfandbriefe: 1–3 Years

EURO Quasi Gov: 3–5 Years

EURO Quasi Gov: 1–3 Years

EURO Gov: 3–5 Years

EURO Gov: 1–3 Years

US Mortgages: 3–5 Years


Mean/Variance
US Mortgages: 0–3 Years Mean/VaR
Mean/Expected shortfall
US Agen: 3–5 Years Mean/Omega

US Agen: 1–3 Years

US Gov: 3–5 Years

US Gov: 1–3 Years

0 10 20 30 40 50 60
(In %)

Figure 8.4 Strategic asset allocation for a global government portfolio. The target
return of this portfolio is the average total return of JPM global government over the
period 19982008 plus 25 basis points


The characteristic of the mean–VaR and mean–ES is that the optimizer will
put more weight on asset classes with low kurtosis. Nevertheless, we cannot
conclude that skewness and mean have a significant impact on the portfolio
selection. The Omega function is a more global risk measure as it takes into account the entire portfolio distribution. As with the two previous approaches, Omega allocates to low kurtosis asset classes, but it also allocates to those with high absolute skewness and to high mean asset classes, such

as UK Gilts.
Remark 1: It is interesting to notice that none of the optimizations allocate
to US government asset classes. When we consider the short part of the
curve, we may argue that the Federal Reserve has had a more active monet-
ary policy than the European or UK CBs, hence creating more volatility in
its fixed income market. This fact is well reflected in the US kurtosis, which
is higher than for the two other markets.

8.5.2 SWF case: nominal rate


The portfolio returned by the mean–variance method is once again not very
well diversified. It proposes to allocate in seven asset classes and two of
them represent more than 50% of the portfolio. Moreover 25% of the total
portfolio is in the US ABS, which seems high given the 2007 Subprime cri-
sis. In fact, the volatility of this asset class remains below the other asset
classes, and the optimization on volatility allocates naturally to this asset
class. If we compare this portfolio with the ones given by the other three
methods, we can observe that the number of asset classes ranges from 14
(for the mean–ES) to 19 asset classes (for the mean–Omega), and none of the
asset classes in these portfolios has a weight above 25%. Hence, the three
other methods definitely improve the asset class diversification. Also, the
post-Markowitz methods reduce the exposure to US ABS significantly, as
they take into account the fat-tail risk.
We also observe that the allocation in risky asset classes (not government
or government-related) is quite significant even though the average return
over the period 1998–2008 is not very attractive. The allocation is 15% with
the classical approach and 24% to 46% for the Omega function approach.
Thus, we observe that the diversification comes from equities, high yield
and also from emerging countries. There is also a preference for Euro and
UK corporate bonds compared to the US sector, which performed the worst
over the last decade.
Remark 2: If we compare this portfolio to the allocation for CBs, the allo-
cation includes the US government asset classes. Here we considered longer
maturity asset classes, one to five years for CBs and one to ten years for SWFs.
So, the volatility induced on the short part of the curve by the activism of the
Federal Reserve seems diluted and we also observe a decrease in the kurtosis.


8.5.3 SWFs case: OECD inflation + 3% – Figure 8.5


In order to provide protection against inflation to SWFs, we incorporate
inflation in our Monte Carlo scenario generator. Thus, inflation is now con-
sidered as an asset class for which the distribution has to be determined.
The time series considered for inflation is the monthly change of the annual
inflation rate over the same period as for the other asset classes, i.e. the last
ten years. This transformation permits us to suppress a large part of the
seasonality effect contained in inflation. We calculated that the kurtosis is
above three and we find that the NIG represents inflation the best. We also
find that the dependence structure between inflation and the other asset
classes is Gaussian with negative correlations. Nevertheless, as the number of GumClay copulas selected remains the highest among the copulas in the SWF asset classes’ universe, we keep this copula for the Monte Carlo
simulation.
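Once inflation is simulated jointly with the asset classes, the inflation + 3% target becomes a scenario-by-scenario comparison rather than a fixed number. The toy below illustrates the idea with a common shock standing in for the copula dependence; all parameters are invented, not the chapter's fitted model:

```python
import numpy as np

rng = np.random.default_rng(5)
n_scen = 10_000

# Toy joint annual scenarios: asset returns and inflation share a common
# shock so that they co-move, mimicking the copula-linked simulation.
shock = rng.standard_normal(n_scen)
asset_returns = np.column_stack([
    0.045 + 0.015 * shock + 0.05 * rng.standard_normal(n_scen),   # bonds
    0.070 + 0.020 * shock + 0.15 * rng.standard_normal(n_scen),   # equities
])
inflation = 0.022 + 0.010 * shock + 0.005 * rng.standard_normal(n_scen)

def real_shortfall_prob(weights, spread=0.03):
    """Probability that the portfolio misses the inflation + 3% target."""
    port = asset_returns @ np.asarray(weights)
    return np.mean(port < inflation + spread)

p_bonds = real_shortfall_prob([1.0, 0.0])
p_mixed = real_shortfall_prob([0.6, 0.4])
```

In the full process this shortfall (or the corresponding VaR, ES or Omega on excess returns over inflation + 3%) replaces the nominal-rate objective in the optimizer.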
The optimizations are done to generate a return equal to inflation + 3%.
Figure 8.5 presents the optimized portfolios, (a1) to (c1), at the sector level5 for
the three post-Markowitz approaches: mean–VaR, mean–Expected Shortfall
and mean–Omega. We compare these allocations to the ones obtained for
the previous SWFs case, (a2) to (c2), where the target return is expressed as a
nominal rate. The differences between the portfolios are represented in the
bar charts on the bottom of Figure 8.5.
When using the sector view, we note that the asset allocations (a2)
and (b2) are quite similar. In fact, the government sector has the biggest
weight, followed by the covered bonds and credit sectors. The exposure to equity is quite low and can be explained by the poor performance of this sector over the last ten years. With this view, we observe once again that
the mean–ES method reduces the exposure to covered bonds as we saw
in the previous paragraph. The asset allocation with the mean–Omega
approach is different, putting more weight on credit and increasing the exposure to equity from 3% to 5%. Now, if we consider the asset allocation
in real terms, then portfolios (a1) and (b1) returned by the mean–VaR and
mean–ES approaches are different. We note that the optimizers increase
the weight to the government and equity sectors, respectively +10% and
+1% for the mean–VaR method and +5% each for the mean–ES approach. It is quite surprising not to observe a stronger increase in the equity sector, as it has often been said that the equity market protects well against inflation. The mean–Omega approach provides a very different result by allocating 35% more to the government sector. At the same time, the weight to
credit and equity is strongly decreased and the exposure to covered bonds
is unchanged. In fact, with the Omega function in conjunction with VaR
and ES, it seems that the best protection against inflation is the govern-
ment bond sector.

174 Cyril Caillault and Stéphane Monier

SWFs case for a target return of OECD inflation + 3% (sector weights):
(a1) Mean–VaR: Government 55%, Covered bonds 23%, Credit 18%, Equity 4%
(b1) Mean–Expected Shortfall: Government 49%, Covered bonds 23%, Credit 21%, Equity 7%
(c1) Mean–Omega: Government 66%, Covered bonds 29%, Credit 5%, Equity 0%

SWFs case for a target return expressed in nominal rate (sector weights):
(a2) Mean–VaR: Government 44%, Covered bonds 32%, Credit 21%, Equity 3%
(b2) Mean–Expected Shortfall: Government 44%, Covered bonds 28%, Credit 26%, Equity 2%
(c2) Mean–Omega: Government 32%, Covered bonds 23%, Credit 40%, Equity 5%

[Bar charts showing the weight differences between the inflation-target and nominal-target portfolios omitted.]

Figure 8.5 Portfolios (a1) to (c1) and (a2) to (c2). Portfolios (a1) to (c1) are obtained by using the mean–VaR, mean–ES and mean–Omega methods and for a real yield target return. The asset classes are joined by sectors. Portfolios (a2) to (c2) are obtained using the three post-Markowitz approaches for a nominal rate target return. The bar charts represent the differences between the portfolios.


8.6 Conclusions

In this chapter we have compared several methods for performing optimal asset allocations for CBs and SWFs. First we highlighted that the choice of
the asset classes’ distribution at the marginal and multivariate levels is crucial
and has a significant impact on the portfolio allocation. This improvement
can lead us to build less risky portfolios when the appropriate distribution is used. Also, by using the AIC and the Kolmogorov–Smirnov test we show that
the NIG can be applied to the fixed income markets. To our knowledge, this
is the first time that this distribution has been applied in this way. While
the appropriateness of this distribution for riskier fixed income asset classes
is not a real surprise, the use of a NIG for the short maturity US treasury
bonds does suggest some interesting questions. Is the monetary policy pur-
sued by the Federal Reserve smooth enough to ensure price stability? Does
the Federal Reserve induce too much volatility to maintain confidence in the
market? We will not address these questions here but as we have pointed out,
there is a real difference to the European monetary policy. The copula ana-
lysis has shown that the dependence structure is almost symmetric for the
CBs’ and SWFs’ universe of asset classes. It is interesting to note the use of the
Gumclay copula as the appropriate copula. Secondly, we demonstrate that
the choice of the risk measure is also very important. Indeed, the mean–VaR,
mean–ES and mean–Omega methods return portfolios with more diversification than does mean–variance. The first two post-Markowitz methods really take into account fat-tail risk, and the mean–ES tends to reduce the
exposure to fat-tail asset classes. That is the main difference that we observed
between these two methods. The Omega function allocates to the same type
of asset classes but tends to select the ones with higher returns. Finally, by
considering OECD inflation plus 3% as the target return for an SWF, we find
that the best protection against inflation is not the equity market, as is very
often argued, but rather the government bond sector.
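For reference, the Omega function of Keating and Shadwick (2002) used above can be computed directly from a return sample as the ratio of gains above a threshold to shortfalls below it; the threshold and sample values in this sketch are purely illustrative.

```python
import numpy as np

def omega(returns, threshold=0.0):
    """Omega ratio: sum of gains above the threshold divided by the
    sum of shortfalls below it (Keating and Shadwick, 2002)."""
    excess = np.asarray(returns, dtype=float) - threshold
    gains = excess[excess > 0].sum()
    losses = -excess[excess < 0].sum()
    return gains / losses

# Illustrative monthly returns: upside twice the downside
r_demo = np.array([0.02, -0.01, 0.03, -0.02, 0.01])
print(omega(r_demo))  # ≈ 2.0
```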
In this chapter we did not take into account the time dependence that is
often detected in financial time series. We believe that it also exists in fixed income markets and that further investigation will be necessary to choose the most appropriate model. Some researchers have already studied this
topic for the equity markets; see Fantazzini (forthcoming) and Patton et al.
(2006), for instance. This research will be the subject of another article.

Notes
This chapter is a short version of the initial article. The latter can be obtained upon request.

Dr. Cyril Caillault, CIO, Quantitative Strategies Fixed Income and currencies,
Fortis Investments, 82 Bishopgate, London EC2N 4BN United Kingdom. Tel.
00442070637230, [email protected]


Stéphane Monier, CFA, CIO Fixed Income and Currencies, Fortis Investments, 82 Bishopgate, London EC2N 4BN United Kingdom. Tel. 00442070637162, stephane [email protected]

1. RiskMetrics technical document (1996)


2. Artzner et al. (1997), Rockafellar and Uryasev (2002)
3. Keating and Shadwick (2002)
4. Nobel prize in Economics 2003.
5. Details on the Normal Inverse Gaussian distribution (NIG) can be found in Barndorff-Nielsen (1997) and Guégan and Houdain (2005).
6. The Generalized Student-t (GT) distribution is introduced in Hansen (1994). See
Jondeau and Rockinger (2003) for some applications.
7. All material used on copula functions is described in the following references:
Embrechts et al. (1999, 2002), Frees and Valdez (1998), Breymann et al. (2003),
Sklar (1959), Joe (1993), Nelsen (1999) and Caillault and Guégan (2005).
8. The sectors that we use are: government, covered bonds, credit and equity. In the
government sector we combine US, Euro, and UK government bonds, US Agency
and Euro quasi government. The covered bonds sector contains US ABS, US mort-
gages and Euro Pfandbriefe asset classes. In the credit sector we join the following
asset classes: US, Euro, UK credit investment grade; US and Euro high yield and
emerging credit. The equity sector is the S&P 500 index. Please refer to Table 8.4
for the details of the asset classes used in this chapter.

Bibliography
Akaike, H. (1974), 'A New Look at the Statistical Model Identification', IEEE Transactions
on Automatic Control, AC-19, 716–723.
Artzner, P., Delbaen, F., Eber, J. and Heath, D. (1997), ‘Thinking Coherently’, Risk,
10, 68–71.
Barndorff-Nielsen, O.E. (1997), ‘Normal Inverse Gaussian Processes and Stochastic
Volatility Modelling’, Scandinavian Journal of Statistics, 24, 1–13.
Bollerslev, T. (1986), ‘Generalized Autoregressive Conditional Heteroscedasticity’,
Journal of Econometrics, 31, 307–327.
Breymann, W., Dias, A. and Embrechts, P. (2003), 'Dependence Structures for
Multivariate High-frequency Data in Finance’, Quantitative Finance, 3, 1–14.
Caillault, C. and Guégan, D. (2005), 'Empirical Estimation of Tail Dependence Using
Copulas. Application to Asian Markets’, Quantitative Finance, 5, 489–501.
Caillault, C. and Guégan, D. (2008), ‘Forecasting VaR and Expected Shortfall
Using Dynamical Systems: A Risk Management Strategy’, Frontiers in Finance and
Economics, 6, 1, 26–50.
Dynkin, L., Gould, A. and Hyman, J. (2007), Quantitative Management of Bond Portfolios,
Princeton University Press, Princeton.
Embrechts, P., McNeil, A. and Straumann, D. (1999), ‘Correlation: Pitfalls and
Alternatives, a Short, Non-Technical Article’, RISK Magazine, May, 69–71.
Embrechts, P., McNeil, A. and Straumann, D. (2002), ‘Correlation and Dependence
in Risk Management: Properties and Pitfalls', in Dempster, M.A.H. (ed.) Risk
Management: Value at Risk and Beyond, Cambridge University Press, Cambridge,
176–223.
Engle, R.F. (1982), ‘Autoregressive Conditional Heteroscedasticity with Estimates of
the Variance of the United Kingdom Inflation’, Econometrica, 50, 987–1007.


Fantazzini, D. (Forthcoming), 'Dynamic Copula Modelling for Value at Risk', Frontiers in Finance and Economics.
Frees, E.W. and Valdez, E. A. (1998), ‘Understanding Relationships Using Copulas’,
North American Actuarial Journal, 2, 1–25.
Gilli, M., Këllezi, E. and Hysi, H. (2006), ‘A Data-Driven Optimization Heuristic for
Downside Risk Minimization’, Journal of Risk 8(3), 1–19.
Guégan, D. and Houdain, J. (2006), ‘Hedging Tranches Index Products: Illustration of
Model Dependency’, The ICFAI Journal of Derivatives Markets, 3, 39–61.

Hamilton, J. (1988), ‘Rational Expectations Econometric Analysis of Changes in
Regime: An Investigation of the Term Structure of Interest Rates’, Journal of Economic
Dynamics and Control, 12, 385–423.
Hansen, B.E. (1994), ‘Autoregressive Conditional Density Estimation’, International
Economic Review 35(3), 705–730.
Joe, H. (1993), ‘Parametric Families of Multivariate Distributions with Given Margins’,
Journal of Multivariate Analysis, 46, 262–282.
Jondeau, E. and Rockinger, M. (2006), 'The Copula-GARCH Model of Conditional
Dependencies: An International Stock Market Application’, Journal of International
Money and Finance, Elsevier, 25(5), 827–853.
J.P. Morgan (1996), RiskMetrics Technical Document, 4th Edition, J.P. Morgan, New
York.
Keating, C. and Shadwick, W. F. (2002), A Universal Performance Measure, The Finance
Development Centre.
Nelsen, R. (1999), 'An Introduction to Copulas', Lecture Notes in Statistics 139, Springer
Verlag, New York.
Patton, A.J. (2004), ‘On the Out-of-Sample Importance of Skewness and Asymmetric
Dependence for Asset Allocation’, Journal of Financial Econometrics, 2(1), 130–168.
Rockafellar, R.T. and Uryasev, S. (2002), ‘Conditional Value-at-Risk for General Loss
Distributions’, Journal of Banking and Finance, 26, 1443–1471.
Sklar, A. (1959), ‘Fonctions de Répartition à n Dimensions et leurs Marges’, Publications
de l’Institut de Statistique de L’Université de Paris, 8, 229–231.
Weinberger, F. and Golub, B. (2007), Asset Allocation and Risk Management for Sovereign
Wealth Funds, Central Banking Publications, London, UK.

9
Practical Scenario-Dependent
Portfolio Optimization: A Framework
to Combine Investor Views and
Quantitative Discipline into
Acceptable Portfolio Decisions
Roberts L. Grava

9.1 Introduction

Depending on the institution, personnel, teams or culture, getting market-facing professionals with specific investment views to become enthusiastic
about processes that impose quantitative discipline around their decisions
can be a challenge of varying difficulty. At the risk of greatly oversimplify-
ing matters, front office staff have exhibited tendencies to be less enthusias-
tic about utility functions, risk aversion parameters, yield curve factors and
forecast confidence levels, and more enthusiastic about their own concrete views about levels or prices in the financial markets they transact in, and fairly confident about investment decisions taken based on these intuitive
views. With increasing dimensions of investment decision-making, how-
ever (multiple markets, curves, currency risk, credit risk, sector decisions,
etc.), quantitative and computational assistance become quite important in
reaching investment decisions that yield acceptable results.
This chapter outlines a framework that uses a minimum of inputs from
portfolio managers or investment strategists, in a format ‘native’ to their
habitat: horizon expectations for headline government interest rates, sec-
tor spreads, FX rates and equity index levels for a base case scenario, and
as many risk scenarios as they feel are appropriate. Users do not have to
specify confidence levels for their base case or the probability of each risk
scenario occurring, instead specifying a minimum level of desired return,
or a maximum amount of acceptable loss or underperformance for each
risk scenario. Finally, a downside risk constraint for the entire portfolio is
specified.


The result of this process satisfies several portfolio design goals: with
the inclusion of investor/strategist views, the process maximizes portfolio
return when the base case scenario, representing the investors’ ‘best ideas’,
materializes. At the same time, total portfolio risk is contained by imple-
menting constraints on downside risk, based on distributions of forward-
looking asset/portfolio returns generated using more neutral assumptions.
Finally, maximum loss or underperformance constraints on user-generated risk scenarios mean that portfolio returns will at least be acceptable in the
event of concretely specified alternative scenarios – in effect, stress-testing
is ‘baked in’ to the optimization process.

9.2 Institutional issues

In many institutions, in both official and private sectors, different types


of stakeholders have inputs into asset management processes and live with
the results of investment decisions, but these representatives from differ-
ent parts of the institution have different strengths and weaknesses. At the
risk of wading into sweeping generalizations, front office staff (portfolio
managers, strategists) possess greater market insight than other team mem-
bers, due to their focus on the day-to-day developments in the financial
markets they transact in or think about. Consequently, their intuitive feel
for market direction translates very well into confidence about qualitative
investment decisions in the markets they cover. An experienced portfolio
manager should have no problem in expressing preferences to be long or
short duration, currencies, credit, etc., or to express themes related to curve
positions or sector strategies. At the same time, while it may be easy to
favor certain concrete trades or strategies, the impact of individual trades
on entire portfolios is much less intuitive. Indeed, as the dimensions of
risk-taking increase to include multiple currencies, curves, sectors, asset
classes and strategies, the ultimate impact of individual positions is more
difficult to understand without the help of computational and quantitative
assistance.
This leads to the strengths of middle office staff (risk managers, quanti-
tative specialists, etc.). Such employees should be well-trained in the quan-
titative techniques needed to make sophisticated calculations about the
effect of individual investment decisions on entire portfolios. They can
evaluate the relative attractiveness of various investment themes given
certain assumptions, can help size certain positions to meet broad risk
criteria, and, given the right tools and methodologies, can be invaluable
in portfolio construction exercises. The weaknesses of quantitative staff,
however, can include lack of actual trading and market experience, and a
resulting inability to fully understand the rationale behind certain front
office decisions. Furthermore, middle office experts can suffer from the
tendency to focus on quantitative processes, working to ensure that they are sophisticated, elegant and/or robust, while losing sight of the main
goal – ensuring that their institutions generate the best risk-adjusted
returns possible.
Institutions have a third set of stakeholders, slightly removed from
the investment process. We will call them senior management, but this
moniker will encompass a broad array of people including, possibly,
department heads, board members, plan sponsors or clients of external managers who do not run their own internal investment processes. The
important and useful input from this sector includes the ultimate risk tol-
erance of an institution or an investment mandate. Unfortunately, these
stakeholders do not always have the financial expertise to communicate
these risk tolerance parameters efficiently to investment experts. While
‘senior management’ may have an implicit feeling for what constitutes
a bad outcome, they may not be able to distil this into a quantitative
process-friendly parameter such as a utility function or a concrete down-
side risk measure.
The goal of our portfolio construction framework is to leverage the best
inputs available from each stakeholder, while not burdening them for inputs
related to their weak areas. The net result should be a process that is inclu-
sive of the viewpoints of these various team members, satisfies each of their
concerns, generates a sense of ownership for this interactive portfolio con-
struction framework and ultimately results in efficient portfolios that meet
several design criteria.

9.3 Portfolio design goals

The primary goal of portfolios derived under this approach is to maximize returns in the case in which the investors' base case scenario materializes.
Investors produce a forecast of key factors determining asset class returns
that represent their ‘best ideas’ for the way financial markets may develop
over the immediate investment horizon.
The scale of investment position sizes is limited by total portfolio risk con-
straints. Because real-world investors tend to focus on downside risk (and
not symmetric volatility measures), and because the optimization process
easily handles this metric, Conditional Value-at-Risk (Expected Shortfall)
will be the risk metric used in this framework. The probability distributions
describing asset class, and, ultimately, portfolio returns, will be based on a
set of stochastic scenarios derived from a forward-looking return simulation
methodology described later in this chapter.
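From a set of simulated scenario returns, VaR and CVaR can be estimated directly: VaR is the loss quantile at the chosen confidence level, and CVaR is the average of the losses at or beyond it. A minimal sketch, in which the 95% level and equally weighted scenarios are assumptions:

```python
import numpy as np

def var_cvar(returns, alpha=0.95):
    """VaR and CVaR (Expected Shortfall) from scenario returns, both
    reported as positive loss figures; scenarios are equally weighted
    and CVaR averages the losses at or beyond VaR."""
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()
    return var, cvar

# Deterministic illustration: scenario losses of 1, 2, ..., 100
demo_returns = -np.arange(1.0, 101.0)
v, es = var_cvar(demo_returns, alpha=0.95)
print(v, es)  # VaR ≈ 95.05, CVaR (ES) = 98.0
```

Because CVaR averages everything beyond the VaR slice, it is always at least as large as VaR and better reflects the severity of the left tail.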
Finally, investors produce discrete ‘risk’ scenarios to describe alternate
outcomes away from their base case. Framework users do not need to ascribe
probabilities to the base or risk scenarios, as the optimization process will
handle the two types of scenarios differently. The process, as mentioned,
will maximize results under the base case as the objective function, and


[Chart: a fan of stochastic return scenarios over a 12-month horizon, with returns ranging from roughly −6% to 14%, shown together with the 'best idea' base case scenario, two user-defined risk scenarios and the downside risk band.]

Figure 9.1 Scenarios used in the framework. Stochastic return paths are combined with user-defined scenarios in the optimization process.

users will be able to specify levels of 'regret', minimum return or maximum underperformance, for each of the risk scenarios separately.
Figure 9.1 illustrates the various scenarios used in this process, and it is
important to note the role each of them plays in portfolio optimization.
Most broadly, a range of stochastic scenarios defines a range of potential
outcomes for each asset class, and forms the basis for portfolio risk (espe-
cially downside risk) calculations. The ‘best idea’ base case scenario is the
investment strategists’ core forecast for the return of the asset class. As will
be described later, the investor will not forecast the actual return of the asset
class, but rather the factors influencing the return.
At this point it should be pointed out that the base case return does not
need to equal the mean or median return of the underlying broad set of
simulated scenarios. The purpose of the optimization process is to utilize
the investors’ best idea about asset returns, whether it is significantly opti-
mistic, fairly pessimistic, or somewhat neutral. The simulated return dis-
tribution, however, is based upon a more neutral set of assumptions and
imposes discipline, as the return distribution of the simulated scenarios will
be used to calculate asset class and portfolio risk.
Finally, risk scenarios are also specified by portfolio managers or invest-
ment strategists. While it may be convenient to think in terms of two ‘wing’
scenarios, one optimistic and one pessimistic, this framework handles any
number of risk scenarios, which do not need to be in opposite directions.
In a multi-currency and multi-asset class example, scenarios could relate
to investment themes, such as ‘global growth slowdown’, or ‘commodity
shock’, with resultant expected returns for various types of assets.


9.4 Process inputs

To keep the process interesting for all users, team members are only required
to provide inputs germane to their habitats. Portfolio managers will provide
horizon forecasts for interest rates, spreads, exchange rates and index levels
where appropriate, and quantitative specialists will translate these inputs into
expected returns for relevant asset classes. This allows investment strategists to share information in a format that they already are familiar with, without requiring them to undertake the cumbersome and unnecessary process
of forecasting asset returns based on these assumptions. Without quantita-
tive and computational discipline, individual asset class forecasts could well
result in inconsistencies. A limited set of inputs (rates, spreads, etc.) allows
front office staff to check for the intuitive consistency of their views.
The quantitative specialists will do the heaviest lifting in generating
inputs for the portfolio optimization process. They will need to generate
expected asset class returns based on input from the investment teams, but
this should be fairly straightforward with some computational assistance.
More interestingly, the middle office staff will generate the simulated for-
ward-looking return scenario sets that define the potential outcome of asset
returns, and are the key ingredient for calculating asset class and portfolio
risk parameters.
To do this, they will use a combination of current market conditions
and historic information to generate forward-looking stochastic scenarios
describing input paths, and ultimately asset returns. For fixed income asset
classes, the main ingredients are headline interest rates, including both cur-
rent levels and a history of past yield moves. A bootstrap-based simulation
process will generate interest rate paths which start at current yield levels
and progress based on sampled yield moves. In the yield path simulations,
a lower bound of zero is placed on yields to avoid the occurrence of nega-
tive yields that may otherwise be simulated, especially in an environment
where beginning interest rates are low. Yield paths are simulated for each
part of a yield curve, and asset class return paths are then derived from this
information. The actual methodology of calculating the returns from yield
curve paths is beyond the scope of this chapter, but the process does cap-
ture return from coupon income, capital gains and losses from interest rate
moves and roll-down. A graphical representation of the simulated yield and
return paths can be found in Figure 9.2.
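The yield-path bootstrap described above can be sketched as follows; the starting yield, the toy history of monthly moves and the path counts are illustrative assumptions, not the chapter's data.

```python
import numpy as np

def simulate_yield_paths(y0, hist_changes, n_paths, n_months, rng):
    """Bootstrap yield paths: start each path at the current yield y0,
    add monthly changes resampled with replacement from history, and
    floor the simulated yields at zero."""
    draws = rng.choice(hist_changes, size=(n_paths, n_months), replace=True)
    paths = np.empty((n_paths, n_months + 1))
    paths[:, 0] = y0
    for t in range(n_months):
        paths[:, t + 1] = np.maximum(paths[:, t] + draws[:, t], 0.0)
    return paths

rng = np.random.default_rng(42)
hist = rng.normal(0.0, 0.20, size=120)  # stand-in for ten years of monthly yield moves (%)
paths = simulate_yield_paths(4.5, hist, n_paths=1000, n_months=12, rng=rng)
print(paths.shape)               # (1000, 13): 1000 paths over a 12-month horizon
print(bool(paths.min() >= 0.0))  # True: the zero lower bound holds
```

Applying the floor at each step, rather than only at the horizon, matches the lower bound described above even when starting yields are low.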
As the purpose of these distributions is to reflect a neutral range of out-
comes, users must agree on appropriate assumptions for mean yield, spread,
exchange rate and index paths. This might suggest the assumption of
unchanged yields, or yields converging to forward rates, for the mean yield paths. The
process can certainly accommodate other assumptions for mean paths, such
as consensus forecasts or user scenarios, but this approach would be more
difficult to defend conceptually.


[Charts: simulated yield paths (left panel) and the corresponding simulated return paths (right panel) over a 12-month horizon.]

Figure 9.2 Simulated interest rate and return paths. For each simulated path of yields there is a corresponding simulated return path. The black bands represent 5th percentile values.

For non-government fixed income asset classes, the extra dimension of spread movement is added. The bootstrap simulation process will generate
spread paths based on current spreads and historic spread volatility. These
spread paths will be added back to relevant simulated government yield
paths, and return paths of spread-type securities will be calculated. The
return calculation methodology is a bit more complicated than the one used
for government asset classes, as factors for defaultable securities need to be
taken into account. For such asset classes, further adjustments include the
return consequences of the risk and cost of ratings migration, the probabil-
ity and severity of default and, finally, a convexity adjustment to penalize
for significant volatility. While most of these asset classes are not formally
negatively convex, such a penalty allows users to capture the effects of
incomplete indexation – in the simulation model, spread assets will earn
relatively less when spreads tighten by a large amount, and underperform
by a greater margin when spreads widen significantly.
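A stylized version of these return adjustments, with the expected default loss and the quadratic spread-convexity penalty entering as deductions from carry, might look as follows; the function and all parameter values are illustrative assumptions rather than the chapter's exact methodology.

```python
def spread_asset_return(carry, duration, dy, spread_dur, ds,
                        default_prob=0.0, default_severity=0.0,
                        convexity_penalty=0.0):
    """First-order horizon return for a spread asset: income carry
    minus rate and spread moves scaled by their durations, minus the
    expected default loss, minus a quadratic penalty on large spread
    moves (a stand-in for the convexity adjustment described above)."""
    return (carry
            - duration * dy
            - spread_dur * ds
            - default_prob * default_severity
            - convexity_penalty * ds**2)

# Example: a 6%-carry credit asset in a modest spread-widening scenario
ret = spread_asset_return(carry=0.06, duration=4.0, dy=0.002,
                          spread_dur=3.5, ds=0.01,
                          default_prob=0.01, default_severity=0.6,
                          convexity_penalty=10.0)
print(round(ret, 4))  # ≈ 0.01
```

Because the penalty is quadratic in the spread move, the asset gives back relatively more return for large moves in either direction, mimicking the incomplete-indexation effect described above.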
For currencies, equities, and other asset classes, historic volatility is used
around a set of ‘neutral’ expected returns. For currencies, the expected
return might be zero; for risk asset classes, the expected return might be
a consensus estimate, or even a historic return, though the former would
be preferable to remain consistent with the forward-looking nature of this
simulation methodology.
The asset class return probability distributions generated by the bootstrap
simulation process may turn out to be asymmetrical, misshapen, fat-tailed,
or generally non-normal, and this should be considered a positive aspect
of the entire framework. As many financial asset classes are not normally distributed, and one of the goals of this portfolio optimization framework is to focus on left tail risk, the generation and use of plausible non-Gaussian distributions will help to better focus on the downside risk parameters
that may be unique to each asset class. Put another way, transforming the
simulation results into normal distributions would be an unwanted and
unnecessary simplifying assumption, which might well throw away useful
data about scenarios in extreme cases.

Rounding out the suite of process inputs, users need to specify their regret
tolerance, or minimum return requirements, for each of the risk scenarios.
For example, in the case of an extreme event scenario, users might specify
an acceptable minimum return that could even be lower than the one used
for the broad downside risk target. For scenarios that only slightly modify
the base case, or have a higher implied probability, the minimum return
requirement might be higher. Depending on whether the optimization pro-
cess is run on a passive/strategic basis or an active/tactical one, the regret
constraints would be set as absolute returns, or relative returns vs. a bench-
mark, respectively.

9.5 Portfolio optimization

In a series of papers, Uryasev, Krokhmal, Palmquist and Rockafellar (see Uryasev 2000, Rockafellar and Uryasev 2000 and Uryasev et al. 2002) outline methodologies for portfolio optimization when Conditional Value-at-Risk (CVaR) is used as a target or a constraint, and these methodologies work
extremely well in our portfolio design process for several reasons. First, one
of our goals is to focus on downside portfolio risk, and CVaR is an excellent
metric to capture this. While VaR is, in effect, a snapshot of a slice of a prob-
ability distribution, CVaR, which measures the average of all observations
less than VaR, is a better measure to identify the severity of events in the
left tails of distributions. CVaR is also mathematically better behaved than
VaR in the optimization process, and in any event the process calculates VaR
while optimizing CVaR simultaneously. Moreover, CVaR-efficient portfolios will also be VaR-efficient.
This optimization process does not need to assume Gaussian distributions,
and handles each discrete set of simulated asset class returns as a separate
scenario. This is a significant benefit, as it can give us better estimates of tail
risk by not assuming normality. Finally, the whole optimization exercise
can be set up as a linear programming problem, and very large problems can
be solved using simple (or at least cheap) optimization libraries.
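Concretely, combining this chapter's design goals with the standard Rockafellar–Uryasev linearization of CVaR yields a linear program of the following form, where w are portfolio weights, b the base-case scenario returns, r_s the S simulated scenario returns, r_k and m_k the risk-scenario returns and their regret floors, and zeta and u_s auxiliary variables (the notation is ours, not the chapter's):

```latex
\begin{aligned}
\max_{w,\;\zeta,\;u}\quad & b^{\top} w
  && \text{(return under the base case scenario)}\\
\text{s.t.}\quad
  & \zeta + \frac{1}{(1-\alpha)S}\sum_{s=1}^{S} u_s \le c
  && \text{(CVaR limit over the $S$ stochastic scenarios)}\\
  & u_s \ge -r_s^{\top} w - \zeta,\qquad u_s \ge 0,
  && s = 1,\dots,S,\\
  & r_k^{\top} w \ge m_k
  && \text{(regret floor for each user risk scenario $k$)}\\
  & \textstyle\sum_i w_i = 1,\qquad w \ge 0.
\end{aligned}
```

Because the objective and every constraint are linear in (w, zeta, u), the problem can be handed to any LP solver; at the optimum, zeta approximates the portfolio VaR and the left-hand side of the first constraint equals the portfolio CVaR when the constraint binds.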
Our framework builds on that of Uryasev et al. with the specification of
objective functions and constraints unique to our portfolio design goals.
While the large set of stochastic return scenarios generated by the bootstrap
simulation process will serve as the main input for generating downside risk
(CVaR) calculations, the objective function of the portfolio optimization

10.1057/9780230251298 - Interest Rate Models, Asset Allocation and Quantitative Techniques for Central Banks and Sovereign Wealth
Funds, Edited by Arjan B. Berkelaar, Joachim Coche and Ken Nyholm
Scenario-Dependent Portfolio Optimization 185

problem will not be the set of average returns from the simulation analysis.
While these distributions, which represent a form of ‘no-view’ outcomes for
various asset classes (see Dynkin et al. 2005), are critical for the total port-
folio risk calculations, our expected horizon asset class returns will come
from the investors’ views, represented by the ‘best ideas’ base case scenario.
The intuition for this is the following: market-facing investors should (if
they are doing their jobs correctly) have views on market direction, and the
portfolio should profit if and when these views materialize. At the same
time, it would be presumptuous to think that the distribution of portfolio
outcomes under uncertainty (the results of the simulation analyses) should
be driven by using the investor views as the mean path of returns. If inves-
tors have a particularly optimistic view on a certain asset class and this view
were also used as the mean path of simulated returns, the simulated prob-
ability distribution would have a fairly optimistic skew, and downside risk
could be significantly underestimated.

Copyright material from www.palgraveconnect.com - licensed to Taiwan eBook Consortium - PalgraveConnect - 2011-03-03
A better way to parse the world, we think, is to use a neutral set of assump-
tions in generating the simulated return scenarios. These scenarios would
represent a universe of plausible forward-looking returns, and should conse-
quently be used for downside risk calculations. The base case scenario, while
representing investors’ best ideas, is just one possible outcome from a much
wider set of outcomes under uncertainty. Our optimization process so far
will position portfolios to perform well when investors are correct in their
forecasts, but constrain risk in the event they are not.
A single constraint on a broad downside risk measure, however, may not
be enough. Bringing the 'senior management' back into the mix, these
process participants may not even know what CVaR represents, much less
have an intuitive understanding of what effect such a constraint might have
on ultimate return outcomes. These participants should, however, understand
phrases such as ‘if this (specific bad scenario) happens, the portfolio will
still earn at least 1%’, for example. And a dialogue with these stakeholders
could take place to elicit risk tolerance by having them help define ‘bad out-
comes’ in adverse scenarios. While this is commonly the role of stress-test-
ing, often exercised after portfolio decisions have been made or suggested,
there is no reason to exclude this stress-testing in the portfolio optimization
process itself.
These risk scenarios can be included in the portfolio optimization exer-
cise by adding specific group bounds into the optimization problem. For
each risk scenario, the vector of expected asset returns under that scen-
ario defines the group, and lower bounds specifying minimum return
or maximum regret under that scenario are specified accordingly. Each
risk scenario can have its own minimum return specification. For a low-
probability but high-volatility scenario, the maximum regret could be
a lower return, while a higher probability scenario could have tighter
constraints.
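Expressed in solver terms, each risk-scenario floor is one extra linear inequality: the scenario's expected-return vector dotted with the portfolio weights must not fall below that scenario's minimum return. A minimal sketch follows; the function name is our own, and we assume the standard `A_ub @ x <= b_ub` convention used by linear-programming solvers.

```python
import numpy as np

def regret_constraint_rows(scenario_means, min_returns):
    """Build rows enforcing mu_k . w >= floor_k for each risk scenario k.
    Returned in the A_ub @ w <= b_ub convention, i.e. -mu_k . w <= -floor_k."""
    A_ub = -np.atleast_2d(np.asarray(scenario_means, dtype=float))
    b_ub = -np.asarray(min_returns, dtype=float)
    return A_ub, b_ub
```

For example, a crisis scenario with expected asset returns (−2%, +3%) and a minimum-return floor of +1% becomes the single row −(−0.02, 0.03) · w ≤ −0.01, which can be appended to the inequality constraints of the optimization problem.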

186 Roberts L. Grava

This approach has several benefits in executing an intuitive portfolio
optimization exercise. Users don't have to spend valuable time thinking
about probabilities of risk scenarios occurring; such an exercise is ques-
tionable anyway. A probability-weighted average of expected user-scenario
returns, if used as the objective function for optimization, is not neces-
sarily useful, as the weighted averages might not represent asset returns
that have any probability of occurrence in any scenario. Our framework
allows the separation of the 'best ideas' base case, to maximize portfolio
returns, from the risk scenarios, which limit loss (in an intuitive way) if
things go wrong.
It is important to note that the risk scenarios do not have to represent
stress events. The scenarios, as mentioned previously, can alternatively or
additionally represent investment themes, and can be used by front office
staff to ensure portfolio performance if different, but not necessarily stress-
ful, market outcomes unfold. Different scenarios could specify different
scales of outcomes for particular markets, and the optimization process
would capture these nuances and suggest appropriate portfolio positions
accordingly.
While the objective function for the portfolio exercise is clear (maxi-
mize returns under the base case), the interaction of the broad portfolio
risk constraints with the specific risk scenario constraints may be less
so. The two types of constraints should be considered complementary.
Depending on how users set up the specific risk constraints and resultant
performance tolerance, at some times the portfolio CVaR constraint will
be binding, and at other times one or more of the specific risk constraints
will be binding. There is certainly scope for users to get overenthusiastic
with the number and severity of risk constraints, resulting in less efficient
portfolios or possibly unsolvable optimization problems, but it is import-
ant to note that the main purpose of this entire approach is not to pro-
pose a hands-off ‘black box’ solution to generate efficient, but possibly
unintuitive portfolio allocations, but rather to encourage the input of vari-
ous stakeholders with various expertise and insight, and leverage existing
methodologies to combine various views and concerns into portfolios that
satisfy several constraints.

9.6 The approach in practice

One of the chief benefits of this framework, beyond ultimately recommend-
ing acceptable portfolios, is that it encourages users to engage in dialogue
about various assumptions, which should ultimately result in greater con-
sistency of investment views. As an example, investors might be asked not
only to provide yield, spread, etc. forecasts for the relevant user scenarios,
but also to intuitively specify portfolio positions that would perform well


given the scenarios. In effect, this is what they would do on a day-to-day
basis without the benefit of quantitative tools.
The middle office then does its work, and returns a preliminary set of
portfolio recommendations based on the first optimization results. More
often than not, these results will differ from the intuitive portfolio alloca-
tions first specified by investors. These differences are actually quite posi-
tive, as they represent opportunities for investors to re-evaluate some of
their initial assumptions. Perhaps the optimization engine is suggesting an
overweight in a particular asset class, which suggests that investors' initial
views on this asset’s returns were too optimistic. Alternatively, another opti-
mizer-suggested position may have identified an investment opportunity
that was not initially thought about in the first set of instinctive positions.
As the dimensions of risk-taking increase, it becomes more difficult for
humans unassisted by computational or quantitative tools to navigate the
universe of investment opportunities efficiently.
Ultimately, this framework works best as an iterative process. Investors
provide views, senior management provides guidance on the definition
of ‘bad outcomes’, and quantitative specialists provide their own work
as inputs into the optimization process. As long as the optimizer’s sug-
gestions differ dramatically from intuitive portfolio positions, all par-
ticipants have the opportunity to rethink their assumptions on horizon
market moves, risk tolerance and asset allocation, and to engage in an
active dialogue about these inputs. When the optimizer’s suggestions
coincide with at least the broad themes of the instinctive positions, the
process is done.

9.7 Conclusions

The framework described above is nothing more than an approach to
impose some quantitative discipline upon what are ultimately qualitative
investment decisions. The best use of this process is to foster an inter-
active dialogue between various stakeholders involved in asset manage-
ment; the quantitative outputs may shed interesting light on investor
intuition. When this dialogue happens efficiently, the optimization pro-
cess will result in portfolios that meet several compelling criteria: first,
the portfolio will be positioned to perform best when investors’ best ideas
about market developments materialize. When they do not, a comprehen-
sive forward-looking return simulation methodology, combined with the
optimization engine, allows portfolio downside risk to be constrained.
Finally, specific risk scenarios can be added, which allows stress-testing to
be ‘baked in’ to the optimization framework, and not added as an after-
thought. This process works equally well for strategic or tactical asset allo-
cation exercises.


Bibliography
Dynkin, L., Ben Dor, A., Hyman, J. and Phelps, B. (2005) Total Return Management of
Central Bank Reserves, Second Edition, Lehman Brothers Fixed Income Research.
Rockafellar, R.T. and Uryasev, S. (2000) ‘Optimization of Conditional Value-at-Risk’,
The Journal of Risk, 2(3), 21–41.
Uryasev, S. (2000) ‘Conditional Value-at-Risk: Optimization Algorithms and
Applications’, Financial Engineering News,14, February, 1–5.

Uryasev, S., Krokhmal, P. and Palmquist, J. (2002) ‘Portfolio Optimization with
Conditional Value-at-Risk Objective and Constraints’, The Journal of Risk, 4(2),
11–27.

10
Strategic Tilting around the SAA
Benchmark

Aaron Drew, Richard Frogley, Tore Hayward and Rishab Sethi

10.1 Introduction

Long-run mean reversion in asset market returns is one of a set of core
'investment beliefs' of the Guardians of the New Zealand Superannuation
Fund (NZSF). These beliefs underpin the investment strategies of the Fund.
In this chapter, we present a dynamic portfolio asset allocation strategy that
we call ‘strategic tilting’ which aims at exploiting the mean reversion pro-
cess in asset markets. It is one of a set of portfolio strategies that the NZSF
regards as a source of additional value over market returns. Strategic tilting
involves adjusting (or tilting) exposures to broad asset classes around their
benchmark weights in the strategic asset allocation (SAA) according to their
relative return prospects.
In part, underpinning the belief in mean reversion is a large volume of
literature in support of this process. For example, support for mean rever-
sion in equity markets is found in Shiller (1981), Lewellen (2004), Campbell
and Yogo (2006) and Hjalmarsson (2008). Support for mean reversion in
bond returns is seen in Fama and Bliss (1987), Campbell and Shiller (1991),
and Cochrane and Piazzesi (2005). Neely (2002 and 2004), Fischer (2003)
and Becker and Sinclair (2004) report that central banks that sell their
currencies when relatively 'expensive' and buy when 'cheap' tend to
be profitable. On the other hand, the mean reversion process is highly
contested. Seminal work by Meese and Rogoff (1983) suggesting that no
exchange rate model systematically outperforms the simple random walk
assumption has stood the test of time fairly well. Key challenges to mean
reversion in equity market returns include Goetzmann and Jorion (1993),
Stambaugh (1986, 1999), Boudoukh et al. (2006) and Ang and Bekaert
(2007). Overall, we are persuaded by the argument of Cochrane (2007)
that the data are more likely to be generated by a process that has mean
reversion than not, but recognize that empirical ‘proof’ of this likelihood
will often fail conventional significance tests given short samples and the
‘noise’ in the data.


In line with the asset return predictability literature, tilts are based on
forecasts of asset returns that mainly use simple valuation measures as pre-
dictor variables. This strategy differs from more common tactical asset allo-
cation (TAA) strategies in several respects. First, positions are expected to be
held over a much longer period (months or years) given that mean reversion
in asset markets can be a long-lived process. Second, the strategy makes no
attempt to incorporate market timing and momentum factors. Third, the

strategy inherently involves changing the short-run risk profile of the port-
folio based on a relatively small number of positions. For that reason, the
strategy is envisaged as being developed and executed by the fund manager,
rather than contracted to an external manager. Finally, the strategy is not
anticipated as a purely mechanical exercise. We see the application of tilts
as an exercise in ‘constrained discretion’. The default decision is to adjust
exposures from benchmark SAA weights in line with the type of tilting
‘signals’ described in this chapter. However, an explicit judgemental overlay
mechanism is built into the strategy to incorporate off-model concerns that
are not captured by our analytical framework.
To the extent that there is predictable mean reversion in asset markets,
strategic tilting should enhance portfolio performance, compared to a more
conventional portfolio strategy of periodically re-balancing asset classes to
their SAA weights. The focus of this chapter is to examine the portfolio
implications of exploiting the mean reversion process via strategic tilting, a
practical complement to the work of Campbell and Viceira (2002), Barberis
(2000), and Koijen et al. (2008).
We present several tests of our tilting strategy on a representative port-
folio of equities, bonds, and listed property together with a currency over-
lay. Results tend to support strategic tilting insofar as they are economically
meaningful. However, in keeping with much of the asset return pre-
dictability literature, the augmentation of risk-adjusted returns fails con-
ventional significance levels. Given that the available data do not span a
particularly long period, and because the return predictability literature is
not uncontroversial, we also present results from some Monte Carlo simula-
tions. We show that strategic tilting tends to perform at least as well as re-
balancing asset classes to their SAA weights in the presence of uncertainty
over the return predictability process. In the limit, where asset returns are
iid-normal, strategic tilting performs as well as portfolio re-balancing on
average.
Although many funds engage in TAA strategies, it is much less com-
mon for funds to pursue a medium-run strategic tilting approach. Based
on discussion with our asset managers and peer funds, we attribute the
main reason for this to the fact that the strategy will initially tend to be a
drag on portfolio performance relative to market-neutral or long-run SAA
benchmarks. In popular terms, the strategy involves leaning against the
wind – incurring short-term losses for the prospect of longer-term gains


when markets presumably correct. Given that the period of underperform-
ance may well run into several years, it is perhaps not surprising that most
asset managers are unwilling or unable to engage in strategic tilting, even
if they are pre-disposed towards believing in longer-run mean reversion in
asset returns.
A second focus of this chapter is on risk-management mechanisms to
reduce pressure for the strategy to be abandoned in periods where it under-
performs. These mechanisms relate to the calibration of the strategy and to
governance and decision-making processes. We propose calibrating max-
imal tilt limits to reflect tolerances over fund tracking error and an approach
of tilting when asset class valuation measures are at relatively extreme levels
to reduce the effects of model uncertainty. To that end, we also favour the
addition of an explicit role for decision-maker judgement.
In the next section we present the empirical analysis, while Section 10.3
proposes several mechanisms aimed at enhancing the sustainability of stra-
tegic tilting. Section 10.4 summarizes the chapter and offers directions for
future research.

10.2 Empirical analysis

10.2.1 Overview of the strategic tilting methodology


To illustrate the likely effects of strategic tilting around benchmark SAA
weights we present results from a range of historical back-tests and synthetic
Monte Carlo simulations in this section. To retain focus on strategic tilting
and the resulting portfolio performance, rather than on valuation models,
we use very simple models and methods to derive a change in exposure
(or tilt) from target SAA weights. In practice, we think a ‘suite of models’
approach involving alternative models, data sources and de-trending meth-
ods would be highly desirable to ameliorate model uncertainty, or at least
provide a better idea of risks and uncertainties, relative to a single model or
method.
In general, the size of a tilt on any asset class is an outcome of:

● the model used to generate a risk premium measure;
● the de-trending method and/or model employed to generate a trend meas-
ure of the risk premium;
● the calculation of a ‘tilting signal’;
● a response rule that maps a given tilting signal onto a change in the asset
exposure;
● any limit on the maximum tilt size permitted.

In each of the simulations reported, we consider tilts of up to ±ten per-
centage points around benchmark SAA weights. In principle, results may


be scaled linearly for tilt changes of different magnitudes. We step through
each of the remaining elements in turn.
The risk premium on an asset class is simply modelled as:

rp_t = yld_t − rf_t        (1)

where rp_t is a risk premium measure, yld_t is a measure of the yield on the
asset class and rf_t is a measure of the risk-free rate. For example, in the case
of the historical back-test for equities, we calculated the real equity yield as
the inverse of the cyclically adjusted price-earnings ratio for the S&P 500
(using the common method of a ten-year trailing average of earnings for the
cyclical adjustment). We combined this real yield with expected inflation
to derive a nominal yield, and used the US ten-year government bond rate
as the risk-free rate1.
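As a sketch of that calculation (our own illustration with monthly data; the function name is hypothetical, and for brevity we average nominal rather than inflation-adjusted trailing earnings, a simplification relative to the standard cyclically adjusted measure):

```python
import numpy as np
import pandas as pd

def equity_risk_premium(price, earnings, expected_inflation, bond10y):
    """Equation (1) for equities: cyclically adjusted earnings yield plus
    expected inflation (a nominal equity yield), minus the ten-year bond
    rate.  Inputs are monthly pandas Series; the cyclically adjusted P/E
    uses a ten-year (120-month) trailing average of earnings."""
    cape = price / earnings.rolling(120).mean()
    real_yield = 1.0 / cape           # inverse of the cyclically adjusted P/E
    return real_yield + expected_inflation - bond10y
```

The series is undefined for the first ten years of data, since the trailing earnings window is not yet full.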
Trend risk premiums are calculated by running a moving average (gen-
erally with a 15-year window) through the raw risk premiums. This is an
undoubtedly crude measure of trend which will not be robust to level or
trend structural breaks in the data. On the other hand, moving averages
have the virtue of the simplicity we seek, and do not require any forward-
looking information in their calculation. The length of the moving average
itself is a pragmatic choice that reflects the data span on hand for the vari-
ous premiums. We similarly calculate rolling standard deviations of the risk
premiums in order to define time-varying z-scores which we use as our tilt
‘signals’.
A positive signal to tilt towards an asset class is generated whenever the
raw risk premium lies above trend and vice-versa. Given the signal, there are
an infinite number of ways that this could be transformed into an actual tilt
position. In this chapter we focus our attention on two basic type of tilting
response rules:

1. A linear rule that tilts an asset class whenever the raw risk premium devi-
ates from the trend measure (incremental tilting).
2. A piece-wise linear rule that tilts an asset class when the raw risk premium
deviates from the trend measure by a sufficiently ‘extreme’ amount (tilt-
ing at extremes).

We further impose a constraint that the tilt limit of ±ten percentage points
is reached once the tilt signal is sufficiently extreme (a level generally set
at 2.5 standard deviations above the trend signal). The response rules are
graphically represented in Figure 10.1 below. Under Rule (1), tilts are applied
proportionally to the z-score signal. Rule (2) is similar except that a tilt is
not applied until the tilt signal reaches a ‘trigger’ or kick-in threshold. Once


the trigger is hit, the response function is steeper relative to the incremental
approach.

Figure 10.1 Tilting approaches
[Line chart: percentage of the maximum tilt applied to an asset class (y-axis, −100 to 100) plotted against the number of SDs the relative value signal is away from equilibrium (x-axis, −3 to 4), comparing the incremental approach with tilting at extremes.]
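The two response rules, with the full tilt reached at 2.5 standard deviations, can be sketched as below. This is our own illustration: the function name and the 1.5-standard-deviation trigger for the at-extremes rule are illustrative choices, not the chapter's exact calibration.

```python
import numpy as np

def tilt_fraction(z, rule="extremes", trigger=1.5, cap=2.5):
    """Fraction of the maximum tilt (e.g. +/-10 percentage points) applied
    for a z-score signal z.  'incremental': proportional to z up to the cap.
    'extremes': zero inside the trigger band, then a steeper linear ramp
    that reaches the full tilt at the cap."""
    z = np.asarray(z, dtype=float)
    mag = np.abs(z)
    if rule == "incremental":
        frac = np.clip(mag / cap, 0.0, 1.0)
    else:
        frac = np.where(mag < trigger, 0.0,
                        np.clip((mag - trigger) / (cap - trigger), 0.0, 1.0))
    return np.sign(z) * frac
```

Multiplying the returned fraction by the maximum tilt (ten percentage points here) gives the deviation from the benchmark SAA weight.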
Our base case results use an at extremes tilt. We argue in Section 10.3 that
such a rule likely enhances the sustainability of the tilting strategy relative
to an incremental approach. This is partly because tilting at extremes is
likely to be less susceptible to model uncertainty and partly because such
a rule will imply less time away from SAA benchmarks, and therefore less
pressure to unwind what may initially be a ‘losing’ position. Some limited
robustness testing around these presumptions is provided below2.
A further constraint imposed in the historical counterfactual test is that
the tilted position, relative to the SAA benchmark, cannot increase by more
than 250 basis points (bp) in each period. This constraint reflects the prac-
tical difficulty of engineering large shifts in the portfolio composition
around the SAA weights over a month. This constraint is not imposed in
the Monte Carlo simulations, which are run at an annual frequency and
at which it is easier to shift the portfolio completely in line with the tilt-
ing response function. Finally, we impose a transaction cost of 20bp on
adjustment of the model portfolios considered below. Though this cost is
incurred under both strategic tilting and portfolio re-balancing, tilting is
more ‘aggressive’ and consequently we would expect that the total transac-
tion cost to be larger.


10.2.2 Historical back-tests


We use historical US data for the back-tests that we report in this section.
This is because we seek long, consistent, returns data for each of the dif-
ferent asset classes that we propose tilting over, and these are most easily
available for the US. More specifically, we report results from tests on the
following tilts:

1. US equities against US government bonds;
2. US listed property against US government bonds;
3. US investment grade credit against US government bonds;
4. the degree of currency hedging of foreign asset exposures.

Tilts are applied using the methodology described above, except when
altering the degree of currency hedging3. The frequency of the analysis is
monthly with the end date of the sample being August 2008. The start date
depends on the available returns data for the asset classes across which we
consider tilting. For currency, investment grade credit and property, the
data begin in the early 1970s. Given the use of 15-year moving averages to
calculate trends, the sample over which we consider tilting begins in the
mid-1980s. For the equity-bond tilt we use post-war data, implying a start
date for tilting in 1960. We first report results from tilting equities-to-bonds
only. The data span is the longest for this tilt, and so we also consider a
broad range of robustness tests for it. We then present the results for tilting
listed property and investment grade credit against government bonds, and
the degree of currency hedging in isolation. Finally, we show the impact of
running the tilts as a package.

10.2.2.1 Tilting between equities and bonds


Results for the equities-bonds tilt are summarized in Table 10.1. Over the
sample from 1960 to 2008 we see that tilting delivers active return of around
11bp per annum. (Active return is defined as the difference between the
performance achieved by following a tilting strategy, and the default bench-
mark of re-balancing to the SAA weights.) Conditional on the tilt being
applied, which is around 20% of the time, the return is 45bp and the infor-
mation ratio is 0.48. This is a fairly high ratio, but still fails to be significant
at conventional testing levels.
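The active-return and information-ratio figures quoted here can be computed as follows (a sketch with our own function name, assuming monthly returns annualized in the usual way):

```python
import numpy as np

def active_stats(tilting_returns, rebalanced_returns, periods_per_year=12):
    """Annualized active return, tracking error and information ratio of the
    tilting strategy versus re-balancing to fixed SAA weights."""
    active = np.asarray(tilting_returns) - np.asarray(rebalanced_returns)
    ann_active = active.mean() * periods_per_year
    tracking_error = active.std(ddof=1) * np.sqrt(periods_per_year)
    return ann_active, tracking_error, ann_active / tracking_error
```

Conditioning on the tilt being applied amounts to passing only the months in which the portfolio deviates from the SAA weights.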
As may be expected given the base-case calibration of the tilting
response function, tilts away from fixed asset allocation weights are fairly
infrequent. For the first 15 years of the strategy – from 1960 until mid-
1974 – no tilts are triggered. From 1975 until the end of the 1970s, a large
tilt towards equities is put in place. In the equity bull markets of the late
1980s and 1990s there are moderate tilts away from equities, whilst in
2003 and again in August 2008 a very small tilt towards equities is put
in place.


Table 10.1 Equities versus bonds historical back-test

Samples
                                 Full sample:      From date of first     When tilts are
                                 Jan. 1960–        tilt: Jul. 1974–       on: 20% of
                                 Aug. 2008         Aug. 2008              sample
Active arithmetic return (%)     0.11              0.16                   0.45
Standard deviation (%)           0.41              0.52                   0.92
Information ratio                0.23              0.30                   0.48

Detailed performance

(a) Percentage of time tilting strategy is in line with or ahead of fixed portfolio weights
              1 month   6 months   1 year   5 years   10 years   20 years
%             87        86         85       87        91         100

(b) Worst monthly returns since 1960
              Worst 30
              months     Oct. 1987   Aug. 1988   Sep. 1974   Nov. 1973   Apr. 1970
Fixed         −6.95      −16.01      −10.80      −8.84       −8.74       −7.92
Tilting       −6.87      −14.3       −10.26      −9.08       −8.74       −7.92
Difference    0.07       1.65        0.54        −0.24       −0.0        0.0

(c) Best and worst active returns from adopting tilting strategy
                       1 month   6 months   1 year   5 years   10 years   20 years
Worst active returns   −0.5      −0.9       −0.7     −0.5      −0.5       0.14
Best active returns    −1.6      −2.1       3.3      3.0       3.5        4.8

Tilting enhances portfolio returns over each of these episodes. The tilt
towards equities at the tail end of the severe bear market of the early 1970s
sets up a large out-performance (peaking at 300bp) when markets recover
in the late 1970s. The tilts in the late 1980s towards bonds successfully pick
up the very good performance of bonds over this period. The tilt away from
equities in the late 1990s is a response to the overvaluation of equities dur-
ing the dot-com boom. The strategy is cumulatively successful, but there is
some pain to wear in 1999 where active returns trough at around -70bp. The
tilts applied in this decade have not been long and strong enough to materi-
ally affect performance4.
To see how the strategy performs over shorter periods, we report the per-
centage of time that tilting results in no-worse or better performance in roll-
ing (overlapping) evaluation periods of one month through to 20 years. We
see that the strategy is ahead around 80% of the time in short samples, and


becomes increasingly dominant as the evaluation period increases (from
around five years forward). After 20 years, there is a negligible chance of the
strategy delivering worse performance than the benchmark of re-balancing
the portfolio to the SAA.
We separately evaluate strategic tilting when markets perform poorly.
Results suggest that tilting provides a little cushioning in the worst months,
with returns around five bp higher. This suggests that the strategy tends to
underweight an overvalued asset class heading into a correction (as most
dramatically seen for the worst month in the whole sample – October 1987).
We also examined results over the best months in the sample. Interestingly,
the tilting strategy tends to be materially ahead in these months, a likely
consequence of the fact that the best months in our sample tend to be those
that closely follow equity market troughs. At these times the portfolio is
overweight equities.
A number of departures from the base-case equity–bond tilt were investi-
gated, including:

1. Adjusting the tilts incrementally, and tilting at even more extreme levels
wherein the tilts are not applied until the signal is at least two standard
deviations away from trend. In either case, the maximal tilt continues to
be restricted at a level consistent with a signal that is 2.5 standard devia-
tions away from trend.
2. Adjusting tilt triggers asymmetrically – tilts begin when the signal is 1.5
standard deviations away from trend but are not turned off until the
valuation signal returns to the trend level.
3. Removing the restrictions on the minimum and maximum position adjustments
from tilting in any month, and instead restricting monthly changes to
50bp moves.
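For concreteness, the base-case rule that these variants perturb can be sketched as a function of the signal's distance from trend. The 1.5 and 2.5 standard-deviation triggers and the ±10-percentage-point cap follow the text; the linear scaling between the triggers and the sign convention (a positive z-score producing an overweight) are our own assumptions:

```python
def tilt_size(z, trigger=1.5, cap_z=2.5, max_tilt=10.0):
    """Tilt in percentage points as a function of the valuation
    signal's z-score. No tilt inside the +/-1.5 sd band; (assumed)
    linear scaling up to the maximal +/-10pp tilt at 2.5 sd; capped
    beyond that. Positive z = asset looks cheap = overweight."""
    if abs(z) < trigger:
        return 0.0
    scale = min((abs(z) - trigger) / (cap_z - trigger), 1.0)
    return max_tilt * scale * (1.0 if z > 0 else -1.0)

print(tilt_size(1.0))   # 0.0: inside the no-tilt band
print(tilt_size(2.0))   # 5.0: halfway between the triggers
print(tilt_size(-3.0))  # -10.0: capped at the maximal tilt
```

The variants above then amount to parameter changes, e.g. `tilt_size(z, trigger=2.0)` for the more extreme trigger in variant 1.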

These variants of the tilting function examine the robustness of results to
calibration choices given the tilting signal. Two further checks assess
the robustness of the tilting signals directly:

1. alternative estimates of trend and dispersion of the ERP signal derived by
applying moving averages of different periods to the historical data;
2. the use of the Kalman filter to estimate the trend ERP and its variance.
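As a sketch of the second check, a minimal local-level Kalman filter (a random-walk trend observed with noise) can track a time-varying trend and its variance. The noise variances `q` and `r` below are illustrative placeholders, not the chapter's calibration:

```python
def local_level_filter(y, q=0.01, r=1.0):
    """Kalman filter for y_t = mu_t + e_t, mu_t = mu_{t-1} + w_t.

    q: variance of the trend innovation w_t (illustrative)
    r: variance of the observation noise e_t (illustrative)
    Returns the filtered trend estimates and their variances.
    """
    mu, P = y[0], r                       # initialise at the first observation
    trend, var = [], []
    for obs in y:
        P_pred = P + q                    # predict the trend variance
        K = P_pred / (P_pred + r)         # Kalman gain
        mu = mu + K * (obs - mu)          # update the trend estimate
        P = (1 - K) * P_pred              # update its variance
        trend.append(mu)
        var.append(P)
    return trend, var

trend, var = local_level_filter([1.0, 1.2, 0.9, 1.1, 1.0])
```

The filtered variance shrinks as observations accrue, which is what makes the filter attractive for setting the signal bands around the trend ERP.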

Results from each of the variants (available on request) still favour tilt-
ing, broadly confirming the robustness of the base case results. However,
tilting incrementally leads to a noticeable deterioration in performance
across all metrics whilst tilting at even more extreme levels than the base
case improves performance when measured by the information ratio and
the fraction of time that the tilting strategy delivers superior performance.
From this, we draw some support for our prior belief that tilting at extremes
enables us to better identify when markets are truly away from their long-run
'fundamentals'.

10.2.2.2 Tilting property, investment grade credit and currency hedges

Results for historical back-tests of property, investment grade credit and
currency hedge ratio tilts are provided in Table 10.2. Again, we distinguish
between the full sample and the sample over which tilts are actually applied.

Tilts for the foreign currency hedging ratio and listed property enhance
returns, while the tilt on investment grade credit very slightly detracts. As
with the equity-to-bond tilt, none of these results are significant at conven-
tional levels.
The negative outcome from tilting the portfolio allocation to investment
grade credit against government is very much a function of our small
sample. There are only two occasions where a significant tilt is applied – tilts
towards IG credit over 2001–2002, and from November 2007 until the end
of our sample. The tilt over 2001–2002 adds to returns, but this is more
than wiped out by the underperformance at the end of the sample, given
the sell-off in IG credit as the credit crunch has intensified. Moderate tilts
in favour of property occur in 1990–1991 and 2002–2003, while a large tilt
away from property occurs from mid-2005 until early 2007. The moderate
tilts add to returns with no initial downside, while the tilt from 2005–2007
initially depresses performances before turning positive. Tilts to the net
unhedged foreign currency exposure occur over 1997 (when hedges are
reduced in response to an overvalued currency), in 2001–2002 (when hedges
are increased), and from mid-2005 to end-2006 and mid-2007 to the end

Table 10.2 Strategic tilting historical back-test: summary of results

Tilt                             Active arithmetic    Standard          Information
                                 return (%)           deviation (%)     ratio

Full sample
Investment-grade credit–bonds        −0.01               0.01              0.04
Property–bonds                        0.06               0.39              0.14
Foreign currency hedge ratio          0.10               0.39              0.23

Over dates tilts are applied
Investment-grade credit–bonds        −0.06               0.58              0.11
Property–bonds                        0.27               0.96              0.30
Foreign currency hedge ratio          0.17               0.52              0.33


Table 10.3 Historical back-test of tilting as a package

                                            Static portfolio    Tilted portfolio
Arithmetic return (%)                             9.29               9.45
Standard deviation (%)                            8.14               8.0
Sharpe ratio                                      1.14               1.18
Worst returns over:
  1 month (%)                                    −7.0               −6.40
  3 months (%)                                   −9.2               −9.4
  6 months (%)                                  −11.1              −11.7
  1 year (%)                                     −8.8               −9.2
% of months tilts applied                                           63%
Average return when tilts active                                    25bp
Average tracking error when tilts active                            93bp
Information ratio when tilts active                                 0.27

of our sample (when hedges are reduced). The tilt in 1997 is very successful
as it increases foreign currency exposures heading into the Asian crisis,
which prompted a very sharp depreciation in the NZ dollar. The increased
hedges in 2001–2002 are put in place at a time when most macroeconomists
thought the NZ dollar was very undervalued and set up a mild positive
return, while the tilts at the end of the sample have not, on balance, added
to returns.5

10.2.2.3 Tilting as a package


Table 10.3 reports outcomes from running tilts as a package over the period
from January 1988 to August 2008. The model portfolio comprises US equities
(50%), US government bonds (30%), US investment grade credit (10%) and US
property (10%). In addition, we presume that a fixed proportion of the port-
folio is hedged back into NZ dollars. Again, we permit tilts of up to ten per-
centage points in the assets against government bonds and in the hedge ratio,
and compare returns from a tilted portfolio against the benchmark portfolio
where the assets are re-balanced to the target weights each month.
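The comparison just described boils down to growing two portfolios on the same monthly return history: one re-balanced to the fixed target weights each month, one re-balanced to the tilted weights. A stripped-down sketch of that loop, on made-up numbers (the function, weights and data here are illustrative, not the chapter's back-test):

```python
def backtest(returns, base_weights, tilts):
    """Grow two portfolios on the same per-asset gross monthly returns:
    one re-balanced to base_weights each month, one to base + tilt.
    tilts are per-month weight adjustments summing to zero, so the
    tilted portfolio stays fully invested. Returns terminal values."""
    static_val = tilted_val = 1.0
    for month_ret, tilt in zip(returns, tilts):
        static_val *= sum(w * r for w, r in zip(base_weights, month_ret))
        tilted_w = [w + d for w, d in zip(base_weights, tilt)]
        tilted_val *= sum(w * r for w, r in zip(tilted_w, month_ret))
    return static_val, tilted_val

# Two assets (growth, bonds); overweight growth by 10pp in month 2 only.
rets = [[1.02, 1.00], [1.05, 1.01], [0.99, 1.00]]
tilts = [[0.0, 0.0], [0.10, -0.10], [0.0, 0.0]]
static_val, tilted_val = backtest(rets, [0.8, 0.2], tilts)
```

Because the overweight falls in the month growth outperforms, the tilted terminal value ends slightly ahead of the static one.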
The arithmetic mean return for the tilted portfolio is 16bp per annum
higher, or 25bp conditional on a tilt being applied (around 63% of all
months in our sample). This gain is not achieved by simply increasing the
risk profile – the standard deviation of the tilted portfolio is in fact
marginally lower than that of the fixed allocation portfolio. The differences
in portfolio performance are not statistically significant, although they are
economically material. The value of the portfolio with the tilting strategy is
around 350bp higher at the end of the sample.

10.2.3 Monte Carlo analysis


It may be possible that the gains from tilting reported in the historical
back-test are a historical 'quirk'. On the other hand, though none of the
results in the back-tests are statistically significant, it is at least possible that
with a longer sample results could become significant. In this section, we
report results from a Monte Carlo study that aims to test how a tilting strat-
egy performs under a range of synthetically generated market conditions
and uncertainties. To simplify the analysis, we only consider tilts over one
lever – a growth asset versus a risk-free asset which earns a constant rate of
return. The system set-up for the analysis is familiar from the equity return

predictability literature:

rp_t = a + α yld_{t−1} + ε_t     (2)

yld_t = b + ρ yld_{t−1} + γ_t     (3)

where rp_t is the risk premium on the growth asset (i.e. the excess return to
the risk-free asset), yld_t is the yield on the asset, and ε_t and γ_t are sources of
disturbance to the system, modelled as iid-Normal processes. The coefficient
α determines the degree of predictable mean reversion in the system, while
ρ governs the persistence of the predictor variable to shocks.6
As with the historical back-tests, we consider tilts of ±ten percentage
points around the benchmark portfolio, and the tilting rule kicks in when
the risk premium is 1.5 standard deviations from its mean and maxes out at
2.5 standard deviations. The tilts are run over 30 years and the distribution
of outcomes is the result of 10,000 draws.7
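A single draw of this system can be simulated directly from equations (2) and (3) with the note-6 calibration. The sketch below is our own illustration: the starting value of the yield, the mapping of the two quoted standard deviations onto ε and γ, and the Cholesky-style construction of the correlated shocks are assumptions:

```python
import math
import random

def simulate_draw(T=360, a=-0.074, b=0.0071, alpha=1.5, rho=0.9,
                  sd_eps=0.014, sd_gam=0.14, corr=-0.9, seed=0):
    """One draw of the system rp_t = a + alpha*yld_{t-1} + eps_t,
    yld_t = b + rho*yld_{t-1} + gam_t, with (eps, gam) jointly Normal
    with correlation corr. Returns the rp and yld paths."""
    rng = random.Random(seed)
    yld = b / (1.0 - rho)            # assumed start: unconditional mean
    rp_path, yld_path = [], []
    for _ in range(T):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        eps = sd_eps * z1
        gam = sd_gam * (corr * z1 + math.sqrt(1.0 - corr ** 2) * z2)
        rp = a + alpha * yld + eps   # equation (2)
        yld = b + rho * yld + gam    # equation (3)
        rp_path.append(rp)
        yld_path.append(yld)
    return rp_path, yld_path

rp, yld = simulate_draw(T=120, seed=1)
```

The tilting rule is then applied to each simulated risk-premium path exactly as in the back-tests: tilting begins at 1.5 standard deviations from the mean and is maximal at 2.5.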
It is assumed that the tilting ‘agent’ does not know the system as described
above and no attempt is made at learning. Instead, we make two polar
assumptions regarding the agent’s state of knowledge of long-run returns:

● The agent knows the true equilibrium risk premium and the variance
around it.
● The agent estimates the equilibrium risk premium and signal bands from
a 15-year ‘pre-sample’ draw. These estimates are carried forward over the
subsequent 30-year simulation horizon with no update.

The results are shown in Table 10.4. When the true risk premium is
known, the results unambiguously favour tilting. The long-run outperform-
ance is around ten bp per annum and the strategy tends to dominate over
all horizons. Under the worst 1% of outcomes, tilting still manages to break
even relative to the fixed weight strategy given enough time (20+ years).
When the premium is estimated with an error the tilting strategy performs
nearly as well on average as tilting when the risk premium is known – this
is because over 10,000 draws, the risk premium will be estimated with lit-
tle error on average. However, more tellingly, the average masks outcomes
where tilting does fairly poorly. Under the worst 1% of outcomes the strategy


Table 10.4 Monte Carlo simulation of tilting strategy

                                 Long-run return is unknown    Long-run return is known
                                 Full sample   Tilted sample   Full sample   Tilted sample
Active arithmetic return (%)         0.07          0.30            0.07          0.33
Standard deviation (%)               0.30          0.64            0.24          0.60
Information ratio                    0.23          0.47            0.27          0.55

Detailed performance

(a) Percentage of time tilting strategy is in line with or ahead of fixed
    portfolio weights

                   1 year   2 years   3 years   5 years   10 years   20 years
Return unknown       93        93        93        94         94         96
Return known         94        94        95        96         97         98

(b) Active cumulative returns at different horizons (active less fixed)

                                  1 year   2 years   3 years   5 years   10 years   20 years
Average when long-run return        0.07     0.14      0.22      0.40      0.80       1.74
  is not known
Worst 1% when long-run return      −0.85    −1.13     −1.20     −1.60     −1.90      −2.25
  is not known
Average when long-run return        0.07     0.13      0.22      0.33      0.66       1.32
  is known
Worst 1% when long-run return      −0.4     −0.33     −0.33     −0.33     −0.3       −0.20
  is known

leaves the value of the portfolio significantly behind the value it achieves
under fixed allocation weights after a year and the underperformance grows
over time. As might be expected, persisting with an incorrect belief on long-
run returns is a costly mistake.
The idea behind these simulations is to provide some quantification of the
'regrets' from strategic tilting, and to check that the strategy survives
through all market conditions.
When returns are not predictable, outcomes are almost identical under
the two strategies – tilting performs only very slightly worse owing to the
presence of its higher transaction costs. When tilting in a simulation where
there is no predictability, on average, the portfolio has the same weight on
the asset classes as the fixed-weight strategy. As there is no short-run bene-
fit (or cost) from being away from the average, long-run returns are hence


Table 10.5 Long-run returns and regrets

                                                      Annual gain from      Gain from tilting
                                                      tilting strategy (bp)   after 30 years (bp)
1. Returns when there is predictability                      10                   300
2. Returns when there is no predictability                   −0.001               −0.3
3. Tilting abandoned when returns are −100bp                 −1.5                 −50
   over any year (occurs in 13% of model runs)
4. Tilting abandoned when cumulative returns                 −3.5                −100
   are −50bp at any point (occurs in 15% of
   model runs)

largely the same. If we thought that asset returns were equally likely to be
predictable over long horizons as not, strategic tilting would be the better
alternative. There is almost no difference in performance from operating
a tilting framework when there is no mean reversion, but there is a large
opportunity cost foregone if we do not tilt and asset returns are partially
predictable.
Another source of regret could be from committing to a tilting strategy
only to abandon it in short order. We consider two cases for abandoning
tilting: (i) when performance is -100bp in any year; and (ii) when cumu-
lative returns are -50bp. In the first scenario, the value of the portfolio is
50bp lower than the fixed asset allocation strategy after 30 years. This loss
is around half the 100bp loss trigger owing to the presence of intermediate
gains from tilting before the strategy is abandoned. When tilting is aban-
doned subsequent to cumulative losses of 50bp, the cumulative value of the
portfolio is 100bp less than under the fixed asset allocation strategy after 30
years. The larger long-run loss is partly due to the compounding effect of
the initial loss and partly because, in some of the model draws, the initial
cumulative loss will be greater than 50bp. Overall, perhaps the key message
of this section is that whilst long-run returns can be enhanced from tilting,
there remains considerable scope for regret from engaging in the strategy if
we care about periods of poor performance along the way.
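The two abandonment triggers can be expressed as stopping rules on the path of monthly active returns. This sketch is illustrative: the trailing 12-month window as the reading of 'in any year', and the use of simple additive returns, are our assumptions:

```python
def abandonment_month(active_monthly, annual_floor=-0.01, cum_floor=-0.005):
    """Return the first month (0-indexed) at which either rule fires:
    (i) trailing 12-month active return below annual_floor (-100bp), or
    (ii) cumulative active return below cum_floor (-50bp).
    Returns None if the strategy is never abandoned."""
    cum = 0.0
    for m, r in enumerate(active_monthly):
        cum += r
        if cum < cum_floor:
            return m
        if m >= 11 and sum(active_monthly[m - 11:m + 1]) < annual_floor:
            return m
    return None
```

Running such a rule inside each Monte Carlo draw is what produces the 13% and 15% abandonment frequencies reported in Table 10.5.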

10.3 Enhancing the sustainability of strategic tilting

In discussion with peer funds and other portfolio managers, a consistent
theme emerged that strategic tilting may be worthwhile in principle, but
may run into severe sustainability pressures in practice. The sustainability
problem arises from the fact that the strategy implies considerable periods
of time (perhaps running into years) where the fund will be away from fixed
asset allocation benchmarks. In the case that it results in being underweight
an asset class in a bull market, or overweight in a bear market, the pressure
to unwind the strategy can be enormous.


Pressure may arise internally through negative reputation effects or more
directly if remuneration structures are at least partially based on performance
relative to passive market benchmarks. External pressure is also likely
to be a factor – at the extreme, clients may lose faith in the fund manager
and withdraw funds en masse. Such pressure is only enhanced in the normal
course of a fund’s internal staff turnover. New entrants will naturally feel
less inclined to stick with what are seen as ‘losing’ strategies.

The sustainability problem is serious. As illustrated above, perhaps the
worst possible outcome for a fund would be to abandon a tilting strategy at
times when valuations for an asset class (or classes) prove to be extreme ex
post. Some fund attributes, such as a genuinely long investment horizon and
a single plan sponsor (as is the case with the NZSF), should imply greater
ability to withstand sustainability pressures relative to most other market
participants. Nevertheless, these advantages may not be sufficient, and we
suggest three avenues for enhancing the sustainability of strategic tilting:

1. Limiting overall exposures from strategic tilting: The larger the tilt, the
larger the resulting impact on the performance of the fund relative to
passive or SAA benchmarks. There are two broad resolutions to this prob-
lem. First, the benchmark could be redefined as one that incorporates the
tilting strategy. In principle, this may be a credible option for a fund like
the NZSF. However, it is much less obvious such a benchmark would be
acceptable for a commercial fund manager. A second approach is to cali-
brate the strategy such that, ex ante, the overall impact on fund perform-
ance is within an acceptable tolerance limit. This can be accomplished
by calibrating the maximum tilt positions such that the tracking error
is within an acceptable tolerance limit when tilts are at their maximum
positions, using standard risk-budget assumptions with respect to the
covariance structure of the broad asset classes.
2. Tilting at extremes rather than incrementally: As shown in Section 10.2,
tilting at extremes implies significantly shorter periods of time away
from SAA benchmarks. This may well enhance the sustainability of the
strategy because the period of underperformance will be much shorter.
In addition, our sense is that tilting at extremes would reduce model
uncertainty problems, or in other terms the potential for regret from fol-
lowing what, ex post, turns out to be a false signal. Finally, while stra-
tegic tilting can be considered a contrarian investment strategy – and a
necessary attribute of being a contrarian is a willingness to go against the
crowd – it would still be of some comfort to be in the company of some
like-minded market participants when wearing initial losses. This is more
likely if tilts are being taken when several different valuation metrics
appear extreme.
3. Building into the process flexibility and judgment: The statistician George
Box is credited with remarking that, ‘All models are wrong but some are


useful’. Taking this modelling philosophy to heart, we see the models used
to infer tilting signals as a starting point for discussion. As with any mod-
el-based portfolio strategy, there will be a need to continually test and
refine them. We also see the need to embed a judgmental overlay to the
strategic tilting process. This overlay should enhance sustainability, both
because judgments will take into account off-model concerns, and because
more ownership of the tilts is available to the fund’s managers. Note that

the tilting at extremes approach dovetails with this philosophy – it leaves
more room for judgment and flexibility relative to the more mechanical
incremental tilting approach. That said, we think it is important that the
starting presumption is that the models do identify a correct signal and
response and that the onus should be on those ultimately responsible for
the positions to identify reasons against tilting on a given occasion. We
think such reasons should not include market timing factors. We are
skeptical about the prospects of success in this regard, not least because we
believe it will always feel uncomfortable going against the direction of the market.
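The risk-budget calibration described in point 1 can be made concrete: the ex-ante tracking error of an active position w against the benchmark is sqrt(w'Σw), so the maximal tilt is scaled until this equals the tolerance. The covariances and the 1% limit below are illustrative assumptions, not the NZSF's risk budget:

```python
import math

def max_tilt_scale(active_w, cov, te_limit):
    """Scale factor k such that the active position k * active_w has
    ex-ante tracking error equal to te_limit, using TE = sqrt(w' cov w)."""
    n = len(active_w)
    var = sum(active_w[i] * cov[i][j] * active_w[j]
              for i in range(n) for j in range(n))
    return te_limit / math.sqrt(var)

# Illustrative: +10pp equities vs -10pp bonds, annual vols 16% and 5%,
# correlation 0.2; cap tracking error at 1% per annum.
vols, corr = [0.16, 0.05], 0.2
cov = [[vols[i] * vols[j] * (1.0 if i == j else corr) for j in range(2)]
       for i in range(2)]
k = max_tilt_scale([0.10, -0.10], cov, 0.01)
```

With these illustrative numbers the ±10pp equity–bond tilt would be scaled back to roughly ±6.3pp to respect the 1% tracking-error budget.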

10.4 Summary and future directions

In this chapter, we have examined whether a strategy of dynamically adjusting
broad asset classes around SAA weights according to their medium- to
long-term return prospects enhances portfolio performance relative to port-
folio re-balancing. Under a simple application of the strategy, our results
suggest gains could be economically material, although the outperform-
ances are not statistically significant. We also show that in the presence of
uncertainty over the mean-reversion process, not tilting may have higher
opportunity costs than tilting.
With the lack of statistical significance, a decision to implement strategic
tilting will need to be supported by some degree of prior belief in the mean-
reversion of asset prices. Based on our back-tests, the gains from tilting are
unlikely to be large enough to survive if external management costs are
imposed. However, if the strategy is retained in-house – for which there is an
additional strong reason since tilting directly affects a fund’s risk budget –
then the gains from tilting appear to be large enough to be of interest.
There are a number of directions in which this work may be taken further.
First, the range of uncertainties considered could be broadened to take on
board, for example, trend changes in risk premia and structural breaks in
the relationship between predictor variables and long-run returns. Second,
an examination of whether the strategy could be enhanced by the applica-
tion of more elaborate forecasting models and signal extraction methods
would be enormously useful. Third, the range of tilting response rules could
be broadened. In particular, a momentum strategy of ‘riding the wave’ (ini-
tiating a tilt when a signal reaches an extreme level as in the chapter, but
unwinding it much later) may enhance returns further. Fourth, the number

of tilts considered could be broadened to include other asset classes (e.g.
commodities and cash), further tilts between asset classes (e.g. equities
versus listed property) and tilts at the sub-asset class level (e.g. value ver-
sus growth stocks or tilts between country-specific portfolio allocations).
Incorporating cash into the analysis, in particular, would raise the option of
tilting away from all asset classes when risk premia appear too low. Fifth, the
analysis could be formalized to consider the utility gains from the strategy

as in Campbell and Viceira (2002)8.
Finally, the chapter has not considered the implementation issues asso-
ciated with the practical application of tilting. The strategy will likely be
more efficient in both a time and transaction costs sense if implemented via
derivatives (swaps or futures on the relevant instruments or close proxies)
rather than through changes in physical holdings. On the other hand, given
that the strategy is envisaged as being implemented in times when valu-
ation metrics are at extremes, it is possible that this corresponds to times
when market volatility is similarly extreme – as dramatically seen over the
months of September and October 2008. In this context, strategic tilting
needs to be considered within the fund’s broader ability to manage counter-
party relationships and short-term demands for liquidity.

Notes
The views expressed in this chapter do not necessarily reflect those of the Guardians
of New Zealand Superannuation.

1. A rationale for using the real yield to estimate the real return for equities is pro-
vided in Campbell et al. (2001).
2. Our Monte Carlo simulations described in the section below suggest that an incre-
mental rule is preferable to a tilt at extremes rule in general, as it takes more
advantage of any mean reversion. On the other hand, our supposition is that such
a rule will be more vulnerable to model uncertainty and will be less sustainable
than a tilting-at-extremes rule.
3. A signal to deviate from the benchmark degree of hedging is not generated from
a measure of the risk premium on the currency (i.e. the NZ dollar), though this is
implicitly captured when returns from hedging are calculated since these incorp-
orate interest rate differentials. Instead, a tilt is applied when an effective real New
Zealand dollar exchange rate deviates from its 15-year moving average by at least
a threshold level. The effective exchange rate examined is consistent with the
actual foreign asset exposures hedged by the NZSF and comprises a weighted aver-
age of seven currencies – USD, EUR, JPY, GBP, CAD, AUD and CHF. In line with the
other tilts, the degree of hedging is adjusted by up to ±ten percentage points. Instead
of using the real exchange rate to devise the tilting signal, it is preferable to use a
measure of the expected returns from currency hedging. However, these are only
available from 1985 onwards when the New Zealand dollar was floated, yielding
too short a sample for our tests. A comparison of tilting signals constructed
from the real exchange rate and from the expected returns to currency hedging
between 1985 and 2008 reveals that they have broadly similar properties.


4. Since completing the analysis, over September 2008 to mid-March 2009 equity
markets underwent one of their worst performances in many decades, while government
bonds rallied strongly. This generated very large tilting signals which the NZSF
acted upon in early April 2009. At the time of writing, equity markets have rallied
strongly from their March low points whilst sovereign bonds have sold-off, gener-
ating positive returns from the tilt.
5. Since completing the analysis, the NZ dollar has depreciated very sharply against
the US dollar and especially the Yen, implying a large positive gain from the
currency tilt.
6. The calibration of the parameters is as follows: a = −0.074, b = 0.0071, α = 1.5 and
ρ = 0.9. ε and γ are jointly normally distributed with standard deviations of 1.4%
and 14% respectively, and a correlation of −0.9. The value for a is irrelevant for
determining the benefit from tilting, which increases linearly in α.
7. The portfolio has a weight of 80% on the growth asset and 20% on the risk-free
asset. Given our interest is in the relative performance of the tilted and static port-
folios, the weights on the two assets are irrelevant to the outcome.
8. Our initial modelling of the strategy using power utility suggests that a much
more aggressive tilting response rule is optimal than what has been calibrated in
this chapter. Our sense, however, is that such a rule would not be robust to
the incorporation of a broader range of model uncertainty sources in the analysis,
nor would it be sustainable in practice.

Bibliography
Ang, A. and Bekaert, G. (2007), ‘Stock Return Predictability: Is it There?’ Review of
Financial Studies, 20, 651–707.
Barberis, N. (2000), ‘Investing for the Long-run when Returns are Predictable,’ The
Journal of Finance, 55, 225–64.
Becker, C. and Sinclair, M. (2004), ‘Profitability of Reserve Bank Foreign Exchange
Operations: Twenty years after the Float,’ Reserve Bank of Australia Discussion
Paper, 2004–06.
Boudoukh, J., Richardson, M. and Whitelaw, R. F. (2008), 'The Myth of Long-horizon
Predictability,' Review of Financial Studies, 21(4), 1577–1605.
Campbell, J.Y., Diamond, P. and Shoven, J. (2001), 'Estimating the Real Rate of Return
on Stocks over the Long Term', Paper presented to the Social Security Advisory
Board, August 2001.
Campbell, J.Y. and Shiller, R. J. (1991), 'Yield Spreads and Interest Rate Movements: A
Bird's Eye View,' Review of Economic Studies, 58(3), 495–514.
Campbell, J.Y. and Viceira, L. M. (2002), Strategic Asset Allocation: Portfolio Choice for
Long-Term Investors. Oxford University Press, Oxford.
Campbell, J.Y. and Yogo, M. (2006), ‘Efficient Tests of Stock Return Predictability,’
Journal of Financial Economics, 81, 27–60.
Cochrane, J. (2008), 'The Dog that did not Bark: A Defense of Return Predictability,'
Review of Financial Studies, 21(4), 1533–1575.
Cochrane, J. and Piazzesi, M. (2005), ‘Bond Risk Premia,’ American Economic Review,
95(1), 138–160.
Fama, E.F. and Bliss, R. R. (1987), 'The Information in Long-maturity Forward Rates,'
American Economic Review, 77(4), 680–692.
Fischer, A.M. (2003), ‘Measurement Error and the Profitability of Interventions: A
Closer Look at SNB Transactions Data’, Economics Letters, 81, 137–142.


Goetzmann, W.N., and Jorion, P. (1993), ‘Testing the Predictive Power of Dividend
Yields,’ Journal of Finance, 48, 663–679.
Hjalmarsson, E. (2008), ‘Predicting Global Stock Returns,’ Board of Governors of the
Federal Reserve System International Finance Discussion Papers, 933.
Koijen, R.S., Rodriguez, J. C. and Sbuelz, A. (2008), 'Momentum and Mean-reversion
in Strategic Asset Allocation', Social Science Research Network Working Paper
Series.
Lewellen, J. (2004), 'Predicting Returns with Financial Ratios,' Journal of Financial
Economics, 74, 209–235.
Meese, R. and Rogoff, K. (1983), ‘Empirical Exchange Rate Models of the Seventies:
Do they Fit out of Sample?’, Journal of International Economics, 14(1), 3–24.
Neely, C.J. (2000), ‘The Practice of Central Bank Intervention: Looking under the
Hood’, Central Banking, 11(2), 24–37.
Neely, C.J. (2004), ‘The Case for Foreign Exchange Intervention: The Government as
an Active Reserve Manager’, The Federal Reserve Bank of St Louis Working Paper
Series, 200431.
Shiller, R.J. (1981), ‘Do Stock Prices Move too much to be Justified by Subsequent
Changes in Dividends?’, American Economic Review, 71, 421–436.
Stambaugh, R.F. (1986). ‘Bias in Regressions with Lagged Stochastic Regressors’,
Manuscript, University of Chicago.
Stambaugh, R.F. (1999). ‘Predictive Regressions’, Journal of Financial Economics, 54,
375–421.

11
Optimal Construction of a Fund of Funds

Petri Hilli, Matti Koivu and Teemu Pennanen

11.1 Introduction

We study the problem of diversifying a given initial capital over a finite
number of investment funds that follow different trading strategies. The
investment funds operate in a market where a finite number of underlying
assets may be traded over a finite discrete time. Our goal is to find a diver-
sification that is optimal in terms of a given convex risk measure (see e.g.
Föllmer and Schied 2004, Chapter 4). We formulate an optimization prob-
lem in which a portfolio manager is faced with uncertain asset returns as
well as liabilities.
The main contribution of this chapter is a description of a computational
procedure for finding an optimal diversification between funds. The pro-
cedure combines simulations with large-scale convex optimization and it
can be efficiently implemented with modern solvers for linear program-
ming. We illustrate the optimization process on a problem coming from the
Finnish pension insurance industry. The liabilities are taken as the claim
process associated with the current claims portfolio of a private sector occu-
pational pension system and an investment horizon of 82 years. The results
reveal a significant improvement over a set of standard investment styles
that are often recommended for long-term investors.
The rest of this chapter is organized as follows. We begin by review-
ing some well-known parametric investment strategies in Section 11.2.
Section 11.3 presents the optimization problem and Section 11.4 outlines
the numerical procedure for its solution. The application to pension fund
management is reported in Section 11.5. The market model used in the case
study is described in the Appendix.

11.2 Basic investment strategies

Consider a financial market where a finite set J of securities can be traded


over a finite discrete time t = 0, ... , T. The return on asset j ∈ J over holding
period [t − 1, t] will be denoted by Rt,j. The interpretation is that if ht−1,j units


of cash are invested in asset j ∈ J at time t − 1, the investment will be worth
Rt,j ht−1,j at time t.
We study dynamic trading strategies from the perspective of an investor who has given initial capital w_0 and liabilities c = (c_t)_{t=1}^T. Here, c_t denotes a claim the investor has to pay at time t. The claim process c is allowed to take both positive and negative values, so it can be used to model liabilities as well as income. The return processes R_j = (R_{t,j})_{t=1}^T are assumed to be positive, but otherwise their joint distribution with the claim process c is arbitrary.
Several rules have been proposed for updating an investment portfolio in
an uncertain dynamic environment. In the following paragraphs, we recall
four well-known examples modified to accommodate for claim payments.
The simplest strategy is the buy and hold (BH) strategy where an initial
investment portfolio is held over time without updates. When the claim
process c is nonzero, BH strategies may be infeasible. A natural modification
is to liquidate each asset in the proportion of the initial investments to cover
the claims. The resulting strategy consists of investing


$$h_{t,j} = \begin{cases} \pi_j w_0 & t = 0 \\ R_{t,j}\, h_{t-1,j} - \pi_j c_t & t = 1,\dots,T \end{cases}$$

units of cash in asset j ∈ J at the beginning of the holding period starting


at time t. Here, πj is the proportion invested in asset j ∈ J at time t = 0. Such
strategies will be ‘self-financing’ in the sense that they allow for paying out
the claims without the need for extra capital after time t = 0. If the claim
process c is null, the BH strategy requires no transactions after time t = 0.
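As a concrete illustration, the recursion above can be coded in a few lines. The following Python sketch rolls a modified buy-and-hold position forward; the 60/40 split, the returns and the claims are made-up numbers, not data from the chapter.

```python
def buy_and_hold(pi, w0, R, c):
    """Modified buy-and-hold: each asset is liquidated in the proportion
    pi[j] of the initial investments to cover the claim c[t]."""
    J = len(pi)
    h = [pi[j] * w0 for j in range(J)]                 # holdings at t = 0
    for t in range(len(c)):                            # periods 1, ..., T
        h = [R[t][j] * h[j] - pi[j] * c[t] for j in range(J)]
    return h                                           # cash value per asset at T

# Toy example: two assets, 60/40 initial split, two holding periods.
h_T = buy_and_hold([0.6, 0.4], 100.0, [[1.05, 1.02], [0.98, 1.03]], [5.0, 5.0])
terminal_wealth = sum(h_T)
```

Note that the strategy is self-financing by construction: the claims are paid out of the positions themselves, without extra capital after time t = 0.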
Another well-known strategy is the fixed proportions (FP) strategy where at
each time and state the allocation is rebalanced into proportions given by a
vector π ∈ ℝ^J whose components sum up to one. In other words,

$$h_t = \pi w_t$$

where, for t = 1, ..., T,

$$w_t = \sum_{j \in J} h_{t-1,j} R_{t,j} - c_t$$
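A corresponding sketch of the FP rule, with the same illustrative inputs as before: after each claim payment, the remaining wealth is rebalanced to the fixed target proportions.

```python
def fixed_proportions(pi, w0, R, c):
    """Fixed-proportions strategy: rebalance to the weights pi at every
    time and state, paying the claim c[t] out of portfolio returns."""
    w = w0
    for t in range(len(c)):
        h = [p * w for p in pi]                           # allocate at t - 1
        w = sum(h[j] * R[t][j] for j in range(len(pi))) - c[t]
    return w                                              # terminal wealth

w_T = fixed_proportions([0.6, 0.4], 100.0, [[1.05, 1.02], [0.98, 1.03]], [5.0, 5.0])
```

Unlike the buy-and-hold rule, this strategy trades every period, so the terminal wealth generally differs even on the same return and claim path.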

A target date fund (TDF) is a popular strategy in the pension industry


(Bodie and Treussard 2007). In a TDF, the proportion invested in risky assets
is decreased as the retirement date approaches. In our multi-asset setting, we
implement TDFs as investment strategies that adjust the allocation between

10.1057/9780230251298 - Interest Rate Models, Asset Allocation and Quantitative Techniques for Central Banks and Sovereign Wealth
Funds, Edited by Arjan B. Berkelaar, Joachim Coche and Ken Nyholm
Optimal Construction of a Fund of Funds 209

two complementary subsets J r and J s of the set of all assets J. Here, J s consists
of ‘safe’ assets and J r consists of the rest. In a TDF, the proportional exposure,
i.e. the proportion of wealth invested in J r at time t, is given by

$$e_t = a - bt$$

The parameter a gives the initial proportional exposure in the risky assets

and b specifies how fast the proportional exposure is decreased over time.
Nonnegative proportional exposure in the risky assets can be guaranteed by
choosing a and b so that

a ≥ 0 and a − bT ≥ 0

A TDF is defined by

$$h_t = \pi_t w_t$$

where the vector πt is dynamically adjusted to give the specified propor-


tional exposure:

$$\sum_{j \in J^r} \pi_{t,j} = e_t$$

To complete the definition, one has to determine how the wealth is allo-
cated within J r and J s. We do this according to FP rules.
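A minimal sketch of the TDF weighting; the parameter values and the sub-allocations within the risky and safe blocks are hypothetical.

```python
def tdf_weights(a, b, t, pi_risky, pi_safe):
    """Target-date weights at time t: the exposure e_t = a - b*t goes to
    the risky assets J^r and the remainder to the safe assets J^s, each
    block split according to fixed proportions."""
    e = a - b * t
    assert 0.0 <= e <= 1.0, "choose a and b so the exposure stays in [0, 1]"
    return [e * p for p in pi_risky] + [(1.0 - e) * p for p in pi_safe]

# After 10 periods with a = 0.8 and b = 0.02 the risky exposure is 0.6,
# split equally over two risky assets, with 0.4 in the single safe asset.
weights = tdf_weights(0.8, 0.02, 10, [0.5, 0.5], [1.0])
```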
One of the best-known strategies is the constant proportion portfolio insur-
ance (CPPI) strategy (see e.g. Black and Jones 1987, Black and Perold 1992 and
Perold and Sharpe 1995). In a CPPI, the proportional exposure in the risky
assets follows a rule of the form

$$e_t = \frac{m}{w_t} \max\{w_t - F_t,\, 0\} = m \max\left\{1 - \frac{F_t}{w_t},\, 0\right\}$$

where the ‘floor’ F_t represents the time-t value of claims to be paid in the future, and the parameter m ≥ 0 gives the fraction of the excess of wealth over the floor that is invested in risky assets. In our setting, F_t would represent the value of the part of c remaining at time t. If one wishes to limit the maximum proportional exposure to a given upper bound l, the strategy becomes

$$e_t = \min\left\{ m \max\left\{1 - \frac{F_t}{w_t},\, 0\right\},\; l \right\}$$

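The capped CPPI rule is a one-liner in code; the wealth, floor and multiplier values in the sketch below are placeholders.

```python
def cppi_exposure(w, F, m, l=1.0):
    """Capped CPPI rule: a fraction m of the cushion (wealth above the
    floor F), bounded above by l and below by zero."""
    return min(m * max(1.0 - F / w, 0.0), l)

# Wealth 120 against a floor of 100 with multiplier 3:
# cushion fraction 1/6, so the risky exposure is min(3 * 1/6, 1) = 0.5.
exposure = cppi_exposure(120.0, 100.0, 3.0)
```

When wealth falls to or below the floor the exposure is zero, and for large cushions it saturates at the cap l.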
11.3 The optimization problem

Given an initial capital w_0 and a sequence (c_t)_{t=1}^T of claims representing the liabilities of the investor, it is a natural idea to diversify among different strategies in order to better suit the risk preferences of the owner. The overall strategy obtained with diversification will also cover the claims (c_t)_{t=1}^T, so one
is free to search for an optimal diversification. Diversifying among paramet-
ric classes of investment strategies, such as those listed above, may produce
new strategies which do not belong to the original parametric classes; see
Section 11.5.3 for further discussion.
The problem of diversifying among a finite set {hi | i ∈ I} of strategies can
be written as

$$\underset{\alpha \in X}{\text{minimize}} \;\; \rho\left( \sum_{i \in I} \alpha_i w_T^i \right)$$

where w_T^i is the terminal value of the wealth process w^i obtained by following strategy i ∈ I,

$$X = \left\{ \alpha \in \mathbb{R}^I_+ \;\middle|\; \sum_{i \in I} \alpha_i = 1 \right\}$$

and ρ is a convex risk measure that quantifies the preferences of the decision maker over random terminal wealth distributions (see e.g. Föllmer and Schied 2004 or Rockafellar 2007).
Several choices of ρ may be considered. We will concentrate on the Conditional Value at Risk (CVaR), which is particularly convenient in the optimization context. According to Rockafellar and Uryasev (2000), CVaR_δ at confidence level δ of a random variable w can be expressed as

$$\mathrm{CVaR}_\delta(w) = \inf_{\gamma}\, E\left[ \frac{1}{1-\delta} \max\{\gamma - w,\, 0\} - \gamma \right]$$

Moreover, the minimum over γ is achieved by Value at Risk (VaR) at confidence level δ. The problem of optimal diversification with respect to CVaR_δ can be written as

$$\underset{\alpha \in X,\, \gamma}{\text{minimize}} \;\; E\left[ \frac{1}{1-\delta} \max\left\{\gamma - \sum_{i \in I} \alpha_i w_T^i,\, 0\right\} - \gamma \right] \qquad (1)$$


The problem thus becomes that of minimizing a convex expectation func-


tion over a finite number of variables. Mathematically, it is close to the clas-
sical problem of maximizing the expected utility in a one-period setting
and, consequently, similar techniques can be applied for its solution (see
e.g. Sharpe 2007).
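The CVaR representation above is easy to verify numerically. In the Python sketch below (the function name and the uniform wealth sample are ours, purely for illustration), we use the fact that for an empirical distribution the objective is piecewise linear in γ with kinks at the sample points, so the infimum can be taken over the sample values themselves.

```python
def cvar(sample, delta):
    """Empirical CVaR of terminal wealth via the Rockafellar-Uryasev
    representation: minimise E[max(g - w, 0)/(1 - delta) - g] over g,
    where the minimiser is the delta-quantile (VaR) of the sample."""
    n = len(sample)

    def objective(g):
        return sum(max(g - w, 0.0) for w in sample) / (n * (1.0 - delta)) - g

    return min(objective(g) for g in sample)

# With wealth outcomes 1..100 and delta = 95%, the value equals minus
# the mean of the worst five outcomes: -(1 + 2 + 3 + 4 + 5) / 5 = -3.
risk = cvar([float(w) for w in range(1, 101)], 0.95)
```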

11.4 Numerical procedure

In order to solve (1), we will first make a quadrature approximation of the
objective (see Pennanen and Koivu 2005 and Koivu and Pennanen (to
appear)). That is, we generate a finite number N of return and claim scenar-
ios (Rk, ck), k = 1, ... , N over the planning horizon t = 0, ... , T and approximate
the expectation by

$$\frac{1}{N} \sum_{k=1}^{N} \left[ \frac{1}{1-\delta} \max\left\{\gamma - \sum_{i \in I} \alpha_i w_T^{i,k},\, 0\right\} - \gamma \right]$$

where w_T^{i,k} is the terminal wealth along scenario k obtained with strategy h^i. The computation of w_T^{i,k} is straightforward: given realizations of R^k and c^k, the corresponding wealth process w^{i,k} is given recursively by

$$w_t^{i,k} = \begin{cases} w_0 & t = 0 \\ \sum_{j \in J} R_{t,j}^k\, h_{t-1,j}^{i,k} - c_t^k & t > 0 \end{cases}$$

where h_{t-1}^{i,k} = π_{t-1}^i w_{t-1}^{i,k} and π_{t-1}^i is one of the weight vectors specified in the previous section.
Algorithmically, the solution procedure can be summarized as follows.

1. Generate N scenarios of asset returns R_t and claims c_t over t = 1, ..., T.
2. Evaluate each basic strategy i ∈ I along each of the scenarios k = 1, ..., N and record the corresponding terminal wealth w_T^{i,k}.
3. Solve the optimization problem

1 N  1 
minimize ∑
N k =1  1 − d
max{g − ∑ ai wTi,k , 0} − g  (2)
a∈X ,g
i ∈I 

for the optimal diversification weights α_i.
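The three steps can be prototyped directly. The sketch below (pure Python; the scenario model, the two funds' parameters and the weight grid are all made up) solves a toy two-fund instance by grid search over the weight of the second fund, taking the inner minimum over γ at the empirical quantile, rather than via a linear program.

```python
import random

random.seed(1)
N, T, delta = 2000, 5, 0.95

# Step 1: simulate terminal wealths for two hypothetical funds
# (fund 0: low risk, fund 1: higher mean and higher risk).
def terminal_wealth(mu, sigma):
    w = 1.0
    for _ in range(T):
        w *= max(1e-6, random.gauss(1.0 + mu, sigma))
    return w

# Step 2: record w_T^{i,k} for each fund i and scenario k.
wT = [(terminal_wealth(0.01, 0.02), terminal_wealth(0.04, 0.15))
      for _ in range(N)]

# Step 3: minimise CVaR over the diversification weight of fund 1.
def cvar(sample):
    tail = sorted(sample)[: int(round(N * (1 - delta)))]
    return -sum(tail) / len(tail)          # negative mean of the worst 5%

best_risk, best_a = min(
    (cvar([(1 - a) * w0 + a * w1 for w0, w1 in wT]), a)
    for a in [i / 20 for i in range(21)])
```

Since the grid contains the two pure allocations, the optimized mix is, by construction, at least as good as either fund held alone.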


There are several possibilities for solving Equation (2). We follow Rockafellar and Uryasev (2000) and reformulate (2) as the linear programming problem (LP):

$$\begin{aligned}
\underset{\alpha \in \mathbb{R}^I,\, \gamma \in \mathbb{R},\, s \in \mathbb{R}^N}{\text{minimize}} \quad & \frac{1}{N} \sum_{k=1}^{N} \left[ \frac{1}{1-\delta}\, s^k - \gamma \right] \\
\text{subject to} \quad & s^k \ge \gamma - \sum_{i \in I} \alpha_i w_T^{i,k}, \qquad k = 1,\dots,N \\
& \sum_{i \in I} \alpha_i = 1 \\
& \alpha_i,\, s^k \ge 0
\end{aligned}$$

This LP has |I| + N + 1 variables, where |I| is the number of funds and N is the
number of scenarios in the quadrature approximation of the expectation.
Modern commercial solvers are able to solve LP problems with millions of
variables and constraints.
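The LP data can be assembled mechanically and handed to any such solver (for example scipy.optimize.linprog, with γ left free and the α_i and s^k bounded below by zero). A sketch of the bookkeeping, under our own variable ordering x = (α_1, ..., α_|I|, γ, s^1, ..., s^N):

```python
def cvar_lp_data(wT, delta):
    """Build  min c.x  s.t.  A_ub x <= b_ub,  A_eq x == b_eq  with
    x = (alpha_1..alpha_I, gamma, s_1..s_N), where wT[k][i] is the
    terminal wealth of fund i in scenario k."""
    N, I = len(wT), len(wT[0])
    # objective: (1/N) sum_k [ s_k / (1 - delta) - gamma ]
    c = [0.0] * I + [-1.0] + [1.0 / (N * (1.0 - delta))] * N
    A_ub = []
    for k in range(N):
        # gamma - sum_i alpha_i wT[k][i] - s_k <= 0
        row = [-wT[k][i] for i in range(I)] + [1.0] + [0.0] * N
        row[I + 1 + k] = -1.0
        A_ub.append(row)
    A_eq = [[1.0] * I + [0.0] * (N + 1)]   # sum_i alpha_i = 1
    return c, A_ub, [0.0] * N, A_eq, [1.0]
```

The resulting problem has |I| + N + 1 variables and N + 1 constraints, matching the dimensions quoted above.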

11.5 Case study: pension fund management

Consider a closed pension fund whose aim is to cover its accrued pen-
sion liabilities with given initial capital. The pension claims are of the
defined benefit type and they depend on the wage and consumer price
indices. According to the current Finnish mortality tables, all the liabilities will be amortized in 82 years. The following section describes the stochastic return and claim processes R = (R_t)_{t=1}^T and c = (c_t)_{t=1}^T, and Section 11.5.2 lists the basic strategies that will be used in the numerical study in Section 11.5.3.

11.5.1 Assets and liabilities


The set J of primitive assets consists of:

1. Euro area money market,


2. Euro area government bonds,
3. Euro area equity,
4. US equity,
5. Euro area real estate.

These are the assets in which the individual funds described in Section 11.2
invest. On the other hand, the above asset classes may be viewed as invest-
ment funds themselves.
For the money market fund, the return over a holding period Δt is determined by the short rate Y_1, where

$$R_{t,1} = e^{\Delta t\, Y_{t-1,1}}$$


The short rate will be modelled as a strictly positive stochastic process, which will imply that R_1 > 0. The return of the government bond fund will be approximated by the formula

$$R_{t,2} = \Delta t\, Y_{t-1,2} + \left( \frac{1 + Y_{t,2}}{1 + Y_{t-1,2}} \right)^{-D}$$

[Figure 11.1: five panels — (a) Money market fund, (b) Bond fund, (c) Euro area equity fund, (d) US equity fund, (e) Euro area real estate fund — each plotting monthly return percentiles (%) against month over 250 months.]

Figure 11.1 Evolution of the 0.1%, 5%, 50%, 95% and 99.9% percentiles of monthly
asset return distributions over 20 years


where Y_{t,2} is the average yield to maturity of the bond fund at time t and D is the modified duration of the fund. The total returns of the equity and real estate funds are given in terms of the total return indices S_j:

$$R_{t,j} = \frac{S_{t,j}}{S_{t-1,j}}, \qquad j = 3, 4, 5$$
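The three return formulas translate directly into code; in the sketch below the yields, duration and index levels are placeholders rather than model output.

```python
import math

def money_market_return(dt, y_prev):
    # R_{t,1} = exp(dt * Y_{t-1,1})
    return math.exp(dt * y_prev)

def bond_fund_return(dt, y_prev, y_now, duration):
    # yield carry plus a duration-driven price term, as in the text
    return dt * y_prev + ((1.0 + y_now) / (1.0 + y_prev)) ** (-duration)

def index_return(s_now, s_prev):
    # R_{t,j} = S_{t,j} / S_{t-1,j},  j = 3, 4, 5
    return s_now / s_prev

# With an unchanged yield the bond price term equals 1, leaving only carry:
r_bond = bond_fund_return(1.0 / 12.0, 0.04, 0.04, 5.0)
```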

The pension fund’s liabilities consist of the accrued benefits of the plan
members. The population of the pension plan is distributed into different
cohorts based on members’ age and gender. The fraction of retirees in each
cohort increases with age and reaches 100% by the age of 68. The youngest
cohort is 18 years of age and all the members are assumed to die by the age
of 100. The defined benefit pensions depend on stochastic wage and con-
sumer price indices.
We will model the evolution of the short rate, the yield of the bond port-
folio, the total return indices as well as the wage and consumer price indi-
ces with a Vector Equilibrium Correction-model (Engle and Granger 1987)
augmented with GARCH innovations. A detailed description of the model
together with the estimated model parameters is given in the Appendix.
Figure 11.1 displays the 0.1%, 5%, 50% (median), 95% and the 99.9% percentiles of the simulated asset return distributions over the first 20 years of the 82-year investment horizon. Figure 11.2 displays the development of

[Figure 11.2: projected yearly pension expenditure in billion €, median and 95% percentile, plotted from 2010 to 2090.]

Figure 11.2 Median and 95% confidence interval of the projected pension expend-
iture c over the 82-year horizon


the median and the 95% confidence interval of the yearly pension claims
over the 82-year horizon.

11.5.2 The investment funds


We will diversify a given initial capital among different investment funds
as described in Section 11.3. The considered funds follow the trading rules
listed in Section 11.2 with varying parameters. The set J^s of ‘safe assets’ consists of the money market and bond investments.
We take five buy and hold strategies, each of which invests all in a single
asset. More general BH strategies can be generated by diversifying among
such simple BH strategies. We use 11 FP strategies with varying parameters π.
In TDF and CPPI strategies, we always use fixed proportion allocations within
the safe assets J s and the risky assets J r. We use 20 TDF strategies with varying
values for a and b. In the case of CPPI strategies, we define the floor through

$$F_T = 0, \qquad F_t = (1 + r) F_{t-1} - \bar{c}_t, \quad t = 1,\dots,T$$

where r is a deterministic discount factor and c̄_t is the median claim amount at time t; see Figure 11.3. This corresponds to the traditional actuarial definition of ‘technical reserves’ for an insurance portfolio. We generate 40 CPPI strategies with varying values for the multiplier m and the discount factor r in the definition of the floor.
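Since the recursion above is equivalent to F_{t−1} = (F_t + c̄_t)/(1 + r) with F_T = 0, the floor can be computed backward as the discounted value of the remaining median claims. A sketch with made-up claim figures:

```python
def floor_path(median_claims, r):
    """Backward recursion for the CPPI floor: F_T = 0 and
    F_{t-1} = (F_t + c_t) / (1 + r), so F_t is the value at time t of
    the median claims still to be paid, discounted at rate r."""
    T = len(median_claims)
    F = [0.0] * (T + 1)
    for t in range(T, 0, -1):
        F[t - 1] = (F[t] + median_claims[t - 1]) / (1.0 + r)
    return F

# Two periods, a single claim of 110.25 at the end, 5% discount rate:
# F_0 = 110.25 / 1.05**2, which is approximately 100.
F = floor_path([0.0, 110.25], 0.05)
```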

[Figure 11.3: floor paths in billion € (up to roughly 300) for discount factors r = 4%, 5%, 6% and 7%, plotted from 2010 to 2090.]

Figure 11.3 Development of the floor F with different discount factors r over the
82-year horizon


11.5.3 Results
We computed an optimal diversification over the above funds assuming an
initial capital of 225 billion euros. We constructed the corresponding linear
programming problem with 20,000 scenarios, as described in Section 11.4.
The resulting LP consisted of 20,072 variables and 20,001 constraints. The
LP was solved with a MOSEK interior point solver and AMD 3GHz processor
in approximately 30 seconds.

The optimal solution is given in Table 11.1 with the characteristics of
the funds in the optimal diversification. The optimal allocation in terms of
the primitive assets at time t = 0 is given in Figure 11.4. The CVaR97.5% of the

[Figure 11.4 pie chart: Bonds 84%, US equity 11%, Euro area equity 3%, Money market 1%, Real estate 1%.]

Figure 11.4 Optimal initial allocation in the primitive assets

Table 11.1 Optimally constructed fund of funds

  Weight (%)   Type   Parameters                 CVaR 2.5% (billion €)
  66.5         BH     Bonds                      1569
  2.9          BH     Euro Equity                6567
  10.4         BH     US Equity                  5041
  2.2          FP     m = 0.8                    3324
  3.9          CPPI   m = 1, r = 4%, l = 100%    1420
  9.9          CPPI   m = 2, r = 4%, l = 100%    1907
  4.2          CPPI   m = 2, r = 5%, l = 100%    2417

Notes: The first column gives the optimal weight of each of the investment strategies. The second column indicates the type of the investment strategy (see Section 11.2). The third column gives the parameters of the investment strategies, with m denoting the weight of the risky assets, r the deterministic discount factor and l the upper bound of the risky assets. The last column gives the CVaR 2.5% for each strategy in billions of euros.


optimally constructed fund of funds is 251 billion €. The last column of Table 11.1 gives
the CVaR numbers obtained with the individual funds in the optimal fund of
funds. The constructed fund of funds clearly improves upon them. The best
CVaR97.5% value among all individual funds is 1020, which means that the
best individual fund is roughly 300% riskier than the optimal diversification.
Surprisingly, this fund is not included in the optimal fund of funds. All the
CVaR-values were computed on an independent set of 100,000 scenarios.

11.6 Conclusions

This chapter applies the computational technique developed in Koivu and


Pennanen (to appear) to a long-term asset liability management problem with
dynamic portfolio updates. The technique reduces the original problem to
that of diversifying a given initial capital over a finite number of investment
funds that follow dynamic trading strategies with varying investment styles.
The simplified problem is solved with numerical integration and optimiza-
tion techniques. When evaluated on an independent set of 100,000 scenar-
ios, the optimized fund of funds outperforms the best individual investment
strategy by a wide margin. This opens ample possibilities for future research.
An interesting possibility would be to apply the approach to risk measure-
based pricing of insurance liabilities in incomplete markets.

Appendix: The time series model

As described above, the returns of the investment funds and pension cash
flows can be expressed in terms of seven economic factors: the short term
(money market) interest rate (Y1), the yield of a euro area government bond
fund (Y2), the euro area total return equity index (S3), the US total return
equity index S4, the euro area total return real estate investment index (S5),
the Finnish wage index (W) and the euro area consumer price index (C). We
will model the evolution of the stochastic factors with a Vector Equilibrium
Correction-model (Engle and Granger 1987) augmented with GARCH
innovations. To guarantee the positivity of the processes Y1, Y2, S3, S4, S5, W
and C, we will model their natural logarithms as real-valued processes. More
precisely, we will assume that the vector process

$$\xi_t = \begin{pmatrix} \ln Y_{t,1} \\ \ln Y_{t,2} \\ \ln S_{t,3} \\ \ln S_{t,4} \\ \ln S_{t,5} \\ \ln W_t \\ \ln C_t \end{pmatrix}$$


follows a VEqC-GARCH process

$$\Delta \xi_t - \delta = \mu_t + \sigma_t \varepsilon_t, \qquad (3)$$

where

$$\mu_t = A\left( \Delta \xi_{t-1} - \delta \right) + a\left( \beta^T \xi_{t-1} - \gamma \right) \qquad (4)$$

and

$$\sigma_t^2 = C \sigma_{t-1} \varepsilon_{t-1} \left( C \sigma_{t-1} \varepsilon_{t-1} \right)^T + D \sigma_{t-1}^2 D^T + \Omega. \qquad (5)$$

In (4), the matrix A captures the autoregressive behaviour of the time series, the second term takes into account the long-term behaviour of ξ_t around statistical equilibria described by the linear equations β^T ξ = γ, and δ is a vector of drift rates. The time-varying volatilities, and hence covariances, of the time series are modelled through a multivariate GARCH specification (5), where the matrices C, D and Ω are parameters of the model.
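For intuition, a scalar sketch of one step of (3)–(5): with A, C and D diagonal, each coordinate of the system evolves essentially like this, and all parameter values in the example are placeholders rather than the estimates reported below.

```python
import random

def veqc_garch_step(xi, dxi_prev, sigma_prev, eps_prev,
                    A, a, b, gamma, delta, C, D, Omega):
    """One scalar step of the VEqC-GARCH recursion (3)-(5)."""
    mu = A * (dxi_prev - delta) + a * (b * xi - gamma)          # eq. (4)
    var = ((C * sigma_prev * eps_prev) ** 2
           + (D * sigma_prev) ** 2 + Omega)                     # eq. (5)
    sigma = var ** 0.5
    eps = random.gauss(0.0, 1.0)
    dxi = delta + mu + sigma * eps                              # eq. (3)
    return xi + dxi, dxi, sigma, eps

# One illustrative step from a neutral state with unit previous volatility.
state = veqc_garch_step(0.0, 0.0, 1.0, 1.0,
                        0.5, 0.1, 1.0, 0.0, 0.0, 0.3, 0.9, 0.01)
```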
In its most general form the above model specification has a very high number of free parameters that need to be estimated. To simplify the estimation procedure and to keep the model parsimonious, while still capturing the most essential features observed in the historical time series, we will assume that the matrices A, C and D are diagonal and fix the matrix β as

$$\beta = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}^T$$

The specification of the matrix β implies that the government bond yield and the spread between the bond yield and the short rate are mean-reverting processes.
We take the parameter vectors δ and γ as user-specified parameters and set their values to

$$\delta = 10^{-3} \begin{pmatrix} 0 & 0 & 7.5 & 7.5 & 5.0 & 2.0 & 3.0 \end{pmatrix}^T, \qquad \gamma = \begin{pmatrix} \ln(5) \\ \ln(5/4) \end{pmatrix}$$


The vector δ allows the user to specify the expected median values of the equity and real estate returns as well as the growth rates of consumer prices and wages. Correspondingly, through the specification of the vector γ, the user can control the long-term median values of the government bond yield and the spread between the bond yield and the short rate, and hence the expected median level of the short rate. The set equilibrium values imply that the median values of the short rate Y_{t,1} and the yield of the bond portfolio Y_{t,2} will equal four and five per cent, respectively.
We estimate the remaining model parameters using monthly data between January 1991 and July 2008, applying an estimation procedure in which insignificant parameters are deleted one by one until all remaining parameters are significant at a 5% confidence level. The time series used in the estimation are summarized in Table 11.A.1 and the estimated parameter matrices are given below.

Table 11.A.1 Data series used in the estimation

  Stochastic factor   Historical time series
  Y1                  Three-month EURIBOR (FIBOR prior to EURIBOR)
  Y2                  Yield of a German government bond portfolio with an average modified duration of five years
  S3                  MSCI Euro area total return equity index
  S4                  MSCI US total return equity index
  S5                  EPRA/NAREIT Euro area total return real estate index
  W                   Seasonally adjusted Finnish wage index (Statistics Finland)
  C                   Seasonally adjusted Euro area consumer price index (Eurostat)

41.995 0 0 0 0 0 0 
 
0 14.807 0 0 0 0 0 
 
0 0 0 0 0 0 0 
−2  0 0 0 0 0 0 0 
A = 10  
0 0 0 0 0 0 0 
0 0 0 0 0 96.233 0 
 
0 0 0 0 0 0 93.422 

T
0 −2.119 0 0 0 0 0 
−2
a = 10  
1.514 0 0 0 0 0 0 ,
 

10.1057/9780230251298 - Interest Rate Models, Asset Allocation and Quantitative Techniques for Central Banks and Sovereign Wealth
Funds, Edited by Arjan B. Berkelaar, Joachim Coche and Ken Nyholm
220 Petri Hilli et al.

25.788 0 0 0 0 0 0
 
0 29.816 0 0 0 0 0
 
0 0 41.952 0 0 0 0
−2  0 0 0 38.588 0 0 0 ,
C = 10  
0 0 0 0 28.071 0 0
0 0 0 0 0 31.8125 0

Copyright material from www.palgraveconnect.com - licensed to Taiwan eBook Consortium - PalgraveConnect - 2011-03-03
 
0 0 0 0 0 0 0 

88.301 0 0 0 0 0 0
 
0 91.236 0 0 0 0 0
 
0 0 86.412 0 0 0 0
−2  0 0 0 91.373 0 0 0 ,
D = 10  
0 0 0 0 94.117 0 0
0 0 0 0 0 81.056 0
 
0 0 0 0 0 0 0 

202.241 71.004 −0.460 0.723 −1.622 −0.015 −0.105


 
71.004 170.5
507 30.889 9.200 −3.682 0.134 −0.277
 
 −0.460 30.889 202.430 53.547 54.036 0.021 0.199 
−6  0.723 0.021 
Ω = 10 9.200 53.547 25.330 14.050 0.003
 
 −1.622 −3.682 54.036 14.050 44.769 −0.094 0.179 
 −0.015 0.134 0.021 0.003 −0.094 0.010 0.019 
 
 −0.105 −0.277 0.199 0.021 0.179 0.019 0.198 

Bibliography
Black, F. and Jones, R. (1987) ‘Simplifying Portfolio Insurance’. Journal of Portfolio Management, 14(1): 48–51.
Black, F. and Perold, A.F. (1992) ‘Theory of Constant Proportion Portfolio Insurance’. Journal of Economic Dynamics and Control, 16: 403–426.
Bodie, Z. and Treussard, J. (2007) ‘Making Investment Choices as Simple as Possible, but not Simpler’. Financial Analysts Journal, 63(3): 42–47.
Engle, R.F. and Granger, C.W.J. (1987) ‘Co-integration and Error Correction: Representation, Estimation, and Testing’. Econometrica, 55(2): 251–276.
Föllmer, H. and Schied, A. (2004) Stochastic Finance: An Introduction in Discrete Time, Volume 27 of de Gruyter Studies in Mathematics. Walter de Gruyter & Co., Berlin, extended edition.
Koivu, M. and Pennanen, T. (to appear) ‘Galerkin Methods in Dynamic Stochastic Programming’. Optimization.
Pennanen, T. and Koivu, M. (2005) ‘Epi-convergent Discretizations of Stochastic Programs via Integration Quadratures’. Numerische Mathematik, 100(1): 141–163.
Perold, A.F. and Sharpe, W.F. (1995) ‘Dynamic Strategies for Asset Allocation’. Financial Analysts Journal, 51(1): 149–160.
Rockafellar, R.T. (2007) ‘Coherent Approaches to Risk in Optimization under Uncertainty’. Tutorials in Operations Research, INFORMS 2007, 38–61.
Rockafellar, R.T. and Uryasev, S.P. (2000) ‘Optimization of Conditional Value-at-Risk’. Journal of Risk, 2: 21–42.
Sharpe, W.F. (2007) ‘Expected Utility Asset Allocation’. Financial Analysts Journal, 63(5): 18–30.

Part III
Asset Class Modelling and Quantitative Techniques

12
Mortgage-Backed Securities in a Strategic Asset Allocation Framework

Myles Brennan and Adam Kobor

12.1 Motivation

To perform robust asset allocation analysis, investors need reliable quantitative models to assess the expected risk and return profile of the asset classes and sectors that may become constituents of the strategic asset mix.
This chapter has been written for fixed income investors who would like to consider a strategic asset allocation to agency guaranteed mortgages (MBS) in their portfolios. Within the US high grade fixed income universe, the largest sector is the MBS sector, comprising close to 40% of the universe, so a reliable asset class model for MBS should be useful to a significant number of fixed income investors. In fact, this chapter should be relevant for multi-asset investors as well, who may consider US high grade fixed income simply as one asset class among others, like equities or real estate. In their case, MBS by definition receives a near 40% weight within their fixed income allocation, so it is beneficial for them to be able to reliably model the behaviour of the MBS sector.
Strategic asset allocation decisions are made within a long investment horizon. Even for conservative bond investors, who focus on reducing the chances of capital losses over short time horizons, the time frame for strategic asset allocation is typically around one to three years. Over a few years, market environments can change dramatically, so it is critical to have a model to quantify the downside risks of any asset class.
When one analyzes government bond portfolios, the single most import-
ant determinant of risk is the Treasury yield curve. After setting determin-
istic or stochastic scenarios for the Treasury yield curve, one can easily
translate those scenarios into performance figures for government securities
with different durations.
In order to expand an analytical framework to include mortgage-backed
securities, one potentially needs a model that is robust and parsimonious: a
model that fits well, provides reliable risk and return estimations, but does
not increase the complexity of the quantitative framework unnecessarily.


One would potentially link MBS to yield curve factors, and perhaps a lim-
ited number of additional factors. One needs a robust model that can be
used for risk and return estimation over a time horizon of a couple of years.
The scope of the strategic asset allocation model is very different from the
relative value models that are used by traders aiming to identify mispric-
ings in the order of basis points, and with time horizons of a few months.
Relative value models need to be precise at the level of the individual secur-

Copyright material from www.palgraveconnect.com - licensed to Taiwan eBook Consortium - PalgraveConnect - 2011-03-03
ity, and require a large number of assumptions. In the case of the strategic
asset allocation, the set of assumptions is more limited. The time horizon is
longer, and one needs to be able to model several other sectors like corporate
bonds or international markets in a common framework with mortgages. It
is interesting to note that the available literature about the role of MBS in
the strategic asset allocation is very limited, while there is extensive pub-
lished work on the valuation of individual mortgage-backed securities.
The model which we present in this chapter is very simple. In fact, we
attribute the performance of MBS solely to the seven-year swap rate. While
the underlying factors could be easily extended to additional risk factors
like yield curve slope or volatility, the goodness of fit is robust even in the
presented setting. The attribution model has three components: (1) coupon
return, (2) price return, driven by a time-varying duration that reflects the
negative convexity profile of the MBS universe, and (3) paydown return.
While the model is simple, we are not aware of other models presented in a
similar fashion.
This chapter is structured as follows. First, we discuss the role of the mort-
gage-backed securities sector in the strategic asset allocation. In the second
section we present the return attribution model. In the third section we
illustrate the model by analyzing the sector performance over 2007–08, and
related to this, we also comment on the implications of the current market
developments. While the events of the 2007–08 crisis certainly have a huge
impact on the broad landscape of the mortgage universe, we try to keep the
chapter as general as possible, focusing on the building blocks of the return
attribution model. One simple reason is that at the time of writing this
chapter, market events and policy responses follow each other rapidly, and
some of our statements would become outdated within a few
weeks or months.

12.2 MBS as a strategic asset class

12.2.1 Literature review


In this chapter we focus on the agency guaranteed mortgage-backed
securities sector, i.e., those mortgage-backed securities that are guaran-
teed by the Government National Mortgage Association (GNMA), the
Federal National Mortgage Association (FNMA), or the Federal Home Loan
Mortgage Corporation (FHLMC). Mortgage-backed securities have a very

Mortgage-Backed Securities in a Strategic Framework 227

wide literature. Fabozzi (2006) and Fabozzi et al. (2007) provide a very broad
overview about the details and mechanics of the mortgage market. We also
refer to Gabaix et al. (2007) who discuss a fundamental valuation approach
for mortgages, and Arora et al. (2000) who describe a five-factor attribution
model to analyze the relative performance of mortgages over Treasuries. The
field of strategic asset allocation similarly has a strong theoretical basis and
wide literature. Campbell and Viceira (2002) discuss the general theory and
fundamental principles of strategic asset allocation. Bakker and van Herpt
(2007) provides an up-to-date review of the practice and trends in central
bank reserve management that may be relevant for other conservative fixed
income investors as well. On the other hand, there is surprisingly little lit-
erature available which addresses specifically the role of mortgage-backed
securities in the strategic asset allocation. UBS Mortgage Strategist (2003),
for example, provides a detailed discussion and historical performance
review of MBS as an asset class. However, we are not aware of any literature
that presents a simple but robust sector model for MBS.

12.2.2 Structure of the agency guaranteed
mortgage-backed securities universe
In this chapter we use the agency guaranteed mortgage-backed securities
index provided by Lehman Brothers1 to describe the corresponding sector.
For a detailed description of the index, we refer to Mann and Phelps (2003)
and Dynkin et al. (2005).
As of 30 June 2008, the market capitalization of this universe was more
than four trillion dollars, comprising about 40% of the US high
grade fixed income market, about 70% of the total US securitized mortgage
market, and about 50% of the total US mortgage loan market, based on data
obtained from the US Federal Reserve and UBS. The securities comprising
this sector can be divided into different categories, according to the follow-
ing characteristics:

● Fixed rate bonds versus adjustable rate mortgages (ARMs); the latter comprise
about one tenth of the index.
● Original maturity of 15, 20 or 30 years; the latter comprises about three
quarters of the universe.
● Discount or premium bonds: the ratio certainly varies dynamically based
on the market conditions. However, it is critical to note that the behav-
iour and interest rate sensitivity of premium and discount bonds differ
significantly due to the option which the borrower retains to prepay the
underlying mortgage.
● The agencies that guarantee the specific security: Bonds guaranteed by GNMA,
a government agency with an explicit guarantee from the US Government,
comprise about one tenth of the universe. The rest are guaranteed by
FNMA and FHLMC, so called Government Sponsored Enterprises that are


private corporations but until the middle of 2008 have had only an impli-
cit guarantee by the US Government. In September 2008, FNMA and
FHLMC were taken under the US Treasury’s conservatorship, strengthening
the Government’s financial support of these entities.
● Seasoning: the behaviour of recently issued bonds differs from bonds issued
in earlier vintage years, so-called seasoned bonds.

All these dimensions play a critical role in the practice of portfolio
management, as well as in a detailed risk and return attribution of the
universe.

12.2.3 Investor considerations and historical performance


There are several ways for a conservative government bond investor to
enhance the risk and return profile of her portfolio. Just a few examples to
consider:

● Duration extension: By simply extending the duration of the portfolio, the
investor can expect to be compensated by the term premium over a long horizon.
The problem, however, is that the risk diversification benefits are very
limited. By simply extending duration, the investor remains exposed to
the same set of risk factors, but at a larger degree of sensitivity.
● Credit risk: By assuming some kind of credit risk, the investor will be com-
pensated by some additional spread for the assumed additional risk, and a
degree of diversification can be expected for the additional risk factor.
● Selling options: Investing in mortgage-backed securities is one way of gener-
ating revenue from receiving an option premium. Mortgage loan borrow-
ers have the right to prepay their loan faster than payments are scheduled,
and thus they may have the opportunity to refinance their loan at a lower
interest rate if mortgage loan rates decrease. Lenders and mortgage-backed
securities investors require an option premium from the borrower which
may enhance the performance of the portfolio. The value of the option is
partly driven by yield volatilities, which is an additional risk factor that
may improve portfolio diversification.
● International diversification: Finally, investing in international bond mar-
kets can be considered as another means of diversification. Even if the
currency risk is eliminated by a hedging mechanism, the exposure to
multiple yield curves may result in a more efficient portfolio.

For illustrative purposes, in Figure 12.1 we show the historical
performance of the previously described investment alternatives based on selected
index returns over the past 18 years. The duration decision can be considered
across the US Treasury bond indices, starting from the three-month T-Bill up
to the seven-ten year Notes index of Merrill Lynch. Credit risk is illustrated
by the Lehman Brothers Corporate Bond index, mortgages are represented


[Figure omitted: average return (%) plotted against volatility (%) for the selected indices, with the efficient frontier of US Treasury indices shown for reference]
Figure 12.1 Historical risk and return of selected bond indices (Jan. 1990–Sept. 2008)
Source of data: Lehman Brothers and Merrill Lynch

by Lehman’s Agency-Guaranteed MBS index, and international diversification
by Lehman’s G7 Governments index, hedged back to the US dollar.
Finally, the US high grade fixed income universe is represented by Lehman’s
US Aggregate index. We can observe that, except for corporate bonds, all
alternatives outside US Treasuries reside above the efficient frontier comprised
of Treasury bonds only. In other words, diversification appeared to be more
efficient than pure duration extension over the selected historical period. It
is interesting to note that Balachandran et al. (2008) find that a combination
of the G7 government bonds and MBS can be used as a reasonable proxy for
the whole Lehman Global Aggregate index, the index representing the high
grade bond market of the developed and some emerging markets.
In Table 12.1, we extend our historical snapshot by showing some add-
itional risk and performance statistics for the selected bond investment alter-
natives. By comparing the Sharpe ratios of all the selected indices, we find
that the MBS index had the highest Sharpe ratio. While these statistics are
certainly sensitive to the historical sample selection, it is easy to check that
one would find a similar picture by analyzing shorter historical samples,
say, only the last five years. (However, taking the most recent history, start-
ing from the beginning of the sub-prime crisis in the United States, would
certainly show a more adverse picture for MBS and Corporates.) In addition,
we note that the historical average duration of the MBS index (roughly 3.3
years duration based on the past 18-year history) falls closest to the duration
of the US Treasury three-five index. From a downside risk perspective, however,
MBS seemed to be less risky than the three-five year Treasury notes – whether
we compare the lowest historical returns at the first percentile or the
frequency of negative annual returns.
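The statistics in Table 12.1 can be computed from monthly total return series. The sketch below is our own illustration of the reported measures (annualized average return, volatility, Sharpe ratio, first-percentile year-on-year return and frequency of negative rolling annual returns), not the index providers’ methodology; the function name and input conventions are our assumptions:

```python
import numpy as np

def performance_stats(monthly_returns, monthly_rf):
    """Annualized performance and downside statistics from monthly
    total-return series (decimal form, e.g. 0.005 = 0.5%)."""
    r = np.asarray(monthly_returns, dtype=float)
    rf = np.asarray(monthly_rf, dtype=float)
    avg = r.mean() * 12                        # annualized average return
    vol = r.std(ddof=1) * np.sqrt(12)          # annualized volatility
    sharpe = (avg - rf.mean() * 12) / vol
    # Rolling 12-month (year-on-year) compound returns
    yoy = np.array([np.prod(1 + r[i:i + 12]) - 1
                    for i in range(len(r) - 11)])
    return {
        "avg_return": avg,
        "volatility": vol,
        "sharpe": sharpe,
        "pct1_yoy": np.percentile(yoy, 1),     # "99% lowest" y/y return
        "freq_neg_yoy": np.mean(yoy < 0),      # freq. of negative y/y returns
    }
```

Applied to each index return series against the three-month T-Bill, this reproduces the structure of Table 12.1, although the published numbers naturally depend on the exact data and conventions used.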


Table 12.1 Historical performance statistics of selected bond indices in % (Jan. 1990–Sept. 2008)

                         3-mth   UST    UST    UST    UST    G7 Govt
                         T-Bill  1–3    3–5    5–7    7–10   (Hedged)   MBS   Corporates  US Agg.
Average return            4.3    5.6    6.7    7.2    7.4     6.7       6.9      6.6        6.8
Return volatility         0.5    1.7    3.7    4.6    5.8     3.2       3.0      5.0        3.7
Sharpe ratio               –     0.89   0.72   0.68   0.59    0.89      0.94     0.50       0.73
99% lowest m/m ret.       0.1   −0.7   −1.9   −2.7   −3.6    −1.7      −1.7     −3.1       −2.3
99% lowest y/y ret.       1.1    0.5   −2.5   −3.9   −6.2    −3.1      −1.4     −4.4       −3.0
Freq. of neg. y/y ret.    0.0    0.9    7.0    9.3   12.6     4.2       3.3      8.9        6.5

Source of data: Lehman Brothers and Merrill Lynch

The historical performance of the agency guaranteed MBS sector has made
it attractive to conservative investors in recent years. In fact, according to
the annual UBS survey of eligible asset classes for central banks, 52% of the
responding central banks claimed MBS as an eligible asset class in 2007,
compared to only 2% in 1998. According to the UBS Mortgage Strategist
(2008), this ratio dropped to 46% in 2008, certainly reflecting concerns
about the US housing and mortgage-backed securities market, but the reality
is that the appetite for most of the non-government bond sectors declined,
including agency bonds, supranationals and equities.
Taking its capitalization and weight in the high grade fixed income uni-
verse into account, the agency guaranteed MBS sector is certainly a natural
candidate for bond investors to consider. According to the current market
weights, as the data in Table 12.2 show, the agency guaranteed MBS account
for almost 40% of the US high grade fixed income market, making this the
sector the largest one. This sector has grown from less than one third of the
universe to almost 40% during the past two decades, reaching over four tril-
lion dollar capitalization.
In order to assess the value-added of a fixed income sector relative to gov-
ernment bonds, we get a clearer picture if we look at the key-rate duration
adjusted excess return, i.e., the return of the specific sector over a portfolio
of government bonds that has the same profile of interest rate sensitivity.
This type of excess return tells us the magnitude of the additional return
that we could have not achieved with a government bond portfolio.
In the case of MBS, duration may vary intensively over time as a result of
the borrowers’ prepayment option. When yields drop and the loans become
refinanceable (akin to callable bonds becoming callable), the value of
the loans and the bonds does not increase any further since they will likely be


Table 12.2 Composition of the US high grade fixed income universe (as of 30 June 2008)

                   Market Cap. ($Bn)   Percentage
Treasuries               2,353            21.9
Agencies                 1,186            11.0
ABS                         83             0.8
Agency-MBS               4,164            38.7
Corporates               2,131            19.8
Aggregate Index         10,747           100.0

Source: Lehman Brothers

called. In this environment, duration becomes very low. On the other hand,
when yields rise, duration will increase.
From the point of view of a strategic asset allocation exercise, MBS is likely
to be added ‘as is’, i.e., without having its duration dynamically hedged.
From a practical standpoint, the investor will assume both the interest rate
risk implied by the changing duration of the MBS and its MBS-specific
risk.
In addition, based on historical observations, MBS typically outper-
forms Treasuries with similar duration in those months when interest rates
increase, and somewhat underperforms on average during those months
when yields are decreasing. Note that this is just the opposite of the impact
of duration extension. Longer duration bonds outperform shorter duration
bonds during decreasing yield periods, at least on the price return level. In
fact, this is further empirical evidence of the diversification that MBS may
provide. The reason may be twofold. First, spread sectors in general have
the tendency to outperform government bonds when government yields
increase due to the low or sometimes negative correlation between gov-
ernment yields and spreads. Second, when yields decrease, mortgage loans
become more refinanceable, thus investors face the negative impact of the
paydown return. In an increasing yield period, on the other hand, loans
are less refinanceable, and the positive excess returns simply reflect the pre-
mium for the refinancing option. Nevertheless, we have to note that meas-
uring the duration of mortgages, and thus stating their excess performance
over Treasuries is heavily model-dependent. Similarly, valuing the prepay-
ment option and assessing the expected risk premium from mortgages is
heavily model-dependent, as discussed by Gabaix et al. (2007).
In light of the market crisis of 2007–08, we need to make an important
distinction. In this chapter we discuss the agency guaranteed MBS sector
exclusively. While this sector – together with all other spread sectors up
to the point of writing this chapter – has underperformed US Treasuries
with equivalent duration since the middle of 2007, its performance is vastly


[Figure omitted: cumulative total return (%) of the agency guaranteed MBS index versus the home equity loan (sub-prime) index, Jan. 2007–Sep. 2008]

Figure 12.2 Performance of the agency guaranteed versus sub-prime MBS


Source of data: Lehman Brothers

different from the so-called sub-prime mortgages that account for about
10% of the overall US mortgage market. Sub-prime mortgages suffered tre-
mendous losses over the past year. The contrast is shown in Figure 12.2,
where we use Lehman’s Home Equity Loan index to represent the sub-prime
universe. As the chart suggests, the agency guaranteed and the sub-prime
mortgages had roughly the same performance until the middle of 2007
when the sub-prime mortgage crisis started. After that, the performance of
the sectors diverged dramatically.

12.3 Attribution model for MBS as an asset class

In this section we describe a model that can be used to estimate the per-
formance of the MBS universe in certain interest rate and spread scenarios
for strategic asset allocation purposes. While it is clear from Section 12.2
that this fixed income sector is extremely complex, we try to keep the model
as simple as possible. All the estimates are based on aggregates; we don’t try
to estimate the sub-sectors individually.
Our model is a return attribution driven by interest rates and spreads. In
other words, it is supposed to forecast how the MBS sector will perform in
aggregate under a defined yield and spread scenario. For the sake of robust-
ness and ease in implementation, we do not link the model to any factors
other than interest rates and spreads, or simply swap rates.
Arora et al. (2000) present a regression-based model to attribute the dura-
tion-adjusted excess return of MBS to several fundamental market factors.
This and similar studies may be very valuable references to understand
how market factors like volatility or yield curve slope explain the sector’s
excess return after adjusting to its key-rate duration profile. In fact, the


factors listed in such studies can be easily used to extend our
model according to the analyst’s taste, or can simply be taken into consider-
ation when defining the underlying yield and spread scenario.
However, these types of models cannot be used directly for strategic asset
allocation. This is because it is unlikely that MBS would be included in the
asset allocation exercise under the assumption that its duration would be
dynamically hedged. In fact, the bulk of the total return variance is driven
by the price return, which cannot be estimated properly without modelling
mortgage duration and the so-called negative convexity. The term negative
convexity refers here to the fact that the duration of mortgages varies over
time; increasing when interest rates rise and decreasing when yields fall,
reflecting changes in the likelihood of prepayments.
In addition, when we consider MBS in an asset allocation exercise, we
would like to model the behaviour of the whole universe underlying the
MBS index. The MBS universe is constantly regenerating: prepaid loans are
replaced by new, mostly current coupon bonds. This again requires a devi-
ation from the relative value models that are concerned with securities over
their natural life, but do not deal with the nature of replacement. Total
return is generally defined as the market value change of the investment,
plus the cash flows realized during the measurement period. Mortgages gen-
erate cash flows in the form of coupon payments, scheduled principal
repayments, and prepayments, i.e., principal payments made earlier than
scheduled. The total return of MBS can be expressed as below:

TR = (ΔMV + CF) / MV0 = [ (F1 ⋅ P1 − F0 ⋅ P0) + (F0 − F1) + F0 ⋅ Acc1 ] / [ F0 ⋅ (P0 + Acc0) ]   (1)

where F represents the MBS factor, i.e., the ratio of the current outstanding
face value relative to the original face value, P denotes the price and Acc
represents the accrued interest or coupon payment. We note that ( F0 − F1 )
represents the principal repayment. By rearranging the above expression, we
can attribute total return to three components as follows:

TR = Price Ret. + Paydown Ret. + Coupon Ret.
   = [ F0 ⋅ ΔP − ΔF ⋅ (1 − P1) + F0 ⋅ Acc1 ] / [ F0 ⋅ (P0 + Acc0) ]   (2)

where ΔP = P1 − P0 and ΔF = F1 − F0.

In Formula (2),

● Coupon return represents the accrued interest on the face value at the
beginning of the measurement period.
● Price return represents the pure appreciation or depreciation in the bond
value due to yield changes.


● Paydown return represents the fact that some part of the pure price return
is not realized because a certain portion of the principal has been repaid.
Principal prepayment is beneficial to the bondholders when the bond is
at a discount (you get par value instead of the discounted value), and
disadvantageous when the bond is at premium (again, you get par value
instead of the premium value). Of course, prepayment activity becomes
intensive when the prevailing rates are lower than those at the origination
of the loan, i.e., when bonds are at premium value. While prepayments
can happen for several social and economic reasons (even when bonds are
at a discount), the pure financial incentive occurs at the premium state,
thus this return component is mostly negative in decreasing yield
environments. When yields increase, paydown return can be slightly positive
as prepayments can happen for different socio-economic reasons other
than the financial incentive.
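The identity between Formulas (1) and (2) can be checked numerically. The function below is a minimal sketch of the three-way attribution (our own construction, with prices and accrued interest expressed per unit of face value so that par equals 1):

```python
def mbs_return_attribution(F0, F1, P0, P1, acc0, acc1):
    """Attribute monthly MBS total return to coupon, price and
    paydown components, following Formula (2).  F is the ratio of
    current to original outstanding face value; P and acc are per
    unit of face value."""
    mv0 = F0 * (P0 + acc0)                    # beginning market value
    coupon = F0 * acc1 / mv0                  # accrued interest on opening face
    price = F0 * (P1 - P0) / mv0              # pure price appreciation
    paydown = -(F1 - F0) * (1 - P1) / mv0     # principal repaid at par vs. P1
    # Total return per Formula (1), for cross-checking
    total = ((F1 * P1 - F0 * P0) + (F0 - F1) + F0 * acc1) / mv0
    return coupon, price, paydown, total

# A premium bond (P > 1) being prepaid: paydown return is negative
c, p, d, tr = mbs_return_attribution(F0=0.90, F1=0.88, P0=1.03,
                                     P1=1.04, acc0=0.005, acc1=0.005)
assert abs((c + p + d) - tr) < 1e-12          # components sum to the total
```

As the final assertion illustrates, the three components of Formula (2) sum exactly to the total return of Formula (1).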

In the following sections we review each of these three components,
always linking them to the seven-year swap rate as the single underlying
factor. The selection of maturity for the swap rate is a choice for the analyst.
We have observed that many market practitioners use the seven-year, or
more precisely, a blend of the five- and ten-year rates as a reference for the
mortgage rate. Also, the model fit is good if we select the seven-year maturity
bucket. The measurement period is from December 1989 to September 2008
with a monthly frequency, and the reported return figures are expressed on
an annualized basis.

12.3.1 Coupon return


Coupon return on a monthly horizon can be expressed in the following
form:

rc,t = ct−1 / 12   (3)

where ct is the estimated weighted average coupon (WAC) of the index. The
main question is how to estimate the WAC in a given scenario using only
the path of the seven-year swap rate. Intuitively one would consider using
some form of moving average of the past interest rates. A simple moving
average, however, would not be the best choice, since it would assume that
the ages of the underlying loans are distributed evenly. This is not a safe
assumption, since refinancing waves could significantly reduce or even
clear older mortgage loans. A better alternative is to use an exponentially
weighted moving average, in which case the weight of current observations
is higher than those belonging to older observations:

ct = λ ⋅ ct−1 + (1 − λ) ⋅ yt + εt   (4)


[Figure omitted: monthly coupon return (%) of the index versus the estimated coupon return, 1990–2008]

Figure 12.3 Coupon return estimation

Here, yt is the 7-year swap rate, and λ is an estimated parameter. The estima-
tion can be performed with a simple OLS procedure:

Σt ( rc,t^index − ct−1 / 12 )²  →  min   w.r.t. λ

Over the whole historical sample, the estimated parameter value is
λ̂ = 0.98. Using this parameter, the fitted coupon return history is shown
in Figure 12.3. Over the selected historical sample, the coupon return of the
MBS index had an average of 6.87% with a volatility of 0.37%, whereas the
fitted time series has an average of 6.81% with a volatility of 0.36%.
We note that using an unconditional λ parameter in order to assess the
WAC as an exponentially weighted moving average of historical rates is an
oversimplification. Certainly, in decreasing yield cycles refinancing activ-
ity becomes more intensive, whereas in increasing yield periods refinan-
cing becomes more modest and the ‘memory’ of WAC potentially becomes
longer. A weighting scheme that is sensitive to yield cycles and
refinanceability could be a refinement of the current model.
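The λ estimation above can be sketched as a one-parameter grid search over the criterion Σ(rc,t − ct−1/12)². The function below is a minimal illustration under our own assumptions (coupon returns and swap rates both in percent, WAC initialized at the first observed rate); the function name and grid are our choices, not the authors’:

```python
import numpy as np

def fit_lambda(coupon_returns, swap_rates, grid=None):
    """Fit the EWMA decay in  c_t = λ·c_{t−1} + (1−λ)·y_t  (Formula (4))
    by minimizing the squared coupon-return errors over a grid of λ.
    coupon_returns: monthly index coupon returns (%); swap_rates:
    seven-year swap rates (%, annual)."""
    if grid is None:
        grid = np.linspace(0.90, 0.999, 100)
    y = np.asarray(swap_rates, dtype=float)
    r = np.asarray(coupon_returns, dtype=float)
    best_lam, best_sse = None, np.inf
    for lam in grid:
        c = np.empty_like(y)
        c[0] = y[0]                     # initialize the WAC at the first rate
        for t in range(1, len(y)):
            c[t] = lam * c[t - 1] + (1 - lam) * y[t]
        sse = np.sum((r[1:] - c[:-1] / 12.0) ** 2)   # Formula (3) criterion
        if sse < best_sse:
            best_lam, best_sse = lam, sse
    return best_lam
```

On the actual index history, a procedure of this form is what would deliver an estimate such as the λ̂ = 0.98 reported above.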

12.3.2 Price return


For any type of bond, the first order linear estimation for price change is
given by its duration multiplied by the underlying yield change:

rp,t = − Dt ⋅ Δyt   (5)


[Figure omitted: monthly change in duration plotted against monthly logarithmic change in yield (%)]

Figure 12.4 Historical relationship between duration and yield (Jan. 1990–Sept.
2008)

If we analyze price return over monthly steps, the Δy yield change
expression can be expanded into the differences of yields with maturities one
month apart from each other in order to take rolldown into account.
The challenge, however, is that MBS duration is very volatile; historically
the index duration has ranged from half a year to five years. MBS index dur-
ation varies as a consequence of the ‘negative convexity’ property described
earlier. Figure 12.4 illustrates this direct relationship between the monthly
yield change and the monthly difference in duration. Note that the direc-
tion of the yield-duration relationship is just the opposite of the direction of
the relationship in the case of regular non-callable bonds.
We have constructed our model for MBS sector duration based on this
empirical relationship, taking negative convexity into account in the form
of the following regression model2:

Dt = β0 + β1 ⋅ Dt−1 + β2 ⋅ ln( yt / yt−1 ) + εt   (6)

Note that we have also built a mean-reversion property into the model by
adding an autoregressive term as well. The reason comes from the empirical
observation that MBS duration has been rangebound between about one
half and five years. While the historical fit would be almost equally good
without the AR(1) term, it may play an important role in a forward-looking
simulation by not letting durations go into unrealistic regions under some
yield scenarios in which yields follow a monotonic unidirectional pattern.


[Figure omitted: MBS index duration versus model-estimated duration, 1990–2008]

Figure 12.5 Duration estimation

[Figure omitted: monthly index price return (%) versus estimated price return, 1990–2008]

Figure 12.6 Price return estimation

Based on the historical sample of 1990–2008, our parameter estimates for
β0, β1 and β2 are 0.21, 0.94 and 6.27 respectively, with t-statistics of 3.94,
60.2 and 25.1. The regression R2 is estimated to be 0.96.
Using these parameters and the observed history of the seven-year swap
rate, we have produced an estimated time series for the MBS duration by
integrating the monthly duration change estimates from Formula (6), as
shown in Figure 12.5. Figure 12.6 compares the in-sample price return
estimation with the observed price returns, using the seven-year swap rate
history and our duration estimates. Over the selected historical sample,


the price return of the MBS index has an average of 0.66% with a volatil-
ity of 2.89%, while the fitted time series has an average of 0.56% with a
volatility of 3.22%. The correlation between the observed and fitted time
series is 0.93.
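Formulas (5) and (6) together define a small recursive simulation for any given swap rate path. The sketch below plugs in the point estimates reported above (0.21, 0.94, 6.27) purely for illustration; the duration band and the use of beginning-of-month duration in the price return are our own assumptions, not the authors’ implementation:

```python
import math

# Reported point estimates for Formula (6); illustrative only
B0, B1, B2 = 0.21, 0.94, 6.27

def simulate_duration_and_price_return(yields, d0):
    """Roll Formula (6) forward along a monthly path of seven-year
    swap rates (in %, strictly positive), then apply Formula (5).
    d0 is the starting duration in years; price returns are in %."""
    durations, price_returns = [d0], []
    for t in range(1, len(yields)):
        y_prev, y = yields[t - 1], yields[t]
        d = B0 + B1 * durations[-1] + B2 * math.log(y / y_prev)
        d = min(max(d, 0.5), 5.0)   # keep within the observed historical band
        durations.append(d)
        # Price return driven by the yield change, using the
        # beginning-of-month duration (Formula (5))
        price_returns.append(-durations[-2] * (y - y_prev))
    return durations, price_returns

# Falling yields shorten duration (negative convexity); a subsequent
# yield rise lengthens it again
durs, rets = simulate_duration_and_price_return([6.0, 5.5, 5.0, 5.2], d0=3.0)
assert durs[1] < durs[0] and durs[3] > durs[2]
```

The clamp mimics the empirical observation quoted above that index duration has been rangebound between roughly half a year and five years.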

12.3.3 Paydown return


The most challenging component of the total return model is the paydown
return. While from a pure financial standpoint this component reflects the
loss arising from having a sold option exercised by the option holder, the
valuation or estimation of this component cannot be purely limited to a
financial option valuation in the case of MBS. Some additional factors also
matter, including:

● The status of the housing market: in the case of house price depreci-
ation that characterizes the current situation of the market, many mort-
gage loans simply cannot be refinanced since the borrowers do not have
enough equity in their homes to qualify for a new loan;
● The overall cost of refinancing a loan, which has been increasing recently,
may slow down refinancing activity as well;
● Psychological factors and the media effect: refinancing waves do not
immediately follow yield drops but may speed up once the media pays
attention to the benefits of refinancing;
● The recent path of interest rates: borrowers who took advantage of a recent
decline in interest rates are less likely to refinance again immediately even
if rates fall further.

To keep our model simple, we link paydown return to the financial incen-
tive expressed as the difference between the WAC estimated by Formula
(4) and the current interest rate. Figure 12.7 illustrates the historical rela-
tionship between the paydown return and the degree of financial incen-
tive. When the prevailing yields are below the average coupon level, the
refinancing option can be considered to be in-the-money, and indeed, the
observed monthly paydown returns typically take negative values, repre-
sented by the black dots in the chart. On the other hand, when the option
is out-of-the-money, the paydown return takes slightly positive values, as shown
by the gray dots. Altogether, the shape of the paydown diagram resembles
the payout of a short option, but the dispersion around the regression lines
can be explained by factors like the ones that we have referred to in the bul-
let points above. For instance, prepayment occurs even when it would not
be optimal from a pure financial standpoint (see the gray dots), but because
of other socio-economic reasons. We note that the asymmetric nature of
the paydown return will also add some negative skewness to the return dis-
tribution of our model.

Mortgage-Backed Securities in a Strategic Framework 239

Figure 12.7 The conditional nature of refinancing (1990–2008)
[Figure: scatter of monthly paydown return (%) against refinanceability (WAC − current yld) (%); in-the-money observations shown as black dots, out-of-the-money observations as gray dots]

Our paydown return component is constructed as a two-state conditional
regression, driven by the difference between the current rate and the
estimated WAC:3

rPD,t = St ⋅ β0,1 + St ⋅ β1,1 ⋅ (ct − yt) + (1 − St) ⋅ β0,2 + (1 − St) ⋅ β1,2 ⋅ (ct − yt) + εt    (7)

where

ct = λ ⋅ ct−1 + (1 − λ) ⋅ yt + εt

and

St = 1 if ct ≥ yt,  St = 0 if ct < yt

Based on the historical sample of 1990–2008, our parameter estimates for
β0,1 and β1,1 are 0.0002 and −0.0880 respectively, with t-statistics of 4.8 and
−21.22. The R2 of the regression is estimated to be 0.78.
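As a concrete illustration, the estimation of Formula (7) can be sketched as a single OLS regression with state-dependent intercepts and slopes, with the WAC proxy built from the exponential smoothing recursion above. This is a minimal sketch of our own, not the authors' implementation; the function name and the smoothing weight are illustrative assumptions.

```python
import numpy as np

def estimate_paydown_model(y, paydown, lam=0.95):
    """Fit the two-state conditional regression of Formula (7) by OLS.

    y       -- monthly mortgage-rate proxy (e.g. seven-year swap rate), in %
    paydown -- observed monthly paydown returns of the index, in %
    lam     -- smoothing weight for the WAC proxy (illustrative value)
    """
    # WAC proxy: exponential moving average, c_t = lam*c_{t-1} + (1 - lam)*y_t
    c = np.empty_like(y)
    c[0] = y[0]
    for t in range(1, len(y)):
        c[t] = lam * c[t - 1] + (1 - lam) * y[t]

    incentive = c - y                    # refinanceability (WAC - current yield)
    S = (incentive >= 0).astype(float)   # state: 1 when the option is in-the-money

    # One design matrix recovers both states' intercepts and slopes in one OLS fit
    X = np.column_stack([S, S * incentive, 1 - S, (1 - S) * incentive])
    beta, *_ = np.linalg.lstsq(X, paydown, rcond=None)
    return beta, X @ beta                # beta = [b01, b11, b02, b12], fitted values
```

With a negative in-the-money slope, the fitted paydown return falls as the refinancing incentive deepens, matching the short-option shape of Figure 12.7.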
Figure 12.8 compares the estimated and the observed paydown return his-
tory using our regression estimates. Over the selected historical sample, the
paydown return of the MBS index has an average of −0.62% with a volatility of
0.25%, while the fitted time series has an average of −0.61% with a volatility of
0.22%. The correlation between the observed and fitted time series is 0.86.


Figure 12.8 Paydown return estimation
[Figure: index paydown return vs. paydown return estimation, monthly return (%), January 1990–2008]

It is important to note that the model would have largely underestimated
the paydown return in 2007–2008. The reason, in fact, is that house price
declines, tightened credit standards, and increased fees associated with refi-
nancing have all made refinancing slower compared to recent historical
standards. From an asset allocation perspective, the implication is that our
model will likely be somewhat conservative in the near future, given that
the estimated paydown return component is skewed to the negative side.
A possible refinement to the model going forward would be to separately
estimate the two-state regression parameters under normal and under dis-
tressed housing market conditions.
To estimate the parameters of Formula (7) we indeed need a reasonably
long historical sample in order to have both refinanceable and non-refi-
nanceable periods adequately represented in the estimation. Figure 12.9
shows the historical parameter estimates based on ten-year rolling windows.
The slope parameters appear to be fairly stable over time, but the constants
exhibit more fluctuations.
Finally, by summing the coupon return, price return and paydown
return, we get the total return estimate for the MBS universe. Figures 12.10
and 12.11 compare the estimated returns driven by the seven-year swap
rate with the historically observed MBS index returns. The chart based on
monthly frequency shows that the model captures the month-to-month
variance, while the chart presenting the 12-month rolling return illustrates
that the model also tracks the level of return well. In Table 12.3 we pro-
vide some comparative statistics based on the in-sample and out-of-sample
return estimation. In the case of the out-of-sample test, the model param-
eters were estimated based on the previous ten-year history preceding each
month. In both cases, we used the observed seven-year swap rates.
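The walk-forward test described above, refitting on the trailing ten-year history and then estimating the next month, can be sketched generically. The helper below is our illustrative skeleton; `fit` and `predict` stand in for whichever estimation routine is being evaluated, and the 120-month window corresponds to the ten-year history.

```python
import numpy as np

def rolling_out_of_sample(X, r, fit, predict, window=120):
    """Walk-forward evaluation: refit on the trailing `window` months and
    produce a one-month-ahead estimate for each subsequent month.

    X, r    -- explanatory data and observed returns (aligned monthly arrays)
    fit     -- callable mapping (X_window, r_window) -> parameters
    predict -- callable mapping (parameters, x_next) -> estimated return
    """
    estimates = []
    for t in range(window, len(r)):
        params = fit(X[t - window:t], r[t - window:t])
        estimates.append(predict(params, X[t]))
    return np.array(estimates)
```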


Figure 12.9 Paydown parameter estimate based on ten-year rolling samples
[Figure: rolling estimates of β0,1 and β0,2 (left axis, %) and β1,1 and β1,2 (right axis), Jan-00 to Jul-08]

Figure 12.10 Monthly total return estimation
[Figure: index return vs. model estimation, monthly return (%), January 1990–2008]

12.4 Implications of the market developments in 2007–2008

A natural use for our MBS asset class attribution model is to link it to some
yield curve and spread scenarios – either discrete or stochastic – to assess the
MBS sector’s performance under different market environments. By running
a large number of stochastic yield curve and spread scenarios, it becomes
possible to compare MBS with other fixed income sectors and asset classes
from their risk and return point of view, and use the simulation results as


Figure 12.11 12-month cumulative total return estimation
[Figure: index return vs. model estimation, 12-month rolling return (%), January 1990–2008]

Table 12.3 In-sample and out-of-sample estimations

                 In-sample: 1990–2008          Out-of-sample: 2000–2008
                 Index Total   Est. Total      Index Total   Est. Total
                 Return %      Return %        Return %      Return %

Average              6.90          6.75            6.08          6.00
Volatility           2.97          3.23            2.66          3.05
St. Error %                0.93                          0.76
Correlation                0.97                          0.97

input to an asset allocation optimizer. We would like to emphasize that our
MBS model is a risk management tool. In practice, we have always paid spe-
cial attention to the downside risk assessment provided by the model. In the
asset allocation exercise, we have chosen the weight of the MBS sector such
that the portfolio risk does not exceed any loss or drawdown constraint.
The model presented in this chapter is supposed to give a reasonable total
return estimate for the MBS sector; however, in itself it is not a return predict-
ing model. The total return forecasts will be driven by the setup of the yield
curve and spread scenarios. For the government yield curve, predictions or
base case scenarios can be, among many others, defined as follows:

● Future yield curves are expected to be the same as today’s yield curves.
● Future yield curves will evolve over time as predicted by today’s forward
rates.


● Future yield curves will revert back to some long-term means.
● Future yield curves will move to the levels where analysts’ surveys expect
them to be.
● Future yield curves are driven by some economic expectations and/or
models linked to GDP, CPI and other factors (for a detailed discussion of
a similar framework, refer to Bernadell et al. 2005).

All these and many other choices may be viable when one prepares a strategic
asset allocation recommendation. Regarding spreads, similar approaches
may be applied, for example:

● Spreads in the future are expected to fluctuate around today’s levels.
● Spreads revert back to some longer-term average.
● Spreads are determined by some underlying market factors, like yield curve
slope, volatility, credit conditions, risk aversion, housing price index, etc.
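As an illustration of the mean-reversion choice in the list above, spread scenarios can be generated with a simple discrete-time AR(1) (Ornstein-Uhlenbeck style) process. The sketch and its parameter values are ours and purely illustrative; they are not calibrated to the MBS OAS history discussed in this chapter.

```python
import numpy as np

def simulate_spread_paths(s0, s_bar, kappa, sigma, months, n_paths, seed=0):
    """Simulate mean-reverting spread scenarios (in basis points).

    Monthly dynamics: s_{t+1} = s_t + kappa*(s_bar - s_t) + sigma*eps,
    eps ~ N(0, 1), so spreads drift towards the long-run mean s_bar at
    speed kappa with monthly volatility sigma.
    """
    rng = np.random.default_rng(seed)
    paths = np.empty((n_paths, months + 1))
    paths[:, 0] = s0
    for t in range(months):
        eps = rng.standard_normal(n_paths)
        paths[:, t + 1] = paths[:, t] + kappa * (s_bar - paths[:, t]) + sigma * eps
    return paths
```

For example, `simulate_spread_paths(120.0, 60.0, 0.05, 8.0, 60, 1000)` would generate 1,000 five-year monthly paths starting at 120 bps and drifting towards a 60 bps long-run mean.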

Needless to say, there are many assumptions an analyst has to make. In
a stochastic simulation framework, the analyst also has to decide on how
many variables to simulate jointly. In any case, these comments are mainly
to address the fact that there are a lot of different ways to apply our model.
Instead of illustrating a large number of Monte Carlo simulation outputs,
we chose to focus on the past 21 months: how would our model have per-
formed had we known the yield curve and spread movements in advance?
Certainly, the point is this: had an analyst come up with a scenario covering
the past one and a half year’s market movements, how would the attribution
model have performed? (We acknowledge that defining the scenario accur-
ately is harder than attributing an asset class performance to it, particularly
over the recent past!) The past year is widely characterized as the ‘sub-prime
crisis’, ‘mortgage-crisis’ or ‘housing crisis’, and so we believe that it presents an
excellent test for our model. There is already a rich literature available about
this crisis; among others Greenlaw et al. (2008) may be a good reference. We
will not get into any details, but simply use market data observations.
To describe the dynamics of the government yield curve, we use the so-
called Nelson-Siegel model, which is a fairly common choice in strategic asset
allocation applications. For more details about the model and its use, refer to
Nelson and Siegel (1987) and Diebold and Li (2003). According to the model,
the government yield with a maturity m is expressed in the following form:

y(m) = β0 + (β1 + β2) ⋅ (1 − exp(−m/τ)) / (m/τ) − β2 ⋅ exp(−m/τ)    (8)

where β0, β1 and β2 are the coefficients belonging to three linear fac-
tors, usually interpreted as level, slope and curvature, respectively. These
linear factors are easy to estimate. The last parameter, τ, is the so-called


exponential decay parameter, and is more difficult to estimate – in many
cases, especially during the simulation, it is simply left constant.
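As a sketch, Formula (8) and the linear estimation of the three factors for a fixed τ might look as follows. The function names are ours; the key point, noted in the text, is that with τ held constant the factor loadings are fixed, so the βs follow from a single OLS regression.

```python
import numpy as np

def nelson_siegel(m, b0, b1, b2, tau):
    """Formula (8): yield for maturity m (years) given level (b0), slope (b1),
    curvature (b2) and a fixed decay parameter tau."""
    x = m / tau
    return b0 + (b1 + b2) * (1 - np.exp(-x)) / x - b2 * np.exp(-x)

def fit_linear_factors(maturities, yields, tau):
    """With tau held constant, the three factors enter linearly and can be
    estimated by ordinary least squares."""
    x = maturities / tau
    loading1 = (1 - np.exp(-x)) / x   # multiplies (b1 + b2) in Formula (8)
    loading2 = -np.exp(-x)            # multiplies b2 in the last term
    # Rearranged: y = b0 + b1*loading1 + b2*(loading1 + loading2)
    X = np.column_stack([np.ones_like(x), loading1, loading1 + loading2])
    beta, *_ = np.linalg.lstsq(X, yields, rcond=None)
    return beta                       # [b0, b1, b2]
```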
The reason for working with a yield curve model rather than using dis-
crete yields is greater tractability. If one would like to simulate a large number
of bond indices, a large number of interest rates are required to appropri-
ately capture their performance according to their key-rate exposures. It is
much easier to handle this in a parametric form, like the one given by the

Nelson-Siegel model and simulate three factors rather than a large number
of interest rates. Certainly, several other yield curve models can be used as
an alternative. In the present analysis, we will use the seven-year and the
seven-year-minus-one-month maturities for MBS in order to take rolldown
into account. For spreads, we simply use the swap spread observation from
the past 21 months. While the seven-year swap spread and the MBS index
OAS spread historically have moved very closely, during the crisis period
the MBS index OAS spread widened by about 60 basis points more than the
seven-year swap spread. We have to stress this fact in order to highlight the
unusually extreme divergence between them over the period, and to make
the supposition that this will have an impact on the results of the model,
which is driven by the swap rate as a proxy for the mortgage rate.
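The rolldown adjustment just described, comparing the seven-year point with the seven-year-minus-one-month point, can be approximated with a first-order duration rule. This sketch is ours: the duration value is an assumed placeholder and `curve` is any function mapping maturity to yield (for instance, a fitted Nelson-Siegel curve).

```python
def rolldown_return(curve, maturity=7.0, dt=1.0 / 12.0, duration=4.5):
    """Approximate the one-month rolldown component of return.

    As the bond ages by dt years, its yield moves to the (maturity - dt)
    point of the curve; the yield change is converted to a price return
    (in %) with a linear duration approximation.
    """
    dy = curve(maturity - dt) - curve(maturity)  # yield change from ageing
    return -duration * dy                        # positive on an upward-sloping curve
```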
While in our estimations we use data series with monthly frequency, in
Figure 12.12 for illustrative purposes we also show the OAS spread history
of the MBS index on a daily frequency. We highlight some of the key events
that have driven the spread over the recent past. Note that this is just an illus-
tration; it would be impossible to summarize all the market developments in
one chart. But the point is that a large number of factors and events have
driven the mortgage prices and spreads, including the poor conditions in
the housing market, the spill-over effects across different market sectors, the
worsening financial conditions of the investors, and the reactions from the
policymakers that seem to change the rules of the marketplace. When
setting forward-looking scenarios, the analyst may consider how the housing

Figure 12.12 Chronology of MBS spread history
[Figure: daily MBS Index OAS (bps), Jan-07 to Sep-08, annotated with key events: the breakout of the sub-prime crisis; concerns about bond insurance companies and credit ratings; the shutdown of the auction rate bond market; the Fed’s steps (opening the discount window to dealers, the TSLF, the buy-out of Bear Stearns); the emergency Fed rate cut of 75 bps; concerns about the deepening problems of the GSEs’ finances; the conservatorship of FNMA and FHLMC; Lehman’s bankruptcy and the deepening global financial crisis; and a wide range of policy responses (TARP, banking regulation, bank debt guarantees, direct equity investments in banks, etc.)]


Figure 12.13 Out-of-sample MBS monthly return fit (Dec. 2006–Sept. 2008)
[Figure: two panels covering Jan 2007–Sep 2008. Top: monthly total return estimation vs. actual index data (in %). Bottom: monthly excess return estimation vs. actual index data (in %)*]
* The actual excess return is key-rate duration adjusted, provided by Lehman Brothers. The estimated excess return is simply average duration adjusted.

market, the broad market and the general economic environment influence
mortgage spread, and furthermore, what can be considered as the base case,
the best case and, more importantly, the worst case scenario. Certainly, pre-
dicting spread movements is nearly as difficult as predicting the
behaviour of the specific sector relative to other sectors. This is, of course,
a very challenging task. In fact, the volatility of the spread movements has
become much higher than over the preceding years; the annualized volatil-
ity of the monthly changes in the OAS of the MBS index is estimated to be 33
basis points between January 1991 and June 2007, whereas the figure for the
past 14 months is estimated to be 47 basis points.
In Figure 12.13, we show the total return and duration-adjusted (although
not key-rate duration adjusted) excess return estimates using the seven-year


Table 12.4 Out-of-sample total return estimation in % (Dec. 2006–Sept. 2008**)

                          Coupon    Price     Paydown   Total     Excess
                          Return    Return    Return    Return    Return

Index data                  9.90      0.97      0.04     11.01     −1.95
Estimation                  9.67      1.22     −0.23     10.74     −1.83
Under-/Overestimation      −0.24      0.25     −0.27     −0.26      0.12

** Not annualized return; expressed over 21 months.

Nelson-Siegel fit for the government yield plus the seven-year swap spread.
The MBS attribution model’s parameters are fitted based on the history of
January 1990–December 2006. Thus, we refer to the test as out-of-sample
from the perspective of the attribution model.
In Table 12.4 we provide a more detailed comparison, showing each of
the total return components, as well as showing the excess return estimate.
Although excess return estimation is not the focus of this chapter, we con-
sider it as an additional way of testing the robustness of our model. The
excess return of MBS was simply estimated as the difference between the
estimated total return and the return of a hypothetical Treasury note with a
duration equal to the estimated MBS duration at the end of each month.
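Expressed as code, the excess return calculation described above reduces to a first-order approximation. This is our sketch with hypothetical argument names; the hypothetical Treasury return is taken as one month of carry minus duration times the monthly yield change, ignoring convexity and rolldown.

```python
def excess_return(mbs_total, tsy_yield, tsy_yield_change, duration):
    """Monthly excess return estimate, in %.

    mbs_total        -- estimated MBS total return for the month, in %
    tsy_yield        -- annual yield of the duration-matched Treasury, in %
    tsy_yield_change -- monthly change in that yield, in percentage points
    duration         -- MBS duration estimated at the end of the prior month
    """
    # First-order Treasury return: one month of carry minus duration * yield change
    tsy_return = tsy_yield / 12.0 - duration * tsy_yield_change
    return mbs_total - tsy_return
```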
From Figure 12.13 and Table 12.4, we can draw the following conclusions,
with respect to the recent past period:

● Our model appears to have captured the month-to-month total return
variability reasonably well.
● Coupon return has been underestimated, but given the long-term smooth-
ing nature of the moving average model, we could not simply attribute
this error to the past 21 months, although higher MBS spreads may have
resulted in higher carry return.
● We overestimated the price return, largely because swap spreads widened
less than MBS spreads.
● In absolute terms, we have overestimated the magnitude of the paydown
return component. In reality, prepayment was slower than what the model
would have implied: partly because mortgage rates were less attractive
than what we could have guessed based on swap rates (again, MBS spreads
widened more than swap spreads), and also because of the deterioration
in the housing market and credit conditions – factors that the model does
not explicitly take into account.
● We have overestimated the excess return, which again is simply due to the
fact that MBS spreads were wider than swap spreads.

Even through the recent market turmoil, we can claim that the attribu-
tion model would have given a reasonably fair total return estimate had


we known the yield and the spread movements in advance. Going forward,
however, one observation is that analysts will have to be very critical about
their spread scenarios, especially as to whether the relationship between
swap spreads and MBS spreads will be stable or not.

12.5 Conclusions

We have presented a return attribution model that can be used to estimate
the performance of the agency guaranteed MBS universe under given yield
curve and spread scenarios. This model can be considered as a framework
to analyze the MBS sector separately from governments and other fixed
income sectors. Driven by yield curve and spread scenarios, the model can
generate input for an asset allocation optimization. The historical fit of the
model is quite good, even though we only gave the model a single input,
the seven-year swap rate history. Going forward, analysts should pay special
attention to the factors that drive the spread that they add to the govern-
ment yield. The expected paths of spreads need to be determined with care-
ful consideration given to the state of the housing and the mortgage market.
In addition, the model can easily be extended into a multifactor model,
with yield curve slope or volatility included as well.

Acknowledgements

The authors would like to thank Larissa Van Geijlswijk, Krishnan


Chandrasekhar, Maria Skuratovskaya, Gregory Reiter and many other col-
leagues at the World Bank Treasury for their helpful comments. The find-
ings, interpretations and conclusions expressed herein are those of the
authors and do not necessarily represent the views of the World Bank.

Notes
Myles Brennan World Bank Treasury, Director of Investment Management
Department. 1818 H Street, NW, Washington DC 20433, USA. E-mail:
[email protected]
Adam Kobor World Bank Treasury, Principal portfolio manager at the Investment
Management Department. 1818 H Street, NW, Washington DC 20433, USA. E-mail:
[email protected]

1. Following the acquisition of Lehman Brothers by Barclays, Lehman Brothers indi-
ces became Barclays Capital indices in November 2008.
2. For the unit root test, we applied the augmented Dickey-Fuller (ADF) test. For the
time series of the duration level and the yield changes we got test values of −3.53
and −13.8 respectively, both outside the 1% critical values. Thus, we can consider
the underlying time series stationary.
3. The ADF test values for the time series of the paydown return and refinanceability
are −3.28 and −3.15 respectively, both outside the 5% critical values.


Bibliography
Arora, A., Heike, D.K. and Mattu, R.K. (2000) ‘Risk and Return in the Mortgage
Market: Review and Outlook’, The Journal of Fixed Income, June, 5–18.
Bakker, A.F.P. and van Herpt, I.R.Y. (Eds) (2007) Central Bank Reserve Management,
Edward Elgar.
Balachandran, B., Dynkin, L. and Hyman, J. (2008) ‘Comparing the Global Aggregate
Index to a Blend of Global Treasuries and MBS’, in Lehman Brothers Global Relative
Value, 21 April, 20–28.
Bernadell, C., Coche, J. and Nyholm, K. (2005) ‘Yield Curve Prediction for the
Strategic Investor’, ECB Working Paper Series No. 472, April.
Campbell, J.Y. and Viceira, L.M. (2002) Strategic Asset Allocation – Portfolio Choice for
Long-Term Investors, Oxford University Press.
Diebold, F.X. and Li, C. (2003) ‘Forecasting the Term Structure of Government Bond
Yields’, NBER Working Paper 10048.
Dynkin, L., Mann, J. and Phelps, B. (2005) ‘Managing Against the Lehman MBS
Index: Evaluating Measures of Duration’, Lehman Brothers Quantitative Portfolio
Strategy, 11 April.
Fabozzi, F.J. (Ed.) (2006) The Handbook of Mortgage-Backed Securities, Sixth Edition,
McGraw-Hill.
Fabozzi, F.J., Bhattacharya, A.K. and Berliner, W.S. (2007) Mortgage-Backed Securities –
Products, Structuring and Analytical Techniques, John Wiley & Sons.
Gabaix, X., Krishnamurthy, A. and Vigneron, O. (2007) ‘Limits of Arbitrage: Theory
and Evidence from the Mortgage-Backed Securities Market’, The Journal of Finance,
Vol. 62, No. 2, April, 557–595.
Greenlaw, D., Hatzius, J., Kashyap, A.K. and Shin, H.S. (2008) ‘Leveraged Losses:
Lessons from the Mortgage Market Meltdown’, US Monetary Policy Forum
Conference Draft.
Mann, J.I. and Phelps, B.D. (2003) ‘Managing Against the Lehman Brothers MBS
Index: Prices and Returns’, Lehman Brothers Fixed Income Research, 20 November.
Nelson, C.R. and Siegel, A.F. (1987) ‘Parsimonious Modeling of Yield Curves’, The
Journal of Business, Vol. 60, No. 4, October, 473–489.
UBS Mortgage Strategist (2003) ‘Mortgage as an Asset Class’, UBS, 16 September,
9–21.
UBS Mortgage Strategist (2008) ‘Central Bank Demand for MBS: Reduced Risk
Appetite=Temporary’, UBS, 17 June, 21–27.

13
Quantitative Portfolio Strategy –
Including US MBS in Global
Treasury Portfolios

Lev Dynkin, Jay Hyman and Bruce Phelps

13.1 Introduction

For many years, central bank investment portfolios were traditionally lim-
ited to the most conservative instruments, and consisted largely, or even
entirely, of short-term Treasury debt. The single question that remained was
the setting of the target duration. Over the course of the last decade, there
have been profound changes at official institutions around the world that
have led to relaxations of these constraints in many cases. The emergence of
the European Central Bank led to a re-evaluation of investment objectives
for national central banks within the Eurozone, and the growing role of sov-
ereign wealth funds as managers of national wealth has led to the inclusion
of more aggressive assets and strategies within these portfolios.
Many official institutions, while unable to participate in credit markets,
are interested in adding some spread exposure to their global fixed income
portfolios. One spread product that has appeal to this group of investors
is US fixed-rate, agency mortgage-backed pass-through securities (MBS)1.
While the monthly principal and interest payment amounts are variable
due to the uncertain timing of mortgage prepayments, these payments are
guaranteed by the US mortgage agencies. Despite the market turmoil in
the sub-prime and private-label prime mortgage markets, MBS have con-
tinued to trade with relatively low concern for credit risk. MBS essentially
earn a promised spread due to their prepayment characteristics that prod-
uce uncertainty regarding the timing of cashflows, negative convexity and
volatility sensitivity. Agency MBS spreads are tight relative to other spread
asset classes and have remained a relatively safe harbor through some very
volatile time periods.
One of the prime considerations for official institutions when considering
asset classes for investment is market depth and liquidity. From this point of
view, the MBS market is unparalleled. The market value of the MBS market



as measured by our Global Aggregate index is larger than that of the Treasury
market in any single country in the world. Furthermore, the agency mort-
gage market is relatively homogeneous with only a few systematic risk fac-
tors that drive returns and little idiosyncratic risk (at least compared to the
credit markets). The relative homogeneity of the MBS market offers investors
a dual advantage – it both removes headline risk and simplifies the portfolio
management process. The idea of an investment program that adds MBS to a

global Treasury portfolio may therefore prove attractive to institutions that
require highly liquid portfolios with limited issuer-specific risk.
Shifting part of a Treasury portfolio to MBS can also serve to diversify the
set of risk exposures. A portfolio that is invested entirely in global Treasuries
essentially takes 100% of its market risk in the form of exposures to inter-
est rates. As correlations among global rates have increased, diversification
among the rates of different countries does not always provide the risk
reduction that one might hope for. By adding some MBS exposure to the
mix, the risk profile is diversified a bit more by decreasing the interest rate
exposure and replacing it with exposures to spread and volatility. In Figure
13.1, we investigate a simple blend of the G7 Treasury Index with the USD
MBS Index.2 The realized volatility of the blend over various time periods is
shown as a function of the MBS allocation. Over the long run, the realized
volatility is minimized by placing about half the portfolio into MBS. While
the allocation that would have minimized volatility is different for different
time periods, it has been true in every single period that having some MBS

Figure 13.1 Total return volatility of a mix of US MBS and G7 Treasuries during dif-
ferent time periods
[Figure: total return volatility (%/month) against allocation to US MBS (%), shown for 1988–1993, 1993–1998, 1998–2003, 2003–2008 and the entire period]


in the portfolio results in lower realized volatility than that of a pure global
Treasury portfolio.
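The exercise behind Figure 13.1 can be reproduced with textbook two-asset portfolio arithmetic. The sketch below is ours, with illustrative inputs rather than the realized figures behind the chart; the minimum-variance weight has a closed form when only two assets are involved.

```python
import numpy as np

def blend_volatility(w, sig_mbs, sig_tsy, rho):
    """Volatility of a portfolio with weight w in MBS and (1 - w) in Treasuries."""
    var = (w * sig_mbs) ** 2 + ((1 - w) * sig_tsy) ** 2 \
          + 2 * w * (1 - w) * rho * sig_mbs * sig_tsy
    return np.sqrt(var)

def min_vol_weight(sig_mbs, sig_tsy, rho):
    """Closed-form minimum-variance MBS weight for the two-asset blend."""
    cov = rho * sig_mbs * sig_tsy
    return (sig_tsy ** 2 - cov) / (sig_mbs ** 2 + sig_tsy ** 2 - 2 * cov)
```

With equal volatilities and positive but imperfect correlation, the minimum sits at a 50% MBS allocation, consistent with the long-run pattern described for Figure 13.1.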
Investors new to the MBS market will encounter certain practical impedi-
ments to participation; the most problematic of these is typically the lack of
systems and procedures that can smoothly handle the monthly cashflows
and paydowns for mortgage pools. Fortunately, while managing for outper-
formance in the MBS market requires a fair amount of expertise, the relative

homogeneity of this market makes index replication possible even for rela-
tive newcomers. We have developed a program in which we help investors
synthetically replicate the returns of the MBS index with a relatively small
number of TBA positions, without ever taking physical delivery of specific
pools. The methodology that we have used for this procedure, and its real-
ized performance over the past seven years, are detailed in the next section
of this chapter.
Blends of global Treasuries and MBS can be suitable for investors with
different types of risk-return profiles, and can thus be compared either to
a pure global Treasury benchmark or to a much broader one. For investors
who have traditionally limited themselves to global Treasuries, the incorp-
oration of some MBS into the portfolio may be a way to diversify the risk
profile while retaining liquidity. For others, our Global Aggregate Index
has become widely accepted as a representation of the opportunity set for
investments in global investment-grade fixed income. As such, it represents
a reference point in risk-return space with which many investors are famil-
iar and comfortable. In the second half of this chapter, we investigate the
extent to which investors who wish to avoid credit can achieve a return
profile broadly similar to that of the Global Aggregate with a combination
of global Treasuries and MBS. We then conclude with some analysis of how
the performance of such strategies has varied over time, relative to both the
Global Treasury and Global Aggregate indices.

13.2 Replicating the performance of the MBS Index using TBAs

To some investors, the US mortgage market is enigmatic and intimidating
because of its arcane terminology and highly variable cash flows. A port-
folio of mortgage pools carries some substantial overhead in terms of keep-
ing track of monthly paydowns and adjusting accordingly. In addition,
pool-level performance can be highly idiosyncratic; one particular agency-
program-coupon (e.g. FNMA, 30-year, 6%) pool may prepay at a very different rate than another pool with the same agency-program-coupon (perhaps
relating to the geographical distribution or originator of the underlying
mortgages in each pool). The terminology used to discuss and analyze this
market can also be formidable; analysts evaluate relative value using esoteric
terms such as ‘burnout’, ‘refi elbow’, WALA and WAC that are unique to

252 Lev Dynkin et al.

the MBS market. However, while achieving outperformance in this market
indeed requires considerable knowledge and experience, the MBS Index is
surprisingly easy to track.
In earlier research3, we explore two approaches to replication of the MBS
index. In one approach, we form a portfolio of mortgage pools. To min-
imize pool-specific risk and maximize liquidity, we assume that only large
pools (i.e. pools containing a very large number of mortgages) are purchased
into the portfolio. As it is often difficult to source seasoned mortgages, we
assume that only recently issued pools can be added to the portfolio. Over
time, though, the number of pools in the portfolio continues to grow, and
the portfolio can be managed so that it becomes more similar to the index
over time, with a decreasing tracking error. For this sort of a program to be
successful, however, a back-office that can handle the processing of monthly
pool payments is required, as well as a commitment to maintaining a rela-
tively stable allocation to MBS over the long term.
The second approach to MBS replication, which is preferred by a much
wider range of investors, involves the use of TBA (‘to-be-announced’ forward)
contracts. This standardized market for the forward purchase/sale of MBS with
stated agency-program-coupon allows investors to participate in the return of
MBS without ever taking delivery of actual pools. For example, a TBA purchase
of FNMA 30-year 6.0% mortgages for November 2008 delivery would specify
a purchase price and settlement date, but leave the seller with the ability to
deliver many pools that fit the specified characteristics. However, instead of
taking delivery, the purchaser can later choose to ‘roll’ this position forward,
selling the TBA position for November settlement and simultaneously pur-
chasing a corresponding position for December settlement. While there is
some cash-forward basis risk in this market, it is relatively small and in some
instances provides an additional opportunity for alpha generation4.
Our MBS Index replication methodology uses Barclays Capital’s Global
Multi-Factor Risk Model (and optimizer) to select a set of TBA contracts to
track the MBS Index. We assume cash is invested at one-month daily LIBOR.
At the beginning of each month we identify active TBA markets and use the
risk model to construct a portfolio of TBA contracts to minimize expected
monthly tracking error versus the MBS Index, while limiting the number
of TBA positions in the proxy. At the end of the month, the set of TBA con-
tracts is rolled to the next month using index closing marks and the port-
folio’s performance (including cash) is calculated and compared with that of
the index. Any cash accrual is then reinvested in the current coupon FNMA
TBA, and the entire TBA portfolio is rebalanced again against the index for
the following month.
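The monthly cycle can be sketched in miniature. Everything below is illustrative: two made-up 'factor' exposures stand in for the risk model's many factors, a quadratic mismatch for its TEV forecast, and a coarse grid search for the actual optimizer:

```python
from itertools import product

# Stylized factor exposures (OAD, spread duration) for three active TBA
# contracts and for the MBS Index. All numbers are made up for illustration.
TBAS = {
    "FNMA 30y 5.5": (4.0, 2.9),
    "FNMA 30y 6.0": (3.1, 2.4),
    "FNMA 15y 5.0": (3.3, 1.8),
}
INDEX = (3.6, 2.4)

def mismatch(weights):
    """Quadratic factor mismatch vs. the index -- a stand-in for the risk
    model's expected tracking error variance."""
    exposures = list(TBAS.values())
    port = [sum(w * e[k] for w, e in zip(weights, exposures))
            for k in range(len(INDEX))]
    return sum((p - b) ** 2 for p, b in zip(port, INDEX))

def min_tev_weights(step=0.05):
    """Search the weight simplex on a coarse grid for the best TBA blend."""
    n = round(1 / step)
    best_w, best_pen = None, float("inf")
    for grid in product(range(n + 1), repeat=len(TBAS) - 1):
        if sum(grid) > n:
            continue
        w = [g * step for g in grid] + [(n - sum(grid)) * step]
        pen = mismatch(w)
        if pen < best_pen:
            best_w, best_pen = w, pen
    return best_w, best_pen

weights, penalty = min_tev_weights()
```

On this toy setup the search settles on a blend of the three contracts rather than any single one; the production version instead minimizes the full risk-model TEV subject to a cap on the number of TBA positions.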
The TBA replication strategy has been in active use since September 2001,
and we now have seven full years of live performance for the strategy. In
this chapter, we review the strategy’s performance at tracking the Fixed-rate
MBS Index over this period.

Quantitative Portfolio Strategy 253

13.3 TBA proxy performance record: September 2001–September 2008

Over the 85-month period from September 2001 through September 2008,
the TBA proxy portfolio experienced a realized monthly tracking error vola-
tility of 4.7 bp with a monthly mean excess return over the MBS Index
of 0.2 bp. As summarized in Table 13.1, the performance statistics for the
TBA proxy strategy over this period are nearly identical to those of the MBS
Index.

Table 13.1 Performance comparison of TBA proxy and MBS Fixed-Rate Index, Sept. 2001–Sept. 2008

            TBA proxy           MBS Index           Tracking
            total return (bp)   total return (bp)   error (bp)
Average          42.7                42.5               0.2
Stdev            78.3                77.2               4.7

Source: Barclays Capital

[Figure: histogram of monthly tracking errors; x-axis: tracking error (bp) in bins from < −7 to > 7; left axis: frequency; right axis: cumulative frequency rising from 4% to 100%]

Figure 13.2 Histogram of realized tracking errors of TBA proxy portfolio vs. US MBS
Fixed-Rate Index, Sep. 2001–Sep. 2008
Source: Barclays Capital


Generally, the performance of the TBA proxy is very close to that of the
MBS Index. The maximum monthly absolute return difference was a posi-
tive 15 bp in September 2008. The TBA proxy portfolio has a tendency to
outperform the MBS Index when the roll advantage is strong and tends to
underperform when the ‘seasoned’ portion (i.e. the portion that is priced
at non-TBA prices) of the index outperforms the ‘TBA’ portion (i.e. the por-
tion that tracks the TBA deliverable). The relative performance of the TBA
proxy is sensitive to relative seasoned-TBA performance because the proxy
is 100% TBA, whereas only 30% of the index is priced at TBA. Fortunately,
the relative spread performance of seasoned versus TBA mortgages is not
very volatile.
Figure 13.2 shows a histogram of the monthly realized tracking errors. As
shown, the TBA proxy portfolio produced a tracking error within ±4.0 bp
approximately 67% of the time, and within ±7.0 bp approximately 92% of
the time. Since the MBS Index comprises just 15% of the Global Aggregate,
the magnitude of these tracking errors is small in the context of an overall
portfolio.
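The statistics quoted in this section are simple functions of the two monthly return series. A sketch of the computation, on made-up numbers rather than the actual 85-month record (the population standard deviation is shown; the published figures may use the sample estimator):

```python
from statistics import mean, pstdev

def te_summary(proxy_returns_bp, index_returns_bp):
    """Monthly tracking errors of a proxy vs. its index, with the kind of
    summary statistics shown in Table 13.1 and Figure 13.2."""
    te = [p - i for p, i in zip(proxy_returns_bp, index_returns_bp)]
    return {
        "mean_te_bp": mean(te),
        "tev_bp": pstdev(te),  # realized tracking error volatility
        "share_within_4bp": sum(abs(x) <= 4 for x in te) / len(te),
        "share_within_7bp": sum(abs(x) <= 7 for x in te) / len(te),
    }

# Illustrative monthly total returns (bp), not the actual record.
proxy = [42.9, 40.1, 50.2, -12.0, 30.5]
index = [42.5, 41.0, 47.9, -11.2, 30.0]
stats = te_summary(proxy, index)
```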

13.4 Normalized tracking error performance based on risk model

The TBA proxy portfolio is constructed using our risk model and optimizer.
The performance record of the TBA proxy portfolio is an excellent live test
case of how well the risk model estimates a portfolio’s tracking error volatil-
ity (TEV) versus its benchmark. The difficulty is that while the risk model
produces an ex-ante TEV value, we do not observe the portfolio’s ex-post TEV.
Instead, all we observe is a realization of the portfolio’s tracking error (TE)
versus its benchmark.
We can gauge the success of the risk model by comparing, over time, each
month’s realized TE to the month’s ex-ante TEV. To do so, we ‘standardize’
each month’s realized TE by dividing it by the risk model’s TEV estimate for
the proxy as of the beginning of the month:

StdTEi = TEi/TEVi

If the risk model correctly estimates TEV, then the standardized TEs should
have a time series volatility equal to 1.0. However, the volatility of StdTEi
over the past seven years is 0.70, considerably below 1.0 (note 5). In other words, at least as far as MBS are concerned, the risk model has a tendency to overestimate expected TEV. We can also see this feature by examining the empirical distribution of the StdTEis to see the frequency of values greater than
one (i.e. a month’s TE realization was more than a one standard deviation
event), greater than two (greater than a two standard deviation event), etc.
We should have confidence that the risk model is doing a good job if we observe a similar number of two and three standard deviation realizations to that predicted by a normal distribution.
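A minimal sketch of this calibration check on made-up monthly figures; the final lines reproduce the chi-square arithmetic of note 5 using the chapter's actual n = 85 and s = 0.70:

```python
from statistics import pstdev

def standardized_tes(realized_te, ex_ante_tev):
    """StdTE_i = TE_i / TEV_i: each month's realized tracking error scaled
    by the risk model's beginning-of-month TEV forecast."""
    return [te / tev for te, tev in zip(realized_te, ex_ante_tev)]

# Illustrative monthly data (bp). A well-calibrated model gives StdTE
# volatility near 1.0; a much lower value means TEV is overestimated.
std_te = standardized_tes([3.0, -2.0, 1.5, -4.0], [5.0, 5.0, 6.0, 5.0])
vol = pstdev(std_te)
share_within_1 = sum(abs(x) <= 1 for x in std_te) / len(std_te)

# Chi-square statistic of note 5, on the chapter's actual figures: with
# n = 85 months and sample StdTE volatility s = 0.70, (n - 1) * s**2 is
# chi-square with n - 1 degrees of freedom under the null sigma = 1.
n, s = 85, 0.70
chi_sq = (n - 1) * s ** 2  # 41.16, deep in the lower tail of chi2(84)
```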
The TBA proxy portfolio has yet to experience a two-standard deviation real-
ized TE event. Figure 13.3 presents the frequency distribution for the standardized TEs and shows that 81% of the months are within −1 and +1. For the normal distribution, the percentage of observations within −1 and +1 standard deviations is 68%. For the TBA proxy portfolio, all of the StdTEi are within −2 and
+2, compared with the 95% confidence interval for the normal distribution. It appears that the
MBS risk model tends to be a bit conservative in its TEV estimates. However,
for most portfolio managers, the tendency of the risk model to overstate risk
is a bit of a comfort. Considering the range of market environments over the
past 85 months, including extreme prepayment episodes and sharp move-
ments in interest rates, with volatilities ranging from record lows to record

[Figure: histogram of standardized tracking errors; x-axis: StdTE in bins from < −2 to > 2; left axis: frequency; right axis: cumulative frequency; annotation: 81% of observations within 1 StdTE]

Figure 13.3 Histogram of standardized TEs of TBA proxy portfolio vs. MBS Fixed-
Rate Index, Sept. 2001–Sept. 2008
Source: Barclays Capital


Table 13.2 TBA proxy portfolio holdings as of 30 Sept. 2008

Description         Position amount   Price     Market value %   OAD    WAM   WAC %
30-year FNMA 5.0        159,040        97.359       11.44        5.03   358   5.62
30-year FHLMC 5.0        42,120        97.328        3.03        5.04   358   5.62
30-year FHLMC 5.5       182,381        99.344       13.39        4.04   354   6.00
30-year FNMA 5.5        212,966        99.578       15.67        3.97   354   5.99
30-year FHLMC 6.0        33,234       101.047        2.48        3.12   355   6.47
30-year FNMA 6.0        149,984       101.141       11.21        3.06   355   6.50
30-year FNMA 6.5        112,606       102.375        8.52        2.60   355   6.99
30-year FNMA 7.0         72,618       104.219        5.59        2.08   353   7.66
15-year FNMA 4.5        126,105        97.431        9.08        4.02   167   5.29
15-year FNMA 5.0        142,510        99.203       10.45        3.32   164   5.73
30-year GNMA 6.0         95,404       101.313        7.14        3.80   355   6.50
30-year GNMA 6.5         26,526       102.266        2.00        3.14   341   7.00

Source: Barclays Capital

highs, the risk model has done a very good job of estimating the expected
performance of the TBA proxy portfolio versus the MBS Index.
An example of the MBS proxy portfolio, constructed as of September 2008
and assuming a market value of $1.355 billion, appears in Table 13.2. As can be seen, it contains just 12 positions which, being TBA contracts, generate no monthly receipts of principal repayments or interest. The overall market value of these positions is invested in cash instruments.
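A quick arithmetic check on Table 13.2: the market-value weights sum to 100%, and they imply a weighted-average OAD of roughly 3.7 years (a figure derived here from the table, not quoted in the text):

```python
# (market value %, OAD) for the 12 TBA positions in Table 13.2.
HOLDINGS = [
    (11.44, 5.03), (3.03, 5.04), (13.39, 4.04), (15.67, 3.97),
    (2.48, 3.12), (11.21, 3.06), (8.52, 2.60), (5.59, 2.08),
    (9.08, 4.02), (10.45, 3.32), (7.14, 3.80), (2.00, 3.14),
]

total_weight = sum(w for w, _ in HOLDINGS)                       # 100.00
portfolio_oad = sum(w * d for w, d in HOLDINGS) / total_weight   # ~3.7 years
```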

13.5 Comparing a portfolio of global Treasuries and MBS to the Global Aggregate Index

Under ideal conditions, many official institutions would prefer a bench-
mark that mirrors the outstanding investment-grade bond market, such
as the Barclays Capital Global Aggregate Index (GlobalAgg). However, the
GlobalAgg contains asset types (e.g. credit bonds) that do not offer the
liquidity and low idiosyncratic risk that such investors require.
Is it possible, then, to use G7 Treasuries and USD MBS to construct a
global fixed income benchmark that is similar to the GlobalAgg (hedged
to USD) on a risk-return basis? A good candidate for this benchmark would
have comparable mean returns and acceptable return volatility, as well as
low TEV relative to the GlobalAgg. We investigate several different variations
on such a benchmark, using different construction rules. First, we rebalance
the blend of two existing market-weighted indices, G7 Treasuries and USD
MBS, to achieve a global interest-rate exposure more similar to that of the
GlobalAgg. Second, we investigate the addition of securitized products in
the euro-denominated portion of the index.


13.5.1 Benchmark 1: Reweighting the components of the G7 Treasury Index
We now consider blends of the MBS and G7 Treasury indices to create a
benchmark that is closer to the GlobalAgg. As a first step we break apart the
G7 Treasury Index into its sub-components and reweight them to match the
Global Aggregate Index.
To this end, a custom Pan-Euro Treasury Index is created comprising the Treasury indices of the UK, Germany, France,
and Italy. The weights allocated to each of these indices in this custom
index are proportional to their respective market values in the Global
Treasury Index. Benchmark 1 is then constructed with the following five
elements:

1. US Treasury Index,
2. US MBS Index,
3. Japan Treasury Index,
4. Canada Treasury Index,
5. Custom Pan-Euro Treasury Index.

In this composite benchmark, the US Treasury Index and US MBS Index
together are given the total weight of the US Aggregate Index in the Global
Aggregate Index. The weight for the custom Pan-Euro Treasury Index is the
proportion of the entire Pan-Euro section in the Global Aggregate Index.
Likewise, the weight for the Japan Treasury Index is the proportion of the
entire Asia-Pacific section in the Global Aggregate Index and that for the
Canada Treasury Index is the proportion of the entire Canadian section
of the Global Aggregate Index6. Depending on the allocation of the weight
of the US Aggregate Index between US Treasury and the US MBS, different
benchmark variants can be created. We created three of these, as described
below.

13.5.1.1 Benchmark 1.1 – US component is mostly Treasuries


The proportion of the US MBS Index within the USD bloc is matched to
the proportion of US MBS within the US Aggregate. The remaining market
value is allocated to the US Treasury Index; that is, the credit and secu-
ritized portions of the US Aggregate are represented by Treasuries. As of
30 September 2008, US MBS comprised 40.1% of the US Aggregate. Since the
US Aggregate Index is 37.4% of the GlobalAgg, Benchmark 1.1 has a 15.0%
allocation to MBS.

13.5.1.2 Benchmark 1.2 – US component is mostly MBS


The proportion of the US Treasury Index within the USD bloc is matched to
the proportion of Treasuries within the US Aggregate (22.9%). The remaining


market value (that is, the majority of the portfolio) is allocated to US MBS.
The credit and securitized portions of the US Aggregate are thus represen-
ted by MBS. Overall, Benchmark 1.2 has a 28.8% allocation to MBS as of
30 September 2008.
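These allocation figures are simple products of index weights. A sketch using the percentages quoted in the text as of 30 September 2008 (the 55% line anticipates the minimum-TEV blend of Benchmark 1.3 described next):

```python
US_AGG_SHARE = 0.374   # US Aggregate share of the GlobalAgg
MBS_SHARE = 0.401      # US MBS share of the US Aggregate
TSY_SHARE = 0.229      # US Treasury share of the US Aggregate

def overall_mbs_weight(mbs_share_of_usd_bloc):
    """Overall MBS allocation = USD-bloc weight of the GlobalAgg times the
    MBS share chosen within that bloc."""
    return US_AGG_SHARE * mbs_share_of_usd_bloc

bmk_1_1 = overall_mbs_weight(MBS_SHARE)       # 15.0%: MBS at index weight
bmk_1_2 = overall_mbs_weight(1 - TSY_SHARE)   # 28.8%: all non-Treasury in MBS
bmk_1_3 = overall_mbs_weight(0.55)            # 20.6%: minimum-TEV blend
```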

13.5.1.3 Benchmark 1.3 – US component is optimal blend of Treasuries and MBS

We have determined that the minimum-TEV proportion of the US MBS
Index within the US Aggregate portion is approximately 55%. Overall, this
gave Benchmark 1.3 a 20.6% allocation to MBS as of 30 September 2008.
Benchmark 1.3 thus has 45% weight allocated to US Treasury and 55%
weight allocated to US MBS within the US Aggregate portion. Table 13.3
summarizes the historical performance of these three benchmarks and
compares them with the GlobalAgg. Benchmark 1.3 clearly has the lowest
TEV among them, and is also most similar to the GlobalAgg in terms of the
level of absolute return volatility.
We then ran the Global Risk Model (GRM) to analyze the key differ-
ences in risk exposures between these three candidate benchmarks and
the GlobalAgg. The results are summarized in Table 13.4, which depicts the
breakdown of the TEV relative to the GlobalAgg. If minimizing TEV is a
goal, curve risk can be made arbitrarily small by adjusting the maturity
composition of the Treasury components in each currency.

Table 13.3 Summary of historical performance, Sep. 2000–Sep. 2008

Statistic                     Benchmark 1.1   Benchmark 1.2   Benchmark 1.3   GlobalAgg   G7 Treasury
Mean return (bp)                   44.2            43.8            44.1           42.7         42.7
Return volatility (bp/mo)          86.4            75.4            81.1           80.4         81.1
Min. monthly return (bp)         −219.5          −172.0          −197.0         −206.3       −185.8
Max. monthly return (bp)          195.9           180.4           185.9          181.9        190.2
Min. quarterly return (bp)       −212.8          −192.8          −204.4         −177.1       −208.5
Max. quarterly return (bp)        438.3           350.6           397.5          387.1        409.0
Mean tracking error (bp)            1.5             1.1             1.4
TEV (bp/mo)                        19.4            18.9            17.8

Source: Barclays Capital


Table 13.4 Ex-ante TEV between benchmark and GlobalAgg (projected by GRM as of Mar. 2008)

                         Benchmark 1.1            Benchmark 1.2            Benchmark 1.3
                         Contribution  Isolated   Contribution  Isolated   Contribution  Isolated
                         to TEV        TEV        to TEV        TEV        to TEV        TEV
Total                     18.4          18.4       17.1          17.1       17.0          17.0
Systematic                18.1          18.2       16.7          16.9       16.6          16.8
Curve                      3.0           6.8        5.6          12.0        2.6           7.6
Swap Spreads               2.7           6.1        1.7           4.3        2.4           5.3
Volatility                 0.0           0.1        0.1           1.3        0.0           0.5
Spread Gov-Related        −0.2           1.5        0.1           1.3        0.0           1.3
Spread Credit and EMG     12.2          14.7        8.8          14.7       11.5          14.7
Spread Securitized         0.3           0.8        0.4           2.9        0.2           1.3
Idiosyncratic              0.2           2.1        0.3           2.2        0.3           2.1
Credit default             0.1           1.0        0.1           1.0        0.1           1.0

Source: Barclays Capital

13.5.2 Benchmark 2 – adding securitized products in EUR as well
Analysis of the sources of TEV for Benchmark 1.3 reveals that the EUR com-
ponent is overweight the long end of the yield curve. This is because our
benchmark is entirely in Treasuries, which tend to be longer in duration
than some of the other segments of the market. While we could address this
by rebalancing the maturity profile of the Treasury holdings, we examine
the possibility of including EUR securitized debt in the benchmark, making
the handling of the EUR market more symmetric with that of the USD, and
adding some spread exposure there as well.
Benchmark 2 is thus created by adding the collateralized segment of the
Euro-Aggregate Index. The Euro-Aggregate Index has four components:
Treasuries, Govt-related, Corporate, and Securitized. In Benchmark 1.3, we
have used just the first component. The securitized component is mostly
from the German market for covered bonds (Pfandbriefe), which trade
very close to swaps. The inclusion of this component can be anticipated to
improve tracking.
Thus, Benchmark 2 has the following six components:

1. US Treasury Index,
2. US MBS Index,
3. Japan Treasury Index,
4. Canada Treasury Index,
5. Custom Pan-Euro Treasury Index,
6. Euro-Aggregate Securitized Index.


Table 13.5 Ex-ante TEV between Benchmark 2 and GlobalAgg (projected by GRM as of Mar. 2008)

                         Benchmark 2              Benchmark 1.3
                         Contribution  Isolated   Isolated
                         to TEV        TEV        TEV
Total                     14.7          14.7       17.0
Systematic                14.3          14.5       16.8
Curve                      1.8           7.4        7.6
Swap Spreads               1.6           4.5        5.3
Volatility                 0.0           0.6        0.5
Spread Gov-Related         0.1           1.4        1.3
Spread Credit and EMG     10.7          13.8       14.7
Spread Securitized         0.1           1.5        1.3
Idiosyncratic              0.3           2.2        2.1
Credit default             0.1           1.0        1.0

Source: Barclays Capital

The total weight given to the Euro Aggregate Securitized Index and the Cus-
tom Pan-Euro Treasury Index (the Euro bloc of our benchmark) is set equal to
the weight given to the Custom Pan-Euro Treasury Index in Benchmark 1.3. The
weight given to the EuroAgg Securitized Index within this Euro bloc is mat-
ched to the proportion of the non-Treasury component within the EuroAgg.
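The construction rule for the Euro bloc can be written down directly. The 37% bloc weight and 30% non-Treasury share used below are placeholders for illustration, not actual index data:

```python
def euro_bloc_split(bloc_weight, non_tsy_share_of_euroagg):
    """Split the Euro bloc: the securitized slice is matched to the
    non-Treasury share of the EuroAgg, and the remainder stays in the
    custom Pan-Euro Treasury index."""
    securitized = bloc_weight * non_tsy_share_of_euroagg
    return bloc_weight - securitized, securitized

tsy_w, sec_w = euro_bloc_split(0.37, 0.30)
```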
Over the period of analysis, the mean return of Benchmark 2 is 43.8 bp/
month and its return volatility is 78.5 bp/month. It has a mean TE of 1.0 bp/
month against the GlobalAgg, and the TEV has declined to 16.1 bp/month.
Thus, the addition of the Euro-Aggregate Securitized Index has resulted in
the reduction of TEV from 17.8 bp/month to 16.1 bp/month. Table 13.5,
which is an extract from the GRM report for this benchmark portfolio
against the GlobalAgg, shows that this decline is mainly on account of
reductions in the TEVs due to ‘Swap Spreads’ and ‘Spread Credit’.

13.6 Time-dependence of results

For any analysis of asset allocation decisions based on historical data, it is
desirable to use a long time window. We would have preferred to extend
this analysis much further back in time, but were hampered by difficulties
in extending some of the time series further back and adjusting for changes
in the construction of the GlobalAgg over time. Nevertheless, it is clear that
the overweight of MBS and underweight of credit relative to the GlobalAgg
would have led to very different results in different market environments –
in terms of both risk estimation and achieved returns.
As hard as it may be to remember, the big story in fixed income mar-
kets until the middle of last year was ‘the dearth of volatility’. For all the


Table 13.6 Performance of Benchmark 1.3 before and after credit crisis, relative to G7 Treasuries and GlobalAgg

                                Absolute performance                  Relative perf. of G7 Tsy + MBS (Bmk 1.3)
                             G7 Tsy   GlobalAgg   G7 Tsy + MBS        vs. G7 Tsy   vs. GlobalAgg
                                                  (Bmk 1.3)
Entire period        Mean     42.7      42.7        44.1                 1.4           1.4
Oct 2000–Sep 2008    Stdev    81.1      80.4        81.1                12.7          17.8
First period         Mean     43.2      47.2        45.3                 2.1          −1.8
Oct 2000–Sep 2006    Stdev    82.1      82.5        82.1                12.9           9.1
Crisis period        Mean     41.2      29.5        40.4                −0.8          10.9
Oct 2006–Sep 2008    Stdev    80.0      73.4        79.7                11.9          30.4

Source: Barclays Capital

benchmarks that we have created, had we run the GRM as of early 2007,
calibrated to then-current levels of market risk, we would have estimated
much lower TEV relative to the GlobalAgg, especially its component due to
credit spread volatility.
We next turn our attention to achieved returns in different time peri-
ods. For example, let us examine the performance of Benchmark 1.3, which
is moderately overweight US MBS relative to the GlobalAgg to partially
compensate for the underweight to credit. Table 13.6 shows the perform-
ance of this benchmark, relative to both a pure G7 Treasury Index and
the GlobalAgg, over the entire time period of our study and in two mark-
edly different sub-periods. In the first, relatively calm period from October
2000 through September 2006, we see our mix as somewhat of a mid-point
between the G7 Treasury Index, which it outperforms by 2.1 bp/month, and
the GlobalAgg, which it underperforms by 1.8 bp/month. This was a period
when credit spreads were notably lacking in volatility, and generally fol-
lowed a trend of spread carry augmented by a fairly smooth tightening. TEV
during this sub-period was at similarly low levels relative to either index:
9.1 bp/month for the GlobalAgg and 12.9 bp/month against G7 Treasuries.
During this low-volatility period of steady tightening, the addition of credit
to an investment program generated a comfortable cushion of additional
return, with very little realized downside. In such markets, investors who
forego credit must accept that they will trail the GlobalAgg. However, when
spreads widen out dramatically, as they have done in the recent credit cri-
sis, an underweight to credit can generate significant outperformance. Over


[Figure: cumulative outperformance (bp) of Benchmark 1.3, October 2000 onwards, vs. GlobalAgg and vs. G7 Treasuries; y-axis from −300 to 300 bp]

Figure 13.4 Cumulative outperformance of Benchmark 1.3 relative to G7 Treasury Index and GlobalAgg
Source: Barclays Capital

the last two years, our MBS-enhanced index modestly underperforms G7 Treasuries, with TEV little changed from the prior period; but relative to
the suffering GlobalAgg, Benchmark 1.3 outperforms by an average 10.9 bp/
month, with a TEV of 30.4 bp/month. Over the entire time period, the
mean return for Benchmark 1.3 exceeds that of both G7 Treasuries and the
GlobalAgg by 1.4 bp/month.
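The full-period mean of 1.4 bp/month is consistent with month-weighting the two sub-period means in Table 13.6 (72 and 24 months, respectively); a quick check:

```python
FIRST_MONTHS = 72    # Oct 2000 - Sep 2006
CRISIS_MONTHS = 24   # Oct 2006 - Sep 2008

def full_period_mean(first_mean_bp, crisis_mean_bp):
    """Month-weighted average of the two sub-period mean relative returns."""
    total = FIRST_MONTHS + CRISIS_MONTHS
    return (first_mean_bp * FIRST_MONTHS + crisis_mean_bp * CRISIS_MONTHS) / total

vs_g7 = full_period_mean(2.1, -0.8)          # ~1.4 bp/month
vs_globalagg = full_period_mean(-1.8, 10.9)  # ~1.4 bp/month
```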
While the net outperformance is the same versus the two referenced
benchmarks, the dramatic timing difference between the two perform-
ance comparisons is illustrated in Figure 13.4, which plots the cumula-
tive outperformance of Benchmark 1.3 relative to G7 Treasuries and the
GlobalAgg. Over most of the time period, as discussed above, the diffe-
rence in spread exposures causes Benchmark 1.3 to outperform the all-
Treasury index and underperform the GlobalAgg; over the last 18 months
the underweight to credit leads to huge outperformance relative to the
GlobalAgg.

13.7 Conclusions

We have seen that an allocation to US MBS can help improve the long-
horizon performance of a global Treasury portfolio. While the MBS market
may be characterized by some fairly complex features, our index replication
program using a small number of TBA positions has made it straightforward


to track the index return quite closely, achieving a low TEV over a seven-
year history of actual transactions.
Compared to a pure global Treasury benchmark, the incorporation of
some amount of MBS can help reduce overall return volatility and diver-
sify the set of portfolio risk exposures. For investors who aim to achieve a
return profile more like the GlobalAgg without investing in credit, we have
shown that this can be achieved to some extent by maintaining a steady
overweight to MBS.
While the underlying lack of exposure to credit will create unavoidable
TE relative to the GlobalAgg, one can hope for results with similar levels
of risk and return over the long term. The specifics of the benchmark con-
struction mechanism will depend on the objectives. Simple definitions, such as a blend of two indices, can capture the basic idea. More detailed
construction mechanisms, as in Benchmark 1.3, use a finer set of building
blocks to better match the interest rate exposures of the GlobalAgg. The
addition of some EUR-denominated securitized assets as well could offer a
further way to add spread to the benchmark.
We do not mean to suggest that one can track the returns of the GlobalAgg,
or outperform it on a steady basis, without investing in credit. A credit-free
variant of the GlobalAgg, as discussed here, will continue to outperform the
GlobalAgg when spreads are stable or widening, and underperform when
they tighten. For investors who are able to invest in credit, the decision
of when to overweight or underweight this asset class is clearly a power-
ful potential source of market-timing alpha. However, it does seem that on
a through-the-cycle basis, a combination of MBS and Treasuries can offer
an attractive spread-enhanced return with minimal headline and liquidity
risk.

Notes
1. The mortgage agencies also guarantee hybrid adjustable-rate mortgages (i.e.
‘hybrid ARMs’). However, due to their relatively recent addition to the major indi-
ces and limited historical data, we restrict our analysis to fixed-rate agency pass-
throughs (MBS).
2. Specifically, we are using the USD MBS Index that is the sub-index of the Global
Aggregate Index. The USD MBS Index only includes fixed-rate Agency pass-
throughs.
3. See Chapter 6, ‘Tradable Proxy Portfolios for the Lehman Brothers MBS Index’,
in Quantitative Management of Bond Portfolios by L. Dynkin, A. Gould, J. Hyman,
V. Konstantinovsky and B. Phelps, Princeton University Press, 2007.
4. See ‘Mortgage TBA Portfolios with an Alpha Tilt’, Global Relative Value, Lehman
Brothers, 13 August 2007.
5. We can calculate a chi-square test statistic, assuming a normal distribution for
the tracking errors and a population variance of 1.0, to test the hypothesis that
the volatility of the actual standardized tracking errors is 1.0. The calculated test

statistic is 41.16, which allows us to reject the null hypothesis that the volatility
equals 1.0 at any reasonable level of significance.
6. For an idea of how the weights are allocated, the composition of the Global
Aggregate by market value, as of 30 September 2008, was as follows: US Aggregate –
37.4%, Pan European Aggregate Index – 37.0%, Asian Pacific Aggregate Index –
20.1%, Canadian – 2.6%, Eurodollar – 1.0%, 144A – 1.2%, Euro-Yen – 0.2%, and
Other Currencies – 0.5%. In our benchmark, the Eurodollar weight is allocated to
the US Aggregate, and the Euro-Yen weight is given to the Asian Pacific Aggregate
Index.

14
Volatility as an Asset Class for
Long-Term Investors

Marie Brière, Alexander Burgues and Ombretta Signori

14.1 Introduction

Long-term investors are usually very conservative in their asset allocations,
investing the bulk of their portfolios in government bonds. What often
deters them from including other assets is inherent portfolio risk. Many
long-term investors, such as pension funds and sovereign wealth funds,
have substantial liabilities that prevent them from making risky alloca-
tions. By opting for conservatism, however, they are also denying them-
selves the opportunity to invest in asset classes that earn higher returns
over the long run.
Volatility can be considered as a full-fledged asset class with many advan-
tages. For example, being negatively correlated with equities, it can reduce
the risk of an equity investment without sacrificing returns. But the advan-
tages of volatility do not stop there. The recent development of standard-
ized products, especially volatility index futures and variance swaps, gives
investors access to a wide range of strategies for gaining structural exposure
to volatility.
Two sets of strategies can be used to gain volatility exposure, namely long
investment in implied volatility and exposure to the volatility risk premium.
Though very different, the two strategies are consistent with the classic
motivations – diversification and return enhancement – that prompt inves-
tors to opt for an asset class. Being long implied volatility is highly attractive
for diversification purposes, offering timely protection when equity markets
turn down. Exposure to the volatility risk premium, a strategy similar to
selling an insurance premium, has traditionally delivered highly attractive
risk-adjusted returns, albeit with greater downside risk.
For a long-term investor, adding volatility exposure to a strategic portfolio
raises practical issues. Because these strategies are implemented through
derivatives, they require a limited amount of capital. Thus the amount of
risk to be taken, which is equivalent to the strategies’ degree of leverage,
must be properly calibrated. Another difficulty is that volatility strategies



returns are much more asymmetric and leptokurtic than conventional asset
classes. For volatility premium strategies, low volatility of returns is gener-
ally countered by higher negative skewness and higher kurtosis, two factors
that could cost investors dearly if they are not properly taken into account.
This requires the use of optimization techniques that capture the extreme
risks of the return distribution. Modified Value-at-Risk is an appropriate
tool for our purposes (Favre and Galeano 2002, Agarwal and Naik 2004,

Martellini and Ziemann 2007) and has not yet been applied to the volatility
asset class. To our knowledge, all the research into adding structural volatil-
ity exposure to either an equity portfolio (Daigler and Rossi 2006), or a fund
of hedge funds (Dash and Moran 2005) uses the mean–variance framework
when optimizing portfolio composition.
This work takes the case of a long-term investor managing a conventional
balanced portfolio and seeking to add strategic exposure to equity vola-
tility. We believe this research is original for three reasons. First, it offers
a framework for analyzing the inclusion of volatility strategies in a port-
folio; second, it combines two contrasting sets of exposures: long implied
volatility and long volatility premium; and third, we have built efficient
frontiers within a Mean/Value-at-Risk framework to capture the peculiar
shape of volatility strategies’ return distributions. We show that volatility
opens up multiple possibilities for long-term investors. By adding long vola-
tility exposure, they can mitigate extreme risk to their portfolio, ultimately
making it less risky than a conventional balanced equity/bond portfolio or
even a 100% fixed income investment. If an investor is willing to accept an
increase in extreme risk (especially higher negative skewness), the volatility
risk premium strategy on its own can strongly boost portfolio returns. And
by combining implied volatility with volatility risk premium strategies, a
long-term investor can substantially increase returns while incurring lower
extreme risk than on a conventional portfolio. This is because the two strat-
egies tend to hedge each other in adverse events.
The rest of the study is organized as follows. Section 14.2 presents the two
strategies for gaining exposure to volatility as an asset class; Section 14.3
explains how to construct the portfolio; Section 14.4 describes our data;
and Section 14.5 presents our results on volatility in an efficient portfolio.
Section 14.6 concludes.

14.2 Volatility as an asset class

We examine two ways for an investor to gain structural exposure to volatility
and investigate how this exposure can be used as an asset class in a
traditional portfolio. The first possibility is to expose a portfolio to implied
volatility changes in an underlying asset. The main reason for making this
kind of investment is to benefit from the diversification that arises from the
strongly negative correlation between performance and implied volatility


of the underlying. This is particularly noticeable in a bear market (Daigler
and Rossi 2006).
To track the implied volatility of an underlying asset, we need a synthetic
volatility indicator. A volatility index, expressed in annualized terms, prices
a portfolio of options across a wide range of strikes (volatility skew) and
with constant maturity (interpolation on the volatility term structure). One
widely used benchmark is the VIX. Published by the Chicago Board Options

Exchange (CBOE 2004), this index expresses the 30-day implied volatility
generated from S&P 500 traded options. Because the VIX reflects a consen-
sus view of short-term volatility in the equity market, it is used to measure
market participants’ risk aversion. As such, it is referred to as the ‘investor
fear gauge’.
Although the VIX index itself is not a tradable product, the CBOE Futures
Exchange launched futures contracts on it in March 2004. As a result, inves-
tors now have a simple and direct way of exposing their portfolios to varia-
tions in the short-term implied volatility of the S&P 500. VIX futures are a
better way of achieving such exposure than through traditional approaches
relying on delta-neutral combinations of options such as straddles, stran-
gles or more complex strategies such as volatility-weighted combinations of
calls and puts. On short maturities of less than three months, neutralizing
the delta exposure of these portfolios can easily overshadow the impact of
implied volatility variations.
To establish a structurally long investment in implied volatility, we use an
approach that takes advantage of the mean–reverting nature of volatility1
(Dash and Moran 2005). We do this by calibrating the exposure according
to the absolute levels of the VIX, taking the highest exposure when implied
volatility is historically low, and reducing it as volatility rises. Implementing
the long volatility (LV) strategy consists of buying the correct number of
VIX futures such that the impact of a one-point variation in the price of
the future is equal to $\frac{1}{F_{t-1}} \times 100\%$ (a 5% impact when the VIX is at 20). The
P&L generated between t – 1 (contract date) and t (maturity date) can then
be written as:

$$PL_t^{VIX} = \frac{1}{F_{t-1}} \left( F_t - F_{t-1} \right) \qquad (1)$$

where $F_t$ is the price of the future at time t.


In practice, VIX futures prices have existed only since 2004. They represent the
one-month forward market price for 30-day implied volatility. This forward-
looking component is reflected in a term premium between the VIX future
and the VIX index. This premium tends to be positive when volatility is
low (it represents a cost of carry for the buyer of the future) and negative
when it peaks. To approximate pre-2004 VIX futures prices, we used the


average relationship between VIX futures and the VIX index, estimated
econometrically over the period between March 2004 and August 2008.
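Equation (1) can be sketched in a few lines of code; the futures prices below are hypothetical, chosen only to illustrate the sizing rule.

```python
def lv_pnl(f_prev, f_now):
    """Long-volatility P&L of Equation (1): (F_t - F_{t-1}) / F_{t-1}.

    The position is sized so that a one-point move in the future
    changes P&L by 1/F_{t-1} * 100%.
    """
    return (f_now - f_prev) / f_prev

# With the future bought at 20, a one-point rise gives the 5% impact
# quoted in the text; a fall to 18 loses 10%.
print(lv_pnl(20.0, 21.0))  # 0.05
print(lv_pnl(20.0, 18.0))  # -0.1
```

Note that the sizing rule makes the P&L independent of the absolute level of the future: it is simply the percentage change in the futures price.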
The second strategy involves taking exposure to the difference between
implied and realized volatility. This difference has historically been posi-
tive on average for equity indices (Carr and Wu 2007). The volatility risk
premium (VRP), which is well documented in the literature (Bakshi and
Kapadia 2003, Bondarenko 2006), can be explained by the asymmetric risk

between a short volatility position (a net seller of options faces an unlimited
potential loss), and a long volatility position, where the loss is capped at the
premium paid. To offset uncertainty in the future level of realized volatility,
sellers of implied volatility demand compensation in the form of a premium
over the expected realized volatility2.
The VRP is captured by investing in a variance swap, that is, a swap con-
tract on the spread between implied and realized variance. With an over-
the-counter transaction, the two parties agree to exchange a specified level
of implied variance for the actual amount of variance realized over a pre-
agreed period. The implied variance at inception is the level that puts the
net present value of the swap at zero. In theory this level (or strike) is com-
puted from the price of the option portfolio used to calculate the volatility
index itself. The theoretical strike for a one-month variance swap on the
S&P 500 is thus the value of the VIX index. Risk-averse investors can now
invest in capped variance swaps, thus fixing the maximum possible loss, or
equivalently an upper limit for the realized volatility3 that will be paid. We
consider a capped variance swap strategy on the S&P 500 held over a one-
month period.
The P&L of a short capped variance swap position between the start date
(t–1) and end date (t) can be written as follows (Demeterfi et al. 1999):

$$PL_t^{VARSWAP} = N_{variance} \left[ K_{t-1}^2 - \left( \min \left( 2.5 \, K_{t-1}, \, RV_{t-1,t} \right) \right)^2 \right] \qquad (2)$$

where $K_{t-1}$ is the volatility strike of the variance swap contract entered at
date t – 1, $K_t = VIX_t / 100$, $VIX_t$ is the VIX index, $RV_{t-1,t}$ is the realized volatility
between t – 1 and t, and $N_{variance}$ is the 'variance notional'.
Henceforth we calculate the P&L of a short variance swap in this way, as
expressed in Equation (2). In practice, owing to the difficulty of
replicating the index, it is more realistic to reduce VIX implied volatility by
1% to reflect the replication costs borne by arbitrageurs (Standard & Poor's
2008).
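Equation (2) and the 2.5x cap described in note 3 can be sketched as follows; the notional, strike and realized-volatility figures are hypothetical, chosen only to exercise the formula.

```python
def short_capped_varswap_pnl(n_variance, k_prev, realized_vol, cap=2.5):
    """Equation (2): N_variance * [K_{t-1}^2 - min(cap * K_{t-1}, RV)^2].

    The min() caps the realized-volatility leg, bounding the seller's loss.
    Volatilities are expressed as decimals (VIX at 20 -> strike 0.20).
    """
    capped_rv = min(cap * k_prev, realized_vol)
    return n_variance * (k_prev ** 2 - capped_rv ** 2)

# Strike 20% against 15% realized: the seller earns the variance spread.
print(round(short_capped_varswap_pnl(1.0, 0.20, 0.15), 6))  # 0.0175
# Realized volatility spikes to 80%: the loss is capped at 2.5 * 20% = 50%.
print(round(short_capped_varswap_pnl(1.0, 0.20, 0.80), 6))  # -0.21
```

The second call shows why the cap matters to a risk-averse seller: without it, the 80% realized-volatility month would lose 0.60 of notional instead of 0.21.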

14.3 Portfolio construction

When implementing a volatility strategy, one important aspect to take into
account is the non-normality of return distributions, as shown in the next


section. The mean–variance criterion of Markowitz (1952) is not suitable
when returns are not normally distributed. To compensate for this, many
authors have sought to include higher-order moments of the return dis-
tribution in their analyses. Lai (1991) and Chunhachinda et al. (1997), for
example, introduce the third moment of the return distribution (i.e. skew-
ness) and show that this produces significant changes in optimal portfolio
construction. A further significant improvement can be achieved by extend-

ing portfolio selection to a four-moment criterion (Jondeau and Rockinger
2006, 2007).
For investors, the main danger with the proposed volatility framework is
the risk of substantial losses in extreme market scenarios (the left tail of the
return distribution). Since returns on volatility strategies are not normally
distributed, we choose ‘modified Value-at-Risk’ as our reference measure of
risk. Value-at-Risk (VaR) is the maximum potential loss over a time period
given a specified probability α. To capture the effect of non-normal returns,
we replace the quantile of the standard normal distribution with the ‘modi-
fied’ quantile of the distribution w␣ , approximated by the Cornish-Fisher
expansion based on a Taylor series approximation of the moments (Stuart,
Ord and Arnold 1999). This enables us to correct the distribution N(0,1)
by taking skewness and kurtosis into account. Modified VaR is accordingly
written as:

$$ModVaR(1-\alpha) = -\left( \mu + w_\alpha \sigma \right) \qquad (5)$$

$$w_\alpha = z_\alpha + \frac{1}{6}\left(z_\alpha^2 - 1\right) S + \frac{1}{24}\left(z_\alpha^3 - 3 z_\alpha\right) EK - \frac{1}{36}\left(2 z_\alpha^3 - 5 z_\alpha\right) S^2$$

where $\mu$ and $\sigma$ are, respectively, the mean and standard deviation of the
return distribution, $z_\alpha$ is the α-quantile of the standard normal distribution,
$w_\alpha$ is the modified percentile of the distribution at threshold α, S is the
skewness and EK is the excess kurtosis of the portfolio.
Modified VaR is not only easy to implement when constructing the risk
budget for an investor; it explicitly takes into account how that investor’s
utility function changes in the presence of non-normal returns. Modified
VaR will be greater for a portfolio that has negative skewness (a left-skewed
return distribution) and/or higher excess kurtosis (leptokurtic return distri-
bution). A risk-averse investor will prefer a return distribution where the odd
moments (expected return, skewness) are positive and the even moments
(variance, kurtosis) are low.
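Equation (5) translates directly into code. The sketch below uses the standard library's Gaussian quantile for $z_\alpha$; the moment values in the example calls are hypothetical.

```python
from statistics import NormalDist

def modified_var(mu, sigma, skew, excess_kurt, confidence=0.99):
    """Modified VaR of Equation (5), with the percentile w built from
    the Cornish-Fisher expansion of the Gaussian quantile."""
    z = NormalDist().inv_cdf(1.0 - confidence)  # e.g. -2.326 at 99%
    w = (z
         + (z ** 2 - 1) * skew / 6
         + (z ** 3 - 3 * z) * excess_kurt / 24
         - (2 * z ** 3 - 5 * z) * skew ** 2 / 36)
    return -(mu + w * sigma)

# With zero skewness and excess kurtosis this reduces to Gaussian VaR...
print(round(modified_var(0.01, 0.05, 0.0, 0.0), 4))   # 0.1063
# ...while negative skewness and fat tails push the VaR up.
print(round(modified_var(0.01, 0.05, -0.5, 2.0), 4))  # 0.1434
```

The sign conventions match the text: a larger positive value means a larger potential loss, and negative skewness or extra kurtosis always increases it at the 99% threshold.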
In practice, because volatility strategies are implemented through listed or
OTC derivatives, the only capital requirement is the collateral needed when
entering into a variance swap contract, along with margin deposits for listed
futures. Cash requirements being limited, a key step in the process of vola-
tility investing is the proper calibration of the strategies.


Each volatility strategy is calibrated according to the maximum allowable
risk exposure. Based on our computations of modified VaR for each asset
class, we set monthly modified 99% VaR at 10%, a level comparable to the
equity asset class (see Table 14.A.1 in the Appendix). The returns to the vola-
tility strategies are thus the return on cash plus a fixed proportion of each
strategy’s P&L. This proportion, which for simplicity we call ‘degree of lever-
age’, is determined ex ante by our calibration of the allowed risk.

14.4 Data

Our dataset is composed of US monthly figures for the period from February
1990 to August 2008. We use the seven to ten year Merrill Lynch index for
government bonds, the S&P 500 for equities, the CBOE’s (2004) VIX index
for the volatility strategies, and the one-month U.S. interbank rate for the
risk-free rate4.
Table 14.A.1 in the Appendix shows the statistics for the four ‘assets’
included in our study: government bonds, equities and the two volatility
strategies. Looking at Sharpe ratios and success rates, the VRP strategy seems
to be the more attractive, with a Sharpe ratio of 2.4 and a success rate of
85%. Bonds (0.5 and 68%), equities (0.4 and 64%) and the LV strategy (0.1
and 53%) follow in that order. Although the LV strategy comes last in this
ranking, it holds considerable interest in terms of diversifying power, as we
will show. The VRP strategy, on the other hand, is the most consistent win-
ner. Its performance is relatively stable, the exception being during periods
of rapidly increasing realized volatility (onset of crises, unexpected market
shocks), when returns are strongly negative5 and much greater in ampli-
tude than for the traditional asset classes. These periods are usually short,
accounting for only 15% of the months in the period under review.
For the chosen calibration, the LV strategy has the highest volatility
(21%) followed by equities, VRP and bonds (14%, 10% and 6% respectively).
Downside deviation – a measure of the asymmetric risk on the left side of
the return distribution – offers the same ranking. Monthly mean returns
range between 0.59% for LV and 2.16% for VRP. An analysis of extreme
returns (min and max) highlights the asymmetry of the two volatility strat-
egies: the LV strategy offers the highest maximum return at 30.84% (its
minimum return is –12.19%), whereas the VRP posts the worst monthly
performance at –15.61% (with the best month at 8.95%).
The higher-order moments show clearly that returns are not normally
distributed6, particularly for the two volatility strategies. This highlights
the importance of taking an adequate measure of risk when optimizing the
portfolio (as discussed in the previous paragraph). The skewness of equity
and bond returns is slightly negative (–0.46 and –0.31 respectively), and
for the VRP strategy it shows a very strong negative figure (–1.80). The only
strategy showing positive skewness (1.00) is LV. Thus, being long implied


volatility provides a partial hedge for the leftward asymmetry of the other
asset classes. All four assets have kurtosis greater than 3.0: 3.54 and 3.86 for
bonds and equities, and even higher for the volatility strategies: 5.33 (LV)
and 10.38 (VRP).
The multivariate characteristics of returns are likewise of great interest.
The correlation matrices are shown in Table 14.A.2 of the Appendix. For the
1990–2008 period, we find good diversifying power between equities and

bonds, in the form of virtually zero correlation. As expected, the LV strat-
egy offers strong diversifying power relative to traditional asset classes. It is
highly negatively correlated with equities (–61%), a phenomenon already
well publicized by other studies (Daigler and Rossi 2006). What is less well
known is that the LV strategy is also weakly correlated with bonds (8%).
This is an interesting and important property for a long-term conservative
investor.
The VRP strategy shows quite different characteristics: it offers little diver-
sification to equity exposure (46% correlation), but significantly more to
bonds (–17%). More importantly, the two volatility strategies are mutually
diversifying (–61% correlation). And this, as we will see, is very appealing
for portfolio construction.
The importance of extreme risks means that the coskewness and cokurto-
sis matrices of the asset classes (Tables 14.A.3 and 14.A.4 in the Appendix; see note 7)
need to be analyzed. A positive coskewness value $sk_{iij}$ (note 8) suggests that asset j has
a high return when the volatility of asset i is high, that is, j is a good hedge
against an increase in the volatility of i. This is particularly true for the
LV strategy, which offers a good hedge of the VRP strategy, and to a lesser
extent for equities and bonds. In contrast, the VRP strategy does not hedge
the other assets efficiently because it tends to underperform when their
volatility increases.
With a positive cokurtosis value $ku_{iiij}$ (note 9), the return distribution of asset i
is more negatively skewed when the return on asset j is lower than expected,
that is, i is a poor hedge against a decrease in the value of j. Here again we
find that, unlike the VRP strategy, the LV strategy is an excellent hedge
against equities – far better than a long bond. However, the two volatil-
ity strategies hedge each other quite well. A positive cokurtosis $ku_{iijk}$ is a sign
that the covariance between j and k increases when the volatility of asset i
increases. The most interesting results are seen in periods of rising equity
volatility. The LV/bonds correlation increases, whereas the VRP/bonds and
VRP/LV correlations decline. Thus, during periods of equity market stress,
VRP and equities both perform badly, while LV and bonds do better. Lastly,
positive cokurtosis $ku_{iijj}$ means that the volatilities of i and j tend to increase at
the same time. This is the case for all four assets. Once again, all coskewness
and cokurtosis values are significantly different from 0 and 3 respectively, a
sign that the structure of dependencies between these strategies differs sig-
nificantly from a multivariate normal distribution (note 10).
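The coskewness and cokurtosis estimators defined in notes 8 and 9 can be sketched as follows; the short series in the sanity checks are purely illustrative.

```python
def _mean(xs):
    return sum(xs) / len(xs)

def _std(xs):
    m = _mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def coskewness(ri, rj, rk):
    """sk_ijk = E[(r_i - mu_i)(r_j - mu_j)(r_k - mu_k)] / (s_i s_j s_k)."""
    mi, mj, mk = _mean(ri), _mean(rj), _mean(rk)
    cross = _mean([(a - mi) * (b - mj) * (c - mk)
                   for a, b, c in zip(ri, rj, rk)])
    return cross / (_std(ri) * _std(rj) * _std(rk))

def cokurtosis(ri, rj, rk, rl):
    """ku_ijkl = E[(r_i - mu_i)...(r_l - mu_l)] / (s_i s_j s_k s_l)."""
    mus = [_mean(r) for r in (ri, rj, rk, rl)]
    cross = _mean([(a - mus[0]) * (b - mus[1]) * (c - mus[2]) * (d - mus[3])
                   for a, b, c, d in zip(ri, rj, rk, rl)])
    return cross / (_std(ri) * _std(rj) * _std(rk) * _std(rl))

# Sanity checks: sk_iii is the ordinary skewness (zero for a symmetric
# series) and ku_iiii the ordinary kurtosis.
x = [-0.02, 0.00, 0.02]
print(coskewness(x, x, x))     # 0.0
print(cokurtosis(x, x, x, x))  # ~1.5 (a platykurtic three-point series)
```

Entries such as $sk_{iij}$ or $ku_{iijk}$ are obtained by passing the relevant return series in the corresponding slots.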


This initial analysis already highlights various advantages of the two vola-
tility strategies within a diversified portfolio: the LV strategy delivers excel-
lent diversification relative to equities and, to a lesser extent, bonds; the VRP
strategy allows for very substantial increase in returns, at the expense of a
broadly increased risk profile (extreme risks and codependencies with equi-
ties). Combining the two strategies is a particularly attractive option since they
tend to hedge each others’ risks, especially in extreme market scenarios.

14.5 Efficient portfolio with volatility

We compute efficient frontiers in a mean–VaR framework by considering a
shift from a pure bond portfolio into: (1) an initial portfolio invested 100%
in equities and government bonds, and the initial portfolio with the add-
ition of (2) the LV strategy, (3) the VRP strategy and (4) the two volatility
strategies at the same time. As previously noted, the two volatility strategies
in our analytical framework are collateralized (a fixed amount of cash is used
for collateral and margin purposes). To construct the portfolio, the sum of
the percentage shares in the four asset classes must equal 100%. For the two
traditional asset classes (equities and bonds), the portfolio is long-only and
short selling is not allowed. For the two volatility strategies implemented via
derivatives, long and short positions are permitted.
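The optimization set-up can be illustrated with a coarse grid search over long-only bond/equity mixes that minimizes sample modified VaR. The simulated Gaussian return series and the 10% weight grid below are assumptions for illustration only, not the chapter's data or optimizer.

```python
import random
from statistics import NormalDist

def modified_var(returns, confidence=0.99):
    """Cornish-Fisher modified VaR (Equation (5)) of a return sample."""
    n = len(returns)
    mu = sum(returns) / n
    sigma = (sum((r - mu) ** 2 for r in returns) / n) ** 0.5
    skew = sum((r - mu) ** 3 for r in returns) / (n * sigma ** 3)
    ekurt = sum((r - mu) ** 4 for r in returns) / (n * sigma ** 4) - 3.0
    z = NormalDist().inv_cdf(1.0 - confidence)
    w = (z + (z ** 2 - 1) * skew / 6
           + (z ** 3 - 3 * z) * ekurt / 24
           - (2 * z ** 3 - 5 * z) * skew ** 2 / 36)
    return -(mu + w * sigma)

def portfolio_returns(weights, series):
    """Monthly returns of a weighted mix of asset return series."""
    return [sum(w * s[t] for w, s in zip(weights, series))
            for t in range(len(series[0]))]

# Simulated (hypothetical) monthly bond and equity returns,
# 223 months as in February 1990 - August 2008.
random.seed(42)
bond = [random.gauss(0.005, 0.015) for _ in range(223)]
equity = [random.gauss(0.007, 0.040) for _ in range(223)]

# Long-only grid: weights sum to 1, step 10%; pick the minimum-VaR mix.
grid = [(i / 10, 1 - i / 10) for i in range(11)]
best = min(grid, key=lambda w: modified_var(portfolio_returns(w, [bond, equity])))
print(best, round(modified_var(portfolio_returns(best, [bond, equity])), 4))
```

A production version would add the volatility overlays (which may be long or short) and a finer optimizer, but the objective and constraints are exactly those described above.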
Figure 14.1 shows the four efficient frontiers. We note firstly that adding
the volatility strategies markedly improves the efficient frontier compared
with the initial portfolio of equities and bonds.

[Figure 14.1 here: four efficient frontiers, with monthly returns (%) on the vertical axis and modified VaR (%) on the horizontal axis; one curve per portfolio: BE, BE+LV, BE+VRP, BE+LV+VRP.]

Figure 14.1 Efficient frontiers. Optimization results of the four portfolios: (1) Bond
Equity (BE), (2) Bond Equity + Long Volatility (BE+LV), (3) Bond Equity + Volatility
Risk Premium (BE+VRP), (4) Bond Equity + Long Volatility + Volatility Risk Premium
(BE+LV+VRP); February 1990–August 2008.


We will now examine portfolio performances that minimize VaR exposure.
The corresponding allocations are presented in Table 14.1. Compared
with the initial portfolio (76% bonds, 24% equities), the addition of the
LV strategy (21%) combined with an increase in the allocation to equities
(35%) and a decrease in bonds (44%) reduces the VaR to 2.1% from 3.4%.
The resulting portfolio is more attractive because it has a higher Sharpe
ratio (0.9 versus 0.7), obtained through higher annualized return (8.9%

versus 8.3%) and lower volatility (5.0% versus 5.5%). The main reasons
for this result is the strong negative correlation (–61%) between the LV
strategy and equities. Furthermore, the distribution of returns for the new
portfolio shows a considerable improvement in the higher-order moments.
The portfolio offers positive skewness (+0.52 versus virtually nil for the
initial portfolio) and an overall decrease in kurtosis from 4.12 to 3.68.
Adding the VRP strategy (30%) to the initial portfolio, at the expense
of equities (0%) and, to a lesser extent, bonds (70%), delivers significantly
higher returns (13.22% versus 8.28%), along with a lower VaR (2.49%).
The success rate of the portfolio rises to 79.8%, and the Sharpe ratio to
1.86. The portfolio return distribution shows more pronounced leftward

Table 14.1 Portfolio allocation: minimum modified VaR. Summary statistics and
composition of the four Minimum Modified VaR portfolios: Bond Equity, Bond
Equity + Long Volatility (LV), Bond Equity + Volatility Risk Premium (VRP), Bond
Equity + Long Volatility + Volatility Risk Premium (LV+VRP); US, February 1990–
August 2008

                          Bond      Bond        Bond        Bond Equity
                          Equity    Equity+LV   Equity+VRP  +LV+VRP

Mean Ann. Return (%)       8.28      8.94       13.22       12.62
Ann. Std. Dev. (%)         5.50      4.98        4.68        3.95
Skewness                   0.01      0.52       –0.41        0.33
Kurtosis                   4.12      3.68        3.41        3.58
Max Monthly Loss (%)      –3.79     –3.07       –3.23       –2.03
Max Monthly Gain (%)       6.20      5.69        4.08        4.60
Mod. VaR (99%) (%)         3.41      2.13        2.49        1.43
Sharpe Ratio               0.69      0.89        1.86        2.06
Success Rate (%)          70.4      70.4        79.8        84.8

Bond (%)                  76        44          70          37
Equity (%)                24        35           0          20
LV (%)                     –        21           –          21
VRP (%)                    –         –          30          22


asymmetry (–0.41 versus +0.01), making it less attractive to the most risk-
averse investors.
Finally, the most interesting risk/return profile is obtained by adding
a combination of the two volatility strategies. Adding both the LV (21%)
and the VRP (22%), at the expense of bonds (37%) and equities (20%)
makes it possible to achieve a VaR of 1.4%. The success rate increases
significantly, and the Sharpe ratio (2.06) is the highest of all of the four

portfolios. For a long-term investor seeking low risk exposure, the most
appreciable characteristic is the decrease in extreme risks, reflected in
the higher-order moments. Compared with the initial portfolio, this
combined portfolio is less leptokurtic (kurtosis of 3.58 versus 4.12), and
downside risk as measured by the worst-month performance is almost
halved (from –3.79% to –2.03%). But the most appealing property for
risk-averse investors is that the portfolio exhibits positive skewness
(+0.33 versus +0.01).
For conservative investors, typically fully invested in bonds, another
way of looking at the advantage of structural exposure to volatility is to
compare the portfolio characteristics with the bond asset class (first row
of Table 14.A.1 in the Appendix). Comparing the Sharpe ratio and extreme
risks shows that investors fully exposed to bonds can benefit significantly
by diversifying their exposure, adding equities and LV, or even better, equi-
ties combined with the LV and VRP strategies. These two optimal portfolios
have higher Sharpe ratios and lower maximum losses than bonds. More
interestingly and less obviously, they provide positive skewness (compared
with a negative value for bonds and nil for the classic bond/equity exposure)
without incremental kurtosis.

14.6 Conclusion

After several decades of analyzing portfolio choice in a mean–variance framework,
investors appear to have realized the key role played by higher-order
moments of return distribution. Examples of how extreme risk can rise due
to systematic efforts to minimize volatility are now well documented, and
investors are aware of them, sometimes to their cost. In this context, long-
term investors will pay close attention to all the codependencies between
asset classes in their current portfolios and to the way they change when
new classes are added. A suitable strategic allocation will attempt to deliver
the required long-run returns while decreasing volatility and kurtosis and
increasing skewness (i.e. reducing leftward asymmetry and even obtaining
rightward asymmetry).
This analysis highlights that when viewed as an asset class, volatility is
an extremely attractive tool for long-term investors. Recent literature has
begun to show the merits of including long exposure to implied volatility
in a pure equity portfolio (Daigler and Rossi 2006) or in a portfolio of funds


of hedge funds (Dash and Moran 2005). Our study underscores the new pos-
sibilities available to long-term investors in terms of portfolio choice when
volatility is introduced into a portfolio of classic assets (equity and bonds).
Little has been written on this subject so far.
The results of our historical analysis of the past 20 years show that
including these volatility strategies in a portfolio is highly appealing. Taken
separately, each strategy displaces the efficient frontier significantly out-

ward, but combining them produces even better results. Long exposure to
volatility is particularly valuable for diversifying a portfolio with equities.
Because this strategy is negatively correlated with the asset class, its hedging
function during bear-market periods is clearly attractive. A volatility risk
premium strategy, on the other hand, boosts returns. It provides little diver-
sification to equities – it loses significantly when share prices fall – but good
diversification with respect to bonds and implied volatility. Combining the
two strategies offers the big advantage of fairly effective reciprocal hedg-
ing during periods of market stress, which significantly improves portfolio
returns for a given level of risk.
One of the limitations of our work relates to the period analyzed. Although
markets experienced several severe crises between 1990 and 2008, with
sharp volatility spikes, there is no assurance that future crises will not be
more acute than those experienced over the testing period or that losses
on variance swap positions will not be greater, thereby partly erasing the
high reward associated with the volatility risk premium. One interesting
continuation of this work would be to explore the extent to which long
exposure to volatility is a satisfactory hedge of the volatility risk premium
strategy during periods of stress and sharply rising realized volatility. In
any case, an essential aspect of using volatility as an asset class is the sig-
nificant possibilities it offers for tailoring a portfolio to investors’ needs,
especially if they are risk averse. Over the long term, volatility strategies
make it possible to build portfolios that are more efficient than a pure-bond
or equity/bonds investment, within a framework that goes beyond simple
mean–variance.

Notes
1. Empirical tests have shown that having an exposure inversely proportional to the
observed level of implied volatility makes the strategy much more profitable.
2. Other components can provide partial explanations of this premium: the convex-
ity of the P&L of the variance swap, and the fact that investors tend to be struc-
tural net buyers of volatility (Bollen and Whaley 2004).
3. In practice, the standard cap is 2.5 times the strike of a variance swap (implied
volatility). The investor that wants to buy this protection has to pay a cost that
will further reduce the VIX implied volatility. In this work we consider an average
cost of 0.2% (Credit Suisse 2008).

10.1057/9780230251298 - Interest Rate Models, Asset Allocation and Quantitative Techniques for Central Banks and Sovereign Wealth
Funds, Edited by Arjan B. Berkelaar, Joachim Coche and Ken Nyholm
276 Marie Brière et al.

4. All of the data were downloaded as monthly series from Datastream.


5. Realized volatility rises above implied volatility.
6. For equity returns and returns on the two volatility strategies, the null hypoth-
esis of a normality test is significantly rejected.
7. We give a summary presentation of these matrices. For n=4 assets, it suffices
to calculate 20 elements for the coskewness matrix of dimension (4,16) and 35
elements for the cokurtosis matrix of dimension (4, 64).
8. The general formula for coskewness is sk_ijk = E[(r_i − μ_i)(r_j − μ_j)(r_k − μ_k)] / (σ_i σ_j σ_k), where r_i is the return on asset i and μ_i its mean.
9. The general formula for cokurtosis is ku_ijkl = E[(r_i − μ_i)(r_j − μ_j)(r_k − μ_k)(r_l − μ_l)] / (σ_i σ_j σ_k σ_l).

10. The null hypothesis of a multivariate normality test (Kotz et al. 2000) is significantly rejected.
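The coskewness and cokurtosis elements defined in notes 8 and 9 are straightforward to compute. The sketch below is our own illustration (Python/NumPy); the function names are hypothetical, not from the chapter:

```python
import numpy as np

def coskewness(r, i, j, k):
    """sk_ijk = E[(r_i - mu_i)(r_j - mu_j)(r_k - mu_k)] / (sigma_i sigma_j sigma_k)."""
    d = r - r.mean(axis=0)            # demeaned returns, shape (T, n)
    s = r.std(axis=0)                 # standard deviation per asset
    return (d[:, i] * d[:, j] * d[:, k]).mean() / (s[i] * s[j] * s[k])

def cokurtosis(r, i, j, k, l):
    """ku_ijkl = E[(r_i - mu_i)...(r_l - mu_l)] / (sigma_i ... sigma_l)."""
    d = r - r.mean(axis=0)
    s = r.std(axis=0)
    return (d[:, i] * d[:, j] * d[:, k] * d[:, l]).mean() / (s[i] * s[j] * s[k] * s[l])
```

With i = j = k these reduce to the ordinary skewness of asset i (the diagonal of Table 14.A.3), and with i = j = k = l to its kurtosis (the diagonal of Table 14.A.4).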

Bibliography
Agarwal, V. and Naik, N. (2004) ‘Risks and Portfolio Decisions Involving Hedge
Funds.’ Review of Financial Studies, 17(1), 63–68.
Bakshi, G. and Kapadia, N. (2003) ‘Delta-Hedged Gains and the Negative Market
Volatility Risk Premium.’ The Review of Financial Studies, 16 (2), 527–566.
Bollen, N.P.B. and Whaley, R.E. (2004) ‘Does Net Buying Pressure Affect the Shape of
Implied Volatility Functions?’ The Journal of Finance, 59(2), 711–753.
Bondarenko, O. (2006) 'Market Price of Variance Risk and Performance of Hedge
Funds.' AFA 2006 Boston Meetings Paper.
Carr, P. and Wu, L. (2008) 'Variance Risk Premia.' Review of Financial Studies,
22(3), 1311–1341.
CBOE (2004) 'VIX CBOE Volatility Index.' Chicago Board Options Exchange website,
http://www.cboe.com/
Chunhachinda, P., Dandapani, K., Hamid, S. and Prakash, A.J. (1997) ‘Portfolio
Selection with Skewness: Evidence from International Stock Markets.’ Journal of
Banking and Finance, 21(2), 143–167.
Credit Suisse (2008) ‘Credit Suisse Global Carry Selector.’ October.
Daigler, R.T. and Rossi, L. (2006) ‘A Portfolio of Stocks and Volatility.’ The Journal of
Investing, 15(2), Summer, 99–106.
Dash, S. and Moran, M.T. (2005) ‘VIX as a Companion for Hedge Fund Portfolios.’
The Journal of Alternative Investments, 8(3), Winter, 75–80.
Demeterfi, K., Derman, E., Kamal, M. and Zhou, J. (1999) ‘A Guide to Volatility and
Variance Swaps.’ The Journal of Derivatives, 6(4), Summer, 9–32.
Favre, L. and Galeano, J.A. (2002) ‘Mean-modified Value at Risk Optimization with
Hedge Funds.’ Journal of Alternative Investment, 5(2), Fall, 21–25.
Jondeau, E. and Rockinger, M. (2006) ‘Optimal Portfolio Allocation under Higher
Moments.’ Journal of the European Financial Management Association, 12, 29–55.
Jondeau, E. and Rockinger, M. (2007) ‘The Economic Value of Distributional Timing.’
Swiss Finance Institute Research Paper 35.
Kotz, S., Balakrishnan, N. and Johnson, N.L. (2000) Continuous Multivariate
Distributions, Volume 1: Models and Applications, John Wiley, New York.
Lai, T.Y. (1991) ‘Portfolio Selection with Skewness: A Multiple Objective Approach.’
Review of Quantitative Finance and Accounting, 1, 293–305.
Markowitz, H. (1952) ‘Portfolio Selection.’ Journal of Finance 7(1), 77–91.

Volatility as an Asset Class 277

Martellini, L. and Ziemann, V. (2007) 'Extending Black-Litterman Analysis beyond
the Mean-Variance Framework.' Journal of Portfolio Management, 33(4), Summer,
33–44.
Standard & Poor’s (2008) ‘S&P 500 Volatility Arbitrage Index: Index Methodology.’
January.
Stuart, A., Ord, K. and Arnold, S. (1999) Kendall’s Advanced Theory of Statistics,
Volume 1: Distribution Theory, 6th edition, Oxford University Press.

Copyright material from www.palgraveconnect.com - licensed to Taiwan eBook Consortium - PalgraveConnect - 2011-03-03
Appendix: Descriptive statistics

Table 14.A.1 Descriptive statistics. Summary statistics of monthly returns of Bonds, Equities, Long Volatility (LV) and Volatility Risk Premium (VRP) (US, February 1990–August 2008).

       | Geometric Mean (%) | Ann. Geometric Mean (%) | Monthly Median (%) | Max Monthly Loss (%) | Max Monthly Gain (%) | Ann. Std. dev. (%) | Skewness | Kurtosis | Ann. Down. dev.* (%) | Mod. VaR (%) | Sharpe Ratio | Success Rate (%)
Bond   | 0.62 | 7.68  | 0.61 | –5.55  | 5.38  | 5.84  | –0.31 | 3.54  | 2.97 | 3.82  | 0.53 | 68
Equity | 0.79 | 9.89  | 1.28 | –14.46 | 11.44 | 13.71 | –0.46 | 3.86  | 7.81 | 10.16 | 0.39 | 64
LV     | 0.59 | 7.37  | 0.15 | –12.19 | 30.84 | 21.20 | 1.00  | 5.33  | 9.48 | 10.00 | 0.13 | 53
VRP    | 2.16 | 29.29 | 2.53 | –15.60 | 8.95  | 10.21 | –1.80 | 10.38 | 5.74 | 10.00 | 2.42 | 85

* Downside deviation is determined as the sum of squared distances between the returns and the cash return series.


Table 14.A.2 Correlation matrix. Correlation matrix of monthly returns of Bonds, Equities, Long Volatility and Volatility Risk Premium; US, February 1990–August 2008

       | Bonds | Equity | LV    | VRP
Bonds  |       |        |       |
Equity | –0.01 |        |       |
LV     | 0.08  | –0.61  |       |
VRP    | –0.17 | 0.46   | –0.60 |

Table 14.A.3 Coskewness matrix. Coskewness matrix of monthly returns of Bonds, Equities, Long Volatility and Volatility Risk Premium; US, February 1990–August 2008

       | Bonds^2 | Equity^2 | LV^2  | VRP^2 | Bonds*Equity | Bonds*LV | Equity*LV
Bonds  | –0.31   | 0.35     | 0.05  | 0.47  |              |          |
Equity | –0.03   | –0.46    | –0.67 | –0.84 |              |          |
LV     | 0.21    | 0.59     | 1.00  | 0.89  | –0.13        |          |
VRP    | –0.15   | –0.50    | –0.71 | –1.80 | 0.22         | –0.21    | 0.57

Table 14.A.4 Cokurtosis matrix. Cokurtosis matrix of monthly returns of Bonds, Equities, Long Volatility and Volatility Risk Premium; US, February 1990–August 2008

       | Bonds^3 | Equity^3 | LV^3  | VRP^3 | Bonds*Equity^2 | Bonds*LV^2 | Bonds*VRP^2
Bonds  | 3.54    | –0.53    | 0.98  | –2.49 | 1.53           | 1.39       | 1.34
Equity | 0.26    | 3.86     | –3.57 | 4.57  | –0.53          | –0.84      | –1.24
LV     | 0.15    | –2.81    | 5.33  | –4.75 | 0.76           |            |
VRP    | –0.42   | 2.13     | –3.79 | 10.38 | –0.81          | –1.04      |

       | Equity*LV^2 | Equity*VRP^2 | Bond^2*Equity | Bond^2*LV | Equity^2*VRP | LV^2*VRP | Equity*LV*VRP
Bonds  | 0.86        |              |               |           |              |          |
Equity | 3.04        | 2.99         |               |           |              |          |
LV     | –2.85       | –0.97        | –2.30         |           |              |          |
VRP    | 2.75        | 0.69         | –0.91         | 3.66      |              |          |

15
A Frequency Domain Methodology
for Time Series Modelling

Hens Steehouwer

15.1 Introduction

Determining an optimal Strategic Asset Allocation (SAA) in general, and for
Central Banks and Sovereign Wealth Managers in particular, is essentially a
decision-making problem under uncertainty. How well or badly a selected
SAA will perform in terms of the objectives and constraints of the stakehold-
ers will depend on the future evolution of economic and financial variables
such as interest rates, asset returns and inflation rates. Uncertainty about
the future evolution of these variables is traditionally modelled by means of
(econometric) time series models. Campbell and Viceira (2002) provide an
example of this approach. They estimate Vector AutoRegressive (VAR) mod-
els on historical time series and derive optimal investment portfolios from
the statistical behaviour of the asset classes on various horizons as implied
by the estimated VAR model.
It is also known that the results from (SAA) models that take the statistical
behaviour of asset classes as implied by these time series models as input
can be very sensitive to the exact specifications of this statistical behav-
iour in terms of, for example, the expected returns, volatilities, correlations,
dynamics (auto- and cross-correlations) and higher order moments. Section
1.2 of Steehouwer (2005) describes an example of this sensitivity in the con-
text SAA decision-making for a pension fund. Besides the academic rele-
vance, this observation also has an enormous practical impact since many
financial institutions around the world base their actual SAA investment
decisions on the outcomes of such models. Therefore, it is of great import-
ance to continuously put the utmost effort into the development and test-
ing of better time series models to be used for SAA decision-making. This
chapter is intended to make a contribution to such developments.
If we now turn our attention to the methodological foundations of these
time series models, virtually all model builders will agree that empirical
(time series) data of economic and financial variables is (still) the primary
source of information for constructing the models. This can already be seen


from the simple fact that virtually all time series models are being estimated
based on historical time series data. On top of that, of course, forward-look-
ing information can also be incorporated into the models. This is desirable
if some aspects of the behaviour observed in the (historical) time series data
are considered to be inappropriate for describing the possible evolution of
the economic and financial variables in the future.
Furthermore, it is known that the empirical behaviour of economic and
financial variables is typically different at different horizons (centuries,
decades, years, months, etc.) and different observation frequencies (annual,
monthly, weekly, etc.). One way of understanding this is by thinking about
well-known economic phenomena such as long-term trends, business cycles,
seasonal patterns, stochastic volatilities, etc. For example, on a 30-year hori-
zon with an annual observation frequency, long-term trends and business
cycles are important, while on a one-year horizon with a monthly obser-
vation frequency, seasonal patterns need to be taken into account, and on
a one-month horizon with a daily observation frequency, modelling sto-
chastic volatility becomes a key issue. A second way of understanding the
relevance of the horizon and observation frequency is by thinking about
the so-called ‘term structure of risk and return’ as described by Campbell
and Viceira (2002). This simply means that expected returns, volatilities and
correlations of and between asset classes are different at different horizons.
For example, the correlation between equity returns and inflation rates is
negative on short (e.g. one-year) horizons, while the same correlation is
positive on long (e.g. 25-year) horizons.
If we now combine these observations with the described sensitivity of
real world investment decision-making regarding the statistical behaviour
of the time series models, we see that the first important issue in time series
modelling for investment decision-making is how to describe the relevant
empirical behaviour as well as possible for the specific problem at hand.
So if, for example, we are modelling data for the purpose of long-term SAA
decision-making, what do empirical data tell us about the statistical prop-
erties of long-term trends and business cycles, and how can we model them
correctly?
A second important issue follows from the fact that the results taken from
time series models are also used in more steps of an investment process than
just to determine the SAA. Once the core SAA has been set for the long run,
other time series models may be used for a medium-term horizon in order to
further refine the actual asset allocation (also called portfolio construction),
for example by including more specialized asset classes or working with
specific views that induce timing decisions. Once an investment portfolio
is implemented, monitoring and risk management are also required, for
example to see if the portfolio continues to satisfy (ex ante) short-term risk
budgets. Because, as mentioned before, the empirical behaviour of economic
and financial variables is different at different horizons and observation


frequencies, it is typically the case that different (parametric and non-parametric)
time series models are used in these various steps of an investment
process. That is, the best model is used for the specific problem at hand.
This would be fine in itself, provided that the different steps in the invest-
ment process do not need to communicate with each other, but obviously
they do need to do so. The SAA is input for the tactical decision-making and
portfolio construction while the actual investment portfolio is input for the

monitoring and risk management process. If different time series models are
used in these steps, portfolios that were good or even optimal in one step
may no longer be good or optimal in the next step, just by switching from
one time series model to another. It is not hard to imagine the problems that
can occur because of such inconsistencies. A second important issue in time
series modelling for investment decision-making is therefore how to bring
together the empirical behaviour of economic and financial variables that is
observed at different horizons and different observation frequencies in one
complete and consistent modelling approach.
In response to the two important issues described above, this chapter puts
forward a specific frequency domain methodology for time series model-
ling. I will argue that by using this methodology it is possible to construct
time series models that:

1. give a better description of the empirical long-term behaviour of economic
and financial variables with the obvious relevance for long-term
SAA decision making;
2. bring together the empirical behaviour of these variables, as observed
at different horizons and observation frequencies, which is required for
constructing a consistent framework to be used in the different steps of
an investment process.

In addition, by using frequency domain techniques, the methodology


supports:

3. better insight into and understanding of the dynamic behaviour of economic
and financial variables at different horizons and observation frequencies,
both in terms of empirical time series data and of the time
series models that are used to describe this behaviour.

The methodology combines conventional (time domain) time series modelling
techniques with techniques from the frequency domain. It is fair to
say that frequency domain techniques are not used very often in econom-
ics and finance, especially when compared to the extensive use of these
techniques in the natural sciences. This can be explained in terms of the
non-experimental character of the economic and finance sciences and,
therefore, the generally limited amount of data that is available for analysis.


I will show how the corresponding problems of conventional frequency
domain techniques can be solved by using appropriate special versions of
these techniques that work well in the case of limited data.
The methodology builds on the techniques and results described in
Steehouwer (2005) as well as subsequent research. Its applications are
not limited to that of SAA investment decisions as described above, but
cover, in principle, all other types of applications of time series models.

Furthermore, the methodology leaves room for and even intends to stimu-
late the inclusion and combination of many different types of time series
modelling techniques. By this I mean that the methodology can accommo-
date and combine classical time series models, VAR models, London School
of Economics (LSE) methodology, theoretical models, structural time series
models, (G)ARCH models, Copula models, models with seasonal unit roots,
historical simulation techniques, etc. in one consistent framework.
This chapter is not intended to give a full in-depth description of every
aspect of the methodology. Instead, the objective of the chapter is to give a
rather high-level overview of the methodology and provide the appropriate
references for further information. The remainder of the chapter has the
following structure. Section 15.2 proceeds with an introduction to some
basic concepts from the frequency domain, together with what I feel is the
fundamental reason why frequency domain techniques do not have the
widespread use within economics and finance they deserve. Section 15.3
continues with a description of the proposed methodology, followed by the
main points of motivation for proposing this methodology in Section 15.4.
This motivation consists of a combination of technical as well as more meth-
odological issues. This chapter does not (yet) include one comprehensive
example of the application of the described methodology. Instead, separate
examples are given throughout the text to illustrate individual concepts.
Section 15.5 closes the chapter by summarizing the main conclusions.

15.2 Frequency domain

The methodology for time series analysis and modelling proposed in this
chapter is based on concepts and techniques from the frequency domain, also
known as spectral analysis techniques. Frequency domain techniques are not
that well known and are not applied very often in economics and finance.
Furthermore, these techniques require a rather different view of empirical
time series data and stochastic processes. Therefore, this section briefly intro-
duces some key concepts from frequency domain analysis, such as spectral
densities, frequency response functions and the leakage effect. I will argue
that the leakage effect is the key reason why frequency domain techniques
are not used more often in economics and finance. These concepts are used
in the description of the proposed methodology in Section 15.3. Those
interested in a historical overview of the development of frequency domain


and spectral analysis techniques are referred to Section 2.2.11 of Steehouwer
(2005). Further details, proofs and references on the concepts discussed here
can be found in Chapter 4 of the same reference. Classic works on spectral
analysis techniques and time series analysis include Bloomfield (1976) and
Brillinger (1981).

15.2.1 Frequency domain versus time domain

All frequency domain techniques are built on the foundations of the Fourier
transform. With the Fourier transform, any time series {xt, t = 0, ... , T – 1} can
be written as a sum of cosine functions:

    x_t = Σ_{j=0}^{T−1} R_j cos(ω_j t + φ_j)                    (1)

The parameters {R_j, ω_j and φ_j, j = 0, ... , T – 1} represent the amplitudes, fre-
quencies and phases of the T cosine functions. The conventional represen-
tation {x_t, t = 0, ... , T – 1} of the time series is referred to as the representation
in the time domain. The representation {R_j, ω_j and φ_j, j = 0, ... , T – 1} is referred
to as a representation in the frequency domain. An important property of this
frequency domain representation is

    (1/T) Σ_{t=0}^{T−1} x_t² = Σ_{j=0}^{T−1} R_j²               (2)

If we assume the time series xt to have an average value of zero, then this
relation tells us that the frequency domain representation decomposes the
total variance of the time series into the squared amplitudes of the set of
cosine functions. The higher the R_j for a certain ω_j, the more that frequency
contributes to the total variance of the time series.
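To make (1) and (2) concrete: with the discrete Fourier transform X_j of a zero-mean series, the amplitudes are R_j = |X_j|/T and the phases φ_j = arg(X_j). The sketch below (Python/NumPy, our own illustration, not from the chapter) reconstructs a series from its T cosine components and verifies the variance decomposition:

```python
import numpy as np

T = 64
t = np.arange(T)
rng = np.random.default_rng(1)
x = rng.standard_normal(T)
x -= x.mean()                          # zero-mean series, as assumed in the text

X = np.fft.fft(x)                      # X_j = sum_t x_t * exp(-i * 2*pi*j*t / T)
R = np.abs(X) / T                      # amplitudes R_j
phi = np.angle(X)                      # phases phi_j
omega = 2 * np.pi * np.arange(T) / T   # frequencies omega_j

# Equation (1): the series is exactly a sum of T cosine functions
x_rec = sum(R[j] * np.cos(omega[j] * t + phi[j]) for j in range(T))
assert np.allclose(x_rec, x)

# Equation (2): (1/T) * sum of x_t^2 equals the sum of squared amplitudes
assert np.isclose((x ** 2).mean(), (R ** 2).sum())
```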

15.2.2 Spectral densities


A periodogram plots the variance per frequency from equation (2) as a function
of the frequencies, and thereby shows the relative importance of the dif-
ferent frequencies for the total variance of the time series. If one would
calculate the periodogram for different samples from some stochastic time
series process, this would result in different values and shapes of the perio-
dogram. Doing this for a great number of samples of sufficient length and
calculating the average periodogram on all these samples results in what
is called the spectral density (or auto-spectrum) of a univariate stochastic
process. A spectral density describes the expected distribution of the vari-
ance of the process over periodic fluctuations with a continuous range of
frequencies. The word ‘spectrum’ comes from the analogy of decomposing


white light into colors with different wavelengths. The word ‘density’
comes from the analogy of a probability density function. A probability
density function describes the distribution of a probability mass of one
over some domain while a spectral density describes the distribution of a
variance mass over a range of frequencies. It can be shown that the spec-
trum and the traditional auto-covariances contain the same information
about the dynamics of a stochastic process. Neither can give information

of presenting the information. An auto-spectrum specifies the behaviour
of a univariate stochastic process. However, economic and financial vari-
ables need to be studied in a multivariate setting. The dynamic relations
between variables are measured by the cross-covariances of a stochastic
process. A cross-spectral density function (cross-spectrum) between two vari-
ables can be derived in the same way as the auto-spectrum for a single
variable. The only difference is that the cross-covariances need to be used
instead of the auto-covariances. In the form of the coherence and phase spec-
tra, these cross-spectral densities ‘dissect’ the conventional correlations at
the various frequencies into a phase shift and the maximum correlation
possible after such a phase shift. Note that various auto- and cross-spectra
can also be combined in a straightforward manner into multivariate spec-
tral densities.
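The estimation idea behind the auto-spectrum can be sketched numerically: average the periodograms of many samples of a known process and the expected distribution of variance over frequencies emerges. The illustration below is our own (Python/NumPy), using an AR(1) process whose positive autocorrelation concentrates variance at the low frequencies:

```python
import numpy as np

def periodogram(x):
    """Variance contribution R_j^2 at each Fourier frequency omega_j = 2*pi*j/T."""
    T = len(x)
    x = x - x.mean()
    return (np.abs(np.fft.fft(x)) / T) ** 2

rng = np.random.default_rng(2)
a, T, n_samples = 0.8, 256, 500
avg = np.zeros(T)
for _ in range(n_samples):
    e = rng.standard_normal(T + 100)
    x = np.zeros(T + 100)
    for s in range(1, T + 100):               # simulate x_t = a * x_{t-1} + e_t
        x[s] = a * x[s - 1] + e[s]
    avg += periodogram(x[100:]) / n_samples   # drop burn-in, average periodograms

# The averaged periodogram approximates the spectral density: for a > 0
# most of the variance sits at the low frequencies.
low_band = avg[1:T // 8].mean()
high_band = avg[T // 4:T // 2].mean()
assert low_band > high_band
```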

15.2.3 Filters
If a linear filter G(L) is applied on a time series xt we obtain a new time
series:

    y_t = Σ_{l=a}^{b} g_l x_{t−l} = ( Σ_{l=a}^{b} g_l L^l ) x_t = G(L) x_t        (3)

The Fourier transform of the filter is called the Frequency Response Function
(FRF) of the filter because for each frequency ω, it specifies how the ampli-
tude and phase of the frequency domain representation of the original time
series xt are affected by the filter. The effect of the filter can be split into two
parts. First, the squared gain gives the multiplier change of the variance of
the component with frequency ω in a time series. The squared gain is there-
fore often called the Power Transfer Function (PTF). Second, the phase of a
linear filter gives the phase shift of the component of frequency ω in a time
series, expressed as a fraction of the period length. Although it is often not
recognized as such, probably the most often-applied linear filter consists of
calculating the first order differences of a time series. Its squared gain (i.e.
PTF) and phase are shown in Figure 15.1.
Assume t is measured in years. The PTF on the left shows that the variance
at frequencies below approximately 1/6 cycles per year (i.e. with a period


[Figure: squared gain (PTF, left panel, values 0.0 to 4.0) and phase (right panel, values 0.00 to 0.25) of the first order differencing operator, plotted against frequency in cycles per year (0.05 to 0.50).]
Figure 15.1 Squared gain or PTF and phase of the first order differencing operator
Note: The PTF (left panel) shows how the first order differencing operator suppresses the vari-
ance of low frequency fluctuations in a time series while it strongly enhances the variance of
high frequency fluctuations in a time series. The phase (right panel) shows that the first
order differencing operator also shifts these fluctuations back in time by a maximum of 0.25
times the period length of the fluctuations.

length of more than six years) is being reduced by the first order differencing
filter. This explains why the filter is often used to eliminate trending behav-
iour (i.e. very long-term and low frequency fluctuations) from time series
data. First order differencing also strongly emphasizes the high frequency
fluctuations. This can be seen from the value of the PTF for higher frequen-
cies. The variance of the highest frequency fluctuations is multiplied by a
factor of four. Besides changing the variance at the relevant frequencies,
which can be directly thought of in terms of changing the shape of spectral


densities, the first order differencing filter also shifts time series in the time
domain, which corresponds to phase shifts in the frequency domain. The
phase on the right shows that the lowest frequencies are shifted backwards
in time by approximately a quarter of the relevant period length, while the
phase shift for the higher frequencies decreases towards zero for the highest
frequency in a linear fashion.
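These numbers follow directly from the filter's frequency response. For the first order differencing filter G(L) = 1 − L, the FRF is 1 − e^(−iω), so the squared gain is 4 sin²(ω/2), which equals 1 at ω = π/3 (1/6 cycles per year) and 4 at the highest frequency, while the phase shift is 1/4 − ω/(4π) period lengths, falling linearly from 0.25 to 0. A short numerical check (Python/NumPy, our own illustration):

```python
import numpy as np

f = np.linspace(0.01, 0.5, 50)        # frequency in cycles per year
omega = 2 * np.pi * f                 # frequency in radians
frf = 1 - np.exp(-1j * omega)         # FRF of the first order differencing filter

ptf = np.abs(frf) ** 2                # squared gain (Power Transfer Function)
phase = np.angle(frf) / (2 * np.pi)   # phase shift as a fraction of the period

# Closed forms: PTF = 2 - 2*cos(omega) = 4*sin(omega/2)^2,
# phase = 1/4 - omega/(4*pi)
assert np.allclose(ptf, 4 * np.sin(omega / 2) ** 2)
assert np.allclose(phase, 0.25 - omega / (4 * np.pi))

# Variance is damped below ~1/6 cycles per year, multiplied by 4 at the top
assert ptf[f < 1 / 6 - 0.01].max() < 1.0
assert np.isclose(ptf[-1], 4.0)
```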

15.2.4 Leakage effect
The previous sections already demonstrate some clear, intuitively appeal-
ing properties of a frequency approach for time series analysis and model-
ling. Spectral densities very efficiently give information about the dynamic
behaviour of both univariate and multivariate time series and stochastic
time series processes. Gains and phases clearly show what linear filters do to
spectral densities at different frequencies. Nevertheless, frequency domain
techniques do not have the widespread use within economics and finance
one would expect based on these appealing properties, especially given the
extensive use of these techniques in the natural sciences. I feel that the fun-
damental reason for this lies in the fact that in economics and finance the
amount of available historical time series data is generally too limited for
conventional frequency domain techniques to be applied successfully, as
these techniques typically require large amounts of data. If these conven-
tional techniques are applied anyway to time series of limited sample sizes,
this can, for example, result in disturbed and/or less informative spectral
density estimates. In addition, filtering time series, according to some FRF,
can give disturbed filtering results. Fortunately, there are special paramet-
ric versions of frequency domain techniques for the estimation of spectral
densities and filtering of time series that are also especially adapted to work
well on short sample time series data, and can therefore be successfully
applied to economic and financial time series data. These techniques avoid
spurious spectral analysis and filtering results in ways that are described in
Sections 15.3.1 and 15.3.2. The disturbing consequences of applying stand-
ard frequency domain techniques to time series of limited size are caused
by what is called the leakage effect. This effect can best be understood by
thinking of the Fourier transform of a perfect cosine function of some
frequency. Obviously, in the periodogram of this cosine function, 100%
of the variance should be located at the specific frequency of the cosine
function. However, if one only has a limited sample of the cosine func-
tion available for the Fourier transform, this turns out not to be the case.
Instead, a part of the variance at the specific frequency will have ‘leaked’
away to surrounding frequencies in the periodogram. As the sample size
increases, the disturbing effects of leakage decrease and the periodogram
gets better and better at revealing the true identity of the time series by
putting a larger and larger portion of the variance at the specific frequency
of the cosine function. Section 15.3.1 will explain how the leakage effect


can result in disturbed and less informative spectral density estimates in


small samples, while Section 15.3.2 will explain how it can cause disturbed
filtering results. In both cases I will also describe the appropriate solutions
to these problems.
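The leakage effect itself is easy to reproduce (Python/NumPy, our own illustration). A cosine that completes a whole number of cycles in the sample puts all of its variance at a single Fourier frequency; the same cosine observed over a non-integer number of cycles 'leaks' part of its variance to the surrounding frequencies:

```python
import numpy as np

def periodogram(x):
    T = len(x)
    return (np.abs(np.fft.fft(x - x.mean())) / T) ** 2

T = 64
t = np.arange(T)

# Exactly 8 cycles in the sample: all variance at +/- the cosine's frequency
p = periodogram(np.cos(2 * np.pi * (8 / T) * t))
share_exact = (p[8] + p[T - 8]) / p.sum()

# 8.5 cycles in the sample: variance leaks to the neighbouring frequencies
q = periodogram(np.cos(2 * np.pi * (8.5 / T) * t))
share_leak = (q[8] + q[9] + q[T - 8] + q[T - 9]) / q.sum()

assert share_exact > 0.999    # no leakage on a whole number of cycles
assert share_leak < 0.95      # a visible share has leaked elsewhere
```

As the text notes, increasing the sample size concentrates a larger and larger portion of the variance back at the true frequency.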

15.3 Methodology

In this section I first describe the proposed frequency domain methodology
for time series analysis and modelling. In Section 15.4 I will give various
motivations for proposing this methodology. I use this reversed order of
presentation because knowing the methodology makes it easier to fully
understand and appreciate its motivations. This means that in this section
I deliberately say little about why the methodology works in the proposed
way and instead limit myself to explaining how it works. The methodology
consists of the consecutive steps described in the following sub-sections and
builds on the fundamental frequency domain concepts described in Section
15.2. Various examples are given throughout the text to illustrate individual
concepts. These examples are taken from different sources and are therefore
not necessarily consistent.

15.3.1 Time series decomposition


After having collected the appropriate time series (and possibly also cross
section) data, the first step of the methodology is to zoom in on the differ-
ent aspects of the time series behaviour by decomposing the time series. The
different components of the time series can then be analyzed and modelled
separately by zooming in on the behaviour of the time series in the differ-
ent frequency regions. For example, consider Figure 15.2, which shows a
decomposition of the long-term nominal interest rate in the Netherlands1
in trend, low frequency and high frequency components. This decompos-
ition is such that the three components add up to the original time series.
The trend component consists of all fluctuations in the time series with a
period length longer than the sample length (194 years), which is a very
natural definition for a trend. The low frequency component consists of
all fluctuations with a period length shorter than the sample length but
longer than 15 years; 15 years is a wide upper bound on business cycle
behaviour. The third component consists of all fluctuations with a period
length between 15 and two years, the latter being the shortest period
length observable in annual data.
Decomposing a time series in such a way is also called filtering the time
series. It is not hard to imagine that the way this filtering is implemented is
of crucial importance for the subsequent analysis and modelling. Therefore
I shall proceed by giving more information on the available and required
filtering techniques. An overview of filtering techniques and their proper-
ties can be found in Chapter 5 of Steehouwer (2005).
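As a numerical sketch of such a decomposition (illustrative Python, not part of the original text; the direct FFT-mask filter used here is the conceptually simple, leakage-prone variant discussed in Section 15.3.1.2, and the random-walk input merely stands in for an interest rate series):

```python
import numpy as np

def band_component(x, min_period, max_period):
    """Keep only fluctuations with period length in (min_period, max_period],
    measured in sample steps, using a direct FFT mask."""
    n = len(x)
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n)             # 0 .. 0.5 cycles per step
    periods = np.empty_like(freqs)
    periods[0] = np.inf                    # the DC term has infinite period
    periods[1:] = 1.0 / freqs[1:]
    keep = (periods > min_period) & (periods <= max_period)
    return np.fft.irfft(np.where(keep, X, 0.0), n)

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(194))    # random-walk stand-in series
n = len(x)

trend = band_component(x, n, np.inf)       # periods longer than the sample
low = band_component(x, 15, n)             # between 15 steps and the sample length
high = band_component(x, 0, 15)            # between 2 (Nyquist) and 15 steps
```

Because the three pass-bands partition the frequency axis, the components add back exactly to the original series, just as in Figure 15.2.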

A Frequency Domain Methodology 289

[Figure: line chart of the long-term interest rate for the Netherlands, 1830–2010, y-axis −0.04 to 0.12; series: year-end level, trend, low frequencies, high frequencies]

Figure 15.2 Decomposition of a long-term interest rate time series for the
Netherlands
Note: The time series is decomposed into the three dotted lines in such a way that adding these
component time series results in the original time series. The component time series themselves
can be distinguished by the period length of the fluctuations from which they are constituted.
The first component, indicated by ‘Trend’, captures all fluctuation in the interest rate time series
with a period length between infinity (i.e. a constant term) and the sample length of 193 years.
The second component, indicated by ‘Low frequencies’, captures all fluctuations with a period
length between 193 and 15 years. The third component, indicated by ‘High frequencies’, captures
all fluctuations with a period length between 15 and two years.
Source: Global Financial Database (GFD)

15.3.1.1 Filter requirements


What do we require from a filtering technique that is to be applied to decom-
pose time series in the way shown in Figure 15.2? An ideal filter should
allow for, or result in:

1. User defined pass-bands: By this I mean that the user of the filter should
be able to freely specify which period lengths (frequencies) of fluctua-
tions to include and exclude in the filtered time series and should not be
restricted by the properties of the filter.
2. Ideal pass-bands: The filter should exactly implement the required user-de-
fined pass-bands. Often filters do this only in an approximating sense.
3. No phase shifts: The filter should not move time series back or forth in time
as it could influence inferences about the lead–lag relations between varia-
bles. This means that the phase of a filter must be zero for all frequencies.


4. No loss of data: The available amount of data in economics and finance is


in general limited. Therefore, an ideal filter should not lose observations
at the beginning or end of the sample.

15.3.1.2 Zero Phase Frequency Filter


Many conventional filters fail one or more of the four described requirements.
There are two main reasons for this. A first reason is that many filters
were originally defined in the time domain instead of in the frequency
domain in terms of the properties of their PTF and phase. For example,
consider the properties of the simple first order differencing filter described
in Section 15.2.3, which fails on all of the four requirements. To a lesser,
though still significant, extent this also holds for other well-known filters,
such as the exponential smoothing and Hodrick–Prescott filters. The second
reason is the leakage effect from Section 15.2.4, which can cause ideal filter-
ing results to fail when filters are applied on time series of a limited sam-
ple size, even when the filters have been explicitly designed to achieve the
required ideal frequency domain properties. To understand how this hap-
pens, think of a conceptually simple direct frequency filter that starts by
transforming a time series into the frequency domain. Next, based on the
required PTF of the filter, the weights at certain frequencies are set to zero
while others are preserved (i.e. the ones that lie within the pass-band of the
filter). Finally, the adjusted frequency domain representation of the time series
is transferred back into the time domain. For time series of limited sam-
ple sizes this approach does not work well because, as explained in Section
15.2.4, the true frequency domain representation of the limited sample time
series is disturbed in the first step. If, in the second step, the ideal PTF is
applied to the erroneous frequency domain representation, some frequency
components of the time series behaviour will be deleted that should have
been preserved and vice versa.
Section 5.4 of Steehouwer (2005) describes a special Zero Phase Frequency
Filter that does meet all four requirements by focusing on a solution for
the problem of filtering time series of finite sample sizes caused by the
leakage effect. This filtering technique is based primarily on the ideas
from Bloomfield (1976) and the filtering approach of Schmidt (1984). For
example, Hassler et al. (1992), Baxter and King (1999) and Christiano and
Fitzgerald (1999) describe different approaches to deal with the problems
of filtering finite time series in the frequency domain. The filter algorithm
comes down to the iterative estimation of a number of periodic compo-
nents and multiplying each of these components by the value of the PTF
to obtain the filtered time series. The key trick of this filter is that it avoids
the disturbing leakage effects by skipping the transformation into the fre-
quency domain and instead filtering the estimated periodic components
(sine and cosine functions) directly. This is possible because, had we an
infinite sample size, the filtering result of a periodic component of a certain
frequency would be known exactly beforehand. For
example, think of the frequency domain representation of the perfect
cosine time series described in Section 15.2.4. In a sense, by estimating
the periodic components, the time series is ‘extrapolated’ from the sam-
ple size into infinity and the disturbing effects of leakage can thereby be
avoided.
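The trick of estimating a periodic component and filtering it directly can be sketched as follows (illustrative Python, not part of the original text; this is only the core least-squares step for a single frequency, not the full iterative algorithm of Steehouwer (2005)):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
t = np.arange(n)
f0 = 8.5 / n                          # frequency between the Fourier frequencies

# Observed series: an off-bin cosine plus noise.
x = np.cos(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(n)

# Estimate the periodic component at f0 by least squares on a cos/sin pair.
D = np.column_stack([np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)])
coef, *_ = np.linalg.lstsq(D, x, rcond=None)
component = D @ coef

# The estimated component can now be kept or removed according to the PTF
# of the desired filter, without ever forming the leakage-prone periodogram.
residual = x - component
```

Because the cosine and sine regressors implicitly extend beyond the sample, the off-bin frequency is captured cleanly, which a direct DFT mask cannot do.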

As shown in Section 6.3.2 of Steehouwer (2005), the Zero Phase Frequency
Filter produces filtering results very similar to those of the popular Baxter and King
and Christiano and Fitzgerald filters. The additional advantages are that,
compared to these filters, the Zero Phase Frequency Filter is more precise
by virtue of filtering directly in the frequency domain (Requirement 2),
it causes no phase shifts (Requirement 3) and it leads to no loss of data
(Requirement 4).

15.3.1.3 Zero correlation property


An important advantage of decomposing time series in the frequency
domain, based on ideal filters and implemented by appropriate filtering
techniques, is that all filtered components of non-overlapping pass-bands
of some set of time series have zero correlation in the time domain, both
in a univariate and multivariate context. This (theoretical) property holds
for all filters which adequately implement an ideal pass-band, and is the
continuous analogue of the orthogonal property of cosine functions. This
theoretical property can also be verified to apply to practical filter output
when the Zero Phase Frequency Filter is applied on actual time series. Tests
described in Steehouwer (2007) show that although non-zero correlations
can actually occur in practical filter output, it can still be concluded that
from a theoretical perspective, zero correlations between the component
time series can (and must) be safely assumed for time series modelling
purposes. This can easily be understood by thinking about short samples
of several low frequency components. For short samples, the correlations
between such components can be very different from zero. However, if
the time series behaviour of these low frequency components were
modelled and simulated over sufficiently long horizons, the fundamental
correlations would still need to be zero. Although the zero correlation
property greatly simplifies the time series modelling process, note that
zero correlations do not need to imply that component time series are
also independent. In fact, quite complex forms of dependencies between
the component time series exist, and need to be taken into account in the
modelling process. The perhaps counterintuitive fact is that, despite the
zero correlation property, the decomposition approach does not hinder
the analysis and modelling of such complex dependencies, but rather,
actually facilitates it. I will say more about these complex dependencies
in Section 15.3.3.3.
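For components built from disjoint pass-bands, this orthogonality can be verified directly (illustrative Python, not part of the original text; a plain FFT mask stands in for an ideal filter):

```python
import numpy as np

def fft_band(x, lo_bin, hi_bin):
    """Component of x built only from DFT bins lo_bin..hi_bin (one pass-band)."""
    X = np.fft.rfft(x)
    mask = np.zeros(len(X))
    mask[lo_bin:hi_bin + 1] = 1.0
    return np.fft.irfft(X * mask, len(x))

rng = np.random.default_rng(1)
x = rng.standard_normal(200)

low = fft_band(x, 1, 12)        # a low frequency pass-band (mean excluded)
high = fft_band(x, 13, 100)     # a non-overlapping high frequency pass-band

# Disjoint pass-bands give exactly orthogonal components over the sample,
# so the sample correlation is zero up to floating point error.
corr = np.corrcoef(low, high)[0, 1]
```

Zero correlation between the bands does not, of course, rule out the more complex forms of dependence discussed in Section 15.3.3.3.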


15.3.2 Time series analysis


After having decomposed the time series in order to be able to zoom in
on the behaviour of the time series variables in the different frequency
regions, the second step in the methodology consists of actually analyz-
ing the behaviour of the time series. The resulting understanding of the
time series behaviour then forms the basis for an adequate modelling of
it. For the trend and low frequency components, as for example shown in

Figure 15.2, traditional time series analysis techniques (mean values, vola-
tilities, correlations, cross-correlations, etc.) typically suffice for obtaining
the appropriate understanding. For higher frequency components, again
as for example shown in Figure 15.2, spectral analysis techniques are very
powerful for a further unraveling and understanding of the time series
behaviour.

15.3.2.1 Maximum Entropy spectral analysis


Two groups of methods for estimating (multivariate) spectral densities as
defined in Section 15.2.2 exist. The first are the traditional non-parametric
spectral estimators, which estimate the spectral density of a stochastic pro-
cess by means of its sample counterpart, the periodogram. The periodogram
in its pure form can be shown to be an inconsistent estimator in the sense
that it does not converge to the true spectrum as the sample size increases.
This inconsistency can be repaired by applying so-called spectral windows –
that is, replacing the periodogram values at all frequencies by a weighted
average of the periodogram values at adjacent frequencies. However, the
most important problem in practice of using the periodogram as an estima-
tor for a spectral density is that because of the finite sample only a limited
number of auto-covariances are fed into the formula for the periodogram,
while the theoretical spectrum depends on the infinite sequence of auto-covariances of the
process. In Section 15.2.4 we saw that the disturbing leakage effect is a dir-
ect consequence of the finite sample size. Although the leakage effect can be
reduced by applying spectral windows, this will always come at the expense
of a lower resolution of the estimated spectrum. By resolution I mean the
extent to which a spectral estimator is able to differentiate between separate,
and possibly adjacent, peaks in the theoretical spectrum. A lower resolution
is therefore equivalent to a larger bias in the estimate. It is unavoidable that
the averaging of the periodogram over adjacent frequencies causes adjacent
peaks in the spectrum to be merged together, which means a possible loss
of valuable information about the dynamic behaviour of the time series
process under investigation. In economics and finance in particular, where
most of the time only limited samples of data are available, the low reso-
lution of the conventional non-parametric estimators is a serious prob-
lem. In case of small samples there will be much leakage and hence much
smoothing required, which leads to a loss of a great deal of potentially valu-
able information.


A second group of methods for estimating (multivariate) spectral dens-


ities are the less well-known parametric spectral estimators. By estimating
the spectrum through a cut-off sequence of sample auto-covariances, all
higher order auto-covariances are implicitly assumed to be zero. This also
holds if a spectral window is applied, although in that case the sample auto-
covariances that are available are additionally modified by the weighting
function. In fact, as a consequence of cutting off the auto-covariances, the

spectrum of an entirely different, and to some extent even arbitrary, sto-
chastic process is estimated. Parametric spectral estimators try to circum-
vent this problem by first estimating the parameters of some stochastic
process in the available sample. Once such a stochastic process is known,
its auto-covariances can be calculated for any order up to infinity. These
auto-covariances can then be used to calculate the periodogram of the
process as an estimate of the spectral density function. In a way, such a
model extrapolates the auto-covariances observed within the sample into
auto-covariances for orders outside the sample. Note that the parametric
approach for estimating spectral densities has a strong analogy with the
Zero Phase Frequency filter described in Section 15.3.1.2 in terms of avoid-
ing the disturbing leakage effects. When estimating a spectral density, the
sample auto-covariances are extrapolated by means of a parametric model.
When filtering a time series, a periodic component of some frequency pre-
sent in the time series is extrapolated using a cosine function.
A special case of such parametric spectral estimators are the so-called
autoregressive or Maximum Entropy spectral estimators. These consist of first
estimating the parameters of a VAR model on the (decomposed) time ser-
ies data and then calculating the (multivariate) spectral densities from the
estimated model. This autoregressive spectral analysis leads to consistent
estimators of spectral densities. Furthermore, a theoretical justification
exists for choosing autoregressive models instead of many other possible
models (including the non-parametric approach) to estimate the spectral
densities. This justification is based on the information theoretical concept
of Maximum Entropy. Entropy is a measure of ‘not knowing’ the outcome of
a random event. The higher the value of the entropy, the bigger the uncer-
tainty about the outcome of the event. The entropy of a discrete probabil-
ity distribution has its maximum value when all outcomes of the random
event have an equal probability. The fundamental idea of the maximum
entropy approach, as first proposed by Burg (1967), is to select from all pos-
sible spectra that are consistent with the available information, represented
by a (finite) sequence of (sample) auto-covariances, the spectrum which
contains the least additional information as the best spectral estimate. All
additional information on top of the information from the available sam-
ple is not supported by the data and should therefore be minimized. This is
consistent with choosing the spectrum with the maximum entropy from all
spectra that are consistent with the observed sample of auto-covariances.


The solution to the corresponding optimization problem shows that the


best way to estimate a spectrum in terms of this criterion is by estimating
an autoregressive model and using the spectrum of the model as the spec-
tral estimate. Furthermore, Shannon and Weaver (1949) show that, given
a number of auto-covariances, a normally distributed process has the max-
imum entropy. So in total, the Maximum Entropy concept comes down
to estimating a normally distributed autoregressive model on the available

time series data and calculating the spectral densities of the estimated
model. Note that the Maximum Entropy concept itself says nothing about
which order to select or which estimation procedure to use. Section 4.7 of
Steehouwer (2005) describes Monte Carlo experiments which are set up to
find the answers to these two questions.
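A minimal version of the estimator can be sketched as follows (illustrative Python, not part of the original text; the AR coefficients are obtained from the Yule-Walker equations on simulated data with a known ten-period pseudo-cycle, and the model spectrum is then evaluated on a frequency grid):

```python
import numpy as np

def yule_walker(x, order):
    """Fit an AR(order) model by solving the Yule-Walker equations
    with biased sample auto-covariances."""
    x = x - x.mean()
    n = len(x)
    r = np.array([x[: n - k] @ x[k:] / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1 : order + 1])
    sigma2 = r[0] - a @ r[1 : order + 1]        # innovation variance
    return a, sigma2

def ar_spectrum(a, sigma2, freqs):
    """Spectral density of the fitted AR model: the Maximum Entropy estimate."""
    k = np.arange(1, len(a) + 1)
    transfer = 1.0 - np.exp(-2j * np.pi * np.outer(freqs, k)) @ a
    return sigma2 / np.abs(transfer) ** 2

# Simulate an AR(2) with poles of modulus 0.9 and a spectral peak near
# 0.1 cycles per period, i.e. a ten-period pseudo-cycle.
rng = np.random.default_rng(5)
x = np.zeros(400)
e = rng.standard_normal(400)
for t in range(2, 400):
    x[t] = 1.456 * x[t - 1] - 0.81 * x[t - 2] + e[t]

a, s2 = yule_walker(x, order=6)
freqs = np.linspace(0.01, 0.5, 200)
spec = ar_spectrum(a, s2, freqs)
peak = freqs[np.argmax(spec)]                   # close to 0.10 cycles per period
```

The estimated spectrum shows a clear peak near the true cycle frequency, without the smoothing versus leakage trade-off of the windowed periodogram.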
For example, Figure 15.3 shows a Maximum Entropy spectral density esti-
mate of the high frequency component of the long-term interest rate time
series shown in Figure 15.2. Here, an AR(6) model is estimated by means
of the Yule-Walker estimation technique on the full 1814–2007 sample.
In terms of the further unraveling and understanding of the time series
behaviour of this long-term interest rate time series, we can learn from this
spectral density that on average its high frequency behaviour is composed

[Figure: auto-spectrum of the high frequency component, AR order 6; x-axis: frequency in cycles per period, 0.00–0.50; y-axis: spectral density, 0.0–2.8]

Figure 15.3 Maximum Entropy (or autoregressive) spectral density estimate.


Note: This figure shows an estimate of the high frequency component of the long-term interest
rate time series for the Netherlands from Figure 15.2. Around 50% of the variance of these high
frequency, business cycle-type of fluctuations in the interest rate time series is described by fluc-
tuations with a period length of around ten (1/0.10) years while pseudo-periodic behaviour with
a period length of around four (1/0.25) to five (1/0.20) years can also be observed.


of fluctuations with a period length of around ten years, which describe


approximately 50% of the high frequency variance. Furthermore, fluctua-
tions with a period length of around 4.5 years seem to be important. Both
observations are consistent with what is known about the business cycle
behaviour of many economic and financial time series. By extending the
analysis into a multivariate framework, coherence and phase spectra can
also be used to further unravel the correlations between variables and
frequencies, lead–lag relations and phase-corrected correlations.

15.3.3 Model specification and estimation


After having analyzed the decomposed time series in order to obtain an
adequate understanding of the total time series behaviour, the third step in
the methodology consists of actually modelling the component time series
in line with this understanding. Because of the zero correlation property
described in Section 15.3.1.3, the trend, low frequency and high frequency
(or any other) components can in principle be modelled separately. In a

[Figure 15.4 schematic: 'Historical time series' are passed through a frequency domain filter, yielding a trend component and frequency components 1 … n; a trend model and frequency models 1 … n are fitted to these components and recombined into forecasts, confidence intervals and samples.]

Figure 15.4 Frequency domain time series modelling approach.
Note: The approach starts at the top by decomposing multiple time series, just as in the example in Figure 15.2. Next, the corresponding components (e.g. all high frequency components) from all time series are modelled by means of a suitable (multivariate) time series model. In the final step, these models are combined again to obtain a model that adequately describes the behaviour of the time series at all frequencies.


multivariate context, each of these sub-models will be a multivariate model.


Once the separate trend and frequency models have been constructed, these
can be added back together again to model the total time series behaviour in
terms of, for example, forecasts, confidence intervals or scenarios. This mod-
elling process is depicted in Figure 15.4.
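The recombination step of Figure 15.4 can be sketched as follows (illustrative Python, not part of the original text; the coefficients and volatilities are made up for illustration, and the zero correlation property justifies simulating the component models independently):

```python
import numpy as np

rng = np.random.default_rng(3)
horizon, n_scen = 50, 1000

# Trend model: one stochastic long-run level per scenario (Section 15.3.3.1);
# the mean and spread echo the long rate trend statistics of Table 15.2.
trend = rng.normal(0.057, 0.010, size=(n_scen, 1))

# Low frequency model: a persistent AR(1) around the trend (illustrative).
low = np.zeros((n_scen, horizon))
for t in range(1, horizon):
    low[:, t] = 0.95 * low[:, t - 1] + rng.normal(0, 0.003, n_scen)

# High frequency model: an AR(2) with a business-cycle peak (illustrative).
high = np.zeros((n_scen, horizon))
for t in range(2, horizon):
    high[:, t] = (1.46 * high[:, t - 1] - 0.81 * high[:, t - 2]
                  + rng.normal(0, 0.004, n_scen))

# Total model: interest rate scenarios describing behaviour at all frequencies.
scenarios = trend + low + high
```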

15.3.3.1 Trend model

Figure 15.4 intentionally distinguishes the trend model from the models
for the other frequency component time series. The reason for this is that
the trend component of time series will typically require a different model-
ling approach from that used for the other frequency components. For
example, the low and high frequency component time series shown in Figure
15.2 can be modelled well by means of conventional time series modelling
techniques because the time series show considerable variability. The trend
component however, will typically be a straight flat or trending line for
which it is clearly of little use to apply conventional time series modelling
techniques. What to do then? One possibility would be just to extrapolate
these straight lines into the future as our trend model. Although we have
only one observation of the trend value for each time series, we know that it
could very well have had a different value and hence could very well have a
very different value in our model for the future as well. We should therefore
prefer to work with a stochastic rather than a deterministic (linear) trend
model. One single time series does not offer much information for such a
stochastic trend model. After all, even a very long sample time series only
provides us with one observation of the long-term trend value. Therefore,
other sources of information are needed here. One could, for example, use a
Bayesian approach, survey data or theoretical macroeconomic models. The
distribution of the sample mean estimators could also be used. A more dir-
ect, data-oriented approach is to switch from time series data to cross sec-
tion data to obtain information about the ultra long trend behaviour of
economic and financial variables. Consider the statistics in Table 15.1 and
the top panel of Table 15.2. These are based on cross section data consist-
ing of 20th century annual averages for the five indicated variables for 16
OECD countries (excluding Germany, because of the extreme effects of the
World Wars). The volatilities and correlations observed here could form the
basis for constructing an appropriate trend model. From these tables we
can, for example, see that with a standard deviation of 2.0%, the long-term
inflation uncertainty is substantial and that in terms of the trends, a total
return equity index is positively correlated (0.55) to the long-term inflation
rate. Of course, the fundamental underlying assumption for such a cross
section-based approach is that the countries included in the cross section
are sufficiently comparable to form a homogeneous group on which to base
the long-term trend model behaviour for each of these countries individu-
ally. Finally, note that such an approach is very similar to the ones followed


Table 15.1 20th century averages and (geometric) average growth rates

Country           Log GDP  Log CPI  Short    Long     Log TRR
                                    Interest Interest Equity
                                    Rate     Rate     Index
Australia          2.3%     3.9%     4.5%     5.2%    11.9%
Belgium            2.2%     5.7%     5.2%     5.1%     8.1%
Canada             3.8%     3.2%     4.9%     4.9%     9.7%
Denmark            2.8%     4.1%     6.5%     7.1%     9.3%
France             2.4%     7.8%     4.4%     7.0%    12.1%
Ireland            2.0%     4.4%     5.2%     5.4%     9.4%
Italy              2.9%     9.2%     5.0%     6.8%    12.1%
Japan              4.0%     7.8%     5.5%     6.1%    13.0%
Netherlands        3.0%     3.0%     3.7%     4.1%     9.1%
Norway             3.3%     3.9%     5.0%     5.4%     8.0%
South Africa       3.2%     4.8%     5.6%     6.2%    12.2%
Spain              3.0%     6.2%     6.5%     7.5%    10.2%
Sweden             2.7%     3.8%     5.8%     6.1%    12.4%
Switzerland        2.6%     2.5%     3.3%     5.0%     7.3%
United Kingdom     1.9%     4.1%     5.1%     5.4%    10.2%
United States      3.3%     3.0%     4.1%     4.7%    10.3%
Avg                2.8%     4.8%     5.0%     5.7%    10.3%
Stdev              0.6%     2.0%     0.9%     1.0%     1.8%
Min                1.9%     2.5%     3.3%     4.1%     7.3%
Max                4.0%     9.2%     6.5%     7.5%    13.0%

Data from Maddison (2006) and Dimson et al. (2002).
The averages and average growth rates are given for annual GDP (volume), CPI, (nominal) short- and long-term interest rates and (nominal) total equity returns (i.e. price changes plus dividend yields) for 16 OECD countries.

in historical long-term cross-country growth studies, but here I propose to


extend it into a forward looking stochastic modelling framework.
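Sampling such a cross section-based stochastic trend model is straightforward (illustrative Python, not part of the original text; only the inflation and equity trend is shown, using the means, standard deviations and 0.55 correlation quoted from Tables 15.1 and 15.2, and the bivariate normal form is an assumption):

```python
import numpy as np

# Cross-section trend statistics (Tables 15.1/15.2): long-term CPI inflation
# and total-return equity index growth.
mu = np.array([0.048, 0.103])        # cross-country means
sd = np.array([0.020, 0.018])        # cross-country standard deviations
rho = 0.55                           # trend correlation (Table 15.2)

cov = np.diag(sd) @ np.array([[1.0, rho], [rho, 1.0]]) @ np.diag(sd)

rng = np.random.default_rng(4)
trends = rng.multivariate_normal(mu, cov, size=50_000)   # stochastic trend draws
```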

15.3.3.2 Frequency models


As indicated in Figure 15.4, next to the trend model there are a number
of so-called frequency models in the proposed methodology. These model
the low and high frequency behaviour of the time series variables around
their underlying long-term trends. The number of frequency models is
the same as the number of frequency segments used in the decomposition
step described in Section 15.3.1, excluding the trend component. Ideally,
the number of frequency segments and their size could be determined by
a spectral analysis of the data. Due to data limitations, this will often be


difficult. Furthermore, because in the decomposition step no information is


lost or added, we expect the exact choice of the frequency intervals to have a
limited impact on the final model behaviour. Instead, the split between the
frequency models can be determined based on the sample size and obser-
vation frequencies of the data and the economic phenomena that, based
on a thorough empirical analysis, need to be modelled. As an example of
how this can work, Figure 15.5 shows observations of low frequency filtered
time series of five economic and financial variables (the same as in Table
15.1) for the Netherlands, taken every five years during the 1870–2006 sam-
ple period. The second panel of Table 15.2 shows some statistics for the
same low frequency component time series. These time series data capture
the very long-term deviations from the underlying trends and also contain
information about changes in economic regimes. The sample needs to be
as long as the data allows, because we are interested in very low frequency
behaviour and we need a long sample to be able to observe this behaviour
adequately. Observations taken every five years (instead of annually) are
sufficient to capture this low frequency behaviour and facilitate the mod-
elling of the corresponding time series behaviour. If it is required for com-
bining the various frequency models, the observation frequency of such a
model can be increased to, for example, an annual observation frequency
through simple linear interpolation or by other means.
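Such an increase of the observation frequency can be sketched as follows (illustrative Python with toy values, not part of the original text; a five-yearly low frequency path is interpolated to an annual grid):

```python
import numpy as np

# A low frequency component observed on a five-year grid, 1870-2010 (toy values).
years5 = np.arange(1870, 2011, 5)
vals5 = 0.02 * np.sin(2 * np.pi * (years5 - 1870) / 60)   # a toy 60-year cycle

# Raise the observation frequency to annual by linear interpolation.
years1 = np.arange(1870, 2011)
vals1 = np.interp(years1, years5, vals5)
```

The interpolated path reproduces the five-yearly values exactly at the original grid points.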

[Figure: line chart, 1870–2010; series: Log National Product, Log Consumer Price Index, Log TRR Equity Index, Short Interest Rate, Long Interest Rate]

Figure 15.5 Observations of low frequency filtered time series.
Note: The time series are of the five indicated economic and financial variables and are taken for the Netherlands every five years from 1870–2006.


Figure 15.6 shows annual observations of high frequency filtered time
series of the same five economic and financial variables for the Netherlands,
but now for the sample period 1970–2006. The third panel of Table 15.2
shows some statistics for the same high frequency component time series.
These time series data capture the business cycle behaviour of the variables
around the sum of the trend and the low frequency components. Although
empirically speaking business cycle behaviour is surprisingly stable across

samples spanning several centuries, a more recent 1970–2006 sample will
be considered more relevant for the business cycle behaviour in the near
future, and more recent data will also be of a higher quality. Higher obser-
vation frequencies are needed to adequately capture the higher frequency
behaviour of the time series.
If the monthly behaviour of the variables would also need to be mod-
elled, a third frequency model could be included that would run on filtered
monthly time series data for an even more recent sample, say 1990:01–
2005:12, on a monthly observation frequency. These time series data would
capture seasonal patterns and possibly also stochastic volatility patterns. In
principle we can continue in this way by including additional frequency
models for weekly, daily or even tick data if required.
For each of the frequency models, the most appropriate time series mod-
elling and estimation techniques can be used to model the corresponding

[Series shown: Log National Product, Log Consumer Price Index, Short Interest
Rate, Long Interest Rate, Log TRR Equity Index]

Figure 15.6 Annual observations of high frequency filtered time series.


The time series are the five indicated economic and financial variables, taken
for the Netherlands annually from 1970 to 2006.

300 Hens Steehouwer

Table 15.2 Statistics of trend, low frequency and high frequency components of
five economic and financial variables. (The Corr columns give the correlations
with the variables listed in the rows above.)

Trend                     Avg     Corr                           Stdev
Log GDP                    2.8%                                   0.6%
Log CPI                    4.8%    0.07                           2.0%
Short Interest Rate        5.0%    0.08  0.31                     0.9%
Long Interest Rate         5.7%   -0.01  0.67  0.70               1.0%
Log TRR Equity Index      10.3%    0.19  0.55  0.27  0.44         1.8%

Low Frequencies           Avg     Corr                           Stdev
Log GDP                    0.0%                                  12.2%
Log CPI                    0.0%    0.23                          15.7%
Short Interest Rate        0.0%    0.70  0.50                     1.6%
Long Interest Rate         0.0%    0.64  0.31  0.86               1.8%
Log TRR Equity Index       0.0%   -0.18  0.39 -0.25 -0.27        33.2%

High Frequencies          Avg     Corr                           Stdev
Log GDP                    0.0%                                   1.7%
Log CPI                    0.0%   -0.37                           1.3%
Short Interest Rate        0.0%    0.39 -0.22                     1.8%
Long Interest Rate         0.0%    0.10  0.10  0.74               0.9%
Log TRR Equity Index       0.0%   -0.12 -0.20 -0.37 -0.56        24.0%

Source: Original time series are updates of data for the Netherlands from
Steehouwer (2005). Trend data are from Table 15.1. Low frequency data are
observations of low frequency filtered time series taken every five years from
1870–2006. High frequency data are annual observations of high frequency
filtered time series for 1970–2006.

economic phenomena and time series behaviour as well as possible. In
principle, these models need not come from the same class of models. For
example, a structural business cycle model could be used for the high fre-
quency components from Figure 15.6, while a model with seasonal unit
roots could be used for the seasonal components in the monthly time series
and a (G)ARCH model could be used to model the stochastic volatility in the
seasonally corrected part of the monthly time series. Other time series mod-
elling techniques which could be used in the different frequency models
include classical time series models, VAR models, models from the London
School of Economics (LSE) methodology, theoretical models, Copula mod-
els, historical simulation techniques, etc.

15.3.3.3 State dependencies


Section 15.3.1.3 has already described the usefulness of the zero correlations
between the component time series that are obtained from the filtering pro-
cess in the sense that it simplifies subsequent time series modelling. However,
zero correlations do not need to imply that component time series are also
independent. An example of a complex dependency between component time
series is the so-called 'level effect' in high frequency interest rate
volatility.

Figure 15.7 Level effect in the high frequency volatility of interest rates.
Note: The left panel shows the empirical positive relation between the
underlying (low frequency) level of the nominal long-term interest rate in the
Netherlands from Figure 15.2 and its short-term (high frequency) volatility.
The right panel shows three samples from a model that explicitly captures this
level effect by dynamically linking the volatility of the high frequency model
to the level of the simulated underlying low frequency model, according to a
simple linear relation that has been estimated between the value of the low
frequency component and the volatility of the high frequency component from
the left panel.

In the left panel of Figure 15.7 we again see the long-term nominal interest
rate in the Netherlands together with the filtered low and high frequency
components. The low frequency component is shown here as the sum of the
trend and low frequency components from Figure 15.2. If we define this sum
as the underlying level of the interest rate then it is clear that there exists a
positive relation between the volatility of the high frequency component
and the level of the interest rate. A similar effect can be found in short- and
long-term interest rates for other countries and also in inflation rates.
The possibly counterintuitive fact is that, despite the zero correlation
property, the decomposition approach actually facilitates the modelling of
such complex dependencies between the component time series. If we can
define and estimate some functional relation between the volatility of the
high frequency component and the level or ‘state’ of the lower frequency
models that describe the trend and low frequency components, then it is
easy to implement this relation in, for example, a simulation framework
of the model. What we need to do, then, is to start by simulating from the
trend and low frequency models, and then simulate from the high frequency
model while constantly updating the volatility of the high frequency model
based on the observed level of the trend and low frequency simulations.
The right-hand panel of Figure 15.7 shows three example simulations of the
long-term interest rate that include the level effect in the high frequency
volatility. The functional relation used here is taken from Section 13.2.1 of
Steehouwer (2005). The approach described here can equally well be applied
to more complex types of state dependencies, such as state-dependent busi-
ness cycle dynamics (with time as a special case of the state) and state-de-
pendent asset correlations.
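The simulation scheme just described can be sketched as follows. This is a minimal illustration only: the AR(1) dynamics, the 4.5% mean level and the linear coefficients `a` and `b` of the level effect are assumptions for the example, not the relation estimated in Steehouwer (2005).

```python
import numpy as np

def simulate_level_effect(n_periods, a=0.005, b=0.2, seed=0):
    """Simulate an interest rate as a slow low frequency component plus a
    high frequency component whose volatility is a linear function of the
    simulated underlying low frequency level (the 'level effect')."""
    rng = np.random.default_rng(seed)
    mean_level = 0.045
    low = np.full(n_periods, mean_level)
    high = np.zeros(n_periods)
    for t in range(1, n_periods):
        # Step 1: simulate from the (slow, mean-reverting) low frequency model.
        low[t] = mean_level + 0.98 * (low[t - 1] - mean_level) \
                 + rng.normal(0.0, 0.002)
        # Step 2: simulate from the high frequency model while constantly
        # updating its volatility based on the observed low frequency level.
        sigma_t = a + b * max(low[t], 0.0)
        high[t] = 0.6 * high[t - 1] + rng.normal(0.0, sigma_t)
    return low, high

low, high = simulate_level_effect(500)
rate = low + high  # total simulated interest rate
```

High frequency swings are then automatically wider whenever the simulated underlying level is high, as in the right panel of Figure 15.7.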

15.3.3.4 Higher moments and non-normal distributions


Spectral analysis techniques as those described in Section 15.2 focus on the
decomposition of the variance of time series and stochastic processes over
a range of frequencies. Spectral densities are calculated as the Fourier trans-
form of auto- and cross-covariances, which shows that spectral analysis
focuses on the second moments of the behaviour of time series and stochas-
tic processes. Although these second moments are of course very important,
they do not cover all relevant information. Furthermore, Section 15.3.2.1
gave a justification for using normal distributions in the Maximum Entropy
spectral analysis framework, although we know that the behaviour of eco-
nomic and financial variables will often be far from normal. When studying
the low and high frequency properties of decomposed time series data, one
soon finds many forms of non-normal distributions with aberrant third and
fourth moments. For example, observe the skewness and (excess) kurtosis
numbers of the cross section trend data in Table 15.1. Another example is
given by the monthly high frequency components of equity returns, in which we
often find empirical distributions that are more peaked and have fatter tails
than the normal distribution (leptokurtic). One way of modelling these kinds of
non-normal distributions in the proposed modelling framework is an expli-
cit modelling approach. For example, note that the modelling of the level
effect in high frequency interest rate volatility along the lines described
in Section 15.3.3.3 will result in a skewed overall distribution. A second
example is modelling stochastic volatility in the monthly high frequency
component of equity prices, which will result in leptokurtic distributions
for the corresponding frequency model. A second way of modelling non-
normal distributions would of course be to use distributions other than the
normal distribution in the trend and frequency models.
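The second example can be illustrated with a small simulation; the lognormal AR(1) volatility process and all parameter values below are assumptions for the sketch, not taken from the chapter. Conditionally normal returns with slowly varying stochastic volatility are unconditionally leptokurtic:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Log volatility follows a persistent AR(1) around log(4%); returns are
# conditionally normal given the volatility.
log_sigma = np.full(n, np.log(0.04))
for t in range(1, n):
    log_sigma[t] = np.log(0.04) + 0.95 * (log_sigma[t - 1] - np.log(0.04)) \
                   + rng.normal(0.0, 0.1)
returns = rng.normal(0.0, np.exp(log_sigma))

# Excess kurtosis of the mixture is positive: fatter tails than a normal
# distribution with the same variance.
z = returns - returns.mean()
excess_kurtosis = np.mean(z ** 4) / np.mean(z ** 2) ** 2 - 3.0
```

Even though every conditional distribution is normal, the mixture over volatility states produces the excess kurtosis typical of monthly equity return data.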

15.3.3.5 Missing data solutions
Ideally, the methodology described here would run on an abundance of
empirical time series data consisting of samples of hundreds of years on,
say, a daily observation frequency of all economic and financial variables.
Of course, this kind of time series data will not be available in most cases.
In particular, the trend and low frequency models require cross section or
very long time series, which will not be available for all variables. This poses
a problem because every variable must be specified in the trend model and
in each of the frequency models in order to be able to describe the complete
behaviour of the variables. One standpoint could be that if long-term data
is not available for certain variables, one should not try to model the long-
term behaviour of these variables in the first place (also see Section 15.4.3),
but this is not always a feasible standpoint. Another solution to the missing
data problem is to describe the behaviour of variables for which we have
insufficient data in certain frequency ranges as a function of the variables
for which we do have sufficient data. The functional form and parameters
of such relations can be determined in two ways. The first is to perform an
empirical analysis on a shorter sample for which data for all the required
variables are available and estimate an appropriate relation. The second pos-
sibility is to base the relations on economic theory.
Consider the following examples of this theoretic approach:

● describing the low frequency behaviour of a real interest rate as the low
frequency behaviour in a nominal interest rate minus the low frequency
behaviour of price inflation (Fisher relation);
● describing the low frequency behaviour of an exchange rate as the
difference of the low frequency behaviour of the involved price indices
(Purchasing Power Parity);
● describing the low frequency behaviour in private equity prices as the low
frequency behaviour of public equity prices plus some error term.
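The first of these theoretic relations can be applied directly to the filtered components; the numerical values below are illustrative placeholders, not data from the chapter:

```python
import numpy as np

# Low frequency components (five-yearly observations) of a nominal interest
# rate and of price inflation; illustrative values only.
nominal_low = np.array([0.052, 0.048, 0.055, 0.061, 0.047])
inflation_low = np.array([0.021, 0.018, 0.030, 0.035, 0.019])

# Fisher relation applied component-wise: the missing low frequency component
# of the real interest rate is the nominal minus the inflation component.
real_low = nominal_low - inflation_low
```

The Purchasing Power Parity and private equity relations in the list above can be implemented in the same component-wise fashion.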

15.3.4 Model analysis


After having decomposed time series data, analyzed the decomposed time
series and modelled the component time series, the fourth step in the meth-
odology consists of analyzing the constructed models. Such an analysis is
required to check whether the models adequately describe the intended
behaviour, to use the properties of the models as input for the framework
in which they are applied, etc. An example of the latter would be to cal-
culate the relevant moments of the implied stochastic behaviour of the
economic and financial variables as input for an SAA optimization routine
across various horizons. In the described methodology, traditional methods
for performing such a model analysis can still be applied. Depending on
the types of time series models that are used, unconditional and conditional
distribution characteristics such as means, standard deviations, correlations
and percentiles can still be calculated for the trend and various frequency
models separately. The only extension here is that these characteristics have
to be combined to obtain characteristics of the total model.

[Figure 15.8: two panels of cumulative variance shares; legend: Trend, plus
m = 1, plus m = 2, plus m = 3, plus m = 4]

Figure 15.8 Variance decompositions evaluated at every 12th month in a 35-year
horizon.
Note: See the left panel for a long-term interest rate and the right panel for
a log equity total rate of return index. The lines show, in a cumulative
fashion, which portion of the conditional variance at the various horizons is
caused by a trend model and four frequency models. In general, such variance
decompositions show that low (high) frequency behaviour causes a relatively
larger part of the variance at long- (short-) term horizons.
In most cases this is fairly straightforward because of the zero correlation
property described in Section 15.3.1.3. For example, the total covariance
between two variables is simply the sum of the covariances between those
two variables in the trend and each of the frequency models. In addition to
the traditional methods, two additional types of analysis are available in the
described frequency domain methodology: spectral analysis and variance
decomposition techniques.
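Because of the zero correlation property, combining sub-model characteristics reduces to simple addition; a minimal sketch with illustrative covariance matrices for two variables:

```python
import numpy as np

# Covariance matrices of two variables implied by a trend model and two
# frequency models (illustrative values).
cov_trend = np.array([[0.4, 0.1],
                      [0.1, 0.3]])
cov_low = np.array([[1.5, 0.6],
                    [0.6, 2.4]])
cov_high = np.array([[0.3, -0.1],
                     [-0.1, 0.2]])

# The total covariance between two variables is simply the sum of their
# covariances in the trend and each of the frequency models.
cov_total = cov_trend + cov_low + cov_high
corr_total = cov_total[0, 1] / np.sqrt(cov_total[0, 0] * cov_total[1, 1])
```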

15.3.4.1 Spectral analysis


Section 15.3.2.1 explained that spectral analysis techniques are very power-
ful for unraveling and understanding the behaviour of filtered time series
components. However, spectral analysis techniques can also be fruitfully
used to analyze the behaviour of the constructed time series models.
Multivariate spectral densities in terms of the auto-, coherence and phase
spectra can be calculated equally well for each of the frequency models sep-
arately as for the total model. The resulting model spectral densities can be
compared to the spectral densities of the (decomposed) time series data, for
example. Calculating spectral densities for estimated models can be done in
two ways. The first is a direct calculation of the spectral densities based on
the parameters of the model. Section 4.5 of Steehouwer (2005), for example,
gives the spectral density formulas for a simple white noise process as well
as moving average and autoregressive models. A second and more flexible
possibility is to first calculate, possibly numerically, the auto- and cross-co-
variances of a sufficient high order and then to apply the Fourier transform
to transform these auto- and cross-covariances into the corresponding spec-
tral densities. Because of the zero correlation property, spectral densities for
the total model in terms of the trend and various frequency models can eas-
ily be obtained by first summing the auto- and cross-covariances across the
sub-models before transforming these into the frequency domain.

15.3.4.2 Variance decomposition


Just as a spectral density decomposes the variance of one of the frequency
models over the whole frequency range, the variance of the total model can be
decomposed into the variances of the individual trend and frequency models.
Again because of the zero correlation property, constructing
such a variance decomposition is fairly straightforward. The variance of the
sum of the models is simply the sum of the individual variances1. A vari-
ance decomposition can be constructed for both the unconditional distri-
butions and the conditional distributions. The latter give insight into the
contribution of the different sub-models to the total variance at different
horizons. For example, consider Figure 15.8, which shows a conditional
variance decomposition for a long-term interest rate and a log equity total
rate of return index. The model consists of a stochastic trend model, a low
frequency model, a business cycle model, a seasonal monthly model and
a seasonally corrected monthly model. For every 12th month in a 35-year
horizon, the two panels show, in a cumulative fashion, the proportion of
the total conditional variance for that month that can be attributed to the
various sub-models. From the right-hand panel we can see, for example,
that the business cycle model (m = 2, which here is the same as m = 3)
describes around 80% of the variance of the log equity total return index on
a one-year horizon, while on a 35-year horizon this holds for the trend and
low frequency model (m = 1). That is, high frequency models are typically
important on short horizons while low frequency models are important on
long horizons.
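Such a conditional variance decomposition can be sketched for the simplified case where each sub-model is an independent AR(1) process; the persistences and innovation variances below are illustrative, not the estimates behind Figure 15.8:

```python
import numpy as np

def ar1_forecast_var(phi, s2, h):
    """h-step-ahead conditional variance of an AR(1) with persistence phi
    and innovation variance s2."""
    return s2 * (1.0 - phi ** (2 * h)) / (1.0 - phi ** 2)

# (persistence, innovation variance) per sub-model, annual time steps.
submodels = {
    "trend plus low frequency": (0.995, 0.02),
    "business cycle": (0.80, 0.50),
    "residual": (0.10, 0.20),
}

def variance_shares(h):
    """Share of total conditional variance at horizon h per sub-model,
    using the zero correlation property: total variance is the sum."""
    var = {name: ar1_forecast_var(phi, s2, h)
           for name, (phi, s2) in submodels.items()}
    total = sum(var.values())
    return {name: v / total for name, v in var.items()}

shares_1y = variance_shares(1)
shares_35y = variance_shares(35)
```

Consistent with Figure 15.8, the faster sub-models dominate the one-year variance, while the trend plus low frequency sub-model contributes a much larger share at the 35-year horizon.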

15.4 Motivation

Why do I propose this specific frequency domain methodology for time ser-
ies analysis and modelling? The motivation consists of several points, some
of which have already been implicitly given in the description of the meth-
odology in Section 15.3. The sub-sections that follow make these points
regarding motivation more explicit, extending them with further argumen-
tation and also putting forward a number of new motivating points.

15.4.1 Understanding the data and model dynamics


The first reason for proposing the frequency domain methodology for time
series modelling is that it provides very powerful tools for understanding
the dynamic behaviour in historical time series data as well as for analyz-
ing the dynamic properties of models that describe time series behaviour
for the future. If there is one thing about the behaviour of economic and
financial variables we know, it is that they move up and down, and never
move in straight, stable paths. Therefore, frequency domain techniques are
the most natural to use to analyze exactly how they move up and down.
What types of fluctuations dominate the behaviour of a variable, and what
are the correlations and lead–lag relations with other variables at the various
speeds of fluctuations? First, decomposing time series into different compo-
nents allows us to zoom in on the behaviour in various frequency regions,
which provides us with a clearer, more focused insight into the correspond-
ing dynamic behaviour. We can, for example, focus our attention on the
business cycle behaviour of economic and financial variables, which is very
common in business cycle research but much less common in a time series
modelling context. Second, estimating and analyzing spectral densities is
a very efficient way of summarizing the dynamic behaviour of component
time series within a certain (high) frequency region. Although spectral dens-
ities contain exactly the same information as conventional time domain
auto- and cross-correlations, spectral densities represent this information
in a more efficient and intuitive manner. These appealing properties of
frequency domain approaches are certainly not new to economists. It has long
been recognized that when studying macroeconomics, one has to make a
clear distinction as to which aspect of macroeconomics one is interested in.
It seems likely that we will be dealing with very different forces when we
are studying the long-term growth of economies, comprising many decades,
compared to the intra-day trading effects on a stock exchange. If different
forces are at work, different models or approaches may also be needed to
adequately analyze and describe the relevant economic behaviour. The first
formalization of this idea dates back to Tinbergen (1946), who proposed a
decomposition of time series of economic variables:

Time Series = Trend + Cycle + Seasonal + Random (4)

The first economic applications of spectral analysis date as far back as
Beveridge (1922), who used a periodogram to analyze the behaviour of a
wheat price index. However, for a successful application of frequency domain
techniques on economic and financial time series, it is crucial to use appro-
priate, special versions of these techniques which can deal with the leakage
problem from Section 15.2.4, caused by the limited sample sizes generally
available in economics and finance. This is exactly what the zero phase
frequency filter from Section 15.3.1.2 and the Maximum Entropy spectral
analysis from Section 15.3.2.1 provide us with.

15.4.2 Different economic and empirical phenomena


The second reason for proposing the frequency domain methodology for
time series modelling concerns the fact that at different horizons (cen-
turies, decades, years, months, etc.) and different observation frequencies
(annual, monthly, weekly, etc.) the empirical behaviour of economic and
financial variables is typically different, and dominated by different well-
known economic phenomena such as long-term trends, business cycles,
seasonal patterns, stochastic volatilities, etc. First of all, decomposing time
series into different components allows us to analyze these economic phe-
nomena separately, while spectral analysis techniques can be used for a fur-
ther unraveling of the corresponding behaviour. Second, by also following
a decomposed modelling approach, as summarized in Figure 15.4, we are
also able to adequately model the different economic phenomena simultan-
eously by using the most appropriate time series models for each of them.
The potential benefits of the decomposition approach to time series mod-
elling can perhaps be best illustrated by the following extreme example.
Suppose one wants to model realistic behaviour of economic and financial
variables up to a horizon of several decades, but on a daily basis. Such a
model should at the same time give an adequate description of the trending,
low frequency, business cycle, seasonal and daily behaviour, each with their
very specific properties and for each of the variables. Obviously, achiev-
ing this within one single conventional time series modelling approach is
very difficult. Estimating a conventional VAR model using a sample of daily
observations, for example, will probably not produce the intended result.


Figure 15.9 The risk of perspective distortion from using short samples.
Note: The risk is illustrated by a long-term interest rate time series for the Netherlands. The left-
hand panel shows a sample from 1970–2007, and the right-hand panel shows the full 1814–2007
sample. From the left panel, one could be led to believe in a high level, downward trending and
high (short-term) volatility behaviour of the interest rate variable; the right panel shows that
the true behaviour of the interest rate actually consists of long-term fluctuations around a lower
level with short-term volatilities that are positively related to the underlying level of the interest
rate. Thus, by considering only a relatively small part of the available sample, one can be misled
about the behaviour of time series variables and hence can be led to construct erroneous models
to describe this behaviour.


However, with the proposed decomposition approach to time series modelling,
this is exactly what is possible in a simple, flexible and theoretically
well-founded way. Thus, the proposed methodology brings together the
empirical behaviour of economic and financial variables observed at dif-
ferent horizons and different observation frequencies in one complete and
consistent modelling approach. Furthermore, this way of modelling also
stimulates the incorporation of economic reasoning and intuition into what

Copyright material from www.palgraveconnect.com - licensed to Taiwan eBook Consortium - PalgraveConnect - 2011-03-03
can become purely statistical time series modelling exercises.

15.4.3 Appropriate samples and observation frequencies


The third reason for proposing the frequency domain methodology for time
series modelling is that it allows us to use appropriate time series data in terms
of samples and observation frequencies in order to analyze and model the
various economic phenomena mentioned in Section 15.4.2. Furthermore,
the methodology allows us to optimally combine all available sources of
time series information. In some cases it is evident what type of time ser-
ies data is required for modelling a certain economic phenomenon. For
example, if we want to model seasonal patterns it is clear that monthly data
for a sufficient number of years is required. However, in some cases it seems
less evident which data to use. This especially holds in the case of modelling
the very long-term (low frequency) behaviour of economic and financial
variables. For example, consider Figure 15.9, which shows two samples of
the same long-term nominal interest rate in the Netherlands used in Section
15.3.1. Suppose the left-hand 38-year-long annual 1970–2007 sample is used
as the basis for modelling the long-term behaviour of the nominal interest
rate, up to, say, a horizon of 30 years into the future. A sample of almost 40
years certainly seems like a long sample at first. However, from a frequency
domain point of view, we are modelling behaviour at very low frequencies,
which may have period lengths of 40 years or more, based on one single sample
of 40 years. In a sense, by doing this we are estimating a model
for the low frequency behaviour based on one single observation of the low
frequency behaviour. This is clearly inadequate and can lead to something
Reijnders (1990) calls perspective distortion, meaning that if one looks at too
short a sample, one can be misled about the behaviour of economic and
financial variables. Based on the left-hand panel of Figure 15.9, we could, for
example, conclude that the average interest rate is somewhere around 7%,
and that the interest rate has a strong downward trend (which might lead us
to conclude that the interest rate is a non-stationary process that needs to be
modelled in terms of the annual changes in the interest rate instead of the
levels) and always has a large amount of short-term volatility. However, if
we look at the full 194-year 1814–2007 sample, as shown in the right-hand
panel of Figure 15.9, entirely different information about the behaviour of
the long interest rate is revealed. We can see that the average interest rate is
actually more around 4–5%, and does not have a downward trend; rather,
during the 1970–2007 period, the interest rate was returning from excep-
tionally high post-war levels back to the more normal level of 4–5%. (This
shows us that the interest rate is actually a stationary process with a very
slow rate of mean reversion and should be modelled as such in terms of the
levels.) Short-term volatility can also be quite low, especially at low interest
rate levels (hence, the ‘level effect’ discussed in Section 15.3.3.3).
We can conclude that short samples can give insufficient information for

Copyright material from www.palgraveconnect.com - licensed to Taiwan eBook Consortium - PalgraveConnect - 2011-03-03
modelling the long-term behaviour of economic and financial variables.
However, although the long-term behaviour of economic and financial vari-
ables should be based on long samples, conventional modelling approaches
tend to model it based on extrapolation from the short-term behaviour of
these same variables (using short samples). One might wonder whether this
is important. Is there really a significant difference between the long- and
short-term behaviour? The answer to this question is a strong yes. Consider,
for example, the correlation between equity returns and inflation rates
shown in Table 15.2. The long-term correlation, in terms of the trend and
low frequency components, is very positive (0.55 and 0.39 respectively),
while the short-term correlation in terms of the high frequency component
is actually negative (-0.20). For a discussion of the literature and these data
findings about the inflation hedging capacities of equities at different hori-
zons, refer to Section 16.4.3 of Steehouwer (2005). Here I just want to stress
the potential danger and impact of modelling long-term behaviour based
on short-term data. Conventional annual equity returns and annual infla-
tion rates will show the same type of negative short-term, high frequency
correlation. If we estimate a model based on this data there is a risk that the
implied long-term correlation will also be negative, instead of being positive
as the data tells us. It is not hard to imagine that in terms of SAA decision-
making for a pension plan with inflation-driven liabilities, working with a
negative instead of a positive long-term correlation between equity returns
and inflation rates will have an enormous negative impact on the amount
of equities in the optimal SAA.
To avoid perspective distortion and have sufficient information for mod-
elling both the long-term (low frequency) and short-term (high frequency)
behaviour of economic and financial time series, we would ideally have very
long sample time series (covering, say, several centuries) with high obser-
vation frequencies (say daily). Based on this data, we would then apply the
decomposition, time series analysis, modelling and model analysis steps
of the methodology described in Section 15.3. Of course, such ideal time
series data are not available in most cases. However, by accommodating
the use of samples of different sizes and observation frequencies for the
long- and short-term behaviour in terms of the trend and various frequency
models, the methodology does allow optimal use of all the time series data
that is available. In the examples given in Sections 15.3.3.1 and 15.3.3.2,
I have used 20th century cross section data for 16 countries for the trend

10.1057/9780230251298 - Interest Rate Models, Asset Allocation and Quantitative Techniques for Central Banks and Sovereign Wealth
Funds, Edited by Arjan B. Berkelaar, Joachim Coche and Ken Nyholm
A Frequency Domain Methodology 311

model, 1870–2006 data with five-year observation intervals for the low
frequency model, annual 1970–2006 data for the high frequency (business
cycle) model and monthly 1990:01–2005:12 data for the monthly frequencies.
Consistent use of these different data sources in the methodology is
achieved by applying an appropriate decomposition approach. For example,
if we use the annual sample above to describe the business cycle behaviour
as all fluctuations in the time series with a period length between two and
15 years, we should make sure that these types of business cycle fluctuations
are excluded from the monthly sample above by filtering out all fluctuations
longer than 24 months from the monthly time series. Note that some
kind of interpolation method might be needed to align all of the frequency
models to the same (highest) observation frequency.
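The zero-one frequency domain split described above can be sketched in code. The function name, the 24-month cut-off and the toy series below are illustrative assumptions, not the chapter's actual filter implementation:

```python
import numpy as np

def split_frequencies(series, cutoff_period):
    """Split a series into a low and a high frequency component with an
    ideal zero/one frequency domain filter: all Fourier frequencies below
    1/cutoff_period go into the low component, the rest into the high one."""
    n = len(series)
    spectrum = np.fft.rfft(series)
    freqs = np.fft.rfftfreq(n, d=1.0)            # cycles per observation
    low_mask = freqs < 1.0 / cutoff_period       # the PTF is exactly 0 or 1
    low = np.fft.irfft(np.where(low_mask, spectrum, 0.0), n)
    high = np.fft.irfft(np.where(low_mask, 0.0, spectrum), n)
    return low, high

# toy monthly series: a slow swing (period 120 months) plus a fast one (period 6)
t = np.arange(360)
series = 0.05 * np.sin(2 * np.pi * t / 120) + 0.01 * np.sin(2 * np.pi * t / 6)
low, high = split_frequencies(series, cutoff_period=24)
# the components are complementary: they add back to the original series
```

Because the filter's transfer function only takes the values zero and one, the two components sum exactly to the original series, which is the property exploited by the decomposition approach.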
An obvious comment on the use of very long samples of data, in some
cases covering several centuries, might be to ask how relevant and
representative data this far back still is for the current and future behaviour of
economic and financial variables. There are two answers to this question. The
first is that based on thorough empirical analysis of historical time series data,
one might be surprised about the amount of stability there actually is in the
behaviour of economic and financial variables, not only across time but also
across countries. A nice quote to illustrate this in the context of business cycle
behaviour comes from Lucas (1977), who states that ‘though there is abso-
lutely no theoretical reason to anticipate it, one is led by the facts to conclude
that, with respect to the qualitative behaviour of co-movements among ser-
ies, business cycles are all alike’. This remarkable stability of the business cycle
mechanism has also been reported more recently by Blackburn and Ravn
(1992), Backus and Kehoe (1992), Englund et al. (1992) and Steehouwer (2005),
among others. Nevertheless, business cycle behaviour has gradually changed
over time and therefore we would be inclined to use a relatively recent sample
(say, 1970–2006) to model the business cycle behaviour. The second answer
to the question of the relevance of long-term data is to realize that all science
starts with the analysis and understanding of data that give us information
about the phenomenon that we are studying. Therefore, to understand and
model the long-term behaviour of economic and financial variables, by defin-
ition we have to start by studying long-term time series data. Of course we can
start deviating from what the data tell us by incorporating theoretic or for-
ward looking information, but we should start from the data at the very least.
Jorion and Goetzmann (2000) illustrate this approach, stating that ‘Financial
archaeology involves digging through reams of financial data in search for
answers.’ It is known that different economic regimes and historical circum-
stances underlie the behaviour observed in long-term historical time series
data. One could argue that this is not a problem of using long-term data, but
that in fact such changes in economic regimes and historical circumstances
are exactly what drive the uncertainty and behaviour of financial variables
in the long run. So by modelling directly on long-term historical data, we

312 Hens Steehouwer

are taking a kind of stochastic approach to regime switching. Just as we view
annual business cycle observations for a 1970–2006 sample as realizations of
some underlying business cycle process, we can also view five-annual obser-
vations of low frequency behaviour for an 1870–2006 sample as realizations
of some underlying long-term process. To the extent that regime changes
affect the short-term, high frequency behaviour of economic and financial
variables, this is exactly what the high frequency models should adequately

describe. It is, for example, well known that at the business cycle frequencies,
consumer prices have changed from a coincidental behaviour into a lagging
behaviour when compared to the GDP. We can model this lagging behav-
iour adequately by using a sufficiently recent sample for the business cycle
frequencies. In other cases we might also want to model the dependency of
the high frequency behaviour on the low frequency behaviour directly, for
example in terms of the level effect on the short-term volatility of interest
rates described in Section 15.3.3.3. In this way the described methodology
allows for an optimal use of the available time series data, and allows us to
use the appropriate samples and observation frequencies for modelling the
various long- and short-term economic phenomena.

15.4.4 The equal importance of all frequencies


The fourth reason for proposing the frequency domain methodology for
time series modelling is that it considers the behaviour of economic and
financial variables in all frequency ranges, and thus the long- and short-
term behaviour, to be of equal importance. Therefore the methodology
does not put the focus on either the long-term low frequency behaviour
or the short-term high frequency behaviour, but allows us to focus on the
long- and short-term behaviour at the same time. This point can be best
explained by using the long-term interest rate time series introduced in
Section 15.3.1. In Section 15.4.3 I have argued that one reason that conven-
tional modelling approaches tend to model the long-term behaviour of eco-
nomic and financial variables based on extrapolation from the short-term
behaviour is that the samples being used are too short to be able to contain
fundamental information about the long-term behaviour in the first place.
However, even if long samples are used, there is still a second risk from
extrapolating the short-term behaviour into the long-term. This is because
in addition to the sample, the representation of the time series data used for
the modelling also plays an important role. In particular, the effects of the
often-applied first order differencing operator are important here. This filter
is often applied to model, for example, period to period changes of variables
such as interest rates or period to period returns of variables such as equity
prices. From the PTF of the filter in Figure 15.1, we already have seen that
in terms of the frequency domain point of view, the first order differencing
filter suppresses the low frequency behaviour and amplifies the high fre-
quency behaviour in time series. This effect is clearly visible in the left-hand

[Figure 15.10 here: left panel plots the 'Year and level' and 'Annual change' series, 1830–2010; right panel plots the 'Low frequencies' and 'High frequencies' components, 1830–2010.]
Figure 15.10 The benefits of the decomposition approach.


Note: The left-hand panel shows the original level and the annual changes of the long-term
nominal interest rate for the Netherlands from Figure 15.2. The right-hand panel shows the sum
of the trend and low frequency component, together with the high frequency component from
the same figure. Comparing the two panels shows how the low frequency component captures
the behaviour of the long-term level of the interest rate while the high frequency component
captures its short-term annual changes. If one models both the low and high frequency compo-
nent separately, one can therefore adequately model both the level and annual changes of the
interest rate at the same time. The left-hand panel shows that if one, for example, tries to model
the complete behaviour in terms of the annual changes, it becomes rather difficult because the
information about the long-term level of the interest rate has been suppressed and is therefore
hardly visible in the annual changes.


panel of Figure 15.10, which shows the original (year end) levels together
with the annual changes in the interest rate. In the annual changes, the
information about the maximum and minimum levels the interest rate has
reached, and about what type of long-term fluctuations it has experienced,
is missing, while at the same time the short-term fluctuations dominate the
time series of the annual changes.
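The reweighting caused by first order differencing can be checked directly from its power transfer function, |1 − e^(−2πif)|² = 2 − 2cos(2πf) for f in cycles per period. The small sketch below is illustrative, not the chapter's own computation:

```python
import numpy as np

def ptf_first_difference(f):
    """Power transfer function of the filter y_t = x_t - x_{t-1}:
    |1 - exp(-2j*pi*f)|^2 = 2 - 2*cos(2*pi*f), with f in cycles per period."""
    return np.abs(1.0 - np.exp(-2j * np.pi * np.asarray(f))) ** 2

for f in [0.0, 0.01, 0.0667, 1 / 6, 0.5]:
    print(f"f = {f:.4f}  PTF = {float(ptf_first_difference(f)):.4f}")
# long cycles (small f) are almost entirely suppressed (PTF near 0),
# while the fastest fluctuations are amplified by a factor of up to 4
```

The PTF equals zero at frequency zero and rises monotonically to four at the Nyquist frequency f = 0.5, which is exactly the suppression of low frequencies and amplification of high frequencies discussed above.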
Because of the clearly different types of fluctuations that dominate the

variance of the level and the annual changes of the interest rate time series
(just by using another representation of the data), it is not hard to imagine
that models estimated on the level will do well at describing the long-term
behaviour while models estimated on the annual changes will do well at
describing the short-term behaviour. By focusing on long sample time ser-
ies but using representations of the time series in terms of period to period
changes or returns, conventional approaches are still focusing on the short-
term high frequency behaviour and running the risk of not adequately
capturing the true long-term low frequency behaviour, simply because the
latter type of information has been severely suppressed in the time series
data used for estimating the models. Of course, the short-term behaviour
of interest rates is important in some applications, such as modelling the
returns on fixed income portfolios. However, in other applications the
long-term behaviour of interest rates is important, for example when
determining how large the uncertainty about the mark to market value of accrued
pension liabilities is. That is, the behaviour of economic and financial
variables at all frequencies is, in principle, of equal importance and we should
therefore avoid implicitly assigning more importance to the behaviour in
one specific frequency range.
How does the proposed methodology solve this problem? The answer is
by using an appropriate decomposition approach, such as the one described
in Section 15.3.1. The frequency domain filter proposed for these purposes
neither amplifies nor suppresses the (unintended) behaviour in certain
frequency ranges. Instead, it only cuts up the behaviour into different fre-
quency regions. In technical terms, the PTF of the filters described that are
used only have values of zero or one, and together cover exactly the range
of all possible frequencies. We can see the benefits of this approach in the
right-hand panel of Figure 15.10. This is the same filter output as shown in
Figure 15.2, with the exception that we added the trend and low frequency
components. If we add the resulting low and high frequency components
we again get the original interest rate time series. This is not possible for the
annual changes because in that case there is no second component time ser-
ies which contains, for example, the suppressed part of the low frequency
behaviour. Comparing the right- and left-hand panel, it is easy to see that the
low frequency component captures the long-term behaviour of the original
level of the interest rate while the high frequency component captures the
short-term behaviour in terms of the annual changes in the interest rate. By


[Figure 15.11 here: two panels of out-of-sample forecast fan charts for log GDP, 1960–2050.]

Figure 15.11 Out-of-sample forecasts and confidence intervals of log GDP in the
Netherlands
Note: The left-hand panel shows the results of a conventional modelling approach based on
annual growth rates. The right-hand panel shows the results of a decomposition approach based
on separate low and high frequency components in the log GDP time series. Fluctuations with
long period lengths are suppressed by the PTF of the first order differencing operator and are
therefore hardly present in the growth rate time series. As a result, the forecasts of the model
estimated on these growth rates as shown in the left panel are very uninformative in terms of the
low frequency behaviour of the GDP series. The right panel clearly shows low frequency infor-
mation in the forecasts from the decomposition approach.


constructing separate models for the low and high frequency components
we are able to model both the long-term low frequency and short-term high
frequency behaviour of economic and financial variables adequately at the
same time, instead of focusing on one of the two, since we know that both
can be of equal importance.

15.4.4.1 Forecasting consequences

What are the possible consequences of using conventional modelling
approaches that (implicitly) give an unequal importance to the different
frequency ranges? A first possible consequence concerns the forecasting per-
formance, and especially the long-term forecasting performance. To bet-
ter understand this, assume that there is some valuable information in the
long-term low frequency behaviour of economic and financial time series,
and let us compare the forecasts of two modelling approaches. The first is a
conventional modelling approach that ‘ignores’ the low frequency informa-
tion by using relatively short samples and/or using first order differencing
representations of the data that suppress the low frequency information.
The second approach uses the proposed decomposition approach that expli-
citly takes the (unchanged) low frequency information into account. For the
first approach, I estimated an autoregressive model on the annual (log) GDP
growth rates for the Netherlands and produced forecasts and confidence
intervals for a horizon of 50 years into the future. The results are shown in
the left-hand panel of Figure 15.11. The right-hand panel shows the same
results but now for the case in which the (log) GDP time series was first
decomposed into low and high frequency components on which I then esti-
mated separate autoregressive models. The most striking difference between
the two forecasts (the median central solid line), is that for the conventional
approach the forecast soon becomes a rather uninformative flat trending
line, while for the decomposition approach it still fluctuates and is therefore
informative during the complete 50-year horizon. Although this is a topic
for future research, preliminary results from a formal backtesting procedure
have already indicated that the decomposition approach can indeed lead
to smaller forecast errors when compared to conventional approaches in
which no decomposition is applied. This indicates that there is some valu-
able information in the low frequency behaviour of time series that can be
(better) exploited by the decomposition approach.
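The contrast between the two forecast profiles can be illustrated with a toy calculation. The coefficients and starting values below are illustrative assumptions, not estimates from the Dutch GDP data:

```python
import numpy as np

def ar_forecast(history, phi, steps):
    """Iterate the AR recursion x_t = phi[0]*x_{t-1} + phi[1]*x_{t-2} + ...
    with the innovations set to zero, i.e. produce mean forecasts."""
    buf = list(history[-len(phi):])
    out = []
    for _ in range(steps):
        nxt = sum(c * buf[-k - 1] for k, c in enumerate(phi))
        buf.append(nxt)
        out.append(nxt)
    return np.array(out)

# pseudo-cyclical AR(2) estimated on levels (complex roots, period of decades)
phi_levels = [1.89, -0.90]
# low order AR on first differences (the 'conventional' growth rate model)
phi_changes = [0.3]

last_levels = [0.8, 1.0]                      # two most recent observations
f_levels = ar_forecast(last_levels, phi_levels, 50)
f_changes = ar_forecast([last_levels[1] - last_levels[0]], phi_changes, 50)
f_levels_from_changes = last_levels[1] + np.cumsum(f_changes)

# the level model keeps cycling over the 50-year horizon, while the
# difference model's increments die out geometrically, so its implied
# level forecast flattens out after a few steps
```

This reproduces the qualitative pattern in Figure 15.11: a model with low frequency (complex root) dynamics remains informative over the whole horizon, whereas a model on differences quickly converges to a flat line.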

15.4.4.2 Monte Carlo experiment


I have already explained the intuition regarding why the forecasting per-
formance of the decomposition approach is superior, based on the results
shown in Figure 15.10; the results of the tests in Chapter 19 of Steehouwer
(2005) also point in this direction. Here I describe and give the results
of a Monte Carlo experiment which adds formal evidence to this claim,
again showing that the decomposition approach works better in terms of


modelling both the long-term low frequency and short-term high frequency
behaviour well at the same time.
I assume the Data Generating Process (DGP) for a variable zt defined by
(5), where t is measured in years and whose dynamic properties are
inspired by the empirical behaviour of the long-term interest rate time
series shown in Figure 15.2. This process consists of two independent
components, a low frequency component xt and a high frequency component yt,
which are both driven by a specific autoregressive process. Table 15.3 shows
the means and variances of these two components, from which we see that
the low frequency component describes 80% of the total variance and the
high frequency component 20%.

zt = xt + yt (5)

with

xt = 1.89 xt−1 − 0.90 xt−2 + εt,    εt ~ N(0, 0.00272)

yt = 0.65 yt−1 − 0.34 yt−2 − 0.26 yt−3 − 0.20 yt−4 + 0.26 yt−5 − 0.46 yt−6 + ηt,    ηt ~ N(0, 0.08102)

Table 15.4 shows the modulus, frequency and corresponding period length
of the complex roots of the two autoregressive polynomials. From this we

Table 15.3 Mean and variance of low, high frequency and total model from (5)

              xt      yt      zt = xt + yt    ∆zt
Mean         0.00    0.00        0.00        0.00
Variance     0.80    0.20        1.00        0.19

Note: The total model has a mean of zero and a variance of one, of which 80% comes from the low frequency model and 20% from the high frequency model.

Table 15.4 Complex roots of low and high frequency models from (5)

                 Root 1 xt    Root 1 yt    Root 2 yt    Root 3 yt
Modulus            0.95         0.92         0.87         0.85
Frequency          0.02         0.10         0.20         0.40
Period length      50           10           5            2.5

Note: The low frequency model (xt) describes pseudo periodic behaviour with a period length of around 50 years. The high frequency model (yt) describes fluctuations with period lengths of around ten, five and 2.5 years.


see that the low frequency model describes pseudo periodic behaviour with
a period length of around 50 years, while the high frequency model is com-
posed of three types of pseudo periodic behaviour with a period length of
around, respectively, ten, five and 2.5 years. These types of pseudo periodic
behaviour are clearly visible in the (non-normalized) spectral densities in
the top two panels of Figure 15.12, which notably integrate to the total
variances of 0.80 and 0.20. The bottom left panel shows the implied total

spectral density of the DGP (5) in terms of the level of zt. The bottom right-
hand panel of Figure 15.12 shows the implied spectral density of the annual
changes of zt, that is, of ∆zt. Note how the PTF of the first order differencing
operator (∆) shown in Figure 15.1 has rescaled the spectral density by redu-
cing the importance of the low frequencies and increased the importance
of the high frequencies in the process. The other two spectral densities are
accompanied by the spectral densities of good approximating autoregressive
models. This shows that the DGP of both zt and ∆zt can be well described
by (different) autoregressive models. Figure 15.13 shows an example simula-
tion of 200 years from the xt, yt, zt = xt + yt and ∆zt processes, from which we
can clearly see the behaviour described by the roots and spectral densities
of the models.
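A minimal simulation of the DGP in (5) might look as follows. The simulate_ar helper, the seeds and the burn-in length are my own illustrative choices, and the second argument of N(·,·) in (5) is read here as the innovation variance:

```python
import numpy as np

def simulate_ar(phi, var, n, burn=500, seed=0):
    """Simulate n draws from an AR(p) process with coefficients phi on
    lags 1..p and innovation variance var, discarding a burn-in period
    so that the sample represents the unconditional distribution."""
    rng = np.random.default_rng(seed)
    p = len(phi)
    x = np.zeros(p + burn + n)
    eps = rng.normal(0.0, np.sqrt(var), size=len(x))
    for t in range(p, len(x)):
        x[t] = sum(c * x[t - 1 - k] for k, c in enumerate(phi)) + eps[t]
    return x[-n:]

# the two independent components of the DGP in (5), and the level process
x_low = simulate_ar([1.89, -0.90], 0.00272, 200, seed=1)
y_high = simulate_ar([0.65, -0.34, -0.26, -0.20, 0.26, -0.46], 0.08102, 200, seed=2)
z = x_low + y_high          # level z_t, as in Figure 15.13
dz = np.diff(z)             # annual changes of z_t
```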
Based on the DGP (5), the Monte Carlo experiment is now set up as follows.
I generate 1000 simulations of sample sizes of 50, 100, 200, 500 and 1000
years. In all cases we simulate a pre-sample of 500 years to guarantee that all
simulations adequately represent the unconditional distributions. For each of
the individual simulations I use the Yule-Walker estimation technique to try to
back out the original DGP of zt from the simulation. Tests described in Section
4.7 of Steehouwer (2005) show that this estimation technique yields the best
performance in terms of estimating spectral densities, especially in small
samples. In large samples many estimation techniques show a similar performance
because of identical asymptotic properties. I compare three approaches:

1. Level approach: estimate an autoregressive model on the simulated levels zt.
2. Delta approach: estimate an autoregressive model on the simulated annual changes ∆zt.
3. Decomposition approach: estimate separate autoregressive models on the underlying simulated low and high frequency components xt and yt.
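The Yule-Walker step underlying all three approaches can be sketched as follows; this is an illustrative implementation with a small recovery check, not the chapter's own code:

```python
import numpy as np

def yule_walker(x, order):
    """Fit an AR(order) model by solving the Yule-Walker equations
    built from the (biased, positive semidefinite) sample autocovariances."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    acov = np.array([x[: n - k] @ x[k:] / n for k in range(order + 1)])
    R = np.array([[acov[abs(i - j)] for j in range(order)] for i in range(order)])
    phi = np.linalg.solve(R, acov[1:])
    sigma2 = acov[0] - phi @ acov[1:]   # innovation variance estimate
    return phi, sigma2

# recovery check: simulate a long AR(2) sample and estimate it back
rng = np.random.default_rng(0)
sim = np.zeros(20000)
for t in range(2, len(sim)):
    sim[t] = 1.89 * sim[t - 1] - 0.90 * sim[t - 2] + rng.normal(0.0, 0.05)
phi_hat, s2_hat = yule_walker(sim[500:], 2)   # drop the burn-in
# phi_hat should be close to (1.89, -0.90)
```

Using the biased autocovariance estimator (divisor n) guarantees a positive semidefinite Toeplitz system, which is one reason Yule-Walker behaves well in small samples.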

For each simulation I use each of these approaches to estimate the spectral
densities of both zt and ∆zt. The spectral densities can be calculated directly
from the estimated autoregressive parameters, or by applying the
Fourier transform to the auto-covariances of the estimated model up to a
sufficiently high order. If the estimation has been done in terms of zt, the
spectral density of ∆zt is calculated by applying the PTF of the first order
differencing filter on the estimated spectral density of zt. If, conversely, the


Table 15.5 Six combinations of DGP representation and frequency ranges for which error (6) is calculated. Frequencies are in cycles per year.

Frequency range      Level zt              Annual change ∆zt
Total                [0.0000, 0.5000]      [0.0000, 0.5000]
Low                  [0.0000, 0.0667]      [0.0000, 0.0667]
High                 [0.0667, 0.5000]      [0.0667, 0.5000]

[Figure 15.12 here: four panels of non-normalized auto-spectra ('Auto-spectrum low frequencies order 2', 'Auto-spectrum high frequencies order 6', 'Auto-spectrum level/order 21', 'Auto-spectrum delta/order 25'), plotted against frequency in cycles per period.]

Figure 15.12 Non-normalized spectral densities of xt (top left panel), yt (top right
panel), level of zt (bottom left panel) and annual changes of zt (bottom right panel)
for models from (5).
Note: The latter two are accompanied by the spectral densities of close approximating autore-
gressive models. The complex roots from Table 15.4 are clearly visible in the spectral densities.

estimation has been done in terms of ∆zt, the spectral density of zt is
calculated by applying the PTF of the inverse first order differencing filter to
the estimated spectral density of ∆zt. I then calculate the errors between the
estimated spectral densities and the true spectral densities given in the
bottom two panels of Figure 15.12 as

    Serror = ∫_a^b |Ŝ(ω) − S(ω)| dω  /  ∫_a^b S(ω) dω        (6)


[Figure 15.13 here: four panels with 200-year example simulations of the low frequency component, the high frequency component, the level and the annual changes.]

Figure 15.13 Example simulation of 200 years of xt (top left panel), yt (top right
panel), level of zt (bottom left panel) and annual changes of zt (bottom right panel)
for models from (5)

Here S(ω) and Ŝ(ω) are, respectively, the non-normalized auto-spectrum of the
known underlying DGP and its estimated counterpart. The integration interval
[a,b] defines the frequency range over which the error is calculated. The
error (6) measures the 'distance' between the estimated and the DGP spectra
on the indicated frequency range. The smaller the distance, the better the
estimated process corresponds to the original DGP, and hence the better the
performance of the relevant approach.
six cases that are combinations of the DGP representation in terms of the level
zt or the annual changes ∆zt on the one hand, and different frequency regions
on the other hand. These combinations are indicated in Table 15.5. The split
between the low and high frequency ranges is made at a frequency of 0.0667,
which is equivalent to fluctuations with a period length of 15 years.
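The error measure (6) can be evaluated numerically from estimated and true AR parameters along these lines. The function names, the trapezoidal integration and the normalization convention of the AR spectrum (frequencies f in cycles per year rather than angular ω) are illustrative assumptions:

```python
import numpy as np

def ar_spectrum(phi, sigma2, freqs):
    """Non-normalized spectral density of an AR process, up to a
    normalization convention: S(f) = sigma2 / |1 - sum_k phi[k] e^{-2j pi f (k+1)}|^2."""
    k = np.arange(1, len(phi) + 1)
    resp = 1.0 - np.exp(-2j * np.pi * np.outer(freqs, k)) @ np.asarray(phi)
    return sigma2 / np.abs(resp) ** 2

def band_integral(values, freqs):
    # simple trapezoidal rule on the frequency grid
    return float(np.sum((values[1:] + values[:-1]) / 2 * np.diff(freqs)))

def spectral_error(phi_hat, s2_hat, phi_true, s2_true, a, b, n=4000):
    """Error measure (6): integrated absolute deviation between the
    estimated and true spectra on [a, b], relative to the true
    spectral mass on that band."""
    f = np.linspace(a, b, n)
    s_hat = ar_spectrum(phi_hat, s2_hat, f)
    s_true = ar_spectrum(phi_true, s2_true, f)
    return band_integral(np.abs(s_hat - s_true), f) / band_integral(s_true, f)

# a perfect estimate gives a zero error; a misspecified one does not
err_exact = spectral_error([1.89, -0.90], 0.00272, [1.89, -0.90], 0.00272, 0.0, 0.5)
err_wrong = spectral_error([1.80, -0.85], 0.00272, [1.89, -0.90], 0.00272, 0.0, 0.5)
```

Restricting [a, b] to [0, 0.0667] or [0.0667, 0.5] gives the low and high frequency versions of the error, as in Table 15.5.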
To be sure to get the best out of each approach, for each estimation I
calculate the errors for autoregressive models of all orders between one and
25 (this range includes the orders needed to closely approximate the model
spectra as shown in the bottom two panels of Figure 15.12), and select the
order which gives the smallest error in terms of the representation of the
DGP used in the different approaches (levels, annual changes or separate
low and high frequency components). Finally, I calculate and compare the
mean errors from the calculated errors for each of the 1000 simulations.


Table 15.6 Mean errors (6) for each of the six combinations in Table 15.5, based on 1000 simulations

Mean Error Level (zt), Total                 Mean Error Delta (∆zt), Total
Sample   Level   Delta     Decomp            Sample   Level   Delta   Decomp
50       0.63    6889%     -3%               50       0.65    -26%    -20%
100      0.49    7894%     -6%               100      0.50    -20%    -24%
200      0.37    9655%     -8%               200      0.36    -12%    -26%
500      0.25    13995%    -13%              500      0.25    -5%     -31%
1000     0.18    15372%    -16%              1000     0.19    0%      -34%

Mean Error Level (zt), Low                   Mean Error Delta (∆zt), Low
Sample   Level   Delta     Decomp            Sample   Level   Delta   Decomp
50       0.51    8415%     -1%               50       0.06    12%     -12%
100      0.40    9742%     -3%               100      0.04    32%     -14%
200      0.30    11992%    -4%               200      0.03    68%     -14%
500      0.20    17901%    -8%               500      0.02    132%    -20%
1000     0.14    20003%    -11%              1000     0.01    184%    -22%

Mean Error Level (zt), High                  Mean Error Delta (∆zt), High
Sample   Level   Delta     Decomp            Sample   Level   Delta   Decomp
50       0.11    -13%      -14%              50       0.60    -30%    -21%
100      0.09    -10%      -21%              100      0.46    -25%    -25%
200      0.07    -8%       -25%              200      0.33    -19%    -27%
500      0.05    -9%       -32%              500      0.24    -16%    -32%
1000     0.04    -9%       -34%              1000     0.18    -12%    -35%

Note: The mean errors of the delta and decomposition approaches (the third and fourth columns in each block) are reported as a percentage of the mean error of the level approach (the second column in each block).

The results of the experiment are reported in Table 15.6. A mean error of,
for example, 0.50 for the level approach (the second column in each block)
means that the level approach on average results in a wrong allocation of
50% of the variance over the frequencies in the estimated spectral densities.
The mean errors for the delta and decomposition approaches (the third and
fourth columns in each block) are reported as a percentage of the mean
error of the level approach. A value of, say, -25% for the delta approach
means that the delta approach results in a mean error which is 25% lower
than the mean error for the level approach. In the example, the mean error
of the delta approach is 75% × 0.50 = 0.375.
From the results in Table 15.6 we can observe the following:

1. Larger samples lead to smaller errors in terms of reproducing the original DGP spectra.
2. In terms of the total frequency range (the top two panels in Table 15.6),
the level approach is best for reproducing the spectral density of the
levels, while the delta approach is best for reproducing the spectral density
of the annual changes.


3. The delta approach is better for reproducing the high frequency part of
the spectral density of the levels (the bottom left panel in Table 15.6),
while the level approach is better at reproducing the low frequency part
of the spectral density of the annual changes (the middle right panel in
Table 15.6).
4. In virtually all cases, the decomposition approach is better for reproducing
the spectral densities of the levels and annual changes, both in terms
of the total and in terms of the separate low and high frequency ranges.

Observation (1) is of course no surprise. Observation (2) confirms what I
explained about the left-hand panel of Figure 15.10. That is, models
estimated on the levels will do well at describing the long-term behaviour of
the levels, while models estimated on the annual changes will do well at
describing the short-term behaviour of the annual changes because of the
clearly different types of fluctuations that dominate the variance of the
processes of the levels and the annual changes. Note that the errors for the
delta approach in terms of the low (and total) frequency ranges of the spec-
tral density of the levels are so large because of the PTF of the inverse first
order differencing filter, which approaches infinity at very low frequencies.
Another way of understanding this is that in those cases we are model-
ling a stationary process in terms of a non-stationary (integrated) process.
Observation (3) is somewhat more of a surprise, but further emphasizes the
same point as observation (2). As we saw in Section 15.2.3, the PTF of the first
order differencing filter suppresses the low frequency behaviour and ampli-
fies the high frequency behaviour. Therefore, the delta approach performs
even better than the level approach at the high frequencies of the level DGP.
On the other hand, the delta approach performs even worse than the level
approach at the low frequencies of the annual changes DGP. Observation (4)
obviously closes the experiment by confirming what I explained about the
right-hand panel of Figure 15.10. That is, by constructing separate models for
the low and high frequency components, we are capable of modelling both
the long-term low frequency (levels) behaviour and the short-term high frequency (annual changes) behaviour adequately at the same time, instead of
doing well in terms of the low frequency behaviour and poorly in terms of
the high frequency behaviour or the other way around. Thereby, the Monte
Carlo experiment gives formal support for the claim that a decomposition
approach can lead to superior modelling results in terms of describing the
behaviour of economic and financial variables in all frequency ranges.

15.4.5 Complex dependencies between frequency ranges


The fifth and final reason for proposing the frequency domain methodology for time series modelling is that it facilitates the modelling of complex
dependencies between the behaviour of economic and financial variables
in different frequency ranges. At first sight, the zero correlation property

A Frequency Domain Methodology 323

described in Section 15.3.1.3 may seem like a restrictive simplifying feature of the decomposition approach. As explained in Section 15.3.3.3, by
modelling different frequency ranges separately we actually get more, rather
than less, possibilities for modelling complex behaviour by explicitly modelling relations between the properties of the different frequency models.
Examples mentioned in Section 15.3.3.3 were the ‘level effect’ in the short-term volatility of interest and inflation rates, state dependent business cycle dynamics and state dependent asset correlations.

15.5 Conclusions

In this chapter I have described a frequency domain methodology for time series modelling. With this methodology it is possible to construct time
series models that, in the first place, give a better description of the empirical long-term behaviour of economic and financial variables, which is very
important for SAA decision-making. In the second place, the methodology
brings together the empirical behaviour of these variables as observed at
different horizons and observation frequencies, which is required for
constructing a consistent framework to be used in the different steps of
an investment process. In the third place, the methodology gives insight
into and understanding of the corresponding dynamic behaviour, both in
terms of empirical time series data and of the time series models used to
describe this behaviour. In various parts of the chapter I have introduced
the most important frequency domain techniques and concepts, described
and illustrated the methodology and, finally, given the motivation for
doing so. I hope that based on the contents of this chapter, more people
will be inclined to explore the possibilities of using the appropriate frequency domain techniques for analyzing and modelling time series data
and time series processes of economic and financial variables. I am convinced that this can contribute to a higher quality of investment decision
making, implementation and monitoring in general, and for Central Banks
and Sovereign Wealth Managers in particular.

Notes

ORTEC Centre for Financial Research and affiliated with the Econometric Institute of the Erasmus University Rotterdam. Please e-mail comments and questions to hens.steehouwer@ortecfinance.com
1. End of year values of Netherlands 10-year Government Bond Yield, 1814–2007. 1918 and 1945 values based on beginning of next year. Source: Global Financial Database (GFD), code IGNLD10D.
2. Note that some kind of interpolation method might be needed to align all frequency models on the same (highest) observation frequency.

324 Hens Steehouwer


16
Estimating Mixed Frequency Data:
Stochastic Interpolation with
Preserved Covariance Structure

Tørres G. Trovik and Couro Kane-Janus

16.1 Introduction

Data are needed when modelling the interaction between relevant variables in the financial markets. While market related data for many assets are available on an intraday frequency, some variables such as accounting information, macro-related variables or privately traded and less liquid assets are only observable on a lower frequency, typically quarterly.
In recent years estimation risk has been in focus in the finance literature,
and the importance of estimation risk even for qualitative inference has
been highlighted; see e.g. Barberis (2002). Obviously, when modelling relations involving stale or infrequently released data, one is faced with a tough
choice of either reducing the sample size to the lowest common frequency,
thus substantially increasing the estimation risk, or engaging some form of
interpolation technique that might impact the reliability of the estimated
relations.
In this chapter we propose a simple method for stochastic interpolation
of infrequent data, to be used together with information from higher frequency data. Our approach preserves the observed annual variance as well
as the observed covariance structure of the lowest common frequency to
other variables in the data set. Importantly, we do not add any structure to
the data that are not observed on the highest available frequency. Nor do we
use observed autocovariance or cross-autocovariance in the sample, as this
may be highly sample-dependent. Our approach is based on a simple application of the Brownian Bridge.
Adding ad hoc and false structure to the data is a side effect of some popular naïve interpolation methods. Popular approaches include linear interpolation as well as repetition of the last observation. The well-known Shiller data of prices, dividends and earnings for the S&P 500¹ is an example where
linear interpolation is used. Accounting data in Bloomberg such as dividends


or earnings are presented as four quarters or 12-month (12M) trailing observations and paired with the current price; hence, it is an example of the
latter approach if, say, monthly price and dividend yield are downloaded for
use in various analyses.
Testing of our approach is conditional on a choice of one particular data
generating process. While our approach scores very well when a commonly
used VAR structure is the data generating process, we cannot generalize this result to every possible data generating process. However, by simulating
data through a VAR, we show that the estimation risk for the VAR parameters when we use our proposed stochastic interpolation method has almost exactly the same sample distribution as if we could observe the higher frequency data directly. By comparison, linear interpolation adds structure
such as autocorrelation and reduced variance to the data. This spills over
into reduced variance of the estimated parameters, thereby making standard statistical measures such as the t-ratio biased.
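The spurious structure added by linear interpolation is easy to see in a small experiment (the series and sample size below are our own illustration, not the chapter's): interpolating a quarterly random walk to a monthly grid makes the monthly changes strongly autocorrelated, even though the true quarterly changes are white noise.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative quarterly random walk: its quarterly changes are white noise
x_q = rng.standard_normal(200).cumsum()

# Linear interpolation to a monthly grid (two interior points per quarter)
t_q = np.arange(200) * 3
t_m = np.arange(t_q[-1] + 1)
x_m = np.interp(t_m, t_q, x_q)

# Lag-one autocorrelation of the interpolated monthly changes: within a
# quarter the three changes are identical, so the population value is 2/3
# even though the underlying changes are uncorrelated
d = np.diff(x_m)
ac1 = np.corrcoef(d[:-1], d[1:])[0, 1]
```

This induced persistence is exactly what produces the spuriously good AR fits and understated standard errors discussed above.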
The literature addresses various, sometimes complex types of missing
data and often focuses on enhanced methods for computation of a covariance matrix. We address the simple case of mixed regular frequencies, and our example application focuses on a set consisting of monthly and quarterly data. Our approach would be excessive if used to address the covariance matrix. The matrix computed from our interpolated data set would
be identical to the monthly matrix for those variables where monthly data
are available, appended with a row and a column reflecting the quarterly
variance-covariance structure for each quarterly variable. However, our
approach is very beneficial when the modelling of VAR relationships is
required, as is often the case in an asset allocation setting involving beliefs
in long term predictability, state variables and alternative asset classes for
which data are less readily available.
Data can be missing in many ways: randomly throughout the sample due
to errors or asynchronous trading, at the end of a sample due to asynchronous releases of statistical data, at the beginning of a sample due to differences in historical coverage or systematically throughout the sample due to differences in the frequency with which the data has been collected.
While we focus solely on the latter category, there is literature covering the
other cases; see, for instance, Stambaugh (1996) for estimating the covariance matrix of financial time series with unequal lengths. Meucci (2005)
addresses the case where some observations are missing randomly from the
time series. Giannone et al. (2008) deal with a ragged edge in the data set
through a Kalman filter.
Since Little and Rubin’s (1987) seminal work on multiple imputation, a procedure that replaces each missing value with a set of plausible values, many techniques have been proposed and tested on missing data
problems; see, for instance, McLachlan and Krishnan (1997). The main
principles of the analysis of the data with missing values are laid out in


Little and Rubin (2002). The missing data can be categorized into different classes based on the reason for their absence, and depending on
the classification, different methods are used. Schafer (1997) uses a full
scale implementation of the EM algorithm (a technique for fitting models
to incomplete data); the formal definition and key properties of the EM
algorithm are reviewed in the same chapter. A Bayesian alternative has
been devised by Tanner and Wong (1987). Other approaches, such as the

Copyright material from www.palgraveconnect.com - licensed to Taiwan eBook Consortium - PalgraveConnect - 2011-03-03
use of a maximum likelihood estimation and a two step approach com-
bining the maximum likelihood estimation with the EM algorithm, have
been described in Little and Rubin (1987) and Morokoff (1999). The latter
is an adaptation of the classical EM algorithm for Brownian processes in
finance. It uses a Brownian bridge approach to obtain the distribution of
all missing values simultaneously.
The strength of our approach is its simplicity and the fact that no unobserved structure is added to the simulated data. When estimating a model
with the interpolated data, we rely only on the observed e.g. quarterly
data to define the sample auto- and cross-autodependencies. The interpolated data are simulated based on an assumption that the first differences
of detrended data are an intraquarter Brownian motion or random walk,
and covarying with the observed monthly data by the observed quarterly
covariance matrix – i.e. that the covariance structure of the quarterly data
is frequency invariant.
This chapter is organized as follows. The next section describes our proposed method of stochastic interpolation. Section 16.3 contains a Monte
Carlo study of the efficiency of our approach by way of VAR estimation on
simulated data. We show the superior performance of our approach relative
to two commonly used naïve methods. Finally, Section 16.4 concludes and
summarizes.

16.2 Methodology

For the purpose of bridging the missing data, we assume that the data set is
generated by correlated geometric Brownian motions. Hence, the values of the data in the next period are equal to

D_{t+1} = D_t \cdot \exp\left[ \left( m - \tfrac{1}{2}\sigma^2 \right) dt + \sqrt{dt}\, C\, \varepsilon_{t+1} \right] \qquad (1)

where D_t is an N × 1 matrix of N data values at time t, such as prices or dividends, m − ½σ² is an N × 1 vector of trends, dt is the length of the time increment from t to t + 1, C is an N × N matrix such that CᵀC is equal to the covariance matrix of the data, and ε is an N × 1 vector of uncorrelated
standard normal variates. The Cholesky decomposition of the covariance
matrix is a popular candidate for C.
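As an illustration, a single step of Equation (1) can be sketched as follows. The trend vector, covariance matrix and starting values are invented for the example, and we take C as the transpose of numpy's lower Cholesky factor so that CᵀC equals the covariance matrix, as in the text (with the shocks as a row vector).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs (two series, monthly step); the numbers are assumptions
m = np.array([0.06, 0.02])                  # trend vector m
cov = np.array([[0.0225, -0.0045],
                [-0.0045, 0.0100]])         # covariance of annual log-changes
dt = 1.0 / 12.0

C = np.linalg.cholesky(cov).T               # C.T @ C equals cov, as in the text
assert np.allclose(C.T @ C, cov)

def gbm_step(D_t, eps):
    """One step of Eq. (1): D_{t+1} = D_t * exp((m - sigma^2/2) dt + sqrt(dt) eps C)."""
    sigma2 = np.diag(cov)
    return D_t * np.exp((m - 0.5 * sigma2) * dt + np.sqrt(dt) * (eps @ C))

D0 = np.array([100.0, 2.5])                 # e.g. a price level and a dividend level
D1 = gbm_step(D0, rng.standard_normal(2))
```

With zero shocks the step reduces to the deterministic drift exp((m − ½σ²)dt), which is a convenient sanity check.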


The main idea of our approach is to infer ε from the observed data D using
Equation (1), bridge the missing data in ε by a Brownian Bridge technique,
and then repackage the data with Equation (1) again to obtain the bridged
D with an unaltered covariance structure. First, we solve (1) with respect to
ε̂. We invert (1) and apply it to a sample, D̂, of size T producing

\hat{\varepsilon}_{t+1} = \hat{C}^{-1} \cdot \left[ \Delta \log(\hat{D}_{t+1}) - \frac{1}{T} \sum_{t=0}^{T} \Delta \log(\hat{D}_{t+1}) \right] \cdot \frac{1}{\sqrt{dt}} \qquad (2)

In other words, we premultiply the detrended first differences of the log of the data sample with the inverse of the Cholesky decomposition of the sample covariance matrix.
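A quick round-trip check of Equation (2), on invented inputs and using the row-vector convention x = εC in place of the text's premultiplication: data generated by (1) give back the original shocks, net of their sample mean, once the sample trend and the Cholesky factor are undone.

```python
import numpy as np

rng = np.random.default_rng(1)
T, dt = 240, 1.0 / 12.0

# Illustrative data generated by Eq. (1) with known shocks
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
C = np.linalg.cholesky(cov).T              # C.T @ C = cov
m = np.array([0.03, 0.05])
eps = rng.standard_normal((T, 2))
dlogD = (m - 0.5 * np.diag(cov)) * dt + np.sqrt(dt) * eps @ C   # Delta log D

# Eq. (2): detrend with the sample mean, then undo the C and sqrt(dt) scaling
eps_hat = (dlogD - dlogD.mean(axis=0)) @ np.linalg.inv(C) / np.sqrt(dt)

# The recovered shocks equal the true shocks net of their sample mean,
# because the trend is estimated by the sample average of the increments
assert np.allclose(eps_hat, eps - eps.mean(axis=0))
```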
We use the Cholesky matrix of the covariance matrix computed from the quarterly data, i.e. the frequency for which we have observations for all N data series in D̂. ε̂ can be divided into two parts: the i series known only quarterly and the j series known monthly as well. Let ε̂Q,i denote the uncorrelated stochastic elements of the i series for which monthly data are missing. We proceed by taking the monthly data set, adding zeros for the missing monthly observations in series i, then find the ε̂M,j series by Equation (2), using the same quarterly C. We disregard the resulting ε̂M,i, which is contaminated by the inserted zeros.
Noting that

\varepsilon^{Q}_{t_Q} = \sum_{t_M=1}^{3} \varepsilon^{M}_{t_M} \qquad (3)

we now expand the ε̂Q,i series by simulating three standard normal variates, εh, for every ε̂t^{Q,i} under the condition that

\sum_{h=1}^{3} \varepsilon_h = \hat{\varepsilon}^{Q,i}_{t} \qquad (4)

for all t. Hence, we have a new series ε̃M,i where bundles of three successive elements sum to the elements of ε̂Q,i. Such a simulation is very quick with
standard machinery, even for large T.
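The conditional simulation in (4) can be done with the classical bridge trick: draw three unconditional standard normals and spread the residual equally over them, which yields exactly the conditional distribution given the sum. A sketch (the quarterly shock series is invented; any additional scaling required by the √dt conventions can be applied to the target):

```python
import numpy as np

rng = np.random.default_rng(2)

def bridge_shocks(eps_q, rng):
    """Draw three standard normal variates constrained to sum to eps_q, as in
    Eq. (4): take iid draws and spread the residual equally over the three
    months (the Brownian-bridge conditioning trick)."""
    u = rng.standard_normal(3)
    return u + (eps_q - u.sum()) / 3.0

eps_q_series = rng.standard_normal(100)        # illustrative quarterly shocks
eps_m = np.array([bridge_shocks(e, rng) for e in eps_q_series])

# every bundle of three monthly shocks sums exactly to its quarterly shock
assert np.allclose(eps_m.sum(axis=1), eps_q_series)
```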
We construct a new set of uncorrelated stochastic elements by defining

\tilde{E}_t = \left[ \tilde{\varepsilon}^{M,i}_{t},\ \hat{\varepsilon}^{M,j}_{t} \right] \qquad (5)

for all series i that are available on a quarterly frequency and all series j that
have monthly observations. Now, repackaging the data, we obtain a constructed data set with monthly observations for all series:

\Delta \log(D^{\tilde{M}}) = \frac{1}{3T} \sum_{t=0}^{T} \Delta \log(D_{t+1}) + \sqrt{dt} \cdot C \cdot \tilde{E} \qquad (6)


where the trend is taken from the quarterly data. One important condition
for this approach to work is that the series for which monthly data are missing are placed as the first series in the data matrix, i.e. that for all elements
i and j in N, i < j. This is important because we are relying on the fact that
C has a triangular shape with zeros in the upper right triangle to be able to
exactly reproduce the j series after the repackaging in (6).
Summing up, in DM̃ we now have the original data for the j series where monthly observations were available, and new constructed monthly series that go through the quarterly data points of series i. The constructed series
have the same annual variance as the quarterly series and the same covariance with the monthly data as the quarterly series i has with the quarterly
observations of the j series.
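The whole procedure can be sketched end to end on invented data. Two assumptions are made beyond the text: trends are omitted for brevity, and the bridged monthly shocks are scaled so that their sum equals √3 times the quarterly shock, which is what the √dt scaling in (1)–(2) requires for the bridged path to pass exactly through the quarterly observations. The factor C is built lower triangular (zeros in the upper right) with CᵀC equal to the covariance matrix, as the ordering condition requires, using the row-vector convention x = εC.

```python
import numpy as np

rng = np.random.default_rng(42)
Tq = 40                                   # quarters of common history (illustrative)

# Assumed quarterly covariance of log-increments; series i (quarterly only)
# is placed first, series j (monthly) second, as the ordering condition requires
cov_q = np.array([[0.04, 0.015],
                  [0.015, 0.09]])

def lower_factor(cov):
    """Lower-triangular C with C.T @ C = cov (zeros in the upper right triangle)."""
    return np.linalg.cholesky(cov[::-1, ::-1])[::-1, ::-1].T

C_q = lower_factor(cov_q)
C_m = lower_factor(cov_q / 3.0)           # frequency-invariant covariance, monthly scale
assert np.allclose(C_q.T @ C_q, cov_q)

# A "true" monthly world to observe from (trend omitted for brevity)
x_true = rng.standard_normal((3 * Tq, 2)) @ C_m
x_q = x_true.reshape(Tq, 3, 2).sum(axis=1)        # quarterly log-increments, both series
x_mj = x_true[:, 1]                               # monthly observations of series j only

# Step 1: quarterly shocks via Eq. (2)
eps_q = x_q @ np.linalg.inv(C_q)

# Step 2: monthly j-shocks via Eq. (2), with zeros in place of the missing i
# data; the lower-triangular C keeps the j column uncontaminated by the zeros
eps_m = np.column_stack([np.zeros(3 * Tq), x_mj]) @ np.linalg.inv(C_m)
eps_mj = eps_m[:, 1]                              # keep j, disregard the i column

# Step 3: bridge the i-shocks; with the sqrt(dt) scaling of (1)-(2), the three
# monthly shocks must sum to sqrt(3) times the quarterly shock to hit the data
u = rng.standard_normal((Tq, 3))
target = np.sqrt(3.0) * eps_q[:, 0]
eps_mi = (u + (target - u.sum(axis=1))[:, None] / 3.0).ravel()

# Step 4: repackage as in Eq. (6)
x_bridged = np.column_stack([eps_mi, eps_mj]) @ C_m

assert np.allclose(x_bridged[:, 1], x_mj)                                  # j reproduced exactly
assert np.allclose(x_bridged[:, 0].reshape(Tq, 3).sum(axis=1), x_q[:, 0])  # quarterly points hit
```

The two closing assertions verify the chapter's two claims: the monthly series is returned unchanged, and the constructed series passes through every quarterly observation of the missing series.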

16.3 Monte Carlo simulation study

We proceed to test our method of stochastic interpolation by simulation. As an example, we use the classic dividend yield model where a VAR representation for returns, price growth, dividend growth and the dividend/price
ratio is used to digest the idea of predictability in equity returns. This model
is thoroughly presented in Cochrane (2001). The model is often estimated
on annual or quarterly data for the US, where very long time series are available. For other countries, annual data might present too few observations
and our approach to estimation on monthly data may prove useful. We
generalize the model slightly relative to Cochrane (2001) by allowing the
detrended log equity price to have an AR1 component as well. The model is
the following:

q_{t+1} = \alpha q_t + \beta (d_t - p_t) + \varepsilon_{q,t+1}

d_{t+1} - p_{t+1} = \phi (d_t - p_t) + \varepsilon_{dp,t+1} \qquad (7)

where q is the detrended logarithm of the price, and d − p is the logarithm of the dividend/price ratio. The model can be thought of as a mean
reverting process where the level to which detrended log prices revert is a
mean reverting process itself. The model is simulated with the following
parameters:

α = 0.9958
β = 0.0139
φ = 0.9917
σ(εq) = 0.15
σ(εdp) = 0.125
ρ(εq, εdp) = −0.2


These parameters correspond to estimated parameters from annual data as given in Cochrane (2001), translated to a monthly frequency through the
continuous time version of (7) described in Lo and Wang (1995).
We will simulate 1000 paths with a monthly frequency over 10 years for
detrended log prices and log dividend/price ratio by using the VAR in (7).
We then retain only the quarterly observations of the log (D/P) and fill in
the monthly observations by using our proposed stochastic interpolation method as well as the naïve methods. Note that the usual approach with
linear interpolation or 12M trailing methods is to interpolate dividends and
then divide by the current price. Here we apply all three interpolation methods to the state variable directly.
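One simulated path of the VAR in (7) can be sketched as follows. We take the quoted shock standard deviations at face value as monthly innovation scales, which is an assumption on our part; the chapter obtains its monthly parameters through the continuous-time translation in Lo and Wang (1995).

```python
import numpy as np

rng = np.random.default_rng(7)

# Parameters from the chapter; shock scales used at face value (assumption)
alpha, beta, phi = 0.9958, 0.0139, 0.9917
s_q, s_dp, rho = 0.15, 0.125, -0.2

cov = np.array([[s_q**2, rho * s_q * s_dp],
                [rho * s_q * s_dp, s_dp**2]])
L = np.linalg.cholesky(cov)

n = 120                                    # ten years of monthly observations
q = np.zeros(n + 1)                        # detrended log price
dp = np.zeros(n + 1)                       # log dividend/price ratio
shocks = rng.standard_normal((n, 2)) @ L.T # correlated innovations, cov = L L'
for t in range(n):
    q[t + 1] = alpha * q[t] + beta * dp[t] + shocks[t, 0]
    dp[t + 1] = phi * dp[t] + shocks[t, 1]

dp_quarterly = dp[::3]                     # what the econometrician observes of the state
```

Repeating this over many paths, estimating (7) on each interpolated data set, and collecting the parameter estimates reproduces the kind of Monte Carlo comparison reported below.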
Having the simulated data set we can estimate (7) with true monthly observations, with our stochastically interpolated data and with the alternative

[Figure 16.1: six histogram panels. Stochastic interpolation: mean 0.9996 (detrended price), 1.0251 (state variable); 12M trailing data: mean 1.0002 (detrended price), 1.0002 (state variable); linear interpolation: mean 1.0009 (detrended price), 0.5849 (state variable).]

Figure 16.1 In-sample fit relative to using true monthly data. The histograms show: stochastic (top), 12M trailing (middle) and linear (bottom) interpolation. Those on the left show RMSE for the price equation and those on the right show the state variable equation in (7).


interpolation methods and compare the accuracy of the estimation to the


true data generating parameters. We use the LeSage Econometrics toolbox
in Matlab, which provides standard ordinary least squares VAR estimation.
In Figure 16.1 we show the root mean square error of the in-sample fit
when using the three interpolation methods, divided by the root mean
square error when using the true monthly data for the state variable. Hence,
a reading of one on this measure means that the estimation error is the same as if we had access to the monthly observations of the state variable.
We see that the linear interpolation method has a much lower error for
the in-sample fit of the state variable than when estimating the true data.
This is a result of the strong intraquarter autocorrelation that is added to the
data by the linear interpolation method, thus producing a spuriously better
fit for the AR1 model for this equation in (7).
We also see that all three methods perform equally well regarding the
in-sample fit of the detrended log price. This result is influenced by the true α being close to one and the true β being close to zero in our simulation,

[Figure 16.2: four histogram panels of the estimated residual correlation. True monthly observations: mean −0.1978; stochastic interpolation: mean −0.2257; linear interpolation: mean −0.1199; intra-quarter repetition: mean −0.0553.]

Figure 16.2 Histograms for the overestimated correlation between residuals in (7). The histograms show: true monthly data (top left), stochastic interpolation (top right), linear interpolation (bottom left) and 12M trailing data (bottom right). The true data generating correlation is −0.2.


highlighting the illustrative nature of this exercise as the performance of


the different interpolation methods is conditional on an exact specification
of one particular model to be estimated.
The model in (7) is interesting in an asset allocation framework because it
introduces a potential for long-term predictability in asset prices, a feature
that may have far reaching implications for the SAA decision. In particular,
the relation between risk and investment horizon is impacted by parameters

in (7). Lo and Wang (1995) describe the term structure of risk in closed form
based on the continuous time equivalent to (7). It is shown there that the
correlation between the residuals ε in (7) is an important determinant of the
term structure of risk.
The correlation between the residuals is not reflected in the in-sample fit illustrated in Figure 16.1. Figure 16.2 shows the histograms for the correlation in our simulations with the true monthly data as well as the three
interpolation methods.
We see that the stochastic interpolation is the only method here that produces an unbiased estimate; in particular the 12M trailing method has a
rather large bias. This is not surprising as the repetition of the quarterly
value for every month within the quarter, i.e. in the 12M trailing data, will
bias the correlation with any other variable towards zero. However, the stochastic interpolation method does leave more estimation risk; in fact, it is of
the same magnitude as if only quarterly data were used.
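The attenuation from intraquarter repetition is easy to reproduce on stylized data (the series below are invented white noise, so the effect is at its starkest): only one month in three of the stale series lines up with the fresh one, so the measured correlation collapses towards a third of its true value.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3000

# Two correlated monthly white-noise series; true correlation 0.6 by construction
z = rng.standard_normal((n, 2))
rho = 0.6
x = z[:, 0]
y = rho * x + np.sqrt(1 - rho**2) * z[:, 1]

# 12M-trailing style staleness: repeat each "quarterly" value of y for three months
y_stale = np.repeat(y[::3], 3)[:n]

r_fresh = np.corrcoef(x, y)[0, 1]
r_stale = np.corrcoef(x, y_stale)[0, 1]

# the stale correlation is a fraction of the fresh one
assert r_stale < 0.5 * r_fresh
```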
The economic significance of the bias in the 12M trailing method as well
as the estimation risk in the stochastic interpolation method is illustrated in
Figure 16.3, which depicts the term structure of risk for true data generating
parameters combined with various estimates for the correlation.
The true data generating process exhibits short term momentum and
longer term mean reversion; the cutoff for when the mean reversion brings
risk reduction relative to a standard geometric Brownian motion for longer
term investors is around two years. We see from the left panel that the
expected bias due to the 12M trailing data extends that estimate to around
three years. Although the expected estimate for the stochastic interpolation
method is unbiased, a 75 per cent confidence band around the true value
produces estimates ranging from a few months to three years.
Inference regarding whether asset returns exhibit a few years of momentum before a pull from equilibrium manifests itself as mean reversion is
of paramount importance when designing TAA or dynamic SAA strategies.
We see from this example that ten years of data leaves a lot of uncertainty
in drawing a conclusion on that issue, even when our proposed method is
used.
The estimation risk is, however, greatly reduced for other parameters of
interest. Figure 16.4 shows the histograms of the estimation errors for the
parameters in (7) when using only the quarterly data and when using the
stochastic interpolation method for the state variable and monthly data


[Figure 16.3: two panels of annualized volatility against investment horizon (term structure of risk). Left panel legend: bias due to 12M trailing D/P; unbiased correlation between residuals; standard GBM. Right panel legend: 87.5 percentile, stochastic interpolation; 12.5 percentile, stochastic interpolation; standard GBM.]

Figure 16.3 Term structure of risk, true estimate and expected, biased estimate. The graphs use 12M trailing data (left panel) and 75 per cent confidence band for stochastic interpolation method (right panel).


[Figure 16.4: six histogram panels of estimation errors, stochastic interpolation (left column) versus quarterly data only (right column), for α (top), β (middle) and φ (bottom).]

Figure 16.4 Comparison of estimation risk when using stochastic interpolation (left) versus using quarterly data only (right). Estimation error in α (top), β (middle) and φ (bottom) from (7).

available for the detrended log price. We see clearly that using the stochastically interpolated state variable reduces estimation risk substantially.
The reason why the estimation risk for the correlation between the residuals is not reduced, as seen in Figure 16.2, is connected to the fact that the sample εt from Equation (2) is not perfectly independent, but rather is distorted both by estimation risk as well as by the discrepancy between the true data generating process in (7) and the assumed process for the missing data as given in (1). The distributions for the estimation risk for the parameters given in Figure 16.4 are, however, as if the true monthly observations
had been used directly.

16.4 Concluding remarks

In this chapter we have proposed a simple application of a Brownian Bridge to fill in missing higher frequency data for some variables, in a data set with
a lower common frequency. A typical application is where some data are on
a quarterly frequency while most data are available on a monthly frequency.

Combining Financial Data with Mixed Frequencies 335

The interpolation method preserves the covariance structure observed on
the lower common frequency, but assumes no autocovariance for the missing
intraquarter data.
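A minimal sketch of this interpolation scheme, assuming a single series observed at a lower frequency; the function name, the default of three sub-periods per quarter, and the choice to scale the bridge noise by the sample variance of the quarterly increments are our own illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def brownian_bridge_fill(quarterly, steps=3, rng=None):
    """Fill `steps` sub-period points between consecutive quarterly
    observations of a (log-price) series with a Brownian bridge.

    The bridge is pinned to the observed quarterly values, so the
    covariance structure at the quarterly frequency is preserved, while
    the simulated intra-quarter increments carry no autocovariance.
    """
    rng = np.random.default_rng(rng)
    quarterly = np.asarray(quarterly, dtype=float)
    # scale the bridge noise by the sample variance of quarterly increments
    var_q = np.var(np.diff(quarterly), ddof=1)
    out = [quarterly[0]]
    for a, b in zip(quarterly[:-1], quarterly[1:]):
        # a standard Brownian path over `steps` increments ...
        w = np.cumsum(rng.normal(0.0, np.sqrt(var_q / steps), steps))
        t = np.arange(1, steps + 1) / steps
        # ... pinned at both ends: (w - t * w[-1]) plus the linear trend a -> b
        out.extend(a + (b - a) * t + (w - t * w[-1]))
    return np.array(out)
```

Because the path is pinned at both ends, the observed quarterly values are reproduced exactly at the quarter marks.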
The motivation for not adding autocovariance in the interpolated data is
that inference about such autocovariance from one sample is very imprecise.
Thus, the analyst might commit the error of adding false structure
to the data, with unclear implications for whatever estimation the analyst
uses the data for. Note, however, that any autocorrelation exhibited in the
quarterly data is preserved with our proposed interpolation method.
If the analyst has a strong conviction that the true data generating process
exhibits autocorrelation, then further precision may be achieved by allow-
ing autocorrelation in the interpolation method. The Ornstein Uhlenbeck
Bridge discussed in e.g. Goldys and Maslowski (2006) may be an interesting
tool in that respect. Further research on this topic is needed.
We have tested our approach on one particular model, central in asset
modelling, where the true data generating process indeed exhibits auto-
covariance. We have shown that for this particular model the stochastic
interpolation method is clearly preferable to using quarterly data only, and
dominates other popular interpolation methods such as linear interpolation
or repetition of the last observed value, i.e. the common method of using
12M trailing data. We have shown that linear interpolation yields spuriously
strong estimation results, while the 12M trailing approach produces a
strong bias in some estimates of high relevance from this model. The stochastic
interpolation method avoids these problems while preserving the
reduction in estimation risk relative to using quarterly data.
Our method may deteriorate when the number of data points to fill by
simulation increases and the true data generating process exhibits autocovariance.
However, the model we have chosen as a test case, i.e. Equation (7),
has only two variables and is estimated over a period of ten years. The
stochastic interpolation approach will yield even more added value as the
number of variables increases and the available data decrease in size.

Notes
We would like to thank participants at the BIS/ECB/WB joint conference on Strategic
Asset Allocation for Central Banks & Sovereign Wealth Funds, 24–25 November 2008,
for helpful comments.
1. Data available at http://www.econ.yale.edu/~shiller/data.htm

Bibliography
Barberis, N. (2000) ‘Investing for the Long Run When Returns Are Predictable,’ Journal
of Finance, 55: 225–264.


Cochrane, J.H. (2001) Asset Pricing, Princeton: Princeton University Press.


Giannone, D., Reichlin, L. and Small, D. (2008) ‘Nowcasting: The Real-Time
Informational Content of Macroeconomic Data,’ Journal of Macroeconomics, 27:
53–67.
Goldys, B. and Maslowski, B. (2006) ‘The Ornstein Uhlenbeck Bridge and Applications
to Markov Semigroups,’ ArXiv Mathematics e-prints, October.
Little, R.J.A. and Rubin, D.B. (1987) Statistical Analysis with Missing Data, New York:
John Wiley.

Little, R.J.A. and Rubin, D.B. (2002) Statistical Analysis with Missing Data, 2nd edition,
New York: John Wiley.
Lo, A.W. and Wang, J. (1995) ‘Implementing Option Pricing Models When Asset
Returns Are Predictable,’ The Journal of Finance, 50: 87–129.
McLachlan, G.J. and Krishnan, T. (1997) The EM Algorithm and Extensions, New York:
John Wiley.
Meucci, A. (2005) Risk and Asset Allocation, New York: Springer.
Morokoff, W. (1998) ‘The Brownian Bridge E-M Algorithm for Covariance Estimation
with Missing Data,’ Journal of Computational Finance, 2: 75–100.
Schafer, J. (1997) Analysis of Incomplete Multivariate Data, New York: Chapman &
Hall.
Shiller, R. (2000) Irrational Exuberance, Princeton: Princeton University Press.
Stambaugh, R.F. (1996) ‘Analyzing Investments Whose Histories Differ in Length,’
Journal of Financial Economics, 45: 285–331.
Tanner, M. and Wong, W. (1987) ‘The Calculation of Posterior Distributions by Data
Augmentation (With Discussion),’ Journal of the American Statistical Association, 82:
528–50.

17
Statistical Inference for Sharpe Ratio
Friedrich Schmid and Rafael Schmidt

17.1 Introduction

Sharpe ratios (Sharpe 1966) are the most popular risk-adjusted performance
measure for investment portfolios and investment funds. Given a riskless
security as a benchmark, its Sharpe ratio is defined by

SR = (μ − z) / √σ²

where μ and σ² denote the portfolio’s mean return and return volatility,
respectively, and z represents the riskless return of the benchmark security.
From an investor’s point of view, a Sharpe ratio describes how well the return
of an investment portfolio compensates the investor for the risk he takes.
Financial information systems, for example, publish lists where investment
funds are ranked by their Sharpe ratios. Investors are then advised to invest
into funds with a high Sharpe ratio. The rationale behind this is that, if
the historical returns of two funds are compared to the same benchmark,
the fund with the higher Sharpe ratio yields a higher return for the same
amount of risk. Though (ex post) Sharpe ratios are computed using historical
returns, it is assumed that they have a predictive ability (ex ante). We refer to
Sharpe (1994) for related discussions and further references.
The riskless benchmark security can be generalized to a risky bench-
mark security or benchmark portfolio. In that case, the Sharpe ratio is the
mean excess return divided by the standard deviation of the excess return.
Formally, let X be a random variable representing the excess return of an
investment fund or investment portfolio over some benchmark. We assume
that the mean E(X) = μ and the volatility var(X) = σ² > 0 are well defined
and are finite; then SR = μ/σ. Suppose the observations of the portfolio’s
(excess) returns are described by a time series or stochastic process (Xt)t∈ℤ.
For n observations X1, ... , Xn, the Sharpe ratio is then estimated by

337


SR̂n = X̄n / √(Sn²)     (1)

where X̄n = (1/n) ∑_{t=1}^n Xt and Sn² = 1/(n − 1) ∑_{t=1}^n (Xt − X̄n)². The estimator SR̂n inherits
statistical uncertainty, which is the central theme of this chapter.
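As a small illustration (our own helper, not part of the chapter), the estimator in Formula (1) is one line of code once the sample moments are available:

```python
import numpy as np

def sharpe_ratio(excess_returns):
    """Estimate the Sharpe ratio from excess returns as in Formula (1):
    sample mean divided by the sample standard deviation (n - 1 divisor)."""
    x = np.asarray(excess_returns, dtype=float)
    return x.mean() / x.std(ddof=1)
```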
The statistical properties of Sharpe ratios have been considered by a number
of authors. Jobson and Korkie (1981) and Memmel (2003) assume that
the returns X1, ... , Xn, are stochastically independent and normally distrib-
uted, and they derive the relevant test distributions for hypothesis testing.
Further contributions are made by Vinod and Morey (2000), Lo (2002, 2003),
Wolf (2003), Knight and Satchell (2005), Bao and Ullah (2006) and Ledoit
and Wolf (2008), who generalize distributional assumptions on X1, ... , Xn.
In this context we also mention the recent preprints by Christie (2006),
Opdyke (2006) and DeMiguel et al. (2007). The statistics for Sharpe ratios
are relevant for most economic and econometric applications which utilize
Sharpe ratios; see e.g. Okhrin and Schmid (2006), who investigate distribu-
tional properties for optimal portfolio weights based on Sharpe’s ratio.
The present chapter complements the above literature by developing the
(asymptotic) distribution of a Sharpe ratio SR̂n under general assumptions
on the temporal correlation structure of X1, ... , Xn. The models considered
include the ARCH and GARCH models and stochastic volatility models.
Motivated by the frequent findings of volatility clustering in financial data,
we explicitly derive the asymptotic distribution for popular versions of the
GARCH model and the stochastic volatility model. The estimation error of a
two-sample statistic, under general conditions concerning the correlation
structure, is also considered. The theoretical results are illustrated by an
empirical study which examines excess returns of various global bond
asset-management portfolios and exchange-traded funds.
The chapter is organized as follows. Section 17.2 starts with some import-
ant properties of Sharpe ratios such as time aggregation and stochastic dom-
inance. The following section addresses the statistical inference of Sharpe
ratios. Section 17.3.1 states a general result concerning the asymptotic distri-
bution of Sharpe ratios. In Section 17.3.2, we briefly recall the case of tem-
poral independence and provide a variance stabilizing transformation for
general return distributions. Section 17.3.3 deals with the statistical infer-
ence for excess returns which exhibit volatility clustering. A general estima-
tion method is provided thereafter. Finally, we consider the statistics of the
difference between two Sharpe ratios in Section 17.3.5.

17.2 Time aggregation and stochastic dominance

Time aggregation of Sharpe ratios allows us to compare estimates of
Sharpe ratios which are calculated from excess returns based on different
frequencies. For example, a Sharpe ratio calculated from monthly data cannot
be directly compared to a Sharpe ratio derived from daily data, since
the units differ. In practice, a Sharpe ratio is represented as an annual per-
formance measure. The corresponding time aggregation can be elaborated
as follows.
Consider annual excess returns for which a Sharpe ratio SR is estimated
according to Formula (1). Suppose that the annual excess return X is observable
on a finer time scale – for example, on a monthly or daily scale. The
respective returns are denoted by X(1), ... , X(d), where d is the time-scale fac-
tor. Assume X = X(1) + ... + X(d), which holds if we work with excess log-re-
turns, and denote the corresponding Sharpe ratios by SR(1), ... , SR(d). Then the
time aggregation between the Sharpe ratios is

SR = {E(X) − z} / √Var(X) = ∑_{i=1}^d {E(X(i)) − z(i)} / √(∑_{k,l=1}^d Cov(X(k), X(l))) = ∑_{i=1}^d wi SR(i)

with weights wi = {Var(X(i)) / ∑_{k,l=1}^d Cov(X(k), X(l))}^{1/2} and riskless benchmark
z = ∑_{i=1}^d z(i).
Two special cases are interesting: first, the case where the returns X(1), ... ,
X(d) are uncorrelated (as in the heterogeneous volatility models considered
later) and second, the case where the returns are uncorrelated and have equal
variance. In the first case, the weights become wi = {Var(X(i)) / ∑_{k=1}^d Var(X(k))}^{1/2},
and in the second case, we have

SR = (1/√d) ∑_{i=1}^d SR(i)

If in addition SR(1) = ... = SR(d), we obtain the well-known formula SR = √d · SR(1).


Thus, in this case, Sharpe’s ratio is aggregated to an annual Sharpe’s ratio by
multiplying Sharpe’s ratio of the higher frequency by the square root of the
number of periods d which are contained in the lower-frequency period.
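The weighted aggregation can be checked numerically: summing wi SR(i) with the covariance-based weights reproduces the Sharpe ratio computed directly from the aggregated excess returns. A sketch under our own conventions (the rows of the input matrix are the d sub-periods, the columns the low-frequency periods; this helper is not from the chapter):

```python
import numpy as np

def aggregate_sharpe(sub_excess):
    """Aggregate sub-period Sharpe ratios to the low frequency via the
    covariance-based weights w_i of the time-aggregation formula.
    `sub_excess` has shape (d, n): d sub-periods observed over n periods."""
    x = np.asarray(sub_excess, dtype=float)
    cov = np.cov(x)                    # d x d sample covariance (ddof=1)
    total_var = cov.sum()              # sum of all Cov(X^(k), X^(l))
    w = np.sqrt(np.diag(cov) / total_var)
    sr_sub = x.mean(axis=1) / x.std(axis=1, ddof=1)
    return float((w * sr_sub).sum())
```

With consistent (n − 1) divisors throughout, the weighted sum equals the Sharpe ratio of the column sums, which is exactly the consistency the aggregation formula expresses.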
Stochastic dominance: Suppose X and Y denote the excess returns of two
different investment funds or investment portfolios. Let us consider second-
order stochastic dominance (SSD); see Müller and Stoyan (2002), Chapter 8,
for the definition and related results. It can be shown that SSD of X over Y
(i.e. X ≥SSD Y) and E(X) = E(Y) implies SRX ≥ SRY (where SRX and SRY denote
the Sharpe ratios of X and Y, respectively). If the distributions of X and Y
are restricted, stronger results can be obtained. For example, if X and Y are
normally distributed, then X ≥SSD Y implies SRX ≥ SRY without any further
assumption on the means of X and Y.


17.3 Statistical inference

17.3.1 General result


Suppose the time series (Xt)t∈ℤ – representing excess returns – is defined on
some probability space (Ω, F, P). We use the definition of α-mixing to describe
temporal dependence. For the time series (Xt)t∈ℤ, let Ft = σ(Xs, s ≤ t) denote the
information accumulated up to time t. Accordingly, define F^t = σ(Xs, s ≥ t).
Let A and B be two σ-fields included in F. Define

α(A, B) := sup_{A∈A, B∈B} |P(A ∩ B) − P(A)P(B)|

and αX(r) = sup_{s≥0} α(Fs, F^{s+r}). The process (Xt)t∈ℤ is said to be α-mixing if

αX(r) → 0 for r → ∞

Remark: If (Xt) is a strictly stationary Markov process, then αX(r) = α{σ(X1),
σ(X1+r)} (Bradley 1986).
The next proposition is essential for the forthcoming elaborations. The
proof is based on the fundamental central limit theorem for α-mixing processes
established in Rosenblatt (1956) and Ibragimov (1962). If not stated
otherwise, it is assumed that (Xt)t∈ℤ is strictly stationary in order to ease the
presentation. However, strict stationarity can be relaxed in certain cases; for
example, for Markov processes by imposing conditions on the rate of convergence
to equilibrium, see Doukhan (1994: 89).
Proposition 1 (CLT): Let X1, ... , Xn be observations of a strictly stationary real-valued
stochastic process (Xt)t∈ℤ which is α-mixing. If in addition the mixing
coefficients satisfy αX(r) = O(r^{−(1+1/δ)(2+δ)}) and E|X1|^{4+δ} < ∞ for some δ > 0, then


√n ( (1/n) ∑_{t=1}^n (Xt − μ), (1/n) ∑_{t=1}^n {(Xt − μ)² − σ²} ) →d N(0, Γ) as n → ∞

with μ = E(X1), σ² = var(X1), positive semi-definite covariance matrix
Γ = ∑_{j=−∞}^∞ Γj, and

Γj := ( cov(X1, X1+j)           cov(X1, (X1+j − μ)²)
        cov(X1, (X1+j − μ)²)    cov((X1 − μ)², (X1+j − μ)²) ),   j ∈ ℤ     (2)


Proof: Using Slutsky’s theorem, we may set μ = 0 and ␴ 2 = 1 without loss of


generality. Set s n2 = E{( ⌺tn=1 Xt )2 } , and assume ⌺∞r =1aX( r )d /[ 2 + d ] < ∞ and E|X1|2+δ < ∞ for
some δ > 0. According to Rosenblatt (1956) and Ibragimov (1962), s n2 / n → c 2
as n ∞ and if c > 0, the following CLT holds:

∑X
t =1
t s n → N ( 0,1) as n → ∞ (3)

Copyright material from www.palgraveconnect.com - licensed to Taiwan eBook Consortium - PalgraveConnect - 2011-03-03
Furthermore, . Using Slutsky’s theorem we conclude that Xn/c N(0, 1) as
n ∞. In order to obtain the CLT for the first and the second moment,
observe that for any measurable function f,

af ( X ) ( r ) ≤ aX ( r ) for all r∈] (4)

This follows by the definition of ␣X(r) and the fact that ␴(f(Xs): s ≤ t)
␴(Xs: s ≤ t) and ␴(f(Xs): s ≥ t) ␴(Xs: s ≥ t). In particular, the process f(Xt) is
␣-mixing. Substituting Xt by f(Xt) with f(x) = θ1x + θ 2(x2 − 1) – for arbitrary
but fixed θ = (θ1, θ2) R 2 in Formula (3) – yields the asserted CLT by an
application of the Cramér-Wold device. At this point we need the condition
E|X1|4+δ < ∞.
This yields the following asymptotic result for the Sharpe ratio estimator
SR̂n:

Theorem 2 (Asymptotic normality of SR̂n): Under the conditions and notation
of Proposition 1 and σ² > 0, the following holds:

√n (SR̂n − SR) →d N(0, σSR²)     (5)

with

σSR² = σ11/σ² − (σ12/σ³) SR + (σ22/(4σ⁴)) SR²   and   Γ =: ( σ11  σ12
                                                            σ12  σ22 )

Proof: Write

SR̂n = f(X̄n, sn²)   with   f(u1, u2) = (u1 − z)/√u2   and
∇f(u1, u2) = ( 1/√u2 , −(u1 − z)/(2u2^{3/2}) )


The proof now follows by an application of the Delta-method and
Proposition 1. Note that σ² > 0 implies total differentiability of f at (μ, σ²).
In particular,

√n {f(X̄n, sn²) − f(μ, σ²)} →d N(0, σSR² = ∇f′ Γ ∇f |_{u1=μ, u2=σ²})

with

∇f′ Γ ∇f |_{u1,u2} = σ11/u2 − σ12 (u1 − z)/u2² + σ22 (u1 − z)²/(4u2³),   u2 > 0

Large families of stochastic processes which fulfill the prerequisites of
Theorem 2 are members of polynomial random-coefficient autoregressive
models, generalized hidden Markov models or stochastic autoregressive
volatility models (see e.g. Doukhan 1994). In the next section we examine
some of these processes in more detail. First, we re-examine the case where
the (Xt) are independent and identically distributed.

17.3.2 Temporal independence


Assume that the observations X, X1, ... , Xn are independent and identically
distributed (i.i.d.). In that special case, the assumption E(|Xt|^{4+δ}) < ∞
in Proposition 1 can be relaxed to E(X⁴) < ∞, i.e. a finite fourth moment is
required. The mixing condition is obviously satisfied and the asymptotic
covariance matrix Γ takes the form

Γ = Γ0 = ( Var(X)         E{(X − μ)³}
           E{(X − μ)³}    Var{(X − μ)²} )

where X is the generic random variable having the stationary distribution of
(Xt)t∈ℤ. The asymptotic variance of SR̂n is given next.
Theorem 3 (The i.i.d. case): Suppose X, X1, ... , Xn are i.i.d. with E(X⁴) < ∞.
Then the asymptotic variance σSR² – as given in Formula (5) – takes the form

σSR² = 1 − [E{(X − μ)³}/{Var(X)}^{3/2}] SR + (1/4) [E{(X − μ)⁴} − (E{(X − μ)²})²]/{Var(X)}² SR²     (6)
     = 1 − γ1 SR + (1/4)(γ2 − 1) SR²
with skewness and kurtosis


γ1 = E{((X − μ)/σ)³}   and   γ2 = E{((X − μ)/σ)⁴}

It is well known that γ2 ≥ 1. Therefore, the kurtosis of the return distribution
has an enlarging effect on the asymptotic variance of the Sharpe
ratio. Skewness γ1 can increase or decrease σSR², according to its sign. Note
that the asymptotic variance in Formula (6) depends on the Sharpe ratio,
with the skewness and kurtosis as unknown parameters, which have to be
estimated. We may, however, derive a variance stabilizing transformation
which avoids the estimation of the Sharpe ratio. Moreover, this transformation
symmetrizes the finite sample distribution of SR̂n and thus reduces a
possible finite sample bias.
Lemma 4 (Variance stabilizing transformation): Under the assumptions of
Theorem 3 and γ2 > 1, the following holds:

√n {Ψ(SR̂n) − Ψ(SR)} →d N(0, 1)

with transformation

Ψ(θ) = (2/√(γ2 − 1)) ln[ (√(γ2 − 1)/2) θ − γ1/√(γ2 − 1) + √( ((γ2 − 1)/4) θ² − γ1θ + 1 ) ]     (7)

where θ takes only values such that Formula (7) is well defined.
Normally distributed returns: Under the assumption of normally distributed
excess returns Xi ~ N(μ, σ²) we have γ1 = 0 and γ2 = 3. In this case,
Formulas (6) and (7) become

σSR² = 1 + (1/2) SR²   and   Ψ(θ) = √2 arcsinh(θ/√2),   θ ∈ ℝ
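For normally distributed returns the transformation and its inverse are elementary, so a variance-stabilized confidence interval (the VCI used in the empirical section) needs no estimate of γ1, γ2 or σSR². A sketch with our own function names and a 98%-type default quantile:

```python
import math

def psi_normal(theta):
    """Variance-stabilizing transformation for normally distributed
    excess returns: Psi(theta) = sqrt(2) * arcsinh(theta / sqrt(2))."""
    return math.sqrt(2.0) * math.asinh(theta / math.sqrt(2.0))

def psi_normal_inv(y):
    """Inverse transformation, used to map a symmetric interval for
    Psi(SR) back to an interval for SR itself."""
    return math.sqrt(2.0) * math.sinh(y / math.sqrt(2.0))

def vci_normal(sr_hat, n, level_z=2.326):
    """Confidence interval for SR via Lemma 4:
    sqrt(n) * {Psi(SR_hat) - Psi(SR)} is asymptotically N(0, 1)."""
    y = psi_normal(sr_hat)
    half = level_z / math.sqrt(n)
    return psi_normal_inv(y - half), psi_normal_inv(y + half)
```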

In Table 17.1 below, we demonstrate the impact of the variance stabilizing
transformation on the estimation of the Sharpe ratio. Note that under the
assumption of normality, the finite sample distribution of √n SR̂n is that of a
non-central Student’s t-distribution with n − 1 degrees of freedom and
non-centrality parameter δ = √n SR.

The first row of Table 17.1 shows that a finite sample bias is present in the
estimation of the Sharpe ratio, in particular for small sample sizes. The sample
bias is reduced by the variance stabilizing transformation as illustrated
in the second row. This reduction is a consequence of the symmetrization
of the finite sample distribution of SR̂n. The third and fourth rows, respectively,
show the true and the estimated asymptotic standard deviation. The
fifth row illustrates that the standard deviation is overestimated by 13%
for sample size n = 12 and the overestimation becomes less with increasing
sample sizes. The overestimation can be reduced by the variance stabilizing
transformation, as shown in the sixth row. For n = 12, the overestimation of
the asymptotic variance is reduced by 40%.
Table 17.2 illustrates the above theoretical results with real data. We con-
sider the time series of monthly excess returns of seven global bond portfo-
lios (GBP) managed by the following asset managers:

● CGTC = Capital Guardian Trust Company, Global Fixed Income (World


Government Bond),
● FFTW(U) = Fischer Francis Trees & Watts, Inc. Global Unhedged Fixed
Income,
● FFTW(H) = Fischer Francis Trees & Watts, Inc. Global Hedged Fixed
Income,
● BAM = Baring Asset Management Inc., Global Fixed Income,
● SFI = Strategic Fixed Income, L.L.C., Global Short-Term Fixed Income,
● UBS(B) = UBS Global Asset Management, Global Bond Portfolio,
● UBS(A) = UBS Global Asset Management, Global Aggregate Bond.

Table 17.1 Statistical impact of the variance stabilizing
transformation on the estimation of the Sharpe ratio.
The figures are based on 10,000 times n simulations
of i.i.d. normally distributed random variables
with parameters μ = 0.005 and σ = 0.01. The benchmark
is set to z = 0. The corresponding true Sharpe ratio is SR
= 0.5. The sample size n ranges from 12 to 60 (which
may refer to months)

Sample length n            12      24      36      48      60
mean(SR̂n)                 0.541   0.519   0.511   0.506   0.504
Ψ⁻¹[mean{Ψ(SR̂n)}]         0.527   0.513   0.507   0.503   0.501
σSR                        1.060   1.060   1.060   1.060   1.060
σ̂SR                       1.203   1.123   1.112   1.084   1.073
σ̂SR/σSR                   1.134   1.059   1.049   1.022   1.011
√n · stdev{Ψ(SR̂n)}        1.082   1.037   1.036   1.015   1.005


The benchmark is the Global Aggregate Bond portfolio composed and pub-
lished by Lehman over the time horizon January 1989 to June 2008. The
length of the time series of excess returns varies between the considered
portfolios, but the time series all end in 2008. Beside the estimate of the
Sharpe ratio, Table 17.2 provides the corresponding 98% confidence inter-
val with and without variance stabilizing transformation and lists some
descriptive statistics. The considered time series of excess returns show
no statistically significant autocorrelation of the returns and the squared
returns. Furthermore, except for the SFI series, the assumption of normally
distributed returns cannot be rejected by the Kolmogorov-Smirnov test at
a confidence level of 99%. For example, Figure 17.1 gives the QQ-plot and
the partial autocorrelation function for FFTW. Thus, the assumption of
i.i.d. normally distributed excess returns is reasonable and, consequently,
the confidence intervals are calculated based on the results of Theorem 3
and Lemma 4. The findings in Table 17.2 imply that the estimated confi-
dence intervals for the Sharpe ratio are rather wide, in particular for time
series with short length. Note that the 98% confidence band of the Sharpe
ratio for BAM (n = 52) includes zero, and thus a Sharpe ratio of zero cannot
be rejected. The variance stabilizing transformation symmetrizes the con-
fidence intervals, reduces the finite sample bias, and renders estimation of
the asymptotic variance unnecessary. Section 17.3.5 presents test statistics
which allow the user to build statistical hypothesis tests for testing whether
two Sharpe ratios are significantly different or one is significantly larger
than the other.

Table 17.2 Estimated Sharpe ratio SR̂n, mean, standard deviation, maximum, minimum
and length n of the excess returns time series for different GBP. The lower and
upper 98% confidence bands are given by CIl and CIu. The corresponding confidence
bands obtained through the variance stabilizing transformation are denoted by VCIl
and VCIu.

GBP        SR̂n     Mean     Stdev   Max     Min     n    CIl      CIu     VCIl     VCIu

CGTC 0.375 0.0068 0.018 0.063 −0.043 207 0.208 0.542 0.210 0.545
FFTW(H) 0.578 0.0052 0.009 0.026 −0.022 135 0.362 0.794 0.367 0.801
FFTW(U) 0.276 0.0052 0.019 0.055 −0.040 135 0.072 0.480 0.074 0.483
BAM 0.300 0.0066 0.022 0.058 −0.041 52 −0.030 0.630 −0.025 0.641
SFI 0.739 0.0036 0.005 0.031 −0.015 95 0.470 1.009 0.479 1.020
UBS(B) 0.311 0.0048 0.016 0.049 −0.034 71 0.029 0.594 0.033 0.602
UBS(A) 0.427 0.0082 0.019 0.075 −0.048 315 0.290 0.564 0.292 0.566

Mean and stdev are multiplied by 100.


[Figure 17.1 near here: four panels – time series of FFTW(U) and Lehman Global Agg
excess returns (top), QQ-plot and partial ACF of squared excess returns (bottom)]

Figure 17.1 Time series of excess returns of the GBP FFTW(U) (upper left panel) and
the corresponding benchmark returns of the Lehman Global Aggregate portfolio
(upper right panel). The figure provides the QQ-plot against the standard normal dis-
tribution (lower left panel) and partial autocorrelation function (partial ACF) of the
squared excess returns (lower right panel) of FFTW(U). The horizontal dotted lines
in the partial ACF correspond to the upper and lower bounds of the 95% confidence
interval of an autocorrelation of zero

17.3.3 Statistical inference under volatility clustering


17.3.3.1 Stochastic volatility
Consider the stochastic volatility model Xt = μ + Vt εt, t ∈ ℤ, with (εt)t∈ℤ being
a sequence of i.i.d. random variables and εt ~ N(0, 1). Assume

ln Vt − α = λ(ln Vt−1 − α) + ηt   for t ∈ ℤ     (8)

and (ηt)t∈ℤ i.i.d. with ηt ~ N(0, β²(1 − λ²)) such that |λ| < 1 and β > 0. Thus, the
stochastic process (ln Vt − α)t∈ℤ is a strictly stationary AR(1)-process with
ln Vt ~ N(α, β²). Further, we assume that the processes (εt)t∈ℤ and (ηt)t∈ℤ are
stochastically independent. Note that all moments of (Xt)t∈ℤ exist and are
finite, and the process is α-mixing and strictly stationary (cf. Carrasco and


Chen 2002). Theorem 2 is thus applicable and we obtain the following
proposition.
Proposition 5: Suppose the temporal structure of the excess returns (Xt)t∈ℤ
is that of the stochastic volatility model described at the beginning of
Section 17.3.3.1. The stochastic process (Xt)t∈ℤ fulfills the prerequisites of
Theorem 2. The asymptotic variance of the Sharpe ratio is then given by

σSR² = 1 + (1/4)(γ2 − 1) SR² { 1 + 2 ∑_{j=1}^∞ [exp(4λ^j β²) − 1]/[3 exp(4β²) − 1] }     (9)

with γ2 = 3 exp(4β²) > 3.


Proof: Using the notation of Theorem 2, we observe that
σ² = E{(Xt − μ)²} = E(Vt² εt²) = E(Vt²) = exp(2α + 2β²). Further, σ11 = Var(X1) because
Cov(X1, X1+j) = E(V1ε1 V1+jε1+j) = 0 for all j ≥ 1. Moreover, σ12 = 0 since
Cov(X1, (X1+j − μ)²) = 0 for all j ∈ ℤ.
For σ22 = Var{(X1 − μ)²} [1 + 2 ∑_{j=1}^∞ Corr((X1 − μ)², (X1+j − μ)²)] we obtain after lengthy
but straightforward computation

Var((X1 − μ)²) = (3 exp(4β²) − 1) exp(4α + 4β²)

and

Corr((X1 − μ)², (X1+j − μ)²) = [exp(4λ^j β²) − 1] / [3 exp(4β²) − 1]

which is positive, if λ > 0.
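Formula (9) can be evaluated by truncating the infinite series, which converges quickly because |λ| < 1. A sketch (our own helper, not the authors' code; the truncation point is an arbitrary choice):

```python
import math

def sv_sharpe_asym_var(sr, lam, beta, terms=200):
    """Asymptotic Sharpe-ratio variance under the stochastic volatility
    model (8), evaluating Formula (9) with the series truncated after
    `terms` summands."""
    g2 = 3.0 * math.exp(4.0 * beta ** 2)          # kurtosis, > 3
    denom = 3.0 * math.exp(4.0 * beta ** 2) - 1.0
    s = sum((math.exp(4.0 * lam ** j * beta ** 2) - 1.0) / denom
            for j in range(1, terms + 1))
    return 1.0 + 0.25 * (g2 - 1.0) * sr ** 2 * (1.0 + 2.0 * s)
```

With λ = 0 the correlation terms vanish and the expression reduces to the i.i.d. formula with γ1 = 0; a positive λ inflates the variance, as the text notes.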

17.3.3.2 GARCH(1,1) model
Consider the following GARCH(1,1) process (Bollerslev 1986):

Xt = μ + σt εt,   t ∈ ℤ,   with (εt)t∈ℤ i.i.d., εt ~ N(0, 1), and
σt² = ω + α(Xt−1 − μ)² + βσt−1²   for t ∈ ℤ,     (10)

where ω > 0, α ≥ 0, β ≥ 0 and α + β < 1. In order to fulfill
E(|Xt|^{4+δ}) < ∞ for some δ > 0 in Theorem 2, a sufficient condition for α and
β is

E{(β + αεt²)^(2+δ/2)} < 1     (11)


Condition (11) together with Jensen’s inequality imply the strict stationarity
of the GARCH(1,1) process, which is given if E{ln(β + αεt²)} < 0 according
to Theorem 2 in Nelson (1990). Further, (Xt)t∈ℤ is α-mixing by Condition
(11) and Corollary 6 in Carrasco and Chen (2002). Note that Condition (11)
is slightly stronger than E{(β + αεt²)²} = β² + 2αβ + 3α² < 1, which corresponds to
the finiteness of the fourth moment of the stationary distribution of (Xt)t∈ℤ.
Theorem 2 is thus applicable and we obtain the following proposition.

Proposition 6: Let the temporal structure of the excess returns (Xt)t∈ℤ be that
of the GARCH(1,1) model given in (10). Assume for the moment that Condition
(11) holds and thus the prerequisites of Theorem 2 are fulfilled. The asymptotic
variance of the Sharpe ratio SR̂n is then given by

σSR² = 1 + (1/4)(γ2 − 1) SR² [ 1 + {(1 − β² − βα)/(1 − β² − 2βα)} · {2α/(1 − α − β)} ]

Proof: With the notation of Theorem 2, we have σ11 = Var(X1) due to the
well-known fact Cov(X1, X1+j) = 0. Further direct calculation yields Cov(X1,
(X1+j − μ)²) = 0 for all j ∈ ℤ. This result depends essentially on the fact that
E(εt) = E(εt³) = 0 if εt is normally distributed. Hence, σ12 = 0. Finally,

σ22 = Var((X1 − μ)²) (1 + 2 ∑_{j=1}^∞ Corr((X1 − μ)², (X1+j − μ)²))

with

Corr((X1 − μ)², (X1+j − μ)²) = α {(1 − β² − βα)/(1 − β² − 2βα)} (β + α)^{j−1},   j ∈ ℕ

and:

Var((X1 − μ)²) = 2ω² {1 − (α + β)² + α²} / [{1 − (α + β)}² {1 − (α + β)² − 2α²}]
Proposition 6 shows that for the frequently applied GARCH(1,1) model it
is possible to derive a closed formula for the asymptotic distribution of the
Sharpe ratio. In the following we apply this result to time series of ETF excess
log-returns which show volatility clustering. In particular, the parameters
α and β in the GARCH(1,1) are significantly different from zero at a 99%
confidence level.
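The closed-form variance in Proposition 6 is straightforward to evaluate numerically. The following sketch is our own illustration (function names are ours, not from the chapter): it computes the stationary kurtosis of a normal-innovation GARCH(1,1) process and the implied asymptotic variance of the Sharpe ratio estimator. With α = β = 0 it collapses to the i.i.d. normal value 1 + SR²/2.

```python
def garch11_kurtosis(alpha, beta):
    """Kurtosis gamma_2 of the stationary GARCH(1,1) distribution with
    standard normal innovations; requires beta^2 + 2*alpha*beta + 3*alpha^2 < 1
    (the fourth-moment condition discussed above)."""
    if beta**2 + 2 * alpha * beta + 3 * alpha**2 >= 1:
        raise ValueError("fourth moment of the stationary distribution does not exist")
    m = alpha + beta
    return 3.0 * (1 - m**2) / (1 - m**2 - 2 * alpha**2)


def sr_asymptotic_variance(sr, alpha, beta):
    """Asymptotic variance of sqrt(n)*(SR_hat_n - SR) under GARCH(1,1),
    i.e. the closed form of Proposition 6."""
    gamma2 = garch11_kurtosis(alpha, beta)
    # 2 * sum_{j>=1} Corr((X_1-mu)^2, (X_{1+j}-mu)^2): a geometric series in (alpha+beta)
    acf_sum = ((1 - beta**2 - beta * alpha) / (1 - beta**2 - 2 * beta * alpha)
               * 2 * alpha / (1 - alpha - beta))
    return 1 + 0.25 * (gamma2 - 1) * sr**2 * (1 + acf_sum)
```

Volatility clustering (α, β > 0) inflates the variance relative to the i.i.d. case, which is why the GARCH-based confidence bands discussed below are wider than their i.i.d. counterparts.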

Statistical Inference for Sharpe’s Ratio 349

Exchange-traded funds (or ETFs) are open-ended collective investment schemes, traded as shares on most global stock exchanges. Typically, ETFs
try to replicate a stock market index such as the S&P 500 Index, a market
sector or a commodity. ETFs are highly liquid financial instruments and
thus are suitable for an analysis of a Sharpe ratio based on weekly or even
daily excess returns. We consider some of the largest ETFs by assets under
management such as the Standard & Poor’s Depositary Receipt, abbreviated

SPDR. Shares of SPDR, called “spiders”, are traded on the American Stock
Exchange under the ticker SPY. The ETF time series are:

● SPY = SPDR 'spiders' (AMEX)
● IVV = iShares S&P 500 Index Fund (NYSE)
● IWM = iShares Russell 2000 Index Fund (AMEX)
● EEM = iShares MSCI Emerging Markets Index Fund (AMEX)
● DIA = DIAMONDS Trust, Series 1 (AMEX)

The excess log-returns of the ETFs are calculated with respect to weekly 'riskless' benchmark returns based on the five-year US treasury rate. The time series of weekly ETF excess returns begin at various starting days and all end on 17 October 2008; see also Table 17.3 for the length n of the respective series.

[Figure 17.2 (plot omitted): Weekly excess log-returns of the ETFs SPY and DIA and corresponding partial ACF for the squared excess returns up to June 2008. The horizontal dotted lines in the partial ACF correspond to the upper and lower bounds of the 95% confidence interval with an autocorrelation of zero.]
Figure 17.2 shows the excess returns of SPY and DIA together with the par-
tial ACF of the squared excess returns. The lack of homogeneity in the vola-
tility over time is clearly visible. Tables 17.3 and 17.4 provide some further
statistics on the ETF excess time series and the estimation error of the Sharpe
ratio. Some estimates of the Sharpe ratio are negative, which is caused by a

negative mean excess return of the ETF over the observation horizon. Note
that the Sharpe ratios of the ETFs are calculated over observation horizons
with different length.
Table 17.3: Estimated Sharpe ratio $\widehat{SR}_n$, mean, standard deviation, maximum, minimum and length n of the excess return time series for different ETFs. The estimate of the asymptotic standard deviation is denoted by $\hat\sigma_{SR}$. The lower and upper 98% confidence bands for the Sharpe ratio are given by CI_l and CI_u. Note that the length of the time series varies, but all series end on 17 October 2008. The mean is multiplied by 100.

ETF     SR̂_n     Mean     Stdev   Max     Min      n     σ̂_SR    CI_l     CI_u
SPY     0.010    0.024    0.023   0.072  −0.221   820   1.021   −0.153   0.173
IVV    −0.066   −0.159    0.024   0.070  −0.187   439   1.166   −0.282   0.150
IWM    −0.016   −0.048    0.030   0.112  −0.179   438   1.000   −0.147   0.115
EEM     0.036    0.134    0.038   0.079  −0.225   288   1.010   −0.165   0.236
DIA    −0.013   −0.033    0.025   0.076  −0.209   560   1.003   −0.336   0.310

Table 17.4: Estimates of the parameters μ, ω, α, and β of the GARCH(1,1) model as defined in Formula (10) for distinct time series of ETF excess returns. The last two columns show whether Condition 1 (stationarity condition) α + β < 1 and Condition 2 (condition for the existence of the fourth moment; see Proposition 6 and preceding) β² + 2αβ + 3α² < 1 are fulfilled. The parameters μ and ω are multiplied by 100.

ETF     μ       ω       α      β      Condition 1   Condition 2
SPY     0.12    0.001   0.16   0.83   0.99          1.02
IVV     0.03    0.002   0.17   0.78   0.96          0.98
IWM     0.07    0.035   0.18   0.41   0.59          0.42
EEM     0.49    0.017   0.18   0.68   0.86          0.80
DIA     0.11    0.011   0.34   0.51   0.85          0.95

A comparison of the asymptotic standard deviation $\hat\sigma_{SR}$ in Table 17.3 – obtained under the GARCH(1,1) assumption – with the asymptotic standard deviation calculated under the i.i.d. assumption (see Theorem 3) yields confidence bands which are up to 16% wider. Thus, ignoring the lack of homogeneity in the volatility structure may underestimate the width of the confidence band. The last two columns of Table 17.4 show that the conditions for stationarity and the existence of the fourth moment of the excess return series are fulfilled, except for SPY; see Proposition 6 and the preceding discussion.
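The last two columns of Table 17.4 can be reproduced directly from the reported parameter estimates. A minimal check (our own illustration, with the α and β values hard-coded from Table 17.4; small differences from the printed columns come from rounding of the reported parameters):

```python
# (alpha, beta) estimates taken from Table 17.4
params = {
    "SPY": (0.16, 0.83), "IVV": (0.17, 0.78), "IWM": (0.18, 0.41),
    "EEM": (0.18, 0.68), "DIA": (0.34, 0.51),
}

for etf, (a, b) in params.items():
    cond1 = a + b                         # Condition 1: must be < 1
    cond2 = b**2 + 2 * a * b + 3 * a**2   # Condition 2: fourth moment exists if < 1
    print(f"{etf}: cond1={cond1:.2f}, cond2={cond2:.2f}, "
          f"fourth moment {'exists' if cond2 < 1 else 'does not exist'}")
```

Only SPY violates the fourth-moment condition, in line with the remark in the text.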

17.3.4 Statistical inference under general conditions
Unlike in the parametric models considered in the previous two sections, we now leave the temporal dependence structure unspecified. Let $X_1, \ldots, X_n$ denote observations of a strictly stationary real-valued stochastic process $(X_t)_{t\in\mathbb{Z}}$. Under the prerequisites of Theorem 2, the asymptotic covariance $\Gamma$ in Proposition 1 is estimated using Formula (2). In particular, for $j \in \mathbb{N} \cup \{0\}$, the covariance matrices of Formula (2) are estimated by

$$\hat\Gamma_{j,n} = \begin{pmatrix}
\dfrac{1}{n-j}\sum_{t=1}^{n-j}(X_t-\bar X_n)(X_{t+j}-\bar X_n) & \dfrac{1}{n-j}\sum_{t=1}^{n-j}(X_t-\bar X_n)(X_{t+j}-\bar X_n)^2 \\[6pt]
\dfrac{1}{n-j}\sum_{t=1}^{n-j}(X_t-\bar X_n)^2(X_{t+j}-\bar X_n) & \dfrac{1}{n-j}\sum_{t=1}^{n-j}(X_t-\bar X_n)^2(X_{t+j}-\bar X_n)^2 - (S_n^2)^2
\end{pmatrix}.$$

For $j \in -\mathbb{N}$, we estimate

$$\hat\Gamma_{j,n} = \begin{pmatrix}
\dfrac{1}{n-|j|}\sum_{t=|j|+1}^{n}(X_t-\bar X_n)(X_{t+j}-\bar X_n) & \dfrac{1}{n-|j|}\sum_{t=|j|+1}^{n}(X_t-\bar X_n)(X_{t+j}-\bar X_n)^2 \\[6pt]
\dfrac{1}{n-|j|}\sum_{t=|j|+1}^{n}(X_t-\bar X_n)^2(X_{t+j}-\bar X_n) & \dfrac{1}{n-|j|}\sum_{t=|j|+1}^{n}(X_t-\bar X_n)^2(X_{t+j}-\bar X_n)^2 - (S_n^2)^2
\end{pmatrix}.$$

Proposition 7: Under the prerequisites of Proposition 1, the following holds:

(a)

$$\operatorname*{p\,lim}_{n\to\infty} \hat\Gamma_{j,n} = \Gamma_j \quad\text{for } j \in \mathbb{Z}$$

and

(b)

$$\operatorname*{p\,lim}_{n\to\infty} \sum_{j=-l(n)}^{l(n)} \hat\Gamma_{j,n} = \Gamma = \sum_{j=-\infty}^{+\infty} \Gamma_j,$$

where $l(n)$ is a sequence of natural numbers satisfying $l(n) = o(n^{1/3})$.

(c) Further,

$$\operatorname*{p\,lim}_{n\to\infty} \hat\sigma^2_{SR,n} = \sigma^2_{SR},$$

where

$$\hat\sigma^2_{SR,n} = \frac{\hat\sigma_{11}}{\hat\sigma_n^2} - \widehat{SR}_n \frac{\hat\sigma_{12}}{\hat\sigma_n^3} + \frac{1}{4}\widehat{SR}_n^2 \frac{\hat\sigma_{22}}{\hat\sigma_n^4}$$

and

$$\begin{pmatrix}\hat\sigma_{11} & \hat\sigma_{12}\\ \hat\sigma_{12} & \hat\sigma_{22}\end{pmatrix} = \sum_{j=-l(n)}^{l(n)} \hat\Gamma_{j,n} \quad\text{and}\quad \hat\sigma_n^k = \hat S_n^k,\ k = 2, 3, 4. \qquad (12)$$

(d) Finally,

$$\frac{\sqrt{n}\,\big(\widehat{SR}_n - SR\big)}{\hat\sigma_{SR,n}} \overset{d}{\to} N(0,1).$$

A proof of Proposition 7 can be obtained from the authors on request. In practice, a particular choice for $l(n)$ must be made in Formulas (12). Often a small number such as $l(n) = 2$ or 3 yields a sufficiently good approximation. Alternatively, $l(n)$ may be chosen as the smallest integer $j$ for which $\|\hat\Gamma_{j,n}\| \le \varepsilon$ for a given $\varepsilon > 0$.

An asymptotic $(1-\alpha)$-confidence interval for $SR$ is then approximately given by

$$\widehat{SR}_n \pm \Phi^{-1}\!\left(1-\frac{\alpha}{2}\right)\hat\sigma_{SR,n}/\sqrt{n},$$

and an asymptotic test for $H_0: SR = SR_0$ against $H_1: SR \ne SR_0$ is performed by rejecting $H_0$ if

$$\sqrt{n}\,\frac{\big|\widehat{SR}_n - SR_0\big|}{\hat\sigma_{SR,n}} > \Phi^{-1}\!\left(1-\frac{\alpha}{2}\right).$$


17.3.5 The difference between two Sharpe ratios SR_X and SR_Y

17.3.5.1 General result
In order to decide whether the Sharpe ratio of an investment portfolio is (statistically) significantly larger than the Sharpe ratio of an alternative investment portfolio, we investigate the statistical properties of the difference of the two Sharpe ratios. Suppose

$$SR_X = \frac{\mu_X - z_X}{\sqrt{\sigma_X^2}} \quad\text{and}\quad SR_Y = \frac{\mu_Y - z_Y}{\sqrt{\sigma_Y^2}}$$

are the Sharpe ratios of the returns X and Y of two investment portfolios. The stochastic variables X and Y are not necessarily stochastically independent and thus are written as a bivariate random vector (X, Y). Let $z_X$ and $z_Y$ be two riskless (possibly different) benchmarks. As in the one-dimensional case, we denote the random observations by $(X_1, Y_1), \ldots, (X_n, Y_n)$. The respective Sharpe ratios are then estimated by

$$\widehat{SR}_{X,n} = \big(\bar X_n - z_X\big)/\sqrt{S^2_{X,n}} \quad\text{and}\quad \widehat{SR}_{Y,n} = \big(\bar Y_n - z_Y\big)/\sqrt{S^2_{Y,n}}.$$

The next proposition states the asymptotic distribution of the difference $\widehat{SR}_{X,n} - \widehat{SR}_{Y,n}$. Again we use the notion of α-mixing as defined in Section 17.3.1. Note that for a bivariate stochastic process, the relevant σ-fields are defined by $\mathcal{F}_t = \sigma((X_s, Y_s), s \le t)$ and $\mathcal{F}^t = \sigma((X_s, Y_s), s \ge t)$.
Proposition 8: Let $(X_1, Y_1), \ldots, (X_n, Y_n)$, $n \in \mathbb{N}$, be observations of a strictly stationary bivariate stochastic process $(X_t, Y_t)_{t\in\mathbb{Z}}$ which is α-mixing. If in addition the mixing coefficients satisfy $\alpha_{(X,Y)}(r) = O\big(r^{-(1+1/\delta)(2+\delta)}\big)$ and $E|X_1|^{4+\delta} < \infty$, $E|Y_1|^{4+\delta} < \infty$, $E|Y_1 X_1|^{2+\delta} < \infty$ for some $\delta > 0$, then

$$\sqrt{n}\left[\widehat{SR}_{X,n} - \widehat{SR}_{Y,n} - (SR_X - SR_Y)\right] \overset{d}{\to} N\big(0, \sigma^2_{\mathrm{Diff}}\big) \quad\text{as } n \to \infty, \qquad (13)$$

where

$$\sigma^2_{\mathrm{Diff}} = \sigma^2_{SR_X} + \sigma^2_{SR_Y} - 2\sigma_{SR_X,SR_Y}$$

with

$$\sigma^2_{SR_X} = \frac{\sigma_{11}}{\sigma_X^2} - SR_X\frac{\sigma_{12}}{\sigma_X^3} + \frac{1}{4}SR_X^2\frac{\sigma_{22}}{\sigma_X^4} \quad\text{and}\quad \sigma^2_{SR_Y} = \frac{\sigma_{33}}{\sigma_Y^2} - SR_Y\frac{\sigma_{34}}{\sigma_Y^3} + \frac{1}{4}SR_Y^2\frac{\sigma_{44}}{\sigma_Y^4}.$$

The variables $\sigma_{ij}$ are defined in Formula (14) below. For the covariance $\sigma_{SR_X,SR_Y}$ we have

$$-\sigma_{SR_X,SR_Y} = -\frac{\sigma_{31}}{\sigma_X\sigma_Y} + \frac{1}{2}\frac{\sigma_{41}(\mu_Y - z_Y)}{\sigma_X\sigma_Y^3} + \frac{1}{2}\frac{\sigma_{32}(\mu_X - z_X)}{\sigma_X^3\sigma_Y} - \frac{1}{4}\frac{\sigma_{42}(\mu_X - z_X)(\mu_Y - z_Y)}{\sigma_X^3\sigma_Y^3}$$
$$= -\frac{\sigma_{31}}{\sigma_X\sigma_Y} + \frac{1}{2}SR_Y\frac{\sigma_{41}}{\sigma_X\sigma_Y^2} + \frac{1}{2}SR_X\frac{\sigma_{32}}{\sigma_X^2\sigma_Y} - \frac{1}{4}SR_X SR_Y\frac{\sigma_{42}}{\sigma_X^2\sigma_Y^2}.$$

Proof: Observe that Inequality (4) holds also for random vectors. Thus, Proposition 1 together with the Cramér–Wold device yields the weak convergence

$$\left(\frac{1}{\sqrt n}\sum_{t=1}^n (X_t-\mu_X),\ \frac{1}{\sqrt n}\sum_{t=1}^n \big\{(X_t-\mu_X)^2-\sigma_X^2\big\},\ \frac{1}{\sqrt n}\sum_{t=1}^n (Y_t-\mu_Y),\ \frac{1}{\sqrt n}\sum_{t=1}^n \big\{(Y_t-\mu_Y)^2-\sigma_Y^2\big\}\right)'$$
$$\overset{d}{\to} N\big(0,\ \Theta =: (\sigma_{ij})_{i,j\in\{1,2,3,4\}}\big) \quad\text{as } n \to \infty. \qquad (14)$$

The derivation of Θ is analogous to the calculation of Γ in Proposition 1. Finally, the Delta method together with lengthy calculations yields the stated formula for the asymptotic variance.

17.3.5.2 Temporally independent returns

Let the return vectors $(X, Y), (X_1, Y_1), \ldots, (X_n, Y_n)$ be temporally independent and identically distributed. Then, the α-mixing condition in Proposition 8 is irrelevant, and for the moment condition we may set δ = 0. Note that we do not assume (X, Y) to be bivariate normally distributed. The $\sigma_{ij}$ in Formula (14) are derived by

$$\sigma_{31} = \mathrm{Cov}(X, Y), \quad \sigma_{32} = \mathrm{Cov}\big\{Y, (X-\mu_X)^2\big\},$$
$$\sigma_{41} = \mathrm{Cov}\big\{X, (Y-\mu_Y)^2\big\}, \quad \sigma_{42} = \mathrm{Cov}\big\{(X-\mu_X)^2, (Y-\mu_Y)^2\big\}.$$


Lengthy calculation shows that the asymptotic variance in Formula (13) takes the form

$$\sigma^2_{\mathrm{Diff}} = \sigma^2_{SR_X} + \sigma^2_{SR_Y} - 2\sigma_{SR_X,SR_Y}$$
$$= 2 - \gamma_{1,X} SR_X - \gamma_{1,Y} SR_Y + \frac{1}{4}SR_X^2(\gamma_{2,X}-1) + \frac{1}{4}SR_Y^2(\gamma_{2,Y}-1)$$
$$\quad - 2\rho_{X,Y} + SR_Y\,\rho_{X,(Y-\mu_Y)^2}\sqrt{\gamma_{2,Y}-1} + SR_X\,\rho_{Y,(X-\mu_X)^2}\sqrt{\gamma_{2,X}-1}$$
$$\quad - \frac{1}{2}SR_X SR_Y\,\rho_{(X-\mu_X)^2,(Y-\mu_Y)^2}\sqrt{(\gamma_{2,X}-1)(\gamma_{2,Y}-1)},$$

where $\rho_{X,Y}$ denotes the Pearson correlation coefficient between X and Y.


Normally distributed returns: In the special case where (X, Y) is bivariate normally distributed, the latter formula becomes

$$\sigma^2_{\mathrm{Diff}} = 2 + \frac{1}{2}\big(SR_X^2 + SR_Y^2\big) - 2\rho_{X,Y} - SR_X SR_Y\,\rho_{X,Y}^2, \qquad (15)$$

since $\rho_{X,(Y-\mu_Y)^2} = \rho_{Y,(X-\mu_X)^2} = 0$ and $\rho_{(X-\mu_X)^2,(Y-\mu_Y)^2} = \rho_{X,Y}^2$ hold (see Isserlis' theorem, Isserlis 1918). Formula (15) is a key result in Memmel (2003).


For the excess returns of the GBPs introduced and discussed in Section 17.3.3.2, Table 17.5 investigates whether the one-sided hypothesis $H_0: SR_X > SR_Y$ can be rejected at various confidence levels α. The test statistic considered is

$$T_n = \sqrt{n}\,\big(\widehat{SR}_{Y,n} - \widehat{SR}_{X,n}\big)/\hat\sigma_{\mathrm{Diff}}, \qquad (16)$$

together with the asymptotic distribution established in Proposition 8. The two corresponding time series of excess returns are truncated such that they cover the same observation horizon. The hypothesis is rejected if $T_n$ is greater than the α-quantile of the standard normal distribution. The asymptotic variance is estimated via Formula (15), under the assumption of i.i.d. normally distributed excess returns, which has been verified in Section 17.3.3.2.
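Under i.i.d. bivariate normality the test just described reduces to a few lines. The sketch below is our own illustration (cf. the Jobson–Korkie/Memmel statistic): it estimates both Sharpe ratios, plugs the sample correlation into Formula (15), and returns $T_n$ together with its one-sided p-value.

```python
import numpy as np
from statistics import NormalDist


def sharpe_difference_test(x, y, zx=0.0, zy=0.0):
    """Test H0: SR_X > SR_Y via T_n = sqrt(n)*(SR_hat_Y - SR_hat_X)/sigma_hat_Diff,
    with sigma^2_Diff estimated by Formula (15) (i.i.d. bivariate normal returns)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)                                  # series truncated to a common horizon
    srx = (x.mean() - zx) / x.std()
    sry = (y.mean() - zy) / y.std()
    rho = np.corrcoef(x, y)[0, 1]
    var_diff = 2 + 0.5 * (srx**2 + sry**2) - 2 * rho - srx * sry * rho**2
    t_n = np.sqrt(n) * (sry - srx) / np.sqrt(var_diff)
    p_value = 1 - NormalDist().cdf(t_n)         # H0 rejected for large T_n
    return t_n, p_value
```

A large positive $T_n$ (small p-value) indicates that portfolio Y significantly outperforms portfolio X in terms of the Sharpe ratio.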
If the performance of the asset management portfolios in Section 17.3.3.2
is measured by the Sharpe ratio, the results in Table 17.5 imply that SFI
significantly outperforms FFTW(U), CGTC and UBS(A) at the 99.5% confi-
dence level. Furthermore, FFTW(H) outperforms FFTW(U) at the 99% con-
fidence level. Finally, SFI outperforms UBS(B) and FFTW(H), and FFTW(H)
outperforms CGTC and UBS(A) at the 95% confidence level.


Table 17.5: The one-sided hypothesis $H_0: SR_X > SR_Y$ is tested using the statistic $T_n$ defined in (16). A rejection of the hypothesis at the confidence level α is denoted by *** if α = 99.5%, ** if α = 99% and * if α = 95%. The estimate displayed in the second column is calculated over the full length of the corresponding time series. For the hypothesis test $H_0: SR_X > SR_Y$, the two corresponding time series of excess returns are truncated such that they cover the same observation horizon.

GBP FFTW(U) BAM UBS(B) CGTC UBS(A) FFTW(H)

FFTW(U) 0.276
BAM 0.300 −
UBS(B) 0.311 − −
CGTC 0.375 − − −
UBS(A) 0.427 − − − −
FFTW(H) 0.578 ** − − * *
SFI 0.739 *** − * *** *** *

Notes
1. The authors would like to thank Joachim Coche, Sandra Gaisser and Christoph
Memmel for fruitful discussions. This paper represents the authors’ personal opin-
ions and does not necessarily reflect the views of the associated institutions.

Bibliography
Bao, Y. and Ullah, A. (2006). 'Moments of the Estimated Sharpe Ratio when the Observations are not IID', Finance Research Letters 3, 49–56.
Bollerslev, T. (1986). 'Generalized Autoregressive Conditional Heteroskedasticity', Journal of Econometrics 31, 307–327.
Bradley, R.C. (1986). Basic Properties of Strong Mixing Conditions in Probability and Statistics, a Survey of Recent Results. Birkhäuser, Oberwolfach.
Carrasco, M. and Chen, X. (2002). 'Mixing and Moment Properties of Various GARCH and Stochastic Volatility Models', Econometric Theory 18, 17–39.
Christie, S. (2006). 'Is the Sharpe Ratio Useful in Asset Allocation?', preprint, Macquarie University, http://www.mafc.mq.edu.au.
DeMiguel, V., Garlappi, L. and Uppal, R. (2009). 'Optimal Versus Naive Diversification: How Inefficient is the 1/N Portfolio Strategy?', Review of Financial Studies 22(5), 1915–1953.
Doukhan, P. (1994). Mixing – Properties and Examples. Springer Verlag, New York.
Ibragimov, I.A. (1962). 'Some Limit Theorems for Stationary Processes', Theory of Probability and its Applications 7, 349–382.
Isserlis, L. (1918). 'On a Formula for the Product-moment Coefficient of any Order of a Normal Frequency Distribution in any Number of Variables', Biometrika 12, 399–407.
Jobson, J.D. and Korkie, B.M. (1981). 'Performance Hypothesis Testing with the Sharpe and Treynor Measures', Journal of Finance 36, 889–908.
Knight, J. and Satchell, S. (2005). 'A Re-examination of Sharpe's Ratio for Log-normal Prices', Applied Mathematical Finance 12(1), 87–100.
Ledoit, O. and Wolf, M. (2008). 'Robust Performance Hypothesis Testing with the Sharpe Ratio', Journal of Empirical Finance 15(5), 850–859.
Lo, A.W. (2002). 'The Statistics of Sharpe Ratios', Financial Analysts Journal 58(4), 36–52.
Lo, A.W. (2003). 'Author's Response on "The Statistics of Sharpe Ratios"', Financial Analysts Journal 59(5), 17.
Meitz, M. and Saikkonen, P. (2004). 'Ergodicity, Mixing, and Existence of Moments of a Class of Markov Models with Applications to GARCH and ACD Models', SSE/EFI Working Paper Series in Economics and Finance 573.
Memmel, C. (2003). 'Performance Hypothesis Testing with the Sharpe Ratio', Finance Letters 1, 21–23.
Müller, A. and Stoyan, D. (2002). Comparison Methods for Stochastic Models and Risks. Wiley Series in Probability and Statistics, New York.
Nelson, D.B. (1990). 'Stationarity and Persistence in the GARCH(1,1) Model', Econometric Theory 6, 318–334.
Okhrin, Y. and Schmid, W. (2006). 'Distributional Properties of Portfolio Weights', Journal of Econometrics 134, 235–256.
Opdyke, J.D. (2006). 'Easily Implemented Confidence Intervals and Hypothesis Tests for Sharpe Ratios under General Conditions', preprint, http://ssrn.com/abstract=886728.
Rosenblatt, M. (1956). 'A Central Limit Theorem and a Strong Mixing Condition', Proceedings of the National Academy of Sciences U.S.A. 42, 43–47.
Sharpe, W.F. (1966). 'Mutual Fund Performance', Journal of Business 39, 119–138.
Sharpe, W.F. (1994). 'The Sharpe Ratio', The Journal of Portfolio Management 21(1), 49–58.
Vinod, H.D. and Morey, M.R. (2000). 'Confidence Intervals and Hypothesis Testing for the Sharpe and Treynor Performance Measures: A Bootstrap Approach', in Computational Finance 1999, Y.S. Abu-Mostafa, B. LeBaron, A.W. Lo and A.S. Weigend (eds), MIT Press, Cambridge, Mass., Chapter 3, 25–39.
Wolf, M. (2003). 'A Comment on "The Statistics of Sharpe Ratios" by A.W. Lo', Financial Analysts Journal 59(5), 17.

Index

ACR, see Adjusted for Credit Ratio (ACR) autocovariance, 335


ADF, see augmented Dickey-Fuller autoregressive spectral estimators,

(ADF) test 293–5
adjustable rate mortgages (ARMs), 227 auto-spectrum, 284–5
Adjusted for Credit Ratio (ACR), 117–18,
123, 128–31 BarCap Point risk management
Adjusted for Skewness Sharpe Ratio system, 160
(ASSR), 115–16, 123, 127–31 Basel Accord, 159
agency guaranteed mortgage-backed Bayesian frameworks, 4, 6–7,
securities, see mortgage-backed 10–11, 33
securities benchmark yields, 31, 42, 160–1
aging populations, xxii–xxiii Black & Scholes option pricing model,
AIC, see Akaike information criteria 154n11
(AIC) Black-Litterman equation, 153
Akaike information criteria (AIC), 165, bond yield curves
166–8, 175 analyst’s views and, 31–42
analysts’ expectations, yield curves and, global, 38–41
31–42 uncertainty matrix, 38
ARCH-GARCH-based models, 160 US Treasury, 35–7, 58
ARMs, see adjustable rate mortgages bonds
(ARMs) credit-risk, 52
Asian financial crisis, 158 discount, 227
asset accumulation, xxii–xxiv fixed-rate, 227
asset allocation tilting between equities and, 194–7
of long-term investors, 265 Box, George, 202–3
strategic. see strategic asset Brownian Bridge, 325, 327–9, 334–5
allocation (SAA) buffer funds, xxv
tactical, 190–1 buy and hold (BH) strategy, 208
time horizon and, 95–6
asset allocation problem, 66–7, Calmar Ratio (CR), 144, 148
210–11 Canadian interest rate forecasts, 3–27
for public investment funds, xxx–xxxi combined forecasts, 3–5, 10–17
of savings and heritage funds, evaluation of combining, 18–24
xxvi–xxvii models, 5–10
asset allocation return, 183–4 Canadian term structure, of
asset class modelling, xxxvii–xxxix zero-coupon rates, 7, 8
asset management companies, 158 capital preservation, xxv
asset management, stakeholders in, central bank foreign exchange
179–80 reserves, xxii
asset returns, 189–90 central bank reserves, xxv, xxviii
ASSR, see Adjusted for Skewness Sharpe estimates of, 159
Ratio (ASSR) excess, xxviii, 164
augmented Dickey-Fuller (ADF) test, 68, growth of, 158
69, 70, 242–7 managements of, xxv–xxvi


central banks credit-spread modelling, 45, 47


asset allocation by, 66–7 see also spread-risk model
asset classes for, 164–5 Custom Pan-Euro Treasury Index, 260
benchmarks for, 160 CVaR, see conditional value-at-risk
distributions of, 164–70 (CVaR)
interest rate risk management for,
64–88 data
investment horizon, 160 estimating mixed frequency, 325–35

strategic asset allocation for, 158–60, missing, 326–7
170–2 Data Generating Process (DGP), 317
strategic policy, 65 debt repayment, xxv
types of investments of, xxxi demographics shifts, xxii–xxiii
Clayton copula, 160 descriptive statistics, 278–9
cokurtosis, 271, 279 development funds, xxix
combined interest rate models, 10–17 Diebold-Li model, 33
advantages of, 3–5 disasters, 140–4
dynamic model averaging, 18–21 discount bonds, 227
equal weights, 11–12 distributions
evaluation of, 18–24 of central banks, 164–70
factor OLS, 13–14 Gaussian, 138, 140, 160, 164–5
inverse error, 12 non-normal, 114, 302–3
log likelihood weights, 16–17 return, 268–9
marginal model likelihood, 15 use of appropriate, 162–4
MARS, 14 diversification, 135
predictive likelihood, 14–15 dominance, 138
simple OLS, 12–13 Dutch disease, xxvii
vs. single models, 22–7 dynamic model averaging, 18–21
static model averaging, 19, 21–2 dynamic Nelson-Siegel model, 45,
commodity funds, xxii 47, 56
commodity prices, xxii
commodity revenues, xxvi efficient frontier (EF), 120–9,
commodity-exporting countries, 136–7, 272
savings and heritage funds in, emerging markets
xxvi–xxvii increase in reserves in, xxv
conditional value-at-risk (CVaR), 184, savings and heritage funds in,
210 xxvi–xxvii
constant proportion portfolio insurance size of domestic markets in, xxxi
(CPPI) strategy, 209 equities, tilting between bonds
convex risk measures, 210 and, 194–7
copula functions, 160, 162–4, estimation risk, 332–4
169–70 EUR securitized debt, 259–60
Cornish-Fisher expansion, 269 Euro-Aggregate Index, 259–60
correlation matrix, 279 European Central Bank, 249
coskewness, 271, 279 event risk, 94, 96, 97
coupon return, 234–5 excess reserves, xxv, xxviii, 164
covariance model, 101 exchange rate risk, xxvii
CR, see Calmar Ratio (CR) exchange-traded funds (ETFs), 349–50
credit risk, 95, 112, 113, 115–18, 132 exit strategies, 97
credit spreads, 44–6, 48–9 expected return, 135–6
credit-risk bonds, 52 MDD-adjusted, 150


expected return model, 101 Government Sponsored Enterprises,


expected shortfall (ES), 160 227–8
exponential-affine functions, 5 Greenspan-Guidotti rule, 158
Exponential Spline (ES) model, 5–6 Gumbel copula, 160
exponential weighted moving average
(EWMA) volatility model, 160 hedge funds, 113, 114–15, 120, 122
heritage funds, xxvi–xxvii
Federal Home Loan Mortgage

Corporation (FHLMC), 226 implied volatility, 265, 266–8
Federal National Mortgage Association independent component analysis (ICA),
(FNMA), 226 41–2
filtering, 288–91 inflation, 173–4
filters, 285–7 institutional issues, 179–80
5-asset frontier, 155n17 integrated measure of performance,
fixed income analysts, yield curves and, 115–18
31–42 interest rate forecasts, 182–3
fixed proportions (FP) strategy, 208 Canadian, 3–37
fixed-income investing, spread-risk combined, 10–17
model for strategic, 44–62 combined forecasts, 3–5
fixed-rate bonds, 227 evaluation of combining, 18–24
fixed-weight strategy, 200 model risk and, 3
forecasting yields, 33–5 models, 5–10
forecasts, 95, 97 interest rate modelling and forecasting,
interest rate, see interest rate forecasts xxxiii–xxxiv
foreign debt, xxv interest rate models, 5–10
foreign exchange reserves, xxii best-performing model, 22–4
foreign investments, xxxi combination vs. single, 24–7
foreign reserves combinations, 10–17
academic publications on, xxxii forecasts of individual, 7–10
growth of, xxxii performance of, 9–10
Fourier Series (FS) model, 5–6, 23 interest rate risk, dynamic management
frequency data, estimating mixed, of, 64–88
325–35 interest rate volatility, 301–2
frequency domain, 283–8 interest rates, mean reversion in, 67–71,
versus time domain, 284 87–8
for time series modelling, 282–303 investment decisions, 178
frequency models, 297–300 investment grade credit and currency
fund of funds, 207–20 hedges, 197–8
investment horizon, 160
G7 Treasury index, 257–9, 261–2 investment portfolios, monitoring of,
Gaussian distribution, 138, 140, 160, 281–2
164, 165 investment return, xxv
genetic algorithm (GA), 149–52 investment strategies
Global Aggregate index (GlobalAgg), basic, 207–10
250, 256–60, 261–3 buy and hold (BH) strategy, 208
Global Multi-Factor Risk Model, 252 of central banks, xxv–xxvi
government holding management constant proportion portfolio
companies, xxviii, xxix insurance (CPPI) strategy, 209
Government National Mortgage fixed proportions (FP) strategy, 208
Association (GNMA), 226 target date fund (TDF), 208–9


investment tranche, xxii, xxvi portfolio performance evaluation of,


investor views, 178–87 123–9
mixed frequency data, estimating,
Kolmogorov-Smirnov (KS) test, 165, 325–35
166–8, 175 model averaging, 3–4
KPSS test, 68, 69, 70 Bayesian, 4
kurtosis, 122, 165, 166–8, 266 model risk, 3
modified VaR, 266, 269

leakage effect, 287–8 momentum-based strategies, 73,
Lehman Brothers, 227 83–6
level-dependent strategies, 65, 73, 75–7 monotonicity, 142, 143, 155n14
LIBOR/SWAP rates, 49, 51, 57, 60 Monte Carlo simulations, 193, 198–201,
linear filter G(L), 285–7 204n2, 316–22, 329–34
linear regression-based strategy, 73, mortgage-backed securities, 225–47
77–9 attribution model for, 232–41
liquidity crises, 158 comparing to Global Aggregate index,
long volatility (LV), 278 256–60
long-run mean reversion, 189 coupon return, 234–5
long-term investors, volatility as asset historical performance, 228–32
class for, 265–76 implications of market development
long-term time series data, 309–12 in 2007–2008, 241–7
introduction to, 225–6, 249–51
Manipulation-proof Performance investor considerations for, 228–32
Measure (MPPM), 123, 127 market depth and liquidity, 249–50
marginal model likelihood, 15 paydown return, 238–41
market risk, 95, 112, 115–18 price return, 235–8
Markowitz model, 93–5, 114, 134–7 quantitative portfolio strategy and,
maximum drawdown (MDD), 134 249–64
benefits of using, 153 return forecasts, 242–7
as measure of risk, 140–4 as strategic asset class, 226–32
portfolio optimization problem structure of, 227–8
under, 144–9 TBA proxy portfolio, 251–4
maximum entropy spectral analysis, MSCI-Emerging Markets (MSCI-EM),
292–5, 302–3 147
MBS Index, 250, 251–2 multi-objective optimization, 95, 98
MDD, see maximum drawdown (MDD) multi-period mean-variance
MDD-adjusted expected returns analysis, 94
(MDDAER), 150–2 Multiple Adaptive Regression Splines
mean reversion (MARS), 14, 28n13
in asset markets, 190
in interest rates, 67–71, 87–8 negative carry, xxv
long-run, 189 Nelson-Siegel (NS) model, 5–6, 23, 56,
mean-variance analysis, 94 72
mean-variance criteria (MVC), 135, observation equation for, 47
138–9, 144, 170 New Zealand Superannuation Fund
mean-variance dominance, 138–9 (NZSF), 189
mean-variance model, 114 Newton-Raphson type algorithms, 162
mean-variance optimization non-normal distributions, 114, 302–3
empirical study, 118–31 non-parametric spectral estimators, 292
hidden risks in, 112–32 Normal Inversion Gaussian, 160


oil revenues, xxvi scoring strategies, 66, 73, 81–3, 85–6


Omega function, 160, 170 variable time horizon strategic asset
optimal diversifications, between funds, allocation, 93–110
207–20 portfolio risk, 136
optimization, see portfolio optimization portfolio sampling, 114
ordinary least squares (OLS) regressions, portfolio selection, 95
67–8 positive homogeneity, 142, 143
Ornstein Uhlenbeck Bridge, 335 Power Transfer Function (PTF), 285–7

premium bonds, 227
parametric spectral estimators, 293 price return, 235–8
Pareto optimality, 98–9 principal component analysis (PCA),
paydown return, 238–41 31, 41–2
pension fund management, case study, probit regression model, 79–80
212–17 proportional exposure, 209
pension reserve funds, xxii–xxiii, PTF, see Power Transfer Function (PTF)
xxvii, xxviii public investment funds
accumulation phase of, xxvii asset allocation for, xxx–xxxi
interest rate mismatch, 67 balance sheet considerations, xxx
interest rate risk management for, largest, xxiii–xxiv
64–88 objectives and liabilities, xxv–xxix
investment horizon, 67 policy objectives, xxx
types of investments of, xxxi reputational considerations, xxxi
withdrawal phase, xxix types of, xxv–xxix
pension reserves, xxii
perspective distortion, 309–11 quantitative portfolio strategy, 249–64
policy benchmarks, 64 quantitative techniques, xxxvii–xxxix,
portfolio design goals, 180–1 179–80
portfolio optimization, xxix
of fund of funds, 207–20 random walk model, 9
Markowitz model, 93–5, 114, 134–7 rebalancing frequencies, 65
maximum drawdown and, 140–9 regime switching models, 160
process inputs, 182–4 regression-based strategies, 65, 73,
risk measurement and, 137–40 77–80
scenario-dependent, 178–87 reserves, xxii–xxiv
using alternative performance academic publications on, xxxii
measures, 130–1 estimates of central bank, 159
wealth creation-MDD, 147–52 growth of, 158
portfolio optimization problem, reserves diversification, xxii
210–11 reserves investment corporations, xxii,
portfolio optimization techniques, xxviii
xxxiv–xxxvii, 65, 114 return distributions, 268–9
dynamic duration strategies, 72–88 return volatility, 112
level-dependent strategies, 65, 73, risk
75–7 credit, 95, 112, 113, 115–18, 132
mixed strategies, 85–6 estimation, 332–4
momentum-based strategies, 73, event, 94, 96, 97, 268–9
83–6 exchange rate, xxvii
multi-objective optimization, 95, 98 interest rate, 64–88
regression-based strategies, 65, 73, market, 95, 112, 115–18
77–80 market price of, 44–5


risk aversion, xxii
risk integration, 96, 112, 115–18
risk measurement, 137–40
risk measures, 140–4, 170
  conditional value-at-risk (CVaR), 184, 210
  convex, 210
  expected shortfall (ES), 160
  Omega function, 160, 170
  for strategic asset allocation, 158–76
  value at risk (VaR), 159
  volatility, 159
risk models, 46, 112, 139
risk preferences, time horizon and, 95–6
risk premiums, 112, 115
risk scenarios, 180–1, 184–6
risk-management mechanisms, 191
RiskMetrics, 160

SAA, see strategic asset allocation (SAA)
safety first concept, 139
savings and spending rules, xxvi
savings funds, xxvi–xxvii
scenario-dependent portfolio optimization, 178–87
scoring strategies, 66, 73, 81–3, 85–6
Sharpe ratios, 117–18, 123, 125–6, 128
  differences between two, 353–6
  introduction to, 337–8
  statistical inference for, 337–56
  time aggregation and stochastic dominance, 338–9
Shiller data, 325
single factor spread model (SM1), 46, 48–51
skewness, 94, 113, 114–18, 121, 122, 125, 132, 266
social security funds, xxii–xxiii, xxvii, xxviii, xxix
  types of investments of, xxxi
sovereign wealth funds (SWFs)
  academic publications on, xxxii
  asset class universe, 164–70
  benchmarks for, 160–1
  creation of, 158
  definition, xxiii
  investment horizon, 160
  reserves estimates for, 159
  strategic asset allocation for, 158–60, 170, 172–4
spectral analysis, 305
  maximum entropy, 292–5
  techniques, 283–8, 302–3
spectral densities, 284–5
spectral windows, 292
spread movement, 183
spread-risk model
  data, 46–7
  dynamics for the factors, 55–6
  empirically founded, 48–51
  Nelson-Siegel model and, 47
  out-of-sample comparison, 56–61
  single factor, 46, 48–51
  for strategic fixed-income investors, 44–62
  two-factor, 46, 48–51
squared gain, 285
stabilization funds, xxv, xxviii
stakeholders, 179–80
state dependencies, in time series modelling, 300–2
state-space (SS) model, 5–6
static model averaging, 19, 21–2
statistical inference
  under general conditions, 351–2
  general result, 340–2
  for Sharpe Ratio, 337–56
  temporal independence, 342–6
  under volatility clustering, 346–51
stochastic dominance, 339
stochastic interpolation, 325–35
  methodology, 327–9
  Monte Carlo simulation, 329–34
stochastic volatility, 346–51
strategic asset allocation (SAA), xxix–xxxiii
  appropriate distributions and, 162–4
  for central banks, 158–60, 170–2
  decision framework for, xxx
  institutional issues, 179–80
  interest rate risk and, 64–88
  methodology, 161–2
  of mortgage-backed securities, 225–47
  optimization problems, 94–5
  policy benchmarks, 64
  risk measures for, 158–76
  for sovereign wealth funds, 158–60, 170, 172–4
  strategic tilting around, 189–205
  time horizon and, 95–6, 281
  uncertainty and, 280
  using variable time horizon, 93–110


strategic asset allocation (SAA) – continued
  weakness of traditional approaches to, 94–6
strategic fixed-income investing, spread-risk model for, 44–62
strategic tilting, 189–205
  enhancing sustainability of, 201–3
  between equities and bonds, 194–7
  future directions, 203–4
  historical back-tests, 194–8
  introduction to, 189–91
  Monte Carlo analysis, 198–201
  overview of methodology, 191–3
  as package, 198
stress scenarios, 97, 139–40
subadditivity, 142, 143
swap spreads, 49, 59
SWFs, see sovereign wealth funds (SWFs)

tactical asset allocation (TAA), 190–1
target date fund (TDF), 208–9
TBAs (to-be-announced contracts)
  normalized tracking error performance, 254–6
  proxy performance record, 253–4
  replicating performance of MBS Index using, 251–2
term structure of risk and return, 281
3-asset frontier, 155n17
tilting, see strategic tilting
time aggregation, of Sharpe ratios, 338–9
time horizon, 97, 281
  for central banks, 160
  versus frequency domain, 284
  impact of, 95–6
  for SWFs, 160
time series analysis, 292–5
time series decomposition, 288–91
  filter requirements, 289–90
  zero correlation property, 291
  Zero Phase Frequency Filter, 290–1
time series models/modelling, 217–20, 280–323
  of complex dependencies, 322–3
  construction of, 280–1
  data used for, 309–16
  frequency domain methodology for, 282–323
  model analysis, 303–6
  model specification and estimation, 295–303
  Monte Carlo simulations, 316–22
  samples and observation frequencies, 309–12
  state dependencies, 300–2
  understanding data and model dynamics, 306–7
tracking error volatility (TEV), 254–6
transfer rules, xxvi
transition equations, 7
translation invariance, 142–3
trend model, 296–7
2-asset frontier, 155n17
two-factor spread model (SM2), 46, 48–51

uncertainty matrix, in yield curves, 38
unconditional forecasts, 34
unit root tests, 68–70
US dollars (USD), wealth accumulation in, xxvii
US mortgage-backed securities, 249–64
  see also mortgage-backed securities
US Treasury bond yield curve, 35–7, 58

value at risk (VaR), 140, 159, 266, 269
variable time horizon strategic asset allocation, 93–110
  data for, 101–2
  evolutionary algorithm, 99–100
  examples, 102–9
  modelling limitations, 101
  multi-objective optimization, 98
  set of objectives for, 96–7
variance decomposition, 305–6
variance ratio test, 70–1
vector autoregressive (VAR) model, 55–6, 280
vector equilibrium correction model, 214, 217–20
VIX index, 267–8
volatility, 115–18, 137–8, 159
  as asset class, 265–76
  efficient portfolio with, 272–4
  implied, 265, 266–8
  interest rate, 301–2
  portfolio construction and, 268–70
  stochastic, 346–51
volatility risk premium (VRP), 265, 268, 271, 278


wealth creation, 147–9
wealth creation-MDD optimization, 146–52
weighted average coupon (WAC), 234–5

yield curves
  analyst’s views and, 31–42
  global, 38–41
  Nelson-Siegel model, 72
  posterior distribution of, 33
  spread models, 48–55
  uncertainty matrix, 38
  US Treasury, 58
  US Treasury bond, 35–7
yield paths, 182–3

zero correlation property, 291, 295, 302
Zero Phase Frequency Filter, 290–1, 293
zero-coupon rates, 6, 7, 8, 9
