Machine Learning in Finance: From Theory to Practice
Matthew F. Dixon • Igor Halperin • Paul Bilokon
Paul Bilokon
Department of Mathematics
Imperial College London
London, UK
Once you eliminate the impossible, whatever
remains, no matter how improbable, must be
the truth.
—Arthur Conan Doyle
Introduction
This book is written for advanced graduate students and academics in financial
econometrics, management science, and applied statistics, in addition to quants and
data scientists in the field of quantitative finance. We present machine learning
as a non-linear extension of various topics in quantitative economics such as
financial econometrics and dynamic programming, with an emphasis on novel
algorithmic representations of data, regularization, and techniques for controlling
the bias-variance tradeoff leading to improved out-of-sample forecasting. The book
is presented in three parts, each part covering theory and applications. The first
part presents supervised learning for cross-sectional data from both a Bayesian
and frequentist perspective. The more advanced material places a firm emphasis
on neural networks, including deep learning, as well as Gaussian processes, with
examples in investment management and derivatives. The second part covers
supervised learning for time series data, arguably the most common data type
used in finance with examples in trading, stochastic volatility, and fixed income
modeling. Finally, the third part covers reinforcement learning and its applications
in trading, investment, and wealth management. We provide Python code examples
to support the readers’ understanding of the methodologies and applications. As
a bridge to research in this emergent field, we present the frontiers of machine
learning in finance from a researcher’s perspective, highlighting how many well-
known concepts in statistical physics are likely to emerge as research topics for
machine learning in finance.
Prerequisites
Readers will find this book useful as a bridge from well-established foundational
topics in financial econometrics to applications of machine learning in finance.
Statistical machine learning is presented as a non-parametric extension of financial
econometrics and quantitative finance, with an emphasis on novel algorithmic rep-
resentations of data, regularization, and model averaging to improve out-of-sample
forecasting. The key distinguishing feature from classical financial econometrics
and dynamic programming is the absence of an assumption on the data generation
process. This has important implications for modeling and performance assessment
which are emphasized with examples throughout the book. Some of the main
contributions of the book are as follows:
• The textbook market is saturated with excellent books on machine learning.
However, few present the topic from the perspective of financial econometrics
and cast fundamental concepts in machine learning into canonical modeling and
decision frameworks already well established in finance such as financial time
series analysis, investment science, and financial risk management. Only through
the integration of these disciplines can we develop an intuition into how machine
learning theory informs the practice of financial modeling.
• Machine learning is entrenched in engineering ontology, which makes develop-
ments in the field somewhat intellectually inaccessible for students, academics,
and finance practitioners from quantitative disciplines such as mathematics,
statistics, physics, and economics. Moreover, financial econometrics has not kept
pace with this transformative field, and there is a need to reconcile various
modeling concepts between these disciplines. This textbook is built around
powerful mathematical ideas that shall serve as the basis for a graduate course for
students with prior training in probability and advanced statistics, linear algebra,
time series analysis, and Python programming.
• This book provides a financial-market-motivated and compact theoretical treatment
of financial modeling with machine learning for the benefit of regulators, wealth
managers, federal research agencies, and professionals in other heavily regulated
business functions in finance who seek a more theoretical exposition to allay
concerns about the “black-box” nature of machine learning.
• Reinforcement learning is presented as a model-free framework for stochastic
control problems in finance, covering portfolio optimization, derivative pricing,
and wealth management applications without assuming a data generation
process. We also provide a model-free approach to problems in market
microstructure, such as optimal execution, with Q-learning. Furthermore,
our book is the first to present methods of inverse reinforcement
learning.
• Multiple-choice questions, numerical examples, and more than 80 end-of-
chapter exercises are used throughout the book to reinforce key technical
concepts.
Chapter 1
Chapter 1 provides the industry context for machine learning in finance, discussing
the critical events that have shaped the finance industry’s need for machine learning
and the unique barriers to adoption. The finance industry has adopted machine
learning to varying degrees of sophistication. How it has been adopted is heavily
fragmented by the academic disciplines underpinning the applications. We review
some key mathematical examples that demonstrate the nature of machine learning
and how it is used in practice, with the focus on building intuition for more technical
expositions in later chapters. In particular, we begin to address many finance
practitioners' concerns that neural networks are a "black-box" by showing how they
are related to existing well-established techniques such as linear regression, logistic
regression, and autoregressive time series models. Such arguments are developed
further in later chapters.
Chapter 2
Chapter 3
develop intuition for the role and functional form of regularization in a frequentist
setting—the subject of subsequent chapters.
Chapter 4
Chapter 5
Chapter 5 presents a method for interpreting neural networks which imposes mini-
mal restrictions on the neural network design. The chapter demonstrates techniques
for interpreting a feedforward network, including how to rank the importance of
the features. In particular, an example demonstrating how to apply interpretability
analysis to deep learning models for factor modeling is also presented.
Chapter 6
Chapter 7
Chapter 8
Chapter 8 presents various neural network models for financial time series analysis,
providing examples of how they relate to well-known techniques in financial econo-
metrics. Recurrent neural networks (RNNs) are presented as non-linear time series
models and generalize classical linear time series models such as AR(p). They
provide a powerful approach for prediction in financial time series and generalize
to non-stationary data. The chapter also presents convolutional neural networks for
filtering time series data and exploiting different scales in the data. Finally, this
chapter demonstrates how autoencoders are used to compress information and
generalize principal component analysis.
Chapter 9
Chapter 10
can be used for dynamic portfolio optimization. For certain specifications of reward
functions, G-learning is semi-analytically tractable and amounts to a probabilistic
version of linear quadratic regulators (LQRs). Detailed analyses of such cases are
presented and we show their solutions with examples from problems of dynamic
portfolio optimization and wealth management.
Chapter 11
Chapter 12
Source Code
Scope
We recognize that the field of machine learning is developing rapidly and to keep
abreast of the research in this field is a challenging pursuit. Machine learning is an
umbrella term for a number of methodology classes, including supervised learning,
unsupervised learning, and reinforcement learning. This book focuses on supervised
learning and reinforcement learning because these are the areas with the most
overlap with econometrics, predictive modeling, and optimal control in finance.
Supervised machine learning can be categorized as generative and discriminative.
Our focus is on discriminative learners which attempt to partition the input
space, either directly through affine transformations or through projections onto
a manifold. Neural networks have been shown to provide a universal approximation
to a wide class of functions. Moreover, they can be shown to reduce to other well-
known statistical techniques and are adaptable to time series data.
Extending time series models, a number of chapters in this book are devoted to
an introduction to reinforcement learning (RL) and inverse reinforcement learning
(IRL) that deal with problems of optimal control of such time series and show how
many classical financial problems such as portfolio optimization, option pricing, and
wealth management can naturally be posed as problems for RL and IRL. We present
simple RL methods that can be applied for these problems, as well as explain how
neural networks can be used in these applications.
There are already several excellent textbooks covering other classical machine
learning methods, and we instead choose to focus on how to cast machine learning
into various financial modeling and decision frameworks. We emphasize that much
of this material is not unique to neural networks, but comparisons of alternative
supervised learning approaches, such as random forests, are beyond the scope of
this book.
Multiple-Choice Questions
Multiple-choice questions are included after introducing a key concept. The correct
answers to all questions are provided at the end of each chapter with selected, partial
explanations of some of the more challenging material.
Exercises
The exercises that appear at the end of every chapter form an important component
of the book. Each exercise has been chosen to reinforce concepts explained in the
text, to stimulate the application of machine learning in finance, and to gently bridge
material in other chapters. Each exercise is graded according to difficulty, ranging from (*),
which denotes a simple exercise which might take a few minutes to complete,
through to (***), which denotes a significantly more complex exercise. Unless
specified otherwise, all equations referenced in each exercise correspond to those
in the corresponding chapter.
Instructor Materials
Acknowledgements
This book is dedicated to the late Mark Davis (Imperial College) who was an
inspiration in the field of mathematical finance and engineering, and formative in
our careers. Peter Carr, Chair of the Department of Financial Engineering at NYU
Tandon, has been instrumental in supporting the growth of the field of machine
learning in finance. Through providing speaker engagements and machine learning
instructorship positions in the MS in Algorithmic Finance Program, the authors have
been able to write research papers and identify the key areas required by a text
book. Miquel Alonso (AIFI), Agostino Capponi (Columbia), Rama Cont (Oxford),
Kay Giesecke (Stanford), Ali Hirsa (Columbia), Sebastian Jaimungal (University
of Toronto), Gary Kazantsev (Bloomberg), Morton Lane (UIUC), Jörg Osterrieder
(ZHAW) have established various academic and joint academic-industry workshops
and community meetings to proliferate the field and serve as input for this book.
At the same time, there has been growing support for the development of a book
in London, where several SIAM/LMS workshops and practitioner special interest
groups, such as the Thalesians, have identified a number of compelling financial
applications. The material has grown from courses and invited lectures at NYU,
UIUC, Illinois Tech, Imperial College and the 2019 Bootcamp on Machine Learning
in Finance at the Fields Institute, Toronto.
Along the way, we have been fortunate to receive the support of Tomasz Bielecki
(Illinois Tech), Igor Cialenco (Illinois Tech), Ali Hirsa (Columbia University),
and Brian Peterson (DV Trading). Special thanks to research collaborators and
colleagues Kay Giesecke (Stanford University), Diego Klabjan (NWU), Nick
Polson (Chicago Booth), and Harvey Stein (Bloomberg), all of whom have shaped
our understanding of the emerging field of machine learning in finance and the many
practical challenges. We are indebted to Sri Krishnamurthy (QuantUniversity),
Saeed Amen (Cuemacro), Tyler Ward (Google), and Nicole Königstein for their
valuable input on this book. We acknowledge the support of a number of Illinois
Tech graduate students who have contributed to the source code examples and
exercises: Xiwen Jing, Bo Wang, and Siliang Xong. Special thanks to Swaminathan
Sethuraman for his support of the code development, to Volod Chernat and George
Gvishiani who provided support and code development for the course taught at
NYU and Coursera. Finally, we would like to thank the students and especially the
organisers of the MSc Finance and Mathematics course at Imperial College, where
many of the ideas presented in this book have been tested: Damiano Brigo, Antoine
(Jack) Jacquier, Mikko Pakkanen, and Rula Murtada. We would also like to thank
Blanka Horvath for many useful suggestions.
Contents
6.3 Practical Implications of Choosing a Classical or Bayesian Estimation Framework
7 Model Selection
7.1 Bayesian Inference
7.2 Model Selection
7.3 Model Selection When There Are Many Models
7.4 Occam's Razor
7.5 Model Averaging
8 Probabilistic Graphical Models
8.1 Mixture Models
9 Summary
10 Exercises
References
3 Bayesian Regression and Gaussian Processes
1 Introduction
2 Bayesian Inference with Linear Regression
2.1 Maximum Likelihood Estimation
2.2 Bayesian Prediction
2.3 Schur Identity
3 Gaussian Process Regression
3.1 Gaussian Processes in Finance
3.2 Gaussian Processes Regression and Prediction
3.3 Hyperparameter Tuning
3.4 Computational Properties
4 Massively Scalable Gaussian Processes
4.1 Structured Kernel Interpolation (SKI)
4.2 Kernel Approximations
5 Example: Pricing and Greeking with Single-GPs
5.1 Greeking
5.2 Mesh-Free GPs
5.3 Massively Scalable GPs
6 Multi-response Gaussian Processes
6.1 Multi-Output Gaussian Process Regression and Prediction
7 Summary
8 Exercises
8.1 Programming Related Questions*
References
4 Feedforward Neural Networks
1 Introduction
2 Feedforward Architectures
2.1 Preliminaries
2.2 Geometric Interpretation of Feedforward Networks
2.3 Probabilistic Reasoning
6 Summary
7 Exercises
References
10 Applications of Reinforcement Learning
1 Introduction
2 The QLBS Model for Option Pricing
3 Discrete-Time Black–Scholes–Merton Model
3.1 Hedge Portfolio Evaluation
3.2 Optimal Hedging Strategy
3.3 Option Pricing in Discrete Time
3.4 Hedging and Pricing in the BS Limit
4 The QLBS Model
4.1 State Variables
4.2 Bellman Equations
4.3 Optimal Policy
4.4 DP Solution: Monte Carlo Implementation
4.5 RL Solution for QLBS: Fitted Q Iteration
4.6 Examples
4.7 Option Portfolios
4.8 Possible Extensions
5 G-Learning for Stock Portfolios
5.1 Introduction
5.2 Investment Portfolio
5.3 Terminal Condition
5.4 Asset Returns Model
5.5 Signal Dynamics and State Space
5.6 One-Period Rewards
5.7 Multi-period Portfolio Optimization
5.8 Stochastic Policy
5.9 Reference Policy
5.10 Bellman Optimality Equation
5.11 Entropy-Regularized Bellman Optimality Equation
5.12 G-Function: An Entropy-Regularized Q-Function
5.13 G-Learning and F-Learning
5.14 Portfolio Dynamics with Market Impact
5.15 Zero Friction Limit: LQR with Entropy Regularization
5.16 Non-zero Market Impact: Non-linear Dynamics
6 RL for Wealth Management
6.1 The Merton Consumption Problem
6.2 Portfolio Optimization for a Defined Contribution Retirement Plan
6.3 G-Learning for Retirement Plan Optimization
6.4 Discussion
7 Summary
8 Exercises
References
11 Inverse Reinforcement Learning and Imitation Learning
1 Introduction
2 Inverse Reinforcement Learning
2.1 RL Versus IRL
2.2 What Are the Criteria for Success in IRL?
2.3 Can a Truly Portable Reward Function Be Learned with IRL?
3 Maximum Entropy Inverse Reinforcement Learning
3.1 Maximum Entropy Principle
3.2 Maximum Causal Entropy
3.3 G-Learning and Soft Q-Learning
3.4 Maximum Entropy IRL
3.5 Estimating the Partition Function
4 Example: MaxEnt IRL for Inference of Customer Preferences
4.1 IRL and the Problem of Customer Choice
4.2 Customer Utility Function
4.3 Maximum Entropy IRL for Customer Utility
4.4 How Much Data Is Needed? IRL and Observational Noise
4.5 Counterfactual Simulations
4.6 Finite-Sample Properties of MLE Estimators
4.7 Discussion
5 Adversarial Imitation Learning and IRL
5.1 Imitation Learning
5.2 GAIL: Generative Adversarial Imitation Learning
5.3 GAIL as an Art of Bypassing RL in IRL
5.4 Practical Regularization in GAIL
5.5 Adversarial Training in GAIL
5.6 Other Adversarial Approaches*
5.7 f-Divergence Training*
5.8 Wasserstein GAN*
5.9 Least Squares GAN*
6 Beyond GAIL: AIRL, f-MAX, FAIRL, RS-GAIL, etc.*
6.1 AIRL: Adversarial Inverse Reinforcement Learning
6.2 Forward KL or Backward KL?
6.3 f-MAX
6.4 Forward KL: FAIRL
6.5 Risk-Sensitive GAIL (RS-GAIL)
6.6 Summary
7 Gaussian Process Inverse Reinforcement Learning
7.1 Bayesian IRL
7.2 Gaussian Process IRL
Index
About the Authors
Paul Bilokon is CEO and Founder of Thalesians Ltd. and an expert in electronic
and algorithmic trading across multiple asset classes, having helped build such
businesses at Deutsche Bank and Citigroup. Before focusing on electronic trading,
Paul worked on derivatives and has served in quantitative roles at Nomura, Lehman
Brothers, and Morgan Stanley. Paul has been educated at Christ Church College,
Oxford, and Imperial College. Apart from mathematical and computational finance,
his academic interests include machine learning and mathematical logic.
Part I
Machine Learning with Cross-Sectional
Data
Chapter 1
Introduction
This chapter introduces the industry context for machine learning in finance, dis-
cussing the critical events that have shaped the finance industry’s need for machine
learning and the unique barriers to adoption. The finance industry has adopted
machine learning to varying degrees of sophistication. How it has been adopted
is heavily fragmented by the academic disciplines underpinning the applications.
We view some key mathematical examples that demonstrate the nature of machine
learning and how it is used in practice, with the focus on building intuition for
more technical expositions in later chapters. In particular, we begin to address
many finance practitioner’s concerns that neural networks are a “black-box” by
showing how they are related to existing well-established techniques such as
linear regression, logistic regression, and autoregressive time series models. Such
arguments are developed further in later chapters. This chapter also introduces
reinforcement learning for finance and is followed by more in-depth case studies
highlighting the design concepts and practical challenges of applying machine
learning in practice.
1 Background
find how to make machines use language, form abstractions and concepts, solve
kinds of problems now reserved for humans, and improve themselves.” Thus the
field of artificial intelligence, or AI, was born.
Since this time, AI has perpetually strived to outperform humans on various judg-
ment tasks (Pinar Saygin et al. 2000). The most fundamental metric for this success
is the Turing test—a test of a machine’s ability to exhibit intelligent behavior equiv-
alent to, or indistinguishable from, that of a human (Turing 1995). In recent years,
a pattern of success in AI has emerged—one in which machines outperform in the
presence of a large number of decision variables, usually with the best solution being
found through evaluating an exponential number of candidates in a constrained
high-dimensional space. Deep learning models, in particular, have proven remark-
ably successful in a wide field of applications (DeepMind 2016; Kubota 2017;
Esteva et al. 2017) including image processing (Simonyan and Zisserman 2014),
learning in games (DeepMind 2017), neuroscience (Poggio 2016), energy conser-
vation (DeepMind 2016), skin cancer diagnostics (Kubota 2017; Esteva et al. 2017).
One popular account of this reasoning points to humans’ perceived inability
to process large amounts of information and make decisions beyond a few key
variables. But this view, even if fractionally representative of the field, does no
justice to AI or human learning. Humans are not being replaced any time soon.
The median estimate for human intelligence in terms of gigaflops is about 10^4 times
more than that of the machine that ran AlphaGo. Of course, this figure is caveated by the
important question of whether the human mind is even a Turing machine.
In de Prado (2019), some of the properties of these new, alternative datasets are
explored: (a) many of these datasets are unstructured, non-numerical, and/or non-
categorical, like news articles, voice recordings, or satellite images; (b) they tend
to be high-dimensional (e.g., credit card transactions) and the number of variables
may greatly exceed the number of observations; (c) such datasets are often sparse,
containing NaNs (not-a-numbers); (d) they may implicitly contain information
about networks of agents.
Furthermore, de Prado (2019) explains why classical econometric methods fail
on such datasets. These methods are often based on linear algebra and fail when
the number of variables exceeds the number of observations. Geometric objects,
such as covariance matrices, fail to recognize the topological relationships that
characterize networks. On the other hand, machine learning techniques offer the
numerical power and functional flexibility needed to identify complex patterns
in a high-dimensional space offering a significant improvement over econometric
methods.
The “black-box” view of ML is dismissed in de Prado (2019) as a misconception.
Recent advances in ML make it applicable to the evaluation of plausibility of
scientific theories; determination of the relative informational variables (usually
referred to as features in ML) for explanatory and/or predictive purposes; causal
inference; and visualization of large, high-dimensional, complex datasets.
Advances in ML remedy the shortcomings of econometric methods in goal
setting, outlier detection, feature extraction, regression, and classification when it
comes to modern, complex alternative datasets. For example, in the presence of p
features there may be up to 2^p − p − 1 multiplicative interaction effects. For two
features there is only one such interaction effect, x_1 x_2. For three features, there are
x_1 x_2, x_1 x_3, x_2 x_3, x_1 x_2 x_3. For as few as ten features, there are 1,013 multiplicative
interaction effects. Unlike ML algorithms, econometric models do not "learn"
the structure of the data. The model specification may easily miss some of the
interaction effects. The consequences of missing an interaction effect, e.g. fitting
y_t = x_{1,t} + x_{2,t} + ε_t instead of y_t = x_{1,t} + x_{2,t} + x_{1,t} x_{2,t} + ε_t, can be dramatic.
A machine learning algorithm, such as a decision tree, will recursively partition
a dataset with complex patterns into subsets with simple patterns, which can then
be fit independently with simple linear specifications. Unlike the classical linear
regression, this algorithm “learns” about the existence of the x1,t x2,t effect, yielding
much better out-of-sample results.
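As an informal illustration of this point, the following sketch (our own synthetic example using scikit-learn, not code from the text) fits a misspecified linear model and a decision tree to data generated with an x_1 x_2 interaction and compares their out-of-sample fit.

```python
# A minimal sketch illustrating how a decision tree can recover the x1*x2
# interaction effect that a misspecified linear model misses.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.uniform(-1, 1, size=(n, 2))                          # two features x1, x2
y = X[:, 0] + X[:, 1] + X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Misspecified linear model: y_t = x_{1,t} + x_{2,t} + eps_t (no interaction term)
linear = LinearRegression().fit(X_train, y_train)

# A decision tree recursively partitions the input space and can capture the interaction
tree = DecisionTreeRegressor(max_depth=8, random_state=0).fit(X_train, y_train)

print("Linear R^2 (out-of-sample):", round(linear.score(X_test, y_test), 3))
print("Tree   R^2 (out-of-sample):", round(tree.score(X_test, y_test), 3))
```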
There is a trend towards more empirically driven modeling in asset pricing
research—using ever richer sets of firm characteristics and “factors” to describe and
understand differences in expected returns across assets and model the dynamics
of the aggregate market equity risk premium (Gu et al. 2018). For example,
Harvey et al. (2016) study 316 “factors,” which include firm characteristics and
common factors, for describing stock return behavior. Measurement of an asset’s
risk premium is fundamentally a problem of prediction—the risk premium is the
conditional expectation of a future realized excess return. Methodologies that can
reliably attribute excess returns to tradable anomalies are highly prized. Machine
learning provides a non-linear empirical approach for modeling realized security
returns from firm characteristics. Dixon and Polson (2019) review the formulation
of asset pricing models for measuring asset risk premia and cast neural networks in
canonical asset pricing frameworks.
1.2 Fintech
The rise of data and machine learning has led to a “fintech” industry, covering
digital innovations and technology-enabled business model innovations in the
financial sector (Philippon 2016). Examples of innovations that are central to
fintech today include cryptocurrencies and the blockchain, new digital advisory and
trading systems, peer-to-peer lending, equity crowdfunding, and mobile payment
systems. Behavioral prediction is often a critical aspect of product design and risk
management needed for consumer-facing business models; consumers or economic
agents are presented with well-defined choices but have unknown economic needs
and limitations, and in many cases do not behave in a strictly economically rational
fashion. Therefore it is necessary to treat parts of the system as a black-box that
operates under rules that cannot be known in advance.
1.2.1 Robo-Advisors
1.2.2 Fraud Detection
In 2011 fraud cost the financial industry approximately $80 billion annually
(Consumer Reports, June 2011). According to PwC’s Global Economic Crime
Survey 2016, 46% of respondents in the Financial Services industry reported being
victims of economic crime in the last 24 months—a small increase from 45%
reported in 2014. 16% of those that reported experiencing economic crime had
suffered more than 100 incidents, with 6% suffering more than 1,000. According
to the survey, the top 5 types of economic crime are asset misappropriation (60%,
down from 67% in 2014), cybercrime (49%, up from 39% in 2014), bribery and
corruption (18%, down from 20% in 2014), money laundering (24%, as in 2014),
and accounting fraud (18%, down from 21% in 2014). Detecting economic crimes is
one of the oldest successful applications of machine learning in the financial services
industry. See Gottlieb et al. (2006) for a straightforward overview of some of the
classical methods: logistic regression, naïve Bayes, and support vector machines.
The rise of electronic trading has led to new kinds of financial fraud and market
manipulation. Some exchanges are investigating the use of deep learning to counter
spoofing.
1.2.3 Cryptocurrencies
Fig. 1.1 A transaction–address graph representation of the Bitcoin network. Addresses are
represented by circles, transactions with rectangles, and edges indicate a transfer of coins. Blocks
order transactions in time, whereas each transaction with its input and output nodes represents an
immutable decision that is encoded as a subgraph on the Bitcoin network. Source: Akcora et al.
(2018)
Multiple Choice Question 1
Select all the following correct statements:
1. Supervised learning involves learning the relationship between input and output
variables.
2. Supervised learning requires a human supervisor to prepare labeled training data.
3. Unsupervised learning does not require a human supervisor and is therefore
superior to supervised learning.
4. Reinforcement learning can be viewed as a generalization of supervised learning
to Markov Decision Processes.
1 The model is referred to as non-parametric if the parameter space is infinite dimensional, and parametric if the parameter space is finite dimensional.
Suppose G ∈ {A, B, C} and the input X ∈ {0, 1}² are binary 2-vectors given
in Table 1.1. To match the input and output in this case, one could define a
parameter-free step function g(x) over {0, 1}² so that

$$g(x) = \begin{cases} \{1, 0, 0\} & \text{if } x = (0, 1)\\ \{0, 1, 0\} & \text{if } x = (1, 1)\\ \{0, 0, 1\} & \text{if } x = (1, 0)\\ \{0, 0, 1\} & \text{if } x = (0, 0). \end{cases} \qquad (1.5)$$
The discriminative model g(x), defined in Eq. 1.5, specifies a set of fixed rules
which predict the outcome of this experiment with 100% accuracy. Intuitively, it
seems clear that such a model is flawed if the actual relation between inputs and
outputs is non-deterministic. Clearly, a skilled analyst would typically not build such
a model. Yet, hard-wired rules such as this are ubiquitous in the finance industry
such as rule-based technical analysis and heuristics used for scoring such as credit
ratings.
If the model is allowed to be general, there is no reason why this particular
function should be excluded. Therefore automated systems analyzing datasets such
as this may be prone to construct functions like those given in Eq. 1.5 unless
measures are taken to prevent it. It is therefore incumbent on the model designer
to understand what makes the rules in Eq. 1.5 objectionable, with the goal of using
a theoretically sound process to generalize the input–output map to other data.
If this model were sampled, it would produce the data in Table 1.1
with probability (0.9)^4 = 0.6561. We can hardly exclude this model from
consideration on the basis of the results in Table 1.1, so which one do we
choose?
Informally, the heart of the model selection problem is that model g has
excessively high confidence about the data, when that confidence is often not
warranted. Many other functions, such as h, could have easily generated Table 1.1.
Though there is only one model that can produce Table 1.1 with probability 1.0,
there is a whole family of models that can produce the table with probability at least
0.66. Many of these plausible models do not assign overwhelming confidence to the
results. To determine which model is best on average, we need to introduce another
key concept.
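As a small numerical sketch of this point (assuming, as implied by Eq. 1.5, four observations with one per input pattern), one can compare how confidently the deterministic model g and a hedged model h reproduce Table 1.1:

```python
# A small sketch comparing how confidently two models reproduce Table 1.1
# (assumption: four observations, one per input pattern).
deterministic_confidence = 1.0   # model g: probability 1.0 on each observed label
hedged_confidence = 0.9          # model h: probability 0.9 on each observed label

n_obs = 4
print("P(table | g) =", deterministic_confidence ** n_obs)   # 1.0
print("P(table | h) =", hedged_confidence ** n_obs)          # 0.9^4 = 0.6561
```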
2.1 Entropy
Fig. 1.2 (Left) This figure shows the binary entropy of a biased coin. If the coin is fully biased,
then each flip provides no new information as the outcome is already known and hence the entropy
is zero. (Right) The concept of entropy was introduced by Claude Shannon in 1948 and was
originally intended to represent an upper limit on the average length of a lossless compression
encoding. Shannon’s entropy is foundational to the mathematical discipline of information theory
The reason why base 2 is chosen is so that the upper bound represents the number
of bits needed to represent the outcome of the random variable, i.e. {0, 1} and hence
1 bit.
The binary entropy for a biased coin is shown in Fig. 1.2. If the coin is fully
biased, then each flip provides no new information as the outcome is already known.
The maximum amount of information that can be revealed by a coin flip is when the
coin is unbiased.
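The following short sketch (not code from the text) evaluates the binary entropy H(p) = −p log₂ p − (1 − p) log₂(1 − p) plotted in Fig. 1.2 at a few values of p:

```python
# Binary entropy of a biased coin: zero for a fully biased coin, maximal (1 bit)
# for a fair coin.
import numpy as np

def binary_entropy(p):
    p = np.asarray(p, dtype=float)
    # define 0*log2(0) = 0 so that the endpoints evaluate to zero entropy
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return np.nan_to_num(h)

for p in [0.0, 0.1, 0.5, 0.9, 1.0]:
    print(f"p = {p:.1f}  H(p) = {binary_entropy(p):.4f} bits")
# p = 0.5 gives the maximum of 1 bit; p = 0 or 1 gives 0 bits.
```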
Let us now reintroduce our parameterized mass in the setting of the biased coin.
Consider an i.i.d. discrete random variable Y : Ω → 𝒴 ⊂ ℝ. If g(y|θ) is a model of
the non-fair coin with g(Y = 1|θ) = p_θ and g(Y = 0|θ) = 1 − p_θ, then the cross-entropy
between the data distribution, with P(Y = 1) = p, and the model is
H(p, p_θ) = −p log₂ p_θ − (1 − p) log₂(1 − p_θ).
Multiple Choice Question 2
Select all of the following statements that are correct:
1. Neural network classifiers are a discriminative model which output probabilistic
weightings for each category, given an input feature vector.
2. If the data is independent and identically distributed (i.i.d.), then the output
of a dichotomous classifier is a conditional probability of a Bernoulli random
variable.
3. A θ-parameterized discriminative model for a biased coin dependent on the
environment X can be written as {g_i(X|θ)}_{i=0}^{1}.
4. A model of two biased coins, both dependent on the environment X, can be
equivalently modeled with either the pair {g_i^{(1)}(X|θ)}_{i=0}^{1} and {g_i^{(2)}(X|θ)}_{i=0}^{1}, or
the multi-classifier {g_i(X|θ)}_{i=0}^{3}.
Neural networks represent the non-linear map F (X) over a high-dimensional input
space using hierarchical layers of abstractions. An example of a neural network is a
feedforward network—a sequence of L layers⁴ formed via composition:

$$\hat{Y}(X) := F_{W,b}(X) = f^{(L)}_{W^{(L)},b^{(L)}} \circ \cdots \circ f^{(1)}_{W^{(1)},b^{(1)}}(X),$$

where
• $f^{(l)}_{W^{(l)},b^{(l)}}(X) := \sigma^{(l)}(W^{(l)} X + b^{(l)})$ is a semi-affine function, where $\sigma^{(l)}$ is a
univariate and continuous non-linear activation function such as max(·, 0) or tanh(·).
• $W = (W^{(1)}, \dots, W^{(L)})$ and $b = (b^{(1)}, \dots, b^{(L)})$ are weight matrices and
offsets (a.k.a. biases), respectively.
4 Note that we do not treat the input as a layer. So there are L − 1 hidden layers and an output layer.
Fig. 1.4 Examples of neural networks architectures discussed in this book. Source: Van Veen, F.
& Leijnen, S. (2019), “The Neural Network Zoo,” Retrieved from https://www.asimovinstitute.
org/neural-network-zoo. The input nodes are shown in yellow and represent the input variables,
the green nodes are the hidden neurons and represent hidden latent variables, the red nodes are
the outputs or responses. Blue nodes denote hidden nodes with recurrence or memory. (a)
Feedforward. (b) Recurrent. (c) Long short-term memory
$$\hat{Y}(X) = W^{(2)}(W^{(1)} X + b^{(1)}) + b^{(2)} = W^{(2)} W^{(1)} X + W^{(2)} b^{(1)} + b^{(2)} = W X + b \qquad (1.10)$$
is just linear regression, i.e. an affine transformation.5 Clearly, if there are no hidden
layers, the architecture recovers standard linear regression
Y = WX + b
and logistic regression φ(W X + b), where φ is a sigmoid or softmax function, when
the response is continuous or categorical, respectively. Some of the terminology
used here and the details of this model will be described in Chap. 4.
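The following minimal sketch (our own NumPy illustration, not the book's source code) builds such a two-layer network and verifies that, with identity activations, it collapses to the single affine map of Eq. 1.10:

```python
# A minimal two-layer feedforward network Y_hat(X) = f^(2) o f^(1)(X); with identity
# activations the composition collapses to a single affine map W X + b (Eq. 1.10).
import numpy as np

rng = np.random.default_rng(0)
P, H = 3, 4                                   # input dimension and hidden units
W1, b1 = rng.standard_normal((H, P)), rng.standard_normal(H)
W2, b2 = rng.standard_normal((1, H)), rng.standard_normal(1)

def feedforward(x, activation=np.tanh):
    z = activation(W1 @ x + b1)               # hidden layer: semi-affine function
    return W2 @ z + b2                        # output layer (identity activation)

x = rng.standard_normal(P)
print("non-linear network:", feedforward(x))

# With identity activations the network is the affine map W X + b
identity = lambda u: u
W, b = W2 @ W1, W2 @ b1 + b2
print("linear network:    ", feedforward(x, activation=identity))
print("collapsed W X + b: ", W @ x + b)
```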
The theoretical roots of feedforward neural networks are given by the
Kolmogorov–Arnold representation theorem (Arnold 1957; Kolmogorov 1957)
of multivariate functions. Remarkably, Hornik et al. (1989) showed how neural
networks, with one hidden layer, are universal approximators to non-linear
functions.
Clearly there are a number of issues in any architecture design and inference of
the model parameters (W, b). How many layers? How many neurons N_l in each
hidden layer? How to perform “variable selection”? How to avoid over-fitting? The
details and considerations given to these important questions will be addressed in
Chap. 4.
5 While the functional form of the map is the same as linear regression, neural networks do not
assume a data generation process and hence inference is not identical to ordinary least squares
regression.
Table 1.2 This table contrasts maximum likelihood estimation-based inference with supervised
machine learning. The comparison is somewhat exaggerated for ease of explanation; however, the
two should be viewed as opposite ends of a continuum of methods. Regularized linear regression
techniques such as LASSO and ridge regression, or hybrids such as Elastic Net, provide some
combination of the explanatory power of maximum likelihood estimation while retaining out-of-
sample predictive performance on high-dimensional datasets

| Property | Statistical inference | Supervised machine learning |
|---|---|---|
| Goal | Causal models with explanatory power | Prediction performance, often with limited explanatory power |
| Data | The data is generated by a model | The data generation process is unknown |
| Framework | Probabilistic | Algorithmic and probabilistic |
| Expressibility | Typically linear | Non-linear |
| Model selection | Based on information criteria | Numerical optimization |
| Scalability | Limited to lower-dimensional data | Scales to high-dimensional input data |
| Robustness | Prone to over-fitting | Designed for out-of-sample performance |
| Diagnostics | Extensive | Limited |
and falls under the more general and divisive topic of “Bayesian versus frequentist
modeling.”
Fig. 1.5 Overview of how machine learning generalizes parametric econometrics, together with
the section references to the material in the first two parts of this book
response: $\hat{Y}_t = f^{(2)}_{W^{(2)},b^{(2)}}(Z_t) := \sigma^{(2)}(W^{(2)} Z_t + b^{(2)})$,
hidden states: $Z_{t-j} = f^{(1)}_{W^{(1)},b^{(1)}}(X_{t,j})$,
where σ^{(1)} is an activation function such as tanh(x) and σ^{(2)} is either a softmax
function, sigmoid function, or identity function depending on whether the response
is categorical, binary, or continuous, respectively. The connections between the
external inputs X_t and the H hidden units are weighted by the time-invariant matrix
W_x ∈ R^{H×P}. The recurrent connections between the H hidden units are weighted
by W_z^{(1)} and remain fixed over all repetitions—there is only one recurrent layer with weights
W_z^{(1)}.
The amount of memory in the model is equal to the sequence length T . This
means that the maximum lagged input that affects the output, Ŷt , is Xt−T . We
shall see later in Chap. 8 that RNNs are simply non-linear autoregressive models
with exogenous variables (NARX). In the special case of the univariate time series
prediction X̂_t = F(X_{t−1}), using T = p previous observations {X_{t−i}}_{i=1}^{p}, only one
neuron in the recurrent layer with weight φ, and no activation function, a RNN is an
AR(p) model with zero drift and geometric weights given by powers of φ,
with |φ| < 1 to ensure that the model is stationary. The order p can be found through
autocorrelation tests of the residual if we make the additional assumption that the
error Xt − X̂t is Gaussian. Example tests include the Ljung–Box and Lagrange
multiplier tests. However, such parametric diagnostic tests should be used with
caution since their underlying conditions may not be satisfied on complex time series
data. Because the weights are time independent, plain RNNs are static time series
models and are not suited to time series data that is not covariance stationary.
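The following minimal sketch (our own illustration, assuming scalar inputs and a single linear recurrent neuron with weight φ) unrolls a plain RNN and confirms that its prediction is a geometrically weighted sum of lagged inputs, mirroring the AR(p) connection above:

```python
# A plain RNN with a single linear recurrent neuron: the prediction is a
# geometrically weighted sum of the lagged inputs.
import numpy as np

def rnn_prediction(x_lags, w_x, phi):
    """Unroll a linear RNN over the lags x_{t-p}, ..., x_{t-1} (oldest first)."""
    z = 0.0
    for x in x_lags:
        z = phi * z + w_x * x          # shared (time-invariant) weights at every step
    return z                           # identity output layer, zero biases

rng = np.random.default_rng(0)
p, w_x, phi = 4, 1.0, 0.6              # |phi| < 1 for stationarity
x_lags = rng.standard_normal(p)

# Equivalent closed form: the weight on x_{t-i} is phi^(i-1) * w_x, geometric in phi
closed_form = sum(phi ** (p - 1 - j) * w_x * x_lags[j] for j in range(p))
print(rnn_prediction(x_lags, w_x, phi), closed_form)
```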
Additional layers can be added to create deep RNNs by stacking them on top
of each other, using the hidden state of the RNN as the input to the next layer.
However, RNNs have difficulty in learning long-term dynamics, due in part to the
vanishing and exploding gradients that can result from propagating the gradients
down through the many unfolded layers of the network. Moreover, RNNs like most
methods in supervised machine learning are inherently designed for stationary data.
Oftentimes, financial time series data is non-stationary.
In Chap. 8, we shall introduce gated recurrent units (GRUs) and long short-term
memory (LSTM) networks; the latter is shown in Fig. 1.4c as a particular form
of recurrent network which provides a solution to this problem by incorporating
memory units. In the language of time series modeling, we shall construct dynamic
RNNs which are suitable for non-stationary data. More precisely, we shall see that
these architectures learn when to forget previous hidden states and when to
update hidden states given new information.
This ability to model hidden states is of central importance in financial time
series modeling and applications in trading. Mixture models and hidden Markov
models have historically been the primary probabilistic methods used in quantitative
finance and econometrics to model regimes and are reviewed in Chap. 2 and Chap. 7
respectively. Readers are encouraged to review Chap. 2, before reading Chap. 7.
3.3 Over-fitting
Undoubtedly the pivotal concern with machine learning, and especially deep
learning, is the propensity for over-fitting given the number of parameters in the
model. This is why skill is needed to fit deep neural networks.
In frequentist statistics, over-fitting is addressed by penalizing the likelihood
function with a penalty term. A common approach is to select models based on
Akaike’s information criteria (Akaike 1973), which assumes that the model error is
Gaussian. The penalty term is in fact a sample bias correction term to the Kullback–
Leibler divergence (the relative entropy) and is applied post-hoc to the unpenalized
maximized log-likelihood.
Machine learning methods such as the least absolute shrinkage and selection oper-
ator (LASSO) and ridge regression more conveniently optimize a penalized loss
function directly. Moreover, the approach is not restricted by distributional assumptions
on the modeling error. LASSO or L1 regularization favors sparser
parameterizations, whereas ridge regression or L2 reduces the magnitude of the
parameters. Regularization is arguably the most important aspect of why machine
learning methods have been so successful in finance and other disciplines.
Conversely, its absence is why neural networks fell out of favor in the finance
industry in the 1990s.
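As an informal illustration (synthetic data and scikit-learn estimators, not code from the text), the following sketch contrasts the two penalties: LASSO drives some coefficients exactly to zero, whereas ridge only shrinks them:

```python
# Contrasting L1 (LASSO) and L2 (ridge) penalties on a sparse linear problem.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [1.5, -2.0, 0.5]                      # only the first three features matter
y = X @ beta + 0.5 * rng.standard_normal(n)

lasso = Lasso(alpha=0.1).fit(X, y)               # penalized loss with alpha * ||b||_1
ridge = Ridge(alpha=10.0).fit(X, y)              # penalized loss with alpha * ||b||_2^2

print("LASSO coefficients:", np.round(lasso.coef_, 2))
print("Ridge coefficients:", np.round(ridge.coef_, 2))
```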
Regularization and information criteria are closely related, a key observation
which enables us to express model selection in terms of information entropy and
hence root our discourse in the works of Shannon (1948), Wiener (1964), Kullback
and Leibler (1951). How to choose weights, the concept of regularization for model
selection, and cross-validation is discussed in Chap. 4.
It turns out that the choice of priors in Bayesian modeling provides a probabilistic
analog to LASSO and ridge regression: L2 regularization is equivalent to a Gaussian
prior and L1 regularization is equivalent to a Laplacian prior. Another important feature of
Bayesian models is that they have a natural built-in mechanism for preventing over-fitting.
Introductory Bayesian modeling is covered extensively in Chap. 2.
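A brief sketch of this standard prior-penalty correspondence, in notation not taken from the text, is as follows:

```latex
% MAP estimation with Gaussian noise:
\hat{w}_{\mathrm{MAP}} = \arg\max_{w}\; \log p(y \mid X, w) + \log p(w).
% A Gaussian prior p(w) \propto \exp(-\lambda \|w\|_2^2) contributes a ridge (L2) penalty:
\hat{w} = \arg\min_{w}\; \|y - Xw\|_2^2 + \lambda' \|w\|_2^2,
% while a Laplacian prior p(w) \propto \exp(-\lambda \|w\|_1) contributes a LASSO (L1) penalty:
\hat{w} = \arg\min_{w}\; \|y - Xw\|_2^2 + \lambda' \|w\|_1.
```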
4 Reinforcement Learning
Recall that supervised learning is essentially a paradigm for inferring the parameters
of a map between input data and an output through minimizing an error over training
samples. Performance generalization is achieved through estimating regularization
parameters on cross-validation data. Once the weights of a network are learned,
they are not updated in response to new data. For this reason, supervised learning
can be considered as an “offline” form of learning, i.e. the model is fitted offline.
Note that we avoid referring to the model as static since it is possible, under certain
types of architectures, to create a dynamical model in which the map between input
and output changes over time. For example, as we shall see in Chap. 8, a LSTM
maintains a set of hidden state variables which result in a different form of the map
over time.
In such learning, a “teacher” provides an exact right output for each data point
in a training set. This can be viewed as “feedback” from the teacher, which for
supervised learning amounts to informing the agent with the correct label each time
the agent classifies a new data point in the training dataset. Note that this is in contrast
to unsupervised learning, where there is no teacher to provide correct answers to the
ML algorithm and, correspondingly, no feedback from a teacher.
An alternative learning paradigm, referred to as "reinforcement learning," models
a sequence of decisions over a state space. The key difference between this setting
and supervised learning is that the feedback from the teacher lies somewhere between
the two extremes of unsupervised learning (no feedback at all) and supervised learning
(feedback in the form of the correct labels). Instead, partial feedback is provided by
"rewards" which encourage a desired behavior, but without explicitly instructing the
agent what exactly it should do, as in supervised learning.
The simplest way to reason about reinforcement learning is to consider machine
learning tasks as a problem of an agent interacting with an environment, as
illustrated in Fig. 1.6.
Fig. 1.6 This figure shows a reinforcement learning agent which performs actions at times
t0 , . . . , tn . The agent perceives the environment through the state variable St . In order to perform
better on its task, feedback on an action at is provided to the agent at the next time step in the form
of a reward Rt
The agent learns about the environment in order to perform better on its task,
which can be formulated as the problem of performing an optimal action. If
an action performed by an agent is always the same and does not impact the
environment, then we simply have a perception task, because learning about
the environment helps to improve performance on this task. For example, you might
have a model for prediction of mortgage defaults where the action is to compute the
default probability for a given mortgage. The agent, in this case, is just a predictive
model that produces a number, and there is no measurement of how the model impacts
the environment. For example, if a model at a large mortgage broker predicted that
all borrowers will default, it is very likely that this would have an impact on the
mortgage market, and consequently future predictions. However, this feedback is
ignored as the agent just performs perception tasks, ideally suited for supervised
learning. Another example is in trading. Once an action is taken by the strategy
there is feedback from the market which is referred to as “market impact.”
Such a learner is configured to maximize a long-run utility function under
some assumptions about the environment. One simple assumption is to treat the
environment as being fully observable and evolving as a first-order Markov process.
A Markov Decision Process (MDP) is then the simplest modeling framework that
allows us to formalize the problem of reinforcement learning. A task solved by
MDPs is the problem of optimal control, which is the problem of choosing action
variables over some period of time, in order to maximize some objective function
that depends both on the future states and action taken. In a discrete-time setting,
the state of the environment St ∈ S is used by the learner (a.k.a. agent) to
decide which action at ∈ A(St ) to take at each time step. This decision is made
dynamic by updating the probabilities of selecting each action conditioned on St .
These conditional probabilities πt (a|s) are referred to as the agent’s policy. The
mechanism for updating the policy as a result of its learning is as follows: one time
step later and as a consequence of its action, the learner receives a reward defined
by a reward function, an immediate reward given the current state St and action
taken at .
As a result of the dynamic environment and the action of the agent, we transition
to a new state St+1 . A reinforcement learning method specifies how to change the
policy so as to maximize the total amount of reward received over the long-run. The
constructs for reinforcement learning will be formalized in Chap. 9 but we shall
informally discuss some of the challenges of reinforcement learning in finance here.
Most of the impressive progress reported recently with reinforcement learning by researchers and companies such as Google's DeepMind or OpenAI (playing video games, walking robots, self-driving cars, etc.) assumes complete observability, using Markovian dynamics. The much more challenging problem, which is a better setting for finance, is how to formulate reinforcement learning for partially observable environments, where one or more variables are hidden. Another, more modest, challenge is how to choose the optimal policy when the environment is fully observable but the dynamic process for how the states evolve over time is unknown. It may be possible, for simple problems, to reason about how the states evolve, perhaps adding constraints on the state-action space. However,
Intuitively, an agent should pick arms that performed well in the past, yet
the agent needs to ensure that no good option has been missed.
Bellman optimality. The following optimal payoff example will likely just serve as a
simple review exercise in dynamic programming, albeit with uncertainty introduced
into the problem. As we follow the mechanics of solving the problem, the example
exposes the inherent difficulty of relaxing our assumptions about the distribution of
the uncertainty.
v_3(x) = R_3(x), ∀x ∈ K,
v_2(x) = max_{k∈K} {R_2(k) + v_3(x − k)}, ∀x ∈ K + 200,
v_1(x) = max_{k∈K} {R_1(k) + v_2(x − k)}, x = 600,
R2 + v3:
M3 \ M2    100     200     300
100        1.5     2.15    2.7
200        2.25    2.9     3.45
300        2.55    3.2     3.75

M1     (M2*, M3*)    v2      R1     R1 + v2
100    (300, 200)    3.45    0.8    4.25
200    (200, 200)    2.9     1.4    4.3
300    (100, 200)    2.25    1.8    4.05
In the above example, we can see that the allocation {200, 200, 200} maximizes
the reward to give v1 (600) = 4.3. While this exercise is a straightforward
application of a Bellman optimality recurrence relation, it provides a glimpse of
the types of stochastic dynamic programming problems that can be solved with
reinforcement learning. In particular, if the fill probabilities are unknown but must
be learned over time by observing the outcome over each period [ti , ti+1 ), then
the problem above cannot be solved by just using backward recursion. Instead we
will move to the framework of reinforcement learning and attempt to learn the
best actions given the data. Clearly, in practice, the example is much too simple
to be representative of real-world problems in finance—the profits will be unknown
and the state space is significantly larger, compounding the need for reinforcement
learning. However, it is often very useful to benchmark reinforcement learning on
simple stochastic dynamic programming problems with closed-form solutions.
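To make the backward recursion concrete, the sketch below implements the three-period Bellman recursion above on a discrete allocation grid. The reward functions are hypothetical stand-ins: in the text they are expected rewards implied by the fill probabilities, so the numbers below are illustrative assumptions rather than the values used in the table.

```python
# Minimal sketch of the backward recursion v3 -> v2 -> v1 on a discrete grid.
# The per-period reward rates are placeholders, not the book's values.

K = [100, 200, 300]                      # admissible allocations per period

def R(period, k):
    # Hypothetical expected reward for allocating k in the given period.
    rates = {1: 0.7, 2: 0.65, 3: 0.5}    # assumed per-period reward rates
    return rates[period] * k / 100.0

# Terminal values: v3(x) = R3(x) for each admissible terminal allocation x.
v3 = {x: R(3, x) for x in K}

# v2(x) = max_k { R2(k) + v3(x - k) } over allocations k with a feasible remainder.
v2 = {}
for x in [k1 + k2 for k1 in K for k2 in K]:
    feasible = [R(2, k) + v3[x - k] for k in K if (x - k) in v3]
    if feasible:
        v2[x] = max(feasible)

# v1(600) = max_k { R1(k) + v2(600 - k) }.
x0 = 600
v1 = max(R(1, k) + v2[x0 - k] for k in K if (x0 - k) in v2)
print("v1(600) =", round(v1, 3))
```

With the book's expected rewards substituted for the placeholder rates, the same recursion reproduces the allocation {200, 200, 200} and v1(600) = 4.3 from the table above.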
In the previous example, we assumed that the problem was static—the variables
in the problem did not change over time. This is the so-called static allocation
problem and is somewhat idealized. Our next example will provide a glimpse of the
types of problems that typically arise in optimal portfolio investment where random
variables are dynamic. The example is also situated in more classical finance theory,
that of a “Markowitz portfolio” in which the investor seeks to maximize a risk-
adjusted long-term return and the wealth process is self-financing.6
6A wealth process is self-financing if, at each time step, any purchase of an additional quantity
of the risky asset is funded from the bank account. Vice versa, any proceeds from a sale of some
quantity of the asset go to the bank account.
W_{t+1} = (1 − u_t) W_t + u_t W_t (1 + φ_t).   (1.13)

r_t = (W_{t+1} − W_t) / W_t = u_t φ_t.   (1.14)
Note this is a random function of the asset price St . We define one-step rewards
Rt for t = 0, . . . , T − 1 as risk-adjusted returns
V^π(s) = max_{u_t} E[ Σ_{t=0}^{T} R_t | S_t = s ] = max_{u_t} E[ Σ_{t=0}^{T} (u_t φ_t − λ u_t² Var[φ_t | S_t]) | S_t = s ].   (1.16)
Equation 1.16 defines an optimal investment problem for T − 1 steps faced
by an investor whose objective is to optimize risk-adjusted returns over each
period. This optimization problem is equivalent to maximizing the long-run
a Note, for the avoidance of doubt, that the risk-aversion parameter must be scaled by a factor of 1/2 to ensure consistency with the finance literature.
u*_t = E[φ_t | S_t] / (2λ Var[φ_t | S_t]),   (1.17)
where we allow for short selling in the ETF (i.e., ut < 0) and borrowing of
cash ut > 1.
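As a quick illustration of Eq. 1.17, the sketch below estimates the conditional mean and variance of the excess return φ_t within coarse buckets of the state variable and forms the corresponding plug-in allocation. The simulated returns, the bucketing scheme, and the risk-aversion value λ are all illustrative assumptions, not part of the book's example.

```python
import numpy as np

# Plug-in estimate of u*_t = E[phi|S] / (2 * lambda * Var[phi|S]).
# Simulated data and the coarse state bucketing are illustrative assumptions.
rng = np.random.default_rng(0)
lam = 8.0                                     # assumed risk-aversion parameter

S = rng.normal(size=5000)                     # observed state variable
phi = 0.02 * np.tanh(S) + 0.05 * rng.normal(size=S.size)   # excess returns

bins = np.quantile(S, np.linspace(0, 1, 11))  # 10 state buckets
idx = np.clip(np.digitize(S, bins) - 1, 0, 9)

def optimal_action(s):
    """Return the estimated u*_t for a new state value s."""
    b = np.clip(np.digitize(s, bins) - 1, 0, 9)
    mask = idx == b
    mean, var = phi[mask].mean(), phi[mask].var()
    return mean / (2.0 * lam * var)

print("u*(S=1.0) ~", round(optimal_action(1.0), 3))
```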
Renaissance Technologies, WorldQuant, D.E. Shaw, and Two Sigma have embraced novel machine learning techniques, although there are mixed degrees of adoption and a healthy skepticism exists that machine learning is a panacea for quantitative trading. In 2015, Bridgewater Associates announced a new artificial intelligence unit, having hired people from IBM Watson with expertise in deep learning. Anthony Ledford, chief scientist at MAN AHL, comments: "It's at an early stage. We have set aside a pot of money for test trading. With deep learning, if all goes well, it will go into test trading, as other machine learning approaches have." Winton Capital Management's CEO David Harding adds: "People started saying, 'There's an amazing new computing technique that's going to blow away everything that's gone before.' There was also a fashion for genetic algorithms. Well, I can tell you none of those companies exist today—not a sausage of them."
Some qualifications are needed to more accurately assess the extent of adoption.
For instance, there is a false line of reasoning that ordinary least squares regres-
sion and logistic regression, as well as Bayesian methods, are machine learning
techniques. Only if the modeling approach is algorithmic, without positing a data
generation process, can the approach be correctly categorized as machine learning.
So regularized regression without use of parametric assumptions on the error
distribution is an example of machine learning. Unregularized regression with, say,
Gaussian error is not a machine learning technique. The functional form of the
input–output map is the same in both cases, which is why we emphasize that the
functional form of the map is not a sufficient condition for distinguishing ML from
statistical methods.
With that caveat, we shall view some examples that not only illustrate some of
the important practical applications of machine learning prediction in algorithmic
trading, high-frequency market making, and mortgage modeling but also provide a
brief introduction to applications that will be covered in more detail in later chapters.
Algorithmic trading is a natural playground for machine learning. The idea behind
algorithmic trading is that trading decisions should be based on data, not intuition.
Therefore, it should be viable to automate this decision-making process using
an algorithm, either specified or learned. The advantages of algorithmic trading
include complex market pattern recognition, reduced human-produced error, the ability to test on historical data, etc. In recent times, as more and more information is digitized, the feasibility and capacity of algorithmic trading have expanded drastically. The number of hedge funds, for example, that apply machine learning to algorithmic trading is steadily increasing.
Here we provide a simple example of how machine learning techniques can
be used to improve traditional algorithmic trading methods, but also provide new
trading strategy suggestions. The example here is not intended to be the “best”
approach, but rather indicative of more out-of-the-box strategies that machine
learning facilitates, with the emphasis on minimizing out-of-sample error by pattern
matching through efficient compression across high-dimensional datasets.
where r^i_{t+h,t} is the return of stock i between times t and t + h, r̃_{t+h,t} is the return of the S&P 500 index over the same period, and the threshold is some target next-period excess portfolio return. Without loss of generality, we could invest in the universe (N =
500), although this is likely to have adverse practical implications such as excessive
transaction costs. We could easily just have restricted the number of stocks to a
subset, such as the top decile of performing stocks in the last period. Framed this
way, the machine learner is thus informing us when our stock selection strategy
will outperform the market. It is largely agnostic to how the stocks are selected,
provided the procedure is systematic and based solely on the historic data provided
to the classifier. It is further worth noting that the decision to hold the customized portfolio has a non-linear relationship with the past returns of the universe.
To make the problem more concrete, let us set h = 5 days. The algorithmic
strategy here is therefore automating the decision to invest in the customized
7 The question of how much data is needed to train a neural network is a central one, with the
immediate concern being insufficient data to avoid over-fitting. The amount of data needed is
complex to assess; however, it is partly dependent on the number of edges in the network and
can be assessed through bias–variance analysis, as described in Chap. 4.
portfolio or the S&P 500 index every week based on the previous 5-day realized
returns of all stocks. To apply machine learning to this decision, the problem
translates into finding the weights in the network between past returns and the
decision to invest in the equally weighted portfolio. For avoidance of doubt, we
emphasize that the interpretation of the optimal weights differs substantially from
Markowitz’s mean–variance portfolios, which simply finds the portfolio weights to
optimize expected returns for a given risk tolerance. Here, we either invest equal
amounts in all stocks of the portfolio or invest the same amount in the S&P 500
index and the weights in the network signify the relevance of past stock returns in
the expected excess portfolio return outperforming the market.
Data
Feature engineering is always important in building models and requires careful
consideration. Since the original price data does not meet several machine learning
requirements, such as stationarity and i.i.d. distributional properties, one needs to
engineer input features to prevent potential “garbage-in-garbage-out” phenomena.
In this example, we take a simple approach by using only the 5-day realized returns
of all S&P 500 stocks.8 Returns are scale-free and no further standardization is
needed. So for each time t, the input features are
X_t = (r^1_{t,t−5}, . . . , r^500_{t,t−5}).   (1.19)
Now we can aggregate the features and labels into a panel indexed by date. Each
column is an entry in Eq. 1.19, except for the last column which is the assigned
label from Eq. 1.18, based on the realized excess stock returns of the portfolio. An
example of the labeled input data (X, G) is shown in Table 1.3.
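A minimal sketch of how such a labeled panel might be assembled with pandas is shown below. The price DataFrame `prices` (daily close prices for the stock universe plus an 'SPX' index column), the threshold `eps`, and the label form are assumptions chosen to be consistent with the description around Eqs. 1.18–1.19; the equally weighted "customized portfolio" is taken to be the simple average of the stock returns.

```python
import pandas as pd

def build_panel(prices: pd.DataFrame, eps: float = 0.0, h: int = 5) -> pd.DataFrame:
    """Build (X, G): past h-day returns of each stock plus a binary label that
    indicates whether the equally weighted portfolio beats the index by eps
    over the next h days. `prices` has one column per stock plus 'SPX'."""
    stocks = prices.drop(columns=["SPX"])
    past = stocks.pct_change(h)                        # r^i_{t, t-h}: features X_t
    fwd = stocks.pct_change(h).shift(-h)               # r^i_{t+h, t}: next-period returns
    fwd_index = prices["SPX"].pct_change(h).shift(-h)  # index return over the same window
    excess = fwd.mean(axis=1) - fwd_index              # equally weighted portfolio excess return

    panel = past.copy()
    panel["excess"] = excess
    panel = panel.dropna()                             # drop rows lacking past or future returns
    panel["G"] = (panel.pop("excess") > eps).astype(int)   # assumed form of the Eq. 1.18 label
    return panel
```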
The process by which we train the classifier and evaluate its performance will be
described in Chap. 4, but this example illustrates how algo-trading strategies can be
crafted around supervised machine learning. Our model problem could be tailored
8 Note that the composition of the S&P 500 changes over time and so we should interpret a feature
as a fixed symbol.
for specific risk-reward and performance reporting metrics such as, for example,
Sharpe or information ratios meeting or exceeding a threshold.
The threshold in Eq. 1.18 is typically chosen to be a small value so that the labels are not too imbalanced. As the value is increased, the problem becomes an "outlier prediction problem"—a highly imbalanced classification problem which requires more advanced sampling and interpolation techniques beyond an off-the-shelf classifier.
In the next example, we shall turn to another important aspect of machine
learning in algorithmic trading, namely execution. How the trades are placed is a
significant aspect of algorithmic trading strategy performance, not only to minimize
price impact of market taking strategies but also for market making. Here we shall
look to transactional data to perfect the execution, an engineering challenge by itself
just to process market feeds of tick-by-tick exchange transactions. The example
considers a market making application but could be adapted for price impact and
other execution considerations in algorithmic trading by moving to a reinforcement
learning framework.
A common mistake is to assume that building a predictive model will result in a
profitable trading strategy. Clearly, the consideration given to reliably evaluating
machine learning in the context of trading strategy performance is a critical
component of its assessment.
Table 1.4 This table shows a snapshot of the limit order book of S&P 500 e-mini futures (ES).
The top half (“sell-side”) shows the ask volumes and prices and the lower half (“buy side”) shows
the bid volumes and prices. The quote levels are ranked by the most competitive at the center (the
“inside market”), outward to the least competitive prices at the top and bottom of the limit order
book. Note that only five bid or ask levels are shown in this example, but the actual book is much
deeper
Bid      Price      Ask
         2170.25    1284
         2170.00    1642
         2169.75    1401
         2169.50    1266
         2169.25     290
 477     2169.00
1038     2168.75
 950     2168.50
1349     2168.25
1559     2168.00
Fig. 1.7 (Top) A snapshot of the limit order book is taken at time t. Limit orders placed by the
market marker are denoted with the “+” symbol—red denotes a bid and blue denotes an ask. A buy
market order subsequently arrives and matches the entire resting quantity of best ask quotes. Then
at event time t + 1 the limit order book is updated. The market maker’s ask has been filled (blue
minus symbol) and the bid rests away from the inside market. (Bottom) A pre-emptive strategy
for avoiding adverse price selection is illustrated. The ask is requoted at a higher ask price. In this
case, the bid is not replaced and the market maker may capture a tick more than the spread if both
orders are filled
Machine learning can be used to predict these price movements (Kearns and
Nevmyvaka 2013; Kercheval and Zhang 2015; Sirignano 2016; Dixon et al. 2018;
Dixon 2018b,a) and thus to potentially avoid adverse selection. Following Cont and
de Larrard (2013) we can treat queue sizes at each price level as input variables.
We can additionally include properties of market orders, albeit in a form which
our machines deem most relevant to predicting the direction of price movements
(a.k.a. feature engineering). In contrast to stochastic modeling, we do not impose
conditional distributional assumptions on the independent variables (a.k.a. features)
nor assume that price movements are Markovian. Chapter 8 presents an RNN for
mid-price prediction from the limit order book history which is the starting point
for the more in-depth study of Dixon (2018b) which includes market orders and
demonstrates the superiority of RNNs compared to other time series methods such
as Kalman filters.
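A sketch of the kind of hand-crafted features this suggests, using only queue sizes at the best bid and ask in the spirit of Cont and de Larrard (2013), is given below. The column names of the snapshot DataFrame and the three-class labeling of the next mid-price move are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def lob_features(book: pd.DataFrame, horizon: int = 1) -> pd.DataFrame:
    """Derive simple queue-size features and a mid-price move label from a series
    of limit order book snapshots with columns
    ['bid_px', 'bid_sz', 'ask_px', 'ask_sz'] (assumed layout)."""
    mid = 0.5 * (book["bid_px"] + book["ask_px"])
    out = pd.DataFrame(index=book.index)
    out["bid_sz"] = book["bid_sz"]
    out["ask_sz"] = book["ask_sz"]
    # Queue imbalance at the inside market, a common predictor of the next move.
    out["imbalance"] = (book["bid_sz"] - book["ask_sz"]) / (book["bid_sz"] + book["ask_sz"])
    out["dmid"] = mid.shift(-horizon) - mid            # forward mid-price change
    out = out.dropna()
    out["label"] = np.sign(out["dmid"]).astype(int)    # -1, 0, +1 direction of the move
    return out.drop(columns=["dmid"])
```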
We reiterate that the ability to accurately predict does not imply profitability
of the strategy. Complex issues concerning queue position, exchange matching
rules, latency, position constraints, and price impact are central considerations for
practitioners. The design of profitable strategies goes beyond the scope of this book
but the reader is referred to de Prado (2018) for the pitfalls of backtesting and
designing algorithms for trading. Dixon (2018a) presents a framework for evaluating
the performance of supervised machine learning algorithms which accounts for
latency, position constraints, and queue position. However, supervised learning is
ultimately not the best machine learning approach as it cannot capture the effect
of market impact and is too inflexible to incorporate more complex strategies.
Chapter 9 presents examples of reinforcement learning which demonstrate how to
capture market impact and also how to flexibly formulate market making strategies.
Beyond the data rich environment of algorithmic trading, does machine learning
have a place in finance? One perspective is that there simply is not sufficient data
for some “low-frequency” application areas in finance, especially where traditional
models have failed catastrophically. The purpose of this section is to serve as a
sobering reminder that long-term forecasting goes far beyond merely selecting the
best choice of machine learning algorithm and why there is no substitute for strong
domain knowledge and an understanding of the limitations of data.
In the USA, a mortgage is a loan collateralized by real estate. Mortgages are used to create securitized financial instruments such as mortgage-backed securities and collateralized mortgage obligations. The analysis of such securities is complex and has changed significantly over the last decade in response to the 2007–2008 financial crisis (Stein 2012).
Unless otherwise specified, a mortgage will be taken to mean a “residential
mortgage,” which is a loan with payments due monthly that is collateralized by a
single family home. Commercial mortgages do exist, covering office towers, rental
apartment buildings, and industrial facilities, but they are different enough to be
considered separate classes of financial instruments. Borrowing money to buy a
house is one of the most common, and largest balance, loans that an individual
borrower is ever likely to commit to. Within the USA alone, mortgages comprise
a staggering $15 trillion in debt. This is approximately the same balance as
the total federal debt outstanding (Fig. 1.8).
Within the USA, mortgages may be repaid (typically without penalty) at will by
the borrower. Usually, borrowers use this feature to refinance their loans in favorable
interest rate regimes, or to liquidate the loan when selling the underlying house. This
Fig. 1.8 Total mortgage debt in the USA compared to total federal debt, millions of dollars, unadjusted for inflation. Source: https://fred.stlouisfed.org/series/MDOAH, https://fred.stlouisfed.org/series/GFDEBTN
Table 1.5 At any time, any US style residential mortgage is in one of several possible states
Symbol Name Definition
P Paid All balances paid, loan is dissolved
C Current All payments due have been paid
3 30-days delinquent Mortgage is delinquent by one payment
6 60-days delinquent Delinquent by 2 payments
9 90+ delinquent Delinquent by 3 or more payments
F Foreclosure Foreclosure has been initiated by the lender
R Real-Estate-Owned (REO) The lender has possession of the property
D Default liquidation Loan is involuntarily liquidated for nonpayment
has the effect of moving a great deal of financial risk off of individual borrowers,
and into the financial system. It also drives a lively and well developed industry
around modeling the behavior of these loans.
The mortgage model description here will generally follow the comprehensive
work in Sirignano et al. (2016), with only a few minor deviations.
Any US style residential mortgage, in each month, can be in one of the several
possible states listed in Table 1.5.
Consider this set of K available states to be K = {P , C, 3, 6, 9, F, R, D}.
Following the problem formulation in Sirignano et al. (2016), we will refer to the
status of loan n at time t as Utn ∈ K, and this will be represented as a probability
vector using a standard one-hot encoding.
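As a small illustration of the encoding, the sketch below maps a loan status in K to its one-hot probability vector; the ordering of the states follows Table 1.5 and is an assumption for illustration.

```python
import numpy as np

K = ["P", "C", "3", "6", "9", "F", "R", "D"]   # assumed ordering of the states in Table 1.5

def one_hot(status: str) -> np.ndarray:
    """Represent a loan status U_t^n as a one-hot probability vector over K."""
    v = np.zeros(len(K))
    v[K.index(status)] = 1.0
    return v

print(one_hot("C"))   # a loan that is current: [0, 1, 0, 0, 0, 0, 0, 0]
```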
If X = (X1 , . . . , XP ) is the input matrix of P explanatory variables, then we
define a probability transition density function g : RP → [0, 1]K×K parameterized
by θ so that
P(U^n_{t+1} = i | U^n_t = j, X^n_t) = g_{i,j}(X^n_t | θ), ∀i, j ∈ K.   (1.20)
Our classifier g_{i,j}(X^n_t | θ) can thus be constructed so that only the probabilities of transition between admissible states are output, and we can apply softmax functions on a subset of the outputs to ensure that Σ_{j∈K} g_{i,j}(X^n_t | θ) = 1 and hence the transition probabilities in each row sum to one.
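The row normalization can be made concrete with the small sketch below, which applies a softmax restricted to the admissible transitions out of a single state. The raw scores and the mask are hypothetical; the restriction of transitions out of state C to {P, C, 3} is an assumption consistent with the example probability vector given shortly and with the discussion of admissible transitions below.

```python
import numpy as np

K = ["P", "C", "3", "6", "9", "F", "R", "D"]

def masked_softmax(scores: np.ndarray, allowed: np.ndarray) -> np.ndarray:
    """Softmax over the admissible transitions only; disallowed entries get probability 0."""
    z = np.where(allowed, scores, -np.inf)
    z = z - z[allowed].max()                 # stabilize the exponentials
    p = np.exp(z)
    return p / p.sum()

# Hypothetical raw scores for the transitions out of state C.
scores = np.array([1.0, 3.0, 0.5, -2.0, -3.0, -3.0, -4.0, -5.0])
allowed = np.array([True, True, True, False, False, False, False, False])  # C -> {P, C, 3}
probs = masked_softmax(scores, allowed)
print(dict(zip(K, np.round(probs, 3))))      # one row of the transition matrix, summing to 1
```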
For the purposes of financial modeling, it is important to realize that both states
P and D are loan liquidation terminal states. However, state P is considered to
be voluntary loan liquidation (e.g., prepayment due to refinance), whereas state
D is considered to be involuntary liquidation (e.g., liquidation via foreclosure and
auction). These states are not distinguishable in the mortgage data itself, but rather
the driving force behind liquidation must be inferred from the events leading up to
the liquidation.
One contributor to mortgage model misprediction in the run up to the 2008
financial crisis was that some (but not all) modeling groups considered loans
liquidating from deep delinquency (e.g., status 9) to be the transition 9 → P
if no losses were incurred. However, behaviorally, these were typically defaults
due to financial hardship, and they would have had losses in a more difficult
house price regime. They were really 9 → D transitions that just happened to be
lossless due to strong house price gains over the life of the mortgage. Considering
them to be voluntary prepayments (status P ) resulted in systematic over-prediction
of prepayments in the aftermath of major house price drops. The matrix above
therefore explicitly excludes this possibility and forces delinquent loan liquidation
to be always considered involuntary.
The reverse of this problem does not typically exist. In most states it is illegal
to force a borrower into liquidation until at least 2 payments have been missed.
Therefore, liquidation from C or 3 is always voluntary, and hence C → P
and 3 → P . Except in cases of fraud or severe malfeasance, it is almost never
economically advantageous for a lender to force liquidation from status 6, but it is
not illegal. Therefore the transition 3 → D is typically a data error, but 6 → D is
merely very rare.
P(U^n_{t+1} | X^n_t) = g(X^n_t | θ) · P(U^n_t) = (0.05, 0.9, 0.05, 0, 0, 0, 0, 0)^T.   (1.22)
Common mortgage models sometimes use additional states, often ones that are
(without additional information) indistinguishable from the states listed above.
Table 1.6 describes a few of these.
The reason for including these is the same as the reason for breaking out states
like REO, status R. It is known on theoretical grounds that some model regressors
from Xtn should not be relevant for R. For instance, since the property is now owned
by the lender, and the loan itself no longer exists, the interest rate (and rate incentive)
of the original loan should no longer have a bearing on the outcome. To avoid
over-fitting due to highly collinear variables, these known-useless variables are then excluded from transition models starting in status R.
This is the same reason status T is sometimes broken out, especially for logistic
regressions. Without an extra status listed in this way, strong rate disincentives
could drive prepayments in the model to (almost) zero, but we know that people
die and divorce in all rate regimes, so at least some minimal level of premature loan
liquidations must still occur based on demographic factors, not financial ones.
Unlike many other models, mortgage models are designed to accurately predict
events a decade or more in the future. Generally, this requires that they be built on
regressors that themselves can be accurately predicted, or at least hedged. Therefore,
it is common to see regressors like FICO at origination, loan age in months, rate
incentive, and loan-to-value (LTV) ratio. Often LTV would be called MTMLTV if
it is marked-to-market against projected or realized housing price moves. Of these
regressors, original FICO is static over the life of the loan, age is deterministic, rates can be hedged, and MTMLTV is rapidly driven down by loan amortization and inflation, thus eliminating the need to predict it accurately far into the future.
Fig. 1.9 Sample mortgage model predicting C → 3 and fit on loans originated in 2001 and observed until 2006, by loan age (in months). The prepayment probabilities are shown on the y-axis
Consider the Freddie Mac loan level dataset of 30 year fixed rate mortgages
originated through 2014. This includes each monthly observation from each loan
present in the dataset. Table 1.7 shows the loan count by year for this dataset.
When a model is fit on 1 million observations from loans originated in 2001 and
observed until the end of 2006, its C → P probability charted against age is shown
in Fig. 1.9.
In Fig. 1.9 the curve observed is the actual prepayment probability of the
observations with the given age in the test dataset, “Model” is the model prediction,
and "Theoretical" is the response to age by a theoretical loan with all other regressors from X_t^n held constant. Two observations are worth noting:
1. The marginal response to age closely matches the model predictions; and
2. The model predictions match actual behavior almost perfectly.
This is a regime where prepayment behavior is largely driven by age. When that same model is run on observations from loans originated in 2006 (the peak of housing prices before the crisis), and observed until 2015, Fig. 1.10 is produced.
Fig. 1.10 Sample mortgage model predicting C → 3 and fit on loans originated in 2006 and observed until 2015, by loan age (in months). The prepayment probabilities are shown on the y-axis
Three observations are warranted from this figure:
1. The observed distribution is significantly different from Fig. 1.9;
2. The model predicted a decline of 25%, but the actual decline was approximately 56%; and
3. Prepayment probabilities are largely indifferent to age.
The regime shown here is clearly not driven by age. In order to provide even this
level of accuracy, the model had to extrapolate far from any of the available data and
“imagine” a regime where loan age is almost irrelevant to prepayment. This model
meets with mixed success. This particular one was fit on only 8 regressors; a more
complicated model might have done better, but the actual driver of this inaccuracy
was a general tightening of lending standards. Moreover, there was no good data
series available before the crisis to represent lending standards.
This model was reasonably accurate even though almost 15 years separated the
start of the fitting data from the end of the projection period, and a lot happened in
that time. Mortgage models in particular place a high premium on model stability,
and the ability to provide as much accuracy as possible even though the underlying
distribution may have changed dramatically from the one that generated the fitting
data. Notice also that cross-validation would not help here, as we cannot draw
testing data from the distribution we care about, since that distribution comes from
the future.
Most importantly, this model shows that the low-dimensional projections of this
(moderately) high-dimensional problem are extremely deceptive. No modeler would
have chosen a shape like the model prediction from Fig. 1.9 as a function of age.
That prediction arises due to the interaction of several variables, interactions that
are not interpretable from one-dimensional plots such as this. As we will see in
subsequent chapters, such complexities in data are well suited to machine learning,
but not without a cost. That cost is understanding the “bias–variance tradeoff”
and understanding machine learning with sufficient rigor for its decisions to be
defensible.
6 Summary
In this chapter, we have identified some of the key elements of supervised machine
learning. Supervised machine learning
1. is an algorithmic approach to statistical inference which, crucially, does not
depend on a data generation process;
2. estimates a parameterized map between inputs and outputs, with the functional
form defined by the methodology such as a neural network or a random forest;
3. automates model selection, using regularization and ensemble averaging tech-
niques to iterate through possible models and arrive at a model with the best
out-of-sample performance; and
4. is often well suited to large sample sizes of high-dimensional non-linear covari-
ates.
The emphasis on out-of-sample performance, automated model selection, and
absence of a pre-determined parametric data generation process is really the key
to machine learning being a more robust approach than many parametric, financial
econometrics techniques in use today. The key to adoption of machine learning in
finance is the ability to run machine learners alongside their parametric counterparts,
observing over time the differences and limitations of parametric modeling based on
in-sample fitting metrics. Statistical tests must be used to characterize the data and
guide the choice of algorithm, such as, for example, tests for stationarity. See Dixon
and Halperin (2019) for a checklist and brief but rounded discussion of some of the
challenges in adopting machine learning in the finance industry.
The capacity to readily exploit a wide variety of data is another advantage of machine learners, but only if that data is of sufficiently high quality and adds a new source of information. We close
this chapter with a reminder of the failings of forecasting models during the financial
crisis of 2008 and emphasize the importance of avoiding siloed data extraction. The
application of machine learning requires strong scientific reasoning skills and is not
a panacea for commoditized and automated decision-making.
7 Exercises
or
V_2(G, p) = { 0 with probability p; 1/(1 − α) with probability (1 − p) }.
1. Given that α is known to Player 2, state the strategy9 that will give Player 2 an
expected payoff, over multiple games, of $1 without knowing p.
2. Suppose now that p is known to both players. In a given round, what is the
optimal choice of α for Player 1?
3. Suppose Player 2 knows with complete certainty, that G will be 1 for a particular
round, what will be the payoff for Player 2?
4. Suppose Player 2 has complete knowledge in rounds {1, . . . , i} and can reinvest
payoffs from earlier rounds into later rounds. Further suppose without loss of
generality that G = 1 for each of these rounds. What will be the payoff for
Player 2 after i rounds? You may assume that each game can be played with
fractional dollar costs, so that, for example, if Player 2 pays Player 1 $1.5 to enter
the game, then the payoff will be 1.5V1 .
Exercise 1.2**: Model Comparison
Recall Example 1.2. Suppose additional information was added such that it is no
longer possible to predict the outcome with 100% probability. Consider Table 1.8
as the results of some experiment.
Now if we are presented with x = (1, 0), the result could be B or C. Consider
three different models applied to this value of x which encode the value A, B, or C.
9 The strategy refers to the choice of weight if Player 2 is to choose a payoff V = wV1 + (1 − w)V2 ,
i.e. a weighted combination of payoffs V1 and V2 .
h((1, 0)) = (0, 0.5, 0.5), Predicts B or C with 50% certainty. (1.25)
1. Show that each model has the same total absolute error, over the samples where
x = (1, 0).
2. Show that all three models assign the same average probability to the values from
Table 1.8 when x = (1, 0).
3. Suppose that the market game in Exercise 1 is now played with models f or g.
B or C each triggers two separate payoffs, V1 and V2 , respectively. Show that the
losses to Player 1 are unbounded when x = (1, 0) and α = 1 − p.
4. Show also that if the market game in Exercise 1 is now played with model h, the
losses to Player 1 are bounded.
Exercise 1.3**: Model Comparison
Example 1.1 and the associated discussion alluded to the notion that some types
of models are more common than others. This exercise will explore that concept
briefly.
Recall Table 1.1 from Example 1.1:
G x
A (0, 1)
B (1, 1)
C (1, 0)
C (0, 0)
For this exercise, consider two models “similar” if they produce the same
projections for G when applied to the values of x from Table 1.1 with probability
strictly greater than 0.95.
In the following subsections, the goal will be to produce sets of mutually
dissimilar models that all produce Table 1.1 with a given likelihood.
1. How many similar models produce Table 1.1 with likelihood 1.0?
2. Produce at least 4 dissimilar models that produce Table 1.1 with likelihood 0.9.
3. How many dissimilar models can produce Table 1.1 with likelihood exactly 0.95?
E(θ) = − Σ_{i=1}^{n} [ G_i ln(g_1(x_i | θ)) + (1 − G_i) ln(g_0(x_i | θ)) ].
Suppose now that there is a probability πi that the class label on a training data
point xi has been correctly set. Write down the error function corresponding to
the negative log-likelihood. Verify that the error function in the above equation is
obtained when πi = 1. Note that this error function renders the model robust to
incorrectly labeled data, in contrast to the usual least squares error function.
Exercise 1.5**: Optimal Action
Derive Eq. 1.17 by setting the derivative of Eq. 1.16 with respect to the time-
t action ut to zero. Note that Eq. 1.17 gives a non-parametric expression for the
optimal action ut in terms of a ratio of two conditional expectations. To be useful in
practice, the approach might need some further modification, as you will see in the next exercise.
Exercise 1.6***: Basis Functions
Instead of the non-parametric specification of an optimal action in Eq. 1.17, we can develop a parametric model of the optimal action. To this end, assume we have a set of basis functions ψ_k(S) with k = 1, . . . , K. Here K is the total number of basis functions—the same as the dimension of your model space.
We now define the optimal action u_t = u_t(S_t) in terms of coefficients θ_k(t) of an expansion over the basis functions ψ_k (for example, we could use spline basis functions, Fourier bases, etc.):
u_t = u_t(S_t) = Σ_{k=1}^{K} θ_k(t) ψ_k(S_t).
Compute the optimal coefficients θ_k(t) by substituting the above equation for u_t into Eq. 1.16 and maximizing it with respect to the set of weights θ_k(t) for the t-th time step.
Appendix
Question 1
Answer: 1, 2.
Answer 3 is incorrect. While it is true that unsupervised learning does not require
a human supervisor to train the model, it is false to presume that the approach is
superior.
g_i(X | θ) = softmax_i(X^T θ) = exp{(X^T θ)_i} / Σ_{k=0}^{K} exp{(X^T θ)_k}.   (1.26)
References
Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle (pp.
267–281).
Akcora, C. G., Dixon, M. F., Gel, Y. R., & Kantarcioglu, M. (2018). Bitcoin risk modeling with
blockchain graphs. Economics Letters, 173(C), 138–142.
Arnold, V. I. (1957). On functions of three variables (Vol. 114, pp. 679–681).
Bollerslev, T. (1986). Generalized autoregressive conditional heteroskedasticity. Journal of Econo-
metrics, 31, 307–327.
Box, G. E. P., & Jenkins, G. M. (1976). Time series analysis, forecasting, and control. San
Francisco: Holden-Day.
Box, G. E. P., Jenkins, G. M., & Reinsel, G. C. (1994). Time series analysis, forecasting, and
control (third ed.). Englewood Cliffs, NJ: Prentice-Hall.
Breiman, L. (2001). Statistical modeling: the two cultures (with comments and a rejoinder by the
author). Statistical Science, 16(3), 199–231.
Cont, R., & de Larrard, A. (2013). Price dynamics in a Markovian limit order market. SIAM Journal
on Financial Mathematics, 4(1), 1–25.
de Prado, M. (2018). Advances in financial machine learning. Wiley.
de Prado, M. L. (2019). Beyond econometrics: A roadmap towards financial machine learning.
SSRN. Available at SSRN: https://ssrn.com/abstract=3365282 or http://dx.doi.org/10.2139/
ssrn.3365282.
DeepMind (2016). DeepMind AI reduces Google data centre cooling bill by 40%. https://
deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-40/.
DeepMind (2017). The story of AlphaGo so far. https://deepmind.com/research/alphago/.
Dhar, V. (2013, December). Data science and prediction. Commun. ACM, 56(12), 64–73.
Dixon, M. (2018a). A high frequency trade execution model for supervised learning. High
Frequency, 1(1), 32–52.
Dixon, M. (2018b). Sequence classification of the limit order book using recurrent neural networks.
Journal of Computational Science, 24, 277–286.
Dixon, M., & Halperin, I. (2019). The four horsemen of machine learning in finance.
Dixon, M., Polson, N., & Sokolov, V. (2018). Deep learning for spatio-temporal modeling:
Dynamic traffic flows and high frequency trading. ASMB.
Dixon, M. F., & Polson, N. G. (2019, Mar). Deep fundamental factor models. arXiv e-prints,
arXiv:1903.07677.
Dyhrberg, A. (2016). Bitcoin, gold and the dollar – a GARCH volatility analysis. Finance Research
Letters.
Elman, J. L. (1991, Sep). Distributed representations, simple recurrent networks, and grammatical
structure. Machine Learning, 7(2), 195–225.
Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., et al. (2017).
Dermatologist-level classification of skin cancer with deep neural networks. Nature,
542(7639), 115–118.
Flood, M., Jagadish, H. V., & Raschid, L. (2016). Big data challenges and opportunities in financial
stability monitoring. Financial Stability Review, (20), 129–142.
Gomber, P., Koch, J.-A., & Siering, M. (2017). Digital finance and fintech: current research and
future research directions. Journal of Business Economics, 7(5), 537–580.
Gottlieb, O., Salisbury, C., Shek, H., & Vaidyanathan, V. (2006). Detecting corporate fraud:
An application of machine learning. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.
1.142.7470.
Graves, A. (2012). Supervised sequence labelling with recurrent neural networks. Studies in
Computational intelligence. Heidelberg, New York: Springer.
Gu, S., Kelly, B. T., & Xiu, D. (2018). Empirical asset pricing via machine learning. Chicago
Booth Research Paper 18–04.
Harvey, C. R., Liu, Y., & Zhu, H. (2016). . . . and the cross-section of expected returns. The Review
of Financial Studies, 29(1), 5–68.
Hornik, K., Stinchcombe, M., & White, H. (1989, July). Multilayer feedforward networks are
universal approximators. Neural Netw., 2(5), 359–366.
Kearns, M., & Nevmyvaka, Y. (2013). Machine learning for market microstructure and high
frequency trading. High Frequency Trading - New Realities for Traders.
Kercheval, A., & Zhang, Y. (2015). Modeling high-frequency limit order book dynamics with
support vector machines. Journal of Quantitative Finance, 15(8), 1315–1329.
Kolmogorov, A. N. (1957). On the representation of continuous functions of many variables by
superposition of continuous functions of one variable and addition. Dokl. Akad. Nauk SSSR,
114, 953–956.
Kubota, T. (2017, January). Artificial intelligence used to identify skin cancer.
Kullback, S., & Leibler, R. A. (1951, 03). On information and sufficiency. Ann. Math. Statist.,
22(1), 79–86.
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955, August). A proposal for the
Dartmouth summer research project on artificial intelligence. http://www-formal.stanford.edu/
jmc/history/dartmouth/dartmouth.html.
Philipp, G., & Carbonell, J. G. (2017, Dec). Nonparametric neural networks. arXiv e-prints,
arXiv:1712.05440.
Philippon, T. (2016). The fintech opportunity. CEPR Discussion Papers 11409, C.E.P.R. Discussion
Papers.
Pinar Saygin, A., Cicekli, I., & Akman, V. (2000, November). Turing test: 50 years later. Minds
Mach., 10(4), 463–518.
Poggio, T. (2016). Deep learning: mathematics and neuroscience. A Sponsored Supplement to
Science Brain-Inspired intelligent robotics: The intersection of robotics and neuroscience, 9–
12.
Shannon, C. (1948). A mathematical theory of communication. Bell System Technical Journal, 27.
Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image
recognition.
Sirignano, J., Sadhwani, A., & Giesecke, K. (2016, July). Deep learning for mortgage risk. ArXiv
e-prints.
Sirignano, J. A. (2016). Deep learning for limit order books. arXiv preprint arXiv:1601.01987.
Sovbetov, Y. (2018). Factors influencing cryptocurrency prices: Evidence from Bitcoin, Ethereum,
Dash, Litcoin, and Monero. Journal of Economics and Financial Analysis, 2(2), 1–27.
Stein, H. (2012). Counterparty risk, CVA, and Basel III.
Turing, A. M. (1995). Computers & thought. Chapter Computing Machinery and Intelligence (pp.
11–35). Cambridge, MA, USA: MIT Press.
Wiener, N. (1964). Extrapolation, interpolation, and smoothing of stationary time series. The MIT
Press.
Chapter 2
Probabilistic Modeling
1 Introduction
Not only is statistical inference from data intrinsically uncertain, but the type of data
and relationships in the data that we seek to model are growing ever more complex.
In this chapter, we turn to probabilistic modeling, a class of statistical models, which
are broadly designed to characterize uncertainty and allow the expression of causal-
ity between variables. Probabilistic modeling is a meta-class of models, including
generative modeling—a class of statistical inference models which maximizes the
joint distribution, p(X, Y ), and Bayesian modeling, employing either maximum
likelihood estimation or “fully Bayesian” inference. Probabilistic graphical models
put the emphasis on causal modeling to simplify statistical inference of parameters
from data. This chapter shall focus on the constructs of probabilistic modeling, as
they relate to the application of both unsupervised and supervised machine learning
in financial modeling.
While it seems natural to extend the previous chapters directly to a probabilistic
neural network counterpart, it turns out that this does not develop the type of
intuitive explanation of complex data that is needed in finance. It also turns out
that neural networks are not a natural fit for probabilistic modeling. In other words,
neural networks are well suited to pointwise estimation but lead to many difficulties
in a probabilistic setting. In particular, they tend to be very data intensive—offsetting
one of the major advantages of Bayesian modeling.
Chapter Objectives
The key learning points of this chapter are:
• Apply Bayesian inference to data using simple probabilistic models;
• Understand how linear regression with probabilistic weights can be viewed as a
simple probabilistic graphical model; and
• Develop more versatile representations of complex data with probabilistic graph-
ical models such as mixture models and hidden Markov models.
Note that section headers ending with * are more mathematically advanced, often
requiring some background in analysis and probability theory, and can be skipped
by the less mathematically inclined reader.
1 Throughout the first part of this chapter we will largely remain within the realm of parametric
analysis. However, we shall later see examples of Bayesian methods for non- and semi-parametric
modeling.
2 Bayesian vs. Frequentist Estimation
Both Bayesian and classical econometricians aim to learn more about a set of
parameters, say θ . In the classical mindset, θ contains fixed but unknown elements,
usually associated with an underlying population of interest (e.g., the mean and
variance for credit card debt among US college students). Bayesians share with
classicals the interest in θ and the definition of the population of interest.
However, they assign ex ante a prior probability to θ , labeled p (θ ), which
usually takes the form of a probability distribution with “known” moments. For
example, Bayesians might state that the aforementioned debt amount has a normal
distribution with mean $3000 and standard deviation of $1500. This prior may be
based on previous research, related findings in the published literature, or it may be
completely arbitrary. In any case, it is an inherently subjective construct.
Both schools then develop a theoretical framework that relates θ to observed
data, say a “dependent variable” y, and a matrix of explanatory variables X. This
relationship is formalized via a likelihood function, say p (y | θ , X) to stay with
Bayesian notation. To stress, this likelihood function takes the exact same analytical
form for both schools.
The classical analyst then collects a sample of observations from the underlying
population of interest and, combining these data with the formulated statistical
model, produces an estimate of θ , say θ̂. Any and all uncertainty surrounding the
accuracy of this estimate is solely related to the notion that results are based on
a sample, not data for the entire population. A different sample (of equal size)
may produce slightly different estimates. Classicals express this uncertainty via
“standard errors” assigned to each element of θ̂. They also have a strong focus on
the behavior of θ̂ as the sample size increases. The behavior of estimators under
increasing sample size falls under the heading of “asymptotic theory.”
The properties of most estimators in the classical world can only be assessed
“asymptotically,” i.e. are only understood for the hypothetical case of an infinitely
large sample. Also, virtually all specification tests used by frequentists hinge on
asymptotic theory. This is a major limitation when the data size is finite.
Bayesians, in turn, combine prior and likelihood via Bayes’ rule to derive the
posterior distribution of θ as
p(θ | y, X) = p(θ, y | X) / p(y | X) = p(θ) p(y | θ, X) / p(y | X) ∝ p(θ) p(y | θ, X).   (2.1)
Bayesian Learning
Simply put, the posterior distribution is just an updated version of the prior. More
specifically, the posterior is proportional to the prior multiplied by the likelihood.
The likelihood carries all the current information about the parameters and the data.
If the data has high informational content (i.e., allows for substantial learning about
θ ), the posterior will generally look very different from the prior. In most cases, it is
much “tighter” (i.e., has a much smaller variance) than the prior. There is no room
in Bayesian analysis for the classical notions of “sampling uncertainty,” and less a
priori focus on the “asymptotic behavior” of estimators.2
Taking the Bayesian paradigm to its logical extreme, Duembgen and Rogers
(2014) suggest to “estimate nothing.” They propose the replacement of the industry-
standard estimation-based paradigm of calibration with an approach based on
Bayesian techniques, wherein a posterior is iteratively obtained from a prior, namely
stochastic filtering and MCMC. Calibration attempts to estimate, and then uses the
estimates as if they were known true values—ignoring the estimation error. On the
contrary, an approach based on a systematic application of the Bayesian principle
is consistent: “There is never any doubt about what we should be doing to hedge
or to mark-to-market a portfolio of derivatives, and whatever we do today will
be consistent with what we did before, and with what we will do in the future.”
Moreover, Bayesian model comparison methods enable one to easily compare
models of very different types.
Marginal Likelihood
The term in the denominator of Eq. 2.1 is called the “marginal likelihood,” it is not a
function of θ , and can usually be ignored for most components of Bayesian analysis.
Thus, we usually work only with the numerator (i.e., prior times likelihood) for
inference about θ . From Eq. 2.1 we know that this expression is proportional (“∝
”) to the actual posterior. However, the marginal likelihood is crucial for model
comparison, so we will learn a few methods to derive it as a by-product of or
following the actual posterior analysis. For some choices of prior and likelihood
there exist analytical solutions for this term.
In summary, frequentists start with a “blank mind” regarding θ . They collect data
to produce an estimate θ̂. They formalize the characteristics and uncertainty of θ̂ for
a finite sample context (if possible) and a hypothetical large sample (asymptotic)
case.
Bayesians collect data to update a prior, i.e. a pre-conceived probabilistic notion
regarding θ.
2 However, at times Bayesian analysis does rest on asymptotic results. Naturally, the general notion
that a larger sample, i.e. more empirical information, is better than a small one also holds for
Bayesian analysis.
3 Frequentist Inference from Data
Let us begin this section with a simple example which illustrates frequentist
inference.
p(y_i | θ) = θ^{y_i} (1 − θ)^{1−y_i},
where, for 1 ≤ i ≤ n, y_i is the result of the ith toss. What is the probability density of y?
Since the coin tosses are independent, the probability density of y, i.e. the
joint probability density of y1 , y2 , . . . , yn , is given by the product rule
p(y | θ) = p(y_1, y_2, . . . , y_n | θ) = ∏_{i=1}^{n} θ^{y_i} (1 − θ)^{1−y_i} = θ^{Σ_i y_i} (1 − θ)^{n − Σ_i y_i}.
0 0 1 0 0 1 0 0 0 0 0 1 0 1 1 1 0 0 0 0 1 0 1 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 1 0 1 0 0
Both the frequentists and Bayesians regard the density p(y | θ) as a likelihood. Bayesians maintain this notation, whereas frequentists reinterpret p(y | θ), viewing it as a function of the unknown parameter θ for fixed observed data y and writing it as the likelihood L(θ).
Notice that we have merely reinterpreted this probability density, whereas its
functional form remains the same, in our case:
L(θ) = θ^{Σ_i y_i} (1 − θ)^{n − Σ_i y_i}.
To maximize the likelihood, we work with the log-likelihood, ln L(θ) = Σ_i y_i ln θ + (n − Σ_i y_i) ln(1 − θ), whose derivative with respect to θ is Σ_i y_i / θ − (n − Σ_i y_i) / (1 − θ). Equating this to zero and solving for θ, we obtain the maximum likelihood estimate for θ:
θ̂_ML = Σ_i y_i / n.
To confirm that this value does indeed maximize the log-likelihood, we take the
second derivative with respect to θ ,
∂²/∂θ² ln L(θ) = − Σ_i y_i / θ² − (n − Σ_i y_i) / (θ − 1)² < 0.
Since this quantity is strictly negative for all 0 < θ < 1, it is negative at θ̂ML , and
we do indeed have a maximum.
Note that θ̂_ML depends only on the sum of the y_i's. We can now answer our question: if in a sequence of 50 coin tosses exactly twelve heads come up, then
θ̂_ML = Σ_i y_i / n = 12/50 = 0.24.
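The same estimate can be obtained numerically by evaluating the log-likelihood on a grid of θ values. The short sketch below does this for the observed 12 heads out of n = 50 tosses; the grid resolution is an arbitrary choice.

```python
import numpy as np

heads, n = 12, 50
theta = np.linspace(0.001, 0.999, 999)        # grid of candidate parameter values

# Log-likelihood of the observed tosses: sum(y) * ln(theta) + (n - sum(y)) * ln(1 - theta)
log_lik = heads * np.log(theta) + (n - heads) * np.log(1 - theta)

print("argmax of the likelihood:", theta[np.argmax(log_lik)])   # ~ 12/50 = 0.24
```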
4 Assessing the Quality of Our Estimator: Bias and Variance
When we obtained our maximum likelihood estimate, we plugged in a specific number for Σ_i y_i, namely 12. In this sense the estimator is an ordinary function. However, we could also view it as a function of the random sample,
θ̂_ML = Σ_i Y_i / n,
each Y_i being a random variable. A function of a random variable is itself a random variable, so we can compute its expectation and variance.
In particular, an expectation of the error
e = θ̂ − θ
is known as bias,
bias(θ̂, θ) = E(e) = E[θ̂ − θ] = E[θ̂] − E[θ].
In our case it is
E[θ̂_ML − θ] = E[θ̂_ML] − θ = E[Σ_i Y_i / n] − θ = (1/n) Σ_i E[Y_i] − θ = (1/n) · n (θ · 1 + (1 − θ) · 0) − θ = 0,
we see that the bias is zero, so this particular maximum likelihood estimator is
unbiased (otherwise it would be biased).
What about the variance of this estimator?
Var[θ̂_ML] = Var[Σ_i Y_i / n] = (1/n²) Σ_i Var[Y_i] (using independence of the Y_i) = (1/n²) · n · θ(1 − θ) = (1/n) θ(1 − θ),
and we see that the variance of the estimator depends on the true value of θ .
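Both results are easy to verify by simulation. The sketch below repeatedly draws samples of n = 50 Bernoulli(θ) tosses and compares the empirical bias and variance of θ̂_ML with the theoretical values; the choice θ = 0.24 simply mirrors the worked example.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, trials = 0.24, 50, 100_000

# theta_hat_ML = (sum of y_i) / n for each simulated sample of n tosses
estimates = rng.binomial(n, theta, size=trials) / n

print("empirical bias      :", estimates.mean() - theta)   # ~ 0 (unbiased)
print("empirical variance  :", estimates.var())            # ~ theta * (1 - theta) / n
print("theoretical variance:", theta * (1 - theta) / n)
```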
For multivariate θ, it is useful to examine the error covariance matrix given by
P = E[e e^T] = E[(θ̂ − θ)(θ̂ − θ)^T].
When estimating θ, our goal is to minimize the estimation error. This error can be expressed using loss functions. Supposing our parameter vector θ takes values on some space Θ, a loss function L(θ̂, θ) is a mapping from Θ × Θ into R which quantifies the "loss" incurred by estimating θ with θ̂.
We have already seen loss functions in earlier chapters, but we shall restate the
definitions here for completeness. One frequently used loss function is the absolute
error,
L_1(θ̂, θ) := ‖θ̂ − θ‖_2 = √((θ̂ − θ)^T (θ̂ − θ)),
where ‖·‖_2 is the Euclidean norm (it coincides with the absolute value when Θ ⊆ R). One advantage of the absolute error is that it has the same units as θ.
We use the squared error perhaps even more frequently than the absolute error:
L_2(θ̂, θ) := ‖θ̂ − θ‖²_2 = (θ̂ − θ)^T (θ̂ − θ).
While the squared error has the disadvantage, compared to the absolute error, of being expressed in quadratic units of θ rather than the units of θ, it does not contain the cumbersome square root and is therefore easier to deal with mathematically.
The expected value of a loss function is known as the statistical risk of the
estimator.
The statistical risks corresponding to the above loss functions are, respectively, the mean absolute error,
MAE(θ̂, θ) := R_1(θ̂, θ) := E[L_1(θ̂, θ)] := E[‖θ̂ − θ‖_2] = E[√((θ̂ − θ)^T (θ̂ − θ))],
and, by far the most commonly used, mean squared error (MSE),
MSE(θ̂, θ) := R_2(θ̂, θ) := E[L_2(θ̂, θ)] := E[‖θ̂ − θ‖²_2] = E[(θ̂ − θ)^T (θ̂ − θ)].
The square root of the mean squared error is called the root mean squared error
(RMSE). The minimum mean squared error (MMSE) estimator is the estimator that
minimizes the mean squared error.
5 The Bias–Variance Tradeoff (Dilemma) for Estimators
It can easily be shown that the mean squared error separates into a variance and a bias term:
MSE(θ̂, θ) = tr Var[θ̂] + ‖bias(θ̂, θ)‖²_2,
where tr(·) is the trace operator. In the case of a scalar θ, this expression simplifies to
MSE(θ̂, θ) = Var[θ̂] + bias(θ̂, θ)².
In other words, the MSE is equal to the sum of the variance of the estimator and
the squared bias.
The bias–variance tradeoff or bias–variance dilemma consists in the need to
minimize these two sources of error, the variance and bias of an estimator, in order to
minimize the mean squared error. Sometimes there is a tradeoff between minimizing
bias and minimizing variance to achieve the least possible MSE. The concept of a
bias–variance tradeoff in machine learning will be revisited in Chap. 4, within the
context of statistical learning theory.
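The tradeoff is easy to see numerically. In the sketch below a deliberately biased "shrinkage" version of the Bernoulli estimator, θ̃ = (Σ_i y_i + a)/(n + 2a), trades a little bias for a reduction in variance and, for this small sample, achieves a lower MSE than the unbiased MLE; the shrinkage constant a is an arbitrary illustrative choice, not something used in the text.

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, a, trials = 0.24, 50, 2.0, 200_000

heads = rng.binomial(n, theta, size=trials)
mle = heads / n                          # unbiased, but higher variance
shrunk = (heads + a) / (n + 2 * a)       # biased towards 1/2, but lower variance

for name, est in [("MLE", mle), ("shrinkage", shrunk)]:
    bias = est.mean() - theta
    mse = ((est - theta) ** 2).mean()
    print(f"{name:9s}  bias={bias:+.4f}  var={est.var():.5f}  mse={mse:.5f}")
```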
6 Bayesian Inference from Data

p(θ | y) = p(θ) p(y | θ) / p(y)
The following example shall illustrate the application of Bayesian inference for
the Bernoulli parameter θ .
p(θ) = 1 / (b − a)
Let us derive the posterior based on this prior assumption. Bayes' theorem tells us that
p(θ | x_{1:n}) ∝ p(x_{1:n} | θ) p(θ),
where ∝ stands for "proportional to," so the left- and right-hand sides are equal up to a normalizing constant which depends on the data but not on θ. The posterior is
p(θ | x_{1:n}) ∝ p(x_{1:n} | θ) p(θ) = θ^{Σ_i x_i} (1 − θ)^{n − Σ_i x_i} · 1.
If the prior is uniform, i.e. p(θ ) = 1, then after n = 5 trials we see from the
data that
From the shape of the resulting pdf, we recognize it as the pdf of the Beta distribution, Beta(θ | Σ_i x_i + 1, n − Σ_i x_i + 1).
Let us now assume that we have tossed the coin fifty times and, out of those fifty coin tosses, we get heads on twelve. Then our posterior distribution becomes Beta(θ | 13, 39), i.e. p(θ | x_{1:50}) ∝ θ^{12}(1 − θ)^{38}.
Fig. 2.1 The posterior distribution of θ after successive numbers of trials. The x-axis shows the values of theta. The shape of the distribution tightens as the Bayesian model is observed to "learn"
Note that we did not need to evaluate the marginal likelihood in the example
above, only the θ dependent terms were evaluated for the purpose of the plot. Thus
each plot in Fig. 2.1 is only representative of the posterior up to a scaling.
Continuing with the above example, let us question our prior. Is it somewhat too
uninformative? After all, most coins in the world are probably close to being
unbiased. We could use a Beta(α, β) prior instead of the Uniform prior. Picking
α = β = 2, for example, will give a distribution on [0, 1] centered on 1/2, incorporating a prior assumption that the coin is unbiased.
The pdf of this prior is given by
p(θ) = (1 / B(α, β)) θ^{α−1} (1 − θ)^{β−1}, ∀θ ∈ [0, 1],
Why did we pick this prior distribution? One reason is that its pdf is defined
over the compact interval [0, 1], unlike, for example, the normal distribution, which
has tails extending to −∞ and +∞. Another reason is that we are able to choose
parameters which center the pdf at θ = 1/2, incorporating the prior assumption that the coin is unbiased.
If we initially assume a Beta(θ | α = 2, β = 2) prior, then the posterior
expectation is
E[θ | x_{1:n}] = (α + Σ_i x_i) / (α + Σ_i x_i + β + n − Σ_i x_i) = (α + Σ_i x_i) / (α + β + n) = (2 + 12) / (2 + 2 + 50) = 7/27 ≈ 0.259.
Notice that both the prior and posterior belong to the same probability distribu-
tion family. In Bayesian estimation theory we refer to such prior and posterior as
conjugate distributions (with respect to this particular likelihood function).
Unsurprisingly, since now our prior assumption is that the coin is unbiased,
$$\tfrac{12}{50} < \mathbb{E}[\theta \mid x_{1:n}] < \tfrac{1}{2}.$$
Perhaps surprisingly, we are also somewhat more certain about the posterior (its
variance is smaller) than when we assumed the uniform prior.
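As a quick numerical check of this observation, a minimal sketch using scipy compares the two posteriors directly:

```python
from scipy import stats

heads, n = 12, 50

# Posterior under a uniform prior: Beta(heads + 1, n - heads + 1)
post_uniform = stats.beta(heads + 1, n - heads + 1)
# Posterior under a Beta(2, 2) prior: Beta(2 + heads, 2 + n - heads)
post_beta22 = stats.beta(2 + heads, 2 + n - heads)

print("uniform prior  : mean=%.4f, var=%.5f" % (post_uniform.mean(), post_uniform.var()))
print("Beta(2,2) prior: mean=%.4f, var=%.5f" % (post_beta22.mean(), post_beta22.var()))
# The Beta(2,2) prior pulls the mean towards 1/2 and yields a slightly smaller variance.
```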
Notice that the results of Bayesian estimation are sensitive—to varying degree in
each specific case—to the choice of prior distribution:
$$p(\theta \mid \alpha, \beta) = \frac{(\alpha+\beta-1)!}{(\alpha-1)!\,(\beta-1)!}\,\theta^{\alpha-1}(1-\theta)^{\beta-1}. \qquad (2.6)$$
So for the above example, this marginal likelihood would be evaluated with α = 13
and β = 39 since there are 12 observed 1s and 38 observed 0s.
What would happen if, instead of observing all twelve coin tosses at once, we
(i) considered each coin toss in turn; (ii) obtained our posterior; and (iii) used that
posterior as a prior for an update based on the information from the next coin toss?
The above two formulae give the answer to this question. We start with our initial
prior,
$$\mathrm{Beta}(\theta \mid \alpha, \beta),$$
and after observing the first coin toss $x_1$ the posterior becomes
$$\mathrm{Beta}(\theta \mid \alpha + x_1,\; \beta + 1 - x_1).$$
Using this posterior as a prior before the second coin toss, we obtain the next
posterior as
$$\mathrm{Beta}(\theta \mid \alpha + x_1 + x_2,\; \beta + 2 - x_1 - x_2).$$
Proceeding along these lines, after all ten coin tosses, we end up with
$$\mathrm{Beta}\Big(\theta \,\Big|\, \alpha + \textstyle\sum_i x_i,\; \beta + n - \sum_i x_i\Big),$$
the same result that we would have attained if we processed all ten coin tosses as a
single “batch,” as we did in the previous section.
This insight forms the basis for a sequential or iterative application of Bayes’
theorem—sequential Bayesian updates—the foundation for real-time Bayesian
filtering. In machine learning, this mechanism for updating our beliefs in response
to new data is referred to as “online learning.”
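A minimal sketch of this online Beta–Bernoulli update, using arbitrary simulated coin tosses for illustration, is:

```python
import numpy as np

rng = np.random.default_rng(1)
tosses = rng.binomial(1, 0.3, size=50)   # simulated coin tosses (illustrative)

alpha, beta = 2.0, 2.0                   # Beta(2, 2) prior
for x in tosses:                         # sequential (online) Bayesian updates
    alpha += x
    beta += 1 - x

# Processing the data one toss at a time gives the same posterior
# as a single batch update over all tosses.
print("posterior Beta(%.0f, %.0f), mean=%.3f" % (alpha, beta, alpha / (alpha + beta)))
```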
$$p(\theta \mid y', y) = \frac{p(y' \mid \theta)\,p(\theta \mid y)}{\int_{\theta \in \Theta} p(y' \mid \theta)\,p(\theta \mid y)\,d\theta}. \qquad (2.7)$$
6.2.2 Prediction
If the sample size is large and the likelihood function “well-behaved” (which usually
means a simple function with a clear maximum, plus a small dimension for θ),
classical and Bayesian analysis are essentially on the same footing and will produce
virtually identical results. This is because the likelihood function and empirical data
will dominate any prior assumptions in the Bayesian approach.
If the sample size is large but the dimensionality of θ is high and the likelihood
function is less tractable (which usually means highly non-linear, with local
maxima, flat spots, etc.), a Bayesian approach may be preferable purely from a
computational standpoint. It can be very difficult to attain reliable estimates via
maximum likelihood estimation (MLE) techniques, but it is usually straightforward
to derive a posterior distribution for the parameters of interest using Bayesian
estimation approaches, which often operate via sequential draws from known
distributions.
If the sample size is small, Bayesian analysis can have substantial advantages
over a classical approach. First, Bayesian results do not depend on asymptotic theory
to hold for their interpretability. Second, the Bayesian approach combines the sparse
data with subjective priors. Well-informed priors can increase the accuracy and
efficiency of the model. Conversely, of course, poorly chosen priors3 can produce
misleading posterior inference in this case. Thus, under small sample conditions, the
choice between Bayesian and classical estimation often distills to a choice between
trusting the asymptotic properties of estimators and trusting one’s priors.
7 Model Selection
Beyond the inference challenges described above, there are a number of problems
with the classical approach to model selection which Bayesian statistics solves.
For example, it has been shown by Breiman (2001) that the following three linear
regression models have residual sums of squares (RSS) that are all within 1% of each other:
³ For example, priors that place substantial probability mass on practically infeasible ranges of
θ—this often happens inadvertently when parameter transformations are involved in the analysis.
You could, for example, think of each model being used to find the fair price of an
asset Y , where each Xi are the contemporaneous (i.e., measured at the same time)
firm characteristics.
• Which model is better?
• How would your interpretation of which variables are the most important change
between models?
• Would you arrive at different conclusions about the market signals if you picked,
say, Model 1 versus Model 2?
• How would you eliminate some of the ambiguity resulting from this outcome of
statistical inference?
Of course one direction is to simply analyze the F-scores of each independent
variable and select the model which has the most statistically significant fitted
coefficients. But this is unlikely to reliably discern the models when the fitted
coefficients are comparable in statistical significance.
It is well known that goodness-of-fit measures, such as RSSs and F-scores,
do not scale well to more complex datasets where there are several independent
variables. This leads to modelers drawing different conclusions about the same
data, and is famously known as the "Rashomon effect." Yet many studies and
models in finance are still built this way and make use of information criteria and
regularization techniques such as Akaike's information criterion (AIC).
A limitation for more robust frequentist model comparison is the requirement
that the models being compared are “nested.” That is, one model should be a subset
of the other model being compared, e.g.
$$\text{Model 1} \quad \hat{Y} = \beta_0 + \beta_1 X_1 + \beta_2 X_2 \qquad (2.12)$$
$$\text{Model 2} \quad \hat{Y} = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_{11} X_1^2. \qquad (2.13)$$
Model 1 is nested in Model 2 and we refer to the model selection as a “nested model
selection.” In contrast to classical model selection, Bayesian model selection need
not be restricted to nested models.
For example, consider the case of flipping a coin n times with an unknown bias
θ ∈ Θ ≡ [0, 1]. The data $x^n = \{x_i\}_{i=1}^n$ is now i.i.d. Bernoulli and if we observe the
number of heads X = x, the model is the family of binomial distributions
$$\mathcal{M} := \Big\{P[X = x \mid n, \theta] = \binom{n}{x}\,\theta^x (1-\theta)^{n-x}\Big\}_{\theta \in \Theta}. \qquad (2.14)$$
Each one of these distributions is a potential explanation of the observed head count
x. In the Bayesian method, we maintain a belief over which elements in the model
are considered plausible by reasoning about p(θ | x n ). See Example 1.1 for further
details of this experiment.
We start by re-writing the Bayesian inference formula with explicit inclusion of
model indexes. You will see that we have dropped X since the exact composition of
explanatory data is implicitly covered by model index Mi :
$$p(\theta_i \mid x^n, M_i) = \frac{p(\theta_i \mid M_i)\,p(x^n \mid \theta_i, M_i)}{p(x^n \mid M_i)}, \quad i = 1, 2. \qquad (2.15)$$
This expression shows that differences across models can occur due to differing
priors for θ and/or differences in the likelihood function. The marginal likelihood in
the denominator will usually also differ across models.
So far, we just considered parameter inference when the model has already been
selected. The Bayesian setting offers a very flexible framework for the comparison
of competing models—this is formally referred to as “model selection.” The models
do not have to be nested—all that is required is that the competing specifications
share the same x n .
Suppose there are two models, denoted M1 and M2 , each associated with a
respective set of parameters θ 1 and θ 2 . We seek the most “probable” model given
the observed data x n . We first apply Bayes’ rule to derive an expression for the
posterior model probability
$$p(M_i \mid x^n) = \frac{p(M_i)\,p(x^n \mid M_i)}{\sum_j p(x^n \mid M_j)\,p(M_j)}, \quad i = 1, 2. \qquad (2.16)$$
Here p (Mi ) is a prior distribution over models that we have selected; a common
practice is to set this to a uniform distribution over the models. The value
p (x n | Mi ) is a marginal likelihood function—a likelihood function over the space
of models in which the parameters have been marginalized out:
$$p(x^n \mid M_i) = \int_{\theta_i \in \Theta_i} p(x^n \mid \theta_i, M_i)\,p(\theta_i \mid M_i)\,d\theta_i. \qquad (2.17)$$
Taking the ratio of posterior model probabilities gives
$$\frac{p(M_1 \mid x^n)}{p(M_2 \mid x^n)} = \frac{p(M_1)\,p(x^n \mid M_1)}{p(M_2)\,p(x^n \mid M_2)}, \qquad (2.18)$$
which is simply the prior odds multiplied by the ratio of the evidence for each model.
Under equal model priors (i.e., p (M1 ) = p (M2 )) this reduces to the Bayes’
factor for Model 1 vs. 2, i.e.
$$B_{1,2} = \frac{p(x^n \mid M_1)}{p(x^n \mid M_2)}, \qquad (2.19)$$
which is simply the ratio of marginal likelihoods for the two models. Since Bayes'
factors can become quite large, we usually prefer to work with the logged version, $\ln B_{1,2}$.
Suppose now that a set of models {Mi } may be used to explain the data x n . θ i
represents the parameters of model Mi . Which model is “best”?
We answer this question by estimating the posterior distribution over models:
$$p(M_i \mid x^n) = \frac{\Big[\int_{\theta_i \in \Theta_i} p(x^n \mid \theta_i, M_i)\,p(\theta_i \mid M_i)\,d\theta_i\Big]\, p(M_i)}{\sum_j p(x^n \mid M_j)\,p(M_j)}. \qquad (2.21)$$
As before we can compare any two models via the posterior odds, or if we
assume equal priors, by the BFs. Model selection is always relative rather than
absolute. We must always pick a reference model M2 and decide whether model
M1 has more strength. We use Jeffreys’ scale to assess the strength of evidence as
shown in Table 2.1.
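To make the evidence calculation concrete, a minimal sketch compares a fixed-bias model M₁ (θ = 0.5) against a model M₂ with a uniform prior on θ; under the uniform prior the marginal likelihood of x heads in n tosses has a closed form via the Beta function. The data values below are illustrative.

```python
from math import comb, log
from scipy.special import betaln

def log_evidence_fixed(x, n, theta0=0.5):
    """log p(x | n, M1): binomial likelihood with the bias fixed at theta0."""
    return log(comb(n, x)) + x * log(theta0) + (n - x) * log(1 - theta0)

def log_evidence_uniform(x, n):
    """log p(x | n, M2): bias integrated out under a uniform prior,
    i.e. C(n, x) * B(x + 1, n - x + 1), which equals 1 / (n + 1)."""
    return log(comb(n, x)) + betaln(x + 1, n - x + 1)

x, n = 12, 50   # illustrative data: 12 heads out of 50 tosses
log_B12 = log_evidence_fixed(x, n) - log_evidence_uniform(x, n)
print("log Bayes factor B_{1,2} =", round(log_B12, 3))
# A positive value favours M1 (fair coin); a negative value favours M2.
```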
$$p(\theta \mid \alpha, \beta) = \frac{(\alpha+\beta-1)!}{(\alpha-1)!\,(\beta-1)!}\,\theta^{\alpha-1}(1-\theta)^{\beta-1} \qquad (2.26)$$
•! Frequentist Approach
An interesting aside here is that a frequentist hypothesis test would reject the
null hypothesis θ = 0.5 at the α = 0.05 level. The probability of generating
at least 115 heads under model M1 is approximately 0.02. The probability
of generating at least 115 tails is also 0.02. So a two-sided test would give a
p-value of approximately 4%.
•! Hyperparameters
We note in passing that the prior distribution in the example above does
not involve any parameterization. If the prior is a parameterized distribution,
then the parameters of the prior are referred to as hyperparameters. The
distributions of the hyperparameters are known as hyperpriors. "Bayesian
hierarchical modeling" refers to statistical models written in multiple levels
(hierarchical form), in which the parameters of the posterior distribution are
estimated using the Bayesian method.
The model evidence performs a vital role in the prevention of model over-fitting.
Models that are too simple are unlikely to generate the dataset. On the other hand,
models that are too complex can generate many possible data sets, but they are
unlikely to generate any particular dataset at random. Bayesian inference therefore
automates the determination of model complexity using the training data x n alone
and does not need special “fixes” (a.k.a regularization and information criteria) to
prevent over-fitting. The underlying philosophical principle of selecting the simplest
model, if given a choice, is known as “Occam’s razor” (Fig. 2.2).
We maintain a belief over which parameters in the model we consider plausible
by reasoning with the posterior
$$p(\theta_i \mid x^n, M_i) = \frac{p(x^n \mid \theta_i, M_i)\,p(\theta_i \mid M_i)}{p(x^n \mid M_i)}, \qquad (2.27)$$
and we may choose the parameter value which maximizes the posterior distribution
(MAP).
Marginal likelihoods can also be used to derive model weights in Bayesian model
averaging (BMA). Informally, the intuition behind BMA is that we are never
fully convinced that a single model is the correct one for our analysis at hand.
There are usually several (and often millions of) competing specifications. To
explicitly incorporate this notion of “model uncertainty,” one can estimate every
model separately, compute relative probability weights for each model, and then
generate model-averaged posterior distributions for the parameters (and predictions)
of interest. We often choose BMA when there is not strong enough evidence for any
particular model. Prediction of a new point y∗ under BMA is given over m models
as the weighted average
$$p(y_* \mid y) = \sum_{i=1}^{m} p(y_* \mid y, M_i)\,p(M_i \mid y). \qquad (2.28)$$
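A minimal, self-contained sketch of BMA prediction for the two coin models used above (fixed fair coin vs. uniform prior on the bias), with illustrative data, is:

```python
import numpy as np
from scipy.special import comb, betaln

x, n = 12, 50                      # illustrative data: 12 heads out of 50 tosses

# Model evidences: M1 fixes theta = 0.5, M2 places a uniform prior on theta
log_ev = np.array([
    np.log(comb(n, x)) + x * np.log(0.5) + (n - x) * np.log(0.5),   # p(x | M1)
    np.log(comb(n, x)) + betaln(x + 1, n - x + 1),                  # p(x | M2)
])
prior = np.array([0.5, 0.5])       # uniform model prior

# Posterior model probabilities p(M_i | x), Eq. 2.16
w = prior * np.exp(log_ev - log_ev.max())
w /= w.sum()

# BMA prediction of the next toss being heads, Eq. 2.28:
# under M1 it is 0.5; under M2 it is the posterior mean (x + 1) / (n + 2)
p_heads = w[0] * 0.5 + w[1] * (x + 1) / (n + 2)
print("model weights:", w.round(3), " BMA P(next toss is heads) =", round(p_heads, 3))
```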
Graphical models (a.k.a. Bayesian networks) are a method for representing relation-
ships between random variables in a probabilistic model. They are a useful tool
for big data, providing graphical representations of complex datasets.
To see how graphical models arise, we can revisit the familiar perceptron model
from the previous chapter in a probabilistic framework, i.e. the network weights are
now assumed to be probabilistic. As a starting point, consider a logistic regression
classifier with probabilistic output:
$$P[G \mid X] = \sigma(U) = \frac{1}{1 + e^{-U}}, \quad U = wX + b, \quad G \in \{0, 1\}, \; X \in \mathbb{R}^p. \qquad (2.29)$$
By Bayes’ law, we know that the posterior probabilities must be given by the
likelihood, prior and evidence:
P [X | G] P [G] 1
P [G | X] = = , (2.30)
P [X] − log P[X | G])
+log P[G]
1+e P[X | Gc ] P[Gc ]
This type of graphical model corresponds to that of factor analysis. Other related
types of graphical models include mixture models.
$$p(x; \upsilon) = \sum_{k=1}^{K} \pi_k\, p(x; \theta_k). \qquad (2.33)$$
The mixture density has K components (or states) and is defined by the parameter
set υ = {θ, π }, where π = {π1 , · · · , πK } is the set of weights given to
each component and θ = {θ1 , · · · , θK } is the set of parameters describing each
component distribution. A well-known mixture model is the Gaussian mixture
model (GMM):
$$p(x) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}\!\left(x;\, \mu_k, \sigma_k^2\right), \qquad (2.34)$$
where each component parameter vector θk is the mean and variance parameters, μk
and σk2 .
Let us first suppose that the independent random variable, X, has been observed
over N data points, xN = {x1 , · · · , xN }. The set is assumed to be generated by a
K-component mixture model.
To indicate the mixture component from which a sample was drawn, we intro-
duce an independent hidden (a.k.a. latent) discrete random variable, S ∈ {1, . . . , K}.
For each observation xi , the value of S is denoted as si , and is encoded as a binary
vector of length K. We set the vector’s k-th component, (si )k = 1 to indicate that
the k-th mixture component is selected, while all other states are set to 0. As a
consequence,
$$1 = \sum_{k=1}^{K} (s_i)_k. \qquad (2.35)$$
The joint density of the observations and hidden states then factorizes as
$$p(x_N, s_N; \upsilon) = \prod_{i=1}^{N} p(x_i \mid s_i; \theta)\,p(s_i; \pi), \qquad (2.36)$$
where the marginal densities p(si ; π ) are drawn from a multinomial distribution
that is parameterized by the mixing weights π = {π1 , · · · , πK }:
$$p(s_i; \pi) = \prod_{k=1}^{K} \pi_k^{(s_i)_k}, \qquad (2.37)$$
$$P[(s_i)_k = 1] = \pi_k, \qquad (2.38)$$
$$1 = \sum_{k=1}^{K} \pi_k. \qquad (2.39)$$
If the sequence of states sn were known, then the estimation of the model parameters
π, θ would be straightforward; conditioned on the state variables and the observa-
tions, Eq. 2.40 could be maximized with respect to the model parameters. However,
the value of the state variable is unknown. This suggests an alternative two-stage
iterated optimization algorithm: If we know the expected value of S, one could
use this expectation in the first step to perform a weighted maximum likelihood
estimation of Eq. 2.40 with respect to the model parameters. These estimates will
be incorrect since the expected value of S is inaccurate. So, in the second step, one could
update the expected value of all S pretending the model parameters υ := (π, θ)
are known and held fixed at their values from the past iteration. This is precisely
the strategy of the expectation–maximization (EM) algorithm—a statistically self-
consistent, iterative algorithm for maximum likelihood estimation. Within the context
of mixture models, the EM algorithm is outlined as follows:
• E-step:
In this step, the parameters υ are held fixed at the old values, υ old , obtained
from the previous iteration (or at their initial settings during the algorithm’s
initialization). Conditioned on the observations, the E-step then computes the
probability density of the state variables Si , ∀i given the current model parame-
ters and observation data, i.e.
In particular, we compute the responsibilities
$$(\gamma_i)_k := p\big((s_i)_k = 1 \mid x_i; \upsilon^{\mathrm{old}}\big) = \frac{\pi_k^{\mathrm{old}}\, p\big(x_i \mid (s_i)_k = 1; \theta_k^{\mathrm{old}}\big)}{\sum_{j=1}^{K} \pi_j^{\mathrm{old}}\, p\big(x_i \mid (s_i)_j = 1; \theta_j^{\mathrm{old}}\big)}.$$
The likelihood terms p(xi | (si )k = 1; θk ) are evaluated using the observation
densities defined for each of the states.
• M-step:
In this step, the hidden state probabilities are considered given and maximiza-
tion is performed with respect to the parameters:
This results in the following update equations for the parameters in the probabil-
ity distributions:
$$\mu_k = \frac{\sum_{i=1}^{N} (\gamma_i)_k\, x_i}{\sum_{i=1}^{N} (\gamma_i)_k}, \qquad (2.44)$$
$$\sigma_k^2 = \frac{\sum_{i=1}^{N} (\gamma_i)_k\, (x_i - \mu_k)^2}{\sum_{i=1}^{N} (\gamma_i)_k}, \quad \forall k \in \{1, \ldots, K\}, \qquad (2.45)$$
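The following minimal sketch, using synthetic data for illustration, implements these E- and M-steps for a one-dimensional, two-component Gaussian mixture:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Synthetic data drawn from two Gaussian components (illustrative)
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(1.5, 1.0, 700)])

K = 2
pi = np.full(K, 1.0 / K)                 # mixing weights
mu = np.array([-1.0, 1.0])               # initial means
sigma2 = np.array([1.0, 1.0])            # initial variances

for _ in range(100):
    # E-step: responsibilities gamma_{ik} = P[(s_i)_k = 1 | x_i; old parameters]
    dens = np.stack([pi[k] * norm.pdf(x, mu[k], np.sqrt(sigma2[k])) for k in range(K)], axis=1)
    gamma = dens / dens.sum(axis=1, keepdims=True)

    # M-step: weighted maximum likelihood updates (Eqs. 2.44-2.45)
    Nk = gamma.sum(axis=0)
    mu = (gamma * x[:, None]).sum(axis=0) / Nk
    sigma2 = (gamma * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
    pi = Nk / len(x)

print("weights:", pi.round(3), "means:", mu.round(3), "variances:", sigma2.round(3))
```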
9 Summary
10 Exercises
$$p(G \mid X, \theta) = \prod_{i=1}^{n} \big(g_1(x_i \mid \theta)\big)^{G_i} \cdot \big(g_0(x_i \mid \theta)\big)^{1 - G_i} \qquad (2.46)$$
$$\ln p(G \mid X, \theta) = \sum_{i=1}^{n} G_i \ln\!\big(g_1(x_i \mid \theta)\big) + (1 - G_i)\ln\!\big(g_0(x_i \mid \theta)\big). \qquad (2.47)$$
Using Bayes’ rule, write the condition probability density function of θ (the
“posterior”) given the data (X, G) in terms of the above likelihood function.
From the previous example, suppose that G = 1 corresponds to JPY strengthen-
ing against the dollar and X are the S&P 500 daily returns and now
Starting with a neutral view on the parameter θ (i.e., θ ∈ [0, 1]), learn the
distribution of the parameter θ given that JPY strengthens against the dollar for
two of the three days and S&P 500 is observed to rise for 3 consecutive days. Hint:
You can use the Beta density function with its normalizing constant
$$p(\theta \mid \alpha, \beta) = \frac{(\alpha+\beta-1)!}{(\alpha-1)!\,(\beta-1)!}\,\theta^{\alpha-1}(1-\theta)^{\beta-1} \qquad (2.49)$$
to evaluate the integral in the marginal density function.
If θ represents the currency analyst’s opinion of JPY strengthening against the
dollar, what is the probability that the model overestimates the analyst’s estimate?
Exercise 2.4*: Bayesian Inference in Trading
Suppose that you observe the following daily sequence of directional changes in the
JPY/USD exchange rate (U (up), D(down or stays flat)):
U, D, U, U, D
and the corresponding daily sequence of S&P 500 returns is
-0.05, 0.01, -0.01, -0.02, 0.03
You propose the following probability model to explain the behavior of JPY
against USD given the directional changes in S&P 500 returns: Let G denote a
Bernoulli R.V., where G = 1 corresponds to JPY strengthening against the dollar
and r are the S&P 500 daily returns. All observations of G are conditionally
independent (but *not* identical) so that the likelihood is
$$p(G \mid r, \theta) = \prod_{i=1}^{n} p(G = G_i \mid r = r_i, \theta)$$
where
$$p(G_i = 1 \mid r = r_i, \theta) = \begin{cases} \theta_u, & r_i > 0 \\ \theta_d, & r_i \le 0. \end{cases}$$
Compute the full expression for the likelihood that the data was generated by this
model.
Exercise 2.5: Model Comparison
Suppose you observe the following daily sequence of direction changes in the stock
market (U (up), D(down)):
U, D, U, U, D, D, D, D, U, U, U, U, U, U, U, D, U, D, U, D,
U, D, D, D, D, U, U, D, U, D, U, U, U, D, U, D, D, D, U, U,
D, D, D, U, D, U, D, U, D, D
You compare two models for explaining its behavior. The first model, M1 , assumes
that the probability of an upward movement is fixed to 0.5 and the data is i.i.d.
The second model, M2 , also assumes the data is i.i.d. but that the probability of
an upward movement is set to an unknown θ ∈ Θ = (0, 1) with a uniform prior
on θ: p(θ | M2) = 1. For simplicity, we additionally choose a uniform model prior
p(M1 ) = p(M2 ).
Compute the model evidence for each model.
Compute the Bayes’ factor and indicate which model should we prefer in light
of this data?
Exercise 2.6: Bayesian Prediction and Updating
Using Bayesian prediction, predict the probability of an upward movement given
the best model and data in Exercise 2.5.
Suppose now that you observe the following new daily sequence of direction
changes in the stock market (U (up), D(down)):
D, U, D, D, D, D, U, D, U, D, U, D, D, D, U, U, D, U, D, D, D,
U, U, D, D, D, U, D, U, D, U, D, D, D, U, D, U, D, U, D, D, D,
D, U, U, D, U, D, U, U
Using the best model from Exercise 2.5, compute the new posterior distribution
function based on the new data and the data in the previous question and predict
the probability of an upward price movement given all data. State all modeling
assumptions clearly.
Exercise 2.7: Logistic Regression Is Naive Bayes
Suppose that G and X ∈ {0, 1}^p are Bernoulli random variables and the $X_i$s are
mutually independent given G—that is, $P[X \mid G] = \prod_{i=1}^{p} P[X_i \mid G]$. Given a
naive Bayes’ classifier P [G | X], show that the following logistic regression model
produces equivalent output if the weights are
$$w_0 = \log \frac{P[G]}{P[G^c]} + \sum_{i=1}^{p} \log \frac{P[X_i = 0 \mid G]}{P[X_i = 0 \mid G^c]}$$
$$w_i = \log \left( \frac{P[X_i = 1 \mid G]}{P[X_i = 1 \mid G^c]} \cdot \frac{P[X_i = 0 \mid G^c]}{P[X_i = 0 \mid G]} \right), \quad i = 1, \ldots, p.$$
$$p(v, h) = \frac{1}{Z} \exp\big(-E(v, h)\big), \qquad Z = \sum_{v, h} \exp\big(-E(v, h)\big)$$
$$E(v, h) = -v^T W h - b^T v - a^T h = -\sum_{i=1}^{D}\sum_{j=1}^{F} W_{ij} v_i h_j - \sum_{i=1}^{D} b_i v_i - \sum_{j=1}^{F} a_j h_j,$$
Appendix
Question 1
Answer: 1,3,4.
Question 2
Answer: 1,3,5.
Question 3
Answer: 1,2. Mixture models assume that the data is multi-modal—the data is
drawn from a linear combination of uni-modal distributions. The expectation–
maximization (EM) algorithm is a type of iterative, self-consistent, unsupervised
learning algorithm which alternates between updating the probability density of the
state variables, based on model parameters (E-step) and updating the parameters by
maximum likelihood estimation (M-step). The EM algorithm does not automatically
determine the modality of the data distribution, although there are statistical tests to
determine this. A mixture model assigns a probabilistic weight for every component
that each observation might belong to. The component with the highest weight is
chosen.
References
Breiman, L. (2001). Statistical modeling: The two cultures (with comments and a rejoinder by the
author). Statistical Science 16(3), 199–231.
Duembgen, M., & Rogers, L. C. G. (2014). Estimate nothing. https://arxiv.org/abs/1401.5666.
Finger, C. (1997). A methodology to stress correlations, fourth Quarter. RiskMetrics Monitor.
Rasmussen, C. E., & Ghahramani, Z. (2001). Occam's razor. In Advances in Neural Information
Processing Systems 13 (pp. 294–300). MIT Press.
Chapter 3
Bayesian Regression and Gaussian
Processes
This chapter introduces Bayesian regression and shows how it extends many of
the concepts in the previous chapter. We develop kernel based machine learning
methods—specifically Gaussian process regression, an important class of Bayesian
machine learning methods—and demonstrate their application to “surrogate” mod-
els of derivative prices. This chapter also provides a natural starting point from
which to develop intuition for the role and functional form of regularization in a
frequentist setting—the subject of subsequent chapters.
1 Introduction
1 Surrogate models learn the output of an existing mathematical or statistical model as a function
of input data.
Chapter Objectives
The key learning points of this chapter are:
– Formulate a Bayesian linear regression model;
– Derive the posterior distribution and the predictive distribution;
– Describe the role of the prior as an equivalent form of regularization in maximum
likelihood estimation; and
– Formulate and implement Gaussian Processes for kernel based probabilistic
modeling, with programming examples involving derivative modeling.
and suppose that we observe the value of the function over the inputs x :=
[x1 , . . . , xn ]. The random parameter vector θ := [θ0 , θ1 ] is unknown. This setup
is referred to as “noise-free,” since we assume that y is strictly given by the function
f (x) without noise.
The graphical model representation of this model is given in Fig. 3.1 and clearly
specifies that the ith model output only depends on xi . Note that the graphical model
also holds in the case when there is noise.
In the noise-free setting, the expectation of the function under known data is
Then the covariance of the function values between any two points, xi and xj is
where the last term is zero because of the independence of θ0 and θ1 . Then
any collection of function values [f (x1 ), . . . , f (xn )] with given data has a joint
Gaussian distribution with covariance matrix Kij := Eθ [f (xi )f (xj )|xi , xj ] =
1 + xi xj . Such a probabilistic model is the simplest example of a more general,
non-linear, Bayesian kernel learning method referred to as “Gaussian Process
Regression” or simply “Gaussian Processes” (GPs) and is the subject of the later
material in this chapter.
Noisy Data
Hence the observed i.i.d. data is D := (x, y). Following Rasmussen and Williams
(2006), under this noise assumption and the linear model we can write down the
likelihood function of the data:
$$p(y \mid x, \theta) = \prod_{i=1}^{n} p(y_i \mid x_i, \theta)
= \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\!\left\{-\frac{(y_i - x_i \theta_1 - \theta_0)^2}{2\sigma_n^2}\right\}$$
$$p(\theta_i \mid y, x) = \frac{p(y \mid x, \theta_i)\,p(\theta_i)}{p(y \mid x)}, \quad i \in \{0, 1\}, \qquad (3.8)$$
where the marginal likelihood in the denominator is given by integrating over the
parameters as
$$p(y \mid x) = \int p(y \mid x, \theta)\,p(\theta)\,d\theta. \qquad (3.9)$$
If we define the matrix X, where $[X]_i := [1, x_i]$, and under more general conjugate
priors, we have
$$\theta \sim \mathcal{N}(\mu, \Sigma),$$
$$y \mid X, \theta \sim \mathcal{N}(\theta^T X, \sigma_n^2 I),$$
and the product of Gaussian densities is also Gaussian, we can simply use standard
results of moments of affine transformations to give
$$p(\theta) \propto \exp\Big\{-\tfrac{1}{2}(\theta - \mu)^T \Sigma^{-1} (\theta - \mu)\Big\} \qquad (3.12)$$
$$\propto \exp\Big\{\mu^T \Sigma^{-1} \theta - \tfrac{1}{2}\theta^T \Sigma^{-1} \theta\Big\}, \qquad (3.13)$$
$$p(y \mid X, \theta)\,p(\theta) \propto \exp\Big\{-\tfrac{1}{2\sigma_n^2}(y - \theta^T X)^T (y - \theta^T X)\Big\}\exp\Big\{\mu^T \Sigma^{-1}\theta - \tfrac{1}{2}\theta^T \Sigma^{-1}\theta\Big\} \qquad (3.14)$$
$$\propto \exp\Big\{-\tfrac{1}{2\sigma_n^2}\big(-2y\theta^T X + \theta^T X X^T \theta\big)\Big\}\exp\Big\{\mu^T \Sigma^{-1}\theta - \tfrac{1}{2}\theta^T \Sigma^{-1}\theta\Big\} \qquad (3.15)$$
$$= \exp\Big\{\big(\Sigma^{-1}\mu + \tfrac{1}{\sigma_n^2}\, y^T X\big)^T \theta - \tfrac{1}{2}\theta^T\big(\Sigma^{-1} + \tfrac{1}{\sigma_n^2}\, X X^T\big)\theta\Big\} \qquad (3.16)$$
$$= \exp\Big\{a^T \theta - \tfrac{1}{2}\theta^T A \theta\Big\}. \qquad (3.17)$$
$$\theta \mid \mathcal{D} \sim \mathcal{N}(\mu', \Sigma'), \qquad (3.18)$$
$$\mu' = A^{-1} a = \Big(\Sigma^{-1} + \tfrac{1}{\sigma_n^2}\, X X^T\Big)^{-1}\Big(\Sigma^{-1}\mu + \tfrac{1}{\sigma_n^2}\, y^T X\Big) \qquad (3.19)$$
$$\Sigma' = A^{-1} = \Big(\Sigma^{-1} + \tfrac{1}{\sigma_n^2}\, X X^T\Big)^{-1} \qquad (3.20)$$
and we use the inverse of the transformation above, from natural back to moment
parameterization, to write
$$p(\theta \mid \mathcal{D}) \propto \exp\Big\{-\tfrac{1}{2}(\theta - \mu')^T (\Sigma')^{-1} (\theta - \mu')\Big\}. \qquad (3.21)$$
Fig. 3.2 This figure demonstrates Bayesian inference for the linear model. The data has been
generated from the function f (x) = 0.3 + 0.5x with a small amount of additive white noise.
Source: Bishop (2006)
where the constant c := − n2 (log(2π ) + log(σn2 )). Setting this gradient to zero gives
the orthogonal projection of y on to the subspace spanned by X:
where θ̂ is the vector in the subspace spanned by X which is closest to y. This result
states that the maximum likelihood estimate of an unpenalized loss function (i.e.,
without including the prior) is the OLS estimate when the noise variance is known.
If the noise variance is unknown then the loss function is
$$\mathcal{L}(\theta, \sigma_n^2) = \frac{n}{2}\log(\sigma_n^2) + \frac{1}{2\sigma_n^2}\|y - \theta^T X\|_2^2 + c, \qquad (3.23)$$
$$\frac{\partial \mathcal{L}(\theta, \sigma_n^2)}{\partial \sigma_n^2} = \frac{n}{2\sigma_n^2} - \frac{1}{2\sigma_n^4}\|y - \theta^T X\|_2^2. \qquad (3.24)$$
Including the prior, the first-order condition with respect to θ is
$$\frac{1}{\sigma_n^2}\big(y^T X - \theta^T X X^T\big) = (\theta - \mu)^T \Sigma^{-1}, \qquad (3.25)$$
2 Note that the factor of 2 in the denominator of the second term does not cancel out because the
derivative is w.r.t. σn2 and not σn .
reducing the condition number of XT X. Forgetting the mean of the prior, the linear
system (XT X)θ = XT y becomes the regularized linear system: Aθ = σn−2 XT y.
Note that choosing the isotropic Gaussian prior $p(\theta) = \mathcal{N}\big(0, \tfrac{1}{2\lambda} I\big)$ gives the
ridge penalty term in the loss function: $\lambda\|\theta\|_2^2$, i.e. the negative log Gaussian prior
matches the ridge penalty term up to a constant. In the limit, λ → 0 recovers
maximum likelihood estimation—this corresponds to using the uninformative prior.
Of course, in Bayesian inference, we do not perform point-estimation of the
parameters, however it was a useful exercise to confirm that the mean of the
posterior in Eq. 3.19 did indeed match the MAP estimate. Furthermore, we have
made explicit the interpretation of the prior as a regularization term used in ridge
regression.
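The following short sketch, with simulated data mimicking the function f(x) = 0.3 + 0.5x used in Fig. 3.2, computes the posterior mean of Eq. 3.19 under a zero-mean isotropic prior (written in standard design-matrix form) and confirms numerically that it coincides with the ridge (MAP) estimate; the ridge strength and noise level below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma_n = 50, 0.2
x = rng.uniform(-1, 1, n)
y = 0.3 + 0.5 * x + rng.normal(0, sigma_n, n)     # data as in Fig. 3.2

X = np.column_stack([np.ones(n), x])              # design matrix with rows [1, x_i]
lam = 2.0                                         # illustrative ridge strength
Sigma_prior = (1.0 / (2.0 * lam)) * np.eye(2)     # isotropic prior N(0, (1/2λ) I)

# Posterior covariance and mean (Eqs. 3.19-3.20, zero prior mean)
A = np.linalg.inv(Sigma_prior) + (X.T @ X) / sigma_n**2
Sigma_post = np.linalg.inv(A)
mu_post = Sigma_post @ (X.T @ y) / sigma_n**2

# Ridge / MAP estimate: argmin ||y - X theta||^2 / (2 sigma_n^2) + lam ||theta||^2
theta_ridge = np.linalg.solve(X.T @ X + 2.0 * lam * sigma_n**2 * np.eye(2), X.T @ y)

print("posterior mean:", mu_post.round(4))
print("ridge estimate:", theta_ridge.round(4))   # identical up to numerical precision
```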
Recall from Chap. 2 that Bayesian prediction requires evaluating the density of
f∗ := f (x∗ ) w.r.t. a new data point x∗ and the training data D.
In general, we predict the model output at a new point, f∗ , by averaging the
model output over all possible weights, with the weight density function given
by the posterior. That is we seek to find the marginal density p(f∗ |x∗ , D) =
Eθ |D [p(f∗ |x∗ , θ )], where the dependency on θ has been integrated out. This
conditional density is Gaussian
with moments
$$\mu_* = \mathbb{E}_{\theta\mid\mathcal{D}}[f_* \mid x_*, \mathcal{D}] = x_*^T\,\mathbb{E}_{\theta\mid\mathcal{D}}[\theta \mid x_*, \mathcal{D}] = x_*^T\,\mathbb{E}_{\theta\mid\mathcal{D}}[\theta \mid \mathcal{D}] = x_*^T \mu'$$
$$\Sigma_* = \mathbb{E}_{\theta\mid\mathcal{D}}[(f_* - \mu_*)(f_* - \mu_*)^T \mid x_*, \mathcal{D}]
= x_*^T\,\mathbb{E}_{\theta\mid\mathcal{D}}[(\theta - \mu')(\theta - \mu')^T \mid x_*, \mathcal{D}]\,x_*
= x_*^T\,\mathbb{E}_{\theta\mid\mathcal{D}}[(\theta - \mu')(\theta - \mu')^T \mid \mathcal{D}]\,x_* = x_*^T \Sigma' x_*,$$
where we have avoided taking the expectation of the entire density function
p(f∗ |x∗ , θ ), but rather just the moments because we know that f∗ is Gaussian.
There is another approach to deriving the predictive distribution from the conditional
distribution of the model output which relies on properties of inverse matrices. We
can write the joint density between Gaussian random variables X and Y in terms of
the partitioned covariance matrix:
$$\begin{pmatrix} X \\ Y \end{pmatrix} \sim \mathcal{N}\left( \begin{pmatrix} \mu_x \\ \mu_y \end{pmatrix}, \begin{pmatrix} \Sigma_{xx} & \Sigma_{xy} \\ \Sigma_{yx} & \Sigma_{yy} \end{pmatrix} \right),$$
where $\Sigma_{xx} = \mathbb{V}(X)$, $\Sigma_{xy} = \mathrm{Cov}(X, Y)$ and $\Sigma_{yy} = \mathbb{V}(Y)$, how can we find the
conditional density p(y|x)?
In order to express the moments in terms of the partitioned covariance matrix we
shall use the following Schur identity:
$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} M & -M B D^{-1} \\ -D^{-1} C M & D^{-1} + D^{-1} C M B D^{-1} \end{pmatrix}, \quad M := (A - B D^{-1} C)^{-1},$$
where
$$A_{yy} = (\Sigma_{yy} - \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy})^{-1} \qquad (3.29)$$
$$A_{yx} = -(\Sigma_{yy} - \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy})^{-1}\Sigma_{yx}\Sigma_{xx}^{-1}, \qquad (3.30)$$
$$\mu_{y|x} = \mu_y + \frac{\sigma_{yx}}{\sigma_x^2}(x - \mu_x), \qquad (3.33)$$
$$\sigma^2_{y|x} = \sigma_y^2 - \frac{\sigma_{yx}^2}{\sigma_x^2}, \qquad (3.34)$$
Since we know the form of the function f(x), we can simplify this expression by
identifying the covariance blocks with kernel matrices: $K_{X,X}$ is the covariance of f(X), which for linear regression takes the form
$K_{X,X} = \mathbb{E}[\theta_1^2 (X - \mu_x)^2]$,
$\Sigma_{f_*f_*} = K_{x_*,x_*}$ and $\Sigma_{yf_*} = K_{X,x_*}$. Now we can write the moments of the predictive
distribution as
$$\mu_{f_*\mid X,y,x_*} = \mu_{f_*} + K_{x_*,X}\,K_{X,X}^{-1}(y - \mu_y), \qquad (3.39)$$
$$K_{f_*\mid X,y,x_*} = K_{x_*,x_*} - K_{x_*,X}\,K_{X,X}^{-1}K_{X,x_*}. \qquad (3.40)$$
Discussion
Note that we have assumed that the functional form of the map, f (x) is known
and parameterized. Here we assumed that the map is linear in the parameters and
affine in the features. Hence our approximation of the map is in the data space
and, for prediction, we can subsequently forget about the map and work with its
moments. The moments of the prior on the weights also no longer need to be
specified.
If we do not know the form of the map but want to specify structure on the
covariance of the map (i.e., the kernel), then we are said to be approximating
in the kernel space rather than in the data space. If the kernels are given by
continuous functions of X, then such an approximation corresponds to learning a
posterior distribution over an infinite dimensional function space rather than a finite
dimensional vector space. Put differently, we perform non-parametric regression
rather than parametric regression. This is the remaining topic of this chapter and is
precisely how Gaussian process regression models data.
The adoption of GPs in financial derivative modeling is more recent and sometimes
under the name of “kriging” (see, e.g., (Cousin et al. 2016) or (Ludkovski 2018)).
Examples of applying GPs to financial time series prediction are presented in
(Roberts et al. 2013). These authors helpfully note that AR(p) processes are
discrete-time equivalents of GP models with a certain class of covariance functions,
known as Matérn covariance functions. Hence, GPs can be viewed as a Bayesian
non-parametric generalization of well-known econometrics techniques. da Barrosa
et al. (2016) present a GP method for optimizing financial asset portfolios. Other
examples of GPs include metamodeling for expected shortfall computations (Liu
and Staum 2010), where GPs are used to infer portfolio values in a scenario
based on inner-level simulation of nearby scenarios, and Crépey and Dixon (2020),
where multiple GPs infer derivative prices in a portfolio for market and credit
risk modeling. The approach of Liu and Staum (2010) significantly reduces the
required computational effort by avoiding inner-level simulation in every scenario
and naturally takes account of the variance that arises from inner-level simulation.
The caveat is that the portfolio remains fixed. The approach of Crépey and Dixon
(2020), on the other hand, allows for the composition of the portfolio to be changed,
which is especially useful for portfolio sensitivity analysis, risk attribution and stress
testing.
Derivative Pricing, Greeking, and Hedging
In the general context of derivative pricing, Spiegeleer et al. (2018) noted that
many of the calculations required for pricing a wide array of complex instruments,
are often similar. The market conditions affecting OTC derivatives may often
only slightly vary between observations by a few variables, such as interest rates.
Accordingly, for fast derivative pricing, greeking, and hedging, Spiegeleer et al.
(2018) propose offline learning the pricing function, through Gaussian Process
regression. Specifically, the authors configure the training set over a grid and
then use the GP to interpolate at the test points. We emphasize that such GP
estimates depend on option pricing models, rather than just market data - somewhat
counter to the motivation for adopting machine learning, but also the case in other
computational finance applications such as Hernandez (2017), Weinan et al. (2017),
or Bühler et al. (2018).
Spiegeleer et al. (2018) demonstrate the speed up of GPs relative to Monte-
Carlo methods and tolerable accuracy loss applied to pricing and Greek estimation
with a Heston model, in addition to approximating the implied volatility surface.
The increased expressibility of GPs compared to cubic spline interpolation, a
popular numerical approximation technique useful for fast point estimation, is
also demonstrated. However, the applications shown in (Spiegeleer et al. 2018) are
limited to single instrument pricing and do not consider risk modeling aspects.
In particular, their study is limited to single-output GPs, without consideration
for some mean vector μ, such that μi = μ(xi ), and covariance matrix KX,X
that satisfies (KX,X )ij = k(xi , xj ). We follow the convention4 in the literature of
assuming μ = 0.
Kernels k can be any symmetric positive semidefinite function, which is the
infinite dimensional analogue of the notion of a symmetric positive semidefinite
(i.e., covariance) matrix, i.e. such that
$$\sum_{i,j=1}^{n} k(x_i, x_j)\,\xi_i \xi_j \ge 0, \quad \text{for any points } x_k \in \mathbb{R}^p \text{ and reals } \xi_k.$$
Radial basis functions (RBF) are kernels that only depend on ‖x − x′‖, such as the
squared exponential (SE) kernel
⁴ This choice is not a real limitation in practice (since it is for the prior) and does not prevent the
mean of the predictor from being nonzero.
$$k(x, x') = \exp\left\{-\frac{1}{2\ell^2}\,\|x - x'\|^2\right\}, \qquad (3.41)$$
where the length-scale parameter ℓ can be interpreted as "how far you need to move
in input space for the function values to become uncorrelated," or the Matérn (MA)
kernel
$$k(x, x') = \sigma^2\,\frac{2^{1-\nu}}{\Gamma(\nu)}\left(\sqrt{2\nu}\,\frac{\|x - x'\|}{\ell}\right)^{\nu} K_\nu\!\left(\sqrt{2\nu}\,\frac{\|x - x'\|}{\ell}\right) \qquad (3.42)$$
(which converges to (3.41) in the limit where ν goes to infinity), where Γ is the
gamma function, $K_\nu$ is the modified Bessel function of the second kind, and ℓ and
ν are non-negative parameters.
GPs can be seen as distributions over the reproducing kernel Hilbert space
(RKHS) of functions which is uniquely defined by the kernel function, k (Scholkopf
and Smola 2001). GPs with RBF kernels are known to be universal approximators
with prior support to within an arbitrarily small epsilon band of any continuous
function (Micchelli et al. 2006).
Assuming additive Gaussian noise, y | x ∼ N(f(x), σₙ²), and a GP prior on
f(x), given training inputs x ∈ X and training targets y ∈ Y, the predictive
distribution of the GP evaluated at an arbitrary test point x∗ ∈ X∗ is:
$$f_* \mid X, Y, x_* \sim \mathcal{N}(\mu_*, \Sigma_*), \qquad (3.43)$$
with
$$\mu_* = \mu_{X_*} + K_{X_*,X}\,(K_{X,X} + \sigma_n^2 I)^{-1}(Y - \mu_X), \qquad
\Sigma_* = K_{X_*,X_*} - K_{X_*,X}\,(K_{X,X} + \sigma_n^2 I)^{-1} K_{X,X_*}. \qquad (3.44)$$
Here, $K_{X_*,X}$, $K_{X,X_*}$, $K_{X,X}$, and $K_{X_*,X_*}$ are matrices that consist of the kernel,
k : ℝ^p × ℝ^p → ℝ, evaluated at the corresponding points, X and X∗, and $\mu_{X_*}$ is the
mean function evaluated on the test inputs X∗.
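A compact sketch of these predictive equations, using a plain NumPy implementation of the SE kernel with a zero prior mean and arbitrary hyperparameters and data, is:

```python
import numpy as np

def se_kernel(A, B, ell=1.0):
    """Squared exponential kernel k(x, x') = exp(-||x - x'||^2 / (2 ell^2)), Eq. 3.41."""
    d2 = (A[:, None, :] - B[None, :, :]) ** 2
    return np.exp(-d2.sum(-1) / (2.0 * ell**2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (30, 1))                      # training inputs
Y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=30)      # noisy training targets
X_star = np.linspace(-3, 3, 100)[:, None]            # test inputs
sigma_n = 0.1

# Predictive mean and covariance (Eq. 3.44), assuming a zero mean function
K = se_kernel(X, X) + sigma_n**2 * np.eye(len(X))
K_sX = se_kernel(X_star, X)
alpha = np.linalg.solve(K, Y)                        # alpha = (K_XX + sigma_n^2 I)^{-1} Y
mu_star = K_sX @ alpha
cov_star = se_kernel(X_star, X_star) - K_sX @ np.linalg.solve(K, K_sX.T)

print("predictive mean at x*=0:", mu_star[50].round(3),
      "+/-", np.sqrt(cov_star[50, 50]).round(3))
```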
One key advantage of GPs over interpolation methods is their expressibility. In
particular, one can combine the kernels, using convolution, to generalize the base
kernels (c.f. “multi-kernel” GPs (Melkumyan and Ramos 2011)).
GPs are fit to the data by optimizing the evidence-the marginal probability of the
data given the model with respect to the learned kernel hyperparameters.
The evidence has the form (see, e.g., (Murphy 2012, Section 15.2.4, p. 523)):
$$\log p(Y \mid X, \lambda) = -\frac{1}{2}\Big[Y^\top (K_{X,X} + \sigma_n^2 I)^{-1} Y + \log\det(K_{X,X} + \sigma_n^2 I)\Big] - \frac{n}{2}\log 2\pi, \qquad (3.45)$$
where $K_{X,X}$ implicitly depends on the kernel hyperparameters λ (e.g., [ℓ, σ],
assuming an SE kernel as per (3.41) or an MA kernel for some exogenously fixed
value of ν in (3.42)).
The first and second term in the [· · · ] in (3.45) can be interpreted as a model fit
and a complexity penalty term (see (Rasmussen and Williams 2006, Section 5.4.1)).
Maximizing the evidence with respect to the kernel hyperparameters, i.e. computing
λ∗ = argmaxλ log p(y | x, λ), results in an automatic Occam’s razor (see (Alvarez
et al. 2012, Section 2.3) and (Rasmussen and Ghahramani 2001)), through which we
effectively learn the structure of the space of functional relationships between the
inputs and the targets. In practice, the negative evidence is minimized by stochastic
gradient descent (SGD). The gradient of the evidence is given analytically by
$$\frac{\partial}{\partial \lambda_j}\log p(Y \mid X, \lambda) = \frac{1}{2}\,\mathrm{tr}\!\left(\Big(\alpha\alpha^\top - (K_{X,X} + \sigma_n^2 I)^{-1}\Big)\,\frac{\partial K_{X,X}}{\partial \lambda_j}\right),$$
with
$$\alpha := (K_{X,X} + \sigma_n^2 I)^{-1} Y. \qquad (3.46)$$
If uniform grids are used (as opposed to a mesh-free GP as described in Sect. 5.2),
we have $n = \prod_{k=1}^{p} n_k$, where $n_k$ is the number of grid points per variable.
However, although each kernel matrix $K_{X,X}$ is n × n, we only store the n-vector
α in (3.46), which brings reduced memory requirements.
Training time, required for maximizing (3.45) numerically, scales poorly with
the number of observations n. This complexity stems from the need to solve linear
systems and compute log determinants involving an n × n symmetric positive
definite covariance matrix K. This task is commonly performed by computing the
Cholesky decomposition of K incurring O(n3 ) complexity. Prediction, however,
is faster and can be performed in O(n2 ) with a matrix–vector multiplication for
each test point, and hence the primary motivation for using GPs is real-time risk
estimation performance.
Online Learning
where the previous posterior p(f∗ |X, Y, x∗ ) becomes the prior in the update. Hence
the GP learns over time as model parameters (which are an input to the GP) are
updated through pricing model recalibration.
which allows us to approximate $K_{X,X} \approx W_X K_{U,U} W_X^\top$. Gardner et al. (2018) note
that standard inducing point approaches, such as subset of regression (SoR) or
fully independent training conditional (FITC), can be reinterpreted from the SKI
perspective. Importantly, the efficiency of SKI-based MSGP methods comes from,
first, a clever choice of a set of inducing points which exploit the algebraic structure
of KU,U , and second, from using very sparse local interpolation matrices. In
practice, local cubic interpolation is used.
KU,U = T1 ⊗ T2 ⊗ · · · ⊗ TP . (3.52)
5 Gardner et al. (2018) explored 5 different approximation methods known in the numerical analysis
literature.
To perform inference, we need to solve (KSKI + σn2 I )−1 y; kernel learning requires
evaluating log det(KSKI + σn2 I ). The first task can be accomplished by using
an iterative scheme—linear conjugate gradients—which depends only on matrix
vector multiplications with (KSKI + σn2 I ). The second is performed by exploiting
the Kronecker and Toeplitz structure of KU,U for computing an approximate
eigendecomposition, as described above.
In this chapter, we primarily use the basic interpolation approach for simplicity.
However for completeness, Sect. 5.3 shows the scaling of the time taken to train and
predict with MSGPs.
In the following example, the portfolio holds a long position in both a European call
and a put option struck on the same underlying, with K = 100. We assume that the
underlying follows Heston dynamics:
$$\frac{dS_t}{S_t} = \mu\,dt + \sqrt{V_t}\,dW_t^1, \qquad (3.53)$$
$$dV_t = \kappa(\theta - V_t)\,dt + \sigma\sqrt{V_t}\,dW_t^2, \qquad (3.54)$$
$$d\langle W^1, W^2\rangle_t = \rho\,dt, \qquad (3.55)$$
where the notation and fixed parameter values used for experiments are given in
Table 3.1 under μ = r0 . We use a Fourier Cosine method (Fang and Oosterlee
2008) to generate the European Heston option price training and testing data for the
GP. We also use this method to compare the GP Greeks, obtained by differentiating
the kernel function.
Table 3.1 lists the values of the parameters for the Heston dynamics and terms
of the European Call and Put option contract used in our numerical experiments.
Table 3.2 shows the values for the Euler time stepper used for simulating Heston
dynamics and the credit risk model.
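As a rough illustration of the Euler time stepper referenced in Table 3.2, a full-truncation Euler sketch of the Heston dynamics can be written as follows; the parameter values in the call are placeholders for illustration and are not those of the tables.

```python
import numpy as np

def simulate_heston(S0, V0, mu, kappa, theta, sigma, rho, T, n_steps, n_paths, seed=0):
    """Full-truncation Euler scheme for the Heston SDEs (3.53)-(3.55)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    V = np.full(n_paths, float(V0))
    for _ in range(n_steps):
        Z1 = rng.standard_normal(n_paths)
        Z2 = rho * Z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)  # correlated increments
        Vp = np.maximum(V, 0.0)                      # truncate to keep the variance non-negative
        S *= np.exp((mu - 0.5 * Vp) * dt + np.sqrt(Vp * dt) * Z1)
        V += kappa * (theta - Vp) * dt + sigma * np.sqrt(Vp * dt) * Z2
    return S, V

# Placeholder parameters for illustration only
S_T, V_T = simulate_heston(S0=100, V0=0.04, mu=0.02, kappa=1.5, theta=0.04,
                           sigma=0.3, rho=-0.7, T=2.0, n_steps=200, n_paths=10_000)
print("mean terminal price:", S_T.mean().round(2))
```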
For each pricing time $t_i$, we simultaneously fit a multi-GP to both gridded call
and put prices over stock price S and volatility $\sqrt{V}$, keeping time to maturity fixed.
Figure 3.3 shows the gridded call (top) and put (bottom) price surfaces at various
time to maturities, together with the GP estimate. Within each column in the figure,
the same GP model has been simultaneously fitted to both the call and put price
Fig. 3.3 This figure compares the gridded Heston model call (top) and put (bottom) price surfaces
at various time to maturities, with the GP estimate. The GP estimate is observed to be practically
identical (slightly below in the first five panels and slightly above in the last one). Within each
column in the figure, the same GP model has been simultaneously fitted to both the Heston model
call and put price surfaces over a 30 × 30 grid of prices and volatilities, fixing the time to maturity.
Across each column, corresponding to different time to maturities, a different GP model has been
fitted. The GP is then evaluated out-of-sample over a 40 × 40 grid, so that many of the test samples
are new to the model. This is repeated over various time to maturities
6 Note that the plot uses the original coordinates and not the re-scaled coordinates.
Fig. 3.4 This figure assesses the GP option price prediction in the setup of a Black–Scholes model.
The GP with a mixture of a linear and SE kernel is trained on n = 50 X, Y pairs, where X ∈ h ⊂
(0, 300] is the gridded underlying of the option prices and Y is a vector of call or put prices. These
training points are shown by the black “+” symbols. The exact result using the Black–Scholes
pricing formula is given by the black line. The predicted mean (blue solid line) and variance of the
posterior are estimated from Eq. 3.44 over m = 100 gridded test points, X∗ ∈ h∗ ⊂ [300, 400],
for the (left) call option struck at 110 and (center) put option struck at 90. The shaded envelope
represents the 95% confidence interval about the mean of the posterior. This confidence interval is
observed to increase the further the test point is from the training set. The time to maturity of the
options are fixed to two years. (a) Call price. (b) Put price
5.1 Greeking
Differentiating the GP predictive mean in Eq. 3.44 with respect to a test input gives
$$\partial_{X_*}\,\mathbb{E}[f_* \mid X, y, X_*] = \partial_{X_*}\mu_{X_*} + \big(\partial_{X_*} K_{X_*,X}\big)\,\alpha, \qquad (3.56)$$
where $\partial_{X_*} K_{X_*,X} = \frac{1}{\ell^2}(X - X_*)\circ K_{X_*,X}$ for the SE kernel and we recall from Eq. (3.46) that α =
$[K_{X,X} + \sigma_n^2 I]^{-1} y$ (and in the numerical experiments we set μ = 0). Second-order
sensitivities are obtained by differentiating once more with respect to X∗ .
Note that α is already calculated at training time (for pricing) by Cholesky
matrix factorization of [KX,X + σn2 I ] with O(n3 ) complexity, so there is no
significant computational overhead from Greeking. Once the GP has learned
the derivative prices, Eq. 3.56 is used to evaluate the first order MtM Greeks
with respect to the input variables over the test set. Example source code
illustrating the implementation of this calculation is presented in the notebook
Example-2-GP-BS-Derivatives.ipynb.
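A simplified sketch along these lines (distinct from the notebook referenced above) reuses a plain NumPy SE-kernel GP to estimate a delta by differentiating the kernel; the training data below is a noisy payoff proxy rather than a real pricing model, and the hyperparameters are arbitrary.

```python
import numpy as np

def se_kernel(A, B, ell):
    d2 = (A[:, None, :] - B[None, :, :]) ** 2
    return np.exp(-d2.sum(-1) / (2.0 * ell**2))

# Suppose the GP has been trained on option prices Y observed at underlying levels X
rng = np.random.default_rng(1)
X = np.linspace(60, 140, 40)[:, None]                      # training underlyings (illustrative)
Y = np.maximum(X[:, 0] - 100, 0) + rng.normal(0, 0.5, 40)  # noisy proxy prices, not a real pricer
ell, sigma_n = 10.0, 0.5

alpha = np.linalg.solve(se_kernel(X, X, ell) + sigma_n**2 * np.eye(len(X)), Y)

# First-order sensitivity (delta) at test points, Eq. 3.56 with mu = 0:
# d/dx* k(x*, x) = (x - x*) / ell^2 * k(x*, x) for the SE kernel
X_star = np.linspace(80, 120, 5)[:, None]
K_sX = se_kernel(X_star, X, ell)
dK = (X[None, :, 0] - X_star[:, 0:1]) / ell**2 * K_sX
delta = dK @ alpha
print("GP delta estimates:", delta.round(3))
```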
Figure 3.5 shows (left) the GP estimate of a call option's delta Δ := ∂C/∂S
and (right) vega ν := ∂C/∂σ, having trained on the underlying, respectively implied
volatility, and on the BS option model prices. For avoidance of doubt, the model is
not trained on the BS Greeks. For comparison in the figure, the BS delta and vega
are also shown. In each case, the two graphs are practically indistinguishable, with
one graph superimposed over the other.
The above numerical examples have trained and tested GPs on uniform grids. This
approach suffers from the curse of dimensionality, as the number of training points
grows exponentially with the dimensionality of the data. This is why, in order to
estimate the MtM cube, we advocate divide-and-conquer, i.e. the use of numerous
GPs with low input dimension, p, run in parallel on specific asset classes.
Fig. 3.5 This figure shows (left) the GP estimate of the call option's delta Δ := ∂C/∂S and (right)
vega ν := ∂C/∂σ, having trained on the underlying, respectively implied volatility, and on the BS
option model prices
However, use of fixed grids is by no means necessary. We show here how GPs can
exhibit favorable approximation properties with relatively few simulated
reference points (cf. also (Gramacy and Apley 2015)).
Figure 3.6 shows the predicted Heston call prices using (left) 50 and (right)
100 simulated training points, indicated by “+”s, drawn from a uniform random
distribution. The Heston call option is struck at K = 100 with a maturity of T = 2
years.
Figure 3.7 (left) shows the convergence of the GP MSE of the prediction, based
on the number of Heston simulated training points. Fixing the number of simulated
points to 100, but increasing the input space dimensionality, p, of each observation
point (to include varying Heston parameters), Fig. 3.7 (right) shows the wall-clock
time for training a GP with SKI (see Sect. 3.4). Note that the number of SGD
iterations has been fixed to 1000.
Fig. 3.6 Predicted Heston Call prices using (left) 50 and (right) 100 simulated training points,
indicated by “+”s, drawn from a uniform random distribution
Fig. 3.7 (Left) The convergence of the GP MSE of the prediction is shown based on the number
of simulated Heston training points. (Right) Fixing the number of simulated points to 100, but
increasing the dimensionality p of each observation point (to include varying Heston parameters),
the figure shows the wall-clock time for training a GP with SKI
Fig. 3.8 (Left) The elapsed wall-clock time is shown for training against the number of training
points generated by a Black–Scholes model. (Right) The elapsed wall-clock time for prediction
of a single point is shown against the number of testing points. The reason that the prediction
time increases (whereas the theory reviewed in Sect. 3.4 says it should be constant) is due to
memory latency in our implementation—each point prediction involves loading a new test point
into memory
Figure 3.8 shows the increase of MSGP training time and prediction time against
the number of training points n from a Black Scholes model. Fixing the number of
inducing points to m = 30 (see Sect. 3.4), we increase the number of observations,
n, in the p = 1 dimensional training set.
Setting the number of SGD iterations to 1000, we observe an approximate 1.4x
increase in training time for a 10x increase in the training sample. We observe an
approximate 2x increase in prediction time for a 10x increase in the training sample.
The reason that the prediction time does not scale independently of n is due to
memory latency in our implementation—each point prediction involves loading a
new test point into memory. Fast caching approaches can be used to reduce this
memory latency, but are beyond the scope of this section.
Note that training and testing times could be improved with CUDA on a GPU,
but are not evaluated here.
where $\mathbf{f}(x_i) \in \mathbb{R}^d$ is a column vector whose components are the functions $\{f_l(x_i)\}_{l=1}^d$,
M is a matrix in $\mathbb{R}^{d\times n}$ with $M_{li} = \mu_l(x_i)$, Σ is a matrix in $\mathbb{R}^{n\times n}$ with $\Sigma_{ij} =
k(x_i, x_j)$, and ⊗ is the Kronecker product
$$\Sigma \otimes \Omega = \begin{pmatrix} \Sigma_{11}\Omega & \cdots & \Sigma_{1n}\Omega \\ \vdots & \ddots & \vdots \\ \Sigma_{n1}\Omega & \cdots & \Sigma_{nn}\Omega \end{pmatrix}.$$
Sometimes Σ is called the column covariance matrix while Ω is the row (or task)
covariance matrix. We denote $\mathbf{f} \sim \mathcal{MGP}(\mu, k, \Omega)$. As explained after Eq. (10) in
(Alvarez et al. 2012), the matrices Σ and Ω encode dependencies among the inputs,
respectively outputs.
where K′ is the n × n covariance matrix of which the (i, j)-th element $[K']_{ij} =
k'(x_i, x_j)$.
To predict a new variable $\mathbf{f}_* = [f_{*1}, \ldots, f_{*m}]$ at the test locations
$X_* = [x_{n+1}, \ldots, x_{n+m}]$, the joint distribution of the training observations
$Y = [y_1, \ldots, y_n]$ and the predictive targets $\mathbf{f}_*$ is given by
$$\begin{pmatrix} Y \\ \mathbf{f}_* \end{pmatrix} \sim \mathcal{MN}\left(0, \begin{pmatrix} K'(X, X) & K'(X_*, X)^T \\ K'(X_*, X) & K'(X_*, X_*) \end{pmatrix}, \Omega\right), \qquad (3.57)$$
where K′(X, X) is an n × n matrix of which the (i, j)-th element $[K'(X, X)]_{ij} =
k'(x_i, x_j)$, K′(X∗, X) is an m × n matrix of which the (i, j)-th element
$[K'(X_*, X)]_{ij} = k'(x_{n+i}, x_j)$, and K′(X∗, X∗) is an m × m matrix with the (i, j)-th
element $[K'(X_*, X_*)]_{ij} = k'(x_{n+i}, x_{n+j})$. Thus, taking advantage of the conditional
distribution of the multivariate Gaussian process, the predictive distribution is:
$$p(\mathrm{vec}(\mathbf{f}_*) \mid X, Y, X_*) = \mathcal{N}\big(\mathrm{vec}(\hat{M}),\; \hat{\Sigma} \otimes \hat{\Omega}\big), \qquad (3.58)$$
where
$$\mathcal{L}(Y \mid X, \lambda, \Omega) = \frac{nd}{2}\ln(2\pi) + \frac{d}{2}\ln|K'| + \frac{n}{2}\ln|\Omega| + \frac{1}{2}\mathrm{tr}\big((K')^{-1} Y \Omega^{-1} Y^T\big). \qquad (3.62)$$
Further details of the multi-GP are given in (Bonilla et al. 2007; Alvarez et al.
2012; Chen et al. 2017). The computational remarks made in Sect. 3.4 also apply
here, with the additional comment that the training and prediction time also scale
linearly (proportionally) with the number of dimensions d. Note that the task
covariance matrix Ω is estimated via a d-vector factor b by $\Omega = bb^T + \sigma_\Omega^2 I$ (where
the $\sigma_\Omega^2$ component corresponds to a standard white noise term). An alternative
computational approach, which exploits separability of the kernel, is the one
described in Section 6.1 of (Alvarez et al. 2012), with complexity $O(d^3 + n^3)$.
7 Summary
In this chapter we have introduced Bayesian regression and shown how it extends
many of the concepts in the previous chapter. We developed kernel based machine
learning methods, known as Gaussian processes, and demonstrated their application
to surrogate models of derivative prices. The key learning points of this chapter
are:
– Introduced Bayesian linear regression;
– Derived the posterior distribution and the predictive distribution;
– Described the role of the prior as an equivalent form of regularization in
maximum likelihood estimation; and
– Developed Gaussian Processes for kernel based probabilistic modeling, with
programming examples in derivative modeling.
8 Exercises
$$\theta \mid \mathcal{D} \sim \mathcal{N}(\mu', \Sigma'),$$
with moments:
$$\mu' = A^{-1} a = \Big(\Sigma^{-1} + \tfrac{1}{\sigma_n^2}\, X X^T\Big)^{-1}\Big(\Sigma^{-1}\mu + \tfrac{1}{\sigma_n^2}\, y^T X\Big)$$
$$\Sigma' = A^{-1} = \Big(\Sigma^{-1} + \tfrac{1}{\sigma_n^2}\, X X^T\Big)^{-1}.$$
$$p(x_{1:n} \mid \theta) = \prod_{i=1}^{n} \phi(x_i; \theta, \sigma^2),$$
and, under a Gaussian prior $\theta \sim \mathcal{N}(\mu_0, \sigma_0^2)$, the posterior moments are
$$\mu_{\mathrm{post}} = \frac{\sigma_0^2}{\frac{\sigma^2}{n} + \sigma_0^2}\,\bar{x} + \frac{\frac{\sigma^2}{n}}{\frac{\sigma^2}{n} + \sigma_0^2}\,\mu_0,$$
$$\sigma^2_{\mathrm{post}} = \frac{1}{\frac{1}{\sigma_0^2} + \frac{n}{\sigma^2}},$$
where $\bar{x} := \frac{1}{n}\sum_{i=1}^{n} x_i$.
Exercise 3.3: Prediction with GPs
Show that the predictive distribution for a Gaussian Process, with model output over
a test point, f∗ , and assumed Gaussian noise with variance σn2 , is given by
Appendix
Question 1
Answer: 1,4,5.
Parametric Bayesian regression always treats the regression weights as random
variables.
In Bayesian regression the data function f (x) is only observed if the data is
assumed to be noise-free. Otherwise, the function is not directly observed.
The posterior distribution of the parameters will only be Gaussian if both the
prior and the likelihood function are Gaussian. The distribution of the likelihood
function depends on the assumed error distribution.
The posterior distribution of the regression weights will typically contract with
increasing data. The posterior precision matrix grows as data accumulates and hence the
variance of the posterior shrinks with increasing data. There are exceptions if, for
example, there are outliers in the data.
The mean of the posterior distribution depends on both the mean and covariance
of the prior if it is Gaussian. We can see this from Eq. 3.19.
Question 2
Answer: 1, 2, 4. Prediction under a Bayesian linear model requires first estimating
the moments of the posterior distribution of the parameters. This is because the
prediction is the expected likelihood of the new data under the posterior distribution.
Python Notebooks
References
Alvarez, M., Rosasco, L., & Lawrence, N. (2012). Kernels for vector-valued functions: A review.
Foundations and Trends in Machine Learning, 4(3), 195–266.
Bishop, C. M. (2006). Pattern recognition and machine learning (information science and
statistics). Berlin, Heidelberg: Springer-Verlag.
Bonilla, E. V., Chai, K. M. A., & Williams, C. K. I. (2007). Multi-task Gaussian process prediction.
In Proceedings of the 20th International Conference on Neural Information Processing
Systems, NIPS’07, USA (pp. 153–160). Curran Associates Inc.
Chen, Z., Wang, B., & Gorban, A. N. (2017, March). Multivariate Gaussian and student−t process
regression for multi-output prediction. ArXiv e-prints.
Cousin, A., Maatouk, H., & Rullière, D. (2016). Kriging of financial term structures. European
Journal of Operational Research, 255, 631–648.
Crépey, S., & Dixon, M. (2020). Gaussian process regression for derivative portfolio modeling and
application to CVA computations. Computational Finance.
da Barrosa, M. R., Salles, A. V., & de Oliveira Ribeiro, C. (2016). Portfolio optimization through
kriging methods. Applied Economics, 48(50), 4894–4905.
Fang, F., & Oosterlee, C. W. (2008). A novel pricing method for European options based on
Fourier-cosine series expansions. SIAM J. SCI. COMPUT.
Gardner, J., Pleiss, G., Wu, R., Weinberger, K., & Wilson, A. (2018). Product kernel interpolation
for scalable Gaussian processes. In International Conference on Artificial Intelligence and
Statistics (pp. 1407–1416).
Gramacy, R., & Apley, D. (2015). Local Gaussian process approximation for large computer
experiments. Journal of Computational and Graphical Statistics, 24(2), 561–578.
Bühler, H., Gonon, L., Teichmann, J., & Wood, B. (2018). Deep hedging. Quantitative
Finance. Forthcoming (preprint version available as arXiv:1802.03042).
Hernandez, A. (2017). Model calibration with neural networks. Risk Magazine (June 1–5). Preprint
version available at SSRN.2812140, code available at https://github.com/Andres-Hernandez/
CalibrationNN.
Liu, M., & Staum, J. (2010). Stochastic kriging for efficient nested simulation of expected shortfall.
Journal of Risk, 12(3), 3–27.
Ludkovski, M. (2018). Kriging metamodels and experimental design for Bermudan option pricing.
Journal of Computational Finance, 22(1), 37–77.
MacKay, D. J. (1998). Introduction to Gaussian processes. In C. M. Bishop (Ed.), Neural networks
and machine learning. Springer-Verlag.
Melkumyan, A., & Ramos, F. (2011). Multi-kernel Gaussian processes. In Proceedings of
the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Two,
IJCAI’11 (pp. 1408–1413). AAAI Press.
Micchelli, C. A., Xu, Y., & Zhang, H. (2006, December). Universal kernels. J. Mach. Learn.
Res., 7, 2651–2667.
Murphy, K. (2012). Machine learning: a probabilistic perspective. The MIT Press.
Neal, R. M. (1996). Bayesian learning for neural networks, Volume 118 of Lecture Notes in
Statistics. Springer.
Pillonetto, G., Dinuzzo, F., & Nicolao, G. D. (2010, Feb). Bayesian online multitask learning of
Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(2),
193–205.
Rasmussen, C. E., & Ghahramani, Z. (2001). Occam's razor. In Advances in Neural Information
Processing Systems 13 (pp. 294–300). MIT Press.
Rasmussen, C. E., & Williams, C. K. I. (2006). Gaussian processes for machine learning. MIT
Press.
Roberts, S., Osborne, M., Ebden, M., Reece, S., Gibson, N., & Aigrain, S. (2013). Gaussian
processes for time-series modelling. Philosophical Transactions of the Royal Society of London
A: Mathematical, Physical and Engineering Sciences, 371(1984).
Scholkopf, B., & Smola, A. J. (2001). Learning with kernels: support vector machines, regulariza-
tion, optimization, and beyond. Cambridge, MA, USA: MIT Press.
Spiegeleer, J. D., Madan, D. B., Reyners, S., & Schoutens, W. (2018). Machine learning for
quantitative finance: fast derivative pricing, hedging and fitting. Quantitative Finance, 0(0),
1–9.
Weinan, E, Han, J., & Jentzen, A. (2017). Deep learning-based numerical methods for high-
dimensional parabolic partial differential equations and backward stochastic differential
equations. arXiv:1706.04702.
Whittle, P., & Sargent, T. J. (1983). Prediction and regulation by linear least-square methods (NED
- New edition ed.). University of Minnesota Press.
Chapter 4
Feedforward Neural Networks
1 Introduction
Artificial neural networks have a long history in financial and economic statistics. Building on the seminal work of Gallant and White (1988) and Hornik et al. (1989), various studies in the finance, economics, and business literature develop neural network applications (Andrews 1989; Swanson and White 1995; Kuan and White 1994; Lo 1994; Hutchinson, Lo, and Poggio 1994; Baillie and Kapetanios 2007; Racine 2001). Most recently, the literature has been extended to include deep neural networks (Sirignano et al. 2016; Dixon et al. 2016; Feng et al. 2018; Heaton et al. 2017).
In this chapter we shall introduce some of the theory of function approximation and out-of-sample estimation with neural networks when the observation points are independent and typically also identically distributed. Such a case is not suitable for time series data, which shall be the subject of later chapters. We shall restrict our attention to feedforward neural networks in order to explore some of the theoretical arguments which help us reason scientifically about architecture design and approximation error. Understanding these networks from a statistical, mathematical, and information-theoretic perspective is key to being able to successfully apply them in practice. While this chapter does present some simple financial examples to illustrate the methods, its emphasis is on the theory.
Chapter Objectives
By the end of this chapter, the reader should expect to accomplish the following:
– Develop mathematical reasoning skills to guide the design of neural networks;
– Gain familiarity with the main theory supporting statistical inference with neural
networks;
– Relate feedforward neural networks with other types of machine learning
methods;
– Perform model selection with ridge and LASSO neural network regression;
– Learn how to train and test a neural network; and
– Gain familiarity with Bayesian neural networks.
Note that section headers ending with * are more mathematically advanced, often
requiring some background in analysis and probability theory, and can be skipped
by the less mathematically advanced reader.
2 Feedforward Architectures
2.1 Preliminaries
The dropout technique of Srivastava et al. (2014) is discussed in Sect. 5.2.2. Recall from Chap. 1 that a feedforward neural network model takes the general form of a parameterized map

$$Y = F_{W,b}(X) + \epsilon, \tag{4.1}$$

where $F_{W,b}$ is a deep neural network with $L$ layers (Fig. 4.1) and $\epsilon$ is i.i.d. error. The deep neural network takes the form of a composition of simpler functions:

$$\hat{Y}(X) := F_{W,b}(X) = f^{(L)}_{W^{(L)},b^{(L)}} \circ \cdots \circ f^{(1)}_{W^{(1)},b^{(1)}}(X), \tag{4.2}$$

where $W = (W^{(1)}, \dots, W^{(L)})$ and $b = (b^{(1)}, \dots, b^{(L)})$ are weight matrices and bias vectors. Any weight matrix $W^{(\ell)} \in \mathbb{R}^{m \times n}$ can be expressed as $n$ column $m$-vectors $W^{(\ell)} = [w^{(\ell)}_{,1}, \dots, w^{(\ell)}_{,n}]$. We denote each weight as $w^{(\ell)}_{ij} := [W^{(\ell)}]_{ij}$.
More formally and under additional restrictions, we can form this parameterized map in the class of compositions of semi-affine functions.
Fig. 4.1 An illustrative example of a feedforward neural network with two hidden layers, six features, and two outputs. Deep learning network classifiers typically have many more layers, use a large number of features and several outputs or classes. The goal of learning is to find the weight on every edge and the bias for every neuron (not illustrated) that minimizes the out-of-sample error
Each layer map $f^{(\ell)}_{W^{(\ell)},b^{(\ell)}} : \mathbb{R}^n \to \mathbb{R}^m$, given by $f(v) = W^{(\ell)}\sigma^{(\ell-1)}(v) + b^{(\ell)}$, with $W^{(\ell)} \in \mathbb{R}^{m \times n}$ and $b^{(\ell)} \in \mathbb{R}^m$, is a semi-affine function in $v$, e.g. $f(v) = W\tanh(v) + b$. The $\sigma^{(\ell)}(\cdot)$ are the activation functions applied to the output of the previous layer.
If all the activation functions are linear, $F_{W,b}$ is just linear regression, regardless of the number of layers $L$, and the hidden layers are redundant. For any such network we can always find an equivalent network without hidden units. This follows from the fact that the composition of successive linear transformations is itself a linear transformation.1 For example, if there is one hidden layer and $\sigma^{(1)}$ is the identity function, then

$$\hat{Y}(X) = W^{(2)}(W^{(1)}X + b^{(1)}) + b^{(2)} = W^{(2)}W^{(1)}X + W^{(2)}b^{(1)} + b^{(2)} = \tilde{W}X + \tilde{b}. \tag{4.3}$$
Informally, the main effect of activation is to introduce non-linearity into the model and, in particular, interaction terms between the inputs. The geometric interpretation of the activation units will be discussed in the next section. We can view the special case when the network has one hidden layer and will see that the activation function introduces interaction terms $X_i X_j$. Consider the partial derivative

$$\partial_{X_j}\hat{Y} = \sum_i w^{(2)}_{,i}\,\sigma'(I^{(1)}_i)\,w^{(1)}_{ij}, \tag{4.4}$$

where $w^{(2)}_{,i}$ is the $i$th column vector of $W^{(2)}$ and $I^{(\ell)}(X) := W^{(\ell)}X + b^{(\ell)}$, and differentiate again with respect to $X_k$, $k \neq j$, to give

$$\partial^2_{X_j,X_k}\hat{Y} = -2\sum_i w^{(2)}_{,i}\,\sigma(I^{(1)}_i)\,\sigma'(I^{(1)}_i)\,w^{(1)}_{ij}\,w^{(1)}_{ik}, \tag{4.5}$$
1 Note that there is a potential degeneracy in this case: there may exist "flat directions", hypersurfaces in the parameter space that have exactly the same loss function.
Fig. 4.2 Simple two-variable feedforward networks with and without hidden layers: no hidden units (linear), two hidden units, and many hidden units. The yellow nodes denote input variables, the green nodes denote hidden units, and the red nodes are outputs. A feedforward network without hidden layers is a linear regressor. A feedforward network with one hidden layer is a shallow learner and a feedforward network with two or more hidden layers is a deep learner
The first configuration in Fig. 4.2, with no hidden units and a sigmoid-activated output, is a logistic regression. Recall that only one output unit is required to represent the probability of a positive label, i.e. $P[G = 1 \mid X]$. The next configuration we shall consider has one hidden layer, where the number of hidden units is equal to the number of input neurons. This choice serves as a useful reference point, as many hidden units are often needed for sufficient expressibility. The final configuration has substantially more hidden units. Note that the second layer has been introduced purely to visualize the output from the hidden layer. This set of simple configurations (a.k.a. architectures) is ample to illustrate how a neural network method works.
In Fig. 4.3 the data has been arranged so that no separating linear plane can perfectly separate the points in $[-1, 1] \times [-1, 1]$. The activation function is chosen to be ReLU$(x)$. The weights and biases of the network have been trained on this data. For each network, we can observe how the input space is transformed by the layers by viewing the top row of the figure. We can also view the linear regression in the original input space in the bottom row of the figure. The number of units in the first hidden layer is observed to significantly affect the classifier performance.2
Determining the weight and bias matrices, together with how many hidden units are needed for generalizable performance, is the goal of parameter estimation and model selection. However, we emphasize that some conceptual understanding of neural networks is needed to derive interpretability, the topic of Chap. 5.
Partitioning
2 There is some redundancy in the construction of the network and around 50 units are needed.
which divides the input space into convex regions. In other words, each unit in the hidden layer implements a half-space predictor. In the case of a ReLU activation function $f(x) = \max(x, 0)$, each manifold is simply a hyperplane and the neuron gets activated when the observation is on the "best" side of this hyperplane; the activation amount is equal to how far from the boundary the given point is. The set of hyperplanes defines a hyperplane arrangement (Montúfar et al. 2014). In general, an arrangement of $n \geq p$ hyperplanes in $\mathbb{R}^p$ has at most $\sum_{j=0}^{p}\binom{n}{j}$ convex regions.
For example, in a two-dimensional input space, three neurons with ReLU activation functions will divide the space into no more than $\sum_{j=0}^{2}\binom{3}{j} = 7$ regions, as shown in Fig. 4.4.
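As a simple illustration, the combinatorial bound is easy to evaluate directly in Python; the helper name max_regions below is merely illustrative.

```python
from math import comb

def max_regions(n, p):
    """Maximal number of convex regions of an arrangement of n hyperplanes in R^p:
    sum_{j=0}^{p} C(n, j)."""
    return sum(comb(n, j) for j in range(p + 1))

# Three ReLU units in a two-dimensional input space: at most 7 regions.
print(max_regions(3, 2))                              # 7
# For fixed p the bound grows only polynomially in n.
print([max_regions(n, 2) for n in range(1, 6)])       # [2, 4, 7, 11, 16]
```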
Multiple Hidden Layers
The units in a second hidden layer combine the half-spaces defined by the first layer, so they represent convex regions in the input space. This means that a neuron from the second layer is active if and only if the network input corresponds to a point in the input space that is located simultaneously in all half-spaces, which are classified by selected neurons from the first layer.
A neural network with $p$ input units and $L - 1$ hidden layers, with an equal number $n^{(\ell)} = n \geq p$ of rectifiers at the $\ell$th layer, can compute functions that have $\left\lfloor \frac{n}{p} \right\rfloor^{(L-2)p}\binom{n}{p}$ linear regions (Montúfar et al. 2014). We see that the number of linear regions of deep models grows exponentially in $L$ and polynomially in $n$. See Montúfar et al. (2014) for a more detailed exposition of how the additional layers partition the input space.
While this form of reasoning guides our intuition towards designing neural
network architectures it falls short at explaining why projection into a higher
dimensional space is complementary to how the networks partition the input space.
To address this, we turn to some informal probabilistic reasoning to aid our
understanding.
Data Dimensionality
Consider two random points $X, Y \in \mathbb{R}^p$ whose components are i.i.d. with zero mean and unit variance, and define the squared Euclidean distance

$$d(X,Y)^2 := \|X - Y\|_2^2 = \sum_{i=1}^{p}(X_i - Y_i)^2. \tag{4.6}$$

Taking expectations,

$$\mathbb{E}[d(X,Y)^2] = \sum_{i=1}^{p}\left(\mathbb{E}[X_i^2] + \mathbb{E}[Y_i^2]\right) = 2p. \tag{4.7}$$

Under these i.i.d. assumptions, the mean of the squared pairwise distance between any random points in $\mathbb{R}^p$ grows linearly with the dimensionality of the space. By Jensen's inequality for concave functions, such as $\sqrt{x}$,

$$\mathbb{E}[d(X,Y)] = \mathbb{E}\left[\sqrt{d(X,Y)^2}\right] \leq \sqrt{\mathbb{E}[d(X,Y)^2]} = \sqrt{2p}, \tag{4.8}$$
and hence the expected distance is bounded above by a function which grows as $p^{1/2}$. This simple observation supports the characterization of random points as being less concentrated as the dimensionality of the input space increases. In particular, this property suggests that machine learning techniques which rely on concentration of points in the input space, such as linear kernel methods, may not scale well with dimensionality. More importantly, this notion of loss of concentration with dimensionality of the input space does not conflict with how the input space is partitioned: the model defines a convex polytope with a less stringent requirement for locality of data for approximation accuracy.
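A short Monte Carlo experiment illustrates this loss of concentration; the sketch below simply checks that the sample means of $d(X,Y)^2$ and $d(X,Y)$ behave like $2p$ and at most $\sqrt{2p}$ for standard Gaussian features.

```python
import numpy as np

rng = np.random.default_rng(0)
for p in [1, 10, 100, 1000]:
    # 10,000 i.i.d. pairs of standard Gaussian vectors in R^p
    X = rng.standard_normal((10_000, p))
    Y = rng.standard_normal((10_000, p))
    d2 = np.sum((X - Y) ** 2, axis=1)          # squared pairwise distances
    print(f"p={p:5d}  E[d^2]~{d2.mean():8.1f} (2p={2 * p})"
          f"  E[d]~{np.sqrt(d2).mean():7.2f} (sqrt(2p)={np.sqrt(2 * p):.2f})")
```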
Size of Hidden Layer
A similar simple probabilistic reasoning can be applied to the output from a one-layer network to understand how concentration varies with the number of units in the hidden layer. Consider, as before, two i.i.d. random vectors $X$ and $Y$ in $\mathbb{R}^p$. Suppose now that these vectors are projected by a bounded semi-affine function $g : \mathbb{R}^p \to \mathbb{R}^q$. Assume that the output vectors $g(X), g(Y) \in \mathbb{R}^q$ are i.i.d. with zero mean and variance $\sigma^2 I$. Defining the distance between the output vectors as the 2-norm,

$$d_g^2 := \|g(X) - g(Y)\|_2^2 = \sum_{i=1}^{q}(g_i(X) - g_i(Y))^2. \tag{4.9}$$

Under expectations,

$$\mathbb{E}[d_g^2] = \sum_{i=1}^{q}\left(\mathbb{E}[g(X)_i^2] + \mathbb{E}[g(Y)_i^2]\right) = 2q\sigma^2 \leq q(\bar{g} - \underline{g})^2, \tag{4.10}$$
we observe that the distance between the two output vectors, corresponding to the
output of a hidden layer g under different inputs X and Y , can be less concentrated
as the dimensionality of the output space increases. In other words, points in the
codomain of g are on average more separate as q increases.
While the above informal geometric and probabilistic reasoning provides some intu-
ition for the need for multiple units in the hidden layer of a two-layer MLP, it does
not address why deep networks are needed. The most fundamental mathematical
concept in neural networks is the universal representation theorem. Simply put, this
is a statement about the ability of a neural network to approximate any continuous,
and unknown, function between input and output pairs with a simple, and known,
functional representation. Hornik et al. (1989) show that a feedforward network
with a single hidden layer can approximate any continuous function, regardless of
the choice of activation function or data.
Formally, let $C^p := \{F : \mathbb{R}^p \to \mathbb{R} \mid F(x) \in C(\mathbb{R})\}$ be the set of continuous functions from $\mathbb{R}^p$ to $\mathbb{R}$. Denote by $\Sigma^p(\sigma)$ the class of functions $\{F : \mathbb{R}^p \to \mathbb{R} : F(x) = W^{(2)}\sigma(W^{(1)}x + b^{(1)}) + b^{(2)}\}$. Consider $\Omega = (0, 1]$ and let $C_0$ be the collection of all open intervals in $(0, 1]$. Then $\sigma(C_0)$, the $\sigma$-algebra generated by $C_0$, is called the Borel $\sigma$-algebra. It is denoted by $\mathcal{B}((0,1])$. An element of $\mathcal{B}((0,1])$ is called a Borel set. A map $f : X \to Y$ between two topological spaces $X, Y$ is called Borel measurable if $f^{-1}(A)$ is a Borel set for any open set $A$.
Let $M^p := \{F : \mathbb{R}^p \to \mathbb{R} \mid F(x) \in \mathcal{B}(\mathbb{R})\}$ be the set of all Borel measurable functions from $\mathbb{R}^p$ to $\mathbb{R}$. We denote the Borel $\sigma$-algebra of $\mathbb{R}^p$ as $\mathcal{B}^p$.
This theorem shows that standard feedforward networks with only a single
hidden layer can approximate any continuous function uniformly on any compact
set and any measurable function arbitrarily well in the ρμ metric, regardless of the
activation function (provided it is measurable), regardless of the dimension of the
input space, p, and regardless of the input space. In other words, by taking the
number of hidden units, k, large enough, every continuous function over Rp can
be approximated arbitrarily closely, uniformly over any bounded set by functions
realized by neural networks with one hidden layer.
The universal approximation theorem is important because it characterizes
feedforward networks with a single hidden layer as a class of approximate solutions.
However, the theorem is not constructive—it does not specify how to configure a
multilayer perceptron with the required approximation properties.
The theorem has some important limitations. It says nothing about the effect of
adding more layers, other than to suggest they are redundant. It assumes that the
optimal network weight vectors are reachable by gradient descent from the initial
weight values, but this may not be possible in finite computations. Hence there are
additional limitations introduced by the learning algorithm which are not apparent
from a functional approximation perspective. The theorem cannot characterize the prediction error in any way; the result is purely based on approximation theory. Important concerns such as over-fitting and generalization performance on out-of-sample datasets are also not addressed. Moreover, it does not inform how MLPs
can recover other approximation techniques, as a special case, such as polynomial
spline interpolation. As such we shall turn to alternative theory in this section to
assess the learnability of a neural network and to further understand it, beginning
with a perceptron binary classifier. The reason why multiple hidden layers are
needed is still an open problem, but various clues are provided in the next section
and later in Sect. 2.7.
2.5 VC Dimension
Fig. 4.5 For the points {−0.5, 0.5}, there are weights and biases that activate only one of them ($W = 1, b = 0$ or $W = -1, b = 0$), none of them ($W = 1, b = -0.75$), and both of them ($W = 1, b = 0.75$)
In one dimension, only two distinct points can always be correctly classified under all possible binary label assignments.
As shown in Fig. 4.5, for the points {−0.5, 0.5}, there are weights and biases that
activate both of them (W = 1, b = 0.75), only one of them (W = 1, b = 0 or
W = −1, b = 0), and none of them (W = 1, b = −0.75). Every distinct pair of
points is separable with the linear threshold perceptron. So every dataset of size 2 is
shattered by the perceptron. However, this linear threshold perceptron is incapable
of shattering triplets, for example, X ∈ {−0.5, 0, 0.5} and Y ∈ {0, 1, 0}. In general,
the VC dimension of the class of half-spaces in Rk is k + 1. For example, a 2d plane
shatters any three points, but cannot shatter four points.
The VC dimension determines both the necessary and sufficient conditions for
the consistency and rate of convergence of learning processes (i.e., the process
of choosing an appropriate function from a given set of functions). If a class of
functions has a finite VC dimension, then it is learnable. This measure of capacity is
more robust than arbitrary measures such as the number of parameters. It is possible,
for example, to find a simple set of functions that depends on only one parameter
and that has infinite VC dimension.
VC Dimension of an Indicator Function
Determine the VC dimension of the indicator function over $\Omega = [0, 1]$:

$$\mathcal{F}(x) = \{f : \Omega \to \{0,1\},\ f(x) = \mathbb{1}_{x \in [t_1, t_2)}\ \text{or}\ f(x) = 1 - \mathbb{1}_{x \in [t_1, t_2)},\ t_1 < t_2 \in \Omega\}. \tag{4.12}$$
Suppose there are three points $x_1$, $x_2$, and $x_3$ and assume $x_1 < x_2 < x_3$ without loss of generality. All possible labelings of the points are reachable; therefore, we assert that $VC(\mathcal{F}) \geq 3$. With four points $x_1, x_2, x_3$, and $x_4$ (assumed increasing as always), you cannot, for example, label $x_1$ and $x_3$ with the value 1 and $x_2$ and $x_4$ with the value 0. Hence $VC(\mathcal{F}) = 3$.
Recently Bartlett et al. (2017a) proved upper and lower bounds on the VC dimension of deep feedforward neural network classifiers with piecewise linear activation functions, such as ReLU. These bounds are tight for almost the entire range of parameters. Letting $|W|$ be the number of weights and $L$ be the number of layers, they proved that the VC dimension is $O(|W|L\log(|W|))$.
They further showed the effect of network depth on VC dimension with different
non-linearities: there is no dependence for piecewise constant, linear dependence
for piecewise-linear, and no more than quadratic dependence for general piecewise-
polynomials.
Vapnik (1998) formulated a method of VC dimension based inductive inference. This approach, known as structural empirical risk minimization, achieves the smallest bound on the test error by using the training errors and choosing the machine (i.e., the set of functions) with the smallest VC dimension. The
minimization problem expresses the bias–variance tradeoff . On the one hand, to
minimize the bias, one needs to choose a function from a wide set of functions, not
necessarily with a low VC dimension. On the other hand, the difference between the
training error and the test error (i.e., variance) increases with VC dimension (a.k.a.
expressibility).
The expected risk is an out-of-sample measure of performance of the learned model and is based on the joint probability density function (pdf) $p(x, y)$:

$$R[\hat{F}] = \mathbb{E}[L(\hat{F}(X), Y)] = \int L(\hat{F}(x), y)\,dp(x, y). \tag{4.13}$$

If one could choose $\hat{F}$ to minimize the expected risk, then one would have a definite measure of optimal learning. Unfortunately, the expected risk cannot be measured directly since this underlying pdf is unknown. Instead, we typically use the risk over the training set of $N$ observations, also known as the empirical risk measure (ERM):

$$R_{emp}(\hat{F}) := \frac{1}{N}\sum_{i=1}^{N} L(\hat{F}(x_i), y_i). \tag{4.14}$$
Under i.i.d. data assumptions, the law of large numbers ensures that the empirical risk will asymptotically converge to the expected risk. However, for small samples, one cannot guarantee that ERM will also minimize the expected risk. A famous result from statistical learning theory (Vapnik 1998) is that the VC dimension provides bounds on the expected risk as a function of the ERM and the number of training observations $N$, which hold with probability $(1 - \eta)$:

$$R[\hat{F}] \leq R_{emp}(\hat{F}) + \sqrt{\frac{h\left(\ln\frac{2N}{h} + 1\right) - \ln\frac{\eta}{4}}{N}}, \tag{4.15}$$
where $h$ is the VC dimension of $\hat{F}(X)$ and $N > h$. Figure 4.6 shows the tradeoff between VC dimension and the tightness of the bound. As the ratio $N/h$ gets larger, i.e. for a fixed $N$ we decrease $h$, the VC confidence becomes smaller and the actual risk becomes closer to the empirical risk. On the other hand, choosing a model with a higher VC dimension reduces the ERM at the expense of increasing the VC confidence.
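The tradeoff can be made concrete by evaluating the VC confidence term in Eq. 4.15 for a fixed sample size; the following sketch uses illustrative choices of $N$ and $\eta$.

```python
import numpy as np

def vc_confidence(h, N, eta=0.05):
    """VC confidence term in Eq. 4.15 (requires N > h)."""
    return np.sqrt((h * (np.log(2 * N / h) + 1) - np.log(eta / 4)) / N)

N = 10_000
for h in [10, 100, 1_000, 5_000]:
    # The bound on the expected risk is R_emp + vc_confidence(h, N).
    print(f"h={h:5d}  N/h={N / h:7.1f}  VC confidence={vc_confidence(h, N):.3f}")
```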
Under certain choices of the activation function, we can construct MLPs which are a certain type of piecewise polynomial interpolants referred to as "splines." Let $f(x)$ be any function whose domain is $\Omega$ and whose values $f_k := f(x_k)$ are known only at grid points $\Omega_h := \{x_k \mid x_k = kh,\ k \in \{1, \dots, K\}\} \subset \Omega \subset \mathbb{R}$, which are spaced by $h$. Note that the requirement that the data is gridded is for ease of exposition and is not necessary. We construct an orthogonal basis over $\Omega$ to give the interpolant

$$\hat{f}(x) = \sum_{k=1}^{K} \phi_k(x) f_k, \quad x \in \Omega, \tag{4.16}$$

and suppose that $f$ is Lipschitz continuous, $|f(x) - f(y)| \leq L|x - y|$, for some constant $L \geq 0$. Using Heaviside functions to activate the hidden units,

$$H(x) = \begin{cases} 1, & x \geq 0, \\ 0, & x < 0, \end{cases} \tag{4.17}$$

the $\{\phi_k\}_{k=1}^{K}$ are piecewise constant basis functions, $\phi_i(x_j) = \delta_{ij}$, and the first few are illustrated in Fig. 4.7 below. The basis functions satisfy the partition of unity property $\sum_{k=1}^{K} \phi_k(x) = 1,\ \forall x \in \Omega$.
Fig. 4.7 The first three piecewise constant basis functions produced by the difference of neighboring step function activated units, $\phi_k(x) = H(x - (x_k - \Delta)) - H(x - (x_k + \Delta))$
Then $W^{(2)}$ is set equal to the exact function values and their differences, so that, writing $\Delta := h/2$ for the half-width of each grid cell,
Fig. 4.8 The approximation of $\cos(2\pi x)$ using gridded input data and Heaviside activation functions (left: function values, right: absolute error). The error in the approximation is at most $\epsilon$ with $K = \frac{L}{2\epsilon} + 1$ hidden units
$$\hat{f}(x) = \begin{cases} f(x_1), & x \leq 2\Delta, \\ f(x_2), & 2\Delta < x \leq 4\Delta, \\ \dots & \dots \\ f(x_k), & 2k\Delta < x \leq 2(k+1)\Delta, \\ \dots & \dots \\ f(x_{K-1}), & 2(K-1)\Delta < x \leq 2K\Delta. \end{cases} \tag{4.21}$$
Figure 4.8 illustrates the function approximation for the case when $f(x) = \cos(2\pi x)$.
Since $x_k = (2k - 1)\Delta$, we have that $\hat{Y} = f(x_k)$ over the interval $[x_k - \Delta, x_k + \Delta]$, which is the support of $\phi_k(x)$. By the Lipschitz continuity of $f(x)$, it follows that the worst-case error appears at the mid-point of any interval $[x_k, x_{k+1})$ and is bounded by $L\Delta$.
Hence, under a special configuration of the weights and biases, with the hidden
units defining Voronoi cells for each observation, we can show that a neural
network is a univariate spline. This result generalizes to higher dimensional and
higher order splines. Such a result enables us to view splines as a special case
of a neural network which is consistent with our reasoning of neural networks as
generalized approximation and regression techniques. The formulation of neural
networks as splines allows approximation theory to guide the design of the network.
Unfortunately, equating neural networks with splines says little about why and when
multiples layers are needed.
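The Heaviside construction above is straightforward to reproduce numerically. The following sketch builds the piecewise constant interpolant of $f(x) = \cos(2\pi x)$ from step-function activated hidden units on a uniform grid; the variable names are illustrative and the error is checked against the Lipschitz bound $L\Delta$.

```python
import numpy as np

def heaviside_net(x, centers, delta, f):
    """One-hidden-layer network with Heaviside units: each basis function
    phi_k(x) = H(x - (x_k - delta)) - H(x - (x_k + delta)) selects one grid cell,
    and the output weights are the function values f(x_k)."""
    H = lambda z: (z >= 0).astype(float)
    phi = H(x[:, None] - (centers - delta)) - H(x[:, None] - (centers + delta))
    return phi @ f(centers)

f = lambda x: np.cos(2 * np.pi * x)
K = 50                                              # number of hidden units / grid cells on [0, 1]
delta = 0.5 / K                                     # half-width of each cell
centers = (2 * np.arange(1, K + 1) - 1) * delta     # x_k = (2k - 1) * delta

x = np.linspace(0, 1, 2000, endpoint=False)
err = np.abs(f(x) - heaviside_net(x, centers, delta, f))
print(f"max abs error = {err.max():.4f}, Lipschitz bound L*delta = {2 * np.pi * delta:.4f}")
```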
Recall that Bartlett et al. (2017a) showed the effect of network depth on VC dimension with different non-linearities: there is no dependence for piecewise constant activations, linear dependence for piecewise-linear ones, and no more than quadratic dependence for general piecewise-polynomials. Thus the relationship between expressibility and depth is determined by the degree of the activation function. There is further ample theoretical evidence to
suggest that shallow networks cannot approximate the class of non-linear functions represented by deep ReLU networks without blow-up. Telgarsky (2016) shows that there is a ReLU network with $L$ layers such that any network approximating it with only $O(L^{1/3})$ layers must have $\Omega(2^{L^{1/3}})$ units. Mhaskar et al. (2016) discuss the differences between composition versus additive models and show that it is possible to approximate higher polynomials much more efficiently with several hidden layers than a single hidden layer.
Martin and Mahoney (2018) show that deep networks are implicitly self-regularizing, behaving like Tikhonov regularization. Tishby and Zaslavsky (2015) characterize the layers as "statistically decoupling" the input variables.
To gain some intuition as to why function composition can lead to successively more accurate function representation with each layer, consider the following example of a binary expansion of a decimal $x \in [0, 1]$: the binary digits $x_1, \dots, x_n$ are chosen so that the remainder $x - \sum_{i=1}^{n}\frac{x_i}{2^i}$ is small. For example, we can find the first binary digit $x_1$ as either 1 or 0 depending on whether $x_0 = x \geq \frac{1}{2}$. Now consider $X_1 = x - x_1/2$ and set $x_2 = 1$ if $X_1 \geq \frac{1}{2^2}$ or $x_2 = 0$ otherwise.
Each step of this recursion can be implemented by a layer with two neurons,

$$W^{(\ell)} = \begin{pmatrix} -\frac{1}{2^{\ell-1}} & 1 \\ -\frac{1}{2^{\ell-1}} & 1 \end{pmatrix},$$

and $\sigma_1^{(\ell)}(x) = H(x, \frac{1}{2^{\ell}})$ and $\sigma_2^{(\ell)}(x) = \mathrm{id}(x) = x$. There are no bias terms. The output after $\ell$ hidden layers is the error, $X_{\ell} \leq \frac{1}{2^{\ell}}$.
Fig. 4.9 An illustrative example of a deep feedforward neural network for binary expansion of a decimal. Each layer has two neurons with different activations, Heaviside and identity functions: for example, $H(x, 1/2)$ and $\mathrm{id}(x)$ in the first layer, and $H(x - x_1/2 - x_2/4, 1/8)$ and $\mathrm{id}(x - x_1/2 - x_2/4)$ deeper in the network
approximations. For example, consider the shallow ReLU network with 2 and 4 perceptrons in Fig. 4.10.
Fig. 4.11 Adding versus composing 2-sawtooth functions. (a) Adding 2-sawtooths: $f(x)$, $g(x)$, and $f(x) + g(x)$. (b) Composing 2-sawtooths: $f(g(x))$
Note that $f_m$ can be represented by a two-layer ReLU activated network with two neurons; for instance, $f_m(x) = 2\sigma(x) - 4\sigma(x - 1/2)$, where $\sigma$ is the ReLU function. Hence $f_m^k$, the $k$-fold composition, is the composition of $k$ (identical) ReLU sub-networks. A key observation is that fewer hidden units are needed to shatter a set of points when the network is deep versus shallow.
Consider, for example, the sequence of $n = 2^k$ points with alternating labels, referred to as the n-ap, and illustrated in Fig. 4.13 for the case when $k = 3$. As the $x$ values pass from left to right, the labels change as often as possible and provide the most challenging arrangement for shattering $n$ points.
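The doubling of oscillations under composition can be verified directly. In the sketch below, $f_m$ is the 2-sawtooth implemented with two ReLU units and we count how often its $k$-fold composition crosses $1/2$ on $[0, 1]$; the grid size is chosen so that no crossing falls exactly on a grid point.

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)

def f_m(x):
    # 2-sawtooth ("tent") map written with two ReLU units: 2*relu(x) - 4*relu(x - 1/2)
    return 2 * relu(x) - 4 * relu(x - 0.5)

def compose(f, k, x):
    # k-fold composition f(f(...f(x)...))
    for _ in range(k):
        x = f(x)
    return x

x = np.linspace(0.0, 1.0, 99_999)
for k in [1, 2, 3, 4]:
    y = compose(f_m, k, x)
    crossings = np.sum((y[:-1] - 0.5) * (y[1:] - 0.5) < 0)
    print(f"k={k}: crossings of 1/2 = {crossings}")   # 2, 4, 8, 16: doubles with each composition
```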
There are many ways to measure the representation power of a network, but we will consider the classification error here. Suppose that we have a $\sigma$-activated network with $m$ units per layer and $l$ layers. Given a function $f : \mathbb{R}^p \to \mathbb{R}$, let $\tilde{f} : \mathbb{R}^p \to \{0, 1\}$ denote the corresponding classifier $\tilde{f}(x) := \mathbb{1}_{f(x) \geq 1/2}$ and, additionally, given a sequence of points $((x_i, y_i))_{i=1}^{n}$ with $x_i \in \mathbb{R}^p$ and $y_i \in \{0, 1\}$, define the classification error as $\mathcal{E}(f) := \frac{1}{n}\sum_i \mathbb{1}_{\tilde{f}(x_i) \neq y_i}$.
Given a sawtooth function, its classification error on the n-ap may be lower bounded as follows.
(Figure: the compositions $f_m$, $f_m^2$, and $f_m^3$ on $[0, 1]$.)
Fig. 4.13 The n-ap consists of $n$ uniformly spaced points with alternating labels over the interval $[0, 1 - 2^{-n}]$. That is, the points $((x_i, y_i))_{i=1}^{n}$ with $x_i = i2^{-n}$ and $y_i = 0$ when $i$ is even, and otherwise $y_i = 1$
Lemma 4.2 Let $((x_i, y_i))_{i=1}^{n}$ be given according to the n-ap. Then every $t$-sawtooth function $f : \mathbb{R} \to \mathbb{R}$ satisfies $\mathcal{E}(f) \geq (n - 4t)/(3n)$.
The proof in the appendix relies on a simple counting argument for the number of crossings of 1/2. If there are $m$ $t$-sawtooth functions, then by Lemma 4.1 the resultant is a piecewise affine function over $mt$ intervals. The main theorem now directly follows from Lemma 4.2.
Theorem 4.3 Let positive integer $k$, number of layers $l$, and number of nodes per layer $m$ be given. Given a $t$-sawtooth $\sigma : \mathbb{R} \to \mathbb{R}$ and $n := 2^k$ points as specified by the n-ap, then

$$\min_{W,b} \mathcal{E}(f) \geq \frac{n - 4(tm)^l}{3n}.$$
From this result one can say, for example, that on the n-ap one needs $m = 2^{k-3}$ units when classifying with a ReLU activated shallow network versus only $m = 2^{(k-2)/l - 1}$ units per layer for an $l \geq 2$ deep network.
Research on deep learning is very active and there are still many questions that
need to be addressed before deep learning is fully understood. However, the purpose
of these examples is to build intuition and motivate the need for many hidden layers
in addition to the effect of increasing the number of neurons in each hidden layer.
In the remaining part of this chapter we turn towards the practical application
of neural networks and consider some of the primary challenges in the context of
financial modeling. We shall begin by considering how to preserve the shape of
functions being approximated and, indeed, how to train and evaluate a network.
3 Convexity and Inequality Constraints
It may be necessary to restrict the range of $\hat{f}(x)$ or impose certain properties which are known about the shape of the function $f(x)$ being approximated. For example, $V = f(S)$ might be an option price and $S$ the value of the underlying asset, and convexity and non-negativity of $\hat{f}(S)$ are necessary. Consider the following feedforward network architecture $F_{W,b}(X) : \mathbb{R}^p \to \mathbb{R}^d$:

$$\hat{Y} = F_{W,b}(X) = f^{(L)}_{W^{(L)},b^{(L)}} \circ \cdots \circ f^{(1)}_{W^{(1)},b^{(1)}}(X), \tag{4.23}$$

where

$$f^{(\ell)}_{W^{(\ell)},b^{(\ell)}}(x) = \sigma^{(\ell)}(W^{(\ell)}x + b^{(\ell)}), \quad \forall \ell \in \{1, \dots, L\}. \tag{4.24}$$
Convexity
Convexity of the output w.r.t. the input is preserved when the activation functions are convex and the weights in the layers above the first are non-negative:

$$w_{ij}^{(\ell)} \geq 0, \quad \forall i, j,\ \forall \ell \in \{2, \dots, L\}. \tag{4.25}$$

The output can be bounded below by constants $c_i$ by setting

$$b_i^{(L)} = c_i - \sum_{j=1}^{n^{(L-1)}} \min(s_{ij}|\underline{\sigma}|, s_{ij}|\bar{\sigma}|)\,|w_{ij}^{(L)}|, \quad s_{ij} := \mathrm{sign}(w_{ij}^{(L)}), \tag{4.28}$$

where $\underline{\sigma}$ and $\bar{\sigma}$ denote the lower and upper bounds of the hidden layer activation function.
3 The parameterized softplus function $\sigma(x; t) = \frac{1}{t}\ln(1 + \exp\{tx\})$, with a model parameter $t \gg 1$, converges to the ReLU function in the limit $t \to \infty$.
Note that the expression inside the min function can be simplified further to $\min(s_{ij}|\underline{\sigma}|, s_{ij}|\bar{\sigma}|)|w_{ij}| = \min(w_{ij}|\underline{\sigma}|, w_{ij}|\bar{\sigma}|)$. Training of the weights and biases is a constrained optimization problem with the linear constraints

$$\sum_{j=1}^{n^{(L-1)}} \max(s_{ij}|\underline{\sigma}|, s_{ij}|\bar{\sigma}|)\,|w_{ij}^{(L)}| \leq d_i - b_i^{(L)}, \tag{4.29}$$

and solving the underdetermined system for $w_{ij}^{(L)}$, $\forall j$:

$$\bar{\sigma} \sum_{j=1}^{n^{(L-1)}} w_{ij}^{(L)} \leq d_i - b_i^{(L)}, \tag{4.31}$$

$$\sum_{j=1}^{n^{(L-1)}} w_{ij}^{(L)} \leq \frac{d_i - c_i}{\bar{\sigma} - \underline{\sigma}}. \tag{4.32}$$
The delta of the network approximation is then

$$\hat{\Delta}(X) = \partial_X \hat{Y} = (W^{(2)})^T D W^{(1)}, \quad D_{ii} = \frac{1}{1 + \exp\{-w^{(1)}_{i,}X - b^{(1)}_i\}}. \tag{4.33}$$
Under the BS model, the delta of a call option is in the interval $[0, 1]$. Note that the delta, although observed positive here, could be negative since there are no restrictions on $W^{(1)}$. Similarly, the delta approximation is observed to exceed unity. Thus, additional constraints are needed to bound the delta. For this architecture, imposing $w_{ij}^{(1)} \geq 0$ preserves the non-negativity of the delta and $\sum_{j=1}^{n^{(1)}} w_{ij}^{(2)}\,w_{j\cdot}^{(1)} \leq 1,\ \forall i$, bounds the delta at unity.
Fig. 4.14 (a) The out-of-sample call prices are estimated using a single-layer neural network with
constraints to ensure non-negativity and convexity of the price approximation w.r.t. the underlying
price S. (b) The analytic derivative of Ŷ is taken as the approximation of delta and compared over
the test set with the Black–Scholes delta. We observe that additional constraints on the weights are
needed to ensure that ∂X Ŷ ∈ [0, 1]
No-Arbitrage Pricing
The previous examples are simple enough to illustrate the application of con-
straints in neural networks. However, one would typically need to enforce more
complex constraints for no-arbitrage pricing and calibration. Pricing approximations
should be monotonically increasing w.r.t. maturity and convex w.r.t. strike. Such constraints require that the neural network is fitted with additional input variables $K$ and $T$.
Accelerating Calibrations
One promising direction, which does not require neural network derivative
pricing, is to simply learn a stochastic volatility based pricing model, such as the
Heston model, as a function of underlying price, strike, and maturity, and then use
the neural network pricing function to calibrate the pricing model. Such a calibration involves fitting a few parameters to the chain of observed option prices or implied volatilities. Replacement of expensive pricing functions, which may require FFTs
or Monte Carlo methods, with trained neural networks reduces calibration time
considerably. See Horvath et al. (2019) for further details.
Dupire Local Volatility
Another challenge is how to price exotic options consistently with the market
prices of their European counterpart. The former are typically traded over-the-
counter, whereas the latter are often exchange traded and therefore “fully” observ-
able. To fix ideas, let C(K, T ) denote an observed call price, for some fixed strike,
K, maturity, T , and underlying price St . Modulo a short rate and dividend term, the
unique "effective" volatility, $\sigma_0^2$, is given by the Dupire formula:

$$\sigma_0^2 = \frac{2\,\partial_T C(K, T)}{K^2\,\partial_K^2 C(K, T)}. \tag{4.34}$$

The challenge arises when calibrating the local volatility model: extracting the effective volatility from market option prices is an ill-posed inverse problem. Such a challenge has recently been addressed by Chataigner et al. (2020) in their paper on deep local volatility.
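As a rough numerical illustration of Eq. 4.34, the sketch below applies central finite differences to Black-Scholes call prices, for which the recovered local volatility should equal the flat input volatility; the helper names and step sizes are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, sigma):
    """Black-Scholes call price with zero rates and dividends, for illustration only."""
    d1 = (np.log(S / K) + 0.5 * sigma**2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * norm.cdf(d2)

S0, sigma_true = 100.0, 0.2
K, T = 100.0, 1.0
dK, dT = 0.5, 1.0 / 365.0

# Central finite differences for the terms in the Dupire formula (4.34)
dC_dT = (bs_call(S0, K, T + dT, sigma_true) - bs_call(S0, K, T - dT, sigma_true)) / (2 * dT)
d2C_dK2 = (bs_call(S0, K + dK, T, sigma_true) - 2 * bs_call(S0, K, T, sigma_true)
           + bs_call(S0, K - dK, T, sigma_true)) / dK**2

sigma_local = np.sqrt(2 * dC_dT / (K**2 * d2C_dK2))
print(f"recovered sigma = {sigma_local:.4f} (true {sigma_true})")
```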
A feedforward network with one hidden layer is essentially a projection pursuit regression (PPR): both project the input vector onto a hyperplane and apply a non-linear transformation into feature space, followed by an affine transformation. The mapping of input vectors to feature space by the hidden layer is conceptually similar to kernel methods, such as support vector machines (SVMs), which map to a kernel space, where classification and regression are subsequently performed. Boosted decision stumps (one-level boosted decision trees) can even be expressed as a single-layer MLP. Caution must be exercised in
over-stretching these conceptual similarities. Data generation assumptions aside,
there are differences in the classes of non-linear functions and learning algorithms
used. For example, the non-linear function being fitted in PPR can be different for
each combination of input variables and is sequentially estimated before updating
the weights. In contrast, neural networks fix these functions and estimate all the
weights belonging to a single layer simultaneously. A summary of other machine
learning approaches is given in Table 4.1 and we refer the reader to numerous
excellent textbooks (Bishop 2006; Hastie et al. 2009) covering such methods.
Table 4.1 This table compares supervised machine learning algorithms (reproduced from Mullainathan and Spiess (2017)). For each function class F (and its parameterization), the associated regularizer R(f) is listed.

Global/parametric predictors
– Linear $\beta'x$ (and generalizations): subset selection $\|\beta\|_0 = \sum_{j=1}^{k} 1_{\beta_j \neq 0}$; LASSO $\|\beta\|_1 = \sum_{j=1}^{k} |\beta_j|$; ridge $\|\beta\|_2^2 = \sum_{j=1}^{k} \beta_j^2$; elastic net $\alpha\|\beta\|_1 + (1 - \alpha)\|\beta\|_2^2$

Local/non-parametric predictors
– Decision/regression trees: depth, number of nodes/leaves, minimal leaf size, information gain at splits
– Random forest (linear combination of trees): number of trees, number of variables used in each tree, size of bootstrap sample, complexity of trees (see above)
– Nearest neighbors: number of neighbors
– Kernel regression: kernel bandwidth

Mixed predictors
– Deep learning, neural nets, convolutional neural networks: number of levels, number of neurons per level, connectivity between neurons
– Splines: number of knots, order

Combined predictors
– Bagging (unweighted average of predictors from bootstrap draws): number of draws, size of bootstrap samples (and individual regularization parameters)
– Boosting (linear combination of predictions of residuals): learning rate, number of iterations (and individual regularization parameters)
– Ensemble (weighted combination of different predictors): ensemble weights (and individual regularization parameters)
4 Training, Validation, and Testing
The neural network is trained by minimizing the loss function

$$f(W, b) = \frac{1}{N}\sum_{i=1}^{N} L(Y^{(i)}, \hat{Y}(X^{(i)}))$$

combined with a regularization penalty, $\phi(W, b)$. The loss function is non-convex, possessing many local minima, and it is generally difficult to find a global minimum. An important assumption, which is often not explicitly stated, is that the errors are assumed to be "homoscedastic." Homoscedasticity is the assumption that the error has an identical distribution over each observation. This assumption can be relaxed by
4 Training, Validation, and Testing 141
For example, if there are $K = 3$ classes, then $G = [0, 0, 1]$, $G = [0, 1, 0]$, or $G = [1, 0, 0]$ to represent the three classes. When $K > 2$, the output layer has $K$ neurons and the loss function is the negative cross-entropy

$$L(G, \hat{G}(X)) = -\sum_{k=1}^{K} G_k \ln \hat{G}_k. \tag{4.36}$$

For the case when $K = 2$, i.e. binary classification, there is only one neuron in the output layer and the loss function is the binary cross-entropy

$$L(G, \hat{G}(X)) = -G\ln\hat{G} - (1 - G)\ln(1 - \hat{G}). \tag{4.37}$$

The softmax function is defined as

$$\sigma_s(x)_k = \frac{\exp(x_k)}{\|\exp(x)\|_1}, \quad k \in \{1, \dots, K\}, \tag{4.38}$$

where $\|\sigma_s(x)\|_1 = 1$. The softmax function is used to represent a probability distribution over $K$ possible states:

$$\hat{G}_k = P(G = k \mid X) = \sigma_s(WX + b)_k = \frac{\exp((WX + b)_k)}{\|\exp(WX + b)\|_1}. \tag{4.39}$$
Using the quotient rule $f'(x) = \frac{g'(x)h(x) - h'(x)g(x)}{[h(x)]^2}$, the derivative $\sigma' := \sigma_s'(x)$ can be written as:

$$\frac{\partial \sigma_i}{\partial x_i} = \frac{\exp(x_i)\|\exp(x)\|_1 - \exp(x_i)\exp(x_i)}{\|\exp(x)\|_1^2} \tag{4.40}$$
$$= \frac{\exp(x_i)}{\|\exp(x)\|_1} \cdot \frac{\|\exp(x)\|_1 - \exp(x_i)}{\|\exp(x)\|_1} \tag{4.41}$$
$$= \sigma_i(1 - \sigma_i). \tag{4.42}$$

This can be written compactly as $\frac{\partial \sigma_i}{\partial x_j} = \sigma_i(\delta_{ij} - \sigma_j)$, where $\delta_{ij}$ is the Kronecker delta function.
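The compact expression for the softmax Jacobian is easily checked numerically; the sketch below compares it against central finite differences for an arbitrary input.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())            # subtract the max for numerical stability
    return e / e.sum()

x = np.array([0.5, -1.0, 2.0])
s = softmax(x)

# Analytic Jacobian: J_ij = s_i (delta_ij - s_j)
J_analytic = np.diag(s) - np.outer(s, s)

# Finite-difference Jacobian
eps = 1e-6
J_fd = np.zeros((3, 3))
for j in range(3):
    dx = np.zeros(3); dx[j] = eps
    J_fd[:, j] = (softmax(x + dx) - softmax(x - dx)) / (2 * eps)

print(np.max(np.abs(J_analytic - J_fd)))   # of the order 1e-10
```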
5 Stochastic Gradient Descent (SGD)
At each iteration $k$, stochastic gradient descent uses the gradient estimate computed over a mini-batch $E_k$ of $b_k$ observations,

$$g^k = \frac{1}{b_k}\sum_{i \in E_k} \nabla L_{W,b}(Y^{(i)}, \hat{Y}^k(X^{(i)})),$$
whose expectation over the training set is the full gradient,

$$\mathbb{E}(g^k) = \frac{1}{N}\sum_{i=1}^{N} \nabla L_{W,b}(Y^{(i)}, \hat{Y}^k(X^{(i)})) = \nabla f(W^k, b^k).$$
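A minimal mini-batch SGD loop makes the notation concrete; the sketch below fits a linear model under a squared loss, which is simpler than the cross-entropy losses above but follows the same update pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 1_000, 5
X = rng.standard_normal((N, p))
w_true = np.arange(1.0, p + 1)
y = X @ w_true + 0.1 * rng.standard_normal(N)

w = np.zeros(p)                                      # parameters to learn
gamma, batch_size = 0.05, 32
for k in range(2_000):
    idx = rng.choice(N, batch_size, replace=False)   # mini-batch E_k
    Xb, yb = X[idx], y[idx]
    g = 2 * Xb.T @ (Xb @ w - yb) / batch_size        # g^k: mini-batch gradient of the MSE
    w -= gamma * g                                   # SGD update
print(np.round(w, 2))                                # close to [1, 2, 3, 4, 5]
```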
5.1 Back-Propagation
Staying with a multi-classifier, we can begin by informally motivating the need for a recursive approach to updating the weights and biases. Let us express $\hat{Y} \in [0, 1]^K$ as a function of the final weight matrix $W \in \mathbb{R}^{K \times M}$ and output bias $b \in \mathbb{R}^K$, so that

$$L(Y, \hat{Y}(X)) = -\sum_{k=1}^{K} Y_k \ln \hat{Y}_k. \tag{4.49}$$
The parameters are estimated by solving

$$(\hat{W}, \hat{b}) = \arg\min_{W,b} \frac{1}{N}\sum_{i=1}^{N} L(y_i, \hat{Y}_{W,b}(x_i)). \tag{4.53}$$
Because of the compositional form of the model, the gradient must be derived
using the chain rule for differentiation. This can be computed by a forward and then
a backward sweep (“back-propagation”) over the network, keeping track only of
quantities local to each neuron.
Forward Pass

$$Z^{(\ell)} = f^{(\ell)}_{W^{(\ell)},b^{(\ell)}}(Z^{(\ell-1)}) = \sigma^{(\ell)}(W^{(\ell)}Z^{(\ell-1)} + b^{(\ell)}). \tag{4.54}$$
Back-Propagation
The back-propagated error and the weight gradient at layer $\ell$ satisfy the recursions

$$\delta^{(\ell)} := \nabla_{b^{(\ell)}} L = \sigma'\big(I^{(\ell)}\big) \circ \left((W^{(\ell+1)})^T \delta^{(\ell+1)}\right), \tag{4.55}$$
$$\nabla_{W^{(\ell)}} L = \delta^{(\ell)} \otimes Z^{(\ell-1)}, \tag{4.56}$$

where $\sigma'(I^{(\ell)})$ is the element-wise derivative of the activation function evaluated at $I^{(\ell)} = W^{(\ell)}Z^{(\ell-1)} + b^{(\ell)}$, $\circ$ is the element-wise (Hadamard) product, and $\otimes$ is the outer product of two vectors. See Appendix "Back-Propagation" for a derivation of Eqs. 4.55 and 4.56.
The weights and biases are updated for all $\ell \in \{1, \dots, L\}$ according to the expression

$$\Delta W^{(\ell)} = -\gamma\,\nabla_{W^{(\ell)}} L, \quad \Delta b^{(\ell)} = -\gamma\,\nabla_{b^{(\ell)}} L,$$

where $\gamma$ is a user defined learning rate parameter. Note the negative sign: this indicates that weight changes are in the direction of decrease in error. Mini-batch or off-line updates involve using many observations of $X$ at the same time. The batch size refers to the number of observations of $X$ used in each pass. An epoch refers to a round-trip (i.e., forward + backward pass) over all training samples.
For the worked example with $L = 3$ layers, the output-layer error and weight gradient are

$$\delta^{(3)} = \hat{Y} - Y, \quad \nabla_{W^{(3)}} L = \delta^{(3)} \otimes Z^{(2)}.$$
Now using Eqs. 4.55 and 4.56, we update the back-propagation error and weight updates for hidden layer 2.
We update the weights and biases using these expressions, so that $b^{(3)} \to b^{(3)} - \gamma\delta^{(3)}$, $W^{(3)} \to W^{(3)} - \gamma\delta^{(3)} \otimes Z^{(2)}$, and repeat for the other weight-bias pairs, $\{(W^{(\ell)}, b^{(\ell)})\}_{\ell=1}^{2}$. See the back-propagation notebook for further details of a worked example in Python and then complete Exercise 4.12.
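A few lines of NumPy illustrate the output-layer step of the worked example; this is only a sketch of a single backward pass for a softmax output layer with illustrative dimensions, not the notebook itself.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy quantities for one observation: 4 hidden units in layer 2, K = 3 classes
rng = np.random.default_rng(0)
Z2 = rng.random(4)                         # output of hidden layer 2 (forward pass)
W3, b3 = rng.random((3, 4)), np.zeros(3)   # output-layer weights and biases
Y = np.array([0.0, 1.0, 0.0])              # one-hot label

Y_hat = softmax(W3 @ Z2 + b3)              # forward pass through the output layer
delta3 = Y_hat - Y                         # back-propagated error at the output layer
grad_W3 = np.outer(delta3, Z2)             # gradient w.r.t. W^(3): delta^(3) outer Z^(2)

gamma = 0.1
b3 -= gamma * delta3                       # b^(3) <- b^(3) - gamma * delta^(3)
W3 -= gamma * grad_W3                      # W^(3) <- W^(3) - gamma * delta^(3) outer Z^(2)
```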
5.2 Momentum
One disadvantage of SGD is that the descent in $f$ is not guaranteed, or can be very slow, at every iteration. Furthermore, the variance of the gradient estimate $g^k$ does not vanish as the iterates converge to a solution. To address those problems, coordinate descent (CD) and momentum-based modifications of SGD are used. Each CD step evaluates a single component $E_k$ of the gradient $\nabla f$ at the current point and then updates the $E_k$th component of the variable vector in the negative gradient direction. The momentum-based versions of SGD, or the so-called accelerated algorithms, were originally proposed by Nesterov (2013).
The use of momentum in the choice of step in the search direction combines new gradient information with the previous search direction. These methods are also related to other classical techniques such as the heavy-ball method and conjugate gradient methods. Empirically, momentum-based methods show far better convergence for deep learning networks. The key idea is that the gradient only influences changes in the "velocity" of the update:

$$v^{k+1} = \mu v^k - t_k g^k,$$
$$(W, b)^{k+1} = (W, b)^k + v^{k+1}.$$

The parameter $\mu$ controls the damping effect on the rate of update of the variables. The physical analogy is the reduction in kinetic energy that allows "slowing down" of the movements at the minima. This parameter is also chosen empirically using cross-validation.
Nesterov's momentum method (a.k.a. Nesterov acceleration) instead calculates the gradient at the point predicted by the momentum step. We can think of it as a look-ahead strategy. The resulting update equations are

$$v^{k+1} = \mu v^k - t_k \nabla f\big((W, b)^k + \mu v^k\big),$$
$$(W, b)^{k+1} = (W, b)^k + v^{k+1}.$$
Another popular modification to the SGD method is the AdaGrad method, which adaptively scales each of the learning parameters at each iteration by the accumulated squared gradients,

$$(W, b)^{k+1} = (W, b)^k - \frac{\gamma}{\sqrt{G^k} + a} \circ g^k, \qquad G^k = \sum_{j \leq k} g^j \circ g^j,$$

where $a$ is usually a small number, e.g. $a = 10^{-6}$, that prevents dividing by zero. RMSprop takes the AdaGrad idea further and places more weight on recent values of the gradient squared to scale the update direction, i.e. we replace the sum by an exponentially weighted moving average, $G^k = \rho G^{k-1} + (1 - \rho)\,g^k \circ g^k$ for a decay parameter $\rho \in (0, 1)$. The Adam method combines both RMSprop and momentum methods and leads to update equations which apply the RMSprop scaling to an exponentially smoothed gradient; a sketch of these update rules is given below.
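The sketch below writes out the momentum, AdaGrad, and RMSprop updates side by side on a toy quadratic objective; Adam combines the momentum term with the RMSprop scaling. The hyperparameter values are conventional illustrative choices rather than the book's.

```python
import numpy as np

grad = lambda w: 2 * w                  # gradient of f(w) = ||w||^2
w0 = np.array([5.0, -3.0])
gamma, mu, rho, a = 0.1, 0.9, 0.9, 1e-6

w_m, v = w0.copy(), np.zeros(2)         # SGD with momentum
w_a, G_a = w0.copy(), np.zeros(2)       # AdaGrad accumulator
w_r, G_r = w0.copy(), np.zeros(2)       # RMSprop accumulator

for _ in range(100):
    v = mu * v - gamma * grad(w_m); w_m += v                    # momentum update
    G_a += grad(w_a) ** 2                                       # AdaGrad: accumulate squared gradients
    w_a -= gamma * grad(w_a) / (np.sqrt(G_a) + a)
    G_r = rho * G_r + (1 - rho) * grad(w_r) ** 2                # RMSprop: exponentially weighted average
    w_r -= gamma * grad(w_r) / (np.sqrt(G_r) + a)

print(np.linalg.norm(w_m), np.linalg.norm(w_a), np.linalg.norm(w_r))
```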
dataset dimensions for the deep learner. Polson et al. (2015) consider a proximal
Newton method, a Bayesian optimization technique which provides an efficient
solution for estimation and optimization of such models and for calculating a
regularization path. The authors present a splitting approach, alternating direction
method of multipliers (ADMM), which overcomes the inherent bottlenecks in back-
propagation by providing a simultaneous block update of parameters at all layers.
ADMM facilitates the use of large-scale computing.
A significant factor in the widespread adoption of deep learning has been the
creation of TensorFlow (Abadi et al. 2016), an interface for easily expressing
machine learning algorithms and mapping compute intensive operations onto a
wide variety of different hardware platforms and in particular GPU cards. Recently,
TensorFlow has been augmented by Edward (Tran et al. 2017) to combine
concepts in Bayesian statistics and probabilistic programming with deep learning.
We close this section by briefly mentioning one final technique which has proved
indispensable in preventing neural networks from over-fitting. Dropout is a compu-
tationally efficient technique to reduce model variance by considering many model
configurations and then averaging the predictions. The layer input space Z =
(Z1 , . . . , Zn ), where n is large, needs dimension reduction techniques which are
designed to avoid over-fitting in the training process. Dropout works by randomly retaining each layer input with a given probability $\theta$ (i.e., removing it with probability $1 - \theta$). The probability $\theta$ can be viewed as a further hyperparameter (like $\lambda$) which can be tuned via cross-validation. Heuristically, if there are 1000 variables, then a choice of $\theta = 0.1$ will result in a search for models with 100 variables. The dropout architecture with stochastic search for the predictors can be written as
$$d_i^{(\ell)} \sim \mathrm{Ber}(\theta),$$
$$\tilde{Z}^{(\ell)} = d^{(\ell)} \circ Z^{(\ell)}, \quad 1 \leq \ell < L,$$
$$Z^{(\ell)} = \sigma^{(\ell)}(W^{(\ell)}\tilde{Z}^{(\ell-1)} + b^{(\ell)}).$$
Effectively, this replaces the layer input Z by d ◦ Z, where ◦ denotes the element-
wise product and d is a vector of independent Bernoulli, Ber(θ ), distributed random
variables. The overall objective function is closely related to ridge regression with a
g-prior (Heaton et al. 2017).
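A minimal dropout mask is shown below. The sketch uses the "inverted" convention common in modern implementations, rescaling the retained inputs by the keep probability $\theta$ so that no adjustment is needed at test time.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(Z, theta=0.9, train=True):
    """Inverted dropout: retain each input with probability theta and rescale
    the survivors so that the expected output equals the input."""
    if not train:
        return Z
    mask = rng.binomial(1, theta, size=Z.shape)
    return Z * mask / theta

Z = np.ones(10)
print(dropout(Z, theta=0.8))   # roughly 8 of the 10 entries survive, scaled by 1/0.8
```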
6 Bayesian Neural Networks
Bayesian deep learning (Neal 1990; Saul et al. 1996; Frey and Hinton 1999;
Lawrence 2005; Adams et al. 2010; Mnih and Gregor 2014; Kingma and Welling
2013; Rezende et al. 2014) provides a powerful and general framework for statistical
modeling. Such a framework allows for a completely new approach to data modeling
and solves a number of problems that conventional models cannot address: (i) DLs
(deep learners) permit complex dependencies between variables to be explicitly
represented which are difficult, if not impossible, to model with copulas; (ii) they
capture correlations between variables in high-dimensional datasets; and (iii) they
characterize the degree of uncertainty in predicting large-scale effects from large
datasets relevant for quantifying uncertainty.
Uncertainty refers to the statistically unmeasurable situation of Knightian uncer-
tainty, where the event space is known but the probabilities are not (Chen et al.
2017). Oftentimes, a forecast may be shrouded in uncertainty arising from noisy data
or model uncertainty, either through incorrect modeling assumptions or parameter
error. It is desirable to characterize this uncertainty in the forecast. In conventional
Bayesian modeling, uncertainty is used to learn from small amounts of low-
dimensional data under parametric assumptions on the prior. The choice of the prior
is typically the point of contention and chosen for solution tractability rather than
modeling fidelity. Recently, deterministic deep learners have been shown to scale
well to large, high-dimensional, datasets. However, the probability vector obtained
from the network is often erroneously interpreted as model confidence (Gal 2016).
A typical approach to model uncertainty in neural network models is to assume
that model parameters (weights and biases) are random variables (as illustrated in
Fig. 4.16). The ANN model then approaches a Gaussian process as the number of weights goes to infinity (Neal 2012; Williams 1997). In the case of a finite number of weights, a network with random parameters is called a Bayesian neural network (MacKay 1992b). Recent advances in "variational inference" techniques and software that
represent mathematical models as a computational graph (Blundell et al. 2015a)
enable probabilistic deep learning models to be built, without having to worry
about how to perform testing (forward propagation) or inference (gradient-based optimization, with back-propagation and automatic differentiation). Variational inference is an approximate technique which allows multi-modal likelihood functions to be extremized with a standard stochastic gradient descent algorithm. An
alternative to variational and MCMC algorithms was recently proposed by Gal
(2016) and builds on efficient dropout regularization technique.
All of the current techniques rely on approximating the true posterior over
the model parameters p(w | X, Y ) by another distribution qθ (w) which can be
evaluated in a computationally tractable way. Such a distribution is chosen to be as
close as possible to the true posterior and is found by minimizing the Kullback–
Leibler (KL) divergence
Fig. 4.16 Bayesian classification of the half-moon problem with neural networks. (top) The posterior mean and (bottom) the posterior std. dev. (uncertainty)
$$\theta^* \in \arg\min_{\theta} \int q_{\theta}(w)\log\frac{q_{\theta}(w)}{p(w \mid X, Y)}\,dw.$$
The sum does not depend on $\phi$; thus minimizing $KL(q \,\|\, p)$ is the same as maximizing $\mathrm{ELBO}(q)$. Also, since $KL(q \,\|\, p) \geq 0$, which follows from Jensen's inequality, we have $\log p(D) \geq \mathrm{ELBO}(\phi)$; hence the name evidence lower bound. The resulting maximization problem $\mathrm{ELBO}(\phi) \to \max_{\phi}$ is solved using stochastic gradient descent.
To calculate the gradient, it is convenient to write the ELBO as

$$\mathrm{ELBO}(\phi) = \int q(\theta \mid D, \phi)\log p(Y \mid X, \theta)\,d\theta - \int q(\theta \mid D, \phi)\log\frac{q(\theta \mid D, \phi)}{p(\theta)}\,d\theta.$$
The gradient of the first term, $\nabla_{\phi}\int q(\theta \mid D, \phi)\log p(Y \mid X, \theta)\,d\theta = \nabla_{\phi}\mathbb{E}_q\log p(Y \mid X, \theta)$, is not an expectation and thus cannot be calculated using Monte Carlo methods. The idea is to represent the gradient $\nabla_{\phi}\mathbb{E}_q\log p(Y \mid X, \theta)$ as an expectation of some random variable, so that Monte Carlo techniques can be used to calculate it. There are two standard methods to do it. First, the log-derivative trick uses the identity $\nabla_x f(x) = f(x)\nabla_x\log f(x)$ to obtain $\nabla_{\phi}\mathbb{E}_q\log p(Y \mid \theta)$. Thus, if we select $q(\theta \mid \phi)$ so that it is easy to compute its derivative and generate samples from it, the gradient can be efficiently calculated using Monte Carlo techniques. Second, we can use the reparameterization trick by representing $\theta$ as a value of a deterministic function, $\theta = g(\epsilon, x, \phi)$, where $\epsilon \sim r(\epsilon)$ does not depend on $\phi$. The derivative is given by

$$\nabla_{\phi}\mathbb{E}_q\log p(Y \mid X, \theta) = \int r(\epsilon)\,\nabla_{\phi}\log p(Y \mid X, g(\epsilon, x, \phi))\,d\epsilon.$$
The reparameterization is trivial when $q(\theta \mid D, \phi) = N(\theta \mid \mu(D, \phi), \Sigma(D, \phi))$ and $\theta = \mu(D, \phi) + \Sigma^{1/2}(D, \phi)\,\epsilon$, $\epsilon \sim N(0, I)$. Kingma and Welling (2013) propose using $\Sigma(D, \phi) = \sigma(D, \phi)I$ and representing $\mu(D, \phi)$ and $\sigma(D, \phi)$ as outputs of a neural network (multilayer perceptron); the resulting approach was called a variational autoencoder. A generalized reparameterization has been proposed by Ruiz et al. (2016) and combines both log-derivative and reparameterization techniques by assuming that $\epsilon$ can depend on $\phi$.
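The reparameterization trick is easy to demonstrate for a Gaussian variational family; the sketch below estimates the gradient of $\mathbb{E}_q[f(\theta)]$ with respect to the variational mean by sampling $\epsilon \sim N(0, 1)$ and differentiating through $\theta = \mu + \sigma\epsilon$, using a toy quadratic in place of the log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: f(theta) = -(theta - 3)^2, so d/dmu E_q[f(theta)] = -2*(mu - 3) for q = N(mu, sigma^2)
f = lambda theta: -(theta - 3.0) ** 2
df = lambda theta: -2.0 * (theta - 3.0)

mu, sigma = 0.0, 1.0
eps = rng.standard_normal(100_000)   # epsilon ~ N(0, 1), independent of (mu, sigma)
theta = mu + sigma * eps             # reparameterization: theta = g(eps, phi)

# Monte Carlo gradient w.r.t. mu: E[f'(g(eps, phi)) * d theta/d mu], with d theta/d mu = 1
grad_mu = np.mean(df(theta))
print(grad_mu, -2 * (mu - 3.0))      # both close to 6
```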
7 Summary
8 Exercises
Exercise 4.1
Show that substituting

$$\nabla_{ij} I_k = \begin{cases} X_j, & i = k, \\ 0, & i \neq k, \end{cases}$$

into the derivative of the softmax function gives

$$\nabla_{ij}\sigma_k \equiv \frac{\partial \sigma_k}{\partial w_{ij}} = \nabla_i\sigma_k\, X_j = \sigma_k(\delta_{ki} - \sigma_i)X_j.$$
Exercise 4.2
Show that substituting the derivative of the softmax function w.r.t. $w_{ij}$ into Eq. 4.52 gives, for the special case when the output is $Y_k = 1$, $k = i$, and $Y_k = 0$, $\forall k \neq i$:

$$\nabla_{ij}L(W, b) := [\nabla_W L(W, b)]_{ij} = \begin{cases} (\sigma_i - 1)X_j, & Y_i = 1, \\ 0, & Y_k = 0,\ \forall k \neq i. \end{cases}$$
Exercise 4.3
Consider feedforward neural networks constructed using the following two types of activation functions:
– Identity: $Id(x) := x$
– Heaviside: $H(x) := \mathbb{1}_{x \geq 0}$
a. Construct a neural network with one input $x$ and one hidden layer, whose response is $u(x; a)$. Draw the structure of the neural network, specify the activation function for each unit (either $Id$ or $H$), and specify the values for all weights (in terms of $a$ and $y$).
b. Now consider the indicator function

$$\mathbb{1}_{[a,b)}(x) = \begin{cases} 1, & \text{if } x \in [a, b), \\ 0, & \text{otherwise.} \end{cases}$$

Construct a neural network with one input $x$ and one hidden layer, whose response is $y\,\mathbb{1}_{[a,b)}(x)$, for given real values $y$, $a$ and $b$. Draw the structure of the neural network, specify the activation function for each unit (either $Id$ or $H$), and specify the values for all weights (in terms of $a$, $b$ and $y$).
Exercise 4.4
A neural network with a single hidden layer can provide an arbitrarily close approximation to any 1-dimensional bounded smooth function. This question will guide you through the proof. Let $f(x)$ be any function whose domain is $[C, D)$, for real values $C < D$. Suppose that the function is Lipschitz continuous, that is,

$$|f(x) - f(y)| \leq L|x - y|, \quad \forall x, y \in [C, D),$$

for some constant $L \geq 0$. Use the building blocks constructed in the previous part to construct a neural network with one hidden layer that approximates this function within $\epsilon > 0$, that is, $\forall x \in [C, D)$, $|f(x) - \hat{f}(x)| \leq \epsilon$, where $\hat{f}(x)$ is the output of your neural network given input $x$. Your network should use only the identity or the Heaviside activation functions. You need to specify the number $K$ of hidden units, the activation function for each unit, and a formula for calculating each weight $w_0$, $w_k$, $w_0^{(k)}$, and $w_1^{(k)}$, for each $k \in \{1, \dots, K\}$. These weights may be specified in terms of $C$, $D$, $L$, and $\epsilon$, as well as the values of $f(x)$ evaluated at a finite number of $x$ values of your choosing (you need to explicitly specify which $x$ values you use). You do not need to explicitly write the $\hat{f}(x)$ function. Why does your network attain the given accuracy $\epsilon$?
Exercise 4.5
Consider a shallow neural network regression model with $n$ tanh activated units in the hidden layer and $d$ outputs. The hidden-outer weight matrix has entries $W^{(2)}_{ij} = \frac{1}{n}$ and the input-hidden weight matrix is $W^{(1)} = \mathbf{1}$. The biases are zero. If the features $X_1, \dots, X_p$ are i.i.d. Gaussian random variables with mean $\mu = 0$ and variance $\sigma^2$, show that
a. $\hat{Y} \in [-1, 1]$.
b. $\hat{Y}$ is independent of the number of hidden units, $n \geq 1$.
c. The expectation $\mathbb{E}[\hat{Y}] = 0$, and the variance $\mathbb{V}[\hat{Y}] \leq 1$.
Exercise 4.6
Determine the VC dimension of the sum of indicator functions over $\Omega = [0, 1]$:

$$\mathcal{F}_k(x) = \left\{f : \Omega \to \{0, 1\},\ f(x) = \sum_{i=0}^{k}\mathbb{1}_{x \in [t_{2i}, t_{2i+1})},\ 0 \leq t_0 < \cdots < t_{2k+1} \leq 1,\ k \geq 1\right\}.$$
Exercise 4.7
Show that a feedforward binary classifier with two Heaviside activated units shatters
the data {0.25, 0.5, 0.75}.
Exercise 4.8
Compute the weight and bias updates of W (2) and b(2) given a shallow binary
classifier (with one hidden layer) with unit weights, zero biases, and ReLU
activation of two hidden units for the labeled observation (x = 1, y = 1).
Exercise 4.9
Consider the following dataset (taken from Anscombe’s quartet):
a. Use a neural network library of your choice to show that a feedforward network
with one hidden layer consisting of one unit and a feedforward network with
no hidden layers, each using only linear activation functions, do not outperform
linear regression based on ordinary least squares (OLS).
b. Also demonstrate that a neural network with a hidden layer of three neurons
using the tanh activation function and an output layer using the linear activation
function captures the non-linearity and outperforms the linear regression.
Exercise 4.10
Review the Python notebook deep_classifiers.ipynb. This notebook uses
Keras to build three simple feedforward networks applied to the half-moon problem:
a logistic regression (with no hidden layer); a feedforward network with one
hidden layer; and a feedforward architecture with two hidden layers. The half-
moons problem is not linearly separable in the original coordinates. However you
will observe—after plotting the fitted weights and biases—that a network with
many hidden neurons gives a linearly separable representation of the classification
problem in the coordinates of the output from the final hidden layer.
Complete the following questions in your own words.
a. Did we need more than one hidden layer to perfectly classify the half-moons
dataset? If not, why might multiple hidden layers be useful for other datasets?
b. Why not use a very large number of neurons since it is clear that the classification
accuracy improves with more degrees of freedom?
c. Repeat the plotting of the hyperplane, in Part 1b of the notebook, only without the
ReLU function (i.e., activation=“linear”). Describe qualitatively how the decision
surface changes with increasing neurons. Why is a (non-linear) activation
function needed? The use of figures to support your answer is expected.
Exercise 4.11
Using the EarlyStopping callback in Keras, modify the notebook
Deep_Classifiers.ipynb to terminate training under the following stopping
criterion |L(k+1) − L(k) | ≤ δ with δ = 0.1.
Exercise 4.12***
Consider a feedforward neural network with three inputs, two units in the first
hidden layer, two units in the second hidden layer, and three units in the output layer.
The activation function for hidden layer 1 is ReLU, for hidden layer 2 is sigmoid,
and for the output layer is softmax.
The initial weights are given by the matrices
$$W^{(1)} = \begin{pmatrix} 0.1 & 0.3 & 0.7 \\ 0.9 & 0.4 & 0.4 \end{pmatrix}, \quad W^{(2)} = \begin{pmatrix} 0.4 & 0.3 \\ 0.7 & 0.2 \end{pmatrix}, \quad W^{(3)} = \begin{pmatrix} 0.5 & 0.6 \\ 0.6 & 0.7 \\ 0.3 & 0.2 \end{pmatrix},$$
Appendix
Question 1
Answer: 1, 2, 3, 4. All answers are found in the text.
Question 2
Answer: 1,2. A feedforward architecture is always convex w.r.t. each input variable
if every activation function is convex and the weights are constrained to be either all
positive or all negative. Simply using convex activation functions is not sufficient,
since the composition of a convex function and the affine transformation of a convex
function does not preserve convexity. For example, if $\sigma(x) = x^2$, $w = -1$, and $b = 1$, then $\sigma(w\sigma(x) + b) = (-x^2 + 1)^2$ is not convex in $x$.
A feedforward architecture with positive weights is a monotonically increasing
function of the input for any choice of monotonically increasing activation function.
The weights of a feedforward architecture need not be constrained for the output
of a feedforward network to be bounded. For example, activating the output with a
softmax function will bound the output. Only if the output is not activated, should
the weights and bias in the final layer be bounded to ensure bounded output.
The bias terms in a network shift the output but also affect the derivatives of the output w.r.t. the input when the layer is activated.
Question 3
Answer: 1,2,3,4. The training of a neural network involves minimizing a loss
function w.r.t. the weights and biases over the training data. L1 regularization is
used during model selection to penalize models with too many parameters. The
loss function is augmented with a Lagrange penalty for the number of weights. In
deep learning, regularization can be applied to each layer of the network. Therefore
each layer has an associated regularization parameter. Back-propagation uses the
chain rule to update the weights of the network but is not guaranteed to converge to a unique minimum. This is because the loss function is not convex w.r.t. the weights. Stochastic gradient descent is a type of optimization method which is implemented with back-propagation. There are variants of SGD, however, such as adding Nesterov's momentum term, Adam, or RMSprop.
Back-Propagation
The cross-entropy loss is

$$L := -\sum_{k=1}^{K} Y_k \log \hat{Y}_k,$$

with input $Z^{(0)} = X$. The weight and bias updates are

$$\Delta W^{(\ell)} = -\gamma\,\nabla_{W^{(\ell)}} L, \quad \Delta b^{(\ell)} = -\gamma\,\nabla_{b^{(\ell)}} L.$$
We have

$$\frac{\partial L}{\partial w_{ij}^{(L)}} = \sum_{k=1}^{K}\frac{\partial L}{\partial Z_k^{(L)}}\frac{\partial Z_k^{(L)}}{\partial w_{ij}^{(L)}} = \sum_{k=1}^{K}\frac{\partial L}{\partial Z_k^{(L)}}\sum_{m=1}^{K}\frac{\partial Z_k^{(L)}}{\partial I_m^{(L)}}\frac{\partial I_m^{(L)}}{\partial w_{ij}^{(L)}}.$$

But

$$\frac{\partial L}{\partial Z_k^{(L)}} = -\frac{Y_k}{Z_k^{(L)}},$$

$$\frac{\partial Z_k^{(L)}}{\partial I_m^{(L)}} = \frac{\partial}{\partial I_m^{(L)}}\left[\sigma(I^{(L)})\right]_k = \frac{\partial}{\partial I_m^{(L)}}\frac{\exp[I_k^{(L)}]}{\sum_{n=1}^{K}\exp[I_n^{(L)}]}
= \begin{cases} -\dfrac{\exp[I_k^{(L)}]}{\sum_{n=1}^{K}\exp[I_n^{(L)}]}\,\dfrac{\exp[I_m^{(L)}]}{\sum_{n=1}^{K}\exp[I_n^{(L)}]}, & k \neq m, \\[10pt] \dfrac{\exp[I_k^{(L)}]}{\sum_{n=1}^{K}\exp[I_n^{(L)}]}\left(1 - \dfrac{\exp[I_m^{(L)}]}{\sum_{n=1}^{K}\exp[I_n^{(L)}]}\right), & k = m, \end{cases}$$
$$= \begin{cases} -\sigma_k\sigma_m, & k \neq m, \\ \sigma_k(1 - \sigma_m), & k = m, \end{cases} \quad = \sigma_k(\delta_{km} - \sigma_m), \quad \text{where } \delta_{km} \text{ is the Kronecker delta},$$

$$\frac{\partial I_m^{(L)}}{\partial w_{ij}^{(L)}} = \delta_{mi}\,Z_j^{(L-1)}.$$

It follows that

$$\frac{\partial L}{\partial w_{ij}^{(L)}} = -\sum_{k=1}^{K}\frac{Y_k}{Z_k^{(L)}}\sum_{m=1}^{K}Z_k^{(L)}(\delta_{km} - Z_m^{(L)})\,\delta_{mi}\,Z_j^{(L-1)} = -Z_j^{(L-1)}\sum_{k=1}^{K}Y_k(\delta_{ki} - Z_i^{(L)}) = Z_j^{(L-1)}(Z_i^{(L)} - Y_i),$$
K
where we have used the fact that k=1 Yk = 1 in the last equality. Similarly for
b(L) , we have
∂L
K
∂L ∂Zk ∂Im
K (L) (L)
(L)
= (L) (L) (L)
∂bi k=1 ∂Zk m=1 ∂Im ∂bi
= Zi(L) − Yi
It follows that
∇b(L) L = Z (L) − Y
∇W (L) L = ∇b(L) L ⊗ Z (L−1) ,
\frac{\partial L}{\partial w_{ij}^{(L-1)}} = \sum_{k=1}^{K} \frac{\partial L}{\partial Z_k^{(L)}} \frac{\partial Z_k^{(L)}}{\partial w_{ij}^{(L-1)}}
= \sum_{k=1}^{K} \sum_{m=1}^{K} \sum_{n=1}^{n^{(L-1)}} \sum_{p=1}^{n^{(L-1)}} \frac{\partial L}{\partial Z_k^{(L)}} \frac{\partial Z_k^{(L)}}{\partial I_m^{(L)}} \frac{\partial I_m^{(L)}}{\partial Z_n^{(L-1)}} \frac{\partial Z_n^{(L-1)}}{\partial I_p^{(L-1)}} \frac{\partial I_p^{(L-1)}}{\partial w_{ij}^{(L-1)}},

where

\frac{\partial I_m^{(L)}}{\partial Z_n^{(L-1)}} = w_{mn}^{(L)},

\frac{\partial Z_n^{(L-1)}}{\partial I_p^{(L-1)}} = \frac{\partial}{\partial I_p^{(L-1)}} \left[\frac{1}{1 + \exp(-I_n^{(L-1)})}\right]
= \frac{1}{1 + \exp(-I_n^{(L-1)})} \frac{\exp(-I_n^{(L-1)})}{1 + \exp(-I_n^{(L-1)})} \delta_{np}
= Z_n^{(L-1)}(1 - Z_n^{(L-1)}) \delta_{np} = \sigma_n^{(L-1)}(1 - \sigma_n^{(L-1)}) \delta_{np},

\frac{\partial I_p^{(L-1)}}{\partial w_{ij}^{(L-1)}} = \delta_{pi} Z_j^{(L-2)}.

\Rightarrow \frac{\partial L}{\partial w_{ij}^{(L-1)}} = -\sum_{k=1}^{K} \frac{Y_k}{Z_k^{(L)}} \sum_{m=1}^{K} Z_k^{(L)}(\delta_{km} - Z_m^{(L)})
\sum_{n=1}^{n^{(L-1)}} \sum_{p=1}^{n^{(L-1)}} w_{mn}^{(L)} Z_n^{(L-1)}(1 - Z_n^{(L-1)}) \delta_{np} \delta_{pi} Z_j^{(L-2)}
= -\sum_{k=1}^{K} \sum_{m=1}^{K} Y_k(\delta_{km} - Z_m^{(L)}) \sum_{n=1}^{n^{(L-1)}} w_{mn}^{(L)} Z_n^{(L-1)}(1 - Z_n^{(L-1)}) \delta_{ni} Z_j^{(L-2)}
= -\sum_{k=1}^{K} \sum_{m=1}^{K} Y_k(\delta_{km} - Z_m^{(L)}) w_{mi}^{(L)} Z_i^{(L-1)}(1 - Z_i^{(L-1)}) Z_j^{(L-2)}.

Similarly we have the corresponding expression for the bias gradient \nabla_{b^{(L-1)}} L.
It follows that we can define the following recursion relation for the loss gradient:

\nabla_{b^{(L-1)}} L = Z^{(L-1)} \circ (1 - Z^{(L-1)}) \circ \left((W^{(L)})^{T} \nabla_{b^{(L)}} L\right)
\nabla_{W^{(L-1)}} L = \nabla_{b^{(L-1)}} L \otimes Z^{(L-2)}
= Z^{(L-1)} \circ (1 - Z^{(L-1)}) \circ \left((W^{(L)})^{T} \nabla_{W^{(L)}} L\right).

Defining the Jacobian of the activation at layer \ell,

\left[\frac{\partial \sigma^{(\ell)}}{\partial I^{(\ell)}}\right]_{ij} = \frac{\partial \sigma_i^{(\ell)}}{\partial I_j^{(\ell)}},

we can write the analogous recursion, in general, for any choice of activation
function for the hidden layers.
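To make the recursions concrete, the following is a minimal NumPy sketch of one forward and backward pass for a network with a sigmoid hidden layer and a softmax output layer; the layer sizes, data, and learning rate are arbitrary assumptions for illustration.

import numpy as np

def softmax(I):
    e = np.exp(I - I.max())
    return e / e.sum()

def sigmoid(I):
    return 1.0 / (1.0 + np.exp(-I))

# Arbitrary sizes: 3 inputs, 4 hidden units, K = 2 outputs with a one-hot target Y.
rng = np.random.default_rng(0)
X = rng.normal(size=3)
Y = np.array([1.0, 0.0])
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)
gamma = 0.1  # learning rate

# Forward pass: Z^(0) = X, Z^(1) = sigmoid(I^(1)), Z^(2) = softmax(I^(2)).
Z0 = X
I1 = W1 @ Z0 + b1; Z1 = sigmoid(I1)
I2 = W2 @ Z1 + b2; Z2 = softmax(I2)

# Backward pass, using the recursions derived above:
# grad_b(L) = Z^(L) - Y, grad_W(L) = grad_b(L) (outer) Z^(L-1), and
# grad_b(L-1) = Z^(L-1) o (1 - Z^(L-1)) o (W^(L)^T grad_b(L)).
grad_b2 = Z2 - Y
grad_W2 = np.outer(grad_b2, Z1)
grad_b1 = Z1 * (1 - Z1) * (W2.T @ grad_b2)
grad_W1 = np.outer(grad_b1, Z0)

# Gradient descent updates: Delta W^(l) = -gamma * gradient, Delta b^(l) = -gamma * gradient.
W2 -= gamma * grad_W2; b2 -= gamma * grad_b2
W1 -= gamma * grad_W1; b1 -= gamma * grad_b1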
Using the same deep structure shown in Fig. 4.9, Liang and Srikant (2016) find the
binary expansion sequence \{x_0, \ldots, x_n\}. In this step, they use n binary step units
in total. Then they rewrite g_{m+1}\left(\sum_{i=0}^{n} \frac{x_i}{2^i}\right):

g_{m+1}\left(\sum_{i=0}^{n} \frac{x_i}{2^i}\right) = \sum_{j=0}^{n} x_j \cdot \frac{1}{2^j} g_m\left(\sum_{i=0}^{n} \frac{x_i}{2^i}\right)
= \sum_{j=0}^{n} \max\left(2(x_j - 1) + \frac{1}{2^j} g_m\left(\sum_{i=0}^{n} \frac{x_i}{2^i}\right), 0\right). \quad (4.57)

Clearly Eq. 4.57 defines iterations between the outputs of neighboring layers. Defining
the output of the multilayer neural network as \hat{f}(x) = \sum_{i=0}^{p} a_i g_i\left(\sum_{j=0}^{n} \frac{x_j}{2^j}\right),
the approximation error of this multilayer network is

|f(x) - \hat{f}(x)| = \left|\sum_{i=0}^{p} a_i g_i\left(\sum_{j=0}^{n} \frac{x_j}{2^j}\right) - \sum_{i=0}^{p} a_i x^i\right|
\leq \sum_{i=0}^{p} \left[|a_i| \cdot \left|g_i\left(\sum_{j=0}^{n} \frac{x_j}{2^j}\right) - x^i\right|\right] \leq \frac{p}{2^{n-1}}.

This indicates that, to achieve \epsilon-approximation error, one should choose n = \lceil \log \frac{p}{\epsilon} \rceil + 1.
Besides, since O(n + p) layers with O(n) binary step units and O(pn) ReLU units are used
in total, this multilayer neural network thus has O\left(p + \log \frac{p}{\epsilon}\right) layers,
O\left(\log \frac{p}{\epsilon}\right) binary step units, and O\left(p \log \frac{p}{\epsilon}\right) ReLU units.
Proof (Proof of 4.1) Let \mathcal{I}_f denote the partition of \mathbb{R} corresponding to f, and \mathcal{I}_g
denote the partition of \mathbb{R} corresponding to g.
First consider f + g, and moreover any intervals U_f \in \mathcal{I}_f and U_g \in \mathcal{I}_g.
Necessarily, f + g has a single slope along U_f \cap U_g. Consequently, f + g is
|\mathcal{I}|-sawtooth, where \mathcal{I} is the set of all intersections of intervals from \mathcal{I}_f and \mathcal{I}_g,
meaning \mathcal{I} := \{U_f \cap U_g : U_f \in \mathcal{I}_f, U_g \in \mathcal{I}_g\}. By sorting the left endpoints
of elements of \mathcal{I}_f and \mathcal{I}_g, it follows that |\mathcal{I}| \leq k + l (the other intersections are
empty).
For example, consider the example in Fig. 4.11 with partitions given in Table 4.2.
The set of all intersections of intervals from \mathcal{I}_f and \mathcal{I}_g contains 3 elements:

\mathcal{I} = \{[0, \tfrac{1}{4}] \cap [0, \tfrac{1}{2}],\ (\tfrac{1}{4}, 1] \cap [0, \tfrac{1}{2}],\ (\tfrac{1}{4}, 1] \cap (\tfrac{1}{2}, 1]\} \quad (4.58)

Now consider f \circ g, and in particular consider the image f(g(U_g)) for some
interval U_g \in \mathcal{I}_g. g is affine with a single slope along U_g; therefore, f is being
considered along a single unbroken interval g(U_g). However, nothing prevents
g(U_g) from hitting all the elements of \mathcal{I}_f; since U_g was arbitrary, it holds that
f \circ g is (|\mathcal{I}_f| \cdot |\mathcal{I}_g|)-sawtooth.
Proof Recall the notation \tilde{f}(x) := 1[f(x) \geq 1/2], whereby E(f) := \frac{1}{n}\sum_i 1[y_i \neq \tilde{f}(x_i)].
Python Notebooks
The notebooks provided in the accompanying source code repository are designed
to provide insight using toy classification datasets. They provide examples of deep
feedforward classification, back-propagation, and Bayesian network classifiers.
Further details of the notebooks are included in the README.md file.
References
Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., et al. (2016). TensorFlow: A system
for large-scale machine learning. In Proceedings of the 12th USENIX Conference on Operating
Systems Design and Implementation, OSDI'16 (pp. 265–283).
Adams, R., Wallach, H., & Ghahramani, Z. (2010). Learning the structure of deep sparse graphical
models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence
and Statistics (pp. 1–8).
Andrews, D. (1989). A unified theory of estimation and inference for nonlinear dynamic models
a.r. gallant and h. white. Econometric Theory, 5(01), 166–171.
Baillie, R. T., & Kapetanios, G. (2007). Testing for neglected nonlinearity in long-memory models.
Journal of Business & Economic Statistics, 25(4), 447–461.
Barber, D., & Bishop, C. M. (1998). Ensemble learning in Bayesian neural networks. Neural
Networks and Machine Learning, 168, 215–238.
Bartlett, P., Harvey, N., Liaw, C., & Mehrabian, A. (2017a). Nearly-tight VC-dimension bounds
for piecewise linear neural networks. CoRR, abs/1703.02930.
Bartlett, P., Harvey, N., Liaw, C., & Mehrabian, A. (2017b). Nearly-tight VC-dimension bounds
for piecewise linear neural networks. CoRR, abs/1703.02930.
Bengio, Y., Roux, N. L., Vincent, P., Delalleau, O., & Marcotte, P. (2006). Convex neural networks.
In Y. Weiss, Schölkopf, B., & Platt, J. C. (Eds.), Advances in neural information processing
systems 18 (pp. 123–130). MIT Press.
Bishop, C. M. (2006). Pattern recognition and machine learning (information science and
statistics). Berlin, Heidelberg: Springer-Verlag.
Blundell, C., Cornebise, J., Kavukcuoglu, K., & Wierstra, D. (2015a, May). Weight uncertainty in
neural networks. arXiv:1505.05424 [cs, stat].
Blundell, C., Cornebise, J., Kavukcuoglu, K., & Wierstra, D. (2015b). Weight uncertainty in neural
networks. arXiv preprint arXiv:1505.05424.
Chataigner, M., Crépey, S., & Dixon, M. F. (2020). Deep local volatility.
Chen, J., Flood, M. D., & Sowers, R. B. (2017). Measuring the unmeasurable: an application of
uncertainty quantification to treasury bond portfolios. Quantitative Finance, 17(10), 1491–
1507.
Dean, J., Corrado, G., Monga, R., Chen, K., Devin, M., Mao, M., et al. (2012). Large scale
distributed deep networks. In Advances in neural information processing systems (pp. 1223–
1231).
Dixon, M., Klabjan, D., & Bang, J. H. (2016). Classification-based financial markets prediction
using deep neural networks. CoRR, abs/1603.08604.
Feng, G., He, J., & Polson, N. G. (2018, Apr). Deep learning for predicting asset returns. arXiv
e-prints, arXiv:1804.09314.
Frey, B. J., & Hinton, G. E. (1999). Variational learning in nonlinear Gaussian belief networks.
Neural Computation, 11(1), 193–213.
Gal, Y. (2015). A theoretically grounded application of dropout in recurrent neural networks.
arXiv:1512.05287.
Gal, Y. (2016). Uncertainty in deep learning. Ph.D. thesis, University of Cambridge.
Gal, Y., & Ghahramani, Z. (2016). Dropout as a Bayesian approximation: Representing model
uncertainty in deep learning. In international Conference on Machine Learning (pp. 1050–
1059).
Gallant, A., & White, H. (1988, July). There exists a neural network that does not make avoidable
mistakes. In IEEE 1988 International Conference on Neural Networks (vol.1 ,pp. 657–664).
Graves, A. (2011). Practical variational inference for neural networks. In Advances in Neural
Information Processing Systems (pp. 2348–2356).
Hastie, T., Tibshirani, R., & Friedman, J. (2009). The elements of statistical learning: data mining,
inference and prediction. Springer.
Heaton, J. B., Polson, N. G., & Witte, J. H. (2017). Deep learning for finance: deep portfolios.
Applied Stochastic Models in Business and Industry, 33(1), 3–12.
Hernández-Lobato, J. M., & Adams, R. (2015). Probabilistic backpropagation for scalable learning
of Bayesian neural networks. In International Conference on Machine Learning (pp. 1861–
1869).
Hinton, G. E., & Sejnowski, T. J. (1983). Optimal perceptual inference. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition (pp. 448–453). IEEE New York.
Hinton, G. E., & Van Camp, D. (1993). Keeping the neural networks simple by minimizing
the description length of the weights. In Proceedings of the Sixth Annual Conference on
Computational Learning Theory (pp. 5–13). ACM.
Hornik, K., Stinchcombe, M., & White, H. (1989, July). Multilayer feedforward networks are
universal approximators. Neural Netw., 2(5), 359–366.
Horvath, B., Muguruza, A., & Tomas, M. (2019, Jan). Deep learning volatility. arXiv e-prints,
arXiv:1901.09647.
Hutchinson, J. M., Lo, A. W., & Poggio, T. (1994). A nonparametric approach to pricing and
hedging derivative securities via learning networks. The Journal of Finance, 49(3), 851–889.
Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint
arXiv:1312.6114.
Kuan, C.-M., & White, H. (1994). Artificial neural networks: an econometric perspective.
Econometric Reviews, 13(1), 1–91.
Lawrence, N. (2005). Probabilistic non-linear principal component analysis with Gaussian process
latent variable models. Journal of Machine Learning Research, 6(Nov), 1783–1816.
Liang, S., & Srikant, R. (2016). Why deep neural networks? CoRR abs/1610.04161.
Lo, A. (1994). Neural networks and other nonparametric techniques in economics and finance. In
AIMR Conference Proceedings, Number 9.
MacKay, D. J. (1992a). A practical Bayesian framework for backpropagation networks. Neural
Computation, 4(3), 448–472.
MacKay, D. J. C. (1992b, May). A practical Bayesian framework for backpropagation networks.
Neural Computation, 4(3), 448–472.
Martin, C. H., & Mahoney, M. W. (2018). Implicit self-regularization in deep neural networks:
Evidence from random matrix theory and implications for learning. CoRR abs/1810.01075.
Mhaskar, H., Liao, Q., & Poggio, T. A. (2016). Learning real and Boolean functions: When is deep
better than shallow. CoRR abs/1603.00988.
Mnih, A., & Gregor, K. (2014). Neural variational inference and learning in belief networks. arXiv
preprint arXiv:1402.0030.
Montúfar, G., Pascanu, R., Cho, K., & Bengio, Y. (2014, Feb). On the number of linear regions of
deep neural networks. arXiv e-prints, arXiv:1402.1869.
Mullainathan, S., & Spiess, J. (2017). Machine learning: An applied econometric approach.
Journal of Economic Perspectives, 31(2), 87–106.
Neal, R. M. (1990). Learning stochastic feedforward networks, Vol. 64. Technical report, Depart-
ment of Computer Science, University of Toronto.
Neal, R. M. (1992). Bayesian training of backpropagation networks by the hybrid Monte Carlo
method. Technical report, CRG-TR-92-1, Dept. of Computer Science, University of Toronto.
Neal, R. M. (2012). Bayesian learning for neural networks, Vol. 118. Springer Science & Business
Media.
Nesterov, Y. (2013). Introductory lectures on convex optimization: A basic course, Volume 87.
Springer Science & Business Media.
Poggio, T. (2016). Deep learning: mathematics and neuroscience. A sponsored supplement to
science brain-inspired intelligent robotics: The intersection of robotics and neuroscience,
pp. 9–12.
Polson, N., & Rockova, V. (2018, Mar). Posterior concentration for sparse deep learning. arXiv
e-prints, arXiv:1803.09138.
Polson, N. G., Willard, B. T., & Heidari, M. (2015). A statistical theory of deep learning via
proximal splitting. arXiv:1509.06061.
Racine, J. (2001). On the nonlinear predictability of stock returns using financial and economic
variables. Journal of Business & Economic Statistics, 19(3), 380–382.
Rezende, D. J., Mohamed, S., & Wierstra, D. (2014). Stochastic backpropagation and approximate
inference in deep generative models. arXiv preprint arXiv:1401.4082.
Ruiz, F. R., Aueb, M. T. R., & Blei, D. (2016). The generalized reparameterization gradient. In
Advances in Neural Information Processing Systems (pp. 460–468).
Salakhutdinov, R. (2008). Learning and evaluating Boltzmann machines. Tech. Rep., Technical
Report UTML TR 2008-002, Department of Computer Science, University of Toronto.
Salakhutdinov, R., & Hinton, G. (2009). Deep Boltzmann machines. In Artificial Intelligence and
Statistics (pp. 448–455).
Saul, L. K., Jaakkola, T., & Jordan, M. I. (1996). Mean field theory for sigmoid belief networks.
Journal of Artificial Intelligence Research, 4, 61–76.
Sirignano, J., Sadhwani, A., & Giesecke, K. (2016, July). Deep learning for mortgage risk. ArXiv
e-prints.
Smolensky, P. (1986). Parallel distributed processing: explorations in the microstructure of
cognition (Vol. 1. pp. 194–281). Cambridge, MA, USA: MIT Press.
Srivastava, N., Hinton, G. E., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout:
a simple way to prevent neural networks from overfitting. Journal of Machine Learning
Research, 15(1), 1929–1958.
Swanson, N. R., & White, H. (1995). A model-selection approach to assessing the information in
the term structure using linear models and artificial neural networks. Journal of Business &
Economic Statistics, 13(3), 265–275.
Telgarsky, M. (2016). Benefits of depth in neural networks. CoRR abs/1602.04485.
Tieleman, T. (2008). Training restricted Boltzmann machines using approximations to the likeli-
hood gradient. In Proceedings of the 25th International Conference on Machine Learning (pp.
1064–1071). ACM.
Tishby, N., & Zaslavsky, N. (2015). Deep learning and the information bottleneck principle.
CoRR abs/1503.02406.
Tran, D., Hoffman, M. D., Saurous, R. A., Brevdo, E., Murphy, K., & Blei, D. M. (2017, January).
Deep probabilistic programming. arXiv:1701.03757 [cs, stat].
Vapnik, V. N. (1998). Statistical learning theory. Wiley-Interscience.
Welling, M., Rosen-Zvi, M., & Hinton, G. E. (2005). Exponential family harmoniums with an
application to information retrieval. In Advances in Neural Information Processing Systems
(pp. 1481–1488).
Williams, C. K. (1997). Computing with infinite networks. In Advances in Neural Information
Processing systems (pp. 295–301).
Chapter 5
Interpretability
This chapter presents a method for interpreting neural networks which imposes
minimal restrictions on the neural network design. The chapter demonstrates
techniques for interpreting a feedforward network, including how to rank the
importance of the features. An example demonstrating how to apply interpretability
analysis to deep learning models for factor modeling is also presented.
1 Introduction
Once the neural network has been trained, a number of important issues surface
around how to interpret the model parameters. This aspect is a prominent issue for
practitioners in deciding whether to use neural networks in favor of other machine
learning and statistical methods for estimating factor realizations, sometimes even
if the latter’s predictive accuracy is inferior.
In this section, we shall introduce a method for interpreting multilayer percep-
trons which imposes minimal restrictions on the neural network design.
Chapter Objectives
By the end of this chapter, the reader should expect to accomplish the following:
– Apply techniques for interpreting a feedforward network, including how to rank
the importance of the features.
– Learn how to apply interpretability analysis to deep learning models for factor
modeling.
2 Background on Interpretability
There are numerous techniques for interpreting machine learning methods which
treat the model as a black-box. A good example is Partial Dependence Plots
(PDPs), as described by Greenwell et al. (2018). Other approaches also exist
in the literature. Garson (1991) partitions hidden-output connection weights into
components associated with each input neuron using absolute values of connection
weights. Olden and Jackson (2002) determine the relative importance, [R]ij , of the
ith output to the j th predictor variable of the model as a function of the weights,
according to a simple linear expression.
We seek to understand the limitations on the choice of activation functions and
understand the effect of increasing the number of layers and neurons on probabilistic
interpretability. For example, under standard Gaussian i.i.d. data, how robust are the
model's estimates of the importance of each input variable as the number of neurons
is varied?
2.1 Sensitivities
For the interpretation, we shall require that F(x) is Lipschitz continuous.1 That is, there
is a positive real constant K s.t. ∀x_1, x_2 ∈ ℝ^p, |F(x_1) − F(x_2)| ≤ K|x_1 − x_2|. Such
a constraint is necessary for the first derivative to be bounded and hence for the
derivatives, w.r.t. the inputs, to provide interpretability.
Fortunately, provided that the weights and biases are finite, each semi-affine
function is Lipschitz continuous everywhere. For example, the function tanh(x) is
continuously differentiable and its derivative 1−tanh2 (x) is globally bounded. With
finite weights, the composition of tanh(x) with an affine function is also Lipschitz.
Clearly ReLU(x) := max(x, 0) is not continuously differentiable and one cannot use
the approach described here. Note that for the following examples, we are indifferent
to the choice of homoscedastic or heteroscedastic error, since the model sensitivities
are independent of the error.
For a linear regression model,

\hat{Y} = F_\beta(X) := \beta_0 + \beta_1 X_1 + \cdots + \beta_K X_K, \quad (5.1)

the sensitivities are simply the coefficients: \partial_{X_i} \hat{Y} = \beta_i. \quad (5.2)
In a feedforward neural network, we can use the chain rule to obtain the model
sensitivities
\partial_{X_i} \hat{Y} = \partial_{X_i} F_{W,b}(X) = \partial_{X_i}\, \sigma^{(L)}_{W^{(L)},b^{(L)}} \circ \cdots \circ \sigma^{(1)}_{W^{(1)},b^{(1)}}(X). \quad (5.3)

For example, with one hidden layer, \sigma(x) := \tanh(x) and \sigma^{(1)}_{W^{(1)},b^{(1)}}(X) := \tanh(W^{(1)}X + b^{(1)}).
In matrix form, with general \sigma, the Jacobian^2 of \sigma w.r.t. X is J = D(I^{(1)})W^{(1)}, where D(I^{(1)}) is the
diagonal matrix of first derivatives of \sigma evaluated at I^{(1)} = W^{(1)}X + b^{(1)}.
1 If Lipschitz continuity is not imposed, then a small change in one of the input values could result
in an undesirable large variation in the derivative.
2 When σ is an identity function, the Jacobian J (I (1) ) = W (1) .
\partial_X \hat{Y} = W^{(L)} J(I^{(L-1)}) = W^{(L)} D(I^{(L-1)}) W^{(L-1)} \cdots D(I^{(1)}) W^{(1)}. \quad (5.7)

As a first example, consider the data generation process

\hat{Y} = \sum_{i=1}^{10} i X_i, \quad X_i \sim U(0, 1). \quad (5.8)
Figure 5.1 shows the ranked importance of the input variables in a neural network
with one hidden layer. Our interpretability method is compared with well-known
black-box interpretability methods such as Garson’s algorithm (Garson 1991) and
Olden’s algorithm (Olden and Jackson 2002). Our approach is the only technique to
interpret the fitted neural network which is consistent with how a linear regression
model would interpret the input variables.
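As an illustration of the sensitivity computation in Eq. (5.7), the following sketch fits a one-hidden-layer tanh network to data simulated from Eq. (5.8) and ranks the variables by their average absolute sensitivities; the library (scikit-learn), the number of hidden units, and the other settings are assumptions and need not match the notebook.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 10))
Y = X @ np.arange(1, 11)                         # step test: Y = sum_i i * X_i

nn = MLPRegressor(hidden_layer_sizes=(50,), activation='tanh',
                  max_iter=2000, random_state=0).fit(X, Y)
W1, b1 = nn.coefs_[0].T, nn.intercepts_[0]       # W1: (50, 10)
W2 = nn.coefs_[1].T                              # W2: (1, 50)

def sensitivities(x):
    # dY/dX = W2 D(I1) W1, with D(I1) = diag(1 - tanh^2(I1)) for a tanh hidden layer.
    I1 = W1 @ x + b1
    return (W2 @ np.diag(1.0 - np.tanh(I1) ** 2) @ W1).ravel()

S = np.array([sensitivities(x) for x in X])
importance = np.abs(S).mean(axis=0)
print("Variables ranked from least to most important:", np.argsort(importance) + 1)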
Fig. 5.1 Step test: This figure shows the ranked importance of the input variables in a neural
network with one hidden layer. (left) Our sensitivity based approach for input interpretability.
(Center) Garson's algorithm and (Right) Olden's algorithm. Our approach is the only technique
to interpret the fitted neural network which is consistent with how a linear regression model would
interpret the input variables

4 Interaction Effects
To illustrate our input variable and interaction effect ranking approach, we will use
a classical nonlinear benchmark regression problem—the Friedman test. The input
space consists of ten i.i.d. uniform U(0, 1) random variables; however, only five out
of these ten actually appear in the true model. The response is related to the inputs
according to the formula

Y = 10\sin(\pi X_1 X_2) + 20(X_3 - 0.5)^2 + 10X_4 + 5X_5 + \epsilon.
Fig. 5.2 Friedman test: Ranked model sensitivities of the fitted neural network to the input. (left)
Our sensitivity based approach for input interpretability. (Center) Garson’s algorithm and (Right)
Olden’s algorithm
Fig. 5.3 Friedman test: Ranked pairwise interaction terms in the fitted neural network to the input.
(Left) Our sensitivity based approach for ranking interaction terms. (Center) Garson's algorithm
and (Right) Olden's algorithm

General results on the bound of the variance of the Jacobian for any activation
function are difficult to derive. However, we derive the following result for a ReLU
activated single-layer feedforward network. In matrix form, with \sigma(x) = \max(x, 0),
the Jacobian, J, can be written as a linear combination of Heaviside functions (see
Eq. (5.23) in the appendix to this chapter), where H_{ii}(Z) = H(I_i^{(1)}) = 1_{\{I_i^{(1)} > 0\}}
and H_{ij} = 0 for j \neq i. We assume that the mean of the Jacobian is independent of the
number of hidden units, \mu_{ij} := E[J_{ij}]. Then we can state the following bound on the
Jacobian of the network for the special case when the input is one-dimensional.
Theorem (Dixon and Polson 2019) If X \in \mathbb{R}^p is i.i.d. and there are n hidden units
with ReLU activation, then the variance of the Jacobian of a single-layer feedforward
network with K outputs is bounded by \mu_{ij}:

V[J_{ij}] = \frac{n-1}{n} \mu_{ij} < \mu_{ij}, \quad \forall i \in \{1, \ldots, K\} \text{ and } \forall j \in \{1, \ldots, p\}. \quad (5.11)
See Appendix “Proof of Variance Bound on Jacobian” for the proof.
Remark 5.1 The theorem establishes a negative result for a ReLU activated shallow
network—increasing the number of hidden units increases the bound on the
variance of the Jacobian, and hence reduces the interpretability of the sensitivities. Note
that if we do not assume that the mean of the Jacobian is fixed under varying n, then
we have the more general bound derived in the appendix, and hence the effect of
network architecture on the bound of the variance of the Jacobian is not clear. Note
that the theorem holds without (i) distributional assumptions on X other than i.i.d.
data and (ii) specifying the number of data points.
Remark 5.2 This result also suggests that the inputs should be rescaled so that each
μij , the expected value of the Jacobian, is a small positive value, although it may
not be possible to find such a scaling for all (i, j ) pairs.
We can derive probabilistic bounds on the Jacobians for any choice of activation
function. Let \delta > 0 and a_1, \ldots, a_{n-1} be reals in (0, 1]. Let X_1, \ldots, X_{n-1} be
independent Bernoulli trials with E[X_k] = p_k so that

E[J] = \sum_{k=1}^{n-1} a_k p_k = \mu. \quad (5.13)

Then deviations of J above the mean satisfy the Chernoff-type bound

\Pr(J - \mu > \delta \mu) < \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right)^{\mu}. \quad (5.14)

A similar bound exists for deviations of J below the mean. For \gamma \in (0, 1]:

\Pr(J - \mu < -\gamma \mu) < \left(\frac{e^{\gamma}}{(1+\gamma)^{1+\gamma}}\right)^{\mu}. \quad (5.15)
These bounds are generally weak and are suited to large deviations, i.e. the tail
regions. The bounds are shown in Fig. 5.4 for different values of μ. Here, μ is
increasing towards the upper right-hand corner of the plot.
Fig. 5.4 Bounds for deviations of J above the mean, μ, i.e. Pr[J > μ(1 + δ)]. Various μ are shown
in the plot, with μ increasing towards the upper right-hand corner of the plot
Table 5.1 This table compares the functional form of the variable sensitivities and values with an
OLS estimator. NN0 is a zero hidden layer feedforward network and NN1 is a one hidden layer
feedforward network with 10 hidden neurons and tanh activation functions

Model | Intercept | Sensitivity of X1 | Sensitivity of X2
OLS   | β̂_0 = 0.011 | β̂_1 = 1.015 | β̂_2 = 1.018
NN0   | b̂^{(1)} = 0.020 | Ŵ_1^{(1)} = 1.018 | Ŵ_2^{(1)} = 1.021
NN1   | Ŵ^{(2)}σ(b̂^{(1)}) + b̂^{(2)} = 0.021 | E[Ŵ^{(2)}D(I^{(1)})Ŵ_1^{(1)}] = 1.014 | E[Ŵ^{(2)}D(I^{(1)})Ŵ_2^{(1)}] = 1.022
Table 5.1 compares an OLS estimator with a zero hidden layer feedforward network
(NN0 ) and a one hidden layer feedforward network with 10 hidden neurons and tanh
activation functions (NN1 ). The functional form of the first two regression models
is equivalent, although the OLS estimator has been computed using a matrix solver,
whereas the zero hidden layer network parameters have been fitted with stochastic
gradient descent.
The fitted parameter values will vary slightly with each optimization as the
stochastic gradient descent is randomized. However, the sensitivity terms are given
in closed form and easily mapped to the linear model. In an industrial setting, such
a one-to-one mapping is useful for migrating to a deep factor model where, for
model validation purposes, compatibility with linear models should be recovered in
a limiting case. Clearly, if the data is not generated from a linear model, then the
parameter values would vary across models.
Fig. 5.5 This figure shows the empirical distribution of the sensitivities β̂1 and β̂2 . The sharpness
of the distribution is observed to converge with the number of hidden units. (a) Density of β̂1 . (b)
Density of β̂2
Table 5.2 This table shows the moments and 99% confidence interval of the empirical distribution
of the sensitivity β̂1 . The sharpness of the distribution is observed to converge monotonically with
the number of hidden units
Hidden Units Mean Median Std.dev 1% C.I. 99% C.I.
2 0.980875 1.0232913 0.10898393 0.58121675 1.0729908
10 0.9866159 1.0083131 0.056483902 0.76814914 1.0322522
50 0.99183553 1.0029879 0.03123002 0.8698967 1.0182846
100 1.0071343 1.0175397 0.028034585 0.89689034 1.0296803
200 1.0152218 1.0249312 0.026156902 0.9119074 1.0363332
Table 5.3 This table shows the moments and the 99% confidence interval of the empirical
distribution of the sensitivity β̂2 . The sharpness of the distribution is observed to converge
monotonically with the number of hidden units
Hidden Units Mean Median Std.dev 1% C.I. 99% C.I.
2 0.98129386 1.0233982 0.10931312 0.5787732 1.073728
10 0.9876832 1.0091512 0.057096474 0.76264584 1.0339714
50 0.9903236 1.0020974 0.031827927 0.86471796 1.0152498
100 0.9842479 0.9946766 0.028286876 0.87199813 1.0065105
200 0.9976638 1.0074166 0.026751818 0.8920307 1.0189484
Figure 5.5 and Tables 5.2 and 5.3 show the empirical distribution of the fitted
sensitivities using the single hidden layer model with increasing hidden units.
The sharpness of the distributions is observed to converge monotonically with
the number of hidden units. The confidence intervals are estimated under a non-
parametric distribution.
In general, provided the weights and biases of the network are finite, the variances
of the sensitivities are bounded for any input and choice of activation function.
6 Factor Modeling
r_t = B_t f_t + \epsilon_t, \quad t = 1, \ldots, T, \quad (5.17)

r_t = F_t(B_t) + \epsilon_t, \quad (5.18)
Fig. 5.6 This figure compares the in-sample and out-of-sample performance of an OLS estima-
tor (OLS) with a feedforward neural network (NN), as measured by the mean squared error (MSE).
The neural network is observed to always exhibit slightly lower out-of-sample MSE, although the
effect of deep networks on this problem is marginal because the dataset is too simplistic. (a) In-
sample error. (b) Out-of-sample error
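The sketch below illustrates the comparison in Fig. 5.6 on simulated data: in each period an OLS cross-sectional regression r_t = B_t f_t + ε_t and a small feedforward network r_t = F_t(B_t) + ε_t are fitted to the same factor loadings and compared out-of-sample; the simulated returns, loadings, and network size are assumptions for illustration only.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
N, K = 200, 5                                      # assets and factors (illustrative)

mse_ols, mse_nn = [], []
for t in range(20):                                # a short panel of periods
    B = rng.normal(size=(N, K))                    # factor loadings B_t
    f = rng.normal(size=K)                         # factor realizations f_t
    r = B @ f + 0.1 * np.tanh(B[:, 0] * B[:, 1]) + 0.05 * rng.normal(size=N)

    B_in, B_out = B[:150], B[150:]                 # in-sample / out-of-sample split
    r_in, r_out = r[:150], r[150:]

    ols = LinearRegression().fit(B_in, r_in)       # r_t = B_t f_t + eps_t
    nn = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=2000,
                      random_state=0).fit(B_in, r_in)   # r_t = F_t(B_t) + eps_t

    mse_ols.append(np.mean((ols.predict(B_out) - r_out) ** 2))
    mse_nn.append(np.mean((nn.predict(B_out) - r_out) ** 2))

print("average out-of-sample MSE  OLS:", np.mean(mse_ols), " NN:", np.mean(mse_nn))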
Fig. 5.8 These figures show the effect of L1 regularization on the MSE errors for a network with
10 neurons in each of the two hidden layers. (a) In-sample. (b) Out-of-sample
Fig. 5.9 The distribution of sensitivities to each factor over the entire 100 month period using
the neural network (top). The sensitivities are sorted in ascending order from left to right by their
median values. The same sensitivities using OLS linear regression (bottom)
The sensitivities are sorted in ascending order from left to right by their median values.
We observe that the OLS regression is much more sensitive to the factors than the
NN. We further note that the NN ranks the top sensitivities differently from OLS.
Clearly, the above results are purely illustrative of the interpretability method-
ology and are not intended to be representative of a real-world factor model. Such a
choice of factors is observed to provide little benefit to the information ratios of a
simple stock selection strategy.
Larger Dataset
For completeness, we provide evidence that our neural network factor model
generates positive and higher information ratios than OLS when used to sort
portfolios from a larger universe, using up to 50 factors (see Table 5.4 for a
description of the factors). The dataset is not provided due to data licensing
restrictions.
We define the universe as 3290 stocks from the Russell 3000 index. Factors are
given by Bloomberg and reported monthly over the period from November 2008 to
November 2018. We train a two-hidden layer deep network with 50 hidden units
using ReLU activation.
Figure 5.10 compares the out-of-sample performance of neural networks and
OLS regression by the MSE (left) and the information ratios of a portfolio selection
strategy which selects the n stocks with the highest predicted monthly returns
(right). The information ratios are evaluated for various size portfolios, using the
Russell 3000 index as the benchmark. Also shown, for control, are randomly
selected portfolios.
Table 5.4 A short description of the factors used in the Russell 3000 deep learning factor model
demonstrated at the end of this chapter

Value factors
B/P     Book to price
CF/P    Cash flow to price
E/P     Earning to price
S/EV    Sales to enterprise value (EV). EV is given by EV = Market Cap + LT Debt + max(ST Debt − Cash, 0), where LT (ST) stands for long (short) term
EB/EV   EBITDA to EV
FE/P    Forecasted E/P. Forecast earnings are calculated from Bloomberg earnings consensus estimates data. For coverage reasons, Bloomberg uses the 1-year and 2-year forward earnings
DIV     Dividend yield. The exposure to this factor is just the most recently announced annual net dividends divided by the market price. Stocks with high dividend yields have high exposures to this factor

Size factors
MC      Log (market capitalization)
S       Log (sales)
TA      Log (total assets)

Trading activity factors
TrA     Trading activity is a turnover based measure. Bloomberg focuses on turnover, which is trading volume normalized by shares outstanding; this indirectly controls for the size effect. The exposure is the exponentially weighted average (EWMA) of the ratio of shares traded to shares outstanding. In addition, to mitigate the impact of sharp short-lived spikes in trading volume, Bloomberg winsorizes the data: first, daily trading volume is compared to the long-term EWMA volume (180-day half-life), then the data is capped at 3 standard deviations away from the EWMA average

Earnings variability factors
EaV/TA  Earnings volatility to total assets. Earnings volatility is measured over the last 5 years/median total assets over the last 5 years
CFV/TA  Cash flow volatility to total assets. Cash flow volatility is measured over the last 5 years/median total assets over the last 5 years
SV/TA   Sales volatility to total assets. Sales volatility over the last 5 years/median total assets over the last 5 years
Fig. 5.10 (a) The out-of-sample MSE is compared between OLS and a two-hidden layer deep
network applied to a universe of 3290 stocks from the Russell 3000 index over the period from
November 2008 to November 2018. (b) The information ratios of a portfolio selection strategy
which selects the n stocks from the universe with the highest predicted monthly returns. The
information ratios are evaluated for various size portfolios. The information ratios are based on out-
of-sample predicted asset returns using OLS regression, neural networks, and randomized selection
with no predictive model
Finally, Fig. 5.11 compares the distribution of sensitivities to each factor over
the entire 100 month period using the neural network (top) and OLS regression
(bottom). The sensitivities are sorted in ascending order from left to right by their
median values. We observe that the NN ranking of the factors differs substantially
from the OLS regression.
Fig. 5.11 The distribution of factor model sensitivities to each factor over the entire ten-year
period using the neural network applied to the Russell 3000 asset factor loadings (top). The
sensitivities are sorted in ascending order from left to right by their median values. The same
sensitivities using OLS linear regression (bottom). See Table 5.4 for a short description of the
fundamental factors
7 Summary
8 Exercises
Exercise 5.1*
Consider the following data generation process
Y = X_1 + X_2 + \epsilon, \quad X_1, X_2, \epsilon \sim N(0, 1),
i.e. β0 = 0 and β1 = β2 = 1.
a. For this data, write down the mathematical expression for the sensitivities of the
fitted neural network when the network has
– zero hidden layers;
– one hidden layer, with n unactivated hidden units;
– one hidden layer, with n tanh activated hidden units;
– one hidden layer, with n ReLU activated hidden units; and
– two hidden layers, each with n tanh activated hidden units.
Exercise 5.2**
Consider the following data generation process

Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_{12} X_1 X_2 + \epsilon, \quad \epsilon \sim N(0, \sigma_n^2),

i.e. \beta_0 = 0 and \beta_1 = \beta_2 = \beta_{12} = 1, where \beta_{12} is the coefficient of the interaction
term. \sigma_n^2 is the variance of the noise and \sigma_n = 0.01.
a. For this data, write down the mathematical expression for the interaction term
(i.e., the off-diagonal components of the Hessian matrix) of the fitted neural
network when the network has
– zero hidden layers;
– one hidden layer, with n unactivated hidden units;
– one hidden layer, with n tanh activated hidden units;
– one hidden layer, with n ReLU activated hidden units; and
– two hidden layers, each with n tanh activated hidden units.
Why is the ReLU activated network problematic for estimating interaction terms?
Exercise 5.3*
For the same problem in the previous exercise, use 5000 simulations to gen-
erate a regression training set dataset for the neural network with one hidden
layer. Produce a table showing how the mean and standard deviation of the
sensitivities βi behave as the number of hidden units is increased. Compare
your result with tanh and ReLU activation. What do you conclude about which
activation function to use for interpretability? Note that you should use the note-
book Deep-Learning-Interpretability.ipynb as the starting point for
experimental analysis.
Exercise 5.4*
Generalize the sensitivities function in Exercise 5.3 to L layers for either
tanh or ReLU activated hidden layers. Test your function on the data generation
process given in Exercise 5.1.
Exercise 5.5**
Fixing the total number of hidden units, how do the mean and standard deviation
of the sensitivities βi behave as the number of layers is increased? Your answer
should compare using either tanh or ReLU activation functions. Note, do not mix
the type of activation functions across layers. What do you conclude about the effect of
the number of layers, keeping the total number of units fixed, on the interpretability
of the sensitivities?
Exercise 5.6**
For the same data generation process as the previous exercise, use 5000 simulations
to generate a regression training set for the neural network with one hidden layer.
Produce a table showing how the mean and standard deviation of the interaction
term behave as the number of hidden units is increased, fixing all other parameters.
What do you conclude about the effect of the number of hidden units on the
interpretability of the interaction term? Note that you should use the notebook
Deep-Learning-Interaction.ipynb as the starting point for experimental
analysis.
Appendix
Partial Dependence Plots (PDPs) evaluate the expected output w.r.t. the marginal
density function of each input variable, and allow the importance of the predictors
to be ranked. More precisely, partitioning the data X into an interest set, Xs , and its
complement, Xc = X \ Xs , then the “partial dependence” of the response on Xs is
defined as
f_s(X_s) = E_{X_c}\left[\hat{f}(X_s, X_c)\right] = \int \hat{f}(X_s, X_c)\, p_c(X_c)\, dX_c, \quad (5.20)

where p_c(X_c) is the marginal probability density of X_c: p_c(X_c) = \int p(x)\, dX_s.
Equation (5.20) can be estimated from a set of training data by

\bar{f}_s(X_s) = \frac{1}{n} \sum_{i=1}^{n} \hat{f}\left(X_s, X_{i,c}\right), \quad (5.21)
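A minimal sketch of the estimator in Eq. (5.21) is given below; the fitted model and the grid of values for the interest variable are placeholders.

import numpy as np

def partial_dependence(model_predict, X, s, grid):
    # Estimate f_bar_s(x_s) = (1/n) sum_i f_hat(x_s, X_{i,c}) for each x_s in grid.
    pd_values = []
    for x_s in grid:
        X_mod = X.copy()
        X_mod[:, s] = x_s                  # hold X_s fixed, average over the complement X_c
        pd_values.append(model_predict(X_mod).mean())
    return np.array(pd_values)

# Example usage with any fitted regressor exposing .predict:
# grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
# pd = partial_dependence(model.predict, X, s=0, grid=grid)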
r_{ij} = W_{jk}^{(2)} W_{ki}^{(1)}. \quad (5.22)
The approach does not account for the non-linearity introduced by the activation,
which is one of the most critical aspects of the model. Furthermore, the approach
presented was limited to a single hidden layer.
J_{ij} = [\partial_X \hat{Y}]_{ij} = \sum_{k=1}^{n} w_{ik}^{(2)} w_{kj}^{(1)} H(I_k^{(1)}) = \sum_{k=1}^{n} c_k H_k(I) \quad (5.23)

where c_k := c_{ijk} := w_{ik}^{(2)} w_{kj}^{(1)} and H_k(I) := H(I_k^{(1)}) is the Heaviside function. As
a linear combination of indicator functions, we have

J_{ij} = \sum_{k=1}^{n-1} a_k 1_{\{I_k^{(1)} > 0,\; I_{k+1}^{(1)} \leq 0\}} + a_n 1_{\{I_n^{(1)} > 0\}}, \quad a_k := \sum_{i=1}^{k} c_i. \quad (5.24)

J_{ij} = \sum_{k=1}^{n-1} a_k 1_{\{w_{k,\cdot}^{(1)} X > -b_k^{(1)},\; w_{k+1,\cdot}^{(1)} X \leq -b_{k+1}^{(1)}\}} + a_n 1_{\{w_{n,\cdot}^{(1)} X > -b_n^{(1)}\}}. \quad (5.25)
k=1
Without loss of generality, consider the case when p = 1, the dimension of the input
space is one. Then Eq. 5.25 simplifies to:
n−1
Jij = ak 1xk <X≤xk+1 + an 1xn <X , j = 1, (5.26)
k=1
(1)
bk
where xk := − (1) . The expectation of the Jacobian is given by
Wk
n
μij := E[Jij ] = ak pk , (5.27)
k=1
n−1
V[Jij ] = μij < μij . (5.29)
n
If we relax the assumption that \mu_{ij} is independent of n then, under the original
weights a_k := \sum_{i=1}^{k} c_i:

V[J_{ij}] = \sum_{k=1}^{n} a_k p_k (1 - p_k) \leq \sum_{k=1}^{n} a_k p_k = \mu_{ij} \leq \sum_{k=1}^{n} a_k.
Python Notebooks
The notebooks provided in the accompanying source code repository are designed to
gain familiarity with how to implement interpretable deep networks. The examples
include toy simulated data and a simple factor model. Further details of the
notebooks are included in the README.md file.
References
Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., et al. (2016). TensorFlow: A system
for large-scale machine learning. In Proceedings of the 12th USENIX Conference on Operating
Systems Design and Implementation, OSDI'16 (pp. 265–283).
Dimopoulos, Y., Bourret, P., & Lek, S. (1995, Dec). Use of some sensitivity criteria for choosing
networks with good generalization ability. Neural Processing Letters, 2(6), 1–4.
Dixon, M. F., & Polson, N. G. (2019). Deep fundamental factor models.
Garson, G. D. (1991, April). Interpreting neural-network connection weights. AI Expert, 6(4), 46–
51.
Greenwell, B. M., Boehmke, B. C., & McCarthy, A. J. (2018, May). A simple and effective model-
based variable importance measure. arXiv e-prints, arXiv:1805.04755.
Nielsen, F., & Bender, J. (2010). The fundamentals of fundamental factor models. Technical
Report 24, MSCI Barra Research Paper.
Olden, J. D., & Jackson, D. A. (2002). Illuminating the “black box”: a randomization approach
for understanding variable contributions in artificial neural networks. Ecological Mod-
elling, 154(1), 135–150.
Rosenberg, B., & Marathe, V. (1976). Common factors in security returns: Microeconomic
determinants and macroeconomic correlates. Research Program in Finance Working Papers 44,
University of California at Berkeley.
Part II
Sequential Learning
Chapter 6
Sequence Modeling
1 Introduction
More often in finance, the data consists of observations of a variable over time, e.g.
stock prices, bond yields, etc. In such a case, the observations are not independent
over time; rather, observations are often strongly related to their recent histories.
For this reason, the ordering of the data matters (unlike cross-sectional data). This
is in contrast to most methods of machine learning, which assume that the data is
i.i.d. Moreover, algorithms and techniques for fitting machine learning models, such
as back-propagation for neural networks and cross-validation for hyperparameter
tuning, must be modified for use on time series data.
“Stationarity” of the data is a further important delineation necessary to success-
fully apply models to time series data. If the estimated moments of the data change
depending on the window of observation, then the modeling problem is much more
difficult. Neural network approaches to addressing these challenges are presented in
the next chapter.
An additional consideration is the data frequency—the frequency at which the
data is observed assuming that the timestamps are uniform. In general, the frequency
of the data governs the frequency of the time series model. For example, suppose that
we seek to predict the week-ahead stock price from daily historical adjusted close
prices on business days. In such a case, we would build a model from daily prices
and then predict 5 daily steps ahead, rather than building a model using only weekly
intervals of data.
In this chapter we shall primarily consider applications of parametric, linear, and
frequentist models to uniform time series data. Such methods form the conceptual
basis and performance baseline for more advanced neural network architectures
presented in the next chapter. In fact, each type of architecture is a generalization
of many of the models presented here. Please note that the material presented in
this chapter is not intended as a substitute for a more comprehensive and rigorous
treatment of econometrics, but rather to provide enough background for Chap. 8.
Chapter Objectives
By the end of this chapter, the reader should expect to accomplish the following:
– Explain and analyze linear autoregressive models;
– Understand the classical approaches to identifying, fitting, and diagnosing
autoregressive models;
– Apply simple heteroscedastic regression techniques to time series data;
– Understand how exponential smoothing can be used to predict and filter time
series; and
– Project multivariate time series data onto lower dimensional spaces with principal
component analysis.
Note that this chapter can be skipped if the reader is already familiar with
econometrics. This chapter is especially useful for students from an engineering or
physical sciences background, with little exposure to econometrics and time series
analysis.
2 Autoregressive Modeling
2.1 Preliminaries
Before we can build a model to predict Yt , we recall some basic definitions and
terminology, starting with a continuous time setting and then continuing thereafter
solely in a discrete-time setting.
•> Autocovariance
The j th autocovariance of a time series is γj t := E[(yt − μt )(yt−j − μt−j )], where
μt := E[yt ].
A time series is weakly (covariance) stationary if neither the mean nor the
autocovariances depend on the time t:

\mu_t = \mu, \quad \forall t,
\gamma_{jt} = \gamma_j, \quad \forall t.

As we have seen, this implies that \gamma_j = \gamma_{-j}: the autocovariances depend only
on the interval between observations, but not the time of the observations.
•> Autocorrelation
The jth autocorrelation, \tau_j, is just the jth autocovariance divided by the variance:

\tau_j = \frac{\gamma_j}{\gamma_0}. \quad (6.1)
A white noise process \epsilon_t satisfies:
a. E[\epsilon_t] = 0, \forall t;
b. V[\epsilon_t] = \sigma^2, \forall t; and
c. \epsilon_t and \epsilon_s are independent, t \neq s, \forall t, s.
Gaussian white noise just adds a normality assumption to the error. White noise
error is often referred to as a "disturbance," "shock," or "innovation" in the financial
econometrics literature.
An autoregressive model of order p, AR(p), takes the form

y_t = \mu + \sum_{i=1}^{p} \phi_i y_{t-i} + \epsilon_t, \quad (6.2)

where \epsilon_t is independent of \{y_{t-i}\}_{i=1}^{p}. We refer to \mu as the drift term and p is referred
to as the order of the model.
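As a brief illustration (not from the text), an AR(2) process can be simulated and fitted with statsmodels as sketched below; the drift, coefficients, and sample size are arbitrary choices.

import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
mu, phi = 0.5, np.array([0.6, 0.2])          # drift and AR(2) coefficients (stable choice)
y = np.zeros(1000)
for t in range(2, len(y)):
    # y_t = mu + phi_1 * y_{t-1} + phi_2 * y_{t-2} + eps_t
    y[t] = mu + phi @ y[t-2:t][::-1] + rng.normal()

fit = AutoReg(y, lags=2, trend='c').fit()    # least squares fit of the AR(2) model
print(fit.params)                            # estimated drift and AR coefficients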
Using the lag operator L, the AR(p) model can be written compactly as

\phi(L)[y_t] = \mu + \epsilon_t, \quad (6.3)

where \phi(L) := 1 - \phi_1 L - \cdots - \phi_p L^p.
1 We shall identify statistical tests for establishing autocorrelation later in this chapter.
This compact form shall be conducive to analysis describing the properties of the
AR(p) process. We mention in passing that the identification of the parameter p
from data, i.e. the number of lags in the model rests on the data being weakly
covariance stationary.2
2.3 Stability
For the AR(1) model, inverting the lag polynomial gives

y_t = \frac{1}{1 - \phi L}[\mu + \epsilon_t] = \sum_{j=0}^{\infty} \phi^j L^j [\mu + \epsilon_t], \quad (6.5)
and the infinite sum will be stable, i.e. the φ j terms do not grow with j , provided that
|φ| < 1. Conversely, unstable AR(p) processes exhibit the counter-intuitive behavior
that the error disturbance terms become increasingly influential as the lag increases.
We can calculate the Impulse Response Function (IRF), \partial y_t / \partial \epsilon_{t-j}, \forall j, to characterize
the influence of past disturbances. For the AR(p) model, the IRF is given by \phi^j and
hence is geometrically decaying when the model is stable.
2.4 Stationarity
2 Statistical tests for identifying the order of the model will be discussed later in the chapter.
Writing the characteristic polynomial in terms of its roots \lambda_1, \ldots, \lambda_p,

\phi(z) = \left(1 - \frac{z}{\lambda_1}\right) \cdot \left(1 - \frac{z}{\lambda_2}\right) \cdot \ldots \cdot \left(1 - \frac{z}{\lambda_p}\right) = 0, \quad (6.6)

it follows that an AR(p) model is strictly stationary and ergodic if all the roots lie
outside the unit circle in the complex plane \mathbb{C}. That is, |\lambda_i| > 1, i \in \{1, \ldots, p\}, where
|\cdot| is the modulus of a complex number. Note that if the characteristic equation has
at least one unit root, with all other roots lying outside the unit circle, then this is a
special case of non-stationarity but not strict stationarity.
For example, the random walk

y_t = y_{t-1} + \epsilon_t \quad (6.7)

has characteristic polynomial \phi(z) = 1 - z = 0, which implies that the real root is
z = 1. Hence the root is on the unit circle and the model is a special case of non-
stationarity.
\phi(z) = 1 - \phi_1 z - \cdots - \phi_p z^p \quad (6.11)

q(z) = -\frac{\phi(z)}{\phi_p} = -\frac{1}{\phi_p} + \frac{\phi_1}{\phi_p} z + \cdots + z^p \quad (6.12)
The lag-h partial autocorrelation is defined as

\tilde{\tau}_h := \tilde{\tau}_{t,t-h} := \frac{\tilde{\gamma}_h}{\sqrt{\tilde{\gamma}_{t,h}\, \tilde{\gamma}_{t-h,h}}},

where \tilde{\gamma}_h := \tilde{\gamma}_{t,t-h} := E[y_t - P(y_t \mid y_{t-1}, \ldots, y_{t-h+1}),\; y_{t-h} - P(y_{t-h} \mid y_{t-1}, \ldots,
y_{t-h+1})] is the lag-h partial autocovariance and P(W \mid Z) is an orthogonal projection
of W onto the set Z. The partial autocorrelation function \tilde{\tau}_h : \mathbb{N} \to [-1, 1] is the map
h \mapsto \tilde{\tau}_h. The plot of \tilde{\tau}_h against h is referred to as the partial correlogram.
AR(p) Processes
Using the property that a linear orthogonal projection \hat{y}_t = P(y_t \mid y_{t-1}, \ldots,
y_{t-h+1}) is given by the OLS estimator as \hat{y}_t = \phi_1 y_{t-1} + \cdots + \phi_{h-1} y_{t-h+1}, gives
the Yule-Walker equations for an AR(p) process, relating the partial autocorrelations
\tilde{T}_p := [\tilde{\tau}_1, \ldots, \tilde{\tau}_p] to the autocorrelations T_p := [\tau_1, \ldots, \tau_p]:

R_p \tilde{T}_p = T_p, \quad R_p = \begin{bmatrix}
1 & \tau_1 & \cdots & \tau_{p-1} \\
\tau_1 & 1 & \cdots & \tau_{p-2} \\
\vdots & \vdots & \ddots & \vdots \\
\tau_{p-1} & \tau_{p-2} & \cdots & 1
\end{bmatrix}. \quad (6.15)
For h \leq p, we can solve for the hth lag partial autocorrelation by writing

\tilde{\tau}_h = \frac{|R_h^*|}{|R_h|}, \quad (6.16)

where |\cdot| is the matrix determinant, the jth column [R_h^*]_{\cdot,j} = [R_h]_{\cdot,j} for j \neq h,
and the hth column is [R_h^*]_{\cdot,h} = T_h.
For example, the lag-1 partial autocorrelation is \tilde{\tau}_1 = \tau_1 and the lag-2 partial
autocorrelation is

\tilde{\tau}_2 = \frac{\begin{vmatrix} 1 & \tau_1 \\ \tau_1 & \tau_2 \end{vmatrix}}{\begin{vmatrix} 1 & \tau_1 \\ \tau_1 & 1 \end{vmatrix}} = \frac{\tau_2 - \tau_1^2}{1 - \tau_1^2}. \quad (6.17)

For an AR(1) process, \tau_2 = \tau_1^2, so that

\tilde{\tau}_2 = \frac{\tau_1^2 - \tau_1^2}{1 - \tau_1^2} = 0, \quad (6.18)
and this is true for all lag orders greater than the order of the AR process.
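The sharp cut-off of the partial autocorrelation function beyond the model order can be checked numerically, as in the sketch below on a simulated AR(1) series; the coefficient and sample size are arbitrary.

import numpy as np
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(0)
phi = 0.7
y = np.zeros(2000)
for t in range(1, len(y)):
    y[t] = phi * y[t-1] + rng.normal()

tau = acf(y, nlags=2, fft=True)                          # autocorrelations tau_1, tau_2
tau_tilde_2 = (tau[2] - tau[1]**2) / (1 - tau[1]**2)     # Eq. (6.17)
print("lag-2 partial autocorrelation (Eq. 6.17):", tau_tilde_2)      # ~ 0 for an AR(1)
print("statsmodels PACF (Yule-Walker):", pacf(y, nlags=5, method='yw')[1:])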
We can reason about this property from another perspective—through the partial
autocovariances. The lag-2 partial autocovariance of an AR(1) process is
\tilde{\gamma}_2 = E[y_t - \hat{y}_t,\; y_{t-2} - \hat{y}_{t-2}],
where \hat{y}_t = P(y_t \mid y_{t-1}) and \hat{y}_{t-2} = P(y_{t-2} \mid y_{t-1}). When P is a linear orthogonal
projection, we have from the property of an orthogonal projection

P(W \mid Z) = \mu_W + \frac{\mathrm{Cov}(W, Z)}{V[Z]} (Z - \mu_Z). \quad (6.20)
The exact likelihood when the density of the data is independent of (\phi, \sigma_n^2) is

L(y, x; \phi, \sigma_n^2) = \prod_{t=1}^{T} f_{Y_t|X_t}(y_t|x_t; \phi, \sigma_n^2)\, f_{X_t}(x_t) \quad (6.23)

= (\sigma_n^2 2\pi)^{-T/2} \exp\left\{-\frac{1}{2\sigma_n^2} \sum_{t=1}^{T} (y_t - \phi^T x_t)^2\right\}.
In many cases such an assumption about the independence of the density of the data
and the parameters is not warranted. For example, consider the zero mean AR(1)
with unknown noise variance:
L(x; \phi, \sigma_n^2) = \prod_{t=2}^{T} f_{Y_t|Y_{t-1}}(y_t|y_{t-1}; \phi, \sigma_n^2)\, f_{Y_1}(y_1; \phi, \sigma_n^2)

= \left(2\pi \frac{\sigma_n^2}{1 - \phi^2}\right)^{-1/2} \exp\left\{-\frac{1 - \phi^2}{2\sigma_n^2} y_1^2\right\} (\sigma_n^2 2\pi)^{-\frac{T-1}{2}}
\exp\left\{-\frac{1}{2\sigma_n^2} \sum_{t=2}^{T} (y_t - \phi y_{t-1})^2\right\},
where we made use of the moments of Y_t—a result which is derived in Sect. 2.8.
Despite the dependence of the density of the data on the parameters, there
may be practically little advantage to using exact maximum likelihood over the
conditional likelihood method (i.e., dropping the f_{Y_1}(y_1; \phi, \sigma_n^2) term). This turns
out to be the case for linear models. Maximizing the conditional likelihood is
equivalent to ordinary least squares estimation.
2.7 Heteroscedasticity
The AR model assumes that the noise is i.i.d. This may be an overly optimistic
assumption which can be relaxed by assuming that the noise is time dependent.
Treating the noise as time dependent is exemplified by a heteroscedastic AR(p)
model
y_t = \mu + \sum_{i=1}^{p} \phi_i y_{t-i} + \epsilon_t, \quad \epsilon_t \sim N(0, \sigma_{n,t}^2). \quad (6.25)
There are many tests for heteroscedasticity in time series models and one of
them, the ARCH test, is summarized in Table 6.3. The estimation procedure for
heteroscedastic models is more complex and involves two steps: (i) estimation of the
errors from the maximum likelihood function which treats the errors as independent
and (ii) estimation of model parameters under a more general maximum likelihood
estimation which treats the errors as time-dependent. Note that such a procedure
could be generalized further to account for correlation in the errors but requires the
inversion of the covariance matrix, which is computationally intractable with large
time series.
The conditional likelihood is

L(y|X; \phi, \sigma_n^2) = \prod_{t=1}^{T} f_{Y_t|X_t}(y_t|x_t; \phi, \sigma_n^2)

= (2\pi)^{-T/2} \det(D)^{-1/2} \exp\left\{-\frac{1}{2} (y - X\phi)^T D^{-1} (y - X\phi)\right\},

where D_{tt} = \sigma_{n,t}^2 is the diagonal covariance matrix and X \in \mathbb{R}^{T \times p} is the data matrix.
The Wold representation theorem (a.k.a. Wold decomposition) states that every
covariance stationary time series can be written as the sum of two time series,
one deterministic and one stochastic. In effect, we have already considered the
deterministic component when choosing an AR process.4 The stochastic component
can be represented as a “moving average process” or MA(q) process which
expresses yt as a linear combination of current and q past disturbances. Its definition
is as follows:
4 This is an overly simplistic statement because the AR(1) process can be expressed as an MA(∞)
process and vice versa.
y_t = \mu + \sum_{i=1}^{q} \theta_i \epsilon_{t-i} + \epsilon_t. \quad (6.26)

It turns out that y_{t-1} depends on \{\epsilon_{t-1}, \epsilon_{t-2}, \ldots\}, but not \epsilon_t, and hence
\tilde{\gamma}_{t,t-2} = 0. It should be apparent that this property holds even when P is a non-linear
projection provided that the errors are independent (but not necessarily identical).
Another brief point of discussion is that an AR(1) process can be rewritten as a
MA(∞) process. Suppose that the AR(1) process has a mean μ and the variance of
the noise is σn2 , then by a binomial expansion of the operator (1 − φL)−1 we have
y_t = \frac{\mu}{1 - \phi} + \sum_{j=0}^{\infty} \phi^j \epsilon_{t-j}, \quad (6.27)
2.9 GARCH
Recall from Sect. 2.7 that heteroscedastic time series models treat the error as time
dependent. A popular parametric, linear, and heteroscedastic method used in finan-
cial econometrics is the Generalized Autoregressive Conditional Heteroscedastic
(GARCH) model (Bollerslev and Taylor) . A GARCH(p,q) model specifies that the
conditional variance (i.e., volatility) is given by an ARMA(p,q) model—there are p
lagged conditional variances and q lagged squared noise terms:
\sigma_t^2 := E[\epsilon_t^2 \mid \mathcal{F}_{t-1}] = \alpha_0 + \sum_{i=1}^{q} \alpha_i \epsilon_{t-i}^2 + \sum_{i=1}^{p} \beta_i \sigma_{t-i}^2.
This model gives an explicit relationship between the current volatility and previous
volatilities. Such a relationship is useful for predicting volatility in the model, with
obvious benefits for volatility modeling in trading and risk management. This simple
relationship enables us to characterize the behavior of the model, as we shall see
shortly.
A necessary condition for model stationarity is the following constraint:

\left(\sum_{i=1}^{q} \alpha_i + \sum_{i=1}^{p} \beta_i\right) < 1.

When the model is stationary, the long-run volatility converges to the uncondi-
tional variance of \epsilon_t:

\sigma^2 := \mathrm{var}(\epsilon_t) = \frac{\alpha_0}{1 - \left(\sum_{i=1}^{q} \alpha_i + \sum_{i=1}^{p} \beta_i\right)}.
To see this, let us consider the l-step ahead forecast using a GARCH(1,1) model:

\sigma_t^2 = \alpha_0 + \alpha_1 \epsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2 \quad (6.28)

\hat{\sigma}_{t+1}^2 = \alpha_0 + \alpha_1 E[\epsilon_t^2 \mid \mathcal{F}_{t-1}] + \beta_1 \sigma_t^2 \quad (6.29)
= \sigma^2 + (\alpha_1 + \beta_1)(\sigma_t^2 - \sigma^2) \quad (6.30)

\hat{\sigma}_{t+2}^2 = \alpha_0 + \alpha_1 E[\epsilon_{t+1}^2 \mid \mathcal{F}_{t-1}] + \beta_1 E[\sigma_{t+1}^2 \mid \mathcal{F}_{t-1}] \quad (6.31)
= \sigma^2 + (\alpha_1 + \beta_1)^2 (\sigma_t^2 - \sigma^2) \quad (6.32)

\hat{\sigma}_{t+l}^2 = \alpha_0 + \alpha_1 E[\epsilon_{t+l-1}^2 \mid \mathcal{F}_{t-1}] + \beta_1 E[\sigma_{t+l-1}^2 \mid \mathcal{F}_{t-1}] \quad (6.33)
= \sigma^2 + (\alpha_1 + \beta_1)^l (\sigma_t^2 - \sigma^2), \quad (6.34)
so that, as the forecast horizon goes to infinity, the variance forecast approaches the
unconditional variance of \epsilon_t. From the l-step ahead variance forecast, we can see that
(\alpha_1 + \beta_1) determines how quickly the variance forecast converges to the unconditional
variance. If the variance sharply rises during a crisis, the number of periods, K, until it
is halfway between the first forecast and the unconditional variance is given by
(\alpha_1 + \beta_1)^K = 0.5, so the half-life5 is K = \ln(0.5)/\ln(\alpha_1 + \beta_1).
For example, if

(\alpha_1 + \beta_1) = 0.97,

then the half-life is K = \ln(0.5)/\ln(0.97) \approx 22.8, or roughly 23 periods.
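A small sketch of the l-step ahead variance forecast in Eq. (6.34) and the half-life calculation is given below; the GARCH(1,1) parameters are arbitrary assumptions.

import numpy as np

alpha0, alpha1, beta1 = 0.05, 0.09, 0.88           # assumed GARCH(1,1) parameters
persistence = alpha1 + beta1                        # (alpha_1 + beta_1) < 1 => stationary
sigma2_uncond = alpha0 / (1.0 - persistence)        # unconditional variance
sigma2_t = 3.0 * sigma2_uncond                      # elevated current conditional variance

forecasts = [sigma2_uncond + persistence**l * (sigma2_t - sigma2_uncond)
             for l in range(1, 51)]                 # Eq. (6.34)
half_life = np.log(0.5) / np.log(persistence)       # K = ln(0.5) / ln(alpha_1 + beta_1)

print("1-, 5-, 20-step variance forecasts:", [round(forecasts[l - 1], 3) for l in (1, 5, 20)])
print("persistence:", persistence, " half-life (periods):", round(half_life, 1))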
Writing the exponential smoothing forecast as a geometrically decaying autoregressive
series back to the first observation:

\tilde{y}_{t+1} = \alpha y_t + \alpha(1 - \alpha)y_{t-1} + \alpha(1 - \alpha)^2 y_{t-2} + \alpha(1 - \alpha)^3 y_{t-3}
+ \cdots + \alpha(1 - \alpha)^{t-1} y_1 + \alpha(1 - \alpha)^t \tilde{y}_1,
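For reference, a short sketch of the standard simple exponential smoothing recursion, ỹ_{t+1} = αy_t + (1 − α)ỹ_t, which generates geometrically decaying weights of this form, is given below; the smoothing parameter and series are arbitrary.

import numpy as np

def exp_smooth(y, alpha, y0_tilde=None):
    # One-step-ahead forecasts y_tilde_{t+1} = alpha * y_t + (1 - alpha) * y_tilde_t.
    y_tilde = y[0] if y0_tilde is None else y0_tilde
    out = []
    for obs in y:
        y_tilde = alpha * obs + (1 - alpha) * y_tilde
        out.append(y_tilde)
    return np.array(out)

y = np.cumsum(np.random.default_rng(0).normal(size=200))   # example series
print(exp_smooth(y, alpha=0.3)[-5:])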
3 Fitting Time Series Models: The Box–Jenkins Approach

While maximum likelihood estimation is the approach of choice for fitting the
ARMA models described in this chapter, there are many considerations beyond
fitting the model parameters. In particular, we know from earlier chapters that the
bias–variance tradeoff is a central consideration which is not addressed by maximum
likelihood estimation unless a penalty term is added.
Machine learning achieves generalized performance through optimizing the
bias–variance tradeoff, with many of the parameters being optimized through cross-
validation. This is both a blessing and a curse. On the one hand, the heavy reliance
on numerical optimization provides substantial flexibility but at the expense of com-
putational cost and, often-times, under-exploitation of structure in the time series.
There are also potential instabilities whereby small changes in hyperparameters lead
to substantial differences in model performance.
If one were able to restrict the class of functions represented by the model, using
knowledge of the relationship and dependencies between variables, then one could
in principle reduce the complexity and improve the stability of the fitting procedure.
For some 75 years, econometricians and statisticians have approached the
problem of time series modeling with ARIMA in a simple and intuitive way. They
follow a three-step process to fit and assess AR(p). This process is referred to as
a Box–Jenkins approach or framework. The three basic steps of the Box–Jenkins
modeling approach are:
a. (I)dentification—determining the order of the model (a.k.a. model selection);
b. (E)stimation—estimation of model parameters; and
c. (D)iagnostic checking—evaluating the fit of the model.
This modeling approach is iterative and parsimonious—it favors models with
fewer parameters.
3.1 Stationarity
Before the order of the model can be determined, the time series must be tested for
stationarity. A standard statistical test for covariance stationarity is the Augmented
Dickey-Fuller (ADF) test which often accounts for the (c)onstant drift and (t)ime
trend. The ADF test is a unit root test—the Null hypothesis is that the characteristic
polynomial exhibits at least a unit root and hence the data is non-stationary. If the
Null can be rejected at a confidence level, α, then the data is stationary. Attempting
to fit a time series model to non-stationary data will result in dubious interpretations
of the estimated partial autocorrelation function and poor predictions and should
therefore be avoided.
Any trending time series process is non-stationary. Before we can fit an AR(p)
model, it is first necessary to transform the original time series into a stationary
form. In some instances, it may be possible to simply detrend the time series (a
transformation which works in a limited number of cases). However, this is rarely
foolproof. To the potential detriment of the predictive accuracy of the model, we can
instead systematically difference the original time series one or more times until
we arrive at a stationary time series.
To gain insight, let us consider a simple example. Suppose we are given the
following linear model with a time trend:

y_t = \beta_0 + \beta_1 t + \epsilon_t.

The mean, E[y_t] = \beta_0 + \beta_1 t, depends on t and thus this model is non-stationary.
Instead we can difference the process to give

\Delta y_t := y_t - y_{t-1} = \beta_1 + \epsilon_t - \epsilon_{t-1},

and hence the mean and the variance of this difference process are constant and the
difference process is stationary.
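The following sketch illustrates the ADF test and first differencing with statsmodels on a simulated trending series; the trend coefficients are arbitrary.

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
t = np.arange(500)
y = 1.0 + 0.05 * t + rng.normal(size=500)          # linear time trend plus noise

# ADF test: the Null hypothesis is at least one unit root (non-stationarity).
p_level = adfuller(y, regression='ct')[1]          # constant and time trend
p_diff = adfuller(np.diff(y), regression='c')[1]   # first-differenced series
print("p-value (levels):", round(p_level, 4), " p-value (differences):", round(p_diff, 4))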
3.3 Identification
A common approach for determining the order of a AR(p) from a stationary time
series is to estimate the partial autocorrelations and determine the largest lag which
is significant. Figure 6.1 shows the partial correlogram, the plot of the estimated
partial autocorrelations against the lag. The solid horizontal lines define the 95%
confidence interval which can be constructed for each coefficient using
\pm 1.96 \times \frac{1}{\sqrt{T}}, \quad (6.44)
While the partial autocorrelation function is useful for determining the AR(p)
model order, in many cases there is an undesirable element of subjectivity in the
choice.
It is often preferable to use the Akaike Information Criteria (AIC) to measure the
quality of fit. The AIC is given by
$$\mathrm{AIC} = \ln(\hat{\sigma}^2) + \frac{2k}{T}, \tag{6.45}$$
where σ̂ 2 is the residual variance (the residual sums of squares divided by the
number of observations T ) and k = p + q + 1 is the total number of parameters
estimated. This criterion expresses a bias–variance tradeoff between the first term,
the quality of fit, and the second term, a penalty function proportional to the number
of parameters. The goal is to select the model which minimizes the AIC by first
using maximum likelihood estimation and then adding the penalty term. Adding
more parameters to the model reduces the residuals but increases the right-hand
term, thus the AIC favors the best fit with the fewest number of parameters.
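As a hedged sketch of AIC-based identification, the snippet below fits AR(p) candidates with statsmodels over a small grid of orders and selects the one with the smallest AIC. Note that statsmodels reports the AIC on the −2 ln L + 2k scale rather than the variance form of Eq. 6.45, but the minimizing order is what matters. The simulated AR(2) series is purely illustrative.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Simulate an AR(2) process and select the AR order p by minimizing the AIC.
np.random.seed(1)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + np.random.normal()

aics = {}
for p in range(1, 6):
    fit = ARIMA(y, order=(p, 0, 0)).fit()   # maximum likelihood estimation
    aics[p] = fit.aic
best_p = min(aics, key=aics.get)
print(aics, "selected order:", best_p)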
On the surface, the overall approach has many similarities with regularization
in machine learning where the loss function is penalized by a LASSO penalty (L1
norm of the parameters) or a ridge penalty (L2 norm of the parameters). However,
we emphasize that AIC is estimated post-hoc, once the maximum likelihood
function is evaluated, whereas in machine learning models, the penalized loss
function is directly minimized.
Once the model is fitted, we must assess whether the residuals exhibit autocorrelation, which would suggest that the model is underfitting. The residuals of a fitted time series model should be white noise. To test for autocorrelation in the residuals, Box and Pierce
propose the Portmanteau statistic
$$Q^*(m) = T\sum_{l=1}^{m}\hat{\rho}_l^2,$$
to test the hypotheses
$$H_0: \hat{\rho}_1 = \dots = \hat{\rho}_m = 0 \quad \text{vs.} \quad H_a: \hat{\rho}_i \neq 0$$
for some $i \in \{1, \dots, m\}$, where the $\hat{\rho}_i$ are the sample autocorrelations of the residuals.
The Box-Pierce statistic follows an asymptotically chi-squared distribution with
m degrees of freedom. The closely related Ljung–Box test statistic increases the
power of the test in finite samples:
$$Q(m) = T(T+2)\sum_{l=1}^{m}\frac{\hat{\rho}_l^2}{T-l}. \tag{6.46}$$
Fig. 6.2 This plot shows the results of applying a Ljung–Box test to the residuals of an AR(p)
model. (Top) The standardized residuals are shown against time. (Center) The estimated ACF of
the residuals is shown against the lag index. (Bottom) The p-values of the Ljung–Box test statistic
are shown against the lag index
We refer the reader to standard texts such as Tsay (2010) for further details of these tests and elaboration on their application to linear models.
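A minimal sketch of this diagnostic, assuming statsmodels, is shown below: an AR model is fitted to an illustrative simulated series and the Ljung–Box statistic of Eq. 6.46 is computed on its residuals at several lags.

import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.tsa.arima.model import ARIMA

# Illustrative stationary AR(2) data.
np.random.seed(1)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + np.random.normal()

fit = ARIMA(y, order=(2, 0, 0)).fit()
lb = acorr_ljungbox(fit.resid, lags=[5, 10, 20], return_df=True)
print(lb)   # columns lb_stat and lb_pvalue; large p-values are consistent with white noise residuals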
4 Prediction
The h-step-ahead forecast from a fitted AR(p) model is the conditional expectation
$$\hat{y}_{t+h} = E[y_{t+h} \mid \mathcal{F}_t] = \sum_{i=1}^{p}\phi_i\,\hat{y}_{t+h-i}. \tag{6.47}$$
If the output is categorical, rather than continuous, then the ARMA model is used to
predict the log-odds ratio of the binary event rather than the conditional expectation
of the response. This is analogous to using a logit function as a link in logistic
regression.
Other general metrics are also used to assess model accuracy, such as the confusion matrix, the F1-score, and Receiver Operating Characteristic (ROC) curves. These metrics are not specific to time series data and could equally be applied to cross-sectional models. The following example illustrates a binary event prediction problem using time series data.
In this example, the accuracy is (12+2)/24—the ratio of the sum of the diagonal
terms to the set size. Of special interest are the type I (false positive) and type II
(false negative) errors, shown by the off-diagonal elements as 8 and 2, respectively.
In practice, careful consideration must be given as to whether there is equal tolerance
for type 1 and type 2 errors.
The significance of the classifier can be estimated from a chi-squared statistic
with one degree of freedom under a Null hypothesis that the classifier is a white
noise. In general, Chi-squared testing is used to determine whether two variables
are independent of one another. In this case, if the Chi-squared statistic is above a
given critical threshold value, associated with a significance level, then we can say
that the classifier is not white noise.
Let us label the elements of the confusion matrix as in Table 6.2 below. The
column and row sums of the confusion matrices and the total number of test samples,
m, are also shown.
The chi-squared statistic with one degree of freedom is given by the squared
difference of the expected result (i.e., a white noise model where the prediction
is independent of the observations) and the model prediction, Ŷ , relative to the
expected result. When normalized by the number of observations, each element of
the confusion matrix is the joint probability [P(Y, Ŷ )]ij . Under a white noise model,
the observed outcome, $Y$, and the predicted outcome, $\hat{Y}$, are independent and so $[P(Y, \hat{Y})]_{ij} = [P(Y)]_i[P(\hat{Y})]_j$, which is the $i$th row sum, $m_{i,\cdot}$, multiplied by the $j$th column sum, $m_{\cdot,j}$, divided by $m$. Since $m_{i,j}$ is based on the model prediction, the chi-squared statistic is thus
$$\chi^2 = \sum_{i=1}^{2}\sum_{j=1}^{2}\frac{\left(m_{ij} - m_{i,\cdot}\,m_{\cdot,j}/m\right)^2}{m_{i,\cdot}\,m_{\cdot,j}/m}. \tag{6.49}$$
This value is far below the threshold value of 6.635 for a chi-squared statistic with one degree of freedom to be significant. Thus we cannot reject the Null hypothesis: the classifier is not statistically distinguishable from white noise.
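The calculation can be reproduced directly from the counts quoted in the text (TP = 12, FN = 2, FP = 8, TN = 2). The sketch below, assuming scipy for the critical value, evaluates Eq. 6.49 against the white noise (independence) model.

import numpy as np
from scipy.stats import chi2

# Confusion matrix from the example: rows are observed classes, columns are predictions.
C = np.array([[12, 2],
              [8, 2]], dtype=float)
m = C.sum()
expected = np.outer(C.sum(axis=1), C.sum(axis=0)) / m   # white noise (independence) model
chi2_stat = ((C - expected) ** 2 / expected).sum()       # Eq. 6.49
critical = chi2.ppf(0.99, df=1)                          # 6.635 at the 1% significance level
print(f"chi2 = {chi2_stat:.3f}, critical value = {critical:.3f}")
# chi2 is well below 6.635, so we cannot reject the Null that the classifier is white noise.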
The example classification model shown above used a threshold of pt >= 0.5 to
classify an event as positive. This choice of threshold is intuitive but arbitrary. How
can we measure the performance of a classifier for a range of thresholds?
A ROC-Curve contains information about all possible thresholds. The ROC-
Curve plots true positive rates against false positive rates, where these terms are
defined as follows:
– True Positive Rate (TPR) is T P /(T P + F N): fraction of positive samples which
the classifier correctly identified. This is also known as Recall or Sensitivity.
Using the confusion matrix in Table 6.1, the TPR=12/(12 + 2) = 6/7.
– False Positive Rate (FPR) is FP/(FP + TN): the fraction of negative samples which the classifier incorrectly identified as positive. In the example confusion matrix, the FPR = 8/(8 + 2) = 4/5.
– Precision is T P /(T P + F P ): fraction of samples that were positive from the
group that the classifier predicted to be positive. From the example confusion
matrix, the precision is 12/(12 + 8) = 3/5.
Each point in a ROC curve is a (TPR, FPR) pair for a particular choice of the
threshold in the classifier. The straight dashed black line in Fig. 6.3 represents a
random model. The green line shows the ROC curve of the model—importantly, it should always lie above the dashed line. The perfect model would exhibit a TPR of unity for all FPRs, so that there is no area above the curve.
The advantage of this performance measure is that it is robust to class imbalance,
e.g. rare positive events. This is not true of classification accuracy which leads to
misinterpretation of the quality of the fit when the data is imbalanced. For example,
a constant model Ŷ = f (X) = 1 would be x% accurate if the data consists of x%
positive events. Additional related metrics can be derived. Common ones include
Area Under the Curve (AUC), which is the area under the green line in Fig. 6.3.
The F1-score is the harmonic mean of the precision and recall and is also
frequently used. The F1-score reaches its best value at unity and worst at zero and is
given by $F_1 = \frac{2\cdot\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}$. From the example above, $F_1 = \frac{2\times 3/5\times 6/7}{3/5 + 6/7} \approx 0.706$.
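For concreteness, the following snippet recomputes the metrics above from the same illustrative confusion matrix counts; equivalent functions are available in libraries such as scikit-learn.

TP, FN, FP, TN = 12, 2, 8, 2      # counts from the example confusion matrix

tpr = TP / (TP + FN)              # recall / sensitivity = 6/7
fpr = FP / (FP + TN)              # = 4/5
precision = TP / (TP + FP)        # = 3/5
f1 = 2 * precision * tpr / (precision + tpr)
accuracy = (TP + TN) / (TP + FN + FP + TN)
print(f"TPR={tpr:.3f}, FPR={fpr:.3f}, precision={precision:.3f}, F1={f1:.3f}, accuracy={accuracy:.3f}")
# F1 is approximately 0.706, matching the value computed in the text.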
Fig. 6.4 Time series cross-validation, also referred to as "walk-forward optimization," is used instead of standard cross-validation for cross-sectional data to preserve the ordering of observations in time series data. This experimental design avoids look-ahead bias in the fitted model, which occurs when one or more observations in the training set are from the future

5 Principal Component Analysis

The final section in this chapter approaches data modeling from quite a different perspective, with the goal being to reduce the dimension of multivariate time series
data. The approach is widely used in finance, especially when the dimensionality of
the data presents barriers to computational tractability or practical risk management
and trading challenges such as hedging exposure to market risk factors. For example,
it may be advantageous to monitor a few risk factors in a large portfolio rather than
each instrument. Moreover such factors should provide economic insight into the
behavior of the financial markets and be actionable from an investment management
perspective.
Formally, let $\{y_i\}_{i=1}^{N}$ be a set of $N$ observation vectors, each of dimension $n$. We assume that $n \le N$. Let $Y \in \mathbb{R}^{n\times N}$ be a matrix whose columns are $\{y_i\}_{i=1}^{N}$,
$$Y = \begin{bmatrix} | & & | \\ y_1 & \cdots & y_N \\ | & & | \end{bmatrix}.$$
The sample mean is
$$\bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_i = \frac{1}{N}Y\mathbf{1}_N,$$
and the demeaned data matrix is
$$Y_0 = Y - \bar{y}\mathbf{1}_N^T.$$
Projection
The projection of each observation onto a lower-dimensional space is
$$x_i = W^T y_i.$$
When the matrix $W^T$ represents the transformation that applies principal component analysis, we denote $W = P$, and the columns of the orthonormal matrix $P$, denoted $\{p_j\}_{j=1}^{n}$, are referred to as loading vectors. The transformed vectors $\{x_i\}_{i=1}^{N}$ are referred to as principal components or scores.
The first loading vector is defined as the unit vector with which the inner products of the observations have the greatest variance:
$$p_1 = \underset{w_1:\,w_1^T w_1 = 1}{\arg\max}\; w_1^T Y_0 Y_0^T w_1. \tag{6.50}$$
The solution to Eq. 6.50 is known to be the eigenvector of the sample covariance matrix $Y_0 Y_0^T$ corresponding to its largest eigenvalue.
Next, p2 is the unit vector which has the largest variance of inner products
between it and the observations after removing the orthogonal projections of the
observations onto p1 . It may be found by solving:
$$p_2 = \underset{w_2:\,w_2^T w_2 = 1}{\arg\max}\; w_2^T\left(Y_0 - p_1 p_1^T Y_0\right)\left(Y_0 - p_1 p_1^T Y_0\right)^T w_2. \tag{6.51}$$
where $\Lambda = X_0 X_0^T$ is a diagonal matrix whose diagonal elements $\{\lambda_i\}_{i=1}^{n}$ are sorted in descending order.
The transformation back to the observations is Y = PX. The fact that the
covariance matrix of X is diagonal means that PCA is a decorrelation transformation
and is often used to denoise data.
PCA is often used as a method for dimensionality reduction, the process of reducing
the number of variables in a model in order to avoid the curse of dimensionality.
PCA gives the first m principal components (m < n) by applying the truncated
transformation
$$X_m = P_m^T Y,$$
where each column of Xm ∈ Rm×N is a vector whose elements are the first m
principal components, and Pm is a matrix whose columns are the first m loading
vectors,
$$P_m = \begin{bmatrix} | & & | \\ p_1 & \cdots & p_m \\ | & & | \end{bmatrix} \in \mathbb{R}^{n\times m}.$$
The m leading loading vectors form an orthonormal basis which spans the m
dimensional subspace onto which the projections of the demeaned observations have
the minimum squared difference from the original demeaned observations.
In other words, $P_m$ compresses each demeaned vector of length $n$ into a vector of length $m$ (where $m \le n$) in a way that minimizes the total sum of squared reconstruction errors,
$$P_m = \underset{W \in \mathbb{R}^{n\times m}:\,W^T W = I_m}{\arg\min}\;\sum_{i=1}^{N}\left\|y_i^0 - WW^T y_i^0\right\|_2^2, \tag{6.52}$$
where $y_i^0 = y_i - \bar{y}$ denotes a demeaned observation.
The minimizer of Eq. 6.52 is not unique: W = Pm Q is also a solution, where
Q ∈ Rm×m is any orthogonal matrix, QT = Q−1 . Multiplying Pm from the right by
Q transforms the first m loading vectors into a different orthonormal basis for the
same subspace.
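A minimal numpy sketch of the procedure is given below, using illustrative simulated data: the loading vectors are obtained from the eigendecomposition of $Y_0Y_0^T$, the scores from the truncated projection, and the reconstruction from the rank-$m$ approximation. Here the scores are computed from the demeaned data $Y_0$.

import numpy as np

# Y: n x N matrix of observations stored column-wise, as in the text.
np.random.seed(2)
n, N, m = 10, 500, 3
Y = np.random.multivariate_normal(np.zeros(n), np.diag(np.linspace(1, 10, n)), size=N).T

ybar = Y.mean(axis=1, keepdims=True)
Y0 = Y - ybar                                   # demeaned data
lam, P = np.linalg.eigh(Y0 @ Y0.T)              # eigendecomposition of Y0 Y0^T
order = np.argsort(lam)[::-1]                   # sort eigenvalues in descending order
lam, P = lam[order], P[:, order]
Pm = P[:, :m]                                   # first m loading vectors
Xm = Pm.T @ Y0                                  # first m principal components (scores)
Y0_hat = Pm @ Xm                                # rank-m reconstruction of the demeaned data
err = np.sum((Y0 - Y0_hat) ** 2)
print("explained variance ratio:", lam[:m].sum() / lam.sum(), "reconstruction SSE:", err)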
6 Summary
This chapter has reviewed foundational material in time series analysis and econometrics. Such material is not intended to substitute for a more comprehensive and formal treatment of the methodology, but rather to provide enough background for Chap. 8, where we shall develop neural network analogues. We have covered the following objectives:
– Explain and analyze linear autoregressive models;
– Understand the classical approaches to identifying, fitting, and diagnosing
autoregressive models;
– Apply simple heteroscedastic regression techniques to time series data;
– Understand how exponential smoothing can be used to predict and filter time
series; and
– Project multivariate time series data onto lower dimensional spaces with principal
component analysis.
It is worth noting that in industrial applications the need to forecast more than
a few steps ahead often arises. For example, in algorithmic trading and electronic
market making, one needs to forecast far enough into the future, so as to make
the forecast economically realizable either through passive trading (skewing of the
price) or through aggressive placement of trading orders. This economic realization
of the trading signals takes time, whose actual duration is dependent on the
frequency of trading.
218 6 Sequence Modeling
We should also note that in practice linear regressions predicting the difference
between a future and current price, taking as inputs various moving averages, are
often used in preference to parametric models, such as GARCH. These linear
regressions are often cumbersome, taking as inputs hundreds or thousands of
variables.
7 Exercises
Exercise 6.1
Calculate the mean, variance, and autocorrelation function (acf) of the following
zero-mean AR(1) process:
yt = φ1 yt−1 + t ,
where $\phi_1 = 0.5$. Determine whether the process is stationary by computing the root of the characteristic equation $\Phi(z) = 0$.
Exercise 6.2
You have estimated the following ARMA(1,1) model for some time series data
where you are given the data at time t − 1, yt−1 = 3.4 and ût−1 = −1.3. Obtain the
forecasts for the series y for times t, t + 1, t + 2 using the estimated ARMA model.
If the actual values for the series are −0.032, 0.961, 0.203 for t, t + 1, t + 2,
calculate the out-of-sample Mean Squared Error (MSE) and Mean Absolute Error
(MAE).
Exercise 6.3
Derive the mean, variance, and autocorrelation function (ACF) of a zero mean
MA(1) process.
Exercise 6.4
Consider the following log-GARCH(1,1) model with a constant for the mean
equation
$$y_t = \mu + u_t, \quad u_t \sim N(0, \sigma_t^2),$$
$$\ln(\sigma_t^2) = \alpha_0 + \alpha_1 u_{t-1}^2 + \beta_1\ln\sigma_{t-1}^2.$$
Exercise 6.5
Consider a simple moving average (SMA) and an exponential moving average (EMA) defined by the recursion
$$E_t = \alpha X_t + (1-\alpha)E_{t-1},$$
where N is the time horizon of the SMA and the coefficient α represents the degree of weighting decrease of the EMA, a constant smoothing factor between 0 and 1. A higher α discounts older observations faster.
a. Suppose that, when computing the EMA, we stop after k terms, instead of going all the way back to the initial value. What fraction of the total weight is obtained?
b. Suppose that we require 99.9% of the weight. What k do we require?
c. Show that, by picking α = 2/(N + 1), one achieves the same center of mass in
the EMA as in the SMA with the time horizon N .
d. Suppose that we have set α = 2/(N + 1). Show that the first N points in an EMA
represent about 87.48% of the total weight.
Exercise 6.6
Suppose that, for the sequence of random variables $\{y_t\}_{t=0}^{\infty}$, the following model holds:
Appendix
Hypothesis Tests
Table 6.3 A short summary of some of the most useful diagnostic tests for time series modeling in finance

Chi-squared test: Used to determine whether the confusion matrix of a classifier is statistically significant, or merely white noise.
t-test: Used to determine whether the outputs of two separate regression models are statistically different on i.i.d. data.
Diebold–Mariano test: Used to determine whether the outputs of two separate time series models are statistically different.
ARCH test: Engle's ARCH test is constructed based on the property that if the residuals are heteroscedastic, the squared residuals are autocorrelated. The Ljung–Box test is then applied to the squared residuals.
Portmanteau test: A general test for whether the error in a time series model is autocorrelated. Example tests include the Ljung–Box and the Box–Pierce tests.
Python Notebooks
Please see the code folder of Chap. 6 for example implementations of ARIMA models applied to time series prediction. An example applying PCA to decompose stock prices is also provided in this folder. Further details of these notebooks are included in the README.md file for Chap. 6.
Chapter 7
Probabilistic Sequence Modeling
This chapter presents a powerful class of probabilistic models for financial data.
Many of these models overcome some of the severe stationarity limitations of the
frequentist models in the previous chapters. The fitting procedure demonstrated is
also different—the use of Kalman filtering algorithms for state-space models rather
than maximum likelihood estimation or Bayesian inference. Simple examples of
hidden Markov models and particle filters in finance and various algorithms are
presented.
1 Introduction
So far we have seen how sequences can be modeled using autoregressive processes,
moving averages, GARCH, and similar methods. There exists another school of
thought, which gave rise to hidden Markov models, Baum–Welch and Viterbi
algorithms, Kalman and particle filters.
In this school of thought, one assumes the existence of a certain latent process
(say X), which evolves over time (so we may write Xt ). This unobservable, latent
process drives another, observable process (say Yt ), which we may observe either at
all times or at some subset of times.
The evolution of the latent process Xt , as well as the dependence of the
observable process Yt on Xt , may be driven by random factors. We therefore talk
about a stochastic or probabilistic model. We also refer to such a model as a state-
space model. The state-space model consists in a description of the evolution of the
latent state over time and the dependence of the observables on the latent state.
We have already seen probabilistic methods presented in Chaps. 2 and 3. These
methods primarily assume that the data is i.i.d. On the other hand, the time series
methods presented in the previous chapter are designed for time series data but are
not probabilistic. This chapter shall build on these earlier chapters by considering a
powerful class of models for financial data. Many of these models overcome some of
the severe stationarity limitations of the frequentist models in the previous chapters.
The fitting procedure is also different—we will see the use of Kalman filtering
algorithms for state-space models rather than maximum likelihood estimation or
Bayesian inference.
Chapter Objectives
By the end of this chapter, the reader should expect to accomplish the following:
– Formulate hidden Markov models (HMMs) for probabilistic modeling over
hidden states;
– Gain familiarity with the Baum–Welch algorithm for fitting HMMs to time series data;
– Use the Viterbi algorithm to find the most likely path;
– Gain familiarity with state-space models and the application of Kalman filters to
fit them; and
– Apply particle filters to financial time series.
1 Dynamic Bayesian network models are graphical models used to model dynamic processes through hidden state evolution.
2 With the exception of heteroscedastic modeling in Chap. 6.
2 Hidden Markov Modeling
Fig. 7.1 This figure shows the probabilistic graph representing the conditional dependence
relations between the observed and the hidden variables in the HMM
$$p(s, y) = p(s_1)\,p(y_1 \mid s_1)\prod_{t=2}^{T}p(s_t \mid s_{t-1})\,p(y_t \mid s_t). \tag{7.1}$$
Figure 7.1 shows the Bayesian network representing the conditional dependence relations between the observed and the hidden variables in the HMM. The conditional dependence relationships define the edges of a graph between parent nodes, $S_t$, and child nodes, $Y_t$.
and the transition probability matrix for the Markov process $\{S_t\}$ is given by
$$A = \begin{pmatrix} 0.9 & 0.1 \\ 0.1 & 0.9 \end{pmatrix}, \qquad [A]_{ij} := P(S_t = s_i \mid S_{t-1} = s_j). \tag{7.3}$$
Given the observed sequence $\{-1, 1, 1\}$ (i.e., $T = 3$), we can compute the probability of a realization of the hidden state sequence $\{1, 0, 0\}$ using Eq. 7.1. Assuming that $P(s_1 = 0) = P(s_1 = 1) = \frac{1}{2}$, the computation is
with the convention that BT (s) = 1. For all t ∈ {1, . . . , T } and for all r, s ∈
{1, . . . , K} we have
Given an observation sequence $y = \{y_1, \dots, y_T\}$, what is the most likely sequence of hidden states $x = \{x_1, x_2, \dots, x_T\}$?
To answer this question, we need to introduce a few more constructs. First, the
set of initial probabilities must be given:
π = {π1 , . . . , πK },
A dealer has two coins, a fair coin, with $P(\text{Heads}) = \frac{1}{2}$, and a loaded coin, with $P(\text{Heads}) = \frac{4}{5}$. The dealer starts with the fair coin with probability $\frac{3}{5}$. The dealer then tosses the coin several times. After each toss, there is a $\frac{2}{5}$ probability of a switch to the other coin. The observed sequence is Heads, Tails, Heads, Tails, Heads, Heads, Heads, Tails, Heads.
In this case, the state space and observation space are, respectively, $\{\text{fair}, \text{loaded}\}$ and $\{\text{Heads}, \text{Tails}\}$, with transition probabilities
$$A = \begin{pmatrix} 0.6 & 0.4 \\ 0.4 & 0.6 \end{pmatrix},$$
One way to answer this question is by applying the Viterbi algorithm as detailed
in the notebook Viterbi.ipynb. We note that the most likely state sequence s,
which produces the observation sequence y = {y1 , . . . , yT }, satisfies the recurrence
relations
$$V_{1,k} = P(y_1 \mid s_1 = s_k)\cdot\pi_k,$$
$$V_{t,k} = \max_{1\le i\le K}\left(P(y_t \mid s_t = s_k)\cdot A_{ik}\cdot V_{t-1,i}\right),$$
where $V_{t,k}$ is the probability of the most probable state sequence $\{s_1, \dots, s_t\}$ such that $s_t = s_k$,
$$V_{t,k} = P(s_1, \dots, s_t, y_1, \dots, y_t \mid s_t = s_k).$$
The actual Viterbi path can be obtained by, at each step, keeping track of which state index $i$ was used in the second equation. Let $\xi(k, t)$ be the function that returns the value of $i$ that was used to compute $V_{t,k}$ if $t > 1$, or $k$ if $t = 1$. Then the most likely final state is $s_T = \arg\max_k V_{T,k}$, and the earlier states are recovered by backtracking, $s_{t-1} = \xi(s_t, t)$ for $t = T, \dots, 2$.
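A hedged sketch of the Viterbi recursion for the two-coin example is given below (the book's Viterbi.ipynb notebook contains the full treatment); the array layout and variable names are illustrative choices.

import numpy as np

states = ["fair", "loaded"]
pi = np.array([0.6, 0.4])                       # initial distribution
A = np.array([[0.6, 0.4],                       # A[i, j] = P(next state j | current state i)
              [0.4, 0.6]])
B = np.array([[0.5, 0.5],                       # emission probabilities: rows = states,
              [0.8, 0.2]])                      # columns = (Heads, Tails)
obs = [0, 1, 0, 1, 0, 0, 0, 1, 0]               # H, T, H, T, H, H, H, T, H

T, K = len(obs), len(states)
V = np.zeros((T, K))
back = np.zeros((T, K), dtype=int)
V[0] = pi * B[:, obs[0]]                        # V_{1,k} = P(y_1 | s_k) * pi_k
for t in range(1, T):
    for k in range(K):
        cand = V[t - 1] * A[:, k] * B[k, obs[t]]
        back[t, k] = np.argmax(cand)            # xi(k, t): best predecessor state
        V[t, k] = cand.max()

path = [int(np.argmax(V[-1]))]                  # backtrack the most likely path
for t in range(T - 1, 0, -1):
    path.append(back[t, path[-1]])
print([states[s] for s in reversed(path)])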
Financial data is typically noisy and we need techniques which can extract the
signal from the noise. There are many techniques for reducing the noise. Filtering
is a general term for extracting information from a noisy signal. Smoothing is a
particular kind of filtering in which low-frequency components are passed and high-
frequency components are attenuated. Filtering and smoothing produce distributions
of states at each time step. Whereas maximum likelihood estimation chooses the
state with the highest probability at the “best” estimate at each time step, this may
not lead to the best path in HMMs. We have seen that the Baum–Welch algorithm
can be deployed to find the optimal state trajectory, not just the optimal sequence of
“best” states.
HMMs belong to the same class of models as linear Gaussian state-space models. The latter are known as "Kalman filters" and are continuous latent state analogues of HMMs. Note that we will see examples of continuous state-space models, which are not necessarily Gaussian, in our exposition of RNNs in Chap. 8.
The state transition probability $p(s_t \mid s_{t-1})$ can be decomposed into deterministic and noise components:
$$s_t = F_t(s_{t-1}) + \epsilon_t, \tag{7.7}$$
for some deterministic function $F_t$, where $\epsilon_t$ is zero-mean i.i.d. noise. Similarly, the emission probability $p(y_t \mid s_t)$ can be decomposed as
$$y_t = G_t(s_t) + \xi_t, \tag{7.8}$$
with zero-mean i.i.d. observation noise $\xi_t$. If the functions $F_t$ and $G_t$ are linear and time independent, then we have
$$s_t = A s_{t-1} + \epsilon_t, \tag{7.9}$$
$$y_t = C s_t + \xi_t, \tag{7.10}$$
where A is the state transition matrix and C is the observation matrix. For
completeness, we contrast the Kalman filter with a univariate RNN, as described
in Chap. 8. When the observations are predictors, xt , and the hidden variables are st
we have
where we have ignored the bias terms for simplicity. Hence, the RNN state equation
differs from the Kalman filter in that (i) it is a non-linear function of both the
previous state and the observation; and (ii) it is noise-free.
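As a sketch of Eqs. 7.9–7.10 in the scalar case, the following minimal Kalman filter assumes known transition and observation coefficients and noise variances; the simulated latent AR(1) state and all parameter values are illustrative.

import numpy as np

def kalman_filter(y, A=0.95, C=1.0, Q=0.1, R=0.5, s0=0.0, P0=1.0):
    # Scalar Kalman filter: predict with the state equation, then update with each observation.
    s_hat, P = s0, P0
    filtered = []
    for yt in y:
        s_pred = A * s_hat                         # predict the state
        P_pred = A * P * A + Q                     # predict its variance
        K = P_pred * C / (C * P_pred * C + R)      # Kalman gain
        s_hat = s_pred + K * (yt - C * s_pred)     # update with the new observation
        P = (1.0 - K * C) * P_pred
        filtered.append(s_hat)
    return np.array(filtered)

# Example: recover a latent AR(1) state observed with noise.
rng = np.random.default_rng(3)
s = np.zeros(200)
for t in range(1, 200):
    s[t] = 0.95 * s[t - 1] + rng.normal(scale=np.sqrt(0.1))
y = s + rng.normal(scale=np.sqrt(0.5), size=200)
print(np.corrcoef(kalman_filter(y), s)[0, 1])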
3 Particle Filtering
Given non-normalized importance weights $\omega_t^{(i)}$, the normalized weights before resampling are
$$\breve{\lambda}_t^{(i)} := \frac{\omega_t^{(i)}}{\sum_{k=1}^{M}\omega_t^{(k)}},$$
and the filtering distribution is approximated by the weighted empirical measure
$$\sum_{k=1}^{M}\breve{\lambda}_t^{(k)}\,\delta(x_t - x_{t\mid t-1}^{(k)}),$$
where $\delta(\cdot)$ denotes the Dirac delta generalized function, and we set the normalized weights after resampling, $\lambda_t^{(i)}$, appropriately (for most common resampling algorithms this means $\lambda_t^{(i)} := \frac{1}{M}$).
Informally, SIR shares some of the characteristics of genetic algorithms; based on the likelihoods $p(y_t \mid \hat{x}_{t\mid t-1}^{(i)})$, we increase the weights of the more "successful" particles, allowing them to "thrive" at the resampling step.
The resampling step was introduced to avoid the degeneration of the particles,
with all the weight concentrating on a single point. The most common resampling
scheme is the so-called multinomial resampling which we now review.
a. Compute the cumulative sums of the normalized weights obtained before resampling, $\breve{\lambda}_t^{(1)}, \dots, \breve{\lambda}_t^{(M)}$:
$$\breve{\Lambda}_t^{(i)} = \sum_{k=1}^{i}\breve{\lambda}_t^{(k)},$$
so that, by construction, $\breve{\Lambda}_t^{(M)} = 1$.
b. Generate M random samples from U(0, 1), u1 , u2 , . . . , uM .
c. For each $i = 1, \dots, M$, choose the particle $\hat{x}_{t\mid t}^{(i)} = \hat{x}_{t\mid t-1}^{(j)}$ with $j \in \{1, 2, \dots, M-1\}$ such that $u_i \in \left(\breve{\Lambda}_t^{(j)}, \breve{\Lambda}_t^{(j+1)}\right]$.
Thus we end up with $M$ new particles (children), $x_{t\mid t}^{(1)}, \dots, x_{t\mid t}^{(M)}$, sampled from the existing set $x_{t\mid t-1}^{(1)}, \dots, x_{t\mid t-1}^{(M)}$, so that some of the existing particles may disappear, while others may appear multiple times. For each $i = 1, \dots, M$, the number of times $x_{t\mid t-1}^{(i)}$ appears in the resampled set of particles is known as the particle's replication factor, $N_t^{(i)}$.
We set the normalized weights after resampling: $\lambda_t^{(i)} := \frac{1}{M}$. We could view this algorithm as the sampling of the replication factors $N_t^{(1)}, \dots, N_t^{(M)}$ from the multinomial distribution with probabilities $\breve{\lambda}_t^{(1)}, \dots, \breve{\lambda}_t^{(M)}$, respectively. Hence the name of the method.
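A minimal numpy sketch of multinomial resampling following steps (a)–(c) is shown below; the particle values and weights are illustrative.

import numpy as np

def multinomial_resample(particles, weights, rng=np.random.default_rng()):
    # (a) cumulative sums of the normalized weights; the last entry is 1 by construction
    cum = np.cumsum(weights)
    # (b) M samples from U(0, 1)
    u = rng.uniform(size=len(particles))
    # (c) pick the index j such that u falls in (cum[j-1], cum[j]]
    idx = np.minimum(np.searchsorted(cum, u), len(particles) - 1)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = np.array([-1.2, 0.3, 0.8, 2.1])
weights = np.array([0.1, 0.2, 0.6, 0.1])
children, new_weights = multinomial_resample(particles, weights)
print(children, new_weights)   # high-weight particles tend to be replicated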
Stochastic Volatility (SV) models have been studied extensively in the literature,
often as applications of particle filtering and Markov chain Monte Carlo (MCMC).
Their broad appeal in finance is their ability to capture the “leverage effect”—the
observed tendency of an asset’s volatility to be negatively correlated with the asset’s
returns (Black 1976).
In particular, Pitt, Malik, and Doucet apply the particle filter to the stochastic
volatility with leverage and jumps (SVLJ) (Malik and Pitt 2009, 2011a,b; Pitt et al.
2014). The model has the general form of Taylor (1982) with two modifications. For
t ∈ N ∪ {0}, let yt denote the log-return on an asset and xt denote the log-variance
of that return. Then
$$y_t = \epsilon_t\,e^{x_t/2} + J_t\zeta_t, \tag{7.13}$$
$$x_{t+1} = \mu(1-\phi) + \phi x_t + \sigma_v\eta_t, \tag{7.14}$$
where $\epsilon_t$ and $\eta_t$ are standard Gaussian noises. The first modification is that the correlation $\rho$ between $\epsilon_t$ and $\eta_t$ is the leverage parameter. In general, $\rho < 0$, due to the leverage effect.
The second change is the introduction of jumps. $J_t \in \{0, 1\}$ is a Bernoulli counter with intensity $p$ (thus $p$ is the jump intensity parameter), and $\zeta_t \sim N(0, \sigma_J^2)$ determines the jump size (thus $\sigma_J$ is the jump volatility parameter).
We obtain a stochastic volatility with leverage (SVL) model, but no jumps, if we delete the $J_t\zeta_t$ term or, equivalently, set $p$ to zero. Taylor's original model is a special case of SVLJ with $p = 0$, $\rho = 0$.
This, then, leads to the following adaptation of SIR, developed by Doucet, Malik, and Pitt, for this special case with nonadditive, correlated noises. The initial distribution of $x_0$ is taken to be $N\!\left(0, \sigma_v^2/(1-\phi^2)\right)$.
a. Initialization step: At time $t = 0$, draw $M$ i.i.d. particles from the initial distribution $N(0, \sigma_v^2/(1-\phi^2))$. Also, initialize $M$ normalized (to 1) weights to an identical value of $\frac{1}{M}$. For $i = 1, 2, \dots, M$, the samples will be denoted $\hat{x}_{0\mid 0}^{(i)}$ and the normalized weights $\lambda_0^{(i)}$.
b. Recursive step: At time $t \in \mathbb{N}$, let $(\hat{x}_{t-1\mid t-1}^{(i)})_{i=1,\dots,M}$ be the particles generated at time $t-1$.
i. Importance sampling:
– First, for $i = 1, \dots, M$, sample $\hat{\epsilon}_{t-1}^{(i)}$ from $p(\epsilon_{t-1} \mid x_{t-1} = \hat{x}_{t-1\mid t-1}^{(i)}, y_{t-1})$. (If no $y_{t-1}$ is available, as at $t = 1$, sample from $p(\epsilon_{t-1} \mid x_{t-1} = \hat{x}_{t-1\mid t-1}^{(i)})$.)
– For $i = 1, \dots, M$, sample $\hat{x}_{t\mid t-1}^{(i)}$ from $p(x_t \mid x_{t-1} = \hat{x}_{t-1\mid t-1}^{(i)}, y_{t-1}, \hat{\epsilon}_{t-1}^{(i)})$.
– For $i = 1, \dots, M$, compute the non-normalized weights
$$\omega_t^{(i)} = (1-p)\left(2\pi e^{\hat{x}_{t\mid t-1}^{(i)}}\right)^{-1/2}\exp\!\left(-\frac{y_t^2}{2e^{\hat{x}_{t\mid t-1}^{(i)}}}\right) + p\left(2\pi\big(e^{\hat{x}_{t\mid t-1}^{(i)}} + \sigma_J^2\big)\right)^{-1/2}\exp\!\left(-\frac{y_t^2}{2e^{\hat{x}_{t\mid t-1}^{(i)}} + 2\sigma_J^2}\right),$$
and the normalized weights before resampling
$$\breve{\lambda}_t^{(i)} := \frac{\omega_t^{(i)}}{\sum_{k=1}^{M}\omega_t^{(k)}}.$$
ii. Resampling: resample $M$ particles from the empirical distribution
$$\sum_{k=1}^{M}\breve{\lambda}_t^{(k)}\,\delta(x_t - \hat{x}_{t\mid t-1}^{(k)}),$$
where $\delta(\cdot)$ denotes the Dirac delta generalized function, and set the normalized weights after resampling, $\lambda_t^{(i)}$, according to the resampling algorithm.
4 Point Calibration of Stochastic Filters

We have seen in the example of the stochastic volatility with leverage and jumps (SVLJ) model that the state-space model may be parameterized by a parameter vector, $\theta \in \mathbb{R}^{d_\theta}$, $d_\theta \in \mathbb{N}$. In that particular case,
$$\theta = \begin{pmatrix}\mu \\ \phi \\ \sigma_v^2 \\ \rho \\ \sigma_J^2 \\ p\end{pmatrix}.$$
We may not know the true value of this parameter. How do we estimate it? In
other words, how do we calibrate the model, given a time series of either historical
or generated observations, y1 , . . . , yT , T ∈ N.
The frequentist approach relies on the (joint) probability density function of
the observations, which depends on the parameters, p(y1 , y2 , . . . , yT | θ). We can
regard this as a function of θ with y1 , . . . , yT fixed, p(y1 , . . . , yT | θ ) =: L(θ )—
the likelihood function.
This function is sometimes referred to as marginal likelihood, since the hidden
states, x1 , . . . , xT , are marginalized out. We seek a maximum likelihood estimator
(MLE), θ̂ ML , the value of θ that maximizes the likelihood function.
Each evaluation of the objective function, L(θ ), constitutes a run of the stochastic
filter over the observations y1 , . . . , yT . By the chain rule (i), and since we use a
Markov chain (ii),
$$p(y_1, \dots, y_T) \overset{(i)}{=} \prod_{t=1}^{T}p(y_t \mid y_0, \dots, y_{t-1}) \overset{(ii)}{=} \prod_{t=1}^{T}\int p(y_t \mid x_t)\,p(x_t \mid y_0, \dots, y_{t-1})\,dx_t.$$
Note that, for ease of notation, we have omitted the dependence of all the probability densities on $\theta$, e.g., we write $p(y_1, \dots, y_T)$ instead of $p(y_1, \dots, y_T; \theta)$.
For the particle filter, we can estimate the log-likelihood function from the non-
normalized weights:
$$p(y_1, \dots, y_T) = \prod_{t=1}^{T}\int p(y_t \mid x_t)\,p(x_t \mid y_0, \dots, y_{t-1})\,dx_t \approx \prod_{t=1}^{T}\left[\frac{1}{M}\sum_{k=1}^{M}\omega_t^{(k)}\right],$$
whence
$$\ln(L(\theta)) = \ln\prod_{t=1}^{T}\left[\frac{1}{M}\sum_{k=1}^{M}\omega_t^{(k)}\right] = \sum_{t=1}^{T}\ln\left[\frac{1}{M}\sum_{k=1}^{M}\omega_t^{(k)}\right]. \tag{7.16}$$
This was first proposed by Kitagawa (1993, 1996) for the purposes of approxi-
mating θ̂ ML .
In most practical applications one needs to resort to numerical methods, perhaps
quasi-Newton methods, such as Broyden–Fletcher–Goldfarb–Shanno (BFGS) (Gill
et al. 1982), to find θ̂ ML .
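As a hedged sketch of this approach, the snippet below estimates ln L(θ) with a basic bootstrap-style particle filter for a simplified stochastic volatility model without leverage or jumps, accumulating Eq. 7.16 from the non-normalized weights, and then hands the negative log-likelihood to a quasi-Newton optimizer. The parameter bounds and starting values are illustrative; fixing the random seed inside the objective (common random numbers) reduces, but does not remove, the discontinuity issues discussed below.

import numpy as np
from scipy.optimize import minimize

def sv_log_likelihood(theta, y, M=500, seed=0):
    # Particle-filter estimate of ln L(theta) (Eq. 7.16) for a simplified SV model
    # (no leverage or jumps); theta = (mu, phi, sigma_v).
    mu, phi, sigma_v = theta
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sigma_v / np.sqrt(1.0 - phi ** 2), size=M)  # stationary initial particles
    loglik = 0.0
    for yt in y:
        # propagate particles through x_t = mu(1 - phi) + phi x_{t-1} + sigma_v eta_t
        x = mu * (1.0 - phi) + phi * x + sigma_v * rng.standard_normal(M)
        # non-normalized weights from the observation density y_t | x_t ~ N(0, exp(x_t))
        w = np.exp(-0.5 * yt ** 2 * np.exp(-x)) / np.sqrt(2.0 * np.pi * np.exp(x))
        loglik += np.log(w.mean() + 1e-300)            # accumulate Eq. 7.16
        x = x[rng.choice(M, size=M, p=w / w.sum())]    # multinomial resampling
    return loglik

# Calibration by maximizing the simulated likelihood with a quasi-Newton optimizer:
# y = ...  # observed log-returns
# res = minimize(lambda th: -sv_log_likelihood(th, y), x0=np.array([-9.0, 0.95, 0.2]),
#                method="L-BFGS-B", bounds=[(-20.0, 0.0), (0.5, 0.999), (0.01, 2.0)])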
Pitt et al. (2014) point out the practical difficulties which result when using the
above as an objective function in an optimizer. In the resampling (or selection) step
of the particle filter, we are sampling from a discontinuous empirical distribution
function. Therefore, ln(L(θ)) will not be continuous as a function of θ. To remedy
this, they rely on an alternative, continuous, resampling procedure. A quasi-Newton
method is then used to find θ̂ ML for the parameters θ = (μ, φ, σv2 , ρ, p, σJ2 ) of
the SVLJ model.
We note in passing that Kalman filters can also be calibrated using a similar
maximum likelihood approach.
5 Bayesian Calibration of Stochastic Filters

Let us briefly discuss how filtering methods relate to Markov chain Monte Carlo (MCMC) methods—a vast subject in its own right; therefore, our discussion will be cursory at best. The technique takes its origin from Metropolis et al. (1953).
Following Kim et al. (1998) and Meyer and Yu (2000); Yu (2005), we demon-
strate how MCMC techniques can be used to estimate the parameters of the SVL
model. They calibrate the parameters to the time series of observations of daily
mean-adjusted log-returns, y1 , . . . , yT to obtain the joint prior density
$$p(\theta, x_0, \dots, x_T) = p(\theta)\,p(x_0 \mid \theta)\prod_{t=1}^{T}p(x_t \mid x_{t-1}, \theta)$$
by successive conditioning. Here θ := (μ, φ, σv2 , ρ) is, as before, the vector of the
model parameters. We assume prior independence of the parameters and choose the
same priors (as in Kim et al. (1998)) for μ, φ, and σv2 , and a uniform prior for ρ. The
observation model and the conditional independence assumption give the likelihood
$$p(y_1, \dots, y_T \mid \theta, x_0, \dots, x_T) = \prod_{t=1}^{T}p(y_t \mid x_t),$$
and the joint posterior distribution of the unobservables (the parameters θ and the
hidden states x0 , . . . , xT ; in the Bayesian perspective these are treated identically
and estimated in a similar manner) follows from Bayes’ theorem; for the SVL
model, this posterior satisfies
where $p(\mu)$, $p(\phi)$, $p(\sigma_v^2)$, and $p(\rho)$ are the appropriately chosen priors.
Meyer and Yu use the software package BUGS (Spiegelhalter et al. 1996; Lunn et al. 2000) to represent the resulting Bayesian model as a directed acyclic graph
(DAG), where the nodes are either constants (denoted by rectangles), stochastic
nodes (variables that are given a distribution, denoted by ellipses), or deterministic
nodes (logical functions of other nodes); the arrows either indicate stochastic
dependence (solid arrows) or logical functions (hollow arrows). This graph helps
visualize the conditional (in)dependence assumptions and is used by BUGS to
construct full univariate conditional posterior distributions for all unobservables. It
then uses Markov chain Monte Carlo algorithms to sample from these distributions.
The algorithm based on the original work (Metropolis et al. 1953) is now known
as the Metropolis algorithm. It has been generalized by Hastings (1930–2016) to
obtain the Metropolis–Hastings algorithm (Hastings 1970) and further by Green to
obtain what is known as the Metropolis–Hastings–Green algorithm (Green 1995).
A popular algorithm based on a special case of the Metropolis–Hastings algorithm,
known as the Gibbs sampler, was developed by Geman and Geman (1984) and,
independently, Tanner and Wong (1987).4 It was further popularized by Gelfand and
Smith (1990). Gibbs sampling and related algorithms (Gilks and Wild 1992; Ritter
and Tanner 1992) are used by BUGS to sample from the univariate conditional
posterior distributions for all unobservables. As a result we perform Bayesian
estimation—obtain estimates of the distributions of the parameters μ, φ, σv2 , ρ—
rather than frequentist estimation, where a single value of the parameters vector,
which maximizes the likelihood, θ̂ ML , is produced. Stochastic filtering, sometimes
in combination with MCMC, can be used for both frequentist and Bayesian
parameter estimation (Chen 2003). Filtering methods that update estimates of the
parameters online, while processing observations in real-time, are referred to as
adaptive filtering (see Sayed (2008); Vega and Rey (2013); Crisan and Míguez
(2013); Naesseth et al. (2015) and references therein).
We note that a Gibbs sampler (or variants thereof) is a highly nontrivial piece
of software. In addition to the now classical BUGS/WinBUGS there exist powerful
Gibbs samplers accessible via modern libraries, such as Stan, Edward, and PyMC3.
6 Summary
This chapter extends Chap. 2 by presenting probabilistic methods for time series
data. The key modeling assumption is the existence of a certain latent process
Xt , which evolves over time. This unobservable, latent process drives another,
observable process. Such an approach overcomes limitations of stationarity imposed
on the methods in the previous chapter. The reader should verify that they have
achieved the primary learning objectives of this chapter:
– Formulate hidden Markov models (HMMs) for probabilistic modeling over
hidden states;
– Gain familiarity with the Baum–Welch algorithm for fitting HMMs to time series
data;
– Use the Viterbi algorithm to find the most likely path;
– Gain familiarity with state-space models and the application of Kalman filters to
fit them; and
– Apply particle filters to financial time series.
7 Exercises
where ηt ∼ N(0, σ 2 ) and includes as special cases all AR(p) and MA(q) models.
Such models are often fitted to financial time series. Suppose that we would like to
filter this time series using a Kalman filter. Write down a suitable process and the
observation models.
Exercise 7.2: The Ornstein–Uhlenbeck Process
Consider the one-dimensional Ornstein–Uhlenbeck (OU) process, the stationary
Gauss–Markov process given by the SDE
dXt = θ (μ − Xt ) dt + σ dWt ,
We shall regard the log-variance xt as the hidden states and the log-returns yt as
observations. How can we use the particle filter to estimate xt on the basis of the
observations yt ?
a. Show that, in the absence of jumps,
#
xt = μ(1 − φ) + φxt−1 + σv ρyt−1 e−xt−1 /2 + σv 1 − ρ 2 ξt−1
i.i.d.
for some ξt ∼ N(0, 1).
b. Show that
where
yt exp(xt /2)
μt | Jt =1 =
exp(xt ) + σJ2
and
σJ2
σ2t | Jt =1 = .
exp(xt ) + σJ2
c. Explain how you could implement random sampling from the probability
distribution given by the density p(t | xt , yt ).
d. Write down the probability density p(xt | xt−1 , yt−1 , t−1 ).
e. Explain how you could sample from this distribution.
f. Show that the observation density is given by
) *−1/2 ) *
(i) (i)
(i) x̂t | t−1 x̂t | t−1
p(yt | x̂t | t−1 , p, σJ2 ) = (1 − p) 2π e exp −yt /(2e
2
) +
) *−1/2 ) *
(i) (i)
x̂t | t−1 x̂t | t−1
p 2π(e + σJ )
2
exp −yt /(2e
2
+ 2σJ ) .
2
Appendix
Python Notebooks
The notebooks provided in the accompanying source code repository are designed
to gain familiarity with how to implement the Viterbi algorithm and particle filtering
for stochastic volatility model calibration. Further details of the notebooks are
included in the README.md file.
References
Black, F. (1976). Studies of stock price volatility changes. In Proceedings of the Business and
Economic Statistics Section.
Chen, Z. (2003). Bayesian filtering: From Kalman filters to particle filters, and beyond. Statis-
tics, 182(1), 1–69.
Crisan, D., & Míguez, J. (2013). Nested particle filters for online parameter estimation in discrete-
time state-space Markov models. ArXiv:1308.1883.
Gelfand, A. E., & Smith, A. F. M. (1990, June). Sampling-based approaches to calculating marginal
densities. Journal of the American Statistical Association, 85(410), 398–409.
Geman, S. J., & Geman, D. (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian
restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6,
721–741.
Gilks, W. R., & Wild, P. P. (1992). Adaptive rejection sampling for Gibbs sampling, Vol. 41, pp.
337–348.
Gill, P. E., Murray, W., & Wright, M. H. (1982). Practical optimization. Emerald Group Publishing
Limited.
Gordon, N. J., Salmond, D. J., & Smith, A. F. M. (1993). Novel approach to nonlinear/non-
Gaussian Bayesian state estimation. In IEE Proceedings F (Radar and Signal Processing).
Green, P. J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model
determination. Biometrika, 82(4), 711–32.
Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applica-
tions. Biometrika, 57(1), 97–109.
Kim, S., Shephard, N., & Chib, S. (1998, July). Stochastic volatility: Likelihood inference and
comparison with ARCH models. The Review of Economic Studies, 65(3), 361–393.
Kitagawa, G. (1993). A Monte Carlo filtering and smoothing method for non-Gaussian nonlinear
state space models. In Proceedings of the 2nd U.S.-Japan Joint Seminar on Statistical Time
Series Analysis (pp. 110–131).
Kitagawa, G. (1996). Monte Carlo filter and smoother for non-Gaussian nonlinear state space
models. Journal of Computational and Graphical Statistics, 5(1), 1–25.
Lunn, D. J., Thomas, A., Best, N. G., & Spiegelhalter, D. (2000). WinBUGS – a Bayesian
modelling framework: Concepts, structure and extensibility. Statistics and Computing, 10, 325–
337.
Malik, S., & Pitt, M. K. (2009, April). Modelling stochastic volatility with leverage and jumps:
A simulated maximum likelihood approach via particle filtering. Warwick Economic Research
Papers 897, The University of Warwick, Department of Economics, Coventry CV4 7AL.
Malik, S., & Pitt, M. K. (2011a, February). Modelling stochastic volatility with leverage and
jumps: A simulated maximum likelihood approach via particle filtering. document de travail
318, Banque de France Eurosystème.
Malik, S., & Pitt, M. K. (2011b). Particle filters for continuous likelihood evaluation and
maximisation. Journal of Econometrics, 165, 190–209.
Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., & Teller, E. (1953). Equation
of state calculations by fast computing machines. Journal of Chemical Physics, 21.
Meyer, R., & Yu, J. (2000). BUGS for a Bayesian analysis of stochastic volatility models.
Econometrics Journal, 3, 198–215.
Naesseth, C. A., Lindsten, F., & Schön, T. B. (2015). Nested sequential Monte Carlo methods. In
Proceedings of the 32nd International Conference on Machine Learning.
Pitt, M. K., Malik, S., & Doucet, A. (2014). Simulated likelihood inference for stochastic volatility
models using continuous particle filtering. Annals of the Institute of Statistical Mathematics, 66,
527–552.
Ritter, C., & Tanner, M. A. (1992). Facilitating the Gibbs sampler: The Gibbs stopper and the
Griddy-Gibbs sampler. Journal of the American Statistical Association, 87(419), 861–868.
Sayed, A. H. (2008). Adaptive filters. Wiley-Interscience.
Spiegelhalter, D., Thomas, A., Best, N. G., & Gilks, W. R. (1996, August). BUGS 0.5: Bayesian
inference using Gibbs sampling manual (version ii). Robinson Way, Cambridge CB2 2SR:
MRC Biostatistics Unit, Institute of Public Health.
Tanner, M. A., & Wong, W. H. (1987, June). The calculation of posterior distributions by data
augmentation. Journal of the American Statistical Association, 82(398), 528–540.
Taylor, S. J. (1982). Time series analysis: theory and practice. Chapter Financial returns modelled
by the product of two stochastic processes, a study of daily sugar prices, pp. 203–226. North-
Holland.
Vega, L. R., & H. Rey (2013). A rapid introduction to adaptive filtering. Springer Briefs in
Electrical and Computer Engineering. Springer.
Yu, J. (2005). On leverage in a stochastic volatility model. Journal of Econometrics, 127, 165–178.
Chapter 8
Advanced Neural Networks
This chapter presents various neural network models for financial time series
analysis, providing examples of how they relate to well-known techniques in
financial econometrics. Recurrent neural networks (RNNs) are presented as non-
linear time series models and generalize classical linear time series models such
as AR(p). They provide a powerful approach for prediction in financial time
series and generalize to non-stationary data. This chapter also presents convolution
neural networks for filtering time series data and exploiting different scales in the
data. Finally, this chapter demonstrates how autoencoders are used to compress
information and generalize principal component analysis.
1 Introduction
Recurrent neural networks (RNNs) are non-linear time series models and gener-
alize classical linear time series models such as AR(p). They provide a powerful
approach for prediction in financial time series and share parameters across time.
Convolution neural networks are useful as spectral transformations of spatial and
temporal data and generalize techniques such as wavelets, which use fixed basis
functions. They share parameters across space. Finally, autoencoders are used to
compress information and generalize principal component analysis.
Chapter Objectives
By the end of this chapter, the reader should expect to accomplish the following:
– Characterize RNNs as non-linear autoregressive models and analyze their stabil-
ity;
– Understand how gated recurrent units and long short-term memory architectures
give a dynamic autoregressive model with variable memory;
– Characterize CNNs as regression, classification, and time series regression of
filtered data;
– Understand principal component analysis for dimension reduction;
– Formulate a linear autoencoder and extract the principal components; and
– Understand how to build more complex networks by aggregating these different
concepts.
The notebooks provided in the accompanying source code repository demonstrate
many of the methods in this chapter. See Appendix “Python Notebooks” for further
details.
2 Recurrent Neural Networks

Recurrent neural networks (RNNs) are time series methods or sequence learners which have achieved much success in applications such as natural language understanding, language generation, video processing, and many other tasks (Graves 2012). There are many types of RNNs—we will just concentrate on simple RNN models for brevity of notation. Like multivariate structural autoregressive models, RNNs apply an autoregressive function $f^{(1)}_{W^{(1)},b^{(1)}}(X_t)$ to each input sequence $X_t$, where $T$ denotes the look-back period at each time step—the maximum number of lags. However, rather than directly imposing an autocovariance structure, a RNN provides a flexible functional form to directly model the predictor, $\hat{Y}$.
As illustrated in Fig. 8.1, this simple RNN is an unfolding of a single hidden layer neural network (a.k.a. Elman network (Elman 1991)) over all time steps in the sequence, $j = 0, \dots, T-1$. For each time step, $j$, the function $f^{(1)}_{W^{(1)},b^{(1)}}(X_{t,j})$ generates a hidden state $z_{t-j}$ from the current input $x_t$ and the previous hidden state $z_{t-1}$, where $X_{t,j} = \text{seq}_{T,t-j}(X) \subset X_t$:
Fig. 8.1 An illustrative example of a recurrent neural network with one hidden layer, "unfolded" over a sequence of six time steps. Each input $x_t$ is in the sequence $X_t$. The hidden layer contains $H$ units and the $i$th output at time step $t$ is denoted by $z_t^i$. The connections between the hidden units are recurrent and are weighted by the matrix $W_z^{(1)}$. At the last time step $t$, the hidden units connect to a single unit output layer with continuous $\hat{y}_{t+h}$
$$\text{response:}\quad \hat{y}_{t+h} = f^{(2)}_{W^{(2)},b^{(2)}}(z_t) := \sigma^{(2)}(W^{(2)}z_t + b^{(2)}),$$
$$\text{hidden states:}\quad z_{t-j} = f^{(1)}_{W^{(1)},b^{(1)}}(X_{t,j}) := \sigma^{(1)}(W_z^{(1)}z_{t-j-1} + W_x^{(1)}x_{t-j} + b^{(1)}),$$
where $\sigma^{(1)}$ is an activation function such as $\tanh(x)$, and $\sigma^{(2)}$ is either a softmax function or identity map depending on whether the response is categorical or continuous, respectively. The connections between the external inputs $x_t$ and the $H$ hidden units are weighted by the time invariant matrix $W_x^{(1)} \in \mathbb{R}^{H\times P}$. The recurrent connections between the $H$ hidden units are weighted by the time invariant matrix $W_z^{(1)} \in \mathbb{R}^{H\times H}$. Without such a matrix, the architecture is simply a single-layered feedforward network.
Consider the unactivated, univariate case with recurrence weight $W_z^{(1)} = \phi_z$ and input weight $W_x^{(1)} = \phi_x$. Then
$$z_{t-p} = \phi_x x_{t-p},$$
$$z_{t-T+2} = \phi_z z_{t-T+1} + \phi_x x_{t-T+2},$$
$$\vdots$$
$$z_{t-1} = \phi_z z_{t-2} + \phi_x x_{t-1},$$
$$\hat{x}_t = z_{t-1} + \mu,$$
and thus
$$\hat{x}_t = \mu + \phi_x\left(L + \phi_z L^2 + \cdots + \phi_z^{p-1}L^{p}\right)[x_t] = \mu + \sum_{i=1}^{p}\phi_i x_{t-i},$$
where $L$ is the lag operator and $\phi_i = \phi_x\phi_z^{i-1}$.
This special type of autoregressive model x̂t is “stable” and the order can be
identified through autocorrelation tests on X such as the Durbin–Watson, Ljung–
Box, or Box–Pierce tests. Note that if we modify the architecture so that the recurrence weights $W_{z,i}^{(1)} = \phi_{z,i}$ are lag dependent, then the unactivated hidden layer gives
$$\hat{x}_t = \mu + \phi_x\left(L + \phi_{z,1}L^2 + \cdots + \prod_{i=1}^{p-1}\phi_{z,i}\,L^{p}\right)[x_t], \tag{8.2}$$
i=1
$j −1
and thus the weights in this AR(p) model are φj = φx i=1 φz,i which allows
a more flexible presentation of the autocorrelation structure than the plain RNN—
which is limited to geometrically decaying weights. Note that a linear RNN with
infinite number of lags and no bias corresponds to an exponential smoother, zt =
αxt + (1 − α)zt−1 when Wz = 1 − α, Wx = α, and Wy = 1.
The generalization of a linear RNN from AR(p) to VAR(p) is trivial and can be written as
$$\hat{x}_t = \mu + \sum_{j=1}^{p}\phi_j x_{t-j}, \quad \phi_j := W^{(2)}(W_z^{(1)})^{j-1}W_x^{(1)}, \quad \mu := \sum_{j=1}^{p}W^{(2)}(W_z^{(1)})^{j-1}b^{(1)} + b^{(2)}, \tag{8.3}$$
where the square matrix $\phi_j \in \mathbb{R}^{P\times P}$ and bias vector $\mu \in \mathbb{R}^{P}$.
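A minimal sketch of such a model, assuming TensorFlow/Keras, is given below: a single SimpleRNN (Elman) layer with H hidden units followed by a linear output layer, trained on lagged windows of an illustrative simulated AR(1) series.

import numpy as np
import tensorflow as tf

# A plain (Elman) RNN for one-step-ahead forecasting of a univariate series;
# p is the look-back window (number of lags) and H the number of hidden units.
p, H = 5, 8
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(H, activation="tanh", input_shape=(p, 1)),
    tf.keras.layers.Dense(1)      # identity output map for a continuous response
])
model.compile(optimizer="adam", loss="mse")

# Build (X, y) pairs of lagged sequences from a simulated AR(1) series.
rng = np.random.default_rng(4)
series = np.zeros(1000)
for t in range(1, 1000):
    series[t] = 0.8 * series[t - 1] + rng.normal()
X = np.stack([series[t - p:t] for t in range(p, len(series))])[..., None]
y = series[p:]
model.fit(X, y, epochs=5, batch_size=64, verbose=0)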
and using the RNN(1) model with, for simplicity, a single recurrence weight, φ:
gives
where we have assumed μ = 0 in the second part of the expression. Checking that
we recover the AR(1) covariance, set $\sigma := \mathrm{Id}$ so that
$$\tilde{\gamma}_1 = \phi E[y_{t-1}^2] = \phi\,\mathbb{V}[y_{t-1}]. \tag{8.7}$$
we see, crucially, that $\hat{y}_{t-2}$ depends on $\epsilon_{t-1}$ but not on $\epsilon_t$. The residual $y_{t-2} - P(y_{t-2}\mid y_{t-1})$ hence depends on $\{\epsilon_{t-1}, \epsilon_{t-2}, \dots\}$. Thus we have that $\tilde{\gamma}_2 = 0$.
As a counterexample, consider the lag-2 partial autocovariance of the RNN(2) process
$$\hat{y}_{t-2} = \sigma\!\left(\phi\,\sigma\!\left(\phi(\hat{y}_t + \epsilon_t) + \epsilon_{t-1}\right)\right), \tag{8.12}$$
which depends on $\epsilon_t$ and hence the lag-2 partial autocovariance is not zero.
It is easy to show that the partial autocorrelation τ̃s = 0, s > p and, thus, like
the AR(p) process, the partial autocorrelation function for a RNN(p) has a cut-off at
p lags. The partial autocorrelation function is independent of time. Such a property
can be used to identify the order of the RNN model from the estimated PACF.
2.2 Stability
We can generalize the stability constraint on AR(p) models presented in Sect. 2.3 to
RNNs by considering the RNN(1) model:
$$y_t = (1 - \sigma(W_z L + b))^{-1}[\epsilon_t], \tag{8.13}$$
$$y_t = \frac{1}{1-\sigma(W_z L + b)}[\epsilon_t] = \sum_{j=0}^{\infty}\sigma^j(W_z L + b)[\epsilon_t], \tag{8.14}$$
and the infinite sum will be stable when the σ j (·) terms do not grow with j , i.e.
|σ | ≤ 1 for all values of φ and yt−1 . In particular, the choice tanh satisfies the
requirement on σ . For higher order models, we follow an induction argument and
show first that for a RNN(2) model we obtain
$$y_t = \frac{1}{1-\sigma\!\left(W_z\sigma(W_z L^2 + b) + W_x L + b\right)}[\epsilon_t] = \sum_{j=0}^{\infty}\sigma^j\!\left(W_z\sigma(W_z L^2 + b) + W_x L + b\right)[\epsilon_t],$$
which again is stable if |σ | ≤ 1 and it follows for any model order that the stability
condition holds.
It follows that lagged unit impulses of the data strictly decay with the order of the
lag when |σ | ≤ 1. Again by induction, at lag 1, the output from the hidden layer is
The absolute value of each component of the hidden variable under a unit vector
impulse at lag 1 is strictly less than 1:
The implication of this stability result is the reassuring attribute that past random
disturbances decay in the model and the effect of lagged data becomes less relevant
to the model output with increasing lag.
2.3 Stationarity
and it turns out that for $\phi \neq 0$ this model is non-stationary. We can hence rule out
the choice of a linear activation since this would leave us with a linear RNN. Hence,
it appears that some non-linear activation is necessary for the model to be stationary,
but we cannot use the Cayley–Hamilton theorem to prove stationarity.
Half-Life
Suppose that the output of the RNN is in Rd . The half-life of the lag is the
smallest number of function compositions, k, of σ̃ (x) := σ (Wz x + b) with itself
such that the normalized j th output is
Consider, for example, a univariate RNN with unit weights and no bias,
$$z_t = \sigma(z_{t-1} + x_t).$$
Then the lag-1 impulse is $\hat{x}_t = \tilde{\sigma}(1) = \sigma(0+1)$, the lag-2 impulse is $\hat{x}_t = \sigma(\sigma(1) + 0) = \tilde{\sigma}\circ\tilde{\sigma}(1)$, and so on. If $\sigma(x) := \tanh(x)$, normalizing by the output from the lag-1 impulse gives the values in Table 8.1.
Table 8.1 The half-life characterizes the memory decay of the architecture by measuring the number of periods before a lagged unit impulse has at least half of its effect at lag 1. The calculation of the half-life involves nested composition of the recursion relation for the hidden layer until $r_j^{(k)}$ is less than a half. The calculations are repeated for each $j$, hence the half-life may vary depending on the component of the output. In this example, the half-life of the univariate RNN is 9 periods

Lag k    r^(k)
1        1.000
2        0.843
3        0.744
4        0.673
5        0.620
6        0.577
7        0.543
8        0.514
9        0.489
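The entries of Table 8.1 can be reproduced by composing tanh with itself, as in the short sketch below.

import numpy as np

# Repeatedly compose tanh with itself, starting from a unit impulse, and
# normalize by the lag-1 response to find the half-life.
response = np.tanh(1.0)          # lag-1 impulse through the hidden layer
r = [1.0]
x = response
for k in range(2, 12):
    x = np.tanh(x)               # one further composition per additional lag
    r.append(x / response)
half_life = next(k + 1 for k, v in enumerate(r) if v < 0.5)
print([round(v, 3) for v in r[:9]], "half-life:", half_life)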
Classical RNNs, such as those described above, treat the error as homoscedastic—
that is, the error is i.i.d. We mention in passing that we can generalize RNNs to
heteroscedastic models by modifying the loss function to the squared Mahalanobis
length of the residual vector. Such an approach is referred to here as generalized
recurrent neural networks (GRNNs) and is mentioned briefly here with the
caveat that the field of machine learning in econometrics is nascent and therefore
incomplete and such a methodology, while appealing from a theoretic perspective,
is not yet proven in practice. Hence the purpose of this subsection is simply to
illustrate how more complex models can be developed which mirror some of the
developments in parametric econometrics.
In its simplest form, we solve a weighted least squares minimization problem
using data, Dt :
$$f(W, b) = \frac{1}{T}\sum_{t=1}^{T}L(y_t, \hat{y}_t), \tag{8.22}$$
2) The sample conditional covariance matrix $\hat{\Sigma}$ is estimated accordingly:
$$\hat{\Sigma} = \frac{1}{T-1}\sum_{t=1}^{T}\epsilon_t\epsilon_t^T. \tag{8.24}$$
3) Perform the generalized least squares minimization using Eq. 8.20 to obtain a
fitted heteroscedastic neural network model, with refined error
The fitted GRNN $F_{\hat{W}_t,\hat{b}_t}$ can then be used for forecasting without any further
modification. The effect of the sample covariance matrix is to adjust the
importance of the observation in the training set, based on the variance of its
error and the error correlation. Such an approach can be broadly viewed as a
RNN analogue of how GARCH models extend AR models. Of course, GARCH
models treat the error distribution as parametric and provide a recurrence
relation for forecasting the conditional volatility. In contrast, GRNNs rely on
the empirical error distribution and do not forecast the conditional volatility.
However, a separate regression could be performed over diagonals of the
empirical conditional volatility by using time series cross-validation.
3.1 α-RNNs
with the starting condition in each sequence, $\hat{h}_{t-p+1} = y_{t-p+1}$. This model augments the plain RNN by replacing $\hat{h}_{t-1}$ in the hidden layer with an exponentially smoothed hidden state $\tilde{h}_{t-1}$. The effect of the smoothing is to provide infinite memory when $\alpha \neq 1$. For the special case when $\alpha = 1$, we recover the plain RNN with short memory of length $p$.
We can easily study this model by simplifying the parameterization and consid-
ering the unactivated case. Setting by = bh = 0, Uh = Wh = φ and Wy = 1:
Without loss of generality, consider p = 2 lags in the model so that ĥt−1 = φyt−1 .
Then
where αt ∈ [0, 1] denotes the dynamic smoothing factor which can be equivalently
written in the one-step-ahead forecast of the form
Hence the smoothing can be viewed as a form of dynamic forecast error correction. When $\alpha_t = 0$, the forecast error is ignored and the smoothing merely repeats the current hidden state $\tilde{h}_t$, to the effect of the model losing its memory. When $\alpha_t = 1$, the forecast error overwrites the current hidden state $\tilde{h}_t$.
The smoothing can also be viewed as a weighted sum of the lagged observations, with lower or equal weights, $\alpha_{t-s}\prod_{r=1}^{s}(1-\alpha_{t-r+1})$, at the lag $s \ge 1$ past observation, $y_{t-s}$:
$$\tilde{y}_{t+1} = \alpha_t y_t + \sum_{s=1}^{t-1}\alpha_{t-s}\prod_{r=1}^{s}(1-\alpha_{t-r+1})\,y_{t-s} + \prod_{r=0}^{t-1}(1-\alpha_{t-r})\,\tilde{y}_1, \tag{8.36}$$
where the last term is a time-dependent constant and typically we initialize the
exponential smoother with ỹ1 = y1 . Note that for any αt−r+1 = 1, the prediction
ỹt+1 will have no dependency on all lags {yt−s }s≥r . The model simply forgets the
observations at or beyond the rth lag.
In the special case when the smoothing is constant and equal to 1 − α, then the
above expression simplifies to
for the linear operator (z) := 1 + (α − 1)z and where L is the lag operator.
Let us suppose now that instead of smoothing the observed time series $\{y_s\}_{s\le t}$, we instead smooth a hidden vector $\hat{h}_t$ with $\hat{\alpha}_t \in [0, 1]^H$ to give a filtered time series
$$\tilde{h}_t = \hat{\alpha}_t\circ\hat{h}_t + (1-\hat{\alpha}_t)\circ\tilde{h}_{t-1}.$$
We see that when $\hat{\alpha}_t = 0$, the smoothed hidden variable $\tilde{h}_t$ is not updated by the
input xt . Conversely, when αt = 1, we observe that the hidden variable locally
behaves like a non-linear autoregressive series. Thus the smoothing parameter can
be viewed as the sensitivity of the smoothed hidden state to the input xt .
The challenge becomes how to determine dynamically how much error cor-
rection is needed. GRUs address this problem by learning α̂ = F(Wα ,Uα ,bα ) (X)
from the input variables with a plain RNN parameterized by weights and biases
(Wα , Uα , bα ). The one-step-ahead forecast of the smoothed hidden state, h̃t , is the
filtered output of another plain RNN with weights and biases (Wh , Uh , bh ). Putting
this together gives the following $\alpha_t$-RNN model (simple GRU):
$$\hat{\alpha}_t = \sigma(W_\alpha x_t + U_\alpha\tilde{h}_{t-1} + b_\alpha),$$
$$\hat{h}_t = \sigma(W_h x_t + U_h\tilde{h}_{t-1} + b_h),$$
$$\tilde{h}_t = \hat{\alpha}_t\circ\hat{h}_t + (1-\hat{\alpha}_t)\circ\tilde{h}_{t-1}.$$
Fig. 8.2 An illustrative example of the response of an αt -RNN and comparison with a plain RNN
and a RNN with an exponentially smoothed hidden state, under a constant α (α-RNN). The RNN(3)
model loses memory of the unit impulse after three lags, whereas the α-RNN(3) models maintain
memory of the first unit impulse even when the second unit impulse arrives. The difference between
the αt -RNN (the toy GRU) and the α-RNN appears insignificant. Keep in mind however that the
dynamical smoothing model has much more flexibility in how it controls the sensitivity of the
smoothing to the unit impulses
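The following toy numpy sketch traces the response of a univariate cell of this kind to two unit impulses, in the spirit of Fig. 8.2; it assumes a sigmoid gate for the smoothing factor and uses arbitrary illustrative scalar weights rather than fitted parameters.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def alpha_rnn_forward(x, W_a=1.0, U_a=0.5, b_a=0.0, W_h=1.0, U_h=0.8, b_h=0.0):
    # Toy univariate alpha_t-RNN (simple GRU) cell with illustrative scalar weights.
    h_tilde = 0.0
    outputs = []
    for x_t in x:
        alpha_t = sigmoid(W_a * x_t + U_a * h_tilde + b_a)     # dynamic smoothing factor
        h_hat = np.tanh(W_h * x_t + U_h * h_tilde + b_h)       # candidate (unsmoothed) state
        h_tilde = alpha_t * h_hat + (1.0 - alpha_t) * h_tilde  # exponentially smoothed state
        outputs.append(h_tilde)
    return np.array(outputs)

# Response to two unit impulses, as in Fig. 8.2.
impulse = np.zeros(20)
impulse[[2, 10]] = 1.0
print(np.round(alpha_rnn_forward(impulse), 3))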
The effect of introducing a reset, or switch, r̂t , is to forget the dependence of ĥt on
the smoothed hidden state. Effectively, we turn the update for ĥt from a plain RNN
to a FFN and entirely neglect the recurrence. The recurrence in the update of ĥt is
thus dynamic. It may appear that the combination of a reset and adaptive smoothing
is redundant. But remember that $\hat{\alpha}_t$ affects the level of error correction in the update of the smoothed hidden state, $\tilde{h}_t$, whereas $\hat{r}_t$ adjusts the level of recurrence in the
in the smoothed hidden state (internal memory), whereas r̂t in combination with α̂t
can. More precisely, when αt = 1 and r̂t = 0, h̃t = ĥt = σ (Wh xt + bh ) which
is reset to the latest input, xt , and the GRU is just a FFNN. Also, when αt = 1
and r̂t > 0, a GRU acts like a plain RNN. Thus a GRU can be seen as a more
general architecture which is capable of being a FFNN or a plain RNN under certain
parameter values.
These additional layers (or cells) enable a GRU to learn extremely complex long-
term temporal dynamics that a plain RNN is not capable of. The price to pay for
this flexibility is the additional complexity of the model. Clearly, one must choose
whether to opt for a simpler model, such as an αt -RNN, or use a GRU. Lastly, we
comment in passing that in the GRU, as in a RNN, there is a final feedforward layer to transform the (smoothed) hidden state to a response:
$$\hat{y}_t = W_y\,\tilde{h}_t + b_y.$$
The GRU provides a gating mechanism for propagating a smoothed hidden state—a long-term memory—which can be overridden, turning the GRU into a plain RNN (with short memory) or even a memoryless FFN. More complex models using
hidden units with varying connections within the memory unit have been proposed
in the engineering literature with empirical success (Hochreiter and Schmidhuber
1997; Gers et al. 2001; Zheng et al. 2017). LSTMs are similar to GRUs but have
a separate (cell) memory, Ct , in addition to a hidden state ht . LSTMs also do
not require that the memory updates are a convex combination. Hence they are
more general than exponential smoothing. The mathematical description of LSTMs
is rarely given in an intuitive form, but the model can be found in, for example,
Hochreiter and Schmidhuber (1997).
The cell memory is updated by the following expression involving a forget gate, α̂t, an input gate ẑt, and a cell gate ĉt:
$$c_t = \hat{\alpha}_t \circ c_{t-1} + \hat{z}_t \circ \hat{c}_t.$$
In the language of LSTMs, the triple (α̂t , r̂t , ẑt ) are, respectively, referred to as the
forget gate, output gate, and input gate. Our change of terminology is deliberate and
designed to provide more intuition and continuity with GRUs and econometrics.
We note that in the special case when ẑt = 1 − α̂t we obtain a similar exponential
smoothing expression to that used in the GRU. Beyond that, the role of the input gate
appears superfluous and difficult to reason with using time series analysis. Likely
it merely arose from a contextual engineering model; however, it is tempting to
speculate how the additional variable provides the LSTM with a more elaborate
representation of complex temporal dynamics.
When the forget gate, α̂t = 0, then the cell memory depends solely on the
cell memory gate update ĉt . By the term α̂t ◦ ct−1 , the cell memory has long-term
memory which is only forgotten beyond lag s if α̂t−s = 0. Thus the cell memory
has an adaptive autoregressive structure.
The extra “memory,” treated as a hidden state and separate from the cell memory, is nothing more than a Hadamard product:
$$h_t = \hat{r}_t \circ c_t,$$
which is reset if r̂t = 0. If r̂t = 1, then the cell memory directly determines the hidden state.
Thus the reset gate can entirely override the effect of the cell memory’s
autoregressive structure, without erasing it. In contrast, the GRU has one memory,
which serves as the hidden state, and it is directly affected by the reset gate.
The reset, forget, input, and cell memory gates are updated by plain RNNs all
depending on the hidden state ht .
Like the GRU, the LSTM can function as a short memory, plain RNN; just set
αt = 0 in Eq. 8.50. However, the LSTM can also function as a coupling of FFNs;
just set r̂t = 0 so that ht = 0 and hence there is no recurrence structure in any of
the gates. Both GRUs and LSTMs, even if the nomenclature does not suggest it, can
model long- and short-term autoregressive memory. The GRU couples these through
a smoothed hidden state variable. The LSTM separates out the long memory,
stored in the cellular memory, but uses a copy of it, which may additionally be
reset. Strictly speaking, the cellular memory has long-short autoregressive memory
structure, so it would be misleading in the context of time series analysis to strictly
discern the two memories as long and short (as the nomenclature suggests). The
latter can be thought of as a truncated version of the former.
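For intuition, the following NumPy sketch of one LSTM step follows the standard Hochreiter–Schmidhuber formulation (which places a tanh on the cell state before the output gate, a detail the simplified exposition above omits); the gate names follow the chapter's terminology, and the weight shapes are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step in the chapter's notation:
    'a' = forget gate, 'z' = input gate, 'r' = output (reset) gate, 'c' = cell candidate."""
    a_t = sigmoid(W['a'] @ x_t + U['a'] @ h_prev + b['a'])    # forget gate alpha_hat_t
    z_t = sigmoid(W['z'] @ x_t + U['z'] @ h_prev + b['z'])    # input gate z_hat_t
    r_t = sigmoid(W['r'] @ x_t + U['r'] @ h_prev + b['r'])    # output/reset gate r_hat_t
    c_hat = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # cell candidate c_hat_t
    c_t = a_t * c_prev + z_t * c_hat                          # cell (long-term) memory update
    h_t = r_t * np.tanh(c_t)                                  # hidden state: gated copy of the cell memory
    return h_t, c_t

# toy usage with random weights
rng = np.random.default_rng(0)
d, H = 3, 4
W = {g: rng.normal(size=(H, d)) for g in "azrc"}
U = {g: rng.normal(size=(H, H)) for g in "azrc"}
b = {g: np.zeros(H) for g in "azrc"}
h, c = np.zeros(H), np.zeros(H)
for x_t in rng.normal(size=(10, d)):
    h, c = lstm_step(x_t, h, c, W, U, b)
print(h)
```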
? Multiple Choice Question 2
Which of the following statements are true:
a. A gated recurrent unit uses dynamic exponential smoothing to propagate a hidden
state with infinite memory.
b. The gated recurrent unit requires that the data is covariance stationary.
c. Gated recurrent units are unconditionally stable, for any choice of activation
functions and weights.
d. A GRU only has one memory, the hidden state, whereas a LSTM has an
additional, cellular, memory.
4 Python Notebook Examples

The following Python examples demonstrate the application of RNNs and GRUs to
financial time series prediction.
Fig. 8.3 A comparison of out-of-sample forecasting errors produced by a RNN and GRU trained
on minute snapshots of Coinbase mid-prices
The dataset is tick-by-tick, top-of-the-limit-order-book data, such as mid-prices and volume-weighted mid-prices (VWAPs), collected from ZN futures. This dataset is heavily truncated for demonstration purposes and consists of 1,033,492 observations. The data has also been labeled to indicate whether the prices up-tick (1), remain
Fig. 8.4 A comparison of out-of-sample forecasting errors produced by a plain RNN and GRU
trained on tick-by-tick smart prices of ZN futures
the same, or down-tick (−1) over the next tick. For demonstration purposes, the
timestamps have been removed. In the simple forecasting experiment, we predict
VWAPs (a.k.a. “smart prices”) from historical smart prices. Note that a classification
experiment is also possible but not shown here.
The ADF test is performed over the first 200k observations as it is compu-
tationally intensive to apply it to the entire dataset. The ADF test statistic is −3.9706 and the p-value is smaller than 0.01; we thus reject the null of the ADF test at the 99% confidence level in favor of the data being stationary (i.e., there is no unit root). The Ljung–Box test is used to identify
the number of lags needed in the model. A comparison of out-of-sample VWAP
prices produced by a plain RNN and GRU is shown in Fig. 8.4. Because the
data is stationary, we observe little advantage in using a GRU over a plain RNN.
See ML_in_Finance-RNNs-HFT.ipynb for further details of the network
architectures and experiment.
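A condensed sketch of the workflow is given below, assuming the data are loaded into a pandas DataFrame with a smart-price column; the file name, column name, sequence length, layer sizes, and train/test split are illustrative assumptions, not those of the accompanying notebook.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, GRU, Dense

def make_sequences(x, seq_len):
    """Stack overlapping windows: X[i] = x[i:i+seq_len], y[i] = x[i+seq_len]."""
    X = np.stack([x[i:i + seq_len] for i in range(len(x) - seq_len)])
    y = x[seq_len:]
    return X[..., None], y          # add a feature dimension for Keras

prices = pd.read_csv("smart_prices.csv")["vwap"].values  # hypothetical file/column names
adf_stat, p_value, *_ = adfuller(prices[:200_000])        # ADF on the first 200k observations
print(f"ADF statistic {adf_stat:.4f}, p-value {p_value:.4f}")

seq_len = 10                        # e.g., chosen from the PACF / Ljung-Box analysis
X, y = make_sequences(prices, seq_len)
split = int(0.8 * len(X))

for cell in (SimpleRNN, GRU):       # compare a plain RNN with a GRU
    model = Sequential([cell(10, input_shape=(seq_len, 1)), Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X[:split], y[:split], epochs=5, batch_size=256, verbose=0)
    mse = model.evaluate(X[split:], y[split:], verbose=0)
    print(cell.__name__, "out-of-sample MSE:", mse)
```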
5 Convolutional Neural Networks

Convolutional neural networks (CNNs) are feedforward neural networks that can
exploit local spatial structures in the input data. Flattening high-dimensional time
series, such as limit order book depth histories, would require a very large number
of weights in a feedforward architecture. CNNs attempt to reduce the network size
by exploiting data locality (Fig. 8.5).
A common technique in time series analysis and signal processing is to filter the
time series. We have already seen exponential smoothing as a special case of a
class of smoothers known as “weighted moving average (WMA)” smoothers. WMA
smoothers take the form
$$\tilde{x}_t = \frac{1}{\sum_{i\in I} w_i}\sum_{i\in I} w_i\, x_{t-i}, \qquad (8.56)$$
where $\tilde{x}_t$ is the local mean of the time series. The weights are specified to emphasize or deemphasize particular observations $x_{t-i}$ in the span $|I|$. Examples of well-known smoothers include the Hanning smoother h(3), with weights $(\tfrac{1}{4},\tfrac{1}{2},\tfrac{1}{4})$:
$$\tilde{x}_t = \tfrac{1}{4}x_{t-1} + \tfrac{1}{2}x_t + \tfrac{1}{4}x_{t+1}.$$
Such smoothers have the effect of reducing noise in the time series. The moving average filter is a simple low-pass finite impulse response (FIR) filter commonly used for smoothing an array of sampled data. It takes |I| samples of the input at a time and computes their weighted average to produce a single output point. As the
length of the filter increases, the smoothness of the output increases, whereas the
sharp modulations in the data are smoothed out.
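As a minimal illustration (not taken from the text), the following NumPy snippet applies a 5-point moving average and the Hanning h(3) smoother to a noisy synthetic series via np.convolve.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 6 * np.pi, 300)) + 0.3 * rng.normal(size=300)

ma5 = np.convolve(x, np.ones(5) / 5, mode="same")                      # 5-point moving average (FIR low-pass)
hanning3 = np.convolve(x, np.array([0.25, 0.5, 0.25]), mode="same")    # Hanning h(3) weights

print(np.var(x), np.var(ma5), np.var(hanning3))   # the smoothed series have lower variance
```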
The moving average filter is in fact a convolution using a very simple filter
kernel. More generally, we can write a univariate time series prediction problem as a convolution with a filter as follows. First, the discrete convolution gives the relation between $x_{t-i}$ and $x_{t-j}$:
$$x_{t-i} = \sum_{j=0}^{t-1}\delta_{ij}\, x_{t-j}, \quad i \in \{0,\dots,t-1\}, \qquad (8.58)$$
where we have used the Kronecker delta $\delta$. The kernel filtered time series is a convolution
$$\tilde{x}_{t-i} = \sum_{j\in J} K_{j+k+1}\, x_{t-i-j}, \quad i \in \{k+1,\dots,p-k\}, \qquad (8.59)$$
so that the prediction from the filtered autoregressive model is
$$\hat{x}_t = \mu + \sum_{i=1}^{p}\phi_i\, \tilde{x}_{t-i}, \qquad (8.60)$$
where x̃t−i is the ith output from a convolution of the p length input sequence with a
kernel consisting of 2k + 1 weights. These weights are fixed over time and hence the
CNN is only suited to prediction from stationary time series. Note also, in contrast
to a RNN, that the size of the weight matrix Wy increases with the number of lags
in the model.
The univariate CNN predictor with p lags and H activated hidden units
(kernels) is
$$[z_t]_{i,m} = \sigma\Big(\sum_{j\in J} K_{m,\,j+k+1}\, x_{t-i-j} + [b_h]_m\Big), \qquad (8.65)$$
which in vectorized form is
$$z_t = \sigma(K * x_t + b_h), \qquad (8.66)$$
where $m \in \{1,\dots,H\}$ denotes the index of the kernel, the kernel matrix $K \in \mathbb{R}^{H\times(2k+1)}$, the hidden bias vector $b_h \in \mathbb{R}^{H}$, and the output matrix $W_y \in \mathbb{R}^{1\times pH}$.
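To see how such a univariate CNN predictor looks in code, here is a minimal Keras sketch; the number of lags, the number of kernels, and the kernel width are illustrative assumptions.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Flatten, Dense

p, H, width = 20, 8, 3       # p lags, H kernels, kernel width 2k + 1 = 3

model = Sequential([
    Conv1D(filters=H, kernel_size=width, activation="relu",
           padding="causal", input_shape=(p, 1)),   # z_t: activated convolution of the lags
    Flatten(),                                      # stack the pH convolution outputs
    Dense(1),                                       # linear read-out, W_y in R^{1 x pH}
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```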
Dimension Reduction
Since the size of Wy increases with both the number of lags and the number of
kernels, it may be preferable to reduce the dimensionality of the weights with an
additional layer and hence avoid over-fitting. We will return to this concept later,
but one might view it as an alternative to auto-shrinkage or dropout.
Non-sequential Models
Convolutional neural networks are not limited to sequential models. One might, for example, sample the past lags non-uniformly, so that $I = \{2^i\}_{i=1}^{p}$; then the maximum lag in the model is $2^p$. Such a non-sequential model allows a large maximum lag without capturing all the intermediate lags. We will also return to non-sequential models in the section on dilated convolution.
Stationarity
A univariate CNN predictor, with one kernel and no activation, can be written in the canonical form
$$\hat{x}_t = \mu + \tilde{\phi}(L)\, x_t,$$
where, by the linearity of $\phi(L)$ in $x_t$, the convolution commutes and thus we can write $\tilde{\phi} := K * \phi$. Finding the roots of the characteristic equation
$$\tilde{\phi}(z) = 0, \qquad (8.70)$$
it follows that the CNN is strictly stationary and ergodic if all the roots lie outside the unit circle in the complex plane, $|\lambda_i| > 1,\ i \in \{1,\dots,p\}$. As before, we would compute the eigenvalues of the companion matrix to find the roots. Provided that $\tilde{\phi}(L)^{-1}$ applied to the noise process $\{\epsilon_s\}_{s=1}^{t}$ forms a convergent series, the model is stable.
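The stationarity check can be carried out numerically. The sketch below (with illustrative coefficient and kernel values, not from the text) forms φ̃ = K ∗ φ and verifies the standard AR stationarity condition that the roots of 1 − Σᵢ φ̃ᵢ zⁱ lie outside the unit circle.

```python
import numpy as np

phi = np.array([0.5, 0.2])         # toy AR(2) coefficients
K = np.array([0.25, 0.5, 0.25])    # a simple smoothing kernel
phi_tilde = np.convolve(K, phi)    # effective coefficients phi_tilde = K * phi

# characteristic polynomial 1 - phi_tilde_1 z - phi_tilde_2 z^2 - ...
coeffs = np.r_[1.0, -phi_tilde]            # coefficients in increasing powers of z
roots = np.roots(coeffs[::-1])             # np.roots expects decreasing powers
print(roots, np.all(np.abs(roots) > 1.0))  # True => all roots outside the unit circle => stationary
```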
5.2 2D Convolution
The 2D convolution of an image $X$ with a $(2k+1)\times(2k+1)$ kernel $K$ is defined entrywise by
$$y_{i,j} = [K * X]_{i,j} = \sum_{p,q=-k}^{k} K_{k+1+p,\,k+1+q}\; x_{i+p+1,\,j+q+1}.$$
We leave it as an exercise for the reader to compute the output for the remaining
values of i and j .
As in the example above, when we perform convolution over the 4 × 4 image with
a 3 × 3 kernel, we get a 2 × 2 feature map. This is because there are only 4 unique
positions where we can place our filter inside this image.
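To make the 4×4 example concrete, the following NumPy sketch (with arbitrary illustrative values) enumerates the four valid filter positions and produces the 2×2 feature map; it computes the element-wise sum over each receptive field, as in the expression above.

```python
import numpy as np

X = np.arange(16, dtype=float).reshape(4, 4)   # a toy 4x4 "image"
K = np.array([[1., 0., -1.],
              [1., 0., -1.],
              [1., 0., -1.]])                  # a 3x3 edge-detection-style kernel

out = np.zeros((2, 2))                         # only 2x2 valid filter positions
for i in range(2):
    for j in range(2):
        out[i, j] = np.sum(K * X[i:i + 3, j:j + 3])   # sum over the 3x3 receptive field
print(out.shape, out)                          # (2, 2) feature map
```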
As convolutional neural networks were designed for image processing, it is
common to represent the color values of the pixels with c color channels. For
example, RGB values are represented with three channels. The general form of the convolution layer map for an $m \times n \times c$ input tensor and an $m \times n \times H$ output (with stride 1 and padding) is
$$\theta: \mathbb{R}^{m\times n\times c} \to \mathbb{R}^{m\times n\times H}.$$
Writing
$$f = \begin{pmatrix} f_1\\ \vdots\\ f_c \end{pmatrix}, \qquad (8.72)$$
the layer map takes the form
$$\theta(f) = K * f + b, \qquad (8.73)$$
$$[\theta(f)]_j = \sum_{i=1}^{c} [K]_{i,j} * [f]_i + b_j, \quad j \in \{1,\dots,H\}, \qquad (8.74)$$
where $[\cdot]_{i,j}$ contracts the 4-tensor to a 2-tensor by indexing the $i$th third component and the $j$th fourth component of the tensor, and for any $g \in \mathbb{R}^{m\times n}$ and $H \in \mathbb{R}^{(2k+1)\times(2k+1)}$
$$[H * g]_{i,j} = \sum_{p,q=-k}^{k} H_{k+1+p,\,k+1+q}\; g_{i+p,\,j+q}, \quad i \in \{1,\dots,m\},\ j \in \{1,\dots,n\}. \qquad (8.75)$$
By analogy to a fully connected feedforward architecture, the weights in the layer
are given by the kernel tensor, K, and the biases, b are H -vectors. Instead of a
semi-affine transformation, the layer is given by an activated convolution σ (θ (f)).
Furthermore, we note that not all neurons in the two consecutive layers are connected to each other. In fact, only the neurons which correspond to inputs within a $(2k+1) \times (2k+1)$ square connect to the same output neuron. Thus the filter size
controls the receptive field of each output. We note, therefore, that some neurons
share the same weights. Both of these properties result in far fewer parameters to
learn than a fully connected feedforward architecture.
Padding is needed to extend the size of the image $f$ so that the filtered image has the same dimensions as the original image. Specifically, padding means how to choose $f_{i+p,\,j+q}$ when $(i+p,\, j+q)$ is outside of $\{1,\dots,m\} \times \{1,\dots,n\}$. The following three choices are often used:
$$f_{i+p,\,j+q} = \begin{cases} 0, & \text{zero padding},\\ f_{(i+p)\,(\mathrm{mod}\ m),\,(j+q)\,(\mathrm{mod}\ n)}, & \text{periodic padding},\\ f_{|i-1+p|,\,|j-1+q|}, & \text{reflected padding}, \end{cases} \qquad (8.76)$$
if
$$i+p \notin \{1,\dots,m\} \quad \text{or} \quad j+q \notin \{1,\dots,n\}. \qquad (8.77)$$
The strided convolution with stride $s$ is
$$[K *_s f]_{i,j} = \sum_{p,q=-k}^{k} K_{p,q}\; f_{s(i-1)+1+p,\; s(j-1)+1+q}, \quad i \in \Big\{1,\dots,\Big\lceil\frac{m}{s}\Big\rceil\Big\},\ j \in \Big\{1,\dots,\Big\lceil\frac{n}{s}\Big\rceil\Big\}. \qquad (8.78)$$
Here $\lceil \frac{m}{s}\rceil$ denotes the smallest integer greater than or equal to $\frac{m}{s}$.
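The three padding choices map directly onto np.pad modes, as the short sketch below illustrates for a toy 4×4 image with k = 1, followed by a stride-2 averaging convolution; the kernel and values are illustrative, and np.pad's "reflect" mode is only an approximation of the reflected-padding indexing above.

```python
import numpy as np

f = np.arange(16, dtype=float).reshape(4, 4)
k = 1
zero     = np.pad(f, k, mode="constant")   # zero padding
periodic = np.pad(f, k, mode="wrap")       # periodic padding
reflect  = np.pad(f, k, mode="reflect")    # reflected padding (approximate)

K = np.ones((3, 3)) / 9.0                  # 3x3 averaging kernel
s = 2                                      # stride
out = np.array([[np.sum(K * zero[s * i:s * i + 3, s * j:s * j + 3]) for j in range(2)]
                for i in range(2)])
print(out)                                 # ceil(4/2) x ceil(4/2) = 2 x 2 output
```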
5.3 Pooling
Data with high spatial structure often results in observations which have similar
values within a neighborhood. Such a characteristic leads to redundancy in data
representation and motivates the use of data reduction techniques such as pooling.
In addition to a convolution layer, a pooling layer is a map that reduces the spatial dimensions of its input. One popular choice is the so-called average pooling, $R_{\mathrm{avg}}$, which can be implemented as a convolution with stride 2 or larger using the kernel $K$ of the form
$$K = \frac{1}{9}\begin{pmatrix} 1 & 1 & 1\\ 1 & 1 & 1\\ 1 & 1 & 1 \end{pmatrix}. \qquad (8.80)$$
Non-linear pooling operators are also used, for example, the $(2k+1)\times(2k+1)$ max-pooling operator with stride $s$, which takes the maximum value over each $(2k+1)\times(2k+1)$ patch.
In addition to image processing, CNNs have also been successfully applied to time
series. WaveNet, for example, is a CNN developed for audio processing (van den
Oord et al. 2016).
Time series often display long-term correlations. Moreover, the dependent variable(s) may exhibit non-linear dependence on the lagged predictors. The WaveNet architecture is a non-linear p-autoregression of the form
$$y_t = \sum_{i=1}^{p}\phi_i(x_{t-i}) + \epsilon_t, \qquad (8.82)$$
in which the non-linear functions are built up by stacking dilated convolutions
$$[K^{(\ell)} *_{d^{(\ell)}} f^{(\ell-1)}]_i = \sum_{p=-k}^{k} K^{(\ell)}_{p}\, f^{(\ell-1)}_{d^{(\ell)}(i-1)+1+p}, \quad i \in \Big\{1,\dots,\Big\lceil\frac{m}{d^{(\ell)}}\Big\rceil\Big\}, \qquad (8.83)$$
where $d^{(\ell)}$ is the dilation factor and we can choose the dilations to increase by a factor of two: $d^{(\ell)} = 2^{\ell-1}$. The filters for each layer, $K^{(\ell)}$, are chosen to be of size $1 \times 2$.
An example of a three-layer dilated convolutional network is shown in Fig. 8.6.
Using the dilated convolutions instead of regular ones allows the output y to be
influenced by more nodes in the input. The input of the network is given by the time
series X. In each subsequent layer we apply the dilated convolution, followed by a
non-linearity, giving the output feature maps f () , ∈ {1, . . . , L}.
Since we are interested in forecasting the subsequent values of the time series, we will train the model so that this output is the forecasted time series $\hat{Y} = \{\hat{y}_t\}_{t=1}^{N}$.
The receptive field of a neuron was defined as the set of elements in its input that modify the output value of that neuron. Now, we define the receptive field r of the model to be the number of neurons in the input of the first layer, i.e. the time series, that can modify the output in the final layer, i.e. the forecasted time series. This depends on the number of layers L and the filter size 2k + 1. In Fig. 8.6, the receptive field is r = 8: one output value is influenced by eight input neurons.
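A minimal Keras sketch of a three-layer dilated causal convolution stack, in the spirit of Fig. 8.6, follows; the number of filters, the window length, and the single Dense read-out are illustrative assumptions.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Dense

T = 64  # length of the input window

model = Sequential()
for ell, dilation in enumerate((1, 2, 4), start=1):    # d^(l) = 2^(l-1), l = 1, 2, 3
    kwargs = {"input_shape": (T, 1)} if ell == 1 else {}
    model.add(Conv1D(filters=8, kernel_size=2, dilation_rate=dilation,
                     padding="causal", activation="relu", **kwargs))
model.add(Dense(1))   # per-time-step forecast of the series
model.summary()
# With kernel size 2 and dilations 1, 2, 4, each output sees
# 1 + 1 + 2 + 4 = 8 input time steps: the receptive field is r = 8.
```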
? Multiple Choice Question 3
Which of the following statements are true:
a. CNNs apply a collection of different, but equal width, filters to the data before
using a feedforward network for regression or classification.
b. CNNs are sparse networks, exploiting locality of the data, to reduce the number
of weights.
c. A dilated CNN is appropriate for multi-scale time series analysis—it captures a
hierarchy of patterns at different resolutions (i.e., dependencies on past lags at
different frequencies, e.g. days, weeks, months)
d. The number of layers in a CNN is automatically determined during training.
6 Autoencoders
In the case that no non-linear activation function is used, $x_i = W^{(1)} y_i + b^{(1)}$ and $\hat{y}_i = W^{(2)} x_i + b^{(2)}$. If the cost function is the total squared difference between output and input, then training the autoencoder on the input data matrix $Y$ solves
$$\min_{W^{(1)},\,b^{(1)},\,W^{(2)},\,b^{(2)}} \Big\| Y - \big(W^{(2)}(W^{(1)} Y + b^{(1)}\mathbf{1}_N^{T}) + b^{(2)}\mathbf{1}_N^{T}\big) \Big\|_F^{2}. \qquad (8.85)$$
If we set the partial derivative with respect to $b^{(2)}$ to zero and insert the solution into (8.85), then the problem becomes
$$\min_{W^{(1)},\,W^{(2)}} \Big\| Y_0 - W^{(2)} W^{(1)} Y_0 \Big\|_F^{2}, \qquad (8.86)$$
where $Y_0 := Y - \bar{y}\mathbf{1}_N^{T}$ is the de-meaned data matrix. Thus, for any $b^{(1)}$, the optimal $b^{(2)}$ is such that the problem becomes independent of $b^{(1)}$ and of $\bar{y}$. Therefore, we may focus only on the weights $W^{(1)}, W^{(2)}$.
Linear autoencoders give orthogonal projections, even though the columns of the weight matrices are not orthogonal. To see this, set the gradients to zero; then $W^{(1)}$ is the left Moore–Penrose pseudoinverse of $W^{(2)}$ (and $W^{(2)}$ is the right pseudoinverse of $W^{(1)}$):
$$W^{(1)} = (W^{(2)})^{\dagger} = \big(W^{(2)T} W^{(2)}\big)^{-1} W^{(2)T}.$$
The matrix $W^{(2)} (W^{(2)})^{\dagger} = W^{(2)}\big(W^{(2)T} W^{(2)}\big)^{-1} W^{(2)T}$ is the orthogonal projection operator onto the column space of $W^{(2)}$ when its columns are not necessarily orthonormal. This problem is very similar to (6.52), but without the orthonormality constraint.
It can be shown that W (2) is a minimizer of Eq. 8.86 if and only if its column
space is spanned by the first m loading vectors of Y.
The linear autoencoder is said to apply PCA to the input data in the sense that
its output is a projection of the data onto the low-dimensional principal subspace.
However, unlike actual PCA, the coordinates of the output of the bottleneck are
correlated and are not sorted in descending order of variance. The solutions for
reduction to different dimensions are not nested: when reducing the data from
dimension n to dimension m1 , the first m2 vectors (m2 < m1 ) are not an optimal
solution to reduction from dimension n to m2 , which therefore requires training an
entirely new autoencoder.
Theorem The first m loading vectors of Y are the first m left singular vectors of the
matrix W (2) which minimizes (8.86).
A sketch of the proof now follows. We train the linear autoencoder on the original dataset $Y$ and then compute the first $m$ left singular vectors of $W^{(2)} \in \mathbb{R}^{n\times m}$, where typically $m \ll N$. The loading vectors may also be recovered from the weights of the hidden layer, $W^{(1)}$, by a singular value decomposition. If $W^{(2)} = U\Sigma V^{T}$, which we assume is full rank, then
$$W^{(1)} = (W^{(2)})^{\dagger} = V\,\Sigma^{\dagger}\, U^{T},$$
so that the left singular vectors of $W^{(2)}$ are the right singular vectors of $W^{(1)}$.
Fig. 8.8 This figure shows the yield curve over time, each line corresponds to a different maturity
in the term structure of interest rates
A classic illustration is principal component analysis of the term structure of interest rates (Litterman and Scheinkman 1991). Figure 8.8 shows the yield curve over a 25-year period, where
each line corresponds to each of the maturities in the term structure of fixed income
securities.
We can illustrate the comparison by finding the principal components of the
sample covariance matrix from the time series of the yield curve, as shown in
Fig. 8.9a. The eigenvalues, which lie on the diagonal of the transformed covariance matrix, are all positive and arranged in descending order. In this case we have plotted the first
m = 3 components from a high-dimensional dataset where n > m. The percentage
of variance attributed to these components is 95.6%, 4.07%, 0.34%, respectively.
Figure 8.9c shows the decomposition of the sample covariance matrix using the left
singular vectors of the autoencoder weights and observed to be similar to Fig. 8.9a.
The percentage of variance attributed to the components is 95.63%, 4.10%, 0.27%.
For completeness, Fig. 8.9b shows the transformation using W (2) , which results in
correlated values.
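The comparison in Fig. 8.9 can be reproduced in outline as follows. This is a minimal sketch with synthetic data standing in for the yield curve history; the layer sizes, number of epochs, and optimizer are assumptions, and the code is not the accompanying notebook.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

rng = np.random.default_rng(0)
n, N, m = 10, 2000, 3
Y = rng.normal(size=(N, n)) @ rng.normal(size=(n, n))   # synthetic stand-in for the yield curve data
Y0 = Y - Y.mean(axis=0)                                 # de-meaned data, rows = observations

# linear autoencoder: n -> m -> n with no activations
ae = Sequential([Dense(m, input_shape=(n,)),            # encoder, weights W^(1)
                 Dense(n)])                             # decoder, weights W^(2)
ae.compile(optimizer="adam", loss="mse")
ae.fit(Y0, Y0, epochs=200, batch_size=128, verbose=0)

W2 = ae.layers[1].get_weights()[0].T                    # decoder weights as an (n x m) matrix
U_m = np.linalg.svd(W2, full_matrices=False)[0]         # left singular vectors of W^(2)

P_m = np.linalg.svd(Y0.T, full_matrices=False)[0][:, :m]  # PCA loading vectors of Y0
print(np.abs(U_m.T @ P_m).round(2))   # approximately the identity (up to signs) once training has converged
```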
Performing PCA on the daily change of the yield curve leads to more inter-
pretable components: the first eigenvalue can be attributed to parallel shift of the
curve, the second to twist, and the third to curvature (a.k.a. butterfly). Figure 8.10
compares the first two principal components of Y0 using either the m = 3 loading
vectors, Pm or the m = 3 singular vectors, Um . For the purposes of interpreting the
behavior of the yield curve over time, both give similar results. Periods in which
Fig. 8.9 The covariance matrix of the data in the transformed coordinates, according to (a) the
loading vectors computed by applying SVD to the entire dataset, (b) the weights of the linear
autoencoder, and (c) the left singular vectors of the autoencoder weights. (a) PTm Y0 YT0 Pm , (b)
(W (2) )T Y0 YT0 W (2) , (c) UTm Y0 YT0 Um
Fig. 8.10 The first two principal components of Y0 , projected using Pm are shown in (a). The
first two approximated principal components (up to a sign change) using Um . The first principal
component is represented by the x-axis and the second by the y-axis. (a) PTm Y0 . (b) UTm Y0
the yield curve is dominated by parallel shift exhibit a large absolute first principal
component compared to the second component. And conversely, periods exhibiting
a large second component compared to the first indicate that the curve movement is dominated by twist. The latter phenomenon often occurs when the curve moves from an upward sloping to a downward sloping regime. In both cases, we note that
the period following the financial crisis, 2009, exhibits a relatively large amount of
shift and twist when compared to other years.
As we saw in the previous chapter, merely adding more layers to the linear
autoencoder does not change the properties of the autoencoder—it remains a linear
Fig. 8.11 The reconstruction error in Y is shown for (a) the linear encoder and (b) a deep
autoencoder, with two tanh activated layers for each of the encoder and decoders
autoencoder and if there are L layers in the encoder, then the first m singular vectors of $W^{(1)} W^{(2)} \cdots W^{(L)}$ will correspond to the loading vectors. With non-linear activation, the autoencoder can no longer resolve the loading vectors. However, the more expressive, non-linear, model can be used to reduce the reconstruction error for a given compression dimension m. Figure 8.11 compares the reconstruction
error in Y using the linear autoencoder and a deep autoencoder, with two tanh
activated layers in each of the encoder and decoder.
Recently, the application of deep autoencoders to statistical equity factor models
has been demonstrated by Heaton et al. (2017). The authors compress the asset
returns in a portfolio to give a small set of deep factors which explain the variability
in the portfolio returns more reliably than PCA or fundamental equity factors.
Such a representation provides a general portfolio selection process which relies
on encoding of the asset return histories into deep factors and then decoding, to
predict the asset returns. One practical challenge with this approach, and indeed all
statistical factor models, is their lack of investability and hedgeability. For ReLU
activated autoencoders, deep factors can be interpreted as compositions of financial
put and call options on linear combinations of the assets represented. As such,
the authors speculate that deep factors could be potentially investable and hence
hedgeable.
See the notebook ML_in_Finance-Autoencoders.ipynb for an imple-
mentation of the methodology and results presented in this section.
7 Summary
In this chapter we have seen how different neural network architectures can be used to exploit structure in the data, resulting in fewer weights and broadening the application of neural networks beyond the regression and classification of cross-sectional data.
8 Exercises
Exercise 8.1*
Calculate the half-life of the following univariate RNN
x̂t = Wy zt−1 + by ,
zt−1 = tanh(Wz zt−2 + Wx xt−1 ),
$$\hat{y}_t = \mu + \sum_{i=1}^{p}\phi_i\, y_{t-i}$$
$$y_t = \sigma(\phi\, y_{t-1}) + u_t,$$
for some monotonically increasing, positive, and convex activation function $\sigma(x)$ and positive constant $\phi$. Note that Jensen's inequality states that $\mathbb{E}[g(X)] \geq g(\mathbb{E}[X])$ for any convex function $g$ of a random variable $X$.
Exercise 8.4*
Show that the discrete convolution of the input sequence X = {3, 1, 2} and the filter
F = {3, 2, 1} given by Y = X ∗ F where
$$y_i = [X * F]_i = \sum_{j=-\infty}^{\infty} x_j\, F_{i-j}.$$
Exercise 8.6***
Modify the RNN notebook to predict Coindesk prices using a univariate RNN
applied to the data coindesk.csv. Then complete the following tasks
a. Determine whether the data is stationary by applying the augmented Dickey–
Fuller test.
b. Estimate the partial autocorrelation and determine the optimum lag at the 99%
confidence level. Note that you will not be able to draw conclusions if your data
is not stationary. Choose the sequence length to be equal to this optimum lag.
c. Evaluate the MSE in-sample and out-of-sample as you vary the number of hidden
neurons. What do you conclude about the level of over-fitting?
Appendix
Question 1
Answer: 1,2,4,5. An augmented Dickey–Fuller test can be applied to time series to
determine whether they are covariance stationary.
The estimated partial autocorrelation of a covariance stationary time series can be used to identify the sequence length in the design of a plain RNN because the network has a fixed partial autocorrelation matrix.
Plain recurrent neural networks are not guaranteed to be stable—the stability
constraint restricts the choice of activation in the hidden state update.
Once the model is fitted, the Ljung–Box test is used to test whether the residual
error is auto-correlated. A well-specified model should exhibit white noise error
both in and out-of-sample.
The half-life of a lag-1 unit impulse is the number of lags before the impulse has
half its effect on the model output.
Question 2
Answer: 1,4. A gated recurrent unit uses dynamic exponential smoothing to
propagate a hidden state with infinite memory. However, there is no requirement for
covariance stationarity of the data in order to fit a GRU, or LSTM. This is because
the latter are dynamic models with a time-dependent partial autocorrelation structure.
Gated recurrent units are conditionally stable—the choice of activation in the
hidden state update is especially important. For example, a tanh function for the
hidden state update satisfies the stability constraint. A GRU only has one memory,
the hidden state, whereas a LSTM indeed has an additional, cellular, memory.
Question 3
Answer: 1,2,3.
CNNs apply a collection of different, but equal width, filters to the data. Each
filter is a unit in the CNN hidden layer and is activated before using a feedforward
network for regression or classification. CNNs are sparse networks, exploiting
locality of the data, to reduce the number of weights. CNNs are especially relevant
for spatial, temporal, or even spatio-temporal datasets (e.g., implied volatility
surfaces). A dilated CNN, such as the WaveNet architecture, is appropriate for multi-
scale time series analysis—it captures a hierarchy of patterns at different resolutions
(i.e., dependencies on past lags at different frequencies, e.g., days, weeks, months).
The number of layers in a CNN is a hyperparameter that must be chosen manually, for example by cross-validation; it is not determined automatically during training.
Python Notebooks
References
Baldi, P., & Hornik, K. (1989, January). Neural networks and principal component analysis:
Learning from examples without local minima. Neural Netw., 2(1), 53–58.
Borovykh, A., Bohte, S., & Oosterlee, C. W. (2017, Mar). Conditional time series forecasting with
convolutional neural networks. arXiv e-prints, arXiv:1703.04691.
Elman, J. L. (1991, Sep). Distributed representations, simple recurrent networks, and grammatical
structure. Machine Learning, 7(2), 195–225.
Gers, F. A., Eck, D., & Schmidhuber, J. (2001). Applying LSTM to time series predictable through
time-window approaches (pp. 669–676). Berlin, Heidelberg: Springer Berlin Heidelberg.
Graves, A. (2012). Supervised sequence labelling with recurrent neural networks. Studies in
Computational intelligence. Heidelberg, New York: Springer.
Heaton, J. B., Polson, N. G., & Witte, J. H. (2017). Deep learning for finance: deep portfolios.
Applied Stochastic Models in Business and Industry, 33(1), 3–12.
Hochreiter, S., & Schmidhuber, J. (1997, November). Long short-term memory. Neural Com-
put., 9(8), 1735–1780.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolu-
tional neural networks. In Advances in neural information processing systems (pp. 1097–1105).
Litterman, R. B., & Scheinkman, J. (1991). Common factors affecting bond returns. The Journal
of Fixed Income, 1(1), 54–61.
Plaut, E. (2018, Apr). From principal subspaces to principal components with linear autoencoders.
arXiv e-prints, arXiv:1804.10253.
van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., et al. (2016).
WaveNet: A generative model for raw audio. CoRR, abs/1609.03499.
Zheng, J., Xu, C., Zhang, Z., & Li, X. (2017, March). Electric load forecasting in smart grids using
long-short-term-memory based recurrent neural network. In 2017 51st Annual Conference on
Information Sciences and Systems (CISS) (pp. 1–6).
Part III
Sequential Data with Decision-Making
Chapter 9
Introduction to Reinforcement Learning
This chapter introduces Markov Decision Processes and the classical methods of
dynamic programming, before building familiarity with the ideas of reinforcement
learning and other approximate methods for solving MDPs. After describing Bell-
man optimality and iterative value and policy updates, the chapter quickly advances towards a more engineering-style exposition of the topic, covering key computational concepts such as greediness, batch learning, and Q-learning. Through a number of mini-case studies, the chapter provides insight
into how RL is applied to optimization problems in asset management and trading.
1 Introduction
In the previous chapters, we dealt with supervised and unsupervised learning. Recall
that supervised learning involves training an agent to produce an output given an
input, where a teacher provides some training examples of input–output pairs. The
task of the agent is to generalize from these examples, that is to find a function that
produces outputs given inputs which are consistent with examples provided by the
teacher. In unsupervised learning, the task is again to generalize, i.e. provide some
outputs given inputs; however, there is no teacher to provide examples of a “ground
truth.”
In this chapter, we address a different type of learning where an agent should
learn to act optimally, given a certain goal, in a setting of sequential decision-making
given a state of its environment. The latter serves as an input, while the agent’s
actions are outputs. Acting optimally given a goal is mathematically formulated
as a problem of maximization of a certain objective function. Problems of this sort belong to the area of machine learning known as reinforcement learning (RL). This area of machine learning is tremendously important in trading and investment management.
Chapter Objectives
By the end of this chapter, the reader should expect to accomplish the following:
– Gain familiarity with Markov Decision Processes;
– Understand the Bellman equation and classical methods of dynamic program-
ming;
– Gain familiarity with the ideas of reinforcement learning and other approximate
methods of solving MDPs;
– Understand the difference between off-policy and on-policy learning algorithms;
and
– Gain insight into how RL is applied to optimization problems in asset manage-
ment and trading.
The notebooks provided in the accompanying source code repository accompany
many of the examples in this chapter. See Appendix “Python Notebooks” for further
details.
The task of finding an optimal mapping of inputs into outputs given an objective
function looks superficially similar to tasks of both supervised and unsupervised
learning. Indeed, in all these cases, and in a sense in all problems of machine
learning in general, the objective is always formulated as a sample-based problem
of mapping some inputs into some outputs given some criteria for optimality of
such mapping. This can be generally viewed as a special case of optimization or
alternatively as a special case of function approximation. There are however at least
three distinct differences between the problem setting in RL and the settings of both
supervised learning and unsupervised learning.
The first difference is the presence and role of a teacher. In RL, like supervised
learning and unlike unsupervised learning, there is a teacher. However, the feedback provided by the teacher to an agent is different from the feedback given by a teacher in supervised learning. In the latter case, a teacher gives correct outputs for a given
training dataset. The role of a supervised learning algorithm is to generalize from
such explicit examples, i.e. to provide a function that maps any inputs to outputs,
including inputs not encountered in the training set.
In reinforcement learning, a teacher provides only partial feedback for actions taken by an agent. Such partial feedback is given in terms of rewards that the agent receives upon taking a certain action. Rewards have numerical values; therefore, a
higher reward for a particular action generally implies that this particular action
taken by the agent is better than other actions that would produce lower rewards.
However, there is no explicit information from a teacher on what action is the best
or is “right” to produce the highest reward possible. Therefore, a teacher in this case
provides only a partial feedback to an agent during training. The goal of a RL agent
is to maximize a total cumulative reward over a sequence of steps.