Nonlinear Estimation
Methods and Applications with
Deterministic Sample Points

Shovan Bhaumik
Paresh Date
MATLAB® and Simulink® are registered trademarks of The MathWorks, Inc. and are
used with permission. The MathWorks does not warrant the accuracy of the text or
exercises in this book. This book’s use or discussion of MATLAB® and Simulink®
software or related products does not constitute endorsement or sponsorship by The
MathWorks of a particular pedagogical approach or particular use of the MATLAB®
and Simulink® software.

CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2020 by Taylor & Francis Group, LLC


CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works

Printed on acid-free paper

International Standard Book Number-13: 978-0-8153-9432-7 (Hardback)

This book contains information obtained from authentic and highly regarded sources. Rea-
sonable efforts have been made to publish reliable data and information, but the author
and publisher cannot assume responsibility for the validity of all materials or the conse-
quences of their use. The authors and publishers have attempted to trace the copyright
holders of all material reproduced in this publication and apologize to copyright holders if
permission to publish in this form has not been obtained. If any copyright material has not
been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted,
reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other
means, now known or hereafter invented, including photocopying, microfilming, and record-
ing, or in any information storage or retrieval system, without written permission from the
publishers.

For permission to photocopy or use material electronically from this work, please access
www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Cen-
ter, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-
for-profit organization that provides licenses and registration for a variety of users. For
organizations that have been granted a photocopy license by the CCC, a separate system
of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trade-


marks, and are used only for identification and explanation without intent to infringe.

Library of Congress Control Number: 2019946259

Visit the Taylor & Francis Web site at


http://www.taylorandfrancis.com

and the CRC Press Web site at


http://www.crcpress.com
To my parents
Shovan

To Bhagyashree
Paresh
Contents

Preface xiii

About the Authors xvii

Abbreviations xix

Symbol Description xxi

1 Introduction 1

1.1 Nonlinear systems . . . . . . . . . . . . . . . . . . . . . . . . 2


1.1.1 Continuous time state space model . . . . . . . . . . . 2
1.1.2 Discrete time state space model . . . . . . . . . . . . . 3
1.2 Discrete time systems with noises . . . . . . . . . . . . . . . 5
1.2.1 Solution of discrete time LTI system . . . . . . . . . . 6
1.2.2 States as a Markov process . . . . . . . . . . . . . . . 6
1.3 Stochastic filtering problem . . . . . . . . . . . . . . . . . . . 7
1.4 Maximum likelihood and maximum a posteriori estimate . . . 8
1.4.1 Maximum likelihood (ML) estimator . . . . . . . . . . 8
1.4.2 Maximum a posteriori (MAP) estimate . . . . . . . . 9
1.5 Bayesian framework of filtering . . . . . . . . . . . . . . . . . 9
1.5.1 Bayesian statistics . . . . . . . . . . . . . . . . . . . . 9
1.5.2 Recursive Bayesian filtering: a conceptual solution . . . 10
1.6 Particle filter . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.6.1 Importance sampling . . . . . . . . . . . . . . . . . . . 12
1.6.2 Resampling . . . . . . . . . . . . . . . . . . . . . . . . 15
1.7 Gaussian filter . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.7.1 Propagation of mean and covariance of a linear
system . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.7.2 Nonlinear filter with Gaussian approximations . . . . 19
1.8 Performance measure . . . . . . . . . . . . . . . . . . . . . . 22
1.8.1 When truth is known . . . . . . . . . . . . . . . . . . 22
1.8.2 When truth is unknown . . . . . . . . . . . . . . . . . 23
1.9 A few applications . . . . . . . . . . . . . . . . . . . . . . . . 23
1.9.1 Target tracking . . . . . . . . . . . . . . . . . . . . . . 23
1.9.2 Navigation . . . . . . . . . . . . . . . . . . . . . . . . 24


1.9.3 Process control . . . . . . . . . . . . . . . . . . . . . . 24


1.9.4 Weather prediction . . . . . . . . . . . . . . . . . . . . 24
1.9.5 Estimating state-of-charge (SoC) . . . . . . . . . . . . 24
1.10 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.11 Organization of chapters . . . . . . . . . . . . . . . . . . . . 25

2 The Kalman filter and the extended Kalman filter 27

2.1 Linear Gaussian case (the Kalman filter) . . . . . . . . . . . 27


2.1.1 Kalman filter: a brief history . . . . . . . . . . . . . . 27
2.1.2 Assumptions . . . . . . . . . . . . . . . . . . . . . . . 28
2.1.3 Derivation . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.1.4 Properties: convergence and stability . . . . . . . . . . 31
2.1.5 Numerical issues . . . . . . . . . . . . . . . . . . . . . 32
2.1.6 The information filter . . . . . . . . . . . . . . . . . . 33
2.1.7 Consistency of state estimators . . . . . . . . . . . . . 34
2.1.8 Simulation example for the Kalman filter . . . . . . . 35
2.1.9 MATLAB®-based filtering exercises . . . . . . . . . . 37
2.2 The extended Kalman filter (EKF) . . . . . . . . . . . . . . 38
2.2.1 Simulation example for the EKF . . . . . . . . . . . . 40
2.3 Important variants of the EKF . . . . . . . . . . . . . . . . . 43
2.3.1 The iterated EKF (IEKF) . . . . . . . . . . . . . . . . 43
2.3.2 The second order EKF (SEKF) . . . . . . . . . . . . . 45
2.3.3 Divided difference Kalman filter (DDKF) . . . . . . . 45
2.3.4 MATLAB-based filtering exercises . . . . . . . . . . . 49
2.4 Alternative approaches towards nonlinear filtering . . . . . . 49
2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

3 Unscented Kalman filter 51

3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.2 Sigma point generation . . . . . . . . . . . . . . . . . . . . . 52
3.3 Basic UKF algorithm . . . . . . . . . . . . . . . . . . . . . . 54
3.3.1 Simulation example for the unscented Kalman
filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.4 Important variants of the UKF . . . . . . . . . . . . . . . . . 60
3.4.1 Spherical simplex unscented transformation . . . . . . 60
3.4.2 Sigma point filter with 4n + 1 points . . . . . . . . . . 61
3.4.3 MATLAB-based filtering exercises . . . . . . . . . . . 64
3.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

4 Filters based on cubature and quadrature points 65

4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.2 Spherical cubature rule of integration . . . . . . . . . . . . . 66

4.3 Gauss-Laguerre rule of integration . . . . . . . . . . . . . . . 67


4.4 Cubature Kalman filter . . . . . . . . . . . . . . . . . . . . . 68
4.5 Cubature quadrature Kalman filter . . . . . . . . . . . . . . 70
4.5.1 Calculation of cubature quadrature (CQ) points . . . 70
4.5.2 CQKF algorithm . . . . . . . . . . . . . . . . . . . . . 71
4.6 Square root cubature quadrature Kalman filter . . . . . . . . 75
4.7 High-degree (odd) cubature quadrature Kalman filter . . . . 77
4.7.1 Approach . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.7.2 High-degree cubature rule . . . . . . . . . . . . . . . . 77
4.7.3 High-degree cubature quadrature rule . . . . . . . . . 79
4.7.4 Calculation of HDCQ points and weights . . . . . . . 80
4.7.5 Illustrations . . . . . . . . . . . . . . . . . . . . . . . . 80
4.7.6 High-degree cubature quadrature Kalman filter . . . . 86
4.8 Simulation examples . . . . . . . . . . . . . . . . . . . . . . . 87
4.8.1 Problem 1 . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.8.2 Problem 2 . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

5 Gauss-Hermite filter 95

5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.2 Gauss-Hermite rule of integration . . . . . . . . . . . . . . . 96
5.2.1 Single dimension . . . . . . . . . . . . . . . . . . . . . 96
5.2.2 Multidimensional integral . . . . . . . . . . . . . . . . 97
5.3 Sparse-grid Gauss-Hermite filter (SGHF) . . . . . . . . . . . 99
5.3.1 Smolyak’s rule . . . . . . . . . . . . . . . . . . . . . . 100
5.4 Generation of points using moment matching method . . . . 104
5.5 Simulation examples . . . . . . . . . . . . . . . . . . . . . . . 105
5.5.1 Tracking an aircraft . . . . . . . . . . . . . . . . . . . 105
5.6 Multiple sparse-grid Gauss-Hermite filter (MSGHF) . . . . . 109
5.6.1 State-space partitioning . . . . . . . . . . . . . . . . . 109
5.6.2 Bayesian filtering formulation for multiple
approach . . . . . . . . . . . . . . . . . . . . . . . . . 110
5.6.3 Algorithm of MSGHF . . . . . . . . . . . . . . . . . . 111
5.6.4 Simulation example . . . . . . . . . . . . . . . . . . . 113
5.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

6 Gaussian sum filters 117

6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 117


6.2 Gaussian sum approximation . . . . . . . . . . . . . . . . . . 118
6.2.1 Theoretical foundation . . . . . . . . . . . . . . . . . . 118
6.2.2 Implementation . . . . . . . . . . . . . . . . . . . . . . 120
6.2.3 Multidimensional systems . . . . . . . . . . . . . . . . 121
6.3 Gaussian sum filter . . . . . . . . . . . . . . . . . . . . . . . 122

6.3.1 Time update . . . . . . . . . . . . . . . . . . . . . . . 122


6.3.2 Measurement update . . . . . . . . . . . . . . . . . . . 123
6.4 Adaptive Gaussian sum filtering . . . . . . . . . . . . . . . . 124
6.5 Simulation results . . . . . . . . . . . . . . . . . . . . . . . . 125
6.5.1 Problem 1: Single dimensional nonlinear system . . . . 125
6.5.2 RADAR target tracking problem . . . . . . . . . . . . 129
6.5.3 Estimation of harmonics . . . . . . . . . . . . . . . . . 133
6.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

7 Quadrature filters with randomly delayed measurements 139

7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 139


7.2 Kalman filter for one step randomly delayed
measurements . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.3 Nonlinear filters for one step randomly delayed
measurements . . . . . . . . . . . . . . . . . . . . . . . . . . 143
7.3.1 Assumptions . . . . . . . . . . . . . . . . . . . . . . . 144
7.3.2 Measurement noise estimation . . . . . . . . . . . . . 144
7.3.3 State estimation . . . . . . . . . . . . . . . . . . . . . 145
7.4 Nonlinear filter for any arbitrary step randomly
delayed measurement . . . . . . . . . . . . . . . . . . . . . . 146
7.4.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 153
7.5 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
7.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155

8 Continuous-discrete filtering 159

8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 159


8.2 Continuous time filtering . . . . . . . . . . . . . . . . . . . . 160
8.2.1 Continuous filter for a linear Gaussian system . . . . . 161
8.2.2 Nonlinear continuous time system . . . . . . . . . . . 167
8.2.2.1 The extended Kalman-Bucy filter . . . . . . 167
8.3 Continuous-discrete filtering . . . . . . . . . . . . . . . . . . 168
8.3.1 Nonlinear continuous time process model . . . . . . . 171
8.3.2 Discretization of process model using
Runge-Kutta method . . . . . . . . . . . . . . . . . . 172
8.3.3 Discretization using Ito-Taylor expansion of
order 1.5 . . . . . . . . . . . . . . . . . . . . . . . . . 172
8.3.4 Continuous-discrete filter with deterministic
sample points . . . . . . . . . . . . . . . . . . . . . . . 174
8.4 Simulation examples . . . . . . . . . . . . . . . . . . . . . . . 176
8.4.1 Single dimensional filtering problem . . . . . . . . . . 176
8.4.2 Estimation of harmonics . . . . . . . . . . . . . . . . . 177
8.4.3 RADAR target tracking problem . . . . . . . . . . . . 179
8.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186

9 Case studies 187

9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 187


9.2 Bearing only underwater target tracking problem . . . . . . 188
9.3 Problem formulation . . . . . . . . . . . . . . . . . . . . . . . 189
9.3.1 Tracking scenarios . . . . . . . . . . . . . . . . . . . . 190
9.4 Shifted Rayleigh filter (SRF) . . . . . . . . . . . . . . . . . . 191
9.5 Gaussian sum shifted Rayleigh filter (GS-SRF) . . . . . . . . 193
9.5.1 Bearing density . . . . . . . . . . . . . . . . . . . . . . 194
9.6 Continuous-discrete shifted Rayleigh filter
(CD-SRF) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
9.6.1 Time update of CD-SRF . . . . . . . . . . . . . . . . . 196
9.7 Simulation results . . . . . . . . . . . . . . . . . . . . . . . . 196
9.7.1 Filter initialization . . . . . . . . . . . . . . . . . . . . 199
9.7.2 Performance criteria . . . . . . . . . . . . . . . . . . . 201
9.7.3 Performance analysis of Gaussian sum filters . . . . . 201
9.7.4 Performance analysis of continuous-discrete
filters . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
9.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
9.9 Tracking of a ballistic target . . . . . . . . . . . . . . . . . . 216
9.10 Problem formulation . . . . . . . . . . . . . . . . . . . . . . . 219
9.10.1 Process model . . . . . . . . . . . . . . . . . . . . . . 219
9.10.1.1 Process model in discrete domain . . . . . . 219
9.10.1.2 Process model in continuous time
domain . . . . . . . . . . . . . . . . . . . . . 220
9.10.2 Seeker measurement model . . . . . . . . . . . . . . . 220
9.10.3 Target acceleration model . . . . . . . . . . . . . . . . 223
9.11 Proportional navigation guidance (PNG) law . . . . . . . . . 225
9.12 Simulation results . . . . . . . . . . . . . . . . . . . . . . . . 226
9.12.1 Performance of adaptive Gaussian sum filters . . . . . 228
9.12.2 Performance of continuous-discrete filters . . . . . . . 229
9.13 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230

Bibliography 235

Index 251
Preface

This book deals with nonlinear state estimation. It is well known that, for
a linear system and additive Gaussian noise, an optimal solution is available
for the state estimation problem. This celebrated solution is known as the
Kalman filter. However, if the system is nonlinear, the posterior and the
prior probability density functions (pdfs) are no longer Gaussian. For such
systems, no optimal solution is available in general. The primitive approach
is to linearize the system and apply the Kalman filter. The method is known
as the extended Kalman filter (EKF). However, the estimate fails to converge
in many cases if the system is highly nonlinear. To overcome the limitations
associated with the extended Kalman filter, many techniques have been proposed.
All the post-EKF techniques can be divided into two categories, namely (i)
the estimation with probabilistic sample points and (ii) the estimation with
deterministic sample points. The probabilistic sample point methods approx-
imately reconstruct the posterior and the prior pdfs with the help of many
points in the state space (also known as particles) sampled from an appropri-
ate probability distribution and their associated probability weights. On the
other hand, deterministic sample point techniques approximate the posterior
and the prior pdfs with a multidimensional Gaussian distribution and calcu-
late the mean and covariance with a few wisely chosen points and weights.
For this reason, they are also called Gaussian filters. They are popular in real
time applications due to their ease of implementation and faster execution,
when compared to the techniques based on probabilistic sample points.
There are good books on filtering with probabilistic sample points, i.e.,
particle filtering. However, the same is not true for approximate Gaussian fil-
ters. Moreover, over the last few years there has been considerable development
on this topic. This motivated us to write a book which presents complete
coverage of Bayesian estimation with deterministic sample points. The purpose
of the book is to educate readers about all the available Gaussian estimators.
Learning the various available methods is essential for a designer because, in
filtering, there is no ‘holy grail’ which always provides the best result
irrespective of the problem encountered. In other words, the best choice of
estimator is highly problem specific.
There are prerequisites to understanding the material presented in this book.
These include (i) an understanding of linear algebra and linear systems, (ii)
Bayesian probability theory, and (iii) state space analysis. Assuming the readers
are exposed to the above prerequisites, the book starts with the conceptual
solution of the nonlinear estimation problems and describes all the Gaus-
filters in depth with rigorous mathematical analysis.
The style of writing is suitable for engineers and scientists. The material
of the book is presented with the emphasis on key ideas, underlying assump-
tions behind them, algorithms, and properties. In this book, readers will get
a comprehensive idea and understanding about the approximate solutions of
the nonlinear estimation problem. The designers, who want to implement the
filters, will benefit from the algorithms, flow charts and MATLAB® code pro-
vided in the book. Rigorous, state of the art mathematical treatment will also
be provided where relevant, for the analyst who wants to analyze the algorithm
in depth for deeper understanding and further contribution. Further, begin-
ners can verify their understanding with the help of numerical illustrations
and MATLAB codes.
The book contains nine chapters. It starts with the formulation of the state
estimation problem and the conceptual solution of it. Chapter 2 provides an
optimal solution of the problem for a linear system and Gaussian noises. Fur-
ther, it provides a detailed overview of several nonlinear estimators available
in the literature. The next chapter deals with the unscented Kalman filter.
Chapters 4 and 5 describe cubature and quadrature based Kalman filters, the
Gauss-Hermite filter and their variants respectively. The next chapter presents
the Gaussian sum filter, where the prior and the posterior pdfs are approxi-
mated with the weighted sum of several Gaussian pdfs. Chapter 7 considers the
problem where measurements are randomly delayed. Such filters are finding
more and more applications in networked control systems. Chapter 8 presents
an estimation method for the continuous-discrete system. Such systems nat-
urally arise because process equations are in continuous time domain as they
are modeled from physical laws and the measurement equations are in discrete
time domain as they arrive from the sampled sensor measurement. Finally, in
the last chapter two case studies namely (i) bearing only underwater target
tracking and (ii) tracking a ballistic target on reentry have been considered.
All the Gaussian filters are applied to them and results are compared. Readers
are suggested to start with the first two chapters because the rest of the book
depends on them. Next, the reader can either read all the chapters from 3 to
6, or any of them (based on necessity). In other words, Chapters 3-6 are not
dependent on one another. However, to read Chapters 7 to 9 understanding
of the previous chapters is required.
This book is an outcome of many years of our research work, which was
carried out with the active participation of our PhD students. We are thank-
ful to them. Particularly, we would like to express our special appreciation
and thanks to Dr Rahul Radhakrishnan and Dr Abhinoy Kumar Singh. Fur-
ther, we thank anonymous reviewers, who reviewed our book proposal, for
their constructive comments, which helped to improve the quality of the book. We
acknowledge the help of Mr Rajesh Kumar for drawing some of the figures
included in the book. Finally, we would like to acknowledge with gratitude,
the support and love of our families, who helped us to move forward; this
book would not have been possible without them.
We hope that the book will make a significant contribution to the literature
of Bayesian estimation and that readers will appreciate the effort. Further, it is
anticipated that the book will open up many new avenues of both theoretical
and applied research in various fields of science and technology.

Shovan Bhaumik
Paresh Date

MATLAB® and Simulink® are registered trademarks of The MathWorks,
Inc. For product information, please contact:

The MathWorks, Inc.


3 Apple Hill Drive
Natick, MA, 01760-2098 USA
Tel: 508-647-7000
Fax: 508-647-7001
E-mail: [email protected]
Web: https://www.mathworks.com
About the Authors

Dr. Shovan Bhaumik was born in Kolkata, India, in


1978. He received the B.Sc. degree in Physics in 1999 from
Calcutta University, Kolkata, India, the B.Tech degree in
Instrumentation and Electronics Engineering in 2002, the
Master of Control System Engineering degree in 2004, and
the PhD degree in Electrical Engineering in 2009, all from
Jadavpur University, Kolkata, India.
He is currently Associate Professor of Electrical Engi-
neering at Indian Institute of Technology Patna, India. From May 2007 to June
2009, he was a Research Engineer at GE Global Research, John F Welch Tech-
nology Centre, Bangalore, India. From July 2009 to March 2017, he was an
Assistant Professor of Electrical Engineering at Indian Institute of Technology
Patna.
Shovan Bhaumik’s research interests include nonlinear estimation, statis-
tical signal processing, aerospace and underwater target tracking, and net-
worked control systems. He has published more than 20 papers in refereed in-
ternational journals. He is a holder of the Young Faculty Research Fellowship
(YFRF) award from the Ministry of Electronics and Information Technology,
MeitY, Government of India.

Dr. Paresh Date was born in 1971 in Mumbai, India.


He completed his B.E. in Electronics and Telecommuni-
cation in 1993 from Pune University, India, his M.Tech.
in Control and Instrumentation in 1995 from the Indian
Institute of Technology Bombay (Mumbai), India and his
doctoral studies in engineering at Cambridge University in
2001. His studies were funded by the Cambridge Common-
wealth Trust (under the Cambridge Nehru Fellowship) and
the CVCP, UK. He worked as a postdoctoral researcher at the University of
Cambridge from 2000 to 2002. He joined Brunel University London in 2002,
where he is currently a senior lecturer and Director of Research in the De-
partment of Mathematics.
Dr. Date’s principal research interests include filtering and its applications,
especially in financial mathematics. He has published more than 50 refereed
papers and supervised 10 PhD students to completion as their principal su-
pervisor. His research has been funded by grants from the Engineering and
Physical Sciences Research Council, UK, from charitable bodies such as the
London Mathematical Society, the Royal Society and from the industry. He
has held visiting positions at universities in Australia, Canada and India. He is
a Fellow of the Institute of Mathematics and its Applications and an Associate
Editor for the IMA Journal of Management Mathematics.
Abbreviations

AGS Adaptive Gaussian sum


AGSF Adaptive Gaussian sum filter
ATC Air traffic control
BOT Bearings-only tracking
CKF Cubature Kalman filter
CD Continuous-discrete
CD-CKF Continuous-discrete cubature Kalman filter
CD-CQKF Continuous-discrete cubature quadrature Kalman filter
CD-EKF Continuous-discrete extended Kalman filter
CDF Central difference filter
CD-GHF Continuous-discrete Gauss-Hermite filter
CD-NUKF Continuous-discrete new unscented Kalman filter
CD-SGHF Continuous-discrete sparse-grid Gauss-Hermite filter
CD-SRF Continuous-discrete shifted Rayleigh filter
CD-UKF Continuous-discrete unscented Kalman filter
CQKF Cubature quadrature Kalman filter
CQKF-RD Cubature Kalman filter for random delay
CRLB Cramer Rao lower bound
DDF Divided difference filter
EKF Extended Kalman filter
GHF Gauss-Hermite filter
GPS Global positioning system
GSF Gaussian sum filter
GS-EKF Gaussian sum extended Kalman filter
GS-CKF Gaussian sum cubature Kalman filter
GS-CQKF Gaussian sum cubature quadrature Kalman filter
GS-GHF Gaussian sum Gauss-Hermite filter
GS-NUKF Gaussian sum new unscented Kalman filter
GS-SGHF Gaussian sum sparse-grid Gauss-Hermite filter
GS-SRF Gaussian sum shifted Rayleigh filter
GS-UKF Gaussian sum unscented Kalman filter
HDCKF High-degree cubature Kalman filter
HDCQKF High-degree cubature quadrature Kalman filter
IEKF Iterated extended Kalman filter
MSGHF Multiple sparse-grid Gauss-Hermite filter
NEES Normalized estimation error squared


NIS Normalized innovation squared


NUKF New unscented Kalman filter
pdf probability density function
PF Particle filter
PNG Proportional navigation guidance
RADAR Radio detection and ranging
RMSE Root mean square error
RSUKF Risk sensitive unscented Kalman filter
SDC State dependent coefficient
SEKF Second order extended Kalman filter
SGHF Sparse-grid Gauss-Hermite filter
SGQ Sparse-grid quadrature
SIS Sequential importance sampling
SIS-PF Sequential importance sampling particle filter
SRCKF Square-root cubature Kalman filter
SRCQKF Square-root cubature quadrature Kalman filter
SRF Shifted Rayleigh filter
SRNUKF Square-root new unscented Kalman filter
SRUKF Square-root unscented Kalman filter
SoC State of charge
SONAR Sound navigation and ranging
TMA Target motion analysis
UKF Unscented Kalman filter
Symbol Description

X        State vector
Y        Measurement
U        Input vector
Q        Process noise covariance
R        Measurement noise covariance
Z        Delayed measurement
p        Probability density function; latency parameter of delay
E        Expectation
Σ        Error covariance matrix
ΣXX      Error covariance matrix of states
ΣYY      Error covariance matrix of measurement
ΣXY      Cross covariance of state and measurement
η        Process noise
v        Measurement noise
X̂        Expectation of state vector
k        Time step
Rn       Real space of dimension n
N        Normal distribution
φ(·)     Process nonlinear function
γ(·)     Measurement nonlinear function
K        Kalman gain
A        System matrix
B        Input matrix
C        Measurement matrix
S        Union of sample points; Cholesky factor
Ri       ith order Runge-Kutta operator
∇γk      Jacobian of γk (·)
P        Probability
Chapter 1
Introduction

1.1 Nonlinear systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2


1.1.1 Continuous time state space model . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Discrete time state space model . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Discrete time systems with noises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Solution of discrete time LTI system . . . . . . . . . . . . . . . . . . . . 6
1.2.2 States as a Markov process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Stochastic filtering problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Maximum likelihood and maximum a posteriori estimate . . . . . . . 8
1.4.1 Maximum likelihood (ML) estimator . . . . . . . . . . . . . . . . . . . . 8
1.4.2 Maximum a posteriori (MAP) estimate . . . . . . . . . . . . . . . . . 9
1.5 Bayesian framework of filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5.1 Bayesian statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5.2 Recursive Bayesian filtering: a conceptual solution . . . . . 10
1.6 Particle filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.6.1 Importance sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.6.2 Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.7 Gaussian filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.7.1 Propagation of mean and covariance of a linear
system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.7.2 Nonlinear filter with Gaussian approximations . . . . . . . . . 19
1.8 Performance measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.8.1 When truth is known . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.8.2 When truth is unknown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.9 A few applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.9.1 Target tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.9.2 Navigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.9.3 Process control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.9.4 Weather prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.9.5 Estimating state-of-charge (SoC) . . . . . . . . . . . . . . . . . . . . . . . . 24
1.10 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.11 Organization of chapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25


1.1 Nonlinear systems


All of us know that the dynamical behavior of a system can be described
with mathematical models. For example, the flight path of an airplane sub-
jected to engine thrust and under certain deflection of control surfaces, or the
position and velocity of an underwater target can be described using mathe-
matical models. The dynamics of a system is generally described with the help
of a set of first order differential equations. Such a representation of a system
is known as a state space model. The variables (which generally represent
physical parameters) are known as states. Higher order differential equation
models can often be re-formulated as augmented, first order differential equa-
tion models.

1.1.1 Continuous time state space model


From the above discussion, we see that a system can be represented with
a set of variables (known as states), and the input-output relationship can be
represented with a set of first order differential equations. In mathematical
language, X = [x1 x2 · · · xn ]T represents a state vector in n dimensional real
space. In general, in continuous time a dynamic system can be represented by
equations of the form [3]

ẋi = fi (x1 , x2 , · · · , xn , u1 , u2 , · · · , um , t), i = 1, · · · , n, (1.1)

where ui , i = 1, · · · , m, denotes the external (or exogenous) inputs to the
system. The output of the system is obtained from sensor measurements, which
can be represented as

yi = gi (x1 , x2 , · · · , xn , u1 , u2 , · · · , um , t), i = 1, · · · , p. (1.2)

Here fi and gi are nonlinear real valued functions of the states, inputs and
time. A complete description of such a system could be obtained if Eqs. (1.1)
and (1.2) are known along with a set of initial conditions of the state vector. In a
compact form and with matrix-vector notation, the above two equations can
be represented as
Ẋ = f (X , U, t), (1.3)
and
Y = g(X , U, t), (1.4)
where f = [f1 f2 · · · fn ]T , g = [g1 g2 · · · gp ]T , and U = [u1 u2 · · · um ]T .
Moreover, X ∈ Rn , Y ∈ Rp , and U ∈ Rm .
Important special cases of Eqs. (1.3), (1.4) are the linear time varying state
and output equations given by

Ẋ = A(t)X + B(t)U, (1.5)


and
Y = C(t)X + D(t)U, (1.6)
where A(t) ∈ Rn×n , B(t) ∈ Rn×m , C(t) ∈ Rp×n , and D(t) ∈ Rp×m ,
respectively.
Now, if the system is time invariant, the state and output equations become

Ẋ = AX + BU, (1.7)

and
Y = CX + DU, (1.8)
where A, B, C, D are real matrices with appropriate dimensions. The process
equation is a first order differential equation and can be solved to obtain
X (t) if the initial condition is known [88].
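
As a simple illustration (not from the text), Eq. (1.7) can be integrated
numerically; the following minimal MATLAB® sketch uses ode45 with a
hypothetical second order system, a step input and an assumed initial condition.

% Minimal sketch: simulate the continuous time LTI system of Eq. (1.7),
% Xdot = A*X + B*U, for a hypothetical second order system with a step input.
A = [0 1; -2 -3];            % hypothetical system matrix
B = [0; 1];                  % hypothetical input matrix
U = 1;                       % constant (step) input
X0 = [0; 0];                 % assumed initial condition
xdot = @(t, X) A*X + B*U;    % right hand side of Eq. (1.7)
[t, X] = ode45(xdot, [0 10], X0);   % integrate over 10 seconds
plot(t, X); xlabel('time (s)'); ylabel('states');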

1.1.2 Discrete time state space model


The state vector of a dynamic system in a discrete time domain can be rep-
resented as Xk = [x1,k x2,k · · · xn,k ]T , where Xk ∈ Rn , and k ∈ {0, 1, · · · , N }
is the step count. Multiplying the step count by the sampling time T , i.e.,
kT , gives the corresponding time instant. Similar to Eqs. (1.1) and (1.2), in a discrete time
nonlinear system the process and measurement equations can be written as

xi,k+1 = φi,k (x1,k , x2,k , · · · , xn,k , u1,k , u2,k , · · · , um,k , k), i = 1, · · · , n, (1.9)

where ui,k , i = 1, · · · , m, denotes the ith input of the system at any time step
k. The output of the system, which generally consists of sensor measurements,
can be represented as

yi,k = γi,k (x1,k , x2,k , · · · , xn,k , u1,k , u2,k , · · · , um,k , k), i = 1, · · · , p. (1.10)
In a compact form and with vector-matrix notation, the above two equations
can be represented as
Xk+1 = φk (Xk , Uk , k), (1.11)
and
Yk = γk (Xk , Uk , k), (1.12)
where φk (·) and γk (·) are arbitrary nonlinear functions, and Xk ∈ Rn , Yk ∈ Rp ,
and Uk ∈ Rm .
For a linear time varying discrete system the process and measurement
equations become
Xk+1 = Ak Xk + Bk Uk , (1.13)
and
Yk = Ck Xk + Dk Uk , (1.14)

where Ak , Bk , Ck , Dk are real matrices with appropriate dimensions. For a


time invariant linear system the above two equations can further be written
as
Xk+1 = AXk + BUk , (1.15)
and
Yk = CXk + DUk . (1.16)
Eq. (1.15) is a difference equation and can be solved recursively if the
initial state vector is known.
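
For example, the recursion can be carried out directly in MATLAB®; the sketch
below is a minimal illustration with hypothetical matrices and a known initial
state.

% Minimal sketch: solve the difference equation (1.15) recursively,
% X_{k+1} = A*X_k + B*U_k, with hypothetical matrices and a known X_0.
A = [0.9 0.1; 0 0.8]; B = [0; 1];      % hypothetical system matrices
N = 50;
X = zeros(2, N+1); X(:, 1) = [1; 0];   % column k+1 stores X_k
for k = 1:N
    Uk = 1;                            % constant input at every step
    X(:, k+1) = A*X(:, k) + B*Uk;      % one step of Eq. (1.15)
end
stairs(0:N, X'); xlabel('step k'); ylabel('states');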
Very often in the filtering literature, the term Uk is dropped, keeping in mind
that it can easily be incorporated into the final algorithm even if it is not
considered during the formulation. Without loss of generality, from now onwards
we write the general process and measurement equations as

Xk+1 = φk (Xk , k), (1.17)

and
Yk = γk (Xk , k). (1.18)
The propagation of state and measurement expressed by the above two equa-
tions could be represented by Figure 1.1.

FIGURE 1.1: Graphical representation of the evolution of state and measurement.

1.2 Discrete time systems with noises


The process equation models a system, generally using the laws of physics.
Most often, the model is not absolutely accurate. This inaccuracy arises due
to an error in modeling or the absence of full knowledge about the system and
its input. Let us take the example of tracking a maneuvering enemy aircraft
[141]. If we model the system with a constant turn rate, but it actually
maneuvers with a variable turn rate, the developed model would be inaccurate.
A popular method of handling this inaccuracy in modeling is to incorporate a
random process, which follows a certain probability distribution, into the
process model. The random variable incorporated to compensate for the modeling
error is known as process noise. The process noise could be additive or
multiplicative [1, 183]. Modeling the process uncertainty with additive noise
is most popular, and mathematically it could be written as

Xk+1 = φk (Xk , k) + ηk , (1.19)

where ηk is the process noise. A caveat is necessary on the term process noise,
which may be misleading: ηk is not actually noise; rather, it is a process
excitation. However, the term process noise is very popular in the literature,
and we shall use it throughout the book.
Some or all of the states of the system, or a function thereof, are measured
by a set of sensors. We denote these measurements as outputs of the system;
the relationship between the state vector and the outputs is governed by the
measurement equation of the system. For additive sensor noise, the measurement
equation becomes

Yk = γk (Xk , k) + vk , (1.20)

where vk is the sensor noise and is characterized by a probability density
function.
If the process and measurement noise are not additive, the process and
measurement equations in general could be written as,

Xk+1 = φk (Xk , k, ηk ),

and
Yk = γk (Xk , k, vk ).
In what follows, we will assume that the systems have additive process and
measurement noise. In addition, we will also assume that the process and
measurement noises are stationary white signals with known statistics [180].
Generally, they are described with a multidimensional Gaussian probability
density function (pdf) with zero mean and appropriate covariance.
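
As a concrete illustration of Eqs. (1.19) and (1.20), the following minimal
MATLAB® sketch simulates a scalar system with additive white Gaussian process
and measurement noises; the nonlinear functions and noise covariances below are
hypothetical choices, not taken from the text.

% Minimal sketch: simulate X_{k+1} = phi(X_k) + eta_k and
% Y_k = gamma(X_k) + v_k with zero mean Gaussian noises.
N = 100; Q = 0.1; R = 1;                  % hypothetical noise covariances
phi = @(x, k) 0.9*x + 0.2*cos(k);         % hypothetical process function
gam = @(x, k) 0.5*x.^2;                   % hypothetical measurement function
x = zeros(1, N); y = zeros(1, N); x(1) = 0.5;
for k = 1:N-1
    x(k+1) = phi(x(k), k) + sqrt(Q)*randn;   % process equation (1.19)
end
for k = 1:N
    y(k) = gam(x(k), k) + sqrt(R)*randn;     % measurement equation (1.20)
end
plot(1:N, x, 1:N, y, '.'); legend('state', 'measurement');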

1.2.1 Solution of discrete time LTI system


No analytical solution is available to solve a nonlinear discrete time system
with additive noise. However, for a linear system with additive noise, the
probability density function of the states can be evaluated at any particular
instant of time if the initial states are given. A linear state space equation
with additive noise can be written as

Xk+1 = Ak Xk + ηk ,

and

Yk = Ck Xk + vk .
From the above equations we can write

Xk+1 = Ak Xk + ηk
     = Ak Ak−1 Xk−1 + Ak ηk−1 + ηk
     ...
     = [∏_{j=0}^{k} A_{k−j}] X0 + Σ_{i=0}^{k} [∏_{j=0}^{k−i−1} A_{k−j}] ηi . (1.21)

If the upper index of the product term is lower than the lower index, the
product is taken as an identity matrix. The matrix ∏_{j=0}^{k} A_{k−j} is known
as the state transition matrix. For a discrete time LTI system, the above
equation becomes

Xk+1 = A^{k+1} X0 + Σ_{i=0}^{k} A^{k−i} ηi . (1.22)

For a nonlinear discrete state space equation, the evolution of state could be
obtained sequentially by passing the previous state, Xk , through the nonlinear
function φ(Xk , k) and adding a noise sequence.
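
The closed form solution (1.22) can be checked numerically against the
recursion; the following minimal MATLAB® sketch does so for one realization of
the noise sequence, with a hypothetical system matrix.

% Minimal sketch: verify the LTI solution (1.22) against the recursion
% X_{k+1} = A*X_k + eta_k for one realization of the noise sequence.
A = [0.95 0.1; 0 0.9];                   % hypothetical system matrix
k = 20; X0 = [1; -1];
eta = 0.1*randn(2, k+1);                 % noise samples eta_0, ..., eta_k
Xrec = X0;
for j = 0:k
    Xrec = A*Xrec + eta(:, j+1);         % recursive solution
end
Xcls = A^(k+1)*X0;                       % closed form, Eq. (1.22)
for i = 0:k
    Xcls = Xcls + A^(k-i)*eta(:, i+1);
end
disp(norm(Xrec - Xcls))                  % should be numerically zero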

1.2.2 States as a Markov process


From Eq. (1.21), the expression of state at any instant k, evolved from any
instant l < k, can be expressed as
Xk = [∏_{j=0}^{k−l−1} A_{k−1−j}] Xl + Σ_{i=l}^{k−1} [∏_{j=0}^{k−i−2} A_{k−1−j}] ηi . (1.23)

The state sequence X0:l (where X0:l means {X0 , X1 , · · · , Xl }) depends only
on η0:l−1 . The noise sequence η_{l:k−1} is independent of X0:l . So we could
write p(Xk |X0:l ) = p(Xk |Xl ), which, in turn, means that the state vector is
a Markov sequence. It should be noted that the above discussion holds true

when the process noise is white, i.e., completely unpredictable [13]. If the
noise is colored, the statement made above would not hold true, as the state
history up to time l could be used to predict the process noise sequence
incorporated during steps l, · · · , k − 1.

1.3 Stochastic filtering problem


We see that Eq. (1.19), mentioned earlier, is a stochastic difference equation.
At each point of time, the state vector is characterized by a multidimensional
probability density function. The objective of state estimation is to compute
the full joint probability distribution of the states at each time step.
Computing the full joint distribution of the states at all time steps is
computationally inefficient in real life implementations, so most of the time
marginal distributions of the states are considered. Three different types of
state estimation may be performed, as described below [137]:
• Filtering: Filtering is an operation that involves the determination of the
pdf of states at any time step k, given that current and previous mea-
surements are received. Mathematically, filtering is the determination of
p(Xk |Y1:k ), where Y1:k = {Y1 , Y2 , · · · , Yk }.
• Prediction: Prediction is an a priori form of estimation. Its aim is to
compute the marginal distribution of Xk+n (where n = 1, 2, · · · ), n steps in
advance of the current measurement. Mathematically, prediction is the
determination of p(Xk+n |Y1:k ). Unless specified otherwise, prediction
generally refers to one step ahead estimation, i.e., n = 1.
• Smoothing: Smoothing is an a posteriori estimation of the marginal
distribution of Xk when measurements are received up to step N , where N > k.
In other words, smoothing is the computation of p(Xk |Y1:N ).
The concept of filtering, prediction and smoothing is illustrated in Figure 1.2.
Throughout the book, we shall consider only the filtering problem for a non-
linear system with additive white Gaussian noise as described in Eqs. (1.19)
and (1.20). Further, a marginalized distribution of state will be considered.
It must be noted that Eq. (1.19) characterizes the state transition density
p(Xk |Xk−1 ) and Eq. (1.20) expresses the measurement density p(Yk |Xk ).
As discussed above, the objective of filtering is to estimate the pdf of
the present state at time step k, given that the observations up to the kth step
have been received, i.e., p(Xk |Y1:k ), which is essentially the posterior density
function. However, in many practical problems, the user wants a single vector
as the estimated value of the state vector rather than a description of pdf. In
such cases, the mean of the pdf is declared as the point estimate of the states.

FIGURE 1.2: Illustration of filtering, prediction and smoothing.

1.4 Maximum likelihood and maximum a posteriori estimate
1.4.1 Maximum likelihood (ML) estimator
In the non-Bayesian approach (we shall discuss the Bayesian approach very
soon) there is no a priori pdf, so Bayes’ formula cannot be used. In this case,
the pdf of the measurement conditioned on the state or parameter, known as
the likelihood, can be obtained. The likelihood function is

Λk (Xk ) ≜ p(Yk |Xk ), (1.24)

which measures how likely the state value is given the observations. The states
or parameters can be estimated by maximizing the likelihood function. Thus
the maximum likelihood estimator is

X̂k^{ML} = arg max_{Xk} Λk (Xk ) = arg max_{Xk} p(Yk |Xk ). (1.25)

The above optimization problem needs to be solved to obtain the maximum
likelihood estimate.

1.4.2 Maximum a posteriori (MAP) estimate


The maximum a posteriori (MAP) estimator maximizes the posterior pdf.
So the MAP estimate is

X̂k^{MAP} = arg max_{Xk} p(Xk |Yk ). (1.26)

With the help of Bayes’ theorem, which says

p(Xk |Yk ) = p(Yk |Xk )p(Xk ) / p(Yk ), (1.27)

the MAP estimate becomes

X̂k^{MAP} = arg max_{Xk} [p(Yk |Xk )p(Xk )]. (1.28)

Here, we drop the term p(Yk ) because it is a normalization constant and is
irrelevant for a maximization problem.
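
As a simple numerical illustration (not from the text), both estimates can be
computed over a grid for a scalar state; the measurement function, noise
covariance and prior below are hypothetical.

% Minimal sketch: grid-based ML and MAP estimates for a scalar state with
% measurement Y = 0.5*x^2 + v, v ~ N(0,R), and prior x ~ N(x0, P0).
R = 0.5; x0 = 1; P0 = 1; Y = 1.2;         % hypothetical values
xg = linspace(-4, 4, 2001);               % grid over the state space
lik   = exp(-(Y - 0.5*xg.^2).^2/(2*R));   % likelihood p(Y|x), Eq. (1.24)
prior = exp(-(xg - x0).^2/(2*P0));        % prior p(x)
[~, iML]  = max(lik);                     % ML estimate, Eq. (1.25)
[~, iMAP] = max(lik.*prior);              % MAP estimate, Eq. (1.28)
fprintf('ML estimate %.3f, MAP estimate %.3f\n', xg(iML), xg(iMAP));

Note how the prior pulls the MAP estimate toward x0, whereas the ML estimate
depends on the measurement alone.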

1.5 Bayesian framework of filtering


1.5.1 Bayesian statistics
Bayesian theory is a branch of probability theory that allows us to model the
uncertainty about the outcome of interest by incorporating prior knowledge
about the system and observational evidence with the help of Bayes’ theo-
rem. The Bayesian method which interprets the probability as a conditional
measure of uncertainty, is a very popular and useful tool as far as practical
applications are concerned. In the filtering context, the Bayesian approach
uses the prior pdf and the measurement knowledge to infer the conditional
probability of states.
At the onset, we assume that (i) the states of a system follow a first order
Markov process, so p(Xk |X0:k−1 ) = p(Xk |Xk−1 ); and (ii) the measurements are
conditionally independent given the states, so p(Yk |Y1:k−1 , Xk ) = p(Yk |Xk ).
From Bayes’ rule we have [76]

p(Xk |Y1:k ) = p(Y1:k |Xk )p(Xk ) / p(Y1:k )
            = p(Yk , Y1:k−1 |Xk )p(Xk ) / p(Yk , Y1:k−1 )
            = p(Yk |Y1:k−1 , Xk )p(Y1:k−1 |Xk )p(Xk ) / [p(Yk |Y1:k−1 )p(Y1:k−1 )]
            = p(Yk |Y1:k−1 , Xk )p(Xk |Y1:k−1 )p(Y1:k−1 )p(Xk ) / [p(Yk |Y1:k−1 )p(Y1:k−1 )p(Xk )].

As we can write p(Yk |Y1:k−1 , Xk ) = p(Yk |Xk ), the above equation becomes
p(Xk |Y1:k ) = p(Yk |Xk )p(Xk |Y1:k−1 ) / p(Yk |Y1:k−1 ). (1.29)
Eq. (1.29) expresses the posterior pdf which consists of three terms, explained
below.
• Likelihood: p(Yk |Xk ) is the likelihood, which is essentially determined
from the measurement noise model of Eq. (1.20).
• Prior: p(Xk |Y1:k−1 ) is defined as prior which can be obtained through
the Chapman-Kolmogorov equation [8],
p(Xk |Y1:k−1 ) = ∫ p(Xk |Xk−1 , Y1:k−1 ) p(Xk−1 |Y1:k−1 ) dXk−1 . (1.30)

As we assume the system to follow a first order Markov process,


p(Xk |Xk−1 , Y1:k−1 ) = p(Xk |Xk−1 ). Under such a condition,

p(Xk |Y1:k−1 ) = ∫ p(Xk |Xk−1 ) p(Xk−1 |Y1:k−1 ) dXk−1 . (1.31)

The above equation is used to construct the prior pdf. p(Xk |Xk−1 ) could
be determined from the process model of the system described in Eq.
(1.19). If the likelihood and the prior can be calculated, the posterior
pdf of the states is estimated using Bayes’ rule as described in Eq. (1.29).
• Normalization constant: The denominator of Eq. (1.29) is known as
the normalization constant, or evidence, and is expressed as

p(Yk |Y1:k−1 ) = ∫ p(Yk |Xk ) p(Xk |Y1:k−1 ) dXk . (1.32)

1.5.2 Recursive Bayesian filtering: a conceptual solution


The key equations for state estimation are Eqs. (1.31) and (1.29). It should
also be noted that they are recursive in nature. This means that the state
estimators designed with the help of Bayesian statistics, described above, have
the potential to be implemented on a digital computer. The estimation may start
with an initial pdf of the states, p(X0 |Y0 ), and may continue recursively. To
summarize, the estimation of the posterior pdf of states can be performed
with two steps:
• Time update: The time update, alternatively known as the prediction step,
for a nonlinear filter is performed with the help of the process model
described in Eq. (1.19) and the Chapman-Kolmogorov equation described in Eq.
(1.31). The process model provides the state transition density p(Xk |Xk−1 ),
and the Chapman-Kolmogorov equation then determines the prior pdf
p(Xk |Y1:k−1 ).

• Measurement update: In this step (alternatively known as the correction
step), the posterior pdf is calculated with the help of the prior pdf and the
likelihood. The likelihood is obtained from the measurement equation (1.20)
and the noise statistics. From Eq. (1.29), we know that

p(Xk |Y1:k ) ∝ p(Yk |Xk )p(Xk |Y1:k−1 ). (1.33)

The above equation is utilized to determine the posterior pdf of the state.

Figure 1.3 shows the iterative time and measurement update process. Fur-
ther, it should be kept in mind that the filtering strategy described above
is only conceptual in nature. For a linear Gaussian system, p(Xk |Xk−1 ) and
p(Xk |Y1:k ) will be Gaussian and a closed form solution is available. For any
arbitrary nonlinear system, in general, no closed form solution is achievable.
To estimate the states in such cases, the equations described above must be
solved numerically with acceptable accuracy.
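
For a scalar system, this conceptual recursion can be carried out numerically
by discretizing the state space on a grid and replacing the integrals of Eqs.
(1.31) and (1.33) with sums (a point mass filter). The following minimal
MATLAB® sketch illustrates the idea with a hypothetical linear Gaussian model.

% Minimal sketch: grid-based recursive Bayesian filter for the hypothetical
% scalar model x_k = 0.9 x_{k-1} + eta_{k-1}, y_k = x_k + v_k.
g = @(x, m, s) exp(-(x - m).^2./(2*s.^2))./(sqrt(2*pi)*s);  % Gaussian pdf
Q = 0.2; R = 0.5; N = 50;                        % hypothetical covariances
xg = linspace(-5, 5, 401); dx = xg(2) - xg(1);   % grid over the state space
post = g(xg, 0, 1);                              % initial pdf p(x_0)
x = 0.5;                                         % true initial state
for k = 1:N
    x = 0.9*x + sqrt(Q)*randn;               % simulate the true state
    y = x + sqrt(R)*randn;                   % simulate the measurement
    trans = g(xg', 0.9*xg, sqrt(Q));         % transition density p(x_k|x_{k-1})
    prior = (trans*post')'*dx;               % time update, Eq. (1.31)
    post  = g(y, xg, sqrt(R)).*prior;        % measurement update, Eq. (1.33)
    post  = post/(sum(post)*dx);             % normalization
    xhat  = sum(xg.*post)*dx;                % posterior mean (point estimate)
end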

FIGURE 1.3: Recursive filtering in two steps.



1.6 Particle filter


We have seen earlier that our objective is to determine the prior and the
posterior pdfs of the states. These pdfs can be represented with many points in
space and their corresponding weights. As you may guess, this is a discretized
representation of a continuous pdf. The representation can be made in two
ways:
(i) The point mass description, where the continuous pdfs are approximated
with a set of support points whose weights depend on the locations of the
points. In general, the weights are not equal.
(ii) Independent and identically distributed (iid) sample points, where all
the weights are the same but the points are denser where the probability
density is higher, and vice versa. Both the point mass description and the iid
sample representation are illustrated in Figure 1.4.
In filtering literature, the points in state space are popularly known as
particles from which the name particle filter (PF) [24] is derived. The key
idea is to represent the posterior and prior pdf by a set of randomly sam-
pled points and their weights. It is expected that as the number of particles
becomes very large, the Monte Carlo characterization becomes an equivalent
representation to the pdf under consideration (prior or posterior). As the par-
ticles are randomly sampled, the estimation method is also called filtering
with random support points. This class of methods is also known variously
as sequential importance sampling, sequential Monte Carlo method [41, 92],
bootstrap filtering [57], condensation algorithm [19] etc.
As we mentioned earlier, the posterior pdf of the state, p(Xk |Y1:k ), is
represented with Ns points in real space, or particles, denoted by X_k^i ,
where i = 1, · · · , Ns , and their corresponding weights w_k^i . So the
posterior pdf could be represented as

p(Xk |Y1:k ) = Σ_{i=1}^{Ns} w_k^i δ(Xk − X_k^i ), (1.34)

where δ denotes the Dirac delta function, which is defined only at the locations
of the particles. The prior probability density function p(Xk |Y1:k−1 ) can be
represented similarly. It should also be noted that the weights must be
normalized, i.e., Σ_{i=1}^{Ns} w_k^i = 1. From the above equation we see that
we need to determine the weights, and obviously we do not know the posterior pdf.

1.6.1 Importance sampling


A popular way is to determine the weights with the help of a powerful
technique, known as importance sampling. Although we cannot draw sam-
ples from the posterior pdf, at each particle the p(.) can be evaluated up to
proportionality. Further, the posterior pdf is assigned some guessed
distribution, known as the proposal density, from which samples are generated
easily. If we draw Ns samples from the proposal density, i.e., X_k^i ∼ q(Xk |Y1:k )
for i = 1, · · · , Ns , the expression of the weights in Eq. (1.34) becomes

w_k^i ∝ p(X_k^i |Y1:k ) / q(X_k^i |Y1:k ). (1.35)

At each iteration, say k, we have particles from an earlier step posterior pdf,
p(Xk−1 |Y1:k−1 ), and we want to draw samples from the present proposal
density, q(Xk |Y1:k ).

FIGURE 1.4: Point mass vs. iid sample approximation of a pdf.

One way of doing this is by choosing an importance density which could be
factorized as

q(Xk |Y1:k ) = q(Xk |Xk−1 , Y1:k )q(Xk−1 |Y1:k−1 ). (1.36)

Then the new particles X_k^i can be generated from the proposal
q(Xk |Xk−1 , Y1:k ). Now we shall proceed to derive the weight update equation.
Recall Eq. (1.29), which can further be written as

p(Xk |Y1:k ) = p(Yk |Xk )p(Xk |Y1:k−1 ) / p(Yk |Y1:k−1 )
            = p(Yk |Xk )p(Xk−1 |Y1:k−1 )p(Xk |Xk−1 ) / p(Yk |Y1:k−1 ) (1.37)
            ∝ p(Yk |Xk )p(Xk |Xk−1 )p(Xk−1 |Y1:k−1 ).

With the help of Eqs. (1.36) and (1.37), Eq. (1.35) could be expressed as

w_k^i ∝ p(Yk |X_k^i ) p(X_k^i |X_{k−1}^i ) p(X_{k−1}^i |Y1:k−1 ) / [q(X_k^i |X_{k−1}^i , Y1:k ) q(X_{k−1}^i |Y1:k−1 )]
      = w_{k−1}^i p(Yk |X_k^i ) p(X_k^i |X_{k−1}^i ) / q(X_k^i |X_{k−1}^i , Y1:k ). (1.38)

The readers should note the following points:


(i) p(Yk |X_k^i ) is known as the likelihood, which is the probability of
obtaining a measurement corresponding to a particular particle.
p(X_k^i |X_{k−1}^i ) is the transitional density.
(ii) q(X_k^i |X_{k−1}^i , Y1:k ) is called the proposal density, which is the
practitioner’s choice. The easiest choice of proposal density is the prior,
i.e., q(X_k^i |X_{k−1}^i , Y1:k ) = p(X_k^i |X_{k−1}^i ). Under such a choice,
the weight update equation becomes w_k^i ∝ w_{k−1}^i p(Yk |X_k^i ).
(iii) A posterior probability density function obtained from any nonlinear
filter, such as the extended Kalman filter, the unscented Kalman filter [169],
the cubature Kalman filter [159, 173], the Gaussian sum filter [95], or even
the particle filter, could be used as the proposal. It is reported that the
accuracy of the PF may be enhanced with a more accurate proposal.
(iv) At each step, w_k^i should be normalized, i.e.,
w_k^i = w_k^i / Σ_{i=1}^{Ns} w_k^i , so that the particles and weights
represent a probability mass function.
The methodology discussed above is commonly known as the sequential
importance sampling particle filter (SIS-PF), whose implementation at time
step k is given in Algorithm 1; a minimal MATLAB® sketch follows the
algorithm.
Algorithm 1 Sequential importance sampling particle filter

[{X_k^i, w_k^i}_{i=1}^{N_s}] = SIS[{X_{k-1}^i, w_{k-1}^i}_{i=1}^{N_s}, Y_k]

• for i = 1 : N_s

  – Draw X_k^i \sim q(X_k | X_{k-1}^i, Y_k)

  – Compute weight w_k^i = w_{k-1}^i\, \frac{p(Y_k|X_k^i)\, p(X_k^i|X_{k-1}^i)}{q(X_k^i|X_{k-1}^i, Y_{1:k})}

• end for

• Normalize the weights w_k^i = w_k^i / \sum_{i=1}^{N_s} w_k^i
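As an illustration of Algorithm 1, the following is a minimal Python sketch of one SIS recursion, under the assumptions of a hypothetical scalar model x_k = a x_{k-1} + w_k, y_k = x_k + v_k with Gaussian noises, and the prior chosen as the proposal density (so the weight update reduces to multiplication by the likelihood, as in note (ii)); the function name and the parameters a, q_std, r_std are illustrative, not from the text:

    import numpy as np
    from scipy.stats import norm

    def sis_step(particles, weights, y_k, a=0.9, q_std=0.5, r_std=1.0, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        # Draw x_k^i ~ q(x_k | x_{k-1}^i, y_k) = p(x_k | x_{k-1}^i) (prior proposal)
        particles = a * particles + q_std * rng.standard_normal(particles.shape)
        # Weight update: w_k^i = w_{k-1}^i * p(y_k | x_k^i)
        weights = weights * norm.pdf(y_k, loc=particles, scale=r_std)
        weights = weights / np.sum(weights)   # normalize the weights
        return particles, weights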

1.6.2 Resampling
After a few iterations with the SIS algorithm, most of the particles will have a very small weight. This problem is known as weight degeneracy of the samples [106]. After a certain number of steps of the recursive algorithm, a large number of the generated particles do not contribute to the posterior pdf, and the computational effort spent on them is wasted. Further, it can be shown that the variance of the importance weights can only increase with time. Thus sample degeneracy in the SIS algorithm is inevitable.
To get rid of this problem, one may take a brute force approach by employing an increasingly large (potentially infinite) number of particles. However, that is not practically feasible. Instead, it is beneficial to insert a resampling stage between two consecutive SIS recursions. Such a method is known as sequential importance sampling resampling (SISR). During the resampling stage, a new set of particles is generated based on the existing particles and their weights, and then equal weights are assigned to all of them.
It is argued that the resampling stage may not be necessary at each step. When the effective number of particles falls below a certain threshold, the resampling algorithm can be executed. An estimate of the effective sample size is given by [8]

N_{eff} = \frac{1}{\sum_{i=1}^{N_s} (w_k^i)^2},   (1.39)

where the w_k^i are the normalized weights. A small N_{eff} indicates high degeneracy: uniform weights give N_{eff} = N_s, whereas a single dominant weight drives N_{eff} toward 1. When N_{eff} < N_T, where N_T is a user-defined threshold, we run the resampling algorithm. The SISR steps are listed in Algorithm 2, and the SISR filtering algorithm is depicted in Figure 1.5.
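As a quick sketch (assuming the normalized weights are held in a NumPy array; the function name is an assumption), Eq. (1.39) and the resampling trigger amount to:

    import numpy as np

    def effective_sample_size(weights):
        # Eq. (1.39); uniform weights give N_eff = Ns, while a single
        # dominant weight drives N_eff toward 1 (severe degeneracy)
        return 1.0 / np.sum(np.asarray(weights) ** 2)

    # Resample only when N_eff falls below a user-chosen threshold N_T,
    # e.g., N_T = Ns / 2:
    # if effective_sample_size(weights) < N_T: resample(...)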
In an earlier subsection, we mentioned that resampling is required to avoid the degeneracy problem of the particles. The basic idea behind resampling is to eliminate the particles that have small weights and to repeat the particles with large weights.

Algorithm 2 Sequential importance sampling resampling filter

[{X_k^i, w_k^i}_{i=1}^{N_s}] = SISR[{X_{k-1}^i, w_{k-1}^i}_{i=1}^{N_s}, Y_k]

• for i = 1 : N_s

  – Draw X_k^i \sim q(X_k | X_{k-1}^i, Y_k)

  – Compute weight w_k^i = w_{k-1}^i\, \frac{p(Y_k|X_k^i)\, p(X_k^i|X_{k-1}^i)}{q(X_k^i|X_{k-1}^i, Y_{1:k})}

• end for

• Normalize the weights w_k^i = w_k^i / \sum_{i=1}^{N_s} w_k^i

• Compute the effective sample size as N_{eff} = 1 / \sum_{i=1}^{N_s} (w_k^i)^2

• Set the threshold (N_T) for the effective sample size

• if N_{eff} < N_T

  – Resample using the importance weights:
    [{X_k^i, w_k^i}_{i=1}^{N_s}] = RESAMPLE[{X_k^i, w_k^i}_{i=1}^{N_s}]

• end if

In other words, during the resampling process the particles with small weights are ignored whereas the particles with large weights are considered repeatedly. Further, all the new particles are assigned equal weights. There are a few practical disadvantages of the resampling process. Firstly, it limits the parallel implementation of the particle filtering algorithm; secondly, as the particles with higher weights are taken repeatedly, the particles after resampling become less diversified. This problem, known as particle impoverishment, is very severe for small process noise. In such a case, all the particles collapse to a single point, leading to an erroneous estimate of the posterior pdf.
The most popular resampling strategy is systematic resampling [94, 8]. The pseudocode of systematic resampling [8] is described in Algorithm 3. Apart from systematic resampling, various other resampling schemes, such as multinomial resampling, stratified resampling [39], and residual resampling [106], are available in the literature. Interested readers are referred to [39] for the various resampling strategies and their comparison.
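For concreteness, here is a minimal Python sketch of systematic resampling in the spirit of [8] (the function name and the use of NumPy are assumptions; Algorithm 3 in the text remains the reference pseudocode): a single uniform draw stratifies [0, 1) into N_s ordered points, which are matched against the cumulative sum of the weights.

    import numpy as np

    def systematic_resample(particles, weights, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        Ns = len(weights)
        # One uniform draw u ~ U[0, 1/Ns) generates the ordered points
        # u_j = u + j/Ns, j = 0, ..., Ns-1
        positions = (rng.random() + np.arange(Ns)) / Ns
        cumulative = np.cumsum(weights)
        cumulative[-1] = 1.0              # guard against round-off
        indexes = np.searchsorted(cumulative, positions)
        # Particles with large weights are repeated, those with small
        # weights are dropped, and every survivor gets equal weight 1/Ns
        return particles[indexes], np.full(Ns, 1.0 / Ns)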

FIGURE 1.5: Sequential importance sampling resampling filter.

1.7 Gaussian filter


We have seen that the prior and the posterior pdfs of a nonlinear system are in general non-Gaussian. However, many filtering methods assume that the pdfs are Gaussian and characterize them by their mean and covariance. Such filters are known as Gaussian filters. Sometimes they are also called deterministic sample point filters, as they use a few deterministic points and their weights to determine the mean and covariance of the pdfs. In this section we shall discuss the basic concept and formulation of such filters.
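As a hedged sketch of this formulation (the symbols \xi_j and W_j are generic placeholders, not notation fixed by this section), a Gaussian filter approximates the intractable moment integrals arising in filtering by a weighted sum over a few deterministic sample points:

\int f(X)\, \mathcal{N}(X; \mu, P)\, dX \;\approx\; \sum_{j=1}^{N} W_j\, f(\xi_j),

where the various deterministic sample point filters differ essentially in how the points \xi_j and weights W_j are chosen.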

1.7.1 Propagation of mean and covariance of a linear system


If the system is linear, the process noise is Gaussian, and the pdf of the current state is Gaussian, then the pdf of the state at the next time step is also Gaussian, although with a different mean and a different covariance. The same is not true for a nonlinear system, which is why no closed-form estimator is available for such cases. The statement is illustrated in Figure 1.6.