Nonlinear Estimation: Methods and Applications with Deterministic Sample Points, 1st Edition
Author(s): Shovan Bhaumik; Paresh Date
ISBN(s): 9781351012355, 1351012347
Edition: 1
File Details: PDF, 14.93 MB
Year: 2019
Language: English
Nonlinear Estimation
Methods and Applications with
Deterministic Sample Points
Shovan Bhaumik
Paresh Date
MATLAB® and Simulink® are registered trademarks of The MathWorks, Inc. and are used with
permission. The MathWorks does not warrant the accuracy of the text or exercises in this
book. This book's use or discussion of MATLAB® and Simulink® software or related prod-
ucts does not constitute endorsement or sponsorship by The MathWorks of a particular
pedagogical approach or particular use of the MATLAB® and Simulink® software.
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
This book contains information obtained from authentic and highly regarded sources. Rea-
sonable efforts have been made to publish reliable data and information, but the author
and publisher cannot assume responsibility for the validity of all materials or the conse-
quences of their use. The authors and publishers have attempted to trace the copyright
holders of all material reproduced in this publication and apologize to copyright holders if
permission to publish in this form has not been obtained. If any copyright material has not
been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted,
reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other
means, now known or hereafter invented, including photocopying, microfilming, and record-
ing, or in any information storage or retrieval system, without written permission from the
publishers.
For permission to photocopy or use material electronically from this work, please access
www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Cen-
ter, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-
for-profit organization that provides licenses and registration for a variety of users. For
organizations that have been granted a photocopy license by the CCC, a separate system
of payment has been arranged.
To Bhagyashree
Paresh
Contents
Preface xiii
Abbreviations xix
1 Introduction 1
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.2 Sigma point generation . . . . . . . . . . . . . . . . . . . . . 52
3.3 Basic UKF algorithm . . . . . . . . . . . . . . . . . . . . . . 54
3.3.1 Simulation example for the unscented Kalman filter . . . . . 56
3.4 Important variants of the UKF . . . . . . . . . . . . . . . . . 60
3.4.1 Spherical simplex unscented transformation . . . . . . 60
3.4.2 Sigma point filter with 4n + 1 points . . . . . . . . . . 61
3.4.3 MATLAB-based filtering exercises . . . . . . . . . . . 64
3.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.2 Spherical cubature rule of integration . . . . . . . . . . . . . 66
5 Gauss-Hermite filter 95
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.2 Gauss-Hermite rule of integration . . . . . . . . . . . . . . . 96
5.2.1 Single dimension . . . . . . . . . . . . . . . . . . . . . 96
5.2.2 Multidimensional integral . . . . . . . . . . . . . . . . 97
5.3 Sparse-grid Gauss-Hermite filter (SGHF) . . . . . . . . . . . 99
5.3.1 Smolyak’s rule . . . . . . . . . . . . . . . . . . . . . . 100
5.4 Generation of points using moment matching method . . . . 104
5.5 Simulation examples . . . . . . . . . . . . . . . . . . . . . . . 105
5.5.1 Tracking an aircraft . . . . . . . . . . . . . . . . . . . 105
5.6 Multiple sparse-grid Gauss-Hermite filter (MSGHF) . . . . . 109
5.6.1 State-space partitioning . . . . . . . . . . . . . . . . . 109
5.6.2 Bayesian filtering formulation for multiple approach . . . . . 110
5.6.3 Algorithm of MSGHF . . . . . . . . . . . . . . . . . . 111
5.6.4 Simulation example . . . . . . . . . . . . . . . . . . . 113
5.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Bibliography 235
Index 251
Preface
This book deals with nonlinear state estimation. It is well known that, for
a linear system with additive Gaussian noise, an optimal solution to the state
estimation problem is available: the celebrated Kalman filter. However, if the
system is nonlinear, the posterior and the prior probability density functions
(pdfs) are no longer Gaussian, and for such systems no optimal solution is
available in general. The simplest approach is to linearize the system and
apply the Kalman filter; this method is known as the extended Kalman filter
(EKF). However, the estimate fails to converge in many cases if the system is
highly nonlinear. To overcome the limitations of the EKF, many techniques
have been proposed. All the post-EKF techniques can be divided into two
categories, namely (i) estimation with probabilistic sample points and (ii)
estimation with deterministic sample points. The probabilistic sample point
methods approximately reconstruct the posterior and the prior pdfs with the
help of many points in the state space (also known as particles), sampled from
an appropriate probability distribution, together with their associated probability
weights. On the other hand, deterministic sample point techniques approximate
the posterior and the prior pdfs with a multidimensional Gaussian distribution
and calculate its mean and covariance with a few wisely chosen points and
weights. For this reason, they are also called Gaussian filters. They are popular
in real-time applications due to their ease of implementation and faster
execution when compared to techniques based on probabilistic sample points.
There are good books on filtering with probabilistic sample points, i.e.,
particle filtering. However, the same is not true for approximate Gaussian fil-
ters. Moreover, over the last few years there has been considerable development
on this topic. This motivated us to write a book that presents complete
coverage of Bayesian estimation with deterministic sample points. The
purpose of the book is to educate readers about all the available Gaussian
estimators. Learning the various available methods is essential for
a designer, as in filtering there is no 'holy grail' which always provides the
best result irrespective of the problem encountered. In other words, the best
choice of estimator is highly problem specific.
There are prerequisites to understanding the material presented in this book:
(i) linear algebra and linear systems, (ii) Bayesian probability theory, and
(iii) state space analysis. Assuming readers are familiar with these
prerequisites, the book starts with the conceptual solution of the nonlinear
estimation problem and describes all the Gaussian filters in depth with
rigorous mathematical analysis.
The style of writing is suitable for engineers and scientists. The material
of the book is presented with emphasis on key ideas, the underlying assump-
tions behind them, algorithms, and properties. In this book, readers will gain
a comprehensive understanding of the approximate solutions of the nonlinear
estimation problem. Designers who want to implement the filters will benefit
from the algorithms, flow charts and MATLAB® code provided in the book.
Rigorous, state-of-the-art mathematical treatment is also provided where
relevant, for the analyst who wants to study the algorithms in depth for
deeper understanding and further contribution. Further, beginners can verify
their understanding with the help of the numerical illustrations and MATLAB
codes.
The book contains nine chapters. It starts with the formulation of the state
estimation problem and its conceptual solution. Chapter 2 provides the
optimal solution of the problem for a linear system with Gaussian noises.
Further, it provides a detailed overview of several nonlinear estimators
available in the literature. The next chapter deals with the unscented Kalman
filter. Chapters 4 and 5 describe the cubature and quadrature based Kalman
filters, the Gauss-Hermite filter, and their variants respectively. The next
chapter presents the Gaussian sum filter, where the prior and the posterior
pdfs are approximated with a weighted sum of several Gaussian pdfs. Chapter 7
considers the problem where measurements are randomly delayed; such filters
are finding more and more applications in networked control systems. Chapter 8
presents an estimation method for continuous-discrete systems. Such systems
arise naturally because the process equations are in the continuous time
domain, as they are modeled from physical laws, while the measurement
equations are in the discrete time domain, as they arise from sampled sensor
measurements. Finally, in the last chapter two case studies, namely (i)
bearings-only underwater target tracking and (ii) tracking a ballistic target
on reentry, are considered; all the Gaussian filters are applied to them and
the results are compared. Readers are advised to start with the first two
chapters because the rest of the book depends on them. Next, the reader can
either read all of Chapters 3 to 6, or any of them as needed; in other words,
Chapters 3-6 are not dependent on one another. However, reading Chapters 7
to 9 requires an understanding of the previous chapters.
This book is an outcome of many years of our research work, which was
carried out with the active participation of our PhD students, to whom we are
thankful. In particular, we would like to express our special appreciation
and thanks to Dr Rahul Radhakrishnan and Dr Abhinoy Kumar Singh. Fur-
ther, we thank the anonymous reviewers of our book proposal for their
constructive comments, which helped to improve the quality of the book. We
acknowledge the help of Mr Rajesh Kumar in drawing some of the figures
included in the book. Finally, we gratefully acknowledge the support and love
of our families, who help us to move forward; this book would not have been
possible without them.
We hope that the book will make a significant contribution to the literature
on Bayesian estimation and that readers will appreciate the effort. Further, it
is anticipated that the book will open up many new avenues of both theoretical
and applied research in various fields of science and technology.
Shovan Bhaumik
Paresh Date
About the Authors
Physical Sciences Research Council, UK, from charitable bodies such as the
London Mathematical Society, the Royal Society and from the industry. He
has held visiting positions at universities in Australia, Canada and India. He is
a Fellow of the Institute of Mathematics and its Applications and an Associate
Editor for the IMA Journal of Management Mathematics.
Abbreviations
Chapter 1
Introduction
f_i and g_i are nonlinear real valued functions of the state, input and time. A
complete description of such a system can be obtained if Eqs. (1.1) and
(1.2) are known along with a set of initial conditions for the state vector. In
compact matrix-vector notation, the above two equations can be represented as
Ẋ = f(X, U, t),   (1.3)

and

Y = g(X, U, t),   (1.4)

where f = [f_1 f_2 · · · f_n]^T, g = [g_1 g_2 · · · g_p]^T, and U = [u_1 u_2 · · · u_m]^T.
Moreover, X ∈ R^n, Y ∈ R^p, and U ∈ R^m.
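As an illustration of the state-space form of Eqs. (1.3) and (1.4), the following sketch defines a hypothetical damped pendulum with state X = [angle, angular rate], a torque input, and an angle-only measurement, and propagates it with forward-Euler integration. Python is used here purely for illustration (the book's own examples are in MATLAB®), and all constants are illustrative.

```python
import numpy as np

# Hypothetical damped pendulum: state X = [angle, angular rate], input u = torque.
G_OVER_L, DAMPING = 9.81, 0.1            # illustrative constants

def f(x, u, t):
    """State equation (1.3): X_dot = f(X, U, t)."""
    return np.array([x[1], -G_OVER_L * np.sin(x[0]) - DAMPING * x[1] + u])

def g(x, u, t):
    """Output equation (1.4): only the angle is measured."""
    return np.array([x[0]])

def simulate(x0, u_seq, dt):
    """Forward-Euler integration from the initial condition x0."""
    xs, x = [x0], x0
    for k, u in enumerate(u_seq):
        x = x + dt * f(x, u, k * dt)
        xs.append(x)
    return np.array(xs)

xs = simulate(np.array([0.5, 0.0]), np.zeros(1000), dt=0.01)
ys = np.array([g(x, 0.0, k * 0.01) for k, x in enumerate(xs)])
```

Any other pair of functions f and g of the appropriate dimensions can be substituted; the simulation loop itself does not change.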
Important special cases of Eqs. (1.3) and (1.4) are the linear time varying state
and output equations given by

Ẋ = A(t)X + B(t)U,   (1.5)

and

Y = C(t)X + D(t)U,   (1.6)

where A(t) ∈ R^{n×n}, B(t) ∈ R^{n×m}, C(t) ∈ R^{p×n}, and D(t) ∈ R^{p×m}.
Now, if the system is time invariant, the state and output equations become
Ẋ = AX + BU, (1.7)
and
Y = CX + DU, (1.8)
where A, B, C, D are real matrices of appropriate dimensions. The process
equation is a first order differential equation and can be solved to obtain X(t)
if the initial condition is known [88].
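As a quick numerical check of this fact, the following Python sketch (illustrative; the book's examples use MATLAB®) compares forward-Euler integration of Ẋ = AX (with U = 0) against the exact solution X(t) = e^{At} X(0), using a matrix A whose exponential is known in closed form (a rotation).

```python
import numpy as np

# x_dot = A x with A generating a rotation; the exact solution is
# X(t) = e^{A t} X(0), and for this particular A the matrix exponential
# is a rotation by angle t.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
x0 = np.array([1.0, 0.0])
t_final = 1.0

exact = np.array([[np.cos(t_final), np.sin(t_final)],
                  [-np.sin(t_final), np.cos(t_final)]]) @ x0

# Forward-Euler approximation of the same trajectory.
n_steps = 100_000
dt = t_final / n_steps
x = x0.copy()
for _ in range(n_steps):
    x = x + dt * (A @ x)
```

With a sufficiently small step, the Euler trajectory agrees with the closed-form solution to within the method's first-order error.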
X_{k+1} = φ_k(X_k, k),   (1.17)

and

Y_k = γ_k(X_k, k).   (1.18)

The propagation of the state and measurement expressed by the above two equa-
tions can be represented by Figure 1.1.
With additive process and measurement noise, the equations become

X_{k+1} = φ_k(X_k, k) + η_k,   (1.19)

and

Y_k = γ_k(X_k, k) + v_k,   (1.20)

while in the most general case the noises enter nonlinearly:

X_{k+1} = φ_k(X_k, k, η_k),

and

Y_k = γ_k(X_k, k, v_k).
In what follows, we will assume that the systems have additive process and
measurement noise. In addition, we will also assume that the process and
measurement noises are stationary white signals with known statistics [180].
Generally, they are described with a multidimensional Gaussian probability
density function (pdf) with zero mean and appropriate covariance.
Xk+1 = Ak Xk + ηk ,
and
Yk = Ck Xk + vk .
From the above equation we can write

X_{k+1} = A_k X_k + η_k
        = A_k A_{k−1} X_{k−1} + A_k η_{k−1} + η_k
        ⋮
        = (∏_{j=0}^{k} A_{k−j}) X_0 + Σ_{i=0}^{k} (∏_{j=0}^{k−i−1} A_{k−j}) η_i,   (1.21)

where the empty product (for i = k) is understood to be the identity matrix.
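The closed-form expression (1.21) can be verified numerically against the step-by-step recursion. The following Python sketch (illustrative) draws random matrices A_0, …, A_k and noise samples, propagates the state iteratively, and evaluates the closed-form expression.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 5, 3
As = [rng.standard_normal((n, n)) for _ in range(k + 1)]   # A_0, ..., A_k
etas = [rng.standard_normal(n) for _ in range(k + 1)]      # eta_0, ..., eta_k
x0 = rng.standard_normal(n)

# Iterative propagation: X_{j+1} = A_j X_j + eta_j for j = 0, ..., k.
x = x0
for j in range(k + 1):
    x = As[j] @ x + etas[j]

def left_product(j_hi):
    """A_k A_{k-1} ... A_{k-j_hi}; the identity matrix when j_hi < 0."""
    P = np.eye(n)
    for j in range(j_hi + 1):
        P = P @ As[k - j]
    return P

# Closed form, Eq. (1.21).
closed = left_product(k) @ x0 + sum(
    left_product(k - i - 1) @ etas[i] for i in range(k + 1))
```

Both routes yield the same X_{k+1}, confirming the expansion term by term.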
For a nonlinear discrete state space equation, the evolution of the state can be
obtained sequentially by passing the previous state, X_k, through the nonlinear
function φ_k(X_k, k) and adding a noise sample.
The state sequence X_{0:l} (where X_{0:l} means {X_0, X_1, · · · , X_l}) depends
only on η_{0:l−1}. The noise sequence η_{l:k−1} is independent of X_{0:l}. So, we can
write p(X_k|X_{0:l}) = p(X_k|X_l), which, in turn, means that the state vector is
a Markov sequence. It should be noted that the above discussion holds true
when the process noise is white, i.e., completely unpredictable [13]. If the noise
is colored, the above statement would not hold, as the states up to time l
could be used to predict the process noise incorporated during times
l, · · · , k − 1.
which measures how likely the state value is, given the observations. The states
or parameters can be estimated by maximizing the likelihood function. Thus
the maximum likelihood estimator is

X̂_k = arg max_{X_k} p(Y_{1:k}|X_k).
As we can write p(Y_k|Y_{1:k−1}, X_k) = p(Y_k|X_k), the above equation becomes

p(X_k|Y_{1:k}) = p(Y_k|X_k) p(X_k|Y_{1:k−1}) / p(Y_k|Y_{1:k−1}).   (1.29)
Eq. (1.29) expresses the posterior pdf which consists of three terms, explained
below.
• Likelihood: p(Yk |Xk ) is the likelihood which essentially is determined
from the measurement noise model of Eq. (1.20).
• Prior: p(X_k|Y_{1:k−1}) is defined as the prior, which can be obtained through
the Chapman-Kolmogorov equation [8],

p(X_k|Y_{1:k−1}) = ∫ p(X_k|X_{k−1}, Y_{1:k−1}) p(X_{k−1}|Y_{1:k−1}) dX_{k−1}.   (1.30)

The above equation is used to construct the prior pdf. p(X_k|X_{k−1}) can
be determined from the process model of the system described in Eq.
(1.19). If the likelihood and the prior can be calculated, the posterior
pdf of the states is estimated using Bayes' rule, described in Eq. (1.29).
• Normalization constant: The denominator of Eq. (1.29) is known as
the normalization constant, or evidence, and is expressed as

p(Y_k|Y_{1:k−1}) = ∫ p(Y_k|X_k) p(X_k|Y_{1:k−1}) dX_k.   (1.32)
Figure 1.3 shows the iterative time and measurement update process. Fur-
ther, it should be kept in mind that the filtering strategy described above
is only conceptual in nature. For a linear Gaussian system, p(Xk |Xk−1 ) and
p(Xk |Y1:k ) will be Gaussian and a closed form solution is available. For any
arbitrary nonlinear system, in general, no closed form solution is achievable.
To estimate the states in such cases, the equations described above must be
solved numerically with acceptable accuracy.
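One simple way to solve the recursion numerically, for a one-dimensional state, is to evaluate the densities on a grid. The sketch below (in Python, for illustration; the model and its noise variances are hypothetical) performs one predict step via the Chapman-Kolmogorov equation (1.30) and one update step via Bayes' rule (1.29) for a linear Gaussian model, so that the result can be checked against the exact Kalman answer.

```python
import numpy as np

grid = np.linspace(-10.0, 10.0, 401)      # 1-D grid over the state space
dx = grid[1] - grid[0]

def gauss(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Hypothetical model: X_k = 0.9 X_{k-1} + eta, Y_k = X_k + v.
q_var, r_var = 0.5, 1.0
posterior = gauss(grid, 0.0, 4.0)         # previous-step posterior (Gaussian)

# Time update: Chapman-Kolmogorov equation (1.30) evaluated on the grid.
trans = gauss(grid[:, None], 0.9 * grid[None, :], q_var)   # p(x_k | x_{k-1})
prior = trans @ posterior * dx

# Measurement update: Bayes' rule (1.29); the denominator is the
# normalization constant of Eq. (1.32).
y = 2.0
unnorm = gauss(y, grid, r_var) * prior
posterior = unnorm / (unnorm.sum() * dx)

mean = (grid * posterior).sum() * dx
```

Because this toy model is linear Gaussian, the grid answer can be checked against the exact Kalman update; for a genuinely nonlinear model the same grid recursion applies, but no closed-form reference is available.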
where δ denotes the Dirac delta function, which is nonzero only at the particle
locations. The prior probability density function p(X_k|Y_{1:k−1}) can be
represented similarly. It should also be noted that the weights must be
normalized, i.e., Σ_{i=1}^{N_s} w_k^i = 1. From the above equation we see that
we need to determine the weights and, obviously, we do not know the posterior pdf.
The idea is to use a distribution, known as the proposal density, from which
samples are generated easily. If we draw N_s samples from the proposal density,
i.e., X_k^i ∼ q(X_k|Y_{1:k}) for i = 1, · · · , N_s, the expression for the weights
in Eq. (1.34) becomes

w_k^i ∝ p(X_k^i|Y_{1:k}) / q(X_k^i|Y_{1:k}).   (1.35)
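The self-normalized importance weights of Eq. (1.35) can be illustrated with a toy example in which both the target and the proposal are known Gaussians (an assumption made purely for illustration; in filtering the target posterior is not available in closed form):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_gauss(x, mu, var):
    return -0.5 * ((x - mu) ** 2 / var + np.log(2.0 * np.pi * var))

# Toy target p = N(1, 1) stands in for the intractable posterior;
# proposal q = N(0, 4) is easy to sample from.
n_samples = 200_000
samples = rng.normal(0.0, 2.0, n_samples)

log_w = log_gauss(samples, 1.0, 1.0) - log_gauss(samples, 0.0, 4.0)
w = np.exp(log_w - log_w.max())           # subtract max for numerical safety
w /= w.sum()                              # self-normalized weights, cf. Eq. (1.35)

est_mean = np.sum(w * samples)            # approximates E_p[X] = 1
```

Working with log-densities before exponentiating, as above, is a standard precaution against underflow when the weights span many orders of magnitude.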
At each iteration, say k, we have particles from the posterior pdf of the previous
step, p(X_{k−1}|Y_{1:k−1}), and we want to draw samples from the present
proposal density, q(X_k|Y_{1:k}). One way of doing this is by choosing an
importance density which factorizes recursively, so that the new particles X_k^i
can be generated from the proposal q(X_k|X_{k−1}).
Now we shall proceed to derive the weight update equation. Recall Eq. (1.29),
which can further be written as

With the help of Eqs. (1.36) and (1.37), Eq. (1.35) can be expressed as
• for i = 1 : N_s
• end for
• Normalize the weights: w_k^i = w_k^i / Σ_{i=1}^{N_s} w_k^i
1.6.2 Resampling
After a few iterations of the SIS algorithm, most of the particles will
have a very small weight. This problem is known as weight degeneracy of the
samples [106]. After a certain number of steps of the recursive algorithm, a
large number of the generated particles do not contribute to the posterior pdf
and the computational effort is wasted. Further, it can be shown that the
variance of the importance weights can only increase with time. Thus, sample
degeneracy in the SIS algorithm is inevitable.
To get rid of such a problem, one may take a brute force approach by
incorporating an increasingly large (potentially infinite) number of particles.
However, that is not practically feasible to implement. Instead, it is beneficial
to insert a resampling stage between two consecutive SIS recursions. Such
a method is known as sequential importance sampling resampling (SISR).
During the resampling stage, based on the existing particles and their weights,
a new set of particles is generated and then equal weights are assigned to all
of them.
It is argued that the resampling stage may not be necessary at each step.
When the effective number of particles falls below a certain threshold limit,
the resampling algorithm can be executed. An estimate of the effective sample
size is given by [8]

N_eff = 1 / Σ_{i=1}^{N_s} (w_k^i)^2,   (1.39)

where w_k^i are the normalized weights. A small N_eff indicates severe
degeneracy. When N_eff < N_T, where N_T is a user defined threshold, we run
the resampling algorithm. The SISR steps are given in Algorithm 2, and the
SISR filtering algorithm is depicted with the help of Figure 1.5.
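A minimal sketch of the N_eff test of Eq. (1.39) together with systematic resampling might look as follows in Python (for illustration; the particle values, weights, and threshold are all hypothetical):

```python
import numpy as np

def effective_sample_size(w):
    """Eq. (1.39): N_eff = 1 / sum_i (w_k^i)^2 for normalized weights w."""
    return 1.0 / np.sum(w ** 2)

def systematic_resample(particles, w, rng):
    """Systematic resampling: one uniform draw, Ns evenly spaced pointers."""
    ns = len(w)
    positions = (rng.uniform() + np.arange(ns)) / ns
    cumsum = np.cumsum(w)
    cumsum[-1] = 1.0                      # guard against round-off
    idx = np.searchsorted(cumsum, positions)
    return particles[idx], np.full(ns, 1.0 / ns)

rng = np.random.default_rng(2)
particles = np.array([0.0, 1.0, 2.0, 3.0])
w = np.array([0.05, 0.05, 0.10, 0.80])    # one dominant particle

if effective_sample_size(w) < 0.5 * len(w):    # user-defined threshold N_T
    particles, w = systematic_resample(particles, w, rng)
```

After resampling, the dominant particle is replicated, all weights are reset to 1/N_s, and the filter continues with the new, equally weighted set.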
In an earlier subsection, we have mentioned that the resampling is required
to avoid the degeneracy problem of the particles. The basic idea behind re-
sampling is to eliminate the particles that have small weights and repeat the
particles with large weights. In other words, during the resampling process
• for i = 1 : N_s
• end for
• Normalize the weights: w_k^i = w_k^i / Σ_{i=1}^{N_s} w_k^i
• Compute the effective sample size as N_eff = 1 / Σ_{i=1}^{N_s} (w_k^i)^2
• if N_eff < N_T then perform resampling
• end if
the particles with small weights are discarded whereas the particles with large
weights are replicated. Further, all the new particles are assigned equal
weights. There are a few practical disadvantages to the resampling process.
Firstly, it limits the parallel implementation of the particle filtering
algorithm; secondly, as the particles with higher weights are selected
repeatedly, the particles become less diversified after resampling. This
problem is known as particle impoverishment and is very severe for small
process noise. In such a case, all the particles collapse to a single point,
leading to an erroneous estimate of the posterior pdf.
The most popular resampling strategy is systematic resampling [94, 8];
its pseudocode [8] is described in Algorithm 3. Apart from systematic
resampling, various other resampling schemes, such as multinomial resampling,
stratified resampling [39] and residual resampling [106], are available in the
literature. Interested readers are referred to [39] for a comparison of the
various resampling strategies.