
ELEC-8900 Special Topics:

Advanced Energy Storage Systems


Lecture 03: Basics of Estimation
Instructor: Dr. Balakumar Balasingam
(Link to video from last year: https://tinyurl.com/yxcfzq5o)
(Acknowledging Prof. Jim Reilly at McMaster University
for lecture notes on Least Squares Estimation)
June 3, 2022

Contents
1 Introduction
  1.1 Estimation
  1.2 Filtering
2 Least Squares Estimation
  2.1 Scalar Example
  2.2 Theory
  2.3 Assumptions of the LS estimator
  2.4 Example
3 Kalman Filter
  3.1 Introduction
  3.2 Assumptions of the Kalman filter
  3.3 Example
4 Extended Kalman Filter
  4.1 Introduction
  4.2 Assumptions of the EKF
  4.3 Example
5 Summary
6 Questions
1 Introduction
This lecture introduces basic estimation principles. For in-depth details of estimation, the students are referred to [1] as well as graduate courses offered by the author at the University of Windsor.

1.1 Estimation
(discussion will be given in class)

1.2 Filtering
(discussion will be given in class)

2 Least Squares Estimation


2.1 Scalar Example
Let us assume that we have n pairs of observations:

(x, y) = {(x_1, y_1), (x_2, y_2), . . . , (x_n, y_n)}

We suspect that these observations may have the following (approximate) linear relationship

y = a_0 + a_1 x + e    (1)

where e denotes the error/mismatch in the assumed relationship (written as e_i for the ith observation below). The goal is to find the model parameters a_0 and a_1. This problem is also known as curve fitting.
Given all the observations, the sum of the squared errors is written as

S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2    (2)

Let us find the values of a_0 and a_1 that minimize the sum of squared errors:

\frac{\partial S_r}{\partial a_0} = -2 \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i) = 0    (3)

\frac{\partial S_r}{\partial a_1} = -2 \sum_{i=1}^{n} x_i (y_i - a_0 - a_1 x_i) = 0    (4)

This gives 2 equations in 2 unknowns (a_0 and a_1):

\sum_{i=1}^{n} y_i - \sum_{i=1}^{n} a_0 - a_1 \sum_{i=1}^{n} x_i = 0    (5)

\sum_{i=1}^{n} x_i y_i - a_0 \sum_{i=1}^{n} x_i - a_1 \sum_{i=1}^{n} x_i^2 = 0    (6)

which simplify to the following normal equations:

n a_0 + a_1 \sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i    (7)

a_0 \sum_{i=1}^{n} x_i + a_1 \sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} x_i y_i    (8)

which can be written as

\begin{bmatrix} n & S_x \\ S_x & S_{xx} \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} S_y \\ S_{xy} \end{bmatrix}    (9)

where

S_{xx} = \sum_{i=1}^{n} x_i^2, \quad S_x = \sum_{i=1}^{n} x_i,    (10)

S_{xy} = \sum_{i=1}^{n} x_i y_i, \quad S_y = \sum_{i=1}^{n} y_i    (11)

It can be noted that (9) is of the form

A x = b    (12)

where A and b are known. Hence,

x = A^{-1} b    (13)

For a 2 × 2 matrix

A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}    (14)

the inverse is given by

A^{-1} = \frac{1}{|A|} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}    (15)

where

|A| = ad - bc    (16)

is the determinant of A. Applying this to (9) yields

a_0 = \frac{S_{xx} S_y - S_{xy} S_x}{n S_{xx} - S_x^2}, \quad a_1 = \frac{n S_{xy} - S_x S_y}{n S_{xx} - S_x^2}    (17)
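To make the scalar example concrete, here is a short Python sketch (added for illustration; the data values are hypothetical) that fits a_0 and a_1 using the closed-form expressions in (17):

```python
import numpy as np

# Hypothetical observations, roughly following y = 1 + 2x plus noise.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

n = len(x)
Sx, Sy = x.sum(), y.sum()               # S_x and S_y in (10)-(11)
Sxx, Sxy = (x**2).sum(), (x * y).sum()  # S_xx and S_xy in (10)-(11)

# Closed-form least-squares solution (17)
den = n * Sxx - Sx**2
a0 = (Sxx * Sy - Sxy * Sx) / den  # intercept
a1 = (n * Sxy - Sx * Sy) / den    # slope
print(a0, a1)
```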

2.2 Theory
Let us consider the following observation model

b = A x + n    (18)

and we wish to determine the value x̂_LS which solves

x̂_LS = \arg\min_x \|Ax - b\|_2^2    (19)

where A ∈ R^{m×n}, m > n, and b ∈ R^m. The matrix A is assumed to be full rank. In this general context, we note that b is a vector of observations which corresponds to a linear model of the form Ax, contaminated by a noise contribution n. The matrix A is a known constant. In determining x̂_LS, we find that value of x which provides the best fit of the observations to the model, in the 2-norm sense.
We now discuss a few relevant points concerning the LS problem:

• The system (19) is overdetermined and hence, in the general case, no solution exists for which Ax = b exactly.

• Of all commonly used values of p for the norm, p = 2 is the only one for which the norm is differentiable for all values of x. Thus, for any other value of p, the optimal solution is not obtainable by differentiation.

• We define the minimum sum of squares of the residual, \|A x̂_LS - b\|_2^2, as ρ_LS^2.
We wish to estimate the parameters x by solving (19). The method we choose to solve (19) is to differentiate the quantity \|Ax - b\|_2^2 with respect to x and set the result to zero. Thus, the remaining portion of this section is devoted to this differentiation. The result is a closed-form expression for the solution of (19).

The expression \|Ax - b\|_2^2 can be written as

\|Ax - b\|_2^2 = (Ax - b)^T (Ax - b) = b^T b - x^T A^T b - b^T A x + x^T A^T A x    (20)

The solution x̂_LS is that value of x which satisfies

\frac{d}{dx} \Big[ \underbrace{b^T b}_{t_1} \underbrace{-\, x^T A^T b}_{t_2(x)} \underbrace{-\, b^T A x}_{t_3(x)} + \underbrace{x^T A^T A x}_{t_4(x)} \Big] = 0    (21)

Define each term in the square brackets above as t_1, t_2(x), t_3(x), t_4(x), respectively. Therefore, we solve

\frac{d}{dx} \left[ t_2(x) + t_3(x) + t_4(x) \right] = 0    (22)

where we have noted that the derivative \frac{d}{dx} t_1 = 0, since b is independent of x.

We see that every term of (22) is a scalar. To differentiate (22) with respect to the vector x, we differentiate each term of (22) with respect to each element of x, and then assemble all the results back into a vector. We now discuss the differentiation of each term of (22):

Differentiation of t_2(x) and t_3(x) with respect to x

Let us define the quantity c ≜ A^T b. This implies that the component c_k of c is a_k^T b, k = 1, . . . , n, where a_k^T is the transpose of the kth column of A. Thus t_2(x) = -x^T c. Therefore,

\frac{d}{dx_k} t_2(x) = \frac{d}{dx_k}(-x^T c) = -c_k = -a_k^T b, \quad k = 1, . . . , n.    (23)

Combining the results for k = 1, . . . , n back into a column vector, we get

\frac{d}{dx} t_2(x) = \frac{d}{dx}(-x^T A^T b) = -A^T b.    (24)

Since the term t_3(x) of (22) is the transpose of the term t_2(x) and both are scalars, the terms are equal. Hence,

\frac{d}{dx} t_3(x) = -A^T b.    (25)

Differentiation of t_4(x) with respect to x

For notational convenience, let us define the matrix R ≜ A^T A. Thus, the quadratic form t_4(x) may be expressed as

t_4(x) = x^T R x = \sum_{i=1}^{n} \sum_{j=1}^{n} x_i x_j r_{ij}    (26)

When differentiating the above with respect to a particular element x_k, we need only consider the terms in which either index i or j equals k. Therefore,

\frac{d}{dx_k} t_4(x) = \frac{d}{dx_k} \Big[ x_k \sum_{j \neq k} x_j r_{kj} + x_k \sum_{i \neq k} x_i r_{ik} + x_k^2 r_{kk} \Big]    (27)

where the first term of (27) corresponds to holding i constant at the value k, and the second corresponds to holding j constant at k. Care must be taken to include the term x_k^2 r_{kk} corresponding to i = j = k only once; therefore, it is excluded from the first two sums and added in separately. Eq. (27) evaluates to

\frac{d}{dx_k} t_4(x) = \sum_{j \neq k} x_j r_{kj} + \sum_{i \neq k} x_i r_{ik} + 2 x_k r_{kk} = \sum_j x_j r_{kj} + \sum_i x_i r_{ik} = 2 (Rx)_k

where (·)_k denotes the kth element of the corresponding vector. In the above, we have used the fact that R is symmetric; hence, r_{ki} = r_{ik}. Assembling all these components for k = 1, . . . , n into a column vector, we get

\frac{d}{dx} t_4(x) = \frac{d}{dx}(x^T A^T A x) = 2 A^T A x.    (28)
dx dx
Substituting (24), (25) and (28) into (22), we get the important desired result:

A^T A x = A^T b.    (29)

From the above, the LS estimate of the vector x is obtained as

x̂_LS = (A^T A)^{-1} A^T b    (30)
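As a quick illustration (added to these notes, not from the original), the closed-form solution (30) can be computed directly; in practice, a numerically more stable solver such as numpy.linalg.lstsq is preferred over explicitly inverting A^T A:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 3
A = rng.standard_normal((m, n))                # known full-rank observation matrix
x_true = np.array([1.0, -2.0, 0.5])            # hypothetical true parameters
b = A @ x_true + 0.1 * rng.standard_normal(m)  # noisy observations, as in (18)

# Normal-equations solution (29)-(30): solve (A^T A) x = A^T b
x_ls = np.linalg.solve(A.T @ A, A.T @ b)

# Equivalent, numerically preferable route
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_ls, x_lstsq)
```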

2.3 Assumptions of the LS estimator

The LS estimator derived and demonstrated above relies on the following assumptions:

1. Linear observation model: The observation model (shown in (18) and (33)) is linear.

2. Additive noise: The noise is additive; this is reflected in the cost function (19). The following is an example of non-additive (specifically, multiplicative) noise:

b = Ax ⊙ n    (31)

where ⊙ is the element-by-element multiplication operator.

3. Known model: The matrix A in (18) (or the vector i in (33)) is known as the observation matrix. The LS estimator assumes that the observation matrix is completely known.

Figure 1: Measurement setup. A resistor R is connected to a constant current source that continuously provides 1 A. The voltage across the resistor is measured by a voltmeter at fixed sampling intervals, and these measured voltages are denoted as v(k), k = 1, 2, . . . .

2.4 Example
Figure 1 shows a measurement setup designed to estimate the resistance of an unknown resistor R. The resistor is connected to a current source that provides a constant current of 1 A. While the resistor is connected to the current source, several voltage measurements across the resistor were taken using the voltmeter shown in the diagram.

Table 1 shows ten different voltage measurements v(k) for k = 1, 2, . . . , 10. One would expect the measured voltage to be constant since the current through the resistor is set to be constant and the resistance itself is a constant. Hence, the differences in the measured voltages are attributed to measurement noise. Now, the measured voltage across the resistor for a given current can be written as

v(k) = i(k) R + n(k)    (32)

where v(k) is the measured voltage at time k, i(k) = I_c = 1 A is the current (assumed known and constant) at time k, and n(k) is the measurement noise. Considering all m = 10 consecutive observations, the above observation can be written in vector form as

v = i R + n    (33)

where v, i and n are vectors of length m = 10, i.e.,

v = \begin{bmatrix} v(1) \\ v(2) \\ \vdots \\ v(m) \end{bmatrix}, \quad i = \begin{bmatrix} I_c \\ I_c \\ \vdots \\ I_c \end{bmatrix}, \quad n = \begin{bmatrix} n(1) \\ n(2) \\ \vdots \\ n(m) \end{bmatrix}    (34)

Table 1: Voltage measurements across R

k v(k)
1 0.2054
2 0.2183
3 0.1774
4 0.2086
5 0.2032
6 0.1869
7 0.1957
8 0.2034
9 0.2358
10 0.2277

Given the measured (simulated in this case) voltages v(k), k = 1, . . . , m, our goal is to estimate the resistance R. The least squares estimator is

\hat{R}_{LS} = \frac{i^T v}{i^T i}    (35)

Using the data from Table 1, we can conclude that

\hat{R}_{LS} = 0.2062 \ \Omega    (36)

is the LS estimate of the resistance R in Figure 1. Given that the true resistance is R = 0.2 Ω, one can see that the LS estimate is much superior to the individual estimates one could obtain from the measurements in Table 1.
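The estimate in (36) can be reproduced in a few lines of Python (an illustrative sketch added here, not part of the original notes; since i(k) ≡ 1 A, (35) reduces to the average of the measured voltages divided by the current):

```python
import numpy as np

# Voltage measurements v(k) from Table 1 (in volts); the current is 1 A.
v = np.array([0.2054, 0.2183, 0.1774, 0.2086, 0.2032,
              0.1869, 0.1957, 0.2034, 0.2358, 0.2277])
i = np.ones_like(v)  # i(k) = Ic = 1 A for all k

# LS estimator (35): R_LS = (i^T v) / (i^T i)
R_ls = (i @ v) / (i @ i)
print(R_ls)  # prints 0.20624, i.e., 0.2062 ohms as in (36)
```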

3 Kalman Filter
3.1 Introduction
The observation models introduced in Section 2 assume that the unknown variable x is a constant. In many practical applications, the parameters of interest change with time. For example, the internal resistance (impedance, in general) of a battery changes over time; the capacity of the battery decays over time; the state of charge of a battery changes instantaneously based on the rate of charge/discharge. When a parameter of interest x changes over time, we refer to it as a state. By considering equal sampling times, we denote the state of x at time k as x(k). The change of x(k) over time in many practical applications is considered to be a random (stochastic) process. Often, there is some partial knowledge about how x(k) changes over time as well. The evolution of x(k) over time is captured in the following process model:

x(k) = F x(k-1) + v(k)    (37)

where the known (deterministic) portion of the state transition over time is given by the first part, F x(k-1), and the stochastic portion of the state transition is captured by the second part, v(k). Here, we assume x(k) to be an m × 1 vector, the state-transition matrix F is a known m × m matrix, and the process noise vector v(k) is assumed to be zero-mean Gaussian with covariance

E[v(k) v(k)^T] = Q    (38)

which is an m × m matrix.
Similar to before, the observation model is written as

z(k) = H x(k) + w(k)    (39)

where z(k) is assumed to be an n × 1 observation vector, H is known as the observation matrix, and the measurement noise w(k) is assumed to be zero-mean Gaussian with known covariance

E[w(k) w(k)^T] = R    (40)

which is an n × n matrix.
Now, the state estimation problem can be formally stated as follows: given consecutive measurements z(1), z(2), . . ., estimate the corresponding states x(1), x(2), . . .. The Kalman filter provides the solution to the above problem. In this section, we will introduce the Kalman filter algorithm without giving a proof of it. For the proof and other important details of the Kalman filter, the reader is directed to a graduate course offered by the author or to [1].

The Kalman filter algorithm (summarized in Algorithm 1) allows one to recursively compute the state estimates without having to store all the previous estimates or measurements. That is, given the state estimate at time k, x̂(k|k), the corresponding estimation error covariance P(k|k), and the new measurement z(k+1), the Kalman filter algorithm produces the updated estimate x̂(k+1|k+1) and the corresponding estimation error covariance P(k+1|k+1).

Figure 2 shows the Kalman filter, summarized in Algorithm 1, as a block diagram. The inputs are the previous estimate x̂(k|k), the estimation error covariance of the previous estimate P(k|k), and the measurement z(k+1). The outputs are the new estimate x̂(k+1|k+1) and the estimation error covariance of the new estimate P(k+1|k+1). The Kalman filter recursion continues as new measurements arrive, as long as the assumptions listed in Section 3.2 are not violated.

Algorithm 1 (Kalman Filter)
[x̂(k+1|k+1), P(k+1|k+1)] = KF(x̂(k|k), P(k|k), z(k+1))
1: State prediction:
   x̂(k+1|k) = F x̂(k|k)
2: Covariance of the state-prediction error:
   P(k+1|k) = F P(k|k) F^T + Q
3: Measurement prediction:
   ẑ(k+1|k) = H x̂(k+1|k)
4: Measurement prediction error (innovation/residual):
   ν(k+1) = z(k+1) − ẑ(k+1|k)
5: Covariance of the innovation/residual:
   S(k+1) = R + H P(k+1|k) H^T
6: Filter gain:
   W(k+1) = P(k+1|k) H^T S(k+1)^{-1}
7: State update:
   x̂(k+1|k+1) = x̂(k+1|k) + W(k+1) ν(k+1)
8: Covariance of the state-update error:
   P(k+1|k+1) = P(k+1|k) − W(k+1) S(k+1) W(k+1)^T
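For readers who prefer code, the recursion of Algorithm 1 can be rendered compactly in Python (an illustrative sketch added to these notes; the function name kf_step and the use of numpy are our own choices):

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One recursion of the Kalman filter (Algorithm 1).

    x, P: state estimate x(k|k) and estimation error covariance P(k|k)
    z:    new measurement z(k+1)
    Returns x(k+1|k+1) and P(k+1|k+1).
    """
    x_pred = F @ x                       # 1: state prediction
    P_pred = F @ P @ F.T + Q             # 2: covariance of prediction error
    z_pred = H @ x_pred                  # 3: measurement prediction
    nu = z - z_pred                      # 4: innovation/residual
    S = R + H @ P_pred @ H.T             # 5: innovation covariance
    W = P_pred @ H.T @ np.linalg.inv(S)  # 6: filter gain
    x_new = x_pred + W @ nu              # 7: state update
    P_new = P_pred - W @ S @ W.T         # 8: covariance update
    return x_new, P_new
```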

Figure 2: The Kalman filter of Algorithm 1 shown as a block diagram (the figure in the original notes is a placeholder marked "Needs to be replaced"). One notable property of the recursion: the updated covariance P(k+1|k+1) is computed without requiring either the state estimate x̂(k|k) or the measurement z(k+1).
3.2 Assumptions of the Kalman filter

1. Linearity: The process model (37) and the measurement model (39) are linear. To see what a non-linear model looks like, please refer to Section 4.

2. Gaussian assumption: The process noise v(k) and the measurement noise w(k) are assumed to be Gaussian.

3. Known model: The process model (37) and the measurement model (39) are together known as the state-space model. The parameters of this model are the m × m state-transition matrix F, the n × m observation matrix H, the m × m process-noise covariance matrix Q, and the n × n measurement-noise covariance matrix R. The Kalman filter assumes perfect knowledge of F, H, Q, and R.

4. No time-correlation: There is no time-correlation in the process and measurement noise sequences, i.e.,

E[v(i) v(j)^T] = 0 when i ≠ j    (41)

E[w(i) w(j)^T] = 0 when i ≠ j    (42)

where 0 is a zero-matrix of appropriate size.

3.3 Example
The internal resistance of a battery is known to vary due to temperature. The
following 48 measurements were made from a battery over the course of 24
hours at a fixed sampling time of T = 30 minutes: 0.4056, 0.4393, 0.4911,
0.4782, 0.5242, 0.6304, 0.6377, 0.6988, 0.7638, 0.8448, 0.9070, 0.7848, 0.7609,
0.9075, 0.8922, 0.9035, 0.8959, 0.9381, 0.9920, 0.9335, 0.9548, 0.9606, 1.0311,
1.0104, 1.0508, 1.0074, 0.9973, 0.9830, 0.9323, 0.9236, 0.9394, 0.8536, 0.8623,
0.9219, 0.9151, 0.8612, 0.8118, 0.8407, 0.8280, 0.7382, 0.6412, 0.5701, 0.6060,
0.6495, 0.5463, 0.5991, 0.3954, 0.3618. The measured resistances were in Ω.
The measurements suffer from measurement noise.
We saw that the measurements were also suffering from noise in the example discussed in Section 2.4. There, the least squares estimate reduced to the average of the observations (divided by the current). However, averaging would produce just one value over the entire 24 hours, which goes against our knowledge that the resistance indeed changes over time. Figure 3 shows the measurements as a plot against time. How can we get rid of the measurement noise while preserving the fact that the true value changes over time?
We will develop a model that incorporates the fact that the resistance changes over time. Then, we will employ an appropriate filter (the Kalman filter, obviously) to estimate the resistance values that seem to suffer (see Figure 3) from measurement noise. Let us define the following state vector x(k):

x(k) = \begin{bmatrix} R(k) \\ \dot{R}(k) \end{bmatrix}    (43)

Figure 3: Measured resistance values of a resistor over 24 hours (resistance in Ω vs. time index k). The measured values are corrupted by measurement noise.

where k, k = 1, 2, . . . , 48 denotes the sampling instant, R(k) is the resistance (in Ω) at time k, and Ṙ(k) is the rate of change of resistance (in Ω/hr) at time k.
We build the following process model to incorporate the change of resistance over time:

x(k+1) = F x(k) + \underbrace{\Gamma v(k)}_{\text{process noise}}    (44)

where

F = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}, \quad \Gamma = \begin{bmatrix} T^2/2 \\ T \end{bmatrix}    (45)

and the process noise v(k) is assumed to be zero-mean Gaussian with variance σ_v^2. We can now show that the process noise covariance is

Q = E[\Gamma v(k) v(k)^T \Gamma^T] = \begin{bmatrix} \frac{1}{4}T^4 & \frac{1}{2}T^3 \\ \frac{1}{2}T^3 & T^2 \end{bmatrix} \sigma_v^2    (46)
Now, each measured resistance can be modelled as the following:

z(k) = Hx(k) + w(k) (47)

where

H = \begin{bmatrix} 1 & 0 \end{bmatrix}    (48)

and the measurement noise w(k) is assumed to be zero-mean Gaussian with variance σ_r^2, i.e.,

R = E[w(k)^2] = \sigma_r^2    (49)

The measurement noise is usually a feature of the measuring equipment. In this case, we assume that the measurement noise standard deviation is known to be σ_r = 0.05 Ω. Also, we will assume that the process noise standard deviation, i.e., how fast the resistance gradient could change over time, is also known: σ_v = 0.01 Ω/hr².
Now, we have identified all the required parameters of the Kalman filter
algorithm: F, H, Q, and R. The measurements, z(1), z(2), . . . , z(48) are given
at the start of this section. We can now recursively estimate x(k) over time
using the Kalman filter algorithm summarized in Algorithm 1 and in Figure 2.
In order to initialize the filter, we will use the following approach:
" #  
R̂(0|0) z(1)
x̂(0|0) = ˆ = z(2)−z(1)
Ṙ(0|0) T
 
1 −1/T 2
P(0|0) = E x̂(0|0)x̂(0|0)T =
 
σ (50)
−1/T 2/T 2 r

Figure 4(a) shows the estimated values of the resistance R̂(k) along with the measurements z(k) against time. The smoothing nature of the KF can be observed from this figure. Figure 4(b) shows the estimated resistance-rate over time; it can be observed that it changes from positive to negative around the halfway point.
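Putting the pieces together, the following Python sketch (added for illustration; it reuses the hypothetical kf_step function given after Algorithm 1) sets up F, Γ, Q, H, R, applies the initialization (50), and filters the 48 measurements:

```python
import numpy as np

T = 0.5  # sampling time in hours
F = np.array([[1.0, T], [0.0, 1.0]])
Gamma = np.array([[T**2 / 2], [T]])
sigma_v, sigma_r = 0.01, 0.05
Q = Gamma @ Gamma.T * sigma_v**2  # process-noise covariance (46)
H = np.array([[1.0, 0.0]])
R = np.array([[sigma_r**2]])      # measurement-noise covariance (49)

# The 48 resistance measurements (in ohms) listed in Section 3.3
z = np.array([0.4056, 0.4393, 0.4911, 0.4782, 0.5242, 0.6304, 0.6377, 0.6988,
              0.7638, 0.8448, 0.9070, 0.7848, 0.7609, 0.9075, 0.8922, 0.9035,
              0.8959, 0.9381, 0.9920, 0.9335, 0.9548, 0.9606, 1.0311, 1.0104,
              1.0508, 1.0074, 0.9973, 0.9830, 0.9323, 0.9236, 0.9394, 0.8536,
              0.8623, 0.9219, 0.9151, 0.8612, 0.8118, 0.8407, 0.8280, 0.7382,
              0.6412, 0.5701, 0.6060, 0.6495, 0.5463, 0.5991, 0.3954, 0.3618])

# Two-point initialization (50)
x = np.array([z[0], (z[1] - z[0]) / T])
P = sigma_r**2 * np.array([[1.0, -1.0 / T], [-1.0 / T, 2.0 / T**2]])

R_hat = [x[0]]
for k in range(1, len(z)):
    x, P = kf_step(x, P, np.array([z[k]]), F, H, Q, R)
    R_hat.append(x[0])  # estimated resistance, cf. Figure 4(a)
```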

4 Extended Kalman Filter


4.1 Introduction
A non-linear state-space model is written as

x(k) = f (x(k − 1)) + v(k) (51)


z(k) = h(x(k)) + w(k) (52)

which differs from (37) and (39) due to the fact that the state-transition model
and the measurement-model are written in a general form. The state-transition
model f (x(k − 1)) indicates a function of x(k − 1) without the explicit linearity
shown by Fx(k − 1) in the earlier version. Similarly, the measurement model
replaces the explicitly linear model Hx(k) with h(x(k)).

Figure 4: Kalman filter estimates. (a) KF estimated resistance R̂(k) over time, shown against the measurements z(k). (b) KF estimated resistance-rate over time. The Matlab codes of this demonstration can be downloaded by clicking here.

Similar to before, the process noise and the measurement noise vectors are assumed to be zero-mean Gaussian with covariance matrices

E[v(k) v(k)^T] = Q, \quad E[w(k) w(k)^T] = R    (53)

It must be noted that the Kalman filter requires the linear-Gaussian assumptions summarized in Section 3.2. For non-linear models, the extended Kalman filter (EKF) was developed. In this section, we summarize the EKF without giving any proof. For detailed information on the EKF, the readers are directed to [1]. In short, the EKF follows the same KF procedure presented in Algorithm 1,

with the following linearization steps:

F = \frac{\partial f(x(k))}{\partial x(k)} \Big|_{x(k) = \hat{x}(k|k)}    (54)

H = \frac{\partial h(x(k))}{\partial x(k)} \Big|_{x(k) = \hat{x}(k+1|k)}    (55)

An example, provided later in Section 4.3, will further illustrate the EKF.

4.2 Assumptions of the EKF


1. Gaussian assumption: The process noise v(k) and the measurement noise w(k) are assumed to be Gaussian.

2. Known model: Similar to the KF, the parameters of this model need to be known, i.e., the parameters defining the function f(·), the parameters defining the function h(·), the m × m process-noise covariance matrix Q, and the n × n measurement-noise covariance matrix R are assumed known.

3. No time-correlation: There is no time-correlation in the process and measurement noise sequences, i.e.,

E[v(i) v(j)^T] = 0 when i ≠ j    (56)

E[w(i) w(j)^T] = 0 when i ≠ j    (57)

where 0 is a zero-matrix of appropriate size.

4.3 Example
Let us consider the following non-linear state-space model from [2]:

x(k) = \frac{x(k-1)}{2} + \frac{25\, x(k-1)}{1 + x(k-1)^2} + 8\cos(1.2k) + v(k)    (58)

z(k) = \frac{x(k)^2}{20} + w(k)    (59)
where it is given that the process and measurement noises are zero-mean Gaussian with variances

E[v(k)^2] = q = 1    (60)

E[w(k)^2] = r = 1    (61)

respectively.
First, let us perform the linearization steps in (54) and (55) as follows:

F = \frac{1}{2} + \frac{25}{\hat{x}(k|k)^2 + 1} - \frac{50\, \hat{x}(k|k)^2}{(\hat{x}(k|k)^2 + 1)^2}    (62)

H = \frac{\hat{x}(k+1|k)}{10}    (63)

Now, we will follow the KF procedure presented in Algorithm 1 to write the EKF procedure for the non-linear state-space model given in (51)–(52), as summarized in Algorithm 2. It must be noted that since the state and the measurement are scalar, Algorithm 2 uses scalar notation (regular, lower-case).

Algorithm 2 (Extended Kalman Filter)
[x̂(k+1|k+1), P(k+1|k+1)] = EKF(x̂(k|k), P(k|k), z(k+1))
1: State prediction:
   x̂(k+1|k) = x̂(k|k)/2 + 25 x̂(k|k) / (1 + x̂(k|k)^2) + 8 cos(1.2k)
2: Linearization to obtain F, as in (54) and (62):
   F = 1/2 + 25 / (x̂(k|k)^2 + 1) − 50 x̂(k|k)^2 / (x̂(k|k)^2 + 1)^2
3: Covariance of the state-prediction error:
   P(k+1|k) = F P(k|k) F + q
4: Measurement prediction:
   ẑ(k+1|k) = x̂(k+1|k)^2 / 20
5: Measurement prediction error (innovation/residual):
   ν(k+1) = z(k+1) − ẑ(k+1|k)
6: Linearization to obtain H, as in (55) and (63):
   H = x̂(k+1|k) / 10
7: Covariance of the innovation/residual:
   S(k+1) = r + H P(k+1|k) H
8: Filter gain:
   W(k+1) = P(k+1|k) H S(k+1)^{-1}
9: State update:
   x̂(k+1|k+1) = x̂(k+1|k) + W(k+1) ν(k+1)
10: Covariance of the state-update error:
   P(k+1|k+1) = P(k+1|k) − W(k+1) S(k+1) W(k+1)

In order to implement an EKF for the given example in (58)–(61), we will follow the two steps described below.

• Step 1: Simulate the measurements. First, we will simulate the measurements z(k) in (59). In order to do this, we will start with x(0) = 0 and continue the simulation to obtain z(1), z(2), . . . , z(20). Figure 5 shows the simulated states x(k) and measurements z(k) for k = 1, 2, . . . , 20.

• Step 2: Apply the EKF. Now, we will use the simulated measurements z(k), k = 1, 2, . . . , 20 to estimate the states x(k) using the EKF algorithm summarized in Algorithm 2. In order to initialize the EKF algorithm, we will use

x̂(0|0) = 0    (64)
P(0|0) = 1    (65)

Figure 6 shows the true values (simulated x(k)) and the estimated values x̂(k|k) from the extended Kalman filter. For those who are interested, the Matlab simulation codes can be downloaded by clicking here. A Python sketch of both steps is given below.
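The following Python sketch (illustrative, added to these notes; results vary with the random seed since the noises are simulated) carries out both steps:

```python
import numpy as np

rng = np.random.default_rng(1)
q, r, N = 1.0, 1.0, 20

def f(x, k):
    """State-transition function in (58), without the process noise."""
    return x / 2 + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * k)

# Step 1: simulate the states x(k) and measurements z(k), with x(0) = 0
x_true = np.zeros(N + 1)
z = np.zeros(N + 1)
for k in range(1, N + 1):
    x_true[k] = f(x_true[k - 1], k) + np.sqrt(q) * rng.standard_normal()
    z[k] = x_true[k]**2 / 20 + np.sqrt(r) * rng.standard_normal()

# Step 2: apply the EKF of Algorithm 2, initialized by (64)-(65)
x_hat, P = 0.0, 1.0
estimates = []
for k in range(1, N + 1):
    x_pred = f(x_hat, k)                                               # step 1
    F = 0.5 + 25 / (x_hat**2 + 1) - 50 * x_hat**2 / (x_hat**2 + 1)**2  # step 2, (62)
    P_pred = F * P * F + q                                             # step 3
    nu = z[k] - x_pred**2 / 20                                         # steps 4-5
    H = x_pred / 10                                                    # step 6, (63)
    S = r + H * P_pred * H                                             # step 7
    W = P_pred * H / S                                                 # step 8
    x_hat = x_pred + W * nu                                            # step 9
    P = P_pred - W * S * W                                             # step 10
    estimates.append(x_hat)  # compare with x_true, cf. Figure 6
```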

Figure 5: EKF measurement simulation.

5 Summary
• All measured quantities are susceptible to measurement noise.
• Most of the measurement noise can be modelled as zero-mean Gaussian.
• When the true quantity is known to be constant, then the measurement
noise can be reduced by averaging.

• If the true quantity is known to vary with time, then an appropriate


filtering technique must be employed.
• In this lecture, we briefly studied two such filters: the Kalman filter and the extended Kalman filter.

6 Questions
No homework at this time. Those who want to learn more about estimation and filtering are encouraged to simulate and reproduce the figures in this lecture. The following questions are for the curious minds, to help think further.

Question 1.

Figure 6: Extended Kalman filter. Demonstration of the EKF for a simulated problem presented in [2]. The Matlab codes of this simulation can be downloaded by clicking here.

Your friend suggests simply averaging the voltage measurements in Table 1 and then dividing by the current (which is known to be constant). Is that answer the same as the LS estimate? If so, what is the benefit of the least squares estimation approach?
Question 2.
Figure 7 shows the equivalent circuit model of a battery. The electromotive force of the battery (in volts) is denoted as EMF and the internal resistance of the battery is denoted as R0 (in Ω). In order to estimate the internal resistance, a constant current of i(k) = 2 A is applied to the battery and the resulting terminal voltage v(k) is measured for 10 consecutive times. The measured voltages v(1), v(2), . . . , v(10) are given by 4.2057, 4.2154, 4.2041, 4.2024, 4.2174, 4.1839, 4.1927, 4.2048, 4.1838, and 4.2037, respectively. Then, the current is changed to i(k) = 1 A and the resulting terminal voltage v(k) is measured for 10 consecutive times. The measured voltages v(11), v(12), . . . , v(20) are given by 4.0057, 3.9980, 3.9967, 3.9898, 3.9947, 4.0055, 4.0038, 3.9894, 4.0041, and 3.9955, respectively. Use the least squares estimation approach to estimate the EMF and the internal resistance R0 of the battery. Assume that the EMF remained constant throughout the experiment.
Question 3.

Figure 7: Equivalent circuit model of a battery.

Instead of using a Kalman filter in Section 3.3, can you use a curve fitting
approach? (curve fitting will be taught in the next lecture). Discuss the pros
and cons of employing a curve fitting approach compared to a Kalman filter.

References
[1] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software. John Wiley & Sons, 2004.
[2] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking,” IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174–188, 2002.
