
Master's degree in Automatic Control and Robotics

Chapter 4:
Optimal State Estimation

Conceptual Idea

• Optimal estimation utilizes optimization to generate the state estimate from the available inputs and measurements
• As in the case of optimal control, the practical implementation of this state estimation is
done using a computer. For this reason, optimal state estimation is also typically
implemented in discrete-time.

Problem Formulation (1)

• The problem of state estimation is transformed into an optimization problem:

(1) The state estimation goals are transformed into an objective function, named J, that is a function of the states and controls over a time horizon N
– The objective function contains several goals: (1) estimation error due to the unknown initial condition; (2) estimation error due to the disturbances; and (3) estimation error due to the sensor noise.
(2) The model and the physical limitations of the states are the constraints of the optimization
problem

• Optimal state estimation can be applied to multivariable linear and non-linear systems.
• The theory of optimal state estimation can be developed in continuous-time or in discrete-time.
• As in the case of optimal control, we will focus on the discrete-time formulation because it is the one most used in practice and because it fits the optimization theory presented in the first part of the course:

$$\dot{x}(t) = f_c\big(x(t),u(t)\big) \;\xrightarrow{\;T_s\;}\; x(k+1) = f\big(x(k),u(k)\big)$$

Problem Formulation (2)


• There are many methods for discretizing a continuous-time system (e.g., Euler, Runge-Kutta, etc.)

• Let’s consider the simplest one based on the Euler method


$$\dot{x}(t) \approx \frac{x(k+1) - x(k)}{T_s}$$

• Then, the non-linear model of the system in continuous-time can be expressed in discrete-
time as follows
$$\dot{x}(t) = f_c\big(x(t),u(t)\big) \;\xrightarrow{\;T_s\;}\; x(k+1) = f\big(x(k),u(k)\big) = x(k) + T_s\, f_c\big(x(k),u(k)\big)$$
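As an illustration, the Euler discretization can be coded directly in MATLAB; the following is a minimal sketch, where the continuous-time dynamics fc (a function handle), the sampling time Ts, the initial state x0, the input sequence u and the horizon N are assumed to be defined by the user:

% Forward-Euler discretization: f(x(k),u(k)) = x(k) + Ts*fc(x(k),u(k))
f = @(x, u) x + Ts*fc(x, u);     % discrete-time model

% Example: propagate the discrete-time model over N steps
x = x0;
for k = 1:N
    x = f(x, u(:,k));            % x(k+1) = f(x(k),u(k))
end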

Problem Formulation (3)

• The problem of state estimation is transformed into an optimization problem:

(1) The estimation goals are transformed into an objective function, named J, that is a function of the estimated states over a time horizon N
(2) The model and the physical limitations of the states are the constraints of the optimization
problem

$$\min_{\hat{x}(0),\ldots,\hat{x}(N)} \;\; \hat{x}^T(0)\,P(0)^{-1}\,\hat{x}(0) \;+\; \sum_{k=0}^{N-1}\big[\,w^T(k)\,Q^{-1}\,w(k) + v^T(k)\,R^{-1}\,v(k)\,\big]$$

subject to:

$$\hat{x}(k+1) = f\big(\hat{x}(k),u(k)\big) + w(k), \qquad k = 0,\ldots,N-1$$
$$y(k) = g\big(\hat{x}(k),u(k)\big) + v(k), \qquad k = 0,\ldots,N-1$$
$$\hat{x}(k) \in [\,\underline{x},\,\overline{x}\,], \qquad k = 0,\ldots,N$$
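As a sketch only (a numerical implementation for the linear case is given later in this chapter), the general problem above could be posed in YALMIP as follows, assuming the model functions f and g are built from operators YALMIP supports, and that the data u, y, the bounds x_min, x_max, and the matrices P0, Q, R are available:

xhat = sdpvar(nx*ones(1,N+1), ones(1,N+1));   % estimated states x_hat(0..N)
w    = sdpvar(nx*ones(1,N),   ones(1,N));     % process disturbances
v    = sdpvar(ny*ones(1,N),   ones(1,N));     % measurement noises

objective   = xhat{1}'*inv(P0)*xhat{1};       % initial-condition term
constraints = [x_min <= xhat{1} <= x_max];
for k = 1:N
    objective   = objective + w{k}'*inv(Q)*w{k} + v{k}'*inv(R)*v{k};
    constraints = [constraints, xhat{k+1} == f(xhat{k}, u(:,k)) + w{k}];  % model
    constraints = [constraints, y(:,k)    == g(xhat{k}, u(:,k)) + v{k}];  % measurements
    constraints = [constraints, x_min <= xhat{k+1} <= x_max];             % physical limits
end

options = sdpsettings('solver','fmincon');    % any nonlinear solver available to YALMIP
optimize(constraints, objective, options);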

Problem Solution

• To solve the optimization problem associated with the optimal state estimation problem, there are two procedures:

(1) Analytically, using the theory learned in the first part of the course.
(2) Numerically, using numerical solvers such as the ones available in MATLAB.

• For linear systems, both solutions are possible when neglecting the physical constraints
affecting the states.

• For non-linear systems, the analytical solution is generally intractable, so only the numerical one is practical.

Problem Formulation: The Linear Case (Kalman)

• When the system to be estimated can be formulated or approximated using a linear model, the system can be represented in the standard linear state-space form, which after discretisation can be expressed as follows:

$$\dot{x}(t) = A_c\,x(t) + B_c\,u(t) \;\xrightarrow{\;T_s\;}\; x(k+1) = A\,x(k) + B\,u(k)$$

• Then, the previous optimization problem can be reformulated in the following way:

$$\min_{\hat{x}(0),\ldots,\hat{x}(N)} \;\; \hat{x}^T(0)\,P(0)^{-1}\,\hat{x}(0) \;+\; \sum_{k=0}^{N-1}\big[\,w^T(k)\,Q^{-1}\,w(k) + v^T(k)\,R^{-1}\,v(k)\,\big]$$

subject to:

$$\hat{x}(k+1) = A\hat{x}(k) + Bu(k) + w(k), \qquad k = 0,\ldots,N-1$$
$$y(k) = C\hat{x}(k) + v(k), \qquad k = 0,\ldots,N-1$$
$$\hat{x}(k) \in [\,\underline{x},\,\overline{x}\,], \qquad k = 1,\ldots,N$$

Numerical Solution of the Kalman Filter

• The numerical solution of the optimal state estimation problem can be obtained using an optimization modelling language such as YALMIP:

% Decision variables: cell arrays of state, disturbance and noise vectors
x = sdpvar(nx*ones(1,N+1), ones(1,N+1));
w = sdpvar(nx*ones(1,N),   ones(1,N));
v = sdpvar(ny*ones(1,N),   ones(1,N));

constraints = [];
objective = x{1}'*inv(P0)*x{1};                 % initial-condition term
for k = 1:N
    objective   = objective + w{k}'*inv(Q)*w{k} + v{k}'*inv(R)*v{k};
    constraints = [constraints, x{k+1} == A*x{k} + B*u(:,k) + w{k}];   % model
    constraints = [constraints, y(:,k) == C*x{k} + v{k}];              % measurements
end

options = sdpsettings('solver','quadprog');

optimize(constraints, objective, options);
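Once optimize has returned, the estimated state trajectory can be recovered from the decision variables with YALMIP's value function (a minimal sketch using the variables defined above):

% Retrieve the numerical values of the optimal state estimates
xhat = zeros(nx, N+1);
for k = 1:N+1
    xhat(:,k) = value(x{k});
end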

Analytical Solution of Kalman Filter

• The analytical solution of the optimal state estimation problem can be done using several
approaches:

1. Lagrange (or substitution) method


2. Dynamic programming

• To obtain the analytical solution, physical constraints are neglected and the objective
function is expressed in vector/matrix form as follows

$$\min_{\hat{x}(0),\ldots,\hat{x}(N)} \;\; \hat{x}^T(0)\,P(0)^{-1}\,\hat{x}(0) \;+\; \sum_{k=0}^{N-1}\big[\,w^T(k)\,Q^{-1}\,w(k) + v^T(k)\,R^{-1}\,v(k)\,\big]$$

subject to:

$$\hat{x}(k+1) = A\hat{x}(k) + Bu(k) + w(k), \qquad k = 0,\ldots,N-1$$
$$y(k) = C\hat{x}(k) + v(k), \qquad k = 0,\ldots,N-1$$

Analytical Solution of Kalman Filter: Substitution Method (1)

• To solve the previous optimization problem, the equality constraints are substituted into the objective, giving the new objective function:

$$J = \hat{x}^T(0)\,P(0)^{-1}\,\hat{x}(0) + \sum_{k=0}^{N-1}\Big[\big(\hat{x}(k+1)-A\hat{x}(k)-Bu(k)\big)^T Q^{-1}\big(\hat{x}(k+1)-A\hat{x}(k)-Bu(k)\big) + \big(y(k)-C\hat{x}(k)\big)^T R^{-1}\big(y(k)-C\hat{x}(k)\big)\Big]$$

• To obtain the analytical solution, the gradient of $J$ must be zero: $\nabla J = 0$

• This implies the following partial derivatives:

$$\frac{\partial J}{\partial \hat{x}(k)} = 0, \qquad k = 0,1,2,\ldots,N$$
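As a sketch of this step, for an interior index $0 < k < N$ the variable $\hat{x}(k)$ appears in three quadratic terms of $J$, and the corresponding stationarity condition reads:

$$\frac{\partial J}{\partial \hat{x}(k)} = 2Q^{-1}\big(\hat{x}(k)-A\hat{x}(k-1)-Bu(k-1)\big) - 2A^TQ^{-1}\big(\hat{x}(k+1)-A\hat{x}(k)-Bu(k)\big) - 2C^TR^{-1}\big(y(k)-C\hat{x}(k)\big) = 0$$

Solving this set of coupled linear equations recursively in $k$ leads to the observer and Riccati recursions given on the next slide.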

Analytical Solution of Kalman Filter: Substitution Method (2)

Optimal Estimation

The optimal estimation obtained from the analytical solution is a state observer:

$$\hat{x}(k+1) = A\hat{x}(k) + Bu(k) + L(k)\big(y(k)-C\hat{x}(k)\big)$$

where

$$L(k) = A\,P(k)\,C^T\big[R + C\,P(k)\,C^T\big]^{-1}$$

Riccati Equation

$$P(k+1) = Q + \big[A - L(k)\,C\big]\,P(k)\,A^T$$

where $P(0) = P_0$
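A minimal MATLAB sketch of this recursion, assuming A, B, C, Q, R, P0, an initial estimate xhat0 and the data sequences u and y are already defined:

% Time-varying Kalman observer: gain computation and Riccati recursion
P    = P0;                 % P(0)
xhat = xhat0;              % x_hat(0)
for k = 1:N
    L    = A*P*C' / (R + C*P*C');                       % L(k)
    xhat = A*xhat + B*u(:,k) + L*(y(:,k) - C*xhat);     % x_hat(k+1)
    P    = Q + (A - L*C)*P*A';                          % P(k+1)
end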

Approximate Analytical Solution: Steady State Approximation

If the horizon N is long enough, the Riccati equation reaches a steady-state solution:

$$P(k+1) = P(k) = P_{ss}$$

Optimal Estimation

The optimal estimation obtained from the analytical solution is a state observer:

$$\hat{x}(k+1) = A\hat{x}(k) + Bu(k) + L_{ss}\big(y(k)-C\hat{x}(k)\big)$$

where

$$L_{ss} = A\,P_{ss}\,C^T\big[R + C\,P_{ss}\,C^T\big]^{-1}$$

Riccati Equation

$$P_{ss} = Q + \big[A - L_{ss}\,C\big]\,P_{ss}\,A^T$$
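One simple way to obtain Pss and Lss numerically is to iterate the Riccati recursion until it converges; the sketch below assumes A, C, Q, R and P0 are defined (dedicated Riccati solvers such as MATLAB's idare can be used instead):

% Iterate the Riccati recursion until P(k+1) is numerically equal to P(k)
P = P0;
for i = 1:1000
    L    = A*P*C' / (R + C*P*C');
    Pnew = Q + (A - L*C)*P*A';
    if norm(Pnew - P, 'fro') < 1e-10
        P = Pnew;
        break
    end
    P = Pnew;
end
Pss = P;
Lss = A*Pss*C' / (R + C*Pss*C');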



Analytical Solution of Kalman Filter: Example (1)

• Calculate the optimal state estimation with the analytical solution for the following system:

$$x(k+1) = x(k) + w(k)$$
$$y(k) = x(k) + v(k)$$
• Consider that the disturbance and the noise have covariances Q = 1 and R = 1/4, respectively, and that the initial estimate is $\hat{x}(0) = 0$.
• Since the initial condition is assumed to be unknown, $P(0) = 10^{12}$.

• The estimation at k=0 is:

$$L(0) = 1\cdot P(0)\cdot 1\cdot\Big[\tfrac{1}{4} + 1\cdot P(0)\cdot 1\Big]^{-1} \approx 1$$
$$\hat{x}(1) = 1\cdot \hat{x}(0) + L(0)\big(y(0) - C\hat{x}(0)\big) \approx y(0)$$
$$x(1) \in \big[\hat{x}(1) - \sqrt{P(1)},\; \hat{x}(1) + \sqrt{P(1)}\big]$$
$$P(1) = 1 + \big[1 - L(0)\cdot 1\big]P(0)\cdot 1 \approx 1$$

Analytical Solution of Kalman Filter: Example (2)

• The estimation at k=1 is:

$$L(1) = 1\cdot P(1)\cdot 1\cdot\Big[\tfrac{1}{4} + 1\cdot P(1)\cdot 1\Big]^{-1} = \tfrac{4}{5}$$
$$\hat{x}(2) = 1\cdot \hat{x}(1) + L(1)\big(y(1) - C\hat{x}(1)\big) = \tfrac{1}{5}\hat{x}(1) + \tfrac{4}{5}y(1)$$
$$x(2) \in \big[\hat{x}(2) - \sqrt{P(2)},\; \hat{x}(2) + \sqrt{P(2)}\big]$$
$$P(2) = 1 + \big[1 - L(1)\cdot 1\big]P(1)\cdot 1 = \tfrac{6}{5}$$

Analytical Solution of Kalman Filter: Example (3)

• The steady state approximation can be found as follows:

$$L_{ss} = 1\cdot P_{ss}\cdot 1\cdot\Big[\tfrac{1}{4} + 1\cdot P_{ss}\cdot 1\Big]^{-1} = \frac{P_{ss}}{\tfrac{1}{4} + P_{ss}}$$
$$P_{ss} = 1 + \big[1 - L_{ss}\big]P_{ss}$$
$$\Rightarrow\; P_{ss}^2 - P_{ss} - \tfrac{1}{4} = 0 \;\Rightarrow\; P_{ss} = 1.2071 \quad(\text{the negative root } -0.2071 \text{ is discarded})$$
$$L_{ss} = 0.8284$$
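These values are easy to check numerically in MATLAB (a minimal sketch for this scalar example):

% Scalar example: A = 1, C = 1, Q = 1, R = 1/4
r   = roots([1 -1 -1/4]);     % solves Pss^2 - Pss - 1/4 = 0
Pss = r(r > 0)                % 1.2071 (the negative root is discarded)
Lss = Pss/(1/4 + Pss)         % 0.8284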

Analytical Solution of Kalman Filter: Linear Matrix Inequalities (1)

• The LMI for solving the Kalman filter can be obtained from that of the LQR by duality:

$$\begin{bmatrix} -Y & YA - W^TC & YH^T & W^T \\ A^TY - C^TW & -Y & 0 & 0 \\ HY & 0 & -I & 0 \\ W & 0 & 0 & -R^{-1} \end{bmatrix} \le 0, \qquad Y = P^{-1}$$

• The value $L_{ss}^T = W\,Y^{-1}$ can be obtained considering that $J_{opt} = x_o^T P\, x_o$ and imposing

$$\begin{bmatrix} \gamma I & I \\ I & Y \end{bmatrix} \ge 0 \quad \text{(i.e., } P = Y^{-1} \le \gamma I\text{)}$$

• Leading to the following optimization problem:

$$\min_{W,Y}\;\gamma$$

subject to:

$$\begin{bmatrix} \gamma I & I \\ I & Y \end{bmatrix} \ge 0$$
$$\begin{bmatrix} -Y & YA - W^TC & YH^T & W^T \\ A^TY - C^TW & -Y & 0 & 0 \\ HY & 0 & -I & 0 \\ W & 0 & 0 & -R^{-1} \end{bmatrix} \le 0$$

Analytical Solution of Kalman Filter: Linear Matrix Inequalities (2)

• This LMI problem can be solved with YALMIP and the SeDuMi solver as follows:

nh = size(H,1);                        % number of rows of H (assumed defined)
Y = sdpvar(nx,nx);                     % Y = P^-1 (symmetric by default)
W = sdpvar(ny,nx,'full');              % W = Lss'*Y
gamma = sdpvar(1,1);

constraints = [Y >= 0];
constraints = [constraints, [gamma*eye(nx) eye(nx); eye(nx) Y] >= 0];
constraints = [constraints, [-Y          Y*A-W'*C     Y*H'          W'; ...
                             A'*Y-C'*W   -Y           zeros(nx,nh)  zeros(nx,ny); ...
                             H*Y         zeros(nh,nx) -eye(nh)      zeros(nh,ny); ...
                             W           zeros(ny,nx) zeros(ny,nh)  -inv(R)] <= 0];

options = sdpsettings('solver','sedumi');
optimize(constraints, gamma, options);
Lss = (value(W)*inv(value(Y)))';       % since Lss' = W*Y^(-1)
