Lab 4 Guide: State Estimation

The document describes implementing a Kalman filter for state estimation. It begins by generating simulated data from a discrete-time system with noise. It then formulates the Kalman filter estimation problem as a numerical optimization that minimizes the error between the true and estimated states over time. The Kalman filter equations are also presented, including computing the Kalman gain and updating the error covariance matrix at each time step. Finally, the steady-state Kalman filter is discussed, where the Kalman gain and error covariance matrix converge to constant values over time.

Uploaded by vicenc puig

Master's degree in Automatic Control and Robotics

Lab 4: Optimal Estimation


Data generation

The data are generated from the discrete-time model

    x(k+1) = A·x(k) + B·u(k) + w(k)
    y(k)   = C·x(k) + D·u(k) + v(k)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% generating data (run the problem formulation script first to define
% a,b,c,d,Q,R,x1)
rng(5); % initializing seed for random numbers
N=100; % number of data points
u=ones(N,1); % input vector
x=zeros(N+1,1); % initialization of states
y=zeros(N,1); % initialization of outputs
x(1)=x1; % initial condition
for k=1:N % data generation
    x(k+1)=a*x(k)+b*u(k)+randn(1)*sqrt(Q); % state
    y(k)=c*x(k)+d*u(k)+randn(1)*sqrt(R); % output
end
data=[u x(1:N) y];
save data2 data; % saving data (x(N+1) is not saved)
plot(x);
title('practice 4.2 DATA: True generated state')
xlabel('time')
ylabel('volume')

data =
    1.0000   10.0000    1.5269
    1.0000   10.8432    1.9177
    1.0000   11.4721    1.2434
    1.0000   12.2783    1.2455
    1.0000   13.2295    0.8928
    1.0000   13.4305    1.6766
    ...

[Figure: "practice 4.2 DATA: True generated state"; volume vs. time]
Optimization formulation of the prediction problem

min J = Σ_{k=1..N} w(k)^T·Q^(-1)·w(k) + Σ_{k=1..N} v(k)^T·R^(-1)·v(k)
      + (x(1) − x̂(1))^T·P1^(-1)·(x(1) − x̂(1))

s.t.  w(k) = x̂(k+1) − A·x̂(k) − B·u(k),   k = 1,…,N
      v(k) = y(k) − C·x̂(k) − D·u(k),     k = 1,…,N

Substituting the constraints into the objective gives the unconstrained problem

min J = Σ_{k=1..N} (x̂(k+1) − A·x̂(k) − B·u(k))^T·Q^(-1)·(x̂(k+1) − A·x̂(k) − B·u(k))
      + Σ_{k=1..N} (y(k) − C·x̂(k) − D·u(k))^T·R^(-1)·(y(k) − C·x̂(k) − D·u(k))
      + (x(1) − x̂(1))^T·P1^(-1)·(x(1) − x̂(1))

Model discretization

x(k  1)  A  x(k )  B  u(k )  w(k ) w(k)  ex(k) : system noise (disturbance)



y(k )  C  x(k )  D  u(k )  v(k )  v(k)  ey(k) : measurement noise

Given u(1)…u(N) and y(1), …y(N) determining the optimal estimate x̂(N  1)

  a  e k v T
V  k v  V  k b  v b   V(k  1)  a  V(k )  b  v b (k )
  1  e k v T
qout  k v  V  q out (k )  k v  V (k )  b  kb
kv
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Problem formulation

T=1; % sample time


kv=0.1; % parameter kv
kb=2; % parameter kb
a=exp(-kv*T); % parameter a (using exact discretization)
b=(1-exp(-kv*T))*kb/kv; % parameter b (using exact discretization)
c=kv; % parameter c
d=0; % parameter d

Q=0.1; % variance of system noise (disturbance)


R=0.1; % variance of measurement noise
x1=10; P1=0.2; % initial conditions for the estimation
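As a quick numerical check of the exact discretization (a Python sketch, not part of the lab scripts), the formulas above give:

```python
import math

# Exact discretization of Vdot = -kv*V + kb*vb with sample time T:
# a = exp(-kv*T), b = (1 - exp(-kv*T))*kb/kv
T, kv, kb = 1.0, 0.1, 2.0
a = math.exp(-kv * T)
b = (1 - math.exp(-kv * T)) * kb / kv
print(a, b)  # a ≈ 0.9048, b ≈ 1.9033
```

These are the values the MATLAB script computes for a and b.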

Kalman filter via numerical optimization


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Implement the Kalman filter as a numerical optimization (YALMIP)
% Estimating x(N+1) from y(1:N) and u(1:N)
load data2; % load data u,x,y (only u and y are used)
N=size(data,1); % determine the number of points
u=data(:,1); % read the inputs
y=data(:,3); % read the outputs
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% formulation of the optimization problem
x_hat = sdpvar(repmat(1,1,N+1),repmat(1,1,N+1)); % cell array of scalar decision variables
objective = (x1-x_hat{1})'*P1^(-1)*(x1-x_hat{1}); % prior term on the initial state
constraints = [];
for k = 1:N
    objective = objective+...
        (x_hat{k+1}-a*x_hat{k}-b*u(k))'*Q^(-1)*(x_hat{k+1}-a*x_hat{k}-b*u(k))...
        +(y(k)-c*x_hat{k})'*R^(-1)*(y(k)-c*x_hat{k});
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% solution of the problem
options = sdpsettings('solver','quadprog');
optimize(constraints,objective,options);

x_est = zeros(N+1,1);
for k = 1:N+1
    x_est(k,1) = value(x_hat{k});
end
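Because every residual is linear in the decision variables, this problem is a weighted linear least-squares problem. The following Python/NumPy sketch mirrors the formulation without a QP solver (the data are regenerated in Python, so the random numbers differ from MATLAB's; the array stacking and names are mine, not part of the lab):

```python
import numpy as np

# Lab model parameters (scalar system, d = 0)
T, kv, kb = 1.0, 0.1, 2.0
a = np.exp(-kv * T)
b = (1 - np.exp(-kv * T)) * kb / kv
c, Q, R = kv, 0.1, 0.1
x1, P1 = 10.0, 0.2

# Generate data as in the data-generation script (Python RNG, not MATLAB's)
rng = np.random.default_rng(5)
N = 100
u = np.ones(N)
x = np.zeros(N + 1)
y = np.zeros(N)
x[0] = x1
for k in range(N):
    x[k + 1] = a * x[k] + b * u[k] + rng.normal() * np.sqrt(Q)
    y[k] = c * x[k] + rng.normal() * np.sqrt(R)

# Stack all weighted residuals: the objective is ||M z - r||^2, z = x_hat(1..N+1)
rows, rhs = [], []
e = np.zeros(N + 1)
e[0] = 1.0 / np.sqrt(P1)               # prior term (x1 - x_hat(1))' P1^-1 (...)
rows.append(e); rhs.append(x1 / np.sqrt(P1))
for k in range(N):
    e = np.zeros(N + 1)                # process residual, weighted by Q^(-1/2)
    e[k], e[k + 1] = -a / np.sqrt(Q), 1.0 / np.sqrt(Q)
    rows.append(e); rhs.append(b * u[k] / np.sqrt(Q))
    e = np.zeros(N + 1)                # measurement residual, weighted by R^(-1/2)
    e[k] = c / np.sqrt(R)
    rows.append(e); rhs.append(y[k] / np.sqrt(R))

M, r = np.array(rows), np.array(rhs)
x_est, *_ = np.linalg.lstsq(M, r, rcond=None)

# Sanity check: the minimizer's objective cannot exceed the true trajectory's
assert np.sum((M @ x_est - r) ** 2) <= np.sum((M @ x - r) ** 2) + 1e-9
print(x_est[-1])  # estimate of x(N+1)
```

The `lstsq` call plays the role of `quadprog` here; for vector-valued states the same stacking works with block rows.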
Optimization formulation of the prediction problem
[Figure: true state (blue), estimation (red), output/kv (green); volume vs. time]

Kalman filter

Given the initial estimate x̂(1) with covariance P(1), and u(k), y(k) for k = 1,…,N:

    L(k) = A·P(k)·C^T·(R + C·P(k)·C^T)^(-1)
    x̂(k+1) = A·x̂(k) + B·u(k) + L(k)·(y(k) − C·x̂(k))
    P(k+1) = Q + (A − L(k)·C)·P(k)·A^T
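The gain and covariance recursions depend only on the model and the noise covariances, not on the measurements, so they can be computed offline. A minimal Python sketch for the scalar lab model (parameter values from the lab; variable names are mine):

```python
import numpy as np

# Scalar lab model: a = exp(-kv*T), c = kv, noise variances Q, R, prior P(1)
a, c, Q, R = np.exp(-0.1), 0.1, 0.1, 0.1
P = 0.2  # P(1)
for k in range(100):
    L = a * P * c / (R + c * P * c)   # Kalman gain L(k)
    P = Q + (a - L * c) * P * a       # covariance update P(k+1)
print(L, P)  # converges to L ≈ 0.3981, P ≈ 0.4602
```

These limits match the steady-state values printed later in the guide.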
Kalman filter with confidence interval

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Kalman filter with confidence interval
x=zeros(N+1,1); % initialization of states
P=zeros(N+1,1); % initialization of P matrix
L=zeros(N,1); % initialization of L matrix
lb=zeros(N+1,1); % lower bounds of states
ub=zeros(N+1,1); % upper bounds of states
alpha=1.96; % select according to the desired confidence interval (1.96 for 95%)
x(1)=x1; % initial condition for x
P(1)=P1; % initial condition for P
lb(1)=x(1)-alpha*sqrt(P(1));
ub(1)=x(1)+alpha*sqrt(P(1));

for k=1:N % codify the Kalman filter in MATLAB

    ...
    lb(k+1)=x(k+1)-alpha*sqrt(P(k+1));
    ub(k+1)=x(k+1)+alpha*sqrt(P(k+1));
end

figure(2)
plot(x,'r'); % plot state estimation
hold on;
plot(lb,'b'); % plot lower bounds interval
plot(ub,'b'); % plot upper bound interval

Steady state Kalman filter

The time-varying gain and covariance

    L(k) = A·P(k)·C^T·(R + C·P(k)·C^T)^(-1)
    P(k+1) = Q + (A − L(k)·C)·P(k)·A^T

converge to constant values L and P satisfying

    L = A·P·C^T·(R + C·P·C^T)^(-1)
    P = Q + (A − L·C)·P·A^T

i.e. P solves the discrete-time algebraic Riccati equation

    P = Q + (A − A·P·C^T·(R + C·P·C^T)^(-1)·C)·P·A^T

>> [L P(1:100)]

ans =
    0.1774    0.2000
    0.2298    0.2605
    0.2703    0.3079
    0.3014    0.3446
    0.3251    0.3727
    0.3431    0.3942
    0.3568    0.4105
    0.3671    0.4228
    0.3748    0.4321
    0.3806    0.4391
    0.3850    0.4444
    0.3883    0.4484
    0.3908    0.4513
    0.3926    0.4536
    0.3940    0.4552
    0.3950    0.4565
    0.3958    0.4574
    0.3964    0.4581
    0.3968    0.4587
    0.3971    0.4590
    0.3974    0.4593
    0.3976    0.4596
    0.3977    0.4597
    0.3978    0.4598
    0.3979    0.4599
    0.3979    0.4600
    0.3980    0.4601
    ...
    0.3981    0.4602

For the scalar case, the steady-state values can be computed symbolically:

syms z
z=vpa(solve(z==Q+(a-a*z*c*(R+c*z*c)^-1*c)*z*a,z,'Real',true));
PSS=z(1);
LSS=a*PSS*c*(R+c*PSS*c)^-1;
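Since the lab model is scalar, the algebraic Riccati equation reduces to a quadratic in P, so the symbolic `solve` can be cross-checked in closed form. A Python sketch under the lab's parameter values (the quadratic rearrangement is mine):

```python
import numpy as np

a, c, Q, R = np.exp(-0.1), 0.1, 0.1, 0.1

# P = Q + (a - a*P*c*(R + c*P*c)^-1 * c)*P*a  rearranges (scalar case) to
# c^2*P^2 + (R*(1 - a^2) - Q*c^2)*P - Q*R = 0; take the positive root
qa = c**2
qb = R * (1 - a**2) - Q * c**2
qc = -Q * R
PSS = (-qb + np.sqrt(qb**2 - 4 * qa * qc)) / (2 * qa)
LSS = a * PSS * c / (R + c * PSS * c)
print(round(PSS, 4), round(LSS, 4))  # 0.4602 0.3981
```

This agrees with the converged column values printed above.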


Steady state Kalman filter

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Kalman filter with confidence interval using steady state
x=zeros(N+1,1); % initialization of states
lb=zeros(N+1,1); % lower bounds of states
ub=zeros(N+1,1); % upper bounds of states
alpha=1.96; % choose the desired confidence interval (1.96 for 95%)
x(1)=x1; % initial condition for x;
lb(1)=x(1)-alpha*sqrt(PSS);
ub(1)=x(1)+alpha*sqrt(PSS);

for k=1:N % codify steady state Kalman filter


...
lb(k+1)=x(k+1)-alpha*sqrt(PSS);
ub(k+1)=x(k+1)+alpha*sqrt(PSS);
end

figure
plot(x,'r'); % plot state estimation
hold on;
plot(lb,'b'); % plot lower bounds interval
plot(ub,'b'); % plot upper bound interval

Steady state Kalman filter


[Figure: "STEADY STATE: Estimation with 95% confidence"; volume vs. time]

Steady state Kalman filter

[Figure: two panels, "Estimation with 95% confidence" and "STEADY STATE: Estimation with 95% confidence"; volume vs. time]
Master's degree in Automatic Control and Robotics

MATLAB kalman function

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% MATLAB Kalman function
SYS=ss(a,[b 1],c,[0 0],-1); % plant with the disturbance as a second input
[KEST,L,P] = kalman(SYS,Q,R,'delayed');
lsim(KEST,[u,y],x1);

>> [L P]

ans =
    0.3981    0.4602

The estimator KEST implements

    x̂(k+1) = A·x̂(k) + B·u(k) + L·(y(k) − C·x̂(k))
           = (A − L·C)·x̂(k) + [B  L]·[u(k); y(k)]

    [ŷ(k); x̂(k)] = [C; 1]·x̂(k) + [0 0; 0 0]·[u(k); y(k)]

>> KEST

a =
           x1_e
   x1_e   0.865

b =
             u1      y1
   x1_e   1.903  0.3981

c =
           x1_e
   y1_e     0.1
   x1_e       1

d =
           u1  y1
   y1_e     0   0
   x1_e     0   0

[Figure: "Linear Simulation Results"; estimated output y1_e (To: y1) and estimated state x1_e (To: x1) vs. time (seconds)]

Steady state Kalman filter using LQR LMIs and Duality

The Riccati equation for the LQR problem can be formulated as the following LMI:

    [ Y            Y·A^T − W^T·B^T   Y·H^T   W^T      ]
    [ A·Y − B·W    Y                 0       0        ]  >= 0
    [ H·Y          0                 I       0        ]
    [ W            0                 0       R^(-1)   ]

where Q = H^T·H, Y = P^(-1), and K = W·Y^(-1).

The optimal value of K can be found considering that J_opt = x0^T·P·x0 <= gamma, by solving

    min_{W,Y} gamma

    s.t.  [ gamma·I   I ]
          [ I         Y ]  >= 0

together with the LMI above.
Master's degree in Automatic Control and Robotics
Steady state Kalman filter using LMIs
T=1; % sample time
kv=0.1; % parameter kv
kb=2; % parameter kb
a=exp(-kv*T); % parameter a (using exact discretization)
b=(1-exp(-kv*T))*kb/kv; % parameter b (using exact discretization)
c=kv; % parameter c
d=0; % parameter d
Q=0.1; % variance of system noise (disturbance)
R=0.1; % variance of measurement noise

%LMI solution
A=a'; % Duality principle for observer design
B=c'; % Duality principle for observer design

Y = sdpvar(1,1);
W = sdpvar(1,1);
gamma=sdpvar(1,1);

H=sqrt(Q);

F = [Y >= 0];
F = [F, [gamma*eye(1) eye(1);eye(1) Y]>=0];

% LQR LMI

% ... include here the LQR LMI provided in previous slide

optimize(F,gamma)

L = (value(W)*inv(value(Y)))' % Duality principle for observer design
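To see the duality numerically (a Python sketch, not part of the lab scripts; names and iteration count are mine): iterating the LQR Riccati recursion for the dual pair (A^T, C^T) with weights Q and R reproduces the steady-state Kalman covariance and gain.

```python
import numpy as np

# Dual system for observer design: A_d = a', B_d = c' (scalar lab model)
a, c, Q, R = np.exp(-0.1), 0.1, 0.1, 0.1
Ad, Bd = a, c

# LQR Riccati value iteration:
# P <- Q + Ad'*P*Ad - Ad'*P*Bd*(R + Bd'*P*Bd)^-1*Bd'*P*Ad
P = 0.0
for _ in range(200):
    K = (Ad * P * Bd) / (R + Bd * P * Bd)  # LQR gain of the dual problem
    P = Q + Ad * P * Ad - K * Bd * P * Ad
print(round(P, 4), round(K, 4))  # 0.4602 0.3981
```

The dual LQR gain K equals the steady-state Kalman gain L = K', and P equals PSS, which is what the duality-based LMI design exploits.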



MHE via numerical optimization


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Implement MHE via numerical optimization
% Estimating the current state from the last N inputs and outputs
load data2; % load data u,x,y (only u and y are used)
N=10; % horizon length (number of points in the window)
u=data(:,1); % read the inputs
y=data(:,3); % read the outputs
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% formulation of the optimization problem
x_hat = sdpvar(repmat(1,1,N+1),repmat(1,1,N+1));

x_0=sdpvar(1,1); % prior estimate of the state at the start of the window
yd=sdpvar(N,1); % outputs in the window
ud=sdpvar(N,1); % inputs in the window

objective = (x_0-x_hat{1})'*P1^(-1)*(x_0-x_hat{1}); % arrival-cost (prior) term
constraints = [];
for k = 1:N
    objective = objective+...
        (x_hat{k+1}-a*x_hat{k}-b*ud(k))'*Q^(-1)*(x_hat{k+1}-a*x_hat{k}-b*ud(k))...
        +(yd(k)-c*x_hat{k})'*R^(-1)*(yd(k)-c*x_hat{k});
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% solution of the problem
options = sdpsettings('solver','quadprog');
estimator=optimizer(constraints,objective,options,{yd,ud,x_0},x_hat{N+1});
Master's degree in Automatic Control and Robotics
MHE via numerical optimization
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% implement MHE via numerical optimization
M=100;
x_est=zeros(M+1,1); % initialization of the state estimates

% initialize the first N state estimates using the result of exercise (a)
...

% run the MHE in real time
for k=N+1:M+1
    x_est(k)=estimator(y(k-N:k-1),u(k-N:k-1),x_est(k-N));
end
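The same receding-window scheme can be sketched in Python/NumPy, solving each window by linear least squares instead of YALMIP's optimizer object (an illustrative sketch: the function and array names are mine, and the first estimates are initialized with the true states as a stand-in for the result of exercise (a)):

```python
import numpy as np

# Scalar lab model and data generation (Python RNG, not MATLAB's)
T, kv, kb = 1.0, 0.1, 2.0
a = np.exp(-kv * T)
b = (1 - np.exp(-kv * T)) * kb / kv
c, Q, R, x1, P1 = kv, 0.1, 0.1, 10.0, 0.2

rng = np.random.default_rng(5)
M = 100
u = np.ones(M + 1)
x = np.zeros(M + 2)
y = np.zeros(M + 1)
x[0] = x1
for k in range(M + 1):
    x[k + 1] = a * x[k] + b * u[k] + rng.normal() * np.sqrt(Q)
    y[k] = c * x[k] + rng.normal() * np.sqrt(R)

def mhe_window(yw, uw, x0):
    """Estimate the state one step past the window by weighted least squares."""
    n = len(yw)
    rows, rhs = [], []
    e = np.zeros(n + 1); e[0] = 1 / np.sqrt(P1)       # arrival-cost (prior) term
    rows.append(e); rhs.append(x0 / np.sqrt(P1))
    for k in range(n):
        e = np.zeros(n + 1)                           # process residual
        e[k], e[k + 1] = -a / np.sqrt(Q), 1 / np.sqrt(Q)
        rows.append(e); rhs.append(b * uw[k] / np.sqrt(Q))
        e = np.zeros(n + 1); e[k] = c / np.sqrt(R)    # measurement residual
        rows.append(e); rhs.append(yw[k] / np.sqrt(R))
    z, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return z[-1]

N = 10                      # horizon length
x_est = np.zeros(M + 2)
x_est[:N] = x[:N]           # stand-in initialization for the first N estimates
for k in range(N, M + 2):   # slide the window over the data
    x_est[k] = mhe_window(y[k - N:k], u[k - N:k], x_est[k - N])

# the estimates should track the true state reasonably well
assert np.mean(np.abs(x_est - x)) < 1.5
```

Each window solve is the same least-squares stacking used for the full-trajectory estimator, restricted to the last N samples with the prior applied at the window start.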

[Figure: true state (blue), estimation (red), output/kv (green); volume vs. time]
