Introduction To Kalman Filtering: An Engineer's Perspective
Gilbert Gede
Outline
Introduction: Motivation, History, My Approach
Filter Overview: What it is, Step by Step, Covariance Matrices?
Simple Example: System, Filter Formulation, Simulation
Conclusions
Let's say we have a physical, dynamic system. Normally, we want to measure the states. This can be problematic, due to real-world limitations on sensors. So we use filters and observers.
Limitations of Sensors
Unfortunate things about the real world we are all familiar with:
- Not all quantities can be directly measured
- Size & cost
- Noise & biases
Sensor Fusion
Sensor fusion is the process of combining multiple sensor readings to create a more useful measurement. Not all Kalman Filters are necessarily fusing multiple sensors, but most do.
Developed around 1960, mainly by Rudolf E. Kalman. It was originally designed for aerospace guidance applications. While it is the optimal observer for a system with noise, this is only true for the linear case. A non-linear Kalman Filter cannot be proven to be optimal.
I'm really only interested in measuring mechanical systems, so most of what I describe and present will be with this focus. The sensors I have tried to build my understanding around are similar to the MEMS sensors we have been using in the lab.
I'm not going to go through the derivation of the filter, mainly because I haven't done it myself. I'm also not going to discuss more than the linear and Extended Kalman Filters. There are other versions, such as the continuous filter, or the continuous-discrete filter (for a continuous system with discrete measurement points), but I have not studied these yet.
I didn't completely understand the Kalman Filter until I thought of it in a specific sense: a discrete/digital filter, with two different steps as part of each cycle. This doesn't really define a Kalman Filter, but it is how I am thinking about it.
As stated, this is a two-step filter. One step is based on the system dynamics; the other is based on the sensor inputs. These are tied together with three covariance matrices and the Kalman gain.
System Description
It is very important to note that z represents the sensor measurement, but as calculated from the states. Another way to say this is that, from only the states, you need to be able to calculate something that ideally would reproduce the sensor measurement.
Time Update
The states get updated based on the known system inputs and the system dynamics. The state covariance matrix (P_k) is updated using the state matrix and the process noise covariance matrix (Q). It should be noted that even if P_k = 0, it will pick up the process noise after this step.
System Description:
    x' = f(x, u)
    z = h(x)

Time Update:
    x_{k+1} = f(x_k, u_k)
    P_{k+1} = A P_k A^T + Q

Measurement Update:
    K = P_k H^T (H P_k H^T + R)^{-1}
    x_{k+1} = x_k + K (y_k - H x_k)
    P_{k+1} = (I - K H) P_k
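To make the two steps concrete, here is a minimal sketch of one filter cycle in Python with NumPy. The function and variable names are my own, chosen to mirror the symbols above; treat it as an illustration, not a polished implementation.

```python
import numpy as np

def kalman_step(x, P, u, y, A, B, H, Q, R):
    """One cycle of a discrete linear Kalman Filter.

    x : current state estimate, shape (n,)
    P : current state covariance, shape (n, n)
    u : known input, shape (m,)
    y : sensor measurement, shape (p,)
    """
    # Time update: propagate the state and its covariance
    # through the system dynamics.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q

    # Measurement update: compute the Kalman gain and correct
    # the prediction with the measurement residual.
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```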
When dealing with the extended Kalman Filter, before the time update step, you would linearize the state space model to get the state matrix, A. If the process noise covariance matrix, Q, is dependent on the states, then it needs to be calculated before the time update as well. The input, u, will not be from the next step (we use the old one).
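As a sketch of how that linearization could be done numerically, here is a simple finite-difference Jacobian; the function name and step size are my own choices, and an analytical Jacobian would normally be preferable.

```python
import numpy as np

def numerical_jacobian(f, x, u, eps=1e-6):
    """Finite-difference Jacobian of f(x, u) with respect to x.

    Returns A with A[i, j] = d f_i / d x_j, evaluated at (x, u).
    """
    n = len(x)
    f0 = f(x, u)
    A = np.zeros((len(f0), n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps                       # perturb one state at a time
        A[:, j] = (f(x + dx, u) - f0) / eps
    return A
```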
Measurement Update
The Kalman gain is calculated from the state covariance matrix, P, the observation matrix, H, and the measurement noise covariance matrix, R. The state is updated using the Kalman gain and the error between the calculated sensor output and the actual sensor output (measured at this point). The state covariance matrix, P, is updated using the Kalman gain, K, and the observation matrix, H.
Measurement Update
I feel it is important to note that the Kalman gain, while calculated from the measurement noise covariance matrix, R, does not necessarily have non-zero entries in all rows. In fact, R does not have to be the same size as the state matrix. This can lead to not all states being affected during the measurement update step, which can lead to an unsuccessful filter implementation.
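A small shape check illustrates this, using a hypothetical 4-state, 2-sensor setup: H is 2x4, R is 2x2, and K comes out 4x2, with zero rows for any state that P does not couple to a measured state.

```python
import numpy as np

n, p = 4, 2                       # hypothetical: 4 states, 2 sensors
P = np.eye(n)                     # no cross-coupling between states
H = np.zeros((p, n))
H[0, 0] = 1.0                     # sensor 1 sees state 0
H[1, 2] = 1.0                     # sensor 2 sees state 2
R = np.diag([0.5, 0.5])           # 2x2, smaller than the 4x4 state covariance

K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
# Because P has no cross terms, rows 1 and 3 of K are exactly zero,
# so those states are not corrected by the measurement update.
print(K.shape)                    # (4, 2)
print(K[1], K[3])                 # both [0. 0.]
```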
System Description:
    x' = f(x, u)
    z = h(x)

Time Update:
    x_{k+1} = f(x_k, u_k)
    P_{k+1} = A P_k A^T + Q

Measurement Update:
    K = P_k H^T (H P_k H^T + R)^{-1}
    x_{k+1} = x_k + K (y_k - H x_k)
    P_{k+1} = (I - K H) P_k
As seen in the filter equations, there are three covariance matrices:
- P_k, the state covariance matrix
- Q, the process covariance matrix
- R, the measurement covariance matrix
cov(x_j, x_k) = E[(x_j - μ_j)(x_k - μ_k)]

Remember, the standard deviation σ satisfies σ^2 = E[(x - μ)^2].
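As a quick numerical sanity check of that definition, the sample covariance of two made-up signals can be computed directly from data; the names and values below are arbitrary.

```python
import numpy as np

# Two made-up, correlated signals.
rng = np.random.default_rng(0)
x1 = rng.normal(size=1000)
x2 = 0.5 * x1 + rng.normal(scale=0.1, size=1000)

# cov(x_j, x_k) = E[(x_j - mu_j)(x_k - mu_k)], estimated from samples.
cov_manual = np.mean((x1 - x1.mean()) * (x2 - x2.mean()))
print(cov_manual, np.cov(x1, x2)[0, 1])   # the two values should be close
```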
As you can imagine, you could provide a standard deviation for every state, parameter, and sensor measurement. One would think this would allow a straightforward calculation of these covariance matrices. This would be incorrect though, as not all variables are correlated. This is where a lot of the work in creating a successful filter is, as far as I can tell.
The measurement noise covariance matrix is diagonal, with one variance per sensor:

R = diag(σ_1^2, σ_2^2, ..., σ_n^2)

These will probably be some constant values.
The process noise covariance matrix is a little more difficult. I'm not sure I am in the best position to describe it, but it is basically describing the error in the state matrix. There is a more correct way to define it, but I will leave that for the end.
In an informal sense, similar to my previous description of the process covariance matrix, the state covariance matrix represents the estimated error. It is different, however, in that it is updated along with the state at each step. We can look at these values and get an idea of how accurate our current estimate is.
I have not yet completely explored all of the information that this matrix represents, and have instead just left it alone. One important observation: for initialization, setting it to 0 seems to work, as the update steps seem to correct it quickly. From what I have read however, this will not always be the case (especially for the EKF).
Example System
Now I will go through an example problem. The system will be a particle moving in a plane, with 4 states and 2 inputs:

x' = [0 1 0 0; 0 0 0 0; 0 0 0 1; 0 0 0 0] x + [0 0; 1 0; 0 0; 0 1] u

z = [1 0 0 0; 0 0 1 0] x

where x = [x, v_x, y, v_y]^T and u = [a_x, a_y]^T.
It is basically a double integrator, with acceleration as an input, and positions which can be measured by sensors.
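A sketch of these continuous-time matrices in NumPy, with the state ordered as [x, v_x, y, v_y] and the input as [a_x, a_y]; the names A, B, H are mine, matching the filter equations.

```python
import numpy as np

# Continuous-time particle-in-a-plane model: two decoupled double integrators.
A = np.array([[0., 1., 0., 0.],
              [0., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])
B = np.array([[0., 0.],
              [1., 0.],
              [0., 0.],
              [0., 1.]])
# The position sensors (e.g. GPS) measure x and y only.
H = np.array([[1., 0., 0., 0.],
              [0., 0., 1., 0.]])
```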
Example System
So, for this example, we have two sets of sensors: accelerometers and position sensors (like GPS). These sensors will have simulated noise, just like real sensors. We will use the Kalman Filter to fuse the sensor readings and estimate the position.
Discretization
The first step is to discretize the model:

x_{k+1} = [1 Δt 0 0; 0 1 0 0; 0 0 1 Δt; 0 0 0 1] x_k + [Δt^2/2 0; Δt 0; 0 Δt^2/2; 0 Δt] u_k

where x = [x, v_x, y, v_y]^T.
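The same discrete matrices written out in NumPy, assuming the Δt = 0.1 s value specified later for the simulation; variable names are my own.

```python
import numpy as np

dt = 0.1  # time step, seconds

# Discrete state transition and input matrices for the double integrator.
Ad = np.array([[1., dt, 0., 0.],
               [0., 1., 0., 0.],
               [0., 0., 1., dt],
               [0., 0., 0., 1.]])
Bd = np.array([[dt**2 / 2, 0.],
               [dt,        0.],
               [0., dt**2 / 2],
               [0., dt       ]])
```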
Covariance Matrices
The next thing we have to provide in the filter description is the measurement (sensor) and process noise covariance matrices. We will start with the measurement covariance matrix.
R = diag(σ_gps,x^2, σ_gps,y^2)
The value is simply the standard deviation of the sensor squared, or the variance of the sensor. There is no coupling (in this example) between the x and y positions as reported by the GPS sensor. I'm not sure if this is true in real life as well...
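In code this is just a diagonal matrix of the GPS variances; a two-line sketch using the σ_gps value specified later for the simulation.

```python
import numpy as np

sigma_gps = 2.0                               # GPS position noise, one sigma (m)
R = np.diag([sigma_gps**2, sigma_gps**2])     # no x/y coupling assumed
```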
We can then take the derivative of the state vector with respect to the noises. This will be defined as F = ∂x/∂v. It can be seen that this will simply be the B matrix, due to linearity.
If we define R_v = E[(v_i - v̄)(v_i - v̄)^T], where v is the noise vector, we would get a 2x2 matrix. This matrix should actually be diagonal, because the sensor errors will not be coupled. This gives

R_v = diag(σ_acc,x^2, σ_acc,y^2)
Finally, we can define Q:

Q = F R_v F^T = σ_acc^2 [Δt^4/4 Δt^3/2 0 0; Δt^3/2 Δt^2 0 0; 0 0 Δt^4/4 Δt^3/2; 0 0 Δt^3/2 Δt^2]

When dealing with the EKF, F might need to be linearized at each time step. This would happen at the same time as the linearization of A.
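A sketch of building Q this way in code, with F taken as the discrete input matrix from before and equal accelerometer variances on the diagonal of R_v; names and numbers follow the example, but the layout is my own.

```python
import numpy as np

dt = 0.1
sigma_acc = 0.5                       # accelerometer noise, one sigma

# F is the derivative of the state with respect to the noise,
# which here is just the discrete input matrix.
F = np.array([[dt**2 / 2, 0.],
              [dt,        0.],
              [0., dt**2 / 2],
              [0., dt       ]])
Rv = np.diag([sigma_acc**2, sigma_acc**2])

Q = F @ Rv @ F.T                      # 4x4 process noise covariance
```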
Simulation Parameters
We are now ready to simulate this system. We need to specify the simulation parameters first:

Δt = 0.1 s
σ_gps = 2 m
σ_acc = 0.5 m/s^2
Simulation Inputs
The inputs to the model follow:

a_x = 1 + sin(t)
a_y = 2 + 5 sin(t)
x = t^2/2 - sin(t)
y = t^2 - 5 cos(t)

The appropriate noises are added on top of these measurements.
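Putting the pieces together, here is a rough sketch of the whole simulation: propagate the true trajectory from these inputs, corrupt the accelerometer and GPS readings with the noise levels above, and run the filter. The variable names, the 30-second run length, and the random seed are my own choices.

```python
import numpy as np

dt, T = 0.1, 30.0
sigma_gps, sigma_acc = 2.0, 0.5
steps = int(T / dt)

Ad = np.array([[1., dt, 0., 0.],
               [0., 1., 0., 0.],
               [0., 0., 1., dt],
               [0., 0., 0., 1.]])
Bd = np.array([[dt**2/2, 0.], [dt, 0.], [0., dt**2/2], [0., dt]])
H = np.array([[1., 0., 0., 0.], [0., 0., 1., 0.]])
Q = Bd @ np.diag([sigma_acc**2] * 2) @ Bd.T
R = np.diag([sigma_gps**2] * 2)

rng = np.random.default_rng(1)
x_true = np.zeros(4)                  # [x, vx, y, vy]
x_est, P = np.zeros(4), np.zeros((4, 4))

for k in range(steps):
    t = k * dt
    a = np.array([1 + np.sin(t), 2 + 5 * np.sin(t)])    # true accelerations

    # Propagate the true system and create noisy sensor readings.
    x_true = Ad @ x_true + Bd @ a
    a_meas = a + rng.normal(scale=sigma_acc, size=2)     # accelerometer
    gps = H @ x_true + rng.normal(scale=sigma_gps, size=2)

    # Time update with the measured acceleration as the input.
    x_est = Ad @ x_est + Bd @ a_meas
    P = Ad @ P @ Ad.T + Q

    # Measurement update with the GPS fix.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_est = x_est + K @ (gps - H @ x_est)
    P = (np.eye(4) - K @ H) @ P

print("final position error:", np.hypot(*(H @ (x_true - x_est))))
```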
Simulation Results
[Figure: simulation results, Y pos over the run; legend includes the real trajectory]
Simulation Results
[Figure: simulation results, Y pos vs. X pos]
Simulation Results
We can see that the results with the filter are much better. The calculated standard deviations for each method are:

Kalman: 0.6 m
GPS: 1.6 m
Accelerometer (dead reckoning): 2.5 m

Obviously, for longer simulation times the error gets much larger in the dead reckoning case.
Conclusions
Success?
I think it is safe to say that this filter implementation is successful. I believe we can also say it is optimal.
Conclusions
Next Steps
With this simple example, we could next try estimating sensor biases. I did not include any here, but estimating biases usually seems to be successful. This would be done by adding an extra state, assuming it was constant in time updates, and ensuring that there were proper covariances describing it.
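A sketch of what that augmentation might look like for one axis, with the accelerometer bias as a third state that is held constant in the time update; the matrices and noise values here are hypothetical, chosen only to illustrate the structure.

```python
import numpy as np

dt = 0.1

# Augmented state: [position, velocity, accelerometer bias].
# The measured acceleration enters through Bd, and the bias state
# subtracts its estimated constant offset from that input.
Ad = np.array([[1., dt, -dt**2 / 2],
               [0., 1., -dt       ],
               [0., 0.,  1.       ]])   # bias is constant in time updates
Bd = np.array([[dt**2 / 2],
               [dt       ],
               [0.       ]])

# A small process noise entry for the bias lets the filter adjust it slowly.
Q = np.diag([1e-4, 1e-3, 1e-6])
```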
Conclusions
Next Steps
Exploration of other Kalman Filter types could be useful too. There exist the continuous filter and the continuous-discrete filter; we have discussed the EKF, but there is also the Unscented Kalman Filter for highly nonlinear systems. There are also square root forms and triangular forms.