MOBILE ROBOT LOCALIZATION USING ODOMETRY AND KINECT SENSOR

N. Ganganath and H. Leung

Department of Electrical and Computer Engineering


Schulich School of Engineering
University of Calgary, Calgary, Alberta, Canada
Email: {ngmarasi, leungh}@ucalgary.ca

ABSTRACT

This paper presents a mobile robot localization system for an indoor environment using an inexpensive sensor system. The extended Kalman filter (EKF) and the particle filter (PF) are used for sensor fusion in pose estimation in order to minimize the uncertainty in robot localization. The robot is maneuvered in a known environment containing visual landmarks. The prediction phase of the EKF and the PF is implemented using information from the robot odometry, whose error may accumulate over time. The update phase uses the Kinect measurements of the landmarks to correct the robot's pose. Experimental results show that, despite its low cost, the accuracy of the localization is comparable with most state-of-the-art odometry-based methods.

Index Terms— Extended Kalman filter, particle filter, robot localization, odometry, Kinect sensor

1. INTRODUCTION

Localization is one of the fundamental problems in mobile robot navigation and motion planning. In an indoor environment with a flat floor plan, localization is the problem of estimating the pose of a mobile robot, i.e. its position and orientation, given the map of the environment, the sensor readings, and the executed actions of the robot [1]. Odometry is one of the most popular solutions for wheeled mobile robots (WMRs): the encoders mounted on the wheels provide motion information that is used to update the mobile robot pose. However, odometric pose estimation accumulates errors without bound due to unequal wheel diameters, wheel slippage, wheel misalignment, and finite encoder resolution [2]. The experimental results presented in this paper, together with previous studies [3], concur that the largest contribution to odometry error comes from the rotation of the robot.

The odometry errors can be corrected by using an auxiliary sensor to observe landmarks in the environment. Different sensors, such as cameras [4], sonar sensors [5], and laser range finders [5], have been used to detect landmarks and obtain the required measurements. Many solutions have been proposed for the pose estimation of mobile robots, employing Kalman filtering [4], particle filtering [3, 5], and Markov localization [6].

In this paper, we propose an accurate and low-cost mobile robot localization method using odometry and a Kinect sensor. The odometry model used in this paper is capable of tracking arbitrary robot motion. The odometry and Kinect sensor measurements are fused using the extended Kalman filter (EKF) [4] and the particle filter (PF) [3] to provide more accurate localization results. The reliable detection of landmarks by applying the Hough transform, together with depth estimation using the Kinect sensor, contributes significantly to the improved performance of the robot localization. The experiments are carried out with the H20 mobile robot (see Fig. 1(a)) and the results are provided in order to assess the accuracy of the proposed method.

2. ODOMETRY SYSTEM

As shown in Fig. 1(b), the H20 WMR has two drive wheels and two castors. The two drive wheels are connected to DC motors with 15-bit quadrature encoders. These encoders are capable of monitoring the revolutions and steering angles of the drive wheels; their main advantage is their high resolution. The odometry is implemented using the readings from these encoders. The two output channels of each quadrature encoder indicate both the position and the direction of rotation.
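As a concrete illustration of how such encoder readings feed the odometry, the sketch below converts a signed quadrature count change into wheel travel distance. The counts-per-revolution mapping and the wheel diameter are illustrative assumptions, not values reported in the paper.

```python
import math

TICKS_PER_REV = 2 ** 15   # assumption: 15-bit encoder -> 2^15 counts per revolution
WHEEL_DIAMETER = 0.17     # meters; illustrative value, not from the paper

def ticks_to_distance(delta_ticks):
    """Convert a signed encoder count change into wheel travel distance.

    The two quadrature channels let the driver count up or down, so
    delta_ticks is positive for forward rotation and negative in reverse.
    """
    return math.pi * WHEEL_DIAMETER * (delta_ticks / TICKS_PER_REV)
```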
[Fig. 1. The robot design: (a) H20 WMR with a Kinect sensor; (b) odometry system, showing the drive wheels driven by DC motors with quadrature encoders, and the castors.]



[Fig. 2. The robot coordinate system and the position of the ith landmark: landmark Bi(xBi, yBi), current pose (xk, yk, θk), previous pose (xk−1, yk−1, θk−1), wheel displacements δlk and δrk, wheel base W, bearing αk, and range λk.]
2.1. Odometry Model

The system state vector xk = [xk yk θk]^T describes the pose of the WMR at the kth time step, where (xk, yk) is the position of the mid-axis point and θk is the heading angle, as illustrated in Fig. 2. Given the wheel base W, the change in orientation during the kth time step, δθk, can be calculated as

$$\delta\theta_k = \frac{\delta r_k - \delta l_k}{W}, \qquad (1)$$

where δlk and δrk are the distances traveled by the left and right wheels, respectively. The distance traveled by the mid-axis point in the kth time step is the average of the distances traveled by the two wheels:

$$\delta d_k = \frac{\delta l_k + \delta r_k}{2}. \qquad (2)$$

According to Fig. 2, the displacement of the WMR in the kth time step is

$$\delta x_k = \frac{\delta d_k}{\delta\theta_k}\left[\sin(\theta_{k-1} + \delta\theta_k) - \sin(\theta_{k-1})\right], \qquad (3)$$

$$\delta y_k = \frac{\delta d_k}{\delta\theta_k}\left[\cos(\theta_{k-1}) - \cos(\theta_{k-1} + \delta\theta_k)\right]. \qquad (4)$$

For a small value of δθk, the next state of the system can be predicted as

$$\hat{x}_k^- = \begin{bmatrix} \hat{x}_{k-1} + \delta d_k \cos(\hat{\theta}_{k-1} + \delta\theta_k/2) \\ \hat{y}_{k-1} + \delta d_k \sin(\hat{\theta}_{k-1} + \delta\theta_k/2) \\ \hat{\theta}_{k-1} + \delta\theta_k \end{bmatrix}. \qquad (5)$$

Here, x̂k−1 = [x̂k−1 ŷk−1 θ̂k−1]^T is the previous state estimate of the system.
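The prediction of Eqs. (1), (2), and (5) reduces to a short dead-reckoning routine. This is a minimal sketch assuming the per-step wheel displacements are already available (e.g. from the encoder conversion above); the function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def predict_pose(pose, d_left, d_right, wheel_base):
    """Dead-reckoning prediction following Eqs. (1), (2), and (5).

    pose: previous estimate (x, y, theta); d_left, d_right: wheel travel
    in this time step; wheel_base: W. All lengths in consistent units.
    """
    x, y, theta = pose
    d_theta = (d_right - d_left) / wheel_base   # Eq. (1)
    d_mid = (d_left + d_right) / 2.0            # Eq. (2)
    # Small-angle prediction, Eq. (5): advance along the mean heading.
    x += d_mid * np.cos(theta + d_theta / 2.0)
    y += d_mid * np.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)
```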
3. KINECT SENSOR

Kinect is a motion-sensing input device introduced by Microsoft for video gaming. It enables users to control the game with physical gestures and voice commands. This touch-free sensor combines an RGB camera and a depth sensor. Its operating range is 0.8 m to 3.5 m, with a resolution of 1 cm at a distance of 2 m. In this work, a Kinect sensor is mounted on the robot so that the optical axes of both the RGB and IR cameras are perpendicular to the wheel axle. The mounting height of the Kinect is arbitrary.

3.1. Kinect Calibration

The Kinect depth image is acquired using Light Coding technology [7]. The coded light is captured by the IR camera in order to produce the Kinect disparity matrix. The relationship between the Kinect disparity and the actual depth is given by

$$z = \frac{b f}{\tfrac{1}{8}(\mu - d_{kinect})}, \qquad (6)$$

where z is the actual depth, b is the distance between the IR camera and laser-based IR projector lenses, f is the focal length of the IR camera in pixels, μ is an offset value for a given Kinect device, and dkinect is the Kinect disparity, which provides 2048 levels of sensitivity in VGA resolution with 11-bit depth. The factor 1/8 accounts for the fact that dkinect is expressed in 1/8-pixel units. Unlike an ordinary stereo pair, the actual depth does not go to infinity at zero Kinect disparity.

[Fig. 3. Kinect depth calibration results: actual depth (cm) versus Kinect disparity, experimental and theoretical values.]

Fig. 3 illustrates the Kinect depth calibration results. The theoretical curve represents the results obtained using (6). The experimental values were obtained by manually measuring the physical distance from the Kinect sensor to the target object, which was moved away from the sensor in 10 cm steps while the corresponding Kinect disparity values were recorded. These calibration results may differ slightly from one sensor to another; therefore, each sensor should be calibrated separately before it is used for depth estimation.
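For illustration, Eq. (6) maps to a one-line conversion routine. The default constants below (baseline in cm, focal length in pixels, per-device disparity offset) are typical of community Kinect calibrations and are assumptions, not the values used in the paper; as noted above, each sensor should be calibrated individually.

```python
def disparity_to_depth(d_kinect, b=7.5, f=580.0, mu=1090.0):
    """Convert a raw 11-bit Kinect disparity value to depth via Eq. (6).

    b: IR camera/projector baseline (cm), f: IR focal length (pixels),
    mu: per-device offset -- all illustrative defaults. Returns depth in cm.
    """
    return (b * f) / ((mu - d_kinect) / 8.0)   # disparity is in 1/8-pixel units
```

With these defaults, a disparity of 800 corresponds to roughly 120 cm, consistent with the calibration range shown in Fig. 3.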

3.2. Landmark Detection with Kinect Sensor

In the measurement update phase, the WMR uses the landmarks around it to estimate its pose. In this work, circles of different colors are used as landmarks. The images and their depth values are acquired by the Kinect sensor. Hough transform filters are used to detect the landmarks (circles) in the RGB image frames, and the landmarks are distinguished from each other using the HSI color model.
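A circle-detection step of this kind can be sketched with OpenCV's circle Hough transform. The parameter values below are illustrative placeholders rather than the settings used in the paper.

```python
import cv2
import numpy as np

def detect_circle_landmarks(bgr_frame):
    """Find circular landmarks in a camera frame via the Hough transform."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)   # suppress speckle noise before voting
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                               param1=100, param2=30, minRadius=5, maxRadius=80)
    if circles is None:
        return []
    # Each detection is (column, row, radius) in pixels; classifying the
    # color at the center (e.g. in HSV space) identifies which landmark it is.
    return [tuple(c) for c in np.round(circles[0]).astype(int)]
```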
The azimuth angle αk with respect to the WMR x-axis and the distance λk to the ith landmark Bi(xBi, yBi) at time instant k can be calculated using the Kinect measurements and the focal length of the RGB camera. The corresponding measurement function is

$$h(\hat{x}_k^-) = \begin{bmatrix} \tan^{-1}\!\left(\dfrac{y_{B_i} - \hat{y}_k^-}{x_{B_i} - \hat{x}_k^-}\right) - \hat{\theta}_k^- \\ \sqrt{(x_{B_i} - \hat{x}_k^-)^2 + (y_{B_i} - \hat{y}_k^-)^2} \end{bmatrix}. \qquad (7)$$

The difference between the Kinect observation and the measurement function is used to update the odometry prediction in both the EKF and the PF.
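Eq. (7) translates directly into code. The sketch below uses atan2 in place of tan⁻¹ so the bearing stays well defined in all quadrants, and wraps the result to (−π, π]; the names and interface are illustrative.

```python
import numpy as np

def measurement_fn(pose_pred, landmark):
    """Predicted bearing and range to a landmark, following Eq. (7).

    pose_pred: predicted state (x, y, theta); landmark: (xB, yB).
    """
    x, y, theta = pose_pred
    dx, dy = landmark[0] - x, landmark[1] - y
    bearing = np.arctan2(dy, dx) - theta                  # azimuth alpha_k
    bearing = (bearing + np.pi) % (2 * np.pi) - np.pi     # wrap to (-pi, pi]
    return np.array([bearing, np.hypot(dx, dy)])          # [alpha_k, lambda_k]
```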
4. FILTER REALIZATION

In this work, the EKF and the PF are used for the pose estimation of the WMR. The initial state estimate is taken as x̂0 = 0, i.e. the initial vehicle pose defines the base coordinate frame. The prediction steps of the filters are executed using the odometry information. The measurement update step takes place only when a landmark is detected; whenever no landmark is detected, the predicted state is taken as the state estimate for the next iteration of the filter, i.e. x̂k = x̂k−.

4.1. Extended Kalman Filter

The EKF is applicable to non-linear systems in which the associated uncertainties are assumed to be Gaussian. Here, the system, measurement, and input noises are assumed to be zero mean and uncorrelated; hence the noise covariance matrices Qk, Rk, and Uk are diagonal. The system and measurement noises are also assumed to be time invariant, which makes Qk and Rk time invariant as well [4]:

$$Q_k = \begin{bmatrix} \sigma_{x_k}^2 & 0 & 0 \\ 0 & \sigma_{y_k}^2 & 0 \\ 0 & 0 & \sigma_{\theta_k}^2 \end{bmatrix}, \qquad (8)$$

$$R_k = \begin{bmatrix} \sigma_{\alpha_k}^2 & 0 \\ 0 & \sigma_{\lambda_k}^2 \end{bmatrix}. \qquad (9)$$

The system position noise variances in the (x, y) coordinates are denoted by σxk² and σyk², σθk² is the orientation noise variance, and σαk² and σλk² are the measurement noise variances.

The variance of the noise generated by each encoder can be determined as the sum of the variances of independent units of travel, because the encoder measurements are statistically independent and their errors accumulate over time. It is therefore reasonable to assume that the noise variance is proportional to the total distance traveled. The covariance of the input vector uk = [δlk δrk]^T is given by

$$U_k = \begin{bmatrix} \sigma_L^2 & 0 \\ 0 & \sigma_R^2 \end{bmatrix}, \qquad (10)$$

where $\sigma_L^2 = k_L^2 \sum_{i=0}^{k} |\delta l_i|$ and $\sigma_R^2 = k_R^2 \sum_{i=0}^{k} |\delta r_i|$. Here, $k_L^2$ and $k_R^2$ are positive constants [2].
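To make the realization concrete, here is a minimal single-landmark EKF iteration under the stated assumptions. For brevity it folds the input noise Uk into the state noise Q rather than propagating Uk through a separate input Jacobian; the Jacobians below follow from Eqs. (5) and (7), and all names are illustrative rather than the authors' implementation.

```python
import numpy as np

def ekf_step(x, P, u, W, Q, R, z=None, landmark=None):
    """One EKF iteration: odometry prediction, optional landmark update.

    x: state estimate [x, y, theta]; P: 3x3 covariance; u: (dl, dr);
    W: wheel base; z: measured [bearing, range] or None; landmark: (xB, yB).
    """
    dl, dr = u
    d_theta = (dr - dl) / W                       # Eq. (1)
    d_mid = (dl + dr) / 2.0                       # Eq. (2)
    th = x[2] + d_theta / 2.0
    # Prediction, Eq. (5), with the motion-model Jacobian F.
    x_pred = x + np.array([d_mid * np.cos(th), d_mid * np.sin(th), d_theta])
    F = np.array([[1.0, 0.0, -d_mid * np.sin(th)],
                  [0.0, 1.0,  d_mid * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    P_pred = F @ P @ F.T + Q
    if z is None:                                  # no landmark detected
        return x_pred, P_pred
    # Update, Eq. (7), with the measurement Jacobian H.
    dx, dy = landmark[0] - x_pred[0], landmark[1] - x_pred[1]
    q = dx * dx + dy * dy
    rng = np.sqrt(q)
    h = np.array([np.arctan2(dy, dx) - x_pred[2], rng])
    H = np.array([[ dy / q,   -dx / q,   -1.0],
                  [-dx / rng, -dy / rng,  0.0]])
    innov = z - h
    innov[0] = (innov[0] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing error
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    return x_pred + K @ innov, (np.eye(3) - K @ H) @ P_pred
```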
4.2. Particle Filter

As a solution to the Gaussian density assumption inherent in EKF localization, Monte Carlo localization (also known as PF localization) was introduced for mobile robots [5]. It can globally localize the robot thanks to its capability of representing multi-modal distributions, and it requires less memory and returns more accurate results than grid-based Markov localization [5]. The uncertainty in the state estimate is represented by a set of samples, known as particles, that are randomly drawn from the probability density function. If a landmark is detected, the particle weights are re-evaluated based on the Kinect measurements. In addition to the prediction and update steps common to both the EKF and the PF, the PF performs a resampling step, which avoids the depletion of the sample population after a few iterations. In this work, we have used linear-time resampling [8] to eliminate the near-zero-weight particles.
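A matching sketch of one PF iteration follows, using systematic resampling as one common linear-time scheme in the spirit of [8]. The noise parameters and interface are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng_gen = np.random.default_rng(0)

def pf_step(particles, weights, u, W, motion_std, R_diag, z=None, landmark=None):
    """One PF iteration over an (n, 3) array of [x, y, theta] particles."""
    dl, dr = u
    n = len(particles)
    # Prediction: propagate every particle with independently perturbed odometry.
    dl_n = dl + rng_gen.normal(0.0, motion_std[0], n)
    dr_n = dr + rng_gen.normal(0.0, motion_std[1], n)
    d_theta = (dr_n - dl_n) / W
    d_mid = (dl_n + dr_n) / 2.0
    th = particles[:, 2] + d_theta / 2.0
    particles[:, 0] += d_mid * np.cos(th)
    particles[:, 1] += d_mid * np.sin(th)
    particles[:, 2] += d_theta
    if z is not None:
        # Update: re-weight with a Gaussian bearing/range likelihood from Eq. (7).
        dx = landmark[0] - particles[:, 0]
        dy = landmark[1] - particles[:, 1]
        err_b = (z[0] - (np.arctan2(dy, dx) - particles[:, 2]) + np.pi) % (2 * np.pi) - np.pi
        err_r = z[1] - np.hypot(dx, dy)
        weights *= np.exp(-0.5 * (err_b**2 / R_diag[0] + err_r**2 / R_diag[1]))
        weights += 1e-300                       # guard against all-zero weights
        weights /= weights.sum()
    # Systematic (linear-time) resampling, then reset to uniform weights.
    positions = (rng_gen.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)
```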

5. RESULTS

The experiments were carried out in an indoor environment of 7 m × 2 m with 4 landmarks. The starting point and 5 way points were marked on the floor with exact measurements, and the WMR was maneuvered through the way points while measurements were taken.

Fig. 4 illustrates the statistical results obtained from 20 individual experiments. The ellipses around the mean values represent the standard deviations of the results. The mean values of the estimates are close to the way points and their standard deviations are very small, which confirms that the proposed methods are capable of providing accurate localization results. It should be noted that some pose errors, especially the errors in the y-direction, were due to deviations in maneuvering the WMR onto the way points; the actual errors are therefore expected to be slightly smaller. Since all methods under test are affected by such errors, the results remain qualitatively unchanged.

[Fig. 4. Standard deviations of the localization results: odometry measurements, EKF and PF estimates, and the ideal robot path through the starting point, way points, and landmarks; axes x (cm) and y (cm).]


The root mean squared error (RMSE) of localization at each way point is shown in Fig. 5. According to these results, the RMSE of the odometry measurements keeps increasing considerably with the distance traveled, which leads to an erroneous pose estimation of the WMR. The RMSE is reduced substantially by using the EKF and the PF with the Kinect sensor. This shows that the proposed estimation methods are robust and stable, with the PF results slightly more stable than those of the EKF.

[Fig. 5. Root mean squared localization error (cm) versus distance travelled (cm) for odometry, EKF, and PF.]

Table 1. Euclidean distance error at each way point (cm).

Way point      1        2        3        4        5
Odometry    7.3634   5.6828  18.6643  34.9313  60.7425
EKF         4.5655   5.1575  11.7987  16.5151  10.0205
PF          3.9186   4.4614   3.9726   1.4753   4.8919
The Euclidean distance error at each way point is given in Table 1. These values were obtained by calculating the Euclidean distance between the way points and the average of the measured/estimated robot positions. The errors in the estimation results are significantly lower than the errors in the odometry measurements, which suggests that the proposed methods using the Kinect sensor can perform accurate mobile robot localization. According to these results, the PF pose estimation with the Kinect sensor is more accurate than the EKF pose estimation.
6. CONCLUSION

Odometric pose estimation accumulates errors over time. More precise mobile robot localization can be achieved by reducing the uncertainty in the odometric pose estimate using a Kinect sensor to observe landmarks in the environment. The EKF and the PF can be used for the sensor fusion in pose estimation. The proposed measurement model for the Kinect sensor, together with the odometry model, is capable of providing an accurate system model for a WMR. A robust and accurate mobile robot localization method using an inexpensive sensor system was proposed, implemented, and tested on the H20 mobile robot.

7. REFERENCES

[1] J.-S. Gutmann and D. Fox, "An Experimental Comparison of Localization Methods Continued," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 2002, vol. 1, pp. 454–459.

[2] K. S. Chong and L. Kleeman, "Accurate Odometry and Error Modelling for a Mobile Robot," in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), 1997, vol. 4, pp. 2783–2788.

[3] I. Rekleitis, Cooperative Localization and Multi-robot Exploration, Ph.D. thesis, School of Computer Science, McGill University, Montreal, Canada, 2003.

[4] E. Kiriy and M. Buehler, "Three-State Extended Kalman Filter for Mobile Robot Localization," Tech. Rep., McGill University, Montreal, Canada, 2002.

[5] F. Dellaert, D. Fox, W. Burgard, and S. Thrun, "Monte Carlo Localization for Mobile Robots," in Proc. IEEE Int. Conf. Robotics and Automation, May 1999, vol. 2, pp. 1322–1328.

[6] D. Fox, W. Burgard, and S. Thrun, "Markov Localization for Mobile Robots in Dynamic Environments," Journal of Artificial Intelligence Research, vol. 11, pp. 391–427, 1999.

[7] PrimeSense Ltd., Palo Alto, CA, USA, The PrimeSensor Reference Design, 1.08 edition, 2010.

[8] J. Carpenter, P. Clifford, and P. Fearnhead, "Improved Particle Filter for Nonlinear Problems," IEE Proc. Radar, Sonar and Navigation, vol. 146, no. 1, pp. 2–7, Feb. 1999.

