Simultaneous Localization and Mapping For Autonomous Robot Navigation
Abstract—Simultaneous localization and mapping (SLAM) is a challenging vision-based task that provides the ability for an autonomous robot to simultaneously map the environment and localize itself with respect to it. It is a joint estimation task and cannot be decoupled. The Robot Operating System (ROS) provides many algorithmic and communication nodes for working with autonomous robots. Several algorithms have been developed to solve this intractable problem in unknown real-time environments under different constraints. In this paper, we introduce a new computer vision-based technique for SLAM that uses an extended Kalman filter to efficiently generate the map of the environment and navigate in the environment autonomously. The proposed algorithm effectively generates the shortest collision-free path to reach the goal. The sensor used in this project is the Kinect sensor, a cost-effective sensor having a camera and a depth sensor that uses infrared rays. The performance of the autonomous navigation is also presented in terms of path-following accuracy and obstacle avoidance.

Keywords—Autonomous robot, motion planning, SLAM, ROS, Kalman filter

I. INTRODUCTION

The design and development of autonomous robots for different purposes can be made easier and less expensive by using predefined frameworks such as ROS. ROS is an open-source framework for robots that comprises libraries and packages [1, 2]. This paper focuses on the SLAM method for localizing the robot and building the map. The robot creates the map by using the point clouds generated by the Kinect sensor and localizes itself in the map with respect to the landmarks it recognizes during navigation. The approach presented in this paper is to develop an autonomous bot surveying a closed area that can automatically traverse a dynamic environment by correcting the previously generated map, if necessary. In practical terms, it can be deduced that the more a bot explores the world, the better it builds its map.

The methods used for SLAM can be broadly classified into three categories [3]. The covariance matrix in combination with the extended Kalman filter forms the first category. These methods have been widely accepted but suffer from scaling and convergence problems [4]. The Rao-Blackwellized particle filter-based method belongs to the second category, which suffers from correlation between the generated maps and the particles in the filter [5]. The third category involves scan-matching methods but is limited to odometry data [3, 6]. Considering the facilities and drawbacks of each of these methods, a combination of the extended Kalman filter and the particle filter is used in this paper.

Autonomous robots use some sort of visual sensor to sense and generate the map of the environment, and line sensors to navigate after that. Lasers, ultrasonic sensors, and cameras are often used for SLAM purposes [7-11]. A Kinect camera is used as the sensor in this work because its three vital components detect the motion of an object and create a virtual-reality image. Additionally, the depth-sensing facility of this camera provides unmatched sensing of real-time objects. The basic concept behind SLAM is to generate the map of the unknown environment by exploring the environment only once. While exploring the environment, the robot determines its location while also covering the entire space. Localization while mapping is briefly described below [12].

A. Localization

Localization is the estimation of the pose of the robot given how the environment looks, i.e., given a set of landmarks that represent a map. In the problem of localization, the map is known, but the current pose of the bot is unknown. In other words, localization is the process of finding a relation between the given coordinate system of a map and the actual pose of the bot.

B. Mapping

Mapping is the process of estimating a model of the environment using the sensor data, given the location of the robot at all times. In simple words, it is the process of building a map of the environment. In this problem, the actual pose of the bot is known but the surroundings are unknown; hence it is only concerned with the positions of landmarks.

C. SLAM

In simultaneous localization and mapping, the robot has to do both tasks, localization and mapping, at the same time. It is a challenging problem that requires a sophisticated computing process for simultaneously creating and updating the map of the environment while localizing the robot within it. The SLAM process is usually performed once in an unknown environment to make it known. When the environment is known, a better estimate of the robot's pose can be obtained, and if a good estimate of the pose is known, a better map can be built for further navigation from a given start position to goal positions. Because of this dependency between the robot's poses and the map, the robot estimates the locations of the landmarks while estimating its own position relative to the landmarks.

Now, if the robot deviates from its actual path, it can correct itself, given that it knows the positions of the landmarks. If the robot is not on the actual path even after the correction step, then another correction step is taken, after which the robot comes back onto its correct path. It is to be noted that the estimation of the locations of landmarks depends on the accuracy of the sensors (see Fig. 1). Here, the robot could not take the correction step because a false location of the landmark was estimated. The data provided by the sensors is full of uncertainties, and these uncertainties cannot be ignored. Small errors gradually accumulate and become larger in the end, so probabilistic approaches are necessary to appropriately model the uncertainties.
Fig. 1: Estimating the robot's poses given the landmarks.

Observations can be distance readings, telling the proximity of nearby objects, or camera images; in our case, the observations are the point clouds produced by the Kinect sensor.

Wanted:

(i) Map of the environment, $M$

(ii) Path of the robot, $x_{0:T} = \{x_1, x_2, x_3, \ldots, x_T\}$

The ultimate goal is to estimate the robot path and its map of the environment, which is expressed in (1):

$p(x_{0:T}, m \mid z_{1:T}, u_{1:T})$  (1)

A sample initial configuration is shown in Fig. 2, where the robot has a random initial location and orientation. The task of the robot is to explore the entire environment without repetition to generate the map while remembering its current position.

III. METHODOLOGY

As stated above, the SLAM problem is often solved by probabilistic approaches, including the extended Kalman filter (EKF) and the particle filter (PF). Both of these approaches have their limitations. In this paper, the combination of the EKF and PF has been used to explore the environment and finally traverse the shortest path. Given this, let us first briefly explain the functionality of the EKF and PF.

A. Bayes filters:

The probabilistic approach used in this paper includes the EKF and PF, and these filters are based on the recursive Bayes filter. The current location and orientation can be represented by the belief at time $t$ using Bayes' rule as follows [13]:

$bel(x_t) = p(x_t \mid z_{1:t}, u_{1:t})$  (2)

The updating process is recursive, that is, (i) a command is given to the system and (ii) a system observation is used to update the probability distribution. The Bayesian belief component can be divided into two parts, motion and observation. The motion model is used for the prediction, and the observation model is responsible for the correction at each time step. The Bayesian belief considers the previous time step, i.e., $t-1$, to estimate the position at present. When the robot traverses the generated trajectory, some path-following error is generated that needs to be corrected as shown in (3) above. There are different families of algorithms in which each algorithm works efficiently on a particular type of motion and sensor model, differentiated based on the framework used and on properties such as linear vs. non-linear motion and observation models.

B. Kalman filters:

These filters are based on Bayesian state estimation. The filters estimate the system state at the next time step. The filters also measure/correct and check the fitness of the estimated system state. The filters assume that the states follow a Gaussian distribution and that the motion and observation models are linear; they are used in the implementation of SLAM for state estimation [4]. The SLAM process based on system state prediction is shown in Fig. 3. Since most odometry-based models are non-linear and hence follow non-Gaussian distributions, the odometry-based models first need to be linearized.

The filter is an iterative estimate-and-correct process. First of all, it estimates the current position of the robot by using the previous position and the command given. It then predicts the environmental observations, i.e., how the environment is supposed to look. Then it senses the actual environment and corrects the estimate using the difference between the predicted and actual observations.
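To make the predict-correct recursion behind (2) concrete before the EKF equations that follow, here is a minimal sketch of one filter step. It is illustrative only, not the paper's implementation: it assumes a discrete state space, with the motion model supplied as a transition matrix and the observation model as a likelihood vector for the received measurement.

```python
import numpy as np

def bayes_filter_step(belief, transition, likelihood):
    """One predict-correct step of the recursive Bayes filter behind Eq. (2).

    belief     -- bel(x_{t-1}): probability vector over the discrete states
    transition -- motion model P(x_t | u_t, x_{t-1}) as a matrix for command u_t
    likelihood -- observation model P(z_t | x_t) evaluated for the received z_t
    """
    predicted = transition @ belief      # prediction step (motion model)
    corrected = likelihood * predicted   # correction step (observation model)
    return corrected / corrected.sum()   # normalize back to a distribution
```

Both the EKF and the PF described in this section are realizations of this same recursion, differing in how they represent the belief.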
where $H_t = \left. \partial h(x_t) / \partial x_t \right|_{x_t = \hat{x}_{t|t-1}}$ is the Jacobian matrix of $h(\cdot)$ at $\hat{x}_{t|t-1}$, and $Q_t$ is the covariance matrix of the system process noise.

c) Update process: Update the estimate of the system state $\hat{x}_t$ and the variance matrix $P_t$:

$\hat{x}_t = \hat{x}_{t|t-1} + K_t \, \tilde{y}_{t|t-1}$  (9)

$P_t = (I - K_t H_t) \, P_{t|t-1}$  (10)

where $K_t$ is the Kalman gain and $\tilde{y}_{t|t-1}$ is the innovation, i.e., the difference between the actual and predicted observations.
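As an illustration of how (9) and (10) are computed in practice (a minimal sketch, not the authors' code), the update step can be written with NumPy. The measurement-noise covariance R and the Kalman-gain formula are the standard textbook ones and are assumed here, since the corresponding equations fall outside the recovered text.

```python
import numpy as np

def ekf_update(x_pred, P_pred, z, h, H, R):
    """EKF correction step corresponding to Eqs. (9)-(10).

    x_pred, P_pred -- predicted state and covariance (x_hat_{t|t-1}, P_{t|t-1})
    z              -- actual measurement received from the sensor
    h              -- observation function; H is its Jacobian at x_pred
    R              -- measurement noise covariance (assumed, standard EKF)
    """
    y = z - h(x_pred)                    # innovation y_tilde_{t|t-1}
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain K_t
    x_new = x_pred + K @ y               # Eq. (9): corrected state estimate
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred  # Eq. (10): corrected covariance
    return x_new, P_new
```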
D. Particle filters:

A Kalman filter-based SLAM algorithm requires the probabilistic model to follow a Gaussian distribution. Most real-world systems are non-linear, and their probability distributions are non-Gaussian. Therefore, for non-linear models, the Rao-Blackwellized particle filter-based SLAM algorithm is used. The main idea of this method is to decompose the SLAM problem into path estimation and landmark estimation.
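Written in the notation of (1), this decomposition is the standard Rao-Blackwellized factorization of the SLAM posterior: a particle filter estimates the path, and, conditioned on each path hypothesis, the landmarks are estimated independently:

```latex
p(x_{0:t}, m \mid z_{1:t}, u_{1:t})
  = p(x_{0:t} \mid z_{1:t}, u_{1:t}) \; p(m \mid x_{0:t}, z_{1:t})
```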
Fig. 3: Kalman filter-based simultaneous localization and
mapping process for the robot in an unknown environment.
$Y_t^{[i]} = \left\langle x_t^{[i]}, \langle \mu_{1,t}^{[i]}, \Sigma_{1,t}^{[i]} \rangle, \ldots, \langle \mu_{N,t}^{[i]}, \Sigma_{N,t}^{[i]} \rangle \right\rangle$  (11)

where $i$ indexes the particle sequences, $x_t^{[i]}$ denotes the robot pose of the $i$th particle, and $\mu_{n,t}^{[i]}$ and $\Sigma_{n,t}^{[i]}$ correspond to the mean and variance of the $n$th landmark of the $i$th particle.
The entire process is shown in Fig. 4.
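As a sketch of how the particle set in (11) can be represented (an illustrative data structure, not the paper's code), each particle carries one pose hypothesis plus an independent Gaussian estimate per landmark:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Particle:
    """One particle Y_t^[i] from Eq. (11)."""
    pose: np.ndarray                                    # x_t^[i], e.g., (x, y, theta)
    landmark_means: list = field(default_factory=list)  # mu_{n,t}^[i] per landmark n
    landmark_covs: list = field(default_factory=list)   # Sigma_{n,t}^[i] per landmark n
    weight: float = 1.0                                 # importance weight for resampling

# The Rao-Blackwellized filter maintains a set of such particles,
# e.g., 100 particles all starting at the origin:
particles = [Particle(pose=np.zeros(3)) for _ in range(100)]
```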
IV. SIMULATION RESULTS
The experiments have been performed in the Gazebo
simulation environment. A scene with unknown boundaries
and unknown static obstacles inside the environment has been
developed as shown in Fig. 5.
The autonomous robot is initially localized at a random location inside the boundaries. The robot explores the environment while memorizing its position and generating a map of the environment. The various stages of the SLAM process are shown in Fig. 6. Finally, the complete trajectory planning from random starting positions to various goal positions is shown in Fig. 7. The method is able to compute the optimal paths while avoiding obstacles.
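The paper attributes the path generation to Euclidean distance (see the conclusion) but does not spell out the planner. As a hedged illustration, the sketch below runs A* search with a Euclidean-distance heuristic on a 4-connected occupancy grid; the grid, start, and goal arguments are assumed inputs, with the grid derived from the SLAM map.

```python
import heapq, math

def astar(grid, start, goal):
    """Shortest collision-free path on an occupancy grid: A* search with a
    Euclidean-distance heuristic. grid[r][c] == 1 marks an obstacle cell."""
    h = lambda p: math.dist(p, goal)               # Euclidean heuristic
    open_set = [(h(start), start)]                 # priority queue of (f, cell)
    g_best = {start: 0.0}                          # best known cost-to-come
    parent = {start: None}
    closed = set()
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:                            # reconstruct path by backtracking
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                ng = g_best[cur] + 1.0             # unit cost per grid move
                if ng < g_best.get(nb, float("inf")):
                    g_best[nb] = ng
                    parent[nb] = cur
                    heapq.heappush(open_set, (ng + h(nb), nb))
    return None                                    # no collision-free path exists
```

For example, astar(grid, (0, 0), (5, 7)) returns the sequence of grid cells of a shortest collision-free path, or None if the goal is unreachable.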
Fig. 7: Path planning and trajectory generation while avoiding obstacles are shown in (a), (b), and (c) for various starting positions.

V. CONCLUSION

A vision-based method based on the extended Kalman filter and the particle filter is introduced and implemented for the SLAM process. The robot is positioned randomly inside the unknown and bounded environment to start the SLAM process. The complete environment is explored once to generate the map of the environment. After that, the robot can generate and follow the shortest path between any two points of the environment. The path has been generated using Euclidean distance, and the unknown static obstacles are avoided while travelling. The Gazebo simulation environment has been used to configure the autonomous robot and the arena. The performance of the robot in the navigation phase is observed, and the results presented are quite encouraging. Further, the performance of the robot is also observed for the different maximum allowed velocities of the robot. In the future, we shall also deal with dynamic obstacles having random acceleration.
REFERENCES

[1] M. Quigley, K. Conley, B. P. Gerkey, et al., “ROS: an open-source Robot Operating System,” ICRA Workshop on Open Source Software, vol. 3, no. 2, 2009, pp. 1-5.
[2] A. Koubaa, Robot Operating System (ROS): The Complete Reference, vol. 2, Springer, 2017.
[3] C. Cadena, L. Carlone, H. Carrillo, et al., “Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age,” IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1309-1332, 2016.
[4] I. Ullah, X. Su, X. Zhang, and D. Choi, “Simultaneous localization and mapping based on Kalman filter and extended Kalman filter,” Wireless Communications and Mobile Computing, vol. 2020, Article ID 2138643, 2020. https://doi.org/10.1155/2020/2138643
[5] N. M. Yatim and N. Buniyamin, “Particle filter in simultaneous localization and mapping (SLAM) using differential drive mobile robot,” Jurnal Teknologi, vol. 77, no. 20, pp. 91-97, 2015.
[6] M. Luis, G. Santiago, B. Dolores, and M. M. Luisa, “Differential evolution solution to the SLAM problem,” Robotics and Autonomous Systems, vol. 57, no. 4, pp. 441-450, 2009.
[7] C. Debeunne and D. Vivet, “A review of visual-LiDAR fusion based simultaneous localization and mapping,” Sensors, vol. 20, no. 7, 2020. https://doi.org/10.3390/s20072068
[8] J. Sungyoung, K. Jungmin, and K. Sungshin, “Simultaneous localization and mapping of a wheel-based autonomous vehicle with ultrasonic sensors,” Artificial Life and Robotics, vol. 14, no. 2, pp. 186-190, 2009.
[9] G. Jiang, L. Yin, S. Jin, C. Tian, X. Ma, and Y. Ou, “A simultaneous localization and mapping (SLAM) framework for 2.5D map building based on low-cost LiDAR and vision fusion,” Applied Sciences, vol. 9, no. 10, 2105, 2019.
[10] J. Quenzel and S. Behnke, “Real-time multi-adaptive-resolution-surfel 6D LiDAR odometry using continuous-time trajectory optimization,” arXiv:2105.02010v1 [cs.RO], 2021.
[11] S. Banerjee, J. Thomason, and J. J. Corso, “The RobotSlang benchmark: Dialog-guided robot localization and navigation,” arXiv:2010.12639v1 [cs.RO], 2020.
[12] H. Durrant-Whyte and T. Bailey, “Simultaneous localization and mapping: Part I,” IEEE Robotics & Automation Magazine, vol. 13, no. 2, pp. 99-110, 2006.
[13] K. Madhira and J. Patel, “A quantitative study of mapping and localization algorithms on ROS based differential robot,” 2017 Nirma University International Conference on Engineering (NUiCONE), 2017. DOI: 10.1109/NUICONE.2017.8325609