
Unit IV

(Chapters 1 and 7)
Textbook: Learning Robotics using Python

Robot Operating System (ROS):

Introduction to OpenCV, OpenNI, PCL – Programming
Kinect with Python using ROS, OpenCV, OpenNI –
Point clouds using Kinect, ROS, OpenNI, PCL
Introduction to ROS
• ROS is a software framework used for creating robotic applications.
• The main aim of the ROS framework is to provide the capabilities that
you can use to create powerful robotics applications that can be
reused for other robots.
• ROS has a collection of software tools, libraries, and
packages that make robot software development easy.
• ROS stands for Robot Operating System, but it is not a real operating
system.
• Rather, it is a meta-operating system, which provides some features of a
real operating system.
Major features of ROS ..
• Message passing interface: This is the core feature of ROS, and it enables
interprocess communication.
• Using this message-passing capability, the ROS program can communicate
with its linked systems and exchange data.
• Hardware abstraction: ROS has a degree of abstraction that enables
developers to create robot-agnostic applications. These kinds of applications
can be used with any robot; developers do not need to worry about the
underlying robot hardware.
• Package management: ROS nodes are organized in packages called
ROS packages.
• ROS packages consist of source code, configuration files, build files,
and so on. There is a build system in ROS that helps to build these
packages.
• The package management in ROS makes ROS development more
systematic and organized.
Contd..
• Third-party library integration: The ROS framework is integrated with
many third-party libraries, such as OpenCV, PCL, OpenNI, and so on.
• This helps developers to create all kinds of applications in ROS.
• Low-level device control: When we work with robots, we may need to
work with low-level devices, such as those that control I/O pins or send
data through serial ports. This can also be done using ROS.
• Distributed computing: The amount of computation required to process
the data from robot sensors is very high.
• Using ROS, we can easily distribute the computation to a cluster of
computing nodes.
• This distributes the computing power and allows you to process the data
faster than you could using a single computer.
Contd..
• Code reuse: The main goal of ROS is code reuse. Code reuse enables the
growth of a good research and development community around the world.
• ROS executables are called nodes. These executables can be grouped into a
single entity called a ROS package. A group of packages is called a meta
package, and both packages and meta packages can be shared and
distributed.
• Language independence: The ROS framework can be programmed using
popular languages (such as Python, C++, and Lisp).
Nodes can be written in any of these languages and can communicate with
each other through ROS without any issues.
• Easy testing: ROS has a built-in unit/integration test framework called
rostest to test ROS packages.
Contd..
• Scaling: ROS can be scaled to perform complex computation in
robots.
• Free and open source: The source code of ROS is open and it's
absolutely free to use. The core part of ROS is licensed under a BSD
license, and it can be reused in commercial and closed source
products.
ROS Equation
• ROS is a combination of plumbing (message passing), tools,
capabilities, and ecosystem.
• There are powerful tools in ROS to debug and visualize the robot
data.
• ROS provides inbuilt robot capabilities, such as robot navigation, localization,
mapping, manipulation, and so on. These help to create powerful
robotics applications.

Refer to http://wiki.ros.org/ROS/Introduction for more information on ROS.
ROS Concepts
There are three main organizational levels in ROS:
• The ROS filesystem
• The ROS computation graph
• The ROS community
• Packages: ROS packages are the individual units of the ROS software
framework.
• A ROS package may contain source code, third-party libraries,
configuration files, and so on. ROS packages can be reused and shared.
• Package manifests: The manifest (package.xml) file has all the
details of the package, including its name, description, license, and,
more importantly, the dependencies of the package.
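• As an illustration (not from the text), a minimal package.xml manifest for a
hypothetical package named hello_world might look like this:

<?xml version="1.0"?>
<package format="2">
  <!-- Hypothetical example manifest; names and dependencies are placeholders -->
  <name>hello_world</name>
  <version>0.0.1</version>
  <description>Demo publisher and subscriber nodes</description>
  <maintainer email="user@example.com">Your Name</maintainer>
  <license>BSD</license>
  <buildtool_depend>catkin</buildtool_depend>
  <exec_depend>rospy</exec_depend>
  <exec_depend>std_msgs</exec_depend>
</package>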
The ROS Computation graph
• The ROS Computation Graph is the peer-to-peer network of ROS
processes that process data. The basic features of the ROS Computation
Graph are nodes, ROS Master, the parameter server, messages, topics,
services, and bags:
• Nodes: A ROS node is a process that uses ROS functionalities to
process data. A node basically performs computation. For example, a node can
process the laser scanner data to check whether there is any collision.
A ROS node is written with the help of a ROS client library (such as
roscpp or rospy).
• ROS Master: ROS nodes find each other using a
program called ROS Master. It provides name registration and
lookup for the rest of the Computation Graph. Without starting the
master, nodes cannot find each other or exchange messages.
Contd..
• Parameter server: ROS parameters are static values that are stored in a
global location called the parameter server. All nodes can access these
values from the parameter server. We can even set the scope of a parameter
as private or public, so that it is accessible to only one node or to all nodes.
• ROS topics: ROS nodes communicate with each other using a named bus
called a ROS topic. The data flows through the topic in the form of messages.
• Sending messages over a topic is called publishing, and receiving data
through a topic is called subscribing.
• Messages: A ROS message is a data type that can consist of primitive data types,
such as integers, floating-point numbers, and Booleans. ROS messages flow through
ROS topics. A topic can only send/receive one type of message at a time. We
can create our own message definitions and send them through topics.
• Services: ROS services are a request/reply form of communication. One node
provides a service, and a client node calls that service and waits for the reply.
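• As a minimal sketch (an assumed example, not from the text), the following
rospy node reads a parameter and offers a service using the standard
std_srvs/Trigger type:

#!/usr/bin/env python
import rospy
from std_srvs.srv import Trigger, TriggerResponse

def handle_trigger(req):
    # Reply with a success flag and a short status message
    return TriggerResponse(success=True, message="robot is ready")

if __name__ == '__main__':
    rospy.init_node('demo_service_node')
    # Read a private parameter (~rate) from the parameter server, default 10
    rate = rospy.get_param('~rate', 10)
    rospy.loginfo("rate parameter = %s", rate)
    # Advertise a service called /check_ready
    rospy.Service('check_ready', Trigger, handle_trigger)
    rospy.spin()

• The service could then be called from another node or from the command
line with rosservice call /check_ready.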
Contd..
• Bags: These are formats in which to save and play back the ROS
topics. ROS bags are an important tool to log the sensor data and the
processed data. These bags can be used later for testing our
algorithm offline.
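• For example, the rosbag command-line tool can record and replay topics
(the topic and file names below are just placeholders):

$ rosbag record -a -O sensor_log.bag      # record all topics into a bag file
$ rosbag record /hello_pub -O hello.bag   # record only a specific topic
$ rosbag info sensor_log.bag              # inspect the contents of a bag
$ rosbag play sensor_log.bag              # replay the recorded topics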
• Communication between ROS Nodes and ROS Master (figure)
Hello_world_publisher.py
• The hello_world_publisher.py node basically publishes a greeting message called hello
world to a topic called /hello_pub. The greeting message is published to the topic at a
rate of 10 Hz.
#!/usr/bin/env python
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('hello_pub', String, queue_size=10)
    rospy.init_node('hello_world_publisher', anonymous=True)
    r = rospy.Rate(10) # 10hz
    while not rospy.is_shutdown():
        str = "hello world %s" % rospy.get_time()
        rospy.loginfo(str)
        pub.publish(str)
        r.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
Hello_world_subscriber.py
• The subscriber code is as follows:
#!/usr/bin/env python
import rospy
from std_msgs.msg import String

def callback(data):
    rospy.loginfo(rospy.get_caller_id() + " I heard %s", data.data)

def listener():
    rospy.init_node('hello_world_subscriber', anonymous=True)
    rospy.Subscriber("hello_pub", String, callback)
    # spin() blocks the main thread from exiting until the node is shut down
    rospy.spin()

if __name__ == '__main__':
    listener()
Execution of two Python codes
• After saving the two Python nodes, we need to change their permissions to
executable using the chmod command:
• chmod +x hello_world_publisher.py
• chmod +x hello_world_subscriber.py
• Build the package using the catkin_make command:
• cd ~/catkin_ws
• catkin_make
• The following command adds the current ROS workspace path in all
terminals so that we can access the ROS packages inside this
workspace:
• echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
• source ~/.bashrc
Contd..
• First, we need to run roscore before starting the nodes. The roscore
command starts the ROS Master, which is needed for communication between
nodes. So, the first command is as follows:
• $ roscore
• After executing roscore, run each node using the following
commands:
• The following command will run the publisher:
• $ rosrun hello_world hello_world_publisher.py
• The following command will run the subscriber node. This node
subscribes to the hello_pub topic, as shown in the following code:
• $ rosrun hello_world hello_world_subscriber.py
Output of the subscriber and publisher nodes:
The ROS Community Level
• Distributions: A ROS distribution is a set of packages that come with a
specific version. The distribution that we are using in this book is ROS
Kinetic. Other distributions are also available, such as ROS Lunar and Indigo,
each with its own set of package versions that we can install. It is easier to
maintain the packages in each distribution, and in most cases the packages
inside a distribution will be relatively stable.
• Repositories: The online repositories are the locations where we keep our
packages. Normally, developers keep a set of similar packages called meta
packages in a repository. We can also keep an individual package in a single
repository. We can simply clone these repositories and build or reuse the
packages.
• The ROS wiki: The ROS wiki (http://wiki.ros.org) hosts the documentation
and tutorials for ROS packages.
Contd..
• Mailing lists: If you want to get updates regarding ROS, you can subscribe to
the ROS mailing list (http://lists.ros.org/mailman/listinfo/ros-users).
• You can also get the latest ROS news from ROS Discourse
(https://discourse.ros.org).
• ROS answers: This is very similar to the Stack Overflow website. You
can ask questions related to ROS in this portal, and you might get
support from developers across the world
(https://answers.ros.org/questions/).
Installing ROS on Ubuntu
• Distribution and release date:
• ROS Melodic Morenia: May 23, 2018
• ROS Lunar Loggerhead: May 23, 2017
• ROS Kinetic Kame: May 23, 2016
• ROS Indigo Igloo: July 22, 2014
• We use Ubuntu 16.04.3 LTS with ROS Kinetic Kame.
• The steps are as follows:
• Step 1: Configure your Ubuntu repositories to allow restricted, universe, and
multiverse downloadable files. We can configure this using Ubuntu's Software
& Updates tool. We can get this tool by simply searching in the Ubuntu Unity
search menu and ticking the checkboxes shown in the following screenshot:
Contd..
• Step 2. Set up your system to accept ROS packages from packages.ros.org. ROS
Kinetic is supported only on Ubuntu 15.10 and 16.04. The following command will
store packages.ros.org in Ubuntu's apt repository list:
$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" >
/etc/apt/sources.list.d/ros-latest.list'
• Step 3. Next, we have to add apt-keys. An apt-key is used to manage the list of
keys used by apt to authenticate the packages. Packages that have been
authenticated using these keys will be considered trusted.
The following command will add apt-keys for the ROS packages:
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
• Step 4. After adding the apt keys, we have to update the Ubuntu package list.
The following command will update the list of ROS and Ubuntu packages:
$ sudo apt-get update
Contd..
• Step 5 - After updating the ROS packages, we can install the packages. The
following command will install all the necessary packages, tools, and
libraries of ROS:
$ sudo apt-get install ros-kinetic-desktop-full
• Step 6 - We may need to install additional packages even after the desktop
full installation. Each additional installation will be mentioned in the
appropriate section. The desktop full install will take some time. After the
installation of ROS, you will almost be done. The next step is to initialize
rosdep, which enables you to easily install the system dependencies for
ROS source packages:
• $ sudo rosdep init
• $ rosdep update
• Step 7 - To access ROS's tools and commands on the current bash shell, we
can add ROS environmental variables to the .bashrc file. This will execute at
the beginning of each bash session. The following is a command to add the
ROS variable to .bashrc:
• echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
• The following command will execute the .bashrc script on the current
shell to apply the changes to the current session:
• source ~/.bashrc
• Step 8- A useful tool to install the dependency of a package is
rosinstall. This tool has to be installed separately. It enables you to
easily download many source trees for the ROS package with one
command:
• $ sudo apt-get install python-rosinstall python-rosinstall-generator python-
wstool build-essential
Contd..
• Install ROS as shown earlier.
Introduction to catkin
• Catkin is the official build system of ROS.
• Catkin combines CMake macros and Python scripts to provide some
functionality on top of CMake's normal workflow.
• Catkin provides better distribution of packages, better cross-compilation, and
better portability than the older rosbuild system.
• For more information, refer to wiki.ros.org/catkin
Creating a ROS Package
An example of communication between two ROS nodes acting as a publisher
and a subscriber is shown below; for the detailed Python code and execution
steps, refer to the earlier hello_world publisher and subscriber slides (slides 12-15).
A sketch of the commands used to create such a package is shown below.
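• Assuming a catkin workspace at ~/catkin_ws and a package name of
hello_world (placeholders chosen to match the earlier rosrun commands), the
package could be created and built as follows:

$ cd ~/catkin_ws/src
$ catkin_create_pkg hello_world rospy std_msgs
# Place hello_world_publisher.py and hello_world_subscriber.py inside the
# package (for example, in a scripts/ folder), then build the workspace
$ cd ~/catkin_ws
$ catkin_make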
Introducing Gazebo
• Gazebo is a free and open source robot simulator in which we can test our
own algorithms, design robots, and test robots in different simulated
environments.
• Gazebo can accurately and efficiently simulate complex robots in indoor and
outdoor environments.
• Gazebo is built on a physics engine and can also produce high-quality
graphics and rendering.
• Features of Gazebo
• Dynamic simulation
• Gazebo can simulate the dynamics of a robot using physics engines such as Open
Dynamics Engine (ODE).
• Advanced 3D graphics: Gazebo provides high-quality rendering, lighting, shadows, and
texturing using the OGRE framework (http://www.ogre3d.org/).
• Sensor support: Gazebo supports a wide range of sensors, including laser range finders,
Kinect-style sensors, 2D/3D cameras, and so on. We can also use it to simulate noise to
test audio sensors.
Features of Gazebo Contd..
• Plugins: We can develop custom plugins for the robot, sensors, and
environmental controls. Plugins can access Gazebo's API.
• Robot models: Gazebo provides models for popular robots, such as
PR2, Pioneer 2 DX, iRobot Create, and TurtleBot. We can also build
custom models of robots.
• TCP/IP transport: We can run a simulation on a remote machine and interface
with it through a socket-based message-passing service.
• Cloud simulation: We can run simulations on the cloud server using
the CloudSim framework (http://cloudsim.io/).
• Command-line tools: Extensive command-line tools are used to check
and log simulations.
Gazebo Installation
• To install Gazebo completely, see http://gazebosim.org/download
• The complete gazebo_ros_pkgs (Gazebo + ROS integration) can be installed in
ROS Kinetic using the following command:
• $ sudo apt-get install ros-kinetic-gazebo-ros-pkgs ros-kinetic-ros-control
• Gazebo runs two executables: the Gazebo server and the Gazebo client.
• The Gazebo server executes the simulation process, and the Gazebo
client runs the Gazebo GUI.
• The Gazebo client and server run in parallel.
• Commands: $ roscore, then $ rosrun gazebo_ros gazebo
The Gazebo GUI (screenshot)
3D Vision sensors
• The main application of 3D vision sensors is the autonomous navigation
of robots.
• 3D sensors give depth information of objects with respect to the sensor axis,
along with x and y information.
• Interfaces with vision libraries:
• OpenCV – Open Source Computer Vision library
• OpenNI – Open Natural Interaction
• PCL – Point Cloud Library
• List of vision sensors:
• Pixy2/CMUcam5 – detects colored objects with high speed and accuracy and can be
trained to track objects. The Pixy module has a CMOS sensor and an NXP
LPC4330 (http://www.nxp.com/) based on Arm Cortex-M4/M0 cores for image
processing.
Contd.. List of vision sensors
• Logitech C920 webcam – 2D vision camera
• It contains a CMOS sensor
• USB interface, but no inbuilt vision-processing capabilities
• 5-megapixel resolution and HD video
• Kinect 360 – 3D vision sensor
• Shipped with the Microsoft Xbox 360 game console
• RGB camera – captures 2D images at a resolution of 640 x 480 at 30 Hz
• Depth camera – captures monochrome depth images – depth range of 0.8 m
to 4 m
• Microphone array and motors to tilt the position
• Some of the applications of Kinect are 3D motion capture, skeleton tracking,
face recognition, and voice recognition.
• Kinect can be interfaced with a PC using USB 2.0
• It can be programmed using the Kinect SDK, OpenNI, and OpenCV.
Contd.. List of vision sensors
• Intel RealSense D400 series
• The Intel RealSense D400 depth sensors are stereo cameras that
come with an IR projector to enhance the depth data (see
https://software.intel.com/en-us/realsense/d400 for more details).
• The more popular sensor models in the D400 series are the
D415 and D435.
• Each consists of a stereo camera pair, an RGB camera, and an IR
projector.
• The stereo camera pair computes the depth of the environment with
the help of the IR projector.
Intel RealSense D400 series
• The major feature of this depth camera is that it can work in both
indoor and outdoor environments.
• It can deliver a depth image stream at 1280 x 720 resolution at 90 fps.
• The RGB camera can deliver a resolution of up to 1920 x 1080. It has a
USB-C interface, which enables fast data transfer between the sensor
and the computer.
• It has a small form factor and is lightweight, which is ideal for a
robotics vision application.
Block diagram D400 series
Orbbec Astra depth sensor
• The Astra sensor comes in two models: Astra and Astra S. The main
difference between the two models is the depth range. The Astra
has a depth range of 0.6-8 m, whereas the Astra S has a range of 0.4-2 m.
• The Astra S is best suited for 3D scanning, whereas the Astra can be used
in robotics applications. The size and weight of the Astra are much lower
than those of the Kinect.
• Both models can deliver depth data and an RGB image at 640 x 480
resolution at 30 fps. A higher resolution, such as 1280 x 960, is possible, but it
may reduce the frame rate.
• They also have the ability to track skeletons, like the Kinect.
What is OpenCV?
• It is an open source, BSD-licensed computer vision library.
• It includes implementations of hundreds of computer vision algorithms.
• It was developed by Intel's research center in Russia and later supported by
Itseez (https://github.com/Itseez); Intel acquired Itseez in 2016.
• It is mainly written in C and C++, has good interfaces for Python, Java, and
MATLAB/Octave, and also has wrappers in other languages.
• OpenCV runs on all platforms (such as Windows, Linux, Mac OS X,
Android, FreeBSD, OpenBSD, iOS, and BlackBerry).
• In Ubuntu, OpenCV, the Python wrapper, and the ROS wrapper are
already installed when we install the ros-kinetic-desktop-full or ros-
melodic-desktop-full package.
OpenCV installation:
• In Kinetic:
• $ sudo apt-get install ros-kinetic-vision-opencv
• In Melodic:
• $ sudo apt-get install ros-melodic-vision-opencv
• To verify the OpenCV installation, start a Python interpreter and run the
following commands:
• >>> import cv2
• >>> cv2.__version__
• If these commands run successfully, OpenCV is installed on your system.
The version might be either 3.3.x or 3.2.x.
Main Applications of OpenCV are..

• Object detection
• Gesture recognition
• Human-computer interaction
• Mobile robotics
• Motion tracking
• Facial-recognition systems
Reading and displaying an image using the Python-
OpenCV interface
#!/usr/bin/env python
import numpy as np
import cv2

# Read the image in grayscale mode (flag 0)
img = cv2.imread('robot.jpg', 0)
# Display the image until a key is pressed
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
• Save the preceding code as image_read.py and copy a JPG file and name it
robot.jpg.
• Execute the code using the following command:
• $python image_read.py
Contd..
• The output will be a grayscale image, as shown below (screenshot).
Capturing from the web camera
• The following code will capture an image using the webcam with the device name
/dev/video0 or /dev/video1
#!/usr/bin/env python
import numpy as np
import cv2

# Open the default camera device (/dev/video0)
cap = cv2.VideoCapture(0)
• The following section of code is looped to read image frames from the
VideoCapture object and shows each frame. It will quit when any key is pressed:
while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()
    # Display the resulting frame
    cv2.imshow('frame', frame)
    k = cv2.waitKey(30)
    if k > 0:
        break
# Release the camera and close the window when the loop exits
cap.release()
cv2.destroyAllWindows()
What is OpenNI?
• OpenNI is a multilanguage, cross-platform framework that defines
APIs in order to write applications using natural interaction (NI) (see
https://structure.io/openni for more information).
• Natural interaction refers to the way in which people naturally
communicate
• through gestures, expressions, and movements, and discover the
world by looking around and manipulating physical objects and
materials.
• OpenNI APIs are composed of a set of interfaces that are used to
write NI applications
The following figure shows a three-layered view of
the OpenNI library:

OpenNI is cross platform, and


has been successfully compiled
and deployed on Linux, Mac
OS X, and Windows.
Installing OpenNI in Ubuntu
• The following command will install the ROS-OpenNI library (which is
mainly supported by the Kinect Xbox 360 sensor) in Kinetic and
Melodic:
• $ sudo apt-get install ros-<version>-openni-launch
• The following command will install the ROS-OpenNI 2 library (which is
mainly supported by Asus Xtion Pro and Primesense Carmine):
• $ sudo apt-get install ros-<version>-openni2-launch
What is PCL?
• A point cloud is a set of data points in space that represent a 3D
object or an environment.
• A point cloud is generated from depth sensors, such as Kinect and
LIDAR.
• PCL (Point Cloud Library) is a large-scale, open project for 2D/3D image
and point-cloud processing.
• The PCL framework contains numerous algorithms for
filtering, feature estimation, surface reconstruction, registration, model
fitting, and segmentation.
• Using these methods, we can process the point cloud, extract key
descriptors to recognize objects in the world based on their geometric
appearance, create surfaces from the point clouds, and visualize
them.
Contd..
• It's open source, free for commercial and research use.
• PCL is cross platform and has been successfully compiled and
deployed
• PCL is already integrated into ROS.
• PCL is the 3D-processing backbone of ROS
• http://wiki.ros.org/pcl
How to launch the OpenNI driver?
• To get data from the Kinect sensor, we use the ROS OpenNI driver.
• The Kinect 360 sensor should be connected to the PC through USB, and the
system should detect the device.
• Launch the following command:
• $ roslaunch openni_launch openni.launch
• This will start the OpenNI driver and load all the nodelets needed to
convert the raw depth/RGB/IR streams into depth images, disparity
images, and point clouds.
• After starting the driver, we can list out the various topics published
by the driver using the following command:
• $ rostopic list
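• A few of the topics typically published by the driver (the exact names can
vary with the driver version) are, for example:

/camera/rgb/image_color
/camera/rgb/camera_info
/camera/depth/image_raw
/camera/depth/points
/camera/depth_registered/points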
Contd..
• We can view the RGB image using a ROS tool called image_view:
• $ rosrun image_view image_view image:=/camera/rgb/image_color
The ROS interface with OpenCV
• OpenCV is also integrated into ROS, mainly for image processing. The
vision_opencv ROS stack includes the complete OpenCV library and the
interface with ROS.
• The vision_opencv meta package consists of individual packages:
• cv_bridge: This contains the CvBridge class. This class converts ROS
image messages to the OpenCV image data type and vice versa.
• image_geometry: This contains a collection of methods to handle
image and pixel geometry.
Creating a ROS package to work on images
• We will create a ROS package containing a node that subscribes to
RGB and depth images, processes the RGB image to detect edges, and
displays all images after converting them to an OpenCV-equivalent image type.
• We can create a package called sample_opencv_pkg with the
following dependencies: sensor_msgs, cv_bridge, rospy, and
std_msgs. The sensor_msgs dependency defines ROS messages for
commonly used sensors, including cameras and scanning-laser
rangefinders. The cv_bridge dependency is the OpenCV interface of
ROS.
• The following command will create the ROS package with the
aforementioned dependencies:
• $ catkin_create_pkg sample_opencv_pkg sensor_msgs cv_bridge rospy std_msgs
Displaying Kinect images using Python, ROS, and
cv_bridge
# It mainly involves importing rospy, sys, cv2, sensor_msgs, cv_bridge,
# and the numpy module
import rospy
import sys
import cv2
from sensor_msgs.msg import Image, CameraInfo
from cv_bridge import CvBridge, CvBridgeError
from std_msgs.msg import String
import numpy as np
Class definition in Python: cvBridgeDemo

class cvBridgeDemo():
    def __init__(self):
        self.node_name = "cv_bridge_demo"
        # Initialize the ros node
        rospy.init_node(self.node_name)
        # What we do during shutdown
        rospy.on_shutdown(self.cleanup)
        # Create the cv_bridge object
        self.bridge = CvBridge()
        # Subscribe to the camera image and depth topics and set the
        # appropriate callbacks
        self.image_sub = rospy.Subscriber("/camera/rgb/image_raw",
                                          Image, self.image_callback)
        self.depth_sub = rospy.Subscriber("/camera/depth/image_raw",
                                          Image, self.depth_callback)
        # Callback executed when the timer times out
        rospy.Timer(rospy.Duration(0.03), self.show_img_cb)
        rospy.loginfo("Waiting for image topics...")
The callback function to visualize the actual RGB image, processed RGB
image, and depth image:

    def show_img_cb(self, event):
        try:
            cv2.namedWindow("RGB_Image", cv2.WINDOW_NORMAL)
            cv2.moveWindow("RGB_Image", 25, 75)
            cv2.namedWindow("Processed_Image", cv2.WINDOW_NORMAL)
            cv2.moveWindow("Processed_Image", 500, 75)
            # And one for the depth image
            cv2.namedWindow("Depth_Image", cv2.WINDOW_NORMAL)
            cv2.moveWindow("Depth_Image", 950, 75)
            cv2.imshow("RGB_Image", self.frame)
            cv2.imshow("Processed_Image", self.display_image)
            cv2.imshow("Depth_Image", self.depth_display_image)
            cv2.waitKey(3)
        except:
            pass
The following code gives a callback function for the color image from Kinect.
When a color image is received on the /camera/rgb/image_raw topic, this
function processes the color frame for edge detection and shows both the
edge-detected and the raw color images:

    def image_callback(self, ros_image):
        # Use cv_bridge() to convert the ROS image to OpenCV format
        try:
            self.frame = self.bridge.imgmsg_to_cv2(ros_image, "bgr8")
        except CvBridgeError, e:
            print e
            pass
        # Convert the image to a Numpy array since most cv2 functions
        # require Numpy arrays.
        frame = np.array(self.frame, dtype=np.uint8)
        # Process the frame using the process_image() function
        self.display_image = self.process_image(frame)
The following code gives a callback function for the depth image from Kinect.
When a depth image is received on the /camera/depth/image_raw topic, this
function shows the raw depth image:

    def depth_callback(self, ros_image):
        # Use cv_bridge() to convert the ROS image to OpenCV format
        try:
            # The depth image is a single-channel float32 image
            depth_image = self.bridge.imgmsg_to_cv2(ros_image, "32FC1")
        except CvBridgeError, e:
            print e
            pass
        # Convert the depth image to a Numpy array since most cv2
        # functions require Numpy arrays.
        depth_array = np.array(depth_image, dtype=np.float32)
        # Normalize the depth image to fall between 0 (black) and 1 (white)
        cv2.normalize(depth_array, depth_array, 0, 1, cv2.NORM_MINMAX)
        # Process the depth image
        self.depth_display_image = self.process_depth_image(depth_array)
The following function, process_image(), converts the color image to
grayscale, blurs the image, and finds the edges using the Canny edge filter:

    def process_image(self, frame):
        # Convert to grayscale
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Blur the image
        grey = cv2.blur(grey, (7, 7))
        # Compute edges using the Canny edge filter
        edges = cv2.Canny(grey, 15.0, 30.0)
        return edges
The process_depth_image() function simply returns the raw depth frame for
this demo, and the cleanup() function closes the image windows when the
node shuts down:

    def process_depth_image(self, frame):
        # Just return the raw depth frame for this demo
        return frame

    def cleanup(self):
        print "Shutting down vision node."
        cv2.destroyAllWindows()
The following code is the main() function. It initializes the cvBridgeDemo()
class and calls the rospy.spin() function:

def main(args):
    try:
        cvBridgeDemo()
        rospy.spin()
    except KeyboardInterrupt:
        print "Shutting down vision node."
        cv2.destroyAllWindows()

if __name__ == '__main__':
    main(sys.argv)

• Save the preceding code as cv_bridge_demo.py and change the
permission of the node using the following command. Nodes are only
visible to the rosrun command if we give them executable permission:
• $ chmod +x cv_bridge_demo.py
Start the Kinect driver:
• $ roslaunch openni_launch openni.launch
Run the node using the following command:
• $ rosrun sample_opencv_pkg cv_bridge_demo.py
Working with point clouds using Kinect, ROS,
OpenNI, and PCL
• A 3D point cloud is a way of representing a 3D environment and 3D
objects as a collection of points along the x, y, and z axes.
• We can get a point cloud from various sources:
• We can either create a point cloud by writing a program or generate
it from depth sensors or laser scanners.
• PCL supports the OpenNI 3D interfaces natively; thus, it can acquire
and process data from devices such as PrimeSense 3D cameras,
Microsoft Kinect, or Asus Xtion Pro.
• PCL is included in the ROS full desktop installation.
• Point clouds can be visualized using RViz, a data visualization tool in ROS.
Generate a point cloud from Kinect/Orbbec Astra
3D sensors
• Open a new terminal and launch the ROS-OpenNI driver, along with the
point cloud generator nodes, using the following command:
$ roslaunch openni_launch openni.launch
• We will use the RViz 3D visualization tool to view our point clouds.
• The following command will start the RViz tool:
$ rosrun rviz rviz
• The following screenshot shows the RViz point cloud data.
• The nearest objects are marked in red and the farthest objects are marked
in violet and blue.
• The objects in front of the Kinect are represented as a cylinder and cube.
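• As a rough sketch of how to view the cloud in RViz (the topic and frame
names below are assumptions that depend on the driver configuration):

$ rostopic hz /camera/depth/points   # check that the point cloud is published
# In RViz: set Fixed Frame to camera_link, click Add, choose the PointCloud2
# display, and set its Topic to /camera/depth/points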
Conversion of point cloud data to laser scan
data
• The depth image is processed and converted to the data equivalent of
a laser scanner using ROS's depthimage_to_laserscan package
• The main function of this package is to slice a section of the depth
image and convert it to an equivalent laser scan data type.
• The ROS sensor_msgs/LaserScan message type is used for publishing
the laser scan data.
• This depthimage_to_laserscan package will perform this conversion
and fake the laser scanner data.
• The laser scanner output can be viewed using RViz.
• In order to run the conversion, the following code is required in the launch
file to start the depthimage_to_laserscan conversion:
Code in launch file
<!-- Fake laser -->
<node pkg="nodelet" type="nodelet" name="laserscan_nodelet_manager"
      args="manager"/>
<node pkg="nodelet" type="nodelet" name="depthimage_to_laserscan"
      args="load depthimage_to_laserscan/DepthImageToLaserScanNodelet laserscan_nodelet_manager">
  <param name="scan_height" value="10"/>
  <param name="output_frame_id" value="/camera_depth_frame"/>
  <param name="range_min" value="0.45"/>
  <remap from="image" to="/camera/depth/image_raw"/>
  <remap from="scan" to="/scan"/>
</node>
The topic names can be changed as per the sensor used.
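• Assuming the snippet above is saved in a launch file (the package and file
names below are placeholders used only for illustration), the conversion could
be started and checked as follows:

$ roslaunch sample_opencv_pkg fake_laser.launch
$ rostopic echo /scan    # verify that laser scan data is being published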
Working with SLAM using ROS and Kinect
How do mobile robots navigate?
• The main aim of deploying vision sensors on a robot is to detect objects and
navigate the robot through an environment.
• Simultaneous Localization and Mapping (SLAM) is an algorithm that is used
in mobile robots to build up a map of an unknown environment, or update
a map within a known environment, while tracking the current location of the
robot.
• The two main challenges in mobile robotics are
mapping and localization.
• Maps are used to plan the robot's trajectory and to navigate through this
path.
• Using maps, the robot will get an idea about the environment.
• Mapping involves generating a profile of obstacles around the robot.
• Through mapping, the robot will understand what the world looks like.
Contd..
• Localization is the process of estimating the position of the robot relative
to the map we build.
• 2D/3D sensors provide the inputs to the SLAM algorithm.
• A SLAM library called OpenSLAM (http://openslam.org/gmapping.html) is
integrated with ROS as a package called gmapping.
• The gmapping package provides a node to perform laser-based SLAM
processing, called slam_gmapping. This can create a 2D map from the laser
and position data collected by the mobile robot.
• The slam_gmapping node needs two inputs:
• The odometry data of the robot (odometry is the use of data from motion
sensors to estimate the change in position over time; it is used by some legged
or wheeled robots to estimate their position relative to a starting location)
• The laser scan output from the laser range finder, which is mounted
horizontally on the robot
Contd..
• The slam_gmapping node subscribes to sensor_msgs/LaserScan
messages and nav_msgs/Odometry messages to build the map
(nav_msgs/OccupancyGrid).
• The generated map can be retrieved via a ROS topic or service.
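• As an illustrative sketch (the topic remapping and map file name are
assumptions, not from the text), the gmapping node could be run and the
resulting map saved as follows:

$ rosrun gmapping slam_gmapping scan:=/scan   # run laser-based SLAM
$ rosrun rviz rviz                            # visualize the map being built
$ rosrun map_server map_saver -f my_map       # save the map as my_map.pgm/.yaml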
