Unit 4
(Chapters 1 and 7)
Textbook: Learning Robotics using Python
• Step 4 - After adding the apt keys, we have to update the Ubuntu package list. The
following command will update the ROS package list along with the Ubuntu
package lists:
$ sudo apt-get update
Contd..
• Step 5 - After updating the ROS packages, we can install the packages. The
following command will install all the necessary packages, tools, and
libraries of ROS:
$ sudo apt-get install ros-kinetic-desktop-full
• Step 6 - We may need to install additional packages even after the desktop
full installation. Each additional installation will be mentioned in the
appropriate section. The desktop full install will take some time. After the
installation of ROS, you will almost be done. The next step is to initialize
rosdep, which enables you to easily install the system dependencies for
ROS source packages:
• $ sudo rosdep init
• $ rosdep update
• Step 7 - To access ROS's tools and commands in the current bash shell, we
can add the ROS environment variables to the .bashrc file. This will execute at
the beginning of each bash session. The following is the command to add the
ROS variables to .bashrc:
• echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
• The following command will execute the .bashrc script to apply the
changes in the current shell:
• source ~/.bashrc
• Step 8 - A useful tool for installing the dependencies of a package is
rosinstall. This tool has to be installed separately. It enables you to
easily download many source trees for ROS packages with one
command:
• $ sudo apt-get install python-rosinstall python-rosinstall-generator python-wstool build-essential
Contd..
• Install ROS as shown earlier.
• Introduction to catkin
• Catkin is the official build system of ROS
• Catkin combines CMake macros and Python scripts to provide extra
functionality on top of CMake's normal workflow.
• Catkin provides better distribution of packages, better cross-compilation, and
better portability than the rosbuild system.
• For more information, refer to wiki.ros.org/catkin
Creating a ROS Package
An example of communication between two ROS nodes
(a publisher and a subscriber) is shown below.
For the detailed Python code and execution steps, refer to slides 12-15.
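The publisher/subscriber communication mentioned above can be sketched as follows. This is a minimal, hedged illustration: the node names, the 'chatter' topic, and the make_message() helper are illustrative assumptions, not the code from slides 12-15, and running the nodes requires a ROS installation with a roscore started.

```python
#!/usr/bin/env python
# Minimal talker/listener sketch (hypothetical names, assumes ROS Kinetic/Melodic).

def make_message(count):
    # Pure helper that formats the payload the talker publishes.
    return "hello world %d" % count

def talker():
    # Publisher node: publishes a string on the 'chatter' topic at 10 Hz.
    import rospy
    from std_msgs.msg import String
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker', anonymous=True)
    rate = rospy.Rate(10)
    count = 0
    while not rospy.is_shutdown():
        pub.publish(make_message(count))
        count += 1
        rate.sleep()

def listener():
    # Subscriber node: logs every message received on 'chatter'.
    import rospy
    from std_msgs.msg import String
    rospy.init_node('listener', anonymous=True)
    rospy.Subscriber('chatter', String, lambda msg: rospy.loginfo(msg.data))
    rospy.spin()
```

Each function would normally live in its own node script; the rospy imports are placed inside the functions so the file can be inspected without a ROS installation.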
Introducing Gazebo
• Gazebo is a free and open source robot simulator in which we can test our
own algorithms, design robots, and test them in different simulated
environments.
• Gazebo can accurately and efficiently simulate complex robots in indoor and
outdoor environments.
• Gazebo combines a physics engine with high-quality graphics and
rendering.
• Features of Gazebo
• Dynamic simulation
• Gazebo can simulate the dynamics of a robot using physics engines such as Open
Dynamics Engine (ODE).
• Advanced 3D graphics: Gazebo provides high-quality rendering, lighting, shadows, and
texturing using the OGRE framework (http://www.ogre3d.org/).
• Sensor support: Gazebo supports a wide range of sensors, including laser range finders,
Kinect-style sensors, and 2D/3D cameras. We can also simulate sensor noise for
testing.
Features of Gazebo Contd..
• Plugins: We can develop custom plugins for the robot, sensors, and
environmental controls. Plugins can access Gazebo's API.
• Robot models: Gazebo provides models for popular robots, such as
PR2, Pioneer 2 DX, iRobot Create, and TurtleBot. We can also build
custom models of robots.
• TCP/IP transport: We can run a simulation on a remote machine and
interface with Gazebo through a socket-based message-passing service.
• Cloud simulation: We can run simulations on the cloud server using
the CloudSim framework (http://cloudsim.io/).
• Command-line tools: Extensive command-line tools are used to check
and log simulations.
Gazebo Installations
• For complete installation instructions, refer to:
• http://gazebosim.org/download
• The complete gazebo_ros_pkgs (Gazebo + ROS integration) can be installed
in ROS Kinetic using the following command:
• $ sudo apt-get install ros-kinetic-gazebo-ros-pkgs ros-kinetic-ros-control
• Gazebo runs two executables: the Gazebo server and the Gazebo
client.
• The Gazebo server executes the simulation process, and the Gazebo
client provides the Gazebo GUI.
• The Gazebo client and server run in parallel.
• Commands: $ roscore, then $ rosrun gazebo_ros gazebo
The Gazebo GUI
3D Vision sensors
• The main application of 3D vision sensors is the autonomous navigation
of robots.
• 3D sensors give depth information of an object w.r.t. the sensor axis, along
with x & y information.
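As a hedged illustration of what "depth plus x & y" means, the sketch below back-projects a pixel and its depth reading to a 3D point in the camera frame using the standard pinhole camera model. The intrinsic values (fx, fy, cx, cy) are hypothetical placeholders, not the calibration of any specific sensor listed here.

```python
import numpy as np

def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    # Back-project pixel (u, v) with a metric depth reading to a 3D point
    # (X, Y, Z) in the camera frame, using the pinhole camera model:
    #   X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example with assumed, roughly Kinect-like intrinsics:
fx = fy = 525.0
cx, cy = 319.5, 239.5
point = pixel_to_3d(cx, cy, 1.0, fx, fy, cx, cy)  # pixel at the principal point, 1 m away
```

A pixel at the principal point maps to a point straight ahead on the sensor axis; off-center pixels pick up x and y offsets proportional to depth.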
• Interfaces with vision libraries
• OpenCV – Open source computer vision library
• OpenNI – Open Natural Interaction
• PCL –Point Cloud Library
• List of Vision Sensors
• Pixy2/CMUcam5 – detects colored objects with high speed and accuracy, and can
be trained to track objects. The Pixy module has a CMOS sensor and an NXP
LPC4330 (http://www.nxp.com/), based on Arm Cortex-M4/M0 cores, for picture
processing.
Contd.. List of vision sensors
• Logitech C920 webcam – 2D vision camera
• It contains a CMOS sensor
• USB interface, but no inbuilt vision-processing capabilities
• 5-megapixel resolution and HD video capture
• Kinect 360 – 3D vision sensor
• Microsoft Xbox 360 game console
• RGB camera – captures 2D images- resolution of 640 x 480 at 30 Hz
• Depth camera – captures monochrome depth images – depth range of 0.8 m
to 4 m
• Microphone array and motors to tilt the sensor
• Some of the applications of Kinect are 3D motion capture, skeleton tracking,
face recognition, and voice recognition.
• Kinect can be interfaced with a PC using USB 2.0 and programmed using the
Kinect SDK, OpenNI, and OpenCV.
Contd.. List of vision sensor
• Intel RealSense D400 series
• The Intel RealSense D400 depth sensors are stereo cameras that
come with an IR projector to enhance the depth data (see
https://software.intel.com/en-us/realsense/d400 for more details).
• The more popular sensor models in the D400 series are
• D415 and D435.
• Each consists of a stereo camera pair, an RGB camera, and an IR
projector.
• The stereo camera pair computes the depth of the environment with
the help of the IR projector.
Intel RealSense D400 series
• A major feature of this depth camera is that it can work in both indoor
and outdoor environments.
• It can deliver a depth image stream at 1280 x 720 resolution at
90 fps.
• the RGB camera can deliver a resolution of up to 1920 x 1080. It has a
USB-C interface, which enables fast data transfer between the sensor
and the computer.
• It has a small form factor and is lightweight, which is ideal for a
robotics vision application.
Block diagram of the D400 series
Orbbec Astra depth sensor
• The Astra sensor comes in two models: Astra and Astra S. The main
difference between these two models is the depth range. The Astra
has a depth range of 0.6-8 m, whereas the
Astra S has a range of 0.4-2 m. The Astra S is best suited for 3D
scanning, whereas the Astra can be used in robotics applications. The
size and weight of the Astra are much lower than those of the Kinect.
• These two models can both deliver depth data and an RGB image at
640 x 480 resolution at 30 fps. They can be set to a higher resolution,
such as 1280 x 960, but this may reduce the frame rate.
• They also have the ability to track skeletons, like Kinect.
What is OpenCV?
• It is an open source, BSD-licensed computer vision library.
• It includes implementations of hundreds of computer vision
algorithms.
• Developed by Intel Russia's research center and supported by Itseez
(https://github.com/Itseez); Intel acquired Itseez in 2016.
• It is mainly written in C and C++, has good interfaces for Python, Java, and
MATLAB/Octave, and also has wrappers in other languages.
• OpenCV runs on all major platforms (such as Windows, Linux, macOS,
Android, FreeBSD, OpenBSD, iOS, and BlackBerry).
• In Ubuntu, OpenCV, the Python wrapper, and the ROS wrapper are
already installed when we install the ros-kinetic-desktop-full or
ros-melodic-desktop-full package.
OpenCV installation:
• In Kinetic:
• $ sudo apt-get install ros-kinetic-vision-opencv
• In Melodic:
• $ sudo apt-get install ros-melodic-vision-opencv
• To verify the installation, run the following in a Python interpreter:
• >>> import cv2
• >>> cv2.__version__
• If these commands succeed, OpenCV is installed on your system; the
version will be either 3.3.x or 3.2.x.
Main Applications of OpenCV are..
• Object detection
• Gesture recognition
• Human-computer interaction
• Mobile robotics
• Motion tracking
• Facial-recognition systems
Reading and displaying an image using the Python-
OpenCV interface
#!/usr/bin/env python
import numpy as np
import cv2
# Read the image as grayscale (the flag 0 loads a single-channel image)
img = cv2.imread('robot.jpg', 0)
# Display the image in a window until a key is pressed
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
• Save the preceding code as image_read.py and copy a JPG file and name it
robot.jpg.
• Execute the code using the following command:
• $python image_read.py
Contd..
• OpenCV is also integrated into ROS, mainly for image processing. The
vision_opencv ROS stack includes the complete OpenCV library and the
interface with ROS.
• The vision_opencv meta package consists of individual packages:
• cv_bridge: This contains the CvBridge class. This class converts ROS
image messages to the OpenCV image data type and vice versa.
• image_geometry: This contains a collection of methods to handle
image and pixel geometry.
Creating a ROS Package and work on images
• We will create a ROS package containing a node that subscribes to the
RGB and depth images, processes the RGB image to detect edges, and
displays all images after converting them to an image type equivalent
to OpenCV's.
• We can create a package called sample_opencv_pkg with the
following dependencies: sensor_msgs, cv_bridge, rospy, and
std_msgs. The sensor_msgs dependency defines ROS messages for
commonly used sensors, including cameras and scanning-laser
rangefinders. The cv_bridge dependency is the OpenCV interface of
ROS.
• The following command will create the ROS package with the
aforementioned dependencies:
• $ catkin_create_pkg sample_opencv_pkg sensor_msgs cv_bridge
rospy std_msgs
Displaying Kinect images using Python, ROS, and
cv_bridge
#It mainly involves importing rospy, sys, cv2, sensor_msgs, cv_bridge,
and the numpy module
import rospy
import sys
import cv2
from sensor_msgs.msg import Image, CameraInfo
from cv_bridge import CvBridge, CvBridgeError
from std_msgs.msg import String
import numpy as np
Class definition in Python: cvBridgeDemo

class cvBridgeDemo():
    def __init__(self):
        self.node_name = "cv_bridge_demo"
        # Initialize the ros node
        rospy.init_node(self.node_name)
        # What we do during shutdown
        rospy.on_shutdown(self.cleanup)
        # Create the cv_bridge object
        self.bridge = CvBridge()
        # Subscribe to the camera image and depth topics and
        # set the appropriate callbacks
        self.image_sub = rospy.Subscriber("/camera/rgb/image_raw",
                                          Image, self.image_callback)
        self.depth_sub = rospy.Subscriber("/camera/depth/image_raw",
                                          Image, self.depth_callback)
When a color image is received on the /camera/rgb/image_raw topic, it will call this function. This function will
process the color frame for edge detection and show the edge detected and the raw color image:
The following code gives the callback function for the depth image
from Kinect:

def depth_callback(self, ros_image):
    # Use cv_bridge() to convert the ROS image to OpenCV format
    try:
        # The depth image is a single-channel float32 image
        depth_image = self.bridge.imgmsg_to_cv2(ros_image, "32FC1")
    except CvBridgeError as e:
        print(e)
    # Convert the depth image to a Numpy array since most
    # cv2 functions require Numpy arrays.
    depth_array = np.array(depth_image, dtype=np.float32)
    # Normalize the depth image to fall between 0 (black) and 1 (white)
    cv2.normalize(depth_array, depth_array, 0, 1, cv2.NORM_MINMAX)
    # Process the depth image
    self.depth_display_image = self.process_depth_image(depth_array)
When a depth image is received on the /camera/depth/image_raw topic, ROS will call this function. This
function will show the raw depth image:
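The min-max normalization step in the depth callback (cv2.normalize with cv2.NORM_MINMAX) can be illustrated in plain NumPy. This sketch restates the underlying formula so the callback's behavior is clear; it is an illustration, not the slide's code:

```python
import numpy as np

def normalize_minmax(depth_array):
    # Scale values linearly so the minimum maps to 0.0 and the maximum
    # maps to 1.0 -- the same effect cv2.NORM_MINMAX has for range [0, 1].
    lo = depth_array.min()
    hi = depth_array.max()
    return (depth_array - lo) / (hi - lo)

# Depth readings in meters (values chosen to match Kinect's 0.8-4 m range):
depths = np.array([0.8, 2.0, 4.0], dtype=np.float32)
normalized = normalize_minmax(depths)  # nearest point -> 0.0, farthest -> 1.0
```

Mapping depth into [0, 1] turns the raw float32 depth readings into an image that can be displayed directly as grayscale.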
The following function, called process_image(), will convert the color
image to grayscale, then blur the image, and find the edges using the Canny
edge filter: