Raspberry Pi Robotic Projects Sample Chapter
Richard Grimmett
Chapter 9, Using a GPS Receiver to Locate Your Robot, shows you how to use a GPS receiver so that your robot will know its location, since a mobile robot can otherwise get lost. Chapter 10, System Dynamics, focuses on how to bring together all the capabilities of the system to make complex robots. Chapter 11, By Land, By Sea, and By Air, teaches you how to add capabilities to robots that sail, fly, and even go under water.
To add vision to our projects, we'll need a Raspberry Pi with a LAN connection and a 5V power supply. We'll also need to add a USB webcam; try to find a recently manufactured one. You may have an older webcam sitting on your project shelf, but it will probably cause problems, as Linux may not have driver support for these devices, and the money you save will not be worth the frustration you might have later. You should stick with webcams from major players, such as Logitech or Creative Labs.
In most cases, you won't need to connect this device through a powered USB hub; however, if you encounter problems, for example, if the system does not recognize that your webcam is connected, realize that lack of USB power could be the problem.
Chapter 4
Look for video0, as this is the entry for your webcam. If you see it, the system knows your camera is there. Now, let's use guvcview to see the output of the camera. Since it will need to output some graphics, you either need to use a monitor connected to the board, as well as a keyboard and mouse, or you can use vncserver as described in Chapter 1, Getting Started with Raspberry Pi. If you are going to use vncserver, make sure you start the server on Raspberry Pi by typing vncserver via SSH. Then, start up VNC Viewer as described in Chapter 1, Getting Started with Raspberry Pi. Open a terminal window and type sudo guvcview. You should see something as shown in the following screenshot:
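As a quick check before launching guvcview, you can list the video device nodes from the command line. This is a hypothetical session; the exact device name can vary between systems:

```shell
# A connected, recognized webcam appears as a video device node;
# the first camera is normally /dev/video0
ls /dev/video* 2>/dev/null || echo "no video devices found"
```

If nothing is listed, unplug and replug the camera and check it with a powered USB hub before assuming a driver problem.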
The video window displays what the webcam sees, and the GUVCViewer Controls window controls the different characteristics of the camera. The default settings of the Logitech C110 camera work fine. However, if you get a black screen for the camera, you may need to adjust the settings. Select the GUVCViewer Controls window and the Video & Files tab. You will see a window where you can adjust the settings for your camera, as shown in the following screenshot:
The most important setting is Resolution. If you see a black screen, lower the resolution; this will often resolve the issue. This window will also tell you what resolutions are supported by your camera. Also, you can display the frame rate by checking the box to the right of the Frame Rate setting. Be aware, however, that if you are going through vncviewer, the refresh rate (how quickly the video window will update itself) will be much slower than if you're using Raspberry Pi and a monitor directly. Once you have the camera up and running and the desired resolution set, we can go on to download and install OpenCV.
You can connect more than one webcam to the system. Follow the same steps, but connect the cameras via a USB hub. List the devices in the /dev directory, and use guvcview to see the different images. One challenge, however, is that connecting too many cameras can overwhelm the bandwidth of the USB port.
If you haven't updated your operating system recently, it is a good idea to do this now before you start. You're going to download a number of new software packages, so it is good to make sure everything is up to date. Then install the following packages:

sudo apt-get install ffmpeg: This library provides a way to transcode audio and video streams. It was installed in the previous chapter; in case you skipped that part, you will have to refer to it now, as you need this package.

sudo apt-get install libavformat-dev: This library provides a way to code and decode audio and video streams.
sudo apt-get install libcv2.3 libcvaux2.3 libhighgui2.3: This command installs the basic OpenCV libraries. Note the version number in the command; it will almost certainly change as new versions of OpenCV become available. If 2.3 does not work, either try 2.4 or google for the latest version of OpenCV.

sudo apt-get install python-opencv: This is the Python development kit needed for OpenCV, as you are going to use Python.

sudo apt-get install opencv-doc: This command installs the OpenCV documentation and examples.

sudo apt-get install libcv-dev: This command installs the header files and static libraries needed to compile OpenCV code.

sudo apt-get install libcvaux-dev: This command installs more development tools for compiling OpenCV code.

sudo apt-get install libhighgui-dev: This is another package that provides header files and static libraries to compile OpenCV code.
Make sure you are in your home directory, and then type cp -r /usr/share/doc/opencv-doc/examples. This will copy all the examples to your home directory. Now you are ready to try out the OpenCV library. I prefer to use Python while programming simple tasks; hence, I'll show the Python examples. If you prefer the C examples, feel free to explore. In order to use the Python examples, you'll need one more library. So type sudo apt-get install python-numpy, as you will need this to manipulate the matrices that OpenCV uses to hold images. Now that you have these, you can try one of the Python examples. Switch to the directory with the Python examples by typing cd /home/pi/examples/python. In this directory, you will find a number of useful examples; we'll only look at the most basic, which is called camera.py. If camera.py is not present, you can create it by typing in the code shown in the next few pages. You can try running this example; however, to do this you'll either need to have a display connected to Raspberry Pi or you can do this over the vncserver connection. Bring up the LXTerminal window and type python camera.py. You should see something as shown in the following screenshot:
The camera window is quite large; you can change the resolution of the image to a lower one, which will make the update rate faster and the storage requirement for the image smaller. To do this, edit the camera.py file and add two lines, as shown in the following screenshot:
import time: This line imports the time library so that you can access the time functions.

cv.NamedWindow("camera", 1): This line creates a window that you will use to display the image.

capture = cv.CaptureFromCAM(0): This line creates a structure that knows how to capture images from the connected webcam.

cv.SetCaptureProperty(capture, 3, 360): This line sets the captured image width to 360 pixels; property 3 is the frame width, and a matching call with property 4 sets the frame height.

while contour is running, while True: Here you are creating a loop that will capture and display the image over and over until you press the Esc key.

img = cv.QueryFrame(capture): This line captures the image and stores it in the img variable.

cv.ShowImage("camera", img): This line maps the img variable to the camera window.

if cv.WaitKey(10) == 27: break: This line checks whether a key has been pressed, and if the pressed key is the Esc key, it executes the break, which stops the while loop so that the program reaches its end and stops. You need this statement in your code because it also signals OpenCV to display the image now.
Now run camera.py, and you should see the following screenshot:
You may want to play with the resolution to find the optimum settings for your application. Bigger images are great, as they give you a more detailed view of the world, but they also take up significantly more processing power. We'll play with this more as we actually ask our system to do some real image processing. Be careful when using vncserver to judge your system's performance, as it will significantly slow down the update rate. An image that is twice the size (width/height) will involve four times more processing. Your project can now see! You will use this vision capability to do a number of impressive tasks.
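The quadratic cost of resolution is easy to verify with a little arithmetic; doubling both dimensions quadruples the number of pixels each frame:

```python
# Per-frame work scales with width * height, so an image twice the
# size in each dimension carries four times as many pixels.
small = 320 * 240   # pixels per frame at 320x240
large = 640 * 480   # pixels per frame at 640x480
print(large // small)  # prints 4
```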
Let's look specifically at the following changes you need to make to camera.py:
cv.Smooth(img, img, cv.CV_BLUR, 3): You are going to use the OpenCV library first to smooth the image, taking out any large deviations.

The next line creates a default image that can hold the hue image you create in the statement that follows. That statement converts the smoothed image to one that stores each pixel as per its hue (color), saturation, and value (HSV) instead of the red, green, and blue (RGB) pixel values of the original image. Converting to HSV focuses our processing more on the color as opposed to the amount of light hitting it. You then create yet another image, this time a black-and-white image that is black for any pixel that is not between two certain color values.

cv.InRangeS(hue_img, (38, 120, 60), (75, 255, 255), threshold_img): The (38, 120, 60) and (75, 255, 255) parameters determine the color range. In this case, I have a green ball and I want to detect the color green. For a good tutorial on using hue to specify color, try http://www.tomjewett.com/colors/hsb.html. Also, http://www.shervinemami.info/colorConversion.html includes a program that you can use to determine your values by selecting a specific color.

cv.ShowImage("Colour Tracking", img): This shows a window with the original image in it.

cv.ShowImage("Threshold", threshold_img): This shows a window with the black-and-white threshold image.
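To see why a green ball lands inside the (38, 120, 60) to (75, 255, 255) band, you can work out green's hue with nothing but the standard library. OpenCV stores hue as degrees divided by two (0 to 179), with saturation and value scaled to 0 to 255. The in_range helper below is an illustrative per-pixel sketch of what cv.InRangeS does, not OpenCV's actual implementation, and the sample pixel values are assumptions:

```python
import colorsys

# colorsys returns hue in [0, 1); OpenCV uses degrees / 2, i.e. 0-179.
h, s, v = colorsys.rgb_to_hsv(0.0, 1.0, 0.0)  # pure green in RGB
opencv_hue = round(h * 360 / 2)
print(opencv_hue)  # prints 60, comfortably inside the 38-75 hue band

def in_range(pixel, lower, upper):
    """Per-pixel sketch of cv.InRangeS: True if every HSV channel
    lies inside the [lower, upper] band."""
    return all(lo <= c <= hi for c, lo, hi in zip(pixel, lower, upper))

# A bright green pixel passes the band used in the text...
print(in_range((60, 200, 200), (38, 120, 60), (75, 255, 255)))  # True
# ...while a red pixel (hue near 0) does not.
print(in_range((0, 200, 200), (38, 120, 60), (75, 255, 255)))   # False
```

This is also a handy way to sanity-check new threshold values before editing camera.py.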
Now run the program. You'll need to either have a display, keyboard, and mouse connected to the board, or you can run it remotely using vncserver. Run the program by typing sudo python ./camera.py. You should see a single black image, but move this window and you will expose the original image window as well. Now take your target (I used my green ball) and move it into the frame. You should see something as shown in the following screenshot:
Notice the white pixels in our threshold image showing where the ball is located. You can add more OpenCV code that gives the actual location of the ball. You can even draw a rectangle around the ball in the original image as an indicator of its location. Edit the camera.py file to look as follows:
Start by editing just below the line cv.InRangeS(hue_img, (38, 120, 60), (75, 255, 255), threshold_img). The lines used are as follows:
storage = cv.CreateMemStorage(0): This line creates some memory for OpenCV to use while finding the contours.

contour = cv.FindContours(threshold_img, storage, cv.CV_RETR_CCOMP, cv.CV_CHAIN_APPROX_SIMPLE): This finds all the areas on your image that are within the threshold. There could be more than one, so you may want to capture them all.
points = []: This creates an array for you to hold all the different possible color points.
while contour: Now add a while loop that will let you step through all the possible contours. By the way, it is important to note that if there is another, larger green blob in the background, you will "find" that location instead. Just to keep this simple, we'll assume your green ball is unique.
The next line gets the bounding rectangle for each area of color and stores it in rect; the rectangle is defined by the corners of a box drawn around the "blob" of color.
contour = contour.h_next(): This will prepare you for the next contour, if one exists.
size = (rect[2] * rect[3]): This calculates the area of the rectangle you are evaluating. The rect data structure contains four integers: 0 and 1 hold the pixel coordinates of the upper-left corner of the box, and 2 and 3 hold its width and height.

if size > 100: Here you check whether the area is big enough to be of concern. The value 100 tells your program not to worry about any rectangles that are less than 100 pixels in area. You may want to vary this based on the application.
pt1 = (rect[0], rect[1]): Define a pt1 variable and set its two values to the x and y coordinates of the upper-left corner of the blob's rectangular location.

pt2 = (rect[0] + rect[2], rect[1] + rect[3]): Define a pt2 variable and set its two values to the x and y coordinates of the opposite, lower-right corner of the blob's rectangular location.
cv.Rectangle(img, pt1, pt2, (38, 160, 60)): Now you draw a rectangle on your original image at the location you just identified.
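The contour-and-rectangle logic above can be sketched without OpenCV at all. The following pure-Python bounding_box helper is an illustrative stand-in for the cv.FindContours plus bounding-rectangle step (it treats all white pixels as a single blob, unlike OpenCV, which separates contours), using the same (x, y, w, h) rect layout and area check described above:

```python
def bounding_box(mask):
    """Smallest (x, y, w, h) rectangle covering every True pixel of a
    binary mask, or None if the mask is empty. A single-blob stand-in
    for the contour and bounding-rectangle step."""
    coords = [(x, y) for y, row in enumerate(mask)
              for x, val in enumerate(row) if val]
    if not coords:
        return None
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)

# Tiny threshold image: a 3x2 white blob on a black background.
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
rect = bounding_box(mask)
print(rect)               # prints (1, 1, 3, 2)
size = rect[2] * rect[3]  # the same width * height area check as above
print(size > 100)         # prints False: far too small to be the ball
```

Running the same check on a real threshold image is just a matter of feeding in a larger mask; the size > 100 filter then discards specks of noise exactly as in camera.py.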
Now that the code is ready, you can run it. You should see something as shown in the following screenshot:
You can now track your object. With the code in hand, you can modify the color or add more colors. You also have the location of your object, so later you can attempt to follow or manipulate it. OpenCV is an amazingly powerful library of functions; you can do all sorts of incredible things with just a few lines of code. Another common feature you may want to add to your projects is motion detection. If you'd like to try, there are several good tutorials; look at the following links:
http://derek.simkowiak.net/motion-tracking-with-python/
http://stackoverflow.com/questions/3374828/how-do-i-track-motion-using-opencv-in-python
https://www.youtube.com/watch?v=8QouvYMfmQo
https://github.com/RobinDavid/Motion-detection-OpenCV
Having a webcam connected to your system enables all kinds of complex vision capabilities. For example, you can get 3D vision with OpenCV using two cameras. There are several good places to start; the samples/cpp directory that came with OpenCV includes a sample, stereo_match.cpp. For more information, refer to http://code.google.com/p/opencvstereovision/source/checkout.
Summary
As we learned in this chapter, your projects can now speak and see! You can issue commands, and your projects can respond to changes in the physical environment sensed by the webcam. In the next chapter, you will add mobility using motors, servos, and other methods.