Chapter 4. Vision and Image Processing

Now that your tracked platform can move around, you'll want to add a more complex sensor to provide information to it: a webcam. With a webcam, your robot can see its environment. You'll learn how to use OpenCV, a powerful open source computer vision library, to add vision algorithms to your robotic platform.

In this chapter, you will be doing the following:

  • Connecting a webcam

  • Learning image processing using OpenCV

  • Discovering edge detection for barrier finding

  • Adding color and motion detection for targeting

Connecting a webcam to the BeagleBone Black


In order to enable computer vision, you'll need to connect a USB web camera to the USB port. Most standard USB webcams will work. This example uses a Logitech HD 720.

Here are the steps:

  1. Check whether your USB webcam is connected. You'll do this using a program called guvcview; install it by typing sudo apt-get install guvcview.

  2. Connect your USB camera and power up the BeagleBone Black. After the system has booted, go to the /dev directory and type ls. The listing should include an entry for the camera.

  3. The video0 device is the USB webcam. If you see its entry in the directory, then it means the system can access your camera.

Now you can use guvcview to see images from the camera. Since these are graphical images, you'll need either a monitor connected directly to the BeagleBone Black or a remote graphical connection through a VNC server. If you want to use the remote connection, make sure you start the server on the BeagleBone Black by typing vncserver via SSH. Then...

Using OpenCV


With your camera connected, you can access the impressive vision capabilities provided by the open source community. One of the most powerful of these is OpenCV.

You already installed OpenCV in Chapter 1, Preparing the BeagleBone Black. If you'd like a good overview on OpenCV and more documentation, see http://docs.opencv.org/.

Now you can try OpenCV. Python is the easiest language for simple tasks, so let's start with the Python examples. If you prefer the C examples, they are also available. In order to use the Python examples, you'll need the python-numpy library; install it by typing sudo apt-get install python-numpy. NumPy is needed to manipulate the matrices that OpenCV uses to hold images.
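
If you'd like to confirm that both libraries are in place before going further, a quick check from the Python interpreter (not part of the book's examples, just a minimal sanity test) looks like this; if both imports return without an error, you're ready to continue:

    python
    >>> import cv       # the legacy OpenCV Python binding used by these examples
    >>> import numpy    # the matrix library the OpenCV examples rely on
    >>> exit()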

Start with one of the Python examples. You can access the Python examples by typing cd /home/ubuntu/examples/python. There are a number of useful examples; you'll start with the most basic. It is called camera.py. To run this example, you'll either need to have a display connected...
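
The example itself is not reproduced here; in outline, a minimal version of what camera.py does, written against the legacy cv API the book uses (a sketch, not the exact file shipped with OpenCV), is:

    #!/usr/bin/env python
    import cv

    capture = cv.CaptureFromCAM(0)      # open the first webcam
    cv.NamedWindow("camera", 1)         # window for the live feed

    while True:
        img = cv.QueryFrame(capture)    # grab one frame from the webcam
        cv.ShowImage("camera", img)     # display it
        if cv.WaitKey(10) == 27:        # quit when the Esc key is pressed
            break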

Finding colored objects in your vision system


OpenCV can be used to track objects. As an example, let's build a system that tracks and follows a colored ball. OpenCV makes this activity amazingly simple; here are the steps:

  1. Create a directory to hold your image-based work. Once you have created it, go there and begin with your camera.py file.

  2. Now edit the file to add the color-detection code described next (a sketch of the complete, modified file appears after these steps).

    Let's look specifically at the changes you need to make to camera.py. The first three lines you add are as follows:

    cv.Smooth(img, img, cv.CV_BLUR, 3)                # smooth the frame in place to remove noise
    hue_img = cv.CreateImage(cv.GetSize(img), 8, 3)   # 8-bit, 3-channel image for the HSV copy
    cv.CvtColor(img, hue_img, cv.CV_BGR2HSV)          # convert from BGR to HSV color space
    

    The first line uses the OpenCV library to smooth the image, taking out any large deviations. The next two lines create a new image that stores the frame in values of Hue (color), Saturation, and Value (HSV) instead of the Red, Green, and Blue (RGB) pixel values of the original image. Converting to HSV focuses your processing more on the...
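
The complete listing is not reproduced here, but putting the pieces together, a plausible version of the modified camera.py looks like the following. This is a sketch only: the HSV threshold values are illustrative and need tuning for the color of your ball.

    #!/usr/bin/env python
    import cv

    capture = cv.CaptureFromCAM(0)
    cv.NamedWindow("Target", 1)

    while True:
        img = cv.QueryFrame(capture)

        # smooth the frame and convert it to HSV, as described above
        cv.Smooth(img, img, cv.CV_BLUR, 3)
        hue_img = cv.CreateImage(cv.GetSize(img), 8, 3)
        cv.CvtColor(img, hue_img, cv.CV_BGR2HSV)

        # keep only the pixels whose hue falls in the target range (illustrative values)
        threshold_img = cv.CreateImage(cv.GetSize(hue_img), 8, 1)
        cv.InRangeS(hue_img, (38, 120, 60), (75, 255, 255), threshold_img)

        # find the blobs of matching color and draw a box around each one
        storage = cv.CreateMemStorage(0)
        contour = cv.FindContours(threshold_img, storage,
                                  cv.CV_RETR_CCOMP, cv.CV_CHAIN_APPROX_SIMPLE)
        while contour:
            rect = cv.BoundingRect(list(contour))          # (x, y, width, height)
            pt1 = (rect[0], rect[1])                       # top-left corner
            pt2 = (rect[0] + rect[2], rect[1] + rect[3])   # bottom-right corner
            cv.Rectangle(img, pt1, pt2, cv.CV_RGB(255, 0, 0), 2)
            contour = contour.h_next()

        cv.ShowImage("Target", img)
        if cv.WaitKey(10) == 27:                           # Esc to quit
            break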

Following colored objects with your vision system


Now that your robot can sense colored objects, let's take this a step further and use the capability to actually guide your robot. If you look at the code from the last section, you'll notice two variables, pt1 and pt2, which hold the x and y coordinates of the color that your robot found. You can use these, together with the dcmotor.py program, to move the robot so that when the color reaches the edge of the viewing area, the robot turns to bring the colored object back into the middle of the viewing area.
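
The listing itself is not reproduced here; a sketch of the steering logic described below, assuming dcmotor.py accepts a simple direction argument (a hypothetical interface, so adjust it to match the version you wrote), might look like this:

    # at the top of the file
    import os

    # inside the main loop, after the bounding rectangle has been found
    left_side = rect[0]                 # x coordinate of the left edge of the color
    right_side = rect[0] + rect[2]      # x coordinate of the right edge of the color

    if left_side < 20:
        # the color has drifted to the left edge: turn that way to re-center it
        os.system("python dcmotor.py left")     # hypothetical argument
    elif right_side > 310:
        # the color has drifted to the right edge: turn that way to re-center it
        os.system("python dcmotor.py right")    # hypothetical argument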

When your robot finds a color, it also finds the x boundaries of that color, which come from the rect[0] and rect[2] values of the bounding rectangle. Each time through the loop, your program checks whether the left side is smaller than 20 or the right side is larger than 310. If either is true, your program calls the dcmotor.py program you wrote earlier to turn the robot to the left or right. As you move the target,...

Finding movement in your vision system


Another interesting capability of your tracked robot is the ability to detect motion. The code that uses the OpenCV library and your webcam to follow motion comes in two parts: the setup and the main processing loop.

It is useful to look at the code in detail. The first section is very similar to the setup code you used in the colored object section (a sketch of this setup follows the list):

  • #!/usr/bin/env python: This line allows the script to be executed directly, like a regular program.

  • import cv: This line imports the OpenCV library.

  • capture = cv.CaptureFromCAM(0): This opens a connection to the first webcam (device 0).

  • cv.NamedWindow("Target", 1): This creates a window on the display with the name as Target.

  • frame = cv.QueryFrame(capture): This grabs an image from the webcam and stores it in the variable frame.

  • frame_size = cv.GetSize(frame): This returns the size of the image as a tuple (a pair of numbers) of width and height.

  • color_image...
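
The listing itself is not reproduced here; putting the bulleted lines together, and hedging the remaining setup as a typical frame-differencing arrangement, the first part looks roughly like this:

    #!/usr/bin/env python
    import cv

    capture = cv.CaptureFromCAM(0)      # open the first webcam
    cv.NamedWindow("Target", 1)         # display window named Target

    frame = cv.QueryFrame(capture)      # grab one frame to learn its size
    frame_size = cv.GetSize(frame)      # (width, height) tuple

    # scratch images used later by the frame-differencing loop (a typical arrangement)
    color_image = cv.CreateImage(frame_size, 8, 3)                    # working color copy
    grey_image = cv.CreateImage(frame_size, cv.IPL_DEPTH_8U, 1)       # thresholded difference image
    moving_average = cv.CreateImage(frame_size, cv.IPL_DEPTH_32F, 3)  # running average of the scene

The second part (not shown) then repeatedly grabs frames and, in a typical version of this technique, compares each one to a running average of the scene, thresholds the difference, and computes the center of whatever has moved, storing it in center_point for the following section to use.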

Following movement with your robot


To follow the movement, you will do something very similar to the previous section on following color. Your program holds the center of the detected motion in the center_point variable, which stores its x and y values. You can then use it to move your robot whenever the motion is near either edge of the field of view. The first additions you'll need are the import time and import os lines at the top of the file, to include these libraries.

The main additions go at the bottom of the motion detection program.
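
The additions themselves are not reproduced here; a sketch of the logic they describe, reusing the same edge thresholds as the color example and the same hypothetical dcmotor.py direction arguments, might look like this:

    # center_point holds the (x, y) center of the detected motion
    if center_point is not None:
        if center_point[0] < 20:
            # motion is near the left edge: turn that way to follow it
            os.system("python dcmotor.py left")      # hypothetical argument
            time.sleep(1)                            # pause so the robot doesn't overreact
        elif center_point[0] > 310:
            # motion is near the right edge: turn that way to follow it
            os.system("python dcmotor.py right")     # hypothetical argument
            time.sleep(1)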

These lines should look almost exactly like the lines you added to move your robot when tracking color. The timing statements prevent your robot from overreacting to the movement. Now your robot should be able to follow movement within its field of view.

Summary


In this chapter, you added a video camera and video processing to your robot so that it can really see its environment. Now that your robot can move and fully sense its surroundings, feel free to explore, literally. In the next chapter, you'll start on a new robot: a quadruped that walks on four legs instead of rolling. You'll learn how to control multiple servos to make your robot walk, wave, and dance.
