Chapter 8. Programming Vision Sensors using ROS, OpenCV, and PCL

In the last chapter, we discussed interfacing sensors and actuators with ROS using I/O boards. In this chapter, we are going to discuss how to interface various vision sensors with ROS and program them using libraries such as OpenCV (Open Source Computer Vision) and PCL (Point Cloud Library). Vision is an important aspect of a robot for manipulating objects and navigating. There are lots of 2D/3D vision sensors available in the market, and most of them have an interface driver package in ROS. We will discuss how to interface new vision sensors to ROS and how to program them using OpenCV and PCL.

We will cover the following topics in this chapter:

  • Understanding ROS – OpenCV interfacing packages

  • Understanding ROS – PCL interfacing packages

  • Installing OpenCV and PCL interfaces in ROS

  • Interfacing USB webcams in ROS

  • Working with ROS camera calibration

  • Converting images between ROS and OpenCV using cv_bridge

  • Displaying images from...

Understanding ROS – OpenCV interfacing packages


OpenCV is one of the most popular open source real-time computer vision libraries and is mainly written in C/C++. OpenCV comes with a BSD license and is free for academic and commercial applications. OpenCV can be programmed using C/C++, Python, and Java, and it has multi-platform support, including Windows, Linux, OS X, Android, and iOS. OpenCV has tons of computer vision APIs that can be used for implementing computer vision applications. The web page of the OpenCV library is http://opencv.org/.

The OpenCV library is interfaced to ROS via a ROS stack called vision_opencv. vision_opencv consists of two important packages for interfacing OpenCV with ROS. They are:

  • cv_bridge: The cv_bridge package contains a library that provides APIs for converting the OpenCV image data type cv::Mat to the ROS image message called sensor_msgs/Image and vice versa. In short, it can act as a bridge between OpenCV and ROS. We can use OpenCV APIs to process the image and convert...
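
As a rough sketch of how cv_bridge is typically used (assuming the images come from the usb_cam driver on the /usb_cam/image_raw topic; the node and topic names here are only examples), a subscriber callback can convert the incoming ROS image into a cv::Mat and process it with OpenCV:

#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>

// Convert each incoming ROS image into a cv::Mat and run a simple OpenCV operation
void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
  cv_bridge::CvImagePtr cv_ptr =
      cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
  cv::Mat gray;
  cv::cvtColor(cv_ptr->image, gray, cv::COLOR_BGR2GRAY);  // example processing step
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "cv_bridge_sketch");  // hypothetical node name
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/usb_cam/image_raw", 1, imageCallback);
  ros::spin();
  return 0;
}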

Understanding ROS – PCL interfacing packages


Point cloud data can be defined as a group of data points in some coordinate system. In 3D, each point has X, Y, and Z coordinates. The PCL library is an open source project for 2D/3D image and point cloud processing.

Like OpenCV, it is under the BSD license and free for academic and commercial purposes. It is also cross-platform, with support for Linux, Windows, Mac OS, and Android/iOS.

The library consists of standard algorithms for filtering, segmentation, feature estimation, and so on, which are required to implement different point cloud applications. The main web page of the Point Cloud Library is http://pointclouds.org/.

Point cloud data can be acquired by sensors such as Kinect, Asus Xtion Pro, and Intel Real Sense. We can use this data for robotic applications such as object manipulation and grasping. PCL is tightly integrated into ROS for handling point cloud data from various sensors. The perception_pcl stack is the...
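
As a quick sketch of the conversion this interface provides (assuming the pcl_conversions package and a depth sensor publishing on /camera/depth/points; the topic and node names here are only examples), a node can turn an incoming sensor_msgs/PointCloud2 message into a PCL cloud for processing:

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>

// Convert the ROS PointCloud2 message into a PCL cloud and inspect it
void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  pcl::PointCloud<pcl::PointXYZ> cloud;
  pcl::fromROSMsg(*msg, cloud);
  ROS_INFO("Received a cloud with %lu points", (unsigned long)cloud.points.size());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "pcl_subscriber_sketch");  // hypothetical node name
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/camera/depth/points", 1, cloudCallback);
  ros::spin();
  return 0;
}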

Interfacing USB webcams in ROS


We can start by interfacing an ordinary webcam or a laptop camera with ROS. There is no single package specific to webcam-ROS interfacing; if the camera works in Ubuntu/Linux, it is likely supported by a ROS driver too. After plugging in the camera, check whether a /dev/videoX device file has been created, or test it with an application such as Cheese or VLC. The guide to check whether a webcam is supported on Ubuntu is available at https://help.ubuntu.com/community/Webcam.

We can find the video devices present on the system using the following command:

$ ls /dev/ | grep video

If you get an output such as video0, you can confirm that a USB camera is available for use.

After ensuring the webcam support in Ubuntu, we can install a ROS webcam driver called usb_cam using the following command:

  • In ROS Jade

    $ sudo apt-get install ros-jade-usb-cam
    
  • In ROS Indigo

    $ sudo apt-get install ros-indigo-usb-cam
    

We can install the latest package of usb_cam from the source...
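
Once the driver is installed, either from apt or from source, it can be started directly. The following is a minimal sketch, assuming the camera appears as /dev/video0 and supports the YUYV pixel format (adjust the parameters for your hardware); the node publishes the images on the /usb_cam/image_raw topic, which can then be viewed with image_view:

$ rosrun usb_cam usb_cam_node _video_device:=/dev/video0 _pixel_format:=yuyv
$ rosrun image_view image_view image:=/usb_cam/image_raw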

Working with ROS camera calibration


Like all sensors, cameras also need calibration to correct the distortion in the camera images caused by the camera's internal parameters and to find the world coordinates from the camera coordinates.

The primary parameters that cause image distortion are radial and tangential distortions. Using a camera calibration algorithm, we can model these parameters and also calculate the real-world coordinates from the camera coordinates by computing the camera calibration matrix, which contains the focal length and the principal point.

Camera calibration can be done using a classic black-and-white chessboard, a symmetrical circle pattern, or an asymmetrical circle pattern. Each pattern requires a different set of equations to obtain the calibration parameters. Using the calibration tools, we detect the patterns, and each detected pattern gives a new equation. When the calibration tool has enough detected patterns, it can compute the final...
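
For example, the ROS camera_calibration package provides a tool that runs against a live camera stream and collects such detections. The following is a sketch, assuming an 8x6 chessboard with 24 mm squares and the usb_cam topics (change the --size, --square, and topic arguments to match your setup):

$ rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.024 image:=/usb_cam/image_raw camera:=/usb_cam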

Interfacing Kinect and Asus Xtion Pro in ROS


The webcams that we have worked with until now can only provide 2D visual information about the surroundings. To get 3D information about the surroundings, we have to use 3D vision sensors or range finders, such as laser range finders. Some of the 3D vision sensors that we discuss in this chapter are Kinect, Asus Xtion Pro, Intel Real Sense, Velodyne, and the Hokuyo laser scanner.

Figure 7: Top: Kinect, Bottom: Asus Xtion Pro

The first two sensors we are going to discuss are Kinect and Asus Xtion Pro. Both of these devices need the OpenNI (Open Natural Interaction) driver library to operate on a Linux system. OpenNI acts as middleware between the 3D vision devices and the application software. The OpenNI driver is integrated into ROS, and we can install it using the following commands. These packages help to interface OpenNI-compliant devices such as Kinect and Asus Xtion Pro.

  • In Jade:

    $ sudo apt-get install ros-jade-openni-launch...
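
After installation, the driver can typically be brought up using its launch file, which starts the device and publishes the RGB, depth, and point cloud topics (for example, /camera/depth/points):

$ roslaunch openni_launch openni.launch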

Interfacing Intel Real Sense camera with ROS


One of the new 3D depth sensors from Intel is the Real Sense. The following link points to the ROS interface for the Intel Real Sense: https://github.com/BlazingForests/realsense_camera

Figure 11: Intel Real Sense

Before installing the ROS driver, we have to install the following packages for building the source code:

$ sudo apt-get install libudev-dev libv4l-dev

After installing them, clone the ROS package into the src folder of the catkin workspace and build the workspace:

$ cd ~/catkin_ws/src
$ git clone https://github.com/BlazingForests/realsense_camera.git
$ cd ~/catkin_ws
$ catkin_make

Launch the Real Sense camera driver and RViz using the following command:

$ roslaunch realsense_camera realsense_rviz.launch

Launch Real Sense camera driver only:

$ roslaunch realsense_camera realsense_camera.launch

Figure 12: Intel Real Sense view in RViz

Following are the topics generated by the Real Sense driver:

sensor_msgs::PointCloud2
/camera/depth/points                point cloud without RGB
/camera/depth_registered/points...

Interfacing Hokuyo Laser in ROS


We can interface different ranges of laser scanners in ROS. One of the popular laser scanners available in the market is the Hokuyo laser scanner (http://www.robotshop.com/en/hokuyo-utm-03lx-laser-scanning-rangefinder.html).

Figure 14: Different series of Hokuyo laser scanner

One of the commonly used Hokuyo laser scanner models is the UTM-30LX. This sensor is fast and accurate and is suitable for robotic applications. The device has a USB 2.0 interface for communication, a range of up to 30 meters with millimeter resolution, and a scan arc of about 270 degrees.

Figure 15: Hokuyo UTM-30LX

There is already a driver available in ROS for interfacing these scanners. One of the interfaces is called hokuyo_node (http://wiki.ros.org/hokuyo_node).

We can install this package using the following command:

  • In Jade:

    $ sudo apt-get install ros-jade-hokuyo-node
    
  • In Indigo:

    $ sudo apt-get install ros-indigo-hokuyo-node
    

When the device connects to the Ubuntu system, it will create a...
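
As a sketch of a typical session (assuming the scanner shows up as /dev/ttyACM0; the actual device file name can differ), we give the device read/write permission and start the driver, which publishes the laser data on the /scan topic:

$ sudo chmod a+rw /dev/ttyACM0
$ rosrun hokuyo_node hokuyo_node _port:=/dev/ttyACM0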

Interfacing Velodyne LIDAR in ROS


One of the trending areas in robotics is autonomous or driverless cars. An essential ingredient of these robots is a Light Detection and Ranging (LIDAR) sensor. One of the commonly used LIDARs is the Velodyne LIDAR. Velodyne LIDARs are used in Google's driverless cars and in most driverless-car research. There are three models of Velodyne LIDAR available in the market. Following are the three models and their diagrams:

Velodyne HDL-64E, Velodyne HDL-32E, and Velodyne VLP-16/Puck.

Figure 17: Different series of Velodyne

Velodyne LIDARs can be interfaced with ROS and can generate point cloud data from their raw data. The link for the Velodyne ROS package for the HDL-32E model is http://wiki.ros.org/velodyne.

We can install the velodyne driver in Ubuntu using the following command:

  • In Jade:

    $ sudo apt-get install ros-jade-velodyne
    
  • In Indigo:

    $ sudo apt-get install ros-indigo-velodyne
    

After installing these packages, connect the LIDAR power supply and connect Ethernet...
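
Once the network is configured, the point cloud conversion can be started from the velodyne_pointcloud package. The following is a sketch, assuming the HDL-32E launch file shipped with the package (the launch file name differs per model, for example VLP16_points.launch for the VLP-16):

$ roslaunch velodyne_pointcloud 32e_points.launch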

Working with point cloud data


We can handle point cloud data from Kinect or other 3D sensors to perform a wide variety of tasks, such as 3D object detection and recognition, obstacle avoidance, and 3D modeling. In this section, we will see some basic functionalities of the PCL library and its ROS interface. We will discuss the following examples:

  • How to publish a point cloud in ROS

  • How to subscribe to and process a point cloud

  • How to write point cloud data to a PCD file

  • How to read and publish a point cloud from a PCD file

How to publish a point cloud

In this example, we will see how to publish point cloud data using the sensor_msgs/PointCloud2 message. The code will use PCL APIs to handle and create the point cloud and to convert the PCL cloud data to the PointCloud2 message type. You can find the example code pcl_publisher.cpp in the chapter_8_codes/pcl_ros_tutorial/src folder.

#include <ros/ros.h>

// point cloud headers
#include <pcl/point_cloud.h>
//Header which...
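
A minimal sketch of such a publisher, assuming the pcl_conversions package (the pcl_output topic name and frame id used here are only examples, not the book's exact listing), can look like this:

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>
#include <cstdlib>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "pcl_publisher_sketch");  // hypothetical node name
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<sensor_msgs::PointCloud2>("pcl_output", 1);

  // Create a small synthetic PCL cloud with random points
  pcl::PointCloud<pcl::PointXYZ> cloud;
  cloud.width = 100;
  cloud.height = 1;
  cloud.points.resize(cloud.width * cloud.height);
  for (size_t i = 0; i < cloud.points.size(); ++i)
  {
    cloud.points[i].x = 1024 * rand() / (RAND_MAX + 1.0f);
    cloud.points[i].y = 1024 * rand() / (RAND_MAX + 1.0f);
    cloud.points[i].z = 1024 * rand() / (RAND_MAX + 1.0f);
  }

  // Convert the PCL cloud into a ROS PointCloud2 message and publish it periodically
  sensor_msgs::PointCloud2 output;
  pcl::toROSMsg(cloud, output);
  output.header.frame_id = "point_cloud";  // assumed frame id

  ros::Rate loop_rate(1);
  while (ros::ok())
  {
    output.header.stamp = ros::Time::now();
    pub.publish(output);
    ros::spinOnce();
    loop_rate.sleep();
  }
  return 0;
}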

Streaming webcam from Odroid using ROS


The ROS system is designed mainly for distributed computing. We can write and run ROS nodes on multiple machines and connect each node to a single master. For communication between two devices using ROS, the following rules should be followed:

  • Only a single ROS master should run; we can decide which machine should run the master

  • All machines should be configured to use the same master URI through ROS_MASTER_URI

  • Bi-directional connectivity should be ensured between all the pairs of machines

  • Each machine should have a name that can be identified by the other machines

In this section, we will see how to run the ROS master on the Odroid and stream the camera images to a PC. First, we will look at the setup required for distributed computing between the Odroid and the PC.

Connect the Odroid to the PC directly using a LAN cable and create an Ethernet hotspot, as we mentioned in the previous chapter. Find the IPs of the Odroid and the PC and set the following lines of command...
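
As a sketch of the typical network setup (the IP addresses below are placeholders; substitute the real addresses found in the previous step), the environment variables are set on each machine so that both point to the master running on the Odroid:

# On the Odroid (assumed IP 192.168.1.10), which runs the ROS master
$ export ROS_MASTER_URI=http://192.168.1.10:11311
$ export ROS_IP=192.168.1.10

# On the PC (assumed IP 192.168.1.11), pointing to the Odroid's master
$ export ROS_MASTER_URI=http://192.168.1.10:11311
$ export ROS_IP=192.168.1.11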

Questions


  1. What are the packages in the vision_opencv stack?

  2. What are the packages in the perception_pcl stack?

  3. What are the functions of cv_bridge?

  4. How do we convert PCL cloud to ROS message?

  5. How do we do distributed computing using ROS?

Summary


This chapter was about vision sensors and their programming in ROS. We saw the stacks used to interface cameras and 3D vision sensors, namely vision_opencv and perception_pcl, and looked at each package in these stacks and its functions. We saw how to interface a basic webcam and process its images using ROS cv_bridge. After discussing cv_bridge, we looked at interfacing various 3D vision sensors and laser scanners with ROS. We then learned how to process the data from these sensors using the PCL library and ROS. At the end of the chapter, we saw how to stream a camera from an embedded device called Odroid to the PC. In the next chapter, we will see the interfacing of robotic hardware in ROS.
