
You're reading from Mastering ROS for Robotics Programming, Third Edition

Product type: Book
Published in: Oct 2021
Publisher: Packt
ISBN-13: 9781801071024

Authors (2):

Lentin Joseph

Lentin Joseph is an author and robotics entrepreneur from India. He runs a robotics software company called Qbotics Labs in India. He has 7 years of experience in the robotics domain, primarily in ROS, OpenCV, and PCL. He has authored four books on ROS, namely Learning Robotics using Python, Mastering ROS for Robotics Programming, ROS Robotics Projects, and Robot Operating System for Absolute Beginners. He is currently pursuing his master's in robotics in India and is also doing research at the Robotics Institute, CMU, USA.

Jonathan Cacace

Jonathan Cacace was born in Naples, Italy, on December 13, 1987. He received his master's degree in computer science and a Ph.D. in information and automation engineering from the University of Naples Federico II, where he is currently an Assistant Professor. He is also a member of the PRISMA Lab, where he works on research projects in industrial and service robotics and has developed several ROS-based applications integrating robot perception and control.

Chapter 10: Programming Vision Sensors Using ROS, OpenCV, and PCL

In the previous chapter, we discussed how to interface sensors and actuators with ROS using I/O boards. In this chapter, we are going to discuss how to interface various vision sensors with ROS and program them using libraries such as Open Source Computer Vision (OpenCV) and the Point Cloud Library (PCL). Vision is an important capability for any robot that must manipulate objects or navigate its environment. Many 2D/3D vision sensors are available on the market, and most of them have driver packages to interface with ROS. First, we will discuss how to interface vision sensors with ROS and how to program them using OpenCV and PCL. Finally, we will discuss how to use fiducial marker libraries to develop vision-based robotic applications.

We will cover the following topics in this chapter:

  • Understanding ROS – OpenCV interfacing packages
  • Understanding ROS – PCL interfacing...

Technical requirements

To follow this chapter, you will need the following hardware and software setup:

  • Hardware: A good laptop, a webcam supported in Linux, and, optionally, a depth camera and a LIDAR.
  • Software: Ubuntu 20.04 with ROS Noetic.

Let's start by configuring our system with the necessary ROS packages and libraries for working with robotic vision applications using ROS. We will provide a brief introduction to the OpenCV library and its interfacing package in ROS in the next section.

The reference code for this chapter can be downloaded from the following Git repository: https://github.com/PacktPublishing/Mastering-ROS-for-Robotics-Programming-Third-edition/tree/main/Chapter10

You can view this chapter's code in action here: https://bit.ly/3yZYao1.

Understanding ROS – OpenCV interfacing packages

OpenCV is one of the most popular open source, real-time computer vision libraries, and it is mainly written in C/C++. OpenCV comes with a BSD license and is free for both academic and commercial applications. OpenCV can be programmed using C/C++, Python, and Java, and it has multi-platform support, such as Windows, Linux, Mac OS X, Android, and iOS. OpenCV has tons of computer vision APIs that can be used for implementing computer vision applications. The web page of the OpenCV library can be found at https://opencv.org/.

The OpenCV library is interfaced with ROS via a ROS stack called vision_opencv. vision_opencv consists of two important packages for interfacing OpenCV with ROS, as follows:

  • cv_bridge: The cv_bridge package contains a library that provides APIs for converting the OpenCV image data type, cv::Mat, into a ROS image message called sensor_msgs/Image and vice versa. In short, it can act as...
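To make the conversion concrete, here is a pure-Python sketch of the round trip cv_bridge performs. This is not the cv_bridge API itself (in a real node you would call its `imgmsg_to_cv2()` and `cv2_to_imgmsg()` methods); the message here is a plain dict mimicking the relevant `sensor_msgs/Image` fields, and only the simple `mono8` (8-bit grayscale) encoding is handled:

```python
def imgmsg_to_matrix(msg):
    """Unpack a sensor_msgs/Image-like dict (mono8 encoding) into a
    row-major list of pixel rows -- the direction cv_bridge's
    imgmsg_to_cv2() handles for us in a real ROS node."""
    assert msg["encoding"] == "mono8"
    h, w, step = msg["height"], msg["width"], msg["step"]
    # step is the row stride in bytes; it may exceed width due to padding.
    return [list(msg["data"][r * step : r * step + w]) for r in range(h)]

def matrix_to_imgmsg(rows):
    """Pack pixel rows back into the message-style dict
    (the cv2_to_imgmsg() direction)."""
    h, w = len(rows), len(rows[0])
    return {"height": h, "width": w, "encoding": "mono8",
            "step": w, "data": bytes(b for row in rows for b in row)}

# A tiny 2x3 grayscale "image" as it would arrive on an image topic.
msg = {"height": 2, "width": 3, "encoding": "mono8", "step": 3,
       "data": bytes([10, 20, 30, 40, 50, 60])}
mat = imgmsg_to_matrix(msg)
```

After this conversion, `mat` can be processed like any OpenCV matrix, and `matrix_to_imgmsg(mat)` reproduces a publishable message dict.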

Understanding ROS – PCL interfacing packages

A point cloud is a group of 3D points in space that represents a 3D shape or object. Each point in the point cloud data is represented by its X, Y, and Z values, and, beyond position, each point can also hold attributes such as RGB or HSV color values (https://en.wikipedia.org/wiki/Point_cloud). The PCL library is an open source project for performing 3D point cloud processing.

Like OpenCV, it is under the BSD license and is free for academic and commercial purposes. It is also cross-platform, with support for Linux, Windows, macOS, and Android/iOS.

The library consists of standard algorithms for filtering, segmentation, feature estimation, and so on, which are required to implement different point cloud-based applications. The main web page of the point cloud library can be found at http://pointclouds.org/.
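As a taste of the filtering algorithms mentioned above, the sketch below reimplements the idea behind PCL's PassThrough filter in pure Python (the real library provides `pcl::PassThrough` in C++; the point values here are made up for illustration):

```python
def pass_through(cloud, axis, lo, hi):
    """Keep only the points whose coordinate on `axis` (0=x, 1=y, 2=z)
    lies within [lo, hi] -- the idea behind PCL's PassThrough filter,
    commonly used to crop a depth camera's cloud to a working region."""
    return [p for p in cloud if lo <= p[axis] <= hi]

# Keep only the points within 1 m of the sensor along the z axis.
cloud = [(0.0, 0.0, 0.5), (0.1, 0.2, 1.5), (0.3, 0.1, 0.9)]
near = pass_through(cloud, axis=2, lo=0.0, hi=1.0)
```

Chaining a few such filters (crop, downsample, remove outliers) is a typical first stage of a point cloud pipeline before segmentation or feature estimation.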

Point cloud data can be acquired by sensors such as Kinect, Asus Xtion Pro, Intel RealSense, and others...

Interfacing USB webcams in ROS

We can interface an ordinary webcam or a built-in laptop camera with ROS. Overall, there are no ROS-specific packages we must install just to use web cameras. If the camera works in Ubuntu/Linux, it may be supported by a ROS driver too. After plugging in the camera, check whether a /dev/videoX device file has been created. You can also check the camera with applications such as Cheese or VLC. A guide for checking whether a webcam is supported on Ubuntu is available at https://help.ubuntu.com/community/Webcam.

We can find the video devices that are present on the system by using the following command:

ls /dev/ | grep video    

If you get an output of video0, then this confirms that a USB camera is available for use.

After ensuring that the webcam works in Ubuntu, we can install a ROS webcam driver called usb_cam using the following command:

sudo apt install ros-noetic-usb-cam  

We can install the latest...
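Once the driver is installed, it is usually started from a roslaunch file. The fragment below is an untested illustration: the parameter names follow the usb_cam package's documentation, while the device path, resolution, and frame ID are example values you should adapt to your setup:

```xml
<launch>
  <!-- Illustrative launch file for the usb_cam driver node -->
  <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen">
    <!-- The device file we found with ls /dev/ | grep video -->
    <param name="video_device" value="/dev/video0"/>
    <param name="image_width" value="640"/>
    <param name="image_height" value="480"/>
    <param name="pixel_format" value="yuyv"/>
    <param name="camera_frame_id" value="usb_cam"/>
  </node>
</launch>
```

With a configuration like this, the camera images are published on topics under the node's namespace, such as /usb_cam/image_raw.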

Working with ROS camera calibration

Like all sensors, cameras need to be calibrated so that we can correct the distortions in their images caused by the camera's internal parameters, and so that we can recover world coordinates from camera coordinates.

The primary sources of image distortion are radial distortion and tangential distortion. Using a camera calibration algorithm, we can model these distortions and also compute real-world coordinates from camera coordinates by estimating the camera calibration matrix, which contains the focal lengths and the principal point.
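To make the model concrete, here is a pure-Python sketch of the standard radial/tangential distortion model used by OpenCV-style calibration. The coefficient names (k1, k2 for radial, p1, p2 for tangential) follow that convention; the numeric values below are made up for illustration:

```python
def project_point(xn, yn, fx, fy, cx, cy, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to
    normalized camera coordinates (xn, yn), then map into pixel
    coordinates using the calibration matrix entries: focal lengths
    (fx, fy) and principal point (cx, cy)."""
    r2 = xn * xn + yn * yn
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = xn * radial + 2.0 * p1 * xn * yn + p2 * (r2 + 2.0 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2.0 * yn * yn) + 2.0 * p2 * xn * yn
    return fx * xd + cx, fy * yd + cy

# With all distortion coefficients at zero this reduces to the ideal
# pinhole projection: u = fx * x + cx, v = fy * y + cy.
u, v = project_point(0.1, -0.2, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

Calibration runs this model in reverse: given many observed pattern points, it solves for fx, fy, cx, cy, and the distortion coefficients that best explain the observations.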

Camera calibration can be done using a classic black-and-white chessboard, a symmetrical circle pattern, or an asymmetrical circle pattern. For each pattern type, a different set of equations is used to obtain the calibration parameters. Using calibration tools, we can detect these patterns, and each detected pattern yields a new set of equations. When the calibration tool detects enough...

Interfacing Kinect and Asus Xtion Pro with ROS

The webcams we have worked with so far can only provide 2D visual information about their surroundings. To get 3D information, we must use 3D vision sensors or range finders, such as laser range finders. Some of the 3D vision sensors that we will discuss in this chapter are Kinect, Asus Xtion Pro, Intel RealSense, and the Hokuyo laser scanner:

Figure 10.7 – Top: Kinect; bottom: Asus Xtion Pro

The first two sensors we are going to discuss are Kinect and Asus Xtion Pro. Both of these devices need the Open Source Natural Interaction (OpenNI) driver library to operate in Linux. OpenNI acts as a middleware between the 3D vision devices and the application software. The OpenNI driver is integrated into ROS, and we can install these drivers by using the following commands. These packages help us interface OpenNI-supported devices such as Kinect and Asus Xtion Pro:

sudo apt install...

Interfacing the Intel RealSense camera with ROS

One of the newer 3D depth sensor families from Intel is RealSense. At the time of writing, different versions of this sensor have been released (the LIDAR camera L515, the D400 family including the D435, the T265, F200, R200, and SR300). To interface RealSense sensors with ROS, we must install the librealsense library.

You can install the librealsense library using the apt package manager. Detailed instructions for setting up this library can be found at https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md.

We can also build the librealsense library from source code manually. Let's learn how to install the library.

Download the RealSense SDK from https://www.intelrealsense.com/sdk-2/; instructions for building and installing it from source are available at https://github.com/IntelRealSense/librealsense/blob/master/doc/installation.md.

After installing the RealSense library, we must install the ROS wrapper (https://dev.intelrealsense.com/docs/ros-wrapper) to start sensor data...

Interfacing Hokuyo lasers with ROS

We can interface a range of laser scanners with ROS. One of the most popular laser scanners on the market is the Hokuyo laser scanner (http://www.robotshop.com/en/hokuyo-utm-03lx-laser-scanning-rangefinder.html):

Figure 10.13 – Different series of Hokuyo laser scanners

One of the most commonly used Hokuyo laser scanner models is the UTM-30LX. This sensor is fast and accurate, making it suitable for robotic applications. The device has a USB 2.0 interface for communication, a 30-meter range, and millimeter resolution. The scan arc is about 270 degrees:

Figure 10.14 – Hokuyo UTM-30LX

There is already a driver available in ROS for interfacing with these scanners. One of the interfaces is called urg_node (http://wiki.ros.org/urg_node).

We can install this package using the following command:

sudo apt install ros-noetic-urg-node

When the...
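The urg_node driver publishes sensor_msgs/LaserScan messages, which store a flat array of ranges together with angle_min and angle_increment. As a pure-Python sketch (no ROS required) of how such a scan converts to Cartesian points in the sensor frame; the field names follow sensor_msgs/LaserScan, but the sample values are made up:

```python
import math

def scan_to_points(angle_min, angle_increment, ranges, range_max=30.0):
    """Convert LaserScan-style ranges into 2D (x, y) points in the
    sensor frame, skipping readings beyond the sensor's range."""
    points = []
    for i, r in enumerate(ranges):
        if 0.0 < r <= range_max:  # drop invalid/out-of-range readings
            a = angle_min + i * angle_increment
            points.append((r * math.cos(a), r * math.sin(a)))
    return points

# A UTM-30LX-like 270-degree arc runs from -135 to +135 degrees.
# Four coarse samples at 90-degree steps; the 40 m reading is dropped
# because it exceeds the sensor's 30 m range.
pts = scan_to_points(math.radians(-135.0), math.radians(90.0),
                     [1.0, 2.0, 1.0, 40.0])
```

Real scans have thousands of beams at a fraction-of-a-degree increment, but the geometry is exactly this.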

Working with point cloud data

We can process the point cloud data from Kinect or other 3D sensors to perform a wide variety of tasks, such as 3D object detection and recognition, obstacle avoidance, and 3D modeling. In this section, we will look at some basic functionality of the PCL library and its ROS interface. We will discuss the following topics:

  • How to publish a point cloud in ROS
  • How to subscribe and process a point cloud
  • How to write point cloud data to a PCD file
  • How to read and publish a point cloud from a PCD file
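To get a feel for the PCD files mentioned in the last two items, here is a minimal pure-Python writer and reader for the ASCII variant of the format. This is a simplified sketch for illustration; in the actual C++ examples, PCL's own I/O functions handle PCD reading and writing:

```python
import os
import tempfile

def write_pcd_ascii(path, points):
    """Write XYZ points as a minimal ASCII PCD v0.7 file: an 11-line
    header describing the fields, followed by one point per line."""
    with open(path, "w") as f:
        f.write("# .PCD v0.7 - Point Cloud Data file format\n")
        f.write("VERSION 0.7\nFIELDS x y z\nSIZE 4 4 4\nTYPE F F F\nCOUNT 1 1 1\n")
        f.write("WIDTH %d\nHEIGHT 1\nVIEWPOINT 0 0 0 1 0 0 0\nPOINTS %d\nDATA ascii\n"
                % (len(points), len(points)))
        for x, y, z in points:
            f.write("%f %f %f\n" % (x, y, z))

def read_pcd_ascii(path):
    """Read the data lines back (simplified: assumes the FIELDS x y z
    layout written above and skips all non-numeric header lines)."""
    points = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 3:
                try:
                    points.append(tuple(float(v) for v in parts))
                except ValueError:
                    pass  # a header line, not a data line
    return points

# Round-trip a tiny cloud through a temporary PCD file.
path = os.path.join(tempfile.mkdtemp(), "cloud.pcd")
write_pcd_ascii(path, [(1.0, 2.0, 3.0), (0.5, 0.5, 0.5)])
cloud = read_pcd_ascii(path)
```

The header records the field layout (x, y, z as 4-byte floats), the point count, and whether the data is ASCII or binary, which is why PCD readers must parse it before touching the point data.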

Let's learn how to publish point cloud data as a ROS topic using a C++ example.

How to publish a point cloud

In this example, we will learn how to publish point cloud data using the sensor_msgs/PointCloud2 message. The code will use PCL APIs to handle and create the point cloud, as well as to convert the PCL cloud data into the PointCloud2 message type.
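In the C++ code that conversion is done by PCL/ROS helper functions, but the byte layout of a PointCloud2 message is easy to see in a pure-Python sketch. Here we assume a compact layout with x, y, z as three consecutive little-endian float32 fields (a 12-byte point_step; note that PCL's own PointXYZ type is padded to 16 bytes in practice), and use a plain dict in place of the real message type:

```python
import struct

def pack_xyz_cloud(points):
    """Serialize XYZ points the way sensor_msgs/PointCloud2 stores
    them: consecutive little-endian float32 x, y, z fields per point."""
    data = b"".join(struct.pack("<fff", *p) for p in points)
    return {"height": 1, "width": len(points), "point_step": 12,
            "row_step": 12 * len(points), "is_dense": True, "data": data}

def unpack_xyz_cloud(msg):
    """Recover the (x, y, z) tuples from the packed byte buffer using
    the point_step recorded in the message."""
    return [struct.unpack_from("<fff", msg["data"], i * msg["point_step"])
            for i in range(msg["width"])]

msg = pack_xyz_cloud([(1.0, 2.0, 3.0), (-0.5, 0.25, 4.0)])
pts = unpack_xyz_cloud(msg)
```

Because the message carries its own point_step and field description, subscribers can decode clouds from any publisher without knowing the original point type at compile time.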

You can find the pcl_publisher.cpp example code...

Summary

This chapter was about vision sensors and programming them in ROS. We looked at the packages used to interface cameras and 3D vision sensors with ROS, such as vision_opencv and perception_pcl, and at the packages in each stack and how they function. We also looked at how to interface a basic webcam and process its images using ROS cv_bridge. After discussing cv_bridge, we looked at how to interface various 3D vision sensors and laser scanners with ROS, and then learned how to process the data from these sensors using the PCL library and ROS. In the next chapter, we will learn how to build an autonomous mobile robot using ROS.

Here are a few questions based on what we covered in this chapter.

Questions

  • What are the packages in the vision_opencv stack?
  • What are the packages in the perception_pcl stack?
  • What are the functions of cv_bridge?
  • How do we convert a PCL cloud into a ROS message?
  • How do we do distributed computing using ROS?
