Chapter 4. Navigating the World with TurtleBot
In the previous chapter, the TurtleBot robot was described as a two-wheeled differential drive robot developed by Willow Garage. The setup of the TurtleBot hardware, netbook, network system, and remote computer was explained so that users could set up and operate their own TurtleBot. Then, the TurtleBot was driven around using keyboard control, command-line control, and a Python script.
In this chapter, we will expand TurtleBot's capabilities by giving the robot vision. The chapter begins by describing 3D vision systems and how they are used to map obstacles within the camera's field of view. The three types of 3D sensors typically used for TurtleBot are shown and described, detailing their specifications.
Setting up the 3D sensor for use on TurtleBot is described, and the configuration is tested in standalone mode. To visualize the sensor data coming from TurtleBot, two ROS tools are utilized: Image Viewer and rviz. Then, an important aspect...
3D vision systems for TurtleBot
TurtleBot's capability is greatly enhanced by the addition of a 3D vision sensor. The function of 3D sensors is to map the environment around the robot by discovering nearby objects that are either stationary or moving. The mapping function must be accomplished in real time so that the robot can move around the environment, evaluate its path choices, and avoid obstacles. For autonomous vehicles, such as Google's self-driving cars, 3D mapping is accomplished by a high-cost LIDAR system that uses pulsed laser light to illuminate its environment and analyzes the reflected light. For our TurtleBot, we will present a number of low-cost but effective options. These standard 3D sensors for TurtleBot include Kinect sensors, ASUS Xtion sensors, and Carmine sensors.
How these 3D vision sensors work
The 3D vision systems that we describe for TurtleBot have a common infrared technology to sense depth. This technology was developed by PrimeSense, an Israeli 3D sensing company and...
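The PrimeSense approach projects a known infrared dot pattern onto the scene and measures how the pattern shifts as seen by an offset infrared camera. The underlying geometry is ordinary triangulation; the following is a simplified sketch of that relationship (the function name and the numeric parameters are illustrative assumptions, not the device's calibrated values, and the real PrimeSense processing is proprietary):

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Triangulated depth for a structured-light (or stereo) sensor.

    Simplified model: the IR projector and IR camera form a stereo pair
    separated by `baseline_m`; the observed shift of the projected
    pattern (disparity, in pixels) maps to depth by similar triangles:
        depth = focal_length * baseline / disparity
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Illustrative Kinect-like parameters: f ~ 580 px, baseline ~ 0.075 m.
# A 29-pixel pattern shift then corresponds to a depth of 1.5 m.
d = depth_from_disparity(580.0, 0.075, 29.0)  # 1.5
```

Note that depth resolution degrades with distance: because depth is inversely proportional to disparity, a one-pixel disparity error matters far more at long range, which is why these sensors are specified for roughly 0.5 m to 4-5 m.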
Configuring TurtleBot and installing the 3D sensor software
A few minor but important environment variables and software packages are needed for the TurtleBot, depending on your selection of 3D sensor. We have attached a Kinect Xbox 360 sensor to our TurtleBot, but we will provide instructions to configure each of the 3D sensors mentioned in this chapter. These environment variables are used by the ROS launch files to launch the correct camera drivers. In ROS Indigo, the Kinect and ASUS sensors are supported by different camera drivers, as described in the following sections.
The environment variables for the Kinect sensors are as follows:
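In the standard Indigo TurtleBot packages, the sensor is typically selected with the TURTLEBOT_3D_SENSOR variable; the value shown here is an assumption based on that stock setup:

```shell
# Tell the TurtleBot launch files to load the Kinect camera driver
# (variable name used by the stock Indigo turtlebot packages).
export TURTLEBOT_3D_SENSOR=kinect
```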
These variables should be added to the ~/.bashrc files of both the TurtleBot and the remote computer.
Kinect also requires a special driver for the camera to be downloaded from GitHub. Type the following commands in a terminal window on the TurtleBot netbook:
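As a sketch of that installation, the freenect driver used later in this chapter can either be installed as the packaged ROS wrapper or built from the OpenKinect sources on GitHub (the repository and package names here are assumptions; follow the commands given for your specific setup):

```shell
# Option 1: install the packaged ROS Kinect driver (Indigo).
sudo apt-get install ros-indigo-freenect-launch

# Option 2: build the libfreenect driver from the OpenKinect repository.
git clone https://github.com/OpenKinect/libfreenect.git
cd libfreenect
mkdir build && cd build
cmake ..
make
sudo make install
```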
Testing the 3D sensor in standalone mode
Before attempting to control the TurtleBot from a remote computer, it is wise to test the TurtleBot in standalone mode. TurtleBot will be powered on, and we will use its netbook to check whether the robot is operational on its own.
To prepare the TurtleBot, perform the following steps:
Plug in the power to the 3D sensor via the TurtleBot base connection (Kinect only).
Plug in the power to the netbook via the TurtleBot base connection.
Power on the netbook and establish its network connection. This should be the network used for TurtleBot's ROS_MASTER_URI IP address.
Power on the TurtleBot base.
Plug in the 3D sensor to the netbook through a USB 2.0 port (Kinect for Windows v2 uses the USB 3.0 port).
Ensure that ROS environment variables are configured correctly on the netbook. Refer to the Netbook network setup section in Chapter 3, Driving Around with TurtleBot, and the Configuring TurtleBot and installing 3D sensor software section...
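The environment variable check in the last step can be done directly on the netbook; a minimal sanity check (variable names per the Chapter 3 network setup) is:

```shell
# In standalone mode, both values should refer to the netbook's own
# address, since the TurtleBot netbook runs the ROS master.
echo $ROS_MASTER_URI
echo $ROS_HOSTNAME
```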
Running ROS nodes for visualization
Viewing images on the remote computer is the next step in setting up the TurtleBot. Two ROS tools can be used to visualize the rgb and depth camera images. Image Viewer and rviz are used in the following sections to view the image streams published by the Kinect sensor.
Visual data using Image Viewer
A ROS node can allow us to view images that come from the rgb camera on Kinect. The camera_nodelet_manager node implements a basic camera capture program using OpenCV to handle publishing ROS image messages as a topic. This node publishes the camera images in the /camera namespace.
Three terminal windows will be required to launch the base and camera nodes on TurtleBot and launch the Image Viewer node on the remote computer. The steps are as follows:
Terminal Window 1: Minimal launch of TurtleBot:
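A sketch of the minimal launch, assuming the stock Indigo turtlebot_bringup package:

```shell
# SSH to the TurtleBot netbook, then bring up the base:
roslaunch turtlebot_bringup minimal.launch
```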
Terminal Window 2: Launch freenect camera:
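A sketch of the camera launch, assuming the freenect_launch package installed earlier:

```shell
# In a second terminal on the netbook (or via SSH), start the Kinect driver:
roslaunch freenect_launch freenect.launch
```

In a third terminal on the remote computer, Image Viewer can then subscribe to the rgb stream with a command along the lines of `rosrun image_view image_view image:=/camera/rgb/image_color` (the topic name assumes the freenect driver's default /camera namespace).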
Navigating with TurtleBot
Launch files for TurtleBot will create ROS nodes either remotely on the TurtleBot netbook (via SSH to TurtleBot) or locally on the remote computer. As a general rule, the launch files (and nodes) that handle the GUI and visualization processing should run on the remote computer, while the minimal launch and camera drivers should run on the TurtleBot netbook. Note that we will specify when to SSH to TurtleBot for a ROS command; when SSH is omitted, the ROS command is run on the remote computer.
Mapping a room with TurtleBot
TurtleBot can autonomously drive around its environment once a map of the environment has been made. The 3D sensor is used to create a 2D map of the room as the TurtleBot is driven around by a joystick, keyboard, or any other method of teleoperation.
Since we are using the Kobuki base, calibration of the gyro inside the base is not necessary. If you are using the Create base, make sure that you perform the gyro calibration procedure in the TurtleBot ROS...
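The mapping session described above comes down to a handful of launch files; the following is a sketch assuming the stock Indigo turtlebot_navigation and turtlebot_teleop packages (the map name my_map is arbitrary):

```shell
# Terminal 1 (SSH to the netbook): bring up the TurtleBot base.
roslaunch turtlebot_bringup minimal.launch

# Terminal 2 (SSH to the netbook): start the gmapping SLAM demo.
roslaunch turtlebot_navigation gmapping_demo.launch

# Terminal 3 (remote computer): teleoperate TurtleBot around the room.
roslaunch turtlebot_teleop keyboard_teleop.launch

# When the map looks complete, save it to disk:
rosrun map_server map_saver -f ~/my_map
```

The map_saver command writes two files, an image (my_map.pgm) and its metadata (my_map.yaml), which are loaded together for navigation.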
TurtleBot comes with its own 3D vision system that serves as a low-cost alternative to a laser scanner. The Kinect, ASUS, or PrimeSense devices can be mounted on the TurtleBot base to provide a 3D depth view of the environment. This chapter provides a comparison of these three types of sensors and identifies the software that is needed to operate them as ROS components. We check their operation by testing the sensor on TurtleBot in standalone mode. To use the devices, we can utilize Image Viewer or rviz to view image streams from the rgb or depth cameras.
The primary objective is for TurtleBot to see its surroundings and be able to autonomously navigate through them. First, TurtleBot is driven around in teleoperation mode to create a map of the environment. The map provides the room boundaries and obstacles so that TurtleBot's navigation algorithm, amcl, can plan a path through the environment from its start location to a user-defined goal.
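The navigation run described above can be sketched as follows, again assuming the stock Indigo packages and a map saved as my_map during the mapping session:

```shell
# Terminal 1 (SSH to the netbook): bring up the base.
roslaunch turtlebot_bringup minimal.launch

# Terminal 2 (SSH to the netbook): start amcl with the saved map.
roslaunch turtlebot_navigation amcl_demo.launch map_file:=$HOME/my_map.yaml

# Terminal 3 (remote computer): open rviz to set the initial
# 2D Pose Estimate and then a 2D Nav Goal.
roslaunch turtlebot_rviz_launchers view_navigation.launch
```

In rviz, the 2D Pose Estimate tells amcl where the robot starts on the map; the 2D Nav Goal is the user-defined destination toward which the planner computes a path.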
In the next chapter, we will return to the ROS simulation world...