Chapter 9. The Navigation Stack – Beyond Setups
We are now approaching the end of the book, and this is where we will use all the knowledge acquired throughout it. We have created packages, nodes, 3D models of robots, and more. In Chapter 8, The Navigation Stack – Robot Setups, you configured your robot for use with the navigation stack; in this chapter, we will finish that configuration so that you learn how to use the stack with your robot.
All the work done in the previous chapters has been a preamble for this precise moment. This is when the fun begins and when the robots come alive.
In this chapter, we are going to learn how to do the following:
Apply the knowledge of Chapter 8, The Navigation Stack – Robot Setups, and the programs developed therein
Understand the navigation stack and how it works
Configure all the necessary files
Create launch files to start the navigation stack
Let's begin!
The correct way to create a package is to declare dependencies on the other packages created for your robot. For example, you could use the following command to create the package:
But in our case, as we have everything in the same package, it is only necessary to execute the following:
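The exact commands are not reproduced here, but following the usual catkin conventions they might look like this (the package name follows this chapter's conventions, and the dependency list is an assumption for illustration):

```shell
# With explicit dependencies on the navigation-related packages (names are assumptions):
catkin_create_pkg chapter9_tutorials roscpp rospy move_base amcl map_server

# Or, since everything lives in the same package in this chapter, simply:
catkin_create_pkg chapter9_tutorials
```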
Remember that you can find all the necessary files for this chapter in the repository.
Creating a robot configuration
To launch the entire robot, we are going to create a launch file that starts all the necessary systems. Here you have a launch file for a real robot that you can use as a template; the script is provided as configuration_template.launch:
This launch file will launch three nodes that will start up the robot.
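As a sketch of what such a template might contain (the package, node, and executable names here are assumptions, not the book's exact listing), a bring-up file starting three nodes could look like this:

```xml
<launch>
  <!-- Sketch of a robot bring-up launch file; all names are illustrative -->
  <!-- 1. The base controller that drives the motors -->
  <node pkg="chapter9_tutorials" type="base_controller" name="base_controller" />
  <!-- 2. The laser driver publishing sensor_msgs/LaserScan -->
  <node pkg="urg_node" type="urg_node" name="laser" />
  <!-- 3. The state publisher broadcasting the robot's tf tree -->
  <node pkg="robot_state_publisher" type="robot_state_publisher"
        name="robot_state_publisher" />
</launch>
```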
Configuring the costmaps – global_costmap and local_costmap
Okay, now we are going to start configuring the navigation stack and all the files necessary to start it. First, we will learn what costmaps are and what they are used for. Our robot will move through the map using two types of navigation: global and local.
Global navigation is used to create paths to a goal on the map or at a far-off distance.
Local navigation is used to create paths over nearby distances and avoid obstacles, for example, within a square window of 4 x 4 meters around the robot.
These modules use costmaps to store all the information about our map. The global costmap is used for global navigation, and the local costmap for local navigation.
Each costmap has parameters to configure its behavior, and the two costmaps also have common parameters, which are configured in a shared file.
The configuration basically consists of three files where we can set up different parameters. The files are as follows...
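Following the standard ROS navigation stack setup, the shared file is conventionally called costmap_common_params.yaml; a minimal sketch of it might look like this (all values are illustrative, not recommendations):

```yaml
# Illustrative common costmap parameters (values are assumptions)
obstacle_range: 2.5        # sensor readings mark obstacles up to 2.5 m away
raytrace_range: 3.0        # free space is cleared along rays up to 3.0 m
footprint: [[-0.2, -0.2], [-0.2, 0.2], [0.2, 0.2], [0.2, -0.2]]
inflation_radius: 0.5      # obstacles are inflated by this radius in the costmap
observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: base_link, data_type: LaserScan,
                    topic: scan, marking: true, clearing: true}
```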
Creating a launch file for the navigation stack
Now we have all the files created and the navigation stack configured. To run everything, we are going to create a launch file. Create a new file named move_base.launch in the chapter9_tutorials/launch folder, and put the following code in it:
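As a hedged sketch of what such a launch file typically contains (the parameter file names follow the standard navigation stack conventions; paths are assumptions based on this chapter's package layout):

```xml
<launch>
  <!-- Sketch of a move_base launch file; file names and paths are illustrative -->
  <node pkg="move_base" type="move_base" name="move_base" output="screen">
    <!-- Common costmap parameters, loaded once per costmap namespace -->
    <rosparam file="$(find chapter9_tutorials)/launch/costmap_common_params.yaml"
              command="load" ns="global_costmap" />
    <rosparam file="$(find chapter9_tutorials)/launch/costmap_common_params.yaml"
              command="load" ns="local_costmap" />
    <!-- Costmap-specific and planner parameters -->
    <rosparam file="$(find chapter9_tutorials)/launch/global_costmap_params.yaml"
              command="load" />
    <rosparam file="$(find chapter9_tutorials)/launch/local_costmap_params.yaml"
              command="load" />
    <rosparam file="$(find chapter9_tutorials)/launch/base_local_planner_params.yaml"
              command="load" />
  </node>
</launch>
```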
Setting up rviz for the navigation stack
It is good practice to visualize the data that the navigation stack works with. In this section, we will show you the visualization topics that you must add in rviz to see the correct data sent by the navigation stack. Each visualization topic that the navigation stack publishes is discussed next.
The 2D pose estimate (P shortcut) allows the user to initialize the localization system used by the navigation stack by setting the pose of the robot in the world.
The navigation stack waits for the new pose on a topic named initialpose. This topic is published by the rviz window where we previously changed the name of the topic.
You can see in the following screenshot how you can use initialpose. Click on the 2D Pose Estimate button, and click on the map to indicate the initial position of your robot. If you don't do this at the beginning, the robot will start the auto-localization process and try to set an...
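The same initial pose can also be set from the command line by publishing to initialpose directly; this is handy for scripting. The coordinates below are just an example:

```shell
# Publish a single initial pose estimate on /initialpose (example coordinates)
rostopic pub -1 /initialpose geometry_msgs/PoseWithCovarianceStamped \
  '{ header: { frame_id: "map" },
     pose: { pose: { position: { x: 1.0, y: 0.5, z: 0.0 },
                     orientation: { w: 1.0 } } } }'
```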
Adaptive Monte Carlo Localization
In this chapter, we are using the amcl (Adaptive Monte Carlo Localization) algorithm for localization. The amcl algorithm is a probabilistic localization system for a robot moving in 2D. It implements the adaptive Monte Carlo localization approach, which uses a particle filter to track the pose of a robot against a known map.
The amcl algorithm has many configuration options that will affect the performance of localization. For more information on amcl, please refer to the AMCL documentation at http://wiki.ros.org/amcl and also at http://www.probabilistic-robotics.org/.
The amcl node works mainly with laser scans and laser maps, but it could be extended to work with other sensor data, such as sonar or stereo vision. For this chapter, it takes a laser-based map and laser scans, transforms messages, and generates a probabilistic pose. On startup, amcl initializes its particle filter according to the parameters provided in the setup. If you...
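As an illustration, these are some of the amcl parameters that are commonly tuned; the parameter names come from the amcl documentation on the ROS wiki, while the values shown here are examples rather than recommendations:

```yaml
# Example amcl configuration values (illustrative only)
min_particles: 100     # lower bound of the adaptive particle set
max_particles: 5000    # upper bound of the adaptive particle set
update_min_d: 0.2      # translation (m) required before a filter update
update_min_a: 0.5      # rotation (rad) required before a filter update
laser_max_beams: 30    # evenly spaced beams used from each scan
odom_model_type: diff  # odometry model: diff or omni
```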
Modifying parameters with rqt_reconfigure
A good way to understand all the parameters configured in this chapter is to use rqt_reconfigure to change their values without restarting the simulation.
To launch rqt_reconfigure, use the following command:
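On a standard ROS installation, rqt_reconfigure is typically started with rosrun:

```shell
# Open the dynamic reconfigure GUI for all running nodes
rosrun rqt_reconfigure rqt_reconfigure
```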
You will see the screen as follows:
As an example, we are going to change the max_vel_x parameter configured in the base_local_planner_params.yaml file. Click on the move_base menu and expand it. Then select TrajectoryPlannerROS in the menu tree. You will see a list of parameters. As you can see, the max_vel_x parameter has the same value that we assigned in the configuration file.
You can see a brief description for the parameter by hovering the mouse over the name for a few seconds. This is very useful for understanding the function of each parameter.
A great functionality of the navigation stack is the recalculation of the path if it finds obstacles during the movement. You can easily see this feature by adding an object in front of the robot in Gazebo. For example, in our simulation we added a big box in the middle of the path. The navigation stack detects the new obstacle, and automatically creates an alternative path.
In the next image, you can see the object that we added. Gazebo has some predefined 3D objects that you can use in the simulations with mobile robots, arms, humanoids, and so on.
To see the list, go to the Insert model section. Select one of the objects and then click at the location where you want to put it, as shown in the following screenshot:
If you go to the rviz window now, you will see a new global plan that avoids the obstacle. This feature is very useful when you use the robot in real environments with people walking around it. If the robot detects a possible collision, it will change...
We are sure that you have been playing with the robot, moving it around the map a lot. This is fun, but a little tedious, and it is not very practical.
Perhaps you were thinking that it would be a great idea to program a list of movements and send the robot to different positions at the press of a button, even when we are not in front of a computer running rviz.
Okay, now you are going to learn how to do this using actionlib.
The actionlib package provides a standardized interface for working with long-running tasks. For example, you can use it to send goals for the robot to detect something at a place, make scans with the laser, and so on. In this section, we will send a goal to the robot, and we will wait for this task to end.
Actions could look similar to services, but if you are executing a task with a long duration, you might want the ability to cancel the request during execution, or to get periodic feedback about how the request is progressing. You cannot do this with services. Furthermore...
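As a minimal sketch of this idea, the following node sends a single navigation goal to the move_base action server and blocks until it finishes. The action name move_base and the goal coordinates are assumptions for illustration; this requires a running ROS system with the navigation stack launched.

```python
#!/usr/bin/env python
# Minimal actionlib client sketch: send one navigation goal to move_base
# and wait for it to complete. Coordinates are illustrative.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_single_goal')

# Connect to the move_base action server started by our launch file
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'   # goal expressed in the map frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0     # example target: 1 m along x
goal.target_pose.pose.orientation.w = 1.0  # no rotation

client.send_goal(goal)
client.wait_for_result()                   # the long-running part: block until done
rospy.loginfo('Goal finished with state %d', client.get_state())
```

Unlike a service call, the client here could also cancel the goal with client.cancel_goal(), or register a feedback callback in send_goal to track progress while the robot moves.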
At the end of this chapter, you should have a robot—simulated or real—moving autonomously through the map (which models the environment) using the navigation stack. You can program the control and localization of the robot following the ROS philosophy of code reuse, so that you can have the robot completely configured without much effort. The most difficult part of this chapter is understanding all the parameters and learning how to use each one appropriately. Their correct use will determine whether your robot works well or not; for this reason, you must practice changing the parameters and observe the robot's reaction.
In the next chapter, you will learn how to use MoveIt! through some tutorials and examples. If you don't know what MoveIt! is, it is software for building mobile manipulation applications. With it, you can move your articulated robot in an easy way.