Navigation Stack - Robot Setups

by Aaron Martinez and Enrique Fernández | October 2013

This article by Aaron Martinez and Enrique Fernández, authors of the book Learning ROS for Robotics Programming, covers how to configure your robot so that it can be used with the navigation stack. Here you will learn about what is probably one of the most powerful features in ROS, one that will let you move your robot autonomously.

Thanks to the community and the shared code, ROS has many algorithms that can be used for navigation.

First of all, you will learn everything necessary to configure the navigation stack for your robot. In particular, we will cover the following items in this article:

  • An introduction to the navigation stack and its powerful capabilities, clearly one of the greatest pieces of software that comes with ROS.
  • TF is explained in order to show how to transform data from the frame of one physical element to another; for example, the data received from a sensor or the command for the desired position of an actuator.
  • We will see how to create a laser driver or simulate it.
  • We will learn how the odometry is computed and published, and how Gazebo provides it.
  • A base controller will be presented, including a detailed description of how to create one for your robot.
  • We will see how to execute SLAM with ROS. That is, we will show you how you can build a map from the environment with your robot as it moves through it.
  • Finally, you will be able to localize your robot in the map using the localization algorithms of the navigation stack.

The navigation stack in ROS

In order to understand the navigation stack, think of it as a set of algorithms that use the robot's sensors and odometry, and that let you control the robot with a standard velocity message. It can move your robot to another position without problems (for example, without crashing, getting stuck in some location, or getting lost).

You might assume that this stack can easily be used with any robot. This is almost true, but it is necessary to tune some configuration files and write some nodes to make the stack work.

The robot must satisfy some requirements before it uses the navigation stack:

  • The navigation stack can only handle differential drive and holonomic-wheeled robots. The shape of the robot must be either a square or a rectangle. However, it can also do certain things with biped robots, such as robot localization, as long as the robot does not move sideways.
  • It requires that the robot publishes information about the relative positions (transforms) of all its joints and sensors.
  • The robot must be controllable by sending it messages with linear and angular velocities.
  • The robot must mount a planar laser to create the map and localize itself. Alternatively, you can generate something equivalent from several lasers or sonars, or project the readings onto the ground plane if the sensor is mounted somewhere else on the robot.

The following diagram shows how the navigation stack is organized. You can see three groups of boxes with colors (gray and white) and dotted lines. The plain white boxes indicate the stacks that are provided by ROS, and they contain all the nodes needed to make your robot truly autonomous:

In the following sections, we will see how to create the parts marked in gray in the diagram. These parts depend on the platform used; this means that it is necessary to write code so that the platform can be used from ROS and by the navigation stack.

Creating transforms

The navigation stack needs to know the position of the sensors, wheels, and joints.

To do that, we use the TF (which stands for Transform Frames) software library, which manages a transform tree. You could do the math by hand, but if you have a lot of frames to keep track of, it quickly becomes complicated and messy.

Thanks to TF, we can add more sensors and parts to the robot, and the TF will handle all the relations for us.

If we put the laser 10 cm forward (along x) and 20 cm above the origin of the base_link coordinate frame, we would need to add a new frame to the transformation tree with these offsets.

Once the frame is inserted and created, we can easily find out the position of the laser with regard to base_link or to the wheels. The only thing we need to do is call the TF library and ask for the transformation.

Creating a broadcaster

Let's test it with some simple code. Create a new file in chapter7_tutorials/src with the name tf_broadcaster.cpp, and put the following code inside it:

#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv){
  ros::init(argc, argv, "robot_tf_publisher");
  ros::NodeHandle n;

  ros::Rate r(100);

  tf::TransformBroadcaster broadcaster;

  while(n.ok()){
    broadcaster.sendTransform(
      tf::StampedTransform(
        tf::Transform(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.1, 0.0, 0.2)),
        ros::Time::now(), "base_link", "base_laser"));
    r.sleep();
  }
}

Remember to add the following line in your CMakeLists.txt file to create the new executable:

rosbuild_add_executable(tf_broadcaster src/tf_broadcaster.cpp)

We will also create another node that uses the transform; it will give us the position of a point seen by the sensor with regard to the center of base_link (our robot).

Creating a listener

Create a new file in chapter7_tutorials/src with the name tf_listener.cpp and input the following code:

#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>
#include <tf/transform_listener.h>

void transformPoint(const tf::TransformListener& listener){
  // we'll create a point in the base_laser frame that we'd like to transform to the base_link frame
  geometry_msgs::PointStamped laser_point;
  laser_point.header.frame_id = "base_laser";

  // we'll just use the most recent transform available for our simple example
  laser_point.header.stamp = ros::Time();

  // just an arbitrary point in space
  laser_point.point.x = 1.0;
  laser_point.point.y = 2.0;
  laser_point.point.z = 0.0;

  try{
    geometry_msgs::PointStamped base_point;
    listener.transformPoint("base_link", laser_point, base_point);

    ROS_INFO("base_laser: (%.2f, %.2f, %.2f) -----> base_link: (%.2f, %.2f, %.2f) at time %.2f",
        laser_point.point.x, laser_point.point.y, laser_point.point.z,
        base_point.point.x, base_point.point.y, base_point.point.z,
        base_point.header.stamp.toSec());
  }
  catch(tf::TransformException& ex){
    ROS_ERROR("Received an exception trying to transform a point from \"base_laser\" to \"base_link\": %s", ex.what());
  }
}

int main(int argc, char** argv){
  ros::init(argc, argv, "robot_tf_listener");
  ros::NodeHandle n;

  tf::TransformListener listener(ros::Duration(10));

  // we'll transform a point once every second
  ros::Timer timer = n.createTimer(ros::Duration(1.0), boost::bind(&transformPoint, boost::ref(listener)));

  ros::spin();
}

Remember to add the corresponding line in the CMakeLists.txt file to create the executable.

Compile the package and run both the nodes using the following commands:

$ rosmake chapter7_tutorials
$ rosrun chapter7_tutorials tf_broadcaster
$ rosrun chapter7_tutorials tf_listener

Then you will see the following message:

[ INFO] [1368521854.336910465]: base_laser: (1.00, 2.00, 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521854.33
[ INFO] [1368521855.336347545]: base_laser: (1.00, 2.00, 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521855.33

This means that the point created in the node, with the position (1.00, 2.00, 0.00) relative to base_laser, has the position (1.10, 2.00, 0.20) relative to base_link.

As you can see, the tf library performs all the mathematics for you to get the coordinates of a point or the position of a joint relative to another point.
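
If you only need the transform itself, rather than transforming a particular point as tf_listener.cpp does, you can query it directly with lookupTransform. The following is a minimal sketch (not part of the chapter's packages; the node name is ours) that prints where base_laser sits in base_link:

#include <ros/ros.h>
#include <tf/transform_listener.h>

int main(int argc, char** argv){
  ros::init(argc, argv, "tf_query_example");
  ros::NodeHandle n;

  tf::TransformListener listener;
  tf::StampedTransform transform;

  ros::Rate rate(1.0);
  while(n.ok()){
    try{
      // ros::Time(0) asks for the latest transform available in the buffer
      listener.lookupTransform("base_link", "base_laser", ros::Time(0), transform);
      ROS_INFO("base_laser is at (%.2f, %.2f, %.2f) in base_link",
               transform.getOrigin().x(),
               transform.getOrigin().y(),
               transform.getOrigin().z());
    }catch(tf::TransformException& ex){
      // the buffer may still be empty right after startup
      ROS_WARN("%s", ex.what());
    }
    rate.sleep();
  }
  return 0;
}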

A transform tree defines offsets in terms of both translation and rotation between different coordinate frames.

Let us see an example to help you understand this. We are going to add another laser, say, on the back of the robot (base_link):

The system has to know the position of the new laser in order to detect collisions, such as those between the wheels and the walls. With the TF tree, this is very simple to do and maintain, and it also scales well. Thanks to tf, we can add more sensors and parts, and the tf library will handle all the relations for us. All the sensors and joints must be correctly configured on tf so that the navigation stack can move the robot without problems and know exactly where each of its components is. A small sketch of how the new frame could be broadcast is shown right after this paragraph.
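
For example, the extra frame could be broadcast from the same while loop of tf_broadcaster.cpp with a second sendTransform call. This is only a sketch: the frame name base_laser_rear and the offsets are made up for illustration.

// inside the while(n.ok()) loop of tf_broadcaster.cpp, next to the first call
broadcaster.sendTransform(
    tf::StampedTransform(
        tf::Transform(tf::Quaternion(0, 0, 0, 1), tf::Vector3(-0.15, 0.0, 0.2)),
        ros::Time::now(), "base_link", "base_laser_rear"));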

Before starting to write the code to configure each component, keep in mind that the geometry of the robot is already specified in the URDF file, so it is not necessary to configure the robot again. Perhaps you did not notice it, but you have already been using the robot_state_publisher package to publish the transform tree of your robot; therefore, the robot is already configured to be used with the navigation stack.

Watching the transformation tree

If you want to see the transformation tree of your robot, use the following command:

$ roslaunch chapter7_tutorials gazebo_map_robot.launch model:="`rospack find chapter7_tutorials`/urdf/robot1_base_04.xacro"
$ rosrun tf view_frames

The resultant frame is depicted as follows:

And now, if you run tf_broadcaster and run the rosrun tf view_frames command again, you will see the frame that you have created by code:

$ rosrun chapter7_tutorials tf_broadcaster
$ rosrun tf view_frames

The resultant frame is depicted as follows:


Publishing sensor information

Your robot can have a lot of sensors to see the world, and you can program many nodes to take that data and do something with it, but the navigation stack is prepared to use only planar laser sensor data. So, your sensor must publish the data with one of these message types: sensor_msgs/LaserScan or sensor_msgs/PointCloud.

We are going to use the laser located in front of the robot to navigate in Gazebo. Remember that this laser is simulated in Gazebo, and it publishes data on the base_scan/scan topic.

In our case, we do not need to configure anything for our laser to use it with the navigation stack. This is because we already have tf configured in the .urdf file, and the laser publishes data with the correct message type.

If you use a real laser, ROS might have a driver for it. Anyway, if you are using a laser that has no ROS driver and you want to write a node to publish the data as sensor_msgs/LaserScan messages, you have an example template to do it, which is shown in the following section.

But first, recall the structure of the sensor_msgs/LaserScan message. Use the following command to see it:

$ rosmsg show sensor_msgs/LaserScan
std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
float32 angle_min
float32 angle_max
float32 angle_increment
float32 time_increment
float32 scan_time
float32 range_min
float32 range_max
float32[] ranges
float32[] intensities

Creating the laser node

Now we will create a new file in chapter7_tutorials/src with the name laser.cpp and put the following code in it:

#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>

int main(int argc, char** argv){
  ros::init(argc, argv, "laser_scan_publisher");
  ros::NodeHandle n;

  ros::Publisher scan_pub = n.advertise<sensor_msgs::LaserScan>("scan", 50);

  unsigned int num_readings = 100;
  double laser_frequency = 40;
  double ranges[num_readings];
  double intensities[num_readings];

  int count = 0;
  ros::Rate r(1.0);
  while(n.ok()){
    // generate some fake data for our laser scan
    for(unsigned int i = 0; i < num_readings; ++i){
      ranges[i] = count;
      intensities[i] = 100 + count;
    }
    ros::Time scan_time = ros::Time::now();

    // populate the LaserScan message
    sensor_msgs::LaserScan scan;
    scan.header.stamp = scan_time;
    scan.header.frame_id = "base_link";
    scan.angle_min = -1.57;
    scan.angle_max = 1.57;
    scan.angle_increment = 3.14 / num_readings;
    scan.time_increment = (1 / laser_frequency) / (num_readings);
    scan.range_min = 0.0;
    scan.range_max = 100.0;

    scan.ranges.resize(num_readings);
    scan.intensities.resize(num_readings);
    for(unsigned int i = 0; i < num_readings; ++i){
      scan.ranges[i] = ranges[i];
      scan.intensities[i] = intensities[i];
    }

    scan_pub.publish(scan);
    ++count;
    r.sleep();
  }
}

As you can see, we create a new topic with the name scan and the message type sensor_msgs/LaserScan, whose structure we have just seen. The name of the topic must be unique. When you configure the navigation stack, you will select this topic to be used for navigation. The following line shows how to create the publisher with the correct topic name:

ros::Publisher scan_pub = n.advertise<sensor_msgs::LaserScan>("scan", 50);

It is important to fill in the header with the stamp, the frame_id, and the other fields because, if you do not, the navigation stack could fail with such data:

scan.header.stamp = scan_time;
scan.header.frame_id = "base_link";

Another important field in the header is frame_id. It must be one of the frames created in the .urdf file and published on the tf transform tree. The navigation stack will use this information to know the real position of the sensor and to make transforms, such as the one between the sensor data and the obstacles.

With this template, you can use any laser, even if it has no ROS driver. You only have to replace the fake data with the real data from your laser.

This template can also be used to create something that looks like a laser but is not. For example, you could simulate a laser using stereoscopy or using a sensor such as a sonar.

Publishing odometry information

The navigation stack also needs to receive data from the robot's odometry. The odometry is an estimate of the robot's displacement relative to a starting point; in our case, it is the displacement of base_link relative to a fixed point in the odom frame.

The type of message used by the navigation stack is nav_msgs/Odometry. We can look at its structure using the following command:

$ rosmsg show nav_msgs/Odometry
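
The output is the message definition, reproduced here for reference:

std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
string child_frame_id
geometry_msgs/PoseWithCovariance pose
  geometry_msgs/Pose pose
    geometry_msgs/Point position
      float64 x
      float64 y
      float64 z
    geometry_msgs/Quaternion orientation
      float64 x
      float64 y
      float64 z
      float64 w
  float64[36] covariance
geometry_msgs/TwistWithCovariance twist
  geometry_msgs/Twist twist
    geometry_msgs/Vector3 linear
      float64 x
      float64 y
      float64 z
    geometry_msgs/Vector3 angular
      float64 x
      float64 y
      float64 z
  float64[36] covariance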

As you can see in the message structure, nav_msgs/Odometry gives the position of the robot between frame_id and child_frame_id. It also gives us the pose of the robot using the geometry_msgs/Pose message and the velocity with the geometry_msgs/Twist message.

The pose has two structures that give the position of the robot in Cartesian (x, y, z) coordinates and its orientation as a quaternion. The orientation is the angular displacement of the robot.

The velocity has two structures that show the linear velocity and the angular velocity. For our robot, we will use only the linear x velocity and the angular z velocity. We will use the linear x velocity to know whether the robot is moving forward or backward. The angular z velocity is used to check whether the robot is rotating towards the left or right.
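
As a small illustration of these fields, the following sketch subscribes to the odometry and prints the planar pose and the two velocities we care about; it is not part of the chapter's packages, and the node and callback names are ours. The heading (yaw) is recovered from the quaternion with the tf helper function:

#include <ros/ros.h>
#include <nav_msgs/Odometry.h>
#include <tf/transform_datatypes.h>

// Hypothetical helper: print the planar pose and velocities from the odometry
void odomCallback(const nav_msgs::Odometry::ConstPtr& odom)
{
  double yaw = tf::getYaw(odom->pose.pose.orientation);  // heading on the plane
  double vx  = odom->twist.twist.linear.x;               // forward/backward speed
  double wz  = odom->twist.twist.angular.z;              // turning speed
  ROS_INFO("x: %.2f y: %.2f yaw: %.2f vx: %.2f wz: %.2f",
           odom->pose.pose.position.x, odom->pose.pose.position.y, yaw, vx, wz);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "odom_listener");
  ros::NodeHandle n;
  ros::Subscriber sub = n.subscribe("odom", 10, odomCallback);
  ros::spin();
  return 0;
}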

As the odometry is the displacement between two frames, it is also necessary to publish the corresponding transform. We saw how to publish transforms in the previous section, and later in this section we will show an example that publishes both the odometry and the tf of our robot.

Now let me show you how Gazebo works with the odometry.

How Gazebo creates the odometry

As you have seen in other examples with Gazebo, our robot moves in the simulated world just like a robot in the real world. We use a driver for our robot, the diffdrive_plugin.

This driver publishes the odometry generated in the simulated world, so we do not need to write anything for Gazebo.

Execute the robot sample in Gazebo to see the odometry working. Type the following commands in the shell:

$ roslaunch chapter7_tutorials gazebo_xacro.launch model:="`rospack find chapter7_tutorials`/urdf/robot1_base_04.xacro"
$ rosrun erratic_teleop erratic_keyboard_teleop

Then, with the teleop node, move the robot for a few seconds to generate new data on the odometry topic.

On the screen of the Gazebo simulator, if you click on robot_model1, you will see some properties of the object. One of these properties is the pose of the robot. Click on the pose, and you will see some fields with data. What you are watching is the position of the robot in the virtual world. If you move the robot, the data changes:

Gazebo continuously publishes the odometry data. Check the topic and see what data it is sending. Type the following command in a shell:

$ rostopic echo /odom/pose/pose

The following is the output you will receive:

---
position:
  x: 0.32924763712
  y: 0.97509878254
  z: 0.0
orientation:
  x: 0.0
  y: 0.0
  z: 0.941128847661
  w: 0.33804806182
---

Notice that the data is the same as the one you can see on the Gazebo screen.

As you can observe, Gazebo is creating the odometry as the robot moves. We are going to see how Gazebo creates it by looking inside the plugin's source code.

The plugin file, diffdrive_plugin.cpp, is located in the erratic_gazebo_plugins package. You can open it with the following command:

$ rosed erratic_gazebo_plugins diffdrive_plugin.cpp

The file has a lot of code, but the important part for us now is the following function, publish_odometry():

void DiffDrivePlugin::publish_odometry()
{
  ros::Time current_time = ros::Time::now();
  std::string odom_frame = tf::resolve(tf_prefix_, "odom");
  std::string base_footprint_frame = tf::resolve(tf_prefix_, "base_footprint");

  // getting data for base_footprint to odom transform
  math::Pose pose = this->parent->GetState().GetPose();

  btQuaternion qt(pose.rot.x, pose.rot.y, pose.rot.z, pose.rot.w);
  btVector3 vt(pose.pos.x, pose.pos.y, pose.pos.z);

  tf::Transform base_footprint_to_odom(qt, vt);
  transform_broadcaster_->sendTransform(
      tf::StampedTransform(base_footprint_to_odom, current_time,
                           odom_frame, base_footprint_frame));

  // publish odom topic
  odom_.pose.pose.position.x = pose.pos.x;
  odom_.pose.pose.position.y = pose.pos.y;

  odom_.pose.pose.orientation.x = pose.rot.x;
  odom_.pose.pose.orientation.y = pose.rot.y;
  odom_.pose.pose.orientation.z = pose.rot.z;
  odom_.pose.pose.orientation.w = pose.rot.w;

  math::Vector3 linear = this->parent->GetWorldLinearVel();
  odom_.twist.twist.linear.x = linear.x;
  odom_.twist.twist.linear.y = linear.y;
  odom_.twist.twist.angular.z = this->parent->GetWorldAngularVel().z;

  odom_.header.stamp = current_time;
  odom_.header.frame_id = odom_frame;
  odom_.child_frame_id = base_footprint_frame;

  pub_.publish(odom_);
}

The publish_odometry() function is where the odometry is published. You can see how the fields of the structure are filled and the name of the topic for the odometry is set (in this case, it is odom). The pose is generated in the other part of the code that we will see in the following section.

Once you have learned how and where Gazebo creates the odometry, you will be ready to publish the odometry and tf for a real robot. The following code makes the robot describe circles continuously. The motion itself does not really matter; the important thing is to learn how to publish the correct data for our robot.

Creating our own odometry

Create a new file in chapter7_tutorials/src with the name odometry.cpp and put the following code in it:

#include <string>
#include <ros/ros.h>
#include <sensor_msgs/JointState.h>
#include <tf/transform_broadcaster.h>
#include <nav_msgs/Odometry.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "state_publisher");
  ros::NodeHandle n;
  ros::Publisher odom_pub = n.advertise<nav_msgs::Odometry>("odom", 10);

  // initial position
  double x = 0.0;
  double y = 0.0;
  double th = 0;

  // velocity
  double vx = 0.4;
  double vy = 0.0;
  double vth = 0.4;

  ros::Time current_time;
  ros::Time last_time;
  current_time = ros::Time::now();
  last_time = ros::Time::now();

  tf::TransformBroadcaster broadcaster;
  ros::Rate loop_rate(20);

  const double degree = M_PI/180;

  // message declarations
  geometry_msgs::TransformStamped odom_trans;
  odom_trans.header.frame_id = "odom";
  odom_trans.child_frame_id = "base_footprint";

  while (ros::ok()) {
    current_time = ros::Time::now();

    double dt = (current_time - last_time).toSec();
    double delta_x = (vx * cos(th) - vy * sin(th)) * dt;
    double delta_y = (vx * sin(th) + vy * cos(th)) * dt;
    double delta_th = vth * dt;

    x += delta_x;
    y += delta_y;
    th += delta_th;

    geometry_msgs::Quaternion odom_quat;
    odom_quat = tf::createQuaternionMsgFromRollPitchYaw(0, 0, th);

    // update transform
    odom_trans.header.stamp = current_time;
    odom_trans.transform.translation.x = x;
    odom_trans.transform.translation.y = y;
    odom_trans.transform.translation.z = 0.0;
    odom_trans.transform.rotation = tf::createQuaternionMsgFromYaw(th);

    // filling the odometry
    nav_msgs::Odometry odom;
    odom.header.stamp = current_time;
    odom.header.frame_id = "odom";
    odom.child_frame_id = "base_footprint";

    // position
    odom.pose.pose.position.x = x;
    odom.pose.pose.position.y = y;
    odom.pose.pose.position.z = 0.0;
    odom.pose.pose.orientation = odom_quat;

    // velocity
    odom.twist.twist.linear.x = vx;
    odom.twist.twist.linear.y = vy;
    odom.twist.twist.linear.z = 0.0;
    odom.twist.twist.angular.x = 0.0;
    odom.twist.twist.angular.y = 0.0;
    odom.twist.twist.angular.z = vth;

    last_time = current_time;

    // publishing the odometry and the new tf
    broadcaster.sendTransform(odom_trans);
    odom_pub.publish(odom);

    loop_rate.sleep();
  }
  return 0;
}

First, create the transform message and fill in the frame_id and child_frame_id values so that tf knows which frame moves with respect to which. In our case, base_footprint moves relative to the odom frame:

geometry_msgs::TransformStamped odom_trans;
odom_trans.header.frame_id = "odom";
odom_trans.child_frame_id = "base_footprint";

In this part, we generate the pose of the robot. With the linear velocity and the angular velocity, we can calculate the theoretical position of the robot after the elapsed time dt:

double dt = (current_time - last_time).toSec();
double delta_x = (vx * cos(th) - vy * sin(th)) * dt;
double delta_y = (vx * sin(th) + vy * cos(th)) * dt;
double delta_th = vth * dt;

x += delta_x;
y += delta_y;
th += delta_th;

geometry_msgs::Quaternion odom_quat;
odom_quat = tf::createQuaternionMsgFromRollPitchYaw(0, 0, th);

In this article, you will not find an explanation about the kinematics of the robot. You can find a lot of literature on the Internet about it; you should look out for "differential wheel kinematics".

On the transform, we fill in the translation and rotation computed above. The z translation is always zero because our robot moves on the plane; it can only move forward and backward and turn:

odom_trans.header.stamp = current_time;
odom_trans.transform.translation.x = x;
odom_trans.transform.translation.y = y;
odom_trans.transform.translation.z = 0.0;
odom_trans.transform.rotation = tf::createQuaternionMsgFromYaw(th);

With the odometry, we will do the same. Fill the frame_id and child_frame_id fields with odom and base_footprint.

As the odometry message has two parts, we will fill in the x, y, and orientation fields of the pose. In the twist structure, we will fill in the linear velocity x and the angular velocity z:

// position
odom.pose.pose.position.x = x;
odom.pose.pose.position.y = y;
odom.pose.pose.orientation = odom_quat;

// velocity
odom.twist.twist.linear.x = vx;
odom.twist.twist.angular.z = vth;

Once all the necessary fields are filled in, publish the data:

// publishing the odometry and the new tf
broadcaster.sendTransform(odom_trans);
odom_pub.publish(odom);

Remember to add the following line in the CMakeLists.txt file before you run rosmake chapter7_tutorials:

rosbuild_add_executable(odometry src/odometry.cpp)

Compile the package and launch the robot without using Gazebo, using only rviz to visualize the model and the movement of the robot. Use the following command to do this:

$ roslaunch chapter7_tutorials display_xacro.launch model:="`rospack find chapter7_tutorials`/urdf/robot1_base_04.xacro"

And run the odometry node with the following command:

$ rosrun chapter7_tutorials odometry

This is what you will get:

On the rviz screen, you can see the robot moving over a trail of red arrows. The robot moves over the grid because you are publishing a new tf transform for it, and the red arrows are the graphical representation of the odometry messages. You will see the robot moving in circles continuously, as we programmed in the code.


Creating a base controller

A base controller is an important element in the navigation stack because it is the only way to effectively control your robot. It communicates directly with the electronics of your robot.

ROS does not provide a standard base controller, so you must write a base controller for your mobile platform.

Your robot has to be controlled with the geometry_msgs/Twist message type. This message type is also used inside the Odometry message that we saw before.

So, your base controller must subscribe to a topic with the name cmd_vel and must generate the correct commands to move the platform with the correct linear and angular velocities.

We are now going to recall the structure of this message. Type the following command in a shell to see the structure:

$ rosmsg show geometry_msgs/Twist

The output of this command is as follows:

geometry_msgs/Vector3 linear
  float64 x
  float64 y
  float64 z
geometry_msgs/Vector3 angular
  float64 x
  float64 y
  float64 z

The vector with the name linear indicates the linear velocity for the axes x, y, and z. The vector with the name angular is for the angular velocity on the axes.

For our robot, we will only use the linear velocity x and the angular velocity z. This is because our robot is on a differential wheeled platform, and it has two motors to move the robot forward and backward and to turn.
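
To see what these commands look like from the sending side, here is a minimal sketch (not part of the chapter's packages; the node name and velocity values are made up) that publishes a constant forward velocity with a slight turn on cmd_vel. The teleoperation node does essentially the same thing, driven by keyboard input:

#include <ros/ros.h>
#include <geometry_msgs/Twist.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "cmd_vel_example");
  ros::NodeHandle n;
  ros::Publisher cmd_pub = n.advertise<geometry_msgs::Twist>("cmd_vel", 1);
  ros::Rate rate(10);

  while (n.ok()) {
    geometry_msgs::Twist cmd;
    cmd.linear.x  = 0.2;   // move forward at 0.2 m/s
    cmd.angular.z = 0.1;   // turn at 0.1 rad/s
    cmd_pub.publish(cmd);
    rate.sleep();
  }
  return 0;
}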

We are working with a simulated robot in Gazebo, and the base controller is implemented in the driver used to move/simulate the platform. This means that we do not have to create a base controller for this robot.

Anyway, in this article, you will see an example of how to implement a base controller for a physical robot. Before that, let's run our robot in Gazebo to see how the base controller works. Run the following commands in different shells:

$ roslaunch chapter7_tutorials gazebo_xacro.launch model:="`rospack find chapter7_tutorials`/urdf/robot1_base_04.xacro"
$ rosrun erratic_teleop erratic_keyboard_teleop

When all the nodes are launched and working, open rxgraph to see the relation between all the nodes:

$ rxgraph

You can see that Gazebo subscribes automatically to the cmd_vel topic that is generated by the teleoperation node.

Inside the Gazebo simulator, the plugin of our differential wheeled robot is running and is getting the data from the cmd_vel topic. Also, this plugin moves the robot in the virtual world and generates the odometry.

Using Gazebo to create the odometry

To gain some insight into how Gazebo does this, we are going to take a sneak peek inside the diffdrive_plugin.cpp file:

$ rosed erratic_gazebo_plugins diffdrive_plugin.cpp

The Load(...) function performs the subscription to the topic, and when a message arrives on the cmd_vel topic, the cmdVelCallback() function is executed to handle it:

void DiffDrivePlugin::Load(physics::ModelPtr _parent, sdf::ElementPtr _sdf)
{
  ...
  ros::SubscribeOptions so =
      ros::SubscribeOptions::create<geometry_msgs::Twist>(topicName, 1,
          boost::bind(&DiffDrivePlugin::cmdVelCallback, this, _1),
          ros::VoidPtr(), &queue_);
  ...
}

When a message arrives, the linear and angular velocities are stored in the internal variables to run some operations later:

void DiffDrivePlugin::cmdVelCallback(const geometry_msgs::Twist::ConstPtr& cmd_msg)
{
  ...
  x_ = cmd_msg->linear.x;
  rot_ = cmd_msg->angular.z;
  ...
}

The plugin estimates the velocity for each motor using the formulas from the kinematic model of the robot in the following manner:

void DiffDrivePlugin::GetPositionCmd()
{
  ...
  vr = x_;
  va = rot_;
  wheelSpeed[LEFT] = vr + va * wheelSeparation / 2.0;
  wheelSpeed[RIGHT] = vr - va * wheelSeparation / 2.0;
  ...
}
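
To put some numbers on it (made up here purely for illustration), a command with vr = 0.4 m/s and va = 0.4 rad/s on a platform with a wheel separation of 0.5 m gives wheelSpeed[LEFT] = 0.4 + 0.4 * 0.25 = 0.5 m/s and wheelSpeed[RIGHT] = 0.4 - 0.4 * 0.25 = 0.3 m/s, so each wheel receives a different speed and the platform follows an arc.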

And finally, it estimates the distance traversed by the robot using more formulas from the kinematic motion model of the robot. As you can see in the code, you must know the wheel diameter and the wheel separation of your robot:

// Update the controller
void DiffDrivePlugin::UpdateChild()
{
  ...
  wd = wheelDiameter;
  ws = wheelSeparation;

  // Distance travelled by front wheels
  d1 = stepTime * wd / 2 * joints[LEFT]->GetVelocity(0);
  d2 = stepTime * wd / 2 * joints[RIGHT]->GetVelocity(0);

  dr = (d1 + d2) / 2;
  da = (d1 - d2) / ws;

  // Compute odometric pose
  odomPose[0] += dr * cos(odomPose[2]);
  odomPose[1] += dr * sin(odomPose[2]);
  odomPose[2] += da;

  // Compute odometric instantaneous velocity
  odomVel[0] = dr / stepTime;
  odomVel[1] = 0.0;
  odomVel[2] = da / stepTime;
  ...
}

This is the way diffdrive_plugin controls our simulated robot in Gazebo.

Creating our base controller

Now we are going to do something similar, that is, prepare a code to be used with a real robot with two wheels and encoders.

Create a new file in chapter7_tutorials/src with the name base_controller.cpp and put in the following code:

#include <ros/ros.h>
#include <sensor_msgs/JointState.h>
#include <tf/transform_broadcaster.h>
#include <nav_msgs/Odometry.h>
#include <geometry_msgs/Twist.h>
#include <iostream>

using namespace std;

double width_robot = 0.1;
double vl = 0.0;
double vr = 0.0;
ros::Time last_time;
double right_enc = 0.0;
double left_enc = 0.0;
double right_enc_old = 0.0;
double left_enc_old = 0.0;
double distance_left = 0.0;
double distance_right = 0.0;
double ticks_per_meter = 100;
double x = 0.0;
double y = 0.0;
double th = 0.0;
geometry_msgs::Quaternion odom_quat;

void cmd_velCallback(const geometry_msgs::Twist &twist_aux)
{
  geometry_msgs::Twist twist = twist_aux;
  double vel_x = twist_aux.linear.x;
  double vel_th = twist_aux.angular.z;
  double right_vel = 0.0;
  double left_vel = 0.0;

  if(vel_x == 0){
    // turning in place
    right_vel = vel_th * width_robot / 2.0;
    left_vel = (-1) * right_vel;
  }else if(vel_th == 0){
    // moving forward or backward
    left_vel = right_vel = vel_x;
  }else{
    // moving along an arc
    left_vel = vel_x - vel_th * width_robot / 2.0;
    right_vel = vel_x + vel_th * width_robot / 2.0;
  }
  vl = left_vel;
  vr = right_vel;
}

int main(int argc, char** argv){
  ros::init(argc, argv, "base_controller");
  ros::NodeHandle n;
  ros::Subscriber cmd_vel_sub = n.subscribe("cmd_vel", 10, cmd_velCallback);
  ros::Rate loop_rate(10);

  while(ros::ok())
  {
    double dxy = 0.0;
    double dth = 0.0;
    ros::Time current_time = ros::Time::now();
    double dt;

    ros::spinOnce();
    dt = (current_time - last_time).toSec();
    last_time = current_time;

    // calculate odometry from the encoder readings
    if(right_enc == 0.0){
      distance_left = 0.0;
      distance_right = 0.0;
    }else{
      distance_left = (left_enc - left_enc_old) / ticks_per_meter;
      distance_right = (right_enc - right_enc_old) / ticks_per_meter;
    }
    left_enc_old = left_enc;
    right_enc_old = right_enc;

    dxy = (distance_left + distance_right) / 2.0;
    dth = (distance_right - distance_left) / width_robot;
    double velxy = dxy / dt;
    double velth = dth / dt;

    if(dxy != 0){
      x += dxy * cosf(dth);
      y += dxy * sinf(dth);
    }
    if(dth != 0){
      th += dth;
    }
    odom_quat = tf::createQuaternionMsgFromRollPitchYaw(0, 0, th);
    loop_rate.sleep();
  }
}

Notice that the equations are similar to those in diffdrive_plugin; this is because both robots are differential wheeled platforms.

Do not forget to insert the following in your CMakeLists.txt file to create the executable of this file:

rosbuild_add_executable(base_controller src/base_controller.cpp)

This code is only a generic example and must be extended with more code to make it work with a specific robot; the details depend on the motor controller, the encoders, and so on. We assume that you have the right background to add the necessary code in order to make the example work, along the lines of the sketch shown below.
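
For instance, one natural extension is to publish the pose computed in the loop as a nav_msgs/Odometry message together with its tf transform, reusing the pattern from odometry.cpp. The following is only a sketch of how that could look; the publisher and broadcaster would be declared before the loop, exactly as in odometry.cpp:

// before the loop:
//   ros::Publisher odom_pub = n.advertise<nav_msgs::Odometry>("odom", 10);
//   tf::TransformBroadcaster broadcaster;

// inside the while(ros::ok()) loop, after computing x, y, th, and odom_quat:
geometry_msgs::TransformStamped odom_trans;
odom_trans.header.stamp = current_time;
odom_trans.header.frame_id = "odom";
odom_trans.child_frame_id = "base_footprint";
odom_trans.transform.translation.x = x;
odom_trans.transform.translation.y = y;
odom_trans.transform.translation.z = 0.0;
odom_trans.transform.rotation = odom_quat;
broadcaster.sendTransform(odom_trans);

nav_msgs::Odometry odom;
odom.header.stamp = current_time;
odom.header.frame_id = "odom";
odom.child_frame_id = "base_footprint";
odom.pose.pose.position.x = x;
odom.pose.pose.position.y = y;
odom.pose.pose.orientation = odom_quat;
odom.twist.twist.linear.x = velxy;   // velocities computed from the encoders
odom.twist.twist.angular.z = velth;
odom_pub.publish(odom);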

Creating a map with ROS

Getting a map can sometimes be a complicated task if you do not have the correct tools. ROS has a tool that will help you build a map using the odometry and a laser sensor: the slam_gmapping node (http://www.ros.org/wiki/slam_gmapping). In this example, you will learn how to use the robot that we created in Gazebo, as we did in the previous chapters, to create a map, save it, and load it again.

We are going to use a .launch file to make it easy. Create a new file in chapter7_tutorials/launch with the name gazebo_mapping_robot.launch and put in the following code:

<?xml version="1.0"?>
<launch>
  <param name="/use_sim_time" value="true" />

  <!-- start up wg world -->
  <include file="$(find gazebo_worlds)/launch/wg_collada_world.launch"/>

  <arg name="model" />
  <param name="robot_description" command="$(find xacro)/xacro.py $(arg model)" />

  <node name="spawn_robot" pkg="gazebo" type="spawn_model"
        args="-urdf -param robot_description -z 0.1 -model robot_model"
        respawn="false" output="screen" />

  <node name="rviz" pkg="rviz" type="rviz"
        args="-d $(find chapter7_tutorials)/launch/mapping.vcg"/>

  <node name="slam_gmapping" pkg="gmapping" type="slam_gmapping">
    <remap from="scan" to="base_scan/scan"/>
  </node>
</launch>

With this .launch file, you launch the Gazebo simulator with the 3D model, the rviz program with the correct configuration file, and the slam_gmapping node to build a map in real time. Launch the file in one shell, and in another shell, run the teleoperation node to move the robot:

$ roslaunch chapter7_tutorials gazebo_mapping_robot.launch model:="`rospack find chapter7_tutorials`/urdf/robot1_base_04.xacro"
$ rosrun erratic_teleop erratic_keyboard_teleop

When you start to move the robot with the keyboard, you will see on the rviz screen the free and the unknown space, as well as the map with the occupied space; this is known as an Occupancy Grid Map (OGM). The slam_gmapping node updates the map as the robot moves, or more specifically, when (after some motion) it has a good estimate of the robot's location and of the map. It takes the laser scans and the odometry and builds the OGM for you.

Saving the map using map_server

Once you have a complete map or something acceptable, you can save it to use it later in the navigation stack. To save it, use the following command:

$ rosrun map_server map_saver -f map

This command will create two files, map.pgm and map.yaml. The first one is the map in the .pgm format (the portable gray map format). The other is the configuration file for the map. If you open it, you will see the following output:
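
The contents are similar to the following; the exact resolution and origin values depend on how your map was built, so take these as an illustrative sketch rather than the output you will get:

image: map.pgm
resolution: 0.050000
origin: [-50.000000, -50.000000, 0.000000]
negate: 0
occupied_thresh: 0.65
free_thresh: 0.196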

Now, open the .pgm image file with your favorite viewer, and you will see the map built before you:

Loading the map using map_server

When you want to use the map built with your robot, it is necessary to load it with the map_server package. The following command will load the map:

$ rosrun map_server map_server map.yaml

To make this easier, create another .launch file in chapter7_tutorials/launch with the name gazebo_map_robot.launch and put in the following code:

<?xml version="1.0"?>
<launch>
  <param name="/use_sim_time" value="true" />

  <!-- start up wg world -->
  <include file="$(find gazebo_worlds)/launch/wg_collada_world.launch"/>

  <arg name="model" />
  <param name="robot_description" command="$(find xacro)/xacro.py $(arg model)" />

  <node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher" ></node>

  <!-- start robot state publisher -->
  <node pkg="robot_state_publisher" type="state_publisher" name="robot_state_publisher" output="screen" >
    <param name="publish_frequency" type="double" value="50.0" />
  </node>

  <node name="spawn_robot" pkg="gazebo" type="spawn_model"
        args="-urdf -param robot_description -z 0.1 -model robot_model"
        respawn="false" output="screen" />

  <node name="map_server" pkg="map_server" type="map_server"
        args="$(find chapter7_tutorials)/maps/map.yaml" />

  <node name="rviz" pkg="rviz" type="rviz"
        args="-d $(find chapter7_tutorials)/launch/mapping.vcg" />
</launch>

And now, launch the file using the following command and remember to put the model of the robot that will be used:

$ roslaunch chapter7_tutorials gazebo_map_robot.launch model:="`rospack find chapter7_tutorials`/urdf/robot1_base_04.xacro"

Then you will see rviz with the robot and the map. To know the localization of the robot, the navigation stack will use the map published by the map server and the laser readings, running a scan matching algorithm that estimates the robot's location with a particle filter, implemented in the amcl (adaptive Monte Carlo localization) node.

Summary

In this article, you worked on the steps required to configure your robot in order to use it with the navigation stack. Now you know that the robot must have a planar laser, must be a differential wheeled robot, and should satisfy some requirements for the base controller and the geometry.

Keep in mind that we are working with Gazebo to demonstrate the examples and explain how the navigation stack works with different configurations. It is more complex to explain all of this directly on a real robotic platform because we do not know whether you have one or have access to one. In any case, depending on the platform, the instructions may vary and the hardware may fail, so it is safer and more useful to run these algorithms in simulation; later, we can test them on a real robot, as long as it satisfies the requirements described thus far.

About the Authors


Aaron Martinez

Aaron Martinez is a computer engineer, entrepreneur, and expert in digital fabrication. He did his Master's thesis in 2010 at the IUCTC (Instituto Universitario de Ciencias y Tecnologias Ciberneticas) in the University of Las Palmas de Gran Canaria. He prepared his Master's thesis in the field of telepresence using immersive devices and robotic platforms. After completing his academic career, he attended an internship program at The Institute for Robotics in the Johannes Kepler University in Linz, Austria. During his internship program, he worked as part of a development team of a mobile platform using ROS and the navigation stack. After that, he was involved in some projects related to robotics, one of them is the AVORA project in the University of Las Palmas de Gran Canaria. In this project, he worked on the creation of an AUV (Autonomous Underwater Vehicle) to participate in the Student Autonomous Underwater Challenge-Europe (SAUC-E) in Italy. In 2012, he was responsible for manufacturing this project; in 2013, he helped to adapt the navigation stack and other algorithms from ROS to the robotic platform.

Recently, Aaron created his own company called Biomecan. This company works with projects related to robotics, manufacturing of prototypes, and engineering tissue. The company manufactures devices for other companies and research and development institutes. For the past two years, he has been working on engineering tissue projects, creating a new device to help researchers of cell culture.

Aaron has experience in many fields, such as programming, robotics, mechatronics, and digital fabrication; with many devices, such as Arduino, BeagleBone, servers, LIDARs, and servomotors; and with robotic platforms such as Wifibot, Nao Aldebaran, and Pioneer P3AT.

Enrique Fernández

Enrique Fernández is a computer engineer and roboticist. He did his Master's Thesis in 2009 at the University Institute of Intelligent Systems and Computational Engineering in the University of Las Palmas de Gran Canaria. There he has been working on his Ph.D for the last four years; he is expected to become a Doctor in Computer Science by September 2013. His Ph.D addresses the problem of Path Planning for Autonomous Underwater Gliders, but he has also worked on other robotic projects. He participated in the Student Autonomous Underwater Challenge-Europe (SAUC-E) in 2012, and collaborated for the 2013 edition. In 2012, he was awarded a prize for the development of an underwater pan-tilt vision system.

Now, Enrique is working for Pal-Robotics as a SLAM engineer. He completed his internship in 2012 at the Center of Underwater Robotics Research in the University of Girona, where he developed SLAM and INS modules for the Autonomous Underwater Vehicles of the research group using ROS. He joined Pal-Robotics in June 2013, where he is working with REEM robots using the ROS software intensively and developing new navigation algorithms for wheeled and biped humanoid robots, such as the REEM-H3 and REEM-C.

During his Ph.D, Enrique has published several conference papers and publications, two of which were sent to the International Conference on Robotics and Automation (ICRA) in 2011. He is the co-author of some chapters of this book, and his Master's Thesis was about the FastSLAM algorithm for indoor robots using a SICK laser scanner and the odometry of a Pioneer differential platform. He also has experience with electronics and embedded systems, such as PC104 and Arduino. His background covers SLAM, Computer Vision, Path Planning, Optimization, and Robotics and Artificial Intelligence in general.
