
You're reading from Learn Robotics Programming - Second Edition

Product type: Book
Published in: Feb 2021
Publisher: Packt
ISBN-13: 9781839218804
Edition: 2nd Edition
Author: Danny Staple

Danny Staple builds robots and gadgets as a hobbyist, makes videos about his work with robots, and attends community events such as PiWars and Arduino Day. He has been a professional Python programmer, later moving into DevOps, since 2009, and a software engineer since 2000. He has worked with embedded systems, including embedded Linux systems, throughout the majority of his career. He has been a mentor at a local CoderDojo, where he taught how to code with Python. He has run Lego Robotics clubs with Mindstorms. He has also developed Bounce!, a visual programming language targeted at teaching code using the NodeMCU IoT platform. The robots he has built with his children include TankBot, SkittleBot (now the Pi Wars robot), ArmBot, and SpiderBot.

Chapter 14: Line-Following with a Camera in Python

In the last chapter, we saw how to use a camera to follow and track objects. In this chapter, we will be extending the camera code to create line-sensing behavior.

We will look at where robots use line following and how it is useful. We will also learn about some of the different approaches taken to following paths in different robots, along with their trade-offs. You will see how to build a simple line-following track.

We will learn about some different algorithms to use and then choose a simple one. We will make a data flow diagram to see how it works, collect sample images to test it with, and then tune its performance based on the sample images. Along the way, we'll see more ways to approach computer vision and extract useful data from it.

We will enhance our PID code, build our line detection algorithm into robot driving behavior, and see the robot running with this. The chapter closes with ideas on how you can...

Technical requirements

For this chapter, you will need the following:

  • The robot and code from Chapter 13, Robot Vision – Using a Pi Camera and OpenCV
  • Some white or black insulating tape
  • Some A2 paper or boards – the opposite color to the insulating tape
  • A pair of scissors
  • Good lighting

The code for this section can be found at https://github.com/PacktPublishing/Learn-Robotics-Programming-Second-Edition/tree/master/chapter14.

Check out the following video to see the Code in Action: https://bit.ly/3slLzbQ

Introduction to line following

Before we start building code, let's find out about line-following robot behaviors, where and how systems use them, and the different techniques for doing so.

What is line following?

Some robots are required to stay on specific paths within their tasks. It is simpler for a robot to navigate a line than to plan and map whole rooms or buildings.

In simple terms, line following is the ability to follow a marked path autonomously. The markers can be visual, such as blue tape or a white line on a black road. As the robot drives along the line, it continually looks for where the line ahead is and corrects its course to follow it.

In robot competitions, racing along lines is a common challenge, with speed the deciding factor after accuracy.
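The continual look-and-correct cycle described above can be sketched as a single control step. This is an illustrative sketch only; `get_line_position` and `set_motor_speeds` are hypothetical stand-ins, not functions from this book's code:

```python
# Hypothetical sketch of one sense/steer step in a line-following loop.
# get_line_position and set_motor_speeds are placeholder callables, not
# part of the book's actual robot code.

def follow_line_step(get_line_position, set_motor_speeds,
                     base_speed=0.6, gain=0.5):
    """Read where the line is (-1.0 far left .. +1.0 far right) and
    steer toward it by varying the wheel speeds."""
    error = get_line_position()
    # Positive error: line is to the right, so speed up the left wheel
    # and slow the right wheel to turn toward the line.
    left = base_speed + gain * error
    right = base_speed - gain * error
    set_motor_speeds(left, right)
    return error
```

Calling this in a loop, with a real sensor and motor driver behind the two callables, gives the basic behavior; the PID version later in the chapter refines how the error becomes a steering correction.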

Usage in industry

By far the most common usage of line-following behavior is in industry. Robots known as automated guided vehicles (AGVs) need to follow set paths for many reasons. These...

Making a line-follower test track

Since you will be making your robot follow a line, we need to start with a section of line to follow. The track will be used at the beginning to test our line detection algorithm and can then be extended to more exciting tracks when we turn on the motors and start driving along the line. What I will show you in this section is easy to make and extendable. It allows you to experiment with different line shapes and curves and see how the robot responds.

You can even experiment with different color and contrast options.

Getting the test track materials in place

The following photo shows the main materials required:

Figure 14.2 – Materials for making a test track

The photo in Figure 14.2 shows a roll of black electrical tape on a large sheet of white paper. For this section, you'll need the following:

  • Some A2 plain white paper or board.
  • Some black electrical insulation tape or painter's...

Line-following computer vision pipeline

As we did with the previous computer vision tasks, we will visualize this as a pipeline. Before we do, though, let's look at the many methods for tracking a line with computer vision.

Camera line-tracking algorithms

It is in our interest to pick one of the simplest, but as always, there is a trade-off: other methods will cope with trickier situations or anticipate curves better than ours.

Here is a small selection of methods we could use:

  • Using edge detection: An edge detection algorithm, such as the Canny edge detector, can be run across the image, turning any transitions it finds into edges. OpenCV has a built-in edge detection system if we want to use this. The system can detect dark-to-light and light-to-dark edges and is more tolerant of less sharp edges.
  • Finding differences along lines: This is like a cheeky edge detection, but only on a particular row. By finding the difference between each pixel along a row in the image...
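The second method, differences along a row, is the kind of simple approach this chapter favors. Here is a rough, dependency-free sketch of the idea; the book's real code works on camera frames (for example, NumPy arrays) rather than plain lists, and the function name is hypothetical:

```python
# Minimal sketch of the "differences along a row" idea: scan one row of
# grayscale pixel values and locate the biggest light-to-dark and
# dark-to-light transitions, which bracket a dark line.

def find_line_in_row(row):
    """Return (left_edge, right_edge) column indexes of the strongest
    falling and rising brightness transitions, or (None, None) if there
    is no clear dark line on a light background."""
    diffs = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    falling = min(range(len(diffs)), key=lambda i: diffs[i])  # light -> dark
    rising = max(range(len(diffs)), key=lambda i: diffs[i])   # dark -> light
    if diffs[falling] >= 0 or diffs[rising] <= 0:
        return None, None
    return falling, rising

# Example: white paper (value 200) with a black tape line (value 20).
row = [200] * 30 + [20] * 10 + [200] * 30
left, right = find_line_in_row(row)
center = (left + right) / 2  # midpoint of the line in this row
```

With black tape on white paper, the two strongest transitions bracket the tape, and their midpoint gives the line's position in that row.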

Trying computer vision with test images

In this section, we will look at how and why to use test images. We will write our first chunk of code for this behavior and try it on test images from our robot's camera. These tests will prepare us for using the code to drive the robot.

Why use test images?

So far, our computer vision work has been written directly into robot behaviors. That is its end goal, but sometimes you want to try the visual processing code in isolation.

Perhaps you want to get it working or work out bugs in it, or you may want to see whether you can make the code faster and time it. To do this, it makes sense to run that particular code away from the robot control systems.

It also makes sense to use test images. Instead of running the camera and needing the right lighting conditions, you can run with test images you've already captured and compare the results against what you expected from them.

For performance testing, trying the same image...
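A tiny timing harness illustrates the performance-testing idea; `process` stands in for whichever vision step you want to measure, and the names here are hypothetical, not from the book's code:

```python
import time

def time_processing(process, frame, runs=100):
    """Call process(frame) several times and return the mean seconds
    per call. Using the same test image for every run makes timings
    comparable between versions of the code."""
    start = time.perf_counter()
    for _ in range(runs):
        process(frame)
    return (time.perf_counter() - start) / runs
```

Running this against a saved test image before and after a change tells you whether the change actually made the vision code faster.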

Line following with the PID algorithm

In this section, we will combine the visual processing seen previously with the PID control loops and camera streaming seen in Chapter 13, Robot Vision – Using a Pi Camera and OpenCV. Please start from the code in that chapter.

The files you will need are as follows:

  • pid_controller.py
  • robot.py
  • servos.py
  • camera_stream.py
  • image_app_core.py
  • leds_led_shim.py
  • encoder_counter.py
  • The templates folder
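Of these, pid_controller.py provides the PID control we will reuse for steering. As a reminder of the idea only, here is a minimal, hypothetical sketch; it is not the interface of the book's pid_controller.py:

```python
class SimplePID:
    """Minimal PID controller sketch for illustration only; the book's
    pid_controller.py has its own interface."""

    def __init__(self, kp, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.last_error = None

    def update(self, error, dt=1.0):
        # Accumulate error for the integral term.
        self.integral += error * dt
        # Rate of change of error for the derivative term.
        derivative = 0.0 if self.last_error is None else (error - self.last_error) / dt
        self.last_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The proportional term reacts to the current error, the integral term removes steady offset, and the derivative term damps overshoot; tuning their gains for line following is covered later in the chapter.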

We will use the same template for displaying this, but we are going to add a quick and cheeky way of rendering the diff graphs in OpenCV onto our output frame. Matplotlib would be too slow for this.
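The idea of drawing a graph straight into the output frame can be sketched without OpenCV at all. Here a plain 2D list of pixel values stands in for the NumPy image region you would paint in the real code; this is an illustrative sketch, not the book's implementation:

```python
def draw_graph(values, width, height):
    """Render a list of values in the range 0.0..1.0 as a simple column
    graph into a 2D list of 0/255 pixels -- a stand-in for painting onto
    a region of an OpenCV frame."""
    frame = [[0] * width for _ in range(height)]
    for x, v in enumerate(values[:width]):
        bar = int(max(0.0, min(1.0, v)) * (height - 1))
        # Draw a white column from the bottom of the graph up to the bar height.
        for y in range(height - 1 - bar, height):
            frame[y][x] = 255
    return frame
```

In the real behavior, you would write the columns into a slice of the camera frame (or draw them with cv2.line) before the frame is encoded for the stream, which is far cheaper than invoking Matplotlib per frame.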

Creating the behavior flow diagram

Before we build a new behavior, creating a data flow diagram will help us get a picture of what happens to the data after we've processed it.

The system will look familiar, as it is very similar to those we made in Chapter 13, Robot Vision – Using a Pi...

Finding a line again

An important thing to consider is what the robot should do if it has lost the line. Coming back to our examples of an industrial setting, this could be a safety measure.

Our current robot stops. That requires you to put it back on the line. However, when you do so, the robot immediately starts moving again. This behavior is fine for our little robot, but it could be a real hazard with a large robot.

Another behavior you could consider is spinning until the robot finds the line again. Losing the line can happen because the robot under- or oversteered off it and couldn't recover, or because it has driven past the end of the line. This behavior is perhaps suitable for small robot competitions.
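The two strategies can be captured in a small, hypothetical decision helper; the names and speeds here are illustrative, not from the book's code:

```python
# Hypothetical sketch of lost-line handling. The "lost" branch encodes the
# design decision discussed above: stop (safe for large robots) or spin to
# search (acceptable for small ones).

def lost_line_action(line_found, strategy="stop"):
    """Return a (left, right) motor command when the line is lost, or
    None when the line is visible and normal following should continue."""
    if line_found:
        return None
    if strategy == "spin":
        return (0.4, -0.4)  # rotate in place to search for the line
    return (0.0, 0.0)       # default: stop and wait to be put back on the line
```

A large industrial robot would hard-code the stop branch and likely require an operator reset, while a competition robot might time-limit the spin so it eventually gives up.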

We need to consider things like this carefully, along with where the robot would be used. Note that competition-type or industrial robots will have either multiple sensors at different angles or a wider-angled sensor...

Summary

In this chapter, you saw how to use the camera to detect a line and how to plot data showing what it found. You then saw how to take this data and put it into driving behavior so that the robot follows the line. You added to your OpenCV knowledge, and I showed you a sneaky way to put graphs into frames rendered on the camera stream output. You saw how to tune the PID to make the line following more accurate and how to ensure the robot stops predictably when it has lost the line.

In the next chapter, we will see how to communicate with our robot via a voice agent, Mycroft. You will add a microphone and speakers to a Raspberry Pi, then add speech recognition software. This will let us speak commands to a Raspberry Pi to send to the robot, and Mycroft will respond to let us know what it has done.

Exercises

Now that we've got this to work, there are ways we could enhance the system and make it more interesting:

  • Could you use cv2.putText to draw values such as the PID data onto the frames in the make_display method?
  • Consider writing the PID and error data versus time to a file, then loading it into another Python file, using Matplotlib to show what happened. This change might make the under/oversteer clearer in retrospect.
  • You could modify the motor handling code to go faster when the line is closer to the middle and slow down when it is further.
  • A significant enhancement would be to check two rows and find the angle between them. You then know how far the line is from the middle, but you also know which way the line is headed and could use that to guide your steering further.

These exercises should give you some interesting ways to play and experiment with the things you've built and learned in this chapter.
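For the last exercise, the angle between the line's positions in two rows can be estimated with basic trigonometry. Here is a sketch; the function and parameter names are hypothetical:

```python
import math

def line_heading(x_near, x_far, row_gap):
    """Estimate the line's heading from its center x position (in pixels)
    in a near row and a farther-ahead row, separated vertically by
    row_gap pixels. 0 degrees means straight ahead; positive means the
    line bends to the right."""
    return math.degrees(math.atan2(x_far - x_near, row_gap))
```

Feeding this angle into the steering, alongside the positional error, lets the robot anticipate curves instead of only reacting to them.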

Further reading

The following should help you look further into line following:
