You're reading from Artificial Intelligence for Robotics - Second Edition

Published in Mar 2024 by Packt | ISBN-13: 9781805129592 | 2nd Edition

Author: Francis X. Govers III

Francis X. Govers III is an Associate Technical Fellow for Autonomy at Bell Textron, and chairman of the Textron Autonomy Council. He is the designer of over 30 unmanned vehicles and robots for land, sea, air, and space, including RAMSEE, the autonomous security guard robot. Francis helped lead the design of the International Space Station, the F-35 JSF Fighter, the US Army Future Combat Systems, and telemetry systems for NASCAR and IndyCar. He is an engineer, pilot, author, musician, artist, and maker. He received five outstanding achievement awards from NASA and recognition from Scientific American for World Changing Ideas. He has a Master of Science degree from Brandeis University and is a veteran of the US Air Force.
Conceptualizing the Practical Robot Design Process

This chapter bridges the preceding chapters on general theory, introduction, and setup and the chapters that follow, where we will apply problem-solving methods that use artificial intelligence (AI) techniques to robotics. The first step is to state our problem clearly from the perspective of the robot's user, which is different from our view as the robot's designer and builder. Then, we need to decide how to approach each of the hardware and software challenges that we and the robot will take on. By the end of this chapter, you will understand how to design a robot systematically.

This chapter will cover the following topics:

  • A systems engineering-based approach to robotics
  • Understanding our task – cleaning up the playroom
  • How to state the problem with the help of use cases
  • How to approach solving problems with storyboards
  • Understanding the scope of our use case

A systems engineering-based approach to robotics

When you set out to create a complex robot with AI-based software, you can’t just jump in and start slinging code and throwing parts together without some sort of game plan for how the robot fits together and how all the parts communicate with one another. We will discuss a systematic approach to robot design based on systems engineering principles. We will learn about use cases and use storyboards as techniques to understand what we are building and what parts – hardware and software – are needed.

Understanding our task – cleaning up the playroom

We have already talked a bit about our main task for Albert, our example robot for this book, which is to clean up the playroom in my house after my grandchildren come to visit. We need to provide a more formal definition of our problem, and then turn that into a list of tasks for the robot to perform along with a plan of action on how we might accomplish those tasks.

Why are we doing this? Well, consider this quote by Steve Maraboli:

“If you don’t know where you are going, how do you know when you get there?”

Figure 3.1 – It’s important to know what your robot does

The internet and various robot websites are littered with dozens of robots that share one fatal flaw: the robot and its software were designed first, and only then did their creators go looking for a job for them. In the robot business, this is called the ready, fire, aim problem. The task, the customer...

Use cases

Let’s begin our task with a statement of the problem.

Our robot’s task – part 1

About once or twice a month, my five delightful, intelligent, and playful grandchildren come to visit me and my wife. Like most grandparents, we keep a box full of toys in our upstairs playroom for them to play with during their visits. The first thing they do upon arrival – at least the older grandkids – is take every single toy out of the toy box and start playing. This results in the scene shown in the following photograph – toys randomly and uniformly distributed throughout the playroom:

Figure 3.2 – The playroom in the aftermath of the grandchildren

Honestly, you could not get a better random distribution. They are really good at this. Since, as grandparents, our desire is to maximize the amount of time that our grandchildren have fun at our house and we want them to associate Granddad and Grandmother’...

Using storyboards

In this section, we are going to decompose our use cases further in order to understand the various tasks our robot must undertake on our behalf in the course of its two missions. I’ve created some storyboards – quick little drawings – to illustrate each point.

The concept of storyboards is borrowed from the movie industry, where a comic-strip-like narration is used to translate words on a page in the script into a series of pictures or cartoons that convey additional information not found in the script, such as framing, context, movement, props, sets, and camera moves. The practice of storyboarding goes all the way back to silent movies and is still used today.

We can use storyboards in robotics design for the same reasons: to convey additional information not found in the words of the use cases. Storyboards should be simple, quick, and just convey enough information to help you understand what is going on.

Let’s get started....

Understanding the scope of our use case

Desirements (a word made up by combining desire and requirements) are functions that would be nice to have but not strictly necessary. For example, if we decided to add flashing lights to the robot because it looks cool, that would be a desirement. You may want to have it, but it does not contribute to the mission of the robot or the task it needs to perform.

Another example would be if we added that the robot must operate in the dark. There is no reason for this in the current context, and nothing we’ve stated in the use cases said that the robot would operate in the dark – just in an indoor room. This would be an example of scope creep, or extending the operation conditions without a solid reason why. It’s important to work very hard to keep requirements and use cases to a minimum, and even to throw out use cases that are unnecessary or redundant. I might have added a requirement for sorting the toys by color, but sorting...

Identifying our hardware needs

Based on our storyboards, I extracted or derived the following hardware tasks:

  • Drive the robot base
  • Carry the robot arm
  • Lift toys
  • Put toys in the toy box (arm length)
  • Sensors:
    • Arm location
    • Hand status (open/close)
    • Robot vision (camera) for obstacle avoidance
  • Provide power for all systems:
    • 5V for Nvidia Nano
    • 5V for Arduino
    • Arm power – 7.2V
    • Motor power – 7.2V
  • Onboard computers:
    • A computer that can receive commands remotely over Wi-Fi (the Nano):
      • Runs ROS 2
      • Runs Python 3
    • A computer that can interface with a camera
    • A computer that can control motors (Arduino)
    • An interface that can drive servo motors for the robot arm (servo controller)
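The power items in the list above can also be captured as a simple checklist in code, which is handy for tracking the build and spotting how many supply rails we actually need. The structure below is an illustrative sketch of my own – the subsystem names and voltages come from the list, but the `rails_needed` helper is hypothetical, not something from the book:

```python
# Illustrative sketch: the hardware power budget from the list above,
# expressed as data we can validate in code.
POWER_BUDGET = {
    "nvidia_nano": 5.0,   # main computer (volts)
    "arduino": 5.0,       # motor-control interface
    "robot_arm": 7.2,     # servo power for the arm
    "drive_motors": 7.2,  # base drive motors
}

def rails_needed(budget):
    """Return the distinct supply voltages the robot must provide."""
    return sorted(set(budget.values()))

print(rails_needed(POWER_BUDGET))  # two rails: 5 V and 7.2 V
```

A list like this makes it obvious that one 5 V rail and one 7.2 V rail cover all four subsystems.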

Now, let’s take a look at the software requirements.

Breaking down our software needs

This list of software tasks was composed by reviewing the state machine diagram, the use cases, and the storyboards. I’ve highlighted the steps that will require AI and will be covered in detail in the coming chapters:

  1. Power on self-test (POST):
    1. Start up robot programs.
    2. Check that the Nano can talk to the Arduino and back.
    3. Try to establish communications with the control station.
    4. Report POST success or failure as appropriate and record the result in the log.
  2. Receive commands via Wi-Fi for teleoperation:
    • Drive motor base (right/left/forward/back)
    • Move hand up/down/right/left/in/out/twist
    • Record video or record pictures as image files
  3. Send telemetry via Wi-Fi.
  4. Monitor progress.
  5. Send video.
  6. Navigate safely:
    • Learn to avoid obstacles
    • Learn to not fall down stairs
  7. Find toys:
    • Detect objects
    • Learn to classify objects (Toy/Not toy)
    • Determine which toy is closest
  8. Pick up toys:
    1. Move to the position where the arm can reach the toy
    2. Devise a strategy...
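Since this list is derived from a state machine diagram, the top-level modes can be sketched as a minimal state machine in Python. The state names and transitions below are my paraphrase of the task list above, not code from the book:

```python
# Minimal state-machine sketch of the robot's top-level modes,
# derived from the software task list (names are illustrative).
class RobotStateMachine:
    # Allowed transitions between top-level states.
    TRANSITIONS = {
        "POWER_ON_SELF_TEST": {"TELEOPERATION", "FAULT"},
        "TELEOPERATION": {"FIND_TOYS", "POWER_OFF"},
        "FIND_TOYS": {"PICK_UP_TOY", "TELEOPERATION"},
        "PICK_UP_TOY": {"FIND_TOYS", "TELEOPERATION"},
        "FAULT": {"POWER_OFF"},
    }

    def __init__(self):
        # The robot always starts in its self-test state.
        self.state = "POWER_ON_SELF_TEST"

    def transition(self, new_state):
        # Reject transitions the diagram does not allow.
        if new_state not in self.TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

sm = RobotStateMachine()
sm.transition("TELEOPERATION")
sm.transition("FIND_TOYS")
print(sm.state)  # FIND_TOYS
```

Encoding the transitions explicitly means an out-of-sequence command (say, trying to pick up a toy before POST has passed) fails loudly instead of silently misbehaving.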

Writing a specification

Our next task is to write specifications for our various components. I’ll go through an example here that we must do as part of our toy-grasping robot project: we need to select a camera. Just any old camera will not do – we need one that meets our needs. But what are those needs? We need to write a camera specification so that when we are looking at cameras to buy, we can tell which one will do the job.

We’ve created our storyboard and our use cases, so we have the information we need to figure out what our camera needs to do. We can reverse engineer this process somewhat: let’s discuss what things make one camera different from another. First of all is the interface: this camera goes on board the robot, so it has to interface with the robot’s computer, which has USB, Ethernet, and a special camera bus. What other things about cameras do we care about? We certainly care about cost. We don’t want (or need) to use...
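A specification like this can even be made executable, so candidate cameras get checked mechanically rather than by eye. The sketch below assumes a few plausible fields (interface, resolution, cost); the field names and threshold values are placeholders of my own, not the book's actual numbers:

```python
from dataclasses import dataclass

# Illustrative sketch: a camera specification as data plus a pass/fail
# check. Threshold values below are placeholders, not the book's spec.
@dataclass
class CameraSpec:
    interfaces: set          # interfaces the robot's computer supports
    min_width_px: int        # minimum horizontal resolution
    max_cost_usd: float      # budget ceiling

@dataclass
class Camera:
    name: str
    interface: str
    width_px: int
    cost_usd: float

def meets_spec(cam, spec):
    """True if the candidate camera satisfies every requirement."""
    return (cam.interface in spec.interfaces
            and cam.width_px >= spec.min_width_px
            and cam.cost_usd <= spec.max_cost_usd)

spec = CameraSpec(interfaces={"USB", "CSI"}, min_width_px=640, max_cost_usd=50.0)
candidate = Camera("GenericCam", "USB", 1280, 29.99)
print(meets_spec(candidate, spec))  # True
```

Writing the spec this way forces every requirement to be concrete and testable, which is the whole point of the exercise.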

Summary

This chapter outlined a suggested process, called systems engineering, for building the to-do list for your robot project. Our first step was to create use cases, or descriptions of how the robot should behave from a user’s perspective. Then, we added detail behind the use cases by creating storyboards, stepping through each use case. Our example followed the robot finding and recognizing toys, then picking them up and putting them in the toy box. We extracted our hardware and software needs, creating a to-do list of what the robot will be able to do. Finally, we wrote a specification for one of our critical sensors: the camera.

In the next chapter, we will dive into our first robot task – teaching the robot to recognize toys using computer vision and neural networks.

Questions

  1. Describe some of the differences between a storyboard for a movie or cartoon and a storyboard for a software program.
  2. What are the five W questions? Can you think of any more questions that would be relevant to examine a use case?
  3. Complete this sentence: A use case shows what the robot does but not _____.
  4. Take storyboard 9 in Figure 3.16, where the robot is driving to the toy box, and break it down into more sequenced steps in your own storyboard. Think about all that must happen between frames 9 and 10.
  5. Complete the reply form of the knock-knock joke, where the robot answers the user telling the joke. What do you think is the last step?
  6. Look at the teleoperate operations. Would you add any more, or does this look like a good list?
  7. Write a specification for a sensor that uses distance measurement to prevent the robot from driving downstairs.
  8. What is the distance at which a camera with 320 x 200 pixels and a 30-degree field of view can see...

Further reading

For more information on the topics in this chapter, you can refer to the following resources:

  • A Practical Guide to SysML: The Systems Modeling Language, by Sanford Friedenthal, Alan Moore, and Rick Steiner, published by Morgan Kaufmann; this is the standard introduction to Model-Based Systems Engineering (MBSE)
  • The Agile Developer’s Handbook by Paul Flewelling, published by Packt