Applied Deep Learning and Computer Vision for Self-Driving Cars

Product type: Book
Published in: Aug 2020
Publisher: Packt
ISBN-13: 9781838646301
Pages: 332
Edition: 1st
Authors (2): Sumit Ranjan, Dr. S. Senthamilarasu

Table of Contents (18 chapters)

Preface
1. Section 1: Deep Learning Foundation and SDC Basics
2. The Foundation of Self-Driving Cars
3. Dive Deep into Deep Neural Networks
4. Implementing a Deep Learning Model Using Keras
5. Section 2: Deep Learning and Computer Vision Techniques for SDC
6. Computer Vision for Self-Driving Cars
7. Finding Road Markings Using OpenCV
8. Improving the Image Classifier with CNN
9. Road Sign Detection Using Deep Learning
10. Section 3: Semantic Segmentation for Self-Driving Cars
11. The Principles and Foundations of Semantic Segmentation
12. Implementing Semantic Segmentation
13. Section 4: Advanced Implementations
14. Behavioral Cloning Using Deep Learning
15. Vehicle Detection Using OpenCV and Deep Learning
16. Next Steps
17. Other Books You May Enjoy
Next Steps

We have come to the end of this book. You are now well versed in camera sensors, one of the most important sensors in an autonomous vehicle. The great news is that the era of autonomous vehicles has arrived, even though the technology is not yet widely accepted by the general public. Major automotive companies are spending millions on the research and development of autonomous vehicles: they are actively exploring autonomous vehicle systems and road-testing prototypes. Moreover, many driver-assistance features, such as automatic emergency braking, adaptive cruise control, lane-departure warning, and automatic parking, are already in place.

The primary goal of an autonomous vehicle is to reduce the number of road traffic accidents. Recently, companies have also been experimenting with SDCs for delivering food, as well as taxi...

SDC sensors

Autonomous cars rely on many sensors. The camera is one of them; others include LIDAR, RADAR, ultrasonic sensors, and odometers.

The following image shows the various sensors that are used in an SDC:

Fig 12.7: SDC sensors

Knowledge of perception alone is not enough for an SDC. We must also learn about sensor fusion, localization, path planning, and control.

Sensor fusion is one of the crucial steps when it comes to creating autonomous vehicles. Generally, an autonomous vehicle uses an extensive number of sensors that help it recognize its environment and locate itself. 

We will briefly discuss the sensors used in autonomous vehicles in the following sections.

Camera

We have already learned about cameras in this book; they serve as the car's vision. With the help of artificial intelligence (AI), cameras help the car understand its environment. With cameras in place, the car can classify roads, pedestrians, traffic signs, and so on.
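As a toy illustration (this is not code from the book), "understanding the environment" starts with processing raw pixel arrays. The sketch below finds the edges of a synthetic lane marking with simple intensity gradients; a real pipeline would use OpenCV (for example, cv2.Canny) or a CNN instead:

```python
import numpy as np

def simple_edge_map(gray, threshold=50):
    """Toy edge detector based on horizontal/vertical intensity gradients.
    A real SDC pipeline would use cv2.Canny or a learned model instead."""
    gx = np.abs(np.diff(gray.astype(float), axis=1))  # horizontal gradient
    gy = np.abs(np.diff(gray.astype(float), axis=0))  # vertical gradient
    # Trim both gradient maps to a common shape and threshold them
    return (gx[:-1, :] + gy[:, :-1]) > threshold

# Synthetic "road image": dark road with one bright vertical lane line
img = np.zeros((100, 100), dtype=np.uint8)
img[:, 48:52] = 255                 # lane marking in columns 48-51
edges = simple_edge_map(img)
print(edges.any())                  # edges appear at the lane boundaries
```

The edges show up exactly at the two boundaries of the lane line (columns 47 and 51), which is the kind of low-level feature a lane-finding pipeline builds on.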

RADAR

Radio Detection and Ranging (RADAR) emits radio waves to detect nearby objects. As we discussed in Chapter 1, The Foundations of Self-Driving Cars, RADAR has been used in the field of autonomous vehicles for many years. RADAR helps cars avoid collisions by detecting vehicles in the car's blind spots, and it performs particularly well when detecting moving objects. RADAR measures distance from the time the emitted waves take to reflect back, and it uses the Doppler effect to measure the relative speed of objects. The Doppler effect is the change in the frequency of a wave when its source moves closer to or further away from the observer. You can read more about the Doppler effect at https://en.wikipedia.org/wiki/Doppler_effect.

 RADAR cannot categorize an object, but it is good at detecting its speed and position.
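As a quick numerical sketch (not code from the book), the two-way Doppler relation v = f_d · c / (2 · f_0) recovers a target's relative speed from the measured frequency shift. The 77 GHz carrier below is an illustrative choice (a common automotive RADAR band):

```python
def radar_speed_from_doppler(f_shift_hz, f_carrier_hz=77e9, c=3e8):
    """Relative speed (m/s) of a target from the two-way Doppler shift.
    v = f_d * c / (2 * f_0): the factor of 2 accounts for the round trip."""
    return f_shift_hz * c / (2.0 * f_carrier_hz)

# A 5.13 kHz shift at 77 GHz corresponds to roughly 10 m/s (36 km/h)
v = radar_speed_from_doppler(5.13e3)
print(round(v, 2))  # → 9.99
```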

Ultrasonic sensors

In general, ultrasonic sensors are used for estimating the position of static obstacles, such as parked vehicles. They are cheaper than LIDAR and RADAR but have a detection range of only a few meters.
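The ranging principle is simple time-of-flight: the sensor emits an ultrasonic pulse and halves the round-trip echo time. A minimal sketch (not code from the book, assuming sound travels at about 343 m/s in air):

```python
def ultrasonic_distance(echo_time_s, speed_of_sound=343.0):
    """Distance (m) to an obstacle from the round-trip echo time.
    Divide by 2 because the pulse travels to the obstacle and back."""
    return speed_of_sound * echo_time_s / 2.0

# A 10 ms round trip corresponds to about 1.7 m, a typical parking distance
print(ultrasonic_distance(0.01))  # → 1.715
```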

Odometric sensors

Odometric sensors (wheel encoders) help the vehicle estimate its speed and the distance it has traveled by measuring the rotation of the car's wheels.
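A minimal sketch of that calculation (not code from the book; the tick counts and wheel size below are illustrative): each encoder tick is a known fraction of a wheel revolution, and each revolution covers one wheel circumference.

```python
import math

def wheel_speed(ticks, ticks_per_rev, wheel_diameter_m, dt_s):
    """Vehicle speed (m/s) from wheel-encoder ticks counted over dt_s seconds."""
    revolutions = ticks / ticks_per_rev
    distance_m = revolutions * math.pi * wheel_diameter_m  # circumference per rev
    return distance_m / dt_s

# 50 ticks in 0.1 s with 100 ticks/rev and a 0.636 m wheel -> ~10 m/s
print(round(wheel_speed(50, 100, 0.636, 0.1), 2))
```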

LIDAR 

The Light Detection and Ranging (LIDAR) sensor uses infrared laser light to determine the distance between the vehicle and an object. A rotating unit emits laser pulses and measures the time they take to reflect back to the sensor. LIDAR generates point clouds: sets of points that describe the objects and surfaces in the environment around the sensor.

Because the LIDAR sensor generates on the order of 2 million points per second, it can reconstruct the 3D shapes of objects, which also makes it possible to classify them.
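Each point in the cloud comes from one return: a measured range plus the beam's azimuth and elevation angles, converted from spherical to Cartesian coordinates. A minimal sketch (not code from the book):

```python
import math

def lidar_point(range_m, azimuth_rad, elevation_rad):
    """Convert one LIDAR return (range + beam angles) to an (x, y, z) point."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# A return 10 m away, 90 degrees to the left, at zero elevation
p = lidar_point(10.0, math.radians(90), 0.0)
print([round(c, 3) for c in p])  # → [0.0, 10.0, 0.0]
```

Repeating this for every return as the unit rotates is what builds up the point cloud.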

Introduction to sensor fusion

The sensors that are used in autonomous vehicles have various advantages and disadvantages. The main aim of sensor fusion is to utilize the various strengths of each sensor used in the vehicle so that the car can understand the environment more precisely. 

Camera sensors are great tools for detecting roads, traffic signs, and other objects on the road. LIDAR accurately estimates the positions of objects around the vehicle. RADAR accurately estimates the speed of moving vehicles.
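As a toy illustration of combining complementary sensors (not code from the book), two independent position estimates can be fused with inverse-variance weighting: the more certain sensor gets the larger weight, and the fused estimate is more certain than either input.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted average of two independent estimates.
    The fused variance is always smaller than either input variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# An accurate LIDAR position fused with a noisier camera position
pos, var = fuse(10.0, 0.1, 10.6, 0.9)
print(round(pos, 2), round(var, 3))  # → 10.06 0.09
```

Note how the fused position (10.06) sits much closer to the low-variance LIDAR estimate, and the fused variance (0.09) is below the LIDAR's own 0.1.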

Kalman filter

One of the most popular sensor fusion algorithms is the Kalman filter, which is used to merge the data from an autonomous vehicle's various sensors. The Kalman filter was invented in 1960 by Rudolf Kálmán. It is widely used for tracking and navigation, for example in phones and satellites.

The Kalman filter was used during the first crewed mission to land on the Moon (the Apollo 11 mission) to estimate the spacecraft's trajectory.

The main application of the Kalman filter is data fusion, which is used to estimate the state of a dynamic system in the present, past, and future. It can be used to monitor a moving pedestrian's location and velocity over time, and also to quantify their associated uncertainty. In general, the Kalman filter consists of two iterative steps:

  • Predict
  • Update

The state of the system estimated by the Kalman filter is denoted as x. This vector is composed of a position (p) and a velocity (v), while the measure of uncertainty...
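The two iterative steps above can be sketched as a minimal 1D constant-velocity Kalman filter (an illustrative implementation, not the book's code; the noise values q and r are assumptions):

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=0.01, r=1.0):
    """One predict/update cycle of a 1D constant-velocity Kalman filter.
    x = [position, velocity], P = state covariance, z = position measurement."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # motion model: p += v * dt
    H = np.array([[1.0, 0.0]])             # we only measure position
    Q = q * np.eye(2)                      # process noise
    R = np.array([[r]])                    # measurement noise

    # Predict: project the state and its uncertainty forward in time
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a pedestrian walking at ~1 m/s from noisy position readings
x, P = np.array([0.0, 0.0]), np.eye(2) * 10.0
for z in [1.1, 1.9, 3.2, 4.0, 5.1]:
    x, P = kalman_step(x, P, np.array([z]))
print(np.round(x, 2))  # estimated position ≈ 5.1, velocity ≈ 1.0
```

Note that the filter estimates the velocity even though only positions are measured: that information is recovered from how the position changes between updates.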

Summary

In this chapter, we learned about sensor fusion. Sensor fusion is the next step after collecting all of the sensor data. This book taught you about one of the most important types of sensors available: cameras. We also learned about deep learning networks that enable camera sensors to function, and are also useful for making predictions from the data generated by other types of sensors. Finally, we learned about Kalman filters.

The overall goal of this book was to introduce you to the field of SDCs and to help you prepare for a future in the industry.

Here is a quick summary of the chapters we covered in this book:

In Chapter 1, The Foundations of Self-Driving Cars, we traced the complicated path by which SDCs are becoming a reality. We discovered that SDC technology has existed for decades, learned how it has evolved, and saw how modern computational power has enabled advanced research on the topic. We also learned about the advantages and disadvantages...
