Hands-On Reinforcement Learning with Python


Product type: Book
Published in: Jun 2018
Publisher: Packt
ISBN-13: 9781788836524
Pages: 318
Edition: 1st
Author: Sudharsan Ravichandiran

Table of Contents (16 chapters)

  • Preface
  • Introduction to Reinforcement Learning
  • Getting Started with OpenAI and TensorFlow
  • The Markov Decision Process and Dynamic Programming
  • Gaming with Monte Carlo Methods
  • Temporal Difference Learning
  • Multi-Armed Bandit Problem
  • Deep Learning Fundamentals
  • Atari Games with Deep Q Network
  • Playing Doom with a Deep Recurrent Q Network
  • The Asynchronous Advantage Actor Critic Network
  • Policy Gradients and Optimization
  • Capstone Project – Car Racing Using DQN
  • Recent Advancements and Next Steps
  • Assessments
  • Other Books You May Enjoy

Deep Learning Fundamentals

So far, we have learned how reinforcement learning (RL) works. In the upcoming chapters, we will learn about deep reinforcement learning (DRL), which combines deep learning and RL. DRL is creating a lot of buzz in the RL community and is making a serious impact on solving many RL tasks. To understand DRL, we need a strong foundation in deep learning. Deep learning is a subset of machine learning and is all about neural networks. Deep learning has been around for a decade, but it is so popular right now because of advances in computational power and the availability of huge volumes of data. With this huge volume of data, deep learning algorithms can outperform classic machine learning algorithms. Therefore, in this chapter, we will learn about several deep learning algorithms like recurrent neural...

Artificial neurons

Before understanding artificial neural networks (ANNs), let's first understand what neurons are and how neurons in our brain actually work. A neuron can be defined as the basic computational unit of the human brain. Our brain contains approximately 100 billion neurons, connected to one another through synapses. A neuron receives input from the external environment, from sensory organs, or from other neurons through branchlike structures called dendrites, as can be seen in the following diagram. These inputs are strengthened or weakened, that is, weighted according to their importance, and then summed together in the soma (cell body). The summed inputs are then processed and sent along the axon to other neurons. A basic single biological neuron is shown in the following diagram:

Now, how do artificial neurons work...
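The biological picture maps directly onto code. As a minimal sketch (the weights, bias, and inputs below are arbitrary illustrative values, not from the book), an artificial neuron computes a weighted sum of its inputs, adds a bias, and passes the result through an activation function such as the sigmoid:

```python
import math

def sigmoid(z):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # weighted sum of inputs plus bias, then activation
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# two inputs, two weights, one bias: z = 1.0*0.5 + 2.0*(-0.25) + 0.1 = 0.1
output = neuron([1.0, 2.0], [0.5, -0.25], 0.1)
```

The weighting plays the role of the synapse strengths, and the activation function plays the role of the cell body deciding how strongly to fire.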

ANNs

Neurons are cool, right? But a single neuron cannot perform complex tasks, which is why our brain has billions of neurons organized in layers, forming a network. Similarly, artificial neurons are arranged in layers, and the layers are connected so that information passes from one layer to the next. A typical ANN consists of the following layers:

  • Input layer
  • Hidden layer
  • Output layer

Each layer has a collection of neurons, and the neurons in one layer interact with all the neurons in the adjacent layers. However, neurons within the same layer do not interact with each other. A typical ANN is shown in the following diagram:

Input layer

The input layer is where we feed input to the network. The number...

Deep diving into ANN

We know that in artificial neurons, we multiply the inputs by weights, add a bias, and apply an activation function to produce the output. Now, we will see how this happens in a neural network setting, where neurons are arranged in layers. The number of layers in a network equals the number of hidden layers plus the number of output layers; we don't count the input layer. Consider a two-layer neural network with one input layer, one hidden layer, and one output layer, as shown in the following diagram:

Let's say we have two inputs, x1 and x2, and we have to predict the output y. Since we have two inputs, the number of neurons in the input layer will be two. Now, these inputs will be multiplied by weights and then we add bias and propagate the resultant value to the hidden layer where the activation function will be applied. So...
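This forward pass can be sketched in plain Python. The layer sizes and weight values below are illustrative choices (three hidden neurons, sigmoid activations), not the book's exact network:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W1, b1, W2, b2):
    # hidden layer: each neuron takes a weighted sum of the inputs
    # plus its bias, then applies the activation function
    hidden = [sigmoid(sum(xi * wij for xi, wij in zip(x, row)) + b)
              for row, b in zip(W1, b1)]
    # output layer: the same computation on the hidden activations
    return sigmoid(sum(hi * wi for hi, wi in zip(hidden, W2)) + b2)

# two inputs x1, x2 -> three hidden neurons -> one output y
x = [0.5, -1.0]
W1 = [[0.1, 0.4], [-0.3, 0.2], [0.5, 0.5]]   # one weight row per hidden neuron
b1 = [0.0, 0.1, -0.1]
W2 = [0.3, -0.2, 0.5]
b2 = 0.05
y = forward(x, W1, b1, W2, b2)
```

In practice, the weights start out random and are adjusted during training; here they are fixed just to show how values flow from inputs through the hidden layer to the output.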

Neural networks in TensorFlow

Now, we will see how to build a basic neural network using TensorFlow that predicts handwritten digits. We will use the popular MNIST dataset, which has a collection of labeled handwritten digit images for training.

First, we must import TensorFlow and load the dataset from tensorflow.examples.tutorials.mnist:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

Now, we will see what we have in our data:

print("No of images in training set {}".format(mnist.train.images.shape))
print("No of labels in training set {}".format(mnist.train.labels.shape))

print("No of images in test set {}".format(mnist.test.images.shape))
print("No of labels in test set {}".format(mnist.test.labels.shape))

It will print the following:

No of...
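Because we loaded the dataset with one_hot=True, each digit label arrives not as a single integer but as a 10-element vector with a 1 at the position of the digit's class. A quick sketch of that encoding, in plain Python and independent of TensorFlow:

```python
def one_hot(label, num_classes=10):
    # a vector of zeros with a single 1 at the index of the class
    vec = [0] * num_classes
    vec[label] = 1
    return vec

# the digit 3 becomes a 10-element vector with a 1 in position 3
encoded = one_hot(3)
```

This is why the label arrays have a second dimension of 10: one column per digit class.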

RNN

The birds are flying in the ____. If I asked you to fill in the blank, you would probably predict sky. How did you predict that the word sky would be a good fit? Because you read the whole sentence and understood its context. If we ask a normal neural network to predict the right word for this blank, it will not predict the correct word. This is because a normal neural network's output is based only on the current input, so the input to the network would be just the previous word, the. That is, in normal neural networks, each input is independent of the others, so they will not perform well when we have to remember a sequence of inputs to predict what comes next.

How do we make our network remember the whole sentence to predict the next word correctly? Here is where RNN comes...
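The recurrent idea can be sketched in a few lines: at each step, the network's new hidden state depends on the current input and on the hidden state carried over from the previous step, so earlier items in the sequence influence later predictions. The scalar weights below are arbitrary illustrative values:

```python
import math

def rnn_step(x, h_prev, w_x, w_h, b):
    # the new hidden state mixes the current input with the previous
    # state, so information from earlier steps is carried forward
    return math.tanh(w_x * x + w_h * h_prev + b)

h = 0.0                          # initial hidden state
for x in [0.5, -0.2, 0.9]:       # a toy input sequence
    h = rnn_step(x, h, w_x=0.8, w_h=0.5, b=0.0)
```

A feedforward network would process 0.9 the same way regardless of what came before; here, the final h also reflects the earlier inputs 0.5 and -0.2 through the recurrent term w_h * h_prev.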

Long Short-Term Memory RNN

RNNs are pretty cool, right? But we have seen a problem in training RNNs called the vanishing gradient problem. Let's explore that a bit. The sky is __. An RNN can easily predict the last word as blue based on the information it has just seen. But an RNN cannot capture long-term dependencies. What does that mean? Let's say Archie lived in China for 20 years. He loves listening to good music. He is a very big comic fan. He is fluent in _. Now, you would predict the blank as Chinese. How did you predict that? Because you understood that Archie lived for 20 years in China, you thought he might be fluent in Chinese. But an RNN cannot retain all of this information in memory to say that Archie is fluent in Chinese. Due to the vanishing gradient problem, it cannot retain information in memory for a long time. How do we solve that?

Here...
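The gate mechanism can be sketched with a single LSTM cell step in plain Python. All weights below are set to the same illustrative value just to show the data flow; real LSTMs learn separate weight matrices for each gate:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    # each gate is a sigmoid (a value in (0, 1)) of a weighted mix
    # of the current input and the previous hidden state
    f = sigmoid(p["wf"] * x + p["uf"] * h_prev + p["bf"])    # forget gate
    i = sigmoid(p["wi"] * x + p["ui"] * h_prev + p["bi"])    # input gate
    o = sigmoid(p["wo"] * x + p["uo"] * h_prev + p["bo"])    # output gate
    g = math.tanh(p["wg"] * x + p["ug"] * h_prev + p["bg"])  # candidate memory
    c = f * c_prev + i * g    # cell state: keep old memory, blend in new
    h = o * math.tanh(c)      # hidden state exposed to the next step
    return h, c

# illustrative parameters: every weight and bias set to 0.5
params = {k: 0.5 for k in
          ["wf", "uf", "bf", "wi", "ui", "bi", "wo", "uo", "bo", "wg", "ug", "bg"]}
h, c = lstm_step(x=1.0, h_prev=0.0, c_prev=0.0, p=params)
```

The key design choice is the cell state c: because it is updated additively (old memory scaled by the forget gate plus new candidate scaled by the input gate), information can survive many steps instead of being squashed away, which is what lets the network remember that Archie lived in China.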

Convolutional neural networks

A convolutional neural network (CNN), also known as a ConvNet, is a special type of neural network that is extensively used in computer vision. Applications of CNNs range from enabling vision in self-driving cars to automatically tagging friends in your Facebook pictures. CNNs make use of spatial information to recognize images. But how do they really work? How can a neural network recognize images? Let's go through this step by step.

A CNN typically consists of three major layers:

  • Convolutional layer
  • Pooling layer
  • Fully connected layer

Convolutional layer

When we feed an image as input, it will actually be converted to a matrix of pixel values. These pixel values range from 0 to 255 and the dimensions...
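The convolution operation itself is simple to sketch: slide a small filter (kernel) over the pixel matrix and, at each position, take the sum of the elementwise products. The tiny image and kernel below are illustrative values, not from the book:

```python
def convolve2d(image, kernel):
    # "valid" convolution: the kernel stays fully inside the image,
    # so the output is smaller than the input
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for r in range(out_h):
        for c in range(out_w):
            # elementwise product of the kernel with the patch under it
            out[r][c] = sum(image[r + i][c + j] * kernel[i][j]
                            for i in range(kh) for j in range(kw))
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
kernel = [[1, 0],
          [0, -1]]          # responds to differences along the diagonal
feature_map = convolve2d(image, kernel)   # -> [[-4, -4], [-4, -4]]
```

In a real CNN, the kernel values are learned during training, and each kernel produces one feature map highlighting a particular pattern (edges, corners, textures) wherever it occurs in the image.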

Classifying fashion products using CNN

We will now see how to use CNN for classifying fashion products.

First, we will import our required libraries as usual:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

Now, we will read the data. The dataset is available in tensorflow.examples, so we can directly extract the data as follows:

from tensorflow.examples.tutorials.mnist import input_data
fashion_mnist = input_data.read_data_sets('data/fashion/', one_hot=True)

We will check what we have in our data:

print("No of images in training set {}".format(fashion_mnist.train.images.shape))
print("No of labels in training set {}".format(fashion_mnist.train.labels.shape))

print("No of images in test set {}".format(fashion_mnist.test.images.shape))
print("No of labels in test set {}".format(fashion_mnist...

Summary

In this chapter, we learned how neural networks actually work, followed by building a neural network to classify handwritten digits using TensorFlow. We also saw different types of neural networks, such as the RNN, which can remember information in memory. Then, we saw the LSTM network, which overcomes the vanishing gradient problem by using several gates to retain information in memory for as long as it is required. We also saw another interesting neural network for recognizing images, called the CNN, and how CNNs use different layers to understand an image. Following this, we learned how to build a CNN to recognize fashion products using TensorFlow.

In the next chapter, Chapter 8, Atari Games with Deep Q Network, we will see how neural networks can help our RL agents learn more efficiently.

Questions

The question list is as follows:

  1. What is the difference between linear regression and neural networks?
  2. What is the use of the activation function?
  3. Why do we need to calculate the gradient in gradient descent?
  4. What is the advantage of an RNN?
  5. What are vanishing and exploding gradient problems?
  6. What are gates in LSTM?
  7. What is the use of the pooling layer?

Further reading
