You're reading from Advanced Deep Learning with Keras, 1st Edition, by Rowel Atienza (Packt, October 2018, ISBN-13: 9781788629416).
Chapter 10. Policy Gradient Methods

In the final chapter of this book, we're going to introduce algorithms that directly optimize the policy network in reinforcement learning. These algorithms are collectively referred to as policy gradient methods. Since the policy network is directly optimized during training, the policy gradient methods belong to the family of on-policy reinforcement learning algorithms. Like value-based methods that we discussed in Chapter 9, Deep Reinforcement Learning, policy gradient methods can also be implemented as deep reinforcement learning algorithms.

A fundamental motivation for studying policy gradient methods is addressing the limitations of Q-Learning. Recall that Q-Learning is about selecting the action that maximizes the value of the state. With the Q function, we're able to determine the policy that enables the agent to decide which action to take for a given state. The chosen...

Policy gradient theorem

As discussed in Chapter 9, Deep Reinforcement Learning, in reinforcement learning the agent is situated in an environment that is in state $s_t$, an element of the state space $\mathcal{S}$. The state space $\mathcal{S}$ may be discrete or continuous. The agent takes an action $a_t$ from the action space $\mathcal{A}$ by obeying the policy $\pi(a_t|s_t)$. $\mathcal{A}$ may be discrete or continuous. As a result of executing the action $a_t$, the agent receives a reward $r_{t+1}$ and the environment transitions to a new state $s_{t+1}$. The new state is dependent only on the current state and action. The goal of the agent is to learn an optimal policy $\pi^{*}$ that maximizes the return from all the states:

$\pi^{*} = \underset{\pi}{\operatorname{argmax}}\, R_t$ (Equation 9.1.1)

The return, $R_t$, is defined as the discounted cumulative reward from time t until the end of the episode or when the terminal state is reached:

$R_t = \sum_{k=0}^{T} \gamma^{k} r_{t+k}$ (Equation 9.1.2)

From Equation 9.1.2, the return can also be interpreted as the value of a given state obtained by following the policy $\pi$. It can be observed from Equation 9.1.1 that future...
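To make Equation 9.1.2 concrete, here is a minimal sketch (not one of the book's listings) that computes the discounted return $R_t$ for every time step of a completed episode, assuming the per-step rewards have already been collected in a list:

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Compute R_t = sum_k gamma^k * r_{t+k} for every step t
    of a completed episode (Equation 9.1.2)."""
    returns = np.zeros(len(rewards))
    running = 0.0
    # Accumulate from the last step backwards so each R_t
    # reuses the already-computed R_{t+1}.
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Example: a 4-step episode with a reward of 1 at every step
print(discounted_returns([1.0, 1.0, 1.0, 1.0], gamma=0.9))
# -> approximately [3.439, 2.71, 1.9, 1.0]
```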

Monte Carlo policy gradient (REINFORCE) method

The simplest policy gradient method is called REINFORCE [5], a Monte Carlo policy gradient method:

$\nabla J(\theta) = \mathbb{E}\left[ R_t \nabla_{\theta} \ln \pi(a_t \mid s_t, \theta) \right]$ (Equation 10.2.1)

where $R_t$ is the return as defined in Equation 9.1.2. $R_t$ is an unbiased sample of $Q^{\pi}(s_t, a_t)$ in the policy gradient theorem.
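As an illustration of how Equation 10.2.1 can be turned into a Keras training step, the following hypothetical sketch (not the author's listing; the network sizes are assumptions) minimizes the categorical cross-entropy of the actions actually taken, weighted per sample by the discounted return, which is equivalent to gradient ascent on $R_t \nabla_{\theta} \ln \pi(a_t \mid s_t, \theta)$:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical dimensions; a CartPole-like task would use its own values.
state_dim, n_actions = 4, 2

# Policy network pi(a|s, theta): a softmax over discrete actions.
policy = keras.Sequential([
    keras.Input(shape=(state_dim,)),
    layers.Dense(64, activation='relu'),
    layers.Dense(n_actions, activation='softmax'),
])

# Minimizing the per-sample-weighted cross-entropy of the taken action
# is the same as gradient ascent on weight * log pi(a_t|s_t, theta).
policy.compile(optimizer=keras.optimizers.Adam(1e-3),
               loss='categorical_crossentropy')

def reinforce_update(states, actions, weights):
    """One REINFORCE gradient step (Equation 10.2.1).

    states  : (N, state_dim) array of visited states
    actions : (N,) array of indices of the actions actually taken
    weights : (N,) array carrying gamma^t * R_t for each step
    """
    targets = keras.utils.to_categorical(actions, n_actions)
    policy.train_on_batch(states, targets, sample_weight=weights)
```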

Algorithm 10.2.1 summarizes the REINFORCE algorithm [2]. REINFORCE is a Monte Carlo algorithm. It does not require knowledge of the dynamics of the environment (that is, it is model-free). Only experience samples, $(s_t, a_t, r_{t+1}, s_{t+1})$, are needed to optimally tune the parameters of the policy network, $\pi(a_t|s_t,\theta)$. The discount factor, $\gamma$, takes into consideration that rewards decrease in value as the number of steps increases. The gradient is discounted by $\gamma^{t}$, so gradients taken at later steps make smaller contributions. The learning rate, $\alpha$, is a scaling factor of the gradient update.

The parameters are updated by performing gradient ascent using the discounted gradient and learning rate. As a Monte Carlo algorithm, REINFORCE requires...
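A rough sketch of the episode loop in Algorithm 10.2.1 is shown below, assuming the classic OpenAI Gym API of the book's era and reusing the hypothetical discounted_returns and reinforce_update helpers sketched above; Adam stands in for the plain gradient-ascent step with learning rate $\alpha$:

```python
import gym
import numpy as np

env = gym.make('CartPole-v0')
gamma = 0.99

for episode in range(500):
    states, actions, rewards = [], [], []
    state = env.reset()
    done = False
    # 1) Generate one episode by sampling actions from pi(a|s, theta).
    while not done:
        probs = policy.predict(np.expand_dims(state, axis=0), verbose=0)[0]
        action = np.random.choice(len(probs), p=probs)
        next_state, reward, done, _ = env.step(action)
        states.append(state)
        actions.append(action)
        rewards.append(reward)
        state = next_state
    # 2) Compute R_t for every step, then discount each gradient by gamma^t.
    returns = discounted_returns(rewards, gamma)
    discounts = gamma ** np.arange(len(rewards))
    # 3) One gradient-ascent step on Equation 10.2.1.
    reinforce_update(np.array(states), np.array(actions), discounts * returns)
```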

Conclusion

In this chapter, we've covered the policy gradient methods. Starting with the policy gradient theorem, we formulated four methods to train the policy network: REINFORCE, REINFORCE with baseline, Actor-Critic, and A2C. The four algorithms were discussed in detail, and we explored how they can be implemented in Keras. We then validated the algorithms by examining the number of times the agent successfully reached its goal and the total reward received per episode.

Similar to the Deep Q-Network [3] that we discussed in the previous chapter, there are several improvements that can be made to the fundamental policy gradient algorithms. The most prominent one is A3C [4], a multi-threaded version of A2C. It allows the agent to be exposed to different experiences simultaneously and to optimize the policy and value networks asynchronously. However, in the experiments conducted by OpenAI, https://blog.openai.com/baselines-acktr...

References

  1. Sutton and Barto. Reinforcement Learning: An Introduction. http://incompleteideas.net/book/bookdraft2017nov5.pdf, 2017.
  2. Mnih, Volodymyr, and others. Human-level Control through Deep Reinforcement Learning. Nature 518.7540 (2015): 529.
  3. Mnih, Volodymyr, and others. Asynchronous Methods for Deep Reinforcement Learning. International Conference on Machine Learning, 2016.
  4. Williams, Ronald J. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning 8.3-4 (1992): 229-256.
Author

Rowel Atienza

Rowel Atienza is an Associate Professor at the Electrical and Electronics Engineering Institute of the University of the Philippines, Diliman. He holds the Dado and Maria Banatao Institute Professorial Chair in Artificial Intelligence. Rowel has been fascinated with intelligent robots since he graduated from the University of the Philippines. He received his MEng from the National University of Singapore for his work on an AI-enhanced four-legged robot. He finished his Ph.D. at The Australian National University for his contribution to the field of active gaze tracking for human-robot interaction. Rowel's current research focuses on AI and computer vision. He dreams of building useful machines that can perceive, understand, and reason. To help make his dreams become real, Rowel has been supported by grants from the Department of Science and Technology (DOST), Samsung Research Philippines, and the Commission on Higher Education-Philippine California Advanced Research Institutes (CHED-PCARI).