Mastering Reinforcement Learning with Python
Published by Packt in Dec 2020, 1st Edition, ISBN-13 9781838644147

Author: Enes Bilgin

Enes Bilgin works as a senior AI engineer and a tech lead in Microsoft's Autonomous Systems division. He is a machine learning and operations research practitioner and researcher with experience in building production systems and models for top tech companies using Python, TensorFlow, and Ray/RLlib. He holds an M.S. and a Ph.D. in systems engineering from Boston University and a B.S. in industrial engineering from Bilkent University. In the past, he has worked as a research scientist at Amazon and as an operations research scientist at AMD. He also held adjunct faculty positions at the McCombs School of Business at the University of Texas at Austin and at the Ingram School of Engineering at Texas State University.

Chapter 7: Policy-Based Methods

Value-based methods, which we covered in the previous chapter, achieve great results in many environments with discrete action spaces. However, many applications, such as robotics, require continuous control. In this chapter, we go into another important class of algorithms, called policy-based methods, which enable us to solve continuous-control problems. In addition, these methods directly optimize a policy network, and hence stand on a stronger theoretical foundation. Finally, policy-based methods are able to learn truly stochastic policies, which value-based methods cannot learn and which are needed in partially observable environments and certain games. All in all, policy-based approaches complement value-based methods in many ways. This chapter goes into the details of policy-based methods to give you a strong understanding of how they work.

In particular, we discuss the following topics in this chapter:

  • Need for policy-based methods
  • Vanilla...

Need for policy-based methods

We start this chapter by discussing why we need policy-based methods when we have already introduced many value-based methods. Policy-based methods i) are arguably more principled, as they directly optimize the policy parameters, ii) allow us to use continuous action spaces, and iii) are able to learn truly stochastic policies. Let's now go into the details of each of these points.

A more principled approach

In Q-learning, a policy is obtained indirectly by learning action values, which are then used to determine the best action(s). But do we really need to know the value of an action? Most of the time we don't, as action values are only proxies that get us to optimal policies. Policy-based methods learn function approximations that directly give us policies, without such an intermediate step. This is arguably a more principled approach, because we can take gradient steps directly to optimize the policy, not the proxy action-value...
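
To make the contrast concrete, here is a minimal sketch (not from the book; the numbers are made up for illustration) of the two ways a policy can be obtained: implicitly from action-value estimates, or explicitly from the output of a policy network:

import numpy as np

# Value-based view: the policy is implicit, derived from action-value estimates.
q_values = np.array([0.2, 1.5, -0.3])          # hypothetical Q(s, a) for 3 actions
greedy_action = int(np.argmax(q_values))        # policy = argmax over the proxy values

# Policy-based view: the policy is explicit, e.g., action probabilities
# produced directly by a (here, pretend) policy network.
logits = np.array([0.1, 2.0, -1.0])             # hypothetical policy network output
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over actions
sampled_action = int(np.random.choice(len(probs), p=probs))

print(greedy_action, sampled_action, probs)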

Vanilla policy gradient

We start our discussion of policy-based methods with the most fundamental algorithm: the vanilla policy gradient approach. Although this algorithm is rarely useful in realistic problem settings, it is important to understand it in order to build strong intuition and the theoretical background for the more complex algorithms we will cover later.
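
To give a concrete picture of what this looks like in practice, below is a minimal sketch, not the book's implementation, of a single REINFORCE-style update for a small softmax policy network; the network sizes and the dummy trajectory data are made up for illustration:

import numpy as np
import tensorflow as tf

# A tiny softmax policy over 4 discrete actions for an 8-dimensional state.
policy = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(4),  # action logits
])
optimizer = tf.keras.optimizers.Adam(1e-3)

# Dummy trajectory data; in practice these come from rolling out the current policy.
states = tf.constant(np.random.randn(16, 8).astype(np.float32))
actions = tf.constant(np.random.randint(0, 4, size=16))
returns = tf.constant(np.random.randn(16).astype(np.float32))  # e.g., discounted returns

with tf.GradientTape() as tape:
    log_probs = tf.nn.log_softmax(policy(states))
    # log pi(a_t | s_t) for the actions that were actually taken
    taken_log_probs = tf.reduce_sum(tf.one_hot(actions, 4) * log_probs, axis=1)
    # The gradient of this surrogate loss is the vanilla policy gradient estimate.
    loss = -tf.reduce_mean(taken_log_probs * returns)

grads = tape.gradient(loss, policy.trainable_variables)
optimizer.apply_gradients(zip(grads, policy.trainable_variables))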

Objective in the policy gradient methods

In value-based methods, we focused on finding good estimates of the action values, with which we then obtained policies. Policy gradient methods, on the other hand, directly focus on optimizing the policy with respect to the reinforcement learning objective, although we will still make use of value estimates. If you don't remember what this objective was, it is the expected discounted return:
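
In standard notation (the exact symbols here are an assumption and may differ slightly from the book's), this objective reads:

J(\theta) = \mathbb{E}_{\tau \sim p_\theta(\tau)} \left[ \sum_{t=0}^{T} \gamma^t r(s_t, a_t) \right]

where \tau denotes a trajectory generated by following the policy \pi_\theta and \gamma is the discount factor.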

This is a slightly more rigorous way of writing this objective compared to how we wrote it before. Let's unpack what we have here:

  • The objective...

Actor-critic methods

Actor-critic methods propose further remedies to the high-variance problem in policy gradient algorithms. Just like REINFORCE and other policy gradient methods, actor-critic algorithms have been around for decades. Combining this approach with deep reinforcement learning, however, has enabled them to solve more realistic RL problems. We start this section by presenting the ideas behind the actor-critic approach, and later we define them in more detail.

Further reducing the variance in policy-based methods

Remember that earlier, to reduce the variance in gradient estimates, we replaced the sum of rewards obtained in a trajectory with a reward-to-go term. Although this was a step in the right direction, it is usually not enough. We now introduce two more methods to further reduce this variance.
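
As a quick refresher, here is a minimal NumPy sketch (with made-up rewards) of how the reward-to-go terms are computed for a single trajectory:

import numpy as np

def rewards_to_go(rewards, gamma=0.99):
    """Discounted sum of rewards from each time step onward, for one trajectory."""
    rtg = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg

# Each step is credited only with the rewards that come after it, rather than
# the full trajectory return, which lowers the variance of the gradient estimate.
print(rewards_to_go(np.array([1.0, 0.0, 2.0, 3.0]), gamma=0.9))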

Estimating the reward-to-go

The reward-to-go term obtained in a trajectory, that is, the sum of (discounted) rewards from time step $t$ onward, is an estimate of the action value $Q^{\pi}(s_t, a_t)$ under the existing policy $\pi$.

Info

Notice the difference...

Trust-region methods

One of the important developments in the world of policy-based methods has been the evolution of trust-region methods. In particular, the TRPO and PPO algorithms have led to significant improvements over algorithms like A2C and A3C. For example, the famous Dota 2 AI agent, which reached expert-level performance in competitions, was trained using PPO and GAE. In this section, we go into the details of these algorithms to help you gain a solid understanding of how they work.
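
As a preview of where this section is heading, below is a minimal NumPy sketch (not RLlib's implementation, and with made-up numbers) of the clipped surrogate objective that PPO maximizes:

import numpy as np

def ppo_clip_objective(log_prob_new, log_prob_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective, averaged over a batch of (state, action) samples."""
    ratio = np.exp(log_prob_new - log_prob_old)               # pi_new(a|s) / pi_old(a|s)
    clipped_ratio = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    # Take the pessimistic (minimum) of the unclipped and clipped terms, which
    # removes the incentive to move the new policy too far from the old one.
    return np.mean(np.minimum(ratio * advantages, clipped_ratio * advantages))

# Dummy values for illustration
new_lp = np.log(np.array([0.30, 0.60, 0.10]))
old_lp = np.log(np.array([0.25, 0.50, 0.20]))
adv = np.array([1.0, -0.5, 2.0])
print(ppo_clip_objective(new_lp, old_lp, adv))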

Info

Prof. Sergey Levine, who co-authored the TRPO paper, goes much deeper into the math behind these methods in his online lecture than we do in this section. The lecture is available at https://youtu.be/uR1Ubd2hAlE, and I highly recommend you watch it to improve your theoretical understanding of these algorithms.

Without further ado, let's dive in!

Policy gradient as policy iteration

In the earlier chapters, we described how most of the RL algorithms...

Revisiting off-policy methods

One of the challenges with policy-based methods is that they are on-policy, which requires collecting new samples after every policy update. If it is costly to collect samples from the environment, then training on-policy methods can be very expensive. On the other hand, the value-based methods we covered in the previous chapter are off-policy, but they only work with discrete action spaces. Therefore, there is a need for a class of methods that are off-policy yet work with continuous action spaces. In this section, we cover such algorithms. Let's start with the first one: Deep Deterministic Policy Gradient.

DDPG: Deep Deterministic Policy Gradient

DDPG is, in some sense, an extension of deep Q-learning to continuous action spaces. Remember that deep Q-learning methods learn a representation of the action values, $Q(s, a)$. The best action in a given state $s$ is then given by $\arg\max_a Q(s, a)$. Now, if the action space is continuous, learning the action-value representation...
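
To make the idea concrete, here is a minimal TensorFlow 2 sketch (not the book's or RLlib's implementation; the network sizes and the dummy batch are made up) of the DDPG actor update, where a deterministic actor is trained to output actions that maximize the critic's value estimate:

import numpy as np
import tensorflow as tf

# A tiny deterministic actor and critic for an 8-dimensional state, 1-dimensional action.
actor = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="tanh"),    # deterministic action in [-1, 1]
])
critic = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(9,)),  # state (8) + action (1)
    tf.keras.layers.Dense(1),                        # Q(s, a)
])
actor_optimizer = tf.keras.optimizers.Adam(1e-3)

states = tf.constant(np.random.randn(16, 8).astype(np.float32))  # dummy replay batch

with tf.GradientTape() as tape:
    actions = actor(states)                                       # a = mu(s)
    q_values = critic(tf.concat([states, actions], axis=-1))      # Q(s, mu(s))
    actor_loss = -tf.reduce_mean(q_values)          # ascend on Q by descending on -Q

grads = tape.gradient(actor_loss, actor.trainable_variables)
actor_optimizer.apply_gradients(zip(grads, actor.trainable_variables))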

Comparison of the policy-based methods in Lunar Lander

Below is a comparison of how the evaluation rewards of different policy-based algorithms progressed over a single training session in the Lunar Lander environment:

Figure 7.6 – Lunar Lander training performance of various policy-based algorithms

To also give a sense of how long each training session took and what the performance was at the end of training, below is the TensorBoard tooltip for the plot above:

Figure 7.7 – Wall-clock time and end-of-training performance comparisons

Before going into further discussion, let's make the following disclaimer: the comparisons here should not be taken as a benchmark of these algorithms, for multiple reasons:

  • We did not perform any hyperparameter tuning.
  • The plots come from a single training trial for each algorithm. Training an RL agent is a highly stochastic process, and a fair comparison should include...

How to pick the right algorithm?

As in all machine learning domains, there is no silver bullet when it comes to choosing an algorithm for a given application. There are many criteria you should consider, and in some cases, some of them will be more important than others.

Here are the different dimensions of algorithm performance that you should look into when picking your algorithm:

  • Highest reward: When you are not bounded by compute and time resources and your goal is simply to train the best possible agent for your application, the highest reward is the criterion you should pay attention to. PPO and SAC are promising alternatives here.
  • Sample efficiency: If your sampling process is costly or time-consuming, then sample efficiency (achieving higher rewards using fewer samples) is important. When this is the case, you should look into off-policy algorithms, as they reuse past experience for training, whereas on-policy methods are often incredibly wasteful in how they consume samples...

Open source implementations of policy-gradient methods

In this chapter, we have covered many algorithms. It is not feasible to explicitly implement all of them given the space limitations here. Instead, we relied on RLlib implementations to train agents for our use case. RLlib is open source, so you can go to https://github.com/ray-project/ray/tree/releases/1.0.1/rllib and dive into the implementations of these algorithms.
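
For reference, training one of these agents with RLlib boils down to a few lines. The sketch below is an assumption about a typical setup: the environment id, the algorithm choice, and the stopping criterion are illustrative, not the exact settings used for the figures above.

import ray
from ray import tune

ray.init()
tune.run(
    "PPO",  # the registered RLlib trainable; algorithms such as "DDPG" or "SAC" work similarly
    config={"env": "LunarLanderContinuous-v2"},
    stop={"timesteps_total": 2_000_000},
)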

Having said that, the RLlib implementations are built for production systems and therefore involve a lot of additional code for things like error handling and preprocessing. In addition, there is a lot of code reuse, resulting in implementations with multiple levels of class inheritance. A much easier set of implementations is provided by OpenAI's Spinning Up repo at https://github.com/openai/spinningup. I highly recommend you go to that repo and dive into the implementation details of the algorithms we discussed in this chapter.

Info

OpenAI Spinning...

Summary

In this chapter, we covered an important class of algorithms called policy-based methods. Unlike the value-based methods we covered in the previous chapter, these methods directly optimize a policy network. As a result, they stand on a stronger theoretical foundation. In addition, they can be used with continuous action spaces. With this, we have covered model-free approaches in detail. In the next chapter, we go into model-based methods, which aim to learn the dynamics of the environment the agent is in.

References

  1. OpenAI. (2018). Spinning Up. URL: https://spinningup.openai.com/en/latest/spinningup/rl_intro2.html
  2. Williams, R. (1992). Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning, 8, 229-256. URL: https://link.springer.com/article/10.1007/BF00992696
  3. Sutton, R., et al. (1999). Policy Gradient Methods for Reinforcement Learning with Function Approximation. NIPS. URL: https://bit.ly/3lOMFs7
  4. Silver, D., et al. (2014). Deterministic Policy Gradient Algorithms. Journal of Machine Learning Research. URL: http://proceedings.mlr.press/v32/silver14.pdf
  5. Mnih, V., et al. (2016). Asynchronous Methods for Deep Reinforcement Learning. arXiv. URL: http://arxiv.org/abs/1602.01783
  6. Gu, S., et al. (2016). Continuous Deep Q-Learning with Model-Based Acceleration. arXiv. URL: http://arxiv.org/abs/1603.00748
  7. Schulman, J., et al. (2017). Trust Region Policy Optimization. arXiv. URL: http://arxiv.org/abs/1502...