You're reading from Reinforcement Learning Algorithms with Python
Published in Oct 2019 by Packt, 1st Edition, ISBN-13: 9781789131116
Reading level: Beginner
Author: Andrea Lonza

Andrea Lonza is a deep learning engineer with a great passion for artificial intelligence and a desire to create machines that act intelligently. He has acquired expert knowledge in reinforcement learning, natural language processing, and computer vision through academic and industrial machine learning projects. He has also participated in several Kaggle competitions, achieving high results. He is always looking for compelling challenges and loves to prove himself.

TRPO and PPO Implementation

In the previous chapter, we looked at policy gradient algorithms. Their uniqueness lies in the way they solve a reinforcement learning (RL) problem: policy gradient algorithms take a step in the direction of the highest gain of the reward. The simpler version of this algorithm (REINFORCE) has a straightforward implementation that alone achieves good results. Nevertheless, it is slow and has high variance. For this reason, we introduced a value function with a double goal: to critique the actor and to provide a baseline. Despite their great potential, these actor-critic algorithms can suffer from unwanted rapid variations in the action distribution that may cause a drastic change in the states that are visited, followed by a rapid decline in performance from which they may never recover.

In this chapter, we will...

Roboschool

Up until this point, we have worked with discrete control tasks such as the Atari games in Chapter 5, Deep Q-Network, and LunarLander in Chapter 6, Learning Stochastic and PG Optimization. To play these games, only a few discrete actions have to be controlled, that is, approximately two to five actions. As we learned in Chapter 6, Learning Stochastic and PG Optimization, policy gradient algorithms can be easily adapted to continuous actions. To demonstrate this, we'll deploy the next few policy gradient algorithms in a new set of environments called Roboschool, in which the goal is to control a robot in different situations. Roboschool was developed by OpenAI and uses the familiar OpenAI Gym interface that we used in the previous chapters. These environments are based on the Bullet Physics Engine (a physics engine that simulates soft and rigid body dynamics...
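As a minimal sketch of what "adapting policy gradients to continuous actions" means in practice, the snippet below turns a policy network's output (a per-dimension mean and log standard deviation, here hard-coded as hypothetical values) into a bounded continuous action by sampling from a diagonal Gaussian and clipping to the environment's action bounds. It uses NumPy only, since Roboschool itself may not be installed; the 3-dimensional action space in [-1, 1] is an assumption for illustration:

```python
import numpy as np

def sample_continuous_action(mean, log_std, low, high, rng):
    """Sample an action from a diagonal Gaussian policy and clip it
    to the environment's action bounds."""
    std = np.exp(log_std)
    action = mean + std * rng.standard_normal(mean.shape)
    return np.clip(action, low, high)

rng = np.random.default_rng(0)
# Hypothetical 3-dimensional continuous action space bounded in [-1, 1]
mean = np.zeros(3)          # per-dimension mean (a policy network output)
log_std = np.full(3, -0.5)  # log std (a network output or learned parameter)
action = sample_continuous_action(mean, log_std, -1.0, 1.0, rng)
print(action.shape)  # (3,)
```

In a Roboschool environment, `low` and `high` would come from the environment's action space rather than being fixed by hand.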

Natural policy gradient

REINFORCE and Actor-Critic are very intuitive methods that work well on small to medium-sized RL tasks. However, they present some problems that need to be addressed so that policy gradient algorithms can be adapted to much larger and more complex tasks. The main problems are as follows:

  • Difficulty choosing a correct step size: This comes from the non-stationary nature of RL, meaning that the distribution of the data changes continuously over time; as the agent learns new things, it explores a different state space. Finding a learning rate that stays stable throughout training is very tricky.
  • Instability: The algorithms aren't aware of the amount by which the policy will change. This is related to the problem we stated previously. A single, uncontrolled update could induce a substantial shift of the policy that will drastically change the action...
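The instability problem above is usually quantified with the KL divergence between the action distributions of the old and new policies, which is the measure the algorithms in this chapter constrain. As a small sketch (with hypothetical action probabilities), here is the KL divergence for categorical distributions, showing how a small update and a drastic one score very differently:

```python
import numpy as np

def kl_categorical(p, q, eps=1e-12):
    """KL(p || q) between two categorical action distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

old_pi = [0.5, 0.3, 0.2]    # action probabilities before the update
new_pi = [0.48, 0.32, 0.2]  # after a small, safe update
big_pi = [0.05, 0.05, 0.9]  # after a drastic, destabilizing update

print(kl_categorical(old_pi, new_pi))  # small value
print(kl_categorical(old_pi, big_pi))  # much larger value
```

A bounded KL divergence is exactly what keeps subsequent policies close to each other in the methods that follow.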

Trust region policy optimization

Trust region policy optimization (TRPO) is the first successful algorithm that makes use of several approximations to compute the natural gradient, with the goal of training a deep neural network policy in a more controlled and stable way. From NPG, we saw that it isn't possible to compute the inverse of the FIM for nonlinear functions with many parameters. TRPO overcomes these difficulties by building on top of NPG: it introduces a surrogate objective function and makes a series of approximations, which means it succeeds in learning complex policies for walking, hopping, or playing Atari games from raw pixels.
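The surrogate objective at the heart of TRPO is the importance-sampled expectation of the advantage, E[π_new(a|s)/π_old(a|s) · A], which TRPO maximizes subject to a KL constraint. As a minimal NumPy sketch over a toy batch (the log-probabilities and advantages below are hypothetical numbers for illustration):

```python
import numpy as np

def surrogate_objective(logp_new, logp_old, advantages):
    """Importance-sampled surrogate objective used by TRPO:
    the mean of pi_new(a|s) / pi_old(a|s) * A over the batch."""
    ratio = np.exp(logp_new - logp_old)  # probability ratio per sample
    return float(np.mean(ratio * advantages))

# Toy batch of three transitions
logp_old = np.log([0.2, 0.5, 0.3])   # log-probs under the old policy
logp_new = np.log([0.25, 0.45, 0.3]) # log-probs under the candidate policy
adv = np.array([1.0, -0.5, 0.2])     # estimated advantages
print(surrogate_objective(logp_new, logp_old, adv))  # ≈ 0.333
```

On its own, maximizing this objective would allow arbitrarily large ratios; TRPO's trust region (the KL constraint) is what keeps the update safe.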

TRPO is one of the most complex model-free algorithms, and though we have already learned the underlying principles of the natural gradient, some parts of it remain difficult. In this chapter, we'll only...

Proximal Policy Optimization

A work by Schulman and others shows that this is possible. Indeed, it uses an idea similar to TRPO while reducing the complexity of the method. This method is called Proximal Policy Optimization (PPO), and its strength lies in its use of first-order optimization only, without degrading reliability compared to TRPO. PPO is also more general and more sample-efficient than TRPO, and it enables multiple updates with mini-batches.

A quick overview

The main idea behind PPO is to clip the surrogate objective function when it moves too far away, instead of constraining it as TRPO does. This prevents the policy from making updates that are too large. The main objective is as follows:

$L^{CLIP}(\theta) = \hat{E}_t\left[\min\left(r_t(\theta)\hat{A}_t,\ \mathrm{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right]$ (7.9)

Here, $r_t(\theta)$ is defined as...
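As a minimal NumPy sketch of the clipped objective above (the log-probabilities and advantage in the example are hypothetical numbers), note how the `min` with the clipped ratio caps how much a sample with positive advantage can contribute once the ratio leaves the $[1-\epsilon, 1+\epsilon]$ interval:

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective:
    mean of min(r * A, clip(r, 1 - eps, 1 + eps) * A) over the batch."""
    ratio = np.exp(logp_new - logp_old)
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return float(np.mean(np.minimum(ratio * advantages,
                                    clipped * advantages)))

# Toy example: ratio = 0.3 / 0.2 = 1.5 > 1 + eps, so with a positive
# advantage the contribution is clipped at (1 + eps) * A = 1.2
print(ppo_clip_objective(np.log([0.3]), np.log([0.2]), np.array([1.0])))
```

Because the objective is flat outside the clipping interval, the gradient of the clipped term vanishes there, which is what discourages excessively large policy steps without any second-order machinery.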

Summary

In this chapter, you learned how policy gradient algorithms can be adapted to control agents with continuous actions and then used a new set of environments called Roboschool.

You also learned about and developed two advanced policy gradient algorithms: trust region policy optimization and proximal policy optimization. These algorithms make better use of the data sampled from the environment, and both use techniques to limit the difference in the distribution of two subsequent policies. In particular, TRPO (as the name suggests) builds a trust region around the objective function using a second-order derivative and some constraints based on the KL divergence between the old and the new policy. PPO, on the other hand, optimizes an objective function similar to TRPO but using only a first-order optimization method. PPO prevents the policy from taking steps that are too large...

Questions

  1. How can a policy neural network control a continuous agent?
  2. What's the KL divergence?
  3. What's the main idea behind TRPO?
  4. How is the KL divergence used in TRPO?
  5. What's the main benefit of PPO?
  6. How does PPO achieve good sample efficiency?

Further reading
