
PPO

Historically, the PPO method came from the OpenAI team and was proposed well after TRPO, which dates from 2015. However, PPO is much simpler than TRPO, so we will start with it. It was introduced in the 2017 paper by John Schulman et al. called Proximal Policy Optimization Algorithms (arXiv:1707.06347).

The core improvement over the classic A2C method is a change to the formula used to estimate the policy gradients. Instead of using the gradient of the logarithm of the probability of the action taken, the PPO method uses a different objective: the ratio between the new and the old policy, scaled by the advantage.

In math form, the old A2C objective could be written as $J_\theta = \mathbb{E}_t\left[\log \pi_\theta(a_t \mid s_t) A_t\right]$. The new objective proposed by PPO is $J_\theta = \mathbb{E}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)} A_t\right]$.
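
To make the difference concrete, here is a minimal PyTorch sketch of both objectives; the tensor names (logits, actions, advantages, old_log_probs) and shapes are illustrative assumptions, not code from the book:

    import torch
    import torch.nn.functional as F

    # Illustrative batch of transitions collected with the old policy
    logits = torch.randn(16, 4)            # current policy net output, (batch, n_actions)
    actions = torch.randint(0, 4, (16,))   # actions taken, (batch,)
    advantages = torch.randn(16)           # advantage estimates, (batch,)
    old_log_probs = torch.randn(16)        # log pi_old(a_t|s_t), stored at collection time

    log_probs = F.log_softmax(logits, dim=1)
    log_prob_actions = log_probs.gather(1, actions.unsqueeze(-1)).squeeze(-1)

    # Classic A2C objective: log-probability of the taken action times the advantage
    a2c_objective = (log_prob_actions * advantages).mean()

    # PPO objective: ratio of new to old policy probability times the advantage
    ratio = torch.exp(log_prob_actions - old_log_probs)
    ppo_objective = (ratio * advantages).mean()

Note that the ratio is computed as the exponent of the difference of log probabilities, which is numerically safer than dividing raw probabilities.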

The reason behind changing the objective is the same as for the cross-entropy method covered in Chapter 4, The Cross-Entropy Method: importance sampling. However, if we just blindly maximize this value, it may lead to a very large update to the policy.
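
The remedy in the Schulman et al. paper is the clipped surrogate objective, $L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\left[\min\left(r_t(\theta) A_t,\ \mathrm{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon) A_t\right)\right]$, where $r_t(\theta)$ is the probability ratio above and $\epsilon$ is a small constant, 0.2 in the paper. A minimal sketch, reusing the illustrative tensors from the previous snippet:

    # PPO clipped surrogate: take the pessimistic minimum of the unclipped and
    # clipped terms, so the update gains nothing from pushing the ratio outside
    # the [1 - eps, 1 + eps] interval
    eps = 0.2  # clipping range used in the PPO paper
    surr1 = ratio * advantages
    surr2 = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    ppo_clipped_loss = -torch.min(surr1, surr2).mean()  # negated: optimizers minimize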
