Double DQN

The next fruitful idea for improving the basic DQN came from DeepMind researchers in the paper Deep Reinforcement Learning with Double Q-Learning ([3] van Hasselt, Guez, and Silver, 2015). In the paper, the authors demonstrated that the basic DQN tends to overestimate Q-values, which may be harmful to training performance and can sometimes lead to suboptimal policies. The root cause of this is the max operation in the Bellman equation, but the strict proof is too complicated to write down here. As a solution to this problem, the authors proposed a small modification of the Bellman update.

In the basic DQN, our target value for Q looked like this:

$Q(s_t, a_t) = r_t + \gamma \max_a Q'(s_{t+1}, a)$

$Q'(s_{t+1}, a)$ denotes the Q-values calculated by our target network, which is synchronized with the trained network every n steps. The authors of the paper proposed choosing actions for the next state using the trained network, but taking the Q-values for those actions from the target network. So, the new expression for the target Q-values looks like this:

$Q(s_t, a_t) = r_t + \gamma Q'(s_{t+1}, \arg\max_a Q(s_{t+1}, a))$
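To make the difference concrete, here is a minimal sketch in PyTorch (the framework used throughout the book) of how the Double DQN target could be computed for a batch of transitions. The names net, tgt_net, next_states, rewards, dones, and gamma are assumptions for illustration, not code taken from the book.

```python
import torch

@torch.no_grad()
def calc_double_dqn_targets(next_states, rewards, dones, net, tgt_net, gamma=0.99):
    # Hypothetical helper: net and tgt_net are assumed to map a batch of
    # states to a [batch, n_actions] tensor of Q-values.
    # Select the best next actions with the trained (online) network ...
    next_actions = net(next_states).argmax(dim=1)
    # ... but evaluate those actions with the target network
    next_q = tgt_net(next_states).gather(
        1, next_actions.unsqueeze(-1)).squeeze(-1)
    # Terminal transitions contribute no future value
    next_q[dones] = 0.0
    # Bellman target: r + gamma * Q'(s_{t+1}, argmax_a Q(s_{t+1}, a))
    return rewards + gamma * next_q
```

Compared with the basic DQN target, the only change is that the argmax over next-state actions is taken under the trained network instead of the target network, which decouples action selection from action evaluation and reduces the overestimation bias.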
