Deep Reinforcement Learning Hands-On - Second Edition

Product type: Book
Published: January 2020
Publisher: Packt
ISBN-13: 9781838826994
Pages: 826
Edition: 2nd
Author: Maxim Lapan


The theoretical background of the cross-entropy method

This section is optional and is included for readers who are interested in why the method works. If you wish, you can refer to the original paper on the cross-entropy method, which is given at the end of the section.

The basis of the cross-entropy method lies in the importance sampling theorem, which states the following:

$$\mathbb{E}_{x \sim p(x)}[H(x)] = \int_x p(x) H(x)\,dx = \int_x q(x)\,\frac{p(x)}{q(x)}\,H(x)\,dx = \mathbb{E}_{x \sim q(x)}\!\left[\frac{p(x)}{q(x)}\,H(x)\right]$$
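To make the identity concrete, here is a minimal sketch, assuming NumPy and SciPy are available; the names H, p, and q are my own illustrative choices, not code from the book. It estimates the same expectation twice: directly from samples of p(x), and from samples of q(x) reweighted by p(x)/q(x).

```python
# A minimal sketch (an illustration, not code from the book) of the
# importance sampling identity above: E_{x~p}[H(x)] can be estimated from
# samples drawn from q(x) by reweighting each sample with p(x)/q(x).
import numpy as np
from scipy.stats import norm

def H(x):
    # An arbitrary "reward" function chosen for illustration.
    return x ** 2

p = norm(loc=0.0, scale=1.0)   # target distribution p(x)
q = norm(loc=1.0, scale=2.0)   # proposal distribution q(x) that we sample from

xs = q.rvs(size=100_000, random_state=0)
weights = p.pdf(xs) / q.pdf(xs)            # importance weights p(x)/q(x)

is_estimate = np.mean(weights * H(xs))     # E_{x~q}[(p/q) * H]
direct = np.mean(H(p.rvs(size=100_000, random_state=1)))   # E_{x~p}[H]

print(f"importance sampling estimate: {is_estimate:.3f}")
print(f"direct Monte Carlo estimate:  {direct:.3f}")
# Both should be close to 1.0, the second moment of a standard normal.
```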
In our RL case, H(x) is a reward value obtained by some policy, x, and p(x) is a distribution over all possible policies. We don't want to maximize our reward by searching all possible policies; instead, we want to find a way to approximate p(x)H(x) by q(x), iteratively minimizing the distance between them. The distance between two probability distributions is calculated by the Kullback-Leibler (KL) divergence, which is as follows:

$$KL\big(p_1(x) \,\|\, p_2(x)\big) = \mathbb{E}_{x \sim p_1(x)} \log \frac{p_1(x)}{p_2(x)} = \mathbb{E}_{x \sim p_1(x)}\big[\log p_1(x)\big] - \mathbb{E}_{x \sim p_1(x)}\big[\log p_2(x)\big]$$

The first term in KL is called entropy and doesn't depend on p2(x), so it can be omitted during the minimization. The second term is called cross-entropy, which is...
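As a concrete check of the decomposition above, here is a minimal NumPy sketch for two small discrete distributions; the values in p1 and p2 are my own illustrative choices, not from the book. It shows that the first term is unaffected by p2(x), so minimizing the KL divergence with respect to p2(x) reduces to minimizing the cross-entropy term.

```python
# A minimal sketch (an illustration, not code from the book) of the KL
# decomposition above for two discrete distributions:
# KL(p1 || p2) = E_{p1}[log p1] - E_{p1}[log p2]
import numpy as np

p1 = np.array([0.7, 0.2, 0.1])   # "true" distribution p1(x)
p2 = np.array([0.5, 0.3, 0.2])   # approximating distribution p2(x)

first_term = np.sum(p1 * np.log(p1))        # E_{x~p1}[log p1(x)], independent of p2
cross_entropy = -np.sum(p1 * np.log(p2))    # -E_{x~p1}[log p2(x)]
kl = first_term + cross_entropy

print(f"KL(p1 || p2)      = {kl:.4f}")
print(f"direct definition = {np.sum(p1 * np.log(p1 / p2)):.4f}")
# The first term does not depend on p2, so minimizing KL over p2 is the same
# as minimizing the cross-entropy term.
```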
