Implementing the Dueling Double DQN algorithm and DDDQN agent

Dueling Double DQN (DDDQN) combines the benefits of Double Q-learning and the Dueling architecture. Double Q-learning corrects DQN's tendency to overestimate action values by decoupling action selection from action evaluation. The Dueling architecture splits the network into two streams that separately estimate the state-value function (V) and the advantage function (A), which are then combined to produce the action values (Q). This explicit separation allows the algorithm to learn faster, especially when there are many actions to choose from and when the actions are very similar to each other. The dueling architecture also lets the agent generalize across actions: because it updates a shared state-value estimate, experience with one action in a state improves the value estimates for the other actions as well, whereas a plain DQN agent learns nothing about actions it has not yet taken. By the end of this recipe, you will have a complete implementation of the DDDQN agent.
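Before working through the full recipe, here is a minimal TensorFlow 2 sketch of the two ideas. The function names (build_dueling_network, double_dqn_targets), the layer sizes, and gamma=0.99 are illustrative assumptions for this sketch, not the book's exact code:

import tensorflow as tf
from tensorflow.keras import layers

def build_dueling_network(obs_dim, num_actions):
    # Shared trunk, then separate value (V) and advantage (A) streams.
    # Layer sizes are illustrative assumptions, not the book's exact values.
    obs_input = layers.Input(shape=(obs_dim,))
    x = layers.Dense(64, activation="relu")(obs_input)
    x = layers.Dense(64, activation="relu")(x)
    value = layers.Dense(1)(x)                # V(s): scalar state value
    advantage = layers.Dense(num_actions)(x)  # A(s, a): per-action advantage
    # Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a'); subtracting the mean
    # advantage keeps the V and A streams identifiable.
    q_values = value + (advantage - tf.reduce_mean(advantage, axis=1, keepdims=True))
    return tf.keras.Model(inputs=obs_input, outputs=q_values)

def double_dqn_targets(online_net, target_net, rewards, next_obs, dones, gamma=0.99):
    # Double Q-learning: the online network selects the next action,
    # the target network evaluates it, which reduces overestimation bias.
    best_actions = tf.argmax(online_net(next_obs), axis=1)
    next_q = tf.gather(target_net(next_obs), best_actions, axis=1, batch_dims=1)
    return rewards + gamma * (1.0 - dones) * next_q

In a complete agent, these targets feed a standard temporal-difference loss against the online network's Q-values for the actions actually taken, and the target network's weights are periodically synced from the online network.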

Getting ready

To complete this recipe, you will first need to activate the tf2rl-cookbook Conda Python virtual environment and...
