- A Deep Q Network (DQN) is a neural network used to approximate the Q function.
- Experience replay removes the correlations between the agent's consecutive experiences: transitions are stored in a replay buffer and random minibatches are sampled from it for training (see the replay-buffer sketch after this list).
- When the same network predicts both the target value and the predicted value, the target shifts with every update and training can diverge, so a separate target network is used and synchronized with the main network only periodically.
- Because of the max operator in its target, DQN overestimates Q values.
- Double DQN avoids overestimating Q values by having two separate Q functions, each learning independently: one selects the greedy action and the other evaluates it (see the target-computation sketch below).
- In prioritized experience replay, experiences are prioritized by their TD error, so transitions the network predicts poorly are replayed more often (sketched below).
- Dueling DQN estimates the Q value more precisely by breaking the Q function computation into a value function and an advantage function (see the dueling-head sketch below).
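
The first two points can be made concrete with a minimal sketch of a Q network, a replay buffer, and a target network. This uses PyTorch; the framework, layer sizes, and hyperparameters are assumptions for illustration, not the book's own code.

```python
import random
from collections import deque

import torch
import torch.nn as nn

# Hypothetical sizes chosen for illustration only.
STATE_DIM, N_ACTIONS, BUFFER_SIZE = 4, 2, 10_000

class QNetwork(nn.Module):
    """A small MLP that maps a state to one Q value per action."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

class ReplayBuffer:
    """Stores transitions; sampling random minibatches breaks the
    correlation between consecutive experiences."""
    def __init__(self, capacity: int):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size: int):
        return random.sample(self.buffer, batch_size)

online_net = QNetwork(STATE_DIM, N_ACTIONS)
target_net = QNetwork(STATE_DIM, N_ACTIONS)

# Periodically copy the online weights into the frozen target network;
# between copies, the target network supplies stable TD targets.
target_net.load_state_dict(online_net.state_dict())
```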
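The Double DQN point comes down to one line in the target computation: the online network selects the greedy next action and the target network evaluates it. A sketch under the same assumptions as above:

```python
import torch

GAMMA = 0.99  # discount factor; an assumed value for illustration

def double_dqn_target(reward, next_state, done,
                      online_net, target_net, gamma=GAMMA):
    """Compute Double DQN targets for a batch of transitions.

    Decoupling action *selection* (online net) from action
    *evaluation* (target net) removes the upward bias that taking
    a max over noisy Q estimates introduces in plain DQN.
    """
    with torch.no_grad():
        # Selection: the online network picks the greedy action.
        best_actions = online_net(next_state).argmax(dim=1, keepdim=True)
        # Evaluation: the target network scores that action.
        next_q = target_net(next_state).gather(1, best_actions).squeeze(1)
    return reward + gamma * next_q * (1.0 - done)
```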
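For prioritized experience replay, here is a minimal proportional-prioritization buffer in which each transition is sampled with probability proportional to |TD error|^alpha. It is a simplified sketch: a full implementation also applies importance-sampling weights and typically uses a sum tree for efficiency.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Samples transitions in proportion to their TD-error priority."""
    def __init__(self, capacity: int, alpha: float = 0.6, eps: float = 1e-5):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.priorities, self.pos = [], [], 0

    def push(self, transition, td_error: float):
        # Small eps keeps zero-error transitions sampleable.
        priority = (abs(td_error) + self.eps) ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(priority)
        else:  # overwrite the oldest entry once full
            self.data[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size: int):
        probs = np.asarray(self.priorities)
        probs = probs / probs.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        return [self.data[i] for i in idx], idx
```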
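Finally, the dueling architecture only changes the network head: a shared feature layer feeds a state-value stream V(s) and an advantage stream A(s, a), recombined as Q(s, a) = V(s) + A(s, a) − mean over a' of A(s, a'). Again a PyTorch sketch with assumed layer sizes:

```python
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    """Dueling head: separate value and advantage streams."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.value = nn.Linear(64, 1)              # V(s)
        self.advantage = nn.Linear(64, n_actions)  # A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage makes V and A identifiable.
        return v + a - a.mean(dim=1, keepdim=True)
```

Subtracting the mean advantage is the standard identifiability trick: without it, any constant could shift between V and A while leaving Q unchanged.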