- Asynchronous advantage actor-critic (A3C) is an on-policy algorithm: we do not sample data from a replay buffer. However, each worker collects its most recent samples in a small temporary buffer, trains on them once, and then empties the buffer.
- The Shannon entropy of the policy is added to the objective as a regularizer. A higher-entropy policy is more exploratory, so the entropy bonus discourages premature convergence to a deterministic policy.
- When too many worker threads run on a single machine, training can slow down or even crash, as memory is limited. If, however, you have access to a large cluster of processors, then using a large number of worker threads/processes helps.
- A softmax layer is used in the policy network to convert the output logits into probabilities over actions.
- An advantage function is widely used, as it decreases the variance of the policy gradient; Section 3 of the A3C paper (https://arxiv.org/pdf/1602.01783.pdf) has more details on this.
- This is an exercise...
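The softmax and entropy points above can be illustrated with a minimal NumPy sketch (the function names and the four-action example are illustrative, not taken from the book's code):

```python
import numpy as np

def softmax(logits):
    """Convert policy-network logits into action probabilities."""
    z = logits - np.max(logits)        # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def entropy(probs):
    """Shannon entropy of an action distribution; larger means more exploratory."""
    return -np.sum(probs * np.log(probs + 1e-8))

# A uniform policy has maximum entropy; a near-deterministic one has low entropy.
uniform = softmax(np.zeros(4))                      # all four actions equally likely
peaked = softmax(np.array([10.0, 0.0, 0.0, 0.0]))   # almost always picks action 0
```

Here `entropy(uniform)` equals ln 4 ≈ 1.386 while `entropy(peaked)` is close to zero, which is why adding the entropy term to the maximized objective pushes the policy away from collapsing onto a single action too early.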
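The temporary-buffer and advantage points can likewise be sketched in plain Python. The n-step return computation below is a hypothetical illustration (the reward and value numbers are made up), showing how a worker turns one short rollout into advantage estimates before discarding the buffer:

```python
def n_step_returns(rewards, bootstrap_value, gamma=0.99):
    """Discounted n-step returns for one rollout, bootstrapping from V(s_n)."""
    returns = []
    R = bootstrap_value
    for r in reversed(rewards):
        R = r + gamma * R
        returns.append(R)
    return list(reversed(returns))

# Hypothetical 3-step rollout from a worker's temporary buffer (gamma=1 for clarity).
rewards = [1.0, 0.0, 1.0]                 # rewards r_t collected in the buffer
values = [0.5, 0.4, 0.6]                  # critic's estimates V(s_t)
R = n_step_returns(rewards, bootstrap_value=0.0, gamma=1.0)
advantages = [ret - v for ret, v in zip(R, values)]   # A_t = R_t - V(s_t)
```

Using the return minus the critic's baseline `V(s_t)`, rather than the raw return, is what reduces the variance of the policy gradient; after the gradient step, the buffer is emptied and the worker collects a fresh rollout.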
You're reading from TensorFlow Reinforcement Learning Quick Start Guide
Kaushik Balakrishnan works for BMW in Silicon Valley, and applies reinforcement learning, machine learning, and computer vision to solve problems in autonomous driving. Previously, he also worked at Ford Motor Company and NASA Jet Propulsion Laboratory. His primary expertise is in machine learning, computer vision, and high-performance computing, and he has worked on several projects involving both research and industrial applications. He has also worked on numerical simulations of rocket landings on planetary surfaces, and for this he developed several high-fidelity models that run efficiently on supercomputers. He holds a PhD in aerospace engineering from the Georgia Institute of Technology in Atlanta, Georgia.