- DDPG is an off-policy algorithm because it learns from transitions stored in a replay buffer; these transitions were generated by earlier versions of the policy rather than the current one.
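The replay buffer mentioned above can be sketched as a simple FIFO container; the class name and interface here are illustrative, not the book's exact implementation:

```python
import random
from collections import deque

class ReplayBuffer:
    """A minimal FIFO replay buffer for off-policy algorithms such as DDPG."""

    def __init__(self, capacity):
        # Old transitions are evicted automatically once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # A random mini-batch may mix transitions from many past policies,
        # which is what makes the learning off-policy.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

Sampling uniformly at random also breaks the temporal correlation between consecutive transitions, which stabilizes training.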
- In general, the actor and the critic use the same number of hidden layers and the same number of neurons per hidden layer, but this is not required. Note that the output layers do differ: the actor has one output per action dimension, while the critic has a single output (the Q-value).
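A minimal NumPy sketch of the two network heads makes the shape difference concrete; the layer sizes and weight scales here are arbitrary illustrations:

```python
import numpy as np

def mlp_forward(x, weights, biases, final_activation=None):
    """Forward pass through a fully connected network with ReLU hidden layers."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, h @ W + b)  # ReLU on every hidden layer
    out = h @ weights[-1] + biases[-1]
    return final_activation(out) if final_activation else out

rng = np.random.default_rng(0)

def make_layers(sizes):
    """Random weights and zero biases for the given layer sizes (illustrative init)."""
    return ([rng.normal(size=(m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])],
            [np.zeros(n) for n in sizes[1:]])

state_dim, n_actions, hidden = 8, 2, 16

# Same hidden architecture for both networks, but different heads:
actor_W, actor_b = make_layers([state_dim, hidden, hidden, n_actions])        # one output per action
critic_W, critic_b = make_layers([state_dim + n_actions, hidden, hidden, 1])  # single Q-value

state = rng.normal(size=(1, state_dim))
action = mlp_forward(state, actor_W, actor_b, final_activation=np.tanh)  # bounded actions
q_value = mlp_forward(np.concatenate([state, action], axis=1), critic_W, critic_b)

print(action.shape)   # (1, 2) -> one value per action dimension
print(q_value.shape)  # (1, 1) -> a single Q-value
```

The tanh on the actor's head keeps each action component in [-1, 1], which can then be rescaled to the environment's action bounds.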
- DDPG is used for continuous control, that is, when the actions are continuous and real-valued. Atari Breakout has discrete actions, and so DDPG is not suitable for Atari Breakout.
- We use the ReLU activation function, so the biases are initialized to small positive values; this ensures that the neurons fire at the beginning of training and allow gradients to back-propagate.
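The effect of a small positive bias can be checked numerically; the weight scale and bias value below are illustrative assumptions, not the book's settings:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(1000, 32))          # a batch of random inputs
W = rng.normal(size=(32, 64)) * 0.05     # small random weights

def active_fraction(bias):
    """Fraction of units whose pre-activation is positive, i.e. that ReLU passes through."""
    pre = x @ W + bias
    return float((pre > 0).mean())

# With zero biases, roughly half the pre-activations are negative and those
# units output zero, blocking their gradient. A small positive bias shifts
# pre-activations upward, so more units are active early in training.
print(active_fraction(0.0))
print(active_fraction(0.1))
```

Since the gradient of ReLU is zero for negative inputs, units that start inactive receive no learning signal; the positive bias simply reduces how many units start in that dead regime.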
- This is an exercise. See https://gym.openai.com/envs/InvertedDoublePendulum-v2/.
- This is also an exercise. Notice...
You're reading from TensorFlow Reinforcement Learning Quick Start Guide
Kaushik Balakrishnan works for BMW in Silicon Valley, and applies reinforcement learning, machine learning, and computer vision to solve problems in autonomous driving. Previously, he also worked at Ford Motor Company and NASA Jet Propulsion Laboratory. His primary expertise is in machine learning, computer vision, and high-performance computing, and he has worked on several projects involving both research and industrial applications. He has also worked on numerical simulations of rocket landings on planetary surfaces, and for this he developed several high-fidelity models that run efficiently on supercomputers. He holds a PhD in aerospace engineering from the Georgia Institute of Technology in Atlanta, Georgia.