Reinforcement Learning with TensorFlow

Product type: Book
Published: April 2018
Publisher: Packt
ISBN-13: 9781788835725
Pages: 334
Edition: 1st
Author: Sayon Dutta

Table of Contents (21 chapters)

Title Page
Packt Upsell
Contributors
Preface
1. Deep Learning – Architectures and Frameworks
2. Training Reinforcement Learning Agents Using OpenAI Gym
3. Markov Decision Process
4. Policy Gradients
5. Q-Learning and Deep Q-Networks
6. Asynchronous Methods
7. Robo Everything – Real Strategy Gaming
8. AlphaGo – Reinforcement Learning at Its Best
9. Reinforcement Learning in Autonomous Driving
10. Financial Portfolio Management
11. Reinforcement Learning in Robotics
12. Deep Reinforcement Learning in Ad Tech
13. Reinforcement Learning in Image Processing
14. Deep Reinforcement Learning in NLP
Further Topics in Reinforcement Learning
Other Books You May Enjoy
Index

Markov decision processes


As already mentioned, an MDP is a reinforcement learning framework, illustrated here in a gridworld environment, consisting of sets of states, actions, and rewards that satisfy the Markov property, from which an optimal policy can be obtained. An MDP is defined as the collection of the following:

  • States: S
  • Actions: A(s), A
  • Transition model: T(s, a, s') ~ P(s' | s, a)
  • Rewards: R(s), R(s, a), R(s, a, s')
  • Policy: π(s) → a, where π* is the optimal policy
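The components above can be sketched in code. The following is a minimal illustration, not the book's implementation, using a tiny 1-D gridworld with four states; all names here (STATES, transition, reward, policy) are illustrative assumptions:

```python
import random

STATES = [0, 1, 2, 3]          # S: a tiny 1-D gridworld, goal at state 3
ACTIONS = ["left", "right"]    # A: same action set in every state

def transition(s, a):
    """T(s, a, s') as P(s' | s, a). Deterministic here, so the
    distribution puts probability 1.0 on a single next state."""
    s_next = max(s - 1, 0) if a == "left" else min(s + 1, 3)
    return {s_next: 1.0}

def reward(s, a, s_next):
    """R(s, a, s'): +1 for entering the goal state, 0 otherwise."""
    return 1.0 if s_next == 3 else 0.0

def policy(s):
    """pi(s) -> a. Always moving right happens to be optimal (pi*) here."""
    return "right"

# Roll out one episode under the policy.
s, total = 0, 0.0
while s != 3:
    a = policy(s)
    dist = transition(s, a)
    s_next = random.choices(list(dist), weights=list(dist.values()))[0]
    total += reward(s, a, s_next)
    s = s_next
print(total)  # 1.0: a single +1 on entering the goal
```

Because the transition model is a distribution over next states, swapping the deterministic dictionary for, say, {intended: 0.8, other: 0.2} turns this into a stochastic gridworld without changing anything else.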

In the case of an MDP, the environment is fully observable; that is, whatever observation the agent makes at any point in time is enough to make an optimal decision. In a partially observable environment, the agent needs a memory of past observations to make the best possible decisions.
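The contrast can be sketched as follows. This is an illustrative assumption, not code from the book: a memoryless policy that acts on the current observation alone (the fully observable case) versus an agent that keeps a history of observations (the partially observable case):

```python
from collections import deque

def memoryless_policy(obs):
    """Fully observable (MDP) case: the current observation is the
    full state, so it alone is enough to choose an action."""
    return "right" if obs < 3 else "stay"

class MemoryAgent:
    """Partially observable case: the agent stores past observations
    and decides based on that history, not just the latest one."""
    def __init__(self, maxlen=10):
        self.history = deque(maxlen=maxlen)

    def act(self, obs):
        self.history.append(obs)
        # Toy rule: if the observation has not changed between the two
        # most recent steps, assume we are stuck and try the other way.
        if len(self.history) >= 2 and self.history[-1] == self.history[-2]:
            return "left"
        return "right"
```

The specific decision rules are placeholders; the point is only that the second agent's action is a function of its stored history rather than of the current observation alone.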

Let's try to break this down into different Lego blocks to understand what this overall process means.

The Markov property

In short, the Markov property states that in order to predict the near future (say, at time t+1), only the present information at time t matters; the history before that can be ignored.

Given a sequence, s_1, s_2, ..., s_t, the first...
