- The Markov property states that the future depends only on the present and not on the past.
- An MDP is an extension of the Markov chain. It provides a mathematical framework for modeling decision-making situations. Almost all RL problems can be modeled as MDPs.
- Refer to the section Discount factor.
- The discount factor determines how much importance we give to future rewards relative to immediate rewards.
- We use the Bellman equation to solve the MDP.
- Refer to the section Deriving the Bellman equation for value and Q functions.
- The value function specifies how good a state is, while the Q function specifies how good an action is in that state.
- Refer to the sections Value iteration and Policy iteration.
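The role of the discount factor described above can be sketched in a few lines of Python. The reward sequence and gamma values below are made up purely for illustration; the point is only that gamma near 0 emphasizes immediate rewards while gamma near 1 weights future rewards almost equally:

```python
def discounted_return(rewards, gamma):
    """Return R = r_0 + gamma*r_1 + gamma^2*r_2 + ..."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Hypothetical reward sequence, not from the book.
rewards = [1.0, 0.0, 2.0, 3.0]

print(discounted_return(rewards, 0.0))  # 1.0 -- only the immediate reward counts
print(discounted_return(rewards, 1.0))  # 6.0 -- the plain, undiscounted sum
```

An intermediate value such as gamma = 0.5 falls between these two extremes, shrinking each reward by half for every step it lies in the future.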
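The Bellman optimality update and value iteration mentioned above can also be sketched on a toy problem. The two-state MDP below (its states, actions, transition probabilities, and rewards) is invented for illustration and is not an example from the book; the algorithm itself is standard value iteration:

```python
# P[s][a] is a list of (probability, next_state, reward) tuples.
# Hypothetical 2-state MDP: state 1's "stay" action pays a reward of 2 forever.
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9

def value_iteration(P, gamma, theta=1e-8):
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Bellman optimality update:
            # V(s) = max_a sum_s' p(s'|s,a) * [r + gamma * V(s')]
            q = {a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                 for a in P[s]}
            best = max(q.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:   # stop once values have converged
            break
    # Extract the greedy policy from the converged value function.
    policy = {}
    for s in P:
        q = {a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
             for a in P[s]}
        policy[s] = max(q, key=q.get)
    return V, policy

V, policy = value_iteration(P, gamma)
# V[1] converges to 2 / (1 - 0.9) = 20, and the greedy policy is
# to "go" in state 0 and "stay" in state 1.
print(V, policy)
```

Policy iteration, by contrast, alternates full policy evaluation with greedy policy improvement, but both methods converge to the same optimal value function on an MDP like this one.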
You're reading from Hands-On Reinforcement Learning with Python
Sudharsan Ravichandiran is a data scientist and artificial intelligence enthusiast. He holds a Bachelors in Information Technology from Anna University. His area of research focuses on practical implementations of deep learning and reinforcement learning including natural language processing and computer vision. He is an open-source contributor and loves answering questions on Stack Overflow.