Chapter 5: Solving the Reinforcement Learning Problem

In the previous chapter, we provided the mathematical foundations for modeling a reinforcement learning problem. In this chapter, we lay the foundation for solving it. Many of the following chapters will focus on specific solution approaches that build on this foundation. To this end, we first cover the dynamic programming (DP) approach, with which we introduce some key ideas and concepts. DP methods provide optimal solutions to Markov decision processes (MDPs), yet they require complete knowledge, and a compact representation, of the state transition and reward dynamics of the environment. This can be severely limiting and impractical in realistic scenarios, where the agent is trained either directly in the environment itself or in a simulation of it. The Monte Carlo and temporal difference (TD) approaches that we cover later, unlike DP, use sampled transitions from the environment and relax the aforementioned limitations...

Exploring dynamic programming

Dynamic programming is a branch of mathematical optimization that proposes optimal solution methods to MDPs. Although most real-world problems are too complex to optimally solve via DP methods, the ideas behind these algorithms are central to many RL approaches. So, it is important to have a solid understanding of them. Throughout this chapter, we go from these exact methods to more practical approaches by systematically introducing approximations.

We start this section by describing an example that will serve as a use case for the algorithms that we will introduce throughout the chapter. Then, we will cover how to do prediction and control using DP. Let's get started!

Example use case: Inventory replenishment of a food truck

Our use case involves a food truck business that needs to decide how many burger patties to buy every weekday to replenish its inventory. Inventory planning is an important class of problems in retail and manufacturing...
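
To make this setup concrete, here is a minimal Python sketch of how such an inventory MDP could be modeled. The capacity, cost, revenue, and demand figures are illustrative assumptions for this sketch, not the exact numbers used later in the chapter.

```python
# A minimal, illustrative model of the food truck inventory MDP.
# All numeric parameters below are assumptions made for this sketch.
CAPACITY = 400                            # assumed storage capacity (patties)
UNIT_COST = 4                             # assumed purchase cost per patty
NET_REVENUE = 7                           # assumed net revenue per patty sold
ACTIONS = [0, 100, 200, 300, 400]         # possible daily order quantities
DEMAND = {100: 0.3, 200: 0.4, 300: 0.3}   # assumed weekday demand distribution


def step(inventory, order, demand):
    """One weekday: replenish, sell up to demand, return (reward, next_inventory)."""
    stocked = min(inventory + order, CAPACITY)
    sold = min(stocked, demand)
    reward = NET_REVENUE * sold - UNIT_COST * order
    return reward, stocked - sold


def expected_reward(inventory, order):
    """Exact one-step expectation, possible only because the demand distribution is known."""
    return sum(p * step(inventory, order, d)[0] for d, p in DEMAND.items())


print(expected_reward(0, 200))  # expected one-day profit of ordering 200 patties with empty stock
```

Having access to the exact transition and reward probabilities, as in expected_reward above, is precisely the kind of complete knowledge of the dynamics that the DP methods in this section rely on.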

Training your agent with Monte Carlo methods

Let's say you would like to learn the chance of flipping heads with a particular, possibly biased, coin:

  • One way of calculating this is through a careful analysis of the physical properties of the coin. Although this could give you the precise probability distribution of the outcomes, it is far from being a practical approach.
  • Alternatively, you can just flip the coin many times and look at the distribution in your sample. Your estimate could be a bit off if you don't have a large sample, but it will do the job for most practical purposes. The math involved in the latter method is also incomparably simpler (see the short sketch after this list).
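
As a minimal sketch of the second approach, the snippet below estimates the probability of heads purely from simulated flips; the 0.6 bias and the sample size are arbitrary assumptions chosen for illustration.

```python
# Monte Carlo estimate of P(heads): just sample many flips and average.
# The bias of 0.6 and the 10,000 flips are arbitrary choices for this sketch.
import numpy as np

rng = np.random.default_rng(0)
true_p_heads = 0.6                          # unknown to the estimator in practice
flips = rng.random(10_000) < true_p_heads   # simulate 10,000 coin flips
print(f"Estimated P(heads) = {flips.mean():.3f}")   # close to 0.6
```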

Just like in the coin example, we can estimate the state values and action values in an MDP from random samples. Monte Carlo (MC) estimation is a general concept that refers to making estimations through repeated random sampling. In the context of RL, it refers to a collection of methods...
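
Along the same lines, a first-visit Monte Carlo prediction routine can be sketched as follows. It estimates the state values of a fixed random policy on a toy random-walk environment; this environment, and all the numbers in it, are illustrative stand-ins rather than the chapter's food truck example.

```python
# First-visit Monte Carlo prediction on a toy 5-state random walk.
# Non-terminal states are 0..4; stepping off the right end yields reward +1.
# The environment and the episode count are assumptions made for this sketch.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
N_STATES, GAMMA = 5, 1.0


def sample_episode():
    """Follow the random policy (left/right with equal probability) until termination."""
    state, episode = N_STATES // 2, []
    while True:
        next_state = state + rng.choice([-1, 1])
        reward = 1.0 if next_state == N_STATES else 0.0
        episode.append((state, reward))
        if next_state < 0 or next_state >= N_STATES:
            return episode
        state = next_state


returns = defaultdict(list)
for _ in range(5_000):
    episode = sample_episode()
    # Compute the return G_t following each time step, working backward.
    G, returns_after = 0.0, [0.0] * len(episode)
    for t in reversed(range(len(episode))):
        G = GAMMA * G + episode[t][1]
        returns_after[t] = G
    # First-visit update: record the return from the first occurrence of each state.
    first_visit = {}
    for t, (state, _) in enumerate(episode):
        first_visit.setdefault(state, t)
    for state, t in first_visit.items():
        returns[state].append(returns_after[t])

values = {s: float(np.mean(rs)) for s, rs in sorted(returns.items())}
print(values)  # approaches {0: 1/6, 1: 2/6, 2: 3/6, 3: 4/6, 4: 5/6}
```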

Temporal-difference learning

The first class of methods we covered in this chapter for solving MDPs was DP, which:

  • Requires complete knowledge of the environment dynamics to be able to find the optimal solution,
  • Allows us to progress toward the solution with one-step updates of the value functions.

We then covered the MC methods, which:

  • Only require the ability to sample from the environment, and therefore learn from experience rather than needing to know the environment dynamics - a huge advantage over DP,
  • But need to wait for a complete episode trajectory before updating the policy.

Temporal-difference (TD) methods are, in some sense, the best of both worlds: They learn from experience, and they can update the policy after each step by bootstrapping. This comparison of TD to DP and MC is illustrated in Table 5.2.

  Method | Needs a model of the environment dynamics? | Learns from sampled experience? | Updates after every step (bootstrapping)?
  DP     | Yes                                        | No                              | Yes
  MC     | No                                         | Yes                             | No
  TD     | No                                         | Yes                             | Yes

Table 5.2 – Comparison of DP, MC, and TD learning methods

As a result, TD methods are central in RL, and you will encounter them...
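
To make the one-step, bootstrapped update concrete, here is a minimal tabular TD(0) prediction sketch. It reuses the same kind of illustrative random walk as the Monte Carlo sketch above; the environment, learning rate, and discount factor are assumptions chosen for illustration.

```python
# Tabular TD(0) prediction: after every single transition (s, r, s'),
# nudge V(s) toward the bootstrapped target r + gamma * V(s').
# The random-walk environment, ALPHA, and GAMMA are assumptions for this sketch.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, GAMMA, ALPHA = 5, 1.0, 0.1
V = np.zeros(N_STATES + 2)        # indices 0 and N_STATES + 1 are terminal (value 0)

for _ in range(5_000):
    s = N_STATES // 2 + 1         # start in the middle (1-based state indexing)
    while 1 <= s <= N_STATES:
        s_next = s + rng.choice([-1, 1])
        r = 1.0 if s_next == N_STATES + 1 else 0.0
        # The update uses only this one transition - no need to wait for the episode to end.
        V[s] += ALPHA * (r + GAMMA * V[s_next] - V[s])
        s = s_next

print(V[1:-1])                    # approaches [1/6, 2/6, 3/6, 4/6, 5/6]
```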

Understanding the importance of the simulation in reinforcement learning

As we have mentioned multiple times so far, and especially in the first chapter when we talked about RL success stories, RL's hunger for data is orders of magnitude greater than that of deep supervised learning. That is why it takes many months, and millions or even billions of iterations, to train some complex RL agents. Since it is often impractical to collect such data in a physical environment, we rely heavily on simulation models to train RL agents. This brings some challenges along with it:

  • Many businesses don't have a simulation model of their processes, which makes it challenging to bring RL technology to such companies.
  • When a simulation model exists, it is often too simplistic to capture the real-world dynamics. As a result, RL models could easily overfit to the simulation environment and may fail in deployment. It takes significant time and resources to calibrate and validate...

Summary

In this chapter, we covered three important approaches to solving MDPs: dynamic programming, Monte Carlo methods, and temporal-difference learning. We have seen that while DP provides exact solutions to MDPs, it requires knowing the precise dynamics of the environment. Monte Carlo and TD learning methods, on the other hand, explore the environment and learn from experience. TD learning, in particular, can learn from even single-step transitions in the environment. Within the chapter, we also presented on-policy methods, which estimate the value functions of the behavior policy that generates the data, and off-policy methods, which estimate the value functions of a separate target policy. Finally, we discussed the importance of the simulator in RL experiments and what to pay attention to when working with one.

Next, we take our journey to the next level and dive into deep reinforcement learning, which will enable us to solve some complex real-world problems. In particular, in the next chapter, we cover deep Q-learning in detail.

