Reinforcement Learning and Deep Reinforcement Learning

This chapter provides a concise explanation of the basic terminology and concepts in reinforcement learning. It will give you a good understanding of the basic reinforcement learning framework for developing artificial intelligent agents. This chapter will also introduce deep reinforcement learning and provide you with a flavor of the types of advanced problems the algorithms enable you to solve. You will find mathematical expressions and equations used in quite a few places in this chapter. Although there's enough theory behind reinforcement learning and deep reinforcement learning to fill a whole book, the key concepts that are useful for practical implementation are discussed in this chapter, so that when we actually implement the algorithms in Python to train our agents, you can clearly understand the logic behind...

What is reinforcement learning?

If you are new to the field of Artificial Intelligence (AI) or machine learning, you might be wondering what reinforcement learning is all about. In simple terms, it is learning through reinforcement. Reinforcement, as you may know from everyday English or from psychology, is the act of strengthening the tendency to take a particular action in response to something, because of the perceived benefit of receiving a higher reward for taking that action. We humans are good at learning through reinforcement from a very young age. Those who have kids probably use this fact often to teach them good habits. Nevertheless, we can all relate to this, because not so long ago we all went through that phase of life! Say parents reward their kid with chocolate if the kid completes their homework on time after school every day. The kid...

Understanding what AI means and what's in it in an intuitive way

The intelligence demonstrated by humans and animals is called natural intelligence, while the intelligence demonstrated by machines is called AI, for obvious reasons. We humans develop the algorithms and technologies that provide intelligence to machines. Some of the greatest developments on this front are in the fields of machine learning, artificial neural networks, and deep learning, which collectively drive the development of AI. Three main machine learning paradigms have been developed to a reasonable level of maturity so far:

  • Supervised learning
  • Unsupervised learning
  • Reinforcement learning

In the following diagram, you can get an intuitive picture of the field of AI. You can see that these learning paradigms are subsets of the field of machine learning...

Practical reinforcement learning

Now that you have an intuitive understanding of what AI really means and the various classes of algorithms that drive its development, we will focus on the practical aspects of building a reinforcement learning machine.

Here are the core concepts that you need to be aware of to develop reinforcement learning systems:

  • Agent
  • Rewards
  • Environment
  • State
  • Value function
  • Policy
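
Before looking at each of these concepts in turn, it helps to see where they show up in code. The following is a minimal sketch of the agent-environment interaction loop, using the OpenAI Gym interface as it existed around the time this book was written (env.reset() returning an observation and env.step() returning four values); the environment name and the random action choice are placeholders for illustration, not a real policy:

    import gym

    # Create an environment; CartPole-v0 is just an illustrative choice.
    env = gym.make("CartPole-v0")

    observation = env.reset()   # the initial observation (state) from the environment
    total_reward = 0.0

    for step in range(100):
        # Placeholder "policy": sample a random action from the action space.
        action = env.action_space.sample()
        # The environment responds with the next observation, a reward,
        # a done flag, and diagnostic info.
        observation, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            break

    env.close()
    print("Episode ended after", step + 1, "steps with return", total_reward)

Here the agent is the piece of code choosing the action, the environment is the Gym object, the reward and next observation come back from env.step(), and the state is what the observation describes; the value function and the policy are what the rest of this chapter is about learning.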

Agent

In the reinforcement learning world, a machine is run or instructed by a (software) agent. The agent is the part of the machine that possesses intelligence and makes decisions on what to do next. You will come across the term "agent" several times as we dive deeper into reinforcement learning. Reinforcement...

Markov Decision Process

A Markov Decision Process (MDP) provides a formal framework for reinforcement learning. It is used to describe a fully observable environment where the outcomes are partly random and partly dependent on the actions taken by the agent or decision maker. The following diagram shows the progression from a Markov Process to a Markov Decision Process via the Markov Reward Process:

These stages can be described as follows:

  • A Markov Process (or a Markov chain) is a sequence of random states s1, s2, ... that obeys the Markov property. In simple terms, it is a random process without any memory of its history: the next state depends only on the current state.
  • A Markov Reward Process (MRP) is a Markov Process (also called a Markov chain) with values.
  • A Markov Decision Process is a Markov Reward Process with decisions.
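
To make this concrete, a small, fully specified MDP can be written down as plain Python data structures. Everything below (the state names, actions, probabilities, rewards, and discount factor) is made up purely for illustration:

    # A toy MDP: transitions[state][action] is a list of
    # (probability, next_state, reward) tuples.
    transitions = {
        "s0": {
            "left":  [(1.0, "s0", 0.0)],
            "right": [(0.8, "s1", 1.0), (0.2, "s0", 0.0)],
        },
        "s1": {
            "left":  [(1.0, "s0", 0.0)],
            "right": [(1.0, "s1", 2.0)],
        },
    }
    gamma = 0.9  # discount factor

The Markov property shows up in the structure itself: the outcome of an action depends only on the current state key, not on how the agent got there.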

Planning with dynamic programming

Dynamic programming is a very general method for efficiently solving problems that can be decomposed into overlapping sub-problems. If you have ever written a recursive function, you have probably already had a preliminary taste of dynamic programming. In simple terms, dynamic programming caches or stores the results of sub-problems so that they can be reused later when required, instead of being computed all over again.

Okay, so how is that relevant here, you may ask. Well, it is pretty useful for solving a fully defined MDP: if an agent has full knowledge of the MDP, it can use dynamic programming to find the optimal way to act in the environment and achieve the highest reward! In the following table, you will find a concise summary of what the inputs and outputs are when we are interested in sequential prediction...
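
As a sketch of what planning with dynamic programming looks like when the MDP is fully known, here is a minimal value-iteration routine. It assumes a transitions dictionary with the same (probability, next_state, reward) structure as the toy MDP sketched earlier, and the stopping threshold is arbitrary:

    def value_iteration(transitions, gamma, theta=1e-6):
        """Compute state values for a fully known MDP by dynamic programming."""
        V = {s: 0.0 for s in transitions}       # initialize all state values to 0
        while True:
            delta = 0.0
            for s, actions in transitions.items():
                # Back up the expected value of each action and keep the best one.
                action_values = [
                    sum(p * (r + gamma * V[s_next]) for p, s_next, r in outcomes)
                    for outcomes in actions.values()
                ]
                best = max(action_values)
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < theta:                   # stop once the values settle
                return V

    print(value_iteration(transitions, gamma))

Each sweep reuses the previously computed values of the successor states instead of recomputing them, which is exactly the "store the results of sub-problems" idea described above.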

Monte Carlo learning and temporal difference learning

At this point, we understand that it is very useful for an agent to learn the state-value function V(s), which informs the agent about the long-term value of being in state s, so that the agent can decide if it is a good state to be in or not. The Monte Carlo (MC) and Temporal Difference (TD) learning methods enable an agent to learn that!

The goal of MC and TD learning is to learn the value functions from the agent's experience as the agent follows its policy π.

The following table summarizes the value estimate's update equation for the MC and TD learning methods:

Learning method | State-value function
Monte Carlo | V(S_t) ← V(S_t) + α [G_t − V(S_t)]
Temporal Difference | V(S_t) ← V(S_t) + α [R_{t+1} + γ V(S_{t+1}) − V(S_t)]

MC learning updates the value V(S_t) towards the actual return G_t, which is the total discounted reward from time step t until the end of the episode: G_t = R_{t+1} + γ R_{t+2} + ... + γ^(T−t−1) R_T. It is important to note that we...
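
As a minimal sketch of how these two updates look in code, assume V is a dictionary mapping states to value estimates and that an episode is recorded as a list of (state, reward received after leaving that state) pairs; the step size and discount factor below are illustrative:

    def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
        """One TD(0) step: move V(s) towards the bootstrapped target r + gamma * V(s')."""
        V[s] += alpha * (r + gamma * V[s_next] - V[s])

    def mc_update(V, episode, alpha=0.1, gamma=0.99):
        """Every-visit MC: move V(s) towards the actual return observed from each step."""
        G = 0.0
        for s, r in reversed(episode):   # walk backwards to accumulate the return
            G = r + gamma * G            # G now equals the return from state s onwards
            V[s] += alpha * (G - V[s])

The TD update can be applied after every single step, whereas the MC update has to wait until the episode ends, because the actual return is only known at that point.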

SARSA and Q-learning

It is also very useful for an agent to learn the action-value function Q(s, a), which informs the agent about the long-term value of taking action a in state s, so that the agent can take those actions that will maximize its expected, discounted future reward. The SARSA and Q-learning algorithms enable an agent to learn that! The following table summarizes the update equation for the SARSA algorithm and the Q-learning algorithm:

Learning method | Action-value function
SARSA | Q(S_t, A_t) ← Q(S_t, A_t) + α [R_{t+1} + γ Q(S_{t+1}, A_{t+1}) − Q(S_t, A_t)]
Q-learning | Q(S_t, A_t) ← Q(S_t, A_t) + α [R_{t+1} + γ max_a Q(S_{t+1}, a) − Q(S_t, A_t)]

SARSA is so named because of the sequence State->Action->Reward->State'->Action' that the algorithm's update step depends on. The description of the sequence goes like this: the agent, in state S, takes an action A and gets a reward R, and ends up in the next state S', after which the agent decides to take an action A' in the new state...
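
The difference between the two updates is easiest to see side by side in code. Here is a minimal tabular sketch, assuming Q is a dictionary mapping (state, action) pairs to value estimates and actions is the set of actions available in the next state; all names and hyperparameters are illustrative:

    def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
        """SARSA (on-policy): the target uses the action A' the agent actually takes in S'."""
        Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])

    def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
        """Q-learning (off-policy): the target uses the best action available in S'."""
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

SARSA therefore learns about the policy it is actually following, while Q-learning learns about the greedy policy regardless of how the agent behaves while exploring.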

Deep reinforcement learning

With a basic understanding of reinforcement learning, you are now in a better state (hopefully you are not in a strictly Markov state where you have forgotten the history/things you have learned so far) to understand the basics of the cool new suite of algorithms that have been rocking the field of AI in recent times.

Deep reinforcement learning emerged naturally as people took advances from the deep learning field and applied them to reinforcement learning. We learned about the state-value function, the action-value function, and the policy. Let's briefly look at how they can be represented mathematically or realized through computer code. The state-value function is a real-valued function that takes the current state as input and outputs a real number (such as 4.57). This number is the agent's prediction of how good it is to be in...
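
For small problems, the state-value function can literally be a table or dictionary like the V used in the earlier sketches, but for large or continuous state spaces it is usually represented by a function approximator. The following is a minimal sketch of a neural-network value function; PyTorch is used here purely for illustration, and the layer sizes are arbitrary:

    import torch
    import torch.nn as nn

    class ValueNetwork(nn.Module):
        """Maps an observation (state) vector to a single real-valued estimate V(s)."""
        def __init__(self, obs_dim, hidden_dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 1),   # one real number: the value of the state
            )

        def forward(self, obs):
            return self.net(obs)

    # Example: the estimated value of a (made-up) 4-dimensional state.
    value_fn = ValueNetwork(obs_dim=4)
    state = torch.rand(1, 4)
    print(value_fn(state))   # e.g. tensor([[0.0571]]) -- the agent's value estimate

An action-value function or a policy can be represented the same way, with the network outputting one value per action, or a distribution over actions, instead of a single number.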

Practical applications of reinforcement and deep reinforcement learning algorithms

Until recently, practical applications of reinforcement learning and deep reinforcement learning were limited due to sample complexity and instability. But these algorithms have proved to be quite powerful in solving some really hard practical problems. Some of them are listed here to give you an idea:

  • Learning to play video games better than humans: This news has probably reached you by now. Researchers at DeepMind and others developed a series of algorithms, starting with DeepMind's Deep Q-Network, or DQN for short, which reached human-level performance in playing Atari games. We will actually be implementing this algorithm in a later chapter of this book! In essence, it is a deep variant of the Q-learning algorithm we briefly saw in this chapter, with a few changes that increased the speed...

Summary

In this chapter, we discussed how an agent interacts with an environment by taking an action based on the observation it receives, and how the environment responds to that action with an (optional) reward and the next observation.

With a concise understanding of the foundations of reinforcement learning, we went deeper to understand what deep reinforcement learning is, and uncovered the fact that we could use deep neural networks to represent value functions and policies. Although this chapter was a little heavy on notation and definitions, hopefully it laid a strong foundation for us to develop some cool agents in the upcoming chapters. In the next chapter, we will consolidate our learning in the first two chapters and put it to use by laying out the groundwork to train an agent to solve some interesting problems.
