Reinforcement Learning Algorithms with Python

By Andrea Lonza

About this book

Reinforcement Learning (RL) is a popular and promising branch of AI that involves making smarter models and agents that can automatically determine ideal behavior based on changing requirements. This book will help you master RL algorithms and understand their implementation as you build self-learning agents.

Starting with an introduction to the tools, libraries, and setup needed to work in the RL environment, this book covers the building blocks of RL and delves into value-based methods, such as the application of Q-learning and SARSA algorithms. You'll learn how to use a combination of Q-learning and neural networks to solve complex problems. Furthermore, you'll study policy gradient methods, such as TRPO and PPO, to improve performance and stability, before moving on to the DDPG and TD3 deterministic algorithms. This book also covers how imitation learning techniques work and how DAgger can teach an agent to drive. You'll discover evolution strategies and black-box optimization techniques, and see how they can improve RL algorithms. Finally, you'll get to grips with exploration approaches, such as UCB and UCB1, and develop a meta-algorithm called ESBAS.

By the end of the book, you'll have worked with key RL algorithms to overcome challenges in real-world applications, and be part of the RL research community.

Publication date: October 2019
Publisher: Packt
Pages: 366
ISBN: 9781789131116

 

The Landscape of Reinforcement Learning

Humans and animals learn through a process of trial and error. This process is based on our reward mechanisms, which provide a response to our behaviors. The goal of this process is, through repetition, to incentivize actions that trigger positive responses and to disincentivize actions that trigger negative ones. Through the trial and error mechanism, we learn to interact with the people and world around us, and to pursue complex, meaningful goals, rather than immediate gratification.

Learning through interaction and experience is essential. Imagine having to learn to play football by only looking at other people playing it. If you took to the field to play a football match based on this learning experience, you would probably perform incredibly poorly.

This was demonstrated throughout the mid-20th century, notably by Richard Held and Alan Hein's 1963 study of two kittens, both raised on a carousel. One kitten was able to move freely (actively), whilst the other was restrained and moved following the active kitten (passively). When both kittens were introduced to light, only the kitten that had been able to move actively developed functioning depth perception and motor skills, whilst the passive kitten did not. This was notably demonstrated by the absence of the passive kitten's blink reflex toward incoming objects. What this rather crude experiment demonstrated is that visual stimulation alone is not enough; physical interaction with the environment is necessary in order for animals to learn.

Inspired by how animals and humans learn, reinforcement learning (RL) is built around the idea of trial and error from active interactions with the environment. In particular, with RL, an agent learns incrementally as it interacts with the world. In this way, it's possible to train a computer to learn and behave in a rudimentary, yet similar way to how humans do.

This book is all about reinforcement learning. The intent of the book is to give you the best possible understanding of this field with a hands-on approach. In the first chapters, you'll start by learning the most fundamental concepts of reinforcement learning. As you grasp these concepts, we'll start developing our first reinforcement learning algorithms. Then, as the book progresses, you'll create more powerful and complex algorithms to solve more interesting and compelling problems. You'll see that reinforcement learning is very broad and that there exist many algorithms that tackle a variety of problems in different ways. Nevertheless, we'll do our best to provide you with a simple but complete description of all the ideas, alongside a clear and practical implementation of the algorithms.

To start with, in this chapter, you'll familiarize yourself with the fundamental concepts of RL, the distinctions between different approaches, and the key concepts of policy, value function, reward, and model of the environment. You'll also learn about the history and applications of RL.

The following topics will be covered in this chapter:

  • An introduction to RL
  • Elements of RL
  • Applications of RL
 

An introduction to RL

RL is an area of machine learning that deals with sequential decision-making, aimed at reaching a desired goal. An RL problem is made up of a decision-maker, called the agent, and the physical or virtual world in which the agent operates, known as the environment. The agent interacts with the environment by taking actions, each of which produces an effect. In response, the environment feeds back to the agent a new state and a reward. These two signals are the consequences of the action taken by the agent. In particular, the reward is a value indicating how good or bad the action was, and the state is the current representation of the agent and the environment. This cycle is shown in the following diagram:

In this diagram, the agent is represented by PacMan, which, based on the current state of the environment, chooses which action to take. Its behavior influences the environment, such as its own position and that of the enemies, and these changes are returned by the environment in the form of a new state and a reward. This cycle is repeated until the game ends.

The ultimate goal of the agent is to maximize the total reward accumulated during its lifetime. Let's simplify the notation: if a_t is the action at time t and r_t is the reward at time t, then the agent takes the actions a_1, a_2, ..., a_T so as to maximize the sum of all the rewards, r_1 + r_2 + ... + r_T.
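
To make this cycle concrete, the following is a minimal sketch of the agent-environment loop written with the classic OpenAI Gym API (which we will set up in the next chapter); the CartPole environment and the random action are only placeholders for a real task and a learned policy:

    import gym

    env = gym.make('CartPole-v1')        # the environment
    state = env.reset()                  # initial state
    total_reward = 0.0
    done = False

    while not done:
        action = env.action_space.sample()             # placeholder: a real agent would query its policy here
        state, reward, done, info = env.step(action)   # the environment returns the new state and the reward
        total_reward += reward                         # the quantity the agent tries to maximize

    print('Total reward accumulated:', total_reward)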

To maximize the cumulative reward, the agent has to learn the best behavior in every situation. To do so, it has to optimize for a long-term horizon while taking care of every single action. In environments with many discrete or continuous states and actions, learning is difficult because the agent has to account for every situation. To make the problem harder, RL rewards can be very sparse and delayed, making the learning process more arduous.

To give an example of an RL problem while explaining the complexity of a sparse reward, consider the well-known story of two siblings, Hansel and Gretel. Their parents led them into the forest to abandon them, but Hansel, who knew of their intentions, had taken a slice of bread with him when they left the house and managed to leave a trail of breadcrumbs that would lead him and his sister home. In the RL framework, the agents are Hansel and Gretel, and the environment is the forest. A reward of +1 is obtained for every crumb of bread reached and a reward of +10 is acquired when they reach home. In this case, the denser the trail of bread, the easier it will be for the siblings to find their way home. This is because to go from one piece of bread to another, they have to explore a smaller area. Unfortunately, sparse rewards are far more common than dense rewards in the real world.

An important characteristic of RL is that it can deal with environments that are dynamic, uncertain, and non-deterministic. These qualities are essential for the adoption of RL in the real world. The following points are examples of how real-world problems can be reframed in RL settings:

  • Self-driving cars are a popular, yet difficult, concept to approach with RL. This is because of the many aspects to be taken into consideration while driving on the road (such as pedestrians, other cars, bikes, and traffic lights) and the highly uncertain environment. In this case, the self-driving car is the agent that can act on the steering wheel, accelerator, and brakes. The environment is the world around it. Obviously, the agent cannot be aware of the whole world around it, as it can only capture limited information via its sensors (for example, the camera, radar, and GPS). The goal of the self-driving car is to reach the destination in the minimum amount of time while following the rules of the road and without damaging anything. Consequently, the agent can receive a negative reward if a negative event occurs, and a positive reward, inversely proportional to the driving time, when it reaches its destination.
  • In the game of chess, the goal is to checkmate the opponent's king. In an RL framework, the player is the agent and the environment is the current state of the board. The agent is allowed to move the game pieces according to the rules governing how each piece moves. As a result of an action, the environment returns a positive or negative reward corresponding to a win or a loss for the agent. In all other situations, the reward is 0 and the next state is the state of the board after the opponent has moved. Unlike the self-driving car example, here, the environment state equals the agent state. In other words, the agent has a perfect view of the environment.

Comparing RL and supervised learning

RL and supervised learning are similar, yet different, paradigms to learn from data. Many problems can be tackled with both supervised learning and RL; however, in most cases, they are suited to solve different tasks.

Supervised learning learns to generalize from a fixed dataset of examples. Each example is composed of an input and the desired output (or label), which provides immediate learning feedback.

In comparison, RL is more focused on sequential actions that you can take in a particular situation. In this case, the only supervision provided is the reward signal. There's no correct action to take in a circumstance, as in the supervised settings.

RL can be viewed as a more general and complete framework for learning. The major characteristics that are unique to RL are as follows:

  • The reward could be dense, sparse, or very delayed. In many cases, the reward is obtained only at the end of the task (for example, in the game of chess).
  • The problem is sequential and time-dependent; actions will affect the next actions, which, in turn, influence the possible rewards and states.
  • An agent has to take actions with a higher potential to achieve a goal (exploitation), but it should also try different actions to ensure that other parts of the environment are explored (exploration). This problem is called the exploration-exploitation dilemma (or exploration-exploitation trade-off), and it involves the difficult task of balancing the exploration and exploitation of the environment (a simple strategy for this trade-off is sketched after this list). This is also very important because, unlike supervised learning, RL can influence the environment, since it is free to collect new data for as long as it deems it useful.
  • The environment can be stochastic and nondeterministic, and the agent has to take this into consideration when learning and predicting the next action. In fact, we'll see that many RL components can be designed to output either a single deterministic value or a range of values along with their probabilities.
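
As an illustration of the exploration-exploitation trade-off mentioned in the preceding list, here is a minimal sketch of the classic epsilon-greedy strategy; the array of estimated action values passed to it is a hypothetical input, not something computed in this chapter:

    import numpy as np

    def epsilon_greedy(q_values, epsilon=0.1):
        """With probability epsilon, take a random action (exploration);
        otherwise, take the action with the highest estimated value (exploitation)."""
        if np.random.rand() < epsilon:
            return np.random.randint(len(q_values))   # explore
        return int(np.argmax(q_values))               # exploit

    # Example: with these hypothetical estimates, action 2 is picked most of the time
    action = epsilon_greedy(np.array([0.1, 0.5, 1.2, 0.3]), epsilon=0.1)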

The third type of learning is unsupervised learning, and this is used to identify patterns in data without giving any supervised information. Data compression, clustering, and generative models are examples of unsupervised learning. It can also be adopted in RL settings in order to explore and learn about the environment. The combination of unsupervised learning and RL is called unsupervised RL. In this case, no reward is given, and the agent could generate an intrinsic motivation to favor new situations in which it can explore the environment.

It's worth noting that the problems associated with self-driving cars have also been addressed as a supervised learning problem, but with poor results. The main problem derives from the fact that the distribution of data the agent encounters during its lifetime differs from the distribution used during training.

History of RL

The first mathematical foundation of RL was built during the 1960s and 1970s in the field of optimal control. This addressed the problem of minimizing a measure of a dynamic system's behavior over time. The method involved solving a set of equations using the known dynamics of the system. During this time, the key concept of a Markov decision process (MDP) was introduced. This provides a general framework for modeling decision-making in stochastic situations. During these years, a solution method for optimal control called dynamic programming (DP) was introduced. DP is a method that breaks down a complex problem into a collection of simpler subproblems in order to solve an MDP.

Note that DP only provides an easier way to solve optimal control for systems with known dynamics; there is no learning involved. It also suffers from the problem of the curse of dimensionality because the computational requirements grow exponentially with the number of states.

Even if these methods don't involve learning, as noted by Richard S. Sutton and Andrew G. Barto, we must consider the solution methods of optimal control, such as DP, to also be RL methods.

In the 1980s, the concept of learning by temporally successive predictions—the so-called temporal difference learning (TD learning) method—was finally introduced. TD learning introduced a new family of powerful algorithms that will be explained in this book.

The first problems solved with TD learning were small enough to be represented in tables or arrays. These methods are called tabular methods; they often find an optimal solution, but they are not scalable. In fact, many RL tasks involve huge state spaces, making tabular methods impossible to adopt. In these problems, function approximation is used to find a good approximate solution with fewer computational resources.

The adoption of function approximation and, in particular, of artificial neural networks (and deep neural networks) in RL is not trivial; however, as shown on many occasions, they are able to achieve amazing results. The use of deep learning in RL is called deep reinforcement learning (deep RL), and it has achieved great popularity ever since, in 2015, a deep RL algorithm named deep Q-network (DQN) displayed a superhuman ability to play Atari games from raw images. Another striking achievement of deep RL was AlphaGo, which in 2016 became the first program to beat Lee Sedol, a professional Go player and 18-time world champion. These breakthroughs not only showed that machines can perform better than humans in high-dimensional spaces (using the same perception as humans with respect to images), but also that they can behave in interesting ways. An example of this is the creative shortcut found by a deep RL system while playing Breakout, an Atari arcade game in which the player has to destroy all the bricks, as shown in the following image. The agent found that just by creating a tunnel on the left-hand side of the bricks and by putting the ball in that direction, it could destroy many more bricks and thus increase its overall score with just one move.

There are many other interesting cases where agents exhibit superb behaviors or strategies that weren't known to humans, such as a move performed by AlphaGo while playing Go against Lee Sedol. From a human perspective, that move seemed nonsensical, but it ultimately allowed AlphaGo to win the game (it is known as move 37).

Nowadays, when dealing with high-dimensional state or action spaces, the use of deep neural networks as function approximations becomes almost a default choice. Deep RL has been applied to more challenging problems, such as data center energy optimization, self-driving cars, multi-period portfolio optimization, and robotics, just to name a few. 

Deep RL

Now you could ask yourself—why can deep learning combined with RL perform so well? Well, the main answer is that deep learning can tackle problems with a high-dimensional state space. Before the advent of deep RL, state spaces had to be broken down into simpler representations, called features. These were difficult to design and, in some cases, only an expert could do it. Now, using deep neural networks such as a convolutional neural network (CNN) or a recurrent neural network (RNN), RL can learn different levels of abstraction directly from raw pixels or sequential data (such as natural language). This configuration is shown in the following diagram:

Furthermore, deep RL problems can now be solved completely in an end-to-end fashion. Before the deep learning era, an RL algorithm involved two distinct pipelines: one to deal with the perception of the system and one to be responsible for the decision-making. Now, with deep RL algorithms, these processes are joined and are trained end-to-end, from the raw pixels straight to the action. For example, as shown in the preceding diagram, it's possible to train Pacman end-to-end using a CNN to process the visual component and a fully connected neural network (FNN) to translate the output of the CNN into an action.
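
As a rough sketch of what such an end-to-end architecture could look like (the input resolution, layer sizes, and number of actions here are illustrative choices, not values taken from this book), a CNN followed by a fully connected head can be defined with tf.keras as follows:

    import tensorflow as tf

    n_actions = 9  # hypothetical number of discrete actions (for example, Pacman's moves)

    model = tf.keras.Sequential([
        # convolutional layers extract visual features directly from raw pixels
        tf.keras.layers.Conv2D(32, 8, strides=4, activation='relu', input_shape=(84, 84, 4)),
        tf.keras.layers.Conv2D(64, 4, strides=2, activation='relu'),
        tf.keras.layers.Conv2D(64, 3, strides=1, activation='relu'),
        tf.keras.layers.Flatten(),
        # fully connected layers map the visual features to a score for each action
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dense(n_actions)
    ])

The whole stack is trained as a single network, so the gradient of the RL objective flows from the action scores all the way back to the pixels.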

Nowadays, deep RL is a very hot topic. The principal reason for this is that deep RL is thought to be the type of technology that will enable us to build highly intelligent machines. As proof, two of the most renowned AI companies working to solve intelligence problems, namely DeepMind and OpenAI, are heavily invested in RL research.

Besides the huge steps achieved with deep RL, there is a long way to go. There are many challenges that still need to be addressed, some of which are listed as follows:

  • Deep RL learns far more slowly than humans.
  • Transfer learning in RL is still an open problem.
  • The reward function is difficult to design and define.
  • RL agents struggle to learn in highly complex and dynamic environments such as the physical world.

Nonetheless, the research in this field is growing at a fast rate and companies are starting to adopt RL in their products.

 

Elements of RL

As we know, an agent interacts with its environment by means of actions. These will cause the environment to change and to feed back to the agent a reward that is proportional to the quality of the action and the new state of the agent. Through trial and error, the agent incrementally learns the best action to take in every situation so that, in the long run, it will achieve a larger cumulative reward. In the RL framework, the choice of the action in a particular state is made by a policy, and the cumulative reward that is achievable from that state is called the value function. In brief, if an agent wants to behave optimally, then in every situation, the policy has to select the action that will bring it to the next state with the highest value. Now, let's take a deeper look at these fundamental concepts.

Policy

The policy defines how the agent selects an action given a state. The policy chooses the action that maximizes the cumulative reward from that state, not the one with the largest immediate reward. It looks after the long-term goal of the agent. For example, if a car has another 30 km to go before reaching its destination, but only has 10 km of range left, and the next gas stations are 1 km and 60 km away, then the policy will choose to get fuel at the first gas station (1 km away) in order to not run out of gas. This decision is not optimal in the immediate future, as it will take some time to refuel, but it ensures that the goal is ultimately accomplished.

The following diagram shows a simple example where an actor moving in a 4 x 4 grid has to go toward the star while avoiding the spirals. The actions recommended by a policy are indicated by an arrow pointing in the direction of the move. The diagram on the left shows a random initial policy, while the diagram on the right shows the final optimal policy. In a situation with two equally optimal actions, the agent can arbitrarily choose which action to take:

An important distinction is between stochastic policies and deterministic policies. In the deterministic case, the policy provides a single, deterministic action to take. In the stochastic case, the policy provides a probability for each action. The concept of the probability of an action is useful because it takes the dynamics of the environment into account and helps with its exploration.
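
The difference can be sketched in a few lines of code; the action scores and probabilities below are hypothetical outputs of a deterministic and a stochastic policy, respectively:

    import numpy as np

    q_values = np.array([0.2, 1.1, 0.4])        # hypothetical scores for three actions
    probabilities = np.array([0.1, 0.7, 0.2])   # hypothetical action probabilities (they sum to 1)

    # Deterministic policy: always pick the single best-scoring action
    deterministic_action = int(np.argmax(q_values))

    # Stochastic policy: sample an action according to its probability, so even
    # sub-optimal actions are occasionally tried, which helps exploration
    stochastic_action = int(np.random.choice(len(probabilities), p=probabilities))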

One way to classify RL algorithms is based on how policies are improved during learning. The simpler case is when the policy that acts on the environment is the same as the one that is improved while learning. Another way to say this is that the policy learns from the same data that it generates. These algorithms are called on-policy. Off-policy algorithms, in comparison, involve two policies—one that acts on the environment and another that learns but is not actually used to act. The former is called the behavior policy, while the latter is called the target policy. The goal of the behavior policy is to interact with and collect information about the environment in order to improve the passive target policy. Off-policy algorithms, as we will see in the coming chapters, are more unstable and difficult to design than on-policy algorithms, but they are more sample efficient, meaning that they require less experience to learn.

To better understand these two concepts, we can think of someone who has to learn a new skill. If the person behaves as on-policy algorithms do, then every time they try a sequence of actions, they'll change their belief and behavior in accordance with the reward accumulated. In comparison, if the person behaves as an off-policy algorithm, they (the target policy) can also learn by looking at an old video of themselves (the behavior policy) doing the same skill—that is, they can use old experiences to help them to improve.

The policy-gradient method is a family of RL algorithms that learns a parametrized policy (for example, a deep neural network) directly from the gradient of the performance with respect to the policy parameters. These algorithms have many advantages, including the ability to deal with continuous actions and to explore the environment with different levels of granularity. They will be presented in greater detail in Chapter 6, Learning Stochastic and PG Optimization, Chapter 7, TRPO and PPO Implementation, and Chapter 8, DDPG and TD3 Applications.

The value function

The value function represents the long-term quality of a state. It is the cumulative reward that is expected in the future if the agent starts from a given state. While the reward measures immediate performance, the value function measures performance in the long run. This means that a high reward doesn't imply a high value, and a low reward doesn't imply a low value.
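
Stated informally (and omitting the discount factor, which is introduced later in the book), the two forms of the value function described in the next paragraph can be written as follows, where E denotes the expected value obtained by following a given policy:

    V(s)    = E[ r_1 + r_2 + r_3 + ... | starting state = s ]
    Q(s, a) = E[ r_1 + r_2 + r_3 + ... | starting state = s, first action = a ]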

Moreover, the value function can be a function of the state or of the state-action pair. The former case is called a state-value function, while the latter is called an action-value function:

Here, the diagram shows the final state values (on the left side) and the corresponding optimal policy (on the right side).

Using the same gridworld example used to illustrate the concept of policy, we can show the state-value function. First of all, we can assume a reward of 0 in each situation except for when the agent reaches the star, gaining a reward of +1. Moreover, let's assume that a strong wind moves the agent in another direction with a probability of 0.33. In this case, the state values will be similar to those shown in the left-hand side of the preceding diagram. An optimal policy will choose the actions that will bring it to the next state with the highest state value, as shown in the right-hand side of the preceding diagram.
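
A minimal sketch of this last step follows; the state values and the deterministic transition table are hypothetical, and are used only to show how a policy can act greedily with respect to a state-value function:

    def greedy_action(state, V, transitions):
        """Return the action whose successor state has the highest estimated value.

        V           -- dict mapping each state to its estimated value
        transitions -- dict mapping (state, action) pairs to the next state
                       (a deterministic model of the grid, for illustration only)
        """
        available = [a for (s, a) in transitions if s == state]
        return max(available, key=lambda a: V[transitions[(state, a)]])

    # Hypothetical values and transitions for a tiny 2 x 2 grid; the goal is state (1, 1)
    V = {(0, 0): 0.5, (0, 1): 0.7, (1, 0): 0.7, (1, 1): 1.0}
    transitions = {((0, 0), 'right'): (0, 1), ((0, 0), 'down'): (1, 0),
                   ((0, 1), 'down'): (1, 1), ((1, 0), 'right'): (1, 1)}

    print(greedy_action((0, 0), V, transitions))   # 'right' and 'down' are equally good; 'right' is returned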

Action-value methods (or value-function methods) are the other big family of RL algorithms. These methods learn an action-value function and use it to choose the actions to take. Starting from Chapter 3, Solving Problems with Dynamic Programming, you'll learn more about these algorithms. It's worth noting that some policy-gradient methods, in order to combine the advantages of both methods, can also use a value function to learn the appropriate policy. These methods are called actor-critic methods. The following diagram shows the three main families of RL algorithms:

Reward

At each timestep, that is, after each move of the agent, the environment sends the agent a number that indicates how good that action was. This is called a reward. As we have already mentioned, the end goal of the agent is to maximize the cumulative reward obtained during its interaction with the environment.

In the literature, the reward is assumed to be a part of the environment, but that's not strictly true in reality. The reward can also come from the agent, but never from its decision-making part. For this reason, and to simplify the formulation, the reward is always considered to be sent by the environment.

The reward is the only supervision signal injected into the RL cycle and it is essential to design the reward in the correct way in order to obtain an agent with good behavior. If the reward has some flaws, the agent may find them and follow incorrect behavior. For example, Coast Runners is a boat-racing game with the goal being to finish ahead of other players. During the route, the boats are rewarded for hitting targets. Some folks at OpenAI trained an agent with RL to play it. They found that, instead of running to the finish line as fast as possible, the trained boat was driving in a circle to capture re-populating targets while crashing and catching fire. In this way, the boat found a way to maximize the total reward without acting as expected. This behavior was due to an incorrect balance between short-term and long-term rewards.

The reward can appear with different frequencies depending on the environment. A frequent reward is called a dense reward; however, if it is seen only a few times during a game, or only at its end, it is called a sparse reward. In the latter case, it could be very difficult for an agent to catch the reward and find the optimal actions.

Imitation learning and inverse RL are two powerful techniques that deal with the absence of a reward in the environment. Imitation learning uses an expert demonstration to map states to actions. On the other hand, inverse RL deduces the reward function from an expert optimal behavior. Imitation learning and inverse RL will be studied in Chapter 10, Imitation Learning with the DAgger Algorithm.

Model

The model is an optional component of the agent, meaning that it is not required in order to find a policy for the environment. The model details how the environment behaves, predicting the next state and the reward given a state and an action. If the model is known, planning algorithms can be used to interact with it and recommend future actions. For example, in environments with discrete actions, potential trajectories can be simulated using lookahead searches (for instance, Monte Carlo tree search).

The model of the environment could either be given in advance or learned through interactions with it. If the environment is complex, it's a good idea to approximate it using deep neural networks. RL algorithms that use an already known model of the environment, or learn one, are called model-based methods. These solutions are opposed to model-free methods and will be explained in more detail in Chapter 9, Model-Based RL.
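
As a sketch of what using a model means in practice, the following shows a hypothetical model interface and a one-step lookahead that plans by querying the model instead of acting in the real environment; the function names and signatures are illustrative, not part of any specific library:

    def plan_one_step(state, actions, model, value_estimate):
        """Pick the action with the best predicted one-step outcome.

        model          -- callable (state, action) -> (next_state, reward), either given or learned
        value_estimate -- callable state -> estimated long-term value of that state
        """
        best_action, best_score = None, float('-inf')
        for action in actions:
            next_state, reward = model(state, action)      # simulate the step, don't take it for real
            score = reward + value_estimate(next_state)    # immediate reward plus predicted future value
            if score > best_score:
                best_action, best_score = action, score
        return best_action

Deeper searches, such as Monte Carlo tree search, extend this idea by simulating whole trajectories with the model before committing to an action.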

 

Applications of RL

RL has been applied to a wide variety of fields, including robotics, finance, healthcare, and intelligent transportation systems. In general, these applications can be grouped into three major areas—automatic machines (such as autonomous vehicles, smart grids, and robotics), optimization processes (for example, planned maintenance, supply chains, and process planning), and control (for example, fault detection and quality control).

In the beginning, RL was only ever applied to simple problems, but deep RL opened the road to different problems, making it possible to deal with more complex tasks. Nowadays, deep RL has been showing some very promising results. Unfortunately, many of these breakthroughs are limited to research applications or games, and, in many situations, it is not easy to bridge the gap between purely research-oriented applications and industry problems. Despite this, more companies are moving toward the adoption of RL in their industries and products.

We will now take a look at the principal fields that are already adopting or will benefit from RL.

Games 

Games are a perfect testbed for RL because they are created in order to challenge human capabilities, and, to complete them, skills common to the human brain are required (such as memory, reasoning, and coordination). Consequently, a computer that can play on the same level or better than a human must possess the same qualities. Moreover, games are easy to reproduce and can be easily simulated in computers. Video games proved to be very difficult to solve because of their partial observability (that is, only a fraction of the game is visible) and their huge search space (that is, it's impossible for a computer to simulate all possible configurations). 

A breakthrough in games occurred in 2016, when AlphaGo beat Lee Sedol in the ancient game of Go. This win came despite predictions to the contrary: at the time, it was thought that no computer would be able to beat an expert Go player for another 10 years. AlphaGo used both RL and supervised learning to learn from professional human games. A few years after that match, the next version, named AlphaGo Zero, beat AlphaGo 100 games to 0. AlphaGo Zero learned to play Go in only three days through self-play.

Self-play is a very effective way to train an algorithm because it just plays against itself. Through self-play, useful sub-skills or behaviors could also emerge that otherwise would not have been discovered.

To capture the messiness and continuous nature of the real world, a team of five neural networks named OpenAI Five was trained to play DOTA 2, a real-time strategy game with two teams (each with five players) playing against each other. The steep learning curve in playing this game is due to the long time horizons (a game lasts for 45 minutes on average with thousands of actions), the partial observability (each player can only see a small area around themselves), and the high-dimensional continuous action and observation space. In 2018, OpenAI Five played against the top DOTA 2 players at The International, losing the match but showing innate capabilities in both collaboration and strategy skills. Finally, on April 13, 2019, OpenAI Five officially defeated the world champions in the game, becoming the first AI to beat professional teams in an esports game.

Robotics and Industry 4.0

RL in industrial robotics is a very active area of research as it is a natural adoption of this paradigm in the real world. The potential and benefit of industrial intelligent robots are huge and extensive. RL enables Industry 4.0 (referred to as the fourth industrial revolution) with intelligent devices, systems, and robots that perform highly complex and rational operations. Systems that predict maintenance, real-time diagnoses, and management of manufacturing activities can be integrated for better control and productivity. 

Machine learning

Thanks to the flexibility of RL, it can be employed not only in standalone tasks but also as a sort of fine-tuning method for supervised learning algorithms. In many natural language processing (NLP) and computer vision tasks, the metric to optimize isn't differentiable, so to address the problem in a supervised setting with neural networks, an auxiliary differentiable loss function is needed. However, the discrepancy between the two loss functions will penalize the final performance. One way to deal with this is to first train the system using supervised learning with the auxiliary loss function, and then use RL to fine-tune the network by optimizing with respect to the final metric. For instance, this process can be of benefit in subfields such as machine translation and question answering, where the evaluation metrics are complex and not differentiable.

Furthermore, RL can solve NLP problems such as dialogue systems and text generation. Computer vision, localization, motion analysis, visual control, and visual tracking can all be trained with deep RL.

Deep learning overcomes the heavy task of manual feature engineering, but it still requires the manual design of the neural network architecture. This is tedious work involving many parts that have to be combined in the best possible way. So, why can we not automate it? Well, actually, we can. Neural architecture search (NAS) is an approach that uses RL to design the architecture of deep neural networks. It is computationally very expensive, but this technique is able to create DNN architectures that achieve state-of-the-art results in image classification.

Economics and finance

Business management is another natural application of RL. RL has been successfully used in internet advertising to maximize the return from pay-per-click adverts, as well as for product recommendations, customer management, and marketing. Furthermore, finance has benefited from RL for tasks such as option pricing and multi-period optimization.

Healthcare

RL is used in healthcare both for diagnosis and treatment. It can build the baseline for an AI-powered assistant for doctors and nurses. In particular, RL can provide individual progressive treatments for patients—a process known as the dynamic treatment regime. Other examples of RL in healthcare are personalized glycemic control and personalized treatments for sepsis and HIV.

Intelligent transportation systems

Intelligent transportation systems can be empowered with RL to develop and improve all types of transportation systems. Its application can range from smart networks that control congestion (such as traffic signal controls), traffic surveillance, and safety (such as collision predictions), to self-driving cars.

Energy optimization and smart grid

Energy optimization and smart grids are central to the intelligent generation, distribution, and consumption of electricity. Decision energy systems and control energy systems can adopt RL techniques to provide a dynamic response to the variability of the environment. RL can also be used to adjust the demand for electricity in response to dynamic energy pricing, or to reduce energy usage.

 

Summary

RL is a goal-oriented approach to decision-making. It differs from other paradigms due to its direct interaction with the environment and its delayed reward mechanism. The combination of RL and deep learning is very useful for problems with high-dimensional state spaces and for problems with perceptual inputs. The concepts of policy and value function are key, as they indicate the action to take and the quality of the states of the environment. In RL, a model of the environment is not required, but it can give additional information and, therefore, improve the quality of the policy.

Now that all the key concepts have been introduced, in the following chapters, the focus will be on actual RL algorithms. But first, in the next chapter, you will be given the grounding to develop RL algorithms using OpenAI and TensorFlow. 

 

Questions

  • What is RL?
  • What is the end goal of an agent?
  • What are the main differences between supervised learning and RL?
  • What are the benefits of combining deep learning and RL?
  • Where does the term "reinforcement" come from?
  • What is the difference between policy and value functions?
  • Can the model of an environment be learned through interacting with it?
 

Further reading

About the Author

  • Andrea Lonza

    Andrea Lonza is a deep learning engineer with a great passion for artificial intelligence and a desire to create machines that act intelligently. He has acquired expert knowledge in reinforcement learning, natural language processing, and computer vision through academic and industrial machine learning projects. He has also participated in several Kaggle competitions, achieving high results. He is always looking for compelling challenges and loves to prove himself.
