Ways to Speed up RL

In Chapter 8, DQN Extensions, you saw several practical tricks to make the deep Q-network (DQN) method more stable and converge faster. They involved modifications to the basic DQN method (such as injecting noise into the network or unrolling the Bellman equation) to get a better policy with less time spent on training. But there is another way: tweaking the implementation details of the method to improve the speed of training. This is a pure engineering approach, but it is also important in practice.

In this chapter, we will:

  • Take the Pong environment from Chapter 8 and try to get it solved as fast as possible
  • In a step-by-step manner, get Pong solved 3.5 times faster using exactly the same commodity hardware
  • Discuss fancier ways to speed up reinforcement learning (RL) training that could become common in the future

Why speed matters

First, let's talk a bit about why speed is important and why we optimize it at all. It might not be obvious, but enormous hardware performance improvements have happened over the last decade or two. Fourteen years ago, I was involved in a project that focused on building a supercomputer for computational fluid dynamics (CFD) simulations performed by an aircraft engine design company. The system consisted of 64 servers, occupied three 42U racks, and required dedicated cooling and power subsystems. The hardware alone (without cooling) cost almost $1M.

In 2005, this supercomputer ranked fourth among Russian supercomputers and was the fastest system installed in industry. Its theoretical performance was 922 GFLOPS (billions of floating-point operations per second), but in comparison to the GTX 1080 Ti released 12 years later, all the capabilities of this pile of iron look tiny.

A single GTX 1080 Ti is able to perform 11,340 GFLOPS, which is 12.3 times...

The baseline

In the rest of the chapter, we will take the Atari Pong environment that you are already familiar with and try to speed up its convergence. As a baseline, we will take the same simple DQN that we used in Chapter 8, DQN Extensions, and the hyperparameters will also be the same. To compare the effect of our changes, we will use two characteristics:

  • The number of frames that we consume from the environment every second (FPS). It shows how quickly we can communicate with the environment during the training. It is very common in RL papers to report the number of frames that the agent observed during the training; typical numbers are 25M-50M frames, so at FPS=200, such training will take days. In such calculations, you need to take into account that RL papers commonly report raw environment frames; if frame skip is used (and it almost always is), this number needs to be divided by the frame skip factor, which is commonly equal to 4. In our measurements, we calculate...
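
To get a feeling for what these numbers mean in wall-clock time, here is a small back-of-the-envelope helper (an illustration of the calculation described above, not code or measurements from the book), assuming the FPS counter counts agent steps taken after frame skip:

    # Rough estimate of training wall-clock time from a measured FPS and a
    # frame budget reported in a paper. Illustrative only.
    def training_hours(paper_frames: int, fps: float, frame_skip: int = 4) -> float:
        # Papers usually report raw environment frames; with frame skip,
        # the agent actually takes paper_frames / frame_skip steps
        agent_steps = paper_frames / frame_skip
        return agent_steps / fps / 3600

    print(training_hours(50_000_000, fps=200))  # ~17 hours
    print(training_hours(50_000_000, fps=400))  # ~8.7 hours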

The computation graph in PyTorch

Our first examples won't be around speeding up the baseline, but will show one common, and not always obvious, situation that can cost you performance. In Chapter 3, Deep Learning with PyTorch, we discussed the way PyTorch calculates gradients: it builds the graph of all operations that you perform on tensors, and when you call the backward() method of the final loss, all gradients in the model parameters are automatically calculated.

This works well, but RL code is normally much more complex than a traditional supervised learning model: the network that we are training is also used to get the actions that the agent needs to perform in the environment. The target network discussed in Chapter 6 makes it even trickier. So, in DQN, a neural network (NN) is normally used in three different situations:

  1. When we want to calculate Q-values predicted by the network to get the loss with respect to reference Q-values approximated...
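
As a minimal illustration of the inference case (a toy network, not the book's agent code), the forward pass that is used only to choose an action can be wrapped in torch.no_grad(), so PyTorch doesn't build a computation graph that will never be backpropagated:

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))  # toy Q-network

    def select_action(state: torch.Tensor) -> int:
        # Pure inference: no computation graph is built, saving memory and time
        with torch.no_grad():
            q_values = net(state.unsqueeze(0))
        return int(q_values.argmax(dim=1).item())

    # During the loss calculation we do want the graph, so no torch.no_grad() there
    action = select_action(torch.zeros(4))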

Several environments

The first idea that we usually apply to speed up deep learning training is a larger batch size. It's applicable to the domain of deep RL, but you need to be careful here. In the normal supervised learning case, the simple rule "a larger batch is better" usually holds: you just increase the batch size as far as your GPU memory allows, and a larger batch normally means more samples can be processed in a unit of time thanks to enormous GPU parallelism.

The RL case is slightly different. During the training, two things happen simultaneously:

  • Your network is trained to get better predictions on the current data
  • Your agent explores the environment

As the agent explores the environment and learns about the outcome of its actions, the training data changes. In a shooter example, your agent can run randomly for a time while being shot by monsters and have only a miserable "death is everywhere" experience in the training buffer. But after...
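
To show the mechanics of the idea (the chapter's actual code builds on the PTAN library; this standalone sketch uses a toy environment and the classic gym API instead of Pong), several environment instances are stepped on every iteration, so each training step gets more fresh samples into the replay buffer:

    import collections
    import gym

    ENV_COUNT = 3
    envs = [gym.make("CartPole-v1") for _ in range(ENV_COUNT)]  # toy stand-in for Pong
    states = [env.reset() for env in envs]
    replay_buffer = collections.deque(maxlen=10_000)

    for _ in range(100):
        # One pass over all environments produces ENV_COUNT fresh transitions
        for idx, env in enumerate(envs):
            action = env.action_space.sample()  # real code would query the Q-network
            next_state, reward, done, _ = env.step(action)
            replay_buffer.append((states[idx], action, reward, done, next_state))
            states[idx] = env.reset() if done else next_state
        # ... sample a (larger) batch from replay_buffer and do a training step here ...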

Play and train in separate processes

At a high level, our training contains a repetition of the following steps:

  1. Ask the current network to choose actions and execute them in our array of environments
  2. Put observations into the replay buffer
  3. Randomly sample the training batch from the replay buffer
  4. Train on this batch

The purpose of the first two steps is to populate the replay buffer with samples from the environment (which are observation, action, reward, and next observation). The last two steps are for training our network.

The following is an illustration of the preceding steps, which makes the potential parallelism slightly more obvious. On the left, the training flow is shown. The training steps use environments, the replay buffer, and our NN. The solid lines show data and code flow, while the dotted lines represent usage of the NN for training and inference.

Figure 9.6: A sequential diagram of the training process

As you can see, the top two steps...
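
A heavily simplified sketch of this separation (toy environment, random policy, hypothetical names; not the book's implementation): one process plays the environment and pushes transitions into a queue, while the main process drains the queue into the replay buffer and trains. torch.multiprocessing is a drop-in replacement for the standard multiprocessing module that also allows tensors to be shared between processes:

    import gym
    import torch.multiprocessing as mp

    def play_func(queue):
        # "Play" process: interacts with the environment and sends transitions
        env = gym.make("CartPole-v1")  # toy stand-in for Pong
        state = env.reset()
        while True:
            action = env.action_space.sample()  # real code would run the network here
            next_state, reward, done, _ = env.step(action)
            queue.put((state, action, reward, done, next_state))
            state = env.reset() if done else next_state

    if __name__ == "__main__":
        mp.set_start_method("spawn")
        queue = mp.Queue(maxsize=1000)
        proc = mp.Process(target=play_func, args=(queue,))
        proc.start()
        # "Train" part (here, the main process): fills the buffer and trains
        replay_buffer = []
        for _ in range(10_000):
            replay_buffer.append(queue.get())
            # ... sample a batch from replay_buffer and do a training step here ...
        proc.terminate()
        proc.join()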

Tweaking wrappers

The final step in our sequence of experiments will be tweaking wrappers applied to the environment. This is very easy to overlook, as wrappers are normally written once, or just borrowed from other code, applied to the environment, and left to sit there. But you should be aware of their importance in terms of the speed and convergence of your method. For example, the normal DeepMind-style stack of wrappers applied to an Atari game looks like this:

  1. NoopResetEnv: performs a random number of NOOP actions on game reset. In some Atari games, this is needed to remove weird initial observations.
  2. MaxAndSkipEnv: applies max to N observations (four by default) and returns this as an observation for the step. This solves the "flickering" problem in some Atari games, when the game draws different portions of the screen on even and odd frames (a normal practice among Atari 2600 developers to increase the complexity of the game's sprites).
  3. EpisodicLifeEnv...
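
As an illustration of how small these wrappers are, here is a simplified version of the first one in the list (the DeepMind-style implementation shipped with the book's code and OpenAI Baselines is more careful; this sketch only shows the idea and assumes the classic gym API):

    import random
    import gym

    class NoopResetEnv(gym.Wrapper):
        """On reset, perform a random number of NOOP actions (simplified sketch)."""
        def __init__(self, env, noop_max: int = 30):
            super().__init__(env)
            self.noop_max = noop_max

        def reset(self, **kwargs):
            obs = self.env.reset(**kwargs)
            for _ in range(random.randint(1, self.noop_max)):
                obs, _, done, _ = self.env.step(0)  # action 0 is NOOP in ALE
                if done:
                    obs = self.env.reset(**kwargs)
            return obs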

Benchmark summary

I summarized our experiments in the following table. The percentages show the changes versus the baseline version.

Step                                    FPS           Solution time in minutes
Baseline                                159           105
Version without torch.no_grad()         157 (-1.5%)   115 (+10%)
Three environments                      228 (+43%)    44 (-58%)
Parallel version                        402 (+152%)   59 (-43%)
Tweaked wrappers, three environments    330 (+108%)   40 (-62%)
Tweaked wrappers, parallel version      409 (+157%)   31 (-70%)

Going hardcore: CuLE

During the writing of this chapter, NVIDIA researchers published a paper and code for their latest experiments with porting the Atari emulator to the GPU: Steven Dalton, Iuri Frosio, GPU-Accelerated Atari Emulation for Reinforcement Learning, 2019, arXiv:1907.08467. Their Atari port is called CuLE (CUDA Learning Environment) and is available on GitHub: https://github.com/NVlabs/cule.

According to the paper, by keeping both the Atari emulator and the NN on the GPU, they were able to get Pong solved within one to two minutes and reach an FPS of 50k (with the advantage actor-critic (A2C) method, which will be the subject of the next part of the book).

Unfortunately, at the time of writing, the code wasn't stable enough. I failed to make it work on my hardware, but I hope that when you read this, the situation will have already changed. In any case, this project shows a somewhat extreme, but very efficient, way to increase RL methods' performance...

Summary

In this chapter, you saw several ways to improve the performance of the RL method using a pure engineering approach, which was in contrast to the "algorithmic" or "theoretical" approach covered in Chapter 8, DQN Extensions. From my perspective, both approaches complement each other, and a good RL practitioner needs to both know the latest tricks that researchers have found and be aware of the implementation details.

In the next chapter, we will apply our DQN knowledge to stocks trading.
