You're reading from Mastering Reinforcement Learning with Python
1st Edition · Published in Dec 2020 · Publisher: Packt · ISBN-13: 9781838644147 · Reading level: Beginner
Author: Enes Bilgin

Enes Bilgin works as a senior AI engineer and a tech lead in Microsoft's Autonomous Systems division. He is a machine learning and operations research practitioner and researcher with experience in building production systems and models for top tech companies using Python, TensorFlow, and Ray/RLlib. He holds an M.S. and a Ph.D. in systems engineering from Boston University and a B.S. in industrial engineering from Bilkent University. In the past, he has worked as a research scientist at Amazon and as an operations research scientist at AMD. He also held adjunct faculty positions at the McCombs School of Business at the University of Texas at Austin and at the Ingram School of Engineering at Texas State University.

Chapter 9: Multi-Agent Reinforcement Learning

If there is something more exciting than training a reinforcement learning (RL) agent to exhibit intelligent behavior, it is training multiple agents to collaborate or compete. Multi-agent RL (MARL) is where you will really feel the potential of artificial intelligence. Many famous RL success stories, such as AlphaGo and OpenAI Five, stemmed from MARL, which we introduce you to in this chapter. Of course, there is no free lunch, and MARL comes with many challenges along with its opportunities, which we will also explore. At the end of the chapter, we will train a set of tic-tac-toe agents through competitive self-play, so you will have some companions to play a few games against.

This will be a fun chapter, and specifically we will cover the following topics:

  • Introducing multi-agent reinforcement learning
  • Exploring the challenges in multi-agent reinforcement learning
  • Training policies in multi-agent settings
  • Training tic-tac-toe agents through self-play

Introducing multi-agent reinforcement learning

All of the problems and algorithms we have covered in the book so far have involved a single agent being trained in an environment. In many applications, on the other hand, from games to autonomous vehicle fleets, there are multiple decision-makers, or agents, that are trained concurrently but execute local policies (i.e., without a central decision-maker). This leads us to MARL, which involves a much richer set of problems and challenges than single-agent RL does. In this section, we give an overview of the MARL landscape.

Collaboration and competition between MARL agents

MARL problems can be classified into three different groups with respect to the structure of collaboration and competition between agents. Let's look into what those groups are and what types of applications fit into each group.

Fully cooperative environments

In this setting, all of the agents in the environment work towards a common long-term goal. The agents are credited...

Exploring the challenges in multi-agent reinforcement learning

In the earlier chapters of this book, we discussed many challenges in reinforcement learning. In particular, the dynamic programming methods we initially introduced cannot scale to problems with large and complex observation and action spaces. Deep reinforcement learning approaches, on the other hand, although capable of handling complex problems, lack theoretical guarantees and therefore require many tricks to stabilize training and achieve convergence. Once we turn to problems in which multiple agents learn, interact with each other, and affect the environment, the challenges and complexities of single-agent RL are multiplied. For this reason, many results in MARL are empirical.

In this section, we discuss what makes MARL specifically complex and challenging.

Non-stationarity

The mathematical framework behind single-agent RL is the Markov decision process (MDP), which establishes that the...
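The Markov assumption breaks down in MARL because, from any single agent's point of view, the other learning agents make the environment non-stationary. A toy illustration of this effect (our own sketch, not code from the book): in a repeated matching pennies game, the reward statistics of an agent's own fixed action drift as its opponent's policy changes.

```python
import random

# Matching pennies, repeated: agent A picks heads (0) or tails (1) and gets
# +1 if it matches agent B's pick, -1 otherwise. From A's point of view,
# B is just "part of the environment" -- but as B's policy changes, the
# reward statistics of A's own fixed action drift: the environment that A
# perceives is non-stationary.

def play_round(a_action, b_prob_heads, rng):
    b_action = 0 if rng.random() < b_prob_heads else 1
    return 1.0 if a_action == b_action else -1.0

def mean_reward(a_action, b_prob_heads, episodes=10_000, seed=0):
    rng = random.Random(seed)
    total = sum(play_round(a_action, b_prob_heads, rng) for _ in range(episodes))
    return total / episodes

# Early in training, B plays heads 90% of the time...
early = mean_reward(a_action=0, b_prob_heads=0.9)
# ...later, B has learned to play heads only 10% of the time.
late = mean_reward(a_action=0, b_prob_heads=0.1)

# The value of the very same action flips sign as B's policy evolves.
print(f"mean reward of 'heads' early: {early:+.2f}, late: {late:+.2f}")
```

Nothing about the environment's rules changed between the two measurements; only the other agent's policy did. This is exactly what invalidates convergence arguments that assume fixed transition and reward distributions.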

Training policies in multi-agent settings

There are many algorithms and approaches designed for MARL, which can be classified into the following two broad categories:

  • Independent learning: This approach trains each agent individually, treating the other agents as part of the environment.
  • Centralized training and decentralized execution: In this approach, a centralized controller uses information from multiple agents during training. At execution (inference) time, the agents execute their policies locally, without relying on a central mechanism.

Generally speaking, we can take any of the algorithms we covered in one of the previous chapters and use it in a multi-agent setting to train policies via independent learning, which, as it turns out, is a very competitive alternative to specialized MARL algorithms. So rather than dumping more theory and notation on you, in this chapter, we will skip discussing the technical...
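The independent-learning recipe can be sketched in a few lines. The following is a toy illustration of ours, not code from the book's repo: two tabular Q-learners in a one-shot coordination game, each updating only its own value table and treating the other agent as part of the environment.

```python
import random

# Two independent Q-learners in a one-shot coordination game: both agents
# get reward 1 when they pick the same action, 0 otherwise. Each learner
# keeps its own Q-table and never sees the other's values -- the other
# agent is simply absorbed into the environment.

def train(episodes=5_000, alpha=0.1, eps=0.1, seed=1):
    rng = random.Random(seed)
    q = [{0: 0.0, 1: 0.0}, {0: 0.0, 1: 0.0}]  # one Q-table per agent
    for _ in range(episodes):
        # Each agent selects its action epsilon-greedily from its own table.
        acts = [rng.randrange(2) if rng.random() < eps
                else max(q[i], key=q[i].get)
                for i in range(2)]
        reward = 1.0 if acts[0] == acts[1] else 0.0
        # Each agent updates only its own estimate for the action it took.
        for i in range(2):
            q[i][acts[i]] += alpha * (reward - q[i][acts[i]])
    return q

q = train()
greedy = [max(qi, key=qi.get) for qi in q]
print("greedy actions:", greedy)  # the learners coordinate on one action
```

Note that each update rule is just single-agent Q-learning; the multi-agent character of the problem lives entirely in the reward, which depends on what the other learner happens to do.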

Training tic-tac-toe agents through self-play

In this section, we will walk through key parts of the code in our GitHub repo to give you a better grasp of MARL with RLlib while training tic-tac-toe agents on a 3x3 board. For the full code, refer to https://github.com/PacktPublishing/Mastering-Reinforcement-Learning-with-Python.

Figure 9.5 – A 3x3 tic-tac-toe. For the image credit and to learn how it is played, see https://en.wikipedia.org/wiki/Tic-tac-toe


Let's start with designing the multi-agent environment.

Designing the multi-agent tic-tac-toe environment

In the game, we have two agents, X and O, playing the game. We will train four policies for the agents to pull their actions from, and each policy can play either an X or O. We construct the environment class as follows:

Chapter09/tic_tac_toe.py

class TicTacToe(MultiAgentEnv):
    def __init__(self, config=None):
        ...
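The constructor is just the beginning; the class also implements reset() and step(). Below is a rough, self-contained sketch of what such an environment looks like. This is our own illustration, not the repo's actual code: a plain Python class stands in for RLlib's MultiAgentEnv so the snippet runs without Ray installed, while keeping RLlib's multi-agent dict conventions (observations, rewards, and dones keyed by agent ID, plus the special "__all__" done key).

```python
# Tic-tac-toe with RLlib-style multi-agent dict conventions. Simplified
# sketch: a plain class stands in for MultiAgentEnv, and observations
# are raw board lists.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

class TicTacToe:
    def __init__(self, config=None):
        self.board = [0] * 9      # 0 = empty, 1 = X, 2 = O
        self.current = "X"

    def reset(self):
        self.board = [0] * 9
        self.current = "X"
        # Only the player to move receives an observation.
        return {self.current: list(self.board)}

    def _winner(self):
        for a, b, c in WIN_LINES:
            if self.board[a] != 0 and self.board[a] == self.board[b] == self.board[c]:
                return "X" if self.board[a] == 1 else "O"
        return None

    def step(self, action_dict):
        player, square = next(iter(action_dict.items()))
        if self.board[square] != 0:
            # Illegal move: penalize the mover and end the episode.
            return {}, {player: -10.0}, {"__all__": True}, {}
        self.board[square] = 1 if player == "X" else 2
        winner = self._winner()
        if winner is not None:
            rewards = {"X": 1.0 if winner == "X" else -1.0,
                       "O": 1.0 if winner == "O" else -1.0}
            return {}, rewards, {"__all__": True}, {}
        if all(cell != 0 for cell in self.board):
            return {}, {"X": 0.0, "O": 0.0}, {"__all__": True}, {}  # draw
        self.current = "O" if player == "X" else "X"
        return ({self.current: list(self.board)}, {self.current: 0.0},
                {"__all__": False}, {})

# X takes the top row while O plays the middle row: X wins.
env = TicTacToe()
obs = env.reset()
for player, move in [("X", 0), ("O", 3), ("X", 1), ("O", 4), ("X", 2)]:
    obs, rewards, dones, infos = env.step({player: move})
print(rewards, dones)  # a win for X, a loss for O, episode done
```

Returning an observation only for the agent whose turn it is means only that agent acts in the next step call, which is how turn-taking games map onto the multi-agent dict interface; the "__all__" key signals the end of the episode for every agent at once.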

Summary

In this chapter, we covered multi-agent reinforcement learning. This branch of RL is more challenging than single-agent RL because multiple decision-makers both influence the environment and evolve over time. After introducing some MARL concepts, we explored these challenges in detail. We then trained tic-tac-toe agents through competitive self-play using RLlib. And they were so competitive that they kept coming to a draw at the end of the training!

In the next chapter, we switch gears to discuss an emerging approach in reinforcement learning, called Machine Teaching, which brings the subject matter expert, you, more actively into the process to guide the training. Hoping to see you there soon!
