Over the past few hundred pages, we have faced numerous challenges, to which we applied reinforcement and deep learning algorithms. To conclude our reinforcement learning (RL) journey, this chapter will look at several aspects of the field that we have not covered yet. We will start by looking at several of the drawbacks of reinforcement learning, which any practitioner or researcher should be aware of. To end on a positive note, we will follow up by describing numerous exciting academic developments and achievements the field has seen in recent years.
You're reading from Python Reinforcement Learning Projects
So far, we have only covered what reinforcement learning algorithms can do. To the reader, reinforcement learning may seem like a panacea for all kinds of problems. But why do we not see ubiquitous application of reinforcement learning algorithms in real-life situations? The reality is that the field has a myriad of shortcomings that hinder commercial adoption.
Why is it necessary to talk about the field's flaws? We think this will help you build a more holistic, less biased view of reinforcement learning. Moreover, understanding the weaknesses of reinforcement learning and machine learning is an important quality of a good machine learning researcher or practitioner. In the following subsections, we will discuss a few of the most important limitations that reinforcement learning is currently facing.
The past few sections may have painted a bleak outlook for deep learning and reinforcement learning. However, there is no need to feel discouraged; this is, in fact, an exciting time for DL and RL, where many significant advances in research continue to shape the field and cause it to evolve at a rapid pace. With the increasing availability of computational resources and data, the opportunities to expand and improve deep learning and reinforcement learning algorithms continue to grow.
For one, the issues raised in the preceding section are recognized and acknowledged by the research community, and several efforts are underway to address them. In the work by Pattanaik et al., not only do the authors demonstrate that current deep reinforcement learning algorithms are susceptible to adversarial attacks, they also propose techniques that can make the same algorithms more robust against such attacks...
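To make the idea of an adversarial attack on a policy concrete, here is a minimal sketch of a fast-gradient-sign-method (FGSM) style perturbation. This is an illustrative toy, not the authors' implementation: the linear policy, its weights, and the epsilon value are all hypothetical, and the gradient is computed analytically rather than with an autodiff framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical linear policy: 4-dimensional state, 2 actions.
W = rng.normal(size=(2, 4))
s = rng.normal(size=4)

probs = softmax(W @ s)
a = int(np.argmax(probs))    # the policy's greedy action

# Gradient of the cross-entropy loss for action `a` w.r.t. the state:
# d/ds [-log softmax(W s)_a] = W^T (softmax(W s) - onehot(a))
onehot = np.eye(2)[a]
grad_s = W.T @ (probs - onehot)

# FGSM: nudge every state component in the sign of the loss gradient.
eps = 0.5
s_adv = s + eps * np.sign(grad_s)

adv_probs = softmax(W @ s_adv)
print(probs[a], adv_probs[a])  # the policy's confidence in its own action drops
```

The same one-line perturbation applied to the pixel observations of a deep RL agent is enough to degrade its behavior, which is why defenses such as adversarial training (mixing perturbed states into the training data) are an active research direction.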
This concludes our introductory journey into reinforcement learning. Over the course of this book, we learned how to implement agents that can play Atari games, navigate Minecraft, predict stock market prices, play the complex board game of Go, and even generate other neural networks to train on CIFAR-10
data. In doing so, you became acquainted with some of the fundamental and state-of-the-art deep learning and reinforcement learning algorithms. In short, you have achieved a lot!
But the journey does not and should not end here. We hope that, with your newfound skills and knowledge, you will continue to utilize deep learning and reinforcement learning algorithms to tackle problems that you face outside of this book. More importantly, we hope that this guide motivates you to explore other fields of machine learning and further develop your knowledge and experience.
There are many obstacles for the reinforcement learning community to overcome. However, there is much to look...
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.
Henderson, P., Islam, R., Bachman, P., Pineau, J., Precup, D., and Meger, D. (2017). Deep reinforcement learning that matters. arXiv preprint arXiv:1709.06560.
Pattanaik, A., Tang, Z., Liu, S., Bommannan, G., and Chowdhary, G. (2018, July). Robust deep reinforcement learning with adversarial attacks. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (pp. 2040-2042). International Foundation for Autonomous Agents and Multiagent Systems.