Bayesian Inference in the Age of Deep Learning
Over the last fifteen years, machine learning (ML) has gone from a relatively little-known field to a buzzword in the tech community. This is due in no small part to the impressive feats of neural networks (NNs). Once a niche underdog in the field, deep learning’s accomplishments in almost every conceivable application have resulted in a near-meteoric rise in its popularity. Its success has been so pervasive that, rather than being impressed by features afforded by deep learning, we’ve come to expect them. From applying filters in social networking apps, through to relying on Google Translate when on vacation abroad, it’s undeniable that deep learning is now well and truly embedded in the technology landscape.
But, despite all of its impressive accomplishments, and the variety of products and features it’s afforded us, deep learning has not yet surmounted its final hurdle. As sophisticated neural...
1.1 Technical requirements
All of the code for this book can be found on the GitHub repository for the book: https://github.com/PacktPublishing/Enhancing-Deep-Learning-with-Bayesian-Inference.
1.2 Wonders of the deep learning age
Over the last 10 to 15 years, we’ve seen a dramatic shift in the landscape of ML thanks to the enormous success of deep learning. Perhaps the most impressive testament to deep learning’s impact is its universality: it has affected fields from medical imaging and manufacturing all the way through to tools for translation and content creation.
While deep learning has only seen great success over recent years, many of its core principles are well established. Researchers have been working with neural networks for some time – in fact, one could argue that the first neural network was introduced by Frank Rosenblatt as early as 1957! Rosenblatt’s model was, of course, nowhere near as sophisticated as the models we have today, but it introduced what remains a fundamental building block of those models: the perceptron, shown in Figure 1.1.
Figure 1.1: Diagram of a single perceptron
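To make the idea concrete, here is a minimal sketch of a single perceptron and its classic learning rule. This is an illustrative implementation of our own (the class and parameter names are not from the book’s repository): a weighted sum of the inputs passed through a step activation, with weights nudged toward misclassified examples.

```python
import numpy as np


class Perceptron:
    """A single perceptron: weighted sum of inputs plus a step activation."""

    def __init__(self, n_inputs: int):
        self.weights = np.zeros(n_inputs)
        self.bias = 0.0

    def predict(self, x: np.ndarray) -> int:
        # Step activation: fire (output 1) if the weighted sum is positive.
        return int(np.dot(self.weights, x) + self.bias > 0)

    def fit(self, X: np.ndarray, y: np.ndarray, lr: float = 0.1, epochs: int = 10):
        # Classic perceptron learning rule: move the weights toward
        # misclassified examples, leave correct ones untouched.
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                error = yi - self.predict(xi)
                self.weights += lr * error * xi
                self.bias += lr * error


# Learn logical AND, a linearly separable problem a single perceptron can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
p = Perceptron(2)
p.fit(X, y)
print([p.predict(xi) for xi in X])  # → [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this rule finds a separating boundary; a single perceptron famously cannot learn non-separable functions such as XOR, which is part of what motivated the multi-layer networks discussed later.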
The 1980s saw the introduction of many now-familiar concepts, with the introduction...
1.3 Understanding the limitations of deep learning
As we’ve seen, deep learning has achieved some remarkable feats, and it’s undeniable that it’s revolutionizing the way that we deal with data and predictive modeling. But deep learning’s short history also comprises darker tales: stories that bring with them crucial lessons for developing systems that are more robust, and, crucially, safer.
In this section, we’ll introduce a couple of key cases in which deep learning failed, and we will discuss how a Bayesian perspective could have helped to produce a better outcome.
1.3.1 Bias in deep learning systems
We’ll start with a textbook example of bias, a crucial problem faced by data-driven methods. This example centers around Amazon. Now a household name, the e-commerce company started out by revolutionizing the world of book retail before becoming a one-stop shop for just about anything: from garden furniture to a new laptop, or even...
1.4 Core topics
The aim of this book is to provide you with the tools and knowledge you need to develop your own Bayesian deep learning (BDL) solutions. To this end, while we assume some familiarity with statistical learning and deep learning, we will still provide a refresher of these fundamental concepts.
In Chapter 2, Fundamentals of Bayesian Inference, we’ll go over some of the key concepts from Bayesian inference, including probabilities and model uncertainty estimates. In Chapter 3, Fundamentals of Deep Learning, we’ll cover key aspects of deep learning, including learning via backpropagation and popular varieties of NNs. With these fundamentals covered, we’ll start to explore BDL in Chapter 4, Introducing Bayesian Deep Learning. In Chapters 5 and 6, we’ll delve deeper into BDL: we’ll first learn about principled methods before going on to understand more practical methods for approximating Bayesian neural networks.
In Chapter ...
1.5 Setting up the work environment
To complete the practical elements of the book, you’ll need a Python 3.9 environment with the necessary prerequisites. We recommend using conda, a package manager specifically designed for scientific computing applications. To install conda, simply head to https://conda.io/projects/conda/en/latest/user-guide/install/index.html and follow the instructions for your operating system.
With conda installed, you can set up the conda environment that you’ll use for the book:

conda create -n bdl python=3.9

When you hit Enter to execute this command, you’ll be asked whether you wish to continue installing the required packages; simply type y and hit Enter. conda will then proceed to install the core packages.
You can now activate your environment by typing the following:

conda activate bdl

You’ll now see that your shell prompt contains bdl, indicating that your conda environment is active. Now you...
In this chapter, we’ve revisited the successes of deep learning, renewing our understanding of its enormous potential, and its ubiquity within today’s technology. We’ve also explored some key examples of its shortcomings: scenarios in which deep learning has failed us, demonstrating the potential for catastrophic consequences. While BDL can’t eliminate these risks, it can allow us to build more robust ML systems that incorporate both the flexibility of deep learning and the caution of Bayesian inference.
In the next chapter, we’ll dive deeper into the latter as we cover some of the core concepts of Bayesian inference and probability, in preparation for our foray into BDL.