6.4 Exploring neural network augmentation with Bayesian last-layer methods
Through the course of Chapter 5, Principled Approaches for Bayesian Deep Learning, and Chapter 6, Using the Standard Toolbox for Bayesian Deep Learning, we've explored a variety of methods for Bayesian inference with DNNs. These methods incorporate some form of uncertainty information at every layer, whether through explicitly probabilistic means or via ensemble-based or dropout-based approximations. This design has certain advantages. Because their Bayesian (or, more accurately, approximately Bayesian) mechanics are applied uniformly, the same principles govern every layer, in terms of both network architecture and update rules. This makes such methods easier to justify from a theoretical standpoint, as we know that any theoretical guarantees apply at each layer. In addition to this, it means that we have the benefit of being able to access uncertainties at every...
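Before going further, it may help to see the core idea of a last-layer method in miniature. The following is a minimal sketch, assuming we can treat a network's penultimate activations as a fixed feature map and fit a closed-form Gaussian posterior over the final linear layer's weights (the `features` function here is a hypothetical stand-in for a frozen network, not any particular library's API):

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x):
    # Stand-in for a frozen network's penultimate layer: in a real
    # last-layer method, these would be learned activations.
    x = np.atleast_1d(x)
    return np.stack([np.ones_like(x), x, np.sin(x)], axis=1)

# Toy regression data
x_train = rng.uniform(-3, 3, size=40)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(40)

alpha, beta = 1.0, 100.0          # weight prior precision, noise precision
Phi = features(x_train)           # N x D design matrix of "features"

# Closed-form Gaussian posterior over the last layer's weights:
# covariance S and mean m of p(w | data)
S_inv = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
S = np.linalg.inv(S_inv)
m = beta * S @ Phi.T @ y_train

def predict(x):
    # Predictive mean and variance; the variance combines noise
    # (1/beta) with last-layer weight uncertainty (phi S phi^T).
    phi = features(x)
    mean = phi @ m
    var = 1.0 / beta + np.sum((phi @ S) * phi, axis=1)
    return mean, var
```

Only the final layer carries a posterior here; everything beneath it is deterministic. That trade-off, giving up per-layer uncertainty in exchange for cheap, exact inference on one layer, is exactly what this section explores.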