5.8 Summary
In this chapter, we learned about two fundamental, well-principled Bayesian deep learning models. BBB showed us how we can use variational inference to sample efficiently from an approximate posterior over the network's weights and produce output distributions, while PBP demonstrated that it's possible to obtain predictive uncertainties without sampling at all. This makes PBP more computationally efficient than BBB, but each model has its pros and cons.
In BBB's case, while it's less computationally efficient than PBP, it's also more adaptable (particularly with the tools available in TensorFlow for variational layers). We can apply it to a variety of DNN architectures with relatively little difficulty. The price is the sampling required at both training and inference time: we need more than a single forward pass to obtain our output distributions, as the sketch below illustrates.
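The following is a minimal sketch of that sampling-based workflow, not the chapter's exact code: it assumes TensorFlow Probability's DenseFlipout layers, which re-sample weight perturbations on every call, so the predictive distribution is estimated by averaging several stochastic forward passes over the same inputs.

```python
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

# A small BBB-style regression model built from variational (Flipout) layers.
model = tf.keras.Sequential([
    tfp.layers.DenseFlipout(64, activation="relu"),
    tfp.layers.DenseFlipout(1),
])

# Hypothetical test batch: 32 examples with 10 features each.
x_test = np.random.randn(32, 10).astype("float32")

# Multiple forward passes: each call draws a different weight sample,
# so the stack of outputs approximates the predictive distribution.
num_samples = 50
samples = np.stack([model(x_test).numpy() for _ in range(num_samples)])

pred_mean = samples.mean(axis=0)  # predictive mean per test point
pred_std = samples.std(axis=0)    # predictive uncertainty per test point
```

Note how the cost scales linearly with the number of samples: fifty forward passes for one batch of predictions, which is exactly the overhead PBP avoids.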
Conversely, PBP allows us to obtain our uncertainty estimates with a single pass, but it is less flexible than BBB: its update and propagation equations are derived analytically for a particular network form, so adapting it to other architectures takes considerably more effort than swapping variational layers into a standard model.
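To make the single-pass idea concrete, here is a toy illustration, not PBP itself (which also propagates variances through activations and uses assumed-density filtering for training): for a linear layer with independent Gaussian weights and a deterministic input, the output mean and variance follow in closed form, so no sampling is needed.

```python
import numpy as np

# Toy moment propagation through one linear layer with Gaussian weights
# w_ij ~ N(mu_ij, var_ij) and a deterministic input x. This is only the
# simplest building block of PBP-style propagation, not the full algorithm.
rng = np.random.default_rng(0)
in_dim, out_dim = 10, 3

w_mean = rng.normal(size=(in_dim, out_dim))              # posterior means
w_var = rng.uniform(0.01, 0.1, size=(in_dim, out_dim))   # posterior variances
x = rng.normal(size=(in_dim,))                           # a single input

# E[y_j] = sum_i mu_ij * x_i,  Var[y_j] = sum_i var_ij * x_i^2
y_mean = x @ w_mean
y_var = (x ** 2) @ w_var

print("output mean per unit:    ", y_mean)
print("output variance per unit:", y_var)
```

One deterministic pass yields both a point prediction and an uncertainty estimate, which is where PBP's efficiency advantage over sampling-based methods like BBB comes from.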