Enhancing Deep Learning with Bayesian Inference

Product type: Book
Published: June 2023
Publisher: Packt
ISBN-13: 9781803246888
Pages: 386
Edition: 1st
Authors (3): Matt Benatan, Jochem Gietema, Marian Schneider

Table of Contents (11) Chapters

Preface
1. Chapter 1: Bayesian Inference in the Age of Deep Learning
2. Chapter 2: Fundamentals of Bayesian Inference
3. Chapter 3: Fundamentals of Deep Learning
4. Chapter 4: Introducing Bayesian Deep Learning
5. Chapter 5: Principled Approaches for Bayesian Deep Learning
6. Chapter 6: Using the Standard Toolbox for Bayesian Deep Learning
7. Chapter 7: Practical Considerations for Bayesian Deep Learning
8. Chapter 8: Applying Bayesian Deep Learning
9. Chapter 9: Next Steps in Bayesian Deep Learning
10. Why subscribe?

5.2 Explaining notation

While the previous chapters introduced much of the notation used throughout the book, the following chapters will use additional notation associated with BDL. As such, we've provided an overview of the notation here for reference:

  • μ: The mean. To make it easy to cross-reference our chapter with the original Probabilistic Backpropagation paper, this is represented as m when discussing PBP.

  • σ: The standard deviation.

  • σ²: The variance (meaning the square of the standard deviation). To make it easy to cross-reference our chapter with the paper, this is represented as v when discussing PBP.

  • x: A single vector input to our model. If considering multiple inputs, we’ll use X to represent a matrix comprising multiple vector inputs.

  • x̂: An approximation of our input x.

  • y: A single scalar target. When considering multiple targets, we'll use a bold 𝐲 to represent a vector of multiple scalar targets.

  • ŷ:...
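As a brief, hypothetical illustration (the variable names and data here are our own, not taken from the book), the statistical notation above maps directly onto NumPy's summary functions, and the input/target notation maps onto array shapes:

```python
import numpy as np

# X: a matrix comprising multiple vector inputs (one row per input).
X = np.array([[0.1, 0.2],
              [0.3, 0.4],
              [0.5, 0.6]])
x = X[0]                        # x: a single vector input
y = np.array([1.0, 1.2, 1.4])   # a vector of scalar targets

# Given a set of sampled predictions (e.g. from repeated stochastic
# forward passes), the symbols above become summary statistics:
samples = np.array([0.9, 1.1, 1.0, 1.2])
mu = samples.mean()        # μ: the mean
sigma = samples.std()      # σ: the standard deviation
variance = samples.var()   # σ²: the variance, i.e. σ squared

assert np.isclose(variance, sigma ** 2)
```

Note that NumPy's `std` and `var` default to the population statistics (`ddof=0`); pass `ddof=1` if a sample estimate is needed.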
