Model Comparison

"A map is not the territory it represents, but, if correct, it has a similar structure to the territory."
-Alfred Korzybski

Models should be designed as approximations to help us understand a particular problem, or a class of related problems. Models are not designed to be verbatim copies of the real world. Thus, all models are wrong in the same sense that maps are not the territory. But even if, a priori, we consider every model to be wrong, not every model is equally wrong; some models will be better than others at describing a given problem. In the previous chapters, we focused our attention on the inference problem, that is, how to learn the values of parameters from data. In this chapter, we are going to focus on a complementary problem: how to compare two or more models used to explain the same data. As we will learn, this is not a simple...

Posterior predictive checks

In Chapter 1, Thinking Probabilistically, we introduced the concept of posterior predictive checks, and in subsequent chapters we have used them as a way to evaluate how well models explain the same data that was used to fit them. The purpose of posterior predictive checks is not to determine whether a model is wrong; we already know that! By performing posterior predictive checks, we hope to get a better grasp of the limitations of a model, either to properly acknowledge them or to attempt to improve the model. Implicit in the previous statement is the fact that models will not generally reproduce all aspects of a problem equally well. This is not generally a problem, given that models are built with a purpose in mind. A posterior predictive check is a way to evaluate a model in the context of that purpose; thus, if we have more than one model...
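
To make this concrete, the following is a minimal sketch of a posterior predictive check in PyMC3 and ArviZ; the data and model here are illustrative placeholders, not the book's example:

import numpy as np
import pymc3 as pm
import arviz as az

# Illustrative data
np.random.seed(123)
y = np.random.normal(0, 1, size=100)

with pm.Model() as model:
    mu = pm.Normal('mu', mu=0, sd=10)
    sd = pm.HalfNormal('sd', sd=10)
    y_obs = pm.Normal('y_obs', mu=mu, sd=sd, observed=y)
    trace = pm.sample(1000, tune=1000)
    # Simulated datasets drawn from the posterior predictive distribution
    ppc = pm.sample_posterior_predictive(trace)
    idata = az.from_pymc3(trace=trace, posterior_predictive=ppc)

# Overlay the simulated datasets on the observed data
az.plot_ppc(idata)

If the simulated datasets look systematically different from the observed data in some aspect we care about, that is a signal that the model is missing something relevant for our purpose.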

Occam's razor – simplicity and accuracy

When choosing among alternatives, there is a guiding principle known as Occam's razor that loosely states the following:

If we have two or more equivalent explanations for the same phenomenon, we should choose the simpler one.

There are many justifications for this heuristic; one of them is related to the falsifiability criterion introduced by Popper. Another takes a pragmatic perspective: since simpler models are easier to understand than more complex ones, we should keep the simpler one. Yet another justification is based on Bayesian statistics, as we will see when we discuss Bayes factors. Without getting into the details of these justifications, for the moment we are going to accept this criterion as a useful rule of thumb, just something that sounds reasonable.

Another factor we generally should take into...

Information criteria

Information criteria are a collection of different, and somewhat related, tools used to compare models in terms of how well they fit the data while taking their complexity into account through a penalization term. In other words, information criteria formalize the intuition we developed at the beginning of this chapter: we need a proper way to balance how well a model explains the data on the one hand, and how complex the model is on the other.

The exact way these quantities are derived comes from a field known as information theory, something that is beyond the scope of this book, so we are going to limit ourselves to understanding them from a practical point of view.
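
In practice, we will not compute information criteria by hand; libraries such as ArviZ do this for us. The following is a minimal sketch using a toy dataset and illustrative model names (argument names can vary between ArviZ versions):

import numpy as np
import pymc3 as pm
import arviz as az

# Toy data: a noisy straight line
np.random.seed(0)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x + np.random.normal(0, 0.3, size=50)

def fit_polynomial(order):
    # Polynomial regression of the given order
    with pm.Model():
        alpha = pm.Normal('alpha', 0, sd=10)
        beta = pm.Normal('beta', 0, sd=10, shape=order)
        eps = pm.HalfNormal('eps', 5)
        mu = alpha + sum(beta[i] * x**(i + 1) for i in range(order))
        pm.Normal('y_obs', mu=mu, sd=eps, observed=y)
        trace = pm.sample(1000, tune=1000)
        return az.from_pymc3(trace)

# WAIC penalizes the better raw fit of the more complex model
cmp = az.compare({'linear': fit_polynomial(1),
                  'quadratic': fit_polynomial(2)}, ic='waic')
print(cmp)

The resulting table ranks the models by estimated out-of-sample predictive accuracy, once the complexity penalty has been applied.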

Log-likelihood and deviance

...

Bayes factors

A common alternative for evaluating and comparing models in the Bayesian world (at least in some of its countries) is Bayes factors. To understand what Bayes factors are, let's write Bayes' theorem one more time (we have not done so for a while!):

$$p(\theta \mid y) = \frac{p(y \mid \theta)\, p(\theta)}{p(y)}$$

Here, $y$ represents the data. We can make the dependency of the inference on a given model, $M_k$, explicit and write:

$$p(\theta \mid y, M_k) = \frac{p(y \mid \theta, M_k)\, p(\theta \mid M_k)}{p(y \mid M_k)}$$

The term in the denominator is known as the marginal likelihood (or evidence), as you may remember from the first chapter. When doing inference, we do not need to compute this normalizing constant, so in practice we often compute the posterior up to a constant factor. However, for model comparison and model averaging, the marginal likelihood is an important quantity. If our main objective is to choose only one model, the best one, from a set of models, we can just choose the one with the largest $p(y \mid M_k)$. As a general...
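
As a deliberately simple illustration (a toy example of ours, not the book's), the marginal likelihood of a beta-binomial model has a closed form, so we can compute a Bayes factor between two priors analytically:

import numpy as np
from scipy.special import betaln, comb

def log_marginal_likelihood(h, n, a, b):
    # Closed-form marginal likelihood of h heads in n coin tosses,
    # integrating the binomial likelihood against a Beta(a, b) prior
    return np.log(comb(n, h)) + betaln(a + h, b + n - h) - betaln(a, b)

h, n = 9, 30  # made-up data
# Model 0: a uniform Beta(1, 1) prior; Model 1: a prior concentrated at 0.5
BF_01 = np.exp(log_marginal_likelihood(h, n, 1, 1) -
               log_marginal_likelihood(h, n, 30, 30))
print(BF_01)  # values above 1 favor model 0, below 1 favor model 1

For models without a closed-form marginal likelihood, this integral has to be estimated numerically, which is notoriously difficult in general.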

Regularizing priors

Using informative and weakly informative priors is a way of introducing bias into a model and, if done properly, this can be a really good thing, because bias prevents overfitting and thus contributes to models being able to make predictions that generalize well. This idea of adding bias to reduce generalization error without affecting the ability of the model to adequately model the data used to fit it is known as regularization. This regularization often takes the form of penalizing larger values for the parameters in a model. This is a way of reducing the information that a model can represent and thus reduces the chances that it captures noise instead of signal.
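
As a minimal sketch of a regularizing prior in a PyMC3 polynomial regression (the data and variable names are illustrative; compare with the exercises at the end of this chapter):

import numpy as np
import pymc3 as pm

# Illustrative data: a straight line observed with noise
np.random.seed(42)
x = np.linspace(-1, 1, 50)
y = 2 * x + np.random.normal(0, 0.5, size=50)

order = 5
X = np.vstack([x**(i + 1) for i in range(order)])

with pm.Model() as model_reg:
    alpha = pm.Normal('alpha', mu=0, sd=1)
    # A narrow prior (sd=1) penalizes large coefficients and acts as a
    # regularizer; an sd=100 prior is nearly flat and barely regularizes
    beta = pm.Normal('beta', mu=0, sd=1, shape=order)
    eps = pm.HalfNormal('eps', 5)
    mu = alpha + pm.math.dot(beta, X)
    y_pred = pm.Normal('y_pred', mu=mu, sd=eps, observed=y)
    trace_reg = pm.sample(1000, tune=1000)

With the narrow prior, the coefficients of the superfluous higher-order terms are pulled toward zero, and the fitted curve stays close to the underlying straight line.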

The regularization idea is so powerful and useful that it has been discovered several times, including outside the Bayesian framework. In some fields, this idea is known...

WAIC in depth

If we expand equation 5.6, we get the following:

$$\text{WAIC} = -2 \sum_{i=1}^{n} \log \left( \frac{1}{S} \sum_{s=1}^{S} p(y_i \mid \theta^{s}) \right) + 2 \sum_{i=1}^{n} \operatorname{Var}_{s} \left( \log p(y_i \mid \theta^{s}) \right)$$

Both terms in this expression look very similar. The first one, the lppd (log pointwise predictive density), computes the mean likelihood over the posterior samples for each data point; we then take the logarithm and sum over all data points. Please compare this term with equations 5.3 and 5.4. This is just what we called deviance, but computed taking the posterior into account. Thus, if we accept that computing the log-likelihood is a good way to measure the appropriateness of the fit of a model, then computing it from the posterior is a logical path for a Bayesian approach. As we already said, the lppd of the observed data is an overestimate of the lppd for future data. Thus, we introduce a second term to correct for the overestimation. The second term computes the variance of the log-likelihood over...
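
To make the two terms concrete, here is a small, illustrative function (ours, not the book's) that computes WAIC directly from a matrix of pointwise log-likelihoods, with S posterior samples and n observations:

import numpy as np
from scipy.special import logsumexp

def waic(log_lik):
    # log_lik: array of shape (S, n) with log p(y_i | theta_s)
    S = log_lik.shape[0]
    # lppd: for each data point, the log of the mean likelihood over
    # the S posterior samples, summed over all n data points
    lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(S))
    # p_waic: the variance of the log-likelihood over the posterior
    # samples, summed over all data points (the penalty term)
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return -2 * (lppd - p_waic)

# Example with a made-up log-likelihood matrix: 2000 samples, 50 points
print(waic(np.random.normal(-1.0, 0.5, size=(2000, 50))))

Note how the mean is taken over likelihoods before the logarithm in the first term, while the variance in the second term is taken over log-likelihoods.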

Summary

Posterior predictive checks are a general concept and practice that can help us understand how well models are capturing the data and how well they capture the aspects of a problem we are interested in. We can perform posterior predictive checks with just one model or with many models, and thus we can use them as a method for model comparison. Posterior predictive checks are generally done via visualizations, but numerical summaries like Bayesian p-values can also be helpful.

Good models strike a balance between complexity and predictive accuracy. We exemplified this feature by using the classical example of polynomial regression. We discussed two methods for estimating the out-of-sample accuracy without leaving data aside: cross-validation and information criteria. We focused our discussion on the latter. From a practical point of view, information criteria is a...

Exercises

  1. This exercise is about regularizing priors. In the code that generates the data, change order=2 to another value, such as order=5. Then, fit model_p and plot the resulting curve. Repeat this, but now using a prior for beta with sd=100 instead of sd=1, and plot the resulting curve. How do the two curves differ? Try this out with sd=np.array([10, 0.1, 0.1, 0.1, 0.1]), too.
  2. Repeat the previous exercise but increase the amount of data to 500 data points.
  3. Fit a cubic model (order 3), compute WAIC and LOO, plot the results, and compare them with the linear and quadratic models.
  4. Use pm.sample_posterior_predictive() to rerun the PPC example, but this time, plot the values of y instead of the values of the mean.
  5. Read and run the posterior predictive example from PyMC3's documentation at https://pymc-devs.github.io/pymc3/notebooks/posterior_predictive.html. Pay special...