Bayesian inference is a method for learning, from data, about the relationships between variables in real-world problems, in the presence of uncertainty. It is a framework built on probability theory, so any reader interested in Bayesian inference should have a good knowledge of probability theory to understand and use it. This chapter gives an overview of probability theory that will be sufficient to understand the rest of the chapters in this book.

It was Pierre-Simon Laplace who first proposed a formal definition of probability with mathematical rigor. This definition is called the
*Classical Definition* and it states the following:

> The theory of chance consists in reducing all the events of the same kind to a certain number of cases equally possible, that is to say, to such as we may be equally undecided about in regard to their existence, and in determining the number of cases favorable to the event whose probability is sought. The ratio of this number to that of all the cases possible is the measure of this probability.
>
> --Pierre-Simon Laplace, *A Philosophical Essay on Probabilities*

What this definition means is that, if a random experiment can result in $n$ mutually exclusive and equally likely outcomes, the probability of an event $A$ is given by:

$$P(A) = \frac{n_A}{n}$$

Here, $n_A$ is the number of outcomes favorable to the occurrence of the event $A$.

To illustrate this concept, let us take the simple example of rolling a die. If the die is fair, then all the faces have an equal chance of showing up when the die is rolled, so the probability of each face showing up is 1/6. However, when one rolls the die 100 times, the faces will not appear in exactly equal proportions of 1/6, due to random fluctuations. The estimate of the probability of each face is the number of times the face shows up divided by the number of rolls. As the number of rolls becomes very large, this ratio will get close to 1/6.
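As a quick illustration of this frequentist estimate, here is a minimal R sketch (using base R's `sample` function; the seed and the number of rolls are arbitrary choices) that simulates die rolls and compares the observed relative frequencies with the theoretical value of 1/6:

```r
# Simulate rolling a fair six-sided die and estimate P(face) by relative frequency
set.seed(42)                                 # arbitrary seed, for reproducibility
rolls <- sample(1:6, size = 10000, replace = TRUE)

# Relative frequency of each face; close to 1/6 when the number of rolls is large
rel_freq <- table(rolls) / length(rolls)
print(rel_freq)
print(abs(rel_freq - 1 / 6))                 # deviation from the theoretical probability
```

With only 100 rolls instead of 10,000, the deviations from 1/6 are noticeably larger, which is exactly the random fluctuation described above.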

In the long run, this classical definition treats the probability of an uncertain event as the relative frequency of its occurrence. This is also called the
**frequentist** approach to probability. Although this approach is suitable for a large class of problems, there are cases where it cannot be used. As an example, consider the following question: *Is Pataliputra the name of an ancient city or a king?* In such cases, we have a degree of belief in the various plausible answers, but it is not based on counting the outcomes of a repeatable experiment (in Sanskrit, *Putra* means son, so some people may believe that Pataliputra is the name of an ancient Indian king; it is, in fact, a city).

Another example is: *What is the chance of the Democratic Party winning the election in 2016 in America?* Some people may believe it is 1/2 and some may believe it is 2/3. In such cases, probability is defined as the **degree of belief** of a person in the outcome of an uncertain event. This is called the
**subjective** definition of probability.

One of the limitations of the classical or frequentist definition of probability is that it cannot address subjective probabilities. As we will see later in this book, Bayesian inference is a natural framework for treating both frequentist and subjective interpretations of probability.

In both classical and Bayesian approaches, a probability distribution function is the central quantity that captures all of the information about the relationship between variables in the presence of uncertainty. A probability distribution assigns a probability value to each measurable subset of outcomes of a random experiment. The variables involved could be discrete or continuous, and univariate or multivariate, and, although people use slightly different terminologies, there are commonly used probability distributions for each of these types of random variables; the rest of this chapter reviews several of them.

One of the well-known distribution functions is the normal or Gaussian distribution, which is named after Carl Friedrich Gauss, a famous German mathematician and physicist. It is also known by the name *bell curve* because of its shape. The mathematical form of this distribution is given by:

$$P(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)$$

Here, $\mu$ is the mean or location parameter and $\sigma$ is the standard deviation or scale parameter ($\sigma^2$ is called the variance). The following graphs show what the distribution looks like for different values of the location and scale parameters:

One can see that as the mean changes, the location of the peak of the distribution changes. Similarly, when the standard deviation changes, the width of the distribution also changes.
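Plots of this kind can be reproduced with base R's `dnorm` density function; the following is a minimal sketch with arbitrarily chosen parameter values:

```r
# Normal densities for different location (mu) and scale (sigma) parameters
x <- seq(-10, 10, length.out = 500)

plot(x, dnorm(x, mean = 0, sd = 1), type = "l", col = "black",
     ylab = "P(x)", main = "Normal distribution")
lines(x, dnorm(x, mean = 3, sd = 1), col = "red")    # peak shifts right (mu = 3)
lines(x, dnorm(x, mean = 0, sd = 2), col = "blue")   # curve widens (sigma = 2)
legend("topleft", legend = c("mu = 0, sd = 1", "mu = 3, sd = 1", "mu = 0, sd = 2"),
       col = c("black", "red", "blue"), lty = 1)
```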

Many natural datasets follow a normal distribution because, according to the
**central limit theorem**, any random variable that can be expressed as the mean of a large number of independent random variables will be approximately normally distributed. This holds irrespective of the form of the distribution these random variables are drawn from, as long as they are all drawn from the same original distribution and it has finite mean and variance. A normal distribution is also very popular among data scientists because, in many statistical inference problems, theoretical results can be derived if the underlying distribution is normal.

Now, let us look at the multidimensional version of the normal distribution. If the random variable is an N-dimensional vector, **x** is denoted by:

$$\mathbf{x} = (x_1, x_2, \ldots, x_N)$$

Then, the corresponding normal distribution is given by:

$$P(\mathbf{x}) = \frac{1}{(2\pi)^{N/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right)$$

Here, $\boldsymbol{\mu}$ corresponds to the mean (also called location) and $\Sigma$ is an $N \times N$ covariance matrix (also called scale).

To get a better understanding of the multidimensional normal distribution, let us take the case of two dimensions. In this case, $\mathbf{x} = (x, y)$ and the covariance matrix is given by:

$$\Sigma = \begin{bmatrix} \sigma_x^2 & \rho \sigma_x \sigma_y \\ \rho \sigma_x \sigma_y & \sigma_y^2 \end{bmatrix}$$

Here, $\sigma_x^2$ and $\sigma_y^2$ are the variances along the $x$ and $y$ directions, and $\rho$ is the correlation between $x$ and $y$. A plot of the two-dimensional normal distribution for given values of $\boldsymbol{\mu}$, $\sigma_x$, $\sigma_y$, and $\rho$ is shown in the following image:

If $\rho = 0$, then the two-dimensional normal distribution reduces to the product of two one-dimensional normal distributions, since $\Sigma$ becomes diagonal in this case. The following 2D projections of the normal distribution for the same values of $\sigma_x$ and $\sigma_y$, but with a high value of $\rho$ and with $\rho = 0$, illustrate this case:

The high correlation between *x* and *y* in the first case forces most of the data points along the 45 degree line and makes the distribution more anisotropic; whereas, in the second case, when the correlation is zero, the distribution is more isotropic.
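This effect of the correlation parameter is easy to reproduce by sampling from a two-dimensional normal distribution, for example with the `mvrnorm` function from the MASS package (bundled with standard R distributions); the parameter values below are illustrative, not the ones used for the figures:

```r
library(MASS)    # provides mvrnorm for multivariate normal sampling

set.seed(1)
mu <- c(0, 0)
sigma_x <- 1
sigma_y <- 1

# Helper (for this sketch only) building the 2 x 2 covariance matrix for a given rho
make_cov <- function(rho) {
  matrix(c(sigma_x^2, rho * sigma_x * sigma_y,
           rho * sigma_x * sigma_y, sigma_y^2), nrow = 2)
}

xy_corr   <- mvrnorm(2000, mu, make_cov(0.9))   # anisotropic: points hug the 45-degree line
xy_uncorr <- mvrnorm(2000, mu, make_cov(0))     # isotropic: roughly circular cloud

par(mfrow = c(1, 2))
plot(xy_corr,   xlab = "x", ylab = "y", main = "High correlation", pch = 20)
plot(xy_uncorr, xlab = "x", ylab = "y", main = "Zero correlation", pch = 20)
```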

Later in this chapter, we will briefly review some of the other well-known distributions used in Bayesian inference.

Often, one would be interested in finding the probability of the occurrence of a set of random variables when other random variables in the problem are held fixed. As an example from a population health study, one might be interested in finding the probability of a person in the age range 40-50 developing heart disease, given that they have high blood pressure and diabetes. Questions such as these can be modeled using conditional probability, which is defined as the probability of an event given that another event has happened. More formally, if we take the variables *A* and *B*, this definition can be written as follows:

$$P(A|B) = \frac{P(A, B)}{P(B)}$$

Similarly:

$$P(B|A) = \frac{P(A, B)}{P(A)}$$

The following Venn diagram explains the concept more clearly:

In Bayesian inference, we are interested in conditional probabilities corresponding to multivariate distributions. If $\{x_1, x_2, \ldots, x_N\}$ denotes the entire random variable set, then the conditional probability of $\{x_1, \ldots, x_k\}$, given that $\{x_{k+1}, \ldots, x_N\}$ is fixed at some value, is given by the ratio of the joint probability of $\{x_1, x_2, \ldots, x_N\}$ and the joint probability of $\{x_{k+1}, \ldots, x_N\}$:

$$P(x_1, \ldots, x_k | x_{k+1}, \ldots, x_N) = \frac{P(x_1, x_2, \ldots, x_N)}{P(x_{k+1}, \ldots, x_N)}$$

In the case of the two-dimensional normal distribution, the conditional probability of interest is as follows:

$$P(x|y) = \frac{P(x, y)}{P(y)}$$

It can be shown (exercise 2 in the *Exercises* section of this chapter) that the RHS can be simplified, resulting in an expression for $P(x|y)$ in the form of a normal distribution again, with mean $\mu_{x|y} = \mu_x + \rho \frac{\sigma_x}{\sigma_y}(y - \mu_y)$ and variance $\sigma_{x|y}^2 = \sigma_x^2 (1 - \rho^2)$.
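These expressions for the conditional mean and variance can be checked numerically by conditioning simulated samples on a narrow band of *y* values; the following sketch again uses MASS's `mvrnorm`, with illustrative parameter values:

```r
library(MASS)

set.seed(7)
mu_x <- 1; mu_y <- 2
sigma_x <- 1.5; sigma_y <- 1
rho <- 0.8
Sigma <- matrix(c(sigma_x^2, rho * sigma_x * sigma_y,
                  rho * sigma_x * sigma_y, sigma_y^2), nrow = 2)
xy <- mvrnorm(1e6, c(mu_x, mu_y), Sigma)

# Keep only the samples whose y value is very close to y0
y0 <- 2.5
x_given_y <- xy[abs(xy[, 2] - y0) < 0.01, 1]

mean(x_given_y)                                  # empirical conditional mean
mu_x + rho * (sigma_x / sigma_y) * (y0 - mu_y)   # theoretical mu_{x|y}
sd(x_given_y)                                    # empirical conditional standard deviation
sigma_x * sqrt(1 - rho^2)                        # theoretical sigma_{x|y}
```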

From the definition of the conditional probabilities $P(A|B)$ and $P(B|A)$, it is easy to show the following:

$$P(A|B) = \frac{P(B|A) P(A)}{P(B)}$$

Rev. Thomas Bayes (1701-1761) used this rule and formulated his famous Bayes theorem, which can be interpreted as follows: if $P(A)$ represents the initial degree of belief (or prior probability) in the value of a random variable *A* before observing *B*, then its posterior probability, or degree of belief after having accounted for *B*, is updated according to the preceding equation. So, Bayesian inference essentially corresponds to updating beliefs about an uncertain system after having made some observations about it. In a sense, this is also how we human beings learn about the world. For example, before we visit a new city, we will have certain prior knowledge about the place, from reading books or the Web.

However, soon after we reach the place, this belief gets updated based on our initial experience of it. We continuously update our belief as we explore the new city more and more. We will describe Bayesian inference in more detail in Chapter 3, *Introducing Bayesian Inference*.
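As a toy numerical illustration of this updating rule (a hypothetical medical-test example, not taken from the text), suppose a disease has a prior probability of 0.01 in a population, a test detects it with probability 0.95, and the test gives a false positive with probability 0.05. Bayes' rule then gives the posterior probability of the disease given a positive test:

```r
# Bayes rule: P(A|B) = P(B|A) * P(A) / P(B), with hypothetical numbers
prior     <- 0.01    # P(disease): prior degree of belief
sens      <- 0.95    # P(positive | disease)
false_pos <- 0.05    # P(positive | no disease)

# P(positive), by the law of total probability
p_positive <- sens * prior + false_pos * (1 - prior)

posterior <- sens * prior / p_positive   # updated belief after seeing the positive test
posterior                                # about 0.16: higher than the prior, but still low
```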

In many situations, we are interested in the probability distribution of only a subset of the random variables. For example, in the heart disease problem mentioned in the previous section, if we want to infer the probability of people in a population having heart disease as a function of their age only, we need to integrate out the effect of the other random variables, such as blood pressure and diabetes. This is called **marginalization**:

$$P(x_1, \ldots, x_k) = \int P(x_1, x_2, \ldots, x_N) \, dx_{k+1} \cdots dx_N$$

Or, for discrete random variables:

$$P(x_1, \ldots, x_k) = \sum_{x_{k+1}, \ldots, x_N} P(x_1, x_2, \ldots, x_N)$$

Note that the marginal distribution is very different from the conditional distribution. In conditional probability, we are finding the probability of a subset of random variables with the values of the other random variables fixed (conditioned) at given values. In the case of the marginal distribution, we are eliminating the effect of a subset of random variables by integrating them out (in the sense of averaging their effect) of the joint distribution. For example, in the case of the two-dimensional normal distribution, marginalization with respect to one variable will result in a one-dimensional normal distribution of the other variable, as follows:

$$P(x) = \int_{-\infty}^{\infty} P(x, y)\, dy = \frac{1}{\sqrt{2\pi}\,\sigma_x} \exp\left( -\frac{(x - \mu_x)^2}{2\sigma_x^2} \right)$$

The details of this integration are left as an exercise (exercise 3 in the *Exercises* section of this chapter).
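Without doing the integral analytically, one can verify this marginalization numerically with base R's `integrate` function; a sketch with illustrative parameter values:

```r
# Bivariate normal density written out explicitly (see exercise 2 for the formula)
mu_x <- 0; mu_y <- 0
sigma_x <- 1; sigma_y <- 2
rho <- 0.6

p_xy <- function(x, y) {
  q <- (x - mu_x)^2 / sigma_x^2 + (y - mu_y)^2 / sigma_y^2 -
       2 * rho * (x - mu_x) * (y - mu_y) / (sigma_x * sigma_y)
  exp(-q / (2 * (1 - rho^2))) / (2 * pi * sigma_x * sigma_y * sqrt(1 - rho^2))
}

# Integrating out y at a fixed x should match the univariate normal density dnorm
x0 <- 0.7
integrate(function(y) p_xy(x0, y), -Inf, Inf)$value
dnorm(x0, mean = mu_x, sd = sigma_x)
```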

Having known the distribution of a set of random variables $\{x_1, x_2, \ldots, x_N\}$, what one would typically be interested in for real-life applications is the ability to estimate the average values of these random variables and the correlations between them. These are computed formally using the following expressions:

$$\mathbb{E}[x_i] = \int x_i \, P(x_1, x_2, \ldots, x_N) \, dx_1 \cdots dx_N$$

$$\operatorname{Cov}(x_i, x_j) = \mathbb{E}\big[ (x_i - \mathbb{E}[x_i]) (x_j - \mathbb{E}[x_j]) \big]$$

For example, in the case of the two-dimensional normal distribution, if we are interested in finding the correlation between the variables $x$ and $y$, it can be formally computed from the joint distribution using the following formula:

$$\rho = \frac{1}{\sigma_x \sigma_y} \int (x - \mu_x)(y - \mu_y) \, P(x, y) \, dx \, dy$$
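In practice, these integrals are often estimated by sample averages over draws from the joint distribution; the following Monte Carlo sketch (illustrative parameters, MASS's `mvrnorm` again) recovers the correlation of a bivariate normal:

```r
library(MASS)

set.seed(3)
rho <- 0.5
xy <- mvrnorm(1e5, mu = c(0, 0),
              Sigma = matrix(c(1, rho, rho, 1), nrow = 2))

colMeans(xy)                  # sample estimates of E[x] and E[y]; both close to 0
x <- xy[, 1]
y <- xy[, 2]

# Sample-average version of the correlation integral; close to rho = 0.5
mean((x - mean(x)) * (y - mean(y))) / (sd(x) * sd(y))
cor(x, y)                     # R's built-in estimator gives essentially the same value
```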

A binomial distribution is a discrete distribution that gives the probability of *k* heads in *n* independent trials, where each trial has one of two possible outcomes, heads or tails, with the probability of heads being *p*. Each of the trials is called a Bernoulli trial. The functional form of the binomial distribution is given by:

$$P(k|n, p) = \binom{n}{k} p^k (1 - p)^{n - k}$$

Here, $P(k|n, p)$ denotes the probability of having *k* heads in *n* trials. The mean of the binomial distribution is given by *np* and the variance is given by *np(1-p)*. Have a look at the following graphs:

The preceding graphs show the binomial distribution for two values of *n*, 100 and 1000, with *p = 0.7*. As you can see, when *n* becomes large, the binomial distribution becomes sharply peaked. It can be shown that, in the large *n* limit, a binomial distribution can be approximated by a normal distribution with mean *np* and variance *np(1-p)*. This is a characteristic shared by many discrete distributions: in the large *n* limit, they can be approximated by continuous distributions.
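This normal approximation is easy to check with R's `dbinom` and `dnorm` functions; a sketch for the *n* = 1000, *p* = 0.7 case mentioned above:

```r
# Compare Binomial(n, p) with its normal approximation N(np, np(1 - p))
n <- 1000
p <- 0.7
k <- 600:800    # range of k values around the mean np = 700

plot(k, dbinom(k, size = n, prob = p), type = "h", col = "grey",
     xlab = "k", ylab = "P(k)", main = "Binomial vs normal approximation")
lines(k, dnorm(k, mean = n * p, sd = sqrt(n * p * (1 - p))), col = "red", lwd = 2)
```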

The Beta distribution, denoted by $\text{Beta}(\alpha, \beta)$, is a function of powers of $x$ and its reflection $(1 - x)$, and is given by:

$$P(x|\alpha, \beta) = \frac{x^{\alpha - 1} (1 - x)^{\beta - 1}}{B(\alpha, \beta)}$$

Here, $\alpha, \beta > 0$ are parameters that determine the shape of the distribution function and $B(\alpha, \beta)$ is the Beta function, given by the following ratio of Gamma functions: $B(\alpha, \beta) = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha + \beta)}$.

The Beta distribution is a very important distribution in Bayesian inference. It is the conjugate prior probability distribution (which will be defined more precisely in the next chapter) for binomial, Bernoulli, negative binomial, and geometric distributions. It is used for modeling the random behavior of percentages and proportions. For example, the Beta distribution has been used for modeling **allele** frequencies in population genetics, time allocation in project management, the proportion of minerals in rocks, and heterogeneity in the probability of HIV transmission.
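The variety of shapes the two parameters can produce is easy to explore with R's `dbeta` function; the parameter choices below are only examples:

```r
# Beta densities for a few (alpha, beta) choices
x <- seq(0, 1, length.out = 500)

plot(x, dbeta(x, 2, 5), type = "l", col = "red", ylab = "P(x)",
     main = "Beta distribution")                    # mass concentrated near 0
lines(x, dbeta(x, 5, 2), col = "blue")              # mass concentrated near 1
lines(x, dbeta(x, 2, 2), col = "darkgreen")         # symmetric around 0.5
lines(x, dbeta(x, 1, 1), col = "black", lty = 2)    # alpha = beta = 1: the uniform distribution
```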

The Gamma distribution, denoted by $\text{Gamma}(\alpha, \beta)$, is another common distribution used in Bayesian inference. It is used for modeling waiting times, such as survival times. Special cases of the Gamma distribution are the well-known Exponential and Chi-Square distributions.

In Bayesian inference, the Gamma distribution is used as a conjugate prior for the inverse of the variance of a one-dimensional normal distribution, or for parameters such as the rate ($\lambda$) of an exponential or Poisson distribution.

The mathematical form of a Gamma distribution is given by:

$$P(x|\alpha, \beta) = \frac{\beta^{\alpha} x^{\alpha - 1} e^{-\beta x}}{\Gamma(\alpha)}$$

Here, $\alpha$ and $\beta$ are the shape and rate parameters, respectively (both take values greater than zero). There is also a form in terms of the scale parameter $\theta = 1/\beta$, which is common in
**econometrics**. Another related distribution is the Inverse-Gamma distribution, which is the distribution of the reciprocal of a variable that is distributed according to the Gamma distribution. It is mainly used in Bayesian inference as the conjugate prior distribution for the variance of a one-dimensional normal distribution.
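In R, the `dgamma` function accepts either a `rate` or a `scale` argument, which makes it easy to confirm that the two parameterizations describe the same density; the parameter values below are arbitrary:

```r
# Gamma density: the rate (beta) and scale (theta = 1/beta) forms agree
x <- seq(0.01, 10, length.out = 500)
alpha <- 2
beta  <- 1.5

all.equal(dgamma(x, shape = alpha, rate = beta),
          dgamma(x, shape = alpha, scale = 1 / beta))   # TRUE

plot(x, dgamma(x, shape = alpha, rate = beta), type = "l",
     ylab = "P(x)", main = "Gamma distribution")
lines(x, dgamma(x, shape = 1, rate = beta), col = "red")  # shape = 1: the Exponential special case
```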

The Dirichlet distribution is a multivariate analogue of the Beta distribution. It is commonly used in Bayesian inference as the conjugate prior distribution for the multinomial and categorical distributions. The main reason for this is that it is easy to implement inference techniques, such as Gibbs sampling, on the Dirichlet-multinomial distribution.

The Dirichlet distribution of order $K \geq 2$ is defined over an open $(K - 1)$-dimensional simplex as follows:

$$P(x_1, \ldots, x_K | \alpha_1, \ldots, \alpha_K) = \frac{\Gamma\left(\sum_{i=1}^{K} \alpha_i\right)}{\prod_{i=1}^{K} \Gamma(\alpha_i)} \prod_{i=1}^{K} x_i^{\alpha_i - 1}$$

Here, $x_i > 0$, $\sum_{i=1}^{K} x_i = 1$, and $\alpha_i > 0$.
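Base R has no built-in Dirichlet sampler, but one can draw from a Dirichlet distribution by normalizing independent Gamma draws, a standard construction; `rdirichlet_sketch` below is a name made up for this sketch:

```r
# Sample from Dirichlet(alpha) by normalizing independent Gamma(alpha_i, 1) draws
rdirichlet_sketch <- function(n, alpha) {
  k <- length(alpha)
  g <- matrix(rgamma(n * k, shape = alpha), ncol = k, byrow = TRUE)
  g / rowSums(g)    # each row now sums to 1, i.e. lies on the simplex
}

set.seed(11)
samples <- rdirichlet_sketch(10000, alpha = c(2, 3, 5))
colMeans(samples)           # close to alpha / sum(alpha) = (0.2, 0.3, 0.5)
rowSums(samples[1:3, ])     # each row sums exactly to 1
```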

The Wishart distribution is a multivariate generalization of the Gamma distribution. It is defined over symmetric, non-negative-definite matrix-valued random variables. In Bayesian inference, it is used as the conjugate prior to estimate the distribution of the inverse of the covariance matrix (or precision matrix) of the normal distribution. When we discussed the Gamma distribution, we said it is used as a conjugate prior for the inverse of the variance of the one-dimensional normal distribution; the Wishart distribution plays the same role in the multidimensional case.

The mathematical definition of the Wishart distribution is as follows:

$$P(\mathbf{X}|\mathbf{V}, n) = \frac{|\mathbf{X}|^{(n - p - 1)/2} \exp\left( -\frac{1}{2} \operatorname{tr}(\mathbf{V}^{-1} \mathbf{X}) \right)}{2^{np/2} |\mathbf{V}|^{n/2} \Gamma_p\!\left(\frac{n}{2}\right)}$$

Here, $|\mathbf{X}|$ denotes the determinant of the matrix $\mathbf{X}$ of dimension $p \times p$, $\mathbf{V}$ is a $p \times p$ positive-definite scale matrix, $\Gamma_p$ is the multivariate Gamma function, and $n \geq p$ is the degrees of freedom.

A special case of the Wishart distribution, when $p = 1$ and $\mathbf{V} = 1$, corresponds to the well-known Chi-Square distribution function with $n$ degrees of freedom.
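R's stats package provides an `rWishart` sampler, which can be used to check the known fact that the mean of a Wishart-distributed matrix is $n\mathbf{V}$; the scale matrix and degrees of freedom below are arbitrary:

```r
# Draw from a Wishart distribution and check E[X] = n * V empirically
V  <- matrix(c(1, 0.5, 0.5, 2), nrow = 2)   # 2 x 2 positive-definite scale matrix
df <- 5                                     # degrees of freedom (n >= p)

set.seed(21)
draws <- rWishart(10000, df = df, Sigma = V)   # returns a 2 x 2 x 10000 array

apply(draws, c(1, 2), mean)   # element-wise sample mean of the draws
df * V                        # theoretical mean, close to the above
```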

Wikipedia gives a list of more than 100 useful distributions that are commonly used by statisticians (reference 1 in the *References* section of this chapter). Interested readers should refer to this article.

1. By using the definition of conditional probability, show that any multivariate joint distribution of N random variables has the following trivial factorization:

   $$P(x_1, x_2, \ldots, x_N) = P(x_1 | x_2, \ldots, x_N) P(x_2 | x_3, \ldots, x_N) \cdots P(x_{N-1} | x_N) P(x_N)$$

2. The bivariate normal distribution is given by:

   $$P(x, y) = \frac{1}{2\pi \sigma_x \sigma_y \sqrt{1 - \rho^2}} \exp\left( -\frac{1}{2(1 - \rho^2)} \left[ \frac{(x - \mu_x)^2}{\sigma_x^2} + \frac{(y - \mu_y)^2}{\sigma_y^2} - \frac{2\rho (x - \mu_x)(y - \mu_y)}{\sigma_x \sigma_y} \right] \right)$$

   Here:

   $$\boldsymbol{\mu} = (\mu_x, \mu_y), \qquad \Sigma = \begin{bmatrix} \sigma_x^2 & \rho \sigma_x \sigma_y \\ \rho \sigma_x \sigma_y & \sigma_y^2 \end{bmatrix}$$

   By using the definition of conditional probability, show that the conditional distribution $P(x|y)$ can be written as a normal distribution of the form $N(\mu_{x|y}, \sigma_{x|y})$, where $\mu_{x|y} = \mu_x + \rho \frac{\sigma_x}{\sigma_y}(y - \mu_y)$ and $\sigma_{x|y}^2 = \sigma_x^2 (1 - \rho^2)$.

3. By explicit integration of the expression in exercise 2, show that the marginalization of the bivariate normal distribution results in a univariate normal distribution.

4. In the following table, a dataset containing the measurements of petal and sepal sizes of 15 different Iris flowers is shown (taken from the Iris dataset, UCI machine learning dataset repository). All units are in cm:

| Sepal Length | Sepal Width | Petal Length | Petal Width | Class of Flower |
| --- | --- | --- | --- | --- |
| 5.1 | 3.5 | 1.4 | 0.2 | Iris-setosa |
| 4.9 | 3.0 | 1.4 | 0.2 | Iris-setosa |
| 4.7 | 3.2 | 1.3 | 0.2 | Iris-setosa |
| 4.6 | 3.1 | 1.5 | 0.2 | Iris-setosa |
| 5.0 | 3.6 | 1.4 | 0.2 | Iris-setosa |
| 7.0 | 3.2 | 4.7 | 1.4 | Iris-versicolor |
| 6.4 | 3.2 | 4.5 | 1.5 | Iris-versicolor |
| 6.9 | 3.1 | 4.9 | 1.5 | Iris-versicolor |
| 5.5 | 2.3 | 4.0 | 1.3 | Iris-versicolor |
| 6.5 | 2.8 | 4.6 | 1.5 | Iris-versicolor |
| 6.3 | 3.3 | 6.0 | 2.5 | Iris-virginica |
| 5.8 | 2.7 | 5.1 | 1.9 | Iris-virginica |
| 7.1 | 3.0 | 5.9 | 2.1 | Iris-virginica |
| 6.3 | 2.9 | 5.6 | 1.8 | Iris-virginica |
| 6.5 | 3.0 | 5.8 | 2.2 | Iris-virginica |

Answer the following questions:

1. What is the probability of finding flowers with a sepal length of more than 5 cm and a sepal width of less than 3 cm?
2. What is the probability of finding flowers with a petal length of less than 1.5 cm, given that the petal width is equal to 0.2 cm?
3. What is the probability of finding flowers with a sepal length of less than 6 cm and a petal width of less than 1.5 cm, given that the class of the flower is Iris-versicolor?
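One way to check your answers is to compute empirical (relative-frequency) probabilities directly from the 15 rows of the table; the following R sketch (the column names are made up for this snippet) does exactly that:

```r
# The 15 measurements from the preceding table, entered as a data frame
d <- data.frame(
  sl = c(5.1, 4.9, 4.7, 4.6, 5.0, 7.0, 6.4, 6.9, 5.5, 6.5, 6.3, 5.8, 7.1, 6.3, 6.5),
  sw = c(3.5, 3.0, 3.2, 3.1, 3.6, 3.2, 3.2, 3.1, 2.3, 2.8, 3.3, 2.7, 3.0, 2.9, 3.0),
  pl = c(1.4, 1.4, 1.3, 1.5, 1.4, 4.7, 4.5, 4.9, 4.0, 4.6, 6.0, 5.1, 5.9, 5.6, 5.8),
  pw = c(0.2, 0.2, 0.2, 0.2, 0.2, 1.4, 1.5, 1.5, 1.3, 1.5, 2.5, 1.9, 2.1, 1.8, 2.2),
  cl = rep(c("Iris-setosa", "Iris-versicolor", "Iris-virginica"), each = 5)
)

# Question 1: joint probability as a relative frequency over all 15 flowers
mean(d$sl > 5 & d$sw < 3)

# Question 2: conditional probability, restricting to rows with petal width 0.2
mean(d$pl[d$pw == 0.2] < 1.5)

# Question 3: conditional probability, restricting to the Iris-versicolor rows
v <- d[d$cl == "Iris-versicolor", ]
mean(v$sl < 6 & v$pw < 1.5)
```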

1. http://en.wikipedia.org/wiki/List_of_probability_distributions
2. Feller W. *An Introduction to Probability Theory and Its Applications*. Vol. 1. Wiley Series in Probability and Mathematical Statistics. 1968. ISBN-10: 0471257087
3. Jaynes E.T. *Probability Theory: The Logic of Science*. Cambridge University Press. 2003. ISBN-10: 0521592712
4. Radziwill N.M. *Statistics (The Easier Way) with R: an informal text on applied statistics*. Lapis Lucera. 2015. ISBN-10: 0692339426

To summarize this chapter, we discussed elements of probability theory, particularly those aspects required for learning Bayesian inference. Due to lack of space, we have not covered many elementary aspects of this subject. There are some excellent books on this subject, for example, the books by William Feller (reference 2 in the *References* section of this chapter), E. T. Jaynes (reference 3), and N. M. Radziwill (reference 4). Readers are encouraged to read these to get a more in-depth understanding of probability theory and how it can be applied in real-life situations.

In the next chapter, we will introduce the R programming language, which is among the most popular open source frameworks for data analysis, and for Bayesian inference in particular.