Parameter Inference Using the Bayesian Approach

In the previous chapter, we discussed inferring model parameters using the maximum-likelihood approach. In this chapter, we will explore the same problem from a Bayesian perspective. The main topics are as follows:

  • Introduction to Bayesian learning
  • Bayesian learning in HMMs
  • Approximate algorithms for estimating distributions

Bayesian learning

In the maximum-likelihood approach to learning, we try to find the optimal parameters for our model, namely the ones that maximize the likelihood function. But real-world data is usually noisy and, in most cases, does not represent the true underlying distribution. In such cases, the maximum-likelihood approach fails. For example, consider tossing a fair coin a few times. It is possible that all of our tosses come up either heads or tails. If we apply maximum likelihood to this data, it will assign a probability of 1 to one side of the coin, suggesting that we would never see the other side. Or, to take a less extreme case, suppose we toss a coin 10 times and get three heads and seven tails. Maximum likelihood will then assign a probability of 0.3 to heads and 0.7 to tails, which is not the true probability of 0.5 for each side of a fair coin. Bayesian learning addresses this by treating the parameters themselves as random variables: we place a prior distribution over them and compute a posterior distribution given the observed data, so the estimate is regularized by the prior.
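
As a quick illustration (a minimal sketch, not code from this book), the following snippet contrasts the maximum-likelihood estimate for the 3-heads/7-tails example with the posterior mean under a uniform Beta(1, 1) prior; because the Beta distribution is conjugate to the Bernoulli likelihood, the posterior is available in closed form:

    # Observed data: 3 heads and 7 tails out of 10 tosses of a fair coin.
    heads, tails = 3, 7

    # Maximum-likelihood estimate: the raw observed frequency.
    mle_p_heads = heads / (heads + tails)  # 0.3

    # Bayesian estimate with a uniform Beta(1, 1) prior on P(heads).
    # Conjugacy gives the posterior Beta(1 + heads, 1 + tails) directly.
    post_alpha, post_beta = 1 + heads, 1 + tails
    posterior_mean = post_alpha / (post_alpha + post_beta)  # 4/12 ≈ 0.333

    print(f"MLE:            P(heads) = {mle_p_heads:.3f}")
    print(f"Posterior mean: P(heads) = {posterior_mean:.3f}")

The prior acts as a regularizer here: the posterior mean is pulled back toward 0.5, and no outcome that the prior considers possible is ever assigned zero probability.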

Bayesian learning in HMMs

As we saw in the previous section, in Bayesian learning we treat all the variables as random variables, assign priors to them, and then try to compute the posterior based on the observed data. Therefore, in the case of an HMM, we can assign priors over our transition probabilities, emission probabilities, or the number of observation states.

Therefore, the first problem we need to solve is selecting the prior. In theory, the prior can be any distribution over the parameters of the model, but in practice we usually choose a prior that is conjugate to the likelihood, so that the posterior has a closed-form solution. For example, when the output of the HMM is discrete, a common choice of prior is the Dirichlet distribution, mainly for two reasons: first, the Dirichlet distribution is conjugate to the multinomial distribution, so the posterior is again a Dirichlet and can be computed in closed form; and second, samples from a Dirichlet distribution are valid probability vectors, since their components are non-negative and sum to 1.
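
To make the conjugate update concrete, here is a minimal sketch (hypothetical code, not from this book) of the Dirichlet-multinomial update for the outgoing transition probabilities of a single hidden state, assuming the transition counts have already been obtained from an observed or sampled state sequence:

    import numpy as np

    n_states = 3

    # Uniform Dirichlet prior over the outgoing transition probabilities
    # of one state: all concentration parameters set to 1.
    prior_alpha = np.ones(n_states)

    # Hypothetical transition counts out of that state, e.g. gathered from
    # a hidden state sequence: 6 self-transitions, 2 to each other state.
    counts = np.array([6, 2, 2])

    # Conjugacy: Dirichlet prior + multinomial likelihood gives a Dirichlet
    # posterior whose concentration parameters are simply prior + counts.
    posterior_alpha = prior_alpha + counts

    # Posterior mean estimate of the transition probabilities.
    print(posterior_alpha / posterior_alpha.sum())  # ≈ [0.54, 0.23, 0.23]

    # The full posterior can also be sampled from directly.
    samples = np.random.default_rng(0).dirichlet(posterior_alpha, size=5)

The same row-by-row update applies to the emission probabilities when the observations are discrete.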

Code

At the time of writing, there are no Python packages that support Bayesian learning for HMMs, and it would be difficult to fit a complete implementation into this book. Moreover, even though Bayesian learning has many advantages, it is computationally infeasible in many cases. For these reasons, we are skipping the full code for Bayesian learning in HMMs.
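
That said, to give a flavor of what such an algorithm looks like, the following is a minimal Gibbs-sampling sketch for a toy discrete HMM (hypothetical code, not from this book; the observation sequence, state count, and iteration count are made up). It alternates between sampling the hidden state sequence with forward-filtering backward-sampling and resampling the parameters from their Dirichlet posteriors:

    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_symbols = 2, 3
    obs = np.array([0, 1, 2, 2, 1, 0, 0, 1])  # toy observation sequence
    T = len(obs)

    # Initialize the parameters by sampling from uniform Dirichlet priors.
    A = rng.dirichlet(np.ones(n_states), size=n_states)   # transition matrix
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)  # emission matrix
    pi = rng.dirichlet(np.ones(n_states))                 # initial distribution

    for _ in range(200):
        # Step 1: forward filtering -- normalized filtered probabilities.
        alpha = np.zeros((T, n_states))
        alpha[0] = pi * B[:, obs[0]]
        alpha[0] /= alpha[0].sum()
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            alpha[t] /= alpha[t].sum()

        # Step 2: backward sampling of the hidden state sequence.
        z = np.zeros(T, dtype=int)
        z[-1] = rng.choice(n_states, p=alpha[-1])
        for t in range(T - 2, -1, -1):
            w = alpha[t] * A[:, z[t + 1]]
            z[t] = rng.choice(n_states, p=w / w.sum())

        # Step 3: resample the parameters from their Dirichlet posteriors
        # (uniform prior plus transition/emission counts taken from z).
        trans_counts = np.zeros((n_states, n_states))
        emit_counts = np.zeros((n_states, n_symbols))
        for t in range(T - 1):
            trans_counts[z[t], z[t + 1]] += 1
        for t in range(T):
            emit_counts[z[t], obs[t]] += 1
        A = np.array([rng.dirichlet(1 + trans_counts[k]) for k in range(n_states)])
        B = np.array([rng.dirichlet(1 + emit_counts[k]) for k in range(n_states)])
        pi = rng.dirichlet(1 + np.eye(n_states)[z[0]])

Even on this toy example, every iteration requires a full forward pass over the sequence, which hints at why Bayesian learning becomes expensive for long sequences and large state spaces.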

Summary

In this chapter, we talked about applying Bayesian learning to the problem of estimating the parameters of an HMM. Bayesian learning has several benefits over the maximum-likelihood estimator, but it is computationally quite expensive except when closed-form solutions exist, and closed-form solutions are only possible when we use conjugate priors. In the following chapters, we will discuss detailed applications of HMMs to a wide variety of problems.
