Exact Inference Using Graphical Models

by Kiran R Karkera | June 2014 | Open Source

In this article by Kiran R Karkera, the author of Building Probabilistic Graphical Models with Python, we shall apply various inference engines to fully specified graphical models to obtain answers to our queries.

Before we start on the inference trail, we shall learn about the complexity of the inference problem. Different types of inference are appropriate in different contexts; we will learn which type is applicable where, and then experiment with both approaches.


Complexity of inference

A graphical model can be used to answer both probability queries and MAP queries. The most straightforward way to use the model is to generate the joint distribution and sum out all the variables except the ones we are interested in. However, materializing the joint distribution is where an exponential blowup happens: a table over n binary variables has 2^n entries.

In the worst case, exact inference is NP-hard. By the word exact, we mean specifying the probability values with a certain precision (say, five digits after the decimal point). Suppose we tone down our precision requirements (for example, only up to two digits after the decimal point). Is the (approximate) inference task any easier? Unfortunately not: even approximate inference is NP-hard. That is, obtaining values that are any better than random guessing (50 percent, or a probability of 0.5) can take exponential time.

It might seem like inference is a hopeless task, but that is only in the worst case. In general cases, we can use exact inference to solve certain classes of real-world problems (such as Bayesian networks that have a small number of discrete random variables). Of course, for larger problems, we have to resort to approximate inference.
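To make the blowup concrete, the following minimal sketch (with hypothetical numbers) answers a probability query on a tiny two-variable network by brute-force summation over the joint distribution. The same loop over n binary variables would visit 2**n assignments, which is why this approach does not scale:

```python
# A toy Bayes network A -> B over binary variables (the CPD numbers are
# hypothetical, chosen only for illustration).
p_a = {0: 0.6, 1: 0.4}                    # P(A)
p_b_given_a = {(0, 0): 0.9, (1, 0): 0.1,  # P(B | A = 0)
               (0, 1): 0.3, (1, 1): 0.7}  # P(B | A = 1)

def joint(a, b):
    """P(A = a, B = b) via the chain rule."""
    return p_a[a] * p_b_given_a[(b, a)]

# Marginalize by summing the joint over every assignment of A.
# With n binary variables, full enumeration visits 2 ** n assignments --
# exactly the exponential blowup described above.
p_b = {b: sum(joint(a, b) for a in (0, 1)) for b in (0, 1)}
```

Here `p_b` ends up as a valid distribution over B, but only because the network is tiny enough to enumerate.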

Real-world issues

Since inference is NP-hard, inference engines are written in languages that are as close to bare metal as possible, usually C or C++. If we wish to work in Python, we have the following choices:

  • Use Python implementations of inference algorithms. Complete and mature packages for these are uncommon.

  • Use inference engines that have a Python interface, such as Stan (mc-stan.org). This choice strikes a good balance between writing Python code and using a fast inference implementation.

  • Use inference engines that do not have a Python interface, which is true for the majority of the inference engines out there. A fairly comprehensive list can be found at http://en.wikipedia.org/wiki/Bayesian_network#Software. The use of Python here is limited to creating a file that describes the model in a format that the inference engine can consume.

In this article, we will stick to the first two choices in the list. We will use native Python implementations (of inference algorithms) to peek into the interiors of the inference algorithms while running toy-sized problems, and then use an external inference engine with a Python interface to try out a more real-world problem.

The tree algorithm

We will now look at another class of exact inference algorithms based on message passing.

Message passing is a general mechanism, and there exist many variations of message passing algorithms. We shall look at a short snippet of the clique tree message passing algorithm (which is sometimes called the junction tree algorithm). Other versions of the message passing algorithm are used in approximate inference as well.

We initiate the discussion by clarifying some of the terms used.

A cluster graph is an arrangement of a network where groups of variables are placed in clusters. A cluster is similar to a factor in that it has a set of variables in its scope.

The message passing algorithm is all about passing messages between clusters. As an analogy, consider the gossip going on at a party, where Shelly and Clair are in a conversation. If Shelly knows B, C, and D, and she is chatting with Clair who knows D, E, and F (note that the only person they know in common is D), they can share information (or pass messages) about their common friend D.

In the message passing algorithm, two clusters are connected by a Separation Set (sepset), which contains the variables common to both clusters. Using the preceding example, the two clusters {B, C, D} and {D, E, F} are connected by the sepset {D}, which contains the only variable common to both clusters.
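The party analogy translates directly into code: if each cluster is a set of variables, the sepset is simply the intersection of the two sets. A minimal sketch:

```python
# The party analogy as sets: the variables Shelly and Clair each know.
clique_shelly = {"B", "C", "D"}
clique_clair = {"D", "E", "F"}

# The sepset between two clusters is the intersection of their scopes.
sepset = clique_shelly & clique_clair  # the variables they can gossip about
```

Only the variables in the sepset can carry information between the two clusters.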

In the next section, we shall learn about the implementation details of the junction tree algorithm. We will first understand the four stages of the algorithm and then use code snippets to learn about it from an implementation perspective.

The four stages of the junction tree algorithm

In this section, we will discuss the four stages of the junction tree algorithm.

In the first stage, the Bayes network is converted into a secondary structure called a join tree (alternate names for this structure in the literature are junction tree, cluster tree, or a clique tree). The transformation from the Bayes network to junction tree proceeds as per the following steps:

  • We will construct a moral graph by changing all the directed edges to undirected edges. For every node at the center of a V-structure (a node with two or more parents), we connect its parents with an edge. We have seen an example of this process (in the VE algorithm); it is called moralization, apparently a reference to "marrying" the (unmarried) parents of a child node.

  • Then, we will selectively add edges to the moral graph to create a triangulated graph. A triangulated (chordal) graph is an undirected graph in which every cycle of length four or more has a chord, that is, an edge connecting two nodes that are not adjacent in the cycle.

  • From the triangulated graph, we will identify the maximal cliques (fully connected subsets of nodes).

  • Starting with the cliques as clusters, we will arrange the clusters to form an undirected tree called the join tree, which satisfies the running intersection property. This property states that if a variable appears in two cliques, it should also appear in every clique on the path that connects the two cliques.
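The first of the steps above, moralization, can be sketched in a few lines. This is a minimal illustration, not the book's implementation; the adjacency representation (a dict mapping each node to the set of its parents) is a hypothetical choice made for brevity:

```python
from itertools import combinations

def moralize(parents):
    """Build the moral graph of a Bayes network: drop edge directions and
    'marry' the co-parents of every V-structure.

    `parents` maps each node to the set of its parents.
    Returns a set of undirected edges, each a frozenset of two nodes."""
    edges = set()
    for child, pars in parents.items():
        for p in pars:
            edges.add(frozenset((p, child)))      # undirected copy of p -> child
        for u, v in combinations(sorted(pars), 2):
            edges.add(frozenset((u, v)))          # marry the parents
    return edges

# The classic V-structure A -> C <- B: moralization adds the edge A - B.
moral = moralize({"A": set(), "B": set(), "C": {"A", "B"}})
```

Triangulation and clique extraction would then operate on the undirected edge set this returns.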

In the second stage, the potentials at each cluster are initialized. A potential is similar to a CPD or a table: it holds a value for each assignment to the variables in its scope. Both clusters and sepsets contain potentials. The term potential is used instead of probability because, in Markov networks, the values of a potential, unlike probabilities, are not obliged to sum to 1.
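The distinction between a potential and a probability table is easy to show in code. The numbers below are hypothetical; the point is only that the raw values need not sum to 1, and that dividing by the normalizing constant turns a potential into a proper distribution:

```python
# A potential over two binary variables (X, Y): non-negative values that,
# unlike probabilities, are not required to sum to 1.
phi = {(0, 0): 30.0, (0, 1): 5.0, (1, 0): 1.0, (1, 1): 10.0}

z = sum(phi.values())                               # normalizing constant
distribution = {k: v / z for k, v in phi.items()}   # now sums to 1
```
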

The third stage consists of message passing, or belief propagation, between neighboring clusters. Each message conveys the beliefs the sending cluster holds about the variables in the sepset.

Messages can be passed asynchronously, but a cluster has to wait for information from its other neighbors before it collates that information and passes it on. It is useful to think of a tree-structured cluster graph, where message passing happens in two stages: an upward pass and a downward pass. Only after a node receives messages from all its children will it send a message to its parent (in the "upward pass"), and only after a node receives a message from its parent will it send messages to its children (in the "downward pass").
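The two-pass schedule can be sketched as two tree traversals. This is an illustration of the ordering only (the actual messages would be potentials); the adjacency-dict representation and the names are hypothetical:

```python
def passes(tree, root):
    """Compute the two-pass message schedule for a tree-structured cluster
    graph given as an adjacency dict. Each message is a (sender, receiver)
    pair; upward messages flow toward the root, downward away from it."""
    upward, downward = [], []

    def collect(node, parent):
        for child in tree[node]:
            if child != parent:
                collect(child, node)            # wait for the subtree first
                upward.append((child, node))    # then the child reports upward

    def distribute(node, parent):
        for child in tree[node]:
            if child != parent:
                downward.append((node, child))  # parent informs the child
                distribute(child, node)

    collect(root, None)
    distribute(root, None)
    return upward, downward

# A three-cluster tree: leaf_1 -- root -- leaf_2
tree = {"root": ["leaf_1", "leaf_2"], "leaf_1": ["root"], "leaf_2": ["root"]}
up, down = passes(tree, "root")
```

After both passes, every cluster has received information from every other cluster, which is what makes calibration possible.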

The message passing stage completes when every cluster-sepset pair has consistent beliefs. Recall that a cluster and a neighboring sepset have variables in common. When beliefs are consistent, the potential over those common variables obtained by marginalizing either the cluster or the sepset has the same values, which is why it is said that the cluster graph has consistent beliefs, or that the cliques are calibrated.

Once the whole cluster graph has consistent beliefs, the fourth stage is marginalization, where we can query the marginal distribution for any variable in the graph.
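Querying a marginal from a calibrated clique amounts to summing its potential over the other variables and normalizing. A minimal sketch, with a hypothetical calibrated potential over two binary variables (C, D):

```python
def marginal(potential, index):
    """Marginal distribution of one variable of a potential: sum over all
    assignments, grouping by the value at tuple position `index`, then
    normalize so the result sums to 1."""
    sums = {}
    for assignment, value in potential.items():
        key = assignment[index]
        sums[key] = sums.get(key, 0.0) + value
    z = sum(sums.values())
    return {k: v / z for k, v in sums.items()}

# A calibrated cluster potential over (C, D); the values are hypothetical.
phi_cd = {(0, 0): 2.0, (0, 1): 6.0, (1, 0): 2.0, (1, 1): 10.0}
p_d = marginal(phi_cd, 1)   # marginal distribution of D
```

Because the cliques are calibrated, any clique containing D would return the same marginal, so we are free to pick the smallest one.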

Summary

We first explored the inference problem, where we studied the types of inference. We then learned that inference is NP-hard and understood that, for large networks, exact inference is infeasible. Finally, we walked through the four stages of the junction tree algorithm, from constructing a join tree to querying marginals from the calibrated cliques.

