Chapter 6. Bayesian Classification Models

We introduced the classification machine learning task in Chapter 4, Machine Learning Using Bayesian Inference, where we said that the objective of classification is to assign a data record to one of a set of predetermined classes. Classification is one of the most studied machine learning tasks, and there are several well-established state-of-the-art methods for it, including logistic regression models, support vector machines, random forest models, and neural network models. With sufficient labeled training data, these models can achieve accuracies above 95% in many practical problems.

The obvious question, then, is: why would you need Bayesian methods for classification? There are two answers. One is that it is often difficult to obtain a large amount of labeled data for training. When there are hundreds or thousands of features in a given problem, one often needs a large amount of training data for these supervised methods to avoid...

Performance metrics for classification


To understand the concepts easily, let's take the case of binary classification, where the task is to classify an input feature vector into one of two classes: -1 or 1. Assume that 1 is the positive class and -1 is the negative class. The predicted output contains only -1 or 1, but there can be two types of errors. Some of the -1 records in the test set could be predicted as 1; this is called a false positive or type I error. Similarly, some of the 1 records in the test set could be predicted as -1; this is called a false negative or type II error. For binary classification, these two types of errors can be represented in a confusion matrix, as shown below.

Confusion Matrix

                     Predicted Positive    Predicted Negative
Actual Positive      TP                    FN
Actual Negative      FP                    TN

From the confusion matrix, we can derive the following performance metrics:

  • Precision: This gives the percentage of records predicted as positive that are actually positive
  • Recall: This gives the percentage of actual positive records that are correctly predicted as positive (a small R sketch of both metrics follows this list)
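To make these definitions concrete, here is a minimal R sketch; everything in it is illustrative, including the hypothetical label vectors actual and predicted:

```r
# Hypothetical ground-truth and predicted labels, coded as -1 / 1
actual    <- c(1, 1, -1, 1, -1, -1, 1, -1)
predicted <- c(1, -1, -1, 1, 1, -1, 1, -1)

TP <- sum(actual == 1  & predicted == 1)   # true positives
FP <- sum(actual == -1 & predicted == 1)   # false positives (type I errors)
FN <- sum(actual == 1  & predicted == -1)  # false negatives (type II errors)
TN <- sum(actual == -1 & predicted == -1)  # true negatives

precision <- TP / (TP + FP)  # fraction of predicted positives that are correct
recall    <- TP / (TP + FN)  # fraction of actual positives that are recovered
c(precision = precision, recall = recall)
```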

The Naïve Bayes classifier


The name Naïve Bayes comes from the basic assumption in the model that the probability of a particular feature $x_i$ is independent of any other feature $x_j$, given the class label $C_k$. This implies the following:

$$P(x_1, x_2, \ldots, x_N | C_k) = \prod_{i=1}^{N} P(x_i | C_k)$$

Using this assumption and the Bayes rule, one can show that the probability of class $C_k$, given features $x_1, x_2, \ldots, x_N$, is given by:

$$P(C_k | x_1, x_2, \ldots, x_N) = \frac{P(C_k) \prod_{i=1}^{N} P(x_i | C_k)}{P(x_1, x_2, \ldots, x_N)}$$

Here, $P(x_1, x_2, \ldots, x_N)$ is the normalization term obtained by summing the numerator over all the values of $k$. It is also called the Bayesian evidence or partition function $Z$. The classifier selects the class label $\hat{C}$ that maximizes the posterior class probability $P(C_k | x_1, x_2, \ldots, x_N)$:

$$\hat{C} = \underset{C_k}{\arg\max}\; P(C_k) \prod_{i=1}^{N} P(x_i | C_k)$$

The Naïve Bayes classifier is a baseline classifier for document classification. One reason for this is that the underlying assumption, namely that each feature (a word or n-gram) is independent of the others given the class label, typically holds reasonably well for text. Another reason is that the Naïve Bayes classifier scales well to large document collections.
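As an illustration, the following is a minimal sketch of training a Naïve Bayes classifier in R using the naiveBayes() function from the e1071 package, one common CRAN implementation (the package used later in this chapter may differ). The built-in iris dataset stands in here for a real document-term matrix:

```r
library(e1071)  # provides naiveBayes(); install.packages("e1071") if needed

# Class-conditional distributions are estimated per feature,
# reflecting the conditional independence assumption above.
model <- naiveBayes(Species ~ ., data = iris)

# Predict class labels for the feature columns;
# predict(..., type = "raw") would return posterior class probabilities.
pred <- predict(model, iris[, -5])
table(predicted = pred, actual = iris$Species)  # confusion matrix
```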

There are two implementations of Naïve Bayes. In...

The Bayesian logistic regression model


The name logistic regression comes from the fact that the dependent variable of the regression is a logistic function. It is one of the most widely used models for problems where the response is a binary variable (for example, fraud or not fraud, click or no click, and so on).

A logistic function is defined by the following equation:

$$f(y) = \frac{1}{1 + e^{-y}}$$

It has the particular feature that, as $y$ varies from $-\infty$ to $\infty$, the function value varies from 0 to 1. Hence, the logistic function is ideal for modeling any binary response as the input signal is varied.

The inverse of the logistic function is called the logit. It is defined as follows:

$$\operatorname{logit}(p) = \ln\left(\frac{p}{1 - p}\right)$$
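A quick numerical check in base R (a throwaway sketch; logistic and logit are just local definitions, not library functions) confirms that the two functions are inverses of each other:

```r
logistic <- function(y) 1 / (1 + exp(-y))   # maps (-Inf, Inf) to (0, 1)
logit    <- function(p) log(p / (1 - p))    # maps (0, 1) back to (-Inf, Inf)

y <- seq(-5, 5, by = 1)
all.equal(logit(logistic(y)), y)            # TRUE, up to floating point
```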

In logistic regression, $y$ is treated as a linear function of the explanatory variables $X$. Therefore, the logistic regression model can be defined as follows:

$$P(c = 1 | X) = \frac{1}{1 + e^{-\beta^{T} \phi(X)}}$$

Here, $\phi(X)$ is the set of basis functions and $\beta$ are the model parameters, as explained in the case of linear regression in Chapter 4, Machine Learning Using Bayesian Inference. From the definition of GLM in Chapter...
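As a rough illustration of fitting such a model in R, the sketch below first fits an ordinary maximum-likelihood logistic regression with glm() and then, assuming the MCMCpack package is available, draws posterior samples of the coefficients with MCMClogit() (prior settings are left at their defaults). Note that the exercise at the end of this chapter instead uses the BayesLogit package, whose interface differs:

```r
# Simulated binary-response data (purely illustrative)
set.seed(42)
n <- 200
x <- rnorm(n)
p <- 1 / (1 + exp(-(0.5 + 1.5 * x)))   # true logistic model
y <- rbinom(n, size = 1, prob = p)
d <- data.frame(x = x, y = y)

# Maximum-likelihood fit, for comparison
ml_fit <- glm(y ~ x, data = d, family = binomial)
coef(ml_fit)

# Bayesian fit via MCMC (install.packages("MCMCpack") if needed)
library(MCMCpack)
post <- MCMClogit(y ~ x, data = d, burnin = 1000, mcmc = 10000)
summary(post)   # posterior means and credible intervals for the coefficients
```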

Exercises


  1. In this exercise, we will use the DBWorld e-mails dataset from the UCI Machine Learning repository to compare the relative performance of the Naïve Bayes and BayesLogit methods. The dataset contains 64 e-mails from the DBWorld newsletter, and the task is to classify each e-mail as either an announcement of a conference or everything else. The reference for this dataset is a machine learning course project by Michele Filannino (reference 5 in the References section of this chapter). The dataset can be downloaded from the UCI website at https://archive.ics.uci.edu/ml/datasets/DBWorld+e-mails#.

    Some preprocessing of the dataset is required before it can be used with both methods. The dataset is in the ARFF format. You need to install the foreign R package (http://cran.r-project.org/web/packages/foreign/index.html) and use its read.arff() method to read the file into an R data frame, as in the sketch below.
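    A minimal sketch, assuming the ARFF file has been downloaded and extracted locally as "dbworld_bodies.arff" (the exact file name inside the UCI archive may differ):

```r
# install.packages("foreign")  # if not already installed
library(foreign)

dbworld <- read.arff("dbworld_bodies.arff")  # returns an R data frame

# Assuming the last column holds the class label and the remaining
# columns are a bag-of-words representation of each e-mail:
X <- dbworld[, -ncol(dbworld)]
y <- as.factor(dbworld[, ncol(dbworld)])
table(y)  # class distribution across the 64 e-mails
```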

References


  1. Almeida T.A., Gómez Hidalgo J.M., and Yamakami A. "Contributions to the Study of SMS Spam Filtering: New Collection and Results". In: 2011 ACM Symposium on Document Engineering (DOCENG'11). Mountain View, CA, USA. 2011

  2. MacKay D.J.C. "The Evidence Framework Applied to Classification Networks". Neural Computation. 4(5). 1992

  3. "Bayesian Inference for Logistic Models Using Pólya-Gamma Latent Variables". Journal of the American Statistical Association. Volume 108, Issue 504, Page 1339. 2013

  4. Costello D.A.E., Little M.A., McSharry P.E., Moroz I.M., and Roberts S.J. "Exploiting Nonlinear Recurrence and Fractal Scaling Properties for Voice Disorder Detection". BioMedical Engineering OnLine. 2007

  5. Filannino M. "DBWorld e-mail Classification Using a Very Small Corpus". Project of Machine Learning Course. University of Manchester. 2011

Summary


In this chapter, we discussed the various merits of using Bayesian inference for the classification task. We reviewed some common performance metrics for classifiers and learned two basic and popular classification methods, Naïve Bayes and logistic regression, both implemented using the Bayesian approach. Having covered these important Bayesian supervised machine learning techniques, in the next chapter we will discuss some unsupervised Bayesian models.
