You're reading from Applied Supervised Learning with Python

Product type: Book
Published in: Apr 2019
Reading level: Intermediate
ISBN-13: 9781789954920
Edition: 1st Edition

Authors (2): Benjamin Johnston, Ishita Mathur

Benjamin Johnston

Benjamin Johnston is a senior data scientist for one of the world's leading data-driven MedTech companies and is involved in the development of innovative digital solutions throughout the entire product development pathway, from problem definition to solution research and development, through to final deployment. He is currently completing his Ph.D. in machine learning, specializing in image processing and deep convolutional neural networks. He has more than 10 years of experience in medical device design and development, working in a variety of technical roles, and holds first-class honors bachelor's degrees in both engineering and medical science from the University of Sydney, Australia.

Ishita Mathur

Ishita Mathur has worked as a data scientist for 2.5 years with product-based start-ups working with business concerns in various domains and formulating them as technical problems that can be solved using data and machine learning. Her current work at GO-JEK involves the end-to-end development of machine learning projects, by working as part of a product team on defining, prototyping, and implementing data science models within the product. She completed her masters' degree in high-performance computing with data science at the University of Edinburgh, UK, and her bachelor's degree with honors in physics at St. Stephen's College, Delhi.

Chapter 4. Classification

Learning Objectives

By the end of this chapter, you will be able to:

  • Implement logistic regression and explain how it can be used to classify data into specific groups or classes

  • Use the K-nearest neighbors (K-NN) algorithm for classification

  • Use decision trees for data classification, including the ID3 algorithm

  • Describe the concept of entropy within data

  • Explain how decision trees such as ID3 aim to reduce entropy

Note

This chapter introduces classification problems, classification using linear and logistic regression, K-nearest neighbors classification, and decision trees.

Introduction


In the previous chapter, we began our supervised machine learning journey using regression techniques, predicting a continuous variable output given a set of input data. We will now turn to the other sub-type of machine learning problem that we previously described: classification problems. Recall that classification tasks aim to predict, given a set of input data, which one of a specified number of groups or classes the data belongs to.

In this chapter, we will extend the concepts learned in Chapter 3, Regression Analysis, and will apply them to a dataset labeled with classes, rather than continuous values, as output.

Linear Regression as a Classifier


We covered linear regression in the context of predicting continuous variable output in the previous chapter, but it can also be used to predict the class that a set of data is a member of. Linear regression classifiers are not as powerful as other types of classifiers that we will cover in this chapter, but they are particularly useful in understanding the process of classification. Let's say we had a fictional dataset containing two separate groups, Xs and Os, as shown in Figure 4.1. We could construct a linear classifier by first using linear regression to fit the equation of a straight line to the dataset. For any value that lies above the line, the X class would be predicted, and for any value beneath the line, the O class would be predicted. Any dataset that can be separated by a straight line is known as linearly separable, which forms an important subset of data types in machine learning problems. While this may not be particularly helpful in the...
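The above-versus-below-the-line rule can be sketched in a few lines of code. Note that the data here is synthetic and only illustrative of the Xs-and-Os scenario in Figure 4.1, not the book's exact dataset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical linearly separable data: the first 20 points ("X" class)
# sit above the line y = x, the last 20 ("O" class) sit below it.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 40)
offset = rng.uniform(1, 3, 40)
y = np.where(np.arange(40) < 20, x + offset, x - offset)
labels = np.array(["X"] * 20 + ["O"] * 20)

# Fit a straight line to the dataset as a whole.
model = LinearRegression().fit(x.reshape(-1, 1), y)

# Classify: any point above the fitted line is predicted "X",
# any point beneath it is predicted "O".
predicted = np.where(y > model.predict(x.reshape(-1, 1)), "X", "O")
accuracy = (predicted == labels).mean()
print(f"Training accuracy: {accuracy:.2f}")
```

Because the two classes are constructed to be linearly separable, the fitted line recovers the boundary between them almost perfectly.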

Logistic Regression


The logistic or logit model is one such non-linear model that has been effectively used for classification tasks in a number of different domains. In this section, we will use it to classify images of hand-written digits. In understanding the logistic model, we also take an important step in understanding the operation of a particularly powerful machine learning model, artificial neural networks. So, what exactly is the logistic model? Like the linear model, which is composed of a linear or straight-line function, the logistic model is composed of the standard logistic function, which, in mathematical terms, looks something like this:

Figure 4.8: Logistic function

In practical terms, when trained, this function returns the probability of the input information belonging to a particular class or group.

Say we would like to predict whether a single entry of data belongs to one of two groups. As in the previous example, in linear regression, this would equate to y being either...
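A minimal sketch of both ideas follows: the standard logistic (sigmoid) function shown in Figure 4.8, and a logistic regression classifier applied to hand-written digits. The train/test split, `random_state`, and `max_iter` values here are illustrative choices, not the book's exact configuration:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# The standard logistic function: squashes any real input into (0, 1),
# which can then be interpreted as a class probability.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))  # 0.5: equal probability of either class

# Classify images of hand-written digits, as in this section.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=42)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.2f}")
```

When trained, the model outputs, for each input image, the probability of that image belonging to each digit class, and the most probable class is taken as the prediction.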

Classification Using K-Nearest Neighbors


Now that we are comfortable with creating multiclass classifiers using logistic regression and are getting reasonable performance with these models, we will turn our attention to another type of classifier: the K-nearest neighbors (K-NN) method of classification. This is a handy method, as the nearest-neighbors idea can be applied in supervised classification problems as well as in unsupervised problems.

Figure 4.32: Visual representation of K-NN

The solid circle approximately in the center is the test point requiring classification, while the inner circle shows the classification process where K=3 and the outer circle where K=5.

K-NN is one of the simplest "learning" algorithms available for data classification. The quotation marks around learning are deliberate, as K-NN doesn't really learn from the data and encode those learnings in parameters or weights, as other methods such as logistic regression do. K-NN uses instance-based or lazy learning in that it simply...
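The effect of the K=3 versus K=5 neighborhoods in Figure 4.32 can be reproduced with scikit-learn's K-NN implementation. The dataset and split parameters below are illustrative choices, not the book's exact setup:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# K-NN stores the training data and, at prediction time, assigns the
# majority class among the K closest training points to the test point.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for k in (3, 5):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print(f"K={k}: test accuracy {knn.score(X_test, y_test):.2f}")
```

Note that `fit` here does little more than store the training set; the real work happens at prediction time, when distances to every stored point must be computed.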

Classification Using Decision Trees


The final classification method that we will be examining in this chapter is decision trees, which have found particular use in applications such as natural language processing. There are a number of different machine learning algorithms that fall within the overall umbrella of decision trees, such as ID3, CART, and the powerful random forest classifiers (covered in Chapter 5, Ensemble Modeling). In this chapter, we will investigate the use of the ID3 method in classifying categorical data, and we will use the scikit-learn CART implementation as another means of classifying the Iris dataset. So, what exactly are decision trees?

As the name suggests, decision trees are a learning algorithm that applies a sequential series of decisions to the input information to arrive at the final classification. Recalling your childhood biology classes, you may have used a process similar to decision trees to classify different types of animals via dichotomous keys...
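The two building blocks just introduced can be sketched together: the entropy measure that ID3 seeks to reduce at each split, and scikit-learn's CART implementation applied to the Iris dataset as in this section. Using `criterion="entropy"` makes the CART splits ID3-like; the specific parameters are illustrative choices:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

# Shannon entropy of a set of class labels: ID3 chooses the split that
# most reduces this quantity (that is, maximizes information gain).
def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    if len(counts) == 1:
        return 0.0  # a pure set has zero entropy
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

print(entropy([0, 0, 1, 1]))  # a maximally mixed two-class set: 1 bit
print(entropy([0, 0, 0, 0]))  # a pure set: 0 bits

# CART on the Iris dataset, with entropy as the split criterion.
X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
print(f"Training accuracy: {tree.score(X, y):.2f}")
```

Left unpruned, the tree keeps splitting until every leaf is pure (zero entropy), which is why it fits the training data so closely.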

Summary


We covered a number of powerful and extremely useful classification models in this chapter. We started with the use of linear regression as a classifier, and then observed a significant performance increase through the use of the logistic regression classifier. We then moved on to memorizing models such as K-NN which, while simple to fit, are able to form complex non-linear boundaries in the classification process, even with images as the input information to the model. We finished our introduction to classification problems by looking at decision trees and the ID3 algorithm. We saw how decision trees, like K-NN models, memorize the training data, using rules and decision gates to make predictions with quite a high degree of accuracy.

In the next chapter, we will be extending what we have learned in this chapter. It will cover ensemble techniques, including boosting and the very effective random forest method.

