Chapter 11: Classification Analysis

When we speak about the field of machine learning, and specifically about types of machine learning algorithms, we tend to invoke a taxonomy of three classes of algorithms: supervised learning, unsupervised learning, and reinforcement learning. The third falls outside the scope of both this book and the features currently available in the Elastic Stack, while the second has been our topic of investigation throughout the chapters on anomaly detection, as well as the previous chapter on outlier detection. In this chapter, we will finally start dipping our toes into the world of supervised learning. The Elastic Stack provides two flavors of supervised learning: classification and regression. This chapter is dedicated to understanding the former, while the subsequent chapter tackles the latter.

The goal of supervised learning is to take a labeled dataset and extract the patterns from it, encode the knowledge obtained from...

Technical requirements

The material in this chapter requires Elasticsearch 7.9 or later. The examples were tested using Elasticsearch 7.10.1, but should work on any subsequent version. Please note that running the examples in this chapter requires a Platinum license. Where a particular example or section requires a later version of Elasticsearch, this is noted in the text.
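You can verify your cluster's license level from the Dev Tools console in Kibana. As a minimal sketch using the standard licensing APIs (the trial activation is only needed if you do not already have a Platinum or Enterprise license):

    GET _license

    POST _license/start_trial?acknowledge=true

The first request reports the current license type; the second activates a 30-day trial that includes the machine learning features used in this chapter.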

Classification: from data to a trained model

Training a classification model from a source dataset is a multi-step process. In this section, we will take a bird's-eye view (depicted in Figure 11.1) of this whole process, which begins with a labeled dataset (Figure 11.1, part A).

Figure 11.1 – An overview of the supervised learning process that takes a labeled dataset and outputs a trained model

This labeled dataset is usually split into two parts: a training portion, which is fed into the training algorithm (Figure 11.1, part B), and a testing portion, which is set aside. The output of the training algorithm is a trained model (Figure 11.1, part C). The trained model is then used to classify the testing dataset (Figure 11.1, part D). The model's performance on the testing dataset is captured in a set of evaluation metrics that can be used to determine whether the model generalizes well enough to previously...
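To make this split concrete, here is a minimal sketch of how it is expressed when creating a classification job through the data frame analytics API. The job ID, index names, and dependent variable below are illustrative placeholders, not values from the book's example:

    PUT _ml/data_frame/analytics/my-classification-job
    {
      "source": { "index": "my-source-index" },
      "dest": { "index": "my-classification-results" },
      "analysis": {
        "classification": {
          "dependent_variable": "label",
          "training_percent": 80
        }
      }
    }

Here, training_percent controls the split: 80 percent of the eligible documents are used for training, and the remaining 20 percent are held out as the testing dataset.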

Taking your first steps with classification

In this section, we will create a sample classification job using the public Wisconsin Breast Cancer dataset. The original dataset is available at https://archive.ics.uci.edu/ml/datasets/breast+cancer+wisconsin+(original). For this exercise, we will use a slightly sanitized version of the dataset, which removes the need for data cleaning (an important step in the lifecycle of a machine learning project, but not one we have space to discuss in this book) and allows us to focus on the basics of creating a classification job:

  1. Download the sanitized dataset file breast-cancer-wisconsin-outlier.csv from the Chapter 11 - Classification Analysis folder in the book's GitHub repository (https://github.com/PacktPublishing/Machine-Learning-with-Elastic-Stack-Second-Edition/tree/main/Chapter%2011%20-%20Classification%20Analysis) and store it locally on your machine. In your Kibana instance, navigate to the Machine...
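Although this walkthrough uses the Kibana interface, a data frame analytics job can also be started and monitored from the Dev Tools console once it has been created. As a sketch, assuming a job ID of my-classification-job (a placeholder, not the name used in the wizard):

    POST _ml/data_frame/analytics/my-classification-job/_start

    GET _ml/data_frame/analytics/my-classification-job/_stats

The _start call kicks off the analysis, and the _stats call reports the job's progress through each phase of the analysis.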

Classification under the hood: gradient boosted decision trees

The ultimate goal of a classification task is to take previously unseen data points and infer which of several possible classes they belong to. We achieve this by taking a labeled training dataset that contains a representative number of data points, extracting relevant features that allow us to learn a decision boundary, and then encoding the knowledge about this decision boundary in a classification model. This model then decides which class a given data point belongs to. How does the model learn to do this? That is the question we will try to answer in this section.

In keeping with our habit throughout the book, let's start by exploring conceptually what tools humans use to navigate a set of complicated decisions. A familiar tool that many of us have used to make decisions involving several, possibly complex, factors is...

Hyperparameters

In the previous section, we took a conceptual overview of how decision trees are constructed. In particular, we established that one of the criteria for determining where a decision tree should be split (in other words, when a new path should be added to our conceptual flowchart) is the purity of the resulting nodes. We also noted that allowing the algorithm to focus exclusively on node purity as a criterion for constructing the decision tree would quickly lead to trees that overfit the training data. Such decision trees are so tuned to the training data that they not only capture the most salient features for classifying a given data point but even model the noise in the data as though it were a real signal. Therefore, while a decision tree that is allowed to optimize for specific metrics without restriction will perform very well on the training data, it will neither perform well on the testing dataset nor generalize...
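This is where hyperparameters come in. Elastic's classification analysis exposes a number of them to constrain how trees are grown; hyperparameters left unset are tuned automatically. The following sketch shows a few set explicitly in a job configuration (the values, job ID, index names, and dependent variable are all illustrative):

    PUT _ml/data_frame/analytics/my-tuned-classification-job
    {
      "source": { "index": "my-source-index" },
      "dest": { "index": "my-tuned-results" },
      "analysis": {
        "classification": {
          "dependent_variable": "label",
          "training_percent": 80,
          "max_trees": 50,
          "eta": 0.05,
          "gamma": 1.0,
          "lambda": 1.0,
          "feature_bag_fraction": 0.8
        }
      }
    }

Roughly speaking, max_trees caps the size of the ensemble, eta is the shrinkage (learning rate) applied to each tree's contribution, gamma and lambda are regularization terms penalizing tree size and leaf weights, respectively, and feature_bag_fraction controls what fraction of the features is considered for each candidate split.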

Interpreting results

In the last section, we looked at the theoretical underpinnings of decision trees and took a conceptual tour of how they are constructed. In this section, we will return to the classification example we examined earlier in the chapter and take a closer look at the format of the results, as well as how to interpret them.

Earlier in the chapter, we created a trained model to predict whether a given breast tissue sample was malignant or benign (as a reminder, in this dataset benign is denoted by class 2 and malignant by class 4). A snippet of the classification results for this model is shown in Figure 11.17.

Figure 11.17 – Classification results for a sample data point in the Wisconsin breast cancer dataset

With this trained model, we can take previously unseen data points and make predictions. What form do these predictions take? In the simplest form, a data point is assigned a class label (the field ml.Class_prediction in...
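As a rough sketch of what a single result document can look like (the values here are invented for illustration, and the exact fields present depend on the job configuration, such as num_top_classes):

    {
      "ml": {
        "Class_prediction": 2,
        "prediction_probability": 0.98,
        "is_training": false,
        "top_classes": [
          { "class_name": 2, "class_probability": 0.98, "class_score": 0.43 },
          { "class_name": 4, "class_probability": 0.02, "class_score": 0.02 }
        ]
      }
    }

Alongside the predicted label, prediction_probability expresses the model's confidence in the prediction, is_training records whether the document belonged to the training split, and top_classes lists the candidate classes with their probabilities and scores.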

Summary

In this chapter, we have taken a deep dive into supervised learning. We have examined what supervised learning means, the role training data plays in constructing the model, what it means to train a supervised learning model, what features are and how they should be engineered for optimal performance, and how a model is evaluated and what the various model performance measures mean.

After learning about the basics of supervised learning in general, we took a closer look at classification and examined how to create and run classification jobs in the Elastic Stack, as well as how to evaluate the trained models these jobs produce. In addition to covering basic concepts such as confusion matrices, we also examined situations where it is good to be skeptical of results that seem too good to be true, the potential underlying reasons why classification results can sometimes appear perfect, and why this does not necessarily mean...
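For reference, the confusion matrix discussed in this chapter can also be computed programmatically with the evaluate data frame API. This is a minimal sketch against a hypothetical results index, assuming the ground-truth field is Class and the prediction was written to ml.Class_prediction:

    POST _ml/data_frame/_evaluate
    {
      "index": "my-classification-results",
      "evaluation": {
        "classification": {
          "actual_field": "Class",
          "predicted_field": "ml.Class_prediction",
          "metrics": {
            "multiclass_confusion_matrix": {}
          }
        }
      }
    }

The response counts, for each actual class, how many documents were predicted as each class, which is the same confusion matrix view shown in Kibana.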

Further reading

For more information about how the class score is calculated, please take a look at the text and code examples in this Jupyter notebook: https://github.com/elastic/examples/blob/master/Machine%20Learning/Class%20Assigment%20Objectives/classification-class-assignment-objective.ipynb.
