From Data Science Projects with Python, Second Edition, by Stephen Klosterman (Packt, 2021).
6. Gradient Boosting, XGBoost, and SHAP Values

Overview

After reading this chapter, you will be able to describe the concept of gradient boosting, the fundamental idea underlying the XGBoost package. You will then train XGBoost models on synthetic data, learning about early stopping as well as several XGBoost hyperparameters along the way. In addition to growing trees in a similar way to what we've done previously (by setting max_depth), you'll also discover a new way of growing trees that is offered by XGBoost: loss-guided tree growing. After learning about XGBoost, you'll be introduced to a new and powerful way of explaining model predictions, called SHAP (SHapley Additive exPlanations). You will see how SHAP values can be used to provide individualized explanations for model predictions from any dataset, not just the training data, and you'll also come to understand the additive property of SHAP values.

Introduction

As we saw in the previous chapter, decision trees and ensemble models based on them provide powerful methods for creating machine learning models. While random forests have been around for decades, recent work on a different kind of tree ensemble, gradient boosted trees, has resulted in state-of-the-art models that have come to dominate the landscape of predictive modeling with tabular data, or data that is organized into a structured table, similar to the case study data. The two main packages used by machine learning data scientists today to create the most accurate predictive models with tabular data are XGBoost and LightGBM. In this chapter, we'll become familiar with XGBoost using a synthetic dataset, and then apply it to the case study data in the activity.

Note

Perhaps some of the best motivation for using XGBoost comes from the paper describing this machine learning system, in the context of Kaggle, a popular online forum for machine learning competitions...

Gradient Boosting and XGBoost

What Is Boosting?

Boosting is a procedure for creating ensembles of many machine learning models, or estimators, similar to the bagging concept that underlies the random forest model. Like bagging, boosting can be used with any kind of machine learning model, though it is most commonly used to build ensembles of decision trees. A key difference from bagging is that in boosting, each new estimator added to the ensemble depends on all the estimators added before it. Because the boosting procedure proceeds in sequential stages, and the predictions of the ensemble members are added up to calculate the overall ensemble prediction, it is also called stagewise additive modeling. The difference between bagging and boosting can be visualized as in Figure 6.1:

Figure 6.1: Bagging versus boosting

While bagging trains many estimators using different random samples of the training data, boosting trains new estimators using information about which...
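To make the idea of stagewise additive modeling concrete, here is a minimal sketch of a boosting loop for a regression problem: each new tree is fit to the residuals of the ensemble built so far, and its shrunken predictions are added to the running total. The synthetic data, learning rate, and number of rounds are illustrative choices, not values from the chapter.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(seed=1)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=500)

learning_rate = 0.1
n_rounds = 50
prediction = np.zeros_like(y)  # the ensemble's running prediction, starting at 0
trees = []

for _ in range(n_rounds):
    residuals = y - prediction                     # what the ensemble still gets wrong
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    prediction += learning_rate * tree.predict(X)  # add the new tree's contribution
    trees.append(tree)

print(f"Training MSE after {n_rounds} rounds: {np.mean((y - prediction) ** 2):.4f}")

Notice that each round depends on the current ensemble's predictions, which is exactly what distinguishes boosting from bagging, where every tree is trained independently on its own bootstrap sample.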

XGBoost Hyperparameters

Early Stopping

When training ensembles of decision trees with XGBoost, there are many options available for reducing overfitting and leveraging the bias-variance trade-off. Early stopping is one of the simplest and can help provide an automated answer to the question "How many boosting rounds are needed?". It's important to note that early stopping relies on having a validation set of data that is separate from the training set. However, because this validation set is used during the model training process, it does not qualify as "unseen" data that was held out from model training; this is similar to how we used validation sets in cross-validation to select model hyperparameters in Chapter 4, The Bias-Variance Trade-Off.

When XGBoost is training successive decision trees to reduce error on the training set, it's possible that adding more and more trees to the ensemble will provide increasingly better fits to the training...
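As a concrete illustration, the following sketch uses XGBoost's scikit-learn interface with early stopping on synthetic data. The dataset, hyperparameter values, and metric are illustrative choices; also note that, depending on your XGBoost version, early_stopping_rounds is set on the estimator (as here) or passed to fit() instead.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = XGBClassifier(
    n_estimators=1000,          # an intentionally large number of boosting rounds
    learning_rate=0.1,
    max_depth=3,
    early_stopping_rounds=10,   # stop if the validation metric hasn't improved in 10 rounds
    eval_metric='logloss')

# The validation set is monitored during training to decide when to stop
model.fit(X_train, y_train, eval_set=[(X_val, y_val)], verbose=False)
print('Best number of boosting rounds:', model.best_iteration + 1)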

Another Way of Growing Trees: XGBoost's grow_policy

In addition to limiting the maximum depth of trees using a max_depth hyperparameter, there is another paradigm for controlling tree growth: finding the node where a split would result in the greatest reduction in the loss function, and splitting this node, regardless of how deep it will make the tree. This may result in a tree with one or two very deep branches, while the other branches may not have grown very far. XGBoost offers a hyperparameter called grow_policy, and setting this to lossguide results in this kind of tree growth, while the depthwise option is the default and grows trees to an indicated max_depth, as we've done in Chapter 5, Decision Trees and Random Forests, and so far in this chapter. The lossguide grow policy is a newer option in XGBoost and mimics the behavior of LightGBM, another popular gradient boosting package.

To use the lossguide policy, it is necessary to set another hyperparameter we haven...
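Here is a hedged sketch of what a loss-guided configuration might look like; tree_method, max_leaves, and the specific values shown are illustrative choices rather than the chapter's exact code. With lossguide, tree size is typically controlled by capping the number of leaves (max_leaves) rather than the depth.

from xgboost import XGBClassifier

lossguide_model = XGBClassifier(
    n_estimators=100,
    tree_method='hist',        # histogram-based training, which supports loss-guided growth
    grow_policy='lossguide',   # always split the node with the largest loss reduction
    max_leaves=16,             # cap the number of leaves instead of the depth
    max_depth=0,               # 0 removes the depth limit, so tree shape is driven by the loss
    learning_rate=0.1)

The model can then be fit and evaluated just like any other XGBoost model.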

Explaining Model Predictions with SHAP Values

Along with cutting-edge modeling techniques such as XGBoost, the practice of explaining model predictions has undergone substantial development in recent years. So far, we've learned that logistic regression coefficients, or feature importances from random forests, can provide insight into the reasons for model predictions. A more powerful technique for explaining model predictions was described in a 2017 paper, A Unified Approach to Interpreting Model Predictions, by Scott Lundberg and Su-In Lee (https://arxiv.org/abs/1705.07874). This technique is known as SHAP (SHapley Additive exPlanations) as it is based on earlier work by the mathematician Lloyd Shapley. Shapley developed an area of game theory to understand how coalitions of players can contribute to the overall outcome of a game. Recent machine learning research into model explanation has leveraged this concept to consider how groups or coalitions of features in a predictive model...
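The following sketch shows what computing SHAP values for an XGBoost model can look like with the shap package, including a check of the additive property. The data and model here are illustrative, and the exact shapes and defaults returned by shap can vary between versions.

import numpy as np
import shap
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = XGBClassifier(n_estimators=100, max_depth=3, learning_rate=0.1).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # one value per sample per feature

# Additive property: the SHAP values for a sample, plus the expected value,
# sum to the model's raw (log-odds) prediction for that sample
raw_margin = model.predict(X, output_margin=True)
reconstructed = shap_values.sum(axis=1) + explainer.expected_value
print(np.allclose(raw_margin, reconstructed, atol=1e-4))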

Missing Data

As a final note on the use of both XGBoost and SHAP, one valuable trait of both packages is their ability to handle missing values. Recall that in Chapter 1, Data Exploration and Cleaning, we found that some samples in the case study data had missing values for the PAY_1 feature. So far, our approach has been to simply remove these samples from the dataset when building models. This is because, without specifically addressing the missing values in some way, the machine learning models implemented by scikit-learn cannot work with the data. Discarding samples with missing values is one approach, although it may not be satisfactory, as it involves throwing data away. If only a very small fraction of the data is affected, this may be fine; in general, though, it's good to know how to deal with missing values.

There are several approaches for imputing missing values of features, such as filling them in with the mean or mode of the non-missing values of that feature, or a randomly selected...
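As an illustration of both options, the sketch below blanks out some values in a synthetic feature and then either passes the NaNs straight to XGBoost, which learns a default direction for missing values at each split, or imputes them first with scikit-learn's SimpleImputer. The data here are illustrative, not the case study's PAY_1 feature.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X[::10, 0] = np.nan   # artificially blank out every tenth value of the first feature

# Option 1: XGBoost can consume NaN directly; at each split it learns a
# default branch to send samples with missing values down
xgb_model = XGBClassifier(n_estimators=50).fit(X, y)

# Option 2: impute missing values first (here, with the feature mean), which is
# what a scikit-learn model that cannot accept NaN would require
X_imputed = SimpleImputer(strategy='mean').fit_transform(X)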

Summary

In this chapter, we've learned some of the most cutting-edge techniques for building machine learning models with tabular data. While other types of data, such as image or text data, warrant exploration with different types of models such as neural networks, many standard business applications leverage tabular data. XGBoost and SHAP are some of the most advanced and popular tools you can use to build and understand models with this kind of data. Having gained familiarity and practical experience using these tools with synthetic data, in the following activity, we return to the dataset for the case study and see how we can use XGBoost to model it, including the samples with missing feature values, and use SHAP values to understand the model.
