Tree Algorithms and Ensembles

In this chapter we will cover the following recipes:

  • Doing basic classifications with decision trees
  • Visualizing a decision tree with pydot
  • Tuning a decision tree
  • Using decision trees for regression
  • Reducing overfitting with cross-validation
  • Implementing random forest regression
  • Bagging regression with nearest neighbors
  • Tuning gradient boosting trees
  • Tuning an AdaBoost regressor
  • Writing a stacking aggregator with scikit-learn

Introduction

In this chapter, we focus on decision trees and ensemble algorithms. Decision trees are easy to interpret and visualize because they are outlines of the decision-making process we are familiar with. Ensembles can be partially interpreted and visualized, but because they have many parts (base estimators), we cannot always read them easily.

The goal of ensemble learning is for several estimators to work better together than any single one of them alone. scikit-learn implements two families of ensemble methods: averaging methods and boosting methods. Averaging methods (random forest, bagging, extra trees) reduce variance by averaging the predictions of several independently built estimators. Boosting methods (gradient boosting and AdaBoost) reduce bias by building base estimators sequentially, each new estimator aiming to reduce the bias of the ensemble as a whole.
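Both families expose the same fit/score interface in scikit-learn. The following is a minimal sketch for illustration only; the estimator choices and the use of the iris data here are ours, not a prescribed recipe:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, stratify=iris.target)

# Averaging method: many independently built trees, predictions averaged to reduce variance.
rf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# Boosting method: estimators built sequentially to reduce bias.
ada = AdaBoostClassifier(n_estimators=100).fit(X_train, y_train)

print(rf.score(X_test, y_test), ada.score(X_test, y_test))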

A common characteristic of many ensemble constructions is...

Doing basic classifications with decision trees

Here, we perform basic classification with decision trees. A decision tree for classification is a sequence of decisions that determines a classification, or categorical outcome. Additionally, because a decision tree is just a set of explicit rules, it can be translated into SQL and examined by other people in the same company who work with the data.

Getting ready

Start by loading the iris dataset once again and dividing the data into training and testing sets:

from sklearn.datasets import load_iris

iris = load_iris()

X = iris.data
y = iris.target

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y)
...
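The rest of the recipe is not reproduced here. As a minimal sketch of the likely next steps (assuming a plain DecisionTreeClassifier with default settings; the variable name dtc matches the one used in the next recipe):

from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Fit a decision tree on the training set and measure accuracy on the test set.
dtc = DecisionTreeClassifier()
dtc.fit(X_train, y_train)

y_pred = dtc.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))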

Visualizing a decision tree with pydot

If you would like to produce graphs, install the pydot library. Unfortunately, on Windows this installation can be non-trivial. If you struggle to install pydot, focus on looking at the graphs rather than reproducing them.

How to do it...

  1. Within an IPython Notebook, perform several imports and type the following script:
import numpy as np
from io import StringIO   # sklearn.externals.six has been removed from recent scikit-learn
from sklearn import tree
import pydot
from IPython.display import Image

# dtc is the DecisionTreeClassifier fitted in the previous recipe.
dot_iris = StringIO()
tree.export_graphviz(dtc, out_file=dot_iris, feature_names=iris.feature_names)

# pydot >= 1.2 returns a list of graphs; unpack the single graph.
(graph,) = pydot.graph_from_dot_data(dot_iris.getvalue())
Image(graph.create_png())
...
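If installing pydot proves difficult, a possible workaround (our suggestion, not part of the recipe) is scikit-learn's built-in plot_tree, available from version 0.21 onward, which only needs matplotlib:

import matplotlib.pyplot as plt
from sklearn import tree

# Render the same fitted tree without pydot or Graphviz.
plt.figure(figsize=(12, 8))
tree.plot_tree(dtc, feature_names=iris.feature_names, filled=True)
plt.show()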

Tuning a decision tree

We will continue to explore the iris dataset further by focusing on the first two features (sepal length and sepal width), optimizing the decision tree, and creating some visualizations.

Getting ready

  1. Load the iris dataset, focusing on the first two features. Additionally, split the data into training and testing sets:
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data[:,:2]
y = iris.target

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y)
  2. View the data with pandas:
import pandas as pd
pd.DataFrame(X,columns=iris.feature_names[:2])
  3. Before optimizing the decision tree, let's try a single decision...
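The rest of the recipe is not shown. As a rough sketch of how the tuning might proceed (the parameter grid and cross-validation settings below are our assumptions, not the book's exact choices):

from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV

# Baseline: a single unconstrained decision tree.
baseline = DecisionTreeClassifier().fit(X_train, y_train)
print("Baseline test accuracy:", baseline.score(X_test, y_test))

# Tune max_depth with cross-validation on the training set.
param_grid = {'max_depth': [1, 2, 3, 4, 5, None]}
gs = GridSearchCV(DecisionTreeClassifier(), param_grid, cv=5)
gs.fit(X_train, y_train)
print("Best parameters:", gs.best_params_, "Test accuracy:", gs.score(X_test, y_test))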

Using decision trees for regression

Decision trees for regression are very similar to decision trees for classification. The procedure for developing a regression model consists of four parts:

  • Load the dataset
  • Split the set into training/testing subsets
  • Instantiate a decision tree regressor and train it
  • Score the model on the test subset

Getting ready

For this example, load scikit-learn's diabetes dataset:

#Use within a Jupyter notebook
%matplotlib inline

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

from sklearn.datasets import load_diabetes

diabetes = load_diabetes()

X = diabetes.data
y = diabetes.target

X_feature_names = ['age', 'gender', 'body mass index', 'average blood pressure', 'bl_0', 'bl_1', 'bl_2', 'bl_3', 'bl_4', 'bl_5']
...
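With the data loaded, the remaining three steps of the list above (split, train, score) could look like the following minimal sketch; the test_size and max_depth values are illustrative assumptions:

from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Split the set into training/testing subsets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Instantiate a decision tree regressor and train it.
dtr = DecisionTreeRegressor(max_depth=3)
dtr.fit(X_train, y_train)

# Score the model on the test subset (score returns R^2 for regressors).
print("Test R^2:", dtr.score(X_test, y_test))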

Reducing overfitting with cross-validation

Here, we will use cross-validation on the diabetes dataset from the previous recipe to improve performance. Start by loading the dataset, as in the previous recipe:

%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

from sklearn.datasets import load_diabetes

diabetes = load_diabetes()

X = diabetes.data
y = diabetes.target

X_feature_names = ['age', 'gender', 'body mass index', 'average blood pressure','bl_0','bl_1','bl_2','bl_3','bl_4','bl_5']

bins = 50*np.arange(8)
binned_y = np.digitize(y, bins)

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=binned_y)
...
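The rest of the recipe is not shown. As a hedged sketch of the idea, cross-validation on the training set can be used to choose a tree depth that does not overfit (the candidate depths below are our assumption):

from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

# Compare mean cross-validated R^2 across candidate tree depths.
for depth in [1, 2, 3, 5, 7, None]:
    scores = cross_val_score(DecisionTreeRegressor(max_depth=depth),
                             X_train, y_train, cv=5)
    print("max_depth =", depth, "mean CV R^2 =", round(scores.mean(), 3))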

Implementing random forest regression

Random forest is an ensemble algorithm. Ensemble algorithms use several algorithms together to improve predictions. Scikit-learn has several ensemble algorithms, most of which use trees to predict. Let's start by expanding decision tree regression to several decision trees working together in a random forest.

A random forest is a collection of decision trees, each of which contributes one vote toward the final prediction. For regression, the forest produces its output by averaging the predictions of all the trees it is composed of.

Getting ready

Load the diabetes regression dataset as we did with decision trees. Split all of the data into training and testing sets:

...
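Assuming the split produced X_train, X_test, y_train, and y_test as in the previous recipes, a minimal sketch of the forest itself (the n_estimators value is an illustrative assumption) is:

from sklearn.ensemble import RandomForestRegressor

# Each tree is trained on a bootstrap sample; the forest averages the per-tree predictions.
rfr = RandomForestRegressor(n_estimators=100)
rfr.fit(X_train, y_train)
print("Test R^2:", rfr.score(X_test, y_test))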

Bagging regression with nearest neighbors

Bagging is an additional ensemble type that, interestingly, does not necessarily involve trees. It builds several instances of a base estimator, each trained on a random subset of the original training set. In this section, we try k-nearest neighbors (KNN) as the base estimator.

Pragmatically, bagging estimators are great for reducing the variance of a complex base estimator, for example, a decision tree with many levels. On the other hand, boosting reduces the bias of weak models, such as decision trees of very few levels, or linear models.

To try out bagging, we will perform a hyperparameter search for the best parameters using scikit-learn's randomized grid search. As we have done previously, we will go through the following process (a minimal sketch follows the steps):

  1. Figure out which parameters to optimize in the algorithm (these are the parameters researchers view as the best...
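The remaining steps are not shown. As a minimal sketch of the idea (the dataset split names and the parameter ranges below are our assumptions, not the book's exact choices):

from sklearn.ensemble import BaggingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import RandomizedSearchCV

# Bag several KNN regressors, each trained on a random subset of the training data.
param_dist = {
    'n_estimators': [50, 100],
    'max_samples': [0.5, 1.0],
    'max_features': [0.5, 1.0]
}

bag = BaggingRegressor(KNeighborsRegressor(n_neighbors=5))
search = RandomizedSearchCV(bag, param_distributions=param_dist, n_iter=5, cv=3)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))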

Tuning gradient boosting trees

We will examine the California housing dataset with gradient boosting trees. Our overall approach will be the same as before:

  1. Focus on important parameters in the gradient boosting algorithm:
    • max_features
    • max_depth
    • min_samples_leaf
    • learning_rate
    • loss
  2. Create a parameter distribution where the most important parameters are varied.
  3. Perform a random grid search. If using an ensemble, keep the number of estimators low at first.
  4. Use the best parameters from the previous step with many estimators.

Getting ready

Load the California housing dataset and split the loaded dataset into training and testing sets:

%matplotlib inline 

from __future__ import division #Load within Python 2.7 for regular division...
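The rest of the setup is cut off above. As a hedged sketch of the whole procedure (loading the data via fetch_california_housing and parameter values that are our assumptions; note that the loss names changed in scikit-learn 1.0, where 'squared_error' replaced 'ls'):

from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split, RandomizedSearchCV
from sklearn.ensemble import GradientBoostingRegressor

# Load the California housing data (downloaded on first use) and split it.
cali = fetch_california_housing()
X_train, X_test, y_train, y_test = train_test_split(cali.data, cali.target, test_size=0.2)

# Vary the important parameters listed above; keep n_estimators low while searching.
param_dist = {
    'max_features': ['sqrt', 'log2', 1.0],
    'max_depth': [2, 3, 5],
    'min_samples_leaf': [1, 3, 5],
    'learning_rate': [0.05, 0.1, 0.3],
    'loss': ['squared_error', 'huber']   # use 'ls' instead of 'squared_error' on scikit-learn < 1.0
}
search = RandomizedSearchCV(GradientBoostingRegressor(n_estimators=100),
                            param_distributions=param_dist, n_iter=10, cv=3)
search.fit(X_train, y_train)
print(search.best_params_)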

Tuning an AdaBoost regressor

The important parameters to vary in an AdaBoost regressor are learning_rate and loss. As with the previous algorithms, we will perform a randomized parameter search to find the best score the algorithm can achieve.

How to do it...

  1. Import the algorithm and randomized grid search. Try a randomized parameter distribution:
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import RandomizedSearchCV

param_dist = {
    'n_estimators': [50, 100],
    'learning_rate': [0.01, 0.05, 0.1, 0.3, 1],
    'loss': ['linear', 'square', 'exponential']
}

pre_gs_inst = RandomizedSearchCV(AdaBoostRegressor(),
                                 param_distributions=param_dist,
                                 cv...
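The call above is cut off. A hedged guess at how such a search is typically completed follows; the cv, n_iter, and n_jobs values, and the assumption that X_train and y_train come from an earlier split, are ours, not necessarily the book's:

# Hypothetical completion of the randomized search above.
pre_gs_inst = RandomizedSearchCV(AdaBoostRegressor(),
                                 param_distributions=param_dist,
                                 cv=3,
                                 n_iter=10,
                                 n_jobs=-1)
pre_gs_inst.fit(X_train, y_train)
print(pre_gs_inst.best_params_)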

Writing a stacking aggregator with scikit-learn

In this section, we will write a stacking aggregator with scikit-learn. A stacking aggregator mixes models of potentially very different types. Many of the ensemble algorithms we have seen mix models of the same type, usually decision trees.

The fundamental process in the stacking aggregator is that we use the predictions of several machine learning algorithms as input for the training of another machine learning algorithm.

In more detail, we train two or more machine learning algorithms on a pair of X and y sets (X_1, y_1). Then each of them makes predictions, y_pred_1, y_pred_2, and so on, on a second X set (X_stack).

These predictions, y_pred_1 and y_pred_2, become the inputs to another machine learning algorithm, which is trained against the output y_stack. Finally, the error can be measured on a third input set, X_3, with its ground truth, y_3.

It will be...
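The rest of the recipe is not shown. As a minimal hand-rolled sketch of the process just described (the choice of base models, meta-model, dataset, and split sizes here is ours):

import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True)

# Three disjoint sets: one for the base models, one for the meta-model
# (the "stack"), and one held out for the final error measurement.
X_1, X_rest, y_1, y_rest = train_test_split(X, y, test_size=0.5)
X_stack, X_3, y_stack, y_3 = train_test_split(X_rest, y_rest, test_size=0.5)

# Train two base learners on (X_1, y_1).
base_1 = RandomForestRegressor(n_estimators=100).fit(X_1, y_1)
base_2 = KNeighborsRegressor().fit(X_1, y_1)

# Their predictions on X_stack become the features of the meta-learner.
stack_features = np.column_stack([base_1.predict(X_stack), base_2.predict(X_stack)])
meta = LinearRegression().fit(stack_features, y_stack)

# Measure the error on the third set using the same two-step prediction path.
test_features = np.column_stack([base_1.predict(X_3), base_2.predict(X_3)])
print("Held-out R^2:", meta.score(test_features, y_3))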
