Dimensionality Reduction

In this chapter, we will cover the following recipes:

  • Reducing dimensionality with PCA
  • Using factor analysis for decomposition
  • Using kernel PCA for nonlinear dimensionality reduction
  • Using truncated SVD to reduce dimensionality
  • Using decomposition to classify with DictionaryLearning
  • Doing dimensionality reduction with manifolds – t-SNE
  • Testing methods to reduce dimensionality with pipelines

Introduction

In this chapter, we will reduce the number of features, or inputs, to machine learning models. This is an important operation because datasets often have many input columns, and reducing the number of columns creates simpler models that take less computing power to make predictions.

The main model used in this chapter is principal component analysis (PCA). Thanks to PCA's explained variance, you do not have to know in advance how many features the dataset can be reduced to. A model with similar performance is truncated singular value decomposition (truncated SVD). It is always best to start with a linear model that tells you how many columns you can reduce the set to, such as PCA or truncated SVD.
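As a quick illustration of how PCA's explained variance guides that choice, here is a minimal sketch (not part of the original recipes, and assuming the iris dataset used throughout the chapter) that keeps just enough principal components to retain 95% of the variance:

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

iris_X = load_iris().data               # shape (150, 4)

# Passing a float between 0 and 1 keeps the smallest number of components
# whose cumulative explained variance reaches that fraction.
pca = PCA(n_components=0.95)
iris_reduced = pca.fit_transform(iris_X)

print(iris_reduced.shape)               # (150, 2) -- two components suffice
print(pca.explained_variance_ratio_)    # variance fraction per kept component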

Later in the chapter, check out the modern method of t-distributed stochastic neighbor embedding (t-SNE), which makes features easier to visualize in lower dimensions...

Reducing dimensionality with PCA

Now it's time to take the math up a level! PCA is the first somewhat advanced technique discussed in this book. While everything else thus far has been simple statistics, PCA will combine statistics and linear algebra to produce a preprocessing step that can help to reduce dimensionality, which can be the enemy of a simple model.

Getting ready

PCA is a member of scikit-learn's decomposition module. Several other decomposition methods are available, and they will be covered later in this chapter. Let's use the iris dataset, although it's even better if you use your own data:

from sklearn import datasets
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib...
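As a minimal sketch of how the recipe can proceed (an illustration, not the book's exact steps), fit a two-component PCA on the iris data, check how much variance it retains, and plot the projection:

from sklearn.decomposition import PCA

iris = datasets.load_iris()
iris_X = iris.data

# Project the four iris features onto the first two principal components.
pca = PCA(n_components=2)
iris_pca = pca.fit_transform(iris_X)

print(iris_pca.shape)                  # (150, 2)
print(pca.explained_variance_ratio_)   # roughly [0.92, 0.05]

# Visualize the projection, colored by species.
plt.scatter(iris_pca[:, 0], iris_pca[:, 1], c=iris.target)
plt.xlabel('First principal component')
plt.ylabel('Second principal component')
plt.show()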

Using factor analysis for decomposition

Factor analysis is another technique we can use to reduce dimensionality. However, factor analysis makes assumptions that PCA does not. The basic assumption is that there are implicit, latent features responsible for the observed features of the dataset.

This recipe will boil down the explicit features from our samples in an attempt to understand the independent variables as well as the dependent variables.

Getting ready

To compare PCA and factor analysis, let's use the iris dataset again, but we'll first need to load the FactorAnalysis class:

from sklearn import datasets
iris = datasets.load_iris()
iris_X = iris.data
from sklearn.decomposition import FactorAnalysis
...
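As a hedged sketch of the comparison this recipe builds toward (continuing from the imports above; the book's exact steps may differ), fit a two-factor FactorAnalysis alongside a two-component PCA:

from sklearn.decomposition import PCA

# Two latent factors, mirroring a two-component PCA for comparison.
fa = FactorAnalysis(n_components=2)
iris_fa = fa.fit_transform(iris_X)

pca = PCA(n_components=2)
iris_pca = pca.fit_transform(iris_X)

print(iris_fa.shape, iris_pca.shape)   # (150, 2) (150, 2)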

Using kernel PCA for nonlinear dimensionality reduction

Most of the techniques in statistics are linear by nature, so in order to capture nonlinearity, we might need to apply some transformation. PCA is, of course, a linear transformation. In this recipe, we'll look at applying nonlinear transformations, and then apply PCA for dimensionality reduction.

Getting ready

Life would be so easy if data were always linearly separable, but unfortunately, it's not. Kernel PCA can help to circumvent this issue. The data is first run through a kernel function that projects it onto a different space; then, PCA is performed.

To familiarize yourself with the kernel functions, it will be a good exercise to think of how to generate...
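As one possible illustration (a sketch using concentric circles, not necessarily the data the book goes on to generate), linearly inseparable data can be untangled by kernel PCA where plain PCA cannot help:

import matplotlib.pyplot as plt
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

# Two concentric circles: not linearly separable in the original space.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# Plain PCA only rotates the data, so the rings stay entangled.
X_pca = PCA(n_components=2).fit_transform(X)

# An RBF kernel implicitly maps the points into a space where the rings
# separate; PCA is then performed in that space.
kpca = KernelPCA(n_components=2, kernel='rbf', gamma=10)
X_kpca = kpca.fit_transform(X)

plt.scatter(X_kpca[:, 0], X_kpca[:, 1], c=y)
plt.title('Kernel PCA (RBF) projection of concentric circles')
plt.show()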

Using truncated SVD to reduce dimensionality

Truncated SVD is a matrix factorization technique that factors a matrix M into the three matrices U, Σ, and V (so that M = UΣVᵀ). This is very similar to PCA, except that the factorization for SVD is done on the data matrix, whereas for PCA, the factorization is done on the covariance matrix. Typically, SVD is used under the hood to find the principal components of a matrix.
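To make the link to PCA concrete, here is a small sketch (an illustration, not taken from the recipe) showing that the right singular vectors of the centered data matrix match PCA's principal components up to sign:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data
X_centered = X - X.mean(axis=0)

# SVD of the centered data matrix: X_centered = U @ np.diag(S) @ Vt
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

pca = PCA(n_components=2).fit(X)

# The first rows of Vt are PCA's components (up to a sign flip per row).
print(np.allclose(np.abs(Vt[:2]), np.abs(pca.components_)))   # True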

Getting ready

Truncated SVD is different from regular SVD in that it produces a factorization where the number of columns is equal to the specified truncation. For example, given an n x n matrix, SVD will produce matrices with n columns, whereas truncated SVD will produce matrices with the specified number of columns. This...
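A minimal sketch of the truncation in practice, assuming the iris data used elsewhere in the chapter:

from sklearn.datasets import load_iris
from sklearn.decomposition import TruncatedSVD

iris_X = load_iris().data               # shape (150, 4)

# Keep only the two largest singular values and their singular vectors.
svd = TruncatedSVD(n_components=2)
iris_svd = svd.fit_transform(iris_X)

print(iris_svd.shape)                   # (150, 2)
print(svd.explained_variance_ratio_)    # variance captured by each component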

Using decomposition to classify with DictionaryLearning

In this recipe, we'll show how a decomposition method can actually be used for classification. DictionaryLearning attempts to take a dataset and transform it into a sparse representation.

Getting ready

With DictionaryLearning, the idea is that the features are the basis for the resulting dataset. Load the iris dataset:

from sklearn.datasets import load_iris
iris = load_iris()
iris_X = iris.data
y = iris.target

Additionally, create a training set by taking every other element of iris_X and y. Take the remaining elements for testing:

X_train = iris_X[::2]
X_test = iris_X[1::2]
y_train = y[::2]
y_test = y[1::2]
...
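As a hedged sketch of the next step (the book's exact parameters may differ), fit DictionaryLearning on the training half and transform both halves into sparse codes; one atom per iris class is a natural choice here:

from sklearn.decomposition import DictionaryLearning

# Three dictionary atoms, one per iris class.
dl = DictionaryLearning(n_components=3)
X_train_dl = dl.fit_transform(X_train)
X_test_dl = dl.transform(X_test)

print(X_train_dl.shape)   # (75, 3) -- sparse codes over the three atoms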

Doing dimensionality reduction with manifolds – t-SNE

Getting ready

This is a short and practical recipe.

If you have read the rest of the chapter, you will have noticed that we have been doing a lot of dimensionality reduction with the iris dataset. Let's continue the pattern for easy comparisons. Load the iris dataset:

from sklearn.datasets import load_iris
iris = load_iris()
iris_X = iris.data
y = iris.target

Load PCA and some classes from the manifold module:

from sklearn.decomposition import PCA
from sklearn.manifold import TSNE, MDS, Isomap

#Load visualization library
import matplotlib.pyplot as plt
%matplotlib inline
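As a minimal sketch of the comparison this recipe builds toward (continuing from the imports above; the book's exact steps may differ), embed the iris data in two dimensions with PCA and with t-SNE and plot the embeddings side by side:

# Two-dimensional embeddings of the same data with PCA and t-SNE.
iris_pca = PCA(n_components=2).fit_transform(iris_X)
iris_tsne = TSNE(n_components=2, random_state=0).fit_transform(iris_X)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].scatter(iris_pca[:, 0], iris_pca[:, 1], c=y)
axes[0].set_title('PCA')
axes[1].scatter(iris_tsne[:, 0], iris_tsne[:, 1], c=y)
axes[1].set_title('t-SNE')
plt.show()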

How to do it....

Testing methods to reduce dimensionality with pipelines

Here, we will see how different estimators, each composed of a dimensionality reduction step and a support vector machine, perform.

Getting ready

Load the iris dataset and some dimensionality reduction libraries. This is a big step for this particular recipe:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC, LinearSVC
from sklearn.decomposition import PCA, NMF, TruncatedSVD
from sklearn.manifold import Isomap
%matplotlib inline
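As a hedged sketch of the kind of pipeline search this recipe performs (the book's exact grid and estimators may differ), place a dimensionality reduction step in front of an SVM and let GridSearchCV try several reducers and output dimensionalities:

iris = load_iris()

# A pipeline with a swappable dimensionality-reduction step followed by an SVM.
pipe = Pipeline([
    ('reduce_dim', PCA()),
    ('classify', SVC()),
])

# GridSearchCV can substitute whole estimators for the 'reduce_dim' step.
param_grid = {
    'reduce_dim': [PCA(), TruncatedSVD(), NMF(max_iter=500), Isomap()],
    'reduce_dim__n_components': [2, 3],
}

grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(iris.data, iris.target)

print(grid.best_params_)
print(grid.best_score_)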

How to do it....
