Linear Models with scikit-learn

This chapter contains the following recipes:

  • Fitting a line through data
  • Fitting a line through data with machine learning
  • Evaluating the linear regression model
  • Using ridge regression to overcome linear regression's shortfalls
  • Optimizing the ridge regression parameter
  • Using sparsity to regularize models
  • Taking a more fundamental approach to regularization with LARS

Introduction

I conjecture that we are built to perceive linear functions very well. They are very easy to visualize, interpret, and explain. Linear regression is very old and was probably the first statistical model.

In this chapter, we will take a machine learning approach to linear regression.

Note that this chapter, similar to the chapter on dimensionality reduction and PCA, involves selecting the best features using linear models. Even if you decide not to perform regression for predictions with linear models, you can select the most powerful features.

Also note that linear models provide a lot of the intuition behind many machine learning algorithms. For example, RBF-kernel SVMs have smooth boundaries that, looked at up close, resemble a line. Thus, SVMs are often easier to explain if you keep your linear model intuition in mind.

...

Fitting a line through data

Now we will start with some basic modeling with linear regression. Traditional linear regression is the first, and therefore probably the most fundamental, model: a straight line through data.

Intuitively, it is familiar to much of the population: a change in one input variable proportionally changes the output variable. Importantly, many people will have seen it in school, in a newspaper data graphic, or in a presentation at work, so it will be easy for you to explain to colleagues and investors.

Getting ready

The Boston dataset is perfect for playing around with regression. It contains the median home price for several areas of Boston. It also has other factors...
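As a minimal sketch of where this recipe is headed (assuming scikit-learn's LinearRegression estimator; not necessarily the book's exact steps):

from sklearn import datasets
from sklearn.linear_model import LinearRegression

boston = datasets.load_boston()           # median home prices and related factors
lr = LinearRegression()
lr.fit(boston.data, boston.target)        # fit a straight line (hyperplane) through the data
predictions = lr.predict(boston.data)     # in-sample predictions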

Fitting a line through data with machine learning

Linear regression with machine learning involves testing the linear regression algorithm on unseen data. Here, we will perform 10-fold cross-validation:

  • Split the set into 10 parts
  • Train on 9 of the parts and test on the one left over
  • Repeat this 10 times so that every part gets to be a test set once

Getting ready

As in the previous section, load the dataset you want to apply linear regression to, in this case, the Boston housing dataset:

from sklearn import datasets
boston = datasets.load_boston()  # median home prices and related features for areas around Boston
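With boston loaded, a minimal sketch of the 10-fold cross-validation described above might look like this (using cross_val_score as one convenient way to run it; the recipe itself may iterate over the folds explicitly):

from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

lr = LinearRegression()
# cv=10 splits the data into 10 folds; each fold is held out once as the test set
scores = cross_val_score(lr, boston.data, boston.target, cv=10)
print(scores.mean(), scores.std())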

How to do it...

The...

Evaluating the linear regression model

In this recipe, we'll look at how well our regression fits the underlying data. We fitted a regression in the last recipe, but didn't pay much attention to how well we actually did. The first question after fitting the model is clearly: how well does the model fit? In this recipe, we'll examine this question.

Getting ready

Let's use the lr object and Boston dataset—reach back into your code from the Fitting a line through data recipe. The lr object will have a lot of useful methods now that the model has been fit.
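A minimal sketch of that kind of evaluation, assuming lr was fit on the full Boston data as in the earlier recipe (illustrative, not the recipe's exact code):

import numpy as np
from sklearn import datasets
from sklearn.linear_model import LinearRegression

boston = datasets.load_boston()
lr = LinearRegression().fit(boston.data, boston.target)   # the lr object from the earlier recipe

predictions = lr.predict(boston.data)
residuals = boston.target - predictions          # errors of the fit
print(lr.score(boston.data, boston.target))      # R^2, the coefficient of determination
print(residuals.mean(), residuals.std())         # residuals should be roughly centered on zero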

How to do it...

...

Using ridge regression to overcome linear regression's shortfalls

In this recipe, we'll learn about ridge regression. It is different from vanilla linear regression; it introduces a regularization parameter to shrink coefficients. This is useful when the dataset has collinear factors.

Ridge regression is actually so powerful in the presence of collinearity that you can model polynomial features: vectors x, x², x³, ..., which are highly collinear and correlated.

Getting ready

Let's load a dataset that has a low effective rank and compare ridge regression with linear regression by way of the coefficients. If you're not familiar with rank, it's the smaller of the number of linearly independent columns and the number of linearly...
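A minimal sketch along those lines, assuming a synthetic dataset from make_regression with a low effective_rank to induce collinear columns (an assumption about the setup, not necessarily the book's exact data):

from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge

# low effective rank means the columns are highly collinear
X, y = make_regression(n_samples=100, n_features=10, effective_rank=2,
                       noise=5, random_state=0)

lr_coef = LinearRegression().fit(X, y).coef_
ridge_coef = Ridge(alpha=1.0).fit(X, y).coef_   # alpha shrinks the coefficients

print(lr_coef)      # tends to blow up under collinearity
print(ridge_coef)   # shrunken, more stable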

Optimizing the ridge regression parameter

Once you start using ridge regression to make predictions or learn about relationships in the system you're modeling, you'll start thinking about the choice of alpha.

For example, using ordinary least squares (OLS) regression might show a relationship between two variables; however, when regularized by an alpha, the relationship may no longer be significant. This can make the difference in whether a decision needs to be taken.

Getting ready

Through cross-validation, we will tune the alpha parameter of ridge regression. If you remember, in ridge regression the regularization parameter, often written as gamma, is represented as alpha in scikit-learn's Ridge class; so, the question that arises...
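One simple way to do this is with RidgeCV, which cross-validates over a grid of candidate alphas; the sketch below reuses a synthetic collinear dataset and is only illustrative of the idea (the recipe may use a different tool, such as GridSearchCV):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV

# a low-effective-rank (collinear) dataset, as in the previous recipe's sketch
X, y = make_regression(n_samples=100, n_features=10, effective_rank=2,
                       noise=5, random_state=0)

rcv = RidgeCV(alphas=np.array([0.01, 0.1, 1.0, 10.0]), cv=10)
rcv.fit(X, y)
print(rcv.alpha_)   # the alpha chosen by cross-validation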

Using sparsity to regularize models

The least absolute shrinkage and selection operator (LASSO) method is very similar to ridge regression and least angle regression (LARS). It's similar to ridge regression in the sense that we penalize our regression by an amount, and it's similar to LARS in that it can be used for parameter (feature) selection, typically leading to a sparse vector of coefficients. Both LASSO and LARS get rid of a lot of the features of the dataset, which is something you might or might not want to do depending on the dataset and how you apply it. (Ridge regression, on the other hand, preserves all features, which allows you to model polynomial functions or complex functions with correlated features.)
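As a hedged sketch of the sparsity LASSO produces (the synthetic dataset here is illustrative, not necessarily the one used in the recipe):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# many features, only a few of which are actually informative
X, y = make_regression(n_samples=200, n_features=500, n_informative=5,
                       noise=5, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)
print(np.sum(lasso.coef_ != 0))   # most coefficients are driven exactly to zero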

Getting ready

...

Taking a more fundamental approach to regularization with LARS

To borrow from Gilbert Strang's evaluation of Gaussian elimination, LARS is an idea you probably would have considered eventually, had it not already been discovered by Efron, Hastie, Johnstone, and Tibshirani in their work [1].

Getting ready

LARS is a regression technique that is well suited to high-dimensional problems, that is, p >> n, where p denotes the number of columns (features) and n is the number of samples.
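A minimal sketch of LARS on a p >> n problem (the dimensions below are illustrative assumptions, not the recipe's data):

from sklearn.datasets import make_regression
from sklearn.linear_model import Lars

# p >> n: far more features (columns) than samples
X, y = make_regression(n_samples=50, n_features=500, n_informative=5,
                       noise=2, random_state=0)

lars = Lars(n_nonzero_coefs=10).fit(X, y)   # cap the number of active features
print((lars.coef_ != 0).sum())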

How to do it...

  1. First, import the necessary objects. The data we use will have 200...

References

  1. Bradley Efron, Trevor Hastie, Iain Johnstone, and Robert Tibshirani, Least angle regression, The Annals of Statistics 32(2) 2004: pp. 407–499, doi:10.1214/009053604000000067, MR2060166.
