Chapter 6. Achieving Generalization

We have to confess that, until this point, we've delayed the crucial moment of truth when our linear model has to be put to the test and verified as effectively predicting its target. Up to now, we have judged whether we were doing a good modeling job by naively looking at a series of goodness-of-fit measures, all of which tell us only whether the linear model might be apt at predicting, based solely on the information in our training data.

Unless you love sink-or-swim situations, you need to apply the correct tests to your model, in much the same way as you would test new software before putting it into production, so that you can anticipate its live performance.

Moreover, no matter your level of skill and experience with such types of models, you can easily be misled into thinking you're building a good model just on the basis of the same data you used to define it. We will therefore introduce you to the fundamental distinction between in-sample and out-of-sample...

Checking on out-of-sample data


Until this point in the book, we have striven to make the regression model fit the data, even by modifying the data itself (imputing missing values, removing outliers, transforming for non-linearity, or creating new features). By keeping an eye on measures such as R-squared, we have tried our best to reduce prediction errors, though we have no idea to what extent this was actually successful.

The problem we face now is that we shouldn't expect a well-fitted model to automatically perform well on the new data it will encounter in production.

In defining and explaining the problem, let's recall what we said about underfitting. Since we are working with a linear model, we are actually expecting to apply our work to data that has a linear relationship with the response variable. Having a linear relationship means that the response variable always tends to increase (or decrease) at a constant rate as each predictor increases. Graphically, on a scatterplot, this is represented...
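The distinction between in-sample and out-of-sample performance is easy to verify in practice. What follows is a minimal sketch of our own (it uses a synthetic dataset generated with scikit-learn, not the data used in the book): we fit the model on a training portion only and compare the R-squared obtained in-sample with the R-squared obtained on the held-out observations; the latter is the figure that anticipates live performance.

from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic data: 15 predictors, only 5 of which carry real signal
X, y = make_regression(n_samples=200, n_features=15, n_informative=5,
                       noise=10.0, random_state=101)

# Keep 30% of the observations aside: the model never sees them while fitting
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=101)

lm = LinearRegression().fit(X_train, y_train)
print('In-sample R2     : %0.3f' % lm.score(X_train, y_train))
print('Out-of-sample R2 : %0.3f' % lm.score(X_test, y_test))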

Greedy selection of features


By following our experiments throughout the book, you may have noticed that adding new variables to a linear regression model always looks like a success. That's especially true for training errors, and it happens not only when we insert the right variables, but also when we insert the wrong ones. Puzzlingly, even when we add redundant or non-useful variables, there is always a more or less positive impact on the fit of the model.

The reason is easily explained: since regression models are high-bias models, they find it beneficial to augment their complexity by increasing the number of coefficients they use. Thus, some of the new coefficients can be used to fit the noise and other details present in the data. This is precisely the memorization/overfitting effect we discussed before. When you have as many coefficients as observations, your model becomes saturated (that's the technical term used in statistics) and you can achieve a perfect prediction because basically you have...
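A small experiment makes the effect visible (this is our own illustration, not code from the book): we keep the same target but append increasingly many columns of pure random noise, and the in-sample R-squared keeps creeping up even though none of the added columns carries any signal.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

# 5 genuinely informative predictors
X, y = make_regression(n_samples=100, n_features=5, noise=25.0,
                       random_state=101)
rng = np.random.RandomState(101)
lm = LinearRegression()

for n_junk in (0, 5, 20, 50):
    # Append columns of pure noise to the original predictors
    X_aug = np.hstack([X, rng.normal(size=(X.shape[0], n_junk))])
    r2 = lm.fit(X_aug, y).score(X_aug, y)
    print('%2i junk features -> in-sample R2 = %0.3f' % (n_junk, r2))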

Regularization optimized by grid-search


Regularization is another way to modify the role of variables in a regression model in order to prevent overfitting and to achieve simpler functional forms. The interesting aspect of this alternative approach is that it doesn't require manipulating your original dataset, which makes it suitable for systems that learn and predict online from large numbers of features and observations, without human intervention. Regularization works by enriching the learning process with a penalty for overly complex models, shrinking (or reducing to zero) the coefficients of variables that are irrelevant for predicting your target, or that are redundant because they are highly correlated with others already present in the model (the collinearity problem seen before).
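To make the shrinkage effect concrete before looking at each penalty in turn, here is a small sketch of our own (the data and the penalty value are arbitrary choices for illustration): two almost identical predictors make the plain least-squares coefficients unstable and inflated, while the penalized fit pulls them back towards small, similar values.

import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.RandomState(101)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)      # almost an exact copy of x1
X = np.column_stack([x1, x2])
y = 3.0 * x1 + rng.normal(scale=0.5, size=n)  # only x1 really matters

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)

print('OLS coefficients  :', np.round(ols.coef_, 2))
print('Ridge coefficients:', np.round(ridge.coef_, 2))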

Ridge (L2 regularization)

The idea behind ridge regression is simple and straightforward: if the problem is the presence of many variables, each affecting the regression model through its coefficient, all we have to do is...
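As for how strong the penalty should be, the section title already gives the answer: it is tuned by grid-search. The following is a minimal sketch of the idea using scikit-learn's GridSearchCV on a Ridge model (the alpha grid and the synthetic data are our own choices for the example, not the book's).

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=30, n_informative=10,
                       noise=15.0, random_state=101)

# Try a wide logarithmic grid of penalty strengths and keep the best
# cross-validated one
search = GridSearchCV(estimator=Ridge(),
                      param_grid={'alpha': np.logspace(-3, 3, 13)},
                      scoring='r2', cv=5)
search.fit(X, y)

print('Best alpha         :', search.best_params_['alpha'])
print('Cross-validated R2 : %0.3f' % search.best_score_)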

Stability selection


As presented, the L1 penalty offers the advantage of making your coefficient estimates sparse, and it effectively acts as a variable selector since it tends to leave only the essential variables in the model. On the other hand, the selection itself tends to be unstable when the data changes, and it requires a certain effort to correctly tune the C parameter to make the selection most effective. As we saw while discussing elastic net, the peculiarity resides in the behavior of Lasso when there are two highly correlated variables: depending on the structure of the data (noise and correlation with other variables), L1 regularization will choose just one of the two.

In fields related to bioinformatics (DNA, molecular studies), it is common to work with a large number of variables and only a few observations. Such problems are typically denoted p >> n (the features are much more numerous than the cases) and they present the necessity of selecting which features to...
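The full recipe continues beyond this excerpt, but the core idea of stability selection can be sketched by hand (this is our own simplified version, not the book's exact procedure; the subsample size, the alpha value, and the 80% threshold are arbitrary choices): refit a Lasso on many random subsamples of the data and keep only the features that receive a non-zero coefficient in the large majority of the refits.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

# Many candidate features relative to the observations, few of them informative
X, y = make_regression(n_samples=100, n_features=50, n_informative=5,
                       noise=10.0, random_state=101)
X = StandardScaler().fit_transform(X)

rng = np.random.RandomState(101)
n_runs = 200
selected = np.zeros(X.shape[1])
for _ in range(n_runs):
    # Refit the Lasso on a random half of the observations
    idx = rng.choice(X.shape[0], size=X.shape[0] // 2, replace=False)
    lasso = Lasso(alpha=1.0).fit(X[idx], y[idx])
    selected += (lasso.coef_ != 0)

stability = selected / n_runs
print('Features kept in more than 80% of runs:', np.where(stability > 0.8)[0])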

Summary


In this chapter, we have covered quite a lot of ground, finally exploring the most experimental and scientific part of the task of building linear regression and classification models.

Starting with the topic of generalization, we explained what can go wrong in a model and why it is always important to check the true performance of your work using train/test splits, bootstraps, and cross-validation (though we recommend using the latter more for validation work than for general evaluation).

Model complexity as a source of variance in the estimates gave us the opportunity to introduce variable selection, first by greedy selection of features, univariate or multivariate, and then using regularization techniques such as Ridge, Lasso, and Elastic Net.

Finally, we demonstrated a powerful application of Lasso, called stability selection, which, in the light of our experience, we recommend you try for many feature selection problems.

In the next chapter, we will deal with the problem of incrementally...
