Building Statistical Models in Python

Authors: Huy Hoang Nguyen, Paul N Adams, Stuart J Miller
Published in Aug 2023 by Packt. ISBN-13: 9781804614280. 420 pages. 1st Edition.

Table of Contents

Preface
Part 1: Introduction to Statistics
  • Chapter 1: Sampling and Generalization
  • Chapter 2: Distributions of Data
  • Chapter 3: Hypothesis Testing
  • Chapter 4: Parametric Tests
  • Chapter 5: Non-Parametric Tests
Part 2: Regression Models
  • Chapter 6: Simple Linear Regression
  • Chapter 7: Multiple Linear Regression
Part 3: Classification Models
  • Chapter 8: Discrete Models
  • Chapter 9: Discriminant Analysis
Part 4: Time Series Models
  • Chapter 10: Introduction to Time Series
  • Chapter 11: ARIMA Models
  • Chapter 12: Multivariate Time Series
Part 5: Survival Analysis
  • Chapter 13: Time-to-Event Variables – An Introduction
  • Chapter 14: Survival Models
Index
Other Books You May Enjoy

Multiple Linear Regression

In the last chapter, we discussed simple linear regression (SLR), which uses one explanatory variable to explain a target variable. In this chapter, we will discuss multiple linear regression (MLR), a model that leverages multiple explanatory variables to model a response variable. Two of the major challenges facing multivariate modeling are multicollinearity and the bias-variance trade-off. Following an overview of MLR, we will introduce the methods used for evaluating and minimizing multicollinearity. We will then discuss ways to leverage the bias-variance trade-off to our benefit as analysts. Finally, we will discuss handling multicollinearity using Principal Component Regression (PCR), which minimizes overfitting by transforming features rather than removing them.

In this chapter, we’re going to cover the following main topics:

  • Multiple linear regression
  • Feature selection
  • Shrinkage methods
  • Dimension reduction

Multiple linear regression

In the previous chapter, we discussed SLR. With SLR, we were able to predict the value of a variable (commonly called the response variable, denoted as $y$) using another variable (commonly called the explanatory variable, denoted as $x$). The SLR model is expressed by the following equation, where $\beta_0$ is the intercept term and $\beta_1$ is the slope of the linear model:

$y = \beta_0 + \beta_1 x + \epsilon$

While this is a useful model, in many problems, multiple explanatory variables could be used to predict the response variable. For example, if we wanted to predict home prices, we might want to consider many variables, which may include lot size, the number of bedrooms, the number of bathrooms, and overall size. In this situation, we can expand the previous model to include these additional variables. This is called MLR. The MLR model can be expressed with the following equation.

$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_p x_p + \epsilon$
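
As a concrete illustration, here is a minimal sketch of fitting an MLR model with statsmodels. The home-price variable names echo the example above, and the data and coefficients used to simulate the response are made up for illustration; this is not data from the book.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical home-price predictors (simulated for illustration only)
rng = np.random.default_rng(42)
n = 200
X = pd.DataFrame({
    "lot_size": rng.normal(8000, 1500, n),
    "bedrooms": rng.integers(1, 6, n).astype(float),
    "bathrooms": rng.integers(1, 4, n).astype(float),
    "sqft": rng.normal(1800, 400, n),
})
# Simulated response with made-up coefficients plus noise (epsilon)
y = (50_000 + 2 * X["lot_size"] + 10_000 * X["bedrooms"]
     + 15_000 * X["bathrooms"] + 90 * X["sqft"]
     + rng.normal(0, 20_000, n))

# statsmodels does not add the intercept (beta_0) automatically
X_design = sm.add_constant(X)
model = sm.OLS(y, X_design).fit()
print(model.params)  # estimated beta_0 through beta_4
```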

Feature selection

There are many factors that influence the success or failure of a model, such as sampling, data quality, feature creation, and model selection, several of which we have not covered. One of those critical factors is feature selection. Feature selection is simply the process of choosing or systematically determining the best features for a model from an existing set of features. We have done some simple feature selection already: in the previous section, we removed features that had high variance inflation factors (VIFs). In this section, we will look at some methods for feature selection. The methods presented here fall into two categories: statistical methods and performance-based methods. Let's start with statistical methods.
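
As a reminder of how that VIF check works, here is a minimal sketch using statsmodels' variance_inflation_factor. The data is synthetic, with sqft deliberately constructed from the other two predictors so that its VIF comes out high.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Synthetic predictors; sqft is built from bedrooms and bathrooms so
# that it is nearly collinear with them (illustration only).
rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame({"bedrooms": rng.integers(1, 6, n).astype(float)})
X["bathrooms"] = rng.integers(1, 4, n).astype(float)
X["sqft"] = 400 * X["bedrooms"] + 200 * X["bathrooms"] + rng.normal(0, 30, n)

X_design = sm.add_constant(X)
vifs = pd.Series(
    [variance_inflation_factor(X_design.values, i)
     for i in range(1, X_design.shape[1])],  # skip the intercept column
    index=X_design.columns[1:],
)
print(vifs)  # a common rule of thumb flags VIF > 5 (or > 10)
```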

Statistical methods for feature selection

Statistical methods for feature selection rely on the primary tool that we have used throughout the previous chapters: statistical significance. The methods presented...
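
One significance-based approach is backward elimination by p-value: repeatedly drop the least significant feature until every remaining feature is significant. The sketch below is an illustrative assumption about how such a procedure can be coded, not necessarily the exact method the chapter goes on to present; the alpha threshold of 0.05 is a conventional placeholder.

```python
import statsmodels.api as sm

def backward_eliminate(X, y, alpha=0.05):
    """Iteratively drop the least significant feature until all remaining
    p-values fall below alpha. X is a pandas DataFrame of features and y
    is the response; alpha=0.05 is an illustrative threshold."""
    features = list(X.columns)
    model = None
    while features:
        model = sm.OLS(y, sm.add_constant(X[features])).fit()
        pvals = model.pvalues.drop("const")  # ignore the intercept's p-value
        worst = pvals.idxmax()               # least significant feature
        if pvals[worst] < alpha:
            break                            # everything left is significant
        features.remove(worst)
    return features, model
```

Calling backward_eliminate(X, y) on a feature DataFrame like the earlier home-price sketch returns the surviving feature names along with the final fitted model.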

Shrinkage methods

The bias-variance trade-off is a balance every statistics and machine learning practitioner must strike when modeling; too much of either renders results useless. To catch these problems when they arise, we look at test results and the residuals. For example, assuming a useful set of features and an appropriate model have been selected, a model that performs well on validation but poorly on a test set could indicate too much variance; conversely, a model that fails to perform well at all could have too much bias. In either case, the model fails to generalize well. However, while bias can be identified through poor model performance from the start, high variance can be notoriously deceptive, as a high-variance model has the potential to perform very well during training and even during validation, depending on the data. High-variance models frequently use coefficient values that are unnecessarily high when very similar results can be obtained from...
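
To see how a shrinkage penalty reins in such coefficients, here is a small sketch using scikit-learn's Ridge and Lasso on two nearly duplicate predictors, the setting in which ordinary least squares coefficients tend to blow up. The alpha values are placeholders that would normally be tuned by cross-validation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.preprocessing import StandardScaler

# Synthetic data: x2 is almost identical to x1 (illustration only)
rng = np.random.default_rng(1)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)
X = np.column_stack([x1, x2])
y = 3 * x1 + rng.normal(scale=0.5, size=n)

# Shrinkage penalties are scale-sensitive, so standardize first
X_std = StandardScaler().fit_transform(X)
for name, model in [
    ("OLS", LinearRegression()),
    ("Ridge(alpha=1.0)", Ridge(alpha=1.0)),  # L2 penalty shrinks coefficients
    ("Lasso(alpha=0.1)", Lasso(alpha=0.1)),  # L1 penalty can zero them out
]:
    model.fit(X_std, y)
    print(name, np.round(model.coef_, 2))
```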

Dimension reduction

In this section, we will use a specific technique – PCR – to study MLR. This technique is useful when we need to deal with multicollinearity in the data. Multicollinearity occurs when an independent variable is highly correlated with another independent variable, or when an independent variable can be predicted from other independent variables in a regression model. High correlation among predictors can degrade the quality of a fitted model and destabilize its coefficient estimates.

The PCR technique is based on principal component analysis (PCA), as used in unsupervised machine learning for data compression and exploratory analysis. The idea is to apply the dimension reduction technique, PCA, to the original variables to create new, uncorrelated variables; we then apply the MLR algorithm to these new variables. The PCA technique can also be used in classification problems, which we will discuss in the next chapter.
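
A minimal PCR sketch, assuming scikit-learn is available: standardize, project onto the leading principal components (uncorrelated by construction), then regress the response on those components. The choice of two components is a placeholder; in practice it would be guided by the cumulative explained variance ratio.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic correlated predictors (illustration only)
rng = np.random.default_rng(2)
n = 150
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + rng.normal(scale=0.2, size=n)  # strongly correlated with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 2 * x1 - x3 + rng.normal(scale=0.3, size=n)

# Standardize -> PCA -> MLR on the components
pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
print(pcr.named_steps["pca"].explained_variance_ratio_)
print(pcr.score(X, y))  # R^2 of the regression on the components
```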

PCA –...

Summary

In this chapter, we discussed the concept of MLR and topics aiding its implementation, including feature selection methods, shrinkage methods, and PCR. Using these tools, we demonstrated approaches to reduce the risk of modeling excess variance. In doing so, we also introduced some model bias so that models have a better chance of generalizing to unseen data without the complications frequently faced when overfitting.

In the next chapter, we will begin a discussion of classification with the introduction of logistic regression, which applies a sigmoid function to a linear model to derive probabilities of binary class membership.
