
You're reading from Building Statistical Models in Python

Product type: Book
Published in: Aug 2023
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781804614280
Edition: 1st
Authors (3):
Huy Hoang Nguyen

Huy Hoang Nguyen is a Mathematician and Data Scientist with far-ranging experience spanning advanced mathematics, strategic leadership, and applied machine learning research. He holds a Master's in Data Science and a PhD in Mathematics. His previous work concerned Partial Differential Equations, Functional Analysis, and their applications in Fluid Mechanics. Since transitioning from academia to the healthcare industry, he has carried out Data Science projects ranging from traditional Machine Learning to Deep Learning.

Paul N Adams

Paul Adams is a Data Scientist with a background primarily in the healthcare industry. Paul applies statistics and machine learning in multiple areas of industry, focusing on projects in process engineering, process improvement, metrics and business rules development, anomaly detection, forecasting, clustering and classification. Paul holds a Master of Science in Data Science from Southern Methodist University.

Stuart J Miller

Stuart Miller is a Machine Learning Engineer with degrees in Data Science, Electrical Engineering, and Engineering Physics. Stuart has worked at several Fortune 500 companies, including Texas Instruments and State Farm, where he built software that utilized statistical and machine learning techniques. Stuart is currently an engineer at Toyota Connected, helping to build a more modern cockpit experience for drivers using machine learning.


Multiple Linear Regression

In the last chapter, we discussed simple linear regression (SLR), which uses one variable to explain a target variable. In this chapter, we will discuss multiple linear regression (MLR), a model that leverages multiple explanatory variables to model a response variable. Two of the major challenges in multivariate modeling are multicollinearity and the bias-variance trade-off. Following an overview of MLR, we will introduce methodologies for evaluating and minimizing multicollinearity. We will then discuss methods for leveraging the bias-variance trade-off to our benefit as analysts. Finally, we will discuss handling multicollinearity with Principal Component Regression (PCR), which minimizes overfitting not by removing features but by transforming them.

In this chapter, we’re going to cover the following main topics:

  • Multiple linear regression
  • Feature selection
  • Shrinkage methods
  • Dimension reduction

Multiple linear regression

In the previous chapter, we discussed SLR. With SLR, we were able to predict the value of a variable (commonly called the response variable, denoted as y) using another variable (commonly called the explanatory variable, denoted as x). The SLR model is expressed by the following equation, where β₀ is the intercept term and β₁ is the slope of the linear model:

y = β₀ + β₁x + ϵ

While this is a useful model, in many problems, multiple explanatory variables could be used to predict the response variable. For example, if we wanted to predict home prices, we might want to consider many variables, which may include lot size, the number of bedrooms, the number of bathrooms, and overall size. In this situation, we can expand the previous model to include these additional variables. This is called MLR. The MLR model can be expressed with the following equation.

y = β₀ + β₁x₁ + β₂x₂ + ⋯ + βₚxₚ + ϵ

Feature selection

There are many factors that influence the success or failure of a model, such as sampling, data quality, feature creation, and model selection, several of which we have not covered. One of those critical factors is feature selection: the process of choosing, or systematically determining, the best features for a model from an existing set of features. We have already done some simple feature selection; in the previous section, we removed features that had high variance inflation factors (VIFs). In this section, we will look at some methods for feature selection. These fall into two categories: statistical methods and performance-based methods. Let's start with statistical methods.
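The VIF-based screening mentioned above can be sketched as follows; the threshold of 5 is a common rule of thumb (not necessarily the book's cutoff), and the synthetic data are an assumption for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 300
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + rng.normal(scale=0.1, size=n)  # nearly collinear with x1
x3 = rng.normal(size=n)                         # independent feature
X = pd.DataFrame({"x1": x1, "x2": x2, "x3": x3})

# VIF for each column measures how well the other columns predict it
vifs = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vifs)
to_drop = vifs[vifs > 5].index.tolist()  # candidates for removal
```

The two collinear columns show inflated VIFs, while the independent column stays near 1.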

Statistical methods for feature selection

Statistical methods for feature selection rely on the primary tool that we have used throughout the previous chapters: statistical significance. The methods presented...

Shrinkage methods

The bias-variance trade-off is a balance every statistics and machine learning practitioner must strike when modeling: too much of either renders results useless. To catch these problems when they arise, we examine test results and residuals. For example, assuming a useful set of features and an appropriate model have been selected, a model that performs well on validation but poorly on a test set can indicate too much variance; conversely, a model that fails to perform well at all may have too much bias. In either case, the model fails to generalize. However, while bias in a model reveals itself as poor performance from the start, high variance can be notoriously deceptive, since a high-variance model has the potential to perform very well during training and even during validation, depending on the data. High-variance models frequently have coefficient values that are unnecessarily large when very similar results can be obtained from...
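A minimal sketch of the shrinkage idea, assuming scikit-learn's ridge regression and toy collinear data (the alpha value is arbitrary): the L2 penalty pulls unstable OLS coefficients toward zero.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(7)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)  # nearly duplicate feature
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(scale=0.5, size=n)

ols = LinearRegression().fit(X, y)    # unpenalized: coefficients are unstable
ridge = Ridge(alpha=10.0).fit(X, y)   # L2 penalty shrinks the coefficients

print(ols.coef_, ridge.coef_)
```

For any positive alpha, the ridge coefficient vector has an L2 norm no larger than the OLS solution's, trading a small amount of bias for a reduction in variance.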

Dimension reduction

In this section, we will use a specific technique, PCR, to study MLR. This technique is useful when we need to deal with multicollinearity in the data. Multicollinearity occurs when one independent variable is highly correlated with another independent variable, or can be predicted from the other independent variables in a regression model. High correlation among predictors can degrade the quality of a fitted model.

The PCR technique is based on PCA, which is used in unsupervised machine learning for data compression and exploratory analysis. The idea is to apply this dimension reduction technique to the original variables to create new, uncorrelated variables, and then fit the MLR model to those new variables. The PCA technique can also be used in classification problems, which we will discuss in the next chapter.
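Under those ideas, a PCR fit can be sketched as a scikit-learn pipeline: standardize, project onto the leading principal components, then run OLS on the components. Keeping two components and the synthetic data below are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)  # collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 2 * x1 + x3 + rng.normal(scale=0.5, size=n)

# PCA decorrelates the predictors; OLS is then fit on the components
pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
print(pcr.score(X, y))  # R^2 of the regression on the principal components
```

Because the two collinear predictors collapse into one component, the regression sees uncorrelated inputs without any feature being removed outright.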

PCA –...

Summary

In this chapter, we discussed the concept of MLR and topics that aid its implementation: feature selection methods, shrinkage methods, and PCR. Using these tools, we demonstrated approaches that reduce the risk of modeling excess variance. In doing so, we also deliberately introduced some bias so that models have a better chance of generalizing to unseen data, avoiding the complications frequently faced when overfitting.

In the next chapter, we will begin our discussion of classification with the introduction of logistic regression, which applies a sigmoid function to a linear model to derive probabilities of binary class membership.

