
You're reading from The Supervised Learning Workshop - Second Edition

Product type: Book
Published in: Feb 2020
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781800209046
Edition: 2nd Edition
Authors (4):
Blaine Bateman

Blaine Bateman has more than 35 years of experience working across industries, from government R&D to startups to $1B public companies. His experience focuses on analytics, including machine learning and forecasting. His hands-on skills include Python and R coding, Keras/TensorFlow, and the AWS and Azure machine learning services. As a machine learning consultant, he has developed and deployed ML models in industry.

Ashish Ranjan Jha

Ashish Ranjan Jha received his bachelor's degree in electrical engineering from IIT Roorkee (India), a master's degree in computer science from EPFL (Switzerland), and an MBA from Quantic School of Business (Washington), graduating with distinction in all three. He has worked for large technology companies, including Oracle and Sony, as well as more recent tech unicorns such as Revolut, mostly focused on artificial intelligence. He currently works as a machine learning engineer. Ashish has worked on a range of products and projects, from developing an app that uses sensor data to predict the mode of transport, to detecting fraud in car damage insurance claims. Besides being an author, machine learning engineer, and data scientist, he blogs frequently on his personal site about the latest research and engineering topics in machine learning.

Benjamin Johnston

Benjamin Johnston is a senior data scientist for one of the world's leading data-driven MedTech companies and is involved in the development of innovative digital solutions throughout the entire product development pathway, from problem definition to solution research and development, through to final deployment. He is currently completing his Ph.D. in machine learning, specializing in image processing and deep convolutional neural networks. He has more than 10 years of experience in medical device design and development, working in a variety of technical roles, and holds first-class honors bachelor's degrees in both engineering and medical science from the University of Sydney, Australia.

Ishita Mathur

Ishita Mathur has worked as a data scientist for 2.5 years at product-based start-ups, taking business concerns from various domains and formulating them as technical problems that can be solved using data and machine learning. Her current work at GO-JEK involves the end-to-end development of machine learning projects: working as part of a product team to define, prototype, and implement data science models within the product. She completed her master's degree in high-performance computing with data science at the University of Edinburgh, UK, and her bachelor's degree with honors in physics at St. Stephen's College, Delhi.


3. Linear Regression

Overview

This chapter covers regression problems and analysis, introducing linear regression as well as multiple linear regression and gradient descent. By the end of this chapter, you will be able to distinguish between regression and classification problems, and to implement gradient descent in linear regression problems and apply it to other model architectures. You will also be able to use linear regression to construct a linear model for data in an x-y plane, evaluate the performance of linear models, and use that evaluation to choose the best model. In addition, you will be able to apply feature engineering to create dummy variables for constructing more complex linear models.

Introduction

In Chapter 1, Fundamentals, and Chapter 2, Exploratory Data Analysis and Visualization, we introduced the concept of supervised machine learning in Python and the essential techniques required for loading, cleaning, exploring, and visualizing raw data sources. We discussed the importance of fully understanding the data before moving on to further analysis, as well as how the initial data preparation process can sometimes account for the majority of the time spent on the project as a whole. In particular, we considered correlations among all the variables, finding and addressing missing values, and understanding the shape of data via histograms, bar plots, and density plots. In this chapter, we will delve into the model building process and will construct our first supervised machine learning solution using linear regression.

Regression and Classification Problems

We discussed two distinct methods, supervised learning and unsupervised learning, in Chapter 1, Fundamentals. Supervised learning problems aim to map input information to a known output value or label, but there are two further subcategories to consider. Supervised learning problems can be further divided into regression or classification problems. Regression problems, which are the subject of this chapter, aim to predict or model continuous values, for example, predicting the temperature tomorrow in degrees Celsius, from historical data, or forecasting future sales of a product on the basis of its sales history. In contrast, classification problems, rather than returning a continuous value, predict membership of one or more of a specified number of classes or categories. The example supervised learning problem in Chapter 1, Fundamentals, where we wanted to determine or predict whether a hairstyle was from the 1960s or 1980s, is a good example...
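To make the regression/classification distinction concrete, here is a minimal sketch in plain Python; the sample data and the helper function are hypothetical, purely for illustration:

```python
# Regression targets are continuous numbers; classification targets are
# labels drawn from a fixed set of classes.

regression_targets = [21.4, 19.8, 23.1, 22.5]         # temperatures in degrees Celsius
classification_targets = ["1960s", "1980s", "1960s"]  # hairstyle decade labels

def is_regression(targets):
    """Crude heuristic: treat a problem as regression if every target is numeric."""
    return all(isinstance(t, (int, float)) for t in targets)

print(is_regression(regression_targets))      # True
print(is_regression(classification_targets))  # False
```

In practice the distinction is decided by the problem you are modeling, not by a type check, but the target types are a useful first clue.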

Linear Regression

We will start our investigation into regression models with the selection of a linear model. Linear models, while a great first choice due to their intuitive nature, are also powerful predictors, provided the dataset contains some degree of linear or polynomial relationship between the input features and output values. Their intuitive nature often arises from the ability to plot the data on a graph and observe a trend, with, say, the output (the y-axis value) trending positively or negatively with the input (the x-axis value). The fundamental components of linear regression models are also often learned during high school mathematics classes. You may recall that the equation of a straight line is defined as follows:

y = mx + b

Figure 3.13: Equation of a straight line

Here, x is the input value and y is the corresponding output or predicted value. The parameters of the model are the...
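As a sketch of how such a line can be fitted in practice, the snippet below uses NumPy's least squares polynomial fit on synthetic data; the true slope, intercept, noise level, and random seed are illustrative assumptions:

```python
import numpy as np

# Fit y = m*x + b to noisy synthetic data by least squares.
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.shape)  # true m = 2, b = 1

m, b = np.polyfit(x, y, deg=1)  # coefficients returned highest degree first
print(f"slope m = {m:.2f}, intercept b = {b:.2f}")
```

The recovered slope and intercept land close to the true values of 2 and 1, with the residual error coming from the injected noise.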

Multiple Linear Regression

We have already covered regular linear regression, as well as linear regression with polynomial and other terms, and considered training them with both the least squares method and gradient descent. This section of the chapter considers an additional type of linear regression: multiple linear regression, where more than one variable (or feature) is used to construct the model. In fact, we have already used multiple linear regression without naming it: when we added dummy variables, and again when we added the sine and cosine terms, we were fitting multiple x variables to predict a single y variable.
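A minimal sketch of multiple linear regression with two input features, solved by least squares via NumPy; the coefficients and data are synthetic, chosen only for illustration:

```python
import numpy as np

# Two input features (x1, x2) predicting a single y, fitted by least squares.
rng = np.random.default_rng(0)
n = 200
x1 = rng.uniform(0, 10, n)
x2 = rng.uniform(0, 5, n)
y = 3.0 * x1 - 1.5 * x2 + 4.0 + rng.normal(scale=0.3, size=n)

# Design matrix: one column per feature plus a column of ones for the intercept.
X = np.column_stack([x1, x2, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # minimizes ||X @ coef - y||^2
print(coef)  # approximately [3.0, -1.5, 4.0]
```

Dummy variables and sine/cosine terms fit the same pattern: each one simply becomes another column of the design matrix X.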

Let's consider a simple example of where multiple linear regression naturally arises as a modeling solution. Suppose you were shown the following chart, which is the total annual earnings of a hypothetical tech worker over a long career. You can see that over time, their pay increased, but there are some odd jumps and changes in the data...

Summary

In this chapter, we took our first big leap into constructing machine learning models and making predictions with labeled datasets. We began our analysis by looking at a variety of different ways to construct linear models, starting with the precise least squares method, which is very good when modeling small amounts of data that can be processed using the available computer memory. The performance of linear models can be improved using dummy variables, which we created from categorical variables, adding additional features and context to the model. We then used linear regression analysis with a polynomial model to further improve performance, fitting a more natural curve to the dataset, and we investigated other non-linear feature engineering with the addition of sine and cosine series as predictors.

As a generalization from explicit linear regression, we implemented the gradient descent algorithm, which we noted, while not as precise as the least squares method (for a...
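As a reminder of the idea, batch gradient descent for a simple straight-line model can be sketched as follows; the learning rate, iteration count, and synthetic data are illustrative assumptions, not tuned values:

```python
import numpy as np

# Batch gradient descent on mean squared error for y = m*x + b.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.05, size=x.shape)  # true m = 2, b = 1

m, b, lr = 0.0, 0.0, 0.1  # start from zero; lr is the learning rate
for _ in range(2000):
    error = (m * x + b) - y
    m -= lr * 2 * np.mean(error * x)  # partial derivative of MSE w.r.t. m
    b -= lr * 2 * np.mean(error)      # partial derivative of MSE w.r.t. b
print(f"m = {m:.2f}, b = {b:.2f}")
```

Unlike the closed-form least squares solution, the answer here is approximate and depends on the learning rate and the number of iterations, which is the trade-off noted above.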

