You're reading from Deep Learning with MXNet Cookbook

Product type: Book
Published in: Dec 2023
Reading level: Beginner
Publisher: Packt
ISBN-13: 9781800569607
Edition: 1st Edition

Author: Andrés P. Torres

Andrés P. Torres is the Head of Perception at Oxa, a global leader in industrial autonomous vehicles, where he leads the design and development of state-of-the-art algorithms for autonomous driving. Before that, Andrés had a stint as an advisor and Head of AI at Maekersuite, an early-stage content generation startup, where he developed several AI-based algorithms for mobile phones and the web. Prior to this, Andrés was a Software Development Manager at Amazon Prime Air, developing software to optimize operations for autonomous drones.

Solving Regression Problems

In the previous chapters, we learned how to set up and run MXNet, work with Gluon and DataLoaders, and visualize datasets for regression, classification, image, and text problems. We also discussed the different learning methodologies (supervised learning, unsupervised learning, and reinforcement learning). In this chapter, we are going to focus on supervised learning, where the expected outputs are known for at least some examples. Depending on the given type of these outputs, supervised learning can be decomposed into regression and classification. Regression outputs are numbers from a continuous distribution (such as predicting the stock price of a public company), whereas classification outputs are defined from a known set (for example, identifying whether an image corresponds to a mouse, a cat, or a dog).

Classification problems can be seen as a subset of regression problems, and therefore, in this chapter, we will start working with the latter ones...

Technical requirements

Apart from the technical requirements specified in the Preface, the following are some of the additional requirements needed for this chapter:

  • Ensure that you have completed the recipe Installing MXNet, Gluon, GluonCV and GluonNLP from Chapter 1, Up and Running with MXNet.
  • Ensure that you have completed Recipe 1, Toy dataset for regression – load, manage, and visualize a house sales dataset from Chapter 2, Working with MXNet and Visualizing Datasets: Gluon and DataLoader.

The code for this chapter can be found at the following GitHub URL: https://github.com/PacktPublishing/Deep-Learning-with-MXNet-Cookbook/tree/main/ch03.

Furthermore, you can access each recipe directly from Google Colab, for example, for the first recipe of this chapter: https://colab.research.google.com/github/PacktPublishing/Deep-Learning-with-MXNet-Cookbook/blob/main/ch03/3_1_Understanding_Maths_for_Regression_Models.ipynb.

Understanding the math of regression models

As we saw in the previous chapter, regression problems are a type of supervised learning problem whose output is a number from a continuous distribution, such as the price of a house or the predicted value of a company stock price.

The simplest model we can use for a regression problem is a linear regression model. Despite their simplicity, these models are very effective for simple problems: their parameters can be trained quickly, and the resulting model is easy to explain, given the small number of parameters involved. As we will see, this number of parameters depends entirely on the number of features we use.

Another interesting property of linear regression models is that they can be represented as neural networks. As neural networks will be the basis for most models used throughout the book, it is this neural-network representation of linear regression that we will work with.
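This representation is easy to sketch: a linear regression model is equivalent to a single neuron with no activation function, holding one weight per input feature plus a bias. The following minimal NumPy sketch (with hypothetical house features; the feature names and values are illustrative, not from the book's dataset) shows the idea:

```python
import numpy as np

# A linear regression model is a single neuron with no activation:
# y_hat = x . w + b, with one weight per input feature plus one bias.
rng = np.random.default_rng(42)

num_features = 3                   # e.g., sqft, bedrooms, bathrooms (hypothetical)
w = rng.normal(size=num_features)  # one trainable weight per feature
b = 0.0                            # trainable bias

def predict(x):
    """Forward pass of the single-neuron 'network'."""
    return x @ w + b

x = np.array([1200.0, 3.0, 2.0])   # feature vector for one house
print(predict(x))
```

Note that the total number of trainable parameters is `num_features + 1`, which is why the parameter count is completely determined by the number of features.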

The simplest neural network model is known as the...

Defining loss functions and evaluation metrics for regression

In the previous recipe, we defined our input features, described our model, and initialized it. At that point, we passed the feature vector of a house through the model to predict its price, calculated the output, and compared it against the expected output.

At the end of the previous recipe, comparing the expected output with the actual output of the model gave us an intuitive idea of how good our model was. This is what it means to “evaluate” a model: we assess its performance. However, that evaluation was incomplete, as we failed to take several factors into account:

  • We only evaluated the model on one house – what about the others? How can we take all houses into account in our evaluation?
  • Is the difference between values an accurate measurement of model error? What other operations make sense?
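To make both questions concrete, here is a small NumPy sketch (with made-up house prices, not the book's dataset) showing how per-house differences can be aggregated across all houses into the mean absolute error (MAE) and the mean squared error (MSE), two operations commonly used for regression:

```python
import numpy as np

# Predicted and true prices for several houses (illustrative values).
y_true = np.array([250_000.0, 310_000.0, 180_000.0, 420_000.0])
y_pred = np.array([260_000.0, 300_000.0, 195_000.0, 400_000.0])

errors = y_pred - y_true
mae = np.mean(np.abs(errors))   # Mean Absolute Error
mse = np.mean(errors ** 2)      # Mean Squared Error: penalizes large errors more
rmse = np.sqrt(mse)             # RMSE: back in the units of the target (price)

print(mae, rmse)                # MAE is 13750.0; RMSE ≈ 14361.41
```

Averaging over every house answers the first bullet; choosing between absolute and squared differences (the second bullet) changes how strongly outliers are punished.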

In this recipe, we will cover how...

Training regression models

In supervised learning, training is the process of optimizing the parameters of a model towards a specific objective. It is typically the most complex and time-consuming step in solving a deep learning problem.

In this recipe, we will visit the basic concepts involved in training a model. We will apply them to train the regression model we defined earlier in this chapter, together with the loss functions we discussed.

We will predict house prices using the dataset seen in Recipe 1, Toy dataset for regression – load, manage, and visualize a house sales dataset, from Chapter 2, Working with MXNet and Visualizing Datasets: Gluon and DataLoader.
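As a preview of what training means in practice, the following NumPy sketch (on synthetic data, not the house sales dataset) fits a linear regression model with full-batch gradient descent on the mean squared error, recovering the known parameters; the learning rate and epoch count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 100 samples, 3 features, known linear relationship.
X = rng.normal(size=(100, 3))
true_w, true_b = np.array([2.0, -1.0, 0.5]), 3.0
y = X @ true_w + true_b

# Initialize parameters, then iterate gradient descent on the MSE loss.
w, b = np.zeros(3), 0.0
lr = 0.1                               # learning rate (illustrative)
for epoch in range(200):
    y_hat = X @ w + b                  # forward pass
    grad = y_hat - y                   # d(MSE)/d(y_hat), up to a constant
    w -= lr * (X.T @ grad) / len(y)    # gradient step for the weights
    b -= lr * grad.mean()              # gradient step for the bias

print(w.round(2), round(b, 2))
```

The loop makes the optimization iterative and objective-driven: each epoch nudges the parameters in the direction that reduces the loss.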

Getting ready

There are a number of concepts that we should get familiar with to understand this recipe. These concepts define how the training will proceed:

  • Loss function: The training process is an iterative optimization process. As the training progresses, the model...

Evaluating regression models

In the previous recipe, we learned how to choose our training hyperparameters to optimize our training. We also verified how those choices affected the training and validation losses. In this recipe, we are going to explore how those choices affect our actual evaluation in the real world. The observant reader will have noticed that we split the dataset into three different sets: training, validation, and test. However, during our training, we only used the training set and the validation set. In this recipe, we will emulate some real-world behavior of our model by running it on the unseen data, the test set.
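The three-way split mentioned above can be sketched as follows; the 80/10/10 proportions here are an illustrative assumption, not the book's prescribed split. The key point is that the test indices are held out and never touched during training or hyperparameter selection:

```python
import numpy as np

rng = np.random.default_rng(1)

num_samples = 1000
indices = rng.permutation(num_samples)   # shuffle before splitting

# Hypothetical 80/10/10 split into training, validation, and test sets.
train_end = int(0.8 * num_samples)
val_end = int(0.9 * num_samples)
train_idx = indices[:train_end]          # used to fit parameters
val_idx = indices[train_end:val_end]     # used to tune hyperparameters
test_idx = indices[val_end:]             # held out until final evaluation

print(len(train_idx), len(val_idx), len(test_idx))
```

Because the three index sets are disjoint, running the model on `test_idx` emulates its behavior on truly unseen, real-world data.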

Getting ready

When evaluating a model, we can perform qualitative evaluation and quantitative evaluation:

  • Qualitative evaluation involves selecting one or more random (or not so random, depending on what we are looking for) samples and analyzing the results, verifying whether they match our expectations.
  • Quantitative evaluation deals...