You're reading from Accelerate Model Training with PyTorch 2.X

Product type: Book
Published in: Apr 2024
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781805120100
Edition: 1st
Author: Maicon Melo Alves

Dr. Maicon Melo Alves is a senior system analyst and academic professor specializing in High Performance Computing (HPC) systems. Over the last five years, he has become interested in understanding how HPC systems are used to support Artificial Intelligence applications. To deepen this understanding, he completed an MBA in Data Science at the Pontifícia Universidade Católica of Rio de Janeiro (PUC-Rio) in 2021. He has over 25 years of experience in IT infrastructure and has worked with HPC systems at Petrobras, the Brazilian state energy company, since 2006. He obtained his D.Sc. degree in Computer Science from the Fluminense Federal University (UFF) in 2018 and has published three books as well as papers in international HPC journals.

Deconstructing the Training Process

We already know that training neural network models takes a long time to finish. Otherwise, we would not be here discussing ways to run this process faster. But which characteristics make the building process of these models so computationally heavy? Why does the training step take so long? To answer these questions, we need to understand the computational burden of the training phase.

In this chapter, we will first remember how the training phase works under the hood, and then we will understand what makes the training process so computationally heavy.

Here is what you will learn as part of this first chapter:

  • Remembering the training process
  • Understanding the computational burden of the training phase
  • Understanding the factors that influence training time

Technical requirements

You can find the complete code of the examples mentioned in this chapter in the book’s GitHub repository at https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch-2.X/blob/main.

You can run this notebook in your favorite environment, such as Google Colab or Kaggle.

Remembering the training process

Before describing the computational burden imposed by neural network training, we must remember how this process works.

Important note

This section gives a very brief introduction to the training process. If you are totally unfamiliar with this topic, you should invest some time to understand this theme before moving to the following chapters. An excellent resource for learning this topic is the book entitled Machine Learning with PyTorch and Scikit-Learn, published by Packt and written by Sebastian Raschka, Yuxi (Hayden) Liu, and Vahid Mirjalili.

Basically speaking, neural networks learn from examples, similar to a child observing an adult. The learning process relies on feeding the neural network with pairs of input and output values so that the network catches the intrinsic relation between the input and output data. Such relationships can be interpreted as the knowledge obtained by the model. So, where a human sees a bunch of data, the...
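To make this concrete, here is a minimal sketch of one full training loop in PyTorch. The model, dataset, and hyperparameter values are illustrative placeholders, not the book's examples; the point is only to show the phases the training process cycles through (forward pass, loss calculation, backward pass, and parameter optimization):

```python
import torch
from torch import nn

# Synthetic dataset: pairs of input and output values (purely illustrative).
torch.manual_seed(0)
X = torch.randn(64, 10)          # 64 samples, 10 features each
y = torch.randint(0, 2, (64,))   # binary labels

# A tiny feed-forward network, chosen only for illustration.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

losses = []
for epoch in range(5):
    optimizer.zero_grad()
    logits = model(X)            # forward phase
    loss = loss_fn(logits, y)    # loss calculation
    losses.append(loss.item())
    loss.backward()              # backward phase: compute gradients
    optimizer.step()             # optimization: update the parameters
```

Each iteration repeats this same cycle, which is why the cost of a single pass multiplies across epochs and batches.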

Understanding the computational burden of the model training phase

Now that we’ve brushed up on how the training process works, let’s understand the computational cost required to train a model. By using the terms computational cost or burden, we mean the computing power needed to execute the training process. The higher the computational cost, the higher the time taken to train the model. In the same way, the higher the computational burden, the higher the computing resources required to train the model.

Essentially, we can say the computational burden of training a model is determined by three factors, as illustrated in Figure 1.6:

Figure 1.6 – Factors that influence the training computational burden


Each one of these factors contributes (to some degree) to the computational complexity imposed by the training process. Let’s talk about each one of them.
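One of these factors, the number of parameters, is easy to inspect directly in PyTorch. The following sketch (with a made-up model, used only for illustration) counts the values the training process must learn and update; more parameters mean more memory traffic and more arithmetic per training step:

```python
import torch
from torch import nn

# A small convolutional model, used only to illustrate the idea.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)

# Parameters are the values the training process must learn and update.
num_params = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {num_params}")
```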

Hyperparameters

Hyperparameters define two aspects of neural networks...
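As a rough sketch, and assuming these two aspects are the network's architecture and the behavior of the training algorithm, hyperparameters might be grouped like this (all names and values here are illustrative, not recommendations from the book):

```python
# Typical hyperparameters, grouped by the aspect they control.
hyperparams = {
    # Aspect 1: hyperparameters that shape the network architecture.
    "hidden_layers": 2,
    "neurons_per_layer": 128,
    # Aspect 2: hyperparameters that control the training algorithm.
    "batch_size": 64,
    "learning_rate": 0.01,
    "epochs": 10,
}
```

Changing any of these values changes how much work each training step performs or how many steps are executed, which is why hyperparameters directly affect the computational burden.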

Quiz time!

Let’s review what we have learned in this chapter by answering eight questions. First, try to answer these questions without consulting the material.

Important note

The answers to all these questions are available at https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch-2.X/blob/main/quiz/chapter01-answers.md.

Before starting the quiz, remember that it is not a test at all! This section aims to complement your learning process by revising and consolidating the content covered in this chapter.

Choose the correct option for the following questions:

  1. Which phases comprise the training process?
    1. Forward, processing, optimization, and backward.
    2. Processing, pre-processing, and post-processing.
    3. Forward, loss calculation, optimization, and backward.
    4. Processing, loss calculation, optimization, and post-processing.
  2. Which factors impact the computational burden of the training process?
    1. Loss function, optimizer, and parameters.
    2. Hyperparameters...

Summary

We have reached the end of the first step of our training acceleration journey. You started this chapter by remembering how the training process works. In addition to refreshing concepts such as datasets and samples, you remembered the four phases of the training algorithm.

Next, you learned that hyperparameters, operations, and parameters are the three factors that influence the training process’s computational burden.

Now that you have remembered the training process and understood what contributes to its computational complexity, it’s time to move on to the next topic.

Let’s take our first steps to learn how to accelerate this heavy computational process!

