Training Models Faster

In the last chapter, we learned about the factors that contribute to increasing the computational burden of the training process. Those factors have a direct influence on the complexity of the training phase and, hence, on its execution time.

Now, it is time to learn how to accelerate this process. In general, we can improve performance by changing something in the software stack or increasing the number of computing resources.

In this chapter, we will start by understanding both of these options. Next, we will learn what can be modified in the application and environment layers.

Here is what you will learn as part of this chapter:

  • Understanding the approaches to accelerate the training process
  • Knowing the layers of the software stack used to train a model
  • Learning the difference between vertical and horizontal scaling
  • Understanding what can be changed in the application layer to accelerate the training process
  • Understanding what can...

Technical requirements

You can find the complete code of the examples mentioned in this chapter in the book’s GitHub repository at https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch-2.X/blob/main.

You can execute this notebook in your favorite environment, such as Google Colab or Kaggle.

What options do we have?

Once we have decided to accelerate the training process of a model, we can take two directions, as illustrated in Figure 2.1:

Figure 2.1 – Approaches to accelerating the training phase

In the first option (Modify the software stack), we go through each layer of the software stack used to train a model, seeking opportunities to improve the training process. In simpler terms, we can change the application code, install and use a specialized library, or enable a special capability of the operating system or container environment.

This first approach relies on having profound knowledge of performance tuning techniques. In addition, it demands a strong investigative mindset to identify bottlenecks and apply the most suitable solution to overcome them. Thus, this approach is about harnessing the hardware and software resources at hand to extract the maximum performance from the computing system.

Nevertheless, note that...

Modifying the application layer

The application layer is the starting point of the performance improvement journey. As we have complete control over the application code, we can change it without depending on anyone else. Thus, there is no better way to start the performance optimization process than by working independently.

What can we change in the application layer?

You may wonder how we can modify the code to improve performance. Well, we can reduce model complexity, increase the batch size to optimize memory usage, compile the model to fuse operations, and disable profiling functions to eliminate extra overhead from the training process.
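To make these ideas concrete, the snippet below is a minimal sketch of such application-level adjustments in PyTorch 2.x. The model, batch size, and training step are hypothetical placeholders chosen only for illustration; they are not the book's examples.

```python
import torch
from torch import nn

# Hypothetical, simplified model used only to illustrate the ideas above.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Increase the batch size to make better use of memory and parallelism.
batch_size = 256  # instead of, say, 32

# Compile the model (PyTorch 2.x) so the framework can fuse operations.
model = torch.compile(model)

# Disable debugging features that add overhead to every training step, and
# avoid wrapping the training loop in profilers (such as torch.profiler)
# during production runs.
torch.autograd.set_detect_anomaly(False)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# A single training step on random data, just to show the compiled model in use.
inputs = torch.randn(batch_size, 784)
targets = torch.randint(0, 10, (batch_size,))

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()
```

Note that torch.compile incurs a one-time compilation cost on the first call, so its benefit only shows up over many training iterations.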

Regardless of the changes applied to the application layer, we cannot sacrifice model accuracy in favor of performance improvement. As the primary goal of a neural network is to solve problems, it would be meaningless to accelerate the building process of a useless model. Therefore, we must pay attention to model quality...

Modifying the environment layer

The environment layer comprises the machine learning framework and all the software needed to support its execution, such as libraries, compilers, and auxiliary tools.

What can we change in the environment layer?

As we discussed before, we may not have the necessary permission to change anything in the environment layer. This restriction depends on the type of environment we use to train the model. In third-party environments, such as online notebook services, we do not have the flexibility to make advanced configurations, such as downloading, compiling, and installing a specialized library. We can upgrade a package or install a new library, but nothing beyond that.
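As an illustration of the kind of lightweight adjustment that remains possible in a managed notebook, the sketch below inspects what the environment already provides and upgrades a package from within the session. The specific package and commands are assumptions made for illustration, not a recipe prescribed by the book.

```python
import sys
import subprocess

import torch

# Inspect what the managed environment already provides.
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

# Upgrade a package from inside the notebook session, which is roughly the
# only kind of environment change such services allow. The package chosen
# here is just an example.
subprocess.run(
    [sys.executable, "-m", "pip", "install", "--upgrade", "torch"],
    check=True,
)
```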

To overcome this restriction, we commonly use containers. Containers allow us to configure anything we need to run our application without requiring the support or permission of anyone else. Obviously, we are talking about the environment layer and not about the execution...

Quiz time!

Let’s review what we have learned in this chapter by answering a few questions. First, try to answer these questions without consulting the material.

Important note

The answers to all these questions are available at https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch-2.X/blob/main/quiz/chapter02-answers.md.

Before starting the quiz, remember that it is not a test at all! This section aims to complement your learning process by revising and consolidating the content covered in this chapter.

Choose the correct answers for the following questions:

  1. After running the training process using two GPUs in a single machine, we decided to add two extra GPUs to accelerate the training process. In this case, we tried to improve the performance of the training process by applying which of the following?
    1. Horizontal scaling.
    2. Vertical scaling.
    3. Transversal scaling.
    4. Distributed scaling.
  2. The training process of a simple model is taking a...

Summary

We reached the end of the introductory part of the book. We started this chapter by learning the approaches we can take to reduce the training time. Next, we learned what kind of modifications we can perform in the application and environment layers to accelerate the training process.

We have experienced, in practice, how changing a few things in the code or environment can result in impressive performance improvements.

You are ready to move on in the performance journey! In the next chapter, you will learn how to apply one of the most exciting capabilities provided by PyTorch 2.0: model compilation.
