Adopting Mixed Precision

Scientific computing is a tool that’s used by scientists to push the limits of the known. Biology, physics, chemistry, and cosmology are examples of areas that rely on scientific computing to simulate and model the real world. In these fields of knowledge, numeric precision is paramount to yield coherent results. Since each decimal place matters in this case, scientific computing usually adopts double-precision data types to represent numbers with the highest possible precision.

However, that need for extra information comes with a price. The higher the numeric precision, the higher the computing power required to process those numbers. Besides that, higher precision also demands more memory space, increasing memory consumption.

In the face of these drawbacks, we must ask ourselves: do we need so much precision to build our models? Usually, we do not! In this sense, we can reduce the numeric precision for a few operations, thus boosting the...

Technical requirements

You can find the complete code for the examples mentioned in this chapter in this book’s GitHub repository at https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch-2.X/blob/main.

You can run this code in your favorite environment, such as Google Colab or Kaggle.

Remembering numeric precision

Before diving into the benefits of adopting a mixed precision strategy, it is essential to ground you in numeric representation and common data types. Let's start by recalling how computers represent numbers.

How do computers represent numbers?

A computer is a machine – endowed with finite resources – that’s designed to work on bits, the smallest unit of information it can manage. As numbers are infinite, computer designers had to put a lot of effort into finding a solution to represent this theoretical concept in a real machine.

To get the work done, computer designers needed to deal with three key factors regarding numeric representation:

  • Sign: Whether the number is positive or negative
  • Range: The interval of the represented numbers
  • Precision: The number of decimal places

Considering these elements, computer architects successfully defined numeric data types to represent not only integer...
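The same three factors show up in the data types PyTorch exposes. As a minimal sketch (mine, not code from the book), torch.finfo and torch.iinfo report the bit width, representable range, and precision of the most common formats:

```python
import torch

# Inspect the bit width, representable range, and precision of common
# PyTorch data types. For floating-point formats, eps (machine epsilon)
# reflects how finely the format can resolve values.
for dtype in (torch.float64, torch.float32, torch.float16, torch.bfloat16):
    info = torch.finfo(dtype)
    print(f"{str(dtype):16} bits={info.bits:2d} max={info.max:.3e} eps={info.eps:.3e}")

# Integer formats have no fractional precision, only a sign and a range.
for dtype in (torch.int32, torch.int8):
    info = torch.iinfo(dtype)
    print(f"{str(dtype):16} bits={info.bits:2d} min={info.min} max={info.max}")
```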

Understanding the mixed precision strategy

The benefits of using lower-precision formats are crystal clear. Besides saving memory, the computing power required to handle data with lower precision is less than that needed to process numbers with higher precision.
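As a quick illustration of the memory side of this claim (a sketch of mine, not the book's code), we can compare the storage footprint of the same tensor held in FP32 and FP16:

```python
import torch

# The same 1,024 x 1,024 tensor stored in FP32 (4 bytes per element)
# and in FP16 (2 bytes per element): half the precision, half the memory.
x_fp32 = torch.randn(1024, 1024, dtype=torch.float32)
x_fp16 = x_fp32.to(torch.float16)

bytes_fp32 = x_fp32.element_size() * x_fp32.nelement()
bytes_fp16 = x_fp16.element_size() * x_fp16.nelement()
print(f"FP32: {bytes_fp32 / 1024**2:.1f} MiB")
print(f"FP16: {bytes_fp16 / 1024**2:.1f} MiB")
```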

One approach to accelerating the training process of machine learning models is to employ a mixed precision strategy. Along the lines of Chapter 6, Simplifying the Model, we will understand this strategy by asking (and answering, of course) a couple of simple questions about this approach.

Note

When searching for information about reducing the precision of deep learning models, you may come across the term model quantization. Although the terms are related, the goal of mixed precision is quite different from that of model quantization. The former intends to accelerate the training process by employing reduced numeric precision formats. The latter focuses on reducing the complexity of trained models to use in the...

Enabling AMP

Fortunately, PyTorch provides methods and tools to perform AMP by changing just a few things in our original code.

In PyTorch, AMP relies on enabling a couple of flags, wrapping the training process with the torch.autocast object, and using a gradient scaler. The more complex case, implementing AMP on the GPU, requires all three parts, while the simplest scenario (CPU-based training) requires only torch.autocast.
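For the CPU-only case, a minimal sketch looks like the following; the model, loss_fn, optimizer, and dataloader objects are assumed to be defined elsewhere and are not taken from the book:

```python
import torch

# CPU-based AMP sketch: only the forward pass and the loss computation run
# under autocast. bfloat16 is the usual reduced-precision type on CPU.
for inputs, targets in dataloader:                       # assumed to exist
    optimizer.zero_grad()
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        outputs = model(inputs)                          # assumed model
        loss = loss_fn(outputs, targets)                 # assumed loss function
    loss.backward()
    optimizer.step()
```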

Let’s start by covering the more complex scenario. So, follow me to the next section to learn how to activate this approach in our GPU-based code.

Activating AMP on GPU

To activate AMP on GPU, we need to make three modifications to our code:

  1. Enable the CUDA and cuDNN backend flags.
  2. Wrap the training loop with torch.autocast.
  3. Use a gradient scaler.

Let’s take a closer look.
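Before walking through each step, here is a minimal sketch of how the three pieces fit together in a GPU training loop. The model, loss_fn, optimizer, and dataloader objects are assumed to exist, and the backend flags shown are common examples rather than necessarily the exact set used in the book:

```python
import torch

# 1. Backend flags: allow TF32 math in cuBLAS matrix multiplications
#    and in cuDNN convolutions (example flags, not necessarily the book's set).
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

# 3. Gradient scaler, used to avoid FP16 gradient underflow.
scaler = torch.cuda.amp.GradScaler()

for inputs, targets in dataloader:                       # assumed to exist
    inputs, targets = inputs.cuda(), targets.cuda()
    optimizer.zero_grad()

    # 2. Run the forward pass and loss computation under autocast.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        outputs = model(inputs)
        loss = loss_fn(outputs, targets)

    # Scale the loss before backpropagation, then step and update the scaler.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

The gradient scaler multiplies the loss before backpropagation so that small gradient values do not underflow in FP16, and it unscales the gradients before the optimizer step.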

Enabling backend flags

As we learned in Chapter 4, Using Specialized Libraries, PyTorch relies on third...
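As a hedged example (this excerpt does not show which flags the book enables), flags of this kind live under the torch.backends namespace:

```python
import torch

# Commonly used backend flags; not necessarily the book's exact set.
torch.backends.cudnn.benchmark = True            # let cuDNN auto-tune convolution algorithms
torch.backends.cuda.matmul.allow_tf32 = True     # TF32 for cuBLAS matrix multiplications
torch.backends.cudnn.allow_tf32 = True           # TF32 for cuDNN convolutions
```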

Quiz time!

Let’s review what we have learned in this chapter by answering a few questions. Initially, try to answer these questions without consulting the material.

Note

The answers to all these questions are available at https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch-2.X/blob/main/quiz/chapter07-answers.md.

Before starting the quiz, remember that this is not a test! This section aims to complement your learning process by revising and consolidating the content covered in this chapter.

Choose the correct option for the following questions:

  1. Which of the following numeric formats represents integers by using only 8 bits?
    1. FP8.
    2. INT32.
    3. INT8.
    4. INTFB8.
  2. FP16 is a numeric representation that uses 16 bits to represent floating-point numbers. What is this numeric format also known as?
    1. Half-precision floating-point representation.
    2. Single-precision floating-point representation.
    3. Double-precision floating-point representation.
    4. One quarter-precision...

Summary

In this chapter, you learned that adopting a mixed-precision approach can accelerate the training process of our models.

Although it is possible to implement the mixed precision strategy by hand, it is preferable to rely on the AMP solution provided by PyTorch since it is an elegant and seamless process designed to avoid errors involving numeric representation. When this kind of error occurs, it is very hard to identify and solve.

Implementing AMP on PyTorch requires adding a few extra lines to the original code. Essentially, we must wrap the training loop with the AMP engine, enable four flags related to backend libraries, and instantiate a gradient scaler.

Depending on the GPU architecture, library version, and the model itself, we can significantly improve the performance of the training process.

This chapter closes the second part of this book. Next, in the third and last part, we will learn how to spread the training process...

You have been reading a chapter from Accelerate Model Training with PyTorch 2.X (Packt, April 2024, ISBN-13: 9781805120100).

Author

Maicon Melo Alves

Dr. Maicon Melo Alves is a senior system analyst and academic professor specializing in High Performance Computing (HPC) systems. Over the last five years, he has become interested in understanding how HPC systems are used to support Artificial Intelligence applications. To deepen this understanding, in 2021 he completed an MBA in Data Science at the Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio). He has over 25 years of experience in IT infrastructure and, since 2006, has worked with HPC systems at Petrobras, the Brazilian state energy company. He obtained his D.Sc. degree in Computer Science from the Fluminense Federal University (UFF) in 2018 and has published three books as well as papers in international HPC journals.