You're reading from Accelerate Model Training with PyTorch 2.X

Product type: Book
Published in: Apr 2024
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781805120100
Edition: 1st
Author: Maicon Melo Alves

Dr. Maicon Melo Alves is a senior system analyst and academic professor specializing in High Performance Computing (HPC) systems. Over the last five years, he has become interested in how HPC systems are used to support Artificial Intelligence applications. To deepen his understanding of this topic, in 2021 he completed an MBA in Data Science at the Pontifícia Universidade Católica of Rio de Janeiro (PUC-Rio). He has over 25 years of experience in IT infrastructure and has worked with HPC systems at Petrobras, the Brazilian state energy company, since 2006. He obtained his D.Sc. in Computer Science from the Fluminense Federal University (UFF) in 2018 and has published three books as well as papers in international HPC journals.

Enabling AMP

Fortunately, PyTorch provides methods and tools to perform AMP by changing just a few things in our original code.

In PyTorch, AMP involves three ingredients: enabling a couple of backend flags, wrapping the training step with the torch.autocast context manager, and using a gradient scaler. The more complex case, implementing AMP on GPU, requires all three parts, whereas the simplest scenario (CPU-based training) requires only torch.autocast.
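To see why the CPU case is so simple, here is a minimal sketch of a CPU training step wrapped in torch.autocast (the model, data, and loss function are made up for illustration). On CPU, autocast uses bfloat16, whose numeric range matches float32, so no gradient scaler is needed:

```python
import torch

# Toy model and batch, for illustration only
model = torch.nn.Linear(128, 10)
loss_fn = torch.nn.CrossEntropyLoss()
x = torch.randn(32, 128)
y = torch.randint(0, 10, (32,))

# On CPU, wrapping the forward pass in autocast is all AMP requires;
# eligible ops (such as the linear layer) run in bfloat16 automatically.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    output = model(x)
    loss = loss_fn(output, y)

# The backward pass runs outside the autocast region, as usual
loss.backward()
```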

Let’s start by covering the more complex scenario. The next section shows how to activate this approach in our GPU-based code.

Activating AMP on GPU

To activate AMP on GPU, we need to make three modifications to our code:

  1. Enable the CUDA and cuDNN backend flags.
  2. Wrap the training loop with torch.autocast.
  3. Use a gradient scaler.

Let’s take a closer look.
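Before examining each step in detail, the three modifications can be sketched together in a single training loop. This is an illustrative example, not the chapter's own code: the model and data are made up, the TF32-related backend flags are one common choice for step 1, and the loop is guarded so it also runs (with AMP disabled) on a machine without a GPU:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"

# Step 1: enable backend flags (TF32 shown here as an illustrative choice)
torch.backends.cudnn.allow_tf32 = True
torch.backends.cuda.matmul.allow_tf32 = True

# Toy model and batch, for illustration only
model = torch.nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()
x = torch.randn(32, 128, device=device)
y = torch.randint(0, 10, (32,), device=device)

# Step 3: create the gradient scaler (a no-op when AMP is disabled)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

for _ in range(3):
    optimizer.zero_grad()
    # Step 2: run the forward pass and loss under autocast
    with torch.autocast(device_type=device, enabled=use_amp):
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()  # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)         # unscale gradients, then step the optimizer
    scaler.update()                # adjust the scale factor for the next iteration
```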

Enabling backend flags

As we learned in Chapter 4, Using Specialized Libraries, PyTorch relies on third...
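As a sketch of step 1 above, backend flags of this kind are set through simple attribute assignments. The TF32-related flags shown here are an assumption for illustration; the specific flags the chapter enables may differ:

```python
import torch

# Allow TensorFloat-32 in cuDNN convolutions and CUDA matrix multiplications.
# These are global flags; they only change behavior on Ampere or newer GPUs.
torch.backends.cudnn.allow_tf32 = True
torch.backends.cuda.matmul.allow_tf32 = True
```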
