You're reading from Accelerate Model Training with PyTorch 2.X

Product type: Book
Published in: Apr 2024
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781805120100
Edition: 1st
Author (1)

Maicon Melo Alves

Dr. Maicon Melo Alves is a senior system analyst and academic professor specializing in High Performance Computing (HPC) systems. Over the last five years, he has become interested in understanding how HPC systems are used to leverage Artificial Intelligence applications. To better understand this topic, he completed an MBA in Data Science at the Pontifícia Universidade Católica of Rio de Janeiro (PUC-RIO) in 2021. He has over 25 years of experience in IT infrastructure and, since 2006, has worked with HPC systems at Petrobras, the Brazilian state energy company. He obtained his D.Sc. degree in Computer Science from the Fluminense Federal University (UFF) in 2018 and has published three books as well as papers in international HPC journals.

Training with Multiple GPUs

Undoubtedly, the computing power provided by GPUs is one of the factors responsible for boosting the deep learning area. If a single GPU device can accelerate the training process so dramatically, imagine what we can do with a multi-GPU environment.

In this chapter, we will show you how to use multiple GPUs to accelerate the training process. Before describing the code and launching procedure, we will dive into the characteristics and nuances of the multi-GPU environment.

Here is what you will learn as part of this chapter:

  • The fundamentals of a multi-GPU environment
  • How to distribute the training process among multiple GPUs
  • NCCL, the default backend for distributed training on NVIDIA GPUs

Technical requirements

You can find the complete code mentioned in this chapter in this book’s GitHub repository at https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch-2.X/blob/main.

You can execute this code in your favorite environment, such as Google Colab or Kaggle.

Demystifying the multi-GPU environment

A multi-GPU environment is a computing system with more than one GPU device. Although multiple interconnected machines with just one GPU can be considered a multi-GPU environment, we usually employ this term to describe environments with two or more GPUs per machine.

To understand how this environment works under the hood, we need to learn how the devices are connected and which technologies are adopted to provide efficient communication across multiple GPUs.
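As a starting point, the following minimal sketch (an illustrative snippet, not taken from the book's repository) shows how to check how many GPUs PyTorch can see on a machine and how to inspect the interconnection topology reported by the NVIDIA driver:

import subprocess

import torch

# Number of GPU devices visible to PyTorch on this machine
print(f"Visible GPUs: {torch.cuda.device_count()}")
for i in range(torch.cuda.device_count()):
    print(f"  cuda:{i} -> {torch.cuda.get_device_name(i)}")

# The NVIDIA driver reports how the devices are interconnected
# (PCI Express, NVLink, NVSwitch); requires the nvidia-smi tool
print(subprocess.run(["nvidia-smi", "topo", "-m"],
                     capture_output=True, text=True).stdout)

The topology matrix printed by nvidia-smi indicates, for each pair of GPUs, which interconnection technology links them.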

However, before we dive into these topics, we will answer a disquieting question that has probably come to your mind: will we have access to an expensive environment like that? Yes, we will. But first, let’s briefly discuss the increasing popularity of multi-GPU environments.

The popularity of multi-GPU environments

Going back 10 years, it was inconceivable to think of a machine with more than one GPU. Besides the high cost of this device, the applicability of...

Implementing distributed training on multiple GPUs

In this section, we’ll show you how to implement and run distributed training on multiple GPUs using NCCL, the de facto communication backend for NVIDIA GPUs. We’ll start by providing a brief overview of NCCL, after which we will learn how to code and launch distributed training in a multi-GPU environment.

The NCCL communication backend

NCCL stands for NVIDIA Collective Communications Library. As its name suggests, NCCL is a library that provides optimized collective operations for NVIDIA GPUs. Therefore, we can use NCCL to execute collective routines such as broadcast, reduce, and the so-called all-reduce operation. Roughly speaking, NCCL plays the same role for NVIDIA GPUs as oneCCL does for Intel CPUs.
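To make the idea of a collective routine concrete, here is a minimal sketch (an illustrative snippet, not the book's example code) in which each process drives one GPU and all processes participate in an all-reduce over the NCCL backend:

import os

import torch
import torch.distributed as dist

# torchrun sets LOCAL_RANK for each process it spawns
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# Every rank contributes its own value; after all_reduce,
# every rank holds the sum of all contributions
tensor = torch.full((1,), float(dist.get_rank()), device=f"cuda:{local_rank}")
dist.all_reduce(tensor, op=dist.ReduceOp.SUM)
print(f"rank {dist.get_rank()}: {tensor.item()}")

dist.destroy_process_group()

Launched with torchrun --nproc_per_node=2 on a machine with two GPUs (the script name is up to you), both ranks would print 1.0, that is, 0 + 1.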

PyTorch supports NCCL natively, which means that the default installation of PyTorch for NVIDIA GPUs already comes with a built-in NCCL version. NCCL works on single or multiple machines and supports the usage of high-performance...
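As a rough preview of the kind of code discussed later in this section, the following sketch wraps a placeholder model with DistributedDataParallel on top of the NCCL backend; the model, data, and hyperparameters here are invented for illustration only:

import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")      # NCCL backend for NVIDIA GPUs
local_rank = int(os.environ["LOCAL_RANK"])   # provided by torchrun
torch.cuda.set_device(local_rank)
device = f"cuda:{local_rank}"

model = torch.nn.Linear(128, 10).to(device)      # placeholder model
model = DDP(model, device_ids=[local_rank])
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):                          # placeholder training loop
    inputs = torch.randn(32, 128, device=device)
    targets = torch.randint(0, 10, (32,), device=device)
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()   # gradients are averaged across GPUs via NCCL all-reduce
    optimizer.step()

dist.destroy_process_group()

Assuming the script above is saved as train_ddp.py (a hypothetical file name), it could be launched on a machine with 8 GPUs as follows:

torchrun --nproc_per_node=8 train_ddp.py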

Quiz time!

Let’s review what we have learned in this chapter by answering a few questions. Initially, try to answer these questions without consulting the material.

Note

The answers to all these questions are available at https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch-2.X/blob/main/quiz/chapter10-answers.md.

Before starting the quiz, remember that this is not a test! This section aims to complement your learning process by revising and consolidating the content covered in this chapter.

Choose the correct option for the following questions.

  1. Which are the three main types of GPU interconnection technologies?
    1. PCI Express, NCCL, and GPU-Link.
    2. PCI Express, NVLink, and NVSwitch.
    3. PCI Express, NCCL, and GPU-Switch.
    4. PCI Express, NVML, and NVLink.
  2. NVLink is a proprietary interconnection technology that allows you to do which of the following?
    1. Connect the GPU to the CPU.
    2. Connect the GPU to the main memory.
    3. Connect pairs of GPUs directly to each...

Summary

In this chapter, we learned how to distribute the training process across multiple GPUs by using NCCL, the optimized NVIDIA library for collective communication.

We started this chapter by understanding how a multi-GPU environment employs distinct technologies to interconnect devices. Depending on the technology and interconnection topology, the communication between devices can slow down the entire distributed training process.

After being introduced to the multi-GPU environment, we learned how to code and launch distributed training on multiple GPUs by using NCCL as the communication backend and torchrun as the launch provider.

The experimental evaluation of our multi-GPU implementation showed that distributed training with 8 GPUs was 6.5 times faster than running with a single GPU; this is an impressive performance improvement. We also learned that model accuracy can be affected by performing distributed training on multiple GPUs, so we must take it into account...
