
You're reading from Accelerate Model Training with PyTorch 2.X

Product type: Book
Published in: Apr 2024
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781805120100
Edition: 1st Edition
Author: Maicon Melo Alves

Dr. Maicon Melo Alves is a senior systems analyst and academic professor specializing in High Performance Computing (HPC) systems. Over the last five years, he has focused on understanding how HPC systems are used to power Artificial Intelligence applications. To deepen his knowledge of this topic, in 2021 he completed an MBA in Data Science at the Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio). He has over 25 years of experience in IT infrastructure and, since 2006, has worked with HPC systems at Petrobras, the Brazilian state energy company. He obtained his D.Sc. degree in Computer Science from the Fluminense Federal University (UFF) in 2018 and has published three books as well as papers in international HPC journals.

Implementing distributed training on multiple GPUs

In this section, we’ll show you how to implement and run distributed training on multiple GPUs using NCCL, the de facto communication backend for NVIDIA GPUs. We’ll start by providing a brief overview of NCCL, after which we will learn how to code and launch distributed training in a multi-GPU environment.

The NCCL communication backend

NCCL stands for NVIDIA Collective Communications Library. As its name suggests, NCCL is a library that provides optimized collective communication operations for NVIDIA GPUs, so we can use it to execute collective routines such as broadcast, reduce, and the so-called all-reduce operation. Roughly speaking, NCCL plays the same role for NVIDIA GPUs that oneCCL plays for Intel CPUs.
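
To make these collectives concrete, here is a minimal sketch, assuming one process per GPU launched with torchrun (which sets the LOCAL_RANK, RANK, and WORLD_SIZE environment variables for each process), that runs an all-reduce and a broadcast over the NCCL backend; the tensor contents are purely illustrative:

```python
# Minimal sketch: collective operations over the NCCL backend.
# Assumes one process per GPU, launched with torchrun (illustrative values).
import os
import torch
import torch.distributed as dist

def main():
    # torchrun sets LOCAL_RANK (and RANK/WORLD_SIZE) for each process.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Initialize the process group with NCCL, the backend for NVIDIA GPUs.
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()

    # All-reduce: every process contributes its rank; afterwards, every
    # process holds the sum of all ranks.
    tensor = torch.tensor([float(rank)], device="cuda")
    dist.all_reduce(tensor, op=dist.ReduceOp.SUM)

    # Broadcast: copy rank 0's buffer to all other processes.
    dist.broadcast(tensor, src=0)
    print(f"rank {rank}: {tensor.item()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Note that passing backend="nccl" to dist.init_process_group is all that is needed to select NCCL; the collectives themselves are backend-agnostic calls from torch.distributed.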

PyTorch supports NCCL natively, which means that the default installation of PyTorch for NVIDIA GPUs already comes with a built-in NCCL version. NCCL works on single or multiple machines and supports the usage of high-performance...
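
Because PyTorch ships with NCCL, selecting it is a one-line choice when the process group is initialized. The following is a minimal sketch, our own illustration rather than the chapter's code, of how multi-GPU training is typically wired up: each process owns one GPU, wraps the model in DistributedDataParallel, and lets DDP all-reduce the gradients over NCCL during the backward pass. The model, data, and hyperparameters are placeholders:

```python
# Minimal sketch: multi-GPU training with DistributedDataParallel over NCCL.
# The model, data, and hyperparameters are illustrative placeholders.
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun assigns one process per GPU and sets LOCAL_RANK accordingly.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")

    # Each process holds its own replica of the model on its own GPU.
    model = nn.Linear(32, 2).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(10):
        inputs = torch.randn(16, 32, device="cuda")
        labels = torch.randint(0, 2, (16,), device="cuda")
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), labels)
        # DDP all-reduces the gradients across all GPUs during backward().
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

On a machine with four GPUs, for instance, such a script would be launched with torchrun --nproc_per_node=4 train.py (the file name is hypothetical); torchrun spawns one process per GPU and supplies the environment variables the script reads.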
