Training with Multiple Machines

We’ve finally arrived at the last mile of our performance improvement journey. In this final stage, we will broaden our horizons and learn how to distribute the training process across multiple machines or servers. So, instead of using four or eight devices, we can use dozens or hundreds of computing resources to train our models.

An environment comprised of multiple connected servers is usually called a computing cluster or simply a cluster. Such environments are shared among multiple users and have technical particularities, such as a high-bandwidth, low-latency network.

In this chapter, we’ll describe the characteristics of computing clusters that are more relevant to the distributed training process. After that, we will learn how to distribute the training process among multiple machines using Open MPI as the launcher and NCCL as the communication backend.

Here is what you will learn as part of this chapter:

  • The most...

Technical requirements

You can find the complete code for the examples covered in this chapter in the book’s GitHub repository at https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch-2.X/blob/main.

You can execute this notebook in your favorite environment, such as Google Colab or Kaggle.

What is a computing cluster?

A computing cluster is a system of powerful servers interconnected by a high-performance network, as shown in Figure 11.1. This environment can be provisioned on-premises or in the cloud:

Figure 11.1 – A computing cluster

The computing power provided by these machines is combined to solve complex problems or to execute highly intensive computing tasks. A computing cluster is also known as a high-performance computing (HPC) system.

Each server has powerful computing resources such as multiple CPUs and GPUs, fast memory devices, ultra-fast disks, and special network adapters. Moreover, a computing cluster often has a parallel filesystem, which provides high I/O transfer rates.

Although the term is not formally defined, we conventionally use “cluster” to refer to environments comprising at least four machines. Some computing clusters have half a dozen machines, while others have more than two or three...

Implementing distributed training on multiple machines

This section shows how to implement and run distributed training on multiple machines, using Open MPI as the launch provider and NCCL as the communication backend. Before diving into the details, the sketch below outlines the overall structure of such a training script.
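The following is a minimal sketch only, not the complete example from the book’s repository: the model, the script name (train_ddp.py), and the machine names in the launch command are placeholders. It assumes Open MPI’s default OMPI_COMM_WORLD_* environment variables and that MASTER_ADDR and MASTER_PORT are exported through mpirun:

# Minimal sketch: PyTorch DDP launched by Open MPI and communicating via NCCL.
# Assumed (placeholder) launch command, 8 GPUs per machine on two machines:
#   mpirun -np 16 -H machine1:8,machine2:8 \
#          -x MASTER_ADDR=machine1 -x MASTER_PORT=29500 \
#          python train_ddp.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Open MPI describes the process layout through these environment variables
    rank = int(os.environ["OMPI_COMM_WORLD_RANK"])
    world_size = int(os.environ["OMPI_COMM_WORLD_SIZE"])
    local_rank = int(os.environ["OMPI_COMM_WORLD_LOCAL_RANK"])

    # Each process drives one GPU; NCCL performs the inter-GPU communication
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl", init_method="env://",
                            rank=rank, world_size=world_size)

    model = torch.nn.Linear(128, 10).to(local_rank)  # placeholder model
    ddp_model = DDP(model, device_ids=[local_rank])

    # ... dataset, DistributedSampler, DataLoader, and training loop go here ...

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

With this overall picture in mind, let’s start by introducing Open MPI.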

Getting introduced to Open MPI

MPI stands for Message Passing Interface, a standard that specifies a set of communication routines, data types, events, and operations used to implement distributed-memory applications. MPI is so relevant to the HPC industry that it is governed and maintained by a forum of distinguished scientists, researchers, and professionals from around the globe.
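To make the programming model concrete, here is a tiny, self-contained sketch written with the mpi4py binding. Note that mpi4py is used here purely for illustration and is an assumption on our part; in this chapter, we rely on Open MPI only to launch the training processes rather than writing MPI code ourselves:

# Illustrative only (assumes the mpi4py package is installed): every process
# contributes a value and all of them receive the reduced result.
# Hypothetical launch: mpirun -np 4 python mpi_hello.py
from mpi4py import MPI

comm = MPI.COMM_WORLD      # communicator grouping all launched processes
rank = comm.Get_rank()     # unique ID of this process within the communicator
size = comm.Get_size()     # total number of processes

# A collective operation: sum the ranks across all processes
total = comm.allreduce(rank, op=MPI.SUM)
print(f"Process {rank} of {size}: sum of ranks = {total}")

Collective operations such as this allreduce are precisely what a communication backend performs during distributed training (for example, to synchronize gradients), which is why the MPI standard is so relevant in this context.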

Note

You can find more information about MPI at this link: https://www.mpi-forum.org/

Therefore, strictly speaking, MPI is not software; it is a standard specification that can be used to implement software, a tool, or a library. Like non-proprietary programming languages such as C and Python, MPI also has many...

Quiz time!

Let’s review what we have learned in this chapter by answering a few questions. First, try to answer these questions without consulting the material.

Note

The answers to all these questions are available at https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch-2.X/blob/main/quiz/chapter11-answers.md.

Before starting the quiz, remember that it is not a test at all! This section aims to complement your learning process by revising and consolidating the content covered in this chapter.

Choose the correct option for the following questions:

  1. What is a task submitted to a computing cluster called?
    a. Thread.
    b. Process.
    c. Job.
    d. Work.
  2. What are the main tasks executed by a workload manager?
    a. Resource management and job scheduling.
    b. Memory allocation and thread scheduling.
    c. GPU management and node scheduling.
    d. Resource management and node scheduling.
  3. Which of the following is an open source, fault-tolerant, and highly scalable workload manager for...

Summary

In this chapter, we learned how to distribute the training process across multiple GPUs located on multiple machines. We used Open MPI as the launch provider and NCCL as the communication backend.

We decided to use Open MPI as the launcher because it provides an easy and elegant way to create distributed processes on remote machines. Although Open MPI can also be employed as the communication backend, it is preferable to adopt NCCL since it has the most optimized implementation of collective operations for NVIDIA GPUs.

Results showed that distributed training with 16 GPUs across two machines was 70% faster than running with 8 GPUs on a single machine. The model accuracy decreased from 68.82% to 63.73%, which is expected since we doubled the number of model replicas in the distributed training process.

This chapter ends our journey about learning how to accelerate the training process with PyTorch. More than knowing how to apply techniques and methods to speed...
