Compiling the Model

Paraphrasing a famous presenter: “It’s time!” After completing our initial steps toward performance improvement, it is time to learn about a new capability of PyTorch 2.0 that accelerates the training and inference of deep learning models.

We are talking about the Compile API, which was introduced in PyTorch 2.0 as one of the most exciting capabilities of that release. In this chapter, we will learn how to use this API to compile a model and thus accelerate the execution of its training phase.

Here is what you will learn as part of this chapter:

  • The benefits of graph mode over eager mode
  • How to use the API to compile a model
  • The components, workflow, and backends used by the API

Technical requirements

You can find the complete code for the examples mentioned in this chapter in this book’s GitHub repository at https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch-2.X/blob/main.

You can use your favorite environment to execute this notebook, such as Google Colab or Kaggle.

What do you mean by compiling?

As a programmer, you will immediately associate the term “compiling” with the process of building a program or application from source code. Although the complete build process comprises additional phases, such as generating assembly code and linking it against libraries and other objects, it is reasonable to think that way. However, at first glance, it may be a bit confusing to talk about compiling in the context of this book since we are dealing with Python. After all, Python is not a compiled language; it is an interpreted language, and thus, no compiling is involved.

Note

It is important to clarify that Python uses compiled functions for performance purposes, though it is primarily an interpreted language.

That said, what is the meaning of compiling a model? Before answering this question, we must understand the two execution modes of machine learning frameworks. Follow me to the next section.

Execution modes

...

Using the Compile API

We will start learning the basic usage of the Compile API by applying it to our well-known CNN model and Fashion-MNIST dataset. After that, we will accelerate a heavier model that’s used to classify images from the CIFAR-10 dataset.

Basic usage

Instead of describing the API’s components and explaining a bunch of optional parameters, let’s dive into a simple example to show the basic usage of this capability. The following piece of code uses the Compile API to compile the CNN model presented in previous chapters:

model = CNN()
graph_model = torch.compile(model)

Note

The complete code shown in this section is available at https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch-2.X/blob/main/code/chapter03/cnn-graph_mode.ipynb.

To compile a model, we need to call a function named compile, passing the model as a parameter. Nothing else is necessary for the basic usage of this API. compile returns an object that...
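
To make this concrete, here is a minimal sketch of how the compiled model slots into an ordinary training loop. It assumes the CNN class and the Fashion-MNIST train_loader are already defined, as in the previous chapters, and the hyperparameters are illustrative only:

import torch
import torch.nn as nn

model = CNN()
graph_model = torch.compile(model)  # default backend and mode

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(graph_model.parameters(), lr=0.01)

graph_model.train()
for inputs, labels in train_loader:
    optimizer.zero_grad()
    outputs = graph_model(inputs)  # the first call triggers compilation
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()

Note that the compiled object wraps the original model and shares its parameters, so the optimizer can be created from either one.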

How does the Compile API work under the hood?

The Compile API is exactly what its name suggests: it is an entry point to a set of functionalities that PyTorch provides to move from eager to graph execution mode. Besides intermediary components and processes, there is also the compiler, the component responsible for getting the final work done. Half a dozen compilers are available, each one specialized in generating optimized code for a given architecture or device.
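
If you are curious about which compilers are available in your own PyTorch installation, the short sketch below is one way to inspect them and to select one explicitly. The list_backends helper lives in the torch._dynamo module, and CNN is again assumed from the previous chapters; "inductor" is the default backend:

import torch
import torch._dynamo as dynamo

# Shows the compiler backends registered in this PyTorch installation
print(dynamo.list_backends())

# A backend can also be selected explicitly ("inductor" is the default)
model = CNN()
graph_model = torch.compile(model, backend="inductor")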

The following sections describe the steps that are involved in the compiling process and the components that make all this possible.

Compiling workflow and components

At this point, we can imagine that the compiling process is much more complex than calling a single line in our code. To transform an eager model into a compiled model, the Compile API executes three steps, namely graph acquisition, graph lowering, and graph compilation, as depicted in Figure 3.8:

Figure 3.8 – Compiling workflow
...
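
As a side note, one way to peek at the graph acquisition step is to pass a custom backend to torch.compile: the backend function receives the FX graph captured by TorchDynamo before any lowering or compilation happens. The sketch below only prints the captured graph and then runs it unmodified; the input shape assumes the Fashion-MNIST CNN from the previous chapters:

import torch
from typing import List

def inspect_backend(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):
    # gm holds the graph captured during the graph acquisition step
    gm.graph.print_tabular()
    # Returning the original forward skips lowering and compilation in this sketch
    return gm.forward

model = CNN()
debug_model = torch.compile(model, backend=inspect_backend)
debug_model(torch.randn(32, 1, 28, 28))  # the first call triggers graph capture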

Quiz time!

Let’s review what we have learned in this chapter by answering a few questions. Initially, try to answer these questions without consulting the material.

Note

The answers to all these questions are available at https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch-2.X/blob/main/quiz/chapter03-answers.md.

Before starting this quiz, remember that this is not a test! This section aims to complement your learning process by revising and consolidating the content covered in this chapter.

Choose the correct options for the following questions:

  1. What are the two execution modes of PyTorch?
    a. Horizontal and vertical modes.
    b. Eager and graph modes.
    c. Eager and distributed modes.
    d. Eager and auto modes.
  2. In which execution mode does PyTorch execute operations as soon as they appear in the code?
    a. Graph mode.
    b. Eager mode.
    c. Distributed mode.
    d. Auto mode.
  3. In which execution mode does PyTorch evaluate the complete set of operations seeking optimization...

Summary

In this chapter, you learned about the Compile API, a novel capability launched in PyTorch 2.0 that is used to compile a model, that is, to change its execution mode from eager to graph mode. Models that execute in graph mode tend to train faster, especially on certain hardware platforms. To use the Compile API, we just need to add a single line to our original code, making it a simple yet powerful technique to accelerate the training process of our models.

In the following chapter, you will learn how to install and configure specialized libraries such as OpenMP and IPEX to speed up the training process of our models.
