You're reading from Accelerate Model Training with PyTorch 2.X

Product type: Book
Published in: Apr 2024
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781805120100
Edition: 1st
Author: Maicon Melo Alves
Dr. Maicon Melo Alves is a senior system analyst and academic professor specializing in High Performance Computing (HPC) systems. In the last five years, he has become interested in understanding how HPC systems are used to leverage Artificial Intelligence applications. To deepen this understanding, in 2021 he completed an MBA in Data Science at the Pontifícia Universidade Católica of Rio de Janeiro (PUC-RIO). He has over 25 years of experience in IT infrastructure and, since 2006, has worked with HPC systems at Petrobras, the Brazilian state energy company. He obtained his D.Sc. degree in Computer Science from the Fluminense Federal University (UFF) in 2018 and has published three books as well as papers in international HPC journals.

Using Specialized Libraries

Nobody needs to do everything by themselves. Neither does PyTorch! We already know that PyTorch is one of the most powerful frameworks for building deep learning models. However, as the model-building process involves many other tasks, PyTorch relies on specialized libraries and tools to get the job done.

In this chapter, we will learn how to install, use, and configure libraries to optimize CPU-based training and multithreading.

More important than the technical nuances presented in this chapter is the message it brings: we can improve performance by using and configuring third-party libraries specialized in tasks that PyTorch relies on. In this sense, we can search for many other options beyond the ones described in this book.

Here is what you will learn as part of this chapter:

  • Understanding the concept of multithreading with OpenMP
  • Learning how to use and configure OpenMP
  • Understanding IPEX – an API for...

Technical requirements

You can find the complete code of the examples mentioned in this chapter in the book’s GitHub repository at https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch-2.X/blob/main.

You can execute this notebook in your favorite environment, such as Google Colab or Kaggle.

Multithreading with OpenMP

OpenMP is a library for parallelizing tasks that harnesses the full power of multicore processors through the multithreading technique. In the context of PyTorch, OpenMP is employed to parallelize operations executed in the training phase and to accelerate preprocessing tasks such as data augmentation and normalization.

As multithreading is a key concept here, to see how OpenMP works, follow me to the next section to understand this technique.
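In practice, the number of OpenMP threads that PyTorch's intra-op parallelism uses can be controlled with the `OMP_NUM_THREADS` environment variable or, at runtime, with `torch.set_num_threads`. The following is a minimal sketch; the fallback branch merely echoes the environment variable when PyTorch is not installed:

```python
import os

# OpenMP reads OMP_NUM_THREADS when its runtime starts, so it must be
# set before the library that uses it (here, PyTorch) is initialized.
os.environ["OMP_NUM_THREADS"] = "4"

try:
    import torch
    torch.set_num_threads(4)  # intra-op parallelism (the OpenMP pool)
    print("intra-op threads:", torch.get_num_threads())
except ImportError:
    print("intra-op threads:", os.environ["OMP_NUM_THREADS"])
```

Setting the environment variable before the import matters because the thread pool is sized when the OpenMP runtime initializes, not on every call.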

What is multithreading?

Multithreading is a technique to parallelize tasks in a multicore system, which, in turn, is a computer system endowed with multicore processors. Nowadays, any computing system has multicore processors; smartphones, notebooks, and even TVs have CPUs with more than one processing core.
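The idea can be illustrated with Python's standard library alone: a pool of threads splits a task into chunks, runs them concurrently, and combines the partial results. This is only an illustrative sketch of the multithreading technique, not a depiction of PyTorch's internal machinery:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each thread computes the sum of its own slice of the data.
    return sum(chunk)

data = list(range(1_000_000))
n_workers = 4
size = len(data) // n_workers
chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]

# Four threads process the four chunks concurrently.
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)  # 499999500000
```

On a multicore system, libraries such as OpenMP apply this same divide-and-combine pattern to numerical kernels, with each thread scheduled onto a different core.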

As an example, let’s look at the notebook that I’m using right now to write this book. My notebook possesses one Intel i5-8265U processor, which has four physical cores exposed as eight logical cores, as illustrated...

Optimizing Intel CPU with IPEX

IPEX stands for Intel Extension for PyTorch and is a set of libraries and tools provided by Intel to accelerate the training and inference of machine learning models.

IPEX is a clear sign that Intel recognizes the relevance of PyTorch among machine learning frameworks. After all, Intel has invested considerable energy and resources in designing and maintaining an API created specifically for PyTorch.

It is worth noting that IPEX strongly relies on libraries provided by the Intel oneAPI toolset. oneAPI contains libraries and tools specific to machine learning applications, such as oneDNN, as well as others to accelerate applications in general, such as oneTBB.
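A typical way to apply IPEX is to pass a model and optimizer through `ipex.optimize` before the training loop. The sketch below assumes the `intel_extension_for_pytorch` package is installed and falls back gracefully when it, or PyTorch itself, is missing:

```python
def build_and_optimize():
    try:
        import torch
        import intel_extension_for_pytorch as ipex
    except ImportError as exc:
        # Neither failure aborts the sketch; we just report it.
        return f"skipped ({exc.name} not installed)"

    model = torch.nn.Linear(32, 10)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    # ipex.optimize applies operator fusion and memory-layout
    # optimizations tuned for Intel CPUs, returning the pair to
    # use in the training loop.
    model, opt = ipex.optimize(model, optimizer=opt)
    return "IPEX optimizations applied"

print(build_and_optimize())
```

The rest of the training loop stays unchanged, which is what makes IPEX a drop-in optimization for existing PyTorch code.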

Important note

The complete code shown in this section is available at https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch-2.X/blob/main/code/chapter04/baseline-densenet121_cifar10.ipynb and https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch...

Quiz time!

Let’s review what we have learned in this chapter by answering a few questions. First, try to answer these questions without consulting the material.

Important note

The answers to all these questions are available at https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch-2.X/blob/main/quiz/chapter04-answers.md.

Before starting the quiz, remember that it is not a test at all! This section aims to complement your learning process by revising and consolidating the content covered in this chapter.

Choose the correct option for the following questions.

  1. A multicore system can have the following two types of computing cores:
    1. Physical and active.
    2. Physical and digital.
    3. Physical and logical.
    4. Physical and vectorial.
  2. A set of threads created by the same process...
    1. May share the same memory address space.
    2. Do not share the same memory address space.
    3. Is impossible in modern systems.
    4. Do share the same memory address space.
  3. Which of the following...

Summary

You learned that PyTorch relies on third-party libraries to accelerate the training process. Besides understanding the concept of multithreading, you learned how to install, configure, and use OpenMP. In addition, you learned how to install and use IPEX, a set of libraries developed by Intel to optimize the training of PyTorch code executed on Intel-based platforms.

OpenMP can accelerate the training process by employing multiple threads to parallelize the execution of PyTorch code, whereas IPEX replaces operations provided by the default PyTorch library with optimized versions written specifically for Intel hardware.

In the next chapter, you will learn how to create an efficient data pipeline to keep the GPU working at peak performance during the entire training process.

