You're reading from Mastering PyTorch

Product type: Book
Published in: Feb 2021
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781789614381
Edition: 1st Edition
Author: Ashish Ranjan Jha

Ashish Ranjan Jha received his bachelor's degree in electrical engineering from IIT Roorkee (India), a master's degree in computer science from EPFL (Switzerland), and an MBA from Quantic School of Business (Washington), earning a distinction in all three degrees. He has worked for large technology companies, including Oracle and Sony, as well as more recent tech unicorns such as Revolut, mostly focusing on artificial intelligence. He currently works as a machine learning engineer. Ashish has worked on a range of products and projects, from developing an app that uses sensor data to predict the mode of transport to detecting fraud in car damage insurance claims. Besides being an author, machine learning engineer, and data scientist, he blogs frequently on his personal site about the latest research and engineering topics in machine learning.

Distributed training on GPUs with CUDA

Throughout the various exercises in this book, you may have noticed a common line of PyTorch code:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

This line looks for the available compute device and prefers cuda (which uses the GPU) over cpu. The GPU is preferred because of the computational speedups it provides, through parallelization, for regular neural network operations such as matrix multiplications and additions.
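For illustration, the device object is then typically passed to .to() so that the model's parameters and the input tensors end up on the same device. The following snippet is not from the book's exercises; the linear model and random batch are hypothetical placeholders:

import torch
import torch.nn as nn

# Prefer the GPU when CUDA is available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Model parameters and input tensors must live on the same device
model = nn.Linear(128, 10).to(device)          # placeholder model
inputs = torch.randn(32, 128, device=device)   # placeholder batch
outputs = model(inputs)                        # runs on the GPU if one is available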

In this section, we will learn how to speed this up further with the help of distributed training on GPUs. We will build upon the work done in the previous exercise. Note that most of the code is the same as in that exercise; in the following steps, we will highlight only the changes. Executing the script has been left to you as an exercise. The full code is available here: https://github.com/PacktPublishing/Mastering-PyTorch/blob/master/Chapter11/convnet_distributed_cuda.py. Let's...
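As a rough sketch of what such a script involves (this is not the book's convnet_distributed_cuda.py; the linear model and random batch are hypothetical stand-ins for the convolutional model and DataLoader used in the exercise), distributed training on GPUs typically spawns one process per GPU, initializes a process group with the nccl backend, and wraps the model in DistributedDataParallel:

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank, world_size):
    # One process per GPU; nccl is the usual backend for CUDA tensors
    os.environ.setdefault('MASTER_ADDR', 'localhost')
    os.environ.setdefault('MASTER_PORT', '12355')
    dist.init_process_group('nccl', rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # Place the model on this process's GPU and wrap it in DDP,
    # which all-reduces gradients across processes during backward()
    model = nn.Linear(128, 10).to(rank)              # placeholder model
    ddp_model = DDP(model, device_ids=[rank])
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    inputs = torch.randn(32, 128, device=rank)       # placeholder batch
    targets = torch.randint(0, 10, (32,), device=rank)

    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(ddp_model(inputs), targets)
    loss.backward()                                  # gradients synchronized across GPUs here
    optimizer.step()

    dist.destroy_process_group()

if __name__ == '__main__':
    world_size = torch.cuda.device_count()
    mp.spawn(train, args=(world_size,), nprocs=world_size)

Compared to CPU-only distributed training, the main differences are the nccl backend, pinning each process to its own GPU, and passing device_ids to DistributedDataParallel.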
