You're reading from Deep Learning with MXNet Cookbook

Product type: Book
Published in: Dec 2023
Reading level: Beginner
Publisher: Packt
ISBN-13: 9781800569607
Edition: 1st Edition

Author: Andrés P. Torres

Andrés P. Torres is the Head of Perception at Oxa, a global leader in industrial autonomous vehicles, where he leads the design and development of state-of-the-art algorithms for autonomous driving. Previously, Andrés was an advisor and Head of AI at Maekersuite, an early-stage content generation startup, where he developed several AI-based algorithms for mobile phones and the web. Prior to this, Andrés was a Software Development Manager at Amazon Prime Air, developing software to optimize operations for autonomous drones.

Improving Training Performance with MXNet

In previous chapters, we leveraged MXNet's capabilities to solve computer vision and Natural Language Processing (NLP) tasks. In those chapters, the focus was on obtaining the maximum performance from pre-trained models, leveraging the Model Zoos from GluonCV and GluonNLP. We trained these models using different approaches: from scratch, with transfer learning, and with fine-tuning. In this chapter, we will focus on improving the performance of the training process itself, accelerating how we obtain those results.

To optimize the performance of our training loops, MXNet provides several features. We have already briefly used some of them, such as lazy evaluation, which was introduced in Chapter 1. We will revisit it in this chapter, in combination with automatic parallelization. Moreover, we will optimize how we access data efficiently, leveraging Gluon DataLoaders in different contexts...
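The core idea behind lazy evaluation can be sketched in a few lines of plain Python. This is only a toy illustration of the concept, not MXNet's actual implementation: in MXNet, operations on NDArrays are queued asynchronously by the backend engine, and results are only materialized when they are needed (for example, when `wait_to_read()` is called or the value is printed). The class and method names below mirror that behavior but are otherwise invented for this sketch.

```python
# Toy illustration of lazy (deferred) evaluation: operations build a
# graph of pending computations, and nothing runs until a result is
# actually needed.
class LazyTensor:
    def __init__(self, compute):
        self._compute = compute   # thunk that produces the value
        self._value = None        # cached result, filled on demand

    @staticmethod
    def constant(x):
        return LazyTensor(lambda: x)

    def __add__(self, other):
        # Queue the addition instead of running it immediately
        return LazyTensor(lambda: self.wait_to_read() + other.wait_to_read())

    def __mul__(self, other):
        # Queue the multiplication as well
        return LazyTensor(lambda: self.wait_to_read() * other.wait_to_read())

    def wait_to_read(self):
        # Force evaluation, mimicking MXNet's NDArray.wait_to_read()
        if self._value is None:
            self._value = self._compute()
        return self._value

a = LazyTensor.constant(3)
b = LazyTensor.constant(4)
c = a * b + a            # nothing is computed yet; a graph is built
print(c.wait_to_read())  # computation happens here -> 15
```

Because the backend sees the whole queue of pending operations rather than one operation at a time, it can also schedule independent operations in parallel, which is the basis of MXNet's automatic parallelization.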

Technical requirements

Apart from the technical requirements specified in the Preface, the following technical requirements apply:

  • Ensure that you have completed the Installing MXNet, Gluon, GluonCV and GluonNLP recipe from Chapter 1.
  • Ensure that you have completed Chapters 5, 6, and 7.

The code for this chapter can be found at the following GitHub URL: https://github.com/PacktPublishing/Deep-Learning-with-MXNet-Cookbook/tree/main/ch08.

Furthermore, you can access each recipe directly from Google Colab. For example, the code for the first recipe of this chapter can be found here: https://colab.research.google.com/github/PacktPublishing/Deep-Learning-with-MXNet-Cookbook/blob/main/ch08/8_1_Introducing_training_optimization_features.ipynb.

Introducing training optimization features

In the previous chapters, we saw how we could leverage MXNet, GluonCV, and GluonNLP to retrieve models pre-trained on certain datasets (such as ImageNet, MS COCO, or IWSLT2015) and use them for our specific tasks and datasets. Furthermore, we used transfer learning and fine-tuning techniques to improve performance on those tasks/datasets.

In this recipe, we will introduce (and revisit) several concepts and features that will optimize our training loops, after which we will analyze the trade-offs involved.

Getting ready

Similar to the previous chapters, in this recipe, we will be using some matrix operations and linear algebra, but it will not be hard at all, as you will find lots of examples and code snippets to facilitate your learning.

How to do it...

In this recipe, we will work through the following steps:

  1. Working with lazy evaluation and automatic parallelization
  2. Optimizing DataLoaders: GPU preprocessing...
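The DataLoader optimization in step 2 rests on a simple idea: preprocess the next batch while the current one is being consumed, so the GPU never waits for data. The sketch below illustrates that producer/consumer pattern with a background thread; it is a conceptual stand-in, not Gluon's implementation (Gluon's `DataLoader` uses worker processes, controlled by its `num_workers` argument, and the `prefetching_loader` name here is invented for illustration).

```python
import threading
import queue

# Toy sketch of DataLoader prefetching: a background thread prepares
# ("preprocesses") batches while the main loop consumes them, so the
# two stages overlap instead of running back to back.
def prefetching_loader(batches, preprocess, buffer_size=2):
    q = queue.Queue(maxsize=buffer_size)
    SENTINEL = object()  # marks the end of the stream

    def worker():
        for batch in batches:
            q.put(preprocess(batch))  # done off the main thread
        q.put(SENTINEL)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        item = q.get()
        if item is SENTINEL:
            break
        yield item

# Example: "preprocessing" is squaring each element of a batch
loaded = list(prefetching_loader([[1, 2], [3, 4]],
                                 lambda b: [x * x for x in b]))
print(loaded)  # [[1, 4], [9, 16]]
```

In the real pipeline, the "preprocess" stage would be image decoding and augmentation, and part of it can additionally be moved to the GPU, which is the subject of this step.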

Optimizing training for image segmentation

In the previous recipe, we saw how we could leverage MXNet and Gluon to optimize the training of our models with a variety of techniques. We understood how to jointly use lazy evaluation and automatic parallelization for parallel processing. We saw how to improve the performance of our DataLoaders by combining preprocessing on the CPU and GPU, and how using half-precision (Float16) in combination with AMP can halve our training times. Lastly, we explored how to take advantage of multiple GPUs to reduce training times further.
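The storage half of the half-precision claim is easy to verify with plain NumPy (this is an illustration of the numerics only, not MXNet code; in practice, AMP performs the Float16 casting automatically and keeps a Float32 master copy of the weights to avoid the precision loss shown at the end).

```python
import numpy as np

# Float16 stores each value in 2 bytes instead of 4, halving memory
# traffic and, on GPUs with dedicated half-precision units, roughly
# halving compute time as well.
weights32 = np.ones((1024, 1024), dtype=np.float32)
weights16 = weights32.astype(np.float16)

print(weights32.nbytes)  # 4194304 bytes
print(weights16.nbytes)  # 2097152 bytes: exactly half

# The trade-off: Float16 has only ~3 decimal digits of precision,
# so very small weight updates can vanish entirely.
x = np.float16(1.0) + np.float16(1e-4)
print(x == np.float16(1.0))  # True: the small update is lost
```

This lost-update problem is exactly why AMP maintains Float32 master weights and applies loss scaling, rather than running the whole training loop in Float16.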

Now, we can revisit a problem we have been working with throughout the book: image segmentation. We have worked on this task in recipes from previous chapters. In the Segmenting objects semantically with MXNet Model Zoo – PSPNet and DeepLabv3 recipe in Chapter 5, we learned how to use pre-trained models from GluonCV Model Zoo, and introduced the task and the datasets that we will be using in this...

Optimizing training for translating text from English to German

In the first recipe of this chapter, we saw how we could leverage MXNet and Gluon to optimize the training of our models by applying different techniques. We understood how to jointly use lazy evaluation and automatic parallelization for parallel processing, and improved the performance of our DataLoaders by combining preprocessing on the CPU and GPU. We saw how using half-precision (Float16) in combination with AMP can halve our training times, and explored how to take advantage of multiple GPUs to reduce training times further.
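The multi-GPU technique recapped above is data parallelism: each device processes a shard of the batch, and the per-device gradients are averaged before the weight update. The NumPy sketch below illustrates why this yields the same gradient as single-device training (it is a conceptual illustration with list slices standing in for devices; in MXNet, `gluon.utils.split_and_load` performs the splitting across real GPU contexts, and the `data_parallel_grad` helper here is invented for this example).

```python
import numpy as np

# Toy sketch of data parallelism: split the batch across "devices",
# compute the gradient on each shard, then average the gradients
# (the all-reduce step) before updating the weights.
def data_parallel_grad(x, y, w, n_devices=2):
    shards = zip(np.array_split(x, n_devices), np.array_split(y, n_devices))
    grads = []
    for xs, ys in shards:
        # Gradient of mean squared error 0.5*(xs@w - ys)^2 w.r.t. w
        err = xs @ w - ys
        grads.append(xs.T @ err / len(xs))
    return sum(grads) / n_devices  # average across devices

x = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])
w = np.array([0.0])

g = data_parallel_grad(x, y, w)
# Same result as the single-device gradient over the full batch
g_full = x.T @ (x @ w - y) / len(x)
print(np.allclose(g, g_full))  # True
```

Since each device works on a fraction of the batch, the forward and backward passes scale down with the number of devices, while the averaged gradient (for equal-sized shards) is identical to the single-device one.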

Now, we can revisit a problem we have been working with throughout the book, that of translating text from English to German. We have worked with translation tasks in recipes in previous chapters. In the Translating text from Vietnamese to English recipe from Chapter 6, we introduced the task of translating text, while also learning how to use pre-trained models from the GluonNLP Model Zoo. Furthermore...
