Accelerate Model Training with PyTorch 2.X

Product type: Book
Published: April 2024
Publisher: Packt
ISBN-13: 9781805120100
Pages: 230
Edition: 1st
Author: Maicon Melo Alves

Table of Contents (17)

Preface
Part 1: Paving the Way
Chapter 1: Deconstructing the Training Process
Chapter 2: Training Models Faster
Part 2: Going Faster
Chapter 3: Compiling the Model
Chapter 4: Using Specialized Libraries
Chapter 5: Building an Efficient Data Pipeline
Chapter 6: Simplifying the Model
Chapter 7: Adopting Mixed Precision
Part 3: Going Distributed
Chapter 8: Distributed Training at a Glance
Chapter 9: Training with Multiple CPUs
Chapter 10: Training with Multiple GPUs
Chapter 11: Training with Multiple Machines
Index
Other Books You May Enjoy

Training with Multiple CPUs

When we think about accelerating the model-building process, machines endowed with GPU devices immediately come to mind. What if I told you that running distributed training on machines equipped only with multicore processors is both possible and advantageous?

Although the performance improvement obtained from GPUs is incomparable, we should not disdain the computing power provided by modern CPUs. Processor vendors have continuously increased the number of computing cores on their CPUs and have created sophisticated mechanisms to handle contention for shared resources.

Using CPUs to run distributed training is especially interesting in cases where we do not have easy access to GPU devices. Thus, learning this topic is vital to enriching our knowledge of distributed training.

In this chapter, we show how to execute the distributed training process on multiple CPUs on a single machine, first by adopting a general approach and then by using the Intel oneCCL backend.
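As a taste of the general approach, here is a minimal sketch of CPU-only distributed data-parallel training using PyTorch's built-in gloo backend. The model, dataset, and hyperparameters are illustrative placeholders, not the book's own example; switching to the Intel oneCCL backend would only require importing the oneccl_bindings_for_pytorch package and passing backend="ccl".

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    # torchrun sets RANK, WORLD_SIZE, and MASTER_ADDR/PORT for us.
    # gloo is PyTorch's built-in CPU-capable backend; to use Intel oneCCL
    # instead, `import oneccl_bindings_for_pytorch` and pass backend="ccl".
    dist.init_process_group(backend="gloo")

    # Toy regression data; a real pipeline would load an actual dataset.
    dataset = TensorDataset(torch.randn(1024, 16), torch.randn(1024, 1))
    sampler = DistributedSampler(dataset)  # shards the data across processes
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    model = DDP(nn.Linear(16, 1))  # no device_ids: parameters stay on the CPU
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()  # DDP all-reduces gradients across processes
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Launched with, for example, `torchrun --nproc_per_node=4 train_cpu.py` (a hypothetical filename), this starts four worker processes on one machine, each training on its own shard of the data while the backend keeps gradients synchronized.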

Here is what...
