The Machine Learning Solutions Architect Handbook - Second Edition

Product type: Book
Published: Apr 2024
Publisher: Packt
ISBN-13: 9781805122500
Pages: 602
Edition: 2nd Edition
Author: David Ping

Table of Contents (19 Chapters)

Preface
1. Navigating the ML Lifecycle with ML Solutions Architecture
2. Exploring ML Business Use Cases
3. Exploring ML Algorithms
4. Data Management for ML
5. Exploring Open-Source ML Libraries
6. Kubernetes Container Orchestration Infrastructure Management
7. Open-Source ML Platforms
8. Building a Data Science Environment Using AWS ML Services
9. Designing an Enterprise ML Architecture with AWS ML Services
10. Advanced ML Engineering
11. Building ML Solutions with AWS AI Services
12. AI Risk Management
13. Bias, Explainability, Privacy, and Adversarial Attacks
14. Charting the Course of Your ML Journey
15. Navigating the Generative AI Project Lifecycle
16. Designing Generative AI Platforms and Solutions
17. Other Books You May Enjoy
18. Index

Advanced ML Engineering

Congratulations on making it this far! By now, you should have developed a good understanding of the fundamental skills an ML solutions architect needs to operate effectively across the ML lifecycle. In this chapter, we will delve into advanced ML concepts, focusing first on the range of options for distributed model training with large models and datasets. Understanding the concepts and techniques of distributed training is becoming increasingly important, as training large-scale models such as GPT requires a distributed training architecture. We will then explore various technical approaches to optimizing model inference latency; as model sizes grow, a good grasp of how to optimize models for low-latency inference is becoming an essential ML engineering skill. Lastly, we will close the chapter with a hands-on lab on distributed model training.
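To give an intuition for the data-parallel flavor of distributed training discussed above, the following is a minimal, framework-free sketch. It is illustrative only and not the chapter's actual lab code: all function names are hypothetical, and real systems use frameworks such as PyTorch's DistributedDataParallel. The key idea it demonstrates is that each worker computes a gradient on its own shard of the batch, and an all-reduce step averages those gradients before the shared model weights are updated.

```python
# Conceptual sketch of data-parallel training for a 1-D linear model y = w * x.
# Hypothetical names; real distributed training runs workers as separate
# processes/devices and uses collective communication for the all-reduce.

def gradient(w, xs, ys):
    """Gradient of mean squared error for the model y = w * x."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def all_reduce_mean(grads):
    """Average gradients across workers (the 'all-reduce' step)."""
    return sum(grads) / len(grads)

def data_parallel_step(w, xs, ys, num_workers, lr=0.01):
    """Shard the batch, compute local gradients, average them, update weights."""
    shard = len(xs) // num_workers
    local_grads = [
        gradient(w, xs[i * shard:(i + 1) * shard], ys[i * shard:(i + 1) * shard])
        for i in range(num_workers)
    ]
    g = all_reduce_mean(local_grads)
    return w - lr * g

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # generated by the true weight w = 2
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, xs, ys, num_workers=2)
# w converges toward the true weight 2.0
```

With equal-sized shards, the averaged per-worker gradient is mathematically identical to the full-batch gradient, which is why data parallelism preserves the training result while splitting the compute.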

Specifically, we will cover the following...
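On the inference-latency side, one widely used technique is post-training quantization, which stores weights in a low-precision integer format to shrink memory footprint and speed up compute. The sketch below is a hedged illustration of symmetric int8 weight quantization, not the book's implementation; the function names are hypothetical, and production workflows rely on toolkits such as PyTorch's quantization APIs or ONNX Runtime.

```python
# Illustrative symmetric int8 post-training weight quantization.
# Hypothetical helper names; real quantization also handles activations,
# per-channel scales, and calibration data.

def quantize_int8(weights):
    """Map float weights to int8 values using a single symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
# Round-trip error is bounded by half of one quantization step (scale / 2).
```

The accuracy cost is bounded by the quantization step size, which is why int8 quantization often preserves model quality while cutting weight storage to a quarter of float32.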
