The Machine Learning Solutions Architect Handbook - Second Edition


Product type: Book
Published: April 2024
Publisher: Packt
ISBN-13: 9781805122500
Pages: 602
Edition: 2nd
Author: David Ping

Table of Contents (19 chapters)

Preface
Navigating the ML Lifecycle with ML Solutions Architecture
Exploring ML Business Use Cases
Exploring ML Algorithms
Data Management for ML
Exploring Open-Source ML Libraries
Kubernetes Container Orchestration Infrastructure Management
Open-Source ML Platforms
Building a Data Science Environment Using AWS ML Services
Designing an Enterprise ML Architecture with AWS ML Services
Advanced ML Engineering
Building ML Solutions with AWS AI Services
AI Risk Management
Bias, Explainability, Privacy, and Adversarial Attacks
Charting the Course of Your ML Journey
Navigating the Generative AI Project Lifecycle
Designing Generative AI Platforms and Solutions
Other Books You May Enjoy
Index

Achieving low-latency model inference

As ML models continue to grow and are deployed across diverse hardware devices, latency can become a problem for inference use cases that demand low latency and high throughput, such as real-time fraud detection.

To reduce the overall model inference latency of a real-time application, we can apply several optimization techniques, including model optimization, graph optimization, hardware acceleration, and inference engine optimization.
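As a toy illustration of model optimization, the sketch below applies symmetric int8 quantization to a list of weights. The function names and values here are hypothetical; real frameworks (e.g., PyTorch or TensorFlow Lite) provide quantization tooling out of the box, but the arithmetic they perform is essentially this:

```python
def quantize_int8(weights):
    """Symmetric quantization: map floats to int8 using a single scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
# q -> [42, -127, 0, 90]; storing int8 instead of float32 cuts
# weight memory by 4x, which reduces memory bandwidth at inference time.
restored = dequantize(q, scale)
```

Quantized weights trade a small amount of precision (note that 0.003 rounds to 0 here) for smaller models and faster integer arithmetic on supported hardware.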

In this section, we will focus on model optimization, graph optimization, and hardware acceleration. Before we get into these topics, let's first understand how model inference works, specifically for DL models, since most inference optimization techniques target them.
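Before applying any of these optimizations, it helps to establish a latency baseline so their impact can be measured. A minimal timing harness might look like the sketch below; the callable passed in is a hypothetical stand-in for a model's forward pass:

```python
import time

def measure_latency(fn, warmup=10, iters=100):
    """Return the mean per-call latency of fn in milliseconds."""
    # Warm-up runs exclude one-time costs (lazy initialization, caches).
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters * 1000.0

# Hypothetical stand-in for model inference.
latency_ms = measure_latency(lambda: sum(i * i for i in range(1000)))
```

For real models, per-call mean latency is usually reported alongside tail latencies (p95/p99), since real-time systems are constrained by their worst-case behavior, not the average.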

How model inference works and opportunities for optimization

As we discussed earlier in this book, DL models are constructed as computational graphs...
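A computational graph represents the model as nodes (operators) connected by data dependencies, and graph optimization rewrites that structure, for example by fusing adjacent operators so fewer intermediate results are written to memory. A toy sketch of such a fusion pass, not tied to any real inference engine:

```python
# Toy computational graph: each node is (op, inputs). The pattern
# matmul -> add -> relu is a linear layer followed by an activation.
graph = [
    ("matmul", ["x", "W"]),
    ("add", ["matmul_out", "b"]),
    ("relu", ["add_out"]),
]

def fuse_linear_relu(nodes):
    """Collapse matmul -> add -> relu into one fused node.

    Fusion reduces memory traffic: intermediate tensors never leave
    registers/cache, instead of being materialized between ops.
    """
    ops = [op for op, _ in nodes]
    if ops == ["matmul", "add", "relu"]:
        return [("fused_linear_relu", ["x", "W", "b"])]
    return nodes

fused = fuse_linear_relu(graph)
# fused -> [("fused_linear_relu", ["x", "W", "b"])]
```

Production engines (e.g., TensorRT, ONNX Runtime) perform many such rewrites automatically over much larger operator vocabularies, but the principle is the same pattern-match-and-replace shown here.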
