Understanding current MLflow explainability integration

MLflow supports explainability integration in several ways. When implementing explainability, we work with two types of artifacts, explainers and explanations:

  • An explainer is an explainability model. A common choice is a SHAP model, which can be one of several kinds of SHAP explainers, such as TreeExplainer, KernelExplainer, and PartitionExplainer (https://shap.readthedocs.io/en/latest/generated/shap.explainers.Partition.html). For computational efficiency, we usually choose PartitionExplainer for DL models.
  • An explanation is an artifact that captures some form of output from the explainer, which could be text, numerical values, or plots. Explanations can be produced during offline training or testing, or during online production. Thus, if we want to know why the model makes certain predictions, we should be able to provide an explainer for offline evaluation, or an explainer endpoint for online queries (see the sketch after this list).
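
The following is a minimal sketch of how these two artifact types might be produced and tracked together. It assumes a Hugging Face transformers sentiment pipeline as the DL model (an illustrative assumption, not code from this chapter), relies on shap.Explainer selecting a partition-based explainer for text pipelines, and uses the mlflow.shap flavor to log the explainer; the artifact paths and file names are also made up for illustration:

    import shap
    import mlflow
    from transformers import pipeline

    # Hypothetical DL model: a pretrained sentiment-analysis pipeline.
    classifier = pipeline("sentiment-analysis", return_all_scores=True)

    # Build a SHAP explainer for the pipeline. For text pipelines, SHAP
    # uses a partition-based explainer, the computationally cheaper
    # choice for DL models mentioned above.
    explainer = shap.Explainer(classifier)

    with mlflow.start_run():
        # Artifact type 1: log the explainer itself so it can be reloaded
        # for offline evaluation or served behind an online endpoint.
        mlflow.shap.log_explainer(explainer, artifact_path="explainer")

        # Artifact type 2: produce an explanation for a few inputs and
        # log its numerical output as a run artifact.
        texts = ["This film was a pleasant surprise.",
                 "The plot made no sense at all."]
        explanation = explainer(texts)
        mlflow.log_dict(
            {"shap_values": [v.tolist() for v in explanation.values]},
            "explanations/shap_values.json")

In this sketch, the logged explainer covers the offline-evaluation and online-query scenarios, while the logged SHAP values are one concrete form an explanation artifact can take; plots or text summaries could be logged in the same way.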

Here...
