Machine Learning Security with Azure


Product type Book
Published in Dec 2023
Publisher Packt
ISBN-13 9781805120483
Pages 310
Edition 1st
Author: Georgia Kalyva

Table of Contents (17 Chapters)

Preface
Part 1: Planning for Azure Machine Learning Security
Chapter 1: Assessing the Vulnerability of Your Algorithms, Models, and AI Environments
Chapter 2: Understanding the Most Common Machine Learning Attacks
Chapter 3: Planning for Regulatory Compliance
Part 2: Securing Your Data
Chapter 4: Data Protection and Governance
Chapter 5: Data Privacy and Responsible AI Best Practices
Part 3: Securing and Monitoring Your AI Environment
Chapter 6: Managing and Securing Access
Chapter 7: Managing and Securing Your Azure Machine Learning Workspace
Chapter 8: Managing and Securing the MLOps Life Cycle
Chapter 9: Logging, Monitoring, and Threat Detection
Part 4: Best Practices for Enterprise Security in Azure Machine Learning
Chapter 10: Setting a Security Baseline for Your Azure Machine Learning Workloads
Index
Other Books You May Enjoy

Working with model interpretability

Model interpretability in ML refers to the ability to understand and explain how a particular model makes predictions or decisions. Interpretable models provide clear insights into the features or variables that are most influential in the model’s decision-making process. This is particularly important in domains where the decision-making process needs to be transparent and understandable, such as healthcare, finance, and legal systems.

Although you can never fully explain why a model makes a particular prediction, you can use explainers to understand which features affect the results. Explainers can provide global explanations, which show which features drive the model's overall behavior, or local explanations, which show what influenced an individual prediction.
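The global/local distinction can be sketched with a toy linear model (a hypothetical example using synthetic data, not code from this chapter): the magnitude of each coefficient gives a global view of feature influence, while the per-feature contributions to a single prediction form a local explanation.

```python
import numpy as np

# Hypothetical setup: synthetic data with three features,
# where the third feature has no effect on the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_coef = np.array([2.0, -0.5, 0.0])
y = X @ true_coef + rng.normal(scale=0.1, size=200)

# Fit a linear model by ordinary least squares
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Global explanation: which features drive the model overall?
global_importance = np.abs(coef)

# Local explanation: each feature's contribution to one prediction;
# for a linear model the prediction is the sum of these contributions
x0 = X[0]
local_contributions = coef * x0
prediction = local_contributions.sum()
```

For a linear model these two views come directly from the coefficients; for more complex models, explainer libraries approximate the same idea with techniques such as SHAP values.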

Let us explore some methods we can use to achieve model interpretability:

  • Feature importance (FI) determines the influence of each feature...
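One common way to estimate feature importance is permutation importance: shuffle one feature at a time and measure how much the model's error grows. The sketch below uses synthetic data and a simple least-squares model as a stand-in (an assumed setup for illustration, not Azure-specific code).

```python
import numpy as np

# Synthetic data: feature 0 matters most, feature 2 is unused
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 3))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.1, size=300)

# Simple stand-in model fit by ordinary least squares
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X_, y_):
    return np.mean((X_ @ coef - y_) ** 2)

baseline = mse(X, y)
importance = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    # Shuffling a column breaks its link to the target;
    # the resulting increase in error is that feature's importance
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importance.append(mse(X_perm, y) - baseline)
```

An important feature produces a large error increase when permuted, while an irrelevant one leaves the error essentially unchanged.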