You're reading from  Interpretable Machine Learning with Python - Second Edition

Product type: Book
Published in: Oct 2023
Publisher: Packt
ISBN-13: 9781803235424
Edition: 2nd
Author: Serg Masís

Serg Masís has been at the confluence of the internet, application development, and analytics for the last two decades. Currently, he's a climate and agronomic data scientist at Syngenta, a leading agribusiness company with a mission to improve global food security. Before that role, he co-founded a start-up, incubated by Harvard Innovation Labs, that combined the power of cloud computing and machine learning with principles in decision-making science to expose users to new places and events. Whether it pertains to leisure activities, plant diseases, or customer lifetime value, Serg is passionate about providing the often-missing link between data and decision-making—and machine learning interpretation helps bridge this gap robustly.

Interpretable Machine Learning with Python, Second Edition: Build Your Own Interpretable Models

Welcome to Packt Early Access. We’re giving you an exclusive preview of this book before it goes on sale. It can take many months to write a book, but our authors have cutting-edge information to share with you today. Early Access gives you an insight into the latest developments by making chapter drafts available. The chapters may be a little rough around the edges right now, but our authors will update them over time.

You can dip in and out of this book or follow along from start to finish; Early Access is designed to be flexible. We hope you enjoy getting to know more about the process of writing a Packt book.

  1. Chapter 1: Interpretation, Interpretability and Explainability; and why does it all matter?
  2. Chapter 2: Key Concepts of Interpretability
  3. Chapter 3: Interpretation Challenges
  4. Chapter 4: Global Model-agnostic Interpretation Methods
  5. Chapter 5: Local...

Technical requirements

To follow the example in this chapter, you will need Python 3, running either in a Jupyter environment or in your favorite integrated development environment (IDE), such as PyCharm, Atom, VSCode, PyDev, or IDLE. The example also requires the pandas, scikit-learn (sklearn), matplotlib, and scipy Python libraries.

The code for this chapter is located here: https://packt.link/Lzryo.
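Before diving in, a quick check like the following can confirm the required libraries are importable. This is a minimal sketch, not part of the chapter's code; note that scikit-learn is imported under the name `sklearn`:

```python
# Report whether each library the chapter relies on is importable.
import importlib.util

required = ["pandas", "sklearn", "matplotlib", "scipy"]
for name in required:
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'OK' if found else 'MISSING'}")
```

Any library reported as `MISSING` can be installed with pip or conda before proceeding.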

What is machine learning interpretation?

To interpret something is to explain the meaning of it. In the context of machine learning, that something is an algorithm. More specifically, that algorithm is a mathematical one that takes input data and produces an output, much like with any formula.

Let’s examine the most basic of models, simple linear regression, illustrated in the following formula:

ŷ = β₀ + β₁x₁

Once fitted to the data, the meaning of this model is that predictions (ŷ) are a weighted sum of the x features with the β coefficients. In this case, there’s only one x feature or predictor variable, and the y variable is typically called the response or target variable. A simple linear regression formula single-handedly explains the transformation that is performed on the input data x₁ to produce the output ŷ. The following example can illustrate this concept in further detail.
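This weighted-sum interpretation can be seen directly in code. The following sketch fits a simple linear regression with scikit-learn on made-up data (the values here are purely illustrative, not the chapter's dataset): the fitted model's entire meaning is captured by two numbers, β₀ and β₁.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative data: one predictor x1 and a response y that is
# roughly y = 2 + 3 * x1 plus a little noise.
rng = np.random.default_rng(42)
x1 = rng.uniform(0, 10, size=50).reshape(-1, 1)
y = 2 + 3 * x1.ravel() + rng.normal(0, 0.5, size=50)

model = LinearRegression().fit(x1, y)

# The model's "meaning" is fully described by the intercept (beta_0)
# and the coefficient on x1 (beta_1).
print(f"beta_0 (intercept): {model.intercept_:.2f}")
print(f"beta_1 (slope):     {model.coef_[0]:.2f}")

# Any prediction is just the weighted sum beta_0 + beta_1 * x1:
manual = model.intercept_ + model.coef_[0] * 4.0
assert np.isclose(manual, model.predict([[4.0]])[0])
```

Because the prediction can be reproduced by hand from two coefficients, the transformation from input to output is fully transparent, which is exactly what makes this model so interpretable.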

Understanding a simple weight prediction model

If you go to this web page...
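As a hypothetical sketch of such a weight-prediction model (the heights, weights, and the 0.7 kg/cm relationship below are invented for illustration and are not the chapter's actual data), the coefficient of a fitted linear model reads directly in physical units:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented data: heights in cm, weights in kg, assuming an
# approximately linear relationship between the two.
rng = np.random.default_rng(0)
height_cm = rng.uniform(130, 190, size=100).reshape(-1, 1)
weight_kg = -70 + 0.7 * height_cm.ravel() + rng.normal(0, 3, size=100)

model = LinearRegression().fit(height_cm, weight_kg)

# The coefficient has a direct physical reading: predicted kg of
# additional weight per additional cm of height.
print(f"kg per cm: {model.coef_[0]:.2f}")
print(f"predicted weight at 170 cm: {model.predict([[170.0]])[0]:.1f} kg")
```
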

Understanding the difference between interpretability and explainability

Something you’ve probably noticed when reading the first few pages of this book is that the verbs interpret and explain, as well as the nouns interpretation and explanation, have been used interchangeably. This is not surprising, considering that to interpret is to explain the meaning of something. Despite that, the related terms interpretability and explainability should not be used interchangeably, even though they are often mistaken for synonyms. Most practitioners don’t make any distinction, and many academics reverse the definitions provided in this book.

What is interpretability?

Interpretability is the extent to which humans, including non-subject-matter experts, can understand the cause and effect, and input and output, of a machine learning model. To say a model has a high level of interpretability means you can describe its inference in a human-interpretable way. In other words...

A business case for interpretability

This section describes several practical business benefits of machine learning interpretability, such as better decisions, as well as models that are more trusted, ethical, and profitable.

Better decisions

Typically, machine learning models are trained and then evaluated against the desired metrics. If they pass quality control against a hold-out dataset, they are deployed. However, once tested in the real world, things can get wild, as in the following hypothetical scenarios:

  • A high-frequency trading algorithm could single-handedly crash the stock market.
  • Hundreds of smart home devices might inexplicably burst into unprompted laughter, terrifying their users.
  • License-plate recognition systems could incorrectly read a new kind of license plate and fine the wrong drivers.
  • A racially biased surveillance system could incorrectly detect an intruder, and, because of this, guards could shoot an innocent office worker.
  • A self...

Summary

This chapter has shown us what machine learning interpretation is and what it is not, and the importance of interpretability. In the next chapter, we will learn what can make machine learning models so challenging to interpret, and how you would classify interpretation methods by both category and scope.

Image sources

Dataset sources

  • Statistics Online Computational Resource (SOCR), University of California, Los Angeles. (1993). Growth Survey of 25,000 children from birth to 18 years of age recruited from Maternal and Child Health Centers. Originally retrieved from http://www.socr.ucla.edu/

Further reading

  • Lipton, Zachary (2017). The Mythos of Model Interpretability. ICML 2016 Human Interpretability in Machine Learning Workshop: https://doi.org/10.1145/3236386.3241340
  • Roscher, R., Bohn, B., Duarte, M.F. & Garcke, J. (2020). Explainable Machine Learning for Scientific Insights and Discoveries. IEEE Access, 8, 42200-42216: https://dx.doi.org/10.1109/ACCESS.2020.2976199
  • Doshi-Velez, F. & Kim, B. (2017). Towards A Rigorous Science of Interpretable Machine Learning: http://arxiv.org/abs/1702.08608
  • Arrieta, A.B., Diaz-Rodriguez, N., Ser, J.D., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI: https://arxiv.org/abs/1910.10045
  • Coglianese, C. & Lehr, D. (2019). Transparency and algorithmic governance. Administrative Law Review...
