You're reading from Interpretable Machine Learning with Python - Second Edition

Product type: Book
Published in: Oct 2023
Publisher: Packt
ISBN-13: 9781803235424
Edition: 2nd Edition
Author: Serg Masís

Serg Masís has been at the confluence of the internet, application development, and analytics for the last two decades. Currently, he's a climate and agronomic data scientist at Syngenta, a leading agribusiness company with a mission to improve global food security. Before that role, he co-founded a start-up, incubated by Harvard Innovation Labs, that combined the power of cloud computing and machine learning with principles in decision-making science to expose users to new places and events. Whether it pertains to leisure activities, plant diseases, or customer lifetime value, Serg is passionate about providing the often-missing link between data and decision-making—and machine learning interpretation helps bridge this gap robustly.

What’s Next for Machine Learning Interpretability?

Over the last thirteen chapters, we have explored the field of Machine Learning (ML) interpretability. As stated in the preface, it’s a broad area of research, most of which hasn’t even left the lab and become widely used yet, and this book has no intention of covering absolutely all of it. Instead, the objective is to present various interpretability tools in sufficient depth to be useful as a starting point for beginners and even complement the knowledge of more advanced readers. This chapter will summarize what we’ve learned in the context of the ecosystem of ML interpretability methods, and then speculate on what’s to come next!

These are the main topics we are going to cover in this chapter:

  • Understanding the current landscape of ML interpretability
  • Speculating on the future of ML interpretability

Understanding the current landscape of ML interpretability

First, we will provide some context on how the book relates to the main goals of ML interpretability and how practitioners can start applying the methods to achieve those broad goals. Then, we’ll discuss the current areas of growth in research.

Tying everything together!

As discussed in Chapter 1, Interpretation, Interpretability, and Explainability; and Why Does It All Matter?, there are three main themes when talking about ML interpretability: Fairness, Accountability, and Transparency (FAT), and each of these presents a series of concerns (see Figure 14.1). I think we can all agree these are desirable properties for a model! Indeed, these concerns all present opportunities for improving Artificial Intelligence (AI) systems. These improvements start with leveraging model interpretation methods to evaluate models, confirm or dispute assumptions, and find problems.

What your aim is will depend...

Speculating on the future of ML interpretability

I’m used to hearing the metaphor of this period being the “Wild West of AI”, or worse, an “AI Gold Rush!” It conjures images of an unexplored and untamed territory being eagerly conquered, or worse, civilized. Yet, in the 19th century, the western United States was not too different from other regions on the planet and had already been inhabited by Native Americans for millennia, so the metaphor doesn’t quite work. Predicting with the accuracy and confidence we can achieve with ML would have spooked our ancestors and is not a “natural” position for us humans. It’s more akin to flying than to exploring unknown land.

The article Toward the Jet Age of machine learning (linked in the Further reading section at the end of this chapter) presents a much more fitting metaphor of AI being like the dawn of aviation. It’s new and exciting, and people still marvel at what...

Summary

Interpretable machine learning is an extensive topic, and this book has covered only some aspects of its most important areas on two levels: diagnosis and treatment. Practitioners can leverage the tools in this toolkit anywhere in the ML pipeline. However, it’s up to the practitioner to choose when and how to apply them.

What matters most is to engage with the tools. Not using the interpretable machine learning toolkit is like flying a plane with very few instruments, or none at all. Much like a plane operates under different weather conditions, machine learning models operate under different data conditions; to be a skilled pilot or machine learning engineer, we can’t be overconfident and must validate or rule out our hypotheses with our instruments. And much like it took aviation a few decades to become the safest mode of transportation, it will take AI a few decades to become the safest mode of decision-making. It will take a global village...

Further reading

