Chapter 11: Key Principles for Monitoring Your ML System

In this chapter, we will learn about the fundamental principles that are essential for monitoring your machine learning (ML) models in production. You will learn how to build trustworthy and Explainable AI solutions using the Explainable Monitoring Framework, which can be used to build functional monitoring pipelines so that you can monitor ML models in production, analyze application and model performance, and govern ML systems. The goal of monitoring ML systems is to enable trust, transparency, and explainability in order to increase business impact. We will explore this by looking at some real-world examples.

Understanding the principles mentioned in this chapter will equip you with the knowledge to build end-to-end monitoring systems for your use case or company. This will help you engage business, tech, and public (customers and legal) stakeholders so that you can efficiently achieve...

Understanding the key principles of monitoring an ML system

Building trust into AI systems is vital these days, given the growing demand for products to be data-driven and to adapt to changing environments and regulatory frameworks. One of the reasons ML projects fail to bring value to businesses is the lack of trust and transparency in their decision making. Many black-box models are good at reaching high accuracy, but they fall short when it comes to explaining the reasons behind the decisions that have been made. At the time of writing, news has been surfacing that raises these concerns of trust and explainability, as shown in the following figure:

Figure 11.1 – Components of model trust and explainability

This figure showcases concerns in important real-world areas. Let's look at how this translates into some key aspects of model explainability, such as model drift, model bias, model transparency, and model compliance...
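To make the idea of model drift concrete, the following is a minimal sketch, assuming a hypothetical temperature feature, of how drift between training data and production inputs could be flagged with a two-sample Kolmogorov-Smirnov test; the feature values, sample sizes, and significance threshold are illustrative and are not part of the book's pipeline:

import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference, current, alpha=0.05):
    """Return True if the current feature distribution appears to drift from the reference."""
    statistic, p_value = ks_2samp(reference, current)
    return p_value < alpha  # a small p-value suggests the two distributions differ

# Hypothetical example: training-time temperatures versus recent production inputs
reference_temps = np.random.normal(loc=5.0, scale=3.0, size=1000)  # training data
current_temps = np.random.normal(loc=9.0, scale=3.0, size=1000)    # production data
if detect_drift(reference_temps, current_temps):
    print("Possible data drift detected for feature 'temperature'")

A statistical check like this covers only the drift aspect; bias, transparency, and compliance call for their own dedicated checks.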

Monitoring in the MLOps workflow

We learned about the MLOps workflow in Chapter 1, Fundamentals of MLOps Workflow. As shown in the following diagram, the monitoring block is an integral part of the MLOps workflow for evaluating the ML models' performance in production and measuring the ML system's business value. We can only do both (measure the performance and business value that's been generated by the ML model) if we understand the model's decisions in terms of transparency and explainability (to explain the decisions to stakeholders and customers).

Explainable Monitoring enables both transparency and explainability to govern ML systems in order to drive the best business value:

Figure 11.4 – MLOps workflow – Monitor

In practice, Explainable Monitoring enables us to monitor, analyze, and govern the ML system, and it works in a continuous loop with the other components of the MLOps workflow. It also empowers humans to engage...

Understanding the Explainable Monitoring Framework

In this section, we will explore the Explainable Monitoring Framework (as shown in the following diagram) in detail to understand and learn how Explainable Monitoring enhances the MLOps workflow and the ML system itself:

Figure 11.6 – Explainable Monitoring Framework

The Explainable Monitoring Framework is a modular framework that's used to monitor, analyze, and govern an ML system while enabling continual learning. All the modules work in sync to enable transparent and Explainable Monitoring. Let's look at how each module works to understand how they contribute to and function in the framework. First, let's look at the monitor module (the first panel in the preceding diagram).
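As a rough illustration of this modularity, here is a minimal sketch, not the book's implementation, of how the monitor, analyze, and govern modules could be wired together in code; the class name, method names, and thresholds are hypothetical placeholders:

from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class ExplainableMonitoring:
    """Hypothetical skeleton of the three framework modules working in sync."""
    records: List[Dict[str, Any]] = field(default_factory=list)
    alerts: List[str] = field(default_factory=list)

    def monitor(self, features: Dict[str, Any], prediction: Any, latency_ms: float) -> None:
        # Collect telemetry and model inputs/outputs for every inference request
        self.records.append({"features": features, "prediction": prediction, "latency_ms": latency_ms})
        if latency_ms > 500:
            self.alerts.append(f"High latency: {latency_ms:.0f} ms")

    def analyze(self) -> Dict[str, Any]:
        # Summarize what was collected (drift, bias, and performance checks would go here)
        return {"request_count": len(self.records), "alert_count": len(self.alerts)}

    def govern(self, report: Dict[str, Any]) -> str:
        # Turn the analysis into an action: retrain, roll back, or keep serving
        return "retrain" if report["alert_count"] > 10 else "keep-serving"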

Monitor

The monitor module is dedicated to monitoring the application in production (serving the ML model). Several factors are at play in an ML system, such as application performance (telemetry data, throughput...
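For illustration, the following is a minimal sketch of capturing basic application telemetry (latency and throughput) around a model's predict call; the model object, logger name, and metric choices are assumptions made for this example rather than the monitoring setup used in the book:

import time
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ml-telemetry")

class TelemetryRecorder:
    """Records latency and rolling throughput for a model's predict calls."""
    def __init__(self, model):
        self.model = model
        self.request_count = 0
        self.start_time = time.time()

    def predict(self, features):
        t0 = time.perf_counter()
        prediction = self.model.predict([features])
        latency_ms = (time.perf_counter() - t0) * 1000
        self.request_count += 1
        throughput = self.request_count / (time.time() - self.start_time)  # requests per second
        logger.info("latency=%.1f ms, throughput=%.2f req/s", latency_ms, throughput)
        return prediction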

Enabling continuous monitoring for the service

The Explainable Monitoring Framework is a valuable resource when we wish to monitor ML systems in production. In the next chapter, we will apply the Explainable Monitoring Framework to the business use case we worked on in the previous chapters and enable continuous monitoring for the system we have deployed. We will then monitor the ML application that's been deployed to production and analyze the incoming data and the model's performance in order to govern the ML system and produce maximum business value for the use case.
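As a hedged preview of what such continuous monitoring could look like, here is a minimal sketch of a periodic monitoring loop; the fetch_recent_inferences helper and the one-hour interval are hypothetical placeholders, not the pipeline we will build in the next chapter:

import time

def fetch_recent_inferences():
    # Placeholder: pull the latest inputs and predictions from the serving logs
    return []

def continuous_monitoring(interval_seconds=3600):
    while True:
        records = fetch_recent_inferences()
        if records:
            # Drift, performance, and bias checks on the fresh batch would run here
            print(f"Analyzed {len(records)} recent inference records")
        time.sleep(interval_seconds)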

Summary

In this chapter, we learned about the key principles for monitoring an ML system. We explored some common monitoring methods and then examined the Explainable Monitoring Framework (including the monitor, analyze, and govern stages) in detail.

In the next chapter, we will delve into a hands-on implementation of the Explainable Monitoring Framework. Using this, we will build a monitoring pipeline in order to continuously monitor the ML system in production for the business use case (predicting weather at the port of Turku).

The next chapter is quite hands-on, so buckle up and get ready!
