You're reading from Distributed Data Systems with Azure Databricks

Published in May 2021 · Packt · 1st Edition · ISBN-13: 9781838647216 · Reading level: Beginner
Author: Alan Bernardo Palacio

Alan Bernardo Palacio is a data scientist and engineer with broad experience across different engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in several industries. He has worked for companies such as Ernst & Young and Globant, and now holds a data engineer position at Ebiquity Media, helping the company create a scalable data pipeline. Alan graduated with a mechanical engineering degree from the National University of Tucumán in 2015, founded startups, and later earned a master's degree from the Faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now works and resides in the Netherlands.

Chapter 11: Managing and Serving Models with MLflow and MLeap

In the previous chapter, we learned how to fine-tune models created in Azure Databricks. The next step is to keep track of, and make effective use of, the models we train. Software development has clear methodologies for tracking code: staging and production versions, and general code lifecycle management processes. It is far less common to see these applied to machine learning models. The reasons vary; one may be that data science teams follow methodologies closer to academia than to software production, and another is that machine learning lacks clearly defined development lifecycles. In Azure Databricks, we can apply some of the methodologies commonly used in software development to machine learning models.

This chapter will focus on exploring how the models and processes...

Technical requirements

To work on the examples given in this chapter, you need to have the following:

  • An Azure Databricks subscription
  • An Azure Databricks notebook attached to a running cluster with Databricks Runtime for Machine Learning (Databricks Runtime ML) with version 7.0 or higher

Managing machine learning models

As we have seen before, in Azure Databricks we have at our disposal the MLflow Model Registry, part of MLflow, an open source platform for managing the complete lifecycle of machine learning and deep learning models. The registry allows us to manage models directly, with chronological lineage, model versioning, and stage transitions. It provides tools such as Experiments and Runs, which let us quickly visualize the results of training runs and hyperparameter optimization, maintain proper model version control to keep track of which models are available for serving, and quickly update the current version if necessary.
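
As a small point of reference, this is roughly what logging a run with MLflow's Python tracking API looks like; the parameter and metric names below are placeholders, not values from this chapter's example:

    import mlflow

    # Start a tracked run; in a Databricks notebook, the experiment is
    # inferred from the notebook unless one is set explicitly.
    with mlflow.start_run() as run:
        mlflow.log_param("learning_rate", 0.01)  # hypothetical hyperparameter
        mlflow.log_metric("rmse", 0.42)          # hypothetical evaluation metric
        print("Logged run:", run.info.run_id)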

In Azure Databricks, MLflow provides a Model Registry user interface (UI) in which we can set our models to respond to REST API requests for inference, transition models between stages, and visualize metrics and unstructured data associated with the models, such as descriptions and comments. It gives us the possibility of managing...
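
Stage transitions can also be performed programmatically rather than through the UI. A minimal sketch using the MlflowClient API (the model name and version here are illustrative):

    from mlflow.tracking import MlflowClient

    client = MlflowClient()
    # Promote version 2 of an (illustrative) registered model to Production.
    client.transition_model_version_stage(
        name="power-forecasting-model",
        version=2,
        stage="Production"
    )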

Model Registry example

In this section, we will go through an example in which we develop a machine learning model and use the MLflow Model Registry to save it, manage the stage it belongs to, and use it to make predictions. The model will be a Keras neural network, and we will use the Windfarm US dataset to predict the power output of wind farms from weather parameters such as wind direction, wind speed, and air temperature. We will use MLflow to keep track of the model's stage and to register it and load it back again to make predictions:

  1. First, we will retrieve the dataset used to train the model. We will use the pandas read_csv() function to load it directly from the Uniform Resource Identifier (URI) of the file on GitHub, as follows:
    import pandas as pd
    # Load the Windfarm US dataset directly from its GitHub URI.
    wind_farm_data = pd.read_csv("https://github.com/dbczumar/model-registry-demo-notebook/raw/master/dataset/windfarm_data.csv", index_col=0)

    The dataset...
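
The training steps that follow in the full chapter produce a Keras model; once trained, registering it in the Model Registry can look roughly like the following sketch. The network architecture and registered model name here are placeholders, not the chapter's exact code:

    import mlflow
    import mlflow.keras
    from tensorflow import keras

    # A stand-in network; the chapter's model predicts power output
    # from weather features such as wind direction, speed, and temperature.
    model = keras.Sequential([
        keras.layers.Dense(64, activation="relu", input_shape=(3,)),
        keras.layers.Dense(1)
    ])
    model.compile(optimizer="adam", loss="mse")

    with mlflow.start_run():
        # registered_model_name creates (or adds a new version to)
        # the corresponding Model Registry entry.
        mlflow.keras.log_model(
            model,
            artifact_path="model",
            registered_model_name="power-forecasting-model"
        )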

Exporting and loading pipelines with MLeap

When we train a machine learning or deep learning model, our intention is to use it repeatedly to predict on new observations of data. To do this, we must be able not only to store the model but also to load it back into one or more platforms. We therefore need to serialize the model for future use in scoring or prediction.

MLeap is a commonly used format to serialize and execute machine learning and deep learning pipelines built with popular frameworks such as Apache Spark, scikit-learn, and TensorFlow. It is commonly used for making individual predictions rather than batch predictions. These serialized pipelines are called bundles; they can be exported as models and later loaded and deployed back into Azure Databricks to make new predictions.
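
As a rough sketch of the round trip (assuming the MLeap Spark packages are attached to the cluster; the paths are illustrative, and the fitted pipeline_model and the DataFrame df are assumed to already exist):

    import mleap.pyspark  # registers MLeap serialization on Spark models
    from mleap.pyspark.spark_support import SimpleSparkSerializer
    from pyspark.ml import PipelineModel

    # Export the fitted pipeline as an MLeap bundle; the second argument
    # is a DataFrame the pipeline has been applied to, used for the schema.
    pipeline_model.serializeToBundle(
        "jar:file:/tmp/pipeline-bundle.zip",
        pipeline_model.transform(df)
    )

    # Load the bundle back into Spark to score new data.
    loaded_pipeline = PipelineModel.deserializeFromBundle(
        "jar:file:/tmp/pipeline-bundle.zip"
    )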

In this section, we will learn how to use MLeap to export a DecisionTreeClassifier MLlib model and load it back again to make predictions using a saved pipeline in...

Serving models with MLflow

One of the benefits of using MLflow in Azure Databricks as the repository for our machine learning models is that it allows us to serve predictions from the Model Registry through REST API endpoints. These endpoints are updated automatically when newer versions of the models reach each stage, which makes serving a natural complement to tracking the model's lifecycle with the MLflow Model Registry.

Enabling a model to be served as a REST API endpoint can be done from the Model Registry UI in the Azure Databricks workspace. To enable serving, go to the model page in the Model Registry UI and click on the Enable Serving button in the Serving tab.

Once you have clicked on the button, which is shown in the following screenshot, you should see the status as Pending. After a couple of minutes, the status will change to Ready:

Figure 11.9 – Enabling the serving of a model
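
Once the status is Ready, the endpoint accepts scoring requests over HTTP. A minimal client sketch follows; the workspace URL, model name, token, and feature names are placeholders, and the expected payload format can vary with the MLflow version:

    import requests

    # Placeholders for your workspace URL, registered model name,
    # stage or version, and a Databricks personal access token.
    url = ("https://<databricks-instance>/model/"
           "power-forecasting-model/Production/invocations")
    headers = {"Authorization": "Bearer <personal-access-token>"}

    # A pandas-split style payload with one observation.
    payload = {
        "columns": ["wind_direction", "wind_speed", "air_temperature"],
        "data": [[25.0, 6.0, 7.5]]
    }

    response = requests.post(url, headers=headers, json=payload)
    print(response.json())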

If you want to disable...

Summary

The development process of machine learning models can be complicated because of the mixed backgrounds inherent to the discipline and the fact that it is commonly detached from the standard software development lifecycle. Moreover, we will encounter issues when transitioning models from development to production if we cannot export the preprocessing pipeline that was used to extract features from the data.

As we have seen in this chapter, we can tackle these issues by using MLflow to manage the model lifecycle, applying staging and version control to the models we use, and effectively serializing the preprocessing pipeline so it can be reused on the data to be scored.

In the next chapter, we will explore the concept of distributed learning, a technique in which we distribute the training of deep learning models across many workers efficiently in Azure Databricks.
