You're reading from Practical Machine Learning on Databricks

Product type: Book
Published in: Nov 2023
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781801812030
Edition: 1st Edition

Author: Debu Sinha

Debu is an experienced Data Science and Engineering leader with deep expertise in Software Engineering and Solutions Architecture. With over 10 years in the industry, Debu has a proven track record in designing scalable Software Applications, Big Data, and Machine Learning systems. As Lead ML Specialist on the Specialist Solutions Architect team at Databricks, Debu focuses on AI/ML use cases in the cloud and serves as an expert on LLMs, Machine Learning, and MLOps. With prior experience as a startup co-founder, Debu has demonstrated skills in team-building, scaling, and delivering impactful software solutions. An established thought leader, Debu has received multiple awards and regularly speaks at industry events.

Model Deployment Approaches

In the previous chapter, we looked at how to use the Databricks MLflow Model Registry to manage our ML model versioning and life cycle. We learned how to use its integrated access control to manage access to models registered in Model Registry, and how to use Model Registry's webhook support to trigger automatic Slack notifications or jobs that validate newly registered models.

In this chapter, we will take the registered models from Model Registry and understand how to deploy them using the various model deployment options available in Databricks.

We will cover the following topics:

  • Understanding ML deployments and paradigms
  • Deploying ML models for batch and streaming inference
  • Deploying ML models for real-time inference
  • Incorporating custom Python libraries into MLflow models for Databricks deployment
  • Deploying custom models with MLflow and Model Serving
  • ...

Technical requirements

We’ll need the following before diving into this chapter:

  • Access to a Databricks workspace
  • A running cluster with Databricks Runtime for Machine Learning (Databricks Runtime ML), version 13.0 or above
  • All the previous chapters' notebooks, executed as described
  • Basic knowledge of Apache Spark, including DataFrames and Spark UDFs

Let’s take a look at what exactly ML deployment is.

Understanding ML deployments and paradigms

Data science is not the same as data engineering. Data science is geared toward translating a business problem into a data problem and solving it with scientific methods: we develop mathematical models and then optimize their performance. Data engineers, by contrast, are mainly concerned with the reliability of the data in the data lake; they focus on tooling that makes data pipelines scalable and maintainable while meeting service-level agreements (SLAs).

When we talk about ML deployments, we want to bridge the gap between data science and data engineering.

The following figure visualizes the entire process of ML deployment:

Figure 7.1 – Displaying the ML deployment process

On the right-hand side, we have the process of data science, which is very interactive and iterative. We understand the business problem and discover the datasets that can add value to our analysis. Then, we build data pipelines...

Deploying ML models for batch and streaming inference

This section will cover examples of deploying ML models in a batch and streaming manner using Databricks.

In both batch and streaming inference deployments, we use the model to make predictions and then store them at a location for later use. The final storage location for the prediction results can be a database with low-latency read access, cloud storage such as S3 from which they can be exported to another system, or even a Delta table that business analysts can easily query.

When working with large amounts of data, Spark offers an efficient framework for processing and analyzing it, making it an ideal vehicle for applying our trained machine learning models at scale.

Note

One important note to remember is that you can use any non-distributed ML library to train your models. As long as the model is logged using MLflow's model abstractions, you can take advantage of MLflow's Model Registry and all of the code presented in this chapter.

...

Deploying ML models for real-time inference

Real-time inference involves generating predictions on a small number of records using a model deployed as a REST endpoint, with the expectation of receiving predictions within a few milliseconds.

Real-time deployments are needed in use cases where the features are only available at serving time and cannot be pre-computed. These deployments are more complex to manage than batch or streaming deployments.

Databricks offers integrated model serving endpoints, enabling you to prototype, develop, and deploy real-time inference models on production-grade, fully managed infrastructure within the Databricks environment. At the time of writing this book, there are two additional methods you can utilize to deploy your models for real-time inference:

  • Managed solutions provided by the following cloud providers:
    • Azure ML
    • AWS SageMaker
    • GCP VertexAI
  • Custom solutions that use Docker and Kubernetes or a similar set of technologies
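As a hedged sketch of what calling a serving endpoint looks like, the snippet below builds a request body in the `dataframe_split` JSON layout that MLflow-based scoring servers (including Databricks Model Serving) accept. The workspace URL, endpoint name, and feature columns are placeholders, not real values:

```python
import json
import pandas as pd

# Hypothetical workspace URL and endpoint name -- replace with your own.
DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"
ENDPOINT_NAME = "churn-classifier"

def build_serving_request(df: pd.DataFrame) -> str:
    """Serialize a pandas DataFrame into the 'dataframe_split' JSON
    format accepted by MLflow model scoring endpoints."""
    payload = {"dataframe_split": df.to_dict(orient="split")}
    return json.dumps(payload)

records = pd.DataFrame({"age": [42], "tenure": [12]})
body = build_serving_request(records)
print(body)

# The actual call (requires a valid personal access token) would look like:
# requests.post(
#     f"{DATABRICKS_HOST}/serving-endpoints/{ENDPOINT_NAME}/invocations",
#     headers={"Authorization": f"Bearer {token}"},
#     data=body,
# )
```

The request itself is shown commented out because it needs a live endpoint and credentials; the payload construction is the part that is the same regardless of which of the deployment options above you choose.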
...

Incorporating custom Python libraries into MLflow models for Databricks deployment

If your projects necessitate the integration of bespoke Python libraries or packages hosted on a secure private repository, MLflow provides a useful utility function, add_libraries_to_model. This feature allows you to seamlessly incorporate these custom dependencies into your models during the logging process, before deploying them via Databricks Model Serving. While the subsequent code examples demonstrate this functionality using scikit-learn models, the same methodology can be applied to any model type supported by MLflow:

  1. Upload dependencies and install them in the notebook: The recommended location for uploading dependency files is Databricks File System (DBFS):
    dbutils.fs.cp("local_path/to/your_dependency.whl", "dbfs:/path/to/your_dependency.whl")

    # Installing custom library using %pip
    %pip install /dbfs/path/to/your_dependency.whl
  2. Model logging with custom libraries...

Packaging dependencies with MLflow models

In a Databricks environment, files commonly reside in DBFS. However, for enhanced performance, it’s recommended to bundle these artifacts directly within the model artifact. This ensures that all dependencies are statically captured at deployment time.

The log_model() method allows you to not only log the model but also its dependent files and artifacts. This function takes an artifacts parameter where you can specify paths to these additional files:

Here is an example of how to log custom artifacts with your models:

    mlflow.pyfunc.log_model(
        artifacts={
            "model-weights": "/dbfs/path/to/file",
            "tokenizer_cache": "./tokenizer_cache"
        }
    )

In custom Python models logged with MLflow, you can access these dependencies within the model’s code using the context.artifacts attribute:

class CustomMLflowModel(mlflow.pyfunc.PythonModel):
    def load_context(self, context):
        # Each key in the artifacts dictionary resolves to a local file path
        self.weights_path = context.artifacts["model-weights"]
    ...

Summary

This chapter covered the various deployment options in Databricks for your ML models. We also learned about the multiple deployment paradigms and how you can implement them using the Databricks workspace. The book’s subsequent editions will detail the many new features that Databricks is working on to simplify the MLOps journey for its users.

In the next chapter, we will dive deeper into Databricks Workflows to schedule and automate ML workflows. We will go over how to set up ML training using the Jobs API. We will also take a look at the Jobs API’s integration with webhooks to trigger automated testing for your models when a model is transitioned from one registry stage to another.

