You're reading from Practical Deep Learning at Scale with MLflow (1st Edition, Packt, July 2022, ISBN-13: 9781803241333)

Author: Yong Liu

Yong Liu has been working in big data science, machine learning, and optimization since his doctoral student years at the University of Illinois at Urbana-Champaign (UIUC) and later as a senior research scientist and principal investigator at the National Center for Supercomputing Applications (NCSA), where he led data science R&D projects funded by the National Science Foundation and Microsoft Research. He then joined Microsoft and, later, AI/ML start-ups in the industry. He has shipped ML and DL models to production and has been a speaker at the Spark/Data+AI Summit and the NLP Summit. He has recently published peer-reviewed papers on deep learning, linked data, and knowledge-infused learning in various ACM/IEEE conferences and journals.

Chapter 5: Running DL Pipelines in Different Environments

It is critical to have the flexibility to run a deep learning (DL) pipeline in different execution environments, whether local or remote, on-premises or in the cloud. This is because, at different stages of DL development, there are different constraints or preferences for improving development velocity or ensuring security compliance. For example, it is desirable to do small-scale model experimentation in a local laptop environment, while full hyperparameter tuning needs to run on a cloud-hosted GPU cluster for a quick turnaround time. Given the diverse hardware and software configurations of these execution environments, achieving this kind of flexibility within a single framework used to be a challenge. MLflow provides an easy-to-use framework to run DL pipelines at scale in different environments. We will learn how to do that in this chapter.

In this chapter, we...

Technical requirements

The following technical requirements are needed to complete this chapter:

An overview of different execution scenarios and environments

In the previous chapters, we mainly focused on learning how to track DL pipelines with MLflow's tracking capabilities, and most of our executions took place in a local environment, such as a laptop or desktop. However, as we already know, the full DL life cycle consists of different stages, where we may need to run a DL pipeline entirely, partially, or as a single step in a different execution environment. Here are two typical examples:

  • When accessing data for model training, it is not uncommon for the data to have to reside in an environment that is compliant with enterprise security and privacy policies, where neither the computation nor the storage can leave the compliant boundary.
  • When training a DL model, it is usually desirable to use a remote GPU cluster to maximize training efficiency, as a local laptop usually does not have the required hardware capability.

Both...

Running locally with local code

Let's start with the first running scenario, using the same Natural Language Processing (NLP) text sentiment classification example as the driving use case. You are advised to check out the following version of the source code from GitHub to follow along with the steps: https://github.com/PacktPublishing/Practical-Deep-Learning-at-Scale-with-MLFlow/tree/26119e984e52dadd04b99e6f7e95f8dda8b59238/chapter05. Note the specific Git commit hash in the URL path: we are asking you to check out that specific committed version, not the main branch.
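
Concretely, you can pin a local clone to that exact commit with standard Git commands; the folder layout follows from the URL above:

git clone https://github.com/PacktPublishing/Practical-Deep-Learning-at-Scale-with-MLFlow.git
cd Practical-Deep-Learning-at-Scale-with-MLFlow
# check out the pinned commit instead of the main branch
git checkout 26119e984e52dadd04b99e6f7e95f8dda8b59238
cd chapter05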

As a first execution exercise, let's start with the step of the DL pipeline that downloads the review data to local storage. Once you have checked out this chapter's code, you can run the following command to execute the pipeline's first step:

mlflow run . --experiment-name='dl_model_chapter05' -P pipeline_steps='download_data'
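
For context, the mlflow run . command works because MLflow finds an MLproject file in the current folder that declares the entry point and its parameters. The following is only a minimal sketch of what such a file could look like; the script name main.py, the parameter default, and the conda.yaml filename are assumptions, not the repository's exact contents:

name: dl_model_chapter05
conda_env: conda.yaml
entry_points:
  main:
    parameters:
      # a comma-separated list of pipeline steps to execute
      pipeline_steps: {type: str, default: all}
    command: "python main.py --steps {pipeline_steps}"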

Running remote code in GitHub locally

Now, let's see how to run remote code from a GitHub repository in a local execution environment. This allows us to run precisely the version that has been checked into the GitHub repository, identified by its commit hash. Let's use the same example as before by running the single download_data step of the DL pipeline that we have been using in this chapter. At the command prompt, run the following:

mlflow run https://github.com/PacktPublishing/Practical-Deep-Learning-at-Scale-with-MLFlow#chapter05 -v 26119e984e52dadd04b99e6f7e95f8dda8b59238  --experiment-name='dl_model_chapter05' -P pipeline_steps='download_data'

Notice the difference between this command line and the one in the previous section. Instead of using a dot to refer to a local copy of the code, we point to a remote GitHub repository (https://github.com/PacktPublishing/Practical-Deep-Learning-at-Scale-with-MLFlow) and the folder...
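
Incidentally, the -v flag takes a Git commit reference, so if you wanted the latest code rather than a pinned version, you could pass a branch name instead of the commit hash; this sketch assumes the repository's default branch is named main:

mlflow run https://github.com/PacktPublishing/Practical-Deep-Learning-at-Scale-with-MLFlow#chapter05 -v main --experiment-name='dl_model_chapter05' -P pipeline_steps='download_data'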

Running local code remotely in the cloud

In the previous chapters, we ran all our code in a local laptop environment and limited our DL fine-tuning step to only three epochs because of the laptop's limited power. This serves the purpose of getting the code running and tested quickly, but it does not build an actual high-performance DL model. We really need to run the fine-tuning step on a remote GPU cluster. Ideally, we should only have to change some configuration while still issuing the MLflow run command from a local laptop console, with the actual pipeline submitted to a remote cluster in the cloud. Let's see how we can do this for our DL pipeline.

Let's start with submitting code to run on a Databricks server. There are three prerequisites to set up first; with those in place, the submission itself is a small change to the command line, as the following sketch shows.
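
The sketch selects the Databricks backend with -b databricks and points to a cluster specification file via --backend-config. The JSON filename, the cluster settings, and the workspace experiment path are assumptions that you must adapt to your own Databricks setup:

# a minimal cluster specification for the remote run (all values are placeholders)
cat > gpu_cluster_spec.json <<'EOF'
{
  "spark_version": "10.4.x-gpu-ml-scala2.12",
  "node_type_id": "g4dn.xlarge",
  "num_workers": 1
}
EOF

# -b databricks submits the local project to the remote Databricks backend;
# note that on Databricks, the experiment name must be a workspace path
mlflow run . -b databricks --backend-config gpu_cluster_spec.json --experiment-name='/Shared/dl_model_chapter05' -P pipeline_steps='download_data'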

Running remotely in the cloud with remote code in GitHub

The most reliable way to reproduce a DL pipeline is to point to a specific version of the project code in GitHub and then run it in the cloud without involving any local resources. This way, we know the exact version of the code and use the same running environment defined in the project. Let's see how this works with our DL pipeline.

As a prerequisite and a reminder, the following three environment variables need to be set before you issue the MLflow run command for this section:

export MLFLOW_TRACKING_URI=databricks
export DATABRICKS_TOKEN=[databricks_token]
export DATABRICKS_HOST='https://[your databricks hostname]'

We already know how to set up these environment variables from the last section. There is potentially one more setup step needed, which is to allow your Databricks server to access your GitHub repository if it is non-public (see the following GitHub Token...
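
Putting the pieces together, a submission that uses both remote code and remote execution might look like the following sketch; as before, the backend-config filename, its contents, and the experiment path are assumptions to adapt to your own workspace:

mlflow run https://github.com/PacktPublishing/Practical-Deep-Learning-at-Scale-with-MLFlow#chapter05 -v 26119e984e52dadd04b99e6f7e95f8dda8b59238 -b databricks --backend-config gpu_cluster_spec.json --experiment-name='/Shared/dl_model_chapter05' -P pipeline_steps='download_data'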

Summary

In this chapter, we learned how to run a DL pipeline in different execution environments (local or remote Databricks clusters) using either local source code or GitHub project repository code. This is critical not just for reproducibility and flexibility in executing a DL pipeline; it also provides much better productivity and opens up future automation possibilities with CI/CD tools. The power to run one or multiple steps of a DL pipeline in a remote, resource-rich environment gives us the speed to execute the large-scale, compute- and data-intensive jobs typically seen in production-quality DL model training and fine-tuning. It also allows us to do hyperparameter tuning or cross-validation of a DL model when necessary. We will learn how to run large-scale hyperparameter tuning in the next chapter as our natural next step.

Further reading
