Chapter 9: Databricks Runtime for Deep Learning

This chapter takes a deep dive into developing classic deep learning algorithms to train and deploy models based on unstructured data, exploring the relevant libraries and algorithms along the way. The examples focus on the particularities and advantages of using Azure Databricks for deep learning (DL), and on how we can efficiently train DL models there using the different libraries available to us.

The following topics will be introduced in this chapter:

  • Loading data for deep learning
  • Managing data using TFRecords
  • Automating schema inference
  • Using Petastorm for distributed learning
  • Reading a dataset
  • Data preprocessing and featurization

This chapter focuses more on deep learning models than on machine learning ones. The main distinction is that we will concentrate on handling large amounts of unstructured...

Technical requirements

To work through the examples in this chapter, you will need the following:

  • An Azure Databricks subscription
  • An Azure Databricks notebook attached to a running cluster with Databricks Runtime ML version 7.0 or higher

Loading data for deep learning

In this chapter, we will learn how to prepare data for distributed training. To do this, we will learn how to load data efficiently to build deep learning-based applications that leverage the distributed computing nature of Azure Databricks while handling large amounts of data. We will describe two methods at our disposal for working with large datasets in distributed training: Petastorm and TFRecord, libraries that make it easier to load large and complex datasets into our deep learning algorithms in Azure Databricks.

At a quick glance, the main characteristics of the Petastorm and TFRecord methods are as follows:

  • Petastorm: An open source library that allows us to load data in Apache Parquet format directly to train our deep learning algorithms (a minimal sketch follows this list). This is a great fit for Azure Databricks because Parquet is a widely used format when working with large amounts of data...
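
As a minimal sketch of this direct Parquet access, Petastorm's make_batch_reader (which works on plain Parquet stores, with no Petastorm-specific metadata required) can iterate over a Parquet copy of a dataset; the path and column name below are illustrative assumptions, not values from this chapter:

    from petastorm import make_batch_reader

    # make_batch_reader reads vanilla Parquet and yields batches of rows
    # as named tuples of NumPy arrays
    with make_batch_reader('file:///dbfs/tmp/flowers/parquet') as reader:
        for batch in reader:
            print(batch.label_index[:10])  # hypothetical column name
            break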

Managing data using TFRecords

In this section, we will demonstrate how to save image data from Spark DataFrames to TFRecords and load it using TensorFlow in Azure Databricks. As an example, we will use the flowers dataset available in the Databricks filesystem, which contains flower photos stored under five subdirectories, one per class.

We will load the flowers Delta table, which contains the flowers dataset preprocessed using the binary file data source and stored as a Spark DataFrame. We will use this data to demonstrate how you can save data from Spark DataFrames to TFRecords:

  1. As the first step, we will load the data using PySpark:
    from pyspark.sql.functions import col
    import tensorflow as tf

    # Load the preprocessed flowers dataset from its Delta table, keeping
    # only the raw image bytes and the numeric label for the first 100 rows
    spark_df = spark.read.format("delta").load("/databricks-datasets/flowers/delta") \
      .select(col("content"), col("label_index")) \
      .limit(100)
  2. Next, we will save the loaded data to TFRecord-type files:
    path =...
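
A minimal sketch of this saving step, assuming the spark-tensorflow-connector data source (exposed as the tfrecords format in Databricks Runtime ML) and a hypothetical output path, might look as follows:

    # Hypothetical output path; the actual path is elided above
    path = "/ml/flowers/df-flowers.tfrecord"

    # Write each row as a TensorFlow Example record via
    # spark-tensorflow-connector's "tfrecords" data source
    spark_df.write.format("tfrecords") \
        .option("recordType", "Example") \
        .mode("overwrite") \
        .save(path)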

Automating schema inference

The spark-tensorflow-connector library, which integrates Spark with TensorFlow, supports automatic schema inference when reading TensorFlow records into Spark DataFrames. Schema inference is an expensive operation because it requires an extra reading pass through the data, so it is good practice to specify the schema explicitly; doing so improves the overall performance of our pipeline.

The following Python code example demonstrates how we can do this on some test data we create as an example:

  1. Our first step is to define the schema of our data:
    from pyspark.sql.types import *

    path = "test-output.tfrecord"

    # Declare the schema up front so the connector can skip the extra
    # inference pass over the data
    fields = [StructField("id", IntegerType()),
              StructField("IntegerCol", IntegerType()),
              StructField("LongCol", LongType()),
              StructField("FloatCol", FloatType()),
              StructField("DoubleCol", DoubleType()),
              StructField("VectorCol", ArrayType(DoubleType(), True))]
    schema = StructType(fields)
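
With the schema in hand, we can supply it when reading the records back. The following is a minimal sketch of that read, assuming the tfrecords data source name used by spark-tensorflow-connector and the path and schema objects defined above:

    # Read the TFRecord file into a Spark DataFrame, supplying the schema
    # explicitly so the connector skips its inference pass
    df = spark.read.format("tfrecords") \
        .option("recordType", "Example") \
        .schema(schema) \
        .load(path)
    df.show()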

Using Petastorm for distributed learning

Petastorm is an open source library that allows us to do single-machine or distributed training of machine and deep learning algorithms on datasets stored as Apache Parquet files. It supports popular frameworks such as PyTorch, TensorFlow, and PySpark, and can also be used from other Python applications. Petastorm provides a simple function that augments the Parquet format with Petastorm-specific metadata so that the data can be consumed in machine and deep learning model training. We can read our data simply by creating a reader object pointing at the Databricks File System and iterating over it. Under the hood, Petastorm uses the PyArrow library to read Parquet files.

In this section, we will discuss how we can use Petastorm to further improve the performance of our machine and deep learning training pipelines in Azure Databricks.

Introducing Petastorm

As mentioned before, Petastorm is an open source library that enables a single machine...

Reading a dataset

Reading datasets using Petastorm is straightforward. In this section, we will demonstrate how to load a Petastorm dataset into two frequently used deep learning frameworks, TensorFlow and PyTorch:

  1. To load our Petastorm datasets, we use the petastorm.reader.Reader class, which implements the iterator interface that allows us to use plain Python to go over the samples very efficiently. The petastorm.reader.Reader class can be created using the petastorm.make_reader factory method:
    import matplotlib.pyplot as plt

    from petastorm import make_reader

    # make_reader returns a petastorm.reader.Reader, which yields each
    # sample as a named tuple of the dataset's Unischema fields
    with make_reader('dfs://some_dataset') as reader:
        for sample in reader:
            print(sample.id)
            plt.imshow(sample.image1)
  2. The following code example shows how we can stream a dataset into the TensorFlow Examples class, which, as we have seen before, is a named tuple whose keys are the ones specified in the Unischema of...
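
A minimal sketch of that streaming step, assuming the same placeholder dataset path as in step 1 and Petastorm's make_petastorm_dataset helper from petastorm.tf_utils, might look like this:

    from petastorm import make_reader
    from petastorm.tf_utils import make_petastorm_dataset

    # Wrap the Petastorm reader in a tf.data.Dataset; each element is a
    # named tuple keyed by the fields of the dataset's Unischema
    with make_reader('dfs://some_dataset') as reader:
        dataset = make_petastorm_dataset(reader)
        for sample in dataset.take(1):
            print(sample.id)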

Data preprocessing and featurization

Featurization is the process of transforming unstructured data such as text, images, or time-series data into continuous numerical features that are more easily handled by machine and deep learning models. It differs from feature engineering in that, in feature engineering, the variables are already in numerical form or have a more defined structure, and the task is to refactor or transform them into something from which a machine or deep learning algorithm can extract patterns more easily. In featurization, we must first define how we will extract numerical features from the unstructured data we have.

We need to perform featurization because our deep learning models cannot interpret unstructured data directly; therefore, we need not only to extract features but to do so in a computationally efficient manner. This process needs to be incorporated into...
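
To make this concrete, here is a minimal sketch of image featurization with a pre-trained network; the choice of ResNet50 as the feature extractor is an illustrative assumption rather than this chapter's specific example:

    import numpy as np
    from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

    # Pre-trained backbone with the classification head removed: it maps
    # each 224x224 RGB image to a fixed-length 2048-dimensional vector
    model = ResNet50(include_top=False, pooling="avg", weights="imagenet")

    def featurize(images: np.ndarray) -> np.ndarray:
        # images: a batch of raw RGB images with shape (N, 224, 224, 3)
        return model.predict(preprocess_input(images))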

Summary

In this chapter, we discussed how we can use the TFRecords and Petastorm libraries to make it easier to load large amounts of data for training our distributed deep learning models. Along the way, we learned how these records are structured, how to handle expensive operations such as automatic schema inference, how to prepare records for consumption, and how to use them not only with deep learning frameworks but also in pure Python applications.

We finished the chapter with an example of how we can leverage pre-trained models to extract new features based on domain knowledge, which can later be applied to extract features for training a new model.

In the next chapter, we will learn how we can fine-tune the parameters of our deep learning models to improve their performance in Azure Databricks.
