You're reading from Distributed Data Systems with Azure Databricks

Product type: Book
Published in: May 2021
Reading level: Beginner
Publisher: Packt
ISBN-13: 9781838647216
Edition: 1st
Author: Alan Bernardo Palacio

Alan Bernardo Palacio is a data scientist and engineer with extensive experience across several engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in a number of industries. He has worked for companies such as Ernst & Young and Globant, and now holds a data engineer position at Ebiquity Media, helping the company build a scalable data pipeline. Alan graduated with a degree in mechanical engineering from the National University of Tucumán in 2015, founded startups, and later earned a master's degree from the Faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now lives and works in the Netherlands.
Chapter 7: Using Python Libraries in Azure Databricks

Azure Databricks supports several programming languages, but in this book we focus on Python. In this chapter we will therefore explore the nuances of working with Python in Azure Databricks and introduce core concepts around models and data that will be studied in more detail later.

In this chapter, we will cover the following:

  • Installing popular Python libraries in Azure Databricks
  • Learning key concepts of the PySpark API
  • Using the Koalas API to manipulate data in a way similar to pandas
  • Using visualization libraries to make plots and graphics

These concepts are covered in more depth in the sections that follow.

Technical requirements

This chapter requires an Azure Databricks subscription so that you can work through the examples in notebooks attached to running clusters.

Without further ado, we can start by looking at the different ways in which we can install libraries in Azure Databricks and the differences between each method.

Installing libraries in Azure Databricks

We can make use of third-party or custom code by installing libraries written in Python, Java, Scala, or R. These libraries will be available to notebooks and jobs running on your clusters depending on the level at which the libraries were installed.

In Azure Databricks, installing libraries can be done in different ways, the most important decision being at which level we will be installing these libraries. The options available are at the workspace, cluster, or notebook level:

  • Workspace libraries serve as a local repository from which you create cluster-installed libraries. A workspace library might be custom code created by your organization or might be a particular version of an open-source library that your organization has standardized on.
  • Cluster libraries are available to be used by all notebooks attached to that cluster. You can install a cluster library from a public repository or create one from a previously installed...
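As a sketch of the notebook-level option, libraries can be installed with notebook-scoped pip magic commands, which make the package available only to the current notebook session (the package name and version below are purely illustrative):

```shell
# Notebook-scoped install: visible only to this notebook's session,
# not to other notebooks attached to the same cluster.
%pip install bokeh

# Pin a specific version when your organization has standardized on one.
%pip install bokeh==2.4.3
```

Cluster-level installs, by contrast, are done through the cluster's Libraries UI and apply to every notebook attached to that cluster.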

PySpark API

We have been using the PySpark API throughout the previous sections when describing the features of Azure Databricks, without discussing its functionality in much detail or how we can leverage it to build reliable ETL operations on big data. PySpark is the Python API for Apache Spark, the cluster-computing framework at the heart of Azure Databricks.

Main functionalities of PySpark

PySpark allows you to harness the power of distributed computing with the ease of use of Python, and it is the default way in which we express our computations throughout this book unless stated otherwise.

The fundamentals of PySpark lie in the functionality of its sub-packages, of which the most central are the following:

  • PySpark DataFrames: Distributed collections of rows organized into named columns. These DataFrames are immutable and allow us to perform lazy computations.
  • The PySpark SQL module: A higher-abstraction module for processing structured and semi-structured datasets...

pandas DataFrame API (Koalas)

Data scientists and data engineers who use Python are very familiar with manipulating data in pandas DataFrames. pandas is a Python library for data manipulation and analysis, but it lacks the capability to work with big data, so it is only suitable for small datasets. When we need to work with more data, the most common option is PySpark, as demonstrated in the previous section, which has a very different syntax from pandas.

Koalas is a library that eases the transition from pandas to working with big data in Azure Databricks. Koalas has a syntax that is very similar to the pandas API but with the functionality of PySpark.

Not all the pandas methods have been implemented and there are many small differences or subtleties that must be considered and might not be obvious. We cannot understand Koalas without understanding PySpark.

Koalas' functionality is built...
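The similarity in syntax can be sketched from the pandas side. The example below runs in plain pandas; on Databricks, swapping `import pandas as pd` for `import databricks.koalas as ks` (and `pd` for `ks`) would run essentially the same logic, distributed on Spark:

```python
import pandas as pd

# pandas: an eager, single-machine DataFrame.
df = pd.DataFrame({"city": ["Ams", "Bcn", "Ams"], "sales": [10, 20, 30]})

# Familiar pandas idiom: groupby followed by an aggregation.
totals = df.groupby("city")["sales"].sum()
print(totals["Ams"])  # 40

# The Koalas equivalent is nearly identical in form:
#   import databricks.koalas as ks
#   kdf = ks.DataFrame({"city": [...], "sales": [...]})
#   kdf.groupby("city")["sales"].sum()
# but executes on Spark, so it scales beyond a single machine.
```

This is exactly the kind of method where the subtleties mentioned above appear: not every pandas method exists in Koalas, and some behave slightly differently on distributed data.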

Visualizing data

We can use popular Python libraries such as Bokeh, Matplotlib, and Plotly to make visualizations in Azure Databricks. In this section, we will learn how we can use these libraries in Azure Databricks and how we can make use of notebook features to work with these visualizations.

Bokeh

Bokeh is a Python interactive data visualization library used to create beautiful and versatile graphics, dashboards, and plots.

To use Bokeh, install the Bokeh PyPI package either at the cluster level, through the Libraries UI, attaching it to your cluster, or at the notebook level using pip commands.

Once the library is installed and imported into our notebook, displaying a Bokeh plot in Databricks takes two steps: first create the plot and generate an HTML document with the plot's data embedded in it (for example, using Bokeh's file_html or output_file functions), then pass this HTML to the Databricks displayHTML...
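Those two steps can be sketched as follows (the plot data and title are illustrative; `displayHTML` is only available inside a Databricks notebook, so it is shown commented out here):

```python
from bokeh.plotting import figure
from bokeh.embed import file_html
from bokeh.resources import CDN

# Step 1: build a simple line plot.
p = figure(title="Sample plot", x_axis_label="x", y_axis_label="y")
p.line([1, 2, 3, 4], [4, 3, 7, 5], line_width=2)

# Serialize the plot and its data into a standalone HTML document.
html = file_html(p, CDN, "Sample plot")

# Step 2: in a Databricks notebook, render it inline:
# displayHTML(html)
```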

Summary

In this chapter, we have deepened our knowledge of how to manipulate data using the PySpark API and its core features. We have also gone through the steps required to install Python libraries at different levels and how to use them to visualize our data with the display function.

Finally, we went through the basics of the Koalas API, which makes it easier to migrate from working with pandas to working with big data in Azure Databricks.

In the next chapter, we will learn how to use Azure Databricks to run machine learning experiments, train models, and make inferences on new data using libraries such as XGBoost, sklearn, and Spark's MLlib.
