
You're reading from Distributed Data Systems with Azure Databricks

Product type: Book
Published in: May 2021
Reading level: Beginner
Publisher: Packt
ISBN-13: 9781838647216
Edition: 1st
Author: Alan Bernardo Palacio

Alan Bernardo Palacio is a data scientist and an engineer with broad experience across different engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in several industries. He has worked for companies such as Ernst and Young and Globant, and now holds a data engineer position at Ebiquity Media, helping the company create a scalable data pipeline. Alan graduated with a mechanical engineering degree from the National University of Tucuman in 2015, founded startups, and later earned a master's degree from the Faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now works and resides in the Netherlands.

Chapter 2: Creating an Azure Databricks Workspace

In this chapter, we will apply all the concepts we explored in Chapter 1, Introduction to Azure Databricks. We will create our first Azure Databricks workspace using the UI, and then explore the different options for resource management through the Azure CLI, how to deploy these resources using ARM templates, and how to integrate Azure Databricks into our virtual network using VNet injection.

In this chapter, we will discuss the following topics:

  • Using the Azure portal UI
  • Examining Azure Databricks authentication
  • Working with VNets in Azure Databricks
  • Azure Resource Manager templates
  • Setting up the Azure Databricks CLI

We will first begin by creating our workspace from the Azure portal UI.

Technical requirements

The most important prerequisite for this chapter is an Azure subscription with sufficient credit and permissions. Remember that Azure Databricks is a pay-as-you-go service; nevertheless, you can create a free trial subscription. You can find more information about this option in the Azure portal (https://azure.microsoft.com/en-us/free/).

Using the Azure portal UI

Let's start by setting up a new Databricks workspace through the Azure portal UI:

  1. Log in to the Azure portal of your subscription and navigate to the Azure services ribbon.
  2. Click on Azure Databricks:

    Figure 2.1 – Creating an Azure Databricks service

  3. This will lead you to the Azure Databricks default folder in which you will see all your resources listed. Click on Create new resource to create an Azure Databricks workspace environment:

    Figure 2.2 – Your Azure Databricks deployed resources

  4. Once you click on Create Azure Databricks Service, you will have to fill in a few details about the workspace you are creating. The settings look like this:
    • Workspace name
    • Subscription
    • Resource group
    • Location
    • Pricing Tier
    • Deploy Azure Databricks workspace in your Virtual Network (Preview)

      The name of the workspace can be whatever you like, but it is always good to pick a name that is simple and references the use that...
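The same details the portal form asks for can also be supplied from the command line. The following is a sketch using the `databricks` extension of the Azure CLI; the resource group, workspace name, location, and SKU are placeholder values you would replace with your own, and the extension must be installed first:

```shell
# The Databricks commands live in an Azure CLI extension.
az extension add --name databricks

# Create a workspace with the same settings as the portal form:
# name, resource group, location, and pricing tier (SKU).
# All names below are illustrative placeholders.
az databricks workspace create \
  --resource-group my-resource-group \
  --name my-databricks-workspace \
  --location westeurope \
  --sku standard
```

This requires being logged in (`az login`) with a subscription that has permission to create resources in the target resource group.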

Examining Azure Databricks authentication

In Azure Databricks, authentication is handled through our Azure AD account, which, in some cases, can be linked to our Microsoft account. Premium-tier subscriptions allow us to manage access to our assets through access control in a more fine-grained manner.

Access control

If we want to share the URL of our workspace with a user to collaborate, we must first grant that user access. To do this, we can either grant them the Owner or Contributor role on the asset we want to share, or manage access in Admin Console:

  1. You can access Admin Console by clicking on the resource name icon in the top-right corner:

    Figure 2.15 – Admin Console

  2. After that, users can be added by clicking on the Add User button and then selecting the role you would like that user to have. You can create internal groups and apply a more detailed control over folder and workspace assets:

Figure 2.16 – Adding a user

Users can...

Working with VNets in Azure Databricks

Azure Databricks can be deployed within a custom virtual network. This is called VNet injection and is very important from a security perspective. When we deploy with the default settings, inbound traffic is closed but outbound traffic is open without restriction. When we use VNet injection and deploy directly into a custom virtual network, we can apply the same security policies across all our Azure services to meet compliance and security requirements.
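As a sketch of what VNet injection looks like from the command line, the Azure CLI Databricks extension exposes parameters for the custom VNet and its two subnets. The flag names should be confirmed with `az databricks workspace create --help` for your extension version, and every resource name below is a hypothetical placeholder:

```shell
# VNet injection at workspace creation time: the workspace is deployed into
# an existing VNet with a public (host) and a private (container) subnet.
# All names are illustrative; verify the flags against your extension version.
az databricks workspace create \
  --resource-group my-resource-group \
  --name my-databricks-workspace \
  --location westeurope \
  --sku premium \
  --vnet my-vnet \
  --public-subnet public-subnet \
  --private-subnet private-subnet
```

The VNet and both subnets must already exist, and the subnets must be delegated to Azure Databricks before the deployment will succeed.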

If you are working in data science or exploratory environments, it's good to leave outbound traffic open so that you can download packages and libraries for Python, R, and Maven, as well as Ubuntu packages.

As we have mentioned before, Azure Databricks works on two planes. The first is the control plane, which we use through the Databricks API to work with workspace assets. The second is the data plane, where the clusters are deployed. It is this second plane...

Azure Resource Manager templates

ARM templates are infrastructure as code: they allow us to deploy resources automatically in an agile manner. These templates are JSON files that define infrastructure and configuration declaratively, specifying resources and their properties. We can deploy several resources as a single deployment and modify existing configurations. Just like code, a template can be stored in a repository and versioned, and anyone can run it to deploy similar environments.

ARM templates are then passed to the ARM API, which deploys the specified resources. These can include virtual networks, VMs, or an Azure Databricks workspace.
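As an illustration, a minimal template for an Azure Databricks workspace might look like the following. This is a sketch, not a production template: the parameter name and managed resource group naming are choices made here for the example, and the `apiVersion` should be checked against the current ARM reference for `Microsoft.Databricks/workspaces`:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "workspaceName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Databricks/workspaces",
      "apiVersion": "2018-04-01",
      "name": "[parameters('workspaceName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "standard" },
      "properties": {
        "managedResourceGroupId": "[concat(subscription().id, '/resourceGroups/', parameters('workspaceName'), '-managed-rg')]"
      }
    }
  ]
}
```

Note that Azure Databricks always creates a locked, managed resource group alongside the workspace; the template only chooses its name.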

These templates have two modes of operation: Complete and Incremental. When we deploy in Complete mode, ARM deletes any resources in the target resource group that are not specified in the template. An Incremental deployment leaves existing resources unchanged and adds the resources specified in the template.
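A template is handed to the ARM API through a resource group deployment. The following sketch assumes a hypothetical template file named azuredeploy.json with a workspaceName parameter; Incremental is the default mode, and Complete should be used with care given the deletion behavior described above:

```shell
# Deploy an ARM template into an existing resource group.
# File and parameter names are placeholders for this example.
az deployment group create \
  --resource-group my-resource-group \
  --template-file azuredeploy.json \
  --parameters workspaceName=my-databricks-workspace \
  --mode Incremental
```

Switching `--mode Incremental` to `--mode Complete` makes this same command remove any resources in my-resource-group that the template does not declare.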

The limitation of these templates...

Setting up the Azure Databricks CLI

Azure Databricks comes with a CLI tool that allows us to manage our resources. It's built on top of the Azure Databricks API and allows you to access the workspace, jobs, clusters, libraries, and more. This is an open source project hosted on GitHub.

The Azure Databricks CLI is based on Python 3 and is installed through the following pip command:

pip3 install databricks-cli

You can confirm that the installation was successful by checking the version. If the installation succeeded, the current version of the Azure Databricks CLI will be printed:

databricks --version

It's good to bear in mind that the Databricks CLI cannot be used with firewall-enabled storage containers; in that case, it is recommended to use Databricks Connect or az storage.

To install the Azure Databricks CLI, you need Python 3 already installed and added to the PATH of the environment you will be working in.
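Once installed, the CLI has to be pointed at your workspace before any command will work. A common way to do this is with a personal access token generated from the workspace's User Settings page; the following sketch shows the interactive flow, where the host URL and token are values from your own workspace:

```shell
# Prompts for the workspace URL (e.g. https://adb-<id>.<n>.azuredatabricks.net)
# and a personal access token, then stores them in ~/.databrickscfg.
databricks configure --token

# A quick smoke test once authentication is configured:
databricks clusters list
```

If the configuration is correct, the last command lists the clusters in the workspace (or prints nothing if no clusters exist yet).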

Authentication...

Summary

In this chapter, we explored the possibilities of creating an Azure Databricks service either through the UI or the ARM templates, and explored the options in terms of enforcing access control on resources. We also reviewed the different authentication methods, the use of VNets to have a consistent approach when dealing with access policies throughout Azure resources, and how we use the Databricks CLI to create and manage clusters, jobs, and other assets. This knowledge will allow us to efficiently deploy the required resources to work with data in Azure Databricks while maintaining control over how these assets access and transform data.

In the next chapter, we will apply these and the preceding concepts to run more advanced notebooks, create ETLs, run data science experiments, and more. We'll start with ETL pipelines in Chapter 3, Creating ETL Operations with Azure Databricks.
