You're reading from Data Engineering with Scala and Spark

Product type: Book
Published in: Jan 2024
Publisher: Packt
ISBN-13: 9781804612583
Edition: 1st Edition

Authors (3):

Eric Tome

Eric Tome has over 25 years of experience working with data. He has contributed to and led teams that ingested, cleansed, standardized, and prepared data used by business intelligence, data science, and operations teams. He has a background in mathematics and currently works as a senior solutions architect at Databricks, helping customers solve their data and AI challenges.

Rupam Bhattacharjee

Rupam Bhattacharjee works as a lead data engineer at IBM. He has architected and developed data pipelines, processing massive structured and unstructured data using Spark and Scala for on-premises Hadoop and K8s clusters on the public cloud. He has a degree in electrical engineering.

David Radford

David Radford has worked in big data for over 10 years, with a focus on cloud technologies. He led consulting teams for several years, completing a migration from legacy systems to modern data stacks. He holds a master's degree in computer science and works as a senior solutions architect at Databricks.

An Introduction to Apache Spark and Its APIs – DataFrame, Dataset, and Spark SQL

Apache Spark is written in Scala and has become the dominant distributed data processing framework due to its ability to ingest, enrich, and prepare data at scale for analytical use cases. As a data engineer, you will eventually have to work with data volumes that cannot be processed on a single machine. This chapter will teach you how to leverage Spark and its various APIs to do that processing on a cluster of machines.

In this chapter, we’re going to cover the following main topics:

  • Working with Apache Spark
  • Creating a Spark application using Scala
  • Understanding the Spark Dataset API
  • Understanding the Spark DataFrame API

Technical requirements

Please refer to our GitHub repository for all the code used in this chapter. The repository is located at the following URL: https://github.com/PacktPublishing/Data-Engineering-with-Scala.

Working with Apache Spark

The Apache Spark website (spark.apache.org) describes Spark as “a unified analytics engine for large-scale data processing. It provides high-level APIs in Java, Scala, Python, R, and an optimized engine.” Spark can be used for data engineering, machine learning (ML), and data science. Our focus will be on how it can be used for data engineering in Scala.

Spark is built and designed to process vast amounts of data, which it accomplishes by making its compute easily scalable and distributable across many machines. A Spark application is written by leveraging one of the Spark APIs that we will cover later in the chapter. For now, let’s take a look at how Spark applications work.

How do Spark applications work?

A Spark application runs on a Spark cluster, which is a connected group of nodes. These nodes can be virtual machines (VMs) or bare-metal servers. In terms of Spark architecture, there is one driver node and one to n executors running on your Spark cluster. The driver controls the executors, providing them with the instructions defined in your Spark application. Generally, the driver never actually touches the data you are processing; the executors are where the data is manipulated, following the instructions from the driver. This is depicted in the following diagram:

Figure 3.1 – Spark driver and executor architecture

Note that the following calculations assume linear scalability, which is not always the case. The actual gain from distributing the work across many nodes depends on the nature of the data and the transformations applied to the data.

On open source Spark, you can configure the number of executors...
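
As a hedged illustration of the idea (the property values shown are arbitrary examples, not recommendations from the book), executor count and sizing are controlled through standard Spark settings such as spark.executor.instances, spark.executor.cores, and spark.executor.memory, which can be supplied when the session is built or when the job is submitted:

import org.apache.spark.sql.SparkSession

// Sketch only: standard Spark property names with illustrative example values
val spark = SparkSession.builder()
  .appName("executor-sizing-example")      // illustrative application name
  .config("spark.executor.instances", "4") // number of executors
  .config("spark.executor.cores", "4")     // CPU cores per executor
  .config("spark.executor.memory", "8g")   // memory per executor
  .getOrCreate()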

Creating a Spark application using Scala

To create data engineering pipelines in Scala, we need to leverage the Spark framework and create a Spark application. Spark provides various APIs to work with data, each with its own pros and cons. Regardless of which API we use, we need to encapsulate it in a Spark application. Let’s create one now.

Each Spark application written in Scala needs a SparkSession. The SparkSession is an object that provides the entry point to the Spark APIs.

In order to use the SparkSession, we need to create a Scala object. A Scala object is an implementation of the singleton pattern; we use objects because each Spark application needs a single instance of Spark, which an object guarantees. Let’s create a Scala object with some commonly used imports for our first Spark application:

package com.packt.descala.scalaplayground

import org.apache.spark.sql.{
  DataFrame,
  Dataset,
  Row,
  SparkSession
}
import org.apache.spark.sql.functions...
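
To make this concrete, the following is a minimal sketch of what such an application object might look like; the object name, application name, and local master setting are illustrative assumptions rather than the book's exact code:

object ScalaPlayground {
  def main(args: Array[String]): Unit = {
    // Build (or reuse) the single SparkSession for this application
    val spark: SparkSession = SparkSession.builder()
      .appName("scala-playground") // illustrative application name
      .master("local[*]")          // assumption: run locally while developing
      .getOrCreate()

    // Spark API calls using the session go here

    spark.stop()
  }
}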

Understanding the Spark Dataset API

Spark provides various APIs for interacting with data. They are powerful tools for building data engineering pipelines in Scala because you can use the functionality they provide without having to write those functions yourself. The first API we will work with is the Dataset API.

A Dataset is a type of object that is a collection of other objects called Rows. These Row objects have a structure and data types that hold the data we process. The rows of a Dataset can be processed in parallel on our Spark cluster, as explained previously. Explicitly defining the structure and data types of objects is called strong typing. Being strongly typed means that each column in your row data is associated with a specified data type. Because Datasets are strongly typed, they are checked for errors at compile time, which is better than finding out you have a data type problem at runtime! Strong typing means you have to put in a little work ahead of...
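
As a hedged sketch of the idea (the Person fields and the personDataLocation path are illustrative assumptions, not the book's exact schema), a strongly typed Dataset can be created by pairing a case class with the reader:

// The case class defines the structure and data types Spark enforces at compile time
case class Person(id: Long, firstName: String, lastName: String)

import spark.implicits._ // provides the encoders needed by .as[Person]

val personDs: Dataset[Person] = spark
  .read
  .format("parquet")
  .load(personDataLocation) // assumption: path to Parquet files with person data
  .as[Person]

// Field access is checked at compile time; a typo here would not compile
personDs.filter(p => p.lastName.startsWith("B")).show()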

Understanding the Spark DataFrame API

DataFrames are the most commonly used Spark API. They are a special type of Dataset with a type of Row (that is, Dataset[Row]). The major difference between DataFrames and Datasets is that DataFrames are not strongly typed, so data types are not checked at compile time. Because of this, they are arguably easier to work with, as they do not require you to provide any structure when defining them.

We create a DataFrame in a similar way to how we created a Dataset:

val personDf: DataFrame = spark
  .read
  .format("parquet")
  .load(personDataLocation)

This is the output in the Spark console:

Figure 3.8 – DataFrame with our person data in the Spark console

The main difference is that we are not required to specify a type while instantiating the DataFrame object or when calling spark.read.
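
To make the typing difference concrete, here is a hedged sketch (the column names used are illustrative assumptions): with a DataFrame, columns are referenced by name as plain strings, so a mistake in a column name only surfaces when the query runs, whereas the typed Dataset shown earlier would fail to compile.

// Column names are only validated when the query executes
personDf.select("firstName").show()

// This also compiles, but fails at runtime because the column does not exist
// personDf.select("firstNme").show()

Now, let’s take a look at the Spark SQL module.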

Spark SQL

Spark SQL is another way to interact with...
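
Although the discussion is cut off here, the general pattern is worth a brief, hedged sketch (the view name and query are illustrative assumptions, not the book's exact example): a DataFrame can be registered as a temporary view and then queried with SQL, and the result comes back as another DataFrame.

// Register the DataFrame as a temporary view so it can be queried with SQL
personDf.createOrReplaceTempView("person")

// Run a SQL query against the view; the result is itself a DataFrame
val resultDf: DataFrame = spark.sql("SELECT * FROM person LIMIT 10")
resultDf.show()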

Summary

In this chapter, we learned how distributed data processing works on Spark. You now understand how Spark uses Scala code encapsulated in a Spark application to break down datasets into pieces that are processed on executors on a Spark cluster. You have created a simple Spark application that uses a SparkSession to interact with the Spark APIs to manipulate data. You now have the basics to move on to more challenging topics such as ingesting data, transforming it, and loading it into target systems.

In the next chapter, we are going to look at various database operations, starting with the Spark JDBC API and working our way through building a small database API of our own.
