Understanding Data Transformation

One of the main jobs of any data engineer is to transform data in some way to make it usable for Business Intelligence (BI) applications or for data scientists or analysts. In Chapter 3, you learned the basics of a Spark application and how to ingest data.

Now, in this chapter, we are going to dive a bit deeper and look at some advanced topics that are essential for any data engineer to understand when using Spark to build data pipelines.

Here is a list of them:

  • Understanding the difference between transformations and actions
  • Learning how to aggregate, group, and join data
  • Leveraging advanced window functions
  • Working with complex dataset types

Technical requirements

All of the code and data for this chapter are located in our GitHub repository at the following location: https://github.com/PacktPublishing/Data-Engineering-with-Scala-and-Spark/tree/main/src/main/scala/com/packt/dewithscala/chapter6.

You also need to add the following dependency to work with spark-xml:

libraryDependencies += "com.databricks" %% "spark-xml" % "0.16.0"
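Once the dependency is on the classpath, XML files can be read through the standard DataFrame reader. The following is only a sketch under assumptions: the rowTag value and the file path are hypothetical, not taken from the chapter.

// Assumes an active SparkSession named spark, as used elsewhere in the chapter.
val dfDevicesXml = spark.read
  .format("xml")                      // short format name registered by spark-xml
  .option("rowTag", "device")         // hypothetical: the XML element that maps to one row
  .load("data/devices.xml")           // hypothetical path
dfDevicesXml.printSchema()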

Understanding the difference between transformations and actions

When working with datasets in Spark with Scala, it’s helpful to understand how and when execution takes place on your Spark cluster. Spark is lazy by design, meaning that it doesn’t transform your data until absolutely necessary. This allows it to run a batch of transformations together and apply optimizations that improve processing time.

Transformations are code statements that are lazily executed. Spark keeps a ledger of transformations until it encounters a code statement called an action, which tells Spark it is time to execute all of the accumulated transformations. A transformation is code that returns an RDD (short for resilient distributed dataset), dataset, or DataFrame; an action is code that returns a value to the driver, or writes data out, by actually running the transformations on the dataset you are processing.

Examples of action functions are as follows:

  • count
  • show
  • write
  • head
  • take

The following are...
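To make the distinction concrete, here is a minimal sketch under assumptions (the file path and column values are hypothetical, not the chapter’s exact code). The filter and select calls only build up a logical plan; nothing is computed until count runs.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("lazy-demo").master("local[*]").getOrCreate()
import spark.implicits._

// Transformations: each call returns a new DataFrame; no full pass over the data happens yet.
val df       = spark.read.option("header", "true").csv("data/netflix_titles.csv")  // hypothetical path
val filtered = df.filter($"country" === "United States")                           // transformation (lazy)
val selected = filtered.select("show_id", "country")                               // transformation (lazy)

// Action: count triggers Spark to optimize and execute the transformations above.
val rows = selected.count()
println(s"Rows after filtering: $rows")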

Learning how to aggregate, group, and join data

Another set of basic skills a data engineer needs is the ability to aggregate, group, and join data together. Let’s learn how to do this using Scala in Spark!

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.count
import spark.implicits._ // enables the $"column" syntax

// dfDirectorByShowSelectExpr is built earlier in the chapter (not shown in this excerpt)
val numDirectorsByShow: DataFrame =
  dfDirectorByShowSelectExpr
    .groupBy($"show_id")
    .agg(
      count($"director").alias("num_director")
    )
numDirectorsByShow.show(10, 0)

Here is the output:

+-------+------------+
|show_id|num_director|
+-------+------------+
|s1     |1           |
|s2     |1           |
|s3     |1           |
|s4     |1           |
...
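Joining follows the same fluent pattern. As a sketch only, assume a second DataFrame called dfShows holding show_id and title columns; these names are hypothetical, not the chapter’s exact variables.

// Sketch: dfShows is a hypothetical DataFrame with show_id and title columns.
val dfShowsWithDirectorCount = dfShows
  .join(numDirectorsByShow, Seq("show_id"), "left")   // left join keeps shows with no director rows
  .na.fill(0, Seq("num_director"))                    // treat a missing count as zero
dfShowsWithDirectorCount.show(10, truncate = false)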

Leveraging advanced window functions

Window functions are a way to perform calculations on a specific group of records or a sliding subset of records. They can be used to perform cumulative calculations, get values from other records relative to the position of the current record being processed, and perform ranking calculations. They can also use aggregate functions such as count, avg, min, and max. Let’s take a look at them now:

import org.apache.spark.sql.expressions.Window

val windowSpecRatingMonth =
  Window.partitionBy("rating", "month")
        .orderBy("year", "month")

Windows are created by defining a window specification. In the preceding example, we defined a window called windowSpecRatingMonth by partitioning our records by rating and month and ordering the records in the partition by year and month:

val dfWindowedLagLead = dfNetflixUSMetrics
  .withColumn(
    "cast_per_director...

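The lag, lead, and ranking functions pull values from neighboring rows, or assign positions, within each window partition. Here is a self-contained sketch under assumptions: dfSales and its store, month, and revenue columns are hypothetical, not the chapter’s Netflix metrics.

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, lag, lead, row_number, sum}

val w = Window.partitionBy("store").orderBy("month")

val dfWindowed = dfSales
  .withColumn("prev_revenue", lag(col("revenue"), 1).over(w))    // value from the previous row
  .withColumn("next_revenue", lead(col("revenue"), 1).over(w))   // value from the next row
  .withColumn("rank_in_store", row_number().over(w))             // ranking within each partition
  .withColumn(
    "running_total",
    sum(col("revenue")).over(w.rowsBetween(Window.unboundedPreceding, Window.currentRow))  // cumulative sum
  )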
Working with complex dataset types

In the real world, we very often have to deal with data that doesn’t fit into a standard table format with one value per column in each record. We saw a little of that previously with the cast and director columns of our Netflix titles CSV file, but what happens when we run into more complex structures?

In this section, we’ll show you how to manage nested data in semi-structured data, such as XML and JSON. Consider the following code:

val dfDevicesJson = spark.read.json(
"src/main/scala/com/packt/dewithscala/chapter6/data/devices.json")
dfDevicesJson.printSchema()
root
 |-- country: string (nullable = true)
 |-- device_id: string (nullable = true)
 |-- event_ts: timestamp (nullable = true)
 |-- event_type: string (nullable = true)
 |-- id: long (nullable = true)
 |-- line: string (nullable = true)
 |-- manufacturer: string (nullable = true)
 |-- observations: array (nullable = true)
 |    |-- element...
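A common next step with nested data like this is to flatten the array column with explode. The following is a sketch under assumptions: it treats each observations element as a struct with metric and value fields, which are hypothetical names.

import org.apache.spark.sql.functions.{col, explode}

val dfObservations = dfDevicesJson
  .select(col("device_id"), explode(col("observations")).alias("observation"))   // one row per array element
  .select(col("device_id"), col("observation.metric"), col("observation.value")) // hypothetical struct fields
dfObservations.printSchema()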

Summary

In this chapter, we learned how to transform our data using Scala with Spark. You now understand the difference between transformations and actions. You’ve learned how to use select, selectExpr, filter, join, and sort to reduce data to just what you need for your transformation. You’ve worked with various types of complex data and generated aggregations using group by and windows. We’ve covered a lot, and now you’ll be able to take what you’ve learned and apply it to a real-world scenario.

In the next chapter, we are going to cover how to work with various sources and sinks for object data, streaming data, and so on.
