
You're reading from  Data Engineering with Scala and Spark

Product type: Book
Published in: Jan 2024
Publisher: Packt
ISBN-13: 9781804612583
Edition: 1st Edition
Authors (3):
Eric Tome

Eric Tome has over 25 years of experience working with data. He has contributed to and led teams that ingested, cleansed, standardized, and prepared data used by business intelligence, data science, and operations teams. He has a background in mathematics and currently works as a senior solutions architect at Databricks, helping customers solve their data and AI challenges.

Rupam Bhattacharjee

Rupam Bhattacharjee works as a lead data engineer at IBM. He has architected and developed data pipelines, processing massive structured and unstructured data using Spark and Scala for on-premises Hadoop and K8s clusters on the public cloud. He has a degree in electrical engineering.

David Radford

David Radford has worked in big data for over 10 years, with a focus on cloud technologies. He led consulting teams for several years, completing a migration from legacy systems to modern data stacks. He holds a master's degree in computer science and works as a senior solutions architect at Databricks.


Object Stores and Data Lakes

Enterprises have leaned heavily on databases and data warehouses for many decades. Around the turn of the millennium, the internet age was beginning to take hold. The proliferation of connected devices began to present a volume and variety of data that traditional databases and warehouses could no longer keep up with.

While developing a web indexing solution to handle this large influx of data, Google published a paper in 2003 titled The Google File System (GFS) that would shape industry solutions for the next two decades. This work enabled the development of data lakes, which in turn led to lakehouses. A data lake is a distributed file system that provides a cost-efficient way to store structured, semi-structured, and unstructured data. Lakehouses combine the capabilities of data warehouses and data lakes. We’re going to learn how to work with object stores, which are the foundational technology and storage for both data lakes and lakehouses...
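
As a quick preview of what that looks like in practice, the following is a minimal sketch (not from the book) of reading from and writing to an object store with Spark. The bucket name and paths are assumptions, and the s3a connector is assumed to be available on the classpath:

import org.apache.spark.sql.SparkSession

val spark = SparkSession
 .builder()
 .appName("object store example")
 .master("local[*]")
 .getOrCreate()

// Read a raw CSV file straight from cloud object storage (hypothetical bucket)
val sales = spark.read
 .option("header", "true")
 .csv("s3a://my-bucket/raw/sales.csv")

// Write it back to the curated zone of the data lake as Parquet
sales.write
 .mode("overwrite")
 .parquet("s3a://my-bucket/curated/sales")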

Understanding distributed file systems

The GFS paper showed the technology world how to successfully store and access data at a massive scale for the time: hundreds of terabytes spread across thousands of commodity servers. Not only could such a system store and access vast amounts of data, but it could also store non-traditional types of data.

The rise of the internet brought with it video files, images, audio, email, and HTML. Data warehouses could not store and use these types of data, so the new distributed file system was a natural fit. The approach quickly took hold in the industry through Apache Hadoop in 2005, which delivered the first widely adopted distributed file system, the Hadoop Distributed File System (HDFS), along with a processing framework, MapReduce. This newfound scalability in storage and compute at commodity prices gave rise to data lakes.

Now, let’s dive into data lakes and explore...

Streaming data

Streaming data is an often misunderstood topic because streaming is commonly thought of as synonymous with real-time data processing. That level of processing needs some type of compute resource running continuously to keep the data as up to date as possible, and is therefore seen as very expensive. Some engineering teams avoid this architecture because of budgetary constraints, and because their use case only requires data to be refreshed at some frequency, such as daily, hourly, or twice a day. While this is true for many scenarios, it misses the main purpose of a streaming architecture, which is incremental processing. Incremental processing is the holy grail of data engineering because processing less data typically means a pipeline costs less to run.
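
To make the incremental idea concrete, here is a minimal sketch (not from the book) of a Structured Streaming job that picks up only the files that have arrived since its last run and then shuts down. It uses Trigger.AvailableNow, available in Spark 3.3 and later; the input, output, and checkpoint paths are assumptions:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession
 .builder()
 .appName("incremental example")
 .master("local[*]")
 .getOrCreate()

// Streaming file sources require an explicit schema
val incoming = spark.readStream
 .format("json")
 .schema("id INT, amount DOUBLE")
 .load("/data/landing/orders")                        // hypothetical landing path

// The checkpoint records which files have already been processed, so each
// run only touches new data; the trigger stops the query once it catches up.
incoming.writeStream
 .format("parquet")
 .option("path", "/data/lake/orders")                 // hypothetical output path
 .option("checkpointLocation", "/data/chk/orders")    // hypothetical checkpoint path
 .trigger(Trigger.AvailableNow())
 .start()
 .awaitTermination()

Run on a schedule, say hourly or daily, a job like this behaves like a batch job but only ever processes new data, which is exactly the cost advantage incremental processing offers.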

This section will show how to stream from different sources and process these streams into different destinations or sinks.

There are many different ways to set up...

Working with streaming sources

Apache Spark uses the Structured Streaming API to read from a large variety of streaming sources. The API is easy for developers to use because it closely mirrors Spark’s batch API, so developers can reuse their knowledge across both use cases. The following are examples of using Spark to read from Kafka and Kinesis and show the results in the console. The writeStream portion will be covered in more detail later in this section:

//KINESIS
import org.apache.spark.sql.SparkSession

// Create a local Spark session
val spark = SparkSession
 .builder()
 .appName("Spark SQL basic example")
 .master("local[*]")
 .getOrCreate()

// Read the Kinesis stream as a streaming DataFrame. Credentials come from
// environment variables; the secret key variable name (AWS_SECRET_KEY) is
// assumed here to mirror the access key line above.
val df = spark.readStream.format("kinesis")
 .option("streamName","scala-book")
 .option("region","us-east-1")
 .option("initialPosition","TRIM_HORIZON")
 .option("awsAccessKeyId",sys.env.getOrElse("AWS_ACCESS_KEY_ID",""))
 .option("awsSecretKey",sys.env.getOrElse("AWS_SECRET_KEY",""))
 .load()

Summary

In this chapter, we learned about the evolution of storage to manage large volumes of data with distributed file systems. These systems have evolved over time, from data lakes to lakehouses that use cloud object storage to efficiently store types and volumes of data that data warehouses cannot handle.

We also learned how to read, write, and modify data stored in these systems. We then learned about streaming systems and the various ways to use them to enrich and store data in an incremental fashion, all of which is fundamental knowledge the Scala data engineer needs to know.

In the next chapter, we’ll dive deep into how to further transform and use data.

