
Data Engineering with Scala and Spark

Product type: Book
Published in: Jan 2024
Publisher: Packt
ISBN-13: 9781804612583
Edition: 1st

Authors (3):

Eric Tome
Eric Tome has over 25 years of experience working with data. He has contributed to and led teams that ingested, cleansed, standardized, and prepared data used by business intelligence, data science, and operations teams. He has a background in mathematics and currently works as a senior solutions architect at Databricks, helping customers solve their data and AI challenges.

Rupam Bhattacharjee

Rupam Bhattacharjee works as a lead data engineer at IBM. He has architected and developed data pipelines that process massive volumes of structured and unstructured data using Spark and Scala, running on on-premises Hadoop clusters and Kubernetes clusters in the public cloud. He has a degree in electrical engineering.

David Radford

David Radford has worked in big data for over 10 years, with a focus on cloud technologies. He led consulting teams for several years, completing a migration from legacy systems to modern data stacks. He holds a master's degree in computer science and works as a senior solutions architect at Databricks.


What this book covers

Chapter 1, Scala Essentials for Data Engineers, introduces Scala for data engineering, highlighting its type safety, its adoption by major companies such as Netflix and Airbnb, its native integration with Spark, the software engineering mindset it fosters, and its versatility across both object-oriented and functional programming. The chapter covers functional programming, objects, classes, higher-order functions, polymorphism, variance, option types, collections, pattern matching, and implicits in Scala.
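
As a quick, illustrative taste of a few of those concepts (not code from the book), the following self-contained Scala snippet combines a higher-order function, an Option value, and pattern matching:

object ScalaEssentialsPreview extends App {
  // A higher-order function: takes another function as a parameter.
  def applyTwice(f: Int => Int, x: Int): Int = f(f(x))
  println(applyTwice(_ + 10, 5)) // prints 25

  // Option models a possibly missing value without using null.
  val maybeAge: Option[Int] = Some(42)

  // Pattern matching extracts the value or handles its absence.
  val message = maybeAge match {
    case Some(age) if age >= 18 => s"Adult, age $age"
    case Some(age)              => s"Minor, age $age"
    case None                   => "Age unknown"
  }
  println(message)
}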

Chapter 2, Environment Setup, presents two environments for developing data engineering pipelines. The first is a cloud-based setup, which offers portability and easy access but incurs costs to keep its systems running; the second uses your local machine, which requires more initial setup but avoids cloud expenses.

Chapter 3, An Introduction to Apache Spark and Its APIs – DataFrame, Dataset, and Spark SQL, focuses on Apache Spark as a leading distributed data processing framework. It emphasizes handling large data volumes across machine clusters. Topics include working with Spark, building Spark applications with Scala, and comprehending Spark’s Dataset and DataFrame APIs for effective data processing.
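
To give a concrete sense of what those APIs look like, here is an illustrative sketch (not the book's code) that assumes a local Spark installation and a hypothetical people.json input file with name and age fields:

import org.apache.spark.sql.SparkSession

// Binding rows to a case class gives compile-time type safety (the Dataset API).
case class Person(name: String, age: Long)

object SparkApiPreview extends App {
  val spark = SparkSession.builder()
    .appName("spark-api-preview")
    .master("local[*]") // run locally for illustration
    .getOrCreate()
  import spark.implicits._

  // DataFrame: untyped rows with a schema, read from the hypothetical JSON file.
  val df = spark.read.json("people.json")
  df.printSchema()

  // Dataset: the same data as strongly typed Person objects.
  val people = df.as[Person]
  people.filter(_.age > 30).show()

  // Spark SQL: the same query expressed as SQL against a temporary view.
  df.createOrReplaceTempView("people")
  spark.sql("SELECT name FROM people WHERE age > 30").show()

  spark.stop()
}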

Chapter 4, Working with Databases, dives into how relational databases are used within data pipelines, emphasizing efficient reading from and writing to databases. It covers Spark’s JDBC API and walks through building a straightforward database library: loading configurations, creating an interface, and executing multiple database operations.
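
A minimal, illustrative sketch of Spark’s JDBC reader and writer follows; the connection URL, table names, and credentials are placeholders, and the chapter builds a fuller abstraction around this API:

import org.apache.spark.sql.SparkSession

object JdbcPreview extends App {
  val spark = SparkSession.builder()
    .appName("jdbc-preview")
    .master("local[*]")
    .getOrCreate()

  // Read a table over JDBC; the URL, table, and credentials are placeholders.
  val customers = spark.read
    .format("jdbc")
    .option("url", "jdbc:postgresql://localhost:5432/shop") // hypothetical database
    .option("dbtable", "public.customers")
    .option("user", "spark_user")
    .option("password", sys.env.getOrElse("DB_PASSWORD", ""))
    .load()

  // Write the result of a transformation back to another table.
  customers.filter("active = true").write
    .format("jdbc")
    .option("url", "jdbc:postgresql://localhost:5432/shop")
    .option("dbtable", "public.active_customers")
    .option("user", "spark_user")
    .option("password", sys.env.getOrElse("DB_PASSWORD", ""))
    .mode("overwrite")
    .save()

  spark.stop()
}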

Chapter 5, Object Stores and Data Lakes, discusses the evolution from traditional databases to data lakes and lakehouses, driven by surging data volumes. The focus is on object stores, which are foundational to both data lakes and lakehouses.
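
As a brief illustration of how Spark addresses object stores (not code from the book; the bucket and paths are placeholders), data is referenced by URI scheme, such as s3a:// for Amazon S3, abfss:// for Azure Data Lake Storage, or gs:// for Google Cloud Storage, and read and written as columnar files:

import org.apache.spark.sql.SparkSession

object ObjectStorePreview extends App {
  val spark = SparkSession.builder()
    .appName("object-store-preview")
    .master("local[*]")
    .getOrCreate()

  // Read raw Parquet files from a placeholder object store location.
  val events = spark.read.parquet("s3a://my-data-lake/raw/events/")

  // Write back in a columnar format, partitioned by date for efficient pruning.
  events.write
    .partitionBy("event_date")
    .mode("append")
    .parquet("s3a://my-data-lake/curated/events/")

  spark.stop()
}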

Chapter 6, Understanding Data Transformation, goes deeper into essential Spark skills for data engineers aiming to transform data for downstream use cases. It covers advanced Spark topics such as the distinctions between transformations and actions, aggregation, grouping, joining data, utilizing window functions, and handling complex dataset types.
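
The following self-contained sketch (illustrative only) shows a few of these operations on a small in-memory dataset: grouping with aggregation, and a window function computing a running total per region:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window

object TransformationPreview extends App {
  val spark = SparkSession.builder()
    .appName("transformation-preview")
    .master("local[*]")
    .getOrCreate()
  import spark.implicits._

  val sales = Seq(
    ("2024-01-01", "east", 100.0),
    ("2024-01-01", "west", 250.0),
    ("2024-01-02", "east", 175.0)
  ).toDF("sale_date", "region", "amount")

  // Aggregation and grouping: total sales per region.
  val totals = sales.groupBy("region").agg(sum("amount").as("total_amount"))

  // Window function: running total per region, ordered by date.
  val byRegion = Window.partitionBy("region").orderBy("sale_date")
  val running = sales.withColumn("running_total", sum("amount").over(byRegion))

  // These are lazy transformations; nothing executes until an action such as show().
  totals.show()
  running.show()

  spark.stop()
}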

Chapter 7, Data Profiling and Data Quality, stresses the importance of data quality checks in preventing issues downstream. It introduces the Deequ library, an open source tool by Amazon, for defining checks, performing analysis, suggesting constraints, and storing metrics.
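
As a hedged illustration of what a Deequ check can look like (this is not the book's code, and the exact API may vary between Deequ versions):

import com.amazon.deequ.VerificationSuite
import com.amazon.deequ.checks.{Check, CheckLevel, CheckStatus}
import org.apache.spark.sql.SparkSession

object DataQualityPreview extends App {
  val spark = SparkSession.builder()
    .appName("data-quality-preview")
    .master("local[*]")
    .getOrCreate()
  import spark.implicits._

  val customers = Seq(
    (1, "alice@example.com", 34),
    (2, "bob@example.com", 29),
    (3, null, 41)
  ).toDF("id", "email", "age")

  // Define checks: completeness, uniqueness, and a non-negativity constraint.
  val result = VerificationSuite()
    .onData(customers)
    .addCheck(
      Check(CheckLevel.Error, "basic checks")
        .isComplete("id")
        .isUnique("id")
        .isComplete("email")
        .isNonNegative("age")
    )
    .run()

  if (result.status != CheckStatus.Success)
    println("Data quality checks failed")

  spark.stop()
}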

Chapter 8, Test-Driven Development, Code Health, and Maintainability, discusses software development best practices applied to data engineering: defect identification, code consistency, and security. It introduces Test-Driven Development (TDD), unit tests, integration tests, code coverage checks, static code analysis, and the importance of linting and code style.
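
A short, illustrative unit test in the TDD spirit is shown below, using ScalaTest as one common choice of framework (the book's tooling may differ); the function under test and its names are hypothetical:

import org.scalatest.funsuite.AnyFunSuite

// The function under test: a small, pure transformation that is easy to unit test.
object Cleansing {
  def normalizeEmail(raw: String): Option[String] = {
    val trimmed = raw.trim.toLowerCase
    if (trimmed.contains("@")) Some(trimmed) else None
  }
}

// A TDD-style test written before (or alongside) the implementation.
class CleansingSpec extends AnyFunSuite {
  test("normalizeEmail lower-cases and trims valid addresses") {
    assert(Cleansing.normalizeEmail("  Alice@Example.COM ") === Some("alice@example.com"))
  }

  test("normalizeEmail rejects values without an @") {
    assert(Cleansing.normalizeEmail("not-an-email") === None)
  }
}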

Chapter 9, CI/CD with GitHub, introduces Continuous Integration/Continuous Delivery (CI/CD) concepts in Scala data engineering projects using GitHub. It explains CI/CD as automated testing and deployment, aiming for rapid iteration, error reduction, and consistent quality.

Chapter 10, Data Pipeline Orchestration, focuses on data pipeline orchestration, emphasizing the need for seamless task coordination and failure notification. It introduces tools such as Apache Airflow, Argo, Databricks Workflows, and Azure Data Factory.

Chapter 11, Performance Tuning, emphasizes the critical role of the Spark UI in optimizing performance. It covers topics such as the Spark UI basics, performance tuning, computing resource optimization, understanding data skew, indexing, and partitioning.
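
As a small, illustrative example of the kinds of knobs this chapter discusses (the settings, key, and paths below are placeholders, not recommendations for every workload):

import org.apache.spark.sql.SparkSession

object TuningPreview extends App {
  val spark = SparkSession.builder()
    .appName("tuning-preview")
    .master("local[*]")
    // Adaptive query execution can rebalance skewed shuffle partitions at runtime.
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.shuffle.partitions", "64") // tune to data volume and cluster size
    .getOrCreate()

  val events = spark.read.parquet("s3a://my-data-lake/curated/events/") // placeholder path

  // Repartitioning by a well-distributed key before a wide operation reduces skew;
  // the Spark UI's stage and task views show whether it actually helped.
  events
    .repartition(64, events("customer_id"))
    .groupBy("customer_id")
    .count()
    .write
    .mode("overwrite")
    .parquet("s3a://my-data-lake/reports/event_counts/")

  spark.stop()
}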

Chapter 12, Building Batch Pipelines Using Spark and Scala, combines all of your previously learned skills to construct a batch pipeline. It stresses the significance of batch processing, leveraging Apache Spark’s distributed processing and Scala’s versatility. The topics cover a typical business use case, medallion architecture, batch data ingestion, transformation, quality checks, loading into a serving layer, and pipeline orchestration.
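
A compressed, illustrative sketch of the medallion (bronze/silver/gold) flow follows; the paths, column names, and cleansing rules are placeholders rather than the book's actual pipeline:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object BatchPipelinePreview extends App {
  val spark = SparkSession.builder()
    .appName("batch-pipeline-preview")
    .master("local[*]")
    .getOrCreate()

  // Bronze: raw data ingested as-is from the landing zone (placeholder paths).
  val bronze = spark.read.json("s3a://my-data-lake/landing/orders/")
  bronze.write.mode("append").parquet("s3a://my-data-lake/bronze/orders/")

  // Silver: cleansed and conformed data ready for downstream use.
  val silver = bronze
    .dropDuplicates("order_id")
    .filter(col("amount") > 0)
    .withColumn("ingested_at", current_timestamp())
  silver.write.mode("overwrite").parquet("s3a://my-data-lake/silver/orders/")

  // Gold / serving layer: business-level aggregates.
  silver.groupBy("customer_id")
    .agg(sum("amount").as("lifetime_value"))
    .write.mode("overwrite")
    .parquet("s3a://my-data-lake/gold/customer_value/")

  spark.stop()
}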

Chapter 13, Building Streaming Pipelines Using Spark and Scala, focuses on constructing a streaming pipeline, emphasizing real-time data ingestion from Azure Event Hubs through its Apache Kafka-compatible endpoint for Spark integration. It employs Spark’s Structured Streaming and Scala for efficient data handling. Topics include understanding the use case, streaming data ingestion, transformation, loading into the serving layer, and orchestration, equipping you to develop and implement similar pipelines in your own organization.
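
To illustrate the ingestion side, here is a sketch under stated assumptions rather than the book's implementation: the Event Hubs namespace, topic, connection string, and output paths are placeholders, and the Spark Kafka connector is assumed to be on the classpath.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object StreamingPipelinePreview extends App {
  val spark = SparkSession.builder()
    .appName("streaming-pipeline-preview")
    .master("local[*]")
    .getOrCreate()

  // Azure Event Hubs exposes a Kafka-compatible endpoint, so Spark's Kafka
  // source can read from it. The namespace and connection string are placeholders.
  val bootstrap = "my-namespace.servicebus.windows.net:9093"
  val jaas =
    """org.apache.kafka.common.security.plain.PlainLoginModule required
      |username="$ConnectionString" password="<event-hubs-connection-string>";""".stripMargin

  val stream = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", bootstrap)
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.mechanism", "PLAIN")
    .option("kafka.sasl.jaas.config", jaas)
    .option("subscribe", "orders")
    .load()

  // Parse the message payload and write it continuously to the serving layer.
  val query = stream
    .select(col("value").cast("string").as("payload"), col("timestamp"))
    .writeStream
    .format("parquet")
    .option("path", "s3a://my-data-lake/silver/orders_stream/")               // placeholder
    .option("checkpointLocation", "s3a://my-data-lake/_checkpoints/orders_stream/")
    .start()

  query.awaitTermination()
}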
