You're reading from Simplifying Data Engineering and Analytics with Delta

Product type: Book
Published in: Jul 2022
Publisher: Packt
ISBN-13: 9781801814867
Edition: 1st Edition
Author (1)
Anindita Mahapatra

Anindita Mahapatra is a Solutions Architect at Databricks in the data and AI space, helping clients across all industry verticals reap value from their data infrastructure investments. She teaches a data engineering and analytics course at Harvard University as part of its extension school program. She has extensive big data and Hadoop consulting experience from Thinkbig/Teradata; prior to that, she managed the development of algorithmic app discovery and promotion for both the Nokia and Microsoft app stores. She holds a master's degree in Liberal Arts and Management from Harvard Extension School, a master's in Computer Science from Boston University, and a bachelor's in Computer Science from BITS Pilani, India.

Chapter 12: Optimizing Cost and Performance with Delta

"You have to perform at a consistently higher level than others. That's the mark of a true professional."

– Joe Paterno, Football Coach

In the previous chapter, we saw how Delta helps harden data pipelines and make them production worthy. Now that the pipeline is in action, crunching massive datasets every day, we want it to be as lean as possible, extracting every ounce of performance to make the most of the infrastructure investment while serving the needs of business users. In addition, the end-to-end SLAs of pipelines need to be tightened. Both cost and speed are driving requirements, which is why the key metric is price performance: both aspects need to be optimized.

In this chapter, we will explore the ways that data engineers...

Technical requirements

To follow along with this chapter, make sure you have the code and instructions as detailed on GitHub here:

https://github.com/PacktPublishing/Simplifying-Data-Engineering-and-Analytics-with-Delta/tree/main/Chapter12

The proposed roadmap for migrating select edge features from Databricks to open source Delta, primarily for performance enhancements, is tracked at https://github.com/delta-io/delta/issues/920.

Let's get started!

Improving performance with common strategies

The performance of a pipeline refers to how quickly a data load can be processed; throughput is the volume of data that can be processed in a given unit of time. In a big data system, both are important scalability metrics. Let's look at ways to improve performance:

  • Increase the level of parallelism: Break a large chunk of work into smaller, independent chunks that can be executed in parallel (see the sketch after this list).
  • Better code: Efficient algorithms and code help to crunch through the same business transformations faster.
  • Workflow that captures task dependencies: Not all tasks can run independently; there are inherent dependencies between tasks. Pipelining, or orchestration, refers to chaining these dependencies as DAGs, where the inherent lineage determines which tasks can run simultaneously and which need to wait until all dependent stages have completed successfully. Even better would be the option to share compute for some of these...
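As a concrete illustration of the first point, here is a minimal PySpark sketch of raising the level of parallelism by repartitioning. The table path and partition counts are hypothetical; the right values depend on your cluster size and data volume, and the session is assumed to have the delta-spark package configured:

    from pyspark.sql import SparkSession

    # Assumes the delta-spark package is available to the session
    spark = SparkSession.builder.appName("parallelism-sketch").getOrCreate()

    # Hypothetical Delta table path; substitute your own
    df = spark.read.format("delta").load("/tmp/delta/events")
    print(df.rdd.getNumPartitions())  # current level of parallelism

    # Spread the data over more partitions so more tasks can run concurrently
    df = df.repartition(64)

    # Shuffle stages (joins, aggregations) have their own parallelism knob
    spark.conf.set("spark.sql.shuffle.partitions", "64")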

Optimizing with Delta

Delta's support for ACID transactions and quality guarantees helps ensure data reliability, reducing superfluous validation steps and shortening the end-to-end time. This means less downtime and fewer triage cycles. Delta's support for fine-grained updates, deletes, and merges applies at the file level instead of to an entire partition, leading to less data manipulation and faster operations. It also consumes fewer compute resources, which translates into cost savings.
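As a minimal sketch of these fine-grained operations, using the open source delta-spark Python API (the table path, column names, and the updates_df DataFrame are hypothetical):

    from delta.tables import DeltaTable

    tbl = DeltaTable.forPath(spark, "/tmp/delta/customers")  # hypothetical path

    # A fine-grained delete rewrites only the files containing matching rows,
    # not the whole partition
    tbl.delete("country = 'XX'")

    # A fine-grained update, with the new value given as a SQL expression
    tbl.update(condition="status = 'trial'", set={"status": "'expired'"})

    # Upsert (merge) incoming records; updates_df is a hypothetical DataFrame
    (tbl.alias("t")
        .merge(updates_df.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())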

Changing the data layout in storage

Optimizing the layout of the data in storage can help speed up query performance, and there are two ways to do so, namely the following:

  • Compaction, also known as bin-packing
    • Here, many smaller files are combined into fewer, larger ones (see the sketch after this list).
    • Depending on how many files are involved, this can be an expensive operation, and it is a good idea to run it either during off-peak hours or on a separate cluster from the main pipeline to avoid unnecessary delays to the...
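A minimal sketch of compaction, assuming a recent open source Delta Lake release where the OPTIMIZE command is available (the table path and column are hypothetical):

    # Bin-packing: rewrite many small files into fewer, larger ones
    spark.sql("OPTIMIZE delta.`/tmp/delta/events`")

    # Optionally colocate related data while compacting
    spark.sql("OPTIMIZE delta.`/tmp/delta/events` ZORDER BY (customer_id)")

Because OPTIMIZE rewrites files, scheduling it during off-peak hours or on a dedicated cluster, as suggested above, keeps it from competing with the main pipeline.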

Is cost always inversely proportional to performance?

Typically, higher performance is associated with higher cost. Spark provides options to tune for both performance and cost. At a high level, it is a given that if your end-to-end latency requirement is stringent, your cost will be higher.

But using Delta to unify all your workloads on a single platform brings economies of scale through automation and standardization, leading to cost reductions by cutting the number of hops and processing steps, which translates to a reduction in compute. Also, when your queries run faster on the same hardware, you pay for cloud compute for a shorter duration. So yes, it is possible to improve performance and still contain the cost. SLA requirements are not compromised; instead, superior architecture options become available, such as the unification of batch and streaming workloads, schema enforcement alongside schema evolution, and the ability to handle unstructured...
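As a small illustration of that unification, the same Delta table can be consumed as a batch source and as a streaming source with essentially the same code (all paths here are hypothetical):

    path = "/tmp/delta/events"  # hypothetical table path

    # Batch read
    batch_df = spark.read.format("delta").load(path)

    # Streaming read of the very same table; new files are picked up incrementally
    stream_df = spark.readStream.format("delta").load(path)
    query = (stream_df.writeStream
             .format("delta")
             .option("checkpointLocation", "/tmp/checkpoints/events")
             .start("/tmp/delta/events_mirror"))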

Best practices for managing performance

Managing cost and performance is a continuous activity. Sometimes they inversely affect each other; at other times, they go hand in hand. Even after a workload has been optimized, its pattern can change and require a different set of tweaks. That said, managed platforms such as Databricks are getting better at analyzing workloads and suggesting optimizations, or applying them directly, relieving the data engineer of these responsibilities. But there is still a long way to go to reach full autopilot. We covered many different techniques to tune your workloads; partition pruning and I/O pruning are the main ones:

  • Partition pruning: This is file-based, with a directory for each partition value. On disk, it looks like <partition_key>=<partition_value> directories, each containing a set of associated Parquet data files (see the sketch after this list). If the amount of data pulled from the executors back to the driver is large, increase the spark.driver.maxResultSize limit. It may...
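A minimal sketch of the layout and pruning described above (the partition column, paths, and the df DataFrame being written are hypothetical):

    # Writing a partitioned Delta table produces event_date=<value>/ directories,
    # each holding the Parquet data files for that partition value
    (df.write.format("delta")
       .partitionBy("event_date")
       .mode("overwrite")
       .save("/tmp/delta/events_by_date"))

    # A filter on the partition column lets Spark read only the matching directories
    recent = (spark.read.format("delta")
              .load("/tmp/delta/events_by_date")
              .where("event_date >= '2022-07-01'"))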

Summary

As data grows exponentially over time, query performance is an important ask from all stakeholders. Delta is based on the columnar Parquet format, which is highly compressible and consumes less storage and memory, and Delta automatically creates and maintains file-level statistics that serve as indices on the data. Data skipping uses these statistics so that only the relevant files are read, avoiding full scans and giving faster access to data. Delta caching improves the performance of common, repeated queries. OPTIMIZE compacts smaller files, and ZORDER colocates related data that is usually queried together, leading to fewer file reads.

The Delta architecture pattern has empowered data engineers not only by simplifying many of their daily activities but also by improving query performance for the data analysts who consume the output produced by these upstream data engineers. In this chapter, we looked at some common techniques to apply to our Delta tables...
