Spark Optimizations

In the previous chapters, we learned how to use Spark to implement a variety of use cases using features such as RDDs, DataFrames, Spark SQL, MLlib, GraphX/GraphFrames, and Spark Streaming. We also discussed how to monitor your applications to better understand their behavior in production. Ultimately, though, you want your jobs to run efficiently. We measure the efficiency of any job on two parameters: runtime and storage space. In a Spark application, you might also be interested in statistics about the data shuffled between nodes. We discussed some optimizations in the earlier chapters, but in this chapter we'll discuss further optimizations that can help you achieve performance benefits.

Most developers focus only on writing their Spark applications and, for a variety of reasons, do not optimize their jobs. This chapter...

Cluster-level optimizations

As discussed in Chapter 1, Introduction to Apache Spark, Spark scales horizontally. This means that performance increases as you add nodes to your cluster, because Spark can perform more operations in parallel. Spark also makes heavy use of memory, and a fast network helps optimize shuffles. For all of these reasons, more hardware generally means better performance.
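
To make this concrete, here is a minimal PySpark sketch of how such cluster-level resources are typically requested when creating a session. The values are hypothetical and should be tuned to your own cluster; note that spark.executor.instances applies to static allocation on cluster managers such as YARN:

    from pyspark.sql import SparkSession

    # Hypothetical resource settings -- tune them to your own cluster.
    spark = (SparkSession.builder
             .appName("cluster-level-tuning")
             .config("spark.executor.instances", "10")  # number of executors
             .config("spark.executor.cores", "4")       # CPU cores per executor
             .config("spark.executor.memory", "8g")     # heap memory per executor
             .getOrCreate())

The same settings can also be passed on the command line via spark-submit, which keeps resource decisions out of the application code.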

Memory

Efficient use of memory is critical for good performance. In earlier versions of Spark, memory was used for three main purposes:

  • RDD storage
  • Shuffle and aggregation storage
  • User code

Memory was divided among them in fixed proportions. For example, RDD storage had 60%, shuffle...
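
For reference, since Spark 1.6 these regions are managed by a unified memory manager, and the split is controlled by two settings rather than the old fixed proportions. A minimal sketch, shown with the illustrative default values rather than recommendations:

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("memory-tuning")
             # Share of (heap - 300 MB) available to execution and storage
             # combined (default 0.6).
             .config("spark.memory.fraction", "0.6")
             # Part of that region reserved for storage, which execution
             # cannot evict (default 0.5).
             .config("spark.memory.storageFraction", "0.5")
             .getOrCreate())

Because execution and storage can borrow from each other within this unified region, tuning these fractions is often unnecessary in recent Spark versions.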

Application optimizations

As discussed, there are multiple ways to improve the performance of your Spark applications. In the previous section, we covered some optimizations related to hardware. In this section, we'll discuss optimizations you can apply while writing your Spark applications.

Language choice

One of the first choices developers have to make is which language API to write their applications in. In Chapter 1, Introduction to Apache Spark, we gave an overview of all the languages supported by Spark. The choice of language depends on the use case and the dynamics of the team. If you are part of a data science team and comfortable writing your machine-learning...
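
One point worth noting, whichever language you choose: DataFrame and Spark SQL code is compiled by the Catalyst optimizer into the same execution plan regardless of the language it is written in, so declarative code such as the hypothetical PySpark snippet below performs much the same as its Scala equivalent. The per-language cost shows up mainly in RDD transformations and non-JVM UDFs. The dataset path and columns here are illustrative only:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("language-choice").getOrCreate()

    # Hypothetical sales dataset; path and schema are illustrative.
    sales = spark.read.parquet("/data/sales")

    # Declarative DataFrame code: Catalyst produces the same plan for the
    # equivalent Scala, Java, or R version of this query.
    revenue = (sales
               .where(F.col("year") == 2018)
               .groupBy("region")
               .agg(F.sum("amount").alias("revenue")))

    revenue.show()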

Summary

In this chapter, we discussed some of the optimizations provided by Spark. First, we discussed some hardware-level optimizations, such as setting the number of cores and executors and the amount of memory for your Spark applications. We then gave an overview of Project Tungsten and its optimizations. Then, we covered application-level optimizations, such as choosing the right language, API, and file format for your applications. Finally, we covered optimizations provided by the RDD and DataFrame APIs.

This is the end of the book. Thank you for staying with me until the end; I hope you enjoyed the content. If you wish to explore Apache Spark in more detail, you may also consider Learning Apache Spark 2, also from Packt.

