Understanding the resilient distributed dataset (RDD)


Though RDDs are being replaced by the DataFrame/Dataset-based APIs, many APIs have not yet been migrated. In this recipe, we will look at how the concept of lineage works in an RDD.

Externally, an RDD is a distributed, immutable collection of objects. Internally, it consists of the following five parts:

  • Set of partitions (rdd.getPartitions)
  • List of dependencies on parent RDDs (rdd.dependencies)
  • Function to compute a partition, given its parents
  • Partitioner, which is optional (rdd.partitioner)
  • Preferred location of each partition, which is optional (rdd.preferredLocations)

The first three parts are needed for an RDD to be recomputed if data is lost; combined, they are called its lineage. The last two parts are optimizations.

The set of partitions is how data is divided across nodes. In the case of HDFS, it means InputSplits, which are mostly the same as HDFS blocks (except when a record crosses block boundaries; in that case, the split is slightly bigger than a block).
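
Each of these parts can be inspected from the spark-shell, which makes it easy to follow along with this recipe. The following is a minimal sketch, assuming an already-created RDD referred to here as rdd; the compute function itself is internal, so it is easiest to observe indirectly through toDebugString:

scala> rdd.partitions.length                      // the set of partitions
scala> rdd.dependencies                           // dependencies on parent RDDs
scala> rdd.partitioner                            // Option[Partitioner]; None unless partitioned by key
scala> rdd.preferredLocations(rdd.partitions(0))  // preferred locations of the first partition (may be empty for derived RDDs)
scala> rdd.toDebugString                          // textual view of the lineage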

How to do it...

Let's revisit our word count example to understand these five parts. At the dataset-level view, this is how the flow of the wordCount RDD graph goes:

  1. Load the words folder as an RDD:
scala> val words = sc.textFile("hdfs://localhost:9000/user/hduser/words")

The following are the five parts of the words RDD:

  • Partitions: one partition per HDFS input split/block (org.apache.spark.rdd.HadoopPartition)
  • Dependencies: none
  • Compute function: reads the block
  • Preferred location: the HDFS block's location
  • Partitioner: none
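
To sanity-check these parts, you can query the words RDD from the spark-shell. This is only a quick, illustrative check; keep in mind that sc.textFile builds a small internal chain of RDDs, so some output may look slightly richer than the simplified list above, and the exact partition count depends on your file and block size:

scala> words.partitions.length   // one per HDFS InputSplit/block
scala> words.partitions(0)       // an org.apache.spark.rdd.HadoopPartition under the hood
scala> words.partitioner         // None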

  2. Tokenize the contents of the words RDD so that each word becomes a separate element:
scala> val wordsFlatMap = words.flatMap(_.split("\\W+"))

The following are the five parts of the wordsFlatMap RDD:

  • Partitions: same as the parent RDD, that is, words (org.apache.spark.rdd.HadoopPartition)
  • Dependencies: a one-to-one dependency on the parent RDD, words (org.apache.spark.OneToOneDependency)
  • Compute function: computes the parent and splits each element, flattening the results
  • Preferred location: asks the parent RDD
  • Partitioner: none
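
The narrow, one-to-one dependency can be confirmed from the shell. The boolean comparisons below are just illustrative assertions, not part of the recipe:

scala> wordsFlatMap.dependencies.head.isInstanceOf[org.apache.spark.OneToOneDependency[_]]  // true
scala> wordsFlatMap.partitions.length == words.partitions.length                            // true: same partitioning as the parent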

  3. Transform each word in the wordsFlatMap RDD into the (word, 1) tuple:
scala> val wordsMap = wordsFlatMap.map(w => (w, 1))

The following are the five parts of the wordsMap RDD:

  • Partitions: same as the parent RDD, that is, wordsFlatMap (org.apache.spark.rdd.HadoopPartition)
  • Dependencies: a one-to-one dependency on the parent RDD, wordsFlatMap (org.apache.spark.OneToOneDependency)
  • Compute function: computes the parent and maps each element to a (word, 1) pair
  • Preferred location: asks the parent RDD
  • Partitioner: none
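
Note that although wordsMap is now a pair RDD of (word, 1) tuples, it still has no partitioner; keys are only co-located by a partitioner after a shuffle. A quick, illustrative check:

scala> wordsMap.partitioner   // None: map() does not introduce a partitioner
scala> wordsMap.dependencies  // still a one-to-one dependency, this time on wordsFlatMap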

  4. Reduce all the values of a given key and sum them up:
scala> val wordCount = wordsMap.reduceByKey(_ + _)

The following are the five parts of the wordCount RDD:

  • Partitions: one per reduce task (org.apache.spark.rdd.ShuffledRDDPartition)
  • Dependencies: a shuffle dependency on each parent (org.apache.spark.ShuffleDependency)
  • Compute function: performs addition on the shuffled data
  • Preferred location: none
  • Partitioner: HashPartitioner (org.apache.spark.HashPartitioner)
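
The shuffle-related parts can be verified interactively as well. This is a quick, illustrative check; the number of partitions reported depends on spark.default.parallelism in your setup:

scala> wordCount.partitioner                                                                 // Some(HashPartitioner)
scala> wordCount.dependencies.head.isInstanceOf[org.apache.spark.ShuffleDependency[_, _, _]] // true
scala> wordCount.getNumPartitions                                                            // one per reduce task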

At the partition-level view, the RDD graph of wordCount is the same chain expanded per partition: each partition of wordsMap feeds the shuffle, and each partition of wordCount reads from all of them.
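
If you want the same picture in text form, toDebugString prints the full lineage, with indentation marking the shuffle boundary; the exact RDD names and id numbers vary by Spark version:

scala> wordCount.toDebugString  // ShuffledRDD at the top, HadoopRDD (the HDFS read) at the bottom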
