Working with the Spark Key/Value API

In this chapter, we'll be working with the Spark key/value API. We will start by looking at the available transformations on key/value pairs. We will then learn how to use the aggregateByKey method instead of the groupBy() method. Later, we'll look at actions on key/value pairs and at the partitioners available for key/value data. At the end of this chapter, we'll implement an advanced partitioner that is able to partition our data by range.

In this chapter, we will be covering the following topics:

  • Available transformations on key/value pairs
  • Using aggregateByKey instead of groupBy()
  • Actions on key/value pairs
  • Available partitioners on key/value data
  • Implementing a custom partitioner

Available transformations on key/value pairs

In this section, we will be covering the following topics:

  • Available transformations on key/value pairs
  • Using countByKey()
  • Understanding the other methods

We'll start with our familiar test, in which we will use transformations on key/value pairs.

First, we will create an array of user transactions for users A, B, A, B, and C, each with an amount, as per the following example:

val keysWithValuesList = Array(
  UserTransaction("A", 100),
  UserTransaction("B", 4),
  UserTransaction("A", 100001),
  UserTransaction("B", 10),
  UserTransaction("C", 10)
)
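
Note that the definition of UserTransaction is not shown in this excerpt; a minimal case class consistent with how it is used here (a userId field and an integer amount) would be:

// Assumed shape of the transaction record used throughout this chapter
case class UserTransaction(userId: String, amount: Int)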

We then need to key our data by a specific field, as per the following example:

val keyed = data.keyBy(_.userId)

We key the data by userId by invoking the keyBy method with a function that extracts the userId from each transaction.

Now, our data is assigned to the keyed variable, and its type is a pair RDD of (String, UserTransaction)...
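
As a sketch of how countByKey(), listed among the topics above, could be applied here (this is not necessarily the chapter's exact code): since keyBy(_.userId) turns an RDD[UserTransaction] into an RDD[(String, UserTransaction)], countByKey() counts the occurrences of each userId:

// countByKey() is an action on pair RDDs: it returns a Map from
// each key to the number of elements with that key, on the driver.
val counts = keyed.countByKey()
// For the data above: Map(A -> 2, B -> 2, C -> 1)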

Using aggregateByKey instead of groupBy()

In this section, we will explore why we use aggregateByKey instead of groupBy().

We will cover the following topics:

  • Why we should avoid the use of groupByKey
  • What aggregateByKey gives us
  • Implementing logic using aggregateByKey

First, we will create our array of user transactions, as shown in the following example:

val keysWithValuesList = Array(
  UserTransaction("A", 100),
  UserTransaction("B", 4),
  UserTransaction("A", 100001),
  UserTransaction("B", 10),
  UserTransaction("C", 10)
)

We will then use parallelize to create an RDD, since we want to work with our data key-wise. This is shown in the following example:

val data = spark.parallelize(keysWithValuesList)
val keyed = data.keyBy(_.userId)

In the preceding code, we invoked keyBy on userId so that our data consists of pairs of a key (the userId) and a value (the user transaction)...
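
The following sketch shows one plausible way to implement per-user sums with aggregateByKey (the exact logic used in the book is not shown here). The key advantage over groupByKey is that aggregateByKey merges values inside each partition before anything is shuffled, whereas groupByKey ships every value across the network first:

// Sum transaction amounts per user. The zero value 0 initializes each
// per-key accumulator; the first function folds a transaction into an
// accumulator within a partition; the second merges accumulators
// across partitions.
val amounts = keyed.aggregateByKey(0)(
  (acc, transaction) => acc + transaction.amount,
  (acc1, acc2) => acc1 + acc2
)
// For the data above: (A, 100101), (B, 14), (C, 10)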

Actions on key/value pairs

In this section, we'll be looking at the actions on key/value pairs.

We will cover the following topics:

  • Examining actions on key/value pairs
  • Using collect()
  • Examining the output for the key/value RDD

In the first section of this chapter, we covered the transformations that are available on key/value pairs. We saw that they differ slightly from those on plain RDDs. For actions, the results differ slightly as well, but the method names do not.

Therefore, we'll use collect() and examine the output of our action on these key/value pairs.

First, we will create our transactions array and an RDD keyed by userId, as shown in the following example:

val keysWithValuesList = Array(
  UserTransaction("A", 100),
  UserTransaction("B", 4),
  UserTransaction("A", 100001),
  UserTransaction("B", 10),
  UserTransaction("C", 10)
)

Available partitioners on key/value data

We know that partitioning and partitioners are key components of Apache Spark. They influence how our data is partitioned, which means they determine on which executors the data actually resides. If we have a good partitioner, we will have good data locality, which reduces shuffling. We know that shuffling is undesirable for processing, so reducing it is crucial; choosing a proper partitioner is therefore crucial for our systems as well.

In this section, we will cover the following topics:

  • Examining HashPartitioner
  • Examining RangePartitioner
  • Testing

We will first examine our HashPartitioner and RangePartitioner. We will then compare them and test the code using both partitioners.
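
To make the locality argument concrete, this is roughly what HashPartitioner does when assigning a key to a partition (a simplified sketch of Spark's actual logic, which also sends null keys to partition 0):

// Simplified sketch of HashPartitioner's assignment: the same key
// always maps to the same partition, so records that share a key end
// up on the same executor and later key-based operations avoid a shuffle.
def hashPartition(key: Any, numPartitions: Int): Int = {
  val rawMod = key.hashCode % numPartitions
  rawMod + (if (rawMod < 0) numPartitions else 0) // keep the index non-negative
}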

First, we will create a UserTransaction array, as per the following example:

val keysWithValuesList = Array(
  UserTransaction("A", 100),
  UserTransaction("B", 4),
  UserTransaction("A", 100001),
  UserTransaction("B", 10),
  UserTransaction("C", 10)
)

Implementing a custom partitioner

In this section, we'll implement a custom partitioner: one that takes a list of ranges. If our key falls into a specific range, we assign it the partition number that is the index of that range in the list.

We will cover the following topics:

  • Implementing a custom partitioner
  • Implementing a range partitioner
  • Testing our partitioner

We will implement range partitioning logic based on our own ranges and then test our partitioner. Let's start with a black-box test, without looking at the implementation.

The first part of the code is similar to what we have used already, but this time we key the data by amount, as shown in the following example:

val keysWithValuesList = Array(
  UserTransaction("A", 100),
  UserTransaction("B", 4),
  UserTransaction("A", 100001),
  UserTransaction("B", 10),
  UserTransaction("C", 10)
)

Summary

In this chapter, we first saw the available transformations on key/value pairs. We then learned how to use aggregateByKey instead of groupBy. We also covered actions on key/value pairs. Later, we looked at the partitioners available for key/value data, such as HashPartitioner and RangePartitioner. By the end of this chapter, we had implemented our own custom partitioner, which was able to assign partitions based on the start and end of a range, for learning purposes.

In the next chapter, we will learn how to test our Apache Spark jobs.
