Chapter 5. Loading and Saving Data in Spark

Until now, you have experimented with the Spark shell, figured out how to create a connection with the Spark cluster, and built jobs for deployment. Now, to make these jobs useful, you will need to learn how to load and save data in Spark, which is what we'll do in this chapter.

Before we dive into the data, we have a couple of background tasks to take care of. First, we need to get a view of the Spark abstractions, and second, we'll have a quick discussion of the different modalities of data.

Spark abstractions


The goal of this book is to give you a good understanding of Spark through hands-on programming. The best way to understand Spark is to work through operations iteratively. As we are still in the early chapters, some things might not be fully clear yet, but they should be clear enough for the current context. As you write code and read further chapters, you will gather more information and insight. With this in mind, let's move on to a quick discussion of the Spark abstractions. We will revisit them in more detail in the following chapters.

The main features of Apache Spark are distributed data representation and computation, which is how it achieves massive scaling of data operations. Spark's primary unit for representing data is the RDD (Resilient Distributed Dataset), which allows for easy parallel operations on the data. Until 2.0.0, everyone worked with RDDs. However, they are low-level, raw structures, which can be optimized for performance and scalability.

This is where Datasets/DataFrames come into...
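
To give a flavour of the difference between the two abstractions, here is a minimal spark-shell sketch; the sample data and column name are illustrative assumptions:

// An RDD is a low-level, distributed collection of raw objects.
val wordsRdd = sc.parallelize(Seq("spark", "loads", "data"))

// A Dataset/DataFrame adds a schema on top, which lets Spark optimize operations.
import spark.implicits._                // enables the toDF conversion in spark-shell
val wordsDf = wordsRdd.toDF("word")     // DataFrame with a single string column
wordsDf.printSchema()                   // root |-- word: string (nullable = true)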

Data modalities


From a modality perspective, all data can be grouped into three categories: structured, semi-structured, and unstructured. The modality is independent of the data source, organization, or storage technology. In fact, different representations, organizations, and storage technologies perform well with at most one modality; it is very difficult to efficiently support more than one.

  • Structured data is usually stored in databases such as Oracle, HBase, Cassandra, and so on. Relational tables are the most commonly used organization and storage mechanism. Usually, the formats, data types, and sizes of structured data are fixed and well known.

  • Semi-structured data, as the name implies, has some structure; however, there is also variability in its size, type, and format. The most common semi-structured formats are CSV, JSON, and Parquet.

  • Unstructured data, of course, accounts for about 85 percent of the data we encounter. Images, audio files, and social media data are all unstructured. A...

Data modalities and Datasets/DataFrames/RDDs


Now let's tie the modalities together with the Spark abstractions and see how we can read and write data. Before 2.0.0, things were conceptually simpler: we only needed to read data into RDDs and use map() to transform the data as required. However, data wrangling was harder. With Datasets/DataFrames, we can read data directly into a table with headings, associate data types with domain semantics, and start working with the data more effectively.

As a general rule of thumb, perform the following steps:

  1. Use SparkContext and RDDs to handle unstructured data.

  2. Use SparkSession and Datasets/DataFrames for semi-structured and structured data. As you will see in later chapters, SparkSession unifies reading from various formats, such as CSV, JSON, Parquet, JDBC, ORC, and text files. Moreover, there is a pluggable architecture, the Data Source API, for accessing any type of structured data (see the sketch after this list).
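
As a concrete illustration of this rule of thumb, here is a minimal Scala sketch for spark-shell; the file names and options are illustrative assumptions rather than examples from the book:

// Unstructured data: read raw lines into an RDD via SparkContext.
val rawLines = sc.textFile("logs/app.log")          // RDD[String], one element per line

// Semi-structured and structured data: read into a DataFrame via SparkSession.
val people = spark.read
  .option("header", "true")                         // first row holds the column names
  .option("inferSchema", "true")                    // let Spark infer the column types
  .csv("data/people.csv")                           // DataFrame with named, typed columns
val events = spark.read.json("data/events.json")    // each JSON record becomes a row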

Loading data into an RDD


In this chapter, we will examine the different sources you can use for your RDD. If you decide to run through the examples in the Spark shell, you can call .cache() or .first() on the RDDs you generate to check whether they can be loaded. In Chapter 2, Using the Spark Shell, you learned how to load text data from a file and from S3. In this chapter, we will look at the different formats of data (text file and CSV) and the different sources (filesystem and HDFS) that are supported.

One of the easiest ways to create an RDD is to take an existing Scala collection and convert it into an RDD. The SparkContext object provides a function called parallelize that takes a Scala collection and converts it into an RDD of the same type as the input collection, as shown here.

As mentioned in the previous chapters, cd to the fdps-v3 directory and run spark-shell or pyspark.

For Scala, refer to the following screenshot:
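
The screenshot shows a spark-shell session built around this call; here is a minimal sketch of such a session, with a sample collection and variable names that are illustrative assumptions rather than the book's exact values:

// Convert a local Scala collection into an RDD with sc.parallelize.
val data = List(1, 2, 3, 4, 5)        // an ordinary Scala collection
val dataRdd = sc.parallelize(data)    // RDD[Int] distributed across the workers
dataRdd.cache()                       // keep the RDD in memory for reuse
dataRdd.first()                       // Int = 1, confirms the RDD loads correctly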

For Java, refer to the following code:

import java.util.Arrays; 
...

Saving your data


Distributed computational jobs are a lot of fun, but they are much more useful when the results are stored in a useful place. While the methods for loading an RDD are largely found in the SparkContext class, the methods for saving an RDD are defined on the RDD classes. In Scala, implicit conversions exist so that an RDD that can be saved as a sequence file is converted to the appropriate type; in Java, explicit conversions must be used.

Here are the different ways to save an RDD.

Here's the code for Scala:

rddOfStrings.saveAsTextFile("out.txt") 
keyValueRdd.saveAsObjectFile("sequenceOut") 

Here's the code for Java:

rddOfStrings.saveAsTextFile("out.txt");
keyValueRdd.saveAsObjectFile("sequenceOut");

Here's the code for Python:

rddOfStrings.saveAsTextFile("out.txt")

Tip

In addition, users can save the RDD as a compressed text file using the following function: saveAsTextFile(path: String, codec: Class[_ <: CompressionCodec])
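
For instance, a minimal Scala sketch using the Hadoop GzipCodec (the output directory name is an illustrative assumption) looks like this:

import org.apache.hadoop.io.compress.GzipCodec
// Write the RDD as gzip-compressed text part files under the out_gz directory.
rddOfStrings.saveAsTextFile("out_gz", classOf[GzipCodec])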

Summary


In this chapter, first you got an insight into Spark abstractions, data modalities, and how different data types can be read into a Spark environment. Then, you saw how to load data from a variety of different sources. We also looked at the basic parsing of data from text input files. Now that we can get our data loaded into a Spark RDD, it is time to explore the different operations we can perform on our data in the next chapter.
