Chapter 9. Foundations of Datasets/DataFrames – The Proverbial Workhorse for Data Scientists

From a data wrangling perspective, Datasets are the most important feature of Spark 2.0.0. In this chapter, we will first look at Datasets from a stack perspective, including layering, optimizations, and so forth. Then we will delve more deeply into the actual Dataset APIs and cover the various operations: reading various formats, creating Datasets, and finally the rich capabilities for queries, aggregations, and scientific operations. We will use the car and orders Datasets for our examples.

Datasets - a quick introduction


A Spark Dataset is a distributed collection of data organized into specified heterogeneous columns, akin to a spreadsheet or a relational database table. RDDs have always been the basic building blocks of Spark, and they still are. But RDDs deal with opaque objects: we might know what the objects are, but the framework doesn't. So things such as type checking and semantic queries are not possible with RDDs. Then came DataFrames, which added schemas: we can associate a schema with an RDD. DataFrames also added SQL and SQL-like capabilities.
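
To make the distinction concrete, here is a minimal sketch; the Car case class and its fields are illustrative assumptions, not the book's actual schema:

case class Car(make: String, model: String, mpg: Double)  // hypothetical schema

val spark = org.apache.spark.sql.SparkSession.builder
      .master("local")
      .appName("sketch")
      .getOrCreate()
import spark.implicits._  // encoders for case classes, .toDS()/.toDF()

// RDD: a bag of opaque objects; Spark cannot see the fields
val carRDD = spark.sparkContext.parallelize(Seq(Car("Ford", "Focus", 31.0)))

// Dataset[Car]: the same data, but with a schema and compile-time type checking
val carDS = carRDD.toDS()
carDS.filter(_.mpg > 25.0).show()  // field access checked by the Scala compiler
// carDS.filter(_.mgp > 25.0)      // would not compile: no such field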

Spark 2.0.0 added Datasets, which have all the original DataFrame APIs as well as compile-time type checking, thus making our interfaces richer and more robust. So now we have three mechanisms:

  • Our preferred mechanism is the semantic-rich Datasets

  • Our second option is the use of DataFrames as untyped views in a Dataset

  • For low-level operations, we'll use RDDs as the underlying basic distributed objects (the sketch below shows how to move between all three)
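
A minimal sketch of moving between the three levels, reusing the hypothetical carDS from the earlier sketch:

val carDF  = carDS.toDF()   // Dataset[Car] -> DataFrame (an untyped view, Dataset[Row])
val typed  = carDF.as[Car]  // DataFrame -> Dataset[Car]; needs import spark.implicits._
val lowLvl = carDS.rdd      // Dataset[Car] -> RDD[Car] for low-level operations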

In short, we should always use the Dataset APIs and abstractions. RDDs...

Dataset APIs - an overview


Before we delve into Datasets and data wrangling, let's take a broader view of the APIs; we will focus on the relevant functions we need. This will give us a firm foundation when we wrangle with data later in this chapter. Refer to the following diagram:

The preceding diagram shows the broader hierarchy of the org.apache.spark.sql classes. Interestingly, pyspark.sql mirrors this hierarchy, except that it has no Dataset class; the Python DataFrame corresponds to the Scala Dataset[Row]. What I like about the PySpark interface is that it is very succinct and crisp, offering the same power, performance, and functionality as Scala or Java. But Scala has more elaborate hierarchies and more abstractions. One trick for learning more about the functions is to refer to the Scala documentation, which I have found to be a lot more detailed.
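
One relationship worth keeping in mind when reading the Scala documentation: in Spark 2.x, DataFrame is just a type alias for Dataset[Row], so every DataFrame method is really a Dataset method. A tiny sketch (assuming the usual spark session is in scope, as in the Spark shell):

import org.apache.spark.sql.{Dataset, Row}

// The org.apache.spark.sql package object declares:
//   type DataFrame = Dataset[Row]
// so a DataFrame is simply a Dataset whose rows are untyped Row objects.
val df: org.apache.spark.sql.DataFrame = spark.range(3).toDF("n")
val sameThing: Dataset[Row] = df  // compiles as-is: the two types are identical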

Each of these classes is rich with a lot of functions. The diagram shows only the most common ones we need in this chapter. You should refer to either https://spark.apache...

Dataset interfaces and functions


Now let's work out a few interesting examples, starting with a simple one and then moving on to progressively more complex operations.

Tip

The code files are in fdps-v3/code, and the data files are in fdps-v3/data. You can run the code either from a Scala IDE or just from the Spark Shell.

Start the Spark shell from the bin directory of your Spark installation:

/Volumes/sdxc-01/spark-2.0.0/bin/spark-shell 

Inside the shell, the following command will load the source:

:load /Users/ksankar/fdps-v3/code/DS01.scala

Read/write operations

As we saw earlier, SparkSession.read.* gives us a rich set of features to read different types of data with flexible control over the options. Dataset.write.* does the same for writing data:

val spark = SparkSession.builder 
      .master("local") 
      .appName("Chapter 9") 
      .config("spark.logConf","true") 
      .config("spark.logLevel","ERROR") 
      .getOrCreate() 
println("Running Spark...

Summary


This was an interesting chapter. Finally, we got to work with the Dataset APIs, using real data. We also got a glimpse of how the APIs are organized. Datasets and their associated classes have a lot of interesting functions for you to explore. The Python APIs are very similar to the Scala APIs, and sometimes a little easier. The IPython notebook is available at https://github.com/xsankar/fdps-v3/blob/master/extras/003-DataFrame-For-DS.ipynb. Data wrangling with Python, and especially with Python notebooks, is the preferred way for data scientists.
