From a data wrangling perspective, Datasets are the most important feature of Spark 2.0.0. In this chapter, we will first look at Datasets from a stack perspective, including layering, optimizations, and so forth. Then we will delve more deeply into the actual Dataset APIs and cover the various operations, starting with reading various formats to create Datasets, and finally covering the rich capabilities for queries, aggregations, and scientific operations. We will use the car and orders Datasets for our examples.
You're reading from Fast Data Processing with Spark 2 - Third Edition
A Spark Dataset is a distributed collection of data organized into typed, named columns, akin to a spreadsheet or a relational database table. RDDs have always been the basic building blocks of Spark, and they still are. But RDDs deal with opaque objects: we might know what the objects are, but the framework doesn't. So features such as type checking and semantic queries are not possible with RDDs. Then came DataFrames, which added schemas; we can associate a schema with an RDD. DataFrames also added SQL and SQL-like capabilities.
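As a minimal sketch of what "associating a schema with an RDD" looks like (assuming a SparkSession named spark already exists; the car rows here are illustrative, not the book's data files):

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// An RDD of untyped Rows: the framework knows nothing about the contents
val rowRDD = spark.sparkContext.parallelize(Seq(
  Row("Tesla Model S", 2016),
  Row("Toyota Prius", 2015)
))

// Attaching a schema turns it into a DataFrame with named, typed columns
val schema = StructType(Seq(
  StructField("model", StringType, nullable = false),
  StructField("year", IntegerType, nullable = false)
))
val carsDF = spark.createDataFrame(rowRDD, schema)
carsDF.printSchema()
```

With the schema attached, Spark can type-check column references and optimize queries, which it cannot do over a bare RDD of objects.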
Spark 2.0.0 added Datasets, which have all the original DataFrame APIs as well as compile-time type checking, thus making our interfaces richer and more robust. So now we have three mechanisms:
Our preferred mechanism is the semantic-rich Datasets
Our second option is the use of DataFrames as untyped views in a Dataset
For low-level operations, we'll use RDDs as the underlying basic distributed objects
In short, we should always use the Dataset APIs and abstractions. RDDs...
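The three mechanisms can be sketched side by side as follows (assuming a SparkSession named spark; the Car case class is a hypothetical example, not from the book's code files):

```scala
import spark.implicits._

case class Car(model: String, year: Int)

// Preferred: a semantically rich, typed Dataset
val cars = Seq(Car("Tesla Model S", 2016), Car("Toyota Prius", 2015)).toDS()

// Second option: a DataFrame, that is, an untyped view (Dataset[Row])
val carsDF = cars.toDF()

// Low-level: drop down to the underlying RDD when needed
val carsRDD = cars.rdd
```

Note how cheap the conversions are: toDF and .rdd simply expose different views of the same distributed data.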
Before we delve into Datasets and data wrangling, let's take a broader view of the APIs; we will focus on the relevant functions we need. This will give us a firm foundation when we wrangle with data later in this chapter. Refer to the following diagram:
The preceding diagram shows the broader hierarchy of the org.apache.spark.sql classes. Interestingly, pyspark.sql mirrors this hierarchy, except for Dataset; the PySpark DataFrame is essentially the Scala Dataset. What I like about the PySpark interface is that it is very succinct and crisp, offering the same power, performance, and functionality as Scala or Java. Scala, however, has more elaborate hierarchies and more abstractions. One trick for learning more about the functions is to refer to the Scala documentation, which I found to be a lot more detailed.
Each of these classes is rich in functions; the diagram shows only the most common ones we need in this chapter. You should refer to either https://spark.apache...
Now let's work out a few interesting examples, starting out with a simple one and then moving on to progressively complex operations.
Tip
The code files are in fdps-v3/code, and the data files are in fdps-v3/data. You can run the code either from a Scala IDE or just from the Spark shell.
Start the Spark shell from the bin directory of your Spark installation:
/Volumes/sdxc-01/spark-2.0.0/bin/spark-shell
Inside the shell, the following command will load the source:
:load /Users/ksankar/fdps-v3/code/DS01.scala
As we saw earlier, SparkSession.read.* gives us a rich set of features to read different types of data with flexible control over the options, and Dataset.write.* does the same for writing data:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .master("local")
  .appName("Chapter 9")
  .config("spark.logConf", "true")
  .config("spark.logLevel", "ERROR")
  .getOrCreate()
println("Running Spark...
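With the session in hand, the read/write pattern looks like the following sketch (the file names cars.csv and cars.parquet are illustrative assumptions, not the book's actual data files):

```scala
// Read a CSV file into a DataFrame, picking up column names
// from the header row and inferring column types from the data
val cars = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("fdps-v3/data/cars.csv")

// Write it back out in Parquet format, replacing any previous output
cars.write
  .mode("overwrite")
  .parquet("fdps-v3/data/cars.parquet")
```

The same builder-style options (format, header handling, save mode, and so on) apply uniformly across the supported formats, which is what makes read.* and write.* so flexible.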
Here are some links you can refer to for more information:
https://databricks.com/blog/2016/07/14/a-tale-of-three-apache-spark-apis-rdds-dataframes-and-datasets.html, which compares the three Apache Spark APIs (RDDs, DataFrames, and Datasets), when to use them, and why.
https://databricks.com/blog/2016/05/11/spark-2-0-technical-preview-easier-faster-and-smarter.html
This was an interesting chapter. Finally, we got to work with the Dataset APIs, using real data. We also got a glimpse of the API organization. Datasets and their associated classes have a lot of interesting functions for you to explore. The Python APIs are very similar to the Scala APIs and sometimes a little easier. The IPython notebook is available at https://github.com/xsankar/fdps-v3/blob/master/extras/003-DataFrame-For-DS.ipynb. Data wrangling with Python, and especially with Python notebooks, is the preferred way for data scientists.