Chapter 8. Data Visualization with Spark

In this chapter, you will learn the following recipes:

  • Visualization using Zeppelin

  • Creating scatter plots with Bokeh-Scala

  • Creating a time series MultiPlot with Bokeh-Scala

  • Creating plots with the lightning visualization server

  • Visualizing machine learning models with Databricks notebook

Introduction


Visualizing large data is challenging: there are more data points than available pixels, and manipulating distributed data can take a long time. Along with the increase in volume, new kinds of datasets are becoming more and more mainstream. The need to analyze user comments, sentiments, customer calls, and various other unstructured data has resulted in new kinds of visualizations. The use of graph databases and visualization to represent unstructured data is an example of how things are changing because of this increased variety.

A variety of recently developed tools allow interactive analysis with Spark by using caching to bring query latency down to the range of human interaction. Additionally, Spark's unified programming model and diverse programming interfaces enable smooth integration with popular visualization tools. We can use these to perform both exploratory and expository visualization over large data. In this chapter, we are going to look...

Visualization using Zeppelin


Apache Zeppelin is a nifty web-based tool that helps us visualize and explore large datasets. From a technical standpoint, Apache Zeppelin is a web application on steroids. We aim to use this application to render some neat, interactive, and shareable graphs and charts.

The interesting part of Zeppelin is that it has a bunch of built-in interpreters: ones that can interpret and invoke all API functions in Spark (with a SparkContext) and Spark SQL (with a SQLContext). The other built-in interpreters cover Hive, Flink, Markdown, and Scala. It also has the ability to run remote interpreters (outside of Zeppelin's own JVM) via Thrift. To look at the list of built-in interpreters, you can go through conf/interpreter.json in the Zeppelin installation directory. Alternatively, you can view and customize the interpreters from http://localhost:8080/#/interpreter once you start the Zeppelin daemon.
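For context, every Zeppelin paragraph picks its interpreter with a % binding on its first line (%md, %sql, %spark, and so on). A minimal Spark paragraph, once Zeppelin is running, might look like the following sketch:

%spark
// sc is the SparkContext that Zeppelin injects into Spark paragraphs;
// no explicit initialization is required.
val total = sc.parallelize(1 to 100).sum()
println(s"Sum of 1..100 = $total")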

Getting ready

To step through this recipe, you will need a running...

Installing Zeppelin


Zeppelin is available as a binary distribution as well as a source build. Let's see how to build it from source; we only need to run a single command to install it on our local machine. At the end of this recipe, we'll see how to connect Zeppelin to an external Spark master. Here is the code:

git clone https://github.com/apache/zeppelin.git 
cd zeppelin/ 
mvn clean package -Pspark-1.6 -Phadoop-2.6 -Pyarn -Ppyspark -Psparkr -Pscala-2.10 -DskipTests                 
 
[INFO] Reactor Summary: 
[INFO]  
[INFO] Zeppelin .......................................... SUCCESS [1:39.666s] 
[INFO] Zeppelin: Interpreter ............................. SUCCESS [1:40.830s] 
[INFO] Zeppelin: Zengine ................................. SUCCESS [2:46.084s] 
[INFO] Zeppelin: Display system apis ..................... SUCCESS [2:03.322s] 
[INFO] Zeppelin: Spark dependencies ...................... SUCCESS [14:30.613s] 
[INFO] Zeppelin: Spark ......................

Customizing Zeppelin's server and websocket port


Zeppelin runs on port 8080 by default, and its websocket port defaults to the next port up, 8081. We can customize the ports by copying conf/zeppelin-site.xml.template to conf/zeppelin-site.xml and changing these and various other properties, if necessary. Since the Spark standalone cluster master web UI also runs on 8080, when we are running Zeppelin on the same machine as the Spark master, we have to change the ports to avoid conflicts.

For now, let's change the port to 8180 by editing conf/zeppelin-site.xml. For this to take effect, let's restart Zeppelin using bin/zeppelin-daemon restart. Zeppelin can now be viewed in the web browser by visiting http://localhost:8180.

Visualizing data on HDFS - parameterizing inputs


Once we start the service, we can point our browser to http://localhost:8080 (change the port as per your modified port configuration) to view the Zeppelin UI. Zeppelin organizes its contents as notes and paragraphs. A note is simply a list of all the paragraphs on a single web page.

Using data from HDFS simply means that we point to the HDFS location instead of the local file system location. Before we consume the file from HDFS, let's quickly check the Spark version that Zeppelin uses. This can be achieved by issuing sc.version on a paragraph. The sc variable is an implicit variable representing the SparkContext inside Zeppelin, which simply means that we need not programmatically create a SparkContext within Zeppelin:

sc.version 
res0: String = 1.6.0

Let's load the sample file profiles.json, convert it into a DataFrame, and print the schema and the first 20 rows (show) for verification. Let's also finally register the DataFrame as a...
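A paragraph along the following lines covers those steps; this is a minimal sketch, and the HDFS path and temporary table name are assumptions for illustration:

// Read the JSON file from HDFS into a DataFrame (Spark 1.6 API).
val profilesDF = sqlContext.read.json("hdfs://localhost:9000/data/profiles.json")

// Verify the inferred schema and the first 20 rows.
profilesDF.printSchema()
profilesDF.show()

// Register the DataFrame as a temporary table so %sql paragraphs can query it.
profilesDF.registerTempTable("profiles")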

Running custom functions


While Spark SQL doesn't support a range of functions as wide as ANSI SQL does, it has an easy and powerful mechanism for registering a normal Scala function and using it inside the SQL context.

Let's say we would like to find out how many profiles fall under each age group. We have a simple function called ageGroup. Given an age, it returns a string representing the age group:

def ageGroup(age: Int): String = {
  val buckets = Array("0-10", "11-20", "21-30", "31-40", "41-50", "51-60", "61-70", "71-80", "81-90", "91-100", ">100")
  // Clamp the index so that ages above 100 fall into the ">100" bucket
  buckets(math.min((age - 1) / 10, buckets.length - 1))
}

Now, in order to register this function to be used inside Spark SQL, all that we need to do is give it a name and call the register method of the SQLContext's user-defined function object:

sqlc.udf.register("fnGroupAge", (age:Long)=>ageGroup(age.toInt)) 

Let's fire our query and see the use of the function in action:

%sql select fnGroupAge(age) as ageGroup...

Adding external dependencies to Zeppelin


Sooner or later, we will depend on external libraries that don't come bundled with Zeppelin. For instance, we might need a library for CSV import or RDBMS data import. Let's see how to load a MySQL database driver and visualize data from a table.

In order to load a mysql connector Java driver, we just need to specify the group ID, artifact ID, and version number, and the JAR gets downloaded from the Maven repository. %dep indicates that the paragraph adds a dependency, and the z implicit variable represents the Zeppelin context:
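The dependency paragraph looks something like the following sketch; the connector version shown here is an assumption, so use whichever version matches your MySQL server:

%dep
// z is the ZeppelinContext; z.load fetches the artifact from the Maven repository
// using "groupId:artifactId:version" coordinates.
z.load("mysql:mysql-connector-java:5.1.38")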

The only thing that we need to watch out for while using %dep is that the dependency paragraph must run before any paragraph that uses the loaded libraries. So it is generally advised to load the dependencies at the top of the notebook.

Once we have loaded the dependencies, we need to construct the options required to connect to the MySQL database:
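A sketch of those options as a plain Scala map follows; the host, database, table name, and credentials are placeholders for illustration, not values from the book:

// JDBC connection options for Spark's JDBC data source.
val mysqlOptions = Map(
  "url"      -> "jdbc:mysql://localhost:3306/profilesdb",
  "driver"   -> "com.mysql.jdbc.Driver",
  "dbtable"  -> "profiles",
  "user"     -> "root",
  "password" -> "mysql"
)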

We use the connection to create a DataFrame:
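A minimal sketch of that step, assuming the mysqlOptions map from above: read the table through the JDBC data source and register it so %sql paragraphs can query and visualize it:

// Load the MySQL table into a DataFrame via the JDBC data source (Spark 1.6 API).
val mysqlDF = sqlContext.read.format("jdbc").options(mysqlOptions).load()

// Register it as a temporary table for %sql paragraphs.
mysqlDF.registerTempTable("profiles_mysql")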

Pointing to an external Spark Cluster


Running Zeppelin with built-in Spark is all good, but in most cases, we'll be executing the Spark jobs initiated by Zeppelin on a cluster of workers. Achieving this is pretty simple: we need to configure Zeppelin to point its Spark master property to an external Spark master URL. As an example, let's use a simple standalone external Spark cluster running on the local machine. Please note that we will have to run Zeppelin on a different port, because the Zeppelin UI's default port conflicts with the Spark standalone cluster master web UI on 8080.

Let's bring up the Spark Cluster. From inside your Spark source, execute the following:

sbin/start-all.sh

How to do it…

  1. Finally, let's modify conf/interpreter.json and conf/zeppelin-env.sh to point the master property to the host on which the Spark master is running. In this case, it is localhost, with port 7077, which is the default master port:

  2. The conf/interpreter.json file looks like the following...

Creating scatter plots with Bokeh-Scala


In this section, we'll take a brief look at Bokeh, one of the most popular visualization frameworks in Python, and use its (also fast-evolving) Scala bindings. Breeze also has a visualization API called breeze-viz, which is built on JFreeChart. Bokeh is backed by a JavaScript visualization library called BokehJS. The Scala bindings library, bokeh-scala, not only gives an easier way to construct glyphs (lines, circles, and so on) out of Scala objects, but also translates glyphs into a format that is understandable by the BokehJS JavaScript components. The various terms in Bokeh mean the following:

  • Glyph: All geometric shapes that we can think of (circles, squares, lines, and so on) are glyphs. This is just the UI representation and doesn't hold any data. All the properties related to this object simply help us modify the UI: color, x, y, width, and so on.

  • Plot: A plot is like a canvas on which we arrange various objects...

Creating a time series MultiPlot with Bokeh-Scala


In this second recipe on plotting using Bokeh, we'll see how to plot a time series graph with a dataset borrowed from https://archive.ics.uci.edu/ml/datasets/Dow+Jones+Index. We will also see how to plot multiple charts in a single document.

Getting ready

To step through this recipe, you will need a running Spark cluster in any one of the modes, that is, local, standalone, YARN, or Mesos. Also, include the Spark MLlib package in the build.sbt file so that the related libraries are downloaded and the API can be used. Install Scala and Java, and optionally Hadoop.

How to do it…

First, specify the following libraries in the build.sbt file:

  libraryDependencies ++= Seq( 
      "io.continuum.bokeh" % "bokeh_2.10" % "0.5", 
      "org.scalanlp" %% "breeze" % "0.5", 
      "org.scalanlp" %% "breeze-viz" % "0.5" ) 

We'll be using only two fields from the dataset: the closing price of the stock at the end of the week, and...
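Independently of the plotting code, the Spark side of the data preparation might look like the following sketch. The local file path is a placeholder, and the column positions and the '$' prefix on prices assume the layout of the UCI dow_jones_index.data file, so adjust them if your copy differs:

// Load the Dow Jones Index CSV and drop the header line.
val raw = sc.textFile("/path/to/dow_jones_index.data")
val header = raw.first()
val rows = raw.filter(_ != header).map(_.split(","))

// Keep the stock ticker, the week-ending date, and the closing price.
case class WeeklyClose(stock: String, date: String, close: Double)
val closes = rows.map { cols =>
  WeeklyClose(cols(1), cols(2), cols(6).stripPrefix("$").toDouble)
}

closes.take(5).foreach(println)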

Creating plots with the lightning visualization server


Lightning is a framework for interactive data visualization, including a server, visualizations, and client libraries. The Lightning server provides API-based access to reproducible, web-based visualizations. It includes a core set of visualization types, but is built for extensibility and customization. It can be deployed in many ways, including Heroku, Docker, a public server, a local app for OS X, and even a server-less version well suited to notebooks such as Jupyter.

Lightning can expose a single visualization to all the languages of data science. Client libraries are available in multiple languages, including Python, Scala, JavaScript, and R (rstats), with many more planned for the future.

Getting ready

To step through this recipe, you will need a running Spark cluster in any one of the modes, that is, local, standalone, YARN, or Mesos. Install Scala and Java, and optionally Hadoop. Lightning is designed to support a variety of use cases. The first option...

Visualizing machine learning models with Databricks notebook


Databricks provides the flexibility to visualize machine learning models using the built-in display() command, which renders DataFrames as a table and creates convenient one-click plots. In the following recipe, we'll see how to visualize data with a Databricks notebook.

Getting ready

To step through this recipe, you will need a running Spark cluster in any one of the modes, that is, local, standalone, YARN, or Mesos. Install Scala and Java, and optionally Hadoop. Create a user account in Databricks and get access to the notebook.

How to do it…

The fitted versus residuals plot is available for linear regression and logistic regression models. The Databricks fitted versus residuals plot is analogous to R's residuals versus fitted plot for linear models. Linear regression computes a prediction as a weighted sum of the input variables. The fitted versus residuals plot can be used to assess a linear regression model's goodness of fit. The...
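As a rough sketch of the idea (not the book's exact notebook code), one can fit a linear model with spark.ml on a hypothetical DataFrame inputDF that has numeric columns x1, x2 and a label column, compute the residuals by hand, and pass the result to display() so Databricks can chart fitted values against residuals:

import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.sql.functions.col

// Assemble the (hypothetical) numeric input columns into a single feature vector.
val assembler = new VectorAssembler()
  .setInputCols(Array("x1", "x2"))
  .setOutputCol("features")
val training = assembler.transform(inputDF)

// Fit a linear regression model.
val lr = new LinearRegression()
  .setLabelCol("label")
  .setFeaturesCol("features")
val model = lr.fit(training)

// Fitted values come from model.transform; residual = label - prediction.
val fittedVsResiduals = model.transform(training)
  .select(col("prediction").alias("fitted"),
          (col("label") - col("prediction")).alias("residual"))

// display() is the Databricks notebook built-in; it renders the DataFrame
// with one-click plot options, including a scatter of fitted versus residual.
display(fittedVsResiduals)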
