You're reading from Apache Spark for Data Science Cookbook

Product type: Book
Published in: Dec 2016
ISBN-13: 9781785880100
Edition: 1st Edition

Author: Padma Priya Chitturi

Padma Priya Chitturi is Analytics Lead at Fractal Analytics Pvt Ltd and has over five years of experience in Big Data processing. She is currently part of capability development at Fractal, responsible for developing solutions to large-scale analytical problems across multiple business domains. Prior to this, she worked on an airlines product at Amadeus Software Labs, on a real-time processing platform serving one million user requests per second. She has worked on realizing large-scale deep networks (Jeffrey Dean's work in Google Brain) for image classification on the big data platform Spark. She works closely with Big Data technologies such as Spark, Storm, Cassandra, and Hadoop, and has contributed to the open source Apache Storm project.
Chapter 7. Working with Sparkling Water - H2O

In this chapter, you will learn the following recipes:

  • Working with H2O on Spark

    Downloading and installing H2O

    Using H2O API in Spark

  • Implementing k-means using H2O over Spark

  • Implementing spam detection with Sparkling Water

  • Deep learning with airlines and weather data

  • Implementing a crime detection application

  • Running SVM with H2O over Spark

Introduction


H2O is a fast, scalable, open source machine learning and deep learning library for smarter applications. Using in-memory compression, H2O can handle billions of data rows in memory, even on a small cluster. To support complete analytic workflows, the H2O platform provides interfaces for R, Python, Scala, Java, and JSON, as well as Flow, its built-in CoffeeScript/JavaScript web interface. H2O can run in standalone mode, on Hadoop, or within a Spark cluster. It includes many common machine learning algorithms, such as generalized linear modeling (linear regression, logistic regression, and so on), Naive Bayes, principal components analysis, k-means clustering, and others.

H2O also implements best-in-class algorithms at scale, such as distributed random forest, gradient boosting and deep learning. Users can build thousands of models and compare the results to get the best predictions.

Sparkling Water allows users to combine the fast, scalable machine learning algorithms of H2O with the capabilities of Spark.

Features


Sparkling Water provides transparent integration for the H2O engine and its machine learning algorithms into Spark platforms, which enables the following:

  • Use of H2O algorithms in the Spark workflow

  • Transformation between H2O and Spark data structures

  • Use of Spark RDDs and DataFrames as input for H2O algorithms

  • Use of H2O frames as input for MLlib algorithms

  • Transparent execution of Sparkling Water applications on top of Spark

Working with H2O on Spark


Sparkling Water is executed as a regular Spark application. It provides a way to initialize H2O services on each node of the Spark cluster and to access data stored in both Spark and H2O data structures. The Sparkling Water application is launched inside a Spark executor created after the application is submitted. At this point, H2O starts its services, including the distributed key-value (KV) store and the memory manager.
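The startup sequence described above can be sketched in a few lines of Scala. This is a minimal sketch, assuming the sparkling-water-core dependency is on the classpath; the class and method names follow the Sparkling Water 1.6.x API and may differ in other versions.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.h2o.H2OContext

object SparklingWaterInit {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SparklingWaterInit").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Starts H2O services (the KV store and memory manager) on each executor
    val h2oContext = H2OContext.getOrCreate(sc)

    // Prints the endpoints of the embedded H2O cluster
    println(h2oContext)
    sc.stop()
  }
}
```

Once `getOrCreate` returns, both Spark and H2O services are running in the same JVMs, so data can be moved between the two engines without leaving the cluster.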

Getting ready

To step through this recipe, you will need a running Spark cluster in any one of the following modes: local, standalone, YARN, or Mesos. You must also include the Spark MLlib package in the build.sbt file so that the related libraries are downloaded and the API can be used. Install Scala and Java, and optionally Hadoop.
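The build.sbt entries might look like the following. This is a sketch, not the book's exact file: the Spark and Sparkling Water version numbers here are assumptions and should be matched to your cluster.

```scala
// build.sbt -- versions are illustrative; align them with your cluster
scalaVersion := "2.10.6"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"           % "1.6.0" % "provided",
  "org.apache.spark" %% "spark-mllib"          % "1.6.0" % "provided",
  "ai.h2o"           %% "sparkling-water-core" % "1.6.8"
)
```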

How to do it…

In this recipe, we'll learn how to download and install H2O services in a Spark Cluster. We'll also use the H2O API in Spark.

The list of sub-recipes in this section is as follows:

  • Downloading and installing H2O...

Implementing k-means using H2O over Spark


In this recipe, we'll look at how to run the k-means clustering algorithm on a prostate cancer dataset. Please download the dataset from https://github.com/ChitturiPadma/datasets/blob/master/prostate.csv. The data comes from a study that examined the correlation between the level of prostate-specific antigen and a number of other clinical measures in men.

Getting ready

To step through this recipe, you will need a running Spark cluster in any one of the following modes: local, standalone, YARN, or Mesos. Include the Spark MLlib package in the build.sbt file so that the related libraries are downloaded and the API can be used. Install Scala and Java, and optionally Hadoop. Also, install Sparkling Water as discussed in the preceding recipe.

How to do it…

  1. The sample rows in prostate.csv look like the following:

  2. Here is the code to run k-means on the preceding dataset:

            import org.apache.spark._ 
          ...
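The truncated code above begins with the Spark imports; as a hedged sketch of how the full recipe might proceed, the following trains H2O's k-means on the prostate data. The class names follow the H2O 3.x / Sparkling Water 1.6.x API, and the file path and `_k = 3` are assumptions for illustration.

```scala
import java.io.File
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.h2o.H2OContext
import water.fvec.H2OFrame
import hex.kmeans.KMeans
import hex.kmeans.KMeansModel.KMeansParameters

object ProstateKMeans {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ProstateKMeans"))
    val h2oContext = H2OContext.getOrCreate(sc)

    // Parse the CSV directly into an H2OFrame
    val prostateFrame = new H2OFrame(new File("data/prostate.csv"))

    val params = new KMeansParameters()
    params._train = prostateFrame._key
    params._k = 3                       // number of clusters (assumed)

    // Train the model and print a summary, including the cluster centers
    val model = new KMeans(params).trainModel().get()
    println(model)
    sc.stop()
  }
}
```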

Implementing spam detection with Sparkling Water


In this recipe, we'll look at how to implement a spam detector: extracting the data, transforming and tokenizing messages, building Spark's TF-IDF model, and expanding messages into feature vectors. We'll also create and evaluate H2O's deep learning model, and finally use the models to detect spam messages.

Getting ready

To step through this recipe, you will need a running Spark cluster in any one of the following modes: local, standalone, YARN, or Mesos. Include the Spark MLlib package in the build.sbt file so that the related libraries are downloaded and the API can be used. Install Scala and Java, and optionally Hadoop. Also, install Sparkling Water as discussed in the preceding recipe.

How to do it…

  1. Please download the dataset from https://github.com/ChitturiPadma/datasets/blob/master/smsData.txt. The records in the dataset look like the following:

           ham   Ok... But they said i've got wisdom teeth hidden inside n
           mayb need 2 remove...
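The tokenization and TF-IDF steps mentioned above can be sketched with Spark MLlib's `HashingTF` and `IDF`; the H2O deep learning model would then be trained on the resulting vectors. This is a sketch under assumptions: the file path, the tab-separated `<label>\t<message>` layout, and the feature-vector size are illustrative.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.feature.{HashingTF, IDF}

object SpamFeatures {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SpamFeatures"))

    // Each line is "<label>\t<message>", e.g. "ham\tOk... But they said..."
    val data = sc.textFile("data/smsData.txt")
      .map(_.split("\t", 2))
      .filter(_.length == 2)

    val labels = data.map(_(0))                                // "ham" or "spam"
    val tokens = data.map(_(1).toLowerCase.split("\\s+").toSeq) // simple tokenizer

    // Hash tokens into fixed-size term-frequency vectors, then weight by IDF
    val hashingTF = new HashingTF(numFeatures = 1024)
    val tf = hashingTF.transform(tokens).cache()
    val idfModel = new IDF(minDocFreq = 4).fit(tf)
    val tfidf = idfModel.transform(tf)

    println(tfidf.first())
    sc.stop()
  }
}
```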

Deep learning with airlines and weather data


In this recipe, we'll see how to run deep learning models on an airlines dataset.

Getting ready

To step through this recipe, you will need a running Spark cluster in any one of the following modes: local, standalone, YARN, or Mesos. Include the Spark MLlib package in the build.sbt file so that the related libraries are downloaded and the API can be used. Install Scala and Java, and optionally Hadoop. Also, install Sparkling Water as discussed in the preceding recipe.

How to do it…

  1. Please download the dataset from https://github.com/ChitturiPadma/datasets/blob/master/allyears2k_headers.csv. The sample records (with a few columns) in the dataset look like the following:

  2. Here is the code for loading the airline data and fetching records with the specific destination SFO:

       
          import hex.deeplearning.DeepLearning 
          import hex.deeplearning.DeepLearningModel.DeepLearningParameters 
          import org.apache.spark.{SparkContext, SparkConf...
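The loading-and-filtering step described above might look like the following sketch. The file path is an assumption, and the code locates the `Dest` column from the header row rather than hard-coding its position, since the exact column layout is not shown here.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object AirlinesToSFO {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("AirlinesToSFO"))

    val raw = sc.textFile("data/allyears2k_headers.csv")
    val header = raw.first()
    val destIdx = header.split(",").indexOf("Dest")   // find the destination column

    // Drop the header and keep only flights whose destination is SFO
    val rows = raw.filter(_ != header).map(_.split(","))
    val sfoFlights = rows.filter(r => r.length > destIdx && r(destIdx) == "SFO")

    println(s"Flights into SFO: ${sfoFlights.count()}")
    sc.stop()
  }
}
```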

Implementing a crime detection application


In this recipe, we'll see how to run deep learning models on various sets of data to detect crime in the city of Chicago.

Getting ready

To step through this recipe, you will need a running Spark cluster in any one of the following modes: local, standalone, YARN, or Mesos. Include the Spark MLlib package in the build.sbt file so that the related libraries are downloaded and the API can be used. Install Scala and Java, and optionally Hadoop. Also, install Sparkling Water as discussed in the preceding recipe.

How to do it…

  1. Please download the datasets from the following locations:

    Weather data: https://github.com/ChitturiPadma/datasets/blob/master/chicagoAllWeather.csv

    Census data: https://github.com/ChitturiPadma/datasets/blob/master/chicagoCensus.csv

    Crime data: https://github.com/ChitturiPadma/datasets/blob/master/chicagoCrimes10k.csv

  2. The sample records (with a few columns) in the datasets look as follows:

    The sample rows in weather data:

    The sample...
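Combining datasets like these typically means keying each one on a shared attribute and joining. As a rough, hedged sketch (the column positions are assumptions about the CSV layouts, so adjust the indexes to the actual schemas), the crime records could be joined with the census data on the community area number:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object ChicagoCrimeJoin {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ChicagoCrimeJoin"))

    // Helper: load a CSV, drop its header, and split into columns
    def loadCsv(path: String) = {
      val raw = sc.textFile(path)
      val header = raw.first()
      raw.filter(_ != header).map(_.split(","))
    }

    val census = loadCsv("data/chicagoCensus.csv")
    val crimes = loadCsv("data/chicagoCrimes10k.csv")

    // Key both datasets by community area number (assumed column positions)
    val censusByArea = census.map(r => (r(0), r))
    val crimesByArea = crimes.map(r => (r(r.length - 1), r))

    // Each joined record pairs a crime with its area's census profile
    val joined = crimesByArea.join(censusByArea)
    println(s"Joined records: ${joined.count()}")
    sc.stop()
  }
}
```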

Running SVM with H2O over Spark


In this recipe, we'll see how to run an SVM to classify breast cancer diagnoses.

Getting ready

To step through this recipe, you will need a running Spark cluster in any one of the following modes: local, standalone, YARN, or Mesos. Include the Spark MLlib package in the build.sbt file so that the related libraries are downloaded and the API can be used. Install Scala and Java, and optionally Hadoop. Also, install Sparkling Water as discussed in the preceding recipe.

How to do it…

Please download the dataset from https://github.com/ChitturiPadma/datasets/blob/master/Breast_CancerData.csv. When including the dependencies sparkling-water-core and sparkling-water-ml, change their version to 1.6.8.

The sample records in the data (with a few columns) look as follows:

Here, the last column, label, indicates whether the tumor is benign (B) or malignant (M).

The code that runs SVM on the data is as follows:

  import java.io._ 
  import org.apache.spark.ml.spark.models.svm...
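The chapter's code uses Sparkling Water's SVM wrapper (`org.apache.spark.ml.spark.models.svm`, as the truncated import shows). As a rough stand-in for the full listing, the following sketch trains Spark MLlib's `SVMWithSGD` on the same data; it is not the book's implementation. The file path, the presence of an ID column, and the label sitting in the last column are assumptions.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.classification.SVMWithSGD
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

object BreastCancerSVM {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("BreastCancerSVM"))

    val raw = sc.textFile("data/Breast_CancerData.csv")
    val header = raw.first()
    val points = raw.filter(_ != header).map { line =>
      val cols = line.split(",")
      val label = if (cols.last == "M") 1.0 else 0.0          // M = malignant
      val features = cols.drop(1).dropRight(1).map(_.toDouble) // drop ID (assumed) and label
      LabeledPoint(label, Vectors.dense(features))
    }.cache()

    // Hold out 20% of the data for evaluation
    val Array(train, test) = points.randomSplit(Array(0.8, 0.2), seed = 42L)
    val model = SVMWithSGD.train(train, numIterations = 100)

    val accuracy = test
      .map(p => if (model.predict(p.features) == p.label) 1.0 else 0.0)
      .mean()
    println(s"Test accuracy: $accuracy")
    sc.stop()
  }
}
```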