Large Scale Machine Learning with Python

Product type: Book
Published: Aug 2016
Publisher: Packt
ISBN-13: 9781785887215
Pages: 420
Edition: 1st
Authors: Bastiaan Sjardin, Alberto Boschetti

Table of Contents (17 chapters)

  • Large Scale Machine Learning with Python
  • Credits
  • About the Authors
  • About the Reviewer
  • www.PacktPub.com
  • Preface
  • First Steps to Scalability
  • Scalable Learning in Scikit-learn
  • Fast SVM Implementations
  • Neural Networks and Deep Learning
  • Deep Learning with TensorFlow
  • Classification and Regression Trees at Scale
  • Unsupervised Learning at Scale
  • Distributed Environments – Hadoop and Spark
  • Practical Machine Learning with Spark
  • Introduction to GPUs and Theano
  • Index

Chapter 9. Practical Machine Learning with Spark

In the previous chapter, we saw the main functionalities of data processing with Spark. In this chapter, we will focus on data science with Spark applied to a real data problem. Along the way, you will learn about the following topics:

  • How to share variables across a cluster's nodes

  • How to create DataFrames from structured (CSV) and semi-structured (JSON) files, save them on disk, and load them

  • How to use SQL-like syntax to select, filter, join, group, and aggregate datasets, thus making the preprocessing extremely easy

  • How to handle missing data in the dataset

  • Which algorithms are available out of the box in Spark for feature engineering and how to use them in a real case scenario

  • Which learners are available and how to measure their performance in a distributed environment

  • How to run cross-validation for hyperparameter optimization in a cluster

Setting up the VM for this chapter


Machine learning needs a lot of computational power, so to save some resources (especially memory) we will use a Spark environment that is not backed by YARN in this chapter. This mode of operation is named standalone and creates a Spark node without cluster functionality; all the processing happens on the driver machine and is not shared across nodes. Don't worry: the code that we will see in this chapter will work in a cluster environment as well.

In order to operate this way, perform the following steps:

  1. Turn on the virtual machine using the vagrant up command.

  2. Access the virtual machine when it's ready, with vagrant ssh.

  3. Launch Spark standalone mode with the IPython Notebook from inside the virtual machine with ./start_jupyter.sh.

  4. Open a browser pointing to http://localhost:8888.

To shut everything down, press Ctrl + C to exit the IPython Notebook and run vagrant halt to turn off the virtual machine.

Note

Note that, even in this configuration, you can access the Spark...

Sharing variables across cluster nodes


When we're working in a distributed environment, it is sometimes necessary to share information across nodes so that all of them can operate on consistent variables. Spark handles this case by providing two kinds of shared variables: read-only variables and write-only variables. By no longer guaranteeing that a shared variable is both readable and writable, Spark drops the consistency requirement and leaves the work of managing such situations on the developer's shoulders. Usually a solution is reached quickly, as Spark is really flexible and adaptive.
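
As a quick, hedged preview of the write-only kind (Spark calls these accumulators), the following sketch shows workers adding to a counter that only the driver can read. It assumes an existing SparkContext named sc, as in this chapter's VM setup, and is illustrative rather than the book's own listing:

# A minimal sketch of a write-only shared variable (an accumulator).
# Assumption: `sc` is an existing SparkContext.
bad_records = sc.accumulator(0)

def parse(line):
    # Workers may only add to the accumulator; they can never read it.
    try:
        return [float(line)]
    except ValueError:
        bad_records.add(1)
        return []

parsed = sc.parallelize(['1.0', '2.5', 'oops', '4.2']).flatMap(parse)
parsed.count()              # an action triggers the computation
print(bad_records.value)    # only the driver can read the value: 1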

Broadcast read-only variables

Broadcast variables are variables shared by the driver node, that is, the node running the IPython Notebook in our configuration, with all the other nodes in the cluster. They are read-only variables: the value is broadcast once by the driver, and any change made by another node is never read back.

Let's now see how it works with a simple example: we want to one-hot encode a dataset containing just gender...
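
The book's listing is cut off in this excerpt. As a rough sketch of the idea (the encoding map and the toy values below are assumptions, not the book's dataset), broadcasting a small lookup table so that every executor can one-hot encode the gender column could look like this:

# A minimal sketch (not the book's listing) of a broadcast variable.
# Assumption: `sc` is an existing SparkContext.
one_hot_map = {'M': (1, 0), 'F': (0, 1)}          # hypothetical encoding
bcast_map = sc.broadcast(one_hot_map)

genders = sc.parallelize(['M', 'F', 'F', 'M', 'F'])
encoded = genders.map(lambda g: bcast_map.value.get(g, (0, 0)))
print(encoded.collect())    # [(1, 0), (0, 1), (0, 1), (1, 0), (0, 1)]

bcast_map.unpersist()       # release the copies cached on the executors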

Data preprocessing in Spark


So far, we've seen how to load text data from the local filesystem and HDFS. Text files can contain either unstructured data (such as a text document) or structured data (such as a CSV file). For semi-structured data, such as files containing JSON objects, Spark has special routines that can transform a file into a DataFrame, similar to the DataFrames in R and Python pandas. DataFrames are very similar to RDBMS tables, in that a schema is set.

JSON files and Spark DataFrames

In order to import JSON-compliant files, we should first create a SQL context by instantiating a SQLContext object from the local Spark context:

In:from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)

Now, let's see the content of a small JSON file (it's provided in the Vagrant virtual machine). It's a JSON representation of a table with six rows and three columns, where some attributes are missing (such as the gender attribute for the user with user_id=0):

In:!cat /home/vagrant/datasets/users.json...
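
The listing is truncated in this excerpt. As a hedged sketch (the file path is the one shown above; the exact schema and output depend on the file's contents), loading such a JSON file into a DataFrame with the Spark 1.x SQLContext API would look roughly like this:

# A sketch, not the book's exact listing: read the JSON file into a
# DataFrame; the schema is inferred from the JSON attributes.
df = sqlContext.read.json('file:///home/vagrant/datasets/users.json')
df.printSchema()    # shows the inferred columns and types
df.show()           # missing attributes (such as gender) appear as null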

Machine learning with Spark


Here, we arrive at the main task of the job: creating a model to predict one or more attributes that are missing in the dataset. For this, we turn to machine learning modeling, and Spark can give us a big hand in this context.

MLlib is Spark's machine learning library; although it is built in Scala and Java, its functions are also available in Python. It contains classification, regression, and recommendation learners, some routines for dimensionality reduction and feature selection, and lots of functionality for text processing. All of these are able to cope with huge datasets and use the power of all the nodes in the cluster to achieve their goal.

As of now (2016), it's composed of two main packages: mllib, which operates on RDDs, and ml, which operates on DataFrames. As the latter performs well and DataFrames are the most popular way to represent data in data science, developers have chosen to contribute to and improve the ml branch, letting the former remain, but without...
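
To give a flavour of the DataFrame-based ml package that this section relies on, here is a small illustrative sketch; the toy data and column names are assumptions, not an example from the book:

# A minimal sketch of the DataFrame-based pyspark.ml API (illustrative
# only; the toy data and column names are assumptions).
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler

df = sqlContext.createDataFrame(
    [(0.0, 1.0, 0.5), (1.0, 3.0, 2.5), (0.0, 0.5, 1.0), (1.0, 2.5, 3.0)],
    ['label', 'feat_1', 'feat_2'])

# Assemble the raw columns into the single vector column the learner expects.
assembler = VectorAssembler(inputCols=['feat_1', 'feat_2'],
                            outputCol='features')
learner = LogisticRegression(maxIter=10)

model = Pipeline(stages=[assembler, learner]).fit(df)
model.transform(df).select('label', 'prediction').show()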

Summary


This is the final chapter of the book. We have seen how to do data science at scale on a cluster of machines. Spark is able to train and test machine learning algorithms on all the nodes of a cluster through a simple interface, very similar to Scikit-learn's. This solution has proven able to cope with petabytes of information, making it a valid alternative to subsampling observations or learning online.

To become an expert in Spark and stream processing, we strongly advise you to read Mastering Apache Spark by Mike Frampton, Packt Publishing.

If you're brave enough to switch to Scala, the main programming language of Spark, the best book for such a transition is Scala for Data Science by Pascal Bugnion, Packt Publishing.
