Fast Data Processing with Spark - Second Edition

Perform real-time analytics using Spark in a fast, distributed, and scalable way

Krishna Sankar, Holden Karau


Book Details

ISBN 13: 9781784392574
Paperback: 184 pages

Book Description

Spark is a framework for writing fast, distributed programs. It solves problems similar to those Hadoop MapReduce addresses, but with a fast in-memory approach and a clean, functional-style API. With its ability to integrate with Hadoop, and with built-in tools for interactive query analysis (Spark SQL), large-scale graph processing and analysis (GraphX), and real-time analysis (Spark Streaming), it can be used interactively to quickly process and query big datasets.

Fast Data Processing with Spark - Second Edition covers how to write distributed programs with Spark. The book guides you through every step required to write effective distributed programs, from setting up your cluster and interactively exploring the API to developing analytics applications and tuning them for your purposes.
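As a taste of the functional style the book teaches, here is a minimal local sketch of Spark's canonical word count. The sample data is invented for illustration; in real PySpark the same pipeline would be expressed as `sc.textFile(path).flatMap(...).map(...).reduceByKey(...)`, but this sketch uses only Python builtins so it runs without a cluster:

```python
from collections import Counter

# Local stand-in for an RDD of text lines (hypothetical sample data).
lines = ["spark is fast", "spark is distributed"]

# flatMap: split every line into words and flatten the result.
words = [w for line in lines for w in line.split()]

# map + reduceByKey: emit (word, 1) pairs and sum counts per word.
counts = Counter(words)

print(counts["spark"])  # 2
```

The point of the resemblance is that Spark's RDD operations (`flatMap`, `map`, `reduceByKey`) take the same kind of small, pure functions you would pass to Python's own comprehensions and reducers; the framework's job is to run them across a cluster.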

Table of Contents

Chapter 1: Installing Spark and Setting up your Cluster
Directory organization and convention
Installing prebuilt distribution
Building Spark from source
Spark topology
A single machine
Running Spark on EC2
Deploying Spark with Chef (Opscode)
Deploying Spark on Mesos
Spark on YARN
Spark Standalone mode
Summary
Chapter 2: Using the Spark Shell
Loading a simple text file
Using the Spark shell to run logistic regression
Interactively loading data from S3
Summary
Chapter 3: Building and Running a Spark Application
Building your Spark project with sbt
Building your Spark job with Maven
Building your Spark job with something else
Summary
Chapter 4: Creating a SparkContext
Scala
Java
SparkContext – metadata
Shared Java and Scala APIs
Python
Summary
Chapter 5: Loading and Saving Data in Spark
RDDs
Loading data into an RDD
Saving your data
Summary
Chapter 6: Manipulating your RDD
Manipulating your RDD in Scala and Java
Manipulating your RDD in Python
Summary
Chapter 7: Spark SQL
The Spark SQL architecture
Summary
Chapter 8: Spark with Big Data
Parquet – an efficient and interoperable big data format
Querying Parquet files with Impala
HBase
Summary
Chapter 9: Machine Learning Using Spark MLlib
The Spark machine learning algorithm table
Spark MLlib examples
Summary
Chapter 10: Testing
Testing in Java and Scala
Testing in Python
Summary
Chapter 11: Tips and Tricks
Where to find logs
Concurrency limitations
Using Spark with other languages
A quick note on security
Community developed packages
Mailing lists
Summary

What You Will Learn

  • Install and set up Spark on your cluster
  • Prototype distributed applications with Spark's interactive shell
  • Learn different ways to interact with Spark's distributed representation of data (RDDs)
  • Query Spark with a SQL-like query syntax
  • Effectively test your distributed software
  • Recognize how Spark works with big data
  • Implement machine learning systems with highly scalable algorithms
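One idea behind the "effectively test your distributed software" point above is worth a sketch: factor per-record logic into pure functions, so it can be unit-tested on plain Python collections before being handed to an RDD transformation. The function name and sample data here are hypothetical, not from the book:

```python
def parse_line(line):
    """Split a 'key,value' line into a (key, int) pair -- the per-record
    logic a Spark map() would apply to each element of an RDD."""
    key, value = line.split(",")
    return key.strip(), int(value)

# Locally, exercise the function against a plain list; on a cluster the
# very same function would be passed to rdd.map(parse_line).
records = [parse_line(l) for l in ["a, 1", "b, 2"]]
print(records)  # [('a', 1), ('b', 2)]
```

Because `parse_line` touches no Spark APIs, it can be covered by ordinary unit tests with no cluster or SparkContext involved.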

