Fast Data Processing with Spark

eBook: $22.99 (PDF, PacktLib, ePub, and Mobi formats)
Print + free eBook + free PacktLib access: $60.98
Print cover: $37.99
  • Implement Spark's interactive shell to prototype distributed applications
  • Deploy Spark jobs to various clusters, such as Mesos, EC2, YARN, and EMR, or with tools such as Chef
  • Use Shark's SQL-like query syntax with Spark

Book Details

Language : English
Paperback : 120 pages [ 235mm x 191mm ]
Release Date : October 2013
ISBN : 1782167064
ISBN 13 : 9781782167068
Author(s) : Holden Karau
Topics and Technologies : All Books, Big Data and Business Intelligence, Open Source

Table of Contents

Chapter 1: Installing Spark and Setting Up Your Cluster
Chapter 2: Using the Spark Shell
Chapter 3: Building and Running a Spark Application
Chapter 4: Creating a SparkContext
Chapter 5: Loading and Saving Data in Spark
Chapter 6: Manipulating Your RDD
Chapter 7: Shark – Using Spark with Hive
Chapter 8: Testing
Chapter 9: Tips and Tricks
  • Chapter 1: Installing Spark and Setting Up Your Cluster
    • Running Spark on a single machine
    • Running Spark on EC2
      • Running Spark on EC2 with the scripts
    • Deploying Spark on Elastic MapReduce
    • Deploying Spark with Chef (Opscode)
    • Deploying Spark on Mesos
    • Deploying Spark on YARN
    • Deploying a set of machines over SSH
    • Links and references
    • Summary
  • Chapter 6: Manipulating Your RDD
    • Manipulating your RDD in Scala and Java
      • Scala RDD functions
      • Functions for joining PairRDDs
      • Other PairRDD functions
      • DoubleRDD functions
      • General RDD functions
      • Java RDD functions
      • Spark Java function classes
        • Common Java RDD functions
      • Methods for combining JavaPairRDDs
        • JavaPairRDD functions
    • Manipulating your RDD in Python
      • Standard RDD functions
      • PairRDD functions
    • Links and references
    • Summary
  • Chapter 8: Testing
    • Testing in Java and Scala
      • Refactoring your code for testability
      • Testing interactions with SparkContext
    • Testing in Python
    • Links and references
    • Summary
  • Chapter 9: Tips and Tricks
    • Where to find logs?
    • Concurrency limitations
    • Memory usage and garbage collection
    • Serialization
    • IDE integration
    • Using Spark with other languages
    • A quick note on security
    • Mailing lists
    • Links and references
    • Summary

Holden Karau

Holden Karau is a transgender software developer from Canada, currently living in San Francisco. She graduated from the University of Waterloo in 2009 with a Bachelor of Mathematics in Computer Science. She currently works as a Software Development Engineer at Google. She previously worked at Foursquare, where she was introduced to Scala, and on search and classification problems at Amazon. Open source development has been a passion of Holden's from a very young age, and a number of her projects have been covered on Slashdot. Outside of programming, she enjoys playing with fire, welding, and dancing. You can learn more on her website, blog, and GitHub.




What you will learn from this book

• Prototype distributed applications with Spark's interactive shell
• Learn different ways to interact with Spark's distributed representation of data (RDDs)
• Load data from a variety of data sources
• Query Spark with a SQL-like query syntax
• Integrate Shark queries with Spark programs
• Effectively test your distributed software
• Tune a Spark installation
• Install and set up Spark on your cluster
• Work effectively with large data sets
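The RDD interactions listed above follow a functional, MapReduce-style flow. As a rough sketch of that flow, the plain-Python word count below mirrors Spark's flatMap → map → reduceByKey pipeline using only the standard library (it runs locally and does not use Spark's actual API, since no cluster is assumed here; the input lines are illustrative):

```python
from collections import defaultdict

# Local stand-in for Spark's classic RDD word count:
# flatMap -> map -> reduceByKey, expressed with ordinary Python.
lines = ["spark makes fast data processing", "fast data wins"]

# flatMap: split each line into individual words
words = [w for line in lines for w in line.split()]

# map: pair each word with a count of 1
pairs = [(w, 1) for w in words]

# reduceByKey: sum the counts per word
counts = defaultdict(int)
for word, n in pairs:
    counts[word] += n

print(dict(counts))
```

In Spark itself, the same shape appears as chained transformations on an RDD, with the work distributed across the cluster rather than run in a single process.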

In Detail

Spark is a framework for writing fast, distributed programs. It solves similar problems to Hadoop MapReduce, but with a fast in-memory approach and a clean, functional-style API. With its ability to integrate with Hadoop, and with built-in tools for interactive query analysis (Shark), large-scale graph processing and analysis (Bagel), and real-time analysis (Spark Streaming), Spark can be used interactively to quickly process and query big data sets.

Fast Data Processing with Spark covers how to write distributed, MapReduce-style programs with Spark. The book guides you through every step required to write effective distributed programs, from setting up your cluster and interactively exploring the API, to deploying your job to the cluster and tuning it for your purposes.

The book covers everything from setting up your Spark cluster in a variety of situations (stand-alone, EC2, and so on) to writing and deploying distributed jobs in Java, Scala, and Python. We examine how to use the interactive shell to quickly prototype distributed programs and explore the Spark API, and we also look at how to use Hive with Spark via Shark's SQL-like query syntax, as well as how to manipulate resilient distributed datasets (RDDs).

This book is a basic, step-by-step tutorial that will help readers take advantage of all that Spark has to offer.
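As a flavor of the pair-RDD manipulation covered in Chapter 6, the sketch below shows what an inner join on keyed pairs computes: for each key present in both datasets, emit (key, (left_value, right_value)). This is a local plain-Python illustration of the semantics, not Spark's implementation, and the `join` helper and sample data are hypothetical:

```python
# Local sketch of a pair-RDD style inner join: keys present in both
# inputs produce (key, (left_value, right_value)) pairs.
def join(left, right):
    right_by_key = {}
    for k, v in right:
        right_by_key.setdefault(k, []).append(v)
    return [(k, (lv, rv))
            for k, lv in left
            for rv in right_by_key.get(k, [])]

visits = [("alice", "home"), ("bob", "about"), ("alice", "pricing")]
users = [("alice", "ca"), ("bob", "ny")]

print(join(visits, users))
```

In Spark, the same operation is a single transformation on two keyed datasets, with the shuffle and grouping handled by the cluster.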

Who this book is for

Fast Data Processing with Spark is for software developers who want to learn how to write distributed programs with Spark. It will help developers facing problems that are too big to be handled on a single computer. No previous experience with distributed programming is necessary, but the book assumes knowledge of Java, Scala, or Python.
