
Learning Hadoop 2

Garry Turkington, Gabriele Modena

Design and implement data processing, lifecycle management, and analytic workflows with the cutting-edge toolbox of Hadoop 2
eBook: $29.99 (RRP $29.99)
Print + eBook: $49.99 (RRP $49.99)


Book Details

ISBN 13: 9781783285518
Paperback: 382 pages

About This Book

  • Construct state-of-the-art applications using higher-level interfaces and tools beyond the traditional MapReduce approach
  • Use the unique features of Hadoop 2 to model and analyze Twitter’s global stream of user-generated data
  • Develop a prototype on a local cluster and deploy to the cloud (Amazon Web Services)

Who This Book Is For

If you are a system or application developer interested in learning how to solve practical problems using the Hadoop framework, then this book is ideal for you. You are expected to be familiar with the Unix/Linux command-line interface and have some experience with the Java programming language. Familiarity with Hadoop would be a plus.

Table of Contents

Chapter 1: Introduction
A note on versioning
The background of Hadoop
Components of Hadoop
Hadoop 2 – what's the big deal?
Distributions of Apache Hadoop
A dual approach
AWS – infrastructure on demand from Amazon
Getting started
Running the examples
Data processing with Hadoop
Summary
Chapter 2: Storage
The inner workings of HDFS
Command-line access to the HDFS filesystem
Protecting the filesystem metadata
Apache ZooKeeper – a different type of filesystem
Automatic NameNode failover
HDFS snapshots
Hadoop filesystems
Managing and serializing data
Storing data
Summary
Chapter 3: Processing – MapReduce and Beyond
MapReduce
Java API to MapReduce
Writing MapReduce programs
Walking through a run of a MapReduce job
YARN
YARN in the real world – Computation beyond MapReduce
Summary
Chapter 4: Real-time Computation with Samza
Stream processing with Samza
Summary
Chapter 5: Iterative Computation with Spark
Apache Spark
The Spark ecosystem
Processing data with Apache Spark
Comparing Samza and Spark Streaming
Summary
Chapter 6: Data Analysis with Apache Pig
An overview of Pig
Getting started
Running Pig
Fundamentals of Apache Pig
Programming Pig
Extending Pig (UDFs)
Analyzing the Twitter stream
Summary
Chapter 7: Hadoop and SQL
Why SQL on Hadoop
Prerequisites
Hive architecture
Hive and Amazon Web Services
Extending HiveQL
Programmatic interfaces
Stinger initiative
Impala
Summary
Chapter 8: Data Lifecycle Management
What data lifecycle management is
Building a tweet analysis capability
Challenges of external data
Collecting additional data
Pulling it all together
Summary
Chapter 9: Making Development Easier
Choosing a framework
Hadoop streaming
Kite Data
Apache Crunch
Summary
Chapter 10: Running a Hadoop Cluster
I'm a developer – I don't care about operations!
Cloudera Manager
Ambari – the open source alternative
Operations in the Hadoop 2 world
Sharing resources
Building a physical cluster
Building a cluster on EMR
Cluster tuning
Security
Monitoring
Troubleshooting
Summary
Chapter 11: Where to Go Next
Alternative distributions
Other computational frameworks
Other interesting projects
Other programming abstractions
AWS resources
Sources of information
Summary

What You Will Learn

  • Write distributed applications using the MapReduce framework
  • Go beyond MapReduce and process data in real time with Samza and iteratively with Spark
  • Familiarize yourself with data mining approaches that work with very large datasets
  • Prototype applications on a VM and deploy them to a local cluster or to a cloud infrastructure (Amazon Web Services)
  • Conduct batch and real-time data analysis using SQL-like tools
  • Build data processing flows using Apache Pig and see how it enables the easy incorporation of custom functionality
  • Define and orchestrate complex workflows and pipelines with Apache Oozie
  • Manage your data lifecycle and changes over time
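To give a flavor of the MapReduce pattern covered in the first bullet, here is a minimal sketch in plain Python. It simulates the map, shuffle (sort), and reduce phases of the classic word-count job locally; the function names are illustrative and not taken from the book, and on a real cluster the shuffle and distribution would be handled by Hadoop itself (for example, via Hadoop Streaming).

```python
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Map phase: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce phase: sum the counts for each word. Assumes pairs
    arrive sorted by key, as the Hadoop shuffle phase guarantees."""
    for word, group in groupby(pairs, key=itemgetter(0)):
        yield word, sum(count for _, count in group)

def word_count(lines):
    """Simulate the map -> shuffle (sort) -> reduce pipeline locally."""
    shuffled = sorted(mapper(lines), key=itemgetter(0))
    return dict(reducer(shuffled))

if __name__ == "__main__":
    print(word_count(["the quick brown fox", "the lazy dog"]))
```

The same mapper and reducer logic, rewritten to read from stdin and write tab-separated pairs to stdout, is exactly the shape Hadoop Streaming expects from a Python job.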

In Detail

This book introduces you to the world of building data-processing applications with the wide variety of tools supported by Hadoop 2. Starting with the core components of the framework—HDFS and YARN—this book will guide you through building applications using a variety of approaches.

You will learn how YARN completely changes the relationship between MapReduce and Hadoop, allowing Hadoop to support more varied processing approaches and a broader array of applications. These include real-time processing with Apache Samza and iterative computation with Apache Spark. Next up, we discuss Apache Pig and the dataflow data model it provides. You will discover how to use Pig to analyze a Twitter dataset.

With this book, you will be able to make your life easier by using tools such as Apache Hive, Apache Oozie, Hadoop Streaming, Apache Crunch, and Kite SDK. The last part of this book discusses the likely future direction of major Hadoop components and how to get involved with the Hadoop community.
