Hadoop Real-World Solutions Cookbook - Second Edition

Over 90 hands-on recipes to help you learn and master the intricacies of Apache Hadoop 2.X, YARN, Hive, Pig, Oozie, Flume, Sqoop, Apache Spark, and Mahout

Tanmay Deshpande


Book Details

ISBN 13: 9781784395506
Paperback: 290 pages

Book Description

Big data is today's reality: most organizations produce huge amounts of data every day. With the arrival of tools like Hadoop, it has become easier for everyone to solve big data problems with great efficiency and at minimal cost. Grasping machine learning techniques will also help you build predictive models and use your data to make the right decisions for your organization.

Hadoop Real-World Solutions Cookbook gives readers insight into learning and mastering big data through practical recipes. The book not only covers the most widely used big data tools on the market but also provides best practices for using them. The recipes are based on the latest versions of Apache Hadoop 2.X, YARN, Hive, Pig, Sqoop, Flume, Apache Spark, Mahout, and many other ecosystem tools. This real-world solutions cookbook is packed with handy recipes that you can apply to your own everyday problems. Each chapter provides in-depth recipes that can be referenced easily, with detailed coverage of the latest technologies such as YARN and Apache Spark. On completing this book, readers will be able to consider themselves big data experts.

This guide is an invaluable tutorial if you are planning to implement a big data warehouse for your business.

Table of Contents

Chapter 1: Getting Started with Hadoop 2.X
Introduction
Installing a single-node Hadoop cluster
Installing a multi-node Hadoop cluster
Adding new nodes to existing Hadoop clusters
Executing the balancer command for uniform data distribution
Entering and exiting from the safe mode in a Hadoop cluster
Decommissioning DataNodes
Performing benchmarking on a Hadoop cluster
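
To give a feel for the recipes in this chapter, here is a minimal Java sketch (not from the book) that verifies a freshly installed cluster by querying its capacity through the Hadoop FileSystem API; it assumes fs.defaultFS is set in a core-site.xml on the classpath.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FsStatus;

    public class ClusterStatusCheck {
        public static void main(String[] args) throws Exception {
            // Picks up fs.defaultFS from core-site.xml on the classpath
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            FsStatus status = fs.getStatus();
            System.out.println("Capacity : " + status.getCapacity());
            System.out.println("Used     : " + status.getUsed());
            System.out.println("Remaining: " + status.getRemaining());
            fs.close();
        }
    }
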
Chapter 2: Exploring HDFS
Introduction
Loading data from a local machine to HDFS
Exporting HDFS data to a local machine
Changing the replication factor of an existing file in HDFS
Setting the HDFS block size for all the files in a cluster
Setting the HDFS block size for a specific file in a cluster
Enabling transparent encryption for HDFS
Importing data from another Hadoop cluster
Recycling deleted data from trash to HDFS
Saving compressed data in HDFS
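
A minimal sketch of the kind of HDFS interaction these recipes cover, assuming a running cluster; the file paths are placeholders. It loads a local file into HDFS and then changes its replication factor programmatically.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsLoadAndReplicate {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path local = new Path("/tmp/sample.txt");        // hypothetical local file
            Path remote = new Path("/user/hadoop/sample.txt");
            fs.copyFromLocalFile(local, remote);             // load into HDFS
            fs.setReplication(remote, (short) 2);            // change the replication factor
            fs.close();
        }
    }
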
Chapter 3: Mastering Map Reduce Programs
Introduction
Writing the Map Reduce program in Java to analyze web log data
Executing the Map Reduce program in a Hadoop cluster
Adding support for a new writable data type in Hadoop
Implementing a user-defined counter in a Map Reduce program
Map Reduce program to find the top X
Map Reduce program to find distinct values
Map Reduce program to partition data using a custom partitioner
Writing Map Reduce results to multiple output files
Performing Reduce side Joins using Map Reduce
Unit testing the Map Reduce code using MRUnit
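
The chapter's recipes build on the standard MapReduce programming model; as a baseline, here is the canonical word-count job in Java, condensed but runnable against Hadoop 2.X. Input and output paths are passed as arguments.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            @Override
            protected void map(Object key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                // Emit (token, 1) for every whitespace-separated token in the line
                for (String token : value.toString().split("\\s+")) {
                    if (token.isEmpty()) continue;
                    word.set(token);
                    ctx.write(word, ONE);
                }
            }
        }

        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) sum += v.get();   // total count per token
                ctx.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Packaged into a jar, it runs with hadoop jar wordcount.jar WordCount <input> <output>.
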
Chapter 4: Data Analysis Using Hive, Pig, and HBase
Introduction
Storing and processing Hive data in a sequential file format
Storing and processing Hive data in the RC file format
Storing and processing Hive data in the ORC file format
Storing and processing Hive data in the Parquet file format
Performing FILTER By queries in Pig
Performing Group By queries in Pig
Performing Order By queries in Pig
Performing JOINS in Pig
Writing a user-defined function in Pig
Analyzing web log data using Pig
Performing HBase operations in the CLI
Performing HBase operations in Java
Executing MapReduce programs with an HBase table
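
For the HBase recipes, the Java client API (HBase 1.x style) looks roughly like this sketch; the table name "weblogs" and column family "cf" are hypothetical and must already exist.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseBasicOps {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();   // reads hbase-site.xml
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("weblogs"))) {
                // Write one cell, then read it back
                Put put = new Put(Bytes.toBytes("row1"));
                put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("url"),
                        Bytes.toBytes("/index.html"));
                table.put(put);
                Result result = table.get(new Get(Bytes.toBytes("row1")));
                System.out.println(Bytes.toString(
                        result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("url"))));
            }
        }
    }
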
Chapter 5: Advanced Data Analysis Using Hive
Introduction
Processing JSON data in Hive using JSON SerDe
Processing XML data in Hive using XML SerDe
Processing Hive data in the Avro format
Writing a user-defined function in Hive
Performing table joins in Hive
Executing map side joins in Hive
Performing context n-gram analysis in Hive
Call Data Record Analytics using Hive
Twitter sentiment analysis using Hive
Implementing Change Data Capture using Hive
Multiple table inserting using Hive
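
As an illustration of the UDF recipe, a Hive user-defined function in Java can be as small as the sketch below (using the classic UDF interface); after packaging it into a jar, you would register it in Hive with ADD JAR and CREATE TEMPORARY FUNCTION. The function name and jar name are placeholders.

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Lower-cases a string column; register in Hive with:
    //   ADD JAR lower-udf.jar;
    //   CREATE TEMPORARY FUNCTION to_lower AS 'LowerUDF';
    public class LowerUDF extends UDF {
        public Text evaluate(Text input) {
            if (input == null) return null;    // Hive passes NULLs through
            return new Text(input.toString().toLowerCase());
        }
    }
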
Chapter 6: Data Import/Export Using Sqoop and Flume
Introduction
Importing data from RDBMS to HDFS using Sqoop
Exporting data from HDFS to RDBMS
Using the query operator in a Sqoop import
Importing data using Sqoop in compressed format
Performing an atomic export using Sqoop
Importing data into Hive tables using Sqoop
Importing data into HDFS from mainframes
Incremental import using Sqoop
Creating and executing a Sqoop job
Importing data from RDBMS to HBase using Sqoop
Importing Twitter data into HDFS using Flume
Importing data from Kafka into HDFS using Flume
Importing web log data into HDFS using Flume
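
Sqoop is usually driven from the command line; below is a hedged Java sketch that launches a standard sqoop import. It assumes the sqoop binary is on the PATH, and the JDBC URL, table, and paths are placeholders.

    import java.util.Arrays;

    public class SqoopImportLauncher {
        public static void main(String[] args) throws Exception {
            // Standard "sqoop import" flags; the JDBC URL, table, and paths are placeholders
            ProcessBuilder pb = new ProcessBuilder(Arrays.asList(
                    "sqoop", "import",
                    "--connect", "jdbc:mysql://db-host/sales",
                    "--username", "hadoop",
                    "--password-file", "/user/hadoop/.db.pwd",
                    "--table", "orders",
                    "--target-dir", "/data/orders",
                    "--num-mappers", "4"));
            pb.inheritIO();                     // stream Sqoop's console output through
            System.exit(pb.start().waitFor());
        }
    }

Using --password-file rather than --password keeps credentials out of the process list.
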
Chapter 7: Automation of Hadoop Tasks Using Oozie
Introduction
Implementing a Sqoop action job using Oozie
Implementing a Map Reduce action job using Oozie
Implementing a Java action job using Oozie
Implementing a Hive action job using Oozie
Implementing a Pig action job using Oozie
Implementing an e-mail action job using Oozie
Executing parallel jobs using Oozie (fork)
Scheduling a job in Oozie
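
Besides the Oozie CLI, workflows can be submitted from Java with the Oozie client library, roughly as in this sketch; the Oozie URL, NameNode and ResourceManager addresses, and the application path are placeholders for your environment.

    import java.util.Properties;
    import org.apache.oozie.client.OozieClient;

    public class OozieWorkflowLauncher {
        public static void main(String[] args) throws Exception {
            // All addresses below are placeholders for your environment
            OozieClient client = new OozieClient("http://oozie-host:11000/oozie");
            Properties conf = client.createConfiguration();
            conf.setProperty(OozieClient.APP_PATH, "hdfs://nn-host:8020/apps/demo-wf");
            conf.setProperty("nameNode", "hdfs://nn-host:8020");
            conf.setProperty("jobTracker", "rm-host:8032");
            String jobId = client.run(conf);   // submit and start the workflow
            System.out.println("Workflow job submitted: " + jobId);
        }
    }
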
Chapter 8: Machine Learning and Predictive Analytics Using Mahout and R
Introduction
Setting up the Mahout development environment
Creating an item-based recommendation engine using Mahout
Creating a user-based recommendation engine using Mahout
Performing predictive analytics on bank data using Mahout
Clustering text data using K-Means
Performing Population Data Analytics using R
Performing Twitter Sentiment Analytics using R
Performing Predictive Analytics using R
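
As a flavor of the recommender recipes, here is a minimal item-based recommender using Mahout's Taste API; it assumes a hypothetical ratings.csv file with one userID,itemID,preference triple per line.

    import java.io.File;
    import java.util.List;
    import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
    import org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender;
    import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
    import org.apache.mahout.cf.taste.model.DataModel;
    import org.apache.mahout.cf.taste.recommender.RecommendedItem;
    import org.apache.mahout.cf.taste.similarity.ItemSimilarity;

    public class ItemRecommenderSketch {
        public static void main(String[] args) throws Exception {
            // ratings.csv: one "userID,itemID,preference" triple per line (hypothetical file)
            DataModel model = new FileDataModel(new File("ratings.csv"));
            ItemSimilarity similarity = new PearsonCorrelationSimilarity(model);
            GenericItemBasedRecommender recommender =
                    new GenericItemBasedRecommender(model, similarity);
            List<RecommendedItem> items = recommender.recommend(1L, 3); // top 3 for user 1
            for (RecommendedItem item : items) {
                System.out.println(item.getItemID() + " -> " + item.getValue());
            }
        }
    }
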
Chapter 9: Integration with Apache Spark
Introduction
Running Spark standalone
Running Spark on YARN
Olympics Athletes analytics using the Spark Shell
Creating Twitter trending topics using Spark Streaming
Analyzing Parquet files using Spark
Analyzing JSON data using Spark
Processing graphs using GraphX
Conducting predictive analytics using Spark MLlib
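
A small sketch of JSON analysis with the Spark 1.x Java API, in the spirit of this chapter's recipes; the input file athletes.json and its "country" field are hypothetical.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.sql.DataFrame;
    import org.apache.spark.sql.SQLContext;

    public class JsonAnalysisSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("json-analysis").setMaster("local[*]");
            JavaSparkContext sc = new JavaSparkContext(conf);
            SQLContext sqlContext = new SQLContext(sc);
            // Each line of athletes.json is assumed to be one JSON record
            DataFrame df = sqlContext.read().json("athletes.json");
            df.printSchema();
            df.groupBy("country").count().show();   // hypothetical "country" field
            sc.stop();
        }
    }
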
Chapter 10: Hadoop Use Cases
Introduction
Call Data Record analytics
Web log analytics
Sensitive data masking and encryption using Hadoop
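
For the masking use case, one common approach is one-way hashing of sensitive fields before the data lands in Hadoop; the plain-JDK sketch below illustrates that idea (an assumption about the approach, not the book's exact code).

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class FieldMasker {
        // One-way masking of a sensitive field (e.g., a subscriber number in a CDR)
        public static String mask(String value) throws Exception {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] hash = digest.digest(value.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : hash) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            System.out.println(mask("555-0142"));   // sample value, not real data
        }
    }
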

What You Will Learn

  • Install and maintain a Hadoop 2.X cluster and its ecosystem
  • Write advanced Map Reduce programs and understand design patterns
  • Perform advanced data analysis using Hive, Pig, and Map Reduce programs
  • Import and export data from various sources using Sqoop and Flume
  • Store data in file formats such as Text, Sequential, Parquet, ORC, and RC files
  • Learn machine learning principles with libraries such as Mahout
  • Process batch and streaming data with Apache Spark

Authors

Tanmay Deshpande