Hadoop Real-World Solutions Cookbook - Second Edition

Product type: Book
Published: Mar 2016
ISBN-13: 9781784395506
Pages: 290
Edition: 2nd
Author: Tanmay Deshpande

Table of Contents (18 chapters)

Hadoop Real-World Solutions Cookbook Second Edition
Credits
About the Author
Acknowledgements
About the Reviewer
www.PacktPub.com
Preface
Getting Started with Hadoop 2.X
Exploring HDFS
Mastering Map Reduce Programs
Data Analysis Using Hive, Pig, and HBase
Advanced Data Analysis Using Hive
Data Import/Export Using Sqoop and Flume
Automation of Hadoop Tasks Using Oozie
Machine Learning and Predictive Analytics Using Mahout and R
Integration with Apache Spark
Hadoop Use Cases
Index

Chapter 5. Advanced Data Analysis Using Hive

  • Processing JSON data in Hive using JSON SerDe

  • Processing XML data in Hive using XML SerDe

  • Processing Hive data in the Avro format

  • Writing a user-defined function in Hive

  • Performing table joins in Hive

  • Executing map side joins in Hive

  • Performing context Ngram in Hive

  • Analyzing a call data record using Hive

  • Performing sentiment analysis using Hive on Twitter data

  • Implementing Change Data Capture (CDC) using Hive

  • Inserting data into multiple tables using Hive

Introduction


In the previous chapter, we discussed various tasks that can be performed using Hive, Pig, and HBase. In this chapter, we are going to take a look at how to perform some advanced tasks using Hive. We will see how to analyze data in various formats such as JSON, XML, and Avro. We will also explore how to write User-Defined Functions (UDFs) in Hive, deploy them, and use them in Hive queries. Now let's get started.

Processing JSON data in Hive using JSON SerDe


These days, JSON is a very common format for data exchange and storage. Its key-value structure gives great flexibility in handling data. In this recipe, we are going to take a look at how to process data stored in the JSON format in Hive. Hive does not have built-in support for JSON, so we will use a JSON SerDe. A SerDe (serializer/deserializer) tells Hive how to read and write data in a given format.

Getting ready

To perform this recipe, you should have a running Hadoop cluster with the latest version of Hive installed on it. Here, I am using Hive 1.2.1. Apart from Hive, we also need JSON SerDe.

Several JSON SerDe implementations are available from different developers. The most popular can be found at https://github.com/rcongiu/Hive-JSON-Serde.

This project contains code for JSON SerDe and is compatible with the latest version of Hive. You can either download the code and build...
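As a sketch of how such a SerDe is typically wired in (the jar path, table name, and columns below are illustrative assumptions, not from the recipe; the SerDe class shown is the one from the project above):

```sql
-- Register the SerDe jar (path is an assumption; use your build's jar).
ADD JAR /path/to/json-serde-with-dependencies.jar;

-- A hypothetical table over JSON records such as {"name": "...", "age": 42}.
CREATE TABLE json_users (
  name STRING,
  age  INT
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
STORED AS TEXTFILE;

LOAD DATA LOCAL INPATH 'users.json' INTO TABLE json_users;

-- Once the SerDe is in place, the table is queried like any other.
SELECT name, age FROM json_users;
```

The SerDe maps each top-level JSON key to the column of the same name, so the table definition only needs to list the fields you intend to query.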

Processing XML data in Hive using XML SerDe


XML is one of the most important data structures and has long been used for data transfer and storage. Parsing XML data and then processing it is always a tricky task, as XML parsing is one of the costliest operations. Hive does not have built-in support for XML processing, but many organizations and individuals have contributed open source XML SerDe implementations.

Getting ready

To perform this recipe, you should have a running Hadoop cluster as well as the latest version of Hive installed on it. Here, I am using Hive 1.2.1. Apart from Hive, we also need XML SerDe.

Various XML SerDe implementations have been made available by open source developers. Of these, the XML SerDe at https://github.com/dvasilen/Hive-XML-SerDe is well developed and quite useful. We can download the jar from http://search.maven.org/remotecontent?filepath=com/ibm/spss/hive/serde2/xml/hivexmlserde/1.0.5.3/hivexmlserde-1.0.5.3.jar.
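A minimal sketch of how this SerDe is typically used (the record shape, table name, and XPath mappings are illustrative assumptions, not from the recipe):

```sql
-- Register the XML SerDe jar downloaded above.
ADD JAR hivexmlserde-1.0.5.3.jar;

-- A hypothetical table over records like
-- <person><name>A</name><age>30</age></person>.
CREATE TABLE xml_people (
  name STRING,
  age  INT
)
ROW FORMAT SERDE 'com.ibm.spss.hive.serde2.xml.XmlSerDe'
WITH SERDEPROPERTIES (
  "column.xpath.name" = "/person/name/text()",
  "column.xpath.age"  = "/person/age/text()"
)
STORED AS TEXTFILE
TBLPROPERTIES (
  "xmlinput.start" = "<person>",
  "xmlinput.end"   = "</person>"
);
```

The `xmlinput.start`/`xmlinput.end` properties tell the input format where each record begins and ends, and each `column.xpath.*` property maps an XPath expression onto a table column.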

How to do it...

Processing Hive data in the Avro format


Avro is an evolvable, schema-driven binary data format. It is hosted and maintained by the Apache Software Foundation (http://avro.apache.org/). It provides rich data structures in a compact, fast binary encoding, and it relies on schemas. Avro files store data and schema together, which speeds up reading because the schema does not have to be fetched from anywhere else. Avro can also be used in Remote Procedure Calls (RPC), where the schema is transferred during the handshake between client and server. In this recipe, we will take a look at how to process Avro files in Hive.

Getting ready

To perform this recipe, you should have a running Hadoop cluster as well as the latest version of Hive installed on it. Here, I am using Hive 1.2.1. Hive has built-in support for the Avro file format, so we don't need to import any third-party JARs.

How to do it...

Using Avro SerDe, we can either read data that is already in the Avro format or write new data...
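Because the support is built in, declaring an Avro-backed table is a one-line change. The sketch below assumes the `emp` text table used later in this chapter; the Avro table names are illustrative:

```sql
-- Write existing data out in Avro: Hive derives the Avro schema from the
-- SELECT's column types (assumes the emp table from the joins recipe).
CREATE TABLE emp_avro
STORED AS AVRO
AS SELECT id, name, salary FROM emp;

-- Or declare an empty Avro-backed table directly from column definitions.
CREATE TABLE emp_avro2 (
  id     INT,
  name   STRING,
  salary DOUBLE
)
STORED AS AVRO;
```

`STORED AS AVRO` (available since Hive 0.14) lets Hive infer the Avro schema from the table definition, so no explicit `avro.schema.literal` property is needed for simple cases.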

Writing a user-defined function in Hive


In the previous chapter, we talked about how to write user-defined functions in Pig; in this recipe, we are going to do the same for Hive. Hive supports adding temporary functions that can be used to process data. We will write a UDF in Java and create a function that can be used in data processing.

Getting ready

To perform this recipe, you should have a running Hadoop cluster as well as the latest version of Hive installed on it. Here, I am using Hive 1.2.1. We will also need the Eclipse IDE for development.

How to do it

Hive supports various system functions, but sometimes you need to do something that the system-provided functions cannot handle. In such cases, we write a custom function.

Take a situation where we have census data and a person's income, and we want to categorize them into three parts based on the person's income. The following is some sample data where we have the...
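The banding logic such a UDF might implement can be sketched in plain Java. The thresholds and class name below are illustrative assumptions, not from the recipe; a real Hive UDF would extend `org.apache.hadoop.hive.ql.exec.UDF` and expose this logic through an `evaluate()` method:

```java
// Sketch of income-banding logic for a Hive UDF; the cut-off values
// are assumptions chosen for illustration only.
public class IncomeCategory {
    // In a real UDF this would be the body of evaluate(DoubleWritable).
    public static String categorize(double income) {
        if (income < 20000) {
            return "LOW";
        } else if (income < 50000) {
            return "MIDDLE";
        }
        return "HIGH";
    }
}
```

After packaging the class (compiled against `hive-exec`) into a jar, it would be registered in Hive with `ADD JAR` followed by `CREATE TEMPORARY FUNCTION income_category AS 'IncomeCategory';`, and could then be called like any built-in function in a query.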

Performing table joins in Hive


In the previous chapter, we talked about how to perform joins in Pig. In this recipe, we are going to take a look at how to perform joins in Hive. Hive supports various types of joins such as inner, outer, and so on.

Getting ready

To perform this recipe, you should have a running Hadoop cluster as well as the latest version of Hive installed on it. Here, I am using Hive 1.2.1.

How to do it...

To perform joins, we need two datasets that have something in common to join on. Consider a situation where we have two tables, employees and departments; the employee table has the columns ID, name, salary, and department ID, and the department table has an ID and a name. We will quickly create the tables and load data into them:

CREATE TABLE emp(
  id INT,
  name STRING,
  salary DOUBLE,
  deptId INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
STORED AS TEXTFILE;

LOAD DATA LOCAL INPATH 'emp.txt' INTO TABLE emp;

hive> select * from emp;
OK
...
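The department table and the join queries can be sketched along the same lines (the `dept` table definition and file name mirror the `emp` pattern above and are assumptions, not taken verbatim from the recipe):

```sql
CREATE TABLE dept(
  id INT,
  name STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
STORED AS TEXTFILE;

LOAD DATA LOCAL INPATH 'dept.txt' INTO TABLE dept;

-- Inner join: only employees whose deptId matches a department.
SELECT e.id, e.name, d.name AS dept_name
FROM emp e JOIN dept d ON (e.deptId = d.id);

-- Left outer join: keeps employees whose deptId has no match (the
-- department columns come back as NULL for those rows).
SELECT e.id, e.name, d.name AS dept_name
FROM emp e LEFT OUTER JOIN dept d ON (e.deptId = d.id);
```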

Executing map side joins in Hive


A map-side join is a special type of optimization in which the smaller table is loaded into memory and the join runs entirely in the map phase, avoiding the shuffle and reduce phases; Hive can apply it automatically based on table sizes. In this recipe, we are going to explore map-side joins in further detail.

Getting ready

To perform this recipe, you should have a running Hadoop cluster as well as the latest version of Hive installed on it. Here, I am using Hive 1.2.1.

How to do it...

To perform map joins, we need two datasets that share a join key, where one dataset is big and the other comparatively small. Consider a situation where we have two tables for employees and departments; the employee table has the columns ID, name, salary, and department ID, and the department table has an ID and a name.

We will quickly create tables and load data into them:

CREATE TABLE emp(
  id INT,
  name STRING,
  salary DOUBLE,
  deptId INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
STORED AS TEXTFILE;

LOAD DATA LOCAL INPATH 'emp.txt' INTO TABLE emp;
hive>...
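The two usual ways of getting a map-side join can be sketched as follows (the threshold value shown is the common default, quoted here as an assumption; `dept` is the small table from the previous recipe):

```sql
-- Let Hive convert eligible joins to map joins automatically when the
-- smaller table's on-disk size is under the threshold.
SET hive.auto.convert.join=true;
SET hive.mapjoin.smalltable.filesize=25000000;

SELECT e.id, e.name, d.name AS dept_name
FROM emp e JOIN dept d ON (e.deptId = d.id);

-- Alternatively, request a map join explicitly with the older hint style;
-- the hint is honored only when hive.ignore.mapjoin.hint is set to false.
SELECT /*+ MAPJOIN(d) */ e.id, e.name, d.name AS dept_name
FROM emp e JOIN dept d ON (e.deptId = d.id);
```

In both cases the small table (`dept`) is distributed to every mapper and held in memory as a hash table, so no reduce stage is needed.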

Performing context Ngram in Hive


An n-gram is a sequence of n words drawn from a given text, counted by how often it occurs. N-grams are generally used to find how frequently certain word sequences appear, which helps in tasks such as sentiment analysis. Hive provides built-in support for n-gram calculation through a function. In this recipe, we will take a look at how to use this function to analyze text data.

Getting ready

To perform this recipe, you should have a running Hadoop cluster as well as the latest version of Hive installed on it. Here, I am using Hive 1.2.1.

How to do it...

Context n-grams can be used to find the most frequently used words after a given sequence of words in a text dataset. To do this, let's first create a Hive table and load data into it.

Take a situation where we have data from Twitter where people are writing about their sentiments about chocolate. Let's assume that we have text data, as follows:

Chocolate is good
Chocolate is...
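A sketch of the query using Hive's built-in `context_ngrams()` function (the table name and file path are illustrative assumptions):

```sql
-- Hypothetical table holding one tweet per row.
CREATE TABLE tweets (line STRING);
LOAD DATA LOCAL INPATH 'tweets.txt' INTO TABLE tweets;

-- Top 5 words that follow the sequence "chocolate is"; the NULL in the
-- context array marks the position whose candidates we want ranked.
SELECT context_ngrams(sentences(lower(line)),
                      array('chocolate', 'is', NULL), 5)
FROM tweets;
```

`sentences()` tokenizes each row into arrays of words, and `context_ngrams()` aggregates over all rows, returning the k most frequent fillers for the `NULL` slot together with their estimated frequencies.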

Call Data Record Analytics using Hive


Call Data Records (CDRs) are special records used in the telecom domain to keep track of calls made by subscribers. We can use Hive to analyze these records, for example in order to design special offers for customers.

Note

You can read more about CDR at https://en.wikipedia.org/wiki/Call_detail_record.

Getting ready

To perform this recipe, you should have a running Hadoop cluster as well as the latest version of Hive installed on it. Here, I am using Hive 1.2.1.

How to do it...

First of all, let's consider a situation where we have the following type of dataset with us. To analyze it, we first need to create a Hive table and load data into it:

CALLER_PHONE_NO|RECEIVER_PHONE_NUMBER|START_TIME|END_TIME|CALL_TYPE
11111|22222|2015-01-12 01:20:00|2015-01-12 01:30:00|VOICE
11111|22222|2015-02-12 01:35:00|2015-02-12 01:35:30|VOICE
11111|22222|2015-02-12 02:20:00|2015-02-12 02:20:00|SMS
33333|44444|2015-01-12 01:20:00|2015-01-12 01:30:00|VOICE
11111|33333|2015-05...
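One way to load and analyze this data can be sketched as follows (the table name, file path, and the specific aggregate are illustrative assumptions; the column layout follows the header above):

```sql
CREATE TABLE cdr (
  caller     STRING,
  receiver   STRING,
  start_time STRING,
  end_time   STRING,
  call_type  STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
STORED AS TEXTFILE;

LOAD DATA LOCAL INPATH 'cdr.txt' INTO TABLE cdr;

-- Total voice-call seconds per caller, e.g. to identify heavy users
-- who might qualify for a special offer. unix_timestamp() parses the
-- default 'yyyy-MM-dd HH:mm:ss' format used in the sample data.
SELECT caller,
       SUM(unix_timestamp(end_time) - unix_timestamp(start_time)) AS total_secs
FROM cdr
WHERE call_type = 'VOICE'
GROUP BY caller;
```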

Twitter sentiment analysis using Hive


Twitter is one of the most important data sources for understanding public sentiment on various topics. In this recipe, we will take a look at how to perform sentiment analysis on Twitter data using Hive.

Getting ready

To perform this recipe, you should have a running Hadoop cluster as well as the latest version of Hive installed on it. Here, I am using Hive 1.2.1.

How to do it...

First of all, we need a dataset to perform this recipe. We will be using a dataset that can be found at http://s3.amazonaws.com/hw-sandbox/tutorial13/SentimentFiles.zip.

Next, we will unzip this data and upload it to HDFS. The zip contains three folders: the first with raw Twitter data, the second with a sentiment dictionary, and the third with a time zone map:

hadoop fs -mkdir /data
hadoop fs -put tweets_raw /data
hadoop fs -put time_zone_map /data
hadoop fs -put dictionary /data

We use a JSON SerDe jar to read the Twitter data, as shown here:

ADD JAR json-serde-1.1.9.9-Hive1.2-jar-with...
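The overall shape of the analysis can be sketched as follows. Everything here is an assumption for illustration: only a few of the many JSON fields in a tweet are mapped, and `dict` stands for a hypothetical `(word, polarity)` table built from the dictionary folder:

```sql
-- External table over the raw tweets uploaded to HDFS above.
CREATE EXTERNAL TABLE tweets_raw (
  id         BIGINT,
  created_at STRING,
  text       STRING
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION '/data/tweets_raw';

-- Score each tweet: +1 per positive dictionary word, -1 per negative.
SELECT w.id,
       SUM(CASE d.polarity WHEN 'positive' THEN  1
                           WHEN 'negative' THEN -1
                           ELSE 0 END) AS sentiment
FROM (
  SELECT t.id, word
  FROM tweets_raw t
  LATERAL VIEW explode(split(lower(t.text), ' ')) words AS word
) w
JOIN dict d ON (w.word = d.word)
GROUP BY w.id;
```

The inner query explodes each tweet into one row per word; the outer join against the dictionary then sums polarities into a per-tweet sentiment score.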

Implementing Change Data Capture using Hive


Change Data Capture (CDC) is one of the most painful areas in data warehousing. CDC captures the changes that occur in a table: new records being added, existing records being updated, or records being deleted. In this recipe, we are going to take a look at how to perform CDC in Hive.

Getting ready

To perform this recipe, you should have a running Hadoop cluster as well as the latest version of Hive installed on it. Here, I am using Hive 1.2.1.

How to do it

First of all, we need a data sample. Consider a simple employee table that has columns, such as the employee ID, name, and salary. Let's say we import this table from a source table in week 1, and after a week, we want to know about the changes that have taken place in the same table. Let's say we have a table, employee1, which was imported in week 1, and we have another table, which was imported in week 2. Week 2 being the latest week, we want to know the changes that have taken place. Here...
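The comparison between the two snapshots can be sketched with outer joins (a common approach, offered here as an assumption about how the recipe proceeds; `employee1` and `employee2` are the week 1 and week 2 tables described above):

```sql
-- New or changed rows: present in week 2 with no identical row in week 1.
SELECT e2.*
FROM employee2 e2
LEFT OUTER JOIN employee1 e1
  ON (e2.id = e1.id AND e2.name = e1.name AND e2.salary = e1.salary)
WHERE e1.id IS NULL;

-- Deleted rows: IDs present in week 1 that no longer exist in week 2.
SELECT e1.*
FROM employee1 e1
LEFT OUTER JOIN employee2 e2 ON (e1.id = e2.id)
WHERE e2.id IS NULL;
```

Matching on all columns in the first query makes both inserts and updates surface as "changed" rows; matching on the key alone in the second isolates true deletions.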

Multiple table inserting using Hive


Hive allows you to write data to multiple tables or directories at a time. This is an optimization: the source table needs to be read only once, which reduces processing time. In this recipe, we are going to take a look at how to write data to multiple tables/directories in a single query.

Getting ready

To perform this recipe, you should have a running Hadoop cluster as well as the latest version of Hive installed on it. Here, I am using Hive 1.2.1.

How to do it

Let's say we have an employee table with columns such as ID, name, and salary:

Table – employee

1,A,1000
2,B,2000
3,C,3000
4,D,2000
5,E,1000
6,F,3000
7,G,1000
8,H,3000
9,I,1000
10,J,2000
11,K,1000
12,L,1000
13,M,1000
14,N,3000
15,O,3000
16,P,1000
17,Q,1000
18,R,1000
19,S,2000
20,T,3000

Let's create the table and load the data into it:

CREATE TABLE employee (
id INT,
name STRING,
salary BIGINT
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

LOAD DATA LOCAL INPATH 'emp.txt' INTO TABLE...
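The multi-table insert itself can be sketched with Hive's `FROM ... INSERT` syntax (the three target tables and the salary-band conditions are illustrative assumptions based on the sample data above):

```sql
-- Hypothetical target tables, one per salary band.
CREATE TABLE emp_low  (id INT, name STRING, salary BIGINT);
CREATE TABLE emp_mid  (id INT, name STRING, salary BIGINT);
CREATE TABLE emp_high (id INT, name STRING, salary BIGINT);

-- A single scan of the employee table feeds all three inserts.
FROM employee
INSERT OVERWRITE TABLE emp_low  SELECT id, name, salary WHERE salary = 1000
INSERT OVERWRITE TABLE emp_mid  SELECT id, name, salary WHERE salary = 2000
INSERT OVERWRITE TABLE emp_high SELECT id, name, salary WHERE salary = 3000;
```

Because the `FROM` clause comes first, Hive reads `employee` once and routes each row to every `INSERT` whose `WHERE` condition it satisfies.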