Chapter 2. First Steps in Data Analysis

Let's take the first steps towards data analysis now. Apache Spark has a prebuilt module called Spark SQL, which is used for structured data processing. Using this module, we can execute SQL queries on our underlying data. Spark lets you read data from various data sources, whether text, CSV, or Parquet files on HDFS, or from Hive or HBase tables. For simple data analysis tasks, whether you are exploring your datasets initially or trying to analyze and produce a report with simple statistics for your end users, this module is tremendously useful.
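For instance, assuming a SparkSession named spark is already available (we will see how to build one later in this chapter), switching between data sources is just a matter of changing the reader call; the paths below are placeholders, not files from the book:

// Requires org.apache.spark.sql.Dataset and org.apache.spark.sql.Row
Dataset<Row> fromJson = spark.read().json("hdfs:///data/cars.json");
Dataset<Row> fromCsv = spark.read().option("header", "true").csv("hdfs:///data/cars.csv");
Dataset<Row> fromParquet = spark.read().parquet("hdfs:///data/transactions.parquet");
// Hive tables can also be read when Hive support is enabled on the session:
// Dataset<Row> fromHive = spark.table("sales");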

In this chapter, we will work on two datasets: the first is a simple dataset, and the second is a more complex, real-world dataset from an e-commerce store.

In this chapter, we will cover the following topics:

  • Basic statistical analytic approaches using Spark SQL

  • Building association rules using the Apriori algorithm

  • Advantages and disadvantages...

Datasets


Before we get our hands dirty in the world of complex analytics, we will take small steps and learn some basic statistical analysis first. This will help us become familiar with the approach that we will also use on big data for other solutions. For our initial analysis, we will take a simple cars JSON dataset that has details about a few cars from different countries. We will analyze it using Spark SQL and see how easy it is to query and analyze datasets this way. Spark SQL is handy for basic analytics and is well suited to big data: it can run on massive datasets, and the data can reside in HDFS.

To start with a simple case study, we are using a cars dataset. This dataset can be obtained from http://www.carqueryapi.com/, via the link http://www.carqueryapi.com/api/0.3/?callback=?&cmd=getMakes. The dataset contains data about cars in different countries and is in JSON format. It is not a very big dataset from the perspective...

Data cleaning and munging


Most of the time a developer spends on a data analysis task goes into data cleaning or getting the data into a particular format. Whether you are analyzing log file data or receiving files from some other system, there will almost always be some data cleaning involved. Data cleaning can take many forms, whether it involves discarding a certain kind of data or converting bad data into a different format. Also note that most machine learning algorithms operate on mathematical (numeric) data, but practical datasets won't always contain such data. Converting text data to a mathematical form is another important task that many developers need to perform themselves before they can apply data analysis to the data.
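As a minimal sketch of that last point, Spark ML's StringIndexer can map a textual column to numeric indexes. The file path and column names here are illustrative assumptions, and an existing SparkSession named spark is assumed:

// Requires org.apache.spark.ml.feature.StringIndexer and an existing SparkSession named spark
Dataset<Row> cars = spark.read().json("cars.json");

// Map each distinct value of the textual 'country' column to a numeric index
StringIndexer indexer = new StringIndexer()
    .setInputCol("country")
    .setOutputCol("countryIndex");

Dataset<Row> indexed = indexer.fit(cars).transform(cars);
indexed.show();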

If there are problems in the data that we need to resolve before we use it, then this approach of fixing the data is called data munging. One of the common data munging...

Basic analysis of data with Spark SQL


Spark SQL is a Spark module for structured data processing. Almost all developers know SQL. Spark SQL provides an SQL interface to your Spark data (RDDs). Using Spark SQL, you can run SQL or SQL-like queries on your big dataset and fetch the results in objects called DataFrames.

A DataFrame is like a relational database table: it has columns, and we can apply functions such as groupBy to those columns. It is very easy to learn and use.
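As a quick preview, here is a minimal sketch of both styles of querying, assuming an existing SparkSession named spark and the cars JSON file mentioned earlier (file and column names are illustrative):

// Requires org.apache.spark.sql.Dataset and org.apache.spark.sql.Row
Dataset<Row> cars = spark.read().json("cars.json");

// DataFrame-style aggregation: number of cars per country
cars.groupBy("country").count().show();

// Equivalent SQL-style query on a temporary view
cars.createOrReplaceTempView("cars");
spark.sql("SELECT country, COUNT(*) AS total FROM cars GROUP BY country").show();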

In the next section, we will cover a few examples on how we can use the dataframe and run regular analysis tasks.

Building SparkConf and context

This is just boilerplate code and is the entry point for our Spark SQL code. Every Spark program starts with this boilerplate code for initialization. In this code, we build the Spark configuration, apply the configuration parameters (such as the application name and master location), and build the SparkSession object. This SparkSession...
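The full listing is truncated in this extract, but a minimal sketch of the boilerplate just described might look like the following; the application name and master value are placeholders:

import org.apache.spark.SparkConf;
import org.apache.spark.sql.SparkSession;

// Build the Spark configuration with an application name and master location
SparkConf conf = new SparkConf()
    .setAppName("CarsAnalysis")
    .setMaster("local[*]");

// Build the SparkSession, the entry point for Spark SQL
SparkSession spark = SparkSession.builder()
    .config(conf)
    .getOrCreate();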

Implementation of the Apriori algorithm in Apache Spark


We have gone through the preceding algorithm. Now we will try to write the entire algorithm in Spark. Spark does not have a default implementation of the Apriori algorithm, so we will have to write our own implementation, as shown next (refer to the comments in the code as well).

First, we will have the regular boilerplate code to initiate the Spark configuration and context:

// Build the Spark configuration with the application name and master location
SparkConf conf = new SparkConf().setAppName(appName).setMaster(master);
// Create the Java Spark context from the configuration
JavaSparkContext sc = new JavaSparkContext(conf);

Now, we will load the dataset file using the SparkContext and store the result in a JavaRDD instance. We will create an instance of the AprioriUtil class. This class contains the methods for calculating the support and confidence values. Finally, we will store the total number of transactions (in the transactionCount variable) so that this value can be broadcast and reused on different DataNodes when needed:

JavaRDD<String> rddX =...
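The listing above is truncated in this extract. A minimal sketch of the loading and broadcast step just described, with an assumed file path and variable names (and without reproducing the author's AprioriUtil class), could look like this:

// Requires org.apache.spark.broadcast.Broadcast
// Load the transactions file; each line is one transaction (path is an assumption)
JavaRDD<String> rddX = sc.textFile("transactions.txt");

// Total number of transactions, broadcast so it can be reused on the worker nodes
long transactionCount = rddX.count();
Broadcast<Long> broadcastCount = sc.broadcast(transactionCount);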

Summary


We started this chapter on a simple note by going over some very basic yet powerful analytics on simple datasets. While doing so, we also learned about a very powerful module of Apache Spark called Spark SQL. Using this module, Java developers can use their regular SQL skills to analyze their big data datasets.

After exploring simple analytics using Spark SQL, we went over two complex analytics algorithms: Apriori and FP-Growth. We learned how we can use these algorithms to build association rules from a transaction dataset.

In the next chapter, we will learn the basics of machine learning and get an introduction to the machine learning approach for dealing with a predictive analytics problem.
