You're reading from Big Data Analysis with Python
Published Apr 2019 by Packt · 1st Edition · Reading level: Intermediate · ISBN-13: 9781789955286
Authors (3):

Ivan Marin

Ivan Marin is a systems architect and data scientist working at Daitan Group, a Campinas-based software company. He designs big data systems for large volumes of data and implements machine learning pipelines end to end using Python and Spark. He is also an active organizer of data science, machine learning, and Python communities in São Paulo, and has taught Python for data science courses at university level.

Ankit Shukla

Ankit Shukla is a data scientist working with World Wide Technology, a leading US-based technology solution provider, where he develops and deploys machine learning and artificial intelligence solutions to solve business problems and create actual dollar value for clients. He is also part of the company's R&D initiative, which is responsible for producing intellectual property, building capabilities in new areas, and publishing cutting-edge research in corporate white papers. Besides tinkering with AI/ML models, he likes to read and is a big-time foodie.

Sarang VK

Sarang VK is a lead data scientist at StraitsBridge Advisors, where his responsibilities include requirement gathering, solutioning, development, and productization of scalable machine learning, artificial intelligence, and analytical solutions using open source technologies. Alongside this, he supports pre-sales and competency.


Chapter 8. Creating a Full Analysis Report

Learning Objectives

By the end of this chapter, you will be able to:

  • Read data from different sources in Spark

  • Perform SQL operations on a Spark DataFrame

  • Generate statistical measurements in a consistent way

  • Generate graphs and plots using Plotly

  • Compile an analysis report incorporating all the previous steps and data

Note

In this chapter, we will read data using Spark, aggregate it, and extract statistical measurements. We will also use Pandas to generate graphs from the aggregated data and form an analysis report.

Introduction


If you have been part of the data industry for a while, you will understand the challenge of working with different data sources, analyzing them, and presenting them in consumable business reports. When using Spark with Python, you may have to read data from various sources, such as flat files, REST APIs in JSON format, and so on.

In the real world, getting data in the right format is always a challenge, and several SQL operations are often required to gather it. It is therefore essential for any data scientist to know how to handle different file formats and data sources, carry out basic SQL operations, and present the results in a consumable format.

This chapter provides common methods for reading different types of data, carrying out SQL operations on it, performing descriptive statistical analysis, and generating a full analysis report. We will start by understanding how to read different kinds of data into PySpark and then generate various analyses and plots from it.

Reading Data in Spark from Different Data Sources


One of the advantages of Spark is the ability to read data from various data sources. However, this support is not entirely consistent and changes between Spark versions. This section explains how to read CSV and JSON files.

Exercise 47: Reading Data from a CSV File Using the PySpark Object

To read CSV data, you use the spark.read.csv("<filename>.csv") method. Here, we read the bank data that was used in the earlier chapters.

Note

The sep parameter is used here.

We have to ensure that the right separator is passed, based on how the data is delimited in the source file.

Now let's perform the following steps to read the data from the bank.csv file:

  1. First, let's import the required packages into the Jupyter notebook:

    import os
    import pandas as pd
    import numpy as np
    import collections
    from sklearn.base import TransformerMixin
    import random
    import pandas_profiling
  2. Next, import all the required libraries, as illustrated...
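
As an illustration of the read call described above, the following is a minimal sketch, assuming a local bank.csv file delimited with semicolons (as in the bank marketing dataset used in earlier chapters); the app name and the records.json file are hypothetical:

    from pyspark.sql import SparkSession

    # Create (or reuse) a Spark session
    spark = SparkSession.builder.appName("AnalysisReport").getOrCreate()

    # The file name and the ';' separator are assumptions based on the
    # bank marketing dataset used in earlier chapters
    df = spark.read.csv("bank.csv", header=True, sep=";", inferSchema=True)
    df.show(5)

    # JSON sources are read analogously; records.json is a hypothetical file
    df_json = spark.read.json("records.json")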

SQL Operations on a Spark DataFrame


A DataFrame in Spark is a distributed collection of rows and columns. It is analogous to a table in a relational database or an Excel sheet. A Spark RDD/DataFrame is efficient at processing large amounts of data and can handle petabytes of data, whether structured or unstructured.

Spark optimizes queries on data by organizing the DataFrame into columns, which helps Spark understand the schema. Some of the most frequently used SQL operations include subsetting the data, merging the data, filtering, selecting specific columns, dropping columns, dropping all null values, and adding new columns, among others.

Exercise 48: Reading Data in PySpark and Carrying Out SQL Operations

For summary statistics of the data, we can use the spark_df.describe().show() method, which will provide information on the count, mean, standard deviation, min, and max for all the columns in the DataFrame.

For example, in the dataset that we have considered—the bank marketing...
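
As a sketch of the operations listed above, assuming the bank DataFrame df and the spark session from Exercise 47 (the age, job, balance, and day column names are assumptions based on the bank marketing dataset):

    from pyspark.sql.functions import col

    # Subset specific columns and filter rows
    df.select("age", "job", "balance").filter(col("balance") > 1000).show(5)

    # Drop a column, drop rows with nulls, and add a derived column
    df_clean = (df.drop("day")
                  .dropna()
                  .withColumn("balance_k", col("balance") / 1000))

    # Register the DataFrame as a temporary view to run raw SQL against it
    df.createOrReplaceTempView("bank")
    spark.sql(
        "SELECT job, AVG(balance) AS avg_balance FROM bank GROUP BY job"
    ).show()

    # Summary statistics: count, mean, stddev, min, and max per column
    df.describe().show()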

Generating Statistical Measurements


Python is a general-purpose language with strong statistical modules. A lot of statistical analysis can be carried out in Python, such as descriptive analysis: identifying the distribution of numeric variables, generating a correlation matrix, and computing the frequency of levels in categorical variables along with their mode. The following is an example of correlation:

Figure 8.12: Segment numeric data and correlation matrix output
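
One way to produce such a matrix is sketched below, assuming the Spark DataFrame df from the earlier exercises; numeric columns are detected from the schema, and the correlation is computed with Pandas after collecting the selection to the driver:

    from pyspark.sql.types import NumericType

    # Keep only the numeric columns, based on the DataFrame's schema
    numeric_cols = [f.name for f in df.schema.fields
                    if isinstance(f.dataType, NumericType)]

    # Collect to the driver and compute a Pearson correlation matrix;
    # suitable for aggregated or sampled data, not very large DataFrames
    corr_matrix = df.select(numeric_cols).toPandas().corr()
    print(corr_matrix)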

Identifying the distribution of the data and normalizing it is important for parametric models such as linear regression and support vector machines, which assume the data to be normally distributed. Data that departs strongly from normality can bias these models. In the following example, we will identify the distribution of the data through a normality test and then apply a transformation using the Yeo-Johnson method to normalize it:

Figure 8.13: Identifying the distribution of the data...
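
The following is a minimal sketch of those two steps, using scipy's Shapiro-Wilk test and scikit-learn's PowerTransformer; the exponential sample is synthetic, standing in for a skewed numeric column such as balance:

    import numpy as np
    from scipy import stats
    from sklearn.preprocessing import PowerTransformer

    # Synthetic right-skewed data standing in for a real numeric column
    rng = np.random.default_rng(42)
    x = rng.exponential(scale=2.0, size=1000).reshape(-1, 1)

    # Normality test: a small p-value means we reject normality
    _, p_before = stats.shapiro(x.ravel())
    print(f"p-value before transform: {p_before:.4f}")

    # Yeo-Johnson transform (handles zero and negative values,
    # unlike Box-Cox)
    pt = PowerTransformer(method="yeo-johnson")
    x_t = pt.fit_transform(x)

    _, p_after = stats.shapiro(x_t.ravel())
    print(f"p-value after transform:  {p_after:.4f}")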

Summary


In this chapter, we learned how to import data from various sources into a Spark environment as a Spark DataFrame. In addition, we learned how to carry out various SQL operations on that DataFrame, and how to generate statistical measures such as correlation analysis, identifying the distribution of data, building a feature importance model, and so on. We also looked at how to generate effective graphs using Plotly offline, producing the various plots that make up an analysis report.

This book has hopefully offered a stimulating journey through big data. We started with Python and covered several libraries that are part of the Python data science stack: NumPy and Pandas. We also looked at how we can use Jupyter notebooks. We then saw how to create informative data visualizations, with some guiding principles on what makes a good graph, and used Matplotlib and Seaborn to materialize the figures. Then we made a start with the big data tools - Hadoop and Spark, thereby...

