Powerful Exploratory Data Analysis with MLlib

In this chapter, we will explore Spark's capability to perform regression tasks with models such as linear regression and support-vector machines (SVMs). We will learn how to compute summary statistics with MLlib, and how to discover correlations in datasets using the Pearson and Spearman correlations. We will also test our hypotheses on large datasets.

We will cover the following topics:

  • Computing summary statistics with MLlib
  • Using the Pearson and Spearman methods to discover correlations
  • Testing our hypotheses on large datasets

Computing summary statistics with MLlib

In this section, we will be answering the following questions:

  • What are summary statistics?
  • How do we use MLlib to create summary statistics?

MLlib is the machine learning library that ships with Spark. Recent developments allow us to pipe Spark's data-processing capabilities directly into its native machine learning capabilities. This means that we can use Spark not only to ingest, collect, and transform data, but also to analyze it and build machine learning models on the PySpark platform, giving us a more seamlessly deployable solution.

Summary statistics are a very simple concept. We are familiar with the average, standard deviation, and variance of a particular variable; these are all summary statistics of a dataset. The reason why it's called a summary statistic is that...
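As a minimal sketch of how this looks in practice (the vectors below are illustrative values, not the book's data), MLlib's Statistics.colStats computes column-wise summary statistics over an RDD of vectors:

```python
from pyspark import SparkContext
from pyspark.mllib.linalg import Vectors
from pyspark.mllib.stat import Statistics

sc = SparkContext.getOrCreate()

# An RDD of feature vectors; each Vector is one observation (illustrative data).
observations = sc.parallelize([
    Vectors.dense(1.0, 10.0, 100.0),
    Vectors.dense(2.0, 20.0, 200.0),
    Vectors.dense(3.0, 30.0, 300.0),
])

# colStats returns a MultivariateStatisticalSummary, computed column-wise.
summary = Statistics.colStats(observations)
print(summary.mean())         # per-column means
print(summary.variance())     # per-column variances
print(summary.numNonzeros())  # per-column non-zero counts
```

Because the aggregation is distributed across the cluster, this scales to datasets far larger than a single machine's memory.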

Using the Pearson and Spearman methods to discover correlations

In this section, we will look at two different ways of computing correlations in our datasets: the Pearson and Spearman correlations.

The Pearson correlation

The Pearson correlation coefficient measures how two variables vary together (their covariance), normalized by how much each varies on its own (the product of their standard deviations). It is probably the most popular way to compute a correlation for a dataset.
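A minimal sketch of computing a Pearson correlation with MLlib's Statistics.corr (the two series below are made-up illustrative values):

```python
from pyspark import SparkContext
from pyspark.mllib.stat import Statistics

sc = SparkContext.getOrCreate()

# Two RDDs of doubles with the same cardinality (illustrative values).
seriesX = sc.parallelize([1.0, 2.0, 3.0, 4.0, 5.0])
seriesY = sc.parallelize([2.0, 4.0, 6.0, 8.0, 11.0])

# method="pearson" is also the default for Statistics.corr.
print(Statistics.corr(seriesX, seriesY, method="pearson"))
```

Passing a single RDD of Vectors instead of two series makes Statistics.corr return the full correlation matrix between all columns.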

The Spearman correlation

Spearman's rank correlation is not the default correlation...
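As a hedged sketch, the same Statistics.corr call computes Spearman's rank correlation when we pass method="spearman"; because it ranks the values first, it captures monotonic rather than strictly linear relationships (illustrative values again):

```python
from pyspark import SparkContext
from pyspark.mllib.stat import Statistics

sc = SparkContext.getOrCreate()

seriesX = sc.parallelize([1.0, 2.0, 3.0, 4.0, 5.0])
seriesY = sc.parallelize([1.0, 4.0, 9.0, 16.0, 25.0])  # monotonic but non-linear

# The ranks of the two series agree exactly, so Spearman's rho is 1.0,
# even though the Pearson coefficient would be slightly below 1.
print(Statistics.corr(seriesX, seriesY, method="spearman"))
```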

Testing our hypotheses on large datasets

In this section, we will look at hypothesis testing and learn how to perform it using PySpark. We will focus on one particular type of hypothesis test implemented in PySpark: Pearson's chi-square test. A chi-square test estimates how likely it is that the differences between two datasets are there purely by chance.

For example, if we had a retail store with hardly any footfall and suddenly footfall increases, how likely is it that this is random? In other words, is there a statistically significant difference in the level of visitors we are getting now compared to before? The reason this is called the chi-square test is that the test statistic follows the chi-square distribution; you can refer to the online documentation to learn more about chi-square distributions.
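A minimal sketch of the footfall example as a chi-square goodness-of-fit test with MLlib's Statistics.chiSqTest; the daily visitor counts below are invented for illustration:

```python
from pyspark import SparkContext
from pyspark.mllib.linalg import Vectors
from pyspark.mllib.stat import Statistics

# chiSqTest runs on the driver, but an active SparkContext is required.
sc = SparkContext.getOrCreate()

# Observed visitors per weekday after the change (illustrative counts).
observed = Vectors.dense(120.0, 95.0, 110.0, 105.0, 130.0)
# Expected visitors per weekday, based on the period before the change.
expected = Vectors.dense(100.0, 100.0, 100.0, 100.0, 100.0)

result = Statistics.chiSqTest(observed, expected)
print(result.statistic)         # the chi-square test statistic
print(result.pValue)            # probability of a difference at least this
                                # large under the null hypothesis
print(result.degreesOfFreedom)
print(result.nullHypothesis)
```

A small p-value (conventionally below 0.05) suggests that the change in footfall is unlikely to be due to chance alone.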

There are...

Summary

In this chapter, we learned what summary statistics are and how to compute them with MLlib. We also learned about the Pearson and Spearman correlations, and how we can discover these correlations in our datasets using PySpark. Finally, we learned one particular way of performing hypothesis testing, the Pearson chi-square test, and used PySpark's hypothesis-testing functions to test our hypotheses on large datasets.

In the next chapter, we're going to look at putting structure on our big data with Spark SQL.
