You're reading from Essential PySpark for Scalable Data Analytics

Product type: Book
Published in: Oct 2021
Reading level: Beginner
Publisher: Packt
ISBN-13: 9781800568877
Edition: 1st
Author: Sreeram Nudurupati

Sreeram Nudurupati is a data analytics professional with years of experience in designing and optimizing data analytics pipelines at scale. He has a history of helping enterprises, as well as digital natives, build optimized analytics pipelines by applying knowledge of the organization, its infrastructure environment, and current technologies.

Chapter 5: Scalable Machine Learning with PySpark

In the previous chapters, we established that modern-day data is growing at a rapid rate, with a volume, velocity, and variety that traditional systems cannot keep pace with. We therefore learned about distributed computing to keep up with ever-increasing data processing needs, and saw practical examples of ingesting, cleansing, and integrating data to bring it to a level conducive to business analytics, using the power and ease of use of Apache Spark's unified data analytics platform. This chapter, and the chapters that follow, explore the data science and machine learning (ML) aspects of data analytics.

Today, the computer science disciplines of AI and ML have made a massive comeback and are pervasive. Businesses everywhere need to leverage these techniques to remain competitive, expand their customer base, introduce novel product lines, and stay profitable. However, traditional ML and data science...

Technical requirements

In this chapter, we will be using the Databricks Community Edition to run our code: https://community.cloud.databricks.com.

ML overview

Machine learning is a field of AI and computer science that leverages statistical models and algorithms to learn the patterns inherent in data without being explicitly programmed. ML consists of algorithms that automatically convert the patterns within data into models. Whereas a purely mathematical or rule-based model performs the same task the same way every time, an ML model learns from data, and its performance can be greatly improved by exposing it to vast amounts of data.

A typical ML process involves applying an ML algorithm to a known dataset, called the training dataset, to generate a new ML model. This process is generally termed model training or model fitting. Some ML models are trained on datasets containing the known, correct answer that we later intend to predict in unseen data. This known, correct value in the training dataset is termed the label.

Once the model is trained, the resultant model is applied to new data in order to predict the...

Scaling out machine learning

In the previous sections, we learned that ML is a set of algorithms that, instead of being explicitly programmed, automatically learn patterns hidden within data. Thus, an ML algorithm exposed to a larger dataset can potentially produce a better-performing model. However, traditional ML algorithms were designed to be trained on a limited data sample, on a single machine. This means that existing ML libraries are not inherently scalable. One workaround is to down-sample a larger dataset so that it fits in the memory of a single machine, but this also means the resulting models are potentially less accurate than they could be.

Also, several ML models are typically built on the same dataset, simply varying the parameters supplied to the algorithm. The process of systematically exploring these parameter combinations and selecting the best-performing model for production is called hyperparameter tuning. Building several models using a single machine,...

Data wrangling with Apache Spark and MLlib

Data wrangling, also referred to within the data science community as data munging, or simply data preparation, is the first step in a typical data science process. It involves sampling, exploring, selecting, manipulating, and cleansing data to make it ready for ML applications. Data wrangling typically takes 60 to 80 percent of the whole data science process and is the most crucial step for guaranteeing the accuracy of the ML model being built. The following sections explore the data wrangling process using Apache Spark and MLlib.

Data preprocessing

Data preprocessing is the first step in the data wrangling process and involves gathering, exploring, and selecting the data elements useful for solving the problem at hand. The data science process typically follows the data engineering process, and the assumption here is that clean and integrated data is already available in the data lake. However, data that is clean enough for...

Summary

In this chapter, you learned about the concept of ML and the different types of ML algorithms. You also learned about some of the real-world applications of ML that help businesses minimize losses, maximize revenues, and accelerate their time to market. You were introduced to the necessity of scalable ML and two different techniques for scaling out ML algorithms. Apache Spark's native ML library, MLlib, was introduced, along with its major components.

Finally, you learned a few techniques to perform data wrangling to clean, manipulate, and transform data to make it more suitable for the data science process. In the following chapter, you will learn about the second phase of the ML process, called feature extraction and feature engineering, where you will learn to apply various scalable algorithms to transform individual data fields to make them even more suitable for data science applications.
