The Data Wrangling Workshop - Second Edition

Product type: Book
Published: Jul 2020
Publisher: Packt
ISBN-13: 9781839215001
Pages: 576
Edition: 2nd Edition
Authors (3): Brian Lipp, Shubhadeep Roychowdhury, Dr. Tirthajyoti Sarkar

Table of Contents (11 Chapters)

Preface
1. Introduction to Data Wrangling with Python
2. Advanced Operations on Built-In Data Structures
3. Introduction to NumPy, Pandas, and Matplotlib
4. A Deep Dive into Data Wrangling with Python
5. Getting Comfortable with Different Kinds of Data Sources
6. Learning the Hidden Secrets of Data Wrangling
7. Advanced Web Scraping and Data Gathering
8. RDBMS and SQL
9. Applications in Business Use Cases and Conclusion of the Course
Appendix

6. Learning the Hidden Secrets of Data Wrangling

Overview

In this chapter, you will learn about data problems that arise in business use cases and how to resolve them. It will give you the skills to clean and handle real-life messy data. By the end of this chapter, you will be able to prepare data for analysis by formatting it as required by downstream systems, and to identify and remove outliers from data.

Introduction

In this chapter, we will learn the secret behind creating a successful data wrangling pipeline. In the previous chapters, we were introduced to basic and advanced data structures and other building blocks of data wrangling, such as pandas and NumPy. In this chapter, we will look at the data handling aspect of data wrangling.

Imagine that you have a database of patients who have heart disease, and, like any survey data, the records are either missing, incorrect, or contain outliers. Outliers are abnormal values that tend to lie far from the central tendency, so including them in your fancy machine learning model may introduce a terrible bias that we need to avoid. Often, these problems can make a huge difference in terms of money, man-hours, and other organizational resources. It is undeniable that someone with the skills to solve these problems will prove to be an asset to an organization. In this chapter, we'll talk about a few advanced techniques that we...

Advanced List Comprehension and the zip Function

In this section, we will dive deep into the heart of list comprehension. We have already seen basic forms of it, ranging from something as simple as a = [i for i in range(0, 30)] to something a bit more complex that involves one conditional statement. However, as we already mentioned, list comprehension is a very powerful tool and, in this section, we will explore this amazing tool further. We will investigate a close relative of list comprehension called generators, which also provide a way to create lists, and we will work with zip and its related functions and methods. By the end of this section, you will be confident in handling complicated logical problems.
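As a quick refresher before the exercises, here is a minimal sketch (the names and values are our own, not taken from the book) showing a conditional list comprehension, its generator-expression cousin, and zip pairing up two lists:

# Conditional list comprehension: squares of the even numbers from 0 to 29
even_squares = [i * i for i in range(0, 30) if i % 2 == 0]

# Generator expression: same logic, but evaluated lazily, one item at a time
even_squares_gen = (i * i for i in range(0, 30) if i % 2 == 0)
print(next(even_squares_gen))  # 0
print(next(even_squares_gen))  # 4

# zip pairs up corresponding elements from two (or more) iterables
names = ["alice", "bob", "carol"]
scores = [80, 92, 75]
print(list(zip(names, scores)))  # [('alice', 80), ('bob', 92), ('carol', 75)]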

Introduction to Generator Expressions

In the previous chapter, while discussing advanced data structures, we witnessed functions such as repeat. We said that they represent a special type of function known as iterators. We also showed you how the lazy evaluation of an iterator...
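The preview cuts off here, but as a small illustrative sketch of the idea just mentioned (this is our own code, not the book's exercise), repeat from itertools behaves lazily as an iterator; values are only produced when we ask for them:

from itertools import islice, repeat

# repeat(42) is an infinite iterator; nothing is materialized until we consume it
forever_42 = repeat(42)

# islice takes just the first few items without storing or exhausting the stream
first_three = list(islice(forever_42, 3))
print(first_three)  # [42, 42, 42]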

Data Formatting

In this section, we will format a given dataset. The main motivations behind formatting data properly are as follows:

  • It gives all downstream systems a single, pre-agreed format for each data point, which avoids surprises and removes the risk of breaking those systems.
  • It lets us produce human-readable reports from lower-level data that is, most of the time, created for machine consumption.
  • It helps us find errors in data.

There are a few ways to perform data formatting in Python. We will begin with the % operator (the same symbol Python uses for modulus, applied to strings for formatting).

The % operator

Python lets us use the % operator on strings to apply basic formatting to data. To demonstrate this, we will load the data by reading the combined_data.csv file, and then we will apply some basic formatting to it.

Note

The combined_data.csv file contains some sample medical data for four individuals. The file can be found here: https://packt.live/310179U.
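As a quick illustration before working with the file, here is a minimal sketch of %-based string formatting applied to made-up values (the field names below are hypothetical and are not taken from combined_data.csv):

# %-style formatting substitutes values into placeholders in a template string
patient = {"name": "A. Kumar", "age": 54, "cholesterol": 212.4567}

report_line = "Name: %s | Age: %d | Cholesterol: %.2f mg/dL" % (
    patient["name"],
    patient["age"],
    patient["cholesterol"],
)
print(report_line)  # Name: A. Kumar | Age: 54 | Cholesterol: 212.46 mg/dL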

We can load the data from the...

Identifying and Cleaning Outliers

When confronted with real-world data, we often see a specific thing in a set of records: there are some data points that do not fit with the rest of the records. They have some values that are too big, too small, or that are completely missing. These kinds of records are called outliers.

Statistically, there is a proper definition of what an outlier means, and you often need deep domain expertise to understand when to call a particular record an outlier. However, in this exercise, we will look into some basic techniques that are commonplace for flagging and filtering outliers in real-world data in day-to-day work.
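As one common, simple approach (not necessarily the exact method the book uses), numerical outliers can be flagged with the interquartile-range rule, sketched below with pandas on a made-up series:

import pandas as pd

# A small made-up series with one value that clearly does not belong
values = pd.Series([12, 14, 13, 15, 14, 13, 250, 12, 15, 14])

# IQR rule: anything outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] is flagged as an outlier
q1, q3 = values.quantile(0.25), values.quantile(0.75)
iqr = q3 - q1
mask = (values < q1 - 1.5 * iqr) | (values > q3 + 1.5 * iqr)

print(values[mask])   # the flagged outlier(s): 250
print(values[~mask])  # the series with outliers removed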

Exercise 6.07: Outliers in Numerical Data

In this exercise, we will construct a notion of an outlier based on numerical data. Imagine a cosine curve. If you remember the math for this from high school, a cosine curve is a very smooth curve that stays within the limits of [-1, 1]. We will plot this cosine curve using the plot...
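The exercise text is truncated in this preview; the following sketch is only our own illustration of the idea it describes, a smooth cosine curve with a few artificially injected points that fall outside the expected range:

import numpy as np
import matplotlib.pyplot as plt

# A smooth cosine curve: every genuine value lies within [-1, 1]
x = np.linspace(0, 4 * np.pi, 100)
y = np.cos(x)

# Inject a few artificial outliers that fall well outside that range
y_noisy = y.copy()
y_noisy[[10, 45, 80]] = [5, -4, 6]

plt.plot(x, y_noisy, "b-", label="data with outliers")
outlier_mask = np.abs(y_noisy) > 1
plt.plot(x[outlier_mask], y_noisy[outlier_mask], "ro", label="flagged points")
plt.legend()
plt.show()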

Levenshtein Distance

Levenshtein distance is an advanced concept. We can think of it as the minimum number of single-character edits that are needed to convert one string into another. When two strings are identical, the distance between them is 0; the bigger the difference, the higher the number. We can choose a distance threshold below which we will consider two strings to be the same. Thus, we can not only rectify human error but also cast a safety net so that we do not blindly accept every candidate. Calculating Levenshtein distance is an involved process, and we are not going to implement it from scratch here. Thankfully, like a lot of other things, there is a library available to do this for us. It is called python-Levenshtein.
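Here is a minimal sketch of the library in use (the threshold value is an arbitrary choice for illustration, not one prescribed by the book):

import Levenshtein  # provided by the python-Levenshtein package

# Distance 0 means the strings are identical; larger values mean more single-character edits
print(Levenshtein.distance("wrangling", "wrangling"))  # 0
print(Levenshtein.distance("wrangling", "wranglig"))   # 1 (one deletion)
print(Levenshtein.distance("wrangling", "mangling"))   # 2

# Treat two strings as "the same" if their distance falls under a chosen threshold
def is_probably_same(a, b, threshold=2):
    return Levenshtein.distance(a, b) <= threshold

print(is_probably_same("combined_data", "combined_dta"))  # True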

Additional Software Required for This Section

The code for this exercise depends on two additional libraries: SciPy and python-Levenshtein. To install the libraries, type the following command in the running...

Summary

In this chapter, we learned about interesting ways to deal with list data by using generator expressions. They are easy and elegant and, once mastered, they give us a powerful technique that we can use repeatedly to simplify several common data wrangling tasks. We also examined different ways to format data. Formatting data is not only useful for preparing beautiful reports; it is often very important for guaranteeing data integrity in downstream systems.

We ended this chapter by checking out some methods to identify and remove outliers. This is important because we want our data to be properly prepared and ready for all our fancy downstream analysis jobs. We also observed how important it is to take the time, and use domain expertise, to set up rules for identifying outliers, as doing this incorrectly can do more harm than good.

In the next chapter, we will cover how to read web pages, XML files, and APIs.
