Chapter 9
Project 3.1: Data Cleaning Base Application

Data validation, cleaning, converting, and standardizing are the steps required to transform raw data acquired from source applications into something that can be used for analytical purposes. Since we started with a small data set of very clean data, we may need to improvise a bit to create some "dirty" raw data. A good alternative is to search for more complicated raw data.

This chapter will guide you through the design of a data cleaning application, separate from the raw data acquisition. Many details of cleaning, converting, and standardizing will be left for subsequent projects. This initial project creates a foundation that will be extended by adding features. The idea is to prepare for the goal of a complete data pipeline that starts with acquisition and passes the data through a separate cleaning stage. We want to exploit the Linux principle of having applications connected by a shared buffer, often referred...
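To make the idea of connected stages concrete, here is a minimal sketch that joins two processes with an OS pipe using Python's subprocess module. The module names acquire and clean, and their command-line options, are hypothetical placeholders for the acquisition and cleaning applications, not names defined by this book.

```python
import subprocess
import sys

# A minimal sketch, assuming the acquisition stage writes ND JSON to stdout
# and the cleaning stage reads it from stdin; the OS pipe is the shared buffer.
# "acquire" and "clean" are hypothetical module names.
acquire = subprocess.Popen(
    [sys.executable, "-m", "acquire", "--source", "data/raw"],
    stdout=subprocess.PIPE,
)
clean = subprocess.Popen(
    [sys.executable, "-m", "clean", "--output", "data/clean.ndjson"],
    stdin=acquire.stdout,
)
acquire.stdout.close()  # Let acquire see a closed pipe if clean exits early.
clean.wait()
acquire.wait()
```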

9.1 Description

We need to build a data validating, cleaning, and standardizing application. A data inspection notebook is a handy starting point for this design work. The goal is a fully automated application that reflects the lessons learned from inspecting the data.

A data preparation pipeline has the following conceptual tasks (a sketch of these stages follows the list):

  • Validate the acquired source text to be sure it’s usable and to mark invalid data for remediation.

  • Clean any invalid raw data where necessary; this expands the available data in those cases where sensible cleaning can be defined.

  • Convert the validated and cleaned source data from text (or bytes) to usable Python objects.

  • Where necessary, standardize the code or ranges of source data. The requirements here vary with the problem domain.
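The following is a minimal sketch of these four stages as separate functions. The Sample class and the field names x and y are hypothetical placeholders, not the book's actual model; the point is the separation of concerns, not the specific rules.

```python
from dataclasses import dataclass

# Hypothetical target class for the converted data.
@dataclass
class Sample:
    x: float
    y: float

def validate(raw: dict[str, str]) -> bool:
    """Mark a raw sample as usable or invalid for later remediation."""
    return all(field in raw and raw[field].strip() for field in ("x", "y"))

def clean(raw: dict[str, str]) -> dict[str, str]:
    """Repair invalid text where a sensible fix can be defined."""
    return {name: value.strip() for name, value in raw.items()}

def convert(raw: dict[str, str]) -> Sample:
    """Build usable Python objects from the cleaned text."""
    return Sample(x=float(raw["x"]), y=float(raw["y"]))

def standardize(sample: Sample) -> Sample:
    """Adjust codes or ranges; the rules depend on the problem domain."""
    return sample
```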

The goal is to create clean, standardized data for subsequent analysis. Surprises occur all the time. There are several sources:

  • Technical problems with file formats of the upstream software. The intent of the acquisition...

9.2 Approach

We’ll take some guidance from the C4 model ( https://c4model.com) when looking at our approach.

  • Context: For this project, the context diagram has expanded to three use cases: acquire, inspect, and clean.

  • Containers: There’s one container for the various applications: the user’s personal computer.

  • Components: There are two significantly different collections of software components: the acquisition program and the cleaning program.

  • Code: We’ll touch on this to provide some suggested directions.

A context diagram for this application is shown in Figure 9.1.

Figure 9.1: Context Diagram

A component diagram for the conversion application isn't going to be as complicated as the component diagrams for acquisition applications. One reason for this is that there are no choices for reading, extracting, or downloading raw data files. The source files are the ND JSON files created by the acquisition application.
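Because the input format is fixed, the reader component can be a short generator function. A minimal sketch, assuming each line of the acquisition output is a single JSON object representing one raw sample:

```python
import json
from collections.abc import Iterator
from pathlib import Path

def read_ndjson(source: Path) -> Iterator[dict[str, str]]:
    """Yield one raw sample per non-empty line of an ND JSON file."""
    with source.open() as source_file:
        for line in source_file:
            if line.strip():
                yield json.loads(line)
```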

The second reason the conversion...

9.3 Deliverables

This project has the following deliverables:

  • Documentation in the docs folder.

  • Acceptance tests in the tests/features and tests/steps folders.

  • Unit tests for the application modules in the tests folder.

  • Application to clean some acquired data and apply simple conversions to a few fields. Later projects will add more complex validation rules.

We’ll look at a few of these deliverables in a little more detail.

When starting a new kind of application, it often makes sense to start with acceptance tests. Later, when adding features, the new acceptance tests may be less important than new unit tests for the features. We’ll start by looking at a new scenario for this new application.

9.3.1 Acceptance tests

As we noted in Chapter 4, Data Acquisition Features: Web APIs and Scraping, we can provide a large block of text as part of a Gherkin scenario. This can be the contents of an input file. We can consider something like the following scenario.

Scenario...
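However the scenario is worded, the step that supplies the input file can be short. A minimal sketch, assuming behave is the Gherkin runner (as in the earlier projects' acceptance tests); the step wording and file name are hypothetical, and context.text holds the scenario's block of text:

```python
import tempfile
from pathlib import Path

from behave import given

@given("a raw ND JSON file with the following content")
def step_create_raw_file(context):
    # context.text is the large block of text embedded in the scenario.
    workdir = Path(tempfile.mkdtemp())
    context.raw_path = workdir / "raw.ndjson"
    context.raw_path.write_text(context.text)
```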

9.4 Summary

This chapter has covered a number of aspects of data validation and cleaning applications:

  • CLI architecture and how to design a simple pipeline of processes.

  • The core concepts of validating, cleaning, converting, and standardizing raw data.

In the next chapter, we’ll dive more deeply into a number of data cleaning and standardizing features. Those projects will all build on this base application framework. After those projects, the next two chapters will look a little more closely at the analytical data persistence choices, and provide an integrated web service for providing cleaned data to other stakeholders.

9.5 Extras

Here are some ideas for you to add to this project.

9.5.1 Create an output file with rejected samples

In Error reports we suggested there are times when it’s appropriate to create a file of rejected samples. For the examples in this book — many of which are drawn from well-curated, carefully managed data sets — it can feel a bit odd to design an application that will reject data.

For enterprise applications, data rejection is a common need.

It can help to look at a data set like this: https://datahub.io/core/co2-ppm. It contains samples with measurements of CO2 levels, measured in units of ppm (parts per million).

This has some samples with an invalid number of days in the month. It has some samples where a monthly CO2 level wasn’t recorded.

It can be insightful to use a rejection file to divide this data set into clearly usable records, and records that are not as clearly usable.
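A minimal sketch of such a split follows. The column names and the rule for what counts as usable are assumptions about the CSV layout described above, not taken from the data set's documentation; adjust them to the actual columns.

```python
import csv
import json
from pathlib import Path

def acceptable(row: dict[str, str]) -> bool:
    """Assumed rule: a usable month has 1-31 sampling days and a recorded level."""
    try:
        days = int(row["Number of Days"])  # assumed column name
        level = float(row["Average"])      # assumed column name
    except (KeyError, ValueError):
        return False
    return 1 <= days <= 31 and level > 0

def split_rejects(source: Path, accepted: Path, rejected: Path) -> None:
    """Copy each CSV row to either the accepted or the rejected ND JSON file."""
    with source.open() as src, accepted.open("w") as good, rejected.open("w") as bad:
        for row in csv.DictReader(src):
            target = good if acceptable(row) else bad
            target.write(json.dumps(row) + "\n")
```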

The output will not reflect the analysis model. These objects...
