Hands-On Data Preprocessing in Python

Performing numerosity data reduction

When we need to reduce the number of data objects (rows) as opposed to the number of attributes (columns), we have a case of numerosity reduction. In this section, we will cover three methods: random sampling, stratified sampling, and random over/undersampling. Let's start with random sampling.

Random sampling

Randomly selecting some of the rows to be included in the analysis is known as random sampling. We are normally compelled to resort to random sampling when we run into computational limitations, which happens when the size of our data exceeds our computational capabilities. In those situations, we may randomly select a subset of the data objects to be included in the analysis. Let's look at an example.
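In pandas, drawing a random subset of rows is a one-liner via DataFrame.sample(). The following minimal sketch uses made-up data rather than a dataset from the book, and keeps a random 10% of the rows:

import pandas as pd

# Build a small toy DataFrame so the snippet is self-contained.
df = pd.DataFrame({'customer_id': range(1000),
                   'monthly_charge': [20 + i % 80 for i in range(1000)]})

# Randomly keep 10% of the rows; random_state makes the draw reproducible.
sample_df = df.sample(frac=0.1, random_state=42)

print(len(sample_df))  # 100 of the original 1,000 rows

Passing random_state pins the pseudo-random draw, so the same subset is selected on every run and experiments stay repeatable.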

Example – random sampling to speed up tuning

In this example, we are using Customer Churn.csv to train a decision tree so that it can predict (classify) which customers will churn in the future...
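The full worked example is not reproduced here, but a minimal sketch of the idea looks something like the following. The target column name 'Churn', the 20% sampling fraction, and the hyperparameter grid are assumptions for illustration, not necessarily the book's actual values:

import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV

customer_df = pd.read_csv('Customer Churn.csv')

# Tune on a random 20% subset to cut the cost of the grid search.
# 'Churn' as the target column is an assumption for illustration.
tuning_df = customer_df.sample(frac=0.2, random_state=1)
X = tuning_df.drop(columns=['Churn'])
y = tuning_df['Churn']

# A hypothetical hyperparameter grid; the book's actual grid may differ.
param_grid = {'max_depth': [3, 5, 10, None],
              'min_samples_split': [2, 10, 50]}

search = GridSearchCV(DecisionTreeClassifier(random_state=1),
                      param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)

Because the grid search trains one tree per parameter combination per cross-validation fold, shrinking the data to a random sample reduces the tuning time roughly in proportion to the sampling fraction; the chosen hyperparameters can then be used to fit a final tree on the full dataset.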
