Hands-On Data Analysis with Pandas - Second Edition

Ensemble methods

Ensemble methods combine many models (often weak ones) to create a stronger one that either minimizes the average error between observed and predicted values (the bias) or improves how well it generalizes to unseen data (minimizes the variance). We have to strike a balance between complex models, which may increase variance because they tend to overfit, and simple models, which may have high bias because they tend to underfit. This is called the bias-variance trade-off, illustrated in the following figure:

Figure 10.11 – The bias-variance trade-off
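
To make the trade-off concrete, here is a minimal sketch (not from the book) that fits decision trees of increasing depth to noisy synthetic data using scikit-learn; the dataset and model choices are illustrative assumptions:

```python
# A minimal sketch of the bias-variance trade-off, assuming scikit-learn
# is available. The synthetic sine-wave data is a stand-in, not the
# book's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=300)  # noisy sine wave

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in [1, 4, None]:  # underfit, balanced, overfit
    tree = DecisionTreeRegressor(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    print(
        f"max_depth={depth}: "
        f"train R^2={tree.score(X_train, y_train):.2f}, "
        f"test R^2={tree.score(X_test, y_test):.2f}"
    )
```

The printed scores make the trade-off visible: the depth-1 tree scores poorly on both sets (high bias, underfitting), while the unconstrained tree scores nearly perfectly on the training set but noticeably worse on the test set (high variance, overfitting).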

Ensemble methods can be broken down into three categories: boosting, bagging, and stacking. Boosting trains many weak learners that learn from each other's mistakes to reduce bias, producing a stronger learner. Bagging, on the other hand, uses bootstrap aggregation to train many models on bootstrap samples of the data and aggregates the results together (using voting for classification...
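
The following is a hedged sketch of all three categories, assuming scikit-learn's GradientBoostingClassifier, BaggingClassifier, and StackingClassifier as representative implementations; the breast cancer dataset here is just a stand-in, not the data used in the book:

```python
# Illustrative comparison of boosting, bagging, and stacking with
# scikit-learn; the dataset is an assumption for demonstration only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (
    BaggingClassifier,
    GradientBoostingClassifier,
    StackingClassifier,
)
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

ensembles = {
    # boosting: weak learners are trained sequentially, each one
    # correcting the mistakes of the previous ones (reduces bias)
    "boosting": GradientBoostingClassifier(random_state=0),
    # bagging: independent models fit on bootstrap samples, with their
    # predictions combined by voting (reduces variance)
    "bagging": BaggingClassifier(
        DecisionTreeClassifier(), n_estimators=50, random_state=0
    ),
    # stacking: a final meta-model learns how to combine the base
    # models' predictions
    "stacking": StackingClassifier(
        estimators=[
            ("tree", DecisionTreeClassifier(random_state=0)),
            ("knn", KNeighborsClassifier()),
        ]
    ),
}

for name, model in ensembles.items():
    scores = cross_val_score(model, X, y)  # 5-fold CV accuracy
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```

Note that stacking differs from the other two in that its base models need not be of the same type: a final estimator (a meta-learner) is trained on the base models' predictions rather than on the raw features.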
