You're reading from The Python Workshop - Second Edition

Product type: Book
Published: November 2022
Publisher: Packt
ISBN-13: 9781804610619
Pages: 600
Edition: 2nd
Authors (5): Corey Wade, Mario Corchero Jiménez, Andrew Bird, Dr. Lau Cher Han, Graham Lee

Table of Contents (16 chapters)

Preface
Chapter 1: Python Fundamentals – Math, Strings, Conditionals, and Loops
Chapter 2: Python Data Structures
Chapter 3: Executing Python – Programs, Algorithms, and Functions
Chapter 4: Extending Python, Files, Errors, and Graphs
Chapter 5: Constructing Python – Classes and Methods
Chapter 6: The Standard Library
Chapter 7: Becoming Pythonic
Chapter 8: Software Development
Chapter 9: Practical Python – Advanced Topics
Chapter 10: Data Analytics with pandas and NumPy
Chapter 11: Machine Learning
Chapter 12: Deep Learning with Python
Chapter 13: The Evolution of Python – Discovering New Python Features
Index
Other Books You May Enjoy

Testing data with cross-validation

In cross-validation, also known as CV, the training data is split into five folds (any number will do, but five is standard). For each fold, the ML algorithm is trained on the other four folds and tested on the held-out fold, so that every fold serves as the test set exactly once. The result is five different train-test splits drawn from the same data, and the mean of the five test scores is usually taken as the accuracy of the model.

Note

For cross-validation, 5 folds is only a suggestion; any number of folds greater than 1 may be used, with 3 and 10 also being fairly common.
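
As a minimal sketch of this procedure, the folds can be built by hand with scikit-learn's KFold. The dataset and estimator below are stand-ins for illustration, not the book's own example:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

# Stand-in data: 500 samples, 10 features
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
scores = []
for train_index, test_index in kf.split(X):
    model = LinearRegression()
    # Fit on four folds, score on the held-out fold
    model.fit(X[train_index], y[train_index])
    scores.append(model.score(X[test_index], y[test_index]))

print('Fold scores:', np.round(scores, 3))
print('Mean score:', round(np.mean(scores), 3))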

Cross-validation is a core tool for ML. Test scores averaged across different folds are more reliable than the single test score on one train-test split that we computed in the first exercise. With only one test score, there is no way of knowing whether it happens to be low or high; five test scores give a better picture of the model's true accuracy.

Cross-validation can be implemented in a variety of ways. A standard approach is to use cross_val_score,...
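
For instance, a minimal sketch with cross_val_score might look like the following; again, the data and estimator are stand-ins rather than the book's example:

from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Stand-in data: 500 samples, 10 features
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

# cv=5 fits and scores the model on five folds and returns one score per fold
scores = cross_val_score(LinearRegression(), X, y, cv=5)
print('Fold scores:', scores.round(3))
print('Mean score:', round(scores.mean(), 3))

A single call thus replaces the manual loop over folds, and the mean of the returned scores serves as the cross-validated accuracy.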
