Cracking the Data Science Interview

You're reading from Cracking the Data Science Interview

Product type: Book
Published: Feb 2024
Publisher: Packt
ISBN-13: 9781805120506
Pages: 404
Edition: 1st
Authors (2): Leondra R. Gonzalez, Aaren Stubberfield

Table of Contents (21 chapters)

Preface
1. Part 1: Breaking into the Data Science Field
2. Chapter 1: Exploring Today’s Modern Data Science Landscape
3. Chapter 2: Finding a Job in Data Science
4. Part 2: Manipulating and Managing Data
5. Chapter 3: Programming with Python
6. Chapter 4: Visualizing Data and Data Storytelling
7. Chapter 5: Querying Databases with SQL
8. Chapter 6: Scripting with Shell and Bash Commands in Linux
9. Chapter 7: Using Git for Version Control
10. Part 3: Exploring Artificial Intelligence
11. Chapter 8: Mining Data with Probability and Statistics
12. Chapter 9: Understanding Feature Engineering and Preparing Data for Modeling
13. Chapter 10: Mastering Machine Learning Concepts
14. Chapter 11: Building Networks with Deep Learning
15. Chapter 12: Implementing Machine Learning Solutions with MLOps
16. Part 4: Getting the Job
17. Chapter 13: Mastering the Interview Rounds
18. Chapter 14: Negotiating Compensation
19. Index
20. Other Books You May Enjoy

Validating and monitoring the model

After you’ve successfully trained and deployed your ML model, the journey doesn’t end there. Model validation and monitoring are important next steps in the MLOps process. We will briefly discuss validating your deployed model and then focus on monitoring it over the long term.

Validating the model deployment

Once your model is deployed, you will want to validate that it works as expected. This is a relatively short and straightforward process. The general steps involve connecting to your deployed model, submitting some data (preferably data unseen by the model during the training process), collecting the model predictions, and scoring them.
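
As a minimal sketch of this check, the snippet below assumes the model sits behind an HTTP endpoint that accepts JSON and that a labeled holdout dataset is available; the URL, file name, and response format are hypothetical placeholders rather than details from this book.

import requests
import pandas as pd
from sklearn.metrics import accuracy_score

# Hypothetical deployment URL and holdout file; replace with your own.
ENDPOINT_URL = "https://my-model-service.example.com/predict"
holdout = pd.read_csv("holdout_data.csv")  # data unseen during training

X_holdout = holdout.drop(columns=["label"])
y_true = holdout["label"]

# Connect to the deployed model and submit the unseen records.
response = requests.post(
    ENDPOINT_URL,
    json={"instances": X_holdout.to_dict(orient="records")},
    timeout=30,
)
response.raise_for_status()

# Collect the predictions (assumes the service returns {"predictions": [...]}).
y_pred = response.json()["predictions"]

# Score the predictions to confirm the deployment returns sensible results.
print(f"Holdout accuracy: {accuracy_score(y_true, y_pred):.3f}")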

Validating in this way allows you to confirm two things. First, you know that your deployment worked and that your model is returning results. Second, scoring the model on unseen data gives you another assessment of its performance. You don’t want...
