Summary
In this chapter, you learned about the main metrics for model evaluation. We started with metrics for classification problems and then moved on to metrics for regression problems.
In terms of classification metrics, you were introduced to the well-known confusion matrix, which is probably the most important tool for evaluating classification models.
Aside from learning what true positives, true negatives, false positives, and false negatives are, we saw how to combine these components to derive other metrics, such as accuracy, precision, recall, the F1 score, and AUC.
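As a quick refresher, these derived metrics can be computed directly from the four confusion matrix counts. The sketch below uses hypothetical counts chosen only for illustration:

```python
def accuracy(tp, tn, fp, fn):
    # Fraction of all predictions that were correct
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    # Of everything predicted positive, how much was truly positive
    return tp / (tp + fp)

def recall(tp, fn):
    # Of everything truly positive, how much was found
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# Hypothetical confusion matrix counts for illustration
tp, tn, fp, fn = 8, 6, 2, 4
print(accuracy(tp, tn, fp, fn))  # 0.7
print(precision(tp, fp))         # 0.8
print(round(recall(tp, fn), 4))  # 0.6667
```

Note how precision and recall each look at a different slice of the matrix, which is why the F1 score, their harmonic mean, is useful when you need a single number that balances both.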
We then went deeper and learned about ROC curves, as well as precision-recall curves. We learned that ROC curves are well suited to fairly balanced datasets, while precision-recall curves are the better choice for moderately to heavily imbalanced datasets.
By the way, when you are dealing with imbalanced datasets, remember that accuracy can be misleading: a model that always predicts the majority class can score a high accuracy while being useless in practice.
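This pitfall is easy to demonstrate. In the hypothetical scenario below, a classifier that blindly predicts the negative (majority) class achieves 95% accuracy on a dataset with only 5% positives, yet its recall is zero:

```python
# Hypothetical imbalanced dataset: 95 negatives (0), 5 positives (1)
y_true = [0] * 95 + [1] * 5

# A "model" that always predicts the majority class
y_pred = [0] * 100

correct = sum(t == p for t, p in zip(y_true, y_pred))
acc = correct / len(y_true)

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
rec = tp / (tp + fn)

print(acc)  # 0.95 -- looks great
print(rec)  # 0.0  -- but not a single positive case was caught
```

This is exactly why precision- and recall-based metrics are preferred over accuracy when the classes are imbalanced.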
In terms...