Selecting the best model
Selecting the best model requires evaluation metrics, so before choosing a model we need to understand the common terminology and metrics used for object detection tasks. After the best model has been chosen, this section also provides code to sample and visualize a few prediction results so that the model can be evaluated qualitatively.
Evaluation metrics for object detection models
Two main evaluation metrics are used for the object detection task: mAP@0.5 (or AP50) and the F1-score (or F1). The former is the mean average precision (mAP) at an intersection over union (IoU) threshold of 0.5 and is used to select the best model. The latter is the harmonic mean of precision and recall and is used to report how the chosen model performs on a specific dataset. The definitions of these two metrics rely on the computation of Precision and Recall:
Here, TP (for...
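To make these definitions concrete, the following sketch (not this chapter's own code; the corner-coordinate box format and the function names are assumptions) matches predicted boxes to ground-truth boxes at the IoU threshold of 0.5, counts TP, FP, and FN, and derives precision, recall, and the F1-score from those counts:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_recall_f1(predictions, ground_truths, iou_threshold=0.5):
    """Greedily match each prediction to an unmatched ground-truth box;
    a match at IoU >= threshold is a TP, leftover predictions are FPs,
    and unmatched ground-truth boxes are FNs."""
    matched = set()
    tp = 0
    for pred in predictions:
        best_iou, best_gt = 0.0, None
        for i, gt in enumerate(ground_truths):
            if i in matched:
                continue
            score = iou(pred, gt)
            if score > best_iou:
                best_iou, best_gt = score, i
        if best_iou >= iou_threshold:
            tp += 1
            matched.add(best_gt)
    fp = len(predictions) - tp    # predictions with no good match
    fn = len(ground_truths) - tp  # ground-truth boxes that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# One correct detection plus one spurious box against a single target:
# precision = 0.5, recall = 1.0, F1 ≈ 0.67.
p, r, f = precision_recall_f1([(0, 0, 10, 10), (20, 20, 30, 30)],
                              [(1, 1, 11, 11)])
```

Note that mAP@0.5 additionally averages precision over recall levels (and over classes) as the confidence threshold is swept, so the sketch above corresponds to the precision/recall/F1 part of the evaluation at a single operating point.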