Table 1.
Evaluation indicators of a model’s predictive performance
| Scenario | Name | Definition | Formula |
|---|---|---|---|
| Detection and segmentation | Recall | The fraction of actual positive samples that are correctly predicted as positive | TP / (TP + FN) |
| | Precision | The fraction of predicted positive samples that are true positives | TP / (TP + FP) |
| | Accuracy | The fraction of all samples that are correctly predicted | (TP + TN) / (TP + FP + FN + TN) |
| | F-score | The harmonic mean of precision and recall | 2 × Precision × Recall / (Precision + Recall) |
| | Intersection over union | The ratio of the intersection of the predicted bounding box (P) and the ground-truth bounding box (G) to their union | (P ∩ G) / (P ∪ G) |
| Detection | Mean average precision | The mean of the average precision scores over a set of queries [65], where n is the number of classes and AP_k is the average precision of class k | (1/n) Σₖ₌₁ⁿ AP_k |
| Segmentation | Dice coefficient | A measure of the similarity of two contour regions | 2\|P ∩ G\| / (\|P\| + \|G\|) |
TP stands for true positive, FP stands for false positive, FN stands for false negative, and TN stands for true negative
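The confusion-matrix metrics in the table can be sketched in a few lines of code. The following is a minimal illustration (not from the paper); the function names and the set-based representation of regions P and G are assumptions made for clarity.

```python
def classification_metrics(tp, fp, fn, tn):
    """Recall, precision, accuracy, and F-score from confusion counts."""
    recall = tp / (tp + fn)                      # fraction of actual positives found
    precision = tp / (tp + fp)                   # fraction of predicted positives that are correct
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # fraction of all samples predicted correctly
    f_score = 2 * precision * recall / (precision + recall)  # harmonic mean of precision and recall
    return recall, precision, accuracy, f_score

def iou(p, g):
    """Intersection over union of two regions, represented as sets of pixels/cells."""
    return len(p & g) / len(p | g)

def dice(p, g):
    """Dice coefficient: similarity of two contour regions P and G."""
    return 2 * len(p & g) / (len(p) + len(g))

# Hypothetical example: 8 true positives, 2 false positives, 2 false negatives, 8 true negatives.
r, pr, acc, f = classification_metrics(8, 2, 2, 8)
# Two overlapping regions as pixel sets:
overlap_iou = iou({1, 2, 3}, {2, 3, 4})    # intersection 2, union 4 → 0.5
overlap_dice = dice({1, 2, 3}, {2, 3, 4})  # 2·2 / (3 + 3) ≈ 0.667
```

Note that IoU and Dice are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why detection benchmarks tend to report one or the other rather than both.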