Table 4.
Description of the performance indicators.
| Indicator | Description |
|---|---|
| Precision | Precision measures the proportion of samples predicted by the model as positive that are actually positive. It reflects the model’s ability to avoid misclassifying negative samples as positive. |
| Recall | Recall measures the proportion of all actual positive samples that the model correctly predicts as positive. It reflects the model’s ability to find every positive sample. |
| mAP50 | The mean average precision calculated at an IoU (Intersection over Union) threshold of 0.5, used to evaluate the model’s detection performance at moderate overlap. mAP50 is a commonly used evaluation metric in object detection that combines precision and recall. |
| mAP50-95 | The mean average precision averaged over IoU thresholds from 0.5 to 0.95 (in steps of 0.05). It provides a more comprehensive performance evaluation: stricter IoU thresholds demand more accurately localized detection boxes, which enables a more nuanced assessment of the model’s detection capability. |
| Parameters | The total number of trainable parameters in the model. This metric reflects the model’s complexity: in general, the fewer the parameters, the simpler the model and the lower its computational requirements, making it easier to deploy in resource-constrained environments. |
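The quantities in the table can be made concrete with a short sketch. The snippet below is illustrative only, not the evaluation code used in this work: `iou` computes the Intersection over Union underlying the mAP50 and mAP50-95 thresholds, and `precision_recall` computes the first two indicators from true-positive (TP), false-positive (FP), and false-negative (FN) counts; both function names and the box format `(x1, y1, x2, y2)` are assumptions for the example.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    # Coordinates of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```

For example, two unit-overlap 2x2 boxes give `iou([0, 0, 2, 2], [1, 1, 3, 3])` = 1/7, below the 0.5 threshold used by mAP50, so such a detection would not count as a true positive there.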