Front Endocrinol. 2022 Aug 8;13:945020. doi: 10.3389/fendo.2022.945020

Table 4.

Performance evaluation metrics used in the literature reviewed above on deep learning for DFUs.

| Evaluation metric | Formula and the source literature | Source references |
|---|---|---|
| Accuracy | $\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$ (35) | (3, 28–30, 43, 50) |
| Sensitivity | $\mathrm{Sensitivity} = \frac{TP}{TP + FN}$ (35) | (29, 30) |
| Specificity | $\mathrm{Specificity} = \frac{TN}{TN + FP}$ (35) | (29, 30) |
| Precision | $\mathrm{Precision} = \frac{TP}{TP + FP}$ (35) | (8, 27, 29, 39, 51) |
| Recall | $\mathrm{Recall} = \frac{TP}{TP + FN}$ (35) | (27) |
| AUC | $\mathrm{AUC} = \frac{\sum_{i \in \text{positive class}} \mathrm{rank}_i - \frac{M(M+1)}{2}}{M \times N}$ (35) | (10, 29, 30) |
| F1-score | $F_1 = \frac{2 \times \mathrm{Recall} \times \mathrm{Precision}}{\mathrm{Recall} + \mathrm{Precision}}$ (35) | (27, 28, 39, 45) |
| Average precision (AP) | $\mathrm{AP} = \frac{\sum_{i=1}^{Q} \frac{TP_i}{TP_i + FP_i}}{Q}$ (35) | |
| Mean average precision (mAP) | $\mathrm{mAP} = \frac{\sum_{q=1}^{Q} \mathrm{AveP}(q)}{Q}$ (35) | (43, 45, 51, 53) |
| Dice similarity coefficient (DSC) | $\mathrm{DSC} = \frac{2 \times TP}{2 \times TP + FP + FN}$ (50) | (50) |
| Intersection over union (IoU) | $\mathrm{IoU} = \frac{TP}{TP + FP + FN}$ (50) | (50) |
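As a concrete illustration of the definitions in Table 4, the confusion-matrix metrics and the rank-based AUC formula can be sketched in plain Python. This is a minimal, hypothetical implementation written for this review's notation (TP, TN, FP, FN counts; M positives and N negatives for AUC), not code from any of the cited studies; the tie-handling in `rank_auc` is deliberately omitted for brevity.

```python
def confusion_metrics(tp, tn, fp, fn):
    """Binary-classification metrics from confusion-matrix counts, per Table 4."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # identical to sensitivity
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": recall,
        "specificity": tn / (tn + fp),
        "precision":   precision,
        "recall":      recall,
        "f1":          2 * recall * precision / (recall + precision),
        "dsc":         2 * tp / (2 * tp + fp + fn),   # Dice similarity coefficient
        "iou":         tp / (tp + fp + fn),           # intersection over union
    }


def rank_auc(scores, labels):
    """AUC via the rank formula: (sum of positive ranks - M(M+1)/2) / (M*N).

    Assumes no tied scores; labels are 0 (negative) or 1 (positive).
    """
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    rank = {i: r + 1 for r, i in enumerate(order)}  # ranks start at 1
    pos = [i for i, y in enumerate(labels) if y == 1]
    m, n = len(pos), len(labels) - len(pos)
    return (sum(rank[i] for i in pos) - m * (m + 1) / 2) / (m * n)


# Hypothetical example: 80 TP, 90 TN, 10 FP, 20 FN
m = confusion_metrics(tp=80, tn=90, fp=10, fn=20)
print(round(m["accuracy"], 3))                      # 0.85
print(rank_auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))  # 0.75
```

Note that for binary counts the F1-score and the Dice similarity coefficient coincide, which the formulas in Table 4 make explicit once $\mathrm{Precision}$ and $\mathrm{Recall}$ are substituted into $F_1$.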