J Integr Bioinform. 2020 Aug 13;18(1):81–99. doi: 10.1515/jib-2019-0097

Table 2:

Detailed descriptions of the performance measures; a short computational sketch follows the table.

| Performance measure | Description |
|---|---|
| TP | The number of positive samples that are classified correctly (true positives). |
| TN | The number of negative samples that are classified correctly (true negatives). |
| FP | The number of negative samples that are misclassified as positive (false positives). |
| FN | The number of positive samples that are misclassified as negative (false negatives). |
| Accuracy | The overall percentage of samples classified correctly: Accuracy = (TP + TN)/(TP + TN + FP + FN). |
| Sensitivity (Sen) | The proportion of positive samples that are classified correctly, also called the True Positive Rate (TPR): Sen = TP/(TP + FN). |
| Specificity (Spe) | The proportion of negative samples that are classified correctly, also called the True Negative Rate (TNR): Spe = TN/(TN + FP). |
| Gmean | The geometric mean of specificity and sensitivity: Gmean = √(Spe × Sen). |
| F-score | The harmonic mean of specificity and sensitivity, summarizing performance on both classes: F-score = (2 × Spe × Sen)/(Spe + Sen). |
| ROC | A graphical representation of the trade-off between sensitivity and specificity; it plots the TPR against the FPR at different classification thresholds. |
| AUC | The Area Under the ROC Curve, an aggregate measure of performance across all possible classification thresholds. |
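As a concrete illustration, the minimal sketch below computes the measures in Table 2 from a confusion matrix. It assumes binary labels (1 = positive, 0 = negative); the use of scikit-learn and the example arrays `y_true`, `y_pred`, and `y_score` are illustrative choices, not part of the original paper.

```python
# A minimal sketch of the measures in Table 2. The data and the use of
# scikit-learn are illustrative assumptions, not from the original paper.
from math import sqrt

from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical ground truth, hard predictions, and predicted scores.
y_true  = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred  = [1, 0, 1, 0, 0, 1, 1, 0]
y_score = [0.9, 0.4, 0.8, 0.2, 0.1, 0.7, 0.6, 0.3]

# For binary labels (0, 1), confusion_matrix returns [[TN, FP], [FN, TP]],
# so ravel() yields the four counts in this order.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
sen = tp / (tp + fn)                   # sensitivity / TPR
spe = tn / (tn + fp)                   # specificity / TNR
gmean = sqrt(spe * sen)                # geometric mean of Spe and Sen
fscore = 2 * spe * sen / (spe + sen)   # harmonic mean of Spe and Sen, as in Table 2

# AUC is computed from the continuous scores, not the hard predictions.
auc = roc_auc_score(y_true, y_score)

print(f"Accuracy={accuracy:.3f} Sen={sen:.3f} Spe={spe:.3f} "
      f"Gmean={gmean:.3f} F-score={fscore:.3f} AUC={auc:.3f}")
```

Note that the F-score here follows the table's definition over specificity and sensitivity; the more common F1 variant is the harmonic mean of precision and recall instead.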