| Metric | Description |
| --- | --- |
| TP | True positives: the number of positive samples classified correctly. |
| TN | True negatives: the number of negative samples classified correctly. |
| FP | False positives: the number of negative samples misclassified as positive. |
| FN | False negatives: the number of positive samples misclassified as negative. |
| Accuracy | The overall proportion of correctly classified samples: Accuracy = (TP + TN) / (TP + TN + FP + FN). |
| Sensitivity (Sen) | The proportion of actual positive samples classified correctly, also called the True Positive Rate (TPR): Sen = TP / (TP + FN). |
| Specificity (Spe) | The proportion of actual negative samples classified correctly, also called the True Negative Rate (TNR): Spe = TN / (TN + FP). The False Positive Rate is FPR = 1 − Spe. |
| Gmean | The geometric mean of sensitivity and specificity, summarizing performance on both classes: Gmean = √(Sen × Spe). |
| F-score | The harmonic mean of precision and sensitivity, giving the combined performance on the two classes: F-score = (2 × Precision × Sen) / (Precision + Sen), where Precision = TP / (TP + FP). |
| ROC | A graphical representation of classifier performance that plots TPR against FPR at different classification thresholds. |
| AUC | The Area Under the ROC Curve: an aggregate measure of performance across all possible classification thresholds. |
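The count-based metrics above can be computed directly from the four confusion-matrix entries. A minimal sketch in Python (the function name `classification_metrics` and the example counts are illustrative, not from the source):

```python
from math import sqrt

def classification_metrics(tp, tn, fp, fn):
    """Compute the tabulated metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                    # TPR: positives caught
    specificity = tn / (tn + fp)                    # TNR: negatives caught
    accuracy = (tp + tn) / (tp + tn + fp + fn)      # overall correctness
    precision = tp / (tp + fp)                      # predicted positives that are real
    gmean = sqrt(sensitivity * specificity)         # geometric mean of Sen and Spe
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return {
        "accuracy": accuracy,
        "sensitivity": sensitivity,
        "specificity": specificity,
        "gmean": gmean,
        "f_score": f_score,
    }

# Hypothetical example: 40 TP, 45 TN, 5 FP, 10 FN out of 100 samples
m = classification_metrics(tp=40, tn=45, fp=5, fn=10)
```

For the example counts this yields an accuracy of 0.85, sensitivity 0.80, and specificity 0.90; ROC and AUC, by contrast, require the classifier's continuous scores so that TPR and FPR can be traced over many thresholds, not just the single confusion matrix shown here.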