Accuracy | This is the number of correctly classified samples in a validation set, divided by the total number of samples.
Balanced Accuracy | This is the average of the per-class accuracies in a validation set; per-class accuracy is one way to account for imbalance in the number of samples in each class.
Sensitivity | This is the same as true positive rate (TPR) or recall; it measures the fraction of samples in a designated 'positive class' (e.g., 'tumour') that are correctly classified.
Specificity | This is the same as true negative rate (TNR); it measures the fraction of samples in a designated 'negative class' (e.g., 'normal') that are correctly classified.
AUC | This is the area under the receiver operating characteristic (ROC) curve, a measure of the quality of a binary classifier based on the classifier's confidence scores on the validation set; it is determined without regard to the selection of a single fixed threshold for separating classes (a minimal computation sketch for these metrics follows this glossary).
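The following sketch, which is not part of the original glossary, illustrates how each of the metrics above can be computed for a binary problem; the validation labels, hard predictions, confidence scores, and the 'tumour' = 1 / 'normal' = 0 encoding are all hypothetical assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical validation-set labels (1 = 'tumour', 0 = 'normal'),
# hard predictions, and classifier confidence scores for the positive class.
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1])
scores = np.array([0.9, 0.4, 0.8, 0.2, 0.1, 0.7, 0.3, 0.6])

# Confusion-matrix counts for the positive ('tumour') and negative ('normal') classes.
tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

accuracy    = (tp + tn) / len(y_true)          # correctly classified / total samples
sensitivity = tp / (tp + fn)                   # TPR / recall on the positive class
specificity = tn / (tn + fp)                   # TNR on the negative class
balanced    = (sensitivity + specificity) / 2  # mean of the per-class accuracies

# AUC via the rank (Mann-Whitney) formulation: the probability that a randomly
# chosen positive sample is scored higher than a randomly chosen negative sample,
# so no single decision threshold is involved.
pos, neg = scores[y_true == 1], scores[y_true == 0]
auc = np.mean([(p > n) + 0.5 * (p == n) for p in pos for n in neg])

print(f"accuracy={accuracy:.3f} balanced={balanced:.3f} "
      f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} auc={auc:.3f}")
```

If scikit-learn is available, the same values follow from accuracy_score, balanced_accuracy_score, recall_score (with the positive or negative class as pos_label), and roc_auc_score.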