
Table B.1.

Evaluation metrics

S. no. Metric Description
1 Accuracy = (TP + TN) / (TP + TN + FP + FN). Ratio of the number of correct predictions to the total number of input samples
2 Precision = TP / (TP + FP). Number of correct positive predictions divided by the total number of predicted positives
3 Recall = TP / (TP + FN). Number of correct positive predictions divided by the total number of true positives and false negatives
4 F1-score = 2 × (P × R) / (P + R). Harmonic mean of precision and recall
5 Specificity = TN / (TN + FP). The proportion of actual negatives predicted as negatives
6 Sensitivity = TP / (TP + FN). The proportion of actual positives predicted as positives
7 Positive LHR = Sensitivity / (100 - Specificity); Negative LHR = (100 - Sensitivity) / Specificity, with sensitivity and specificity expressed as percentages. The LHR assesses the goodness of fit of two competing statistical models based on their likelihoods

P precision, R recall, TP true positive, TN true negative, FP false positive, FN false negative, LHR likelihood ratio
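
The metrics in Table B.1 all derive from the four confusion-matrix counts. The following minimal Python sketch (not part of the original paper; the function name evaluation_metrics and the example counts are illustrative assumptions) computes each metric from TP, TN, FP, and FN. The likelihood ratios are written with sensitivity and specificity as fractions, which is equivalent to the percentage form used in the table.

def evaluation_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the Table B.1 metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # same as sensitivity
    f1_score = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)
    sensitivity = recall
    # Likelihood ratios with sensitivity/specificity as fractions in [0, 1];
    # this matches the table's percentage form after dividing by 100.
    positive_lhr = sensitivity / (1 - specificity)
    negative_lhr = (1 - sensitivity) / specificity
    return {
        "accuracy": accuracy,
        "precision": precision,
        "recall": recall,
        "f1_score": f1_score,
        "specificity": specificity,
        "sensitivity": sensitivity,
        "positive_lhr": positive_lhr,
        "negative_lhr": negative_lhr,
    }

# Example with hypothetical counts: 90 TP, 80 TN, 20 FP, 10 FN
# gives accuracy 0.85, recall 0.90, specificity 0.80, positive LHR 4.5.
print(evaluation_metrics(tp=90, tn=80, fp=20, fn=10))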