Int J Mol Sci. 2021 Sep 24;22(19):10291. doi: 10.3390/ijms221910291

Table 5.

Description of performance metrics. Here, a “positive” label can, for example, correspond to “having CVD”, while a “negative” may correspond to “healthy”.

| Metric | Description | Math. Definition |
|---|---|---|
| True positive (TP) | A positive sample correctly predicted by the model. | $TP$ |
| True negative (TN) | A negative sample correctly predicted by the model. | $TN$ |
| False positive (FP) | A sample wrongly classified as positive by the model. | $FP$ |
| False negative (FN) | A sample wrongly classified as negative by the model. | $FN$ |
| Precision | Fraction of true positives among the predicted positives. | $\frac{TP}{TP+FP}$ |
| Recall (Sensitivity) | Fraction of positives that are correctly predicted. | $\frac{TP}{TP+FN}$ |
| Specificity | Fraction of negatives that are correctly predicted. | $\frac{TN}{TN+FP}$ |
| Accuracy | Fraction of correctly predicted positives and negatives. | $\frac{TP+TN}{TP+TN+FP+FN}$ |
| ROC curve (Receiver Operating Characteristic) | A curve indicating the performance of a classifier. The Y-axis shows recall and the X-axis shows $s = 1 - \text{specificity}$. | $R(s)$ |
| AUC-ROC (Area Under the Curve - ROC) | Quantitative performance measure based on the ROC curve. Ranges from 0 to 1, where 1 corresponds to perfect, and 0.5 to random, classification. | $\int_0^1 R(s)\,ds$ |
| C-statistic | Equivalent to AUC-ROC. Can be used for censored data (missing patient outcomes). Here $r_i$ is the predicted risk of patient $i$; $t_i$ is the time to event for patient $i$; $\varepsilon_i \in \{0,1\}$ indicates whether (event) information exists; and $1(x) = 1$ if $x$ holds, $0$ otherwise. | $\frac{\sum_{ij} \varepsilon_i\, 1(r_i > r_j)\, 1(t_i < t_j)}{\sum_{ij} \varepsilon_i\, 1(t_i < t_j)}$ |
| PR curve (Precision-Recall) | Similar to the ROC curve. The Y-axis shows precision and the X-axis recall ($r$). | $P(r)$ |
| AUC-PR (AUC - Precision-Recall) | Quantitative performance measure based on the PR curve. An alternative to AUC-ROC. | $\int_0^1 P(r)\,dr$ |
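
To make the tabulated definitions concrete, below is a minimal Python/NumPy sketch that computes the confusion-matrix metrics and a pairwise C-statistic directly from the formulas above. The function names (`confusion_counts`, `basic_metrics`, `c_statistic`) and the toy data are ours, introduced purely for illustration; the sketch assumes binary 0/1 labels and does not guard against zero denominators or tied risk scores.

```python
import numpy as np

def confusion_counts(y_true, y_pred):
    """Count TP, TN, FP, FN for binary labels (1 = positive, 0 = negative)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return tp, tn, fp, fn

def basic_metrics(y_true, y_pred):
    """Precision, recall, specificity, and accuracy as defined in Table 5."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return {
        "precision":   tp / (tp + fp),
        "recall":      tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
    }

def c_statistic(risk, time, event):
    """Fraction of comparable pairs ranked concordantly: a patient i with an
    observed event (event[i] == 1) should carry a higher predicted risk than
    any patient j who survived longer (time[i] < time[j])."""
    concordant, comparable = 0, 0
    n = len(risk)
    for i in range(n):
        if not event[i]:
            continue  # censored patients cannot anchor a comparable pair
        for j in range(n):
            if time[i] < time[j]:
                comparable += 1
                concordant += risk[i] > risk[j]
    return concordant / comparable

# Toy usage (hypothetical data, for illustration only)
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(basic_metrics(y_true, y_pred))
print(c_statistic(risk=[0.9, 0.4, 0.7], time=[2, 5, 8], event=[1, 1, 0]))
```

In practice, the curve-based measures (AUC-ROC, AUC-PR) are usually obtained from threshold sweeps over predicted scores, e.g., with scikit-learn's `roc_auc_score` and `precision_recall_curve`, rather than computed by hand.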