| Term | Definition |
|---|---|
| True positive (TP) | A positive sample correctly predicted by the model. |
| True negative (TN) | A negative sample correctly predicted by the model. |
| False positive (FP) | A sample wrongly classified as positive by the model. |
| False negative (FN) | A sample wrongly classified as negative by the model. |
| Precision | Fraction of true positives among the predicted positives (see the code sketches following the table). |
| Recall (Sensitivity) | Fraction of positives that are correctly predicted. |
| Specificity | Fraction of negatives that are correctly predicted. |
| Accuracy | Fraction of correctly predicted positives and negatives. |
| ROC curve (Receiver Operating Characteristic) | A curve indicating the performance of a classifier. The Y-axis shows recall and the X-axis shows s = (1 - specificity). |
| AUC-ROC (Area Under the Curve - ROC) | Quantitative performance measure based on the ROC curve. Ranges from 0 to 1, where 1 corresponds to perfect and 0.5 to random classification. |
| C-statistic | Equivalent to AUC-ROC. Can be used for censored data (missing patient outcomes). Defined in terms of each patient i's predicted risk, time to event, and whether event information exists. |
| PR curve (Precision-Recall) | Similar to the ROC curve. The Y-axis shows precision and the X-axis shows recall (r). |
| AUC-PR (AUC - Precision-Recall) | Quantitative performance measure based on the PR curve. Alternative to AUC-ROC. |
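
As a minimal sketch of how the confusion-matrix entries above translate into the threshold-based metrics, assuming binary labels encoded as 0/1 (the function and variable names below are illustrative, not from the original text):

```python
import numpy as np

def confusion_metrics(y_true, y_pred):
    """Compute TP, TN, FP, FN and the derived metrics for binary 0/1 labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)

    tp = np.sum((y_true == 1) & (y_pred == 1))  # positives correctly predicted
    tn = np.sum((y_true == 0) & (y_pred == 0))  # negatives correctly predicted
    fp = np.sum((y_true == 0) & (y_pred == 1))  # negatives wrongly called positive
    fn = np.sum((y_true == 1) & (y_pred == 0))  # positives wrongly called negative

    return {
        "precision":   tp / (tp + fp),           # TP among predicted positives
        "recall":      tp / (tp + fn),           # TP among actual positives (sensitivity)
        "specificity": tn / (tn + fp),           # TN among actual negatives
        "accuracy":    (tp + tn) / len(y_true),  # correct predictions overall
    }

# Example with made-up labels and predictions:
print(confusion_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0]))
```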
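For the ROC and PR curves and their areas, a sketch using scikit-learn is shown below; it assumes the classifier outputs a continuous score or probability for the positive class (the example labels and scores are invented for illustration):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score, precision_recall_curve, auc

# Illustrative labels and predicted scores (e.g., predicted probability of the positive class).
y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9])

# ROC curve: recall (TPR) on the Y-axis vs. s = 1 - specificity (FPR) on the X-axis.
fpr, tpr, _ = roc_curve(y_true, y_score)
print("AUC-ROC:", roc_auc_score(y_true, y_score))

# PR curve: precision on the Y-axis vs. recall (r) on the X-axis.
precision, recall, _ = precision_recall_curve(y_true, y_score)
print("AUC-PR:", auc(recall, precision))  # trapezoidal area under the PR curve
```

Here AUC-PR is computed with the trapezoidal rule; `sklearn.metrics.average_precision_score` is a commonly used alternative summary of the same curve.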
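The C-statistic generalizes AUC-ROC to censored survival data. Below is a minimal sketch of Harrell's concordance index under its standard pairwise definition, using the three quantities listed in the table (predicted risk, time to event, event indicator); the function name and data are illustrative, and ties in event times are ignored for brevity:

```python
import numpy as np

def concordance_index(risk, time, event):
    """Harrell's C-statistic for right-censored data.

    risk  : predicted risk per patient (higher = expected earlier event)
    time  : observed time to event or censoring per patient
    event : 1 if the event was observed, 0 if the patient was censored
    """
    risk, time, event = map(np.asarray, (risk, time, event))
    concordant, comparable = 0.0, 0.0
    n = len(risk)
    for i in range(n):
        if not event[i]:
            continue  # patient i must have an observed event to anchor a comparable pair
        for j in range(n):
            if time[j] > time[i]:        # patient j outlived patient i
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1      # higher risk with the earlier event: concordant
                elif risk[i] == risk[j]:
                    concordant += 0.5    # ties in predicted risk count as half
    return concordant / comparable

# Illustrative data: three observed events and one censored patient.
print(concordance_index(risk=[0.9, 0.4, 0.7, 0.2],
                        time=[2.0, 5.0, 3.0, 6.0],
                        event=[1, 1, 1, 0]))
```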