J Clin Med. 2020 Jan 27;9(2):343. doi: 10.3390/jcm9020343

Table 2. Model performance metrics.

| Model | Sensitivity, % (95% CI) | Specificity, % (95% CI) | Accuracy, % (95% CI) | PPV, % (95% CI) | F1 Score | AUC ROC (95% CI) | AUC PR (95% CI) | p-Value * |
|---|---|---|---|---|---|---|---|---|
| Random Forest (MEWS++) | 78.9 (77.6–80.1) | 79.1 (78.9–79.3) | 79.1 (78.9–79.3) | 11.5 (11.1–11.9) | 0.2 | 87.9 (87.4–88.4) | 36.2 (34.7–37.7) | <0.0001 |
| Linear SVM | 79.0 (77.6–80.3) | 77.9 (77.6–78.1) | 77.9 (77.7–78.2) | 11.0 (10.6–11.4) | 0.19 | 87.3 (86.8–87.9) | 28.7 (27.2–30.2) | <0.0001; 0.16 ** |
| LR | 61.4 (59.8–63.0) | 78.5 (78.3–78.8) | 77.9 (77.7–78.2) | 9.0 (8.6–9.4) | 0.16 | 79.1 (78.4–79.8) | 17.2 (16.0–18.5) | <0.0001 |
| MEWS Score | 64.2 (62.7–65.7) | 66.2 (66.0–66.5) | 66.2 (65.9–66.4) | 6.1 (5.9–6.4) | 0.11 | 66.7 (65.9–67.6) | 7.0 (6.2–7.8) | |

* p-value for the difference between the AUC ROC of the respective ML model and that of the MEWS Score. ** p-value = 0.16 for Random Forest vs. Linear SVM. Abbreviations: AUC PR, Area Under the Precision-Recall Curve; AUC ROC, Area Under the Receiver Operating Characteristic Curve; CI, Confidence Interval; LR, Logistic Regression; MEWS, Modified Early Warning Score; PPV, Positive Predictive Value; SVM, Support Vector Machine.
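
For readers who want to reproduce metrics of this form, the sketch below shows one way to compute the quantities reported in Table 2 (sensitivity, specificity, accuracy, PPV, F1 score, AUC ROC, AUC PR) together with percentile-bootstrap 95% CIs from a vector of true labels and model scores. The variable names, the 0.5 decision threshold, the synthetic data, and the bootstrap CI method are illustrative assumptions, not the study's actual pipeline; the thresholding and the interval/test procedures used in the paper (e.g., DeLong's test vs. bootstrap for the AUC ROC comparison) are not specified in this excerpt.

```python
# Illustrative sketch only: variable names, threshold, and the synthetic data are
# assumptions, not the study's actual pipeline or cohort.
import numpy as np
from sklearn.metrics import (
    average_precision_score,
    confusion_matrix,
    f1_score,
    roc_auc_score,
)


def table2_metrics(y_true, y_score, threshold=0.5):
    """Compute the metrics reported in Table 2 for one model at one threshold."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "sensitivity_pct": 100 * tp / (tp + fn),   # recall / true positive rate
        "specificity_pct": 100 * tn / (tn + fp),   # true negative rate
        "accuracy_pct": 100 * (tp + tn) / len(y_true),
        "ppv_pct": 100 * tp / (tp + fp),           # precision
        "f1": f1_score(y_true, y_pred),
        "auc_roc_pct": 100 * roc_auc_score(y_true, y_score),
        # average precision is used here as the AUC PR estimate
        "auc_pr_pct": 100 * average_precision_score(y_true, y_score),
    }


def bootstrap_ci(metric_fn, y_true, y_score, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI; one possible way to obtain intervals like those in the table."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if y_true[idx].min() == y_true[idx].max():
            continue  # skip degenerate resamples containing only one class
        stats.append(metric_fn(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi


# Synthetic, imbalanced example (positives ~5%), echoing the low PPV / AUC PR pattern.
rng = np.random.default_rng(42)
y_true = rng.binomial(1, 0.05, size=10_000)
y_score = np.clip(rng.normal(0.3 + 0.3 * y_true, 0.15), 0.0, 1.0)

print(table2_metrics(y_true, y_score, threshold=0.5))
print("AUC ROC 95% CI:", bootstrap_ci(roc_auc_score, y_true, y_score))
```

Note that `average_precision_score` is used as the AUC PR estimator in this sketch; a trapezoidal integration of the precision-recall curve would give similar, but not identical, values.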