Table 4. Performance of the four machine-learning models for PIH prediction.
| Feature set | Metric | Naïve Bayes | Logistic regression | Random forest | ANN |
|---|---|---|---|---|---|
| All features | accuracy | 55.74 | 62.01 | 76.28 | 70.7 |
| | precision | 80 | 70.25 | 77.99 | 74.51 |
| | recall | 13.33 | 60.79 | 81.28 | 74.5 |
| | AUC | 60.16 | 60.47 | 79.5 | 76.01 |
| | 95% CI | 45.41–74.62 | 46.69–74.26 | 67.87–91.14 | 64.03–88 |
| Feature set A (remove redundant features) | accuracy | 53.64 | 59.97 | 68.28 | 64.53 |
| | precision | 78.5 | 67.11 | 71.36 | 69.14 |
| | recall | 28.02 | 59.23 | 74.61 | 69.27 |
| | AUC | 67.23 | 66.78 | 71.85 | 70.72 |
| | 95% CI | 53.19–81.27 | 52.74–80.81 | 58.58–85.1 | 56.98–84.47 |
| Feature set B (rank features by importance) | accuracy | 70.2 | 79.16 | 78.8 | 70.61 |
| | precision | 71.54 | 79.95 | 79.5 | 72.92 |
| | recall | 79.71 | 85.01 | 84.58 | 77.64 |
| | AUC | 77.82 | 75.56 | 83.78 | 67.57 |
| | 95% CI | 65.87–89.76 | 63.02–88.11 | 73.36–94.2 | 53.53–81.6 |
| Feature set C (recursive feature elimination) | accuracy | 70.02 | 68.56 | 79.48 | 68.62 |
| | precision | 77.08 | 72.75 | 81.16 | 72.25 |
| | recall | 67.13 | 71.67 | 83.65 | 72.97 |
| | AUC | 77.25 | 73.42 | 84.23 | 72.3 |
| | 95% CI | 65.13–89.38 | 60.57–86.27 | 73.63–94.84 | 59.5–85.09 |
ANN, artificial neural network; AUC, area under the receiver operating characteristic curve; CI, confidence interval. Precision and recall are reported for the hypotension class. All values are percentages.
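For reference, the following is a minimal sketch of how metrics of this kind (accuracy, precision, recall, AUC, and a bootstrap 95% CI for the AUC) can be computed for a binary hypotension label using scikit-learn. The placeholder data, variable names (`X`, `y`, `clf`), and the bootstrap CI procedure are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the study's feature matrix and PIH labels (1 = hypotension).
X = np.random.rand(200, 10)
y = np.random.randint(0, 2, 200)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# One of the four candidate models; the others would be evaluated the same way.
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
y_pred = clf.predict(X_test)
y_prob = clf.predict_proba(X_test)[:, 1]  # predicted probability of the hypotension class

print("accuracy :", 100 * accuracy_score(y_test, y_pred))
print("precision:", 100 * precision_score(y_test, y_pred, pos_label=1))  # hypotension class
print("recall   :", 100 * recall_score(y_test, y_pred, pos_label=1))
print("AUC      :", 100 * roc_auc_score(y_test, y_prob))

# Bootstrap 95% CI for the AUC (one common approach; the paper's exact CI method may differ).
rng = np.random.default_rng(0)
aucs = []
for _ in range(1000):
    idx = rng.integers(0, len(y_test), len(y_test))
    if len(np.unique(y_test[idx])) < 2:  # resample must contain both classes
        continue
    aucs.append(roc_auc_score(y_test[idx], y_prob[idx]))
print("AUC 95% CI:", np.percentile(aucs, [2.5, 97.5]) * 100)
```

The same evaluation loop can be repeated for each feature set (all features, redundant features removed, importance-ranked features, and recursive feature elimination) by refitting the model on the corresponding feature subset.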