Table 3. Sensitivity, accuracy, AUC, and F-score of the machine learning algorithms evaluated in the included studies.
| Author | Algorithms | Sensitivity | Accuracy | AUC (mortality) | AUC (hospitalization) | F-score |
|---|---|---|---|---|---|---|
| Adler, E.D (2019) [10] | Boosted decision trees | | | 0.88 (0.85–0.90) | | |
| Ahmad, T (2018) [30] | Random forest | | | 0.83 | | |
| Allam, A (2019) [31] | Recurrent neural network | | | | 0.64 (0.640–0.645) | |
| | Logistic regression with l2-norm regularization (LASSO) | | | | 0.643 (0.640–0.646) | |
| Angraal, S (2020) [13] | Logistic regression | | | 0.66 (0.62–0.69) | 0.73 (0.66–0.80) | |
| | Logistic regression with LASSO regularization | | | 0.65 (0.61–0.70) | 0.73 (0.67–0.79) | |
| | Gradient descent boosting | | | 0.68 (0.66–0.71) | 0.73 (0.69–0.77) | |
| | Support vector machine (linear kernel) | | | 0.66 (0.60–0.72) | 0.72 (0.63–0.81) | |
| | Random forest | | | 0.72 (0.69–0.75) | 0.76 (0.71–0.81) | |
| Ashfaq, A (2019) [32] | Long short-term memory (LSTM) neural network | | | | 0.77 | 0.51 |
| Awan, S.E (2019) [33] | Multi-layer perceptron (MLP) | 48.4 | | | 0.62 | |
| Chen, R (2019) [34] | Naïve Bayes | | 0.827 | | 0.855 | |
| | Naïve Bayes + IG | | 0.857 | | 0.887 | |
| | Random forest | | 0.817 | | 0.890 | |
| | Random forest + IG | | 0.827 | | 0.877 | |
| | Decision trees (bagged) | | 0.827 | | 0.852 | |
| | Decision trees (bagged) + IG | | 0.816 | | 0.847 | |
| | Decision trees (boosted) | | 0.735 | | 0.705 | |
| | Decision trees (boosted) + IG | | 0.806 | | 0.797 | |
| Chicco, D (2020) [11] | Random forest | | 0.740 | 0.800 | | 0.547 |
| | Decision tree | | 0.737 | 0.681 | | 0.554 |
| | Gradient boosting | | 0.738 | 0.754 | | 0.527 |
| | Linear regression | | 0.730 | 0.643 | | 0.475 |
| | One rule | | 0.729 | 0.637 | | 0.465 |
| | Artificial neural network | | 0.680 | 0.559 | | 0.483 |
| | SVM (radial kernel) | | 0.690 | 0.749 | | 0.182 |
| | SVM (linear kernel) | | 0.684 | 0.754 | | 0.115 |
| | Naïve Bayes | | 0.696 | 0.589 | | 0.364 |
| | K-nearest neighbors | | 0.624 | 0.493 | | 0.148 |
| Chirinos, J (2020) [35] | Tree-based pipeline optimizer | | | 0.717 (0.643–0.791) | | |
| Desai, R.J (2020) [6] | Logistic regression (traditional) | | | 0.749 (0.729–0.768) | 0.738 (0.711–0.766) | |
| | LASSO | | | 0.750 (0.731–0.769) | 0.764 (0.738–0.789) | |
| | CART | | | 0.700 (0.680–0.721) | 0.738 (0.710–0.765) | |
| | Random forest | | | 0.757 (0.739–0.776) | 0.764 (0.738–0.790) | |
| | GBM | | | 0.767 (0.749–0.786) | 0.778 (0.753–0.802) | |
| Frizzell, J.D (2017) [36] | Random forest | | | | 0.607 | |
| | GBM | | | | 0.614 | |
| | TAN | | | | 0.618 | |
| | LASSO | | | | 0.618 | |
| | Logistic regression | | | | 0.624 | |
| Gleeson, S (2017) [37] | Decision trees | | | | 0.7505 | |
| Golas, S.B (2018) [12] | Logistic regression | | 0.626 | | 0.664 | 0.435 |
| | Gradient boosting | | 0.612 | | 0.650 | 0.425 |
| | Maxout networks | | 0.645 | | 0.695 | 0.454 |
| | Deep unified networks | | 0.646 | | 0.705 | 0.464 |
| Hearn, J (2018) [38] | Staged LASSO | | | 0.827 (0.785–0.867) | | |
| | Staged neural network | | | 0.835 (0.795–0.880) | | |
| | LASSO (breath-by-breath) | | | 0.816 (0.767–0.866) | | |
| | Neural network (breath-by-breath) | | | 0.842 (0.794–0.882) | | |
| Hsich, E (2011) [9] | Random survival forest | | | 0.705 | | |
| | Cox proportional hazards | | | 0.698 | | |
| Jiang, W (2019) [39] | Logistic and beta regression (ML) | | | | 0.73 | |
| Kourou, K (2016) [19] | Naïve Bayes | | 85 | 0.86 | | |
| | Bayesian network | | 85.9 | 0.596 | | |
| | Adaptive boosting | | 78 | 0.74 | | |
| | Support vector machine | | 90 | 0.74 | | |
| | Neural networks | | 87 | 0.845 | | |
| | Random forest | | 75 | 0.65 | | |
| Krumholz, H (2019) [40] | Logistic regression (ML) | | | 0.776 | | |
| Kwon, J (2019) [5] | Deep learning | | | 0.813 (0.810–0.816) | | |
| | Random forest | | | 0.696 (0.692–0.700) | | |
| | Logistic regression | | | 0.699 (0.695–0.702) | | |
| | Support vector machine | | | 0.636 (0.632–0.640) | | |
| | Bayesian network | | | 0.725 (0.721–0.728) | | |
| Liu, W (2019) [41] | Logistic regression | | | | 0.580 (0.578–0.583) | |
| | Gradient boosting | | | | 0.602 (0.599–0.605) | |
| | Artificial neural networks | | | | 0.604 (0.602–0.606) | |
| Lorenzoni, G (2019) [7] | GLMN | 77.8 | 0.812 | | 0.86 | |
| | Logistic regression | 54.7 | 0.589 | | 0.646 | |
| | CART | 44.3 | 0.635 | | 0.586 | |
| | Random forest | 54.9 | 0.726 | | 0.691 | |
| | Adaptive boosting | 57.3 | 0.671 | | 0.644 | |
| | LogitBoost | 66.7 | 0.625 | | 0.654 | |
| | Support vector machine | 57.3 | 0.699 | | 0.695 | |
| | Artificial neural network | 61.6 | 0.682 | | 0.677 | |
| Maharaj, S.M (2018) [42] | Boosted tree | | | | 0.719 | |
| | Spike-and-slab regression | | | | 0.621 | |
| McKinley, D (2019) [20] | K-nearest neighbors | | 0.773 | | 0.768 | |
| | K-nearest neighbors (randomized) | | 0.477 | | 0.469 | |
| | Support vector machine | | 0.545 | | 0.496 | |
| | Random forest | | 0.682 | | 0.616 | |
| | Gradient boosting machine | | 0.614 | | 0.589 | |
| | LASSO | | 0.614 | | 0.576 | |
| Miao, F (2017) [43] | Random survival forest | | | 0.804 | | |
| | Random survival forest (improved) | | | 0.821 | | |
| Nakajima, K (2020) [24] | Logistic regression | | | 0.898 | | |
| | Random forest | | | 0.917 | | |
| | GBT | | | 0.907 | | |
| | Support vector machine | | | 0.910 | | |
| | Naïve Bayes | | | 0.875 | | |
| | K-nearest neighbors | | | 0.854 | | |
| Shameer, K (2016) [44] | Naïve Bayes | | 0.832 | | 0.78 | |
| Shams, I (2015) [45] | Phase-type random forest | 91.95 | 0.836 | | 0.892 | |
| | Random forest | 88.43 | 0.802 | | 0.865 | |
| | Support vector machine | 86.16 | 0.775 | | 0.857 | |
| | Logistic regression | 83.40 | 0.721 | | 0.833 | |
| | Artificial neural network | 82.39 | 0.704 | | 0.823 | |
| Stampehl, M (2020) [46] | CART | | | | | |
| | Logistic regression | | | | | |
| | Logistic regression (stepwise) | | | 0.74 | | |
| Taslimitehrani, V (2016) [47] | CPXR(Log) | | | 0.914 | | |
| | Support vector machine | | | 0.75 | | |
| | Logistic regression | | | 0.89 | | |
| Turgeman, L (2016) [27] | Naïve Bayes | 48.9 | | | 0.676 | |
| | Logistic regression | 28.1 | | | 0.699 | |
| | Neural network | 8.9 | | | 0.639 | |
| | Support vector machine | 23.0 | | | 0.643 | |
| | C5.0 (ensemble model) | 43.5 | | | 0.693 | |
| | CART (boosted) | 22.6 | | | 0.556 | |
| | CART (bagged) | 9.0 | | | 0.579 | |
| | CHAID decision trees (boosted) | 30.3 | | | 0.691 | |
| | CHAID decision trees (bagged) | 10.5 | | | 0.707 | |
| | QUEST decision tree (boosted) | 20.3 | | | 0.487 | |
| | QUEST decision tree (bagged) | 7.2 | | | 0.579 | |
| | Naïve Bayes + logistic regression | 38.2 | | | 0.653 | |
| | Naïve Bayes + neural network | 26.3 | | | 0.635 | |
| | Naïve Bayes + SVM | 35.8 | | | 0.649 | |
| | Logistic regression + neural network | 16.8 | | | 0.59 | |
| | Logistic regression + SVM | 26.2 | | | 0.607 | |
| | Neural network + SVM | 16.5 | | | 0.577 | |
AUC: area under the receiver operating characteristic curve; CART: classification and regression tree; CHAID: chi-square automatic interaction detection; CPXR: contrast pattern aided logistic regression; GBM: gradient-boosted model; GBT: gradient-boosted trees; HR: hazard ratio; IG: information gain; LASSO: least absolute shrinkage and selection operator; ML: machine learning; QUEST: quick, unbiased, efficient statistical tree; SVM: support vector machine; TAN: tree-augmented naïve Bayesian network. The AUC is displayed under both the mortality and the hospitalization column when the authors did not specify which outcome was predicted. Sensitivity and accuracy are shown as reported in each study, either as proportions or as percentages.
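For readers who wish to reproduce this kind of head-to-head comparison, the sketch below shows how the four metrics tabulated above are typically computed for a binary clinical outcome. It is a minimal illustration, not the pipeline of any included study: it assumes scikit-learn and uses synthetic data in place of a real patient cohort.

```python
# Minimal sketch of how the metrics in Table 3 are computed.
# Assumptions (not from the reviewed studies): scikit-learn as the
# library, synthetic data in place of a patient cohort.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a cohort: 1,000 "patients", 20 features, and an
# imbalanced binary outcome (about 20% events), as is typical of
# mortality or readmission labels.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

for name, model in [("Logistic regression", LogisticRegression(max_iter=1000)),
                    ("Random forest", RandomForestClassifier(random_state=0))]:
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)              # class labels at the 0.5 cut-off
    y_prob = model.predict_proba(X_test)[:, 1]  # predicted event probabilities
    print(name)
    print(f"  sensitivity: {recall_score(y_test, y_pred):.3f}")   # true-positive rate
    print(f"  accuracy:    {accuracy_score(y_test, y_pred):.3f}")
    print(f"  AUC:         {roc_auc_score(y_test, y_prob):.3f}")  # threshold-free
    print(f"  F-score:     {f1_score(y_test, y_pred):.3f}")
```

Note that the AUC is threshold-free, whereas sensitivity, accuracy, and the F-score depend on the classification cut-off (0.5 by default here), which is one reason the metrics in the table are not directly interchangeable across studies.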