2021 Apr 15;9(4):e18803. doi: 10.2196/18803

Table 5. TOP-Net performance based on transfer learning in the general ward (2-hour forecast range).

| Model | AUROC^a, mean (SD) | Accuracy (%), mean (SD) | Sensitivity (%), mean (SD) | Specificity (%), mean (SD) | F1 score (%), mean (SD) | Precision (%), mean (SD) |
|---|---|---|---|---|---|---|
| TOP-Net | 96.5 (1.92) | 93.7 (1.02) | 95.5 (4.85) | 88.1 (4.28) | 79.3 (4.33) | 68.0 (5.99) |
| CNN^b | 93.8 (2.02) | 95.3 (1.43) | 90.1 (2.88) | 88.1 (8.4) | 83.8 (5.38) | 78.8 (9.85) |
| LSTM^c | 93.2 (1.89) | 92.6 (0.61) | 93.6 (2.76) | 81.5 (5.6) | 73.0 (3.4) | 60.0 (4.89) |
| XGBoost^d | 89.9 (2.1) | 92.9 (1.1) | 83.4 (5.2) | 82.6 (7.9) | 73.7 (3.7) | 66.6 (6.8) |
| MLP^e | 84.2 (4.1) | 91.0 (0.7) | 75.9 (9.6) | 78.9 (9.1) | 62.6 (2.0) | 54.0 (2.9) |
| Random forest | 87.3 (3.0) | 92.5 (1.0) | 76.6 (5.2) | 86.8 (4.7) | 75.0 (3.7) | 73.8 (4.9) |

^a AUROC: area under the receiver operating characteristic curve.

^b CNN: convolutional neural network.

^c LSTM: long short-term memory.

^d XGBoost: extreme gradient boosting.

^e MLP: multilayer perceptron.
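For reference, the minimal sketch below (not the study's code) shows how the six metrics reported in Table 5 can be computed for a single cross-validation fold with scikit-learn and then summarized as mean (SD) across folds. The 0.5 decision threshold, the 5-fold loop, and all variable names are illustrative assumptions.

```python
# Illustrative sketch of the Table 5 metrics: AUROC, accuracy, sensitivity,
# specificity, F1 score, and precision, aggregated as mean (SD) across folds.
# The threshold, fold count, and synthetic data below are assumptions.
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)


def fold_metrics(y_true, y_prob, threshold=0.5):
    """Compute the six metrics for one fold from labels and predicted probabilities."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "AUROC": roc_auc_score(y_true, y_prob),
        "Accuracy": accuracy_score(y_true, y_pred),
        "Sensitivity": recall_score(y_true, y_pred),   # TP / (TP + FN)
        "Specificity": tn / (tn + fp),                 # TN / (TN + FP)
        "F1 score": f1_score(y_true, y_pred),
        "Precision": precision_score(y_true, y_pred),  # TP / (TP + FP)
    }


def summarize(per_fold):
    """Aggregate per-fold metric dicts into 'mean (SD)' strings on a 0-100 scale."""
    return {
        key: f"{np.mean([m[key] for m in per_fold]) * 100:.1f} "
             f"({np.std([m[key] for m in per_fold]) * 100:.2f})"
        for key in per_fold[0]
    }


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    folds = []
    for _ in range(5):  # assumed 5 cross-validation folds
        y_true = rng.integers(0, 2, size=200)
        y_prob = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=200), 0, 1)
        folds.append(fold_metrics(y_true, y_prob))
    print(summarize(folds))
```

Sensitivity is the recall of the positive class; specificity is derived from the confusion matrix because scikit-learn has no dedicated function for it.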