Table 2. Models, AUC (95% CI)*, by prediction setting (used variables: predicted outcome).

| Model | Baseline: post-mTICI | Baseline: mRS | All variables: mRS |
|---|---|---|---|
| Super learner | 0.55 (0.54–0.56) | 0.79 (0.79–0.80) | 0.90 (0.90–0.91) |
| Random forests | 0.55 (0.55–0.56) | 0.79 (0.79–0.79) | 0.91 (0.90–0.91) |
| Support vector machine | 0.53 (0.53–0.54) | 0.78 (0.77–0.78) | 0.88 (0.88–0.89) |
| Neural network | 0.53 (0.53–0.54) | 0.77 (0.76–0.77) | 0.88 (0.88–0.89) |
| LR: automated selection** | | | |
| Random forests | 0.55 (0.55–0.56) | 0.78 (0.78–0.78) | 0.90 (0.90–0.90) |
| LASSO | NA¥ | 0.78 (0.78–0.79) | 0.90 (0.89–0.90) |
| Elastic net | NA¥ | 0.77 (0.77–0.78) | 0.89 (0.88–0.89) |
| Backward elimination | 0.57 (0.57–0.58) | 0.78 (0.77–0.78) | 0.90 (0.89–0.90) |
| LR: prior knowledge‡ | 0.55 (0.55–0.58) | 0.78 (0.78–0.79) | 0.90 (0.90–0.90) |
*Model discrimination is assessed as the mean area under the receiver operating characteristic curve (AUC) across all outer cross-validation folds.
**Logistic regression (LR) using automated variable selection methods.
¥Variable selection was not possible, likely due to an insufficient signal-to-noise ratio.
‡Logistic regression using variables selected based on prior knowledge.
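For orientation, the sketch below illustrates one way to compute the discrimination metric reported above (mean AUC across outer cross-validation folds). It is a minimal example, not the study's pipeline: it uses scikit-learn with a random-forest classifier on a synthetic feature matrix `X` and binary outcome `y`, and omits the inner-loop hyperparameter tuning a full nested cross-validation would include.

```python
# Minimal sketch (assumed setup, not the study's actual pipeline):
# mean AUC across outer cross-validation folds.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Synthetic stand-in for the feature matrix and binary outcome.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_aucs = []
for train_idx, test_idx in outer_cv.split(X, y):
    model = RandomForestClassifier(n_estimators=500, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    # Predicted probability of the positive class on the held-out outer fold.
    p = model.predict_proba(X[test_idx])[:, 1]
    fold_aucs.append(roc_auc_score(y[test_idx], p))

print(f"Mean AUC across outer folds: {np.mean(fold_aucs):.2f} "
      f"(SD {np.std(fold_aucs):.2f})")
```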