Table 6. Testing data results.
| Model | Sensitivity | Specificity | Accuracy | Balanced accuracy | AUROC^a |
| --- | --- | --- | --- | --- | --- |
| Ensemble-based DNN^b with top 20 features | 0.8324 | 0.8678 | 0.8400 | 0.8501 | 0.9216 |
| Ensemble-based DNN with all 47 features | 0.8160 | 0.8792 | 0.8307 | 0.8476 | 0.9211 |
| DNN single model with top 20 features | 0.8313 | 0.8662 | 0.8394 | 0.8488 | 0.9198 |
| DNN single model with all 47 features | 0.8345 | 0.8559 | 0.8395 | 0.8452 | 0.9184 |
| SVM^c | 0.8359 | 0.8564 | 0.8407 | 0.8461 | 0.9204 |
| LR^d | 0.8398 | 0.8482 | 0.8418 | 0.8440 | 0.9208 |
| RF^e | 0.8463 | 0.8401 | 0.8448 | 0.8432 | 0.9187 |
| LGBM^f | 0.8359 | 0.8531 | 0.8399 | 0.8445 | 0.9178 |
| GBM^g | 0.8329 | 0.8548 | 0.8380 | 0.8438 | 0.9073 |
| AdaBoost^h | 0.8393 | 0.8417 | 0.8399 | 0.8405 | 0.9169 |
| XGBoost^i | 0.8373 | 0.8482 | 0.8399 | 0.8428 | 0.9172 |
| Provisional diagnosis (clinicians) | 1.000 | 0.000 | 0.7673 | 0.5000 | N/A^j |
^a AUROC: area under the receiver operating characteristic curve.
^b DNN: deep neural network.
^c SVM: support vector machine.
^d LR: logistic regression.
^e RF: random forest.
^f LGBM: light gradient boosting machine.
^g GBM: gradient boosting machine.
^h AdaBoost: adaptive boosting.
^i XGBoost: extreme gradient boosting.
^j N/A: not applicable.
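For reference, the sketch below shows one way to compute the five metrics reported in Table 6 from binary labels and predicted probabilities using scikit-learn. This is an illustration, not the authors' code: the function name `table6_metrics`, the array names `y_true` and `y_prob`, and the 0.5 decision threshold are assumptions. Note that balanced accuracy is the arithmetic mean of sensitivity and specificity, which matches the values in the table (e.g., (0.8324 + 0.8678) / 2 = 0.8501 for the top row).

```python
# Minimal sketch (not the authors' implementation) of the metrics in Table 6.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def table6_metrics(y_true, y_prob, threshold=0.5):
    """Return sensitivity, specificity, accuracy, balanced accuracy, and AUROC.

    y_true: array of 0/1 ground-truth labels.
    y_prob: array of predicted probabilities for the positive class.
    threshold: decision cutoff; 0.5 is an assumption, not taken from the paper.
    """
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)               # true positive rate (recall)
    specificity = tn / (tn + fp)               # true negative rate
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    balanced_accuracy = (sensitivity + specificity) / 2
    auroc = roc_auc_score(y_true, y_prob)      # threshold-free ranking metric
    return sensitivity, specificity, accuracy, balanced_accuracy, auroc
```

Because AUROC is computed from the probabilities rather than the thresholded predictions, it is unaffected by the cutoff choice; this is also why no AUROC is reported for the clinicians' provisional diagnosis, which is a binary judgment rather than a score.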