Front Pharmacol. 2022 Jun 29;13:896104. doi: 10.3389/fphar.2022.896104

TABLE 5.

Eight algorithms’ model performance in the test cohort.

| Model | AUROC (95% CI) | Sensitivity | Specificity | AUPR | Precision | Recall | F1 |
|---|---|---|---|---|---|---|---|
| RF | 0.805 (0.705, 0.906) | 0.760 | 0.773 | 0.550 | 0.528 | 0.760 | 0.623 |
| XGBoost | 0.754 (0.645, 0.863) | 0.760 | 0.720 | 0.323 | 0.475 | 0.760 | 0.585 |
| DT | 0.650 (0.525, 0.776) | 0.480 | 0.853 | 0.150 | 0.522 | 0.480 | 0.500 |
| GBDT | 0.832 (0.744, 0.920) | 0.720 | 0.853 | 0.557 | 0.621 | 0.720 | 0.667 |
| LightGBM | 0.750 (0.635, 0.864) | 0.840 | 0.640 | 0.485 | 0.438 | 0.840 | 0.575 |
| AdaBoost | 0.782 (0.678, 0.886) | 0.640 | 0.867 | 0.538 | 0.615 | 0.640 | 0.627 |
| CatBoost | 0.817 (0.725, 0.909) | 0.720 | 0.813 | 0.462 | 0.563 | 0.720 | 0.632 |
| Ensemble learning model | 0.797 (0.694, 0.899) | 0.720 | 0.840 | 0.537 | 0.600 | 0.720 | 0.655 |
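As a quick consistency check on the table, F1 can be recomputed from the reported precision and recall, since F1 is their harmonic mean. This is a minimal sketch using values copied from the table above; tiny mismatches (about 0.001, e.g. for LightGBM) are expected because the reported precision and recall are themselves rounded to three decimals.

```python
# Each entry: (precision, recall, reported F1) from Table 5.
rows = {
    "RF": (0.528, 0.760, 0.623),
    "XGBoost": (0.475, 0.760, 0.585),
    "DT": (0.522, 0.480, 0.500),
    "GBDT": (0.621, 0.720, 0.667),
    "LightGBM": (0.438, 0.840, 0.575),
    "AdaBoost": (0.615, 0.640, 0.627),
    "CatBoost": (0.563, 0.720, 0.632),
    "Ensemble": (0.600, 0.720, 0.655),
}

for name, (precision, recall, reported_f1) in rows.items():
    # F1 = 2PR / (P + R), the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall)
    # Allow a small tolerance for rounding in the reported inputs.
    assert abs(f1 - reported_f1) <= 0.002, name
    print(f"{name}: computed F1 = {f1:.3f}, reported = {reported_f1:.3f}")
```

All eight rows agree to within rounding, which also confirms that the Recall column duplicates the sensitivity (SEN) column, as expected for the positive class.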