Table 2.
Performance of different ML algorithms in training dataset I
Algorithm | AUROC Mean | AUROC SD | AUPRC Mean | AUPRC SD |
---|---|---|---|---|
GNB | 0.966 | 0.013 | 0.805 | 0.057 |
LR | 0.968 | 0.012 | 0.892 | 0.027 |
SVM | 0.974 | 0.008 | 0.884 | 0.024 |
RF | 0.963 | 0.009 | 0.823 | 0.033 |
XGB | 0.973 | 0.012 | 0.905 | 0.022 |
LGB | 0.946 | 0.016 | 0.793 | 0.040 |
CAT | 0.968 | 0.011 | 0.873 | 0.028 |
AUROC Area Under the Receiver Operating Characteristic curve, AUPRC Area Under the Precision-Recall Curve, GNB Gaussian Naive Bayes classifier, LR logistic regression, SVM support vector machine, RF random forest, XGB eXtreme Gradient Boosting classifier, LGB Light Gradient Boosting Machine, CAT CatBoost
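For illustration, the sketch below shows how per-fold AUROC and AUPRC means and standard deviations of the kind reported in Table 2 are typically obtained with cross-validation. It is not the authors' code: the synthetic dataset, the 5-fold stratified split, the model hyperparameters, and the use of average precision as the AUPRC estimate are all assumptions, and the gradient-boosting models (XGB, LGB, CAT) are only indicated in a comment because they require additional packages.

```python
# Minimal sketch (assumed workflow, not the study's actual pipeline):
# estimate mean and SD of AUROC / AUPRC across cross-validation folds.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for "training dataset I" (imbalanced binary outcome).
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

models = {
    "GNB": GaussianNB(),
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),  # probability estimates needed for AUROC/AUPRC
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    # XGB / LGB / CAT would be added analogously via xgboost, lightgbm, catboost.
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # fold count assumed
scoring = {"AUROC": "roc_auc", "AUPRC": "average_precision"}

for name, model in models.items():
    scores = cross_validate(model, X, y, cv=cv, scoring=scoring)
    print(f"{name}: "
          f"AUROC {scores['test_AUROC'].mean():.3f} (SD {scores['test_AUROC'].std():.3f}), "
          f"AUPRC {scores['test_AUPRC'].mean():.3f} (SD {scores['test_AUPRC'].std():.3f})")
```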