Diagnostics. 2023 May 26;13(11):1863. doi: 10.3390/diagnostics13111863

Table 4.

Performance summary of the algorithms used in the present study.

Model Accuracy Sensitivity Specificity F1 Score AUC
SVM 0.52 (0.44–0.56) 0.52 (0.45–0.62) 0.52 (0.37–0.60) 0.49 (0.46–0.54) 0.53 (0.44–0.56)
LGBM 0.59 (0.52–0.69) 0.57 (0.41–0.79) 0.60 (0.29–0.89) 0.56 (0.52–0.60) 0.59 (0.52–0.67)
XGB 0.59 (0.48–0.69) 0.55 (0.42–0.69) 0.61 (0.40–0.86) 0.55 (0.44–0.64) 0.56 (0.49–0.68)
XGB based on RF 0.61 (0.50–0.67) 0.59 (0.46–0.66) 0.62 (0.38–0.76) 0.58 (0.51–0.62) 0.61 (0.51–0.66)
CatBoost 0.63 (0.56–0.69) 0.70 (0.52–0.86) 0.58 (0.44–0.80) 0.63 (0.56–0.74) 0.60 (0.57–0.70)
iRF 0.76 (0.70–0.80) 0.69 (0.62–0.77) 0.83 (0.65–0.94) 0.73 (0.71–0.76) 0.77 (0.73–0.83)

Values are presented as point estimate (range). SVM, support vector machine; LGBM, light gradient boosting machine; XGB, extreme gradient boosting; RF, random forest; CatBoost, categorical boosting; iRF, improved random forest; AUC, area under the receiver operating characteristic curve.
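For reference, the accuracy, sensitivity, specificity, and F1 score tabulated above all follow from the binary confusion matrix. The sketch below shows the standard definitions; the counts are hypothetical illustrations, not data from the study, and AUC is omitted because it requires the full set of predicted scores rather than a single confusion matrix.

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                  # recall / true positive rate
    specificity = tn / (tn + fp)                  # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # overall fraction correct
    precision = tp / (tp + fp)                    # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {
        "accuracy": accuracy,
        "sensitivity": sensitivity,
        "specificity": specificity,
        "f1": f1,
    }

# Hypothetical counts for illustration only (not from the study):
metrics = binary_metrics(tp=80, fp=10, tn=90, fn=20)
# → accuracy 0.85, sensitivity 0.80, specificity 0.90, F1 ≈ 0.842
```

The ranges in the table would typically come from repeating such a calculation across resampled test sets (e.g. cross-validation folds or bootstrap replicates) and reporting the spread around the point estimate.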