Front Med (Lausanne). 2025 Aug 12;12:1638097. doi: 10.3389/fmed.2025.1638097

Table 4.

Comparison of the performance of the six models in the testing set.

Model      Accuracy            AUC                 F1                  Recall              Sensitivity         Specificity
KNN        0.78 (0.74, 0.81)   0.82 (0.78, 0.86)   0.85 (0.82, 0.87)   0.91 (0.88, 0.94)   0.91 (0.88, 0.94)   0.50 (0.42, 0.57)
LR         0.83 (0.80, 0.86)   0.90 (0.87, 0.93)   0.88 (0.86, 0.91)   0.94 (0.92, 0.97)   0.94 (0.92, 0.97)   0.61 (0.53, 0.68)
NNET       0.74 (0.70, 0.78)   0.81 (0.77, 0.85)   0.82 (0.78, 0.85)   0.84 (0.80, 0.88)   0.84 (0.80, 0.88)   0.53 (0.45, 0.61)
RF         0.84 (0.81, 0.87)   0.92 (0.89, 0.94)   0.89 (0.87, 0.91)   0.95 (0.93, 0.97)   0.95 (0.93, 0.97)   0.62 (0.55, 0.69)
SVM        0.82 (0.79, 0.85)   0.89 (0.86, 0.92)   0.87 (0.84, 0.89)   0.86 (0.83, 0.90)   0.86 (0.83, 0.90)   0.74 (0.67, 0.81)
XGBoost    0.86 (0.83, 0.89)   0.91 (0.89, 0.94)   0.90 (0.88, 0.92)   0.95 (0.92, 0.97)   0.95 (0.92, 0.97)   0.68 (0.61, 0.75)

LR, logistic regression; RF, random forest; XGBoost, extreme gradient boosting; SVM, support vector machine; KNN, k-nearest neighbor; NNET, neural network.
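
For readers who want to reproduce this style of evaluation, the sketch below (Python with NumPy and scikit-learn; the helper name bootstrap_metrics is ours for illustration) shows one common way to compute the tabulated metrics together with bootstrap confidence intervals on a held-out test set. The table does not state how its intervals were obtained, so this is an illustrative sketch under the assumption of binary 0/1 labels and percentile bootstrap intervals, not the authors' implementation. Note that recall and sensitivity are the same quantity for the positive class, which is why those two columns are identical.

import numpy as np
from sklearn.metrics import (accuracy_score, roc_auc_score, f1_score,
                             recall_score, confusion_matrix)

def bootstrap_metrics(y_true, y_pred, y_prob, n_boot=1000, seed=0):
    """Point estimates and percentile bootstrap CIs for the metrics in Table 4.

    Assumes binary labels coded 0/1, hard predictions y_pred, and predicted
    probabilities for the positive class y_prob (needed for the AUC).
    """
    rng = np.random.default_rng(seed)
    y_true, y_pred, y_prob = map(np.asarray, (y_true, y_pred, y_prob))

    def metrics(idx):
        t, p, s = y_true[idx], y_pred[idx], y_prob[idx]
        tn, fp, fn, tp = confusion_matrix(t, p, labels=[0, 1]).ravel()
        return {
            "Accuracy": accuracy_score(t, p),
            "AUC": roc_auc_score(t, s),
            "F1": f1_score(t, p),
            "Recall/Sensitivity": recall_score(t, p),  # identical by definition
            "Specificity": tn / (tn + fp),
        }

    point = metrics(np.arange(len(y_true)))

    boot_results = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        if len(np.unique(y_true[idx])) < 2:
            continue  # AUC is undefined if a bootstrap sample has only one class
        boot_results.append(metrics(idx))

    summary = {}
    for name, estimate in point.items():
        vals = np.array([b[name] for b in boot_results])
        lo, hi = np.percentile(vals, [2.5, 97.5])
        summary[name] = (estimate, lo, hi)
    return summary

Applied to each fitted model's test-set predictions, such a routine yields one row of the table per model in the form estimate (lower, upper).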