
Table 4.

Comparison of the prediction performances of different models for LNM.

Models      AUC (95% CI)          Accuracy  Precision  F1-score  Recall  Specificity

Training set
  XGBoost   0.791 (0.776–0.806)   0.739     0.690      0.663     0.638   0.894
  SVM       0.733 (0.707–0.750)   0.732     0.696      0.640     0.593   0.944
  KNN       0.719 (0.702–0.737)   0.738     0.727      0.653     0.592   0.961
  LR        0.747 (0.733–0.763)   0.728     0.671      0.646     0.623   0.888
  RF        0.772 (0.757–0.787)   0.734     0.713      0.645     0.589   0.956
  LightGBM  0.772 (0.757–0.787)   0.736     0.691      0.653     0.619   0.916

Test set
  XGBoost   0.829 (0.818–0.843)   0.770     0.738      0.706     0.677   0.912
  SVM       0.791 (0.778–0.805)   0.755     0.755      0.681     0.620   0.960
  KNN       0.634 (0.618–0.653)   0.715     0.666      0.607     0.558   0.955
  LR        0.795 (0.780–0.808)   0.748     0.706      0.675     0.647   0.905
  RF        0.821 (0.808–0.833)   0.740     0.745      0.659     0.591   0.969
  LightGBM  0.826 (0.813–0.839)   0.759     0.729      0.687     0.650   0.925

Abbreviations: XGBoost, extreme gradient boosting; SVM, support vector machine; KNN, k-nearest neighbor; LR, logistic regression; RF, random forest; LightGBM, light gradient boosting machine; LNM, lymph node metastasis; AUC, area under the receiver operating characteristic curve; CI, confidence interval.
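
For readers who want to see how the columns of Table 4 are typically obtained, the following is a minimal sketch, not the authors' code: it trains several of the listed classifier types on synthetic placeholder data and computes AUC, accuracy, precision, F1-score, recall, and specificity with scikit-learn. The dataset, feature set, hyperparameters, and train/test split here are assumptions; the 95% CIs in the table would additionally require resampling (e.g., bootstrapping the test-set AUC), which is not shown.

```python
# Sketch of computing the Table 4 metrics for a binary LNM classifier.
# Synthetic data stands in for the study's clinical features (assumption).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (roc_auc_score, accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

# Placeholder data: 20 features, ~30% positive (LNM) class.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.7, 0.3], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Four of the six model families from the table that ship with scikit-learn;
# XGBoost and LightGBM would plug in the same way via their sklearn wrappers.
models = {
    "LR":  LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(probability=True),   # probability=True enables predict_proba for AUC
    "RF":  RandomForestClassifier(random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    proba = model.predict_proba(X_test)[:, 1]   # predicted probability of LNM
    pred = model.predict(X_test)
    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    specificity = tn / (tn + fp)                # not provided directly by sklearn
    print(f"{name}: AUC={roc_auc_score(y_test, proba):.3f} "
          f"Acc={accuracy_score(y_test, pred):.3f} "
          f"Prec={precision_score(y_test, pred):.3f} "
          f"F1={f1_score(y_test, pred):.3f} "
          f"Recall={recall_score(y_test, pred):.3f} "
          f"Spec={specificity:.3f}")
```

Note that specificity is derived from the confusion matrix because scikit-learn has no dedicated scorer for it, while recall (sensitivity) and the other columns map directly onto standard metric functions evaluated on the held-out test set.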