Front Immunol. 2025 Sep 1;16:1630863. doi: 10.3389/fimmu.2025.1630863

Table 5.

The performance of the ML models using the best set of features on the validation dataset.

Feature name   ML model   Sensitivity (%)   Specificity (%)   Accuracy (%)   AUC    Kappa   MCC
CeTD           XGB        61.02             75.76             66.30          0.75   0.33    0.35
TPC            ET         62.71             69.70             65.22          0.74   0.30    0.31
ALLCOMP        LR         57.63             75.76             64.13          0.73   0.30    0.32
APAAC          ET         61.02             63.64             61.96          0.72   0.23    0.24
DPC            KNN        61.02             69.70             64.13          0.71   0.28    0.30
AAC            ET         59.32             60.61             59.78          0.70   0.19    0.19
BTC            GNB        50.85             75.76             59.78          0.69   0.23    0.26
DDR            ET         52.54             75.76             60.87          0.67   0.25    0.28
CTC            SVC        62.71             66.67             64.13          0.65   0.27    0.28
PRI            LR         55.93             69.70             60.87          0.63   0.23    0.25
AABP           KNN        55.93             51.52             54.35          0.59   0.07    0.07

XGB, extreme gradient boosting; ET, extra trees; LR, logistic regression; KNN, k-nearest neighbors; GNB, Gaussian naïve Bayes; SVC, support vector classifier; AUC, area under the curve; Kappa, Cohen’s kappa coefficient; MCC, Matthews correlation coefficient.
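For readers reproducing such an evaluation, the sketch below shows one way the metrics reported in Table 5 (sensitivity, specificity, accuracy, AUC, Cohen’s kappa, MCC) can be computed for a binary classifier with scikit-learn. It is not the authors’ code: the feature matrix is synthetic, and the ExtraTreesClassifier stands in for whichever model and feature set is being scored.

```python
# Minimal sketch (assumed, not from the paper): scoring a binary classifier
# on a held-out validation split with the metrics used in Table 5.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import (accuracy_score, roc_auc_score,
                             cohen_kappa_score, matthews_corrcoef,
                             confusion_matrix)

# Synthetic stand-in for a sequence-derived feature matrix (e.g., AAC, DPC,
# APAAC descriptors in the study); labels are the two peptide classes.
X, y = make_classification(n_samples=500, n_features=50, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = ExtraTreesClassifier(random_state=0).fit(X_train, y_train)
y_pred = clf.predict(X_val)                 # hard labels for threshold metrics
y_prob = clf.predict_proba(X_val)[:, 1]     # positive-class scores for AUC

tn, fp, fn, tp = confusion_matrix(y_val, y_pred).ravel()
sensitivity = 100 * tp / (tp + fn)          # percentages, as in Table 5
specificity = 100 * tn / (tn + fp)
accuracy    = 100 * accuracy_score(y_val, y_pred)
auc         = roc_auc_score(y_val, y_prob)
kappa       = cohen_kappa_score(y_val, y_pred)
mcc         = matthews_corrcoef(y_val, y_pred)

print(f"Sens {sensitivity:.2f}  Spec {specificity:.2f}  Acc {accuracy:.2f}  "
      f"AUC {auc:.2f}  Kappa {kappa:.2f}  MCC {mcc:.2f}")
```

Note that AUC is threshold-independent (it uses the predicted probabilities), while the remaining metrics depend on the chosen decision threshold, which is why a model can rank well by AUC yet show modest kappa or MCC, as several rows of the table illustrate.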