2023 Nov 17;23:265. doi: 10.1186/s12911-023-02344-8

Table 2.

Performance results of the classifiers

| Model category | Model name | FPR | TPR | F0.5 score | ROC AUC ᵃ | Recall ᵇ | Precision ᵇ | Kappa (−1, 1) |
|---|---|---|---|---|---|---|---|---|
| Approach #1: Clinical data only (on the set of 30 clinical labels selected by the ExtraTree classifier) | Gaussian NB | 0.49 | 0.70 | 0.57 | 0.60 | 0.61 | 0.59 | 0.19 |
| | Random Forest | 0.07 | 0.56 | 0.60 | 0.74 | 0.74 | 0.64 | 0.27 |
| | Gradient Boosting | 0.14 | 0.70 | 0.66 | 0.78 | 0.78 | 0.68 | 0.37 |
| | XGBRF | 0.16 | 0.65 | 0.62 | 0.72 | 0.73 | 0.64 | 0.28 |
| | k-nearest neighbors | 0.47 | 0.58 | 0.50 | 0.55 | 0.56 | 0.52 | 0.05 |
| | SVM | 0.18 | 0.18 | 0.32 | 0.50 | 0.50 | 0.42 | 0 |
| | MLP | 0.40 | 0.40 | 0.22 | 0.50 | 0.50 | 0.20 | 0 |
| Approach #2: CTs only | 3D-CNN | 0.83 | 0.84 | 0.63 | 0.57 | 0.57 | 0.78 | 0.22 |
| | 3D Swin Transformer | 0.38 | 0.89 | 0.75 | 0.75 | 0.75 | 0.75 | 0.49 |
| Approach #3: Data fusion | Terminal 3D-CNN on CTs + 30 labels | 0.36 | 0.90 | 0.75 | 0.76 | 0.76 | 0.76 | 0.51 |
| | Medial 3D-CNN on CTs + 67 labels | 0.65 | 0.98 | 0.70 | 0.66 | 0.66 | 0.67 | 0.37 |
| | 3D Swin Transformer on CTs + 30 labels | 0.35 | 0.91 | 0.78 | 0.78 | 0.78 | 0.80 | 0.55 |
| | 3D Swin Transformer on CTs + 67 labels | 0.40 | 0.95 | 0.82 | 0.77 | 0.77 | 0.83 | 0.60 |

ᵃ ROC: Receiver Operating Characteristic curve; AUC: Area under the ROC curve

ᵇ Computed with the macro-averaging evaluation method
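For readers reproducing evaluations like those in the table, the reported metrics can all be derived from a binary confusion matrix. The sketch below is illustrative only: the function name and the example counts are made up and are not the study's data; it simply shows how FPR, TPR, F0.5, macro-averaged precision/recall, and Cohen's kappa are computed.

```python
# Illustrative sketch: deriving the Table 2 metrics from a binary
# confusion matrix. All names and counts here are hypothetical.

def table2_metrics(tp, fp, tn, fn):
    """Return (FPR, TPR, F0.5, macro precision, macro recall, kappa)."""
    fpr = fp / (fp + tn)                  # false positive rate
    tpr = tp / (tp + fn)                  # true positive rate (sensitivity)

    # F0.5 weights precision more heavily than recall (beta = 0.5).
    p_pos = tp / (tp + fp)
    r_pos = tp / (tp + fn)
    f05 = (1 + 0.5**2) * p_pos * r_pos / (0.5**2 * p_pos + r_pos)

    # Macro-averaging (footnote b): compute the metric per class,
    # then take the unweighted mean over the two classes.
    p_neg = tn / (tn + fn)
    r_neg = tn / (tn + fp)
    precision_macro = (p_pos + p_neg) / 2
    recall_macro = (r_pos + r_neg) / 2

    # Cohen's kappa: chance-corrected agreement, ranging over (-1, 1).
    n = tp + fp + tn + fn
    po = (tp + tn) / n                    # observed agreement
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n**2
    kappa = (po - pe) / (1 - pe)
    return fpr, tpr, f05, precision_macro, recall_macro, kappa

# Example with made-up counts (tp=4, fp=1, tn=4, fn=1):
print(table2_metrics(tp=4, fp=1, tn=4, fn=1))
```

Note that a low FPR alongside a modest TPR (e.g. the Random Forest row) and a high TPR alongside a high FPR (e.g. the 3D-CNN row) can yield similar AUC values, which is why the table reports the full set of metrics rather than AUC alone.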