
Table 2. Performance of the different models in the training and validation sets.

| Models' or radiologists' performance | AUC | ACC | Sensitivity | Cut-off | Specificity | PPV | NPV |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LR | 0.725 (0.716) | 0.672 (0.660) | 0.788 (0.652) | 0.390 | 0.586 (0.667) | 0.586 (0.600) | 0.788 (0.714) |
| RF | 0.800 (0.711) | 0.672 (0.623) | 0.558 (0.391) | 0.378 | 0.757 (0.800) | 0.630 (0.600) | 0.697 (0.632) |
| SVM | 0.733 (0.696) | 0.664 (0.604) | 0.481 (0.391) | 0.409 | 0.800 (0.767) | 0.641 (0.562) | 0.675 (0.622) |
| DT | 0.824 (0.695) | 0.730 (0.679) | 0.923 (0.826) | 0.541 | 0.586 (0.567) | 0.623 (0.594) | 0.911 (0.810) |
| Bayes | 0.715 (0.690) | 0.680 (0.604) | 0.519 (0.261) | 0.267 | 0.800 (0.867) | 0.659 (0.600) | 0.691 (0.605) |
| KNN | 0.816 (0.693) | 0.697 (0.660) | 0.654 (0.609) | 0.400 | 0.729 (0.700) | 0.642 (0.609) | 0.739 (0.700) |
| AdaBoost | 0.705 (0.624) | 0.730 (0.660) | 0.538 (0.348) | 0.474 | 0.871 (0.900) | 0.757 (0.727) | 0.718 (0.643) |
| XGBoost | 0.823 (0.688) | 0.836 (0.698) | 0.731 (0.609) | 0.423 | 0.914 (0.767) | 0.864 (0.667) | 0.821 (0.719) |
| GBDT | 0.625 (0.510) | 0.680 (0.566) | 0.250 (0.087) | 0.441 | 1.000 (0.933) | 1.000 (0.500) | 0.642 (0.571) |
| Radiomics + CNN | 0.885 (0.812) | 0.811 (0.774) | 0.865 (0.826) | 0.425 | 0.771 (0.733) | 0.738 (0.704) | 0.885 (0.846) |
| Radiologist 1 | – | 0.757 | 0.739 | – | 0.768 | 0.675 | 0.819 |
| Radiologist 2 | – | 0.811 | 0.800 | – | 0.818 | 0.750 | 0.857 |
| Radiologist 3 | – | 0.789 | 0.753 | – | 0.800 | 0.725 | 0.822 |
| 3D CNN | 0.874 (0.709) | 0.862 (0.717) | 0.786 (0.767) | 0.495 | 0.962 (0.652) | 0.965 (0.742) | 0.773 (0.682) |
| nnU-Net | 0.922 (0.835) | 0.919 (0.830) | 0.900 (0.800) | 0.506 | 0.943 (0.870) | 0.955 (0.889) | 0.877 (0.769) |

LR, logistic regression; RF, random forest; SVM, support vector machine; DT, decision tree; KNN, k-nearest neighbor; GBDT, gradient boosting decision tree; CNN, convolutional neural network; AUC, area under the curve; ACC, accuracy; PPV, positive predictive value; NPV, negative predictive value. Values are given for the training set, with the corresponding validation set values in parentheses; radiologists' readings are reported without an AUC or cut-off.
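For orientation, the sketch below (not the authors' code) shows how the metrics in Table 2 can be computed in Python with scikit-learn from predicted probabilities and binary labels. The choice of Youden's J statistic for the cut-off, and the names y_train, y_val, X_train, X_val, and model, are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: per-model metrics at a probability cut-off.
# Assumption: the cut-off is chosen on the training set (here by Youden's J)
# and then reused unchanged on the validation set.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve, confusion_matrix

def evaluate(y_true, y_prob, cutoff=None):
    """Return AUC, ACC, sensitivity, specificity, PPV, NPV and the cut-off used."""
    auc = roc_auc_score(y_true, y_prob)
    if cutoff is None:
        # Youden's J: cut-off maximizing sensitivity + specificity - 1
        fpr, tpr, thresholds = roc_curve(y_true, y_prob)
        cutoff = thresholds[np.argmax(tpr - fpr)]
    y_pred = (y_prob >= cutoff).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "AUC": auc,
        "ACC": (tp + tn) / (tp + tn + fp + fn),
        "Sensitivity": tp / (tp + fn),
        "Specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "Cut-off": cutoff,
    }

# Hypothetical usage: derive the cut-off on the training split, reuse it on validation.
# train_metrics = evaluate(y_train, model.predict_proba(X_train)[:, 1])
# val_metrics = evaluate(y_val, model.predict_proba(X_val)[:, 1],
#                        cutoff=train_metrics["Cut-off"])
```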