Table 3.
The best-performing classifier for each deep model, with training- and test-set performance metrics.
| Model | Best-performing classifier | Dataset | AUC | 95% CI | Sensitivity | Specificity | PPV | NPV |
|---|---|---|---|---|---|---|---|---|
| ViT | MLP | train | 0.80 | 0.74-0.85 | 0.59 | 0.86 | 0.78 | 0.71 |
| ViT | MLP | test | 0.78 | 0.67-0.89 | 0.50 | 0.79 | 0.67 | 0.66 |
| VGG16 | MLP | train | 0.85 | 0.80-0.90 | 0.66 | 0.82 | 0.76 | 0.74 |
| VGG16 | MLP | test | 0.79 | 0.67-0.90 | 0.75 | 0.79 | 0.75 | 0.79 |
| ShuffleNet_v2 | SVM | train | 0.92 | 0.89-0.95 | 0.79 | 0.92 | 0.89 | 0.84 |
| ShuffleNet_v2 | SVM | test | 0.81 | 0.70-0.92 | 0.55 | 0.92 | 0.84 | 0.71 |
| ResNet18 | MLP | train | 0.87 | 0.82-0.91 | 0.77 | 0.81 | 0.77 | 0.80 |
| ResNet18 | MLP | test | 0.87 | 0.78-0.96 | 0.83 | 0.74 | 0.73 | 0.83 |
| MobileNet_v2 | MLP | train | 0.83 | 0.78-0.88 | 0.63 | 0.85 | 0.78 | 0.73 |
| MobileNet_v2 | MLP | test | 0.74 | 0.62-0.87 | 0.62 | 0.79 | 0.72 | 0.71 |
| MnasNet-0.5 | LightGBM | train | 0.92 | 0.89-0.96 | 0.75 | 0.91 | 0.88 | 0.81 |
| MnasNet-0.5 | LightGBM | test | 0.75 | 0.63-0.88 | 0.41 | 0.88 | 0.75 | 0.64 |
| GoogleNet | SVM | train | 0.93 | 0.90-0.96 | 0.74 | 0.91 | 0.88 | 0.81 |
| GoogleNet | SVM | test | 0.80 | 0.68-0.91 | 0.62 | 0.77 | 0.69 | 0.70 |
| DenseNet121 | SVM | train | 0.96 | 0.94-0.98 | 0.70 | 0.80 | 0.75 | 0.76 |
| DenseNet121 | SVM | test | 0.75 | 0.63-0.87 | 0.62 | 0.77 | 0.69 | 0.70 |
| AlexNet | MLP | train | 0.87 | 0.82-0.91 | 0.73 | 0.88 | 0.84 | 0.80 |
| AlexNet | MLP | test | 0.84 | 0.73-0.94 | 0.72 | 0.88 | 0.84 | 0.79 |
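The metrics in Table 3 (AUC: area under the ROC curve; CI: confidence interval; PPV: positive predictive value; NPV: negative predictive value) can all be derived from binary labels and classifier scores. The sketch below is illustrative only and is not the authors' code: it assumes a fixed decision threshold for the confusion-matrix metrics and a percentile bootstrap for the AUC 95% CI, since the exact computation is not specified in this table.

```python
# Illustrative sketch (assumed pipeline): metrics of the kind reported in Table 3,
# computed from true binary labels and classifier scores with scikit-learn.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix


def table3_metrics(y_true, y_score, threshold=0.5, n_boot=1000, seed=0):
    """Return AUC, bootstrap 95% CI, sensitivity, specificity, PPV, and NPV."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    y_pred = (y_score >= threshold).astype(int)  # assumed fixed threshold

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value

    auc = roc_auc_score(y_true, y_score)

    # Percentile bootstrap for the AUC confidence interval (assumed method).
    rng = np.random.default_rng(seed)
    boot_aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:  # resample must contain both classes
            continue
        boot_aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    ci_low, ci_high = np.percentile(boot_aucs, [2.5, 97.5])

    return {
        "AUC": auc,
        "95% CI": (ci_low, ci_high),
        "Sensitivity": sensitivity,
        "Specificity": specificity,
        "PPV": ppv,
        "NPV": npv,
    }
```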