Table 6.
| Score | Accuracy | AUC | Sensitivity (%) | Specificity (%) | PPV |
|---|---|---|---|---|---|
| AI model 1 test set (n=167) | | | | | |
| BAD-D | | | | | |
| Weighted average | 0.912 | 0.904 | 85.6 | 95.3 | 0.893 |
| Macro average | 0.892 | 0.865 | 79.9 | 93.1 | 0.744 |
| AI model 2–3 test set (n=62) | | | | | |
| BAD-D | | | | | |
| Weighted average | 0.951 | 0.988 | 91.4 | 98.0 | 0.938 |
| Macro average | 0.935 | 0.956 | 86.3 | 96.2 | 0.778 |
| CBI | | | | | |
| Weighted average | 0.798 | 0.988 | 80.1 | 78.5 | 0.727 |
| Macro average | 0.839 | 0.956 | 55.6 | 86.2 | 0.533 |
| TBI | | | | | |
| Weighted average | 0.951 | 0.988 | 91.4 | 98.0 | 0.938 |
| Macro average | 0.935 | 0.956 | 86.3 | 96.2 | 0.778 |
BAD-D: Belin-Ambrósio enhanced ectasia total deviation display; CBI: Corvis biomechanical index; TBI: Tomographic and biomechanical index; n: number of images; AUC: area under the receiver operating characteristic curve; PPV: positive predictive value; AI: artificial intelligence
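The table reports both weighted and macro averages: the macro average treats every class equally, while the weighted average scales each class's metric by its support (number of images). A minimal sketch of the two aggregation rules, using illustrative values that are not taken from the table:

```python
def macro_average(values):
    """Unweighted mean of a per-class metric across all classes."""
    return sum(values) / len(values)

def weighted_average(values, supports):
    """Mean of a per-class metric, weighted by class support (image count)."""
    total = sum(supports)
    return sum(v * s for v, s in zip(values, supports)) / total

# Hypothetical per-class sensitivities and image counts, for illustration only
sensitivities = [0.95, 0.80, 0.60]
supports = [100, 50, 17]

print(round(macro_average(sensitivities), 3))               # → 0.783
print(round(weighted_average(sensitivities, supports), 3))  # → 0.869
```

When classes are imbalanced, the weighted average is pulled toward the performance on the larger classes, which is why the two rows for each score can differ noticeably (e.g. the CBI macro-average sensitivity is much lower than its weighted average).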