2024 Jun 26;14(7):690. doi: 10.3390/jpm14070690

Table 2.

Current studies on the validation of artificial intelligence technology.

| Author | AI Device/Software | Condition | Dataset Used for Deep Learning | Test Sample | Outcome | Area under the ROC Curve |
|---|---|---|---|---|---|---|
| Zheng et al. [87] | semi-supervised generative adversarial networks (GANs) | retinal disorders | 877 OCT images | 107,912 OCT images | semi-supervised GANs outperformed the reference supervised DL model | 0.99 |
| Adithya et al. [88] | offline deep learning algorithm (DLA) | vitreoretinal abnormalities (VRA) | 4319 ocular ultrasound images | 421 ocular ultrasound images | DLA showed high sensitivity in detecting retinal detachment (97.4%) and choroidal detachment (100%) | 0.939 |
| Liu et al. [89] | deep learning system (DLS) for diabetic macular edema | diabetic retinopathy | 4295 OCT images | matched images from the same dataset | DLS had 80% specificity and 81% sensitivity, versus 59% specificity and 70% sensitivity for experienced graders | 0.88 (vs. 0.80 for the reference software) |
| Bai et al. [90] | retinopathy of prematurity AI (ROP.AI) | plus disease in ROP | unspecified (ROP.AI is proprietary software) | 8052 retinal images | 84% sensitivity, 43% specificity, and 96% negative predictive value | 0.75 |
| Wagner et al. [91] | code-free deep learning classifiers (CFDL) | plus disease in ROP | retinal images from 6141 neonates | 338 retinal images | CFDL models performed comparably to senior pediatric ophthalmologists | 0.989 |
| Kemp et al. [92] | Medios AI software (FOP NM-10) | referable diabetic retinopathy (RDR) | unspecified (Medios AI is proprietary software) | 2327 retinal images | Medios AI compared favorably with an experienced field grader | 0.9648 |