Abstract
Corneal diseases represent a growing public health burden, especially in resource-limited settings lacking access to specialized eye care. Artificial intelligence (AI) offers promising solutions for automating the diagnosis and management of corneal conditions. This narrative review examines the application of AI in corneal diseases, focusing on keratoconus, infectious keratitis, pterygium, dry eye disease, Fuchs endothelial corneal dystrophy, and corneal transplantation. AI models integrating diverse imaging modalities (e.g., corneal topography, slit-lamp, and anterior segment OCT images) and clinical data have demonstrated high diagnostic accuracy, often outperforming human experts. Emerging trends include the incorporation of biomechanical data to enhance keratoconus detection, the use of in vivo confocal microscopy for diagnosing infectious keratitis, and multimodal approaches for comprehensive disease analysis. Additionally, AI has shown potential in predicting disease progression, treatment outcomes, and postoperative complications in corneal transplantation. While challenges remain, such as population heterogeneity, limited external validation, and the “black box” nature of some models, ongoing advances in explainable AI, data augmentation, and regulatory frameworks can help address these limitations.
Keywords: Cornea, Artificial Intelligence, Machine Learning, Deep Learning
Introduction
Corneal diseases represent a growing public health concern due to their increasing prevalence and shifting global demographics (1, 2). The problem is more challenging in resource-limited areas, where a shortage of ophthalmologists hinders the delivery of care to patients with corneal diseases (e.g., keratitis and keratoconus). Given this clinical need, there has been growing interest in developing automated and remote approaches to diagnose and manage these conditions.
Artificial intelligence (AI) has made progress across many areas of medicine, driven by rapid advances in computational power and algorithms. Two major subfields of AI are machine learning (ML) and deep learning (DL) (3). ML involves the development of algorithms that allow computers to learn from and make predictions based on data without explicit programming. There are various types of ML algorithms, each with a distinct approach to learning from data (4). For example, the support vector machine (SVM) is a supervised learning algorithm used for classification and regression tasks. It works by finding the boundary that divides the data classes while keeping the separation as clear and wide as possible (4, 5). Another common algorithm is the random forest (RF), which combines multiple decision trees. Each tree in an RF examines a different subset of the data, and the final prediction is based on the collective output of all trees (6). These algorithms, among others, allow ML systems to improve their performance over time as they are exposed to more data, rendering them valuable tools for tackling complex data.
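To make the two algorithms above concrete, the short sketch below trains an SVM and a random forest on synthetic tabular data using scikit-learn. It is purely illustrative: the features, labels, and sample sizes are hypothetical placeholders, not data from any study discussed in this review.

```python
# A minimal, illustrative sketch: an SVM and a random forest on synthetic
# tabular data (hypothetical stand-ins for, e.g., keratometric indices).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                   # 500 samples, 6 synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic binary labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SVM: finds a maximum-margin boundary between the two classes.
svm = SVC(kernel="rbf").fit(X_train, y_train)

# Random forest: averages the votes of many decision trees, each fit on a
# bootstrap sample of the data with random feature subsets.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print(f"SVM accuracy: {svm.score(X_test, y_test):.2f}")
print(f"RF accuracy:  {rf.score(X_test, y_test):.2f}")
```

The shared fit/score interface, with the classifier swapped in place, illustrates why such models are convenient baselines for tabular corneal indices.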
DL is a subset of ML that involves neural networks (NNs) with multiple layers mimicking the structure of biological neural circuits (3). These NNs, often referred to as deep neural networks (DNNs), are designed to automatically learn hierarchical representations of data (3). The basic building block of a DNN is the artificial neuron, which mimics the behavior of biological neurons. Artificial neural networks (ANNs) are composed of these neurons organized into layers: an input layer that receives data, hidden layers that process it, and an output layer that provides the results (7–9). One common type of ANN in DL is the multilayer perceptron (MLP), in which each neuron takes inputs from the previous layer, applies weights, adds a bias, and uses an activation function to produce its output. An MLP learns by adjusting these weights based on the difference between its predictions and the true outcomes to improve accuracy (9). Different types of DL networks exist for specific tasks, such as convolutional neural networks (CNNs), which are often used for image processing due to their ability to automatically and adaptively learn spatial hierarchies of features through convolutional layers (9). CNNs learn to identify local features such as edges and elementary textures in the initial layers, complex patterns in the mid-level layers, and more global features such as object parts or whole objects in the later layers (9).
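The following minimal PyTorch sketch shows the layer structure just described: an MLP for tabular inputs and a small CNN for single-channel images. The architectures, layer sizes, and inputs are illustrative assumptions, not any model from the cited literature.

```python
# Illustrative sketch of an MLP and a small CNN in PyTorch (assumed, generic
# architectures; input sizes are arbitrary placeholders).
import torch
import torch.nn as nn

# MLP: each layer applies weights and a bias, then a nonlinear activation.
mlp = nn.Sequential(
    nn.Linear(10, 32),  # input layer -> first hidden layer
    nn.ReLU(),
    nn.Linear(32, 16),  # second hidden layer
    nn.ReLU(),
    nn.Linear(16, 2),   # output layer (e.g., disease vs. normal)
)

# CNN: convolutional layers learn local features (edges, textures) that are
# pooled into progressively more global representations.
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),
)

x_tab = torch.randn(4, 10)          # a batch of 4 tabular samples
x_img = torch.randn(4, 1, 64, 64)   # a batch of 4 single-channel 64x64 images
print(mlp(x_tab).shape)  # torch.Size([4, 2])
print(cnn(x_img).shape)  # torch.Size([4, 2])
```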
Due to the growing availability of ophthalmic imaging and platforms for collating large volumes of clinical data, there is great potential in leveraging AI to enhance the diagnosis and management of corneal diseases. In this paper, AI applications in corneal diseases are reviewed. Key historical studies in the field are discussed below, and more recent applications of AI, particularly within the last two years, are highlighted.
Keratoconus
Keratoconus (KC) is a condition characterized by progressive, bilateral, and asymmetric thinning and steepening of the cornea, which can lead to irregular astigmatism and decreased visual acuity (10). Due to the risk of visual impairment, early diagnosis of KC is critical for timely intervention. Although the signs of advanced KC are clinically apparent, diagnosing milder forms of KC (e.g., forme fruste KC) remains clinically challenging. Consequently, there has been great interest in developing AI approaches to identify KC, particularly its milder forms (11, 12). Additionally, recent applications of AI have expanded toward KC management and the prediction of KC progression (13) (Table 1).
Table 1.
Keratoconus (KC).
| KC Detection | | | | | | |
|---|---|---|---|---|---|---|
| Data Type | Sample Size (N) | Study | Methods | Key Findings | Performance | Reference |
| AS-OCT and Placido disc topography data (MS-39) | 6677 eyes | Detection of KC and KC suspects | ANN – multilayer perceptron | Incorporation of OCT-based epithelial and stromal thickness indexes enhances the ANN’s ability to detect KC suspects | For KC detection: Acc = 98.6 %, Prec = 96 %, Recall = 97.9 %, F1 score = 96.9 %. For KC suspect detection: Acc = 98.5 %, Prec = 83.6 %, Recall = 69.7 %, F1 score = 76 % | Del Barrio et al. (2024)1 |
| Corneal deformation videos (Pentacam HR) | 734 eyes | Detection of KC | CNN – DenseNet121 | The DL model based on corneal deformation videos is sensitive and specific in distinguishing KC eyes from normal eyes | For an external validation dataset: Acc = 88 %, Spec = 98 %, Recall = 78 %, Prec = 98 %, AUROC = 0.93 | Abdelmotaal et al. (2024)2 |
| Pentacam and biomechanical data (Corvis ST) | 3412 eyes | Detection of corneal ectasia | Random forest (TBIv1 and TBIv2) | The optimized model, trained on a more extensive data set, displays AUROC = 0.999 in discriminating clinical and subclinical ectasia. Notably, the model displays improved performance with AUROC = 0.945 in detecting subclinical asymmetric ectasia | TBIv1: for detecting clinical ectasia, AUROC = 0.999 at a cutoff of 0.5; for detecting subclinical asymmetric ectasia, AUROC = 0.899 at a cutoff of 0.29. TBIv2: for detecting clinical ectasia, Sens = 98.7 %, Spec = 99.2 %, AUROC = 0.999 at a cutoff of 0.8; for detecting subclinical asymmetric ectasia, Sens = 84.4 %, Spec = 90.1 %, AUROC = 0.945 at a cutoff of 0.43 | Ambrosio et al. (2023)3 |
| Pentacam HR, SD-OCT, air-puff tonometry (Corvis ST) | 599 eyes | Detection of KC and forme fruste KC (FFKC) | Random forest and neural network | As a single device, air-puff tonometry displays the highest performance (AUROC = 0.801) in detecting FFKC, which increases to AUROC = 0.902 with the inclusion of selected features from SD-OCT | Performance in detecting FFKC: for models incorporating air-puff tonometry data only, AUROC = 0.801; for models combining SD-OCT and air-puff tonometry data, AUROC = 0.902; for models combining all three devices, AUROC = 0.871 | Lu et al. (2023)4 |
| AS-OCT | 7337 eyes | Detection of KC | ML – k-means with incorporation of the flower pollination algorithm (FPAK) | The hybrid model (FPAK-means) displays an accuracy of 96 % in detecting KC | Acc = 96 %. For the 5-class dataset: Acc = 75.2 %, Prec = 35.01 %, Recall = 47.7 %, F-measure = 48.97 %, Purity = 53.64 % | Alyasseri et al. (2022)5 |
| Pentacam | 1125 eyes | Detection of early KC | ML – Zernike polynomials and random forest | The model performs well, with AUROC of 0.997, accuracy of 99.1 %, and precision of 99.1 % for KC, and AUROC of 0.976, accuracy of 95.5 %, and precision of 91.2 % for very asymmetric ectasia | For healthy eyes: AUROC = 0.994, Acc = 95.6 %, Recall = 98.5 %, Prec = 92.7 %. For KC eyes: AUROC = 0.997, Acc = 99.1 %, Recall = 98.7 %, Prec = 99.1 %. For eyes with very asymmetric ectasia (VAE): AUROC = 0.976, Acc = 95.5 %, Recall = 71.5 %, Prec = 91.2 % | Kundu et al. (2023)6 |
| Pentacam | 1371 eyes | Detection of KC | Xception and InceptionResNetV2 | The proposed DL model incorporating Xception and InceptionResNetV2 displays good performance with AUROC of 0.99 and an accuracy range from 0.97 to 1.0 | Acc = 88–92 %, AUROC = 0.91–0.92 | Al-Timemy et al. (2023)7 |
| Biomechanical data (Corvis ST) | 276 eyes | Detection of KC | ML – 5-FNN model | The NN model can achieve accurate and rapid diagnosis of KC from corneal biomechanical data alone | For KC diagnosis in the sample set: Acc = 99.6 %, Sens = 99.3 %, Spec = 100 %, Prec = 100 %. For KC diagnosis in the external validation set: Acc = 98.7 %, Sens = 97.4 %, Spec = 100 %, Prec = 100 % | Tan et al. (2022)8 |
| Pentacam HR | 2893 eyes | Detection of subclinical asymmetric ectasia | ML – multiple logistic regression analysis | The Boosted Ectasia Susceptibility Tomography Index (BESTi) approach displays robust performance, with AUROC of 0.91, sensitivity of 86.02 %, and specificity of 83.97 % in distinguishing between healthy corneas and subclinical asymmetric ectasia | In distinguishing normal corneas from subclinical asymmetric ectasia: AUROC = 0.91, Sens = 86.02 %, Spec = 83.97 % | Almeida et al. (2022)9 |
| Pentacam | 542 eyes | Detection of KC | CNN – EfficientNet-b0 | The developed hybrid DL model displays robust performance in identifying KC with AUROC > 0.93 | AUROC = 0.99, F1 = 0.99, Acc = 98.8 % | Al-Timemy et al. (2021)10 |
| Pentacam HR | 1115 eyes | Detection of KC | CNN models (VGG16) | The proposed model can automatically analyze Scheimpflug tomography scans and stage KC with high accuracy | In detecting KC vs healthy eyes using all 4 maps concatenated: Acc = 97.85 %. In detecting KC vs healthy eyes using a single map: Acc = 92.83 % for the axial map, Acc = 96.42 % for the thickness map, Acc = 96.42 % for the front elevation map, Acc = 97.49 % for the back elevation map | Chen et al. (2021)11 |
| Biomechanical data (Corvis ST) | 434 eyes | Using biomechanical data to differentiate among different topographical stages of KC | ML – linear discriminant analysis (LDA) and random forest (RF) | The RF model displays good accuracy in predicting healthy eyes and different stages of KC (overall Acc = 78 %), illustrating the value of Scheimpflug tonometry data alone in predicting KC severity without relying on keratometric data | Acc = 93 %. AUROC of the LDA model: healthy eyes 0.97, mild KC 0.84, moderate KC 0.84, advanced KC 0.95. AUROC of the RF model: healthy eyes 0.97, mild KC 0.88, moderate KC 0.89, advanced KC 0.95 | Herber et al. (2021)12 |
| Pentacam HR | 854 eyes | Detection of KC and subclinical KC | CNN – KerNet | KerNet outperforms current state-of-the-art methods by ~1 % accuracy in detecting KC and by ~4 % accuracy in detecting subclinical KC | For subclinical KC vs normal detection: Acc = 95.91 %, AUROC = 0.97. For KC vs normal detection: Acc = 98.25 %, AUROC = 0.99 | Feng et al. (2021)13 |
| Pentacam HR | 3218 eyes | Classification of corneal maps for detection of KC | CNN consisting of two convolutional layers (Conv1 and Conv2), each using rectified linear unit activation; SVM | The CNN model displays high accuracies of 98.3 % and 95.8 % in classifying KC, subclinical KC, and normal eyes | Sens > 95 %, Prec > 92 %, Acc > 94 % | Abdelmotaal et al. (2020)14 |
| Pentacam HR | 354 eyes | Detection of KC | CNN models (e.g., ResNet152, VGG16) | The DL models display fair accuracy in KC screening using corneal topography data. The study’s visualizations demonstrate the model’s focus on the correct regions, improving clinical explainability | AUROC > 0.995 for the ResNet152 model, Sens/Spec > 90 %, Acc = 93.1 % | Kuo et al. (2020)15 |
| Orbscan | 3000 exams | Detection of KC | CNN | The model demonstrates that combining corneal topography with a CNN can effectively classify examination categories | In a validation set: average Acc = 99.3 %. For KC: Sens = 100 %, Spec = 100 %. For normal examinations: Sens = 100 %, Spec = 99 %. For refractive surgery examinations: Sens = 98 %, Spec = 100 % | Zeboulon et al. (2020)16 |
| AS-OCT | 314 eyes | Detection of KC | DL – CNN | The model can accurately distinguish clinical KC from normal corneas as well as grade KC | Grading KC: Acc = 87.4 %. Discriminating KC from normal corneas: Acc = 99.1 % | Kamiya et al. (2019)17 |
| Pentacam and corneal topography data generated from SyntEyes KTC model | 1350 eyes | Detection of KC | CNN | KeratoDetect displays an accuracy of 99.3 % in detecting KC from a test set | Acc = 99.33 % | Lavric et al. (2019)18 |
| SS-OCT | 12242 eyes | Classification of KC stages | Unsupervised density-based clustering, PCA | The unsupervised model identifies four clusters of varying severity ectasia status indices (ESIs), corresponding to varied severities in KC | For identifying healthy eyes from eyes with KC: Spec = 94.1 % and Sens = 97.1 % | Yousefi et al. (2018)19 |
| Pentacam HR | 3233 eyes | Detection of corneal ectasia | ML – RF | The Pentacam Random Forest Index, derived from an RF algorithm, performs better than the Belin/Ambrósio deviation and demonstrates good performance in detecting subclinical asymmetric ectasia and post-LASIK ectasia | At a cutoff of 0.216: AUROC = 0.992, Sens = 94.2 %, Spec = 98.8 %. At an optimized cutoff of 0.125: Sens = 85.2 % for detecting subclinical asymmetric ectasia, Sens = 80 % for post-LASIK ectasia | Lopes et al. (2018)20 |
| Tomographic (Pentacam HR) and biomechanical data (Corvis ST) | 684 eyes | Detection of corneal ectasia | ML – random forest (RF) | The TBI derived from the RF model achieves superior accuracy in ectasia detection compared to other techniques, displaying high sensitivity for detecting subclinical ectasia among eyes with normal topography in very asymmetric patients | For detecting ectasia using the TBI: AUROC = 0.996. For detecting clinical ectasia (KC and very asymmetric ectasia): Spec = 100 % at a TBI cutoff of 0.79. For detecting asymmetric clinical ectasia with normal topography at a TBI cutoff of 0.29: Sens = 90.4 %, Spec = 96 %, AUROC = 0.985 | Ambrosio et al. (2017)21 |
| Pentacam | 860 eyes | Detection of KC and FFKC | ML – support vector machine | The method displays robust performance, with accuracy of 98.9 %, sensitivity of 99.1 %, and specificity of 98.5 % in discriminating KC vs normal eyes, and accuracy of 93.1 %, sensitivity of 79.1 %, and specificity of 97.9 % in discriminating forme fruste KC vs normal eyes | KC vs normal eyes discrimination: Acc = 98.9 %, Sens = 99.1 %, Spec = 98.5 %. Forme fruste KC vs normal eyes discrimination: Acc = 93.1 %, Sens = 79.1 %, Spec = 97.9 %. 5-group classification: Acc = 88.8 %, Sens = 89.0 %, Spec = 95.2 % | Ruiz Hidalgo et al. (2016)22 |
| GALILEI | 372 eyes | Detection of subclinical KC | ML – classification and regression tree | The proposed ML method shows good performance in discriminating FFKC from normal corneas | For discriminating normal vs KC: Sens = 100 %, Spec = 99.5 %. For discriminating normal vs FFKC: Sens = 93.5 %, Spec = 99.5 % | Smadja et al. (2013)23 |
| Topography (Sirius) | 3502 eyes | Development of an ML-based classification method for detecting KC by combining Scheimpflug camera and Placido corneal topography data | ML – support vector machine | The model displays an accuracy of > 93 % in the detection of KC. Incorporating data from the anterior and posterior corneal surfaces and pachymetry increases the model’s sensitivity in detecting subclinical KC to up to 92 % | Performance of the model incorporating anterior and posterior surface and pachymetry data: for detection of KC, Acc = 98.2 %, Sens = 95 %, Spec = 99.3 %; for detection of subclinical KC, Acc = 97.3 %, Sens = 92 %, Spec = 97.7 % | Arbelaez et al. (2012)24 |
| Topography (EyeSys) | 396 eyes | Detection of KC | CNN | The model displays a global sensitivity of 94.1 %, specificity of 97.6 %, and accuracy of 96.4 % on the test set in classifying normal eyes, KC, and eyes with other alterations | Acc = 96.7 %, Sens = 94.1 %, Spec = 97.5 % | Accardo and Pensiero (2002)25 |
| Topography (TMS-1) | 300 exams | Detection of KC | NN | The model distinguishes KC and KC suspects (KCS) with 100 % accuracy, specificity, and sensitivity in a test set | For the test set, distinguishing KC and KC suspects: Acc = 100 %, Sens = 100 %, Spec = 100 % | Smolek and Klyce (1997)26 |
| KC Treatment | | | | | | |
| Multimodal clinical data (i.e., tomography and other clinical risk factors) | 570 patients | Prediction of KC progression at the initial visit | MobileNet and a multilayer perceptron | The NN model displays robust performance in predicting KC progression. The most important risk factors were age and posterior elevation | Prediction of KC progression using tomography and clinical risk factors: Acc = 83 %. Using numerical risk factors alone: Acc = 82 %. Using tomography images alone: Acc = 77 % | Hartmann et al. (2024)27 |
| Clinical data and Pentacam parameters | 277 eyes | Prediction of visual acuity and keratometry 2 years after corneal crosslinking for KC | CatBoost, LightGBM, and XGBoost | XGBoost achieves the best performance in predicting corrected distance visual acuity and maximum keratometry changes, with R2 = 0.8382 and 0.8956 in the validation set | Acc = 98.8 %, R2 = 98.8 %, RMSE = 0.053 | Liu et al. (2023)28 |
| Multimodal clinical data | 202 patients | Prediction of the base curve (R0) for rigid gas permeable (RGP) contact lenses | U-Net | The DL model displays superior prediction performance compared to the manufacturer’s recommendation for RGP lenses | R2 = 0.83; MSE = 0.04 for mean keratometry R0; MAE = 0.16 ± 0.13 | Risser et al. (2023)29 |
| Pentacam HR | 274 eyes | Prediction of KC progression and indication for corneal crosslinking | DL – VGG16 | The model can predict KC progression with AUROC values > 0.78, sensitivities > 77.8 %, and specificities > 59 % | AUROC > 0.78, Sens > 77.8 %, Spec > 59 % | Kato et al. (2021)30 |
| Multimodal clinical data (e.g., CDVA, corneal topography and tomography data) | 40 eyes | Aiding intracorneal ring segment (ICRS) implantation in KC | ANN | The developed ANN for guiding ICRS implantation improves visual acuity and reduces the spherical equivalent and keratometric parameters in KC patients, performing better than the manufacturer’s nomograms | CDVA improved from 0.60 ± 0.23 preoperatively to 0.73 ± 0.21 postoperatively in the ANN group, significantly better than in the nomogram group (p < 0.01) | Fariselli et al. (2020)31 |
| Pentacam and multimodal clinical data | 209 eyes | Evaluation of average keratometry and predictability of asphericity after intrastromal corneal ring segment (ICRS) implantation | ML – linear regression | The model displays an MAE of 0.19 for asphericity and 1.18 for mean keratometry, a significant improvement compared to the nomogram approach | MAE = 0.19 for asphericity; MAE = 1.18 for mean keratometry | Lyra et al. (2018)32 |
Alió del Barrio JL, Eldanasoury AM, Arbelaez J, Faini S, Versaci F (2024) Artificial Neural Network for Automated Keratoconus Detection Using a Combined Placido Disc and Anterior Segment Optical Coherence Tomography Topographer. Translational Vision Science & Technology 13: 13–13 DOI 10.1167/tvst.13.4.13.
Abdelmotaal H, Hazarbassanov RM, Salouti R, Nowroozzadeh MH, Taneri S, Al-Timemy AH, Lavric A, Yousefi S (2024) Keratoconus Detection-based on Dynamic Corneal Deformation Videos Using Deep Learning. Ophthalmology Science 4 DOI 10.1016/j.xops.2023.100380.
Ambrósio R, Jr., Machado AP, Leão E, Lyra JMG, Salomão MQ, Esporcatte LGP, da Fonseca Filho JBR, Ferreira-Meneses E, Sena NB, Jr., Haddad JS, Costa Neto A, de Almeida GC, Jr., Roberts CJ, Elsheikh A, Vinciguerra R, Vinciguerra P, Bühren J, Kohnen T, Kezirian GM, Hafezi F, Hafezi NL, Torres-Netto EA, Lu N, Kang DSY, Kermani O, Koh S, Padmanabhan P, Taneri S, Trattler W, Gualdi L, Salgado-Borges J, Faria-Correia F, Flockerzi E, Seitz B, Jhanji V, Chan TCY, Baptista PM, Reinstein DZ, Archer TJ, Rocha KM, Waring GOIV, Krueger RR, Dupps WJ, Khoramnia R, Hashemi H, Asgari S, Momeni-Moghaddam H, Zarei-Ghanavati S, Shetty R, Khamar P, Belin MW, Lopes BT (2023) Optimized Artificial Intelligence for Enhanced Ectasia Detection Using Scheimpflug-Based Corneal Tomography and Biomechanical Data. American Journal of Ophthalmology 251: 126–142 DOI 10.1016/j.ajo.2022.12.016.
Lu N-J, Koppen C, Hafezi F, N Dhubhghaill S, Aslanides IM, Wang Q-M, Cui L-L, Rozema JJ (2023) Combinations of Scheimpflug tomography, ocular coherence tomography and air-puff tonometry improve the detection of keratoconus. Contact Lens and Anterior Eye 46 DOI 10.1016/j.clae.2023.101840.
Alyasseri ZA, Al-Timemy AH, Abasi AK, Lavric A, Mohammed HJ, Takahashi H, Milhomens Filho JA, Campos M, Hazarbassanov RM, Yousefi S (2022) A Hybrid Artificial Intelligence Model for Detecting Keratoconus. Applied Sciences DOI 10.3390/app122412979.
Kundu G, Shetty R, Khamar P, Mullick R, Gupta S, Nuijts R, Roy AS (2023) Universal architecture of corneal segmental tomography biomarkers for artificial intelligence-driven diagnosis of early keratoconus. British Journal of Ophthalmology 107: 635–643 DOI 10.1136/bjophthalmol-2021-319309.
Al-Timemy AH, Alzubaidi L, Mosa ZM, Abdelmotaal H, Ghaeb NH, Lavric A, Hazarbassanov RM, Takahashi H, Gu Y, Yousefi S (2023) A Deep Feature Fusion of Improved Suspected Keratoconus Detection with Deep Learning. Diagnostics DOI 10.3390/diagnostics13101689.
Tan Z, Chen X, Li K, Liu Y, Cao H, Li J, Jhanji V, Zou H, Liu F, Wang R, Wang Y (2022) Artificial Intelligence–Based Diagnostic Model for Detecting Keratoconus Using Videos of Corneal Force Deformation. Translational Vision Science & Technology 11: 32–32 DOI 10.1167/tvst.11.9.32.
Almeida Jr GC, Guido RC, Balarin Silva HM, Brandão CC, de Mattos LC, Lopes BT, Machado AP, Ambrosio R, Jr. (2022) New artificial intelligence index based on Scheimpflug corneal tomography to distinguish subclinical keratoconus from healthy corneas. Journal of Cataract & Refractive Surgery 48.
Al-Timemy AH, Mosa ZM, Alyasseri Z, Lavric A, Lui MM, Hazarbassanov RM, Yousefi S (2021) A Hybrid Deep Learning Construct for Detecting Keratoconus From Corneal Maps. Translational Vision Science & Technology 10: 16–16 DOI 10.1167/tvst.10.14.16.
Chen X, Zhao J, Iselin KC, Borroni D, Romano D, Gokul A, McGhee CNJ, Zhao Y, Sedaghat M-R, Momeni-Moghaddam H, Ziaei M, Kaye S, Romano V, Zheng Y (2021) Keratoconus detection of changes using deep learning of colour-coded maps. BMJ Open Ophthalmology 6: e000824 DOI 10.1136/bmjophth-2021-000824.
Herber R, Pillunat LE, Raiskup F (2021) Development of a classification system based on corneal biomechanical properties using artificial intelligence predicting keratoconus severity. Eye and Vision 8: 21 DOI 10.1186/s40662-021-00244-4.
Feng R, Xu Z, Zheng X, Hu H, Jin X, Chen DZ, Yao K, Wu J (2021) KerNet: A Novel Deep Learning Approach for Keratoconus and Sub-Clinical Keratoconus Detection Based on Raw Data of the Pentacam HR System. IEEE Journal of Biomedical and Health Informatics 25: 3898–3910 DOI 10.1109/JBHI.2021.3079430.
Abdelmotaal H, Mostafa MM, Mostafa ANR, Mohamed AA, Abdelazeem K (2020) Classification of Color-Coded Scheimpflug Camera Corneal Tomography Images Using Deep Learning. Translational Vision Science & Technology 9: 30–30 DOI 10.1167/tvst.9.13.30.
Kuo B-I, Chang W-Y, Liao T-S, Liu F-Y, Liu H-Y, Chu H-S, Chen W-L, Hu F-R, Yen J-Y, Wang IJ (2020) Keratoconus Screening Based on Deep Learning Approach of Corneal Topography. Translational Vision Science & Technology 9: 53–53 DOI 10.1167/tvst.9.2.53.
Zeboulon P, Debellemanière G, Bouvet M, Gatinel D (2020) Corneal Topography Raw Data Classification Using a Convolutional Neural Network. American Journal of Ophthalmology 219: 33–39 DOI 10.1016/j.ajo.2020.06.005.
Kamiya K, Ayatsuka Y, Kato Y, Fujimura F, Takahashi M, Shoji N, Mori Y, Miyata K (2019) Keratoconus detection using deep learning of colour-coded maps with anterior segment optical coherence tomography: a diagnostic accuracy study. BMJ Open 9: e031313 DOI 10.1136/bmjopen-2019-031313.
Lavric A, Valentin P (2019) KeratoDetect: Keratoconus Detection Algorithm Using Convolutional Neural Networks. Computational Intelligence and Neuroscience 2019: 8162567 DOI 10.1155/2019/8162567.
Yousefi S, Yousefi E, Takahashi H, Hayashi T, Tampo H, Inoda S, Arai Y, Asbell P (2018) Keratoconus severity identification using unsupervised machine learning. PLOS ONE 13: e0205998 DOI 10.1371/journal.pone.0205998.
Lopes BT, Ramos IC, Salomao MQ, Guerra FP, Schallhorn SC, Schallhorn JM, Vinciguerra R, Vinciguerra P, Price FW, Jr., Price MO, Reinstein DZ, Archer TJ, Belin MW, Machado AP, Ambrosio R, Jr. (2018) Enhanced Tomographic Assessment to Detect Corneal Ectasia Based on Artificial Intelligence. American Journal of Ophthalmology 195: 223–232 DOI 10.1016/j.ajo.2018.08.005.
Ambrosio R, Lopes BT, Faria-Correia F, Salomao MQ, Buhren J, Roberts CJ, Elsheikh A, Vinciguerra R, Vinciguerra P (2017) Integration of Scheimpflug-Based Corneal Tomography and Biomechanical Assessments for Enhancing Ectasia Detection. Journal of Refractive Surgery 33: 434–443 DOI 10.3928/1081597X-20170426-02.
Ruiz Hidalgo I, Rodriguez P, Rozema JJ, Ní Dhubhghaill S, Zakaria N, Tassignon M-J, Koppen C (2016) Evaluation of a Machine-Learning Classifier for Keratoconus Detection Based on Scheimpflug Tomography. Cornea 35.
Smadja D, Touboul D, Cohen A, Doveh E, Santhiago MR, Mello GR, Krueger RR, Colin J (2013) Detection of Subclinical Keratoconus Using an Automated Decision Tree Classification. American Journal of Ophthalmology 156: 237–246.e231 DOI 10.1016/j.ajo.2013.03.034.
Arbelaez MC, Versaci F, Vestri G, Barboni P, Savini G (2012) Use of a Support Vector Machine for Keratoconus and Subclinical Keratoconus Detection by Topographic and Tomographic Data. Ophthalmology 119: 2231–2238 DOI 10.1016/j.ophtha.2012.06.005.
Accardo PA, Pensiero S (2002) Neural network-based system for early keratoconus detection from corneal topography. Journal of Biomedical Informatics 35: 151–159 DOI 10.1016/S1532-0464(02)00513-0.
Smolek MK, Klyce SD (1997) Current keratoconus detection methods compared with a neural network approach. Investigative Ophthalmology & Visual Science 38: 2290–2299.
Hartmann LM, Langhans DS, Eggarter V, Freisenich TJ, Hillenmayer A, Konig SF, Vounotrypidis E, Wolf A, Wertheimer CM (2024) Keratoconus Progression Determined at the First Visit: A Deep Learning Approach With Fusion of Imaging and Numerical Clinical Data. Translational Vision Science & Technology 13: 7–7 DOI 10.1167/tvst.13.5.7.
Liu Y, Shen D, Wang H-y, Qi M-y, Zeng Q-y (2023) Development and validation to predict visual acuity and keratometry two years after corneal crosslinking with progressive keratoconus by machine learning. Frontiers in Medicine 10 DOI 10.3389/fmed.2023.1146529.
Risser G, Mechleb N, Muselier A, Gatinel D, Zéboulon P (2023) Novel deep learning approach to estimate rigid gas permeable contact lens base curve for keratoconus fitting. Contact Lens and Anterior Eye DOI 10.1016/j.clae.2023.102063.
Kato N, Masumoto H, Tanabe M, Sakai C, Negishi K, Torii H, Tabuchi H, Tsubota K (2021) Predicting Keratoconus Progression and Need for Corneal Crosslinking Using Deep Learning. Journal of Clinical Medicine DOI 10.3390/jcm10040844.
Fariselli C, Vega-Estrada A, Arnalich-Montiel F, Alio JL (2020) Artificial neural network to guide intracorneal ring segments implantation for keratoconus treatment: a pilot study. Eye and Vision 7: 20 DOI 10.1186/s40662-020-00184-5.
Lyra D, Ribeiro G, Torquetti L, Ferrara P, Machado A, Lyra JM (2018) Computational Models for Optimization of the Intrastromal Corneal Ring Choice in Patients With Keratoconus Using Corneal Tomography Data. Journal of Refractive Surgery 34: 547–550 DOI 10.3928/1081597X-20180615-01.
Applications of AI in KC diagnosis typically rely on imaging modalities such as corneal topography (e.g., Orbscan by Bausch & Lomb, USA), tomography (e.g., Pentacam by Oculus, Germany), and anterior segment optical coherence tomography (AS-OCT) (14). Early studies used corneal topographic data to train NN models to detect KC. In 1997, Smolek and Klyce used an NN, which incorporated various topographic indices as inputs, to distinguish KC and KC suspect (KCS) with 100% accuracy, specificity, and sensitivity on an internal test dataset (15). Accardo and Pensiero proposed a convolutional neural network (CNN)-based approach to screen for early KC. By incorporating multiple corneal topography parameters from both eyes (e.g., SimK1, K2, asphericity, central corneal power, differential sector index), the authors demonstrated a global sensitivity of 94.1%, specificity of 97.6%, and accuracy of 96.4% (16). More recently, Lavric et al. described a CNN-based approach (i.e., KeratoDetect) to diagnose KC using 1350 generated topographies of KC and normal eyes as a training set, yielding a high accuracy of 99.33% in the detection of KC (17).
Due to the clinical challenges of diagnosing subclinical KC, many efforts have focused on applying AI in this area. Smadja et al. described an automated decision tree classifier for detecting forme fruste KC from normal corneas via a Scheimpflug analyzer (GALILEI by Ziemer Ophthalmic Systems AG, Switzerland), achieving a sensitivity and specificity of 93.6% and 97.2%, respectively (18). Integrating topographic and tomographic data from Scheimpflug imaging with Placido disk corneal topography (Sirius by CSO, Italy), Arbelaez et al. used a support vector machine (SVM) to classify eyes as abnormal, KC, subclinical KC, or normal. The authors showed that incorporating posterior corneal surface and thickness parameters greatly improved sensitivity in classifying subclinical KC (i.e., up to 92% sensitivity) (19). More recently, Del Barrio et al. developed an ANN model integrating data from a combined AS-OCT and Placido disc topographer (MS-39 by CSO, Italy) to detect KC suspects; incorporating new OCT-based epithelial and stromal thickness indexes improved the ANN’s detection of KC suspects, achieving an accuracy of 98.5% (20).
Altered corneal biomechanical properties, which are surmised to drive stromal thinning, have emerged as a potentially important biomarker for KC (21, 22). As such, detection of alterations in corneal biomechanical properties holds potential for the early detection of KC (Table 1). Using a set of 850 eyes from 778 patients, Ambrosio et al. employed supervised ML methods integrating corneal tomographic (i.e., Pentacam HR by Oculus, Germany) and biomechanical data (i.e., Corvis ST by Oculus, Germany) to enhance corneal ectasia detection, termed the Tomographic and Biomechanical Index (TBI). The reported method achieved an area under the receiver operating characteristic (AUROC) curve of 0.996 and 100% sensitivity and specificity in detecting ectasia at an optimized TBI cut-off. Importantly, the TBI displayed high sensitivity (90.4%) and specificity (96%) in detecting clinical asymmetric ectasia with normal topography findings at an optimized TBI cut-off (23). A recent refinement of this approach using an improved random forest (RF) algorithm, trained on a more extensive data set containing 3886 eyes from 3412 patients, demonstrated an enhanced AUROC (i.e., 0.945 vs 0.899, P<0.0001) in detecting subclinical asymmetric ectasia with normal topography findings (12). In addition, Herber et al., based on the finding that biomechanical parameters vary across different topographical stages of KC (24), developed an ML model incorporating biomechanical data from Scheimpflug tonometry (Corvis ST by Oculus, Germany) to predict the severity of KC without the need for keratometry data (25). The authors showed that the RF model achieved better performance in predicting mild KC (i.e., sensitivity of 80% and specificity of 90%) than moderate and advanced KC (i.e., moderate KC: sensitivity of 63%, specificity of 87%; advanced KC: sensitivity of 72%, specificity of 95%) (25). Along these lines, Lu et al. showed that an RF model using air-puff tonometry input alone (Corvis ST by Oculus, Germany) displayed an AUROC of 0.801 for detecting forme fruste KC, which could be increased to 0.902 with the incorporation of selected features from spectral-domain optical coherence tomography (SD-OCT) (26). Further, Tan et al. demonstrated that an ML model incorporating only four biomechanical characteristics (i.e., first applanation time, deformation amplitude at the highest concavity, central corneal thickness, and radius at the highest concavity) can diagnose KC with 98.7% accuracy, 97.4% sensitivity, and 100% specificity on an external validation data set of 78 corneal dynamic deformation videos (27) (Table 1). Together, these studies highlight the emerging importance of incorporating biomechanical data in conjunction with tomographic and topographic data to assist ML-based detection of KC.
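As an illustration of the general feature-fusion idea behind indices such as the TBI, the sketch below concatenates synthetic "tomographic" and "biomechanical" feature vectors and trains a random forest evaluated by AUROC. This is not the published TBI model; all features, labels, and hyperparameters are hypothetical.

```python
# Hedged sketch of multimodal feature fusion for ectasia detection:
# synthetic tomographic and biomechanical features are concatenated
# column-wise before training a random forest. Data are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 600
tomographic = rng.normal(size=(n, 5))    # e.g., elevation and pachymetric indices
biomechanical = rng.normal(size=(n, 4))  # e.g., deformation-response parameters
y = ((tomographic[:, 0] + biomechanical[:, 0]) > 0).astype(int)

# Fuse the two modalities by simple column-wise concatenation.
X = np.hstack([tomographic, biomechanical])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
probs = rf.predict_proba(X_te)[:, 1]     # class-1 probabilities for ROC analysis
print(f"AUROC: {roc_auc_score(y_te, probs):.3f}")
```

Reporting AUROC on held-out data, as above, mirrors how the studies in Table 1 quantify discrimination between ectatic and normal eyes.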
In addition to diagnostics, AI has increasingly found a foothold in KC treatment planning, notably in predicting KC progression and treatment outcomes. Kato et al. used a DL approach incorporating tomography data (Pentacam HR by Oculus, Germany) to predict KC progression and the indication for corneal crosslinking (CXL). The authors showed that axial map data, pachymetry map data, or their combination, in conjunction with patient age, are valuable indicators of KC progression and the need for CXL (28). Liu et al. described an ML-based approach to predict changes in visual acuity and keratometry two years after CXL for KC, achieving R2 values of 0.8382 and 0.8956 in predicting corrected distance visual acuity (CDVA) and maximum keratometry (Kmax) changes, respectively (29). More recently, Hartmann et al. developed an NN model to predict KC progression at the initial visit using tomography images (Pentacam HR by Oculus, Germany) and clinical risk factors from 570 patients, achieving an accuracy of 83% in predicting KC progression (30).
Infectious Keratitis
Infectious keratitis (IK) is among the leading causes of blindness worldwide, especially in resource-limited areas (31). As such, early and accurate diagnosis of IK is necessary for timely treatment and improved visual outcomes. Nonetheless, accurate diagnosis of IK, which often relies on corneal scrapings, cultures, and sensitivities, remains a major challenge due to 1) polymicrobial infections, 2) low culture positivity rates, and 3) clinical signs and symptoms that overlap with non-infectious keratitis (32). AI methods have been applied to automate the diagnosis of IK, incorporating input data such as slit-lamp and in vivo confocal microscopy (IVCM) images (Table 2). An early study by Saini et al. trained an artificial neural network (ANN) on a set of forty input variables, including predisposing ocular factors, systemic factors, and morphological features, to classify corneal ulcers, reaching specificities of 76.47% and 100% for detecting bacterial and fungal keratitis, respectively. Moreover, the model outperformed human clinicians in classifying corneal ulcers (accuracy of 90.7% vs 62.8%, p<0.01) (33). Since then, most AI applications have relied on slit-lamp images to train CNNs. Within the realm of fungal keratitis (FK), the diagnostic accuracies of CNN models using slit-lamp image data range from 70% to 89% (34, 35). In 2021, Xu et al. used a sequential-level deep model to classify IK from slit-lamp images, achieving a global classification accuracy of 80%, with the highest diagnostic performance in detecting fungal and herpes simplex keratitis, significantly higher than the average performance of 421 ophthalmologists (i.e., 49.27% global accuracy) (36).
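Many of the CNN studies above follow a common transfer-learning recipe: start from an ImageNet-pretrained backbone and retrain a new classification head on labeled keratitis photographs. The sketch below illustrates that recipe under stated assumptions (a torchvision ResNet-50, four hypothetical keratitis classes, and random tensors standing in for slit-lamp images); it is not the pipeline of any specific cited study.

```python
# Minimal transfer-learning sketch for keratitis classification. The class
# count, batch, and data are hypothetical placeholders.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_CLASSES = 4  # e.g., bacterial, fungal, Acanthamoeba, herpes simplex

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the classifier head

# Optionally freeze the pretrained backbone and train only the new head at first.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random batch standing in for slit-lamp images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```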
Table 2.
Infectious Keratitis.
| Data type | Sample size | Study | Methods | Key Findings | Performance | Reference |
|---|---|---|---|---|---|---|
| Handheld camera images | 897 images | Detection of bacterial and fungal keratitis based on differential image quality | CNN | The CNN model displays robust performance regardless of image quality | Image reflection: AUROC = 0.90 with reflection, AUROC = 0.8 without reflection. Image exposure: AUROC = 0.77 for overexposed images, AUROC = 0.83 for normal exposure, AUROC = 0.86 for underexposed images. Image focus: AUROC = 0.83 for in-focus images, AUROC = 0.83 for out-of-focus images. Gaze: AUROC = 0.87 for eccentric gaze, AUROC = 0.83 for primary gaze. Lid obscuration: AUROC = 0.93 with lid obscuration, AUROC = 0.83 without lid obscuration | Hanif et al. (2023)1 |
| Slit-lamp images | 1456 images | Monitoring keratitis progression/regression | DL – CNN (e.g., EfficientNet-b3, ResNet50, SEResNet50) | The best performing model can monitor keratitis progression with robust performance by integrating slit-lamp images | For EfficientNet-b3: Acc = 87.3 %; F1 score = 90.2 % for getting better; F1 score = 82.1 % for becoming worse; AUROC = 0.942 | Kuo et al. (2023)2 |
| Slit-lamp images | 684 anterior segment images | Detection of bacterial and fungal keratitis | DL – ResNet50 with incorporation of a Lesion Guiding Module (LGM) and Mask Adjusting Module (MAM) | The DL diagnostic system incorporating LGM and MAM demonstrates good performance in distinguishing bacterial from fungal keratitis | Acc = 87.8 %, AUROC = 0.89. For the external dataset: Acc = 71.4 %, AUROC = 0.69 | Won et al. (2023)3 |
| Anterior segment images | 704 anterior segment images | Detection of bacterial and fungal keratitis | DL – CNN (e.g., ResNet50, ResNet152, DenseNet121, and DenseNet169) and BERT | The DL model combining CNN and BERT can achieve high diagnostic accuracy, exceeding the performance of corneal specialists | Global performance: Acc = 93 %, Sens = 97 %, Spec = 92 %, AUROC = 0.94. Diagnostic accuracy for bacterial keratitis: 81–92 %. Diagnostic accuracy for fungal keratitis: 89–97 % | Wu et al. (2023)4 |
| IVCM images | 4001 images | Detection of IK | DL – DenseNet161, DenseNet121, ResNet152, ResNet101, ResNeXt101, ResNeXt50, CSPResNet50, VGG19, VGG16, VGG13 | Among the models tested, DenseNet161 displays the best diagnostic performance, and the DL models can help justify the rationale behind the diagnoses via saliency maps | Best performing model (i.e., DenseNet161): Acc = 93.55 %, Prec = 92.5 %, Recall = 94.77 %, F1 score = 96.93 % | Essalat et al. (2023)5 |
| IVCM | 7278 images | Detection of fungal keratitis | DL – Inception, ResNet, DenseNet | The method displays high accuracy, sensitivity, and specificity in diagnosing fungal keratitis via IVCM | Acc = 97.73 %, Sens = 97.02 %, Spec = 98.54 %, AUROC = 0.99 | Liang et al. (2023)6 |
| Slit-lamp images | 1916 images | Detection of fungal keratitis | ML – binary logistic regression, random forest, and decision tree classification | The models perform well, with logistic regression displaying the best diagnostic performance for fungal keratitis | For the logistic regression model: Acc = 90.5 %, Sens = 90.7 %, Spec = 89.9 %, AUROC = 0.903 | Wei et al. (2023)7 |
| IVCM | 3364 images | Classification of pathogenic fungal genera in fungal keratitis | DL – Inception-ResNet V2 | The proposed DL model performs well, with AUROC of 0.887 and 0.827 for identifying Fusarium and Aspergillus, respectively | For identifying Fusarium: Acc = 81.7 %, Sens = 79.1 %, Spec = 83.1 %, AUROC = 0.887. For identifying Aspergillus: Acc = 75.7 %, Sens = 75.6 %, Spec = 75.9 %, AUROC = 0.827 | Tang et al. (2023)8 |
| Slit-lamp images | 4830 images | Detection of IK | DL – ResNeXt101_32x16d and DenseNet169 | The model (i.e., KeratitisNet) displays good performance in diagnosing bacterial, fungal, Acanthamoeba, and herpes simplex keratitis, with accuracies of 70.27 %, 77.71 %, 83.81 %, and 79.31 %, respectively, exceeding those of human ophthalmologists | For KeratitisNet: global Acc = 77.08 %; AUROC = 0.86 for bacterial keratitis; Acc = 77.71 % and AUROC = 0.91 for fungal keratitis; Acc = 83.81 % and AUROC = 0.96 for Acanthamoeba keratitis; Acc = 79.31 % and AUROC = 0.98 for herpes simplex keratitis | Zhang et al. (2022)9 |
| Handheld camera images | 980 images | Detection of bacterial and fungal keratitis | DL – MobileNetV2, DenseNet201, ResNet152V2, VGG19, Xception | MobileNet models demonstrate superior performance to cornea specialists in diagnosing the infectious etiology of corneal ulcers | For MobileNet: AUROC = 0.86 on a single-center test set; AUROC = 0.83 on a multicenter test set; Acc = 81 % for fungal ulcers; Acc = 75 % for bacterial ulcers. Ensemble CNN model performance: AUROC = 0.87 | Redd et al. (2022)10 |
| IVCM | 1089 images | Detection of fungal keratitis using an explainable AI system | Gradient-weighted Class Activation Mapping (Grad-CAM) and Guided Grad-CAM | The explainable AI (XAI) system improves the diagnostic accuracy and sensitivity of ophthalmologists in detecting fungal keratitis, particularly among inexperienced ophthalmologists. The model visually highlights image regions important for its diagnostic decision | Global performance of the XAI system: Acc = 93.3 %, Sens = 92.2 %, Spec = 94.3 % | Xu et al. (2021)11 |
| Handheld camera images | 1512 images | Detection of bacterial keratitis | ResNet50, ResNeXt50, DenseNet121, SE-ResNet50, EfficientNets B0, B1, B2, and B3 | Of all the DL models tested, EfficientNet B3 displays the best performance, with the highest AUROC, sensitivity, and specificity, comparable to those of human ophthalmologists | For EfficientNet B3: Sens = 74 %, Spec = 64 %, AUROC = 0.73 | Kuo et al. (2021)12 |
| Slit-lamp images | 4306 images | Detection and classification of IK | DL – ResNet-50, InceptionResNetV2 | The algorithm displays good performance, with accuracy/AUROC of 97.9 %/0.995 for Acanthamoeba, 90.7 %/0.963 for bacteria, 95.0 %/0.975 for fungi, and 92.3 %/0.946 for HSV | For Acanthamoeba: Acc = 97.9 %, AUROC = 0.995. For bacteria: Acc = 90.7 %, AUROC = 0.963. For fungi: Acc = 95 %, AUROC = 0.975. For HSV: Acc = 92.3 %, AUROC = 0.946 | Koyama et al. (2021)13 |
| Slit-lamp images | 2167 images | Classification of fungal and bacterial keratitis | DL – CNN – VGG19, ResNet50, DenseNet121 | VGG19 displays the best performance, with AUROC of 0.86 and classification F1 score of 0.78. An ensemble learning model can improve performance to an F1 score of 0.93 and AUROC of 0.904 | Classification performance for VGG19: F1 = 0.78, AUROC = 0.86. Classification performance for the ensemble learning method: F1 = 0.77, AUROC = 0.904 | Ghosh et al. (2022)14 |
| Slit-lamp images | 1330 images | Detection of bacterial and fungal keratitis | DL – CNN models (DenseNet121, 161, 169, 201, EfficientNetB3, InceptionV3, ResNet101, and ResNet50) | The DenseNet161 model displays the best diagnostic performance, with AUROC of 0.85 for both bacterial and fungal keratitis, achieving better diagnostic accuracy than general ophthalmologists and corneal specialists | For the DenseNet161 model: average Acc = 79.3 %, Sens = 63.2 %, Spec = 79.6 %, AUROC = 0.85 | Hung et al. (2021)15 |
| Handheld camera images | 2245 images | Differentiation between active corneal ulcers and healed scars | DL – CNN models | The CNN model performs well in identifying active ulcers and scars, with high F1 score, sensitivity, specificity, and AUROC for both internal and external data sets. Grad-CAM heatmaps can identify clinical features such as conjunctival injection and corneal infiltrates associated with active corneal ulcers | For patients from India: F1 = 92 %, Sens = 93.5 %, Spec = 84.4 %, AUROC = 0.973. For patients from Northern California: F1 = 84.3 %, Sens = 78.2 %, Spec = 91.3 %, AUROC = 0.947 | Tiwari et al. (2022)16 |
| Slit-lamp and smartphone images | 6567 images | Classification of IK | DenseNet121, Inception-v3, and ResNet50; ImageNet | The DL model displays good performance, with AUROC > 0.96, comparable to experienced cornea specialists | For external datasets, the best performing algorithm (DenseNet121) displays: Acc > 94 %, Sens > 93 %, Spec > 95 %, AUROC > 0.96 | Li et al. (2021)17 |
| IVCM | 378 images | Detection of fungal hyphae | Adaptive robust binary pattern (ARBP) and support vector machine | The proposed ARBP method achieves robust classification of fungal hyphae, demonstrating an accuracy of 99.74 % and AUROC > 0.99 | Performance of multiple classifiers using ARBP features: Acc > 99 %, AUROC > 0.99 | Wu et al. (2018)18 |
| Multimodal clinical data | 63 ulcers | Classification of IK | ANN | The trained ANN displays specificities of 76.47 % and 100 % for bacterial and fungal ulcer classification, respectively. The model displays an accuracy of 90.7 %, significantly better than human clinicians (i.e., 62.8 %) | Acc = 90.7 %; Spec = 76.47 % for the bacterial category; Spec = 100 % for the fungal category; Sens = 100 % for the bacterial category; Sens = 76.4 % for the fungal category | Saini et al. (2003)19 |
Hanif A, Prajna NV, Lalitha P, NaPier E, Parker M, Steinkamp P, Keenan JD, Campbell JP, Song X, Redd TK (2023) Assessing the Impact of Image Quality on Deep Learning Classification of Infectious Keratitis. Ophthalmology Science 3: 100331 DOI 10.1016/j.xops.2023.100331.
Kuo M-T, Hsu BW-Y, Lin YS, Fang P-C, Yu H-J, Hsiao Y-T, Tseng VS (2023) Monitoring the Progression of Clinically Suspected Microbial Keratitis Using Convolutional Neural Networks. Translational Vision Science & Technology 12: 1–1 DOI 10.1167/tvst.12.11.1.
Won YK, Lee H, Kim Y, Han G, Chung T-Y, Ro YM, Lim DH (2023) Deep learning-based classification system of bacterial keratitis and fungal keratitis using anterior segment images. Frontiers in Medicine 10 DOI 10.3389/fmed.2023.1162124.
Wu J, Yuan Z, Fang Z, Huang Z, Xu Y, Xie W, Wu F, Yao Y-F (2023) A knowledge-enhanced transform-based multimodal classifier for microbial keratitis identification. Scientific Reports 13: 9003 DOI 10.1038/s41598-023-36024-4.
Essalat M, Abolhosseini M, Le TH, Moshtaghion SM, Kanavi MR (2023) Interpretable deep learning for diagnosis of fungal and acanthamoeba keratitis using in vivo confocal microscopy images. Scientific Reports 13: 8953 DOI 10.1038/s41598-023-35085-9.
Liang S, Zhong J, Zeng H, Zhong P, Li S, Liu H, Yuan J (2023) A Structure-Aware Convolutional Neural Network for Automatic Diagnosis of Fungal Keratitis with In Vivo Confocal Microscopy Images. Journal of Digital Imaging 36: 1624–1632 DOI 10.1007/s10278-021-00549-9.
Wei Z, Wang S, Wang Z, Zhang Y, Chen K, Gong L, Li G, Zheng Q, Zhang Q, He Y, Zhang Q, Chen D, Cao K, Pang J, Zhang Z, Wang L, Ou Z, Liang Q (2023) Development and multi-center validation of machine learning model for early detection of fungal keratitis. eBioMedicine 88 DOI 10.1016/j.ebiom.2023.104438.
Tang N, Huang G, Lei D, Jiang L, Chen Q, He W, Tang F, Hong Y, Lv J, Qin Y, Lin Y, Lan Q, Qin Y, Lan R, Pan X, Li M, Xu F, Lu P (2023) An artificial intelligence approach to classify pathogenic fungal genera of fungal keratitis using corneal confocal microscopy images. International Ophthalmology 43: 2203–2214 DOI 10.1007/s10792-022-02616-8.
Zhang Z, Wang H, Wang S, Wei Z, Zhang Y, Wang Z, Chen K, Ou Z, Liang Q (2022) Deep learning-based classification of infectious keratitis on slit-lamp images. Therapeutic Advances in Chronic Disease 13: 20406223221136071 DOI 10.1177/20406223221136071.
Redd TK, Prajna NV, Srinivasan M, Lalitha P, Krishnan T, Rajaraman R, Venugopal A, Acharya N, Seitzman GD, Lietman TM, Keenan JD, Campbell JP, Song X (2022) Image-Based Differentiation of Bacterial and Fungal Keratitis Using Deep Convolutional Neural Networks. Ophthalmology Science 2 DOI 10.1016/j.xops.2022.100119.
Xu F, Jiang L, He W, Huang G, Hong Y, Tang F, Lv J, Lin Y, Qin Y, Lan R, Pan X, Zeng S, Li M, Chen Q, Tang N (2021) The Clinical Value of Explainable Deep Learning for Diagnosing Fungal Keratitis Using in vivo Confocal Microscopy Images. Frontiers in Medicine 8 DOI 10.3389/fmed.2021.797616.
Kuo M-T, Hsu BW-Y, Lin Y-S, Fang P-C, Yu H-J, Chen A, Yu M-S, Tseng VS (2021) Comparisons of deep learning algorithms for diagnosing bacterial keratitis via external eye photographs. Scientific Reports 11: 24227 DOI 10.1038/s41598-021-03572-6.
Koyama A, Miyazaki D, Nakagawa Y, Ayatsuka Y, Miyake H, Ehara F, Sasaki S-i, Shimizu Y, Inoue Y (2021) Determination of probability of causative pathogen in infectious keratitis using deep learning algorithm of slit-lamp images. Scientific Reports 11: 22642 DOI 10.1038/s41598-021-02138-w.
Ghosh AK, Thammasudjarit R, Jongkhajornpong P, Attia J, Thakkinstian A (2022) Deep Learning for Discrimination Between Fungal Keratitis and Bacterial Keratitis: DeepKeratitis. Cornea 41.
Hung N, Shih AK-Y, Lin C, Kuo M-T, Hwang Y-S, Wu W-C, Kuo C-F, Kang EY-C, Hsiao C-H (2021) Using Slit-Lamp Images for Deep Learning-Based Identification of Bacterial and Fungal Keratitis: Model Development and Validation with Different Convolutional Neural Networks. Diagnostics 11: 1246.
Tiwari M, Piech C, Baitemirova M, Prajna NV, Srinivasan M, Lalitha P, Villegas N, Balachandar N, Chua JT, Redd T, Lietman TM, Thrun S, Lin CC (2022) Differentiation of Active Corneal Infections from Healed Scars Using Deep Learning. Ophthalmology 129: 139–146 DOI 10.1016/j.ophtha.2021.07.033.
Li Z, Jiang J, Chen K, Chen Q, Zheng Q, Liu X, Weng H, Wu S, Chen W (2021) Preventing corneal blindness caused by keratitis using artificial intelligence. Nature Communications 12: 3738 DOI 10.1038/s41467-021-24116-6.
Wu X, Qiu Q, Liu Z, Zhao Y, Zhang B, Zhang Y, Wu X, Ren J (2018) Hyphae Detection in Fungal Keratitis Images With Adaptive Robust Binary Pattern. IEEE Access 6: 13449–13460 DOI 10.1109/ACCESS.2018.2808941.
Saini JS, Jain AK, Kumar S, Vikal S, Pankaj S, Singh S (2003) Neural network approach to classify infective keratitis. Current Eye Research 27: 111–116 DOI 10.1076/ceyr.27.2.111.15949.
In addition to slit-lamp images, AI models have incorporated images from handheld cameras to diagnose IK (Table 2). This approach offers particular utility given the availability of handheld cameras in resource-limited areas. Recently, Tiwari et al., using 1313 photographs of corneal ulcers and 1132 of scars, developed a CNN model to differentiate active corneal ulcers from healed scars. The model achieved F1 scores of 92% and 84.3% and AUROCs of 0.97 and 0.95 for patient populations in India and Northern California, respectively (37). Using Gradient-weighted Class Activation Mapping (Grad-CAM) heatmaps, the authors showed that the CNN model could highlight features beyond conjunctival injection, such as hypopyon and purulent corneal infiltrates, as salient features of active corneal ulcers (37). Overall, AI models trained on slit-lamp images generally outperform those trained on handheld camera images in diagnosing IK (38). Nonetheless, the diagnostic performance of CNN models trained on handheld camera images is comparable to that of human ophthalmologists, with newer CNN models performing at an expert level even on poorer-quality images, making them a promising modality for early screening of IK in resource-limited areas (39, 40).
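Grad-CAM, used by Tiwari et al. and several other groups cited here, weights the final convolutional feature maps by the gradient of the class score to produce a coarse saliency heatmap over the input image. The generic sketch below assumes a torchvision ResNet-18 backbone and a random placeholder image; it illustrates the technique, not the authors' exact implementation.

```python
# Generic Grad-CAM sketch (assumed ResNet-18 backbone; placeholder input).
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
features, grads = {}, {}

def fwd_hook(module, inp, out):
    features["maps"] = out            # activations of the last conv block

def bwd_hook(module, grad_in, grad_out):
    grads["maps"] = grad_out[0]       # gradients flowing back into that block

layer = model.layer4                  # last convolutional block of ResNet-18
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)       # random tensor standing in for a photograph
score = model(x)[0].max()             # score of the top predicted class
model.zero_grad()
score.backward()

# Channel weights = global-average-pooled gradients; CAM = ReLU of weighted sum.
weights = grads["maps"].mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * features["maps"]).sum(dim=1)).squeeze()
cam = cam / (cam.max() + 1e-8)        # normalize to [0, 1] for heatmap overlay
print(cam.shape)                      # coarse spatial saliency map, e.g. 7x7
```

In practice, the low-resolution map is upsampled and overlaid on the photograph, which is how the highlighted ulcer features described above are visualized.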
Table 3.
Pterygium.
| Data type | Sample Size | Study | Methods | Key findings | Performance | Reference |
|---|---|---|---|---|---|---|
| Smartphone images | 22081 images | Detection of pterygium | RFRC (Faster R-CNN based on ResNet101) for the detection model and SRU-Net (U-Net based on SE-ResNeXt50) for the segmentation model | The fusion model achieves good performance on smartphone-based images (e.g., F1 = 0.9313, Acc = 92.38 %) relative to the model trained on slit-lamp images (e.g., F1 = 0.9448, Acc = 94.29 %), comparable to experienced corneal specialists | Fusion model’s performance on smartphone-based images: Acc = 92.38 %, Sens = 93.6 %, Spec = 96.13 %, AUROC = 0.9426, F1 score = 0.9313 | Liu et al. (2023)1 |
| Histopathological images | 400 images | Evaluation of histopathological images of pterygium | ML – bagging tree, k-nearest neighbor | The best performing model, a bagging tree classifier, achieves good grading performance and demonstrates reproducibility on an external validation data set | For the bagging tree model on the external validation set: Sens = 81.3 %, PPV = 82.0 % | Kim et al. (2023)2 |
| Slit-lamp images | 734 images | Development of a two-category model and a segmentation model to aid the diagnosis of pterygium | AlexNet, VGG16, ResNet18, and ResNet50 | The VGG16 model displays the best diagnostic performance among the two-category pterygium models, while the double phase-fusion PSPNet model displays the best performance among the pterygium segmentation models | Acc = 99 %, Sens = 98.67 %, Spec = 99.3 %, AUROC = 0.99, F1 score = 0.99 | Zhu et al. (2022)3 |
| Tear samples | 89 samples | Incorporation of cytokine biomarkers to aid the diagnosis and prognosis of pterygium | Naïve Bayes (NB), k-nearest neighbor, logistic regression, random forest, decision tree, and gradient boosting tree | The NB model performs best in the diagnosis of pterygium, with AUROC = 0.99. A novel nomogram is proposed that can predict the 1- and 2-year probability of pterygium recurrence | Diagnostic performance: AUROC = 0.99. Prediction performance: AUROC at 1 year = 0.84; AUROC at 2 years = 0.92 | Wan et al. (2022)4 |
| Slit-lamp images | 237 images | Grading pterygium and predicting recurrence | ANN – multilayer perceptron | The system displays low sensitivity but high specificity in pterygium recurrence prediction. The model displays good performance in grading pterygium | Grading performance: Acc = 86.67–91.67 %, Sens = 80–91.67 %, Spec = 91.67–100 %. Recurrence prediction performance: Sens = 66.67 %, Spec = 81.82 % | Hung et al. (2022)5 |
| Slit-lamp images | 489 images | Application of a DL method to aid segmentation and measurement of the clinical progression of pterygium | U-Net++ with incorporation of an attention gate | The model performs well, with Dice coefficients of 0.962 for cornea segmentation and 0.902 for pterygium segmentation | Prediction results at an optimized threshold (3.17 mm): Acc = 92.37 %, Sens = 90.24 %, Spec = 93.51 %, AUROC = 0.9586, F1 = 0.8916 | Wan et al. (2022)6 |
| Multimodal clinical data | 93 samples | Prediction of BCVA after pterygium surgery | SVM, decision tree, logistic regression, and Naïve Bayes | SVM can produce highly accurate classification models for predicting changes in BCVA after pterygium surgery, displaying an accuracy, specificity, and sensitivity of 94.4 %, 100 %, and 92.14 %, respectively | SVM performance: Acc = 94.4 %, Sens = 92.14 %, Spec = 100 %, AUROC = 0.983 | Jais et al. (2021)7 |
| Slit-lamp and handheld camera images | 2503 images | Detection of pterygium | VGG16, ImageNet | The DL algorithms display AUROC > 0.95, achieving similar performance between hand-held cameras and slit-lamp mounted cameras | For detection of any pterygium in external data sets 1 and 2: Acc = 98.4 % and 88.6 %, Sens = 95.9 % and 100 %, Spec = 98.5 % and 88.3 %, AUROC = 0.991 and 0.997. For detection of referable pterygium in external data sets 1 and 2: Acc = 99.3 % and 98 %, Sens = 87.2 % and 94.3 %, Spec = 99.4 % and 98.0 %, AUROC = 0.997 and 0.990 | Fang et al. (2022)8 |
| Smartphone images | 120 images | Detection of pterygium | CNN | The model demonstrates high detection sensitivity and specificity for pterygium | For pterygium detection: Sens = 95 %, Spec = 98.3 %. For pterygium tissue localization: Acc = 81.1 % | Zulkifley et al. (2019)9 |
| Handheld smartphone and digital camera images | 3017 images | Detection of pterygium | SVM and ANN | The proposed model can classify pterygium and non-pterygium cases with good performance, with the SVM model achieving the best classification results | Performance of the SVM model: Sens = 88.7 %, Spec = 88.3 %, AUROC = 0.956 | Wan Zaki et al. (2018)10 |
Liu Y, Xu C, Wang S, Chen Y, Lin X, Guo S, Liu Z, Wang Y, Zhang H, Guo Y, Huang C, Wu H, Li Y, Chen Q, Hu J, Luo Z, Liu Z (2023) Accurate detection and grading of pterygium through smartphone by a fusion training model. British Journal of Ophthalmology: bjo-2022-322552 DOI 10.1136/bjo-2022-322552.
Kim JH, Kim YJ, Lee YJ, Hyon JY, Han SB, Kim KG (2023) Automated histopathological evaluation of pterygium using artificial intelligence. British Journal of Ophthalmology 107: 627-634 DOI 10.1136/bjophthalmol-2021-320141.
Zhu S, Fang X, Qian Y, He K, Wu M, Zheng B, Song J (2022) Pterygium Screening and Lesion Area Segmentation Based on Deep Learning. Journal of Healthcare Engineering 2022: 3942110 DOI 10.1155/2022/3942110.
Wan Q, Wan P, Liu W, Cheng Y, Gu S, Shi Q, Su Y, Wang X, Liu C, Wang Z (2022) Tear film cytokines as prognostic indicators for predicting early recurrent pterygium. Experimental Eye Research 222: 109140 DOI 10.1016/j.exer.2022.109140.
Hung K-H, Lin C, Roan J, Kuo C-F, Hsiao C-H, Tan H-Y, Chen H-C, Ma DH-K, Yeh L-K, Lee OK-S (2022) Application of a Deep Learning System in Pterygium Grading and Further Prediction of Recurrence with Slit Lamp Photographs. Diagnostics 12: 888.
Wan C, Shao Y, Wang C, Jing J, Yang W (2022) A Novel System for Measuring Pterygium’s Progress Using Deep Learning. Frontiers in Medicine 9 DOI 10.3389/fmed.2022.819971.
Jais FN, Che Azemin MZ, Hilmi MR, Mohd Tamrin MI, Kamal KM (2021) Postsurgery Classification of Best-Corrected Visual Acuity Changes Based on Pterygium Characteristics Using the Machine Learning Technique. The Scientific World Journal 2021: 6211006 DOI 10.1155/2021/6211006.
Fang X, Deshmukh M, Chee ML, Soh Z-D, Teo ZL, Thakur S, Goh JHL, Liu Y-C, Husain R, Mehta JS, Wong TY, Cheng C-Y, Rim TH, Tham Y-C (2022) Deep learning algorithms for automatic detection of pterygium using anterior segment photographs from slit-lamp and hand-held cameras. British Journal of Ophthalmology 106: 1642-1647 DOI 10.1136/bjophthalmol-2021-318866.
Zulkifley MA, Abdani SR, Zulkifley NH (2019) Pterygium-Net: a deep learning approach to pterygium detection and localization. Multimedia Tools and Applications 78: 34563-34584 DOI 10.1007/s11042-019-08130-x.
Wan Zaki WMD, Mat Daud M, Abdani SR, Hussain A, Mutalib HA (2018) Automated pterygium detection method of anterior segment photographed images. Computer Methods and Programs in Biomedicine 154: 71-78 DOI 10.1016/j.cmpb.2017.10.026.
IVCM has emerged as a powerful tool in diagnosing IK, owing to its imaging depth and resolution (41). Several recent AI approaches have incorporated IVCM data into CNN model training to diagnose fungal keratitis, achieving high diagnostic accuracies of >93% (42–45). Notably, IVCM images have been utilized to train AI-based hyphae detection (42, 46). More recently, Tang et al. demonstrated a DL approach to automate the classification of fungal genera using a dataset of 3364 IVCM images, reaching accuracies/AUROCs of 81.7%/0.887 and 75.7%/0.827 in classifying Fusarium and Aspergillus, respectively (47).
Pterygium
Pterygium is a common ocular surface disorder characterized by abnormal conjunctival growth onto the cornea (48). Because pterygium is prevalent in underserved communities with limited access to ophthalmologists, delays in surgical intervention can lead to severe visual outcomes, including blindness. Consequently, AI has found utility as a screening tool for pterygium using input data ranging from smartphone to slit-lamp mounted anterior segment images. Studies employing AI models (e.g., SVM, RF) have achieved high diagnostic performance, reaching AUROC > 0.9 (Table 3). Fang et al. assessed DL algorithms incorporating anterior segment photographs from slit-lamp and hand-held cameras to detect and grade pterygium, achieving high accuracy (>88%) in detecting any pterygium and referable-level pterygium across multiple test sets (49). Recently, Liu et al. combined a larger dataset of slit-lamp images with a smaller smartphone dataset to create a fusion training model, which achieved impressive performance in pterygium detection from smartphone images, with an accuracy of 92.38% and AUROC of 0.9426, comparable to experienced ophthalmologists (50) (Table 3). Apart from integrating anterior segment images, several studies have expanded AI applications into the analysis of histopathological images and tear samples (51, 52). Further AI applications have extended into the management of pterygium, such as predicting its recurrence and best corrected visual acuity (BCVA) after surgery (52, 53) (Table 3).
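Many of these classifiers follow the same transfer-learning recipe: fine-tune an ImageNet-pretrained CNN on a modest set of labeled anterior segment photographs. A minimal sketch of that recipe is shown below; the folder layout (data/{pterygium,normal}), image size, and hyperparameters are illustrative assumptions, not the pipeline of any cited study.

```python
# Transfer-learning sketch for binary pterygium detection from anterior
# segment photographs (illustrative, not a published pipeline).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Assumes images organized as data/pterygium/*.jpg and data/normal/*.jpg.
dataset = datasets.ImageFolder("data", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

# ImageNet-pretrained VGG16 with the final layer swapped for two classes.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                  # freeze the convolutional backbone
model.classifier[6] = nn.Linear(4096, 2)

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                # one illustrative epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Freezing the backbone and training only the new head is what allows such models to reach high accuracy from a few hundred to a few thousand labeled images.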
Dry eye disease
Dry eye disease (DED) is a multifactorial ocular surface condition characterized by disruption of tear film stability and homeostasis (54). Etiologies of DED include reduced tear production, increased tear film evaporation, and ocular surface damage (54). Despite its widespread impact, diagnosing DED poses significant challenges due to the diverse underlying causes and varying symptoms. Given this, AI algorithms incorporating diverse diagnostic modalities, ranging from meibography to AS-OCT images, have improved the detection, prediction, and etiological analysis of DED (Table 4).
Table 4.
Dry Eye Disease.
| Data Type | Sample Size (N) | Study | Methods | Key Findings | Performance | Reference |
|---|---|---|---|---|---|---|
| Smartphone images | 1021 images | Measurement of tear meniscus height | DL | The model demonstrates robust performance in automated tear meniscus height measurement for potential DED diagnosis | Dice coefficient = 0.9868; Acc = 95.39 % | Nejat et al. (2024)1 |
| Meibography images | 82,236 images | Detecting clinical characteristics of dry eye patients | DL – SimCLR NN | The unsupervised NN model can cluster patients with dry eye into six subtypes with distinct clinical characteristics using meibography images | – | Li et al. (2023)2 |
| Smart Eye Camera (video-recordable slit-lamp device) | 22,172 frames from 158 eyes | Detection of DED via estimating tear film breakup time (TFBUT) | DL – convolutional neural network (CNN) | The model displays high accuracy and AUROC in estimating TFBUT using ocular surface videos | For estimating TFBUT: Acc = 78.9 %, AUROC = 0.877, F1 score = 0.74; For diagnosing DED using ADES criteria: Sens = 77.8 %, Spec = 85.7 %, AUROC = 0.813 | Shimizu et al. (2023)3 |
| Tear film images | 9089 image patches from 350 eyes | Detection of TFBUT | DL – CNN (ResNet50) | The model can detect TFBUT with high accuracy using tear film images taken by a non-invasive device | For classifying tear breakup vs. non-breakup: Acc = 92.4 %, Sens = 83.4 %, Spec = 95.2 % | Kikukawa et al. (2023)4 |
| Ocular surface videos | 244 eyes | Detection of DED | Deep transfer learning | The deep transfer learning model displays high accuracy in detecting DED from ocular surface video data. The lower paracentral cornea was identified as the most important region by the CNN model for detection of DED | For discriminating DED and normal eyes: AUROC = 0.98 | Abdelmotaal et al. (2023)5 |
| Oculus camera photographs | 510 images | Measurement of tear meniscus height | DL – CNN | The model demonstrates robust performance in segmenting, identifying, and quantifying the tear meniscus | For corneal segmentation: Dice = 0.99, IoU = 0.98; For tear meniscus detection: Dice = 0.92, IoU = 0.86 | Wang et al. (2023)6 |
| Anterior segment images | 832 images | Identification of lid margin signs for DED | DL | The model can identify lid margin signs with high sensitivity and specificity | AUROC = 0.979 for posterior lid margin rounding (Sens = 97.4 %, Spec = 93.8 %); AUROC = 0.977 for lid margin irregularity; AUROC = 0.98 for lid margin vascularization; AUROC = 0.963 for meibomian gland orifice (MGO) retroplacement; AUROC = 0.968 for MGO plugging | Wang et al. (2023)7 |
| Anterior slit-lamp images | 1046 images | Grading punctate epithelial erosions (PEE) | Deep NN | The model can grade PEE with good accuracy, illustrating its potential utility as a training platform | Segmentation performance: IoU = 0.937; Grading performance: Acc = 76.5 %, AUROC = 0.94 | Qu et al. (2023)8 |
| Meibography images | 1600 images | Segment MG and eyelids, analyze MG area, and estimate the meiboscore | DL – ResNet | The DL model demonstrates robust automated performance in evaluating MG morphology, from segmentation to meiboscore, comparable to human ophthalmologists | Meiboscore classification performance: Acc = 73.01 % on validation set | Saha et al. (2022)9 |
| Multimodal clinical data | 432 patients | Prediction of unstable tear film from clinical data | ML – AdaBoostM1, LogitBoost, RF | The applied ML algorithms outperform the baseline classification scheme (i.e., ZeroR) | Average recall and precision > 0.74 | Fineide et al. (2022)10 |
| Infrared meibography | 4006 meibography images | Segment and diagnose MGD via meibomian gland density | DL and TL | The model illustrates the utility of meibomian gland density in improving the accuracy of meibography analysis | Segmentation performance: Acc = 92 %, repeatability = 100 %; MG density in total eyelids: Sens = 88 %, Spec = 81 %, AUROC = 0.9 | Zhang et al. (2022)11 |
| Blink videos (collected via Keratograph 5M) | 1019 image sets | Detection of DED via blink analysis | DL | The model can analyze blink videos with high accuracy and sensitivity. Incomplete blinking frequency was found to be closely associated with DED symptoms | For 30 FPS videos: balanced accuracy = 95.82 %, Sens = 99.38 %, IoU = 0.8868, Dice = 0.9251 | Zheng et al. (2022)12 |
| OCT epithelial mapping | 228 eyes | Detection of DED | ML – RF and LR | Inclusion of OCT corneal epithelial mapping can facilitate the diagnosis of DED with high sensitivity and specificity | Sens = 86.4 %, Spec = 91.7 %, AUROC = 0.87 | Edorh et al. (2022)13 |
| AS-OCT | 27,180 images from 151 eyes | Evaluation of a DL approach to diagnose DED using AS-OCT images | DL – VGG19 | The model displays robust performance in detecting DED, outperforming standard clinical DED tests and comparable to cornea specialists | Acc = 84.62 %, Sens = 86.36 %, Spec = 82.35 % | Chase et al. (2021)14 |
| Infrared meibography images | 1039 images | Quantification of MG morphology | DL | The model can automatically segment meibomian glands, identify ghost glands, and quantitatively analyze gland morphological features with good performance | Segmentation performance: IoU = 0.63; Identification of ghost glands: Sens = 84.4 %, Spec = 71.7 % | Wang et al. (2021)15 |
| Infrared meibography | 728 images | Development of an automated DL method to segment MG | DL | The model demonstrates robust performance in segmenting MG | Segmentation performance: precision = 83 %, recall = 81 %, F1 score = 84 %, Dice = 0.84, AUROC = 0.96 | Setu et al. (2021)16 |
| Meibography images | 90 images | Quantification of MG irregularities | ML – conditional generative adversarial network (cGAN) | The proposed technique outperforms state-of-the-art methods for detection and analysis of MGD dropout area and provides a notable improvement in quantifying irregularities in infrared MG images | F1 = 0.825; average Pompeiu–Hausdorff distance = 0.664; mean loss area = 30.1 %; R = 0.962 and 0.968 between automatic and manual analysis | Khan et al. (2021)17 |
| Meibography images | 497 images | Measure MG atrophy | DL – NPID, using a CNN backbone | The model can automatically analyze MG atrophy and categorize gland characteristics via hierarchical clustering with good performance, outperforming human clinicians in meiboscore grading accuracy | Meiboscore grading accuracy with pretrained model = 80.9 %; without pretrained model = 63.6 % | Yeh et al. (2021)18 |
| OCT | 6658 images | Segmentation of lower tear meniscus images | DL | The proposed approach displays robust segmentation and localization of the lower tear meniscus | Acc > 99.2 %, Sens > 96.3 %, Spec > 99.8 %, IoU > 0.931 | Stegmann et al. (2020)19 |
| IVCM | 4985 images | Classification of MGD into obstructive, atrophic, and normal groups | DL – DenseNet169 and others | The best performing model, DenseNet169, shows high diagnostic accuracy in classifying obstructive and atrophic MGD | For DenseNet169: overall Acc > 97 %; For obstructive MGD: Sens = 88.8 %, Spec = 95.4 %; For atrophic MGD: Sens = 89.4 %, Spec = 98.4 % | Zhang et al. (2021)20 |
| IVCM | 221 images | Detection of MGD | DL – VGG16, DenseNet169, DenseNet201, InceptionV3 | The DL approach integrating IVCM can differentiate healthy meibomian glands and MGD with high accuracy | For the highest performing model (DenseNet201): Sens = 94.1 %, Spec = 82.1 %, AUROC = 0.966 | Maruoka et al. (2020)21 |
| Meibography images | 706 images | Segment and quantify MG atrophy | DL | The proposed DL can segment total eyelid and meibomian gland atrophy regions with high accuracy and consistency. The model also achieves accurate meiboscore grading, outperforming human clinical teams | Meiboscore grading Acc = 95.6 %; For eyelid segmentation: Acc = 97.6 %, IoU = 95.5 %; For atrophy segmentation: Acc = 95.4 %, IoU = 66.7 %; RMSD for atrophy prediction = 6.7 % | Wang et al. (2019)22 |
| Slit-lamp data | 50 subjects | Detection of fluorescent tear film break-up area | DL – CNN | The model achieves robust performance with good agreement with standard methods of measuring tear film stability (i.e., TFBUT) | R = 0.9 between CNN-BUT and the TFBUT test; as a metric, CNN-BUT is significantly lower in patients with DED (p < 0.05); at a cutoff of 5 s: Sens = 83 %, Spec = 95 %, AUROC = 0.96 | Su et al. (2018)23 |
Nejat F, Eghtedari S, Alimoradi F (2024) Next-Generation Tear Meniscus Height detecting and measuring smartphone-based deep learning algorithm leads in dry eye management. Ophthalmology Science: 100546 DOI 10.1016/j.xops.2024.100546.
Li S, Wang Y, Yu C, Li Q, Chang P, Wang D, Li Z, Zhao Y, Zhang H, Tang N, Guan W, Fu Y, Zhao Y-e (2023) Unsupervised Learning Based on Meibography Enables Subtyping of Dry Eye Disease and Reveals Ocular Surface Features. Investigative Ophthalmology & Visual Science 64: 43–43 DOI 10.1167/iovs.64.13.43.
Shimizu E, Ishikawa T, Tanji M, Agata N, Nakayama S, Nakahara Y, Yokoiwa R, Sato S, Hanyuda A, Ogawa Y, Hirayama M, Tsubota K, Sato Y, Shimazaki J, Negishi K (2023) Artificial intelligence to estimate the tear film breakup time and diagnose dry eye disease. Scientific Reports 13: 5822 DOI 10.1038/s41598-023-33021-5.
Kikukawa Y, Tanaka S, Kosugi T, Pflugfelder SC (2023) Non-invasive and objective tear film breakup detection on interference color images using convolutional neural networks. PLOS ONE 18: e0282973 DOI 10.1371/journal.pone.0282973.
Abdelmotaal H, Hazarbasanov R, Taneri S, Al-Timemy A, Lavric A, Takahashi H, Yousefi S (2023) Detecting dry eye from ocular surface videos based on deep learning. The Ocular Surface 28: 90–98 DOI 10.1016/j.jtos.2023.01.005.
Wang S, He X, He J, Li S, Chen Y, Xu C, Lin X, Kang J, Li W, Luo Z, Liu Z (2023) A Fully Automatic Estimation of Tear Meniscus Height Using Artificial Intelligence. Investigative Ophthalmology & Visual Science 64: 7–7 DOI 10.1167/iovs.64.13.7.
Wang Y, Jia X, Wei S, Li X (2023) A deep learning model established for evaluating lid margin signs with colour anterior segment photography. Eye 37: 1377–1382 DOI 10.1038/s41433-022-02088-1.
Qu J-H, Qin X-R, Li C-D, Peng R-M, Xiao G-G, Cheng J, Gu S-F, Wang H-K, Hong J (2023) Fully automated grading system for the evaluation of punctate epithelial erosions using deep neural networks. British Journal of Ophthalmology 107: 453–460 DOI 10.1136/bjophthalmol-2021-319755.
Saha RK, Chowdhury AMM, Na K-S, Hwang GD, Eom Y, Kim J, Jeon H-G, Hwang HS, Chung E (2022) Automated quantification of meibomian gland dropout in infrared meibography using deep learning. The Ocular Surface 26: 283–294 DOI 10.1016/j.jtos.2022.06.006.
Fineide F, Storås AM, Chen X, Magnø MS, Yazidi A, Riegler MA, Utheim TP (2022) Predicting an unstable tear film through artificial intelligence. Scientific Reports 12: 21416 DOI 10.1038/s41598-022-25821-y.
Zhang Z, Lin X, Yu X, Fu Y, Chen X, Yang W, Dai Q (2022) Meibomian Gland Density: An Effective Evaluation Index of Meibomian Gland Dysfunction Based on Deep Learning and Transfer Learning. Journal of Clinical Medicine 11: 2396.
Zheng Q, Wang L, Wen H, Ren Y, Huang S, Bai F, Li N, Craig JP, Tong L, Chen W (2022) Impact of Incomplete Blinking Analyzed Using a Deep Learning Model With the Keratograph 5M in Dry Eye Disease. Translational Vision Science & Technology 11: 38–38 DOI 10.1167/tvst.11.3.38.
Edorh NA, Maftouhi AE, Djerada Z, Arndt C, Denoyer A (2022) New model to better diagnose dry eye disease integrating OCT corneal epithelial mapping. British Journal of Ophthalmology 106: 1488–1495 DOI 10.1136/bjophthalmol-2021-318826.
Chase C, Elsawy A, Eleiwa T, Ozcan E, Tolba M, Abou Shousha M (2021) Comparison of Autonomous AS-OCT Deep Learning Algorithm and Clinical Dry Eye Tests in Diagnosis of Dry Eye Disease. Clin Ophthalmol 15: 4281–4289 DOI 10.2147/OPTH.S321764.
Wang J, Li S, Yeh TN, Chakraborty R, Graham AD, Yu SX, Lin MC (2021) Quantifying Meibomian Gland Morphology Using Artificial Intelligence. Optometry and Vision Science 98.
Setu MAK, Horstmann J, Schmidt S, Stern ME, Steven P (2021) Deep learning-based automatic meibomian gland segmentation and morphology assessment in infrared meibography. Scientific Reports 11: 7649 DOI 10.1038/s41598-021-87314-8.
Khan ZK, Umar AI, Shirazi SH, Rasheed A, Qadir A, Gul S (2021) Image based analysis of meibomian gland dysfunction using conditional generative adversarial neural network. BMJ Open Ophthalmology 6: e000436 DOI 10.1136/bmjophth-2020-000436.
Yeh C-H, Yu SX, Lin MC (2021) Meibography Phenotyping and Classification From Unsupervised Discriminative Feature Learning. Translational Vision Science & Technology 10: 4–4 DOI 10.1167/tvst.10.2.4.
Stegmann H, Werkmeister RM, Pfister M, Garhöfer G, Schmetterer L, dos Santos VA (2020) Deep learning segmentation for optical coherence tomography measurements of the lower tear meniscus. Biomedical Optics Express 11: 1539–1554 DOI 10.1364/BOE.386228.
Zhang Y-Y, Zhao H, Lin J-Y, Wu S-N, Liu X-W, Zhang H-D, Shao Y, Yang W-F (2021) Artificial Intelligence to Detect Meibomian Gland Dysfunction From in-vivo Laser Confocal Microscopy. Frontiers in Medicine 8 DOI 10.3389/fmed.2021.774344.
Maruoka S, Tabuchi H, Nagasato D, Masumoto H, Chikama T, Kawai A, Oishi N, Maruyama T, Kato Y, Hayashi T, Katakami C (2020) Deep Neural Network-Based Method for Detecting Obstructive Meibomian Gland Dysfunction With in Vivo Laser Confocal Microscopy. Cornea 39.
Wang J, Yeh TN, Chakraborty R, Yu SX, Lin MC (2019) A Deep Learning Approach for Meibomian Gland Atrophy Evaluation in Meibography Images. Translational Vision Science & Technology 8: 37–37 DOI 10.1167/tvst.8.6.37.
Su TY, Liu ZY, Chen DY (2018) Tear Film Break-Up Time Measurement Using Deep Convolutional Neural Networks for Screening Dry Eye Disease. IEEE Sensors Journal 18: 6857–6862 DOI 10.1109/JSEN.2018.2850940.
The meibomian glands (MG) play a crucial role in maintaining tear film stability and quality by secreting the lipid component of the tear film. Meibomian gland dysfunction (MGD) is thus among the most common etiologies of DED, and MG morphology is closely associated with the severity of MGD (55). Meibography is a commonly used modality to diagnose MGD, and its images often serve as inputs for AI model training. Recent studies have employed DL approaches to facilitate the detection of DED via analysis of MG features (56–65) (Table 4). Wang et al. employed a DL method to segment MG atrophy areas and calculate percent atrophy in meibography images. The trained model achieved high meiboscore grading accuracy (95.6%) and outperformed human clinicians (65). Setu et al. used a DL approach to segment MG by learning features from 728 clinical meibography images without preprocessing, demonstrating an AUROC of 0.96 and a Dice coefficient of 0.84 (62). Additionally, Wang et al. developed a DL model to segment MG areas by integrating 1087 meibography images from patients with DED, achieving high segmentation performance (AUROC of 0.938) (57). Recently, Li et al. employed an unsupervised DL method to classify patients with DED from a dataset of 82,236 meibography images. Using the SimCLR NN, patients were clustered into six image-based subtypes with distinct clinical profiles. These subtypes showed significant variations in tear breakup time and tear meniscus height, with specific subtypes exhibiting severe MG atrophy and others demonstrating high corneal fluorescein staining (56).
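Because these segmentation studies report their results chiefly as Dice and IoU overlap scores, a minimal, generic sketch of how those two metrics compare a predicted mask with a manual annotation may be useful; the implementation below is illustrative and not taken from any cited study.

```python
# Dice and IoU for binary segmentation masks (generic illustration).
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Dice = 2|A∩B| / (|A| + |B|); IoU = |A∩B| / |A∪B| for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    dice = 2.0 * inter / total if total else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# Toy example: two 4x4 masks overlapping in 3 foreground pixels.
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1        # 4 foreground pixels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:2] = 1; b[1, 2] = 1  # 3 pixels
print(dice_and_iou(a, b))   # -> (0.857..., 0.75)
```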
Anterior segment images and videos, notably from slit-lamp exams, have been used to assess ocular surface parameters including the tear film, conjunctival staining, and lid margins to aid the diagnosis of DED (66–69) (Table 4). An early study by Su et al. introduced a DL-based method, CNN-BUT, to automatically detect tear film break-up areas in DED using slit-lamp videos from 80 subjects, demonstrating an AUROC, sensitivity, and specificity of 0.96, 83%, and 95% in screening for DED (69). Recently, Wang et al. used DL models to identify lid margin signs for DED from slit-lamp images, achieving high AUROCs for features such as rounding of the posterior lid margin (0.979), lid margin irregularity (0.977), and vascularization (0.980) (67). Further, a recent study used a deep NN system to evaluate corneal punctate epithelial erosions (PEEs) from fluorescein staining images via corneal segmentation, image patch extraction, and grading (68). The system displayed high performance, with an IoU of 0.937 for corneal segmentation, a 76.5% classification accuracy for the grading model, and a 0.940 AUROC for punctate staining (68).
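The patch-extraction step in such a pipeline can be sketched as tiling the segmented corneal region into fixed-size patches for patch-level grading; the patch size and stride below are arbitrary illustrative choices, not those of the cited study.

```python
# Tiling a segmented cornea into patches for patch-level analysis (toy sketch).
import numpy as np

def extract_patches(image: np.ndarray, size: int = 64, stride: int = 64):
    """Yield non-overlapping size x size patches scanned across a 2-D image."""
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield image[y:y + size, x:x + size]

cornea = np.random.rand(256, 256)          # stand-in for a segmented cornea
patches = list(extract_patches(cornea))    # 16 patches of 64 x 64
print(len(patches), patches[0].shape)
```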
DL models incorporating IVCM data, which provide high-resolution images of the ocular surface, have been used to detect and classify DED (Table 4). Maruoka et al., using 137 IVCM images, applied DL models to detect obstructive MGD (70). The best performing ensemble model achieved robust performance, with an AUROC of 0.981, sensitivity of 92.1%, and specificity of 98.8% for diagnosing obstructive MGD (70). Another study used DL algorithms to differentiate obstructive MGD, atrophic MGD, and normal MG, with the best performing model, DenseNet169, achieving 97% accuracy in classifying among the three groups (71).
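The generic idea behind such an ensemble, averaging class probabilities across several independently trained CNNs, can be sketched as follows; the member networks here are untrained placeholders, not the cited study's fine-tuned models.

```python
# Probability-level ensembling across several CNN classifiers (illustrative).
import torch
from torchvision import models

@torch.no_grad()
def ensemble_predict(members, batch: torch.Tensor) -> torch.Tensor:
    """Average softmax probabilities over all ensemble members."""
    probs = [torch.softmax(m(batch), dim=1) for m in members]
    return torch.stack(probs).mean(dim=0)

# Untrained placeholders standing in for fine-tuned classifiers.
members = [models.resnet18(num_classes=3), models.densenet121(num_classes=3)]
for m in members:
    m.eval()

batch = torch.randn(2, 3, 224, 224)              # two dummy RGB images
print(ensemble_predict(members, batch).shape)    # -> torch.Size([2, 3])
```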
Several applications of DL and ML approaches incorporating AS-OCT data have demonstrated improvements in the detection of DED (72–74) (Table 4). Chase et al. showed that a DL approach could detect DED with high accuracy, outperforming corneal staining, conjunctival staining, and Schirmer's test, while displaying performance comparable to the Ocular Surface Disease Index and tear breakup time (p<0.05) (73). Edorh et al. integrated RF methods with corneal epithelial mapping from OCT to develop a scoring method that enhances the objective diagnosis of DED, incorporating the classifier's output into the scoring mechanism (74). Using data from 59 patients with DED, OCT revealed significant epithelial thinning in DED cases across all zones. Superior intermediate epithelial thickness proved to be the most effective marker for diagnosing DED, while the difference between the inferior and superior peripheral zones was the best marker for grading its severity. With the aid of the RF method, a multivariate model integrating OCT mapping and other variables demonstrated a high sensitivity of 86.4% and specificity of 91.7% in diagnosing DED (74).
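A hedged sketch of the general approach, a random forest trained on tabular zonal epithelial-thickness features with AUROC evaluated on a held-out split, is given below; the data are synthetic and the feature layout is an assumption, not the published model.

```python
# Random forest on synthetic zonal epithelial-thickness features (sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder: 228 eyes x 9 epithelial-thickness zones (micrometers).
X = rng.normal(loc=53.0, scale=3.0, size=(228, 9))
# Synthetic label rule: a thinner first zone (plus noise) indicates DED.
y = ((X[:, 0] + rng.normal(0, 2, 228)) < 52.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

probs = clf.predict_proba(X_test)[:, 1]
print("AUROC:", roc_auc_score(y_test, probs))
# Feature importances indicate which thickness zones drive the prediction.
print("Importances:", clf.feature_importances_.round(3))
```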
Fuchs dystrophy
Fuchs endothelial corneal dystrophy (FECD), a condition in which deterioration of endothelial cells leads to corneal edema, is among the leading indications for keratoplasty (75). Early detection of FECD is crucial for timely medical intervention. In this regard, several applications of AI have focused on automated detection and image segmentation for FECD (Table 5). Eleiwa et al. used a DL algorithm incorporating high-definition OCT data to discriminate early-stage FECD without clinically evident corneal edema from healthy and late-stage FECD, achieving an AUROC of 0.997 for detecting early FECD, 0.974 for late-stage FECD, and 0.998 for healthy corneas (76). Shilpashree et al. compared corneal endothelium characteristics in FECD patients and healthy subjects by segmenting specular microscopy (SM) images using a modified U-Net DL architecture followed by the watershed algorithm. The authors showed a lower endothelial cell density and an increased average perimeter length (APL) and percentage of guttae in FECD compared to healthy subjects, suggesting that APL elevation in FECD may signify alterations in corneal fluid dynamics (77).
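A minimal sketch of the watershed post-processing step, splitting touching cells in a binary endothelial mask (e.g., one produced by a U-Net) via a distance-transform watershed, is shown below; the parameter values are illustrative, not those of the cited work.

```python
# Distance-transform watershed to separate touching endothelial cells.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_cells(mask: np.ndarray) -> np.ndarray:
    """Label individual cells in a binary mask via distance-transform watershed."""
    mask = mask.astype(bool)
    distance = ndi.distance_transform_edt(mask)
    # One marker per cell: local maxima of the distance map.
    coords = peak_local_max(distance, min_distance=5, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    # Flood from the markers over the inverted distance map, within the mask.
    return watershed(-distance, markers, mask=mask)
```

Once cells are individually labeled, per-cell metrics such as density, perimeter length, and hexagonality follow directly from the label image.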
Table 5.
Fuchs endothelial corneal dystrophy.
| Data type | Sample size | Study | Methods | Key Findings | Performance | Reference |
|---|---|---|---|---|---|---|
| Specular microscopy (SM) images | 775 images | Detection of Fuchs endothelial corneal dystrophy (FECD) | DL – DenseNet121 | The DL model can reliably detect FECD from SM images | For external validation dataset: AUROC = 0.77, Sens = 69 %, Spec = 68 % | Foo et al. (2024)1 |
| Specular microscopy images | 1203 images | Segmentation of endothelial images with guttae from specular microscopy images | DL – UNet, ResUNeXt, DenseUNets, UNet++ | DenseUNets with fNLA achieve the best performance with the lowest error | For DenseUNets with fNLA: MAE = 23.16 cells/mm2 in endothelial cell density (ECD); coefficient of variation = 1.28 %; hexagonality = 3.13 % | Vigueras-Guillen et al. (2022)2 |
| Specular microscopy images | 246 images | Segmentation of corneal endothelium in FECD | DL – U-Net and watershed | The developed algorithm performs comparably to manual segmentation of the corneal endothelium in Fuchs dystrophy and healthy patients (p > 0.1) | AUROC = 0.967, F1 = 82.3 %, IoU = 77.27 % | Shilpashree et al. (2021)3 |
| AS-OCT | 18,720 images | Differentiation of healthy eyes and early- and late-stage Fuchs dystrophy | DL – VGG19 and transfer learning | The model displays good performance, with AUROC > 0.974 and sensitivity and specificity of 97 %, in detecting early- and late-stage Fuchs dystrophy and healthy eyes | For detecting early FECD: Sens = 91 %, Spec = 97 %, AUROC = 0.997; For detecting late-stage FECD: Sens = 100 %, Spec = 92 %, AUROC = 0.974; For distinguishing healthy corneas from all FECD: Sens = 99 %, Spec = 98 %, AUROC = 0.998 | Eleiwa et al. (2020)4 |
Foo VHX, Lim GYS, Liu Y-C, Ong HS, Wong E, Chan S, Wong J, Mehta JS, Ting DSW, Ang M (2024) Deep learning for detection of Fuchs endothelial dystrophy from widefield specular microscopy imaging: a pilot study. Eye and Vision 11: 11 DOI 10.1186/s40662-024-00378-1.
Vigueras-Guillen JP, van Rooij J, van Dooren BTH, Lemij HG, Islamaj E, van Vliet LJ, Vermeer KA (2022) DenseUNets with feedback non-local attention for the segmentation of specular microscopy images of the corneal endothelium with guttae. Scientific Reports 12: 14035 DOI 10.1038/s41598-022-18180-1.
Shilpashree PS, Suresh KV, Sudhir RR, Srinivas SP (2021) Automated Image Segmentation of the Corneal Endothelium in Patients With Fuchs Dystrophy. Translational Vision Science & Technology 10: 27–27 DOI 10.1167/tvst.10.13.27.
Eleiwa T, Elsawy A, Ozcan E, Abou Shousha M (2020) Automated diagnosis and staging of Fuchs’ endothelial cell corneal dystrophy using deep learning. Eye and Vision 7: 44 DOI 10.1186/s40662-020-00209-z.
Detection of multiple corneal diseases
AI algorithms have been employed to analyze and distinguish among multiple corneal conditions, integrating data ranging from slit-lamp to AS-OCT images (Table 6). Gu et al. developed a hierarchical DL network from 5325 slit-lamp images. Compared against 10 ophthalmologists on a dataset of 510 patients, the algorithm displayed an AUROC approaching or exceeding 0.9 for each disease type, with sensitivity and specificity similar to or surpassing human experts (78). Further, Li et al. proposed a workflow (i.e., Visionome) to segment anatomical structures and annotate pathological features in slit-lamp images to improve the performance of DL algorithms (79). The proposed approach facilitated comprehensive analysis of slit-lamp images, identifying and classifying potential lesions with only a few hundred images (79). In comparative tests, the algorithm trained with an equivalent number of images demonstrated superior performance over image-level classifiers, matching the diagnostic accuracy of ophthalmologists (79). Elsawy et al. used a DL approach incorporating AS-OCT images to distinguish among three common corneal diseases: DED, FECD, and KC. Using 134,460 images for training and validation, the model demonstrated promising diagnostic performance, with eye-level AUROCs >0.99 for DED, FECD, and KC (80). A more recent study by Ueno et al. employed DL models to diagnose and triage corneal diseases using smartphone images. The system achieved high AUROCs, from 0.960 to 1.000, for diagnosing various corneal conditions (i.e., IK, corneal scars, ocular surface tumors, corneal deposits, and bullous keratopathy). It also demonstrated high sensitivity and specificity for triage referral suggestions, with accuracies comparable to those of corneal specialists (81).
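Because such AS-OCT models classify many scans per eye, image-level outputs must be pooled into a single eye-level decision. A minimal sketch of one common pooling rule, averaging softmax probabilities across scans, follows; the cited studies' exact aggregation schemes may differ.

```python
# Pooling per-scan class probabilities into an eye-level prediction (sketch).
import numpy as np

def eye_level_prediction(image_probs: np.ndarray) -> int:
    """image_probs: (n_images, n_classes) softmax outputs for one eye's scans."""
    mean_probs = image_probs.mean(axis=0)   # average pooling across scans
    return int(mean_probs.argmax())         # index of the predicted class

scans = np.array([[0.7, 0.2, 0.1],          # e.g., classes: DED, FECD, KC
                  [0.6, 0.3, 0.1],
                  [0.8, 0.1, 0.1]])
print(eye_level_prediction(scans))          # -> 0 (first class) for this eye
```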
Table 6.
Miscellaneous corneal diseases.
| Corneal disease | Data type | Sample size | Study | Methods | Key Findings | Performance | Reference |
|---|---|---|---|---|---|---|---|
| Multiple corneal diseases | | | | | | | |
| Multiple | Smartphone images | 6442 images | Detection and triage of corneal diseases | DL – YOLO v3, v5, and RetinaNet | The system demonstrates robust performance in diagnosing and triaging corneal diseases using smartphone images, achieving diagnostic performance similar to that of corneal specialists | Diagnostic performance: AUROC = 0.998 for normal eyes, 0.986 for IK, 0.960 for immunological keratitis, 0.987 for corneal scars, 0.997 for ocular surface tumours, 0.993 for corneal deposits, 0.993 for bullous keratopathy; Triage performance: Sens = 100 %, Spec = 100 % for urgent cases; Sens = 86.7 %, Spec = 100 % for semi-urgent cases; Sens = 95.3 %, Spec = 98.3 % for routine cases; Sens = 100 %, Spec = 89.6 % for observation | Ueno et al. (2024)1 |
| Multiple | IVCM images | 19,612 images | Classification of corneal layer images from IVCM | DL – ResNet50 | The proposed model achieves high accuracy in the classification of corneal layers and detection of normal/abnormal images, similar to that of cornea specialists but with higher recognition speed | In external test dataset: Acc = 96 % for epithelium, 96.5 % for Bowman's membrane, 94.5 % for stroma, 96.6 % for endothelium; In distinguishing normal/abnormal images: Acc = 98.3 % for epithelium, 97.2 % for Bowman's membrane, 94 % for stroma, 98.2 % for endothelium; In human–machine competition: Acc = 98.2 % for the model | Yan et al. (2023)2 |
| Multiple | AS-OCT images | 1940 screening scans | Diagnosis of corneal abnormalities | CorNeXT, k-means, and RF | The model displays a sensitivity of 98.46 % and specificity of 91.96 % in recognizing normal and abnormal corneas, enabling screening and classification of corneal pathologies such as keratoconus | Acc = 95.45 %, F1 score = 95.88, Sens = 98.46 %, Spec = 91.96 %, AUROC = 0.9953 | Fassbind et al. (2023)3 |
| Multiple | AS-OCT | 16,721 images | Diagnosis of FECD and KC | DL network with novel features | The proposed network outperforms other existing models, with a classification accuracy of 91 % and a scan classification accuracy of 94 % | Acc = 91 %, Sens = 91 %, Spec = 95 %, F1-score = 0.91, AUROC = 0.94 | Elsawy et al. (2021)4 |
| Multiple | AS-OCT | 134,460 images | Diagnosis of common corneal diseases such as DED, FECD, and KC | DL – multi-disease deep learning diagnostic network (MDDN) | The model displays robust performance, with AUROC > 0.99 and F1 score > 0.9 across all diseases assessed | AUROC > 0.99, AUPRC > 0.96, F1 score > 0.9 | Elsawy et al. (2021)5 |
| Multiple | Slit-lamp images | 1772 images | Segmentation of anatomical structures and annotation of pathological features | DL – CNN | The model displays superior diagnostic capabilities compared to the algorithm trained solely on image-level labels, performing similarly to three human ophthalmologists | Avg Acc = 66–100 % | Li et al. (2020)6 |
| Multiple | Slit-lamp images | 5325 images | Detection of corneal diseases | Hierarchical DL framework based on the Inception-v3 CNN architecture | The model displays an AUROC approaching or exceeding 0.9 for each corneal disease type and sensitivity/specificity similar to those of ophthalmologists | AUROC = 0.96 for infectious corneal disease, 0.8916 for noninfectious corneal disease, 0.8949 for corneal degeneration/dystrophy, 0.9467 for neoplasm | Gu et al. (2020)7 |
| Multiple | Slit-lamp images | 1513 images | Development of a DL approach to aid automated diagnosis of ocular diseases, including pterygium | CNN | The system provides a proof of concept in identifying different corneal conditions, delineating different anatomical aspects, and delivering relevant diagnostic insights and treatment recommendations | Acc > 90 %, Sens > 95 %, Spec > 65 %, AUROC = 0.9595 | Zhang et al. (2018)8 |
| Miscellaneous corneal diseases | | | | | | | |
| Ocular surface pain | IVCM images | 340 eyes | Correlating clinical and imaging data to classify ocular surface pain | ML – random forest | The model displays an AUROC of 0.736 and accuracy of 86 % in classification performance | Acc = 86 %, AUROC = 0.736, F1 = 85.9 %, precision = 85.6 %, recall = 86.3 % | Kundu et al. (2022)9 |
| Neuropathy | IVCM images | 174 images | Quantification of corneal nerves to detect diabetic peripheral neuropathy (DPN) | DL – U-Net-based CNN combined with an adaptive neuro-fuzzy inference system (ANFIS) | The proposed DL method displays good agreement with human expert analysis in quantifying corneal nerves and accurate performance in detecting diabetic neuropathy | For discriminating DPN− from DPN+ subjects: Sens = 92 %, Spec = 80 %, AUROC = 0.95 | Salahouddin et al. (2021)10 |
| Corneal inflammation | IVCM images | 3453 images | Detection of activated dendritic and inflammatory cells as markers of inflammation | DL – VGG-16, ResNet-101, Inception V3, Xception, and Inception-ResNet V2 | The Inception-ResNet V2 transfer model achieves the best performance in the detection of activated dendritic and inflammatory cells | For the Inception-ResNet V2 transfer model – identifying activated dendritic cells: Acc = 93.19 %, Sens = 81.7 %, Spec = 95.17 %, AUROC = 0.9646; identifying inflammatory cells: Acc = 97.67 %, Sens = 91.74 %, Spec = 99.31 %, AUROC = 0.9901 | Xu et al. (2021)11 |
| Corneal edema | AS-OCT | 199 images | Detection of corneal edema | CNN | The model can accurately detect corneal edema, with an AUROC of 0.994 and accuracy of 98.7 % | Sens = 96.4 %, Spec = 100 %, AUROC = 0.994 | Zeboulon et al. (2021)12 |
| Neuropathy | IVCM images | 1698 images | Quantification of nerve fiber properties related to diabetic neuropathy | DL – U-Net based | The model displays rapid and accurate localization performance in quantifying corneal nerve biomarkers | For classifying those without vs. with neuropathy: Sens = 68 %, Spec = 87 %, AUROC = 0.83 | Williams et al. (2020)13 |
| Amyloidosis | Digitally scanned corneal specimens | 42 samples | Classification of amyloid areas in corneal whole slide images (WSI) from patients with familial amyloidosis | DL | The developed DL algorithm shows high sensitivity and specificity in classifying amyloid areas | Amyloid area classification: Sens = 86 %, Spec = 92 %, F-score = 0.81; Corneal stroma area classification: Sens = 74 %, Spec = 82 %, F-score = 0.73 | Kessel et al. (2019)14 |
| Neuropathy | IVCM images | 100 images | Detection of diabetic patients with neuropathy | DL – CNN | The proposed approach can extract image features without requiring nerve tracing or parameter extraction, demonstrating good performance in classifying IVCM images with neuropathy | For single-subject performance: Acc = 96 %, Sens = 98 %, Spec = 94 % | Scarpa et al. (2020)15 |
Ueno Y, Oda M, Yamaguchi T, Fukuoka H, Nejima R, Kitaguchi Y, Miyake M, Akiyama M, Miyata K, Kashiwagi K, Maeda N, Shimazaki J, Noma H, Mori K, Oshika T (2024) Deep learning model for extensive smartphone-based diagnosis and triage of cataracts and multiple corneal diseases. British Journal of Ophthalmology: bjo-2023-324488 DOI 10.1136/bjo-2023-324488.
Yan Y, Jiang W, Zhou Y, Yu Y, Huang L, Wan S, Zheng H, Tian M, Wu H, Huang L, Wu L, Cheng S, Gao Y, Mao J, Wang Y, Cong Y, Deng Q, Shi X, Yang Z, Miao Q, Zheng B, Wang Y, Yang Y (2023) Evaluation of a computer-aided diagnostic model for corneal diseases by analyzing in vivo confocal microscopy images. Frontiers in Medicine 10 DOI 10.3389/fmed.2023.1164188.
Fassbind B, Langenbucher A, Streich A (2023) Automated cornea diagnosis using deep convolutional neural networks based on cornea topography maps. Scientific Reports 13: 6566 DOI 10.1038/s41598-023-33793-w.
Elsawy A, Abdel-Mottaleb M (2021) A Novel Network With Parallel Resolution Encoders for the Diagnosis of Corneal Diseases. IEEE Transactions on Biomedical Engineering 68: 3671–3680 DOI 10.1109/TBME.2021.3082152.
Elsawy A, Eleiwa T, Chase C, Ozcan E, Tolba M, Feuer W, Abdel-Mottaleb M, Abou Shousha M (2021) Multidisease Deep Learning Neural Network for the Diagnosis of Corneal Diseases. American Journal of Ophthalmology 226: 252–261 DOI 10.1016/j.ajo.2021.01.018.
Li W, Yang Y, Zhang K, Long E, He L, Zhang L, Zhu Y, Chen C, Liu Z, Wu X, Yun D, Lv J, Liu Y, Liu X, Lin H (2020) Dense anatomical annotation of slit-lamp images improves the performance of deep learning for the diagnosis of ophthalmic disorders. Nature Biomedical Engineering 4: 767–777 DOI 10.1038/s41551-020-0577-y.
Gu H, Guo Y, Gu L, Wei A, Xie S, Ye Z, Xu J, Zhou X, Lu Y, Liu X, Hong J (2020) Deep learning for identifying corneal diseases from ocular surface slit-lamp photographs. Scientific Reports 10: 17851 DOI 10.1038/s41598-020-75027-3.
Zhang K, Liu X, Liu F, He L, Zhang L, Yang Y, Li W, Wang S, Liu L, Liu Z, Wu X, Lin H (2018) An Interpretable and Expandable Deep Learning Diagnostic System for Multiple Ocular Diseases: Qualitative Study. J Med Internet Res 20: e11144 DOI 10.2196/11144.
Kundu G, Shetty R, D’Souza S, Khamar P, Nuijts RMMA, Sethu S, Roy AS (2022) A novel combination of corneal confocal microscopy, clinical features and artificial intelligence for evaluation of ocular surface pain. PLOS ONE 17: e0277086 DOI 10.1371/journal.pone.0277086.
Salahouddin T, Petropoulos IN, Ferdousi M, Ponirakis G, Asghar O, Alam U, Kamran S, Mahfoud ZR, Efron N, Malik RA, Qidwai UA (2021) Artificial Intelligence-Based Classification of Diabetic Peripheral Neuropathy From Corneal Confocal Microscopy Images. Diabetes Care 44: e151-e153 DOI 10.2337/dc20-2012.
Xu F, Qin Y, He W, Huang G, Lv J, Xie X, Diao C, Tang F, Jiang L, Lan R, Cheng X, Xiao X, Zeng S, Chen Q, Cui L, Li M, Tang N (2021) A deep transfer learning framework for the automated assessment of corneal inflammation on in vivo confocal microscopy images. PLOS ONE 16: e0252653 DOI 10.1371/journal.pone.0252653.
Zeboulon P, Ghazal W, Gatinel D (2021) Corneal Edema Visualization With Optical Coherence Tomography Using Deep Learning: Proof of Concept. Cornea 40.
Williams BM, Borroni D, Liu R, Zhao Y, Zhang J, Lim J, Ma B, Romano V, Qi H, Ferdousi M, Petropoulos IN, Ponirakis G, Kaye S, Malik RA, Alam U, Zheng Y (2020) An artificial intelligence-based deep learning algorithm for the diagnosis of diabetic neuropathy using corneal confocal microscopy: a development and validation study. Diabetologia 63: 419–430 DOI 10.1007/s00125-019-05023-4.
Kessel K, Mattila J, Linder N, Kivelä T, Lundin J (2019) Deep Learning Algorithms for Corneal Amyloid Deposition Quantitation in Familial Amyloidosis. Ocular Oncology and Pathology 6: 58–65 DOI 10.1159/000500896.
Scarpa F, Colonna A, Ruggeri A (2020) Multiple-Image Deep Learning Analysis for Neuropathy Detection in Corneal Nerve Images. Cornea 39.
Corneal transplantation
AI algorithms have found increasing roles in the peri- and intraoperative aspects of corneal transplantation, ranging from automated prediction of the clinical need for keratoplasty to detection of graft detachment (Table 7). Descemet membrane endothelial keratoplasty (DMEK) is among the preferred surgical options for patients with endothelial dysfunction. Nonetheless, graft detachment remains a significant issue, often requiring rebubbling (82, 83). To address this, AS-OCT images, which allow detailed visualization of early graft detachment, are often used to train automated AI models. Treder et al. pioneered the use of a DL model integrating AS-OCT data to automatically detect post-DMEK graft detachment with a high accuracy of 96% (84). Heslinga et al. further extended these results to the localization and quantification of graft detachment (85).
Table 7.
Corneal transplantation.
| Data type | Sample Size | Study | Methods | Key Findings | Performance | Reference |
|---|---|---|---|---|---|---|
| Digital micrographs of cornea | 25 eyes | Evaluation of decellularized corneas as replacement grafts | ML – random forest and support vector machine | The developed models can identify regions of interest in decellularized corneal stromal tissue in micrograph images with relatively high accuracy | RF model: Acc = 82.67 %, AUROC = 0.92; SVM: Acc = 80.5 %, AUROC = 0.82 | Pantic et al. (2023)1 |
| AS-OCT | 9466 images | Identification of graft detachment | MIL using a ResNet101 backbone | The MIL-AI model predicts post-DMEK graft detachment with higher sensitivity (92 % vs. 31 %) but lower specificity (45 % vs. 64 %) compared with ophthalmologists, with an F1 score of 0.77 and AUROC of 0.63 | For test set: Sens = 92 %, Spec = 45 %, AUROC = 0.63 | Patefield et al. (2023)2 |
| Specular microscope (Topcon SP3000) | 925 images | Prediction of allograft rejection in post-keratoplasty (DMEK) patients | DL – U-Net for segmentation; ML – RF and LR for prediction | RF and LR models predict allograft rejection for post-DMEK patients with high accuracy (> 80 %) on a held-out test set | For held-out test set, in predicting post-DMEK allograft rejection: Acc > 80 % for RF and LR models | Joseph et al. (2023)3 |
| AS-OCT | 1992 images | Detection of corneal edema before DMEK surgery | DL – U-Net | The model performs well in detecting corneal edema before DMEK surgery, with an AUROC of 0.97 for edema fraction (EF) in the detection of a 20 μm difference in central corneal thickness (DCCT) | For EF in the detection of 20 μm DCCT: AUROC = 0.97 for all patients, 0.96 for FECD, 0.99 for non-FECD and normal patients | Bitton et al. (2022)4 |
| AS-OCT | 6912 images | Quantification of endothelial corneal graft detachment (e.g., DMEK) | DL – UNet++ | The best performing NN displays good segmentation and classification performance | In validation set: Dice coefficient = 0.73, Youden index = 0.85, R2 = 0.9 with manually labeled ground truth | Glatz et al. (2021)5 |
| Multimodal clinical data | 3647 transplants | Prediction of graft detachment following lamellar keratoplasty | ML – LASSO, classification tree analysis (CTA), RF classification (RFC) | The models achieve AUROCs of 0.70, 0.65, and 0.72, respectively, in predicting graft detachment. Identified risk factors include the DMEK procedure, prior graft failure, and use of sulfur hexafluoride gas | AUROC = 0.70 for LASSO, 0.65 for CTA, 0.72 for RFC | Muijzer et al. (2022)6 |
| Multimodal clinical data | 1335 patients | Identification of factors associated with 10-year graft survival of DSAEK and PK | ML – random survival forest | The model reveals that graft failure is associated with bullous keratopathy, male recipients, and poor preoperative visual acuity. DSAEK is superior to PK in 10-year graft survival | N/A | Ang et al. (2022)7 |
| Multimodal clinical data | 1090 patients | Identification of factors predictive of graft failure associated with DSAEK | ML – random survival forest | The random survival forest model ranks DSAEK intraoperative complications as the third most predictive factor of graft failure. At 47 months after DSAEK, the mean difference in survival time between grafts with and without intraoperative complications was −227 days | N/A | O'Brien et al. (2021)8 |
| Specular microscope (Topcon SP-1P) | 383 images | Estimation of corneal endothelium parameters after ultrathin DSAEK | CNN-Edge and CNN-ROI | The model achieves robust and accurate estimation of endothelium parameters, performing significantly better than Topcon's software (p < 0.0001) and comparably to manual assessment (p > 0.05) | The method provides estimates of endothelial cell density (ECD), coefficient of variation (CV), and hexagonality (HEX) in 98.4 % of images; percentage error = 2.5 % for ECD, 5.7 % for CV, 5.7 % for HEX | Vigueras-Guillén et al. (2020)9 |
| AS-OCT images | 1280 AS-OCT B-scans | Locating and quantifying graft detachment after DMEK | DL framework incorporating a U-Net architecture | The model achieves good performance, with a Dice score of 0.896 for horizontal projections of all B-scans, comparable to a human expert | Dice score = 0.896; mean scleral spur localization error = 0.155 mm; interrater difference = 0.090 mm | Heslinga et al. (2020)10 |
| AS-OCT | 12,242 AS-OCT images | Identification of corneal conditions and prediction of future keratoplasty need | Unsupervised ML approach | Five non-overlapping clusters of eyes were identified, corresponding to different likelihoods of future keratoplasty, ranging from approximately 1 % to 33 % | The normalized likelihood of future keratoplasty in the five clusters was 1.0 %, 2.2 %, 31.0 %, 21.7 %, and 33.1 % | Yousefi et al. (2020)11 |
| Surgery videos | 1406 images | Augmented reality-based surgical navigation to guide suturing in DALK | DL – U-Net | The system achieves a tracking accuracy of 99.2 % and a peak signal-to-noise ratio (PSNR) of up to 25.52 in the reconstruction of occluded frames | Tracking Acc = 99.2 %; PSNR = 25.52 | Pan et al. (2020)12 |
| AS intraoperative OCT (iOCT) | > 150 training episodes | Aiding OCT-guided corneal needle insertion for DALK surgery | Deep deterministic policy gradients from demonstration, based on a reinforcement learning algorithm | The system outperforms surgical fellows in reaching the target needle insertion depth in test trials | Final perforation-free percent depth: mean = 84.75 % ± 4.91 % for the robot vs. 78.58 % ± 6.68 % for human surgical fellows vs. 61.38 % ± 17.18 % for human surgical fellows using an operating microscope | Keller et al. (2020)13 |
| AS-OCT | 496 images | Assessing the need for rebubbling after DMEK | Deep neural networks (VGG16, VGG19, ResNet50, InceptionV3, InceptionResNetV2, Xception, DenseNet121, DenseNet169, and DenseNet201) | Of all models tested, VGG19 displays the best performance, with an AUROC of 0.964, sensitivity of 96.7 %, and specificity of 91.5 % | VGG19 model: Sens = 96.7 %, Spec = 91.5 %, AUROC = 0.964; Ensemble model: Sens = 91.3 %, Spec = 92.1 %, AUROC = 0.956 | Hayashi et al. (2020)14 |
| AS-OCT | 1172 images | Evaluation of a DL method for automated detection of graft detachment after Descemet membrane endothelial keratoplasty (DMEK) from AS-OCT data | Deep CNN | The developed classifier achieves an accuracy of 96 %, sensitivity of 98 %, and specificity of 94 % in the detection of graft detachment after DMEK | Classification performance: Acc = 96 %, Sens = 98 %, Spec = 94 % | Treder et al. (2019)15 |
Pantic IV, Cumic J, Valjarevic S, Shakeel A, Wang X, Vurivi H, Daoud S, Chan V, Petroianu GA, Shibru MG, Ali ZM, Nesic D, Salih AE, Butt H, Corridon PR (2023) Computational approaches for evaluating morphological changes in the corneal stroma associated with decellularization. Frontiers in Bioengineering and Biotechnology 11 DOI 10.3389/fbioe.2023.1105377.
Patefield A, Meng Y, Airaldi M, Coco G, Vaccaro S, Parekh M, Semeraro F, Gadhvi KA, Kaye SB, Zheng Y, Romano V (2023) Deep Learning Using Preoperative AS-OCT Predicts Graft Detachment in DMEK. Translational Vision Science & Technology 12: 14–14 DOI 10.1167/tvst.12.5.14.
Joseph N, Benetz BA, Chirra P, Menegay H, Oellerich S, Baydoun L, Melles GRJ, Lass JH, Wilson DL (2023) Machine Learning Analysis of Postkeratoplasty Endothelial Cell Images for the Prediction of Future Graft Rejection. Translational Vision Science & Technology 12: 22–22 DOI 10.1167/tvst.12.2.22.
Bitton K, Zéboulon P, Ghazal W, Rizk M, Elahi S, Gatinel D (2022) Deep Learning Model for the Detection of Corneal Edema Before Descemet Membrane Endothelial Keratoplasty on Optical Coherence Tomography Images. Translational Vision Science & Technology 11: 19–19 DOI 10.1167/tvst.11.12.19.
Glatz A, Böhringer D, Zander DB, Grewing V, Fritz M, Müller C, Bixler S, Reinhard T, Wacker K (2021) Three-Dimensional Map of Descemet Membrane Endothelial Keratoplasty Detachment: Development and Application of a Deep Learning Model. Ophthalmology Science 1 DOI 10.1016/j.xops.2021.100067.
Muijzer MB, Hoven CMW, Frank LE, Vink G, Wisse RPL, Bartels MC, Cheng YY, Dhooge MRP, Dickman M, van Dooren BTH, Eggink CA, Geerards AJM, van Goor TA, Lapid-Gortzak R, van Luijk CM, van der Meulen IJ, Nieuwendaal CP, Nuijts RMMA, Nobacht S, Oahalou A, van Oosterhout ECAA, Remeijer L, van Rooij J, Santana NTY, Stoutenbeek R, Tang ML, Vaessen T, Visser N, Wijdh RHJ, Wisse RPL, The Netherlands Corneal Transplant N (2022) A machine learning approach to explore predictors of graft detachment following posterior lamellar keratoplasty: a nationwide registry study. Scientific Reports 12: 17705 DOI 10.1038/s41598-022-22223-y.
Ang M, He F, Lang S, Sabanayagam C, Cheng C-Y, Arundhati A, Mehta JS (2022) Machine Learning to Analyze Factors Associated With Ten-Year Graft Survival of Keratoplasty for Cornea Endothelial Disease. Frontiers in Medicine 9 DOI 10.3389/fmed.2022.831352.
O’Brien RC, Ishwaran H, Szczotka-Flynn LB, Lass JH, Cornea Preservation Time Study G (2021) Random Survival Forests Analysis of Intraoperative Complications as Predictors of Descemet Stripping Automated Endothelial Keratoplasty Graft Failure in the Cornea Preservation Time Study. JAMA Ophthalmology 139: 191–197 DOI 10.1001/jamaophthalmol.2020.5743.
Vigueras-Guillen JP, van Rooij J, Engel A, Lemij HG, van Vliet LJ, Vermeer KA (2020) Deep Learning for Assessing the Corneal Endothelium from Specular Microscopy Images up to 1 Year after Ultrathin-DSAEK Surgery. Translational Vision Science & Technology 9: 49–49 DOI 10.1167/tvst.9.2.49.
Heslinga FG, Alberti M, Pluim JPW, Cabrerizo J, Veta M (2020) Quantifying Graft Detachment after Descemet’s Membrane Endothelial Keratoplasty with Deep Convolutional Neural Networks. Translational Vision Science & Technology 9: 48–48 DOI 10.1167/tvst.9.2.48.
Yousefi S, Takahashi H, Hayashi T, Tampo H, Inoda S, Arai Y, Tabuchi H, Asbell P (2020) Predicting the likelihood of need for future keratoplasty intervention using artificial intelligence. The Ocular Surface 18: 320–325 DOI 10.1016/j.jtos.2020.02.008.
Pan J, Liu W, Ge P, Li F, Shi W, Jia L, Qin H (2020) Real-time segmentation and tracking of excised corneal contour by deep neural networks for DALK surgical navigation. Computer Methods and Programs in Biomedicine 197: 105679 DOI 10.1016/j.cmpb.2020.105679.
Keller B, Draelos M, Zhou K, Qian R, Kuo AN, Konidaris G, Hauser K, Izatt JA (2020) Optical Coherence Tomography-Guided Robotic Ophthalmic Microsurgery via Reinforcement Learning from Demonstration. IEEE Transactions on Robotics 36: 1207–1218 DOI 10.1109/TRO.2020.2980158.
Hayashi T, Tabuchi H, Masumoto H, Morita S, Oyakawa I, Inoda S, Kato N, Takahashi H (2020) A Deep Learning Approach in Rebubbling After Descemet’s Membrane Endothelial Keratoplasty. Eye & Contact Lens 46.
Treder M, Lauermann JL, Alnawaiseh M, Eter N (2019) Using Deep Learning in Automated Detection of Graft Detachment in Descemet Membrane Endothelial Keratoplasty: A Pilot Study. Cornea 38.
Several studies have focused on using preoperative data to predict corneal transplant graft detachment. Muijzer et al., for example, used an ML approach integrating multimodal non-imaging clinical data to explore predictors of graft detachment for DMEK and Descemet's stripping endothelial keratoplasty (DSEK), achieving a best AUROC of 0.72 (86). Recently, Patefield et al. developed a multiple-instance learning AI model (MIL-AI) integrating preoperative AS-OCT images to predict post-DMEK graft detachment (87). Trained on 9466 images from 74 eyes, the model predicted post-DMEK graft detachment with high sensitivity (92%) but relatively low specificity (45%), the latter falling below the performance of human ophthalmologists (87).
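A minimal sketch of the multiple-instance formulation, treating each eye as a "bag" of B-scan feature vectors with a single eye-level label and max-pooling per-instance scores, is given below; this is a generic MIL pattern, not the published MIL-AI architecture.

```python
# Generic max-pooling multiple-instance learning (MIL) head (illustrative).
import torch
import torch.nn as nn

class MaxPoolMIL(nn.Module):
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.scorer = nn.Linear(feat_dim, 1)       # per-instance logit

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (n_instances, feat_dim) features of all B-scans of one eye.
        instance_logits = self.scorer(bag)         # (n_instances, 1)
        # Bag logit = score of the most suspicious scan in the bag.
        return instance_logits.max(dim=0).values

model = MaxPoolMIL()
bag = torch.randn(24, 512)                 # 24 B-scans, 512-d features each
print(torch.sigmoid(model(bag)))           # probability of graft detachment
```

The appeal of MIL here is that only the eye-level outcome (detached or not) needs labeling, avoiding per-scan annotation.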
Apart from graft detachment, graft rejection and failure are prominent issues facing corneal transplantation, warranting predictive models to aid preoperative planning (88). Ang et al. used random survival forests (RSF) to analyze the 10-year graft survival of Descemet stripping automated endothelial keratoplasty (DSAEK) and penetrating keratoplasty (PK), showing that DSAEK displays superior 10-year graft survival compared to PK and that graft failure is often associated with bullous keratopathy, male recipients, and poor preoperative visual acuity (89). In a retrospective study, O'Brien et al. also employed RSF to reanalyze data from a clinical trial on DSAEK to identify factors associated with graft failure (90). The study identified DSAEK intraoperative complications as highly predictive of graft failure, along with surgeon- and eye bank-related variables (90). Together, these studies highlight the utility of RSF models in predicting keratoplasty outcomes.
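A hedged sketch of a random survival forest, here using the scikit-survival package on synthetic graft-survival data, follows; the covariates and event rates are placeholders, not data from the cited studies.

```python
# Random survival forest on synthetic graft-survival data (illustrative).
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(0)
n = 300
# Placeholder covariates: recipient age, donor ECD, an indication flag.
X = np.column_stack([
    rng.normal(68, 10, n),          # recipient age (years)
    rng.normal(2700, 300, n),       # donor endothelial cell density
    rng.integers(0, 2, n),          # bullous keratopathy indication (0/1)
])
event = rng.random(n) < 0.3         # True = graft failed during follow-up
time = rng.uniform(30, 3650, n)     # follow-up time in days

y = Surv.from_arrays(event=event, time=time)   # structured survival outcome
rsf = RandomSurvivalForest(n_estimators=200, random_state=0)
rsf.fit(X, y)

# Higher predicted risk score = higher expected hazard of graft failure.
print(rsf.predict(X[:5]))
```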
Compared with traditional logistic regression analyses, AI models can integrate both population-level and individual-level information from the training data. More importantly, they can learn highly complex, nonlinear relationships that lie beyond the capabilities of logistic regression, making them well suited to the high-dimensional, complex datasets with which traditional regression analysis may struggle.
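A toy illustration of this point: on synthetic data whose label depends on a pure interaction between two variables, logistic regression performs at chance while a random forest captures the nonlinear relationship.

```python
# Logistic regression vs. random forest on an interaction effect (toy data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # label is an XOR-like interaction

# Training accuracy, for illustration only.
print("LogReg:", LogisticRegression().fit(X, y).score(X, y))          # ~0.5
print("RF:    ", RandomForestClassifier(random_state=0).fit(X, y).score(X, y))  # ~1.0
```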
Discussion
Alongside its unique strengths, AI carries several limitations that must be addressed to translate these emerging insights into real-world settings. First, many AI studies have limited internal and/or external validation datasets, and even when validation is performed, it remains uncertain whether an algorithm generalizes to real-world settings, where patient populations are far more heterogeneous than the training or testing sets. As more studies incorporate multiethnic data into training sets, gradual improvement in algorithms' ability to handle this heterogeneity is expected. Additionally, the standardization and quality of anterior segment images present unique challenges for AI applications due to variability in imaging techniques, image quality, and resolution across clinical settings. Addressing these issues will require collaborative efforts to establish uniform imaging protocols and preprocessing techniques and to implement standardized quality assurance measures. Furthermore, randomized controlled trials evaluating AI performance, of which only a few have been conducted in ophthalmology so far (45, 91–96), will prove invaluable in realizing AI's potential in real-world clinical practice. Second, although the rapid adoption of electronic health records (EHR) provides a setting in which AI can further streamline data analysis and assist decision making via telemedicine (97), ophthalmic imaging and EHR data are often non-standardized across medical institutions. Significant heterogeneity also exists in the reporting of AI applications in ophthalmology, making it challenging to integrate these data and interpret the applicability of AI findings. Recent advancements in reporting guidelines, such as CONSORT-AI, represent a step forward in addressing this concern (98). Third, AI approaches often face the "black box" problem, in which models do not explain the rationale behind a given decision. The development of explainable AI (XAI), model visualization, and feature importance analysis has made gains in addressing this issue, although such approaches often remain limited by interpretability and scalability concerns (99, 100). Fourth, AI models often struggle to learn from scarce data, rendering them less suitable for rarer diseases with limited information. Progress in data augmentation and synthesis (e.g., via diffusion models and generative adversarial networks), transfer learning, and domain adaptation can be leveraged to ameliorate this problem (101).
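As one concrete example of the augmentation strategies noted above, a hedged torchvision sketch follows; the specific transforms and magnitudes are illustrative, not a validated protocol for corneal imaging.

```python
# Simple image augmentation for small ophthalmic datasets (illustrative).
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])
# Applying `augment` to each training image yields a different variant per
# epoch, effectively enlarging a small labeled dataset.
```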
With the growing accumulation of imaging and clinical data, AI offers the potential to rapidly diagnose corneal diseases and support clinical decision making. While AI does not replace physicians at this stage, it augments their clinical decisions, and physicians will be expected to work comfortably within this new AI frontier and to interpret the predictions of AI algorithms. Ultimately, physicians retain the responsibility to recognize when AI algorithms are applied to inappropriate patient populations and when deviation from AI-informed treatment protocols is justified.
Overall, balancing the potential benefits of AI with its current limitations is a critical task as the field continues to evolve. Notably, the Food and Drug Administration (FDA) has begun establishing protocols for approving AI-based models for clinical use, with the first FDA authorization of an AI system for diabetic retinopathy granted in 2018 (102). It is now a matter of designing suitable clinical trials to translate these AI-based technologies into clinical use for detecting and managing corneal diseases.
Funding Statement:
TN is supported by the Medical Scientist Training Program grant from the National Institute of General Medical Sciences of the National Institutes of Health under award number T32GM152349 to the Weill Cornell/Rockefeller/Sloan Kettering Tri-Institutional MD-PhD Program.
Footnotes
Conflict of Interest Statement: The authors declare no conflicts of interest.
Bibliography
- 1. Flaxman SR, Bourne RRA, Resnikoff S, Ackland P, Braithwaite T, Cicinelli MV, et al. Global causes of blindness and distance vision impairment 1990–2020: a systematic review and meta-analysis. The Lancet Global Health. 2017;5(12):e1221–e34.
- 2. Burton MJ, Ramke J, Marques AP, Bourne RRA, Congdon N, Jones I, et al. The Lancet Global Health Commission on Global Eye Health: vision beyond 2020. The Lancet Global Health. 2021;9(4):e489–e551.
- 3. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–44.
- 4. Mahesh B. Machine learning algorithms: a review. International Journal of Science and Research (IJSR). 2020;9(1):381–6.
- 5. Pisner DA, Schnyer DM. Support vector machine. In: Machine Learning. Elsevier; 2020. p. 101–21.
- 6. Biau G, Scornet E. A random forest guided tour. TEST. 2016;25(2):197–227.
- 7. Krogh A. What are artificial neural networks? Nature Biotechnology. 2008;26(2):195–7.
- 8. Han SH, Kim KW, Kim S, Youn YC. Artificial Neural Network: Understanding the Basic Concepts without Mathematics. Dement Neurocogn Disord. 2018;17(3):83–9.
- 9. Alzubaidi L, Zhang J, Humaidi AJ, Al-Dujaili A, Duan Y, Al-Shamma O, et al. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. Journal of Big Data. 2021;8(1):53.
- 10. Santodomingo-Rubido J, Carracedo G, Suzaki A, Villa-Collar C, Vincent SJ, Wolffsohn JS. Keratoconus: An updated review. Cont Lens Anterior Eye. 2022;45(3):101559.
- 11. Ambrósio R, Salomão MQ, Barros L, da Fonseca Filho JBR, Guedes J, Neto A, et al. Multimodal diagnostics for keratoconus and ectatic corneal diseases: a paradigm shift. Eye and Vision. 2023;10(1):45.
- 12. Ambrósio R Jr., Machado AP, Leão E, Lyra JMG, Salomão MQ, Esporcatte LGP, et al. Optimized Artificial Intelligence for Enhanced Ectasia Detection Using Scheimpflug-Based Corneal Tomography and Biomechanical Data. American Journal of Ophthalmology. 2023;251:126–42.
- 13. Deshmukh R, Ong ZZ, Rampat R, Alió del Barrio JL, Barua A, Ang M, et al. Management of keratoconus: an updated review. Frontiers in Medicine. 2023;10.
- 14. Zhang X, Munir SZ, Sami Karim SA, Munir WM. A review of imaging modalities for detecting early keratoconus. Eye. 2021;35(1):173–87.
- 15. Smolek MK, Klyce SD. Current keratoconus detection methods compared with a neural network approach. Investigative Ophthalmology & Visual Science. 1997;38(11):2290–9.
- 16. Accardo PA, Pensiero S. Neural network-based system for early keratoconus detection from corneal topography. Journal of Biomedical Informatics. 2002;35(3):151–9.
- 17. Lavric A, Valentin P. KeratoDetect: Keratoconus Detection Algorithm Using Convolutional Neural Networks. Computational Intelligence and Neuroscience. 2019;2019:8162567.
- 18. Smadja D, Touboul D, Cohen A, Doveh E, Santhiago MR, Mello GR, et al. Detection of Subclinical Keratoconus Using an Automated Decision Tree Classification. American Journal of Ophthalmology. 2013;156(2):237–46.e1.
- 19. Arbelaez MC, Versaci F, Vestri G, Barboni P, Savini G. Use of a Support Vector Machine for Keratoconus and Subclinical Keratoconus Detection by Topographic and Tomographic Data. Ophthalmology. 2012;119(11):2231–8.
- 20. Alió del Barrio JL, Eldanasoury AM, Arbelaez J, Faini S, Versaci F. Artificial Neural Network for Automated Keratoconus Detection Using a Combined Placido Disc and Anterior Segment Optical Coherence Tomography Topographer. Translational Vision Science & Technology. 2024;13(4):13.
- 21. Mas Tur V, MacGregor C, Jayaswal R, O’Brart D, Maycock N. A review of keratoconus: Diagnosis, pathophysiology, and genetics. Surv Ophthalmol. 2017;62(6):770–83.
- 22. Roberts CJ, Dupps WJ Jr. Biomechanics of corneal ectasia and biomechanical treatments. Journal of Cataract & Refractive Surgery. 2014;40(6).
- 23. Ambrósio R, Lopes BT, Faria-Correia F, Salomão MQ, Bühren J, Roberts CJ, et al. Integration of Scheimpflug-Based Corneal Tomography and Biomechanical Assessments for Enhancing Ectasia Detection. Journal of Refractive Surgery. 2017;33(7):434–43.
- 24. Herber R, Ramm L, Spoerl E, Raiskup F, Pillunat LE, Terai N. Assessment of corneal biomechanical parameters in healthy and keratoconic eyes using dynamic bidirectional applanation device and dynamic Scheimpflug analyzer. Journal of Cataract & Refractive Surgery. 2019;45(6):778–88.
- 25. Herber R, Pillunat LE, Raiskup F. Development of a classification system based on corneal biomechanical properties using artificial intelligence predicting keratoconus severity. Eye and Vision. 2021;8(1):21.
- 26. Lu N-J, Koppen C, Hafezi F, Ní Dhubhghaill S, Aslanides IM, Wang Q-M, et al. Combinations of Scheimpflug tomography, ocular coherence tomography and air-puff tonometry improve the detection of keratoconus. Contact Lens and Anterior Eye. 2023;46(3).
- 27. Tan Z, Chen X, Li K, Liu Y, Cao H, Li J, et al. Artificial Intelligence-Based Diagnostic Model for Detecting Keratoconus Using Videos of Corneal Force Deformation. Translational Vision Science & Technology. 2022;11(9):32.
- 28. Kato N, Masumoto H, Tanabe M, Sakai C, Negishi K, Torii H, et al. Predicting Keratoconus Progression and Need for Corneal Crosslinking Using Deep Learning. Journal of Clinical Medicine. 2021;10(4).
- 29. Liu Y, Shen D, Wang H-Y, Qi M-Y, Zeng Q-Y. Development and validation to predict visual acuity and keratometry two years after corneal crosslinking with progressive keratoconus by machine learning. Frontiers in Medicine. 2023;10.
- 30. Hartmann LM, Langhans DS, Eggarter V, Freisenich TJ, Hillenmayer A, König SF, et al. Keratoconus Progression Determined at the First Visit: A Deep Learning Approach With Fusion of Imaging and Numerical Clinical Data. Translational Vision Science & Technology. 2024;13(5):7.
- 31. Ung L, Bispo PJ, Shanbhag SS, Gilmore MS, Chodosh J. The persistent dilemma of microbial keratitis: Global burden, diagnosis, and antimicrobial resistance. Survey of Ophthalmology. 2019;64(3):255–71.
- 32. Ting DSJ, Gopal BP, Deshmukh R, Seitzman GD, Said DG, Dua HS. Diagnostic armamentarium of infectious keratitis: A comprehensive review. The Ocular Surface. 2022;23:27–39.
- 33. Saini JS, Jain AK, Kumar S, Vikal S, Pankaj S, Singh S. Neural network approach to classify infective keratitis. Current Eye Research. 2003;27(2):111–6.
- 34. Kuo M-T, Hsu BW-Y, Yin Y-K, Fang P-C, Lai H-Y, Chen A, et al. A deep learning approach in diagnosing fungal keratitis based on corneal photographs. Scientific Reports. 2020;10(1):14424.
- 35. Mayya V, Kamath Shevgoor S, Kulkarni U, Hazarika M, Barua PD, Acharya UR. Multi-Scale Convolutional Neural Network for Accurate Corneal Segmentation in Early Detection of Fungal Keratitis. Journal of Fungi. 2021;7(10).
- 36. Xu Y, Kong M, Xie W, Duan R, Fang Z, Lin Y, et al. Deep Sequential Feature Learning in Clinical Image Classification of Infectious Keratitis. Engineering. 2021;7(7):1002–10.
- 37. Tiwari M, Piech C, Baitemirova M, Prajna NV, Srinivasan M, Lalitha P, et al. Differentiation of Active Corneal Infections from Healed Scars Using Deep Learning. Ophthalmology. 2022;129(2):139–46.
- 38. Wang L, Chen K, Wen H, Zheng Q, Chen Y, Pu J, et al. Feasibility assessment of infectious keratitis depicted on slit-lamp and smartphone photographs using deep learning. International Journal of Medical Informatics. 2021;155:104583.
- 39. Redd TK, Prajna NV, Srinivasan M, Lalitha P, Krishnan T, Rajaraman R, et al. Image-Based Differentiation of Bacterial and Fungal Keratitis Using Deep Convolutional Neural Networks. Ophthalmology Science. 2022;2(2).
- 40. Hanif A, Prajna NV, Lalitha P, NaPier E, Parker M, Steinkamp P, et al. Assessing the Impact of Image Quality on Deep Learning Classification of Infectious Keratitis. Ophthalmology Science. 2023;3(4):100331.
- 41. Cabrera-Aguas M, Watson SL. Updates in Diagnostic Imaging for Infectious Keratitis: A Review. Diagnostics. 2023;13(21):3358.
- 42. Liu Z, Cao Y, Li Y, Xiao X, Qiu Q, Yang M, et al. Automatic diagnosis of fungal keratitis using data augmentation and image fusion with deep convolutional neural network. Computer Methods and Programs in Biomedicine. 2020;187:105019.
- 43. Essalat M, Abolhosseini M, Le TH, Moshtaghion SM, Kanavi MR. Interpretable deep learning for diagnosis of fungal and acanthamoeba keratitis using in vivo confocal microscopy images. Scientific Reports. 2023;13(1):8953.
- 44. Liang S, Zhong J, Zeng H, Zhong P, Li S, Liu H, et al. A Structure-Aware Convolutional Neural Network for Automatic Diagnosis of Fungal Keratitis with In Vivo Confocal Microscopy Images. Journal of Digital Imaging. 2023;36(4):1624–32.
- 45. Xu F, Jiang L, He W, Huang G, Hong Y, Tang F, et al. The Clinical Value of Explainable Deep Learning for Diagnosing Fungal Keratitis Using in vivo Confocal Microscopy Images. Frontiers in Medicine. 2021;8.
- 46. Wu X, Qiu Q, Liu Z, Zhao Y, Zhang B, Zhang Y, et al. Hyphae Detection in Fungal Keratitis Images With Adaptive Robust Binary Pattern. IEEE Access. 2018;6:13449–60.
- 47. Tang N, Huang G, Lei D, Jiang L, Chen Q, He W, et al. An artificial intelligence approach to classify pathogenic fungal genera of fungal keratitis using corneal confocal microscopy images. International Ophthalmology. 2023;43(7):2203–14.
- 48. Chu WK, Choi HL, Bhat AK, Jhanji V. Pterygium: new insights. Eye. 2020;34(6):1047–50.
- 49. Fang X, Deshmukh M, Chee ML, Soh Z-D, Teo ZL, Thakur S, et al. Deep learning algorithms for automatic detection of pterygium using anterior segment photographs from slit-lamp and hand-held cameras. British Journal of Ophthalmology. 2022;106(12):1642–7.
- 50. Liu Y, Xu C, Wang S, Chen Y, Lin X, Guo S, et al. Accurate detection and grading of pterygium through smartphone by a fusion training model. British Journal of Ophthalmology. 2023:bjo-2022-322552.
- 51. Kim JH, Kim YJ, Lee YJ, Hyon JY, Han SB, Kim KG. Automated histopathological evaluation of pterygium using artificial intelligence. British Journal of Ophthalmology. 2023;107(5):627–34.
- 52. Wan Q, Wan P, Liu W, Cheng Y, Gu S, Shi Q, et al. Tear film cytokines as prognostic indicators for predicting early recurrent pterygium. Experimental Eye Research. 2022;222:109140.
- 53. Jais FN, Che Azemin MZ, Hilmi MR, Mohd Tamrin MI, Kamal KM. Postsurgery Classification of Best-Corrected Visual Acuity Changes Based on Pterygium Characteristics Using the Machine Learning Technique. The Scientific World Journal. 2021;2021:6211006.
- 54. Hakim FE, Farooq AV. Dry Eye Disease: An Update in 2022. JAMA. 2022;327(5):478–9.
- 55. Chhadva P, Goldhardt R, Galor A. Meibomian Gland Disease: The Role of Gland Dysfunction in Dry Eye Disease. Ophthalmology. 2017;124(11S):S20–S26.
- 56. Li S, Wang Y, Yu C, Li Q, Chang P, Wang D, et al. Unsupervised Learning Based on Meibography Enables Subtyping of Dry Eye Disease and Reveals Ocular Surface Features. Investigative Ophthalmology & Visual Science. 2023;64(13):43.
- 57. Wang Y, Shi F, Wei S, Li X. A Deep Learning Model for Evaluating Meibomian Glands Morphology from Meibography. Journal of Clinical Medicine. 2023;12(3):1053.
- 58. Saha RK, Chowdhury AMM, Na K-S, Hwang GD, Eom Y, Kim J, et al. Automated quantification of meibomian gland dropout in infrared meibography using deep learning. The Ocular Surface. 2022;26:283–94.
- 59. Zhang Z, Lin X, Yu X, Fu Y, Chen X, Yang W, et al. Meibomian Gland Density: An Effective Evaluation Index of Meibomian Gland Dysfunction Based on Deep Learning and Transfer Learning. Journal of Clinical Medicine. 2022;11(9):2396.
- 60. Yu Y, Zhou Y, Tian M, Zhou Y, Tan Y, Wu L, et al. Automatic identification of meibomian gland dysfunction with meibography images using deep learning. International Ophthalmology. 2022;42(11):3275–84.
- 61. Wang J, Li S, Yeh TN, Chakraborty R, Graham AD, Yu SX, et al. Quantifying Meibomian Gland Morphology Using Artificial Intelligence. Optometry and Vision Science. 2021;98(9).
- 62. Setu MAK, Horstmann J, Schmidt S, Stern ME, Steven P. Deep learning-based automatic meibomian gland segmentation and morphology assessment in infrared meibography. Scientific Reports. 2021;11(1):7649.
- 63. Khan ZK, Umar AI, Shirazi SH, Rasheed A, Qadir A, Gul S. Image based analysis of meibomian gland dysfunction using conditional generative adversarial neural network. BMJ Open Ophthalmology. 2021;6(1):e000436.
- 64. Yeh C-H, Yu SX, Lin MC. Meibography Phenotyping and Classification From Unsupervised Discriminative Feature Learning. Translational Vision Science & Technology. 2021;10(2):4.
- 65. Wang J, Yeh TN, Chakraborty R, Yu SX, Lin MC. A Deep Learning Approach for Meibomian Gland Atrophy Evaluation in Meibography Images. Translational Vision Science & Technology. 2019;8(6):37.
- 66. Shimizu E, Ishikawa T, Tanji M, Agata N, Nakayama S, Nakahara Y, et al. Artificial intelligence to estimate the tear film breakup time and diagnose dry eye disease. Scientific Reports. 2023;13(1):5822.
- 67. Wang Y, Jia X, Wei S, Li X. A deep learning model established for evaluating lid margin signs with colour anterior segment photography. Eye. 2023;37(7):1377–82.
- 68. Qu J-H, Qin X-R, Li C-D, Peng R-M, Xiao G-G, Cheng J, et al. Fully automated grading system for the evaluation of punctate epithelial erosions using deep neural networks. British Journal of Ophthalmology. 2023;107(4):453–60.
- 69. Su TY, Liu ZY, Chen DY. Tear Film Break-Up Time Measurement Using Deep Convolutional Neural Networks for Screening Dry Eye Disease. IEEE Sensors Journal. 2018;18(16):6857–62.
- 70. Maruoka S, Tabuchi H, Nagasato D, Masumoto H, Chikama T, Kawai A, et al. Deep Neural Network-Based Method for Detecting Obstructive Meibomian Gland Dysfunction With in Vivo Laser Confocal Microscopy. Cornea. 2020;39(6).
- 71. Zhang Y-Y, Zhao H, Lin J-Y, Wu S-N, Liu X-W, Zhang H-D, et al. Artificial Intelligence to Detect Meibomian Gland Dysfunction From in-vivo Laser Confocal Microscopy. Frontiers in Medicine. 2021;8.
- 72. Stegmann H, Werkmeister RM, Pfister M, Garhöfer G, Schmetterer L, dos Santos VA. Deep learning segmentation for optical coherence tomography measurements of the lower tear meniscus. Biomedical Optics Express. 2020;11(3):1539–54.
- 73. Chase C, Elsawy A, Eleiwa T, Ozcan E, Tolba M, Abou Shousha M. Comparison of Autonomous AS-OCT Deep Learning Algorithm and Clinical Dry Eye Tests in Diagnosis of Dry Eye Disease. Clin Ophthalmol. 2021;15:4281–9.
- 74. Edorh NA, Maftouhi AE, Djerada Z, Arndt C, Denoyer A. New model to better diagnose dry eye disease integrating OCT corneal epithelial mapping. British Journal of Ophthalmology. 2022;106(11):1488–95.
- 75. Matthaei M, Hribek A, Clahsen T, Bachmann B, Cursiefen C, Jun AS. Fuchs Endothelial Corneal Dystrophy: Clinical, Genetic, Pathophysiologic, and Therapeutic Aspects. Annual Review of Vision Science. 2019;5(1):151–75.
- 76. Eleiwa T, Elsawy A, Özcan E, Abou Shousha M. Automated diagnosis and staging of Fuchs’ endothelial cell corneal dystrophy using deep learning. Eye and Vision. 2020;7(1):44.
- 77. Shilpashree PS, Suresh KV, Sudhir RR, Srinivas SP. Automated Image Segmentation of the Corneal Endothelium in Patients With Fuchs Dystrophy. Translational Vision Science & Technology. 2021;10(13):27.
- 78. Gu H, Guo Y, Gu L, Wei A, Xie S, Ye Z, et al. Deep learning for identifying corneal diseases from ocular surface slit-lamp photographs. Scientific Reports. 2020;10(1):17851.
- 79. Li W, Yang Y, Zhang K, Long E, He L, Zhang L, et al. Dense anatomical annotation of slit-lamp images improves the performance of deep learning for the diagnosis of ophthalmic disorders. Nature Biomedical Engineering. 2020;4(8):767–77.
- 80. Elsawy A, Eleiwa T, Chase C, Ozcan E, Tolba M, Feuer W, et al. Multidisease Deep Learning Neural Network for the Diagnosis of Corneal Diseases. American Journal of Ophthalmology. 2021;226:252–61.
- 81. Ueno Y, Oda M, Yamaguchi T, Fukuoka H, Nejima R, Kitaguchi Y, et al. Deep learning model for extensive smartphone-based diagnosis and triage of cataracts and multiple corneal diseases. British Journal of Ophthalmology. 2024:bjo-2023-324488.
- 82. Shah SD, Brissette A, Sales CS. Rebubbling of DMEK Grafts. In: Rosenberg ED, Nattis AS, Nattis RJ, editors. Operative Dictations in Ophthalmology. Cham: Springer International Publishing; 2021. p. 77–9.
- 83. Deshmukh R, Nair S, Ting DSJ, Agarwal T, Beltz J, Vajpayee RB. Graft detachments in endothelial keratoplasty. British Journal of Ophthalmology. 2022;106(1):1–13.
- 84. Treder M, Lauermann JL, Alnawaiseh M, Eter N. Using Deep Learning in Automated Detection of Graft Detachment in Descemet Membrane Endothelial Keratoplasty: A Pilot Study. Cornea. 2019;38(2).
- 85. Heslinga FG, Alberti M, Pluim JPW, Cabrerizo J, Veta M. Quantifying Graft Detachment after Descemet’s Membrane Endothelial Keratoplasty with Deep Convolutional Neural Networks. Translational Vision Science & Technology. 2020;9(2):48.
- 86. Muijzer MB, Hoven CMW, Frank LE, Vink G, Wisse RPL, Bartels MC, et al. A machine learning approach to explore predictors of graft detachment following posterior lamellar keratoplasty: a nationwide registry study. Scientific Reports. 2022;12(1):17705.
- 87. Patefield A, Meng Y, Airaldi M, Coco G, Vaccaro S, Parekh M, et al. Deep Learning Using Preoperative AS-OCT Predicts Graft Detachment in DMEK. Translational Vision Science & Technology. 2023;12(5):14.
- 88. Alio JL, Montesel A, Sayyad FE, Barraquer RI, Arnalich-Montiel F, Barrio JLAD. Corneal graft failure: an update. British Journal of Ophthalmology. 2021;105(8):1049–58.
- 89. Ang M, He F, Lang S, Sabanayagam C, Cheng C-Y, Arundhati A, et al. Machine Learning to Analyze Factors Associated With Ten-Year Graft Survival of Keratoplasty for Cornea Endothelial Disease. Frontiers in Medicine. 2022;9.
- 90. O’Brien RC, Ishwaran H, Szczotka-Flynn LB, Lass JH, Cornea Preservation Time Study Group. Random Survival Forests Analysis of Intraoperative Complications as Predictors of Descemet Stripping Automated Endothelial Keratoplasty Graft Failure in the Cornea Preservation Time Study. JAMA Ophthalmology. 2021;139(2):191–7.
- 91. Lin H, Li R, Liu Z, Chen J, Yang Y, Chen H, et al. Diagnostic Efficacy and Therapeutic Decision-making Capacity of an Artificial Intelligence Platform for Childhood Cataracts in Eye Clinics: A Multicentre Randomized Controlled Trial. eClinicalMedicine. 2019;9:52–9.
- 92. Wu D, Xiang Y, Wu X, Yu T, Huang X, Zou Y, et al. Artificial intelligence-tutoring problem-based learning in ophthalmology clerkship. Annals of Translational Medicine. 2019;8(11):700.
- 93. Noriega A, Meizner D, Camacho D, Enciso J, Quiroz-Mercado H, Morales-Canton V, et al. Screening Diabetic Retinopathy Using an Automated Retinal Image Analysis System in Independent and Assistive Use Cases in Mexico: Randomized Controlled Trial. JMIR Form Res. 2021;5(8):e25290.
- 94. Mathenge W, Whitestone N, Nkurikiye J, Patnaik JL, Piyasena P, Uwaliraye P, et al. Impact of Artificial Intelligence Assessment of Diabetic Retinopathy on Referral Service Uptake in a Low-Resource Setting: The RAIDERS Randomized Trial. Ophthalmology Science. 2022;2(4).
- 95. Li B, Chen H, Yu W, Zhang M, Lu F, Ma J, et al. The performance of a deep learning system in assisting junior ophthalmologists in diagnosing 13 major fundus diseases: a prospective multi-center clinical trial. npj Digital Medicine. 2024;7(1):8.
- 96. Wolf RM, Channa R, Liu TYA, Zehra A, Bromberger L, Patel D, et al. Autonomous artificial intelligence increases screening and follow-up for diabetic retinopathy in youth: the ACCESS randomized control trial. Nature Communications. 2024;15(1):421.
- 97. Li J-PO, Liu H, Ting DSJ, Jeon S, Chan RVP, Kim JE, et al. Digital technology, tele-medicine and artificial intelligence in ophthalmology: A global perspective. Progress in Retinal and Eye Research. 2021;82:100900.
- 98. Liu X, Rivera SC, Faes L, Ferrante di Ruffano L, Yau C, Keane PA, et al. Reporting guidelines for clinical trials evaluating artificial intelligence interventions are needed. Nature Medicine. 2019;25(10):1467–8.
- 99. Linardatos P, Papastefanopoulos V, Kotsiantis S. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy. 2021;23(1).
- 100. Hassija V, Chamola V, Mahapatra A, Singal A, Goel D, Huang K, et al. Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence. Cognitive Computation. 2023.
- 101. Mumuni A, Mumuni F. Data augmentation: A comprehensive survey of modern approaches. Array. 2022;16:100258.
- 102. FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems. Food and Drug Administration; 2018.
