Abstract
Purpose:
To assess the diagnostic accuracy of Adven-i, a proprietary artificial intelligence (AI)-driven diagnostic system that automatically detects diseases from fundus images. The study quantifies the performance of Adven-i in differentiating a nonreferable (within normal limits) image from a referable (diseased fundus) image and in further segregating diabetic retinopathy (DR) from the remaining abnormalities (non-DR) across the wide spectrum of abnormal pathologies. The assessment is carried out against manual reading as the reference gold standard. Adven-i is the only AI system that classifies retinal abnormalities into separate DR and non-DR classes, apart from predicting a nonreferable fundus, whereas most existing systems classify fundus images only as referable or nonreferable DR.
Methods:
The double-blinded study was conducted on retrospective data collected over the course of a year in the ophthalmology outpatient department (OPD) of a premier Tier II eye care hospital in Chandigarh, India. Three vitreoretina specialists, blinded to one another's readings, read the images. The ground truth was generated on the basis of majority agreement among the readers; if all three readers disagreed, an arbitrator's decision was regarded as final.
Results:
A total of 2261 fundus images were analyzed by Adven-i. The sensitivity and specificity of Adven-i in diagnosing images with abnormalities were 95.12% and 85.77%, respectively, and in segregating DR from the rest of the retinal abnormalities were 91.87% and 85.12%, respectively.
Conclusions and Relevance:
Adven-i shows definite promise in automated screening for early diagnosis of referable fundus images including DR. Adven-i can be adopted to scale for mass screening in resource-limited settings.
Keywords: AI, artificial intelligence, ophthalmology, public health, retina screening
The leading global causes of blindness in those aged 50 years and older in 2020 were cataract (15.2 million cases [95% uncertainty interval (UI) 12.7–18.0]), followed by glaucoma (3.6 million cases [2.8–4.4]), undercorrected refractive error (2.3 million cases [1.8–2.8]), age-related macular degeneration (AMD; 1.8 million cases [1.3–2.4]), and diabetic retinopathy (DR; 0.86 million cases [0.59–1.23]).[1]
A recent cross-sectional, population-based survey in India among persons aged 50 years and above found a blindness prevalence of 1.99%. Cataract (66.2%), corneal opacity (CO), complications from cataract surgery (7.2%), diseases of the posterior segment (5.9%), and glaucoma (5.5%) were the main causes of blindness. The survey also showed that 92.9% of blindness and 97.4% of vision impairment are avoidable.[2]
Retinal diseases account for a significant share of the vision loss burden, compared to other eye-related diseases that can cause blindness.[3]
Glaucoma, cataract, AMD, DR, and diabetic macular edema (DME) are among the leading eye diseases causing vision loss. The prevalence of diabetes mellitus in the Indian population is increasing and is expected to reach 109 million by 2030. DR is a complication of diabetes mellitus that damages the retina. It affects up to 80% of people who have had diabetes for 20 years or more and accounts for an estimated 5% of the 45 million blind people worldwide today.
In India, comprehensive eye care services for the prevention, early detection, and management of retinal diseases are almost nonexistent. There is often a shortage of comprehensive eye services, eye care professionals, and resources. Many patients with retinal disorders including DR are not diagnosed before they have significant vision loss or blindness. It is imperative to detect these retinal diseases including DR in their early stages and also to accurately diagnose advanced or sight-threatening stages during screening of retina for necessary clinical intervention and timely treatment.[4,5]
In certain countries, population-based retinal screening programs have significantly decreased the incidence of DR-related blindness.[5,6] However, these screening processes require primary or secondary human readers and, if necessary, an arbitrator to read the images.[4,6] This makes screening programs expensive, time-consuming, and complex, requiring experienced vitreoretinal specialists and posing a barrier to their widespread adoption in developing nations like India and other low- and middle-income countries (LMICs).
Automated artificial intelligence (AI)-based diagnosis of fundus images captured using fundus cameras has the promise of being cost-effective and scalable within population-based retina screening programs.[7,8,9]
The development of AI-based automated methods for retinal screening is being actively pursued. Some of these systems for automated detection of ocular diseases are already being used commercially. A number of AI-based solutions have been developed in the realm of public health, including ophthalmology.
However, to our knowledge, this is the first clinical study evaluating the performance of an AI-based automated system built upon deep learning (DL), convolutional neural networks (CNNs), and image processing techniques in detection of any retinal abnormality encompassing all abnormal pathologies. The study also quantifies the performance of the AI system in classifying DR images from images with other abnormal pathologies.
Methods
Fundus images of outpatient department (OPD) patients at a Tier II premier eye hospital in Chandigarh, India, were captured by a Topcon camera (TRC500DX) over a period of 1 year (November 2019–October 2020).
Adven-i, a proprietary AI-based automated diagnostic tool for retinal images, was used to analyze these retrospective fundus images. Three vitreoretinal specialists, blinded to one another's readings and to the AI system's findings, annotated these images to generate the ground truth, or reference gold standard.
Adven-i identifies normal and abnormal fundus images and further classifies DR among abnormal ones.
The objective of any retina screening program is to identify patients with any type of abnormal retinal condition, not just DR, who should be reported to a vitreoretina specialist or an ophthalmologist. This is because in a real-life scenario, any retina screening program will capture all kinds of abnormal fundus images. Adven-i is built to precisely address this.
Institutional review board approval was obtained from the hospital's ethics committee. Informed consent was obtained from all participants. The protocol adhered to the tenets of the Declaration of Helsinki. The automated cloud-based analysis platform is built on proprietary AI technologies. However, authors of the study have no financial interest in these technologies.
Acquisition of retinal images for study
This was a retrospective, cross-sectional study assessing the diagnostic accuracy of an automated AI system. Patients visiting the outpatient department of an eye hospital in Chandigarh, India, for routine eye check-up or with visual symptoms and other eye-related complaints were examined, and color retinal fundus images were captured by a Topcon camera (TRC500DX, manufacturing year 2010). Routinely, fundus images were taken focusing on the area between fields 1 and 2, including the optic disc and the macula, but some images were also captured from the peripheral regions as per the suggestion of the examining doctor. Fundus images acquired at the OPD over a period of 1 year were used in the study. The data represented a real-world scenario, in which various abnormal manifestations may be present in the fundus images and are not restricted to DR alone. Information regarding age, gender, diabetes status, duration since diabetes onset, postprandial blood glucose level, and any other medical history was not known.
Annotation by human readers
The study dataset, consisting of a set of color retinal fundus images, was completely anonymized. The human readers who participated in this study to generate the ground truth were blinded to any prior knowledge of the disease prevalence, to whether or not the images were acquired after pupil dilation, to the fundus camera vendor and whether it was mydriatic or nonmydriatic, and to any other information that could potentially bias the study.
The images were stored on a Health Insurance Portability and Accountability Act (HIPAA)-compliant cloud server (Amazon Web Services) and read by three vitreoretinal specialists, with each of them having more than 5 years of experience.
The three vitreoretinal specialists annotated these images to generate the ground truth as one of four class labels, namely, i) normal, ii) proliferative DR (PDR), iii) nonproliferative DR (NPDR), and iv) other (any other abnormal pathology that is not DR). The image labeling was carried out following standard protocols. The grading of retinopathy was done according to the international clinical DR severity scale.[10]
There were two more categories, “bad” and “questionable.” The “bad” category was used when the image was not of any diagnostic quality, while the “questionable” category was used only when the grader thought that an abnormality was definitely present, but its nature was uncertain. Particularly, if the likelihood of the abnormality being DR could not be ascertained with confidence, the grader assigned the grade or label of the image as “questionable.”
Since it was a double-blinded study, all three readers were completely blinded to one another's annotations or image labels to avoid any inter-reader bias. An expert vitreoretina specialist was designated as the arbitrator to provide the final label on images for which there was complete nonconcurrence among the three readers. The readers were also unaware of the AI tool's predictions to avoid any obvious bias. The image labels collected from all three readers were consolidated, and wherever two or more readers assigned the same label, that label was considered final (majority decision). If the labels of all three readers differed, the arbitrator relabeled the image and his decision was considered final.
After combining all three readers' and arbitrator's annotations (if any), the final label (ground truth) was generated as the reference gold standard for each fundus image.
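The consolidation rule can be expressed compactly. The Python sketch below is a minimal illustration with hypothetical label strings, not the study's actual tooling; it applies the majority decision over the three readers and falls back to the arbitrator only on complete nonconcurrence.

```python
from collections import Counter

LABELS = {"normal", "PDR", "NPDR", "other", "bad", "questionable"}  # labels used in the study

def consolidate(reader_labels, arbitrator_label=None):
    """Majority vote among three readers; arbitrator decides if all disagree.

    reader_labels    : list of three label strings, one per reader
    arbitrator_label : label supplied by the arbitrator (used only on full disagreement)
    """
    assert len(reader_labels) == 3 and all(l in LABELS for l in reader_labels)
    label, count = Counter(reader_labels).most_common(1)[0]
    if count >= 2:            # two or more readers agree -> majority decision
        return label
    if arbitrator_label is None:
        raise ValueError("All three readers disagree; arbitrator label required")
    return arbitrator_label   # complete nonconcurrence -> arbitrator's call is final

# Hypothetical examples: two readers grade NPDR, one grades "other" -> NPDR wins
print(consolidate(["NPDR", "NPDR", "other"]))            # NPDR
print(consolidate(["normal", "NPDR", "other"], "NPDR"))  # arbitrator resolves to NPDR
```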
Prediction by online AI system
Adven-i is a cloud-based telemedicine platform that can be accessed anywhere, at any time, through any web browser. Retinal fundus images captured with any third-party fundus camera are automatically analyzed after being uploaded to the cloud server, and clinically relevant reports are generated in real time within a few seconds. Fig. 1 illustrates the workflow of Adven-i.
Figure 1.

Adven-i workflow
A CNN model evaluates the quality of the images to be analyzed and excludes from further analysis those that do not meet diagnostic quality, including those that are not fundus images.
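The architecture of this quality module is not disclosed. The sketch below is a hypothetical PyTorch quality gate (the MobileNet backbone, untrained weights, and the 0.5 threshold are assumptions) showing how a binary gradable-versus-nongradable CNN can filter images before diagnostic analysis.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical quality gate: a small backbone with a 2-class head
# (0 = not gradable / not a fundus image, 1 = gradable). Weights are untrained here.
quality_net = models.mobilenet_v3_small(weights=None, num_classes=2).eval()

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def is_gradable(path, threshold=0.5):
    """Return True if the image passes the diagnostic-quality gate."""
    x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        prob_gradable = torch.softmax(quality_net(x), dim=1)[0, 1].item()
    return prob_gradable >= threshold
```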
Image preprocessing techniques were applied to prepare the data (images) for building the CNN models. Common data augmentation techniques were applied during training of all the CNN models.
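The exact preprocessing and augmentation recipe is not reported; a typical torchvision pipeline for fundus images might look like the following sketch, where the crop size, jitter ranges, and normalization statistics are illustrative assumptions rather than values used by Adven-i.

```python
from torchvision import transforms

# Hypothetical training-time augmentation for fundus images; the study only states
# that "common" augmentations were used, so these choices are illustrative.
train_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),   # mimic illumination variation
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],        # ImageNet statistics (assumed)
                         std=[0.229, 0.224, 0.225]),
])

# Evaluation uses deterministic preprocessing only
eval_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```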
Several CNN models were developed in-house to differentiate among normal, PDR, NPDR, and other retinal abnormalities (without specifying the particular abnormality). The AI models were trained using in-house training data consisting of approximately 100K images collected from various eye care centers in India. The images were captured using both mydriatic and nonmydriatic cameras from various manufacturers such as Topcon, Zeiss, Forus, Intuvision, and Remidio (the exact camera models were not known). Internal validation of the CNN models was performed using a validation set of over 50K images (not included in the training set). Results on these datasets are shown in Table 1; a sketch of how such classification modules might be cascaded at inference time follows the table.
Table 1.
Performance metric on in-house validation set
| Study module | Sensitivity (%) | Specificity (%) | Accuracy (%) | F1 score (%) | PPV (%) | NPV (%) |
|---|---|---|---|---|---|---|
| Normal versus abnormal | 93.92 | 88.31 | 92.26 | 94.47 | 95.03 | 85.91 |
| DR versus others | 92.48 | 86.33 | 87.68 | 89.40 | 65.51 | 97.61 |
| PDR versus NPDR | 83.58 | 87.53 | 86.09 | 81.50 | 90.21 | 91.33 |
DR=diabetic retinopathy, NPDR=nonproliferative DR, NPV=negative predictive value, PDR=proliferative DR, PPV=positive predictive value
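Adven-i's internal architecture is proprietary; the sketch below only illustrates one plausible way the three study modules listed in Table 1 (normal versus abnormal, DR versus others, PDR versus NPDR) could be chained at inference time. The model objects and thresholds are placeholders, not the actual system.

```python
import torch

def classify_fundus(image_tensor, normal_vs_abnormal, dr_vs_other, pdr_vs_npdr,
                    t_abnormal=0.5, t_dr=0.5, t_pdr=0.5):
    """Cascade three hypothetical binary CNN heads, one per study module.

    image_tensor        : preprocessed image, shape (1, 3, H, W)
    normal_vs_abnormal  : callable returning a logit for P(abnormal)
    dr_vs_other         : callable returning a logit for P(DR | abnormal)
    pdr_vs_npdr         : callable returning a logit for P(PDR | DR)
    Thresholds are placeholders; in practice they set each stage's operating point.
    """
    with torch.no_grad():
        if torch.sigmoid(normal_vs_abnormal(image_tensor)).item() < t_abnormal:
            return "normal"                       # nonreferable / within normal limits
        if torch.sigmoid(dr_vs_other(image_tensor)).item() < t_dr:
            return "other abnormality (non-DR)"   # referable, but not DR
        if torch.sigmoid(pdr_vs_npdr(image_tensor)).item() < t_pdr:
            return "NPDR"
        return "PDR"
```

In such a cascade, lowering the first threshold trades specificity for higher sensitivity at the screening step, which is usually the preferred direction for a referral tool.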
The retrospective data received from the study center was not curated, and we have processed the entire dataset without any predefined exclusion criteria.
Statistical analysis
Several statistical metrics, including sensitivity, specificity, accuracy, area under the receiver operating characteristic (ROC) curve (AUC), positive predictive value (PPV), and negative predictive value (NPV), were computed for three different scenarios: i) classification of fundus images as normal or abnormal (without specifying the abnormality), ii) classification of fundus images into DR versus all other remaining retinal abnormalities (non-DR), and iii) differentiation of PDR from NPDR fundus images. Performance analysis of Adven-i was carried out in comparison with the ground truth as the reference gold standard. Statistical analysis was done using tools built in-house.
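The metrics themselves are standard; the snippet below is an equivalent reference implementation using scikit-learn and NumPy (the in-house tools used in the study are not available). It includes a normal-approximation 95% CI of the kind reported in the Results, although the study does not state which interval method was actually used.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def binary_metrics(y_true, y_pred, y_score=None):
    """y_true/y_pred: 0 = negative class, 1 = positive class; y_score: scores for AUC."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    out = {
        "sensitivity": sens,
        "specificity": spec,
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        # Normal-approximation (Wald) 95% CI; an assumed interval method, for illustration
        "sens_95ci": (sens - 1.96 * np.sqrt(sens * (1 - sens) / (tp + fn)),
                      sens + 1.96 * np.sqrt(sens * (1 - sens) / (tp + fn))),
        "spec_95ci": (spec - 1.96 * np.sqrt(spec * (1 - spec) / (tn + fp)),
                      spec + 1.96 * np.sqrt(spec * (1 - spec) / (tn + fp))),
    }
    if y_score is not None:
        out["auc"] = roc_auc_score(y_true, y_score)
    return out
```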
Results
The Adven-i quality module selected 2700 images of diagnosable quality from a dataset of 3200 retinal fundus images. After removing the images in the “questionable” and “bad” categories, a total of 2261 images were processed for automated analysis using the various CNN models built in-house. The 2261 images comprised 478 normal images and 1783 abnormal images; of the abnormal images, 923 were DR cases and 860 showed abnormal pathologies other than DR. Among the 923 DR images, there were 257 PDR and 660 NPDR images.
Adven-i performance was calculated from the agreement of its class predictions with the ground truth generated by the human experts (annotations or class labels). For normal versus abnormal image classification, Adven-i had a sensitivity of 95.12% (95% confidence interval [CI], 94.12%–96.12%), a specificity of 85.77% (95% CI, 82.64%–88.90%), and an accuracy of 93.14%. The AUC, PPV, and NPV were 0.9632, 96.15%, and 82.49%, respectively.
Here, “abnormal” is an indication of any clinically significant abnormal manifestation, not restricted to diseases or pathologies. As per the observation of the expert human readers (annotators), the retinal abnormalities in the study data consisted of a varied spectrum of prevalent diseases including cases of DR, hypertensive retinopathy (HTR), DME, AMD, drusen, central serous retinopathy (CSR), macular hole, epiretinal membrane, Stargardt disease, Best disease, gyrate atrophy, choroidal melanoma and other macular abnormalities, choroidal neovascularization (CNVM), retinitis pigmentosa, glaucoma, disc suspect, optic disc edema, optic neuropathy, multifocal choroiditis, myopic degeneration, vessel occlusions (Branch Retinal Vein Occlusion (BRVO), Central Retinal Vein Occlusion (CRVO), Central Retinal Artery Occlusion (CRAO), Branch Retinal Artery Occlusion (BRAO)), retinal detachment, vasculitis, and other nonspecific findings. Some of these abnormal images were nonreferable ones and can be categorized as “within normal limits (WNL).”
Analysis of false negatives (FNs) showed that Adven-i incorrectly labeled 87 abnormal images as normal. Of these, only one image was found to be a referable DR (RDR; according to the Scottish Grading Protocol[11]) image but was misclassified as normal. Seventy-nine of the misclassified abnormal images were not referable and can be categorized as “WNL”; these mainly manifested microaneurysms, a small number of exudates, or cotton wool spots. The remaining eight FNs, including the DR image, were referable and mainly manifested very subtle macular abnormalities. Some example misclassified images are illustrated in Fig. 2.
Figure 2.

Examples of abnormal images misclassified as normal. (a) Disc edema, (b) DR. DR = diabetic retinopathy
There were 68 normal images that Adven-i incorrectly classified as abnormal. All of these images contained artifacts or had illumination issues. Fig. 3 illustrates some example images of false positives (FPs).
Figure 3.

Examples of normal images misclassified as abnormal because of (a) an artifact due to a dirty camera lens and (b) an artifact
In terms of referable and nonreferable fundus images, Adven-i achieved a sensitivity of 99.55%, a specificity of 87.79%, and an accuracy of 96.63%.
The performance of Adven-i in detecting and segregating fundus images with DR from those with abnormal pathologies not due to DR showed a sensitivity of 91.87% (95% CI, 90.11%–93.63%), a specificity of 85.12% (95% CI, 82.73%–87.49%), and an accuracy of 88.61%. The AUC was calculated to be 0.944, while the PPV and NPV were 86.89% and 90.71%, respectively. Seventy-five DR images were misclassified as not being due to DR, eight of them being PDR cases. The DR images misclassified as non-DR were mostly cases of mixed retinopathy or lasered DR. Fig. 4 illustrates some example images of DR misclassified as non-DR cases. However, because these were reported as abnormal, referable cases, the patients would still receive a doctor's advice and treatment.
Figure 4.

Examples of DR images misclassified as having non-DR abnormal pathologies. (a) moderate NPDR, (b) lasered PDR. DR = diabetic retinopathy, NPDR = nonproliferative DR, PDR = proliferative DR
Adven-i detected more than mild DR (mtmDR) and vision-threatening DR (vtDR) images with 100% accuracy by flagging them as abnormal and thus reporting them as referable cases.
The classification of PDR versus NPDR was 80.50% accurate, with a sensitivity of 80.16% and a specificity of 80.63%. The AUC was 0.8753, the PPV was calculated to be 61.49%, and the NPV was 91.33%. However, because all of these images were already classified as having manifestations of DR pathology, they had been correctly flagged as having DR.
All metrics were calculated with respect to the reference gold standard generated by the three readers. Table 2 gives a summary of the performance measures. Average time for inference and report generation per image was <5 s.
Table 2.
Summary of performance metrics on the clinical study data
| Study module | Sensitivity (%) | Specificity (%) | Accuracy (%) | F1 score (%) | PPV (%) | NPV (%) | AUC |
|---|---|---|---|---|---|---|---|
| Normal versus abnormal | 95.12 | 85.77 | 93.14 | 95.63 | 96.15 | 82.49 | 0.9632 |
| DR versus others | 91.87 | 85.12 | 88.61 | 89.50 | 86.89 | 90.71 | 0.9404 |
| PDR versus NPDR | 80.16 | 80.63 | 80.50 | 69.59 | 61.49 | 91.33 | 0.8753 |
AUC=area under the receiver operating characteristic curve, DR=diabetic retinopathy, NPDR=nonproliferative DR, NPV=negative predictive value, PDR=proliferative DR, PPV=positive predictive value
The overall agreement rate and the kappa statistic were calculated for each pair of readers, as well as between the human reader-generated ground truth and the AI prediction. A kappa value greater than 0.40, 0.60, or 0.75 indicates moderate, good, or excellent agreement, respectively. The inter-reader agreement, as measured by Cohen's kappa, was greater than 0.6 and less than 0.75 for all three pairs of readers. The agreement between the human-generated image labels and the AI-generated image labels was 0.767, which is excellent. The plot of sensitivity versus 1 − specificity across varying cut-offs, generating the ROC curve in the unit square, is illustrated in Fig. 5.
Figure 5.

Receiver operating characteristic curves
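The pairwise agreement above is a standard Cohen's kappa computation; a minimal sketch with scikit-learn follows, using hypothetical label arrays rather than the study data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired labels for one reader pair (or ground truth vs. AI prediction)
labels_a = ["normal", "NPDR", "other", "PDR", "NPDR", "normal"]
labels_b = ["normal", "NPDR", "other", "NPDR", "NPDR", "normal"]

kappa = cohen_kappa_score(labels_a, labels_b)
if kappa > 0.75:
    level = "excellent"
elif kappa > 0.60:
    level = "good"
elif kappa > 0.40:
    level = "moderate"
else:
    level = "poor"
print(f"kappa = {kappa:.3f} ({level} agreement)")
```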
Class activation mapping[12] was also implemented. This gives visual feedback to the physician by displaying the areas of the fundus image that have triggered a positive diagnosis. Examples of outputs are given in Fig. 6.
Figure 6.

Class activation mapping on an example image. (a) Original image (moderate NPDR). (b) Activation map. NPDR = nonproliferative diabetic retinopathy
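For a network that ends in global average pooling, the class activation map of Zhou et al.[12] is simply the last convolutional feature maps weighted by the classifier weights of the predicted class. The sketch below illustrates the idea with a plain ResNet-18 standing in for the proprietary model; the input tensor and untrained weights are placeholders.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# ResNet-18 stands in for the proprietary CNN; any global-average-pooling network works.
net = models.resnet18(weights=None, num_classes=2).eval()

def class_activation_map(x):
    """Return (predicted class, CAM upsampled to input size) for a batch of one image."""
    features = {}
    hook = net.layer4.register_forward_hook(lambda m, i, o: features.update(maps=o))
    with torch.no_grad():
        logits = net(x)                              # forward pass fills `features`
    hook.remove()
    cls = logits.argmax(dim=1).item()
    w = net.fc.weight[cls]                           # classifier weights of predicted class
    cam = torch.einsum("c,bchw->bhw", w, features["maps"])
    cam = F.relu(cam)
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear",
                        align_corners=False).squeeze(1)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
    return cls, cam

# Usage: overlay `cam` on the fundus image to highlight regions driving the prediction.
x = torch.randn(1, 3, 224, 224)                      # placeholder input
pred, cam = class_activation_map(x)
```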
Limitations
Images were acquired with an older Topcon camera model, and 500 images were discarded by the quality module of Adven-i on account of insufficient diagnostic quality. The classification criteria of DR used in this study were based on a single 45° photographic field, whereas the Early Treatment Diabetic Retinopathy Study (ETDRS) classification requires seven standard 30° fields.[13]
In this study, the ETDRS classification criteria were modified. The major difference was that the ETDRS mild, moderate, and severe nonproliferative levels were combined into a single nonproliferative label, owing to the inability to count the number of fields involved with specific DR clinical features. The current version of the AI therefore did not permit classification into different grades of NPDR. Unlike other studies, we were also unable to assess the sensitivity and specificity of Adven-i in detecting DR as referable or nonreferable, although analysis of FNs demonstrated 100% sensitivity for referable cases.
The current study was not patient centric, but was image centric as there was no information available to us regarding the left and right eyes of a patient.
In-house development work to upgrade the system and reduce FPs and FNs is an ongoing process. Another prospective, patient-centric clinical trial has also been completed.
Discussion
In this study, Adven-i performance is assessed on fundus photographs taken using a mydriatic table-top camera (Topcon model TRC500DX, manufacturing year 2010).
The study quantifies the diagnostic accuracy of Adven-i in detecting referable abnormal pathologies in fundus images and in distinguishing normal from abnormal fundus images. Adven-i further distinguishes DR from the rest of the retinal abnormalities (non-DR), and DR images are further classified as proliferative and nonproliferative.
To our knowledge, this is the first study that evaluated an AI algorithm on a noncurated and mixed dataset (real-world data) of fundus images that included normal, DR, and non-DR fundus images.
There are currently no simple protocols for diagnosing retinal ailments at the last mile. As a result, an estimated 95% of these patients, despite having severe retinal diseases, go undiagnosed and untreated.
Complicating matters further is the fact that India has only about 1400 retinal specialists registered with the Vitreo-retinal Society of India (VRSI),[14] implying that there are insufficient skills to deal with and funnel these patients into care systems.
Adven-i was created to address this health-care asymmetry by automatically generating clinically relevant reports in real time and triaging patients requiring clinical advice and treatment. Adven-i can be used to detect retinal abnormalities at an early stage, as well as advanced but asymptomatic conditions requiring treatment.
Various AI systems have been used for automated detection of DR among diabetic cohort and to differentiate nonreferable DR (NRDR) from RDR.[7,8,9,15,16]
In the study of Natarajan et al.[16] using Medios, an offline smartphone-based AI, on 213 patients in a community health center, in both image-wise and patient-wise analyses, the sensitivity for detection of RDR was 100.0% (a total of 15 patients) and specificity was 88.4% (among 12 individuals with cases of mild NPDR as per the human readers, eight (67%) patients were wrongly diagnosed as having RDR by AI, while four (33%) were diagnosed as not having DR). Although the smartphone-based camera was nonmydriatic, patients' pupils were dilated to acquire images. Other than DR, fundus images captured in the community center for this study showed retinitis pigmentosa, drusen, and retinal pigment epithelium changes and were misdiagnosed as RDR by the Medios AI.
In an in-clinic retrospective study by Gulshan et al.[7] on the EyePACS-1 dataset, the sensitivity and specificity were 97.5% and 93.4%, respectively, while on the Messidor dataset, the sensitivity was 96.1% and specificity was 93.9% for detecting RDR. In a study using the cloud-based software EyeArt (Eyenuk),[17] the sensitivity and specificity for detecting any RDR were 91.3% and 91.1%, respectively, and the sensitivity for detecting treatable DR was 98.5%. Ting et al.[8] demonstrated sensitivity and specificity of 90.5% and 91.6%, respectively, for RDR and 100% and 91.1%, respectively, for vision-threatening DR using a table-top fundus camera for image acquisition in a multi-ethnic study.
Adven-i is trained on both DR and non-DR pathologies, unlike the existing solutions discussed above, which label abnormal pathologies as DR. Numerous common retinal abnormalities are not DR, and labeling all of these abnormal pathologies as DR is clinically incorrect; all other AI solutions have precisely this limitation.
The superiority end point set by the US Food and Drug Administration (FDA) in the pivotal clinical trial evaluating the IDx AI algorithm was a sensitivity of 85% and a specificity of 82.5%.[15] The AI algorithm of Adven-i provides a sensitivity and specificity of 95.12% and 85.77%, respectively, for detecting normal versus abnormal images and 91.87% and 85.12%, respectively, for detecting DR versus non-DR images; these figures exceed the defined thresholds.
Adven-i also provides a lesion detection map on the image [Fig. 6], which explains the AI prediction.
The extremely high sensitivity of Adven-i in detecting retinal abnormalities encompassing a wide range of diseases demonstrates the utility of its adoption in the current workflow and eye care ecosystem. In the absence of retinal specialists, it can be used as an initial point of referral (screening tool) for retinal abnormalities.
Adven-i's efficacy in detecting DR with high accuracy and classifying various stages of the disease also demonstrates its adoption in a variety of use cases such as diabetic care centers, community and primary care centers, corporate and industrial health screening, insurance, and others. Furthermore, Adven-i can operate at different points on the ROC curve for different use cases that require either higher sensitivity over specificity or vice versa.
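Selecting such an operating point amounts to choosing the score threshold that meets a target sensitivity (or specificity) on the ROC curve. A small sketch with scikit-learn, using hypothetical labels and scores rather than the study data, is shown below.

```python
import numpy as np
from sklearn.metrics import roc_curve

def threshold_for_sensitivity(y_true, y_score, target_sensitivity=0.95):
    """Pick the highest threshold whose sensitivity meets the target (screening-oriented)."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    ok = np.where(tpr >= target_sensitivity)[0]
    if ok.size == 0:
        raise ValueError("No threshold reaches the target sensitivity")
    i = ok[0]                                   # first (highest-threshold) point meeting the target
    return thresholds[i], tpr[i], 1 - fpr[i]    # threshold, sensitivity, specificity

# Hypothetical labels/scores purely for illustration
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.2, 0.7, 0.6])
thr, sens, spec = threshold_for_sensitivity(y_true, y_score, 0.9)
print(f"threshold={thr:.2f}, sensitivity={sens:.2f}, specificity={spec:.2f}")
```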
Conclusions
Patients in eye hospitals and community or public health eye screening camps or centers present with a variety of retinal disorders, including but not limited to DR. Our approach of detecting abnormal pathologies in the retina and, within those, identifying patients with DR is therefore justified.
Adven-i can provide a cost-effective and useful diagnostic report on images obtained with any fundus camera, with or without pupil dilation. Another clinical trial with an upgraded AI system has also validated these findings on a multicentric prospective dataset.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References
1. GBD 2019 Blindness and Vision Impairment Collaborators; Vision Loss Expert Group of the Global Burden of Disease Study. Causes of blindness and vision impairment in 2020 and trends over 30 years, and prevalence of avoidable blindness in relation to VISION 2020: The Right to Sight: An analysis for the Global Burden of Disease Study. Lancet Glob Health. 2021;9:e144–60. doi: 10.1016/S2214-109X(20)30489-7.
2. Vashist P, Senjam SS, Gupta V, Gupta N, Shamanna BR, Wadhwani M, et al. Blindness and visual impairment and their causes in India: Results of a nationally representative survey. PLoS One. 2022;17:e0271736. doi: 10.1371/journal.pone.0271736.
3. Retinal diseases leading cause of chronic blindness. The New Indian Express. Available from: https://www.newindianexpress.com/cities/bengaluru/2018/aug/08/retinal-diseases-leading-cause-of-chronic-blindness-1855127.html [Last accessed on 2023 Aug 23].
4. Squirrell DM, Talbot JF. Screening for diabetic retinopathy. J R Soc Med. 2003;96:273–6. doi: 10.1258/jrsm.96.6.273.
5. Bachmann MO, Nelson SJ. Impact of diabetic retinopathy screening on a British district population: Case detection and blindness prevention in an evidence-based model. J Epidemiol Community Health. 1998;52:45–52. doi: 10.1136/jech.52.1.45.
6. Scanlon PH. The English national screening programme for diabetic retinopathy 2003-2016. Acta Diabetol. 2017;54:515–25. doi: 10.1007/s00592-017-0974-1.
7. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316:2402–10. doi: 10.1001/jama.2016.17216.
8. Ting DSW, Cheung CY, Lim G, Tan GSW, Quang ND, Gan A, et al. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA. 2017;318:2211–23. doi: 10.1001/jama.2017.18152.
9. Rajalakshmi R, Subhashini R, Anjana RM, Mohan V. Automated diabetic retinopathy detection in smartphone-based fundus photography using artificial intelligence. Eye (Lond). 2018;32:1138–44. doi: 10.1038/s41433-018-0064-9.
10. Wilkinson CP, Ferris FL III, Klein RE, Lee PP, Agardh CD, Davis M, et al.; Global Diabetic Retinopathy Project Group. Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology. 2003;110:1677–82. doi: 10.1016/S0161-6420(03)00475-5.
11. Zacharia S, Wykes W, Yorston D. Grading diabetic retinopathy (DR) using the Scottish grading protocol. Community Eye Health. 2015;28:72–3.
12. Zhou B, Khosla A, Lapedriza A, Oliva A, Torralba A. Learning deep features for discriminative localization. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016. Available from: http://cnnlocalization.csail.mit.edu/Zhou_Learning_Deep_Features_CVPR_2016_paper.pdf [Last accessed on 2023 Aug 25].
13. Vujosevic S, Benetti E, Massignan F, et al. Screening for diabetic retinopathy: 1 and 3 nonmydriatic 45-degree digital fundus photographs vs 7 standard early treatment diabetic retinopathy study fields. Am J Ophthalmol. 2009;148:111–8. doi: 10.1016/j.ajo.2009.02.031.
14. Vitreo-retinal Society of India (VRSI). Information collected internally by one of the authors from VRSI sources. Available from: https://vrsi.in/ [Last accessed on 2023 Aug 25].
15. Abramoff M, Lavin PT, Birch M, Shah N, Folk J. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit Med. 2018;1:1–8. doi: 10.1038/s41746-018-0040-6.
16. Natarajan S, Jain A, Krishnan R, Rogye A, Sivaprasad S. Diagnostic accuracy of community-based diabetic retinopathy screening with an offline artificial intelligence system on a smartphone. JAMA Ophthalmol. 2019;137:1182–8. doi: 10.1001/jamaophthalmol.2019.2923.
17. Bhaskaranand M, Ramachandra C, Bhat S, Cuadros J, Nittala MG, Sadda SR, et al. The value of automated diabetic retinopathy screening with the EyeArt system: A study of more than 100,000 consecutive encounters from people with diabetes. Diabetes Technol Ther. 2019;21:635–43. doi: 10.1089/dia.2019.0164.
