Abstract
Objectives
Despite global research on early detection of age-related macular degeneration (AMD), not enough is being done for large-scale screening. Automated analysis of retinal images captured via smartphone presents a potential solution; however, to our knowledge, such an artificial intelligence (AI) system has not been evaluated. The study aimed to assess the performance of an AI algorithm in detecting referable AMD on images captured with a portable fundus camera.
Design, setting
A retrospective image database from the Age-Related Eye Disease Study (AREDS) and the target device was used.
Participants
The algorithm was trained on two distinct data sets with macula-centric images: initially on 108,251 images (55% referable AMD) from AREDS and then fine-tuned on 1108 images (33% referable AMD) captured on Asian eyes using the target device. The model was designed to indicate the presence of referable AMD (intermediate and advanced AMD). Following the first training step, the test set consisted of 909 images (49% referable AMD). For the fine-tuning step, the test set consisted of 238 (34% referable AMD) images. The reference standard for the AREDS data set was fundus image grading by the central reading centre, and for the target device, it was consensus image grading by specialists.
Outcome measures
Area under the receiver operating characteristic curve (AUC), sensitivity and specificity of the algorithm.
Results
Before fine-tuning, the deep learning (DL) algorithm exhibited a test set (from AREDS) sensitivity of 93.48% (95% CI: 90.8% to 95.6%), specificity of 82.33% (95% CI: 78.6% to 85.7%) and AUC of 0.965 (95% CI: 0.95 to 0.98). After fine-tuning, the DL algorithm displayed a test set (from the target device) sensitivity of 91.25% (95% CI: 82.8% to 96.4%), specificity of 84.18% (95% CI: 77.5% to 89.5%) and AUC of 0.947 (95% CI: 0.911 to 0.982).
Conclusion
The DL algorithm shows promising results in detecting referable AMD from a portable smartphone-based imaging system. This approach can potentially bring effective and affordable AMD screening to underserved areas.
Keywords: medical retina, public health, aged
STRENGTHS AND LIMITATIONS OF THIS STUDY.
Development and validation: The study validated an artificial intelligence (AI) algorithm for detecting referable age-related macular degeneration (AMD) on retinal images that used two different data sets during development, including the Age-Related Eye Disease Study database and a portable fundus camera (target device).
Improved accuracy: Training the AI on a larger, diverse data set before narrowing it down to the target device data set enhanced detection accuracy.
Efficient deployment: The algorithm uses a lightweight architecture, making it deployable on a smartphone-based fundus camera without needing internet connectivity, promoting accessibility.
Narrow focus: The AI was specifically developed and evaluated for referable AMD and does not account for other retinal pathologies.
Pending validation: The study demonstrated promising results on a test set; the next step entails real-world evaluation.
Introduction
Age-related macular degeneration (AMD) is a leading cause of visual impairment in both developed and developing nations, particularly in those aged 65 years and older. Projections indicate that by the year 2040, the global population affected by AMD is expected to reach approximately 288 million people.1 Early AMD is characterised by drusen and abnormalities of the retinal pigment epithelium. As the disease progresses, advanced AMD manifests as neovascular (nAMD) or central geographical atrophy (also called dry or non-exudative AMD). Advanced AMD accounts for an estimated 90% of cases of severe vision loss.2 Neovascular AMD can be treated effectively using anti-vascular endothelial growth factor agents, allowing patients to lead productive lives if diagnosed and treated early.3 Therefore, it is crucial to establish a robust and efficient screening system, especially in areas with low specialist-to-patient ratios.
Fundus photography is the gold standard tool used in primary care settings and efficiently detects AMD.4–6 However, the interpretation of photographs necessitates trained personnel; due to the lack of sufficient experts, a delay is expected in underserved areas, including developing countries. Manual screening is also challenging in the developed world due to a large target population. As a result, creating automated tools to identify referable AMD is useful in both developed and developing countries. Various groups have made considerable efforts in this direction.7–11
The AMD detection algorithm was developed using convolutional neural networks (CNNs), a category of deep learning (DL) algorithms. In brief, a computer learns how to solve a vision task by iterating over a large data set of input images and their desired outputs. The model consists of a large collection of mathematical operations arranged in layers; the coefficients used in those operations (the weights) are automatically adjusted to generate the desired output for a given input image. When trained adequately, the resulting model can generalise beyond the input data set and generate accurate outputs for any image from the same problem space.
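The weight-adjustment process described above can be illustrated with a toy example (not the study's model): a single-layer classifier whose weights are updated by gradient descent until its outputs match the desired labels. The data, learning rate and iteration count are arbitrary illustrative choices.

```python
import numpy as np

# Toy illustration of the idea described above: a model's weights are
# adjusted iteratively so its outputs match the desired labels. A real
# CNN does the same with millions of weights over batches of images.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                   # 200 "images" of 5 features each
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = (X @ true_w > 0).astype(float)              # desired outputs (labels)

w = np.zeros(5)                                 # start from uninformed weights
for _ in range(500):                            # iterate over the data set
    p = 1.0 / (1.0 + np.exp(-(X @ w)))          # current predictions
    w -= 0.1 * X.T @ (p - y) / len(y)           # nudge weights toward the labels

accuracy = np.mean((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y)
```

After training, the learned weights reproduce the desired labels with high accuracy on this synthetic task.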
Smartphone-based, affordable fundus cameras, powered with artificial intelligence (AI) capabilities, have shown promise and can significantly expand access to care for a broader population. Our group has previously developed and validated a DL-based AI algorithm for screening people with diabetic retinopathy using a smartphone-based fundus imaging system.12–14 The current study describes the results of our AI algorithm for screening referable AMD.
Methodology
Ethical approval
The study was conducted in accordance with the Declaration of Helsinki. Approval was obtained from the NIH to use the AREDS database (#89924–2 and #89923–2).
The test data set contains de-identified retrospective images from the target device (Remidio FOP NM10, Remidio Innovative Solutions, Bangalore, India) server, where providers obtained patient consent for the use of de-identified data in research and development.
Overview
CNNs excel when trained with vast data sets. Ideally, to achieve top accuracy, data sets should encompass tens of thousands of images. Importantly, these images should mirror the real-world conditions where the CNN will operate. For instance, a CNN trained using images from a specific camera or of a particular ethnicity may not perform as well outside those conditions. This represents a challenge when the goal is to train a CNN for a portable non-mydriatic camera for global use. Generating a training set in the magnitude of tens of thousands of images with a novel device is resource-intensive and time-consuming, especially when focusing on populations with low prevalence for the target disease.
Transfer learning is a common DL technique that can offer a solution. Instead of starting the training process with random weights, weights trained on a related task are used as a starting point, leveraging the pre-existing knowledge in the model. This generally leads to quicker and more accurate training outcomes, even with smaller data sets.
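A rough sketch of this idea (with synthetic data, not fundus images): the model below is first fitted on a large related data set and then fine-tuned on a small one. In scikit-learn, `warm_start=True` makes the second `fit` call resume from the previously learned weights instead of reinitialising them, mimicking the two-step strategy.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Sketch of transfer learning: instead of starting from random weights,
# fine-tuning continues from weights learned on a larger related data set.
rng = np.random.default_rng(0)
w = rng.normal(size=10)
X_large = rng.normal(size=(2000, 10))
y_large = (X_large @ w > 0).astype(int)                 # large "source" task
X_small = rng.normal(size=(100, 10))
y_small = (X_small @ w + 0.3 > 0).astype(int)           # small, slightly shifted "target" task

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300,
                      warm_start=True, random_state=0)
model.fit(X_large, y_large)     # step 1: train on the large related data set
model.fit(X_small, y_small)     # step 2: fine-tune on the small target data set
acc = model.score(X_small, y_small)
```

The second fit adapts the already-trained weights to the shifted target task rather than learning it from scratch.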
Our approach used transfer learning to harness another extensive data set: the Age-Related Eye Disease Study (AREDS). This data set was captured with a tabletop camera in the USA. We retrieved 128,918 gradable macula-centred images from the study. We used a two-step process: we initially trained a first model detecting the presence of AMD using solely the AREDS data set. We then fine-tuned this model on a small data set of 1853 images captured by the target device in a different ethnic population. The data set collected on the target device is thus not used to train an AMD detection neural network from scratch; it is used to adapt an existing AMD detection model to the specificities of the target camera and population. Our hypothesis is that this strategy would create an efficient CNN without the need for an extensive data set from the target device and population.
The model was trained to separate non-referable AMD images from referable AMD images as a binary classification. AMD grading was as per the AREDS system: stage 1 (no AMD), stage 2 (early AMD), stage 3 (intermediate AMD) and stage 4 (advanced AMD with foveal geographical atrophy (GA) or choroidal neovascularisation (nAMD)). We defined non-referable AMD as no AMD or early AMD and referable AMD as intermediate or advanced AMD (GA or nAMD). The operating point was chosen to achieve the highest sensitivity for 85% specificity. Extensive experiments were conducted to determine the right model to achieve this. We chose the EfficientNet V.2 architecture, designed by Google, for its suitability for portable, on-the-edge deployments.
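The operating-point selection described above can be sketched as follows, assuming validation-set labels and model scores are available (synthetic here): among all thresholds with specificity of at least 85%, pick the one giving the highest sensitivity.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Sketch of operating-point selection: highest sensitivity subject to
# specificity >= 85% (i.e. false positive rate <= 15%). Labels and
# scores below are synthetic, for illustration only.
rng = np.random.default_rng(42)
labels = rng.integers(0, 2, size=1000)                     # 0 = non-referable, 1 = referable
scores = labels * 0.6 + rng.normal(0.3, 0.25, size=1000)   # synthetic model probabilities

fpr, tpr, thresholds = roc_curve(labels, scores)
ok = fpr <= 0.15                                           # specificity >= 85%
best = np.argmax(tpr[ok])                                  # highest sensitivity among those
threshold = thresholds[ok][best]
sensitivity = tpr[ok][best]
specificity = 1 - fpr[ok][best]
```

The chosen `threshold` is then fixed and applied unchanged to the test set.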
Preprocessing
As a first step, the images fed into the network were cropped to remove the black borders introduced by the fundus camera, then downsampled to a standardised size of 300×300 pixels. In the first training stage, random horizontal flipping was applied. During the second stage, we introduced additional augmentations such as rotations; contrast, brightness, hue and saturation changes; and grid dropout.
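A minimal sketch of this preprocessing, with assumed details: the border-detection threshold and nearest-neighbour resampling below are illustrative choices, not necessarily those of the actual pipeline.

```python
import numpy as np

# Sketch of the preprocessing described above: crop away the black
# border around the circular fundus area, then downsample to 300x300.
def preprocess(img, size=300, threshold=10):
    gray = img.mean(axis=2)                            # average the RGB channels
    rows = np.where(gray.max(axis=1) > threshold)[0]   # rows containing retina
    cols = np.where(gray.max(axis=0) > threshold)[0]   # columns containing retina
    cropped = img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
    h, w = cropped.shape[:2]
    ri = np.arange(size) * h // size                   # nearest-neighbour row indices
    ci = np.arange(size) * w // size                   # nearest-neighbour column indices
    return cropped[ri][:, ci]

# Synthetic "fundus": a bright disc on a black background.
img = np.zeros((600, 800, 3), dtype=np.uint8)
yy, xx = np.ogrid[:600, :800]
img[(yy - 300) ** 2 + (xx - 400) ** 2 < 250 ** 2] = 120
out = preprocess(img)
```

A random horizontal flip, as used in the first training stage, is then simply `img[:, ::-1]` applied with probability 0.5.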
First training phase
The first stage of the training used the AREDS data set. This step aimed to train a robust model independent of the target device. This initial model was trained by fine-tuning a model pretrained on ImageNet, a generic image classification task. During this phase, 128,918 macula-centred fundus images belonging to 4028 subjects and their diagnoses were used. The data set was split into the train (126,823 images), validation (1186 images) and test (909 images) sets. Borderline images, defined as early AMD with a medium drusen area (63–124 microns in diameter), were removed from the training and validation sets but not from the test set. Empirical experimentation showed that this strategy improved neural network convergence. The final train and validation sets comprised 108,251 images (55% referable AMD) and 990 images (45% referable AMD). The test set of 909 images contained 49% referable AMD with all stages of the disease (table 1). In both steps, care was taken to avoid using multiple images of the same patient in different data sets. Image allocation in all sets respected the distribution of AMD stages and other phenotypical data.
Table 1. Data organisation based on fundus images.
| Data set | Referability | AMD category | No. of images for training | No. of images for validation | No. of images for testing |
| AREDS (n=110,150) | Non-referable | No AMD | 39,268 | 404 | 284 |
| | | Early | 9332 | 140 | 180 |
| | Referable | Intermediate | 39,515 | 312 | 371 |
| | | Advanced | 20,136 | 134 | 74 |
| Target device (n=1584) | Non-referable | No AMD | 541 | 116 | 115 |
| | | Early | 110 | 26 | 25 |
| | | Both* | 86 | 17 | 18 |
| | Referable | Intermediate | 194 | 36 | 40 |
| | | Advanced | 143 | 36 | 34 |
| | | Both† | 34 | 7 | 6 |
*One retina specialist graded ‘No AMD’ while another graded ‘Early AMD’, but two out of three agreed on ‘non-referable AMD’.
†One retina specialist graded ‘Intermediate’ while another graded ‘Advanced’, but two out of three agreed on ‘referable AMD’.
AMD, age-related macular degeneration; AREDS, Age-Related Eye Disease Study.
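The patient-level separation described above (no patient's images in more than one set) can be sketched with a grouped split; the group IDs and split ratio here are illustrative, and the study additionally balanced the AMD stage distribution across sets.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Illustrative grouped split: images from the same patient (group) never
# land in different sets. Sizes here are synthetic, not the study's.
rng = np.random.default_rng(0)
n_images = 1000
patient_id = rng.integers(0, 400, size=n_images)   # ~400 synthetic patients
labels = rng.integers(0, 2, size=n_images)

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(np.zeros(n_images), labels,
                                          groups=patient_id))

# No patient appears in both sets:
overlap = set(patient_id[train_idx]) & set(patient_id[test_idx])
```

Splitting by group rather than by image prevents a model from being tested on near-duplicates of its training images.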
Second training phase
The second stage of training consisted of fine-tuning the model with a much smaller data set captured with the target deployment device. The objective of this step was to adapt the model trained during the first stage to the characteristics (such as image resolution, tint, field of view and pigmentation) of the images captured with the target device. The data set consists of 2012 macula-centred fundus images (ethnicity: South Asian) gathered in screening and clinical settings using the target device between March 2013 and October 2020 and between January and March 2022, with a mean age of 51.9 years and a comparable distribution of men (55.7%) and women (44.3%). Adults who participated in outreach screening camps and clinic visits, with any stage of AMD or deemed to have a normal retina, were included. Images deemed ungradable or inconclusive for referability by the experts, and images showing other retinal conditions, were excluded. The data set was labelled by three retina specialists for image quality (online supplemental file 1) and stage of disease based on the AREDS four-category classification. 101 images were removed after the graders disagreed on referability, and 327 were removed after they were deemed ungradable. The kappa agreements between each specialist’s grades and their consensus for referable AMD were 0.715, 0.824 and 0.722. The remaining 1584 images were split into the train (1108 images, 33% referable AMD), validation (238 images, 33% referable AMD) and test (238 images, 34% referable AMD) sets (table 1).
Statistical analysis
We analysed accuracy, sensitivity, specificity and positive and negative predictive values. We also used receiver operating characteristic curves to assess the algorithm’s discrimination, as every classifier involves a trade-off between sensitivity and specificity. The 95% CIs were also calculated. The NumPy and scikit-learn Python libraries were used for statistical analysis.
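The reported metrics can be computed with the same libraries as sketched below. The CI here uses a simple normal approximation for proportions, an assumption on our part since the study does not state its exact CI method; the labels and scores are synthetic.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Normal-approximation 95% CI for a proportion k/n (assumed method).
def prop_ci(k, n, z=1.96):
    p = k / n
    half = z * np.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])            # synthetic reference grades
y_score = np.array([0.9, 0.8, 0.75, 0.3, 0.2, 0.4, 0.1, 0.6, 0.85, 0.05])
y_pred = (y_score >= 0.5).astype(int)                        # apply the operating point

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sens, sens_lo, sens_hi = prop_ci(tp, tp + fn)                # sensitivity with 95% CI
spec, spec_lo, spec_hi = prop_ci(tn, tn + fp)                # specificity with 95% CI
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
auc = roc_auc_score(y_true, y_score)
```

With real data, `y_true` would be the consensus grades and `y_score` the model's output probabilities.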
Patient and public involvement
None.
Results
Image data (AREDS and target device) for training, validation and testing were organised, and AMD was divided into two broad categories, referable and non-referable (table 1).
Validation
AREDS data set
After the first phase of training on AREDS, the model had a validation sensitivity of 92.83% (95% CI: 90.0% to 95.0%), specificity of 94.12% (95% CI: 91.8% to 95.9%) and area under the curve (AUC) of 0.984 (95% CI: 0.976 to 0.992). On the test set, the sensitivity was 93.48% (95% CI: 90.8% to 95.6%), specificity 82.33% (95% CI: 78.6% to 85.7%) and AUC 0.965 (95% CI: 0.952 to 0.977).
Target device data set
The final model (fine-tuned for the target device) had a sensitivity of 87.34% (95% CI: 78.0% to 93.8%), specificity of 85.53% (95% CI: 79.1% to 90.6%) and AUC of 0.947 (95% CI: 0.911 to 0.982) in detecting referable AMD on the validation set. On the test set of the target device, sensitivity was 91.25% (95% CI: 82.8% to 96.4%), specificity was 84.18% (95% CI: 77.5% to 89.5%) and AUC 0.947 (95% CI: 0.911 to 0.982). At individual stages, the recall for normal eyes was 91.30% (95% CI: 84.6% to 95.8%), which was higher than that for early AMD (72.00% (95% CI: 50.6% to 87.9%)). Similarly, the recall for advanced AMD was 94.12% (95% CI: 80.3% to 99.2%), which was higher than that for intermediate AMD (87.50% (95% CI: 73.2% to 95.8%)) (table 2) (figure 1). The diagnostic accuracy of the AMD AI in detecting referable or non-referable AMD against consensus ophthalmologist grading for the validation and test sets is presented in table 3.
Table 2. Performance of the algorithm on data sets.
| Metric | AREDS model: validation set, 990 images (45% referable AMD) | AREDS model: test set, 909 images (49% referable AMD) | Target device model: validation set, 238 images (33% referable AMD) | Target device model: test set, 238 images (34% referable AMD) |
| Sensitivity (%) | 92.83 (95% CI: 90.0 to 95.0) | 93.48 (95% CI: 90.8 to 95.6) | 87.34 (95% CI: 78.0 to 93.8) | 91.25 (95% CI: 82.8 to 96.4) |
| Specificity (%) | 94.12 (95% CI: 91.8 to 95.9) | 82.33 (95% CI: 78.6 to 85.7) | 85.53 (95% CI: 79.1 to 90.6) | 84.18 (95% CI: 77.5 to 89.5) |
| AUC | 0.984 (95% CI: 0.976 to 0.992) | 0.965 (95% CI: 0.952 to 0.977) | 0.947 (95% CI: 0.911 to 0.982) | 0.947 (95% CI: 0.911 to 0.982) |
| PPV (%) | 92.83 (95% CI: 90.2 to 94.8) | 83.53 (95% CI: 80.6 to 86.1) | 75.00 (95% CI: 67.1 to 81.5) | 74.49 (95% CI: 66.9 to 80.8) |
| NPV (%) | 94.12 (95% CI: 91.8 to 95.7) | 92.94 (95% CI: 90.2 to 94.9) | 93.15 (95% CI: 88.4 to 96.1) | 95.00 (95% CI: 90.3 to 97.5) |
Reference standard for the AREDS model (following first phase of training): consensus grading of two certified graders, adjudicated by a third senior grader on discrepancies. Reference standard for the target device model (final model fine-tuned for target device): consensus grading of three retina specialists.
AMD, age-related macular degeneration; AREDS, Age-Related Eye Disease Study; AUC, area under the curve; NPV, negative predictive value; PPV, positive predictive value.
Figure 1. Receiver operating characteristic curves on validation and test set. AUC, area under the curve; ROC, receiver operating characteristic curve.

Table 3. Medios AI-AMD diagnosis versus ophthalmologist diagnosis.
| AMD diagnosis | Medios AI-AMD for target device: validation set | | Medios AI-AMD for target device: test set | |
| Ophthalmologist grading* | Non-referable AMD | Referable AMD | Non-referable AMD | Referable AMD |
| No AMD | 104 | 12 | 105 | 10 |
| Early AMD | 20 | 6 | 18 | 7 |
| Non-referable† | 12 | 5 | 10 | 8 |
| Intermediate AMD | 4 | 32 | 5 | 35 |
| Advanced AMD | 5 | 31 | 2 | 32 |
| Referable‡ | 1 | 6 | 0 | 6 |
| Diagnostic accuracy (recall), % | | | | |
| No AMD | 89.66 (95% CI: 82.6 to 94.5) | | 91.30 (95% CI: 84.6 to 95.8) | |
| Early AMD | 76.92 (95% CI: 56.4 to 91.0) | | 72.00 (95% CI: 50.6 to 87.9) | |
| Intermediate AMD | 88.89 (95% CI: 73.9 to 96.9) | | 87.50 (95% CI: 73.2 to 95.8) | |
| Advanced AMD | 86.11 (95% CI: 70.5 to 95.3) | | 94.12 (95% CI: 80.3 to 99.3) | |
*Ground truth.
†No consensus, but two out of three ophthalmologists graded non-referable.
‡No consensus, but two out of three ophthalmologists graded referable.
AI, artificial intelligence; AMD, age-related macular degeneration.
Class activation maps (CAM) (figure 2): CAM visualisation demonstrated that the trained CNN successfully identified areas of macular degeneration in each image. Figure 2A, B and C represent true positive, false positive and false negative images, respectively.
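Conceptually, a CAM is a weighted combination of the final convolutional feature maps, using the classifier weights of the predicted class; regions with high values are those that drove the decision. A minimal numpy sketch with synthetic shapes and values:

```python
import numpy as np

# Conceptual sketch of a class activation map (CAM). The feature maps
# and classifier weights below are synthetic stand-ins for the outputs
# of a trained CNN's last convolutional layer and classification head.
rng = np.random.default_rng(1)
feature_maps = rng.random((8, 10, 10))    # 8 channels of 10x10 activations
class_weights = rng.random(8)             # weights for the "referable AMD" class

cam = np.tensordot(class_weights, feature_maps, axes=1)   # weighted sum -> 10x10 map
cam = (cam - cam.min()) / (cam.max() - cam.min())         # normalise to [0, 1]
hotspot = np.unravel_index(np.argmax(cam), cam.shape)     # most influential region
```

In practice the low-resolution map is upsampled to the input image size and overlaid as a heatmap, as in figure 2.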
Figure 2. (A) Target device fundus image of a true positive with class activation map highlighting the lesions. (B) Target device fundus image of a false positive with class activation map highlighting the lesions. (C) Target device fundus image of a false negative.

Discussion
In this study, the DL algorithm demonstrated promising sensitivity and specificity in identifying referable AMD. The performance of the DL model was comparable to the reference standard of grading by ophthalmologists. Furthermore, the DL model performed equally well on the target device and the AREDS database.
Previous studies have explored automated software for referable AMD detection. A meta-analysis by Dong et al included 13 AI-based studies.15 They found an overall sensitivity of 88% and a specificity of 90%, with an AUC of 0.983. The majority of the studies in this meta-analysis also used the AREDS database. For studies applying convolutional neural networks to the AREDS database, the pooled AUC, sensitivity and specificity were 0.983 (95% CI: 0.978 to 0.988), 0.88 (95% CI: 0.88 to 0.88) and 0.91 (95% CI: 0.91 to 0.91), respectively. Our specificity after the first training stage on AREDS is lower (82.3%) but acceptable. This can be attributed to our choice of neural network architecture. Various CNN architectures are available, such as CifarNet, AlexNet and Inception V.1 (GoogLeNet), all of which achieve high accuracy but at the expense of heavy architectures. We used the EfficientNet V.2 (Google) architecture, a lightweight design intended for efficient deployment on a smartphone-based camera, making screening more accessible and effective. This AI algorithm can be deployed as an offline application integrated into the smartphone-based, non-mydriatic retinal imaging system. It can be deployed as a component of the camera control application and thus seamlessly integrated into the image acquisition workflow. The AMD assessment algorithm generates a diagnosis by detecting drusen and/or other characteristics of intermediate and advanced AMD involving the macula.
In this meta-analysis, eight studies reported outcomes for referable AMD, with an AUC above 0.90 in all but one study by Phan et al, which had an AUC of 0.87, possibly due to a smaller private database containing 279 images.15,16 None of the studies employed smartphone-based fundus cameras as target devices. Our study contributes to the existing knowledge of DL-based AMD detection. The accuracy of AI-based algorithms varies due to differences in architecture, data set size, image quality and validation methodology. Recent works on simultaneous automated cloud-based screening of AMD and diabetic retinopathy have demonstrated promising performance.17–19 Additionally, fundus image-based algorithms detecting multiple retinal conditions have shown effective results, including the screening of AMD.20,21 Most recently, several algorithms have been developed using different approaches to differentiate the severity stages of AMD, which is crucial for early detection, precise diagnosis and clinical treatment strategies.22–25 Mathieu et al trained a model (DeepAlienorNet) to detect the presence of seven different clinical signs, such as types of drusen, hypopigmentation or hyperpigmentation, or advanced AMD, with a sensitivity and specificity of 0.77 (0.72–0.82) and 0.83 (0.81–0.85), respectively.22 Sarao et al designed an explainable DL model for the detection of GA, achieving 100% sensitivity.23 Similarly, Abd El-Khalek et al trained a model to classify retinal images into normal, GA, intermediate AMD and wet AMD and developed a comprehensive computer-aided diagnosis framework for categorisation.24 Morano et al designed a model with a custom architecture that both predicts the presence of AMD and identifies the lesions.25 While such models are essential for retina specialists in clinical decision-making and support, fundus image-based AMD AI screening solutions, such as the one described here, are critical for population-level screening.
A key strength of this study is that it does not rely exclusively on the AREDS database but also incorporates a database from the target device featuring real-world digitised images. This approach supports practical deployment of the algorithm in the field. Undoubtedly, the AREDS is the largest publicly available database, with more than 130,000 fundus images; but it is essential to recognise that certain nuances of hard drusen and age-related changes in the clinical classification of AMD did not exist in the 1980s when AREDS was conducted. This factor might render the AREDS database alone insufficient for developing a robust AI. Additionally, the AREDS images were originally film-based and later digitised, which could potentially impact the performance of the DL algorithm. The target device test data set run on the model trained on AREDS alone demonstrated sensitivity and specificity of 88.75% (95% CI: 79.72% to 94.72%) and 60.76% (95% CI: 52.69% to 68.42%), respectively. When the neural network was trained directly on the target device data set without prior training on AREDS, the sensitivity and specificity were 65% (95% CI: 53.52% to 75.33%) and 68.35% (95% CI: 60.49% to 75.51%), respectively. The approach of training with the AREDS data set and fine-tuning with the target device data set yielded better performance against the reference standard (table 2). The performance needs further evaluation in real-world studies.
However, this study has limitations. It only assesses the diagnostic performance of AI for referable AMD; there is a risk of missing other retinal pathologies if this model is used alone. Another limitation was that metadata was present for only up to 75% of the participants’ images from the target device, captured in remote screening contexts; it was unknown, undisclosed or unavailable for the remaining images.
Our work carries significant clinical and public health benefits. It reduces the burden on specialists, enables remote care in hard-to-reach areas and facilitates early intervention that may result in long-term vision preservation. Despite their promise, AI-based predictive models are not without challenges. The most important is the dependence on image quality. Consequently, a system for rejecting ungradable, poor-quality images (currently under development) must be in place. Additionally, AI is often considered a ‘black box’, potentially raising concerns about clinicians’ trust in the system. We have attempted to address the model’s interpretability to some extent through class activation maps. False negatives remain an issue, particularly for AMD, which requires early treatment to prevent scarring, unlike diabetic retinopathy, usually a slowly progressive disease. However, this same factor underscores the need for early AMD screening using robust tools such as AI.
This work presents, to our knowledge, the first promising approach to using DL for automated AMD analysis in a smartphone-based imaging system. In our continuing research, we plan to perform a prospective evaluation of AI performance across various AMD severity stages in real-world settings. Additionally, we plan to evaluate the detection of combined pathologies, which may offer additional advantages and improved clinical applications.
Conclusion
Our study indicates that the DL-based AI algorithm shows promise in detecting referable AMD with high sensitivity and specificity, even when using a small data set from the target device and population. This was achieved by fine-tuning a model previously trained on a larger data set for the same task. The approach proved effective despite the AREDS images having been captured with traditional desktop cameras in a different population. Integrating such an algorithm into an application associated with a smartphone-based fundus imaging system could significantly aid in screening populations in underserved areas at lower cost.
Supplementary material
Footnotes
Funding: Funding support for AREDS was provided by the National Eye Institute (N01-EY-0-2127). We would like to thank the AREDS participants and the AREDS Research Group for their valuable contribution to this research.
Prepub: Prepublication history and additional supplemental material for this paper are available online. To view these files, please visit the journal online (https://doi.org/10.1136/bmjopen-2023-081398).
Provenance and peer review: Not commissioned; externally peer reviewed.
Patient consent for publication: Not applicable.
Data availability free text: The AREDS database can be found at http://www.ncbi.nlm.nih.gov/gap following data accession approval by NEI. The target device data could be shared upon reasonable request to the corresponding author.
Patient and public involvement: Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Ethics approval: The study was approved by Diacon IEC (Diacon Hospital Ethics Committee 15-04-2022).
Contributor Information
Florian Mickael Savoy, Email: florian.savoy@gmail.com.
Divya Parthasarathy Rao, Email: drdivya@remidio.com.
Jun Kai Toh, Email: junkai.7oh@gmail.com.
Bryan Ong, Email: bryanongse@gmail.com.
Anand Sivaraman, Email: anand@remidio.com.
Ashish Sharma, Email: drashish79@hotmail.com.
Taraprasad Das, Email: tpdbei@gmail.co.
Data availability statement
Data are available upon reasonable request.
References
- 1.Wong WL, Su X, Li X, et al. Global prevalence of age-related macular degeneration and disease burden projection for 2020 and 2040: a systematic review and meta-analysis. Lancet Glob Health. 2014;2:e106–16. doi: 10.1016/S2214-109X(13)70145-1. [DOI] [PubMed] [Google Scholar]
- 2.Chappelow AV, Schachat A. Retinal pharmacotherapy - Neovascular age-related macular degeneration. Elsevier Health Sci. 2010:128–32. doi: 10.1016/B978-1-4377-0603-1.00023-5. [DOI] [Google Scholar]
- 3.Finger RP, Daien V, Eldem BM, et al. Anti-vascular endothelial growth factor in neovascular age-related macular degeneration - a systematic review of the impact of anti-VEGF on patient outcomes and healthcare systems. BMC Ophthalmol. 2020;20:294. doi: 10.1186/s12886-020-01554-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Ferris FL, Wilkinson CP, Bird A, et al. Clinical classification of age-related macular degeneration. Ophthalmology. 2013;120:844–51. doi: 10.1016/j.ophtha.2012.10.036. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Kanagasingam Y, Bhuiyan A, Abràmoff MD, et al. Progress on retinal image analysis for age related macular degeneration. Prog Retin Eye Res. 2014;38:20–42. doi: 10.1016/j.preteyeres.2013.10.002. [DOI] [PubMed] [Google Scholar]
- 6.Pirbhai A, Sheidow T, Hooper P. Prospective evaluation of digital non-stereo color fundus photography as a screening tool in age-related macular degeneration. Am J Ophthalmol. 2005;139:455–61. doi: 10.1016/j.ajo.2004.09.077. [DOI] [PubMed] [Google Scholar]
- 7.Keenan TD, Dharssi S, Peng Y, et al. A deep learning approach for automated detection of geographic atrophy from color fundus photographs. Ophthalmology. 2019;126:1533–40. doi: 10.1016/j.ophtha.2019.06.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Zapata MA, Royo-Fibla D, Font O, et al. Artificial intelligence to identify retinal fundus images, quality validation, laterality evaluation, macular degeneration, and suspected glaucoma. Clin Ophthalmol. 2020;14:419–29. doi: 10.2147/OPTH.S235751. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Burlina PM, Joshi N, Pekala M, et al. Automated grading of age-related macular degeneration from color fundus images using deep convolutional neural networks. JAMA Ophthalmol. 2017;135:1170–6. doi: 10.1001/jamaophthalmol.2017.3782. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Burlina P, Pacheco KD, Joshi N, et al. Comparing humans and deep learning performance for grading AMD: A study in using universal deep features and transfer learning for automated AMD analysis. Comput Biol Med. 2017;82:80–6. doi: 10.1016/j.compbiomed.2017.01.018. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Govindaiah A, Smith RT, Bhuiyan A. A new and improved method for automated screening of age-related macular degeneration using ensemble deep neural networks. Annu Int Conf IEEE Eng Med Biol Soc. 2018;2018:702–5. doi: 10.1109/EMBC.2018.8512379. [DOI] [PubMed] [Google Scholar]
- 12.Jain A, Krishnan R, Rogye A, et al. Use of offline artificial intelligence in a smartphone-based fundus camera for community screening of diabetic retinopathy. Indian J Ophthalmol. 2021;69:3150–4. doi: 10.4103/ijo.IJO_3808_20. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Natarajan S, Jain A, Krishnan R, et al. Diagnostic accuracy of community-based diabetic retinopathy screening with an offline artificial intelligence system on a smartphone. JAMA Ophthalmol. 2019;137:1182–8. doi: 10.1001/jamaophthalmol.2019.2923. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Sosale B, Aravind SR, Murthy H, et al. Simple, mobile-based artificial intelligence algorithm in the detection of diabetic retinopathy (SMART) study. BMJ Open Diabetes Res Care. 2020;8:e000892. doi: 10.1136/bmjdrc-2019-000892. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Dong L, Yang Q, Zhang RH, et al. Artificial intelligence for the detection of age-related macular degeneration in color fundus photographs: A systematic review and meta-analysis. EClinicalMedicine. 2021;35:100875. doi: 10.1016/j.eclinm.2021.100875. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Phan TV, Seoud L, Chakor H, et al. Automatic screening and grading of age-related macular degeneration from texture analysis of fundus images. J Ophthalmol. 2016;2016:5893601. doi: 10.1155/2016/5893601. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Skevas C, Levering M, Engelberts J, et al. Simultaneous screening and classification of diabetic retinopathy and age-related macular degeneration based on fundus photos—a prospective analysis of the RetCAD system. Int J Ophthalmol . 2022;15:1985–93. doi: 10.18240/ijo.2022.12.14. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.González-Gonzalo C, Sánchez-Gutiérrez V, Hernández-Martínez P, et al. Evaluation of a deep learning system for the joint automated detection of diabetic retinopathy and age-related macular degeneration. Acta Ophthalmol. 2020;98:368–77. doi: 10.1111/aos.14306. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Bhuiyan A, Govindaiah A, Alauddin S, et al. Combined automated screening for age-related macular degeneration and diabetic retinopathy in primary care settings. Ann Eye Sci . 2021;6:12. doi: 10.21037/aes-20-114. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Kim KM, Heo T-Y, Kim A, et al. Development of a fundus image-based deep learning diagnostic tool for various retinal diseases. J Pers Med. 2021;11:321. doi: 10.3390/jpm11050321. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Lin D, Xiong J, Liu C, et al. Application of Comprehensive Artificial intelligence Retinal Expert (CARE) system: a national real-world evidence study. Lancet Digit Health. 2021;3:e486–95. doi: 10.1016/S2589-7500(21)00086-8. [DOI] [PubMed] [Google Scholar]
- 22.Mathieu A, Ajana S, Korobelnik J-F, et al. DeepAlienorNet: A deep learning model to extract clinical features from colour fundus photography in age-related macular degeneration. Acta Ophthalmol. 2024;102:e823–30. doi: 10.1111/aos.16660. [DOI] [PubMed] [Google Scholar]
- 23.Sarao V, Veritti D, De Nardin A, et al. Explainable artificial intelligence model for the detection of geographic atrophy using colour retinal photographs. BMJ Open Ophthalmol . 2023;8:e001411. doi: 10.1136/bmjophth-2023-001411. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Abd El-Khalek AA, Balaha HM, Alghamdi NS, et al. A concentrated machine learning-based classification system for age-related macular degeneration (AMD) diagnosis using fundus images. Sci Rep. 2024;14:2434. doi: 10.1038/s41598-024-52131-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Morano J, Hervella ÁS, Rouco J, et al. Weakly-supervised detection of AMD-related lesions in color fundus images using explainable deep learning. Comput Methods Programs Biomed. 2023;229:S0169-2607(22)00677-0. doi: 10.1016/j.cmpb.2022.107296. [DOI] [PubMed] [Google Scholar]
