Abstract
Background/Objectives: Telemedicine-based screening for diabetic retinopathy (DR) is effective but does not reach the entire population with diabetes. The use of portable cameras and artificial intelligence (AI) can help extend DR screening. Methods: We evaluated the ability of two handheld cameras, one based on a smartphone and the other a smartscope, to obtain images, and compared them with the retinographies obtained with an OCT device. The evaluation was carried out in two stages: the first by two retina specialists and the second using an artificial intelligence algorithm that we developed. Results: The retina specialists reported that the smartphone images required mydriasis in all cases, compared with 73.05% of the smartscope images and 71.11% of the OCT images. Images were ungradable in 27.98% of the retinographs obtained with the smartphone and in 7.98% of those obtained with the smartscope. For the detection of any DR, the AI algorithm obtained lower recall (0.89) and F1 score (0.89) values with the smartphone than with the smartscope (0.99). The smartphone also detected fewer cases of mild DR (146 retinographs) than the smartscope (218 retinographs). Conclusions: We consider that the use of handheld devices together with AI algorithms for reading retinographs can be useful for DR screening, although the ease of image acquisition through small pupils with these devices needs to be improved.
Keywords: artificial intelligence, diabetic retinopathy, handheld retinal camera, public health, screening, smartphones, telemedicine, image quality
1. Introduction
Diabetic retinopathy (DR) can develop as a consequence of diabetes mellitus (DM). It damages the retinal vasculature in the form of microangiopathy and, at the same time, affects the retinal neurons involved. Consequently, DR is a neurovascular complication of the retina [1].
DM currently affects 537 million patients worldwide, and this number is predicted to rise to 643 million by 2030 and to 783 million by 2045 [2]. The increase in DR runs parallel to this, as DR currently affects 6.4 million people in Europe alone, with an expected increase to 8.6 million by 2025, of whom 30% will need treatment [3]. DR also remains the leading cause of preventable vision loss and blindness in adults aged 20 to 74 years, especially in middle- and high-income countries [4,5].
Given the rapid global increase in DM and increasing life expectancy, DR will remain a major public health challenge. Screening for DR through telemedicine has proven to be effective [6]; therefore, it should be promoted where it is already in place and implemented where it is not. Only early detection of DR will allow us to both prevent its evolution to more severe forms and to treat it early to avoid vision loss [7].
Current screening systems can reach a significant proportion of the patients with DM, but it is difficult to screen all patients every one or two years as recommended [2]. In our experience, only 40% of patients are screened annually [8], despite having four non-mydriatic camera units in our health area. To improve this, extending screening to every two years, according to the metabolic control of patients, has been proposed, along with bringing in other professionals—such as general practitioners (GPs), endocrinologists, and paediatricians—into the system. At a technical level, the use of portable cameras has been proposed. These cameras, together with artificial intelligence (AI) algorithms that read retinographs automatically and help to detect DR, can be used by GPs [9,10].
Based on these proposals, the present study evaluated the ability of two portable cameras to detect DR and compared their results. In addition, the retinographies obtained from the three devices were compared using an AI algorithm developed by our team [11].
2. Materials and Methods
2.1. Setting and Design
This study was based on a real population of patients with type 2 DM. The reference was our health area (Hospital Universitari de Sant Joan de Reus, Spain), with 226,508 inhabitants, of whom 17,792 patients are registered with DM. The study was conducted within our ongoing diabetic retinopathy screening programme [12].
2.2. Objectives
The aim of this study was to evaluate the possibility of using portable cameras instead of fixed non-mydriatic cameras for diabetic retinopathy screening. To achieve this, we compared two portable cameras: one based on a smartphone system (VISTAVIEW, Topcon Healthcare®, USA) and a second, the AURORA retinograph (AURORA™, Optomed Topcon®, Japan), which uses a classic but portable retinography system and acts as a smartscope. Once sample collection was finished, we compared the retinographs obtained with the smartphone and the smartscope with those obtained with the standard capture system used in our screening programme, the TRITON® OCT (Topcon®, Japan).
In a second phase, the retinographs obtained with the two portable cameras and with the OCT were transferred to the electronic medical record (EMR). From the medical record, the images were analysed using the AI system that we developed in the hospital (MIRA©) [11,12,13,14,15], and the results obtained from the three systems were compared.
2.3. Inclusion Criteria
Inclusion criteria were as follows: patients with type 2 DM from our DR screening programme.
2.4. Exclusion Criteria
Exclusion criteria were as follows: type 1 DM and other causes of diabetes mellitus, such as gestational DM, and retinographs that were not in a patient’s electronic medical record or that did not use the DICOM format.
2.5. Methods
For each patient, a retinograph of each eye was obtained with each of the three devices (the two portable devices and the standard equipment). The images were centred on a point located between the fovea and the temporal side of the papilla, with a field of 45–55°. Pharmacological mydriasis was achieved by applying tropicamide eye drops when necessary. The retinographs were sent via a PACS (picture archiving and communication system) to the patient's electronic medical record (EMR), where they were stored in DICOM (Digital Imaging and Communications in Medicine), a standard format that guarantees the interoperability and compatibility of medical images.
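As an illustration only, the following sketch (in Python, using the open-source pydicom library) shows how a retinograph stored in DICOM format can be opened and its pixel data accessed; the file name is hypothetical, and the snippet does not reproduce the actual PACS/EMR integration used in the study.

```python
# Minimal sketch, assuming a retinograph exported from the PACS/EMR as a DICOM file;
# "retinograph_example.dcm" is a hypothetical path, not a file from the study.
import pydicom

ds = pydicom.dcmread("retinograph_example.dcm")

# Standard DICOM attributes carried by most fundus photographs.
print("Modality:", ds.get("Modality", "unknown"))
print("Size (rows x columns):", ds.get("Rows"), "x", ds.get("Columns"))

# Pixel data as a NumPy array, ready to be passed on to image analysis.
image = ds.pixel_array
print("Image array shape:", image.shape)
```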
Table 1 shows DR classified according to the international classification of DR [16] at 'Level 1 or mild DR' and 'Level 2 or moderate DR'. At 'Level 3 or severe DR', we introduced a variant to include images with new vessels in the retina (proliferative DR), and we also included any patients with macular oedema. Therefore, our Level 3 is equivalent to referable, vision-threatening DR (VTDR); a short illustrative sketch of this regrouping follows Table 1.
Table 1.
Classification used in this study. Level 3 includes severe DR, proliferative DR, and macular oedema. DR = diabetic retinopathy, DME = diabetic macular edema.

Level | Description
---|---
0 | No retinopathy.
1 | Mild DR. Microaneurysms only.
2 | Moderate DR. More than just microaneurysms but less than severe.
3 | Severe DR, proliferative DR, or any DME (see below).

Levels of DME included in Level 3 of the current study:
- Mild DME: some thickening of the retina or hard exudates at the posterior pole but distant from the centre of the macula.
- Moderate DME: thickening of the retina or hard exudates that approach the centre of the macula but do not involve the centre.
- Severe DME: thickening of the retina or hard exudates affecting the centre of the macula.
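The sketch below illustrates, in Python and with hypothetical grade labels, how the international classification and DME status are regrouped into the levels of Table 1, with Level 3 collecting severe DR, proliferative DR, and any DME; it is only an illustration of the rule described above, not code from the study.

```python
# Illustrative sketch only (hypothetical labels): mapping grades of the international
# DR classification and DME status onto the study's levels, where Level 3 groups
# severe DR, proliferative DR, and any DME (referable, vision-threatening DR).
def study_level(icdr_grade: str, has_dme: bool) -> int:
    """icdr_grade: 'no_dr', 'mild', 'moderate', 'severe' or 'proliferative'."""
    if has_dme or icdr_grade in ("severe", "proliferative"):
        return 3  # referable / vision-threatening DR in this study
    return {"no_dr": 0, "mild": 1, "moderate": 2}[icdr_grade]

# Example: moderate DR with macular oedema is referable (Level 3).
assert study_level("moderate", has_dme=True) == 3
assert study_level("mild", has_dme=False) == 1
```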
2.6. Technical Characteristics of the Three Devices
The cameras used were as follows: first, the standard one with which we carry out DR screening, the TRITON OCT, from which the retinographies read by the ophthalmologist responsible for the screening are obtained; second, the portable VISTAVIEW camera, which uses an Android-type smartphone; and third, the portable AURORA camera, which is a portable retinograph or smartscope. The main technical difference between them is image resolution, which is 5 Mpixels for the TRITON OCT and the portable AURORA and 28.4 Mpixels for the VISTAVIEW smartphone.
2.7. Ethics and Consent
The study was carried out with the approval of the local ethics committee, the Medical Research Ethics Committee (CEIM) of the Pere Virgili Health Research Institute (IISPV), Tarragona, Spain; approval code RetinaReadRisk, protocol version 1, 10 March 2022, CEIM reference number 071/2022.
The consent form was signed by all patients and was the same for the two funding entities. The project received co-funding from the European Institute of Innovation and Technology (EIT) of the European Union under grant agreements 220718 and 230123, and from the Instituto de Salud Carlos III (ISCIII) through the project PI21/00064 (co-funded by the European Union).
2.8. Statistical Methods
Data were analysed with SPSS, version 22.0 (IBM® SPSS Statistics, Chicago, IL, USA). A descriptive statistical analysis of the quantitative data was performed by determining the mean and standard deviation. For qualitative data, frequencies and percentages in each category were used. For all parameters, descriptive statistics (including number of values, mean, standard deviation (SD), and standard error of the mean (SEM)) were calculated. A p-value of less than 0.05 was considered statistically significant.
The screening performance of the AI Machine Reading Algorithm (MIRA©) was measured using a 2 × 2 confusion matrix (contingency table) for each device. Given a classified dataset, there are four basic combinations of real and assigned class: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN).
Statistical evaluation of the dataset included sensitivity, specificity, positive predictive value (precision), negative predictive value, F1 score (the harmonic mean of precision and recall), and accuracy.
Sensitivity, or recall, is the proportion of diseased subjects correctly identified by the test, and specificity is the proportion of non-diseased subjects correctly identified; in the context of clinical diagnosis, sensitivity and specificity values are considered good when they exceed 80%. The positive predictive value (precision) and the negative predictive value give the proportion of subjects with a positive or negative test result, respectively, for whom the result is correct.
To assess concordance, we determined accuracy and the F1 score. Diagnostic accuracy is expressed as the proportion of correctly classified subjects among all subjects. Accuracy values of >0.8 indicate almost perfect agreement, values of 0.6–0.79 substantial agreement, 0.4–0.59 moderate agreement, 0.2–0.39 fair agreement, and 0–0.2 poor agreement. The F1 score, also known as the harmonic mean, Sørensen–Dice coefficient, or Dice similarity coefficient (DSC), is a measure of test performance that combines precision (the positive predictive value) and recall (sensitivity). The highest possible value of F1 is 1, indicating perfect precision and recall, and the lowest possible value is 0, when either precision or recall is zero. The advantage of the F1 score over accuracy is that it is not affected by the prevalence of the disease, whereas accuracy tends to be inflated when prevalence is low [17].
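For clarity, the following sketch computes these metrics from the four cells of the 2 × 2 confusion matrix; it is an illustration only, and the example counts are hypothetical rather than taken from the study, chosen to show how a low prevalence keeps accuracy high while the F1 score is more strongly penalized.

```python
# Sketch of the screening metrics derived from a 2 x 2 confusion matrix.
# The example counts are hypothetical, not taken from the study's tables.
def screening_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    sensitivity = tp / (tp + fn)            # recall
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                    # precision / positive predictive value
    npv = tn / (tn + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)  # harmonic mean of precision and recall
    return {"S": sensitivity, "SP": specificity, "PPV": ppv,
            "NPV": npv, "ACC": accuracy, "F1": f1}

# Hypothetical low-prevalence example: 100 diseased subjects out of 2000,
# of whom 20 are missed; accuracy stays high while F1 is penalized more.
print(screening_metrics(tp=80, tn=1890, fp=10, fn=20))
# -> accuracy ~0.985, F1 ~0.84
```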
3. Results
3.1. A Retinograph of Each Eye Was Obtained with Each of the Three Devices
In total, 4260 retinographs were obtained from 2130 patients. The mean age of the sample was 67.43 ± 11.21 years (range 38–91 years), and 1256 patients (58.96%) were women. Regarding the treatment of DM, 264 patients (12.39%) were treated with diet alone, 1810 (84.98%) with oral antidiabetics, and 56 (2.63%) with insulin or insulin plus oral antidiabetics. Finally, 1527 (71.69%) had high blood pressure.
Table 2 shows the differences between the retinographs across the three devices. We first tried to obtain the retinographs without pupil dilatation. The images showed no form of DR in 69.83% of cases with the VISTAVIEW, 83.91% with the AURORA, and 90.03% with the Triton.
Table 2.
Comparative study of the two portable devices against the evaluation by two retina specialists. % * = percentage of the total number of patients with diabetic retinopathy.
| | Patients | Total Retinographs | Patients Who Required Mydriasis | Ungradable Retinographs |
|---|---|---|---|---|
| VISTAVIEW | 2130 | 4260 | All cases | 1192/27.98% |
| AURORA | 2130 | 4260 | 1556/73.05% | 340/7.98% |
| Triton | 2130 | 4260 | 1536/71.11% | 20/0.93% |

| | No DR | Any DR | Mild DR | Moderate DR | Severe DR |
|---|---|---|---|---|---|
| VISTAVIEW | 1962/69.83% | 168/7.88% | 93/55.37% * | 57/33.92% * | 18/10.71% * |
| AURORA | 1948/83.91% | 186/8.54% | 109/58.39% * | 59/31.72% * | 18/9.89% * |
| Triton | 1948/90.03% | 188/8.82% | 110/58.51% * | 60/31.91% * | 18/9.57% * |
After dilation, the percentage of images that were not of sufficient quality for the ophthalmologists to evaluate was 27.98% with the VISTAVIEW and 7.98% with the AURORA, but only 0.93% with the Triton. The principal causes of poor quality were the following:
- (i) difficulty in focusing the handheld device, particularly the VISTAVIEW;
- (ii) the presence of cataracts, which obviously affected all three devices equally;
- (iii) insufficient (slight) mydriasis.
Patients with moderate DR or severe DR were identified with similar success across all three devices, but for mild DR, the AURORA and the Triton were more successful than the VISTAVIEW.
3.2. Statistical Analyses
Data comparing the handheld devices with the Triton are presented in Table 3. Applying the 2 × 2 contingency table, we found both handheld devices to be reliable. Although the VISTAVIEW had somewhat lower F1 scores (harmonic mean; 0.93 for any DR and 0.91 for mild DR), the results were good for both devices.
Table 3.
Study of patients with and without DR. TN = true negative, FN = false negative, TP = true positive, FP = false positive, S = sensitivity, SP = specificity, TPV = positive predictive value, NPV = negative predictive value, F1 score = harmonic mean of precision and recall, and ACC = accuracy.
| | | TN | FN | TP | FP | S | SP | TPV | NPV | ACC | F1 Score |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Any DR | VISTAVIEW | 1400 | 20 | 168 | 24 | 0.89 | 0.98 | 0.99 | 0.99 | 0.97 | 0.93 |
| | AURORA | 1942 | 2 | 188 | 6 | 0.99 | 0.99 | 0.97 | 0.99 | 0.99 | 0.98 |
| Mild DR | VISTAVIEW | 1404 | 17 | 93 | 20 | 0.85 | 0.99 | 0.82 | 0.99 | 0.93 | 0.91 |
| | AURORA | 1942 | 1 | 109 | 8 | 0.99 | 0.99 | 0.93 | 0.99 | 0.99 | 0.95 |
3.3. The Comparative Study of the Results of the Three Devices Using the MIRA© Algorithm
Table 4 shows the results of reading the images with the MIRA© automatic-reading AI algorithm.
Table 4.
The analysis of the images entered into the MIRA© automatic-reading algorithm.
Classification of retinographies:

| | Ungradable by MIRA | No DR | Any DR | Mild DR | Moderate DR | Severe DR |
|---|---|---|---|---|---|---|
| VISTAVIEW | 1362/31.97% | 2330/54.46% | 284/7.88% | 146/55.37% | 102/33.92% | 36/10.71% |
| AURORA | 342/8.02% | 3174/74.5% | 372/8.54% | 218/58.39% | 118/31.72% | 36/9.89% |
| Standard method | 40/0.93% | 3468/81.4% | 376/8.82% | 220/58.52% | 120/31.91% | 36/9.57% |
According to the type of DR, the MIRA© algorithm read the retinographs from the AURORA and Triton devices more accurately than those from the VISTAVIEW. Mild DR was detected in fewer cases with the VISTAVIEW (146 retinographs) than with the AURORA (218) or the Triton (220). The latter we consider to be the real cases, as they were identified by the retina specialists, who are the gold standard. The AI algorithm identified moderate DR and severe DR with similar success across the three devices.
3.4. Statistical Study of the Results Obtained by AI Algorithm MIRA©
We evaluated the images obtained by the three devices against what we currently consider to be the gold standard: the Triton images read by the specialist ophthalmologists.
Table 5 shows the results of the statistical analysis of the performance of the three devices in the detection of DR, using 2 × 2 contingency tables. The sensitivity of the VISTAVIEW smartphone to identify any DR was 0.89; it was outperformed by the portable AURORA (0.99) and by the images obtained by the Triton OCT (0.99).
Table 5.
Statistical analysis using the AI algorithm. TN = true negative, FN = false negative, TP = true positive, FP = false positive, S = sensitivity, SP = specificity, TPV = positive predictive value, NPV = negative predictive value, F1 score = harmonic mean of precision and recall, and ACC = accuracy.
| | | TN | FN | TP | FP | S | SP | TPV | NPV | ACC | F1 Score |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Any DR | VISTAVIEW® | 2614 | 30 | 284 | 34 | 0.89 | 0.99 | 0.90 | 0.99 | 0.97 | 0.89 |
| | AURORA® | 1201 | 2 | 130 | 1 | 0.99 | 0.99 | 0.98 | 0.99 | 0.99 | 0.98 |
| | Triton | 3842 | 2 | 376 | 2 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 |
| Mild DR | VISTAVIEW® | 2614 | 25 | 146 | 30 | 0.83 | 0.99 | 0.85 | 0.99 | 0.98 | 0.83 |
| | AURORA® | 1201 | 2 | 130 | 1 | 0.99 | 0.99 | 0.98 | 0.99 | 0.99 | 0.98 |
| | Triton | 3842 | 2 | 220 | 2 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 |
Similarly, for mild DR, the scores for the VISTAVIEW dropped to a sensitivity of 0.83 with a positive predictive value of 0.85, compared with a sensitivity of 0.99 for both the AURORA and the Triton and a positive predictive value of 0.98 for the AURORA. This is confirmed by the TP counts and by the F1 scores: the smartphone scored an F1 of 0.89 for detecting any DR and 0.83 for mild DR.
In summary, the statistical analysis indicates that the AURORA and the Triton OCT detect the presence or absence of DR with high reliability at all stages of DR. The VISTAVIEW is reliable when it indicates that there is no DR but unreliable in detecting cases of mild DR.
4. Discussion
The constant increase in, and diversity of, the population with DM have led to the pursuit of new ways of reaching as much of the diabetic population as possible, and various publications by scientific societies have reported progress via telemedicine [2,6].
The authors believe that portable devices used together with AI algorithms that can read retinographs will be able to reach a larger population than the current model allows [6,18,19,20,21,22].
The present study aimed to evaluate the usefulness of two such portable devices in screening patients with diabetes mellitus. We selected one smartphone camera and one smartscope or portable retinograph. We compared the images obtained with these two devices with the retinographies obtained through the Triton optical coherence tomograph, which is the standard for DR screening in our health area. The results showed that the smartphone produced a higher proportion of ungradable retinographs (27.98%) than the AURORA smartscope (7.98%) or the standard Triton OCT (0.93%). Mydriasis was necessary for all patients when using the smartphone, compared with 73.05% when using the AURORA and 71.11% when using the Triton. The major difficulty was focusing accurately on the retina when using the smartphone. The smartphone also had more difficulty in distinguishing mild forms of DR (only 93 cases) than the AURORA (109 cases) and the Triton (110 cases).
In a second step, we analysed the results of the automatic reading of the retinographs using the AI algorithm developed in our hospital. As in the first step, the automatic image analysis detected fewer retinographies with mild DR when they were obtained with the VISTAVIEW device than with the other two devices: only 146 retinographs with mild DR were detected, compared with 218 with the AURORA and 220 with the OCT.
Regarding the difference between the automatic image reading by the AI system (MIRA) and the retinologists, the comparison can be seen in Table 4. The differences between the automatic and human assessments lie mainly at the level of gradability: with the AI reading, the smartphone produced many ungradable images (31.97%) compared with 8.02% for the smartscope, whereas with the retinologists' reading of the Triton images, only 0.93% were ungradable.
Regarding the detection of diabetic retinopathy, the differences between the human and automatic systems were minimal: the automatic system detected any DR in 7.88% of the smartphone images and in 8.54% of the smartscope images, while the human readers detected it in 8.82%. At the level of severe diabetic retinopathy, the cases in which detection is essential, both the human readers and the AI detected all cases: all 36 retinographs affected by severe diabetic retinopathy were detected.
We could conclude that, for both the human readers and the AI algorithm, the detection of mild DR is more difficult with the smartphone. Perhaps the difference in image resolution between the devices is the cause, or perhaps it is the difficulty in obtaining clear images with the smartphone.
The statistical analysis revealed that the sensitivity (recall) and the positive predictive value (precision) were poorer for the images obtained with the smartphone in the detection of any DR or mild DR. The differences were less marked when using accuracy, which scored the VISTAVIEW smartphone at 0.97 and the AURORA smartscope at 0.99 for the presence of any DR or mild DR. However, when applying the harmonic mean (F1 score), the values changed: 0.93 and 0.91 for the VISTAVIEW and 0.98 and 0.95 for the AURORA, for any DR and mild DR, respectively. To differentiate between the smartphone and the smartscope, we therefore had to calculate the harmonic mean or F1 score, which is necessary when prevalence is very low, as was the case in our sample (9%).
In the literature, a similar study was carried out by Jacoba et al. [23]. It involved a clinical series of 225 eyes of 116 patients in which only the presence of macular pathology was evaluated. That study used two portable retinal smartscopes, the AURORA® (TOPCON, Japan) and the RetinaVue™ (Welch Allyn, USA). Comparing them with the Cirrus™ OCT (Zeiss, Germany), the study reported good specificity but low sensitivity of the portable devices in detecting maculopathy.
Another similar study aimed not only to screen for DR, but also to detect macular pathology, whether diabetic or not [22]. The study reported ungradability with the OCT at 0.9%, a value similar to ours, and 4.4% with the AURORA, a value somewhat lower than ours (i.e., 7.98%), although the focus was on the macular area rather than on the whole retina.
A review of the efficacy of smartphones via a meta-analysis carried out by Tan et al. [20] reported the results obtained with five Android or iPhone 5S mobile devices. For the detection of any DR, mean sensitivity was 87% (minimum 74%, maximum 94%) and specificity was 94% (minimum 81%, maximum 98%), values not dissimilar to those we obtained using the VISTAVIEW (sensitivity of 88% and specificity of 99%), although, notably, sensitivity for mild DR was very low, with an average of 39% (minimum 10%, maximum 79%). It is important to consider that smartphone technology is constantly changing and improving [24].
The AURORA smartscope has been tested in various studies, yielding results similar to ours. The largest sample was that of Salongcay et al. in 2023 [25], who reported a percentage of unreadable images of 7.5% (similar to ours, at 7.98%) and an agreement of 82.4% (lower than ours, at 98%). Other studies have reported values similar to ours, with gradability figures of 93% reported by Kubin et al. [26] and a positive predictive value of 98% and a specificity of 97% reported by Salongcay et al. [27].
One of the strengths of the present study is that the analysis of images was carried out using those found in the electronic medical record (EMR), rather than those stored in the equipment. This brings together the images in DICOM format for all devices and allows us to compare them more easily. Another strength is the much larger sample size compared to most previous studies. We were able to include 4260 images from 2130 patients: only the Salongcay study of 2023 included a higher number, with 5585 images from 2793 patients.
One limitation of the study is the use of only two devices, which hinders the extrapolation of the results to other devices. Furthermore, the use of a single retinograph focused on a point between the macula and the temporal side of the papilla might also limit the detection of DR, compared with studies that included retinographs of more fields or wide-field retinographs, which have been shown to change the graded severity of DR [28]. Finally, another limitation is that, at Level 3 of DR, we included patients with proliferative retinopathy and macular oedema, which might alter the results when compared with other studies, although our findings can be used as a reference for studies that aim to evaluate the number of patients detected with referable DR.
5. Conclusions
We can affirm that image capture systems or handheld retinal cameras are useful for detecting DR in screening programmes and might be further aided by the incorporation of AI algorithms. We found that the images captured with the smartscope or portable retinograph device were superior to those captured with the smartphone, which produced many unreadable images. Both devices require more efficient focusing in order to make image centring easier, improve image readability, and improve the quality of images obtained through small pupils (down to 3 mm), thereby reducing the need for mydriasis.
6. Patents
Software MIRA 1.0, register SAFE CREATIVE code 2007104712196. Date and time: 10 July 2020, 11:24 UTC.
Acknowledgments
We thank the endocrinologists in our hospital for helping us to implement the new screening system using the non-mydriatic fundus cameras, and our camera technicians for their work and interest in diabetes screening. We thank all the patients for participating in the present study. We also thank Phil Hoddy for his language assistance and for editing and correcting the English text, and Alex Latorre for IT assistance.
Author Contributions
Conceptualization, P.R.-A., B.F.-P., E.G.-C., A.V., J.C., I.M.-M. and M.B.-B.; methodology, P.R.-A., B.F.-P., E.G.-C., A.V., M.L.-S., C.M.-L., I.M.-M. and M.B.-B.; validation, P.R.-A., E.G.-C., M.B.-B., A.V., J.C., M.L.-S. and C.M.-L.; formal analysis, P.R.-A., E.G.-C., A.V., J.C., M.L.-S., C.M.-L. and B.F.-P.; investigation, P.R.-A., E.G.-C., M.B.-B., A.V. and J.C.; resources, P.R.-A., B.F.-P., E.G.-C., I.M.-M. and M.B.-B.; writing—original draft preparation, P.R.-A., E.G.-C., A.V., M.B.-B. and B.F.-P., writing—review and editing, P.R.-A., B.F.-P., E.G.-C., A.V., J.C., M.L.-S., C.M.-L. and I.M.-M.; supervision, P.R.-A., A.V., J.C., B.F.-P., I.M.-M. and M.B.-B.; project administration, P.R.-A., E.G.-C., M.B.-B., A.V. and B.F.-P.; funding acquisition, P.R.-A., A.V., I.M.-M. and M.B.-B. All authors have read and agreed to the published version of the manuscript.
Institutional Review Board Statement
The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Ethics Committee (CEIM) of Institut d’Investigació Sanitaria Pere Virgili (IISPV), Tarragona, Spain, approval code RetinaReadRisk, protocol version 1. 10 March 2022, reference number CEIM: 071/2022.
Informed Consent Statement
Not applicable.
Data Availability Statement
The original contributions presented in this study are included in the article; further enquiries can be addressed to https://www.iispv.cat/cas-dxit/retinareadrisk-rrr/ (accessed on 28 October 2024) or directly to the authors. The data used, both in the database and in the training dataset for the AI systems, are kept at the following address https://www.iispv.cat/ (accessed on 28 October 2024) and can be accessed by the ophthalmology research group.
Conflicts of Interest
The authors declare no conflicts of interest. The funders had no role in the design of the study or in the collection, analyses, or interpretation of the data; in the writing of the manuscript; or in the decision to publish the results.
Funding Statement
The project received co-funding from the European Institute of Innovation and Technology (EIT), from the European Union under grant agreements 220718 and 230123, and from the Instituto de Salud Carlos III (ISCIII) through the project PI21/00064 (co-funded by the European Union).
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
References
- 1.Simó R., Frontoni S. Neuropathic damage in the diabetic eye: Clinical implications. Curr. Opin. Pharmacol. 2020;55:1–7. doi: 10.1016/j.coph.2020.08.013. [DOI] [PubMed] [Google Scholar]
- 2.IDF Diabetes Atlas, 10th ed.; International Diabetes Federation: Brussels, Belgium, 2021. [(accessed on 24 July 2024)]. Available online: https://www.diabetesatlas.org.
- 3.Li J.Q., Welchowski T., Schmid M., Letow J., Wolpers C., Pascual-Camps I., Holz F.G., Finger R.P. Prevalence, incidence and future projection of diabetic eye disease in Europe: A systematic review and meta-analysis. Eur. J. Epidemiol. 2020;35:11–23. doi: 10.1007/s10654-019-00560-z. [DOI] [PubMed] [Google Scholar]
- 4.Leasher J.L., Bourne R.R., Flaxman S.R., Jonas J.B., Keeffe J., Naidoo K., Pesudovs K., Price H., White R.A., Wong T.Y., et al. Global Estimates on the Number of People Blind or Visually Impaired by Diabetic Retinopathy: A Meta-analysis From 1990 to 2010. Diabetes Care. 2016;39:1643–1649. doi: 10.2337/dc15-2171. [DOI] [PubMed] [Google Scholar]
- 5.Vision Loss Expert Group of the Global Burden of Disease Study. GBD 2019 Blindness and Vision Impairment Collaborators Global estimates on the number of people blind or visually impaired by diabetic retinopathy: A meta-analysis from 2000 to 2020. Eye. 2024;38:2047–2057. doi: 10.1038/s41433-024-03101-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Weng C.Y., Maguire M.G., Flaxel C.J., Jain N., Kim S.J., Patel S., Smith J.R., Kim L.A., Yeh S. Effectiveness of Conventional Digital Fundus Photography-Based Teleretinal Screening for Diabetic Retinopathy and Diabetic Macular Edema: A Report by the American Academy of Ophthalmology. Ophthalmology. 2024;131:927–942. doi: 10.1016/j.ophtha.2024.02.017. [DOI] [PubMed] [Google Scholar]
- 7.Hainsworth D.P., Bebu I., Aiello L.P., Sivitz W., Gubitosi-Klug R., Malone J., White N.H., Danis R., Wallia A., Gao X., et al. Risk Factors for Retinopathy in Type 1 Diabetes: The DCCT/EDIC Study. Diabetes Care. 2019;42:875–882. doi: 10.2337/dc18-2308. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Romero-Aroca P., López-Galvez M., Martinez-Brocca M.A., Pareja-Ríos A., Artola S., Franch-Nadal J., Fernandez-Ballart J., Andonegui J., Baget-Bernaldiz M. Changes in the Epidemiology of Diabetic Retinopathy in Spain: A Systematic Review and Meta-Analysis. Healthcare. 2022;10:1318. doi: 10.3390/healthcare10071318. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Grzybowski A., Rao D.P., Brona P., Negiloni K., Krzywicki T., Savoy F.M. Diagnostic Accuracy of Automated Diabetic Retinopathy Image Assessment Softwares: IDx-DR and Medios Artificial Intelligence. Ophthalmic Res. 2023;66:1286–1292. doi: 10.1159/000534098. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Vought R., Vought V., Shah M., Szirth B., Bhagat N. EyeArt artificial intelligence analysis of diabetic retinopathy in retinal screening events. Int. Ophthalmol. 2023;43:4851–4859. doi: 10.1007/s10792-023-02887-9. [DOI] [PubMed] [Google Scholar]
- 11.Baget-Bernaldiz M., Pedro R.-A., Santos-Blanco E., Navarro-Gil R., Valls A., Moreno A., Rashwan H.A., Puig D. Testing a Deep Learning Algorithm for Detection of Diabetic Retinopathy in a Spanish Diabetic Population and with MESSIDOR Database. Diagnostics. 2021;11:1385. doi: 10.3390/diagnostics11081385. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Romero-Aroca P., De La Riva-Fernandez S., Valls-Mateu A., Sagarra-Alamo R., Moreno-Ribas A., Soler N. Changes observed in diabetic retinopathy. Eight years follow up of a Spanish population. Br. J. Ophthalmol. 2016;100:1366–1371. doi: 10.1136/bjophthalmol-2015-307689. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13. [(accessed on 24 July 2024)]. Available online: https://www.adcis.net/en/third-party/messidor/
- 14.RetinaReadRisk (RRR) [(accessed on 24 July 2024)]. Available online: https://www.iispv.cat/cas-dxit/retinareadrisk-rrr/
- 15.Romero-Aroca P., Verges-Puig R., de la Torre J., Valls A., Relaño-Barambio N., Puig D., Baget-Bernaldiz M. Validation of a Deep Learning Algorithm for Diabetic Retinopathy. Telemed. e-Health. 2020;26:1001–1009. doi: 10.1089/tmj.2019.0137. [DOI] [PubMed] [Google Scholar]
- 16.Wilkinson C., Ferris F.L., Klein R.E., Lee P.P., Agardh C.D., Davis M., Dills D., Kampik A., Pararajasegaram R., Verdaguer J.T. Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology. 2003;110:1677–1682. doi: 10.1016/S0161-6420(03)00475-5. [DOI] [PubMed] [Google Scholar]
- 17.Shreffler J., Huecker M.R. StatPearls [Internet] StatPearls Publishing; Treasure Island, FL, USA: 2024. [(accessed on 6 August 2024)]. Diagnostic Testing Accuracy: Sensitivity, Specificity, Predictive Values and Likelihood Ratios. Available online: https://www.ncbi.nlm.nih.gov/books/NBK557491/ [PubMed] [Google Scholar]
- 18.Grauslund J. Diabetic retinopathy screening in the emerging era of artificial intelligence. Diabetologia. 2022;65:1415–1423. doi: 10.1007/s00125-022-05727-0. [DOI] [PubMed] [Google Scholar]
- 19.Jacoba C.M.P., Doan D., Salongcay R.P., Aquino L.A.C., Silva J.P.Y., Salva C.M.G., Zhang D., Alog G.P., Zhang K., Locaylocay K.L.R.B., et al. Performance of Automated Machine Learning for Diabetic Retinopathy Image Classification from Multi-field Handheld Retinal Images. Ophthalmol. Retin. 2023;7:703–712. doi: 10.1016/j.oret.2023.03.003. [DOI] [PubMed] [Google Scholar]
- 20.Tan C.H., Kyaw B.M., Smith H., Tan C.S., Car L.T. Use of Smartphones to Detect Diabetic Retinopathy: Scoping Review and Meta-Analysis of Diagnostic Test Accuracy Studies. J. Med. Internet Res. 2020;22:e16658. doi: 10.2196/16658. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Palermo B.J., D’Amico S.L., Kim B.Y., Brady C.J. Sensitivity and specificity of handheld fundus cameras for eye disease: A systematic review and pooled analysis. Surv. Ophthalmol. 2021;67:1531–1539. doi: 10.1016/j.survophthal.2021.11.006. [DOI] [PubMed] [Google Scholar]
- 22.Kubin A.-M., Huhtinen P., Ohtonen P., Keskitalo A., Wirkkala J., Hautala N. Comparison of 21 artificial intelligence algorithms in automated diabetic retinopathy screening using handheld fundus camera. Ann. Med. 2024;56:2352018. doi: 10.1080/07853890.2024.2352018. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Jacoba C.M.P., Salongcay R.P., Rageh A.K., Aquino L.A.C., Alog G.P., Saunar A.V., Peto T., Silva P.S. Comparisons of Handheld Retinal Imaging with Optical Coherence Tomography for the Identification of Macular Pathology in Patients with Diabetes. Ophthalmic Res. 2023;66:903–912. doi: 10.1159/000530720. [DOI] [PubMed] [Google Scholar]
- 24.de Oliveira J.A.E., Nakayama L.F., Ribeiro L.Z., de Oliveira T.V.F., Choi S.N.J.H., Neto E.M., Cardoso V.S., Dib S.A., Melo G.B., Regatieri C.V.S., et al. Clinical validation of a smartphone-based retinal camera for diabetic retinopathy screening. Acta Diabetol. 2023;60:1075–1081. doi: 10.1007/s00592-023-02105-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Salongcay R.P., Aquino L.A.C., Alog G.P., Locaylocay K.B., Saunar A.V., Peto T., Silva P.S. Accuracy of Integrated Artificial Intelligence Grading Using Handheld Retinal Imaging in a Community Diabetic Eye Screening Program. Ophthalmol. Sci. 2023;4:100457. doi: 10.1016/j.xops.2023.100457. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Kubin A., Wirkkala J., Keskitalo A., Ohtonen P., Hautala N. Handheld fundus camera performance, image quality and outcomes of diabetic retinopathy grading in a pilot screening study. Acta Ophthalmol. 2021;99:E1415–E1420. doi: 10.1111/aos.14850. [DOI] [PubMed] [Google Scholar]
- 27.Salongcay R.P., Aquino L.A.C., Salva C.M.G., Saunar A.V., Alog G.P., Sun J.K., Peto T., Silva P.S. Comparison of Handheld Retinal Imaging with ETDRS 7-Standard Field Photography for Diabetic Retinopathy and Diabetic Macular Edema. Ophthalmol. Retin. 2022;6:548–556. doi: 10.1016/j.oret.2022.03.002. [DOI] [PubMed] [Google Scholar]
- 28.Jacoba C.M.P., Salongcay R.P., Aquino L.A.C., Salva C.M.G., Saunar A.V., Alog G.P., Peto T., Silva P.S. Comparisons of handheld retinal imaging devices with ultrawide field images for determining diabetic retinopathy severity. Acta Ophthalmol. 2023;101:670–678. doi: 10.1111/aos.15651. [DOI] [PubMed] [Google Scholar]