Journal of Diabetes Science and Technology

Letter. 2021 Mar 19;15(3):710–712. doi: 10.1177/19322968211000418

Reliability of Classification by Ophthalmologists with Telescreening Fundus Images for Diabetic Retinopathy and Image Quality

Sílvia Rêgo 1, Marco Dutra-Medeiros 2,3,4,5, Gustavo M Bacelar-Silva 6,7, Tânia Borges 8, Filipe Soares 1, Matilde Monteiro-Soares 6
PMCID: PMC8120038  PMID: 33736493

The diagnostic gold standard for diabetic retinopathy screening is visual analysis of the eye fundus to identify microvascular lesions.1 In telescreening, high-quality eye fundus photographs are essential to adequately identify disease.2 However, studies on the agreement between ophthalmologists in classifying image quality are scarce,3,4 and studies on the impact of image quality on the reliability of diabetic retinopathy diagnosis are, to our knowledge, nonexistent.

In this cross-sectional study, two ophthalmologists (a retina specialist and a general ophthalmologist) blindly and independently classified 350 eye fundus images, randomly selected from the Kaggle database containing 53,571 images of subjects with diabetes,5 using a web annotation tool. After excluding 55 images deemed unclassifiable due to insufficient quality for diagnosis, 295 images were classified for diabetic retinopathy, referable diabetic retinopathy, and maculopathy, as displayed in Table 1. As in current clinical practice, image quality was classified according to subjective, non-predefined criteria, and diabetic retinopathy was graded using the modified version6 of the International Clinical Disease Severity Scale (ICDSS). Overall agreement, positive and negative specific agreement proportions, and Cohen’s kappa coefficient (κ), with the respective 95% confidence intervals (CIs), were calculated.
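
To make the agreement measures explicit, the sketch below (a minimal illustration, not the authors’ analysis code; the function name is ours and confidence intervals are omitted) computes overall agreement, positive and negative specific agreement, and Cohen’s kappa from a 2x2 cross-tabulation of two raters.

```python
def agreement_measures(a: int, b: int, c: int, d: int) -> dict:
    """Agreement measures for a 2x2 rater cross-tabulation.

    a: both raters classify the finding as present
    b: rater 1 present, rater 2 absent
    c: rater 1 absent, rater 2 present
    d: both raters classify the finding as absent
    """
    n = a + b + c + d
    overall = (a + d) / n                  # overall proportion of agreement
    positive = 2 * a / (2 * a + b + c)     # positive specific agreement
    negative = 2 * d / (2 * d + b + c)     # negative specific agreement
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_e = ((a + b) / n) * ((a + c) / n) + ((c + d) / n) * ((b + d) / n)
    kappa = (overall - p_e) / (1 - p_e)
    return {"overall": overall, "positive": positive,
            "negative": negative, "kappa": kappa}
```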

Table 1. Agreement between a General Ophthalmologist and a Retina Specialist for Disease and Image Quality Classification.

Classification, no. (%)                            Rater 2
Rater 1                              Present        Absent             Total
Diabetic retinopathy
  Present                            27 (9)         0 (0)              27 (9)
  Absent                             43 (15)        225 (76)           268 (91)
  Total                              70 (24)        225 (76)           295 (100)
Referable diabetic retinopathy
  Present                            7 (5)          2 (1)              9 (3)
  Absent                             9 (3)          277 (94)           286 (97)
  Total                              16 (5)         279 (94)           295 (100)
Maculopathy
  Present                            12 (4)         3 (1)              15 (5)
  Absent                             15 (5)         265 (90)           280 (95)
  Total                              27 (9)         268 (91)           295 (100)

Image quality, no. (%)                             Rater 2
Rater 1                              Bad or fair    Good or excellent  Total
  Bad or fair                        58 (21)        2 (1)              60 (22)
  Good or excellent                  113 (41)       104 (38)           217 (78)
  Total                              171 (62)       106 (38)           277 (100)

Interrater agreement for diabetic retinopathy, maculopathy, and referral cases was 85% [95% confidence interval (CI) 73%-97%], 96% (95% CI 72%-100%), and 94% (95% CI 75%-100%), respectively. Ophthalmologists showed considerably higher agreement in excluding diabetic retinopathy, referable diabetic retinopathy, and maculopathy than in identifying them [proportions of agreement of 91% (95% CI 79%-100%) vs 56% (95% CI 44%-68%), 98% (95% CI 74%-100%) vs 56% (95% CI 32%-80%), and 97% (95% CI 78%-100%) vs 57% (95% CI 38%-76%), respectively]. The kappa coefficient was 0.49 (95% CI 0.37-0.61) for diabetic retinopathy, 0.54 (95% CI 0.30-0.78) for maculopathy, and 0.54 (95% CI 0.35-0.73) for referral cases. From a clinical perspective, these results are of concern, suggesting considerable variability in the interpretation of disease findings in images. For image quality, the proportion of agreement was 58% (95% CI 51%-65%) and the kappa value 0.27 (95% CI 0.20-0.34), suggesting different understandings of the image quality required for a proper diagnosis. Good or excellent image quality improved both interrater reliability and agreement for the identification of disease [with the κ value rising from 0.49 to 0.62 (95% CI 0.40-0.73) and the proportion of agreement from 50% (95% CI 43%-57%) to 64% (95% CI 57%-71%), p<0.05].
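
As a rough cross-check (a worked example only; the authors’ confidence interval method is not reproduced), applying the formulas sketched above to the diabetic retinopathy counts in Table 1 recovers the reported point estimates.

```python
# Diabetic retinopathy counts from Table 1: 27 both-present,
# 0 and 43 discordant, 225 both-absent.
a, b, c, d = 27, 0, 43, 225
n = a + b + c + d                                    # 295 images
overall = (a + d) / n                                # ~0.85 overall agreement
positive = 2 * a / (2 * a + b + c)                   # ~0.56 positive agreement
negative = 2 * d / (2 * d + b + c)                   # ~0.91 negative agreement
p_e = ((a + b) / n) * ((a + c) / n) + ((c + d) / n) * ((b + d) / n)
kappa = (overall - p_e) / (1 - p_e)                  # ~0.49
print(f"{overall:.2f} {positive:.2f} {negative:.2f} {kappa:.2f}")
```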

Our study highlighted that images classified by ophthalmologists as bad or fair quality were more likely to be classified as screen-positive for diabetic retinopathy, referable diabetic retinopathy and maculopathy, increasing the number of patients requiring further observation and the burden on screening programmes. Ensuring that only good or excellent quality images are sent to ophthalmologists may improve interrater agreement for disease exclusion. Future studies are needed to understand image quality as perceived by ophthalmologists and should be followed by the development of objective and reliable parameters for image quality assessment. Furthermore, the benefit of integrating automated image quality verification in clinical settings to relieve ophthalmologists from grading all images acquired should be assessed.

Acknowledgments

The authors acknowledge Telmo Barbosa, MSc, Fraunhofer Portugal AICOS, for the development and management of the web annotation tool for classification of retinal images by ophthalmologists, and Ricardo Graça, MSc, Fraunhofer Portugal AICOS, for inputs on data preparation. Also, we acknowledge Kaggle Inc. (https://www.kaggle.com/c/diabetic-retinopathy-detection/data) and EyePACS, LLC (http://www.eyepacs.com), for providing the eye fundus images dataset used in this study.

Footnotes

Abbreviations: CI, confidence interval; ICDSS, International Clinical Disease Severity Scale; κ, Cohen’s kappa coefficient.

Authors’ Note: Matilde Monteiro-Soares is also affiliated with MEDCIDS: Departamento de Medicina da Comunidade, Informação e Decisão em Saúde, Faculdade de Medicina da Universidade do Porto, Porto, Portugal.

Author Contributions: Sílvia Rêgo: conception and design, acquisition, analysis and interpretation of data, article draft, final approval of the version to be published, agreement to be accountable for all aspects of the work.

Marco Dutra-Medeiros: interpretation of data, article revision, final approval of the version to be published, agreement to be accountable for all aspects of the work.

Gustavo Bacelar-Silva: acquisition and interpretation of data, article revision, final approval of the version to be published, agreement to be accountable for all aspects of the work.

Tânia Borges: acquisition and interpretation of data, article revision, final approval of the version to be published, agreement to be accountable for all aspects of the work.

Filipe Soares: conception and design, acquisition and interpretation of data, article revision, final approval of the version to be published, agreement to be accountable for all aspects of the work.

Matilde Monteiro-Soares: conception and design, interpretation of data, article revision, final approval of the version to be published, agreement to be accountable for all aspects of the work.

Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by Fraunhofer Portugal AICOS (Porto, Portugal). The development of the web platform for classification of retinal images was supported by Project MDevNet – National Network for Transfer of Knowledge of Medical Devices (POCI-01-0246-FEDER-026792), within the scope of the Portuguese national programme NORTE 2020, under Portugal 2020. The sponsor or funding organization had no role in the design or conduct of this research.

Statement of Ethics: Ethical approval was not required or obtained for this study, because we used a database of retinal images collected by EyePACS, LLC and publicly available through Kaggle Inc.

References

1. Kollias AN, Ulbig MW. Diabetic retinopathy: early diagnosis and effective treatment. Dtsch Arztebl Int. 2010;107(5):75-84.
2. Fenner BJ, Wong RLM, Lam WC, et al. Advances in retinal imaging and applications in diabetic retinopathy screening: a review. Ophthalmol Ther. 2018;7(2):333-346.
3. Ruamviboonsuk P, Teerasuwanajak K, Tiensuwan M, et al. Interobserver agreement in the interpretation of single-field digital fundus images for diabetic retinopathy screening. Ophthalmology. 2006;113(5):826-832.
4. Gegundez-Arias ME, Ortega C, Garrido J, et al. Inter-observer reliability and agreement study on early diagnosis of diabetic retinopathy and diabetic macular edema risk. In: Ortuño F, Rojas I, eds. 4th International Conference on Bioinformatics and Biomedical Engineering, IWBBIO, Granada, Spain, April 20-22, 2016. Springer; 2016:369-379.
5. Kaggle: Your Home for Data Science. Diabetic Retinopathy Detection – identify signs of diabetic retinopathy in eye images. 2015. Accessed July 23, 2019. www.kaggle.com/c/diabetic-retinopathy-detection/data
6. Administração Regional de Saúde do Norte. Manual de Procedimentos do Rastreio da Retinopatia Diabética da Região Norte. 2009. Accessed July 28, 2019. www.arsnorte.min-saude.pt/wp-content/uploads/sites/3/2018/01/Manual-Rastreio-da-Retinopatia-Diabetica-ARSN.pdf
