Ophthalmology Science

Letter. 2024 Jul 17;4(6):100568. doi: 10.1016/j.xops.2024.100568

Reply

Angela S Li 1, Justin Myers 1, Sandra S Stinnett 1, Dilraj S Grewal 1, Glenn J Jaffe 1
PMCID: PMC11315183  PMID: 39132024

Reply: Response to Letter to the Editor

We thank Dr Sabour and Dr Ghassemi for their comments regarding our paper.1 The authors expressed 2 concerns about our methodology and statistical analysis. Regarding the gradeability concordance, we evaluated all 1969 images that were each graded by both primary readers (Table 1) and used their grades to calculate a prevalence- and bias-adjusted kappa of 0.84. This value suggests very strong agreement between the 2 readers, which further supports our conclusion of high interreader reproducibility of gradeability.2

Table 1.

Gradeability Data between 2 Primary Graders

                Grader 2
Grader 1    Yes     No      Total
Yes         1722    71      1793
No          87      89      176
Total       1809    160     1969

This analysis was done for all 1969 images that were graded by the 2 primary readers, noted here as Grader 1 and Grader 2. Prevalence and bias-adjusted kappa was 0.84 based on this table.
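As a sketch of the calculation, the prevalence- and bias-adjusted kappa (PABAK) reduces to 2 × (observed agreement) − 1 for a 2 × 2 table, so the value reported above can be reproduced directly from the counts in Table 1:

```python
# Prevalence- and bias-adjusted kappa (PABAK) from the counts in Table 1.
# For a 2x2 table, PABAK = 2 * p_o - 1, where p_o is the observed
# proportion of agreement (both readers "Yes" or both "No").
yes_yes, yes_no = 1722, 71   # Grader 1 "Yes" row
no_yes, no_no = 87, 89       # Grader 1 "No" row

total = yes_yes + yes_no + no_yes + no_no   # 1969 images
p_o = (yes_yes + no_no) / total             # observed agreement
pabak = 2 * p_o - 1

print(round(pabak, 2))  # 0.84, matching the value reported in the text
```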

With respect to the quantitative variables, we previously determined that the intraclass correlation coefficient (ICC) (2, 1)3 for lesion area was 0.99 (95% confidence interval, 0.99–0.99). We opted to report Bland-Altman analysis and the coefficient of reproducibility (smallest real difference) rather than the ICC, as the ICC is known to be highly dependent on the range of values measured, with a greater range leading to a higher ICC.4 Finally, it is important to note that all measurements were calculated with an individual-based approach rather than the global average approach. Each image included in the lesion area analysis was graded by the same 2 readers, and the interreader difference was calculated in a consistent manner.
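For readers unfamiliar with these statistics, the paragraph above can be illustrated with a minimal sketch. The data below are entirely hypothetical (the GATHER-1 measurements are not reproduced here), and the coefficient of reproducibility is taken as 1.96 × the standard deviation of the interreader differences, one common formulation:

```python
import statistics

# Illustrative only: hypothetical lesion-area measurements (mm^2) from
# two readers grading the same five images.
reader1 = [2.10, 5.40, 7.85, 3.30, 9.60]
reader2 = [2.15, 5.32, 7.90, 3.41, 9.55]

# Individual-based approach: a per-image interreader difference.
diffs = [a - b for a, b in zip(reader1, reader2)]

bias = statistics.mean(diffs)              # mean interreader difference
sd = statistics.stdev(diffs)               # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd) # Bland-Altman 95% limits of agreement
cr = 1.96 * sd                             # coefficient of reproducibility

print(f"bias={bias:.3f}, limits of agreement={loa[0]:.3f} to {loa[1]:.3f}, CR={cr:.3f}")
```

Unlike the ICC, the coefficient of reproducibility is expressed in the measurement's own units and does not inflate as the range of lesion areas widens, which is the motivation given in the text for reporting it.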

Footnotes

Disclosure(s):

All authors have completed and submitted the ICMJE disclosures form.

The authors have made the following disclosures:

D.S.G.: Consultant – Genentech, EyePoint, and Regeneron.

G.J.F.: Consultant – Roche/Genentech, Annexon, 4DMT, Eyepoint, and Regeneron.

References

  • 1. Li A.S., Myers J., Stinnett S.S., et al. Gradeability and reproducibility of geographic atrophy measurement in GATHER-1, a Phase II/III randomized interventional trial. Ophthalmol Sci. 2024;4. doi: 10.1016/j.xops.2023.100383.
  • 2. McHugh M.L. Interrater reliability: the kappa statistic. Biochem Med. 2012;22:276–282.
  • 3. Shrout P.E., Fleiss J.L. Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979;86:420–428. doi: 10.1037//0033-2909.86.2.420.
  • 4. Patton N., Aslam T., Murray G. Statistical strategies to assess reliability in ophthalmology. Eye (Lond). 2006;20:749–754. doi: 10.1038/sj.eye.6702097.

Articles from Ophthalmology Science are provided here courtesy of Elsevier
