Author manuscript; available in PMC: 2017 Feb 22.
Published in final edited form as: JAMA Ophthalmol. 2016 Feb;134(2):151–158. doi: 10.1001/jamaophthalmol.2015.4625

Table 1.

Agreement (Bland-Altman and weighted kappa) of optic disc VCDR scores between different imaging modalities and different graders. The comparison number corresponds to the specific comparisons described in the Methods section.

| Comparison Number | Reference Camera | Reference Grader | Reference Screen | Comparison Camera | Comparison Grader | Comparison Screen | Number | VCDR Mean Difference | 95% LoA Upper | 95% LoA Lower | Weighted Kappa (SD) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | DRS | Expert | Large | DRS | Expert | Large | 100 | -0.07 | 0.07 | -0.21 | 0.90 (0.01) |
| 2 | Peek | Expert | Large | Peek | Expert | Large | 100 | -0.01 | 0.16 | -0.18 | 0.77 (0.04) |
| 3a | DRS | Expert | Large | Peek | Ophth | Phone | 100 | -0.08 | -0.11 | -0.53 | 0.30 (0.07) |
| 3b | DRS | Expert | Large | Peek | Non-Ophth | Phone | 100 | -0.07 | 0.24 | -0.38 | 0.19 (0.06) |
| 4a | Peek | Expert | Large | Peek | Ophth | Phone | 100 | -0.08 | -0.11 | -0.56 | 0.35 (0.07) |
| 4b | Peek | Expert | Large | Peek | Non-Ophth | Phone | 100 | -0.06 | 0.21 | -0.33 | 0.25 (0.06) |
| 5 | DRS | Expert | Large | Peek | Expert | Large | 2,152 | 0.02 | 0.17 | -0.21 | 0.69 (0.01) |
| 6a | DRS | Expert | Large | Peek (Exp. Exam) | Expert | Large | 1,239 | -0.02 | 0.17 | -0.20 | 0.68 (0.02) |
| 6b | DRS | Expert | Large | Peek (Lay Exam) | Expert | Large | 913 | -0.02 | 0.16 | -0.21 | 0.71 (0.02) |

Note: VCDR = vertical cup-to-disc ratio. LoA = limits of agreement. DRS = reference desktop camera image. Peek = smartphone image. Expert = independent trained grader/image reader. Phone = smartphone on-screen disc grading application. Ophth = app grading performed by an ophthalmologist. Non-Ophth = app grading performed by a non-health-care worker.
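The two agreement statistics in the table have standard definitions: Bland-Altman 95% limits of agreement are the mean paired difference ± 1.96 × SD of the differences, and weighted kappa penalizes disagreements between ordinal grades by their distance. The following is a minimal illustrative sketch in Python; the VCDR values and the quadratic weighting are assumptions for demonstration, not the study's data or its exact weighting scheme.

```python
# Illustrative computation of Bland-Altman limits of agreement and weighted
# kappa. All input values below are hypothetical, NOT from the study.
import statistics
from collections import Counter

def bland_altman(ref, comp):
    """Return (mean difference, lower 95% LoA, upper 95% LoA) for paired scores."""
    diffs = [c - r for r, c in zip(ref, comp)]
    mean_diff = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return mean_diff, mean_diff - 1.96 * sd, mean_diff + 1.96 * sd

def weighted_kappa(a, b, categories, weight="quadratic"):
    """Weighted kappa for two raters over ordered categories.

    Disagreement weights are linear or quadratic in the distance between
    category indices; kappa = 1 - (weighted observed) / (weighted expected).
    """
    k, n = len(categories), len(a)
    idx = {c: i for i, c in enumerate(categories)}
    obs = [[0.0] * k for _ in range(k)]          # observed agreement matrix
    for x, y in zip(a, b):
        obs[idx[x]][idx[y]] += 1
    ca, cb = Counter(a), Counter(b)              # marginal counts per rater
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            d = abs(i - j) / (k - 1)
            w = d ** 2 if weight == "quadratic" else d
            expected = ca[categories[i]] * cb[categories[j]] / n
            num += w * obs[i][j]
            den += w * expected
    return 1.0 - num / den

# Hypothetical VCDR grades in 0.1 steps (two graders, five eyes).
ref  = [0.3, 0.4, 0.5, 0.6, 0.7]
comp = [0.3, 0.5, 0.4, 0.6, 0.8]
md, lo, hi = bland_altman(ref, comp)
kappa = weighted_kappa(ref, comp, [0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
```

Note that when the limits of agreement are computed this way, the "Upper" and "Lower" columns bracket the mean difference; the asymmetric intervals for comparisons 3a and 4a in the table suggest those rows should be read directly from the published article.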