PLOS One. 2021 May 6;16(5):e0251249. doi: 10.1371/journal.pone.0251249

Intraobserver and interobserver agreement among anterior chamber angle evaluations using automated 360-degree gonio-photos

Masato Matsuo 1,*, Shiro Mizoue 2, Koji Nitta 3, Yasuyuki Takai 1, Kazunobu Sugihara 1, Masaki Tanito 1
Editor: Jinhai Huang
PMCID: PMC8101769  PMID: 33956906

Abstract

Purpose

To investigate the reproducibility of iridocorneal angle evaluations using pictures obtained with a gonioscopic camera, the Gonioscope GS-1 (Nidek Co., Gamagori, Japan).

Methods

Pragmatic within-patient comparative diagnostic evaluations of 140 GS-1 gonio-images, obtained from 35 eyes of 35 patients at four ocular sectors (superior, temporal, inferior, and nasal angles), were conducted by five independent ophthalmologists, including three glaucoma specialists, in a masked fashion twice, 1 week apart. We performed observer agreement and correlation analyses of Scheie’s angle width and pigmentation gradings and of the detection of peripheral anterior synechia and the Sampaolesi line.

Results

The Fleiss’ kappa values for the four elements between manual gonioscopy and the automated gonioscope by a single glaucoma specialist were 0.22, 0.40, 0.32, and 0.58, respectively. The intraobserver agreements for the four elements among the individual glaucoma specialists ranged from 0.32 to 0.65, 0.24 to 0.71, 0.35 to 0.70, and 0.20 to 0.76, respectively; the Fleiss’ kappa coefficients for the four elements among the three glaucoma specialists were 0.31, 0.38, 0.31, and 0.17, respectively; and the pairwise Fleiss’ kappa coefficients for the angle width and pigmentation gradings between glaucoma specialists ranged from 0.30 to 0.35 and 0.29 to 0.43, respectively. Overall, the Kendall’s tau coefficients for the angle gradings indicated positive correlations among the evaluations.

Conclusion

Our findings suggest slight-to-substantial intraobserver agreement and slight-to-fair (among the three specialists) or fair-to-moderate (between each pair) interobserver agreement for angle assessments using GS-1 gonio-photos, even by glaucoma specialists. Sufficient training and a solid consensus should allow more reliable angle assessments using gonio-photos with high reproducibility.

Introduction

Glaucoma is the leading cause of irreversible blindness worldwide, with about 8.4 million people blind from the disease; iridocorneal angle evaluation by gonioscopy is necessary for glaucoma diagnosis and clinical evaluation [13]. Elevated intraocular pressure (IOP) is the only modifiable and independent risk factor for development and progression of this optic neuropathy. The mechanism of IOP elevation depends on the anatomic and structural status of the drainage system, or anterior chamber angle, and separates the pathology into two major subtypes: open-angle and closed-angle glaucoma.

Primary open-angle glaucoma (POAG) is the most common type, with an estimated 45 million people worldwide with open-angle glaucoma (OAG). The condition is defined by an open anterior chamber angle on gonioscopy without other known explanations (i.e., secondary glaucoma) for progressive glaucomatous optic nerve change [1]. OAG includes pseudoexfoliation glaucoma, which is considered the most common type of secondary glaucoma and can advance rapidly with continuously high IOP and be refractory to several therapeutic interventions [4]. High pigmentation in the trabecular meshwork (TM) and Sampaolesi’s line are important for its diagnosis. In OAG, the IOP is regulated primarily by resistance to aqueous humor outflow through the TM [5]. High pigmentation levels in the TM may contribute to increased outflow resistance and increased IOP. Therefore, assessment of TM pigmentation is necessary for glaucoma diagnosis and clinical evaluation [3, 6]. Moreover, other types of secondary glaucoma also require examination of angle chromatic information to identify underlying causes that may alter the treatment plan.

Primary angle-closure glaucoma (PACG), which is characterized by elevated IOP resulting from mechanical obstruction of the TM, by either apposition of the peripheral iris to the TM or an angle closed by synechia, causes more blindness than POAG, particularly in Asia. According to the International Society of Geographic and Epidemiologic Ophthalmology (ISGEO) protocol, primary angle closure diseases are classified into three categories, representing a continuum between open and closed angles: primary angle closure suspect (PACS), primary angle closure (PAC), and PACG. Thus, early detection of a narrow angle and peripheral anterior synechia (PAS), together with appropriate treatment in earlier stages by laser iridotomy or lens extraction, can prevent progression and glaucomatous optic neuropathy [7, 8]. Therefore, gonioscopy is essential for glaucoma diagnoses and clinical evaluations.

The clinical standard for evaluating the TM remains gonioscopy [3, 9, 10], because no other method can estimate the chromatic information of the TM. Alternative methods for evaluating the iridocorneal angle, such as anterior-segment optical coherence tomography (AS-OCT) and ultrasound biomicroscopy (UBM), are commonly used, but these techniques provide only anatomic quantification. Conventional gonioscopy, however, requires skill and time, and obtaining images of sufficient quality in a standardized manner is difficult because of the manual technique. Moreover, gonioscopy can examine only a limited contiguous portion of the iridocorneal angle at one time, and the technique relies on subjective assessment; gonioscopic findings are therefore subject to substantial intraobserver and interobserver variability [11, 12]. Despite its importance, clinicians still underuse gonioscopy, with usage rates during first-time glaucoma clinic visits ranging from 17.96% to 45.9%. A low rate of gonioscopy (49%) was also reported during the 4 to 5 years preceding glaucoma surgery [13].

The Gonioscope GS-1 (Nidek Co., Gamagori, Japan) is a recently released gonioscopic camera that covers 360 degrees of the angle and automatically provides true-color gonio-images in a standardized manner in less than 1 minute per eye. Because these images can be analyzed post hoc, physicians can make detailed observations, including magnifying any abnormalities by image manipulation. This technology is intended to assist decision-making concerning the degree of angle opening or the suitability of certain procedures, such as laser trabeculoplasty or angle-based surgery. Moreover, the technology should enable telemedicine or tele-glaucoma care, with standardized gonio-photos read by glaucoma specialists in remote locations, which could improve glaucoma diagnostic rates and reduce preventable visual loss if the disease is detected early enough [14]. However, as with conventional gonioscopy, assessments using GS-1 gonio-photos are likely subject to observer variability, which should be evaluated in detail. Teixeira et al. first reported interobserver agreement for Shaffer’s grading, angle closure, and detection of other angle structural abnormalities using prototype GS-1 gonio-photos evaluated by an ophthalmology resident and a glaucoma specialist [9]; the intraobserver and interobserver agreements of angle evaluations by ophthalmologists, especially glaucoma specialists, remain unknown. Moreover, angle pigmentation grades were not assessed in that study, and angle evaluations were performed only inferonasally, inferotemporally, superotemporally, and superonasally, which are not common gonioscopic assessment sites or recording conventions. Ideally, this validation should be a diagnostic agreement study performed by well-trained ophthalmologists familiar with gonioscopy, among various types of patients, to demonstrate that images collected with the device allow different doctors to reach the same diagnosis.
In the current study, we investigated the observer agreements of independent ophthalmologists including glaucoma specialists in multiple centers for angle evaluations in four primary sectors, inferiorly, superiorly, temporally, and nasally, using the images obtained by a new commercial version of the Gonioscope GS-1.

Methods

The institutional review boards of Shimane University Faculty of Medicine, Izumo, Japan; Ehime University Graduate School of Medicine, Matsuyama, Japan; and Fukui-ken Saiseikai Hospital, Fukui, Japan, reviewed and approved the research and waived written informed consent. Each participating center has a specialized glaucoma unit. All research adhered to the tenets of the Declaration of Helsinki.

Overview of the study design

Pragmatic within-patient comparative diagnostic evaluations of 140 GS-1 gonio-images by five independent ophthalmologists (S.M., K.N., Y.T., K.S., and M.T.) were conducted in a masked fashion twice, 1 week apart. Three of the five ophthalmologists were glaucoma specialists (S.M., K.N., and M.T.) belonging to different tertiary care centers. We evaluated the intraobserver and interobserver agreement values of Scheie’s angle width and pigmentation gradings and of the detection of PAS and the Sampaolesi line. In the Scheie grading system, an angle is defined as wide if all structures are visible up to the iris root and its attachment to the anterior ciliary body; grade I if all angle structures are visible up to the scleral spur; grade II when angle structures are visible only up to the posterior TM; grade III when only Schwalbe’s line and the anterior TM are visible; and grade IV when no TM is observed. We adopted this grading system for the experimental design. Moreover, angle pigmentation was graded from 0 (no pigmentation) to IV (severe pigmentation) with increasing degrees of pigmentation by the Scheie grading system [15]. The Department of Ophthalmology, Shimane University Faculty of Medicine, was responsible for the test data preparation, data acquisition, and analyses.

Study subjects

Study subjects were identified from the medical records of patients who underwent successful evaluations using Gonioscope GS-1 images obtained in four sectors, superiorly, temporally, inferiorly, and nasally, at Shimane University Hospital from October 2018 to January 2019. Each patient underwent a comprehensive eye examination that included autorefraction (RC-5000, Tomey, Nagoya, Japan), visual acuity measurement, slit-lamp examination, IOP measurement using Goldmann applanation tonometry (Haag-Streit, Koniz, Switzerland), conventional static and dynamic gonioscopy with an Ocular Magna View Two-Mirror Gonio (Ocular Instruments, Bellevue, WA, USA) under slit-lamp illumination, corneal thickness measurement by specular microscopy (EM-3000; Tomey), central anterior chamber depth measurement using the OA-2000 (Tomey), fundus examination, and other examinations as determined by the clinician as part of the clinical examination. Manual gonioscopy was performed by a single glaucoma specialist (M.T.), and angle width grades were evaluated with the modified Shaffer grading system for the clinical assessment. In the Shaffer scheme, grade 0 was assigned to 0°-wide angles in which no TM could be observed; grade 1 to 5°-to-15°-wide angles in which only Schwalbe’s line and the anterior TM were visible; grade 2 to 15°-to-25°-wide angles in which angle structures were visible only up to the posterior TM; grade 3 to 25°-to-35°-wide angles in which all angle structures were visible up to the scleral spur; and grade 4 to angles wider than 35° in which all structures were visible up to the iris root and its attachment to the anterior ciliary body [9, 16]. The patient age, gender, clinical history, ocular characteristics, and clinical diagnosis of each study eye were recorded.
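The Shaffer width ranges quoted above can be sketched as a simple lookup. This is an illustrative Python sketch of ours, not part of the study; the actual scheme combines angular width with structure visibility, and the boundary handling here is an assumption.

```python
def shaffer_grade_from_width(width_deg: float) -> int:
    """Approximate Shaffer grade from an estimated angular width in degrees,
    following the ranges quoted in the text (boundary handling assumed)."""
    if width_deg <= 0:
        return 0   # closed angle: no TM observable
    if width_deg <= 15:
        return 1   # 5-15 deg: only Schwalbe's line and anterior TM visible
    if width_deg <= 25:
        return 2   # 15-25 deg: visible only up to the posterior TM
    if width_deg <= 35:
        return 3   # 25-35 deg: visible up to the scleral spur
    return 4       # >35 deg: visible up to the iris root / ciliary body
```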

Ultimately, the study included gonio-images obtained in four ocular sectors of 35 eyes of 35 Japanese patients: five eyes of five normal patients, five eyes with ocular hypertension, 10 eyes with POAG, five eyes with pseudoexfoliation glaucoma, five eyes with secondary OAG, and five eyes with PACG. Glaucomatous eyes were defined based on the characteristic optic disc appearance (localized or diffuse neuroretinal rim thinning/notching), the presence of a retinal nerve fiber layer (RNFL) defect in the corresponding region, and the presence of a visual field defect corresponding to the structural change with Humphrey Field Analyzer (Carl Zeiss, Dublin, CA, USA). The glaucoma types were diagnosed via slit-lamp examination and manual gonioscopy. The normal controls had IOPs less than or equal to 21 mmHg, no history of IOP elevation, no glaucomatous optic disc appearance, and no RNFL defect; eyes with ocular hypertension had IOPs over 21 mmHg, no secondary cause of IOP elevations, no glaucomatous optic disc appearance, and no RNFL defect [17].

Subsequently, one masked observer (M.M.) assessed the image quality of the gonio-images based on previous studies (i.e., grade 0 indicated clear and focused; grade 1, slightly blurred with visible details; and grade 2, blurred with no discernible details) [9, 16]. Whole eyes with at least one grade 2 image among the four ocular sectors were excluded as poor quality. After the image quality assessments, 17 eyes (23.9%) with poor-quality images were excluded from the angle evaluations: 11 subjects had one eye excluded, and 3 subjects had both eyes excluded. If both eyes of a subject met the eligibility criteria, the eye with the better-quality images was selected [3].

The other exclusion criteria were a history of trauma, intraocular surgery including uncomplicated cataract surgery, laser iridotomy, and laser trabeculoplasty that could potentially change the intra-anterior chamber environment and drastically alter angle structures and TM pigmentation [3].

Gonioscope GS-1 imaging

The Gonioscope GS-1 system includes an automatically rotating optical contact prism with 16 mirrored facets, illuminated by a white light-emitting diode lamp, and a built-in high-resolution color camera. Each facet of the prism projects white light onto a 22.5-degree portion of the angle. The gonioscope camera takes 17 pictures, simulating indirect static gonioscopy at varying focal depths, from each of the facets, for a total of 272 gonio-photos according to the protocol defined by the manufacturer [3, 9, 13].

After instilling topical anesthetic eye drops of 0.4% oxybuprocaine hydrochloride (Benoxil ophthalmic solution 0.4%, Santen, Osaka, Japan) into the eye and using a lens coupling gel (GenTeal Gel, Alcon, Fort Worth, TX, USA), the images were captured by the Gonioscope GS-1 with the participants seated in a darkened room and looking in primary gaze. During the first step of the image-acquisition process, the instrument was moved manually toward the apex of the patient’s cornea until light contact was obtained and a focused image appeared on the gonioscope screen. Unlike manual gonioscopy, the proprietary pressure sensor feedback mechanism built into the GS-1 ensures sufficient contact between the cornea and the gonioprism, coupled with a gel ointment, for imaging. A fixation target in the center of the gonio-prism stabilized the ocular movements while the prism touched the eye surface; if any indentation was detected, the device did not proceed, and no images were acquired until the indentation was eliminated. Second, the machine automatically achieved fine focus on the angle structures and took 16 sequential high-resolution photographs at multiple focal planes. Including additional manual focusing time, examining one eye required less than 1 minute [3, 9, 13].

Evaluation of GS-1 gonio-images

Interobserver agreement and correlation analyses using GS-1 gonio-photos

The five independent ophthalmologists were board-certified ophthalmic specialists of the Japanese Ophthalmological Society (JOS) who had completed 5 years of training at a JOS-approved educational institution of ophthalmology and passed an examination administered by the JOS. The three glaucoma specialists, at three different tertiary care centers, diagnosed and treated glaucoma patients daily. One of the authors (M.M.) gave the participating ophthalmologists a short explanation of angle evaluation with GS-1 gonio-images; the 140 randomized GS-1 gonio-images (first test, S1 Data) and the answer sheets were sent by e-mail. Thereafter, the five ophthalmologists, including the three glaucoma specialists, performed the diagnostic evaluations of the first test without specific clinical information about the patients. The Shaffer grading system is commonly used in daily clinical practice, but it depends on both the visibility of the anatomical structures of the angle and the angle’s angularity. In contrast, the Scheie angle width grading system, also in common use, is based on the visibility of the anatomical structures alone. We therefore considered the Scheie system more appropriate for angle evaluations with GS-1 gonio-photos, because the visibility of the anatomical structures is easy to judge from a single planar image, whereas angularity is difficult to estimate. Thus, each ophthalmologist was asked to individually evaluate Scheie’s angle width and pigmentation gradings [15] and the presence or absence of PAS and the Sampaolesi line [9]. For analysis, we modified the original Scheie grading labels to 0 (wide or 0), 1 (I), 2 (II), 3 (III), and 4 (IV) [18]. We then analyzed the interobserver agreements for the angle evaluations using the gonioscopic photos.

Intraobserver reproducibility for angle evaluations between manual gonioscopy and automated gonioscope

To evaluate the intraobserver reproducibility of the angle evaluations, we compared the outcomes of manual gonioscopy with those of the automated gonioscope obtained by the same glaucoma specialist (M.T.) in the first test. For analysis, the Shaffer angle width grades from gonioscopy were translated to Scheie angle width grades as follows: Shaffer 4 corresponded to Scheie wide (labeled 0), 3 to I, 2 to II, 1 to III, and 0 to IV. We also analyzed the intraobserver reproducibility of closed angle detection in each sector (n = 140) and in each eye (n = 35). For this analysis, we defined a “closed angle sector” as grade 0 or 1 in the Shaffer system with gonioscopy or grade III or IV in the Scheie system with GS-1 image evaluation, and a “closed angle eye” as an eye in which 3 or 4 sectors were closed.
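The grade conversion and closed-angle definitions above can be expressed as simple lookups. This is an illustrative Python sketch; the function names are ours, not from the study.

```python
# Shaffer angle width grade -> modified Scheie label (wide = 0, I = 1, ..., IV = 4),
# per the conversion described in the Methods.
SHAFFER_TO_SCHEIE = {4: 0, 3: 1, 2: 2, 1: 3, 0: 4}

def sector_closed_gonioscopy(shaffer_grade: int) -> bool:
    """A 'closed angle sector' on manual gonioscopy: Shaffer grade 0 or 1."""
    return shaffer_grade <= 1

def sector_closed_gs1(scheie_label: int) -> bool:
    """The GS-1 equivalent: Scheie grade III or IV (modified labels 3 and 4)."""
    return scheie_label >= 3

def eye_closed(sector_flags) -> bool:
    """A 'closed angle eye': 3 or 4 of the four sectors are closed."""
    return sum(bool(f) for f in sector_flags) >= 3
```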

Intraobserver agreement and correlation analyses using GS-1 gonio-photos

One week after the first test, the five ophthalmologists received the same set of GS-1 gonio-images in a different randomized order (second test, S2 Data) along with the answer sheet and conducted the diagnostic evaluations of the second test without specific clinical information about the patients. This was performed to determine the intraobserver agreements for angle evaluations using the gonioscopic photos.

Statistical analysis

We evaluated the agreement levels of the evaluations of Scheie’s angle width and pigmentation gradings and of each of the possible gonioscopic findings in the detection of PAS and the Sampaolesi line [9, 18]. Fleiss’ kappa statistic was used to assess the agreement for binary and nominal variables, including the intraobserver and interobserver agreement [19]. A bootstrap method was used to calculate the 95% confidence intervals [20]. The analyses were performed in R statistical software version 3.5.3 [21]. The kappa statistics were interpreted as poor (<0.20), fair (0.20–0.40), moderate (0.40–0.60), substantial (0.60–0.80), and excellent (>0.80) based on the proposal by Altman [22, 23]. Kendall rank correlation coefficients were also calculated to assess the statistical associations based on the ranks of the data using JMP Pro 14 software (SAS Institute Japan Inc., Tokyo, Japan). The clinical and demographic characteristics are expressed as mean and standard deviation for continuous variables or as number and frequency for discrete variables.
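As a rough illustration of the statistics described above, Fleiss’ kappa, the Altman interpretation bands, and a percentile bootstrap over subjects can be sketched in Python. This is a minimal sketch of ours; the authors’ actual analyses used R, and the code assumes ratings are supplied as a subjects-by-categories count matrix.

```python
import numpy as np

def fleiss_kappa(counts) -> float:
    """Fleiss' kappa for an (N subjects x k categories) matrix of rating
    counts; each row must sum to the same number of raters n."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                     # raters per subject
    p_j = counts.sum(axis=0) / counts.sum()       # overall category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), float(np.square(p_j).sum())
    if P_e == 1.0:          # all ratings in a single category: kappa undefined;
        return 1.0          # treat as complete agreement by convention
    return (P_bar - P_e) / (1 - P_e)

def altman_score(kappa: float) -> str:
    """Interpretation bands after Altman, as used in the Methods."""
    if kappa < 0.20:
        return "poor"
    if kappa < 0.40:
        return "fair"
    if kappa < 0.60:
        return "moderate"
    if kappa < 0.80:
        return "substantial"
    return "excellent"

def bootstrap_ci(counts, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for kappa, resampling subjects (rows)."""
    counts = np.asarray(counts, dtype=float)
    rng = np.random.default_rng(seed)
    N = counts.shape[0]
    stats = [fleiss_kappa(counts[rng.integers(0, N, N)]) for _ in range(n_boot)]
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])
```

For example, three raters who agree on every subject yield kappa = 1, and the bands label any value from 0.40 up to (but not including) 0.60 as moderate.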

Results

A total of 140 images of 35 eyes of 35 subjects were included in the study. Table 1 shows the demographic data and clinical characteristics of the study subjects. The participants were Japanese (mean age, 61.5 ± 14.3 years; range, 23–83). Women (n = 14) comprised 40% of the subjects.

Table 1. Demographics and clinical characteristics of the study subjects.

Characteristics Total (n = 35)
Age, years, mean ± SD (range) 61.5±14.3 (23–83)
Women, number (%) 14 (40)
Right eye, number (%) 19 (54)
LogMAR best corrected visual acuity, mean ± SD (range) 0.11±0.37 (-0.08–1.85)
Intraocular pressure, mmHg, mean ± SD (range) 18.3±8.5 (9–57)
Corneal curvature, diopters, mean ± SD (range) 43.5±1.6 (40.4–47.9)
Spherical equivalent refractive error, diopters, mean ± SD (range) -2.4±3.3 (-10.8–1.6)
Corneal thickness, μm, mean ± SD (range) 525.5±32.5 (437–585)
Central anterior chamber depth, mm, mean ± SD (range) 3.20±0.38 (2.23–3.84)
Lens status, phakic number (%) 35 (100)
Shaffer’s angle width grade, mean ± SD (range) 3.29±0.84 (1–4)
Scheie’s angle pigmentation grade, mean ± SD (range) 1.13±0.74 (0–3)
Normal, number (%) 5 (14)
OH, number (%) 5 (14)
POAG patients, number (%) 10 (29)
PEG patients, number (%) 5 (14)
SOAG patients, number (%) 5 (14)
PACG patients, number (%) 5 (14)
Image quality grading, mean ± SD (range) 0.33±0.47 (0–1)

LogMAR = logarithm of the minimum angle of resolution; OH = ocular hypertension; POAG = primary open-angle glaucoma; PEG = pseudoexfoliation glaucoma; SOAG = secondary open-angle glaucoma; PACG = primary angle-closure glaucoma; SD = standard deviation.

Intraobserver reproducibility for angle evaluations between manual gonioscopy and automated gonioscope

Table 2 shows the agreement analyses for angle evaluations between manual gonioscopy and the automated GS-1 gonioscope by the single glaucoma specialist (M.T.). One hundred forty positions from the four ocular sectors of 35 eyes were evaluated. Compared with the results of gonioscopy, the outcomes of Scheie’s angle pigmentation grading, PAS detection, and Sampaolesi line detection using GS-1 gonio-photos were significantly overestimated (P<0.05 for each), and the outcome of Scheie’s angle width grading from the automated gonio-photos also tended to be overestimated. The Fleiss’ kappa values for the four elements, i.e., Scheie’s angle width and pigmentation gradings and detection of PAS and the Sampaolesi line, were 0.22 (fair), 0.40 (fair), 0.32 (fair), and 0.58 (moderate), respectively. In addition, the Kendall’s tau coefficients were 0.47, 0.65, 0.39, and 0.62 (P<0.01 for each). We also analyzed the agreements for closed angle detection in each sector (n = 140) and in each eye (n = 35); the Fleiss’ kappa values (95% CI) were 0.72 (0.49–0.95) and 1.00 (1.00–1.00), and the Kendall’s tau coefficients were 0.75 and 1.00 (P<0.01 for each). S1 Fig shows radar charts of the distributions of iridocorneal angle evaluations with manual gonioscopy and with the automated gonioscope for visualizing the variabilities. S1 Table presents the comparison of manual gonioscopy and the automated gonioscope as a contingency table.

Table 2. Agreement analyses for angle evaluations between manual gonioscopy and automated gonioscope.

Parameters Manual gonioscopy (mean ± SD) Automated gonioscope (mean ± SD) Wilcoxon signed rank test (P) Fleiss’ kappa coefficient Kendall rank correlation coefficient
Kappa (95% CI) Landis-Koch score* Tau P
Scheie’s angle width grading, n = 140 0.71 ± 0.89 0.87 ± 1.08 0.09 0.22 (0.10–0.35) Fair agreement 0.47 <0.01
Scheie’s angle pigmentation grading, n = 140 1.13 ± 0.90 1.41 ± 1.03 <0.01 0.40 (0.28–0.51) Fair agreement 0.65 <0.01
PAS detection, n = 140 0.04 ± 0.20 0.10 ± 0.29 <0.01 0.32 (0.03–0.61) Fair agreement 0.39 <0.01
Sampaolesi line detection, n = 140 0.06 ± 0.25 0.12 ± 0.33 0.01 0.58 (0.35–0.81) Moderate agreement 0.62 <0.01

PAS = peripheral anterior synechia; SD = standard deviation; CI = confidence interval.

*The Landis-Koch score is used to interpret the kappa coefficient.

Intraobserver agreement and correlation analyses for angle evaluations using GS-1 gonio-photos

Table 3 shows the intraobserver agreement values of the independent ophthalmologists (observers 1–5), including the glaucoma specialists (observers 1–3). One hundred forty images obtained from the four ocular sectors of 35 eyes were evaluated, and the kappa agreements between the first and second tests were calculated. Only observer 3 had no experience using the Gonioscope GS-1 in the clinic. The intraobserver agreement values for the four elements, i.e., Scheie’s angle width and pigmentation gradings and detection of PAS and the Sampaolesi line, by the three glaucoma specialists ranged from 0.32 to 0.65, 0.24 to 0.71, 0.35 to 0.70 (all fair to substantial), and 0.20 to 0.76 (slight to substantial), respectively. The intraobserver agreement values for the five ophthalmologists ranged from 0.32 to 0.65 and 0.24 to 0.71 (both fair to substantial), 0.35 to 1.00 (fair to almost perfect), and -0.01 to 0.76 (poor to substantial), respectively. Individual glaucoma specialists did not always have higher Fleiss’ kappa coefficients than the other ophthalmologists, although the highest intraobserver agreement values belonged to glaucoma specialists for every element except PAS detection. Additionally, the two glaucoma specialists (observers 1 and 2) who had experience using the Gonioscope GS-1 in the clinic tended to score higher than the one who did not (observer 3) in all the angle evaluations. Fig 1 shows radar charts of the distributions of iridocorneal angle evaluations from the first and second tests by the three glaucoma specialists for visualizing the intraobserver variabilities. Moreover, the Kendall’s tau coefficients were 0.48 to 0.81 for the angle width gradings and 0.48 to 0.82 for the angle pigmentation gradings (P<0.01 for each), reflecting positive correlations in the evaluations (Table 4).
S2 Table presents the comparison of Scheie’s angle gradings by the glaucoma specialists with the automated gonioscope between the first and second tests as a contingency table.

Table 3. Intraobserver agreement analyses for angle evaluations using gonioscopic photos obtained using the GS-1.

Parameters Total (n = 140) Superior position (n = 35) Temporal position (n = 35) Inferior position (n = 35) Nasal position (n = 35)
Fleiss’ kappa coefficient (95% CI) Landis-Koch score* Fleiss’ kappa coefficient (95% CI) Fleiss’ kappa coefficient (95% CI) Fleiss’ kappa coefficient (95% CI) Fleiss’ kappa coefficient (95% CI)
Scheie’s angle width grading
Observer 1 0.65 (0.55–0.75) Substantial agreement 0.76 (0.57–0.92) 0.67 (0.46–0.86) 0.55 (0.30–0.76) 0.60 (0.34–0.82)
Observer 2 0.63 (0.51–0.76) Substantial agreement 0.78 (0.44–1.00) 0.75 (0.49–0.94) 0.40 (0.16–0.64) 0.63 (0.34–0.90)
Observer 3 0.32 (0.17–0.46) Fair agreement 0.68 (0.42–0.89) 0.19 (0.00–0.34) 0.15 (-0.14–0.43) -0.03 (-0.23–0.17)
Observer 4 0.50 (0.30–0.65) Moderate agreement 0.25 (-0.10–0.55) 0.50 (0.09–0.78) 0.85 (-0.01–1.00) 0.53 (0.11–0.84)
Observer 5 0.40 (0.26–0.53) Fair agreement 0.41 (0.13–0.66) 0.37 (0.05–0.64) 0.60 (0.34–0.81) 0.17 (-0.09–0.42)
Scheie’s angle pigmentation grading
Observer 1 0.67 (0.56–0.76) Substantial agreement 0.61 (0.36–0.81) 0.53 (0.27–0.73) 0.53 (0.30–0.74) 0.87 (0.69–1.00)
Observer 2 0.71 (0.59–0.82) Substantial agreement 0.80 (0.46–1.00) 0.58 (0.18–0.87) 0.70 (0.48–0.88) 0.67 (0.31–0.90)
Observer 3 0.24 (0.13–0.36) Fair agreement 0.28 (0.00–0.52) 0.43 (0.15–0.64) 0.05 (-0.19–0.27) 0.07 (-0.19–0.31)
Observer 4 0.65 (0.54–0.75) Substantial agreement 0.65 (0.41–0.84) 0.79 (0.55–0.95) 0.57 (0.31–0.79) 0.47 (0.18–0.69)
Observer 5 0.45 (0.30–0.57) Moderate agreement 0.47 (0.16–0.73) 0.49 (0.16–0.75) 0.37 (0.10–0.63) 0.40 (0.10–0.65)
PAS detection
Observer 1 0.60 (0.32–0.82) Moderate agreement 0.77 (0.30–1.00) 0.63 (-0.06–1.00) 0.20 (-0.15–0.67) 1.00 (1.00–1.00)
Observer 2 0.70 (0.32–1.00) Substantial agreement 0.65 (-0.05–1.00) 1.00 (1.00–1.00) 0.72 (-0.02–1.00) -0.02 (-0.06–0.01)
Observer 3 0.35 (0.08–0.58) Fair agreement 0.44 (-0.09–0.87) 0.64 (-0.04–1.00) 0.11 (-0.23–0.46) -0.01 (-0.06–0.01)
Observer 4 1.00 (1.00–1.00) Almost perfect agreement 1.00 (1.00–1.00) NA NA NA
Observer 5 0.51 (0.34–0.67) Moderate agreement 0.62 (-0.04–1.00) 0.60 (0.15–0.92) 0.77 (0.44–1.00) 0.12 (-0.26–0.42)
Sampaolesi line detection
Observer 1 0.60 (0.39–0.77) Moderate agreement NA -0.03 (-0.08–0.01) 0.22 (-0.12–0.49) NA
Observer 2 0.76 (0.39–1.00) Substantial agreement NA NA 0.72 (0.30–1.00) NA
Observer 3 0.20 (-0.02–0.43) Slight agreement NA 0.07 (-0.21–0.46) 0.21 (-0.16–0.51) -0.08 (-0.17–-0.03)
Observer 4 -0.01 (-0.03–0.00) Poor agreement NA -0.03 (-0.08–0.01) -0.03 (-0.08–0.01) NA
Observer 5 0.53 (-0.01–0.89) Moderate agreement NA 0.64 (-0.04–1.00) 0.35 (-0.09–1.00) NA

Observer 1, 2, 3 = glaucoma specialists.

Observer 4, 5 = ophthalmologists.

PAS = peripheral anterior synechia; CI = confidence interval; NA = not applicable.

*The Landis-Koch score is used to interpret the kappa coefficient.

Fig 1. The radar charts of the mean distributions of iridocorneal angle evaluations during the first and second tests by three independent glaucoma specialists for visualizing intraobserver variabilities in (A) Scheie’s angle width grading (0 to 4), (B) Scheie’s angle pigmentation grading (0 to 4), (C) PAS detection (0 to 1), and (D) Sampaolesi line detection (0 to 1).

Fig 1

S = superior; T = temporal; I = inferior; N = nasal.

Table 4. Intra-observer correlation analyses for angle gradings using gonioscopic photos of GS-1.

Parameters Kendall rank correlation coefficient (total, n = 140) Kendall coefficient (superior position, n = 35) Kendall coefficient (temporal position, n = 35) Kendall coefficient (inferior position, n = 35) Kendall coefficient (nasal position, n = 35)
Tau P Tau P Tau P Tau P Tau P
Scheie’s angle width grading
Observer 1 0.81 <0.01 0.88 <0.01 0.87 <0.01 0.75 <0.01 0.78 <0.01
Observer 2 0.75 <0.01 0.81 <0.01 0.82 <0.01 0.66 <0.01 0.75 <0.01
Observer 3 0.48 <0.01 0.84 <0.01 0.61 <0.01 0.21 0.17 0.12 0.50
Observer 4 0.60 <0.01 0.52 <0.01 0.68 <0.01 0.86 <0.01 0.53 <0.01
Observer 5 0.52 <0.01 0.64 <0.01 0.51 <0.01 0.65 <0.01 0.27 0.09
Scheie’s angle pigmentation grading
Observer 1 0.82 <0.01 0.77 <0.01 0.68 <0.01 0.74 <0.01 0.93 <0.01
Observer 2 0.81 <0.01 0.83 <0.01 0.72 <0.01 0.84 <0.01 0.75 <0.01
Observer 3 0.48 <0.01 0.51 <0.01 0.57 <0.01 0.07 0.64 0.44 <0.01
Observer 4 0.79 <0.01 0.79 <0.01 0.82 <0.01 0.75 <0.01 0.74 <0.01
Observer 5 0.54 <0.01 0.63 <0.01 0.59 <0.01 0.47 <0.01 0.55 <0.01

Observer 1, 2, 3 = glaucoma specialists.

Observer 4, 5 = ophthalmologists.

Interobserver agreement and correlation analyses for angle evaluations using GS-1 gonio-photos

Table 5 shows the interobserver agreement values among the independent glaucoma specialists (observers 1, 2, and 3) and among the five ophthalmologists (observers 1–5) for the angle evaluations using gonioscopic photos during the first test. One hundred forty images obtained from the four ocular sectors of 35 eyes were evaluated, and the kappa agreements were tested. The kappa coefficients of reliability for Scheie’s angle width and pigmentation gradings and for detection of PAS and the Sampaolesi line among the three glaucoma specialists were 0.31, 0.38, and 0.31 (all fair), and 0.17 (slight), respectively; among the five ophthalmologists, the corresponding values were 0.17 (slight), 0.34 (fair), 0.09, and 0.14 (both slight), respectively. Overall, the Fleiss’ kappa coefficients among the three glaucoma specialists were higher than those among the five ophthalmologists. Fig 2 shows radar charts of the distributions of the iridocorneal angle evaluations from the first test by the three glaucoma specialists for visualizing the interobserver variabilities. We also performed interobserver agreement and correlation analyses between each pair of glaucoma specialists for the angle gradings using gonioscopic photos during the first test. The pairwise kappa coefficients for the angle width and pigmentation gradings were 0.30 to 0.35 (fair) and 0.29 to 0.43 (fair to moderate), which were similar across pairs. The Kendall’s tau coefficients were 0.53 to 0.65 and 0.62 to 0.69 (P<0.01 for each), reflecting positive correlations between each pair (Table 6). S3 Table presents the comparison of Scheie’s angle gradings with the automated gonioscope between one glaucoma specialist and the others in the first test as a contingency table. We additionally assessed the effects of GS-1 image quality on the observer agreements for the angle evaluations; the observer agreements using grade 0 images were not always better than those using grade 1 images (S4 Table).

Table 5. Interobserver agreement analyses for angle evaluations using GS-1 gonioscopic photos during the first test.

Parameters Total (n = 140) Superior position (n = 35) Temporal position (n = 35) Inferior position (n = 35) Nasal position (n = 35)
Fleiss’ kappa coefficient (95% CI) Landis-Koch score* Fleiss’ kappa coefficient (95% CI) Fleiss’ kappa coefficient (95% CI) Fleiss’ kappa coefficient (95% CI) Fleiss’ kappa coefficient (95% CI)
Scheie’s angle width grading
Among three glaucoma specialists 0.31 (0.22–0.40) Fair agreement 0.32 (0.14–0.48) 0.43 (0.31–0.52) 0.36 (0.17–0.53) 0.07 (-0.09–0.23)
Among all five ophthalmologists 0.17 (0.11–0.23) Slight agreement 0.16 (0.03–0.28) 0.17 (0.05–0.24) 0.19 (0.05–0.31) 0.12 (0.02–0.21)
Scheie’s angle pigmentation grading
Among three glaucoma specialists 0.38 (0.30–0.46) Fair agreement 0.26 (0.05–0.46) 0.44 (0.20–0.62) 0.23 (0.07–0.38) 0.44 (0.25–0.60)
Among all five ophthalmologists 0.34 (0.28–0.40) Fair agreement 0.34 (0.17–0.48) 0.29 (0.11–0.43) 0.24 (0.13–0.33) 0.40 (0.24–0.51)
PAS detection
Among three glaucoma specialists 0.31 (0.09–0.49) Fair agreement 0.11 (-0.08–0.38) 0.65 (-0.08–0.38) 0.22 (-0.04–0.39) -0.01 (-0.04–0.01)
Among all five ophthalmologists 0.09 (0.01–0.17) Slight agreement 0.12 (0.03–0.19) 0.19 (-0.05–0.37) 0.22 (0.05–0.35) -0.13 (-0.17–0.09)
Sampaolesi line detection
Among three glaucoma specialists 0.17 (-0.01–0.35) Slight agreement NA -0.11 (-0.19–0.05) 0.12 (-0.15–0.34) -0.05 (-0.10–0.01)
Among all five ophthalmologists 0.14 (0.01–0.26) Slight agreement NA 0.04 (-0.05–0.13) 0.09 (-0.09–0.23) -0.03 (-0.06–-0.01)

PAS = peripheral anterior synechia; CI = confidence interval; NA = not applicable.

*The Landis-Koch score is used to interpret the kappa coefficient.
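The Landis-Koch benchmarks interpret kappa as: below 0, poor; 0.00–0.20, slight; 0.21–0.40, fair; 0.41–0.60, moderate; 0.61–0.80, substantial; and 0.81–1.00, almost perfect agreement. As an illustrative sketch only (the function is ours and not part of the study's R-based analysis), this mapping is:

```python
def landis_koch(kappa: float) -> str:
    """Verbal interpretation of a kappa coefficient (Landis-Koch benchmarks)."""
    if kappa < 0.0:
        return "poor agreement"
    for upper, label in [(0.20, "slight agreement"),
                         (0.40, "fair agreement"),
                         (0.60, "moderate agreement"),
                         (0.80, "substantial agreement")]:
        if kappa <= upper:
            return label
    return "almost perfect agreement"
```

For example, the total kappa of 0.31 for angle width grading among the three specialists falls in the "fair agreement" band, and 0.17 for Sampaolesi line detection falls in "slight agreement", matching the table entries.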

Fig 2. The radar charts of the mean distributions of iridocorneal angle evaluations during the first test by three independent glaucoma specialists for visualizing interobserver variabilities in (A) Scheie's angle width grading (0 to 4), (B) Scheie's angle pigmentation grading (0 to 4), (C) PAS detection (0 to 1), and (D) Sampaolesi line detection (0 to 1).

S = superior; T = temporal; I = inferior; N = nasal.

Table 6. Interobserver reproducibility analyses for angle gradings by glaucoma specialists using GS-1 gonioscopic photos.

Parameters Fleiss’ kappa coefficient (total, n = 140) Kendall rank correlation coefficient (total, n = 140)
Kappa (95% CI) Landis-Koch score* Tau P
Scheie’s angle width grading
Observer 1 vs. Observer 2 0.31 (0.20 to 0.42) Fair agreement 0.65 <0.01
Observer 1 vs. Observer 3 0.30 (0.19 to 0.41) Fair agreement 0.57 <0.01
Observer 2 vs. Observer 3 0.35 (0.21 to 0.49) Fair agreement 0.53 <0.01
Scheie’s angle pigmentation grading
Observer 1 vs. Observer 2 0.40 (0.29 to 0.52) Fair agreement 0.69 <0.01
Observer 1 vs. Observer 3 0.43 (0.31 to 0.54) Moderate agreement 0.65 <0.01
Observer 2 vs. Observer 3 0.29 (0.17 to 0.41) Fair agreement 0.62 <0.01

Observers 1, 2, and 3 = glaucoma specialists.
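The Kendall rank correlations in Table 6 compare two observers' ordinal gradings of the same 140 images. A minimal pure-Python sketch of the tie-corrected tau-b statistic (the function name and example data are ours; the study's analyses were performed in R [21]):

```python
from itertools import combinations
from math import sqrt

def kendall_tau_b(x, y):
    """Kendall's tau-b between two equal-length lists of ordinal grades.

    Tau-b corrects the denominator for tied pairs, which are common
    when two observers assign five-level Scheie grades.
    """
    assert len(x) == len(y) and len(x) > 1
    concordant = discordant = ties_x = ties_y = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        dx, dy = xi - xj, yi - yj
        if dx == 0:
            ties_x += 1          # pair tied on observer 1's grades
        if dy == 0:
            ties_y += 1          # pair tied on observer 2's grades
        if dx != 0 and dy != 0:
            if dx * dy > 0:
                concordant += 1  # pair ranked in the same order
            else:
                discordant += 1  # pair ranked in opposite order
    n0 = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / sqrt((n0 - ties_x) * (n0 - ties_y))
```

Identical gradings give tau = 1.0 and fully reversed gradings give -1.0; partial agreement with ties, such as `kendall_tau_b([1, 1, 2], [1, 2, 2])`, yields an intermediate value.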

Discussion

Using the images obtained with the Gonioscope GS-1, we evaluated the intraobserver and interobserver agreement of five independent ophthalmologists, including three glaucoma specialists, at multiple centers for several iridocorneal angle evaluations in four ocular sectors. Fleiss' kappa statistics were used to quantify the observer agreements for angle evaluations with GS-1 images because an exact match is desirable in clinical evaluations. In addition, we performed Kendall rank correlation analyses, especially for the ordinal scales, which are useful for assessing statistical associations based on the ranks of the data.
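The study's statistics were computed in R [21]; as an illustration only, with made-up rating counts rather than the study's data, Fleiss' kappa for m raters assigning each of n images to one of k categories can be sketched as:

```python
from typing import List

def fleiss_kappa(counts: List[List[int]]) -> float:
    """Fleiss' kappa for agreement among m raters over n subjects.

    counts[i][j] = number of raters assigning subject i (here, one
    gonio-image sector) to category j (e.g. a Scheie grade).
    Every row must sum to the same number of raters m.
    """
    n = len(counts)        # number of subjects (images)
    m = sum(counts[0])     # raters per subject
    k = len(counts[0])     # number of categories

    # Per-subject observed agreement P_i and category proportions p_j.
    P_i = [(sum(c * c for c in row) - m) / (m * (m - 1)) for row in counts]
    p_j = [sum(row[j] for row in counts) / (n * m) for j in range(k)]

    P_bar = sum(P_i) / n            # mean observed agreement
    P_e = sum(p * p for p in p_j)   # chance-expected agreement
    return (P_bar - P_e) / (1 - P_e)
```

Raters who agree on every image yield kappa = 1.0, while values near zero indicate agreement no better than chance, which is why the tables report kappa rather than raw match rates.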

Manual gonioscopy, a contact method developed in the 1800s that requires topical anesthesia and patient cooperation, is the current clinical standard for angle assessment because it allows both static and dynamic angle evaluations, enabling the distinction between an anatomically closed angle with iridotrabecular contact (ITC, apposition) and PAS. However, gonioscopy is subjective, and findings may vary with corneal pressure, lighting conditions, angle pigmentation, and iris convexity. Reflecting this, the intraobserver (intervisit) agreement for angle opening on a binary scale (narrow or open) with gonioscopy was moderate to excellent (0.53 to 0.86), and the interobserver agreement was moderate to substantial (0.57 to 0.69), as calculated with kappa statistics [24]. In addition, the interobserver agreement for angle width gradings between a resident's observations and a glaucoma specialist's by manual gonioscopy was substantial (0.75), as determined with weighted kappa statistics [25]. The lack of objective documentation of gonioscopic findings also makes the technique poorly suited to long-term follow-up. Considering this, angle evaluations using recordable, standardized gonio-photos may contribute to more reliable and reproducible assessments. However, the reproducibility of angle evaluations using manual standardized gonio-photos has not always been high. Phu et al. reported that the agreement values for angle width evaluations between the results of gonioscopy performed by a senior optometrist and gonioscopic image evaluations by three experienced optometrists were 0.39 to 0.45 (fair to moderate agreement), and that the interobserver agreement values for the goniophotographic evaluations between pairs of observers were 0.42 to 0.76 (moderate to substantial agreement), as analyzed by weighted Fleiss' kappa [18]. Thus, both manual evaluation and image evaluation with conventional gonioscopy have their respective limitations.

In contrast, for EyeCam, another angle visualization and photography device, the inter- and intraobserver agreement in angle closure detection was excellent (kappa = 0.82 and 0.87), and the agreement between gonioscopy and EyeCam in angle closure detection by the same observer was moderate (kappa = 0.52–0.60), as calculated with unweighted kappa. The agreement between gonioscopy and EyeCam in angle width grading by the same observer was also moderate (kappa = 0.52–0.60, calculated with weighted kappa) [16]. However, participants must undergo EyeCam imaging in the supine position with a 130° direct gonio-lens, which takes longer than gonioscopy (approximately 5 to 10 minutes per eye), and the image differs substantially from that of common manual indirect gonioscopy, which might explain why the device is not in common use. Moreover, EyeCam is disadvantageous for acquiring standard images: unlike the GS-1's automated angle imaging system, it requires many manual operations for photographing, such as adjusting the illumination conditions and the position and tilt of the gonioscopy lens, which could affect the angle evaluations [13, 16].

As shown in Table 2, compared with the results of manual gonioscopy, the outcomes using GS-1 gonio-photos tended to be overestimated. In addition, the Fleiss' kappa values for angle width grading and for detection of PAS and the Sampaolesi line by the glaucoma specialist differed from the previous results obtained with a prototype version of the GS-1 (kappa = 0.48, 0.46, and 0.09, respectively) [9]. Additionally, the Kendall's tau coefficients indicated positive correlations between the two methods. Gonioscopy allows not only conventional static but also dynamic examination, so the observer can inspect a region of interest in more detail until a judgment can be made. Therefore, the assessment with gonioscopy by the glaucoma specialist would differ from that with a single static GS-1 image and could have better diagnostic power. On the other hand, the Fleiss' kappa values for closed-angle detection with the GS-1 in each sector and in each eye were better than those of the prototype GS-1 and might be better than those of EyeCam [9, 16]. The high agreement for closed-angle detection supports the usefulness of the device for screening because, as with the fundus photography commonly used in medical checkups, overestimation would be less of a problem for screening purposes. The key is to use each of the two methods for its appropriate purpose.

Table 3 reflects the intraobserver agreements of the independent ophthalmologists, including the glaucoma specialists, and suggests a large measurement bias in the method. A similar tendency was found in each sector analysis; however, it is difficult to conclude that the findings also hold in each sector because of the large differences in values and confidence intervals. Meanwhile, the Kendall's tau coefficients indicated positive correlations between the outcomes of the first and second tests overall and in most sector analyses (Table 4). To the best of our knowledge, no study has evaluated the intraobserver agreement in angle evaluations using standardized gonio-photos with indirect gonioscopy; even among the glaucoma specialists in the current study, however, the intraobserver agreement values did not always reach substantial or higher scores. The manual gonioscopy technique may have a longer-than-desired learning curve [9]. Therefore, as with conventional gonioscopy, angle evaluations using gonio-photos also likely have an associated learning curve that can affect the results. In fact, the two glaucoma specialists who were experienced with the Gonioscope GS-1 tended to have higher scores than the third. This suggests that even highly experienced glaucoma specialists need additional training before assessing the angles using gonio-images in clinical settings; this can be accomplished by comparing the actual slit-lamp and manual gonioscopy findings before evaluating the angles using only the gonio-photos, which could be a hidden potential limitation of the device. Moreover, the two non-specialist ophthalmologists who were experienced with the Gonioscope GS-1 tended to have higher intraobserver agreement scores in the angle gradings than the inexperienced glaucoma specialist, likely for the same reason.
In addition, when using gonio-photos, angle assessment may be difficult even for experts, especially in the presence of ambiguous findings. Moreover, as exemplified by PAS detection by observer 4, a tendency toward overestimation or underestimation of specific findings may exist depending on the observer. Thus, careful interpretation is important even when the apparent match rate is high.

Overall, the Fleiss' kappa coefficients among the three glaucoma specialists were higher than those among the five ophthalmologists, and the interobserver agreements were lower than the intraobserver agreements (Table 5), which could reflect differences in angle recognition among the different ophthalmologists. In addition, the interobserver agreements improved with the proportion of glaucoma specialists among the observers. Meanwhile, the Kendall's tau coefficients between each pair of glaucoma specialists indicated positive correlations (Table 6). Therefore, as expected, the angle assessments by glaucoma specialists were the most similar and thus the most reliable. A similar tendency was found in each sector analysis; however, it is difficult to conclude that the findings also hold in each sector because of the large differences in values and confidence intervals. Additionally, even the agreement among the three glaucoma specialists and between each pair did not always achieve high scores. The interobserver agreement values for Scheie's angle pigmentation grading were similar between the observer groups, suggesting that this grading is easier and that relatively good accuracy can be achieved from the beginning. In addition, the agreement for detecting Sampaolesi lines between manual gonioscopy and the gonio-image assessment was slight (kappa = 0.16), and in a previous report, the interobserver agreement between an ophthalmology resident and a glaucoma specialist was also slight (kappa = 0.09) [9]. Considering that the intraobserver agreement for detecting Sampaolesi lines by the glaucoma specialists tended to be high, the low interobserver agreement scores probably resulted from differences in recognition among the observers rather than from the characteristics of the gonio-image assessments.
Therefore, to reduce the interobserver variability in angle evaluations with gonio-photos, the consensus on the angle findings among the observers should be reconfirmed.

We additionally assessed the effects of GS-1 image quality on the observer agreements for angle evaluations. As shown in S4 Table, the observer agreements using grade 0 images were not always better than those using grade 1 images, probably because we completely excluded the grade 2 (blurred with no discernible details) images. However, our study had several limitations. Because our study subjects were from university hospitals that provide specialized glaucoma care on a daily basis in Japan, there may have been referral bias. Moreover, our study subjects were all Japanese; the irises and angles of Asians differ in color from those of Caucasians, so the results may not generalize to other races. We also excluded subjects with poor-quality images, which could cause selection bias. The sample size was not sufficiently large to include all angle types, so there likely is a limit to the generalizability of the current results to all angle evaluations with gonio-photos. However, it is impractical to evaluate the diagnostic reproducibility of all rare angle findings with a small sample size; therefore, we limited the evaluation items in our study. Finally, the ophthalmologists participating in the study evaluated the angles using only the gonio-photos and had no access to other information, which differs markedly from normal clinical situations and may cause misclassification bias.

In conclusion, the current study confirmed the results of previous reports and provided new perspectives on the reproducibility of iridocorneal angle assessments using gonio-images by five independent ophthalmologists, including three glaucoma specialists. The high agreement between gonioscopy and the Gonioscope GS-1 for closed-angle detection supported the usefulness of the device for screening, although further study is needed. The intraobserver agreement levels for Scheie's angle width and pigmentation gradings and for detecting PAS and Sampaolesi lines among the three glaucoma specialists were fair to substantial for the first three and slight to substantial for the last, while the interobserver agreements were fair for the first three and slight for the last. Our findings suggested slight-to-substantial intraobserver agreement and slight-to-fair (among the three) or fair-to-moderate (between each pair) interobserver agreement for the angle assessments using gonio-photos even by glaucoma specialists. Generally, the intraobserver agreement levels among the glaucoma specialists tended to be high, and the interobserver agreements improved with the proportion of glaucoma specialists among the observers. Therefore, the angle assessments by glaucoma specialists were the most similar and thus the most reliable. However, as with conventional gonioscopy, angle evaluation using gonio-photos likely involves a learning curve, and even glaucoma specialists need additional training in clinical practice before performing assessments using only the gonio-photos. It also is necessary to reconfirm the consensus on the angle findings among the observers to reduce the interobserver variability in angle evaluations using the gonio-photos.
Sufficient training and a solid consensus should allow more reliable, highly reproducible angle assessments using GS-1 gonio-photos.

Supporting information

S1 Fig. The radar charts of the distributions of iridocorneal angle evaluations with manual gonioscopy and automated gonioscope by the glaucoma specialist (MT) for visualizing the variabilities in (A) Scheie’s angle width grading, (B) Scheie’s angle pigmentation grading, (C) PAS detection, and (D) Sampaolesi line detection.

(TIF)

S1 Table. Comparison of manual gonioscopy and automated gonioscope in all angle gradings.

(DOCX)

S2 Table. Comparison of Scheie’s angle gradings by glaucoma specialists with automated gonioscope between first and second tests in all images.

(DOCX)

S3 Table. Comparison of Scheie’s angle gradings with automated gonioscope between a glaucoma specialist and the others in first test.

(DOCX)

S4 Table. Effect of image quality on observer agreement for angle evaluations using gonioscopic photos of GS-1.

(DOCX)

S1 Data. The first test of randomized 140 gonio-images.

(PDF)

S2 Data. The second test of different randomized 140 gonio-images.

(PDF)

Data Availability

All relevant data are within the manuscript and its Supporting information files.

Funding Statement

The authors received no specific funding for this work.

References

  • 1. Prum BE Jr., Rosenberg LF, Gedde SJ, Mansberger SL, Stein JD, Moroi SE, et al. Primary Open-Angle Glaucoma Preferred Practice Pattern® Guidelines. Ophthalmology. 2016;123(1):P41–P111. doi: 10.1016/j.ophtha.2015.10.053
  • 2. Prum BE Jr., Herndon LW Jr., Moroi SE, Mansberger SL, Stein JD, Lim MC, et al. Primary Angle Closure Preferred Practice Pattern® Guidelines. Ophthalmology. 2016;123(1):P1–P40. doi: 10.1016/j.ophtha.2015.10.049
  • 3. Matsuo M, Pajaro S, De Giusti A, Tanito M. Automated anterior chamber angle pigmentation analyses using 360° gonioscopy. Br J Ophthalmol. 2019. doi: 10.1136/bjophthalmol-2019-314320
  • 4. Topouzis F, Founti P, Yu F, Wilson MR, Coleman AL. Twelve-Year Incidence and Baseline Risk Factors for Pseudoexfoliation: The Thessaloniki Eye Study (An American Ophthalmological Society Thesis). Am J Ophthalmol. 2019;206:192–214. doi: 10.1016/j.ajo.2019.05.005
  • 5. Vranka JA, Kelley MJ, Acott TS, Keller KE. Extracellular matrix in the trabecular meshwork: intraocular pressure regulation and dysregulation in glaucoma. Exp Eye Res. 2015;133:112–25. doi: 10.1016/j.exer.2014.07.014
  • 6. De Giusti A, Pajaro S, Tanito M. Automatic Pigmentation Grading of the Trabecular Meshwork in Gonioscopic Images. Proc. of Computational Pathology and Ophthalmic Medical Image Analysis (COMPAY-OMIA 2018): Springer International Publishing; 2018. p. 193–200.
  • 7. Sun X, Dai Y, Chen Y, Yu D-Y, Cringle SJ, Chen J, et al. Primary angle closure glaucoma: What we know and what we don't know. Prog Retin Eye Res. 2017;57:26–45. doi: 10.1016/j.preteyeres.2016.12.003
  • 8. Porporato N, Baskaran M, Aung T. Role of anterior segment optical coherence tomography in angle-closure disease: a review. Clin Exp Ophthalmol. 2018;46(2):147–57. doi: 10.1111/ceo.13120
  • 9. Teixeira F, Sousa DC, Leal I, Barata A, Neves CM, Pinto LA. Automated gonioscopy photography for iridocorneal angle grading. Eur J Ophthalmol. 2018. doi: 10.1177/1120672118806436
  • 10. Porporato N, Baskaran M, Tun TA, Sultana R, Tan M, Quah JH, et al. Understanding diagnostic disagreement in angle closure assessment between anterior segment optical coherence tomography and gonioscopy. Br J Ophthalmol. 2019. doi: 10.1136/bjophthalmol-2019-314672
  • 11. Friedman DS, He M. Anterior chamber angle assessment techniques. Surv Ophthalmol. 2008;53(3):250–73. doi: 10.1016/j.survophthal.2007.10.012
  • 12. See JL. Imaging of the anterior segment in glaucoma. Clin Exp Ophthalmol. 2009;37(5):506–13. doi: 10.1111/j.1442-9071.2009.02081.x
  • 13. Shi Y, Yang X, Marion KM, Francis BA, Sadda SR, Chopra V. Novel and Semiautomated 360-Degree Gonioscopic Anterior Chamber Angle Imaging in Under 60 Seconds. Ophthalmol Glaucoma. 2019;2(4):215–23. doi: 10.1016/j.ogla.2019.04.002
  • 14. He M, Jiang Y, Huang S, Chang DS, Munoz B, Aung T, et al. Laser peripheral iridotomy for the prevention of angle closure: a single-centre, randomised controlled trial. Lancet. 2019;393(10181):1609–18. doi: 10.1016/S0140-6736(18)32607-2
  • 15. Scheie HG. Width and pigmentation of the angle of the anterior chamber; a system of grading by gonioscopy. AMA Arch Ophthalmol. 1957;58(4):510–2. doi: 10.1001/archopht.1957.00940010526005
  • 16. Murakami Y, Wang D, Burkemper B, Lin SC, Varma R. A Population-Based Assessment of the Agreement Between Grading of Goniophotographic Images and Gonioscopy in the Chinese-American Eye Study (CHES). Invest Ophthalmol Vis Sci. 2016;57(10):4512–6. doi: 10.1167/iovs.15-18434
  • 17. Kass MA, Heuer DK, Higginbotham EJ, Johnson CA, Keltner JL, Miller JP, et al. The Ocular Hypertension Treatment Study: a randomized trial determines that topical ocular hypotensive medication delays or prevents the onset of primary open-angle glaucoma. Arch Ophthalmol. 2002;120(6):701–13. doi: 10.1001/archopht.120.6.701
  • 18. Phu J, Wang H, Khou V, Zhang S, Kalloniatis M. Remote Grading of the Anterior Chamber Angle Using Goniophotographs and Optical Coherence Tomography: Implications for Telemedicine or Virtual Clinics. Transl Vis Sci Technol. 2019;8(5):16. doi: 10.1167/tvst.8.5.16
  • 19. Fleiss JL, Nee JC, Landis JR. Large sample variance of kappa in the case of different sets of raters. Psychol Bull. 1979;86(5):974–7. doi: 10.1037/0033-2909.86.5.974
  • 20. Efron B, Tibshirani R. Bootstrap Methods for Standard Errors, Confidence Intervals, and Other Measures of Statistical Accuracy. Stat Sci. 1986;1(1):54–75. doi: 10.1214/ss/1177013815
  • 21. R Core Team. R: a language and environment for statistical computing. R Foundation for Statistical Computing; 2019. https://www.R-project.org/
  • 22. Altman DG. Practical Statistics for Medical Research. London: Chapman and Hall; 1991.
  • 23. Tanito M, Nitta K, Katai M, Kitaoka Y, Yokoyama Y, Omodaka K, et al. Validation of formula-predicted glaucomatous optic disc appearances: the Glaucoma Stereo Analysis Study. Acta Ophthalmol. 2019;97(1):e42–e49. doi: 10.1111/aos.13816
  • 24. Rigi M, Bell NP, Lee DA, Baker LA, Chuang AZ, Nguyen D, et al. Agreement between Gonioscopic Examination and Swept Source Fourier Domain Anterior Segment Optical Coherence Tomography Imaging. J Ophthalmol. 2016;2016:1727039. doi: 10.1155/2016/1727039
  • 25. Thomas R, George T, Braganza A, Muliyil J. The flashlight test and van Herick's test are poor predictors for occludable angles. Aust N Z J Ophthalmol. 1996;24(3):251–6. doi: 10.1111/j.1442-9071.1996.tb01588.x

Decision Letter 0

Jinhai Huang

16 Dec 2020

PONE-D-20-24626

Intraobserver and interobserver agreement among anterior chamber angle evaluations using automated 360-degree gonio-photos

PLOS ONE

Dear Dr. Matsuo,

Please submit your revised manuscript by Jan 30 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Jinhai Huang, M.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please delete it from any other section.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Matsuo and colleagues reported on the agreement of anterior chamber angle evaluations using the GS-1. This is an emerging technique and thus this kind of study is useful in contributing to the literature. There are a number of primary concerns regarding the writing and the analysis approach that the authors could consider, as well as aspects of the discussion that appear to be lacking in the requisite depth for a critical review of the technology.

Introduction:

1) The first two paragraphs appear to be needlessly long: the authors could consider reshaping this to identify two succinct points: 1) angle closure as a disease entity is important to identify as it has a different prognostic course compared to open angle glaucoma; 2) secondary open angle (and closed angle) glaucomas require examination of the anterior chamber angle to identify underlying causes that may alter the treatment plan. Paragraph two in particular is a bit confusing in its tone with regard to the usefulness and limitations of gonioscopy. Moreover, there are some aspects of gonioscopy that are glossed over, such as the fact that grading systems for treatment titration rely solely on the gonioscopic impression (e.g. Prum Jr et al 2016 Ophthalmology), and that major clinical trials use gonioscopy as the technique of choice (He et al 2019 Lancet). Gonioscopy lenses such as the G6 in theory offer a more uninterrupted view of the anterior chamber angle so the statement on page 6, line 10 is not quite true.

2) Aside from the pigmentation grade, the configuration and distribution of pigment is important clinically, such as in cases of burnt out pigment dispersion syndrome, mottled versus homogenous pigmentation - there are numerous citations for this, such as from Rob Ritch.

Methods:

1) Why did the authors use Scheie classification in the first instance (page 8), followed by Shaffer (page 9)? Both are different and provide the opposite ordinal grades. This was unclear, but suggests that one was used by the clinician for the assessment and the other for the experimental design? Needs clarification as to the purpose of the grading system.

2) Page 10: the reader is slightly confused with the exclusion process... was the whole eye excluded or was it just one image per eye that was excluded - based on lines 14-16. How was the better quality image determined if both were equal? Were some angles systematically more likely to have better images? For example, if considering conventional imaging modalities, it is not surprising to find that superior angle assessment is more problematic compared to other quadrants due to anatomic lid interference (e.g. Xu et al 2019 TVST).

3) A limitation of the GS-1, like AS-OCT is that it is performed in primary gaze without lens tilt, despite many clinical trials adding the dynamic component of gonioscopy (e.g. He et al 2019 Lancet) and clinical guidelines recommending identification of iridotrabecular contact that is only possible with lens tilt or off-axis gaze (e.g. Prum Jr et al 2016 Ophthalmology). This requires a comment on page 11.

4) The methods require a bit more transparency in the writing. Pages 12-13 for example appear particularly disorganised. The reader is introduced to five independent observers firstly on page 12, but then on page 13 line 3 onwards, a presumably singular observer is used for the intraobserver evaluation. However, then the authors state that a "second set" was used to further evaluate intraobserver agreements presumably for the five independent observers on line 16? Another example is page 13 line 13 - are the different randomised images from the same set or another prospectively collected set? The writing needs further clarification and would perhaps benefit from separation into subheadings that allow clarity in the methods.

Results

1) There are a few issues regarding the presentation of Table 1. The authors did not seem to state the techniques used to measure many of the continuous variables reported in the table. Further to Table 1, why were pseudoexfoliative glaucoma and secondary open angle glaucoma separated? The reporting of the distribution of Shaffer's angle width but Scheie's angle pigmentation grade represents to the reader unnecessary differentiation and muddling of the grading schemes - why not just stick to one? Further, it could be argued that the reporting of a mean and SD for these grades is questionable, due to their non-linearity and ordinal nature (see Phu et al 2020 OPO)... it may be more useful to break down the distribution of the exact numbers of each grade, especially since the Fleiss's kappa was used. Finally, the additive value of topical glaucoma medications is questionable: the paper is not really reporting on the contribution of these to angle grading and the hypothesis being tested is unlikely to be confounded by these factors. It would also be more informative to provide the proportion of poor image grades that were subsequently excluded from analysis in this table or in the results text. This would reinforce the authors' choice of a pragmatic approach and a more realistic impression of the deployment of GS-1.

2) With regard to reporting Table 2 and the kappa values, it is unclear to the reader why Fleiss's kappa was used for binary variables (PAS and Sampaolesi line - present/absent). This confusion stems from the lack of contingency tables being presented in the results and the confusing methods. The authors should seek to clarify the most appropriate statistical method used. Fleiss's kappa may be appropriate for agreement between the observers but from the writing it is hard to say what was compared.

3) The reporting of closed angle detection is somewhat unclear and strikes the reader as somewhat misleading and requires clarification in writing. Not all of the 140 quadrants were closed, and so the question is whether the kappa was related to the binary outcome of "open" vs. "closed", or the agreement between observers (as noted above).

4) Page 17, line 14 refers to observers 1-5 and glaucoma specialists 1-3, but this was somewhat unclear from the methods as well.

5) Table 3 (and similar): there was extreme variability in the kappa values across quadrants and across observers: could the authors comment on whether there was a systematic difference or random difference in the text? This is a very interesting result and the reader may be tempted to think it may be related to the distribution of angle grades, whereby some may be more obvious and easy to score (e.g. closed or very wide open angles) compared to the middling grades.

6) Overall, comments 3-5 above suggest that contingency tables may play a role in the reporting of the data.

Discussion

1) The second paragraph of the discussion is confusing, and the reader is unsure what the authors are trying to say. The tone makes it unclear whether 10% is a high number or a low number.

2) With regard to objectivity, one questions the value of precise grading given the wealth of data required to change the management plan. For example, the decision to treat would be based on a multitude of factors aside from the angle appearance, including accessibility to health care, age, lens status, epidemiological risk factors and others (see Thomas and Walland 2013 CEO).

3) Page 27, lines 6-11: with respect to the discussion on gonio photographs, there is a limitation previously noted in the literature that the authors seemed to have omitted: the fact that many of these are conducted in primary gaze, and lack the dynamism of manual gonioscopy, which is really the key advantage of the technique. This requires further discussion.

4) A general question with regard to the comparison with the EyeCam: the light source for imaging and photography should be mentioned or discussed at some point. There are significant efforts taken in standardising gonioscopy as a procedure to mitigate introduction of stray light through the pupil.

5) Another general comment with the discussion: there is far too much re-reporting of the results and this causes bloat in the discussion. The authors should revise this for succinctness.

6) The issue of angle grade distribution was mentioned on page 31, but the analysis seems to be lacking in terms of accounting for it as a confounding factor.

7) Another limitation of the grading system for image quality, which should also be noted in the references cited by the authors, is that the entirety of the image need not be clear for evaluation. This does not seem to be captured with this method and should be discussed.

8) The authors raise an interesting point about the learning curve and the value of experience. This was discussed by Phu and colleagues and may be worth integrating into this discussion (Phu et al 2019 OVS). What they may consider also acknowledging is the acquisition step. Though the processes described in the methods appear to be largely automated, there may be some component of interpreting whether the image is usable. A pragmatic approach as described by the authors would necessitate consideration of practical aspects such as the limitations and barriers to successful image acquisition and whether there are systematic errors in that process. A consecutive sampling strategy would be necessary as well as reporting of the errors associated with the technique. Finally, a calibration exercise would be suitable for this particular purpose.

Minor comments

- Page 12, line 9-10: please clarify that the participants are the ophthalmologists

- Page 13, line 22: nominal or ordinal?

- Table 1: Dioptor should be "Diopter"

- There is some inconsistency between "sectors" and "quadrants" used throughout the manuscript

- Page 25: there is some inconsistency in the way that the results are reported in the text. It is quite wordy and somewhat repetitive from the tables. The authors should note this trend throughout the manuscript and seek to clarify and write succinctly.

- Page 25, line 15: there should be a clearer distinction between an ophthalmologist (general?) and glaucoma specialist

- Why were there "NA" results in Supp Table 1? This was not clear.

- The units on Figures 1 and 2 are not clearly described in the Figure Captions. Although somewhat intuitive from the text, this may deserve further clarification.

References:

Ritch Am J Ophthalmol. 1998 Sep;126(3):442-5.

Phu et al Ophthalmic Physiol Opt. 2020 Sep;40(5):617-631.

Xu et al Transl Vis Sci Technol. 2019 Mar 26;8(2):5.

Prum Jr et al Ophthalmology. 2016 Jan;123(1):P1-P40.

Thomas and Walland Clin Exp Ophthalmol. 2013 Apr;41(3):282-92.

He et al Lancet. 2019 Apr 20;393(10181):1609-1618.

Phu et al Optom Vis Sci. 2019 Oct;96(10):751-760.

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 May 6;16(5):e0251249. doi: 10.1371/journal.pone.0251249.r002

Author response to Decision Letter 0


7 Jan 2021

Ref: PONE-D-20-24626

Intraobserver and interobserver agreement among anterior chamber angle evaluations using automated 360-degree gonio-photos

PLOS ONE

Dear Dr. Huang,

We appreciate your reconsideration of our manuscript entitled “Intraobserver and interobserver agreement among anterior chamber angle evaluations using automated 360-degree gonio-photos” for publication in PLoS One as a Full Paper.

Although we disagree with some of the reviewer's comments, we received constructive feedback. We have revised our manuscript and made a point-by-point response to the comments as follows. We have worked hard to incorporate your feedback and hope that these revisions persuade you to accept our submission.

We state that this manuscript has not been published elsewhere and is not under consideration by another journal. All authors have approved the manuscript and agree with submission to PLOS ONE. Thank you!

Best regards,

Masato Matsuo, MD, PhD

Department of Ophthalmology,

Shimane University Faculty of Medicine,

Enya 89-1, Izumo, Shimane, JAPAN.

mmpeaceful@yahoo.ne.jp

matsuondmc@gmail.com

Journal Requirements:

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

Response

We have revised our manuscript to meet PLOS ONE's style requirements.

2. Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please delete it from any other section.

Response

We have deleted our ethics statement from our title page.

Reviewer #1’s Comments:

Introduction:

1) The first two paragraphs appear to be needlessly long: the authors could consider reshaping this to identify two succinct points: 1) angle closure as a disease entity is important to identify as it has a different prognostic course compared to open angle glaucoma; 2) secondary open angle (and closed angle) glaucomas require examination of the anterior chamber angle to identify underlying causes that may alter the treatment plan. Paragraph two in particular is a bit confusing in its tone with regard to the usefulness and limitations of gonioscopy. Moreover, there are some aspects of gonioscopy that are glossed over, such as the fact that grading systems for treatment titration rely solely on the gonioscopic impression (e.g. Prum Jr et al 2016 Ophthalmology), and that major clinical trials use gonioscopy as the technique of choice (He et al 2019 Lancet). Gonioscopy lenses such as the G6 in theory offer a more uninterrupted view of the anterior chamber angle so the statement on page 6, line 10 is not quite true.

Response

Thank you for the comment; however, we disagree. According to the journal's submission guidelines, we tried to provide background that puts the manuscript into context and allows readers outside the field to understand the purpose and significance of the study. The readers are not limited to glaucoma specialists or ophthalmologists. Therefore, it is necessary to define the problem addressed and explain why it is important.

Regarding 1), we also think early detection of a narrow angle and peripheral anterior synechia (PAS) is vital to prevent primary angle-closure glaucoma (PACG), as described in paragraph 1. However, we cannot agree with your idea that angle closure is important to identify because it has a different prognostic course from open angle glaucoma, because we sometimes see cases of open-angle glaucoma (OAG) with end-stage visual field impairment at the first visit. If we could detect such patients early enough, we could surely prevent the visual field loss due to OAG. Therefore, early OAG detection and treatment are also clinically important.

As for 2), we agree with your comment that secondary glaucomas require examination of the anterior chamber angle to identify underlying causes that may alter the treatment plan. Thus, we have added the statement in the manuscript L18-20 (pg 4). We believe that the change makes our manuscript more contextual. On the other hand, regarding the other indications, we do not agree. We pointed out the limitation of subjective assessments with gonioscopy in the manuscript L12-13 (pg 5). Additionally, we also know that the major clinical trials use gonioscopy as the technique of choice; however, that fact has nothing to do with the reliability of the test. Moreover, “gonioscopy can examine at one time only a limited contiguous portion of the iridocorneal angle” means that, because of its manual nature, conventional gonioscopy lets us observe only a limited position of the angle at one time, even when using the G6. On the other hand, a gonioscopic camera can capture whole-angle images simultaneously. Therefore, we pointed out these characteristics and limitations of gonioscopy in the manuscript.

2) Aside from the pigmentation grade, the configuration and distribution of pigment is important clinically, such as in cases of burnt out pigment dispersion syndrome, mottled versus homogenous pigmentation - there are numerous citations for this, such as from Rob Ritch.

Response

We also recognize the importance of the configuration and distribution of pigment; however, it is impractical to evaluate the diagnostic reproducibility of all angle findings. Pigment dispersion syndrome is clinically rare, and its evaluation is outside our main purpose. Moreover, it is practically difficult to evaluate observer agreement for a rare finding with a small sample size. Therefore, we limited the evaluation items in our study.

Methods:

1) Why did the authors use Scheie classification in the first instance (page 8), followed by Shaffer (page 9)? Both are different and provide the opposite ordinal grades. This was unclear, but suggests that one was used by the clinician for the assessment and the other for the experimental design? Needs clarification as to the purpose of the grading system.

Response

Thank you for the comment. As you pointed out, Scheie's angle width grading system was used for the experimental design and Shaffer's grading system for the clinical assessment, as we described in the manuscript L16-23 (pg 11). Moreover, following your suggestion, we have added supplementary explanations for easy understanding in the manuscript L18 (pg 7), L14-15 (pg 8).

2) Page 10: the reader is slightly confused with the exclusion process... was the whole eye excluded or was it just one image per eye that was excluded - based on lines 14-16. How was the better quality image determined if both were equal? Were some angles systematically more likely to have better images? For example, if considering conventional imaging modalities, it is not surprising to find that superior angle assessment is more problematic compared to other quadrants due to anatomic lid interference (e.g. Xu et al 2019 TVST).

Response

Thank you for the comment. We paraphrased the word for clarity in the manuscript L17 (pg 9).

No two images of equal quality were encountered. The image quality assessment was done only to pre-exclude images for which angle evaluation was not possible, which is not the main purpose of our study. Therefore, further analysis and consideration would be redundant and would blur the purpose and results.

3) A limitation of the GS-1, like AS-OCT is that it is performed in primary gaze without lens tilt, despite many clinical trials adding the dynamic component of gonioscopy (e.g. He et al 2019 Lancet) and clinical guidelines recommending identification of iridotrabecular contact that is only possible with lens tilt or off-axis gaze (e.g. Prum Jr et al 2016 Ophthalmology). This requires a comment on page 11.

Response

Thank you for the comment. We also agree with you as we described in paragraph 4 in our discussion. Moreover, according to your suggestion, we have changed the statement to make it easier for readers to understand in the manuscript L8-9 (pg 10).

4) The methods require a bit more transparency in the writing. Pages 12-13 for example appear particularly disorganized. The reader is introduced to five independent observers firstly on page 12, but then on page 13 line 3 onwards, a presumably singular observer is used for the intraobserver evaluation. However, then the authors state that a "second set" was used to further evaluate intraobserver agreements presumably for the five independent observers on line 16? Another example is page 13 line 13 - are the different randomized images from the same set or another prospectively collected set? The writing needs further clarification and would perhaps benefit from separation into subheadings that allow clarity in the methods.

Response

Thank you for the comment. Following your advice, we have made the writing separated into subheadings and added the supplementary explanations in our manuscript. Please see page 11 to 12.

Results

1) There are a few issues regarding the presentation of Table 1. The authors did not seem to state the techniques used to measure many of the continuous variables reported in the table. Further to Table 1, why were pseudoexfoliative glaucoma and secondary open angle glaucoma separated? The reporting of the distribution of Shaffer's angle width but Scheie's angle pigmentation grade represents to the reader unnecessary differentiation and muddling of the grading schemes - why not just stick to one? Further, it could be argued that the reporting of a mean and SD for these grades is questionable, due to their non-linearity and ordinal nature (see Phu et al 2020 OPO)... it may be more useful to break down the distribution of the exact numbers of each grade, especially since the Fleiss's kappa was used. Finally, the additive value of topical glaucoma medications is questionable: the paper is not really reporting on the contribution of these to angle grading and the hypothesis being tested is unlikely to be confounded by these factors. It would also be more informative to provide the proportion of poor image grades that were subsequently excluded from analysis in this table or in the results text. This would reinforce the authors' choice of a pragmatic approach and a more realistic impression of the deployment of GS-1.

Response

Thank you for the comment. Table 1 presents only the demographics and clinical characteristics of the study subjects, and stating all the measurement techniques seems unnecessary and verbose. Regarding the second point, as described in our introduction, pseudoexfoliation glaucoma is considered the most common type of secondary glaucoma and can advance rapidly under continuously high IOP and be refractory to several therapeutic interventions. Therefore, it is clinically important, and we analyzed pseudoexfoliation glaucoma separately. As for the third point, please see the above response to Methods 1). As for the fourth point, we disagree. The mean and SD have been used in clinical gradings and were also used in some recent research on angle evaluations with gonio-photos (see Teixeira et al., Eur J Ophthalmol. 2018; Matsuo et al., Br J Ophthalmol. 2019). Breaking down the distribution of the exact numbers of each grade would only make the manuscript unnecessarily redundant and obscure the purpose and results of this study. As for the final point, we understand your meaning; however, we cannot agree. This study already contains more than enough information for a single paper, and any further additions would not only obscure the meaning unnecessarily but also lose sight of its original purpose. Again, Table 1 presents just the demographics and clinical characteristics of the study subjects. Please understand the main focus of our research.

2) With regard to reporting Table 2 and the kappa values, it is unclear to the reader why Fleiss's kappa was used for binary variables (PAS and Sampaolesi line - present/absent). This confusion stems from the lack of contingency tables being presented in the results and the confusing methods. The authors should seek to clarify the most appropriate statistical method used. Fleiss's kappa may be appropriate for agreement between the observers but from the writing it is hard to say what was compared.

Response

We are sorry to say that we disagree. Table 2 clearly shows the intraobserver reproducibility of angle evaluations between manual gonioscopy and the automated gonioscope by one observer (MT). Because we cannot compare results analyzed by different statistical methods, we calculated both the Fleiss' kappa coefficient and the Kendall rank correlation coefficient for the grading (nominal) and binary scales at the same time. Additionally, the Fleiss' kappa analysis in Table 2 is a confirmation of the previous report (see Teixeira et al., Eur J Ophthalmol. 2018).
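As an aside for readers, the distinction between the two statistics can be illustrated with a short, self-contained sketch in Python (using `statsmodels` and `scipy` purely for illustration; the study itself used R and JMP, and the ratings below are hypothetical, not study data):

```python
import numpy as np
from scipy.stats import kendalltau
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical example: 5 angle sectors each rated twice (e.g., manual
# gonioscopy vs. GS-1 photo) on a binary open (1) / closed (0) scale.
ratings = np.array([
    [1, 1],
    [1, 1],
    [0, 0],
    [0, 1],
    [1, 0],
])

# Fleiss' kappa treats the grades as unordered nominal categories and
# measures chance-corrected agreement across the repeated ratings.
table, _ = aggregate_raters(ratings)  # subjects x categories count table
kappa = fleiss_kappa(table)
print(round(kappa, 3))  # 0.167: "slight" agreement on the Landis-Koch scale

# Kendall's rank correlation instead treats gradings as ordered, so it
# rewards near-misses on an ordinal scale that kappa counts as disagreement.
tau, p = kendalltau(ratings[:, 0], ratings[:, 1])
print(round(tau, 3))
```

On a binary scale the two measures happen to coincide here; on the multi-level Scheie gradings they can diverge, which is presumably why both were reported.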

3) The reporting of closed angle detection is somewhat unclear and strikes the reader as somewhat misleading and requires clarification in writing. Not all of the 140 quadrants were closed, and so the question is whether the kappa was related to the binary outcome of "open" vs. "closed", or the agreement between observers (as noted above).

Response

Thank you for the query. As we described in the methods, we analyzed the intraobserver reproducibility for closed angle detection in each quadrant (n=140) and in each eye (n=35) comparing the outcomes with manual gonioscopy and those with automated gonioscope by one observer. Therefore, the kappa was related to the binary outcomes of "open" vs. "closed".

4) Page 17, line 14 refers to observers 1-5 and glaucoma specialists 1-3, but this was somewhat unclear from the methods as well.

Response

We described it in our manuscript. Please see L14-16 (pg 11).

5) Table 3 (and similar): there was extreme variability in the kappa values across quadrants and across observers: could the authors comment on whether there was a systematic difference or random difference in the text? This is a very interesting result and the reader may be tempted to think it may be related to the distribution of angle grades, whereby some may be more obvious and easy to score (e.g. closed or very wide open angles) compared to the middling grades.

Response

Thank you for the comment. As described in the manuscript, there was a tendency for the intraobserver agreement values of the glaucoma specialist with sufficient clinical experience using the GS-1 to be higher than those of the other ophthalmologists. On the other hand, the extreme variability in the kappa values across quadrants seems to have happened by chance. However, we cannot conclude this on clear evidence because of the small sample size. Moreover, these questions deviate from the original purpose; thus, we would like to leave them to future large-scale research.

6) Overall, comments 3-5 above suggest that contingency tables may play a role in the reporting of the data.

Response

We do not agree with you. Again, adding more data is redundant.

Discussion

1) The second paragraph of the discussion is confusing, and the reader is unsure what the authors are trying to say. The tone makes it unclear whether 10% is a high number or a low number.

Response

Thank you for the advice. We have revised our manuscript to make it easier to understand for readers in the second paragraph.

2) With regard to objectivity, one questions the value of precise grading given the wealth of data required to change the management plan. For example, the decision to treat would be based on a multitude of factors aside from the angle appearance, including accessibility to health care, age, lens status, epidemiological risk factors and others (see Thomas and Walland 2013 CEO).

Response

We disagree with you. All the angle parameters in our study can be objectively evaluated based on the angle findings, and it is worthwhile to examine the inter- and intraobserver agreements for angle evaluations with the novel device alone when considering its reliability. Accurate angle assessment and the clinical judgments based on it are separate issues, and how to apply them to each patient should be considered in a separate future study.

3) Page 27, lines 6-11: with respect to the discussion on gonio photographs, there is a limitation previously noted in the literature that the authors seemed to have omitted: the fact that many of these are conducted in primary gaze, and lack the dynamism of manual gonioscopy, which is really the key advantage of the technique. This requires further discussion.

Response

Thank you for the comment. We described it in the manuscript L22-23 (pg 27), L1-3 (pg 28). Moreover, we have added the sentence to make it easier to understand in the discussion, L5-7 (pg 26).

4) A general question with regard to the comparison with the EyeCam: the light source for imaging and photography should be mentioned or discussed at some point. There are significant efforts taken in standardizing gonioscopy as a procedure to mitigate introduction of stray light through the pupil.

Response

Thank you for the query. According to your suggestion, we have added the related information in the manuscript, L12-16 (pg 27).

5) Another general comment with the discussion: there is far too much re-reporting of the results and this causes bloat in the discussion. The authors should revise this for succinctness.

Response

Thank you for the advice. We have revised and shortened the relevant part in our discussion.

6) The issue of angle grade distribution was mentioned on page 31, but the analysis seems to be lacking in terms of accounting for it as a confounding factor.

Response

We do not agree with you. We assessed the effects of image quality on observer agreements for angle evaluations using gonioscopic photos from the GS-1. As a result, the observer agreements using grade 0 images were not always better than those using grade 1 images, probably because we completely excluded the grade 2 (blurred with no discernible details) images. Therefore, it is unlikely that the differences in image quality could be a confounding factor; such detailed examinations deviate from the original purpose and are redundant, and we would like to leave them to a future study.

7) Another limitation of the grading system for image quality, which should also be noted in the references cited by the authors, is that the entirety of the image need not be clear for evaluation. This does not seem to be captured with this method and should be discussed.

Response

We disagree with you. As in the previous reports, we assessed the image quality of the gonio-images and excluded the blurred images with no discernible details, which means all the unevaluable images were eliminated. You might be worried about the slightly vague but determinable images that were included in grade 1 by our definition, which we have already discussed in the above response to comment 6).

8) The authors raise an interesting point about the learning curve and the value of experience. This was discussed by Phu and colleagues and may be worth integrating into this discussion (Phu et al 2019 OVS). What they may consider also acknowledging is the acquisition step. Though the processes described in the methods appear to be largely automated, there may be some component of interpreting whether the image is usable. A pragmatic approach as described by the authors would necessitate consideration of practical aspects such as the limitations and barriers to successful image acquisition and whether there are systematic errors in that process. A consecutive sampling strategy would be necessary as well as reporting of the errors associated with the technique. Finally, a calibration exercise would be suitable for this particular purpose.

Response

We disagree with your comments because they are outside the main focus of our study. The primary purpose was to analyze the reproducibility of angle evaluations with the newly released gonioscopic camera, and this was the first study to investigate the extent to which glaucoma specialists and ophthalmologists made the same angle evaluations using the standardized gonio-images. As a result of the research, we could draw some important conclusions.

First of all, the reference study recommended by the reviewer (Phu et al) was quite different from ours in purpose, study design, and methods. It was conducted retrospectively on medical records of gonioscopy results among different practitioners, which must be full of bias, including the observer bias of knowing specific clinical information about the patients. Therefore, its result would be quite different from a pure angle evaluation, it is difficult to draw conclusions about the learning curve and the value of experience from it, and we would prefer not to cite it.

With regard to the second point, as described in the methods, we defined the exclusion criterion as whole eyes with poor-quality images, i.e., those that had at least one grade 2 image among the four ocular sectors, and finally 17 poor-quality eyes (23.9%) were excluded from angle evaluations. On the other hand, in image acquisition with the prototype GS-1, Teixeira et al reported that 22.7% of eyes were excluded from angle image grading because information was lacking for at least two quadrants, and Shi et al demonstrated that 8.33% of sections were not gradable owing to poor image quality. These studies' exclusion criteria, conditions, and versions of the GS-1 differed from ours, so we cannot compare them directly; however, our exclusion rate was quite close to Teixeira's. The reviewer seems to insist on further analysis of the limitations and barriers to successful image acquisition and of whether there are systematic errors in the GS-1. However, the study already contains more than enough information for a single manuscript, and the result is sufficient for our conclusions. A further detailed examination or analysis of the reviewer's point is outside our main purpose and redundant.

Regarding the final points, the reviewer seems to have misunderstood: the consecutive sampling and calibration exercise were in fact conducted successfully in the study. Because we performed consecutive sampling, we could demonstrate the successful image acquisition rate. On the other hand, eyes with poor-quality GS-1 images were excluded in the image selection process; thus, strictly speaking, the analysis did not cover all consecutive patients who underwent GS-1, and we refrained from describing it as such. Moreover, we have had several years of experience using the GS-1 since it was prototyped (Matsuo et al., Br J Ophthalmol. 2019), so the calibration in image acquisition should be sufficient.

Minor comments

- Page 12, line 9-10: please clarify that the participants are the ophthalmologists

Response

Thank you for the comment. We have paraphrased the word for clarity in the manuscript L11-12 (pg 11).

- Page 13, line 22: nominal or ordinal?

Response

Nominal is correct. The analysis was performed by regarding the grading scale as the nominal scale to compare the results with each other.

- Table 1: Dioptor should be "Diopter"

Response

Thank you for pointing out the typo. We have corrected the word in Table 1.

- There is some inconsistency between "sectors" and "quadrants" used throughout the manuscript

Response

According to your advice, we have unified the term to "sector" throughout the manuscript.

- Page 25: there is some inconsistency in the way that the results are reported in the text. It is quite wordy and somewhat repetitive from the tables. The authors should note this trend throughout the manuscript and seek to clarify and write succinctly.

Response

The suggestion is same as the reviewer’s comment in Discussion 5). According to your advice, we have shortened and simplified the relevant part.

- Page 25, line 15: there should be a clearer distinction between an ophthalmologist (general?) and glaucoma specialist

Response

Could you indicate which part you are referring to? We were unable to identify the relevant passage.

- Why were there "NA" results in Supp Table 1 - was not clear

Response

In our study, the Fleiss' kappa statistics were computed in R statistical software version 3.5.3, and the Kendall rank correlation coefficients were calculated with JMP Pro 14 software. Because we are not the developers of this software, we do not know the exact reason for the NA results. In any case, a discussion of this point for S1 Table is outside the main focus of the study.
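As background on how an NA can arise mathematically, Fleiss' kappa is undefined whenever the chance-agreement term equals 1 (for example, when every observer assigns every image to the same single category), since the formula then divides zero by zero. The following minimal Python sketch (an illustration only, not the R implementation used in the study) shows this behavior:

```python
import math

def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects-by-categories table of rating counts.

    counts[i][j] = number of raters assigning subject i to category j.
    Every subject must be rated by the same number of raters.
    Returns NaN when the chance agreement P_e equals 1 (kappa undefined).
    """
    N = len(counts)        # number of subjects
    n = sum(counts[0])     # raters per subject
    k = len(counts[0])     # number of categories

    # Mean per-subject observed agreement, P_bar
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N

    # Marginal category proportions p_j and chance agreement P_e
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)

    if P_e == 1.0:         # all ratings in a single category: 0/0
        return float("nan")
    return (P_bar - P_e) / (1 - P_e)
```

With perfect unanimity on one category (e.g., every observer grading every sector "absent"), the function returns NaN, which statistical packages typically print as NA. This is one plausible, though unverified, explanation for the NA entries in the supplementary table.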

- The units on Figures 1 and 2 are not clearly described in the Figure Captions. Although somewhat intuitive from the text, this may deserve further clarification.

Response

Thank you for your suggestion. We have added supplementary explanations to the figure legends.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Jinhai Huang

23 Feb 2021

PONE-D-20-24626R1

Intraobserver and interobserver agreement among anterior chamber angle evaluations using automated 360-degree gonio-photos

PLOS ONE

Dear Dr. Matsuo,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Apr 09 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Jinhai Huang, M.D.

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have done an admirable job in addressing several of the comments and have engaged in a scholarly discussion. There are a few points of disagreement, but these are mostly related to the interpretation of the literature. I have listed several comments where there remain some disagreements, and I have some suggestions for the authors to improve clarity.

1) Authors' comment: However, we cannot agree with your idea that angle closure is important to identify as it has a different prognostic course from open-angle glaucoma, because we sometimes encounter cases of open-angle glaucoma (OAG) with end-stage visual field impairment at the first visit. If we could detect such patients early enough, we could surely prevent visual field loss due to OAG. Therefore, early OAG detection and treatment are also clinically important.

Response: Whilst this is true (e.g. the work of Boodhna, Crabb and colleagues), the point of the paper does not appear to be related to POAG. The concept of making an introduction succinct is to engage with the reader in order to arrive at the core purpose of the study. I suggest that the authors use a new paragraph at page 4 line 20, beginning with the sentence "While" to disconnect POAG and PACG. On that point, whilst PLOS is a general science journal, the readership of a paper reporting on the outcomes of a highly specialised ophthalmic tool is most likely someone in the field.

Also, if the authors wish to describe the spectrum of angle closure disease at such length and comprehensiveness, it would be remiss of them to exclude the continuum between open and closed angles, as the prevention stage is touted by some to occur well before vision loss.

2) Authors' comment: We also acknowledge the importance of the configuration and distribution of pigment; however, it is impractical to evaluate the diagnostic reproducibility of all angle findings. Pigment dispersion syndrome is clinically rare, and its evaluation is outside our main purpose. Moreover, it is practically difficult to evaluate observer agreement for a rare finding with a small sample size. Therefore, we limited the evaluation items in our study.

Response: There appears to be no statement or change made describing this limitation in the study - or if there is, it has not been clearly noted by the authors.

3) Authors' comment: Breaking down the distribution of the exact numbers of each grade would only make the manuscript unnecessarily redundant and obscure the purpose and results of this study. As for the final point, we can understand your meaning, however we cannot agree with you. This study already contains more than enough information to make a single treatise, and any further additions will not only unnecessarily obscure the meaning but will also lose sight of its original purpose. Again, Table 1 was just the demographics and clinical characteristics of the study subjects. Please understand the main focus of our research.

Response: This sounds almost contradictory. On one hand, the authors are willing to retain information on topical glaucoma medications even though its value has not been described anywhere in the text, nor is it immediately obvious to the reader - even a glaucoma expert - what its contribution is, even though the authors state it is a "main focus of their research". On the other hand, the contribution of angle grades to repeatability indices has more obvious value, but the authors have not provided justification for why it has been excluded.

4) Authors' comment: We are sorry to say that we disagree. Table 2 clearly shows the intraobserver reproducibility of angle evaluations between manual gonioscopy and the automated gonioscope by one observer (MT). Because results analyzed by different statistical methods cannot be compared directly, we calculated both the Fleiss' kappa coefficient and the Kendall rank correlation coefficient for the grading (nominal) and binary scales. Additionally, the Fleiss' kappa analysis in Table 2 confirms the previous report (see Teixeira et al., Eur J Ophthalmol. 2018).

Response: My point relates to comment 3 above - if you had the contingency table available, it would provide a more useful interpretation of repeatability at different angle grades. Kappa values are a useful statistical tool for comparison with previous studies - this is true - however if the authors are aiming for a pragmatic interpretation of the results, then kappa values become more subjective.

5) Authors' comment: We described it in our manuscript. Please see L14-16 (pg 11).

Response: This is written in an ambiguous manner, as it can be interpreted as 5 + 3 examiners, when in fact there were 5 examiners, of whom 3 were glaucoma specialists. This should be rephrased for clarity throughout the manuscript.

6) Authors' comment: Response

Thank you for the comment. There was a tendency for the intraobserver agreement values of the glaucoma specialist with ample experience using the GS-1 in the clinic to be higher than those of the other ophthalmologists, as described in the manuscript. On the other hand, the extreme variability in the kappa values across quadrants seems to have occurred by chance. However, we cannot conclude this with clear evidence because of the small sample size. Moreover, these points deviate from the original purpose, so we would like to leave them to future large-scale research.

and

We do not agree with you. Again, adding more data is redundant.

Response: Unfortunately, I disagree with these points. It is not redundant because it plays an important role in data visualisation and potentially contributes to a less biased interpretation of the results.

7) Authors' comment: We disagree with you. All the angle parameters in our study can be objectively evaluated based on the angle findings, and it is worthwhile to examine accurate inter- and intraobserver agreement for angle evaluations with the novel device alone when considering its reliability. Accurate angle assessment and the clinical judgments based on it are separate issues, and how to apply them to each patient should be considered in a separate future study.

Response: Unfortunately, I also disagree with the conviction of the authors here and their statement that they are fully separate issues. It is important to remain skeptical of emphasising objective measures too much, and the authors have too quickly dismissed their contribution to a more holistic model of health care.

8) Authors' comment: Therefore, it is unlikely that the differences in image quality could be a confounding factor; such detailed examinations deviate from the original purpose, and we would like to leave them to a separate future study.

Response: I have noted that the authors appear to say a lot of things are redundant and deviate from the original purpose. I disagree with the notion that image quality - such an important component of any imaging based study - is redundant. As a reader, that is one of the first things one would question. The authors could consider at least reporting this in the supplementary material.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 May 6;16(5):e0251249. doi: 10.1371/journal.pone.0251249.r004

Author response to Decision Letter 1


6 Mar 2021

March 3, 2021

Jinhai Huang, M.D.

Academic Editor

PLOS ONE

Dear Dr. Huang,

Ref: PONE-D-20-24626

Intraobserver and interobserver agreement among anterior chamber angle evaluations using automated 360-degree gonio-photos

PLOS ONE

We appreciate your reconsideration of our manuscript entitled “Intraobserver and interobserver agreement among anterior chamber angle evaluations using automated 360-degree gonio-photos” for publication in PLOS ONE as a Full Paper.

We received constructive feedback. We have revised our manuscript and made a point-by-point response to the comments as follows. We have worked hard to incorporate your feedback and hope that these revisions persuade you to accept our submission.

We state that this manuscript has not been published elsewhere and is not under consideration by another journal. All authors have approved the manuscript and agree with submission to PLOS ONE. Thank you!

Best regards,

Masato Matsuo, MD, PhD

Department of Ophthalmology,

Shimane University Faculty of Medicine,

Enya 89-1, Izumo, Shimane, JAPAN.

mmpeaceful@yahoo.ne.jp

matsuondmc@gmail.com

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Response

We have checked our reference list to ensure that it is complete and correct.

Reviewer #1’s Comments:

1) Reviewer‘s comment:

The first two paragraphs appear to be needlessly long: the authors could consider reshaping this to identify two succinct points: 1) angle closure as a disease entity is important to identify as it has a different prognostic course compared to open angle glaucoma; 2) secondary open angle (and closed angle) glaucomas require examination of the anterior chamber angle to identify underlying causes that may alter the treatment plan. Paragraph two in particular is a bit confusing in its tone with regard to the usefulness and limitations of gonioscopy. Moreover, there are some aspects of gonioscopy that are glossed over, such as the fact that grading systems for treatment titration rely solely on the gonioscopic impression (e.g. Prum Jr et al 2016 Ophthalmology), and that major clinical trials use gonioscopy as the technique of choice (He et al 2019 Lancet). Gonioscopy lenses such as the G6 in theory offer a more uninterrupted view of the anterior chamber angle so the statement on page 6, line 10 is not quite true.

Authors' comment:

Thank you for the comment; however, we disagree. According to the journal's submission guidelines, we tried to provide background that puts the manuscript into context and allows readers outside the field to understand the purpose and significance of the study. The readership is not limited to glaucoma specialists or ophthalmologists. Therefore, it is necessary to define the problem addressed and explain why it is important.

Regarding 1), we also think early detection of a narrow angle and peripheral anterior synechia (PAS) is vital to preventing primary angle-closure glaucoma (PACG), as described in paragraph 1. However, we cannot agree with your idea that angle closure is important to identify as it has a different prognostic course from open-angle glaucoma, because we sometimes encounter cases of open-angle glaucoma (OAG) with end-stage visual field impairment at the first visit. If we could detect such patients early enough, we could surely prevent visual field loss due to OAG. Therefore, early OAG detection and treatment are also clinically important.

As for 2), we agree with your comment that secondary glaucomas require examination of the anterior chamber angle to identify underlying causes that may alter the treatment plan. Thus, we have added a statement in the manuscript (L18-20, pg 4). We believe this change makes our manuscript more contextual. On the other hand, regarding the other points, we do not agree. We pointed out the limitation of subjective assessments with gonioscopy in the manuscript (L12-13, pg 5). Additionally, we are aware that major clinical trials use gonioscopy as the technique of choice; however, that fact has nothing to do with the reliability of the test. Moreover, "gonioscopy can examine at one time only a limited contiguous portion of the iridocorneal angle" means that, owing to its manual nature, conventional gonioscopy allows us to observe only a limited portion of the angle at a time, even with a lens such as the G6. In contrast, the gonioscopic camera can capture images of the whole angle simultaneously. Therefore, we pointed out these characteristics and limitations of gonioscopy in the manuscript.

Reviewer’s comment:

Whilst this is true (e.g. the work of Boodhna, Crabb and colleagues), the point of the paper does not appear to be related to POAG. The concept of making an introduction succinct is to engage with the reader in order to arrive at the core purpose of the study. I suggest that the authors use a new paragraph at page 4 line 20, beginning with the sentence "While" to disconnect POAG and PACG. On that point, whilst PLOS is a general science journal, the readership of a paper reporting on the outcomes of a highly specialised ophthalmic tool is most likely someone in the field.

Also, if the authors wish to describe the spectrum of angle closure disease at such length and comprehensiveness, it would be remiss of them to exclude the continuum between open and closed angles, as the prevention stage is touted by some to occur well before vision loss.

Response

Thank you for the comment. POAG is the most common type of glaucoma, and the condition is associated with an open anterior chamber angle without other known explanations (i.e., secondary glaucoma) for progressive glaucomatous optic nerve change. Thus, gonioscopic angle assessment is also essential for its management, and our study is related to POAG. Nevertheless, according to your suggestion, we have added supplementary explanations and divided the text into paragraphs for the readers' ease of understanding (L9-12, pg 4; L1 and L4-10, pg 5).

2) Reviewer’s comment:

Aside from the pigmentation grade, the configuration and distribution of pigment is important clinically, such as in cases of burnt out pigment dispersion syndrome, mottled versus homogenous pigmentation - there are numerous citations for this, such as from Rob Ritch.

Authors' comment:

We also acknowledge the importance of the configuration and distribution of pigment; however, it is impractical to evaluate the diagnostic reproducibility of all angle findings. Pigment dispersion syndrome is clinically rare, and its evaluation is outside our main purpose. Moreover, it is practically difficult to evaluate observer agreement for a rare finding with a small sample size. Therefore, we limited the evaluation items in our study.

Reviewer’s comment:

There appears to be no statement or change made describing this limitation in the study - or if there is, it has not been clearly noted by the authors.

Response

Thank you for the comment. We described this limitation in the manuscript (L22-23, pg 35; L1, pg 36). Moreover, for better understanding, we have added supplementary statements in the manuscript (L2-3, pg 36).

3) Reviewer’s comment:

There are a few issues regarding the presentation of Table 1. The authors did not seem to state the techniques used to measure many of the continuous variables reported in the table. Further to Table 1, why were pseudoexfoliative glaucoma and secondary open angle glaucoma separated? Reporting the distribution of Shaffer's angle width alongside Scheie's angle pigmentation grade presents unnecessary differentiation and muddling of the grading schemes to the reader - why not just stick to one? Further, it could be argued that reporting a mean and SD for these grades is questionable, due to their non-linearity and ordinal nature (see Phu et al 2020 OPO)... it may be more useful to break down the distribution of the exact numbers of each grade, especially since the Fleiss's kappa was used. Finally, the additive value of topical glaucoma medications is questionable: the paper is not really reporting on the contribution of these to angle grading and the hypothesis being tested is unlikely to be confounded by these factors. It would also be more informative to provide the proportion of poor image grades that were subsequently excluded from analysis in this table or in the results text. This would reinforce the authors' choice of a pragmatic approach and a more realistic impression of the deployment of GS-1.

Authors' comment:

Thank you for the comment. Table 1 presents only the demographics and clinical characteristics of the study subjects, and stating all of the measurement techniques seems unnecessary and verbose. Regarding the second point, as we described in our introduction, pseudoexfoliation glaucoma is considered the most common type of secondary glaucoma; it can advance rapidly with continuously high IOP and be refractory to several therapeutic interventions. Therefore, it is clinically important, and we analyzed pseudoexfoliation glaucoma separately. As for the third point, please see the above response for Methods 1). As for the fourth point, we disagree with you. The mean and SD have been used in clinical gradings and were also used in some recent research on angle evaluations with gonio-photos (see Teixeira et al., Eur J Ophthalmol. 2018; Matsuo et al., Br J Ophthalmol. 2019). Breaking down the distribution of the exact numbers of each grade would only make the manuscript unnecessarily redundant and obscure the purpose and results of this study. As for the final point, we can understand your meaning; however, we cannot agree with you. This study already contains more than enough information to make a single treatise, and any further additions would not only unnecessarily obscure the meaning but also lose sight of its original purpose. Again, Table 1 presents just the demographics and clinical characteristics of the study subjects. Please understand the main focus of our research.

Reviewer's comment:

This sounds almost contradictory. On one hand, the authors are willing to retain information on topical glaucoma medications even though its value has not been described anywhere in the text, nor is it immediately obvious to the reader - even a glaucoma expert - what its contribution is, even though the authors state it is a "main focus of their research". On the other hand, the contribution of angle grades to repeatability indices has more obvious value, but the authors have not provided justification for why it has been excluded.

Response

Thank you for the comment. There seems to be a misunderstanding. As we described in our previous response, the information on topical glaucoma medications in Table 1 is only part of the demographics and clinical characteristics of the study subjects. We believe it is acceptable to keep it; however, we will remove it if it interferes with readers' understanding. Additionally, we found a typographical error in Table 1 and have corrected it (please see the underlined part). Moreover, according to your suggestion, we have added a contingency table demonstrating the distribution of angle grades in S1 Table.

4) Reviewers' comment:

With regard to reporting Table 2 and the kappa values, it is unclear to the reader why Fleiss's kappa was used for binary variables (PAS and Sampaolesi line - present/absent). This confusion stems from the lack of contingency tables being presented in the results and the confusing methods. The authors should seek to clarify the most appropriate statistical method used. Fleiss's kappa may be appropriate for agreement between the observers but from the writing it is hard to say what was compared.

Authors' comment:

We are sorry to say that we disagree. Table 2 clearly shows the intraobserver reproducibility of angle evaluations between manual gonioscopy and the automated gonioscope by one observer (MT). Because results analyzed by different statistical methods cannot be compared directly, we calculated both the Fleiss' kappa coefficient and the Kendall rank correlation coefficient for the grading (nominal) and binary scales. Additionally, the Fleiss' kappa analysis in Table 2 confirms the previous report (see Teixeira et al., Eur J Ophthalmol. 2018).
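As a note on the second statistic, the Kendall rank correlation for ordinal gradings with ties is commonly computed as the tau-b variant. The following minimal Python sketch (an assumed variant for illustration, not the JMP Pro implementation used in the study) shows the pair-counting logic:

```python
import math

def kendall_tau_b(x, y):
    """Kendall's tau-b between two equal-length rating sequences.

    Counts concordant (C) and discordant (D) pairs, correcting for ties
    in either sequence (Tx, Ty), as is appropriate for ordinal gradings.
    """
    assert len(x) == len(y)
    C = D = Tx = Ty = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0 and dy == 0:
                continue        # tied in both sequences: ignored
            elif dx == 0:
                Tx += 1         # tie in x only
            elif dy == 0:
                Ty += 1         # tie in y only
            elif dx * dy > 0:
                C += 1          # concordant pair
            else:
                D += 1          # discordant pair
    return (C - D) / math.sqrt((C + D + Tx) * (C + D + Ty))
```

Concordant and discordant pairs drive the numerator, while the tie counts in the denominator keep the coefficient within [-1, 1] even when many gradings share the same value, which is typical for coarse angle-grading scales.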

Reviewers' comment:

My point relates to comment 3 above - if you had the contingency table available, it would provide a more useful interpretation of repeatability at different angle grades. Kappa values are a useful statistical tool for comparison with previous studies - this is true - however if the authors are aiming for a pragmatic interpretation of the results, then kappa values become more subjective.

Response

Thank you for the suggestion. We have added the contingency tables in Supplemental material (S1 Table).

5) Reviewers' comment:

Page 17, line 14 refers to observers 1-5 and glaucoma specialists 1-3, but this was somewhat unclear from the methods as well.

Authors' comment:

We described it in our manuscript. Please see L14-16 (pg 11).

Reviewer’s comment:

This is written in an ambiguous manner, as it can be interpreted as 5 + 3 examiners, when in fact there were 5 examiners, of whom 3 were glaucoma specialists. This should be rephrased for clarity throughout the manuscript.

Response

Thank you for the suggestion. As you pointed out, we have rephrased the relevant parts and added supplementary explanations in the manuscript (L18-19, pg 7; L2, pg 20; L2, pg 21).

6) Reviewer’s comment:

Table 3 (and similar): there was extreme variability in the kappa values across quadrants and across observers: could the authors comment on whether there was a systematic difference or random difference in the text? This is a very interesting result and the reader may be tempted to think it may be related to the distribution of angle grades, whereby some may be more obvious and easy to score (e.g. closed or very wide open angles) compared to the middling grades.

and

Overall, comments 3-5 above suggest that contingency tables may play a role in the reporting of the data

Authors' comment:

Thank you for the comment. There was a tendency for the intraobserver agreement values of the glaucoma specialist with ample experience using the GS-1 in the clinic to be higher than those of the other ophthalmologists, as described in the manuscript. On the other hand, the extreme variability in the kappa values across quadrants seems to have occurred by chance. However, we cannot conclude this with clear evidence because of the small sample size. Moreover, these points deviate from the original purpose, so we would like to leave them to future large-scale research.

and

We do not agree with you. Again, adding more data is redundant.

Reviewer’s comment:

Unfortunately, I disagree with these points. It is not redundant because it plays an important role in data visualization and potentially contributes to a less biased interpretation of the results.

Response

Thank you for the suggestion. We have added the contingency tables in Supplemental materials (S2 Table and S3 Table).

7) Reviewer’s comment:

With regard to objectivity, one questions the value of precise grading given the wealth of data required to change the management plan. For example, the decision to treat would be based on a multitude of factors aside from the angle appearance, including accessibility to health care, age, lens status, epidemiological risk factors and others (see Thomas and Walland 2013 CEO).

Authors' comment:

We disagree with you. All the angle parameters in our study can be objectively evaluated based on the angle findings, and it is worthwhile to examine accurate inter- and intraobserver agreement for angle evaluations with the novel device alone when considering its reliability. Accurate angle assessment and the clinical judgments based on it are separate issues, and how to apply them to each patient should be considered in a separate future study.

Reviewer’s comment:

Unfortunately, I also disagree with the conviction of the authors here and their statement that they are fully separate issues. It is important to remain skeptical of emphasising objective measures too much, and the authors have too quickly dismissed their contribution to a more holistic model of health care.

Response:

Thank you for the suggestion. We have added the contingency tables in Supplemental materials (S2 Table and S3 Table).

8) Reviewer’s comment:

The issue of angle grade distribution was mentioned on page 31, but the analysis seems to be lacking in terms of accounting for it as a confounding factor.

Authors' comment:

We do not agree. We assessed the effects of image quality on observer agreement for angle evaluations using gonioscopic photos from the GS-1. As a result, the observer agreements using grade 0 images were not always better than those using grade 1 images, probably because we completely excluded grade 2 (blurred with no discernible details) images. It is therefore unlikely that differences in image quality were a confounding factor; such detailed examinations deviate from our original purpose and would be redundant, so we would prefer to leave them to a future study.

Reviewer’s comment:

I have noted that the authors appear to say a lot of things are redundant and deviate from the original purpose. I disagree with the notion that image quality - such an important component of any imaging based study - is redundant. As a reader, that is one of the first things one would question. The authors could consider at least reporting this in the supplementary material.

Response:

Thank you for the comment. Interpreting our results, we believe it is unlikely that differences in image quality were a confounding factor once grade 2 images were excluded. Moreover, as far as we know, there is no established statistical method for adjusting for confounding factors when estimating the Fleiss' kappa value.
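For readers unfamiliar with the statistic, Fleiss' kappa is computed directly from a subjects-by-categories matrix of rating counts, and a common pragmatic workaround for a suspected confounder such as image quality is simply to compute kappa separately within each stratum (e.g. per quality grade), as S4 Table effectively does. A minimal sketch, using hypothetical data rather than the study's:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects-by-categories matrix.

    counts[i, j] = number of raters assigning subject i to category j;
    every row must sum to the same number of raters n.
    """
    counts = np.asarray(counts, dtype=float)
    N = counts.shape[0]
    n = counts.sum(axis=1)[0]            # raters per subject (constant)
    p_j = counts.sum(axis=0) / (N * n)   # overall category proportions
    # Per-subject observed agreement among the n raters.
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar = P_i.mean()                   # mean observed agreement
    P_e = np.square(p_j).sum()           # chance-expected agreement
    return (P_bar - P_e) / (1 - P_e)

# Sanity check: perfect agreement yields kappa = 1.0.
perfect = [[3, 0], [0, 3]]               # 3 raters, 2 subjects, 2 categories
assert abs(fleiss_kappa(perfect) - 1.0) < 1e-9
```

Stratified use would then be `fleiss_kappa(counts_grade0)` and `fleiss_kappa(counts_grade1)` on the subsets of images rated grade 0 and grade 1, reported side by side rather than adjusted within a single model.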

Attachment

Submitted filename: Response to Reviewers 2nd revision.docx

Decision Letter 2

Jinhai Huang

23 Apr 2021

Intraobserver and interobserver agreement among anterior chamber angle evaluations using automated 360-degree gonio-photos

PONE-D-20-24626R2

Dear Dr. Matsuo,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Jinhai Huang, M.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Acceptance letter

Jinhai Huang

27 Apr 2021

PONE-D-20-24626R2

Intraobserver and interobserver agreement among anterior chamber angle evaluations using automated 360-degree gonio-photos

Dear Dr. Matsuo:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Jinhai Huang

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Fig. The radar charts of the distributions of iridocorneal angle evaluations with manual gonioscopy and automated gonioscope by the glaucoma specialist (MT) for visualizing the variabilities in (A) Scheie’s angle width grading, (B) Scheie’s angle pigmentation grading, (C) PAS detection, and (D) Sampaolesi line detection.

    (TIF)

    S1 Table. Comparison of manual gonioscopy and automated gonioscope in all angle gradings.

    (DOCX)

    S2 Table. Comparison of Scheie’s angle gradings by glaucoma specialists with automated gonioscope between first and second tests in all images.

    (DOCX)

    S3 Table. Comparison of Scheie’s angle gradings with automated gonioscope between a glaucoma specialist and the others in first test.

    (DOCX)

    S4 Table. Effect of image quality on observer agreement for angle evaluations using gonioscopic photos of GS-1.

    (DOCX)

    S1 Data. The first test of randomized 140 gonio-images.

    (PDF)

    S2 Data. The second test of different randomized 140 gonio-images.

    (PDF)

    Attachment

    Submitted filename: Response to Reviewers.docx

    Attachment

    Submitted filename: Response to Reviewers 2nd revision.docx

    Data Availability Statement

    All relevant data are within the manuscript and its Supporting information files.


