Author manuscript; available in PMC: 2022 Jun 1.
Published in final edited form as: Am J Ophthalmol. 2021 Feb 9;226:100–107. doi: 10.1016/j.ajo.2021.02.004

Glaucoma Expert-level Detection of Angle Closure in Goniophotographs with Convolutional Neural Networks: The Chinese American Eye Study

Michael Chiang a, Daniel Guth b, Anmol A Pardeshi a, Jasmeen Randhawa c, Alice Shen a, Meghan Shan a, Justin Dredge a, Annie Nguyen a, Kimberly Gokoffski a, Brandon J Wong a, Brian Song a, Shan Lin d, Rohit Varma e, Benjamin Y Xu a,*
PMCID: PMC8286291  NIHMSID: NIHMS1674317  PMID: 33577791

Abstract

Purpose:

To compare the performance of a novel convolutional neural network (CNN) classifier and human graders in detecting angle closure in EyeCam goniophotographs.

Design:

Retrospective cross-sectional study.

Methods:

Subjects from the Chinese American Eye Study (CHES) underwent EyeCam goniophotography in four angle quadrants. A CNN classifier based on the ResNet-50 architecture was trained to detect angle closure, defined as inability to visualize the pigmented trabecular meshwork, using reference labels by a single experienced glaucoma specialist. The performance of the CNN classifier was assessed using an independent test dataset and reference labels by the single glaucoma specialist or a panel of three glaucoma specialists. This performance was compared to that of nine human graders with a range of clinical experience. Outcome measures included area under the receiver operating characteristic curve (AUC) metrics and Cohen kappa coefficients in the binary classification of open or closed angle.

Results:

The CNN classifier was developed using 29,706 open and 2,929 closed angle images. The independent test dataset was comprised of 600 open and 400 closed angle images. The CNN classifier achieved excellent performance based on single-grader (AUC=0.969) and consensus (AUC=0.952) labels. The agreement between the CNN classifier and consensus labels (κ=0.746) surpassed that of all non-reference human graders (κ=0.578-0.702). Human grader agreement with consensus labels improved with clinical experience (p=0.03).

Conclusion:

A CNN classifier can effectively detect angle closure in goniophotographs with performance comparable to that of an experienced glaucoma specialist. This provides an automated method to support remote detection of patients at risk for primary angle closure glaucoma (PACG).

Keywords: Angle closure, primary angle closure glaucoma, artificial intelligence, goniophotography, gonioscopy

Graphical Abstract

Closure of the anterior chamber angle impairs outflow of aqueous humor through the trabecular meshwork, leading to elevated intraocular pressure and glaucomatous optic neuropathy. Goniophotography is one method for evaluating the angle and detecting eyes at risk for angle closure glaucoma. However, manual assessment of goniophotographs is time- and expertise-dependent. In this study, we develop an automated deep learning classifier that detects angle closure in EyeCam goniophotographs with performance comparable to that of an experienced glaucoma specialist.

Introduction

Closure of the anterior chamber angle is the primary risk factor for developing primary angle closure glaucoma (PACG), a leading cause of permanent vision loss worldwide.1 Angle closure leads to impaired outflow of aqueous humor and elevated intraocular pressure (IOP), a key risk factor for glaucomatous optic neuropathy.2 There are effective treatments for angle closure, such as laser peripheral iridotomy (LPI) or lens extraction, that can lower IOP and decrease the risk of developing PACG and glaucoma-related vision loss.3-5 However, angle closure must first be detected before eyecare providers can assess its severity and provide appropriate clinical care. The challenge of detecting patients at risk for PACG is magnified by the fact that most cases occur in parts of the world with relatively limited access to eyecare.1,6

Gonioscopy is the current clinical standard for evaluating the angle, detecting angle closure, and determining the clinical management of patients at risk for PACG. However, gonioscopy has several shortcomings that limit its utility for wide-spread detection of angle closure. Gonioscopy is a skill-dependent examination technique with limited inter-observer agreement, even among experienced glaucoma specialists.7 In addition, gonioscopy must be performed in person, and results of the examination cannot be viewed or verified remotely by other eyecare providers. Finally, records of gonioscopic examinations are descriptive, which makes it difficult to monitor for longitudinal progression of angle closure.

Goniophotography is an alternative method for evaluating the angle and detecting angle closure that has some benefits over gonioscopy. There are semi-automated goniophotography devices such as the EyeCam (Clarity Medical Systems, Pleasanton, CA) and Gonioscope GS-1 (Nidek Co., Gamagori, Japan) that can be operated by a trained technician. In addition, goniophotographs can be evaluated remotely by expert graders and compared to identify anatomical changes over time. Crucially, there is moderate to excellent agreement between gonioscopy and both manual goniophotography and EyeCam in the detection of angle closure.8-10 However, a current limitation of goniophotography is that images require manual interpretation by a human grader. This process can be labor- and time-intensive, and it is unclear to what extent clinical experience affects grader performance. In this study, we apply deep learning methods to population-based EyeCam data to develop an automated convolutional neural network (CNN) classifier that grades goniophotographs and detects angle closure. We then compare the performance of this CNN classifier to that of human graders with a range of clinical experience.

Methods

Subjects were recruited as part of the Chinese American Eye Study (CHES), a population-based, cross-sectional study that included 4,572 Chinese participants aged 50 years and older residing in the city of Monterey Park, California. Ethics committee approval was previously obtained from the University of Southern California Medical Center Institutional Review Board. All study procedures adhered to the recommendations of the Declaration of Helsinki. All study participants provided informed consent.

Inclusion criteria for the study included receipt of EyeCam goniophotography during CHES. Exclusion criteria included media opacities that precluded visualization of angle structures during EyeCam goniophotography. Subjects with history of prior LPI and/or eye surgery (e.g., cataract extraction, incisional glaucoma surgery) were not excluded as it is possible to have persistent angle closure despite treatment. Both eyes from a single subject could be recruited so long as they fulfilled the inclusion and exclusion criteria.

EyeCam Imaging and Image Grading

EyeCam imaging was performed by a single trained technician with the subject in the supine position under dark ambient lighting conditions (0.1 cd/m2). Topical anesthetic drops (Proparacaine hydrochloride 0.5%; Alcon Laboratories, Inc., Fort Worth, TX, USA) and a coupling gel were applied to the eye. Images were obtained from all four quadrants (inferior, superior, nasal, and temporal quadrants sequentially) of both eyes. Multiple images could be taken per quadrant if image quality was deemed to be unsatisfactory. Care was taken to avoid compressing or deforming the eye. If the view of the angle was blocked by a convex iris curvature, the technician was permitted to rotate the probe tip up to 10 degrees anteriorly along the cornea to better visualize angle structures.

EyeCam images were uploaded to a password-protected online data storage system. Images were originally graded between the years 2012 and 2013 by a single reference glaucoma specialist (S.L.) with 18 years of clinical ophthalmology experience (including years spent in residency). Each image was evaluated for angle grade and image quality. Angle grading was based on the visualization of anatomical landmarks in at least half (greater than 50%) of each quadrant: grade 0, no structures visualized; grade 1, non-pigmented trabecular meshwork (TM) visible; grade 2, pigmented TM visible; grade 3, scleral spur visible; grade 4, ciliary body visible. These angle grading categories matched the modified Shaffer classification system used to grade the eyes clinically on gonioscopy. Image quality was graded between 1 and 3, with grade 1 representing a clear image, grade 2 a slightly blurred image with distinguishable angle structures, and grade 3 a blurry image with indistinguishable angle structures. Images were not excluded from the analysis or CNN classifier development and testing based on image quality. The single reference glaucoma specialist (S.L.) also regraded 550 randomly selected images (66 grade 0, 116 grade 1, 98 grade 2, 140 grade 3, 130 grade 4) at least one month after they were originally graded in 2013.

A total of 450 randomly selected images were added to the 550 regraded images to create a balanced test dataset (200 of each grade) comprised of 1,000 images. This test dataset was graded in the year 2020 by 8 other graders with a range of clinical ophthalmology experience (including years spent in residency). Two graders (B.S., B.W.) were fellowship-trained glaucoma specialists with 11 and 5 years of clinical experience. Two graders were non-glaucoma specialists (K.G., A.N.) with 8 and 5 years of clinical experience. The remaining graders included a glaucoma fellow with 4 years of experience (J.D.), two residents with 3 years and 1 year of experience (M.S., A.S.), and a 1st year medical student (J.R.). The medical student was provided with a half-hour lecture on angle anatomy due to having no prior experience examining the angle or grading goniophotographs. In addition, all graders were provided with a reference dataset comprised of 50 images per angle grade (grades 0 to 4) randomly selected from the training dataset. Each image was labelled with the angle grade provided by the reference glaucoma specialist, the same labels used to train the CNN classifier. The graders were instructed to review as many of the reference images as needed to feel comfortable with the task prior to grading the test dataset. They were also instructed to provide angle grades (0 to 4) consistent with the grading scheme used by the reference glaucoma specialist and CNN classifier. These grades were later binarized to assess the performance of each grader. The graders were not informed about the distribution of angle grades in the test dataset.

Convolutional Neural Network Training

All images were reoriented so that the cornea/sclera was on the left and the iris was on the right (Figure 1). Because images were grouped by subject, excess images from subjects in the test dataset were discarded. An unbalanced training dataset was generated using all images from the remaining subjects. There was no overlap of subjects between the training and test datasets in order to prevent data leakage (i.e., inter- and intra-eye correlations). Data manipulations were performed in the Python programming language.

Figure 1:

Representative EyeCam images of a closed (top) and open (bottom) angle.

A CNN classifier was developed to classify EyeCam images as either open angle (grades 2 to 4) or closed angle (grades 0 and 1). For a given image, the CNN produced a normalized probability distribution over Shaffer grades p = [p0, p1, p2, p3, p4]. Binary probabilities for closed angle (grades 0 and 1) and open angle (grades 2 to 4) were generated by summing probabilities over the corresponding grades (i.e., Pclosed = p0 + p1 and Popen = p2 + p3 + p4). A closed angle prediction was considered a positive detection event.
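As a minimal sketch (illustrative Python, not the authors' code), the grade-to-binary mapping can be expressed by summing the probability distribution over the corresponding Shaffer grades:

```python
import numpy as np

def binarize_grade_probabilities(p):
    """Collapse a 5-grade probability vector [p0, p1, p2, p3, p4] into
    P(closed) = p0 + p1 and P(open) = p2 + p3 + p4."""
    p = np.asarray(p, dtype=float)
    p_closed = p[0] + p[1]          # grades 0-1: pigmented TM not visible
    p_open = p[2] + p[3] + p[4]     # grades 2-4: pigmented TM or deeper structures visible
    return p_closed, p_open

# Example: a CNN output favoring grade 1 yields a positive (closed-angle) detection.
p_closed, p_open = binarize_grade_probabilities([0.30, 0.45, 0.15, 0.07, 0.03])
```

Because the five grade probabilities sum to one, the two binary probabilities do as well, so no renormalization is needed.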

Images were preprocessed to 224 × 224 pixels in order to reduce hardware demands during classifier training. RGB channels were normalized to a mean of [0.485, 0.456, 0.406] and a standard deviation of [0.229, 0.224, 0.225]. During training, images were augmented through random rotation between 0 and 20 degrees, random vertical flips, random horizontal flips, random zoom between 100% and 110%, and random perturbations to balance and contrast. Differences in class distributions in the unbalanced training dataset were addressed by oversampling the under-represented classes.

The CNN was based on the ResNet-50 architecture pretrained on the ImageNet Challenge dataset.11 The average pooling layer was replaced by an adaptive pooling layer where bin size is proportional to input image size; this enables the CNN to be applied to input images of arbitrary sizes.12 This feature was not used during training due to limited video random access memory (VRAM). The final fully connected layer of the ResNet-50 architecture was changed to have 5 nodes. Softmax regression was used to calculate the multinomial probability of the 5 grades with a cross-entropy loss used during training.

A cyclical learning rate was set using the “1cycle learning rate policy”.13 The final layer of the CNN was trained first, prior to fine-tuning all layers via back-propagation. Test-time augmentation was performed by applying the same augmentations at test time and averaging predictions over augmentation variants.
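The 1cycle policy is available in PyTorch as `OneCycleLR`; the following minimal sketch shows its characteristic warm-up-then-anneal shape (the `max_lr`, step count, and stand-in model are illustrative, not the paper's values):

```python
import torch

model = torch.nn.Linear(10, 5)  # stand-in for the ResNet-50 classifier
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# "1cycle": the learning rate ramps up to max_lr and then anneals far below
# its starting value over the course of training (Smith, 2018).
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-3, total_steps=100)

lrs = []
for _ in range(100):
    optimizer.step()      # parameter update would happen here
    scheduler.step()      # advance the cyclical schedule
    lrs.append(optimizer.param_groups[0]["lr"])
# lrs rises toward 1e-3 early in the cycle, then decays to a tiny final value.
```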

Convolutional Neural Network Testing

Reference labels in the test dataset were determined in two ways: (1) angle status (open or closed) provided by the single reference glaucoma specialist (S.L.), or (2) angle status (open or closed) determined by the consensus of three glaucoma specialists (S.L., B.S., B.W.), defined as the majority opinion of the three graders.

The performance of the CNN classifier and human graders in the binary classification of closed (grades 0 and 1) or open (grades 2 to 4) angle was compared by plotting the receiver operating characteristic (ROC) curve of the CNN classifier alongside the false positive rates (FPRs) and true positive rates (TPRs) of human predictions. In order to evaluate variability in CNN performance, ROC curves corresponding to the lower and upper bounds of the 95% confidence interval of AUC values were generated by bootstrapping. The predictive accuracy of the CNN classifier was calculated for each angle grade class (grades 0 to 4) based on single-grader labels. Accuracy was defined as (true positives + true negatives) / all cases.
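The bootstrapped AUC confidence interval can be sketched as a generic percentile bootstrap (illustrative code on synthetic data; the paper does not specify its resampling details, so the number of resamples and the handling of single-class resamples here are assumptions):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def bootstrap_auc_ci(y_true, y_score, n_boot=1000, alpha=0.05):
    """Percentile-bootstrap 95% CI for AUC, resampling cases with replacement."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:  # AUC needs both classes present
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Synthetic example: scores that separate the two classes fairly well.
y = rng.integers(0, 2, 500)
s = y * 1.0 + rng.normal(0, 0.7, 500)
lo, hi = bootstrap_auc_ci(y, s)
```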

The 550 images regraded by the single reference glaucoma specialist (S.L.) were used to assess intra-grader repeatability of angle grades. This regraded dataset was upsampled to generate a balanced dataset (i.e. equal numbers of grades 0, 1, 2, 3, and 4) of 1,000 images, similar to the test dataset graded by the other human graders.

Class activation maps were generated using gradient-weighted class activation mapping to visualize what image features were important to CNN function.14
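Gradient-weighted class activation mapping can be sketched on a toy network (illustrative only; the study applied it to the trained ResNet-50, typically hooking the last convolutional block):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCNN(nn.Module):
    """Tiny stand-in for the ResNet-50; keeps its last feature maps for Grad-CAM."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(8, 5)

    def forward(self, x):
        self.feat = self.conv(x)     # feature maps of the "last conv layer"
        self.feat.retain_grad()      # keep gradients for a non-leaf tensor
        return self.fc(self.pool(F.relu(self.feat)).flatten(1))

model = ToyCNN()
x = torch.randn(1, 3, 32, 32)
logits = model(x)
logits[0, 0].backward()              # gradient of the "grade 0" score

# Grad-CAM: weight each channel by its spatially averaged gradient,
# sum the weighted maps, and keep only positive evidence (ReLU).
weights = model.feat.grad.mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * model.feat).sum(dim=1)).squeeze(0)  # 32 x 32 heatmap
```

In practice the heatmap is upsampled to the input resolution and overlaid on the goniophotograph, which is how the salient sclera-iris junction regions in Figure 4 were visualized.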

Statistical analysis

All statistical analysis was performed using Version 14.2 of the Stata® statistical software package (StataCorp LLC, College Station, TX). Analyses were conducted using a significance level of 0.05.

Continuous variables were described by calculating means, standard deviations (SDs), and ranges. Categorical variables were described by calculating counts and percentages. Cohen’s kappa coefficients were calculated to assess the agreement between the consensus labels and CNN classifier or human graders in the binary classification of open or closed angle.
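Cohen's kappa for the binary agreement described above can be computed as follows (the gradings shown are hypothetical, purely to illustrate the calculation):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary gradings (1 = closed, 0 = open) for ten images.
consensus = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1]
grader    = [1, 1, 0, 0, 0, 0, 1, 1, 0, 1]

# Observed agreement po = 8/10 = 0.8; chance agreement pe = 0.5,
# so kappa = (0.8 - 0.5) / (1 - 0.5) = 0.6.
kappa = cohen_kappa_score(consensus, grader)
```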

Linear regression was performed to detect linear trends in the relationship between years of clinical experience and FPRs or TPRs of human graders in the detection of angle closure based on single-grader and consensus labels. The same method was applied to detect linear trends in the relationship between kappa coefficients and years of clinical experience based on consensus labels. Grades by all human graders, excluding the reference grades used to train the CNN classifier, were included in the linear trend analyses for single-grader labels. Grades by the three glaucoma specialists used to derive the consensus labels were excluded from the linear trend analyses for consensus labels. However, the second set of grades provided by the reference glaucoma specialist for the test dataset were included in the analyses.

Results

4,152 of 4,582 (90.6%) total CHES subjects received EyeCam imaging. The mean age of subjects included in the study was 61.5 ± 8.8 years (range 55-99). 1,523 (36.7%) subjects were male and 2,629 (63.3%) were female.

The total dataset was comprised of 33,635 EyeCam images (935 grade 0; 2,394 grade 1; 6,504 grade 2; 14,575 grade 3; 9,227 grade 4) graded by the reference glaucoma specialist. The training dataset was comprised of 32,635 images (735 grade 0; 2,194 grade 1; 6,304 grade 2; 14,375 grade 3; 9,027 grade 4) from 3,999 subjects. The independent test dataset was comprised of 1,000 images (200 of each grade) from 153 subjects.

Deep Learning Classifier and Human Performance in Detecting Angle Closure

The CNN classifier achieved an AUC of 0.969 (95% confidence interval 0.961-0.976) in detecting angle closure based on single-grader labels (Figure 2). Human graders demonstrated a range of performance in the same task, with a significant trend toward increased TPR (range = 0.701-0.973; p = 0.01) but not FPR (range = 0.042-0.219; p = 0.31) with increased clinical experience (Supplementary Table 1).

Figure 2:

ROC curve with 95% confidence interval (grey bar) of CNN classifier performance in detecting angle closure in the test dataset based on labels by the reference glaucoma specialist. Performance of human graders shown with years of clinical experience in parentheses.

The kappa coefficient for the agreement between the CNN classifier and the single-grader labels was 0.823 (Supplementary Table 1). This exceeded that of the reference glaucoma specialist, who achieved a kappa coefficient of 0.754 when regrading the test dataset. The remaining graders had kappa coefficients ranging from 0.580 to 0.722 (median = 0.655). There was no association between agreement with the single-grader labels and clinical experience (p = 0.616).

The predictive accuracy of the CNN classifier in detecting angle closure based on single-grader labels was 97.5% among images with grader-assigned angle grade 0, 90.0% for grade 1, 65.5% for grade 2, 99.0% for grade 3, and 100.0% for grade 4 (Supplementary Figure 1). 79 of 96 (82.3%) misclassifications (open angle predicted as angle closure or vice versa) occurred in images corresponding to grader-assigned angle grade 1 or 2 (Supplementary Figure 2).

The CNN classifier achieved an AUC of 0.952 (95% confidence interval, 0.942-0.960) in detecting angle closure based on consensus labels (Figure 3). Human graders again demonstrated a range of performance in the same task, with a significant trend toward increased TPR (range = 0.630-0.893; p = 0.03) but not FPR (range = 0.023-0.126; p = 0.18) with increased clinical experience (Supplementary Table 2).

Figure 3:

ROC curve with 95% confidence interval (grey bar) of CNN classifier performance in detecting angle closure in the test dataset based on labels by the panel of glaucoma specialists. Performance of human graders shown with years of clinical experience in parentheses.

The kappa coefficient for the agreement between the CNN classifier and the consensus labels was 0.746 (Supplementary Table 2). This was similar to the reference glaucoma specialist, who achieved a kappa coefficient of 0.773 when regrading the test dataset. The remaining graders had kappa coefficients ranging from 0.578 to 0.702 (median = 0.656). There was a significant trend toward improved agreement with the consensus labels with increased clinical experience (p = 0.03).

The performance of the CNN classifier was better for images with quality grade 1 (AUC = 0.981, N = 495) compared to images with quality grades 2 (AUC = 0.908, N = 484) and 3 (AUC = 0.923, N = 21).

Class activation maps indicated that the CNN classifier focused primarily on the sclera-iris junction to detect angle closure (Figure 4). The central portion of images appeared to be more salient than peripheral portions.

Figure 4:

Representative class activation maps of the final layer of the CNN indicating the most salient (red and yellow) regions of the images. Representative images of open (top) and closed (bottom) angles.

Discussion

In this study, we compared the performance of a novel CNN classifier and human graders in detecting angle closure in EyeCam goniophotographs. The CNN classifier based on the ResNet-50 architecture achieved excellent performance in detecting angle closure based on both single-grader and consensus labels. Classifier agreement with the consensus labels also surpassed that of non-reference human graders, which tended to improve with increased clinical experience. Class activation maps revealed that the CNN classifier demonstrates human-like behavior by focusing on portions of images that contain salient anatomical features. We believe these findings have important implications for automating clinical evaluations of the anterior chamber angle, remote care of angle closure patients, and reducing barriers to eyecare in populations at high risk for PACG.

The performance of our CNN classifier exceeds the intra-grader repeatability of an experienced glaucoma specialist and the diagnostic ability of human graders with less clinical experience. We first used reference labels provided by a single experienced glaucoma specialist with nearly two decades of clinical experience to train and test the CNN classifier. Its performance (κ=0.823, all quadrants) exceeded that of a previously reported classification algorithm for detecting angle closure in EyeCam images developed using traditional image analysis techniques (κ=0.50-0.73, individual quadrants).15 The CNN classifier also demonstrated superior agreement with single-grader labels compared to all human graders, surpassing even the reference glaucoma specialist regrading the test dataset. However, this comparison may have been biased by the fact that the CNN classifier was developed to simulate the grading behavior of the reference glaucoma specialist. Therefore, we used consensus labels provided by a panel of three glaucoma specialists to assess the generalizability of the CNN classifier in a scenario where the reference glaucoma specialist was not the sole determinant of angle status. In this case, the agreement of the CNN classifier with the consensus labels remained superior to that of human graders who did not contribute to the consensus and was comparable to the reference glaucoma specialist regrading the test dataset. This suggests that the superior performance by the CNN classifier was unrelated to an inherent bias of data labels and comparisons.

Aside from approximating human intra-grader repeatability, the CNN demonstrates other human-like behaviors. First, the accuracy of the CNN classifier is worst for angle grade 2 images, which intuitively should be associated with the highest uncertainty by human graders due to challenges of identifying pigmented TM when directly adjacent to the iris. The majority of misclassifications occurred when the reference grade was 2 and the predicted grade was 1 or vice versa. Second, its performance is reduced by decreased image quality. While it is surprising that classifier performance was similar for image quality grade 2 and 3 images, this may be related to the low number of poor-quality images (N = 21). Finally, class activation maps indicate that the CNN classifier focuses on the central sclera-iris junction, which simulates the general strategy employed by human graders. Further analysis of these maps may provide insight into the strategies the CNN, and indirectly the reference glaucoma specialist, employed to make predictions in best- and worst-case images.

Clinical experience appears to play an important role in the performance of human graders in detecting angle closure in goniophotographs. Among non-reference human graders, level of agreement with consensus labels provided by the panel of glaucoma specialists was significantly correlated with clinical experience. Interestingly, grader sensitivity appeared to improve with clinical experience while specificity did not, despite varying among graders. These results suggest that there is a tangible benefit to having an experienced clinician evaluate goniophotographs to detect angle closure. However, if this is not feasible, a CNN classifier trained on high-quality reference labels by an experienced glaucoma specialist can provide comparable performance.

Agreement between gonioscopy, the clinical standard for detecting angle closure, and manually graded EyeCam images ranges from moderate to excellent in the detection of angle closure.8-10 This agreement was previously reported to be moderate to excellent depending on the quadrant (κ=0.52-0.60) in the same CHES dataset used by our current study.10 These metrics are slightly lower than those reported by a smaller hospital-based study in Singapore, in which agreement was excellent in all quadrants (κ=0.73-0.88).15 Disagreement between gonioscopy and EyeCam may arise due to differences in optics between the goniolens and EyeCam camera or in body position during examination.16,17 While a detailed analysis of agreement between gonioscopy and EyeCam fell outside the scope of our current study, we believe further investigation is necessary to determine the utility of our CNN classifier in detecting gonioscopic angle closure.

The burden of PACG on patients and healthcare systems is expected to increase over the next two decades due to aging of the world's population and rising demands for healthcare resources. Two solutions that have been proposed in other fields of medicine are telemedicine and artificial intelligence (AI)-assisted care.18,19 Remote screening of patients to detect patients with or at high risk for PACG will be crucial in reducing the significant ocular morbidity associated with the disease; PACG remains a common cause of both unilateral and bilateral permanent blindness worldwide even though treatments with laser and surgery are highly effective if the disease is detected early in its course.3-5,19 AI-assisted care of angle closure patients may lead to better disease detection than by less experienced eyecare providers alone, as exemplified by the CNN classifier developed in this study. While automated methods to detect angle closure exist for anterior segment optical coherence tomography (AS-OCT), automated goniophotography provides an alternative in the detection of patients with angle closure who would benefit from a complete ocular examination by a trained eyecare provider to rule out PACG.20,21

Our study has some limitations. First, grading of EyeCam goniophotographs in CHES was performed by a single experienced glaucoma expert. While our analyses indicate that classifier performance is excellent even when assessed using reference labels by a panel of glaucoma specialists, it is possible that a classifier trained using consensus labels would produce more generalizable performance. Second, all subjects in CHES were self-identified as Chinese-American, which could limit classifier generalizability to other demographic groups.22,23 Finally, all images were acquired using a single goniophotography device. EyeCam and manual goniophotography demonstrate similar performance in detecting gonioscopic angle closure, but there is only fair agreement between the two.9 Therefore, the CNN classifier may not generalize to other forms of goniophotography. In addition, the EyeCam as a goniophotography device has its own limitations, such as the need to image patients in the supine position, that may limit its agreement with gonioscopy and convenience as a screening tool.22,23 Dynamic indentation of the globe to widen the angle and identify peripheral anterior synechiae (PAS) is also challenging due to the shape and size of the EyeCam imaging probe. Moreover, EyeCam takes as long as gonioscopy per quadrant, if not longer, even when performed by a trained technician. Therefore, generalizability studies using faster and less operator-dependent modern goniophotography devices, such as the Gonioscope GS-1, are warranted.

In this study, we developed an automated CNN classifier for detecting angle closure in EyeCam goniophotographs. The recent landmark Zhongshan Angle Closure Prevention (ZAP) Trial has raised questions about the benefit of treating patients with early angle closure, and further work is needed to identify patients with early angle closure who are at high risk of elevated IOP and glaucomatous optic neuropathy.3,24 However, it does not diminish the urgent need to develop automated methods to detect patients with angle closure who may benefit from a referral to a glaucoma specialist to assess for these clinical features. We hope this study prompts continued development of automated clinical methods that improve and modernize the detection and management of patients at risk for PACG.

Supplementary Material


Acknowledgements

This work was supported by grants U10 EY017337, K23 EY029763, and P30 EY029220 from the National Eye Institute, National Institute of Health, Bethesda, Maryland; a Young Clinician Scientist Research Award from the American Glaucoma Society, San Francisco, CA; a Grant-in-Aid Research Award from Fight for Sight, New York, NY; a Clinical and Community Research Award from the Southern California Clinical and Translational Science Institute, Los Angeles, CA; and an unrestricted grant to the Department of Ophthalmology from Research to Prevent Blindness, New York, NY.

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Disclosures

M.C., D.G., A.A.P., J.R., A.S., M.S., J.D., A.N., K.G., B.W., B.S., S.L., B.X. have no financial disclosures. R.V. is a consultant for Allegro Inc., Allergan, and Bausch Health Companies Inc.
