Abstract
Purpose
To develop an artificial intelligence algorithm to automatically identify the anterior segment structures and assess multiple parameters of primary angle closure disease (PACD) in ultrasound biomicroscopy (UBM) images.
Design
Development and validation of an artificial intelligence algorithm for UBM images.
Methods
2339 UBM images from 592 subjects were collected for algorithm development. A deep learning-based multitissue segmentation model was developed for automatic identification of the anterior segment structures and localisation of the scleral spur. Typical angle parameters were then measured from the predicted results, including angle-opening distance at 500 µm (AOD 500), trabecular–ciliary angle (TCA) and iris area. A further 222 UBM images from 45 subjects at two centres were collected for model validation.
Results
The multitissue identification model established in this study reached a mean Intersection over Union (IoU) of 0.98, 0.98 and 0.98 for cornea, iris and ciliary body segmentation and a mean error distance of 1.07 pixels for scleral spur localisation. On open-angle images, the model achieved mean IoUs of 0.98, 0.98 and 0.99 for cornea, iris and ciliary body segmentation and a mean error distance of 0.49 pixels for scleral spur localisation; on angle-closure images, the corresponding results were 0.98, 0.98, 0.98 and 1.42 pixels. The mean differences between automatic and manual measurements of the angle parameters were 3.07 μm for AOD, 3.34 degrees for TCA and 0.05 mm² for iris area.
Conclusions
The automatic multitissue identification method developed for PACD eyes was feasible, and the automatic measurement of angle parameters was reliable.
Keywords: Glaucoma
WHAT IS ALREADY KNOWN ON THIS TOPIC
WHAT THIS STUDY ADDS
This study developed a deep learning-based model to automatically perform cornea, iris and ciliary body segmentation and scleral spur detection, and to measure angle parameters from UBM images of primary angle closure disease eyes.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
By automatically identifying anterior segment structures and measuring angle parameters, this deep learning system has potential to aid disease diagnosis, treatment decisions and post-treatment monitoring of angle parameter trends.
Introduction
Glaucoma is the leading cause of irreversible blindness worldwide.1 Primary glaucoma can be classified into two broad categories: primary open-angle glaucoma (POAG) and primary angle-closure glaucoma (PACG). Compared with the rates of blindness in POAG, PACG on average carries a greater risk of severe, bilateral visual impairment,2 and Asians exhibit the highest prevalence of PACG.1 PACG is caused by disorders of the iris, the lens and retrolenticular structures.3 There are different degrees and types of angle closure, which range from irido-trabecular contact (ITC) with normal intraocular pressure (IOP) to total peripheral anterior synechiae (PAS) with elevated IOP and glaucomatous changes. Therefore, primary angle-closure disease (PACD) can be categorised into primary angle-closure suspect (PACS), primary angle closure (PAC) and PACG.4
The early detection of anterior segment abnormalities is an efficient way to prevent permanent vision loss caused by PACG. Many techniques have been used to evaluate the anterior segment structure of the eye, including slit-lamp microscopy, gonioscopy, anterior segment optical coherence tomography (AS-OCT) and ultrasound biomicroscopy (UBM).5–7 UBM allows the acquisition of real-time images of the angle, with a resolution between 25 μm and 50 μm.3 Compared with AS-OCT, UBM can visualise posteriorly located structures such as the ciliary body, lens zonules and the anterior choroid, making it useful for identifying specific causes of angle closure. UBM parameters can provide detailed information related to the different subtypes of PACD.8
Recent advances in artificial intelligence have sparked interest in developing deep learning models for medical images, making automated assessment possible. Novel approaches show promise for accurate clinical assistance in glaucoma, including early-onset diagnosis with optical coherence tomography (OCT),9 glaucomatous optic neuropathy detection with fundus photographs10 and angle-closure assessment with AS-OCT.11 Meanwhile, the application of deep learning to UBM remains an emerging area. Wang et al developed a convolutional neural network for localisation of the scleral spur in UBM images of open-angle eyes.12 They also realised automated segmentation of the whole anterior chamber and quantitative measurement of angle parameters.13 Minhaz et al developed a UBM image processing system to segment ciliary tissues and quantitatively analyse cyclophotocoagulation treatments.14
Pathologically, PACD arises from primary anatomic variations in the size, position and relationship of the anterior segment structures (cornea, iris, ciliary body, lens),15 such as a large ciliary body and anterior iris insertion, which increase the likelihood of angle closure.16 However, because of the complexity of dealing with fuzzy boundaries, existing UBM image processing systems13 segment all the tissues as a single body. Multitissue segmentation covering the cornea, iris and ciliary body has rarely been explored for further analysis of tissue-wise variations, despite its potential significance for tracking disease progression and providing richer quantitative information. Besides, most studies have been validated on single-centre datasets,13 17 and may underperform when tested on different populations in clinical practice. Furthermore, performance on angle-closure images has seldom been evaluated: because of the difficulty of scleral spur localisation in angle-closure images, the quantitative assessment of the anterior chamber angle (ACA) was performed only on open-angle images in a previous study.13 Nevertheless, UBM is useful not only for identifying the causes of angle closure but also for characterising the different subtypes of PACD. Therefore, automatic quantitative assessment of the ACA on angle-closure images would be a substantial improvement for PACD diagnosis and treatment.
In this study, we developed a deep learning-based model, called Anterior Segment Multi-Tissue Net (ASM-Net), to perform cornea, iris and ciliary body segmentation and scleral spur detection. A boundary-aware loss was proposed to promote robustness against fuzzy edges. Based on the predicted segmentation and detection results, typical angle parameters were quantified, including angle-opening distance at 500 µm (AOD 500), trabecular–ciliary angle (TCA) and iris area. To validate robustness and generalisation ability, we conducted a multicentre test on both open-angle and angle-closure images from the glaucoma clinics of Peking University People’s Hospital and Peking University International Hospital. This study has potential significance for PACD diagnosis and for improving clinical research efficiency by providing richer and more accurate tissue information.
Methods
Patient involvement
Patients were not involved in the design, or conduct, or reporting, or dissemination plans of our research.
Patients
This study was approved by the ethics committee of Peking University People’s Hospital (2018PHC011) and adhered to the tenets of the Declaration of Helsinki. Written informed consent was obtained from all subjects. Subjects diagnosed with PAC/PACG (including acute PAC, acute PACG, chronic PAC and chronic PACG) were recruited from the glaucoma clinics of Peking University People’s Hospital (centre 1) from February 2016 through July 2022 and Peking University International Hospital (centre 2) from November 2020 through January 2023. Normal subjects without glaucoma were recruited as open-angle controls.
Acute PAC/PACG (APAC(G)) was defined as eyes with two of the following symptoms: ocular or periocular pain, headache, nausea and/or vomiting, blurred vision, and halos around lights; and the following ophthalmologic findings: an IOP of more than 30 mm Hg, conjunctival hyperaemia, corneal epithelial oedema, shallow anterior chamber with angle closure, iris bombe, and a mid-dilated pupil, with or without glaucomatous optic neuropathy and visual field defect. The fellow eyes of APAC(G) [F-APAC(G)] were defined as the fellow eyes of patients with a recent unilateral APAC(G) that had never experienced an acute attack and showed no signs of a prior acute attack. Only F-APAC(G) eyes were included.
Eyes with chronic PAC/PACG were defined as eyes without symptoms or signs of a prior acute attack, such as glaukomflecken, keratic precipitates or iris atrophy. Patients had an occludable drainage angle and features indicating that trabecular obstruction by the peripheral iris had occurred, such as peripheral anterior synechiae and elevated intraocular pressure. CPACG eyes additionally showed glaucomatous optic neuropathy or visual field defect, whereas CPAC eyes did not.
Exclusion criteria were: (1) Patients who had previous intraocular surgery or laser treatment (eg, cataract surgery, laser trabeculoplasty, laser peripheral iridectomy, and laser iridoplasty); (2) Patients with a history of ocular diseases that may cause secondary angle closure (eg, ocular trauma, iris neovascularisation, tumour, uveitis, as well as lens intumescence and subluxation); (3) Patients using any medication that can affect the structure of the anterior chamber, such as miotic agents; and (4) Patients who were unable to finish UBM examinations.
Anterior chamber angle assessment was based on gonioscopy examination. Gonioscopy was performed in a dimly lit room by a glaucoma specialist (HJW from centre 1 and QJY from centre 2) using a Zeiss-style four-mirror gonioscopy lens (Model G-4, Volk Optical, Inc., Mentor, OH) with and without indentation. An occludable angle was defined as invisibility of the posterior trabecular meshwork under a dynamic compression technique.
Dataset partitioning
UBM (Aviso, Quantel Medical, Inc., Bozeman, MT) measurements were performed with a 50-MHz transducer by two experienced operators (WYY from centre 1 and ZY from centre 2) who were masked to the clinical data. Imaging parameters were set uniformly for all tests: gain = 100 decibels (dB), Dyn = 50 dB and Tgc = 0 dB. All subjects underwent UBM examination in a supine position in room light (illumination 120 lux, measured with a luminance metre [Model ST-92, Beijing Teachers University Photoelectricity Instrument Factory, Beijing, China]). The UBM transducer was held perpendicular to the ocular structures at the limbal region being examined. Both eyes of each subject were imaged in the superior, inferior, temporal and nasal quadrants. Only images with a clear view of the scleral spur, angle, ciliary body, iris and anterior surface of the lens were included for analysis. The labelling and measurement of the UBM images were performed independently by two ophthalmologists (YKY and KL), masked to the clinical data, using the in-built calliper in the UBM software. We performed repeatability and reproducibility analyses of the UBM measurements: the first observer (YKY) measured the parameters twice within 2 weeks to assess intraobserver variability, and the second observer (KL) measured the same images independently on a different day to assess inter-observer variability. Intraobserver and inter-observer variability were quantified with the intraclass correlation coefficient (ICC).
Manual segmentation and angle parameter quantification
In this experiment, the cornea, iris and ciliary body were identified on the basis of the different tissue densities among these structures. The scleral spur (SS) was identified on the basis of the differential tissue density between the collagen fibres of the scleral spur and the longitudinal muscle of the ciliary body. The anterior segment parameters (AOD 500, TCA and iris area) were measured according to the methods of a previous study,8 as shown in figure 1:
Figure 1. The definition of anterior segment parameters. AOD 500: the distance between the posterior corneal surface and the anterior iris surface on a line perpendicular to the trabecular meshwork 500 µm from the scleral spur. TCA: the angle between the posterior corneal surface and the anterior surface of the ciliary body. Iris area: the cumulative cross-sectional area of the full length (from spur to pupil) of the iris.

AOD 500: the distance between the posterior corneal surface and the anterior iris surface on a line perpendicular to the trabecular meshwork 500 µm from the scleral spur.
TCA: the angle between the posterior corneal surface and the anterior surface of the ciliary body.
Iris area: the cumulative cross-sectional area of the full length (from spur to pupil) of the iris.
After the scleral spur (Point SS) was identified, a circle with a radius of 500 μm was automatically drawn with Point SS as the centre, and the intersection of this circle with the posterior corneal surface was taken as Point A. For AOD, the normal of the posterior corneal surface at Point A was constructed, and its intersection with the anterior iris surface was taken as Point B; AOD was then obtained by calculating the Euclidean distance between Point A and Point B. For TCA, a line tangent to the anterior surface of the ciliary body through Point A was drawn, and the angle between this line and the line connecting Point A and Point SS was calculated. Finally, the pixels of the iris segmentation mask were summed to obtain the iris area. All these measurements were implemented with OpenCV, an open-source computer vision library with Python bindings.
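The Point A/Point B construction described above can be sketched with NumPy on discretised boundary polylines. The array names, the pixel scale and the nearest-vertex approximations of the circle intersection and of the surface normal are illustrative assumptions, not the authors' OpenCV implementation:

```python
import numpy as np

def angle_deg(u, v):
    """Unsigned angle between two 2-D vectors, in degrees (used for TCA)."""
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def aod_500(cornea_post, iris_ant, spur, um_per_px):
    """AOD 500 from (N, 2) pixel polylines of the posterior corneal and
    anterior iris surfaces, following the Point A / Point B construction."""
    radius = 500.0 / um_per_px
    # Point A: corneal vertex closest to the 500 um circle around the spur
    d = np.linalg.norm(cornea_post - spur, axis=1)
    ia = int(np.argmin(np.abs(d - radius)))
    a = cornea_post[ia].astype(float)
    # surface normal at A, estimated from the local tangent of the polyline
    lo, hi = max(ia - 1, 0), min(ia + 1, len(cornea_post) - 1)
    t = (cornea_post[hi] - cornea_post[lo]).astype(float)
    n = np.array([-t[1], t[0]])
    n /= np.linalg.norm(n)
    # Point B: iris vertex closest to the normal line through A
    rel = iris_ant - a
    perp = np.abs(rel[:, 0] * n[1] - rel[:, 1] * n[0])  # |rel x n|
    b = iris_ant[int(np.argmin(perp))].astype(float)
    return float(np.linalg.norm(b - a) * um_per_px)
```

On a synthetic open angle with the posterior cornea along y = 0 px, the iris along y = 10 px and a scale of 10 µm/pixel, this sketch returns an AOD 500 of 100 µm; the `angle_deg` helper would give the TCA once the ciliary tangent direction is estimated, and the iris area is simply the mask pixel count times the area per pixel.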
Anterior segment multi-tissue Segmentation and spur detection
Our deep learning method includes two models: the anterior segment multitissue segmentation model and the spur detection model. Both models share the same network architecture with different output layers, developed on the basis of BAS-Net.18 The main architecture consists of two modules, a prediction module and a residual refinement module. This coarse-to-fine strategy makes the model first focus on the general anterior segment region and then attend to the specific tissue areas for better boundary quality. Similar to the U-Net architecture, the prediction module is an encoder-decoder network responsible for producing a saliency map from the input images under dense supervision. The encoder is constructed from a convolution layer followed by six stages formed of residual blocks. Unlike ResNet-34,19 the first layer is modified to 64 convolution filters of size 3×3 with stride 1 (from the original 7×7 with stride 2), and the subsequent pooling operation is removed. To extract both abstract global information and low-level detail, a decoder symmetrical to the encoder with deep supervision is adopted. The refinement module is a residual encoder-decoder network that takes the coarse saliency maps from the prediction module and learns the residuals from the ground truth. It consists of an input layer, an encoder with non-overlapping max pooling for down-sampling, a bridge, a decoder with bilinear interpolation for up-sampling and an output layer.
For the anterior segment multitissue segmentation model, a boundary-aware loss was proposed to promote segmentation robustness in the ambiguous edge areas. We extracted the areas near the tissue edges and made the model pay more attention to these areas during training. First, the edge lines of the cornea, iris and ciliary body were obtained by calculating the gradient between the different tissue masks. Then, Gaussian blur was applied to these lines to generate a pixel-level weight map (loss mask). Finally, the loss was multiplied by the loss mask during gradient backpropagation. The spur detection model shares the same main structure as the segmentation model. Gaussian heat maps generated from the labelled scleral spur coordinates in the UBM images were used to train the localisation model. With this model, UBM images were transformed into heat maps showing the pixel-wise probability of the scleral spur’s location.
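The boundary-aware weighting and the Gaussian heat-map targets can be sketched as below. The edge gain, blur sigma and label encoding are illustrative assumptions; the paper does not specify these hyperparameters:

```python
import numpy as np

def _gaussian_blur(img, sigma):
    """Separable Gaussian blur implemented with plain NumPy convolutions."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, r, mode="edge")
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, out)

def boundary_weight_map(label_map, sigma=3.0, edge_gain=4.0):
    """Pixel-level loss mask that up-weights tissue boundaries.
    label_map: (H, W) integers, e.g. 0 background, 1 cornea, 2 iris, 3 ciliary body."""
    edge = np.zeros(label_map.shape, dtype=bool)
    edge[:, :-1] |= label_map[:, :-1] != label_map[:, 1:]   # horizontal label change
    edge[:-1, :] |= label_map[:-1, :] != label_map[1:, :]   # vertical label change
    blurred = _gaussian_blur(edge.astype(float), sigma)
    if blurred.max() > 0:
        blurred /= blurred.max()
    return 1.0 + edge_gain * blurred   # interior pixels keep weight 1

def spur_heatmap(shape, spur_xy, sigma=5.0):
    """Gaussian heat-map target centred on the labelled scleral spur (x, y)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((xs - spur_xy[0]) ** 2 + (ys - spur_xy[1]) ** 2) / (2 * sigma ** 2))
```

During training, the per-pixel cross-entropy would be multiplied element-wise by `boundary_weight_map(labels)` before averaging, so gradients near tissue edges carry more weight.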
To ensure consistency across our dataset and compatibility with the BAS-Net architecture, each UBM image was processed into a standard format. We first extracted a region of interest (ROI) by cropping the central 430×740 pixels from each image. Following ROI extraction, we unified the intensity range of the images to facilitate consistent analysis. Finally, the images were resized to a uniform 512×512 pixels for model input. To improve robustness and generalisation, data augmentation was performed with random rotation (−30, +30), random shift (−12, +12), random scaling (0.8, 1.2), horizontal flipping and intensity transfer. Transfer learning was used by pretraining the model on ImageNet.20 We trained our models with the Adam optimiser on random mini-batches of size 16 and applied weight decay with a factor of 1E-3 for regularisation. All deep learning experiments were conducted on a 32-CPU server with NVIDIA Tesla P100 graphics processing units.
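A minimal sketch of this preprocessing pipeline, assuming a single-channel image of at least 430×740 pixels; the interpolation method for resizing is not stated in the paper, so nearest-neighbour sampling is used here for self-containedness:

```python
import numpy as np

def preprocess_ubm(img, roi_hw=(430, 740), out_hw=(512, 512)):
    """Centre-crop the 430x740 ROI, min-max normalise intensities to [0, 1]
    and resize to 512x512 with nearest-neighbour sampling."""
    h, w = img.shape[:2]
    rh, rw = roi_hw
    top, left = (h - rh) // 2, (w - rw) // 2
    roi = img[top:top + rh, left:left + rw].astype(np.float32)
    rng = roi.max() - roi.min()
    roi = (roi - roi.min()) / rng if rng > 0 else np.zeros_like(roi)
    # nearest-neighbour resize via integer index maps
    rows = (np.arange(out_hw[0]) * rh // out_hw[0]).astype(int)
    cols = (np.arange(out_hw[1]) * rw // out_hw[1]).astype(int)
    return roi[np.ix_(rows, cols)]
```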
Performance evaluation
For multitissue segmentation, we quantitatively evaluated network performance using instance-level Intersection over Union (IoU) scores.21 For all prediction/ground truth pairs, we computed IoU scores, calculated as the ratio of the area of overlap between the predicted and ground truth segmentations to the area of their union. These scores were averaged over all instances in all images to give the instance-level IoU, which denotes the mean detected area coverage per instance. For scleral spur localisation, performance was assessed with the Euclidean distance,22 the straight-line distance between the model-predicted and the labelled coordinates.
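For reference, the two evaluation metrics can be computed as follows; the micrometre conversion factor is left as an explicit argument because the pixel pitch is device-dependent:

```python
import numpy as np

def iou(pred_mask, gt_mask):
    """Intersection over Union between two binary masks of one tissue instance."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:          # both empty: count as perfect agreement
        return 1.0
    return float(np.logical_and(pred, gt).sum() / union)

def spur_error_um(pred_xy, gt_xy, um_per_px):
    """Euclidean scleral spur localisation error, converted to micrometres."""
    d_px = np.linalg.norm(np.asarray(pred_xy, float) - np.asarray(gt_xy, float))
    return float(d_px * um_per_px)
```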
For angle parameter quantification, we chose the mean error distance, mean error angle and mean error area (the mean absolute value of the difference between the ground truth and the calculated value) as evaluation indicators.23 To evaluate the consistency between the manually and automatically measured values, we calculated the intraclass correlation coefficient (ICC) between the angle parameters measured by the ophthalmologists and those measured by the deep learning system. P values less than or equal to 0.05 were considered significant.
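The agreement analysis can be reproduced with a two-way random-effects, absolute-agreement, single-measures ICC, i.e. ICC(2,1) in the Shrout and Fleiss taxonomy. The paper does not state which ICC form was used, so this is one reasonable choice rather than the authors' exact method:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    ratings: (n_subjects, k_raters) array, e.g. manual vs automatic values."""
    y = np.asarray(ratings, dtype=float)
    n, k = y.shape
    grand = y.mean()
    row_means = y.mean(axis=1)
    col_means = y.mean(axis=0)
    ss_total = ((y - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-subject
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-rater
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return float((ms_rows - ms_err) /
                 (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n))
```

Identical manual and automatic columns give an ICC of 1; small random disagreement pulls the value just below 1, matching the interpretation of the 0.92 to 0.99 values reported below.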
Results
Patient population
In this study, we compiled a dataset of 2561 UBM images obtained from 637 subjects across two centres. Detailed demographic and clinical examination data of these subjects are provided in online supplemental table 1. We adopted patient-level randomisation for splitting the dataset into training, validation and test sets, which guarantees that images from the same patient were included in only one set. From centre 1, a total of 2017 UBM images from 547 subjects constituted our training set, comprising 748 images from patients with acute angle-closure glaucoma, 754 images from patients with chronic angle-closure glaucoma and 515 images from normal subjects. Our validation set included 322 UBM images from 44 subjects: 137 images from acute angle-closure glaucoma patients, 143 images from chronic angle-closure glaucoma patients and 42 images from normal subjects. To evaluate the model at a multicentre level, we formed a test set with images from both centres: 138 UBM images from 29 patients at centre 1 and 84 images from 16 patients at centre 2, totalling 222 images. Within this test set, 105 images were from patients with open angles and 117 from patients with angle closure.
The intraobserver and inter-observer intraclass correlations of the manual UBM parameter measurements were 0.92–0.98 and 0.88–0.97, respectively, demonstrating good repeatability and reproducibility of the manual UBM measurements in this study (online supplemental table 2).
Performance of deep learning models
For multitissue segmentation, results on three typical UBM images (F-APAC(G), CPAC(G) and normal), covering the cornea, iris and ciliary body, in the test set are shown in figure 2 and table 1. On the centre 1 test set, the model achieved mean IoUs (Intersection over Union) of 0.98, 0.98 and 0.99 for cornea, iris and ciliary body segmentation, respectively. On the centre 2 test set, the corresponding mean IoUs were 0.97, 0.97 and 0.98. On the total test set, the mean IoUs were 0.98, 0.98 and 0.98.
Figure 2. Multitissue segmentation results on three typical UBM images (F-APAC(G), CPAC(G) and normal). The cornea (blue), iris (green) and ciliary body (red) as marked by ophthalmologists and as predicted by the model. UBM, ultrasound biomicroscopy.
Table 1. The results of multitissue segmentation and scleral spur localisation.
| | Cornea (IoU) | Iris (IoU) | Ciliary body (IoU) | Spur: mean error distance (pixels) | Spur: mean error distance (μm) |
|---|---|---|---|---|---|
| Centre 1 | 0.98 | 0.98 | 0.99 | 1.24 | 27.32 |
| Centre 2 | 0.97 | 0.97 | 0.98 | 0.36 | 8.93 |
| Total | 0.98 | 0.98 | 0.98 | 1.07 | 26.75 |

IoU, Intersection over Union.
For scleral spur localisation, results on the UBM images are shown in figure 3 and table 1. On the centre 1 test set, the model achieved a mean error distance of 1.24 pixels (27.32 μm in Euclidean distance) from the ground truth. On the centre 2 test set, it reached a mean error distance of 0.36 pixels (8.93 μm). On the total test set, the mean error distance was 1.07 pixels (26.75 μm); the Euclidean distance distribution of the model was 82.19% within 10 µm, 84.93% within 100 µm and 93.15% within 150 µm.
Figure 3. The scleral spur locations marked by ophthalmologists (green dot) and predicted by model detection (red dot).

The comparison between open-angle and angle-closure images is shown in table 2. For multitissue segmentation, our model achieved mean IoUs of 0.98, 0.98 and 0.99 for cornea, iris and ciliary body segmentation in open-angle images, and 0.98, 0.98 and 0.98 in angle-closure images. For scleral spur localisation, our model reached a mean error distance of 0.49 pixels (12.24 μm in Euclidean distance) in open-angle images and 1.42 pixels (35.46 μm) in angle-closure images.
Table 2. Comparison of multitissue segmentation and scleral spur localisation between open-angle and angle-closure UBM images.
| | Cornea (IoU) | Iris (IoU) | Ciliary body (IoU) | Spur: mean error distance (pixels) | Spur: mean error distance (μm) |
|---|---|---|---|---|---|
| Open angle | 0.98 | 0.98 | 0.99 | 0.49 | 12.24 |
| Angle closure | 0.98 | 0.98 | 0.98 | 1.42 | 35.46 |

IoU, Intersection over Union; UBM, ultrasound biomicroscopy.
Performance of angle parameter quantification
Accurate segmentation of multiple tissues and localisation of the scleral spur made it possible to quantify the angle parameters. On the centre 1 test set, the mean errors were 3.09 μm for AOD, 3.38 degrees for TCA and 0.04 mm² for iris area. On the centre 2 test set, similar performance was achieved: 2.99 μm for AOD, 3.17 degrees for TCA and 0.06 mm² for iris area. On the total test set, the mean differences were 3.07 μm for AOD, 3.34 degrees for TCA and 0.05 mm² for iris area. The results of the angle parameters and iris area are shown in online supplemental table 3.
The ICC values between manual measurement and automatic measurement of TCA, AOD 500 and iris area were 0.99, 0.92 and 0.99, respectively (online supplemental table 4).
Discussion
In this study, we developed an anterior segment multitissue identification and parameter assessment system leveraging deep learning algorithms. The results suggest that the artificial intelligence system can automatically localise the scleral spur in UBM images and segment the cornea, iris and ciliary body separately with high accuracy (mean IoU of 0.98 for each tissue). Moreover, the measurement algorithm automatically quantifies typical angle parameters such as AOD 500, TCA and iris area in good agreement with manual measurement, including on angle-closure images (mean differences of 3.07 μm for AOD, 3.34 degrees for TCA and 0.05 mm² for iris area).
Recently, automatic image processing for localisation and classification of the ACA in images acquired by AS-OCT has been widely investigated.24–27 However, unlike UBM, AS-OCT cannot penetrate dense tissues and therefore cannot provide sufficient evidence for the diagnosis and management of glaucoma, especially PACD.28 29 Thus, a recent study attempted to generate synthesised UBM images from AS-OCT images for iridociliary assessment and found high consistency between measurements of real and synthetic UBM images on Bland–Altman analysis.30 However, UBM images typically have lower image quality than AS-OCT, which increases the difficulty of automatic image processing. Both Yu and Wang provided artificial intelligence algorithms for automatic assessment of the ACA,13 29 but both treated the structures of the ACA, including the cornea, iris and ciliary body, as a single body for analysis, achieving classification accuracies of 97.2% and 98.18%, respectively. Yet the primary anatomic variations in the size, position and relationship of the anterior segment structures (cornea, iris, ciliary body, lens) are a critical pathological mechanism of PACD. Therefore, automatic multitissue identification in UBM is an important capability for automated assessment of the anterior segment of the eye.
As mentioned above, UBM images have lower image quality than AS-OCT, which increases the difficulty of automatic image processing, especially at the fuzzy edges between different tissues. To address this, we introduced two new strategies. First, an architecture named ASM-Net was developed: a coarse-to-fine architecture that first focuses on the general anterior chamber region for overall identification and then on the specific tissue areas for detailed boundary quality. Second, a boundary-aware loss was proposed to promote robustness against fuzzy edges by reweighting the cross-entropy loss and making the model pay more attention to edges during training. Both techniques were essential to realising UBM-based multitissue segmentation. As shown in figure 2, the model is robust in the edge areas between different tissues. With these strategies, the model showed excellent agreement with the manual ground truth (IoU above 0.98 on all three tissues) on the test set from centre 1 (the same centre as the training set) and strong generalisation ability (IoU above 0.97) on the test set from centre 2 (a different centre from the training set).
Primary anatomic variations in the size, position and relationship of the anterior segment structures (cornea, iris, ciliary body, lens) are related to the pathogenesis of PACD.15 While both AS-OCT and UBM are important adjuncts for evaluating anterior segment structures, statistically significant differences exist between them in chamber angle parameters such as AOD and TIA, so AS-OCT and UBM are not interchangeable.31 Unlike gonioscopy and AS-OCT, UBM is capable of imaging the structures behind the iris. The morphology and position of the ciliary body and iris have been shown to play an important role in the development of PACD.32 33 Additionally, quantitative measurements of UBM images revealed that the position of the ciliary processes is one of the main differences between angle-closure and normal eyes.34 However, previous deep learning algorithms identified the anterior segment structures in UBM images as one category,12 13 with no delineation of the boundaries among the cornea, sclera, iris and ciliary body, which limits the usage of those algorithms. In the present study, the deep learning method performs multitissue segmentation automatically, covering the cornea, iris and ciliary body. The ICC between automatic and manual measurement of iris area was 0.99, showing that the multitissue segmentation algorithm is highly consistent with manual measurement. This method can provide more morphologic and quantitative parameters of the iris and ciliary body.
In previous studies,13 UBM images were first classified into open-angle and angle-closure images, and the quantitative assessment of the ACA was performed only on open-angle images because of the difficulty of SS localisation in angle-closure images. In our study, an algorithm for automatic SS localisation was developed that works on both open-angle and angle-closure UBM images. With this method, the SS of angle-closure glaucoma patients could be localised accurately: the mean Euclidean distance of the SS localisation model was 27.32 μm, and the distance distribution was 82.19% within 10 µm, 84.93% within 100 µm and 93.15% within 150 µm, better than the results of a previous algorithm (5.62% within 10 µm, 80.80% within 100 µm and 92.74% within 150 µm).13 With angle-closure SS localisation available, the quantitative assessment of the ACA could be performed on angle-closure UBM images, which was unavailable with previous methods.
The ICC between manual and automatic measurement of AOD 500 was 0.99. For comparison, the ICC for AOD 500 achieved by Wang et al was 0.97.17 Compared with the previous system, the present deep learning system achieved better consistency with the manual measurement results. In the analysis of angle parameters, the consistency for open-angle images measured by the deep learning system was better than that for angle-closure images, because accurate measurement of angle parameters relies on precise localisation of the scleral spur.
Studies have shown that a large proportion of patients with PACD are undiagnosed until their first acute attack.35 36 If more high-risk PACD patients could be identified with UBM before disease onset, they could receive prophylactic treatment to avoid acute attacks and vision loss.33 However, UBM requires a well-trained operator to perform and an ophthalmologist to analyse, and anterior segment parameter measurement is time-consuming. By automatically identifying anterior segment structures and measuring angle parameters, this deep learning system could therefore help diagnose the disease, decide who should be referred for further evaluation and treatment, and monitor the trend of the angle parameters after treatment.
This study has some limitations. First, there is no absolute ground truth for UBM image annotation. Although the intraobserver and inter-observer ICCs were good, the labelling process is subjective and human error is inevitable, which may affect the performance of the deep learning model. Second, only a Chinese population was evaluated; the results may not apply to other ethnic groups. Third, gonioscopy was not used as a reference standard, because this automated solution was only meant to be compared with clinicians’ interpretation of UBM images. In addition, the UBM images at both centres were taken with the same brand of device. Owing to differences in image contrast and resolution among device brands, this system might not achieve the same performance when applied to images from other brands of UBM devices.
In summary, an anterior segment multitissue identification and parameter assessment system based on a deep learning algorithm was developed in this study. The automatic identification of multiple tissues and automatic measurement of angle parameters proved feasible and reliable. Further studies may be needed to evaluate the system's performance in clinical implementation and its use with different brands of UBM devices.
Supplementary material
Acknowledgements
The authors would like to thank Wang Y and Zhang Y, who performed the UBM measurements.
Footnotes
Funding: Supported by the Beijing Science and Technology Plan Project (Z191100007619045).
Provenance and peer review: Part of a topic collection; not commissioned; externally peer reviewed.
Patient consent for publication: Not applicable.
Ethics approval: This study involves human participants and was approved by the ethics committee of Peking University People’s Hospital (2018PHC011) and adhered to the tenets of the Declaration of Helsinki. Participants gave informed consent to participate in the study before taking part.
Patient and public involvement: Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Data availability statement
All data relevant to the study are included in the article or uploaded as supplementary information.
References
- 1.Tham YC, Li X, Wong TY, et al. Global prevalence of glaucoma and projections of glaucoma burden through 2040: a systematic review and meta-analysis. Ophthalmology. 2014;121:2081–90. doi: 10.1016/j.ophtha.2014.05.013. [DOI] [PubMed] [Google Scholar]
- 2.Quigley HA, Broman AT. The number of people with glaucoma worldwide in 2010 and 2020. Br J Ophthalmol. 2006;90:262–7. doi: 10.1136/bjo.2005.081224. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Weinreb RN, Aung T, Medeiros FA. The pathophysiology and treatment of glaucoma: a review. JAMA. 2014;311:1901–11. doi: 10.1001/jama.2014.3192. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Foster PJ, Buhrmann R, Quigley HA, et al. The definition and classification of glaucoma in prevalence surveys. Br J Ophthalmol. 2002;86:238–42. doi: 10.1136/bjo.86.2.238. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Kwon J, Sung KR, Han S. Long-term Changes in Anterior Segment Characteristics of Eyes With Different Primary Angle-Closure Mechanisms. Am J Ophthalmol. 2018;191:54–63. doi: 10.1016/j.ajo.2018.04.005. [DOI] [PubMed] [Google Scholar]
- 6.Shinoj VK, Hong XJJ, Murukeshan VM, et al. Progress in anterior chamber angle imaging for glaucoma risk prediction – a review on clinical equipment, practice and research. Med Eng Phys. 2016;38:1383–91. doi: 10.1016/j.medengphy.2016.09.014. [DOI] [PubMed] [Google Scholar]
- 7.Sakata LM, Lavanya R, Friedman DS, et al. Comparison of gonioscopy and anterior segment ocular coherence tomography in detecting angle closure in different quadrants of the anterior chamber angle. Ophthalmology. 2008;115:769–74. doi: 10.1016/j.ophtha.2007.06.030. [DOI] [PubMed] [Google Scholar]
- 8.You S, Liang Z, Yang K, et al. Novel Discoveries of Anterior Segment Parameters in Fellow Eyes of Acute Primary Angle Closure and Chronic Primary Angle Closure Glaucoma. Invest Ophthalmol Vis Sci. 2021;62:6. doi: 10.1167/iovs.62.14.6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Asaoka R, Murata H, Hirasawa K, et al. Using Deep Learning and Transfer Learning to Accurately Diagnose Early-Onset Glaucoma From Macular Optical Coherence Tomography Images. Am J Ophthalmol. 2019;198:136–45. doi: 10.1016/j.ajo.2018.10.007. [DOI] [PubMed] [Google Scholar]
- 10.Li Z, He Y, Keel S, et al. Efficacy of a deep learning system for detecting glaucomatous optic neuropathy based on color fundus photographs. Ophthalmology. 2018;125:1199–206. doi: 10.1016/j.ophtha.2018.01.023. [DOI] [PubMed] [Google Scholar]
- 11.Xu BY, Chiang M, Chaudhary S, et al. Deep Learning Classifiers for Automated Detection of Gonioscopic Angle Closure Based on Anterior Segment OCT Images. Am J Ophthalmol. 2019;208:273–80. doi: 10.1016/j.ajo.2019.08.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Wang W, Wang L, Wang T, et al. Automatic Localization of the Scleral Spur Using Deep Learning and Ultrasound Biomicroscopy. Transl Vis Sci Technol. 2021;10:28. doi: 10.1167/tvst.10.9.28. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Wang W, Wang L, Wang X, et al. A deep learning system for automatic assessment of anterior chamber angle in ultrasound biomicroscopy images. Transl Vis Sci Technol. 2021;10:21. doi: 10.1167/tvst.10.11.21. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Minhaz AT, Sevgi DD, Kwak S, et al. Deep Learning Segmentation, Visualization, and Automated 3D Assessment of Ciliary Body in 3D Ultrasound Biomicroscopy Images. Transl Vis Sci Technol. 2022;11:3. doi: 10.1167/tvst.11.10.3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Tarongoy P, Ho CL, Walton DS. Angle-closure glaucoma: the role of the lens in the pathogenesis, prevention, and treatment. Surv Ophthalmol. 2009;54:211–25. doi: 10.1016/j.survophthal.2008.12.002. [DOI] [PubMed] [Google Scholar]
- 16.Ku JY, Nongpiur ME, Park J, et al. Qualitative evaluation of the iris and ciliary body by ultrasound biomicroscopy in subjects with angle closure. J Glaucoma. 2014;23:583–8. doi: 10.1097/IJG.0b013e318285fede. [DOI] [PubMed] [Google Scholar]
- 17.Yu J, Li W, Chen Q, et al. Automatic Classification of Anterior Chamber Angle Based on Ultrasound Biomicroscopy Images. Ophthalmic Res. 2021;64:732–9. doi: 10.1159/000510924. [DOI] [PubMed] [Google Scholar]
- 18.Qin X, Zhang Z, Huang C, et al. BASNet: boundary-aware salient object detection. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); Long Beach, CA, USA. [DOI] [Google Scholar]
- 19.He K, Zhang X, Ren S, et al. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Las Vegas, NV, USA. [DOI] [Google Scholar]
- 20.Deng J, Dong W, Socher R, et al. ImageNet: a large-scale hierarchical image database. 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops); Miami, FL. [DOI] [Google Scholar]
- 21.Everingham M, Gool LV, Williams CK, et al. The PASCAL VOC2012 challenge results. 2012.
- 22.Danielsson P-E. Euclidean distance mapping. Comput Graph Image Process. 1980;14:227–48. doi: 10.1016/0146-664X(80)90054-4. [DOI] [Google Scholar]
- 23.Lombardi F. New Metrics for the Reliability of Approximate and Probabilistic Adders. IEEE Trans Comput. 2013;62:1760–71. doi: 10.1109/TC.2012.146. [DOI] [Google Scholar]
- 24.Hao H, Zhao Y, Yan Q, et al. Angle-closure assessment in anterior segment OCT images via deep learning. Med Image Anal. 2021;69:101956. doi: 10.1016/j.media.2021.101956. [DOI] [PubMed] [Google Scholar]
- 25.Li W, Chen Q, Jiang C, et al. Automatic Anterior Chamber Angle Classification Using Deep Learning System and Anterior Segment Optical Coherence Tomography Images. Transl Vis Sci Technol. 2021;10:19. doi: 10.1167/tvst.10.6.19. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Liu P, Higashita R, Guo PY, et al. Reproducibility of deep learning based scleral spur localisation and anterior chamber angle measurements from anterior segment optical coherence tomography images. Br J Ophthalmol. 2023;107:802–8. doi: 10.1136/bjophthalmol-2021-319798. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Fu H, Baskaran M, Xu Y, et al. A Deep Learning System for Automated Angle-Closure Detection in Anterior Segment Optical Coherence Tomography Images. Am J Ophthalmol. 2019;203:37–45. doi: 10.1016/j.ajo.2019.02.028. [DOI] [PubMed] [Google Scholar]
- 28.Nolan W. Anterior segment imaging: ultrasound biomicroscopy and anterior segment optical coherence tomography. Curr Opin Ophthalmol. 2008;19:115–21. doi: 10.1097/ICU.0b013e3282f40bba. [DOI] [PubMed] [Google Scholar]
- 29.Wang D, Pekmezci M, Basham RP, et al. Comparison of different modes in optical coherence tomography and ultrasound biomicroscopy in anterior chamber angle assessment. J Glaucoma. 2009;18:472–8. doi: 10.1097/IJG.0b013e31818fb41d. [DOI] [PubMed] [Google Scholar]
- 30.Ye H, Yang Y, Mao K, et al. Generating Synthesized Ultrasound Biomicroscopy Images from Anterior Segment Optical Coherent Tomography Images by Generative Adversarial Networks for Iridociliary Assessment. Ophthalmol Ther. 2022;11:1817–31. doi: 10.1007/s40123-022-00548-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Bu Q, Hu D, Zhu H, et al. Swept-source optical coherence tomography and ultrasound biomicroscopy study of anterior segment parameters in primary angle-closure glaucoma. Graefes Arch Clin Exp Ophthalmol. 2023;261:1651–8. doi: 10.1007/s00417-022-05970-6. [DOI] [PubMed] [Google Scholar]
- 32.Marchini G, Pagliarusco A, Toscano A, et al. Ultrasound biomicroscopic and conventional ultrasonographic study of ocular dimensions in primary angle-closure glaucoma. Ophthalmology. 1998;105:2091–8. doi: 10.1016/S0161-6420(98)91132-0. [DOI] [PubMed] [Google Scholar]
- 33.Wang W, Zhou M, Huang W, et al. Does acute primary angle-closure cause an increased choroidal thickness? Invest Ophthalmol Vis Sci. 2013;54:3538–45. doi: 10.1167/iovs.13-11728. [DOI] [PubMed] [Google Scholar]
- 34.Sun X, Dai Y, Chen Y, et al. Primary angle closure glaucoma: What we know and what we don’t know. Prog Retin Eye Res. 2017;57:26–45. doi: 10.1016/j.preteyeres.2016.12.003. [DOI] [PubMed] [Google Scholar]
- 35.Sawaguchi S, Sakai H, Iwase A, et al. Prevalence of primary angle closure and primary angle-closure glaucoma in a southwestern rural population of Japan. Ophthalmology. 2012;119:1134–42. doi: 10.1016/j.ophtha.2011.12.038. [DOI] [PubMed] [Google Scholar]
- 36.He M, Foster PJ, Ge J, et al. Prevalence and clinical characteristics of glaucoma in adult Chinese: a population-based study in Liwan District, Guangzhou. Invest Ophthalmol Vis Sci. 2006;47:2782–8. doi: 10.1167/iovs.06-0051. [DOI] [PubMed] [Google Scholar]