Abstract
Current standard quantitative three-dimensional (3D) spectral-domain optical coherence tomography (SD-OCT) analysis of various ocular diseases is limited in its ability to detect structural damage at early pathologic stages, mostly because only a small fraction of the 3D data is used when quantifying the structure of interest. This paper presents a novel SD-OCT data analysis technique that takes full advantage of the 3D dataset. The proposed algorithm uses a machine classifier to analyze SD-OCT images after grouping adjacent pixels into super pixels in order to detect glaucomatous damage. A 3D SD-OCT image is first converted into a 2D feature map and partitioned into over a hundred super pixels. Machine classifier analysis using a boosting algorithm is then performed on the super pixel features. One hundred and ninety-two 3D OCT images of the optic nerve head region were tested. The area under the receiver operating characteristic curve (AUC) was computed to evaluate the glaucoma discrimination performance of the algorithm and to compare it with the commercial software output. The AUC for discriminating normal from glaucoma suspect eyes using the proposed method was statistically significantly higher than with the current method (0.855 vs 0.707, p=0.031). This new method has the potential to improve early detection of glaucomatous structural damage.
Keywords: Super Pixel, 3D OCT, Glaucoma Analysis, Retinal Image Processing
I. Introduction
Optical coherence tomography (OCT) is a rapidly evolving biomedical imaging technology that has gained significant clinical impact in ophthalmology [1]. One of the main uses of this technology is the detection and follow-up of subjects with glaucoma. Glaucoma is an ophthalmic disease characterized by gradual structural changes such as thinning of the retinal nerve fiber layer (RNFL). OCT RNFL thickness measurements provide an important structural analysis tool in current clinical glaucoma management. The fast scanning speed of spectral-domain (SD-) OCT allows three-dimensional (3D) volume scanning of retinal layers, which may offer more detailed and accurate quantitative analysis of the retinal structure than ever before. This will likely improve early disease detection and glaucoma progression detection. 3D OCT retinal images are composed of a series of cross-sectional scans (B-scans; Fig. 1) acquired from top to bottom (in the x–y plane) of the scanning region on the retina. Each B-scan consists of a certain number of high-resolution one-dimensional scans in the z direction (A-scans).
Fig. 1.
An example of a 3D OCT image. (A) 3D OCT image composed of consecutive B-scans; each B-scan consists of numerous A-scans. (B) OCT fundus image, generated by averaging the intensity values of each A-scan. (C) Pseudo-color retinal nerve fiber layer (RNFL) thickness map.
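To make the volume layout concrete, the following minimal sketch (with an illustrative random volume standing in for real data) shows how an en-face OCT fundus image like the one in Fig. 1B can be produced by averaging the intensity values along each A-scan; the 200×200×1024 dimensions and the axis ordering are assumptions for illustration, not a description of any particular device's file format.

```python
import numpy as np

# Hypothetical 3D OCT volume: 200 B-scans x 200 A-scans per B-scan x 1024
# axial samples per A-scan. The (y, x, z) axis order is assumed for this sketch.
oct_volume = np.random.rand(200, 200, 1024).astype(np.float32)

# En-face OCT fundus image (Fig. 1B): average the reflectivity along each
# A-scan (the z axis), giving one pixel per A-scan, i.e. a 200x200 image.
fundus_image = oct_volume.mean(axis=2)
print(fundus_image.shape)  # (200, 200)
```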
Although 3D OCT images are widely available, software advancements have not kept pace with their hardware counterparts. The actual OCT structural measurements do not take full advantage of the 3D dataset (200×200 samplings). Only 512 samplings along a 3.4 mm circle, out of the 40,000 samplings (1.28%), are used in the current RNFL thickness analysis [2] (Fig. 2A, red circle). The circumpapillary measurement is compared with a normative database (Fig. 2B) and summarized in 4 quadrants and 12 clock hours for clinical use (Fig. 2C). Early signs of pathologic damage may go unnoticed, since the majority of the 3D dataset is not sampled (Fig. 2A). Moreover, subtle pathologic changes are difficult to detect using pre-defined sectors, since all the data in each sector is summarized by a single index, which is not a sensitive method for assessing early disease damage. So far, only a few publications on glaucoma assessment have used the full 3D OCT information. Most SD-OCT devices provide only qualitative 3D image analysis, comparing measurements to normative databases with color-coded results: red ("outside normal limits"), green ("within normal limits"), and yellow ("borderline"). For example, Cirrus HD-OCT (Carl Zeiss Meditec, Inc., Dublin, CA) provides a two-dimensional (2D) RNFL thickness deviation map in which 4×4 adjacent pixels are grouped together to form fixed-size super pixels (Fig. 2A). Quantitatively summarizing the full 3D dataset into one or several key indices is a fundamental challenge in glaucoma structural analysis and progression monitoring.
Fig. 2.
An example of circumpapillary RNFL analysis as provided by Cirrus HD-OCT. (A) Overlay of the retinal nerve fiber layer (RNFL) thickness deviation map on the OCT fundus image, with structural damage (red and yellow regions) outside the 3.4 mm sampling circle (red circle). (B) The RNFL thickness profile along the 3.4 mm circle is within the normal range (green band). (C) The average RNFL thickness in 4 quadrants and 12 clock hours.
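For comparison with the proposed full-volume approach, the sketch below illustrates the circumpapillary sampling described above: 512 points interpolated along a 3.4 mm diameter circle on the RNFL thickness map. The 6×6 mm scan area, the ONH center coordinates, and the use of bilinear interpolation are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Hypothetical RNFL thickness map (200x200 pixels). The 6x6 mm scan area
# (30 um/pixel) and the ONH center position are assumptions for this sketch.
rnfl_map = np.random.rand(200, 200) * 120.0   # thickness in micrometers
onh_center = (100.0, 100.0)                   # (row, col) of the optic nerve head
pixel_size_mm = 6.0 / 200

# 512 sample positions along a 3.4 mm diameter (1.7 mm radius) circle.
radius_px = 1.7 / pixel_size_mm
theta = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
rows = onh_center[0] + radius_px * np.sin(theta)
cols = onh_center[1] + radius_px * np.cos(theta)

# Bilinear interpolation of the thickness map at the 512 circle positions:
# only 512 of the 40,000 A-scans (1.28%) contribute to this profile.
profile = map_coordinates(rnfl_map, [rows, cols], order=1)
print(profile.shape)  # (512,)
```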
Fixed-size super pixel analysis has been shown to be a potential tool for detecting pathologic changes [3][4]. However, this method ignores the variable spatial architecture of the structural changes; for example, localized glaucomatous RNFL damage commonly exhibits an arcuate shape. Advanced super pixel segmentation partitions an image into a moderate number of close-to-homogeneous segments of variable size, called super pixels. When moving from pixels to super pixels, most of the structures in the image are conserved. Since super pixels are computationally efficient and perceptually meaningful, super pixel segmentation has been used successfully in large-scale applications such as image segmentation [5], video analysis and object tracking [6], stereo matching [7], and object detection and pattern recognition [8]. In medical applications, compared with a 4×4 fixed-size unit, variable-size super pixel segmentation groups homogeneous image pixels and thereby provides a more natural representation of pixel similarities.
In this paper, a novel 3D SD-OCT data analysis technique utilizing the full 3D dataset is proposed to improve the detection of glaucomatous structural damage at early stages. The new method uses self-size-adjusting super pixel segmentation and a machine classifier to quantitatively assess the 3D dataset in order to improve glaucoma detection.
II. Method
Many ocular diseases demonstrate areas of pathologic change with measurable differences from unaffected areas in features such as retinal layer thickness and internal reflectivity. These pathologically affected areas usually share similar characteristics. Variable-size super pixel segmentation provides an efficient way to capture this natural representation by grouping similar neighboring sampled points based on the homogeneity of various features. Moreover, it reduces the complexity of 3D images from hundreds of thousands of voxels to only a few hundred super pixels. A self-size-adjusting super pixel machine classifier algorithm is proposed to quantitatively analyze the full 3D OCT dataset, grouped into super pixels, in order to improve glaucoma detection at an early stage of the disease. The core components of the proposed algorithm are feature map generation, super pixel segmentation, feature extraction, and classification.
A. 2D Feature Map Generation
An A-scan in the 3D OCT image (200×200×1024 voxels) is treated as a unit when converting the 3D OCT image into a 2D map (200×200 pixels); each A-scan corresponds to one pixel in the 2D map. The first stage of the proposed method is to generate a 2D feature map as the input for super pixel segmentation. Four features, namely RNFL thickness, RNFL reflectivity, blood vessels, and deviation from the normative database, are extracted, combined, and converted into the feature map, as depicted in the block diagram in Fig. 3.
Fig. 3.
Flowchart of converting a 3D OCT image into a 2D feature map
Retinal layer segmentation [9] is first applied to each 3D dataset to obtain the RNFL and its thickness map, denoted IRNFL. The RNFL reflectivity of each voxel is normalized to the saturation level of its A-scan. The RNFL reflectivity map, IReflectivity, is computed by averaging the reflectivity within the RNFL along each A-scan and converting the result to the range [0, 1]. The retinal blood vessels, IVessel, are automatically detected using a 3D boosting algorithm [10] and removed from the RNFL thickness map using cubic interpolation. The RNFL thickness map, IRNFL, is compared with a normative database to obtain a deviation map, IDeviation, in which each pixel is labeled from 0 to 1, corresponding to the pathologic damage stage from advanced damage to normal. To reduce the variation with respect to the normative database, all RNFL thickness maps are normalized to the population's average retinal nerve fiber bundle path (RNFBP) location [11]. The final feature map is an adjusted RNFL thickness map combining RNFL thickness, reflectivity, blood vessels, and deviation from the normative database, written as:
IFeatureMap = f(IRNFL, IVessel) × IReflectivity × IDeviation        (1)
where f(•) denotes the blood vessel removal operation applied to the RNFL thickness map, and both IReflectivity and IDeviation are in the range [0, 1].
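A minimal sketch of this feature-map construction is given below, assuming the multiplicative combination written in Eq. (1); the cubic vessel inpainting is approximated with scipy's griddata, the random input maps are placeholders for the segmentation outputs described above, and none of this should be read as the authors' exact implementation.

```python
import numpy as np
from scipy.interpolate import griddata

def remove_vessels(rnfl_thickness, vessel_mask):
    """f(.): fill vessel pixels of the RNFL thickness map by cubic interpolation
    from the surrounding non-vessel pixels (a stand-in for the vessel-removal
    step described in the text)."""
    rows, cols = np.indices(rnfl_thickness.shape)
    known = ~vessel_mask
    return griddata(
        points=np.column_stack([rows[known], cols[known]]),
        values=rnfl_thickness[known],
        xi=(rows, cols),
        method="cubic",
    )

# Placeholder per-A-scan maps (200x200 pixels each).
I_RNFL = np.random.rand(200, 200) * 120.0      # RNFL thickness (um)
I_Vessel = np.random.rand(200, 200) > 0.95     # binary blood vessel mask
I_Reflectivity = np.random.rand(200, 200)      # mean RNFL reflectivity, in [0, 1]
I_Deviation = np.random.rand(200, 200)         # 0 = advanced damage, 1 = normal

# Eq. (1): adjusted RNFL thickness map used as input to super pixel segmentation.
I_FeatureMap = remove_vessels(I_RNFL, I_Vessel) * I_Reflectivity * I_Deviation
```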
B. Self-Size-Adjusting Super Pixel Segmentation
Variable-size and variable-shape super pixels are automatically segmented on the 2D feature map by grouping homogeneous neighboring pixels using a normalized-cut (ncut) algorithm [12]. The optic nerve head (ONH) is detected using an active contour model [13] and masked out for the super pixel segmentation. The size of each super pixel is automatically adjusted according to pre-defined criteria based on the pathologic context of glaucoma: to be more sensitive to RNFL thinning (glaucomatous damage), regions with thinner RNFL (lower intensity in the feature map) are assigned smaller super pixels.
The feature map is initially segmented into 100 super pixels. Super pixels smaller than the pre-defined limit are merged into their most similar neighbors. Each initially segmented super pixel is then recursively partitioned into N segments (N additional super pixels), where N is a function of the mean, standard deviation, and size of the given super pixel. The recursion stops once N is less than 2 or the size of the super pixel falls below the pre-defined limit, so the super pixel number and size are automatically adjusted by this recursive partitioning (see the sketch below). In the segmented feature map, damaged areas with thinner RNFL tend to have smaller super pixels, while normal regions have larger super pixels (Fig. 3). This map provides a qualitative analysis with a more natural representation.
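The recursive size adjustment can be sketched as follows. This is not the ncut implementation of [12]; scikit-image's SLIC (restricted to each super pixel's mask) is used as a stand-in splitter, the rule for N and the size limit are illustrative guesses, and the initial merging of under-sized super pixels is omitted.

```python
import numpy as np
from skimage.segmentation import slic

MIN_SIZE = 50  # pre-defined super pixel size limit in pixels (assumed value)

def n_splits(values, size):
    """Illustrative rule for N as a function of a super pixel's mean, standard
    deviation and size; the paper does not give the exact formula."""
    mean, sd = float(values.mean()), float(values.std())
    return int(size / MIN_SIZE * sd / (mean + 1e-6))

def partition(feature_map, mask, n):
    """Split one super pixel into up to n segments. SLIC restricted to `mask`
    stands in for the ncut segmentation of [12]."""
    sub = slic(feature_map, n_segments=n, mask=mask, channel_axis=None,
               compactness=0.1, start_label=1)
    return [sub == lab for lab in np.unique(sub) if lab != 0]

def refine(feature_map, mask, labels, next_label):
    """Recursively partition a super pixel until N < 2 or it is too small."""
    size = int(mask.sum())
    n = n_splits(feature_map[mask], size)
    if n < 2 or size < MIN_SIZE:
        labels[mask] = next_label
        return next_label + 1
    pieces = partition(feature_map, mask, n)
    if len(pieces) < 2:                      # splitter could not divide further
        labels[mask] = next_label
        return next_label + 1
    for sub_mask in pieces:
        next_label = refine(feature_map, sub_mask, labels, next_label)
    return next_label

# Usage: start from an initial ~100 super pixel segmentation and refine each one.
feature_map = np.random.rand(200, 200)
initial = slic(feature_map, n_segments=100, channel_axis=None, compactness=0.1)
labels = np.zeros_like(initial)
next_label = 1
for lab in np.unique(initial):
    next_label = refine(feature_map, initial == lab, labels, next_label)
```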
C. Feature Extraction
As quantitative disease indices, the following super pixel features are extracted and used as inputs to the machine learning classifier: the mean, SD, 3rd and 4th central moments, and histogram distribution of super pixel RNFL thickness and size. Two thresholds are set based on prior knowledge to obtain two sub-groups of super pixels, large and small, and the feature parameters are computed for each sub-group. Global features, i.e. the average RNFL thickness both along the 3.4 mm circle and over the entire scan region, are also included. A total of 68 features are extracted, including super pixel and global features.
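A hedged sketch of this feature extraction is given below, using the label map from the previous step; the histogram bin count, the size thresholds for the "small" and "large" sub-groups, and the exact pooling scheme are illustrative assumptions, so the resulting vector length will not necessarily match the paper's 68 features.

```python
import numpy as np
from scipy.stats import moment

def superpixel_features(feature_map, labels, n_bins=8, small_thr=100, large_thr=400):
    """Per-super-pixel statistics (mean, SD, 3rd and 4th central moments of the
    adjusted RNFL thickness, plus size) pooled over all super pixels and over
    assumed 'small' and 'large' sub-groups."""
    stats = []
    for lab in np.unique(labels):
        vals = feature_map[labels == lab]
        stats.append([vals.mean(), vals.std(),
                      moment(vals, moment=3),    # 3rd central moment
                      moment(vals, moment=4),    # 4th central moment
                      vals.size])                # super pixel size in pixels
    stats = np.asarray(stats)

    features = []
    subsets = (stats,                              # all super pixels
               stats[stats[:, 4] <= small_thr],    # small super pixels
               stats[stats[:, 4] >= large_thr])    # large super pixels
    for subset in subsets:
        if subset.size == 0:                       # empty sub-group
            features.extend(np.zeros(2 * stats.shape[1] + n_bins))
            continue
        features.extend(subset.mean(axis=0))       # average of each statistic
        features.extend(subset.std(axis=0))        # spread of each statistic
        hist, _ = np.histogram(subset[:, 0], bins=n_bins, range=(0.0, 1.0))
        features.extend(hist / hist.sum())         # thickness histogram
    return np.asarray(features, dtype=np.float64)

# Global features such as the average RNFL thickness along the 3.4 mm circle
# and over the whole scan region would be appended to this vector.
```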
D. Glaucoma Classification
The glaucoma classification is performed with an implementation of the LogitBoost [14] adaptive boosting algorithm. The ensemble classifier is trained using shallow decision trees as the base classifiers. At each boosting round, a sampling of the images that were misclassified in the previous round is used for training; the base classifier is trained and added to the ensemble. Subsequent base classifiers are added one by one and the weights are adjusted with each addition, with the goal of minimizing the overall training error. The gold standard for training the machine classifier was established by a glaucoma expert's diagnosis of each eye, labeled as normal (N), glaucoma suspect (GS), or glaucoma (G) based on clinical findings (visual field, disc photograph, and eye examination). Because the boosting algorithm is a two-class classifier, only two of the three groups (N, GS, and G) are used to train the classifier at a time. The classifier is applied to the super pixel segmented 3D OCT images, labeling each image with a continuous number ranging from negative (normal) to positive (disease).
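The sketch below illustrates this training step. Scikit-learn does not provide LogitBoost itself, so a GradientBoostingClassifier with its default logistic (deviance) loss and shallow trees is used as a closely related stand-in; the feature matrix, labels, and hyper-parameters are placeholders rather than the paper's settings.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Placeholder data: one feature vector per eye (e.g. the super pixel features
# above), with labels 0 = normal and 1 = disease (glaucoma or glaucoma suspect).
X = np.random.rand(103, 68)
y = np.repeat([0, 1], [44, 59])   # e.g. 44 normal vs 59 glaucoma suspect eyes

# Boosted ensemble of shallow decision trees; gradient boosting on the logistic
# loss stands in for LogitBoost [14]. Depth, learning rate and the number of
# boosting rounds are illustrative choices only.
clf = GradientBoostingClassifier(max_depth=2, n_estimators=200, learning_rate=0.1)
clf.fit(X, y)

# Continuous classifier output, from negative (normal-like) to positive
# (disease-like), for each eye.
scores = clf.decision_function(X)
```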
Ten-fold cross validation is used to train and test the machine classifiers. The dataset is randomly partitioned into 10 sub-folds with a uniform distribution of each diagnosis, "normal" and "disease". A single sub-fold is used as the testing dataset while the other 9 sub-folds are used as the training dataset. The training/testing operation is repeated 10 times, so that each image is used in the testing dataset exactly once to obtain its machine classifier output.
The area under the receiver operating characteristic curve (AUC) of the machine classifier outputs, for discriminating between normal and glaucomatous eyes, is compared with that of the current method of diagnosis: the circumpapillary RNFL thickness generated by the Cirrus HD-OCT software.
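A minimal sketch of this evaluation loop, reusing the same stand-in classifier: stratified ten-fold cross validation yields one out-of-fold score per eye, from which the AUC is computed with scikit-learn's roc_auc_score. Sample counts, seeding, and hyper-parameters are placeholders, and the DeLong comparison with the circumpapillary RNFL thickness is not shown.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

# Placeholder feature matrix and diagnosis labels (0 = normal, 1 = disease).
X = np.random.rand(103, 68)
y = np.repeat([0, 1], [44, 59])

# Ten folds with an approximately uniform diagnosis distribution in each fold;
# every eye appears in the testing fold exactly once.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
clf = GradientBoostingClassifier(max_depth=2, n_estimators=200)
scores = cross_val_predict(clf, X, y, cv=cv, method="decision_function")

# AUC of the out-of-fold machine classifier outputs for normal-vs-disease
# discrimination (the DeLong test against the circumpapillary RNFL thickness
# AUC would follow separately).
print(roc_auc_score(y, scores))
```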
III. Results and Discussion
One hundred and ninety-two eyes of 96 subjects (44 normal, 59 glaucoma suspect, and 89 glaucomatous eyes) were scanned with Cirrus HD-OCT (ONH Cube 200×200 scan protocol). An independent dataset of 46 eyes from 46 subjects (one eye randomly selected per subject) was used as the normative database. The proposed method was tested on three different comparisons: N vs G+GS, N vs GS, and N vs G. Super pixel segmentation was first applied to the 2D feature map generated from each 3D OCT image. The boosting algorithm was then applied to the extracted features to automatically identify diseased eyes (glaucoma or glaucoma suspect).
Examples of the super pixel segmentation results are given in Fig. 4. The super pixel boundaries are superimposed on the adjusted RNFL thickness map, where brighter pixel intensity corresponds to thicker RNFL. Different distributions of super pixel size were clearly observed across the three images with different diagnoses. The super pixel processing markedly enhanced the localized damage, which was represented by smaller super pixels. After transformation into super pixel features, each 3D OCT image was efficiently represented by only dozens of features.
Fig. 4.
Super pixel segmentation of 3D OCT images of normal (A), glaucoma suspect (B), and glaucomatous (C) eyes. Super pixel boundaries of various shapes and sizes are outlined in red on the 2D feature map (adjusted RNFL thickness map).
Table 1 summarizes the discrimination ability of the proposed method compared with the output of the commercial software. When both eyes of a subject had the same clinical diagnosis, one eye was randomly selected for computing the AUCs. The AUC for normal vs glaucoma suspect eyes was statistically significantly improved from 0.707 (the circumpapillary RNFL thickness performance) to 0.855 (p=0.031, DeLong test). The AUCs did not differ significantly for the other two comparisons. This is because late-stage glaucoma presents with more generalized damage, so the variable-size super pixel analysis offers no advantage over the circumpapillary RNFL analysis for discriminating late-stage glaucoma. This might be improved by including additional super pixel features, such as the average area and total number of super pixels.
Table 1.
AUCs computed with machine classifier compared to Cirrus HD-OCT software generated RNFL thickness.
| | RNFL thickness | Proposed method | AUC Difference [95% CI] |
|---|---|---|---|
| N vs G+GS | 0.812 | 0.846 | 0.035 [−0.053, 0.122] |
| N vs GS | 0.707 | 0.855 | 0.148* [0.013, 0.283] |
| N vs G | 0.872 | 0.904 | 0.032 [−0.033, 0.098] |
CI – confidence interval, * – statistically significant, N – normal eyes, G – glaucomatous eyes, GS – glaucoma suspect eyes.
IV. Conclusions
The legacy method of OCT analysis has limitations in detecting localized structural damage because it utilizes only a fraction of the 3D data. In this paper, a new 3D OCT data analysis technique, the super pixel machine classifier approach, has been presented to quantitatively summarize the 3D dataset and automatically identify glaucomatous eyes. Experimental results showed that this novel 3D OCT analysis technique discriminated between normal and glaucoma suspect eyes better than the traditional circumpapillary RNFL analysis, and performed similarly for normal vs glaucomatous eyes. This new method has the potential to improve early detection of glaucomatous structural damage. The proposed method can easily be extended to other ocular diseases by changing the features according to the corresponding pathologic context.
Acknowledgments
This work was supported in part by NIH R01-EY013178; P30-EY008098; Eye and Ear Foundation (Pittsburgh, PA); Research to Prevent Blindness.
Contributor Information
Juan Xu, Department of Ophthalmology, UPMC Eye Center, Eye and Ear Institute, Ophthalmology and Visual Science Research Center, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213 USA.
Hiroshi Ishikawa, Department of Ophthalmology, UPMC Eye Center, Eye and Ear Institute, Ophthalmology and Visual Science Research Center, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213 USA; Dept. of Bioengineering, Swanson School of Engineering, Univ. of Pittsburgh, Pittsburgh, PA 15213.
Gadi Wollstein, Department of Ophthalmology, UPMC Eye Center, Eye and Ear Institute, Ophthalmology and Visual Science Research Center, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213 USA.
Joel S. Schuman, Department of Ophthalmology, UPMC Eye Center, Eye and Ear Institute, Ophthalmology and Visual Science Research Center, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213 USA Dept. of Bioengineering, Swanson School of Engineering, Univ. of Pittsburgh, Pittsburgh, PA 15213.
References
- 1. Schuman JS. Spectral domain optical coherence tomography for glaucoma. Trans Am Ophthalmol Soc. 2008;106:426–428.
- 2. http://www.meditec.zeiss.com/cirrus
- 3. Chauhan BC, Blanchard JW, Hamilton DC, et al. Technique for detecting serial topographic changes in the optic disc and peripapillary retina using scanning laser tomography. Invest Ophthalmol Vis Sci. 2001;41:775–782.
- 4. Ishikawa H, Bilonick RA, Wollstein G, et al. Macular inner-retinal layer thickness super pixel analysis for glaucoma using spectral domain optical coherence tomography (SD-OCT). Annual ARVO Meeting; May 2009.
- 5. Ren XF, Malik J. Learning a classification model for segmentation. ICCV '03, vol. 1, Nice, 2003; pp. 10–17.
- 6. Drucker F, MacCormick J. Fast superpixels for video analysis. Proceedings of the 2009 International Conference on Motion and Video Computing; 2009. pp. 55–62.
- 7. Tong HY, Liu S, Liu NJ, et al. A novel object-oriented stereo matching on multi-scale superpixels for low-resolution depth mapping. 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); 2010. pp. 5046–5049.
- 8. Mori G, Ren XF, Efros A, et al. Recovering human body configurations: combining segmentation and recognition. CVPR '04, vol. 2, Washington, DC, 2004; pp. 326–333.
- 9. Ishikawa H, Stein DM, Wollstein G, et al. Macular segmentation with optical coherence tomography. Invest Ophthalmol Vis Sci. 2005;46:2012–2017. doi: 10.1167/iovs.04-0335.
- 10. Xu J, Tolliver D, Ishikawa H, et al. 3D OCT retinal vessel segmentation based on boosting learning. World Congress on Medical Physics and Biomedical Engineering. 2009;25:179–182.
- 11. Xu J, Ishikawa H, Wollstein G, et al. Automated detection of major retinal nerve fiber bundle path (RNFBP) on three-dimensional (3D) spectral domain optical coherence tomography (SD-OCT) images. Annual Meeting of the International Society for Imaging in the Eye (ISIE); May 2010; Ft. Lauderdale.
- 12. Shi JB, Malik J. Normalized cuts and image segmentation. IEEE Trans. on Pattern Analysis and Machine Intelligence. 2000 Aug;22(8):888–905.
- 13. Xu J, Chutatape O, Chew P. Automated optic disk boundary detection by modified active contour model. IEEE Trans. on Biomedical Engineering. 2007 Mar;54(3). doi: 10.1109/TBME.2006.888831.
- 14. Friedman J, Hastie T, Tibshirani R. Additive logistic regression: a statistical view of boosting. Annals of Statistics. 2000;28(2):337–407.




