Abstract.
We propose an automated segmentation method to detect, segment, and quantify hyperreflective foci (HFs) in three-dimensional (3-D) spectral domain optical coherence tomography (SD-OCT). The algorithm is divided into three stages: preprocessing, layer segmentation, and HF segmentation. A supervised classifier (random forest) was used to produce a set of boundary probabilities, to which an optimal graph search method was then applied to identify and produce the layer segmentation, aided by the Sobel edge algorithm. An automated grow-cut algorithm was applied to segment the HFs. The proposed algorithm was tested on 20 3-D SD-OCT volumes from 20 patients diagnosed with proliferative diabetic retinopathy (PDR) and diabetic macular edema (DME). The average dice similarity coefficient and correlation coefficient (r) are 62.30% and 96.90% for PDR, and 63.80% and 97.50% for DME, respectively. The proposed algorithm can provide clinicians with accurate quantitative information, such as the size and volume of the HFs. This can assist in clinical diagnosis, treatment planning, and the monitoring of disease progression.
Keywords: hyperreflective foci segmentation, spectral domain optical coherence tomography, layer segmentation, grow-cut, diabetic retinopathy
1. Introduction
Diabetic retinopathy (DR) is the leading cause of blindness in patients aged 16 to 64 years in developed countries.1,2 Based on the presence of its clinical features, DR is classified into five types, namely mild nonproliferative diabetic retinopathy (NPDR), moderate NPDR, severe NPDR, proliferative DR (PDR), and diabetic macular edema (DME).3 In eyes with PDR [as shown in Fig. 1(a)], fibrovascular proliferation results from ischemia and the release of vasoproliferative factors. Proliferation extends beyond the internal limiting membrane (ILM) and results in vitreous hemorrhage and vitreoretinal traction, which can lead to neovascular glaucoma and severe vision loss.4,5 In NPDR, the small blood vessels in the retina become damaged and can leak fluid into the retinal tissue, resulting in blurry vision and loss of portions of the field of vision. DME, as shown in Fig. 1(d), is one of the most common causes of visual loss in both proliferative and nonproliferative diabetic retinopathy. It occurs when fluid and protein deposits collect on or under the macula of the eye, causing it to thicken and swell (edema). The swelling may distort a person’s central vision. Diabetic patients without DR remain at risk, since the longer a person has diabetes, the higher the risk of developing some ocular problem.6 DR often goes unnoticed until vision loss occurs; early detection, timely treatment, and appropriate follow-up care of diabetic eye disease can protect against vision loss.
Fig. 1.
B-scan of (a) PDR and (d) DME, showing HFs (yellow arrows) at different retinal layers. (b) and (e) are the 2-D false-color images of (a) and (d); (c) and (f) are the 3-D intensity value distributions.
Several algorithms have been designed for the segmentation of retinal anatomy in DR, but early research in the automated detection of pathology generally investigated fluorescein angiography images rather than the advanced imaging modalities available today. More recently, researchers have been exploring different modalities, such as OCT and fundus images, using different techniques: some rely on global image processing, whereas others investigate the localization and segmentation of DR. Here, we highlight methods that have been widely used in the localization and segmentation of DR. In one study,7 the authors filtered the images using fixed scales and orientations of a Gabor filter bank of several filters. The stages of NPDR were then obtained by analyzing the number of maxima in the energy-versus-orientation plot. Finally, various DR diseases, such as PDR, NPDR, and DME, were differentiated using thresholding and edge detection techniques. They obtained a good classification rate for macula-related diseases using 24 eye images. In another study, Sopharak et al.8 proposed an exudate detection technique based on mathematical morphology for retinal images of nondilated pupils, which are of low quality. They chose this technique because it is fast and requires little computational time. The method in Ref. 9 presented an extension of the gradient method for the detection of red lesions on fundus images. The images were corrected for shading and noise, the expanding gradient was calculated for each pixel, a map of the image was obtained, and thresholding was performed to obtain the target object. Of the 687 true red lesions included in 47 eye fundus images, 584 red lesions were found using this method, along with 106 false red spots. Another study10 developed an automated system to analyze retinal images for important features of DR using image-processing techniques and an image classifier based on an artificial neural network, which classifies the images according to disease condition. The vascular network, optic disk, and lesions such as exudates were identified by this work.
Another work based on computer-aided detection was developed in Ref. 11. It detected the fovea, blood vessel network, optic disk, and the bright and dark lesions associated with DR. Lesion detection was accomplished by eliminating the normal retinal components: blood vessels, fovea, and optic disk. The image was partitioned into two regions, fovea and nonfovea, since each has a different background; statistical adaptive thresholding and filtering were then applied. In another study,12 the proposed algorithm for hard exudate detection was composed of four main stages: image preprocessing and enhancement, feature extraction, classification, and postprocessing. The image was enhanced to normalize intensity and contrast, and the algorithm extracted dynamical training sets from each image. The pixels were then classified using Fisher’s linear discriminant, after which a postprocessing technique was applied to differentiate the hard exudates from cotton wool spots and other artifacts. The work discussed so far focused mainly on the automatic identification, detection, and segmentation of DR in fundus images; these techniques are therefore not applicable to the segmentation and localization of DR in SD-OCT images. Hence, in this paper, we present an automated method for the segmentation of hyperreflective foci (HFs) in SD-OCT with DR.
HFs are commonly found in diabetic patients.13–15 Coscas et al.13,14 first reported the presence of HFs in SD-OCT as small lesions scattered throughout all retinal layers but mainly located in the outer retinal layers around fluid accumulation in the intraretinal cystoid spaces. They suggested that the presence of HFs could affect prognosis and treatment decisions, particularly in patients with age-related macular degeneration (AMD). Subsequently, HFs have also been reported in retinal vein occlusion and DME.16,17 Bolz et al.15 described HFs as punctiform hyperreflective elements distributed throughout all retinal layers in eyes with different types of DME. Uji et al.18 also reported the presence of HFs in the outer and inner retina in eyes with DME. A reduction in HFs has been reported in patients with diabetic diseases after treatment and has correlated positively with visual acuity outcomes.19,20 Additionally, other reports suggest that the number and potentially the location of HFs may be a predictor of final treatment outcome in diabetic disease.17–19 As such, we expect that the volume occupied by HFs in the retina may be a useful diagnostic metric. Quantitative tools for assessing HFs may lead to better metrics for choosing treatment protocols. Automated methods for segmentation of the HFs are necessary to efficiently assess an entire 3-D OCT image stack and to estimate the total HFs volume.
Segmentation of HFs is tedious and complicated because HFs are difficult to identify, segment, and quantify. The major challenges in this segmentation process are highlighted below. Figure 2(a) shows the presence of HFs in all retinal layers, especially between the ILM and the inner segment–outer segment (IS–OS) layers; in this case, the layers are difficult to segment. Ruptured retinal layers, as shown in Fig. 2(b), are another characteristic of HFs. This tends to make the HFs difficult to locate, since the layers cannot be identified for segmentation. Layer segmentation, the most important step toward segmenting the HFs, is difficult to perform in both cases, and to date there is no layer segmentation algorithm for this problem. Figure 2(c) shows another challenge of HFs segmentation: the foreground (HFs) and the background exhibit extremely similar pixel intensities, so identifying the foreground and differentiating it from the background are difficult. This problem is shown in Figs. 1(c) and 1(f), where the intensity distributions of the HFs and the background are difficult to separate, and in Figs. 1(b) and 1(e), where it is virtually impossible to identify which pixels belong to the HFs and which to the background. The weak boundary of HFs is shown in Figs. 2(d) and 2(e). For successful segmentation, the target object should possess a strong boundary for identification, recognition, and classification. Since this feature is rarely available in HFs segmentation, it constitutes another major challenge. A further setback is the presence of noise in the OCT images. The primary noise is speckle, which has a significant effect on HFs segmentation since speckles are similar to HFs in reflectivity (brightness), size, and shape (both have no definite shape). The application of a denoising algorithm tends to eliminate smaller HFs.
Fig. 2.
Identification of HFs in retinal OCT images. (a) HFs in different layers, (b) HFs in ruptured layers, (c) similarity of HFs with background, and (d), (e) HFs weak boundary.
We note that there is currently no software available to manually or automatically obtain the HFs volume from OCT machines. Using image-editing software, available under contractual agreement with such vendors, an expert grader can manually mark out the HFs regions. However, manually analyzing and segmenting HFs across large OCT acquisitions is time consuming and tiring. Furthermore, manual segmentation by an expert grader depends on the individual’s visual judgment, which limits reliability. Such a technique lacks efficiency and accuracy; hence, a better technique is proposed in this research. To address this need, this paper presents a fully automated HFs segmentation technique for 3-D SD-OCT images to evaluate the potential role of HFs as an independent prognostic indicator of visual outcome in treated patients, as well as in disease monitoring and progression. To our knowledge, this is the first study to segment HFs in SD-OCT in patients with DR.
This paper’s main contributions are (1) an automatic seed generation method that makes the traditional grow-cut algorithm fully automated, (2) amendments to the core grow-cut algorithm that improve its performance over the state-of-the-art, (3) an improvement in the computational time of the grow-cut algorithm, and (4) experimental results on 20 eyes from 10 PDR and 10 DME patients demonstrating that our method can achieve high HFs segmentation accuracy and effectively measure the HFs volume.
2. Materials and Methods
2.1. Patients and Data Acquisition
OCT images of patients diagnosed with varied levels of retinopathy severity of DME and PDR were obtained using a Cirrus SD-OCT device (Carl Zeiss Meditec, Inc., Dublin, California). Each scan covered a macular area centered on the fovea, and each acquired 3-D cube comprised 128 B-scans. Of the 20 subjects, 10 were diagnosed with PDR and the other 10 with DME. These patients, with various forms and levels of DR, ranged in age from 35 to 65 years, with an average age of 42. The 20 3-D SD-OCT cubes from the 20 patients were carefully reviewed for the presence of HFs, noticeable as round or oval shapes in different retinal layers. Informed consent was obtained, and the study was approved by the Institutional Review Board of the First Affiliated Hospital of Nanjing Medical University.
2.2. Algorithm Overview
Figure 3 summarizes the algorithm. A preprocessing step, which includes denoising and intensity normalization, was first performed to remove artifacts and to establish intensity consistency. Layer segmentation using an optimal graph search method, coupled with a supervised classifier (random forest), followed. Finally, an automated grow-cut method was applied to segment the HFs. The full process is described in the sections below.
Fig. 3.
Overview of the proposed automated HFs segmentation algorithm. B-filter, bilateral filtering; ILM, internal limiting membrane; and IS–OS, inner segment–outer segment.
We chose the random forest classifier based on our needs in this research: (a) it is simple and requires few parameters, (b) it can learn complex nonlinear relationships accurately, (c) it performs better than state-of-the-art classifiers, (d) it can handle multilabel problems, (e) it is computationally efficient, and (f) it generates a probability for each label, which we need for the soft classification of a one-pixel-wide boundary, since hard classification of such pixels suffers dramatically from both false positives and false negatives. These probability maps are then input to a boundary identification algorithm (3-D graph search), which finds contours separating the retinal layers in each OCT image.
2.3. Preprocessing
2.3.1. Denoising
A bilateral filter is quite effective at removing modest levels of noise, and its performance has received renewed attention in the image-processing community.21,22 In image denoising applications, edge preservation is a priority, and this important feature of the bilateral filter23–25 is required in this study. The OCT images are corrupted by noise, especially speckle and additive white Gaussian noise,26 and the corrupted image is modeled as

$$f(i) = g(i) + \sigma\,\eta(i), \quad i \in \Omega, \tag{1}$$

where $\Omega$ is a finite rectangular domain of $\mathbb{Z}^2$, $g$ is the clean image, $\sigma$ is the level of noise, and $\eta(i)$ are independent variables distributed as $\mathcal{N}(0,1)$. The denoised image is obtained as

$$\hat{g}(i) = \frac{1}{W(i)} \sum_{j \in S} G_{\sigma_s}(i - j)\, G_{\sigma_r}\big(f(i) - f(j)\big)\, f(j), \tag{2}$$

where $G_{\sigma_s}$, $G_{\sigma_r}$ are the spatial and range (Gaussian) kernels, respectively, and $W(i) = \sum_{j \in S} G_{\sigma_s}(i - j)\, G_{\sigma_r}(f(i) - f(j))$ is the normalizing weight, where $G_\sigma(x) = \exp(-\|x\|^2 / 2\sigma^2)$.23 The domain $S$ is a restricted square neighborhood.23,27 The result is shown in Fig. 4.
Fig. 4.
Denoised image. (a) Input image and (b) denoised output image using bilateral filter.
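To make the filtering step concrete, the following is a minimal, brute-force Python sketch of Eq. (2). It is an illustration only: the kernel widths `sigma_s` and `sigma_r` and the neighborhood half-width are placeholder values, not the tuned parameters used in our experiments.

```python
import numpy as np

def bilateral_filter(f, sigma_s=3.0, sigma_r=0.1, half_width=5):
    """Brute-force bilateral filter of Eq. (2) for a 2-D image f in [0, 1].

    sigma_s: width of the spatial Gaussian kernel (pixels).
    sigma_r: width of the range Gaussian kernel (intensity units).
    The square neighborhood S has side 2*half_width + 1.
    """
    f = f.astype(float)
    H, W = f.shape
    out = np.zeros_like(f)
    # Spatial kernel G_{sigma_s}, precomputed once over the neighborhood.
    ax = np.arange(-half_width, half_width + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_s**2))
    pad = np.pad(f, half_width, mode='reflect')
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * half_width + 1, j:j + 2 * half_width + 1]
            # Range kernel G_{sigma_r}: penalizes intensity differences.
            rng = np.exp(-((patch - f[i, j])**2) / (2.0 * sigma_r**2))
            w = spatial * rng
            out[i, j] = np.sum(w * patch) / np.sum(w)  # normalized by W(i)
    return out
```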
2.3.2. Intensity normalization
The denoised OCT images suffer from intensity variation, which makes it difficult to extract features successfully. To overcome this problem, we applied intensity normalization to balance these variations; this is always an important step in quantitative OCT image analysis, especially when extracting intensity-based features. We rescaled the intensity values of each OCT B-scan by contrast rescaling into the range [0, 1]. Contrast rescaling was applied to each B-scan such that intensities in the range $[0, I_{\max}]$ were rescaled to [0, 1], where $I_{\max}$ denotes the maximum intensity value, obtained by median filtering (kernel size 12 pixels) each individual A-scan within the same B-scan. Intensity values larger than $I_{\max}$ were rescaled to 1. The major problem with this is that some hyperintense spots were found at the surface of the image, and their effect on the performance of the algorithm cannot be overlooked. Therefore, we applied a more robust approach by setting $I_{\max}$ to a value 5% above the maximum of the whole median-filtered image. The 5% margin was chosen after several trial-and-error experiments to ascertain which value gives the best result with the least side effects. At this setting, the contrast rescaling removes the hyperintense reflections found at the surface of the retina while retaining the overall intensity values in the B-scan.
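A minimal sketch of this rescaling for one B-scan follows, assuming depth runs along the rows so that each column is an A-scan; the function name and the array layout are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def normalize_bscan(bscan):
    """Contrast-rescale one B-scan to [0, 1] as described above.

    Assumes rows = depth and columns = A-scans. A 12-pixel median filter
    runs down each A-scan (axis 0); the rescaling ceiling I_max is set
    5% above the maximum of the whole median-filtered image.
    """
    med = median_filter(bscan.astype(float), size=(12, 1))  # per A-scan
    i_max = 1.05 * med.max()                 # 5% above the filtered maximum
    return np.clip(bscan / i_max, 0.0, 1.0)  # values above I_max map to 1
```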
2.4. Layer Segmentation
Layer segmentation consists of several subprocesses: boundary identification, feature extraction, boundary classification, and layer boundary tracing using the graph search method. These subprocesses are discussed in detail below.
2.4.1. Boundary identification
After the preprocessing steps, anisotropic Gaussian smoothing was applied at different orientations and scales.28 The Gaussian kernel at each scale, initially oriented at 0 deg, was rotated above and below the horizontal (on each B-scan) to obtain the top, center (0 deg), and bottom responses. Then, an effective feature neighborhood29 was applied, with an additional filter bank in the A-scan direction, and a second derivative was taken. The two boundaries with the highest positive gradient profiles (the ILM and the IS–OS layer in Fig. 5) were once again smoothed with a Gaussian filter for proper boundary-point detection (a simplified sketch of this two-boundary detection follows Fig. 5), since in this study HFs are mainly present between the ILM and the IS–OS.18
Fig. 5.
Layer detection in SD-OCT: (a) vertical layer detection and (b) horizontal detection. The arrows show the respective layers (orange arrow, ILM; blue arrow, IS–OS).
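As promised above, here is a simplified per-A-scan sketch of the two-boundary detection. It replaces the oriented anisotropic filter bank with an isotropic Gaussian and keeps only the two strongest positive vertical transitions per A-scan; `sigma` and `min_separation` are illustrative values, not the paper's tuned settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_ilm_isos(bscan, sigma=2.0, min_separation=20):
    """Rough ILM/IS-OS candidates: the two strongest dark-to-bright
    transitions per A-scan of a normalized B-scan (rows = depth)."""
    smooth = gaussian_filter(bscan.astype(float), sigma)
    grad = np.gradient(smooth, axis=0)       # derivative along each A-scan
    ilm, isos = [], []
    for col in grad.T:                       # one A-scan (column) at a time
        first = int(np.argmax(col))          # strongest positive transition
        masked = col.copy()
        lo = max(0, first - min_separation)  # suppress nearby responses
        masked[lo:first + min_separation] = -np.inf
        second = int(np.argmax(masked))
        top, bottom = sorted((first, second))
        ilm.append(top)                      # upper boundary candidate
        isos.append(bottom)                  # lower boundary candidate
    return np.array(ilm), np.array(isos)
```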
2.4.2. Features extraction
Once the layers were identified, adequate features were needed to properly outline them. Feature extraction is an important step for segmenting any structure in medical images.30,31 To avoid feature redundancy and repetition, we applied a wrapper feature selection method.32 Wrapper methods use the classifier as a black box to score subsets of features based on their predictive power; wrapper methods based on the support vector machine (SVM) have been widely used in machine learning.33,34 We used SVM recursive feature elimination (SVM-RFE)33–37 to select useful and important features. This method uses a backward feature elimination scheme to recursively remove insignificant features from subsets of features: in each recursive step, it ranks the features based on the amount of reduction in the objective function and eliminates the bottom-ranked feature. Of the 70 features input, 30 were ranked as significantly important. These features are of three kinds: spatial, local, and context-aware. The spatial features help to localize retinal pixels within a generic retina using a coordinate system defined by the geometry of the subject-specific retina. The context-aware features allow the classifier to learn local relationships between neighboring points without explicitly calculating any new features. The local features provide information about the current pixel. The 30 extracted features, based on the boundary detected for each layer, were retained for later use. Features 1 to 9 are the nine intensity values of the 3 × 3 neighborhood centered on each pixel. Features 10 to 21 were extracted per pixel from the signed value of the first derivative and the magnitude of the second derivative on the 128 B-scan images, corresponding to two scales and three orientations. Features 22 to 24 are the relative distances of each pixel along the A-scan to the layer boundaries obtained in the boundary identification step. Features 25 to 30 are the average vertical gradients in the neighborhood under the current pixel. These features were used in training the random forest classifier, which learns from ground truth labels created by an expert. The trained classifier is then used to obtain the boundary probability at each pixel by computing these features for a new input dataset.38
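For illustration, this selection step maps directly onto scikit-learn's RFE wrapper around a linear SVM. The following is a minimal sketch assuming a pixel-by-feature matrix `X` (70 columns) and boundary labels `y`; the names and the linear kernel are assumptions, not our exact configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

def select_features(X, y, n_keep=30):
    """SVM-RFE: recursively drop the bottom-ranked feature (step=1)
    until n_keep features remain, as in the backward scheme above."""
    svm = SVC(kernel='linear')   # linear kernel exposes coef_ for ranking
    rfe = RFE(estimator=svm, n_features_to_select=n_keep, step=1)
    rfe.fit(X, y)
    return np.where(rfe.support_)[0]   # indices of the retained features
```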
2.4.3. Boundary classification based on features
To classify these identified layer boundaries, we used a supervised classifier (random forest). Random forest38 was chosen for its probability generation for each label, which provides a soft classification: each tree gives a separate classification for an input feature vector, since the training is randomized between trees. Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest.38 Each decision tree is constructed using a random subset of the training data. Given an ensemble of $K$ decision-tree classifiers, for the $k$'th tree a random vector $\Theta_k$ is generated, independent of the past random vectors but with the same distribution, and a tree is grown using the training set and $\Theta_k$, resulting in a classifier $h_k(\mathbf{x}) = h(\mathbf{x}, \Theta_k)$, where $\mathbf{x}$ is an input feature vector. The dimension of $\mathbf{x}$ is 30 in this paper. For an input $\mathbf{x}$, each tree provides an estimated label $\hat{y}_k \in \mathcal{Y}$, where $\mathcal{Y}$ is the set of all labels. The posterior probability for any label $y$ can be obtained as $p(y \mid \mathbf{x}) = \frac{1}{K}\sum_{k=1}^{K} I\big(h_k(\mathbf{x}) = y\big)$, where $I(\cdot)$ is an indicator function. A final label estimate for the classification is then obtained as $\hat{y} = \arg\max_{y \in \mathcal{Y}} p(y \mid \mathbf{x})$.
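As a minimal sketch of this voting scheme, scikit-learn's random forest averages the per-tree votes in the same spirit as the indicator-function estimate above; the function name and tree count are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def boundary_posteriors(X_train, y_train, X_test, n_trees=100):
    """Per-pixel boundary posteriors p(y|x) and argmax label estimates."""
    forest = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    forest.fit(X_train, y_train)           # y: per-pixel boundary labels
    proba = forest.predict_proba(X_test)   # ensemble-averaged probabilities
    y_hat = forest.classes_[np.argmax(proba, axis=1)]  # final label estimate
    return proba, y_hat
```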
This supervised classifier is trained to find the two boundaries between layers. The full training data set comprises manually delineated 3-D OCT volumes from 18 subjects, none of which is part of the test set. Each volume contains 128 B-scans. Thus, for a given B-scan, we use all of the segmented boundary points for training (i.e., 1024 points) for each of the boundaries, since each B-scan contains 1024 A-scans. This approach focuses on identifying the one-pixel-wide boundary between layers; pixels located between the boundaries are difficult to classify because of their weaker feature response. The boundaries are converted to layer segmentations by assigning each boundary pixel to the layer above it.39 The random forest classifier outputs a set of boundary probabilities that is well defined visually, and an optimal graph search algorithm is then applied to generate the boundary curves from the boundary probabilities and compute the final boundaries.
2.4.4. Optimal graph search
After the classifier produced the boundary probabilities, we applied a graph search, adhering to the basic method used in Ref. 40. Graph search optimization has been widely used in OCT.40 The approach in Ref. 40 defines a cost based on the boundary probability estimated in all B-scans and finds the collection of boundary surfaces, with their respective orders, having the minimum cost over the entire 3-D volume. Multiple constraints are used to bound the intra- and intersurface distances between adjacent pixels in the x- and y-directions as well as the minimum and maximum distances between the surfaces. The cost is calculated as 1 minus the boundary probability, since the formulation requires a minimum nonnegative cost. The max-flow/min-cut algorithm was then used to obtain the surfaces.41 With the ILM and IS–OS layers successfully segmented, the region between them becomes the region of interest (ROI).
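The full method solves a 3-D minimum-cost surface problem via max-flow/min-cut. As a simplified 2-D stand-in that conveys the cost formulation (cost = 1 − boundary probability) and the smoothness constraint, the following dynamic-programming sketch traces one boundary through a single B-scan; the parameters are illustrative.

```python
import numpy as np

def trace_boundary(prob, max_jump=2):
    """Minimum-cost path through a 2-D boundary-probability map
    (rows = depth, columns = A-scans), with the boundary row allowed to
    move at most max_jump pixels between adjacent A-scans."""
    cost = 1.0 - prob           # low cost where boundary probability is high
    H, W = cost.shape
    acc = cost.copy()           # accumulated cost, filled column by column
    back = np.zeros((H, W), dtype=int)
    for j in range(1, W):
        for i in range(H):
            lo, hi = max(0, i - max_jump), min(H, i + max_jump + 1)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))  # best predecessor
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    rows = np.empty(W, dtype=int)   # backtrack from the cheapest endpoint
    rows[-1] = int(np.argmin(acc[:, -1]))
    for j in range(W - 1, 0, -1):
        rows[j - 1] = back[rows[j], j]
    return rows                     # boundary row index per A-scan
```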
2.5. Hyperreflective Foci Segmentation
After obtaining the ROI, the HFs are restricted to a region, but HFs segmentation remains tedious and complicated because (1) the pixel intensities of the HFs and the background are similar, (2) layer extraction and segmentation are difficult, (3) HFs have no fixed location, (4) speckles are similar to small HFs, and (5) HFs boundaries are weak. These problems pose a challenge that calls for a suitable solution. Hence, we propose a framework for HFs segmentation based on the grow-cut algorithm,42 which models the image using cellular automata. The grow-cut method uses continuous-state cellular automata to interactively label (segment) the image using manually supplied user seeds.42 Many authors have used the grow-cut algorithm in medical image segmentation.43–45 Although cellular automata are spatially and temporally discrete, these abstract computational systems have proven to be useful tools, both as general models of complexity and as specific representations of nonlinear dynamics in various scientific fields. A cellular automaton is a model of a system of cell objects with some specific characteristics:46 (1) the cells live on a grid, (2) each cell has a state, and (3) each cell has a neighborhood. It is represented by the four-tuple $(L, S, N, \delta)$, where $L$ is the finite or infinite lattice, $S$ is a finite set of cell states, $N$ is the finite neighborhood, and $\delta$ is the local transition function.46–57 In the grow-cut method, the cells represent the image pixels, and the feature vector holds the gray-scale intensities (as in our case). The state of each cell $p$ is given by $S_p = (l_p, \theta_p, \vec{C}_p)$, which for each image pixel consists of a label $l_p$, a strength value $\theta_p$ in the interval [0, 1], and an image feature vector $\vec{C}_p$.
The traditional grow-cut algorithm requires manually selecting the seed, which is tiring and time consuming when the quantity of images is large. In addition, each HF in the image exhibits a different range of pixel values, and although these ranges are close, it is difficult for the ordinary eye to differentiate between the HFs and speckle noise or other reflective spots when selecting the initial seed. In this paper, we propose an automated seed supply method. The automata are initialized by assigning the corresponding seed labels using the regional and local maximum concept, in the following steps (a sketch of these steps appears after the list).
- Step 1: Obtain the regional maxima from the constrained boundary of the ILM and IS–OS within 26-connected neighborhoods. A regional maximum is a connected component of pixels with a constant intensity value $t$ whose external boundary pixels all have values less than $t$. Identified regional-maxima pixels are set to “1;” all other pixels are set to “0.”
- Step 2: From step 1, the locations of the regional maxima are known. Extract the pixel intensity values of all regional maxima and collect them in a $k \times 1$ matrix, where $k$ denotes the number of regional maxima.
- Step 3: Remove false (bright) pixels, i.e., speckles. Let $V$ represent the matrix computed in step 2; arrange the pixel values in $V$ in ascending order and compute the successive adjacent differences $d_i = V_{i+1} - V_i$, $i = 1, \ldots, k-1$. The minimum adjacent difference is used as the rejection threshold.
- Step 4: Apply a dilation operation with an appropriate spherical structuring element of radius seven pixels. The dilation operator expands unconnected edges, while erosion smooths the image by cleaning up small bright spots. All pixels are considered; the new regional-maxima values are computed into a matrix $V'$ by replacing each point with these values.
- Step 5: At the points where $V' = V$, i.e., where the dilation leaves the value unchanged, we obtain the local maxima locations.
- Step 6: Compute the average of the local maxima, which is the initial seed point (label).
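As promised above, a minimal sketch of Steps 1–6 follows. It makes two stated assumptions: the Step-3 speckle screen is simplified to a largest-gap threshold on the sorted maxima values, and the input `roi` is the 3-D volume zeroed outside the ILM/IS–OS band.

```python
import numpy as np
from scipy import ndimage

def generate_seeds(roi):
    """Automatic seed generation (Steps 1-6) inside the ILM/IS-OS band."""
    # Steps 1-2: regional maxima over a 26-connected (3x3x3) neighborhood.
    cube = np.ones((3, 3, 3), dtype=bool)
    is_max = (roi == ndimage.maximum_filter(roi, footprint=cube)) & (roi > 0)
    # Step 3 (simplified): drop the dimmest maxima as likely speckle,
    # splitting the sorted values at their largest adjacent gap.
    vals = np.sort(roi[is_max])
    if vals.size > 1:
        cut = vals[np.argmax(np.diff(vals))]
        is_max &= roi > cut
    # Step 4: grey dilation with a spherical structuring element, radius 7.
    r = 7
    zz, yy, xx = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    ball = (zz**2 + yy**2 + xx**2) <= r**2
    dilated = ndimage.grey_dilation(roi, footprint=ball)
    # Step 5: local maxima are the points the dilation leaves unchanged.
    seeds = is_max & (dilated == roi)
    # Step 6: their average intensity is the initial seed level (label).
    seed_level = roi[seeds].mean() if seeds.any() else float(roi.max())
    return seeds, seed_level
```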
Pixels with initial labels are assigned a cell strength of 1, whereas unlabeled cells are set to 0. A pseudocode for the automated grow-cut algorithm is given below in Algorithm 1.
After the initialization of the labels, the iteration rule of Eqs. (3) and (4) (see Algorithm 1) is applied:

$$l_p^{t+1} = l_q^t, \tag{3}$$

$$\theta_p^{t+1} = g\big(\|\vec{C}_p - \vec{C}_q\|\big)\,\theta_q^t, \tag{4}$$

whenever $g(\|\vec{C}_p - \vec{C}_q\|)\,\theta_q^t > \theta_p^t$ for a neighbor $q$ of cell $p$, where $g$ is the pixel similarity function between nodes $p$ and $q$, with range [0, 1], given as58

$$g\big(\|\vec{C}_p - \vec{C}_q\|\big) = 1 - \frac{\|\vec{C}_p - \vec{C}_q\|}{\max\|\vec{C}\|}. \tag{5}$$

This function is equivalent to a weight function in which $\|\vec{C}_p - \vec{C}_q\|$ is the absolute intensity difference between neighboring nodes $p$ and $q$. HFs intensities are used as the image features, with a 26-cell cubic neighborhood since our images are 3-D. Additionally, HFs voxels are brighter than the background, and this prior awareness of HFs voxels can be used in modifying the transition function $\delta$. The significant contributions of our method to the grow-cut algorithm are (a) an automatic seed generation method, (b) amendments to the core grow-cut algorithm that improve its performance over the state-of-the-art, and (c) improvement in the computational time of the grow-cut algorithm.
Algorithm 1.
Automated grow-cut.
```
// For each cell p in the lattice P:
for all p ∈ P
    l_p ← 0, θ_p ← 0                       // start unlabeled, zero strength
// Seed labels obtained from Steps 1–6:
for all seed cells p
    if seed is of class HF:   l_p ← 1, θ_p ← 1
    else (background seed):   l_p ← 2, θ_p ← 1
end for
do (until converged)
    for all p ∈ P
        l_p^{t+1} ← l_p^t, θ_p^{t+1} ← θ_p^t        // copy previous state
        for all q ∈ N(p)                   // pixel neighbors attack current cell
            // C_p and C_q are cell features (intensity)
            if g(‖C_p − C_q‖) · θ_q^t > θ_p^t
                l_p^{t+1} ← l_q^t                   // update state at p
                θ_p^{t+1} ← g(‖C_p − C_q‖) · θ_q^t
        end for
    end for
end do
```
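To make Algorithm 1 concrete, here is a readable (not speed-optimized) Python sketch of the evolution rule with $g(x) = 1 - x / \max\|\vec{C}\|$ over the 26-cell neighborhood. The explicit background-seed mask is an assumption of this sketch, and `np.roll` wraps at the volume borders, which a padded implementation would avoid; the paper's amended rule and the brightness prior are not reproduced here.

```python
import numpy as np

def growcut(volume, fg_seeds, bg_seeds, max_iters=100):
    """3-D grow-cut evolution. volume: intensities scaled to [0, 1];
    fg_seeds/bg_seeds: boolean seed masks. Returns the HFs mask."""
    label = np.zeros(volume.shape, dtype=np.int8)      # 0 = unlabeled
    label[fg_seeds], label[bg_seeds] = 1, 2
    strength = np.zeros(volume.shape)
    strength[fg_seeds | bg_seeds] = 1.0        # seeds start at full strength
    c_max = float(volume.max())
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    for _ in range(max_iters):
        changed = False
        new_label, new_strength = label.copy(), strength.copy()
        for d in offsets:                      # neighbors q attack cell p
            nl = np.roll(label, d, axis=(0, 1, 2))
            ns = np.roll(strength, d, axis=(0, 1, 2))
            nc = np.roll(volume, d, axis=(0, 1, 2))
            g = 1.0 - np.abs(volume - nc) / c_max      # Eq. (5)
            attack = g * ns
            win = (attack > new_strength) & (nl > 0)   # attacker conquers p
            new_label[win] = nl[win]
            new_strength[win] = attack[win]
            changed |= bool(win.any())
        label, strength = new_label, new_strength
        if not changed:                        # converged: no cell captured
            break
    return label == 1
```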
3. Results and Analysis
3.1. Evaluation Parameters
3.1.1. Dice similarity coefficient
Dice is defined as the intersection between two similarly labeled regions in the ground truth segmentation $G$ and the automatic segmentation $A$ over the average volume of these two regions. Its value ranges between 0 (no overlap) and 1 (perfect agreement). In this study, the Dice values are expressed as percentages and obtained using

$$\mathrm{DSC} = \frac{2\,|G \cap A|}{|G| + |A|} \times 100\%. \tag{6}$$
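Equation (6) reduces to a few lines for binary masks; a minimal sketch (function name illustrative):

```python
import numpy as np

def dice_percent(gt, auto):
    """Dice similarity coefficient of Eq. (6), as a percentage:
    0 = no overlap, 100 = perfect agreement."""
    gt, auto = gt.astype(bool), auto.astype(bool)
    inter = np.logical_and(gt, auto).sum()
    return 100.0 * 2.0 * inter / (gt.sum() + auto.sum())
```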
3.1.2. Correlation coefficient (r)
The correlation coefficient measures the strength and direction of a linear relationship between two variables; in this paper, the two variables are the ground truth $G$ and the automatic segmentation $A$. The closer the absolute value of $r$ is to one, the more closely the data are described by a linear equation. A correlation $r = -1$ indicates a perfect negative correlation, whereas $r = +1$ indicates a perfect positive correlation; datasets with values of $r$ close to zero show little to no straight-line relationship:

$$r = \frac{\mathrm{cov}(G, A)}{\sigma_G\,\sigma_A}, \tag{7}$$

where $G$ is the ground truth segmentation, $A$ is the automatic segmentation, $\mathrm{cov}(G, A)$ is the covariance between the ground truth and automatic results, $\sigma_G$ is the standard deviation of the ground truth, $\sigma_A$ is the standard deviation of the automatic result, and the associated probability value is the p-value.
The probability value is the probability, for a given statistical model, that when the null hypothesis ($H_0$) is true, the statistical summary would be the same as or more extreme than the actually observed results. It simply tests a null hypothesis against an alternative hypothesis using a dataset.
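SciPy computes Eq. (7) and the associated two-sided p-value directly; a minimal sketch, where the per-eye volume arrays are illustrative inputs:

```python
from scipy.stats import pearsonr

def volume_correlation(gt_volumes, auto_volumes):
    """Pearson r of Eq. (7) and the p-value testing H0: no correlation."""
    r, p = pearsonr(gt_volumes, auto_volumes)
    return r, p
```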
3.2. Segmentation Results
3.2.1. Comparative analysis of automatic and ground truth segmentations
Data from 20 patients with PDR and DME were analyzed, and the results from the proposed algorithm were compared with the ground truth. Tables 1 and 2 show the mean HFs volumes for the PDR and DME cases, and Table 3 shows the results of applying the metrics of Sec. 3.1 to the segmentation results of each retinal cube, using the expert labels as ground truth. The results are averages over the PDR and DME retinal cubes. The proposed algorithm and the expert grading (ground truth) were compared quantitatively and qualitatively. From the performance of the algorithm on HFs in PDR and DME in Tables 1 and 2, it can be observed that, of the 20 experiments, 18 were under-segmented and only 2 were over-segmented. The under-segmentation was largely due to the inconsistent intensity of the HFs: the expert grader considered some areas to be HFs that the proposed algorithm rejected because of highly significant property differences, as shown in Fig. 6. Another cause is the weak boundary of the HFs, as shown in Fig. 7. In such cases, the expert grader marked and included the faded trailing boundary pixels of the HFs, whereas the proposed algorithm considered such pixels to be background, since their properties were more similar to the background pixels than to the HFs pixels. Appropriate layer segmentation could have reduced the under-segmentation of the proposed algorithm; as shown in Fig. 8(d), such layer segmentation would increase the segmentation accuracy. Accordingly, our layer segmentation algorithm reduced the under-segmentation caused by the collapsed IS–OS layer by allowing segmentation of HFs around the IS–OS layer.
Table 1.
Mean volume of HFs in PDR using the ground truth (pixel).
| Experiments | Ground truth^a | Automatic^a | Difference^a | Vol. seg. (%) |
|---|---|---|---|---|
| 1 | 0.049 | 0.090 | 0.041 | 183.673 |
| 2 | 0.229 | 0.189 | 0.040 | 82.533 |
| 3 | 1.993 | 1.617 | 0.376 | 81.134 |
| 4 | 1.296 | 0.657 | 0.639 | 50.694 |
| 5 | 0.255 | 0.194 | 0.061 | 76.078 |
| 6 | 0.047 | 0.045 | 0.002 | 95.745 |
| 7 | 0.635 | 0.414 | 0.221 | 65.197 |
| 8 | 0.994 | 0.687 | 0.307 | 69.115 |
| 9 | 0.633 | 0.365 | 0.268 | 57.662 |
| 10 | 0.097 | 0.053 | 0.044 | 54.639 |
^a Multiply each value by a common scale factor.
Table 2.
Mean volume of HFs in DME using the ground truth (pixel).
| Experiments | Ground truth^a | Automatic^a | Difference^a | Vol. seg. (%) |
|---|---|---|---|---|
| 1 | 0.046 | 0.042 | 0.004 | 91.304 |
| 2 | 0.003 | 0.018 | 0.015 | 600.000 |
| 3 | 0.299 | 0.284 | 0.015 | 94.983 |
| 4 | 0.831 | 0.791 | 0.040 | 95.187 |
| 5 | 0.252 | 0.174 | 0.078 | 69.048 |
| 6 | 3.133 | 1.735 | 1.398 | 55.378 |
| 7 | 0.793 | 0.498 | 0.295 | 62.800 |
| 8 | 0.157 | 0.079 | 0.078 | 50.318 |
| 9 | 0.829 | 0.627 | 0.202 | 75.633 |
| 10 | 0.305 | 0.065 | 0.240 | 21.311 |
^a Multiply each value by a common scale factor.
Fig. 6.
Unsegmented region result compared with ground truth. (a) Original image, (b) the ground truth, (c) automatic result, (d) ground truth binarized image, and (e) automatic binarized image. Yellow arrows identify the ambiguous region classified as an HFs region by the expert human grader.
Fig. 7.

Segmentation results compared with ground truth. (a) original image, (b) the ground truth, (c) the automatic result, (d) ground truth binarized image, and (e) automatic binarized image. Yellow arrows identify the false negative segmentation results corrected by an expert human grader.
Fig. 8.

Effect of layers segmentation. (a) original image, (b) the ground truth, (c) the automatic result, (d) layers segmentation result, (e) ground truth binarized image, and (f) automatic binarized image. Yellow arrows identify the false negative segmentation results corrected by an expert human grader.
To date, there is no layer segmentation algorithm suitable for these cases. The over-segmentations were due to false pixels exhibiting the same properties as HFs. These pixels are scattered and present in every layer of the retinal image, making it difficult for the ordinary eye, and even the proposed algorithm, to differentiate them from HFs, since both have the same properties. This fluctuation in intensity, together with the uneven intensity range of the background, resulted in false positive segmentations for the proposed algorithm, as shown in Fig. 9. The assumption in this research work was that any such bounded area should be discarded and removed, since it is regarded as speckle noise. Figure 10 shows one of the cases in which, amid the challenges of scattered, varying HFs-like objects, both the expert grader and the proposed algorithm rejected such pixel regions and classified them correctly.
Fig. 9.

Segmentation results compared with ground truth. (a) original image, (b) the ground truth, (c) the automatic result, (d) ground truth binarized image, and (e) automatic binarized image. Yellow arrows identify the false positive segmentation results corrected by an expert human grader.
Fig. 10.

Automatic segmentation results. (a) original image, (b) the ground truth, and (c) the automatic result. Yellow arrows indicate the pixels both rejected by the expert grader and the proposed automatic algorithm.
As shown in Table 3, the proposed algorithm agreed well with the ground truth. The dice similarity coefficient (DSC) for PDR and DME is 62.30% and 63.80%, respectively, with correlation coefficients of 96.90% and 97.50%, respectively. The results show a good correlation between the proposed algorithm and the ground truth. These measurements reveal that the proposed algorithm performed better in DME than in PDR. This is because, in most DME images, the HFs are bigger than in PDR, so the proposed algorithm more easily loses smaller bounded-area pixels due to the many aforementioned challenges. The high p-values of 0.555 and 0.473 alongside high estimates of the correlation coefficient could only occur with a very small sample size.59 The alternative hypothesis might in fact still be true but, owing to the small sample size, the study did not have enough power to detect that $H_0$ was likely to be false. Because of the cost of ground truth annotation and the time it consumes, this research work could not add more ground truth data to verify that $H_0$ is false.
Table 3.
Mean HFs segmentation metrics with respect to the ground truth.
| Metrics | PDR | DME |
|---|---|---|
| Dice | 0.623 | 0.638 |
| Correlation coefficient (r) | 0.969 | 0.975 |
| p-value | 0.555 | 0.473 |
Figure 10 shows the automatic segmentation results on SD-OCT image slices with boundaries clearly segmented: the first column shows the original image, the second the ground truth, and the third the automatic segmentation results. It shows a case in which the HFs boundaries were segmented accurately amid the aforementioned challenges; both the HFs and the background pixels were classified accurately. Figures 11(a)–11(e) show a 3-D volumetric rendering of the HFs for proper visualization, produced from a cube (128 images) with high segmentation accuracy. Figure 11(a) is the frontal view of the ground truth HFs in white, Fig. 11(b) shows the automated segmentation superimposed on the ground truth (white), and Figs. 11(c)–11(e) are the rear and orthogonal views at different angles.
Fig. 11.

3-D visualization of segmented HFs. (a)–(e) Quantification of volumetric spaces of HFs in different 3-D projections: (a) the frontal view of the ground truth (white), (b) the automatic segmented HFs (red) superimposed on the ground truth, (c) rear view of the automatic result superimposed on the ground truth, and (d), (e) orthogonal views at different angles.
Figure 12 shows the linear regression analysis results, and Fig. 13 shows the Bland–Altman plots for the automatic segmentation results versus the ground truth for PDR and DME. It is observed from Figs. 12 and 13 that the automatic segmentation results correlate strongly with the PDR ground truth (r = 0.969) and the DME ground truth (r = 0.975). In this research work, 20 retinal 3-D OCT images were segmented, with bias and limits of agreement calculated separately for PDR and DME; an analysis based on these 20 subjects yields the agreement plots shown in Fig. 13. The 95% limits of agreement contain 8/10 of the PDR difference scores and 9/10 of the DME difference scores. The mean difference (bias) of the measurements between the ground truth and the proposed automatic segmentation algorithm is small for both PDR and DME; the standard deviations of the differences are 3.82 for PDR and 4.66 for DME, and the widths of the 95% limits of agreement are 4.04 for PDR and 8.02 for DME. Hence, the automatic segmentation results from the proposed algorithm can stand in for the expert (ground truth). Figure 14 shows the discrepancy between the ground truth and the proposed automatic algorithm. The volumes obtained for both PDR and DME correlate well with the ground truth, with only an abrupt difference in volume between the ground truth and the proposed automatic algorithm in experiment 10 of the DME set; this is associated with difficulty in segmenting the layers, as shown in Fig. 15.
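For reference, the bias and 95% limits of agreement in Fig. 13 follow the standard Bland–Altman computation; a minimal sketch, with illustrative per-eye volume inputs:

```python
import numpy as np

def bland_altman(gt, auto):
    """Bias and 95% limits of agreement (mean ± 1.96 SD of the paired
    differences) between automatic and ground truth volumes."""
    diff = np.asarray(auto, float) - np.asarray(gt, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```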
Fig. 12.
Correlation (a) PDR and (b) DME of the volume of HFs measured by automatic segmentation result and ground truth.
Fig. 13.
Agreement (a) PDR and (b) DME of the volume of HFs measured by automatic segmentation result and ground truth.
Fig. 14.
Discrepancy (a) PDR and (b) DME of the volume of HFs measured by automatic segmentation result and ground truth.
Fig. 15.
Comparison between the proposed method and other state-of-the-art medical image segmentation algorithms. (a1–a3) Experimental images used in the comparison, (b1–b3) the ground truths, (c1–c3) the results of the proposed method, (d1–d3) the results of the level set method, (e1–e3) the results of the active contour method, and (f1–f3) the results of the traditional grow-cut method.
4. Discussion and Conclusion
In this paper, we developed an automated HFs segmentation algorithm for 3-D retinal SD-OCT using cellular automata and the grow-cut algorithm, involving three main steps: (1) preprocessing, (2) layer segmentation, and (3) HFs segmentation. The proposed method was compared with other segmentation algorithms in a clinical application. The active contour,60 level set,61 and traditional grow-cut algorithms possess high segmentation accuracy and performance and are therefore widely used in medical image segmentation. Figure 15 presents the results of the proposed method and the three state-of-the-art methods. It is evident that none of the three methods matches the proposed method's accuracy in segmenting the HFs. Additionally, the active contour performs better than the level set method. The active contour method uses the initialization point (a manually selected HFs pixel) to segment the HFs from an initial contour and automatically continues to trace the connected neighboring pixels in the image. The level set method does not depend solely on the initialization point (a manually selected HFs pixel); it splits automatically to detect more connected HFs components in the image. The traditional grow-cut uses a cellular automaton as the image model, in which the automaton evolution models the segmentation process with manually selected HFs pixels as the foreground; each cell of the automaton has some label, and during the evolution some cells capture their neighbors and replace their labels. All these methods are widely used in medical image segmentation, and their performance varies across frameworks. In this case, the active contour method may be more suitable than the level set and the traditional grow-cut methods. Although these methods (level set, active contour, and traditional grow-cut) perform poorly in this study, their performance in other frameworks is highly commendable.
We experienced three major challenges, namely layer segmentation, the weak boundaries of HFs, and speckle noise. The effect of these challenges is quite visible in the overall results of the algorithm, yet we were able to minimize it. The layer segmentation part of the algorithm could segment the respective layer boundaries as required by the overall algorithm. This implies that any HFs outside the designated layer boundaries are not considered for segmentation, whereas an expert human grader would consider them.
While most of the HFs in this research were found within the designated layer boundaries, the algorithm performed poorly in some special cases (where the layers were totally invisible), as shown in Fig. 16, due to total rupture of the retinal layers. These phenomena occur often in HFs segmentation, and their effect on the overall performance of the algorithm is substantial.
Fig. 16.

Examples of total layer rupture. (a) Some layers are still visible, (b) most layers in the left part of the image are invisible, and (c) all layers are unidentifiable except the ILM.
Since HFs suffer from seriously weak boundaries, properly classifying the boundary pixels into their respective classes poses a serious challenge for the algorithm. The boundary pixels and the background pixels have such similar intensity values that there is a high tendency to misclassify them. This effect is shown in Fig. 7, in which an expert human grader corrected the segmentation result. The images used in this research contained speckle noise, one of the known problems of OCT images. These speckles are also found where the HFs are situated and have a similar appearance to HFs. To reduce them, we denoised the images with caution, aiming to avoid eliminating smaller HFs while removing speckles. The speckles remaining in the image are a major issue in the segmentation process, since HFs and speckles are similar and speckles could easily be mistaken for HFs by the algorithm. To reduce this tendency, size was used as a key criterion in the HFs segmentation, which minimized the error due to speckle noise.
The algorithm was tested on 20 3-D OCT datasets of patients diagnosed with PDR and DME. The dataset covers the following levels of DR: (1) PDR [(a) early PDR, (b) high-risk PDR, and (c) advanced PDR] and (2) DME [(a) nonclinically significant DME and (b) clinically significant DME].62 The PDR dataset consists of one early PDR, three high-risk PDR, and six advanced PDR cases; the DME dataset consists of three nonclinically significant DME and seven clinically significant DME cases. Our experimental results achieve reasonable consistency with the ground truth, and the performance evaluation demonstrates that our HFs segmentation method is efficient and accurate when compared with the ground truth. The mean DSC, correlation coefficient (r), and probability value (p-value) for the proposed algorithm are 62.30%, 96.90%, and 0.555, respectively, for PDR and 63.80%, 97.50%, and 0.473, respectively, for DME. The linear regression analysis also shows a close correlation between the automatic segmentation and the ground truth (r = 0.969 for PDR and r = 0.975 for DME), indicating that the segmented HFs are in agreement with the ground truth. To the best of our knowledge, this is the first work on HFs segmentation; hence, our algorithm could only be compared with ground truth produced by experts. The traditional grow-cut algorithm has been improved significantly in terms of time and segmentation efficiency. One of the disadvantages of the proposed algorithm is its operating time, since it works on each pixel of the 3-D images. The proposed algorithm was implemented in MATLAB R2013a and tested on a PC with an Intel Core i5-4200U CPU @ 1.60 GHz and 8 GB of RAM. The mean running time is 20 min per cube (128 images). The computational time for each of the state-of-the-art segmentation algorithms and the proposed algorithm is presented in Table 4. The proposed algorithm has a better computational time than the other segmentation algorithms, followed by the level set algorithm, which is faster than the active contour algorithm in this case; the traditional grow-cut algorithm has the highest computational time, as reported in Table 4.
Table 4.
Mean computational time of various segmentation methods as compared with the proposed method (min).
| Segmentation method | Computational time (min) per image cube (128 images) |
|---|---|
| Proposed method | 20 |
| Traditional grow-cut | 50 |
| Level set | 25 |
| Active contour | 27 |
To the best of our knowledge, previous works have only described or reported the presence of HFs in DR and other related diseases. Since HFs are present only in retinal diseases, HFs themselves are not regarded as a disease, but their presence can lead ophthalmologists to more accurate conclusions about the underlying disease and to more efficient treatment processes for patients. Hence, segmentation of HFs could serve as one of the diagnostic tools for analyzing such diseases. The ambiguous regions left unsegmented by the proposed algorithm will have limited clinical impact, since the HFs are not the main disease but rather one of several diagnostic signs used in analyzing such diseases. In addition, no evidence has shown that the volume of HFs is proportional to the severity of any such disease; rather, the segmented volume would support ophthalmologists in their decision-making and diagnosis. We present an algorithm that can detect, segment, and quantify HFs in SD-OCT, a capability that has not previously been available from commercial software, open-source tools, or individuals. This work is an eye-opener for researchers, breaking the assumption that HFs can only be segmented manually rather than by a computer algorithm. Although this algorithm opens a research area for other researchers or commercial software to explore and propose better methods, our future work will include improving the algorithm for better performance. In conclusion, the segmentation results of our algorithm can be used in the clinical diagnosis, treatment planning, and disease monitoring and progression of diabetic patients.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (61671242, 61701222), a grant from the Fundamental Research Funds for the Central Universities (30920140111004), the Suzhou Industrial Innovation Project (SS201759), and the Six Talent Peaks Project in Jiangsu Province (2014-SWYY-024).
Biographies
Idowu Paul Okuwobi received his BS degree in aeronautical engineering and his MS degree in mechanical and electrical engineering from Nanjing University of Aeronautics and Astronautics, China, in 2012 and 2015, respectively. He is a PhD student at the Nanjing University of Science and Technology, China. His current research interests include medical image segmentation and analysis, digital image processing, image segmentation, machine learning, and pattern recognition.
Wen Fan received her MD degree from Wuhan University in 2012. She is a lecturer at the Medical College of the Nanjing Medical University and an attending doctor in the ophthalmology department of the First Affiliated Hospital of Nanjing Medical University. She is mainly engaged in fundamental and clinical research of ophthalmology imaging.
Chenchen Yu received his bachelor’s degree from Nanjing University of Science and Technology in 2015. Currently, he is a master’s degree student in the same university. His research is about medical image processing and analysis.
Songtao Yuan is a chief physician of ophthalmology and an associate professor. Currently, he is a member of the Chinese Medical Association as well as the Association for Research in Vision and Ophthalmology of the United States (ARVO). His fields of expertise are diseases of retina and vitreous, retinal degeneration diseases, and research on fundamental and transformation of ophthalmic regeneration. He is an expert in vitreous and retinal surgery, especially in complex and joint surgeries.
Qinghuai Liu is a vitreoretinal specialist, professor of ophthalmology, and the head of the Department of Ophthalmology, the First Affiliated Hospital of Nanjing Medical University (NMU). He is a committee member of Chinese Ophthalmological Society (COS) and the chairman of Jiangsu Ophthalmological Society. He has mainly focused his research on the prevalence, gene susceptibility, pathological mechanism, biomarker identification, and translational medicine of retinal diseases, such as AMD and diabetic retinopathy (DR).
Yuhan Zhang earned his master’s degree from Nanjing University of Science and Technology in April 2017. Currently, he is a doctorate student at the Nanjing University of Science and Technology. His doctoral research is about SD-OCT image processing.
Bekalo Loza received her BS degree in computer science from Hawassa University, Ethiopia, in 2009 and her MS degree in geoinformatics from the University of Twente, Netherlands, in 2014. She worked as a lecturer at Hawassa University, Ethiopia, from 2009 to 2012. Currently, she is a doctorate student at Nanjing University of Science and Technology. Her doctoral research is in medical image analysis.
Qiang Chen received his BSc degree in computer science and his PhD in pattern recognition and intelligence systems from Nanjing University of Science and Technology, China, in 2002 and 2007, respectively. Currently, he is a professor with the School of Computer Science and Engineering at the Nanjing University of Science and Technology. His main research topics are image processing and analysis.
Disclosures
No conflicts of interest, financial or otherwise, are declared by the authors.
References
- 1.Cheung N., Mitchell P., Wong T. Y., “Diabetic retinopathy,” Lancet 376(9735), 124–136 (2010).https://doi.org/10.1016/S0140-6736(09)62124-3 [DOI] [PubMed] [Google Scholar]
- 2.Bourne R. R., et al. , “Causes of vision loss worldwide, 1990–2010: a systematic analysis,” Lancet Global Health 1(6), e339–e349 (2013).https://doi.org/10.1016/S2214-109X(13)70113-X [DOI] [PubMed] [Google Scholar]
- 3.Philip S., et al. , “The efficacy of automated ‘disease/no disease’ grading for diabetic retinopathy in a systematic screening programme,” Br. J. Ophthalmol. 91(11), 1512–1517 (2007).https://doi.org/10.1136/bjo.2007.119453 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.The Diabetic Retinopathy Study Research Group, “Four risk factors for severe visual loss in diabetic retinopathy,” Arch. Ophthalmol. 97(4), 654–655 (1979).https://doi.org/10.1001/archopht.1979.01020010310003 [DOI] [PubMed] [Google Scholar]
- 5.The Diabetic Retinopathy Study Research Group, “Photocoagulation treatment of proliferative diabetic retinopathy: clinical application of diabetic retinopathy study (DRS) findings,” Arch. Ophthalmol. 88(7), 583–600 (1981).https://doi.org/10.1016/S0161-6420(81)34978-1 [PubMed] [Google Scholar]
- 6.Yau J. W., et al. , “Global prevalence and major risk factors of diabetic retinopathy,” Diabetes Care 35(3), 556–564 (2012).https://doi.org/10.2337/dc11-1909 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Zahlmann G., et al. , “Hybrid fuzzy image processing for situation assessment,” IEEE Eng. Med. Biol. Mag. 19(1), 76–83 (2000).https://doi.org/10.1109/51.816246 [DOI] [PubMed] [Google Scholar]
- 8.Sopharak A., et al. , “Automatic detection of diabetic retinopathy exudates from non-dilated retinal images using mathematical morphology methods,” Comput. Med. Imaging Graphics 32(8), 720–727 (2008).https://doi.org/10.1016/j.compmedimag.2008.08.009 [DOI] [PubMed] [Google Scholar]
- 9.Badea P., Danciu D., Davidescu L., “Preliminary results on using an extension of gradient method for detection of red lesions on eye fundus photographs,” in IEEE Int. Conf. on Automation, Quality and Testing, Robotics (AQTR 2008), Vol. 3, pp. 43–48 (2008).https://doi.org/10.1109/AQTR.2008.4588879 [Google Scholar]
- 10.David J., et al. , “Neural network based retinal image analysis,” in Congress on Image and Signal Processing (CISP 2008), Vol. 2, pp. 49–53 (2008).https://doi.org/10.1109/CISP.2008.666 [Google Scholar]
- 11.Katia E., Figueiredo R. J. P., “Automatic detection and diagnosis of diabetic retinopathy,” in IEEE Int. Conf. on Image Processing, Vol. 2, pp. 445–448 (2007).https://doi.org/10.1109/ICIP.2007.4379188 [Google Scholar]
- 12.Sánchez I., et al. , “A novel automatic image processing algorithm for detection of hard exudates based on retinal image analysis,” Med. Eng. Phys. 30, 350–357 (2008).https://doi.org/10.1016/j.medengphy.2007.04.010 [DOI] [PubMed] [Google Scholar]
- 13.Coscas G., et al. , “Spectral domain OCT in age-related macular degeneration: preliminary results with spectralis HRA-OCT,” J. Ophtalmol. 31(4), 353–361 (2008).https://doi.org/10.1016/S0181-5512(08)71429-3 [DOI] [PubMed] [Google Scholar]
- 14.Coscas G., et al. , “Optical coherence tomography in age-related macular degeneration,” Springer Medizin Verlag, Heidelberg: (2009). [Google Scholar]
- 15.Bolz M., et al. , “Optical coherence tomographic hyperreflective foci: a morphologic sign of lipid extravasation in diabetic macular edema,” Ophthalmology 116(5), 914–920 (2009).https://doi.org/10.1016/j.ophtha.2008.12.039 [DOI] [PubMed] [Google Scholar]
- 16.Ogino K., et al. , “Characteristics of optical coherence tomographic hyperreflective foci in retinal vein occlusion,” Retina 32(1), 77–85 (2012).https://doi.org/10.1097/IAE.0b013e318217ffc7 [DOI] [PubMed] [Google Scholar]
- 17.Framme C., et al. , “Behavior of SD-OCT-detected hyperreflective foci in the retina of anti-VEGF-treated patients with diabetic macular edema,” Invest. Ophthalmol. Visual Sci. 53(9), 5814–5818 (2012).https://doi.org/10.1167/iovs.12-9950 [DOI] [PubMed] [Google Scholar]
- 18.Uji A., et al. , “Association between hyperreflective foci in the outer retina, status of photoreceptor layer, and visual acuity in diabetic macular edema,” Am. J. Ophthalmol. 153(4), 710–717 (2012).https://doi.org/10.1016/j.ajo.2011.08.041 [DOI] [PubMed] [Google Scholar]
- 19.Bolz M., et al. , “Optical coherence tomographic hyperreflective foci: a morphologic sign of lipid extravasation in diabetic macular edema,” Ophthalmology 116(5), 914–920 (2009).https://doi.org/10.1016/j.ophtha.2008.12.039 [DOI] [PubMed] [Google Scholar]
- 20.Coscas G., et al. , “Hyperreflective dots: a new spectral-domain optical coherence tomography entity for follow-up and prognosis in exudative age-related macular degeneration,” Ophthalmologica 229(1), 32–37 (2013).https://doi.org/10.1159/000342159 [DOI] [PubMed] [Google Scholar]
- 21.Knaus C., Zwicker M., “Progressive image denoising,” IEEE Trans. Image Process. 23(7), 3114–3125 (2014).https://doi.org/10.1109/TIP.2014.2326771 [DOI] [PubMed] [Google Scholar]
- 22. Pierazzo N., et al., “Non-local dual denoising,” in IEEE Int. Conf. on Image Processing (ICIP 2014), pp. 813–817 (2014). https://doi.org/10.1109/ICIP.2014.7025163
- 23. Tomasi C., Manduchi R., “Bilateral filtering for gray and color images,” in Sixth Int. Conf. on Computer Vision, pp. 839–846 (1998). https://doi.org/10.1109/ICCV.1998.710815
- 24. Aleksic M., Smirnov M., Goma S., “Novel bilateral filter approach: image noise reduction with sharpening,” Proc. SPIE 6069, 60690F (2006). https://doi.org/10.1117/12.643880
- 25. Liu C., et al., “Noise estimation from a single image,” in Proc. IEEE Computer Vision and Pattern Recognition, Vol. 1, pp. 901–908 (2006). https://doi.org/10.1109/CVPR.2006.207
- 26. Gargesha M., et al., “Denoising and 4D visualization of OCT images,” Opt. Express 16(16), 12313–12333 (2008). https://doi.org/10.1364/OE.16.012313
- 27. Paris S., et al., “Bilateral filtering: theory and applications,” Found. Trends Comput. Graphics Vision 4(1), 1–73 (2009). https://doi.org/10.1561/0600000020
- 28. Geusebroek J.-M., Smeulders A., van de Weijer J., “Fast anisotropic Gauss filtering,” IEEE Trans. Image Process. 12(8), 938–943 (2003). https://doi.org/10.1109/TIP.2003.812429
- 29. Varma M., Zisserman A., “Texture classification: are filter banks necessary?” in Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, pp. 691–698 (2003). https://doi.org/10.1109/CVPR.2003.1211534
- 30. Xu Y., et al., “MDCT-based 3-D texture classification of emphysema and early smoking related lung pathologies,” IEEE Trans. Med. Imaging 25(4), 464–475 (2006). https://doi.org/10.1109/TMI.2006.870889
- 31. Ahmed S., Iftekharuddin K. M., “Efficacy of texture, shape, and intensity feature fusion for posterior-fossa tumor segmentation in MRI,” IEEE Trans. Inf. Technol. Biomed. 15(2), 206–213 (2011). https://doi.org/10.1109/TITB.2011.2104376
- 32. Kohavi R., John G. H., “Wrappers for feature subset selection,” Artif. Intell. 97, 273–324 (1997). https://doi.org/10.1016/S0004-3702(97)00043-X
- 33. Shao Z., et al., “A new electricity price prediction strategy using mutual information-based SVM-RFE classification,” Renewable Sustainable Energy Rev. 70, 330–341 (2017). https://doi.org/10.1016/j.rser.2016.11.155
- 34. Shieh M., Yang C., “Multiclass SVM-RFE for product form feature selection,” Expert Syst. Appl. 35(1–2), 531–541 (2008). https://doi.org/10.1016/j.eswa.2007.07.043
- 35. Tapia E., Bulacio P., Angelone L., “Sparse and stable gene selection with consensus SVM-RFE,” Pattern Recognit. Lett. 33(2), 164–172 (2012). https://doi.org/10.1016/j.patrec.2011.09.031
- 36. Mishra S., Mishra D., “SVM-BT-RFE: an improved gene selection framework using Bayesian T-test embedded in support vector machine (recursive feature elimination) algorithm,” Karbala Int. J. Modern Sci. 1(2), 86–96 (2015). https://doi.org/10.1016/j.kijoms.2015.10.002
- 37. Zhou X., et al., “Eye tracking data guided feature selection for image classification,” Pattern Recognit. 63, 56–70 (2017). https://doi.org/10.1016/j.patcog.2016.09.007
- 38. Breiman L., “Random forests,” Mach. Learn. 45(1), 5–32 (2001). https://doi.org/10.1023/A:1010933404324
- 39. Lang A., et al., “Retinal layer segmentation of macular OCT images using boundary classification,” Biomed. Opt. Express 4(7), 1133–1152 (2013). https://doi.org/10.1364/BOE.4.001133
- 40. Li K., et al., “Optimal surface segmentation in volumetric images—a graph-theoretic approach,” IEEE Trans. Pattern Anal. Mach. Intell. 28(1), 119–134 (2006). https://doi.org/10.1109/TPAMI.2006.19
- 41. Boykov Y., Kolmogorov V., “An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision,” IEEE Trans. Pattern Anal. Mach. Intell. 26(9), 1124–1137 (2004). https://doi.org/10.1109/TPAMI.2004.60
- 42. Vezhnevets V., Konouchine V., “Grow-cut: interactive multi-label N-D image segmentation,” in Proc. of Graphicon, pp. 150–156 (2005).
- 43. Ghosh P., et al., “Unsupervised grow-cut: cellular automata-based medical image segmentation,” in First IEEE Int. Conf. on Healthcare Informatics, Imaging and Systems Biology (HISB 2011), pp. 40–47 (2011). https://doi.org/10.1109/HISB.2011.44
- 44. Ryba T., Jirik M., Zelezny M., “An automatic liver segmentation algorithm based on grow cut and level sets,” Pattern Recognit. Image Anal. 23(4), 502–507 (2013). https://doi.org/10.1134/S1054661813040147
- 45. Katsigiannis S., Zacharia E., Maroulis D., “Grow-cut based automatic cDNA microarray image segmentation,” IEEE Trans. Nanobiosci. 14(1), 138–145 (2015). https://doi.org/10.1109/TNB.2014.2369961
- 46. Von Neumann J., “The general and logical theory of automata,” in Cerebral Mechanisms in Behavior: The Hixon Symp., New York (1951).
- 47. Wolfram S., “Statistical mechanics of cellular automata,” Rev. Mod. Phys. 55(3), 601–644 (1983). https://doi.org/10.1103/RevModPhys.55.601
- 48. Toffoli T., “Computation and construction universality of reversible cellular automata,” J. Comput. Syst. Sci. 15(2), 213–231 (1977). https://doi.org/10.1016/S0022-0000(77)80007-X
- 49. Searle J. R., The Rediscovery of the Mind, MIT Press, Cambridge (1992).
- 50. Svozil K., “Are quantum fields cellular automata?” Phys. Lett. A 119(4), 153–156 (1986). https://doi.org/10.1016/0375-9601(86)90436-6
- 51. Poincaré H., Science and Method, New York (1914).
- 52. Putnam H., Representation and Reality, MIT Press, Cambridge, Massachusetts (1988).
- 53. Richards F., Meyer T., Packard N., “Extracting cellular automaton rules directly from experimental data,” Phys. D 45(1–3), 189–202 (1990). https://doi.org/10.1016/0167-2789(90)90182-O
- 54. Myhill J., “The converse of Moore’s Garden-of-Eden theorem,” Proc. Am. Math. Soc. 14, 685–686 (1963). https://doi.org/10.1090/S0002-9939-1963-0155764-9
- 55. Moore C., “Recursion theory on the reals and continuous-time computation,” Theor. Comput. Sci. 162, 23–44 (1995). https://doi.org/10.1016/0304-3975(95)00248-0
- 56. Moore E. F., “Machine models of self-reproduction,” in Proc. Symp. in Applied Mathematics, Vol. 14, pp. 17–33 (1962).
- 57. Mitchell M., Crutchfield J. P., Das R., “Evolving cellular automata with genetic algorithms: a review of recent work,” in Proc. of the First Int. Conf. on Evolutionary Computation and Its Applications (1996).
- 58. Hamamci A., et al., “Cellular automata segmentation of brain tumors on post contrast MR images,” in Medical Image Computing and Computer-Assisted Intervention (MICCAI 2010), pp. 137–146 (2010).
- 59. Dorey F., “The p value: what is it and what does it tell you?” Clin. Orthop. Relat. Res. 468(11), 2297–2298 (2010). https://doi.org/10.1007/s11999-010-1402-9
- 60. Ge Q., et al., “Active contour evolved by joint probability classification on Riemannian manifold,” Signal, Image Video Process. 10(7), 1257–1264 (2016). https://doi.org/10.1007/s11760-016-0891-8
- 61. Zhang K., et al., “A level set approach to image segmentation with intensity inhomogeneity,” IEEE Trans. Cybern. 46(2), 546–557 (2016). https://doi.org/10.1109/TCYB.2015.2409119
- 62. Mookiah M. R. K., et al., “Computer-aided diagnosis of diabetic retinopathy,” Comput. Biol. Med. 43, 2136–2155 (2013). https://doi.org/10.1016/j.compbiomed.2013.10.007