Abstract
Optical Coherence Tomography (OCT) is an imaging technique of growing popularity in the ophthalmology field, since it offers a complete set of information about the main retinal structures. It provides detailed information about the eye fundus morphology, allowing the identification of many intraretinal pathological signs. For that reason, over recent years, Computer-Aided Diagnosis (CAD) systems have spread to work with this image modality and analyze its information. A crucial step in the analysis of the retinal tissues is the identification and delimitation of the different retinal layers. In this context, we present in this work a fully automatic method for the identification of the main retinal layers that delimit the retinal region. To that end, an active contour-based model was completely adapted and optimized to segment these main retinal boundaries. This fully automatic method uses the horizontal placement of these retinal layers and their relative location over the analyzed images to restrict the search space, taking into account the shadows that are normally generated by pathological or non-pathological artifacts. The validation process was done using the ground truth of an expert ophthalmologist, analyzing healthy patients as well as patients with different degrees of diabetic retinopathy (without macular edema, with macular edema and with lesions in the photoreceptor layers). Quantitative results are in line with the state of the art in this domain, providing accurate segmentations of the retinal layers even when significant pathological alterations are present in the eye fundus. Therefore, the proposed method is robust enough to be used in complex environments, making it feasible for ophthalmologists in their routine clinical practice.
Keywords: Computer science, Medical imaging, Ophthalmology
1. Introduction
Nowadays, many significant diseases can be identified by the analysis of retinal images [1]. In this line, a precise segmentation of the main ocular structures [2], [3] is useful for the subsequent diagnosis and treatment of relevant retinal and systemic pathologies [4].
Among the different retinal image modalities, Optical Coherence Tomography (OCT) is a powerful tool for the diagnosis of retinal diseases, since the image acquisition in these devices is a contactless, non-invasive method that provides a set of images of the main retinal structures in real time [5], [6]. Nowadays, OCT imaging can be employed in the analysis and diagnosis of many relevant diseases: the information regarding the thickness of the OCT layers is useful in the diagnosis of glaucoma [7], macular degeneration [8], [9] or diabetic retinopathy [10], among others. Retinal thinning is also studied in multiple sclerosis [11]. Therefore, an adequate segmentation of the layers of the retina is crucial for clinical experts, as these layers delimit the region of interest and offer relevant clinical information.
In this work, the top boundaries of the Internal Limiting Membrane (ILM) and the Ellipsoid (limiting with the Myoid) (M/E) are included, as well as both boundaries of the Retinal Pigment Epithelium-Bruch's Complex, limiting with the Interdigitation zone (I/RPE) and the Choroid (RPE/C), see Figure 1. These layers correspond to the retinal thickness (ILM and RPE/C) and the photoreceptor zone (M/E and I/RPE), also known as the External Zone of Photoreceptors (EZPR). Many studies correlated these retinal regions with the visual acuity measurement [12], [13], [14]. The efficacy of therapies to treat some of the most common eye diseases, including diabetic macular edema or age-related macular degeneration, is usually evaluated by the retinal thickness (ILM and RPE/C) measurement [15], [16], [17]. Additionally, the photoreceptor zone (M/E and I/RPE) is also affected by these diseases and, therefore, is correlated with the visual acuity loss [18], [19], [20]. Given this clinical background and relevance, we selected the mentioned layers as the most representative and informative ones, delimiting the regions of the most studied diseases that use the analysis of OCT imaging as reference.
Figure 1.
Analyzed retinal layer boundaries and regions of clinical interest in an OCT image.
Different methods have been proposed in previous works to segment the layers of the retina. As reference, a graph-based technique is presented in [21], [22], achieving the segmentation by finding the minimum closed set in a geometric graph, while the proposal in [23] includes a multi-scale 3-D graph search for the optic nerve head segmentation. Dynamic programming was used in [24] to automatically segment seven retinal layers. A sparsity-based process is presented in [25], where the input image is transformed into a layer-like domain followed by the application of graph theory. A further proposal with diffusion maps using a sparse representation of the graphs is presented in [26], whose main limitation is its sensitivity to the size of the rectangles (or cubes) used in the approach. In [27], the authors proposed a learning strategy for the retinal layer segmentation task. This method uses a random forest (RF) algorithm for the classification of the pixels between these boundaries. A dual-gradient-based method is proposed in [28] to segment nine intra-retinal boundaries, combining gradient information to achieve a shortest-path-based segmentation. Although the results were promising, the segmentation in the area of the fovea was not accurate with respect to the manual expert segmentations. That work is extended in [29], where the segmentation is used to construct layer maps representing the thickness in a context of pathological patients. For this kind of segmentation task, active contour models seem suitable. A multi-object geometric deformable model is used in [30], where nine retinal layers are segmented. In [31], the authors proposed an adaptation of the Chan-Vese model [32], an active contour-based model using a Mumford-Shah functional [33], for the extraction of the retinal layers in OCT images of rodents. Subsequently, this strategy was refined in a semiautomatic method presented in [34].
Although this method achieved an adequate performance, it still presents some intrinsic limitations of active contour models such as, for reference, poor segmentation results in images with intensity inhomogeneity, sensitivity to the initial location of the level set contours in complicated images, and the high computational cost of the re-initialization step of the algorithm [35], [36].
Recently, deep learning-based strategies have been used successfully for the automatic retinal layer segmentation issue. In [37], the authors proposed an automatic system that uses convolutional neural networks (CNNs) for the thickness segmentation in OCT scans. Another work [38] also proposed a CNN-based deep learning approach to simultaneously segment three surfaces. This approach demonstrated a low computational cost and a higher performance compared to graph-based approaches and the U-Net learning-based method. In [39], the authors proposed a method also using a CNN implementation, followed by an analysis of the effects of the patch size as well as the network architecture design on the CNN performance and the subsequent layer segmentation. A new fully convolutional architecture was presented in [40]. This architecture, called ReLayNet, uses a contracting path of encoders to learn a hierarchy of contextual features, followed by an expansive path as decoder for the retinal layer and fluid segmentation.
The main contributions of this paper are: (i) a new strategy based on an active contour model that uses the horizontal placement of the layers of the retina and their relative location over the analyzed images to restrict the search space and to constrain the possible movements of the model towards the different desired layers; (ii) unlike most previous proposals to segment the layers of the retina, the consideration of the shadows that are normally generated by blood vessels, exudates, cysts or any other pathological or non-pathological artifact; (iii) the consideration, for the first time, of pathological scenarios (patients without macular edema, with macular edema and with lesions in the photoreceptor layers) where the retinal layers suffer significant alterations, including the fusion of some of them or their possible absence in particular regions of the retina, as illustrated in Figure 2.
Figure 2.
OCT images from a healthy patient (a) and a patient with some pathology altering the EZPR (b). The arrows identify regions where this layer is altered or almost missing.
In this paper, we propose a fully automatic methodology to segment the main retinal layer boundaries using OCT images, not only when the layers are clearly perceived but also in the previously exposed situations. The proposed method, based on active contours, was completely adapted and optimized for this specific search space using anatomical knowledge of the retinal layers. Additionally, the adopted segmentation method includes specific refinement phases to correct possible mistakes and improve as much as possible the performance of the layer segmentation task. Preliminary results of an early proposal were presented in [41], initially demonstrating the potential of the followed strategy.
This paper is organized as follows: Section 2 presents the proposed methodology for the automatic retinal layer segmentation and the characteristics of all its stages. Section 3 presents the results that were obtained with the method. Finally, Section 4 presents the conclusions about the proposed system as well as possible future lines of work.
2. Methodology
Generally, the methodology, shown schematically in Figure 3, is divided into three main stages: firstly, in the preprocessing stage, the region of interest (ROI) is bounded and a first approach for the boundary is obtained to initialize the active contour that is used in the next stage; secondly, the automatic segmentation of the retinal layers is performed; and finally, refinement processes are applied to obtain a better adjustment. Further details about these stages are discussed below.
Figure 3.
Phases of the presented methodology.
2.1. Preprocessing
Considering that the minimization process is prone to falling into local minima, the contour initialization must be close to the real boundary location. Thus, the first step consists of bounding the ROI where the search process must be done. This also helps to reduce the computational cost of the method.
For that purpose, enhancement operations are applied, mainly based on smoothing filters. Then, as all the target boundaries are determined by bright surfaces, the image is thresholded depending on its intensity distribution. To do so, a threshold covering a percentage of the darkest pixels in the image is applied. The binarized image is processed with morphological operators to fill the holes. Finally, a first approach for the boundary is determined by the pixels that remain at each column. With the aim of obtaining a smooth initial boundary for the next phase, these points are fitted to a curve. Figure 4 shows a representative example of this process (in this case, for the RPE/C layer), where the first approach of the boundary is given by the pixels of the bottom boundary of the ROI (b), and the obtained curve is shown in (c).
Figure 4.
First approach obtained for a generic layer boundary (in this case, RPE/C): (a) original image; (b) mask covering the ROI; (c) boundary in red as a result of fitting the points of the last row of the mask.
These steps are only used to segment the ILM and RPE/C layers, as they serve as boundaries of the retina. Therefore, active contours for inner layers can be initialized based on them.
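As an illustration, the ROI-bounding steps above can be sketched as follows. This is a minimal sketch for a bright band such as the RPE/C: the percentile value, the polynomial degree and the function name are assumptions, not the paper's exact parameters.

```python
import numpy as np

def initial_boundary(img, dark_percent=40, poly_deg=4):
    """Coarse first approach for a bright-layer boundary (hypothetical parameters)."""
    # Discard the darkest pixels with a percentile-based threshold.
    mask = img > np.percentile(img, dark_percent)
    h, w = img.shape
    rows = np.full(w, -1, dtype=int)
    for col in range(w):
        idx = np.flatnonzero(mask[:, col])
        if idx.size:
            rows[col] = idx[-1]            # bottom-most bright pixel per column
    # Fit a smooth curve through the valid per-column points.
    cols = np.flatnonzero(rows >= 0)
    coeffs = np.polyfit(cols, rows[cols], poly_deg)
    return np.polyval(coeffs, np.arange(w))
```

The hole-filling with morphological operators is omitted here for brevity; it would be applied to `mask` before scanning the columns.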
2.2. Retinal Layers Segmentation
Based on anatomical knowledge of the retinal layers, we designed a strategy completely adapted and optimized using an active contour model for this specific segmentation task. Kass et al. [42] introduced the concept of snakes, or active contours. An active contour is an energy-minimizing spline guided by external and internal forces that pull the contour towards features in the image, following:
E_snake(v) = ∫₀¹ [E_int(v(s)) + E_ext(v(s))] ds | (1) |
E_int(v(s)) = (α|v′(s)|² + β|v″(s)|²) / 2 | (2) |
E_ext(v(s)) = −|∇I(v(s))|² | (3) |
where the internal energy is usually defined as a function of a first-order term |v′(s)|², which gives a measure of elasticity, and a second-order term |v″(s)|², which makes the contour acquire a thin-plate behavior representing the curvature, both governed by the parameters α and β, respectively. The external energy, E_ext, attracts the contour to the desired intensity characteristics of the image I.
In this work, a boundary corresponds to a sequence of points, one per column of the image, the topology used in this problem being a linear sequence of equidistant nodes. Each node corresponds to one pixel of the image and is connected to two neighbors, except for the first and the last ones, which are located in the first and last columns. During the minimization process, the nodes can make displacements within a fixed-size neighborhood of pixels, except for the nodes in the limits, which can only move along the rows. The model energy minimization was done using a greedy approach for its simplicity and optimal computational cost, but other techniques could be applied.
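One pass of such a greedy minimization can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the weights, the window size and the precomputed external-energy image are assumptions.

```python
import numpy as np

def greedy_snake_step(rows, ext_energy, alpha=0.5, beta=0.5, win=3):
    """One greedy pass: each node (one per column) tries vertical moves in [-win, win]."""
    h, w = ext_energy.shape
    new_rows = rows.copy()
    for i in range(w):
        left = new_rows[i - 1] if i > 0 else None   # already-updated left neighbor
        right = rows[i + 1] if i < w - 1 else None  # not-yet-updated right neighbor
        best, best_e = rows[i], np.inf
        for dy in range(-win, win + 1):
            y = rows[i] + dy
            if not (0 <= y < h):
                continue
            # First-order (elasticity) and second-order (curvature) penalties.
            e_el = sum((y - nb) ** 2 for nb in (left, right) if nb is not None)
            e_cv = (left - 2 * y + right) ** 2 if left is not None and right is not None else 0
            e = alpha * e_el + beta * e_cv + ext_energy[y, i]
            if e < best_e:
                best_e, best = e, y
        new_rows[i] = best
    return new_rows
```

Repeating such passes until no node moves gives the greedy minimization; other minimization techniques could replace this loop without changing the energy definition.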
The model herein proposed is designed to work over two images at the same time, each one enhanced in a different way. The idea is to be able to combine different levels of information detail into the model. The first image is the original one, smoothed and with enhanced contrast, containing precise information. The second image provides a coarse level of information, useful in the first steps of the energy minimization. In this image, bright areas have been remarked using a more aggressive process: firstly, the image is blurred with a median filter of kx × ky = 19 × 3 pixels; then, morphological operators and Gaussian blurring are applied. Figure 5 shows a sample image and the results after both processing approaches, (b) and (c) being the images that are considered in the energy-minimization process.
Figure 5.
Images used in the energy-minimization process: (a) sample original image; (b) first image used in the process, resulting from contrast enhancement; (c) second image used in the process, resulting from the coarse preprocessing (median filter of kx × ky = 19 × 3).
Considering that the external energy is defined as a function of the edges and the intensity of the image, the different terms used in this problem are grouped in edge-based terms and intensity-based terms.
Edge-based information
Edges are extracted with the Sobel operator with non-maximum suppression. Since the layer boundaries are determined by light-to-dark or dark-to-light transitions in each case, only the edges associated to the considered transition are extracted. For instance, while the ILM layer corresponds to dark-to-light edges, RPE/C is associated to light-to-dark ones.
With regard to the image edges, their importance can be determined in two ways. Firstly, the gradient can consider the intensity variation over non-adjacent pixels. Thus, it is computed considering a given pixel and another one located at a given distance. This is useful to obtain the edges associated to the different layer boundaries considered in this work, because some of them, such as the ILM, are more abrupt than others. On the other hand, edges with low magnitude can be discarded using a magnitude threshold.
Although the gradient distance seems useful in most cases, after some initial tests, that idea cannot be universally generalized for every retinal location or appearance. This is due to the fact that, when the disposition of the retinal layers in the image presents a sufficiently steep slope, the gradient distance makes the contour be attracted to the nearest edges, which are not usually the correct ones. This situation is described in Figure 6 for an easier understanding. To take these kinds of situations into account, the distance to the best edge is redefined. Thus, for each pixel, only the edges located in its nearest columns are considered. The differences in the results are reflected in Figure 6(e) and (f).
Figure 6.
Influence of the gradient distance computed over a neighborhood in the active contour evolution, with the remarkable ROI squared: (a) input image after enhancement, with the initial active contour; (b) strongest edges per column; (c) image of gradient distance, with zoom applied over the ROI, showing schematically a node (green circle) and two near edges (black squares), being the nearest (red circle) which attracts the node; (d) result using gradient distance in (c); (e) image of gradient distance computed over a neighborhood of ncol columns (limit marked by black dot-line), with zoom showing how the limitation is solved; (f) result using gradient distance in (e).
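The column-restricted distance to the edges can be sketched as follows. This is a naive O(h·w·E) illustration under assumed names; a real implementation would use a faster distance transform.

```python
import numpy as np

def edge_distance_map(edges, ncol=5):
    """Distance from each pixel to the nearest edge pixel, considering only
    edges within +/- ncol columns (avoids attraction across steep layers)."""
    h, w = edges.shape
    dist = np.full((h, w), np.inf)
    ys, xs = np.nonzero(edges)             # coordinates of all edge pixels
    for y in range(h):
        for x in range(w):
            near = np.abs(xs - x) <= ncol  # restrict candidates to nearby columns
            if near.any():
                dist[y, x] = np.hypot(ys[near] - y, xs[near] - x).min()
    return dist
```

Pixels with no edge within the column window keep an infinite distance, so the contour is not attracted across steeply sloped regions.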
The gradient distance term is extracted for both the smoothed and the preprocessed images, each controlled by its own weight (w2 for the former). The term reflecting edge information is extracted only for the second image, governed by its corresponding weight. With the purpose of a fast evolution of the contour, and also to avoid the influence of some edges that would make the model reach wrong segmentations, a new energy term is added, governed by an additional weight, representing the distance between a node and the strongest edge in the search area. This term constitutes an important heuristic, which facilitates the movement of the contour in the first steps of its evolution, although it should be relaxed later in order to perform a finer adjustment.
Intensity-based information
This term is computed as follows: firstly, the addition of the intensities in the pixels above (or below) each pixel is obtained; then, the pixel with the highest (or lowest) value at each column is marked, building the image I_add. Thus, the term κ_add is defined as the distance of each pixel in the image to the nearest non-zero pixel in I_add. Since this term is equivalent to the gradient distance over the image I_add, it is computed using a neighborhood of columns to avoid problems analogous to those presented in Figure 6.
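The construction of I_add can be sketched with a cumulative sum over each column. The window size n and the function name are assumptions for illustration (Figure 18 uses nκ = 20).

```python
import numpy as np

def best_pixel_per_column(img, n=20, darkest_above=True):
    """Mark, per column, the pixel whose n pixels above have the lowest
    (or highest) summed intensity; returns a binary image I_add."""
    h, w = img.shape
    # cs[y] = sum of rows 0..y-1, so sums over any n-row window come in one pass.
    cs = np.vstack([np.zeros((1, w)), np.cumsum(img, axis=0)])
    above = cs[n:h] - cs[:h - n]          # above[k] = sum of the n rows above row k+n
    pick = above.argmin(axis=0) if darkest_above else above.argmax(axis=0)
    i_add = np.zeros((h, w), dtype=bool)
    i_add[pick + n, np.arange(w)] = True  # one marked pixel per column
    return i_add
```

The term κ_add would then be the column-restricted distance map to the non-zero pixels of this image, as described above.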
Apart from gradient and intensity-based information, the proposed model includes an additional term governed by ε to reinforce regions located on the top part of the image with respect to others at the bottom. This term is only used for the ILM segmentation.
Once all the energy terms are known, it must be noted that, depending on the complexity and the environment of each boundary, the contour adjustment is not performed with constant energy weights along the energy minimization process (the objective to minimize is not constant). This derives from the initial interest in driving the contour evolution avoiding possible obstacles during the first steps of the process, while in later stages the control over its deformations is essential to achieve a smooth and refined adjustment. Accordingly, the adjustment is tackled in steps, modifying the parameters dynamically as a function of the boundary complexity and the information to be considered. In general, the external energy has its highest influence in the first steps, to attract the contour to the desired features, while the internal energy is predominant at the end of the process to guarantee the continuity and smoothness of the model.
After the contour adjustment, its nodes are interpolated to obtain the entire boundary (see Figure 7). Linear interpolation is used in this work, considering that there are enough nodes to cover the boundary and properly represent the details of its shape.
Figure 7.
Generation of the entire boundary: (a) nodes after the minimization process (red points); (b) interpolated layer boundary points (red line).
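The node interpolation above amounts to a one-dimensional linear interpolation per column; the node positions below are hypothetical values for illustration.

```python
import numpy as np

# Node columns and rows after the minimization process (hypothetical values).
node_cols = np.array([0, 50, 100, 150, 199])
node_rows = np.array([40.0, 42.0, 47.0, 43.0, 41.0])

# Linear interpolation yields one boundary row per image column.
boundary = np.interp(np.arange(200), node_cols, node_rows)
```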
Once the general method to segment a generic layer boundary in OCT retinal images has been presented, the particular features involved in each boundary can be described.
2.2.1. Segmentation of the ILM layer
As the ILM is placed at the top of the image, the first edge is searched for in the binarized image, allowing the initialization of the active contour and of the mask to be used in the energy minimization process around it, as Figure 8 represents.
Figure 8.
Preprocessing of the ILM layer: (a) original image; (b) mask with the initial approach for the layer boundary (red line) and the limits considered to bound the ROI (red dot-rectangle) and (c) image after preprocessing.
It is remarkable that the involved steps are also useful to discard misleading regions from the ROI. For instance, when the Posterior Hyaloid is present (see Figure 9(a)), the implemented process avoids the detection of that region, bounding the real ILM. Thus, the region where the retinal layers are located is delimited.
Figure 9.
Presence of (a) Posterior Hyaloid and (b) ERM, indicated by arrows.
Whereas the obtained boundary is used to initialize the contour, the mask is also useful to bound the region where the model can move. In this sense, only the edges corresponding to dark-to-light transitions are considered. It was observed that some nodes can be attracted to the M/E, especially in the limits of the image, where the information is scarce. In order to avoid this behavior, as said, the term ε is included, encouraging regions that are located at the top of the image.
On the other hand, the presence of the Epiretinal Membrane (ERM) must be taken into account, because it deteriorates the contour adjustment, as it can cause a stronger dark-to-light transition than the ILM. A sample image including this membrane is shown in Figure 9(b). This problem was presented and solved in [43], where a graph-based method is used to segment the ILM boundary. In that work, edges in the same column with similar direction, located nearby, are detected. Thus, the edge located above is penalized to encourage the detection of the other one. A similar idea is applied here by modifying the image of the edges and, therefore, the gradient distance, with the purpose of attracting the contour to the correct boundary.
In order to segment the ILM, the adjustment is done in steps: initially, the external energy has a high influence; next, this relevance shifts to the internal energy; and finally, more nodes are added in the middle part of the image, where the fovea should be located, to perform a more detailed and adjusted segmentation. Once the contour is adjusted, the nodes are interpolated to obtain the entire layer. Figure 10 shows the evolution of the active contour and the final segmentation obtained over the sample image of Figure 8.
Figure 10.
Segmentation of the ILM layer: (a) initial active contour after preprocessing; (b) active contour after first step; (c) active contour after second step and adding nodes in the center of the image; (d) layer boundary as result of interpolating nodes in (c).
2.2.2. Segmentation of the RPE/C layer
Analogously to the ILM case, the segmentation of the RPE/C layer begins by bounding the ROI and providing a first approach for the active contour. Thus, after generating and processing the binarized image, the last edge is searched for (which should be near the real location of the RPE/C layer) and a curve is fitted to it. Using this approach, the mask used in the minimization process is delimited by establishing a region around the curve used to initialize the active contour.
The active contour is configured as follows: the gradient information only considers light-to-dark transitions. Regarding the intensity-based information, light areas are encouraged to attract the contour to the desired region.
The minimization process is composed of the steps shown in Figure 11: firstly, the contour is attracted to the boundary in a coarse way. Then, it is fitted to a curve, ignoring the information in the extrema, since the image sides present less information and the nodes usually fall in local minima there. This curve is used as initialization of a new contour, replacing the first one. With the new contour, a more accurate adjustment is done. Finally, the boundary of the RPE/C is generated by interpolating the contour nodes. Figure 12 presents the evolution of the active contour and the final result.
Figure 11.

Main steps of the minimization process.
Figure 12.
Segmentation of the RPE/C layer: (a) original image where the rectangle identifies the ROI after the preprocessing; (b) initial active contour; (c) configuration after the first step; (d) after the second step; (e) layer boundary result of interpolating nodes.
Despite the internal energy included in the contour, the low definition of this boundary (especially at both sides of the image) causes some nodes to end the process in wrong, isolated positions. This situation is reflected in Figure 13, showing the evolution of the active contour, whose final configuration (c) presents an outlier, identified by an arrow. In order to avoid this situation, after the minimization steps, a process of relocation of outliers, explained below, is applied. Once every outlier in the active contour is detected and corrected, the boundary is generated through node interpolation.
Figure 13.
Presence of outlier after segmenting RPE/C: (a) original image where the rectangle identifies the ROI after the preprocessing; (b) and (c) ROI zoomed with initial active contour and configuration after two steps of minimization, respectively. Arrow in (c) indicates a node trapped in a local minimum (outlier).
Relocation of outliers
As presented above in the RPE/C layer segmentation, after the contour adjustment some nodes can be trapped in local minima and the obtained boundary can present deviations. With the purpose of avoiding this, we propose a process of detection and replacement of these nodes (outliers). For the detection, the initial and final nodes are analyzed first, followed by the inner nodes.
In order to check a node in an active contour of size n, the process is described as follows: the slope mᵢ defined by two consecutive nodes vᵢ and vᵢ₊₁ is obtained for each pair of consecutive nodes; thus, considering also the slopes mᵢ₊₁ and mᵢ₊₂ of the following nodes and the approximation L(vᵢ) obtained by the Lagrange polynomial method for the node vᵢ, it is possible to introduce the rule for the nodes in the limits:
outlier(vᵢ) ⟺ sign(mᵢ) ≠ sign(mᵢ₊₁) ∧ |mᵢ| > m_max | (4) |
where m_max establishes the maximum value that the slope mᵢ can take. In this way, when the slope between consecutive nodes changes in sign and takes high values, it is assumed that the associated node is an outlier. Then, this node is replaced by L(vᵢ), which is the approximation given by the Lagrange polynomial method, based on its neighboring nodes.
We observed that the last node takes wrong values more often than the first one. This is related to the inherently sequential process of the energy minimization: considering n nodes, it is applied from the first node v₁ to the last one vₙ, which correspond to nodes located in the left and right sides of the image in the physical context. Consequently, a new constraint must be included for the last node, rewriting Eq. (4) as follows:
outlier(vₙ) ⟺ [sign(mₙ₋₁) ≠ sign(mₙ₋₂) ∧ |mₙ₋₁| > m_max] ∨ |vₙ − L(vₙ)| > m_max | (5) |
where L(vₙ) is the approximation obtained for the node vₙ using the Lagrange extrapolation method. An example with a node on the right side of the image which has been relocated is shown in Figure 14.
Figure 14.
Relocation of outliers after the model adjustment: (a) original image, with the region considered in the segmentation of RPE/C (rectangle); (b), (c) zoom applied over the rectangle in (a) with the nodes at the end of the process and after replacing the outlier (marked with arrow), respectively.
For the rest of the nodes of the active contour, determining whether they are outliers is done in a different way. Following the idea of the second derivative between nodes, the difference dᵢ between the slopes mᵢ and mᵢ₋₁ is computed. After all the differences are computed, their median value d̃ is calculated to establish an outlier criterion:
|dᵢ − d̃| > t | (6) |
where t is a threshold to determine if the difference dᵢ is too far from the median difference d̃. When the condition expressed in Eq. (6) is satisfied, the node is considered an outlier and should be replaced. Since a node which is not located in the extrema has neighbors on both sides, Lagrange extrapolation is not needed and the approximation can be given by the mean point between its neighbors, as Eq. (7) establishes. Figure 15 shows an example of this situation, where the outlier (node indicated by the arrow) is replaced.
vᵢ = (vᵢ₋₁ + vᵢ₊₁) / 2 | (7) |
Figure 15.
Relocation of outliers (not in the extrema) in the model after the contour adjustment: (a) zoomed image, with nodes at the end of the process; (b) final nodes after replacing the outlier (marked with arrow).
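The inner-node criterion and the midpoint replacement can be sketched as follows. This is a minimal sketch: the function name, the exact threshold handling and the sequential replacement order are assumptions.

```python
import numpy as np

def relocate_inner_outliers(rows, t=3.0):
    """Replace inner nodes whose slope change deviates too far from the
    median slope change; the replacement is the neighbors' midpoint."""
    rows = rows.astype(float).copy()
    slopes = np.diff(rows)            # m_i between nodes v_i and v_{i+1}
    d = np.abs(np.diff(slopes))       # slope change d_i at each inner node
    med = np.median(d)
    for k in np.flatnonzero(np.abs(d - med) > t):
        # d[k] corresponds to inner node k + 1; replace it by the midpoint.
        rows[k + 1] = 0.5 * (rows[k] + rows[k + 2])
    return rows
```

The limit nodes would be handled separately with the Lagrange-based rules described above.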
2.2.3. Segmentation of the M/E layer
Once the boundaries of the RPE/C and ILM layers are segmented, the segmentation process of the M/E can be initialized more easily. Thus, considering that n equidistant nodes are used, the x-coordinate i (column in the OCT image) is known for each node. Therefore, the y-coordinates (rows) are obtained in the following way:
yᵢ = λ · y_ILM(i) + (1 − λ) · y_RPE/C(i), with λ < 1/2 | (8) |
where y_ILM(i) and y_RPE/C(i) are the y-coordinates taken by the points of the ILM and RPE/C boundaries at each column i. That means that the new node lies between both layers, but nearer the RPE/C than the ILM, as usually occurs.
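This initialization can be sketched as follows; the weighting of 0.25 toward the ILM side, the node count and the function name are assumptions, since the paper only states that each node lies nearer the RPE/C.

```python
import numpy as np

def init_me_nodes(y_ilm, y_rpec, n_nodes=30, lam=0.25):
    """Place the initial M/E nodes between both segmented boundaries,
    nearer the RPE/C (lam < 0.5 weights the ILM side; assumed ratio)."""
    cols = np.linspace(0, len(y_ilm) - 1, n_nodes).astype(int)
    rows = lam * y_ilm[cols] + (1 - lam) * y_rpec[cols]
    return cols, rows
```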
The active contour model to segment the M/E considers edges corresponding to dark-to-light transitions. Regarding the intensity-based terms, pixels with dark areas above, but also with bright areas below, are encouraged, this way considering the intensities of the Myoid and the Ellipsoid. The minimization process is composed of several steps, as Figure 16 presents schematically: firstly, the external energy governs the process; after that, the contour is fitted to a curve, ignoring the information in the extrema. That curve is used to initialize a new contour, which replaces the first one (a process analogous to that used for the RPE/C). Based on this new contour, the mask is bounded again. The internal energy is increased in the second step to obtain an appropriate adjustment. However, since this layer tends to be stretched when the presence of the fovea is more evident, two new steps are included: after adding more nodes around the fovea, the contour is adjusted, keeping the rest of the nodes fixed; then, the distance with respect to the RPE/C is checked, modifying the location of those nodes that are too close to that layer before performing the final minimization process. This method shows accurate results, even in the fovea region, while presenting good continuity and smoothness. Figure 17 presents a segmentation example.
Figure 16.

Schema of the active-contour-based process to segment the M/E layer.
Figure 17.
Active contour used to segment the boundary of M/E: (a) initialization, based on the location of the ILM and RPE/C; (b) after the step with predominant external energy; (c) second active contour, based on fitting the first one to a curve, ignoring the nodes in extrema; (d) after increasing the internal energy; (e) final adjustment, with nodes added in the area of the fovea; (f) boundary after node interpolation.
Although this model usually provides appropriate results, it has been observed that the shades produced mainly by the vessels have a strong influence on the model evolution. They not only cause smoother edges in these regions, but also corrupt the intensity-based term κ_add, which then attracts the contour to wrong areas. This term represents the distance to the best pixel at each column in the image (where the "goodness" of the pixel is determined by the intensity above or below). Under the presence of a shade, this information is not valid. This is especially misleading in the M/E detection, because it involves encouraging pixels with dark areas above, and these structures greatly alter the results. Figure 18 reflects this problem, where (a) presents the sample image with the vessel shade marked with an arrow and (b) the image I_add obtained to compute the term κ_add (c). The regions where I_add provides wrong information (clearly due to the vessel shades) are marked with circles. Under these conditions, a coarse identification of the vessel shades is done, with the purpose of modifying this information in the columns where they are located.
Figure 18.
Intensity-based energy term κadd influence under the presence of shades: (a) sample image, with an arrow indicating the vessel shade; (b) Iadd (best pixel per column, considering the lowest intensity above with nκ = 20), where misleading regions are marked with circles; (c) image representing κadd, using a neighborhood of columns; (d) active contour after the first two steps.
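The best-pixel-per-column idea behind Iadd and κadd can be sketched as follows. This is a minimal illustration, assuming that a pixel's "goodness" is the mean intensity of the nκ rows directly above it and that the energy is the vertical distance to the best row of each column; the paper's exact formulation is not reproduced here.

```python
import numpy as np

def best_row_per_column(img, n_k=20):
    """For each column, pick the row whose n_k pixels directly above have the
    lowest mean intensity (dark area above -> good M/E candidate).
    Rows without n_k pixels above them are skipped."""
    h, w = img.shape
    best = np.zeros(w, dtype=int)
    for col in range(w):
        means = [img[r - n_k:r, col].mean() for r in range(n_k, h)]
        best[col] = int(np.argmin(means)) + n_k
    return best

def kappa_add(img, n_k=20):
    """Energy map: vertical distance of every pixel to the best row of its column."""
    h, w = img.shape
    best = best_row_per_column(img, n_k)
    rows = np.arange(h)[:, None]          # shape (h, 1)
    return np.abs(rows - best[None, :])   # shape (h, w)
```

A column crossed by a vessel shade has no valid dark-above structure, which is exactly why this term becomes misleading there, as discussed above.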
Vessel shade detection
The presence of any possible shade cast by the vessels is identified using vertical regions of the image where the intensity suffers a significant drop, as shown in Figure 18(a). Given the complexity of an accurate detection, a coarse detection is presented in this work, identifying these intensity drops along the image. To do that, firstly, the mean intensity of each column i in the OCT image is calculated, constructing a vector vec. After blurring this vector to smooth the information, its median value is calculated. Using these results, a function is designed to identify the presence of the vessel shades over the columns i of the image:
| vessel(i) = 1 if |vec_i − med(vec)| > σ_vec, 0 otherwise | (9) |
where vec_i is the smoothed mean intensity of column i, med(vec) is the median of the vector and σ_vec is the corresponding standard deviation. A given column is, therefore, determined as vessel if its smoothed mean intensity is significantly distant from the median intensity value. The used threshold is automatically calculated given the conditions of each image.
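The coarse column-wise detection can be sketched as follows. The smoothing width and the σ-based threshold are illustrative assumptions, since the paper only states that the threshold is derived automatically from the conditions of each image.

```python
import numpy as np

def detect_vessel_shades(img, blur=7):
    """Coarse vessel-shade detection: flag the columns whose smoothed mean
    intensity is significantly distant from the median column intensity."""
    vec = img.mean(axis=0)                       # mean intensity per column
    kernel = np.ones(blur) / blur
    vec = np.convolve(vec, kernel, mode='same')  # smooth the column profile
    med = np.median(vec)
    sigma = vec.std()                            # illustrative automatic threshold
    return np.abs(vec - med) > sigma             # boolean mask, one entry per column
```

The returned mask can then be used to invalidate the best-pixel information of the flagged columns before the energy term is computed.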
When a column is identified as containing a vessel shade, the image Iadd does not include information about its best pixel. Therefore, the term κadd takes high energy values in those columns in order to penalize them. Figure 19 shows the differences derived from modifying this term to exclude the columns associated with the vessel shades, with the resulting layer delimitation in (c).
Figure 19.
Intensity-based energy term κadd influence after excluding the vessel shades: (a) Iadd after excluding the columns detected as vessel shades; (b) image representing κadd, using a neighborhood of columns, after the exclusion of the columns identified as vessel shades, whose pixels take high values; (c) resulting active contour after the first two steps.
2.2.4. Segmentation of the I/RPE layer
This boundary separates the Interdigitation (I) and RPE layers, determining the bottom boundary of the EZPR. As presented in Section 1, alterations in this zone correlate with several pathologies, so segmenting this boundary is essential. However, its presence is not always apparent, because sometimes there is not enough information to distinguish it. In addition, the EZPR can be missing or altered, making its detection even more difficult.
The active contour used to segment this boundary is initialized based on the M/E and RPE/C locations. Thus, considering that n equidistant nodes are used, the x-coordinate i (column in the image) of each node is known. The y-coordinates (rows) are then obtained as follows:
| y_i = (y_i^{RPE/C} + y_i^{M/E}) / 2 | (10) |
where y_i^{RPE/C} and y_i^{M/E} are the y-coordinates of the points in the RPE/C and M/E boundaries for each column i. Thus, the new node is equidistant from both pixels in those boundaries. Regarding the mask used to bound the ROI, it is restricted based on the M/E and RPE/C locations.
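This midpoint initialization can be sketched as follows; `init_irpe_contour` and the exact node placement are illustrative, assuming the boundaries are given as one row value per image column.

```python
import numpy as np

def init_irpe_contour(y_me, y_rpec, n_nodes):
    """Place n_nodes at equidistant columns; each node row is the midpoint
    between the M/E and RPE/C boundaries at that column (Eq. (10))."""
    width = len(y_me)
    cols = np.linspace(0, width - 1, n_nodes).round().astype(int)
    rows = (np.asarray(y_me)[cols] + np.asarray(y_rpec)[cols]) / 2.0
    return cols, rows
```

Because both delimiting boundaries are already segmented, this initialization starts inside the region of interest regardless of the pathological condition.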
Segmentation of the I/RPE is not straightforward, given that it is not a well-defined boundary and the intensity and edge information is insufficient. Only the intensity below each pixel is considered: when the EZPR presents alterations, the region above the I/RPE is darker than usual, so an intensity-above term would attract the contour to wrong regions. Gradient-based information considers intensity transitions; however, the presence of this boundary is not precise, with edges that are mainly short and disconnected, even after a blurring process. In addition, more than one dark-to-light transition can be found in the region, especially when the foveal depression is more salient and the Ellipsoid layer is stretched (Figure 20). This causes a stronger transition between the photoreceptor Outer Segment (OS) and the Interdigitation (I) zone in the center of these images, more pronounced than in the rest of the images. To avoid the active contour being attracted to the OS/I boundary instead of the I/RPE, the images are processed to erase the topmost edges in the ROI. Only edges located around the fovea column are erased and, since this situation usually arises when the Ellipsoid stretches, only the inner images of the sequence are processed in this way (the OCT sequence is centered on the fovea).
Figure 20.
Influence of the Ellipsoid stretching on the gradient information considered for the extraction of the I/RPE: (a) original image; (b) zoomed image showing the OS/I boundary in red, above the I/RPE one, in blue. The active contour would be attracted to the OS/I boundary unless its corresponding edges were erased.
As explained before, the lack of information in this area is significant. Given the imprecise presence of this boundary, combined with an initialization based on the previous boundaries (RPE/C and M/E) that leaves the internal constraints almost satisfied, the selection of parameters is not critical in this case to achieve an appropriate adjustment. After the energy minimization, the final node interpolation provides the entire retinal layer.
Figure 21(a) shows the segmentation obtained for the M/E and I/RPE layer boundaries over an OCT scan. Although this approach provides satisfactory results, it presents some limitations, especially if the EZPR is altered, as Figure 21(b) shows. Because of this, a specific refinement stage is designed to correct possible segmentation mistakes and improve the performance of the proposed method, even under pathological conditions.
Figure 21.
Segmentation of I/RPE and M/E (in red) for two different images, which have been zoomed: (a) successful segmentation; (b) confusion between I/RPE and M/E, due to the altered EZPR.
2.3. Refinement
To obtain precise segmentations of the retinal layers, two specific refinement phases are introduced to achieve an accurate performance even in pathological cases presenting different degrees of retinal degeneration that can derive from many eye diseases, such as diabetic retinopathy. Further details about these refinement phases are discussed next.
2.3.1. M/E layer refinement
Although the results of the M/E segmentation are generally accurate, when the EZPR presents intensity alterations, the information in that area is weaker and the gradient information misleading. In those cases, the segmentation model shows a bias towards delimiting the I/RPE boundary instead of the target M/E one. This situation is shown in Figure 21, where (a) illustrates a successful result and (b) an erroneous M/E segmentation. For that reason, a new approach for the M/E detection is applied to correct these erroneous deviations. The refinement process is composed of two steps, as the current boundary is already close to its aimed position. Firstly, the active contour model is initialized near the M/E layer. Then, a set of nodes is added in the foveal region to achieve a refined delimitation. Figure 22(b) shows the correction result over (a), illustrating this phase.
Figure 22.
M/E Refinement: (a) wrong result, produced by the pathological condition in the EZPR, making the M/E boundary segmentation confusing; (b) correction using the refinement approach over (a).
2.3.2. I/RPE and RPE/C refinement
Once these retinal layers are delimited, a refinement process may be applied taking advantage of their morphological localization. In particular, it may be assumed that both retinal layers are nearly parallel. Therefore, this step aims to find the single parametric polynomial that best fits both boundaries. A polynomial of degree 3 was determined to be sufficient to capture the possible curvatures of these boundaries. It is important to remark that keeping the polynomial degree low is desirable for computational reasons, and also avoids rough deviations. Thus, a set of curves C is calculated using all the points corresponding to the final contour nodes used in the segmentation process. Only points from the contour are considered because they pinpoint the position of the boundary, excluding those derived through interpolation.
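A minimal sketch of fitting a degree-3 polynomial through the final contour nodes, assuming a least-squares fit (the concrete fitting procedure is not specified in the text):

```python
import numpy as np

def fit_boundary_curve(node_cols, node_rows, degree=3):
    """Fit a degree-3 polynomial through the final contour nodes only;
    interpolated boundary points are excluded by passing node coordinates."""
    coeffs = np.polyfit(node_cols, node_rows, degree)
    return np.poly1d(coeffs)   # callable curve: row = curve(col)
```

Keeping the degree at 3 limits both the computational cost of evaluating the candidate curves and the risk of oscillating fits.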
The curves in the set C are obtained from the nodes of the I/RPE segmentation. To do so, firstly, the difference between each individual node of the I/RPE layer and the curve point at the corresponding column is calculated. Secondly, the curve C is moved to fit the RPE/C layer. For that, all the rows that are close to the RPE/C layer are examined, covering the margin between the highest and the lowest points of the boundary, as Figure 23 shows.
Figure 23.

Adjustment process of the refinement of the RPE/C layer segmentation, where the curve C is represented by the red dotted line, the nodes of the M/E layer by the red circles and the nodes of the RPE/C layer by the blue circles covering the mci range.
The fitness measure is given by how well the nodes in both boundaries match the curve. Since this measure is very restrictive regarding the exact position of a node, in order to gain flexibility it is formulated as the difference between each node and the aimed curve c, considering the information of the 8-connected neighborhood:
| d_i = min_{p ∈ N_8(c_i)} ‖n_i − p‖ | (11) |
Hence, the distance between the curve and the retinal layer is calculated as the sum of all the node distances d_i, where N represents the total number of nodes:
| d = (1/N) · Σ_{i=1}^{N} d_i | (12) |
Although the total number of nodes in the final contour can differ between boundaries, the relative importance of the distance of each boundary is already reflected in Eq. (12), with the division factor representing the number of nodes in the boundary.
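The per-node distance and its normalization by the node count can be sketched as follows. The zero-cost tolerance of one row is an illustrative reading of the 8-connected relaxation; the exact relaxed measure is not reproduced in the text.

```python
import numpy as np

def node_curve_distance(node_rows, curve_rows, tol=1):
    """Per-node distance to the curve with an 8-connected tolerance:
    a node within `tol` rows of the curve counts as distance zero."""
    d = np.abs(np.asarray(node_rows) - np.asarray(curve_rows))
    return np.where(d <= tol, 0.0, d - tol)

def boundary_distance(node_rows, curve_rows, tol=1):
    """Mean relaxed distance over all N nodes; the division by N makes
    boundaries with different node counts comparable."""
    d = node_curve_distance(node_rows, curve_rows, tol)
    return d.sum() / len(d)
```

With this normalization, a boundary described by many nodes does not dominate the combined measure simply because it contributes more terms.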
The curve c is constructed with the nodes belonging to the I/RPE boundary and, therefore, must be relocated to obtain its projection over the RPE/C layer. Thus, the global measurement dg, covering the distance information of both boundaries, is defined as:
| d_g = d_{I/RPE}(c) + d_{RPE/C}(c) | (13) |
where d_{I/RPE} and d_{RPE/C} represent the distances for the I/RPE and RPE/C boundaries, respectively. Since the projection of each curve c may be located anywhere within the range mci, as Figure 23 presented, the global distance dg is redefined:
| d_g = d_{I/RPE}(c) + min_{r ∈ mci} d_{RPE/C}(c_r) | (14) |
where c_r is the projection of c over the RPE/C boundary at row r within the environment defined by mci.
Thus, the aimed curve c is the one minimizing the distance between these boundaries, as determined by Eq. (15). Once extracted, the curve c substitutes the RPE/C and M/E layers, applying the appropriate modifications to the aforementioned retinal layers.
| c* = argmin_{c ∈ C} d_g(c) | (15) |
Figure 24 presents the results (zoomed over the region of interest) of the refinement method, marking the adjusted nodes with red circles. Once all the possible curves for the I/RPE layer have been explored, the best configuration is obtained using the control points, indicated as yellow circles.
Figure 24.
RPE/C and I/RPE segmentation refinement (zoomed images): (a) set of end nodes (red circles) used in the delimitation process, where the big yellow circles correspond to the control points of the best configuration; (b) final I/RPE and RPE/C segmentation.
Regarding the image dataset that was used for the validation of the methodology, we note that all the procedures in this study adhered to the tenets of the Declaration of Helsinki. More detailed information about the ethical committee that approved this study is given in Section 3.
3. Results & discussion
The proposed method was validated using 40 OCT sequences obtained with a CIRRUS™ device (Carl Zeiss Meditec). Each sequence includes 128 histological sections, giving a total of 5120 images. The sequences belong to 34 healthy individuals and 6 individuals with different degrees of diabetic retinopathy (3 without macular edema, 1 with macular edema and 2 with lesions in the photoreceptor layers). Each histological section presents a resolution of pixels, where a pixel size covers μm/pixel. The left and right margins of the sections were not considered in the tests [24], removing 10% of each side to avoid including areas with little information.
The “Comité de Ética da Investigación de A Coruña-Ferrol” committee belonging to the “Rede Galega de Comités de Ética da Investigación” attached to the regional government “Secretaría Xeral Técnica da Consellería de Sanidade da Xunta de Galicia” approved this study (Ref. 2014/437), which was conducted in accordance with the tenets of the Helsinki Declaration. Written informed consent was obtained from all studied patients.
Regarding the parameters, they were empirically established in a preliminary test, selecting those that offered accurate results. Table 1 presents the model parametrization optimized for this problem. The absence of a parameter in this table means that it was not applied in that case. The weights used during the active contour model evolution are presented in Table 2, Table 3, in order to allow the reproducibility of the results obtained in this work.
Table 1.
Values of the parameters that are used to initialize the active contour model for all the boundaries.
| Boundary | ILM | M/E | I/RPE | RPE/C |
|---|---|---|---|---|
| p | 75 | 85 | ||
| 1 | 1 | 2 | 4 | |
| 30 | 30 | 5 | 0 | |
| 2 | 1 | 2 | 3 | |
| 10 | 10 | 10 | 0 | |
| nnol | 21 | 9 | 9 | 9 |
| Wn × hn | ||||
| kx × ky | ||||
| tsp | 0.5 | |||
| tmed | 0.25 | |||
| tthick | 10 | |||
| ncmin | 4 | |||
Table 2.
Values of the parameters that were used in the active contour model to segment the ILM and RPE/C layers (absence of parameter means that it is set to zero).
| | ILM | | | RPE/C | |
|---|---|---|---|---|---|
| Step | 1 | 2 | 3 | 1 | 2 |
| nac | 26 | 26 | 40 | 21 | 21 |
| α | 0.0001 | 0.001 | 0.0001 | 0.05 | 0.0001 |
| β | 0.0001 | 0 | 0.0001 | 0.001 | 0.0001 |
| γ | 0.0001 | 0.0001 | 0 | 0.0001 | 0.0001 |
| nnhb | 5 | 5 | 5 | 5 | 5 |
| w1 | – | – | – | – | – |
| w2 | 0 | 0.005 | 0.001 | 0 | 0.0001 |
| w2prep | 0.001 | 0.005 | 0.001 | 0 | 0.001 |
| w3prep | −0.01 | 0 | 0 | −0.01 | −0.001 |
| w4prep | 0.1 | 0.01 | 0.01 | 0.01 | 0 |
| κ | – | – | – | 0.1 | 0.001 |
| nκ | – | – | – | 10 | 10 |
| ε | 0.001 | 0 | 0 | – | – |
Table 3.
Values of the parameters that were used with the active contour model to segment M/E, I/RPE and refine M/E (absence of parameter means that it is set to zero). In M/E, when the intensities above and below are considered, the correspondent values for nκ are represented as .
| | M/E | | | | I/RPE | M/E Correction | |
|---|---|---|---|---|---|---|---|
| Step | 1 | 2 | 3 | 4 | 1 | 1 | 2 |
| nac | 18 | 18 | 18 | 26 | 25 | 21 | 31 |
| α | 0 | 0.0001 | 0.001 | 0.001 | – | – | – |
| β | 0 | 0.0001 | 0.005 | 0.001 | 0.0001 | 0 | 0 |
| γ | 0 | 0.0001 | 0.001 | 0.0001 | 0.0001 | 0.0001 | 0.0001 |
| nnhb | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| w1 | 0 | 0 | 0.001 | 0 | – | – | – |
| w2 | 0 | 0 | 0.01 | 0.01 | 0.001 | 0.001 | 0.001 |
| w2prep | 0.001 | 0.001 | 0.001 | 0.01 | 0.01 | 0.01 | 0.01 |
| w3prep | 0 | −0.005 | 0 | 0 | – | – | – |
| w4prep | 0 | 0.01 | 0.00001 | 0.00001 | 0.001 | 0.001 | 0.001 |
| κ | 0.05 | 0.001 | 0.001 | 0 | – | 0.001 | 0 |
| nκ | 20/30 | 20/30 | 20/40 | – | – | 10 | – |
| ε | – | – | – | – | – | – | – |
Regarding the existence of speckle noise, we emphasize that one of the main advantages of active contour models is their significant tolerance to noisy scenarios, making them less dependent on a specific noise removal process than other strategies. Consequently, no denoising strategy was adopted beyond the general smoothing filters of the pre-processing stage.
Two experiments were conducted to validate the proposal, covering patients with both pathological and non-pathological conditions. Firstly, the results over all the 5120 sections were reviewed by a clinical expert, indicating for each boundary whether the segmentation was correctly achieved or not, from which a qualitative evaluation of the proposed method is derived. The purpose of this experiment is to demonstrate the clinical feasibility of the proposed methodology using a significantly large dataset. Then, a quantitative measurement of the method's performance is calculated, comparing the method results with the manual labeling of an expert. Given the large size of the image dataset, a random subset of 100 images was analyzed. Figure 25 includes different representative segmented cases that were reviewed by a clinical expert in these experiments.
Figure 25.
Segmentations obtained in the OCT images, with layers segmented in red. The green lines indicate the regions that were eliminated from the evaluation process. (a) and (b) histological sections from patients with pathological conditions; (c) and (d) histological sections from healthy patients.
3.1. Experiment I
The aim of this experiment is to assess the method over a large dataset (all 5120 images) to determine the feasibility of the proposed approach in daily clinical practice. As the dataset is too large to be marked manually, the expert clinician visually reviewed all the computationally extracted boundaries of each image section, indicating whether they were successfully segmented or not. For a better analysis, we detail the results obtained by our proposal for the 34 healthy patients and the 6 patients with different degrees of diabetic retinopathy (3 without macular edema, 1 with macular edema and 2 with lesions in the photoreceptor layers). The obtained results are shown in Table 4.
Table 4.
Success rates of the presented system over the set of 5120 histological sections from patients with pathological and non-pathological conditions. Rows 1–5: successfully/unsuccessfully segmented images (S/NS); row 6: corresponding success rates (% of OCT scans with successful delimitations).
| Boundary | ILM | M/E | I/RPE | RPE/C | Total |
|---|---|---|---|---|---|
| Healthy (S/NS) | 4350/2 | 4223/129 | 4114/238 | 4133/219 | 16820/588 |
| Without Edema (S/NS) | 384/0 | 381/3 | 309/75 | 309/75 | 1383/153 |
| With Edema (S/NS) | 128/0 | 126/2 | 81/47 | 90/38 | 425/87 |
| Photoreceptors (S/NS) | 256/0 | 249/7 | 203/53 | 206/50 | 914/110 |
| Total (S/NS) | 5118/2 | 4979/141 | 4707/413 | 4738/382 | 19542/938 |
| Success rate (%) | 99.960 | 97.246 | 91.933 | 92.539 | 95.419 |
The expert not only marked the boundaries of each image as successful or unsuccessful, but also the area where they were not correctly segmented. For that reason, it is possible to study the distribution of the mistakes along the images. This is relevant because, observing the results, most mistakes correspond to small deviations or are located on the image sides (next to the 10% margins). With that purpose, the length of the areas marked as unsuccessful was measured. Then, they were categorized using a set Cat of groups, based on their importance, taking the image width as reference, as follows:
| (16) |
For this problem, a set of five categories is defined (1, 2, 3, 4, 5), representing increasing lengths. After categorizing all the marks made by the expert, the summarized results are shown in Table 5.
Table 5.
Distribution of the mark lengths over the whole dataset, where each cell shows the percentage of marks falling in each category. Most of the mistakes (category 1) correspond to small deviations in the segmentation.
| Error category | No error | 1(smallest) | 2 | 3 | 4 | 5(biggest) |
|---|---|---|---|---|---|---|
| Frequency (%) | 95.419 | 3.964 | 0.512 | 0.053 | 0.024 | 0.024 |
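The categorization of the mark lengths can be sketched as follows. The concrete fraction thresholds are illustrative assumptions, since the original cut-off values of Eq. (16) are not reproduced here; only the five-category structure relative to the image width comes from the text.

```python
def categorize_mark(mark_length, image_width,
                    fractions=(0.1, 0.25, 0.5, 0.75)):
    """Map the length of an unsuccessfully segmented area to a category 1..5
    by comparing it with increasing fractions of the image width.
    The thresholds here are hypothetical placeholders."""
    ratio = mark_length / image_width
    for cat, f in enumerate(fractions, start=1):
        if ratio <= f:
            return cat
    return len(fractions) + 1   # longest marks fall in the last category
```

Grouping the marks this way separates small local deviations (category 1) from failures that affect a substantial part of the boundary (categories 4 and 5).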
3.1.1. Discussion
Table 4 shows the images that the expert determined as successfully segmented or not. The M/E and ILM segmentations offered the best results, with those for the RPE/C being slightly lower. Wrong segmentations are mainly identified for the I/RPE layer which, as expected, has the highest segmentation complexity, added to a presence that is not always immediate even for clinicians.
As typically happens with computational methodologies for medical imaging, the proposed system obtains better results under the normal conditions of healthy tissues. It must be considered that pathological OCT images include a significant degradation of the retinal layer structure, which makes them a considerably more challenging scenario than the healthy cases. Given that, we consider that the results of the proposed method are also satisfactory for all the pathological cases present in our dataset. In fact, we report a success rate of 87.14% for all the pathological images, which we consider satisfactory. We also highlight that the initialization of the active contour model for all the retinal layers is independent of the different pathological conditions or disorders analyzed in this work.
Regarding the distribution of the mark lengths over the whole dataset, Table 5 shows that most of the mistakes (3.964%) belong to the first category, which means that they correspond to small deviations in the segmented boundaries. In fact, the percentage of mistakes affecting more than half of the boundary length is negligible (lower than 0.03% of the mistakes). Similar tendencies can be observed between pathological and non-pathological cases. Therefore, combining these results with the rates presented in Table 4, we confirm the suitability and robustness of the proposed method even when applied to complex clinical scenarios.
3.2. Experiment II
The previous experiment constitutes an adequate approach to analyze the kind of mistakes typically made by the method. As a complement, a second experiment was conducted to further validate it. Hence, 100 histological sections were randomly selected from the entire dataset: 25 healthy cases, 25 without macular edema, 25 with macular edema and 25 with lesions in the photoreceptor layers.
Table 6, Table 7 show, respectively, the unsigned and signed boundary differences between the segmentations produced by the proposed methodology and those labeled by the clinical expert. The results are presented in subgroups, analyzing the different pathological and non-pathological scenarios. We want to highlight that the state-of-the-art methods were validated with different datasets, under different conditions and settings (image size, pixel-level resolution, quality, OCT device, signal averaging, enhanced depth, image acquisition protocol, different pathological cases, labeling of different clinical experts, ...), which is why a completely fair comparison is extremely complicated. Despite that, the most representative methods with reported results were collected in order to compare the current method with the state of the art, keeping these limitations in mind.
Table 6.
Unsigned boundary differences of the proposed method and other state of the art methods, in terms of mean ± std in pixels (results for the state of the art methods were converted to pixels). Since different settings were used in each method, a direct comparison is not possible, but they show that the proposed method is in line with the state of the art in the boundary segmentation issue.
Table 7.
Signed boundary differences of the proposed method and other state of the art methods, in terms of mean ± std in pixels (results for the state of the art methods were converted to pixels). Since different settings were used in each method, a direct comparison is not possible, but they show that the proposed method is in line with the state of the art in the boundary segmentation issue.
Additionally, we calculated the Jaccard index and Dice similarity coefficient (Eqs. (17) & (18), respectively) between the manually annotated regions and the segmentation outputs of the proposed method. These measurements use, as reference, the true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN).
| Jaccard = TP / (TP + FP + FN) | (17) |
| Dice = 2·TP / (2·TP + FP + FN) | (18) |
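These overlap measures are standard and can be computed directly from binary region masks, as this minimal sketch shows:

```python
import numpy as np

def jaccard_dice(pred, truth):
    """Jaccard index and Dice coefficient between a predicted region mask
    and the expert-annotated mask (TN is not needed by either measure)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()   # pixels in both regions
    fp = np.logical_and(pred, ~truth).sum()  # predicted but not annotated
    fn = np.logical_and(~pred, truth).sum()  # annotated but not predicted
    jaccard = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    return jaccard, dice
```

In this work, the masks would correspond to the four retinal regions delimited by consecutive segmented boundaries.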
Table 8 presents the results of the comparative analysis through Jaccard and Dice coefficients of the retinal layers segmentation. In addition, we also structure the results of this analysis by grouping them according to the different degrees of diabetic retinopathy that were presented in our dataset. For this analysis, we considered four retinal regions: (ILM)–(M/E), (M/E)–(I/RPE), (I/RPE)–(RPE/C) and (ILM)–(RPE/C).
Table 8.
Dice similarity coefficient and Jaccard index in the retinal layers segmentation.
| | (ILM)–(M/E) | | (M/E)–(I/RPE) | | (I/RPE)–(RPE/C) | | (ILM)–(RPE/C) | |
|---|---|---|---|---|---|---|---|---|
| | Jaccard | Dice | Jaccard | Dice | Jaccard | Dice | Jaccard | Dice |
| Healthy | 0.954 | 0.976 | 0.773 | 0.869 | 0.762 | 0.852 | 0.968 | 0.983 |
| Without Edema | 0.954 | 0.976 | 0.741 | 0.851 | 0.769 | 0.869 | 0.965 | 0.982 |
| With Edema | 0.955 | 0.976 | 0.694 | 0.817 | 0.768 | 0.868 | 0.969 | 0.984 |
| Photoreceptors | 0.955 | 0.977 | 0.740 | 0.850 | 0.768 | 0.868 | 0.968 | 0.984 |
| All Patients | 0.954 | 0.976 | 0.737 | 0.847 | 0.767 | 0.864 | 0.967 | 0.983 |
Table 9 presents the Dice results of the state-of-the-art methods and of our proposal. Details of these methods were previously introduced and described in Section 1. As shown, our method offers a competitive performance with respect to the other proposals.
Table 9.
Dice similarity coefficients obtained by the proposed method and the state of the art methods. Since different settings were used in each method, a direct comparison is not possible, but they show that the proposed method is in line with the state of the art in the boundary segmentation issue.
3.2.1. Discussion
The mean unsigned and signed differences calculated for all the boundaries show that the proposed methodology segments the layer boundaries accurately, as can be observed in Table 6, Table 7. Although an accurate comparison is extremely complicated, given that each state-of-the-art method was validated under different conditions and settings, the pixel-wise results achieved by the proposed methodology demonstrate a competitive performance in relation to previous works. In fact, the method obtained error rates around 1 pixel in all the considered scenarios. Given the resolution of the used images, we consider these rates significantly low. In addition, the error rates are not only low but also stable across all the boundaries and all the pathological cases analyzed in our dataset, a desirable characteristic for any computational tool. Although the standard deviation is appropriate, the results could be improved if higher resolutions were considered.
Specifically, the results presented in Table 6, Table 7 also suggest the suitability of the approach designed for the M/E layer, detecting overlapping boundaries and correcting the M/E segmentation. Regarding the RPE/C layer, the low error obtained suggests that the refinement process achieves a satisfactory result for this retinal layer. Regarding the I/RPE layer, a high accuracy is achieved. This is remarkable, considering that not all the methods tackle the segmentation of this boundary (only results for [41] and the Isfahan dataset [26] are available).
Table 8 presents the results using the Jaccard index and the Dice similarity coefficient. As can be seen, satisfactory results were achieved for all the pathological conditions present in our dataset. Table 9 lists a comparative analysis between different reference works of the literature and our proposed strategy. Despite considering complete boundaries, this proposal obtains results similar to most of the recent methods, including deep learning strategies. This suggests, along with the results of the previous experiments, that the proposed methodology returns accurate segmentations even in complex environments, making it feasible for ophthalmologists in their routine clinical practice.
4. Conclusions
A fully automatic system to segment the main retinal layers in OCT images is presented in this work. In particular, four layer boundaries are delimited: ILM, M/E, I/RPE and RPE/C. The large variability of structures, not only blood vessels but also others such as exudates, cysts or other pathological and non-pathological artifacts frequently present in OCT images, makes the segmentation process considerably harder, representing a significant challenge.
The method was completely adapted and optimized for the specific search space that is addressed in this work, aiming to obtain a low computational cost. Refinement processes have been designed to correct possible segmentation errors and improve the accuracy in the obtained results.
Two experiments were designed to assess the method, both in patients with pathological and non-pathological conditions. The first experiment demonstrated that the method provides appropriate results over a large dataset of 5120 OCT images. In the second experiment, the accuracy is quantitatively evaluated, comparing the results with other state of the art methods, showing that the proposed method presents a high potential with respect to other similar approaches.
In this line, the method can be extended to segment more retinal boundaries. In addition, information from adjacent scans of the same OCT sequence could be used in the delimitation process to improve the accuracy. The coarse detection of the vessel shades could be refined to obtain more precise results and be used in the segmentation of other layers. Besides that, the method should be tested over images with different resolutions. This suggests that an interesting future task is building a dataset of OCT images, annotated by different experts, to enable a consistent comparison between methods. In addition, further validations could be performed by increasing the dataset dimensionality, including other types of ocular diseases such as, for example, cases with macular hole or age-related macular degeneration.
Declarations
Author contribution statement
Ana González-López: Performed the experiments; Wrote the paper.
Joaquim de Moura: Performed the experiments.
Jorge Novo: Analyzed and interpreted the data.
Marcos Ortega: Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data.
Manuel G. Penedo: Conceived and designed the experiments.
Funding statement
This work was supported by the Instituto de Salud Carlos III, Government of Spain and FEDER funds of the European Union (PI14/02161 and DTS15/00153), the Ministerio de Economía y Competitividad, Government of Spain (DPI2015-69948-R), the Xunta de Galicia, Centro singular de investigación de Galicia accreditation 2016–2019 (ED431G/01), and Grupos de Referencia Competitiva (ED431C 2016-047).
Competing interest statement
The authors declare no conflict of interest.
Additional information
No additional information is available for this paper.
References
- 1. Wong T., Klein R., Sharrett A., Schmidt M., Pankow J., Couper D., Klein B., Hubbard L., Duncan B., Investigators A. Retinal arteriolar narrowing and risk of diabetes mellitus in middle-aged persons. J. Am. Med. Assoc. 2002;287:2528–2533. doi: 10.1001/jama.287.19.2528.
- 2. de Moura J., Novo J., Charlón P., Barreira N., Ortega M. Enhanced visualization of the retinal vasculature using depth information in OCT. Med. Biol. Eng. Comput. 2017;55(12):2209–2225. doi: 10.1007/s11517-017-1660-8.
- 3. Novo J., Penedo M., Santos J. Optic disc segmentation by means of GA-Optimized Topological Active Nets. Image Analysis and Recognition; ICIAR'08; 2008. pp. 807–816.
- 4. Ikram M., de Jong F., Bos M., Vingerling J., Hofman A., Koudstaal P., de Jong P., Breteler M. Retinal vessel diameters and risk of stroke: the Rotterdam study. Neurology. 2006;66(9):1339–1343. doi: 10.1212/01.wnl.0000210533.24338.ea.
- 5. Puzyeyeva O., Lam W., Flanagan J., Brent M., Devenyi R., Mandelcorn M., Wong T., Hudson C. High-resolution optical coherence tomography retinal imaging: a case series illustrating potential and limitations. J. Ophthalmol. 2011;2011:1–6. doi: 10.1155/2011/764183.
- 6. Yaqoob Z., Wu J., Yang C. Spectral domain optical coherence tomography: a better OCT imaging strategy. BioTechniques. 2005;39(6 Suppl):S6–S13. doi: 10.2144/000112090.
- 7. Bowd C., Weinreb R., Williams J., Zangwill L. The retinal nerve fiber layer thickness in ocular hypertensive, normal, and glaucomatous eyes with optical coherence tomography. Arch. Ophthalmol. 2000;118:22–26. doi: 10.1001/archopht.118.1.22.
- 8. Keane P., Patel P., Liakopoulos S., Heussen F., Sadda S., Tufail A. Evaluation of age-related macular degeneration with optical coherence tomography. Surv. Ophthalmol. 2012;57(5):389–414. doi: 10.1016/j.survophthal.2012.01.006.
- 9. de Moura J., Novo J., Rouco J., Penedo M.G., Ortega M. Automatic identification of intraretinal cystoid regions in optical coherence tomography. Conference on Artificial Intelligence in Medicine in Europe; Springer; 2017. pp. 305–315.
- 10. Samagaio G., Estévez A., de Moura J., Novo J., Fernandez M.I., Ortega M. Automatic macular edema identification and characterization using OCT images. Comput. Methods Programs Biomed. 2018;163:47–63. doi: 10.1016/j.cmpb.2018.05.033.
- 11. Albrecht P., Ringelstein M., Mueller A., Keser N., Dietlein T., Lappas A., Foerster A., Hartung H., Aktas O., Methner A. Degeneration of retinal layers in multiple sclerosis subtypes quantified by optical coherence tomography. Mult. Scler. J. 2012;18(10):1422–1429. doi: 10.1177/1352458512439237.
- 12. Rangaraju L., Jiang X., McAnany J.J., Tan M.R., Wanek J., Blair N.P., Lim J.I., Shahidi M. Association between visual acuity and retinal layer metrics in diabetics with and without macular edema. J. Ophthalmol. 2018;2018. doi: 10.1155/2018/1089043.
- 13.Okada K., Yamamoto S., Mizunoya S., Hoshino A., Arai M., Takatsuna Y. Correlation of retinal sensitivity measured with fundus-related microperimetry to visual acuity and retinal thickness in eyes with diabetic macular edema. Eye. 2006;20(7):805. doi: 10.1038/sj.eye.6702014. [DOI] [PubMed] [Google Scholar]
- 14.Larsson J., Zhu M., Sutter F., Gillies M.C. Relation between reduction of foveal thickness and visual acuity in diabetic macular edema treated with intravitreal triamcinolone. Am. J. Ophthalmol. 2005;139(5):802–806. doi: 10.1016/j.ajo.2004.12.054. [DOI] [PubMed] [Google Scholar]
- 15.Network D.R.C.R. Relationship between optical coherence tomography–measured central retinal thickness and visual acuity in diabetic macular edema. Ophthalmology. 2007;114(3):525–536. doi: 10.1016/j.ophtha.2006.06.052. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Hee M.R., Puliafito C.A., Wong C., Duker J.S., Reichel E., Rutledge B., Schuman J.S., Swanson E.A., Fujimoto J.G. Quantitative assessment of macular edema with optical coherence tomography. Arch. Ophthalmol. 1995;113(8):1019–1029. doi: 10.1001/archopht.1995.01100080071031. [DOI] [PubMed] [Google Scholar]
- 17.Sandberg M.A., Brockhurst R.J., Gaudio A.R., Berson E.L. The association between visual acuity and central retinal thickness in retinitis pigmentosa. Investig. Ophthalmol. Vis. Sci. 2005;46(9):3349–3354. doi: 10.1167/iovs.04-1383. [DOI] [PubMed] [Google Scholar]
- 18.Theodossiadis P.G., Theodossiadis G.P., Charonis A., Emfietzoglou I., Grigoropoulos V.G., Liarakos V.S. The photoreceptor layer as a prognostic factor for visual acuity in the secondary epiretinal membrane after retinal detachment surgery: imaging analysis by spectral-domain optical coherence tomography. Am. J. Ophthalmol. 2011;151(6):973–980. doi: 10.1016/j.ajo.2010.12.014. [DOI] [PubMed] [Google Scholar]
- 19.Piccolino F.C., de La Longrais R.R., Ravera G., Eandi C.M., Ventre L., Manea M. The foveal photoreceptor layer and visual acuity loss in central serous chorioretinopathy. Am. J. Ophthalmol. 2005;139(1):87–99. doi: 10.1016/j.ajo.2004.08.037. [DOI] [PubMed] [Google Scholar]
- 20.Kobayashi M., Iwase T., Yamamoto K., Ra E., Murotani K., Matsui S., Terasaki H. Association between photoreceptor regeneration and visual acuity following surgery for rhegmatogenous retinal detachment. Investig. Ophthalmol. Vis. Sci. 2016;57(3):889–898. doi: 10.1167/iovs.15-18403. [DOI] [PubMed] [Google Scholar]
- 21.Haeker M., Sonka M., Kardonc R., Shah V., Wu X., Abrámoff M. Automated segmentation of intraretinal layers from macular optical coherence tomography images. Proc. SPIE: Med. Imag. 2007;6512 [Google Scholar]
- 22.Garvin M., Abrámoff M., Wu X., Russell S., Burns T., Sonka M. Automated 3D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images. IEEE Trans. Med. Imaging. 2009;28(9):1436–1447. doi: 10.1109/TMI.2009.2016958. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Lee K., Niemeijer M., Garvin M., Kwon Y., Sonka M., Abrámoff M. Segmentation of the optic disc in 3-D OCT scans of the optic nerve head. IEEE Trans. Med. Imaging. 2010;29(1):159–168. doi: 10.1109/TMI.2009.2031324. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Chiu S., Li X., Nicholas P., Toth C., Izatt J., Farsiu S. Automatic segmentation of seven retinal layers in SD-OCT images congruent with expert manual segmentation. Opt. Express. 2010;18(18):19413–19428. doi: 10.1364/OE.18.019413. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Tokayer J., Ortega A., Huang D. Sparsity-based retinal layer segmentation of optical coherence tomography images. 18th IEEE International Conference on Image Processing; ICIP'11; 2011. pp. 449–452. [Google Scholar]
- 26.Kafieh R., Rabbani H., Abrámoff M., Sonka M. Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map. Med. Image Anal. 2013;17(8):907–928. doi: 10.1016/j.media.2013.05.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Lang A., Carass A., Hauser M., Sotirchos E.S., Calabresi P.A., Ying H.S., Prince J.L. Retinal layer segmentation of macular OCT images using boundary classification. Biomed. Opt. Express. 2013;4(7):1133–1152. doi: 10.1364/BOE.4.001133. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Yang Q., Reisman C., Wang Z., Fukuma Y., Hangai M., Yoshimura N., Tomidokoro A., Araie M., Raza A., Hood D., Chan K. Automated layer segmentation of macular OCT images using dual-scale gradient information. Opt. Express. 2010;18(20):21293–21307. doi: 10.1364/OE.18.021293. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Yang Q., Reisman C., Chan K., Ramachandran R., Raza A., Hood D. Automated segmentation of outer retinal layers in macular OCT images of patients with retinitis pigmentosa. Biomed. Opt. Express. 2011;2(9):2493–2503. doi: 10.1364/BOE.2.002493. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Carass A., Lang A., Hauser M., Calabresi P.A., Ying H.S., Prince J.L. Multiple-object geometric deformable model for segmentation of macular OCT. Biomed. Opt. Express. 2014;5(4):1062–1074. doi: 10.1364/BOE.5.001062. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Yazdanpanah A., Hamarneh G., Smith B., Sarunic M. Intra-retinal layer segmentation in optical coherence tomography using an active contour approach. Medical Image Computing and Computer-Assisted Intervention; MICCAI'09; 2009. pp. 649–656. [DOI] [PubMed] [Google Scholar]
- 32.Chan T.F., Vese L.A. Active contours without edges. IEEE Trans. Image Process. 2001;10(2):266–277. doi: 10.1109/83.902291. [DOI] [PubMed] [Google Scholar]
- 33.Mumford D., Shah J. Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 1989;42(5):577–685. [Google Scholar]
- 34.Yazdanpanah A., Hamarneh G., Smith B., Sarunic M. Segmentation of intra-retinal layers from optical coherence tomography images using an active contour approach. IEEE Trans. Med. Imaging. 2011;30(2):484–496. doi: 10.1109/TMI.2010.2087390. [DOI] [PubMed] [Google Scholar]
- 35.Liu S., Peng Y. A local region-based Chan–Vese model for image segmentation. Pattern Recognit. 2012;45(7):2769–2779. [Google Scholar]
- 36.Yuan Y., He C. Adaptive active contours without edges. Math. Comput. Model. 2012;55(5–6):1705–1721. [Google Scholar]
- 37.Venhuizen F.G., van Ginneken B., Liefers B., van Grinsven M.J., Fauser S., Hoyng C., Theelen T., Sánchez C.I. Robust total retina thickness segmentation in optical coherence tomography images using convolutional neural networks. Biomed. Opt. Express. 2017;8(7):3292–3316. doi: 10.1364/BOE.8.003292. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Shah A., Zhou L., Abrámoff M.D., Wu X. Multiple surface segmentation using convolution neural nets: application to retinal layer segmentation in OCT images. Biomed. Opt. Express. 2018;9(9):4509–4526. doi: 10.1364/BOE.9.004509. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.Hamwood J., Alonso-Caneiro D., Read S.A., Vincent S.J., Collins M.J. Effect of patch size and network architecture on a convolutional neural network approach for automatic segmentation of OCT retinal layers. Biomed. Opt. Express. 2018;9(7):3049–3066. doi: 10.1364/BOE.9.003049. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 40.Roy A.G., Conjeti S., Karri S.P.K., Sheet D., Katouzian A., Wachinger C., Navab N. ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks. Biomed. Opt. Express. 2017;8(8):3627–3642. doi: 10.1364/BOE.8.003627. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41.González A., Ortega M., Penedo M., Charlón P. International Conference on Image Analysis and Recognition. vol. 8815. 2014. Automatic robust segmentation of retinal layers in OCT images with refinement stages; pp. 337–345. (Lect. Notes Comput. Sci.). [Google Scholar]
- 42.Kass M., Witkin A., Terzopoulos D. Snakes: active contour models. Int. J. Comput. Vis. 1988;1(4):321–331. [Google Scholar]
- 43.González A., Penedo M., Vázquez S., Novo J., Charlón P. Cost function selection for a graph-based segmentation in OCT retinal images. Computer Aided Systems Theory; EUROCAST'13; 2013. pp. 125–132. [Google Scholar]
- 44.Montuoro A., Waldstein S.M., Gerendas B.S., Schmidt-Erfurth U., Bogunović H. Joint retinal layer and fluid segmentation in OCT scans of eyes with severe macular edema using unsupervised representation and auto-context. Biomed. Opt. Express. 2017;8(3):1874–1888. doi: 10.1364/BOE.8.001874. [DOI] [PMC free article] [PubMed] [Google Scholar]