Abstract.
Retinal blood vessels can indicate serious health conditions, such as cardiovascular disease and stroke. Thanks to modern imaging technology, high-resolution images provide detailed information that helps analyze retinal vascular features before the symptoms associated with such conditions fully develop. Additionally, these retinal images can be used by ophthalmologists to facilitate diagnosis and eye surgery procedures. A fuzzy noise reduction algorithm was employed to enhance color images corrupted by Gaussian noise. The present paper proposes employing contrast limited adaptive histogram equalization to enhance illumination and increase the contrast of retinal images captured by state-of-the-art cameras. Possessing directional properties, the multistructure elements method can lead to high-performance edge detection. Therefore, multistructure elements-based morphology operators are used to detect high-quality image ridges. Following this detection, the irrelevant ridges, which are not part of the vessel tree, were removed by morphological operators by reconstruction, while attempting to keep the thin vessels preserved. A combined method of connected components analysis (CCA) in conjunction with a thresholding approach was further used to identify the ridges that correspond to vessels. The application of CCA yields higher efficiency when it is applied locally rather than to the whole image. The significance of our work lies in the way in which several methods are effectively combined and in the originality of the database employed, making this work unique in the literature. Computer simulation results on wide-field retinal images with up to a 200-deg field of view demonstrate the efficacy of the proposed approach, with an accuracy of 0.9524.
Keywords: wide-field retinal image, blood vessel segmentation, fuzzy noise reduction, contrast limited adaptive histogram equalization method, morphology operators using reconstruction, multistructure elements morphology
1. Introduction
The retina is a delicate membrane located at the rear of the eye with a complex and layered structure that includes several layers of neurons, which can be visualized as a retinal image (fundus image) by the fundus camera. Retinal images are extensively used to provide valuable information on the way in which local ocular disease progresses, and this information can also reveal diabetes, arteriosclerosis, high blood pressure, cardiovascular diseases, and even stroke.1,2 Therefore, a close examination of the features of retinal vessels can support the early detection of the corresponding diseases.
One of the problems that retinal surgeons face within diagnostic or retinal surgical procedures is a small field of view (FOV): images are usually taken of only part of the retina, so that details, including small capillaries, are not visible or barely visible. Therefore, our purpose is to extract vessels using large-scale (wide-field) retinal images that can greatly help physicians during surgery or even help them to diagnose various retina-related diseases. In other words, this paper examines the problems of retinal laser surgery. This surgery is currently being done manually and has proven to be an effective treatment for leading blindness-causing conditions,3 such as age-related macular degeneration, diabetic retinopathy, and degenerative myopia. Despite laser retinal surgery being the most popular treatment for these conditions, the current success rate is relatively low, because the first treatment has a substantial recurrence and/or persistence rate. These failures are the result of the manual nature of the surgery and the nonexistence of spatial reckoning aids in the clinical instrumentation. The physician has great difficulty developing a complete view of the retina, responding fast enough to movements of the patient’s eye, which might move uncontrollably during surgery, and providing quantitative interpretation of optical dosage. The compelling requirement to minimize invasiveness indicates that computer vision algorithms are promising for addressing the aforementioned problems.4 Hence, it would be favorable to run an automatic surgery under conditions that are controllable and reproducible. Computer vision methods pertaining to mosaicing retina images can be an effective tool to realize these goals. Such mosaic images can be utilized for diagnosis and prevention and for providing a spatial map for the laser guidance system during treatment.
The first objective of the authors in using this database, which was collected by the authors for the first time, is the segmentation of retinal vessels in a wider scope than traditional databases, such as digital retinal images for vessel extraction (DRIVE) and structured analysis of the retina (STARE). Because the detection of lesions and diseases of the eye is not considered in this paper, the disappearance and loss of these lesions in an image with a larger FOV—knowing that the full pattern of blood vessels is preserved—is not our first priority. However, in a subsequent paper, the presence and detection of these lesions will be carefully evaluated with the new algorithm. In Sec. 3, the authors refer to the wide-field retinal images (mosaic images). The technology of making a mosaic from ordinary images is not new, but this technology and the cameras that capture these retinal images are newly designed and built, and such images have not previously been used for retinal vessel segmentation. Other imaging techniques, such as “optos cameras,” create images that are not similar to fundus retinal images and have different medical applications, and they are not evaluated with the same mathematical algorithms. Retinal image mosaicing corresponds to stitching multiple correlated images to create a larger wide-angle image of a scene. Being a synthetic composition created from a sequence of retinal images, a retinal image mosaic can be acquired by comprehending geometric relationships among images. In this paper, retinal wide-field images constructed using the mosaicing technique were used for vessel segmentation, and the retinal mosaic database used in this paper is completely novel.
An automation-based method for analyzing retinal images is desirable and is particularly vital to accommodate mass screening programs.5,6 Retinal image preprocessing, segmentation, and analysis support automated methods for early detection and further help prevent severe vision loss in patients. Retinal diseases affect the features of blood vessels; in particular, the bifurcations, reflectivity, and tortuosity of the vessels are changed by disease processes.7,8 Quantification of these features for medical diagnosis requires precise vessel segmentation. In fact, extracting vessels from the background as accurately as possible is much desired. In this paper, a retinal blood vessel segmentation method is presented, where wide-field retinal images were utilized based on a pixel-based approach in the presence of noise, which is modeled as Gaussian noise. Noise may be generated in an image for various reasons: for example, during acquisition and transmission, high-energy spikes in the channel or faulty memory locations in hardware can corrupt pixel values. Corruption of images caused by noise makes an accurate vessel map difficult. A Gaussian distribution is an appropriate estimate of the noise behavior during segmentation. To reduce noise in fundus color images, it is important to note that processing all three color channels is essential to obtaining an accurate image and achieving a better vessel map. As such, our proposed method uses a multipass processing approach that gradually reduces the noise in the R, G, and B channels of the image before the retinal vessel structure is extracted.9,10 Reduced run times and image peak signal-to-noise-ratio (PSNR) improvement using our selected fuzzy filter are the primary advantages of the work presented in this paper.
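As a simple illustration of the noise model and the PSNR figure of merit discussed above, the following sketch corrupts an RGB image with zero-mean Gaussian noise per channel and measures the resulting PSNR. The function names and parameter values here are ours, not from the cited works.

```python
import numpy as np

def add_gaussian_noise(img, sigma, rng):
    """Corrupt an image with zero-mean additive Gaussian noise, per channel."""
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255)  # keep values in the valid 8-bit range

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

A denoising filter is then judged by how much it raises the PSNR of the noisy image with respect to the original.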
In this paper, following the noise reduction, a contrast limited adaptive histogram equalization (CLAHE) method was employed in the preprocessing stage to enhance the illumination and increase the contrast of retinal images.11 CLAHE demonstrates higher flexibility in selecting the local histogram mapping function. Once the clipping level of the histogram was established, unfavorable noise amplification was curtailed accordingly. Vessel segmentation (edge detection) could be enhanced using the multistructure elements (SEs) method, which features directionality. A combined method of mathematical morphology in connection with the multistructure elements method was, therefore, used in the next step to detect the image ridges.12 Afterward, morphological opening by reconstruction was used to eliminate false ridges, which are not related to the vessel tree, while trying to avoid inadvertently eliminating the thin vessel edges. The morphological opening by reconstruction takes advantage of multistructure elements that boost the performance of this process.13 The specific range of the diameter of blood vessels imposes a restriction on the size of the SEs and their filtering power. Consequently, there still remain several false edges that must be eliminated by further use of connected components analysis (CCA) and filtering, which are performed locally for better performance. The local application involves dividing an image into sections, where CCA and length filtering are performed separately for each section.14 The result of this stage demonstrated a marked enhancement in the blood vessel segmentation.
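The local CCA and length-filtering step described above can be sketched as follows. This is a minimal illustration assuming 8-connectivity, with a tile size and length threshold chosen by us for demonstration, not the exact parameters used in the paper.

```python
import numpy as np
from collections import deque

def remove_short_components(mask, min_size):
    """Label 8-connected components in a binary mask and drop those with
    fewer than min_size pixels (length filtering)."""
    h, w = mask.shape
    keep = np.zeros_like(mask, dtype=bool)
    seen = np.zeros_like(mask, dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:  # breadth-first flood fill of one component
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                                seen[ny, nx] = True
                                q.append((ny, nx))
                if len(comp) >= min_size:  # keep only long-enough components
                    for y, x in comp:
                        keep[y, x] = True
    return keep

def local_length_filter(mask, tile=64, min_size=30):
    """Apply CCA-based length filtering tile by tile rather than globally."""
    out = np.zeros_like(mask, dtype=bool)
    for y0 in range(0, mask.shape[0], tile):
        for x0 in range(0, mask.shape[1], tile):
            block = mask[y0:y0 + tile, x0:x0 + tile]
            out[y0:y0 + tile, x0:x0 + tile] = remove_short_components(block, min_size)
    return out
```

Running the filter per tile lets the length threshold act on local vessel fragments, which is the motivation for the local application mentioned in the text.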
The rest of the paper has the following organization: Sec. 2 explains some state-of-the-art retinal blood vessel segmentation methods. Section 3 provides a wide-field retinal images overview. Section 4 describes the preprocessing, and Sec. 5 presents proposed methods based on mathematical morphology. Subsequently, experimental results and analysis are presented in Sec. 6. Finally, Sec. 7 concludes the paper and provides future work prospects.
2. Prior Work
Increased attention has been given to automated segmentation of retinal blood vessels in the last few years.15 Works conducted on vascular segmentation are mainly classified into two groups—supervised and unsupervised methods. In unsupervised methods, the properties of the structures to be identified are hard-coded into the structure of the algorithm, and learning is restricted or fully absent. These methods predominantly use matched filtering, vessel tracking, morphological transformations, and model-based algorithms. According to Ref. 16, matched filtering employs a 2-D linear structuring element to construct a Gaussian intensity profile for the retinal blood vessels. Utilization of Gaussians and associated derivatives helps improve segmentation. For efficient extraction of the vessel boundaries, the structuring element is rotated 8 to 12 times to encompass the vessels in disparate configurations. The drawback of this method is its long processing time, requiring the stopping criterion to be checked pixel by pixel. Gabor filters are introduced in Ref. 17 to help track and further extract the blood vessels. However, this method detects too many vessel pixels that are in fact false edges. Combining morphological transformations with curvature information together with matched filtering is presented in Ref. 18 to allow for centerline detection. However, high complexity has been cited as the weakness of this method, resulting from the vessel-filling operation performed after centerline detection. The method also suffers from sensitivity to false edges caused by bright region borders. Morphological top-hat reconstruction of the negative green plane image is another approach to acquire a vessel-enhanced image, as described in Ref. 19. Among the pixels obtained through this algorithm, the pixels that exhibit higher brightness are considered the main portions of the vasculature.
This is followed by applying adaptive thresholding, iteratively adding thinner vessel branches. A highly accurate segmentation of the vasculature is achieved through these iterations, being satisfactory for both normal and abnormal retinas. In addition, Refs. 20 and 21 present other vessel segmentation algorithms based on the morphological top-hat transform. These works used the curvelet transform in their preprocessing to enhance the contrast of retinal images by emphasizing image edges at different scales and directions.
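The matched-filtering idea attributed to Ref. 16 above—a zero-mean Gaussian line profile rotated over a set of orientations, with the maximum response retained per pixel—can be sketched as follows. The kernel size, profile width, and number of angles are illustrative assumptions, not the parameters of the cited method.

```python
import numpy as np

def matched_kernel(sigma=2.0, length=9, size=15, theta=0.0):
    """Zero-mean Gaussian line-profile kernel oriented at angle theta.
    The profile is Gaussian across the line and constant along it."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates: u runs along the line, v across it.
    u = xs * np.cos(theta) + ys * np.sin(theta)
    v = -xs * np.sin(theta) + ys * np.cos(theta)
    k = np.where(np.abs(u) <= length / 2, np.exp(-v**2 / (2 * sigma**2)), 0.0)
    k[k > 0] -= k[k > 0].mean()  # zero mean inside the support
    return k

def matched_filter_response(img, n_angles=12, **kw):
    """Maximum response over a bank of rotated kernels (valid region only);
    assumes bright lines on a dark background (invert for dark vessels)."""
    kernels = [matched_kernel(theta=np.pi * i / n_angles, **kw) for i in range(n_angles)]
    half = kernels[0].shape[0] // 2
    h, w = img.shape
    best = np.full((h, w), -np.inf)
    for k in kernels:
        for y in range(half, h - half):
            for x in range(half, w - half):
                patch = img[y - half:y + half + 1, x - half:x + half + 1]
                best[y, x] = max(best[y, x], float((patch * k).sum()))
    return best
```

Thresholding the maximum response then yields a candidate vessel map; the pixel-by-pixel loop here is for clarity and would be replaced by convolution in practice.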
Presented in Ref. 22, a model-based approach attempts to extract blood vessel structures by means of a convolution with a Laplacian kernel. The result is later passed on to a thresholding stage, and broken line sections are then integrated. The method proposed in Ref. 23 refines the previous method, employing a Laplacian operator to prune the noisy objects while taking center lines as the reference. Although the highlight of this method is primarily on images containing bright abnormalities, it underperforms where retinal images with red lesions are involved. Proposing a perceptive transformation-based method, the work in Ref. 24 attempts to segment vessels in retinal images containing both bright and red lesions. Another work25 proposed a model-based method in which locally adaptive thresholding was employed; the verification process also included the vessel information. Despite its much higher generalizability over the matched-filter methods, it tends to be less accurate. Active contours were utilized in another model-based vessel segmentation method;26 its downside, however, is the massive computation involved. Multiscale vessel segmentation approaches were employed in Refs. 27 and 28, in which vessel pixels were determined using gradient-based information along with neighborhood analysis. The mentioned algorithms and methods, being unsupervised, suffer the drawback of being computationally expensive or excessively sensitive to retinal abnormalities.
Also, the authors in Ref. 29 introduced an unsupervised method for the automatic segmentation of blood vessels in retinal fundus images based on the combination of receptive fields (CORF) computational model of a simple cell in the visual cortex and its implementation, called combination of shifted filter responses (COSFIRE). The report also proposed a filter that selectively responds to vessels, called B-COSFIRE, with B standing for bar, which is an abstraction for a vessel. This method is based on the existing COSFIRE approach that can be effectively used to detect bar-shaped structures such as blood vessels. Their B-COSFIRE filter is nonlinear, as it achieves orientation selectivity by multiplying the outputs of a group of difference-of-Gaussians (DoG) filters whose supports are aligned in a collinear manner. It is tolerant to rotation variations and to slight deformations. The B-COSFIRE filter shows higher robustness to noise than methods based on weighted summation or convolution (template matching).
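The DoG building block underlying the B-COSFIRE filter can be sketched as below. Note that this is only the center-surround kernel, not the full B-COSFIRE combination of collinearly shifted responses, and the parameter values are illustrative.

```python
import numpy as np

def dog_kernel(sigma=2.0, ratio=0.5, size=15):
    """Center-surround difference-of-Gaussians kernel: a narrow positive
    Gaussian (sigma * ratio) minus a wider negative one (sigma)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = xs**2 + ys**2
    def g(s):
        k = np.exp(-r2 / (2 * s * s))
        return k / k.sum()  # normalize each Gaussian to unit sum
    return g(sigma * ratio) - g(sigma)
```

B-COSFIRE then evaluates such DoG responses at a set of collinearly shifted positions and combines them multiplicatively (a weighted geometric mean), which is what gives the filter its orientation selectivity.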
In addition, Ref. 30 presents a robust, unsupervised, and fully automatic filter-based approach for retinal vessel segmentation based on a left-invariant rotating derivative (LIRD) frame and a locally adaptive derivative (LAD) frame. The LAD is adaptive to the local line structures and is found by eigensystem analysis of the left-invariant Hessian matrix (computed with the LIRD). They proposed filters based on 3-D rotating frames in so-called orientation scores, which are functions on the Lie-group domain of positions and orientations, obtained by means of a wavelet-type transform. A 2-D image is lifted to a 3-D orientation score, where elongated structures are disentangled into their corresponding orientation planes. The main contribution of Ref. 31 is an unsupervised method to detect blood vessels in fundus images using a coarse-to-fine approach. This strategy combines Gaussian smoothing, a morphological top-hat operator, and vessel contrast enhancement for background homogenization and noise reduction. Here, statistics of spatial dependency and probability are used to coarsely approximate the vessel map with an adaptive local thresholding scheme. The coarse segmentation is then refined through curvature analysis and morphological reconstruction to reduce pixel mislabeling and better estimate the retinal vessel tree. Their proposed approach effectively addressed the main image distortions by reducing the mislabeling of central vessel reflex regions and the false-positive detection of pathological patterns, removing the remaining noise and artificial vessels.
In contrast, there are the supervised vessel segmentation algorithms, which categorize pixels into two classes, vessel and nonvessel. In supervised methods, segmentation algorithms obtain their fundamental information by learning from image patches annotated with ground truth. The K-nearest neighbor (KNN) classifier was used in Ref. 32, where a 31-feature set was acquired by means of Gaussians and their derivatives. An enhancement over this was achieved in Ref. 33, where ridge-based vessel detection was employed. The ridge element closest to each pixel represents that pixel, which partitions the image. A 27-feature set is computed for each pixel, and the result is then passed to the KNN classifier. The feature set being large causes these algorithms to run slowly. The other handicap here is the dependency of these methods on training data and their oversensitivity to false edges.
Two enhanced methods were proposed in Ref. 34 to obtain a blood vessel segmentation with high accuracy (Acc). Both methods attempt to eliminate false vessels and lesion-affected vessels to achieve accurately segmented blood vessels. The first method has a supervised approach and is a region level-based method that, in accordance with true and false vessel morphology, extracts region-based features. Subsequently, the most striking features are obtained by means of feature selection techniques. The regions are then classified into either true or false vessels through the selected features. The second method is a kind of classification method, in which an algorithm called neighborhood-based region filling was introduced. The purpose of this algorithm is essentially to minimize the effect of lesions by inpainting lesion regions prior to segmentation. This is conducted through estimating the lesion neighborhood. A Gaussian mixture model (GMM) classifier-based method was proposed in Ref. 35, where Gabor wavelets were employed to extract a six-feature set. The requirements of a large training dataset and long training period for the GMM models are the drawbacks. A combination of line operators and a support vector machine (SVM) classifier was a method proposed in Ref. 36, where a three-feature set is achieved. The large sensitivity of the method to the training data, as well as its immense computational cost stemming from utilizing the SVM classifiers, makes it not an ideal option. A reduced number of pixels plus obtaining an optimal feature set was presented in a method in Ref. 37 to boost the Acc and precision that can be achieved in the blood vessel segmentation. The other advantage of this method is its relatively low computational cost. In addition, in another method presented in Ref. 38, boosting and bagging strategies were applied along with 200 decision trees to classify vessels.
The method also employed Gabor filters to extract a nine-feature set. The boosting strategy, however, causes the computational complexity to be very high.
There is only one other supervised method whose outcome is not influenced by the training dataset, as proposed in Ref. 39. Here, the authors used a neural network for classification, and a seven-feature set was extracted through neighborhood parameters as well as an invariants-based method. The proposed retinal blood vessel segmentation method is inspired by the method in Ref. 39 to design a segmentation algorithm with low dependency on training data and fast computation. Thus far, the computational complexity associated with vessel segmentation algorithms has received attention only in Ref. 26. The authors in Ref. 40 proposed a supervised method for the delineation of blood vessels in retinal images, which is effective for vessels of different thickness. In their proposed method, a set of B-COSFIRE filters was selected for vessels and vessel endings. They used the selected features to train an SVM classifier with a linear kernel. The SVM classifier is particularly suited for binary classification problems since it finds an optimal separation hyperplane that maximizes the margin between the classes. In addition, Ref. 41 presented a new supervised method for vessel segmentation based on remodeling vessel segmentation as a cross-modality data transformation problem. A wide and deep neural network was proposed to model the relations between the retinal image and the vessel map. The main contributions of this study include the following: (1) a neural network that can output the label map of all pixels for the input image patch, instead of just the single label of the center pixel, together with the training algorithm and (2) a synthesis strategy to construct the vessel probability map of the retinal image.
The authors in Ref. 42 proposed a supervised segmentation technique that uses a deep neural network [deep learning (DL)] trained on a large sample of examples (fundus imagery) preprocessed with global contrast normalization and zero-phase whitening and augmented using geometric transformations and gamma corrections. The approach is fully supervised: the network learns from raw pixel data and does not rely on any prior domain knowledge of vessel structure. The method is also resistant to the phenomenon of central vessel reflex, sensitive in the detection of fine vessels, and fares well on pathological cases. Also, Ref. 43 presented a supervised multilevel convolutional neural network model applied for automatic blood vessel segmentation in retinal fundus images. In the network, there are two input branches with different resolution input images. A proposed max-resizing technique is used to reduce the resolution of the input image in the second branch of the network, increasing the generalization of the training process. In addition, they used both dropout and spatial-dropout layers to leverage their benefits. The dropout and spatial-dropout layers are applied to avoid over-fitting, which would reduce test performance. Dropout and spatial-dropout turn off a number of pixels in the image to generalize its features and improve the performance of the network. Eventually, the performance of the proposed method and several of the state-of-the-art methods is compared in Sec. 6.5.
3. Wide-Field Retinal Images Overview
Retinal images used in previous research cover an FOV between 30 deg and 45 deg (12% to 18% of the retinal surface). The mosaic software combines a set of images that frame different fields of the retina and produces a single larger image. In an online procedure, multiple fields (min 2, max 7 images) are selected, and the mosaic process commences. The server then automatically generates an image that becomes more and more comprehensive and results in a high-resolution image of the retina. The image resulting from this process is known as a “wide-field” image. This type of imaging, which provides a detailed and extensive image of the target, is a result of wide-angle photography, made possible thanks to certain types of lenses as well as mosaic photography. In other words, wide-field retinal images take advantage of specialized imaging techniques capturing images with extended FOVs. Such elaborate imaging allows for pathology detection by observing the far peripheral retina, revealing holes, tears, tumors, and vascular anomalies that might have been missed when binocular indirect ophthalmoscopy (BIO) is exclusively used. This modern retinal imaging technique enables us to take images of areas of up to 200 deg (80%) of the retinal surface, observing the main part of the posterior pole and the bulk of the retinal periphery. Figures 1(a) and 1(b), respectively, show the capturing power of previous generation imaging techniques and modern imaging techniques in terms of FOV.
Fig. 1.
FOV in retinal image. (a) and (b) Previous generation and modern generation wide-field imaging techniques, respectively.
Also, Figs. 2(a) and 2(b), respectively, show the samples of wide-field retinal images for left and right eyes in our database, which were captured from state-of-the-art cameras.
Fig. 2.
Samples of our database. (a) and (b) Wide-field retinal images from left and right eye, respectively.
3.1. System Block Diagram
Figure 3 shows the block diagram as well as the different parts of our blood vessel segmentation method based on the proposed retinal imaging. The associated subblocks of the proposed framework can be outlined as follows:
Fig. 3.
Block diagram of proposed retinal blood vessel segmentation system.
• Fuzzy noise reduction: Two model-based prefiltering schemes are employed to reduce or eliminate noise from the acquired retinal color images. The filtering is designed to first ensure that image details are completely preserved and then to reduce the number of false vessels caused by noise.
• Recovered green channel extraction: After the noise filtering phase recovers a close-to-original image, the green channel is extracted for blood vessel detection.
• CLAHE method: This method is applied to enhance the illumination and increase the contrast of retinal images.
• Modified top-hat transform: The retinal image may still contain some misclassified pixels; therefore, to achieve an integral and seamless vascular tree structure, the modified top-hat morphological enhancement technique is employed.
• Morphological operators by reconstruction: The failure to preserve edge information has been cited as the major disadvantage of conventional morphological opening and closing, with defects such as the introduction of new edges, contour deformation, and edge drifts. To compensate for these deficiencies, morphological reconstruction operators are utilized.
• Blood vessel segmentation: Vessel segmentation (edge detection) can be enhanced using the multistructure elements method, which features directionality.
• False-edges removal: Since some false edges produced by the uneven background inevitably remain after the previous stage, morphological opening by reconstruction is employed as an appropriate approach to filter them out.
• Length filtering via CCA: Applied to clear the result of pixels that are not part of the vessel tree.
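The stages listed above can be outlined as a processing chain. In this sketch every stage is a trivial placeholder standing in for the corresponding method in the text, so only the data flow of Fig. 3 is illustrated, not any actual implementation.

```python
import numpy as np

def segment_vessels(rgb):
    """Sketch of the processing chain in Fig. 3; each stage below is a
    placeholder for the method described in the text."""
    den = denoise_fuzzy(rgb)                   # class 1 and class 2 prefiltering
    green = den[:, :, 1]                       # recovered green channel
    enh = clahe(green)                         # contrast enhancement
    ridges = tophat_ridges(enh)                # modified top-hat, multistructure SEs
    clean = opening_by_reconstruction(ridges)  # drop false ridges
    return length_filter(clean)                # local CCA length filtering

# Placeholder stage implementations so the sketch runs end to end:
def denoise_fuzzy(x): return x
def clahe(x): return x
def tophat_ridges(x): return x > x.mean()     # crude stand-in for ridge detection
def opening_by_reconstruction(m): return m
def length_filter(m): return m
```

Each placeholder would be replaced by the corresponding operator from Secs. 4 and 5; the point here is only the order in which the subblocks are composed.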
As mentioned, the significance of our work lies in the way in which several methods are effectively combined and the originality of the database employed, making this work unique in the literature.
4. Preprocessing
4.1. Fuzzy Noise Reduction
Noise might be unwittingly introduced into digital images while they are being acquired or transmitted, even in modern imaging techniques. In this section, additive zero-mean Gaussian noise with a fixed standard deviation is added to each image pixel to generate a noisy retinal image for testing. Figures 4(a) and 4(b) show the noisy versions of Figs. 2(a) and 2(b), respectively.
Fig. 4.
Noisy versions of the retinal images of Figs. 2(a) and 2(b), respectively.
Then, a fuzzy-based noise reduction method is employed in which noisy pixels are classified into two different fuzzy classes to realize the following two purposes: to accurately preserve the image integrity and to obliterate false information caused by the noise from the image.10 This method provides a smoother and more precise output, as it considers the effects of different fuzzy classes of noisy pixels upon the subsequent vessel segmentation procedure.
1. Class 1 pixels: pixels corrupted with amplitudes only slightly different from those of their neighbors.
2. Class 2 pixels: pixels corrupted with amplitudes considerably larger than those of their neighbors.
Class 2 pixels are commonly known as outliers. Noisy pixels with large amplitude are reflective of the tail section of the Gaussian distribution. Having a rather low probability, their number is quite limited. However, they can have a detrimental effect, since outliers present in the filtered data produce false vessels that can further reduce the quality of the vessel map. Consequently, two distinct filtering techniques are suggested in this paper to accommodate class 1 and class 2 pixels. The chosen method is based on multipass processing: the multichannel image at pass n is denoted x(n), with x(0) being the input noisy image, and x_k(n)(i, j) represents the pixel value (0 ≤ x ≤ L − 1) at pixel location (i, j) in the k’th channel. The three channels, G (green), R (red), and B (blue), are denoted by k = 1, 2, and 3, respectively. For a color image, L equals 256, and the multipass processing includes the following steps.
4.1.1. Class 1 prefiltering
Class 1 prefiltering works based on the differences between the pixel being processed and its neighboring pixels: differences with small amplitude are regarded as noise and are reduced, while differences with large amplitude are regarded as vessels and are preserved. A two-step procedure is applied to the image channels to improve the efficacy of the smoothing action. This procedure is expressed in the following equations:
| (1) |
| (2) |
where F is a nonlinear function expressed by the following:
| (3) |
where a is an integer parameter that limits the parameterized nonlinear function F. As indicated by Eq. (1), the filtering process is initially applied to the three input channels. The output is three intermediate filtered components, which are shown in Fig. 3. These image datasets [see Eq. (2)] go through a second filtering pass, resulting in a refined set of data.
The basic algorithm for the filtering involves selecting a window and gradually excluding the values that vary substantially from the central element; this is performed to minimize the detail removed during noise cancelation. Subsequently, when the subtraction of the central pixel from its neighbors yields absolute values less than a, it is assumed that only noise is present. In this situation, a profound smoothing is applied, and the output is the arithmetic mean of the pixel values present in the neighborhood. Differences larger than 2a indicate an image edge; thus, their contribution is zero [see Eq. (3)].
The intermediate situations, with absolute value differences larger than a yet still less than 2a, are treated as a compromise between the two opposite effects. As stated earlier, the filtering action depends exclusively on the value of parameter a. Generally, small values are more successful in preserving fine details; in contrast, large values tend to create a powerful noise cancelation. The experimental results show that this method can support all the color image channels, R, G, and B. Furthermore, this method remains effective even when the contrast among these components is lower than in gray-scale pictures. In short, this filtering action (class 1) depends on the value of parameter a only. According to Eq. (3), this mechanism aims at gradually excluding pixel values that are very different from the central element to avoid blurring the image details during noise removal. Basically, this method varies the value of parameter a from a minimum to a maximum and considers the progressive mean square error between successive filtered versions of the noisy image.
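A minimal sketch of the class 1 smoothing rule, under the assumption that the two decision thresholds are a and 2a with a linear ramp in between (as the description above suggests), might look like this; it is an illustration of the idea, not the exact filter of Eqs. (1)-(3).

```python
def weight(diff, a):
    """Smoothing weight for a neighbor difference: 1 below a (treated as
    noise), 0 above 2*a (treated as an edge), linear ramp in between."""
    d = abs(diff)
    if d <= a:
        return 1.0
    if d >= 2 * a:
        return 0.0
    return (2 * a - d) / a

def class1_filter_pixel(center, neighbors, a):
    """Move the central pixel toward the weighted mean of its neighborhood;
    large (edge-like) differences contribute nothing."""
    corr = sum(weight(n - center, a) * (n - center) for n in neighbors)
    return center + corr / len(neighbors)
```

With all differences below a, the output approaches the arithmetic mean of the neighborhood; a single large edge-like difference is ignored, which is the detail-preserving behavior described above.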
4.1.2. Class prefiltering
Class prefiltering takes a different approach to processing the difference between the target pixel's value and those of its surrounding pixels: should all the differences be large, the pixel is likely an outlier and is, hence, rejected. Consequently, the noise elimination process at this stage employs a different, nonlinear model, as described in the following equations:
| (4) |
where
| (5) |
where denotes the membership function describing the fuzzy relation “ is much bigger than ”
| (6) |
A uniform neighborhood is then considered and expressed as follows: ; as an example, let be a positive outlier. According to Eq. (5), . Therefore, Eq. (4) yields , and the outlier has been removed. This filtering process enhances the robustness of the vessel segmentation in the presence of noise pulses. Because the retinal blood vessels in the green channel of the original color image have the highest contrast with the background, this channel is selected for the proposed retinal blood vessel segmentation algorithm. Figures 5(a) and 5(b) show the recovered green channels of the images in Figs. 4(a) and 4(b), respectively.
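The outlier-rejection idea can be sketched as below. The crisp threshold `big` is an assumed stand-in for the fuzzy relation "is much bigger than", and the median replacement is one plausible choice; the paper's exact nonlinear model is given by Eqs. (4) and (5).

```python
import numpy as np

def reject_outliers(img, big=50.0):
    """Illustrative outlier-rejection pre-filter: a pixel whose value
    differs strongly from ALL eight of its neighbours is treated as an
    impulse (positive or negative outlier) and replaced by the
    neighbourhood median; all other pixels pass through unchanged."""
    img = img.astype(np.float64)
    pad = np.pad(img, 1, mode='edge')
    out = img.copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3].ravel()
            nbrs = np.delete(win, 4)            # 8 neighbours, centre removed
            if np.all(np.abs(nbrs - img[i, j]) > big):
                out[i, j] = np.median(nbrs)     # outlier removed
    return out
```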
Fig. 5.
Recovered green channels of the images in Figs. 4(a) and 4(b), respectively.
4.2. Contrast Limited Adaptive Histogram Equalization Method
The quality of the retinal image must be improved prior to vessel segmentation because the contrast of the retinal images extracted in the previous section is too low for the vessel segmentation algorithm. Therefore, to increase the image contrast, the CLAHE method is applied. CLAHE introduces a clip limit (CL) to resolve drawbacks such as histogram peaks and noise amplification: it curbs the amplification by clipping the histogram at a preset value before the cumulative distribution function (CDF) is computed, which limits the slope of the CDF and hence of the transformation function. The clip value depends on the normalization of the histogram and thereby on the size of the neighborhood region. Redistribution may cause some bins to exceed the clip limit again, resulting in an effective clip limit larger than the prescribed one, whose exact value depends on the image.11 Two major parameters govern CLAHE: the clip limit (CL) and the block size (BS). Though both are typically set heuristically by users, their main influence lies in controlling the image quality.
Figures 6(a) and 6(b) show the result of CLAHE applied to Figs. 5(a) and 5(b), respectively. In the proposed method, the clip limit is 0.005, and the block size is assumed to be . Figures 6(c) and 6(d) also show the histograms used in the CLAHE method (red) and of the recovered green channel images (blue; see Fig. 5). The histograms indicate an enhancement in both the contrast and the average intensity level with the CLAHE method, with most pixels attaining higher light intensities.
Fig. 6.
(a) and (b) The result of CLAHE method extracted from Figs. 5(a) and 5(b), respectively, (, ). (c) and (d) The comparison of histogram of CLAHE images (red color) with recovered green channel images (blue color—see Fig. 5).
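The clip-and-redistribute mechanism of CLAHE can be sketched for a single tile as follows. This is a minimal sketch under stated assumptions: it handles one tile only and omits the bilinear interpolation between neighbouring tile mappings that full CLAHE performs; the bin count and peak value of 255 are assumed.

```python
import numpy as np

def clahe_tile(tile, clip_limit=0.005, nbins=256):
    """Single-tile sketch of the CLAHE mapping: clip the histogram at a
    fraction of the tile size, redistribute the clipped excess uniformly
    over all bins, then map intensities through the normalized CDF."""
    hist, _ = np.histogram(tile, bins=nbins, range=(0, 256))
    clip = max(1, int(clip_limit * tile.size))
    excess = np.sum(np.maximum(hist - clip, 0))
    hist = np.minimum(hist, clip) + excess // nbins   # clip + redistribute
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    return cdf[tile.astype(int)]                      # equalized tile
```

Applied to a low-contrast tile, the mapping stretches the occupied gray-level range while the clipping keeps the amplification bounded.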
5. Proposed Method Based on Mathematical Morphology
5.1. Basic Theory
The retinal images extracted in the previous section may still contain some misclassified pixels; therefore, to achieve an integral and seamless vascular tree structure, the misclassified pixels are eliminated while vessel thickness is maintained. A morphological method is employed to fill the gap between two parallel curves, and mathematical morphology provides a nonlinear method to segment edges.44 There are four main mathematical morphological operators: erosion, dilation, opening, and closing.45 Erosion filters the inner image, whereas dilation modifies the outer image. The opening operator is erosion followed by dilation, and the closing operator is dilation followed by erosion. Opening generally acts as a smoother, smoothing the image and breaking narrow connections; in contrast, closing has a fusing action, fusing narrow breaks, eliminating tiny holes, and filling gaps in the contours. To alleviate the problems still present in the morphological transform, a modified version of the top-hat transform is suggested in which a combination of closing and opening operators is applied to the original image; the result is then compared against the original image using a minimum operator to obtain an image that is close to the original, save for the edges. The modified top-hat transform is represented by45
| (7) |
where and represent the SEs for closing (•) and opening (∘) operators, respectively.
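A sketch of the modified top-hat idea described above is given below. The 3 x 3 SE sizes are illustrative, not the paper's choices, and the close-then-open ordering is one reading of the "amalgamation" in the text.

```python
import numpy as np
from scipy import ndimage

def modified_top_hat(img, se_close=(3, 3), se_open=(3, 3)):
    """Sketch of the modified top-hat: close then open the image, take the
    pixel-wise minimum with the original (an edge-free approximation of the
    image), and subtract it so only edge/ridge detail remains."""
    smoothed = ndimage.grey_opening(
        ndimage.grey_closing(img, size=se_close), size=se_open)
    background = np.minimum(smoothed, img)   # close to original, minus edges
    return img - background                  # non-negative ridge response
```

On a flat image the response is zero everywhere; a one-pixel-wide bright ridge, removed by the opening, survives in full in the output.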
5.2. Selection of Multidirectional Structure of Elements
Because blood vessels and arterioles are distributed in various directions in retinal images, a single simple structure element is ineffective; consequently, a multidirectional structure is preferred in the improved morphology function.12 Choosing the structure element is a crucial factor in morphological image processing, and the size and shape of the SE determine the final state of the detected edges. The fundamental theory behind multidirectional structure elements morphology requires establishing different structure elements within the same square window, such that these structure elements cover almost all line directions. Let ; , represent a digital image, and let represent its center; then, the structure elements in the square window can be expressed as follows:
| (8) |
where , and represents the direction angle of the SE. For our work, . Within the square, the structure elements have the following direction angles: 0 deg, 22.5 deg, 45 deg, 67.5 deg, 90 deg, 112.5 deg, 135 deg, and 157.5 deg, and the final SE is built from the integration of all directions. Figure 7 shows some of the substructure elements with dimensions of .
Fig. 7.
Some different directional structure elements in square window. (a) with 0 deg direction angle, (b) with 45 deg direction angle, (c) with 90 deg direction angle, (d) with 135 deg direction angle, and (e) square structure.
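Directional structuring elements such as those in Fig. 7 can be generated by rasterising a line through the centre of the window, as sketched below; the sampling density along the line is an implementation detail assumed here.

```python
import numpy as np

def line_se(size=5, angle_deg=0.0):
    """Build a linear structuring element of the given direction inside a
    size x size window by rasterising a line through the centre."""
    se = np.zeros((size, size), dtype=bool)
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-c, c, 4 * size):    # sample points along the line
        i = int(np.rint(c - t * np.sin(theta)))
        j = int(np.rint(c + t * np.cos(theta)))
        if 0 <= i < size and 0 <= j < size:
            se[i, j] = True
    return se

# the eight directions used in the paper
angles = [0, 22.5, 45, 67.5, 90, 112.5, 135, 157.5]
ses = [line_se(5, a) for a in angles]
```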
5.3. Morphological Operators by Reconstruction
The major disadvantage of conventional morphological opening and closing is that they do not perfectly preserve edge information. To compensate for this deficiency, a new operator, named the - and -sieves, was proposed in Ref. 46; it focuses exclusively on the size of features while entirely ignoring their shape. To account for both the shape and the size of similar features, morphological operators by reconstruction serve as an apt alternative.13 Taking as the image and as the mask image, geodesic dilation can be expressed by the following equation:
| (9) |
In addition, the equation yielding geodesic erosion can be expressed as follows:
| (10) |
The marker image is controlled by iteratively running the operator in both the dilation and erosion modes. After a certain number of iterations, however, both geodesic erosion and geodesic dilation reach a steady state in which no appreciable change is observed in either. Consequently,
| (11) |
The previous equation results in opening and closing operations by reconstruction that are expressed as follows:
| (12) |
| (13) |
Morphological opening by reconstruction hence initially removes bright features that are smaller than the SE. The algorithm then attempts to retrieve the contours of components not entirely eliminated by iterative dilation, with the original image serving as the reference. Similarly, closing by reconstruction addresses dark features. Morphological opening and closing by reconstruction therefore avoid the potential defects of conventional morphological opening and closing, such as the introduction of new edges, contour deformation, and edge drift. Figure 8 shows this distinction and superiority; the same SE is used for both methods.
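The geodesic iteration of Eqs. (9) to (12) can be sketched directly: dilate the marker, clamp it under the mask, and repeat until nothing changes. The 3 x 3 flat SE is an assumed default.

```python
import numpy as np
from scipy import ndimage

def reconstruct_by_dilation(marker, mask, se=np.ones((3, 3))):
    """Greyscale reconstruction by dilation (Eqs. (9) and (11)): repeatedly
    dilate the marker and clamp it under the mask until a steady state."""
    marker = np.minimum(marker, mask)
    while True:
        grown = np.minimum(ndimage.grey_dilation(marker, footprint=se), mask)
        if np.array_equal(grown, marker):      # steady state reached
            return grown
        marker = grown

def opening_by_reconstruction(img, se=np.ones((3, 3))):
    """Eq. (12)-style opening by reconstruction: erode to suppress features
    smaller than the SE, then exactly recover the surviving contours."""
    return reconstruct_by_dilation(
        ndimage.grey_erosion(img, footprint=se), img)
```

A small bright dot (smaller than the SE) is removed entirely, while a larger bright square is recovered with its original contour intact, which is exactly the behaviour contrasted in Fig. 8.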
Fig. 8.
(a) Original image, (b) result of conventional opening, and (c) result of opening by reconstruction.46
5.4. Blood Vessel Segmentation Using Multistructure Elements Morphology
Blood vessel segmentation (edge detection) by means of multistructure elements morphology requires replacing the earlier SEs of the morphological vessel detector with the newly introduced SEs and applying the following algorithm.12
-
1.
Generate the proposed SEs corresponding to the required directional resolution.
-
2.
Apply the selected vessel detector function to the original image using the SEs obtained in step 1 and procure the subedge images .
-
3. Substitute the obtained in step 2 into the following equation to obtain all detected vessels:
| (14) |
where represents the total vessel image, is the number of , and represents the weight assigned to each subvessel image. To produce equal effects for each , the assigned weights can be given equal values, ; alternatively, they can be calculated by other methods.47 If any information regarding the processed image is available, the degree of significance of the information contained can also be the basis for assigning the weights. The magnitude of each is proportional to the amount of edge information contained; hence, it is logical to assign higher weights to the with greater magnitude to make its contribution to more prominent. The weights can, therefore, be obtained by the following:
| (15) |
This way of assigning weights gives the with higher magnitude a larger effect. The blood vessels can be detected through this method; however, false edges resulting from the uneven background remain unavoidable at this stage.
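The magnitude-proportional weighting of Eqs. (14) and (15) can be sketched as below; the Frobenius norm is assumed here as the "magnitude" of a sub-image.

```python
import numpy as np

def combine_subedges(subedges):
    """Combine directional sub-edge images per Eqs. (14)-(15): each weight
    is proportional to the magnitude (edge energy) of its sub-image, so
    stronger directional responses contribute more to the total edge map."""
    mags = np.array([np.linalg.norm(s) for s in subedges])
    if mags.sum() > 0:
        w = mags / mags.sum()                 # Eq. (15): magnitude weights
    else:
        w = np.full(len(subedges), 1.0 / len(subedges))  # fallback: equal
    total = sum(wi * s for wi, s in zip(w, subedges))    # Eq. (14)
    return total, w
```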
5.5. False-Edges Removal by Means of Morphological Opening by Reconstruction
The edge detection step yields a result containing false edges that do not correspond to blood vessels but rather stem from irregular background illumination. Morphological opening is an appropriate approach to filter these false edges; conventional morphological opening, however, eliminates the aforesaid unfavorable edges but also removes parts of the blood vessel edges, especially the thinner ones. To resolve this issue, morphological opening by reconstruction is used. This is a two-step procedure: conventional morphological opening followed by reconstruction by dilation. The performance of this procedure is enhanced using multistructure elements for the opening stage: multistructure elements exhibit large sensitivity to edges in all directions, resulting in a more precise elimination of false edges. The same SEs used in the edge detection step are used here, except that their assigned weights differ: the maximum, rather than an individually weighted sum of each , is opted for reconstruction, as shown by the following equation:
| (16) |
The advantage of this step is that weak false edges are erased and are, hence, not allowed into the reconstruction of . Subsequently, reconstruction by dilation is performed using a flat structuring element such as a square. However, not all unwanted objects can be obliterated at this step. Assigning a larger SE to the opening may eliminate more of them; however, because the blood vessels are on average five pixels wide, some smaller vessels may be entirely lost and cannot then be retrieved using reconstruction by dilation. In short, some undesirable objects persist at this stage and are not eliminated until the length filtering step.
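The two-step false-edge suppression can be sketched as follows; the flat 3 x 3 square used for the geodesic dilation is an assumed choice.

```python
import numpy as np
from scipy import ndimage

def remove_false_edges(edge_img, ses):
    """Sketch of the false-edge suppression step: open the edge map with
    each directional SE and take the pixel-wise maximum as the marker
    (the Eq. (16) idea), then reconstruct by dilation with a flat 3x3
    square under the original edge map, so weak false edges that did not
    survive any directional opening cannot be regrown."""
    marker = np.max([ndimage.grey_opening(edge_img, footprint=se)
                     for se in ses], axis=0)
    marker = np.minimum(marker, edge_img)
    while True:                               # geodesic dilation to stability
        grown = np.minimum(ndimage.grey_dilation(marker, size=(3, 3)),
                           edge_img)
        if np.array_equal(grown, marker):
            return grown
        marker = grown
```

An elongated response aligned with one of the SEs survives its directional opening and is then fully recovered, while an isolated blob vanishes.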
5.6. Length Filtering via Connected Component Analysis
Length filtering is applied to clear the final result of pixels that are not part of the vessel tree. The concept known as CCA is employed, in which connected components of pixels holding values higher than a certain threshold are labeled and characterized as single objects. The threshold value can be obtained through a straightforward thresholding method proposed in14
| (17) |
where and represent the mean and standard deviation, respectively. Here, should be small enough () for images with poor contrast. The thresholding process functions based on the following condition:
| (18) |
where denotes the pixel intensity value at location in any given image. Based on our observations, submitting the complete image to the length filtering process produces unsatisfactory results because the input gray-scale image at this step contains thick vessels with high gray levels, whereas thin vessels hold low gray levels close to those of false edges.
The thresholding equation depends on the standard deviation of the gray levels; as a result, in images with a wide range of gray levels, assigning a single threshold value to the whole image may cause parts of thin vessels to be lost. This problem is best tackled by employing an adaptive CCA, in which the image is divided into several tiles to which length filtering is applied separately. In this way, no block covers a broad range of gray levels, and suitable thresholding can be applied to each tile, resulting in the satisfactory removal of false edges. After the CCA is applied, the components smaller than a prearranged threshold value, set by Eq. (17), are omitted. The final step integrates the results into a single image representing the final blood vessel detection result. This algorithm can be summarized in the following steps:
-
A.
Divide the image into tiles of ; apply half-tile interpolation to forestall windowing effects.
-
B.
Apply the thresholding algorithm explained above to each tile separately and acquire the appropriate threshold for each tile.
-
C.
Apply CCA to each tile individually, considering only the pixels with gray levels higher than the set threshold.
-
D.
Apply the length filtering method to each tile, keeping the components with lengths greater than the set threshold.
-
E.
Aggregate all the results into one image.
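The tile-wise steps above can be sketched as follows. The constants `c` and `min_len`, and the mean-plus-scaled-standard-deviation form of the Eq. (17) threshold, are illustrative assumptions; the half-tile interpolation of step A is omitted for brevity.

```python
import numpy as np
from scipy import ndimage

def length_filter(img, tile=64, c=0.5, min_len=30):
    """Sketch of tile-wise length filtering (steps A-E): threshold each
    tile at mean + c*std (the Eq. (17) idea), label connected components
    (CCA), and keep only components with at least min_len pixels."""
    out = np.zeros(img.shape, dtype=bool)
    for i in range(0, img.shape[0], tile):
        for j in range(0, img.shape[1], tile):
            t = img[i:i + tile, j:j + tile]
            mask = t > t.mean() + c * t.std()        # per-tile threshold
            labels, n = ndimage.label(mask)          # CCA on the tile
            sizes = ndimage.sum(mask, labels, range(1, n + 1))
            keep = np.isin(labels,
                           1 + np.flatnonzero(sizes >= min_len))
            out[i:i + tile, j:j + tile] = keep       # step E: recombine
    return out
```

A long vessel-like component survives the length criterion, while a small isolated blob of equal brightness is discarded.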
As previously mentioned, the image is decomposed into several blocks, and length filtering with CCA is applied locally to eliminate the remaining false edges more effectively. The size of each block is and was determined experimentally. Figures 9(a) and 9(b) show the vessel segmentation maps based on multistructure elements morphology by reconstruction, extracted from Figs. 6(a) and 6(b), respectively.
Fig. 9.
(a) and (b) Vessel segmented map results extracted from Figs. 6(a) and 6(b), respectively.
6. Experimental Results and Analysis
6.1. Dataset and Implementation
The proposed method was tested on 50 sets of wide-field retinal images collected retrospectively. To equalize the size of all images and reduce the volume of calculations, all images were resized to . The merging of the individual images into a single mosaic was performed automatically by software from Zeiss, with no manual involvement, and all patient identifiers were removed from the images. This database contains 21 images of patients with diseases such as cardiovascular disease, diabetes, and arteriosclerosis; the remaining images were taken from healthy people. The capture of each image and the manual vessel segmentation were approved (annotated) by an ophthalmologist at Noor Eye Hospital, Tehran, Iran.
Fundus images of seven fields were acquired using a Zeiss fundus camera with a 45 deg FOV for each image; after the mosaicing process, the outcome is a high-resolution retinal image with up to a 200 deg FOV. The resulting composite images vary in spatial resolution because of differences in image overlap and the exclusion of single images by the mosaicing procedure. Because the segmentations of the two ophthalmologists differed, the dataset provides two sets of hand-labeled images as ground truth for the retinal vessel segmentation method: one manual segmentation is used as the ground truth, and the other is used to compare our computer-generated segmentation with that of an independent human observer. The masks of the images were generated using the threshold technique of Eq. (17). All images were used to test the proposed algorithm. The images were manually segmented twice by two ophthalmologists, resulting in set “A” and set “B.” The ophthalmologist of set “A” marked 655,381 pixels as vessel and 5,892,921 pixels as background; the values for set “B” were 912,142 pixels as vessel and 5,636,160 pixels as background, respectively. Set “A” was used as the ground truth. All experiments were implemented in MATLAB on a computer with a 3.1-GHz CPU and 4 GB of RAM. The FOV of the wide-field retinal images was fixed throughout the experiments. For all databases, five statistical measures were estimated for each test image:
-
•
TPF: true positive fraction
-
•
FPF: false positive fraction
-
•
: the area under the ROC curve (AUC)
-
•
Acc: accuracy
-
•
MCC: Matthews correlation coefficient
The significant difference between the observers (in the number of pixels each marked as vessel) indicates the difficulty of vascular segmentation on these databases.
6.2. Enhancement Evaluation
To achieve a quantitative evaluation of the noise performance, the vessel maps of the wide-field retinal images were corrupted by noise with standard deviations of to . The noise performance of the method was assessed by means of the widely used PSNR, defined as follows:
| (19) |
where MSE represents the mean-squared error computed by the following:
| (20) |
where and represent the original and enhanced images, respectively. To examine the performance of our noise reduction filter, noise with different standard deviations was applied, and the images were processed by our fuzzy filter and by other methods. The results demonstrate the marked superiority of our method on wide-field retinal images, as shown in Table 1.
Table 1.
Performance comparison with other filtering methods in various standard deviations.
| Filtering method | PSNR values in dB | ||
|---|---|---|---|
| Mean filter | 30.4312 | 26.2134 | 21.3417 |
| Median filter | 30.8732 | 26.5213 | 21.6752 |
| Wiener filter | 31.5603 | 27.3261 | 22.8734 |
| Fuzzy proposed filter | 35.7075 | 31.0352 | 26.4896 |
As mentioned, Table 1 shows the PSNR on the wide-field retinal mosaic image database in the presence of Gaussian noise with different variances; the table reports the average value over the entire database. The parameter varied in Table 1 is the variance of the additive Gaussian noise (, 6, 8), which can be changed arbitrarily. In other words, noise of each variance is added to the mosaic image, and the PSNR is then measured.
PSNR, often expressed in decibels (dB), is defined as the ratio between the power of a signal (meaningful information) and the power of the background noise (unwanted signal). In most cases, a higher PSNR indicates a higher-quality reconstruction, and vice versa. The rows of Table 1 therefore show that as the noise variance increases, the PSNR, which reflects image quality, decreases. Nevertheless, Table 1 shows that the PSNR obtained with the proposed fuzzy filter is far better than that of the other methods.
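The PSNR of Eqs. (19) and (20) can be computed directly; the peak value of 255 for 8-bit images is assumed.

```python
import numpy as np

def psnr(original, enhanced, peak=255.0):
    """PSNR in dB per Eqs. (19)-(20): mean-squared error between the
    original and processed images, referenced to the peak intensity."""
    diff = original.astype(np.float64) - enhanced.astype(np.float64)
    mse = np.mean(diff ** 2)                  # Eq. (20)
    if mse == 0:
        return float('inf')                   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)   # Eq. (19)
```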
6.3. Segmentation Evaluation
Two performance indicators were employed to evaluate the proposed algorithm. The first is the receiver operating characteristic (ROC) curve, in which an ROC space is defined using the false positive fraction (FPF), known as 1 − specificity, and the true positive fraction (TPF) as the x- and y-axes, respectively. These two terms represent the relative tradeoff between true positives and false positives. The TPF, also known as sensitivity, is a ratio whose numerator is the number of pixels correctly classified as vessel pixels (TP) and whose denominator is the total number of vessel pixels in the gold standard segmentation, as follows:
| (21) |
The FPF is a ratio whose numerator is the number of pixels incorrectly classified as vessel pixels (FP) and whose denominator is the total number of nonvessel pixels in the gold standard, as follows:
| (22) |
where
-
•
TP (true positive): These are the pixels that have been correctly recognized as blood vessels.
-
•
FP (false positive): These are the pixels that have been mistakenly recognized as blood vessels.
-
•
TN (true negative): These are the nonvessel pixels that have been correctly recognized as not being blood vessels.
-
•
FN (false negative): These are the blood vessel pixels that the algorithm has failed to recognize as blood vessels.
According to these definitions, the Acc for one image is the fraction of pixels correctly classified at a specified threshold, defined as follows:
| (23) |
where Acc is an overall measure providing the ratio of correctly detected pixels evaluated against the gold standard hand-labeled segmentation. The results of our proposed algorithm indicate a mean Acc of 0.9524 for recognizing and extracting the blood vessels in wide-field retinal images. The second assessment criterion is the Matthews correlation coefficient (MCC), defined as follows:
| (24) |
where , , and . The MCC measures the quality of a binary classification and is suitable even when the numbers of samples in the two classes differ substantially; here, nonvessel pixels outnumber vessel pixels roughly seven to one. MCC values vary between and : a value of indicates a perfect prediction, 0 indicates a prediction equivalent to random, and indicates a completely wrong prediction.29 The third assessment criterion is , the area under the ROC curve (AUC), which indicates the ability of the classifier to unmistakably differentiate between vessel and nonvessel pixels; an area of 1 corresponds to a perfect classification. Figure 10 shows the ROC curve for our wide-field database in the presence of Gaussian noise with standard deviation , together with the Acc and values. To evaluate the influence of noise on the ROC curve, zero-mean Gaussian noise with various standard deviations is added, and the Acc is computed.
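The four measures of Eqs. (21) to (24) can be computed from the confusion counts as follows:

```python
import numpy as np

def vessel_metrics(pred, truth):
    """Compute TPF (sensitivity), FPF (1 - specificity), accuracy, and the
    Matthews correlation coefficient (Eqs. (21)-(24)) from a binary
    segmentation and the ground-truth vessel map."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = float(np.sum(pred & truth))          # correctly detected vessel
    fp = float(np.sum(pred & ~truth))         # background marked as vessel
    tn = float(np.sum(~pred & ~truth))        # correctly rejected background
    fn = float(np.sum(~pred & truth))         # missed vessel pixels
    tpf = tp / (tp + fn)                      # Eq. (21)
    fpf = fp / (fp + tn)                      # Eq. (22)
    acc = (tp + tn) / (tp + fp + tn + fn)     # Eq. (23)
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0   # Eq. (24)
    return tpf, fpf, acc, mcc
```

A perfect segmentation yields TPF = 1, FPF = 0, Acc = 1, and MCC = 1, matching the interpretations given above.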
Fig. 10.
ROC curve for visualization classification performance of our method on the wide-field retinal database.
Figure 11 shows the ROC curves obtained by applying three noise standard deviations to our wide-field retinal image database; the Acc decreases as the noise standard deviation grows. The comparison of these curves and their statistical differences is described in Sec. 6.2.
Fig. 11.
ROC curves in various noise standard deviations.
6.4. Execution Time
Another issue of concern is the run-time of the algorithm, which is a further measure of its performance on wide-field retinal images. The run-time of the proposed algorithm is 24 s; the proposed method thus detects the retinal blood vessels within this time with a high level of Acc, demonstrating the strong performance of the proposed approach. Table 2 lists the time of each step and the total segmentation time of the proposed method.
Table 2.
Time cost for each step and total segmentation time.
| Step | Time (s) |
|---|---|
| Fuzzy noise reduction | 9 |
| CLAHE method | 3 |
| Modified top-hat transform | 2 |
| Morphological reconstruction | 2 |
| Vessel segmentation | 4 |
| False-edges removal | 2 |
| Connected component analysis | 2 |
| Total time | 24 |
6.5. Comparison with Other Databases
Because no similar wide-field databases exist for a direct comparison, the performance of our algorithm was additionally tested on the DRIVE33 and STARE48 databases, described as follows:
-
•
DRIVE: Includes 40 color retinal images acquired during a diabetic retinopathy screening program conducted in the Netherlands. The images were captured by a Canon CR5 nonmydriatic 3-CCD camera (Canon, Tokyo, Japan) with a 45 deg FOV; the image resolution was . The 40 images (seven pathological) were divided equally into a training set of 20 images (three pathological) and a test set of 20 images (four pathological). The images in the test set were manually segmented twice by two human experts, resulting in set A and set B. The human expert of set A marked 577,649 pixels as vessel and 3,960,494 pixels as background; the values for set B were 556,532 pixels as vessel and 3,981,611 pixels as background, respectively. Set A was used as the ground truth.
-
•
STARE: Includes 20 color retinal images, half of which exhibit pathology. The slides were obtained using a Topcon TRV-50 fundus camera (Topcon, Tokyo, Japan) at a 35 deg FOV and were later digitized to . The 20 images (10 pathological) were divided into a training set and a test set, each containing 10 images. The images were manually segmented twice by two human experts, resulting in set A and set B. The first human expert (set A) marked 615,726 pixels as vessel and 5,293,034 pixels as background; the second human expert (set B) marked 879,695 pixels as vessel and 5,029,065 pixels as background, respectively. The segmentations by the first observer (set A) were used as the ground truth.
Figure 12 shows the results of some intermediate and final stages of the proposed retinal blood vessel segmentation algorithm on the DRIVE and STARE databases (first and second rows, respectively). The Acc was calculated for each parameter value; the optimal value was then selected as the one yielding the maximal Acc, as illustrated in the ROC curves. There are no dependencies between the images in the databases, and all databases are independent of one another; thus, the entire retinal vessel segmentation process designed in this paper (see Fig. 3) was applied separately to each database (wide-field, DRIVE, and STARE), and the vessel segmentation assessment tests were likewise performed separately on each database.
Fig. 12.
Excerpts of the results of the proposed algorithm at some stages on DRIVE (first row) and STARE (second row), respectively. (a) and (b) Original images; (c) and (d) results of the CLAHE method (, ); (e) and (f) final results of the proposed segmentation algorithm.
Additionally, Fig. 13 shows the ROC curves on the DRIVE, STARE, and wide-field retinal databases, with both the Acc and values, in the presence of additive Gaussian noise with .
Fig. 13.
The ROC curves on DRIVE, STARE, and wide-field retinal databases.
Table 3 summarizes the performance indicators of the proposed blood vessel segmentation algorithm on all applied databases in the presence of additive Gaussian noise with . The image sizes, and hence the level of detail, differ considerably among the databases in Table 3; as the size and detail of the images increase, the Acc of the algorithm decreases and its processing time increases. In addition, FPF = 1 − specificity was considered.
Table 3.
Results obtained from the proposed algorithm on the utilized databases.
| Database | TPF | FPF | Acc | MCC | Time (s) | |
|---|---|---|---|---|---|---|
| DRIVE | 0.7893 | 0.0209 | 0.9694 | 0.9673 | 0.7529 | 17 |
| STARE | 0.7799 | 0.0302 | 0.9697 | 0.9662 | 0.7438 | 20 |
| Wide-field | 0.7835 | 0.0285 | 0.9558 | 0.9558 | 0.7422 | 24 |
Switching from the wide-field database to the DRIVE and STARE databases permits the following conclusions:
-
•
Because the wide-field images have a larger FOV than the DRIVE and STARE images, the number of segmented vessel pixels in this database is greater than in the DRIVE and STARE databases.
-
•
The computational and processing load for the wide-field database is greater than for the DRIVE and STARE databases.
-
•
The processing time for the full implementation of our algorithms (see Fig. 3) on the wide-field retinal database is longer than on the DRIVE and STARE databases.
-
•
The Acc of the proposed method on the DRIVE and STARE databases is higher than on the wide-field retinal database.
-
•
PSNR values for our fuzzy filter alone on the DRIVE and STARE databases are presented in Table 4.
Table 4.
Performance of the fuzzy filtering method at various standard deviations.
| Fuzzy proposed filter | PSNR values in dB | ||
|---|---|---|---|
| DRIVE | 38.2305 | 33.8921 | 28.9465 |
| STARE | 37.4592 | 31.6734 | 27.9032 |
Consequently, Table 4 shows that the PSNR, which represents image quality, decreases as the noise variance increases. Because the image database used in this paper was collected by the authors, the following points were considered:
-
•
An effective overall structure (see Fig. 3) was designed to perform complete retinal vessel segmentation.
-
•
The proposed algorithms are straightforward to understand.
-
•
The authors' main objective was to obtain the best retinal blood vessel segmentation results using methods with low computational cost and high speed; the experimental results in Sec. 6 show that these goals were achieved.
Table 5 compares the performance of the proposed method with several other state-of-the-art methods. On the DRIVE and STARE images, which contain seven and 10 pathological images, respectively, the proposed method excels in several performance indicators, including average Acc.39 Although the superiority may be only marginal compared with recent methods, our method was primarily designed and tested for high-resolution wide-field retinal images.
Table 5.
Comparative performance summary.
| Segmentation method | DRIVE |  |  |  |  |  | STARE |  |  |  |  |  |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
|  | TPF | FPF | AUC | Acc | MCC | Time | TPF | FPF | AUC | Acc | MCC | Time |
| Mendonca and Campilho18 | 0.734 | 0.024 | — | 0.945 | — | 2.5 min | 0.699 | 0.027 | — | 0.944 | — | 3 min |
| Roychowdhury et al.19 | 0.739 | 0.022 | 0.967 | 0.949 | — | 2.45 s | 0.732 | 0.016 | 0.967 | 0.956 | — | 3.95 s |
| Miri and Mahloojifar20 | 0.7352 | 0.0205 | — | 0.9458 | — | — | — | — | — | — | — | — |
| Shahbeig21 | 0.7612 | 0.0392 | — | 0.9458 | — | — | — | — | — | — | — | — |
| Jiang and Mojon25 | 0.83 | 0.1 | 0.932 | 0.891 | — | 8 to 36 s | 0.857 | 0.1 | 0.929 | 0.901 | — | 8 to 36 s |
| Azzopardi et al.29 | 0.7655 | 0.0296 | — | 0.9442 | 0.7475 | 10 s | 0.7716 | 0.0299 | — | 0.9497 | 0.7335 | 10 s |
| Zhang et al.30 | 0.7743 | 0.0275 | — | 0.9476 | — | — | 0.7791 | 0.0242 | — | 0.9554 | — | — |
| Neto et al.31 | 0.7806 | 0.0371 | — | 0.8718 | — | — | 0.8344 | 0.0557 | — | 0.8894 | — | — |
| Staal et al.33 | 0.719 | 0.023 | 0.952 | 0.944 | — | 15 min | 0.697 | 0.019 | 0.961 | 0.952 | — | 15 min |
| Waheed et al.34 | 0.8224 | 0.0222 | — | 0.9644 | — | — | 0.8187 | 0.0311 | — | 0.9572 | — | — |
| Soares et al.35 | 0.733 | 0.022 | 0.961 | 0.946 | — | 3 min | 0.72 | 0.025 | 0.967 | 0.948 | — | 3 min |
| Ricci and Perfetti36 | 0.775 | 0.028 | 0.963 | 0.959 | — | — | 0.903 | 0.061 | 0.968 | 0.965 | — | — |
| Marin et al.39 | 0.7068 | 0.0305 | 0.958 | 0.9452 | — | — | 0.694 | 0.018 | 0.977 | 0.952 | — | — |
| Strisciuglio et al.40 | 0.7901 | 0.0292 | — | 0.9454 | 0.7525 | 75 s | 0.8046 | 0.0190 | — | 0.9561 | 0.7548 | 210 s |
| Li et al.41 | 0.7569 | 0.0184 | — | 0.9527 | 0.7475 | 1.2 min | 0.7726 | 0.0156 | — | 0.9628 | 0.7335 | 1.2 min |
| Liskowski and Krawiec42 | 0.7763 | 0.0232 | — | 0.9495 | — | — | 0.7867 | 0.0246 | — | 0.9566 | — | — |
| Ngo and Han43 | 0.7464 | 0.0164 | — | 0.9533 | — | — | — | — | — | — | — | — |
| Proposed method | 0.7893 | 0.0209 | 0.9694 | 0.9673 | 0.7529 | 17 s | 0.7799 | 0.0302 | 0.9697 | 0.9662 | 0.7438 | 20 s |
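For clarity, the performance indicators reported in Table 5 (TPF, FPF, Acc, and MCC) can be computed from a binary vessel map and its ground truth as shown below. This is an illustrative helper, not code from the paper; the function name `segmentation_metrics` is an assumption.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute TPF, FPF, Acc, and MCC for binary vessel maps.

    pred, truth: boolean arrays of the same shape (True = vessel pixel).
    """
    pred = np.asarray(pred, bool).ravel()
    truth = np.asarray(truth, bool).ravel()
    tp = np.sum(pred & truth)      # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)    # background correctly rejected
    fp = np.sum(pred & ~truth)     # background flagged as vessel
    fn = np.sum(~pred & truth)     # vessel pixels missed
    tpf = tp / (tp + fn)                       # true-positive fraction (sensitivity)
    fpf = fp / (fp + tn)                       # false-positive fraction (1 - specificity)
    acc = (tp + tn) / (tp + tn + fp + fn)      # overall accuracy
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    return tpf, fpf, acc, mcc

# Tiny worked example: four pixels, one false positive.
pred = np.array([1, 1, 0, 0], bool)
truth = np.array([1, 0, 0, 0], bool)
tpf, fpf, acc, mcc = segmentation_metrics(pred, truth)
print(round(tpf, 3), round(fpf, 3), round(acc, 3))  # 1.0 0.333 0.75
```

On a real image pair, `pred` would be the output of the segmentation pipeline and `truth` the manually annotated vessel map.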
7. Conclusion and Future Work
A discussion regarding Acc needs to clearly incorporate its clinical relevance for laser photocoagulation procedures. In panretinal laser photocoagulation, in which numerous laser spots are positioned in the retinal periphery, it is immensely important to create an equal distribution of lesions. For treatments of the central retina, the two factors of accuracy and patient safety gain paramount significance. Therefore, wide-field retinal imaging benefits from specialized techniques that capture images with elongated FOVs. High-quality wide-field retinal images, such as those in Fig. 2, can be instrumental in diagnosing many eye conditions, particularly those involving the retinal periphery. This imaging allows for pathology detection in the far peripheral retina, where tears, holes, tumors, and vascular anomalies may exist; such abnormalities might be missed when binocular indirect ophthalmoscopy (BIO) is applied alone. Wide-field fundus images provide essential information about the eyes, helping to track the progression of potential diseases.
In this paper, a mosaic retina-based method for vessel segmentation was proposed. The mosaicked image displays a much wider FOV of the retina with enhanced vessel structures and aids experts in classifying diseases. Wide-field imaging also assists physicians during diagnosis and retinal laser surgery through online treatment monitoring, treatment planning, spatial dosimetry, alarming and triggering safety shutoffs when the laser strays from the intended target, error detection, change analysis between patient visits, and virtual reality tools for surgical simulation. In our approach, data corruption is modeled by introducing Gaussian noise, for which a multipass prefiltering scheme is employed to increase the accuracy of the vessel segmentation process. The filtering module is composed of nonlinear models because wide-field retinal images are rich in detail and can contain motion artifacts. To simulate realistic acquisition conditions, Gaussian noise was added to each retinal color image; an effective noise reduction algorithm was then applied to the R, G, and B channels, and the G channel was extracted for vessel segmentation. Noise reduction on color images is considerably more difficult and complex than on single-channel images. The experiments on the color image database, whose results are shown in Table 1, indicate that the fuzzy selected filter reduces Gaussian noise in color images better than the other methods considered, without distorting the useful information in the image.
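The channel-wise denoising step can be illustrated with the sketch below. It is a simplified stand-in for the fuzzy filter used in the paper, not the authors' exact filter: each neighbor is weighted by a fuzzy membership in the set "similar to the center pixel," so edges are preserved while Gaussian noise is averaged out. The function name `fuzzy_smooth_channel`, the membership function, and the parameter `k` are assumptions for illustration.

```python
import numpy as np

def fuzzy_smooth_channel(ch, k=30.0):
    """Edge-preserving smoothing of one color channel.

    Each 3x3 neighbour is weighted by exp(-|difference|/k), a simple
    fuzzy membership in "similar to the centre pixel": neighbours across
    a strong edge contribute little, while noise is averaged out.
    """
    ch = ch.astype(float)
    pad = np.pad(ch, 1, mode='edge')
    h, w = ch.shape
    acc = np.zeros_like(ch)
    wsum = np.zeros_like(ch)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            nb = pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            m = np.exp(-np.abs(nb - ch) / k)  # membership: "similar"
            acc += m * nb
            wsum += m
    return acc / wsum

rng = np.random.default_rng(0)
base = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))         # noise-free test image
noisy_rgb = np.stack([base + rng.normal(0, 20, base.shape)   # additive Gaussian
                      for _ in range(3)], axis=-1)           # noise per channel
den = np.stack([fuzzy_smooth_channel(noisy_rgb[..., c])      # denoise R, G, B
                for c in range(3)], axis=-1)
g = den[..., 1]                # green channel, used for vessel segmentation
print(np.mean((g - base) ** 2) < np.mean((noisy_rgb[..., 1] - base) ** 2))  # True
```

The final comparison confirms that the mean squared error of the filtered green channel is lower than that of the noisy one.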
Using CLAHE as a preprocessing step to correct the illumination of the images and increase their contrast solved the problem of histogram peaks, which cause noise-like artifacts in homogeneous regions of the image. In keeping with the proposed approach of using simple, functional algorithms to achieve the best response in retinal vessel segmentation, many contrast-enhancement algorithms in the spatial and frequency domains were considered and implemented; the best response was obtained by the CLAHE algorithm after extensive tuning of its parameters on the wide-field retinal image database.
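The contrast-limiting step that suppresses those histogram peaks can be sketched for a single tile as follows. This shows only the clip-and-redistribute idea, not a full CLAHE implementation; the helper name `clip_histogram` and the toy histogram are illustrative assumptions.

```python
import numpy as np

def clip_histogram(hist, clip_limit):
    """Contrast-limiting step of CLAHE for one tile's histogram.

    Counts above clip_limit are removed and their total mass is
    redistributed uniformly over all bins, which caps the slope of the
    resulting equalization mapping and suppresses noise-like artifacts
    in homogeneous regions. (Practical implementations iterate the
    redistribution, since it can push bins back above the limit.)
    """
    hist = hist.astype(float)
    excess = np.sum(np.maximum(hist - clip_limit, 0))  # mass above the limit
    clipped = np.minimum(hist, clip_limit)             # clip the peaks
    return clipped + excess / hist.size                # uniform redistribution

hist = np.array([50, 5, 3, 2], float)   # a strong peak in bin 0 (homogeneous tile)
out = clip_histogram(hist, clip_limit=10)
print(out)                  # [20. 15. 13. 12.]
print(out.sum() == hist.sum())  # True: total pixel count is preserved
```

The per-tile mapping is then the cumulative sum of the clipped histogram, and neighboring tile mappings are bilinearly interpolated in full CLAHE.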
Being highly sensitive to edges in a multidirectional fashion, multistructure elements morphology satisfactorily detected the blood vessel edges. In addition, owing to the multistructure elements employed, morphological opening by reconstruction eliminated the unwanted edges without unwittingly obliterating the thin vessels. Subsequent application of CCA and length filtering at a local level removed the remaining undesirable edges more accurately. The proposed multistructure elements morphology enables the identification of thin and tiny vessels that might otherwise be missed; the fact that some thin vessels can still be lost is due to the use of a simple thresholding method. There is always a tradeoff between eliminating more false edges and preserving the pixels associated with small vessels. Future work therefore entails a more sophisticated thresholding method that finds small vessels without mistakenly including irrelevant edge pixels, achieves higher accuracy, and accommodates retinal images with severe lesions. The statistics in Table 3 show that increasing the size and level of detail of the images in each database reduces the Acc of the proposed method and increases the processing time. The proposed method is subject to the following limiting conditions:
- The cameras that capture wide-field retinal vessel images are expensive and scarce.
- Only two such cameras are available in our country, which makes collecting a database of these images difficult.
- About seven images must be captured from each person and automatically combined into a single image by software, which lengthens the capture time and leads to eye fatigue and reduced cooperation from the subject during acquisition.
- Because of the long capture time, thermal and other environmental noise affect the captured images. For this reason, the authors simulated real capturing conditions by adding Gaussian noise to each image and then used an effective fuzzy noise reduction algorithm to assess and address the problem.
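The multistructure elements edge detection and the local CCA length filtering discussed above can be sketched with standard morphology tools. This is an illustrative reconstruction, not the authors' implementation: the four directional structuring elements, the synthetic image, and the `min_size` threshold are assumptions.

```python
import numpy as np
from scipy import ndimage

# Line structuring elements in four directions (0, 45, 90, and 135 deg).
ses = [
    np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], bool),  # 0 deg
    np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], bool),  # 45 deg
    np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], bool),  # 90 deg
    np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], bool),  # 135 deg
]

def multistructure_edges(img):
    """Morphological gradient (dilation minus erosion) with each
    directional SE; the pixel-wise maximum responds to edges of any
    orientation, which is the core idea of multistructure morphology."""
    grads = [ndimage.grey_dilation(img, footprint=se)
             - ndimage.grey_erosion(img, footprint=se) for se in ses]
    return np.max(grads, axis=0)

def length_filter(binary, min_size):
    """CCA step: label connected components and drop those smaller
    than min_size, removing ridges that are not part of the vessel tree."""
    labels, n = ndimage.label(binary)
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    keep = np.zeros(n + 1, bool)
    keep[1:] = sizes >= min_size
    return keep[labels]

# Synthetic test: a long thin "vessel" plus an irrelevant isolated speck.
img = np.zeros((16, 16))
img[2:14, 7] = 1.0   # long thin vertical structure
img[3, 2] = 1.0      # small isolated ridge
edges = multistructure_edges(img) > 0
clean = length_filter(edges, min_size=10)
print(clean.sum() > 0 and clean.sum() < edges.sum())  # True: speck removed
```

In the paper, this filtering is applied locally (per region) rather than on the whole image, which the authors report yields higher efficiency.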
Future work will also focus on improving the fusion mechanism to combine information more effectively, even from poor-quality images. In addition, future research will address the extraction of other features, such as microaneurysms and exudates, work on wide-field retinal images exhibiting abnormalities, and make the database available to everyone.
Biographies
Morteza Modarresi Asem has been working as a lecturer in the electronics and biomedical engineering departments at Tehran Medical Sciences University for about 8 years. His research interests include image processing, biometrics, pattern recognition, machine vision, and computer science. His current research program focuses on improving medical imaging methods for the early identification and assessment of lesions and the prediction of their response to therapy.
Iman Sheikh Oveisi received his BSc and MSc degrees in biomedical engineering (bioelectric) from the Department of Biomedical Engineering, Science and Research, Tehran, Iran. He then worked for six years on image processing and biometric authentication systems. His current research interests include image and video processing, medical imaging, medical image processing, computer vision, pattern recognition, color image processing, biometric authentication systems, human action recognition, human interfaces, and signal processing.
Mona Janbozorgi: Biography is not available.
Disclosures
No conflicts of interest, financial or otherwise, are declared by the authors.
References
- 1. Wasan B., et al., “Vascular network changes in the retina with age and hypertension,” J. Hypertens. 13(12), 1724–1728 (1995). https://doi.org/10.1097/00004872-199512010-00039
- 2. Kolb H., et al., “Webvision: organization of the retina and visual system,” http://webvision.med.utah.edu/ (2005).
- 3. Abràmoff M. D., Garvin M. K., Sonka M., “Retinal imaging and image analysis,” IEEE Rev. Biomed. Eng. 3, 169–208 (2010). https://doi.org/10.1109/RBME.2010.2084567
- 4. Wright C. H., et al., “Hybrid approach to retinal tracking and laser aiming for photocoagulation,” J. Biomed. Opt. 2(2), 195–203 (1997). https://doi.org/10.1117/12.268964
- 5. Walter T., et al., “A contribution of image processing to the diagnosis of diabetic retinopathy: detection of exudates in color fundus images of the human retina,” IEEE Trans. Med. Imaging 21(10), 1236–1243 (2002). https://doi.org/10.1109/TMI.2002.806290
- 6. Li H., Chutatape O., “Automated feature extraction in color retinal images by a model based approach,” IEEE Trans. Biomed. Eng. 51(2), 246–254 (2004). https://doi.org/10.1109/TBME.2003.820400
- 7. Patton N., et al., “Retinal image analysis: concepts, applications and potential,” Prog. Retinal Eye Res. 25, 99–127 (2006). https://doi.org/10.1016/j.preteyeres.2005.07.001
- 8. Narasimha-Iyer H., et al., “Robust detection and classification of longitudinal changes in color retinal fundus images for monitoring diabetic retinopathy,” IEEE Trans. Biomed. Eng. 53(6), 1084–1098 (2006). https://doi.org/10.1109/TBME.2005.863971
- 9. Russo F., “A method for estimation and filtering of Gaussian noise in images,” IEEE Trans. Instrum. Meas. 52(4), 1148–1154 (2003). https://doi.org/10.1109/TIM.2003.815989
- 10. Russo F., Lazzari A., “Color edge detection in presence of Gaussian noise using nonlinear prefiltering,” IEEE Trans. Instrum. Meas. 54(1), 352–358 (2005). https://doi.org/10.1109/TIM.2004.834074
- 11. Zuiderveld K., Contrast Limited Adaptive Histogram Equalization, Academic Press Professional, Inc., San Diego, California (1994).
- 12. NagaRaju C., et al., “Morphological edge detection algorithm based on multi-structure elements of different directions,” Int. J. Inf. Commun. Technol. Res. 1(1), 37–43 (2011).
- 13. Mukhopadhyay S., Chanda B., “Multiscale morphological segmentation of gray-scale images,” IEEE Trans. Image Process. 12(5), 533–549 (2003). https://doi.org/10.1109/TIP.2003.810757
- 14. Hamadani N., “Automatic target cueing in IR imagery,” Master’s Thesis, Air Force Institute of Technology, WPAFB, Ohio (1981).
- 15. Fraz M., et al., “Blood vessel segmentation methodologies in retinal images: a survey,” Comput. Methods Programs Biomed. 108, 407–433 (2012). https://doi.org/10.1016/j.cmpb.2012.03.009
- 16. Hoover A. D., Kouznetsova V., Goldbaum M., “Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response,” IEEE Trans. Med. Imaging 19(3), 203–210 (2000). https://doi.org/10.1109/42.845178
- 17. Rangayyan R., et al., “Detection of blood vessels in the retina using Gabor filters,” in Canadian Conf. on Electrical and Computer Engineering, pp. 717–720 (2007). https://doi.org/10.1109/CCECE.2007.184
- 18. Mendonca A. M., Campilho A., “Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction,” IEEE Trans. Med. Imaging 25(9), 1200–1213 (2006). https://doi.org/10.1109/TMI.2006.879955
- 19. Roychowdhury S., Koozekanani D. D., Parhi K. K., “Iterative vessel segmentation of fundus images,” IEEE Trans. Biomed. Eng. 62(7), 1738–1749 (2015). https://doi.org/10.1109/TBME.2015.2403295
- 20. Miri M. S., Mahloojifar A., “Retinal image analysis using curvelet transform and multistructure elements morphology by reconstruction,” IEEE Trans. Biomed. Eng. 58(5), 1183–1192 (2011). https://doi.org/10.1109/TBME.2010.2097599
- 21. Shahbeig S., “Automatic and quick blood vessels extraction algorithm in retinal images,” IET Image Process. 7(4), 392–400 (2013). https://doi.org/10.1049/iet-ipr.2012.0472
- 22. Vermeer K. A., et al., “A model based method for retinal blood vessel detection,” Comput. Biol. Med. 34(3), 209–219 (2004). https://doi.org/10.1016/S0010-4825(03)00055-6
- 23. Lam B., Yan H., “A novel vessel segmentation algorithm for pathological retina images based on the divergence of vector fields,” IEEE Trans. Med. Imaging 27(2), 237–246 (2008). https://doi.org/10.1109/TMI.2007.909827
- 24. Lam B., Gao Y., Liew A.-C., “General retinal vessel segmentation using regularization-based multiconcavity modeling,” IEEE Trans. Med. Imaging 29(7), 1369–1381 (2010). https://doi.org/10.1109/TMI.2010.2043259
- 25. Jiang X., Mojon D., “Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images,” IEEE Trans. Pattern Anal. Mach. Intell. 25(1), 131–137 (2003). https://doi.org/10.1109/TPAMI.2003.1159954
- 26. Al-Diri B., Hunter A., Steel D., “An active contour model for segmenting and measuring retinal vessels,” IEEE Trans. Med. Imaging 28(9), 1488–1497 (2009). https://doi.org/10.1109/TMI.2009.2017941
- 27. Budai A., Michelson G., Hornegger J., “Multiscale blood vessel segmentation in retinal fundus images,” in Proc. Bildverarbeitung für die Medizin, pp. 261–265 (2010).
- 28. Budai A., et al., “Robust vessel segmentation in fundus images,” Int. J. Biomed. Imaging 2013, 154860 (2013). https://doi.org/10.1155/2013/154860
- 29. Azzopardi G., et al., “Trainable COSFIRE filters for vessel delineation with application to retinal images,” Med. Image Anal. 19(1), 46–57 (2015). https://doi.org/10.1016/j.media.2014.08.002
- 30. Zhang J., et al., “Robust retinal vessel segmentation via locally adaptive derivative frames in orientation scores,” IEEE Trans. Med. Imaging 35(12), 2631–2644 (2016). https://doi.org/10.1109/TMI.2016.2587062
- 31. Neto L. C., et al., “An unsupervised coarse-to-fine algorithm for blood vessel segmentation in fundus images,” Expert Syst. Appl. 78, 182–192 (2017). https://doi.org/10.1016/j.eswa.2017.02.015
- 32. Niemeijer M., et al., “Comparative study of retinal vessel segmentation methods on a new publicly available database,” Proc. SPIE 5370, 648–656 (2004). https://doi.org/10.1117/12.535349
- 33. Staal J. J., et al., “Ridge-based vessel segmentation in color images of the retina,” IEEE Trans. Med. Imaging 23(4), 501–509 (2004). https://doi.org/10.1109/TMI.2004.825627
- 34. Waheed A., et al., “Removal of false blood vessels using shape based features and image inpainting,” J. Sens. 2015, 839894 (2015). https://doi.org/10.1155/2015/839894
- 35. Soares J., et al., “Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification,” IEEE Trans. Med. Imaging 25(9), 1214–1222 (2006). https://doi.org/10.1109/TMI.2006.879967
- 36. Ricci E., Perfetti R., “Retinal blood vessel segmentation using line operators and support vector classification,” IEEE Trans. Med. Imaging 26(10), 1357–1365 (2007). https://doi.org/10.1109/TMI.2007.898551
- 37. Roychowdhury S., Koozekanani D. D., Parhi K. K., “Blood vessel segmentation of fundus images by major vessel extraction and subimage classification,” IEEE J. Biomed. Health Inf. 19(3), 1118–1128 (2015). https://doi.org/10.1109/JBHI.2014.2335617
- 38. Fraz M., et al., “An ensemble classification-based approach applied to retinal blood vessel segmentation,” IEEE Trans. Biomed. Eng. 59(9), 2538–2548 (2012). https://doi.org/10.1109/TBME.2012.2205687
- 39. Marin D., et al., “A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features,” IEEE Trans. Med. Imaging 30(1), 146–158 (2011). https://doi.org/10.1109/TMI.2010.2064333
- 40. Strisciuglio N., et al., “Supervised vessel delineation in retinal fundus images with the automatic selection of B-COSFIRE filters,” Mach. Vision Appl. 27(8), 1137–1149 (2016). https://doi.org/10.1007/s00138-016-0781-7
- 41. Li Q., et al., “A cross-modality learning approach for vessel segmentation in retinal images,” IEEE Trans. Med. Imaging 35(1), 109–118 (2016). https://doi.org/10.1109/TMI.2015.2457891
- 42. Liskowski P., Krawiec K., “Segmenting retinal blood vessels with deep neural networks,” IEEE Trans. Med. Imaging 35(11), 2369–2380 (2016). https://doi.org/10.1109/TMI.2016.2546227
- 43. Ngo L., Han J., “Multi-level deep neural network for efficient segmentation of blood vessels in fundus images,” Electron. Lett. 53(16), 1096–1098 (2017). https://doi.org/10.1049/el.2017.2066
- 44. Salembier P., “Comparison of some morphological segmentation algorithms based on contrast enhancement: application to automatic defect detection,” in Signal Processing V: Theories and Applications, Elsevier (1990).
- 45. Zana F., Klein J. C., “Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation,” IEEE Trans. Image Process. 10(7), 1010–1019 (2001). https://doi.org/10.1109/83.931095
- 46. Bangham J. A., et al., “Morphological scale-space preserving transforms in many dimensions,” J. Electron. Imaging 5(3), 283–299 (1996). https://doi.org/10.1117/12.243349
- 47. Ma Y., Yang M., Li L., “A kind of omni-directional multi-angle structuring elements adaptive morphological filters,” J. Chin. Inst. Commun. 25(9), 86–92 (2004).
- 48. Hoover A. D., Kouznetsova V., Goldbaum M., “Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response,” IEEE Trans. Med. Imaging 19(3), 203–210 (2000). https://doi.org/10.1109/42.845178