Abstract.
A color fundus image is an image of the inner wall of the eyeball taken with a fundus camera. Doctors can observe retinal vessel changes in such images, and these changes can be used to diagnose many serious diseases such as atherosclerosis, glaucoma, and age-related macular degeneration. Automated segmentation of retinal vessels can facilitate more efficient diagnosis of these diseases. We propose an improved U-net architecture to segment retinal vessels. A multiscale input layer and dense blocks are introduced into the conventional U-net, so that the network can make use of richer spatial context information. The proposed method is evaluated on the public DRIVE dataset, achieving a sensitivity of 0.8199 and an accuracy of 0.9561. The segmentation results are improved especially for thin blood vessels, which are difficult to detect because of their low contrast with the background pixels.
Keywords: retinal vessel segmentation, U-net, multiscale, dense block
1. Introduction
The color fundus image is an image of the inner wall of the eyeball taken from different angles with a fundus camera. The process of obtaining the image is noninvasive and painless. We can directly observe retinal vascular changes in fundus images, and these changes can be used to diagnose many serious diseases, such as atherosclerosis, glaucoma, and age-related macular degeneration.1 However, it is labor-intensive and time-consuming for doctors to manually analyze the blood vessels in fundus images. The automated segmentation of retinal vessels can help them save effort and time.
Much work has been done on retinal vessel segmentation, but many difficulties persist. First, the shapes, sizes, and gray levels of blood vessels vary widely. Second, some vessels have very low contrast with the background. Third, some pathological changes appear as bright spots with narrow and dark spaces, showing characteristics similar to those of blood vessels.2 Existing segmentation methods include supervised methods and unsupervised methods (elaborated in Sec. 2). Among the supervised methods, neural network-based methods can greatly improve the accuracy of vessel segmentation. However, the segmentation of thin vessels still poses problems because of their low contrast with the surrounding pixels. In this paper, we propose an improved U-net architecture to segment retinal vessels. It achieves not only good overall segmentation accuracy for retinal images but also better segmentation results for thin vessels. The main contributions of this work are as follows.
1) Different scale image patches are used as inputs to the network, allowing it to learn richer multiscale information. Dense blocks are introduced into the U-net to obtain additional spatial context information. Due to these strategies, our method is capable of detecting more vessel pixels and allowing better segmentation of the thin vessels.
2) Compared with the conventional, or "vanilla," U-net architecture, our method provides better sensitivity and accuracy of vessel segmentation. For the segmentation of thin vessels especially, our method shows obvious advantages in both qualitative and quantitative analyses.
3) Compared with the state-of-the-art retinal vessel segmentation methods, our method obtains the best sensitivity, accuracy, and geometric mean of sensitivity and specificity. The area under the receiver operating characteristic (ROC) curve also shows that our method outperforms the state-of-the-art methods.
The remainder of this paper is organized as follows. Section 2 reviews existing methods of vessel segmentation. Section 3 describes the proposed method in detail, and Sec. 4 introduces the dataset used for the experiments and the experimental parameters. Section 5 presents and discusses the experimental results. We conclude this paper in Sec. 6.
2. Related Work
Much work has been done on retinal vessel segmentation. Existing methods can be divided into two categories: unsupervised methods and supervised methods.
The unsupervised methods attempt to utilize prior knowledge of vascular structure or gray-level features. However, it is difficult to cover all vascular structures because of their varied manifestations. The unsupervised methods mainly include model-based methods, matched filtering, and morphology-based methods. Roychowdhury et al.3 identified vessel pixels iteratively by adaptive thresholding. Zhao et al.4 utilized hybrid region information of the retinal image to construct an infinite active contour model. Zhao et al.5 combined the level set and region growing algorithms to segment retinal vessels. Zhang et al.6 presented a set of new filters based on 3-D orientation scores. Soomro et al.7,8 improved the sensitivity of vessel detection with color-to-gray conversion based on principal component analysis, contrast-sensitive approaches, and so on. Ramlugun et al.9 used a Gabor filter to enhance images and developed a double-sided thresholding scheme. Yin et al.10 used a Hessian matrix to enhance vessels and combined the result with a binary image obtained using a threshold that maximized entropy. Nguyen et al.11 segmented retinal vessels with a multiscale line detector. Koukounis et al.12 utilized a matched filter with signed integers to enhance the contrast between vascular and nonvascular pixels. Azzopardi et al.13 presented a B-COSFIRE filter that responds selectively to vessels. Khan et al.14 presented several contrast-sensitive measures to increase the sensitivity of existing retinal vessel segmentation methods.
Depending on the manner in which the features are extracted, supervised methods can be further divided into two categories, namely those based on hand-crafted features and those based on deep features. The former require feature design and classifier training. In this kind of method, effective feature extraction is critical to the segmentation results, and corresponding postprocessing is usually needed to improve the segmentation accuracy. Orlando and Blaschko15 segmented blood vessels using a discriminatively trained, fully connected conditional random field model. Roychowdhury et al.16 combined the major vessels with the result of subimage classification. Strisciuglio et al.17 constructed a feature vector from the responses of a B-COSFIRE filter and trained a support vector machine classifier. Zhang et al.18 used a brain-inspired wavelet transform and a random forest to segment retinal blood vessels. Fraz et al.19 designed a feature vector including the orientation of the gradient vector field, line strength, and responses of Gabor and morphological filters, and fed it into an ensemble of bagged and boosted decision trees.
Recent research mainly focuses on methods based on deep features. These methods can automatically learn hierarchical features without prior knowledge, and their segmentation results usually do not require any postprocessing. These studies propose neural network architectures with different optimization strategies. Guo et al.20 and Oliveira et al.21 reinforced samples for deep learning. Hu et al.22 and Yan et al.23 optimized the loss function. Hajabdollahi et al.24 provided a network structure with quantized fully connected layers and pruned convolutional layers. Dasgupta and Singh25 considered the segmentation task as a multilabel classification task and utilized the advantages of combining convolutional neural networks with structured prediction. Feng et al.26 introduced skip connections into neural networks. Jiang et al.27 presented a fully convolutional AlexNet network. Mo and Zhang28 and Fu et al.29 fused multilevel side-output layers to obtain a vessel probability map. Li et al.30 proposed a cross-modality learning approach. Liskowski and Krawiec31 used deep learning for reliable detection of blood vessels. Among the methods based on deep features, U-net is a semantic segmentation network based on a fully convolutional network and is suitable for medical image segmentation. In this paper, we propose an improved U-net to segment retinal vessels. We describe this method in the next section.
3. Method
3.1. Outline of the Method
Figure 1 shows the flowchart of the proposed method. First, we preprocess the color retinal images to obtain enhanced gray images. Then image patches are extracted around the vessel pixels and used as the inputs to the improved U-net. The patch probability maps output by the improved U-net are combined over their overlapping regions to produce the image probability map. The final segmentation result is obtained through binary segmentation. The details are presented in the following subsections.
Fig. 1.
Flowchart of the proposed method.
3.2. Network Architecture of the Improved U-net
As shown in Fig. 2, our network architecture is patch-based. Similar to the original U-net architecture, its left side is the encoder path and its right side is the decoder path. Each encoder layer performs convolution (with batch normalization and rectified linear unit (ReLU) activation) and densely connected convolution to produce multichannel encoder feature maps, followed by a down-sampling operation. The decoder path utilizes deconvolution layers to up-sample the feature maps. The feature maps of the encoder path are concatenated to the corresponding up-sampled decoder feature maps by skip connections. Finally, the feature maps output by the final decoder layer are activated by a softmax function, yielding two channel probabilities: one for vessels and one for nonvessels.
Fig. 2.
Network architecture of the improved U-net.
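For concreteness, the following is a minimal tf.keras sketch of the patch-based encoder-decoder skeleton described above: convolution with batch normalization and ReLU activation in the encoder, down-sampling, transposed convolution (deconvolution) for up-sampling, skip connections implemented by concatenation, and a two-channel softmax output. The patch size, network depth, filter counts, and the use of max pooling for down-sampling are illustrative assumptions rather than the exact configuration of Fig. 2; the dense blocks and multiscale inputs are sketched separately after Fig. 3.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_bn_relu(x, filters):
    """Convolution followed by batch normalization and ReLU activation."""
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)

def build_unet(patch_size=48, base_filters=32):  # patch size and filter counts are assumptions
    inputs = layers.Input((patch_size, patch_size, 1))  # enhanced gray patch

    # Encoder path: convolution blocks followed by down-sampling.
    e1 = conv_bn_relu(inputs, base_filters)
    e2 = conv_bn_relu(layers.MaxPooling2D(2)(e1), base_filters * 2)
    e3 = conv_bn_relu(layers.MaxPooling2D(2)(e2), base_filters * 4)

    # Decoder path: deconvolution for up-sampling, with skip connections
    # that concatenate the corresponding encoder feature maps.
    d2 = layers.Conv2DTranspose(base_filters * 2, 2, strides=2, padding="same")(e3)
    d2 = conv_bn_relu(layers.Concatenate()([d2, e2]), base_filters * 2)
    d1 = layers.Conv2DTranspose(base_filters, 2, strides=2, padding="same")(d2)
    d1 = conv_bn_relu(layers.Concatenate()([d1, e1]), base_filters)

    # Two-channel softmax output: vessel and nonvessel probabilities per pixel.
    outputs = layers.Conv2D(2, 1, activation="softmax")(d1)
    return Model(inputs, outputs)
```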
We improve the vanilla U-net with regard to the following two aspects.
1) Multiscale input layer. Multiscale input has proved effective in improving segmentation quality.32 Unlike other works that feed multiscale images into separate streams of a multistream network and fuse the output map of each stream as the final output, we employ average pooling layers to down-sample the image and feed these images of different scales into the corresponding layers of the encoder path.
2) Dense block. As shown in Fig. 3, we use a three-layer dense block. For each layer, the feature maps of all preceding layers are used as inputs, and its own feature maps are used as inputs for the subsequent layers. The first two layers implement a transformation consisting of batch normalization, ReLU activation, and convolution; the last layer implements only ReLU activation. A minimal code sketch of both improvements is given after Fig. 3.
Fig. 3.
The dense block architecture.
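As a minimal sketch of the two improvements, the functions below show a three-layer dense block matching the description of Fig. 3 and the average pooling used to produce the multiscale inputs for the encoder levels. The filter counts and the way the scaled images are combined with the encoder feature maps are illustrative assumptions.

```python
from tensorflow.keras import layers

def dense_block(x, filters):
    """Three-layer dense block: each layer takes the feature maps of all
    preceding layers as input (via concatenation)."""
    # First two layers: batch normalization, ReLU activation, and convolution.
    y1 = layers.Conv2D(filters, 3, padding="same")(
        layers.Activation("relu")(layers.BatchNormalization()(x)))
    y2 = layers.Conv2D(filters, 3, padding="same")(
        layers.Activation("relu")(layers.BatchNormalization()(
            layers.Concatenate()([x, y1]))))
    # Last layer: ReLU activation only.
    return layers.Activation("relu")(layers.Concatenate()([x, y1, y2]))

def multiscale_input(full_patch, level):
    """Down-sample the input patch by average pooling so that a scaled copy can
    be fed to (e.g., concatenated with) the encoder feature maps at this level."""
    if level == 0:
        return full_patch
    return layers.AveragePooling2D(pool_size=2 ** level)(full_patch)
```

For example, at encoder level 2 one would concatenate `multiscale_input(inputs, 2)` with the pooled feature maps before applying `dense_block`.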
3.3. Optimizer and Loss Function
Adam is used as the optimizer of the network. It requires only first-order gradients and has a small memory footprint. It calculates individual adaptive learning rates for different parameters from estimates of the first and second moments of the gradients.
We use binary cross entropy to measure the difference between the predicted distribution and the true distribution of the image pixels. The loss function is defined as follows:

$$L = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i \log p_i + (1 - y_i)\log(1 - p_i)\right], \tag{1}$$

where $N$ is the number of samples, $y_i$ is the true label of the $i$'th sample, and $p_i$ is the predicted probability of the $i$'th sample.
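Assuming the network is built as in the earlier sketches, compiling it in Keras with the Adam optimizer (learning rate 0.001, as given in Sec. 4.2) and the cross-entropy loss of Eq. (1) might look as follows; with the two-channel one-hot (vessel, nonvessel) target maps, minimizing this loss recovers the binary cross entropy of Eq. (1).

```python
from tensorflow.keras.optimizers import Adam

model = build_unet()  # from the earlier sketch (an assumed helper, not the paper's exact model)
model.compile(optimizer=Adam(learning_rate=0.001),
              loss="binary_crossentropy",  # Eq. (1) applied to the one-hot target maps
              metrics=["accuracy"])
```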
4. Experimental Setup
4.1. Datasets
The evaluation of the proposed method is conducted on the publicly available DRIVE dataset.33 DRIVE consists of 40 retinal images, each captured using 8 bits per color plane. Among these images, 33 do not show any sign of diabetic retinopathy and 7 show signs of mild early diabetic retinopathy. They are divided into two groups: 20 training images and 20 test images. Each training image has one manually segmented image as ground truth, and each test image has two manually segmented images. As noted in Ref. 23, most previous methods use the first one as ground truth for evaluation. For a fair comparison with other methods, we follow the same protocol.
4.2. Experiment Scheme and Parameter Setting
Our experiment is implemented in Keras. Adam is used as the optimizer of the network, and the learning rate is initialized to 0.001. In the training stage, the 20 training images are duplicated to obtain 40 images. The 40 color retinal images are converted into gray images and enhanced. Then we extract 20,480 patches from the images, which are used as the inputs to the network. We set the batch size to 25 and train the network for three epochs. In the test stage, a test image is converted to a gray image and enhanced in the same way as in the training stage. Then we extract patches from the gray image with a stride of 5 pixels in both width and height. The patch prediction maps are obtained and combined over their overlapping regions into a prediction image. Finally, we perform binary segmentation with a threshold of 0.5.
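The test-stage procedure described above can be sketched as follows: slide a window over the enhanced gray image with a stride of 5 pixels, predict each patch, average the vessel probabilities over overlapping regions, and threshold at 0.5. The patch size and the use of averaging for the overlapping combination are assumptions for illustration.

```python
import numpy as np

def segment_image(model, gray_image, patch=48, stride=5, threshold=0.5):
    h, w = gray_image.shape
    prob_sum = np.zeros((h, w), dtype=np.float64)  # accumulated vessel probabilities
    count = np.zeros((h, w), dtype=np.float64)     # how many patches covered each pixel

    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            tile = gray_image[y:y + patch, x:x + patch]
            pred = model.predict(tile[np.newaxis, ..., np.newaxis], verbose=0)
            vessel_prob = pred[0, ..., 0]          # channel 0 assumed to be "vessel"
            prob_sum[y:y + patch, x:x + patch] += vessel_prob
            count[y:y + patch, x:x + patch] += 1.0

    prob_map = prob_sum / np.maximum(count, 1.0)   # overlap-averaged probability map
    return (prob_map >= threshold).astype(np.uint8)  # binary segmentation at 0.5
```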
4.3. Evaluation Metrics
The segmentation results are measured in the image field of view defined by a mask image. We use four measures to evaluate our method: sensitivity (Se), specificity (Sp), accuracy (Acc), and geometric mean of sensitivity and specificity (G-mean). They are defined as follows:
$$\mathrm{Se} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}, \tag{2}$$

$$\mathrm{Sp} = \frac{\mathrm{TN}}{\mathrm{TN} + \mathrm{FP}}, \tag{3}$$

$$\mathrm{Acc} = \frac{\mathrm{TP} + \mathrm{TN}}{\mathrm{TP} + \mathrm{TN} + \mathrm{FP} + \mathrm{FN}}, \tag{4}$$

$$\text{G-mean} = \sqrt{\mathrm{Se} \times \mathrm{Sp}}. \tag{5}$$
Here, true positives (TP) are vessel pixels correctly identified as vessels, false positives (FP) are nonvessel pixels incorrectly identified as vessels, true negatives (TN) are nonvessel pixels correctly identified as nonvessels, and false negatives (FN) are vessel pixels incorrectly identified as nonvessels. Se measures the proportion of correctly classified vessel pixels, Sp measures the proportion of correctly classified nonvessel pixels, and Acc measures the proportion of correctly classified pixels overall. G-mean measures the balance between Se and Sp by taking their geometric mean, whose value lies between 0 and 1.34 Moreover, an ROC curve and the area under the ROC curve (AUC) are used to evaluate the performance of the proposed method. The larger the AUC, the better the performance of the classifier; the AUC equals 1 when the classifier is perfect.
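A short sketch of how the four measures in Eqs. (2)-(5) and the AUC can be computed within the field of view, given a binary segmentation, the ground truth, the probability map, and the mask image:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(segmentation, ground_truth, prob_map, fov_mask):
    inside = fov_mask > 0                      # restrict evaluation to the field of view
    seg = segmentation[inside].astype(bool)
    gt = ground_truth[inside].astype(bool)

    tp = np.sum(seg & gt)                      # vessel pixels correctly identified
    fp = np.sum(seg & ~gt)                     # nonvessel pixels labeled as vessels
    tn = np.sum(~seg & ~gt)                    # nonvessel pixels correctly identified
    fn = np.sum(~seg & gt)                     # vessel pixels labeled as nonvessels

    se = tp / (tp + fn)                        # Eq. (2): sensitivity
    sp = tn / (tn + fp)                        # Eq. (3): specificity
    acc = (tp + tn) / (tp + tn + fp + fn)      # Eq. (4): accuracy
    g_mean = np.sqrt(se * sp)                  # Eq. (5): geometric mean of Se and Sp
    auc = roc_auc_score(gt, prob_map[inside])  # area under the ROC curve
    return se, sp, acc, g_mean, auc
```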
5. Results and Discussion
5.1. Overall Performance of the Proposed Method
The performance of the method on the DRIVE dataset is shown in Table 1, with the second human observer's results as the reference. The average Se, Sp, Acc, and G-mean of our method are 0.8199, 0.9762, 0.9561, and 0.8946, respectively. All four measures are better than those of the second observer.
Table 1.
Segmentation results on DRIVE with the second observer results as the reference.
| Method | Se | Sp | Acc | G-mean |
|---|---|---|---|---|
| Second human observer | 0.7796 | 0.9717 | 0.9470 | 0.8704 |
| Proposed method | 0.8199 | 0.9762 | 0.9561 | 0.8946 |
We also select two images and show their segmentation results in Fig. 4. Figures 4(a1) and 4(a2) show the original retinal images, and Figs. 4(b1) and 4(b2) show the mask images. Figures 4(c1) and 4(c2) show the first observer's manual segmentations as the ground truth, and Figs. 4(d1) and 4(d2) show the segmentation results of our method. The segmented images show that our method detects retinal blood vessels well.
Fig. 4.
Segmentation results of two retinal images: (a1), (a2) the color retinal images; (b1), (b2) the mask images; (c1), (c2) the ground truth; and (d1), (d2) the segmentation results of our method.
5.2. Evaluation of the Improved U-net Architecture
To demonstrate the effectiveness of the improved U-net architecture, we compare its segmentation results with those of the vanilla U-net. From Table 2, we see that the sensitivity, accuracy, and G-mean of the improved U-net are better than those of the vanilla U-net. Notably, the sensitivity has improved considerably.
Table 2.
Comparison with the vanilla U-net.
| Method | Se | Sp | Acc | G-mean |
|---|---|---|---|---|
| Vanilla U-net | 0.7872 | 0.9805 | 0.9557 | 0.8785 |
| Improved U-net | 0.8199 | 0.9762 | 0.9561 | 0.8946 |
Furthermore, we compare the segmentation sensitivity and accuracy of each retinal image, which are illustrated in Figs. 5(a) and 5(b), respectively. Figure 5(a) shows that the segmentation sensitivity of each image obtained by the improved U-net is better than that of the vanilla U-net. Figure 5(b) shows that the segmentation accuracy of most images obtained by the improved U-net is better than that of the vanilla U-net.
Fig. 5.
Comparison of sensitivity and accuracy: (a) the sensitivity of the vanilla U-net and the improved U-net and (b) the accuracy of the vanilla U-net and the improved U-net.
We plot the ROC curves and show the AUC values in Fig. 6. Figure 6 shows that the ROC curve of the improved U-net is closer to the upper left corner than that of the vanilla U-net. The AUC of the improved U-net is 0.9796, which is also better than that of the vanilla U-net (0.9773). This demonstrates that our method obtains a better classifier.
Fig. 6.
ROC curves and AUC for blood vessel segmentation on the DRIVE test dataset.
To compare the accuracy of the two methods for thin vessel segmentation, we divide each ground truth image into two parts. The first contains thin vessels that are less than five pixels wide, and the remaining vessels are denoted as large vessels (one plausible separation procedure is sketched after Fig. 9). For example, the first test ground truth image is shown in Fig. 7(a), the ground truth for the thin vessels is shown in Fig. 7(b), and the ground truth for the large vessels is shown in Fig. 7(c). According to the thin-vessel ground truth maps, we calculate the thin-vessel sensitivity of the vanilla U-net and the improved U-net for each image and present the values in Fig. 8. We can see that the improved U-net has obvious advantages in detecting thin vessels. We also display the visual segmentation results of the thin vessels in Fig. 9. Regions r1, r2, r3, and r4 include thin vessels with low contrast, and the improved U-net segments them better than the vanilla U-net. Taking region r3 as an example, the segmentation results of the thin vessels by our method match the ground truth well, whereas the vanilla U-net misses most of the vessel pixels.
Fig. 7.
Division of (a) ground truth into (b) thin vessels and (c) large vessels.
Fig. 8.
Segmentation sensitivity of thin vessels.
Fig. 9.
Segmentation results of thin vessels.
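The paper does not spell out how a ground truth image is split into thin (less than five pixels wide) and large vessels; one plausible sketch uses morphological opening with a disk roughly five pixels in diameter, so that vessels narrower than that are removed by the opening and labeled as thin. The thin-vessel sensitivity of Fig. 8 can then be obtained by evaluating Eq. (2) only on the thin-vessel ground truth pixels.

```python
import numpy as np
from skimage.morphology import binary_opening, disk

def split_thin_large(ground_truth):
    """Hypothetical separation of a vessel ground truth map into thin and large vessels."""
    vessels = ground_truth > 0
    large = binary_opening(vessels, disk(2))  # keeps vessels roughly five pixels wide or more
    thin = vessels & ~large                   # vessels removed by the opening are "thin"
    return thin, large
```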
5.3. Comparison with the State-of-the-Art Methods
We compare the performance of our method with the state-of-the-art methods in terms of Se, Sp, Acc, AUC, and G-mean in Table 3. Only the G-mean is calculated by us; the other data in the table are reported in the corresponding references. The best evaluation values for the unsupervised and the supervised methods are marked separately (in bold). The best Se, Sp, Acc, AUC, and G-mean values of the unsupervised methods are 0.7743, 0.982, 0.954, 0.9718, and 0.8678, respectively. The best Se, Sp, Acc, AUC, and G-mean values of the supervised methods are 0.7861, 0.983, 0.9542, 0.9782, and 0.8738, respectively. Our method outperforms both categories, as its Se, Acc, AUC, and G-mean are 0.8199, 0.9561, 0.9796, and 0.8946, respectively, and its specificity is comparable. The good Se and G-mean indicate that our method not only provides the best vessel pixel classification but also maintains a good balance between Se and Sp.
Table 3.
Comparison with state-of-the-art methods. Bold marks the best evaluation value within each class of methods, i.e., the unsupervised methods and the supervised methods.
| Type | Method | Year | Se | Sp | Acc | AUC | G-mean |
|---|---|---|---|---|---|---|---|
| Unsupervised | Roychowdhury et al.3 | 2015 | 0.739 | 0.978 | 0.949 | 0.967 | 0.8501 |
| | Zhao et al.4 | 2015 | 0.742 | **0.982** | **0.954** | 0.862 | 0.8536 |
| | Zhang et al.6 | 2016 | **0.7743** | 0.9725 | 0.9476 | 0.9636 | **0.8678** |
| | Soomro et al.8 | 2016 | 0.7143 | 0.9681 | 0.9461 | **0.9718** | 0.8316 |
| | Yin et al.10 | 2014 | 0.7556 | — | 0.9475 | — | — |
| | Azzopardi et al.13 | 2015 | 0.7655 | 0.9704 | 0.9442 | 0.9614 | 0.8619 |
| | Khan et al.14 | 2017 | 0.754 | 0.964 | 0.944 | — | 0.8526 |
| Supervised | Orlando and Blaschko15 | 2014 | 0.785 | 0.967 | — | — | 0.8713 |
| | Roychowdhury et al.16 | 2015 | 0.725 | **0.983** | 0.952 | 0.962 | 0.8442 |
| | Strisciuglio et al.17 | 2016 | 0.7777 | 0.9702 | 0.9454 | 0.9597 | 0.8686 |
| | Zhang et al.18 | 2017 | **0.7861** | 0.9712 | 0.9466 | 0.9703 | **0.8738** |
| | Dasgupta and Singh25 | 2017 | 0.7691 | 0.9801 | 0.9533 | 0.9744 | 0.8682 |
| | Li et al.30 | 2016 | 0.7569 | 0.9816 | 0.9527 | 0.9738 | 0.8620 |
| | Fu et al.29 | 2016 | 0.7294 | — | 0.9470 | — | — |
| | Mo and Zhang28 | 2017 | 0.7779 | 0.9780 | 0.9521 | **0.9782** | 0.8722 |
| | Hu et al.22 | 2018 | 0.7772 | 0.9793 | 0.9533 | 0.9759 | 0.8724 |
| | Yan et al.23 | 2018 | 0.7653 | 0.9818 | **0.9542** | 0.9752 | 0.8668 |
| | Proposed method | 2019 | 0.8199 | 0.9762 | 0.9561 | 0.9796 | 0.8946 |
5.4. Computational Time
The experiment is conducted on hardware configured with an Intel Xeon E5-2683 CPU (2.0 GHz) and an NVIDIA Titan XP GPU. About 21.5 s is needed to segment a retinal image. We list the computational times reported by Refs. 3, 6, 10, 13, 16, 18, and 30 in Table 4. Since the implementation software and platforms differ, we cannot compare the computational times directly and fairly. However, as shown in Table 3 and Figs. 8 and 9, our method not only outperforms all these methods in terms of Se, Acc, AUC, and G-mean but also performs well in detecting thin vessels. Achieving such good segmentation results in 21.5 s is acceptable for computer-assisted diagnosis.
Table 4.
Computational time of different methods.
| Method | Time | System |
|---|---|---|
| Roychowdhury et al.3 | 2.45 s | 2.6 GHz, 2 GB RAM |
| Zhang et al.6 | 20 s | Mathematica 10.2, 2.7 GHz CPU |
| Yin et al.10 | About 24 s | MATLAB R2013, Intel® Core™ i5-3470 CPU (3.20 GHz), 8 GB RAM |
| Azzopardi et al.13 | 10 s | 2 GHz processor |
| Roychowdhury et al.16 | 3.11 s | MATLAB, Intel Core i3, 2.6 GHz, 2 GB RAM |
| Zhang et al.18 | 23.4 s | MATLAB R2015a, 2.7 GHz CPU |
| Li et al.30 | 1.2 min | MATLAB 2014a, AMD Athlon II X4 645 CPU, 3.10 GHz, 4 GB RAM |
| Proposed method | 21.5 s | Intel Xeon E5-2683 2.0 GHz, NVIDIA Titan XP GPU |
6. Conclusion
In this paper, we introduce multiscale inputs and dense blocks into the vanilla U-net and apply it to segment blood vessels in fundus images. The experimental results demonstrate that our method provides very good segmentation results compared with the vanilla U-net and the state-of-the-art methods. For the segmentation of thin vessels in particular, the results of our method show obvious improvements. Thus this method is of great significance, as it can assist doctors in diagnosing fundus-related diseases. Currently, the proposed method only extracts image patches around vessel pixels as the input to the network. In the future, we shall improve upon this work using other data augmentation methods.
Acknowledgments
This research work was supported by the National Natural Science Foundation of China (Nos. 61573380 and 61672542).
Biographies
Kejuan Yue received her MS degree from the North China University of Technology. She is a PhD candidate at Central South University, China. Her research interests include computer vision and medical image analysis.
Beiji Zou received his BS, MS, and PhD degrees from Zhejiang University in 1982, Qinghua University in 1984, and Hunan University in 2001, respectively. He is currently a professor at the School of Computer Science and Engineering of Central South University. His research interests include computer graphics and image processing.
Zailiang Chen received his PhD in computer science from Central South University in 2012. He is currently an associate professor. His recent research interests include computer vision, medical image analysis, and large-scale medical image processing.
Qing Liu received her bachelor's degree and her PhD in computer science and technology from Central South University, Changsha, China, in 2011 and 2017, respectively. She is currently a postdoctoral researcher at Central South University. Her research interests include salient object detection and medical image analysis.
Disclosures
No conflicts of interest, financial or otherwise, are declared by the authors.
References
- 1. Zhu C. Z., et al., "A survey of retinal vessel segmentation in fundus images," J. Comput. Aided Des. Comput. Graphics 27(11), 2046–2057 (2015).
- 2. Ng E. Y. K., et al., Image Analysis and Modeling in Ophthalmology, pp. 23–47, CRC Press, Boca Raton, Florida (2014).
- 3. Roychowdhury S., et al., "Iterative vessel segmentation of fundus images," IEEE Trans. Biomed. Eng. 62(7), 1738–1749 (2015). 10.1109/TBME.2015.2403295
- 4. Zhao Y., et al., "Automated vessel segmentation using infinite perimeter active contour model with hybrid region information with application to retina images," IEEE Trans. Med. Imaging 34(9), 1797–1807 (2015). 10.1109/TMI.2015.2409024
- 5. Zhao Y. Q., et al., "Retinal vessels segmentation based on level set and region growing," Pattern Recognit. 47(7), 2437–2446 (2014). 10.1016/j.patcog.2014.01.006
- 6. Zhang J., et al., "Robust retinal vessel segmentation via locally adaptive derivative frames in orientation scores," IEEE Trans. Med. Imaging 35(12), 2631–2644 (2016). 10.1109/TMI.2016.2587062
- 7. Soomro T. A., et al., "Contrast normalization steps for increased sensitivity of a retinal image segmentation method," J. Signal Image Video Process. 11(8), 1509–1517 (2017). 10.1007/s11760-017-1114-7
- 8. Soomro T. A., et al., "Automatic retinal vessel extraction algorithm," in Int. Conf. Image and Vision Comput., IEEE, New Zealand (2016). 10.1109/DICTA.2016.7797013
- 9. Ramlugun G. S., Nagarajan V. K., Chakraborty C., "Small retinal vessels extraction towards proliferative diabetic retinopathy screening," Expert Syst. Appl. 39(1), 1141–1146 (2012). 10.1016/j.eswa.2011.07.115
- 10. Yin X., et al., "Accurate image analysis of the retina using hessian matrix and binarisation of thresholded entropy with application of texture mapping," PLoS One 9(4), 1–17 (2014). 10.1371/journal.pone.0095943
- 11. Nguyen U. T. V., et al., "An effective retinal blood vessel segmentation method using multi-scale line detection," Pattern Recognit. 46(3), 703–715 (2013). 10.1016/j.patcog.2012.08.009
- 12. Koukounis D., et al., "A high performance hardware architecture for portable, low-power retinal vessel segmentation," Integration 47(3), 377–386 (2014). 10.1016/j.vlsi.2013.11.005
- 13. Azzopardi G., et al., "Trainable COSFIRE filters for vessel delineation with application to retinal images," Med. Image Anal. 19(1), 46–57 (2015). 10.1016/j.media.2014.08.002
- 14. Khan M. A. U., et al., "Boosting sensitivity of a retinal vessel segmentation algorithm," Pattern Anal. Appl. 22, 583–599 (2017). 10.1007/s10044-017-0661-4
- 15. Orlando J. I., Blaschko M., "Learning fully-connected CRFs for blood vessel segmentation in retinal images," Lect. Notes Comput. Sci. 8673, 634–641 (2014). 10.1007/978-3-319-10404-1
- 16. Roychowdhury S., Koozekanani D. D., Parhi K. K., "Blood vessel segmentation of fundus images by major vessel extraction and subimage classification," IEEE J. Biomed. Health Inf. 19(3), 1118–1128 (2015). 10.1109/JBHI.2014.2335617
- 17. Strisciuglio N., et al., "Supervised vessel delineation in retinal fundus images with the automatic selection of B-COSFIRE filters," Mach. Vision Appl. 27(8), 1137–1149 (2016). 10.1007/s00138-016-0781-7
- 18. Zhang J., et al., "Retinal vessel delineation using a brain-inspired wavelet transform and random forest," Pattern Recognit. 69, 107–123 (2017). 10.1016/j.patcog.2017.04.008
- 19. Fraz M. M., et al., "An ensemble classification-based approach applied to retinal blood vessel segmentation," IEEE Trans. Biomed. Eng. 59(9), 2538–2548 (2012). 10.1109/TBME.2012.2205687
- 20. Guo Y., et al., "A retinal vessel detection approach using convolution neural network with reinforcement sample learning strategy," Measurement 125, 586–591 (2018). 10.1016/j.measurement.2018.05.003
- 21. Oliveira A., Pereira S., Silva C. A., "Retinal vessel segmentation based on fully convolutional neural networks," Expert Syst. Appl. 112, 229–242 (2018). 10.1016/j.eswa.2018.06.034
- 22. Hu K., et al., "Retinal vessel segmentation of color fundus images using multiscale convolutional neural network with an improved cross-entropy loss function," Neurocomputing 309, 179–191 (2018). 10.1016/j.neucom.2018.05.011
- 23. Yan Z., Yang X., Cheng K. T., "Joint segment-level and pixel-wise losses for deep learning based retinal vessel segmentation," IEEE Trans. Biomed. Eng. 65(9), 1912–1923 (2018).
- 24. Hajabdollahi M., et al., "Low complexity convolutional neural network for vessel segmentation in portable retinal diagnostic devices," in 25th IEEE Int. Conf. Image Process. (ICIP), IEEE, pp. 2785–2789 (2018). 10.1109/ICIP.2018.8451665
- 25. Dasgupta A., Singh S., "A fully convolutional neural network based structured prediction approach towards the retinal vessel segmentation," in 14th Int. Symp. Biomed. Imaging, IEEE, pp. 248–251 (2017). 10.1109/ISBI.2017.7950512
- 26. Feng Z., Yang J., Yao L., "Patch-based fully convolutional neural network with skip connections for retinal blood vessel segmentation," in IEEE Int. Conf. Image Process., IEEE, pp. 1742–1746 (2018). 10.1109/ICIP.2017.8296580
- 27. Jiang Z., et al., "Retinal blood vessel segmentation using fully convolutional network with transfer learning," Comput. Med. Imaging Graphics 68, 1–15 (2018). 10.1016/j.compmedimag.2018.04.005
- 28. Mo J., Zhang L., "Multi-level deep supervised networks for retinal vessel segmentation," Int. J. Comput. Assisted Radiol. Surg. 12(12), 2181–2193 (2017). 10.1007/s11548-017-1619-0
- 29. Fu H., et al., "Retinal vessel segmentation via deep learning network and fully-connected conditional random fields," in 13th Int. Symp. Biomed. Imaging (ISBI), IEEE, pp. 698–701 (2016). 10.1109/ISBI.2016.7493362
- 30. Li Q., et al., "A cross-modality learning approach for vessel segmentation in retinal images," IEEE Trans. Med. Imaging 35(1), 109–118 (2016). 10.1109/TMI.2015.2457891
- 31. Liskowski P., Krawiec K., "Segmenting retinal blood vessels with deep neural networks," IEEE Trans. Med. Imaging 35(11), 2369–2380 (2016). 10.1109/TMI.2016.2546227
- 32. Fu H., et al., "Joint optic disc and cup segmentation based on multi-label deep network and polar transformation," IEEE Trans. Med. Imaging 37(7), 1597–1605 (2018). 10.1109/TMI.2018.2791488
- 33. Staal J., et al., "Ridge based vessel segmentation in color images of the retina," IEEE Trans. Med. Imaging 23, 501–509 (2004). 10.1109/TMI.2004.825627
- 34. Kubat M., Holte R. C., Matwin S., "Machine learning for the detection of oil spills in satellite radar images," Mach. Learn. 30(2-3), 195–215 (1998). 10.1023/A:1007452223027