AMIA Annual Symposium Proceedings. 2015 Nov 5;2015:1140–1147.

Vessel Delineation in Retinal Images using Leung-Malik filters and Two Levels Hierarchical Learning

Ehsan S Varnousfaderani 1, Siamak Yousefi 1, Christopher Bowd 1, Akram Belghith 1, Michael H Goldbaum 1
PMCID: PMC4765663  PMID: 26958253

Abstract

Blood vessel segmentation is important for the analysis of ocular fundus images in diseases affecting vessel caliber, occlusion, leakage, inflammation, and proliferation. We introduce a novel supervised method that evaluates the performance of Leung-Malik filters in delineating vessels. First, a feature vector is extracted for every pixel from the responses of Leung-Malik filters applied to the green channel of the retinal image at different orientations and scales. A two-level hierarchical learning framework is proposed to segment vessels in retinal images with confounding disease abnormalities. In the first level, three expert classifiers are trained to delineate 1) vessels, 2) background, and 3) retinal pathologies, including abnormalities such as lesions and anatomical structures such as the optic disc. In the second level, a new classifier is trained to separate vessel from non-vessel pixels based on the results of the expert classifiers. Qualitative evaluation shows the effectiveness of the proposed expert classifiers in modeling retinal pathologies. Quantitative results on two standard datasets, STARE (AUC = 0.971, Acc = 0.927) and DRIVE (AUC = 0.955, Acc = 0.903), are comparable with other state-of-the-art vessel segmentation methods.

Introduction

Color photography of the ocular fundus is a widely used imaging modality that permits non-invasive analysis of the retinal microvasculature. Color images allow ophthalmologists and trained medical professionals to diagnose and monitor the progression of retinal diseases, including age-related macular degeneration [1] and diabetic retinopathy [2]. Diabetic retinopathy, a leading cause of blindness, can be prevented with treatment at an early stage; therefore, the WHO recommends yearly retinal screening of patients. Reliable vessel segmentation can facilitate screening and improve vascular analysis, which in turn improves diabetic retinopathy detection [2, 3].

Manual analysis of ocular fundus images is time-consuming, expensive, and sometimes inaccurate. Computer-aided ocular fundus image analysis permits quantification of the extent of retinal abnormalities in vascular diseases; it is fast and inexpensive and allows batch-mode processing [3] [4]. Blood vessel segmentation is an important early step in the analysis of ocular fundus images. It can be a challenging task if the images are of low quality, are noisy, vary in brightness, or have lesions underlying or adjacent to blood vessels [5]. Some automatic vessel segmentation methods extract feature vectors for pixels and use a set of manually segmented vessels (provided by experts) to train classifiers, while others are developed mainly from filter-response thresholding or other rule-based techniques.

The matched filter technique [6] [7] is one of the earliest blood vessel segmentation methods; it uses the maximum response of 12 different templates (2D kernels with Gaussian cross sections), applied in different orientations, to detect blood vessels. Al-Rawi [8] later tuned the filter parameters to increase the matched filter response for blood vessel detection. Tracking approaches [9] [10] [11] build vessel trees by propagating vessel labels from manually or automatically selected pixels to unknown pixels, based on underlying correlations among pixels or by tracking vessel center lines. Mathematical morphology is combined with cross-curvature evaluation and linear filtering in [12] [13] [14] to segment vessel-like patterns from the background. The most recent filtering-based method is B-COSFIRE [15], which uses a combination of shifted filter responses to extract vessels. B-COSFIRE is a trainable filter approach in which the selectivity of a filter is determined automatically from user prototype patterns such as straight vessels, bifurcations, or crossover points.

Among learning-based approaches, the method proposed by Ricci [16] uses a line detector on the inverted green channel of a retinal image. The average gray level along lines passing through a pixel is computed at different orientations, and the line with the maximum value is selected. The line strength is computed as the difference between the average gray level of the selected line and the average gray level in a square neighborhood of the pixel. The line strength is high for vessel pixels and low for non-vessel pixels, which allows unsupervised pixel classification by thresholding. Furthermore, an orthogonal line is defined as a line of three pixels, centered on the midpoint of the main line and orthogonal to it. Its strength is obtained analogously, as the difference between the average intensity on the orthogonal line and the average intensity in the square neighborhood of the pixel. The orthogonal line strength, line strength, and intensity of the pixel are combined to build a three-dimensional feature vector for every pixel. An SVM classifier is then used to improve the result through supervised vessel classification.
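For concreteness, a minimal sketch of such a line-strength computation is given below (Python/NumPy); the line length, number of orientations, and window size are illustrative assumptions, not the exact values used in [16].

```python
import numpy as np

def line_strength(img, y, x, length=15, n_angles=12, win=15):
    """Basic line-strength measure in the spirit of the line detector in [16].

    img is the inverted green channel; length, n_angles and win are
    illustrative parameter choices.
    """
    half = length // 2
    best = -np.inf
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        dy, dx = np.sin(theta), np.cos(theta)
        # Average gray level along a line of `length` pixels through (y, x)
        samples = [img[int(round(y + t * dy)), int(round(x + t * dx))]
                   for t in range(-half, half + 1)]
        best = max(best, float(np.mean(samples)))
    w = win // 2
    # Average gray level of the square neighborhood centered at (y, x)
    local_mean = img[y - w:y + w + 1, x - w:x + w + 1].mean()
    return best - local_mean   # high for vessel pixels, low elsewhere
```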

The method proposed by Fraz et al. [17] builds feature vectors for every pixel from orientation analysis of the gradient vector field, a line strength measure, Gabor filter responses, and morphological transformations. Supervised methods based on an ensemble of bagged and boosted decision trees are then used to classify pixels as vessel or non-vessel. In the method developed by Staal [18], line elements are approximated from grouped image ridges, and the properties of the line elements and their associated patches are used to build feature vectors for every pixel, while the method proposed by Niemeijer [19] uses the responses of a multiscale Gaussian filter to build feature vectors. The k-Nearest Neighbor (kNN) classifier is then used to classify pixels in both methods. Soares et al. [20] use two-dimensional Gabor wavelet filter responses at multiple scales to build feature vectors for every pixel; a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures then classifies pixels as vessel or non-vessel. Most supervised methods are designed to classify vessel and non-vessel regions regardless of the presence of abnormal pathologies such as lesions or exudates. Their performance is high on normal retinal images but degrades drastically on abnormal retinal images.

The main goal of this paper is to develop a vessel segmentation method based on Leung-Malik filters [21] that seeks to minimize false positives and false negatives in the face of retinal pathologies such as lesions, exudates, and optic nerve abnormalities in diseased retinas. The proposed method uses the responses of Leung-Malik filters at different scales and orientations to build a feature vector for every pixel. A two-level hierarchical learning framework is then employed to segment vessels in abnormal retinal images. In the first level, three classifiers are trained to detect vessels, background, and abnormalities; in the second level, a new classifier is trained to combine the results of the first-level classification and delineate vessel and non-vessel pixels. The performance of the proposed method on the standard STARE [7] and DRIVE [18] databases is comparable to state-of-the-art vessel segmentation methods.

The rest of the paper is organized as follows: in Section 2 we explain the proposed method and show how the LM filter responses can be used to detect blood vessels. In Section 3 we evaluate and discuss our method, and in Section 4 we draw conclusions.

Method

The proposed method delineates vessels in 2D fundus images by classifying pixels as vessel or non-vessel. The responses of the multi-scale, multi-orientation Leung-Malik filter bank [21] are used to extract features for every pixel. The Leung-Malik filter bank consists of 60 filters, including first and second derivatives of Gaussians at 8 orientations and 3 scales (48 filters in total), 8 Laplacian of Gaussian (LOG) filters, and 4 Gaussian filters. In our experiments the first and second derivative filters occur at the scales σ = {1, √2, 2} with an elongation factor of 3 (σx = σ, σy = 3σx). The Gaussians occur at the four basic scales σ = {1, √2, 2, 2√2}, while the 8 LOG filters occur at the scales σ = {1, √2, 2, 2√2, 3, 3√2, 6, 6√2}. Examples from the LM filter bank are shown in Fig 1.b. The standard configuration for scales and number of orientations is used [21]. The maximum responses of the LM filters on the green channel image over different orientations, the standard deviation of the maximum responses over different scales, and the intensity value and its standard deviation in a 5×5 neighborhood of the pixel are used to build a 14-dimensional feature vector for each pixel, as shown in Fig 1.c. The LM filter bank extracts textural, shape, and intensity-based features. The first and second derivatives of Gaussians at different orientations and scales respond strongly to elongated, vessel-like objects, while the LOG and Gaussian filters respond strongly to blob-like objects such as exudates and hemorrhages. The extracted features in Fig 1.c show how effectively vessel-like objects are highlighted at different orientations and scales by the LM filters. Once feature vectors are extracted for every pixel, a two-level hierarchical classification is applied to detect vessel and non-vessel pixels in abnormal retinal images, as shown in Fig 2.
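As a rough illustration of this feature extraction step, the sketch below (Python with NumPy/SciPy) assembles per-pixel features from a precomputed set of oriented LM kernels. The `filter_bank` structure and the exact grouping into 14 dimensions are assumptions, since the original implementation is not published.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def pixel_features(green, filter_bank):
    """Per-pixel features from LM filter responses (illustrative sketch).

    green       : 2-D float array, green channel of the retinal image.
    filter_bank : dict mapping scale -> list of oriented LM kernels
                  (generation of the kernels themselves is omitted here).
    """
    max_per_scale = []
    for scale in sorted(filter_bank):
        # Maximum response over orientations at this scale
        responses = np.stack([convolve(green, k) for k in filter_bank[scale]])
        max_per_scale.append(responses.max(axis=0))
    max_per_scale = np.stack(max_per_scale)              # (n_scales, H, W)

    # Local intensity statistics in a 5x5 neighborhood
    local_mean = uniform_filter(green, size=5)
    local_std = np.sqrt(np.maximum(uniform_filter(green**2, size=5)
                                   - local_mean**2, 0.0))

    feats = list(max_per_scale)                          # max response per scale
    feats.append(max_per_scale.std(axis=0))              # std of maxima across scales
    feats += [green, local_std]                          # intensity and its local std
    return np.stack(feats, axis=-1)                      # (H, W, n_features)
```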

Fig 1.

Feature extraction process. a) Green channel input retinal image and its ground truth segmentation taken from the STARE database. b) The Leung-Malik filter bank, consisting of 60 multiscale and multiorientation filters. c) Extracted 14-dimensional features.

Fig 2.

Two-level hierarchical classification framework. a) Expert classifiers to detect vessel, abnormality, and background pixels. b) Binary classification results and probability maps generated by the expert classifiers. c) Local means and standard deviations of the probability maps used as input features for the classifier in the next level. d) Classifier in level 2 and its outcome.

For the multi-class classification problem, the hierarchical classification framework takes advantage of simple expert classifiers in the first level, each discriminating one class against all others. The expert classifiers are ensembles of n decision trees, trained to discriminate pixels belonging to vessels, background, and abnormal regions. Abnormalities in our experiments comprise lesions, exudates, the optic disc, and regions whose textural pattern differs from that of a normal retinal image. The expert classifiers are shown in Fig 2.a.
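A minimal sketch of the first-level training is shown below, using scikit-learn's bagged decision trees as a stand-in for the "ensembles of n decision trees" described above; the label encoding and helper names are assumptions.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

BACKGROUND, VESSEL, ABNORMALITY = 0, 1, 2

def train_expert_classifiers(X, labels, n_trees=60):
    """Train one one-vs-rest bagged-tree expert per class (level 1).

    X      : (n_pixels, 14) array of LM-based features.
    labels : (n_pixels,) array with values in {0, 1, 2}.
    """
    experts = {}
    for cls in (BACKGROUND, VESSEL, ABNORMALITY):
        clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=n_trees)
        clf.fit(X, (labels == cls).astype(int))   # one class against all others
        experts[cls] = clf
    return experts
```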

A decision tree generates a class label; however, the probability that a sample originates from a class can be computed as the fraction of training observations of that class in the tree leaf. The final probability map of each expert classifier is computed by averaging the probability maps generated by all of its decision trees. The binary classification results and probability maps of the expert classifiers are shown in Fig 2.b.

The local mean and standard deviation in a 3×3 square neighborhood of the probability maps of the three expert classifiers are used to build a 6-dimensional feature vector for every pixel. These 6-dimensional feature vectors, shown in Fig 2.c, are used as input to the classifier in the second level. In the second level, a new classifier based on an ensemble of n decision trees is trained on the 6-dimensional feature vectors (the results of the expert classifiers in level one) to classify pixels as vessel or non-vessel. The classifier in the second level and its results are shown in Fig 2.d. Different classifiers, such as k-nearest neighbor, Gaussian mixture models, and ensembles of decision trees, were tested in the first and second levels, but ensembles of 60 decision trees outperformed the other classifiers. Therefore, in our experiments ensembles of 60 decision trees are used in both levels.
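The second-level step could be sketched as follows (again with scikit-learn); the reshaping of per-pixel probabilities back to the image grid and the helper names are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def level_two_features(prob_maps, win=3):
    """6-D level-2 features: local mean and std of each expert's
    probability map in a 3x3 window (3 experts x 2 statistics)."""
    feats = []
    for p in prob_maps:                                   # each p has shape (H, W)
        mean = uniform_filter(p, size=win)
        std = np.sqrt(np.maximum(uniform_filter(p**2, size=win) - mean**2, 0.0))
        feats += [mean, std]
    return np.stack(feats, axis=-1).reshape(-1, 2 * len(prob_maps))

def train_level_two(prob_maps, vessel_labels, n_trees=60):
    """Train the level-2 ensemble on the 6-D feature vectors to separate
    vessel from non-vessel pixels.

    prob_maps comes from the experts, e.g.
    experts[c].predict_proba(X)[:, 1].reshape(H, W) for each class c.
    """
    X2 = level_two_features(prob_maps)
    clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=n_trees)
    clf.fit(X2, vessel_labels.ravel())
    return clf
```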

The performance of the proposed method is evaluated on the standard STARE [7] and DRIVE [18] data sets. The STARE data set, collected at the Shiley Eye Center of UC San Diego, is used to train the classifier. It consists of 20 color retinal images (10 normal and 10 with pathology) captured by a Topcon TRV-50 fundus camera at a 35° field of view (FOV). Two observers, one of them an expert ophthalmologist (MHG), manually segmented the images; 10.4% and 14.9% of pixels are segmented as vessels by the first and second observers, respectively. In our experiments, the segmentation by the first observer is used as ground truth, as is common in other methods. For the STARE data set, cross-validation is used to train and test the classifiers on 75% and 25% of the images, respectively; this process is repeated 4 times so that all images are tested. The DRIVE data set consists of 40 images (20 for training and 20 for testing) captured by a Canon CR5 non-mydriatic 3CCD camera at a 45° FOV. The training and test sets were manually segmented once and twice, respectively, by observers trained by an ophthalmologist. In the test set, 12.3% and 12.79% of pixels are segmented as vessels by the first and second observers, respectively, and the first observer is used as ground truth.
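For the STARE protocol described above, an image-level split such as the following (scikit-learn KFold, an assumed implementation detail) yields four folds of 15 training and 5 test images.

```python
import numpy as np
from sklearn.model_selection import KFold

image_ids = np.arange(20)                     # 20 STARE images
kfold = KFold(n_splits=4, shuffle=True, random_state=0)
for fold, (train_ids, test_ids) in enumerate(kfold.split(image_ids)):
    # Train the two-level framework on pixels from train_ids (15 images)
    # and evaluate on pixels from test_ids (5 images); after 4 folds every
    # image has been tested exactly once.
    print(fold, train_ids.size, test_ids.size)
```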

Results and Discussions

For qualitative evaluation, the performance of the proposed method on retinal images with abnormalities is examined. Green channel retinal images with and without abnormalities from the STARE and DRIVE databases are shown in Fig 3.a. The results of the three expert classifiers in detecting the image background, image abnormalities, and vessels are encoded in red, green, and blue, respectively, as shown in Fig 3.b. The vessels segmented by the proposed method and the ground truth images are shown in Fig 3.c and d.

Fig 3.

Illustration of vessel segmentations. a) Green channel of retinal images from the STARE and DRIVE datasets. b) Results of the three expert classifiers in detecting background, abnormalities, and vessels. c) Vessels segmented by the proposed method. d) Ground truth vessel segmentations.

The abnormalities close to the macula in the first STARE image are successfully detected by the expert classifiers, but some parts of their boundaries are wrongly labeled as vessels, highlighted in green and blue, respectively, in Fig 3.b. The expert classifiers successfully detect scattered and intense pathologies in STARE images two and three, as shown in Fig 3.b. The proposed method takes advantage of the expert classifiers and is able to segment vessels that are surrounded by confounding abnormalities, as shown for the STARE images in Fig 3.c. Fewer of the DRIVE images have abnormalities, and the vessel segmentation results are comparable to the ground truth labeling (Fig 3.c, d). Regional post-processing based on the shape and morphological characteristics of vessels, or connecting disconnected vessels, could improve the results, but to allow a fair comparison with other methods we do not use any post-processing in our experiments.

The evaluation metrics of area under the ROC curve (AUC) and accuracy (Acc) are used to quantitatively compare the performance of the proposed method with state-of-the-art vessel segmentation methods on the STARE and DRIVE databases, as shown in Table 1. The accuracy of segmentation is computed as Acc = (TP + TN) / N, in which TP, TN, and N are the number of true positive pixels, the number of true negative pixels, and the total number of pixels, respectively. To avoid selecting a single classification threshold, one may scan through all possible thresholds and observe the effect on the true positive rate and the false positive rate; the area under the resulting ROC curve is a reliable measure for comparing the performance of different methods. In our experiments the built-in MATLAB function perfcurve is used to estimate the AUC. The proposed method achieves an AUC of 0.971 on the STARE database, which is comparable with the methods developed by Fraz [17], Marin [22], and Ricci [16], with AUCs of 0.977, 0.977, and 0.968, respectively. The performance of the proposed method is better on the STARE database than on the DRIVE database, because the STARE database has more retinal images with abnormalities.
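A minimal evaluation sketch is given below; scikit-learn's roc_auc_score is used here as a substitute for MATLAB's perfcurve, and the 0.5 threshold for the accuracy computation is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(prob_vessel, ground_truth, threshold=0.5):
    """Compute Acc = (TP + TN) / N at a fixed threshold and threshold-free AUC.

    prob_vessel  : (H, W) vessel probability map from the level-2 classifier.
    ground_truth : (H, W) binary manual segmentation.
    """
    pred = prob_vessel.ravel() >= threshold
    gt = ground_truth.ravel().astype(bool)
    acc = np.mean(pred == gt)                         # (TP + TN) / N
    auc = roc_auc_score(gt, prob_vessel.ravel())      # area under the ROC curve
    return acc, auc
```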

Table 1.

Performance of the proposed method and state-of-the-art methods on the STARE and DRIVE databases with respect to AUC and Acc.

Method                            STARE           DRIVE
                                  AUC     Acc     AUC     Acc
Proposed Method                   0.971   0.927   0.955   0.903
Staal et al. (2004) [18]          0.961   0.952   0.952   0.944
Soares et al. (2006) [20]         0.967   0.948   0.961   0.947
Al-Rawi et al. (2007) [8]         0.947   0.909   0.944   0.954
Ricci and Perfetti (2007) [16]    0.968   0.965   0.963   0.960
Marin et al. (2011) [22]          0.977   0.953   0.959   0.945
Fraz et al. (2012) [17]           0.977   0.953   0.975   0.948
B-COSFIRE (2015) [5]              0.956   0.950   0.961   0.944

Conclusion

In this paper we have proposed a novel method to segment vessels in retinal images that contain confounding abnormalities. The proposed method takes advantage of multi-orientation Leung-Malik filters at different scales and uses a two-level hierarchical learning framework to detect vessels in diseased retinal images. Retinal abnormalities, vessels, and background are modeled by expert classifiers in the first level, and the outcomes of the expert classifiers are combined to detect vessels in the second level. Qualitative evaluation shows that retinal abnormalities are successfully separated from vessels and background by the expert classifiers. Moreover, quantitative results on two standard data sets, STARE (AUC = 0.971, Acc = 0.927) and DRIVE (AUC = 0.955, Acc = 0.903), are comparable with state-of-the-art vessel segmentation methods.

References

  • [1].Abràmoff Michael D, Garvin Mona K, Sonka Milan. Retinal imaging and image analysis. IEEE reviews in biomedical engineering. 2010;3:169–208. doi: 10.1109/RBME.2010.2084567. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [2].Gelman Rony, Martinez-Perez M Elena, Vanderveen Deborah K, Moskowitz Anne, Fulton Anne B. Diagnosis of plus disease in retinopathy of prematurity using Retinal Image multiScale Analysis. Investigative ophthalmology & visual science. 2005;46(12):4734–4738. doi: 10.1167/iovs.05-0646. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [3].Ribeiro Luisa, Bernardes Rui, Cunha-Vaz José. Computer-aided Analysis of Fundus Photographs. 2011:104–7. [Google Scholar]
  • [4].Belghith Akram, Bowd Christopher, Weinreb Robert N, Zangwill Linda M. A hierarchical framework for estimating neuroretinal rim area using 3D spectral domain optical coherence tomography (SD-OCT) optic nerve head (ONH) images of healthy and glaucoma eyes; In: 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2014; pp. 3869–3872. [DOI] [PubMed] [Google Scholar]
  • [5].Azzopardi George, Strisciuglio Nicola, Vento Mario, Petkov Nicolai. Trainable COSFIRE filters for vessel delineation with application to retinal images. Medical image analysis. 2015;19(1):46–57. doi: 10.1016/j.media.2014.08.002. [DOI] [PubMed] [Google Scholar]
  • [6].Chaudhuri Subhasis, Chatterjee Shankar, Katz Norman, Nelson Mark, Goldbaum Michael. Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Transactions on medical imaging. 1989;8(3):263–269. doi: 10.1109/42.34715. [DOI] [PubMed] [Google Scholar]
  • [7].Hoover Adam, Kouznetsova Valentina, Goldbaum Michael. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. Medical Imaging, IEEE Transactions on. 2000;19(3):203–210. doi: 10.1109/42.845178. [DOI] [PubMed] [Google Scholar]
  • [8].Al-Rawi Mohammed, Qutaishat Munib, Arrar Mohammed. An improved matched filter for blood vessel detection of digital retinal images. Computers in Biology and Medicine. 2007;37(2):262–267. doi: 10.1016/j.compbiomed.2006.03.003. [DOI] [PubMed] [Google Scholar]
  • [9].Liu Iching, Sun Ying. Recursive tracking of vascular networks in angiograms based on the detection-deletion scheme. Medical Imaging, IEEE Transactions on. 1993;12(2):334–341. doi: 10.1109/42.232264. [DOI] [PubMed] [Google Scholar]
  • [10].Zhou Liang, Rzeszotarski Mark S, Singerman Lawrence J, Chokreff Jeanne M. The detection and quantification of retinopathy using digital angiograms. Medical Imaging, IEEE Transactions on. 1994;13(4):619–626. doi: 10.1109/42.363106. [DOI] [PubMed] [Google Scholar]
  • [11].Chutatape O, Zheng Liu, Krishnan SM. Retinal blood vessel detection and tracking by matched Gaussian and Kalman filters; In: Proceedings of the 20th Annual International Conference of the IEEE Engineering in Medicine and Biology Society; 1998. [Google Scholar]
  • [12].Zana Frederic, Klein J-C. Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation. Image Processing, IEEE Transactions on. 2001;10(7):1010–1019. doi: 10.1109/83.931095. [DOI] [PubMed] [Google Scholar]
  • [13].Heneghan Conor, Flynn John, O’Keefe Michael, Cahill Mark. Characterization of changes in blood vessel width and tortuosity in retinopathy of prematurity using image analysis. Medical image analysis. 2002;6(4):407–429. doi: 10.1016/s1361-8415(02)00058-0. [DOI] [PubMed] [Google Scholar]
  • [14].Mendonca Ana Maria, Campilho Aurelio. Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction. Medical Imaging, IEEE Transactions on. 2006;25(9):1200–1213. doi: 10.1109/tmi.2006.879955. [DOI] [PubMed] [Google Scholar]
  • [15].Azzopardi George, Strisciuglio Nicola, Vento Mario, Petkov Nicolai. Trainable COSFIRE filters for vessel delineation with application to retinal images. Medical image analysis. 2015;19(1):46–57. doi: 10.1016/j.media.2014.08.002. [DOI] [PubMed] [Google Scholar]
  • [16].Ricci Elisa, Perfetti Renzo. Retinal blood vessel segmentation using line operators and support vector classification. Medical Imaging, IEEE Transactions on. 2007;26(10):1357–1365. doi: 10.1109/TMI.2007.898551. [DOI] [PubMed] [Google Scholar]
  • [17].Fraz Muhammad Moazam, Remagnino Paolo, Hoppe Andreas, Uyyanonvara Bunyarit, Rudnicka Alicja R, Owen Christopher G, Barman Sarah A. An ensemble classification-based approach applied to retinal blood vessel segmentation. Biomedical Engineering, IEEE Transactions on. 2012;59(9):2538–2548. doi: 10.1109/TBME.2012.2205687. [DOI] [PubMed] [Google Scholar]
  • [18].Staal Joes, Abràmoff Michael D, Niemeijer Meindert, Viergever Max A, van Ginneken Bram. Ridge-based vessel segmentation in color images of the retina. Medical Imaging, IEEE Transactions on. 2004;23(4):501–509. doi: 10.1109/TMI.2004.825627. [DOI] [PubMed] [Google Scholar]
  • [19].Niemeijer Meindert, Garvin Mona K, van Ginneken Bram, Sonka Milan, Abramoff Michael D. Vessel segmentation in 3D spectral OCT scans of the retina. In: Proceedings of SPIE Medical Imaging; 2008. 69141R. International Society for Optics and Photonics. [Google Scholar]
  • [20].Soares João VB, Leandro Jorge JG, Cesar Roberto M, Jelinek Herbert F, Cree Michael J. Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. Medical Imaging, IEEE Transactions on. 2006;25(9):1214–1222. doi: 10.1109/tmi.2006.879967. [DOI] [PubMed] [Google Scholar]
  • [21].Leung Thomas, Malik Jitendra. Representing and recognizing the visual appearance of materials using three-dimensional textons. International Journal of Computer Vision. 2001 Jun;43(1):29–44. [Google Scholar]
  • [22].Marín Diego, Aquino Arturo, Gegúndez-Arias Manuel Emilio, Bravo José Manuel. A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features. IEEE Transactions on Medical Imaging. 2011;30(1):146–158. doi: 10.1109/TMI.2010.2064333. [DOI] [PubMed] [Google Scholar]
