Journal of Digital Imaging. 2013 Mar 21;26(6):1107–1115. doi: 10.1007/s10278-013-9585-8

A New Blood Vessel Extraction Technique Using Edge Enhancement and Object Classification

Shahriar Badsha 1, Ahmed Wasif Reza 1, Kim Geok Tan 2, Kaharudin Dimyati 3
PMCID: PMC3824929  PMID: 23515843

Abstract

Diabetic retinopathy (DR) is increasing progressively, pushing up the demand for automatic extraction of blood vessels and classification of disease severity. Blood vessel extraction from the fundus image is a vital and challenging task. Therefore, this paper presents a new, computationally simple, and automatic method to extract retinal blood vessels. The proposed method comprises several basic image processing techniques, namely edge enhancement by a standard template, noise removal, thresholding, morphological operations, and object classification. The proposed method has been tested on a set of retinal images collected from the DRIVE database, and a robust performance analysis has been employed to evaluate its accuracy. The results obtained from this study reveal that the proposed method offers an average accuracy of about 97 %, sensitivity of 99 %, specificity of 86 %, and predictive value of 98 %, which is superior to various well-known techniques.

Keywords: Diabetic retinopathy, Kirsch’s template, Object classification, Vessel detection, Image processing

Introduction

Blood vessel extraction is the key component for ophthalmologists in the detection and diagnosis of several eye diseases [1]. Diabetic retinopathy (DR) is provoked by complications of diabetes mellitus: approximately 2 % of the patients affected by this disorder are completely blind, and about 10 % experience vision degradation after 15 years of diabetes [2–4]. Therefore, blood vessel extraction from fundus images is a vital step in various practical applications, for instance, diagnosis of the retinal vessels and registration of retinal images acquired at different times. Blood vessel extraction algorithms also play an important role in automated radiological diagnostic systems. Basically, the segmentation and classification process depends on the image quality, application field, automated or semi-automated approach, and other explicit factors. Several segmentation methods exist, but none of them extracts the blood vessels from every single medical image. Some methods make use of pure intensity-based pattern recognition techniques, such as thresholding followed by connected-component analysis [5, 6], while other techniques incorporate explicit vessel models for the extraction of vessel contours [7–9]. Prior to applying a segmentation algorithm, some methods may need image pre-processing, depending on the image characteristics and general image artifacts such as noise [10, 11]. On the other hand, some methods use post-processing to remove the complications arising from over-segmentation. Various other approaches have also been reported for the detection of vessels and edges, namely mathematical morphology, threshold probing, centerline approaches, ridge-based approaches, supervised classification, deformable models, and tracking [12–16].

In this paper, we implement both pre-processing and post-processing approaches to extract the blood vessels; the result is found to be very close to the gold standard segmentation (hand-labeled image). This paper focuses mainly on the edge enhancement technique and on object classification to remove the small objects from the fundus image. The remaining part of this paper is organized as follows. The blood vessel extraction technique based on mathematical morphology is described in the Proposed Methodology section. The experimental results, along with some qualitative and quantitative comparisons, are presented in the Results and Performance Analysis section. Finally, the Conclusion section provides the concluding remarks.

Proposed Methodology

The retina is a light-sensitive tissue lining the interior surface of the eye; it is a layered structure with several layers of neurons interconnected by synapses. The central retinal vein and artery appear close to each other at the nasal side of the center of the optic disk [17]. Information about the structure of blood vessels can facilitate categorizing the severity of diseases and can also serve as a landmark during the segmentation operation. Typical features of the fundus image are presented in Fig. 1. In this study, our main focus is to extract the blood vessels precisely, with the aim of replacing existing vessel extraction techniques. The proposed method therefore uses the following steps: (1) edge enhancement, (2) average filtering and histogram equalization, (3) binarization, (4) morphological operation, (5) object classification, and (6) optic disk and border subtraction. Figure 2 shows the overall procedure of the proposed vessel extraction algorithm.

Fig. 1. Typical color fundus image

Fig. 2. Proposed methodology

In the first step, edge enhancement highlights the edges of the retinal RGB image. After grayscale conversion, the image needs to be filtered and histogram equalized because of its uneven noise and low intensity, and to make the edges of the blood vessels clearer. After binarization, many small unwanted objects remain, created by the edge enhancement. Hence, we develop an object classification technique to remove these small objects. It is also necessary to extract the optic disk, which is subsequently removed from the blood vessel image. The details of every step are given below.

  1. Edge Enhancement

The proposed method uses Kirsch's template [17] for color images. In our algorithm, Kirsch's template is used to detect the blood vessels in the retinal images. Figure 3 shows the Kirsch templates, which are obtained by successive rotation. The Kirsch operator is a discrete version of the first-order derivative used for edge enhancement and detection. For detecting the edges, the operator uses eight templates, each rotated by 45° from the previous one. The gradient at every pixel is computed by convolving the image with the eight template impulse response arrays, yielding the gradient in the different directions. The final gradient is the summation of the enhanced edges over all directions, computed for all RGB channels rather than any single channel only. Figure 4 presents the enhanced images for the various directions.
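The rotation-and-convolution procedure above can be sketched in a few lines of Python (the paper's implementation is in MATLAB; the function names and toy images here are illustrative). Note that the raw responses of the eight rotated templates cancel at every pixel, so this sketch assumes the "summation of the enhanced edges" is taken over response magnitudes:

```python
# Illustrative sketch of Kirsch edge enhancement (pure Python, no libraries).
# The 3x3 base template and its seven 45-degree rotations are applied to every
# interior pixel, and the magnitudes of the eight responses are summed.

def kirsch_templates():
    """Return the eight Kirsch 3x3 templates, successive 45-degree rotations."""
    ring = [5, 5, 5, -3, -3, -3, -3, -3]          # outer ring, clockwise
    pos = [(0, 0), (0, 1), (0, 2), (1, 2),
           (2, 2), (2, 1), (2, 0), (1, 0)]        # clockwise ring positions
    templates = []
    for r in range(8):
        t = [[0] * 3 for _ in range(3)]           # centre coefficient stays 0
        for i, (y, x) in enumerate(pos):
            t[y][x] = ring[(i - r) % 8]
        templates.append(t)
    return templates

def kirsch_enhance(img):
    """Sum of the eight directional response magnitudes at interior pixels."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for ty in range(1, h - 1):
        for tx in range(1, w - 1):
            total = 0
            for t in kirsch_templates():
                resp = sum(t[dy][dx] * img[ty + dy - 1][tx + dx - 1]
                           for dy in range(3) for dx in range(3))
                total += abs(resp)
            out[ty][tx] = total
    return out
```

On a uniform region every template response is zero (each template's coefficients sum to zero), so only intensity transitions such as vessel edges produce output.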

  2. Grayscale Conversion and Average Filtering

Fig. 3. Arrays of Kirsch's method

Fig. 4. a–h Edge enhancement for various directions; i color fundus image; j sum of enhanced edges in all directions

In the next step, we convert the edge-enhanced RGB image to its grayscale form using Eq. (1). To convert the RGB image to grayscale, the values of its red, green, and blue primaries are combined in linear intensity encoding, after gamma expansion. If the output gray image is I and the red, green, and blue components are R, G, and B, respectively, then

I = 0.299R + 0.587G + 0.114B  (1)

Here, the intensity gradient between the foreground (the blood vessels in this case) and its background is relatively low. Choosing an accurate threshold value to segregate the objects of interest is a difficult task; hence, edge enhancement and pre-processing of the image turn out to be essential for the subsequent analysis. Therefore, an averaging filter of size 25 × 35, with equal weights of "1", chosen heuristically after several trials [18–20], is applied to the grayscale image; Ri (i = 1, …, 25 × 35) denotes the pixels under the filter window. The average filtering process is realized as follows [18–20]:

g(x, y) = (1 / (25 × 35)) Σ Ri,  i = 1, …, 25 × 35  (2)
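Equations (1) and (2) can be sketched as follows (a minimal pure-Python illustration; the 3 × 3 window is for brevity, whereas the paper uses 25 × 35, and the function names are ours):

```python
# Sketch of the grayscale conversion of Eq. (1) and the equal-weight
# averaging filter of Eq. (2), applied to interior pixels only.

def to_gray(rgb):
    """Luminance grayscale: I = 0.299 R + 0.587 G + 0.114 B."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb]

def box_filter(img, h=3, w=3):
    """Equal-weight average over an h x w window centred on each pixel."""
    H, W = len(img), len(img[0])
    out = [row[:] for row in img]
    ry, rx = h // 2, w // 2
    for y in range(ry, H - ry):
        for x in range(rx, W - rx):
            window = [img[y + dy][x + dx]
                      for dy in range(-ry, ry + 1)
                      for dx in range(-rx, rx + 1)]
            out[y][x] = sum(window) / (h * w)
    return out
```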
  3. Histogram Equalization

The image obtained above is a grayscale image with gray levels in the range [0, L − 1], and its histogram is the discrete function

h(rk) = nk  (3)

where rk is the kth gray level and nk is the number of pixels having gray level rk [21, 22]. The normalized histogram is [23]:

p(rk) = nk / n  (4)

where n is the total number of pixels and k = 0, 1, …, L − 1. When the histogram is forced to be uniform, the transformation process is termed histogram equalization, as shown below [22]:

s = T(r),  0 ≤ r ≤ 1  (5)

Here, r ∈ [0, 1] represents the normalized gray value; the transformation function T(r) must be [22]: (a) single-valued, (b) monotonically increasing in [0, 1], and (c) such that 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.

If the transformation function were not monotonically increasing, gray levels could be inverted in the resulting image. The inverse transformation from s to r is computed as follows [23]:

r = T⁻¹(s),  0 ≤ s ≤ 1  (6)

Let pr(r) and ps(s) be the probability density functions (PDFs) of r and s, respectively. If pr(r) is known and T⁻¹(s) is single-valued and monotonically increasing in [0, 1], then ps(s) is [23]:

ps(s) = pr(r) |dr/ds|  (7)

The transformation function can be written as [23]:

s = T(r) = ∫₀ʳ pr(w) dw  (8)

The right-hand side of Eq. (8) is the cumulative distribution function of the variable r. The resulting transformation function is single-valued, monotonically increasing, and satisfies T(r) ∈ [0, 1] for r ∈ [0, 1]. Substituting dr/ds = 1/pr(r) into Eq. (7) yields a uniform density:

ps(s) = pr(r) · (1 / pr(r)) = 1,  0 ≤ s ≤ 1  (9)

For discrete gray levels, the integral of Eq. (8) is replaced by a sum, giving:

sk = T(rk) = Σ_{j=0}^{k} pr(rj) = Σ_{j=0}^{k} nj / n,  k = 0, 1, …, L − 1  (10)

where pr(rj) = nj/n is the estimated probability of occurrence of the gray level rj. Equation (10) is the histogram equalization transformation [23], which is applied to the image obtained from the previous step.
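The discrete mapping of Eq. (10) can be sketched as follows (illustrative Python; `pixels` is a flat list of integer gray levels and `L` the number of levels):

```python
# Discrete histogram equalization following Eq. (10): each gray level r_k is
# mapped to the scaled cumulative sum of n_j / n.

def equalize(pixels, L):
    n = len(pixels)
    hist = [0] * L
    for p in pixels:
        hist[p] += 1                      # h(r_k) = n_k, Eq. (3)
    cdf, acc = [], 0.0
    for nk in hist:
        acc += nk / n                     # running sum of p(r_k) = n_k / n
        cdf.append(acc)
    # s_k = (L - 1) * sum_{j<=k} n_j / n, rounded to the nearest gray level
    return [round((L - 1) * cdf[p]) for p in pixels]
```

For example, with L = 4 the pixel list [0, 0, 1, 2, 3, 3, 3, 3] is mapped to [1, 1, 1, 2, 3, 3, 3, 3], stretching the occupied gray levels towards a uniform distribution.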

  4. Binarization

We can see from Fig. 5 that the intensity distribution of the gray image is not bimodal. If we examine the image obtained after histogram equalization, there is a dominant peak at the higher intensity levels of the histogram. Therefore, for binarization, a high threshold value close to 1 should provide an image containing all blood vessels. In our proposed method, binarization is done with a heuristic threshold of 0.77. We have tested 20 different fundus images with this threshold value and found it suitable in all cases. The binarization results in two groups of pixels, as illustrated in Eq. (11) [18–20].

g(x, y) = 1 if f(x, y) ≥ 0.77;  g(x, y) = 0 otherwise  (11)
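A minimal sketch of Eq. (11), assuming intensities normalized to [0, 1]:

```python
# Fixed-threshold binarization as in Eq. (11); the threshold 0.77 follows
# the paper's heuristic choice.

def binarize(img, t=0.77):
    return [[1 if p >= t else 0 for p in row] for row in img]
```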
  5. Morphological Closing

Fig. 5. Gray fundus image (left) and binary image (right)

Morphological closing is necessary to close the holes or empty areas within the blood vessels created by the Kirsch template matching demonstrated earlier. The closing of A by B (as shown in Fig. 6) is obtained by the dilation of A by B, followed by erosion of the result by B [22]:

A • B = (A ⊕ B) ⊖ B  (12)
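The closing of Eq. (12) can be sketched in pure Python (illustrative only; a full 3 × 3 structuring element B is assumed, and pixels outside the image count as 0):

```python
# Sketch of binary closing: dilation by B followed by erosion by B.

def _px(img, y, x):
    """Pixel value with zero padding outside the image."""
    return img[y][x] if 0 <= y < len(img) and 0 <= x < len(img[0]) else 0

def dilate3(img):
    return [[1 if any(_px(img, y + dy, x + dx)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
             for x in range(len(img[0]))] for y in range(len(img))]

def erode3(img):
    return [[1 if all(_px(img, y + dy, x + dx)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
             for x in range(len(img[0]))] for y in range(len(img))]

def closing(img):
    return erode3(dilate3(img))
```

A one-pixel hole inside a vessel segment, for example, is filled by the dilation and not re-opened by the erosion.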
  6. Removal of Small Objects Using Object Classification

Fig. 6. Binarized image (left) and after closing (right)

The blood vessel image obtained from the binarization process is not satisfactory because of noise and many small unwanted objects. In the first step of our procedure, Kirsch's method enhances the edges; however, it enhances the noise as well. As a result, the binary output contains noise and unwanted small objects, as well as some black veins running through the detected blood vessels. To eliminate or reduce this noise, we apply an operation based on the area of each object in the image. This is a simple method to eliminate the unwanted objects from the binary image: we first classify the whole image into different objects based on their area. The procedure is as follows.

The area of a particular object is obtained by counting the number of pixels (j, k) in the object for which F(j, k) = 1. The perimeter of the object can be calculated by applying the following equation [24]:

PE = Σj Σk B(j, k)  (13)

where PE is the perimeter and B(j, k) = 1 for object pixels that border the background (0 otherwise). The object area is

AO = Σj Σk F(j, k)  (14)

After the classification process, we eliminate those objects whose area falls below a threshold value of 30 pixels. Figure 7 shows the results obtained after removal of the small objects.
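The area-based removal can be sketched with a flood-fill labeling (illustrative Python; the paper's area threshold is 30, while a smaller value is used in this toy example):

```python
# Small-object removal sketch: label 8-connected components by flood fill,
# then keep only components whose pixel area (Eq. (14)) reaches the threshold.

def remove_small_objects(img, min_area):
    H, W = len(img), len(img[0])
    out = [[0] * W for _ in range(H)]
    seen = [[False] * W for _ in range(H)]
    for sy in range(H):
        for sx in range(W):
            if img[sy][sx] == 1 and not seen[sy][sx]:
                stack, comp = [(sy, sx)], []
                seen[sy][sx] = True
                while stack:                      # flood fill one component
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < H and 0 <= nx < W
                                    and img[ny][nx] == 1 and not seen[ny][nx]):
                                seen[ny][nx] = True
                                stack.append((ny, nx))
                if len(comp) >= min_area:         # area = pixel count
                    for y, x in comp:
                        out[y][x] = 1
    return out
```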

  7. Removal of Border and Optic Disk

Fig. 7. a With noise and unwanted objects; b after removal of noise and objects

To remove the border of Fig. 7(b), the main task is to choose the appropriate marker and mask images. The original image f(x,y) is used as the mask image and the marker image fm is defined by the following equation [16, 25]:

fm(x, y) = f(x, y) if (x, y) is on the border of f;  fm(x, y) = 0 otherwise  (15)

The reconstructed image Rf(fm) contains only the objects that touch the border, while the set difference f − Rf(fm) contains only the objects of interest from the original image that do not touch the border.
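The marker/mask reconstruction can be sketched as follows (a binary, pure-Python illustration of the idea behind Eq. (15); MATLAB's `imreconstruct` and `imclearborder` provide the same operation):

```python
# Border clearing sketch: the marker keeps only the border pixels of the mask
# image; reconstruction by repeated conditional dilation recovers every object
# touching the border, and the set difference removes those objects.

def clear_border(img):
    H, W = len(img), len(img[0])
    # marker fm: equal to f on the image border, 0 elsewhere (Eq. (15))
    marker = [[img[y][x] if y in (0, H - 1) or x in (0, W - 1) else 0
               for x in range(W)] for y in range(H)]
    changed = True
    while changed:                      # geodesic dilation until stability
        changed = False
        for y in range(H):
            for x in range(W):
                if img[y][x] == 1 and marker[y][x] == 0:
                    if any(0 <= y + dy < H and 0 <= x + dx < W
                           and marker[y + dy][x + dx]
                           for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                        marker[y][x] = 1
                        changed = True
    # f - R_f(fm): only the objects that do not touch the border remain
    return [[img[y][x] - marker[y][x] for x in range(W)] for y in range(H)]
```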

The optic disk is the most apparent feature in a fundus image (the brightest area, observed as a pale, well-defined round or vertically somewhat oval region); it is the entrance region of the blood vessels and optic nerve into the retina and serves as the locus of most other features. To remove it, optic disk detection is implemented on the red channel of the fundus image in four steps: red channel extraction, grayscale conversion, histogram equalization, and binarization. Initially, the original RGB retinal image is taken. We observe that the optic disk is predominantly clearer in the red channel; therefore, we choose the red channel only and convert it to grayscale. Its low contrast is overcome by linear histogram adjustment. We then use a simple technique to extract the region of the optic disk: the grayscale image is converted to binary with a heuristically high threshold value of t = 0.72. The resulting processed image contains the detected optic disk (not shown in the figure), which is used to remove the optic disk from the blood vessel image.

  8. Morphological Erosion

One of the drawbacks of the Kirsch filter is that it thickens the blood vessels beyond their true width. Therefore, to compensate for this effect, morphological erosion is introduced in the final phase. The erosion operator takes two inputs: the image to be eroded and the structuring element of Fig. 8, which determines the precise effect of the erosion. Suppose X is the set of Euclidean coordinates corresponding to the input binary image and K is the set of coordinates of the structuring element. Let Kx denote the translation of K to the point x. Then the erosion of X by K is the set of all points x such that Kx is a subset of X [26, 27].

Fig. 8. A structuring element

Consider the image matrix of "0"s and "1"s in Fig. 9. This binary image is eroded by the structuring element of Fig. 8, and the resultant eroded image is shown in Fig. 10.
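The erosion definition can be sketched directly (illustrative Python; the exact structuring element of Fig. 8 is not reproduced here, so a 3 × 3 square is assumed):

```python
# Erosion sketch: x belongs to the eroded set only when the translated
# structuring element K_x fits entirely inside X (pixels outside count as 0).

def erode(img, se=((-1, -1), (-1, 0), (-1, 1),
                   (0, -1),  (0, 0),  (0, 1),
                   (1, -1),  (1, 0),  (1, 1))):
    H, W = len(img), len(img[0])
    return [[1 if all(0 <= y + dy < H and 0 <= x + dx < W and img[y + dy][x + dx]
                      for dy, dx in se) else 0
             for x in range(W)] for y in range(H)]
```

Eroding a three-pixel-wide vertical bar with this element, for instance, leaves a one-pixel-wide line, which is how the step counteracts the vessel thickening introduced by the Kirsch filter.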

Fig. 9. Binary image containing "0" and "1"

Fig. 10. After erosion with the structuring element

Finally, using this underlying concept of morphological erosion, and after removal of the optic disk and the border, we obtain the extracted blood vessels shown in Fig. 11. It can be observed that the proposed technique can identify even the thinner vessels, which is the most challenging task.

Fig. 11. Final resultant image

Results and Performance Analysis

In this study, the experiments are performed in MATLAB using the Image Processing Toolbox [22]. The proposed method has been verified on the DRIVE database images [28]. The DRIVE database contains 20 color images of the retina with 565 × 585 pixels and 8 bits per color channel. To measure the performance carefully, the gold standard segmentation result is included in our study; the experimental settings of the vessel image segmented using the proposed technique and of the gold standard image are kept analogous for performance evaluation. The gold standard [16] is a manual segmentation result (by a human grader) provided with each DRIVE database image to compute the performance measures. The performance evaluation and comparisons are based on four measures, namely the true positive fraction (TPF), false positive fraction (FPF), specificity (TNF), and predictive value (PV). The accuracy is calculated as the ratio of the number of correctly classified pixels to the total number of pixels in the image [27]. The TPF, or sensitivity, represents the fraction of pixels correctly classified as vessel pixels, whereas the FPF is the fraction of pixels erroneously classified as vessel pixels. Conversely, the fraction of pixels correctly identified as non-vessel pixels is known as the TNF. The PV is the probability that a pixel classified as a vessel pixel really is one. For detailed mathematical computations, the reader may refer to [18–20, 29]. Figure 12 shows the obtained TPF, FPF, TNF, PV, and accuracy using the proposed method, while Table 1 compares the results obtained using the proposed algorithm with those of other well-known algorithms [13, 16, 25, 30, 31]. In addition, Fig. 13 presents the original images and the segmented blood vessel images obtained using the proposed technique in comparison with the gold standard segmentation results.
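The four measures plus accuracy can be computed from the pixel-wise confusion counts (illustrative Python; `seg` and `gold` are binary images of equal size, and the function name is ours):

```python
# Pixel-wise evaluation sketch: TPF (sensitivity), FPF, TNF (specificity),
# PV (predictive value), and accuracy of a segmentation against its gold
# standard, both given as binary images.

def evaluate(seg, gold):
    tp = fp = tn = fn = 0
    for srow, grow in zip(seg, gold):
        for s, g in zip(srow, grow):
            if s and g:
                tp += 1          # vessel pixel correctly detected
            elif s and not g:
                fp += 1          # background wrongly marked as vessel
            elif not s and g:
                fn += 1          # vessel pixel missed
            else:
                tn += 1          # background correctly rejected
    return {
        "TPF": tp / (tp + fn),              # sensitivity
        "FPF": fp / (fp + tn),
        "TNF": tn / (fp + tn),              # specificity
        "PV":  tp / (tp + fp),              # predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```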

Fig. 12. Obtained TPF, FPF, TNF, PV, and accuracy

Table 1. Comparison between the existing blood vessel segmentation algorithms and the proposed technique based on the DRIVE database

Method                          Sensitivity (TPF)   FPF      PV     Specificity (TNF)   Accuracy
Second human observer [28]      0.7761              0.0275   –      –                   0.9473
Mendonca (grayscale) [30]       0.7315              0.0219   –      –                   0.9463
Mendonca (green channel) [30]   0.7344              0.0236   –      –                   0.9452
Staal [13, 28]                  0.7194              0.0227   –      –                   0.9442
Niemeijer [28, 31]              0.6898              0.0304   –      –                   0.9417
RGB-Q [16]                      0.7704              0.0693   –      –                   –
G-Q [16]                        0.7500              0.0732   –      –                   –
H-maxima [25]                   0.8431              0.0283   –      –                   0.9653
Proposed algorithm              0.9899              0.0371   0.98   0.86                0.9731

Fig. 13. Original and segmented image: a original; b segmented (proposed); c gold standard

Conclusion

In this paper, a new technique based on edge enhancement and object classification is presented for automatic extraction of blood vessels from fundus images. The performance of the proposed approach is evaluated on DRIVE database images. The results obtained from this study show that the proposed algorithm is a powerful technique compared to the other well-known methods listed in Table 1, achieving a TPF of 99 %, TNF of 86 %, and PV of 98 %. The proposed technique clearly outperforms these methods, and the segmented image obtained using the proposed method is very close (matching accuracy of about 97 %) to the gold standard image, which is a significant contribution of this study.

However, one of the fundamental problems in the field of biomedical image analysis is the shortage of accurate and efficient computer-aided diagnostic tools to assist the fundus image extraction and evaluation process. From the medical viewpoint, the proposed method can support ophthalmologists in assessing or analyzing fundus images. Hence, the vessels segmented by the proposed method can be applied in a clinical setting of computer-assisted diagnosis. It should be pointed out that the proposed technique will not replace physicians or ophthalmologists; rather, it will improve their working efficiency by reducing time, workload, and unavoidable human errors. In the future, the proposed approach can be applied for image registration purposes, to track changes in retinal images for monitoring DR.

References

1. Rawi MA, Qutaishat M, Arrar M. An improved matched filter for blood vessel detection of digital retinal images. Computers in Biology and Medicine. 2007;37(2):262–267. doi: 10.1016/j.compbiomed.2006.03.003
2. Klein R, Meuer SM, Moss SE, Klein BE. Retinal microaneurysm counts and 10-year progression of diabetic retinopathy. Arch. Ophthalmol. 1995;113:1386–1391. doi: 10.1001/archopht.1995.01100110046024
3. Massin P, Erginay A, Gaudric A. Rétinopathie Diabétique. New York: Elsevier; 2000.
4. Marin D, Aquino A, Gegundez-Arias ME, Bravo JM. A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features. IEEE Transactions on Medical Imaging. 2011;30(1):146–158. doi: 10.1109/TMI.2010.2064333
5. Higgins WE, Spyra WJT, Ritman EL, Kim Y, Spelman FA. Automatic extraction of the arterial tree from 3-D angiograms. IEEE Conf. Eng. in Medicine and Bio. 1989;2:563–564.
6. Niki N, Kawata Y, Sato H, Kumazaki T. 3D imaging of blood vessels using X-ray rotational angiographic system. IEEE Med. Imaging Conf. 1993;3:1873–1877.
7. Molina C, Prause G, Radeva P, Sonka M. 3-D catheter path reconstruction from biplane angiograms. SPIE. 1998;3338:504–512. doi: 10.1117/12.310929
8. Klein A, Egglin TK, Pollak JS, Lee F, Amini A. Identifying vascular features with orientation specific filters and B-spline snakes. IEEE Computers in Cardiology. 1994:113–116.
9. Klein AK, Lee F, Amini AA. Quantitative coronary angiography with deformable spline models. IEEE Trans. on Med. Img. 1997;16:468–482. doi: 10.1109/42.640737
10. Guo D, Richardson P. Automatic vessel extraction from angiogram images. IEEE Computers in Cardiology. 1998;25:441–444.
11. Sato Y, Nakajima S, Shiraga N, Atsumi H, Yoshida S, Koller T, Gerig G, Kikinis R. 3D multi-scale line filter for segmentation and visualization of curvilinear structures in medical images. Medical Image Analysis. 1998;2:143–168. doi: 10.1016/S1361-8415(98)80009-1
12. Jiang X, Mojon D. Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images. IEEE Trans. Pattern Anal. Mach. Intell. 2003;25(1):131–137. doi: 10.1109/TPAMI.2003.1159954
13. Staal J, Abramoff MD, Niemeijer M, Viergever MA, van Ginneken B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imag. 2004;23(4):501–509. doi: 10.1109/TMI.2004.825627
14. Zana F, Klein J-C. Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation. IEEE Trans. Image Process. 2001;10(7):1010–1019. doi: 10.1109/83.931095
15. Hoover A, Kouznetsova V, Goldbaum M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imag. 2000;19(3):203–210. doi: 10.1109/42.845178
16. Reza AW, Eswaran C, Hati S. Diabetic retinopathy: a quadtree based blood vessel detection algorithm using RGB components in fundus images. Journal of Medical Systems. 2008;32:147–155. doi: 10.1007/s10916-007-9117-5
17. Li H, Chutatape O. Fundus image feature extraction. Proceedings of the 22nd Annual EMBS International Conference, Chicago, IL, pp. 3071–3073, July 2000.
18. Reza AW, Eswaran C. A decision support system for automatic screening of non-proliferative diabetic retinopathy. Journal of Medical Systems. 2011;35:17–24. doi: 10.1007/s10916-009-9337-y
19. Reza AW, Eswaran C, Hati S. Automatic tracing of optic disc and exudates from color fundus images using fixed and variable thresholds. Journal of Medical Systems. 2009;33:73–80. doi: 10.1007/s10916-008-9166-4
20. Reza AW, Eswaran C, Dimyati K. Diagnosis of diabetic retinopathy: automatic extraction of optic disc and exudates from retinal images using marker-controlled watershed transformation. Journal of Medical Systems. 2011;35:1491–1501. doi: 10.1007/s10916-009-9426-y
21. Ma Y, Zhan K, Wang Z. Image enhancement. In: Applications of Pulse-Coupled Neural Networks, pp. 61–82, 2011.
22. Gonzalez RC, Woods RE, Eddins SL. Digital Image Processing Using MATLAB. Upper Saddle River: Prentice Hall; 2004.
23. Haller EA. Adaptive histogram equalization in GIS. Annals of the University of Craiova—Mathematics and Computer Science Series. 2011;38(1):100–104.
24. Pratt WK. Digital Image Processing: PIKS Inside. 3rd ed. New York: Wiley; 2002.
25. Saleh MD, Eswaran C. An efficient algorithm for retinal blood vessel segmentation using h-maxima transform and multilevel thresholding. Computer Methods in Biomechanics and Biomedical Engineering. 2012;15(5):517–525. doi: 10.1080/10255842.2010.545949
26. Jain AK. Fundamentals of Digital Image Processing. Prentice-Hall; 1989. p. 384.
27. Haralick R, Shapiro L. Computer and Robot Vision. Vol. 1, Chap. 5, Addison-Wesley; 1992.
28. Niemeijer M, van Ginneken B. 2002 [Online]. Available: http://www.isi.uu.nl/Research/Databases/DRIVE/
29. Saleh MD, Eswaran C, Mueen A. An automated blood vessel segmentation algorithm using histogram equalization and automatic threshold selection. Journal of Digital Imaging. 2011;24(4):564–572. doi: 10.1007/s10278-010-9302-9
30. Mendonca AM, Campilho A. Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction. IEEE Trans. Med. Imag. 2006;25(9):1200–1213. doi: 10.1109/TMI.2006.879955
31. Niemeijer M, Staal J, van Ginneken B, Loog M, Abramoff MD. Comparative study of retinal vessel segmentation methods on a new publicly available database. Proc. SPIE Med. Imag. 2004:648–656.
