Computational and Mathematical Methods in Medicine
2017 Apr 23;2017:9854825. doi: 10.1155/2017/9854825

Automated Detection of Red Lesions Using Superpixel Multichannel Multifeature

Wei Zhou 1,2, Chengdong Wu 1,2, Dali Chen 1, Zhenzhu Wang 1, Yugen Yi 3,*, Wenyou Du 1
PMCID: PMC5420439  PMID: 28512511

Abstract

Red lesions are among the earliest lesions in diabetic retinopathy (DR), and their automatic detection plays a critical role in DR diagnosis. In this paper, a novel superpixel Multichannel Multifeature (MCMF) classification approach is proposed for red lesion detection. Firstly, a new candidate extraction method based on superpixels is proposed. Then, these candidates are characterized by multichannel features as well as a contextual feature. Next, a Fisher Discriminant Analysis (FDA) classifier is introduced to identify the red lesions among the candidates. Finally, a postprocessing technique based on multiscale blood vessel detection is applied to remove nonlesions that appear red. Experiments on the publicly available DiaretDB1 database verify the effectiveness of the proposed method.

1. Introduction

Diabetic retinopathy (DR) is one of the most serious complications of diabetes, and the majority of people suffering from diabetes mellitus for more than ten years will eventually develop DR [1]. Moreover, as the disease progresses, it causes vision loss [2]. Regular follow-up has been shown to help patients delay the progression of visual loss and blindness. Digital color fundus photography is low-cost and patient-friendly, which makes it a prerequisite for large-scale screening [3]. However, in DR screening programs, the limited number of specialists cannot keep up with the rapidly increasing number of DR patients. Under these circumstances, developing an automatic detection technique based on fundus images becomes essential and urgent [4].

There are several different components in retinal images (see Figure 1), such as blood vessels, the fovea, the macula, and the optic disc. Clinically, ophthalmologists classify DR into two primary phases, namely, nonproliferative DR (NPDR) and proliferative DR (PDR) [5]. NPDR can be regarded as the initial phase of DR. During this phase, the blood vessels become thin and leak fluid onto the retina [5]. Several lesions such as red lesions (microaneurysms and hemorrhages; see Figure 1), yellowish or bright spots (hard and soft exudates; see Figure 1), and intraretinal microvascular abnormalities (IRMA) [6] can appear during the NPDR phase. The second phase of DR is PDR, in which the retina cannot obtain enough oxygen, causing new blood vessels to grow in different regions of the retina to maintain an adequate oxygen supply. These new blood vessels are prone to leakage, which may lead to vision loss. In this paper, we mainly focus on the detection of red lesions, comprising microaneurysms and hemorrhages, which are the earliest lesions in DR and are more difficult to detect than other kinds of lesions.

Figure 1. A retinal image with different types of lesions and main anatomical features.

Numerous approaches have been proposed for red lesion detection. Among them, the earliest work on MA detection was by Baudoin et al. [7], who used a mathematical morphology approach to detect microaneurysms in fluorescein angiography images of the fundus. After that, two variants of morphological top-hat transformation methods for detecting MAs within fluorescein angiograms were developed by Spencer et al. [8] and Frame et al. [9]. Although fluorescein angiograms can improve the contrast between retinal structures and the background, the use of intravenous contrast agents is not suitable for everyone, especially pregnant women [10]. Therefore, fluorescein angiograms cannot be widely used in public DR screening programs, and the adoption of digital color fundus photographs is a better choice for screening purposes.

A number of filtering-based algorithms have been proposed for red lesion detection in digital color fundus photographs. Quellec et al. [11] adopted a template matching method based on subbands of wavelet-transformed images for microaneurysm detection, which can cope with uneven illumination and high-frequency noise. Besides, Zhang et al. [12] employed Multiscale Gaussian Correlation Coefficients (MSCF), computing the correlation coefficient between the grayscale distribution of microaneurysms and a Gaussian function to detect MAs. However, MAs and hemorrhages vary widely in appearance, for example, in shape and size. Therefore, designing a suitable template to match them becomes a vital problem.

Apart from the above detection approaches, several mathematical morphology based approaches have been proposed for red lesion detection. Júnior and Welfer [13] designed a five-stage approach using mathematical morphology to detect microaneurysms and hemorrhages, obtaining a sensitivity of 87.69% and a specificity of 92.44%. Ravishankar et al. [14] put forward an automatic feature extraction method for multiple-lesion detection, in which geometrical relationships between different features and lesions are used along with simple morphological operations. Jaafar et al. [15] suggested a combination of mathematical morphology and rule-based classification for red lesion detection, which achieved a sensitivity of 86.20% and a specificity of 98.80%. However, these mathematical morphology based approaches depend heavily on the choice of structuring elements, and changing their size and shape may weaken performance. To address this limitation, pixel classification based methods have been proposed for red lesion detection in color fundus photographs [16–20]. Niemeijer et al. [16] incorporated the morphological top-hat transform and a kNN classifier into a unified framework for automatic red lesion detection. After that, Sánchez et al. [17] employed a Gaussian mixture model and logistic regression classification for red lesion detection. Furthermore, Zhang et al. [18] designed a microaneurysm detection approach that integrates dictionary learning (DL) with Sparse Representation Classification (SRC); microaneurysms are identified by the calculated reconstruction error. Considering the importance of multiple features, Zhou et al. [19] presented an approach that combines multiple features with dictionary learning for microaneurysm detection, using the total reconstruction error over all features for classification.
Besides, a microaneurysm detection approach using Sparse PCA and the T² statistic was developed in [20]. Since this approach only needs to learn from one class of data, the class imbalance problem can be addressed.

All the above detection approaches regard image pixels as the basic unit for distinguishing red lesions from nonlesions. However, image pixels are a consequence of the discrete representation of images, not natural entities. Compared with pixel-based image representation, superpixel-based representation is more consistent with human visual cognition and contains less redundancy [21–23]. The reason is that a superpixel groups pixels into perceptually meaningful atomic regions with similar color and texture and adheres well to image boundaries. Therefore, it not only provides a convenient way to compute image features but also greatly reduces the complexity of subsequent image processing tasks. However, how to detect the true red lesions from a set of superpixel segmentation results remains a problem.

To overcome these issues, we propose a novel red lesion detection approach based on superpixel Multichannel Multifeature (MCMF) classification. The two main contributions are as follows: on the one hand, a novel candidate extraction scheme based on superpixel segmentation is given, which is more consistent with human visual cognition and contains less redundancy, improving the efficiency and accuracy of subsequent image processing. On the other hand, extensive features extracted from multichannel images are introduced into our feature extraction process, which improves red lesion detection performance.

Our proposed approach consists of five phases. First, preprocessing is used to make red lesions more visible. Next, candidates are extracted by applying superpixel segmentation to the digital color fundus photographs. These candidates are then characterized using not only intensity features from multichannel images (here, "multichannel" means a series of images produced by different operations on the original image) but also a contextual feature. Fisher Discriminant Analysis (FDA) [24] is then used to classify the candidates. Finally, a postprocessing technique is applied to distinguish red lesions from nonlesions that appear red, such as the fovea and blood vessels. Experiments on the DiaretDB1 database [25] demonstrate the effectiveness of the proposed method.

The remainder of the paper is organized as follows. Section 2 describes the proposed MCMF red lesion detection algorithm. The proposed approach is verified in experiments and the corresponding experimental results are reported in Section 3. We end with the conclusion in Section 4.

2. The Proposed Method

In this section, we mainly introduce the proposed MCMF method for the detection of red lesions. Figure 2 depicts a flowchart for our proposed approach, which consists of the following five stages: preprocessing, candidate extraction, feature extraction, classification, and postprocessing. Each stage will be described in detail in the following subsections.

Figure 2. A system description of the proposed approach.

2.1. Preprocessing

Uneven luminosity, poor contrast, and noise frequently occur in retinal fundus images [11] and seriously affect the diagnostic process of DR and the automatic detection of lesions, especially red lesions. To address these problems and produce an image suitable for red lesion detection, the contrast limited adaptive histogram equalization (CLAHE) [26] method is applied to make hidden features more visible. Besides, a Gaussian smoothing filter with a width of 5 and a standard deviation of 1 is applied to further reduce the effect of noise. Two examples illustrating the improvement in color saturation and in contrast between lesions and background are shown in Figure 3.

Figure 3. From left to right: original RGB color retinal images (a) and (c); enhanced images (b) and (d).
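As a rough illustration of this stage, the CLAHE-plus-Gaussian pipeline can be sketched with scikit-image. The function name and the CLAHE clip limit are our own assumptions; the paper reports only the Gaussian width of 5 and standard deviation of 1:

```python
import numpy as np
from skimage import exposure, filters

def preprocess_green(rgb):
    """CLAHE on the green channel, then Gaussian smoothing (sigma = 1)."""
    green = rgb[..., 1].astype(float) / 255.0
    # clip_limit is an assumed value; the paper does not report CLAHE parameters
    enhanced = exposure.equalize_adapthist(green, clip_limit=0.01)
    # truncate=2.0 with sigma=1.0 yields a 5x5 kernel, matching the stated width of 5
    return filters.gaussian(enhanced, sigma=1.0, truncate=2.0)
```

The green channel is chosen because red lesions show the highest contrast there, consistent with the channels used in Section 2.3.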

2.2. Candidate Extraction

Given a preprocessed image I, SLIC first transforms I into the CIELAB color space. Assume there are P pixels in the image and the number of initialized cluster centers is k; the grid interval S is defined as S = √(P/k). Secondly, the distance between pixels and the cluster centers is calculated within each 2S × 2S region. Here, SLIC combines color and pixel position into the cluster centers [l, a, b, x, y]^T to compute the distance, where [l, a, b]^T comprises the three CIELAB color components and [x, y]^T is the pixel position. The distance of pixel i to the kth cluster center is defined as follows.

D_{ik} = \sqrt{d_{lab}^{2} + \left(\frac{m}{S}\right)^{2} d_{xy}^{2}}, \quad d_{lab} = \sqrt{(l_i - l_k)^2 + (a_i - a_k)^2 + (b_i - b_k)^2}, \quad d_{xy} = \sqrt{(x_i - x_k)^2 + (y_i - y_k)^2}, (1)

where m is the weight factor, whose range varies from 1 to 40 [21], and (l_i, a_i, b_i) are the color values of pixel i at position (x_i, y_i) of image I.

The region size determines the size of the superpixel segmentation and plays a vital role in SLIC. For larger region sizes, spatial distances outweigh color proximity, so the obtained superpixels do not adhere well to image boundaries; for smaller region sizes, the converse is true [21]. Considering that red lesions vary in size, we choose a suitable size as our segmentation standard, which is discussed in our experiments. Figure 4 shows superpixel segmentation results for different region sizes.

Figure 4. Retinal image segmentation using SLIC. (a) Original retinal fundus image; (b) detail from part of (a); (c)–(e) superpixel segmentation with region sizes 10, 30, and 50 pixels, respectively.

2.3. Feature Extraction

For red lesion detection, some specific properties need to be considered. Firstly, some red lesions lie very near the blood vessels, making it hard to distinguish them in the traditional RGB color space alone. Moreover, the appearance of red lesions is similar to that of normal structures of the retina, such as the blood vessels and the fovea, causing many false positives (FP). Finally, the shape and size of red lesions, especially large hemorrhages, vary widely. Based on these facts, we adopt the strategy of extracting different features from multichannel images for each candidate. The chosen multichannel images are listed as follows.

Channel_1-Channel_2. The green channel image IG of the original image Ioriginal and the enhanced green channel image Igreen obtained after preprocessing (see Figures 5(a) and 5(b)).

Figure 5. Multichannel images. (a) Original green channel image IG; (b)–(g) enhanced green channel image Igreen, enhanced low-intensity structure image Idark_enhanced, Ilesions, close operation image Iclose, enhanced hue image IHue_enhanced, and M component image IM of CMYK color space, respectively.

Channel_3. Alternating Sequential Filtering (ASF), comprising a set of morphological closing γ and opening ϑ operations with disc-shaped structuring elements K of different sizes (10, 20, and 40), is used to estimate the background of the preprocessed image according to (2). The preprocessed image is then subtracted from fASF to obtain the result image Idark_enhanced according to (3). A typical result of this operation is illustrated in Figure 5(c).

f_{ASF} = \vartheta_{nK}\gamma_{nK} \cdots \vartheta_{2K}\gamma_{2K}\,\vartheta_{K}\gamma_{K}(I_{green}), (2)
I_{dark\_enhanced} = f_{ASF} - I_{green}, (3)

Channel_4. Median filtering of Igreen with a 25 × 25 pixel kernel is used to estimate the background image Ibg, which is then subtracted from Igreen to obtain the shade-corrected image Isc. A series of morphological openings using 12 line structuring elements of length 9 pixels, at angles ranging from 15 to 165 degrees in increments of 15 degrees, is applied to Isc to locate blood vessels. By taking the maximum pixel value at each position over all 12 images, the blood vessels are obtained. These blood vessels are then subtracted from Isc to form Ilesions, which mainly contains small red lesions. The result Ilesions is depicted in Figure 5(d).

Channel_5. A morphological close operation [27] with a disc-shaped structuring element B of radius 10 pixels is applied to Igreen to eliminate small or thin dark objects; the result image Iclose is illustrated in Figure 5(e).

Channel_6. IHue_enhanced is obtained by extracting the hue channel of the preprocessed image (see Figure 5(f)).

Channel_7. IM is the M component image of the preprocessed image in CMYK color space. Dark structures such as red lesions, blood vessels, and the fovea appear white in IM; conversely, bright structures such as exudates and the optic disc appear black (see Figure 5(g)).
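To make the channel definitions concrete, here is a loose sketch of a subset of them (Channels 1, 3, 5, 6, and 7) in scikit-image. The disc radii default to the 10/20/40 stated in the text but are parameterized for testing, and the CMYK conversion is a standard formula the paper does not spell out:

```python
import numpy as np
from skimage import color, morphology

def multichannel_images(rgb, i_green, radii=(10, 20, 40)):
    """Build a subset of the Section 2.3 multichannel images (a sketch)."""
    chans = {"IG": rgb[..., 1].astype(float) / 255.0, "Igreen": i_green}
    # Channel_3: alternating sequential filter (closing then opening per scale),
    # then subtract the preprocessed image to enhance dark structures
    f = i_green
    for r in radii:
        se = morphology.disk(r)
        f = morphology.opening(morphology.closing(f, se), se)
    chans["Idark_enhanced"] = f - i_green
    # Channel_5: closing with a disc removes small or thin dark objects
    chans["Iclose"] = morphology.closing(i_green, morphology.disk(radii[0]))
    # Channel_6: hue channel of the RGB image
    chans["IHue"] = color.rgb2hsv(rgb)[..., 0]
    # Channel_7: M component of a naive RGB-to-CMYK conversion
    rgbf = rgb.astype(float) / 255.0
    k = 1.0 - rgbf.max(axis=-1)
    chans["IM"] = np.where(k < 1.0, (1.0 - rgbf[..., 1] - k) / (1.0 - k), 0.0)
    return chans
```

Channels 2 and 4 (the CLAHE-enhanced green image and the vessel-removed Ilesions) are omitted here for brevity; they follow the same morphology-based pattern.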

For each of the multichannel images described above, four statistical features, namely, the maximum, minimum, mean, and median, are extracted from each candidate. Besides, the overall average intensity and standard deviation of each preprocessed retinal image are also included.

Since red lesions appear as darker regions with brighter surroundings, we also develop a novel and effective feature that distinguishes a candidate from its surroundings and background using the mean intensity and the barycenter distance between neighboring candidates. Let av_Gi and pi denote the mean intensity in the original green channel and the barycenter position of the ith candidate (Ri, i = 1, 2,…, N), respectively. The proposed contextual feature is defined as follows:

S_i = \frac{1}{N_i} \sum_{j \in N(i)} \frac{av\_G_i}{av\_I_{green}} \cdot \frac{d_1}{d_2}, \quad
d_1 = \begin{cases} \left\| av\_G_i - av\_G_j \right\|_2^2, & av\_G_i \le av\_G_j \\ -\left\| av\_G_i - av\_G_j \right\|_2^2, & av\_G_i > av\_G_j \end{cases}, \quad
d_2 = \left\| p_i - p_j \right\|_2^2 = \left\| \frac{\sum_{Im \in R_i} ImP}{\left| R_i \right|} - \frac{\sum_{Im \in R_j} ImP}{\left| R_j \right|} \right\|_2^2, \quad j \in N(i), (4)

where ‖·‖₂² denotes the squared l2-norm, N(i) represents the set of neighbors of candidate i, and Ni is the total number of pixels in candidate i. ImP is the barycenter position vector, constituted by the position vectors of the pixels Im. d1 is the mean intensity difference and d2 is the barycenter position difference between candidate i and its neighbor candidate j. Here, the neighborhood is defined empirically as 7 times the region size.

Basically speaking, our proposed contextual feature calculates the mean gray value av_Gi of each candidate with the aim of making the feature more stable and precise for red lesions. The global mean of the image, av_Igreen, is introduced to eliminate the influence of varying retinal pigmentation and different image acquisition processes. If the value of d1 is positive and large, the current candidate is more likely to be a true red lesion, and conversely otherwise. Besides, the value of Si is also affected by the barycenter distance d2 between candidate i and candidate j. From (4), it is clear that a larger Si indicates that the current candidate more likely belongs to the red lesions, and vice versa.
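A direct reading of (4) can be turned into a small numerical sketch. The inputs (per-candidate mean green intensity, barycenters, and pixel counts) are assumed to come from the superpixel labeling; the neighbor test uses the squared barycenter distance for simplicity:

```python
import numpy as np

def contextual_feature(mean_g, centroids, sizes, global_mean, neighbor_radius):
    """Numerical sketch of the contextual feature S_i in Eq. (4).
    mean_g: per-candidate mean green intensity; centroids: barycenters (N x 2);
    sizes: pixel counts N_i. All inputs assumed precomputed from the labeling."""
    mean_g = np.asarray(mean_g, float)
    centroids = np.asarray(centroids, float)
    S = np.zeros(len(mean_g))
    for i in range(len(mean_g)):
        d2_all = np.sum((centroids - centroids[i]) ** 2, axis=1)
        nbrs = np.where((d2_all > 0) & (d2_all <= neighbor_radius ** 2))[0]
        acc = 0.0
        for j in nbrs:
            d1 = (mean_g[i] - mean_g[j]) ** 2   # squared intensity gap
            if mean_g[i] > mean_g[j]:
                d1 = -d1                        # brighter than neighbor: penalize
            acc += (mean_g[i] / global_mean) * d1 / d2_all[j]
        S[i] = acc / sizes[i]
    return S
```

A candidate darker than its neighbors accumulates positive d1 terms and hence a positive S_i, matching the interpretation above.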

In summary, thirty-one features are computed for each candidate, represented as a vector in a 31-dimensional feature set F = {f1, f2,…, f31}. Since the features fi vary in value and range, we normalize each feature over all candidates to zero mean and unit variance using the following:

\hat{f}_i = \frac{f_i - \mu_i}{\sigma_i}, (5)

where μi is the mean of the ith feature and σi is its standard deviation.
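The normalization in (5) is a column-wise z-score over the candidate feature matrix; a minimal sketch, with a guard against constant features that we add ourselves:

```python
import numpy as np

def zscore(F):
    """Normalize each feature column of the candidate matrix to zero mean,
    unit variance (Eq. (5))."""
    mu = F.mean(axis=0)
    sigma = F.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard: a constant feature would divide by zero
    return (F - mu) / sigma
```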

2.4. Classification

In this section, Fisher Discriminant Analysis (FDA) [24] is applied to classify the red lesion candidates. The basic idea of FDA is to seek a transformation matrix that maximizes the between-class scatter while simultaneously minimizing the within-class scatter. Assume a labeled candidate dataset X = {X1, X2}, where X1 = {x1, x2,…, xn1} and X2 = {x1, x2,…, xn2}; the matrix Xk (k ∈ {1, 2}, 1 = red, 2 = nonred) lies in R^{d×nk}, where d denotes the feature dimension and nk is the number of samples in the kth class (n = n1 + n2).

Let Sb and Sw be the between-class scatter matrix and within-class scatter matrix:

S_b = \sum_{k=1}^{2} n_k (\mu_k - \mu)(\mu_k - \mu)^T, \quad S_w = \sum_{k=1}^{2} \sum_{x_i \in C_k} (x_i - \mu_k)(x_i - \mu_k)^T, (6)

where μk = (1/nk)∑_{xi∈Ck} xi is the mean vector of the kth class and μ = (1/n)∑_{i=1}^{n} xi is the mean vector of the whole candidate dataset.

The FDA transformation matrix W can be found by maximizing the following objective:

W = \arg\max_{w} \frac{w^T S_b w}{w^T S_w w}. (7)

The above optimization problem can be regarded as the generalized eigenvalue problem below [28]:

S_b \varphi = \lambda S_w \varphi, (8)

where λ is the generalized eigenvalue and φ is the corresponding eigenvector, which forms one of the columns of the FDA transformation matrix W.

Since the projected class means W^Tμ1 and W^Tμ2 are well separated [24], we can choose the average of the two projected means as the classification threshold. The threshold parameter c is then:

c = \frac{W^T (\mu_1 + \mu_2)}{2}. (9)

Given a new sample x, it is assigned to class 1 if W^T x > c; otherwise it is assigned to class 2.
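For two classes, maximizing (7) has the well-known closed form w ∝ S_w⁻¹(μ₁ − μ₂), and (9) gives the decision threshold. A minimal sketch follows; the small ridge term added to S_w is our own regularization for a possibly singular scatter matrix:

```python
import numpy as np

def fisher_train(X1, X2):
    """Two-class FDA: w = Sw^{-1}(mu1 - mu2), threshold c = w^T (mu1 + mu2) / 2."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter: sum of the two per-class scatter matrices (Eq. (6))
    Sw = (X1 - mu1).T @ (X1 - mu1) + (X2 - mu2).T @ (X2 - mu2)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu1 - mu2)
    c = w @ (mu1 + mu2) / 2.0  # Eq. (9)
    return w, c

def fisher_predict(X, w, c):
    """Class 1 (red lesion) if w^T x > c, else class 2 (nonlesion)."""
    return np.where(X @ w > c, 1, 2)
```

Since w^T(μ₁ − μ₂) = (μ₁ − μ₂)^T S_w⁻¹(μ₁ − μ₂) > 0, the class-1 mean always projects above the threshold and the class-2 mean below it.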

2.5. Postprocessing

After the classification stage, some nonlesions that appear red, such as blood vessels and the fovea, may still be erroneously detected as red lesions. To avoid this problem, a postprocessing stage is incorporated into our approach to remove them and improve the robustness of the proposed method.

2.5.1. Multiscale Blood Vessels Detection

Since red lesions and blood vessels have a similar appearance, it is hard to distinguish them effectively. Moreover, red lesions cannot occur on the blood vessels [12]. To remove possible nonlesions caused by blood vessels, a multiscale morphological blood vessel extraction method (MSM) based on [13] is adopted. Here, the multiscale consists of four disc-shaped structuring elements of different sizes, scale = [2, 3, 4, 5], following [29]. For each scale, we conduct Steps 1 to 5 (scale_num is the number of disc structuring elements; scale_num = 4) to extract a blood vessel map, and the final blood vessel map is obtained by fusing the segmentation maps from all scales. The details are as follows.

Step 1. Morphological opening ϑ and closing γ operations with structuring element K = scale(i) (where i denotes the ith iteration, i = 1, 2, 3, 4; e.g., scale(1) = 2 and scale(4) = 5) are used to estimate the background of the preprocessed image I_CLAHE according to (10), giving the result f_1.

f_1 = \gamma_K \vartheta_K (I_{CLAHE}). (10)
Step 2. The high-intensity structures are eliminated by subtracting the CLAHE result image from f_1 (see Figure 6(a)).

f_2 = f_1 - I_{CLAHE}. (11)
Figure 6. A series of morphological operations used for blood vessel detection. (a) f_2, reduction of high-intensity structures; (b) f_3, sum of morphological openings; (c) f_4, morphological reconstruction by dilation; (d) f_5^{scale}, regional minimum and close operation (e.g., with scale(4) = 5).

Step 3. Morphological opening with varying structuring elements is applied to f_2 to locate blood vessels. Here, we use linear structuring elements ψ at 12 different angles, ranging from 15 to 165 degrees in increments of 15 degrees. By accumulating the pixel values at each position over all 12 images, we obtain the image f_3 according to the following (see Figure 6(b)).

f_3 = \psi_1(f_2) + \psi_2(f_2) + \cdots + \psi_{12}(f_2). (12)
Step 4. According to (13), morphological reconstruction by dilation, denoted by R, is used to further refine the detected blood vessels (see Figure 6(c)).

f_4 = R(f_3). (13)
Step 5. A binary vessel structure map f_5^{scale} is obtained by combining the regional minimum operator RM with the close operation R_close according to the following (see Figure 6(d)).

f_5^{scale} = R_{close}(RM(f_4)). (14)

Repeat Steps 1 to 5 until the scale index reaches the maximum iteration number (scale_num).

Finally, we obtain the blood vessel map f5 by combining the segmentation maps from all scales with the logical OR operation ("|"); see Figures 7(a)–7(d), with the entire vessel network shown in Figure 7(e).

f_5 = f_5^{1} \,|\, f_5^{2} \,|\, \cdots \,|\, f_5^{scale\_num}. (15)
Figure 7. Blood vessel extraction results. (a)–(d) show the detection results with disc-shaped structuring elements of sizes 2, 3, 4, and 5, respectively; (e) the combination of (a)–(d); (a1)–(d1) are the differences between (e) and (a)–(d), respectively.
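The five steps above can be sketched loosely as follows. Two substitutions are our own: the line structuring elements are built by a small helper (scikit-image has no rotated-line footprint builder), and the regional-minimum step (14) is approximated by an h-maxima-style reconstruction plus a relative threshold, since the paper does not detail the binarization:

```python
import numpy as np
from skimage import morphology

def line_se(length, angle_deg):
    """Linear structuring element of given length and orientation (our own helper)."""
    t = np.deg2rad(angle_deg)
    half = length // 2
    pts = [(int(round(k * np.sin(t))), int(round(k * np.cos(t))))
           for k in range(-half, half + 1)]
    rs, cs = [p[0] for p in pts], [p[1] for p in pts]
    se = np.zeros((max(rs) - min(rs) + 1, max(cs) - min(cs) + 1), dtype=bool)
    for r, c in pts:
        se[r - min(rs), c - min(cs)] = True
    return se

def vessel_map(i_clahe, scales=(2, 3, 4, 5), line_len=9, h=0.1):
    """Loose sketch of MSM Steps 1-5; binarization is an assumed substitute for (14)."""
    combined = np.zeros(i_clahe.shape, dtype=bool)
    for r in scales:
        se = morphology.disk(r)
        f1 = morphology.closing(morphology.opening(i_clahe, se), se)  # Step 1, (10)
        f2 = np.maximum(f1 - i_clahe, 0.0)                            # Step 2, (11)
        f3 = np.zeros_like(f2)
        for angle in range(0, 180, 15):  # 12 orientations at 15-degree spacing
            f3 += morphology.opening(f2, line_se(line_len, angle))    # Step 3, (12)
        seed = np.clip(f3 - h, 0.0, None)                             # Step 4, ~(13)
        f4 = morphology.reconstruction(seed, f3, method='dilation')
        binary = f4 > 0.2 * f4.max()                                  # Step 5, ~(14)
        combined |= morphology.binary_closing(binary, morphology.disk(1))
    return combined                                                   # OR fusion, (15)
```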

The main difference between our proposed MSM blood vessel extraction method and the single-scale blood vessel detection method [13] is whether multiple scales are taken into consideration. Figures 7(a)–7(d) show the blood vessel maps produced by [13], and Figures 7(a1)–7(d1) are the difference images obtained by subtracting the images in Figures 7(a)–7(d) from the MSM detection result in Figure 7(e). Comparing these results, we can see that our proposed approach achieves a more robust and complete blood vessel segmentation than the single-scale detection [13], which helps reduce the FP in red lesion detection.

2.5.2. Elimination of the Fovea

The fovea appears as a dark region located in the center of the macula. Since the fovea has a relatively low intensity profile and its appearance is often very similar to the background, it is hard to distinguish from true red lesions. To improve the accuracy of the proposed method, the removal of the fovea is indispensable. A method [16] that considers the spatial relationship between the diameter of the optic disc and the region of the fovea is adopted to remove the fovea. The final red lesion map is shown in Figure 8.

Figure 8. The final result of our proposed MCMF. (a) Original retinal image; (b) the corresponding ground truth; (c) final red lesion map; (d) red lesion map overlaid on the original retinal image.

Figure 9 depicts more examples of our proposed MCMF approach on retinal images containing red lesions. Figure 9(a) shows the original color retinal images; Figure 9(b) illustrates the detected red lesion maps. From these results, we can see that the proposed MCMF approach detects most of the red lesions regardless of their varying sizes and shapes.

Figure 9. Results of our proposed red lesion detection approach on abnormal retinal images. (a) Original retinal images; (b) the corresponding red lesion maps.

3. Experimental Results and Analysis

3.1. Database Description

In this section, we conduct extensive experiments to validate the effectiveness of our proposed red lesion detection method on the public DiaretDB1 retinal image database [25].

The DiaretDB1 database (Standard Diabetic Retinopathy Database, Calibration level 1, version 1) [25] is publicly available on the web. It contains 89 RGB color fundus images with a fixed resolution of 1500 × 1152 and a 50° field of view. Among the 89 images, 5 are healthy and the remaining 84 are abnormal; all are annotated by four clinical experts. In our experiments, the training set consists of 40 randomly selected retinal images, and the testing set comprises the remaining 49 images.

3.2. Assessment of Classification Performance

We use two evaluation criteria, sensitivity and specificity, to verify the effectiveness of our proposed red lesion detection method. These measures are calculated against the given red lesion ground truth as follows:

\text{sensitivity} = \frac{TP}{TP + FN}, \quad \text{specificity} = \frac{TN}{TN + FP}, (16)

where true positive (TP) is the number of red lesions that are correctly identified, false negative (FN) is the number of lesions incorrectly found as nonred lesions, false positive (FP) is the number of lesions incorrectly found as red lesions, and true negative (TN) is the number of nonred lesions that are correctly identified. These criteria are also used to evaluate the performance of different methods for the detection of red lesions.

In our experiments, we employ the Receiver Operating Characteristic (ROC) curve to evaluate the effectiveness of the proposed red lesion detection method. An ROC curve plots sensitivity on the vertical axis against (1 − specificity) on the horizontal axis; the upper left corner represents perfect classification. Besides, the area under the ROC curve (AUC) measures the algorithm's performance: a larger AUC indicates a better classifier.
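The two measures in (16) and a rank-based AUC estimate can be computed as follows; using the Mann-Whitney pair-counting formulation for AUC is our own choice of estimator:

```python
import numpy as np

def sensitivity_specificity(tp, fn, tn, fp):
    """Eq. (16): sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def auc_score(labels, scores):
    """Rank-based (Mann-Whitney) estimate of the area under the ROC curve.
    labels: 1 = red lesion, 0 = nonlesion; scores: classifier outputs."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # fraction of (positive, negative) pairs ranked correctly; ties count 1/2
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (pos.size * neg.size)
```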

3.3. Results

In this section, we carry out three experiments on the DiaretDB1 dataset to verify the effectiveness of the proposed method. We employ two kinds of evaluation criteria, image-based [25] and pixel-based [30]. The image-based criterion classifies an image as either "normal" or "abnormal" (i.e., detecting the absence or presence of red lesions anywhere in the image); that is, a retinal image is considered pathological if it presents one or more red lesions and normal otherwise [22]. In the pixel-based criterion, we adopt connected-component-level validation [28] by counting the number of pixels detected correctly. According to [28], a connected component is considered a true positive (TP) if it totally or partially overlaps the ground truth (i.e., a red lesion is counted as TP if the detected connected component overlaps at least 75% of the area manually annotated by the expert but less than 100%; all other cases are considered false detections). The TN, FP, and FN are calculated in the same manner. Based on these observations, algorithms validated with the image-based criterion typically achieve better performance than with the pixel-based criterion, since it is not necessary to detect all the red lesions in an image; see [30] for details. From the clinical point of view and for screening applications, it is more interesting to evaluate the results under the image-based criterion [31].

Our proposed method has one parameter, the region size (superpixel size), which impacts its performance; choosing a suitable size is therefore a critical problem in our experiments. Besides, for the same test sample, different classifiers may yield different classification results, so the choice of classifier is equally important.

In our first experiment, two supervised classifiers, FDA and kNN (k-nearest neighbors) [32], are selected as the underlying classifiers, and the better one is used in the following experiments. Here, we set c to 0.01 for FDA classification according to (9). Three region sizes (10, 30, and 50) are used as segmentation standards to select the optimal classifier under the image-based criterion. Firstly, training samples with the three region sizes are used to train the corresponding FDA and kNN classifiers; then the trained classifiers are used to classify test samples with the corresponding region sizes and to obtain classification scores. The experimental results are shown in Figure 10.

Figure 10. Sensitivity versus 1 − specificity curves with varied region sizes using two different classifiers.

According to Figure 10, the FDA classifier achieves better results than the kNN classifier for all region sizes. The reason is as follows: the kNN classifier compares the Euclidean distance of a test sample to its k labeled nearest neighbors over the whole feature space, and since the original feature space contains some irrelevant or redundant features, this causes misclassifications for kNN. Unlike kNN, FDA is a well-known linear technique for dimensionality reduction and feature extraction, which largely avoids this problem. FDA makes full use of the label information to find the optimal projection vectors by simultaneously maximizing the between-class scatter and minimizing the within-class scatter, and thus achieves better classification performance. Therefore, FDA is chosen as the classifier for the following experiments.

Besides, Figure 10 shows that when the region size is set to 10, our proposed algorithm achieves the best performance with both FDA and kNN. Yet we also notice that, for each classifier, the classification result is not sensitive to the choice of region size, as the ROC curves are quite similar.

In our second experiment, we find the optimal region size under the pixel-based criterion [30]. The ROC curves are shown in Figure 11.

Figure 11. ROC curves of the pixel-based criterion on the testing set with varying region sizes.

Judging from the above results, the performance for all region sizes under the pixel-based criterion is lower than under the image-based criterion. The reason is that the ground truth provided in the database is not very precise and is typically larger than the true extent of the red lesions, so comparing each obtained red lesion map with its imprecise ground truth lowers the measured performance. Moreover, smaller superpixel sizes give more accurate segmentation than larger ones, and when the region size is set to 10 our algorithm achieves the best performance (AUC = 0.74), as shown in Table 1. However, a smaller region size leads to higher computational cost: at region size 10, the average execution time per image is approximately 116.23 s, whereas at region size 50 it is about 10.37 s (Table 1). The average running time per image is measured in Matlab R2015a on a PC with an Intel Core i5 at 3.30 GHz with 16 GB RAM running Windows 7.

Table 1.

Average execution time per image and AUC with varying region sizes under the pixel-based criterion.

Region size    AUC     Average time (s)
10             0.74    116.23
30             0.73    38.05
50             0.70    10.37
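The pixel-based criterion behind Table 1 can be sketched as follows: every pixel of a lesion probability map is scored against the binary ground-truth mask, and the result is summarized by the area under the ROC curve. This is an illustrative reconstruction, not the evaluation code of [30]; scikit-learn is assumed and the maps are synthetic.

```python
# Sketch of the pixel-based criterion: pixel-level scores vs. the
# ground-truth lesion mask, summarized by ROC AUC. Synthetic data;
# scikit-learn is assumed.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
h, w = 64, 64
gt = np.zeros((h, w), dtype=bool)
gt[20:28, 30:40] = True                    # a small "red lesion" region

# A probability map that is higher (but noisy) inside the lesion,
# mimicking an imperfect detector output
prob = rng.uniform(0.0, 0.4, (h, w))
prob[gt] = rng.uniform(0.3, 1.0, gt.sum())

# Flatten both maps so every pixel becomes one sample for the ROC curve
auc = roc_auc_score(gt.ravel(), prob.ravel())
print(f"pixel-based AUC = {auc:.2f}")
```

The ground-truth imprecision discussed above enters exactly here: if `gt` is drawn larger than the true lesion, pixels just outside the lesion count as false negatives, dragging the AUC down.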

In addition, from a clinical point of view and for screening applications [31], the larger region size fully meets the requirements. Taking these considerations together, we set the region size (superpixel segmentation size) to 50 in the next experiment.

In the last experiment, we employ the image-based evaluation method proposed by Kauppi et al. [25] to verify the effectiveness of the proposed method against other state-of-the-art methods. Following [25], each image is assigned a score, where a high score indicates a high probability that a lesion is present in that image. From these scores, the sensitivity and specificity measures can be computed. Under the image-based comparison, the proposed approach achieves a sensitivity of 83.30% and a specificity of 97.30%. Table 2 lists comparison results of existing red lesion detection methods on the DiaretDB1 database.

Table 2.

Performance results on the DiaretDB1 dataset.

Method                      Sensitivity    Specificity
Proposed method             83.30%         97.30%
Sánchez et al. [17]         87.69%         92.44%
Ravishankar et al. [14]     95.10%         90.50%
Jaafar et al. [15]          98.80%         86.20%
Roychowdhury et al. [33]    75.50%         93.73%
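The sensitivity and specificity figures in Table 2 follow from thresholding the per-image scores of the image-based protocol [25]. A minimal sketch, with illustrative scores and labels rather than the paper's actual outputs:

```python
# Sketch of image-based sensitivity/specificity: one score per image,
# thresholded against ground-truth image labels. Illustrative data only.
import numpy as np

labels = np.array([1, 1, 1, 0, 0, 0, 1, 0])            # 1 = image contains red lesions
scores = np.array([0.9, 0.8, 0.4, 0.2, 0.1, 0.6, 0.7, 0.3])

threshold = 0.5
pred = scores >= threshold

tp = np.sum(pred & (labels == 1))     # lesion images correctly flagged
tn = np.sum(~pred & (labels == 0))    # healthy images correctly passed
fp = np.sum(pred & (labels == 0))
fn = np.sum(~pred & (labels == 1))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.2%}, specificity = {specificity:.2%}")
```

Sweeping `threshold` over all score values traces out the ROC curve, so each (sensitivity, specificity) pair in Table 2 corresponds to one operating point chosen by its authors.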

The comparison in Table 2 shows that the proposed method achieves a satisfactory and reliable result, with the highest specificity among the compared approaches. Two limitations should nevertheless be mentioned. First, as Figures 8 and 9 show, residues of blood vessels close to red lesions cannot be removed from the final red lesion maps, causing false positives. Second, very small red lesions in the retinal images may not be completely detected.

4. Conclusion

To summarize, we have put forward a novel red lesion detection method based on superpixel Multichannel Multifeature (MCMF) classification in color retinal images, which detects red lesions efficiently regardless of their variability in appearance and size. First, the whole image is segmented into a series of candidates using superpixel segmentation. Then, multiple features from the multichannel images, together with a contextual feature, are extracted to describe each candidate. Next, FDA is introduced to classify the red lesions among the candidates. Finally, a postprocessing technique is applied to distinguish red lesions from blood vessels and the fovea. Experimental results on the DiaretDB1 database demonstrate that the proposed method is effective for red lesion detection.

Since the proposed approach extracts a number of features for each superpixel, complex relationships exist among the extracted features, and other classifiers (e.g., a neural network or an Extreme Learning Machine) may yield better classification results; this will be investigated in future work. In addition, applying the proposed framework to the detection of other lesions is another interesting topic for future study.

Acknowledgments

This work is supported by National Natural Science Foundation of China (nos. 61471110 and 61602221), Foundation of Liaoning Educational Department (L2014090), and Fundamental Research Funds for the Central Universities (N140403005, N162610004, and N160404003).

Conflicts of Interest

All authors declare that the support for this research does not lead to any conflicts of interest regarding the publication of this paper.

References

  • 1.Kertes P. J., Johnson M. T. Evidence Based Eye Care. 1st. Philadelphia, Pa, USA: Lippincott Williams & Wilkins; 2007. [Google Scholar]
  • 2.Klein B. E., Moss S. E., Klein R., Surawicz T. S. The Wisconsin epidemiologic study of diabetic retinopathy. XIII. Relationship of serum cholesterol to retinopathy and hard exudates. Archives of Ophthalmology. 1991;102(4):520–526. doi: 10.1016/s0161-6420(91)32145-6. [DOI] [PubMed] [Google Scholar]
  • 3.Faust O., Rajendra A. U., Ng E. Y. K., Ng K.-H., Suri J. S. Algorithms for the automated detection of diabetic retinopathy using digital fundus images: a review. Journal of Medical Systems. 2012;36(1):145–157. doi: 10.1007/s10916-010-9454-7. [DOI] [PubMed] [Google Scholar]
  • 4.Abràmoff M. D., Niemeijer M., Suttorp-Schulten M. S. A., Viergever M. A., Russell S. R., Van Ginneken B. Evaluation of a system for automatic detection of diabetic retinopathy from color fundus photographs in a large population of patients with diabetes. Diabetes Care. 2008;31(2):193–198. doi: 10.2337/dc07-1312. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Crick R. P., Khaw P. T. A Textbook of Clinical Ophthalmology: A Practical Guide to Disorders of the Eyes and Their Management. Singapore: World Scientific; 1997. [DOI] [Google Scholar]
  • 6.Frank R. N. Diabetic retinopathy. The New England Journal of Medicine. 2004;350(1):48–58. doi: 10.1056/nejmra021678. [DOI] [PubMed] [Google Scholar]
  • 7.Baudoin C. E., Lay B. J., Klein J. C. Automatic detection of microaneurysms in diabetic fluorescein angiography. Revue d'Epidemiologie et de Sante Publique. 1984;32(3-4):254–261. [PubMed] [Google Scholar]
  • 8.Spencer T., Olson J. A., McHardy K. C., Sharp P. F., Forrester J. V. An image-processing strategy for the segmentation and quantification of microaneurysms in fluorescein angiograms of the ocular fundus. Computers and Biomedical Research. 1996;29(4):284–302. doi: 10.1006/cbmr.1996.0021. [DOI] [PubMed] [Google Scholar]
  • 9.Frame A. J., Undrill P. E., Cree M. J., et al. A comparison of computer based classification methods applied to the detection of microaneurysms in ophthalmic fluorescein angiograms. Computers in Biology and Medicine. 1998;28(3):225–238. doi: 10.1016/S0010-4825(98)00011-0. [DOI] [PubMed] [Google Scholar]
  • 10.Yannuzzi L. A., Rohrer K. T., Tindel L. J., et al. Fluorescein angiography complication survey. Ophthalmology. 1986;93(5):611–617. doi: 10.1016/S0161-6420(86)33697-2. [DOI] [PubMed] [Google Scholar]
  • 11.Quellec G., Lamard M., Josselin P. M., Cazuguel G., Cochener B., Roux C. Optimal wavelet transform for the detection of microaneurysms in retina photographs. IEEE Transactions on Medical Imaging. 2008;27(9):1230–1241. doi: 10.1109/tmi.2008.920619. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Zhang B., Wu X., You J., Li Q., Karray F. Detection of microaneurysms using multi-scale correlation coefficients. Pattern Recognition. 2010;43(6):2237–2248. doi: 10.1016/j.patcog.2009.12.017. [DOI] [Google Scholar]
  • 13.Júnior S. B., Welfer D. Automatic detection of microaneurysms and hemorrhages in color eye fundus images. International Journal of Computer Science and Information Technology. 2013;5(5):21–37. doi: 10.5121/ijcsit.2013.5502. [DOI] [Google Scholar]
  • 14.Ravishankar S., Jain A., Mittal A. Automated feature extraction for early detection of diabetic retinopathy in fundus images. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09); June 2009; pp. 210–217. [DOI] [Google Scholar]
  • 15.Jaafar H. F., Nandi A. K., Al-Nuaimy W. Automated detection of red lesions from digital colour fundus photographs. Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society; 2011; pp. 6232–6235. [DOI] [PubMed] [Google Scholar]
  • 16.Niemeijer M., Van Ginneken B., Staal J., Suttorp-Schulten M. S. A., Abràmoff M. D. Automatic detection of red lesions in digital color fundus photographs. IEEE Transactions on Medical Imaging. 2005;24(5):584–592. doi: 10.1109/TMI.2005.843738. [DOI] [PubMed] [Google Scholar]
  • 17.Sánchez C. I., Hornero R., Mayo A., García M. Mixture model-based clustering and logistic regression for automatic detection of microaneurysms in retinal images. Medical Imaging 2009: Computer-Aided Diagnosis; February 2009; Lake Buena Vista, Fla, USA. [DOI] [Google Scholar]
  • 18.Zhang B., Karray F., Li Q., Zhang L. Sparse representation classifier for microaneurysm detection and retinal blood vessel extraction. Information Sciences. 2012;200(1):78–90. doi: 10.1016/j.ins.2012.03.003. [DOI] [Google Scholar]
  • 19.Zhou W., Wu C., Chen D., Wang Z., Yi Y., Du W. Automatic microaneurysms detection based on multifeature fusion dictionary learning. Computational and Mathematical Methods in Medicine. 2017;2017:11. doi: 10.1155/2017/2483137.2483137 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Zhou W., Wu C., Chen D., Wang Z., Yi Y., Du W. Automatic microaneurysm detection using the sparse principal component analysis based unsupervised classification method. IEEE Access. 2017;5(1):2169–3536. [Google Scholar]
  • 21.Achanta R., Shaji A., Smith K., Lucchi A., Fua P., Süsstrunk S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2012;34(11):2274–2281. doi: 10.1109/TPAMI.2012.120. [DOI] [PubMed] [Google Scholar]
  • 22.Levinshtein A., Stere A., Kutulakos K. N., Fleet D. J., Dickinson S. J., Siddiqi K. TurboPixels: fast superpixels using geometric flows. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2009;31(12):2290–2297. doi: 10.1109/tpami.2009.96. [DOI] [PubMed] [Google Scholar]
  • 23.Veksler O., Boykov Y., Mehrani P. Superpixels and supervoxels in an energy optimization framework. Proceedings of the European Conference on Computer Vision; September 2010; Crete, Greece. pp. 211–224. [DOI] [Google Scholar]
  • 24.Vapnik V. N. An overview of statistical learning theory. IEEE Transactions on Neural Networks. 1999;10(5):988–999. doi: 10.1109/72.788640. [DOI] [PubMed] [Google Scholar]
  • 25.Kauppi T., Kalesnykiene V., Kamarainen J.-K., et al. The DIARETDB1 diabetic retinopathy database and evaluation protocol. Proceedings of the British Machine Vision Conference (BMVC '07); 2007; Warwick, UK. University of Warwick; pp. 252–261. [Google Scholar]
  • 26.Zuiderveld K. Graphics Gems. 1994. Contrast limited adaptive histogram equalization; pp. 474–485. [Google Scholar]
  • 27.Chanwimaluang T., Fan G. An efficient blood vessel detection algorithm for retinal images using local entropy thresholding. Proceedings of the International Symposium on Circuits and Systems; 2003; pp. 21–24. [Google Scholar]
  • 28.Fukunaga K. Introduction to Statistical Pattern Recognition. Academic Press; 1972. [Google Scholar]
  • 29.Goldbaum M., Moezzi S., Taylor A., et al. Automated diagnosis and image understanding with object extraction, object classification, and inferencing in retinal images. Proceedings of the IEEE International Conference on Image Processing (ICIP '96); September 1996; pp. 695–698. [Google Scholar]
  • 30.Giancardo L., Meriaudeau F., Karnowski T. P., Li Y., Tobin K. W., Jr., Chaum E. Automatic retina exudates segmentation without a manually labelled training set. Proceedings of the 8th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI '11); April 2011; pp. 1396–1400. [DOI] [Google Scholar]
  • 31.Zhang X., Thibault G., Decencière E., et al. Exudate detection in color retinal images for mass screening of diabetic retinopathy. Medical Image Analysis. 2014;18(7):1026–1043. doi: 10.1016/j.media.2014.05.004. [DOI] [PubMed] [Google Scholar]
  • 32.Duda R. O., Hart P. E., Stork D. G. Pattern Classification. New York, NY, USA: John Wiley & Sons; 2001. [Google Scholar]
  • 33.Roychowdhury S., Koozekanani D. D., Parhi K. K. Screening fundus images for diabetic retinopathy. Signals, Systems & Computers. 2012;43(4):1641–1645. [Google Scholar]
