Health Information Science and Systems
2019 Nov 8;7(1):26. doi: 10.1007/s13755-019-0090-4

Artery/vein classification of retinal vessels using classifiers fusion

Xiao-Xia Yin 1, Samra Irshad 2, Yanchun Zhang 2
PMCID: PMC6841783  PMID: 31749960

Abstract

Morphological changes in retinal blood vessels indicate cardiovascular diseases, which in turn lead to ocular complications such as hypertensive retinopathy. One of the significant clinical findings related to this ocular abnormality is alteration of vessel width. The classification of retinal vessels into arteries and veins in eye fundus images is a relevant task for the automatic assessment of vascular changes. This paper presents an approach to this problem based on feature ranking strategies and a multiple-classifier decision-combination scheme specifically adapted for artery/vein classification. Three databases are used: a local dataset of 44 images and two publicly available databases, INSPIRE-AVR containing 40 images and VICAVR containing 58 images. The local database also contains images with pathological structures. The performance of the proposed system is assessed by comparing the experimental results with the gold standard estimations as well as with the results of previous methodologies, achieving promising classification performance, with an overall accuracy of 90.45%, 93.90% and 87.82% in retinal blood vessel separation for the local, INSPIRE-AVR and VICAVR datasets, respectively.

Keywords: Hypertensive retinopathy, Retinal vessel classification, Optic disk, Region of analysis, SVMs

Introduction

A large number of reports in retinal analysis have focused on vessel segmentation [30, 42, 43]. A few researchers have focused on the detection of vessel bifurcations and crossover points [1, 2]. As far as retinal vessel classification is concerned, existing methods recognize vessels as veins and arteries either automatically or semi-automatically [29, 40]. The most prominent visual difference between retinal veins and arteries is their color. Arteries are lighter in color due to the abundance of oxygenated hemoglobin, and this fact has led to the use of intensity-based features for vessel recognition. The pioneering study in arteriovenous classification was proposed by [14], in which the retinal image was divided into four quadrants based on the optic disk center and the vessel segments found in each quadrant were classified using color features. This quadrant-division approach was also adopted in later studies [37]. In another method proposed by the same group [38], several circular regions of different radii around the optic disk are assessed to obtain the classes of vessels in each zone using vessel profile features. A similar approach is proposed by [26], with a slight variation: the image is divided into four quadrants with further partitions in the upper and lower regions. With the rapid progress of deep learning techniques, retinal vessels have also been segmented using a U-Net based fully convolutional network whose training strategy relies on heavy data augmentation [4], before classification is conducted to recognize artery and vein vessels. Further deep learning methods used for retinal vessel segmentation can be found in [22, 25, 41].

Disease progression makes the retinal vessels prone to many different kinds of pathologies, e.g. vessel tortuosity, hard and soft exudates, branch retinal artery and vein occlusion, sheathing of vessels, focal arteriolar narrowing and optic disk swelling, which deteriorate the width and intensity of vessels. This paper attempts to provide a system for reliable retinal vessel classification, with potential in hypertensive retinopathy (HR) diagnosis, that is robust to the presence of pathological structures in fundus images. Furthermore, the features extracted for retinal vessel classification may contain redundant information; for this reason, we exploit two feature ranking strategies for feature selection. In addition to the feature selection process, the class prediction performance of retinal vessels also depends significantly on the particular machine learning algorithm. Most classification-based vessel recognition approaches use a single classifier. For example, a recent decision ensemble based on decision trees with bootstrap aggregation was used by [12] for retinal vessel classification, but that ensemble is built from a single type of base classifier. In the bootstrap method, different predictive models are generated using different subsets of the data and the decisions of all those models are averaged. Fusing the decisions of different classifiers is known to enhance performance compared to a single classifier and has been used in many machine learning applications [6, 9, 24, 28]. This motivated us to examine the performance of different classifier combinations for retinal vessel classification.

This article contains five sections. The introduction and review of previous work have been presented in this section. "Methodology" section explains the methodology adopted for retinal image pre-processing, vascular network extraction, optic disk localization and boundary segmentation, determination of the region of analysis, and feature extraction for vessel recognition. "Feature selection and classification of vessels" section focuses on the details of the feature selection process and retinal vessel classification, followed by experimental results in "Experimental results" section. "Discussion and conclusion" section summarizes the contributions and limitations of this research.

Methodology

Firstly, we acquire the retinal images via a fundus camera; they are then pre-processed for noise removal and reduction of computational complexity. After that, the retinal vascular network is detected using a Gabor filter bank and a binary vessel map is generated. Next, the position of the optic disk is determined using a Laplacian of Gaussian filter together with the highest-vessel-density feature. Based on the optic disk boundary, a circular region of interest is defined around the optic disk and the vessels within this region are obtained. Vessel junctions, i.e. bifurcations and crossovers, present in the extracted vessel segments are detected and then differentiated using a local-variance based method to remove crossovers. In the next step, a set of 81 features is extracted from the retinal vessels. The acquired features are then subjected to two feature-ranking methods, i.e. the Pearson Correlation Coefficient and the Relief-F method. Afterwards, the fused feature subsets are given as input to the decision-fusion framework and final labels of retinal vessel segments are obtained. Different machine learning algorithms have their own unique strengths: for example, SVM is good at handling missing data, k-NN is insensitive to outliers, decision trees deal well with irrelevant features, and Naïve Bayes handles multiple classes well. In this paper, we focus on the fusion of k-NN, SVM and Naïve Bayes classifiers for improved retinal vessel classification performance.

Pre-processing and vessel segmentation

Pre-processing is an important step in the analysis of medical images, aimed at removing image noise and reducing computational complexity. Using a local mean and variance based method [17] on the green component of the image, the dark background is segmented from the digital fundus image in order to reduce computational complexity, and a binary segmentation mask is then formed by a thresholding operation.
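As an illustration, the background-masking step above might be sketched as follows in Python (the paper's implementation is in MATLAB, and the exact parameters of the local mean/variance method [17] are not given; the window size and threshold here are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def background_mask(green, window=25, t=0.1):
    """Sketch of background segmentation on the green channel: keep pixels
    whose intensity and local mean exceed a small threshold; everything
    else is treated as dark background (parameters are illustrative)."""
    g = green.astype(float) / 255.0
    local_mean = uniform_filter(g, size=window)
    mask = (g > t) & (local_mean > t)
    return mask.astype(np.uint8)

# toy example: bright square "retina" on a dark background
img = np.zeros((64, 64), dtype=np.uint8)
img[16:48, 16:48] = 200
m = background_mask(img)
print(m[32, 32], m[0, 0])  # 1 0: foreground kept, background removed
```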

A two-dimensional (2-D) Gabor filter bank is applied to the pre-processed green channel of the image for effective discrimination between retinal vessels and fundus background [21]. The Gabor wavelet bank is chosen for its localization characteristic, which allows image noise removal and accurate enhancement of vessels of different widths.
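A minimal sketch of Gabor-bank vessel enhancement, taking the maximum response over orientations (the kernel size, wavelength and sigma are illustrative choices, not the paper's values):

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(sigma, theta, lambd, size=21):
    """Real part of a 2-D Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lambd)

def enhance_vessels(green, n_orientations=12):
    """Maximum Gabor response over orientations: an elongated vessel
    responds strongly to the kernel aligned with its direction."""
    g = green.astype(float)
    g = (g - g.mean()) / (g.std() + 1e-8)  # normalize before filtering
    responses = [convolve(g, gabor_kernel(3.0, t, 8.0))
                 for t in np.linspace(0, np.pi, n_orientations, endpoint=False)]
    return np.max(responses, axis=0)

# toy image: a dark vertical "vessel" on a brighter fundus
img = np.full((64, 64), 180.0)
img[:, 30:33] = 60.0
resp = enhance_vessels(255.0 - img)  # invert so vessels become bright
print(resp[:, 31].mean() > resp[:, 5].mean())  # vessel column responds more
```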

Identification of the position of optic disk

The Optic Disk (OD), as shown in Fig. 1a, is a bright circular region in the retina from which all the blood vessels emerge. A Laplacian of Gaussian (LoG) filter and the highest-vessel-density property of the OD are used in this research to detect its location [16, 36]. The LoG filter is applied to the red channel of the RGB image to enhance the location of the OD. The threshold value used for binarization of the enhanced image is given in Eq. (1) [36].

T = 0.6 × m_LoG    (1)
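A hedged sketch of the OD enhancement and the Eq. (1) thresholding, assuming m_LoG denotes the maximum of the LoG-filtered red channel (the sigma and toy geometry are illustrative, not the paper's values):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def locate_od(red, sigma=10):
    """Enhance bright blob-like regions with a (negated) Laplacian of
    Gaussian and binarize at T = 0.6 * max(response), following Eq. (1)."""
    resp = -gaussian_laplace(red.astype(float), sigma)  # bright blobs -> positive
    T = 0.6 * resp.max()
    return resp >= T

# toy image: a bright disk (the "OD") centered at (60, 70)
img = np.zeros((128, 128))
yy, xx = np.mgrid[:128, :128]
img[(yy - 60)**2 + (xx - 70)**2 < 14**2] = 255.0
mask = locate_od(img, sigma=10)
cy, cx = np.argwhere(mask).mean(axis=0)
print(round(cy), round(cx))  # centroid recovers the OD center (60, 70)
```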

Fig. 1.


OD boundary. a OD boundary marked in green and the RoI shown between the concentric red and blue circles. b Retinal vessel segments inside the RoI

Extraction of vessel segments from vascular tree within ‘Analysis Zone’

Once the position and boundary of the OD are determined, a Region of Interest (RoI) is identified for extraction of the candidate retinal blood vessels to be categorized into the artery or vein class. As suggested by [15, 19, 31], a circular zone at a specific distance from the OD is marked and the vessels within this zone are considered for classification. Although some researchers have classified the complete retinal vessel network [5, 10, 39], our goal here is to calculate the AVR, which can be computed efficiently by assessing the blood vessels in a specific circular zone around the OD; therefore, only the vessel portions within this circular zone are selected. A fixed circular RoI around the OD is identified by placing two concentric circles, one at 1/4 Disk Diameter (DD) and another at 1 DD from the OD boundary. Another reason for selecting this zone is to ignore the vessel portions near the OD, because glial tissue or perivascular sheathing may influence the vessel segments in the OD's proximity [35]. Figure 1a shows the OD boundary and the RoI between the two concentric circles, and Fig. 1b shows the vessels extracted from the measurement zone.
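Since 1 DD equals twice the OD radius, the ring-shaped RoI can be expressed directly from the OD radius; a small geometric sketch (the coordinates and radius are toy values):

```python
import numpy as np

def roi_ring(shape, od_center, od_radius):
    """Binary ring mask between 1/4 DD and 1 DD from the OD boundary,
    where DD (disk diameter) = 2 * od_radius, as described in the text."""
    cy, cx = od_center
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    d = np.hypot(yy - cy, xx - cx)
    inner = od_radius + 0.25 * (2 * od_radius)  # OD boundary + 1/4 DD
    outer = od_radius + 1.00 * (2 * od_radius)  # OD boundary + 1 DD
    return (d >= inner) & (d <= outer)

mask = roi_ring((200, 200), (100, 100), 20)
# inside OD: excluded; at distance 40: inside ring; at 70: outside
print(mask[100, 100], mask[100, 140], mask[100, 170])  # False True False
```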

After the vessels within the RoI are extracted, the next phase is to split the connected vessels into isolated vessel segments by determining the vessel junction points (bifurcations and crossovers). These landmarks make the vessel classification and measurement tasks ambiguous. The subimages in Fig. 2 illustrate this phenomenon: an arteriovenous crossing in the vessel center-lines appears as a single vessel segment, as shown in Fig. 2c. In order to rectify such false landmarks, it is important to differentiate between bifurcations and crossovers. Moreover, for AVR calculation the arteries and veins need to be properly distinguished, so vessels with crossing points must be isolated. In this paper, we adopt a local-variance based method to differentiate between the two types of junction points, explained as follows.

  • Firstly, the binary vessel map is skeletonized and potential junction points are extracted. The level-set based skeletonization algorithm proposed in [33] produces smoother and better-centered structures than other thinning methods; moreover, it avoids pruning of center-line vessel branches.

  • Then, this skeleton image is convolved with the 3 × 3 kernel shown in Fig. 3a and, for each pixel, the number of neighboring skeleton pixels is counted. The locations of pixels with three or more neighbors are taken; these locations indicate vessel junction points, i.e. bifurcations, arteriovenous crossings, or crossings between a vessel and a capillary. Figure 3b shows the localized pixels with three or more neighbors, whereas Fig. 3c shows a single arteriovenous crossing that appears as two bifurcation points.

  • Finally, to decide whether a junction is a bifurcation or an arteriovenous crossing, the local-variance based method is used. A circular window of radius 11 pixels is centered on each detected junction point, as shown in Fig. 3d, where the junction points are shown in red and the circular window in green. Since a vessel crossing does not contain pixels from just one type of vessel (it contains pixels from an artery and a vein, an artery and a capillary, or a vein and a capillary), the intensity variance captured in the window is higher at a crossover than at a bifurcation. This variance-based property is used to characterize vessel junctions as crossovers or bifurcations.
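The neighbor-counting step above can be sketched as a 3 × 3 convolution, with a helper mirroring the circular-window variance idea (the window radius 11 follows the text; the toy skeleton and the grayscale image are illustrative):

```python
import numpy as np
from scipy.ndimage import convolve

def junction_points(skeleton):
    """Count the 8-neighbours of each skeleton pixel with a 3x3 kernel;
    pixels with >= 3 skeleton neighbours are candidate junctions
    (bifurcations or crossings)."""
    k = np.array([[1, 1, 1],
                  [1, 0, 1],
                  [1, 1, 1]])
    neighbours = convolve(skeleton.astype(int), k, mode='constant')
    return (skeleton > 0) & (neighbours >= 3)

def junction_variance(gray, point, radius=11):
    """Local intensity variance in a circular window around a junction;
    a crossing mixes two vessel types, so its variance tends to exceed
    that of a bifurcation."""
    yy, xx = np.mgrid[:gray.shape[0], :gray.shape[1]]
    win = np.hypot(yy - point[0], xx - point[1]) <= radius
    return gray[win].var()

# toy skeleton: an X-shaped crossing on a 7x7 grid
sk = np.zeros((7, 7), dtype=np.uint8)
for i in range(7):
    sk[i, i] = 1
    sk[i, 6 - i] = 1
j = junction_points(sk)
print(j[3, 3], j[1, 1])  # True False: only the crossing pixel qualifies
```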

Fig. 2.


Arteriovenous Crossing phenomenon. a Slice from original image showing Arteriovenous crossing. b Corresponding binary vessel segment. c Skeletonized vessel segment

Fig. 3.


Detection of vessel junctions. a 3 × 3 window used to detect the junctions. b Locations of vessel junctions. c An arteriovenous crossing which appears as two vessel bifurcations. d Circular window (in green) centered on junction points (in red)

After the crossovers are identified and removed, the binary image contains veins, arteries and small thin capillaries. However, due to the unavailability of ground truth, the thin capillaries are not considered in the next phase, i.e. feature extraction for vessel classification.

Feature extraction for vessel classification

Once the bifurcations and crossovers have been differentiated, the crossover points are eroded in the original extracted vessel map. As a result, we obtain an image without vessel crossovers. Afterwards, features are extracted from each candidate vessel segment. Each detected vessel is regarded as a sample for classification and is represented by a feature vector containing several features.

In this paper, 81 features are proposed for the representation of retinal blood vessels. A feature selection process removes irrelevant features before final vessel labeling. Let x_i be a candidate vessel sample considered for retinal vessel classification, where i = 1, 2, …, M and M is the total number of vessel samples. The true class label of vessel sample x_i is y_i, where y_i can take only one of two values, since there are two vessel classes, i.e. artery and vein. The feature set is denoted by F = {f_1, f_2, …, f_N}, where j = 1, 2, …, N and N is the total number of features extracted from each vessel sample. The input data matrix X therefore has dimensions M × N, where each vessel sample x_i in X is represented by N distinct feature values, and Y is the vector containing the true class labels y_i. The features extracted for retinal vessel classification comprise: (I) first-order statistical features characterizing the histogram of vessel pixels in different color spaces; (II) the spatial distribution of gradient magnitude, representing the changes in vessel intensity with respect to fundus background pixels; (III) features based on filter responses, with filters borrowed from [13, 23, 34]. These features are extracted from either vessel center-lines or complete vessel segments. Features extracted from vessel center-lines are effective because of the light reflex in the center of arteries, which makes them distinguishable from veins, since veins are darker in intensity. The RGB, CIE L*a*b, CMYK and YCbCr color spaces are exploited for feature extraction. All the features are concatenated into a single feature set of 81 features. It has been observed in the literature that features obtained using different methods outperform a single type of feature, because visual representations acquired using multiple techniques can capture various aspects of the same object in an image [7, 8].
Table 1 tabulates the features which are used in this paper for retinal vessel recognition.

Table 1.

Details of feature set

Feature no. Feature description Type
1–21 (f1–f21) Mean, Standard Deviation, and Entropy of vessel center-line pixels in (Red and Green channels of RGB; Luminance and Chrominance channels of YCbCr; L and B channels of L*a*b; and M of CMYK) Histogram-based statistical features
22–49 (f22–f49) Minimum and Maximum value of gradient magnitude, Kurtosis and Variance of gradient magnitude histogram in (Red and Green channels of RGB; Luminance and Chrominance channels of YCbCr; L and B channels of L*a*b; and M of CMYK) Gradient magnitude and spatial distribution of gradient magnitude
50–70 (f50–f70) Minimum, Maximum and Variance of pixel intensities of the complete vessel segment in (Red and Green channels of RGB; Luminance and Chrominance channels of YCbCr; L and B channels of L*a*b; and M channel of CMYK) Values directly calculated from complete vessel pixel intensities
71 (f71) Ratio of center-line pixel intensity average to average of intensities in the complete vessel segment (in Green channel of RGB) Average of pixel intensity values taken from center-line pixels and complete vessel segments
72–81 (f72–f81) Minimum and Maximum value of responses of filters from the LM, Schmid and MR filter banks in red and green channels of the RGB image Values calculated from raw pixel values in filtered images

Feature selection and classification of vessels

In this section, the proposed method for feature selection and vessel classification is explained. To select significant features, the features are first ranked using two feature-ranking strategies, and the classification accuracy of three classifiers is then used as a criterion to select an optimal feature subset from the ranked feature list. The feature subsets selected according to the accuracy of each classifier are fused into a single feature subset, which is used by the hybrid labelling method for retinal vessel classification. We first discuss the feature selection process, followed by the vessel classification scheme.

Ranking of features and selection of optimal feature subset

The motivation for including a feature selection module is that the extracted features may contain redundant data, which leads to overfitting of the prediction model and consequently reduces prediction accuracy. In this paper, we use two feature-ranking techniques to rank the features, but a novel approach is followed for selecting features from the ranked list. The system first ranks the features using two methods, i.e. the Pearson Correlation Coefficient [18] and the Relief-F method [20], and then selects the optimal features. We use the maximum classification accuracy of the classifiers as the selection criterion: the number of top-ranked features yielding maximum accuracy on a classifier is selected. We employ three classifiers for selecting the optimal number of top-ranked features. The optimal feature subsets selected using the three classifiers are combined and used by the proposed hybrid classification technique for vessel classification.

The class label y_i and the feature values of every sample x_i are given as input to the feature-ranking strategies. Both techniques evaluate the correlation of each feature with the class label using some criterion [3, 11], and a rank is generated for each feature. The features are then arranged in descending order of rank, i.e. F_o = {f_r1, f_r2, f_r3, …, f_rN}, where f_r1 and f_rN denote the features with the highest and lowest rank, respectively. For selection of the feature subset, the ranked features F_o are used to construct N nested feature subsets: the first subset contains only the highest-ranked feature, the second adds the second-ranked feature to the first, and this process is repeated until the last subset contains all ranked features. By applying these different combinations of highly ranked features to the classifiers, different predictive models with different accuracies are generated. The optimal predictive model is the one with maximum classification accuracy, and the feature subset corresponding to this model is selected. Since we employ three supervised classifiers, three optimal feature subsets are obtained; the number of features in each optimal subset is not fixed. This hybrid approach allows us to select different feature combinations from the ranked list. The optimal feature subsets obtained using the Pearson Correlation Coefficient method on the k-NN, SVM and Naïve Bayes classifiers are denoted by f_p1, f_p2 and f_p3, respectively, and those obtained using the Relief-F method on the same classifiers are denoted by f_r1, f_r2 and f_r3, respectively. Finally, the union of these optimal feature subsets is taken, as given in Eqs. (2) and (3), and the resulting feature subsets F_P and F_R are used by the proposed hybrid classification scheme for retinal vessel labelling. The feature-ranking approaches are elaborated below.

F_P = f_p1 ∪ f_p2 ∪ f_p3    (2)
F_R = f_r1 ∪ f_r2 ∪ f_r3    (3)
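The nested-subset search over a ranked feature list can be sketched as follows (the classifier, fold count and synthetic data are illustrative; the paper evaluates k-NN, SVM and Naïve Bayes with 10-fold cross-validation):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def select_top_ranked(X, y, ranked_idx, clf):
    """Grow nested prefixes of the ranked feature list and keep the
    prefix with the best cross-validated accuracy."""
    best_acc, best_k = -1.0, 0
    for k in range(1, len(ranked_idx) + 1):
        acc = cross_val_score(clf, X[:, ranked_idx[:k]], y, cv=5).mean()
        if acc > best_acc:
            best_acc, best_k = acc, k
    return ranked_idx[:best_k], best_acc

# synthetic demo: only the first two features are informative
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
ranked = [0, 1, 2, 3, 4, 5]  # assume a prior ranking (highest rank first)
subset, acc = select_top_ranked(X, y, ranked, KNeighborsClassifier(3))
print(len(subset) <= 6 and acc > 0.8)  # an informative prefix is selected
```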

Pearson correlation coefficient method

The Pearson Correlation Coefficient (PCC) method calculates the linear correlation between individual features and the class labels [18]. In this paper, we use the PCC method to rank the features: it computes a correlation p for relevance assessment of each feature with the corresponding class label, producing a correlation score for each individual feature. The PCC of a vessel sample x_i (where x_i ∈ X) and class label y_i (where y_i ∈ Y) is calculated as given in Eq. (4), where cov denotes covariance and σ denotes standard deviation.

p(x_i, y_i) = cov(x_i, y_i) / (σ_{x_i} σ_{y_i})    (4)
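Eq. (4) amounts to the sample covariance divided by the product of standard deviations; a short numerical check:

```python
import numpy as np

def pcc_score(f, y):
    """Pearson correlation between one feature column f and labels y,
    as in Eq. (4): p = cov(f, y) / (sigma_f * sigma_y)."""
    f, y = np.asarray(f, float), np.asarray(y, float)
    return np.cov(f, y)[0, 1] / (f.std(ddof=1) * y.std(ddof=1))

# a feature that increases with the class label is highly ranked
f = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0, 0, 1, 1])
print(round(pcc_score(f, y), 3))  # 0.894: strong positive correlation
```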

Relief-F algorithm

The second feature-ranking algorithm used here is the Relief-F algorithm [20]. In Relief-F, each feature receives a weight depending on its ability to distinguish between samples of opposite classes that are near each other and hence difficult to differentiate. The feature rank is calculated by taking a data point at random and considering its K nearest neighbors [20]. The K nearest neighbors are taken from both classes, and the strength of a feature is analyzed by considering their contributions. In order to find the optimal value of K for this ranking technique, we analyzed the feature weights while varying K. The maximum value of K is taken as 75% of the total number of instances because, according to the theory behind Relief-F [20], if K is too small the estimates are difficult to generalize on highly varied data, whereas if K equals the number of instances, the significance of relevant features deteriorates.
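For intuition, here is a minimal binary Relief update (Relief-F generalizes this to K nearest neighbors per class and averages their contributions; this single-neighbor sketch is not the paper's exact algorithm):

```python
import numpy as np

def relief_weights(X, y, n_iter=100, seed=0):
    """Minimal binary Relief sketch: reward features that separate the
    nearest miss (opposite class), penalize features that separate the
    nearest hit (same class)."""
    rng = np.random.default_rng(seed)
    X = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)  # scale to [0, 1]
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        i = rng.integers(len(X))
        d = np.abs(X - X[i]).sum(axis=1)  # L1 distances to the picked sample
        d[i] = np.inf                     # exclude the sample itself
        hit = np.argmin(np.where(y == y[i], d, np.inf))
        miss = np.argmin(np.where(y != y[i], d, np.inf))
        w += np.abs(X[miss] - X[i]) - np.abs(X[hit] - X[i])
    return w / n_iter

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = (X[:, 0] > 0).astype(int)  # only feature 0 determines the class
w = relief_weights(X, y)
print(np.argmax(w))  # 0: the relevant feature gets the highest weight
```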

Experimental results

Through the experimental results, we aim to investigate whether the proposed feature selection and classifier fusion technique improves the retinal vessel classification task and the subsequent AVR calculation. We first describe the datasets used in this research. Then we elaborate the procedure for choosing the optimal classifier parameters. Afterwards, we illustrate the feature ranks obtained from the two strategies and the features selected after applying the ranked features to the classifiers. Then we show the results of applying the selected features to the proposed decision-combination framework, followed by the AVR calculation results. The experiments in this work were implemented in MATLAB on a computer with a 1.80 GHz processor and 4.0 GB RAM.

Specifications of datasets

The methodology is evaluated on three databases: a database collected from the Armed Forces Institute of Ophthalmology (AFIO), Pakistan, and two public labelled databases, INSPIRE-AVR [27] and VICAVR [38]. In this research, only those vessel segments whose labels are provided in the ground truth are used for retinal classification and quantification.

The local database contains 44 retinal images in JPEG format with dimensions 1504 × 1000, including 11 images with pathological structures, as shown in Table 2. True vessel labels and AVRs were acquired from an expert ophthalmologist and are considered the ground truth. INSPIRE-AVR is a publicly available database containing 40 high-resolution, OD-centered, healthy retinal images acquired at the University of Iowa Hospitals and Clinics. These images are of size 2392 × 2048 and available in JPEG format. AVR values estimated by two observers are provided with the INSPIRE-AVR database for comparison, and the vessel labels were acquired from our ophthalmologist. The third database, VICAVR, contains 58 OD-centered images of size 768 × 576. Artery/vein labels and vessel calibers for the VICAVR database, obtained from three human experts, are available with the dataset. Table 2 gives the complete specifications of the datasets used in this study. The images in the local dataset were marked by the ophthalmologist on the basis of visual appearance; therefore, the underlying causes of the different abnormal structures are not exploited here. Additionally, detection and diagnosis of other retinal pathologies occurring independently or associated with HR are beyond the scope of this research. In this paper, we evaluate results only for vessel classification; unclassified vessels are not included in the evaluation.

Table 2.

Specifications of Datasets

Datasets Number and size of images Number of vessel segments for classification Ground truth and details of pathologies
Local Dataset 44 images of dimensions (1504 × 1000) 356 vessel samples (195 vein vessels, 161 artery segments)

Corresponding AVR values for 44 images estimated by an expert ophthalmologist. Images include:

20 Non-HR images

11 images with Grade-I HR including (4 images with vessel sheathing near OD, 2 images with hard exudates and hemorrhages, 1 image with branch retinal vein occlusion)

8 images with Grade-II HR including (1 image with vessel sheathing near OD, 1 image with cotton wool spots and hemorrhages)

3 images with Grade-III HR including (2 images with cotton wool spots and hemorrhages)

2 images with Grade-IV HR including (2 images with cotton wool spots, hemorrhages and OD swelling)

INSPIRE-AVR dataset 40 retinal images of dimensions (2392 × 2048) 410 vessel samples (201 artery segments, 209 vein segments) Corresponding AVR values for 40 images estimated by two Observers
VICAVR database 58 retinal images of dimensions (768 × 576) 476 vessel samples (244 vein vessels, 232 artery segments) Artery/Vein labels and AVR values for 40 images estimated by three Observers

Parameter tuning of classifiers

The classification accuracy of three classifiers (k-NN, SVM and Naïve Bayes) is used as the criterion to select the optimal top-ranked features. Before the ranked features are given as input to a classifier for feature selection, the classifier's parameters are tuned. This is done in a 'parameter tuning phase', where the dataset is divided into a training set (70% of the data) and a validation set (30% of the data). The classifier is trained with different parameters on the training data, and its accuracy is then evaluated on the validation set. The complete ranked set of 81 features is given as input to the classifier for parameter tuning, and the parameters showing maximum accuracy on the validation set are selected. All three supervised classifiers are trained once on the training data and then tested on the validation set to acquire the optimal parameters, whose values are listed in Table 3.

Table 3.

Parameters configuration for different classifiers

Classifiers Classifier parameters selection using features ranked by both methods
k-NN Nearest neighbors k = 3
SVM ‘RBF’ kernel with scaling factor 7
Naïve Bayes ‘kernel’ distribution function

Once the optimal parameters are acquired, the three classifier models (k-NN, SVM and Naïve Bayes) are refit to the entire dataset using 10-fold cross-validation, and the classification accuracy of these three classifiers is used to select the optimal number of ranked features from the ranked feature list. In tenfold cross-validation, the data is divided into ten subsets, of which nine are used for training and one for testing; the samples in each fold are randomly selected, and each fold is iteratively tested while the remaining folds are used for training. Different combinations of ranked feature subsets are given as input to a classifier, and the subset leading to maximum classification accuracy is selected. The performance metrics calculated to evaluate the ranked feature subsets on the classifiers are accuracy, sensitivity and specificity; however, accuracy is used as the stopping criterion to select the number of top-ranked features, i.e. the optimal feature subset. Sensitivity is the true positive rate and specificity is the true negative rate.
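The cross-validated evaluation with accuracy, sensitivity and specificity might look like this (the synthetic data, positive-class convention and classifier settings are assumptions, not the paper's configuration):

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

def cv_metrics(clf, X, y, cv=10):
    """Accuracy, sensitivity (TPR) and specificity (TNR) computed from
    10-fold cross-validated predictions; class 1 is taken as positive."""
    pred = cross_val_predict(clf, X, y, cv=cv)
    tp = np.sum((pred == 1) & (y == 1)); tn = np.sum((pred == 0) & (y == 0))
    fp = np.sum((pred == 1) & (y == 0)); fn = np.sum((pred == 0) & (y == 1))
    return (tp + tn) / len(y), tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
acc, sens, spec = cv_metrics(SVC(kernel='rbf'), X, y)
print(acc > 0.8 and 0 <= sens <= 1 and 0 <= spec <= 1)  # True
```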

Performance of feature ranking methods and selection of optimal features

To rank the features using the Relief-F method, the value of K, the number of nearest neighbors, must be determined. We analyzed the feature weights while varying the number of nearest neighbors, i.e. K = 1 to 250, 1 to 287 and 1 to 333 for the local, INSPIRE-AVR and VICAVR datasets, respectively; the corresponding optimal values of K for which the feature weights become stable are 196, 132 and 245, respectively.

For selection of the top-ranked features, the different feature subsets acquired from the ranked lists are applied to the k-NN, SVM and Naïve Bayes classifiers using tenfold cross-validation. This led to the generation of 81 predictive models with different accuracy, sensitivity and specificity, as illustrated in Fig. 4, where the upper and lower curves show the performance metrics for the local and INSPIRE-AVR datasets, respectively. The feature subset that maximizes the classification accuracy of each classifier is selected. For better visualization and comparison, we concatenated the performance curves of both datasets on the same x-axis in Fig. 4. Each classifier's response to each feature subset is different, which indicates the influence of the feature combination on the class label outcomes. Using different classifiers to select the optimal number of features resulted in feature subsets containing different numbers of features; these subsets are combined with a union operation to obtain the final feature subset.

Fig. 4.


Performance metrics (Green, Blue and Red curves depicting Accuracy, Sensitivity and Specificity, respectively) for different ranked feature subsets, upper curves: Local dataset and lower curves: INSPIRE-AVR dataset. a, c and e Classification performance of k-NN, SVM and Naïve Bayes classifier on features ranked by PCC. b, d and f Classification performance of k-NN, SVM and Naïve Bayes classifier on features ranked by Relief-F method

It is observed that the features ranked by the Relief-F method show relatively high sensitivity, with slightly increased classification accuracy when the SVM is used for classification; with the Naïve Bayes classifier, the local dataset shows better classification accuracy than with the PCC method. The PCC ranking, on the other hand, yields relatively high specificity; the classification accuracy for the INSPIRE-AVR dataset is slightly higher than for the local dataset, except for the SVM classifier, where the opposite holds.

We also tested whether feature ranking is beneficial by analyzing, on the k-NN classifier for the VICAVR database, the effect of feature subsets obtained from the ranked list produced by the Relief-F method versus the raw, unranked feature list. Figure 5 shows that the feature-ranking strategies indeed contribute to increased classification accuracy: both the average and the peak accuracy attained with the ranked list are much higher than those obtained with the unordered feature list.

Fig. 5.


Comparison of classification accuracy achieved for raw feature subsets (unordered) and ordered feature list using Relief-F method on k-NN classifier for VICAVR dataset. a Classification performance of feature subsets from unordered feature list. b Performance attained using feature subsets from ranked lists acquired through Relief-F method

Since the classification accuracy generated by the three classifiers is used as the criterion for selecting the optimal feature subset, three different feature subsets are selected for each dataset: the number of features yielding maximum classification accuracy differs from classifier to classifier, even on the same dataset. To obtain a single feature subset usable by the proposed hybrid classification approach, we therefore combine the feature subsets acquired with the different classifiers. The majority of the features in the subset selected using PCC, with pre-evaluation by the three classifiers, are also present in the subset acquired using the Relief-F method, which indicates the similarity in the rankings of the two strategies.
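The combination step above reduces to a set union; the per-classifier subsets below are hypothetical examples, not the selections reported in the paper.

```python
# Hypothetical optimal subsets selected via k-NN, SVM and Naïve Bayes
knn_best = {"f1", "f3", "f7"}
svm_best = {"f1", "f2", "f3"}
nb_best = {"f3", "f7", "f9"}

# Final subset used by the hybrid classifier: union of the three
final_subset = knn_best | svm_best | nb_best
print(sorted(final_subset))  # ['f1', 'f2', 'f3', 'f7', 'f9']
```

The union keeps every feature that at least one classifier found useful, which is why the final subsets in Tables 4 and 5 can be larger than any single classifier's optimum.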

Retinal vessel classification using proposed hybrid classification scheme with optimal feature subset

The proposed hybrid classification scheme combines the labels generated by three classifiers, i.e. k-NN, SVM and Naïve Bayes. This decision combination is chosen for vessel classification because it represents the joint strength of the objective functions of multiple classifiers. In majority voting, the votes given by each classifier for a certain sample are counted and the class with the maximum votes is assigned to the sample. For example, if SVM and Naïve Bayes assign 'Artery' to a sample whereas k-NN assigns 'Vein', the final label is 'Artery', since the 'Artery' class has two votes. After assigning the labels using the proposed hybrid classification method, accuracy, sensitivity and specificity are calculated by comparing the final labels with the ground truth. The classification performance achieved using the proposed decision-fusion framework with the optimal feature subsets (FP) and (FR), acquired using the PCC and Relief-F methods, is given in Tables 4 and 5, respectively. Overall, the hybrid classification scheme increases vessel classification accuracy for feature subsets from both the PCC and the Relief-F ranking lists. Figures 6 (using PCC) and 7 (using Relief-F) illustrate the gains in classification accuracy obtained with the proposed technique over single classifiers on the same feature subsets. For all datasets, the increase in vessel classification performance shows the improvement induced by the multi-classifier decision combination. The highest vessel classification accuracy is observed for the INSPIRE-AVR dataset, with a 93.9% correct rate using Relief-F ranking and the proposed hybrid classification method. An interesting observation is the error rate for artery samples in the local dataset, with 6.2% and 5.3% misclassification, the highest among all datasets. An obvious explanation for this result is the presence of pathological structures in the local-dataset images, which deteriorate the appearance of arteries and thus make their classification more challenging. On the other hand, for both the INSPIRE-AVR and VICAVR datasets, the misclassification rate for veins is higher than for arteries.
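The majority-voting rule described above can be sketched in a few lines; with three classifiers and two classes a tie is impossible, so a plain vote count suffices. The labels below reproduce the Artery/Vein example from the text.

```python
from collections import Counter

def majority_vote(labels):
    """Return the class receiving the most votes. With an odd number of
    classifiers and two classes, no tie-break is needed."""
    return Counter(labels).most_common(1)[0][0]

# SVM and Naïve Bayes say 'Artery', k-NN says 'Vein' -> final: 'Artery'
votes = {"SVM": "Artery", "NaiveBayes": "Artery", "kNN": "Vein"}
print(majority_vote(votes.values()))  # Artery
```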

Table 4.

Performance of hybrid classification for feature subsets acquired using PCC method

Dataset | Features in selected subset | Accuracy (%) | Sensitivity (%) | Specificity (%)
Local dataset | 55 | 90.17 | 91.45 | 89.22
INSPIRE-AVR dataset | 55 | 93.41 | 95.02 | 91.87
VICAVR dataset | 37 | 87.82 | 85.08 | 90.79

Table 5.

Performance of hybrid classification for feature subsets acquired using Relief-F method

Dataset | Features in selected subset | Accuracy (%) | Sensitivity (%) | Specificity (%)
Local dataset | 59 | 90.45 | 90.45 | 90.45
INSPIRE-AVR dataset | 53 | 93.90 | 95.52 | 92.34
VICAVR dataset | 33 | 85.92 | 83.95 | 87.98

Fig. 6.

Comparison of classification accuracy obtained using single classifier (shown in blue) and proposed hybrid classification (shown in orange), for feature subset from PCC ranking

Fig. 7.

Comparison of classification accuracy obtained using single classifier (shown in blue) and proposed hybrid classification (shown in orange), for feature subset from Relief-F ranking


Figure 8 shows an example of two fundus images selected from the local database (Fig. 8a), together with the segmented vessel network (Fig. 8b), the candidate vessel segments for classification inside the RoI (Fig. 8c), and the classified vessel segments (Fig. 8d). In Fig. 8d, blue and red circles label vessels correctly classified as vein and artery, respectively; white circles mark unclassified vessel segments for which no ground truth is available; and green circles mark vein segments misclassified as artery.

Fig. 8.

Classification of retinal vessels (images taken from local database). a Original retinal images. b Vessel segmentation results. c Vessels inside RoI. d Classified retinal vessel portions (blue: vein, red: artery, white: neither artery nor vein (unclassified), green: vein that has been wrongly classified as artery)

Comparison of vessel classification accuracy with other state-of-the-art approaches

Table 6 tabulates the techniques and results reported by previous researchers for retinal vessel classification on the INSPIRE-AVR and VICAVR datasets and compares them with the results of our system. We use the accuracy, sensitivity and specificity metrics for comparison with the vessel recognition results of existing methods. For comparison with previous approaches, we selected, for each dataset, the hybrid-labelling result with the feature-selection method showing the highest performance: the highest vessel classification accuracy is achieved using the Relief-F ranking method with the hybrid classification approach for the local dataset (90.45%) and the INSPIRE-AVR dataset (93.90%), while for the VICAVR database the highest accuracy (87.82%) is obtained using the PCC method with the hybrid classification approach. For the comparison of the VICAVR results, among the opinions of the three experts we used the ground truth of Expert 1, provided with the dataset [37].

Table 6.

Comparison of vessel classification with previous methods

Method | Dataset | Technique | Accuracy (%) | Sensitivity (%) | Specificity (%)
[27] | INSPIRE-AVR database (all center-line pixels detected in RoI) | 27 features from Red, Green, Hue, Saturation and Intensity planes + prior information of retinal vessel arrangement + LDA classifier | – | ~78 | ~78
[5] | INSPIRE-AVR database (all vessel segments inside RoI) | 19 features based on HSI and RGB color channels + LDA classifier | 91.1 | 91 | 86
[5] | VICAVR dataset (all vessel segments, including unclassified vessels) | 19 features based on HSI and RGB color channels + LDA classifier | 89.8 | – | –
[32] | INSPIRE-AVR dataset (483 vessel segments) | 4 features from Green, Red and Hue planes + squared-loss mutual information clustering | 87.6 | – | –
[37] | VICAVR database (all vessel segments, including unclassified vessels) | 5 features from vessel profiles in RGB, HSL and gray-level color spaces with a clustering and tracking approach | 88.8 | – | –
[39] | VICAVR database | 29 features from RGB, LAB and YCbCr + SVM classifier | 92.4 | – | –
Our method | INSPIRE-AVR dataset (410 vessel segments inside RoI; unclassified segments not included) | 53 features, including 18 from gradient magnitude, 16 from pixel raw values, 12 histogram-based and 6 from filter responses + Relief-F ranking followed by feature selection using three classifiers + decision-fusion scheme | 93.90 | 95.52 | 92.34
Our method | Local database (356 vessel segments inside RoI) | 59 features, including 20 from gradient magnitude, 19 from pixel raw values, 14 histogram-based and 5 from filter responses + Relief-F ranking followed by feature selection using three classifiers + decision-fusion scheme | 90.45 | 90.45 | 90.45
Our method | VICAVR dataset (476 vessel segments; unclassified segments not included) | 37 features, including 4 from gradient magnitude, 16 from pixel raw values, 10 histogram-based and 6 from filter responses + PCC ranking followed by feature selection using three classifiers + decision-fusion scheme | 87.82 | 85.08 | 90.79

The Area Under Curve (AUC) metric is used by [27] to evaluate their vessel classification approach; to facilitate comparison with the results of our system, we therefore use the sensitivity and specificity values approximated by [5] from the AUC reported in [27]. Note from Table 6 that both the sensitivity and the specificity of our system are considerably superior to the values approximated for [27]. It should also be noted that [27] report vessel classification results only for vessel center-line pixels, whereas our method is evaluated on complete vessel segments in the RoI. On the other hand, our method does not consider small capillaries in the vessel classification task, owing to the unavailability of ground truth. Although the vessel center-line features used in [27] are also included in this research, our method exploits a comparatively larger number of color planes for vessel characterization, from both healthy and pathologically diseased images.
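For reference, the three metrics compared throughout this section can be computed directly from the confusion counts; in the sketch below arteries are assumed to be the positive class (the convention is not stated explicitly in the source), and the label lists are illustrative only.

```python
def classification_metrics(y_true, y_pred, positive="Artery"):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) from paired ground-truth/prediction lists."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    tn = sum(t != positive and p != positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    accuracy = (tp + tn) / len(pairs)
    sensitivity = tp / (tp + fn)  # arteries correctly identified
    specificity = tn / (tn + fp)  # veins correctly identified
    return accuracy, sensitivity, specificity

y_true = ["Artery", "Artery", "Vein", "Vein", "Artery"]
y_pred = ["Artery", "Vein", "Vein", "Vein", "Artery"]
acc, sens, spec = classification_metrics(y_true, y_pred)
print(acc, sens, spec)  # 0.8, 2/3, 1.0
```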

In comparison with the vessel classification results of [5], who reported 91.1% accuracy on the INSPIRE-AVR dataset using twofold cross-validation with a Linear Discriminant Analysis (LDA) classifier, our method achieved 93.90% accuracy using tenfold cross-validation with the fusion of classifier decisions. Note, however, that [5] achieved their accuracy using a set of only 19 features, whereas our system obtained the mentioned accuracy using an optimal set of 53 features; thus, although our system achieves slightly higher vessel classification accuracy, sensitivity and specificity, it is less efficient than [5] when the number of features is compared. The vessel classification approach of [5] is tested on three databases, with results reported both for vessels inside the RoI and for the complete retinal vessel tree. However, classification of the complete vessel tree is not necessary for the computation of AVR, and it is computationally expensive compared with classification of the small vessel portions inside the RoI. To keep the system less complex and more efficient, our method assesses only the vessels inside the RoI, reducing the processing of redundant pixels that do not contribute.

An accuracy of 87.6% is reported by [32] for the INSPIRE-AVR dataset, compared with 93.90% in our research; however, the sample size used in their paper is comparatively larger. Two features used by [32] for vessel recognition, i.e. the means of the Red and Green channels over center-line pixels, are also incorporated in our method. It is worth noting that none of these studies considers pathologically diseased images when evaluating their proposed methods. For VICAVR, our system shows comparatively lower classification accuracy: [37] achieve 88.8% on the VICAVR dataset, which is a little higher than our result. Recently, [39] reported an accuracy of 92.4% on VICAVR; however, they do not mention the number of vessels used to evaluate their method. An apparent reason for the lower classification rate on VICAVR may be the difference in the ground truth used for evaluation. The classification results presented for both the healthy-image and the diseased-image databases demonstrate the capability of the system to recognize vessels with high accuracy.

Discussion and conclusion

The proposed methodology comprises five modules: (a) automatic detection and segmentation of retinal vessels; (b) extraction of a novel feature set to categorize vessels; (c) ranking of features by the Pearson Correlation Coefficient and Relief-F methods; (d) selection of features from the ranked feature lists based on the classification accuracy of three classifiers; and (e) classification of vessels by the hybrid classification framework using the selected feature subset. In particular, two feature ranking techniques (PCC and Relief-F) are followed by three classifiers for feature selection, with a multi-decision combination method for retinal vessel classification. The effect of specific feature ranking techniques combined with the use of multiple classifiers for feature selection, and the incorporation of the 'joint strength' of three supervised prediction models, has not been evaluated in the past; the results obtained in these experiments can therefore serve as a baseline or reference for future research.

The proposed methodology offers comparable results and works robustly on three databases acquired from different fundus cameras with different settings. The experimental evaluations highlight the strength of the proposed vessel recognition model in effectively capturing the relation between input features and classification outcomes. In particular, the arrangement of features and the combination of subsets according to the feature lists ranked by the PCC and Relief-F methods have contributed to increased retinal vessel classification accuracy, compared with the performance of features without ranking. However, one important limitation of the proposed algorithm is its dependency on the vessel segmentation results. Retinal vessels are known to deteriorate in higher grades of HR, which affects the vessel delineation process; enhancing the performance of vessel segmentation is therefore likely to improve the classification process, providing a more efficient and robust computer-aided analysis system. Another constraint of our method is the limit on vessel classification performance imposed by retinal pathologies, since these pathologies may influence reflectivity. Although the presence of retinal pathologies proved less problematic in our research, the vessel classification error remained high for images with pathological structures; segmenting these pathologies from retinal images could further improve classification effectiveness. Moreover, the sample size in our research is comparatively small, especially for higher grades of HR. Deep learning technology may be the way to achieve accurate vessel segmentation, which will form our future research. The experimental evaluations are promising; if more structural information about the vessels is involved, greater classification accuracy may be achieved.

Acknowledgements

The authors would like to thank Dr. Umer Salman from Hameed Latif Hospital, Lahore, Pakistan for assisting in providing ground truth for retinal vessel classification.

Compliance with ethical standards

Conflict of interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Samra Irshad has made the same contribution in writing this paper as Xiao-Xia Yin.

References

1. Aibinu AM, Iqbal MI, Shafie AA, Salami MJE, Nilsson M. Vascular intersection detection in retina fundus images using a new hybrid approach. Comput Biol Med. 2010;40(1):81–89. doi: 10.1016/j.compbiomed.2009.11.004.
2. Azzopardi G, Petkov N. Automatic detection of vascular bifurcations in segmented retinal images using trainable COSFIRE filters. Pattern Recogn Lett. 2013;34(8):922–933. doi: 10.1016/j.patrec.2012.11.002.
3. Beheshti I, Demirel H, Farokhian F, Yang C, Matsuda H, Alzheimer's Disease Neuroimaging Initiative. Structural MRI-based detection of Alzheimer's disease using feature ranking and classification error. Comput Methods Programs Biomed. 2016;137:177–193. doi: 10.1016/j.cmpb.2016.09.019.
4. Bhuiyan A, Hussain MdA, Wong Y, Klein TR. Retinal artery and vein classification for automatic vessel caliber grading. In: Annual international conference of the IEEE Engineering in Medicine and Biology Society. 2018. pp. 870–873. doi: 10.1109/embc.2018.8512287.
5. Dashtbozorg B, Mendonça AM, Campilho A. An automatic method for the estimation of arteriolar-to-venular ratio in retinal images. In: 26th international symposium on computer-based medical systems (CBMS). IEEE; 2013. pp. 512–513.
6. Devi SS, Roy A, Singha J, Sheikh SA, Laskar RH. Malaria infected erythrocyte classification based on a hybrid classifier using microscopic images of thin blood smear. Multimed Tools Appl. 2016;77:631. doi: 10.1007/s11042-016-4264-7.
7. Dimitrovski I, Kocev D, Kitanovski I, Loskovska S, Džeroski S. Improved medical image modality classification using a combination of visual and textual features. Comput Med Imaging Graph. 2015;39:14–26. doi: 10.1016/j.compmedimag.2014.06.005.
8. Douze M, Ramisa A, Schmid C. Combining attributes and fisher vectors for efficient image retrieval. In: Computer vision and pattern recognition (CVPR). IEEE; 2011. pp. 745–752.
9. Du P, Zhang W, Sun H. Multiple classifier combination for hyperspectral remote sensing image classification. In: Multiple classifier systems. Berlin: Springer; 2009. pp. 52–61.
10. Estrada R, Allingham MJ, Mettu PS, Cousins SW, Tomasi C, Farsiu S. Retinal artery-vein classification via topology estimation. IEEE Trans Med Imaging. 2015;34(12):2518–2534. doi: 10.1109/TMI.2015.2443117.
11. Fakhraei S, Soltanian-Zadeh H, Fotouhi F. Bias and stability of single variable classifiers for feature ranking and selection. Expert Syst Appl. 2014;41(15):6945–6958. doi: 10.1016/j.eswa.2014.05.007.
12. Fraz MM, Rudnicka AR, Owen CG, Strachan DP, Barman SA. Automated arteriole and venule recognition in retinal images using ensemble classification. In: International conference on computer vision theory and applications (VISAPP). IEEE; 2014. pp. 194–202.
13. Geusebroek JM, Smeulders AW, Van De Weijer J. Fast anisotropic gauss filtering. IEEE Trans Image Process. 2003;12(8):938–943. doi: 10.1109/TIP.2003.812429.
14. Grisan E, Ruggeri A. A divide et impera strategy for automatic classification of retinal vessels into arteries and veins. In: Proceedings of the 25th annual international conference of the IEEE Engineering in Medicine and Biology Society. 2003. pp. 890–893.
15. Hubbard LD, Brothers RJ, King WN, Clegg LX, Klein R, Cooper LS, Sharrett AR, Davis MD, Cai J, Atherosclerosis Risk in Communities Study Group. Methods for evaluation of retinal microvascular abnormalities associated with hypertension/sclerosis in the Atherosclerosis Risk in Communities Study. Ophthalmology. 1999;106(12):2269–2280. doi: 10.1016/S0161-6420(99)90525-0.
16. Irshad S, Yin X, Li LQ, Salman U. Automatic optic disk segmentation in presence of disk blurring. In: International Symposium on Visual Computing. Berlin: Springer; 2016. pp. 13–23.
17. Jamal I, Akram MU, Tariq A. Retinal image preprocessing: background and noise segmentation. TELKOMNIKA. 2012;10(3):537–544. doi: 10.12928/telkomnika.v10i3.834.
18. Kendall MG, Gibbons JD. Rank correlation methods. 5th ed. New York: Oxford University Press; 1990.
19. Knudtson MC, Lee KE, Hubbard LD, Wong TY, Klein R, Klein BEK. Revised formulas for summarizing retinal vessel diameters. Curr Eye Res. 2003;27(3):143–149. doi: 10.1076/ceyr.27.3.143.16049.
20. Kononenko I. Estimating attributes: analysis and extensions of RELIEF. In: European conference on machine learning. Berlin: Springer; 1994. pp. 171–182.
21. Lee TS. Image representation using 2D Gabor wavelets. IEEE Trans Pattern Anal Mach Intell. 1996;18(10):959–971. doi: 10.1109/34.541406.
22. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–444. doi: 10.1038/nature14539.
23. Leung T, Malik J. Representing and recognizing the visual appearance of materials using three-dimensional textons. Int J Comput Vision. 2001;43(1):29–44. doi: 10.1023/A:1011126920638.
24. Liu M, Zhang D, Shen D. Hierarchical fusion of features and classifier decisions for Alzheimer's disease diagnosis. Hum Brain Mapp. 2014;35(4):1305–1319. doi: 10.1002/hbm.22254.
25. Liu Z-F, Zhang Y-Zh, Liu P-Zh, Zhang Y, Luo Y-M, Du Y-Zh, Peng Y. Retinal vessel segmentation using densely connected convolution neural network with colorful fundus images. J Med Imaging Health Inf. 2018;8(6):1300–1307. doi: 10.1166/jmihi.2018.2429.
26. Mirsharif Q, Tajeripour F, Pourreza H. Automated characterization of blood vessels as arteries and veins in retinal images. Comput Med Imaging Graph. 2013;37(7):607–617. doi: 10.1016/j.compmedimag.2013.06.003.
27. Niemeijer M, Xu X, Dumitrescu AV, Gupta P, van Ginneken B, Folk JC, Abramoff MD. Automated measurement of the arteriolar-to-venular width ratio in digital color fundus photographs. IEEE Trans Med Imaging. 2011;30(11):1941–1950. doi: 10.1109/TMI.2011.2159619.
28. Niu G, Han T, Yang BS, Tan ACC. Multi-agent decision fusion for motor fault diagnosis. Mech Syst Signal Process. 2007;21(3):1285–1299. doi: 10.1016/j.ymssp.2006.03.003.
29. Ortíz D, Cubides M, Suárez A, Zequera M, Quiroga J, Gómez J, Arroyo N. Support system for the preventive diagnosis of hypertensive retinopathy. In: 37th annual international conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2010. pp. 5649–5652.
30. Pandey D, Yin X, Wang H, Zhang Y. Accurate vessel segmentation using maximum entropy incorporating line detection and phase-preserving denoising. Comput Vis Image Underst. 2017;155:162–172. doi: 10.1016/j.cviu.2016.12.005.
31. Parr JC, Spears GFS. Mathematic relationships between the width of a retinal artery and the widths of its branches. Am J Ophthalmol. 1974;77(4):478–483. doi: 10.1016/0002-9394(74)90458-9.
32. Relan D, MacGillivray T, Ballerini L, Trucco E. Retinal vessel classification: sorting arteries and veins. In: 35th annual international conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2013. pp. 7396–7399.
33. Rumpf M, Telea A. A continuous skeletonization method based on level sets. In: Proceedings of the symposium on data visualisation. Eurographics Association; 2002. pp. 151-ff.
34. Schmid C. Constructing models for content-based image retrieval. In: Computer vision and pattern recognition (CVPR). IEEE; 2001.
35. Stokoe NL, Turner RW. Normal retinal vascular pattern. Arteriovenous ratio as a measure of arterial calibre. Br J Ophthalmol. 1966;50(1):21. doi: 10.1136/bjo.50.1.21.
36. Usman A, Khitran SA, Akram MU, Nadeem Y. A robust algorithm for optic disc segmentation from colored fundus images. In: International conference image analysis and recognition. Berlin: Springer; 2014. pp. 303–310.
37. Vázquez SG, Cancela B, Barreira N, Penedo MG, Saez M. On the automatic computation of the arterio-venous ratio in retinal images: using minimal paths for the artery/vein classification. In: International conference on digital image computing: techniques and applications (DICTA). IEEE; 2010. pp. 599–604.
38. Vázquez SG, Cancela B, Barreira N, Penedo MG, Rodríguez-Blanco M, Seijo MP, de Tuero GC, Barceló MA, Saez M. Improving retinal artery and vein classification by means of a minimal path approach. Mach Vis Appl. 2013;24(5):919–930. doi: 10.1007/s00138-012-0442-4.
39. Vijayakumar V, Koozekanani DD, White R, Kohler J, Roychowdhury S, Parhi KK. Artery/vein classification of retinal blood vessels using feature selection. In: 38th annual international conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2016. pp. 1320–1323.
40. Welikala RA, Fraz MM, Hayat S, Rudnicka AR, Foster PJ, Whincup PH, Owen CG, Strachan DP, Barman SA. Automated retinal vessel recognition and measurements on large datasets. Conf Proc IEEE Eng Med Biol Soc. 2015;2015:5239–5242. doi: 10.1109/EMBC.2015.7319573.
41. Welikala RA, Foster PJ, Whincup PH, Rudnicka AR, Owen CG, Strachan DP, Barman SA. Automated arteriole and venule classification using deep learning for retinal images from the UK Biobank cohort. Comput Biol Med. 2017;90:23–32. doi: 10.1016/j.compbiomed.2017.09.005.
42. Yin X, Ng BW, He J, Zhang Y, Abbott D. Accurate image analysis of the retina using hessian matrix and binarisation of thresholded entropy with application of texture mapping. PLoS ONE. 2014;9(4):e95943. doi: 10.1371/journal.pone.0095943.
43. Zhao Y, Rada L, Chen K, Harding SP, Zheng Y. Automated vessel segmentation using infinite perimeter active contour model with hybrid region information with application to retinal images. IEEE Trans Med Imaging. 2015;34(9):1797–1807. doi: 10.1109/TMI.2015.2409024.
