Abstract
Hypertensive retinopathy (HR) refers to morphological changes in the diameter of the retinal vessels due to persistent high blood pressure. Early detection of such changes helps in preventing blindness or even death due to stroke. These changes can be quantified by computing the arteriovenous ratio and the tortuosity severity in the retinal vasculature. This paper presents a decision support system for detecting and grading HR using morphometric analysis of retinal vasculature, particularly measuring the arteriovenous ratio (AVR) and retinal vessel tortuosity. In the first step, the retinal blood vessels are segmented and classified as arteries and veins. Then, the width of arteries and veins is measured within the region of interest around the optic disk. Next, a new iterative method is proposed to compute the AVR from the caliber measurements of arteries and veins using the Parr–Hubbard and Knudtson methods. Moreover, the retinal vessel tortuosity severity index is computed for each image using 14 tortuosity severity metrics. In the end, a hybrid decision support system is proposed for the detection and grading of HR using the AVR and the tortuosity severity index. Furthermore, we present a new publicly available retinal vessel morphometry (RVM) dataset to evaluate the proposed methodology. The RVM dataset contains 504 retinal images with pixel-level annotations for vessel segmentation, artery/vein classification, and optic disk localization. Image-level labels for the vessel tortuosity index and HR grade are also available. The proposed methods of iterative AVR measurement, tortuosity indexing, and HR grading are evaluated on the new RVM dataset. The results indicate that the proposed method outperforms existing methods. The presented methodology is a novel advancement in automated detection and grading of HR, which can potentially be used as a clinical decision support system.
Keywords: Computer-aided diagnosis (CAD), Retinal images, Retinal blood vessels analysis, Hypertensive retinopathy, End-to-end pipeline
Introduction
Hypertensive retinopathy (HR) manifests in the retina as morphometric changes in the retinal vasculature, such as increased vascular tortuosity, changes in the arteriovenous width ratio (AVR), arteriovenous (AV) nicking, and, in the later stages, macular edema [11, 17, 52]. HR is characterized by retinal vasculature inflammation, which reduces visual acuity. Without timely diagnosis and treatment, HR can lead to other health complications such as stroke, vision loss, and cardiovascular disease.
Ophthalmologists examine these morphometric changes in the retinal vessels to detect and grade HR on digital images of the retina acquired by a fundus camera. The early signs of HR are generally not evident, so the disease is often diagnosed late, which complicates its treatment. Therefore, patients with hypertension should be advised to follow up with an ophthalmologist regularly [54] to prevent an increase in retinopathy severity, which may lead to stroke or even death.
Therefore, an automated system that examines the morphometric changes in the blood vessels to diagnose HR can assist clinicians and patients in optimal HR disease management. To this end, we propose an automated system for morphometric analysis of retinal vasculature to perform detection and grading of hypertensive retinopathy. The proposed system quantifies the arteriovenous ratio and vessel tortuosity severity and consequently computes HR severity based on these measures. Initially, the retinal blood vessels are segmented and classified into arteries and veins. The optic disk (OD) is located, and the region of interest around the OD is identified for computing the arteriovenous ratio. The width of arteries and veins is measured within this region. A new iterative method is proposed to compute the AVR from the caliber measurements of arteries and veins using the Parr–Hubbard and Knudtson methods. Moreover, the retinal vessel tortuosity severity index is computed for each image using 14 tortuosity severity metrics. In the end, a hybrid decision support system is proposed for detection and grading of HR using the AVR and tortuosity severity indices. Furthermore, in order to evaluate the proposed methodology, we present a new publicly available retinal vessel morphometry (RVM) dataset. The RVM dataset contains 504 retinal images, with pixel-level annotations for vessel segmentation, artery/vein classification, and optic disk localization. Image-level labels for the vessel tortuosity index and HR grade are also available. The dataset is available at http://vision.seecs.edu.pk/datasets.
The major contributions of the proposed work are as follows:
We propose a novel automated system that fuses AVR and tortuosity analysis for the detection and grading of HR. To the best of our knowledge, no such system has been proposed previously.
A new iterative method incorporating vessel caliber is presented for computing the AVR within the ROI around the OD.
For vessel tortuosity, a unique combination of 14 metrics is used, mimicking the routine practice of ophthalmologists.
A dataset for HR detection and grading is developed, containing pixel-level labels for vessel segmentation, AV classification, and OD localization, as well as image-level labels for vessel tortuosity and HR grade. The annotations were produced by ophthalmologists.
The paper is organized as follows. The related studies are reviewed in “Related Work”. “Methodology” explains the proposed methodology. The details of retinal vessel morphometry (RVM) dataset are given in “Materials”. The experimental results are presented in “Results”. Finally, the conclusions are stated in “Conclusion”.
Related Work
One of the most significant features for HR diagnosis is the change in the vascular diameter [50]. The arteriovenous width ratio, tortuosity severity, and arteriovenous (AV) nicking severity are proportional to the HR severity.
Researchers have proposed various techniques for HR identification, most of which focus on artery–vein classification and the arteriovenous width ratio [55]. Akbar et al. [5] included the measurement of papilledema (PE) for HR grading, while Gulzar et al. [21] and Hu et al. [27] introduced different graph-based methods for AV classification. These methods involve manual graph evaluation and long computation time. Other studies [22, 25, 43] have adopted some feature-based methods. The drawback of such methods is that the ROI (generally, the optic disk (OD)) is manually localized.
Several AV classification techniques have been adopted to classify the retinal vasculature in fundus images [19, 55], where feature-based methods are used: color-profile and textural features are extracted, and the arteries and veins are then classified using a KNN classifier.
Dzemyda and Paunknis [48] proposed an adaptive algorithm for vessel measurement that does not need to be tuned for specific circumstances or imaging equipment. Its main contribution is that it extracts blood vessel features by exploiting spatial vessel dependency and vessel width measurement. The approach was validated and compared with other existing techniques that measure the artery–vein ratio (AVR), and the results showed that it achieves a higher vessel classification accuracy. Here, the vessel parts required for measuring the arteriovenous width ratio are identified and used for classifying the arteries and veins. The main drawback of this approach is its still-limited classification accuracy, while its main advantage is its universality: it adapts to variations in the size and quality of the images.
Rodrigues and Marengoni [44] introduced a novel algorithm based on the mathematical morphology and wavelet transform for OD segmentation. This algorithm uses a Hessian-based multi-scale filtering technique to segment the vascular tree in the color eye fundus image. Both the vessel tree and the OD are used to analyze the retinal fundus image. The OD is used to identify the ROI in the fundus image. The lower-frequency representation of the image is obtained by applying the Haar wavelet transform, and the mathematical morphology is used to improve the segmentation results. A filtering approach based on the Hessian technique is used to achieve the tree vessel segmentation, which can be performed using the second-order derivatives. This enables one to explore the tubular shape of the blood vessel and determine whether the pixels belong to the vessel.
Akbar et al. [4] proposed an automated method for the identification of HR at various stages by measuring the arteriovenous and papilledema in the fundus retinal images. This approach consists of two modules: analysis of the vasculature for assessing the optic nerve head (ONH) and measurement of the arteriovenous width ratio for papilledema analysis. The first module employs the hybrid feature set to classify the arteries and veins, using a support vector machine (SVM) classifier with the radial basis function (RBF) kernel to measure the arteriovenous width ratio. The second module analyzes the ONH region to identify signs of papilledema. It uses various features with the SVM and RBF for the classification of papilledema. Both modules were evaluated to determine the performance of the proposed system on various publicly available datasets. The results showed that these modules enable the system to achieve high classification accuracy.
Furthermore, a few researchers, e.g., Bhargava [10], have studied HR indicators such as tortuosity and AV nicking as well as their effects on HR severity, where the AV-nicking measure or the silver/copper wiring measure is employed as a feature in the feature set used to classify the HR severity.
In addition, an integrated papilledema severity grading scheme based on fundus image diagnosis has been proposed [15], where blood vessel segmentation is performed via thresholding at multiple layers. A total of 26 characteristics were extracted from a cropped ROI and the segmented vessels (color, obscuration of the OD border, segmented vessels, texture-based, and profiling features). These characteristics are employed in the classification of papilledema severity using the SVM. The performance of the method was evaluated on 90 images of the STARE dataset and 70 images of a local dataset.
This study presents a hybrid decision support system based on arteriovenous ratio and vessel tortuosity severity for detection and grading of HR.
Methodology
In this paper, a methodology for the automated detection and identification of HR severity is proposed. To the best of our knowledge, this is the first study to use a four-grade scale to quantify the tortuosity severity in a retinal image. In addition, it proposes an iterative AVR method for measuring the arteriovenous width ratio. Moreover, a new scale for measuring the HR severity is presented. The AVR calculation is performed based on the Parr–Hubbard formulas using two methods: (1) the AVR median method and (2) the proposed iterative AVR calculation method. Each method was implemented in two experiments. The first experiment was performed on the R1 region, where the ROI is the ring from 2R to 3R, with R being the radius of the optic disk (OD). The second experiment was performed on the R2 region, where the ROI is the ring from 2R to 5R. The computation of the arteriovenous width ratio, tortuosity severity, and HR severity involves multiple steps, which include preprocessing, OD localization, ROI identification, vessel segmentation, artery–vein classification, arteriovenous width ratio and tortuosity calculation, severity grading for each of the measures, and finally, determination of HR severity. The hybrid methodology for HR diagnosis via quantification of the arteriovenous width ratio and tortuosity is illustrated in Fig. 1.
Fig. 1.
HR diagnosis via the hybrid method of quantifying the arteriovenous width ratio and tortuosity severity
Preprocessing
The retinal images are preprocessed using the methodology proposed by Foracchia et al. [16] to minimize intra-image luminosity variations and to enhance the contrast of the retinal blood vessels with respect to the background.
Retinal Vasculature Segmentation and Vessel Fragment Extraction
Vasculature segmentation from colored images of the fundus of the retina is a challenging task because of several issues such as poor contrast, uneven illumination, choroidal vascularization, central light reflex, and background artifacts including impulse noise and background homogenization [41]. Nevertheless, many supervised and unsupervised methods are available for retinal vessel segmentation [12, 23, 24, 46, 53, 56]. We aim to effectively segment the vessels of the retina using the optimized parameters of a filter as defined in a previous study [7]. The adopted vessel segmentation technique involves optimizing the B-COSFIRE filter responses using multi-objective optimization. This method is useful for vessel edge detection and for overcoming the central light reflex problem. Figure 2a shows a cropped retinal image, and Fig. 2b shows the corresponding segmentation, in which the entire vasculature, including vessels exhibiting the central light reflex, is appropriately segmented. Therefore, this technique was included in our method for vessel segmentation. Figure 2c shows the results of the artery–vein segmentation using the method proposed in our previous studies [6, 8].
Fig. 2.
Vessel segmentation: (a) original image, (b) vessel segmentation results, (c) artery–vein segmentation results
Several methods have been proposed for vessel skeleton fragment extraction [36]. The vascular skeleton is extracted using a thinning process [35] that iteratively erodes a layer of border pixels from each connected component within the black-and-white image while preserving its connectivity, until each connected component is the skeleton itself. This method yields an enhanced skeletonization result by eliminating the noisy spur pixels and smoothing the generated skeleton as follows:
The initial skeletonization results included extra noisy spurs, i.e., tiny connected fragments attached to the left and right sides of each vessel, which were removed by iterative detection and elimination of the endpoints until the main vessel trunk was reached. Each small extra segment is eliminated from the vessel by repeatedly removing pixels that become endpoints, until the last pixel of the noisy spur fragment is deleted. The same result can be achieved using the MATLAB function bwmorph() for spur detection.
An additional phenomenon has been observed in the results of the above-mentioned MATLAB function, whereby one pixel remains in the vessel skeleton at the root of each deleted spur piece, resulting in L-shaped angles in the vessel curve (see Fig. 3a, d for the L-shaped issue before and after the enhancement). This extra pixel affects the subsequent steps, as it adds two extra branch points in the MATLAB branch point extraction function bwmorph() for branch point detection. It also breaks the vessel segment into multiple fragments (see Fig. 3b, e for the fragment shapes before and after the enhancement). To solve this problem, we improved the skeletonization results by detecting and deleting these corner pixels.
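To make the cleanup concrete, the two steps above can be sketched in Python. This is a minimal sketch on a NumPy boolean skeleton with a one-pixel background border; the function names and the endpoint heuristic are our illustration, not the paper's exact pseudo-code:

```python
import numpy as np

def neighbors8(sk, r, c):
    """Coordinates of 8-connected skeleton neighbors of (r, c).
    Assumes the skeleton has a one-pixel background border."""
    pts = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr or dc) and sk[r + dr, c + dc]:
                pts.append((r + dr, c + dc))
    return pts

def prune_spurs(sk, iters=5):
    """Iteratively delete endpoint pixels that hang off a branch point,
    so genuine trunk endpoints (whose sole neighbor is a simple path
    pixel) survive while spur tips are eroded away."""
    sk = sk.copy()
    for _ in range(iters):
        kill = []
        for r, c in zip(*np.nonzero(sk)):
            nb = neighbors8(sk, r, c)
            if len(nb) == 1 and len(neighbors8(sk, *nb[0])) > 2:
                kill.append((r, c))
        if not kill:
            break
        for r, c in kill:
            sk[r, c] = False
    return sk

def remove_corner_pixels(sk):
    """Delete leftover L-corner pixels: a pixel with exactly two
    neighbors that are themselves mutually adjacent can be removed
    without breaking connectivity, avoiding spurious branch points."""
    sk = sk.copy()
    for r, c in zip(*np.nonzero(sk)):
        nb = neighbors8(sk, r, c)
        if len(nb) == 2:
            (r1, c1), (r2, c2) = nb
            if max(abs(r1 - r2), abs(c1 - c2)) == 1:  # neighbors touch
                sk[r, c] = False
    return sk
```

Applying `prune_spurs` and then `remove_corner_pixels` mirrors the two enhancement passes described above: spur tips are deleted first, and the residual corner pixels are then removed so that branch-point detection no longer reports spurious branches.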
Fig. 3.
Visual illustration of the vessel segments before and after optimization: (a) corner pixels forming L shapes in the vessel segments and (d) the correctly localized vessel segment after resolving them; (b) the fragment shapes before and (e) after the enhancement; (c) wrong sub-segments generated instead of one segment and (f) the full, unfragmented vessel segment after the root cause is resolved. Initially, wrong sub-fragments were extracted owing to the existence of corner pixels. This sub-fragmenting phenomenon was resolved, and the proper vessel fragments were finally extracted
The enhanced vessel fragment extraction can be observed in Fig. 3c, which shows multiple fragments before the enhancement, and in Fig. 3f, which shows the entire vessel fragment extracted from the intersection/bifurcation point to the next intersection/bifurcation point or leaf.
Vessel Tortuosity Measurement
The steps involved in generating vessel fragments from the segmented retinal vasculature are illustrated in Fig. 4, and Fig. 5 shows the methodology for calculating the tortuosity metrics. These 14 tortuosity metrics are then applied to the generated vessel fragments. The metrics are calculated and recorded for each vessel fragment in the feature set, which is used to classify the 504 images into four severity levels via machine learning. The classification results are reviewed and finalized by expert ophthalmologists, and four tortuosity severity levels are set to classify each image into one of the four classes.
Fig. 4.
Snapshots of the image after each stage: (a) original image, (b) vessel segmentation, (c) detection of intersection points from the (d) skeletonized image, (e) identifying vessel fragments, (f) measuring the vessel tortuosity
Fig. 5.
Steps for generating the tortuosity severity levels
Tortuosity Feature Set Preparation
A feature set is created for all the images in our newly proposed retinal vessel morphometry (RVM) dataset. The tortuosity is calculated using 14 tortuosity metrics [30]: straight-line distance (chord), geodesic distance (arc), distance metric (DM), distance factor (DF), standard deviation of the average curvature, total curvature, total squared curvature, total curvature normalized by the arc length, total squared curvature normalized by the arc length, total curvature normalized by the chord length, total squared curvature normalized by the chord length, sum of angles metric in degrees (SOAM) and in radians (SOAMr), normal inflection count metric (ICM), and binomial inflection count metric (ICMb). The quantified tortuosity measures are calculated for each vessel fragment in each image to obtain fragment-wise summary statistics, followed by the creation of image-wise tortuosity statistics, which include the number of vessel fragments in the image. For each tortuosity measure, we calculate summary statistics (minimum, maximum, and average). The entity relationship diagram of the created feature set is shown in Fig. 6.
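For illustration, a few of these metrics (chord, arc, the distance metric, and the sum of angles) can be computed from a fragment centerline as follows. The function name and the exact normalization conventions are our assumptions, since [30] defines several variants:

```python
import numpy as np

def tortuosity_metrics(points):
    """Compute a subset of the tortuosity measures for one vessel
    fragment, given its centerline as an (n, 2) array of points.
    Assumes consecutive points are distinct (nonzero segment lengths)."""
    pts = np.asarray(points, dtype=float)
    seg = np.diff(pts, axis=0)                  # consecutive segment vectors
    seglen = np.linalg.norm(seg, axis=1)
    arc = seglen.sum()                          # geodesic distance (arc)
    chord = np.linalg.norm(pts[-1] - pts[0])    # straight-line distance
    dm = arc / chord                            # distance metric (arc/chord)
    # sum of angles: total turning angle between consecutive segments
    cosang = np.einsum('ij,ij->i', seg[:-1], seg[1:]) / (seglen[:-1] * seglen[1:])
    angles = np.arccos(np.clip(cosang, -1.0, 1.0))
    soam_rad = angles.sum()                     # SOAMr (radians)
    return {'chord': chord, 'arc': arc, 'dm': dm,
            'soam_rad': soam_rad, 'soam_deg': float(np.degrees(soam_rad))}
```

Running this per fragment and then taking the minimum, maximum, and average per image yields the fragment-wise and image-wise summary statistics described above.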
Fig. 6.
Entity relationship diagram (ERD) for the feature set of tortuosity metrics created for the RVM dataset images
OD Localization and ROI Segmentation
OD segmentation is essential for segmenting the ROI that contains the artery and vein fragments. These arteries and veins are the inputs for calculating the arteriovenous width ratio and HR severity. The OD is oval in shape and yellowish in color, and it represents the center of the vasculature tree branching. It has been identified in the literature using maximum intensity region detection, boundary extraction [9, 47], the circle Hough transform [3], the GrowCut algorithm [2], and polar coordinates [57].
In this study, we detect the OD by extracting the green channel (see Fig. 7b) and then cleaning bright lesions from the background using contrast-limited adaptive histogram equalization (CLAHE), removing dark components, identifying the average brightness, and finally performing edge detection with the circle Hough transform [51]. The OD is detected, and its center and radius (R) are determined (see Fig. 7d); accordingly, R1 (2R < ROI < 3R; see Fig. 7g) and R2 (2R < ROI < 5R; see Fig. 7h) are identified as the ROIs.
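As a rough illustration of the brightest-region step, the sketch below localizes an OD candidate on the green channel with a box filter built from an integral image. The CLAHE cleanup and circle Hough refinement of the actual pipeline are omitted, and the function name and window size are our arbitrary choices:

```python
import numpy as np

def locate_od(green, win=25):
    """Rough OD localization on the green channel: smooth with a
    win x win box filter (computed via an integral image) and return
    the center of the brightest window. A simplified stand-in for the
    full CLAHE + circular-Hough pipeline."""
    g = np.asarray(green, dtype=float)
    # integral image with a zero row/column prepended
    ii = np.pad(g, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    # sum of every win x win window (valid top-left positions only)
    sums = (ii[win:, win:] - ii[:-win, win:]
            - ii[win:, :-win] + ii[:-win, :-win])
    r, c = np.unravel_index(np.argmax(sums), sums.shape)
    return r + win // 2, c + win // 2   # window center = OD center estimate
```

The returned center (and an OD radius estimated in the subsequent Hough step) then define the concentric circles used for the R1 and R2 rings.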
Fig. 7.
ROI extraction: (a) the cropped OD from the original image for illustration purposes, (b) processed green channel, (c) processed black-and-white image, (d) OD center and radius, (e) annotation of co-centered circles in the original image to identify the ROI, (f) the proposed ring filter, (g) segmented R1 region using the R1 ring filter, (h) segmented R2 region using the R2 ring filter
Arteriovenous Vessel Classification in ROI
The classification of arteries and veins is a critical step in HR diagnosis. Our previously proposed method [8] is adopted to classify arteries and veins using a fully convolutional neural network with deep learning (see Fig. 8c).
Fig. 8.
Splitting the artery fragments from the vein fragments within the ROI for AVR calculation: (a) original image, (b) segmented ROI, (c) artery–vein segmentation, (d) artery fragments, (e) vein fragments
This study mainly focuses on the arteriovenous width ratio. Depending on the experimental requirements, only the artery and vein fragments within either R1 or R2 are used in the arteriovenous width ratio calculation. R1 is defined as the ring segment around the OD that is 1–1.5 times the OD diameter from the OD center. Therefore, we designed a ring cut filter whose pixel coordinates are identified from the OD center and its radius R_OD. The filter pixels are set to 1 between 2R_OD and 3R_OD, while the rest of the filter area is set to 0. We used the area between the circles for morphological operations to eventually segment the ROI image used for calculating the AVR. The filter thus represents a ring of all the pixels between the inner circle of radius 2R and the outer circle of radius 3R, as shown in Eq. (1) and Fig. 7f. The filter is used to cut the ring part of the image using Eqs. (1) and (2) to isolate the ROI and facilitate AV classification (see Fig. 8).
F(x, y) = 1 if 2R_OD ≤ √((x − x_c)² + (y − y_c)²) ≤ 3R_OD, and F(x, y) = 0 otherwise, where (x_c, y_c) is the OD center (1)

I_ROI(x, y) = I(x, y) · F(x, y) (2)
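A minimal sketch of the ring filter of Eqs. (1) and (2); the function name and argument conventions are ours:

```python
import numpy as np

def ring_filter(shape, center, r_od, inner=2.0, outer=3.0):
    """Binary ring mask: True for pixels whose distance from the OD
    center lies between inner*R_OD and outer*R_OD (the R1 ring by
    default; pass outer=5.0 for the R2 ring)."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    d = np.sqrt((yy - center[0]) ** 2 + (xx - center[1]) ** 2)
    return (d >= inner * r_od) & (d <= outer * r_od)

# usage: roi = image * ring_filter(image.shape, (cy, cx), R)
```

Multiplying the retinal image by this mask isolates the ring-shaped ROI on which the AV classification and AVR measurement operate.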
Arteriovenous Width Ratio Calculation in ROI
The widths of the vessel fragments are the core input for arteriovenous width ratio calculation and are important for determining and grading HR severity [18, 20, 25, 49]. Many width quantification techniques have been proposed in the literature. In our method, we calculate the vessel fragment width using the technique proposed in [37] for each vessel fragment in the ROI. As we have a set of artery and vein fragments in the ROI, we define the set of vessel fragments in the ROI as follows:
F_ROI = F_A ∪ F_V = {f_1, f_2, …, f_n}, where F_A and F_V denote the artery and vein fragment sets within the ROI (3)
Accurate measurement of the arteriovenous width ratio requires optimal vessel width measurement, which in turn requires accurate blood vessel segmentation. The approach proposed in [21] involves the automated enhancement of multi-scale linear structures to achieve vessel segmentation, using Gabor wavelets and multilayered thresholding. In our approach, we have followed [7] to obtain the fragmented vasculature and proceed with the AVR measurement. To calculate the fragment width, for each fragment f_i, we identify its skeleton as follows:
S_i = skeleton(f_i) = {p_1, p_2, …, p_m} (4)
Then, we generate the fragment edge sides, side1 and side2, to quantify the diameter of the vessel fragment at each skeleton point p_j (see Fig. 9a). We define T as a hypothetical straight-line segment centered at the coordinates of p_j, as outlined in [37]. The length of T is selected to be greater than any possible vessel width. Subsequently, we rotate the segment T around the point p_j with predefined angle increments up to 360 degrees, and we identify the intersection points between T and the vessel fragment sides, where the intersection points at each rotation angle are given by
P_θ = T_θ ∩ (side_1 ∪ side_2) = {q_1(θ), q_2(θ)} (5)
Fig. 9.
Vessel width measurement: (a) the vessel width measurement technique, (b) the estimated artery trunk width derived from the first pair of wider and narrower retinal artery branches, Wa and Wb, respectively; similarly, the estimated vein trunk width is calculated from the first pair of wider and narrower retinal vein branches
With the two intersection points computed for each rotation angle, we compute the length of the chord between them. After all the rotations in the rotation stream, the vessel width is taken as the minimum chord length, and the corresponding rotation angle identifies the width direction. The vessel caliber at the point p_j is thus the minimum straight-line distance between the two intersections
w(p_j) = min_θ ‖q_1(θ) − q_2(θ)‖, where q_1(θ) and q_2(θ) are the two intersection points at rotation angle θ (6)
We calculate the width at all the points of the retinal vessel fragment, and the final vessel fragment width is the average of the widths at each point along the fragment skeleton.
W_i = (1/m) Σ_{j=1}^{m} w(p_j) (7)
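The rotating-segment width measurement can be sketched for a single skeleton point as follows. This simplified version marches outward pixel by pixel instead of intersecting an explicit segment T with the fragment edges, and the function name, step size, and angle count are our choices:

```python
import numpy as np

def vessel_width_at(mask, point, n_angles=36, t_max=40.0, step=0.5):
    """Width at a skeleton point: for each rotation angle, march out
    from the point on both sides until leaving the vessel mask, and
    keep the minimum chord length over all angles."""
    r0, c0 = point
    best = np.inf
    for ang in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        dr, dc = np.sin(ang), np.cos(ang)
        chord = 0.0
        for sgn in (1.0, -1.0):          # march out on both sides
            t = 0.0
            while t + step < t_max:
                r = int(round(r0 + sgn * (t + step) * dr))
                c = int(round(c0 + sgn * (t + step) * dc))
                if not (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]) \
                        or not mask[r, c]:
                    break
                t += step
            chord += t
        best = min(best, chord)
    return best
```

Averaging `vessel_width_at` over all skeleton points of a fragment gives the fragment width of Eq. (7).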
AV classification is the next critical step in calculating the AVR accurately. We used the method proposed in [8] to classify the arteries and veins in the ROI vessel fragments. Moreover, the vessel fragment widths for the six widest arteries and the six widest veins were selected to proceed with the arteriovenous width ratio calculation.
The Parr–Hubbard [28] and Knudtson [33] formulas were used for the computation of the arteriovenous width ratio. These formulas provide more reliable results and have been used for AVR measurement in previous studies [1, 26, 29, 32, 34, 37, 43]. The AVR was also introduced in [14]. First, the OD was segmented by match-filtering, followed by identification of the ROI and fragmentation of the vessels, and a data-mining algorithm was finally used to classify the blood vessels as arterioles and venules. Subsequently, the vessel widths and AVR were calculated.
Briefly, we identify the ROI ring segment whose pixels all fall between two concentric circles, whose diameters are 1 and 1.5 times the diameter of the ONH. The six widest veins and arteries that appear in this identified ring segment (ROI) are used in the AVR calculation (see Fig. 10), as explained in detail by Parr and Hubbard [28] and Knudtson [33]. The formula is applied to the six widest veins and arteries within the identified ROI. The arteriovenous width ratio is then calculated from these vessels as follows:
AVR = CRAE / CRVE (8)
Fig. 10.
Illustration of formulas for the calculation of arteriovenous width ratio, CRAE, and CRVE
The Parr–Hubbard method defines the central retinal artery equivalent (CRAE) and central retinal vein equivalent (CRVE) quantification [28], as shown in Eqs. (9) and (10), while the Knudtson method [33] defines them as shown in Eqs. (11) and (12). Figure 9b shows the natural relation between the contributions of two sister branches, Wa and Wb, and the root trunk width of the vessel bifurcation for arteries and veins, to elucidate Eqs. (9) and (11). In this study, we introduce a three-iteration method for measuring the AVR, and we propose a new scale for measuring the HR severity. The three-iteration AVR method starts by calculating the CRAE: the six blood vessels are reduced to three estimated diameters after applying the CRAE formula in the first iteration. These estimates are again merged into fresh trunk estimates, whereas the remaining values are transferred to the next CRAE or CRVE calculation of the next iteration; see Fig. 10. We end up with two wider vessels merged into a fresh trunk estimate in iteration two to generate the final value of CRAE.
The same concept is applied to the veins in the three iterations to calculate the final value of CRVE. The formulas used to calculate CRAE and CRVE are shown in Eqs. (9) and (10). CRAE is derived by applying Eq. (9) to the six widest arteries over the three iterations. Before iteration one, we sort the widths in descending order. CRAE is calculated thrice. First, it is calculated from the highest and lowest values. Second, it is calculated from the next highest and lowest values, where the two width inputs are the outputs of the previous iteration (see Fig. 10). Third, the CRAE is calculated with the two median values. Similarly, CRVE is derived by applying Eq. (10) to the six widest veins in the ROI over three iterations. The number of width values is thus reduced from six in iteration one to three in iteration two and finally to one in iteration three, and that value is taken as the CRVE.
Ŵ_a = √(0.87·W_a² + 1.01·W_b² − 0.22·W_a·W_b − 10.76) (9)

Ŵ_v = √(0.72·W_a² + 0.91·W_b² + 450.05) (10)
Knudtson [33] provided the following definitions:
Ŵ_a = 0.88·√(W_a² + W_b²) (11)

Ŵ_v = 0.95·√(W_a² + W_b²) (12)
Hence, the estimated trunk widths Ŵ_a and Ŵ_v are used to finalize the calculation of CRAE and CRVE, as shown in Fig. 10. Then, the arteriovenous width ratio is calculated using Eq. (8).
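The iterative CRAE/CRVE computation can be sketched with the Knudtson revised branch coefficients (0.88 for arterioles, 0.95 for venules). The pair-the-widest-with-the-narrowest scheme and the carrying of the median value are our reading of the three-iteration procedure described above, so this is a sketch rather than the authors' exact implementation:

```python
import math

def knudtson_combine(wa, wb, artery=True):
    """Knudtson revised formulas (Eqs. 11-12): estimated trunk width
    from the wider and narrower branch widths."""
    k = 0.88 if artery else 0.95
    return k * math.sqrt(wa ** 2 + wb ** 2)

def vessel_equivalent(widths, artery=True):
    """Iteratively pair the widest with the narrowest vessel and
    combine them, carrying the median over when the count is odd,
    until one value remains (CRAE for arteries, CRVE for veins)."""
    vals = sorted(widths, reverse=True)
    while len(vals) > 1:
        nxt = []
        while len(vals) > 1:
            nxt.append(knudtson_combine(vals.pop(0), vals.pop(-1), artery))
        nxt.extend(vals)               # odd count: carry the median over
        vals = sorted(nxt, reverse=True)
    return vals[0]

def avr(artery_widths, vein_widths):
    """Arteriovenous ratio = CRAE / CRVE (Eq. 8)."""
    return (vessel_equivalent(artery_widths, artery=True)
            / vessel_equivalent(vein_widths, artery=False))
```

With six widths per vessel type, the reduction proceeds 6 → 3 → 2 → 1, matching the three iterations described in the text.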
As the vessel widths are the only input parameters of the AVR formula, morphometric variations in the artery and vein widths affect the arteriovenous width ratio by the amount of deviation from the widths of healthy retinal vessels. This method supports mapping the measured AVR value to the HR severity based on the KWB scale, which defines HR severity with values from 1 to 4 [11, 31]. Table 5 summarizes the direct relation between HR severity and the AVR results: the second column relates HR severity to the AVR median, while the third column relates HR severity to the scale generated by the iterative AVR method. Comparing our calculated AVR value with this new scale enables us to diagnose the HR severity.
Table 5.
Deriving HR severity levels from the AVR median result ranges and the iterative AVR result ranges
AVR severity level | AVR median result range | Iterative AVR result range | Symptoms
---|---|---|---
Normal | 0.667–0.75 | 0.5–0.755 | No abnormal signs
Level 1 | 0.5 | 0.5 | Mild narrowing of veins
Level 2 | 0.33 | 0.32 | Moderate to severe narrowing of venules and AV nicking
Level 3 | 0.25 | 0.25 | Vessel crossing that is right-angled
Level 4 | 0.2 | 0.23 | Papilledema along with all of the above
The coordinates of the splitting point (a, b) are
(a, b) = ((L_2·x_1 + L_1·x_2)/(L_1 + L_2), (L_2·y_1 + L_1·y_2)/(L_1 + L_2)), where (x_1, y_1) and (x_2, y_2) are the endpoints of the line segment (13)
where the endpoint values are the computed AVR values from the iterative AVR calculation, while L1 and L2 are the two complementary percentages that split the line segment into two lengths at the point (a, b). Thus, the new range of values of the iterative AVR method is split equally.
Hypertensive Retinopathy Severity-Level Identification
In [1, 32, 43], the authors quantified HR severity based on AVR according to Table 1, which lists several phases of HR based on the AVR severity.
Table 1.
HR severity based on AVR median ranges
AVR severity level | AVR median | Symptoms
---|---|---
Normal | 0.667–0.75 | No abnormal signs
Level 1 | 0.5 | Mild narrowing of veins
Level 2 | 0.33 | Moderate to severe narrowing of venules and AV nicking
Level 3 | 0.25 | Vessel crossing that is right-angled
Level 4 | 0.2 | Papilledema along with all of the above
According to the obtained results, we propose a new scale for AVR severity grading based on the newly proposed iterative AVR calculation method explained in the previous section. The arteriovenous width ratio is calculated using the AVR median approach and iterative AVR approach, which facilitates the generation of a new scale for grading the arteriovenous ratio (see Table 5).
Moreover, we propose a hybrid method for HR severity decision making based on the results of AVR severity and tortuosity severity (see Table 2).
HR_s = max(AVR_s, min(T_s − 1, 2), AVN_s, PE_s) (14)
The formula assumes a default value of 0 for AV-nicking severity and papilledema severity so that it is applicable to our experiments. However, the AV-nicking severity and papilledema severity can be studied in the future or included as additional factors that influence the automated HR severity decision. AV-nicking grading has been studied in [39, 45], while papilledema severity has been studied in [15].
Table 2.
Hybrid HR severity grading based on KWB scale and hybrid severity levels of AVR and tortuosity
HR severity | AVR severity | Tortuosity severity
---|---|---
0 | 0 | 1
1 | 1 | 2
2 | 2 | 3, 4
3 | 3 | –
4 | 4 | –
In the above equation, AVR_s is the AVR severity level, T_s is the tortuosity severity level, and HR_s is the HR severity level.
As detailed in “Tortuosity Feature Set Preparation”, the tortuosity metrics are calculated, and the tortuosity severity level of each of the 504 images of the RVM dataset is determined. The resulting tortuosity values are classified into four grades, from 1 (normal) to 4 (severe). The tortuosity grades are employed in Eq. (14) to enhance the identification of the HR severity level.
For example, the HR severity level is 4 whenever papilledema (OD edema) exists or the arteriovenous width ratio shows severe changes (AVRs = 4). Similarly, it is 3 whenever AVRs = 3. Moreover, the HR severity level is 2 whenever AVRs = 2 or Ts ∈ {3, 4}, and it is 1 whenever there is tortuosity with no AV nicking such that AVRs = 1 or Ts = 2 (see Table 2 for the details).
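The hybrid rule of Table 2 can be sketched as a max of the contributing severities, with tortuosity alone capped at level 2; the cap and the default-zero handling of AV nicking and papilledema follow the text above, but the closed form below is our reading of Eq. (14), not a quote:

```python
def hr_severity(avr_s, tort_s, nicking_s=0, papilledema_s=0):
    """Hybrid HR severity per Table 2: AVR severity maps directly,
    tortuosity levels 1-4 map to 0, 1, 2, 2, and the AV-nicking and
    papilledema severities default to 0 as in the experiments."""
    return max(avr_s, min(tort_s - 1, 2), nicking_s, papilledema_s)
```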
Although such a scale is automated, it is also aligned with the ophthalmologist’s manual diagnosis steps for HR using the Keith–Wagener–Barker (KWB) grading scale [10]: malignant OD edema (level 4); cotton wool spots or hemorrhages (level 3); AV nicking without cotton wool spots (level 2); tortuosity and mild narrowing (level 1); or a normal healthy retina.
Materials
The newly proposed RVM dataset [8] is well suited for deep learning and, in particular, for several retinal image analysis problems: vessel segmentation, AV classification, OD localization, tortuosity severity grading, and HR severity grading. For each retinal image, the dataset provides ground-truth labels for all of these tasks, with the tortuosity severity level and HR severity level available as image-level features.
The dataset is created by defining five types of ground-truth labels for the 504 original colored fundus images taken from the MESSIDOR dataset [13], where the fundus images were collected from 50 middle-aged respondents using non-mydriatic fundus cameras (Topcon) at a resolution of 2002 × 2000 pixels. Both left and right eye images are included.
For each label type, 504 labels are created. The colored vessel segmentation labels are used to run the deep-learning-optimized method that segments the vessels from the original retinal image taken from the RVM dataset. The images and their labels, for type-1 and type-2, have a size of 2002 × 2000 pixels. Each original retinal image has four labels for the vessel segmentation problem, one monochrome and the others colored. Similarly, there are two different labels for the RVM dataset problems, while the type-3 and type-4 labels are available in the image-level feature set of the RVM dataset.
In this study, the dataset has also been extended with the tortuosity severity level and the HR severity level for each of the 504 images. The feature sets are described in the entity relationship diagram (ERD) in “Tortuosity Feature Set Preparation”. The image-level feature set includes a marker for the HR severity as well as the optimized segmented images [8]. The vessels were manually labeled by two experts using the Image Labeler app in MATLAB R2017b. The vessel segmentation and AV classification labels were verified by two ophthalmologists at Saqr Hospital and RAK Medical & Health Sciences University, Ras al Khaimah, UAE. Two other ophthalmologists, at RAK Medical & Health Sciences University and the International Medical Center, Ras al Khaimah, UAE, verified the severity-level labels of tortuosity and HR.
Labeling Methodology
To create the AV classification and vessel segmentation ground-truth labels, we used the original color images, the green channel of the images, and grayscale preprocessed images. The vessel-type label set is created according to the process shown in Fig. 11. The labeling of the vessels is performed systematically, starting with manual marking according to a set of physiological characteristics of the retina. Annotation is then performed on a preprocessed version of the original image using the image labeler by two computer vision specialists and an ophthalmologist.
Fig. 11.
Artery–vein labeling methodology
The validation process includes counter-validation by two computer vision specialists and review verification by two ophthalmologists for the class of the vessels. In the first stage, we manually marked the vessels on a preprocessed image. Preprocessing aims to increase the discrimination between the arteries, veins, and background to obtain an enhanced version of the image. Preprocessing is achieved by applying morphological processing, normalization, and intensity averaging to each RGB layer and then mapping the intensity values in the grayscale of each layer to new values by saturating the bottom 2% and the top 3% of all pixel values. This operation increases the contrast of the output preprocessed image. The vessel labeling methodology is completed in the following steps:
The line-operator filter method is utilized to obtain an enhanced version of the vessel image. On the preprocessed image, each artery is marked as (a) and each vein as (v).
The vessels are manually labeled by an expert using the known artery–vein distinguishing features explained in Section 2.1; then, the vessels are annotated by two experts using the image labeler application available in MATLAB as well as the VAMPIRE vessel segmentation tool [42].
Validation is then performed by observers who collaborate during the labeling process to review and finalize the annotation type of entire vessel segments. The label is finally generated from the reviewed annotations.
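The percentile-saturation contrast stretch used in the preprocessing above (saturating the bottom 2% and top 3% of pixel values) can be sketched with NumPy; the function name and the [0, 1] output range are our choices:

```python
import numpy as np

def stretch_contrast(gray, low_pct=2.0, high_pct=3.0):
    """Increase contrast by saturating the bottom 2% and top 3% of
    pixel values and mapping the remainder linearly onto [0, 1]."""
    g = gray.astype(np.float64)
    lo = np.percentile(g, low_pct)           # bottom 2% saturates to 0
    hi = np.percentile(g, 100.0 - high_pct)  # top 3% saturates to 1
    return np.clip((g - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
```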
The next step in the label preparation is the generation of the AV classification labels and the vessel segmentation labels from the original images and the annotations.
The OD ground-truth labels are created by annotating a copy of the original retinal image and saving it as a label. The tortuosity severity and HR severity ground-truth labels are created using custom-made software, a screenshot of which is shown in Fig. 12b; it helps match each image with its tortuosity level using the three labels. The results are produced by two computer vision specialists and then reviewed and finalized by two ophthalmologists from RAK Medical & Health Sciences University and Saqr Hospital in Ras al Khaimah, UAE. The annotation and review software supports the labelers with the details of all the calculated statistics for each retinal image tortuosity metric (see Fig. 12).
Fig. 12.
Ground-truth review screens for (a) vessel segmentation and AV classification labels, (b) OD labels, tortuosity severity labels, and HR severity labels
The validation step ensures complete agreement between the reviewers and the annotators. If there is an issue, the reviewers report it to the annotators, who fix and resend the results until the total agreement is achieved. This approach provides a reliable dataset of AV classification labels for researchers in the field.
Results
This section presents the quantitative results of measuring the arteriovenous width ratio using the median approach and the proposed iterative AVR calculation method. The results include the new scale of severity levels linked with the results of the proposed method. The Materials section covered the addition of the HR severity labels to the 504 images as an extension of the RVM dataset; the analysis concludes by comparing the results with those of previous methods.
Tortuosity Severity-Level Qualitative Results
Figure 13 shows retinal images classified according to the four grades of severity. The difference due to the increase in the tortuosity grades may be observed across normal, mild, moderate, and severe.
Fig. 13.
Sample retinal images for each tortuosity severity level: (1) normal, (2) mild, (3) moderate, (4) severe
Tortuosity Severity-Level Quantitative Results
The tortuosity labels manually prepared according to the method described in “Labeling Methodology” enable us to use the feature set in both supervised and unsupervised learning methods. Hence, we used the dataset in multiple experiments to finalize the tortuosity severity grading. Three experiments were performed using the K-means, decision tree (J48), and ensemble (rotation forest) machine learning models. The results are summarized in Table 3.
Table 3.
Comparison of results of tortuosity severity-level classifiers (J48, ensemble (rotation forest), and K-means)
Model | TP Rate | FP Rate | Precision | Recall | F-Measure | ROC Area |
---|---|---|---|---|---|---|
Decision Tree (J48) | 0.927 | 0.084 | 0.879 | 0.927 | 0.988 | 0.994 |
Ensemble (Rotation Forest) | 0.869 | 0.132 | 0.667 | 0.869 | 0.484 | 0.953 |
K-means | 0.647 | 0.463 | 0.331 | 0.335 | 0.403 | 0.501 |
The generated feature set is first used as the input to the K-means clustering model to create four clusters of severity levels. Two further classification algorithms with high performance in detecting tortuosity severity, decision tree (J48) and ensemble (rotation forest), were then tested. Finally, the J48 classification method was selected and applied to the prepared feature set to classify each image into one of the four tortuosity severity levels (normal, mild, moderate, severe).
The second tortuosity grading model was created by training on the image tortuosity feature dataset using the decision tree (J48) learning model. Learning and evaluation were performed using tenfold cross-validation. The time required to train the model was 0.01 s on a PC with a Core i7 processor and 16 GB RAM. The number of correctly classified records was 467 out of 504 (92.66% accuracy), and the number of incorrectly classified records was 37 (7.34%). The kappa statistic was 0.857, and the mean absolute error was 0.163. This model achieved the best agreement with the human-judgment labels among the three models. The predicted tortuosity grades were compared with the actual manually labeled grades; the results are summarized in the confusion matrix in Table 4.
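The reported accuracy and kappa statistic can be reproduced directly from the confusion matrix in Table 4; a minimal check (pure Python, rows = actual, columns = predicted):

```python
def kappa_and_accuracy(m):
    """Cohen's kappa and overall accuracy from a square confusion matrix."""
    n = sum(sum(row) for row in m)
    po = sum(m[i][i] for i in range(len(m))) / n          # observed agreement
    pe = sum(sum(m[i]) * sum(r[i] for r in m)             # chance agreement
             for i in range(len(m))) / n ** 2
    return (po - pe) / (1 - pe), po

# Confusion matrix from Table 4 (J48 tortuosity grading)
table4 = [[2, 20, 0, 0],
          [0, 269, 0, 0],
          [0, 14, 196, 0],
          [0, 3, 0, 0]]
kappa, acc = kappa_and_accuracy(table4)
# acc = 467/504 (92.66%) and kappa ≈ 0.857, matching the reported values
```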
Table 4.
Decision tree (J48) model results for classifying the RVM dataset retinal images into four grades of tortuosity severity: actual-to-prediction confusion matrix
Actual \ Classified as | 1-No tortuosity | 2-Mild tortuosity | 3-Moderate tortuosity | 4-Severe tortuosity |
---|---|---|---|---|
1-No tortuosity | 2 | 20 | 0 | 0 |
2-Mild tortuosity | 0 | 269 | 0 | 0 |
3-Moderate tortuosity | 0 | 14 | 196 | 0 |
4-Severe tortuosity | 0 | 3 | 0 | 0 |
The third model for classifying the tortuosity severity levels was created with the rotation forest ensemble using J48 as the base learner. The time required to train the model was 215.02 s on a PC with a Core i7 processor and 16 GB RAM. The number of correctly classified records was 438 out of 504 (86.90% accuracy), and the number of incorrectly classified records was 66 (13.10%). The kappa statistic was 0.746. Moreover, the mean absolute error was minimized to 0.045. The final root mean squared error was 0.162, while the relative absolute error was 24.86% and the root relative squared error was 67.69%. Its accuracy (86.90%) is comparable to that of human judgment.
AVR Qualitative Results
All 504 retinal images in the RVM dataset were processed to compute the arteriovenous width ratio. Two ring filters were created for each image to segment the ROI. The first ring filter keeps all the pixels that lie between 2 and 3 times the radius of the OD (ROD), creating a specific ring for each image. Hereafter, we refer to this filter as 2ROD-3ROD; it segments the R1 region of the retina. The second ring filter keeps all the pixels between 2 and 5 times the radius of the OD, creating a ring specific to each image. Hereafter, we refer to this filter as 2ROD-5ROD; it segments the R2 region of the retina.
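A sketch of the ring filters with NumPy, assuming the OD center and radius (ROD) are already known from OD localization; the function name and defaults are ours:

```python
import numpy as np

def ring_mask(shape, center, rod, inner=2.0, outer=3.0):
    """Boolean mask of pixels between inner*ROD and outer*ROD from the
    OD center: the 2ROD-3ROD filter (region R1). Use outer=5.0 for the
    2ROD-5ROD filter (region R2)."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    dist = np.hypot(yy - center[0], xx - center[1])
    return (dist >= inner * rod) & (dist <= outer * rod)
```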
In the second experiment, the 2ROD-3ROD filter was used to segment the retinal ROI between 2R and 3R. Then, the segmented vessels were identified and classified as arteries and veins, followed by the morphometric analysis steps to measure the vessel widths and identify the top six widest arteries and veins, as illustrated in Fig. 15.
Fig. 15.
To compute the arteriovenous width ratio, the widths of the artery/vein fragments within the ROI are calculated: (a) original image, (b) ROI vessel fragments annotated in blue, (c) the fragments classified as arteries and veins, where each fragment is annotated by its fragment number in yellow (arteries) and green (veins) to emphasize the recognized classification, (d) the width of the top six arteries and top six veins, where the label at the center of each fragment represents the calculated width × 10 for better visibility
Figure 14 shows the top six arteries and top six veins in the R1 and R2 regions. The top six arteries and veins in the R1 region are spread around the whole retinal ring, while in the R2 region, the top six arteries and veins do not cover the whole ring despite the greater area of R2. Hence, we have focused on presenting the results for the R1 region only.
Fig. 14.
The top six arteries and top six veins in the segmented 2ROD-5ROD ROI vs. 2ROD-3ROD ROI: (a) original image, (b) 2ROD-5ROD ROI vessel fragments annotated in blue, (c) the widths of the top six arteries and top six veins in the 2ROD-5ROD ROI, (d) the widths of the top six arteries and top six veins in the 2ROD-3ROD ROI, where the label at the center of each fragment represents the calculated width × 10 for better visibility
AVR Quantitative Results
The AVR is then calculated from the Parr–Hubbard formulas using two methods. The first uses the two median artery widths and the two median vein widths to calculate the CRAE, CRVE, and AVR. The second uses the proposed iterative AVR method to calculate the CRAE and CRVE, as explained in the AVR calculation section. The results are summarized in Fig. 16. The box plots make clear that the iterative AVR approach yields AVR values between 0 and 1, with the range of its results narrowed to 0.23-0.59, compared to the AVR median method, whose AVR values vary between 0.29 and 0.82.
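As an illustration of the summarizing step, six calibers can be reduced to one value in three iterations by repeatedly combining the widest with the narrowest, here with the Knudtson revised coefficients (0.88 for arteries, 0.95 for veins); the widths below are made-up pixel values, and the paper's own iterative scheme may differ in detail:

```python
import math

def summarized_caliber(widths, k):
    """Combine six vessel calibers into one summary caliber (CRAE for
    k=0.88, CRVE for k=0.95) by iteratively pairing the widest with the
    narrowest: w_hat = k * sqrt(wa^2 + wb^2)."""
    w = sorted(widths, reverse=True)
    while len(w) > 1:
        nxt = []
        while len(w) > 1:
            a, b = w.pop(0), w.pop()      # widest paired with narrowest
            nxt.append(k * math.sqrt(a * a + b * b))
        nxt.extend(w)                     # odd middle value carries over
        w = sorted(nxt, reverse=True)
    return w[0]

arteries = [98.0, 95.0, 91.0, 88.0, 84.0, 80.0]   # hypothetical widths (px)
veins = [130.0, 126.0, 121.0, 117.0, 112.0, 108.0]
avr = summarized_caliber(arteries, 0.88) / summarized_caliber(veins, 0.95)
```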
Fig. 16.
Box plot comparing (a) the results of the proposed iterative AVR formula vs. those of the AVR median approach using the Parr–Hubbard method and (b) the results of the proposed iterative AVR formula vs. those of the AVR median approach using the Knudtson method on all R1 ROIs in the RVM images
HR Qualitative Results
Figure 17 shows retinal images classified according to the five HR severity levels (0-4). The differences due to increasing HR severity may be observed across the levels.
Fig. 17.
RVM dataset labels for HR: (a) normal (b) severity level 1, (c) severity level 2, (d) severity level 3, (e) severity level 4
HR Quantitative Results
The AVR ranges proposed for the HR severity levels in [1, 32, 43] are mapped to a new set of ranges for each severity level on the basis of the new range of AVR results, as is evident in Figs. 16 and 18a. The new scale values of the HR severity levels are obtained by identifying the (a, b) coordinates at which the severity switches from one level to the next. The new scale was established by proportionally splitting the whole range of the iterative AVR results into five severity intervals, preserving the proportional splitting distances of the AVR median intervals; see Eq. (10), Fig. 18b, and Table 5.
Fig. 18.
Linear regression of results of iterative AVR vs. AVR median: (a) comparison between the proposed AVR 3 iterations formula vs. the AVR median approach using the Parr–Hubbard method, (b) proportional splitting of the HR severity level ranges from the whole range of values in the results of the proposed AVR 3 iterations method with the same proportional splitting distances in the AVR median intervals
Using linear regression, the AVR median and iterative AVR results are represented graphically in Fig. 18a. In addition, Fig. 18b shows the identification of the split points between consecutive severity levels using Eq. (10). This mathematical split concept generates the new scale of HR severity levels from the scale previously used in the literature, the data obtained by measuring the arteriovenous ratio with the Parr–Hubbard formulas, and a geometric mapping that identifies the exact splitting values between the severity levels.
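The proportional splitting can be sketched as a linear remapping of the split points from the AVR-median range onto the iterative-AVR range; the split values and ranges below are illustrative (taken from the box-plot extremes in Fig. 16), not the exact constants of Eq. (10):

```python
def remap_splits(old_splits, old_range, new_range):
    """Map severity split points proportionally from one AVR range
    onto another, preserving their relative positions."""
    (olo, ohi), (nlo, nhi) = old_range, new_range
    scale = (nhi - nlo) / (ohi - olo)
    return [round(nlo + (s - olo) * scale, 3) for s in old_splits]

# Illustrative: median-AVR results span 0.29-0.82, iterative 0.23-0.59
new_splits = remap_splits([0.25, 0.33, 0.5, 0.667], (0.29, 0.82), (0.23, 0.59))
```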
As a result of applying Eq. (10), the HR severity levels are identified (see Table 5) using the mathematical split-point method based on the previous AVR scale. From Table 6, we find that the agreement between the two measures is perfect for severity levels 0 and 1, while it is lower for severity levels 2-4. The reason for the difference is that the proposed grading method considers 12 vessel segments in the arteriovenous width ratio calculation. By contrast, the AVR median method considers only two of the six fragments and ignores the other four segments in the CRAE and CRVE calculation. By including 12 segments, the proposed method can detect abnormalities in the widths of any of these segments, not only in the median vessel segments. The level of agreement between the two measures is quantified in Table 7.
Table 6.
Comparison of the results of the four severity grades of iterative AVR and AVR median methods on the RVM dataset images
Iterative AVR \ AVR median | 0-Normal | Level 1 | Level 2 | Level 3 | Level 4 |
---|---|---|---|---|---|
0-Normal | 122 | 0 | 0 | 0 | 0 |
Level 1 | 0 | 364 | 0 | 0 | 0 |
Level 2 | 0 | 0 | 13 | 0 | 1 |
Level 3 | 0 | 0 | 0 | 2 | 0 |
Level 4 | 0 | 0 | 0 | 2 | 0 |
Table 7.
Proposed iterative AVR method vs. AVR median method: agreement statistics of two measures
Agreement Analysis for: | Precision | Matthews Correlation Coefficient | Kappa | Sensitivity(Recall) | Specificity | Accuracy |
---|---|---|---|---|---|---|
Similarity of the two measures | 0.700 | 0.734 | 0.981 | 0.786 | 0.998 | 0.994 |
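Two of the agreement statistics in Table 7 can be reproduced directly from the cross-tabulation in Table 6 (macro-averaged sensitivity and overall accuracy); a minimal check:

```python
def macro_recall_and_accuracy(m):
    """Macro-averaged recall (sensitivity) and overall accuracy from a
    square agreement matrix (rows = one measure, cols = the other)."""
    n = sum(sum(row) for row in m)
    acc = sum(m[i][i] for i in range(len(m))) / n
    rec = sum(m[i][i] / sum(m[i]) for i in range(len(m))) / len(m)
    return rec, acc

# Cross-tabulation from Table 6 (iterative AVR vs. AVR median levels)
table6 = [[122, 0, 0, 0, 0],
          [0, 364, 0, 0, 0],
          [0, 0, 13, 0, 1],
          [0, 0, 0, 2, 0],
          [0, 0, 0, 2, 0]]
rec, acc = macro_recall_and_accuracy(table6)
# rec ≈ 0.786 and acc ≈ 0.994, matching the sensitivity and accuracy in Table 7
```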
The predicted HR severity levels are compared with the manually labeled severity levels in the confusion matrix in Table 8 and the statistics in Table 9, using the rotation forest machine learning model with the first ophthalmologist's HR severity-level results as the model label. The number of correctly classified records was 488 out of 504 (96.82% accuracy), and the number of incorrectly classified records was 16 (3.17%). The kappa statistic was 0.954. Moreover, the mean absolute error was minimized to 0.11, and the final root mean squared error was 0.162. Its accuracy (96.82%) is comparable to that of human judgment.
Table 8.
HR diagnosis results for classifying the RVM dataset retinal images into the five HR severity grades (0-4): actual-to-prediction confusion matrix
Actual \ Classified as | 0-Normal AVR | Level 1 | Level 2 | Level 3 | Level 4 |
---|---|---|---|---|---|
0-Normal AVR | 55 | 1 | 3 | 0 | 0 |
Level 1 | 0 | 169 | 1 | 0 | 0 |
Level 2 | 0 | 1 | 205 | 0 | 0 |
Level 3 | 0 | 0 | 6 | 56 | 0 |
Level 4 | 0 | 0 | 3 | 1 | 3 |
Table 9.
Hybrid HR diagnosis results for classifying the RVM dataset retinal images into the five HR severity grades (0-4): statistics comparing the hybrid method with the ophthalmologist label
Hybrid HR diagnosis stats: | Precision | Matthews Correlation Coefficient | Kappa | Sensitivity(Recall) | F-Measure | Accuracy |
---|---|---|---|---|---|---|
Rotation forest | 0.970 | 0.968 | 0.953 | 0.955 | 0.967 | 0.968 |
Comparison with Other Methods
Table 10 compares the proposed method with previous methods in terms of HR detection accuracy. The table also lists the dataset used by each method; previous methods relied on local or smaller datasets, whereas the proposed method uses all 504 RVM images to identify the tortuosity severity and AVR severity that contribute to the diagnosis of HR severity.
Table 10.
Comparison of results of the proposed method with those of previous HR diagnostic methods
Author/ (Year) | Image processing/ Methods | Database/ No. of images | Accuracy% |
---|---|---|---|
Manikis et al. (2011) [37] | CLAHE-based vessel segmentation | DRIVE(40) STARE (20) | 93.7 93.1 |
Khitran et al. (2014) [32] | Gabor wavelet, multilayered thresholding, hybrid classifier | VICAVR(58), DRIVE(40) | 96.50 98.00 |
Noronha et al. (2012) [40] | Radon transform for AV classification | DRIVE (40) | 92.00 |
Muramatsu et al. (2010) [38] | Hough transform for OD detection | DRIVE (40) | 75.00 |
Shahzad et al. (2018) [4] | HR grading including OD swelling. | AVRDB (100) | 98.76 |
This work | Semantic FCNN segmentation for AV classification, Iterative AVR calculation | RVM (504) | 96.82 |
Conclusion
This paper presented a novel method for integrated identification of HR severity in retinal images. The proposed method was evaluated on retinal images of our newly developed RVM dataset, where a new set of OD labels, tortuosity severity labels, and HR severity labels were added. Although HR diagnosis is clinically critical, few studies have investigated it. Most existing studies have focused only on AVR severity.
The retinal vessel morphometry (RVM) dataset used in this work will be released publicly. The research community can use this dataset in experiments on hypertensive retinopathy diagnosis, tortuosity severity-level classification, optic disk detection, artery–vein classification, and vessel segmentation. The AV classification and vessel segmentation labels are available in monochrome and semantic color formats. The OD labels are available as annotated retinal images together with an OD feature set containing the radius and center for each image. The tortuosity and hypertensive retinopathy severity levels are provided as a list mapping each image name to its tortuosity severity level and HR severity level. The RVM database will be available at http://vision.seecs.edu.pk/dlav or by emailing the authors; we welcome feedback from users so that the RVM dataset can be kept updated.
We proposed a new method for calculating the arteriovenous width ratio in three iterations. Thus, we established the iterative AVR calculation method and identified new AVR ranges for each HR severity level. Most existing AVR studies have not used the same AVR range to map the five AVR grades and four HR grades. Thus, further investigation is required to identify the most accurate grading classification ranges as well as to correlate these grades with HR severity levels.
The proposed iterative AVR method computes the arteriovenous ratio from the widths of 12 vessel segments (the six widest arteries and six widest veins) in an ROI that covers the entire ring area. This allows the healthiness of all these vessels to be considered in the AVR judgment, instead of relying only on two median widths, which may miss the arterial narrowing or nicking that can occur in non-median vessel fragments.
The proposed hybrid HR severity formula brings the AVR severity scale closer to the KWB medical scale. It combines the AVR and tortuosity severities to identify the HR severity based on the more evidently manifested morphological changes in the retinal image, which are provided as input to the HR severity formula of Eq. (14). The formula currently takes the AV-nicking and OD-swelling severities as zero and will remain applicable once mechanisms to measure these parameters are added in the future. For now, the AVR severity and tortuosity severity can be taken as the inputs to automate the HR severity judgment. Directions for future research include identifying and monitoring the early phases of HR through arterial cross sections and adding the severity levels of AV nicking and OD swelling as further quantification factors to facilitate the diagnosis of HR and cardiovascular disease.
Data Availability
The annotated dataset is available at http://vision.seecs.edu.pk/datasets. This work is part of the first author's PhD thesis, defended on 27 August 2020; the thesis and the related source code are protected under copyright law No. 404-2021 of the Ministry of Economy in the UAE and in 153 countries.
Footnotes
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Contributor Information
Sufian A. Badawi, Email: sufian.badawi@seecs.edu.pk
Muhammad Moazam Fraz, Email: moazam.fraz@seecs.edu.pk.
References
- 1.Abbasi, U.G., Akram, M.U.: Classification of blood vessels as arteries and veins for diagnosis of hypertensive retinopathy. In: 2014 10th International Computer Engineering Conference (ICENCO), pp. 5–9. IEEE (2014)
- 2.Abdullah, M., Fraz, M.M.: Application of grow cut algorithm for localization and extraction of optic disc in retinal images. In: 2015 12th International Conference on High-capacity Optical Networks and Enabling/Emerging Technologies (HONET), pp. 1–5. IEEE (2015)
- 3.Abdullah, M., Fraz, M.M., Barman, S.A.: Localization and segmentation of optic disc in retinal images using circular hough transform and grow-cut algorithm. PeerJ 4, e2003 (2016) [DOI] [PMC free article] [PubMed]
- 4.Akbar, S., Akram, M.U., Sharif, M., Tariq, A., ullah Yasin, U.: Arteriovenous ratio and papilledema based hybrid decision support system for detection and grading of hypertensive retinopathy. Computer methods and programs in biomedicine 154, 123–141 (2018) [DOI] [PubMed]
- 5.Akbar, S., Hassan, T., Akram, M.U., Yasin, U.U., Basit, I.: Avrdb: annotated dataset for vessel segmentation and calculation of arteriovenous ratio (2017)
- 6.AlBadawi, S., Fraz, M.: Arterioles and venules classification in retinal images using fully convolutional deep neural network. In: International Conference Image Analysis and Recognition, pp. 659–668. Springer (2018)
- 7.Badawi, S.A., Fraz, M.M.: Optimizing the trainable b-cosfire filter for retinal blood vessel segmentation. PeerJ 6, e5855 (2018) [DOI] [PMC free article] [PubMed]
- 8.Badawi, S.A., Fraz, M.M.: Multiloss function based deep convolutional neural network for segmentation of retinal vasculature into arterioles and venules. BioMed research international 2019 (2019) [DOI] [PMC free article] [PubMed]
- 9.Basit, A., Fraz, M.M.: Optic disc detection and boundary extraction in retinal images. Applied optics 54(11), 3440–3447 (2015) [DOI] [PubMed]
- 10.Bhargava, M., Wong, T.: Current concepts in hypertensive retinopathy. Retinal Physician 10, 43–54 (2013)
- 11.Bowling, B.: Kanski's clinical ophthalmology: a systematic approach. Saunders Ltd (2015)
- 12.Dash, J., Bhoi, N.: An unsupervised approach for extraction of blood vessels from fundus images. Journal of digital imaging 31(6), 857–868 (2018) [DOI] [PMC free article] [PubMed]
- 13.Decencière, E., Zhang, X., Cazuguel, G., Lay, B., Cochener, B., Trone, C., Gain, P., Ordonez, R., Massin, P., Erginay, A., et al.: Feedback on a publicly distributed image database: the messidor database. Image Analysis & Stereology 33(3), 231–234 (2014)
- 14.Faheem, M.R., Din, M.: Diagnosing hypertensive retinopathy through retinal images. Biomedical Research and Therapy 2(10), 385–388 (2015)
- 15.Fatima, K.N., Hassan, T., Akram, M.U., Akhtar, M., Butt, W.H.: Fully automated diagnosis of papilledema through robust extraction of vascular patterns and ocular pathology from fundus photographs. Biomedical optics express 8(2), 1005–1024 (2017) [DOI] [PMC free article] [PubMed]
- 16.Foracchia, M., Grisan, E., Ruggeri, A.: Luminosity and contrast normalization in retinal images. Medical Image Analysis 9(3), 179–190 (2005) [DOI] [PubMed]
- 17.Fraz, M., Badar, M., Malik, A., Barman, S.: Computational methods for exudates detection and macular edema estimation in retinal images: a survey. Archives of Computational Methods in Engineering pp. 1–28 (2018)
- 18.Fraz, M., Remagnino, P., Hoppe, A., Rudnicka, A.R., Owen, C.G., Whincup, P., Barman, S.: Quantification of blood vessel calibre in retinal images of multi-ethnic school children using a model based approach. Computerized Medical Imaging and Graphics 37(1), 48–60 (2013) [DOI] [PubMed]
- 19.Fraz, M., Rudnicka, A.R., Owen, C.G., Strachan, D., Barman, S.A.: Automated arteriole and venule recognition in retinal images using ensemble classification. In: 2014 International Conference on Computer Vision Theory and Applications (VISAPP), vol. 3, pp. 194–202. IEEE (2014)
- 20.Fraz, M.M., Barman, S.A.: Computer vision algorithms applied to retinal vessel segmentation and quantification of vessel caliber. Image Analysis and Modeling in Ophthalmology 49 (2014)
- 21.Fraz, M.M., Jahangir, W., Zahid, S., Hamayun, M.M., Barman, S.A.: Multiscale segmentation of exudates in retinal images using contextual cues and ensemble classification. Biomedical Signal Processing and Control 35, 50–62 (2017)
- 22.Fraz, M.M., Remagnino, P., Hoppe, A., Uyyanonvara, B., Owen, C.G., Rudnicka, A.R., Barman, S.: Retinal vessel extraction using first-order derivative of gaussian and morphological processing. In: International Symposium on Visual Computing, pp. 410–420. Springer (2011)
- 23.Fraz, M.M., Remagnino, P., Hoppe, A., Uyyanonvara, B., Rudnicka, A.R., Owen, C.G., Barman, S.A.: Blood vessel segmentation methodologies in retinal images - a survey. Computer methods and programs in biomedicine 108(1), 407–433 (2012) [DOI] [PubMed]
- 24.Fraz, M.M., Rudnicka, A.R., Owen, C.G., Barman, S.A.: Delineation of blood vessels in pediatric retinal images using decision trees-based ensemble classification. International journal of computer assisted radiology and surgery 9(5), 795–811 (2014) [DOI] [PubMed]
- 25.Fraz, M.M., Welikala, R., Rudnicka, A.R., Owen, C.G., Strachan, D., Barman, S.A.: Quartz: Quantitative analysis of retinal vessel topology and size–an automated system for quantification of retinal vessels morphology. Expert Systems with Applications 42(20), 7221–7234 (2015)
- 26.Heitmar, R., Kalitzeos, A.A., Panesar, V.: Comparison of two formulas used to calculate summarized retinal vessel calibers. Optometry and Vision Science 92(11), 1085–1091 (2015) [DOI] [PubMed]
- 27.Hu, Q., Abràmoff, M.D., Garvin, M.K.: Automated separation of binary overlapping trees in low-contrast color retinal images. In: International conference on medical image computing and computer-assisted intervention, pp. 436–443. Springer (2013) [DOI] [PubMed]
- 28.Hubbard, L.D., Brothers, R.J., King, W.N., Clegg, L.X., Klein, R., Cooper, L.S., Sharrett, A.R., Davis, M.D., Cai, J., in Communities Study Group, A.R., et al.: Methods for evaluation of retinal microvascular abnormalities associated with hypertension/sclerosis in the atherosclerosis risk in communities study. Ophthalmology 106(12), 2269–2280 (1999) [DOI] [PubMed]
- 29.Irshad, S., Akram, M.U.: Classification of retinal vessels into arteries and veins for detection of hypertensive retinopathy. In: 2014 Cairo International Biomedical Engineering Conference (CIBEC), pp. 133–136. IEEE (2014)
- 30.Kalitzeos, A.A., Lip, G.Y., Heitmar, R.: Retinal vessel tortuosity measures and their applications. Experimental eye research 106, 40–46 (2013)
- 31.Keith, N.: Some different types of essential hypertension: their course and prognosis. Am J Med Sci 268, 336–345 (1974)
- 32.Khitran, S., Akram, M.U., Usman, A., Yasin, U.: Automated system for the detection of hypertensive retinopathy. In: 2014 4th International Conference on Image Processing Theory, Tools and Applications (IPTA), pp. 1–6. IEEE (2014)
- 33.Knudtson, M.D., Lee, K.E., Hubbard, L.D., Wong, T.Y., Klein, R., Klein, B.E.: Revised formulas for summarizing retinal vessel diameters. Current eye research 27(3), 143–149 (2003)
- 34.Li, X., Wee, W.G.: Retinal vessel detection and measurement for computer-aided medical diagnosis. Journal of digital imaging 27(1), 120–132 (2014)
- 35.Lotmar, W., Freiburghaus, A., Bracher, D.: Measurement of vessel tortuosity on fundus photographs. Albrecht von Graefes Archiv für klinische und experimentelle Ophthalmologie 211(1), 49–57 (1979)
- 36.Maddah, M., Soltanian-Zadeh, H., Afzali-Kusha, A., Shahrokni, A., Zhang, Z.G.: Three-dimensional analysis of complex branching vessels in confocal microscopy images. Computerized Medical Imaging and Graphics 29(6), 487–498 (2005)
- 37.Manikis, G.C., Sakkalis, V., Zabulis, X., Karamaounas, P., Triantafyllou, A., Douma, S., Zamboulis, C., Marias, K.: An image analysis framework for the early assessment of hypertensive retinopathy signs. In: 2011 E-Health and Bioengineering Conference (EHB), pp. 1–6. IEEE (2011)
- 38.Muramatsu, C., Hatanaka, Y., Iwase, T., Hara, T., Fujita, H.: Automated detection and classification of major retinal vessels for determination of diameter ratio of arteries and veins. In: Medical Imaging 2010: Computer-Aided Diagnosis, vol. 7624, p. 76240J. International Society for Optics and Photonics (2010)
- 39.Nguyen, U.T., Bhuiyan, A., Park, L.A., Kawasaki, R., Wong, T.Y., Wang, J.J., Mitchell, P., Ramamohanarao, K.: An automated method for retinal arteriovenous nicking quantification from color fundus images. IEEE Transactions on Biomedical Engineering 60(11), 3194–3203 (2013)
- 40.Noronha, K., Navya, K., Nayak, K.P.: Support system for the automated detection of hypertensive retinopathy using fundus images. In: International Conference on Electronic Design and Signal Processing (ICEDSP), pp. 7–11 (2012)
- 41.van Overveld, I.M.: Contrast, noise, and blur affect performance and appreciation of digital radiographs. Journal of digital imaging 8(4), 168 (1995)
- 42.Perez-Rovira, A., MacGillivray, T., Trucco, E., Chin, K., Zutis, K., Lupascu, C., Tegolo, D., Giachetti, A., Wilson, P.J., Doney, A., et al.: Vampire: vessel assessment and measurement platform for images of the retina. In: 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 3391–3394. IEEE (2011)
- 43.Rani, A., Mittal, D.: Measurement of arterio-venous ratio for detection of hypertensive retinopathy through digital color fundus images. Journal of Biomedical Engineering and Medical Imaging 2(5), 35 (2015)
- 44.Rodrigues, L.C., Marengoni, M.: Segmentation of optic disc and blood vessels in retinal images using wavelets, mathematical morphology and hessian-based multi-scale filtering. Biomedical Signal Processing and Control 36, 39–49 (2017)
- 45.Roy, P.K., Nguyen, U.T., Bhuiyan, A., Ramamohanarao, K.: An effective automated system for grading severity of retinal arteriovenous nicking in colour retinal images. In: 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 6324–6327. IEEE (2014)
- 46.Sathananthavathi, V., Indumathi, G., et al.: Parallel architecture of fully convolved neural network for retinal vessel segmentation. Journal of digital imaging 33(1), 168–180 (2020)
- 47.Son, J., Park, S.J., Jung, K.: Towards accurate segmentation of retinal vessels and the optic disc in fundoscopic images with generative adversarial networks. Journal of digital imaging 32(3), 499–512 (2019)
- 48.Stabingis, G., Bernatavičienė, J., Dzemyda, G., Paunksnis, A., Stabingienė, L., Treigys, P., Vaičaitienė, R.: Adaptive eye fundus vessel classification for automatic artery and vein diameter ratio evaluation. Informatica 29(4), 757–771 (2018)
- 49.Sun, C., Wang, J.J., Mackey, D.A., Wong, T.Y.: Retinal vascular caliber: systemic, environmental, and genetic associations. Survey of ophthalmology 54(1), 74–95 (2009)
- 50.Suzuki, Y.: Direct measurement of retinal vessel diameter: comparison with microdensitometric methods based on fundus photographs. Survey of ophthalmology 39, S57–S65 (1995)
- 51.Ünver, H.M., Kökver, Y., Duman, E., Erdem, O.A.: Statistical edge detection and circular hough transform for optic disk localization. Applied Sciences 9(2), 350 (2019)
- 52.Wegmann-Burns, M., Gugger, M., Goldblum, D.: Hypertensive retinopathy. The Lancet 363(9407), 456 (2004)
- 53.Welikala, R., Fraz, M., Williamson, T., Barman, S.: The automated detection of proliferative diabetic retinopathy using dual ensemble classification. International Journal of Diagnostic Imaging 2(2), 64–71 (2015)
- 54.Wolz, J., Audebert, H., Laumeier, I., Ahmadi, M., Steinicke, M., Ferse, C., Michelson, G.: Telemedical assessment of optic nerve head and retina in patients after recent minor stroke or TIA. International ophthalmology 37(1), 39–46 (2017)
- 55.Xu, X., Ding, W., Abràmoff, M.D., Cao, R.: An improved arteriovenous classification method for the early diagnostics of various diseases in retinal image. Computer methods and programs in biomedicine 141, 3–9 (2017)
- 56.Yang, T., Wu, T., Li, L., Zhu, C.: Sud-gan: Deep convolution generative adversarial network combined with short connection and dense block for retinal vessel segmentation. Journal of Digital Imaging, pp. 1–12 (2020)
- 57.Zahoor, M.N., Fraz, M.M.: Fast optic disc segmentation in retina using polar transform. IEEE Access 5, 12293–12300 (2017)
Associated Data
Data Availability Statement
The annotated dataset is available at http://vision.seecs.edu.pk/datasets. This work is part of the first author's PhD thesis, defended on 27 August 2020; the thesis and the related source code are protected by copyright law No. 404-2021, registered with the Ministry of Economics in the UAE and in 153 countries.