Author manuscript; available in PMC: 2018 Jun 21.
Published in final edited form as: Proc SPIE Int Soc Opt Eng. 2018 Feb 19;10474:1047426. doi: 10.1117/12.2290121

Automated classification and quantitative analysis of arterial and venous vessels in fundus images

Minhaj Alam 1, Taeyoon Son 1, Devrim Toslak 1, Jennifer I Lim 2, Xincheng Yao 1,2
PMCID: PMC6013046  NIHMSID: NIHMS975313  PMID: 29937615

Abstract

It is known that retinopathies may affect arteries and veins differently. Therefore, reliable differentiation of arteries and veins is essential for computer-aided analysis of fundus images. The purpose of this study is to validate an automated method for robust classification of arteries and veins (A-V) in digital fundus images. We combine optical density ratio (ODR) analysis and a blood vessel tracking algorithm to classify arteries and veins. A matched filtering method is used to enhance retinal blood vessels. Bottom-hat filtering and global thresholding are used to segment the vessel map and skeletonize individual blood vessels. The vessel tracking algorithm is used to locate the optic disk and to identify the source nodes of blood vessels in the optic disk area. Each node is identified as vein or artery using ODR information. Using the source nodes as starting points, the whole vessel trace is then tracked and classified as vein or artery using vessel curvature and angle information. 50 color fundus images from diabetic retinopathy patients were used to test the algorithm. Sensitivity, specificity, and accuracy metrics were measured to assess the validity of the proposed classification method compared to ground truths created by two independent observers. The algorithm demonstrated 97.52% accuracy in identifying blood vessels as vein or artery. A quantitative analysis upon A-V classification showed that the average A-V ratio of width for NPDR subjects with hypertension decreased significantly (by 43.13%).

Keywords: Artery-vein classification, retinal imaging, optical density ratio, diabetic retinopathy

1. INTRODUCTION

Systemic and cardiovascular diseases, along with common eye diseases, directly affect the retinal microvasculature [1–3]. Quantitative fundus imaging is valuable for screening, diagnosis, and treatment assessment of eye diseases. Objective and automated classification of vasculature changes in fundus images holds great potential to help physicians in decision-making, to foster telemedicine, and to enable early screening of eye diseases in primary care environments [4].

Different diseases and progressing stages may affect arteries and veins differently. For example, arterial narrowing is a well-established phenomenon associated with hypertension, whereas venous widening is associated with stroke and cardiovascular diseases [5–8]. Many studies have suggested artery-vein (A-V) width as a predictor of these diseases [3, 5, 6, 9, 10]. However, manually identifying arteries and veins is a time-consuming and laborious task. Therefore, a number of algorithms have been proposed to explore automated A-V classification [11–20]. This requires several steps: extracting the blood vessel map with precise tools, identifying arteries and veins, quantifying any changes, and assessing the condition of the patient. A large number of vessel classification algorithms are based on the color and intensity information of arteries and veins [13–16, 18–20]. Because of the presence of oxygenated blood, arteries have lighter color intensity. However, this difference becomes less significant as the blood vessels propagate toward the fovea. Therefore, small vessels have to be tracked to their origin near the optic disk to be classified [21]. Semiautomatic algorithms based on vessel tracking techniques have been proposed [11, 12, 17]. In the case of supervised classification [16, 22, 23], intra- and inter-image illumination variation makes it quite challenging to achieve high accuracy in A-V classification. Furthermore, these algorithms require a large number of training sets with manual annotations from clinicians. Some researchers have incorporated functional features such as the optical density ratio (ODR) to identify arteries and veins in dual-wavelength images obtained in the red and green channels [23–25]. However, high sensitivity was only achieved for large vessels, leaving reliable A-V classification difficult for small vessels in the macular area.

In this work, we introduce an automated method that combines ODR analysis and a blood vessel tracking algorithm to allow A-V and arteriole-venule classification. As a functional feature, the ODR is used to identify arteries and veins near the optic disk, while a vessel tracking algorithm maps each vein or artery from its source to its endpoints using vessel curvature and angle information. Incorporating a vessel enhancement algorithm with the tracking algorithm enables reliable arteriole-venule classification. We implemented the method on 50 color fundus images from 35 non-proliferative diabetic retinopathy (NPDR) patients and validated the results against manual annotations from two independent observers. We also measured a quantitative feature, the A-V ratio of width, to evaluate the effect of hypertension on the retinal vessels of NPDR patients.

2. MATERIALS AND METHODS

This section describes the algorithms for the fully automated classification of veins and arteries in color fundus images. Fig. 1 briefly illustrates the core steps of the algorithm. The technical details of the methodology are described in the following sections.

Figure 1. Core steps of A-V identification.

2.1 Data acquisition

50 color fundus images (from 35 patients) with a resolution of 2392 × 2048 pixels were used for this study. The images were captured with a Cirrus 800 nonmydriatic retinal camera with a field of view of 30–45 degrees. The database contains color fundus images from subjects with non-proliferative diabetic retinopathy (NPDR). All patients were recruited from the University of Illinois at Chicago (UIC) Retinal Clinic. The study was approved by the Institutional Review Board of the University of Illinois at Chicago and was in compliance with the ethical standards stated in the Declaration of Helsinki. All images were labeled by two observers (authors DT and JIL) to generate a ground truth against which to compare the classification results. Both observers are highly trained ophthalmologists (more than 15 years of experience) involved in retinal imaging and analysis for clinical research.

2.2 Extraction of the vessel map

Figure 2 shows the steps of segmenting the blood vessel map from the original fundus image (Fig. 2A). The green channel (Fig. 2B) was used for segmentation as it provides better contrast for the blood vessels. A matched filtering method [26] is used to enhance the blood vessels relative to the background. 2D Gaussian kernels with 12 different orientations and 10 different sizes are used to match blood vessels of all directions and diameters. A total of 120 kernels are convolved with the green channel image after subtracting their means. The final vessel-enhanced image (Fig. 2C) is obtained as the maximum intensity projection of the 120 convolved images. A 20 × 20 bottom-hat filter is used to reduce background variance and correct uneven illumination. Then, global thresholding is used to generate the extracted blood vessel map (Fig. 2D). This binary vessel map was then skeletonized. The skeletonization process removes pixels on the boundaries of vessels but does not allow objects to break apart [27, 28]. The remaining pixels make up the image skeleton (Fig. 2E). All endpoints of the skeleton are identified. The skeleton and the endpoints serve as inputs to the vessel tracking algorithm.
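The following Python sketch illustrates a pipeline of this kind. It is a minimal approximation under stated assumptions, not the authors' implementation: the kernel length, the use of Otsu's method as the global threshold, the placement of the morphological flattening step, and the file name fundus.png are all assumptions for illustration.

```python
import numpy as np
from scipy import ndimage
from skimage import io, filters, morphology

def matched_filter_kernel(sigma, length=15):
    """Zero-mean Gaussian matched filter (Chaudhuri et al. [26] style) for a
    dark vessel of width ~2*sigma on a brighter background."""
    half = int(3 * sigma)
    x = np.arange(-half, half + 1)
    profile = -np.exp(-x ** 2 / (2.0 * sigma ** 2))   # negative Gaussian cross-section
    kernel = np.tile(profile, (length, 1))            # extend the profile along the vessel axis
    return kernel - kernel.mean()                     # subtract the mean, as in the paper

def enhance_vessels(green, sigmas, n_angles=12):
    """Maximum response over a bank of rotated matched filters
    (12 orientations x 10 sizes = 120 kernels)."""
    green = green.astype(float)
    response = np.full(green.shape, -np.inf)
    for sigma in sigmas:
        base = matched_filter_kernel(sigma)
        for angle in np.linspace(0.0, 180.0, n_angles, endpoint=False):
            k = ndimage.rotate(base, angle, reshape=True, order=1)
            response = np.maximum(response, ndimage.convolve(green, k))
    return response

rgb = io.imread("fundus.png")                         # hypothetical file name
green = rgb[:, :, 1]                                  # green channel (Fig. 2B)
enhanced = enhance_vessels(green, sigmas=np.linspace(1.0, 5.0, 10))   # Fig. 2C

# The paper uses a 20x20 bottom-hat filter to flatten the background; since vessels
# are bright in the matched-filter response, the equivalent step here is a white
# top-hat with a 20x20 footprint (an assumption about its exact placement).
flat = morphology.white_tophat(enhanced, footprint=np.ones((20, 20)))
vessel_map = flat > filters.threshold_otsu(flat)      # global threshold (Fig. 2D)
vessel_map = morphology.remove_small_objects(vessel_map, min_size=50)
skeleton = morphology.skeletonize(vessel_map)         # Fig. 2E
```

Skeleton endpoints can then be found by counting skeleton neighbors of each skeleton pixel: a pixel with exactly one neighbor is an endpoint.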

Figure 2. (A) Original fundus image, (B) green channel, (C) enhanced green channel, (D) vessel map, (E) skeleton, (F) vein (blue) and artery (red) identified in the skeleton, (G) classified vein and artery map.

2.3 Identifying vessel source nodes

The vessel tracking algorithm starts from the source of each vein or artery. To identify the source nodes, it is necessary to locate the optic disk, or at least the area of the fundus image where the optic disk is located. The algorithm uses a simple intensity-based method to identify this area. Optic disk tissues have no pigmentation and hence reflect more light than other tissues in the retina [29], so the optic disk area is typically the brightest region of a fundus image. A 30 × 30 sliding window is moved through the image and the mean intensity is calculated for each window position. The algorithm then looks for a cluster of neighboring windows with high intensity values. That area is identified as optic disk tissue (orange circle in Fig. 2A) and the center of the area is marked (black cross in Fig. 2A). From the center, a circular path is drawn whose diameter is twice the approximate optic disk diameter. This path acts as a gradient line: wherever it crosses a blood vessel, the intensity of those pixels differs from the background. Using this gradient intensity information, the blood vessel spots are identified (green crosses in Fig. 2A).
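A minimal sketch of this localization and source-spot detection step follows, assuming the scipy/numpy stack from the previous listing. The gradient threshold, the number of circle samples, and the way the disk diameter is supplied are illustrative assumptions rather than the authors' parameters.

```python
import numpy as np
from scipy import ndimage

def locate_optic_disk(green, win=30):
    """Approximate the 30x30 sliding-window search: compute the windowed mean
    everywhere and take the brightest location as the optic disk center."""
    mean_map = ndimage.uniform_filter(green.astype(float), size=win)
    cy, cx = np.unravel_index(np.argmax(mean_map), mean_map.shape)
    return cy, cx

def vessel_spots_on_circle(green, center, disk_diameter, n_samples=720, grad_thresh=8.0):
    """Sample intensity along a circle whose diameter is twice the approximate optic
    disk diameter and flag abrupt intensity changes as candidate vessel source spots."""
    cy, cx = center
    radius = disk_diameter                      # circle diameter = 2 x disk diameter
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    ys = np.clip(np.round(cy + radius * np.sin(theta)).astype(int), 0, green.shape[0] - 1)
    xs = np.clip(np.round(cx + radius * np.cos(theta)).astype(int), 0, green.shape[1] - 1)
    profile = green[ys, xs].astype(float)
    grad = np.abs(np.gradient(profile))         # plays the role of the gradient line
    hits = grad > grad_thresh                   # illustrative threshold
    return list(zip(ys[hits], xs[hits]))
```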

2.4 Classifying vein and artery source nodes

The blood vessel source nodes are classified into vein and artery using a functional feature: ODR [21]. Optical density (OD) and ODR were calculated for each source node. OD is an indicator of the light absorbance of vessel tissue relative to its surrounding background and is calculated as [21],

OD = \ln\left(\frac{I_{vessel}}{I_{background}}\right)    (1)

where I_vessel is the pixel value inside the vessel source node and I_background is the pixel value of the background. ODR is calculated as [21],

ODR = \frac{OD_{red} - OD_{green}}{OD_{green}}    (2)

OD_red and OD_green are the ODs in the red (oxygen-sensitive) and green (oxygen-insensitive) channels, respectively. Light at shorter wavelengths (green channel) is absorbed nearly equally by oxyhemoglobin (artery) and deoxyhemoglobin (vein) and is thus insensitive to oxygen content [21, 30]. The red channel, in contrast, is sensitive to oxygen. Because of this, the ODR of veins is lower than that of arteries. The algorithm identifies source nodes with high ODR as arteries and those with low ODR as veins (veins are marked with blue crosses and arteries with red crosses in Fig. 2B).
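The per-node ODR computation can be sketched as below. How the background region is chosen is not detailed in the paper, so the node and background masks (for example, an annulus around the node) are left to the caller as an assumption, as is the median split used for labeling.

```python
import numpy as np

def optical_density(channel, node_mask, background_mask):
    """OD = ln(I_vessel / I_background), Eq. (1); pixel values are averaged over the masks."""
    i_vessel = channel[node_mask].mean()
    i_background = channel[background_mask].mean()
    return np.log(i_vessel / i_background)

def optical_density_ratio(rgb, node_mask, background_mask):
    """ODR = (OD_red - OD_green) / OD_green, Eq. (2)."""
    od_red = optical_density(rgb[:, :, 0].astype(float), node_mask, background_mask)
    od_green = optical_density(rgb[:, :, 1].astype(float), node_mask, background_mask)
    return (od_red - od_green) / od_green

# Per the paper, source nodes with high ODR are labeled arteries and those with low ODR
# are labeled veins; one simple (assumed) rule is to split the nodes at the median ODR.
```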

2.5 Vessel tracking algorithm

The skeleton map of the blood vessels and the identified A-V source nodes are the inputs to a vessel tracking algorithm that produces the final vein and artery map of the retinal vasculature. Figure 3 describes the steps of the tracking algorithm.

Figure 3. Core steps of the vessel tracking algorithm.

Tracking starts from a specific source node (shown in Fig. 2E). The algorithm uses a 3 × 3 grid to find vessel pixels along its way; if it cannot find any vessel pixels, it increases the size of the grid. The algorithm tracks the main branch of the blood vessel first. At every intersection, it uses curvature and angle information to choose the forward-going main branch. The curvature can be quantified using the distance metric [31–33] between two points, calculated as,

Curvature = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{\text{Geodesic distance between the two endpoints of a vessel branch}}{\text{Euclidean distance between the two endpoints of a vessel branch}}\right)    (3)

Geodesic and Euclidean distances are calculated as follows,

\text{Geodesic distance} = \int_{t_0}^{t_1}\sqrt{\left(\frac{dx(t)}{dt}\right)^2 + \left(\frac{dy(t)}{dt}\right)^2}\,dt    (4)

\text{Euclidean distance} = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}    (5)

The path with the smaller curvature value is identified as the main branch of the vessel to follow forward. The algorithm also takes into account the angle information of the candidate paths: a larger continuation angle indicates the straight continuation of the current vessel rather than a turn at the intersection, which helps identify the main branch.
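A sketch of the curvature criterion of Eqs. (3)-(5), computed on ordered skeleton pixel chains, is shown below. The discrete arc length stands in for the geodesic integral, and the angle test described in the text is omitted; these simplifications are assumptions of the sketch.

```python
import numpy as np

def geodesic_distance(branch):
    """Discrete arc length along an ordered (row, col) pixel chain, approximating Eq. (4)."""
    pts = np.asarray(branch, dtype=float)
    return np.hypot(np.diff(pts[:, 0]), np.diff(pts[:, 1])).sum()

def euclidean_distance(branch):
    """Straight-line distance between the two endpoints of the branch, Eq. (5)."""
    p0, p1 = np.asarray(branch[0], float), np.asarray(branch[-1], float)
    return np.hypot(p0[0] - p1[0], p0[1] - p1[1])

def curvature(branch):
    """Geodesic-to-Euclidean ratio for one branch, as in Eq. (3); ~1 for a straight segment."""
    return geodesic_distance(branch) / max(euclidean_distance(branch), 1e-9)

def pick_main_branch(candidate_branches):
    """At an intersection, the candidate with the smallest curvature is treated as the
    forward-going main branch (the angle check from the text is not shown here)."""
    return min(candidate_branches, key=curvature)
```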

All intersections along the way are marked so that tracking can be resumed from the intersection nodes once the main branch has been tracked to its endpoint. After the main branch reaches the endpoint, the algorithm returns to each intersection and follows the same procedure to track all vessel branches associated with the main branch. The algorithm follows the flow chart illustrated in Figure 3 to complete the tracking of each branch (one such branch is identified in Figure 2E). After tracking all branches of a vein or artery, the same process starts for the next source node until all vessels are tracked (Fig. 2F). All branches belonging to a tracked vessel are classified as vein or artery based on the identity of the respective source node. The only remaining branches are the ones the algorithm failed to classify in this way. The algorithm measures textural parameters of the artery map and the vein map and compares them with those of each unclassified branch; the unclassified branches are then assigned to artery or vein based on their similarity to either map. Once the whole skeleton map is classified into vein and artery, it is used to generate a vessel map with fully identified veins and arteries (Fig. 2G).
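The overall traversal can be sketched as a stack-based walk over the skeleton: each classified source node seeds a traversal that labels every reachable branch pixel with the source's class and revisits intersections after the current branch ends. This is a simplified stand-in for the procedure of Fig. 3; the curvature/angle-based main-branch selection and the texture-based handling of unclassified branches are omitted.

```python
import numpy as np

def track_from_source(skeleton, source, label, labels):
    """Label all skeleton pixels reachable from `source` with `label`
    (1 = artery, 2 = vein); intersections are kept on the stack and resumed later."""
    h, w = skeleton.shape
    stack = [tuple(source)]
    while stack:
        y, x = stack.pop()
        if not skeleton[y, x] or labels[y, x]:
            continue
        labels[y, x] = label
        for yy in range(max(y - 1, 0), min(y + 2, h)):      # 3x3 neighborhood
            for xx in range(max(x - 1, 0), min(x + 2, w)):
                if (yy, xx) != (y, x) and skeleton[yy, xx] and not labels[yy, xx]:
                    stack.append((yy, xx))                   # >1 push marks an intersection
    return labels

# labels = np.zeros(skeleton.shape, dtype=np.uint8)          # 0 = unvisited
# for node, cls in classified_source_nodes:                  # cls from the ODR step
#     track_from_source(skeleton, node, cls, labels)
```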

3. RESULTS

3.1 A-V classification

A dataset of 50 color fundus images is used to test and validate the proposed classification method. The vein and artery classification results are compared to the ground truth vessel maps created by the two observers, who had 98.14% agreement on the identified vein and artery maps. To evaluate the performance of the proposed method, sensitivity, specificity, and accuracy metrics are measured separately for arteries and veins with respect to the labeled ground truths. A detailed performance analysis is shown in Table 1. The algorithm classifies not only arteries and veins but also arterioles and venules, which are much smaller in diameter and mostly located near the fovea. With the incorporation of the blood vessel enhancement technique, we observed an average 15% increase in the extracted vessel map compared to the original map. The algorithm demonstrates 97.02% accuracy in identifying blood vessels as vein or artery. Sensitivity and specificity of artery identification are 97.47% and 95.96%, respectively. Sensitivity and specificity of vein identification are 97.84% and 96.38%, respectively. The algorithm misclassified only 2.48% of vessels, compared to 8%, 9.92%, and 11.72% reported in the literature [36–38].

Table 1. Performance of A-V classification

Performance Measure               Arteries   Veins   All vessels
Sensitivity (%)                   97.47      97.84   97.66
Specificity (%)                   95.96      96.38   96.17
Classification accuracy (%)       97.43      96.61   97.02
Classification error rate (%)      2.57       3.39    2.98
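For reference, these evaluation metrics follow the standard confusion-matrix definitions; the paper does not spell out the formulas, so the helper below reflects the conventional ones.

```python
def classification_metrics(tp, fn, tn, fp):
    """Standard definitions: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    accuracy = (TP+TN)/(TP+FN+TN+FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy
```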

3.2 Quantitative analysis

For quantitative analysis of the classified veins and arteries, the A-V ratio of width is measured. It is known that hypertension causes arterial narrowing and, in some cases, causes the veins to become more tortuous [5, 34]. With arteries and veins separated, A-V ratios are straightforward to compute and analyze for identifying patients with hypertension.

For measuring blood vessel width (artery or vein), both the vessel map and the skeleton are used. The average width of the blood vessels is defined as the ratio of the vascular area (calculated from the vessel map) to the vascular length (calculated from the skeleton) [33, 35].

\text{Mean vessel width} = \frac{\sum_{i=1,j=1}^{n} B(i,j)}{\sum_{i=1,j=1}^{n} S(i,j)}    (6)

where B(i,j) represents vessel pixels and S(i,j) represents skeleton pixels.
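Eq. (6) and the A-V ratio of width reduce to pixel counts on the classified maps. The sketch below assumes, as is conventional for arteriolar-to-venular ratios, that the ratio is mean artery width divided by mean vein width; the paper does not state the exact convention.

```python
import numpy as np

def mean_vessel_width(vessel_map, skeleton):
    """Eq. (6): vascular area (vessel-map pixels) divided by vascular length (skeleton pixels)."""
    return float(np.count_nonzero(vessel_map)) / float(np.count_nonzero(skeleton))

def av_width_ratio(artery_map, artery_skeleton, vein_map, vein_skeleton):
    """A-V ratio of width, assumed here to be mean artery width over mean vein width."""
    return (mean_vessel_width(artery_map, artery_skeleton)
            / mean_vessel_width(vein_map, vein_skeleton))
```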

In our database, 24 of the 35 NPDR patients had hypertension. Figure 4 illustrates the mean A-V ratio of width (Fig. 4A). The average A-V ratio of width was 0.73 for subjects without hypertension compared to 0.51 for subjects with hypertension, a significant 43.13% decrease in patients with hypertension (p < 0.001, Cohen's d = 4.39). The average width of the whole blood vessel map (Fig. 4B) was also measured without separating veins and arteries. The sensitivity of this parameter was quite low compared to the result obtained with veins and arteries separated: for patients with hypertension, the average width decreased by only 12%, compared to the 43.13% decrease in the A-V ratio of width.
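The significance test and effect size reported above can be reproduced with a two-sample t-test and Cohen's d. The sketch below uses the pooled-standard-deviation form of Cohen's d, which is one common convention; the group arrays are placeholders for the per-image A-V width ratios.

```python
import numpy as np
from scipy import stats

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled standard deviation (one common convention)."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

# avr_no_htn, avr_htn: per-image A-V width ratios for the two groups (placeholders)
# t_stat, p_value = stats.ttest_ind(avr_no_htn, avr_htn)
# effect_size = cohens_d(avr_no_htn, avr_htn)
```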

Figure 4. Quantitative comparison between patients without hypertension and with hypertension. (A) A-V ratio of width. (B) Average width of the whole vessel map. Error bars are standard deviations; t-test significance is marked as: * p < 0.05; ** p < 0.01; *** p < 0.001.

4. DISCUSSION

This study demonstrates an automated technique to classify retinal blood vessels into artery and vein categories using a combination of ODR information and a vessel tracking algorithm. We implemented the algorithm on 50 color fundus images from 35 NPDR patients and compared the results with ground truths created by two independent observers. Sensitivity, specificity, and accuracy metrics were used to validate the classification results. The algorithm identified arteries and veins with 97.02% accuracy. Upon classification of arteries and veins, the A-V ratio of width was measured to analyze the effect of hypertension in NPDR patients. About 68% of the population was diagnosed with hypertension. We observed that patients with hypertension had a 43.13% reduction in the A-V ratio of width, consistent with arterial narrowing. These microvascular changes were significant, as validated with a Student's t-test and calculation of Cohen's d.

The proposed technique for A-V classification has two major steps. In the first step, the algorithm finds the source nodes of blood vessels coming out of the optic disk and classifies them as artery or vein. The algorithm takes advantage of the significant morphological contrast observed in the optic disk area, which avoids the challenge of illumination changes across retinal fundus images, especially near the foveal area. The algorithm automatically locates the optic disk using intensity-based information and finds the blood vessels in its outer periphery using a gradient-based measurement. As mentioned in [23], the ODR computed from the dual-wavelength red and green channels differs significantly between arteries and veins, because arteries contain oxygenated blood and thus appear lighter in intensity. This feature is less effective near the foveal area, where the blood vessels become narrower and their central reflex becomes negligible. However, the ODR is quite useful near the optic disk, where the vessels are wider and have distinguishable central reflexes. The algorithm therefore uses the ODR as a functional feature to classify the source nodes of blood vessels as artery or vein. In the first phase, in parallel with the classification of source nodes, the algorithm also employs a matched-filtering-based edge enhancement technique that extracts and segments the blood vessel map in intricate detail. This ensures the classification of thinner arterioles and venules along with the regular veins and arteries. With the comprehensive blood vessel map and the classified source nodes, the algorithm moves to its second phase, the vessel tracking algorithm, which maps each artery or vein from its identified source node to its endpoints. The algorithm uses vessel curvature and angle information to track the whole vessel in a systematic manner: first it tracks the main branch while locating the intersections, then it repeats the tracking for all branches coming out of the vessel, except for those with four-way intersections. Four-way intersections are challenging because they can be crossovers with other blood vessels (artery or vein), so the algorithm makes a decision based on morphological feature extraction and comparison with the candidate branches. The details of the tracking algorithm have been discussed in earlier sections and are illustrated in Fig. 3.

The widespread interest in A-V classification is linked directly to its potential application in clinical assessment. With accurate and robust identification of arteries and veins, subtle microvascular changes in the retina can be analyzed for different systemic and retinal diseases. To show such an application, we considered the A-V ratio of width as a quantitative feature for observing microvascular changes in the patient retina. Our database consists of 35 NPDR patients, of whom 24 had pre-diagnosed hypertension. Because arterial narrowing has been correlated with hypertension by many studies, we chose this particular quantitative feature. We observed a significant decrease in the A-V ratio of width in hypertension patients. To compare the sensitivity of the A-V ratio of width, we also measured the average width of the whole blood vessel map and observed only a 12% decrease in patients with hypertension, compared to the 43.13% decrease in the A-V ratio. This further confirms that quantitative features show improved sensitivity when applied to separated veins and arteries. With the A-V classification technique, many other features, such as the A-V ratio of tortuosity, can be optimally quantified for patient databases with different diseases in future studies.

In conclusion, an automated A-V classification method was proposed that combines ODR analysis and a vessel tracking algorithm to identify arteries and veins in the retinal vasculature. The algorithm was applied to 50 DR fundus images and performed with high accuracy in A-V classification. The A-V ratio of width was quantitatively measured to differentiate fundus images from NPDR patients with and without hypertension; NPDR patients with hypertension showed a significant decrease in the A-V ratio of width.

Acknowledgments

This research was supported in part by NIH grants R01 EY023522, R01 EY024628, and P30 EY001792; by an unrestricted grant from Research to Prevent Blindness; by the Richard and Loan Hill endowment; by the Marion H. Schenck Chair endowment; and by an ICTD fellowship from the Government of Bangladesh.

References

1. Abràmoff MD, Garvin MK, Sonka M. Retinal imaging and image analysis. IEEE Reviews in Biomedical Engineering. 2010;3:169–208. doi: 10.1109/RBME.2010.2084567.
2. Wong TY, Klein R, Klein BE, et al. Retinal microvascular abnormalities and their relationship with hypertension, cardiovascular disease, and mortality. Survey of Ophthalmology. 2001;46(1):59–80. doi: 10.1016/s0039-6257(01)00234-x.
3. Wong TY, Knudtson MD, Klein R, et al. Computer-assisted measurement of retinal vessel diameters in the Beaver Dam Eye Study: methodology, correlation between eyes, and effect of refractive errors. Ophthalmology. 2004;111(6):1183–1190. doi: 10.1016/j.ophtha.2003.09.039.
4. Patton N, Aslam TM, MacGillivray T, et al. Retinal image analysis: concepts, applications and potential. Progress in Retinal and Eye Research. 2006;25(1):99–127. doi: 10.1016/j.preteyeres.2005.07.001.
5. Ikram MK, Witteman JC, Vingerling JR, et al. Retinal vessel diameters and risk of hypertension. Hypertension. 2006;47(2):189–194. doi: 10.1161/01.HYP.0000199104.61945.33.
6. Liew G, Wong TY, Mitchell P, et al. Retinopathy predicts coronary heart disease mortality. Heart. 2009;95(5):391–394. doi: 10.1136/hrt.2008.146670.
7. Wang JJ, Liew G, Wong TY, et al. Retinal vascular calibre and the risk of coronary heart disease-related death. Heart. 2006;92(11):1583–1587. doi: 10.1136/hrt.2006.090522.
8. Wong TY, Kamineni A, Klein R, et al. Quantitative retinal venular caliber and risk of cardiovascular disease in older persons: the Cardiovascular Health Study. Archives of Internal Medicine. 2006;166(21):2388–2394. doi: 10.1001/archinte.166.21.2388.
9. Niemeijer M, Xu X, Dumitrescu AV, et al. Automated measurement of the arteriolar-to-venular width ratio in digital color fundus photographs. IEEE Transactions on Medical Imaging. 2011;30(11):1941–1950. doi: 10.1109/TMI.2011.2159619.
10. Hubbard LD, Brothers RJ, King WN, et al. Methods for evaluation of retinal microvascular abnormalities associated with hypertension/sclerosis in the Atherosclerosis Risk in Communities Study. Ophthalmology. 1999;106(12):2269–2280. doi: 10.1016/s0161-6420(99)90525-0.
11. Aguilar W, Martinez-Perez ME, Frauel Y, et al. Graph-based methods for retinal mosaicing and vascular characterization. Lecture Notes in Computer Science. 2007;4538:25.
12. Chrástek R, Wolf M, Donath K, et al. Automated calculation of retinal arteriovenous ratio for detection and monitoring of cerebrovascular disease based on assessment of morphological changes of retinal vascular system. 240–243.
13. Grisan E, Ruggeri A. A divide et impera strategy for automatic classification of retinal vessels into arteries and veins. 1:890–893.
14. Jelinek H, Depardieu C, Lucas C, et al. Towards vessel characterization in the vicinity of the optic disc in digital retinal images. 2–7.
15. Li H, Hsu W, Lee M-L, et al. A piecewise Gaussian model for profiling and differentiating retinal vessels. 1:I-1069.
16. Niemeijer M, van Ginneken B, Abràmoff MD. Automatic classification of retinal vessels into arteries and veins. Medical Imaging. 2009:72601F–72601F.
17. Rothaus K, Jiang X, Rhiem P. Separation of the retinal vascular graph in arteries and veins based upon structural knowledge. Image and Vision Computing. 2009;27(7):864–875.
18. Simó A, de Ves E. Segmentation of macular fluorescein angiographies. A statistical approach. Pattern Recognition. 2001;34(4):795–809.
19. Vázquez S, Barreira N, Penedo M, et al. Automatic classification of retinal vessels into arteries and veins. 1:230–236.
20. Vázquez S, Cancela B, Barreira N, et al. Improving retinal artery and vein classification by means of a minimal path approach. Machine Vision and Applications. 2013;24(5):919–930.
21. Kagemann L, Wollstein G, Wojtkowski M, et al. Spectral oximetry assessed with high-speed ultra-high-resolution optical coherence tomography. Journal of Biomedical Optics. 2007;12(4):041212–041212-8. doi: 10.1117/1.2772655.
22. Kondermann C, Kondermann D, Yan M. Blood vessel classification into arteries and veins in retinal images. 651247–6512479.
23. Narasimha-Iyer H, Beach JM, Khoobehi B, et al. Automatic identification of retinal arteries and veins from dual-wavelength images using structural and functional features. IEEE Transactions on Biomedical Engineering. 2007;54(8):1427–1435. doi: 10.1109/TBME.2007.900804.
24. Gao X, Bharath A, Stanton A, et al. A method of vessel tracking for vessel diameter measurement on retinal images. 2:881–884.
25. Roberts DA. Analysis of vessel absorption profiles in retinal oximetry. Medical Physics. 1987;14(1):124–130. doi: 10.1118/1.596131.
26. Chaudhuri S, Chatterjee S, Katz N, et al. Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Transactions on Medical Imaging. 1989;8(3):263–269. doi: 10.1109/42.34715.
27. Kong TY, Rosenfeld A. Topological Algorithms for Digital Image Processing. Elsevier Science; 1996.
28. Lam L, Lee S-W, Suen CY. Thinning methodologies: a comprehensive survey. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1992;14(9):879.
29. Reese AB. Pigmentation of the optic nerve. Archives of Ophthalmology. 1933;9(4):560–570.
30. Hardarson SH, Harris A, Karlsson RA, et al. Automatic retinal oximetry. Investigative Ophthalmology & Visual Science. 2006;47(11):5011–5016. doi: 10.1167/iovs.06-0039.
31. Goldbaum MH. Retinal depression sign indicating a small retinal infarct. American Journal of Ophthalmology. 1978;86(1):45–55. doi: 10.1016/0002-9394(78)90013-2.
32. Hart WE, Goldbaum M, Cote B, et al. Measurement and classification of retinal vascular tortuosity. International Journal of Medical Informatics. 1999;53:239–252. doi: 10.1016/s1386-5056(98)00163-4.
33. Alam M, Thapa D, Lim JI, et al. Quantitative characteristics of sickle cell retinopathy in optical coherence tomography angiography. Biomedical Optics Express. 2017;8(3):1741–1753. doi: 10.1364/BOE.8.001741.
34. Li H, Hsu W, Lee ML, et al. Automatic grading of retinal vessel caliber. IEEE Transactions on Biomedical Engineering. 2005;52(7):1352–1355. doi: 10.1109/TBME.2005.847402.
35. Chu Z, Lin J, Gao C, et al. Quantitative assessment of the retinal microvasculature using optical coherence tomography angiography. Journal of Biomedical Optics. 2016;21(6). doi: 10.1117/1.JBO.21.6.066008.
36. Joshi VS, Garvin MK, Reinhardt JM, et al. Automated artery-venous classification of retinal blood vessels based on structural mapping method. 8315, 83151C.
37. Relan D, MacGillivray T, Ballerini L, et al. Retinal vessel classification: sorting arteries and veins. 7396–7399. doi: 10.1109/EMBC.2013.6611267.
38. Vázquez S, Cancela B, Barreira N, et al. On the automatic computation of the arterio-venous ratio in retinal images: using minimal paths for the artery/vein classification. 599–604.
