Abstract
Color fundus photographs are the most common type of image used for the automatic diagnosis of retinal diseases and abnormalities. As with all color photographs, these images record information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, the existing works are surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and for segmenting three retinal landmarks and one retinal abnormality. From this survey, it is clear that all channels together are typically used for neural network-based systems, whereas for non-neural network-based systems, the green channel is most commonly used. However, no conclusion can be drawn from the previous works regarding the importance of the different channels. Therefore, systematic experiments are conducted to analyse this. A well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
Keywords: color fundus photographs, detection of retinal diseases, deep neural network, segmentation of retinal landmarks
1. Introduction
Diagnosing retinal diseases at their earliest stage can save a patient’s vision since, at an early stage, the diseases are more likely to be treatable. However, ensuring regular retina checkups by ophthalmologists for every citizen is infeasible, not only in developing countries with huge populations but also in developed countries with small populations. The main reason is that the number of ophthalmologists relative to the population is very small. This is particularly true for low-income and lower-middle-income countries with huge populations, such as Bangladesh and India. For example, according to a survey conducted by the International Council of Ophthalmology (ICO) in 2010 [1], there were only four ophthalmologists per million people in Bangladesh. For India, the number was 11. Even for high-income countries with small populations, such as Switzerland and Norway, the numbers of ophthalmologists per million were not very high (91 and 68, respectively). More than a decade later, in 2021, these numbers remain roughly the same. Moreover, the number of people aged 60+ (who are generally at high risk of retinal diseases) is increasing in most countries. The shortage of ophthalmologists and the necessity of regular retina checkups at low cost have inspired researchers to develop computer-aided systems that detect retinal diseases automatically.
Different kinds of imaging technologies (e.g., color fundus photography, monochromatic retinal photography, wide-field imaging, autofluorescence imaging, indocyanine green angiography, scanning laser ophthalmoscopy, Heidelberg retinal tomography and optical coherence tomography) have been developed for the clinical care and management of patients with retinal diseases [2]. Among them, color fundus photography is available and affordable in most parts of the world. A color fundus photograph can be captured with a non-mydriatic fundus camera, handled by non-professional personnel, and delivered online to major ophthalmic institutions for follow-up in case a disease is suspected. Moreover, there are many publicly available data sets of color fundus photographs, such as CHASE_DB1 [3,4], DRIVE [5], HRF [6], IDRiD [7], the Kaggle EyePACS data set [8], Messidor [9], STARE [10,11] and UoA_DR [12], which help researchers compare the performance of their proposed approaches. Therefore, color fundus photography is used more widely than other retinal imaging techniques for automatically diagnosing retinal diseases.
In color fundus photographs, the intensities of the light reflected from the retina are recorded in three color channels: red, green, and blue. In this paper, we investigate which color channel is best for the automatic detection of retinal diseases as well as for the segmentation of retinal landmarks. Although the detection of retinal diseases is the main objective of computer-aided diagnostic (CAD) systems, segmentation is also an important part of many CAD systems. For example, structural changes in the central retinal blood vessels (CRBVs) may indicate diabetic retinopathy (DR). Therefore, a technique for segmenting CRBVs is often an important step in DR detection systems. Similarly, optic disc (OD) segmentation is important for some glaucoma detection algorithms.
In this work, we first extensively survey the usage of the different color channels in previous works. Specifically, we investigate works on four retinal diseases (i.e., glaucoma, age-related macular degeneration (AMD), DR, and diabetic macular edema (DME)), which are major causes of blindness [13,14,15,16], as well as works on the segmentation of retinal landmarks, such as the OD, macula/fovea and CRBVs, and retinal atrophy. We notice that the focus of the previous works was not to investigate which of the different channels (or which combination of channels) is the best for the automatic analysis of fundus photographs. At the same time, there does not seem to be complete consensus on this, since different studies used different channels (or combinations of channels). Therefore, to better understand the importance of the different color channels, we develop color channel-specific U-shaped deep neural networks (i.e., U-Nets [17]) for segmenting the OD, macula, and CRBVs. We also develop U-Nets for segmenting retinal atrophy. The U-Net is well-known for its excellent performance in medical image segmentation tasks. The U-Net can segment images in great detail, even when very few images are used in the training phase. It is shown in [17] that a U-Net trained using only 30 images outperformed a sliding-window convolutional neural network in the ISBI 2012 challenge on segmenting neuronal structures in electron microscopy (EM) stacks.
To the best of our knowledge, a systematic exploration of the importance of different color channels for the automatic processing of color fundus photographs has not been undertaken before. Naturally, a better understanding of the effectiveness of the different color channels can reduce the development time of future algorithms. In the long term, it may also affect the design of new fundus cameras and the procedures for capturing fundus photographs, e.g., the appropriate light conditions.
The organization of this paper is as follows: in Section 2, we briefly describe the different color channels of a color fundus photograph; in Section 3, we survey which color channels were used in previous works for the automatic detection of retinal diseases and for segmentation; in Section 4, we describe our setup for the U-Net-based experiments; in Section 5, we show the performance of the color channel-specific U-Nets; finally, in Section 6, we draw conclusions about our findings. Some image pre-processing steps are described in more detail, and additional experiments are reported, in the Appendices.
2. Fundus Photography
Our retina does not emit any light of its own. Moreover, it is a minimally reflective surface. Therefore, a fundus camera, which is a complex optical system, needs to illuminate the retina and capture the weak light it reflects simultaneously while imaging [18]. Most commonly, the reflected light is captured in a fundus camera by a single image sensor coated with a color filter array (CFA). In a CFA, the color filters are generally arranged following the Bayer pattern [19], developed by the Eastman Kodak company, as shown in Figure 1a. Instead of using three filters per pixel to capture the three primary colors (i.e., red, green and blue) reflected from the retina, the Bayer pattern uses only one filter per pixel to capture one primary color. In this pattern, the number of green filters is twice the number of blue and red filters. Different kinds of demosaicing techniques are applied to obtain full color fundus photographs [20,21,22]. Some sophisticated and expensive fundus cameras do not use a CFA with a Bayer pattern to distinguish colors; rather, they use a direct image sensor with three layers of photosensitive elements, as shown in Figure 1b. No demosaicing technique is necessary for obtaining full color fundus photographs from such fundus cameras.
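For readers who want to see demosaicing concretely, the sketch below interpolates a single-channel Bayer-mosaic capture into a full three-channel image with OpenCV; the random input and the chosen Bayer layout are illustrative assumptions, not properties of any particular fundus camera.

```python
import cv2
import numpy as np

# A hypothetical single-channel raw capture whose pixels follow a Bayer pattern.
raw = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

# Demosaicing: interpolate the two missing primaries at every pixel so that
# each pixel ends up with red, green, and blue values. The COLOR_Bayer*
# constant must match the sensor's actual filter layout.
full_color = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)
print(full_color.shape)  # (480, 640, 3): a full three-channel (BGR) image
```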
As shown in Figure 2, in a color fundus photograph, we can see the major retinal landmarks, such as the optic disc (OD), macula, and central retinal blood vessels (CRBVs), on the colored foreground surrounded by the dark background. As can be seen in Figure 3, different color channels highlight different things in color fundus photographs. We can see the boundary of the OD more clearly and the choroid in more detail in the red channel. The red channel helps us segment the OD more accurately and see the choroidal blood vessels and choroidal lesions such as nevi or tumors more clearly than the other two color channels. The CRBVs and hemorrhages can be seen in the green channel with excellent contrast. The blue channel allows us to see the retinal nerve fiber layer (RNFL) defects and epiretinal membranes more clearly than the other two color channels.
3. Previous Works on Diagnosing Retinal Disease Automatically
Many diseases can cause retinal damage, such as glaucoma, age-related macular degeneration (AMD), diabetic retinopathy (DR), diabetic macular edema (DME), retinal artery occlusion, retinal vein occlusion, hypertensive retinopathy, macular hole, epiretinal membrane, retinal hemorrhage, lattice degeneration, retinal tear, retinal detachment, intraocular tumors, penetrating ocular trauma, pediatric and neonatal retinal disorders, cytomegalovirus retinal infection, uveitis, infectious retinitis, central serous retinopathy, retinoblastoma, endophthalmitis, and retinitis pigmentosa. Among them, glaucoma, AMD, DR, and DME have drawn the main focus of researchers working on color fundus photograph-based automation. One reason could be that, in many cases, these diseases lead to irreversible complete vision loss, i.e., blindness, if they are left undiagnosed and untreated. According to the information reported in [23,24], glaucoma, AMD, and DR are among the five most common causes of vision impairment in adults. Among billion people living in 2020, million people experienced moderate or severe vision impairment (MSVI) and million people were blind. Glaucoma was the cause of MSVI for million people, whereas AMD for million and DR for million people. Glaucoma was the cause of blindness for million people, whereas AMD for million and DR for million people [24]. Therefore, in our literature survey, we investigate the color channels used in previously published studies for automatically diagnosing glaucoma, DR, AMD, and DME. We also survey works on the segmentation of retinal landmarks, such as the OD, macula/fovea and CRBVs, and retinal atrophy.
We consider both original studies and reviews as sources of information. However, our survey includes only original studies written in English and published in SJR-ranked Q1 and Q2 journals. Note that SJR (SCImago Journal Rank) is an indicator developed by SCImago based on the widely known Google PageRank algorithm [25]. This indicator shows the visibility of the journals contained in the Scopus database since 1996. We used different keywords, such as ‘automatic retinal disease detection’, ‘automatic diabetic retinopathy detection’, ‘automatic glaucoma detection’, ‘detect retinal disease by deep learning’, ‘segment macula’, ‘segment optic disc’, and ‘segment central retinal blood vessels’, in the Google search engine to find previous studies. After finding a paper, we checked the SJR rank of its journal. We also followed the reference lists of papers published in Q1/Q2 journals, benefiting especially from review papers related to our area of interest.
In this paper, we include our findings based on information reported in 199 journal papers. As shown in Table 1, the green channel dominates the non-neural network-based previous works, whereas RGB images (i.e., the red, green, and blue channels together) dominate the neural network-based previous works. Only a few works were based on the red and blue channels, and they were mainly for atrophy segmentation. See Table 2, Table 3, Table 4, Table 5 and Table 6 for the color channel distribution in the previous works we studied.
Table 1. Distribution of the color channels used in the surveyed papers, grouped by task (disease detection vs. segmentation) and by model type (non-neural network (Non-NN) vs. neural network (NN)). Gr denotes the grayscale image.
Color | Detection, Non-NN: Total (42) | Q1 (30) | Q2 (12) | Detection, NN: Total (35) | Q1 (28) | Q2 (7) | Segmentation, Non-NN: Total (77) | Q1 (56) | Q2 (21) | Segmentation, NN: Total (37) | Q1 (28) | Q2 (9)
---|---|---|---|---|---|---|---|---|---|---|---|---
RGB | 18 | 9 | 9 | 29 | 24 | 5 | 14 | 10 | 4 | 28 | 22 | 6 |
R | 7 | 5 | 2 | 2 | 1 | 1 | 15 | 9 | 6 | 0 | 0 | 0 |
G | 22 | 11 | 11 | 4 | 2 | 2 | 59 | 43 | 16 | 10 | 8 | 2 |
B | 3 | 3 | 0 | 1 | 1 | 0 | 8 | 7 | 1 | 0 | 0 | 0 |
Gr | 6 | 3 | 3 | 5 | 4 | 1 | 7 | 5 | 2 | 3 | 0 | 3 |
Table 2. Color channels used in non-neural network-based previous works on detecting glaucoma, AMD and DME, and DR.
Year | Glaucoma | AMD & DME | DR | |||
---|---|---|---|---|---|---|
Reference | Color | Reference | Color | Reference | Color | |
2000 | Hipwell [26] | G, B | ||||
2002 | Walter [27] | G | ||||
2004 | Klein [28] | RGB | ||||
2007 | Scott [29] | RGB | ||||
2008 | Kose [30] | RGB | Abramoff [31] | RGB | ||
Gangnon [32] | RGB | |||||
2010 | Bock [33] | G | Kose [34] | Gr | ||
Muramatsu [35] | R, G | |||||
2011 | Joshi [36] | R | Agurto [37] | G | Fadzil [38] | RGB |
2012 | Mookiah [39] | Gr | Hijazi [40] | RGB | ||
Deepak [41] | RGB, G | |||||
2013 | Akram [42] | RGB | ||||
Oh [43] | RGB | |||||
2014 | Fuente-Arriaga [44] | R, G | Akram [45] | RGB | ||
Noronha [46] | RGB | Mookiah [47] | G | Casanova [48] | RGB | |
2015 | Issac [49] | R, G | Mookiah [50] | R, G | Jaya [51] | RGB |
Oh [52] | G, Gr | |||||
2016 | Singh [53] | G, Gr | Acharya [54] | G | Bhaskaranand [55] | RGB |
Phan [56] | G | |||||
Wang [57] | RGB | |||||
2017 | Acharya [58] | Gr | Acharya [59] | G | Leontidis [60] | RGB |
Maheshwari [61] | R, G, B, Gr | |||||
Maheshwari [62] | G | |||||
2018 | Saha [63] | G, RGB | ||||
2020 | Colomer [64] | G |
Table 3. Color channels used in neural network-based previous works on detecting glaucoma, AMD and DME, and DR.
Year | Glaucoma | AMD & DME | DR | |||
---|---|---|---|---|---|---|
Reference | Color | Reference | Color | Reference | Color | |
1996 | Gardner [65] | RGB | ||||
2009 | Nayak [66] | R, G | ||||
2014 | Ganesan [67] | Gr | ||||
2015 | Mookiah [68] | G | ||||
2016 | Asoka [69] | Gr | Abramoff [70] | RGB | ||
Gulshan [71] | RGB | |||||
2017 | Zilly [72] | G, Gr | Burlina [73] | RGB | Abbas [74] | RGB |
Ting [75] | RGB | Burlina [76] | RGB | Gargeya [77] | RGB | |
Quellec [78] | RGB | |||||
2018 | Ferreira [79] | RGB, Gr | Grassmann [80] | RGB | Khojasteh [81] | RGB |
Raghavendra [82] | RGB | Burlina [83] | RGB | Lam [84] | RGB | |
Li [85] | RGB | |||||
Fu [86] | RGB | |||||
Liu [87] | RGB | |||||
2019 | Liu [88] | R, G, B, Gr | Keel [89] | RGB | Li [90] | RGB |
Diaz-Pinto [91] | RGB | Peng [92] | RGB | Zeng [93] | RGB | |
Matsuba [94] | RGB | Raman [95] | RGB | |||
2020 | Singh [96] | RGB | ||||
Gonzalez-Gonzalo [97] | RGB | |||||
2021 | Gheisari [98] | RGB |
Table 4. Color channels used in non-neural network-based previous works on segmenting the optic disc (OD), macula/fovea, and central retinal blood vessels (CRBVs).
Year | OD | Macula/Fovea | CRBVs | |||
---|---|---|---|---|---|---|
Reference | Color | Reference | Color | Reference | Color | |
1989 | Chaudhuri [99] | G | ||||
1999 | Sinthanayothin [100] | RGB | ||||
2000 | Hoover [10] | RGB | ||||
2004 | Lowell [101] | Gr | Li [102] | RGB | ||
2006 | Soares [103] | G | ||||
2007 | Xu [104] | RGB | Niemeijer [105] | G | Ricci [106] | G |
Abramoff [107] | R, G, B | Tobin [108] | G | |||
2008 | Youssif [109] | RGB | ||||
2009 | Niemeijer [110] | G | Cinsdikici [111] | G | ||
2010 | Welfer [112] | G | ||||
Aquino [113] | R, G | |||||
Zhu [114] | RGB | |||||
2011 | Lu [115] | R, G | Welfer [116] | G | Cheung [117] | RGB |
Kose [118] | RGB | |||||
You [119] | G | |||||
2012 | Bankhead [120] | G | ||||
Qureshi [121] | G | Fraz [4] | G | |||
Fraz [122] | G | |||||
Li [123] | RGB | |||||
Lin [124] | G | |||||
Moghimirad [125] | G | |||||
2013 | Morales [126] | Gr | Chin [127] | RGB | Akram [128] | G |
Gegundez [129] | G | Badsha [130] | Gr | |||
Budai [6] | G | |||||
Fathi [131] | G | |||||
Fraz [132] | G | |||||
Nayebifar [133] | G, B | |||||
Nguyen [134] | G | |||||
Wang [135] | G | |||||
2014 | Giachetti [136] | G, Gr | Kao [137] | G | Bekkers [138] | G |
Aquino [139] | R, G | Cheng [140] | G | |||
2015 | Miri [141] | R, G, B | Dai [142] | G | ||
Mary [143] | R | Hassanien [144] | G | |||
Harangi [145] | RGB, G | Imani [146] | G | |||
Lazar [147] | G | |||||
Roychowdhury [148] | G | |||||
2016 | Mittapalli [149] | RGB | Medhi [150] | R | Aslani [151] | G |
Roychowdhury [152] | G | Onal [153] | Gr | Bahadarkhan [154] | G | |
Sarathi [155] | R, G | Christodoulidis [156] | G | |||
Orlando [157] | G | |||||
2018 | Ramani [158] | G | Khan [159] | G | ||
Chalakkal [160] | RGB | Xia [161] | G | |||
2019 | Thakur [162] | Gr | Khawaja [163] | G | ||
Naqvi [164] | R, G | Wang [165] | RGB | |||
2020 | Dharmawan [166] | R, G, B | Carmona [167] | G | Saroj [168] | Gr |
Guo [169] | G | Zhang [170] | G | |||
Zhou [171] | G | |||||
2021 | Kim [172] | G |
Table 5. Color channels used in neural network-based previous works on segmenting the optic disc (OD), macula/fovea, and central retinal blood vessels (CRBVs).
Year | OD | Macula/Fovea | CRBVs | |||
---|---|---|---|---|---|---|
Reference | Color | Reference | Color | Reference | Color | |
2011 | Marin [173] | G | ||||
2015 | Wang [174] | G | ||||
2016 | Liskowski [175] | G | ||||
2017 | Barkana [176] | G | ||||
Mo [177] | RGB | |||||
2018 | Fu [178] | RGB | Al-Bander [179] | Gr | Guo [180] | G |
Guo [181] | RGB | |||||
Hu [182] | RGB | |||||
Jiang [183] | RGB | |||||
Oliveira [184] | G | |||||
Sangeethaa [185] | G | |||||
2019 | Wang [186] | RGB, Gr | Jebaseeli [187] | G | ||
Chakravarty [188] | RGB | Lian [189] | RGB | |||
Gu [190] | RGB | Noh [191] | RGB | |||
Tan [192] | RGB | Wang [193] | Gr | |||
Jiang [194] | RGB | |||||
2020 | Gao [195] | RGB | Feng [196] | G | ||
Jin [197] | RGB | Tamim [198] | G | |||
Sreng [199] | RGB | |||||
Bian [200] | RGB | |||||
Almubarak [201] | RGB | |||||
Tian [202] | RGB | |||||
Zhang [203] | RGB | |||||
Xie [204] | RGB | |||||
2021 | Bengani [205] | RGB | Hasan [206] | RGB | Gegundez-Arias [207] | RGB |
Veena [208] | RGB | |||||
Wang [209] | RGB |
Table 6.
4. Experimental Setup
4.1. Hardware & Software Tools
We performed all experiments using TensorFlow’s Keras API 2.0.0, OpenCV 4.2.0, and Python 3.6.9. We used a standard PC with 32 GB of memory, an Intel 10th Gen Core i5-10400 processor with six cores, and Intel UHD Graphics 630 (CML GT2).
4.2. Data Sets
We used RGB color fundus photographs from seven publicly available data sets: (1) the Child Heart and Health Study in England (CHASE) data set [3,4], (2) the Digital Retinal Images for Vessel Extraction (DRIVE) data set [5], (3) the High-Resolution Fundus (HRF) data set [6], (4) the Indian Diabetic Retinopathy Image Dataset (IDRiD) [7], (5) the Pathologic Myopia Challenge (PALM) data set [218], (6) the STructured Analysis of the Retina (STARE) data set [10,11], and (7) the University of Auckland Diabetic Retinopathy (UoA-DR) data set [12]. Images in these data sets were captured by different fundus cameras for different kinds of research objectives, as shown in Table 7.
Table 7. Details of the seven publicly available data sets used in our experiments.
Data Set | Height × Width | Field-of-View | Fundus Camera | Number of Images |
---|---|---|---|---|
CHASE_DB1 | | | Nidek NM-200-D | 28
DRIVE | | | Canon CR5-NM 3CCD | 40
HRF | | | Canon CR-1 | 45
IDRiD | | | Kowa VX-10 | 81
PALM | | | Zeiss VISUCAM 500 NM | 400
STARE | | | TopCon TRV-50 | 20
UoA-DR | | | Zeiss VISUCAM 500 | 200
Since not all of the seven data sets have manually segmented images for every retinal landmark and for retinal atrophy, we cannot use all of them for every segmentation task. Therefore, instead of seven data sets, we used five data sets for the experiments on segmenting CRBVs, three data sets for the OD, two data sets for the macula, and only one data set for the experiments on segmenting retinal atrophy. To obtain reliable results, we used the majority of the data (i.e., 55% of the data) as test data. We prepared one training set and one validation set. By combining 25% of the data from each data set, we prepared the training set, whereas we prepared the validation set by combining 20% of the data from each data set. By taking the remaining 55% of the data from each data set, we prepared individual test sets for each type of segmentation. See Table 8 for the number of images in the training, validation, and test sets. Note that the training set is used to tune the parameters of the U-Net (i.e., weights and biases), the validation set is used to tune the hyperparameters (such as the number of epochs, learning rate, and activation function), and the test set is used to evaluate the performance of the U-Net.
Table 8. Number of images in the training, validation, and test sets for each segmentation task.
Segmentation of | Data Set | Training Set | Validation Set | Test Set
---|---|---|---|---
CRBVs | CHASE_DB1 | 7 | 5 | 16 |
DRIVE | 10 | 8 | 22 | |
HRF | 11 | 9 | 25 | |
STARE | 5 | 4 | 11 | |
UoA-DR | 50 | 40 | 110 | |
Optic Disc | IDRiD | 20 | 16 | 45 |
PALM | 100 | 80 | 220 | |
UoA-DR | 50 | 40 | 110 | |
Macula | PALM | 100 | 80 | 220 |
UoA-DR | 50 | 40 | 110 | |
Atrophy | PALM | 100 | 80 | 220 |
4.3. Image Pre-Processing
We prepared four types of 2D fundus photographs: a red channel image, a green channel image, a blue channel image, and a grayscale image. By splitting the 3D color fundus photographs into their three color channels (i.e., red, green and blue), we obtained the red, green, and blue channel images. Moreover, by performing a weighted summation of the three channels, we obtained the grayscale image. By a grayscale image, we generally mean an image whose pixels have only one value representing the amount of light. It can be visualized as different shades of gray. An 8-bit grayscale image has pixel values in the range 0–255. There are many ways to convert a color image into a grayscale image. In this paper, we use a function from the OpenCV library where each gray pixel is generated as a weighted sum of the color channels: gray = 0.299 R + 0.587 G + 0.114 B. This conversion scheme is frequently used in computer vision and is implemented in different toolboxes, e.g., GIMP, MATLAB [219], and OpenCV.
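A minimal sketch of this channel splitting and grayscale conversion with OpenCV (the file name is a placeholder, and OpenCV loads images in BGR channel order):

```python
import cv2

# Load a color fundus photograph; the file name is illustrative.
bgr = cv2.imread("fundus_photo.png")
blue, green, red = cv2.split(bgr)  # the three single-channel 2D images

# Grayscale image as a weighted sum of the three channels
# (OpenCV uses gray = 0.299 R + 0.587 G + 0.114 B).
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
```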
The background of a fundus photograph does not contain any information about the retina that could be helpful for manual or automatic retina-related tasks. Sometimes, background noise can even be misleading. In order to avoid the interference of background noise in any decision, we need to use a binary background mask, which has zero for the background pixels and 2^n - 1 for the foreground pixels, where n is the number of bits used for the intensity of each pixel. For an 8-bit image, 2^n - 1 = 255. Background masks are provided only for the DRIVE and HRF data sets; for the other five data sets, they are not. Therefore, we followed the steps described in Appendix A to generate background masks for all data sets; we also generated our own masks for the DRIVE and HRF data sets in order to keep the same setup for all data sets. Overall, the red channel has a higher intensity than the green and blue channels in all data sets, whereas the blue channel has a lower intensity than the red and green channels. Moreover, in the red channel, the foreground is less likely to overlap with the background noise than in the green and blue channels. In the blue channel, the foreground intensity is the most likely to overlap with the intensity of the background noise, as shown in Figure 4. Therefore, we use the red channel image for generating the binary background masks.
We used the generated background masks and followed the steps described in Appendix B to crop out as much of the background as possible and to remove background noise outside the field-of-view (FOV). Since the cropped fundus photographs of different data sets have different resolutions, as shown in Table 7, we re-sized all masked and cropped fundus photographs to 256 × 256 pixels by bicubic interpolation so that we could use one U-Net. After resizing the fundus photographs, we applied contrast limited adaptive histogram equalization (CLAHE) [220] to improve the contrast of each single-channel image. Then we re-scaled the pixel values to the range [0, 1]. Note that re-scaling the pixel values of the fundus photographs to [0, 1] is not strictly necessary; however, we did it to keep the input and output in the same range. We did not apply any other pre-processing techniques to the images.
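A sketch of this resize/CLAHE/rescale chain for one 8-bit channel; the CLAHE clip limit and tile size are our own placeholder values, since they are not stated above:

```python
import cv2
import numpy as np

def preprocess_channel(channel, size=(256, 256), clip_limit=2.0, tile=(8, 8)):
    """Resize a single-channel 8-bit image bicubically, apply CLAHE, and
    rescale the pixel values to [0, 1]. The CLAHE parameters are illustrative
    defaults, not values taken from the paper."""
    resized = cv2.resize(channel, size, interpolation=cv2.INTER_CUBIC)  # bicubic
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    enhanced = clahe.apply(resized)                 # expects an 8-bit single channel
    return enhanced.astype(np.float32) / 255.0      # rescale pixel values to [0, 1]
```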
Similar to the fundus photographs, the reference masks provided by the data sets for segmenting the OD, CRBVs and retinal atrophy can have an unnecessary and noisy background. We, therefore, cropped out the unnecessary background of the provided reference masks and removed noise outside the field-of-view area by following the steps described in Appendix B. Since some provided masks are not binary masks, we turned them into 2D binary masks by following the steps described in Appendix C. No data set provides binary masks for segmenting the macula. Instead, the center of the macula is provided by PALM and UoA-DR. We generated binary masks for segmenting the macula using the provided macula centers and the OD masks of PALM and UoA-DR by following the steps described in Appendix D. We re-sized all kinds of binary masks to 256 × 256 pixels by bicubic interpolation. We then re-scaled the pixel values to [0, 1], since we used the sigmoid function as the activation function in the output layer of the U-Net and the range of this function is (0, 1).
4.4. Setup for U-Net
We trained color-specific U-Nets with the architecture shown in Table A3 of Appendix E. To train our U-Nets, we set the Jaccard coefficient loss (JCL) as the loss function and RMSProp as the optimizer. We reduced the learning rate if there was no change in the validation loss for more than 30 consecutive epochs. We stopped the training if the validation loss did not change in 100 consecutive epochs. We trained each color-specific U-Net five times to avoid the effect on the U-Net’s performance of randomness caused by different factors, including weight initialization and dropout. That means that, in total, we trained 100 U-Nets, among which 25 U-Nets were for OD segmentation (i.e., five models for each of RGB, gray, red, green, and blue), 25 U-Nets for macula segmentation, 25 U-Nets for CRBVs segmentation, and 25 U-Nets for atrophy segmentation. We estimated the performance of each model separately and then report the mean and standard deviation of the performance for each category.
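The following sketch shows how this training setup can be expressed with the Keras API. The JCL formulation, the initial learning rate, the epoch cap, and the monitored quantity (assumed here to be the validation loss) are our assumptions; the model and data are passed in as arguments.

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def jaccard_coefficient_loss(y_true, y_pred, smooth=1.0):
    """A common soft Jaccard (IoU) loss; the paper's exact formulation may differ."""
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    union = K.sum(y_true_f) + K.sum(y_pred_f) - intersection
    return 1.0 - (intersection + smooth) / (union + smooth)

def compile_and_train(model, x_train, y_train, x_val, y_val):
    """Training-loop sketch; the learning rate is a placeholder value."""
    model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
                  loss=jaccard_coefficient_loss)
    callbacks = [
        # Reduce the learning rate after 30 epochs without improvement ...
        tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", patience=30),
        # ... and stop training after 100 epochs without improvement.
        tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=100),
    ]
    # The epoch count is only an upper bound; early stopping ends training.
    return model.fit(x_train, y_train, validation_data=(x_val, y_val),
                     epochs=1000, callbacks=callbacks)
```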
4.5. Evaluation Metrics
In segmentation, the U-Net shall predict whether a pixel is part of the object in question (e.g., the OD) or not. Ideally, it should therefore output 1 for a pixel that belongs to the object and 0 for a pixel that does not.
However, instead of 0/1, the output of the U-Net is in the range [0, 1] for each pixel since we use sigmoid as the activation function in the last layer. The output can be interpreted as the probability that the pixel is part of the mask. To obtain a hard prediction (0/1), we use a threshold of 0.5. By comparing the hard prediction to the reference, it is decided whether the prediction is a true positive (TP), true negative (TN), false positive (FP), or false negative (FN). Using those results for each pixel in the test set, we estimated the performance of the U-Net using four metrics. We used three metrics that are commonly used in classification tasks (i.e., precision, recall, and area-under-curve (AUC)) and one metric that is commonly used in image segmentation tasks (i.e., mean intersection-over-union (MIoU), also known as the Jaccard index or Jaccard similarity coefficient). We computed precision = TP / (TP + FP) and recall = TP / (TP + FN) over both semantic classes together. On the other hand, we computed IoU = TP / (TP + FP + FN) for each semantic class (i.e., 0/1) and then averaged over the classes to estimate the MIoU. We estimated the AUC of the receiver operating characteristic (ROC) curve using a linearly spaced set of thresholds. Note that AUC is a threshold-independent metric, unlike precision, recall, and MIoU, which are threshold-dependent metrics.
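A sketch of how these four metrics can be computed for a set of test pixels with NumPy; the number of AUC thresholds (101) is our choice:

```python
import numpy as np

def segmentation_metrics(y_true, y_prob, threshold=0.5):
    """Precision, recall, AUC, and MIoU for one set of predictions.
    y_true: reference masks in {0, 1}; y_prob: U-Net outputs in [0, 1]."""
    y_true = y_true.astype(bool).ravel()
    y_pred = y_prob.ravel() >= threshold        # hard prediction at 0.5

    tp = np.sum(y_pred & y_true)
    fp = np.sum(y_pred & ~y_true)
    fn = np.sum(~y_pred & y_true)
    tn = np.sum(~y_pred & ~y_true)

    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # IoU of the foreground class and of the background class, then averaged.
    miou = (tp / (tp + fp + fn) + tn / (tn + fn + fp)) / 2.0

    # ROC AUC approximated with a linearly spaced set of thresholds.
    tpr, fpr = [], []
    for t in np.linspace(0.0, 1.0, 101):
        pred_t = y_prob.ravel() >= t
        tpr.append(np.sum(pred_t & y_true) / max(np.sum(y_true), 1))
        fpr.append(np.sum(pred_t & ~y_true) / max(np.sum(~y_true), 1))
    auc = np.trapz(np.flip(tpr), np.flip(fpr))  # integrate TPR over increasing FPR

    return precision, recall, auc, miou
```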
5. Performance of Color Channel Specific U-Net
Comparing the results shown in Table 9, Table 10, Table 11 and Table 12, we can say that the U-Net is most successful at segmenting the OD and least successful at segmenting the CRBVs, for all channels. The U-Net performs better when all three color channels (i.e., RGB images) are used together than when the color channels are used individually. For segmenting the OD, the red and gray channels are better than the green and blue channels (see Table 9). For segmenting the CRBVs, the green channel performs better than the other single channels, whereas both the red and blue channels perform poorly (see Table 10). For macula segmentation, there is no clear winner between the gray and green channels. Although the blue channel is a bad choice for segmenting the CRBVs, it is reasonably good at segmenting the macula (see Table 11). For segmenting retinal atrophy, the green channel is better than the other single channels, and the blue channel is also a good choice (see Table 12).
Table 9. Performance of the color channel-specific U-Nets for optic disc (OD) segmentation (mean ± standard deviation over the five training runs).
Color | Dataset | Precision | Recall | AUC | MIoU |
---|---|---|---|---|---|
RGB | IDRiD | 0.897 ± 0.018 | 0.877 ± 0.010 | 0.940 ± 0.005 | 0.896 ± 0.003 |
PALM | 0.859 ± 0.009 | 0.862 ± 0.013 | 0.933 ± 0.006 | 0.873 ± 0.003 | |
UoA_DR | 0.914 ± 0.012 | 0.868 ± 0.006 | 0.936 ± 0.003 | 0.895 ± 0.004 | |
Gray | IDRiD | 0.868 ± 0.020 | 0.902 ± 0.016 | 0.952 ± 0.007 | 0.892 ± 0.004 |
PALM | 0.758 ± 0.020 | 0.737 ± 0.025 | 0.870 ± 0.011 | 0.788 ± 0.009 | |
UoA_DR | 0.907 ± 0.007 | 0.840 ± 0.005 | 0.923 ± 0.002 | 0.876 ± 0.008 | |
Red | IDRiD | 0.892 ± 0.006 | 0.872 ± 0.008 | 0.936 ± 0.004 | 0.892 ± 0.004 |
PALM | 0.798 ± 0.004 | 0.824 ± 0.012 | 0.912 ± 0.006 | 0.837 ± 0.003 | |
UoA_DR | 0.900 ± 0.007 | 0.854 ± 0.006 | 0.928 ± 0.003 | 0.885 ± 0.003 | |
Green | IDRiD | 0.837 ± 0.023 | 0.906 ± 0.009 | 0.953 ± 0.004 | 0.882 ± 0.008 |
PALM | 0.708 ± 0.012 | 0.718 ± 0.013 | 0.859 ± 0.006 | 0.771 ± 0.004 | |
UoA_DR | 0.895 ± 0.009 | 0.821 ± 0.010 | 0.912 ± 0.005 | 0.869 ± 0.006 | |
Blue | IDRiD | 0.810 ± 0.038 | 0.715 ± 0.011 | 0.858 ± 0.005 | 0.799 ± 0.010 |
PALM | 0.662 ± 0.032 | 0.692 ± 0.019 | 0.845 ± 0.009 | 0.748 ± 0.008 | |
UoA_DR | 0.873 ± 0.012 | 0.800 ± 0.009 | 0.901 ± 0.004 | 0.851 ± 0.002 |
Table 10. Performance of the color channel-specific U-Nets for central retinal blood vessel (CRBVs) segmentation (mean ± standard deviation over the five training runs).
Color | Dataset | Precision | Recall | AUC | MIoU |
---|---|---|---|---|---|
RGB | CHASE_DB1 | 0.795 ± 0.005 | 0.638 ± 0.004 | 0.840 ± 0.002 | 0.696 ± 0.018 |
DRIVE | 0.851 ± 0.007 | 0.519 ± 0.009 | 0.781 ± 0.004 | 0.696 ± 0.013 | |
HRF | 0.730 ± 0.017 | 0.633 ± 0.007 | 0.838 ± 0.005 | 0.651 ± 0.021 | |
STARE | 0.822 ± 0.009 | 0.488 ± 0.010 | 0.766 ± 0.006 | 0.654 ± 0.011 | |
UoA_DR | 0.373 ± 0.003 | 0.341 ± 0.008 | 0.669 ± 0.005 | 0.556 ± 0.004 | |
Gray | CHASE_DB1 | 0.757 ± 0.019 | 0.635 ± 0.016 | 0.834 ± 0.009 | 0.648 ± 0.040 |
DRIVE | 0.864 ± 0.014 | 0.529 ± 0.014 | 0.786 ± 0.008 | 0.673 ± 0.032 | |
HRF | 0.721 ± 0.032 | 0.617 ± 0.008 | 0.825 ± 0.005 | 0.605 ± 0.038 | |
STARE | 0.810 ± 0.021 | 0.522 ± 0.022 | 0.784 ± 0.011 | 0.619 ± 0.031 | |
UoA_DR | 0.373 ± 0.007 | 0.298 ± 0.022 | 0.648 ± 0.012 | 0.540 ± 0.009 | |
Red | CHASE_DB1 | 0.507 ± 0.018 | 0.412 ± 0.007 | 0.703 ± 0.005 | 0.602 ± 0.001 |
DRIVE | 0.713 ± 0.026 | 0.391 ± 0.016 | 0.705 ± 0.010 | 0.637 ± 0.005 | |
HRF | 0.535 ± 0.027 | 0.349 ± 0.014 | 0.680 ± 0.008 | 0.581 ± 0.004 | |
STARE | 0.646 ± 0.040 | 0.271 ± 0.011 | 0.649 ± 0.008 | 0.563 ± 0.005 | |
UoA_DR | 0.304 ± 0.011 | 0.254 ± 0.012 | 0.621 ± 0.006 | 0.539 ± 0.002 | |
Green | CHASE_DB1 | 0.781 ± 0.017 | 0.676 ± 0.021 | 0.858 ± 0.007 | 0.691 ± 0.059 |
DRIVE | 0.862 ± 0.011 | 0.541 ± 0.026 | 0.794 ± 0.012 | 0.703 ± 0.047 | |
HRF | 0.754 ± 0.018 | 0.662 ± 0.020 | 0.856 ± 0.008 | 0.647 ± 0.077 | |
STARE | 0.829 ± 0.018 | 0.558 ± 0.028 | 0.806 ± 0.011 | 0.662 ± 0.052 | |
UoA_DR | 0.384 ± 0.007 | 0.326 ± 0.023 | 0.662 ± 0.012 | 0.552 ± 0.011 | |
Blue | CHASE_DB1 | 0.581 ± 0.024 | 0.504 ± 0.023 | 0.751 ± 0.010 | 0.638 ± 0.004 |
DRIVE | 0.771 ± 0.016 | 0.449 ± 0.015 | 0.736 ± 0.008 | 0.657 ± 0.007 | |
HRF | 0.473 ± 0.016 | 0.279 ± 0.016 | 0.633 ± 0.007 | 0.558 ± 0.004 | |
STARE | 0.446 ± 0.014 | 0.242 ± 0.018 | 0.608 ± 0.007 | 0.535 ± 0.003 | |
UoA_DR | 0.316 ± 0.010 | 0.271 ± 0.015 | 0.630 ± 0.007 | 0.540 ± 0.002 |
Table 11. Performance of the color channel-specific U-Nets for macula segmentation (mean ± standard deviation over the five training runs).
Color | Dataset | Precision | Recall | AUC | MIoU |
---|---|---|---|---|---|
RGB | PALM | 0.732 ± 0.016 | 0.649 ± 0.029 | 0.825 ± 0.014 | 0.753 ± 0.009 |
UoA_DR | 0.804 ± 0.027 | 0.713 ± 0.043 | 0.858 ± 0.021 | 0.794 ± 0.012 | |
Gray | PALM | 0.712 ± 0.024 | 0.638 ± 0.016 | 0.819 ± 0.007 | 0.744 ± 0.003 |
UoA_DR | 0.811 ± 0.017 | 0.712 ± 0.018 | 0.858 ± 0.008 | 0.796 ± 0.005 | |
Red | PALM | 0.719 ± 0.013 | 0.648 ± 0.015 | 0.823 ± 0.007 | 0.749 ± 0.005 |
UoA_DR | 0.768 ± 0.006 | 0.726 ± 0.013 | 0.863 ± 0.006 | 0.790 ± 0.003 | |
Green | PALM | 0.685 ± 0.020 | 0.641 ± 0.004 | 0.820 ± 0.002 | 0.739 ± 0.005 |
UoA_DR | 0.791 ± 0.013 | 0.693 ± 0.011 | 0.848 ± 0.005 | 0.783 ± 0.005 | |
Blue | PALM | 0.676 ± 0.020 | 0.637 ± 0.019 | 0.817 ± 0.009 | 0.734 ± 0.002 |
UoA_DR | 0.801 ± 0.035 | 0.649 ± 0.013 | 0.826 ± 0.006 | 0.769 ± 0.012 |
Table 12. Performance of the color channel-specific U-Nets for retinal atrophy segmentation (mean ± standard deviation over the five training runs).
Color | Dataset | Precision | Recall | AUC | MIoU |
---|---|---|---|---|---|
RGB | PALM | 0.719 ± 0.033 | 0.638 ± 0.030 | 0.814 ± 0.014 | 0.707 ± 0.019 |
Gray | PALM | 0.630 ± 0.021 | 0.571 ± 0.025 | 0.777 ± 0.012 | 0.658 ± 0.039 |
Red | PALM | 0.514 ± 0.010 | 0.430 ± 0.029 | 0.705 ± 0.013 | 0.596 ± 0.015 |
Green | PALM | 0.695 ± 0.009 | 0.627 ± 0.032 | 0.808 ± 0.015 | 0.714 ± 0.011 |
Blue | PALM | 0.711 ± 0.015 | 0.578 ± 0.016 | 0.785 ± 0.008 | 0.687 ± 0.018 |
To better understand the performance of the U-Nets, we manually inspected all images together with their reference and predicted masks. As shown in Table 13, we see that in the majority of cases, all color-specific U-Nets can generate at least partially accurate masks for segmenting the OD and macula. When retinal atrophy severely affects a retina, no channel-specific U-Net can generate accurate masks for segmenting the OD and macula, as shown in Figure 5 and Figure 6. In many cases, multiple areas in the generated masks are marked as the OD (see Figure 5d–f) or the macula (see Figure 6d). As shown in Table 14, this happens most often in the gray channel for the macula and in the green channel for the OD.
Table 13. Number of cases (out of N test images) in which the U-Nets generated at least partially accurate masks for each color channel.
Segmentation for | N | RGB | Gray | Red | Green | Blue
---|---|---|---|---|---|---
Optic Disc (OD) | 375 | 329 | 324 | 316 | 303 | 297 |
Macula | 330 | 270 | 265 | 271 | 265 | 267 |
Table 14. Number of cases (out of N test images) in which multiple areas were marked in the generated masks for each color channel.
Segmentation for | N | RGB | Gray | Red | Green | Blue
---|---|---|---|---|---|---
Optic Disc (OD) | 375 | 29 | 26 | 43 | 46 | 43 |
Macula | 330 | 17 | 25 | 14 | 17 | 14 |
We find that our U-Nets trained on the RGB, gray, and green channel images can segment thick vessels quite well, whereas they are in general not good at segmenting thin blood vessels. As shown in Figure 7b,e, Figure 7c,f, and Figure 7h,k, discontinuities occur in the thin vessels segmented by our U-Nets.
The performance of the U-Nets also depends, to some extent, on how accurately the CRBVs are marked in the reference masks. Among the five data sets, the reference masks of the DRIVE data set are very accurate for both thick and thin vessels. That could be one reason why we get the best performance for this data set. In contrast, we get the worst performance for the UoA-DR data set because of its inaccurate reference masks (see Appendix F for more details). If the reference masks contain inaccurate information, then the estimated performance of the U-Nets will be lower than it should be. Two things can happen when reference masks are inaccurate. First, inaccurate reference masks in the training set may deteriorate the performance of the U-Net; however, if most reference masks are accurate enough, the deterioration may be small. Second, inaccurate reference masks in the test set can produce misleading values for the estimated metrics. Both happen for the UoA-DR data set. Our U-Nets largely cope with the negative effect of the inaccurate reference masks in the UoA-DR training data: they learn to predict the majority of the thick vessels and some parts of the thin vessels quite accurately for the UoA-DR data set. However, because of the inaccurate reference masks in the test data, the precision and recall are extremely low for all channels on the UoA-DR data set.
We also notice that, quite often, the red channel is affected by overexposure, whereas the blue channel is affected by underexposure (see Table 15). Both kinds of inappropriate exposure wash out retinal information, which results in low entropy. Therefore, the generated masks for segmenting the CRBVs do not contain vessel lines in the inappropriately exposed parts of a fundus photograph (see the overexposed part of the red channel in Figure 7j and the underexposed part of the blue channel in Figure 7l). Note that the histograms of inappropriately exposed images are highly skewed and have low entropy (as shown in Figure 8).
Table 15. Number of overexposed and underexposed images (out of N) in each color channel for each data set.
Data Set | N | Overexposed: Gray | Red | Green | Blue | Underexposed: Gray | Red | Green | Blue
---|---|---|---|---|---|---|---|---|---
CHASE_DB1 | 28 | 0 | 10 | 0 | 13 | 0 | 4 | 0 | 3 |
DRIVE | 40 | 0 | 12 | 0 | 0 | 0 | 1 | 0 | 3 |
HRF | 45 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
IDRiD | 81 | 0 | 2 | 0 | 6 | 0 | 0 | 0 | 23 |
PALM | 400 | 0 | 0 | 1 | 40 | 0 | 0 | 2 | 121 |
STARE | 20 | 0 | 2 | 0 | 10 | 0 | 0 | 0 | 4 |
UoA-DR | 200 | 0 | 0 | 0 | 22 | 0 | 0 | 0 | 88 |
It is not surprising that using all three color channels (i.e., RGB images) as input to the U-Net performs best, since the convolutional layers of the U-Net are flexible enough to use the information from the three color channels appropriately. By using multiple filters in each convolutional layer, the U-Nets can extract multiple features from the retinal images, many of which are appropriate for segmentation. As discussed in Section 3, previous works based on non-neural network models usually used one color channel, most likely because these models could not benefit from the information contained in all three channels. The fact that individual color channels perform well in certain situations raises two questions regarding camera design:
Would it be worth it to develop cameras with only one color channel rather than red, green, and blue, possibly customized for retina analysis?
Could a more detailed representation of the spectrum than RGB improve the automatic analysis of retinas? The RGB representation captures the information from the spectrum that the human eye can recognize. Perhaps this is not all of the information in the spectrum that an automatic system could use.
To fully answer those questions, many hardware developments would be needed. However, an initial analysis to address the first question could be to tune the weights used to produce the grayscale image from the RGB images.
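One software-only way to begin such an analysis is to make the channel-mixing weights trainable and learn them jointly with the segmentation network. The sketch below (the layer name and its placement are ours, not the paper's) prepends a 1 × 1 convolution to an existing single-channel U-Net:

```python
import tensorflow as tf

def add_trainable_channel_mixer(single_channel_unet, input_shape=(256, 256, 3)):
    """Prepend a trainable 1x1 convolution that collapses RGB into one channel,
    so the gray-conversion weights are learned rather than fixed at
    0.299/0.587/0.114. `single_channel_unet` is any Keras model expecting a
    one-channel input."""
    inputs = tf.keras.Input(shape=input_shape)
    mixed = tf.keras.layers.Conv2D(filters=1, kernel_size=1, use_bias=False,
                                   name="learned_channel_mixer")(inputs)
    outputs = single_channel_unet(mixed)
    return tf.keras.Model(inputs, outputs)
```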
6. Conclusions
We conducted an extensive survey to investigate which color channel of color fundus photographs is most commonly preferred for automatically diagnosing retinal diseases. We find that the green channel dominates previous non-neural network-based works, while all three color channels together, i.e., RGB images, dominate neural network-based works. In non-neural network-based works, researchers almost ignored the red and blue channels, reasoning that these channels are prone to poor contrast, noise, and inappropriate exposure. However, no work provided a conclusive experimental comparison of the performance of the different color channels. In order to fill that gap, we conducted systematic experiments. We used a well-known U-shaped deep neural network (U-Net) to investigate which color channel is best for segmenting retinal atrophy and three retinal landmarks (i.e., the central retinal blood vessels, optic disc, and macula). In our U-Net-based segmentation approach, we see that the segmentation of retinal landmarks and retinal atrophy is more accurate when RGB images are used than when a single channel is used. We also notice that, as a single channel, the red channel is bad for segmenting the central retinal blood vessels but better than the other single channels for optic disc segmentation. Although the blue channel is a bad choice for segmenting the central retinal blood vessels, it is reasonably good for segmenting the macula and very good for segmenting retinal atrophy. In all cases, RGB images perform best, which indicates that the red and blue channels can provide complementary information to the green channel. Therefore, we can conclude that all color channels in color fundus photographs are important.
Appendix A. Generating Background Mask
The background of a fundus photograph can be noisy, i.e., the background pixels can have non-zero values. Noisy background pixels are, in general, invisible to the naked eye because of their low intensity. Exceptions to this occur. For example, images in the STARE data set have visible background noise. Moreover, sometimes non-retinal information, such as the image-capturing date and time or the patient’s name, can be present with high intensities in the background (e.g., images in the UoA-DR data set). This kind of information is also considered noise when it is not useful for any decision. No matter whether the noise in the background is visible or invisible to human eyes, and whether the intensity of the background pixels is high or low, by global binary thresholding with a threshold T we can detect the presence of noisy background pixels in almost all data sets, as shown in Figure A1.
Using a background mask, we can get rid of background noise. A simple method for creating a background mask would be to consider all pixels with an intensity lower than or equal to a threshold T to be part of the background and the other pixels to be part of the foreground. When the image is noiseless, setting T = 0 (i.e., keeping zero-valued pixels as background while setting pixels with non-zero intensities to 2^n - 1) is good enough to generate the background mask. However, for a noisy background, if we set the threshold T to a very small value (i.e., a value lower than the intensities of the noise), then the background mask will consider parts of the background as foreground, as shown in Figure A2c–i. On the other hand, if we set T to a very high value (i.e., a value higher than the intensities of some foreground pixels), then some parts of the foreground may get lost in the background mask, as shown in Figure A2k,l. Of course, in reality, some background pixels may have a higher intensity than some foreground pixels, so that no threshold would accurately separate the foreground from the background. Further, the optimal threshold may depend on the data set.
As a more robust procedure for generating background masks to remove background noise, we apply the following steps (a code sketch is given after the list):
- Step-1: Generate a preliminary background mask by global binary thresholding, i.e., by setting each pixel intensity, p, of a single-channel image to 0 if p <= T and to 2^n - 1 otherwise, where n is the number of bits used for the intensity of p (see Figure A3c). For an 8-bit image, 2^n - 1 = 255. Note that we use the red channel image here. By trial and error, we finally set T to 15, 40, 35, 35, 5, 35, and 5 to get good preliminary background masks for the CHASE_DB1, DRIVE, HRF, IDRiD, PALM, STARE, and UoA-DR data sets, respectively.
- Step-2: Determine the boundary contour of the retina by finding the contour that has the maximum area. Note that a contour is a closed curve joining all the continuous points having the same color or intensity (see Figure A3d).
- Step-3: Set the pixels inside the boundary contour to 2^n - 1 and the pixels outside the boundary contour to zero in order to generate the final background mask (see Figure A3e).
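A sketch of these three steps with OpenCV; the threshold passed in would be the data set-specific value listed in Step-1:

```python
import cv2
import numpy as np

def generate_background_mask(red_channel, threshold):
    """Threshold the red channel, keep the largest contour (the retina
    boundary), and fill its interior with 255 (Steps 1-3 of Appendix A)."""
    # Step 1: preliminary mask by global binary thresholding (p > T -> 255).
    _, prelim = cv2.threshold(red_channel, threshold, 255, cv2.THRESH_BINARY)
    # Step 2: boundary contour = the contour with the maximum area.
    contours, _ = cv2.findContours(prelim, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boundary = max(contours, key=cv2.contourArea)
    # Step 3: 255 inside the boundary contour, 0 outside.
    mask = np.zeros_like(red_channel)
    cv2.drawContours(mask, [boundary], contourIdx=-1, color=255,
                     thickness=cv2.FILLED)
    return mask

# Example using the threshold reported above for the DRIVE data set.
# mask = generate_background_mask(red_channel, threshold=40)
```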
Figure A4 shows seven examples of generated binary background masks, and Figure A5 illustrates the benefit of using the final background mask instead of the preliminary mask for masking out the high-intensity background noise caused by text information in an image.
Using the provided masks of the DRIVE and HRF data sets, we estimate the performance of our approach of generating binary background masks. As shown in Table A1, our approach is highly successful.
Table A1. Performance of our background mask generation approach evaluated against the masks provided with the DRIVE and HRF data sets.
Data Set | Precision | Recall | AUC | MIoU |
---|---|---|---|---|
DRIVE | 0.997 | 0.997 | 0.996 | 0.995 |
HRF | 1.000 | 1.000 | 1.000 | 1.000 |
Appendix B. Cropping Out Background
The background of an image does not contain any information about the retina that could be helpful for automatic retina-related tasks. Note that the image in question can be an RGB image, a single-channel image, or a binary mask for segmenting the OD, macula, CRBVs, or retinal atrophy. As a robust procedure for cropping away the unnecessary background and removing background noise from such an image, we apply the following steps (sketched in code after the list):
Step-1: Generate the background mask using the steps described in Appendix A.
Step-2: Determine the minimum bounding rectangle (MBR) that minimally covers the foreground of the background mask (see Figure A3f).
Step-3: Crop both the image and the background mask to the MBR (see Figure A3g,h).
Step-4: Remove background noise from the cropped image by masking it with the cropped background mask (see Figure A3i).
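A sketch of these cropping steps, assuming the image and its background mask are NumPy arrays with the same height and width:

```python
import cv2

def crop_to_retina(image, background_mask):
    """Crop the image and its background mask to the minimum bounding rectangle
    of the foreground, then mask out residual noise outside the field of view."""
    x, y, w, h = cv2.boundingRect(background_mask)     # Step 2: MBR of the mask
    cropped_img = image[y:y + h, x:x + w]              # Step 3: crop image ...
    cropped_mask = background_mask[y:y + h, x:x + w]   # ... and mask alike
    # Step 4: zero out everything outside the field of view.
    cleaned = cv2.bitwise_and(cropped_img, cropped_img, mask=cropped_mask)
    return cleaned, cropped_mask
```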
Appendix C. Turning Provided Reference Masks into Binary Masks
Although reference masks used for segmentation need to be binary masks (i.e., having only two pixel intensities, e.g., zero for the background pixels and 255 for the foreground pixels of an 8-bit image), we notice that two data sets (i.e., HRF and UoA-DR) do not fulfill this requirement, as shown in Table A2. Three out of the 45 provided masks of the HRF data set, and all 200 provided masks of the UoA-DR data set, have pixels of multiple intensities. There are two cases: in the first case, noisy background pixels that are supposed to be 0 have intensities other than zero; in the second case, foreground pixels that are supposed to be 255 have intensities other than 255. We also notice that, even though the provided masks of the IDRiD data set are binary masks, their maximum intensity is 29 instead of 255.
We turn all provided masks into binary masks with pixel intensities 0 and 255 by global binary thresholding. Before binarization, we remove noisy pixels from outside the field-of-view area by using the estimated background mask (see Figure A6b for an example). As shown in Figure A6c, there may still be noisy pixels inside the FOV area. For these, we apply binary thresholding and generate the final binary mask, as shown in Figure A6d.
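A sketch of this clean-then-binarize procedure; the threshold argument is an assumption, since the exact value used is not stated above:

```python
import cv2

def binarize_reference_mask(provided_mask, background_mask, threshold):
    """Remove noise outside the FOV with the background mask, then turn every
    pixel above `threshold` into 255 and every other pixel into 0."""
    # Remove noisy pixels outside the field of view.
    cleaned = cv2.bitwise_and(provided_mask, provided_mask, mask=background_mask)
    # Global binary thresholding: pixels above `threshold` become 255.
    _, binary = cv2.threshold(cleaned, threshold, 255, cv2.THRESH_BINARY)
    return binary
```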
Table A2. For each data set and segmentation type, the number of provided reference masks (n) and the number of those masks that are not binary (m).
Segmentation Type | CHASE_DB1 n | m | DRIVE n | m | HRF n | m | IDRiD n | m | PALM n | m | STARE n | m | UoA-DR n | m
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
CRBVs | 28 | 0 | 40 | 0 | 45 | 0 | 0 | 0 | 0 | 0 | 40 | 0 | 200 | 200 |
Optic Disc | 0 | 0 | 0 | 0 | 0 | 0 | 81 | 0 | 400 | 0 | 0 | 0 | 200 | 200 |
Macula | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Retinal Atrophy | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 311 | 0 | 0 | 0 | 0 | 0 |
Appendix D. Generating Binary Masks for Segmenting Macula
Even though three data sets (i.e., IDRiD, PALM, and UoA-DR) provide reference masks for segmenting the optic disc (OD), five data sets (i.e., CHASE_DB1, DRIVE, HRF, STARE, and UoA-DR) provide reference masks for the CRBVs, and one data set (i.e., PALM) provides reference masks for retinal atrophy, none of the seven data sets provides reference masks for segmenting the macula. However, two data sets (PALM and UoA-DR) provide the center of the macula. The average size of the macula in humans is around 5.5 mm. However, the average clinical size of the macula in humans is 1.5 mm, whereas the average size of the OD is 1.825 mm (vertically 1.88 mm and horizontally 1.77 mm). We therefore assume that the size of the macula is equal to that of the OD and, using the provided center values, we generate binary masks for segmenting the macula using the following steps (a code sketch is given after the list):
Step-1: Get the corresponding OD reference mask of the color fundus photograph.
Step-2: Generate the background mask by following the steps described in Appendix A.
Step-3: Remove the background noise outside the foreground of the OD mask by masking it with the background mask.
Step-4: Turn the OD mask into a binary mask by global thresholding.
Step-5: Find the boundary contour of the foreground of the binary OD mask.
Step-6: Determine the radius, r, of the minimum enclosing circle of that boundary contour.
Step-7: Draw a circle of radius r centered at the provided macula center.
Step-8: Set the pixels inside the circle to 255 and the pixels outside the circle to 0 in order to generate the final reference mask for the macula.
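A sketch of Steps 5-8, assuming the OD mask has already been cleaned and binarized (Steps 1-4) and that the macula center is given in (x, y) pixel coordinates:

```python
import cv2
import numpy as np

def macula_mask_from_center(od_binary_mask, macula_center):
    """Give the macula mask the radius of the OD's minimum enclosing circle,
    drawn around the provided macula center (Steps 5-8 of Appendix D)."""
    # Steps 5-6: boundary contour of the OD and its minimum enclosing circle.
    contours, _ = cv2.findContours(od_binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    od_contour = max(contours, key=cv2.contourArea)
    _, radius = cv2.minEnclosingCircle(od_contour)
    # Steps 7-8: filled circle of the same radius at the macula center.
    mask = np.zeros_like(od_binary_mask)
    cx, cy = int(round(macula_center[0])), int(round(macula_center[1]))
    cv2.circle(mask, (cx, cy), int(round(radius)), color=255,
               thickness=cv2.FILLED)
    return mask
```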
Appendix E. Architecture of U-Net
Our color-specific U-Nets have the architecture shown in Table A3. Similar to the original U-Net proposed in [17], our U-Nets consist of two parts: a contracting side and an expansive side. Neither side has any fully connected layers; instead, both sides consist mainly of convolutional layers. Unlike the original U-Net, we use a convolutional layer with stride two instead of a max pooling layer for down-sampling on the contracting side. Instead of using unpadded convolutions, we use same-padding convolutions on both the contracting side and the expansive side. Note that with same padding, the output size is the same as the input size. Therefore, we do not need cropping on the expansive side, which was needed in the original work due to the loss of border pixels in every convolution. We use the Exponential Linear Unit (ELU) instead of the Rectified Linear Unit (ReLU) as the activation function in each convolutional layer except the output layer. In the output layer, we use the sigmoid function as the activation function. An alternative would have been the softmax function with two outputs. On both the contracting and expansive sides, the two padded convolutional layers of each block are separated by a drop-out layer, which we use in order to avoid over-fitting. There are 23 convolutional layers in the original U-Net, whereas in our U-Nets there are 29 convolutional layers. In the original U-Net, there are four down-sampling blocks on the contracting side and four up-sampling blocks on the expansive side, whereas in our U-Nets there are five down-sampling and five up-sampling blocks. In total, each of our U-Nets has approximately 5.9 million trainable parameters.
Table A3. Architecture of our color channel-specific U-Nets.
Layer | Output Shape | # Params |
---|---|---|
Input | (256, 256, 1) | 0 |
Convolution (strides = (1, 1), filters = 16, kernel = (3, 3), activation = ELU) | (256, 256, 16) | 160 |
Dropout (0.1) | (256, 256, 16) | 0 |
Convolution (strides = (1, 1), filters = 16, kernel = (3, 3), activation = ELU, name = C1) | (256, 256, 16) | 2320 |
Convolution (strides = (2, 2), filters = 16, kernel = (3, 3), activation = ELU) | (128, 128, 16) | 2320 |
Convolution (strides = (1, 1), filters = 32, kernel = (3, 3), activation = ELU) | (128, 128, 32) | 4640 |
Dropout (0.1) | (128, 128, 32) | 0 |
Convolution (strides = (1, 1), filters = 32, kernel = (3, 3), activation = ELU, name = C2) | (128, 128, 32) | 9248 |
Convolution (strides = (2, 2), filters = 32, kernel = (3, 3), activation = ELU) | (64, 64, 32) | 9248 |
Convolution (strides = (1, 1), filters = 64, kernel = (3, 3), activation = ELU) | (64, 64, 64) | 18,496 |
Dropout (0.2) | (64, 64, 64) | 0 |
Convolution (strides = (1, 1), filters = 64, kernel = (3, 3), activation = ELU, name = C3) | (64, 64, 64) | 36,928 |
Convolution (strides = (2, 2), filters = 64, kernel = (3, 3), activation = ELU) | (32, 32, 64) | 36,928 |
Convolution (strides = (1, 1), filters = 128, kernel = (3, 3), activation = ELU) | (32, 32, 128) | 73,856 |
Dropout (0.2) | (32, 32, 128) | 0 |
Convolution (strides = (1, 1), filters = 128, kernel = (3, 3), activation = ELU, name = C4) | (32, 32, 128) | 147,584 |
Convolution (strides = (2, 2), filters = 128, kernel = (3, 3), activation = ELU) | (16, 16, 128) | 147,584 |
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (16, 16, 256) | 295,168 |
Dropout (0.3) | (16, 16, 256) | 0 |
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU, name = C5) | (16, 16, 256) | 590,080 |
Convolution (strides = (2, 2), filters = 256, kernel = (3, 3), activation = ELU) | (8, 8, 256) | 590,080 |
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (8, 8, 256) | 590,080 |
Dropout (0.3) | (8, 8, 256) | 0 |
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (8, 8, 256) | 590,080 |
Transposed Convolution (strides = (2, 2), filters = 256, kernel = (2, 2), activation = ELU, name = U1) | (16, 16, 256) | 262,400 |
Concatenation (C5, U1) | (16, 16, 512) | 0 |
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (16, 16, 256) | 1,179,904 |
Dropout (0.3) | (16, 16, 256) | 0 |
Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (16, 16, 256) | 590,080 |
Transposed Convolution (strides = (2, 2), filters = 128, kernel = (2, 2), activation = ELU, name = U2) | (32, 32, 128) | 131,200 |
Concatenation (C4, U2) | (32, 32, 256) | 0 |
Convolution (strides = (1, 1), filters = 128, kernel = (3, 3), activation = ELU) | (32, 32, 128) | 295,040 |
Dropout (0.2) | (32, 32, 128) | 0 |
Convolution (strides = (1, 1), filters = 128, kernel = (3, 3), activation = ELU) | (32, 32, 128) | 147,584 |
Transposed Convolution (strides = (2, 2), filters = 64, kernel = (2, 2), activation = ELU, name = U3) | (64, 64, 64) | 32,832 |
Concatenation (C3, U3) | (64, 64, 128) | 0 |
Convolution (strides = (1, 1), filters = 64, kernel = (3, 3), activation = ELU) | (64, 64, 64) | 73,792 |
Dropout (0.2) | (64, 64, 64) | 0 |
Convolution (strides = (1, 1), filters = 64, kernel = (3, 3), activation = ELU) | (64, 64, 64) | 36,928 |
Transposed Convolution (strides = (2, 2), filters = 32, kernel = (2, 2), activation = ELU, name = U4) | (128, 128, 32) | 8224 |
Concatenation (C2, U4) | (128, 128, 64) | 0 |
Convolution (strides = (1, 1), filters = 32, kernel = (3, 3), activation = ELU) | (128, 128, 32) | 18,464 |
Dropout (0.1) | (128, 128, 32) | 0 |
Convolution (strides = (1, 1), filters = 32, kernel = (3, 3), activation = ELU) | (128, 128, 32) | 9248 |
Transposed Convolution (strides = (2, 2), filters = 16, kernel = (2, 2), activation = ELU, name = U5) | (256, 256, 16) | 2064 |
Concatenation (C1, U5) | (256, 256, 32) | 0
Convolution (strides = (1, 1), filters = 16, kernel = (3, 3), activation = ELU) | (256, 256, 16) | 4624 |
Dropout (0.1) | (256, 256, 16) | 0 |
Convolution (strides = (1, 1), filters = 16, kernel = (3, 3), activation = ELU) | (256, 256, 16) | 2320 |
Convolution (strides = (1, 1), filters = 1, kernel = (1, 1), activation = Sigmoid, name = Output) | (256, 256, 1) | 17 |
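For readers who prefer code to the table, the following condensed Keras sketch reproduces the structure of Table A3; the helper names and possibly some minor details differ from the authors' exact implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters, dropout):
    """Two same-padded 3x3 ELU convolutions separated by a dropout layer."""
    x = layers.Conv2D(filters, 3, padding="same", activation="elu")(x)
    x = layers.Dropout(dropout)(x)
    return layers.Conv2D(filters, 3, padding="same", activation="elu")(x)

def build_unet(input_shape=(256, 256, 1)):
    """Condensed sketch of Table A3: five down-sampling blocks (stride-2
    convolutions instead of max pooling), a bottleneck, and five up-sampling
    blocks (2x2 transposed convolutions) with skip connections."""
    inputs = tf.keras.Input(shape=input_shape)
    filters = [16, 32, 64, 128, 256]
    dropouts = [0.1, 0.1, 0.2, 0.2, 0.3]

    skips, x = [], inputs
    for f, d in zip(filters, dropouts):           # contracting side
        x = conv_block(x, f, d)
        skips.append(x)                           # kept for the skip connection
        x = layers.Conv2D(f, 3, strides=2, padding="same", activation="elu")(x)

    x = conv_block(x, 256, 0.3)                   # bottleneck at 8 x 8

    for f, d, skip in zip(filters[::-1], dropouts[::-1], skips[::-1]):
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same",
                                   activation="elu")(x)
        x = layers.Concatenate()([skip, x])       # expansive side with skips
        x = conv_block(x, f, d)

    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)
```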
Appendix F. Inaccurate Masks in UoA_DR for Segmenting CRBVs
Among the five data sets we experiment on, the UoA-DR data set has the largest number of masks for segmenting CRBVs. Even though it could therefore be a good data set for training and testing U-Nets, the performance of every color-specific U-Net on the UoA-DR test set is the worst among all data sets, regardless of whether the U-Nets are trained by combining the data of all five data sets or by using data only from the UoA-DR data set. The reason is that the reference masks provided by the UoA-DR data set for segmenting CRBVs are inaccurate. They usually do not match the real blood vessels well. In many places, vessels are marked in the wrong positions. Moreover, in many places thick vessels are marked with thinner lines and thin vessels with thicker lines. In some reference masks, even clearly visible thin vessels are not marked at all, as shown in Figure A7.
Appendix G. Performance of U-Nets Trained and Tested on Individual Data Set
Since the retinal images of the different data sets were captured by different fundus cameras under different experimental setups, the data sets may differ in difficulty. We, therefore, ran experiments on the different sets individually, i.e., training and testing on the same data set for segmenting CRBVs. Table A4 and Table A5 show the results of CRBVs segmentation for five data sets: CHASE_DB1, DRIVE, HRF, STARE, and UoA_DR. The first and second blocks in these tables show the results of U-Nets for which 25% of the data was used for training, whereas the third block shows the results of U-Nets for which 55% of the data was used for training. In the first block, 55% of the data was used for testing, whereas in the second and third blocks, only 25% of the data was used for testing. In all three cases, 20% of the data was used as the validation set. It should be noted that individual test sets prepared by taking 25% of the data are fairly small, so these results may not be very reliable. However, the results in the first and second blocks are fairly similar, which indicates that the results are reasonably stable. Overall, we see a substantial improvement in the third block compared to the second, suggesting that the U-Nets benefit from more training data. We also notice that, both in Table 10 (same training data for all sets) and Table A4 (set-specific training data), there is a large difference in the results for the different data sets, which indicates that different data sets have different levels of difficulty.
Table A4. Results of U-Nets trained and tested on individual data sets for segmenting CRBVs.
CHASE_DB1

| Data Split | Color | Precision | Recall | AUC | MIoU |
|---|---|---|---|---|---|
| 25% Training, 20% Validation, 55% Test | RGB | 0.569 ± 0.203 | 0.448 ± 0.041 | 0.729 ± 0.059 | 0.537 ± 0.046 |
| | GRAY | 0.615 ± 0.081 | 0.412 ± 0.041 | 0.735 ± 0.024 | 0.503 ± 0.051 |
| | RED | 0.230 ± 0.030 | 0.332 ± 0.053 | 0.613 ± 0.010 | 0.474 ± 0.006 |
| | GREEN | 0.782 ± 0.026 | 0.526 ± 0.020 | 0.792 ± 0.007 | 0.606 ± 0.045 |
| | BLUE | 0.451 ± 0.114 | 0.370 ± 0.018 | 0.683 ± 0.032 | 0.485 ± 0.008 |
| 25% Training, 20% Validation, 25% Test | RGB | 0.571 ± 0.207 | 0.441 ± 0.045 | 0.724 ± 0.062 | 0.538 ± 0.048 |
| | GRAY | 0.624 ± 0.080 | 0.407 ± 0.036 | 0.731 ± 0.023 | 0.502 ± 0.050 |
| | RED | 0.244 ± 0.037 | 0.342 ± 0.046 | 0.619 ± 0.009 | 0.474 ± 0.007 |
| | GREEN | 0.791 ± 0.026 | 0.515 ± 0.022 | 0.787 ± 0.008 | 0.602 ± 0.044 |
| | BLUE | 0.449 ± 0.116 | 0.362 ± 0.017 | 0.677 ± 0.033 | 0.484 ± 0.008 |
| 55% Training, 20% Validation, 25% Test | RGB | 0.816 ± 0.012 | 0.541 ± 0.024 | 0.784 ± 0.012 | 0.684 ± 0.018 |
| | GRAY | 0.803 ± 0.002 | 0.515 ± 0.026 | 0.775 ± 0.010 | 0.671 ± 0.016 |
| | RED | 0.389 ± 0.039 | 0.363 ± 0.027 | 0.680 ± 0.021 | 0.504 ± 0.028 |
| | GREEN | 0.838 ± 0.005 | 0.583 ± 0.017 | 0.806 ± 0.009 | 0.687 ± 0.038 |
| | BLUE | 0.648 ± 0.019 | 0.383 ± 0.012 | 0.698 ± 0.006 | 0.601 ± 0.010 |

DRIVE

| Data Split | Color | Precision | Recall | AUC | MIoU |
|---|---|---|---|---|---|
| 25% Training, 20% Validation, 55% Test | RGB | 0.796 ± 0.036 | 0.443 ± 0.065 | 0.749 ± 0.028 | 0.622 ± 0.072 |
| | GRAY | 0.835 ± 0.016 | 0.419 ± 0.022 | 0.739 ± 0.009 | 0.590 ± 0.066 |
| | RED | 0.362 ± 0.098 | 0.342 ± 0.072 | 0.628 ± 0.015 | 0.476 ± 0.007 |
| | GREEN | 0.846 ± 0.010 | 0.463 ± 0.025 | 0.758 ± 0.009 | 0.671 ± 0.027 |
| | BLUE | 0.537 ± 0.078 | 0.297 ± 0.028 | 0.660 ± 0.022 | 0.512 ± 0.026 |
| 25% Training, 20% Validation, 25% Test | RGB | 0.839 ± 0.035 | 0.442 ± 0.068 | 0.749 ± 0.030 | 0.626 ± 0.073 |
| | GRAY | 0.874 ± 0.018 | 0.413 ± 0.023 | 0.737 ± 0.009 | 0.592 ± 0.068 |
| | RED | 0.400 ± 0.108 | 0.352 ± 0.073 | 0.637 ± 0.014 | 0.476 ± 0.009 |
| | GREEN | 0.896 ± 0.009 | 0.462 ± 0.025 | 0.760 ± 0.009 | 0.676 ± 0.028 |
| | BLUE | 0.575 ± 0.080 | 0.300 ± 0.024 | 0.663 ± 0.020 | 0.512 ± 0.027 |
| 55% Training, 20% Validation, 25% Test | RGB | 0.896 ± 0.005 | 0.539 ± 0.010 | 0.787 ± 0.006 | 0.732 ± 0.014 |
| | GRAY | 0.895 ± 0.004 | 0.528 ± 0.012 | 0.781 ± 0.005 | 0.731 ± 0.006 |
| | RED | 0.660 ± 0.085 | 0.316 ± 0.037 | 0.674 ± 0.017 | 0.520 ± 0.038 |
| | GREEN | 0.904 ± 0.003 | 0.533 ± 0.008 | 0.786 ± 0.003 | 0.718 ± 0.024 |
| | BLUE | 0.783 ± 0.042 | 0.386 ± 0.044 | 0.705 ± 0.021 | 0.645 ± 0.037 |

HRF

| Data Split | Color | Precision | Recall | AUC | MIoU |
|---|---|---|---|---|---|
| 25% Training, 20% Validation, 55% Test | RGB | 0.792 ± 0.006 | 0.537 ± 0.021 | 0.799 ± 0.013 | 0.597 ± 0.024 |
| | GRAY | 0.776 ± 0.004 | 0.497 ± 0.017 | 0.781 ± 0.011 | 0.579 ± 0.025 |
| | RED | 0.204 ± 0.024 | 0.258 ± 0.017 | 0.591 ± 0.014 | 0.467 ± 0.002 |
| | GREEN | 0.821 ± 0.013 | 0.578 ± 0.012 | 0.824 ± 0.006 | 0.624 ± 0.037 |
| | BLUE | 0.155 ± 0.002 | 0.361 ± 0.010 | 0.580 ± 0.001 | 0.482 ± 0.008 |
| 25% Training, 20% Validation, 25% Test | RGB | 0.759 ± 0.006 | 0.535 ± 0.023 | 0.797 ± 0.014 | 0.593 ± 0.023 |
| | GRAY | 0.741 ± 0.005 | 0.503 ± 0.017 | 0.782 ± 0.011 | 0.576 ± 0.025 |
| | RED | 0.197 ± 0.021 | 0.245 ± 0.017 | 0.586 ± 0.013 | 0.467 ± 0.002 |
| | GREEN | 0.794 ± 0.016 | 0.581 ± 0.013 | 0.824 ± 0.006 | 0.619 ± 0.036 |
| | BLUE | 0.149 ± 0.004 | 0.368 ± 0.013 | 0.578 ± 0.002 | 0.480 ± 0.007 |
| 55% Training, 20% Validation, 25% Test | RGB | 0.781 ± 0.008 | 0.608 ± 0.005 | 0.824 ± 0.004 | 0.693 ± 0.013 |
| | GRAY | 0.768 ± 0.010 | 0.573 ± 0.017 | 0.807 ± 0.009 | 0.677 ± 0.022 |
| | RED | 0.512 ± 0.009 | 0.271 ± 0.021 | 0.641 ± 0.013 | 0.536 ± 0.011 |
| | GREEN | 0.788 ± 0.006 | 0.647 ± 0.009 | 0.846 ± 0.003 | 0.674 ± 0.060 |
| | BLUE | 0.274 ± 0.110 | 0.341 ± 0.047 | 0.620 ± 0.032 | 0.500 ± 0.019 |

STARE

| Data Split | Color | Precision | Recall | AUC | MIoU |
|---|---|---|---|---|---|
| 25% Training, 20% Validation, 55% Test | RGB | 0.556 ± 0.204 | 0.300 ± 0.073 | 0.659 ± 0.073 | 0.478 ± 0.008 |
| | GRAY | 0.619 ± 0.050 | 0.283 ± 0.058 | 0.680 ± 0.033 | 0.478 ± 0.017 |
| | RED | 0.148 ± 0.003 | 0.222 ± 0.033 | 0.516 ± 0.009 | 0.468 ± 0.000 |
| | GREEN | 0.600 ± 0.242 | 0.351 ± 0.030 | 0.680 ± 0.082 | 0.483 ± 0.019 |
| | BLUE | 0.167 ± 0.036 | 0.145 ± 0.034 | 0.518 ± 0.021 | 0.469 ± 0.001 |
| 25% Training, 20% Validation, 25% Test | RGB | 0.531 ± 0.195 | 0.334 ± 0.082 | 0.672 ± 0.082 | 0.482 ± 0.009 |
| | GRAY | 0.607 ± 0.055 | 0.314 ± 0.066 | 0.691 ± 0.039 | 0.483 ± 0.020 |
| | RED | 0.143 ± 0.003 | 0.231 ± 0.038 | 0.512 ± 0.011 | 0.471 ± 0.000 |
| | GREEN | 0.587 ± 0.243 | 0.376 ± 0.048 | 0.688 ± 0.092 | 0.488 ± 0.024 |
| | BLUE | 0.164 ± 0.032 | 0.142 ± 0.039 | 0.517 ± 0.020 | 0.472 ± 0.001 |
| 55% Training, 20% Validation, 25% Test | RGB | 0.756 ± 0.014 | 0.448 ± 0.031 | 0.749 ± 0.015 | 0.610 ± 0.038 |
| | GRAY | 0.748 ± 0.010 | 0.504 ± 0.026 | 0.770 ± 0.010 | 0.656 ± 0.017 |
| | RED | 0.181 ± 0.020 | 0.293 ± 0.069 | 0.558 ± 0.008 | 0.474 ± 0.006 |
| | GREEN | 0.749 ± 0.013 | 0.550 ± 0.025 | 0.795 ± 0.012 | 0.659 ± 0.038 |
| | BLUE | 0.163 ± 0.007 | 0.324 ± 0.059 | 0.547 ± 0.006 | 0.469 ± 0.004 |

UoA_DR

| Data Split | Color | Precision | Recall | AUC | MIoU |
|---|---|---|---|---|---|
| 25% Training, 20% Validation, 55% Test | RGB | 0.320 ± 0.011 | 0.398 ± 0.008 | 0.699 ± 0.006 | 0.541 ± 0.015 |
| | GRAY | 0.315 ± 0.011 | 0.353 ± 0.016 | 0.675 ± 0.007 | 0.526 ± 0.017 |
| | RED | 0.203 ± 0.013 | 0.260 ± 0.016 | 0.614 ± 0.006 | 0.516 ± 0.005 |
| | GREEN | 0.332 ± 0.007 | 0.415 ± 0.018 | 0.705 ± 0.009 | 0.534 ± 0.014 |
| | BLUE | 0.237 ± 0.012 | 0.260 ± 0.008 | 0.620 ± 0.007 | 0.526 ± 0.006 |
| 25% Training, 20% Validation, 25% Test | RGB | 0.313 ± 0.011 | 0.395 ± 0.008 | 0.697 ± 0.005 | 0.540 ± 0.015 |
| | GRAY | 0.306 ± 0.011 | 0.350 ± 0.016 | 0.673 ± 0.008 | 0.524 ± 0.017 |
| | RED | 0.201 ± 0.013 | 0.259 ± 0.015 | 0.614 ± 0.006 | 0.516 ± 0.005 |
| | GREEN | 0.326 ± 0.007 | 0.412 ± 0.017 | 0.704 ± 0.009 | 0.532 ± 0.014 |
| | BLUE | 0.232 ± 0.011 | 0.257 ± 0.007 | 0.618 ± 0.006 | 0.524 ± 0.005 |
| 55% Training, 20% Validation, 25% Test | RGB | 0.333 ± 0.005 | 0.445 ± 0.012 | 0.717 ± 0.004 | 0.557 ± 0.007 |
| | GRAY | 0.330 ± 0.003 | 0.413 ± 0.014 | 0.700 ± 0.006 | 0.559 ± 0.004 |
| | RED | 0.289 ± 0.011 | 0.299 ± 0.007 | 0.641 ± 0.004 | 0.543 ± 0.003 |
| | GREEN | 0.335 ± 0.002 | 0.470 ± 0.010 | 0.728 ± 0.004 | 0.564 ± 0.004 |
| | BLUE | 0.281 ± 0.012 | 0.280 ± 0.013 | 0.630 ± 0.006 | 0.540 ± 0.004 |
Appendix H. Effect of CLAHE
Different data sets have different qualities, which lead to different levels of difficulty. One reason for poor-quality images is inappropriate contrast. In general, histogram equalization techniques such as Contrast Limited Adaptive Histogram Equalization (CLAHE) are applied to enhance the local contrast of fundus photographs, and we also apply CLAHE in the pre-processing stage of the experiments mentioned above. In order to investigate the effect of CLAHE on the different data sets, we repeat the experiments using fundus photographs to which CLAHE has not been applied. Table A5 shows the results without CLAHE; they are obtained using the same training/validation/test splits as in the third blocks of Table A4. Overall, CLAHE improves the results for the STARE data set substantially and also considerably improves those for the DRIVE and HRF data sets. For the CHASE_DB1 data set, the effect is mixed and depends on the metric, and for the UoA_DR data set, CLAHE does not seem to help at all.
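For reference, a minimal sketch of CLAHE pre-processing with OpenCV is shown below; the clip limit, tile size, and the choice to enhance each colour channel separately are illustrative assumptions, not necessarily the exact settings used in our pipeline.

```python
# Illustrative CLAHE pre-processing sketch (OpenCV); the parameters and
# per-channel application are assumptions, not our exact settings.
import cv2

def apply_clahe(fundus_bgr, clip_limit=2.0, tile_grid_size=(8, 8)):
    """Enhance the local contrast of a colour fundus photograph by applying
    CLAHE to each 8-bit colour channel separately."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    channels = cv2.split(fundus_bgr)                 # B, G, R channels
    return cv2.merge([clahe.apply(c) for c in channels])

image = cv2.imread("fundus.png")                     # hypothetical file name
enhanced = apply_clahe(image)
```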
Table A5. Results of U-Nets trained and tested on individual data sets (55% training, 20% validation, 25% test) when CLAHE is not applied in pre-processing.
| Database | Color | Precision | Recall | AUC | MIoU |
|---|---|---|---|---|---|
| CHASE_DB1 | RGB | 0.676 ± 0.057 | 0.419 ± 0.037 | 0.727 ± 0.020 | 0.576 ± 0.051 |
| | GRAY | 0.629 ± 0.078 | 0.406 ± 0.052 | 0.714 ± 0.025 | 0.570 ± 0.060 |
| | RED | 0.217 ± 0.012 | 0.353 ± 0.026 | 0.611 ± 0.006 | 0.476 ± 0.009 |
| | GREEN | 0.802 ± 0.017 | 0.530 ± 0.019 | 0.781 ± 0.009 | 0.672 ± 0.023 |
| | BLUE | 0.589 ± 0.023 | 0.373 ± 0.016 | 0.690 ± 0.006 | 0.556 ± 0.050 |
| DRIVE | RGB | 0.856 ± 0.024 | 0.470 ± 0.017 | 0.750 ± 0.010 | 0.693 ± 0.011 |
| | GRAY | 0.855 ± 0.021 | 0.464 ± 0.030 | 0.746 ± 0.015 | 0.693 ± 0.024 |
| | RED | 0.297 ± 0.009 | 0.376 ± 0.017 | 0.619 ± 0.003 | 0.472 ± 0.010 |
| | GREEN | 0.886 ± 0.006 | 0.509 ± 0.010 | 0.771 ± 0.005 | 0.722 ± 0.004 |
| | BLUE | 0.504 ± 0.171 | 0.331 ± 0.043 | 0.642 ± 0.031 | 0.551 ± 0.071 |
| HRF | RGB | 0.757 ± 0.014 | 0.533 ± 0.023 | 0.784 ± 0.010 | 0.664 ± 0.026 |
| | GRAY | 0.730 ± 0.010 | 0.520 ± 0.011 | 0.776 ± 0.006 | 0.655 ± 0.011 |
| | RED | 0.164 ± 0.002 | 0.311 ± 0.010 | 0.577 ± 0.001 | 0.483 ± 0.005 |
| | GREEN | 0.791 ± 0.007 | 0.603 ± 0.008 | 0.820 ± 0.003 | 0.705 ± 0.008 |
| | BLUE | 0.153 ± 0.004 | 0.347 ± 0.022 | 0.576 ± 0.003 | 0.476 ± 0.006 |
| STARE | RGB | 0.579 ± 0.077 | 0.348 ± 0.030 | 0.696 ± 0.020 | 0.497 ± 0.023 |
| | GRAY | 0.379 ± 0.146 | 0.312 ± 0.067 | 0.624 ± 0.041 | 0.487 ± 0.032 |
| | RED | 0.157 ± 0.004 | 0.444 ± 0.055 | 0.558 ± 0.010 | 0.456 ± 0.017 |
| | GREEN | 0.592 ± 0.085 | 0.442 ± 0.021 | 0.742 ± 0.010 | 0.517 ± 0.033 |
| | BLUE | 0.164 ± 0.005 | 0.327 ± 0.056 | 0.546 ± 0.013 | 0.474 ± 0.003 |
| UoA_DR | RGB | 0.323 ± 0.003 | 0.411 ± 0.004 | 0.699 ± 0.002 | 0.555 ± 0.004 |
| | GRAY | 0.319 ± 0.003 | 0.372 ± 0.019 | 0.679 ± 0.009 | 0.556 ± 0.005 |
| | RED | 0.238 ± 0.017 | 0.220 ± 0.014 | 0.598 ± 0.008 | 0.522 ± 0.005 |
| | GREEN | 0.328 ± 0.009 | 0.438 ± 0.019 | 0.713 ± 0.008 | 0.563 ± 0.004 |
| | BLUE | 0.262 ± 0.012 | 0.261 ± 0.008 | 0.619 ± 0.004 | 0.535 ± 0.002 |
Author Contributions
Conceptualization, S.B.; methodology, S.B.; formal analysis, S.B. and J.R.; investigation, M.I.A.K., M.T.H. and A.B.; resources, M.I.A.K., M.T.H. and A.B.; data curation, M.I.A.K., M.T.H. and A.B.; writing—original draft preparation, S.B.; writing—review and editing, S.B. and J.R.; funding acquisition, S.B., M.I.A.K. and T.N. All authors have read and agreed to the published version of the manuscript.
Institutional Review Board Statement
Not applicable. We only used publicly available data sets prepared by other organizations, and these data sets are standard benchmarks for the automatic diagnosis of retinal diseases.
Informed Consent Statement
Not applicable. We only used publicly available data sets prepared by other organizations, and these data sets are standard benchmarks for the automatic diagnosis of retinal diseases.
Data Availability Statement
All data sets used in this work are publicly available as described in Section 4.2.
Conflicts of Interest
We declare no conflict of interest. Author Angkan Biswas was employed by the company CAPM Company Limited. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Funding Statement
This research was funded by the Faculty of Engineering, University of Rajshahi, Bangladesh, grant numbers 71/5/52/R.U./Engg.-08/2020-2021 and 70/5/52/R.U./Engg.-08/2020-2021.
Footnotes
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Resnikoff S., Felch W., Gauthier T.M., Spivey B. The number of ophthalmologists in practice and training worldwide: A growing gap despite more than 200000 practitioners. Br. J. Ophthalmol. 2012;96:783–787. doi: 10.1136/bjophthalmol-2011-301378. [DOI] [PubMed] [Google Scholar]
- 2.Abràmoff M.D., Garvin M.K., Sonka M. Retinal Imaging and Image Analysis. IEEE Rev. Biomed. Eng. 2010;3:169–208. doi: 10.1109/RBME.2010.2084567. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Owen C.G., Rudnicka A.R., Mullen R., Barman S.A., Monekosso D., Whincup P.H., Ng J., Paterson C. Measuring Retinal Vessel Tortuosity in 10-Year-Old Children: Validation of the Computer-Assisted Image Analysis of the Retina (CAIAR) Program. Investig. Ophthalmol. Vis. Sci. 2009;50:2004–2010. doi: 10.1167/iovs.08-3018. [DOI] [PubMed] [Google Scholar]
- 4.Fraz M., Remagnino P., Hoppe A., Uyyanonvara B., Rudnicka A., Owen C., Barman S. An Ensemble Classification-Based Approach Applied to Retinal Blood Vessel Segmentation. IEEE Trans. Biomed. Eng. 2012;59:2538–2548. doi: 10.1109/TBME.2012.2205687. [DOI] [PubMed] [Google Scholar]
- 5.Staal J.J., Abramoff M.D., Niemeijer M., Viergever M.A., van Ginneken B. Ridge based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging. 2004;23:501–509. doi: 10.1109/TMI.2004.825627. [DOI] [PubMed] [Google Scholar]
- 6.Budai A., Bock R., Maier A., Hornegger J., Michelson G. Robust Vessel Segmentation in Fundus Images. Int. J. Biomed. Imaging. 2013;2013:154860. doi: 10.1155/2013/154860. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Porwal P., Pachade S., Kamble R., Kokare M., Deshmukh G., Sahasrabuddhe V., Meriaudeau F. Indian Diabetic Retinopathy Image Dataset (IDRiD): A Database for Diabetic Retinopathy Screening Research. Data. 2018;3:25. doi: 10.3390/data3030025. [DOI] [Google Scholar]
- 8.Cuadros J., Bresnick G. EyePACS: An Adaptable Telemedicine System for Diabetic Retinopathy Screening. J. Diabetes Sci. Technol. 2009;3:509–516. doi: 10.1177/193229680900300315. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Decencière E., Zhang X., Cazuguel G., Lay B., Cochener B., Trone C., Gain P., Ordonez R., Massin P., Erginay A., et al. Feedback on a publicly distributed database: The Messidor database. Image Anal. Stereol. 2014;33:231–234. doi: 10.5566/ias.1155. [DOI] [Google Scholar]
- 10.Hoover A., Kouznetsova V., Goldbaum M. Locating Blood Vessels in Retinal Images by Piece-wise Threshold Probing of a Matched Filter Response. IEEE Trans. Med. Imaging. 2000;19:203–210. doi: 10.1109/42.845178. [DOI] [PubMed] [Google Scholar]
- 11.Hoover A., Goldbaum M. Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels. IEEE Trans. Med. Imaging. 2003;22:951–958. doi: 10.1109/TMI.2003.815900. [DOI] [PubMed] [Google Scholar]
- 12.Abdulla W., Chalakkal R.J. University of Auckland Diabetic Retinopathy (UoA-DR) Database. University of Auckland; Auckland, New Zealand: 2018. [DOI] [Google Scholar]
- 13.Davis B.M., Crawley L., Pahlitzsch M., Javaid F., Cordeiro M.F. Glaucoma: The retina and beyond. Acta Neuropathol. 2016;132:807–826. doi: 10.1007/s00401-016-1609-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Ferris F.L., Fine S.L., Hyman L. Age-Related Macular Degeneration and Blindness due to Neovascular Maculopathy. JAMA Ophthalmol. 1984;102:1640–1642. doi: 10.1001/archopht.1984.01040031330019. [DOI] [PubMed] [Google Scholar]
- 15.Wykoff C.C., Khurana R.N., Nguyen Q.D., Kelly S.P., Lum F., Hall R., Abbass I.M., Abolian A.M., Stoilov I., To T.M., et al. Risk of Blindness Among Patients with Diabetes and Newly Diagnosed Diabetic Retinopathy. Diabetes Care. 2021;44:748–756. doi: 10.2337/dc20-0413. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Romero-Aroca P. Managing diabetic macular edema: The leading cause of diabetes blindness. World J. Diabetes. 2011;2:98–104. doi: 10.4239/wjd.v2.i6.98. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Ronneberger O., Fischer P., Brox T. International Conference on Medical Image Computing and Computer Assisted Intervention. Springer; Berlin/Heidelberg, Germany: 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation; pp. 234–241. [DOI] [Google Scholar]
- 18.DeHoog E., Schwiegerling J. Fundus camera systems: A comparative analysis. Appl. Opt. 2009;48:221–228. doi: 10.1364/AO.48.000221. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Bayer B.E. Color Imaging Array. 3971065. [(accessed on 17 June 2022)];U.S. Patent. 1976 Available online: https://patentimages.storage.googleapis.com/89/c6/87/c4fb7fbb6d0a0d/US3971065.pdf.
- 20.Zhang L., Wu X. Color demosaicking via directional linear minimum mean square-error estimation. IEEE Trans. Image Process. 2005;14:2167–2178. doi: 10.1109/TIP.2005.857260. [DOI] [PubMed] [Google Scholar]
- 21.Chung K., Chan Y. Color Demosaicing Using Variance of Color Differences. IEEE Trans. Image Process. 2006;15:2944–2955. doi: 10.1109/TIP.2006.877521. [DOI] [PubMed] [Google Scholar]
- 22.Chung K., Yang W., Yan W., Wang C. Demosaicing of Color Filter Array Captured Images Using Gradient Edge Detection Masks and Adaptive Heterogeneity-Projection. IEEE Trans. Image Process. 2008;17:2356–2367. doi: 10.1109/TIP.2008.2005561. [DOI] [PubMed] [Google Scholar]
- 23.Flaxman S.R., Bourne R.R.A., Resnikoff S., Ackland P., Braithwaite T., Cicinelli M.V., Das A., Jonas J.B., Keeffe J., Kempen J.H., et al. Global causes of blindness and distance vision impairment 1990–2020: A systematic review and meta-analysis. Lancet Glob. Health. 2017;5:1221–1234. doi: 10.1016/S2214-109X(17)30393-5. [DOI] [PubMed] [Google Scholar]
- 24.Burton M.J., Ramke J., Marques A.P., Bourne R.R.A., Congdon N., Jones I., Tong B.A.M.A., Arunga S., Bachani D., Bascaran C., et al. The Lancet Global Health Commission on Global Eye Health: Vision beyond 2020. Lancet Glob. Health. 2021;9:489–551. doi: 10.1016/S2214-109X(20)30488-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Guerrero-Bote V.P., Moya-Anegón F. A further step forward in measuring journals’ scientific prestige: The SJR2 indicator. J. Inf. 2012;6:674–688. doi: 10.1016/j.joi.2012.07.001. [DOI] [Google Scholar]
- 26.Hipwell J.H., Strachan F., Olson J.A., Mchardy K.C., Sharp P.F., Forrester J.V. Automated detection of microaneurysms in digital red-free photographs: A diabetic retinopathy screening tool. Diabet. Med. 2000;17:588–594. doi: 10.1046/j.1464-5491.2000.00338.x. [DOI] [PubMed] [Google Scholar]
- 27.Walter T., Klein J.C., Massin P., Erginay A. A contribution of image processing to the diagnosis of diabetic retinopathy—Detection of exudates in color fundus images of the human retina. IEEE Trans. Med. Imaging. 2002;21:1236–1243. doi: 10.1109/TMI.2002.806290. [DOI] [PubMed] [Google Scholar]
- 28.Klein R., Meuer S.M., Moss S.E., Klein B.E.K., Neider M.W., Reinke J. Detection of Age-Related Macular Degeneration Using a Nonmydriatic Digital Camera and a Standard Film Fundus Camera. JAMA Arch. Ophthalmol. 2004;122:1642–1646. doi: 10.1001/archopht.122.11.1642. [DOI] [PubMed] [Google Scholar]
- 29.Scott I.U., Edwards A.R., Beck R.W., Bressler N.M., Chan C.K., Elman M.J., Friedman S.M., Greven C.M., Maturi R.K., Pieramici D.J., et al. A Phase II Randomized Clinical Trial of Intravitreal Bevacizumab for Diabetic Macular Edema. Am. Acad. Ophthalmol. 2007;114:1860–1867. doi: 10.1016/j.ophtha.2007.05.062. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Kose C., Sevik U., Gencalioglu O. Automatic segmentation of age-related macular degeneration in retinal fundus images. Comput. Biol. Med. 2008;38:611–619. doi: 10.1016/j.compbiomed.2008.02.008. [DOI] [PubMed] [Google Scholar]
- 31.Abramoff M.D., Niemeijer M., Suttorp-Schultan M.S.A., Viergever M.A., Russell S.R., Ginneken B.V. Evaluation of a System for Automatic Detection of Diabetic Retinopathy From Color Fundus Photographs in a Large Population of Patients With Diabetes. Diabetes Care. 2008;31:193–198. doi: 10.2337/dc07-1312. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Gangnon R.E., Davis M.D., Hubbard L.D., Aiello L.M., Chew E.Y., Ferris F.L., Fisher M.R. A Severity Scale for Diabetic Macular Edema Developed from ETDRS Data. Investig. Ophthalmol. Vis. Sci. 2008;49:5041–5047. doi: 10.1167/iovs.08-2231. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Bock R., Meier J., Nyul L.G., Hornegger J., Michelson G. Glaucoma risk index: Automated glaucoma detection from color fundus images. Med. Image Anal. 2010;14:471–481. doi: 10.1016/j.media.2009.12.006. [DOI] [PubMed] [Google Scholar]
- 34.Kose C., Sevik U., Gencalioglu O., Ikibas C., Kayikicioglu T. A Statistical Segmentation Method for Measuring Age-Related Macular Degeneration in Retinal Fundus Images. J. Med. Syst. 2010;34:1–13. doi: 10.1007/s10916-008-9210-4. [DOI] [PubMed] [Google Scholar]
- 35.Muramatsu C., Hayashi Y., Sawada A., Hatanaka Y., Hara T., Yamamoto T., Fujita H. Detection of retinal nerve fiber layer defects on retinal fundus images for early diagnosis of glaucoma. J. Biomed. Opt. 2010;15:016021. doi: 10.1117/1.3322388. [DOI] [PubMed] [Google Scholar]
- 36.Joshi G.D., Sivaswamy J., Krishnadas S.R. Optic Disk and Cup Segmentation From Monocular Color Retinal Images for Glaucoma Assessment. IEEE Trans. Med. Imaging. 2011;30:1192–1205. doi: 10.1109/TMI.2011.2106509. [DOI] [PubMed] [Google Scholar]
- 37.Agurto C., Barriga E.S., Murray V., Nemeth S., Crammer R., Bauman W., Zamora G., Pattichis M.S., Soliz P. Automatic Detection of Diabetic Retinopathy and Age-Related Macular Degeneration in Digital Fundus Images. Investig. Ophthalmol. Vis. Sci. 2011;52:5862–5871. doi: 10.1167/iovs.10-7075. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Fadzil M.H.A., Izhar L.I., Nugroho H., Nugroho H.A. Analysis of retinal fundus images for grading of diabetic retinopathy severity. Med. Biol. Eng. Comput. 2011;49:693–700. doi: 10.1007/s11517-011-0734-2. [DOI] [PubMed] [Google Scholar]
- 39.Mookiah M.R.K., Acharya U.R., Lim C.M., Petznick A., Suri J.S. Data mining technique for automated diagnosis of glaucoma using higher order spectra and wavelet energy features. Knowl.-Based Syst. 2012;33:73–82. doi: 10.1016/j.knosys.2012.02.010. [DOI] [Google Scholar]
- 40.Hijazi M.H.A., Coenen F., Zheng Y. Data mining techniques for the screening of age-related macular degeneration. Knowl.-Based Syst. 2012;29:83–92. doi: 10.1016/j.knosys.2011.07.002. [DOI] [Google Scholar]
- 41.Deepak K.S., Sivaswamy J. Automatic Assessment of Macular Edema From Color Retinal Images. IEEE Trans. Med. Imaging. 2012;31:766–776. doi: 10.1109/TMI.2011.2178856. [DOI] [PubMed] [Google Scholar]
- 42.Akram M.U., Khalid S., Tariq A., Javed M.Y. Detection of neovascularization in retinal images using multivariate m-Mediods based classifier. Comput. Med. Imaging Graph. 2013;37:346–357. doi: 10.1016/j.compmedimag.2013.06.008. [DOI] [PubMed] [Google Scholar]
- 43.Oh E., Yoo T.K., Park E. Diabetic retinopathy risk prediction for fundus examination using sparse learning: A cross-sectional study. Med. Inform. Decis. Mak. 2013;13:106. doi: 10.1186/1472-6947-13-106. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.Fuente-Arriaga J.A.D.L., Felipe-Riveron E.M., Garduno-Calderon E. Application of vascular bundle displacement in the optic disc for glaucoma detection using fundus images. Comput. Biol. Med. 2014;47:27–35. doi: 10.1016/j.compbiomed.2014.01.005. [DOI] [PubMed] [Google Scholar]
- 45.Akram M.U., Khalid S., Tariq A., Khan S.A., Azam F. Detection and classification of retinal lesions for grading of diabetic retinopathy. Comput. Biol. Med. 2014;45:161–171. doi: 10.1016/j.compbiomed.2013.11.014. [DOI] [PubMed] [Google Scholar]
- 46.Noronha K.P., Acharya U.R., Nayak K.P., Martis R.J., Bhandary S.V. Automated classification of glaucoma stages using higher order cumulant features. Biomed. Signal Process. Control. 2014;10:174–183. doi: 10.1016/j.bspc.2013.11.006. [DOI] [Google Scholar]
- 47.Mookiah M.R.K., Acharya U.R., Koh J.E.W., Chua C.K., Tan J.H., Chandran V., Lim C.M., Noronha K., Laude A., Tong L. Decision support system for age-related macular degeneration using discrete wavelet transform. Med. Biol. Eng. Comput. 2014;52:781–796. doi: 10.1007/s11517-014-1180-8. [DOI] [PubMed] [Google Scholar]
- 48.Casanova R., Saldana S., Chew E.Y., Danis R.P., Greven C.M., Ambrosius W.T. Application of Random Forests Methods to Diabetic Retinopathy Classification Analyses. PLoS ONE. 2014;9:e98587. doi: 10.1371/journal.pone.0098587. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Issac A., Sarathi M.P., Dutta M.K. An Adaptive Threshold Based Image Processing Technique for Improved Glaucoma Detection and Classification. Comput. Methods Programs Biomed. 2015;122:229–244. doi: 10.1016/j.cmpb.2015.08.002. [DOI] [PubMed] [Google Scholar]
- 50.Mookiah M.R.K., Acharya U.R., Chandran V., Martis R.J., Tan J.H., Koh J.E.W., Chua C.K., Tong L., Laude A. Application of higher-order spectra for automated grading of diabetic maculopathy. Med. Biol. Eng. Comput. 2015;53:1319–1331. doi: 10.1007/s11517-015-1278-7. [DOI] [PubMed] [Google Scholar]
- 51.Jaya T., Dheeba J., Singh N.A. Detection of Hard Exudates in Colour Fundus ImagesUsing Fuzzy Support Vector Machine-Based Expert System. J. Digit. Imaging. 2015;28:761–768. doi: 10.1007/s10278-015-9793-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 52.Oh J.E., Yang H.K., Kim K.G., Hwang J.M. Automatic Computer-Aided Diagnosis of Retinal Nerve Fiber Layer Defects Using Fundus Photographs in Optic Neuropathy. Investig. Ophthalmol. Vis. Sci. 2015;56:2872–2879. doi: 10.1167/iovs.14-15096. [DOI] [PubMed] [Google Scholar]
- 53.Singh A., Dutta M.K., ParthaSarathi M., Uher V., Burget R. Image Processing Based Automatic Diagnosis of Glaucoma using Wavelet Features of Segmented Optic Disc from Fundus Image. Comput. Methods Programs Biomed. 2016;124:108–120. doi: 10.1016/j.cmpb.2015.10.010. [DOI] [PubMed] [Google Scholar]
- 54.Acharya U.R., Mookiah M.R.K., Koh J.E.W., Tan J.H., Noronha K., Bhandary S.V., Rao A.K., Hagiwara Y., Chua C.K., Laude A. Novel risk index for the identification of age-related macular degeneration using radon transform and DWT features. Comput. Biol. Med. 2016;73:131–140. doi: 10.1016/j.compbiomed.2016.04.009. [DOI] [PubMed] [Google Scholar]
- 55.Bhaskaranand M., Ramachandra C., Bhat S., Cuadros J., Nittala M.G., Sadda S., Solanki K. Automated Diabetic Retinopathy Screening and Monitoring Using Retinal Fundus Image Analysis. J. Diabetes Sci. Technol. 2016;10:254–261. doi: 10.1177/1932296816628546. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 56.Phan T.V., Seoud L., Chakor H., Cheriet F. Automatic Screening and Grading of Age-Related Macular Degeneration from Texture Analysis of Fundus Images. J. Ophthalmol. 2016;2016:5893601. doi: 10.1155/2016/5893601. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57.Wang Y.T., Tadarati M., Wolfson Y., Bressler S.B., Bressler N.M. Comparison of Prevalence of Diabetic Macular Edema Based on Monocular Fundus Photography vs Optical Coherence Tomography. JAMA Ophthalmol. 2016;134:222–228. doi: 10.1001/jamaophthalmol.2015.5332. [DOI] [PubMed] [Google Scholar]
- 58.Acharya U.R., Bhat S., Koh J.E.W., Bhandary S.V., Adeli H. A novel algorithm to detect glaucoma risk using texton and local configuration pattern features extracted from fundus images. Comput. Biol. Med. 2017;88:72–83. doi: 10.1016/j.compbiomed.2017.06.022. [DOI] [PubMed] [Google Scholar]
- 59.Acharya U.R., Mookiah M.R.K., Koh J.E.W., Tan J.H., Bhandary S.V., Rao A.K., Hagiwara Y., Chua C.K., Laude A. Automated Diabetic Macular Edema (DME) Grading System using DWT, DCT Features and Maculopathy Index. Comput. Biol. Med. 2017;84:59–68. doi: 10.1016/j.compbiomed.2017.03.016. [DOI] [PubMed] [Google Scholar]
- 60.Leontidis G. A new unified framework for the early detection of the progression to diabetic retinopathy from fundus images. Comput. Biol. Med. 2017;90:98–115. doi: 10.1016/j.compbiomed.2017.09.008. [DOI] [PubMed] [Google Scholar]
- 61.Maheshwari S., Pachori R.B., Acharya U.R. Automated Diagnosis of Glaucoma Using Empirical Wavelet Transform and Correntropy Features Extracted From Fundus Images. IEEE J. Biomed. Health Inform. 2017;21:803–813. doi: 10.1109/JBHI.2016.2544961. [DOI] [PubMed] [Google Scholar]
- 62.Maheshwari S., Pachori R.B., Kanhangad V., Bhandary S.V., Acharya R. Iterative variational mode decomposition based automated detection of glaucoma using fundus images. Comput. Biol. Med. 2017;88:142–149. doi: 10.1016/j.compbiomed.2017.06.017. [DOI] [PubMed] [Google Scholar]
- 63.Saha S.K., Fernando B., Cuadros J., Xiao D., Kanagasingam Y. Automated Quality Assessment of Colour Fundus Images for Diabetic Retinopathy Screening in Telemedicine. J. Digit. Imaging. 2018;31:869–878. doi: 10.1007/s10278-018-0084-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 64.Colomer A., Igual J., Naranjo V. Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images. Sensors. 2020;20:5. doi: 10.3390/s20041005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 65.Gardner G.G., Keating D., Williamson T.H., Elliott A.T. Automatic detection of diabetic retinopathy using an artificial neural network: A screening tool. Br. J. Ophthalmol. 1996;80:940–944. doi: 10.1136/bjo.80.11.940. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 66.Nayak J., Acharya U.R., Bhat P.S., Shetty N., Lim T.C. Automated Diagnosis of Glaucoma Using Digital Fundus Images. J. Med. Syst. 2009;33:337–346. doi: 10.1007/s10916-008-9195-z. [DOI] [PubMed] [Google Scholar]
- 67.Ganesan K., Martis R.J., Acharya U.R., Chua C.K., Min L.C., Ng E.Y.K., Laude A. Computer-aided diabetic retinopathy detection using trace transforms on digital fundus images. Med. Biol. Eng. Comput. 2014;52:663–672. doi: 10.1007/s11517-014-1167-5. [DOI] [PubMed] [Google Scholar]
- 68.Mookiah M.R.K., Acharya U.R., Fujita H., Koh J.E.W., Tan J.H., Noronha K., Bhandary S.V., Chua C.K., Lim C.M., Laude A., et al. Local Configuration Pattern Features for Age-Related Macular Degeneration Characterisation and Classification. Comput. Biol. Med. 2015;63:208–218. doi: 10.1016/j.compbiomed.2015.05.019. [DOI] [PubMed] [Google Scholar]
- 69.Asaoka R., Murata H., Iwase A., Araie M. Detecting Preperimetric Glaucoma with Standard Automated Perimetry Using a Deep Learning Classifier. Ophthalmology. 2016;123:1974–1980. doi: 10.1016/j.ophtha.2016.05.029. [DOI] [PubMed] [Google Scholar]
- 70.Abramoff M.D., Lou Y., Erginay A., Clarida W., Amelon R., Folk J.C., Niemeijer M. Improved Automated Detection of Diabetic Retinopathy on a Publicly Available Dataset Through Integration of Deep Learning. Investig. Ophthalmol. Vis. Sci. 2016;57:5200–5206. doi: 10.1167/iovs.16-19964. [DOI] [PubMed] [Google Scholar]
- 71.Gulshan V., Peng L., Coram M., Stumpe M.C., Wu D., Narayanaswamy A., Venugopalan S., Widner K., Madams T., Cuadros J., et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA. 2016;316:2402–2410. doi: 10.1001/jama.2016.17216. [DOI] [PubMed] [Google Scholar]
- 72.Zilly J., Buhmann J.M., Mahapatra D. Glaucoma Detection Using Entropy Sampling And Ensemble Learning For Automatic Optic Cup And Disc Segmentation. Comput. Med. Imaging Graph. 2017;55:28–41. doi: 10.1016/j.compmedimag.2016.07.012. [DOI] [PubMed] [Google Scholar]
- 73.Burlina P.M., Joshi N., Pekala M., Pacheco K.D., Freund D.E., Bressler N.M. Automated Grading of Age-Related Macular Degeneration From Color Fundus Images Using Deep Convolutional Neural Networks. JAMA Ophthalmol. 2017;135:1170–1176. doi: 10.1001/jamaophthalmol.2017.3782. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 74.Abbas Q., Fondon I., Sarmiento A., Jimenez S., Alemany P. Automatic recognition of severity level for diagnosis of diabetic retinopathy using deep visual features. Med. Biol. Eng. Comput. 2017;55:1959–1974. doi: 10.1007/s11517-017-1638-6. [DOI] [PubMed] [Google Scholar]
- 75.Ting D.S.W., Cheung C.Y., Lim G., Tan G.S.W., Quang N.D., Gan A., Hamzah H., Garcia-Franco R., Yeo I.Y.S., Lee S.Y., et al. Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images from Multiethnic Populations with Diabetes. JAMA. 2017;318:2211–2223. doi: 10.1001/jama.2017.18152. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 76.Burlina P., Pacheco K.D., Joshi N., Freund D.E., Bressler N.M. Comparing humans and deep learning performance for grading AMD: A study in using universal deep features and transfer learning for automated AMD analysis. Comput. Biol. Med. 2017;82:80–86. doi: 10.1016/j.compbiomed.2017.01.018. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 77.Gargeya R., Leng T. Automated Identification of Diabetic Retinopathy Using Deep Learning. Ophthalmology. 2017;124:962–969. doi: 10.1016/j.ophtha.2017.02.008. [DOI] [PubMed] [Google Scholar]
- 78.Quellec G., Charriere K., Boudi Y., Cochener B., Lamard M. Deep Image Mining for Diabetic Retinopathy Screening. Med. Image Anal. 2017;39:178–193. doi: 10.1016/j.media.2017.04.012. [DOI] [PubMed] [Google Scholar]
- 79.Ferreira M.V.D.S., Filho A.O.D.C., Sousa A.D.D., Silva A.C., Gattass M. Convolutional neural network and texture descriptor-based automatic detection and diagnosis of Glaucoma. Expert Syst. Appl. 2018;110:250–263. doi: 10.1016/j.eswa.2018.06.010. [DOI] [Google Scholar]
- 80.Grassmann F., Mengelkamp J., Brandl C., Harsch S., Zimmermann M.E., Linkohr B., Peters A., Heid I.M., Palm C., Weber B.H.F. A Deep Learning Algorithm for Prediction of Age-Related Eye Disease Study Severity Scale for Age-Related Macular Degeneration from Color Fundus Photography. Am. Acad. Ophthalmol. 2018;125:1410–1420. doi: 10.1016/j.ophtha.2018.02.037. [DOI] [PubMed] [Google Scholar]
- 81.Khojasteh P., Aliahmad B., Kumar D.K. Fundus images analysis using deep features for detection of exudates, hemorrhages and microaneurysms. BMC Ophthalmol. 2018;18:288. doi: 10.1186/s12886-018-0954-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 82.Raghavendra U., Fujita H., Bhandary S.V., Gudigar A., Tan J.H., Acharya U.R. Deep Convolution Neural Network for Accurate Diagnosis of Glaucoma Using Digital Fundus Images. Inf. Sci. 2018;441:41–49. doi: 10.1016/j.ins.2018.01.051. [DOI] [Google Scholar]
- 83.Burlina P.M., Joshi N., Pacheco K.D., Freund D.E., Kong J., Bressler N.M. Use of Deep Learning for Detailed Severity Characterization and Estimation of 5-Year Risk Among Patients with Age-Related Macular Degeneration. JAMA Ophthalmol. 2018;136:1359–1366. doi: 10.1001/jamaophthalmol.2018.4118. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 84.Lam C., Yu C., Huang L., Rubin D. Retinal Lesion Detection with Deep Learning Using Image Patches. Investig. Ophthalmol. Vis. Sci. 2018;59:590–596. doi: 10.1167/iovs.17-22721. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 85.Li Z., He Y., Keel S., Meng W., Chang R.T., He M. Efficacy of a Deep Learning System for Detecting Glaucomatous Optic Neuropathy Based on Color Fundus Photographs. Ophthalmology. 2018;125:1199–1206. doi: 10.1016/j.ophtha.2018.01.023. [DOI] [PubMed] [Google Scholar]
- 86.Fu H., Cheng J., Xu Y., Zhang C., Wong D.W.K., Liu J., Cao X. Disc-Aware Ensemble Network for Glaucoma Screening From Fundus Image. IEEE Trans. Med. Imaging. 2018;37:2493–2501. doi: 10.1109/TMI.2018.2837012. [DOI] [PubMed] [Google Scholar]
- 87.Liu S., Graham S.L., Schulz A., Kalloniatis M., Zangerl B., Cai W., Gao Y., Chua B., Arvind H., Grigg J., et al. A Deep Learning-Based Algorithm Identifies Glaucomatous Discs Using Monoscopic Fundus Photographs. Ophthalmol. Glaucoma. 2018;1:15–22. doi: 10.1016/j.ogla.2018.04.002. [DOI] [PubMed] [Google Scholar]
- 88.Liu H., Li L., Wormstone I.M., Qiao C., Zhang C., Liu P., Li S., Wang H., Mou D., Pang R., et al. Development and Validation of a Deep Learning System to Detect Glaucomatous Optic Neuropathy Using Fundus Photographs. JAMA Ophthalmol. 2019;137:1353–1360. doi: 10.1001/jamaophthalmol.2019.3501. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 89.Keel S., Li Z., Scheetz J., Robman L., Phung J., Makeyeva G., Aung K., Liu C., Yan X., Meng W., et al. Development and validation of a deep-learning algorithm for the detection of neovascular age-related macular degeneration from colour fundus photographs. Clin. Exp. Ophthalmol. 2019;47:1009–1018. doi: 10.1111/ceo.13575. [DOI] [PubMed] [Google Scholar]
- 90.Li F., Liu Z., Chen H., Jiang M., Zhang X., Wu Z. Automatic Detection of Diabetic Retinopathy in Retinal Fundus Photographs Based on Deep Learning Algorithm. Transl. Vis. Sci. Technol. 2019;8:4. doi: 10.1167/tvst.8.6.4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 91.Diaz-Pinto A., Morales S., Naranjo V., Kohler T., Mossi J.M., Navea A. CNNs for automatic glaucoma assessment using fundus images: An extensive validation. BMC Biomed. Eng. Online. 2019;18:29. doi: 10.1186/s12938-019-0649-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 92.Peng Y., Dharssi S., Chen Q., Keenan T.D., Agron E., Wong W.T., Chew E.Y., Lu Z. DeepSeeNet: A Deep Learning Model for Automated Classification of Patient-based Age-related Macular Degeneration Severity from Color Fundus Photographs. Ophthalmology. 2019;126:565–575. doi: 10.1016/j.ophtha.2018.11.015. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 93.Zeng X., Chen H., Luo Y., Ye W. Automated Diabetic Retinopathy Detection Based on Binocular Siamese-Like Convolutional Neural Network. IEEE Access. 2019;4:30744–30753. doi: 10.1109/ACCESS.2019.2903171. [DOI] [Google Scholar]
- 94.Matsuba S., Tabuchi H., Ohsugi H., Enno H., Ishitobi N., Masumoto H., Kiuchi Y. Accuracy of ultra-wide-field fundus ophthalmoscopy-assisted deep learning, a machine-learning technology, for detecting age-related macular degeneration. Int. Ophthalmol. 2019;39:1269–1275. doi: 10.1007/s10792-018-0940-0. [DOI] [PubMed] [Google Scholar]
- 95.Raman R., Srinivasan S., Virmani S., Sivaprasad S., Rao C., Rajalakshmi R. Fundus photograph-based deep learning algorithms in detecting diabetic retinopathy. Eye. 2019;33:97–109. doi: 10.1038/s41433-018-0269-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 96.Singh R.K., Gorantla R. DMENet: Diabetic Macular Edema diagnosis using Hierarchical Ensemble of CNNs. PLoS ONE. 2020;15:e0220677. doi: 10.1371/journal.pone.0220677. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 97.Gonzalez-Gonzalo C., Sanchez-Gutierrez V., Hernandez-Martinez P., Contreras I., Lechanteur Y.T., Domanian A., Ginneken B.V., Sanchez C.I. Evaluation of a deep learning system for the joint automated detection of diabetic retinopathy and age-related macular degeneration. Acta Ophthalmol. 2020;98:368–377. doi: 10.1111/aos.14306. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 98.Gheisari S., Shariflou S., Phu J., Kennedy P.J., Ashish A., Kalloniatis M., Golzan S.M. A combined convolutional and recurrent neural network for enhanced glaucoma detection. Sci. Rep. 2021;11:1945. doi: 10.1038/s41598-021-81554-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 99.Chaudhuri S., Chatterjee S., Katz N., Nelson M., Goldbaum M. Detection of blood vessels in retinal images using two-dimensional matched filter. IEEE Trans. Med. Imaging. 1989;8:263–269. doi: 10.1109/42.34715. [DOI] [PubMed] [Google Scholar]
- 100.Sinthanayothin C., Boyce J., Cook H., Williamson J. Automated localization of the optic disc, fovea and retinal blood vessels from digital color fundus images. Br. J. Ophthalmol. 1999;83:902–910. doi: 10.1136/bjo.83.8.902. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 101.Lowell J., Hunter A., Steel D., Basu A., Ryder R., Fletcher E., Kennedy L. Optic Nerve Head Segmentation. IEEE Trans. Med. Imaging. 2004;23:256–264. doi: 10.1109/TMI.2003.823261. [DOI] [PubMed] [Google Scholar]
- 102.Li H., Chutatape O. Automated feature extraction in color retinal images by a model based approach. IEEE Trans. Biomed. Eng. 2004;51:246–254. doi: 10.1109/TBME.2003.820400. [DOI] [PubMed] [Google Scholar]
- 103.Soares J.V.B., Leandro J.J.G., Cesar R.M., Jelinek H.F., Cree M.J. Retinal Vessel Segmentation Using the 2-D Gabor Wavelet and Supervised Classification. IEEE Trans. Med. Imaging. 2006;25:1214–1222. doi: 10.1109/TMI.2006.879967. [DOI] [PubMed] [Google Scholar]
- 104.Xu J., Chutatape O., Sung E., Zheng C., Kuan P.C.T. Optic disk feature extraction via modified deformable model technique for glaucoma analysis. Pattern Recognit. 2007;40:2063–2076. doi: 10.1016/j.patcog.2006.10.015. [DOI] [Google Scholar]
- 105.Niemeijer M., Abramoff M.D., Ginneken B.V. Segmentation of the Optic Disc, Macula and Vascular Arch in Fundus Photographs. IEEE Trans. Med. Imaging. 2007;26:116–127. doi: 10.1109/TMI.2006.885336. [DOI] [PubMed] [Google Scholar]
- 106.Ricci E., Perfetti R. Retinal Blood Vessel Segmentation Using Line Operators and Support Vector Classification. IEEE Trans. Med. Imaging. 2007;26:1357–1365. doi: 10.1109/TMI.2007.898551. [DOI] [PubMed] [Google Scholar]
- 107.Abràmoff M.D., Alward W.L.M., Greenlee E.C., Shuba L., Kim C.Y., Fingert J.H., Kwon Y.H. Automated segmentation of the optic disc from stereo color photographs using physiologically plausible features. Investig. Ophthalmol. Vis. Sci. 2007;48:1665–1673. doi: 10.1167/iovs.06-1081. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 108.Tobin K.W., Chaum E., Govindasamy V.P., Karnowski T.P. Detection of Anatomic Structures in Human Retinal Imagery. IEEE Trans. Med. Imaging. 2007;26:1729–1739. doi: 10.1109/TMI.2007.902801. [DOI] [PubMed] [Google Scholar]
- 109.Youssif A., Ghalwash A.Z., Ghoneim A. Optic Disc Detection From Normalized Digital Fundus Images by Means of a Vessels’ Direction Matched Filter. IEEE Trans. Med. Imaging. 2008;27:11–18. doi: 10.1109/TMI.2007.900326. [DOI] [PubMed] [Google Scholar]
- 110.Niemeijer M., Abramoff M.D., Ginneken B.V. Fast Detection of the Optic Disc and Fovea in Color Fundus Photographs. Med. Image Anal. 2009;13:859–870. doi: 10.1016/j.media.2009.08.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 111.Cinsdikici M., Aydin D. Detection of blood vessels in ophthalmoscope images using MF/ant (matched filter/ant colony) algorithm. Comput. Methods Programs Biomed. 2009;96:85–95. doi: 10.1016/j.cmpb.2009.04.005. [DOI] [PubMed] [Google Scholar]
- 112.Welfer D., Scharcanski J., Kitamura C.M., Pizzol M.M.D., Ludwig L.W.B., Marinho D.R. Segmentation of the optic disk in color eye fundus images using an adaptive morphological approach. Comput. Biol. Med. 2010;40:124–137. doi: 10.1016/j.compbiomed.2009.11.009. [DOI] [PubMed] [Google Scholar]
- 113.Aquino A., Gegundez-Arias M.E., Marín D. Detecting the Optic Disc Boundary in Digital Fundus Images Using Morphological, Edge Detection, and Feature Extraction Techniques. IEEE Trans. Med. Imaging. 2010;29:1860–1869. doi: 10.1109/TMI.2010.2053042. [DOI] [PubMed] [Google Scholar]
- 114.Zhu X., Rangayyan R.M., Ells A.L. Detection of the Optic Nerve Head in Fundus Images of the Retina Using the Hough Transform for Circles. J. Digit. Imaging. 2010;23:332–341. doi: 10.1007/s10278-009-9189-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 115.Lu S. Accurate and Efficient Optic Disc Detection and Segmentation by a Circular Transformation. IEEE Trans. Med. Imaging. 2011;30:2126–2133. doi: 10.1109/TMI.2011.2164261. [DOI] [PubMed] [Google Scholar]
- 116.Welfer D., Scharcanski J., Marinho D.R. Fovea center detection based on the retina anatomy and mathematical morphology. Comput. Methods Programs Biomed. 2011;104:397–409. doi: 10.1016/j.cmpb.2010.07.006. [DOI] [PubMed] [Google Scholar]
- 117.Cheung C., Butty Z., Tehrani N., Lam W.C. Computer-assisted image analysis of temporal retinal vessel width and tortuosity in retinopathy of prematurity for the assessment of disease severity and treatment outcome. Am. Assoc. Pediatr. Ophthalmol. Strabismus. 2011;15:374–380. doi: 10.1016/j.jaapos.2011.05.008. [DOI] [PubMed] [Google Scholar]
- 118.Kose C., Ikibas C. A personal identification system using retinal vasculature in retinal fundus images. Expert Syst. Appl. 2011;38:13670–13681. doi: 10.1016/j.eswa.2011.04.141. [DOI] [Google Scholar]
- 119.You X., Peng Q., Yuan Y., Cheung Y., Lei J. Segmentation of retinal blood vessels using the radial projection and semi-supervised approach. Pattern Recognit. 2011;44:2314–2324. doi: 10.1016/j.patcog.2011.01.007. [DOI] [Google Scholar]
- 120.Bankhead P., Scholfield N., Mcgeown G., Curtis T. Fast Retinal Vessel Detection and Measurement Using Wavelets and Edge Location Refinement. PLoS ONE. 2012;7:e32435. doi: 10.1371/journal.pone.0032435. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 121.Qureshi R.J., Kovacs L., Harangi B., Nagy B., Peto T., Hajdu A. Combining algorithms for automatic detection of optic disc and macula in fundus images. Comput. Vis. Image Underst. 2012;116:138–145. doi: 10.1016/j.cviu.2011.09.001. [DOI] [Google Scholar]
- 122.Fraz M., Barman S.A., Remagnino P., Hoppe A., Basit A., Uyyanonvara B., Rudnicka A.R., Owen C. An approach to localize the retinal blood vessels using bit planes and centerline detection. Comput. Methods Programs Biomed. 2012;108:600–616. doi: 10.1016/j.cmpb.2011.08.009. [DOI] [PubMed] [Google Scholar]
- 123.Li Q., You J., Zhang D. Vessel segmentation and width estimation in retinal images using multiscale production of matched filter responses. Expert Syst. Appl. 2012;39:7600–7610. doi: 10.1016/j.eswa.2011.12.046. [DOI] [Google Scholar]
- 124.Lin K.S., Tsai C.L., Sofka M., Chen S.J., Lin W.Y. Retinal Vascular Tree Reconstruction with Anatomical Realism. IEEE Trans. Biomed. Eng. 2012;59:3337–3347. doi: 10.1109/TBME.2012.2215034. [DOI] [PubMed] [Google Scholar]
- 125.Moghimirad E., Rezatofighi S.H., Soltanian-Zadeh H. Retinal vessel segmentation using a multi-scale medialness function. Comput. Biol. Med. 2012;42:50–60. doi: 10.1016/j.compbiomed.2011.10.008. [DOI] [PubMed] [Google Scholar]
- 126.Morales S., Naranjo V., Angulo J., Alcaniz M. Automatic Detection of Optic Disc Based on PCA and Mathematical Morphology. IEEE Trans. Med. Imaging. 2013;32:786–796. doi: 10.1109/TMI.2013.2238244. [DOI] [PubMed] [Google Scholar]
- 127.Chin K.S., Trucco E., Tan L.L., Wilson P.J. Automatic Fovea Location in Retinal Images Using Anatomical Priors and Vessel Density. Pattern Recognit. Lett. 2013;34:1152–1158. doi: 10.1016/j.patrec.2013.03.016. [DOI] [Google Scholar]
- 128.Akram M., Khan S. Multilayered thresholding-based blood vessel segmentation for screening of diabetic retinopathy. Eng. Comput. 2013;29:165–173. doi: 10.1007/s00366-011-0253-7. [DOI] [Google Scholar]
- 129.Gegundez M.E., Marin D., Bravo J.M., Suero A. Locating the fovea center position in digital fundus images using thresholding and feature extraction techniques. Comput. Med. Imaging Graph. 2013;37:386–393. doi: 10.1016/j.compmedimag.2013.06.002. [DOI] [PubMed] [Google Scholar]
- 130.Badsha S., Reza A.W., Tan K.G., Dimyati K. A New Blood Vessel Extraction Technique Using Edge Enhancement and Object Classification. J. Digit. Imaging. 2013;26:1107–1115. doi: 10.1007/s10278-013-9585-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 131.Fathi A., Naghsh-Nilchi A. Automatic wavelet-based retinal blood vessels segmentation and vessel diameter estimation. Biomed. Signal Process. Control. 2013;8:71–80. doi: 10.1016/j.bspc.2012.05.005. [DOI] [Google Scholar]
- 132.Fraz M., Basit A., Barman S.A. Application of Morphological Bit Planes in Retinal Blood Vessel Extraction. J. Digit. Imaging. 2013;26:274–286. doi: 10.1007/s10278-012-9513-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 133.Nayebifar B., Moghaddam H.A. A novel method for retinal vessel tracking using particle filters. Comput. Biol. Med. 2013;43:541–548. doi: 10.1016/j.compbiomed.2013.01.016. [DOI] [PubMed] [Google Scholar]
- 134.Nguyen U.T.V., Bhuiyan A., Park L.A.F., Ramamohanarao K. An effective retinal blood vessel segmentation method using multi-scale line detection. Pattern Recognit. 2013;46:703–715. doi: 10.1016/j.patcog.2012.08.009. [DOI] [Google Scholar]
- 135.Wang Y., Ji G., Lin P. Retinal vessel segmentation using multiwavelet kernels and multiscale hierarchical decomposition. Pattern Recognit. 2013;46:2117–2133. doi: 10.1016/j.patcog.2012.12.014. [DOI] [Google Scholar]
- 136.Giachetti A., Ballerini L., Trucco E. Accurate and reliable segmentation of the optic disc in digital fundus images. J. Med. Imaging. 2014;1:024001. doi: 10.1117/1.JMI.1.2.024001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 137.Kao E.F., Lin P.C., Chou M.C., Jaw T.S., Liu G.C. Automated detection of fovea in fundus images based on vessel-free zone and adaptive Gaussian template. Comput. Methods Programs Biomed. 2014;117:92–103. doi: 10.1016/j.cmpb.2014.08.003. [DOI] [PubMed] [Google Scholar]
- 138.Bekkers E., Duits R., Berendschot T., Romeny B.T.H. A Multi-Orientation Analysis Approach to Retinal Vessel Tracking. J. Math. Imaging Vis. 2014;49:583–610. doi: 10.1007/s10851-013-0488-6. [DOI] [Google Scholar]
- 139.Aquino A. Establishing the macular grading grid by means of fovea centre detection using anatomical-based and visual-based features. Comput. Biol. Med. 2014;55:61–73. doi: 10.1016/j.compbiomed.2014.10.007. [DOI] [PubMed] [Google Scholar]
- 140.Cheng E., Du L., Wu Y., Zhu Y.J., Megalooikonomou V., Ling H. Discriminative vessel segmentation in retinal images by fusing context-aware hybrid features. Mach. Vis. Appl. 2014;25:1779–1792. doi: 10.1007/s00138-014-0638-x. [DOI] [Google Scholar]
- 141.Miri M.S., Abràmoff M.D., Lee K., Niemeijer M., Wang J.K., Kwon Y.H., Garvin M.K. Multimodal Segmentation of Optic Disc and Cup From SD-OCT and Color Fundus Photographs Using a Machine-Learning Graph-Based Approach. IEEE Trans. Med. Imaging. 2015;34:1854–1866. doi: 10.1109/TMI.2015.2412881. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 142.Dai P., Luo H., Sheng H., Zhao Y., Li L., Wu J., Zhao Y., Suzuki K. A New Approach to Segment Both Main and Peripheral Retinal Vessels Based on Gray-Voting and Gaussian Mixture Model. PLoS ONE. 2015;10:e0127748. doi: 10.1371/journal.pone.0127748. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 143.Mary M.C.V.S., Rajsingh E.B., Jacob J.K.K., Anandhi D., Amato U., Selvan S.E. An empirical study on optic disc segmentation using an active contour model. Biomed. Signal Process. Control. 2015;18:19–29. doi: 10.1016/j.bspc.2014.11.003. [DOI] [Google Scholar]
- 144.Hassanien A.E., Emary E., Zawbaa H.M. Retinal blood vessel localization approach based on bee colony swarm optimization, fuzzy c-means and pattern search. J. Vis. Commun. Image Represent. 2015;31:186–196. doi: 10.1016/j.jvcir.2015.06.019. [DOI] [Google Scholar]
- 145.Harangi B., Hajdu A. Detection of the Optic Disc in Fundus Images by Combining Probability Models. Comput. Biol. Med. 2015;65:10–24. doi: 10.1016/j.compbiomed.2015.07.002. [DOI] [PubMed] [Google Scholar]
- 146.Imani E., Javidi M., Pourreza H.R. Improvement of Retinal Blood Vessel Detection Using Morphological Component Analysis. Comput. Methods Programs Biomed. 2015;118:263–279. doi: 10.1016/j.cmpb.2015.01.004. [DOI] [PubMed] [Google Scholar]
- 147.Lazar I., Hajdu A. Segmentation of retinal vessels by means of directional response vector similarity and region growing. Comput. Biol. Med. 2015;66:209–221. doi: 10.1016/j.compbiomed.2015.09.008. [DOI] [PubMed] [Google Scholar]
- 148.Roychowdhury S., Koozekanani D.D., Parhi K.K. Iterative Vessel Segmentation of Fundus Images. IEEE Trans. Biomed. Eng. 2015;62:1738–1749. doi: 10.1109/TBME.2015.2403295. [DOI] [PubMed] [Google Scholar]
- 149.Pardhasaradhi M., Kande G. Segmentation of optic disk and optic cup from digital fundus images for the assessment of glaucoma. Biomed. Signal Process. Control. 2016;24:34–46. doi: 10.1016/j.bspc.2015.09.003. [DOI] [Google Scholar]
- 150.Medhi J.P., Dandapat S. An effective Fovea detection and Automatic assessment of Diabetic Maculopathy in color fundus images. Comput. Biol. Med. 2016;74:30–44. doi: 10.1016/j.compbiomed.2016.04.007. [DOI] [PubMed] [Google Scholar]
- 151.Aslani S., Sarnel H. A new supervised retinal vessel segmentation method based on robust hybrid features. Biomed. Signal Process. Control. 2016;30:1–12. doi: 10.1016/j.bspc.2016.05.006. [DOI] [Google Scholar]
- 152.Roychowdhury S., Koozekanani D., Kuchinka S., Parhi K. Optic Disc Boundary and Vessel Origin Segmentation of Fundus Images. J. Biomed. Health Inform. 2016;20:1562–1574. doi: 10.1109/JBHI.2015.2473159. [DOI] [PubMed] [Google Scholar]
- 153.Onal S., Chen X., Satamraju V., Balasooriya M., Dabil-Karacal H. Automated and simultaneous fovea center localization and macula segmentation using the new dynamic identification and classification of edges model. J. Med. Imaging. 2016;3:034002. doi: 10.1117/1.JMI.3.3.034002. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 154.Bahadarkhan K., Khaliq A.A., Shahid M. A Morphological Hessian Based Approach for Retinal Blood Vessels Segmentation and Denoising Using Region Based Otsu Thresholding. PLoS ONE. 2016;11:e0158996. doi: 10.1371/journal.pone.0158996. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 155.Sarathi M.P., Dutta M.K., Singh A., Travieso C.M. Blood vessel inpainting based technique for efficient localization and segmentation of optic disc in digital fundus images. Biomed. Signal Process. Control. 2016;25:108–117. doi: 10.1016/j.bspc.2015.10.012. [DOI] [Google Scholar]
- 156.Christodoulidis A., Hurtut T., Tahar H.B., Cheriet F. A Multi-scale Tensor Voting Approach for Small Retinal Vessel Segmentation in High Resolution Fundus Images. Comput. Med. Imaging Graph. 2016;52:28–43. doi: 10.1016/j.compmedimag.2016.06.001. [DOI] [PubMed] [Google Scholar]
- 157.Orlando J.I., Prokofyeva E., Blaschko M.B. A Discriminatively Trained Fully Connected Conditional Random Field Model for Blood Vessel Segmentation in Fundus Images. IEEE Trans. Biomed. Eng. 2016;64:16–27. doi: 10.1109/TBME.2016.2535311. [DOI] [PubMed] [Google Scholar]
- 158.Ramani R.G., Balasubramanian L. Macula segmentation and fovea localization employing image processing and heuristic based clustering for automated retinal screening. Comput. Methods Programs Biomed. 2018;160:153–163. doi: 10.1016/j.cmpb.2018.03.020. [DOI] [PubMed] [Google Scholar]
- 159.Khan K.B., Khaliq A.A., Jalil A., Shahid M. A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising. PLoS ONE. 2018;13:e0192203. doi: 10.1371/journal.pone.0192203. [Retracted]
- 160.Chalakkal R.J., Abdulla W.H., Thulaseedharan S.S. Automatic detection and segmentation of optic disc and fovea in retinal images. IET Image Process. 2018;12:2100–2110. doi: 10.1049/iet-ipr.2018.5666.
- 161.Xia H., Jiang F., Deng S., Xin J., Doss R. Mapping Functions Driven Robust Retinal Vessel Segmentation via Training Patches. IEEE Access. 2018;6:61973–61982. doi: 10.1109/ACCESS.2018.2869858.
- 162.Thakur N., Juneja M. Optic disc and optic cup segmentation from retinal images using hybrid approach. Expert Syst. Appl. 2019;127:308–322. doi: 10.1016/j.eswa.2019.03.009.
- 163.Khawaja A., Khan T.M., Naveed K., Naqvi S.S., Rehman N.U., Nawaz S.J. An Improved Retinal Vessel Segmentation Framework Using Frangi Filter Coupled With the Probabilistic Patch Based Denoiser. IEEE Access. 2019;7:164344–164361. doi: 10.1109/ACCESS.2019.2953259.
- 164.Naqvi S.S., Fatima N., Khan T.M., Rehman Z.U., Khan M.A. Automatic Optic Disc Detection and Segmentation by Variational Active Contour Estimation in Retinal Fundus Images. Signal Image Video Process. 2019;13:1191–1198. doi: 10.1007/s11760-019-01463-y.
- 165.Wang X., Jiang X., Ren J. Blood Vessel Segmentation from Fundus Image by a Cascade Classification Framework. Pattern Recognit. 2019;88:331–341. doi: 10.1016/j.patcog.2018.11.030.
- 166.Dharmawan D.A., Ng B.P., Rahardja S. A new optic disc segmentation method using a modified Dolph-Chebyshev matched filter. Biomed. Signal Process. Control. 2020;59:101932. doi: 10.1016/j.bspc.2020.101932.
- 167.Carmona E.J., Molina-Casado J.M. Simultaneous segmentation of the optic disc and fovea in retinal images using evolutionary algorithms. Neural Comput. Appl. 2020;33:1903–1921. doi: 10.1007/s00521-020-05060-w.
- 168.Saroj S.K., Kumar R., Singh N.P. Fréchet PDF based Matched Filter Approach for Retinal Blood Vessels Segmentation. Comput. Methods Programs Biomed. 2020;194:105490. doi: 10.1016/j.cmpb.2020.105490.
- 169.Guo X., Wang H., Lu X., Hu X., Che S., Lu Y. Robust Fovea Localization Based on Symmetry Measure. J. Biomed. Health Inform. 2020;24:2315–2326. doi: 10.1109/JBHI.2020.2971593.
- 170.Zhang Y., Lian J., Rong L., Jia W., Li C., Zheng Y. Even faster retinal vessel segmentation via accelerated singular value decomposition. Neural Comput. Appl. 2020;32:1893–1902. doi: 10.1007/s00521-019-04505-1.
- 171.Zhou C., Zhang X., Chen H. A New Robust Method for Blood Vessel Segmentation in Retinal Fundus Images Based on Weighted Line Detector and Hidden Markov Model. Comput. Methods Programs Biomed. 2020;187:105231. doi: 10.1016/j.cmpb.2019.105231.
- 172.Kim G., Lee S., Kim S.M. Automated segmentation and quantitative analysis of optic disc and fovea in fundus images. Multimed. Tools Appl. 2021;80:24205–24220. doi: 10.1007/s11042-021-10815-1.
- 173.Marin D., Aquino A., Gegundez M., Bravo J.M. A New Supervised Method for Blood Vessel Segmentation in Retinal Images by Using Gray-Level and Moment Invariants-Based Features. IEEE Trans. Med. Imaging. 2011;30:146–158. doi: 10.1109/TMI.2010.2064333.
- 174.Wang S., Yin Y., Cao G., Wei B., Zheng Y., Yang G. Hierarchical retinal blood vessel segmentation based on feature and ensemble learning. Neurocomputing. 2015;149:708–717. doi: 10.1016/j.neucom.2014.07.059.
- 175.Liskowski P., Krawiec K. Segmenting Retinal Blood Vessels With Deep Neural Networks. IEEE Trans. Med. Imaging. 2016;35:2369–2380. doi: 10.1109/TMI.2016.2546227.
- 176.Barkana B.D., Saricicek I., Yildirim B. Performance analysis of descriptive statistical features in retinal vessel segmentation via fuzzy logic, ANN, SVM, and classifier fusion. Knowl.-Based Syst. 2017;118:165–176. doi: 10.1016/j.knosys.2016.11.022.
- 177.Mo J., Zhang L. Multi-level deep supervised networks for retinal vessel segmentation. Int. J. Comput. Assist. Radiol. Surg. 2017;12:2181–2193. doi: 10.1007/s11548-017-1619-0.
- 178.Fu H., Cheng J., Xu Y., Wong D.W.K., Liu J., Cao X. Joint Optic Disc and Cup Segmentation Based on Multi-Label Deep Network and Polar Transformation. IEEE Trans. Med. Imaging. 2018;37:1597–1605. doi: 10.1109/TMI.2018.2791488.
- 179.Al-Bander B., Al-Nuaimy W., Williams B.M., Zheng Y. Multiscale sequential convolutional neural networks for simultaneous detection of fovea and optic disc. Biomed. Signal Process. Control. 2018;40:91–101. doi: 10.1016/j.bspc.2017.09.008.
- 180.Guo Y., Budak U., Sengur A. A Novel Retinal Vessel Detection Approach Based on Multiple Deep Convolution Neural Networks. Comput. Methods Programs Biomed. 2018;167:43–48. doi: 10.1016/j.cmpb.2018.10.021.
- 181.Guo Y., Budak U., Vespa L.J., Khorasani E.S., Şengur A. A Retinal Vessel Detection Approach Using Convolution Neural Network with Reinforcement Sample Learning Strategy. Measurement. 2018;125:586–591. doi: 10.1016/j.measurement.2018.05.003.
- 182.Hu K., Zhang Z., Niu X., Zhang Y., Cao C., Xiao F., Gao X. Retinal vessel segmentation of color fundus images using multiscale convolutional neural network with an improved cross-entropy loss function. Neurocomputing. 2018;309:179–191. doi: 10.1016/j.neucom.2018.05.011.
- 183.Jiang Z., Zhang H., Wang Y., Ko S.B. Retinal blood vessel segmentation using fully convolutional network with transfer learning. Comput. Med. Imaging Graph. 2018;68:1–15. doi: 10.1016/j.compmedimag.2018.04.005.
- 184.Oliveira A., Pereira S., Silva C.A. Retinal Vessel Segmentation based on Fully Convolutional Neural Networks. Expert Syst. Appl. 2018;112:229–242. doi: 10.1016/j.eswa.2018.06.034.
- 185.Sangeethaa S.N., Maheswari P.U. An Intelligent Model for Blood Vessel Segmentation in Diagnosing DR Using CNN. J. Med. Syst. 2018;42:175. doi: 10.1007/s10916-018-1030-6.
- 186.Wang L., Liu H., Lu Y., Chen H., Zhang J., Pu J. A coarse-to-fine deep learning framework for optic disc segmentation in fundus images. Biomed. Signal Process. Control. 2019;51:82–89. doi: 10.1016/j.bspc.2019.01.022.
- 187.Jebaseeli T.J., Durai C.A.D., Peter J.D. Retinal Blood Vessel Segmentation from Diabetic Retinopathy Images using Tandem PCNN Model and Deep Learning Based SVM. Optik. 2019;199:163328. doi: 10.1016/j.ijleo.2019.163328.
- 188.Chakravarty A., Sivaswamy J. RACE-net: A Recurrent Neural Network for Biomedical Image Segmentation. J. Biomed. Health Inform. 2019;23:1151–1162. doi: 10.1109/JBHI.2018.2852635.
- 189.Lian S., Li L., Lian G., Xiao X., Luo Z., Li S. A Global and Local Enhanced Residual U-Net for Accurate Retinal Vessel Segmentation. IEEE/ACM Trans. Comput. Biol. Bioinform. 2019;18:852–862. doi: 10.1109/TCBB.2019.2917188.
- 190.Gu Z., Cheng J., Fu H., Zhou K., Hao H., Zhao Y., Zhang T., Gao S., Liu J. CE-Net: Context Encoder Network for 2D Medical Image Segmentation. IEEE Trans. Med. Imaging. 2019;38:2281–2292. doi: 10.1109/TMI.2019.2903562.
- 191.Noh K.J., Park S.J., Lee S. Scale-Space Approximated Convolutional Neural Networks for Retinal Vessel Segmentation. Comput. Methods Programs Biomed. 2019;178:237–246. doi: 10.1016/j.cmpb.2019.06.030.
- 192.Jiang Y., Tan N., Peng T. Optic Disc and Cup Segmentation Based on Deep Convolutional Generative Adversarial Networks. IEEE Access. 2019;7:64483–64493. doi: 10.1109/ACCESS.2019.2917508.
- 193.Wang C., Zhao Z., Ren Q., Xu Y., Yu Y. Dense U-net Based on Patch-Based Learning for Retinal Vessel Segmentation. Entropy. 2019;21:168. doi: 10.3390/e21020168.
- 194.Jiang Y., Duan L., Cheng J., Gu Z., Xia H., Fu H., Li C., Liu J. JointRCNN: A Region-based Convolutional Neural Network for Optic Disc and Cup Segmentation. IEEE Trans. Biomed. Eng. 2019;67:335–343. doi: 10.1109/TBME.2019.2913211.
- 195.Gao J., Jiang Y., Zhang H., Wang F. Joint disc and cup segmentation based on recurrent fully convolutional network. PLoS ONE. 2020;15:e0238983. doi: 10.1371/journal.pone.0238983.
- 196.Feng S., Zhuo Z., Pan D., Tian Q. CcNet: A Cross-connected Convolutional Network for Segmenting Retinal Vessels Using Multi-scale Features. Neurocomputing. 2020;392:268–276. doi: 10.1016/j.neucom.2018.10.098.
- 197.Jin B., Liu P., Wang P., Shi L., Zhao J. Optic Disc Segmentation Using Attention-Based U-Net and the Improved Cross-Entropy Convolutional Neural Network. Entropy. 2020;22:844. doi: 10.3390/e22080844.
- 198.Tamim N., Elshrkawey M., Azim G.A., Nassar H. Retinal Blood Vessel Segmentation Using Hybrid Features and Multi-Layer Perceptron Neural Networks. Symmetry. 2020;12:894. doi: 10.3390/sym12060894.
- 199.Sreng S., Maneerat N., Hamamoto K., Win K.Y. Deep Learning for Optic Disc Segmentation and Glaucoma Diagnosis on Retinal Images. Appl. Sci. 2020;10:4916. doi: 10.3390/app10144916.
- 200.Bian X., Luo X., Wang C., Liu W., Lin X. Optic Disc and Optic Cup Segmentation Based on Anatomy Guided Cascade Network. Comput. Methods Programs Biomed. 2020;197:105717. doi: 10.1016/j.cmpb.2020.105717.
- 201.Almubarak H., Bazi Y., Alajlan N. Two-Stage Mask-RCNN Approach for Detecting and Segmenting the Optic Nerve Head, Optic Disc, and Optic Cup in Fundus Images. Appl. Sci. 2020;10:3833. doi: 10.3390/app10113833.
- 202.Tian Z., Zheng Y., Li X., Du S., Xu X. Graph convolutional network based optic disc and cup segmentation on fundus images. Biomed. Opt. Express. 2020;11:3043–3057. doi: 10.1364/BOE.390056.
- 203.Zhang L., Lim C.P. Intelligent optic disc segmentation using improved particle swarm optimization and evolving ensemble models. Appl. Soft Comput. 2020;92:106328. doi: 10.1016/j.asoc.2020.106328.
- 204.Xie Z., Ling T., Yang Y., Shu R., Liu B.J. Optic Disc and Cup Image Segmentation Utilizing Contour-Based Transformation and Sequence Labeling Networks. J. Med. Syst. 2020;44:96. doi: 10.1007/s10916-020-01561-2.
- 205.Bengani S., Jothi J.A.A. Automatic segmentation of optic disc in retinal fundus images using semi-supervised deep learning. Multimed. Tools Appl. 2021;80:3443–3468. doi: 10.1007/s11042-020-09778-6.
- 206.Hasan M.K., Alam M.A., Elahi M.T.E., Roy S., Martí R. DRNet: Segmentation and localization of optic disc and fovea from diabetic retinopathy image. Artif. Intell. Med. 2021;111:102001. doi: 10.1016/j.artmed.2020.102001.
- 207.Gegundez-Arias M.E., Marin-Santos D., Perez-Borrero I., Vasallo-Vazquez M.J. A new deep learning method for blood vessel segmentation in retinal images based on convolutional kernels and modified U-Net model. Comput. Methods Programs Biomed. 2021;205:106081. doi: 10.1016/j.cmpb.2021.106081.
- 208.Veena H.N., Muruganandham A., Kumaran T.S. A Novel Optic Disc and Optic Cup Segmentation Technique to Diagnose Glaucoma using Deep Learning Convolutional Neural Network over Retinal Fundus Images. J. King Saud Univ. Comput. Inf. Sci. 2021; in press.
- 209.Wang L., Gu J., Chen Y., Liang Y., Zhang W., Pu J., Chen H. Automated segmentation of the optic disc from fundus images using an asymmetric deep learning network. Pattern Recognit. 2021;112:107810. doi: 10.1016/j.patcog.2020.107810.
- 210.Lu C.K., Tang T.B., Laude A., Deary I.J., Dhillon B., Murray A.F. Quantification of parapapillary atrophy and optic disc. Investig. Ophthalmol. Vis. Sci. 2011;52:4671–4677. doi: 10.1167/iovs.10-6572.
- 211.Cheng J., Tao D., Liu J., Wong D.W.K., Tan N.M., Wong T.Y., Saw S.M. Peripapillary atrophy detection by sparse biologically inspired feature manifold. IEEE Trans. Med. Imaging. 2012;31:2355–2365. doi: 10.1109/TMI.2012.2218118.
- 212.Lu C.K., Tang T.B., Laude A., Dhillon B., Murray A.F. Parapapillary atrophy and optic disc region assessment (PANDORA): Retinal imaging tool for assessment of the optic disc and parapapillary atrophy. J. Biomed. Opt. 2012;17:106010. doi: 10.1117/1.JBO.17.10.106010.
- 213.Septiarini A., Harjoko A., Pulungan R., Ekantini R. Automatic detection of peripapillary atrophy in retinal fundus images using statistical features. Biomed. Signal Process. Control. 2018;45:151–159. doi: 10.1016/j.bspc.2018.05.028.
- 214.Li H., Li H., Kang J., Feng Y., Xu J. Automatic detection of parapapillary atrophy and its association with children myopia. Comput. Methods Programs Biomed. 2020;183:105090. doi: 10.1016/j.cmpb.2019.105090.
- 215.Chai Y., Liu H., Xu J. A new convolutional neural network model for peripapillary atrophy area segmentation from retinal fundus images. Appl. Soft Comput. 2020;86:105890. doi: 10.1016/j.asoc.2019.105890.
- 216.Son J., Shin J.Y., Kim H.D., Jung K.H., Park K.H., Park S.J. Development and Validation of Deep Learning Models for Screening Multiple Abnormal Findings in Retinal Fundus Images. Ophthalmology. 2020;127:85–94. doi: 10.1016/j.ophtha.2019.05.029.
- 217.Sharma A., Agrawal M., Roy S.D., Gupta V., Vashisht P., Sidhu T. Deep learning to diagnose Peripapillary Atrophy in retinal images along with statistical features. Biomed. Signal Process. Control. 2021;64:102254. doi: 10.1016/j.bspc.2020.102254.
- 218.Fu H., Li F., Orlando J.I., Bogunović H., Sun X., Liao J., Xu Y., Zhang S., Zhang X. PALM: PAthoLogic Myopia Challenge. IEEE Dataport. 2019. doi: 10.21227/55pk-8z03.
- 219.Kanan C., Cottrell G.W. Color-to-Grayscale: Does the Method Matter in Image Recognition? PLoS ONE. 2012;7:e29740. doi: 10.1371/journal.pone.0029740.
- 220.Zuiderveld K.J. Contrast Limited Adaptive Histogram Equalization. In: Heckbert P.S., editor. Graphics Gems. Elsevier; Amsterdam, The Netherlands: 1994. pp. 474–485.
Data Availability Statement
All data sets used in this work are publicly available as described in Section 4.2.