Journal of Digital Imaging. 2018 Apr 27;31(6):869–878. doi: 10.1007/s10278-018-0084-9

Automated Quality Assessment of Colour Fundus Images for Diabetic Retinopathy Screening in Telemedicine

Sajib Kumar Saha 1, Basura Fernando 2, Jorge Cuadros 3, Di Xiao 1, Yogesan Kanagasingam 1
PMCID: PMC6261197  PMID: 29704086

Abstract

Fundus images obtained in a telemedicine programme are acquired at different sites and captured by people with varying levels of experience. This results in a relatively high percentage of images that are later marked as unreadable by graders. Unreadable images require a recapture, which is time and cost intensive. An automated method that determines image quality during acquisition is an effective alternative. Here, we describe an automated method for the assessment of image quality in the context of diabetic retinopathy (DR) that can be applied during acquisition. The method applies machine learning techniques to assess each image and assign it to an ‘accept’ or ‘reject’ category; images in the ‘reject’ category require a recapture. A deep convolutional neural network is trained to grade the images automatically. A large representative set of 7000 colour fundus images, obtained from EyePACS and made available by the California Healthcare Foundation, was used for the experiment. Three retinal image analysis experts categorised these images into ‘accept’ and ‘reject’ classes based on a precise definition of image quality in the context of DR. The network was trained using 3428 images. The method achieves 100% accuracy in categorising ‘accept’ and ‘reject’ images, which is about 2% higher than the best-performing traditional machine learning method. In a clinical trial, the proposed method showed 97% agreement with a human grader. The method can easily be incorporated into the fundus image capturing system at the acquisition centre and can guide the photographer as to whether a recapture is necessary.

Keywords: Automated quality assessment, Colour fundus image, Diabetic retinopathy, Telemedicine, Deep learning

Introduction

Diabetic retinopathy (DR) is a microvascular complication of diabetes and the leading cause of blindness in the world [1–3]. By the year 2030, there will be about 552 million people with diabetes, and over half of these individuals will develop DR [4]. Though DR cannot be cured or prevented, the risk of blindness can be reduced through proper care and management of the disease [1]. Early detection and appropriate treatment of DR can reduce the risk of blindness by more than 90% [5, 6]. Early detection, routine referral, regular follow-up examination and timely treatment represent an effective paradigm for diabetes eye care. However, providing such care with a limited number of ophthalmologists is a challenge, and reaching remote or rural areas that lack ophthalmologists is a further issue. Telemedicine programmes for diabetic retinopathy have the potential to provide quality eye care to virtually any location and to address the lack of access to ophthalmic services.

Telemedicine programmes have the following key components: (1) a remote image acquisition centre capable of capturing digital images of the eye; (2) an image reading centre to detect and analyse disease severity; and (3) a clinical coordinating centre that communicates the findings and takes action as required. Retinal images obtained in a screening programme are acquired at different sites, using different cameras operated by qualified people with varying levels of experience [7]. This results in a large variation in image quality and a relatively high percentage of images with inadequate quality for diagnosis. In [8], Abramoff et al. reported that about 12% of the images collected in a screening programme are later marked as unreadable by ophthalmologists. Unreadable images require a recapture for diagnosis. However, in most cases the reading centre is situated at a different location from the image acquisition centre [8]. Since image acquisition is separated in time and place from its medical assessment, reacquisition of the images can be time consuming, expensive, or sometimes even impossible. Thus, assuring sufficient image quality during acquisition is undoubtedly important. From that perspective, we have developed an automated image quality assessment method, to be used in the image acquisition centre, that determines the quality of the captured images and guides the photographer as to whether a recapture is necessary. The method applies machine learning techniques to determine the image quality. It is worth mentioning that machine learning techniques have already been applied to the automated quality assessment of fundus photographs. In contrast to these methods, which used hand-crafted features to extract information from the image, in this work we allow the machine to learn those features itself, relying on deep learning [9].

The development of a precise image quality index is not a straightforward task, mainly because quality is a subjective concept that varies even between experts, especially for images in the middle of the quality scale. In addition, image quality depends on the type of diagnosis being made. For example, an image with dark regions may be considered good for glaucoma grading but bad for diabetic retinopathy grading [10].

The proposed image quality metric relies on the definition of acceptable image quality proposed in [11] for diabetic retinopathy, which sets DR as the applicable area of the proposed method. According to [11], image quality can be categorised as ‘good’, ‘adequate’ or ‘inadequate’, as summarised below.

Image quality specification according to the UK National Screening Committee [11]

Macula-centred images
- Good quality: centre of fovea ≤ 1 disc diameter (DD) from centre of image, vessels clearly visible within 1 DD of centre of fovea, and vessels visible across > 90% of image
- Adequate quality: centre of fovea > 2 DD from edge of image and vessels visible within 1 DD of centre of fovea
- Inadequate quality: failure to meet adequate image quality standards

Optic disc (OD)-centred images
- Good quality: centre of disc ≤ 1 DD from centre of image, fine vessels clearly visible on surface of disc, and vessels visible across > 90% of image
- Adequate quality: complete optic disc > 2 DD from edge of image and fine vessels visible on surface of disc
- Inadequate quality: failure to meet adequate image quality standards

In this work, we categorise image quality into ‘accept’ and ‘reject’ classes, where the ‘accept’ class includes images of ‘good’ and ‘adequate’ quality and the ‘reject’ class includes images of ‘inadequate’ quality. The choice of only two categories for image quality was partially motivated by the comment of Giancardo et al. in [10]: “human experts are likely to disagree if many categories of image quality are used”.

A deep convolutional neural network (CNN) is trained on the ‘accept’ and ‘reject’ image classes. Once trained, the network is applied to categorise test images. It is worth mentioning that, even though we had a precise definition for the image categories, we observed discrepancies between individual subjective opinions, specifically for images that were borderline. This is consistent with the comment of Paulus et al. in [12]: “it is an individual decision at which point the image quality becomes too bad for a stable diagnosis”. We identified discrepancies between subjects on about 2% of our images and call these images ‘ambiguous’. The ‘ambiguous’ image set is not used during training; however, it is used during testing.

Review of the Retinal Image Quality Assessment Methods

Several approaches have been developed to automatically determine the quality of retinal images. These approaches can be divided into two groups based on which image parameters/criteria they consider to classify image quality [13]. The first group is based on generic image quality parameters such as sharpness and contrast. In 1999, Lee et al. [14] proposed an automated retinal image quality assessment method based on the global image intensity histogram. In 2001, Lalonde et al. [15] proposed a method based on analysis of the global edge histogram in combination with localised image intensity histograms. Davis et al. [16] proposed a quality assessment method based on contrast and luminance features. A method based on sharpness and illumination parameters was proposed by Bartling et al. [17] in 2009: illumination was measured through evaluation of contrast and brightness, and the degree of sharpness was calculated from the spatial frequencies of the image. In 2014, Dias et al. [18] proposed a method based on the fusion of generic image quality indicators such as image colour, focus, contrast and illumination.

The second group is based on structural information in the image. To the best of our knowledge, the first quality assessment method based on eye structure criteria was proposed by Usher et al. [19] in 2003. This method is based on the clarity and area of the detected eye vasculature and achieved a sensitivity of 84.3% and a specificity of 95.0% on a dataset of 1746 retinal images [20, 21]. In 2005, Lowell et al. [22] and, in 2006, Fleming et al. [20] presented methods that are very specific to retinal image analysis. Both methods classify images by assigning them to a number of quality classes: an analysis is made of the vasculature in a circular area around the macula, and the presence of small vessels in this region is used as the indicator of image quality. In [7], Niemeijer et al. proposed a method based on clustering the filter bank response vectors in order to obtain a compact representation of the image structures. The authors tested the method on 2000 images and reported an area under the receiver operating characteristic (ROC) curve of 99.68%. Inspired by the work of Niemeijer et al., Giancardo et al. [23] proposed a method focused on the eye vasculature only. Their method achieved an accuracy of 100% on the identification of ‘good’ images, 83% on ‘fair’ images, 0% on ‘poor’ images and 11% on ‘outlier’ images on a dataset of 84 retinal images. In 2011, Hunter et al. [21] proposed a method based on the clarity of retinal vessels within the macula region and the contrast between the fovea region and the retinal background. The method achieved a sensitivity of 100% and a specificity of 93% on a dataset of 200 retinal images.

The two groups of image features mentioned above were first combined in a work by Paulus et al. [12] in 2010. Image structure clustering [7], Haralick features, and sharpness measures based on image gradient magnitudes were used to classify poor-quality retinal images. The method achieved a sensitivity of 96.9% and a specificity of 80.0% on a dataset of 301 retinal images.

Deep Learning

Deep learning, also known as deep structured learning, hierarchical learning or deep machine learning, is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers [9, 24]. While traditional machine learning approaches rely on hand-crafted features to extract useful information from data, deep learning lets the machine learn the features by itself [25].

In recent years, deep learning architectures, such as deep convolutional neural networks (CNNs), have gained significant attention in computer vision [9, 26, 27]. Deep convolutional neural networks are typically viewed as layers of interconnected “neurons” which exchange information. Raw data (e.g., an image) is fed into the network and “representations” of the data are then generated by each successive layer (for example, the first layer may represent the location and orientation of edges within an image, while successive layers may deal with higher levels of abstraction). Ultimately, output neurons are activated and the data is classified. A key feature of these networks is that the connections between layers have numeric weights that can be tuned based on experience, allowing them to adapt to their inputs and become capable of learning. A schematic representation of a deep CNN is shown in Fig. 1.

Fig. 1 AlexNet convolutional neural network [26]

Materials and Methods

We train a deep convolutional neural network (CNN) on colour fundus image sets that were classified as ‘accept’ or ‘reject’ in the context of DR. Once trained, the network is used for the two-class classification task of accepting or rejecting test images.

Materials

A total of 7000 images were obtained from EyePACS (http://www.eyepacs.com/), made available by the California Healthcare Foundation. The images were chosen to cover the broad demographic diversity and the different fundus cameras found in EyePACS. The images were optic disc (OD) or macula centred.

Three retinal image analysis experts, including one ophthalmologist, were asked to grade those images as ‘accept’ or ‘reject’ as described in the “Introduction” section. A computer platform was developed on which each expert was shown the images and asked to ‘accept’ or ‘reject’ each one. Even with the precise definition of the ‘accept’ and ‘reject’ classes, we observed discrepancies between individual subjective opinions, especially for images that were borderline. The images for which we observed a discrepancy between individual subjects were categorised as ‘ambiguous’. We identified 147 images (about 2%) as ‘ambiguous’; 249 images (about 4%) were classified as ‘reject’ and the rest were classified as ‘accept’. The ‘accept’ and ‘reject’ classes were used to train the CNN. Fifty percent of the images were used for training and 50% for testing. Figure 2a, b shows exemplary images of the ‘accept’ and ‘reject’ classes. Figure 3 shows sample images of the ‘ambiguous’ class.

Fig. 2 Sample images of the (a) ‘accept’ and (b) ‘reject’ classes

Fig. 3 Sample images of the ‘ambiguous’ class

Pre-processing

Each image was first subjected to a pre-processing phase which cropped the image to include only pixels showing retinal data. The implemented cropping algorithm is inspired by [18], in which the image mask is used to find a bounding box around the retinal image region of interest (ROI). To obtain the image mask, a simple thresholding-based strategy [28] was implemented in this work, where only the green channel of the image was used and the threshold value was determined through a statistical study of 500 randomly selected images from EyePACS. The cropping algorithm first located the edges of the retinal ROI (left, right, upper and lower) and then, after fitting an enclosing box which included a small all-around border, cropped the original image to that box area.
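The following is a minimal sketch of such a cropping step in Python, assuming NumPy and a hypothetical fixed green-channel threshold; the threshold used in the paper was instead derived from the statistical study of 500 EyePACS images mentioned above.

```python
import numpy as np

def crop_to_retina(image_bgr, green_threshold=15, border=10):
    """Crop a fundus photograph to the retinal region of interest.

    A simple threshold on the green channel separates the retina from the
    dark background; the bounding box of the resulting mask, expanded by a
    small all-around border, defines the crop. The threshold value here is
    a placeholder only.
    """
    green = image_bgr[:, :, 1]              # green channel only
    mask = green > green_threshold          # foreground = retinal pixels
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    h, w = green.shape
    top = max(top - border, 0)              # expand by the border,
    bottom = min(bottom + border, h - 1)    # clipped to the image size
    left = max(left - border, 0)
    right = min(right + border, w - 1)
    return image_bgr[top:bottom + 1, left:right + 1]

# Example usage (paths and resize step are illustrative):
# img = cv2.imread("fundus.jpg")
# roi = cv2.resize(crop_to_retina(img), (256, 256))
```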

Training the Deep Learning Network

We used the AlexNet CNN architecture [26]. The network was modified for a two-class classification task, and we used the hinge loss as the loss function [29]. Our CNN architecture consists of five convolution layers, two fully connected layers and a binary classification layer. Each convolution layer performs the convolution operation on inputs with a variable number of channels. The first convolution layer consists of 96 filters of size 11 by 11. The second convolution layer consists of 256 filters. The third and fourth convolution layers consist of 384 filters each. The final convolution layer consists of 256 filters. Each of the two fully connected layers has 4096 units, and the final fully connected layer produces an output of dimensionality 4096 by 1. The image information is thus encoded into a 4096-dimensional feature vector, and the binary classifier makes its predictions based on this 4096-dimensional encoding of the image data. The input to our network is a 256 by 256 RGB image consisting of three channels. The EyePACS images were first cropped so that the dark background surrounding the fundus is as small as possible; following that, the images were resized to 256 by 256.
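As an illustration only (the authors implemented their network in Caffe), the following PyTorch sketch builds an AlexNet-style backbone with the two 4096-unit fully connected layers and a single-score output head of the kind described above; note that torchvision's AlexNet variant differs slightly from the 96/256/384/384/256 filter configuration given in the text.

```python
import torch
import torch.nn as nn
from torchvision import models

# AlexNet-style backbone initialised from ImageNet weights, standing in for
# the Caffe reference model used in the paper.
backbone = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Keep the two 4096-unit fully connected layers and swap the 1000-way ImageNet
# output for a single linear unit; its raw score is trained with a hinge loss
# and its sign gives the 'accept' / 'reject' decision.
backbone.classifier[6] = nn.Linear(4096, 1)

# Cropped fundus photographs are resized to 256 x 256 RGB before being fed in.
dummy = torch.randn(1, 3, 256, 256)
score = backbone(dummy)        # shape (1, 1): one quality score per image
```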

Mathematically, the training data are denoted by D = {(x1, y1), (x2, y2), …, (xn, yn)}, consisting of n training samples, where each xi is an image and yi is its class label. The final CNN classifier consists of 4096 parameters, which we denote by w. During training, we seek the CNN classifier and the filter parameters θ by minimising the hinge loss over the training data, as given by the following equation.

Loss(θ, w, D) = ∑_{i=1}^{n} max(0, −y_i × w^T CNN_θ(x_i))

During the training of the CNN, we minimise the hinge loss for the binary classification case. Here, the CNN function takes each image xi and returns a vector. We then take the dot product between this vector and the classifier w to obtain the classification score. The label for each image is either +1 or −1. If the classifier score is positive and the label is also positive (+1), then no loss is incurred. However, if the CNN classifier score is negative and the sample’s ground-truth label is positive, then we obtain a positive loss, and the objective of CNN training is to reduce such losses. To make sure we find good image representations and a good classifier, we minimise this hinge loss over the entire training set using stochastic gradient descent.
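A minimal sketch of this loss on a mini-batch, assuming the network returns one raw score per image and labels are coded as +1/−1:

```python
import torch

def hinge_loss(scores, labels):
    """Loss from the equation above: sum_i max(0, -y_i * w^T CNN(x_i)).

    scores -- raw classifier scores, shape (batch,)
    labels -- ground-truth labels in {+1, -1}, shape (batch,)
    No loss is incurred when the sign of the score agrees with the label;
    the textbook hinge loss adds a margin term, max(0, 1 - y * score).
    """
    return torch.clamp(-labels * scores, min=0.0).sum()
```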

To obtain probabilistic values for the predictions, the scores would need to be calibrated; for this task, one can use a generalised logistic function. However, in this work, we use the raw scores returned by the CNN to make the final predictions.
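If calibrated probabilities were wanted, a logistic mapping of the raw scores (a special case of the generalised logistic function) could be fitted on held-out data; a hypothetical sketch, with placeholder slope and offset:

```python
import numpy as np

def logistic_calibration(score, a=1.0, b=0.0):
    """Map a raw CNN classifier score to a pseudo-probability in (0, 1).

    The slope `a` and offset `b` are placeholders; in practice they would be
    fitted on held-out (score, label) pairs. The paper itself skips this
    step and thresholds the raw scores directly.
    """
    return 1.0 / (1.0 + np.exp(-(a * score + b)))
```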

To train the CNN, we used transfer learning instead of full training [30]. Transfer learning fine-tunes a CNN that has already been trained on a large labelled dataset from a different application and is a promising alternative to full training, especially when the training data are not large. More specifically, transfer learning uses a pre-trained model to initialise the network parameters, which are then fine-tuned based on the provided image data [30–32]. In this work, we used the Caffe reference model [33] to initialise the CNN and then fine-tuned the parameters using our image dataset. It is worth mentioning that the Caffe reference model is the most widely used pre-trained model and was trained on a very large image collection named ImageNet [34]. For fine-tuning, we used the stochastic gradient descent method with a variable learning rate starting from 0.01 and ending at 0.0001. We fine-tuned the entire network for 20 epochs.
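A sketch of such a fine-tuning loop under the stated schedule (stochastic gradient descent, learning rate decayed from 0.01 towards 0.0001 over 20 epochs), reusing the model and hinge loss sketched above and assuming a hypothetical train_loader of (image, ±1 label) batches; the momentum value is an assumption not stated in the paper:

```python
import torch

def fine_tune(backbone, train_loader, epochs=20):
    """Fine-tune the whole network with SGD and a decaying learning rate."""
    optimizer = torch.optim.SGD(backbone.parameters(), lr=0.01, momentum=0.9)
    # gamma = 0.79 decays 0.01 down to roughly 0.0001 over 20 epochs.
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.79)
    for _ in range(epochs):
        for images, labels in train_loader:        # labels in {+1, -1}
            optimizer.zero_grad()
            scores = backbone(images).squeeze(1)   # one raw score per image
            loss = hinge_loss(scores, labels.float())
            loss.backward()
            optimizer.step()
        scheduler.step()
```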

Results

Evaluation on EyePACS Dataset

The CNN was trained using 3428 colour fundus images; the remaining 3425 images were used for evaluation. The test dataset had 3302 ‘accept’ images and 123 ‘reject’ images, categorised based on the subjective evaluation. We computed the accuracy (i.e. the number of correctly classified images divided by the total number of images) and plotted the ROC curve to analyse the performance of the two-class classification task. ROC curves plot the true positive fraction against the false positive fraction. The VLFeat software package [35] was used to generate the ROC curves. Figure 4 shows the ROC plots for the ‘accept’ and ‘reject’ classes.

Fig. 4 ROC plots of the (a) ‘accept’ and (b) ‘reject’ class
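The paper generated its ROC curves with the VLFeat package; an equivalent computation of the accuracy and the area under the ROC curve, sketched here with scikit-learn purely as an illustrative substitute, assuming arrays of test labels and raw CNN scores:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, roc_curve

def evaluate(y_true, y_score):
    """Accuracy and ROC statistics for the two-class quality decision.

    y_true  -- test labels, +1 for 'accept' and -1 for 'reject'
    y_score -- matching raw CNN classifier scores
    """
    y_pred = np.where(y_score > 0, 1, -1)   # sign of the score decides the class
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_score),
        "roc": (fpr, tpr, thresholds),      # for plotting the ROC curve
    }
```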

The area under the ROC curve and the accuracy were 100% for both the ‘accept’ and ‘reject’ classes.

While we did not use the ‘ambiguous’ images to train our CNN, we did employ the trained CNN to compute classification scores for those images. For the ‘ambiguous’ images, the computed scores lie well apart from the ‘accept’ image scores and fall between the ‘accept’ and ‘reject’ score ranges, which means that, along with ‘accept’ and ‘reject’ images, the network is able to detect images that are borderline. Figure 5 shows the mean classification scores returned by the CNN for the three image categories. The ‘accept’ images get a score between 0.5 and 1.0, the ‘reject’ images a score between −2.6 and −2.3, and the ‘ambiguous’ images a score between −2.1 and −1.5.

Fig. 5 Classification scores returned by the CNN for the three different image categories. Whiskers show the standard deviations
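As an illustration of how these score ranges could be used to flag borderline images at the acquisition site, the following sketch applies hypothetical cut-offs read off the ranges quoted above; the paper itself trains only on ‘accept’ and ‘reject’ and identifies ‘ambiguous’ images by their intermediate scores.

```python
def categorise(score, accept_cutoff=0.5, reject_cutoff=-2.3):
    """Map a raw CNN score to a quality category.

    The cut-offs are illustrative only, chosen from the observed ranges:
    'accept' scores fell roughly between 0.5 and 1.0, 'reject' scores
    between -2.6 and -2.3, and 'ambiguous' scores in between.
    """
    if score >= accept_cutoff:
        return "accept"
    if score <= reject_cutoff:
        return "reject"
    return "ambiguous"   # borderline image -- flag for the photographer
```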

We compared our proposed method with the method of Dias et al. [18], which is considered the best-performing traditional machine learning method. We implemented the method based on the description provided in the manuscript, and we used images from EyePACS to compensate for non-publicly available training images (where necessary). As a classifier we used a feed-forward backpropagation neural network [18]. To compare the two methods we computed the commonly used sensitivity, specificity and accuracy. Table 1 presents the findings of this comparison.

Table 1.

Comparative performance of algorithms

Statistic     Proposed method (%)   Dias et al.’s method (%) [18]
Sensitivity   100                   99.07
Specificity   100                   97.95
Accuracy      100                   98.03

The above comparison shows that the proposed method outperforms the best-performing traditional machine learning method.
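For reference, the three statistics in Table 1 follow directly from confusion-matrix counts; a small helper is sketched below (which quality class is treated as ‘positive’ is a reporting convention the paper does not state):

```python
def screening_statistics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts.

    tp/fn count the positive class, tn/fp the negative class.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy
```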

Evaluation on Clinical Trial Data

From September 2016 to June 2017, we ran a diabetic retinopathy screening trial at a clinic centre in Perth, Australia. The project was funded by the Australian National Health and Medical Research Council to evaluate our automatic DR image analysis and DR grading approach. A brief workflow is shown in Fig. 6. Our telemedicine system, which integrated recording of basic patient information, retinal imaging, automatic DR analysis and the human grading process, ran in parallel with the medical centre’s electronic medical record (EMR) system.

Fig. 6 Workflow of automatic retinal image DR analysis in a clinical trial (EMR electronic medical record, GP general practitioner, DR diabetic retinopathy)

During this period, a total of 214 patients were scanned. The patients were 21 to 81 years old, and the numbers of male and female patients were exactly equal. From this clinical trial, we collected 200 fundus images at random and applied the proposed method to assess their quality. We also employed an experienced grader to assess the quality of these images. We compared the results produced by the human grader and the proposed automated method and found an agreement of 97%.

Discussion and Conclusion

We have proposed a deep learning method to automatically determine image quality, with the aim of assisting the photographer in deciding whether a recapture is necessary. We have observed that, even with a precise definition of the ‘accept’ and ‘reject’ categories, individual subjective opinion varies, especially for images that are borderline. We identified about 2% of the images for which individual subjective opinion varied and categorised those images as ‘ambiguous’. With that finding, we considered two different cases for training the CNN. In the first case, we used ‘accept’, ‘reject’ and ‘ambiguous’ images to train the network; the network was trained for a three-class classification task with categorical cross-entropy as the loss function. In the second case, we used the ‘accept’ and ‘reject’ classes only, as explained in “Training the Deep Learning Network”. Our experimental findings revealed that when ‘ambiguous’ images are also used for training, they confuse the network and the overall performance decreases. Figure 7 shows the ROC plots for the ‘accept’, ‘reject’ and ‘ambiguous’ classes. In comparison, when the CNN was trained on the ‘accept’ and ‘reject’ classes only, the accuracy was significantly higher (Fig. 4). Hence, the proposed method uses only the ‘accept’ and ‘reject’ classes for training. It is worth mentioning that the proposed method returns a score, and the scores for the ‘ambiguous’ images fall in a different range from those of the ‘accept’ and ‘reject’ images. This means that, in addition to accurately identifying ‘accept’ and ‘reject’ images, the proposed method is able to detect images that are borderline and thereby assist the photographer in deciding whether a recapture is necessary.

Fig. 7 ROC plots of the (a) ‘accept’, (b) ‘reject’ and (c) ‘ambiguous’ class

The automated image quality verification system proposed in this paper shows excellent results. Two different training setups for the CNN were tested and the better one was identified. The experiments were conducted on a large, representative set of screening images obtained from EyePACS and from a clinical trial at a reading centre in Perth, Australia. The total running time to categorise a given image is about 15 s.1 The software has not been optimised extensively, and therefore further increases in speed can be expected.

In conclusion, image quality assessment during acquisition is essential to obtain the full benefit that telemedicine can offer and to ensure cost- and time-efficient eye care. The proposed automated method for retinal image quality assessment achieves an accuracy of 100% in categorising the ‘accept’ and ‘reject’ images of the EyePACS dataset. In a clinical trial, the proposed method showed 97% agreement with a human grader. Images categorised as ‘reject’ will require a recapture. The method can easily be incorporated into the image acquisition system and will guide the photographer as to whether a recapture is necessary.

It is worth mentioning that the algorithm developed in this manuscript is in the context of diabetic retinopathy and may not be directly applicable to other eye diseases. The current processing time for each image is around 15 s using a desktop CPU, which may not be suitable for portable devices in telemedicine.

Footnotes

1. Processor: Intel Core i7 2.90 GHz, RAM: 32 GB

References

1. Michelson G, ed: Teleophthalmology in Preventive Medicine. Berlin Heidelberg: Springer, 2015
2. Patton N, Aslam TM, MacGillivray T, Deary IJ, Dhillon B, Eikelboom RH, Yogesan K, Constable IJ. Retinal image analysis: concepts, applications and potential. Prog Retin Eye Res. 2006;25(1):99–127. doi: 10.1016/j.preteyeres.2005.07.001
3. Luzio S, Hatcher S, Zahlmann G, Mazik L, Morgan M, Liesenfeld B, Bek T, Schuell H, Schneider S, Owens DR, Kohner E. Feasibility of using the TOSCA telescreening procedures for diabetic retinopathy. Diabet Med. 2004;21(10):1121–1128. doi: 10.1111/j.1464-5491.2004.01305.x
4. Sim DA, Keane PA, Tufail A, Egan CA, Aiello LP, Silva PS. Automated retinal image analysis for diabetic retinopathy in telemedicine. Curr Diab Rep. 2015;15:14. doi: 10.1007/s11892-015-0577-6
5. Vashist P, Singh S, Gupta N, Saxena R. Role of early screening for diabetic retinopathy in patients with diabetes mellitus: an overview. Indian J Community Med. 2011;36(4):247. doi: 10.4103/0970-0218.91324
6. Early Treatment Diabetic Retinopathy Study Research Group. ETDRS report number 9. Ophthalmology. 1991;98(5 Suppl):766–785
7. Niemeijer M, Abramoff MD, van Ginneken B. Image structure clustering for image quality verification of color retina images in diabetic retinopathy screening. Med Image Anal. 2006;10(6):888–898. doi: 10.1016/j.media.2006.09.006
8. Abramoff MD, Suttorp-Schulten MSA. Web-based screening for diabetic retinopathy in a primary care population: the EyeCheck project. J Telemed e-Health. 2005;11(6):668–675. doi: 10.1089/tmj.2005.11.668
9. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–444. doi: 10.1038/nature14539
10. Giancardo L, Meriaudeau F, Karnowski TP, Chaum E, Tobin KW. Quality assessment of retinal fundus images using elliptical local vessel density. In: New Developments in Biomedical Engineering, 2010
11. UK National Screening Committee. Essential Elements in Developing a Diabetic Retinopathy Screening Programme. Available at https://bulger.co.uk/dacorumhealth/daccom/PDF%20Documents/Diabetic%20Retinopathy%20Screening%20(Workbook%20R4.1%202Aug07).pdf. Accessed 26 January 2017
12. Paulus J, Meier J, Bock R, Hornegger J, Michelson G. Automated quality assessment of retinal fundus photos. Int J Comput Assist Radiol Surg. 2010;5(6):557–564. doi: 10.1007/s11548-010-0479-7
13. Imani E, Pourreza HR, Banaee T. Retinal image quality assessment using shearlet transform. In: Integral Methods in Science and Engineering. Cham: Springer International Publishing, 2015, pp 329–339
14. Lee SC, Wang Y: Automatic retinal image quality assessment and enhancement. Proceedings of SPIE Image Processing, 1999, pp 1581–1591
15. Lalonde M, Gagnon L, Boucher M: Automatic visual quality assessment in optical fundus images. Ottawa Proceedings of Vision Interface, 2001, pp 259–264
16. Davis H, Russell S, Barriga E, Abramoff M, Soliz P: Vision-based, real-time retinal image quality assessment. 22nd IEEE International Symposium on Computer-Based Medical Systems, 2009, pp 1–6
17. Bartling H, Wanger P, Martin L. Automated quality evaluation of digital fundus photographs. Acta Ophthalmol. 2009;87(6):643–647. doi: 10.1111/j.1755-3768.2008.01321.x
18. Dias J, Oliveira CM, Da Silva Cruz LA. Retinal image quality assessment using generic image quality indicators. Inf Fusion. 2014;19(1):73–90. doi: 10.1016/j.inffus.2012.08.001
19. Usher DB, Himaga M, Dumskyj MJ: Automated assessment of digital fundus image quality using detected vessel area. Proceedings of Medical Image Understanding and Analysis, 2003, pp 81–84
20. Fleming AD, Philip S, Goatman KA, Olson JA, Sharp PF. Automated assessment of diabetic retinal image quality based on clarity and field definition. Investig Ophthalmol Vis Sci. 2006;47(3):1120–1125. doi: 10.1167/iovs.05-1155
21. Hunter A, Lowell JA, Habib M, Ryder B, Basu A, Steel D: An automated retinal image quality grading algorithm. Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), 2011, pp 5955–5958
22. Lowell J, Hunter A, Habib M, Steel D: Automated quantification of fundus image quality. 3rd European Medical and Biological Engineering Conference: 1618, 2005
23. Giancardo L, Abramoff MD, Chaum E, Karnowski TP, Meriaudeau F, Tobin KW: Elliptical local vessel density: a fast and robust quality metric for retinal images. Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), 2008, pp 3534–3537
24. Bengio Y. Learning deep architectures for AI. Found Trends Mach Learn. 2009;2(1):1–127. doi: 10.1561/2200000006
25. Schmidhuber J. Deep learning in neural networks: an overview. Neural Networks. 2015;61:85–117. doi: 10.1016/j.neunet.2014.09.003
26. Krizhevsky A, Sutskever I, Hinton GE: ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 2012, pp 1–9
27. Saha S, Fletcher A, Xiao D, Kanagasingam Y. A novel method for automated correction of non-uniform/poor illumination of retinal images without creating false artifacts. J Vis Commun Image Represent. 2018;51:95–103. doi: 10.1016/j.jvcir.2018.01.005
28. Goatman KA, Whitwam AD, Manivannan A, Olson JA, Sharp PF: Colour normalisation of retinal images. Proceedings of Medical Image Understanding and Analysis, 2003, pp 49–52
29. Rosasco L, De Vito E, Caponnetto A, Piana M, Verri A. Are loss functions all the same? Neural Comput. 2004;16(5):1063–1076. doi: 10.1162/089976604773135104
30. Tajbakhsh N, Shin JY, Gurudu SR, Hurst RT, Kendall CB, Gotway MB, Liang J. Convolutional neural networks for medical image analysis: full training or fine tuning? IEEE Trans Med Imaging. 2016;35(5):1299–1312. doi: 10.1109/TMI.2016.2535302
31. Saha SK, Xiao D, Fernando B, Tay-Kearney ML, An D, Kanagasingam Y. Deep learning based decision support system for automated diagnosis of age-related macular degeneration (AMD). Invest Ophthalmol Vis Sci. 2017;58(8):25
32. Saha SK, Fernando B, Xiao D, Tay-Kearney ML, Kanagasingam Y. Deep learning for automatic detection and classification of microaneurysms, hard and soft exudates, and hemorrhages for diabetic retinopathy diagnosis. Invest Ophthalmol Vis Sci. 2016;57(12):5962
33. Jia Y, Shelhamer E, Donahue J, Karayev S, Long J, Girshick R, Guadarrama S, Darrell T: Caffe: convolutional architecture for fast feature embedding. Proceedings of the 22nd ACM International Conference on Multimedia, 2014, pp 675–678
34. Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L: ImageNet: a large-scale hierarchical image database. IEEE Computer Vision and Pattern Recognition, 2009, pp 248–255
35. Vedaldi A, Fulkerson B: VLFeat: an open and portable library of computer vision algorithms. Proceedings of the 18th ACM International Conference on Multimedia, 2010, pp 1469–1472
