Computational Intelligence and Neuroscience. 2023 Jan 3;2023:1305583. doi: 10.1155/2023/1305583

Classification of Diabetic Retinopathy Severity in Fundus Images Using the Vision Transformer and Residual Attention

Zongyun Gu 1,2, Yan Li 3, Zijian Wang 1, Junling Kan 1, Jianhua Shu 1, Qing Wang 1
PMCID: PMC9831706  PMID: 36636467

Abstract

Diabetic retinopathy (DR) is a common retinal vascular disease that can cause severe visual impairment. Using fundus images for intelligent diagnosis of DR is of great clinical significance. In this paper, an intelligent DR classification model for fundus images is proposed. The method can detect all five stages of DR: no DR, mild, moderate, severe, and proliferative. The model is composed of two key modules. The feature extraction block (FEB) extracts features from fundus images, and the grading prediction block (GPB) classifies the five stages of DR. The transformer in the FEB has fine-grained attention that focuses on retinal hemorrhage and exudate areas. The residual attention in the GPB can effectively capture the different spatial regions occupied by objects of different classes. Comprehensive experiments on the DDR dataset demonstrate the superiority of our method, which achieves competitive performance compared with the benchmark methods.

1. Introduction

Diabetic retinopathy (DR) is an ocular complication caused by diabetes. It is a leading cause of visual impairment and even blindness and has become a major medical problem worldwide [1, 2]. However, up to now, there is no effective treatment for this disease. Studies have shown that early diagnosis and timely treatment of diabetic retinopathy help prevent blindness, a goal that can be achieved through regular screening programs [3]. As a result, many national health agencies are promoting DR screening, which is effective in reducing blindness due to DR [4]. Digital fundus images are the most widely used imaging modality for ophthalmologists to screen for DR and identify its severity, since they can show the severity of the disease. However, due to the shortage of ophthalmologists, DR screening is a heavy burden for many underdeveloped countries. For this reason, automatic classification of DR severity has become a trend in diagnosis.

With the development and application of artificial intelligence technology, deep learning [5] is playing an increasingly important role in medical image analysis. In recent years, the convolutional neural network (CNN) has been successfully applied to medical image classification [6, 7], medical image segmentation [8, 9], medical image registration [10, 11], medical image fusion [12, 13], and medical image report generation [14, 15] because it can learn highly complex representations in a data-driven way. Although CNNs show great potential in medical image analysis, they also have limitations: the local receptive field of the convolution operation limits the capture of long-range pixel relationships. Inspired by the success of transformers in NLP, Dosovitskiy et al. [16] proposed the vision transformer (ViT), which treats image classification as a sequence prediction task over image patch sequences and thus captures long-range correlations in the input image. In addition, recent research shows that, compared with CNNs, the prediction errors of ViT are more consistent with those of humans [17, 18].

The biggest challenge of DR severity classification is that it demands finer-grained discrimination than most other image classification tasks: the differences in DR lesions between adjacent classes are very subtle and difficult to distinguish. Although the attention module in ViT plays an important role in object classification, simply stacking attention modules degrades model performance. In addition, ViT ignores the different spatial regions occupied by different kinds of objects. Motivated by these observations, we propose a deep network model to identify and classify DR, which consists of two key modules: the FEB and the GPB. The FEB extracts image features with the ViT model, whose tokens have fine-grained attention and focus on retinal hemorrhage areas. The GPB effectively captures the different spatial regions occupied by objects from different classes and generates class-specific features for each class using a simple spatial attention score. By integrating these modules, our network can more accurately classify DR lesions of different degrees.

To sum up, our contributions are as follows:

  1. Extracting fundus image features via the vision transformer's excellent modeling ability.

  2. Using the residual attention module to make use of the individual spatial attention of each object class so as to improve the accuracy of DR classification.

  3. Experiments on the DDR dataset show that the proposed method achieves good results on DR classification tasks. Specifically, our method achieves the best performance on grades 1, 2, 3, 4, and 5 as well as the best average accuracy.

2. Related Works

2.1. Pathological Analysis of the DR Severity

DR classification refers to the classification of retinal fundus images according to the severity of DR. In a fundus image, the bright region where the blood vessels converge is the optic disc, while the dark region on the other side is the fovea. Mild DR appears as dark red dots, which are small hemorrhages or small red dot-like microaneurysms. Moderate DR adds some yellow lesions to the small red lesions, and some yellow-white punctate hard exudates appear. Severe DR adds white cotton-wool-like soft exudates to the red and yellow lesions, with shapes ranging from small spots to large plaques; the more such lesions, the more serious the DR. Proliferative DR refers to the formation of new retinal blood vessels in or around the optic disc. It can cause vitreous and retinal hemorrhage and, in severe cases, retinal detachment. All these changes are reflected in the fundus image.

According to the international classification of DR [19], DR can be divided into five stages: class 0 (no DR), class 1 (mild DR), class 2 (moderate DR), class 3 (severe DR), and class 4 (proliferative DR). Figures 1(a)–1(e) show the five stages of DR, respectively [20]. As is well known, image quality has a great influence on deep learning models. However, in clinical practice, due to exposure and other reasons, low-quality images are inevitable. Therefore, as shown in Figure 1(f), the DDR dataset [20] assigns fundus images that do not meet the quality standard to class 5 (ungradable).

Figure 1. Example fundus images from the DDR dataset. (a) Healthy, labelled as class 0; (b) mild, labelled as class 1; (c) moderate, labelled as class 2; (d) severe, labelled as class 3; (e) proliferative, labelled as class 4; (f) ungradable, which does not meet the quality standards required for model training and is labelled as class 5.

2.2. Deep Learning in Medical Images

With the rapid development of artificial intelligence (AI) technology, deep learning (DL) methods have been widely used in various tasks related to medical images and have achieved remarkable results. In the medical field, the types of images to be processed usually include X-ray, ultrasound, computed tomography (CT), and magnetic resonance imaging (MRI) [21, 22]. The processing tasks include image classification, object recognition, image segmentation, image reconstruction, and so on.

Medical image classification can assist doctors in diagnosing diseases. Esteva et al. [23] directly used 130,000 clinical images to train a model based on the Inception v3 backbone network; the results showed that it outperformed human experts. The experiment also shows that an ordinary CNN can produce good predictions given large-scale, high-quality annotated data. Yi et al. [24] proposed a novel graph-regularized NMF algorithm called NMF-LCAG that handles the adaptive graph learning issue in NMF; compared with other related algorithms, its accuracy improves by 1%–3% in most cases. To achieve efficient and rapid diagnosis of patients with COVID-19, Li et al. [25] proposed a computer-aided diagnosis algorithm based on ensemble deep learning. The experimental results show that the algorithm classifies COVID-19 patients, common pneumonia patients, and normal controls well and can significantly improve the performance of deep neural networks on multiclass prediction tasks.

It is of great significance for clinical treatment to accurately detect or identify lesions in medical images. Object recognition algorithms fall into two-stage and one-stage approaches: the two-stage family is represented by the R-CNN series [26–28], and the one-stage family by the YOLO series [29–32]. Andrew Ng's group proposed the CheXNet algorithm [33], a 121-layer convolutional neural network that automatically detects pneumonia from chest X-rays with accuracy even higher than that of radiologists. Aoki et al. [34] were the first to detect and predict the probability of erosion and ulcer lesions in wireless capsule endoscopy using a CNN, reaching a detection accuracy of 88.2%. In addition, appropriately improving the sensitivity of a model in clinical applications helps doctors reduce the missed-detection rate.

The purpose of medical image segmentation is to provide a reliable basis for clinical diagnosis and pathological research. The fully convolutional network (FCN) [35] was the first network used for segmentation tasks; although its final output layer has correct semantics, it lacks detailed information. U-Net [36] borrowed the idea of FCN, designed a more elegant image segmentation framework, and produced richer and more detailed segmentation results. To achieve precise segmentation of retinal blood vessels, Guo et al. [37] proposed a lightweight network named SA-UNet that achieves state-of-the-art performance on the DRIVE and CHASE_DB1 datasets.

In addition, DL is widely used in medical image reconstruction [38, 39], medical image report generation [40], and other tasks. Deep learning provides an important theoretical basis and technical support for intelligent medicine. However, there are still many problems in intelligent medical imaging, for example, the lack of high-quality labelled training samples and the poor interpretability of deep learning models.

2.3. Deep Learning in DR Classification

Accurate classification of medical images is an important means to assist clinical care and treatment. In recent years, DL has made remarkable achievements in medical image analysis, making DR-assisted diagnosis more reliable and efficient.

Bravo and Arbeláez [41] investigated the performance of different preprocessing methods, designed a classification model based on the VGG16 architecture, and achieved an average classification accuracy of 50.5% in DR classification. A multi-cell architecture [42], which gradually increases the depth of the deep neural network and the input image resolution, improves classification accuracy while reducing training time; to fully utilize images at different stages of training, the authors also propose a multitask learning strategy. To address the lack of data, a deep learning architecture was proposed in [43] and trained and tested on the MESSIDOR dataset. The AlexNet architecture is slightly modified by using convolution and max-pooling layers in the first eight layers and fully connected layers in the last three; the model suits smaller datasets and provides acceptable accuracy. Golub et al. [44] put forward a method to identify and classify DR that can not only segment any retinal region of the fundus image but also evaluate the quality of the original image. To simulate the clinical diagnosis process, a dual-stream binocular network is proposed in [45] to capture subtle correlations between the left and right eyes, and its advantages over monocular methods are demonstrated on the EyePACS dataset. Zhang et al. [46] designed source-free transfer learning (SFTL) for DR detection, which utilizes unannotated retinal images and only employs a source model throughout training; on the EyePACS dataset, it achieved 91.2% accuracy, 0.951 sensitivity, and 0.858 specificity. Reference [47] discusses existing DR detection and classification techniques, their advantages and disadvantages, and the available DR datasets, and reviews research achievements and progress in the field of DR detection in detail.

Although all these algorithms are devoted to extracting lesion features, recognition performance is still insufficient, especially for small lesions. There are several reasons: (1) only high-resolution images can reveal small pathological tissues, so retinal images have very high resolution; (2) compared with other types of images, DR grading requires finer-grained discrimination, and the very small lesion points make the differences between adjacent classes subtle and hard to distinguish; (3) identifying a severe class with a large local receptive field may lead to gradient vanishing or exploding problems; (4) the computational cost of DR images is high, which makes the model difficult to train.

2.4. Visual Transformer in Medicine Images

Following the unprecedented success in natural language tasks, transformers [48] have also made great achievements in image recognition tasks recently. The ViT model has become very popular in various computer vision tasks including image classification [16], image detection [49], image segmentation [50], and so on. In the field of natural image recognition, ViT and its derived instances have achieved state-of-the-art performance on several benchmark datasets.

Recently, ViT has been successfully applied to medical image classification. Yu et al. [51] proposed the MIL-ViT model, which is first pretrained on a large dataset of fundus images and then fine-tuned on the downstream task of retinal disease classification; its MIL-based head can be used with ViT in a plug-and-play way. Experiments on the APTOS2019 and RFMiD2020 datasets show that MIL-ViT performs better than CNN-based baselines. Most data-driven methods treat DR classification and lesion detection as two independent tasks, which may not be optimal because errors may propagate from one stage to the next. To handle the two tasks jointly, the lesion-aware transformer (LAT) is proposed in [52], which consists of a pixel-relation-based encoder and a lesion-aware transformer decoder; in particular, the transformer decoder is used to formulate lesion detection as a weakly supervised lesion localization problem. The LAT model achieves state-of-the-art performance on the Messidor-1, Messidor-2, and EyePACS datasets. Yang et al. [53] proposed a hybrid structure consisting of convolutional layers and transformer layers to classify fundus diseases on the OIA dataset. Similarly, Wu et al. [54] and AlDahoul et al. [55] also verified that ViT models are more accurate than CNN models in DR classification. As can be seen from these references, most methods directly use the original ViT model in a plug-and-play way to improve classification performance. Based on these observations, we believe that using ViT as the backbone network and integrating domain-specific context can improve DR classification performance.

Apart from medical image classification, ViT is widely used in medical image segmentation [56], medical image detection [57], medical image reconstruction [58], medical image synthesis [59], medical image report generation [60], and other tasks. However, some studies [61] have shown that the transformer depends heavily on massive data and surpasses CNNs only after training on large datasets. Most public medical image datasets are small and sparsely labelled, which limits the application of transformers in this field.

3. Methods

In this section, we first briefly outline our proposed network and then explain its key components in detail. Finally, the loss function of the designed network is given.

3.1. Overview

Classification is usually based on differences or distinctions between categories. Diabetic retinopathy develops from mild to severe, and there is correlation between adjacent classes; for example, the severe DR stage follows the moderate DR stage, and the moderate DR stage follows the mild DR stage. Keeping this in mind, we propose a classification model that distinguishes the five classes of DR by combining the ability of the ViT network to capture subtle changes with the discriminative class-specific features of CSRA. The model outputs five probability scores (which sum to 1), corresponding to the five classes of DR. The DR classification network architecture we designed is shown in Figure 2. It is composed of two key modules, the FEB and the GPB: the FEB is mainly used for image feature extraction, while the GPB is mainly used for classification prediction.

Figure 2. Pipeline of the proposed method.

The proposed approach is presented as an algorithm in Algorithm 1.
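To make the two-module design concrete before the detailed description, the sketch below shows one way an FEB/GPB pipeline of this kind could be wired together in PyTorch. The class names, the tiny stand-in backbone, and the plain pooled classifier head are our illustrative assumptions (the paper's GPB uses CSRA, sketched in Section 3.3), not the authors' released implementation; a recent PyTorch (≥1.9, for `batch_first`) is assumed.

```python
import torch
import torch.nn as nn


class FeatureExtractionBlock(nn.Module):
    """Stand-in FEB: patch embedding plus a small transformer encoder."""

    def __init__(self, img_size=512, patch_size=32, dim=256, depth=2, heads=4):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        num_patches = (img_size // patch_size) ** 2
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                                # x: (B, 3, H, W)
        feat = self.patch_embed(x)                        # (B, dim, h, w)
        b, d, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)          # (B, N, dim)
        tokens = self.encoder(tokens + self.pos_embed)    # transformer encoding
        return tokens.transpose(1, 2).reshape(b, d, h, w)  # back to a feature map


class GradingPredictionBlock(nn.Module):
    """Placeholder GPB head: global average pooling + linear classifier.
    The paper's GPB uses class-specific residual attention (CSRA)."""

    def __init__(self, dim=256, num_classes=5):
        super().__init__()
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, feat):                              # feat: (B, dim, h, w)
        return self.fc(feat.mean(dim=(2, 3)))             # (B, num_classes) logits


model = nn.Sequential(FeatureExtractionBlock(), GradingPredictionBlock())
logits = model(torch.randn(2, 3, 512, 512))
probs = torch.softmax(logits, dim=1)                      # five DR severity probabilities
print(probs.shape)                                        # torch.Size([2, 5])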

3.2. Feature Extraction Block

As mentioned previously, the FEB is mainly used for extracting features from images. The standard transformer accepts a 1D sequence of token embeddings as input. To process a 2D image, we reshape the image $x \in \mathbb{R}^{H \times W \times C}$ into a sequence of flattened 2D patches $x_p \in \mathbb{R}^{N \times (P^2 \cdot C)}$, where $(H, W)$ is the resolution of the original image, $C$ is the number of channels, $(P, P)$ is the resolution of each patch, and $N = HW/P^2$ is the number of patches. The transformer uses the same vector dimension $D$ for all of its layers, so we use a linear mapping layer to map the image patches to $D$ dimensions. Similar to the [class] token in the BERT model, a learnable embedding $z_0^0 = x_{class}$ is prepended to the sequence of patch embeddings, and its corresponding output state of the transformer encoder is treated as the image representation. This process is shown in the following equation:

$$z_0 = \left[x_{class};\, x_p^1 E;\, x_p^2 E;\, \dots;\, x_p^N E\right] + E_{pos}, \quad E \in \mathbb{R}^{(P^2 \cdot C) \times D}, \quad E_{pos} \in \mathbb{R}^{(N+1) \times D}, \tag{1}$$

where $E$ is the weight matrix of the linear mapping layer and $E_{pos}$ is the positional embedding, which is added directly to the patch embeddings. The purpose of the positional embedding is to preserve the position information of the different patches, and the resulting sequence of embedded vectors is used as the input of the transformer encoder.
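A minimal sketch of equation (1): splitting an image into P × P patches, linearly projecting them to dimension D, prepending a learnable class token, and adding positional embeddings. The shapes follow the notation above; the concrete values (224 × 224 input, P = 16, D = 768) are illustrative assumptions, not the settings used in the paper.

```python
import torch
import torch.nn as nn


class PatchEmbedding(nn.Module):
    """Implements z_0 = [x_class; x_p^1 E; ...; x_p^N E] + E_pos (equation (1))."""

    def __init__(self, img_size=224, patch_size=16, in_chans=3, dim=768):
        super().__init__()
        self.patch_size = patch_size
        num_patches = (img_size // patch_size) ** 2             # N = HW / P^2
        patch_dim = in_chans * patch_size ** 2                   # P^2 * C
        self.proj = nn.Linear(patch_dim, dim)                    # E in R^{(P^2 C) x D}
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))    # x_class
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))  # E_pos

    def forward(self, x):                                        # x: (B, C, H, W)
        b, c, _, _ = x.shape
        p = self.patch_size
        # cut the image into N flattened patches of length P^2 * C
        patches = (x.unfold(2, p, p).unfold(3, p, p)             # (B, C, H/P, W/P, P, P)
                     .permute(0, 2, 3, 1, 4, 5)
                     .reshape(b, -1, c * p * p))                 # (B, N, P^2 * C)
        tokens = self.proj(patches)                              # (B, N, D)
        cls = self.cls_token.expand(b, -1, -1)                   # (B, 1, D)
        return torch.cat([cls, tokens], dim=1) + self.pos_embed  # (B, N + 1, D)


z0 = PatchEmbedding()(torch.randn(2, 3, 224, 224))
print(z0.shape)   # torch.Size([2, 197, 768])
```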

Figure 3 shows the network structure of the transformer encoder [16]. It consists of a stack of six identical encoding layers, each of which has two sublayers: the first is a multihead attention layer, and the second is a position-wise feed-forward network. A residual connection is used around each of the two sublayers, and the output of each sublayer is normalized by layer normalization, so each sublayer can be represented as $\mathrm{LN}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sublayer itself. To facilitate the residual connections, the output dimension of all sublayers in the model is $d_{model} = 512$. In the transformer, the attention function maps queries, keys, and values to an output vector; the queries, keys, and values are packed into matrices $Q$, $K$, and $V$, respectively. Attention is described in the following equation:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^T}{\sqrt{d_k}}\right)V. \tag{2}$$
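Equation (2) in code form, as a minimal sketch; batch-first tensors of shape (batch, sequence, dimension) and the toy shapes are our assumptions.

```python
import math

import torch


def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V   (equation (2))."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)   # (B, N_q, N_k) similarity scores
    weights = torch.softmax(scores, dim=-1)              # attention distribution over keys
    return weights @ v                                    # (B, N_q, d_v) weighted values


q = k = v = torch.randn(2, 197, 64)
print(scaled_dot_product_attention(q, k, v).shape)        # torch.Size([2, 197, 64])
```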

Figure 3. Transformer encoder.

The multihead attention mechanism performs $h$ different learnable linear mappings on the queries, keys, and values, projecting them to dimensions $d_k$, $d_k$, and $d_v$, respectively. Each head and the multihead output are described as follows:

$$\mathrm{head}_i = \mathrm{Attention}\!\left(QW_i^Q, KW_i^K, VW_i^V\right), \tag{3}$$
$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}\!\left(\mathrm{head}_1, \dots, \mathrm{head}_h\right)W^O, \tag{4}$$

where the parameter matrices are $W_i^Q \in \mathbb{R}^{d_{model} \times d_k}$, $W_i^K \in \mathbb{R}^{d_{model} \times d_k}$, $W_i^V \in \mathbb{R}^{d_{model} \times d_v}$, and $W^O \in \mathbb{R}^{h d_v \times d_{model}}$.
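A sketch of equations (3) and (4): per-head projections $W_i^Q$, $W_i^K$, $W_i^V$, scaled dot-product attention within each head, concatenation, and the output projection $W^O$. The default sizes (d_model = 512, 8 heads) are illustrative assumptions; implementing the h per-head matrices as one large linear layer per input is a standard equivalent formulation, not necessarily the authors' code.

```python
import math

import torch
import torch.nn as nn


class MultiHeadAttention(nn.Module):
    """MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O   (equations (3)-(4))."""

    def __init__(self, d_model=512, num_heads=8):
        super().__init__()
        assert d_model % num_heads == 0
        self.h, self.d_k = num_heads, d_model // num_heads
        # one large projection per input is equivalent to the h per-head matrices W_i
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)             # W^O

    def _split(self, x):                                    # (B, N, d_model) -> (B, h, N, d_k)
        b, n, _ = x.shape
        return x.view(b, n, self.h, self.d_k).transpose(1, 2)

    def forward(self, q, k, v):
        q = self._split(self.w_q(q))
        k = self._split(self.w_k(k))
        v = self._split(self.w_v(v))
        # scaled dot-product attention per head, equation (2)
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_k)
        heads = torch.softmax(scores, dim=-1) @ v           # (B, h, N, d_k)
        b, _, n, _ = heads.shape
        concat = heads.transpose(1, 2).reshape(b, n, self.h * self.d_k)
        return self.w_o(concat)                             # (B, N, d_model)


x = torch.randn(2, 197, 512)
print(MultiHeadAttention()(x, x, x).shape)                  # torch.Size([2, 197, 512])
```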

3.3. Grading Prediction Block

Fundus image classification is a challenging computer vision task for practical applications. To capture the different spatial regions occupied by objects from different classes more efficiently, we introduce a class-specific residual attention algorithm [62] in GPB. With spatial attention scoring, the class-specific residual attention (CSRA) generates specific features for each class and then uses average pooling on these features for feature fusion.

As shown in Figure 4, the feature matrix $x \in \mathbb{R}^{d \times h \times w}$ of the input image is extracted by the FEB. Here, $d$, $h$, and $w$ represent the dimension, height, and width of the feature matrix, respectively; we assume $d = 2048$, $h = 7$, and $w = 7$. First, the feature matrix $x$ is decoupled into a group of position feature vectors $x_1, x_2, \dots, x_{49}$ ($x_j \in \mathbb{R}^{2048}$). Then, a fully connected layer (a 1 × 1 convolution) is used as the classifier. Note that each class has its own specific fully connected classifier, and the parameter of the classifier $m_i$ corresponding to the $i$th class is $m_i \in \mathbb{R}^{2048}$.

Figure 4. CSRA model.

The CSRA score is defined in [62] by the following equation:

$$s_j^i = \frac{\exp\!\left(T\, x_j^T m_i\right)}{\sum_{k=1}^{49} \exp\!\left(T\, x_k^T m_i\right)}, \tag{5}$$

where $T$ ($T > 0$) is the temperature control factor and $s_j^i$ is the probability that class $i$ appears at position $j$.

The CSRA feature $f^i$ for class $i$ is given by the following equation:

$$f^i = g + \lambda a^i. \tag{6}$$

Here, $a^i$ is the class-specific feature vector with $a^i = \sum_{k=1}^{49} s_k^i x_k$, $\lambda$ is a hyperparameter (set to $\lambda = 0.3$), and $g = \frac{1}{49}\sum_{k=1}^{49} x_k$.

According to [62], the final logit output is obtained by taking the dot product of the CSRA feature $f^i$ of the $i$th class with the corresponding classifier $m_i$, as shown in the following equation:

$$\hat{y} = \left(y^1, y^2, \dots, y^C\right) = \left(m_1^T f^1,\; m_2^T f^2,\; \dots,\; m_C^T f^C\right). \tag{7}$$

Here, C is the number of classification categories.
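A direct translation of equations (5)–(7) into a PyTorch module, as a sketch under the shapes assumed above (d = 2048, a 7 × 7 spatial grid); the classifier weights m_i are realized as a 1 × 1 convolution, as described in the text, with T and λ following the values given. This is an illustration of the CSRA computation, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class CSRAHead(nn.Module):
    """Class-specific residual attention (equations (5)-(7))."""

    def __init__(self, in_dim=2048, num_classes=5, T=1.0, lam=0.3):
        super().__init__()
        # one 1x1-conv filter per class plays the role of the classifier m_i
        self.classifier = nn.Conv2d(in_dim, num_classes, kernel_size=1, bias=False)
        self.T, self.lam = T, lam

    def forward(self, x):                                  # x: (B, d, h, w)
        # m_i^T x_j for every class i and position j
        scores = self.classifier(x).flatten(2)             # (B, C, h*w)
        # equation (5): spatial attention s_j^i = softmax_j(T * m_i^T x_j)
        s = torch.softmax(self.T * scores, dim=2)
        # m_i^T g, where g is the average of the position features
        base = scores.mean(dim=2)                          # (B, C)
        # m_i^T a^i, where a^i = sum_j s_j^i x_j is the class-specific feature
        residual = (s * scores).sum(dim=2)                 # (B, C)
        # equations (6)-(7): y^i = m_i^T (g + lambda * a^i)
        return base + self.lam * residual                  # logits (B, C)


feat = torch.randn(2, 2048, 7, 7)     # feature map produced by the FEB
print(CSRAHead()(feat).shape)         # torch.Size([2, 5])
```

Because $m_i$ is linear, applying it before or after the attention-weighted sum gives identical logits, which is why the sketch operates on the per-class score maps rather than on the 2048-dimensional position features.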

3.4. Loss Function

In this paper, the binary cross-entropy (BCE) loss function given in equation (8) is used to calculate the loss between the prediction $\hat{y}$ and the ground-truth label $y$. The stochastic gradient descent (SGD) method is used to optimize the loss function.

$$L = -\sum_{i=1}^{N}\left[y_i \log \hat{y}_i + \left(1 - y_i\right)\log\!\left(1 - \hat{y}_i\right)\right]. \tag{8}$$
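A minimal sketch of the training objective and optimizer described above: binary cross-entropy applied to per-class outputs (via `BCEWithLogitsLoss` for numerical stability) and SGD with the momentum and weight decay listed in Table 1. The linear stand-in model and the random batch are placeholders for the real network and data loader.

```python
import torch
import torch.nn as nn

num_classes = 5
model = nn.Linear(2048, num_classes)             # placeholder for the FEB + GPB network
criterion = nn.BCEWithLogitsLoss()               # BCE over per-class outputs, equation (8)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                            momentum=0.9, weight_decay=5e-4)

# one illustrative training step on random data
features = torch.randn(32, 2048)                  # stand-in for a batch of images
labels = torch.randint(0, num_classes, (32,))
targets = nn.functional.one_hot(labels, num_classes).float()  # per-class 0/1 targets

logits = model(features)
loss = criterion(logits, targets)                 # equation (8), averaged over the batch
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```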

4. Experiments and Discussion of Results

4.1. Datasets

The DDR dataset is provided by Ocular Disease Intelligent Recognition (ODIR-2019) for lesion segmentation and lesion detection [20]. It consists of 13,673 fundus images from 147 hospitals covering 23 provinces in China. For the DR classification task, DDR provides a division into training, validation, and test sets: 6835 images are used for training, 2733 for validation, and the remaining 4105 for testing. The DR images in DDR are divided into six classes: no DR, mild nonproliferative DR, moderate nonproliferative DR, severe nonproliferative DR, proliferative DR, and ungradable.

The IDRiD dataset (Indian Diabetic Retinopathy Image Dataset) is the first database representative of the Indian population [63]. It consists of typical DR and normal retinal structures and is divided into three parts: segmentation, classification, and localization. The classification part consists of 516 original color fundus images divided into a training set (413 images) and a test set (103 images). In addition, this dataset provides the disease severity of DR and diabetic macular edema for each image, which makes it ideal for the development and evaluation of image analysis algorithms for the early detection of DR.
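For illustration, a fundus grading split of this kind can be wrapped in a simple PyTorch dataset; the CSV layout ("filename,label") assumed below is hypothetical and does not reflect the official DDR or IDRiD file formats.

```python
import csv
from pathlib import Path

import torch
from PIL import Image
from torch.utils.data import Dataset


class FundusGradingDataset(Dataset):
    """Loads (image, severity-grade) pairs from a hypothetical CSV label file."""

    def __init__(self, image_dir, label_csv, transform=None):
        self.image_dir = Path(image_dir)
        self.transform = transform
        with open(label_csv, newline="") as f:
            self.samples = [(row[0], int(row[1])) for row in csv.reader(f)]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        name, label = self.samples[idx]
        image = Image.open(self.image_dir / name).convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        return image, torch.tensor(label)
```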

4.2. Implementation Details

To prepare more trainable data, we apply several operations to the original images. In this paper, pretrained backbone model parameters are used, and training is fine-tuned on the datasets described above. Limited by the available GPU memory, the large images are randomly resized to 512 × 512. In addition, we apply random horizontal flips, vertical flips, and random rotations as data augmentation to reduce overfitting. Our framework is implemented in PyTorch 1.6 and runs on an NVIDIA Quadro RTX 6000 GPU with 24 GB of memory. Table 1 lists the hyperparameters used in training.

Table 1.

The DR model training hyperparameters.

Hyperparameter Value
Optimizing function SGD optimizer
Momentum 0.9
Weight decay 5 × 10−4
Epochs 20
Batch size 32
Initial learning rate 1 × 10−3
Dropout 0
Classifier 0.01
Number of classes 5 or 6
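A sketch of the resizing and augmentation pipeline described above, using standard torchvision transforms; the rotation range and the ImageNet normalization statistics are our assumptions, and `train_set` is a placeholder for a dataset object.

```python
from torch.utils.data import DataLoader
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((512, 512)),                     # large fundus images to 512 x 512
    transforms.RandomHorizontalFlip(),                 # random horizontal flip
    transforms.RandomVerticalFlip(),                   # random vertical flip
    transforms.RandomRotation(degrees=30),             # rotation range is an assumption
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics (assumed)
                         std=[0.229, 0.224, 0.225]),
])

# Batch size 32 follows Table 1; train_set is a placeholder dataset instance.
# train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)
```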

4.3. Evaluation Metrics

We select the following indicators to evaluate the performance of the classification model, calculated according to equations (9)–(13): (9) precision, the proportion of samples classified as positive that are truly positive; (10) recall/sensitivity, the probability that a lesion-containing DR image is not missed as negative; (11) specificity, the probability that a normal DR-free image is not misjudged as positive; (12) accuracy, the proportion of samples the model classifies correctly; and (13) F1-score, the harmonic mean of precision and recall. These basic metrics are derived from the multiclass confusion matrix, and equations (14)–(17) extend their definitions to N classes. This work uses the macro-averaged accuracy, sensitivity, specificity, and F1-score to evaluate the DR classification process.

$$\mathrm{Precision} = \frac{TP}{TP + FP}, \tag{9}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN}, \tag{10}$$
$$\mathrm{Specificity} = \frac{TN}{TN + FP}, \tag{11}$$
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + FN + TN}, \tag{12}$$
$$F1\text{-}\mathrm{score} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}, \tag{13}$$
$$TP\ \text{for class}\ k = A_{k,k}, \tag{14}$$
$$FP\ \text{for class}\ k = \sum_{i=0}^{n} A_{i,k} - A_{k,k}, \tag{15}$$
$$FN\ \text{for class}\ k = \sum_{i=0}^{n} A_{k,i} - A_{k,k}, \tag{16}$$
$$TN\ \text{for class}\ k = \sum_{i=0}^{n}\sum_{j=0}^{n} A_{i,j} - \sum_{i=0}^{n} A_{i,k} - \sum_{i=0}^{n} A_{k,i} + A_{k,k}, \tag{17}$$

where TP (true positives) denotes positive samples predicted by the model as positive, TN (true negatives) denotes negative samples predicted as negative, FP (false positives) denotes negative samples predicted as positive, and FN (false negatives) denotes positive samples predicted as negative.
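A sketch of the per-class TP/FP/FN/TN extraction from the multiclass confusion matrix (equations (14)–(17)) and the macro-averaged metrics used in this work; it assumes the matrix is indexed as A[actual, predicted], as in Tables 2 and 3, and uses the Table 2 counts only as example input (the printed values are not claimed to reproduce Table 4 exactly).

```python
import numpy as np


def per_class_counts(cm):
    """Equations (14)-(17): TP, FP, FN, TN per class from confusion matrix A."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                             # A[k, k]
    fp = cm.sum(axis=0) - tp                     # column sum minus diagonal
    fn = cm.sum(axis=1) - tp                     # row sum minus diagonal
    tn = cm.sum() - tp - fp - fn                 # everything else
    return tp, fp, fn, tn


def macro_metrics(cm, eps=1e-12):
    """Macro-averaged precision, recall, specificity, accuracy, and F1-score."""
    tp, fp, fn, tn = per_class_counts(cm)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)                # sensitivity
    specificity = tn / (tn + fp + eps)
    accuracy = (tp + tn) / (tp + fp + fn + tn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return {name: float(v.mean()) for name, v in
            [("precision", precision), ("recall", recall),
             ("specificity", specificity), ("accuracy", accuracy), ("f1", f1)]}


# example input: the 6-class DDR confusion matrix from Table 2
cm_ddr = [[1847, 5, 28, 0, 0, 0],
          [65, 79, 42, 0, 1, 2],
          [576, 34, 639, 14, 28, 53],
          [6, 1, 17, 38, 8, 1],
          [9, 0, 87, 2, 152, 25],
          [3, 0, 6, 0, 12, 325]]
print(macro_metrics(cm_ddr))
```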

In addition, the area under the curve (AUC) of the receiver operating characteristic (ROC) curve is employed, which has also been used as a metric for fundus image grading in previous research. The AUC reflects how well positive cases are ranked and characterizes the effectiveness of the model: the higher the AUC value, the better the classification.
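Per-class ROC curves and AUC values in the one-vs-rest setting can be computed with scikit-learn; a minimal sketch, assuming `y_true` holds integer class labels and `y_score` the per-class probabilities output by the model (random values stand in here for real predictions).

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.preprocessing import label_binarize

# toy example: 6 classes, random scores in place of real model outputs
rng = np.random.default_rng(0)
y_true = rng.integers(0, 6, size=200)
y_score = rng.random((200, 6))
y_score /= y_score.sum(axis=1, keepdims=True)          # rows sum to 1

y_bin = label_binarize(y_true, classes=np.arange(6))   # one-vs-rest targets
for c in range(6):
    auc_c = roc_auc_score(y_bin[:, c], y_score[:, c])   # per-class AUC
    fpr, tpr, _ = roc_curve(y_bin[:, c], y_score[:, c]) # points of the ROC curve
    print(f"class {c}: AUC = {auc_c:.4f}")
```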

4.4. Evaluation of the Model Performance

In this work, a DR classification method over the 5 severity categories from 0 to 4 is proposed. An additional category (class 5), as in [20], corresponds to images with artifacts that prevent a clear evaluation of the information they contain. A second experiment was performed excluding images with artifacts (class 5). To evaluate the performance of our model, we trained and tested it on the DDR and IDRiD datasets. Figure 5 shows the loss curves of our model during training on the DDR dataset. The loss decreases as training proceeds: in the first 6 epochs, the loss on both the test set and the training set decreases significantly; the test loss stops decreasing after the 7th epoch and the training loss after the 9th epoch, after which training saturates. The small gap between the training loss and the test loss indicates that our model does not overfit.

Figure 5. Loss curves on the training and test sets of DDR.

For a classification model, we want the predictions to be as accurate as possible, that is, the larger the TP and TN values in the confusion matrix and the smaller the corresponding FP and FN values, the better. However, the confusion matrix only records counts, and with a large amount of data, it is difficult to judge directly from it whether the model is good or bad. Besides, accuracy alone is not a good indicator for unbalanced datasets. Therefore, on top of the basic statistics of the confusion matrix, we introduce five metrics: precision, recall, specificity, F1-score, and accuracy.

The six-class and five-class confusion matrices obtained by this method are shown in Tables 2 and 3, respectively. From the two tables, it can be seen that, although there are misclassifications between classes, most of them fall into adjacent classes, and most of the data lie on the diagonal, which also shows that the algorithm is suitable for DR image classification.

Table 2.

Confusion matrix on the DDR dataset for DR classification model.

Actual label Predict label
0 1 2 3 4 5
0 1847 5 28 0 0 0
1 65 79 42 0 1 2
2 576 34 639 14 28 53
3 6 1 17 38 8 1
4 9 0 87 2 152 25
5 3 0 6 0 12 325

Bold font indicates the best result in each column.

Table 3.

Confusion matrix on the IDRiD dataset for the DR classification model.

Actual label Predict label
0 1 2 3 4
0 20 0 14 0 0
1 2 3 0 0 0
2 3 0 29 0 0
3 0 0 9 10 0
4 1 0 2 0 10

Bold font indicates the best result in each column.

Tables 4 and 5 present the metrics obtained in this work, separated by class and dataset. In both results, for the no-DR and proliferative classes, the DR model has a high classification index, showing that the model can distinguish categories with distinct characteristics. For the intermediate classes (1 to 3), the classification index of the DR model is lower, because there are no obvious differences between the characteristics of these categories and they are easily confused with nearby categories. Comparing the results in Tables 4 and 5 shows that including the ungradable category (class 5) improves accuracy in all categories, which also reflects the important role of image quality in the classification performance of the model.

Table 4.

Performance measures of the DR classification model for DDR.

Class Precision Recall Specificity F1-score Accuracy
0 0.7329 0.9824 0.0208 0.8395 0.8280
1 0.6639 0.4180 0.0276 0.5130 0.9635
2 0.7607 0.4754 0.2159 0.5852 0.7793
3 0.7037 0.5352 0.0081 0.6080 0.9881
4 0.7562 0.5527 0.0315 0.6387 0.9581
5 0.8005 0.9393 0.0057 0.8644 0.9752

Table 5.

Performance measures of the DR classification model for IDRiD.

Class Precision Recall Specificity F1-score Accuracy
0 0.7692 0.5882 0.1818 0.6667 0.8058
1 1.0000 0.6000 0.0200 0.7500 0.9806
2 0.5370 0.9063 0.0612 0.6744 0.7282
3 1.0000 0.5263 0.0968 0.6897 0.9126
4 1.0000 0.7692 0.0323 0.8696 0.9709

The ROC curve and AUC value are used to evaluate the performance of our model. As shown in Figure 6, the AUC values of classes 0 to 5 are 0.9980, 0.6129, 0.9509, 0.9455, 0.9741, and 0.9293, respectively. Our model performs well on classes 0, 2, 3, 4, and 5, but the performance on class 1 is not satisfactory. In addition to the possible reasons analyzed previously, the severe class imbalance is also a very important factor: the total sample size of the DDR dataset is 13,673, while class 1 has only 630 samples. To address this, we resampled the data and retrained the model as a binary classifier, with which the AUC of class 1 reaches 0.9430. Thus, for simple DR screening, the binary classification model can be used clinically for mild patients; to evaluate a patient's risk level and the image quality, the multiclass model can be used.

Figure 6. The ROC and AUC per category for DR classification on the DDR dataset.

4.5. Comparing with Other Methods

Comparing the accuracy and average accuracy (AA) of our method with previous work reported on the DDR dataset (Table 6), our method ranks first on class 1, class 2, class 3, class 4, class 5, and AA. Our model achieves an accuracy of 0.9635 on class 1, the best among all models and almost 4 times the accuracy of the second-best DenseNet-121 model (0.2275). This shows that our model greatly improves on the most difficult case, identifying mild DR. Our model also achieves the best performance on classes 2, 3, and 4, with large gains. For image quality control (class 5), our model improves by 3.30% over the second-best VGG-16 model. For the AA metric, our model reaches 0.9154, which is 30.35% higher than the second-best DenseNet-121 model. However, our model does not perform as well as the other models on class 0. Its classification threshold is not set manually but is obtained from the Youden index (a small sketch of this selection follows Table 6), and we speculate that the model makes a concession on class 0 to reduce the missed-detection rate and improve overall performance. In conclusion, compared with the other benchmark models, the model based on ViT and CSRA is highly competitive in DR severity classification.

Table 6.

Performance comparison of different models on the DDR dataset.

Model 0 1 2 3 4 5 AA
VGG-16 [64] 0.9537 0.0423 0.5625 0.3944 0.6436 0.9422 0.5898
ResNet-18 [65] 0.9548 0.0582 0.622 0.3662 0.5818 0.9133 0.5827
GoogLeNet [66] 0.9574 0.0265 0.5759 0.3380 0.5782 0.9162 0.5654
DenseNet-121 [67] 0.8930 0.2275 0.5751 0.4085 0.6364 0.9306 0.6119
SE-BN-Inception [68] 0.9452 0.0476 0.6458 0.1268 0.5818 0.9046 0.5418
ViT + CSRA 0.8280 0.9635 0.7793 0.9881 0.9581 0.9752 0.9154

The bold font represents the best performance on that class. The italic font indicates the second-best performance on that class.
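As noted above, the decision threshold is selected with the Youden index rather than set by hand. A minimal sketch of selecting the threshold that maximizes J = sensitivity + specificity − 1 from an ROC curve, with random scores standing in for real model outputs in a binary (one class vs. rest) setting.

```python
import numpy as np
from sklearn.metrics import roc_curve

# stand-in binary labels and scores (e.g., class 1 vs. the rest)
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
y_score = np.clip(y_true * 0.3 + rng.random(500) * 0.7, 0, 1)

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr                                    # Youden's J = sensitivity + specificity - 1
best_threshold = thresholds[np.argmax(j)]        # threshold that maximizes J
print(f"Youden-optimal threshold: {best_threshold:.3f}")
```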

4.6. Ablation Studies on the DDR Dataset

In this paper, the influence of each component of the network is studied through ablation experiments. First, we replace the transformer with different backbones to verify the influence of the FEB on our model. Then, CSRA is replaced by an MLP while the FEB module is kept unchanged, to verify the influence of the GPB. Finally, keeping the rest of the model unchanged, we set different numbers of attention heads for CSRA in the GPB to verify the influence of this parameter setting on the overall performance of the model. Tables 7 and 8 detail the sensitivity, specificity, accuracy, and AUC values obtained from the different experiments.

  1. Effect of the FEB: first, the FEB uses ResNet50 as the backbone to extract image features. Compared with this design, our model improves sensitivity by nearly 2%, specificity by nearly 4%, accuracy by nearly 3%, and AUC by more than 6%. Then, the FEB uses ResNet101 as the backbone. Compared with this design, the sensitivity and specificity of our model are only slightly improved, while the accuracy and average AUC increase by more than 1% each.

  2. Effect of the GPB: next, we keep the FEB module unchanged and replace CSRA with an MLP. Compared with this design, the sensitivity and accuracy of our model improve by more than 2%, the specificity by more than 4%, and the AUC by nearly 4%. As can be seen from Table 7, our model achieves the best performance on all evaluation indexes.

  3. Effect of the number of attention heads in the GPB: following the results of [62], we set the number of heads to 2 by default. To verify the influence of the number of attention heads while keeping the rest of the model unchanged, the number of CSRA heads is also set to 1, 4, and 6. The experimental results in Table 8 show that our setting (head = 2) achieves the best overall performance.

Table 7.

The results of different ablation experiments in the DDR dataset. Sensitivity, specificity, accuracy, and AUC are average values, respectively.

Method Sensitivity Specificity Accuracy AUC
ResNet50 + CSRA 0.7941 0.7846 0.7940 0.8405
ResNet101+CSRA 0.8120 0.8242 0.8130 0.8903
ViT + MLP 0.7929 0.7815 0.7980 0.8647
ViT + CSRA 0.8140 0.8245 0.8235 0.9018

The bold font represents the best performance in each column. The italic font represents the second-best performance in each column.

Table 8.

The effect of the number of attention heads in the DDR dataset. Similar to Table 7, here sensitivity, specificity, accuracy, and AUC are average values, respectively.

Head setting Sensitivity Specificity Accuracy AUC
Head = 1 0.8037 0.7925 0.7827 0.8436
Head = 4 0.8064 0.8082 0.8241 0.8912
Head = 6 0.7932 0.8030 0.8148 0.8643
Ours (head = 2) 0.8140 0.8245 0.8235 0.9018

The bold font represents the best performance in each column. The italic font represents the second-best performance in each column.

5. Conclusions

According to the International Diabetes Federation, diabetes is one of the fastest-growing global health emergencies of the 21st century; by 2030, an estimated 643 million people will have diabetes (about 11.3% of the global population) [1]. DR is one of the common chronic complications of diabetes and, according to severity, can be divided into five stages from mild to severe. In this paper, we design a new network to classify fundus images of the different DR stages by using the vision transformer and residual attention. The model is trained and tested on two publicly available fundus image datasets (DDR and IDRiD). The experimental results show that, compared with the five existing DR classification benchmark methods, the proposed model performs better. However, limited by the number of labelled samples and the imbalance of the data, there is still much room for improvement in the identification and classification of mild DR, which remains a deficiency of our network. In future work, we will therefore continue to improve the network structure and refine the learning strategy to achieve better classification of DR severity.

Algorithm 1. Training procedure of the DR classification model.

Acknowledgments

This work was partially supported by the University Synergy Innovation Program of Anhui Province (nos. GXXT-2020-015 and GXXT-2021-065), the Natural Science Research Project of Anhui Universities (nos. KJ2020A0392 and KJ2020A0394), the Support Project of University Outstanding Youth Talent of Anhui Province (no. gxyq2020016), and the Natural Science Research Fund Project of the Department of Education of Anhui Province (nos. 2020zrzd16 and 2021zrzd12).

Data Availability

The DDR and IDRiD datasets used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  • 1.Briant P. S. Causes of blindness and vision impairment in 2020 and trends over 30 years, and prevalence of avoidable blindness in relation to VISION 2020: the Right to Sight: an analysis for the Global Burden of Disease Study. Lancet Global Health . 2021;9(2):e144–e160. doi: 10.1016/S2214-109X(20)30489-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Thomas R., Halim S., Gurudas S., Sivaprasad S., Owens D. Idfdiabetes atlas: a review of studies utilising retinal photography on the global prevalence of diabetes related retinopathy between 2015 and 2018. Diabetes Research and Clinical Practice . 2019;157 doi: 10.1016/j.diabres.2019.107840.107840 [DOI] [PubMed] [Google Scholar]
  • 3.Wong T. Y., Sun J., Kawasaki R., et al. Guidelines on diabetic eye care: the international council of ophthalmology recommendations for screening, follow-up, referral, and treatment based on resource settings. Ophthalmology . 2018;125(10):1608–1622. doi: 10.1016/j.ophtha.2018.04.007. [DOI] [PubMed] [Google Scholar]
  • 4.Ting D. S. W., Cheung G. C. M., Wong T. Y. Diabetic retinopathy: global prevalence, major risk factors, screening practices and public health challenges: a review. Clinical and Experimental Ophthalmology . 2016;44(4):260–277. doi: 10.1111/ceo.12696. [DOI] [PubMed] [Google Scholar]
  • 5.Goodfellow I., Bengio Y., Courville A. Deeplearning . USA: MIT press; 2016. [Google Scholar]
  • 6.Masquelin A. H., Cheney N., Kinsey C. M., Bates J. H. T. Wavelet decomposition facilitates training on small datasets for medical image classification by deep learning. Histochemistry and Cell Biology . 2021;155(2):309–317. doi: 10.1007/s00418-020-01961-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Lakhani P., Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology . 2017;284(2):574–582. doi: 10.1148/radiol.2017162326. [DOI] [PubMed] [Google Scholar]
  • 8.Müller D., Kramer F. MIScnn: a framework for medical image segmentation with convolutional neural networks and deep learning. BMC Medical Imaging . 2021;21(1):12–11. doi: 10.1186/s12880-020-00543-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.You H., Yu L., Tian S., Cai W. DR-Net: dual-rotation network with feature map enhancement for medical image segmentation. Complex & Intelligent Systems . 2022;8(1):611–623. doi: 10.1007/s40747-021-00525-4. [DOI] [Google Scholar]
  • 10.Song L., Liu G., Ma M. Switzerland: Springer Nature; 2022. TD-Net:unsupervised medical image registration network based on Transformer and CNN. [Google Scholar]
  • 11.Haskins G., Kruger U., Yan P. Deep learning in medical image registration: a survey. Machine Vision and Applications . 2020;31(1-2):8–18. doi: 10.1007/s00138-020-01060-x. [DOI] [Google Scholar]
  • 12.Kaur M., Singh D. Multi-modality medical image fusion technique using multi-objective differential evolution based deep neural networks. Journal of Ambient Intelligence and Humanized Computing . 2021;12(2):2483–2493. doi: 10.1007/s12652-020-02386-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Li Y., Zhao J., Lv Z., Li J. Medical image fusion method by deep learning. International Journal of Cognitive Computing in Engineering . 2021;2:21–29. doi: 10.1016/j.ijcce.2020.12.004. [DOI] [Google Scholar]
  • 14.Zhang Y., Wang X., Xu Z., Yu Q., Yuille A., Xu D. When radiology report generation meets knowledge graph. Proceedings of the AAAI Conference on Artificial Intelligence; February 2020; CA, USA. pp. 12910–12917. [DOI] [Google Scholar]
  • 15.Liu F., Wu X., Ge S., Fan W., Zou Y. Exploring and distilling posterior and prior knowledge for radiology report generation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; June 2021; Nashville, TN, USA. pp. 13753–13762. [Google Scholar]
  • 16.Dosovitskiy A., Beyer L., Kolesnikov A. An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale. Vienna, Austria: ICLR; 2021. [Google Scholar]
  • 17.Liu Z., Lin Y., Cao Y. Swin transformer: Hierarchical vision transformer using shifted windows. Proceedings of the IEEE/CVF International Conference on Computer Vision; October 2021; Montreal, BC, Canada. pp. 10012–10022. [Google Scholar]
  • 18.Chen J., He Y., Frey E. C., Li Y., Du Y. ViT-V-Net: Vision Transformer for Unsupervised Volumetric Medical Image Registration. 2021. arXiv preprint arXiv:2104.06468. [Google Scholar]
  • 19.International Council of Ophthalmology. ICO Guidelines for Diabetic Eye Care . Geneva Switzerland: International Council of Ophthalmology; 2017. [Google Scholar]
  • 20.Li T., Gao Y., Wang K., Guo S., Liu H., Kang H. Diagnostic assessment of deep learning algorithms for diabetic retinopathy screening. Information Sciences . 2019;501:511–522. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0020025519305377. [Google Scholar]
  • 21.Lei K., Mardani M., Pauly J. M., Vasanawala S. S. Wasserstein GANs for MR imaging: from paired to unpaired training. IEEE Transactions on Medical Imaging . 2021;40(1):105–115. doi: 10.1109/tmi.2020.3022968. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Lei K., Syed A. B., Zhu X., Pauly J. M., Vasanawala S. S. Artifact-and content-specific quality assessment for MRI with image rulers. Medical Image Analysis . 2022;77 doi: 10.1016/j.media.2021.102344.102344 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Esteva A., Kuprel B., Novoa R. A., et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature . 2017;542(7639):115–118. doi: 10.1038/nature21056. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Yi Y., Wang J., Zhou W., Zheng C., Kong J., Qiao S. Non-negative matrix factorization with locality constrained adaptive graph. IEEE Transactions on Circuits and Systems for Video Technology . 2020;30(2):427–441. doi: 10.1109/tcsvt.2019.2892971. [DOI] [Google Scholar]
  • 25.Li X., Tan W., Liu P., Zhou Q., Yang J. Classification of COVID-19 chest CT images based on ensemble deep learning. Journal of Healthcare Engineering . 2021;2021:7. doi: 10.1155/2021/5528441.5528441 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Girshick R., Donahue J., Darrell T. Rich feature hierarchies for accurate object detection and semantic segmentation. Proceedings of the IEEE conference on computer vision and pattern recognition; June 2014; San Juan, PR, USA. pp. 580–587. [Google Scholar]
  • 27.Girshick R. Sped-up PatchMatch Belief propagation for continuous MRFs. Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV); December 2015; Santiago. IEEE; pp. 1440–1448. [Google Scholar]
  • 28.Ren S., He K., Girshick R., Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence . 2017;39(6):1137–1149. doi: 10.1109/tpami.2016.2577031. [DOI] [PubMed] [Google Scholar]
  • 29.Joseph R., Divvala S., Girshick R., Ali F. You only look once: Unified, real-time object detection. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; June 2016; Las Vegas, NV, USA. CVPR); pp. 779–788. [Google Scholar]
  • 30.Joseph R., Ali F. YOLO9000: better, faster,stronger. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; July 2017; Las Vegas, NV, USA. CVPR); pp. 7263–7271. [Google Scholar]
  • 31.Joseph R., Ali F. YOLOv3: an incremental improvement. 2018. arXiv preprint arXiv:1804.02767.
  • 32.Wang C. Y., Bochkovskiy A., Liao M. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. 2022. arXiv preprint arXiv:2207.02696.
  • 33.Rajpurkar P., Irvin J., Zhu K. Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning. 2017. arXiv preprint arXiv:1711.05225.
  • 34.Aoki T., Yamada A., Aoyama K., et al. Automatic detection of erosions and ulcerations in wireless capsule endoscopy images based on a deep convolutional neural network. Gastrointestinal Endoscopy . 2019;89(2):357–363. doi: 10.1016/j.gie.2018.10.027. [DOI] [PubMed] [Google Scholar]
  • 35.Long J., Shelhamer E., Darrell T. Fully convolutional networks for semantic segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence . 2017;39(4):640–651. doi: 10.1109/tpami.2016.2572683. [DOI] [PubMed] [Google Scholar]
  • 36.Ronneberger O., Fischer P., Brox T. U-net: convolutional networks for biomedical image segmentation. Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention; October 2015; VerlagBerlin, Heidelberg. Munich: Springer; pp. 234–241. [Google Scholar]
  • 37.Guo C., Szemenyei M., Yi Y. SA-UNet: spatial attention U-net for retinal vessel segmentation. IEEE ; Proceedings of the 2020 25th international conference on pattern recognition (ICPR); January 2021; Milan, Italy. pp. 1236–1242. [Google Scholar]
  • 38.Reader A. J., Corda G., Mehranian A., Costa-Luis C. d., Ellis S., Schnabel J. A. Deep learning for PET image reconstruction. IEEE Transactions on Radiation and Plasma Medical Sciences . Jan. 2021;5(1):1–25. doi: 10.1109/trpms.2020.3014786. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Yaqub M., Jinchao F., Arshid K., et al. Deep learning-based image reconstruction for different medical imaging Modalities. Computational and Mathematical Methods in Medicine . 2022;2022:18. doi: 10.1155/2022/8750648.8750648 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Ramirez-Alonso G., Prieto-Ordaz O., López-Santillan R., Montes-Y-Gómez M. Medical report generation through radiology images: an Overview. IEEE Latin America Transactions . June 2022;20(6):986–999. doi: 10.1109/tla.2022.9757742. [DOI] [Google Scholar]
  • 41.Bravo M. A., Arbelez Pablo A. Automatic diabetic retinopathy classification. Proceedings of the 13th International Symposium on Medical Information Processing and Analysis; October 2017; San Andres Island, Colombia. SPIE; pp. 1–10. [Google Scholar]
  • 42.Zhou K., Z W Gu, Liu W, Luo W X, Cheng J, Gao S. H. Multi-cell multi-task convolutional neural networks for diabetic retinopathy grading. Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society; July 2018; Honolulu USA. IEEE; pp. 2724–2727. [DOI] [PubMed] [Google Scholar]
  • 43.Shanthi T., Sabeenian R. S. Modified Alexnet architecture for classification of diabetic retinopathy images. Computers & Electrical Engineering . 2019;76:56–64. doi: 10.1016/j.compeleceng.2019.03.004. [DOI] [Google Scholar]
  • 44.Golub Y., Lukashevich M., Starovoitov V. Retinal image analysis approach for diabetic retinopathy grading. Proceedings of the International Conference on Pattern Recognition and Information Processing; September 2021; Minsk, Belarus. Springer; pp. 152–165. [Google Scholar]
  • 45.Qian P., Zhao Z., Chen C., Zeng Z., Li X. Two eyes are better than one: Exploiting binocular correlation for diabetic retinopathy severity grading. Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); November 2021; Mexico. pp. 2115–2118. [DOI] [PubMed] [Google Scholar]
  • 46.Zhang C., Lei T., Chen P. Diabetic retinopathy grading by a source-free transfer learning approach. Biomedical Signal Processing and Control . 2022;73 doi: 10.1016/j.bspc.2021.103423.103423 [DOI] [Google Scholar]
  • 47.Li T., Bo W., Hu C., et al. Applications of deep learning in fundus images: a review. Medical Image Analysis . 2021;69 doi: 10.1016/j.media.2021.101971.101971 [DOI] [PubMed] [Google Scholar]
  • 48.Vaswani A., Shazeer N., Parmar N. Attention is all you need. Advances in Neural Information Processing Systems . 2017;30:5998–6008. [Google Scholar]
  • 49.Fang Y. Unleashing Vanilla vision transformer with Masked image modeling for object detection. 2022. arXiv preprint arXiv:2204.02964.
  • 50.Kim N., Kim D., Lan C. ReSTR: Convolution-free Referring Image Segmentation Using Transformers . New Orleans, Louisiana: CVPR; 2022. [Google Scholar]
  • 51.Yu S., Ma K., Bi Q., et al. MIL-ViT: multiple instance learning enhanced vision transformer for fundus image classification. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; September 2021; Strasbourg, France. Springer; pp. 45–54. [Google Scholar]
  • 52.Sun R., Li Y., Zhang T., Mao Z., Wu F., Zhang Y. Lesion-aware transformers for diabetic retinopathy grading. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; June 2021; Nashville, TN, USA. pp. 10938–10947. [Google Scholar]
  • 53.Yang H., Chen J., Xu M. Fundus disease image classification based on improved transformer. Proceedings of the 2021 International Conference on Neuromorphic Computing (ICNC); October 2021; Wuhan, China. IEEE; pp. 207–214. [Google Scholar]
  • 54.Wu J., Hu R., Xiao Z., Chen J., Liu J. Vision transformer-based recognition of diabetic retinopathy grade. Medical Physics . 2021;48(12):7850–7863. doi: 10.1002/mp.15312. [DOI] [PubMed] [Google Scholar]
  • 55.AlDahoul N., Abdul Karim H., Joshua Toledo Tan M., Momo M. A., Ledesma Fermin J. Encoding retina image to Words using ensemble of vision transformers for diabetic retinopathy grading. F1000Research . 2021;10:p. 948. doi: 10.12688/f1000research.73082.1. [DOI] [Google Scholar]
  • 56.Huang S., Li J., Xiao Y., Shen N., Xu T. RTNet: relation transformer network for diabetic retinopathy multi-lesion segmentation. IEEE Transactions on Medical Imaging . 2022;41(6):1596–1607. doi: 10.1109/TMI.2022.3143833. [DOI] [PubMed] [Google Scholar]
  • 57.Lee S., Elton D. C. Lymph node detection in t2 mri with transformers. 2021. arXiv preprint arXiv:2111.04885.
  • 58.Luo Y., Wang Y., Zu C. 3d transformer-gan for high-quality pet reconstruction. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; October 2021; Lima, Peru. Springer; pp. 276–285. [Google Scholar]
  • 59.Dalmaz O., Yurt M., Çukur T. ResViT: residual vision transformers for multi-modal medical image synthesis. 2021. arXiv preprint arXiv:2106.16031. [DOI] [PubMed]
  • 60.Amrouch M. Automatic generation of chest x-ray reports using a transformer-based deep learning model. Proceedings of the 2021 Fifth International Conference On Intelligent Computing in Data Sciences (ICDS); October 2021; Fez, Morocco. IEEE; [Google Scholar]
  • 61.Matsoukas C., Söderberg M., Smith K. Is it time to replace CNNs with transformers for medical images? 2021. arXiv preprint arXiv:2108.09038.
  • 62.Zhu K., Wu J. Residual attention: a simple but effective method for multi-label recognition. Proceedings of the IEEE/CVF International Conference on Computer Vision; October 2021; Montreal, BC, Canada. pp. 184–193. [Google Scholar]
  • 63.Porwal P., Pachade S., Kokare M., et al. Idrid: diabetic retinopathy–segmentation and grading challenge. Medical Image Analysis . 2020;59 doi: 10.1016/j.media.2019.101561.101561 [DOI] [PubMed] [Google Scholar]
  • 64.Simonyan K., Zisserman A. Very deep convolutional networks for large-scale image recognition. Proceedings of the International Conference on Learning Representations; May 2015; San Diego, CA, USA. [Google Scholar]
  • 65.He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; June 2016; San Juan, PR, USA. pp. 770–778. [Google Scholar]
  • 66.Szegedy C., Liu W., Jia Y., et al. Going deeper with convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; June 2015; Boston, MA. pp. 1–9. [Google Scholar]
  • 67.Huang G., Liu Z., Maaten L. V. D., Weinberger K. Q. Densely connected convolutional networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; July 2017; Honolulu, HI, USA. pp. 2261–2269. [Google Scholar]
  • 68.Hu J., Shen L., Sun G. Squeeze-and-excitation networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; June 2018; San Juan, PR, USA. pp. 7132–7141. [Google Scholar]
