PLOS One. 2020 Apr 16;15(4):e0227240. doi: 10.1371/journal.pone.0227240

Accuracy of a deep convolutional neural network in the detection of myopic macular diseases using swept-source optical coherence tomography

Takahiro Sogawa 1, Hitoshi Tabuchi 1,2, Daisuke Nagasato 1,2,*, Hiroki Masumoto 1,2, Yasushi Ikuno 3, Hideharu Ohsugi 4, Naofumi Ishitobi 1, Yoshinori Mitamura 5
Editor: Demetrios G. Vavvas
PMCID: PMC7161961  PMID: 32298265

Abstract

This study examined and compared the performance of deep learning (DL) in identifying swept-source optical coherence tomography (SS-OCT) images without myopic macular lesions [i.e., no high myopia (nHM) vs. high myopia (HM)] and OCT images with myopic macular lesions [e.g., myopic choroidal neovascularization (mCNV) and retinoschisis (RS)]. A total of 910 SS-OCT images were included in the study and analyzed by k-fold cross-validation (k = 5) using the well-known DL model Visual Geometry Group-16 (VGG16): nHM, 146 images; HM, 531 images; mCNV, 122 images; and RS, 111 images (n = 910). Three tasks were examined: the binary classification of OCT images with or without myopic macular lesions; the binary classification of HM images and images with myopic macular lesions (i.e., mCNV and RS images); and the ternary classification of HM, mCNV, and RS images. Sensitivity, specificity, and the area under the curve (AUC) were examined for the binary classifications, and the correct answer rate for the ternary classification.

The classification results for OCT images with or without myopic macular lesions were as follows: AUC, 0.970; sensitivity, 90.6%; specificity, 94.2%. The classification results for HM images versus images with myopic macular lesions were as follows: AUC, 1.000; sensitivity, 100.0%; specificity, 100.0%. The correct answer rates in the ternary classification of HM, mCNV, and RS images were as follows: HM, 96.5%; mCNV, 77.9%; and RS, 67.6%, with a mean of 88.9%. Using noninvasive, easy-to-obtain SS-OCT images, the DL model was able to classify OCT images without myopic macular lesions and OCT images with myopic macular lesions such as mCNV and RS with high accuracy. These results suggest the possibility of conducting highly accurate screening of ocular diseases using artificial intelligence, which may improve the prevention of blindness and reduce workloads for ophthalmologists.

Introduction

Myopia is a refractive error wherein the image is formed in front of the retina owing to increases in axial length and refractive power, regardless of the severity of the error or the age of onset [1]. Myopia is associated with macular complications such as myopic choroidal neovascularization (mCNV), retinoschisis (RS), and myopic chorioretinal atrophy, which can lead to blindness. The prevalence of myopia has recently been increasing annually around the world, especially in East Asia, and vision loss caused by myopia is considered a global social problem [2–6].

Traditionally, the evaluation of the retina has largely been conducted with an ophthalmoscope. However, this device only allows direct observation of the retina, making an objective evaluation difficult. Optical coherence tomography (OCT) has recently made it possible to obtain detailed tomographic images of the retina noninvasively and in a short time. Swept-source OCT (SS-OCT), in particular, can capture high-quality images owing to a light source with deep tissue penetration, arithmetic averaging of scans, and a fundus-tracking function [7–9]. With the advancement of such OCT technology, research on myopic macular diseases such as RS and mCNV, which are directly related to decreased visual function, has progressed dramatically. Studies using OCT have shown that early surgical intervention is important for the maintenance of long-term visual function in RS [10–15] and that early administration of anti–vascular endothelial growth factor drugs can help maintain long-term visual function in mCNV [16–20]. Therefore, early detection and treatment of macular lesions associated with myopia are crucial to maintaining better vision. However, administering screening tests to all people with myopia is not realistic from a human resource or economic perspective [21].

In recent years, artificial intelligence (AI) technologies, including deep learning (DL), have made remarkable progress, and various applications in diagnostic imaging have been reported in the medical field [22]. In the field of ophthalmology, many researchers, including the authors, have already reported applications of DL to image analysis using OCT, OCT angiography, and ultrawide-field fundus ophthalmoscopy [23–33].

To the best of our knowledge, however, no studies have been performed on the automatic diagnosis of myopic macular disease using DL technology on SS-OCT images. If AI using DL can establish diagnoses as accurately as ophthalmologists, this would significantly contribute to the early detection of myopia-related complications and may help decrease the number of patients who suffer loss of vision.

In light of the above, this study sought to examine and compare DL's classification performance using OCT images without myopic macular lesions [i.e., no high myopia (nHM) vs. high myopia (HM)] and OCT images with myopic macular lesions (e.g., mCNV and RS).

Materials and methods

Image dataset

A total of 910 SS-OCT images of eyes with or without myopic macular lesions were included in our study; images with reduced ocular media clarity due to severe cataract and/or severe vitreous hemorrhage were excluded. All images were obtained using an SS-OCT device (Topcon DRI OCT-1 Atlantis; Topcon Corp., Tokyo, Japan). Horizontal scans (12 mm) through the fovea were performed by trained certified orthoptists. nHM was defined as an axial length of less than 26 mm, and HM as an axial length of 26 mm or more, with neither involving other obvious ocular diseases. Because the purpose of this study was to evaluate DL's ability to detect a single condition, images of eyes with mCNV or RS that also showed complications of other retinal diseases (e.g., diabetic retinopathy, retinal vein occlusion) were excluded. The SS-OCT images were classified into nHM, HM, mCNV, and RS by retinal specialists. Some nHM and HM images included comorbidities (mild cataract, chorioretinal atrophy, epiretinal membrane, macular hole, and so on), and some RS images included retinoschisis with retinal detachment. Representative images of each class are presented in Fig 1.

Fig 1. Representative horizontal scans of SS-OCT.


Normal OCT image without HM (A), OCT image with HM and no macular lesions (B), and OCT images with mCNV (C) and RS (D) of the left eye using SS-OCT.

The obtained images were trained and validated using k-fold cross-validation (k = 5). With this approach, the image data were split into k groups; (k − 1) groups were used as training data, while the remaining group was used for validation [34,35]. The process was repeated k times until each of the k groups had served once as the validation dataset.
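The splitting procedure above can be sketched in a few lines (a minimal illustration; the study's actual fold assignment and random seed are not reported):

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Split sample indices into k folds; each fold serves once as validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

# Example: 910 images, k = 5
splits = list(kfold_indices(910, k=5))
assert len(splits) == 5
# every image appears exactly once across the validation folds
all_val = np.sort(np.concatenate([v for _, v in splits]))
assert np.array_equal(all_val, np.arange(910))
```

Each of the five iterations trains on roughly 728 images and validates on the remaining 182.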

Data augmentation techniques, including brightness, gamma correction, histogram equalization, noise addition, and inversion, were applied to the images in the training dataset to increase the amount of training data by sixfold. Then, deep neural network (DNN) models were constructed and trained using the preprocessed image data.
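As an illustration, the five listed transformations plus the original image give the sixfold expansion. The exact parameters used in the study (brightness factor, gamma value, noise level) are not reported, so the values below are placeholders:

```python
import numpy as np

def augment_sixfold(img, rng):
    """Return the original image plus five transformed copies (values in [0, 255])."""
    img = img.astype(np.float32)
    bright = np.clip(img * 1.2, 0, 255)                   # brightness scaling
    gamma = 255.0 * (img / 255.0) ** 0.8                  # gamma correction
    # histogram equalization via the cumulative distribution of pixel values
    hist, _ = np.histogram(img.astype(np.uint8), bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()
    equalized = 255.0 * cdf[img.astype(np.uint8)]
    noisy = np.clip(img + rng.normal(0, 10, img.shape), 0, 255)  # noise addition
    flipped = img[:, ::-1]                                # horizontal inversion
    return [img, bright, gamma, equalized, noisy, flipped]

rng = np.random.default_rng(0)
batch = augment_sixfold(rng.integers(0, 256, (192, 256)), rng)
assert len(batch) == 6 and all(a.shape == (192, 256) for a in batch)
```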

Because of the retrospective and observational nature of the study, the need for written informed consent was waived by the ethics committee. The data acquired in the course of the analysis were anonymized before we accessed them. This study was conducted in compliance with the principles of the Declaration of Helsinki and was approved by the local ethics committee of Tsukazaki Hospital.

Deep-learning model and training

In this study, the following nine DNN models were constructed and trained: Visual Geometry Group-16 (VGG16), Visual Geometry Group-19 (VGG19), Residual Network-50 (ResNet50), InceptionV3, InceptionResNetV2, Xception, DenseNet121, DenseNet169, and DenseNet201. After the training, the performance of each model was evaluated using test data [3638].

A convolutional DNN automatically learns local features of images and classifies images based on this information [39–41]. Among the nine DNN models used in this study, the network architecture of the well-known VGG16 model in particular is explained here (Fig 2). The original SS-OCT image size was 1,038 × 802 pixels, but each image was resized to 256 × 192 pixels to shorten the analysis time. The images were read as color images, so the size of the input tensor was 256 × 192 × 3. Since each pixel value was in the range of 0 to 255, each value was first divided by 255 to normalize it to the range of 0 to 1.
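A minimal sketch of this preprocessing step follows. Nearest-neighbor resizing is used here only for self-containment; the paper does not state which interpolation method was applied:

```python
import numpy as np

def preprocess(img):
    """Resize a 1038x802 scan to 256x192 (nearest neighbor) and scale to [0, 1]."""
    h, w = 192, 256
    rows = np.arange(h) * img.shape[0] // h   # source row for each target row
    cols = np.arange(w) * img.shape[1] // w   # source column for each target column
    resized = img[rows][:, cols]
    return resized.astype(np.float32) / 255.0  # 0..255 -> 0..1

x = preprocess(np.full((802, 1038, 3), 255, dtype=np.uint8))
assert x.shape == (192, 256, 3) and x.max() == 1.0
```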

Fig 2. Overall architecture of the VGG16 model.


The image data were resized to 256 × 192 pixels and set as the input tensor. The convolution layers (Conv 1, 2, and 3) were each followed by a ReLU activation function; pooling layers (MP 1 and 2) were placed after Conv 1 and Conv 3; and a dropout layer (drop rate: 0.25) preceded two fully connected layers (FC 1 and 2). In the final output layer, classification was performed using a softmax function over the classes.

The VGG16 model consists of five convolutional blocks followed by fully connected layers. Each block comprises convolutional layers and a max pooling layer. A convolutional layer captures local features in images; since its stride was set to 1, no downsizing of the images was performed in the convolutional layers. The Rectified Linear Unit (ReLU) activation function was used to mitigate the vanishing gradient problem [42]. The stride of the max pooling layers was set to 2, downsizing the images to compress the information [43].

Next, after passing through the five blocks, the data were passed through a flatten layer and two fully connected layers. The flatten layer removes position information from the tensor of features extracted by the convolutional blocks, while the fully connected layers compress the information received from the previous layer and pass it on to the next. The softmax function, which produces the probability of each class, provided the final output.

Fine-tuning was applied to increase the learning speed and achieve high performance with limited data [44]. The parameters obtained by learning ImageNet were used as the initial parameters of the convolutional layer blocks.

For the weights and biases, stochastic gradient descent (learning rate = 0.0001, momentum term = 0.9) was used as the optimizer to update the parameters [45,46]. The code used to perform the above is shown in S1 File.
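The parameter update rule of SGD with momentum, with the stated learning rate and momentum term, can be written as a small NumPy sketch:

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.0001, momentum=0.9):
    """One parameter update with momentum, as applied to every weight and bias."""
    velocity = momentum * velocity - lr * grad  # accumulate a decaying velocity
    return w + velocity, velocity               # move parameters along it

w, v = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(3):                              # three updates on a fixed gradient
    w, v = sgd_momentum_step(w, np.array([10.0, -10.0]), v)
assert np.all(np.abs(w - np.array([1.0, -2.0])) > 0)   # parameters moved
```

With a fixed gradient, the momentum term makes each successive step larger than plain SGD would take, which is what accelerates convergence along consistent gradient directions.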

An ensemble model was constructed by averaging the outputs of any combination of the nine network types; thus, 2⁹ − 1 = 511 candidate ensemble models were constructed. Among them, the classification performance of the model with the highest AUC for each binary classification and that of the model with the highest overall correct answer rate for the ternary classification were evaluated and compared with the performance of human ophthalmologists, as described later.
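Enumerating the candidate ensembles and averaging their outputs can be sketched as follows (mock softmax outputs stand in for the real model predictions):

```python
import numpy as np
from itertools import combinations

def ensemble_outputs(model_probs, subset):
    """Average the class probabilities of the models in `subset`."""
    return np.mean([model_probs[i] for i in subset], axis=0)

rng = np.random.default_rng(0)
# mock softmax outputs: 9 models x 4 samples x 3 classes
probs = rng.dirichlet(np.ones(3), size=(9, 4))
subsets = [s for r in range(1, 10) for s in combinations(range(9), r)]
assert len(subsets) == 2**9 - 1              # 511 candidate ensembles
out = ensemble_outputs(probs, subsets[-1])   # the full 9-model ensemble
assert out.shape == (4, 3)
assert np.allclose(out.sum(axis=1), 1.0)     # averaged rows still sum to 1
```

In the study, the best subset was then selected by validation AUC (binary tasks) or overall correct answer rate (ternary task).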

The models were constructed and evaluated using Python Keras (https://keras.io/ja/) [Backend is TensorFlow (https://www.tensorflow.org/)]. For development and validation, the following computing setup was used: Intel Core i7-7700K® (Intel, Santa Clara, CA, USA) as the central processing unit and NVIDIA GeForce GTX 1080 (Nvidia, Santa Clara, CA, USA) as the graphics processing unit.

Comparison with ophthalmologists

For the binary classification of OCT images with or without myopic macular lesions, 46 OCT images without myopic macular lesions (nHM: 23 images; HM: 23 images) and 46 OCT images with myopic macular lesions (mCNV: 23 images; RS: 23 images) were included. For the binary classification of HM images and images with myopic macular lesions (mCNV images and RS images), 44 HM images and 44 myopic macular disease images (mCNV: 22 images; RS: 22 images) were included. In addition, for the ternary classification of HM, mCNV, and RS images, 23 images of each were included. The task of classifying these images was given to three human ophthalmologists, and their results were compared with those of the neural networks. The metrics used to evaluate the classification performance of the neural networks and ophthalmologists were the AUC for the binary classifications and overall accuracy for the ternary classification.

Outcome

Performance results were examined for the binary classification of OCT images with or without myopic macular lesions, the binary classification of HM images and images with myopic macular lesions (mCNV images and RS images), and the ternary classification of HM, mCNV, and RS images. Outcome measures for the binary classifications were AUC, sensitivity, and specificity, obtained from the receiver operating characteristic (ROC) curve. Based on the probability value output by the neural network for the abnormal group, the ROC curve was obtained by varying the threshold used to judge whether an image showed myopic macular disease. For the ternary classification, the class with the maximum value among the three probabilities output by the neural network was taken as the diagnosis. The overall accuracy and the accuracy within each group were obtained by comparing the network's diagnosis with the ground-truth diagnosis established by the ophthalmologists.
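The threshold-sweeping construction of the ROC curve described above can be illustrated with a simplified NumPy implementation (the study itself used R pROC for this analysis):

```python
import numpy as np

def roc_auc(scores, labels):
    """Build the ROC curve by sweeping the decision threshold, then integrate."""
    thresholds = np.sort(np.unique(scores))[::-1]
    tpr, fpr = [0.0], [0.0]
    pos, neg = labels.sum(), (1 - labels).sum()
    for t in thresholds:
        pred = scores >= t                     # classify "disease" at threshold t
        tpr.append((pred & (labels == 1)).sum() / pos)
        fpr.append((pred & (labels == 0)).sum() / neg)
    # integrate the curve with the trapezoidal rule
    area = 0.0
    for i in range(1, len(fpr)):
        area += (fpr[i] - fpr[i - 1]) * (tpr[i] + tpr[i - 1]) / 2
    return area

labels = np.array([0, 0, 1, 1])
assert roc_auc(np.array([0.1, 0.4, 0.35, 0.8]), labels) == 0.75
assert roc_auc(np.array([0.1, 0.2, 0.8, 0.9]), labels) == 1.0
```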

Heatmap

A gradient-weighted class activation mapping (Grad-CAM) method [47] was used to create heatmap images indicating where the DNN focused. As an example, heatmaps from the VGG16 network are shown in Fig 3. The output of the second convolutional layer of the second convolutional block was maximized, and the Grad-CAM method was applied. The ReLU function was employed during backpropagation. This process was performed with Python Keras-Vis (https://raghakot.github.io/keras-vis/).
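The core Grad-CAM combination step, weighting each feature map by its average gradient and applying ReLU, can be illustrated with mock tensors (the model-specific gradient computation handled by Keras-Vis is omitted here):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Weight each feature map by its average gradient, sum, and apply ReLU."""
    alphas = gradients.mean(axis=(1, 2))              # one weight per channel
    cam = np.tensordot(alphas, feature_maps, axes=1)  # weighted sum of maps
    return np.maximum(cam, 0)                         # ReLU keeps positive evidence

rng = np.random.default_rng(0)
maps = rng.normal(size=(64, 48, 64))    # 64 channels of 48x64 activations
grads = rng.normal(size=(64, 48, 64))   # d(class score)/d(activations)
heat = grad_cam(maps, grads)
assert heat.shape == (48, 64) and heat.min() >= 0.0
```

The resulting map is then upsampled to the input resolution and superimposed on the OCT scan, as in Fig 3.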

Fig 3. Representative horizontal scans of SS-OCT and corresponding heatmaps.


Presented are a normal SS-OCT image with nHM (A), and its corresponding superimposed heatmap (B); OCT image with HM and no macular lesions (C) and its corresponding superimposed heatmap (D); OCT image with myopic choroidal neovascularization (E) and its corresponding superimposed heatmap (F); and OCT image with myopic retinoschisis (G) and its corresponding superimposed heatmap (H). For all of them, the convolutional DNN focused on the macular area (red color) on the SS-OCT images (B, D, F, and H). In particular, the DNN focused on the lesion area of the SS-OCT images in the images with retinoschisis and myopic choroidal neovascularization.

Statistical analysis

In the comparison of subjects' demographic data, analysis of variance was used for age and axial length, and the chi-squared test was used for categorical variables (sex ratio and right:left eye ratio). The 95% confidence interval (CI) of the AUC was calculated using the following formulas, assuming a normal distribution [48]:

95% CI = A ± Z(0.05/2) × SE(A)

Z(x) = (1/√(2π)) · e^(−x²/2)

SE(A) = √{ [A(1 − A) + (nP − 1)(Q1 − A²) + (nN − 1)(Q2 − A²)] / (nP × nN) }

Q1 = A / (2 − A)

Q2 = 2A² / (1 + A)

nP: the number of images in the non-disease group; (1) 563 and (2) 456.

nN: the number of images in the myopic macular disease group; (1) 233 and (2) 233.

Sensitivity and specificity were obtained with the decision threshold set to 0.5. The 95% CIs for sensitivity and specificity were calculated using the Clopper–Pearson method, as was the CI of the correct answer rate in the ternary classification.
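The Hanley–McNeil AUC confidence interval above can be implemented directly; the example below reuses nP = 563 and nN = 233 from classification (1):

```python
import math

def auc_ci(auc, n_pos, n_neg, z=1.96):
    """95% CI for an AUC via the Hanley-McNeil standard error formula."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    se = math.sqrt((auc * (1 - auc)
                    + (n_pos - 1) * (q1 - auc**2)
                    + (n_neg - 1) * (q2 - auc**2)) / (n_pos * n_neg))
    return auc - z * se, auc + z * se

lo, hi = auc_ci(0.970, 563, 233)
assert 0.9 < lo < 0.970 < hi    # interval brackets the point estimate
```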

A significant difference was determined when the p-value was less than 0.05 (p < 0.05). These statistical analyses were performed using Python SciPy (https://www.scipy.org/), Python statsmodel (http://www.statsmodels.org/stable/index.html), and R pROC (https://cran.r-project.org/web/packages/pROC/pROC.pdf).

Results

Table 1 shows the demographic data of the 910 subjects from whom the 910 study images were obtained. There was no significant difference between the four groups in the ratio of left and right eyes (p = 0.6585, chi-squared test); however, significant differences were found in age, sex, and axial length (p < 0.001, p < 0.005, and p < 0.001, respectively; analysis of variance and chi-squared test).

Table 1. Subject demographics.

nHM HM mCNV RS p-value
N 146 531 122 111
Age (years) 64.5 ± 13.5 58.3 ± 14.0 68.6 ± 9.3 64.7 ± 11.5 <0.001*
Sex (female) 73 (50.0%) 356 (67.0%) 97 (79.5%) 87 (78.4%) <0.005**
Eye (left) 76 (52.1%) 273 (51.6%) 71 (41.8%) 57 (48.6%) 0.66**
AL (mm) 24.4 ± 1.3 28.1 ± 1.7 29.2 ± 1.7 29.4 ± 1.8 <0.001*

nHM, no high myopia; HM, high myopia; mCNV, myopic choroidal neovascularization; RS, retinoschisis; AL, axial length;

*analysis of variance,

**chi-squared test.

Neural network performance

For the binary classification of OCT images with or without myopic macular lesions, an ensemble model of VGG16, VGG19, DenseNet121, InceptionV3 and ResNet50 showed the best performance as follows: AUC, 0.970; sensitivity, 90.6%; and specificity, 94.2%.

For the binary classification of HM images and images with myopic macular lesions (mCNV and RS), VGG16 showed the best performance as follows: AUC, 1.000; sensitivity, 100.0%; and specificity, 100.0% (Table 2).

Table 2. Results of the binary classifications.

nHM and HM vs. mCNV and RS HM vs. mCNV and RS
AUC 0.970 (0.939–1.000) 1.000 (1.000–1.000)
Sensitivity 90.6 (86.1–95.1) 100.0 (98.3–100.0)
Specificity 94.2 (92.1–96.3) 100.0 (99.2–100.0)

nHM, no high myopia; HM, high myopia; mCNV, myopic choroidal neovascularization; RS, retinoschisis; AUC, area under the curve.

95% CIs are presented in parentheses.

Finally, for the ternary classification of HM images, mCNV images, and RS images, VGG16 and DenseNet121 showed the best performance as follows: HM, 96.5%; mCNV, 77.9%; and RS, 67.6%. The overall correct answer rate was 88.9% (Table 3).

Table 3. Results of the ternary classification.

HM mCNV RS Average
Correct answer rate 96.5 77.9 67.6 88.9

HM, high myopia; mCNV, myopic choroidal neovascularization; RS, retinoschisis.

Data are presented in %.

Comparison of neural network and ophthalmologist outcomes

For the binary classification of a total of 92 OCT images with or without myopic macular lesions, the neural networks' performance was AUC: 0.837, whereas the ophthalmologists' performance was AUC: 0.877 (p = 0.86).

For the binary classification of a total of 88 HM images and images with myopic macular lesions (mCNV and RS), the neural networks' performance was AUC: 1.000, whereas the ophthalmologists' performance was AUC: 0.875 (p = 0.48).

Finally, for the ternary classification of a total of 69 images (HM, mCNV, and RS), the neural networks' performance for overall accuracy was 79.7%, whereas the ophthalmologists' performance for the same was 86.0% (p = 0.76).

In all three classifications, no significant difference was found between the results of neural networks and those of the ophthalmologists (Table 4).

Table 4. Results of the comparison between outcomes of neural networks and humans.

Neural networks Ophthalmologists p-value
nHM and HM vs. mCNV and RS 0.837 (0.745–0.906) 0.877 (0.832–0.913) 0.86
HM vs. mCNV and RS 1.000 (0.959–1.000) 0.875 (0.829–0.912) 0.48
Overall accuracy of HM, RS, and mCNV 79.7 (68.3–88.4) 86.0 (80.5–90.4) 0.76

nHM, no high myopia; HM, high myopia; mCNV, myopic choroidal neovascularization; RS, retinoschisis.

95% CIs are presented in parentheses.

Heatmap

The corresponding heatmaps of representative SS-OCT images of nHM, HM, mCNV, and RS are shown in Fig 3. In the heatmaps, red indicates the strength of the deep convolutional neural network's focus. Increases in color intensity were observed around the macula in nHM and HM images, in the area highlighted by choroidal neovascularization at the macula in mCNV images, and in the retinoschisis area at the macula in RS images.

Discussion

In this study, using combinations of nine DNN models (VGG16, VGG19, ResNet50, InceptionV3, InceptionResNetV2, Xception, DenseNet121, DenseNet169, and DenseNet201), the classification of myopic macular diseases (mCNV and RS) versus no myopic macular disease was conducted using SS-OCT images. The results showed that our DL models were able to classify both no myopic macular disease and myopic macular diseases with high accuracy. The combination of DNN models provided a correct answer rate equivalent to that of the ophthalmologists for each classification. To our knowledge, this study is the first to report the ability of DL to classify RS and mCNV images with high accuracy using SS-OCT images.

A few recent studies considering AI's detection ability using OCT images have been conducted on age-related macular degeneration (AMD). Treder et al. [49] developed and evaluated a DL program to detect AMD in spectral-domain OCT (SD-OCT) images. Their approach was tested using 100 OCT images (AMD: 50; healthy controls: 50) and yielded correct answer rates of 0.997 in the AMD group and 0.920 in the healthy control group (p < 0.001). Yoo et al. [50] also evaluated the automated detection of AMD in both OCT and fundus images using a DL program. DL with OCT images alone showed an AUC of 0.906 and a correct answer rate of 82.6%; DL with fundus images alone showed an AUC of 0.914 and a correct answer rate of 83.5%; and DL with a combination of OCT and fundus images showed an AUC of 0.969 and a correct answer rate of 90.5%. AI's diagnostic efficiency with OCT images has likewise been reported for other diseases. Our results concerning AI's diagnostic performance on images with myopic macular diseases showed sensitivity and AUC outcomes similar to those of previous reports. A neural network can devise and construct an optimal structure to learn and detect local features of complex image data with individual differences [39,41,51].

In our study, we obtained diagnostic accuracy comparable to that of human ophthalmologists by using the ensemble method, which combines various DL models into one AI algorithm. In AI-directed classifications, the lesion sites the AI actually uses often differ from the essential lesion sites that ophthalmologists examine. In this study, however, heatmaps were used to show where the neural network focused, revealing increased color intensity at the following sites: around the macula in nHM and HM SS-OCT images, at the RS site in RS images, and at the mCNV site in mCNV images. These focus sites match those on which ophthalmologists focus during diagnosis, indicating that the DNN accurately identified the locations of RS and mCNV lesions and distinguished normal images from images of myopic macular diseases based on the features of the lesions. Strictly speaking, however, it is difficult to compare diagnostic performance between humans and AI. Liu et al. [52] conducted a systematic review and meta-analysis to compare the diagnostic accuracy of DL algorithms with that of health care providers using medical images. Among the studies they examined, 14 compared DL models and health care providers on the same sample data and met other criteria, including the publication of raw data. Aggregating the performance data of these 14 studies, they found a mean sensitivity of 87.0% (95% CI: 83.0–90.2) and a mean specificity of 92.5% (95% CI: 85.1–96.4) for the DL models, and a mean sensitivity of 86.4% (95% CI: 79.9–91.0) and a mean specificity of 90.5% (95% CI: 80.6–95.7) for the health care providers. Their study therefore suggested that the diagnostic performance of DL models is equivalent to that of health care providers.
However, they also pointed out the lack of quality studies comparing AI and medical professionals, with no established comparison method currently available. In the present study, the DL model we used obtained a correct answer rate equivalent to that of the ophthalmologists on the same sample data. There is still room for improvement in this type of research, however, including increasing the number of training images, further improving the AI algorithms, and combining OCT and fundus images.

At present, the early detection of myopic macular diseases requires an examination performed by an ophthalmologist, but there are not enough ophthalmologists worldwide to pursue this. Our study found no significant difference in classification performance between the neural networks and ophthalmologists in the binary classification of OCT images with or without myopic macular lesions; in the binary classification of HM images and images with myopic macular lesions; or in the ternary classification of HM, mCNV, and RS images. This suggests that automated diagnosis by AI using SS-OCT image data, which can be acquired noninvasively and easily, may be very useful in myopic macular disease screening.

Our study has a few limitations that should be considered. First, imaging diagnosis by AI is impossible in patients with reduced ocular media clarity due to severe cataract or severe vitreous hemorrhage and in patients for whom detailed imaging cannot be obtained due to severely poor fixation; for these reasons, such SS-OCT images were excluded from this study. Second, the demographic data varied between groups. Myopia is significantly more common in females than in males, and the prevalence of myopic macular diseases is significantly higher in older populations; therefore, the influence of such demographic differences seems unavoidable [53–55]. Third, the AI algorithms created and tested here might not be generalizable to other commercially available imaging devices, because we investigated only the Topcon DRI OCT-1. Fourth, to shorten the analysis time, the original SS-OCT images of 1,038 × 802 pixels were resized to 256 × 192 pixels. Finally, mCNV and retinal hemorrhage show similar findings in SS-OCT images; therefore, data from sources other than OCT images are required to distinguish these conditions [56,57].

Conclusion

The DL model was able to distinguish myopic macular diseases (mCNV and RS) from no myopic macular disease with high accuracy using SS-OCT images. These findings suggest that DL is useful for reducing ophthalmologists' workloads in screening and for preventing vision loss in patients with myopic macular disease.

Supporting information

S1 File

(ZIP)

Acknowledgments

We thank Masayuki Miki and the orthoptists at Tsukazaki Hospital for support in collecting the data.

Data Availability

All relevant data are within the paper and its Supporting Information files.

Funding Statement

The authors received no specific funding for this work.

References

  • 1.McBrien NA, Adams DW. A longitudinal investigation of adult-onset and adult-progression of myopia in an occupational group. Refractive and biometric findings. Invest Ophthalmol Vis Sci. 1997;38: 321–333. [PubMed] [Google Scholar]
  • 2.Holden BA, Fricke TR, Wilson DA, Jong M, Naidoo KS, Sankaridurg P, et al. Global prevalence of myopia and high myopia and temporal trends from 2000 through 2050. Ophthalmology. 2016;123: 1036–1042. 10.1016/j.ophtha.2016.01.006 [DOI] [PubMed] [Google Scholar]
  • 3.Hsu WM, Cheng CY, Liu JH, Tsai SY, Chou P. Prevalence and causes of visual impairment in an elderly Chinese population in Taiwan: the Shihpai Eye Study. Ophthalmology. 2004;111: 62–69. 10.1016/j.ophtha.2003.05.011 [DOI] [PubMed] [Google Scholar]
  • 4.Iwase A, Araie M, Tomidokoro A, Yamamoto T, Shimizu H, Kitazawa Y; Tajimi Study Group. Prevalence and causes of low vision and blindness in a Japanese adult population: the Tajimi Study. Ophthalmology. 2006;113: 1354–1362. 10.1016/j.ophtha.2006.04.022 [DOI] [PubMed] [Google Scholar]
  • 5.Yamada M, Hiratsuka Y, Roberts CB, Pezzullo ML, Yates K, Takano S, et al. Prevalence of visual impairment in the adult Japanese population by cause and severity and future projections. Ophthalmic Epidemiol. 2010;17: 50–57. 10.3109/09286580903450346 [DOI] [PubMed] [Google Scholar]
  • 6.Bourne RR, Stevens GA, White RA, Smith JL, Flaxman SR, Price H, et al. Causes of vision loss worldwide, 1990–2010: a systematic analysis. Lancet Glob Health. 2013;1: e339–349. 10.1016/S2214-109X(13)70113-X [DOI] [PubMed] [Google Scholar]
  • 7.Tan CS, Chan JC, Cheong KX, Ngo WK, Sadda SR. Comparison of retinal thicknesses measured using swept-source and spectral-domain optical coherence tomography devices. Ophthalmic Surg Lasers Imaging Retina. 2015;46: 172–179. 10.3928/23258160-20150213-23 [DOI] [PubMed] [Google Scholar]
  • 8.Mrejen S, Spaide RF. Optical coherence tomography: imaging of the choroid and beyond. Surv Ophthalmol. 2013;58: 387–429. 10.1016/j.survophthal.2012.12.001 [DOI] [PubMed] [Google Scholar]
  • 9.Yasuno Y, Hong Y, Makita S, Yamanari M, Akiba M, Miura M, et al. In vivo high-contrast imaging of deep posterior eye by 1-μm swept source optical coherence tomography and scattering optical coherence angiography. Opt Express. 2007;15: 6121–6139. 10.1364/oe.15.006121 [DOI] [PubMed] [Google Scholar]
  • 10.Gaucher D, Haouchine B, Tadayoni R, Massin P, Erginay A, Benhamou N, et al. Long-term follow-up of high myopic foveoschisis: natural course and surgical outcome. Am J Ophthalmol. 2007;143: 455–462. 10.1016/j.ajo.2006.10.053 [DOI] [PubMed] [Google Scholar]
  • 11.Gao X, Ikuno Y, Fujimoto S, Nishida K. Risk factors for development of full-thickness macular holes after pars plana vitrectomy for myopic foveoschisis. Am J Ophthalmol. 2013;155: 1021–1027. 10.1016/j.ajo.2013.01.023 [DOI] [PubMed] [Google Scholar]
  • 12.Fujimoto S, Ikuno Y, Nishida K. Postoperative optical coherence tomographic appearance and relation to visual acuity after vitrectomy for myopic foveoschisis. Am J Ophthalmol. 2013;156: 968–973. 10.1016/j.ajo.2013.06.011 [DOI] [PubMed] [Google Scholar]
  • 13.Lehmann M, Devin F, Rothschild PR, Gaucher D, Morin B, Philippakis E, et al. Preoperative factors influencing visual recovery after vitrectomy for myopic foveoschisis. Retina. 2019;39: 594–600. 10.1097/IAE.0000000000001992 [DOI] [PubMed] [Google Scholar]
  • 14.Hattori K, Kataoka K, Takeuchi J, Ito Y, Terasaki H. Predictive factors of surgical outcomes in vitrectomy for myopic traction maculopathy. Retina. 2018;38 Suppl 1: S23–S30. [DOI] [PubMed] [Google Scholar]
  • 15.Sun Z, Gao H, Wang M, Chang Q, Xu G. Rapid progression of foveomacular retinoschisis in young myopics. Retina. 2019;39: 1278–1288. 10.1097/IAE.0000000000002203 [DOI] [PubMed] [Google Scholar]
  • 16.Ikuno Y, Sayanagi K, Soga K, Sawa M, Tsujikawa M, Gomi F, et al. Intravitreal bevacizumab for choroidal neovascularization attributable to pathological myopia: one-year results. Am J Ophthalmol. 2009;147: 94–100. 10.1016/j.ajo.2008.07.017 [DOI] [PubMed] [Google Scholar]
  • 17.Wu TT, Kung YH. Five-year outcomes of intravitreal injection of ranibizumab for the treatment of myopic choroidal neovascularization. Retina. 2017;37: 2056–2061. 10.1097/IAE.0000000000001453 [DOI] [PubMed] [Google Scholar]
  • 18.Ohno-Matsui K, Ikuno Y, Lai TYY, Gemmy Cheung CM. Diagnosis and treatment guideline for myopic choroidal neovascularization due to pathologic myopia. Prog Retin Eye Res. 2018;63: 92–106. 10.1016/j.preteyeres.2017.10.005 [DOI] [PubMed] [Google Scholar]
  • 19.Tan NW, Ohno-Matsui K, Koh HJ, Nagai Y, Pedros M, Freitas RL, et al. Long-term outcomes of ranibizumab treatment of myopic choroidal neovascularization in East-Asian patients from the radiance study. Retina. 2018;38: 2228–2238. 10.1097/IAE.0000000000001858 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Onishi Y, Yokoi T, Kasahara K, Yoshida T, Nagaoka N, Shinohara K, et al. Five-year outcomes of intravitreal ranibizumab for choroidal neovascularization in patients with pathologic myopia. Retina. 2019;39: 1289–1298. 10.1097/IAE.0000000000002164 [DOI] [PubMed] [Google Scholar]
  • 21.Mrsnik M. Global Aging 2013: Rising to the challenge. Standard & poor’s rating services; 2013. https://www.nact.org/resources/2013_NACT_Global_Aging.pdf [Google Scholar]
  • 22.Todoroki K, Nakano T, Ishii Y et al. Automatic analyzer for highly polar carboxylic acids based on fluorescence derivatization-liquid chromatography. Biomed Chromatogr. 2015;29:445–451. 10.1002/bmc.3295 [DOI] [PubMed] [Google Scholar]
  • 23.Nagasato D, Tabuchi H, Masumoto H, Goto K, Tomita R, Fujioka T, et al. Automated detection of a nonperfusion area caused by retinal vein occlusion in optical coherence tomography angiography images using deep learning. PLoS One. 2019;14: e0223965 10.1371/journal.pone.0223965 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Masumoto H, Tabuchi H, Nakakura S et al. Accuracy of a deep convolutional neural network in detection of retinitis pigmentosa on ultrawide-field images. PeerJ 2019;7: e6900 10.7717/peerj.6900 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Nagasawa T, Tabuchi H, Masumoto H, Ohsugi H, Enno H, Ishitobi N, et al. Accuracy of ultrawide-field fundus ophthalmoscopy-assisted deep learning for detecting treatment-naïve proliferative diabetic retinopathy. Int Ophthalmol. 2019;39: 2153–2159. 10.1007/s10792-019-01074-z [DOI] [PubMed] [Google Scholar]
  • 26.Ohsugi H, Tabuchi H, Enno H, Ishitobi N. Accuracy of deep learning, a machine-learning technology, using ultra–wide-field fundus ophthalmoscopy for detecting rhegmatogenous retinal detachment. Sci Rep. 2017;7: 9425 10.1038/s41598-017-09891-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Nagasato D, Tabuchi H, Ohsugi H et al. Deep neural network-based method for detecting central retinal vein occlusion using ultrawide-field fundus ophthalmoscopy. J Ophthalmol. 2018:1875431 10.1155/2018/1875431 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Nagasawa T, Tabuchi H, Masumoto H, Masumoto H, Enno H, Ishitobi N, et al. Accuracy of deep learning, a machine-learning technology, using ultra–widefield fundus ophthalmoscopy for detecting idiopathic macular holes. PeerJ.2018;22;6: e5696. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Sonobe T, Tabuchi H, Ohsugi H, Masumoto H, Ishitobi N, Morita S, et al. Comparison between support vector machine and deep learning, machine-learning technologies for detecting epiretinal membrane using 3D-OCT. Int Ophthalmol. 2019;39: 1871–1877. 10.1007/s10792-018-1016-x [DOI] [PubMed] [Google Scholar]
  • 30.Matsuba S, Tabuchi H, Ohsugi H, Enno H, Ishitobi N, Masumoto H, et al. Accuracy of ultra–wide-field fundus ophthalmoscopy-assisted deep learning, a machine-learning technology, for detecting age related macular degeneration. Int Ophthalmol. 2019;39: 1269–1275. 10.1007/s10792-018-0940-0 [DOI] [PubMed] [Google Scholar]
  • 31.Masumoto H, Tabuchi H, Nakakura S, Ishitobi N, Miki M, Enno H. Deep-learning classifier with an ultrawide-field scanning laser ophthalmoscope detects glaucoma visual field severity. J Glaucoma. 2018;27: 647–652. 10.1097/IJG.0000000000000988 [DOI] [PubMed] [Google Scholar]
  • 32.Grewal PS, Oloumi F, Rubin U, Tennant MTS. Deep learning in ophthalmology: a review. Can J Ophthalmol. 2018;53: 309–313. 10.1016/j.jcjo.2018.04.019 [DOI] [PubMed] [Google Scholar]
  • 33.De Fauw J, Ledsam JR, Romera-Paredes B, Nikolov S, Tomasev N, Blackwell S, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018;24: 1342–1350. 10.1038/s41591-018-0107-6 [DOI] [PubMed] [Google Scholar]
  • 34.Mosteller F, Tukey JW. Data analysis, including statistics In: Lindzey G, Aronson E, editors. Handbook of social psychology. Reading, MA: Addison–Wesley; 1968. pp. 80–203. [Google Scholar]
  • 35.Kohavi R. A study of cross-validation and bootstrap for accuracy estimation and model selection. In: Proceedings of the 14th International joint conference on artificial intelligence. Montreal, Quebec, Canada: Morgan Kaufmann Publishers Inc.; 1995. pp. 1137–1143.
  • 36.Simonyan, K., Andrew, Z. Very deep convolutional networks for large-scale image recognition. https://arxiv.org/pdf/1409.1556.pdf
  • 37.Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. Proceedings of the IEEE conference on computer vision and pattern recognition. 2016; 2818–2826
  • 38.Szegedy C, Ioffe S, Vanhoucke V, Alemi AA. Inception-v4, inception-resnet and the impact of residual connections on learning. AAAI. 2017;4: 12 [Google Scholar]
  • 39.Deng J, Dong W, Socher R, Li L, Kai L, Li F-F. ImageNet: a large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition. Miami, FL: IEEE; 2009. pp. 248–255.
  • 40.Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, et al. ImageNet large scale visual recognition challenge. Int J Comp Vision. 2015;115: 211–252. [Google Scholar]
  • 41.Lee CY, Xie S, Gallagher P, Zhang Z, Tu Z. Deeply-supervised nets. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics (AISTATS). San Diego, CA, USA: Journal of Machine Learning Research Workshop and Conference Proceedings; 2015. pp. 562–570.
  • 42.Glorot X, Bordes A, Bengio Y. Deep sparse rectifier neural networks. In: Proceedings of the 14th International conference on artificial intelligence and statistics. Fort Lauderdale, FL: PMLR; 2011. pp. 315–323.
  • 43.Scherer D, Müller A, Behnke S. Evaluation of pooling operations in convolutional architectures for object recognition In: Diamantaras K, Duch W, Iliadis LS, editors. Artificial neural networks–ICANN 2010. Berlin, Heidelberg: Springer Berlin; 2010. pp. 92–101. [Google Scholar]
  • 44.Agrawal P, Girshick R, Malik J. Analyzing the performance of multilayer neural networks for object recognition In: Fleet D, Pajdla T, Schiele B, Tuytelaars T, editors. Computer vision–ECCV 2014. Cham: Springer International Publishing; 2014. pp. 329–344. [Google Scholar]
  • 45.Qian N. On the momentum term in gradient descent learning algorithms. Neural Networks. 1999;12: 145–151. 10.1016/s0893-6080(98)00116-6 [DOI] [PubMed] [Google Scholar]
  • 46.Nesterov Y. A method for unconstrained convex minimization problem with the rate of convergence O (1/k^2). Doklady AN USSR. 1983;269: 543–547. [Google Scholar]
  • 47.Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: visual explanations from deep networks via gradient-based localization. In: IEEE International Conference on Computer Vision (ICCV), 2017; 618–626.
  • 48.Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982;143: 29–36. 10.1148/radiology.143.1.7063747 [DOI] [PubMed] [Google Scholar]
  • 49.Treder M, Lauermann JL, Eter N. Automated detection of exudative age-related macular degeneration in spectral domain optical coherence tomography using deep learning. Graefes Arch Clin Exp Ophthalmol.2018;256: 259–265. 10.1007/s00417-017-3850-3 [DOI] [PubMed] [Google Scholar]
  • 50.Yoo TK, Choi JY, Seo JG, Ramasubramanian B, Selvaperumal S, Kim DW. The possibility of the combination of OCT and fundus images for improving the diagnostic accuracy of deep learning for age-related macular degeneration: a preliminary experiment. Med Biol Eng Comput. 2018;57: 677–687. 10.1007/s11517-018-1915-z [DOI] [PubMed] [Google Scholar]
  • 51.Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, et al. ImageNet large scale visual recognition challenge. arXiv preprint arXiv 2014;1409.0575 [Google Scholar]
  • 52.Liu X, Faes L, Kale AU, Wagner SK, Fu DJ, Bruynseels A, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. The Lancet Digital Health. 2019;1: e271–e297. [DOI] [PubMed] [Google Scholar]
  • 53.Asakuma T, Yasuda M, Ninomiya T, Noda Y, Arakawa S, Hashimoto S, et al. Prevalence and risk factors for myopic retinopathy in a Japanese population: the Hisayama Study. Ophthalmology. 2012;119: 1760–1765. 10.1016/j.ophtha.2012.02.034 [DOI] [PubMed] [Google Scholar]
  • 54.Vongphanit J, Mitchell P, Wang JJ. Prevalence and progression of myopic retinopathy in an older population. Ophthalmology 2002:109: 704–711. 10.1016/s0161-6420(01)01024-7 [DOI] [PubMed] [Google Scholar]
  • 55.Liu HH, Xu L, Wang YX, Wang S, You QS, Jonas JB. Prevalence and progression of myopic retinopathy in Chinese adults: the Beijing Eye Study. Ophthalmology 2010;117: 1763–1768. 10.1016/j.ophtha.2010.01.020 [DOI] [PubMed] [Google Scholar]
  • 56.Mi L, Zuo C, Zhang X, Liu B, Peng Y, Wen F. Fluorescein Leakage within Recent Subretinal Hemorrhage in Pathologic Myopia: Suggestive of CNV? J Ophthalmol. 2018: 4707832 10.1155/2018/4707832 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Liu B, Zhang X, Mi L, Chen L, Wen F. Long-term natural outcomes of simple hemorrhage associated with lacquer crack in high myopia: A risk factor for Myopic CNV? J Ophthalmol. 2018:3150923 10.1155/2018/3150923 [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Demetrios G Vavvas

20 Jan 2020

PONE-D-19-34572

Accuracy of a deep convolutional neural network in the detection of myopic macular diseases using swept-source optical coherence tomography

PLOS ONE

Dear Dr. Nagasato,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The reviewer made several important comments that we would like to see addressed. I suggest that, in addition to the current cohort used in the DL analysis presented in this manuscript, you add a group of patients with comorbidities that mimics what is encountered in real life, see how well the algorithm fares, and present these data as well.

We would appreciate receiving your revised manuscript by Mar 05 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Demetrios G. Vavvas

Academic Editor

PLOS ONE

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please include in your financial disclosure statement the name of the funders of this study (as well as grant numbers if available). If your study was unfunded, please revise your financial disclosure statement to "The author(s) received no specific funding for this work."

3. Thank you for stating the following financial disclosure:

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

a) Please provide an amended Funding Statement that declares *all* the funding or sources of support received during this specific study (whether external or internal to your organization) as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now.  

b) Please state what role the funders took in the study.  If any authors received a salary from any of your funders, please state which authors and which funder. If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: 1. Financial Disclosure is missing: Although it is implied that this study was funded, as per the phrase 'the funders had no role in study design, decision to publish or interpretation of the manuscript', the funding source is not mentioned.

2. The use of heatmaps to show areas of SS-OCT that the neural network focused compared to where retina specialists focused is very interesting.

3. Although the results of this study seem promising for Deep Learning, significant limitations still apply, such as:

A. As mentioned in the manuscript, 'the original SS-OCT image size was 1,038 x 802 pixels but it was resized to 256 x 192 pixels to shorten the analysis time'

B. The exclusion of eyes with any other ocular comorbidities, including very common ones such as lens opacification, favours DL as opposed to Retina Specialists. The major limitation of this study is the design of DL in evaluating a single condition at a time, excluding eyes with coexisting retinal diseases. Improvement in this aspect is needed before DL could potentially be used in real-life clinical settings.

C. It would be interesting to train a DL network beyond the limited binary or ternary classification and compare its accuracy with that of human Ophthalmologists.

4. In the discussion, it is mentioned that 'in the present study, the DL model we used was able to obtain the same correct answer rate in a shorter time relative to Ophthalmologists using the same sample data', yet such comparisons of time are missing from both the methods and the results of the present study.

5. In the baseline characteristics, AL and sex may be unavoidable to a certain extent in this type of study, yet the groups could have been matched in terms of age. There are significant differences in age among the 4 groups of the present study (p < 0.001).

6. line 133: please correct 'itthe'

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Apr 16;15(4):e0227240. doi: 10.1371/journal.pone.0227240.r002

Author response to Decision Letter 0


4 Mar 2020

We appreciate the careful reviews and instructive suggestions from the reviewers. We have revised our manuscript following your suggestions. In the course of doing so, we discovered an incorrect word in Table 4. We have also added patients with comorbidities (mild cataract, epiretinal membrane, chorioretinal atrophy, macular hole, and so on) to the nHM and HM groups to mimic those encountered in real life, as you and the reviewer kindly pointed out, and the sentence "Some nHM and HM images included comorbidities (mild cataract, epiretinal membrane, macular hole, and so on)." has been added (page 7, lines 100-102). In addition, the sentences "Third, the DL performed in this study, though it showed high classification performance results, was limited to yielding only three types of classification. However, HM is often associated with various complications such as chorioretinal atrophy and macular hole retinal detachment. Therefore, it is necessary to train a network using a large dataset of HM fundus images with other complications." have been deleted (page 24, line 368). Furthermore, as you advised, we added the comorbidity images and re-analyzed the data; the result data have therefore changed slightly (all changes are highlighted).

We hope you will accept these changes. Our responses to the editor and reviewers are as follows:

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

Reply: Thank you for your suggestion. We confirmed that our revised manuscript meets PLOS ONE's style requirements.

2. Please include in your financial disclosure statement the name of the funders of this study (as well as grant numbers if available). If your study was unfunded, please revise your financial disclosure statement to "The author(s) received no specific funding for this work."

Reply: Thank you for your suggestion. The authors received no specific funding for this work. As you pointed out, the sentence "The authors received no specific funding for this work" has been added to the financial disclosure section of the cover letter.

3. Thank you for stating the following financial disclosure:

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

a) Please provide an amended Funding Statement that declares *all* the funding or sources of support received during this specific study (whether external or internal to your organization) as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now.

b) Please state what role the funders took in the study. If any authors received a salary from any of your funders, please state which authors and which funder. If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

Reply: Thank you for your helpful suggestion. The authors received no specific funding for this work. As you pointed out, the sentence "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript." has been added to the financial disclosure section of the cover letter.

Reviewers' comments:

Review Comments to the Author

Reviewer #1: 1. Financial Disclosure is missing: Although it is implied that this study was funded, as per the phrase 'the funders had no role in study design, decision to publish or interpretation of the manuscript', the funding source is not mentioned.

Reply: Thank you for your helpful suggestion. As you and our editor pointed out, the sentences "The authors received no specific funding for this work. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript." have been added to the financial disclosure section of the cover letter.

2. The use of heatmaps to show areas of SS-OCT that the neural network focused compared to where retina specialists focused is very interesting.

Reply: Thank you for your comment. We think heat maps are very helpful for understanding how the AI makes its diagnosis.

3. Although the results of this study seem promising for Deep Learning, significant limitations still apply, such as:

A. As mentioned in the manuscript, 'the original SS-OCT image size was 1,038 x 802 pixels but it was resized to 256 x 192 pixels to shorten the analysis time'

Reply: We agree with your suggestion; this is an objective limitation of our study. As you pointed out, the sentence "Third, to shorten the analysis time, the original SS-OCT image with 1,038 × 802 pixels was resized to 256 × 192 pixels." has been added (page 24, lines 368-369).

B. The exclusion of eyes with any other ocular comorbidities, including very common ones such as lens opacification, favours DL as opposed to Retina Specialists. The major limitation of this study is the design of DL in evaluating a single condition at a time, excluding eyes with coexisting retinal diseases. Improvement in this aspect is needed before DL could potentially be used in real-life clinical settings.

Reply: Thank you for your helpful suggestions. As you pointed out, we have added patients with comorbidities (mild cataract, epiretinal membrane, chorioretinal atrophy, macular hole, and so on) to the nHM and HM groups to mimic those encountered in real life, and we re-analyzed the data including the additional images. Thus, we rewrote Table 1, and the sentence "Some nHM and HM images included comorbidities (mild cataract, epiretinal membrane, chorioretinal atrophy, macular hole, and so on)." has been added (page 7, lines 100-102).

C. It would be interesting to train a DL network beyond the limited binary or ternary classification and compare its accuracy with that of human Ophthalmologists.

Reply: We are glad you are interested in our analysis method in this study.

4. In the discussion, it is mentioned that 'in the present study, the DL model we used was able to obtain the same correct answer rate in a shorter time relative to Ophthalmologists using the same sample data', yet such comparisons of time are missing from both the methods and the results of the present study.

Reply: Thank you for your suggestion. This was our mistake. In accordance with this comment, the phrase "in a shorter time" has been deleted (page 23, line 348).

5. In the baseline characteristics, AL and sex may be unavoidable to a certain extent in this type of study, yet the groups could have been matched in terms of age. There are significant differences in age among the 4 groups of the present study (p < 0.001).

Reply: Thank you for your helpful comment. We agree that age adjustment is necessary. However, the prevalence of retinoschisis and mCNV increases with age, and age could not be adjusted while securing a sufficient number of cases. If we can increase the number of target patients, for example by conducting joint research with other institutions, we believe we can adjust for age and obtain high-quality results. We will consider this in a future study.

6. line 133: please correct 'itthe'

Reply: Thank you for your comment. The word "itthe" has been changed to "it the" (page 9, line 134).

Attachment

Submitted filename: Response to Revierers.docx

Decision Letter 1

Demetrios G Vavvas

19 Mar 2020

PONE-D-19-34572R1

Accuracy of a deep convolutional neural network in the detection of myopic macular diseases using swept-source optical coherence tomography

PLOS ONE

Dear Dr. Nagasato,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please adjust your limitations section appropriately. We look forward to the revised version.

We would appreciate receiving your revised manuscript by May 03 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Demetrios G. Vavvas

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: One additional comment:

In Machine Learning and Deep Learning, a major concern and current limitation of AI algorithms created via training sets obtained using only one imaging device is the extent of generalizability to other commercially available similar imaging devices. In this study, the Topcon DRI OCT-1 was used for all OCT images. The authors should state in their limitations section that the AI algorithms created and tested herein might not be generalizable to other commercially available similar imaging devices; the extent of generalizability across different devices is yet to be investigated.

Lines 311-314: the respective reference needs to be cited in the manuscript text.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Apr 16;15(4):e0227240. doi: 10.1371/journal.pone.0227240.r004

Author response to Decision Letter 1


23 Mar 2020

Editor,

PLoS ONE

March 22, 2020

Re: PONE-D-19-34572R1 entitled "Accuracy of a deep convolutional neural network in the detection of myopic macular diseases using swept-source optical coherence tomography".

To the Editor,

Thank you for your letter of March 20th, 2020 and for sending us the referee’s comments on our manuscript. We are returning the manuscript revised according to the comments.

We have studied the comments carefully and have made corrections which we hope meet with your approval. In the revised manuscript, all changes have been highlighted in yellow. Each of the coauthors has seen and agreed with each of the changes made to this manuscript in the revision.

Attachment

Submitted filename: Response to Revierer.docx

Decision Letter 2

Demetrios G Vavvas

30 Mar 2020

Accuracy of a deep convolutional neural network in the detection of myopic macular diseases using swept-source optical coherence tomography

PONE-D-19-34572R2

Dear Dr. Nagasato,

We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.

Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.

Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

With kind regards,

Demetrios G. Vavvas

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Demetrios G Vavvas

2 Apr 2020

PONE-D-19-34572R2

Accuracy of a deep convolutional neural network in the detection of myopic macular diseases using swept-source optical coherence tomography

Dear Dr. Nagasato:

I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

For any other questions or concerns, please email plosone@plos.org.

Thank you for submitting your work to PLOS ONE.

With kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Demetrios G. Vavvas

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File

    (ZIP)

    Attachment

    Submitted filename: Response to Revierers.docx

    Attachment

    Submitted filename: Response to Revierer.docx

    Data Availability Statement

    All relevant data are within the paper and its Supporting Information files.
