PLoS One. 2021 Mar 31;16(3):e0248526. doi: 10.1371/journal.pone.0248526

Automated system for diagnosing endometrial cancer by adopting deep-learning technology in hysteroscopy

Yu Takahashi 1, Kenbun Sone 1,*, Katsuhiko Noda 2, Kaname Yoshida 2, Yusuke Toyohara 1, Kosuke Kato 1, Futaba Inoue 1, Asako Kukita 1, Ayumi Taguchi 1, Haruka Nishida 1, Yuichiro Miyamoto 1, Michihiro Tanikawa 1, Tetsushi Tsuruga 1, Takayuki Iriyama 1, Kazunori Nagasaka 3, Yoko Matsumoto 1, Yasushi Hirota 1, Osamu Hiraike-Wada 1, Katsutoshi Oda 4, Masanori Maruyama 5, Yutaka Osuga 1, Tomoyuki Fujii 1
Editor: Tao Song
PMCID: PMC8011803  PMID: 33788887

Abstract

Endometrial cancer is a common gynecological disease whose global incidence is increasing. Because no established screening technique exists to date, early diagnosis of endometrial cancer is critically important. This paper presents an artificial-intelligence-based system that automatically detects regions affected by endometrial cancer in hysteroscopic images. In this study, 177 patients (60 with normal endometrium, 21 with uterine myoma, 60 with endometrial polyp, 15 with atypical endometrial hyperplasia, and 21 with endometrial cancer) with a history of hysteroscopy were recruited. Machine-learning techniques based on three popular deep neural network models were employed, and a continuity-analysis method was developed to enhance the accuracy of cancer diagnosis. Finally, we investigated whether the accuracy could be improved by combining all the trained models. The results reveal that the diagnostic accuracy was approximately 80% (78.91–80.93%) with the standard method, increased to 89% (83.94–89.13%) with the proposed continuity analysis, and exceeded 90% (90.29%) when the three neural networks were combined. The corresponding sensitivity and specificity were 91.66% and 89.36%, respectively. These findings suggest that the proposed method can facilitate timely diagnosis of endometrial cancer in the near future.

Introduction

Endometrial cancer is the most common gynecologic malignancy, and its incidence has increased significantly in recent years [1]. Patients diagnosed early or with low-risk endometrial cancer generally have a favorable prognosis, whereas patients diagnosed at later stages have few treatment options and a poor prognosis [2]. Additionally, patients with conditions such as atypical endometrial hyperplasia (AEH), a precancerous condition of endometrial cancer, or stage IA endometrial cancer without myometrial invasion are eligible for progestin therapy and may therefore be able to preserve their fertility [3]. Early diagnosis of endometrial cancer is thus of paramount importance. Cervical cytology via Pap smear is a common screening method for cervical cancer [4]. However, endometrial cytology is not a reliable screening technique because the sampling is performed blindly, which may lead to a large number of false negatives. Although the standard diagnostic procedure for endometrial cancer is endometrial biopsy performed via dilation and curettage, no clinically established screening for endometrial cancer exists to date [5]. Hysteroscopy is generally considered the standard procedure for examining endometrial lesions by directly visualizing the uterine cavity, and recent studies suggest that it can be an effective technique for accurate endometrial-cancer diagnosis [6,7]. We have previously reported the usefulness of biopsy through office hysteroscopy for endometrial cancer [8].

Artificial intelligence (AI) enables computers to perform intellectual actions, such as language understanding, reasoning, and problem solving, on behalf of humans. Machine learning is an approach for developing AI models based on the scientific study of the algorithms and statistical models that computer systems use to perform tasks efficiently; an appropriate AI model enables computers to learn patterns in available datasets and make inferences from given data without explicit instructions [9]. The deep neural network (DNN) realizes deep learning, a machine-learning method that uses multiple layers of neural networks [10–12]; from the machine-learning perspective, a neural network comprises a network or circuit of artificial neurons or nodes [13]. Deep learning has garnered much interest in the medical field because deep-learning techniques are particularly suitable for image analysis: they are used for classification, image-quality improvement, and segmentation of medical images, whereas shallow machine learning is not well suited for image recognition [14]. Recently, several systems developed for medical applications, such as image-based diagnosis and radiographic imaging of breast and lung cancers [15,16], have adopted AI models based on DNN technology. Numerous systems employing endoscopic images in the diagnosis of gastric and colon cancer have been reported, but no such system has been developed with a specific focus on endometrial cancer [17,18]. In general, a voluminous amount of data is required to train a highly accurate model, which is possible only when a large number of participants is available. However, when deep learning is applied to the medical field, some diseases must be analyzed with a small number of samples. Therefore, a challenge for medical AI research is to develop an analysis method that improves accuracy with a small number of samples.

This study therefore aims to develop a DNN-based automated endometrial-cancer diagnosis system applicable to hysteroscopy. Hysteroscopy has not yet found widespread use in diagnosing endometrial cancer, which further limits the availability of training data for DNNs. Thus, our objective is to develop a method that facilitates high-accuracy endometrial-cancer diagnosis despite the limited number of cases available for training, and to establish a system that can be scaled to large studies in the future. Because no standard method has been established for such scenarios to date, this study focuses on determining an optimum method.

In this study, we achieved high accuracy in diagnosing endometrial cancer by hysteroscopy using deep learning, despite the small sample size.

Materials and methods

Dataset overview

The data utilized in this study were extracted from videos of the uterine lumen captured using a hysteroscope. The breakdown of the extracted data is presented in Table 1 and Fig 1. The shortest video lasted 10.5 s, whereas the longest lasted 395.3 s; the corresponding mean and median durations were 77.5 s and 63.5 s, respectively. Because the videos were captured using different hysteroscopic systems with no consistency in resolution or image position, only parts of the captured images were extracted, with the resolution reduced to 256 × 256 px for Xception [19] and 224 × 224 px for MobileNetV2 [20] and EfficientNetB0 [21]. Representative hysteroscopic images of each condition are depicted in Fig 1. These hysteroscopic data were collected from the 177 patients recruited in this study. All patients had a history of hysteroscopy, and they were categorized into five groups: normal endometrium (60), uterine myoma (21), endometrial polyp (60), AEH (15), and endometrial cancer (21) (S1 Table). Data collection was performed at the University of Tokyo Hospital between 2011 and 2019 after obtaining prior patient consent and approval from the Research Ethics Committee at the University of Tokyo (approval no. 3084-(3) and 2019127NI-(1)).
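To make this preprocessing step concrete, the sketch below extracts frames from a hysteroscopy video and resizes them to the input size of each network. It is a minimal illustration assuming OpenCV (cv2) is available; the file name and the center-crop strategy are hypothetical placeholders, not the authors' exact extraction procedure.

```python
# Minimal preprocessing sketch (assumption: OpenCV/cv2 available).
# The video path and center-crop strategy are illustrative placeholders,
# not the authors' exact extraction procedure.
import cv2

def extract_frames(video_path, size):
    """Yield square frames resized to (size, size) from a hysteroscopy video."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        side = min(h, w)                      # crop the largest centered square
        y0, x0 = (h - side) // 2, (w - side) // 2
        crop = frame[y0:y0 + side, x0:x0 + side]
        yield cv2.resize(crop, (size, size))  # 256 for Xception, 224 otherwise
    cap.release()

frames_256 = list(extract_frames("case_001.mp4", 256))  # hypothetical filename
```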

Table 1. Images extracted from hysteroscopy videos per disease category.

Clinical diagnosis                  Still images, n (%)   Videos, n (%)
Normal                              113,357 (27.5%)       60 (33.8%)
Polyp                               143,449 (34.8%)       60 (33.8%)
Myoma                               45,037 (11.0%)        21 (11.8%)
Atypical endometrial hyperplasia    42,146 (10.2%)        15 (8.4%)
Endometrial cancer                  67,811 (16.4%)        21 (11.8%)
Total                               411,800               177

Fig 1.


Representative images of detected lesions: (A) normal endometrium, (B) endometrial polyp, (C) myoma, (D) AEH, and (E) endometrial cancer.

Consent was obtained via an opt-out procedure. The patients were those who had visited the outpatient department with symptoms such as abnormal bleeding or menorrhagia and required hysteroscopic diagnosis of an intrauterine lesion. AEH and endometrial cancer were diagnosed pathologically by biopsy or surgery. Normal endometrium, uterine myoma, and endometrial polyp were diagnosed based on endometrial cytology, histology, hysteroscopic findings by a gynecologist, imaging findings such as MRI and ultrasound, and the clinical course.

Training and evaluation data

The prepared videos were randomly divided into four groups: three groups were used for training, and the remaining group was used for evaluation. The four groups were denoted pair-A, pair-B, pair-C, and pair-D and used for cross validation. S2 Table presents the number of training and evaluation videos for each pair. The accuracy of the trained models was evaluated on both image and video units. Owing to the limited number of cases available for this study, we defined two classes for training and prediction: "Malignant" and "Others". The "Malignant" class included AEH and cancer, whereas the "Others" class included uterine myoma, endometrial polyp, and normal endometrium. As listed in S3 Table, the "Malignant" class comprised 36 videos and 109,957 images, whereas the "Others" class comprised 141 videos and 301,843 images. The overall architecture of the model developed in this project is depicted in Fig 2.
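As a minimal sketch of this protocol, the following code randomly splits videos into four groups for cross validation and derives the binary labels. The record format and variable names are illustrative assumptions, not the authors' actual code.

```python
# Sketch of the video-level 4-fold split and binary relabeling
# described above. Record contents and the seed are illustrative.
import random

MALIGNANT = {"cancer", "AEH"}  # the "Malignant" class; everything else is "Others"

def make_pairs(videos, n_folds=4, seed=0):
    """Randomly split videos into n_folds groups; each fold in turn serves
    as the evaluation set while the remaining folds form the training set."""
    vids = videos[:]
    random.Random(seed).shuffle(vids)
    folds = [vids[i::n_folds] for i in range(n_folds)]
    pairs = []
    for i, eval_fold in enumerate(folds):     # pair-A .. pair-D
        train = [v for j, f in enumerate(folds) if j != i for v in f]
        pairs.append((train, eval_fold))
    return pairs

def label(diagnosis):
    return "Malignant" if diagnosis in MALIGNANT else "Others"
```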

Fig 2. Overall architecture of the model developed in this project.


Training data

The training data pertaining to the malignant class were distributed into the following two sets (Fig 3A).

Fig 3.


(A) Schematic of the training method: the training data pertaining to the malignant class were separated into two sets, Set X and Set Y. (B) Schematic of the evaluation method: image by image. During image-by-image evaluation, 100 images that clearly included the lesion site were extracted from the hysteroscopic video of each patient diagnosed with a malignant tumor. (C) Schematic of the evaluation method: video unit (continuity analysis).

Set X: comprising all frames included in the video stream.

Set Y: comprising the images from Set X excluding those captured outside the uterine cavity, such as cervical and extrauterine views.

The number of frames within each set is listed in S3 Table.

Evaluation methods

In this study, the accuracy of the trained models was evaluated in two ways: image-by-image evaluation and video-unit evaluation. For image-by-image evaluation, 100 images that clearly included the lesion site were extracted from the hysteroscopic video of each patient diagnosed with a malignant tumor (Fig 3B); for patients with benign lesions or a normal endometrium, all frames were used. For video-unit evaluation, the judgment depended on the number of consecutive frames classified as "Malignant" in a given video stream (continuity analysis; Fig 3C). The threshold was set to 50 consecutive frames based on the results of a pre-study, as described in Fig 4A. The threshold was taken from the point where the malignant-accuracy curve intersects the other curves rather than the point where the average of the two was best, because the threshold should be set lower to reduce overlooked cases in actual clinical devices (Fig 4A).
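The video-unit decision rule is straightforward to express in code. The sketch below assumes per-frame class labels are already available in temporal order and implements the 50-consecutive-frame threshold described above.

```python
# Continuity analysis: a video is called "Malignant" when at least
# `threshold` consecutive frames are predicted malignant.
def classify_video(frame_predictions, threshold=50):
    """frame_predictions: per-frame labels in temporal order,
    e.g. ["Others", "Malignant", ...]."""
    run = 0
    for pred in frame_predictions:
        run = run + 1 if pred == "Malignant" else 0
        if run >= threshold:
            return "Malignant"
    return "Others"
```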

Fig 4.


(A) Accuracy of malignant and benign diagnoses as a function of the threshold value used in continuity analysis. (B) Comparison of the learning times required by the three neural networks. The physical time depends on the computer specifications and image size; however, the ratio of the learning times is independent of such conditions. (C) Average accuracy values obtained via image-by-image predictions, grouped by dataset and network type. (D) Average accuracy values obtained via video-unit predictions, grouped by dataset and network type.

Neural network types

As already stated, three different neural networks (Xception, MobileNetV2, and EfficientNetB0) were adopted in this study to classify the images extracted from the video streams. These networks can achieve relatively high accuracy with smaller datasets and lower training costs. We built the models using Keras implemented on TensorFlow and trained them on an Intel Core i7-9700 CPU with an NVIDIA GTX 1080 Ti GPU. The number of parameters in each network is shown in S4 Table, and the time required to learn 3,000,000 images is shown in Fig 4B.

The network structure of Xception is shown in S7 Table. The distinguishing feature of Xception is that it replaces the Inception modules of a conventional convolutional network with depthwise separable convolutions wherever possible; a depthwise separable convolution splits a standard convolution into two stages, a depthwise convolution followed by a pointwise convolution [19]. The network structure of MobileNetV2 is shown in S6 Table. The distinguishing feature of MobileNetV2 is its use of inverted residual blocks throughout almost every layer of the network to reduce the total number of parameters [20]. The network structure of EfficientNetB0 is shown in S5 Table. The distinguishing feature of EfficientNet is a compound coefficient that jointly scales the depth, width, and resolution of a convolutional network, based on how each affects model performance [21].
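For concreteness, all three backbones can be instantiated directly from tf.keras.applications. The sketch below sets up each network for the two-class problem; the optimizer, loss, and random (non-pretrained) initialization are assumptions, as the paper does not specify these hyperparameters.

```python
# Sketch: instantiating the three networks in Keras/TensorFlow for the
# two-class ("Malignant" vs. "Others") problem. The optimizer, loss, and
# random initialization (weights=None) are assumptions; the paper does
# not specify these details.
import tensorflow as tf

INPUT_SHAPES = {"Xception": (256, 256, 3),        # per the preprocessing above
                "MobileNetV2": (224, 224, 3),
                "EfficientNetB0": (224, 224, 3)}

def build(name):
    ctor = {"Xception": tf.keras.applications.Xception,
            "MobileNetV2": tf.keras.applications.MobileNetV2,
            "EfficientNetB0": tf.keras.applications.EfficientNetB0}[name]
    net = ctor(weights=None, input_shape=INPUT_SHAPES[name], classes=2)
    net.compile(optimizer="adam",
                loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
    return net
```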

Model generation—execution of training

Owing to the stochastic nature of neural-network training (e.g., random weight initialization), even when the same type of network is trained on the same dataset, each run yields a model with a different accuracy. Therefore, we trained the three DNN types six times each on the two datasets (Set X and Set Y), grouped into the four training and evaluation pairs A, B, C, and D. Thus, 144 (3 × 6 × 2 × 4) trained models were acquired.
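The resulting grid of 144 models can be enumerated with a simple loop, sketched below using the build helper from the previous sketch; train_on is a hypothetical placeholder standing in for the actual model.fit call on the corresponding training images.

```python
# Enumerating the 144 trained models: 3 network types x 6 repetitions
# x 2 datasets x 4 cross-validation pairs.
NETWORKS = ["Xception", "MobileNetV2", "EfficientNetB0"]
DATASETS = ["SetX", "SetY"]
PAIRS = ["A", "B", "C", "D"]

def train_on(model, dataset, pair):
    """Hypothetical placeholder: in practice this would call model.fit(...)
    on the frame images belonging to the training side of (dataset, pair)."""

models = {}
for net_name in NETWORKS:
    for rep in range(6):
        for dataset in DATASETS:
            for pair in PAIRS:
                model = build(net_name)  # from the sketch above
                train_on(model, dataset, pair)
                models[(net_name, rep, dataset, pair)] = model

assert len(models) == 144  # 3 x 6 x 2 x 4
```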

Results

Results of image-by-image evaluation

In this study, we first applied each of the 144 models described above to each individual image and evaluated the accuracy of the predictions. We then calculated the average accuracy values, grouping the results by dataset and by neural network type. Comparisons of the average prediction accuracies for each dataset and network type are presented in Figs 4C and S1A and S8 Table. The difference between the average accuracies obtained for datasets X and Y (0.7891 and 0.8093, respectively) was 0.0201, whereas the spread across network types was 0.0047 (minimum 0.7969, maximum 0.8016) (S8 Table). MobileNetV2 required the shortest learning time, whereas Xception required the longest, approximately three times that of MobileNetV2, as described in Fig 4B.

Results of video-unit-based evaluation: Continuity analysis

As already stated, we developed the continuity-analysis method for hysteroscopy to increase the diagnostic accuracy of video-unit evaluation. As described in the Materials and methods section, a hysteroscopy video was judged to represent a malignant tumor when 50 or more consecutive extracted frames were classified as "Malignant." Comparisons of the average prediction accuracies for each dataset and network type are presented in Figs 4D and S1B and S9 Table. The difference between the average accuracies obtained for datasets X and Y (0.8394 and 0.8913, respectively) was 0.0512, whereas the spread across network types was 0.0052 (minimum 0.8622, maximum 0.8675) (Figs 4D and S1B and S9 Table).

Evaluation of accuracy improvements realized by combining multiple models

Finally, we evaluated the improvement in diagnostic accuracy achievable by combining multiple DNN models. The evaluation was performed using the 72 models (6 iterations × 4 data pairs × 3 model types) trained on Set Y, and the video-unit continuity-analysis method was used owing to its superior performance relative to image-by-image evaluation. The results (Fig 5 and Table 2) revealed that the combination of 72 models classified cancer and AEH cases into the malignant group with accuracies of 0.8571 and 1.000, respectively. Likewise, the diagnostic accuracies for myoma, endometrial polyp, and normal endometrium were 0.8571, 0.8500, and 0.9500, respectively. The overall average accuracy was 0.9029, with corresponding sensitivity and specificity values of 91.66% (95% confidence interval (CI) = 77.53–98.24%) and 89.36% (95% CI = 83.06–93.92%), respectively (Table 2). In addition, the F-score was 0.7857 (Table 2). These results confirm that combining the prediction models yields superior diagnostic accuracy compared to using them individually.
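The paper does not state the exact rule used to combine the 72 models, so the sketch below illustrates one plausible scheme as an assumption: each model issues a video-level call via the continuity analysis (classify_video from the earlier sketch), and the ensemble takes a majority vote.

```python
# One plausible combination rule (an assumption, not the authors' stated
# method): every trained model classifies the video via continuity
# analysis, and the ensemble takes a majority vote across the 72 calls.
def ensemble_classify(per_model_frame_preds, threshold=50):
    """per_model_frame_preds: one list of per-frame labels per model,
    e.g. 72 lists for the Set Y ensemble. Uses classify_video from the
    continuity-analysis sketch above."""
    votes = sum(classify_video(preds, threshold) == "Malignant"
                for preds in per_model_frame_preds)
    return "Malignant" if votes * 2 > len(per_model_frame_preds) else "Others"
```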

Fig 5. Average diagnostic accuracies for different conditions obtained using combination of 72 trained deep neural network models.


Table 2. Diagnosis results obtained using combination of 72 trained deep neural network models.

Truth                 Predicted "Malignant"   Predicted "Others"   Total   Correct   Accuracy
Cancer (Malignant)    18                      3                    21      18        0.8571
AEH (Malignant)       15                      0                    15      15        1.0000
Myoma (Others)        3                       18                   21      18        0.8571
Polyp (Others)        9                       51                   60      51        0.8500
Normal (Others)       3                       57                   60      57        0.9500
Total                 48                      129                  177     159       0.8983
Correct               33                      126
Precision             0.6875                  0.9767

Sensitivity = 0.9167; specificity = 0.894; F-score = 0.7857; average per-class accuracy = 0.9029.

AEH: Atypical endometrial hyperplasia.

Discussion

In this study, we aimed to develop a DNN-based automated system to detect the presence of endometrial tumors in hysteroscopic images. As observed in this study, an average diagnostic accuracy exceeding 90% was realized when using the combination of 72 trained DNN models. Overall, we were able to realize a relatively high diagnostic accuracy, despite the consideration of only a limited number of endometrial cancer and AEH cases.

As described in the Introduction section, several deep-learning models for image-recognition applications have been developed in recent years, and their utilization in medical applications has been thoroughly investigated. For example, Esteva et al. [22] developed a deep-learning algorithm trained on a dataset comprising more than 129,000 images of over 2,000 different skin diseases and evaluated whether their classification system could distinguish skin-cancer cases from benign skin diseases; the system demonstrated diagnostic performance on par with that of a group of clinical specialists [22]. Automated systems that diagnose disease by applying deep-learning models to endoscopic images, such as those captured by gastrointestinal endoscopes and cystoscopes, have also been developed in recent years [17,18]. Colorectal neoplastic polyps, the precancerous lesions of colorectal cancer, can typically be diagnosed by an endoscopist with the naked eye; however, they can remain undetected when they are very small or have shapes that make them difficult to identify. Yamada et al. [18] developed a convolutional-neural-network-based deep-learning model applied to endoscopic images from approximately 5,000 cases; their analysis yielded a polyp and precancerous-lesion detection rate of 98%.

In general, applying deep-learning techniques to image-recognition problems requires 100,000–1,000,000 images to constitute a viable training dataset. However, as described earlier, in the medical field it can be difficult to obtain such a large number of samples, depending on the disease and circumstances. Because hysteroscopic diagnosis of cancer is not a common procedure, it is currently difficult to obtain a large number of samples from a single institution. Therefore, a major focus of recent medical AI research is achieving a high accuracy rate with a small sample size, and several reports address this. For example, Sakai et al. [23] extracted small regions from a small number of endoscopic images obtained during the early stages of gastric cancer and used data-expansion (augmentation) technology to increase the number of images to approximately 360,000; applying a convolutional neural network to this image dataset yielded positive and negative predictive values of 93.4% and 83.6%, respectively [23]. A major limitation of our dataset is that the video streams contained a significant number of frames that did not capture the lesions to be identified. Therefore, in Set Y we deleted all extracted frames that did not capture lesions. However, even frames that do not depict lesions might include malignant-tumor-specific features, such as cloudy uterine luminal fluid; moreover, even when the degree of cloudiness is too slight to be recognized by the naked eye, it may be recognized accurately by a computer. We therefore divided the learning data into two datasets, Set X and Set Y. As described in the Results section, Set Y yielded higher diagnostic accuracy than Set X, suggesting that accuracy can be improved by exclusively analyzing the lesion sites rather than all extracted images. Moreover, given the limited use of hysteroscopy in medical practice and the large number of training cases normally needed to leverage existing deep-learning models for medical-image analysis, we developed a continuity-analysis method based on a combination of neural networks; the proposed method achieves high diagnostic accuracy despite the limited training dataset.
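The "data expansion" mentioned above is typically implemented with simple geometric and photometric transforms. Below is a minimal sketch using TensorFlow's tf.image utilities; this tooling choice is an assumption for illustration, not the implementation used in the cited study.

```python
# Minimal data-augmentation sketch (assumption: tf.image; not the
# implementation used in the cited study). Each source frame yields
# several randomly perturbed variants to enlarge the training set.
import tensorflow as tf

def augment(image, n_variants=4):
    """image: a [H, W, 3] float tensor. Returns randomly perturbed copies."""
    variants = []
    for _ in range(n_variants):
        x = tf.image.random_flip_left_right(image)
        x = tf.image.random_brightness(x, max_delta=0.1)
        x = tf.image.random_contrast(x, lower=0.9, upper=1.1)
        variants.append(x)
    return variants
```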

It is noteworthy that accuracies of 90% or more were obtained with such a small sample size; the proposed system is our original idea and is the most significant aspect of this research. The method can also be applied to other types of medical images with few samples, not only hysteroscopic images. While gastrointestinal endoscopy is commonly used in the diagnosis of gastric and colorectal cancers, hysteroscopy is seldom used in the diagnosis of endometrial cancer. However, our previous study [8] demonstrated the usefulness of hysteroscopy for diagnosing endometrial cancer. Therefore, if a hysteroscopy-based automated system employing deep-learning models is established for clinical diagnosis of endometrial cancer, an increase in the use of hysteroscopes can also be expected.

As already mentioned, early diagnosis of endometrial cancer can help patients retain their fertility, and it may even eliminate the need for postoperative adjuvant therapy involving anticancer drugs or radiation [1,3,24]. The diagnostic system presented in this paper has the potential to become an effective tool for accurate diagnosis of endometrial cancer. In the future, a large-scale study will be conducted using the algorithm established here; the current study is thus a pilot to determine whether such large-scale research is feasible. Notably, full implementation of the proposed system is necessary to raise the positive and negative predictive values toward 100%. To facilitate high-accuracy diagnosis, it is necessary to (1) use a large number of images and annotate all existing and new images and (2) develop a high-accuracy engine. Another limitation of this study is that, although the combined model achieved high diagnostic accuracy, its size is large from the perspective of medical-device development. Thus, a more compact system must be developed to accommodate a large number of cases. However, as mentioned above, it is difficult to significantly increase the number of hysteroscopic images at a single facility; in future work, we aim to increase the number of samples by using this system in a multi-facility research collaboration.

To the best of our knowledge, this study represents the first attempt to diagnose endometrial cancer using a combination of deep learning and hysteroscopy. Although two studies [25,26] combining hysteroscopy and deep learning have been reported, they concern uterine myomas and in vitro fertilization, respectively, and neither addresses endometrial-cancer diagnosis.

As described in the Materials and methods section, three neural networks (Xception [19], MobileNetV2 [20], and EfficientNetB0 [21]) were used to classify the frame images extracted from the video samples. These networks were selected because they are computationally inexpensive yet highly accurate, thereby facilitating real-time diagnosis at low manufacturing cost. It is therefore important to clarify the relationship between execution speed and network accuracy, and, from the viewpoint of future deep-learning-based medical devices, to compare real-time and post-hysteroscopy analyses. Additionally, we examined the images for which the deep-learning algorithms could not make an accurate diagnosis and identified two common features: (1) flatness of the tumor and (2) difficulty in identifying the tumor due to excessive bleeding. These issues may be resolved by increasing the number of images in the training dataset. Tumor size, by contrast, was not found to be a cause of error. In future work with a large number of cases, subgroup analyses according to patient age, disease stage, histology, and other factors will be necessary, as will comparison with hysteroscopic specialists.

Conclusion

A challenge in medical AI research is to develop an analysis method that improves accuracy with a small number of samples. Notably, high diagnostic accuracy for endometrial cancer was obtained in this study despite the small sample size, and we believe that the capability of the basic system has been established. The accuracy of conventional diagnostic techniques, such as pathological diagnosis by curettage and cytology, is low, and no screening method for endometrial cancer has been established. In the future, multi-institutional joint research should be conducted to develop this system further; if properly developed, it could be utilized for endometrial-cancer screening.

Supporting information

S1 Fig

(A) Diagnostic accuracy realized when applying the neural networks on individual datasets. Image-classification accuracy was compared using dataset–neural-network combination. (B) Diagnostic accuracy realized when employing proposed continuity analysis using dataset–neural-network combination.

(TIF)

S1 Table. Stages and histological types of endometrial cancer identified in patients recruited in this study.

(DOCX)

S2 Table. Training and evaluation data in this study.

(DOCX)

S3 Table. Datasets used in this study.

(DOCX)

S4 Table. Number of parameters of each network.

(DOCX)

S5 Table. Network structure of EfficientNet B0.

(DOCX)

S6 Table. Network structure of MobileNet V2.

(DOCX)

S7 Table. Network structure of Xception.

(DOCX)

S8 Table. Average accuracies obtained through image-by-image-based predictions grouped in terms of dataset and network types.

(DOCX)

S9 Table. Average accuracies obtained through video-unit-based predictions grouped in terms of dataset and network types.

(DOCX)

Acknowledgments

The authors thank Editage for English language editing (https://www.editage.com/).

Data Availability

All relevant data are within the paper and its Supporting Information files.

Funding Statement

This work was financially supported by Japanese Foundation for Research and Promotion of Endoscopy. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Anderson AS, Key TJ, Norat T, Scoccianti C, Cecchini M, Berrino F, et al. European code against cancer 4th edition: Obesity, body fatness and cancer. Cancer Epidemiol. 2015;39: S34–45. doi:10.1016/j.canep.2015.01.017
2. Lachance JA, Darus CJ, Rice LW. Surgical management and postoperative treatment of endometrial carcinoma. Rev Obstet Gynecol. 2008;1(3): 97–105.
3. Harrison RF, He W, Fu S, Zhao H, Sun CC, Suidan RS, et al. National patterns of care and fertility outcomes for reproductive-aged women with endometrial cancer or atypical hyperplasia. Am J Obstet Gynecol. 2019;221(5): 474.e1–474.e11. doi:10.1016/j.ajog.2019.05.029
4. Meggiolaro A, Unim B, Semyonov L, Miccoli S, Maffongelli E, La Torre G. The role of pap test screening against cervical cancer: A systematic review and meta-analysis. Clin Ter. 2016;167(4): 124–139. doi:10.7417/CT.2016.1942
5. Yanoh K, Norimatsu Y, Hirai Y, Takeshima N, Kamimori A, Nakamura Y, et al. New diagnostic reporting format for endometrial cytology based on cytoarchitectural criteria. Cytopathology. 2009;20(6): 388–394. doi:10.1111/j.1365-2303.2008.00581.x
6. Yang B, Xu Y, Zhu Q, Xie L, Shan W, Ning C, et al. Treatment efficiency of comprehensive hysteroscopic evaluation and lesion resection combined with progestin therapy in young women with endometrial atypical hyperplasia and endometrial cancer. Gynecol Oncol. 2019;153: 55–62. doi:10.1016/j.ygyno.2019.01.014
7. Trojano G, Damiani GR, Casavola VC, Loiacono R, Malvasi A, Pellegrino A, et al. The role of hysteroscopy in evaluating postmenopausal asymptomatic women with thickened endometrium. Gynecol Minim Invasive Ther. 2018;7(1): 6–9. doi:10.4103/GMIT.GMIT_10_17
8. Sone K, Eguchi S, Asada K, Inoue F, Miyamoto Y, Tanikawa M, et al. Usefulness of biopsy by office hysteroscopy for endometrial cancer: A case report. Mol Clin Oncol. 2020;13: 141–145. doi:10.3892/mco.2020.2053
9. McCarthy JF, Marx KA, Hoffman PE, Gee AG, O'Neil P, Ujwal ML, et al. Applications of machine learning and high-dimensional visualization in cancer detection, diagnosis, and management. Ann N Y Acad Sci. 2004;1020: 239–262. doi:10.1196/annals.1310.020
10. Hinton GE, Osindero S, Teh YW. A fast learning algorithm for deep belief nets. Neural Comput. 2006;18: 1527–1554. doi:10.1162/neco.2006.18.7.1527
11. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521: 436–444. doi:10.1038/nature14539
12. He K, Zhang X, Ren S, Sun J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. IEEE Int Conf Comput Vis. 2015; 1026–1034. doi:10.1109/ICCV.2015.123
13. Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci U S A. 1982;79(8): 2554–2558. doi:10.1073/pnas.79.8.2554
14. Hamamoto R, Suvarna K, Yamada M, Kobayashi K, Shinkai N, et al. Application of artificial intelligence technology in oncology: Towards the establishment of precision medicine. Cancers (Basel). 2020;12(12): 3532. doi:10.3390/cancers12123532
15. Yala A, Lehman C, Schuster T, Portnoi T, Barzilay R. A deep learning mammography-based model for improved breast cancer risk prediction. Radiology. 2019;292(1): 60–66. doi:10.1148/radiol.2019182716
16. Zhao W, Yang J, Sun Y, Li C, Wu W, Jin L, et al. 3D deep learning from CT scans predicts tumor invasiveness of subcentimeter pulmonary adenocarcinomas. Cancer Res. 2018;78(24): 6881–6889. doi:10.1158/0008-5472.CAN-18-0696
17. Hirasawa T, Aoyama K, Tanimoto T, Ishihara S, Shichijo S, Ozawa T, et al. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer. 2018;21(4): 653–660. doi:10.1007/s10120-018-0793-2
18. Yamada M, Saito Y, Imaoka H, Saiko M, Yamada S, Kondo H, et al. Development of a real-time endoscopic image diagnosis support system using deep learning technology in colonoscopy. Sci Rep. 2019;9(1): 1–9. doi:10.1038/s41598-018-37186-2
19. Chollet F. Xception: Deep learning with depthwise separable convolutions. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. doi:10.1109/CVPR.2017.195
20. Sandler M, Howard A, Zhu M, Zhmoginov A, Chen L. MobileNetV2: Inverted residuals and linear bottlenecks. 2018; arXiv:1801.04381. https://arxiv.org/abs/1801.04381
21. Tan M, Le QV. EfficientNet: Rethinking model scaling for convolutional neural networks. 2019; arXiv:1905.11946v3. https://arxiv.org/abs/1905.11946
22. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639): 115–118. doi:10.1038/nature21056
23. Sakai Y, Takemoto S, Hori K, Nishimura M, Ikematsu H, Yano T, et al. Automatic detection of early gastric cancer in endoscopic images using a transferring convolutional neural network. Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018. doi:10.1109/EMBC.2018.8513274
24. Taylan E, Oktay K. Fertility preservation in gynecologic cancers. Gynecol Oncol. 2019;155(3): 522–529. doi:10.1016/j.ygyno.2019.09.012
25. Török P, Harangi B. Digital image analysis with fully connected convolutional neural network to facilitate hysteroscopic fibroid resection. Gynecol Obstet Invest. 2018;83(6): 615–619. doi:10.1159/000490563
26. Burai P, Hajdu A, Manuel FE, Harangi B. Segmentation of the uterine wall by an ensemble of fully convolutional neural networks. Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; 49–52. doi:10.1109/EMBC.2018.8512245

Decision Letter 0

Tao Song

7 Dec 2020

PONE-D-20-34280

Automated system for diagnosing endometrial cancer by adopting deep-learning technology in hysteroscopy

PLOS ONE

Dear Dr. Sone,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jan 21 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Tao Song

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following in the Competing Interests section:

"Kenbun Sone has a joint research agreement with Predicthy LLC. The other authors have no competing interests to disclose"

We note that one or more of the authors are employed by a commercial company: Predicthy LLC.

2.1. Please provide an amended Funding Statement declaring this commercial affiliation, as well as a statement regarding the Role of Funders in your study. If the funding organization did not play a role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries and/or research materials, please review your statements relating to the author contributions, and ensure you have specifically and accurately indicated the role(s) that these authors had in your study. You can update author roles in the Author Contributions section of the online submission form.

Please also include the following statement within your amended Funding Statement.

“The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the ‘author contributions’ section.”

If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement.

2.2. Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc.  

Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests) . If this adherence statement is not accurate and  there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

3. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager. Please see the following video for instructions on linking an ORCID iD to your Editorial Manager account: https://www.youtube.com/watch?v=_xcclfuvtxQ


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This work aims to establish deep learning models for classifying the presence of endometrial tumors in hysteroscopic images. And an average diagnostic accuracy exceeding 90% was realized when using the combination of 72 trained DNN models. However, I have the following concerns:

1)I am a bit curious why they use this deep learning architecture for endometrial tumors detection, rather than shallow machine learning models.

2)There are several errors in this manuscript, such as “The corresponding sensitivity and specificity equaled 91.66% and 89.36, respectively”. Is it 89.36%? The authors should double check the manuscript.

3)The manuscript should give the overall model architecture.

4)The metric method for the model is too simple, the author should add more metric method. Please refer to several literatures, such as:

Pang Shanchen, Ding Tong, Qiao Sibo, Meng Fan, Wang Shuo, Li pibao, WangXun . A novel YOLOv3-arch model for identifying cholelithiasis and classifying gallstones on CT images,2019, Plos one, 6(14):e0217647.DOI: 10.1371

Wang Shudong, Dong Liyuan, Wang Xun, Wang Xingguang. Classification of Pathological Types of Lung Cancer from CT Images by Deep Residual Neural Networks with Transfer Learning Strategy. Open Medicine, 2020, 15(1): 190-197.

Shanchen Pang, Yaqin Zhang, Mao Ding, Xun Wang, Xianjin Xie. A Deep Model for Lung Cancer Type Identification by Densely Connected Convolutional Networks and Adaptive Boosting. IEEE Access 2020,8: 4799-4805.

Shanchen Pang, Fan Meng, Xun Wang, et al. VGG16-T: A Novel Deep Convolutional Neural Network with Boosting to Identify Pathological Type of Lung Cancer in Early Stage by CT Images, International Journal of Computational Intelligence Systems. Vol.13(1), pp. 771-780, 2020.

Reviewer #2: In the paper, authors present an artificial-intelligence-based system to detect the regions affected by endometrial cancer automatically from hysteroscopic images. The diagnosis accuracy is increased. However, there are some details that can be improved.

The models used in the paper are not presented well.

The set of threshold value is 50, maybe you can explain some details about that.

The writing of the paper should be taken care. For example, the text size on the tables, the text-transform on subtitle of page 7.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Mar 31;16(3):e0248526. doi: 10.1371/journal.pone.0248526.r002

Author response to Decision Letter 0


12 Jan 2021

Reviewer #1:

This work aims to establish deep learning models for classifying the presence of endometrial tumors in hysteroscopic images. And an average diagnostic accuracy exceeding 90% was realized when using the combination of 72 trained DNN models. However, I have the following concerns:

Comment1

I am a bit curious why they use this deep learning architecture for endometrial tumors detection, rather than shallow machine learning models.

Response 1

We appreciate your critical comments and useful suggestions. Deep learning is highly anticipated in the medical field because deep learning techniques are particularly suitable for image analysis. They can be used for classification, image quality improvement, and segmentation of medical images. In contrast, shallow machine learning is not suitable for image recognition. We have added this information to the revised manuscript considering your comment (Lines 66-69).

Comment2

There are several errors in this manuscript, such as “The corresponding sensitivity and specificity equaled 91.66% and 89.36, respectively”. Is it 89.36%? The authors should double check the manuscript.

Response 2

We appreciate your critical comments and useful suggestions. It is 89.36% (Line 36). We have corrected the oversight.

Comment3

The manuscript should give the overall model architecture.

Response 3

We appreciate your critical comments and useful suggestions. We have added the overall architecture of the model (Figure 2) in accordance with your suggestion.

Comment4

The metric method for the model is too simple, the author should add more metric method. Please refer to several literatures, such as:

Pang Shanchen, Ding Tong, Qiao Sibo, Meng Fan, Wang Shuo, Li pibao, WangXun . A novel YOLOv3-arch model for identifying cholelithiasis and classifying gallstones on CT images,2019, Plos one, 6(14):e0217647.DOI: 10.1371

Wang Shudong, Dong Liyuan, Wang Xun, Wang Xingguang. Classification of Pathological Types of Lung Cancer from CT Images by Deep Residual Neural Networks with Transfer Learning Strategy. Open Medicine, 2020, 15(1): 190-197.

Shanchen Pang, Yaqin Zhang, Mao Ding, Xun Wang, Xianjin Xie. A Deep Model for Lung Cancer Type Identification by Densely Connected Convolutional Networks and Adaptive Boosting. IEEE Access 2020,8: 4799-4805.

Shanchen Pang, Fan Meng, Xun Wang, et al. VGG16-T: A Novel Deep Convolutional Neural Network with Boosting to Identify Pathological Type of Lung Cancer in Early Stage by CT Images, International Journal of Computational Intelligence Systems. Vol.13(1), pp. 771-780, 2020.

Response 4

We appreciate your critical comments and useful suggestions. We have added the metric methods in accordance with your comments. F-score and Precision have been added to Table 2. In addition, the description and structure of each network are also given (Tables S4, S5, S6, S7, Lines 164-179).

Reviewer #2:

In the paper, authors present an artificial-intelligence-based system to detect the regions affected by endometrial cancer automatically from hysteroscopic images. The diagnosis accuracy is increased. However, there are some details that can be improved.

Comment1

The models used in the paper are not presented well.

Response 1

We appreciate your critical comments and useful suggestions. We have added the overall architecture of the model (Figure 2) to provide further details of the model used. In addition, the description and structure of each network are also given (Tables S4, S5, S6, S7, Lines 164-179).

Comment2

The set of threshold value is 50, maybe you can explain some details about that.

Response 2

We appreciate your critical comments and useful suggestions. The threshold was taken from the points where the malignant score intersects with the other scores rather than the point where the average of two scores was the best, because the threshold should be set lower to reduce oversight cases in the actual clinical devices. We have added this information to the revised manuscript considering your comment (Lines 152-154).

Comment3

The writing of the paper should be taken care. For example, the text size on the tables, the text-transform on subtitle of page 7.

Response 3

We appreciate your critical comments and useful suggestions. We have revised the manuscript in accordance with your suggestion and PLOS ONE's style requirements.

Attachment

Submitted filename: Response to Reviewers .docx

Decision Letter 1

Tao Song

1 Mar 2021

Automated system for diagnosing endometrial cancer by adopting deep-learning technology in hysteroscopy

PONE-D-20-34280R1

Dear Dr. Sone,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Tao Song

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thanks for your efforts, all comments have been addressed by the authors, so, I recommend to accept the manuscript.

Reviewer #2: In the paper, authors present an artificial-intelligence-based system to detect the regions affected by endometrial cancer automatically from hysteroscopic images. The diagnosis accuracy is increased. The authors replied well to the suggestions I proposed. It can be accepted.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Acceptance letter

Tao Song

5 Mar 2021

PONE-D-20-34280R1

Automated system for diagnosing endometrial cancer by adopting deep-learning technology in hysteroscopy

Dear Dr. Sone:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Tao Song

Academic Editor

PLOS ONE
