PLoS One. 2022 Jun 17;17(6):e0269931. doi: 10.1371/journal.pone.0269931

Artificial intelligence–based technology for semi-automated segmentation of rectal cancer using high-resolution MRI

Atsushi Hamabe 1, Masayuki Ishii 1, Rena Kamoda 2, Saeko Sasuga 2, Koichi Okuya 1, Kenji Okita 1, Emi Akizuki 1, Yu Sato 1, Ryo Miura 1, Koichi Onodera 3, Masamitsu Hatakenaka 3, Ichiro Takemasa 1,*
Editor: Kumaradevan Punithakumar
PMCID: PMC9205476  PMID: 35714069

Abstract

Aim

Although MRI has a substantial role in directing treatment decisions for locally advanced rectal cancer, precise interpretation of the findings is not necessarily available at every institution. In this study, we aimed to develop artificial intelligence-based software for the segmentation of rectal cancer that can be used for staging to optimize treatment strategy and for preoperative surgical simulation.

Method

Images from a total of 201 patients who underwent preoperative MRI were analyzed as training data. The resected specimen was processed in a circular shape in 103 cases. Using these datasets, ground-truth labels were prepared by annotating the MR images with segmentation labels of the tumor area based on pathologically confirmed lesions. The areas of the rectum and mesorectum were also labeled. An automatic segmentation algorithm was then developed using a U-net deep neural network.

Results

The developed algorithm could estimate the areas of the tumor, rectum, and mesorectum. The Dice similarity coefficients between manual and automatic segmentation were 0.727, 0.930, and 0.917 for the tumor, rectum, and mesorectum, respectively. The T2/T3 diagnostic sensitivity, specificity, and overall accuracy were 0.773, 0.768, and 0.771, respectively.

Conclusion

This algorithm can provide objective analysis of MR images at any institution, and aid risk stratification in rectal cancer and the tailoring of individual treatments. Moreover, it can be used for surgical simulations.

Introduction

In rectal cancer treatment, accurate diagnosis is crucial in determining individual treatment strategies and achieving curable resection. Multidisciplinary treatment including preoperative chemoradiotherapy is standard therapy for locally advanced rectal cancer (LARC) to prevent local recurrence after total mesorectal excision (TME), and here MRI has the pivotal role of defining the baseline stage of rectal cancer [1, 2]. ESMO and NCCN guidelines recommend MRI as a mandatory preoperative examination [3, 4].

Although the accuracy of MRI in predicting the stage of rectal cancer has been high in previous studies comparing MRI findings with histopathology in relatively small series, the MERCURY study, which prospectively incorporated larger series, did not replicate the prior excellent results [5–11]. In addition, when expert radiologists interpreted the MR images according to strictly defined protocols, satisfactory accuracy was maintained, but this is not necessarily the practice at every institution [12]. Other possible concerns include inter-observer differences in difficult cases and the shortage of specialized radiologists in some developed countries [13, 14]. If a system supporting MRI diagnosis could be implemented, it would be useful in many circumstances.

Recent progress in applied artificial intelligence (AI) has increased its importance in medical care, especially in medical image analysis [15–17]. The use of AI-based diagnostic support technology has been enabled by advances in deep learning (DL). Given a substantial number of high-quality training datasets, DL can produce an algorithm that predicts clinical outputs with high accuracy. Ronneberger et al. introduced the U-net for the segmentation of two-dimensional (2D) biomedical images [18], and Milletari et al. extended the U-net to three-dimensional (3D) images [19]. Regarding tumor segmentation from MR images, previous studies used these 2D or 3D U-nets and showed that the results of segmentation were comparable to those achieved by human experts in multiple types of cancer [20, 21]. While several studies have attempted to segment rectal cancers, the depth of tumor invasion could not be assessed, or the accuracy of segmentation left room for improvement [22, 23]. We performed the PRODUCT study (UMIN000034364), in which we measured the circumferential resection margin (CRM) of LARC as a primary endpoint in laparoscopic surgery. Resected specimens including rectal cancer were processed in a circular shape with the mesorectum attached for pathological diagnosis, though this has not been the general practice in Japan. In addition, we started to measure CRM according to the practice in Western countries, not only in the cases enrolled in the PRODUCT study but also in other LARC cases as a clinical practice. As a spin-off, available sections of these specimens show the areas of LARC that correspond to the MR images, thus providing high-quality training datasets, which we consider advantageous in making ground-truth labels that can be used for DL.

Based on this background, we hypothesized that DL might resolve the difficulties related to MRI diagnosis by using MR images annotated with ground-truth labels reflecting the pathologically proven cancer area. In this study, we aimed to develop AI-based software to support the staging diagnosis of rectal cancer and to visualize the segmentation of rectal cancer, which can be used to optimize treatment strategy and in surgical simulations.

Materials and methods

Patients

Patients who underwent surgery for rectal cancer between January 2016 and July 2020 at our institution were retrospectively analyzed (Fig 1). A total of 201 MRI exams were used as training data (Table 1). Of these, the resected specimen was processed in a circular shape in 103 cases, and neoadjuvant treatment was administered in 55 cases. A total of 98 opened specimens, in which the mesorectum was detached according to the standard Japanese procedure, were also included in the analysis. The protocol for this research project was approved by the Ethics Committee of Sapporo Medical University. Informed consent was not required because the data were anonymized. The procedures were in accordance with the provisions of the Declaration of Helsinki of 1995 (as revised in Brazil, 2013).

Fig 1. Details of a total of 201 cases used as training data.


Group 1 images were used to prepare ground-truth labels for segmentation. Group 2 images were used with ground-truth labels carrying pathological T-staging information alone.

Table 1. Summary of the analyzed cases.

N = 201
Sex (male/female) 115/86
T factor (≤T2/T3/T4) 82/103/16
Neoadjuvant treatment (yes/no) 55/146
Processing for pathological examination (circular/open) 103/98

Magnetic resonance imaging

MR images were acquired using a 3.0-T (N = 93) or 1.5-T (N = 108) MR scanner (Ingenia; Philips Healthcare, Best, the Netherlands). A phased-array coil (dStream Torso coil; Philips Healthcare, Best, the Netherlands) was used for signal reception. In 4 patients who were referred from other hospitals, different MR scanners were used (a 3.0-T Skyra; Siemens, Erlangen, Germany in 2 patients and a 1.5-T Signa HDxt; GE Healthcare, Cleveland, OH, USA in the other 2). Before examination, bowel peristalsis was suppressed by intramuscular injection of butylscopolamine if possible. Neither bowel preparation nor air insufflation was performed. After the tumor was identified on sagittal T2-weighted images, axial T2-weighted images were acquired with the imaging plane angled perpendicular to the long axis of the tumor (TR/TE, 4000/90 ms; 3-mm slice thickness; 0.5-mm interslice gap; 150-mm field of view; 288 × 288 matrix; spatial resolution, 0.52 × 0.52 mm pixel size). A three-dimensional isotropic T2-weighted fast spin-echo sequence was also acquired routinely from October 2018 onward (TR/TE, 1500/200 ms; 256-mm field of view; 288 × 288 matrix; spatial resolution, 0.89 × 0.89 mm).

Processing of resected specimen

In the PRODUCT study, we developed a new method to precisely measure the pathological CRM, which we named "transverse slicing of a semi-opened rectal specimen" [24]. First, the anterior side of the rectum is opened longitudinally from the oral stump toward the anal side, up to 2 cm oral to the tumor border. Similarly, the rectum is opened on the anal side of the tumor if a sufficient distal margin has been resected. That is, the region of the rectum between 2 cm above and 2 cm below the borders of the rectal cancer is not incised. The mesorectum attached to the opened region of the rectum is removed to harvest embedded lymph nodes, while the mesorectum is left attached where the rectum is not opened. After removal of the mesorectum, the dissection plane is marked using India ink to demarcate it and support CRM measurement. After inking, a piece of soft sponge is inserted in the rectal lumen to maintain the in situ circular shape, and the specimen is pinned to a cork board under gentle tension, followed by fixation in 10% formalin. After fixation, the circular region of the rectum is transversely sliced as thinly as possible. Pathologists analyzed all hematoxylin-eosin-stained sections and recorded the pathological findings.

Ground-truth label

Since we used a supervised training method to develop the automatic segmentation algorithm, ground-truth labels were required. For all 201 cases, baseline T stages were labeled based on the pathological diagnosis or, if the patient had undergone neoadjuvant treatment, on the assessment of pathological sections. Segmentation labels, which indicate whether each voxel of an MR image belongs to the target structure, were prepared for 135 of the 201 cases by two surgeons (AH and MI), each of whom has more than 10 years of clinical experience treating colorectal cancer. Before starting the analysis, they received several lectures from a qualified pathologist on identifying the area of rectal cancer and, in cases given neoadjuvant treatment, on predicting the baseline area of rectal cancer by discriminating fibrosis or necrosis on hematoxylin-eosin sections. These surgeons created MR images annotated with ground-truth segmentation labels, including the areas of tumor, rectum, and mesorectum, using 3D MRI analysis software (Fig 2). The rectal area was defined as the area within the muscularis propria.

Fig 2. Preparation for ground-truth segmentation labels.


(a) Section of a circular specimen. (b) Pathological section of the specimen stained with hematoxylin-eosin revealing areas of tumor, rectum, and mesorectum. (c) Axial MR image of the rectal cancer. (d) Ground-truth segmentation labels were used to annotate the MR images. The areas colored magenta, yellow, and cyan represent tumor, rectum, and mesorectum, respectively.

Automatic segmentation algorithm

We developed an automatic segmentation algorithm that extracts the tumor, rectum, and mesorectum areas in 3D from T2-weighted MR images using a deep neural network. The network architecture is a 3D variant of the U-net, which is popular for biomedical image segmentation [18]. It consists of encoder and decoder parts with skip connections (Fig 3). The convolutional block in each encoder and decoder consists of a 3 × 3 × 3 or 1 × 3 × 3 convolution layer, a batch normalization layer, and rectified linear unit operations. The deconvolution blocks are transposed convolutional operators with a kernel size of 4 × 4 × 4 voxels. The skip connections include a 1 × 1 × 1 convolution layer, a batch normalization layer, and rectified linear unit operations. The input to the network is a 3D MR image. The output has the same spatial dimensions as the input, with 3 channels representing the probabilities of the mesorectum, rectum, and tumor areas. Each output channel takes values from 0.0 to 1.0 after application of the sigmoid function. Final segmentation results were obtained by binarizing these values at a threshold of 0.5.

Fig 3. U-net.


The architecture of the segmentation network for the areas of tumor, rectum, and mesorectum.
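For illustration, the block structure described above can be sketched in PyTorch as follows. The channel widths, two-level depth, and class name are illustrative assumptions rather than the published configuration; only the building blocks (3 × 3 × 3 convolution + batch normalization + ReLU, 4 × 4 × 4 transposed convolutions, 1 × 1 × 1 skip-connection convolutions, and a 3-channel sigmoid output binarized at 0.5) follow the description above.

    # Minimal 3D U-net sketch (PyTorch); widths and depth are illustrative.
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        # 3 x 3 x 3 convolution -> batch normalization -> ReLU
        return nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    class TinyUNet3D(nn.Module):
        def __init__(self, in_ch=1, out_ch=3, width=16):
            super().__init__()
            self.enc1 = conv_block(in_ch, width)
            self.down = nn.MaxPool3d(2)
            self.enc2 = conv_block(width, width * 2)
            # Transposed convolution with a 4 x 4 x 4 kernel
            self.up = nn.ConvTranspose3d(width * 2, width, kernel_size=4,
                                         stride=2, padding=1)
            # Skip connection with a 1 x 1 x 1 convolution + BN + ReLU
            self.skip = nn.Sequential(
                nn.Conv3d(width, width, kernel_size=1),
                nn.BatchNorm3d(width),
                nn.ReLU(inplace=True),
            )
            self.dec1 = conv_block(width * 2, width)
            self.head = nn.Conv3d(width, out_ch, kernel_size=1)

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.down(e1))
            d1 = self.dec1(torch.cat([self.up(e2), self.skip(e1)], dim=1))
            # Three sigmoid channels: mesorectum, rectum, and tumor
            return torch.sigmoid(self.head(d1))

    # Binarize at 0.5 to obtain the final segmentation masks
    probs = TinyUNet3D()(torch.randn(1, 1, 32, 64, 64))
    masks = probs > 0.5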

Our algorithm calculates the T stage from the binary segmentation results. A case is classified as T2 or below when the tumor area is completely included in the rectum area and does not contact its contour; otherwise, when at least part of the tumor area lies outside the rectum, the case is classified as T3 or above. This rule directly follows the T-staging definition based on the depth of tumor invasion relative to the rectum area (Fig 4). Generally, DL-based segmentation methods are trained to maximize the volume overlap between the segmentation result and the ground-truth label image. However, a risk of disagreement in T-staging would be inherent if T-staging were based on mutually independent segmentation results for tumor and rectum. To address this concern, we introduced a novel loss that directly maximizes T-staging accuracy during model training. The loss consists of two terms, as follows. The first term is the so-called Dice loss [19], which for segmentation purposes is defined as follows:

\mathrm{Loss}_{\mathrm{SEG}} = 1 - \frac{2 \sum_{i=1}^{N} p_i g_i}{\sum_{i=1}^{N} p_i + \sum_{i=1}^{N} g_i}

where N is the number of voxels, p_i is the probability output by the network for voxel i, and g_i is the corresponding ground-truth label. This term works to maximize the overlap between the ground-truth label and the probability map.
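As a sketch, this term can be written directly in PyTorch, with p and g as tensors of per-voxel probabilities and binary labels (the smoothing constant eps is an implementation assumption to avoid division by zero):

    import torch

    def dice_loss(p: torch.Tensor, g: torch.Tensor,
                  eps: float = 1e-7) -> torch.Tensor:
        # Loss_SEG = 1 - 2*sum(p*g) / (sum(p) + sum(g))
        p, g = p.flatten(), g.flatten()
        return 1.0 - 2.0 * (p * g).sum() / (p.sum() + g.sum() + eps)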

Fig 4. Staging algorithm.


Left, T2 case, and right, T3 case. The magenta, yellow, and cyan areas represent tumor, rectum and mesorectum, respectively.
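As a minimal sketch of the classification rule described above, assuming the binarized outputs are Boolean NumPy arrays of equal shape (for simplicity, this version tests containment only and omits the additional contour-contact condition):

    import numpy as np

    def predict_t_stage(tumor: np.ndarray, rectum: np.ndarray) -> str:
        # >=T3 if any tumor voxel lies outside the rectum area,
        # <=T2 if the tumor is fully contained in it.
        outside = tumor & ~rectum
        return ">=T3" if outside.any() else "<=T2"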

The second term of the loss function is a cross-entropy loss for staging accuracy, defined as follows:

\mathrm{Loss}_{\mathrm{STG}} = 1 - \left( \frac{g_{\mathrm{staging}}}{2} + g_{\mathrm{staging}} \times p_{\mathrm{staging}} \right)

where

p_{\mathrm{staging}} = \max_{i \in \{1, \ldots, N\}} \left( p_{\mathrm{cancer},i} \times \left( 1 - p_{\mathrm{rectal\,tube},i} \right) \right),
g_{\mathrm{staging}} = \begin{cases} 1 & \text{if the ground-truth T stage is T3 or above,} \\ -1 & \text{otherwise.} \end{cases}

p_cancer and p_rectal tube represent the probability maps of the tumor and rectum, respectively. p_staging indicates the probability that the predicted stage is T3 or above: it takes a high value when any voxel simultaneously has a low rectum probability and a high tumor probability. This term therefore works to reduce the tumor area outside the rectum for T2 cases and, conversely, to increase the tumor area outside the rectum for T3 cases.
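A sketch of this term, following the formula as reconstructed above, with p_cancer and p_rectum as probability tensors and g_staging ∈ {+1, −1}:

    import torch

    def staging_loss(p_cancer: torch.Tensor, p_rectum: torch.Tensor,
                     g_staging: float) -> torch.Tensor:
        # p_staging: highest probability over all voxels of being tumor
        # while lying outside the rectum; g_staging is +1 for >=T3, -1 for <=T2.
        p_staging = (p_cancer * (1.0 - p_rectum)).max()
        return 1.0 - (g_staging / 2.0 + g_staging * p_staging)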

To summarize, we minimize the loss function to train the network:

\mathrm{Loss} = \mathrm{Loss}_{\mathrm{SEG}} + \lambda \times \mathrm{Loss}_{\mathrm{STG}}

λ is a parameter used to balance the two terms; it was experimentally determined to be 0.02. During training, Loss_SEG is evaluated only for the cases with ground-truth segmentation labels, while Loss_STG is evaluated for all cases. We used the Adam optimizer to minimize the loss function, with the following parameters: base learning rate, 0.003; beta1, 0.9; beta2, 0.999; and epsilon, 1 × 10⁻⁸. The batch size was 5 samples, comprising 3 cases with ground-truth segmentation labels and 2 cases with only ground-truth staging. All experiments were conducted on an NVIDIA DGX-2 machine using the NVIDIA V100 GPU with 80 GB of memory.
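As a sketch, the reported optimizer settings and loss weighting can be wired up as follows (TinyUNet3D refers to the illustrative architecture sketch above; the rest of the training loop is omitted):

    import torch

    model = TinyUNet3D()
    # Adam configuration as reported: lr 0.003, betas (0.9, 0.999), eps 1e-8
    optimizer = torch.optim.Adam(model.parameters(), lr=0.003,
                                 betas=(0.9, 0.999), eps=1e-8)

    def total_loss(loss_seg, loss_stg, lam=0.02):
        # Loss = Loss_SEG + lambda * Loss_STG, with lambda = 0.02
        return loss_seg + lam * loss_stg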

During network training, each training image is augmented by several image-processing techniques, such as scaling, rotation, and slice-thickness conversion, to improve segmentation accuracy. The input image is also cropped around the tumor area and rescaled to a 0.5-mm isotropic voxel size with 256 × 256 × 128 voxels. In the test phase, a user inputs an estimated center position of the tumor, and the image around that position is processed.
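A sketch of this preprocessing step, assuming a NumPy volume, its voxel spacing in millimeters, and a user-supplied tumor center in voxel coordinates (the axis order and zero-padding at the borders are implementation assumptions):

    import numpy as np
    from scipy.ndimage import zoom

    def resample_and_crop(image, spacing_mm, tumor_center_voxel,
                          target_mm=0.5, out_shape=(128, 256, 256)):
        # Rescale to 0.5-mm isotropic voxels (linear interpolation), then
        # crop an out_shape block centered on the tumor position.
        factors = [s / target_mm for s in spacing_mm]
        iso = zoom(image, factors, order=1)
        # Pad by half the output size so the crop never leaves the volume
        iso = np.pad(iso, [(d // 2, d // 2) for d in out_shape])
        start = [int(round(c * f)) for c, f in zip(tumor_center_voxel, factors)]
        return iso[tuple(slice(s, s + d) for s, d in zip(start, out_shape))]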

Workflow for evaluation and statistical analysis

We evaluated two aspects of the algorithm: segmentation accuracy and staging accuracy. Ten-fold cross-validation was conducted. The data were randomly divided into 10 datasets. In each round, eight of the 10 datasets were used for training the network parameters, and the remaining two were used for validation and evaluation, respectively. During the training iterations, the performance of the network was evaluated on the validation dataset at every 100th iteration. We chose the network parameters that performed best on the validation dataset, as measured by the sum of the Dice score, sensitivity, and specificity, and then applied them to the evaluation dataset. We repeated this procedure ten times, rotating the training, validation, and evaluation roles among the datasets.
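A sketch of this rotation scheme (pairing each validation fold with the next fold as the evaluation fold is an assumption; the text specifies only that one dataset served each role per round):

    import numpy as np

    def ten_fold_roles(n_cases: int, seed: int = 0):
        # Randomly split the cases into 10 datasets; each round uses 8 for
        # training, 1 for validation (model selection), and 1 for evaluation.
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(n_cases), 10)
        for k in range(10):
            val, test = folds[k], folds[(k + 1) % 10]
            train = np.concatenate([f for i, f in enumerate(folds)
                                    if i not in (k, (k + 1) % 10)])
            yield train, val, test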

Regarding the segmentation accuracy, we calculated the Dice similarity coefficients (DSC) between manual segmentation and automatic segmentation [25]. The DSC is defined as follows:

\mathrm{DSC} = \frac{2 \times |P \cap G|}{|P| + |G|}

where P is the predicted segmentation and G is the ground truth. The DSC ranges from 0.0 to 1.0, and DSC = 1.0 means that the two regions overlap completely. Note that, since not all of the training data have corresponding ground-truth segmentation, we evaluated segmentation accuracy using the 135 labeled cases.
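A minimal computation of the DSC for binary masks:

    import numpy as np

    def dice_coefficient(p: np.ndarray, g: np.ndarray) -> float:
        # DSC = 2|P ∩ G| / (|P| + |G|)
        p, g = p.astype(bool), g.astype(bool)
        return 2.0 * np.logical_and(p, g).sum() / (p.sum() + g.sum())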

Next, T-staging accuracy was evaluated on all 201 cases by calculating the sensitivity and specificity. The sensitivity is defined as follows:

\mathrm{Sensitivity} = \frac{|P_{T3} \cap G_{T3}|}{|G_{T3}|}

where P_T3 is the set of cases whose predicted T stage is T3 or above, and G_T3 is the set of cases whose ground-truth T stage is T3 or above. Specificity is defined as follows:

\mathrm{Specificity} = \frac{|P_{T2} \cap G_{T2}|}{|G_{T2}|}

where P_T2 is the set of cases whose predicted T stage is T2 or below, and G_T2 is the set of cases whose ground-truth T stage is T2 or below.
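For a concrete check, these definitions reproduce the values reported in the Results when applied to the confusion-matrix counts later shown in Table 2:

    def staging_metrics(tp, fn, fp, tn):
        sensitivity = tp / (tp + fn)   # |P_T3 ∩ G_T3| / |G_T3|
        specificity = tn / (tn + fp)   # |P_T2 ∩ G_T2| / |G_T2|
        accuracy = (tp + tn) / (tp + fn + fp + tn)
        return sensitivity, specificity, accuracy

    # Counts from Table 2: 92 correctly predicted >=T3, 27 missed >=T3,
    # 19 false >=T3, 63 correctly predicted <=T2
    print(staging_metrics(92, 27, 19, 63))  # ~ (0.773, 0.768, 0.771)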

Results are presented as the number of cases evaluated for categorical data and expressed as the median and interquartile range (IQR) for quantitative data. Univariate analysis was performed using the Wilcoxon rank-sum test. Statistical analyses were performed using JMP Pro 15.1.0 software (SAS Institute, Cary, NC, USA).

Results

Segmentation accuracy

The developed algorithm successfully estimated the areas of the tumor, rectum, and mesorectum; in typical cases, the ground-truth labels and segmentation results corresponded well (Fig 5a). The summary of evaluation results for segmentation accuracy showed that the median DSCs for tumor, rectum, and mesorectum were 0.727, 0.930, and 0.917, respectively (Fig 5b). Mucinous cancer exhibits high signal intensity on T2-weighted images, in contrast to the most common histology, adenocarcinoma. We therefore investigated the DSCs in patients with mucinous cancer (N = 6) to analyze whether this feature affects segmentation accuracy. The DSC was lower in the mucinous cancer cases than in those of other histology (0.358 [0.167–0.596] vs 0.736 [0.605–0.801], P = 0.0024). In addition, on the assumption that the tumor DSC could easily be lowered by a slight positional deviation in smaller tumors, we investigated the correlation between the DSC and tumor diameter after excluding mucinous cancer (Fig 5c) and observed a significant correlation between the two values (Pearson correlation coefficient = 0.2418; P = 0.0081). After also excluding cancers less than 20 mm in diameter, the median DSC of the tumor rose slightly, to 0.739 [0.615–0.801].

Fig 5. Results of segmentation accuracy.


(a) Representative images of MRI, the ground-truth segmentation labels, and AI-predicted segmentations. (b) Summary of evaluation results regarding the segmentation accuracy. (c) Scatter plots showing the relationship between tumor diameter and the Dice similarity coefficient.

Correlation between pathological and AI T stage

The guidelines used worldwide regard distinguishing T2 from T3 as one of the important factors directing treatment decisions. Therefore, as an initial assessment, we investigated our method's diagnostic accuracy in discriminating T2 from T3. The correlation between the pathological T stage and the AI-predicted T stage is summarized in Table 2. The T-staging sensitivity, specificity, and overall accuracy were 0.773, 0.768, and 0.771, respectively. For comparison, we evaluated a baseline model trained with a standard Dice loss using only the ground-truth segmentation labels. The baseline model obtained a sensitivity, specificity, and overall accuracy of 0.765, 0.756, and 0.761, respectively, showing that the AI developed in this study achieved better T-staging performance. As in the analysis of segmentation accuracy, the diagnostic accuracy was recalculated after excluding small cancers and mucinous cancers; the resulting T-staging sensitivity, specificity, and overall accuracy were 0.789, 0.714, and 0.762, respectively.

Table 2. Summary of pathological T stage and AI-predicted T stage.

                               Ground-truth pathological T staging
                               ≤T2     ≥T3     Total
AI-predicted T staging   ≤T2    63      27       90
                         ≥T3    19      92      111
Total                           82     119      201

Discussion

In this study, an algorithm for diagnosing and staging rectal cancer was successfully developed using DL technology. It could be used in future semi-automation software to aid physicians. The characteristic feature of this algorithm is that it outputs a segmentation that visualizes the areas of the tumor, rectum, and mesorectum. This could be used not only for T-factor staging but also for preoperative surgical simulation. In the future, based on the provided visual information, surgeons will be able to choose the surgical plane to be dissected or decide whether combined resection of an adjacent organ is necessary. In addition, we think the algorithm will also help multidisciplinary teams tailor treatment to individual patients.

Two meta-analyses have investigated the diagnostic accuracy of MRI and shown favorable results, with about 85% sensitivity and 75% specificity for diagnosing tumor invasion beyond the muscularis propria [10, 11]. However, these results are subject to substantial selection bias, which can inflate reported accuracy above actual accuracy. This is partly reflected by the fact that MERCURY, a carefully designed prospective study, demonstrated diagnostic accuracy that was acceptable but did not reach the values reported in the meta-analyses. In reality, accurately diagnosing rectal cancer using MRI is not easy. Furthermore, although MRI scanners are plentiful in Japan, certified radiologists are in quite short supply, leaving individual radiologists with excessive workloads; this is also the case in other developed countries [13, 14]. Given this situation, a method that can improve the acquisition of objective MRI findings at every institution is needed. We think the current algorithm could play a substantial role in providing equal access to MRI diagnosis in institutions or regions with shortages of trained personnel.

As MRI technology has advanced in recent decades, it is important to re-evaluate the accuracy of MRI. Since neoadjuvant CRT was established as a standard treatment in Western countries, it has become difficult to validate the accuracy of baseline MRI findings by simply comparing them with the corresponding pathology. In the current study, we made a training dataset by annotating the pathologically proven tumor areas on MR images. In the cases with neoadjuvant therapy, the baseline area of the tumor was predicted from the pathological evidence of fibrosis or necrosis. These processes may be useful in making reliable training datasets even in cases with neoadjuvant treatment, suggesting that the segmentation algorithm might reflect the typical results of MRI today.

Some recent studies have tried to estimate rectal cancer–related parameters on preoperative MR images using AI and have shown acceptable accuracy [22, 26–28]. However, these studies had several limitations: the tumor tissue was not visualized on the MR image, the relationship of the tumor with the mesorectal fascia was difficult to assess, the results were not based on high-resolution MRI, or the ground-truth labels were not based on pathological assessment, the last issue being the one we consider most critical. We think there is much room for improvement in the clinical application of AI. The software developed in this study, however, has several strengths. First, the ground-truth labels are based on the pathological findings in circular specimens, providing the high-quality training datasets that are essential in establishing a reliable algorithm. Second, the algorithm can output the segmentation of the tumor, rectum, and mesorectum. This feature is valuable for staging the tumor, for individualized multidisciplinary treatment decision making, and for the preoperative simulation that colorectal surgeons require in order to achieve curative resection. Third, we used high-resolution MRI in this analysis, though the MRI acquisition protocols differ from those used in the MERCURY study. Thus, this system can be applied anywhere, provided that an appropriate protocol and an adequate scanner are used for image acquisition. We note that the accuracy of our algorithm was insufficient for some tumor types, including mucinous cancers and small tumors. Although the quality of segmentation as a whole can be regarded as favorable, it would be ideal if these hurdles were cleared by future refinement. Because small tumors rarely infiltrate the mesorectum or surrounding tissues, however, this algorithm can still be regarded as useful for diagnosing locally advanced rectal cancers.

The current study has several limitations. First, validation using test data acquired under various conditions should be performed to confirm the generalizability of the algorithm. We are currently planning a validation study using an independent large series to investigate the algorithm's effectiveness. Simultaneously, we will continue to improve the software's performance in assessing other important factors, including mesorectal fascia involvement. Second, the workload involved in preparing individual ground-truth labels is too heavy for the number of training sets to be readily increased. Third, as explained in the Results, the accuracy of this system is still insufficient for mucinous tumors, and it cannot reliably estimate the shape of small tumors. We think this limitation can be overcome with the use of more training datasets in the future.

In conclusion, we have successfully developed the first AI-based algorithm for segmenting rectal cancer. This system can provide stable results at any institution and contribute to rectal cancer risk stratification and the tailoring of individual treatments, and is likely to gain importance in the era of individualized medical care.

Supporting information

S1 Dataset

(XLSX)

Acknowledgments

We are grateful to Shintaro Sugita, Associate Professor in the Department of Surgical Pathology at Sapporo Medical University, for giving lectures on finding areas of rectal cancer prior to preparing ground-truth labels.

Data Availability

Raw data of MRI and pathological images contain potentially identifying patient information (patient-specific ID). Non-author contact information: the institutional review board of Sapporo Medical University Hospital. E-mail: ji-rskk@sapmed.ac.jp.

Funding Statement

This study was funded by the FUJIFILM Corporation (https://www.fujifilm.com). The funder had no role in study design, data collection and analysis, or decision to publish, but supported the analysis using deep learning and the preparation of manuscript related to deep learning. No authors received personal support from the funder.

References

  • 1. Taylor FG, Quirke P, Heald RJ, Moran BJ, Blomqvist L, Swift IR et al. Preoperative magnetic resonance imaging assessment of circumferential resection margin predicts disease-free survival and local recurrence: 5-year follow-up results of the MERCURY study. J Clin Oncol 2014; 32: 34–43. doi: 10.1200/JCO.2012.45.3258
  • 2. Battersby NJ, How P, Moran B, Stelzner S, West NP, Branagan G et al. Prospective Validation of a Low Rectal Cancer Magnetic Resonance Imaging Staging System and Development of a Local Recurrence Risk Stratification Model: The MERCURY II Study. Ann Surg 2016; 263: 751–760. doi: 10.1097/SLA.0000000000001193
  • 3. Glynne-Jones R, Wyrwicz L, Tiret E, Brown G, Rödel C, Cervantes A et al. Rectal cancer: ESMO Clinical Practice Guidelines for diagnosis, treatment and follow-up. Ann Oncol 2017; 28: iv22–iv40. doi: 10.1093/annonc/mdx224
  • 4. Benson AB, Venook AP, Al-Hawary MM, Arain MA, Chen YJ, Ciombor KK et al. NCCN Guidelines Insights: Rectal Cancer, Version 6.2020. J Natl Compr Canc Netw 2020; 18: 806–815. doi: 10.6004/jnccn.2020.0032
  • 5. Burton S, Brown G, Daniels I, Norman A, Swift I, Abulafi M et al. MRI identified prognostic features of tumors in distal sigmoid, rectosigmoid, and upper rectum: treatment with radiotherapy and chemotherapy. Int J Radiat Oncol Biol Phys 2006; 65: 445–451. doi: 10.1016/j.ijrobp.2005.12.027
  • 6. Akasu T, Iinuma G, Takawa M, Yamamoto S, Muramatsu Y, Moriyama N. Accuracy of high-resolution magnetic resonance imaging in preoperative staging of rectal cancer. Ann Surg Oncol 2009; 16: 2787–2794. doi: 10.1245/s10434-009-0613-3
  • 7. Kim H, Lim JS, Choi JY, Park J, Chung YE, Kim MJ et al. Rectal cancer: comparison of accuracy of local-regional staging with two- and three-dimensional preoperative 3-T MR imaging. Radiology 2010; 254: 485–492. doi: 10.1148/radiol.09090587
  • 8. MERCURY Study Group. Extramural depth of tumor invasion at thin-section MR in patients with rectal cancer: results of the MERCURY study. Radiology 2007; 243: 132–139. doi: 10.1148/radiol.2431051825
  • 9. Kim CK, Kim SH, Chun HK, Lee WY, Yun SH, Song SY et al. Preoperative staging of rectal cancer: accuracy of 3-Tesla magnetic resonance imaging. Eur Radiol 2006; 16: 972–980. doi: 10.1007/s00330-005-0084-2
  • 10. Zhang G, Cai YZ, Xu GH. Diagnostic Accuracy of MRI for Assessment of T Category and Circumferential Resection Margin Involvement in Patients With Rectal Cancer: A Meta-Analysis. Dis Colon Rectum 2016; 59: 789–799. doi: 10.1097/DCR.0000000000000611
  • 11. Al-Sukhni E, Milot L, Fruitman M, Beyene J, Victor JC, Schmocker S et al. Diagnostic accuracy of MRI for assessment of T category, lymph node metastases, and circumferential resection margin involvement in patients with rectal cancer: a systematic review and meta-analysis. Ann Surg Oncol 2012; 19: 2212–2223. doi: 10.1245/s10434-011-2210-5
  • 12. Balyasnikova S, Brown G. Optimal Imaging Strategies for Rectal Cancer Staging and Ongoing Management. Curr Treat Options Oncol 2016; 17: 32. doi: 10.1007/s11864-016-0403-7
  • 13. Nishie A, Kakihara D, Nojo T, Nakamura K, Kuribayashi S, Kadoya M et al. Current radiologist workload and the shortages in Japan: how many full-time radiologists are required? Jpn J Radiol 2015; 33: 266–272. doi: 10.1007/s11604-015-0413-6
  • 14. Nakajima Y, Yamada K, Imamura K, Kobayashi K. Radiologist supply and workload: international comparison—Working Group of Japanese College of Radiology. Radiat Med 2008; 26: 455–465. doi: 10.1007/s11604-008-0259-2
  • 15. Misawa M, Kudo SE, Mori Y, Cho T, Kataoka S, Yamauchi A et al. Artificial Intelligence-Assisted Polyp Detection for Colonoscopy: Initial Experience. Gastroenterology 2018; 154: 2027–2029.e3. doi: 10.1053/j.gastro.2018.04.003
  • 16. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts H. Artificial intelligence in radiology. Nat Rev Cancer 2018; 18: 500–510. doi: 10.1038/s41568-018-0016-5
  • 17. Martin Noguerol T, Paulano-Godino F, Martin-Valdivia MT, Menias CO, Luna A. Strengths, Weaknesses, Opportunities, and Threats Analysis of Artificial Intelligence and Machine Learning Applications in Radiology. J Am Coll Radiol 2019; 16(9 Pt B): 1239–1247. doi: 10.1016/j.jacr.2019.05.047
  • 18. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv:1505.04597, 2015. https://arxiv.org/abs/1505.04597
  • 19. Milletari F, Navab N, Ahmadi SA. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. arXiv:1606.04797, 2016. https://arxiv.org/abs/1606.04797
  • 20. Dolz J, Xu X, Rony J, Yuan J, Liu Y, Granger E et al. Multiregion segmentation of bladder cancer structures in MRI with progressive dilated convolutional networks. Med Phys 2018; 45: 5482–5493. doi: 10.1002/mp.13240
  • 21. Hodneland E, Dybvik JA, Wagner-Larsen KS, Šoltészová V, Munthe-Kaas AZ, Fasmer KE et al. Automated segmentation of endometrial cancer on MR images using deep learning. Sci Rep 2021; 11: 179. doi: 10.1038/s41598-020-80068-9
  • 22. Trebeschi S, van Griethuysen JJM, Lambregts DMJ, Lahaye MJ, Parmar C, Bakers FCH et al. Deep Learning for Fully-Automated Localization and Segmentation of Rectal Cancer on Multiparametric MR. Sci Rep 2017; 7: 5301. doi: 10.1038/s41598-017-05728-9
  • 23. Huang YJ, Dou Q, Wang ZX, Liu LZ, Jin Y, Li CF et al. 3-D RoI-Aware U-Net for Accurate and Efficient Colorectal Tumor Segmentation. IEEE Trans Cybern 2021; 51: 5397–5408. doi: 10.1109/TCYB.2020.2980145
  • 24. Ishii M, Takemasa I, Okita K, Okuya K, Hamabe A, Nishidate T et al. A modified method for resected specimen processing in rectal cancer: Semi-opened with transverse slicing for measuring of the circumferential resection margin. Asian J Endosc Surg 2021. Online ahead of print. doi: 10.1111/ases.13003
  • 25. Zou KH, Warfield SK, Bharatha A, Tempany CM, Kaus MR, Haker SJ et al. Statistical validation of image segmentation quality based on a spatial overlap index. Acad Radiol 2004; 11: 178–189. doi: 10.1016/s1076-6332(03)00671-8
  • 26. Ma X, Shen F, Jia Y, Xia Y, Li Q, Lu J. MRI-based radiomics of rectal cancer: preoperative assessment of the pathological features. BMC Med Imaging 2019; 19: 86. doi: 10.1186/s12880-019-0392-7
  • 27. Wang D, Xu J, Zhang Z, Li S, Zhang X, Zhou Y et al. Evaluation of Rectal Cancer Circumferential Resection Margin Using Faster Region-Based Convolutional Neural Network in High-Resolution Magnetic Resonance Images. Dis Colon Rectum 2020; 63: 143–151. doi: 10.1097/DCR.0000000000001519
  • 28. Kim J, Oh JE, Lee J, Kim MJ, Hur BY, Sohn DK et al. Rectal cancer: Toward fully automatic discrimination of T2 and T3 rectal cancers using deep convolutional neural network. Int J Imaging Syst Technol 2019; 29: 247–259. doi: 10.1002/ima.22311

Decision Letter 0

Kumaradevan Punithakumar

2 Feb 2022

PONE-D-21-25683
Artificial intelligence–based technology for semi-automated segmentation of rectal cancer using high-resolution MRI
PLOS ONE

Dear Dr. Takemasa,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Mar 19 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Kumaradevan Punithakumar

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

3. Please clarify the nature of these restrictions, ie. If due to ethical or legal reasons.

4. Please amend your Methods section to include the information provided in your Ethics Statement that informed consent was not required due to the fact that data was anonymized.

5. Thank you for stating the following in the Funding Section of your manuscript:

“This study was funded by the FUJIFILM Corporation”

We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form.

Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows:

“This study was funded by the FUJIFILM Corporation (https://www.fujifilm.com). The funder had no role in study design, data collection and analysis, or decision to publish, but supported the analysis using deep learning and the preparation of manuscript related to deep learning. No authors received personal support from the funder.”

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The entitled "Artificial intelligence–based technology for semi-automated segmentation of rectal cancer using high-resolution MRI" is well organized. Literature review and methodology explained by author is appreciable. minor revision is suggested. Author should address following queries:

1.in introduction section(3rd paragraph), some recent papers of this area should be covered which uses U-net in MRI images or other techniques apart from your used technique.

2.In method section, automatic segmentation algorithm part, Author should add more detail about the algorithm and qualitatively analyse the scales of images before and after the application of segmentation algorithm.

3.In line 148, proper citations are missing. Please place the appropriate citations for evaluation coefficients.

4. in figure 5, pearson correlation coefficient and p value should be noted in the figure,and the y-axis is fuzzy.

Reviewer #2: This manuscript presents a new high-resolution MRI semi-automatic rectal cancer segmentation technology based on deep learning. I believe this work will contribute to the accuracy of adjuvant therapy. However, I hope the author will revise the manuscript and solve the following problems before publishing it in the journal.

1. Firstly, the description of the research status is not perfect, especially the research status of deep learning in medical image aided diagnosis, including two-dimensional image and three-dimensional image.

2. Secondly, it is necessary to make a detailed description of the hardware configuration and super parameter configuration of network training.

3. Finally, some other networks should be added to the experimental part as a contrast, so as to make the experimental results more convincing.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Jun 17;17(6):e0269931. doi: 10.1371/journal.pone.0269931.r002

Author response to Decision Letter 0


11 Apr 2022

Responses to the reviewers’ comments

We thank the reviewers for their fair comments and useful suggestions for improving our manuscript. As indicated below, we have considered all comments and suggestions, and we made corrections on a proof in red for the revised parts of the manuscript.

Reviewer's Responses to Questions

Comments to the Author

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

Response: According to the suggestion, we uploaded the underlying dataset for the findings as supporting information files.

Reviewer #1:

The entitled "Artificial intelligence–based technology for semi-automated segmentation of rectal cancer using high-resolution MRI " is well organized. Literature review and methodology explained by author is appreciable. minor revision is suggested. Author should address following queries:

1.In introduction section (3rd paragraph), some recent papers of this area should be covered which uses U-net in MRI images or other techniques apart from your used technique.

Response:

We thank the reviewer for the constructive comment. We updated the current status regarding the U-net based segmentation techniques for MRI in introduction section (page 7, line 99-105). We additionally cited the following 6 references herein.

#18. Ronneberger O, Fischer, P., Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv:1505.04597 2015.

#19. Milletari F, Navab, N., Ahmadi, S.A. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. arXiv:1606.04797.

#20. Dolz J, Xu X, Rony J et al. Multiregion segmentation of bladder cancer structures in MRI with progressive dilated convolutional networks. Med Phys 2018; 45: 5482-5493.

#21. Hodneland E, Dybvik JA, Wagner-Larsen KS et al. Automated segmentation of endometrial cancer on MR images using deep learning. Sci Rep 2021; 11: 179.

#22. Trebeschi S, van Griethuysen JJM, Lambregts DMJ et al. Author Correction: Deep Learning for Fully-Automated Localization and Segmentation of Rectal Cancer on Multiparametric MR. Sci Rep 2018; 8: 2589.

#23. Huang YJ, Dou Q, Wang ZX et al. 3-D RoI-Aware U-Net for Accurate and Efficient Colorectal Tumor Segmentation. IEEE Trans Cybern 2021; 51: 5397-5408.

2. In method section, automatic segmentation algorithm part, Author should add more detail about the algorithm and qualitatively analyse the scales of images before and after the application of segmentation algorithm.

Response:

According to the indication, we added proposed algorithm features and implementation details (page 14, line 213-218). The salient feature of the algorithm is that we introduced a novel loss which can directly maximize T-staging accuracies in model training, thereby showing better performance than usual segmentation methods which try to optimize the volume overlap with the ground truth label images. We additionally compared the T-stage diagnostic accuracy by comparing the above two algorithm in this revision, and clearly stated the strength of this algorithm in the results section (page 19, line 311-page 20, line 315).

3.In line 148, proper citations are missing. Please place the appropriate citations for evaluation coefficients.

Response:

We decided that the indicated point was described in line 248. We used Dice similarity coefficients (DSC) to assess the segmentation accuracy, and in this revision, we cited a reference #25 to explain this method (page 17, line 265).

#25. Zou KH, Warfield SK, Bharatha A et al. Statistical validation of image segmentation quality based on a spatial overlap index. Acad Radiol 2004; 11: 178-189.

4. In figure 5, pearson correlation coefficient and p value should be noted in the figure,and the y-axis is fuzzy.

Response:

Thank you very much for your suggestions. Pearson correlation coefficient was 0.2418 and P value was 0.0081, which were added in the figure 5(c). In addition, y-axis “Dice similarity coefficients” was made clearly in this revision.

Additional revision

We cited a previous report from our institution showing the method for “transverse slicing of a semi-opened rectal specimen” in this revision (page 11, line 157).

#24. Ishii M, Takemasa I, Okita K et al. A modified method for resected specimen processing in rectal cancer: Semi-opened with transverse slicing for measuring of the circumferential resection margin. Asian J Endosc Surg 2021.

Reviewer #2:

This manuscript presents a new high-resolution MRI semi-automatic rectal cancer segmentation technology based on deep learning. I believe this work will contribute to the accuracy of adjuvant therapy. However, I hope the author will revise the manuscript and solve the following problems before publishing it in the journal.

1. Firstly, the description of the research status is not perfect, especially the research status of deep learning in medical image aided diagnosis, including two-dimensional image and three-dimensional image.

Response:

We thank the reviewer for the constructive comment. We have added the current status regarding the tumor segmentation using 2D and 3D U-net in introduction section (page 7, line 99-105). We additionally cited the following 6 references herein.

#18. Ronneberger O, Fischer, P., Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv:1505.04597 2015.

#19. Milletari F, Navab, N., Ahmadi, S.A. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. arXiv:1606.04797.

#20. Dolz J, Xu X, Rony J et al. Multiregion segmentation of bladder cancer structures in MRI with progressive dilated convolutional networks. Med Phys 2018; 45: 5482-5493.

#21. Hodneland E, Dybvik JA, Wagner-Larsen KS et al. Automated segmentation of endometrial cancer on MR images using deep learning. Sci Rep 2021; 11: 179.

#22. Trebeschi S, van Griethuysen JJM, Lambregts DMJ et al. Author Correction: Deep Learning for Fully-Automated Localization and Segmentation of Rectal Cancer on Multiparametric MR. Sci Rep 2018; 8: 2589.

#23. Huang YJ, Dou Q, Wang ZX et al. 3-D RoI-Aware U-Net for Accurate and Efficient Colorectal Tumor Segmentation. IEEE Trans Cybern 2021; 51: 5397-5408.

2. Secondly, it is necessary to make a detailed description of the hardware configuration and super parameter configuration of network training.

Response:

Thank you very much for your suggestions. The indicated comment is important, and we added the implementation details for network training as follows; The parameter λ in the loss function was experimentally determined to be 0.02. To minimize the loss function, the Adam optimizer with a base learning rate of 0.003, beta1 0.9, beta2 0.999, epsilon 1e-8. Batch-size was 5 samples which consisted of 3 cases with ground-truth segmentation labels and 2 cases with only ground-truth staging. All experiments are conducted on an NVIDIA DGX-2 machine using the NVIDIA V100 GPU with 80GBs of memory. During the training iteration, the performance of the network was evaluated every 100 iterations on the validation dataset. We chose the best network parameter for the validation dataset using the sum of dice score, sensitivity and specificity, then applied it to the evaluation dataset. This information was added in the materials and methods section (page 16, line 241, line 244-247, and page 17, line 259-260.).

3. Finally, some other networks should be added to the experimental part as a contrast, so as to make the experimental results more convincing.

Response:

According to the indication, we added the comparative analysis between general methods and ours. The salient feature of the algorithm is a novel loss which can directly maximize T-staging accuracies in model training. To reveal the effectiveness of the proposed loss function, we evaluated a baseline model which was trained by using a standard dice loss with the ground truth label images. We additionally compared the T-stage diagnostic accuracy by comparing the above two algorithms in this revision, and clearly stated the strength of this algorithm in the results section (page 14, line 213-218, and page 19, line 311-page 20, line 315).

Additional revision

We cited a previous report from our institution showing the method for “transverse slicing of a semi-opened rectal specimen” in this revision (page 11, line 157).

#24. Ishii M, Takemasa I, Okita K et al. A modified method for resected specimen processing in rectal cancer: Semi-opened with transverse slicing for measuring of the circumferential resection margin. Asian J Endosc Surg 2021.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Kumaradevan Punithakumar

1 Jun 2022

Artificial intelligence–based technology for semi-automated segmentation of rectal cancer using high-resolution MRI

PONE-D-21-25683R1

Dear Dr. Takemasa,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Kumaradevan Punithakumar

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

Acceptance letter

Kumaradevan Punithakumar

9 Jun 2022

PONE-D-21-25683R1

Artificial intelligence–based technology for semi-automated segmentation of rectal cancer using high-resolution MRI

Dear Dr. Takemasa:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Kumaradevan Punithakumar

Academic Editor

PLOS ONE
