PLoS One. 2022 Dec 15;17(12):e0277168. doi: 10.1371/journal.pone.0277168

Utility of adding Radiomics to clinical features in predicting the outcomes of radiotherapy for head and neck cancer using machine learning

Tarun Gangil 1,#, Krishna Sharan 1,#, B Dinesh Rao 2, Krishnamoorthy Palanisamy 3,#, Biswaroop Chakrabarti 3, Rajagopal Kadavigere 4,*,#
Editor: Panagiotis Balermpas
PMCID: PMC9754241  PMID: 36520945

Abstract

Background

Radiomics involves the extraction of quantitative information from annotated Computed Tomography (CT) images and has been used to predict outcomes in Head and Neck Squamous Cell Carcinoma (HNSCC). Subjecting combined Radiomics and Clinical features to Machine Learning (ML) could offer better predictions of clinical outcomes. This study is a comparative performance analysis of ML models built with Clinical, Radiomics, and Clinico-Radiomic datasets for predicting four outcomes of HNSCC treated with Curative Radiation Therapy (RT): Distant Metastases, Locoregional Recurrence, New Primary, and Residual Disease.

Methodology

The study used retrospective data from 311 HNSCC patients treated with radiotherapy between 2013 and 2018 at our centre. Binary prediction models were developed for the four outcomes with Clinical-only, Clinico-Radiomic, and Radiomics-only datasets, using three ML classification algorithms: Random Forest (RF), Kernel Support Vector Machine (KSVM), and XGBoost. The best-performing ML algorithm for each of the three dataset groups was then compared.

Results

The Clinico-Radiomic dataset with the KSVM classifier provided the best predictions. The mean testing accuracy for Distant Metastases, Locoregional Recurrence, New Primary, and Residual Disease was 97%, 72%, 99%, and 96%, respectively. The mean area under the receiver operating characteristic curve (AUC) was calculated and is reported for all models across the three dataset groups.

Conclusion

The Clinico-Radiomic dataset improved the predictive ability of ML models over clinical features alone, whereas models built on Radiomics alone performed poorly. Radiomics data could therefore effectively supplement, but not replace, clinical data in predicting outcomes.

1. Introduction

Head and Neck Squamous Cell Carcinoma (HNSCC) refers to a constellation of cancers of the head and neck region and is an important contributor to cancer-related morbidity and mortality. HNSCC is a significant health issue in India, with an annual incidence of 77,000 new cases accounting for one-third of all cancers diagnosed. Treatment outcomes of HNSCC are highly variable, and better prediction of treatment outcomes could improve survival, reduce toxicity, and further the understanding of HNSCC.

Developments in the field of Artificial Intelligence (AI) have enabled the analysis of large-volume data, and Machine Learning (ML) tools are increasingly utilized in diverse fields within medicine, including the study of HNSCC [1]. Radiomics is an emerging method for obtaining additional information from imaging: it converts medical images into quantitative data, potentially enabling healthcare personnel to make better diagnostic, therapeutic, and prognostic decisions [2]. Statistical, shape-based, histogram, and texture-based features are extracted from Regions of Interest (ROIs) for analysis and clinical correlation [3, 4]. Studies have shown that Radiomics facilitates diagnosis, treatment planning, and outcome prediction. The development of AI, principally ML and deep learning algorithms, has boosted the potential of the typically large-volume quantitative Radiomics data [5].

This research evaluates the additional benefit of Radiomics extracted from annotated diagnostic CT images over clinical data in predicting the outcomes of HNSCC patients treated with radiotherapy. The study extends our previous work [6] by presenting the change in performance of the individual models predicting Distant Metastases, Locoregional Recurrence, New Primary, and Residual Disease after adding Radiomics data to the models previously built using clinical data alone.

2. Methodology

This study developed and compared analytics models to predict the clinical outcomes of HNSCC. Clinical information was retrieved from hospital medical records, and Radiomics information was obtained from diagnostic contrast CT images. The study included HNSCC patients treated with curative-intent radiotherapy at our centre. Though the minimum sample size was calculated to be 256 (S1B in S1 Appendix), we included all eligible patients treated between 2013 and 2018 [7]. Eligible cases were those treated with curative-intent radiotherapy (either definitive or adjuvant to surgery) who had completed their prescribed treatment, had a minimum follow-up of three months post-treatment, and had the CT images necessary for the extraction of Radiomics. Clearance was obtained from the Institutional Ethics Committee before collecting the data from patient records, and the study was registered with the Clinical Trials Registry of India (CTRI Number: CTRI/2018/04/013517). The details of the dataset are presented in Fig 1.

Fig 1. Flowchart of the study.


2.1. CT image acquisition

All CT scans were acquired on an Incisive 128-slice or a Big Bore 16-slice multidetector CT scanner (Philips Healthcare, Netherlands), following the standard image-acquisition protocol. The scanning parameters were: region, skull base down to the thoracic inlet; energy, 120 kV; tube current-time product, 250 mAs; pitch, 0.993; detector collimation, 0.625 mm; rotation time, 0.75 s; matrix, 512×512; section thickness, 3 mm; and field-of-view, 250 mm. Only contrast-enhanced CT scans were used for Radiomics analysis. A non-ionic iodinated contrast medium was used (Ultravist 300, Bayer Schering Pharma, Berlin, Germany; 50 mL at 4 mL/s), and scans were obtained 70 seconds after intravenous contrast administration.

2.2. Radiomics data extraction

The radiation oncologist annotated the diagnostic CT images separately for the primary tumour region and lymph node/s, and Radiomics features were thereafter extracted from these annotated regions. An example of the annotation of the tumour region and lymph node/s using the open-source 3D Slicer software, version 4.10.0, is shown in Fig 2.

Fig 2. Illustration of original contrast CT slices in axial view and the corresponding annotated slices using 3D Slicer: a) tumour region; b) lymph node region.

The Pyradiomics toolbox [8], provided as an extension of the 3D Slicer tool, was used to extract Radiomics features from the gross tumour and lymph nodes. The extracted features were structured in columns as inputs for each model. The data consisted of quantitative features computed from the medical images using various data-characterization algorithms, covering statistical, transform-based, model-based, and shape-based information on each Region of Interest (ROI) [3]. The Radiomics dataset included the following feature classes (a minimal extraction sketch follows the list):

  • Shape-based features: 14

  • Gray-level dependence matrix features: 14

  • Gray-level co-occurrence matrix features: 24

  • First-order statistics features: 18

  • Gray-level run length matrix features: 16

  • Gray-level size zone matrix features: 16

  • Neighbouring gray-tone difference matrix features: 5
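
As a minimal sketch of how these feature classes can be extracted with the PyRadiomics API, assuming the CT volume and its annotated label map have been exported from 3D Slicer as NRRD files (the file paths below are hypothetical placeholders, not the study's actual data):

```python
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()  # start from an empty feature set
for feature_class in ["shape", "firstorder", "glcm",
                      "glrlm", "glszm", "gldm", "ngtdm"]:
    extractor.enableFeatureClassByName(feature_class)

# execute() returns an ordered dict of feature name -> value for one ROI
features = extractor.execute("ct_volume.nrrd", "tumour_label.nrrd")
radiomics_row = {name: value for name, value in features.items()
                 if not name.startswith("diagnostics")}
```

Extracting the 107 features above separately for the tumour and nodal ROIs yields the 214 Radiomics columns used in this study.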

2.3. Workflow of the study

2.3.1 Structuring of the data

The clinical factors and Radiomics data were structured in separate columns placed side by side: the Radiomics dataset was appended alongside the clinical features [6], giving a total of 602 columns (388 clinical and 214 Radiomics features) and 311 rows (samples).

2.3.2 Data pre-processing

The raw collected data had to be pre-processed to make it suitable for ML algorithms. The data contained missing values, which were imputed beforehand using iterative imputation techniques [6]. To handle class imbalance in the dataset, minority oversampling techniques, namely RandomOverSampler, Synthetic Minority Oversampling Technique (SMOTE), BorderlineSMOTE, Support Vector Machine SMOTE (SVM SMOTE), and Adaptive Synthetic (ADASYN) sampling, were used [6]. All five minority oversampling techniques were tried for each model, and the results from the best-performing technique are reported.
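
A minimal sketch of this pre-processing step with scikit-learn and imbalanced-learn, assuming the assembled feature matrix `X` and label vector `y` (names are hypothetical). Note that imblearn's samplers return the original samples first and the synthetic samples last, which is what allows them to be separated in the next step:

```python
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import IterativeImputer
from imblearn.over_sampling import SMOTE  # or ADASYN, BorderlineSMOTE, SVMSMOTE, RandomOverSampler

# Impute missing values iteratively, modelling each feature from the others
X_imputed = IterativeImputer(random_state=0).fit_transform(X)

# Oversample the minority class; output = original samples followed by synthetics
X_res, y_res = SMOTE(random_state=0).fit_resample(X_imputed, y)
X_syn, y_syn = X_res[len(X_imputed):], y_res[len(X_imputed):]
```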

After performing minority oversampling, the synthetic samples were separated from the original samples, and the original samples were divided into majority and minority groups based on class labels; in our case, patients without any recurrence constituted the majority (class label 0). A 70:30 training-testing split was then performed on the majority-class samples. The training dataset was assembled from the majority train split plus the synthetic samples, while the testing dataset unconditionally included all original minority-class samples along with the majority test split.
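
This custom split can be sketched as follows, assuming `X_orig`, `y_orig` are the original imputed samples and `X_syn` the synthetic minority samples from the previous step (all names illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split

maj_mask = (y_orig == 0)                        # class 0: no recurrence (majority)
X_maj, X_min = X_orig[maj_mask], X_orig[~maj_mask]

# 70:30 split performed on the majority class only
X_maj_tr, X_maj_te = train_test_split(X_maj, test_size=0.3, random_state=0)

# Training set: majority train split + all synthetic minority samples
X_train = np.vstack([X_maj_tr, X_syn])
y_train = np.hstack([np.zeros(len(X_maj_tr)), np.ones(len(X_syn))])

# Testing set: majority test split + all original minority samples
X_test = np.vstack([X_maj_te, X_min])
y_test = np.hstack([np.zeros(len(X_maj_te)), np.ones(len(X_min))])
```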

Thereafter, the training dataset was scaled to zero mean and unit standard deviation using the standard scaler function. To counter the curse of dimensionality arising from the large number of variables relative to the sample size, feature selection algorithms, namely Sequential Forward Floating Selection (SFFS) and Boruta, were applied to the dataset. The best k features, selected on the basis of the highest accuracy scores (a hyperparameter of SFFS), were chosen to build the ML models.
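
A sketch of the scaling and SFFS steps using mlxtend's SequentialFeatureSelector with floating=True; the base estimator and the `k_features="best"` setting are assumptions, since the study reports only that k was chosen by accuracy:

```python
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from mlxtend.feature_selection import SequentialFeatureSelector as SFS

# Fit the scaler on the training split only, then apply to both splits
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# Sequential Forward Floating Selection, scored by accuracy
sffs = SFS(SVC(kernel="rbf"), k_features="best", forward=True,
           floating=True, scoring="accuracy", cv=5)
sffs.fit(X_train_s, y_train)
X_train_sel, X_test_sel = sffs.transform(X_train_s), sffs.transform(X_test_s)
```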

2.3.3 Fitting model on the training dataset

Predictive ML models using Radiomics alone and combined Clinico-Radiomic data were then built to predict the four clinical outcomes: Distant Metastases, Locoregional Recurrence, Residual Disease, and development of a New Primary. RF, KSVM, and XGBoost algorithms were fitted on the training dataset using the selected features, and the mean training accuracy was calculated over ten iterations.
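
A sketch of fitting the three classifiers compared in the study; the hyperparameters shown here are illustrative defaults, not the tuned values:

```python
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

models = {
    "KSVM": SVC(kernel="rbf", probability=True),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=0),
}
for name, model in models.items():
    model.fit(X_train_sel, y_train)  # fit on the SFFS-selected features
```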

2.3.4 Evaluation using test-split

The performance of each designed classifier was evaluated on the testing dataset. Each performance metric [9] is reported as the mean of ten iterations.
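
The evaluation on the held-out test split might look like the following sketch (a single iteration; the study averaged ten):

```python
from sklearn.metrics import (accuracy_score, f1_score,
                             recall_score, roc_auc_score)

clf = models["KSVM"]
y_pred = clf.predict(X_test_sel)
y_prob = clf.predict_proba(X_test_sel)[:, 1]

print("accuracy:   ", accuracy_score(y_test, y_pred))
print("sensitivity:", recall_score(y_test, y_pred, pos_label=1))
print("specificity:", recall_score(y_test, y_pred, pos_label=0))
print("weighted F1:", f1_score(y_test, y_pred, average="weighted"))
print("AUC-ROC:    ", roc_auc_score(y_test, y_prob))
```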

2.3.5 Visualising ROC plots and performance comparison

Stratified k-fold cross-validation with ten folds was performed, and its plots were visualized for each model to ensure that the reported performance was consistent across the dataset.
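
A sketch of this ten-fold stratified cross-validation check, assuming `X_sel` and `y` hold the selected features and labels for one outcome (names are hypothetical):

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
fold_aucs = cross_val_score(SVC(kernel="rbf", probability=True),
                            X_sel, y, cv=cv, scoring="roc_auc")
print(fold_aucs.mean(), fold_aucs.std())  # consistency across folds
```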

The best-performing ML model for each group (Clinical-only, Radiomics-only, and combined Clinico-Radiomic datasets) was identified based on accuracy, macro and weighted F1-scores, and the area under the Receiver Operating Characteristic (ROC) curve (Fig 3). The representative ML model for each of the three dataset groups was then compared to determine whether Radiomics could supplement or replace clinical data (S1C in S1 Appendix).

Fig 3. Flowchart describing workflow of the study using Clinical, Clinico-Radiomic and Radiomics datasets.


3. Results

A total of 311 patients with HNSCC were included in the study. The median follow-up duration of the cohort was 18 months (range: 3–85.1 months). The details of the dataset are summarized in Table 1. The Radiomics Quality Score (RQS) was calculated to be 45% (S1A in S1 Appendix) [10]. The best-performing ML algorithm for each of the three datasets is compared and reported below.

Table 1. Demographics of the retrospectively collected data of HNSCC (n = 311).

| Patient variable | Frequency (%) | Frequency of any recurrence (n = 96) | p-value |
|---|---|---|---|
| Age in years (mean: 56.5; range: 22–92) | | | |
| < 60 years | 178 (57.2) | 59 (33.1%) | 0.32 |
| ≥ 60 years | 133 (42.8) | 37 (27.8%) | |
| Gender | | | |
| Female | 54 (17.4) | 17 (31.5%) | 0.51 |
| Male | 257 (82.6) | 79 (30.7%) | |
| Site | | | |
| Oral cavity | 156 (50.2) | 44 (28.2%) | 0.52 |
| Oropharynx | 38 (12.2) | 13 (34.2%) | |
| Larynx, hypopharynx | 112 (36) | 37 (33%) | |
| Other sites | 5 (1.6) | 2 (40%) | |
| Staging | | | |
| T1 | 35 (11.3) | 15 (42.8%) | 0.33 |
| T2 | 98 (31.6) | 26 (26.5%) | |
| ≥ T3 | 178 (57.1) | 55 (30.8%) | |
| N0 | 120 (38.6) | 38 (31.6%) | 0.28 |
| N1 | 91 (29.2) | 19 (20.8%) | |
| ≥ N2 | 100 (32.2) | 39 (39%) | |
| Group | | | |
| I–II | 86 (27.6) | 20 (23.2%) | 0.56 |
| ≥ III | 225 (72.4) | 76 (33.7%) | |
| Radiotherapy type | | | |
| Definitive | 152 (48.9) | 53 (34.8%) | 0.47 |
| Adjuvant | 159 (51.1) | 43 (27.0%) | |
| Concurrent chemotherapy | | | |
| Yes | 173 (55.6) | 58 (33.5%) | 0.15 |
| No | 138 (44.4) | 38 (27.5%) | |

The SFFS algorithm, using the accuracy score to select the k best features, outperformed Boruta for optimal feature selection on all three datasets. Likewise, KSVM outperformed RF and XGBoost when the ML algorithms were implemented on the datasets.

It was observed that the model developed using Radiomics alone performed the poorest for all four clinical outcomes. In contrast, the Clinico-Radiomics dataset provided the best predictive performance. The persistence of residual disease on completion of treatment was the most challenging clinical outcome to predict, and the Radiomics-alone dataset performed poorly in this regard.

With Clinico-Radiomic data, the mean training and testing accuracy were 100% and 97% for Distant Metastases, 100% and 72% for Locoregional Recurrence, 100% and 99% for development of a New Primary, and 100% and 96% for persistent Residual Disease, respectively. Sensitivity and specificity were 96% and 98% for Distant Metastases, 100% and 82% for Locoregional Recurrence, 94% and 100% for New Primary, and 94% and 100% for Residual Disease, respectively. The mean AUC-ROC was 99%, 94%, 99%, and 98% for Distant Metastases, Locoregional Recurrence, New Primary, and Residual Disease, respectively. Further, the per-class performance of each designed binary classifier was measured on the training and testing datasets using the weighted F1-score for imbalanced classes; it was typically over 90%, except for Locoregional Recurrence, where the score was only 70% (Tables 2–5), with evidence of overfitting. To reduce this overfitting, we reiterated after changing the hyperparameters of SFFS, as well as of KSVM and RF, with no significant improvement.

Table 2. Comparative results for Distant Recurrence (311 samples).

The column of the model utilizing only clinical data from our previous study [6] is included for comparison.

| Distant Recurrence | Only Clinical | Clinico-Radiomic | Only Radiomics |
|---|---|---|---|
| Total number of independent variables (columns) | 388 | 602 | 214 |
| Best-performing ML algorithm | KSVM | KSVM | KSVM |
| Minority oversampling method | ADASYN | SMOTE | ADASYN |
| No. of samples after oversampling | 573 | 562 | 571 |
| No. of synthetic samples | 262 | 251 | 260 |
| No. of features selected by SFFS | 18 | 42 | 23 |
| Original samples, class 0 (no recurrence) | 281 | 281 | 281 |
| Original samples, class 1 (recurrence) | 30 | 30 | 30 |
| Training samples, class 0 (no recurrence) | 188 | 188 | 188 |
| Training samples, class 1 (recurrence) | 262 | 251 | 260 |
| Testing samples, class 0 (no recurrence) | 93 | 93 | 93 |
| Testing samples, class 1 (recurrence) | 30 | 30 | 30 |
| Mean training accuracy (CI) | 0.99 (0.989–0.991) | 1 (1.000–1.000) | 0.87 (0.870–0.870) |
| Mean testing accuracy (CI) | 0.94 (0.938–0.942) | 0.97 (0.968–0.972) | 0.84 (0.829–0.851) |
| Sensitivity (CI) | 0.87 (0.869–0.871) | 0.96 (0.958–0.962) | 0.95 (0.949–0.951) |
| Specificity (CI) | 0.96 (0.959–0.961) | 0.98 (0.978–0.982) | 0.92 (0.919–0.921) |
| Training F1, class 0 (CI) | 0.99 (0.998–0.998) | 1 (0.999–0.999) | 0.84 (0.835–0.847) |
| Training F1, class 1 (CI) | 0.99 (0.998–0.998) | 1 (0.999–0.999) | 0.89 (0.883–0.895) |
| Macro training F1 (CI) | 0.96 (0.958–0.962) | 0.98 (0.979–0.981) | 0.85 (0.849–0.851) |
| Weighted training F1 (CI) | 0.99 (0.989–0.990) | 1 (0.996–1.000) | 0.87 (0.868–0.873) |
| Testing F1, class 0 (CI) | 0.97 (0.975–0.977) | 0.98 (0.977–0.989) | 0.88 (0.873–0.882) |
| Testing F1, class 1 (CI) | 0.91 (0.905–0.917) | 0.94 (0.933–0.945) | 0.75 (0.741–0.752) |
| Macro testing F1 (CI) | 0.93 (0.928–0.932) | 0.95 (0.948–0.952) | 0.80 (0.798–0.802) |
| Weighted testing F1 (CI) | 0.96 (0.958–0.969) | 0.97 (0.962–0.971) | 0.85 (0.843–0.856) |
| Base algorithm for SFFS | KSVM | KSVM | KSVM |
| Mean AUC-ROC (CI) | 0.97 (0.969–0.971) | 0.99 (0.989–0.991) | 0.79 (0.783–0.797) |

Table 5. Comparative results for Residual Disease (152 samples).

The column of the model utilizing only clinical data from our previous study [6] is included for comparison.

| Residual Disease | Only Clinical | Clinico-Radiomic | Only Radiomics |
|---|---|---|---|
| Total number of independent variables (columns) | 354 | 568 | 214 |
| Best-performing ML algorithm | KSVM | KSVM | KSVM |
| Minority oversampling method | SMOTE | ADASYN | ADASYN |
| No. of samples after oversampling | 270 | 270 | 270 |
| No. of synthetic samples | 118 | 118 | 118 |
| No. of features selected by SFFS | 312 | 98 | 15 |
| Original samples, class 0 (no recurrence) | 135 | 135 | 135 |
| Original samples, class 1 (recurrence) | 17 | 17 | 17 |
| Training samples, class 0 (no recurrence) | 90 | 90 | 90 |
| Training samples, class 1 (recurrence) | 118 | 118 | 118 |
| Testing samples, class 0 (no recurrence) | 45 | 45 | 45 |
| Testing samples, class 1 (recurrence) | 17 | 17 | 17 |
| Mean training accuracy (CI) | 1.0 (0.999–1.001) | 1.0 (0.999–1.001) | 0.88 (0.879–0.881) |
| Mean testing accuracy (CI) | 0.92 (0.906–0.914) | 0.96 (0.957–0.963) | 0.83 (0.822–0.838) |
| Sensitivity (CI) | 0.89 (0.886–0.894) | 0.94 (0.937–0.943) | 0.85 (0.848–0.852) |
| Specificity (CI) | 1.0 (0.999–1.000) | 1.0 (0.999–1.000) | 0.88 (0.877–0.883) |
| Training F1, class 0 (CI) | 1 (0.999–1.000) | 1 (0.999–1.000) | 0.86 (0.852–0.862) |
| Training F1, class 1 (CI) | 1 (0.999–1.000) | 1 (0.999–1.000) | 0.9 (0.899–0.912) |
| Macro training F1 (CI) | 0.99 (0.986–0.994) | 0.99 (0.986–0.994) | 0.85 (0.847–0.853) |
| Weighted training F1 (CI) | 1 (0.999–1.000) | 1 (0.999–1.000) | 0.88 (0.874–0.882) |
| Testing F1, class 0 (CI) | 0.95 (0.939–0.954) | 0.98 (0.965–0.986) | 0.87 (0.862–0.878) |
| Testing F1, class 1 (CI) | 0.88 (0.876–0.892) | 0.94 (0.934–0.954) | 0.75 (0.749–0.763) |
| Macro testing F1 (CI) | 0.89 (0.886–0.894) | 0.95 (0.944–0.956) | 0.80 (0.797–0.803) |
| Weighted testing F1 (CI) | 0.93 (0.924–0.931) | 0.97 (0.968–0.985) | 0.84 (0.832–0.841) |
| Base algorithm for SFFS | KSVM | KSVM | KSVM |
| Mean AUC-ROC (CI) | 0.99 (0.986–0.994) | 0.98 (0.977–0.983) | 0.85 (0.838–0.862) |

Table 3. Comparative results for Locoregional Recurrence (311 samples).

The column of the model utilizing only clinical data from our previous study [6] is included for comparison.

| Locoregional Recurrence | Only Clinical | Clinico-Radiomic | Only Radiomics |
|---|---|---|---|
| Total number of independent variables (columns) | 384 | 598 | 214 |
| Best-performing ML algorithm | KSVM | KSVM | RF |
| Minority oversampling method | SMOTE | BorderlineSMOTE | SMOTE |
| No. of samples after oversampling | 514 | 514 | 514 |
| No. of synthetic samples | 203 | 203 | 203 |
| No. of features selected by SFFS | 24 | 89 | 16 |
| Original samples, class 0 (no recurrence) | 257 | 257 | 257 |
| Original samples, class 1 (recurrence) | 54 | 54 | 54 |
| Training samples, class 0 (no recurrence) | 172 | 172 | 172 |
| Training samples, class 1 (recurrence) | 203 | 203 | 203 |
| Testing samples, class 0 (no recurrence) | 85 | 85 | 85 |
| Testing samples, class 1 (recurrence) | 54 | 54 | 54 |
| Mean training accuracy (CI) | 0.96 (0.958–0.962) | 1 (1.000–1.000) | 0.87 (0.869–0.871) |
| Mean testing accuracy (CI) | 0.73 (0.718–0.742) | 0.72 (0.718–0.722) | 0.71 (0.706–0.714) |
| Sensitivity (CI) | 0.83 (0.828–0.832) | 1 (1.000–1.000) | 0.62 (0.618–0.622) |
| Specificity (CI) | 0.73 (0.728–0.732) | 0.82 (0.818–0.822) | 0.83 (0.828–0.832) |
| Training F1, class 0 (CI) | 0.97 (0.964–0.976) | 1 (0.996–1.000) | 0.83 (0.813–0.835) |
| Training F1, class 1 (CI) | 0.97 (0.952–0.975) | 1 (1.000–1.000) | 0.89 (0.872–0.892) |
| Macro training F1 (CI) | 0.95 (0.945–0.955) | 0.98 (0.978–0.982) | 0.85 (0.848–0.852) |
| Weighted training F1 (CI) | 0.97 (0.961–0.976) | 1 (1.000–1.000) | 0.86 (0.852–0.869) |
| Testing F1, class 0 (CI) | 0.82 (0.813–0.821) | 0.63 (0.625–0.638) | 0.73 (0.724–0.740) |
| Testing F1, class 1 (CI) | 0.69 (0.682–0.693) | 0.72 (0.718–0.736) | 0.68 (0.674–0.692) |
| Macro testing F1 (CI) | 0.69 (0.687–0.693) | 0.70 (0.697–0.703) | 0.69 (0.688–0.692) |
| Weighted testing F1 (CI) | 0.77 (0.752–0.779) | 0.66 (0.652–0.669) | 0.71 (0.701–0.721) |
| Base algorithm for SFFS | KSVM | KSVM | RF |
| Mean AUC-ROC (CI) | 0.73 (0.729–0.731) | 0.94 (0.938–0.942) | 0.78 (0.776–0.784) |

Table 4. Comparative results for New Primary (311 samples).

The column of the model utilizing only clinical data from our previous study [6] is included for comparison.

| New Primary | Only Clinical | Clinico-Radiomic | Only Radiomics |
|---|---|---|---|
| Total number of independent variables (columns) | 388 | 602 | 214 |
| Best-performing ML algorithm | KSVM | KSVM | KSVM |
| Minority oversampling method | SMOTE | ADASYN | ADASYN |
| No. of samples after oversampling | 588 | 594 | 587 |
| No. of synthetic samples | 277 | 283 | 276 |
| No. of features selected by SFFS | 42 | 43 | 53 |
| Original samples, class 0 (no recurrence) | 294 | 294 | 294 |
| Original samples, class 1 (recurrence) | 17 | 17 | 17 |
| Training samples, class 0 (no recurrence) | 196 | 196 | 196 |
| Training samples, class 1 (recurrence) | 277 | 283 | 276 |
| Testing samples, class 0 (no recurrence) | 98 | 98 | 98 |
| Testing samples, class 1 (recurrence) | 17 | 17 | 17 |
| Mean training accuracy (CI) | 1 (1.000–1.000) | 1 (1.000–1.000) | 0.87 (0.869–0.871) |
| Mean testing accuracy (CI) | 0.96 (0.959–0.961) | 0.99 (0.989–0.991) | 0.78 (0.776–0.784) |
| Sensitivity (CI) | 0.94 (0.938–0.942) | 0.94 (0.938–0.942) | 0.46 (0.457–0.463) |
| Specificity (CI) | 0.98 (0.978–0.982) | 1 (1.000–1.000) | 1.0 (0.999–1.000) |
| Training F1, class 0 (CI) | 1 (1.000–1.000) | 1 (1.000–1.000) | 0.82 (0.813–0.829) |
| Training F1, class 1 (CI) | 1 (1.000–1.000) | 1 (1.000–1.000) | 0.90 (0.889–0.910) |
| Macro training F1 (CI) | 0.99 (0.987–0.993) | 0.99 (0.987–0.993) | 0.85 (0.848–0.852) |
| Weighted training F1 (CI) | 1 (1.000–1.000) | 1 (1.000–1.000) | 0.87 (0.865–0.876) |
| Testing F1, class 0 (CI) | 0.99 (0.986–0.991) | 0.99 (0.986–0.990) | 0.85 (0.843–0.852) |
| Testing F1, class 1 (CI) | 0.95 (0.948–0.952) | 0.98 (0.978–0.982) | 0.57 (0.562–0.583) |
| Macro testing F1 (CI) | 0.92 (0.917–0.923) | 0.97 (0.967–0.973) | 0.70 (0.698–0.702) |
| Weighted testing F1 (CI) | 0.98 (0.978–0.982) | 0.99 (0.987–0.993) | 0.81 (0.806–0.820) |
| Base algorithm for SFFS | KSVM | KSVM | KSVM |
| Mean AUC-ROC (CI) | 0.98 (0.978–0.982) | 0.99 (0.990–0.990) | 0.80 (0.796–0.804) |

ROC curves were obtained to visualize the results. Ten-fold stratified cross-validation was performed using the best-performing ML workflow combination identified, to ensure consistency of the reported ROC values (Fig 4).

Fig 4. ROC curves comparing the ML performance with the three dataset groups (the thin lines represent each iteration’s ROC, and the thick line is the mean ROC).


The column of the model utilizing only clinical data from our previous study [6] is included for comparison.

Using only the clinical dataset, the mean AUC was 0.99 (±0.00), 0.96 (±0.02), 0.99 (±0.00), and 0.98 (±0.03) for Distant Recurrence, Locoregional Recurrence, New Primary, and Residual Disease, respectively. With the Clinico-Radiomic dataset, the corresponding values were 0.99 (±0.00), 0.98 (±0.01), 0.99 (±0.00), and 0.95 (±0.05). Using only Radiomics, the values were the lowest, at 0.74 (±0.10), 0.85 (±0.07), 0.82 (±0.06), and 0.73 (±0.16).

4. Discussion

Accurate prediction of eventual outcomes carries great importance in medicine, where outcomes can be highly variable across patients, scenarios, and treatments. Radiomics has largely unexplored potential in clinical prediction and has been utilized to varying degrees in radiotherapy for HNSCC, including tumour segmentation, prognostication, and monitoring of changes in normal tissues following radiotherapy [11]. Recent developments in AI have exponentially enhanced the ability to analyze large-volume data, further increasing the potential of Radiomics. However, it has been argued that Radiomics is best used alongside additional information, including clinical features, rather than in isolation, to maximize the potential of predictive modeling [12]. This study was therefore conducted on the premise that adding Radiomics to clinico-pathological data would improve the performance of ML models in predicting the outcomes of HNSCC treated with RT.

The Clinico-Radiomic dataset performed best for all four outcome predictions in this study. Although the models built on clinical data alone already provided excellent predictive ability, adding Radiomics improved it further.

Radiomics has been studied for a myriad of applications across all disciplines involved in HNSCC management, including molecular characterization [13], image segmentation [14], pre-surgical decision-making [15], prognostication, and improving treatment quality and efficiency. However, most prognostication studies on Radiomics in HNSCC have utilized PET-CT scans and, to a lesser extent, MRI, given the wealth of biological and chemical data captured in these images. In this study, Radiomics was applied to diagnostic CT scans because these are the most widely available scans used to evaluate disease extent and plan treatment. CT-scan Radiomics has been studied less extensively, with conflicting conclusions: Ger et al. concluded that neither CT nor PET was independently capable of predicting survival in HNSCC [16], while Cozzi et al. successfully predicted survival and local control in a retrospective series of 110 patients with HNSCC treated with radiotherapy [17].

Fewer studies have attempted to combine Radiomics with AI to enhance its potential. For example, Diamant et al. successfully predicted distant metastases in HNSCC by applying a convolutional neural network (CNN) to Radiomics data [18]. A systematic review by Giraud et al. summarizes possible applications in HNSCC [19], and more recent reviews have looked explicitly at this integrated application for predicting disease outcomes and treatment toxicities [20, 21]. Despite promising reports, they all agree on the need for prospective validation studies prior to clinical translation.

The ML algorithms performed best when fitted on the dataset under optimal hyperparameter settings obtained using grid search [22]. The SFFS feature selection method [23] automatically selected the k best features from an exhaustive list of variables, so that the features yielding the best accuracy scores were chosen for model-building, thereby mitigating the curse of dimensionality [24].
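
An illustrative grid search for the KSVM hyperparameters is sketched below; the paper reports using grid search [22] but not the exact grid, so the parameter values here are assumptions:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {"C": [0.1, 1, 10, 100],
              "gamma": ["scale", 0.1, 0.01, 0.001]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid,
                      scoring="accuracy", cv=5)
search.fit(X_train_sel, y_train)
print(search.best_params_, search.best_score_)
```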

While it provides a proof-of-concept for integrated Clinico-Radiomic predictive ML models, this study has several limitations. First, it is a single-centre study that relied on retrospective data with a small sample size and a considerably limited follow-up duration. Second, it used only contrast CT images, which can add a degree of heterogeneity to the collected Radiomics data; Radiomics demands high-quality, standardized imaging for optimal performance [25], and factors such as motion artifacts, the time lapse between contrast administration and image acquisition, and image-processing algorithms have been shown to affect the data. Third, the study does not attempt to determine the mechanisms underlying the possible correlations between radiological findings and outcomes. Moreover, as with most other research on Radiomics and AI [26], the findings require multicentric, protocol-constrained imaging and prospective validation prior to routine use. Developing healthcare data management systems able to recognize, classify, and segregate data in real time could greatly expedite the implementation of such AI applications in the clinic.

5. Conclusion

Radiomics features supplemented with clinical features predicted the clinical outcomes of HNSCC treated with radiotherapy with a high degree of accuracy, whereas Radiomics data alone offered suboptimal performance. For all models, KSVM performed better than either RF or XGBoost. Weighted average training and testing F1-scores were equally good with the Clinical and Clinico-Radiomic datasets but poor with the Radiomics-only dataset. We conclude that Radiomics data alone may be insufficient for predicting HNSCC outcomes, while Clinico-Radiomic data can provide predictions of clinical utility. With prospective validation, such predictive models could be of great value in the clinic.

Supporting information

S1 Checklist. TREND statement checklist.

(PDF)

S1 Appendix

(PDF)

S1 Protocol

(PDF)

Acknowledgments

The authors would like to acknowledge Mr. Manjunatha Maiya, Project Manager, Philips Research India, Bangalore, Karnataka, Mr. Prasad R V, Program Manager Philips Research India, Bangalore, Karnataka, and Mr. Shrinidhi G. C., Chief Physicist, Dept. of Radiotherapy, KMC, Manipal for their help and support in the conduct of this research.

List of abbreviations

HNSCC: Head and Neck Squamous Cell Carcinoma

RT: Radiation Therapy

RF: Random Forest

KSVM: Kernel Support Vector Machine

AUC: Area Under the Receiver Operating Characteristic Curve

SFFS: Sequential Forward Floating Selection

ML: Machine Learning

ROC: Receiver Operating Characteristic Curve

CNN: Convolutional Neural Network

CT: Computed Tomography

AI: Artificial Intelligence

PET: Positron Emission Tomography

MRI: Magnetic Resonance Imaging

Data Availability

The data cannot be shared because it is the intellectual property of Manipal Academy of Higher Education, Manipal, India, and Philips Healthcare, Bangalore, which may have future commercial interests in the dataset. Requests regarding the data may be directed to: Mr. Manjunatha Maiya, Senior Project Manager, Philips, Bangalore (email: manjunatha.maiya@philips.com). The authors had no special access privileges to the data that others would not have.

Funding Statement

The study was funded by Manipal Academy of Higher Education and Philips, India. The role of the funding agency was in designing the study.

References

1. Forghani R, Chatterjee A, Reinhold C, Pérez-Lara A, Romero-Sanchez G, Ueno Y, et al. Head and neck squamous cell carcinoma: prediction of cervical lymph node metastasis by dual-energy CT texture analysis with machine learning. Eur Radiol. 2019 Nov;29(11):6172–6181. doi: 10.1007/s00330-019-06159-y
2. Tsougos I, Vamvakas A, Kappas C, Fezoulidis I, Vassiou K. Application of Radiomics and Decision Support Systems for Breast MR Differential Diagnosis. Comput Math Methods Med. 2018 Sep 23;2018:7417126. doi: 10.1155/2018/7417126
3. Mayerhoefer ME, Materka A, Langs G, Häggström I, Szczypiński P, Gibbs P, et al. Introduction to Radiomics. J Nucl Med. 2020 Apr;61(4):488–495. doi: 10.2967/jnumed.118.222893
4. Zwanenburg A, Leger S, Vallières M, Löck S. Image biomarker standardisation initiative. arXiv:1612.07003 [Internet]. 2019. Available from: https://arxiv.org/abs/1612.07003
5. Peng Z, Wang Y, Wang Y, Jiang S, Fan R, Zhang H, et al. Application of radiomics and machine learning in head and neck cancers. Int J Biol Sci. 2021 Jan 1;17(2):475–486. doi: 10.7150/ijbs.55716
6. Gangil T, Shahabuddin AB, Dinesh Rao B, Palanisamy K, Chakrabarti B, Sharan K. Predicting clinical outcomes of radiotherapy for head and neck squamous cell carcinoma patients using machine learning algorithms. J Big Data. 2022;9(1):25. doi: 10.1186/s40537-022-00578-3
7. Schoenfeld DA. Sample-size formula for the proportional-hazards regression model. Biometrics. 1983;39(2):499–503. Available from: http://www.jstor.org/stable/2531021
8. Van Griethuysen JJM, Fedorov A, Parmar C, Hosny A, Aucoin N, Narayan V, et al. Computational Radiomics System to Decode the Radiographic Phenotype. Cancer Res. 2017 Nov 1;77(21):e104–e107. doi: 10.1158/0008-5472.CAN-17-0339
9. Ferri C, Hernández-Orallo J, Modroiu R. An experimental comparison of performance measures for classification. Pattern Recognit Lett. 2009;30(1):27–38.
10. Lambin P, Leijenaar RTH, Deist TM, Peerlings J, de Jong EEC, van Timmeren J, et al. Radiomics: the bridge between medical imaging and personalized medicine. Nat Rev Clin Oncol. 2017 Dec;14(12):749–762. doi: 10.1038/nrclinonc.2017.141
11. Wong AJ, Kanwar A, Mohamed AS, Fuller CD. Radiomics in head and neck cancer: from exploration to application. Transl Cancer Res. 2016 Aug;5(4):371–382. doi: 10.21037/tcr.2016.07.18
12. Holzinger A, Haibe-Kains B, Jurisica I. Why imaging data alone is not enough: AI-based integration of imaging, omics, and clinical data. Eur J Nucl Med Mol Imaging. 2019 Dec;46(13):2722–2730. doi: 10.1007/s00259-019-04382-9
13. Huang C, Cintra M, Brennan K, Zhou M, Colevas AD, Fischbein N, et al. Development and validation of radiomic signatures of head and neck squamous cell carcinoma molecular features and subtypes. EBioMedicine. 2019 Jul;45:70–80. doi: 10.1016/j.ebiom.2019.06.034
14. Cardenas CE, Yang J, Anderson BM, Court LE, Brock KB. Advances in Auto-Segmentation. Semin Radiat Oncol. 2019;29(3):185–97. doi: 10.1016/j.semradonc.2019.02.001
15. Wu W, Ye J, Wang Q, Luo J, Xu S. CT-Based Radiomics Signature for the Preoperative Discrimination Between Head and Neck Squamous Cell Carcinoma Grades. Front Oncol. 2019;9. Available from: https://www.frontiersin.org/articles/10.3389/fonc.2019.00821
16. Ger RB, Zhou S, Elgohari B, Elhalawani H, Mackin DM, Meier JG, et al. Radiomics features of the primary tumor fail to improve prediction of overall survival in large cohorts of CT- and PET-imaged head and neck cancer patients. PLoS One. 2019;14(9):1–13. doi: 10.1371/journal.pone.0222509
17. Cozzi L, Franzese C, Fogliata A, Franceschini D, Navarria P, Tomatis S, et al. Predicting survival and local control after radiochemotherapy in locally advanced head and neck cancer by means of computed tomography based radiomics. Strahlenther Onkol. 2019 Sep;195(9):805–818. doi: 10.1007/s00066-019-01483-0
18. Diamant A, Chatterjee A, Vallières M, Shenouda G, Seuntjens J. Deep learning in head & neck cancer outcome prediction. Sci Rep. 2019;9(1):2764. doi: 10.1038/s41598-019-39206-1
19. Giraud P, Giraud P, Gasnier A, El Ayachy R, Kreps S, Foy JP, et al. Radiomics and Machine Learning for Radiotherapy in Head and Neck Cancers. Front Oncol. 2019 Mar 27;9:174. doi: 10.3389/fonc.2019.00174
20. Chinnery T, Arifin A, Tay KY, Leung A, Nichols AC, Palma DA, et al. Utilizing Artificial Intelligence for Head and Neck Cancer Outcomes Prediction From Imaging. Can Assoc Radiol J. 2021 Feb;72(1):73–85. doi: 10.1177/0846537120942134
21. Carbonara R, Bonomo P, Di Rito A, Didonna V, Gregucci F, Ciliberti MP, et al. Investigation of Radiation-Induced Toxicity in Head and Neck Cancer Patients through Radiomics and Machine Learning: A Systematic Review. J Oncol. 2021;2021:5566508. doi: 10.1155/2021/5566508
22. Belete DM, Huchaiah MD. Grid search in hyperparameter optimization of machine learning models for prediction of HIV/AIDS test results. Int J Comput Appl. 2022;44(9):875–86. doi: 10.1080/1206212X.2021.1974663
23. Pudil P, Novovičová J, Kittler J. Floating search methods in feature selection. Pattern Recognit Lett. 1994;15(11):1119–25.
24. Verleysen M, François D. The Curse of Dimensionality in Data Mining and Time Series Prediction. In: Cabestany J, Prieto A, Sandoval F, editors. Computational Intelligence and Bioinspired Systems. Berlin, Heidelberg: Springer; 2005. p. 758–70.
25. Papadimitroulas P, Brocki L, Christopher Chung N, Marchadour W, Vermet F, Gaubert L, et al. Artificial intelligence: Deep learning in oncological radiomics and challenges of interpretability and data harmonization. Phys Med. 2021;83:108–21. doi: 10.1016/j.ejmp.2021.03.009
26. Zhai TT, van Dijk LV, Huang BT, Lin ZX, Ribeiro CO, Brouwer CL, et al. Improving the prediction of overall survival for head and neck cancer patients using image biomarkers in combination with clinical parameters. Radiother Oncol. 2017 Aug;124(2):256–262. doi: 10.1016/j.radonc.2017.07.013

Decision Letter 0

Panagiotis Balermpas

20 Jul 2022

PONE-D-22-16089

Utility of adding Radiomics to clinical features in predicting outcomes of radiotherapy for Head and Neck Cancer using Machine Learning

PLOS ONE

Dear Dr. Kadavigere,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

Dear authors,

the reviewers have now completed their reports and found that your manuscript in its present form is not suitable for publication.

There are some major concerns that have to be answered if you want to resubmit it:

1) There are some ethical concerns, as you state that no consent was acquired from the participants for use of the data because "the study was retrospective"

2) The methodology used is not described sufficiently

3) There is no split in training and validation set and maybe an additional cohort should be analyzed

4) The English language used is unacceptable and the manuscript should be edited by a native speaker/scientific writer

==============================

Please submit your revised manuscript by Sep 03 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Panagiotis Balermpas

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

2. We suggest you thoroughly copyedit your manuscript for language usage, spelling, and grammar. If you do not know anyone who can help you do this, you may wish to consider employing a professional scientific editing service. 

Whilst you may use any professional scientific editing service of your choice, PLOS has partnered with both American Journal Experts (AJE) and Editage to provide discounted services to PLOS authors. Both organizations have experience helping authors meet PLOS guidelines and can provide language editing, translation, manuscript formatting, and figure formatting to ensure your manuscript meets our submission guidelines. To take advantage of our partnership with AJE, visit the AJE website (http://learn.aje.com/plos/) for a 15% discount off AJE services. To take advantage of our partnership with Editage, visit the Editage website (www.editage.com) and enter referral code PLOSEDIT for a 15% discount off Editage services.  If the PLOS editorial team finds any language issues in text that either AJE or Editage has edited, the service provider will re-edit the text for free.

Upon resubmission, please provide the following:

The name of the colleague or the details of the professional service that edited your manuscript

A copy of your manuscript showing your changes by either highlighting them or using track changes (uploaded as a *supporting information* file)

A clean copy of the edited manuscript (uploaded as the new *manuscript* file).

3. Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section. Please ensure that your ethics statement is included in your manuscript, as the ethics statement entered into the online submission form will not be published alongside your manuscript. 

4. Please provide additional details regarding participant consent. In the Methods section, please ensure that you have specified (1) whether consent was informed and (2) what type you obtained (for instance, written or verbal). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

5. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. 

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. Please see http://www.bmj.com/content/340/bmj.c181.long for guidelines on how to de-identify and prepare clinical data for publication. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Dear authors,

1. I suggest that a native speaker proofreads the manuscript. There are several grammar mistakes, and the general style could be improved.

2. Please check the row for age in the patient characteristics table.

3. Even though you point to a previous study, I find the paper is lacking some high-level explanation of the methodology (how was the train/test split performed, which feature selection algorithms were employed, what are the weights in the F1-score computation, etc.). Right now it is very hard for the reader to understand what methodology was used in this study.

4. I have concerns whether the follow-up time is too little to be clinically significant.

Reviewer #2: Thanks for the interesting article evaluating the value of radiomics in CT imaging.

I have a couple of questions, mainly on the methodology, which need to be clarified.

1) How was the annotation of the PT and the LN done? Manually? By whom? was each LN delineated separatly?

2) Please provide more details on the extraction of features:

- How many features were extracted per feature type?

- Which pre-processing was done?

- Were cubic voxels used?

- Which binning method?

3) Why did you not split the dataset into a training and validation dataset? Since this was not performed, a cross-validation is needed.

To better address all these questions, please determine the radiomics quality score:

https://www.radiomics.world/rqs

and put your score in the manuscript, including your answers as an appendix.

Results:

With clinico-radiomic data, the mean training and testing accuracy -> From Materials and Methods it is not clear that you have split into training and test sets, please explain.

All your results on accuracy, sensitivity,... need confidence intervals.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Dec 15;17(12):e0277168. doi: 10.1371/journal.pone.0277168.r002

Author response to Decision Letter 0


2 Sep 2022

Date: 16th August 2022

Subject: Response to the reviewers’ queries and suggestions [PONE-D-22-16089]

(PONE-D-22-16089: Utility of adding Radiomics to clinical features in predicting outcomes of radiotherapy for Head and Neck Cancer using Machine Learning)

The authors would like to express their profuse gratitude for the reviewers’ time and efforts, and for the excellent criticisms raised in order to facilitate the betterment of our article. The queries have been addressed in the manuscript, and the replies are stated below:

Queries summarized (editor):

1) There are some ethical concerns, as you state that no consent was acquired from the participants for use of the data because "the study was retrospective"

Ethical clearance for this study was obtained from our Institutional Ethics Committee (approval no. 165/2018). Since the study involved retrospective data retrieved from medical records/imaging archives, and the patients were not contacted, participation consent could not be taken and was waived by the IEC.

2) The methodology used is not described sufficiently

The methodology section has been updated to provide greater detail in the revised manuscript (sections 2.3.1-2.3.6, pages 3-6).

3) There is no split in training and validation set and maybe an additional cohort should be analysed.

Multiple iterations of training and testing splits at a 70:30 ratio were performed within the collected dataset. The details have been added and clarified in the revised manuscript (sections 2.3.3-2.3.5, pages 5-6).

4) The English language used is unacceptable and the manuscript should be edited by a native speaker/scientific writer.

The manuscript has been extensively re-written to address the grammatical and clarity errors, and has been screened through language-analysis software (professional version of Grammarly) for inaccuracies.

Other observations: Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section. Please ensure that your ethics statement is included in your manuscript, as the ethics statement entered into the online submission form will not be published alongside your manuscript.

Statement moved to the methodology section as suggested.(page 4-5)

Other observations: Please provide additional details regarding participant consent. In the Methods section, please ensure that you have specified (1) whether consent was informed and (2) what type you obtained (for instance, written or verbal). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

The data being retrospective, informed consent from the patients was waived off by the Institutional Ethics Committee. Its details are mentioned in the methodology section of manuscript. (page 4-5)

Other observations: We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. Please see http://www.bmj.com/content/340/bmj.c181.long for guidelines on how to de-identify and prepare clinical data for publication. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

The collected data is the intellectual property of Manipal Academy of Higher Education, Manipal (India), and Philips, Bangalore (India), and as per the exhibit, we are not permitted to share the collected data. Moreover, we don’t have the government regulatory body (Health Ministry Screening Committee, Indian Council of Medical Research) approval for data sharing.

Reviewers' comments to the authors:

Reviewer #1:

1. I suggest that a native speaker proofreads the manuscript. There are several grammar mistakes, and the general style could be improved.

The manuscript has been extensively re-written to address the grammatical and clarity errors, and has been screened through language-analysis software (professional version of Grammarly) for inaccuracies.

2. Please check the row for age in the patient characteristics table.

The data entered was incorrect; thank you very much for bringing it to our notice. The entire table has been thoroughly reviewed and corrected (pages 6-7).

3. Even though you point to a previous study, I find the paper is lacking some high-level explanation of the methodology (how was the train/test split performed, which feature selection algorithms were employed, what are the weights in the F1-score computation, etc). Right now it is very hard for the reader to understand what methodology was used in this study.

The methodology section has been updated to describe the workflow in a more coherent manner.(page 3-6)

4. I have concerns whether the follow-up time is too little to be clinically significant.

We agree that the follow-up duration, while sufficient for reporting early response, is an inadequate representation of the overall clinical picture. This has been reiterated in the final paragraph of the Discussion, stating the limitations of our study (pages 13-14).

Reviewer #2:

1) How was the annotation of the PT and the LN done? Manually? By whom? was each LN delineated separately?

The annotations of the PT and LN were performed manually by the Radiation Oncologist using the 3D Slicer tool. The same has been mentioned in the manuscript (page 4).

2) Please provide more details on the extraction of features:

- how many features were extracted per feature type?

We extracted features under seven classes: shape-based (14), gray-level dependence matrix (14), gray-level co-occurrence matrix (24), first-order statistics (18), gray-level run-length matrix (16), gray-level size-zone matrix (16), and neighbouring gray-tone difference matrix (5), for a total of 107 features. The same has been mentioned in the manuscript (pages 4-5).
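For illustration, these seven feature classes can be enabled in PyRadiomics roughly as follows; this is a sketch, not the exact extraction script, and the file names are placeholders:

    from radiomics import featureextractor

    # Enable only the seven feature classes listed above.
    extractor = featureextractor.RadiomicsFeatureExtractor()
    extractor.disableAllFeatures()
    for feature_class in ('shape', 'firstorder', 'glcm', 'glrlm', 'glszm', 'gldm', 'ngtdm'):
        extractor.enableFeatureClassByName(feature_class)

    # 'ct.nrrd' and 'gtv_mask.nrrd' are hypothetical file names.
    features = extractor.execute('ct.nrrd', 'gtv_mask.nrrd')
    radiomic = {k: v for k, v in features.items() if not k.startswith('diagnostics')}
    print(len(radiomic))  # 14+18+24+16+16+14+5 = 107 features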

-Which pre-processing was done?

No separate pre-processing of the images was performed; radiomics features were obtained after annotating the CT images with primary and nodal volumes, using the PyRadiomics toolbox (an extension provided with the 3D Slicer annotation tool). Before subjecting the data to the ML training algorithms, standard scaling was applied to all variables (pages 5-6).
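A minimal sketch of the scaling step, assuming X_train and X_test are the assembled feature matrices, with the scaler fitted on training data only:

    from sklearn.preprocessing import StandardScaler

    scaler = StandardScaler()                       # zero mean, unit variance per feature
    X_train_scaled = scaler.fit_transform(X_train)  # fit on training data only
    X_test_scaled = scaler.transform(X_test)        # reuse training statistics on test data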

- Were cubic voxels used?

Yes. Shape-based 3D-radiomics features were used in the analysis.

- Which binning method?

No separate binning method was used, as the structured radiomics features were extracted directly from the PyRadiomics toolbox.

3) Why did you not split the dataset into training and validation datasets? Since this was not performed, cross-validation is needed.

The dataset was split into training and testing sets in a 70:30 ratio; the details are illustrated in the updated Methodology section. Thereafter, cross-validation was performed on the complete data for all models using stratified K-fold cross-validation (pages 5-6).
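A sketch of this workflow in scikit-learn is given below; X and y are the prepared features and labels, the RBF-kernel SVC stands in for the KSVM, and n_splits=5 is illustrative (the fold count is our assumption, not taken from the manuscript):

    from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
    from sklearn.svm import SVC

    # Stratified 70:30 train/test split.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.30, stratify=y, random_state=42)

    # Stratified K-fold cross-validation on the complete data, as stated.
    model = SVC(kernel='rbf')
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    scores = cross_val_score(model, X, y, cv=cv)
    print(scores.mean(), scores.std())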

4) To better address all these questions, please determine the radiomics quality score (https://www.radiomics.world/rqs) and add the score to the manuscript, including your answers as an appendix.

Thank you for this suggestion! On attempting the suggested questionnaire, the determined score was, unfortunately, only 45%. However, we would like to note that at least six of the 16 questions (Nos. 3, 4, 7, 8, 11, and 15) are not applicable to our study. (Appendix)

5) Results: "With clinico-radiomic data, the mean training and testing accuracy" -> From Materials and Methods it is not clear that you split the data into training and test sets; please explain.

The dataset was split in a training:testing ratio of 70:30, and the reported performance is the mean over 10 such iterations, with each designed model evaluated on its test dataset (pages 8-12).

6) All your results on accuracy, sensitivity,... need confidence intervals.

Thank you again for pointing this out! Confidence interval values have been added to all the metrics in the manuscript (pages 8-12).
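For instance, a 95% confidence interval over the 10 split-and-evaluate iterations can be obtained as sketched below; the accuracy values are placeholders, not results from the study:

    import numpy as np
    from scipy import stats

    # Placeholder accuracies from 10 hypothetical train/test iterations.
    acc = np.array([0.95, 0.97, 0.96, 0.98, 0.97, 0.96, 0.97, 0.98, 0.96, 0.97])
    mean, sem = acc.mean(), stats.sem(acc)  # mean and standard error of the mean
    low, high = stats.t.interval(0.95, df=len(acc) - 1, loc=mean, scale=sem)
    print(f"{mean:.3f} (95% CI {low:.3f}-{high:.3f})")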

Attachment

Submitted filename: Response to reviewers.docx

Decision Letter 1

Panagiotis Balermpas

6 Oct 2022

PONE-D-22-16089R1

Utility of adding Radiomics to clinical features in predicting outcomes of radiotherapy for Head and Neck Cancer using Machine Learning

PLOS ONE

Dear Dr. Kadavigere,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

Dear authors,

thank you for addressing all comments.

Please answer the minor issues raised from reviewer 1 and then the manuscript is suitable for acceptance.

==============================

Please submit your revised manuscript by Nov 20 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Panagiotis Balermpas

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments:

Dear authors,

thank you for addressing all comments.

Please answer the minor issues raised from reviewer 1 and then the manuscript is suitable for acceptance.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: 1. Please check capitalization rules in English and change the capitalized letters accordingly throughout the whole manuscript (e.g., in one sentence you write "artificial intelligence" and "Machine Learning").

2. It is not clear what you mean by "Though the minimum sample size was estimated to be 256, we included all the eligible patients treated between 2013 -2018".

3. "Synthetic samples were taken out from the original samples. The training: testing split was

performed with a 70:30 ratio on the class having majority samples. The training dataset was

generated by adding the train-split of majority samples to the synthetic data, and the testing

dataset was generated by adding test-split of the majority samples to the original minority

samples".

Please rephrase this paragraph and clearly state the class imbalance on the training set and on the test set for each clinical endpoint.

4. Please specify the weights of the F1-score calculation and also provide the non-weighted F1-score.

5. The LRC model is heavily overfitted. Could you implement any measure to prevent this from happening? Such as other feature reduction techniques.

Reviewer #2: Thanks. It is a pity that the data cannot be made fully available. Please consider applying for this in the future; I understand this is not easy in medicine, but it should still be an aim. I appreciate the calculation of the radiomics quality score. It is an average score; the score is important so that the reader can easily understand whether the manuscript is more hypothesis-generating or statistically significant.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.


Author response to Decision Letter 1


21 Oct 2022

Date: Oct 06 2022 11:46AM

To: "Rajagopal Kadavigere" rajarad@gmail.com

From: "PLOS ONE" plosone@plos.org

Subject: PLOS ONE Decision: Revision required [PONE-D-22-16089R1]

PONE-D-22-16089R1

Utility of adding Radiomics to clinical features in predicting outcomes of radiotherapy for Head and Neck Cancer using Machine Learning

PLOS ONE

The authors would again like to thank the reviewers profusely for critically reviewing the manuscript and highlighting the points which have helped us considerably improve the quality of the manuscript. The answers to the issues raised by the reviewers have been addressed below.

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

The reference list has been updated and checked for correctness.

Additional Editor Comments:

Dear authors,

thank you for addressing all comments.

Please answer the minor issues raised from reviewer 1 and then the manuscript is suitable for acceptance.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

________________________________________

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

________________________________________

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

________________________________________

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: No

________________________________________

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

________________________________________

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: 1. Please check capitalization rules in English and change the capitalized letters accordingly throughout the whole manuscript (e.g., in one sentence you write "artificial intelligence" and "Machine Learning").

The sentences were checked thoroughly for capitalization and corrected throughout the manuscript.

2. It is not clear what you mean by "Though the minimum sample size was estimated to be 256, we included all the eligible patients treated between 2013 -2018".

The minimum sample size for this study was calculated as 256 using the proportions of relative hazards formula; the calculation is shown in Section 9.2 (Appendix). This was the minimum required sample size; however, we screened all 482 records of patients treated between 2013-2018 and selected all 311 eligible patient records, because a larger sample size is expected to yield better-performing models.

3. "Synthetic samples were taken out from the original samples. The training: testing split was

performed with a 70:30 ratio on the class having majority samples. The training dataset was

generated by adding the train-split of majority samples to the synthetic data, and the testing

dataset was generated by adding test-split of the majority samples to the original minority

samples".

Please rephrase this paragraph and clearly state the class imbalance on the training set and on the test set for each clinical endpoint.

The paragraph has been simplified and rephrased in the manuscript. The numbers of training and testing samples are also specified for each clinical endpoint in Tables 2, 3, 4, and 5.
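To make the rephrased scheme concrete, a minimal sketch is given below; SMOTE from imbalanced-learn is our assumed stand-in for the synthetic sampler (the manuscript's exact method may differ), and X, y are NumPy feature/label arrays:

    import numpy as np
    from imblearn.over_sampling import SMOTE
    from sklearn.model_selection import train_test_split

    # Split only the majority class 70:30; synthetic minority samples go into
    # training, while the test set keeps the original minority samples.
    X_maj, y_maj = X[y == 0], y[y == 0]
    X_min, y_min = X[y == 1], y[y == 1]

    Xm_tr, Xm_te, ym_tr, ym_te = train_test_split(
        X_maj, y_maj, test_size=0.30, random_state=42)

    # fit_resample returns the originals followed by the synthetic samples.
    X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
    X_syn, y_syn = X_res[len(X):], y_res[len(X):]

    X_train = np.vstack([Xm_tr, X_syn]); y_train = np.concatenate([ym_tr, y_syn])
    X_test = np.vstack([Xm_te, X_min]);  y_test = np.concatenate([ym_te, y_min])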

4. Please specify the weights of the F1-score calculation and also provide the non-weighted F1-score.

In the manuscript, Tables 2, 3, 4, and 5 have been updated with the training and testing F1-scores, macro F1-scores, and weighted F1-scores for each class (labels 0 and 1). The numbers of training and testing samples for class labels 0 and 1 serve as the weights in the weighted F1-score. The calculation of each performance metric is presented in Section 9.3 (Appendix).
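As a brief illustration of these metrics with scikit-learn, where y_test and y_pred are assumed to be the true and predicted labels:

    from sklearn.metrics import f1_score

    f1_per_class = f1_score(y_test, y_pred, average=None)       # F1 for labels 0 and 1
    f1_macro = f1_score(y_test, y_pred, average='macro')        # unweighted mean of the two
    f1_weighted = f1_score(y_test, y_pred, average='weighted')  # mean weighted by class support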

5. The LRC model is heavily overfitted. Could you implement any measure to prevent this from happening? Such as other feature reduction techniques.

In this study, we applied intrinsic pre-processing steps to clean the dataset and balance the class labels. To prevent overfitting in the locoregional recurrence models, we tried the following additional steps:

1) We computed Principal Component Analysis (PCA) on the original dataset, without and with feature selection using Sequential Forward Floating Selection (SFFS), and visualised the first two principal components. However, a clear boundary separating the two classes could not be drawn. The PCA plots are shown as follows:

Figure: PCA plots of two components for clinical-only data, a) without feature selection and b) with feature selection.

Figure: PCA plots of two components for clinico-radiomic data, a) without feature selection and b) with feature selection.

Figure: PCA plots of two components for radiomics-only data, a) without feature selection and b) with feature selection.

2) We had originally used 'accuracy' as the scoring hyperparameter for selecting optimal features. We redid the analysis for the locoregional models with the scoring setting changed to 'f1' and 'f1_weighted'.

3) Finally, we varied the regularization parameter 'C' in KSVM for the Clinical-only and Clinico-Radiomic datasets, and 'max_depth' in RF for the Radiomics-only dataset. The variation of the training and testing weighted F1-scores with these hyperparameters is shown in the table below (an illustrative code sketch of this sweep follows our conclusion):

KSVM, varying C (training / testing weighted F1):

C     | Clinical-only (KSVM) | Clinico-Radiomic (KSVM)
0.1   | 0.53 / 0.33          | 0.44 / 0.25
0.2   | 0.69 / 0.48          | 0.66 / 0.41
0.3   | 0.76 / 0.49          | 0.86 / 0.53
0.4   | 0.95 / 0.50          | 0.99 / 0.53
0.5   | 0.96 / 0.52          | 0.99 / 0.59
0.6   | 0.96 / 0.55          | 0.99 / 0.66
0.7   | 0.97 / 0.57          | 1.00 / 0.82
0.8   | 0.96 / 0.73          | 1.00 / 0.81
0.9   | 0.97 / 0.72          | 1.00 / 0.81
1.0   | 0.97 / 0.72          | 1.00 / 0.81
1000  | 1.00 / 0.76          | 1.00 / 0.78

RF, varying max_depth (Radiomics-only dataset; training / testing weighted F1):

max_depth | Training / Testing
5         | 0.84 / 0.65
10        | 0.86 / 0.68
50        | 0.86 / 0.66
100       | 0.86 / 0.64
1000      | 0.85 / 0.65

Thus, we concluded that because the class labels overlap heavily, owing to the limitation of a small number of samples, the algorithms were bound to overfit. With a larger sample size and a higher number of positive samples, it might be feasible to prevent this.
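For concreteness, the sweep behind the table above could be reproduced along the following lines; this is a sketch under our assumptions (an RBF-kernel SVC standing in for the KSVM, and X_train/X_test/y_train/y_test the prepared splits), not the exact script used:

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.svm import SVC

    # Vary the SVM regularization parameter C and compare training vs
    # testing weighted F1-scores to gauge overfitting.
    for C in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1000):
        svm = SVC(kernel='rbf', C=C).fit(X_train, y_train)
        tr = f1_score(y_train, svm.predict(X_train), average='weighted')
        te = f1_score(y_test, svm.predict(X_test), average='weighted')
        print(f"C={C}: train {tr:.2f}, test {te:.2f}")

    # Vary the random-forest tree depth for the radiomics-only models.
    for depth in (5, 10, 50, 100, 1000):
        rf = RandomForestClassifier(max_depth=depth, random_state=42).fit(X_train, y_train)
        tr = f1_score(y_train, rf.predict(X_train), average='weighted')
        te = f1_score(y_test, rf.predict(X_test), average='weighted')
        print(f"max_depth={depth}: train {tr:.2f}, test {te:.2f}")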

Reviewer #2: Thanks. It is a pity that the data cannot be made fully available. Please consider applying for this in the future; I understand this is not easy in medicine, but it should still be an aim. I appreciate the calculation of the radiomics quality score. It is an average score; the score is important so that the reader can easily understand whether the manuscript is more hypothesis-generating or statistically significant.

________________________________________

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

________________________________________

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 2

Panagiotis Balermpas

24 Oct 2022

Utility of adding Radiomics to clinical features in predicting outcomes of radiotherapy for Head and Neck Cancer using Machine Learning

PONE-D-22-16089R2

Dear Dr. Kadavigere,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Panagiotis Balermpas

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have addressed my comments and concerns and the manuscript is now adequate and complete to publish.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********

Acceptance letter

Panagiotis Balermpas

17 Nov 2022

PONE-D-22-16089R2

Utility of Adding Radiomics to Clinical Features in Predicting the Outcomes of Radiotherapy for Head and Neck Cancer Using Machine Learning

Dear Dr. Kadavigere:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Panagiotis Balermpas

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Checklist. TREND statement checklist.

    (PDF)

    S1 Appendix

    (PDF)

    S1 Protocol

    (PDF)

    Attachment

    Submitted filename: Response to reviewers.docx

    Attachment

    Submitted filename: Response to Reviewers.docx

    Data Availability Statement

    The data could not be shared because it is an Intellectual property of Manipal Academy of Higher Education, Manipal, India and Philips Healthcare, Bangalore. They may have future commercial interest using the dataset. The details of the person who can be contacted regarding the same is as follows: Name: Mr. Manjunatha Maiya Designation: Senior Project Manager, Philips, Bangalore email: manjunatha.maiya@philips.com. The authors had no special access privileges to the data others would not have.

