PLOS Digital Health. 2023 Jun 7;2(6):e0000054. doi: 10.1371/journal.pdig.0000054

Validation and implementation of a mobile app decision support system for prostate cancer to improve quality of tumor boards

Yasemin Ural 1,*, Thomas Elter 2, Yasemin Yilmaz 1, Michael Hallek 2, Rabi Raj Datta 3, Robert Kleinert 3, Axel Heidenreich 1, David Pfister 1
Editor: Haleh Ayatollahi
PMCID: PMC10246794  PMID: 37285355

Abstract

Certified Cancer Centers must present all patients in multidisciplinary tumor boards (MTB), including standard cases with well-established treatment strategies. Too many standard cases can absorb much of the available time, which can be unfavorable for the discussion of complex cases. In any case, this leads to a high quantity, but not necessarily a high quality, of tumor boards. Our aim was to develop a partially algorithm-driven decision support system (DSS) for smartphones to provide evidence-based recommendations for first-line therapy of common urological cancers. To assure quality, we compared each single digital decision with the recommendations of an experienced MTB and obtained the concordance. A total of 1873 prostate cancer patients presented in the MTB of the urological department of the University Hospital of Cologne from 2014 to 2018 were evaluated. Patient characteristics included age, disease stage, Gleason score, PSA and previous therapies. The questions addressed to the MTB were answered again using the DSS. All blinded pairs of answers were assessed for discrepancies by independent reviewers. The overall concordance rate was 99.1% (1856/1873). Stage-specific concordance rates were 97.4% (stage I), 99.2% (stage II), 100% (stage III), and 99.2% (stage IV). Quality of concordance was independent of age and risk profile. The reliability of any DSS is the key feature before implementation in clinical routine. Although our system appears to provide this safety, we are now performing cross-validation with several clinics to further increase decision quality and avoid potential clinic bias.

Author summary

The quality of therapeutic decisions provided in tumor boards is perhaps the most relevant criterion for optimal cancer outcome. This tool aims to provide optimal recommendations, to assess the quality on a case-by-case basis and furthermore to objectively display the quality of oncological care.

Author summary

Every day, clinicians face the difficult task of choosing the optimal treatment for their cancer patients, given the emergence of newly available therapeutics and continuously changing treatment guidelines. The resulting flood of information is impossible for clinicians to keep up with. Therefore, clinicians decide as a team, in so-called tumor boards, upon the best possible cancer treatment for each patient. Even though the treatment decisions recommended by tumor boards play a critical role in the long-term survival of cancer patients, their decision-making accuracy has hardly ever been assessed. Unfortunately, current digital tools developed to support clinicians in the process of decision-making have difficulty providing treatment recommendations with sufficient accuracy. We therefore evaluated the quality of a novel decision-making application by comparing the recommendations generated by the App with the therapeutic recommendations given by the tumor board of a University Cancer Center. For newly diagnosed cancer patients, we found that the novel tool matched the decisions made by the tumor board in almost 100% of cases. These promising results not only show the potential of digital support for patient care, but also provide objective quality management while saving board time in favor of discussing more complex cases.

Introduction

Uro-oncologists and oncologists worldwide face the challenge of ensuring that their patients receive the best possible, individualized care for their cancer. Keeping up with the fast developments in medicine is very difficult for many physicians, as the rapid growth of scientific knowledge leads to an almost unmanageable variety of new treatment options [1]. The sheer amount of data is overwhelming, and it consequently becomes a demanding task to decide on the best possible individualized therapy for each patient.

Therefore, case discussion in multidisciplinary tumor board (MTB) conferences is one of the most important factors in assuring the highest quality standards in oncological care. Driven by this assumption, German hospitals are required to present and discuss each cancer patient in the MTB in order to be registered as a certified cancer center by the German Cancer Society (Deutsche Krebsgesellschaft) [2].

In clinical reality, the certification requirement to present all routine cases leads to a significant increase in the number of case discussions, consuming valuable time and attention that are needed for the discussion of more complex tumor cases.

In the field of evidence-based oncology, it is almost paradoxical that the quality of individual therapeutic decisions of tumor boards is hardly ever assessed. Despite their widespread use in clinical routine, little data are available on the effects of tumor boards on the quality of care and long-term survival of cancer patients [3,4].

This results in the need to objectively define and measure the quality of MTB at the level of individual therapeutic recommendations.

Without doubt, AI-based clinical decision support systems will play a major role in the near future in closing the gap between complex data and clinical decision-making [4–6].

Up to this point, systems based on artificial intelligence have been unable to offer reliable assistance in this area, as they still fail to provide treatment recommendations with sufficient certainty even for standard questions on first-line therapy [7]. For example, one of the leading AI-based systems, Watson for Oncology, matched only 12% (stomach), 80% (colon and breast carcinoma) and 93% (ovarian carcinoma) of the treatment recommendations given by medical experts [8–13].

An important reason for the poor performance of AI systems is the lack of high-quality training datasets. Masses of normalized training data are available for AI image recognition systems, but not for AI applications that are intended to model regular oncology care.

In addition to the lack of properly organized and validated training data, another problem is the limited resource of experts initially required for human interpretation and evaluation of AI results.

Ultimately, a machine learning tool can only be as good as the data available for training and the trainers who evaluate its results.

Another approach to provide digital therapeutic recommendations with sufficient certainty could be implemented by developing software based on clinical network expertise. This concept of expert-curated digital decision support systems (DSS) was described in Nature Biotechnology in 2018 and a comparison with approaches of artificial intelligence showed multiple benefits [14]. The main advantage described here is certainly that the expert systems seem to represent clinical reality better than the AI-based systems used so far.

The DSS smartphone application EasyOncology (EO), whose digital treatment recommendations are based on continuous matching with real tumor boards, follows this approach and led to the design of this research study.

To the best of our knowledge, tumor board decisions have not previously been validated on a case-by-case basis using an expert-curated DSS. In addition, our study allows a direct comparison of the levels of concordance with MTB decisions for prostate cancer patients reached by an AI-based DSS and by an expert-curated DSS.

The aim of this clinical research is to implement the aforementioned technology for validation and quality assurance of a urological tumor board at the University Hospital of Cologne.

Materials and methods

Study design and patients

The study obtained ethics approval by the Ethics Commission of Cologne University’s Faculty of Medicine.

We present the results of 1873 cases of prostate carcinoma. We compared and analyzed the concordance rates between the tumor board recommendations of the urological multidisciplinary tumor board at the University of Cologne and the query results of the digital application “EasyOncology”.

Inclusion criteria for the study were prostate cancer cases for which a therapy recommendation was given in the uro-oncological tumor board of the University Hospital in the period from 2014 to 2018. Data sets of 2412 patient cases were included and then screened for exclusion criteria: 140 case discussions not addressing therapeutic procedures (such as specific questions about histopathology or how to obtain a biopsy) were excluded, as were another 264 cases without therapeutic decisions due to pending clinical information. A further 135 cases recommended for clinical trials (n = 50) or complex cases with more than one active tumor entity (n = 85) were also excluded (Fig 1).

Fig 1. Flow chart of patient case selection process.

Fig 1

The real-world and digital treatment recommendations given for each tumor board question were compiled as response pairs and blinded to their origin. The response pairs were then examined for agreement, and pairs whose responses were not obviously identical were flagged for independent review.

The similarities and deviations of each pair of answers were evaluated by independent uro-oncology specialists and reported according to previous publications on Watson for Oncology by IBM (Fig 2) [9–12,15]:

Fig 2. Evaluation flow chart.

Fig 2

Testing results were categorized into four color-coded groups: green represents “concordant treatment recommendation”; blue “concordant, for consideration”; red “non-concordant, not recommended”; and grey “non-concordant, not available”. In a second round of analysis, the mismatched response pairs were reviewed in detail in order to identify limitations in the query algorithm leading to non-concordance and, subsequently, to improve the query. In summary, the evaluation method in this study involved comparing and analyzing the concordance between the recommendations of the urological multidisciplinary tumor board and the query results of the digital application “EasyOncology”, and classifying the response pairs into categories based on their agreement and compliance with best clinical practice.

  1. “concordant, recommended” if the recommendations of the DSS application and the MTB were identical

  2. Cases that were non-concordant at first evaluation but were judged by an independent specialist to be a correct alternative treatment option were classified as “concordant, for consideration”

  3. Case pairs were classified as “non-concordant, not recommended” if one of the recommendations, from either the App or the MTB, did not meet current best-clinical-practice treatment guidelines

  4. Case pairs were classified as “non-concordant, not available” if the DSS could not provide a treatment recommendation because information required by the App was missing
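The four-way classification above can be sketched as a small function. This is an illustrative reconstruction of the review logic, not the study's actual software: the category strings come from the list above, while the inputs (`app_rec`, `mtb_rec`, `reviewer_verdict`) are hypothetical names.

```python
# Illustrative sketch of the blinded response-pair classification;
# data structures and verdict labels are assumptions, not study code.

def classify_pair(app_rec, mtb_rec, reviewer_verdict=None):
    """Classify one blinded (App, MTB) response pair.

    reviewer_verdict is consulted only for pairs that are not obviously
    identical: 'alternative' means the independent specialist judged the
    deviating answer a correct alternative treatment option.
    """
    if app_rec is None:
        # The App could not answer, e.g. due to missing required input.
        return "non-concordant, not available"
    if app_rec == mtb_rec:
        return "concordant, recommended"
    if reviewer_verdict == "alternative":
        return "concordant, for consideration"
    # One of the two answers fails best-clinical-practice guidelines.
    return "non-concordant, not recommended"

# Three invented example pairs (recommendation texts are placeholders).
pairs = [
    ("radical prostatectomy", "radical prostatectomy", None),
    ("surgery or radiation", "active surveillance", None),
    (None, "chemotherapy", None),
]
labels = [classify_pair(a, m, v) for a, m, v in pairs]
print(labels)
```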

Smartphone application

The content of EasyOncology was created by experienced clinical specialists from different cancer centers and oncology practices. Diagnostic and therapeutic recommendations follow the usual evidence-based guidelines of the professional societies, best clinical practice and the approval status of oncological therapeutics. The intuitive user interface and quality of the application led to a top 3 ranking in a worldwide comparison of 157 oncological applications in 2017 [16]. Certification as a class I medical device followed in July 2020.

Medical editors revised new content, discussed conflicting information with the authors, and ensured EO updates at predetermined intervals. The quality of the decision algorithms was assured by a constant comparison with real-world decisions given in tumor boards.

As depicted in Fig 3, EO requests clinicopathologic patient data to generate treatment recommendations in a stepwise fashion. The number of clinicopathologic variables necessary to generate a treatment recommendation depends on the complexity of the patient case.

Fig 3. Query algorithm.

Fig 3

The relevant information is requested by EO’s query algorithm depending on the selected initial clinical status. Abbreviations: TNM: Classification of Malignant Tumors; ECOG: Eastern Cooperative Oncology Group; PSA: Prostate-Specific Antigen; ISUP: International Society of Urological Pathology; PC: prostate cancer; mHSPC: metastatic Hormone-Sensitive Prostate Cancer; nmHSPC: non-metastatic Hormone-Sensitive Prostate Cancer; nmCRPC: non-metastatic Castration-Resistant Prostate Cancer; mCRPC: metastatic Castration-Resistant Prostate Cancer; TUR-P: Transurethral Resection of the Prostate; SPE: Suprapubic enucleation.

For example, in the case of a “localized PC”, only two clinicopathologic variables (risk group and histology) are required by EO to give a treatment recommendation. In more complex cases (e.g. nmCRPC), three clinicopathologic variables are requested by EO to provide a treatment recommendation.
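A minimal sketch of such a stepwise query follows. The branch structure, variable names, and the mapping from risk group to therapy are illustrative assumptions only; they do not reproduce EO's actual, unpublished decision logic.

```python
# Hypothetical sketch of a stepwise query in the style of Fig 3.
# Branches and the risk-group-to-therapy mapping are assumptions.

def localized_pc_recommendation(risk_group, histology):
    """Localized PC: only two clinicopathologic variables are requested."""
    if histology != "adenocarcinoma":
        # Subtypes not covered by the App yield no recommendation
        # ("non-concordant, not available" in the study's terms).
        return None
    if risk_group == "low":
        return "active surveillance"
    if risk_group == "intermediate":
        return "radical prostatectomy or radiotherapy"
    return "radical prostatectomy or radiotherapy plus ADT"  # high risk

print(localized_pc_recommendation("low", "adenocarcinoma"))
```

More complex clinical states (e.g. nmCRPC) would simply add a third query step before a recommendation is returned.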

Data analysis and statistics

Descriptive statistics and data analysis were carried out using IBM's statistics software SPSS version 25 and Microsoft Excel version 16. The patient characteristics age, cancer stage, risk stratification, Gleason score, and PSA level (prostate-specific antigen) were documented. Descriptive statistics are reported as numbers (percentages) or mean ± standard deviation (SD). After assigning patients to the concordant or the non-concordant group, a chi-squared test was used to compare categorical variables and the Mann-Whitney U test to compare ordinal variables between the groups. Multivariate logistic regression analysis was used to analyze the association between the concordance rate and clinicopathological data. Statistical significance was assumed at a p-value < 0.05 for all analyses. Graphics, charts and tables were generated using SPSS, Microsoft Excel and PowerPoint.
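For illustration, the chi-squared comparison of categorical variables between the concordant and non-concordant groups can be reproduced in a few lines of standard-library Python. The contingency table below is invented for the example and is not the study data.

```python
# Pearson chi-squared statistic for an r x c contingency table,
# as used to compare categorical variables between groups.

def chi_squared_statistic(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Invented 2x2 example: concordant/discordant counts in two stage bins.
table = [[747, 10], [1103, 7]]
stat = chi_squared_statistic(table)
# df = (2-1)*(2-1) = 1; the critical value at alpha = 0.05 is 3.841.
print(round(stat, 3), stat > 3.841)
```

In practice the study used SPSS for these tests; the sketch only shows the underlying computation.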

Results

The mean age of all patients was 68 years. Regarding stage classification, 238 (13%) of cases were classified as stage I, 519 (28%) as stage II, 262 (14%) as stage III, and 848 (45%) as stage IV. Of the 776 cases with localized disease, 331 cases (43%) presented with good prognosis, 330 (42%) with intermediate prognosis, and 115 cases (15%) with poor prognosis according to D´Amico classification.

Cases were categorized according to clinical status as non-metastatic hormone-naïve and treatment-naïve prostate cancer (46%); metastatic hormone-naïve prostate cancer (11%); and castrate-resistant prostate cancer (19%). A further subset (24%) included cases dealing with follow-up; treatment options after radical prostatectomy (RPE) with a histological R1 or pN1 situation; local or biochemical relapses; or questions regarding transurethral or suprapubic resection of the prostate (TUR-P/SPE) after incidental detection of prostate cancer.

In this analysis, the Gleason score and the ISUP prognostic grade were significantly associated with the concordance rate (p = .001 each). These and other patient characteristics are summarized in Table 1.

Table 1. Baseline clinical characteristics.

Characteristic                     Total, n = 1873   Concordant group   Discordant group   P value
Age, mean ± SD                     68.3 ± 8.6        68.2 ± 8.6         71.1 ± 10.1        .168
Disease stage (UICC), n (%)                                                                .140
    I                              238 (12.7)        232 (12.4)         6 (0.3)
    II                             519 (27.7)        515 (27.5)         4 (0.2)
    III                            262 (14.0)        262 (14.0)         0 (0.0)
    IV                             848 (45.3)        841 (45.0)         7 (0.4)
    N/A                            6 (0.3)           6 (0.3)            0 (0.0)
PSA-level (ng/ml), n (%)                                                                   .399
    ≤ 10                           994 (53.1)        982 (52.4)         12 (0.6)
    10–20                          393 (21.0)        393 (21.0)         0 (0.0)
    20–50                          213 (11.4)        211 (11.3)         2 (0.1)
    50–100                         120 (6.4)         119 (6.4)          1 (0.0)
    ≥ 1000                         153 (8.2)         151 (8.1)          2 (0.1)
Gleason Score, n (%)                                                                       .001
    5                              7 (0.4)           7 (0.4)            0 (0.0)
    6                              351 (18.7)        344 (18.3)         7 (0.4)
    7a                             378 (20.2)        374 (20.0)         4 (0.2)
    7b                             268 (14.3)        267 (14.2)         1 (0.0)
    8                              285 (15.2)        284 (15.2)         1 (0.0)
    9                              336 (17.7)        336 (17.9)         0 (0.0)
    10                             54 (2.9)          54 (2.9)           0 (0.0)
    N/A                            194 (10.4)        190 (10.1)         4 (0.2)
ISUP prognostic grade, n (%)                                                               .001
    I                              358 (19.1)        351 (18.7)         7 (0.4)
    II                             378 (20.2)        374 (20.0)         4 (0.2)
    III                            268 (14.3)        267 (14.3)         1 (0.0)
    IV                             285 (15.2)        284 (15.1)         1 (0.0)
    V                              390 (20.8)        390 (20.8)         0 (0.0)
    N/A                            194 (10.4)        190 (10.1)         4 (0.2)
Clinical stage (D’Amico), n (%)                                                            .086
    localized disease
        good prognosis             331 (17.7)        324 (17.3)         7 (0.4)
        intermediate prognosis     330 (17.6)        328 (17.5)         2 (0.1)
        poor prognosis             115 (6.1)         114 (6.0)          1 (0.0)
    advanced disease
Clinical stratification, n (%)                                                             .694
    non-metastatic hormone-naïve and treatment-naïve cancer:
                                   856 (45.9)        846 (45.1)         10 (0.5)
    metastatic hormone-naïve prostate cancer:
                                   203 (10.9)        202 (10.7)         1 (0.0)
    castrate-resistant prostate cancer:
                                   361 (19.4)        357 (19.0)         4 (0.2)
    follow-up/therapy options after RPE, local/biochemical relapse,
    TUR-P/SPE with incidental prostate cancer:
                                   444 (23.8)        442 (23.6)         2 (0.1)

Bold values indicate p < .005.

N/A: information not given in board protocol.

The overall concordance rate between the treatment recommendations of the multidisciplinary tumor board and those given by EasyOncology for prostate cancer was 99.1% (1856/1873). Fig 4 shows the overall concordance rate.

Fig 4. Overall treatment recommendation concordance between a multidisciplinary tumor board and the application EasyOncology.

Fig 4

As illustrated in Fig 5, stage specific concordance rates were 97.4% (stage I), 99.2% (stage II), 100% (stage III), and 99.2% (stage IV). Quality of concordance was independent of age, stage of disease and risk profile. The treatment concordance rates by age for < 50 years, 50–60 years, 60–70 years, 70–80 years, and ≥ 80 years were 100%, 99.0%, 99.6%, 99.0% and 97.0%, respectively. Patients with stage III cancer or who were <50 years old showed the highest concordance rates (100%).

Fig 5. Treatment concordance rates between a MTB and DSS according to prostate cancer tumor stage.

Fig 5
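As a quick arithmetic check, the reported rates can be recomputed from the case counts in the text and Table 1 (note that stage I comes out as 97.5% with conventional rounding, versus the 97.4% quoted, so the published figure appears to be truncated):

```python
# Case counts from the text (17 non-concordant of 1873) and Table 1.
total_cases, non_concordant = 1873, 17
concordant = total_cases - non_concordant
overall = concordant / total_cases
print(concordant, f"{overall:.1%}")

# Stage-specific rates, concordant n / total n per UICC stage (Table 1).
stages = {"I": (232, 238), "II": (515, 519), "III": (262, 262), "IV": (841, 848)}
rates = {s: round(100 * c / t, 1) for s, (c, t) in stages.items()}
print(rates)
```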

Three systematic sources of divergent treatment recommendations could thus be identified: the App's query on how many biopsy specimens were obtained, the patient's wish against any active therapy, and the presence of a neuroendocrine tumor. These valuable insights can be used to optimize the App in order to increase the reliability of its recommendations.

Overall, non-concordant results were found in 17 cases (Fig 5). As required by the protocol, all non-concordant cases were reviewed by an independent uro-oncological specialist for exact sub-classification of the non-matching cases.

After review, eight of these cases were classified as “non-concordant, not recommended”; nine cases as "non-concordant, not available".

One example of a result rated "non-concordant, not recommended" to the disadvantage of the MTB was the case of a patient with newly diagnosed localized prostate cancer (UICC stage IIIA). Since the patient had a high PSA level, the application recommended surgery or radiation, whereas the MTB decided on active surveillance.

The reviewer confirmed that the App's recommendation was correct according to guidelines. Nevertheless, the MTB decision followed the patient's request for non-intervention.

A similar "non-concordant, not recommended" case submitted for independent review was a patient with localized prostate cancer (UICC stage I). The application recommended a therapeutic intervention in accordance with current guidelines, as two positive biopsies were documented in the board protocol [17,18]. Here again, an individual decision (patient request) led to the MTB recommendation of active surveillance.

In the remaining six "non-concordant, not recommended" cases, the DSS recommended active therapy based on the information that more than two biopsies were positive, indicating higher-risk disease. The MTB's attending specialists recognized that these biopsies had been obtained from one single tumor lesion, which does not fulfill high-risk criteria, and therefore correctly dismissed active therapy in favor of active surveillance.

Nine "non-concordant, not available" cases were identified as stage III neuroendocrine carcinomas, a histologic subtype not covered by the App; thus, no therapeutic recommendation was provided by the DSS.

In a multivariate logistic regression analysis with the independent variables age, PSA value and disease stage (I/II vs. III/IV), no variable was found to be significantly associated with a decrease in the concordance rate. The results of the multivariate logistic regression are detailed in Table 2.

Table 2. Multivariate regression analyses of the concordance rate between EasyOncology and the multidisciplinary tumor board.

Variable        OR      95% CI        P value
Age             .955    .898–1.015    .141
PSA (ng/ml)     1.000   1.000–1.001   .985
Stage (≥3)      2.210   .832–5.870    .112
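The odds ratios and confidence intervals in Table 2 follow the usual logistic-regression convention OR = exp(β) with 95% CI = exp(β ± 1.96·SE). As a consistency check, β and SE can be backed out of the Age row and the published interval reproduced:

```python
import math

# Back out the logistic coefficient (beta) and its standard error (SE)
# from the Age row of Table 2: OR = exp(beta), 95% CI = exp(beta +/- 1.96*SE).
or_age, ci_low, ci_high = 0.955, 0.898, 1.015

beta = math.log(or_age)                                   # point estimate on the log-odds scale
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)  # half-width of the log-scale CI

# Reconstruct the interval; it should match the published bounds.
lo = math.exp(beta - 1.96 * se)
hi = math.exp(beta + 1.96 * se)
print(round(lo, 3), round(hi, 3))
```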

Discussion

The rapid development of new, innovative oncological treatment options leads more than ever to the requirement of quality-assured therapeutic decisions [17]. In order to give optimal treatment recommendations, physicians usually follow guidelines of medical societies, inform themselves through professional journals, participate in congresses, further their medical education and discuss cases in multidisciplinary tumor boards (MTB).

However, who can guarantee that all doctors working in oncology have the time and motivation to handle the information overload? How do they deal with this situation when even current guidelines of the medical associations sometimes fail to mention highly effective and newly approved therapeutics? Who ensures that the doctors attending the MTB actually have the necessary expertise and that decisions are not (consciously or unconsciously) influenced by economic motives? Is there any evidence at all that tumor boards really improve the quality of oncology care [3,17–19]?

AI-based systems seem to have the potential to support clinical decision-making, as they have already impressively demonstrated their superiority in medical image processing and interpretation for different cancer entities [20–23]. AI-based systems also seem perfectly suited to capture and correlate the immense amount of oncological knowledge, the results of all relevant clinical trials and all published case reports, and, based on this knowledge, to generate therapeutic recommendations. As obvious as this sounds, it is almost surprising that AI-based systems have so far failed to establish themselves in clinical oncological routine. Most attempts by AI systems to reliably provide even standard therapeutic recommendations for first-line therapy have been disappointing. For example, Watson for Oncology, the leading application in this field, showed a concordance rate of only 73.6% compared with recommendations made by medical professionals for the first-line therapy of prostate cancer [15]. Watson for Oncology also obtained comparatively low concordance rates in other tumor entities, such as 12% in gastric cancer [13,24], 46.4% to 65.8% in colon cancer [10,11] and 77% in differentiated thyroid carcinoma [25], among others [9,26,27]. When considering the implementation of AI systems, the framework provided by the existing healthcare system must also be carefully considered.

The effort required to implement systems such as Watson for Oncology in hospitals is enormous. The use of AI systems requires data protection-compliant interoperability between many hospital information systems and the required data must be completely accessible in predefined files.

In Germany in particular, the data protection requirements of 16 federal states and the non-standardized norms for data processing and storage pose considerable challenges for developers of AI systems. In addition, clinical data are still frequently stored in paper files, and it should not be forgotten that the exchange of diagnostic reports between clinics, pathologists, oncologists and practices is to this day commonly carried out by fax machine.

Another frequent criticism of AI systems is that their decision-making process is not easy to understand, so one has to trust almost blindly in the correctness of the machine's response. Furthermore, the validation effort needed when using AI systems ties up considerable human resources, since only medical experts can judge whether the recommendations are correct.

Until these structural problems are solved, expert-curated solutions offer an alternative, as described in Nature Biotechnology in 2018 [14]. This approach was adopted by medical professionals using the DSS and is the basis of this research. In order to ensure the quality of the recommendations given by the application, a continuous comparison with tumor boards of certified cancer centers was implemented.

This comparatively simple and resource-saving technical solution proved beneficial here, enabling the large number of tumor board recommendations to be compared retrospectively and efficiently.

For prostate carcinoma, our expert-curated digital decision support system achieved a very high concordance rate with the therapeutic recommendations of a university tumor board.

Yet, the very high concordance rate of our system is probably not surprising, since we evaluated predominantly first-line cases, for which guidelines generally apply. Of course, the degree of complexity increases with each tumor recurrence and additional concomitant diseases.

However, this is precisely the intended scope of our work, which aims to reduce the workload of tumor boards by providing digital answers to non-complex routine cases.

Despite the fact that this approach achieves better results than other methods published so far, further limitations have to be taken into account.

A limiting factor of our work is that a high concordance rate is easier to achieve when therapeutic strategies do not change significantly over a longer period, as was the case during the period studied, 2014 to 2018 [27].

The increasing pace of diagnostic and therapeutic developments in the treatment of prostate cancer therefore requires significantly more frequent and shorter testing intervals for the application (which has meanwhile been certified as a medical device) and continuous adaptation to best clinical practice.

By continuously comparing digital and analog recommendations, systemic deficits that lead to deviations usually become quickly apparent, thus enabling the prompt adaptation of the query logic to the dynamic development of therapeutic options.

A second limiting bias is that a 100% concordance rate is of little value if the recommendation quality of the reference board itself is not validated. This leads to the necessity of establishing decision networks in order to generate a recommendation basis that is as reliable as possible and provides the required safety of recommendations. Therefore, cross-validation across different urological cancer centers is ongoing in order to eliminate single-center bias. Another limitation is certainly that the reference recommendations are based on the German S3 guidelines, so the methods and results cannot be transferred uncritically to other countries.

The main goal of this development is to provide reliable recommendations for standard cases in advance of the tumor board conference, with the aim of allowing more time for the discussion of complex cases. It is not in the developers' interest to achieve 100% agreement between tumor board responses and digital recommendations, as no automated system will be able to consider all complex clinical circumstances. The 100% agreement achieved in our analysis for stage III merely reflects the simpler decision for active therapy in this risk constellation compared with the early stages of prostate cancer.

Rather, a trusted DSS must be able to reliably identify complex clinical constellations, which should then be discussed by experts attending the tumor board. Particularly in the case of complex diseases, medical expertise is irreplaceable and must remain so. However, expertise requires time, and this time should not be spent discussing universally accepted standard procedures.

Conclusion

In summary, the study evaluated a decision support system (DSS) for first-line therapy of prostate cancer by comparing its recommendations with those made by a multidisciplinary tumor board (MTB) at a university cancer center. The study found a high level of concordance between the DSS-generated recommendations and those of the MTB, indicating a high level of reliability. Continuous analysis of mismatched cases ensures early adjustment of DSS recommendations to account for changes in best clinical practice. Overall, our results suggest that EO is a promising tool to assist clinicians in providing reliable treatment recommendations for prostate cancer patients.

Perspective options

It is almost paradoxical that, in evidence-based oncology, the actually relevant quality of individual therapeutic decisions is virtually unknown.

The use of intelligent software could assure the quality of treatment on a case-by-case basis and thus serve as a transparently accessible instrument for quality assurance, comparing the quality of oncological care provided by hospitals and medical practices.

Based on the smartphone application used in this work for recommendation matching, we developed an interface that enables the necessary inputs in the decision process of tumor boards. Easily integrated into any system, this validated and reliable application could unburden tumor boards from standard cases, thereby allowing more time for discussion of complex cases.

Data Availability

Our data were published in the open repository Zenodo (DOI: 10.1371/journal.pdig.0000054). The full data set can be accessed by everyone using the following link: https://zenodo.org/record/6951736#.Y5rxDOzMLyj.

Funding Statement

The authors received no specific funding for this work.

References

  • 1. Global Oncology Trends 2019: Therapeutics, Clinical Development and Health System Implications [press release]. IQVIA Institute, May 2019.
  • 2. Deutsche Krebsgesellschaft und Deutsche Krebshilfe. Nationales Zertifizierungsprogramm Krebs. Erhebungsbogen für Onkologische Spitzenzentren und Onkologische Zentren, Inkraftsetzung am 03.12.2019.
  • 3. Keating NL, Landrum MB, Lamont EB, Bozeman SR, Shulman LN, McNeil BJ. Tumor boards and the quality of cancer care. J Natl Cancer Inst. 2013;105(2):113–21. doi: 10.1093/jnci/djs502
  • 4. Soukup T, Lamb BW, Weigl M, Green JSA, Sevdalis N. An Integrated Literature Review of Time-on-Task Effects With a Pragmatic Framework for Understanding and Improving Decision-Making in Multidisciplinary Oncology Team Meetings. Front Psychol. 2019;10:1245. doi: 10.3389/fpsyg.2019.01245
  • 5. Chen Y, Elenee Argentinis JD, Weber G. IBM Watson: How Cognitive Computing Can Be Applied to Big Data Challenges in Life Sciences Research. Clin Ther. 2016;38(4):688–701.
  • 6. Letzen B, Wang CJ, Chapiro J. The Role of Artificial Intelligence in Interventional Oncology: A Primer. J Vasc Interv Radiol. 2019;30(1):38–41.e1. doi: 10.1016/j.jvir.2018.08.032
  • 7. Schmidt C. M. D. Anderson Breaks With IBM Watson, Raising Questions About Artificial Intelligence in Oncology. J Natl Cancer Inst. 2017;109(5).
  • 8. Seidman AD, Pilewskie ML, Robson ME, Kelvin JF, Zauderer MG, Epstein AS, et al. Integration of multi-modality treatment planning for early stage breast cancer (BC) into Watson for Oncology, a Decision Support System: Seeing the forest and the trees. J Clin Oncol. 2015:e12042.
  • 9. Somashekhar SP, Sepulveda MJ, Puglielli S, Norden AD, Shortliffe EH, Rohit Kumar C, et al. Watson for Oncology and breast cancer treatment recommendations: agreement with an expert multidisciplinary tumor board. Ann Oncol. 2018;29(2):418–23. doi: 10.1093/annonc/mdx781
  • 10. Kim EJ, Woo HS, Cho JH, Sym SJ, Baek JH, Lee WS, et al. Early experience with Watson for oncology in Korean patients with colorectal cancer. PLoS One. 2019;14(3):e0213640. doi: 10.1371/journal.pone.0213640
  • 11. Lee WS, Ahn SM, Chung JW, Kim KO, Kwon KA, Kim Y, et al. Assessing Concordance With Watson for Oncology, a Cognitive Computing Decision Support System for Colon Cancer Treatment in Korea. JCO Clin Cancer Inform. 2018;2:1–8. doi: 10.1200/CCI.17.00109
  • 12. Zhou N, Zhang CT, Lv HY, Hao CX, Li TJ, Zhu JJ, et al. Concordance Study Between IBM Watson for Oncology and Clinical Practice for Patients with Cancer in China. Oncologist. 2019;24(6):812–9. doi: 10.1634/theoncologist.2018-0255
  • 13. Choi YI, Chung JW, Kim KO, Kwon KA, Kim YJ, Park DK, et al. Concordance Rate between Clinicians and Watson for Oncology among Patients with Advanced Gastric Cancer: Early, Real-World Experience in Korea. Can J Gastroenterol Hepatol. 2019;2019:8072928. doi: 10.1155/2019/8072928
  • 14. Bungartz KD, Lalowski K, Elkin SK. Making the right calls in precision oncology. Nat Biotechnol. 2018;36(8):692–6. doi: 10.1038/nbt.4214
  • 15. Yu SH, Kim MS, Chung HS, Hwang EC, Jung SI, Kang TW, et al. Early experience with Watson for Oncology: a clinical decision-support system for prostate cancer treatment recommendations. World J Urol. 2021;39(2):407–13. doi: 10.1007/s00345-020-03214-y
  • 15.Yu SH, Kim MS, Chung HS, Hwang EC, Jung SI, Kang TW, et al. Early experience with Watson for Oncology: a clinical decision-support system for prostate cancer treatment recommendations. World J Urol. 2021;39(2):407–13. doi: 10.1007/s00345-020-03214-y [DOI] [PubMed] [Google Scholar]
  • 16.Calero JJ, Oton LF, Oton CA. Apps for Radiation Oncology. A Comprehensive Review. Transl Oncol. 2017;10(1):108–14. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Specchia ML, Frisicale EM, Carini E, Di Pilla A, Cappa D, Barbara A, et al. The impact of tumor board on cancer care: evidence from an umbrella review. BMC Health Serv Res. 2020;20(1):73. doi: 10.1186/s12913-020-4930-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Devitt B, Philip J, McLachlan SA. Re: Tumor boards and the quality of cancer care. J Natl Cancer Inst. 2013;105(23):1838. doi: 10.1093/jnci/djt311 [DOI] [PubMed] [Google Scholar]
  • 19.Krasna M, Freeman RK, Petrelli NJ. Re: Tumor boards and the quality of cancer care. J Natl Cancer Inst. 2013;105(23):1839–40. doi: 10.1093/jnci/djt313 [DOI] [PubMed] [Google Scholar]
  • 20.Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115–8. doi: 10.1038/nature21056 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Ardila D, Kiraly AP, Bharadwaj S, Choi B, Reicher JJ, Peng L, et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat Med. 2019;25(6):954–61. doi: 10.1038/s41591-019-0447-x [DOI] [PubMed] [Google Scholar]
  • 22.Rubin R. Artificial Intelligence for Cervical Precancer Screening. JAMA. 2019;321(8):734. doi: 10.1001/jama.2019.0888 [DOI] [PubMed] [Google Scholar]
  • 23.McKinney SM, Sieniek M, Godbole V, Godwin J, Antropova N, Ashrafian H, et al. International evaluation of an AI system for breast cancer screening. Nature. 2020;577(7788):89–94. doi: 10.1038/s41586-019-1799-6 [DOI] [PubMed] [Google Scholar]
  • 24.Tian Y, Liu X, Wang Z, Cao S, Liu Z, Ji Q, et al. Concordance Between Watson for Oncology and a Multidisciplinary Clinical Decision-Making Team for Gastric Cancer and the Prognostic Implications: Retrospective Study. J Med Internet Res. 2020;22(2):e14122. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Kim M, Kim BH, Kim JM, Kim EH, Kim K, Pak K, et al. Concordance in postsurgical radioactive iodine therapy recommendations between Watson for Oncology and clinical practice in patients with differentiated thyroid carcinoma. Cancer. 2019;125(16):2803–9. doi: 10.1002/cncr.32166 [DOI] [PubMed] [Google Scholar]
  • 26.Liu C, Liu X, Wu F, Xie M, Feng Y, Hu C. Using Artificial Intelligence (Watson for Oncology) for Treatment Recommendations Amongst Chinese Patients with Lung Cancer: Feasibility Study. J Med Internet Res. 2018;20(9):e11087. doi: 10.2196/11087 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Interdisziplinaere Leitlinie der Qualitaet S3 zur Frueherkennung, Diagnose und Therapie der verschiedenen Stadien des Prostatakarzinoms. Version 5.0 April 2018. [Google Scholar]
PLOS Digit Health. doi: 10.1371/journal.pdig.0000054.r001

Decision Letter 0

Padmanesan Narasimhan

31 May 2022

PDIG-D-22-00125

Validation and implementation of a mobile app decision support system for quality assurance of tumor boards. Analyzing the concordance rates for prostate cancer from a multidisciplinary tumor board of a University Cancer Center

PLOS Digital Health

Dear Dr. Ural,

Thank you for submitting your manuscript to PLOS Digital Health. After careful consideration, we feel that it has merit but does not fully meet PLOS Digital Health's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

The major weakness of the paper lies in its methodology section. This could be improved by providing more details on

1. Framework or theory involved in the methods

2. Rationale for statistical analysis including appropriate statistical analyses/methods

3. Rationale for the study design and why a design like an RCT wasn't chosen

4. The details on the EasyOncology application are inadequate; this could be improved in the main text

5. The funding source of this study needs to be explicit: how was this study funded?

==============================

Please submit your revised manuscript by 15th June 2022. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at digitalhealth@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pdig/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

* A rebuttal letter that responds to each point raised by the editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

* A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

* An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

We look forward to receiving your revised manuscript.

Kind regards,

Padmanesan Narasimhan, MBBS MPH PhD

Section Editor

PLOS Digital Health

Journal Requirements:

1. Please provide separate figure files in .tif or .eps format only and remove any figures embedded in your manuscript file. Please also ensure that all files are under our size limit of 10MB.

For more information about how to convert your figure files please see our guidelines: https://journals.plos.org/digitalhealth/s/figures

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLOS Digit Health. doi: 10.1371/journal.pdig.0000054.r003

Decision Letter 1

Padmanesan Narasimhan

14 Nov 2022

PDIG-D-22-00125R1

Validation and implementation of a mobile app decision support system for quality assurance of tumor boards. Analyzing the concordance rates for prostate cancer from a multidisciplinary tumor board of a University Cancer Center

PLOS Digital Health

Dear Dr. Ural,

Thank you for submitting your manuscript to PLOS Digital Health. Our reviewers have got back to us and suggested minor revisions. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript within 30 days (by Dec 14 2022, 11:59 PM). If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at digitalhealth@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pdig/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.


We look forward to receiving your revised manuscript.

Kind regards,

Padmanesan Narasimhan, MBBS MPH PhD

Section Editor

PLOS Digital Health

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments (if provided):

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

--------------------

2. Does this manuscript meet PLOS Digital Health’s publication criteria? Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe methodologically and ethically rigorous research with conclusions that are appropriately drawn based on the data presented.

Reviewer #1: (No Response)

Reviewer #2: Yes

--------------------

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: (No Response)

Reviewer #2: N/A

--------------------

4. Have the authors made all data underlying the findings in their manuscript fully available (please refer to the Data Availability Statement at the start of the manuscript PDF file)?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: (No Response)

Reviewer #2: Yes

--------------------

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS Digital Health does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: (No Response)

Reviewer #2: Yes

--------------------

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I thank the authors for this manuscript draft, and especially for the great work they are doing in developing the EasyOncology DSS for oncology. I believe their approach of using a DSS in combination with, and controlled by, expert clinicians is exactly the right method for an indication area as complex and heterogeneous as oncology, with its various cancers and subpopulations. The authors' high-level objective in this work, replacing the valuable MTB time spent assessing trivial cases with DSS-based automation, is also the correct approach when moving stepwise from AI-assisted humans toward more fully automated decision-making.

I do have some comments and proposals on the latest article revision. I believe some of them are critical to address before this manuscript is ready to be published.

GENERAL COMMENTS

• You correctly mention in the abstract and discussion that cross-validation over multiple clinics is required to further increase decision quality and avoid potential clinic bias. This is exactly as it should be; however, as the EO application and the advice by the DSS evaluated in this manuscript are based(?) on PCa treatment practises specifically in Germany, I am left to consider whether your DSS and cross-validation are then applicable to treatment practises in Germany only. At least EO is available in German only. I can only assume, based on my personal experience, that SoC varies a lot between countries, especially so in the earlier stages of PCa. I believe this should be mentioned as a limitation, at least in the discussion; or, if the CV is done over several countries, perhaps this should then be mentioned to address this challenge.

• Authors use both MTD and MTB abbreviations for multidisciplinary tumor board. Please unify.

• In general, should the figure titles also appear within the figures themselves? The quality of the figure images is also rather poor.

• Check the font for different header levels in the manuscript to ease readability.

• Overall, the in-text references to Table 1, Figure 3 and Figure 4 do not seem to be well placed relative to their intended locations within the text.

• The placement of reference square brackets varies a bit throughout the manuscript, at least in the discussion: sometimes after the period, sometimes before. Please unify.

Table 1:

• Some values include the size of the N/A population; in some cases, such as clinical stage and stratification, this size is not mentioned.

• What is the “a” in superscript in the footnote?

Figure 1:

• please check the language of figure title

Figure 4:

• please elaborate the title

ABSTRACT

• In the author summary you write “Unfortunately, current digital tools that have been developed to support clinicians on the process of decision-making, have failed to provide treatment recommendations with sufficient accuracy”. This is a rather bold statement and should perhaps be softened by using “having difficulties” or similar, instead of “failing”.

• You also speak here about newly diagnosed patients but do not mention anything about this in the study design and patients section. Was this an inclusion criterion for the retrospective analysis, and if not, are the results applicable to patients with a longer patient journey? There seems to be something in the discussion about this, but I believe it belongs in the design and patients section as well.

INTRODUCTION

• In the end of introduction I would suggest a small change: “The aim of this clinical research is to implement the aforementioned technology for validation and quality assurance of a urological tumor board in NAME OF HOSPITAL”.

MATERIAL AND METHODS

Smartphone application

• I do believe the authors that EO has been adequately developed and is credible, but in my opinion the manuscript itself still fails to answer the original review critique #4: what is it that EO does? The reader would need to download the application (not available in the country where I live). What are the possible treatment recommendations (input => OUTPUT)? I can understand what the input is, but I do not understand what the output from the DSS is here. Knowing it would increase the contextual understanding of these results a lot.

• What is the medical device classification of EO?

Study design and patients

• How is the second round of analysis relevant for these results or this manuscript?

DATA ANALYSIS AND RESULTS

• MS Excel version, if it really has been used for data analysis?

DISCUSSION

• 2nd paragraph: I would change the questions into statements that are then backed by references.

• You are writing that “all attempts of AI-systems to reliably provide even standard therapeutic recommendations for first-line therapy have been disappointing”. I wouldn’t use such a strong statement with one example.

• Finally, I would briefly touch on the use case versus reliability here: is it better that the system over-recommends treatments, given the consequences when someone who needs to be treated does not get it? On the other hand, safety issues await if treatments are over-recommended.

Reviewer #2: (No Response)

--------------------

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

Do you want your identity to be public for this peer review? If you choose “no”, your identity will remain anonymous but your review may still be made public.

For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Yacine HADJIAT

--------------------


PLOS Digit Health. doi: 10.1371/journal.pdig.0000054.r005

Decision Letter 2

Haleh Ayatollahi

24 Feb 2023

PDIG-D-22-00125R2

Validation and implementation of a mobile app decision support system for quality assurance of tumor boards. Analyzing the concordance rates for prostate cancer from a multidisciplinary tumor board of a University Cancer Center

PLOS Digital Health

Dear Dr. Ural,

Thank you for submitting your manuscript to PLOS Digital Health. After careful consideration, we feel that it has merit but does not fully meet PLOS Digital Health's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript within 30 days (by Mar 26 2023, 11:59 PM). If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at digitalhealth@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pdig/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.


We look forward to receiving your revised manuscript.

Kind regards,

Haleh Ayatollahi

Section Editor

PLOS Digital Health

Journal Requirements:

1. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

2. Please send a completed 'Competing Interests' statement, including any COIs declared by your co-authors. If you have no competing interests to declare, please state "The authors have declared that no competing interests exist". Otherwise please declare all competing interests beginning with the statement "I have read the journal's policy and the authors of this manuscript have the following competing interests:"

3. Please ensure that Funding Information and Financial Disclosure Statement are matched.

4. In the Funding Information you indicated that no funding was received. Please revise the Funding Information field to reflect funding received.

Additional Editor Comments (if provided):

The topic of the manuscript is interesting and it is well-written. Please consider addressing the following issues in your manuscript as well.

1- The title seems too long. Please make it shorter. The authors can remove the second part of the title starting with “Analyzing…”.

2- Please choose the keywords based on the MeSH terms, as well.

3- Please follow the journal instructions to provide a concise unstructured abstract.

4- In the introduction, although the authors reviewed the existing literature, they need to explain the gap in the existing knowledge, too.

5- In the methods section, please explain the study design and participants first and then explain the design of the smart phone application.

6- In page 5, the authors noted “The content of EasyOncology was developed by experienced specialists…”. It is important to elaborate on this part and make it clear how and using which methodology the content was developed.

7- In the methods section, please add the inclusion and exclusion criteria for the patients.

8- The readers may need to know more about the technical aspects of the CDSS. Please add some figures of the system and provide an example (flow chart) to show how the system worked. Moreover, please make sure that the figures are visible in the submitted file.

9- The evaluation methods need to be explained in detail. The authors could also evaluate the specificity, sensitivity, accuracy, etc.

10- In Table 1, some figures are the same and their percentages are different and vice versa. Please re-check Table 1.

11- The possible reasons for reaching concordance rates of (100%) need to be explained in the discussion section.

12- We need to see a conclusion section after the discussion.
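Point 9 in the editor's list above asks for evaluation metrics beyond raw agreement. As a minimal sketch (hypothetical code and toy data, not the authors' analysis), overall and per-stage concordance with 95% Wilson score intervals could be computed from paired MTB/DSS recommendations like this:

```python
from collections import defaultdict
from math import sqrt


def wilson_interval(k: int, n: int, z: float = 1.96) -> tuple:
    """95% Wilson score interval for a proportion k/n."""
    if n == 0:
        return (0.0, 0.0)
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return (centre - half, centre + half)


def concordance_by_stage(records):
    """records: iterable of (stage, mtb_rec, dss_rec) tuples.

    Returns {stage: (concordance_rate, (ci_low, ci_high))}.
    """
    counts = defaultdict(lambda: [0, 0])  # stage -> [concordant, total]
    for stage, mtb, dss in records:
        counts[stage][1] += 1
        if mtb == dss:
            counts[stage][0] += 1
    return {s: (k / n, wilson_interval(k, n)) for s, (k, n) in counts.items()}


# Toy example (hypothetical recommendations, not the study cohort):
data = [("I", "AS", "AS"), ("I", "RP", "AS"), ("II", "RP", "RP")]
result = concordance_by_stage(data)
```

Sensitivity and specificity would additionally require reducing each recommendation pair to a binary decision (e.g. treat vs. no treat) with the MTB taken as the reference standard.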

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

--------------------

2. Does this manuscript meet PLOS Digital Health’s publication criteria? Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe methodologically and ethically rigorous research with conclusions that are appropriately drawn based on the data presented.

Reviewer #1: Yes

--------------------

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

--------------------

4. Have the authors made all data underlying the findings in their manuscript fully available (please refer to the Data Availability Statement at the start of the manuscript PDF file)?


Reviewer #1: Yes

--------------------

5. Is the manuscript presented in an intelligible fashion and written in standard English?


Reviewer #1: Yes

--------------------

6. Review Comments to the Author


Reviewer #1: All the comments were addressed in a satisfying manner.

--------------------

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

Do you want your identity to be public for this peer review? If you choose “no”, your identity will remain anonymous but your review may still be made public.

For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Sammeli Liikkanen

--------------------


PLOS Digit Health. doi: 10.1371/journal.pdig.0000054.r007

Decision Letter 3

Haleh Ayatollahi

27 Apr 2023

Validation and implementation of a mobile app decision support system for prostate cancer to improve quality of tumor boards

PDIG-D-22-00125R3

Dear Dr. Ural,

We are pleased to inform you that your manuscript 'Validation and implementation of a mobile app decision support system for prostate cancer to improve quality of tumor boards' has been provisionally accepted for publication in PLOS Digital Health.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow-up email from a member of our team. 

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact digitalhealth@plos.org.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Digital Health.

Best regards,

Haleh Ayatollahi

Section Editor

PLOS Digital Health

***********************************************************

Reviewer Comments (if any, and for reference):

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    Attachment

    Submitted filename: Response to Reviewers.pdf

    Attachment

    Submitted filename: Response to Reviewers.docx

    Attachment

    Submitted filename: 230331Response to reviewers.docx


