PLOS One. 2020 Nov 6;15(11):e0241897. doi: 10.1371/journal.pone.0241897

Is reporting quality in medical publications associated with biostatisticians as co-authors? A registered report protocol

Ulrike Held 1,*, Klaus Steigmiller 1, Michael Hediger 2, Martina Gosteli 3, Kelly A Reeve 1, Stefanie von Felten 1, Eva Furrer 1
Editor: Alan D Hutson4
PMCID: PMC7647072  PMID: 33156885

Abstract

Background

Quality in medical research has recently been criticized for being low, especially in observational research. Research methodology is increasingly complex, but collaboration between clinical researchers and biostatisticians may improve both research and reporting quality. The aim of this study is to quantify the value of having a biostatistician on the team of authors.

Methods

Single-center, retrospective observational study following the STROBE reporting guidelines. We will systematically review all medical publications with biostatisticians from our center as authors or co-authors, and review corresponding papers without biostatisticians from our center published during the same time range. We will compare aspects of reporting quality, both overall and separately for three study types: observational studies, randomized trials, and prognostic studies.

Discussion

We anticipate that the results of the study will raise awareness of the importance of high methodological quality, as well as appropriate reporting quality in clinical research.

Conclusion

Our study will have a direct impact on our center by making each of us more aware of the reporting guidelines for various research designs. This in turn will enhance reporting quality in future research with our involvement. Our study will also raise awareness of the important role that biostatisticians play in the design and analysis of health research projects.

Introduction

In recent years, the methodological and reporting quality of biomedical research has been extensively discussed. In times when the mere number of published papers seems to pave the academic career path for health research scientists, there is increasing motivation to publish for the sake of another paper rather than for the sake of scientific advancement [1]. In Switzerland, the government paid 0.9% of the gross domestic product for research and development in 2016, and more than 20% of all publications in Switzerland between 2011 and 2015 were in the field of “clinical medicine” [2]. The government is paying large amounts of money for medical research, but as Doug Altman said, “To maximise the benefit to society, you need to not just do research, but do it well.” [3]. Some medical journals like JAMA, BMJ, PLOS Medicine, The Lancet, and BMC Medicine, among others, identified this problem many years ago, sparking groups of methodological experts to discuss reliability, perceived value, and reporting quality of research. Simera and co-authors discussed the deficiencies in reporting medical research, making it difficult to assess “how the research was conducted” and to “evaluate the reliability of presented findings” [4], thus hampering its utilization in clinical practice. To maximize the value from funded research, so-called reporting guidelines were developed to foster good research practices among clinical scientists, which are now organized within the EQUATOR network (www.equator-network.org). The first evidence-based recommendation on how to report on results of randomized controlled trials (RCTs) was the CONSORT guideline, published in its first version in 1996. Since then, the CONSORT guideline has been revised [5, 6] and further reporting guidelines have been developed. 
These address common study types in medical research, with STROBE guidelines for observational studies [7], and TRIPOD guidelines for reporting of prediction or prognostic models [8] among the most frequently used.

Despite the large effort put into the development and publication of these reporting guidelines by medical journals, quality standards in medical publications are often not met. One indication is the increasing retraction rates observed in high-ranking medical journals [9]. In recent studies, Dechartres et al. [10], Zuñiga-Hernandez et al. [11] and Sharp et al. [7] found that medical journals should enforce the use of the reporting guidelines more strictly. Among other reasons, limited understanding of research methodology and biostatistics among clinicians might explain poor conduct and reporting quality [12]. It is common practice in clinical research for clinical authors to apply unsuitable methods to complex data sets rather than seek advice from methodological experts. Interdisciplinary teams of medical researchers and biostatisticians, however, would be expected to produce higher-quality publications [13] and could thus increase the public benefit of medical research.

Authorship guidelines at our center (University of Zurich, Switzerland) are based on fulfilling the criteria of the Swiss Academies of Arts and Sciences: (1) making a substantial contribution to the planning, execution, evaluation and supervision of research; (2) involvement in writing the manuscript; and (3) approving the final version of the manuscript. Sometimes, a collaborating biostatistician is mentioned only in the acknowledgements section. However, only as a co-author can the biostatistician take responsibility for writing parts of the manuscript (e.g. statistical methods, results), critically revising it, and approving the final version.

The objective of our study is to quantify the association between biostatistical co-authorship and the quality of reporting in medical publications, based on a set of pre-defined quality criteria. A secondary objective is to identify study types with methodological knowledge gaps and to promote awareness of the complexity in these areas among clinical researchers as well as among methodologists.

Methods

The study is a retrospective, single-center observational cohort study conducted at the University of Zurich (UZH) and its University Hospital (USZ). The group of 13 consulting academic biostatisticians and statisticians from the Epidemiology, Biostatistics and Prevention Institute and the Institute of Mathematics, both at the University of Zurich, will be referred to as “biostatisticians” in the following.

Selection of exposed and non-exposed publications cohorts

Two groups of publications will be compared; these will be referred to as “exposed” and “non-exposed”, corresponding to their “exposure” to a biostatistician of our group in the team of authors. The two cohorts are matched for location, publication year, and study type, using a frequency matching approach [14]. To define the group of exposed publications, all medical research publications in PubMed between 2017 and 2018 with at least one of the biostatisticians as main or co-author were retrieved on Dec 9, 2019, with a search term as specified in S1 Appendix. Methodological publications of these biostatisticians were not included. Publications had to be in English.

To define the group of “non-exposed” publications for comparison, all medical research publications found in PubMed between 2017 and 2018, with affiliation University of Zurich (UZH), University Hospital Zurich (USZ) or any of the affiliated university hospitals for the first and/or second author, were extracted on Dec 16, 2019 (S1 Appendix). The non-exposed publications do not have any of the biostatisticians of our group on the author list. The full list of affiliations considered can be found in S1 File. This large set of publications will be used in a random but replicable order, aiming to remove chronological or any other systematic ordering while adhering to high standards of reproducibility.
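All statistical programming in this protocol is done in R; purely as an illustration of the idea behind a "random but replicable" ordering, a seeded shuffle can be sketched as follows (the function name and seed value are hypothetical, not taken from the study):

```python
import random

def replicable_order(records, seed=2019):
    """Shuffle records into a random but reproducible order.

    A fixed seed makes the permutation identical on every run, so the
    screening order of candidate publications can be reproduced exactly.
    (Illustrative sketch; the study itself uses R, and this seed value
    is made up.)
    """
    rng = random.Random(seed)   # dedicated RNG, independent of global state
    shuffled = list(records)    # copy, leaving the input untouched
    rng.shuffle(shuffled)
    return shuffled
```

Screening such a shuffled list from the top until the per-year study-type counts match those of the exposed group would then yield the frequency-matched non-exposed cohort.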

Categorization into study types

For each of the exposed publications, the study type was determined, and the subset of all observational studies, RCTs and prognostic studies was evaluated further regarding reporting quality. Categorization into study type was performed by the group of biostatisticians. For most publications, the authors themselves determined the study type. For some publications, the authoring biostatistician had left our group, and thus the study type was categorized independently and in duplicate by two authors (UH, EF). After consensus on study type was reached, record count for each study type for each publication year was obtained.

As the number of potential non-exposed publications was much larger, these publications were categorized into observational studies, RCTs and prognostic studies in the random but replicable order described above, until the numbers of non-exposed publications of these study types matched the corresponding numbers of exposed publications per year. Categorization was performed independently and in duplicate by the authors UH and MH (for papers published in 2017) and by EF and MH (for 2018). Any discrepancies were resolved by discussion and third-party arbitration (KS). This set of publications was considered the non-exposed group.

Selection of items from reporting guideline to measure reporting quality

For each of these three study types, a set of six items measuring reporting quality has been identified by group consensus among the biostatisticians at our university. The quality criteria are based on the reporting guidelines STROBE, CONSORT and TRIPOD and reflect characteristics of a publication that are especially important for judging the validity of the results.

Specification of the reporting quality items

The reporting guideline items chosen for the ratings represent the following general quality dimensions for all three study types:

  1. variable specification,

  2. how study size was arrived at,

  3. missing data,

  4. statistical methods,

  5. precision of results,

  6. whether the corresponding reporting guideline (i.e. STROBE, CONSORT, TRIPOD) was mentioned.

The rating of publications on the six items for each study type was operationalized and piloted, so that the items can be used efficiently and robustly to rate each publication’s reporting quality. Each dimension has different possible answer categories, which together yield the overall rating. Details of the operationalization can be found in S2 File.

Outcomes and quantification of reporting quality

The primary outcome of this study is the reporting quality of exposed and non-exposed publications with respect to the six quality dimensions. The primary outcome will be assessed in a blinded fashion and in duplicate by two raters. Blinding will be guaranteed by removing author names, affiliation lists, journal name, corresponding author name, author contributions, date, acknowledgements, references and DOI from every PDF. Discrepancies in the rating will be resolved by a third rater’s judgement and discussion until consensus is reached.

To quantify reporting quality, we assign equal weights to each answer category. Under this weighting, the quality dimension score per publication ranges from 0 (lowest possible reporting quality) to 11 (highest possible reporting quality).

The secondary outcome of this study is the number of citations of both the exposed and the non-exposed publications at a fixed date.

Outcome rating and rater training

The outcome rating and its operationalization was developed and discussed among four authors (UH, KS, MH, EF), until a consensus was reached. After operationalization was finalized, the resulting questions for each study type were programmed to be shown in an R shiny app, which underwent a quality review and testing period. The raters of exposed and non-exposed publications will be instructed and trained by using vignette publications for calibration. These vignette publications will be related to the study, but published in 2019, and will be rated with scrutiny by the authors of this study. These vignettes contain examples of poor and good reporting quality. The raters will be trained by authors of this study, but the authors of this study will not be involved in the reporting quality rating themselves. The raters will be obliged to rate the reporting quality based on the blinded PDFs alone, and not to use additional information from the internet while doing so. Ratings will be performed in blinded fashion, meaning that the raters will be unaware of the classification of publications as exposed or non-exposed and of authors on the publications. Ratings will be performed in duplicate, and any discrepancies will be resolved by discussion until consensus is reached.

Sample size considerations

With the given sample size of 95 exposed and 95 non-exposed publications from 2017 and 2018, at a significance level of 5% and with a power of 80%, an effect size of 0.41 (Cohen’s d) can be detected using a two-sample, two-sided t-test with equal variances. This is conventionally considered a medium effect size.
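The minimal detectable effect size quoted above can be checked with the standard normal approximation d ≈ (z₁₋α/₂ + z₁₋β)·√(2/n). The following stdlib-only Python sketch is illustrative; the protocol's own calculation may instead use a t-distribution-based routine such as R's power.t.test, which gives a very similar value:

```python
from math import sqrt
from statistics import NormalDist

def detectable_effect_size(n_per_group, alpha=0.05, power=0.80):
    """Smallest Cohen's d detectable by a two-sided, two-sample t-test
    with equal group sizes, via the usual normal approximation."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sqrt(2 / n_per_group)

d = detectable_effect_size(95)  # about 0.41, matching the protocol
```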

Data management

Categorization into one of the three study types was performed with the help of a specifically programmed R shiny app, in which the title and abstract, as well as a link to the full text, were provided.

Reporting quality rating will likewise be performed using a shiny app. Electronic records of the reporting quality ratings before and after consensus will be made available. The use of shiny apps in this research supports highly reliable data entry.

Risk of bias

Risk of detection bias will be addressed with blinded outcome ratings. Risk of selection bias will be addressed with reproducible, random subsampling of PubMed publications from medical publications with UZH/USZ affiliation for the group of non-exposed publications. The results of this study could be confounded by indication, if more complex projects were brought to our attention whereas less complex projects were addressed by the clinicians without asking for help from an academic biostatistician.

Statistical methods and programming

Statistical methods for the primary outcome will include visualization of results with spider plots for each reporting quality dimension, by study type and in total across study types. For both analyses, the raw primary outcome and the mean difference in reporting quality rating between exposed and non-exposed publications will be reported with 95% confidence intervals. The two-sample t-test or the two-sample Wilcoxon test will be used for group comparisons; the decision between the parametric and the non-parametric test will be based on visual inspection of histograms and corresponding QQ-plots. Descriptive statistics of the primary outcome for all exposed and non-exposed publications, in subgroups of study types, will be shown in a table.

The number of citations on a specified date will be collected and compared between the groups of exposed and non-exposed publications, again with either t-tests or Wilcoxon tests; the decision will be based on visual inspection of histograms and QQ-plots. Between-group differences will be reported with 95% confidence intervals, calculated either from the t-distribution assuming equal variances or as Hodges–Lehmann estimates based on the Wilcoxon test, corresponding to the statistical test chosen for each outcome.
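The Hodges–Lehmann estimate that accompanies the two-sample Wilcoxon test is the median of all pairwise differences between the two groups (in R it is returned by wilcox.test with conf.int = TRUE). A minimal, illustrative Python sketch:

```python
from itertools import product
from statistics import median

def hodges_lehmann(x, y):
    """Hodges-Lehmann location-shift estimate for two samples:
    the median of all pairwise differences x_i - y_j."""
    return median(a - b for a, b in product(x, y))

hodges_lehmann([1, 2, 3], [0, 1, 2])  # 1
```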

For assessing the level of agreement of reporting quality between the two raters, Cohen’s kappa (κ) values will be estimated, again with 95% confidence intervals. κ values will be weighted with squared weights. Low levels of agreement would indicate that the ratings are complex, and that third-party arbitration was required in many publications. Again, these κ values will be reported overall and in subgroups of study types. Dependence of κ values on marginals will be taken into consideration during interpretation of results.
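Cohen's κ with squared (quadratic) weights penalizes each rater disagreement by the squared distance between the ordered category indices, so near-misses count far less than large discrepancies. The actual analysis is planned in R, where packages such as irr or psych provide weighted κ; the following self-contained Python version is only a sketch of the computation:

```python
def quadratic_weighted_kappa(ratings1, ratings2, categories):
    """Cohen's kappa with squared (quadratic) weights.

    Disagreements are penalized by (i - j)**2, the squared distance
    between the ordered category indices assigned by the two raters.
    """
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(ratings1)
    # joint distribution of the two raters' ratings
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(ratings1, ratings2):
        obs[idx[a]][idx[b]] += 1.0 / n
    # marginal distributions of each rater
    p1 = [sum(row) for row in obs]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    w = [[(i - j) ** 2 for j in range(k)] for i in range(k)]
    observed = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1.0 - observed / expected
```

Perfect agreement yields κ = 1, while chance-level agreement yields κ near 0; confidence intervals would be obtained separately, e.g. by bootstrap or the asymptotic variance.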

All statistical programming will be performed with R, in combination with dynamic reporting. Statistical programming included downloading all potential non-exposed publications, random reordering, development of a shiny app for categorization of the publications, development of a shiny app for reporting quality rating of the publications, as well as statistical methods for comparison of the exposed and non-exposed groups, and its graphical display.

The results of the study will be reported according to the STROBE guidelines [15]. All data (anonymized) and code, together with the shiny apps will be made available upon publication.

Current status

The electronic search was performed in December 2019 and resulted in 132 publications from 2017 and 2018 with biostatisticians from our group as authors or co-authors. Of these, 77 were observational studies, six were RCTs, and 12 were prognostic studies, 95 in total. The remaining papers were of study types other than those considered in this study. In the group of potential non-exposed publications, there were 3559 papers with suspected affiliation University of Zurich / University Hospital Zurich. After removal of unsuitable affiliations, 3420 papers remained with a first or second author affiliated with UZH/USZ in 2017 and 2018. Details can be found in Fig 1.

Fig 1. Flowchart of study selection.

Fig 1

The flowchart shows selection of exposed publications and non-exposed publications, after creation of an affiliation list, and categorization into one of the three study types observational study, RCT, and prognostic study.

Timeline

While the registered report protocol is under review, we will simultaneously upload the submitted version on the Open Science Framework (https://osf.io/), to raise awareness, receive additional comments, and inform potential raters about the planned research. Upon successful registration of the protocol, requested changes to study design will be made, the raters will be appointed, and rater training will be initiated. Ratings of reporting quality are planned to start in fall 2020. Data will be available in December 2020, and data analysis as well as writing of the manuscript will take place in early 2021.

Discussion

The study was designed to evaluate and quantify the impact of a biostatistician in an academic setting on the reporting quality of medical publications. From our perspective, the topic is relevant and raises awareness at our center as well as among a broader audience. Related publications have also addressed this topic recently [16].

This study has several strengths. First, it takes a systematic and reproducible approach that could easily be adapted to other publication years and universities. Second, rating of reporting quality is done in a blinded fashion and in duplicate. However, potential sources of bias, such as confounding by indication, may affect our results. This would be the case if biostatisticians were asked for advice mainly in the more complex research projects, whereas less complex projects were completed without a biostatistician. High scores for reporting quality would then be more difficult to obtain for exposed than for non-exposed publications, resulting in an underestimated between-group difference. Another limitation of this study is our consensus decision on the reporting items addressed, as well as the weights they were given in the ratings; these may represent our personal views, the results will depend on these choices, and both will be opened for discussion. A further limitation is that we will not account for other biostatisticians (outside of our group) potentially involved in the research reported in the non-exposed publications. This will lead to a conservative estimate of the association between biostatistical co-authorship and reporting quality. Finally, there is potential for misclassification of publications in which a biostatistician was responsible for the analysis but is mentioned only in the acknowledgements. In such situations, however, the biostatistician does not get the chance to critically revise the manuscript and approve the final version, which could itself lead to low reporting quality or even misinterpretation of results.

As a consequence of this research, awareness of low reporting quality, and of the guidelines that enable improved reporting, will be raised at our centre. The study could provide quantified evidence of the need for journals to implement higher-quality reviews and for biostatistics knowledge to be increased. It could also suggest that involving biostatisticians as members of medical research groups from the early stages of research projects, rather than merely as consulting biostatisticians or data analysts, would generally increase the quality, conduct, and reporting of medical research. The study design is intended to facilitate the use of our approach in other academic centres with biostatistics units involved in medical research. A number of publications were not addressed in this manuscript, namely studies of other study types such as systematic reviews and meta-analyses, study protocols and pre-clinical studies. We aim to address these study types in future research.

Our research has implications for practice in the sense that frequent discussions about the necessity and importance of the reporting items have fostered awareness in our group of biostatisticians. Future years will be benchmarked against the findings of this study, and the process has already led the biostatisticians at our centre to insist on using the guidelines in their most recent health research publications.

Conclusion

Our study will have a direct impact on our center by making each of us more aware of the reporting guidelines for various research designs. This in turn will enhance reporting quality in future research with our involvement. Our study will also raise awareness of the important role that biostatisticians play in the design and analysis of health research projects.

Supporting information

S1 Appendix. Search string and date of search.

The file contains the search string for exposed and non-exposed publications for use in PubMed.

(PDF)

S1 File. Affiliation list.

The affiliation list was generated from the electronic search, with indication of included and excluded affiliations.

(PDF)

S2 File. Questionnaire for assessment of reporting quality.

The questionnaire is intended to be used for rating the reporting quality in each study type.

(PDF)

Acknowledgments

We thank Steffi Muff, Leonhard Held, Sarah Haile, Julia Braun, Gilles Kratzer, Andrea Götschi, and Charlotte Micheloud for contributing to the selection of reporting items. We also express our gratitude to Tina Wünn who was involved in the operationalization.

Data Availability

This is a registered report protocol. The data collected in this research project will be made available upon finalization of the study together with corresponding statistical programming code and Shiny Apps.

Funding Statement

EF and MH received funding from the Center for Reproducible Science (CRS) at the University of Zurich. The funder had no role, and will have no role, in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Kleinert S, Horton R. How should medical science change? The Lancet. Lancet Publishing Group; 2014. pp. 197–198. 10.1016/S0140-6736(13)62678-1 [DOI] [PubMed] [Google Scholar]
  • 2.Schweizerische Eidgenosssenschaft. Forschung und Entwicklung (F+E) beim Bund | Steckbrief. 2018 [cited 25 Jun 2020]. https://www.bfs.admin.ch/bfs/de/home/statistiken/bildung-wissenschaft/erhebungen/fe-bund.assetdetail.13207592.html
  • 3.Sauerbrei W, Bland M, Evans SJW, Riley RD, Royston P, Schumacher M, et al. Doug Altman: Driving critical appraisal and improvements in the quality of methodological and medical research. Biometrical Journal. 2020. 10.1002/bimj.202000053 [DOI] [PubMed] [Google Scholar]
  • 4.Simera I, Moher D, Hirst A, Hoey J, Schulz KF, Altman DG. Transparent and accurate reporting increases reliability, utility, and impact of your research: Reporting guidelines and the EQUATOR Network. BMC Medicine. 2010. 10.1186/1741-7015-8-24 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, et al. CONSORT 2010 explanation and elaboration: Updated guidelines for reporting parallel group randomised trials. International Journal of Surgery. 2012. 10.1016/j.ijsu.2011.10.001 [DOI] [PubMed] [Google Scholar]
  • 6.Schulz KF, Altman DG, Moher D. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340: c332 10.1136/bmj.c332 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Sharp MK, Tokalić R, Gómez G, Wager E, Altman DG, Hren D. A cross-sectional bibliometric study showed suboptimal journal endorsement rates of STROBE and its extensions. Journal of Clinical Epidemiology. 2019;107: 42–50. 10.1016/j.jclinepi.2018.11.006 [DOI] [PubMed] [Google Scholar]
  • 8.Moons KGM, Altman DG, Reitsma JB, Ioannidis JPA, Macaskill P, Steyerberg EW, et al. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): Explanation and elaboration. Annals of Internal Medicine. 2015. 10.7326/M14-0698 [DOI] [PubMed] [Google Scholar]
  • 9.Cokol M, Ozbay F, Rodriguez-Esteban R. Retraction rates are on the rise. EMBO reports. 2008;9: 2 10.1038/sj.embor.7401143 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Dechartres A, Trinquart L, Atal I, Moher D, Dickersin K, Boutron I, et al. Evolution of poor reporting and inadequate methods over time in 20 920 randomised controlled trials included in Cochrane reviews: Research on research study. BMJ (Online). 2017;357 10.1136/bmj.j2490 [DOI] [PubMed] [Google Scholar]
  • 11.Zuñiga-Hernandez JA, Dorsey-Trevinõ EG, González-González JG, Brito JP, Montori VM, Rodriguez-Gutierrez R. Endorsement of reporting guidelines and study registration by endocrine and internal medicine journals: Meta-epidemiological study. BMJ Open. 2019. 10.1136/bmjopen-2019-031259 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Cobo E, Selva-O’Callagham A, Ribera JM, Cardellach F, Dominguez R, Vilardell M. Statistical reviewers improve reporting in biomedical articles: A randomized trial. PLoS ONE. 2007;2 10.1371/journal.pone.0000332 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Han S, Olonisakin TF, Pribis JP, Zupetic J, Yoon JH, Holleran KM, et al. A checklist is associated with increased quality of reporting preclinical biomedical research: A systematic review. Boltze J, editor. PLOS ONE. 2017;12: e0183591 10.1371/journal.pone.0183591 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Rothman KJ, Greenland S, Lash TL. Modern Epidemiology. 3rd Edition Lippincott Williams & Wilkins; 2008. [Google Scholar]
  • 15.von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Journal of Clinical Epidemiology. 2008. 10.1016/j.jclinepi.2007.11.008 [DOI] [PubMed] [Google Scholar]
  • 16.Sharp JL, Wrenn J, Gerard PD. Identifying the perceived value of statistical consulting in a university setting. Journal of Statistical Theory and Practice. 2016;10: 216–225. [Google Scholar]

Decision Letter 0

Alan D Hutson

14 Sep 2020

PONE-D-20-21077

Is Reporting Quality in Medical Publications Associated with Biostatisticians as Co-Authors? A Registered Report Protocol.

PLOS ONE

Dear Dr. Held,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please attend to the minor comments.  All of the reviewers thoroughly found this work important.

Please submit your revised manuscript by Oct 29 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Alan D Hutson

Academic Editor

PLOS ONE

Additional Editor Comments:

Please attend to the minor comments. All of the reviewers thoroughly found this work important.

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Does the manuscript provide a valid rationale for the proposed study, with clearly identified and justified research questions?

The research question outlined is expected to address a valid academic problem or topic and contribute to the base of knowledge in the field.

Reviewer #1: Yes

Reviewer #3: Yes

Reviewer #4: Yes

**********

2. Is the protocol technically sound and planned in a manner that will lead to a meaningful outcome and allow testing the stated hypotheses?

The manuscript should describe the methods in sufficient detail to prevent undisclosed flexibility in the experimental procedure or analysis pipeline, including sufficient outcome-neutral conditions (e.g. necessary controls, absence of floor or ceiling effects) to test the proposed hypotheses and a statistical power analysis where applicable. As there may be aspects of the methodology and analysis which can only be refined once the work is undertaken, authors should outline potential assumptions and explicitly describe what aspects of the proposed analyses, if any, are exploratory.

Reviewer #1: Yes

Reviewer #3: Yes

Reviewer #4: Yes

**********

3. Is the methodology feasible and described in sufficient detail to allow the work to be replicable?

Reviewer #1: Yes

Reviewer #3: Yes

Reviewer #4: Yes

**********

4. Have the authors described where all data underlying the findings will be made available when the study is complete?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception, at the time of publication. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #3: Yes

Reviewer #4: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #3: Yes

Reviewer #4: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above and, if applicable, provide comments about issues authors must address before this protocol can be accepted for publication. You may also include additional comments for the author, including concerns about research or publication ethics.

You may also provide optional suggestions and comments to authors that they might find helpful in planning their study.

(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I have only minor comments.

1) Your use of 'case' and 'control' is non-standard and may confuse some readers. This is the distinction that is used in observational studies where sampling is by outcome. Sometimes these are called retrospective studies but more commonly the preferable term 'case-control' study is used. For example, you might sample cases of lung-cancer and controls who do not have lung cancer. Other types of study classify groups by exposure, not by outcome. For example, you might classify subjects as exposed 'smoker' or not-exposed 'non-smoker'. Such studies are referred to as cohort studies. It is a standard epidemiological teaching that case control studies are good at studying multiple putative causal factors and cohort studies are good at studying multiple possible effects. Here you have the analogue of a cohort study, albeit with papers rather than subjects as the unit of study. I think it would be more usual to refer to the two groups as either exposed or non-exposed (as in an observational study) or 'treated' or possibly 'experimental' versus 'control' as for an interventional study. I agree that this terminology is not ideal here but I worry about what you have chosen. Perhaps all you need to note at the beginning is that 'case' and 'control' are not being used as in many observational studies.

2) I have checked your sample size calculation and find it to be correct provided that you intend to use a two-sample t-test with equal variances to perform a conventional significance test. Is this what you intend? You don't actually say what the intended analysis corresponding to your sample size calculation is.
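The check the reviewer describes can be reproduced with the standard normal-approximation formula for a two-sample t-test with equal variances. The sketch below is purely illustrative: it assumes a two-sided test at alpha = 0.05 with 80% power and plugs in the standardized effect size of 0.41 that the reviewer queries in comment 4e; the protocol's actual intended sample size is not restated here.

```python
from math import ceil
from scipy.stats import norm

# Normal-approximation sample size per group for a two-sided
# two-sample t-test with equal variances (illustrative values only).
d = 0.41                        # standardized effect size queried by the reviewer
alpha, power = 0.05, 0.80
z_a = norm.ppf(1 - alpha / 2)   # ~1.96
z_b = norm.ppf(power)           # ~0.84
n_per_group = 2 * (z_a + z_b) ** 2 / d ** 2
print(ceil(n_per_group))        # rounds up to whole papers per group
```

With these inputs the formula gives roughly 94 papers per group; the exact noncentral-t calculation would require one or two more, which is why the reviewer asks whether 0.41 was chosen to match an already fixed n.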

3) Later you refer to t-tests and Wilcoxon test and say they will be used 'as appropriate' but you don't actually say how you will decide what is appropriate. (Note I am definitely not encouraging you to use a test of Normality to decide between them. Nevertheless this sort of 'as appropriate' statement is something that would not generally be accepted as adequate for a statistical analysis plan in drug development.)

4) Various typos and minor issues

a) L40 'Doug' not 'Dough'

b)L41 This quotation does not sound quite right. Please check it and provide a reference.

c)L50. Suggest 'most frequently used' rather than 'most frequent'

d) L120. 'TRIPOD' not 'TROPID'

e) L158 0.41 is a rather unusual choice of effect size. Is it the case that this is the value that guarantees 80% power for the sample size you intend to use?

f) L179. How will the confidence intervals be calculated? Via the t-test? Via the Hodges-Lehmann estimator associated with the Wilcoxon test?
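The two interval constructions the reviewer contrasts can be sketched as follows. This is a generic illustration on simulated data, not the protocol's analysis code: the t-based interval uses the pooled-variance formula, and the distribution-free interval around the Hodges-Lehmann estimate takes order statistics of all pairwise differences, with the rank cut-off from the normal approximation to the Wilcoxon-Mann-Whitney statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(0.5, 1.0, 40)   # simulated quality scores, one group
b = rng.normal(0.0, 1.0, 40)   # simulated scores, comparison group
n, m = len(a), len(b)

# 1) t-based 95% CI for the difference in means (pooled variance)
diff = a.mean() - b.mean()
sp = np.sqrt(((n - 1) * a.var(ddof=1) + (m - 1) * b.var(ddof=1)) / (n + m - 2))
se = sp * np.sqrt(1 / n + 1 / m)
tcrit = stats.t.ppf(0.975, n + m - 2)
ci_t = (diff - tcrit * se, diff + tcrit * se)

# 2) Hodges-Lehmann estimate and distribution-free 95% CI:
# order statistics of all n*m pairwise differences a_i - b_j
pairwise = np.sort(np.subtract.outer(a, b).ravel())
hl = np.median(pairwise)
k = int(np.floor(n * m / 2 - stats.norm.ppf(0.975) * np.sqrt(n * m * (n + m + 1) / 12)))
ci_hl = (pairwise[k - 1], pairwise[n * m - k])
```

The point of the reviewer's question is that the two intervals estimate different quantities (a difference in means versus a shift in location), so the analysis plan should state which one will be reported.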

Reviewer #3: This is a novel and important research idea that is much needed. As statistical techniques increase in sophistication, somewhat due to increased computing power, we can’t expect physicians and practitioners to always know every statistical nuance that could be applied to their research. Reliance on statisticians to guide methodological decision-making should be normalized to advance science in the best, most sophisticated way.

The authors nicely lay out the purpose of this work, especially in the introductory discussion of why biostatisticians should be included on interdisciplinary research teams even when STROBE checklists are followed.

My suggestions to improve this report protocol are as follows:

1. Consider re-labeling the “control” publications group as a “comparison” publications group. Since biostatisticians were not randomly assigned to author teams / projects, it is not a true control, but rather a comparison.

2. In the introductory paragraph that describes the comparison / control group, please be explicitly clear that this group consists of papers that do not have a statistician listed as an author. This is implied in the introduction, but it could be clearer in this paragraph. (Lines 84-90, page 5).

3. One item to discuss or acknowledge is whether the research center from which you pull these papers has an authorship policy of some sort that would impact whether a biostatistician is added to the authorship list. For example, is there a standard policy for deciding authorship on papers? Either way, please describe it. Also, is there an informal or collegial policy that this research center follows with regard to authorship? For example, if you know that statisticians are always included as authors when they help with methodologies in this research center, that would be good to state in this report protocol as reassurance that the study cases will be correctly classified.

4. Another thought that may or may not be relevant, but which you may wish to address: are statisticians sometimes acknowledged or thanked in papers but not added as authors? If a statistician was thanked and was part of the research, but is not an author on a manuscript that is then classified in the control group, this may represent a misclassification of that paper.

Otherwise, the proposed methodology laid out in this submission is very well thought out and rigorous. I look forward to reading about the results of this important and interesting work.

Reviewer #4: This is a very interesting study. I especially like the use of R and Shiny. Here are some suggested edits.

Intro section, first sentence - if possible, please say a few words on what you mean by quality.

Second sentence - should be "In times when the mere..." Add the word 'the'?

You quote Doug Altman - but it is misspelled. Please add reference if there is one.

'Some medical journals' are mentioned. Can you give examples (e.g., The Lancet, ...)?

You mention reliability, perceived value, and reporting quality. Please define these briefly here.

The EQUATOR network is mentioned. Please give a reference or website link.

Methods section:

The frequency matching approach is mentioned. Please give a definition and/or reference.

In the fifth paragraph, initials are used. I realized later these are for the authors. Please mention that here.

I liked how you identified the quality measures.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Stephen Senn

Reviewer #3: No

Reviewer #4: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Nov 6;15(11):e0241897. doi: 10.1371/journal.pone.0241897.r002

Author response to Decision Letter 0


5 Oct 2020

Dear Dr. Hutson,

Thank you for informing me and my co-authors about the minor revision of the registered report protocol.

We are resubmitting a revised version of the protocol, which has been improved based on the reviewers' suggestions.

All reviewer comments were addressed in our response letter.

Kind regards

Ulrike Held

Attachment

Submitted filename: response.pdf

Decision Letter 1

Alan D Hutson

23 Oct 2020

Is Reporting Quality in Medical Publications Associated with Biostatisticians as Co-Authors? A Registered Report Protocol.

PONE-D-20-21077R1

Dear Dr. Held,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Alan D Hutson

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Does the manuscript provide a valid rationale for the proposed study, with clearly identified and justified research questions?

The research question outlined is expected to address a valid academic problem or topic and contribute to the base of knowledge in the field.

Reviewer #3: Yes

Reviewer #4: Yes

**********

2. Is the protocol technically sound and planned in a manner that will lead to a meaningful outcome and allow testing the stated hypotheses?

The manuscript should describe the methods in sufficient detail to prevent undisclosed flexibility in the experimental procedure or analysis pipeline, including sufficient outcome-neutral conditions (e.g. necessary controls, absence of floor or ceiling effects) to test the proposed hypotheses and a statistical power analysis where applicable. As there may be aspects of the methodology and analysis which can only be refined once the work is undertaken, authors should outline potential assumptions and explicitly describe what aspects of the proposed analyses, if any, are exploratory.

Reviewer #3: Yes

Reviewer #4: Yes

**********

3. Is the methodology feasible and described in sufficient detail to allow the work to be replicable?

Reviewer #3: Yes

Reviewer #4: Yes

**********

4. Have the authors described where all data underlying the findings will be made available when the study is complete?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception, at the time of publication. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #3: Yes

Reviewer #4: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #3: Yes

Reviewer #4: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above and, if applicable, provide comments about issues authors must address before this protocol can be accepted for publication. You may also include additional comments for the author, including concerns about research or publication ethics.

You may also provide optional suggestions and comments to authors that they might find helpful in planning their study.

(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #3: The authors have nicely addressed all of the reviewers' comments. I recommend the article for publication.

Reviewer #4: The paper is much improved. All of my comments were addressed adequately, and I have no further edits or issues to address. Thank you.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #3: Yes: Melissa Kovacs

Reviewer #4: No

Acceptance letter

Alan D Hutson

29 Oct 2020

PONE-D-20-21077R1

Is reporting quality in medical publications associated with biostatisticians as co-authors? A registered report protocol

Dear Dr. Held:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Alan D Hutson

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Appendix. Search string and date of search.

    The file contains the search string for exposed and non-exposed publications for use in PubMed.

    (PDF)

    S1 File. Affiliation list.

    The affiliation list was generated from the electronic search, with indication of included and excluded affiliations.

    (PDF)

    S2 File. Questionnaire for assessment of reporting quality.

    The questionnaire is intended to be used for rating the reporting quality in each study type.

    (PDF)


    Data Availability Statement

    This is a registered report protocol. The data collected in this research project will be made available upon finalization of the study together with corresponding statistical programming code and Shiny Apps.

