Abstract
Objectives
As much as 50%–90% of research is estimated to be irreproducible, costing upwards of $28 billion in the USA alone. Reproducible research practices, such as preregistering studies, publishing a protocol, making research data and metadata publicly available, and publishing in open access journals, are essential to improving the reproducibility and transparency of biomedical research. Here we report an investigation of key reproducible or transparent research practices in the published oncology literature.
Design
We performed a cross-sectional analysis of a random sample of 300 oncology publications published from 2014 to 2018. Blinded investigators extracted key reproducibility and transparency characteristics in duplicate using a pilot-tested Google Form.
Primary outcome measures
The primary outcome of this investigation is the frequency of key reproducible or transparent research practices followed in published biomedical and clinical oncology literature.
Results
Of the 300 publications randomly sampled, 296 were analysed for reproducibility characteristics. Of these 296 publications, 194 contained empirical data that could be analysed for reproducible and transparent research practices. Raw data were available for nine studies (4.6%). Five publications (2.6%) provided a protocol. Despite our sample including 15 clinical trials and 7 systematic reviews/meta-analyses, only 7 of 88 eligible publications included a preregistration statement. Sixty-five of 296 publications (22.0%) did not provide an author conflict of interest statement.
Conclusion
We found that key reproducibility and transparency characteristics were absent from a random sample of published oncology publications. We recommend required preregistration for all eligible trials and systematic reviews, published protocols for all manuscripts, and deposition of raw data and metadata in public repositories.
Keywords: oncology, cross-sectional study, reproducibility, transparency, replication crisis
Strengths and limitations of this study
This investigation is an observational study using a cross-sectional design based on a broad sample of the oncology literature, which increases the generalisability of our findings.
We extracted eight key reproducibility and transparency characteristics finding that 29 publications had 0 indicators, 62 publications had 1 indicator, 209 publications had 2–5 indicators and 0 publications had 6 or more.
We engaged in extensive training as a research team prior to analysis and conducted all data extraction and data analysis in a double-blind manner to avoid bias.
Because of the breadth of this analysis, questions remain about the reproducibility and transparency in specific study designs (eg, randomised trials).
A lack of reporting reproducibility or transparency characteristics may not equate to failure to engage in reproducible and transparent research practices.
Introduction
The ability to reproduce, or replicate, research results is a cornerstone of scientific advancement.1 2 Absent efforts to advance the reproducibility of scientific research, advancements in patient care and outcomes may be delayed,3 4 in part due to a failure in the translation of evidence to practice.5 Evidence may fail translation to practice owing to bias,6 7 lack of publication4 or poor reporting.8 Thus, it may not be surprising that recent estimates of irreproducible research span a range of 50%–90% of all articles, costing upwards of $28 billion in the USA alone.9 Moreover, it may not be surprising that large-scale efforts to replicate (ie, re-enact or reconduct previously published research studies) have failed,10 in part due to an inability to navigate published methods. What is lost when scientific research fails to be reproducible carries significant weight; namely, the ability of science to be self-correcting11 and produce trustworthy results.12
It is commonly accepted that certain items are essential to improving the reproducibility of biomedical research. Examples of such items include preregistering studies, publishing a protocol, making research data and metadata publicly available, and publishing in such a way to allow free access to the final manuscript. Preregistering a study and publishing a protocol are important to prevent selective publication of studies with ‘positive’ results13 and preventing the reordering of endpoints based on statistical significance.14 15 Providing access to one’s raw research data, metadata and analysis script allows independent researchers to computationally reproduce results, tailor results to specific patient populations and determine the rigour of statistical analysis.16 17 Publishing in open access journals or using preprint servers allows readers across economically diverse countries to access research articles that have implications for clinical practice.18 Altogether, reproducible research practices aim to increase the efficiency, usefulness and rigour of published research.5
Despite a high rate of author endorsement of reproducible practices,19 20 some evidence suggests that authors infrequently implement them.21 In the absence of such reproducible research practices, attempts to validate study findings may be thwarted. For example, Bayer and Amgen both attempted to replicate oncology research studies, with each failing to do so.22 23 Bayer’s attempt to reproduce prior research studies is especially significant because they attempted to reproduce internal studies. Other non-pharmaceutical entities have attempted to replicate cancer research studies with similar results.24 One may hypothesise that improved use and reporting of key reproducible or transparent research practices would improve future efforts to reproduce oncology research studies and build trust in existing evidence. Building on recent, similar analyses,25–27 here we report an investigation of key reproducible or transparent research practices in the published oncology literature as part of a larger initiative to examine reproducible and transparent research practices across medical specialties.
Methods
We performed an observational study using a cross-sectional design based on methods developed by Hardwicke et al 25 with modifications. Our study employed best-practice design in accordance with published guidance, where relevant.28 29 The study protocol, raw data and other pertinent materials are available on the Open Science Framework (https://osf.io/x24n3/). This study did not meet US regulatory requirements to be classified as human subjects research and was therefore exempt from Institutional Review Board approval.30
Journal selection
We used the National Library of Medicine (NLM) catalogue to search for all oncology journals using the subject terms tag Neoplasms[ST]. This search, performed on 29 May 2019, identified 344 journals. The inclusion criteria required that journals were both in ‘English’ and ‘MEDLINE indexed’. We extracted the electronic ISSN (International Standard Serial Number) (or linking ISSN if electronic was unavailable) for each journal to use in a PubMed search on 31 May 2019. We selected publications between 1 January 2014 and 31 December 2018. This date range is consistent with Hardwicke et al (2014–2017) but was expanded to include the most recent complete year (2018) at the time of data extraction. Publications were evenly distributed across years. From the search returns, we selected a random sample of 300 publications using Excel’s random number function (https://osf.io/wpev7/).
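The sampling step above can be sketched in code. This is a minimal illustration of simple random sampling without replacement, not the study's actual procedure (the authors used Excel's random number function on their PubMed export); the PMID list and seed here are hypothetical.

```python
import random

def sample_publications(pmids: list[str], k: int = 300, seed: int = 2019) -> list[str]:
    """Draw a simple random sample of k publications without replacement."""
    rng = random.Random(seed)  # fixed seed so the sample itself is reproducible
    return rng.sample(pmids, k)

# Hypothetical stand-in for the 199,420 search returns described in Results
search_returns = [f"PMID{i:06d}" for i in range(199420)]
sample = sample_publications(search_returns, k=300)
print(len(sample))  # 300
```

Fixing the seed means the sample can be regenerated exactly, which is itself a transparency practice: anyone with the same search export can verify which 300 records were drawn.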
Data extraction
We used a pilot-tested Google Form based on the one provided by Hardwicke et al 25 with modifications (https://osf.io/3nfa5/). The first modifications were extracting the 5-year impact factor and the date of the most recent impact factor, neither of which were extracted by Hardwicke et al. Second, additional study designs were added to include cohort, case series, secondary analyses, chart reviews and cross-sectional studies. Third, funding options were expanded that allowed for greater specification of university, hospital, public, private/industry or non-profit sources. When screening studies, we relied on the authors’ descriptions of their study designs.
The Google Form contained questions aimed at identifying whether a study reported the information necessary to be reproducible (online supplementary table 1, table 1). The data extracted varied with study design. For example, publications with no empirical data (eg, editorials, commentaries (without reanalysis), simulations, news, reviews and poems) could not be examined for reproducibility characteristics. However, for all publications the following data were extracted: title of publication, 5-year impact factor, impact factor of the most recent year, country of corresponding author and publishing journal, type of study participants (eg, human or animal), study design, author conflicts of interest, funding source, whether the publication claimed to be a replication study, and whether the article was open access (table 2). Publications with empirical data were additionally examined for material and data availability, analysis scripts and a linkable protocol. Preregistration statements were assessed in publications for which preregistration through trial databases, such as ClinicalTrials.gov, is the norm. Observational designs may also be registered on clinical trial registries, and systematic reviews and meta-analyses may be preregistered through PROSPERO. Preregistration of chart reviews, case studies and case series is not typically performed, and because, to our knowledge, there is currently no registration site for preclinical studies,31 we excluded these publications from examination of preregistration statements. Together, the eight key reproducibility and transparency indicators analysed were material availability, raw data availability, analysis scripts, a linkable protocol, preregistration statements, author conflict of interest statements, funding source and open access.
Open access status was determined using www.openaccessbutton.org, an online service that searches for open access versions of publications freely available to the public without a journal subscription. If a publication could not be found, investigators performed a Google search to see if the publication was freely available elsewhere. Novelty was assessed by examining whether each publication claimed to be novel, claimed to be a replication study or provided no statement related to study novelty. Web of Science was used to evaluate whether each examined publication (1) had been replicated in other works and (2) was included in subsequent systematic reviews or meta-analyses.
Table 1.
Reproducibility characteristics of oncology studies
| Characteristics and variables | N (%) | 95% CI |
| Data availability (N=194 studies) | ||
| Statement, some data are available | 21 (10.8) | 7.2 to 16.0 |
| Statement, data are not available | 0 | 0 |
| No data availability statement | 173 (89.2) | 84.0 to 92.8 |
| Material availability (N=194 studies) | ||
| Statement, some materials are available | 6 (3.1) | 1.5 to 6.8 |
| Statement, materials are not available | 0 | 0 |
| No materials availability statement | 188 (96.9) | 93.2 to 98.5 |
| Protocol available (N=194 studies) | ||
| Full protocol | 5 (2.6) | 1.1 to 5.9 |
| No protocol | 189 (97.4) | 94.1 to 98.9 |
| Analysis scripts (N=194 studies) | ||
| Statement, some analysis scripts are available | 0 | 0 |
| Statement, analysis scripts are not available | 0 | 0 |
| No analysis script availability statement | 194 (100) | – |
| Replication studies (N=194 studies) | ||
| Novel study | 193 (99.5) | 97.1 to 99.9 |
| Replication | 1 (0.5) | 0.0 to 2.9 |
| Preregistration (N=88 studies) | ||
| Statement, study was preregistered | 7 (8.0) | 3.9 to 15.4 |
| Statement, study was not preregistered | 0 | 0 |
| No preregistration statement | 81 (92.0) | 83.3 to 95.4 |
Table 2.
Characteristics of oncology studies
| Characteristics and variables (N=296 studies) | N (%) | 95% CI |
| Test subjects | ||
| Animals | 25 (8.5) | – |
| Humans | 154 (52.0) | – |
| Both | 0 | – |
| Neither | 117 (39.5) | – |
| Country of journal publication | ||
| USA | 156 (52.7) | – |
| UK | 71 (24.0) | – |
| Greece | 18 (6.1) | – |
| Netherlands | 11 (3.7) | – |
| Ireland | 11 (3.7) | – |
| South Korea | 6 (2.0) | – |
| India | 4 (1.4) | – |
| Italy | 2 (0.7) | – |
| Japan | 2 (0.7) | – |
| Germany | 1 (0.3) | – |
| Unclear | 9 (3.0) | – |
| Other | 5 (1.7) | – |
| Country of corresponding author | ||
| USA | 87 (29.4) | – |
| China | 52 (17.6) | – |
| Japan | 19 (6.4) | – |
| Germany | 16 (5.4) | – |
| South Korea | 13 (4.4) | – |
| UK | 12 (4.0) | – |
| Italy | 10 (3.4) | – |
| Canada | 7 (2.4) | – |
| France | 6 (2.0) | – |
| India | 6 (2.0) | – |
| Unclear | 8 (2.7) | – |
| Other | 60 (20.3) | – |
| Funding | ||
| University | 32 (10.8) | 7.8 to 14.9 |
| Hospital | 8 (2.7) | 1.4 to 5.2 |
| Public | 95 (32.1) | 27.0 to 37.6 |
| Private/industry | 6 (2.0) | 0.9 to 4.4 |
| Non-profit | 7 (2.4) | 1.2 to 4.8 |
| No statement listed | 109 (36.8) | 31.5 to 42.5 |
| No funding received | 18 (6.1) | 3.9 to 9.4 |
| Mixed | 21 (7.1) | 4.7 to 10.6 |
| Conflict of interest statement | ||
| Statement, one or more conflicts of interest | 57 (19.2) | 15.2 to 24.1 |
| Statement, no conflict of interest | 174 (58.8) | 53.1 to 64.2 |
| No conflict of interest statement | 65 (22.0) | 17.6 to 27.0 |
| Publication year | ||
| 2014 | 63 (21.3) | 17.0 to 26.3 |
| 2015 | 54 (18.2) | 14.3 to 23.0 |
| 2016 | 49 (16.5) | 12.8 to 21.2 |
| 2017 | 57 (19.3) | 15.2 to 24.1 |
| 2018 | 73 (24.7) | 20.1 to 29.9 |
| Open access | ||
| Yes, found via open access button | 139 (47.0) | 41.4 to 52.7 |
| Yes, found article via other means | 26 (8.8) | 6.1 to 12.6 |
| No, could only access through paywall | 131 (44.2) | 38.7 to 50.0 |
| 5-year impact factor | ||
| Median | 3.445 | – |
| First quartile | 2.2705 | – |
| Third quartile | 5.95 | – |
| IQR | 2.2705–5.95 | – |
| Most recent impact factor year | ||
| 2014 | 4 (1.4) | – |
| 2015 | 0 | – |
| 2016 | 4 (1.4) | – |
| 2017 | 271 (91.5) | – |
| 2018 | 1 (0.3) | – |
| Not found | 16 (5.4) | – |
| Most recent impact factor | ||
| Median | 3.346 | – |
| First quartile | 2.37375 | – |
| Third quartile | 6.471 | – |
| IQR | 2.37375–6.471 | – |
| Cited within a systematic review/meta-analysis, N=296* | ||
| No citations | 257 (86.8) | 83.5 to 90.2 |
| A single citation | 22 (7.4) | 5.0 to 11.0 |
| 2–5 citations | 17 (5.8) | 3.6 to 9.0 |
| Greater than five citations | 0 | – |
*Five studies were explicitly excluded from the systematic reviews/meta-analyses that cited the original article.
Prior to data extraction, each investigator underwent a full day of training to increase the inter-rater reliability of the results between authors. This training consisted of an in-person session that reviewed study design, protocol and Google Form. Investigators (CGW, NV) extracted data from three sample articles and differences were reconciled following extraction. A recording of this training session is available and listed online for reference (https://osf.io/tf7nw/). One investigator (CGW) extracted data from all 300 publications. ZJH extracted data for 200 publications and NV extracted data for 100 publications. CGW’s data were compared with ZJH’s and NV’s with discrepancies being resolved via group discussion. All authors were blinded to each other’s results. A final consensus meeting was held by all authors to resolve disagreements. If no agreement could be made, final judgement was made by an additional author (DT). Our manuscript has been made available as a preprint, online at www.medRxiv.org (https://doi.org/10.1101/19001917).
Statistical analysis
Descriptive statistics were calculated for each category with 95% CIs using the Wilson formula for binomial proportions.32 The total number of each data point present in the publications was presented in addition to the proportion of the whole sample.
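The Wilson score interval used here can be computed directly from the counts in table 1. The sketch below is an illustrative implementation (not the authors' analysis script, which was not published), reproducing the 95% CI for the data-availability row (21 of 194 studies).

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (95% CI by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Data-availability statements: 21 of 194 studies with empirical data
lo, hi = wilson_ci(21, 194)
print(f"{lo:.1%} to {hi:.1%}")  # 7.2% to 16.0%, matching table 1
```

Unlike the naive Wald interval, the Wilson interval behaves sensibly for the small proportions that dominate these tables (eg, 5/194 protocols), which is presumably why it was chosen.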
Results
The NLM search identified 344 journals, of which 204 fit our inclusion criteria. Our search string retrieved 199 420 oncology publications, from which 300 were randomly sampled. Of these, 296 publications were analysed for study reproducibility characteristics; four publications were not accessible and were excluded from our analysis. Of these 296 publications, 215 contained empirical data and 81 did not. Publications without empirical data could not be analysed for study reproducibility characteristics. Additionally, 21 publications with empirical data were case studies or case series; because these designs cannot be replicated, they were excluded from the analysis of reproducibility characteristics. In total, we extracted study reproducibility characteristics for 194 oncology publications (figure 1).
Figure 1.
Preferred Reporting Items for Systematic Reviews and Meta-Analyses diagram of included studies.
Study characteristics
In our sample of oncology publications, the publishing journals had a median 5-year impact factor of 3.445 (IQR 2.27–5.95). The majority (156/296, 52.7%) of journals were located in the USA. Over half (165/296, 55.8%) of the publications were freely available via open access networks. The remaining 131 publications (44.2%) were located behind a paywall, available only through paid reader access and thus inaccessible to the public. A total of 109 publications (36.8%) made no mention of funding source. Public funding (95/296, 32.1%), such as from state or government institutions, was the next most prevalent source of study funding. Publication authors disclosed no conflict of interest more frequently than they disclosed conflicts of interest (174/296, 58.8% vs 57/296, 19.2%); however, 65 publications (22.0%) had no author conflict of interest statement. Human participants were the most common study population in the sample (154/296, 52.0%). Citation rates of these 296 publications by systematic reviews and meta-analyses can be found in table 2.
Reproducibility characteristics
Only 21 publications (21/194, 10.8%) made their raw data available. Nine of these datasets were directly downloadable by readers, while the rest were available only on request from the corresponding author. Of these nine publications, only three provided complete raw datasets (online supplementary table 2). An expanded description of the study materials required to reproduce the study (laboratory instruments, stimuli, computer software) was provided as a supplement in 6 of 194 publications (3.1%). Of the publications with available materials, most (4/6) were accessible to readers only on request to the corresponding author, rather than being listed in a protocol or methods section. Two publications provided their materials as a supplement, but neither provided all of the materials necessary to replicate the study. None of the included publications made their analysis scripts, which detail the steps the authors used to prepare the data for interpretation, accessible. Only five publications (5/194, 2.6%) provided a protocol detailing the a priori study design, methods and analysis plan. One publication (1/194, 0.5%) claimed to be a replication study; all remaining publications (193/194, 99.5%) claimed to be novel or did not provide a clear statement about being a replication study. Twenty-two publications (22/194, 11.3%) were cited within subsequent systematic reviews/meta-analyses. Excluding preclinical publications (n=79), chart reviews (n=7), systematic reviews or meta-analyses (n=7) and publications with multiple study designs (n=13), for which preregistration with trial databases such as ClinicalTrials.gov would not be relevant, we found seven publications (7/88, 8.0%) with preregistration statements. Of these 88 publications, 15 were clinical trials; however, only 6 (6/88, 6.8%) were preregistered with ClinicalTrials.gov prior to commencement of the study.
None of the systematic reviews and meta-analyses (n=7) were preregistered with PROSPERO. A subgroup analysis of the eight key reproducibility and transparency indicators demonstrated that 29 publications had 0 indicators, 62 publications had 1 indicator, 209 publications had 2–5 indicators and 0 publications had 6 or more.
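The subgroup tally of indicators per publication can be sketched as follows. This is a hypothetical illustration of the counting logic, not the authors' extraction form; the indicator flag names and example records are assumptions.

```python
from collections import Counter

# The eight indicators assessed in this study (names are illustrative)
INDICATORS = ["materials", "raw_data", "analysis_scripts", "protocol",
              "preregistration", "conflict_statement", "funding_statement",
              "open_access"]

def tally(publications: list[dict]) -> Counter:
    """Bucket publications by how many of the eight indicators each reported."""
    return Counter(sum(pub.get(flag, False) for flag in INDICATORS)
                   for pub in publications)

# Three hypothetical extraction records
pubs = [{"open_access": True},
        {"raw_data": True, "protocol": True},
        {}]
counts = tally(pubs)  # one publication each with 0, 1 and 2 indicators
```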
Discussion
Our cross-sectional investigation of a sample of the published oncology literature found that key reproducibility and transparency practices were lacking or entirely absent. Namely, we found that publications rarely preregistered their methods, published their full protocol, or deposited raw data and analysis scripts into a publicly accessible repository. Moreover, conflicts of interest were not discussed approximately 20% of the time, and approximately 44% of the included publications were accessible only through journal paywalls. Given the challenges in understanding the molecular mechanisms that drive cancer, the continuum of research in the field of oncology is slow, laborious and inefficient.33 To combat these inherent obstacles, transferring outcomes and information from preclinical to clinical research demands consistency and precision across the continuum. Otherwise, publications downstream in the cancer research continuum may be based on spurious results incapable of independent confirmation due to a lack of access to study data, protocols or analysis scripts. Science advances more rapidly when people spend less time pursuing false leads;34 thus, for patients with cancer, for whom rapid scientific advancement is most significant, it is paramount that scientists, researchers and physicians advocate for an efficient research system that is transparent, reproducible and free from bias.
Preregistration of research study methods is a mechanism to improve the reproducibility of published results and to prevent bias, whether from selective reporting of outcomes or selective publication of a study.35 It has previously been shown that selective reporting of study endpoints affects the research portfolio of drugs or diseases.15 36 37 For example, Wayant et al found that 109 randomised controlled trials of malignant haematology interventions selectively reported their trial endpoints 118 times, with a significant portion doing so in a manner that highlighted statistically significant findings.36 Had trial registries not been available, these trials may never have been found to exhibit selective outcome reporting. Now, through trial registries, haematologists and other interested researchers are able to independently assess the robustness of not only study rationale and results, but also study rigour and reporting. The present study indicates that preregistration of study methods was rare, even among trials and systematic reviews that have available registries. The importance of preregistration across the continuum of cancer research cannot be overstated. For example, preclinical animal models serve as the foundation for clinical trials, but have exhibited suboptimal methods,38 which may explain why animal study results fail to successfully translate to clinical benefit. In fact, it was recently shown that many phase 3 trials in oncology are conducted despite no significant phase 2 results.39 One possible explanation for why phase 3 trials proceed despite non-significant phase 2 results is the strong bioplausibility demonstrated in preclinical studies. If it is true that preclinical studies exhibit poor research methods, it is not unlikely that they are affected by selective outcome reporting bias, just like clinical research studies.
Thus, to strengthen oncology research evidence—from foundational, preclinical research to practice-changing trials—we recommend either the creation of relevant study registers or the adherence to existing registration policies. In so doing, one key aspect of research—the accurate reporting of planned study endpoints—could be monitored, detected and mitigated.
Equally important to self-correcting, rigorous cancer research is the publication of protocols, raw data and analysis scripts. Protocols include much more information than study outcomes—they may elaborate on statistical analysis plans or decisions fundamental to the critical appraisal of study results.40 It is unlikely that anyone would be able to fully appraise a published study without access to a protocol, and far less likely that anyone would be capable of replicating the results independently. In fact, two recent efforts to reproduce preclinical studies revealed extant barriers to independent verification of published findings,20 41 including the absence of protocols, data and analysis scripts. Our present investigation found that only five (2.6%) studies published a protocol, nine (4.6%) fully published their data and none published their analysis scripts. In the context of the recent failures to reproduce cancer research publications, one may reasonably conclude that our study corroborates the belief that oncology research is not immune to the same shortcomings that contribute to an ever-expanding cohort of irreproducible research findings.42 Oncology research, like all biomedical research, is at an inflection point, wherein it may progress toward more transparent, reproducible, efficient research findings. However, in order to do so, the availability of protocols, data and analysis scripts should be considered fundamental.
In summary, we found that key reproducibility and transparency characteristics were absent from a random sample of published oncology studies. The implication of this finding is a research system that is incapable of rapid self-correction, or one that places a stronger emphasis on what is reported rather than what is correct. We recommend three key action items which we believe benefit oncology research and all its stakeholders. First, require preregistration for eligible trials and systematic reviews, since these study designs have existing registries available, and support the development of registries for preclinical studies. Second, understand that published reports are snapshots of a research study, and require that protocols be published. Last, encourage a scientific culture that relies on data that are true and robust, rather than on author reports of their data, by requiring the deposition of raw data, metadata and analysis scripts in public repositories.
This study has several strengths and limitations. First, for strengths, we sampled 300 published oncology articles indexed in PubMed. In doing so we captured a diverse array of research designs in an even more diverse range of journals. As such, all oncology researchers can read our paper and glean useful information and enact changes to improve the reproducibility of new evidence. With respect to our limitations, our study is too broad to make absolute judgements about specific study designs. All signals that suggest irreproducible research practices from our study fall in line with prior data in other areas of medicine,25–27 but are nonetheless signals rather than answers. For example, an examination of biomedical literature by Wallach et al found that less than 30% provided study materials as a supplement; however, none of the available materials allowed for replication of the protocol or contained analysis scripts and exactly one study (1/104) had a linkable protocol. Furthermore, about 18% provided data availability statements, yet none of these publications shared the complete raw data for the study.27 Similarly, an examination of the social sciences by Hardwicke et al found that no publications made their protocol publicly available, less than 2% provided the raw data, and exactly one publication had an accessible link to the study’s analysis scripts.25 Therefore, we suggest more narrow investigations of the reproducibility of specific study designs and suggest trials and animal studies be prioritised due to their potential influence (present or future) on patient care. Moreover, we do not suggest that irreproducible research findings are false; however, the trust in the results may be blunted. Further, replicating (ie, reconducting) a study is not necessary in all cases to assess the rigour of the results. 
If a protocol, statistical analysis plan and raw data (including metadata) are available, one fundamental pillar of science would be reinforced: self-correction.
Supplementary Material
Footnotes
Twitter: @ColeWayant_OK
Contributors: DT and MV developed the protocol and conceptualised the study. CWal, ZJH, CWay, NV, MW, JC, DT and MV will conduct all literature searches. CWal, ZJH, CWay and NV will conduct all statistical analyses. CWal and DT will manage all data, including the management of the OSF repository. CWal, ZJH, CWay, NV, MW, JC, DT and MV will participate in all writing. CWal, ZJH, CWay, NV, MW, JC, DT and MV are equally the guarantors of the study and the integrity of the data.
Funding: This work was supported by the 2019 Presidential Research Fellowship Mentor—Mentee Program at Oklahoma State University Center for Health Sciences.
Competing interests: MV is funded through the US Department of Health and Human Services Office of Research Integrity and the Oklahoma Center for the Advancement of Science and Technology.
Patient consent for publication: Not required.
Provenance and peer review: Not commissioned; externally peer reviewed.
Data availability statement: Data are available in a public, open access repository. All datasets, materials, and the protocol are available online at https://osf.io/usb28/
References
- 1. Munafò MR, Nosek BA, Bishop DVM, et al. A manifesto for reproducible science. Nat Hum Behav 2017;1:0021. doi:10.1038/s41562-016-0021
- 2. Peng R. The reproducibility crisis in science: a statistical counterattack. Signif 2015;12:30–2. doi:10.1111/j.1740-9713.2015.00827.x
- 3. Liberati A. An unfinished trip through uncertainties. BMJ 2004;328:531. doi:10.1136/bmj.328.7438.531
- 4. Scherer RW, Langenberg P, von Elm E, et al. Full publication of results initially presented in abstracts. Cochrane Database Syst Rev 2007:MR000005. doi:10.1002/14651858.MR000005.pub3
- 5. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet 2009;374:86–9. doi:10.1016/S0140-6736(09)60329-9
- 6. Hewitt C, Hahn S, Torgerson DJ, et al. Adequacy and reporting of allocation concealment: review of recent trials published in four general medical journals. BMJ 2005;330:1057–8. doi:10.1136/bmj.38413.576713.AE
- 7. Whiting PF, Rutjes AWS, Westwood ME, et al. A systematic review classifies sources of bias and variation in diagnostic test accuracy studies. J Clin Epidemiol 2013;66:1093–104. doi:10.1016/j.jclinepi.2013.05.014
- 8. Glasziou P, Meats E, Heneghan C, et al. What is missing from descriptions of treatment in trials and reviews? BMJ 2008;336:1472–4. doi:10.1136/bmj.39590.732037.47
- 9. Freedman LP, Cockburn IM, Simcoe TS. The economics of reproducibility in preclinical research. PLoS Biol 2015;13:e1002165. doi:10.1371/journal.pbio.1002165
- 10. Kaiser J. Plan to replicate 50 high-impact cancer papers shrinks to just 18. Science 2018. doi:10.1126/science.aau9619
- 11. Ioannidis JPA. Why science is not necessarily self-correcting. Perspect Psychol Sci 2012;7:645–54. doi:10.1177/1745691612464056
- 12. Vazire S. Quality uncertainty erodes trust in science. Collabra: Psychology 2017;3.
- 13. Begg CB, Berlin JA. Publication bias and dissemination of clinical research. J Natl Cancer Inst 1989;81:107–15. doi:10.1093/jnci/81.2.107
- 14. You B, Gan HK, Pond G, et al. Consistency in the analysis and reporting of primary end points in oncology randomized controlled trials from registration to publication: a systematic review. J Clin Oncol 2012;30:210–6. doi:10.1200/JCO.2011.37.0890
- 15. Mathieu S, et al. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 2009;302:977–84. doi:10.1001/jama.2009.1242
- 16. Voytek B. The virtuous cycle of a data ecosystem. PLoS Comput Biol 2016;12:e1005037. doi:10.1371/journal.pcbi.1005037
- 17. Steegen S, Tuerlinckx F, Gelman A, et al. Increasing transparency through a multiverse analysis. Perspect Psychol Sci 2016;11:702–12. doi:10.1177/1745691616658637
- 18. Sarabipour S, Debat HJ, Emmott E, et al. On the value of preprints: an early career researcher perspective. PLoS Biol 2019;17:e3000151. doi:10.1371/journal.pbio.3000151
- 19. Anderson MS, Ronning EA, Vries RD, et al. Extending the Mertonian norms: scientists' subscription to norms of research. J Higher Educ 2010;81:366–93. doi:10.1080/00221546.2010.11779057
- 20. Baker M. 1,500 scientists lift the lid on reproducibility. Nature 2016;533:452–4. doi:10.1038/533452a
- 21. Harris JK, Johnson KJ, Carothers BJ, et al. Use of reproducible research practices in public health: a survey of public health analysts. PLoS One 2018;13:e0202447. doi:10.1371/journal.pone.0202447
- 22. Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov 2011;10:712. doi:10.1038/nrd3439-c1
- 23. Begley CG, Ellis LM. Raise standards for preclinical cancer research. Nature 2012;483:531–3. doi:10.1038/483531a
- 24. Errington TM, Iorns E, Gunn W, et al. An open investigation of the reproducibility of cancer biology research. eLife 2014;3:e04333. doi:10.7554/eLife.04333
- 25. Hardwicke TE, Wallach JD, Kidwell M, et al. An empirical assessment of transparency and reproducibility-related research practices in the social sciences (2014–2017). 2019.
- 26. Iqbal SA, Wallach JD, Khoury MJ, et al. Reproducible research practices and transparency across the biomedical literature. PLoS Biol 2016;14:e1002333. doi:10.1371/journal.pbio.1002333
- 27. Wallach JD, Boyack KW, Ioannidis JPA. Reproducible research practices, transparency, and open access data in the biomedical literature, 2015–2017. PLoS Biol 2018;16:e2006930. doi:10.1371/journal.pbio.2006930
- 28. Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evid Based Med 2017;22:139–42. doi:10.1136/ebmed-2017-110713
- 29. Moher D, Shamseer L, Clarke M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev 2015;4:1. doi:10.1186/2046-4053-4-1
- 30. Office for Human Research Protections (OHRP). HHS.gov. Available: https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/index.html [Accessed 10 Jul 2018].
- 31. Jansen of Lorkeers SJ, Doevendans PA, Chamuleau SAJ. All preclinical trials should be registered in advance in an online registry. Eur J Clin Invest 2014;44:891–2. doi:10.1111/eci.12299
- 32. Dunnigan K. Confidence interval calculation for binomial proportions. MWSUG Conference, Indianapolis, 2008. http://www.mwsug.org/proceedings/2008/pharma/MWSUG-2008-P08.pdf
- 33. Celis JE, Pavalkis D. A mission-oriented approach to cancer in Europe: a joint mission/vision 2030. Mol Oncol 2017;11:1661–72. doi:10.1002/1878-0261.12143
- 34. Nuzzo R. How scientists fool themselves - and how they can stop. Nature 2015;526:182–5. doi:10.1038/526182a
- 35. Kaplan RM, Irvin VL. Likelihood of null effects of large NHLBI clinical trials has increased over time. PLoS One 2015;10:e0132382. doi:10.1371/journal.pone.0132382
- 36. Wayant C, Scheckel C, Hicks C, et al. Evidence of selective reporting bias in hematology journals: a systematic review. PLoS One 2017;12:e0178379. doi:10.1371/journal.pone.0178379
- 37. Chan AW, Hróbjartsson A, Haahr MT, et al. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA 2004;291:2457–65. doi:10.1001/jama.291.20.2457
- 38. Bara M, Joffe AR. The methodological quality of animal research in critical care: the public face of science. Ann Intensive Care 2014;4:26. doi:10.1186/s13613-014-0026-8
- 39. Addeo A, Weiss GJ, Gyawali B. Association of industry and academic sponsorship with negative phase 3 oncology trials and reported outcomes on participant survival: a pooled analysis. JAMA Netw Open 2019;2:e193684. doi:10.1001/jamanetworkopen.2019.3684
- 40. Chan A-W, Hróbjartsson A. Promoting public access to clinical trial protocols: challenges and recommendations. Trials 2018;19:116. doi:10.1186/s13063-018-2510-1
- 41. Center for Open Science. Reproducibility Project: Cancer Biology (RP:CB). Available: https://elifesciences.org/collections/9b1e83d1/reproducibility-project-cancer-biology [Accessed 21 Nov 2017].
- 42. Ioannidis JPA. Why most published research findings are false. PLoS Med 2005;2:e124. doi:10.1371/journal.pmed.0020124
Associated Data
Supplementary Materials
bmjopen-2019-033962supp001.pdf
bmjopen-2019-033962supp002.pdf