Abstract
It is commonplace for science leaders and others to claim that the future of biomedical research rests in large part upon the public’s trust. If true, it behooves the biomedical research community to understand how it avoids taking chances with that trust. This commentary, which builds upon comments of noted trust scholar Russell Hardin about how best to enjoy trust, assumes that the key to being trusted is deserving to be trusted. Thus, it proposes using “deserved trust” to identify ways that the public’s trust in biomedical research could be better supported. Employing deserved trust to support the public’s trust leads us to consider what it is that the biomedical research community should be trusted to do, examine evidence about the effectiveness of current safeguards meant to assure that those things routinely get done, and identify new ways to equip individual researchers, research teams, and research institutions to assure that the public’s trust in their research is deserved rather than misplaced.
Keywords: Public Trust, Research Integrity, Translational Research, Responsible Conduct of Research
A. Introduction
Such matters as concerns over public receptivity to coronavirus disease 2019 (COVID-19) vaccines,(Agiesta 2020) multiple retractions of recent COVID-19 studies,(Joseph 2020)(Abritis, Marcus, and Oransky 2020) and polling indicating, at least in some jurisdictions, that an erosion of public trust in research has already begun(Matthews 2020) remind us anew of the importance of the public’s trust in biomedical research. With past as prologue, we can expect public statements from research champions echoing past proclamations from luminaries such as Francis Collins, Director of the US National Institutes of Health (NIH). Following the public backlash in 2010 over the financial ties between drug companies and academic scientists, he opined that “[t]he public trust in [biomedical research] is just essential, and we cannot afford to take any chances with the integrity of the research process.”(Wadman 2010)
Any similar attestations made in this time of COVID-19 will occur in a research ecology little changed from the one found in 2010, despite the extensive work devoted to research integrity in the interim. A recent commentary about research integrity, written by an interdisciplinary group of scholars with complementary expertise in scientific research, research methodologies, bioethics, and research integrity, captures the scope of this work. The authors write that “there have been [multiple] declarations that outline the components of trustworthy research and the principles of research integrity. These include the Singapore Statement in 2010, the Montreal Statement in 2013, the Hong Kong Principles in 2019 and the European Code of Conduct for Research Integrity in 2011, revised in 2017, among others.” They further mention that “[m]any hundreds of articles have been written on the topic [of research integrity]: about threats to research quality from hyper-competitiveness and poor training; the unquestioning and inept reliance on metrics in evaluation [of researchers]; and systematic biases in peer review and publication.”(Mejlgaard et al. 2020) If this important body of reform-minded work is to have any chance of achieving its intended purpose of changing the research ecology in order to improve research, more researchers and research institutions need to be persuaded that its insights can help to improve their research.
I want to put forth the proposition that we may be able to so persuade them if we leverage the widely shared belief in the research community, reflected in Director Collins’s statement and in similar statements from other leading voices in science,(Smith 2014)(Cicerone 2010)(Nature 2010) that it is important to the future of the research community that it continue to enjoy the public’s trust. One way to leverage this belief is to follow the advice of the noted trust scholar Russell Hardin, who claims that the key to enjoying trust is doing the things that support trustworthiness.(Hardin 2002, 30) In other words, if the research community wants to continue to enjoy the public’s trust it knows it will need, then it should make sure that it deserves that trust.
Knowing how to deserve the public’s trust in the context of biomedical research is far from straightforward, though. There is no consensus about what, exactly, makes this research deserving of the public’s trust, and thus no consensus either about what practices promote trustworthy biomedical research. This means that, currently, future public trust is being left as much to chance as to design.(Yarborough 2014b) In what follows, I offer a proposal for how using the concept of “deserved trust” might change this.
Some may not see the need for such a strategy because they may disagree that we currently do leave the public’s trust too much to chance. Supporting their disagreement are current safeguards meant to protect research and its integrity. The safeguards include the numerous international statements and professional codes referenced above; professional norms, transmitted in science education; quality control mechanisms such as peer-review; research regulation activities like Research Ethics Committee (REC) review; and a growing global reliance on both research integrity programs and mandatory instruction in the responsible conduct of research. Hence, they may think that we are not leaving future public trust to chance because we are already heeding Hardin’s advice. In what follows, I will show why we need to do a much better job than we currently are of following Hardin’s advice. I will do this by offering a two-pronged process that will help us recognize, first, the extent to which, in fact, we are leaving the public’s trust to chance and, second, how we could stop doing so.
One prong consists in three general questions, relevant to biomedical research as a whole, about the trustworthiness of research: its reliability, its social value, and its ethical conduct. I review ample published evidence below that helps answer those questions. The other prong consists in specific questions, directed at individual research projects, teams, and/or institutions, that researchers and research institutions themselves can answer about their own work. I provide a brief description below of how they could go about answering the questions. Though the two prongs are complementary, each has stand-alone value as well. We will see that the general questions reveal deficiencies in current safeguards and show the extent of the work yet to be done if we are to heed Hardin’s advice. We will see how the specific questions can prompt the practices needed in specific research settings to ensure that any given research project does deserve the public’s trust.
B. Knowing the extent to which research currently deserves the public’s trust
The three questions in the first prong (see Table 1) about the reliability, social value, and ethical conduct of research all reflect characteristics of research that the public should rightly expect are routinely found in it.(Yarborough 2014a) Considering relevant evidence that speaks to the frequency with which these characteristics are present will shed light on both the effectiveness of current safeguards and whether we are “taking chances” with the trust that is “just essential” to the future of biomedical research.
Table 1:
| Characteristic | Explanation |
| --- | --- |
| Disseminated research results are reliable | The purpose of biomedical research, like all scientific research, is to advance understanding of the natural world. |
| Research creates increased social value | The desire for greater human well-being and improved health care steers public and other resources to the biomedical research community. |
| Research is conducted ethically | There are international norms and standards regarding the use and treatment of animals and humans that should be observed in the conduct of biomedical research. |
B. 1. Are disseminated research results routinely reliable?
Though the reliability of published research reports is critical across the entire landscape of biomedical research, due to space constraints we will look at evidence about reliability in the critical area of preclinical research. This is the research that creates the diverse body of knowledge that advances basic understanding across the entire life sciences spectrum, including our understanding of disease and its mechanisms. When appropriately conducted, these studies, whether they produce positive or negative results, critically inform clinical translation efforts. This research frequently involves animal studies that, for example, can “identify new druggable targets and design or test specific therapeutic modalities, as well as detect diagnostic or therapeutic biomarkers. [It also] allow[s] the comparison of pathological phenotypes with the human disease, and [provides] insight into the underlying reasons for differences at varying levels of complexity, [thereby] allowing the identification of protective pathways in animals that could be enhanced in humans.”(Willmann et al. 2020) Preclinical studies both develop and test “novel therapeutics, including small molecules, biologics, gene modifiers and cell therapies”, as well as inform “optimal dosing regime[s] and route[s].”(Willmann et al. 2020) In short, these studies help determine which investigational treatment modalities migrate out of the laboratory into human trials.
To appreciate why the reliability of these preclinical studies matters, we need only consider patients with fatal neurodegenerative disorders such as Amyotrophic Lateral Sclerosis who are asked to volunteer for Phase 1 clinical trials based upon specific preclinical studies. The individuals who do volunteer should be able to trust that the trial is worth conducting because there is a sound scientific rationale for it. Would such trust be deserved? A substantial body of evidence suggests that it frequently would not be.
In saying that it would not, readers should not infer that this is due to the fact that new treatment modalities usually do not pan out. That would be an unfair and unrealistic standard of deserved trust, given the biological complexities of disease and its mechanisms that make successful translation exceptionally difficult. Rather, trust would be undeserved because, as one commentator has noted, “the real reason [behind failed translation is often due to the fact that] the preclinical experiments were not rigorously designed.”(Perrin 2014) This verdict echoes a similar sentiment expressed in the context of oncology studies, where commentators noted that “[u]nquestionably, a significant contributor to failure in oncology trials is the quality of published preclinical data.”(Begley and Ellis 2012) When these deficiencies occur, our ability to make reliable inferences about whether or not there is a reasonable ratio between the risks and benefits of conducting a Phase 1 trial on a new treatment modality is severely compromised.(Yarborough et al. 2018) This means that the trust of Phase 1 trial volunteers can be misplaced rather than deserved.
The above quotes about deficient preclinical studies are substantiated by more than a decade’s worth of research focused on preclinical studies.(Ioannidis 2017)(Vogt et al. 2016)(Reichlin, Vogt, and Würbel 2016)(Egan et al. 2016)(Hartung 2013)(Hirst et al. 2014)(Macleod et al. 2015)(Howells, Sena, and Macleod 2014)(O’Connor and Sargeant 2014)(Lindner 2007)(Peers et al. 2014)(Garner et al. 2017) It reveals multiple, prevalent, and often interconnected design and reporting problems. To appreciate the extent to which these deficiencies can contribute to undeserved trust, consider, for example, the conclusion reached by the authors of a landmark study assessing excess significance bias in preclinical neuroscience studies. Based upon an examination of more than 4,000 preclinical studies related to 160 human trials, they concluded that only 8 of those clinical studies were sufficiently scientifically warranted.(Tsilidis et al. 2013) Equally disconcerting is a more recent analysis of failure rates in clinical trials of acute stroke(Schmidt-Pogoda et al. 2019) that reaffirms worries raised in numerous prior analyses.(Macleod et al. 2015)(Tsilidis et al. 2013)(Sena et al. 2010)(Crossley et al. 2008) Its authors estimate that a majority of the reports of positive findings from animal studies meant to inform clinical studies of acute stroke likely report false positive results.
Readers may question whether my concern about the trust implications of unreliable published preclinical studies rests on a handful of experiments being used to paint too negative a picture of preclinical research. Ample evidence shows that it does not. Consider, for example, the actions of prominent research journals and sponsors that are actively trying to instigate methodologic reforms in preclinical and other research in order to foster greater reliability. These include the more than one thousand journals and two dozen funding agencies(Enserink 2017) that endorse use of the ARRIVE Guidelines. First developed in 2010(Kilkenny et al. 2010) and recently revised,(Sert et al. 2020) these guidelines are meant to improve reports of animal research.
Other leading journals have adopted additional measures. For example, the journal Stroke employs reporting requirements that are more stringent than the ARRIVE guidelines. Its guidelines are specifically targeted to stroke-related research reports.(Minnerup, Dirnagl, and Schabitz 2020) The Nature family of research journals, on the other hand, in an effort to improve research reliability and reproducibility, requires authors “to make materials, data, code, and associated protocols promptly available to readers without undue qualifications.”(Reporting Standards and Availability of Data, Materials, Code and Protocols) Also worth noting is a series of Lancet articles published in 2014 that address ways that researchers and research institutions can increase value and reduce waste in research. Just in the first year of their publication, the articles in the series were downloaded more than 46,000 times.(Moher et al. 2016) As of early 2019, articles in the series had been cited over 900 times in PubMed Central registered articles.(Yarborough, Nadon, and Karlin 2019)
Finally, we must note that concerns about the reliability of research have not escaped the attention of major government research sponsors either. For example, both the NIH and the Deutsche Forschungsgemeinschaft have undertaken initiatives meant to increase research rigor and reproducibility in order to improve clinical translation.(Collins and Tabak 2014)(Deutsche Forschungsgemeinschaft) Such extensive reform efforts would not be in place if problems in research that erode the reliability of published research findings were occasional and not endemic. Collectively, this evidence paints a troubling picture about the extent to which the current preclinical research endeavor can be counted upon to serve as a reliable precursor for the Phase 1 trials that so many patients have no choice but to trust that they are worth volunteering for.
B. 2. Does research routinely create increased social value?
When we think about the countless patients with cancer or chronic disorders such as diabetes, heart disease, or high blood pressure who volunteer for Phase 3 studies, we find comparable concerns about potential misplaced trust due to the questionable social value of many of these trials. Due to space constraints, we note just three of several examples causing concern. They are of concern not because they are trials that do not result in new and improved treatments. That would be a counterproductive criterion of social value to impose on clinical trials. Just as we noted previously that preclinical studies that produce negative findings are valuable, so too are clinical trials that fail to demonstrate that a new drug is therapeutic. That is because, as Kimmelman and London have noted, the desired output of clinical research is actually not so much new contributions to the clinical arsenal as it is information instead “about the coordinated set of materials, practices, and constraints needed to safely unlock the therapeutic or preventive activities of drugs, biologics, and diagnostics.”(Kimmelman and London 2015) Accordingly, the clinical trials of questionable social value are those that are uninformative(Zarin, Goodman, and Kimmelman 2019) because steps that could be taken to prevent them from being uninformative are not taken. When this occurs, there is a “serious breach of trust …”(Zarin, Goodman, and Kimmelman 2019)
The first group consists of redundant trials launched without any systematic effort to determine what relevant prior clinical trials have been reported that, if consulted, would influence the design, conduct, and even the need for subsequent studies.(Robinson and Goodman 2011)(Lund et al. 2016) The second group consists of trials whose intended knowledge never gets sufficiently disseminated: too many trial results are never published or, if they are, too often only partial, possibly misleading, results reach print.(Riveros et al. 2013)(Strand et al. 2017)(Hakala, Fergusson, and Kimmelman 2017)(Jefferson and Jorgensen 2018)(Goldacre et al. 2018) The third group consists of clinical trials whose flawed designs leave uncertainty about a drug’s ability to contribute to improved patient outcomes yet nevertheless produce datasets that result in regulatory approval, either for new drugs or for new indications for previously approved ones. A recent study of anticancer drug trials illustrates this concern. It reports that fully two-thirds of newly approved anticancer drugs were approved on the basis of clinical trials “with at least one of the following limitations: nonrandomized design, lack of demonstrated survival advantage, inappropriate use of crossover, or the use of suboptimal control arms.”(Hilal, Gonzalez-Velez, and Prasad 2020) This spectrum of evidence surely casts a pall of skepticism over claims that clinical research can routinely be trusted to produce social value sufficient to offset the moral, material, and financial resources it consumes.
B. 3. Is research routinely conducted ethically?
When one’s attention turns to evidence about whether the general public can trust that biomedical research is routinely conducted ethically, one might expect to see a more encouraging picture. After all, RECs have now worked for decades to assure that certain benchmarks, such as an appropriate balance between the risks of conducting a trial and its anticipated benefits and the respectful treatment of research participants, are met. Despite this REC role, there are genuine worries that we cannot trust that such benchmarks are met with the regularity that they should be.(Yarborough 2020b) In the case of balancing potential risks and benefits in early trials, we would all presumably agree that research volunteers should be able to trust that no one would ever recruit them into a clinical trial in the absence of a favorable risk/benefit ratio. Evidence pertinent to Phase 1 trials yet again calls this presumption into question. It shows that too often RECs are unable to access the study design information about preclinical efficacy studies that is needed to vet the reliability of those studies’ inferences about potential efficacy.(Wieschowski et al. 2018) When a REC cannot adequately assess efficacy inferences, it cannot reliably assess the potential for benefit and thus cannot determine whether the risk/benefit ratio of an early phase trial based upon those inferences is reasonable. The matter is all the more acute when we combine the inability to vet efficacy inferences with what is known about the high prevalence of false positive findings in preclinical studies.
As for the respectful treatment of research participants, there is also ample reason to worry that informed consent processes are routinely deficient. For both early and late stage clinical trials, those processes regularly conceal from research candidates, rather than alert them to, evidence like that recounted above about the scientific merit and social value of clinical trials, evidence that is germane to their deliberations about enrolling. The upshot is that the very regulatory regimes meant to protect people’s right to grant informed consent for research routinely have the opposite effect instead.(Yarborough 2020a)
Considering this evidence relevant to the first prong of questions shows the imprudence of relying so heavily upon professional norms, peer-review, research regulation, research integrity programs, and mandatory instruction in the responsible conduct of research to show that the research community deserves the public’s trust. This means that research teams and institutions need to employ additional safeguards if they want to be truly confident that they deserve the trust that they solicit, which brings us back to a key challenge noted at the outset. How can we inspire more researchers and research institutions to be more reform-minded so that they will be motivated to put to good use the insights found in the previously noted substantial body of work about how to improve research and its integrity? The second prong of questions is offered in hopes that it might help motivate them to do so.
C. Knowing how individual researchers, teams, and institutions deserve the public’s trust
The second prong consists in three simple questions (see Table 2) that individual researchers, research teams, research institutions, and even research organizations and societies can all regularly ask and answer to help them demonstrate why they are deserving of the public’s trust. Answering the questions will identify what researchers are asking people to trust them with and why; what the accountability methods and practices are that are in place, as well as how well they work; and whether any additional steps to support trust should be taken. Here is a brief illustration of how the questions can work.
Table 2:
| Question | Justification |
| --- | --- |
| What is it that we are asking people to trust us with, in order to accomplish what goal/purpose? | This question helps us remember the dependency of research on others and what the good is that we seek on their behalf. |
| What accountability/safety/precautionary methods and practices are in place that help assure that we can be trusted to do the things we want to do with what we are soliciting from others, and how effective are they? | This question helps us recall that, just as a chain is only as strong as its weakest link, an endeavor like research is made vulnerable by its weakest accountability measure/practice. |
| What additional steps, if any, do we need to take to ensure that we have effective safeguards in place? | This question steers us away from complacency with current safeguards towards vigilance regarding matters pertaining to individual research projects. |
A preclinical research team doing animal studies reflects on question 1 about what the team is asking people to trust it with and why. It notes that it is asking taxpayers to trust it to use their tax dollars to support its research to, for example, identify “protective pathways in animals that could be enhanced in humans.”(Willmann et al. 2020) Since progress toward this goal will depend upon the rigor of its research and the accuracy and robustness of its research reports, by asking question 2 about the accountability methods and practices that are in place, and how well they work, the team realizes that, in order to assure the quality of its work, it cannot simply wait to learn whether a journal will publish its results, given how weak a quality assurance mechanism peer review is. Thus, it is led to question 3, which asks whether any additional steps should be taken, and subsequently to consider how it can buttress its work. By answering that question the team can learn from the extensive research reform literature that there are multiple ways to do so.
The team members might decide, for example, upon at least three complementary strategies. They will pre-register their protocol so that they can both receive feedback about their study design and establish for their eventual readers the fidelity between their research and what they eventually write about it.(Nosek et al. 2018)(Dirnagl 2020) Next, in order to promote greater efficiency in the research community’s translation efforts, they will state clearly in their published reports whether theirs was a hypothesis-generating or hypothesis-testing study.(Kimmelman, Mogil, and Dirnagl 2014) Finally, they will share their data so that others can independently check their analyses and, should they desire (since they will also have the protocol), attempt to replicate the study findings.(Institute of Medicine 2013) The same process of asking and answering the three questions could yield equally helpful insights for, say, a team of clinical investigators or the leadership team of a research institution, since the questions are simple, intuitive, and highly adaptable to a range of settings. Again, such teams will be able to draw upon the extensive research reform literature to help them address any deficiencies the questions uncover.
D. Concluding remarks
The value of the two prongs of questions will obviously need to be established by research that studies their impact before we can know the extent to which they do in fact contribute to deserved trust. And even if they prove effective, we should not expect them to be panaceas ensuring that the trust in research that is continuously solicited will always be repaid with research worthy of it. To expect that would be to overlook far too many structural deficits in today’s research ecology, not least the frequent misalignment between the success metrics of academic science and high-quality research itself.(Barnett and Moher 2019)(Benedictus, Miedema, and Ferguson 2016)(Ware and Munafo 2014)
Nevertheless, their use could help nudge the research community away from its heavy reliance on current safeguards that the extensive evidence reviewed above has shown to be ineffective. There is reason to think this nudging could ensue because of the liberating effect of the questions. They permit researchers to imagine, temporarily, themselves and their work apart from the demands of, for example, peer reviewers, animal care committees, and RECs, and to see which practices would best lead them to conclude that they deserve the trust they currently solicit and hope to continue to enjoy in the future. By employing those practices, they could also be more confident that they are not “taking chances” with the public’s trust that is “just essential” to the future of research.
Acknowledgements
I am most grateful to Sarah Perrault, PhD for helpful comments and suggestions she made about prior versions of this essay, David Carter, JD for conversations we shared about the questions found in Table 2, and journal reviewers for the many helpful suggestions they made on prior versions of the manuscript. I would like to thank Fondation Brocher for the support it provided in the form of a Researcher Stay during the preparation of this manuscript.
Funding
A portion of the author’s time was supported by the National Center for Advancing Translational Sciences, National Institutes of Health, through Grant Number UL1 TR001860.
Footnotes
Declaration of Interests The author reports that there are no competing financial interests to declare.
Ulrich Dirnagl compiled the list of references, which was first used in a previous publication.(M. Yarborough et al. 2018)
References
- Abritis Alison, Marcus Adam, and Oransky Ivan. 2020. “An ‘Alarming’ and ‘Exceptionally High’ Rate of COVID-19 Retractions?” Accountability in Research 0 (0): 1–2. 10.1080/08989621.2020.1793675. [DOI] [PubMed] [Google Scholar]
- Joseph Andrew. 2020. “Lancet, NEJM Retract Covid-19 Studies That Sparked Backlash.” STAT (blog). June 4, 2020. https://www.statnews.com/2020/06/04/lancet-retracts-major-covid-19-paper-that-raised-safety-concerns-about-malaria-drugs/. [Google Scholar]
- Barnett AG, and Moher D. 2019. “Turning the Tables: A University League-Table Based on Quality Not Quantity [Version 1; Peer Review: 1 Approved].” F1000Research 8: 583 ( 10.12688/f1000research.18453.1). [DOI] [PMC free article] [PubMed] [Google Scholar]
- Begley CG, and Ellis LM. 2012. “Drug Development: Raise Standards for Preclinical Cancer Research.” Nature 483 (7391): 531–33. 10.1038/483531a. [DOI] [PubMed] [Google Scholar]
- Benedictus R, Miedema F, and Ferguson MW. 2016. “Fewer Numbers, Better Science.” Nature 538 (7626): 453–55. 10.1038/538453a. [DOI] [PubMed] [Google Scholar]
- Cicerone Ralph J. 2010. “Ensuring Integrity in Science.” Science 327 (5966): 624–624. 10.1126/science.1187612. [DOI] [PubMed] [Google Scholar]
- Crossley NA, Sena E, Goehler J, Horn J, van der Worp B, Bath PM, Macleod M, and Dirnagl U. 2008. “Empirical Evidence of Bias in the Design of Experimental Stroke Studies: A Metaepidemiologic Approach.” Stroke 39 (3): 929–34. 10.1161/STROKEAHA.107.498725. [DOI] [PubMed] [Google Scholar]
- Matthews David. 2020. “French Trust in Science Drops as Coronavirus Backlash Begins.” Times Higher Education (THE). June 8, 2020. https://www.timeshighereducation.com/news/french-trust-science-drops-coronavirus-backlash-begins. [Google Scholar]
- Agiesta Jennifer. 2020. “CNN Poll: Most Americans Would Be Uncomfortable Returning to Regular Routines Today.” CNN. Accessed July 11, 2020. https://www.cnn.com/2020/05/12/politics/cnn-poll-americans-uncomfortable-routines/index.html.
- Collins Francis S., and Tabak Lawrence A. 2014. “Policy: NIH Plans to Enhance Reproducibility.” Nature 505: 612–13.
- Deutsche Forschungsgemeinschaft. n.d. “Recommendations for the Promotion of Translational Research in University Medicine.” https://www.dfg.de/download/pdf/dfg_im_profil/reden_stellungnahmen/2019/190919_stellungnahme_empfehlung_ag_translation.pdf.
- Dirnagl Ulrich. 2020. “Preregistration of Exploratory Research: Learning from the Golden Age of Discovery.” PLOS Biology 18 (3): e3000690. 10.1371/journal.pbio.3000690.
- Egan KJ, Vesterinen HM, Beglopoulos V, Sena ES, and Macleod MR. 2016. “From a Mouse: Systematic Analysis Reveals Limitations of Experiments Testing Interventions in Alzheimer’s Disease Mouse Models.” Evidence-Based Preclinical Medicine 3 (1): e00015. 10.1002/ebm2.15.
- Enserink M. 2017. “Sloppy Reporting on Animal Studies Proves Hard to Change.” Science 357 (6358): 1337–38. 10.1126/science.357.6358.1337.
- Garner Joseph P., Gaskill Brianna N., Weber Elin M., Ahloy-Dallaire Jamie, and Pritchett-Corning Kathleen R. 2017. “Introducing Therioepistemology: The Study of How Knowledge Is Gained from Animal Research.” Lab Animal 46 (4): 103–13. 10.1038/laban.1224.
- Goldacre B, DeVito NJ, Heneghan C, Irving F, Bacon S, Fleminger J, and Curtis H. 2018. “Compliance with Requirement to Report Results on the EU Clinical Trials Register: Cohort Study and Web Resource.” BMJ 362 (September): k3218. 10.1136/bmj.k3218.
- Hakala AK, Fergusson D, and Kimmelman J. 2017. “Nonpublication of Trial Results for New Neurological Drugs: A Systematic Review.” Ann Neurol 81 (6): 782–89. 10.1002/ana.24952.
- Hardin Russell. 2002. Trust and Trustworthiness. New York: Russell Sage Foundation.
- Hartung Thomas. 2013. “Food for Thought … Look Back in Anger – What Clinical Studies Tell Us About Preclinical Work.” ALTEX 30 (3): 275–91.
- Hilal Talal, Gonzalez-Velez Miguel, and Prasad Vinay. 2020. “Limitations in Clinical Trials Leading to Anticancer Drug Approvals by the US Food and Drug Administration.” JAMA Internal Medicine, June. 10.1001/jamainternmed.2020.2250.
- Hirst JA, Howick J, Aronson JK, Roberts N, Perera R, Koshiaris C, and Heneghan C. 2014. “The Need for Randomization in Animal Trials: An Overview of Systematic Reviews.” PLoS One 9 (6): e98856. 10.1371/journal.pone.0098856.
- Howells David W., Sena Emily S., and Macleod Malcolm R. 2014. “Bringing Rigour to Translational Medicine.” Nature Reviews Neurology 10 (1): 37–43. 10.1038/nrneurol.2013.232.
- Institute of Medicine. 2013. Sharing Clinical Research Data: Workshop Summary. Washington (DC): The National Academies Press.
- Ioannidis John P. A. 2017. “Acknowledging and Overcoming Nonreproducibility in Basic and Preclinical Research.” JAMA 317 (10): 1019–20. 10.1001/jama.2017.0549.
- Jefferson T, and Jorgensen L. 2018. “Redefining the ‘E’ in EBM.” BMJ Evid Based Med 23 (2): 46–47. 10.1136/bmjebm-2018-110918.
- Kilkenny C, Browne WJ, Cuthill IC, Emerson M, and Altman DG. 2010. “Improving Bioscience Research Reporting: The ARRIVE Guidelines for Reporting Animal Research.” PLoS Biol 8 (6): e1000412. 10.1371/journal.pbio.1000412.
- Kimmelman J, and London AJ. 2015. “The Structure of Clinical Translation: Efficiency, Information, and Ethics.” The Hastings Center Report 45 (2): 27–39. 10.1002/hast.433.
- Kimmelman J, Mogil JS, and Dirnagl U. 2014. “Distinguishing between Exploratory and Confirmatory Preclinical Research Will Improve Translation.” PLoS Biol 12 (5): e1001863. 10.1371/journal.pbio.1001863.
- Lindner Mark D. 2007. “Clinical Attrition Due to Biased Preclinical Assessments of Potential Efficacy.” Pharmacology & Therapeutics 115 (1): 148–75. 10.1016/j.pharmthera.2007.05.002.
- Lund H, Brunnhuber K, Juhl C, Robinson K, Leenaars M, Dorch BF, Jamtvedt G, Nortvedt MW, Christensen R, and Chalmers I. 2016. “Towards Evidence Based Research.” BMJ 355 (October): i5440. 10.1136/bmj.i5440.
- Macleod Malcolm R., Lawson McLean Aaron, Kyriakopoulou Aikaterini, Serghiou Stylianos, de Wilde Arno, Sherratt Nicki, Hirst Theo, et al. 2015. “Risk of Bias in Reports of In Vivo Research: A Focus for Improvement.” PLOS Biology 13 (10): e1002273. 10.1371/journal.pbio.1002273.
- Mejlgaard Niels, Bouter Lex M., Gaskell George, Kavouras Panagiotis, Allum Nick, Bendtsen Anna-Kathrine, Charitidis Costas A., et al. 2020. “Research Integrity: Nine Ways to Move from Talk to Walk.” Nature 586 (7829): 358–60. 10.1038/d41586-020-02847-8.
- Minnerup J, Dirnagl U, and Schabitz WR. 2020. “Checklists for Authors Improve the Reporting of Basic Science Research.” Stroke 51 (1): 6–7. 10.1161/STROKEAHA.119.027626.
- Moher D, Glasziou P, Chalmers I, Nasser M, Bossuyt PM, Korevaar DA, Graham ID, Ravaud P, and Boutron I. 2016. “Increasing Value and Reducing Waste in Biomedical Research: Who’s Listening?” Lancet 387 (10027): 1573–86. 10.1016/S0140-6736(15)00307-4.
- Nature. 2010. “A Question of Trust.” Nature 466 (7302): 7. 10.1038/466007a.
- Nosek Brian A., Ebersole Charles R., DeHaven Alexander C., and Mellor David T. 2018. “The Preregistration Revolution.” Proceedings of the National Academy of Sciences 115 (11): 2600–2606. 10.1073/pnas.1708274114.
- O’Connor Annette M., and Sargeant Jan M. 2014. “Critical Appraisal of Studies Using Laboratory Animal Models.” ILAR Journal 55 (3): 405–17. 10.1093/ilar/ilu038.
- Peers Ian S., South Marie C., Ceuppens Peter R., Bright Jonathan D., and Pilling Elizabeth. 2014. “Can You Trust Your Animal Study Data?” Nature Reviews Drug Discovery 13 (7): 560. 10.1038/nrd4090-c1.
- Perrin Steve. 2014. “Preclinical Research: Make Mouse Studies Work.” Nature News 507 (7493): 423. 10.1038/507423a.
- Reichlin Thomas S., Vogt Lucile, and Würbel Hanno. 2016. “The Researchers’ View of Scientific Rigor—Survey on the Conduct and Reporting of In Vivo Research.” PLoS ONE 11 (12): e0165999. 10.1371/journal.pone.0165999.
- “Reporting Standards and Availability of Data, Materials, Code and Protocols.” n.d. https://www.nature.com/nature-research/editorial-policies/reporting-standards.
- Riveros C, Dechartres A, Perrodeau E, Haneef R, Boutron I, and Ravaud P. 2013. “Timing and Completeness of Trial Results Posted at ClinicalTrials.Gov and Published in Journals.” PLoS Med 10 (12): e1001566. 10.1371/journal.pmed.1001566.
- Robinson KA, and Goodman SN. 2011. “A Systematic Examination of the Citation of Prior Research in Reports of Randomized, Controlled Trials.” Ann Intern Med 154 (1): 50–55. 10.7326/0003-4819-154-1-201101040-00007.
- Schmidt-Pogoda A, Bonberg N, Koecke MHM, Strecker JK, Wellmann J, Bruckmann NM, Beuker C, et al. 2019. “Why Most Acute Stroke Studies Are Positive in Animals but Not in Patients.” Ann Neurol, November. 10.1002/ana.25643.
- Sena ES, van der Worp HB, Bath PM, Howells DW, and Macleod MR. 2010. “Publication Bias in Reports of Animal Stroke Studies Leads to Major Overstatement of Efficacy.” PLoS Biol 8 (3): e1000344. 10.1371/journal.pbio.1000344.
- Percie du Sert Nathalie, Ahluwalia Amrita, Alam Sabina, Avey Marc T., Baker Monya, Browne William J., Clark Alejandra, et al. 2020. “Reporting Animal Research: Explanation and Elaboration for the ARRIVE Guidelines 2.0.” PLOS Biology 18 (7): e3000411. 10.1371/journal.pbio.3000411.
- Smith Richard. 2014. “Richard Smith: Why Scientists Should Be Held to a Higher Standard of Honesty than the Average Person.” The BMJ, September.
- Strand LB, Clarke P, Graves N, and Barnett AG. 2017. “Time to Publication for Publicly Funded Clinical Trials in Australia: An Observational Study.” BMJ Open 7 (3): e012212. 10.1136/bmjopen-2016-012212.
- Tsilidis Konstantinos K., Panagiotou Orestis A., Sena Emily S., Aretouli Eleni, Evangelou Evangelos, Howells David W., Al-Shahi Salman Rustam, Macleod Malcolm R., and Ioannidis John P. A. 2013. “Evaluation of Excess Significance Bias in Animal Studies of Neurological Diseases.” Edited by Bero Lisa. PLoS Biology 11 (7): e1001609. 10.1371/journal.pbio.1001609.
- Vogt Lucile, Reichlin Thomas S., Nathues Christina, and Würbel Hanno. 2016. “Authorization of Animal Experiments Is Based on Confidence Rather than Evidence of Scientific Rigor.” PLoS Biology 14 (12): e2000598. 10.1371/journal.pbio.2000598.
- Wadman Meredith. 2010. “NIH to Tighten Rules on Conflicts.” Nature, May. 10.1038/news.2010.257.
- Ware JJ, and Munafo MR. 2014. “Significance Chasing in Research Practice: Causes, Consequences and Possible Solutions.” Addiction, July. 10.1111/add.12673.
- Wieschowski S, Chin WWL, Federico C, Sievers S, Kimmelman J, and Strech D. 2018. “Preclinical Efficacy Studies in Investigator Brochures: Do They Enable Risk-Benefit Assessment?” PLoS Biol 16 (4): e2004879. 10.1371/journal.pbio.2004879.
- Willmann Raffaella, Lee Joanne, Turner Cathy, Nagaraju Kanneboyina, Aartsma-Rus Annemieke, Wells Dominic J., Wagner Kathryn R., et al. 2020. “Improving Translatability of Preclinical Studies for Neuromuscular Disorders: Lessons from the TREAT-NMD Advisory Committee for Therapeutics (TACT).” Disease Models & Mechanisms 13 (2). 10.1242/dmm.042903.
- Yarborough M, Bredenoord A, D’Abramo F, Joyce NC, Kimmelman J, Ogbogu U, Sena E, Strech D, and Dirnagl U. 2018. “The Bench Is Closer to the Bedside than We Think: Uncovering the Ethical Ties between Preclinical Researchers in Translational Neuroscience and Patients in Clinical Trials.” PLoS Biol 16 (6): e2006343. 10.1371/journal.pbio.2006343.
- Yarborough Mark. 2014a. “Taking Steps to Increase the Trustworthiness of Scientific Research.” The FASEB Journal 28 (9): 3841–46. 10.1096/fj.13-246603.
- Yarborough Mark. 2014b. “Openness in Science Is Key to Keeping Public Trust.” Nature News 515 (7527): 313. 10.1038/515313a.
- Yarborough Mark. 2020a. “Rescuing Informed Consent: How the New ‘Key Information’ and ‘Reasonable Person’ Provisions in the Revised U.S. Common Rule Open the Door to Long Overdue Informed Consent Disclosure Improvements and Why We Need to Walk Through That Door.” Science and Engineering Ethics 26 (3): 1423–43. 10.1007/s11948-019-00170-8.
- Yarborough Mark. 2020b. “Do We Really Know How Many Clinical Trials Are Conducted Ethically? Why Research Ethics Committee Review Practices Need to Be Strengthened and Initial Steps We Could Take to Strengthen Them.” Journal of Medical Ethics, June, medethics-2019-106014. 10.1136/medethics-2019-106014.
- Yarborough Mark, Nadon Robert, and Karlin David G. 2019. “Four Erroneous Beliefs Thwarting More Trustworthy Research.” eLife 8 (July): e45261. 10.7554/eLife.45261.
- Zarin Deborah A., Goodman Steven N., and Kimmelman Jonathan. 2019. “Harms From Uninformative Clinical Trials.” JAMA 322 (9): 813–14. 10.1001/jama.2019.9892.