The FASEB Journal. 2014 Sep;28(9):3841–3846. doi: 10.1096/fj.13-246603

Taking steps to increase the trustworthiness of scientific research

Mark Yarborough
PMCID: PMC4139906  PMID: 24928193

Abstract

To enjoy the public's trust, the research community must first be clear about what it is expected to do and then avoid the incidents that prevent it from meeting those expectations. Among other things, there are expectations that published scientific results will be reliable, that research has the potential to contribute to the common good, and that research will be conducted ethically. Consequently, the scientific community needs to avoid lapses that prevent it from meeting these three expectations. This requires a strong commitment to trustworthy research practices, as well as mechanisms that diminish lapses that inevitably occur in complex endeavors such as scientific research. The author presents a model to assess the strength of commitment to trustworthy research and explores proven quality assurance mechanisms that can diminish lapses in research injurious to the public's trust. Some mechanisms identify in advance ways that things can go wrong so that steps can be taken to prevent them from going wrong in the first place. Other mechanisms investigate past errors or near misses to discover their causes so that they can be addressed to avoid similar future instances. The author explains why such methods are useful to efforts to promote research worthy of the public's trust.—Yarborough, M. Taking steps to increase the trustworthiness of scientific research.

Keywords: public trust, quality improvement


The future of scientific research depends in very large part on the public's trust. Accordingly, many strategies are being discussed and employed around the world to promote that trust, ranging from increased ethics education for researchers to increased education for the public about science (1–4). A prominent effort to build trust in the United States is the community engagement emphasis that the U.S. National Institutes of Health (NIH) required as an initial central tenet of its ambitious Clinical and Translational Science Award (CTSA) program (5). A recent Institute of Medicine report about the CTSAs heartily endorsed that emphasis (6). As important as such initiatives are, they are insufficient on their own to sustain the public's abiding trust. As one commentator on trust has noted, the key to enjoying trust is deserving trust (7). If those of us in the research community who care about the public's trust in our work accept this premise, it will not suffice to limit our efforts to such matters as community engagement or research ethics education. We also need to be sure about how, exactly, we deserve the trust that we continuously solicit. We would all benefit from greater consensus on what constitutes trustworthy research in the first place, as well as on the practices that help to assure it. Neither topic receives much notice, despite the growing attention to, and fret about, the public's trust in research, fret reflected by leading voices in the research community (8–10). What follows is an attempt to move beyond fret to action, by focusing on two things the research community must do to demonstrate its trustworthiness. First, we need to be clear about what, exactly, our research is expected and trusted to do; second, we need to employ strategies that avoid or minimize the incidents that prevent it from meeting those expectations.

BEING CLEAR ABOUT WHAT TRUSTWORTHY SCIENTIFIC RESEARCH IS

There are at least three things scientific research is trusted to do. First, readers of scientific research expect published results to be reliable. Since the purpose of science is discovery and extension of knowledge about the world, efforts at discovery must be properly conducted and accurately reported so that others can incorporate published discoveries into their own scientific investigations. Unreliable published results, whether they stem from honest error, carelessness, or misconduct, impede this process. Second, very public promises made by modern science generate expectations that research is socially valuable. For example, the American Association for the Advancement of Science states that its mission is to “advance science, engineering, and innovation throughout the world for the benefit of all people” (11). Such statements imply a rational connection between scientific investigations and some aspect of ecological, physical, psychological, or social well-being. Otherwise, research cannot do the good it promises to do. Third, there are ethical and legal norms that frame how research is supposed to be conducted. Research can be inherently risky and exploitative. As a result, practices meant to protect against ecological, biological, psychological, social, and other risks have evolved, as have responsibilities to demonstrate respect for the interests and welfare of the humans and other animals used in research. These three characteristics of trustworthy research merit fuller explication elsewhere, but for present purposes this brief discussion shows why research that fails to fulfill any of these basic expectations of reliability, social value, and ethical conduct diminishes the public's trust in science. It also alerts the scientific community to the need to place a premium on avoiding lapses that prevent it from meeting any of these expectations. The remainder of this essay looks at ways to avoid lapses that undermine both the reliability and the ethical conduct of research. How to foster greater accountability for the social value of at least some research has been discussed elsewhere, and the recommendations found there can be extrapolated to other types of research (12).

BECOMING TRUSTWORTHY BY AVOIDING INCIDENTS THAT DIMINISH TRUST

Conversations about trustworthy science need to start with the assurance that exploring concerns about the trustworthiness of research is not meant to call into question the integrity and trustworthiness of the individual people who do research. Instead, the urgency about trustworthy research stems from the complexity of research itself. For example, research now routinely employs advanced technologies and instrumentation, as well as large, multisite interdisciplinary teams, all of which create ways that things can go wrong. As a result, today's research is prone to multiple shortcomings that have nothing at all to do with misconduct. The traditional safeguards used by the research community, such as peer review and the protections afforded by the scientific method itself coupled with regulatory compliance, are no longer able on their own to fend off the many ways that things can and do go wrong in research today. To appreciate the significance of this, consider the extent to which research that relies on extremely large and/or unique data sets, that employs proprietary software or other research tools, or that reports on observations of rare if not unique events renders such traditional safeguards inadequate (13).

This suggests that new accountability measures are needed to supplement traditional safeguards if the research community is to avoid, or at least diminish, the incidents that undermine trust. One option for identifying such measures is to look to other sectors that also require the public's trust. These sectors implement stringent accountability practices, often in excess of those required by government and other official regulations, to prevent lapses that undermine trust (14). They rely on openness and transparency to continually monitor risks and investigate errors in pursuit of safety, quality, and improvement, all of which support trust.

These proactive sectors stand in stark contrast to other domains where people scramble after the fact to redress the harms and reverse the diminution in trust that results from lapses (15). Too often, this characterizes the approach of the research community. When it focuses on problems that diminish trust, it typically employs ineffective and counterproductive accountability practices characterized by an “excessive reliance on professional expertise” (14). The result of this approach is a “punitive environment whose focus on individual responsibility leads to secrecy” in much of the research community (14). The ensuing silence weakens accountability because it conceals rather than prevents potential problems. As a result, trustworthiness is diminished. In contrast, rather than focus on individuals and what they did or did not do, more proactive sectors use transparent systems-based approaches to ask how an incident might or did occur so that it can be avoided in the future.

This brief overview illustrates the benefits the research community could enjoy if it changed its approach to accountability. But change will never occur without a strong commitment to trustworthiness and the adoption of mechanisms that can promote it. Figure 1 is designed to help us appreciate both why we need to focus more on trustworthiness and how we can achieve it. It is a modification of a model (14) that was itself adapted from one used to measure the evolution of safety promotion (15). The new model replaces developmental stages promoting safety with developmental stages promoting trustworthiness. At the lowest point of development is a stage characterized as “pathological,” indicating a maximum amount of reactivity toward trustworthiness on the part of those in charge. At this stage, one worries about whether work is trustworthy only when something bad happens that causes others to question it. In contrast, the highest point of development is a stage characterized as “generative,” reflecting a collective commitment and ability to conduct work in a way that generates trustworthiness.

Figure 1. The manner in which scientific research is conducted confers varying degrees of trustworthiness upon it. The farther along the path to trustworthiness a research group or institution is, the more practices there are that have been introduced for the specific purpose of reducing the number of lapses that diminish trust in research.

The model is offered as a simple diagnostic that individuals in many positions of leadership in scientific research can use. Research team leaders, lab directors, and principal investigators could use it. Institutional Review Board (IRB) and Institutional Animal Care and Use Committee (IACUC) directors and other administrative leaders, including deans and compliance officials, could also use it in their research oversight roles. The model assesses whether there are proactive accountability practices in place that both drive and assure a continuous focus on trustworthiness. If there are, one is approaching or at the proactive or generative stages of trustworthiness. If there are not, better accountability measures are needed.
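
As a rough illustration only, the sketch below (in Python) encodes the Figure 1 ladder as an ordered type and reduces the diagnostic just described to a single question. The pathological, proactive, and generative labels come from the text; the intermediate stage names are an assumption borrowed from the safety model the figure adapts (14, 15).

```python
from enum import IntEnum

class TrustworthinessStage(IntEnum):
    # The text names the pathological, proactive, and generative stages;
    # the intermediate labels are assumed from the safety-culture ladder
    # that the figure adapts (refs. 14, 15).
    PATHOLOGICAL = 1  # trustworthiness considered only after a lapse
    REACTIVE = 2
    CALCULATIVE = 3
    PROACTIVE = 4     # risks identified and addressed before lapses occur
    GENERATIVE = 5    # trustworthy conduct built into how work is done

def diagnose(has_proactive_accountability_practices: bool) -> str:
    # The text's simple diagnostic, reduced to one question.
    if has_proactive_accountability_practices:
        return "Approaching or at the proactive/generative stages."
    return "Better accountability measures are needed."

print(diagnose(has_proactive_accountability_practices=False))
```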

At what stage along the path to trustworthiness is scientific research? This question may produce different answers for different areas of science, and it is best directed at specific research teams and their institutional settings. In general terms, though, one would conclude that much, possibly even most, research has yet to achieve consistently the higher stages of development most conducive to trustworthiness (14). One reason for this is that the common accountability measures used in so much research fail to place the research community on a sufficiently proactive footing. To illustrate why this is so, let us look briefly at one representative example from biomedical research: the IRB review process. This process involves extensive regulatory requirements meant to protect the safety, welfare, and rights of people who participate in research involving human subjects. It makes sense to look here, because IRBs anchor the regulatory regime pertaining to research on human subjects and thus employ the major accountability practices in place to help assure the public that human subject research will be both safe and ethical.

IRB review and oversight focus extensively on compliance with official regulations and guidance and rely mainly on review of records and applications to assure that compliance. IRBs, for example, spend much time reviewing the content and wording of consent forms, despite the wealth of evidence that written consent forms are ineffective at promoting genuine informed consent. For instance, both the length and readability of consent forms have been found to be problematic (ref. 16 and multiple references therein), and the difficulty research participants have distinguishing between research and individualized medical care remains entrenched (17, 18). Given the central role that informed consent plays in the ethical conduct, and thus the trustworthiness, of medical research, a research community at the proactive or generative stages of trustworthiness would systematically do more than comply with regulatory requirements.

To be sure, many IRBs and many individual research units go beyond regulatory requirements and work proactively to improve the consent process, but that only obscures the problem: these additional proactive measures meant to improve quality, and thus demonstrate trustworthiness, are episodic rather than uniform. Thus it is chance, not design, that determines when research is “research [we know] we can trust to be ethical.”

This brief example is not meant to negate the importance of IRB review or of the other extensive compliance measures in place in science to assure both that research is conducted ethically and that published results are reliable. Responsible Conduct of Research education, mandatory disclosures of financial conflicts of interest, and investigations of individuals who engage in misconduct all surely contribute to efforts to assure that research is trustworthy. As important as these efforts are, though, one has to ask whether they are any more capable than IRB review of moving the research community to the proactive and generative stages that can continually assure the public that research will be conducted in a manner worthy of its trust. So long as their principal focus is on assigning responsibility to individuals rather than understanding the causes of events and how to prevent them, and on educating individuals rather than improving systems, they will fall short of the mark. None of these major accountability practices is capable of routinely generating insights about why things go wrong in research, so they are ineffective at preventing and reducing future lapses that can diminish the public's trust in research. Thus, something more, as illustrated below, is needed to improve accountability. Otherwise, no sustained progress toward the proactive and generative stages on the path to trustworthy research can occur.

TAKING CONCRETE STEPS TO AVOID RESEARCH THAT DIMINISHES THE PUBLIC'S TRUST

Progress must start with the aspiration to reach the proactive and generative stages whose practices can do so much to promote trustworthy research. Yet as necessary as such high aspirations are, we need more than aspiration alone. Equally important are practical measures that can move us along the path to trustworthy research so that our high aspirations can be realized. It is to these measures that we now turn.

When things go wrong, as they inevitably do in any sustained complex activity like scientific research, sectors at the proactive and generative stages of trustworthiness employ quality assurance measures that discover their vulnerabilities to error and mistakes so that they can reduce or eliminate them. Measures like these are needed in the research setting if “science you can trust” is to be more than an aspiration. Auditing is one quality improvement measure, used in multiple settings, that has been recommended for the research setting (19). At least two additional mechanisms may also be able to help assure both the reliability of scientific publications and the ethical conduct of research: root cause analysis (RCA) and failure mode and effects analysis (FMEA). FMEA is a well-established process that identifies in advance ways that things can go wrong so that steps can be taken to prevent them from going wrong in the first place. RCA, in contrast, is a widely established process used to investigate past errors or near misses to discover their causes so that those causes can be addressed to avoid future instances. I now briefly explain both, beginning with RCA.

RCA “has been widely adopted as a central method to learn from mistakes” (20). It has been used with great success in activities as diverse as aviation and clinical medicine to reduce error and increase safety. Its success in such different settings stems from its being a straightforward approach to answering 3 basic questions: 1) what happened; 2) why did it happen; and 3) can anything be done to prevent it from occurring again? (21). If one is interested in preventing problems, it is not enough to know the “who, what, and how” of something that happened, the questions typically asked. Understanding “why it happened… is the key to preventing similar recurrences,” and that is the point of RCA (22).

For example, it is not enough to know that “Maria got paid half of what she was owed on her contract” (the what) because “Jim made a mistake in his paperwork” (the who and how). To prevent the same mistake from occurring again, one must also know that Jim made the mistake because the procedures manual he was instructed to use was out of date (the why). Knowing that the use of an outdated manual, rather than Jim, is the root cause leads to efforts to collect outdated manuals and replace them with updated ones, thereby significantly reducing the chances of future underpayments, not only by Jim but by everyone else who processes payments to contract workers. This simple example indicates the key features of RCA: a gathering of facts, followed by identification of the causes contributing to the facts under review and ultimately discovery of the root causes, followed by a set of recommendations to implement to prevent recurrences (22).
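
Those key features lend themselves to a simple record structure. The following minimal Python sketch, whose names and fields are invented for illustration rather than drawn from any standard RCA toolkit, walks the Maria/Jim example through the steps: gather the facts, trace the chain of contributing causes to a root cause, and record recommendations to prevent recurrence.

```python
from dataclasses import dataclass, field

@dataclass
class RootCauseAnalysis:
    # facts: what happened; why_chain: successive answers to "why?",
    # ending at the root cause; recommendations: preventive actions.
    facts: list
    why_chain: list
    recommendations: list = field(default_factory=list)

    def root_cause(self):
        # Treat the final answer in the "why?" chain as the root cause.
        return self.why_chain[-1] if self.why_chain else None

# The Maria/Jim underpayment example from the text.
rca = RootCauseAnalysis(
    facts=["Maria was paid half of what her contract owed her."],  # the what
    why_chain=[
        "Jim made a mistake in the payment paperwork.",            # who and how
        "Jim followed the procedures manual he was given.",
        "The manual he was instructed to use was out of date.",    # the why
    ],
)
rca.recommendations.append(
    "Collect outdated procedures manuals and replace them with updated ones."
)

print("Root cause:", rca.root_cause())
for action in rca.recommendations:
    print("Recommend:", action)
```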

Like RCA, FMEA has been used with great success in diverse settings to reduce error and increase quality. In contrast to RCA, though, an FMEA process is used to anticipate ahead of time what problems could happen (the failure modes) so that actions can be taken to prevent them. Not only does it identify the various ways a process might fail, it also determines the severity of the consequences of each possible failure (the effects analysis) so that people can identify and prioritize corrective measures (23). For example, if Jim's company had conducted an FMEA of its bulk purchasing practices, it would have discovered that some key procedures and policies change more frequently than bulk printed materials are depleted. Thus, it would know that at some point the procedures manual would become outdated, resulting in, among other things, inappropriate payment of contract employees. Corrective measures could then be implemented. For instance, the company might switch to electronic publishing of the procedures manual so that manuals are updated more quickly, or it might require electronic rather than paper forms for requesting checks so that completed forms comply with current procedures. Either would reduce the chances that people like Maria would be underpaid.
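
FMEA practice conventionally quantifies the prioritization step with a risk priority number (RPN), the product of scores for a failure mode's severity, likelihood of occurrence, and difficulty of detection. The text does not specify a scoring scheme, so the Python sketch below assumes that convention; its failure modes and scores are invented for illustration.

```python
# A minimal FMEA sketch. Conventional FMEA scores each failure mode for
# severity (S), occurrence (O), and detectability (D), each on a 1-10
# scale, and prioritizes corrective work by the risk priority number
# RPN = S * O * D. The failure modes below extend the bulk-purchasing
# example in the text; the scores are invented for illustration.

failure_modes = [
    # (description, severity, occurrence, detection)
    ("Printed procedures manual outdated before stock is depleted", 7, 8, 6),
    ("Check request processed against a superseded procedure",      6, 7, 5),
    ("Procedure change never reaches staff who process payments",   8, 4, 7),
]

def rpn(severity: int, occurrence: int, detection: int) -> int:
    # Higher RPN means this failure mode should be addressed first.
    return severity * occurrence * detection

for description, s, o, d in sorted(
    failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True
):
    print(f"RPN {rpn(s, o, d):3d}  {description}")
```

Ranking by RPN is what lets a team prioritize corrective measures: in this invented scoring, the outdated-manual failure mode scores highest, so it would be addressed first.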

Since FMEA and RCA might help the research community transition to the proactive and generative stages that foster trustworthy research, both warrant investigation to see whether they can illuminate the myriad ways things can and do go wrong throughout the life of a research project, from design and conduct to publication of results, and recommend corrective actions that reduce the chances that lapses occur. For example, research teams and/or officials of the institutions where they work could use FMEA to improve team science. They could use both existing research practices and hypothetical cases to troubleshoot how and where problems could arise in performing complicated multistep tasks that must be completed at multiple sites, and then take steps in advance to prevent their occurrence. Similarly, they could use RCA on actual data entry errors to identify their causes and implement changes based on these findings to prevent similar occurrences in the future.

There are several reasons to think that people in the research community would be open to using both FMEA and RCA to progress to the proactive and generative stages of trustworthiness. First, researchers are used to a culture of research oversight and review. Whether it is the requirement for biological hazards safety review, IACUC or IRB review, scientific merit review, or even review of alleged misconduct, there are sponsor and institutional practices in place to enforce various standards; FMEA and RCA could join these practices. Second, many of the country's biomedical research institutions are already benefiting from the ability of FMEA and RCA to change culture in ways that improve quality and reduce errors in patient care. Hospitals accredited by the Joint Commission, as well as the U.S. Department of Veterans Affairs (VA), already use FMEA and RCA to investigate clinical errors in order to improve quality and safety. Many research institutions are thus already familiar with and accept these processes, and expanding FMEA and RCA to the research setting would be a reasonable evolution of an institution's focus on quality improvement. Third, because of mandatory review and reporting requirements, IRBs and IACUCs must investigate incidents of suspected noncompliance with regulations, and higher education institutions must investigate research misconduct. Consequently, research institutions already know where they could target both of these preventive strategies to good effect.

Finally, FMEA and RCA bring a distinct advantage compared with existing approaches to research lapses. The individuals targeted by current practices for investigating research misconduct allegations frequently view the process as punitive (24). Since investigations of noncompliance with regulations governing research involving human subjects similarly target individual investigators, the same punitive, stigmatizing effect likely occurs. FMEA and RCA should avoid much of this negativity because they focus on events, why they occur, and how to prevent them rather than on individual actors and what they may have done wrong. Unlike the current punitive approaches, FMEA and RCA could better bring the research community together in self-reflection and problem-focused activities, since they avoid assigning blame and creating stigma. For all these reasons, it makes sense to think that they could make important contributions to the scientific research community (14, 25).

CHALLENGES

As potentially beneficial as FMEA and RCA might prove to be to the research community, implementing them will be challenging. Since neither would replace mandated investigations of certain types of suspected wrongdoing, institutions must perceive their value to be worth the additional work they will entail. Individuals may also be reluctant to participate in either. But since both FMEA and RCA focus on understanding the causes of lapses in order to prevent future mishaps, not on blaming or exonerating individuals, both should eventually gain acceptance, as happened when the VA began to use RCA to address medical errors: there was a noticeable “shift in the root causes identified, blaming individuals less and increasingly attributing the problem to systemic causes” (20).

The ultimate challenge FMEA and RCA will likely face in the research setting is getting research teams and institutions to implement the recommendations they produce. This has been a problem in the clinical arena, where institutions often fail to enact the recommendations stemming from RCAs (20). The challenge of being willing to implement recommendations reminds us of the significance of aspiring to trustworthiness in the first place.

CONCLUSIONS

The more the research community responds after the fact to incidents that diminish trust, the more it leaves to chance the public's support for its work. Given this reality, it is disconcerting that more effort is not being focused on transformative accountability practices that can eliminate lapses, large and small, that erode the trustworthiness of research. This lack of requisite effort stems at least in part, no doubt, from our tendency in the research community to conflate being trusted with being trustworthy. Thus, we may fail to appreciate that it is possible to enjoy trust without deserving it. The spate of efforts highlighted at the outset to promote the public's trust in research suggests that we may be placing too much emphasis on being trusted, possibly at the expense of being trustworthy.

Even if these efforts succeed at increasing trust, that increased trust will do nothing to advance the research community itself along the path to the generative stage of trustworthy research. More likely, it will provide both a false sense of security that the trust enjoyed today will also be there tomorrow and a sense of complacency that trustworthy practices themselves are unimportant. The only way to reach the generative stage, where our work is work we know the public can trust, is to have standard accountability practices in place, along with a deliberately crafted culture, that assure that level of aspiration and success.

FMEA and RCA, given their proven success in so many other settings, are obvious accountability practices warranting great interest from the scientific community. Their adoption can equip every research team and institution with straightforward mechanisms for realizing a number of benefits. Focus can shift from “problem individuals” to “trustworthy science.” Administrative leaders, compliance officials, and researchers can begin to work together to improve processes and reduce errors. Institutional cultures can become more collaborative. These changes would all bode well for the future of research and the important benefits it pursues.

Acknowledgments

The author thanks Dr. Jeffrey Elias, Dr. Lee Hilborne, Dr. Michael S. Wilkes, and anonymous reviewers for helpful editorial comments pertaining to portions of the manuscript.

A portion of the author's time was supported by the U.S. National Institutes of Health, through grant TR 000002.

Footnotes

CTSA: Clinical and Translational Science Award
FMEA: failure mode and effects analysis
IACUC: Institutional Animal Care and Use Committee
IRB: Institutional Review Board
NIH: U.S. National Institutes of Health
RCA: root cause analysis
VA: U.S. Department of Veterans Affairs

REFERENCES

1. Rayner S. (2010) Science and Trust Expert Group Report and Action Plan: Starting a National Conversation about Good Science, Department for Business Innovation and Skills, London, UK. http://interactive.bis.gov.uk/scienceandsociety/site/trust/files/2010/03/Accessible-BIS-R9201-URN10-699-FAW.pdf
2. Steneck N., Mayer T., Anderson M.; Statement Drafting Committee (2010) Singapore Statement on Research Integrity, 2nd World Conference on Research Integrity, July 21–24, 2010, Singapore. http://www.singaporestatement.org
3. Drenth P., Ftacnikova S., Hiney M., Puljak L., eds (2010) Fostering Research Integrity in Europe. A Report by the ESF Member Organization Forum on Research Integrity, European Science Foundation, Strasbourg, France. http://www.esf.org/fileadmin/Public_documents/Publications/ResearchIntegrity_report.pdf
4. II BRISPE (2013) Joint Statement of the II Brazilian Meeting on Research Integrity, Science, and Publication Ethics, Second Brazilian Meeting on Research Integrity, Science, and Publication Ethics (II BRISPE), May 28–June 1, 2012, Rio de Janeiro, São Paulo, and Porto Alegre, Brazil
5. U.S. Department of Health and Human Services (2012) Institutional Clinical and Translational Science Award (U54), Funding Opportunity Announcement (FOA) Number RFA-TR-12-006, National Institutes of Health, Bethesda, MD, USA. http://grants.nih.gov/grants/guide/rfa-files/RFA-TR-12-006.html
6. Leshner A. I., Terry S. F., Schultz A. M., Liverman C. T.; Committee to Review the Clinical and Translational Science Awards Program at the National Center for Advancing Translational Sciences, Board on Health Sciences Policy, Institute of Medicine (2013) The CTSA Program at NIH: Opportunities for Advancing Clinical and Translational Research, National Academies Press, Washington, DC
7. Hardin R. (2002) Trust and Trustworthiness, Russell Sage Foundation, New York
8. Anonymous (2010) A question of trust. Nature 466, 7 (editorial)
9. Cicerone R. J. (2010) Ensuring integrity in science. Science 327, 624
10. Basken P. (2010, May 20) NIH proposes tougher rules on financial conflicts of interest. Chron. Higher Educ. http://www.chroniclereview.info/article/NIH-Proposes-Tougher-Rules-on/65636/
11. American Association for the Advancement of Science (2010) AAAS Mission. American Association for the Advancement of Science, New York. http://www.aaas.org/about-aaas
12. Yarborough M. (2013) Increasing enrollment in drug trials: the need for greater transparency about the social value of research in recruitment efforts. Mayo Clin. Proc. 88, 442–445
13. Jasny B. R., Chin G., Chong L., Vignieri S. (2011) Again, and again, and again. Science 334, 1225
14. Yarborough M., Fryer-Edwards K., Geller G., Sharp R. R. (2009) Transforming the culture of biomedical research from compliance to trustworthiness: insights from nonmedical sectors. Acad. Med. 84, 472–477
15. Hudson P. (2003) Applying the lessons of high risk industries to health care. Qual. Saf. Health Care 12(Suppl. 1), i7–i12
16. Kass N. E., Chaisson L., Taylor H. A., Lohse J. (2011) Length and complexity of US and international HIV consent forms from federal HIV network trials. J. Gen. Int. Med. 26, 1324–1328
17. Appelbaum P. S., Roth L. H., Lidz C. (1982) The therapeutic misconception: informed consent in psychiatric research. Int. J. Law Psych. 5, 319–329
18. Kimmelman J. (2007) The therapeutic misconception at 25: treatment, research, and confusion. Hastings Cent. Rep. 37, 36–42
19. Shamoo A. E. (2013) Data audit as a way to prevent/contain misconduct. Account. Res. 20, 369–379
20. Wu A., Lipshutz A., Pronovost P. (2008) Effectiveness and efficiency of root cause analysis in medicine. JAMA 299, 685–687
21. U.S. Department of Veterans Affairs (2007) VA National Center for Patient Safety: Root Cause Analysis (RCA), U.S. Department of Veterans Affairs, Washington, DC. http://www.patientsafety.va.gov/professionals/onthejob/rca.asp
22. Rooney J., Heuvel L. V. (2004) Root cause analysis for beginners. Qual. Prog. 37, 45–53
23. Apkon M., Leonard J., Probst L., DeLizio L., Vitale R. (2004) Design of a safer approach to intravenous drug infusions: failure mode effects analysis. Qual. Saf. Health Care 13, 265–271
24. Research Triangle Institute (1996) Survey of Accused but Exonerated Individuals in Research Misconduct Cases: Final Report, Research Triangle Institute, Washington, DC
25. Opel D., Diekema D., Marcuse E. (2011) Assuring research integrity in the wake of Wakefield. BMJ 342, d2
